Let L/K be a finite Galois extension of number fields with Galois group G. Let p be an odd prime and let r>1 be an integer. Assuming a conjecture of Schneider, we formulate a conjecture that relates special values of equivariant Artin L-series at s=r to the compact support cohomology of the étale p-adic sheaf _p(r). We show that our conjecture is essentially equivalent to the p-part of the equivariant Tamagawa number conjecture for the pair (h^0((L))(r), [G]). We derive from this explicit constraints on the Galois module structure of Banaszak's p-adic wild kernels.

§ INTRODUCTION

Let L/K be a finite Galois extension of number fields with Galois group G. To each finite set S of places of K containing all archimedean places, one can associate a so-called `Stickelberger element' θ_S in the center of the complex group algebra [G]. This Stickelberger element is defined via L-values at zero of S-truncated Artin L-functions attached to the (complex) characters of G. Let us denote the roots of unity of L by μ_L and the class group of L by _L. Assume that S contains all finite primes of K that ramify in L/K. Then it was independently shown in <cit.>, <cit.> and <cit.> that when G is abelian we have

_[G](μ_L) θ_S ⊆ [G],

where we denote by _Λ(M) the annihilator ideal of M regarded as a module over the ring Λ. Now a conjecture of Brumer asserts that _[G](μ_L) θ_S annihilates _L.

Using L-values at integers r<0, one can define higher Stickelberger elements θ_S(r). When G is abelian, Coates and Sinnott <cit.> conjectured that these elements can be used to construct annihilators of the higher K-groups K_-2r(𝒪_L,S), where we denote by 𝒪_L,S the ring of S(L)-integers in L for any finite set S of places of K; here, we write S(L) for the set of places of L which lie above those in S. Coates and Sinnott essentially proved a p-adic étale cohomological version of their conjecture in the case K = ℚ. First results on the K-theoretic version are due to Banaszak <cit.> and Nguyen Quang Do <cit.>. However, if, for example, L is totally real and r is even, these conjectures merely predict that zero annihilates K_-2r(𝒪_L,S) if r<0 and _L if r=0.

In the case r=0, Burns <cit.> presented a universal theory of refined Stark conjectures. In particular, the Galois group G may be non-abelian, and he uses leading terms rather than values of Artin L-functions to construct conjectural nontrivial annihilators of the class group. His conjecture thereby extends the aforementioned conjecture of Brumer (we point out that there are different generalizations of Brumer's conjecture due to the author <cit.> and to Dejou and Roblot <cit.>). Similarly, in the case r<0 the author <cit.> has formulated a conjecture on the annihilation of higher K-groups which generalises the Coates–Sinnott conjecture and a conjecture of Snaith <cit.>. More precisely, using leading terms at negative integers a certain `canonical fractional Galois ideal' 𝒥_r^S is defined.
It is then conjectured that for every odd prime p and every x ∈__p[G](K_1-2r(𝒪_L,S)_⊗__p) one has__p[G](x) ·ℋ_p(G) ·𝒥_r^S ⊆__p[G](K_-2r(𝒪_L,S) ⊗__p).Here, the subscript `tor' refers to the torsion submodule ofK_1-2r(𝒪_L,S), we denote the reduced norm ofany x ∈_p[G] by __p[G](x), and ℋ_p(G) denotes a certain `denominator ideal' (introduced in <cit.>; see <ref>).When G is abelian and r=1, Solomon <cit.> has defined a certain ideal which he conjectures to annihilate the p-part of the class group. This has recently been generalized to arbitrary (finite) Galois groups G by Castillo and Jones <cit.>. All these annihilation conjectures are implied by appropriate special cases of the equivariant Tamagawa number conjecture (ETNC) formulated by Burns and Flach <cit.>.Now let r>1 be a positive integer. When L/K is an abelian extension of totally real fields and r is even, Barrett <cit.> has defined a `Higher Solomon ideal' which he conjectures to annihilate the p-adic wild kernel K_2r-2^w(𝒪_L,S)_p of Banaszak <cit.> (see also <cit.>). There is an analogue on `minus parts'when L/K is an abelianCM-extension and r is odd. Under the same conditions Barrett and Burns <cit.> have constructed conjectural annihilators of the p-adic wild kernel via integer values of p-adic Artin L-functions. This approach has been further generalized to the non-abelian situation by Burns and Macias Castillo <cit.>.In this paper we consider the most general case, where L/K is anarbitrary (not necessarily abelian or totally real)Galois extension and r>1 is an arbitrary integer. Let G_L be the absolute Galois group of L.Assuming conjectures of Gross <cit.> and of Schneider <cit.>, we define a canonical fractional Galois ideal 𝒥_r^S and conjecture that for every x ∈__p[G](_p(r-1)_G_L) we have that__p[G](x) ·ℋ_p(G) ·𝒥_r^S⊆__p[G](K_2r-2^w(𝒪_L,S)_p).Note that the conjectures of Gross and Schneider are known when L is totally real and r is even (see Theorem <ref> and Theorem <ref> below, respectively).When in addition L/K is abelian, we show that our conjecture is compatible with Barrett's conjecture.In order to show that our conjecture is implied by the appropriate special case of the ETNC, we reformulate the ETNCfor the pair h^0((L)(r), [G]) inthe spirit of the `lifted root number conjecture' of Gruenberg, Ritter and Weiss <cit.> and the `leading term conjectures' of Breuning and Burns <cit.>. Note that the leading term conjecture at s=1 is equivalent to the ETNC for the pair h^0((L)(1), [G]) when Leopoldt's conjecture holds (see <cit.>), and that Schneider's conjecture is a natural analogue when r>1. This reformulation is more explicit than the rather involved and general formulation of Burns and Flach <cit.>. This will actually occupy a large part of the paper and isinteresting in its own right. Moreover, the relation to the ETNC will lead to a proof of our annihilation conjecture in several important cases.In a little more detail, we modify the compact support cohomology of theétale p-adic sheaf _p(r) such that we obtain a complex which is acyclic outside degrees 2 and 3. We show that this complex is a perfectcomplex of _p[G]-modules provided that Schneider's conjecture holds. Assuming Gross' conjecture we define a trivialization of this complex that involves Soulé's p-adic Chern class maps <cit.> and the Bloch–Kato exponential map <cit.>. These data define a refined Euler characteristic which our conjecture relates to the special values of the equivariant Artin L-series at s=r and determinants of a certain regulator map. 
This relation is expressed as an equality in a relative algebraic K-group.This article is organized as follows. In <ref> we review the higher Quillen K-theory of rings of integers in number fields. We discuss its relation to étale cohomology and introduce Banaszak's wild kernels. In <ref> we prove basic properties of the compact support cohomology of the étale p-adic sheaf _p(r), where r>1 is an integer. We recall Schneider's conjecture and provide a reformulation in terms of Tate–Shafarevich groups (which originates with Barrett <cit.>). We then construct the aforementioned complex of _p[G]-modules which is perfect when Schneider's conjecture holds. We recall some background on relative algebraic K-theory and in particular on refined Euler characteristics in <ref>. We state Gross' conjecture on leading terms of Artin L-functions at negative integers in <ref> and give a reformulation at positive integers by means of the functional equation. In <ref> we construct a trivialization of our conjecturally perfect complex and formulate a leading term conjecture at s=r for every integer r>1. We show that our conjecture is essentially equivalent to the ETNC for the pair h^0((L)(r), [G]).Finally, in <ref> we define thecanonical fractional Galois ideal and give a precise formulation of our conjecture on the annihilation of p-adic wild kernels. We show that this conjecture is implied by the leading term conjecture of <ref>. The relation to the ETNC then implies that our conjectures hold in several important cases. We also discuss the relation to a recent conjecture of Burns, Kurihara and Sano <cit.>. §.§ AcknowledgementsThe author acknowledges financial support provided by theDeutsche Forschungsgemeinschaft (DFG)within the Collaborative Research Center 701 `Spectral Structures and Topological Methods in Mathematics' and the Heisenberg programme (No.NI 1230/3-1). The author is indebted to Grzegorz Banaszak for various stimulating discussions concerning higher K-theory of rings of integers during a short stay at Adam Mickiewicz University in Poznań, Poland. Finally, the author thanks the anonymous referees for their valuable suggestions.§.§ Notation and conventionsAll rings are assumed to have an identity element and all modules are assumed to be left modules unless otherwise stated. Unadorned tensor products will always denote tensor products over . If K is a field, we denote its absolute Galois group by G_K. For a module M we write M_ for its torsion submodule and set M_ := M / M_ which we regard as embedded into ⊗ M. If R is a ring, we write M_m × n(R) for the set of allm × n matrices with entries in R. We denote the group of invertible matrices in M_n × n(R) by _n(R).§ HIGHER K-THEORY OF RINGS OF INTEGERS §.§ The setup Let L/K be a finite Galois extension of number fields with Galois group G. We write S_∞ for the set of archimedean places of K and let S be a finite set of places of K containing S_∞. We let 𝒪_L,S be the ring of S(L)-integers in L, where S(L) denotes the finite set of places of L that lie above a place in S; we will abbreviate 𝒪_L,S_∞ to 𝒪_L.For any place v of K we choose a place w of L above v and write G_w and I_w for the decomposition group and inertia subgroup of L/K at w, respectively. We denote the completions of L and K at w and v by L_w and K_v, respectively, and identify the Galois group of the extension L_w / K_v with G_w. We put G_w := G_w / I_w which we identify with the Galois group of the residue field extension which we denote by L(w) / K(v). 
Finally, we let ϕ_w ∈ G_w be the Frobenius automorphism, and we denote the cardinality of K(v) by N(v).

§.§ Higher K-theory

For an integer n ≥ 0 and a ring R we write K_n(R) for the Quillen K-theory of R. In the case R = 𝒪_L,S or R = L the groups K_n(𝒪_L,S) and K_n(L) are equipped with a natural G-action, and for every integer r>1 the inclusion 𝒪_L,S ⊆ L induces an isomorphism of [G]-modules

K_2r-1(𝒪_L,S) ≃ K_2r-1(L).

Moreover, if S' is a second finite set of places of K containing S, then for every r>1 there is a natural exact sequence of [G]-modules

0 → K_2r(𝒪_L,S) → K_2r(𝒪_L,S') → ⊕_w ∈ S'(L) ∖ S(L) K_2r-1(L(w)) → 0.

Both results, (<ref>) and (<ref>), follow from work of Soulé <cit.>, see <cit.>. We also note that sequence (<ref>) remains left-exact in the case r=1. The structure of the finite [G_w]-modules K_2r-1(L(w)) has been determined by Quillen <cit.> (see also <cit.>) to be

K_2r-1(L(w)) ≃ [G_w] / (ϕ_w - N(v)^r).

If S contains all places of K that ramify in L/K, we thus have an isomorphism of [G]-modules

⊕_w ∈ S'(L) ∖ S(L) K_2r-1(L(w)) ≃ ⊕_v ∈ S' ∖ S _G_w^G [G_w] / (ϕ_w - N(v)^r),

where we write _U^G M := [G] ⊗_[U] M for any subgroup U of G and any [U]-module M. We also note that the even K-groups K_2r(𝔽) of a finite field 𝔽 all vanish.

§.§ The regulators of Borel and Beilinson

Let Σ(L) be the set of embeddings of L into the complex numbers ℂ; we then have |Σ(L)| = r_1 + 2 r_2, where r_1 and r_2 are the number of real embeddings and the number of pairs of complex embeddings of L, respectively. For an integer k ∈ ℤ we define

H_k(L) := ⊕_Σ(L) (2 π i)^-k ℤ,

which is endowed with a natural Gal(ℂ/ℝ)-action, diagonally on Σ(L) and on (2π i)^-k ℤ. The invariants of H_k(L) under this action will be denoted by H_k^+(L), and it is easily seen that

d_k := _(H_1-k^+(L)) = r_1 + r_2 if 2 ∤ k, and d_k = r_2 if 2 | k.

Let r>1 be an integer. Borel <cit.> has shown that the even K-groups K_2r-2(𝒪_L) (and thus K_2r-2(𝒪_L,S) for any S as above by (<ref>) and (<ref>)) are finite, and that the odd K-groups K_2r-1(𝒪_L) are finitely generated abelian groups of rank d_r. More precisely, Borel constructed regulator maps

ρ_r: K_2r-1(𝒪_L) → H_1-r^+(L) ⊗ ℝ

with finite kernel. The image of ρ_r is a full lattice in H_1-r^+(L) ⊗ ℝ. The covolume of this lattice is called the Borel regulator and will be denoted by R_r(L). Moreover, Borel showed that

ζ_L^∗(1-r)/R_r(L) ∈ ℚ^×,

where ζ_L^∗(1-r) denotes the leading term at s=1-r of the Dedekind zeta function ζ_L(s) attached to L. In the context of the ETNC it is often more natural to work with Beilinson's regulator map <cit.>. However, by a result of Burgos Gil <cit.>, Borel's regulator map is twice the regulator map of Beilinson. As we will eventually work prime by prime and exclude the prime 2, it makes no essential difference which regulator map we use.

§.§ Derived categories and Galois cohomology

Let Λ be a noetherian ring and let (Λ) be the category of all finitely generated projective Λ-modules. We write 𝒟(Λ) for the derived category of Λ-modules and 𝒞^b((Λ)) for the category of bounded complexes of finitely generated projective Λ-modules. Recall that a complex of Λ-modules is called perfect if it is isomorphic in 𝒟(Λ) to an element of 𝒞^b((Λ)). We denote the full triangulated subcategory of 𝒟(Λ) comprising perfect complexes by 𝒟^(Λ). If M is a Λ-module and n is an integer, we write M[n] for the complex

⋯ → 0 → 0 → M → 0 → 0 → ⋯,

where M is placed in degree -n.
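Concretely, this shift convention gives

H^i(M[n]) = M if i = -n, and H^i(M[n]) = 0 otherwise;

for instance, M[-2] is the complex consisting of M placed in degree 2.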
We will also use the following convenient notation: When t ≥ 1 and n_1, …, n_t are integers, we putM{n_1, …, n_t} := ⊕_i=1^t M[-n_i].In particular, we have M{n} = M[-n] and M{n_1, …, n_t}[n] = M{n_1-n, …, n_t-n}.Recall the notation of <ref>. In particular, L/K is a Galois extension of number fields with Galois group G.For a finite set S of places of K containing S_∞ we let G_L,S be the Galois group over L of the maximal extension of L that is unramified outside S(L). For any topological G_L,S-module M we write RΓ(𝒪_L,S, M) for the complex of continuous cochains of G_L,S with coefficients in M. If F is a field and M is a topological G_F-module, we likewise define RΓ(F,M) to be the complex of continuous cochains of G_F with coefficients in M.If F is a global or a local field of characteristic zero, and M is a discrete or a compact G_F-module, then for r ∈ we denote the r-th Tate twist of M by M(r). Now let p be a prime and suppose that S also contains all p-adic places of K. Then we will particularly be interested in the complexes RΓ(𝒪_L,S, _p(r)) in 𝒟(_p[G]). Note that for an integer i the cohomology group in degree i of RΓ(𝒪_L,S, _p(r)) naturally identifies with H^i_ (𝒪_L,S, _p(r)), the i-th étale cohomology group of the affine scheme (𝒪_L,S) with coefficients in the étale p-adic sheaf _p(r). §.§ p-adic Chern class mapsFix an odd prime p and assume that S contains S_∞ and the set S_p of all p-adic places of K. Then for any integer r > 1 and i=1,2 Soulé <cit.> has constructed canonical G-equivariant p-adic Chern class mapsch^(p)_r, i: K_2r-i(𝒪_L,S) ⊗_p → H^i_ (𝒪_L,S, _p(r)).We need the following deep result. Let p be an odd prime. Then for any integer r > 1 and i=1,2 thep-adic Chern class maps ch^(p)_r, i are isomorphisms. Soulé <cit.> proved surjectivity. Building on work of Rost and Voevodsky, Weibel<cit.> completed the proof of the Quillen–Lichtenbaum Conjecture. Let r>1 be an integer and let p be an odd prime. Then we have isomorphisms of _p[G]-modules H^iRΓ(𝒪_L,S, _p(r)) ≃ H^i_(𝒪_L,S, _p(r)) ≃{[ K_2r-1(𝒪_L,S) ⊗_p i=1; K_2r-2(𝒪_L,S) ⊗_p i=2; 0 i ≠1,2. ].This follows from Theorem <ref> and the fact that the Galois groupG_L,S has cohomological p-dimension 2 by <cit.>. §.§ K-theory of local fields Let p be a prime. For an integer n ≥ 0 and a ring R we write K_n(R; _p) for the K-theory of R with coefficients in _p. Now let p be odd and let w be a finite place of L. We write 𝒪_w for the ring of integers in L_w. If w does not belong to S_p(L), then for r>1 and i = 1,2 we have isomorphisms of _p[G_w]-modulesK_2r-i(𝒪_w; _p) ≃ K_2r-i(L(w); _p) ≃(_p / _p(r-i+1) )^G_L_w.Here, the first isomorphism is a special case of Gabber's Rigidity Theorem <cit.>. As the even K-groups of a finite field vanish, the Universal Coefficient Theorem <cit.> identifies K_2r-i(L(w); _p) with K_2r-1(L(w)) ⊗_p if i=1 and with K_2r-3(L(w)) ⊗_p if i=2. Now (<ref>) gives the second isomorphism. Note that in particular K_2r-i(𝒪_w; _p) is a finite group. We likewise have[H^1_(L_w, _p(r)) ≃ H^0_(L_w, _p / _p(r)) = (_p / _p(r))^G_L_w,;H^2_(L_w, _p(r)) ≃ H^0_(L_w, _p / _p(1-r))^∨ ≃ (_p / _p(r-1))^G_L_w, ]where (-)^∨ := (-, _p / _p) denotesthe Pontryagin dual and we have used local Tate duality (see also <cit.> and the subsequent remark). This shows the case w ∉S_p(L) of the following well-known theorem. The case w ∈ S_p(L) is another instance of the Quillen–Lichtenbaum Conjecture and has been proven by Hesselholt and Madsen <cit.>. Let p be an odd prime and let w be a finite place of L. 
Then for any integer r > 1 and i=1,2there are canonical isomorphisms of _p[G_w]-modules K_2r-i(𝒪_w; _p) ≃ H^i_(L_w, _p(r)). §.§ Wild KernelsLet p be an odd prime and let S be a finite set of places of K containing all archimedean and all p-adic places. The following definition is due to Banaszak <cit.> (a variant has been defined slightly earlier by Nguyen Quang Do <cit.>).Let r>1 be an integer.The kernel of the natural map K_2r-2(𝒪_L,S) ⊗_p →⊕_w ∈ S(L) H^2_(L_w, _p(r)) is called the p-adic wild kernel and will be denoted by K^w_2r-2(𝒪_L,S)_p. This can be described in purely K-theoretic terms as follows.As p is odd, the cohomology groups H^2_(L_w, _p(r)) vanish for archimedean w.Thus Theorem <ref> implies thatK^w_2r-2(𝒪_L,S)_p identifies with the kernel of the map K_2r-2(𝒪_L,S) ⊗_p →⊕_w ∈ S(L) ∖ S_∞(L) K_2r-2(𝒪_w; _p). Let S' be a second finite set of places of K such that S ⊆ S'.As we have observed in <ref>, we have isomorphisms K_2r-2(𝒪_w; _p) ≃ K_2r-2(L(w); _p) ≃ K_2r-3(L(w)) ⊗_p for every w ∈ S'(L) ∖ S(L). Taking sequence (<ref>) into account,a diagram chase shows that the p-adic wild kernel K^w_2r-2(𝒪_L,S)_pdoes in fact not depend on the set S.§ THE CONJECTURES OF LEOPOLDT AND SCHNEIDER §.§ Local Galois cohomologyWe keep the notation of <ref>. In particular,L/K is a finite Galois extension of number fields with Galois group G. Let p be an odd prime. We denote the (finite) set of places of K that ramify in L/K by S_ and let S be a finite set of places of K containing S_ and all archimedean and p-adic places (i.e. S_∞∪ S_p ∪ S_⊆ S).Let M be a topological G_L,S-module. Then M becomes a topological G_L_w-module for every w ∈ S(L) by restriction.For any i ∈ we putP^i(𝒪_L,S, M) := ⊕_w ∈ S(L) H^i_(L_w, M).We write S_f for the subset of S comprising all finite places in S. Let r>1 be an integer. Then we have isomorphisms of _p[G]-modules P^i(𝒪_L,S, _p(r)) ≃{[H_-r^+(L) ⊗_pi=0; ⊕_w ∈ S_f(L) K_2r-1(𝒪_w; _p)i=1; ⊕_w ∈ S_f(L) K_2r-2(𝒪_w; _p)i=2;0].We first observe that H^0_(L_w, _p(r)) vanishes unless w is a complex place orw is a real place and r is even, whereas in these cases we haveH^0_(L_w, _p(r)) = _p(r). Thus the isomorphism ⊕_Σ(L)_p(r) ≃(⊕_Σ(L)(2π i)^r ) ⊗_p that maps a generator of _p(r) to (2 π i)^r restricts to an isomorphism P^0(𝒪_L,S, _p(r)) = ⊕_w ∈ S_∞(L) H^0_(L_w, _p(r)) ≃ H_-r^+(L) ⊗_p. Now let i>0. As p is odd, it is clear that H^i_(L_w, _p(r)) vanishes forall archimedean w. Now let w be a finite place of L. Since the cohomological dimensionof G_L_w equals 2 by <cit.>, we haveH^i_(L_w, _p(r)) = 0 for i >2.The remaining cases now follow from Theorem <ref>.Let r>1 be an integer. Then __p(P^i(𝒪_L,S, _p(r))) = {[ d_r+1 i=0;[L:] i=1; 0 ].In degree zero the result follows from Lemma <ref> and the definition of d_r+1.We have already observed that the groups K_2r-i(𝒪_w; _p) are finite for i=1,2and all finite places w of L which are not p-adic. If w belongs to S_p(L), thenK_2r-2(𝒪_w; _p) is finite, whereas K_2r-1(𝒪_w; _p)has _p-rank [L_w: _p] by <cit.>.The result for i ≠0 now follows again from Lemma <ref> andthe formula [L:] = ∑_w ∈ S_p(L) [L_w:_p]. For any integers r and i we define P^i(𝒪_L,S, _p(r)) to be P^i(𝒪_L,S, _p(r)) ⊗__p_p. The following result is also proven in <cit.>. Let r>1 be an integer. Then we have isomorphisms of _p[G]-modules P^i(𝒪_L,S, _p(r)) ≃{[ H_-r^+(L) ⊗_p i=0;L ⊗__p i=1; 0 ].This follows from Lemma <ref> and Corollary <ref>unless i=1. 
To handle this case we let w ∈ S_p(L) and put

D_dR^L_w(_p(r)) := H^0(L_w, B_dR ⊗__p _p(r)),

where B_dR denotes Fontaine's de Rham period ring. Then the Bloch–Kato exponential map

exp_r^BK: L_w = D_dR^L_w(_p(r)) → H^1_(L_w, _p(r))

is an isomorphism for every w ∈ S_p(L), as follows from <cit.>. Thus we have isomorphisms of _p[G]-modules

P^1(𝒪_L,S, _p(r)) ≃ ⊕_w ∈ S_p(L) H^1_(L_w, _p(r)) ≃ ⊕_w ∈ S_p(L) L_w ≃ L ⊗__p.

By abuse of notation we write exp_r^BK for the isomorphism L ⊗__p ≃ P^1(𝒪_L,S, _p(r)).

§.§ Schneider's conjecture

We recall the following conjecture of Schneider <cit.>. Let r ≠ 0 be an integer. Then the cohomology group H^2_(𝒪_L,S, _p / _p (1-r)) vanishes. It can be shown that Schneider's conjecture for r=1 is equivalent to Leopoldt's conjecture (see <cit.>). For a given number field L and a fixed prime p, Schneider's conjecture holds for almost all r. This follows from <cit.> and <cit.>.

Let M be a topological G_L,S-module. For any integer i we denote the kernel of the natural localization map H^i_(𝒪_L,S, M) → P^i(𝒪_L,S, M) by ^i(𝒪_L,S, M). We call ^i(𝒪_L,S, M) the Tate–Shafarevich group of M in degree i. The relation of Tate–Shafarevich groups to Schneider's conjecture is explained by the following result (see also <cit.>). Let r ≠ 0 be an integer and let p be an odd prime. Then the following holds. (i) The Tate–Shafarevich group ^1(𝒪_L,S, _p(r)) is torsion-free. (ii) Schneider's conjecture holds at r and p if and only if the Tate–Shafarevich group ^1(𝒪_L,S, _p(r)) vanishes.

We first claim that for any place w of L the group H^2_(L_w, _p / _p (1-r)) vanishes. This is clear when w is archimedean. If w is a finite place, then the Pontryagin dual of H^2_(L_w, _p / _p (1-r)) naturally identifies with H^0_(L_w, _p(r)) = 0 by local Tate duality. Now by Poitou–Tate duality <cit.> and the claim we have

^1(𝒪_L,S, _p(r)) ≃ ^2(𝒪_L,S, _p / _p(1-r))^∨ = H^2_(𝒪_L,S, _p / _p (1-r))^∨.

This implies (ii), and also (i), as the groups H^2_(𝒪_L,S, _p / _p (1-r)) are divisible <cit.>.

We record some cases where Schneider's conjecture is known. Let p be an odd prime. (i) If r<0 is an integer, then Schneider's conjecture holds at r and p. (ii) If r>0 is even and L is a totally real field, then Schneider's conjecture holds at r and p. Case (i) is due to Soulé <cit.> (see also <cit.>). Now suppose that r>0 is even and that L is totally real. Then the K-groups K_2r-1(𝒪_L,S) are finite by work of Borel (see <ref>). The Quillen–Lichtenbaum Conjecture (Theorem <ref>) implies that the groups H^1_(𝒪_L,S, _p(r)) are finite as well. It follows that the Tate–Shafarevich group ^1(𝒪_L,S, _p(r)) is finite and thus vanishes by Proposition <ref> (i). Now (ii) follows from Proposition <ref> (ii).

§.§ Compact support cohomology

Let M be a topological G_L,S-module. Following Burns and Flach <cit.> we define the compact support cohomology complex to be

RΓ_c(𝒪_L,S, M) := (RΓ(𝒪_L,S, M) → ⊕_w ∈ S(L) RΓ(L_w, M))[-1],

where the arrow is induced by the natural restriction maps. For any i ∈ ℤ we abbreviate H^iRΓ_c(𝒪_L,S, M) to H^i_c(𝒪_L,S, M). If r is an integer, we set H^i_c(𝒪_L,S, ℚ_p(r)) := H^i_c(𝒪_L,S, ℤ_p(r)) ⊗_ℤ_p ℚ_p.

For every topological G_L,S-module M we have H^0_c(𝒪_L,S, M) = ^0(𝒪_L,S, M) = 0. This is <cit.>. We repeat the short argument for the reader's convenience. By definition, the groups H^0_c(𝒪_L,S, M) and ^0(𝒪_L,S, M) both identify with the kernel of the map H^0_(𝒪_L,S, M) → P^0(𝒪_L,S, M), which is just the diagonal embedding M^G_L,S ↪ ⊕_w ∈ S(L) M^G_L_w.

Let r be an integer. Then the complex RΓ_c(𝒪_L,S, _p(r)) belongs to 𝒟^(_p[G]).
This is a special case of <cit.>, for example. Let r>1 be an integer and let p be an odd prime. Then the following holds.* We have an exact sequence of _p[G]-modules0 → H_-r^+(L) ⊗_p → H^1_c(𝒪_L,S, _p(r)) →^1(𝒪_L,S, _p(r)) → 0.In particular, we have H^1_c(𝒪_L,S, _p(r)) ≃ H_-r^+(L) ⊗_p if and only if Schneider's conjecture <ref> holds.* We have an isomorphism of _p[G]-modulesH^3_c(𝒪_L,S, _p(r)) ≃_p(r-1)_G_L * We have an exact sequence of _p[G]-modules0 →^2(𝒪_L,S, _p(r)) → H^2_(𝒪_L,S, _p(r))→⊕_w ∈ S(L)_p(r-1)_G_L_w→_p(r-1)_G_L→ 0. * We have an isomorphism of _p[G]-modules^2(𝒪_L,S, _p(r)) ≃ K_2r-2^w(𝒪_L,S)_p.In particular, ^2(𝒪_L,S, _p(r)) is finite and does not depend on S.* Schneider's conjecture <ref> holds if and only if the _p-rank of H^2_c(𝒪_L,S, _p(r)) equals d_r+1.We first observe that Artin–Verdier duality impliesH^3_c(𝒪_L,S, _p(r)) ≃ H^0_(𝒪_L,S, _p / _p(1-r))^∨= (_p / _p (1-r)^G_L)^∨ = _p(r-1)_G_Lgiving (ii). For any w ∈ S(L) local Tate duality likewise impliesH^2_(L_w, _p(r)) ≃ H^0_(L_w, _p / _p(1-r))^∨ = (_p / _p (1-r)^G_L_w)^∨ = _p(r-1)_G_L_w.As H^0_c(𝒪_L,S, _p(r)) vanishes by Lemma <ref>, the long exact sequence in cohomology associated to the exact triangleRΓ_c(𝒪_L,S, _p(r)) →RΓ(𝒪_L,S, _p(r)) →⊕_w ∈ S(L) RΓ(L_w, _p(r))now gives the exact sequences in (i) and (iii) by Lemma <ref> and the very definitionof Tate–Shafarevich groups (in view of (iv) the sequence in (iii) then actually coincides with the sequence in <cit.>).It is then also clear that Schneider's conjecture implies that we have an isomorphismH^1_c(𝒪_L,S, _p(r)) ≃ H_-r^+(L) ⊗_p. Conversely, if these two _p[G]-modules are isomorphic, they are in particular finitely generated _p-modules of the same rank. The short exact sequence in (i) then implies that the Tate–Shafarevich group ^1(𝒪_L,S, _p(r)) is torsion and thus vanishes by Proposition <ref> (i). Hence Schneider's conjecture holds by Proposition<ref> (ii). This completes the proof of (i).Claim (iv) is an easy consequence of Theorem <ref> and Remark <ref>. Alternatively, it can be derived from <cit.>. Finally, it follows from Theorem <ref>, Corollary <ref> and the exact sequence[ 0 →^1(𝒪_L,S, _p(r)) →H^1_(𝒪_L,S, _p(r)) → P^1(𝒪_L,S, _p(r)); → H^2_c(𝒪_L,S, _p(r)) →^2(𝒪_L,S, _p(r)) → 0 ] that the _p-rank of H^2_c(𝒪_L,S, _p(r)) equals [L:] - d_r + __p(^1(𝒪_L,S, _p(r))) = d_r+1 + __p(^1(𝒪_L,S, _p(r))).Thus (v) is a consequence of Proposition <ref>.§.§ A conjecturally perfect complex We keep the notation of the last subsection and also recall the notation of <ref>.Let C_L, S(r) ∈𝒟(_p[G]) be the cone of the mapH^1_c(𝒪_L,S, _p(r)){1,4}→ RΓ_c(𝒪_L,S, _p(r)) ⊕ (H_1-r^+(L) ⊗_p){2,3}which on cohomology induces the identity map in degree 1 and the zero map in all other degrees. Let r>1 be an integer and let p be an odd prime. Then the following holds. * The complex C_L, S(r) is acyclic outside degrees 2 and 3. * There is an isomorphism of _p[G]-modules H^2(C_L, S(r)) ≃ H^2_c(𝒪_L,S, _p(r)) ⊕ H_1-r^+(L) ⊗_p. In particular, there is a surjection H^2(C_L, S(r)) →^2(𝒪_L,S, _p(r)). * Assume that Schneider's conjecture <ref> holds. Then the complex C_L, S(r) belongs to 𝒟^(_p[G]) and we havean isomorphism of _p[G]-modules H^3(C_L, S(r)) ≃ H^3_c(𝒪_L,S, _p(r)) ⊕( H_-r^+(L) ⊕ H_1-r^+(L)) ⊗_p.This follows easily from Propositions <ref> and<ref> once we have observed that the _p[G]-module H_k^+(L) ⊗_p is projective for every k ∈. Indeed, the [G ×(/)]-module H_k(L) is free over [G] of rank [K:] and H_k^+(L) ⊗_p is adirect summand of H_k(L) ⊗_p as p is odd. 
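To make the last step explicit, let c denote complex conjugation, acting on H_k(L) through the second factor of the group G × Gal(ℂ/ℝ). Since p is odd, e^± := (1 ± c)/2 are well-defined idempotents, and

H_k(L) ⊗ ℤ_p = e^+(H_k(L) ⊗ ℤ_p) ⊕ e^-(H_k(L) ⊗ ℤ_p) = (H_k^+(L) ⊗ ℤ_p) ⊕ (H_k^-(L) ⊗ ℤ_p)

as modules over the group ring with p inverted nowhere; since H_k(L) ⊗ ℤ_p is free over the p-adic group ring of G, both summands are indeed projective.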
§ RELATIVE ALGEBRAIC K-THEORY

For further details and background on the algebraic K-theory used in this section, we refer the reader to <cit.> and <cit.>.

§.§ Algebraic K-theory

Let R be a noetherian integral domain of characteristic 0 with field of fractions E. Let A be a finite-dimensional semisimple E-algebra and let Λ be an R-order in A. Recall that (Λ) denotes the category of finitely generated projective left Λ-modules. Then K_0(Λ) naturally identifies with the Grothendieck group of (Λ) (see <cit.>) and K_1(Λ) with the Whitehead group (see <cit.>). For any field extension F of E we set A_F := F ⊗_E A. Let K_0(Λ, F) denote the relative algebraic K-group associated to the ring homomorphism Λ ↪ A_F. We recall that K_0(Λ, F) is an abelian group with generators [X, g, Y], where X and Y are finitely generated projective Λ-modules and g: F ⊗_R X → F ⊗_R Y is an isomorphism of A_F-modules; for a full description in terms of generators and relations, we refer the reader to <cit.>. Furthermore, there is a long exact sequence of relative K-theory

K_1(Λ) ⟶ K_1(A_F) ∂_Λ,F⟶ K_0(Λ,F) ⟶ K_0(Λ) ⟶ K_0(A_F)

(see <cit.>). We write ζ(A) for the center of (any ring) A. The reduced norm map

_A: A ⟶ ζ(A)

is defined componentwise (see <cit.>) and extends to matrix rings over A in the obvious way; hence it induces a map K_1(A) → ζ(A)^× which we also denote by _A. Let P be a finitely generated projective A-module and let γ be an A-endomorphism of P. Choose a finitely generated projective A-module Q such that P ⊕ Q is free. Then the reduced norm of γ ⊕ 𝕀_Q with respect to a chosen basis yields a well-defined element _A(γ) ∈ ζ(A). In particular, if γ is invertible, then γ defines a class [γ] ∈ K_1(A) and we have _A(γ) = _A([γ]).

§.§ Refined Euler characteristics

For any C^∙ ∈ 𝒞^b((Λ)) we define Λ-modules

C^ev := ⊕_i ∈ ℤ C^2i, C^odd := ⊕_i ∈ ℤ C^2i+1.

Similarly, we define H^ev(C^∙) and H^odd(C^∙) to be the direct sum over all even and odd degree cohomology groups of C^∙, respectively. A pair (C^∙, t) consisting of a complex C^∙ ∈ 𝒟^(Λ) and an isomorphism t: H^odd(C_F^∙) → H^ev(C_F^∙) is called a trivialized complex, where we write C_F^∙ for F ⊗^𝕃_R C^∙. We refer to t as a trivialization of C^∙. One defines the refined Euler characteristic χ_Λ,F(C^∙, t) ∈ K_0(Λ,F) of a trivialized complex as follows. Choose a complex P^∙ ∈ 𝒞^b((Λ)) which is quasi-isomorphic to C^∙. Let B^i(P_F^∙) and Z^i(P_F^∙) denote the i-th coboundaries and i-th cocycles of P_F^∙, respectively. For every i ∈ ℤ we have the obvious exact sequences

0 → B^i(P_F^∙) → Z^i(P_F^∙) → H^i(P_F^∙) → 0,
0 → Z^i(P_F^∙) → P_F^i → B^i+1(P_F^∙) → 0.

If we choose splittings of the above sequences, we get an isomorphism of A_F-modules

ϕ_t: P_F^odd ≃ ⊕_i ∈ ℤ B^i(P_F^∙) ⊕ H^odd(P_F^∙) ≃ ⊕_i ∈ ℤ B^i(P_F^∙) ⊕ H^ev(P_F^∙) ≃ P_F^ev,

where the second map is induced by t. Then the refined Euler characteristic is defined to be

χ_Λ,F(C^∙, t) := [P^odd, ϕ_t, P^ev] ∈ K_0(Λ, F),

which is indeed independent of all choices made in the construction. For further information concerning refined Euler characteristics we refer the reader to <cit.>.
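It may help to record the simplest instance of these constructions; the following normalization check (not needed in the sequel) takes Λ = ℤ_p and A = F = ℚ_p, i.e. the case of a trivial group. Then K_1(ℤ_p) = ℤ_p^×, K_1(ℚ_p) = ℚ_p^× and K_0(ℤ_p) → K_0(ℚ_p) is an isomorphism, so the long exact sequence above yields

K_0(ℤ_p, ℚ_p) ≃ ℚ_p^×/ℤ_p^× ≃ ℤ,

the last isomorphism being induced by the p-adic valuation. Moreover, if C^∙ is a perfect complex of ℤ_p-modules whose cohomology is finite and concentrated in degrees 2 and 3, then t = 0 is the only trivialization and, with the conventions just introduced, one computes

χ_ℤ_p,ℚ_p(C^∙, 0) = length_ℤ_p H^2(C^∙) - length_ℤ_p H^3(C^∙) ∈ ℤ.

For example, the complex ℤ_p →^p ℤ_p with terms in degrees 1 and 2 has χ = [ℤ_p, p, ℤ_p] = ∂_ℤ_p,ℚ_p(p), which corresponds to 1; this is consistent with the Fitting invariant identity recalled in <ref> below.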
§.§ K-theory of group rings

Let p be a prime and let G be a finite group. By a well-known theorem of Swan (see <cit.>) the map K_0(ℤ_p[G]) → K_0(ℚ_p[G]) induced by extension of scalars is injective. Thus from (<ref>) we obtain an exact sequence

K_1(ℤ_p[G]) ⟶ K_1(ℚ_p[G]) ⟶ K_0(ℤ_p[G], ℚ_p) ⟶ 0.

The reduced norm map induces an isomorphism K_1(ℚ_p[G]) ⟶ ζ(ℚ_p[G])^× (use <cit.>), and the image of K_1(ℤ_p[G]) under the reduced norm coincides with that of (ℤ_p[G])^× (this follows from <cit.>). Hence from (<ref>) we obtain an exact sequence

(ℤ_p[G])^× ⟶ ζ(ℚ_p[G])^× ∂_p⟶ K_0(ℤ_p[G], ℚ_p) ⟶ 0,

where the first arrow is the reduced norm and we write ∂_p for ∂_ℤ_p[G],ℚ_p. The canonical maps K_0(ℤ[G], ℚ) → K_0(ℤ_p[G], ℚ_p) induce an isomorphism

K_0(ℤ[G], ℚ) ≃ ⊕_p K_0(ℤ_p[G], ℚ_p),

where the sum ranges over all primes (see the discussion following <cit.>). By abuse of notation we let

∂_p: ζ(ℚ[G])^× → K_0(ℤ_p[G], ℚ_p)

also denote the composite map of the inclusion ζ(ℚ[G])^× → ζ(ℚ_p[G])^× and the surjection ∂_p in sequence (<ref>). Finally, the reduced norm _ℝ[G]: K_1(ℝ[G]) → ζ(ℝ[G])^× is injective and there is an extended boundary homomorphism

∂̂: ζ(ℝ[G])^× ⟶ K_0(ℤ[G], ℝ)

such that ∂̂ ∘ _ℝ[G] coincides with the usual boundary homomorphism ∂_ℤ[G],ℝ in sequence (<ref>) (see <cit.>).

§ RATIONALITY CONJECTURES

§.§ Artin L-series

Let L/K be a finite Galois extension of number fields with Galois group G and let S be a finite set of places of K containing all archimedean places. For any irreducible complex-valued character χ of G we denote the S-truncated Artin L-series by L_S(s, χ), and the leading coefficient of L_S(s, χ) at an integer r by L_S^∗(r, χ). We will use this notation even if L_S^∗(r, χ) = L_S(r, χ) (which will happen frequently in the following). There is a canonical isomorphism ζ(ℂ[G]) ≃ ∏_χ ∈ _(G) ℂ, where _(G) denotes the set of irreducible complex characters of G. We define the equivariant S-truncated Artin L-series to be the meromorphic ζ(ℂ[G])-valued function

L_S(s) := (L_S(s,χ))_χ ∈ _(G).

For any r ∈ ℤ we also put

L_S^∗(r) := (L_S^∗(r,χ))_χ ∈ _(G) ∈ ζ(ℂ[G])^×.

Now let v ∈ S_∞ be an archimedean place of K. Let χ be an irreducible complex character of G and let V_χ be a ℂ[G]-module with character χ. We set

n_χ := dim_ℂ(V_χ) = χ(1), n_χ,v^+ := dim_ℂ(V_χ^G_w), n_χ,v^- := n_χ - n_χ,v^+.

We write S_ℝ and S_ℂ for the subsets of S_∞ consisting of real and complex places, respectively, and define ϵ-factors

ϵ_v(s,χ) := (2 · (2π)^-s Γ(s))^n_χ if v ∈ S_ℂ, and ϵ_v(s,χ) := L_ℝ(s)^n_χ,v^+ · L_ℝ(s+1)^n_χ,v^- if v ∈ S_ℝ,

where L_ℝ(s) := π^-s/2 Γ(s/2) and Γ(s) denotes the usual Gamma function. The completed Artin L-series is then defined to be

Λ(s, χ) := (∏_v ∈ S_∞ ϵ_v(s,χ)) L_S_∞(s,χ) = ∏_v ϵ_v(s,χ),

where the second product runs over all places of K and for a finite place v of K we have

ϵ_v(s,χ) := det(1 - ϕ_w N(v)^-s | V_χ^I_w)^-1.

We denote the contragredient of χ by χ̌. Then the completed Artin L-series satisfies the functional equation

Λ(s, χ) = ϵ(s, χ) Λ(1-s, χ̌),

where the ϵ-factor ϵ(s, χ) is defined as follows. Let d_K be the absolute discriminant of K. We write W(χ) and 𝔣(χ) for the Artin root number and the Artin conductor of χ, respectively. We then have

c(χ) := |d_K|^n_χ N(𝔣(χ)), ϵ(s, χ) := W(χ) c(χ)^{1/2 - s}.

We also define equivariant ϵ-factors and the completed equivariant Artin L-series by

ϵ_v(s) := (ϵ_v(s,χ))_χ ∈ _(G), ϵ(s) := (ϵ(s,χ̌))_χ ∈ _(G), Λ(s) := (Λ(s,χ))_χ ∈ _(G).

The functional equations (<ref>) for all irreducible characters then combine to give an equality

Λ(s)^♯ = ϵ(s) Λ(1-s),

where x ↦ x^♯ denotes the ℂ-linear anti-involution of ℂ[G] which sends each g ∈ G to its inverse.

§.§ A conjecture of Gross

Let r>1 be an integer.
Since the Borel regulator map ρ_r induces an isomorphism of [G]-modules, the Noether–Deuring theorem (see <cit.> for instance) implies the existence of [G]-isomorphismsϕ_1-r: H_1-r^+(L) ⊗≃⟶ K_2r-1(𝒪_L) ⊗.Let χ be a complex character of G and let V_χ be a [G]-module with character χ. Composition with ρ_r ∘ϕ_1-r induces an automorphism of _G(V_χ̌, H_1-r^+(L) ⊗). Let R_ϕ_1-r(χ) ∈^× be its determinant. If χ' is a second character, then clearly R_ϕ_1-r(χ + χ') = R_ϕ_1-r(χ) · R_ϕ_1-r(χ') so that we obtain a mapR_ϕ_1-r: R(G)⟶ ^× χ ↦ (ρ_r ∘ϕ_1-r|_G(V_χ̌, H_1-r^+(L) ⊗)),where R(G) denotes the ring of virtual complex characters of G. We likewise defineA_ϕ_1-r^S: R(G)⟶ ^× χ ↦R_ϕ_1-r(χ) / L_S^∗(1-r, χ).Gross <cit.> conjectured the following higher analogue of Stark's conjecture. We have A_ϕ_1-r^S(χ^σ) = A_ϕ_1-r^S(χ)^σ for all σ∈(). It is straightforward to see that Gross' conjecture does not depend on S and the choice of ϕ_1-r (see also <cit.>). We briefly collect what is known about Conjecture <ref>. When L/K is a CM-extension, recall that χ is odd when χ(j) = - χ(1), where j ∈ G denotes complex conjugation.Conjecture <ref> holds in each of the following cases: * χ is the trivial character;* χ is absolutely abelian, i.e. L^(χ) / is abelian;* L^(χ) is totally real and r is even;* L^(χ) / K is a CM-extension, χ is an odd character and r is odd. (i) is Borel's result (<ref>) above. In cases (iii) and (iv) the regulator map disappears, and Conjecture <ref> boils down to the rationality of the L-values at negative integers which is a classical result of Siegel <cit.>. Finally, Gross' conjecture for all characters χ of G is equivalent to the rationality statement of the ETNC for the pair (h^0((L))(1-r), [G]) by <cit.> (see also <cit.>). In fact, the full ETNC is known for absolutely abelian extensions by work of Burns and Greither <cit.> and of Flach <cit.> (see also Huber and Kings <cit.>) which implies (ii).Let f: R(G) →^× be a homomorphism. Then we may view f as an element inζ([G])^×≃∏_χ∈_(G)^× by declaring itsχ-component to be f(χ), χ∈_(G). Conversely, eachf = (f_χ)_χ∈_(G) in ζ([G])^× defines a uniquehomomorphism f: R(G) →^× such that f(χ) = f_χ for eachχ∈_(G). Under this identification Conjecture <ref> asserts thatA_ϕ_1-r^S ∈ζ([G])^× actually belongs to ζ([G])^×. §.§ A reformulation of Gross' conjectureIn this subsection we give a reformulation of Gross' conjecture using the functional equation of Artin L-series. For any integer k we writeι_k: L ⊗_→⊕_Σ(L) = (H_1-k^+(L) ⊕ H_-k^+(L)) ⊗for the canonical [G ×( / )]-equivariant isomorphism which is induced by mapping l ⊗ z to (σ(l)z)_σ∈Σ(L) for l ∈ L and z ∈. Now fix an integer r>1. We define an [G]-isomorphismλ_r: (K_2r-1(𝒪_L) ⊕ H_-r^+(L)) ⊗≃(H_1-r^+(L) ⊕ H_-r^+(L)) ⊗≃(L ⊗_)^+ = L ⊗_.Here, the first isomorphism is ρ_r ⊕𝕀_H_-r^+(L) and the second isomorphism is induced by ι_r^-1. As above, there exist [G]-isomorphismsϕ_r: L ≃⟶(K_2r-1(𝒪_L) ⊕ H_-r^+(L)) ⊗.We now define mapsR_ϕ_r: R(G)⟶ ^× χ ↦ (λ_r ∘ϕ_r|_G(V_χ̌, L ⊗_))andA_ϕ_r^S: R(G)⟶ ^× χ ↦R_ϕ_r(χ) / L_S^∗(r, χ̌).We have A_ϕ_r^S(χ^σ) = A_ϕ_r^S(χ)^σ for all σ∈(). It is again easily seen that this conjecture does not depend on S and the choice of ϕ_r. In fact we have the following result. Fix an integer r>1 and a character χ. Then Gross' Conjecture <ref>holds if and only if Conjecture <ref> holds. Let k be an integer. If k is even, multiplication by (2 π i)^k inducesa [G]-isomorphismH_0^+(L) ⊗≃ H_-k^+(L) ⊗. 
Similarly, multiplication by (2 π i)^k-1inducesa [G]-isomorphism H_0^-(L) ⊗≃ H_1-k^+(L) ⊗.When k is odd, we likewise have [G]-isomorphisms H_0^+(L) ⊗≃ H_1-k^+(L) ⊗and H_0^-(L) ⊗≃ H_-k^+(L) ⊗ induced by multiplication by (2 π i)^k-1and (2 π i)^k, respectively. So for any k we obtain a [G]-isomorphism μ_k: H_0(L) ⊗≃(H_1-k^+(L) ⊕ H_-k^+(L)) ⊗. Moreover, we define an [G]-isomorphism π_L: L ⊗_≃(H_0^+(L) ⊕ H_-1^+(L)) ⊗(1, -i)⟶ H_0(L) ⊗, where the first isomorphism is induced by ι_1. It is clear that π_L agreeswith the map π_L in <cit.>.Bley and Burns define an explicit [G]-isomorphismϕ: L ≃⟶ H_0(L) ⊗.Building on a result of Fröhlich<cit.> on Galois Gauss sums, the authors <cit.>then show that _[G]((ϕ⊗ 1) ∘π_L^-1) ·ϵ(0) ∈ζ([G])^×. Now choose a [G]-isomorphism ϕ_1-r as in (<ref>). We defineϕ_r to be the composite map ϕ_r := (ϕ_1-r⊕𝕀_H_-r^+(L) ⊗) ∘μ_r ∘ϕ. Let a,b ∈ζ([G])^×. In the following we write a ∼ bif ab^-1∈ζ([G])^×. Under the identification in Remark <ref>we thus have to show that A_ϕ_r^S ∼ A_ϕ_1-r^S. We observe that λ_r ∘ϕ_r = ι_r^-1∘ (ρ_r ⊕𝕀_H_-r^+(L)) ∘ (ϕ_1-r⊕𝕀_H_-r^+(L)) ∘μ_r ∘ϕ, where we view each map as a [G]-isomorphism by extending scalars. This implies that R_ϕ_r=R_ϕ_1-r·_[G](ι_r^-1∘μ_r ∘ϕ)^♯. We now use (<ref>), the fact that c(χ^σ) = c(χ)^σfor all χ∈_(G) and σ∈(),the definition of ϵ and the functional equation(<ref>) to compute R_ϕ_r ∼R_ϕ_1-r·(_[G](ι_r^-1∘μ_r ∘π_L) ·ϵ(0)^-1)^♯∼R_ϕ_1-r·(_[G](ι_r^-1∘μ_r ∘π_L) ·ϵ(1-r)^-1)^♯∼R_ϕ_1-r·_[G](ι_r^-1∘μ_r ∘π_L)^♯·Λ^∗(r)^♯/Λ^∗(1-r) Now let v be an archimedean place of K. As Γ(k) is a non-zero rational number for every positiveinteger k and Γ(s) has simple poles with rational residues at s = k for every non-positiveinteger k, an easy computation shows that for v ∈ S_ one has ϵ_v(r)^♯/ϵ_v^∗(1-r)∼(π^(1 - 2r) n_χ)_χ∈_(G). Moreover, using Γ(s+1) = s Γ(s) and Γ(1/2) = √(π)we find that Γ((2k+1)/2) ∈√(π)·^× for every integer k. Thena computation shows that for v ∈ S_ one has ϵ_v(r)^♯/ϵ_v^∗(1-r)∼{[ (π^(1 - r) n_χ,v^+ - r n_χ,v^-)_χ∈_(G) 2 ∤ r; (π^(1 - r) n_χ,v^- - r n_χ,v^+)_χ∈_(G)2 | r. ]. The automorphism μ_r ∘π_L ∘ι_r^-1 on(H_1-r^+(L) ⊕ H_-r^+(L)) ⊗ is given up to sign by multiplicationby (2π)^r-1 and (2π)^r on the first and second direct summand, respectively.It follows that _[G](ι_r^-1∘μ_r ∘π_L)^♯=_[G](μ_r ∘π_L ∘ι_r^-1)^♯∼ {[ (π^(r-1) (|S_| n_χ + ∑_v ∈ S_ n_χ,v^+) + r (|S_| n_χ+ ∑_v ∈ S_ n_χ,v^-))_χ 2 ∤ r; (π^(r-1) (|S_| n_χ + ∑_v ∈ S_ n_χ,v^-) + r (|S_| n_χ+ ∑_v ∈ S_ n_χ,v^+))_χ2 | r. ]. If we compare this to (<ref>) and (<ref>) we find that _[G](ι_r^-1∘μ_r ∘π_L)^♯∼(∏_v ∈ S_∞ϵ_v(r)^♯/ϵ_v^∗(1-r))^-1. Finally, by the very definition of Λ(s) we haveΛ(s) = (∏_v ∈ S_∞ϵ_v(s)) · L_S_∞(s).We obtain R_ϕ_r ∼R_ϕ_1-r·_[G](ι_r^-1∘μ_r ∘π_L)^♯·(∏_v ∈ S_∞ϵ_v(r)^♯/ϵ_v^∗(1-r)) ·L_S_∞^∗(r)^♯/L_S_∞^∗(1-r)∼R_ϕ_1-r·L_S_∞^∗(r)^♯/L_S_∞^∗(1-r) which exactly means that A_ϕ_r^S_∞ = R_ϕ_r/L_S_∞^∗(r)^♯∼R_ϕ_1-r/L_S_∞^∗(1-r) = A_ϕ_1-r^S_∞. As both conjectures do not depend on the choice of S we are done. § EQUIVARIANT LEADING TERM CONJECTURESWe fix a finite Galois extension L/K with Galois group G and an odd prime p. Let r>1 be an integer. In this section we assume throughout that Schneider's conjecture <ref> holds. In particular, if S is a sufficiently large finite set of places of K as in <ref>, then the complex C_L,S(r) ∈𝒟(_p[G])constructed in <ref> is perfect by Proposition <ref>. §.§ Choosing a trivialization In this subsection we construct a trivialization of C_L,S(r). 
We first choose a [G]-isomorphismα_r: L →(H_1-r^+(L) ⊕ H_-r^+(L)) ⊗.For instance, we may take α_r = μ_r ∘ϕ, where μ_r and ϕare the isomorphisms (<ref>) and (<ref>), respectively. Moreover, we choose a [G]-isomorphismϕ_1-r: H_1-r^+(L) ⊗≃⟶ K_2r-1(𝒪_L) ⊗as in (<ref>).We let ψ_r := (ϕ_1-r, α_r) be the corresponding pair of [G]-isomorphisms. As ^1(𝒪_L,S, _p(r)) vanishes by Proposition <ref> and ^2(𝒪_L,S, _p(r)) is finite by Proposition <ref> (iv), we have an exact sequence of _p[G]-modules0 → H^1_(𝒪_L,S, _p(r)) → P^1(𝒪_L,S, _p(r)) → H^2_c(𝒪_L,S, _p(r)) → 0.Since _p[G] is semisimple, we may choose a _p[G]-equivariant splitting σ_r of this sequence. We now define a trivialization t(ψ_r,σ_r,S) of C_L,S(r) to be the composite of the following _p[G]-isomorphisms (note that we have H^ev(C_L,S(r)) = H^2(C_L,S(r)) and H^odd(C_L,S(r)) = H^3(C_L,S(r)) by Proposition <ref>):H^3(C_L,S(r)) ⊗__p_p⟶ (H_1-r^+(L) ⊕ H_-r^+(L)) ⊗_pα_r^-1⟶L ⊗__pexp_r^BK⟶P^1(𝒪_L,S, _p(r))σ_r⟶H^1_(𝒪_L,S, _p(r)) ⊕ H^2_c(𝒪_L,S, _p(r))(ch_r,1^(p))^-1⊕𝕀⟶(K_2r-1(𝒪_L,S) ⊗_p) ⊕ H^2_c(𝒪_L,S, _p(r))⟶(K_2r-1(𝒪_L) ⊗_p) ⊕ H^2_c(𝒪_L,S, _p(r))ϕ_1-r^-1⊕𝕀⟶(H_1-r^+(L) ⊗_p) ⊕ H^2_c(𝒪_L,S, _p(r))⟶H^2(C_L,S(r)) ⊗__p_p.Here, the unlabelled isomorphisms come from Propositions <ref>and <ref> (ii) and (<ref>). We now defineΩ_ψ_r, S := χ__p[G], _p(C_L,S(r), t(ψ_r,σ_r,S))∈ K_0(_p[G], _p)which is easily seen to be independent of the splitting σ_r (see <ref> and <ref>). §.§ The leading term conjecture at s=rWe are now in a position to formulate the central conjectures of this article. Recall the notation of the last subsection and in particular the pairψ_r = (ϕ_1-r, α_r). Define a [G]-isomorphismϕ_r: L ≃⟶(K_2r-1(𝒪_L) ⊕ H_-r^+(L)) ⊗by ϕ_r := (ϕ_1-r⊕𝕀_H_-r^+(L)) ∘α_r.Let L/K be a finite Galois extension of number fields with Galois group G and let r>1 be an integer. Let p be an odd prime.* The Tate–Shafarevich group ^1(𝒪_L,S, _p(r)) vanishes. * We have that A_ϕ_r^S belongs to ζ([G])^×. * We have an equality∂_p(A_ϕ_r^S)^♯ = - Ω_ψ_r, S.Part (i) and (ii) of Conjecture <ref> are equivalent to Schneider's conjecture<ref> and Gross' conjecture <ref>by Propositions <ref>and <ref>, respectively. Suppose that part (i) and part (ii) of Conjecture <ref> both hold. Then part (iii) does not depend on any of the choices made in the construction.Let S' be a second sufficiently large finite set of places of K. By embeddingS and S' into the union S ∪ S' we may and do assume that S ⊆ S'.By induction we may additionally assume that S' = S ∪{v },where v is not in S. In particular, v is unramified in L/K and v ∤ p. We compute A^S'_ϕ_r(χ) / A^S_ϕ_r(χ) = L_S^∗(r, χ̌) / L_S'^∗(r, χ̌) = ϵ_v(r, χ̌). On the other hand, by <cit.> we have an exact triangle _G_w^G RΓ_f(L_w, _p(r))[-1] ⟶ RΓ_c(𝒪_L,S', _p(r)) ⟶ RΓ_c(𝒪_L,S, _p(r)), where RΓ_f(L_w, _p(r)) is a perfect complex of _p[G_w]-modules which is naturally quasi-isomorphic to _p[G_w] [rr]^1 - ϕ_w N(v)^-r_p[G_w] with terms in degree 0 and 1.As Schneider's conjecture holds by assumption, the cohomology groupH^1_c(𝒪_L,S, _p(r)) does not depend on S by Proposition <ref>.Thus by the definition of C_L,S(r) we likewise have an exact triangle _G_w^G RΓ_f(L_w, _p(r))[-1] ⟶ C_L,S'(r) ⟶ C_L,S(r). We therefore may compute Ω_ψ_r, S -Ω_ψ_r, S'=χ__p[G], _p(_G_w^G RΓ_f(L_w, _p(r)), 0) =∂_p(ϵ_v(r))=∂_p(A_ϕ_r^S')^♯ - ∂_p(A_ϕ_r^S)^♯, where the last equality follows from (<ref>). This shows that Conjecture <ref> (iii) does not depend on S.Now suppose that α_r' is a second choice of [G]-isomorphismas in (<ref>). 
Let ϕ_r' := (ϕ_1-r⊕𝕀_H_-r^+(L)) ∘α_r'.Then we have (A_ϕ_r^S ·(A_ϕ_r'^S)^-1)^♯=(R_ϕ_r· R_ϕ_r'^-1)^♯ =_[G]((ϕ_r')^-1ϕ_r)=_[G]((α_r')^-1α_r). Letting ψ_r' := (ϕ_1-r, α_r') we likewise compute Ω_ψ_r, S -Ω_ψ_r', S=∂_p(_[G](α_r' α_r^-1)) = - ∂_p(_[G]((α_r')^-1α_r)). Finally, a similar computation shows that the conjecture does not depend on the choice ofϕ_1-r.It is therefore convenient to putTΩ(L/K,r)_p := - (∂_p(A_ϕ_r^S)^♯ + Ω_ψ_r, S)∈ K_0(_p[G], _p).Then Conjecture <ref> (iii) simply asserts that TΩ(L/K,r)_p vanishes. The reason for the minus sign will become apparent in the next subsection (see Theorem <ref>).Now choose an isomorphism j: ≃_p. By functoriality, this induces a map j_∗: K_0([G], ) → K_0(_p[G], _p).We define a trivialization t(r,S,j) of the complex C_L,S(r) as in <ref>, but we tensor with _p and replace the isomorphisms α_r and ϕ_1-r by ι_r ⊗_j_p and ρ_r^-1⊗_j_p (see (<ref>) and (<ref>)).Thus we obtain an objectΩ_r,S^j := χ__p[G], _p(C_L,S(r), t(r,S,j)) ∈ K_0(_p[G], _p).Then the argument in the proof of Proposition <ref> showsthe following result.Let j: ≃_p be an isomorphism. Suppose that part (i) and (ii)of Conjecture <ref> both hold. Then we have an equality TΩ(L/K,r)_p = j_∗( ∂̂(L_S^∗(r))) - Ω_r,S^j in K_0(_p[G], _p).§.§ The relation to the equivariant Tamagawa number conjectureWe now compare our invariant TΩ(L/K,r)_p to theequivariant Tamagawa number conjecture (ETNC) as formulated byBurns and Flach <cit.>. For an arbitrary integer r we set(r)_L := h^0((L))(r) which we regard as a motive defined over K and with coefficients in the semisimple algebra [G].The ETNC <cit.> for the pair ((r)_L, [G]) asserts that a certain canonical element TΩ((r)_L, [G]) in K_0([G], ) vanishes. Note that in this case the element TΩ((r)_L, [G]) is indeed well-defined as observed in <cit.>. If TΩ((r)_L, [G]) is rational, i.e. belongs to K_0([G], ), then by means of (<ref>) we obtain elementsTΩ((r)_L, [G])_p in K_0(_p[G], _p). We say that the `p-part' of the ETNC for the pair ((r)_L, [G]) holds if TΩ((r)_L, [G])_p vanishes. Let L/K be a finite Galois extension of number fields with Galois group Gand let r>1 be an integer. Then the following holds.* Conjecture <ref> (ii) holds if and only ifTΩ((r)_L, [G]) belongs to K_0([G],).* Suppose that part (i) and (ii) of Conjecture <ref> both hold. Then TΩ((r)_L, [G])_p = TΩ(L/K,r)_p.In particular, Conjecture <ref> (iii) and the p-part of the ETNC for the pair ((r)_L, [G]) are equivalent. Conjecture <ref>(ii) is equivalent to Gross' conjecture <ref>by Proposition <ref>. The latter conjectureis equivalent to the rationality of TΩ((1-r)_L, [G])by <cit.>. Finally,TΩ((1-r)_L, [G]) is rational if and only if TΩ((r)_L, [G])is rational by <cit.>. This proves (i).For (ii) we briefly recall some basic facts on virtual objects.If Λ is a noetherian ring, we write V(Λ) for the Picard category of virtual objectsassociated to the category (Λ). We fix a unit object 1_Λand write ⊠ for the bifunctor in V(Λ).For each object M there is an object M^-1, unique up to unique isomorphism,with an isomorphism τ_M: M ⊠ M^-1∼→1_Λ.If N is an object in (Λ), we write [N] for the associated object in V(Λ).More generally, if C^∙ belongs to 𝒟^(Λ), we write[C^∙] ∈ V(Λ) for the associated object (see <cit.>).We let V(_p[G], _p) be the Picard category associatedto the ring homomorphism _p[G] ↪_p[G] as defined in<cit.>. We recall that objects in V(_p[G], _p) are pairs(M, t), where M is an object in V(_p[G]) and t is an isomorphism_p ⊗__p M ≃1__p[G] in V(_p[G]). 
By <cit.> one has an isomorphism

π_0(V(ℤ_p[G], ℂ_p)) ≃ K_0(ℤ_p[G], ℂ_p),

where π_0(𝒫) denotes the group of isomorphism classes of objects of a Picard category 𝒫. For any motive M which is defined over K and admits an action of a finite-dimensional ℚ-algebra A, Burns and Flach <cit.> define an element Ξ(M) of V(A). In the case M = (r)_L and A = ℚ[G] one has

Ξ((r)_L) = [K_2r-1(𝒪_L) ⊗ ℚ]^-1 ⊠ [H_-r^+(L) ⊗ ℚ]^-1 ⊠ [L] ∈ V(ℚ[G]).

The regulator map (<ref>) and (<ref>) then induce an isomorphism in V(ℝ[G]):

ϑ_∞: ℝ ⊗_ℚ Ξ((r)_L) ≃ 1_ℝ[G].

Moreover, Burns and Flach construct for each prime p an isomorphism

ϑ_p: ℚ_p ⊗_ℚ Ξ((r)_L) ≃ [RΓ_c(𝒪_L,S, ℚ_p(r))]

in V(ℚ_p[G]) (see <cit.>). These data determine an element RΩ((r)_L, [G]) in K_0(ℤ[G], ℝ) and one has

TΩ((r)_L, [G]) = ∂̂(L_S_∞^∗(r)) + RΩ((r)_L, [G])

by definition. Now suppose that part (i) and (ii) of Conjecture <ref> both hold. Recall the definition of C_L,S(r). The isomorphisms τ_[N], where N = H^1_c(𝒪_L,S, ℤ_p(r)) and N = H_1-r^+(L) ⊗ ℤ_p, yield an isomorphism [C_L,S(r)] ≃ [RΓ_c(𝒪_L,S, ℤ_p(r))] in V(ℤ_p[G]). Now let j: ℂ ≃ ℂ_p be an isomorphism. Then the trivialization t(r,S,j) induces an isomorphism

ϑ_p,j: [ℂ_p ⊗^𝕃_ℤ_p C_L,S(r)] ≃ 1_ℂ_p[G]

in V(ℂ_p[G]). After extending scalars to ℂ_p, the isomorphisms (<ref>), ϑ_p^-1 and ϑ_∞ likewise induce an isomorphism

ϑ_p,j': [ℂ_p ⊗^𝕃_ℤ_p C_L,S(r)] ≃ 1_ℂ_p[G]

in V(ℂ_p[G]). Taking <cit.> into account, we see that the class of the pair ([C_L,S(r)], ϑ_p,j) in π_0(V(ℤ_p[G], ℂ_p)) maps to -Ω_r,S^j under the isomorphism (<ref>), whereas ([C_L,S(r)], ϑ_p,j') corresponds to j_∗(RΩ((r)_L, [G])). Unwinding the definitions of ϑ_p,j and ϑ_p,j', one sees that both isomorphisms almost coincide. The only difference rests on the following observation. Let Λ be a noetherian ring and let ϕ: P → P be an automorphism of a finitely generated projective Λ-module P. Consider the complex C: P ϕ→ P with terms in degrees 0 and 1. Then there are two isomorphisms [C] ≃ 1_Λ, induced by τ_[P] and by the acyclicity of C, respectively. Now for every finite place v ∈ S such an acyclic complex of ℤ_p[G_w]-modules appears in the construction of RΩ((r)_L, [G]). Namely, if v ∤ p this is the complex RΓ_f(L_w, ℤ_p(r)), which is canonically quasi-isomorphic to the complex

ℤ_p[G_w] ⟶ ℤ_p[G_w]

with terms in degrees 0 and 1, where the arrow is multiplication by 1 - ϕ_w N(v)^-r (see <cit.>). If v divides p, this complex appears as the rightmost complex in <cit.> and is given by

D_cris^L_w(ℚ_p(r)) ⟶ D_cris^L_w(ℚ_p(r)),

the arrow being 1 - ϕ_cris, where D_cris^L_w(ℚ_p(r)) := H^0(L_w, B_cris ⊗_ℚ_p ℚ_p(r)) naturally identifies with the maximal unramified subextension of L_w and ϕ_cris denotes the Frobenius on the crystalline period ring B_cris. Burns and Flach choose the isomorphisms induced by the corresponding τ's, whereas we have implicitly used the acyclicity of these complexes. For each such v this gives rise to an Euler factor ϵ_v(r) (for more details we refer the reader to <cit.>; though the authors consider a slightly different situation, the arguments naturally carry over to the case at hand). This discussion gives an equality

j_∗(RΩ((r)_L, [G])) = - Ω_r,S^j + j_∗(∂̂(∏_v ∈ S ϵ_v(r))).

Thus TΩ((r)_L, [G])_p and TΩ(L/K,r)_p have the same image under the injective map K_0(ℤ_p[G], ℚ_p) → K_0(ℤ_p[G], ℂ_p).

§ ANNIHILATING WILD KERNELS

§.§ Generalised adjoint matrices

Let G be a finite group and let p be a prime. Let 𝔐_p(G) be a maximal ℤ_p-order such that ℤ_p[G] ⊆ 𝔐_p(G) ⊆ ℚ_p[G]. Let e_1, …, e_t be the central primitive idempotents of ℚ_p[G]. Then each Wedderburn component ℚ_p[G]e_i is isomorphic to an algebra of m_i × m_i matrices over a skewfield D_i, and F_i := ζ(D_i) is a finite field extension of ℚ_p.
We denote the Schur index of D_i by s_i, so that [D_i : F_i] = s_i^2, and put n_i := m_i · s_i. We let 𝒪_i be the ring of integers in F_i. Choose n ∈ ℕ and let H ∈ M_n × n(𝔐_p(G)). Then we may decompose H into

H = ∑_i=1^t H_i ∈ M_n × n(𝔐_p(G)) = ⊕_i=1^t M_n × n(𝔐_p(G) e_i),

where H_i := He_i. The reduced characteristic polynomial f_i(X) = ∑_j=0^n_i n α_ij X^j of H_i has coefficients in 𝒪_i. Moreover, the constant term α_i0 is equal to __p[G](H_i) · (-1)^n_i n. We put

H_i^∗ := (-1)^n_i n + 1 · ∑_j=1^n_i n α_ij H_i^j-1, H^∗ := ∑_i=1^t H_i^∗.

Note that this definition of H^∗ differs slightly from the definition in <cit.>, but follows the conventions in <cit.>. Let 1_n × n denote the n × n identity matrix. We have H^∗ ∈ M_n × n(𝔐_p(G)) and

H^∗ H = H H^∗ = __p[G](H) · 1_n × n.

The first assertion is clear by the above considerations. Since f_i(H_i) = 0, we find that

H_i^∗ · H_i = H_i · H_i^∗ = (-1)^n_i n + 1 (-α_i0) = __p[G](H_i),

as desired (see also <cit.>).

§.§ Denominator ideals

We define

ℋ_p(G) := { x ∈ ζ(_p[G]) | xH^∗ ∈ M_n × n(_p[G]) for all H ∈ M_n × n(_p[G]) and all n ∈ ℕ },
ℐ_p(G) := ⟨ __p[G](H) | H ∈ M_n × n(_p[G]), n ∈ ℕ ⟩_ζ(_p[G]).

Since x · __p[G](H) = xH^∗H ∈ ζ(_p[G]) by Lemma <ref>, we have in particular

ℋ_p(G) · ℐ_p(G) = ℋ_p(G) ⊆ ζ(_p[G]).

Hence ℋ_p(G) is an ideal in the commutative ℤ_p-order ℐ_p(G). We will refer to ℋ_p(G) as the denominator ideal of the group ring _p[G]. The following result determines the primes p for which the denominator ideal ℋ_p(G) is best possible. We have ℋ_p(G) = ζ(_p[G]) if and only if p does not divide the order of the commutator subgroup G' of G. Furthermore, when this is the case, we have ℐ_p(G) = ζ(_p[G]). The first assertion is a special case of <cit.>. The second assertion follows from (<ref>).

§.§ A canonical fractional Galois ideal

Let L/K be a finite Galois extension of number fields with Galois group G and let r>1 be an integer. Let p ≠ 2 be a prime and let S be a finite set of places of K containing S_ ∪ S_∞ ∪ S_p. Recall the notation of <ref>. As p is odd, the _p[G]-module

Y_r := (H_1-r^+(L) ⊕ H_-r^+(L)) ⊗ _p

is projective. We also observe that P^1(𝒪_L,S, _p(r))_ does not depend on S by Lemma <ref> and the fact that K_2r-1(𝒪_w; _p) is finite for each w ∉ S_p(L). We let

E(α_r) := { γ ∈ __p[G](Y_r ⊗__p _p) | exp_r^BK α_r^-1 γ(Y_r) ⊆ P^1(𝒪_L,S, _p(r))_ },
ℰ(α_r) := ⟨ __p[G](γ) | γ ∈ E(α_r) ⟩_ζ(_p[G]) ⊆ ζ(_p[G]).

Now suppose that Schneider's conjecture <ref> holds. Then we have the short exact sequence (<ref>) and we may choose a _p[G]-equivariant splitting σ_r of this sequence:

σ_r: P^1(𝒪_L,S, _p(r)) ≃⟶ H^1_(𝒪_L,S, _p(r)) ⊕ H^2_c(𝒪_L,S, _p(r)).

We let σ_r^1: P^1(𝒪_L,S, _p(r)) ⟶ H^1_(𝒪_L,S, _p(r)) be the composite map of σ_r and the projection onto the first component. We put

F(ϕ_1-r, σ_r) := { δ ∈ __p[G](H_1-r^+(L) ⊗ _p) | δ ϕ_1-r^-1 (ch_r,1^(p))^-1 σ_r^1(P^1(𝒪_L,S, _p(r))_) ⊆ H_1-r^+(L) ⊗ _p },
ℱ(ϕ_1-r) := ⟨ __p[G](δ) | δ ∈ F(ϕ_1-r, σ_r) for some splitting σ_r ⟩_ζ(_p[G]) ⊆ ζ(_p[G]).

Recall that ϕ_r = (ϕ_1-r ⊕ 𝕀_H_-r^+(L)) ∘ α_r.
Let ϕ_r' := (ϕ_1-r⊕𝕀_H_-r^+(L)) ∘α_r'.Then we have a bijection E(α_r)⟶E(α_r') γ ↦ α_r' α_r^-1γ which implies ℰ(α_r) = __p[G](α_r (α_r')^-1) ℰ(α_r').Now (<ref>) implies that 𝒥_r^S does not depend onthe choice of α_r. The argument for ϕ_1-r is similar.Suppose that L/K is an extension of totally real fields and that r is even.Then the conjectures of Schneider and Gross both hold by Theorem <ref>and Theorem <ref>, respectively. We have thatH_1-r^+(L) vanishes by (<ref>) and thus ℱ (ϕ_1-r) = ζ(_p[G]). Moreover, we haveY_r = H_-r^+(L) ⊗_p and α_r = ϕ_r. We conclude thatwe have𝒥_r^S = ℰ(ϕ_r)·((A_ϕ_r^S)^-1)^♯⊆ζ(_p[G])unconditionally. We also have ((A_ϕ_r^S)^-1)^♯ = L_S^∗(r) ·_[G](ι_r ϕ_r^-1). Put d := [K:] and fix an isomorphism j: ≃_p. We observe that ι_r = (2π i)^-rμ_r ∘ι_0 and that μ_r (H_0(L) ⊗_p) = Y_r.We let E' := {γ' ∈__p[G](H_0(L) ⊗_p) |exp_r^BKι_0^-1γ'(H_0(L) ⊗_p)⊆ P^1(𝒪_L,S, _p(r))_} and obtain (substitute γ' by μ_r^-1ι_r ϕ_r^-1γμ_r) 𝒥_r^S = __p[G](j(2π i)^-r)^d·⟨__p[G](γ') |γ' ∈ E' ⟩_ζ(_p[G])· j(L_S^∗(r)). Now suppose in addition that L/K is abelian.The inverse of the Bloch–Kato exponential map and ι_0 ⊗_j _p induce a mapP^1(𝒪_L,S, _p(r)) ⟶ H_0(L) ⊗_p which in turn induces a regulator map 𝔰_S,r^(j): ⋀__p[G]^d P^1(𝒪_L,S, _p(r))⟶⋀__p[G]^d (H_0(L) ⊗_p) ≃_p[G]. It is then not hard to show that 𝒥_r^S = j(2π i)^-rd·Im(𝔰_S,r^(j)) · j(L_S^∗(r)) = j(i/π)^rd·Im(𝔰_S,r^(j)) · j(L_S^∗(r)), where the second equality holds, since p is odd and r is even. This shows thatin this case the canonical fractional Galois ideal 𝒥_r^Scoincides with the `Higher Solomon ideal' of Barrett <cit.>.When L/K is a CM-extension and r is odd, similar observations hold on minus parts. Let L/K be a Galois extension of totally real fields, but now we assume that r is odd. Then (<ref>) implies that H^+_-r(L) vanishes and that we have Y_r = H_1-r^+(L) ⊗_p. We assume that Schneider's conjecture holds so that the natural localization maps induce an isomorphism of _p[G]-modules H^1_(𝒪_L,S, _p(r)) ≃⟶ P^1(𝒪_L,S, _p(r)) by Propositions <ref> and <ref>(v). We let σ_r = σ_r^1 be the inverse of this isomorphism. We set τ_r := (ch_r,1^(p))^-1σ_r^1 exp_r^BK, which is an isomorphism L ⊗__p ≃ K_2r-1(𝒪_L,S) ⊗_p, and define G(ϕ_1-r, α_r) :={γ∈__p[G](Y_r ⊗__p_p) |ϕ_1-r^-1τ_r α_r^-1γ(Y_r) ⊆ Y_r },𝒢 (ϕ_1-r, α_r) :=⟨__p[G](γ) |γ∈ G(ϕ_1-r, α_r)⟩_ζ(_p[G])⊆ζ(_p[G]) =ℰ(α_r) ·ℱ(ϕ_1-r), where the last equality follows easily from the definitions. Clearly, the set G(ϕ_1-r, α_r) contains γ_r := α_r τ_r^-1ϕ_1-r and hence __p[G](γ_r) ∈𝒢 (ϕ_1-r, α_r). Conversely, for every γ∈ G(ϕ_1-r, α_r) we have that __p[G](γ_r^-1γ) ∈ℐ_p(G). In other words, we have an equality 𝒢(ϕ_1-r, α_r) ·ℐ_p(G) = __p[G](γ_r) ·ℐ_p(G). Define a _p[G]-automorphism of H_1-r^+(L) ⊗_p by ϑ_r^(j) := ρ_r τ_r ι_r^-1, where we extend scalars via the isomorphism j:≃_p on the right hand side. Noting that ℋ_p(G) is an ideal in ℐ_p(G), we compute ℋ_p(G) ·𝒥_r^S =ℋ_p(G) ·__p[G](γ_r) ·((A_ϕ_r^S)^-1)^♯ =ℋ_p(G) ·__p[G](ϑ_r^(j)) · j(L_S^∗(r)). If L/K is a CM-extension and r is even, similar observations again hold on minus parts.§.§ The annihilation conjecture Let L/K be a finite Galois extension of number fields with Galois group G and let r>1 be an integer. Let p≠2 be a prime and let S be a finite set of places of Kcontaining S_∪ S_∞∪ S_p. Suppose that Schneider's Conjecture <ref> andGross' Conjecture (Conjecture <ref>) both hold. For every x ∈__p[G](_p(r-1)_G_L) we have that __p[G](x) ·ℋ_p(G) ·𝒥_r^S⊆__p[G](K_2r-2^w(𝒪_L,S)_p). 
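To get a feeling for the module _p(r-1)_G_L appearing on the left hand side, consider the simplest case L = ℚ (a standard computation, recorded only for orientation; here G is trivial, so reduced norms and denominator ideals play no role). Put m := r-1 ≥ 1. The absolute Galois group G_ℚ acts on ℤ_p(m) through the m-th power of the cyclotomic character κ: G_ℚ → ℤ_p^×, which is surjective, whence

ℤ_p(m)_G_ℚ = ℤ_p / ⟨ u^m - 1 : u ∈ ℤ_p^× ⟩.

If (p-1) ∤ m, then u^m - 1 is a unit for suitable u and this module vanishes, whereas for (p-1) | m one obtains

ℤ_p(m)_G_ℚ ≃ ℤ/p^{1+v_p(m)},

where v_p denotes the p-adic valuation.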
The _p[G]-annihilator of _p(r-1)_G_L ≃ (_p / _p(1-r)^G_L)^∨ is generated by the elements 1 - ϕ_w N(v)^1-r, where v runs through the finite places of K with v ∉ S_ram ∪ S_p (cf. <cit.>). Moreover, if L/K is totally real and r is even, then _p(r-1)_G_L vanishes. If p does not divide the order of the commutator subgroup of G, then we have ℋ_p(G) = ζ(_p[G]) by Proposition <ref>. In particular, if G is abelian, then Conjecture <ref> simplifies to the assertion

__p[G](_p(r-1)_G_L) · 𝒥_r^S ⊆ __p[G](K_2r-2^w(𝒪_L,S)_p).

Taking Example <ref> and Remark <ref> into account, we see that our conjecture is compatible with <cit.>. The author also expects that for every x ∈ __p[G](_p(r-1)_G_L) we have that __p[G](x) · 𝒥_r^S ⊆ ℐ_p(G). Then (<ref>) implies that the left hand side in Conjecture <ref> belongs to ζ(_p[G]). Let S' be a second finite set of places of K such that S ⊆ S'. * If Conjecture <ref> holds for S, then it holds for S' as well. * If (<ref>) holds for S, then it holds for S' as well. Recall from Remark <ref> that the p-adic wild kernel does not depend on S. Thus (i) follows once we have shown that ℋ_p(G) · 𝒥_r^S' ⊆ ℋ_p(G) · 𝒥_r^S. By definition we have

𝒥_r^S' = 𝒥_r^S · (∏_v ∈ S' ∖ S ϵ_v(r)^-1)^♯.

However, each ϵ_v(r)^-1 = __p[G](1 - ϕ_w N(v)^-r) belongs to ℐ_p(G) as for v ∉ S we have v ∤ p and thus N(v) ∈ _p^×. This implies (ii) and also (<ref>) by (<ref>).

§.§ Noncommutative Fitting invariants

We briefly recall the definition and some basic properties of noncommutative Fitting invariants introduced in <cit.> and further developed in <cit.>. Let G be a finite group and let p be a prime. Let N and M be two ζ(_p[G])-submodules of a _p-torsion-free ζ(_p[G])-module. Then N and M are called nr-equivalent if there exists an integer n and a matrix U ∈ GL_n(_p[G]) such that N = __p[G](U) · M. We denote the corresponding equivalence class by [N]. We say that N is nr-contained in M (and write [N] ⊆ [M]) if for all N' ∈ [N] there exists M' ∈ [M] such that N' ⊆ M'. Note that it suffices to check this property for one N_0 ∈ [N]. We will say that x is contained in [N] (and write x ∈ [N]) if there is N_0 ∈ [N] such that x ∈ N_0. Now let M be a finitely presented _p[G]-module and let

_p[G]^a ⟶^h _p[G]^b ⟶ M ⟶ 0

be a finite presentation of M. We identify the homomorphism h with the corresponding matrix in M_a × b(_p[G]) and define S(h) = S_b(h) to be the set of all b × b submatrices of h if a ≥ b. In the case a=b we call (<ref>) a quadratic presentation. The Fitting invariant of h over _p[G] is defined to be

__p[G](h) = [0] if a < b; __p[G](h) = [⟨ __p[G](H) | H ∈ S(h) ⟩_ζ(_p[G])] if a ≥ b.

We call __p[G](h) a (noncommutative) Fitting invariant of M over _p[G]. One defines __p[G]^max(M) to be the unique Fitting invariant of M over _p[G] which is maximal among all Fitting invariants of M with respect to the partial order “⊆”. If M admits a quadratic presentation h, one also puts __p[G](M) := __p[G](h) which is independent of the chosen quadratic presentation. The following result is <cit.>. If M is a finitely presented _p[G]-module, then

ℋ_p(G) · __p[G]^max(M) ⊆ __p[G](M).

Let C^∙ ∈ 𝒟^(_p[G]) be a perfect complex such that H^i(C^∙) is finite for all i ∈ ℤ and vanishes if i ≠ 2,3. Choose ℒ ∈ ζ(_p[G])^× such that ∂_p(ℒ) = χ__p[G], _p(C^∙,0). Then we have an equality

__p[G]^max((H^2(C^∙))^∨)^♯ = __p[G]^max(H^3(C^∙)) · ℒ.

This is an obvious reformulation of <cit.> (with a shift by 2).
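For orientation, it may help to record what these invariants amount to in the simplest commutative situation; the following example is standard and only illustrative. If G is abelian, then ζ(_p[G]) = _p[G] and the reduced norm of a square matrix over _p[G] is simply its determinant, so that the Fitting invariant of h is the class of the ideal generated by the b × b minors of h, that is, of the classical (zeroth) Fitting ideal of M. In particular, if x ∈ _p[G] is a non-zerodivisor and M = _p[G]/(x), then multiplication by x yields a quadratic presentation

_p[G] ⟶^x _p[G] ⟶ M ⟶ 0

and the Fitting invariant of M is the class of the principal ideal (x), which in this case coincides with the annihilator ideal of M. This is the commutative prototype of the relation between maximal Fitting invariants and annihilators that the preceding result expresses in general via the denominator ideal ℋ_p(G).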
§.§ The relation to the leading term conjecture

The aim of this subsection is to prove the following theorem which describes the relation of Conjecture <ref> to the leading term conjecture at s=r and thus also to the ETNC for the pair ((r)_L, [G]) by Theorem <ref>. Let L/K be a finite Galois extension of number fields with Galois group G. Let r>1 be an integer and let p be an odd prime. Suppose that the leading term conjecture at s=r (Conjecture <ref>) holds for L/K at p. Then Conjecture <ref> is also true. Fix an odd prime p and suppose that L is abelian over ℚ. Then the leading term conjecture at s=r and Conjecture <ref> both hold for almost all r>1 (and all even r if L is totally real). As L/ℚ is abelian, the ETNC for the pair ((r)_L, [G]) holds for all r ∈ ℤ by work of Burns and Flach <cit.>. Now fix an odd prime p. Then Schneider's conjecture holds for almost all r by Remark <ref> and for all even r>1 if L is totally real by Theorem <ref>. Thus the result follows from Theorem <ref> and Theorem <ref>. Recall the notation from <ref>. Let γ ∈ E(α_r), δ ∈ F(ϕ_1-r, σ_r) and x ∈ __p[G](_p(r-1)_G_L). We have to show that

ℋ_p(G) · __p[G](x) · __p[G](γ) · __p[G](δ) · ((A_ϕ_r^S)^-1)^♯ ⊆ __p[G](K_2r-2^w(𝒪_L,S)_p).

As the reduced norm is continuous for the p-adic topology, we may and do assume that γ and δ are _p[G]-automorphisms (and not just endomorphisms). By the definition of E(α_r) we therefore get an injection

exp_r^BK α_r^-1 γ: Y_r ⟶ P^1(𝒪_L,S, _p(r))_tf,

which we may lift to an injection η_r: Y_r ⟶ P^1(𝒪_L,S, _p(r)), since Y_r is a projective _p[G]-module. Likewise, by the definition of F(ϕ_1-r, σ_r) we obtain a map

δ ϕ_1-r^-1 (ch_r,1^(p))^-1 σ_r^1: P^1(𝒪_L,S, _p(r))_tf ⟶ H_1-r^+(L) ⊗ _p.

We may therefore define a _p[G]-homomorphism

ξ_r: Y_r ⟶ (H_1-r^+(L) ⊗ _p) ⊕ H_c^2(𝒪_L,S, _p(r))

such that the projection onto H_c^2(𝒪_L,S, _p(r)) is the composition of η_r and the natural map P^1(𝒪_L,S, _p(r)) → H_c^2(𝒪_L,S, _p(r)), whereas the projection onto H_1-r^+(L) ⊗ _p is given by the composite map of (<ref>) and (<ref>). We then have an equality

ξ_r ⊗__p _p = (δ ⊕ 𝕀_H^2_c(𝒪_L,S, _p(r))) t(ψ_r, σ_r, S) γ

which implies that ξ_r is injective. The perfect complex C_L,S(r) is isomorphic in 𝒟(_p[G]) to a complex A → B of _p[G]-modules of finite projective dimension, where A is placed in degree 2. Choose n ∈ ℕ such that p^n γ(Y_r) ⊆ Y_r. As Y_r is projective, we may construct a commutative diagram of _p[G]-modules with exact rows and columns, whose three rows are

Y_r = Y_r ⟶^0 Y_r = Y_r,
(H_1-r^+(L) ⊗ _p) ⊕ H_c^2(𝒪_L,S, _p(r)) ↪ A ⟶ B ↠ H_c^3(𝒪_L,S, _p(r)) ⊕ Y_r,
coker(ξ_r) ↪ A' ⟶ B' ↠ H_c^3(𝒪_L,S, _p(r)) ⊕ coker(p^n γ);

here the vertical maps from the first row to the second are the injections ξ_r, the inclusion of Y_r into A, the inclusion of Y_r into B and p^n γ (into the second summand), while the vertical maps from the second row to the third are the canonical surjections. The arrow A' → B' defines a complex C' in 𝒟^(_p[G]) (where we place A' in degree 2; note that C' depends on a lot of choices which we suppress in the notation). The cohomology groups of this complex are finite and vanish outside degrees 2 and 3. Thus the zero map is the unique trivialization of this complex. Likewise the arrow Y_r ⟶^0 Y_r defines the complex Y_r{2,3} in 𝒟^(_p[G]) and we choose t_δ,n := p^n (δ^-1 ⊕ 𝕀_H_-r^+(L) ⊗ _p) as a trivialization. Using equation (<ref>) we compute

-∂_p(A_ϕ_r^S)^♯ = χ__p[G], _p(C_L,S(r), t(ψ_r,σ_r,S))
 = χ__p[G], _p(C',0) + χ__p[G], _p(Y_r{2,3}, t_δ,n)
 = χ__p[G], _p(C',0) + ∂_p(__p[G](t_δ,n)),

where the first equality is Conjecture <ref>. Now Lemma <ref> implies the first equality in the following computation.
__p[G]^max((coker ξ_r)^∨)^♯
 = __p[G]^max(H_c^3(𝒪_L,S, _p(r)) ⊕ coker(p^n γ)) · ((A_ϕ_r^S)^♯ __p[G](t_δ,n))^-1
 ⊇ __p[G]^max(H_c^3(𝒪_L,S, _p(r))) · __p[G](coker(p^n γ)) · ((A_ϕ_r^S)^♯ __p[G](t_δ,n))^-1
 = __p[G]^max(H_c^3(𝒪_L,S, _p(r))) · __p[G](p^n γ) · ((A_ϕ_r^S)^♯ __p[G](t_δ,n))^-1
 = __p[G]^max(H_c^3(𝒪_L,S, _p(r))) · __p[G](γ) · __p[G](δ) · ((A_ϕ_r^S)^♯)^-1
 ∋ __p[G](x) · __p[G](γ) · __p[G](δ) · ((A_ϕ_r^S)^♯)^-1.

The inclusion follows from <cit.>. The second equality holds, since p^n γ: Y_r → Y_r is a quadratic presentation of coker(p^n γ). The definition of t_δ,n gives the third equality. Finally, the _p[G]-module H_c^3(𝒪_L,S, _p(r)) is cyclic by Proposition <ref> (ii) and thus __p[G](x) belongs to its maximal Fitting invariant by <cit.>. As __p[G]((coker ξ_r)^∨)^♯ equals __p[G](coker ξ_r), Theorem <ref> implies that

ℋ_p(G) · __p[G](x) · __p[G](γ) · __p[G](δ) · ((A_ϕ_r^S)^-1)^♯ ⊆ __p[G](coker ξ_r).

However, the composition of ξ_r and the projection onto H_c^2(𝒪_L,S, _p(r)) factors through P^1(𝒪_L,S, _p(r)) via η_r and thus there is a surjection of coker(ξ_r) onto

coker(P^1(𝒪_L,S, _p(r)) → H_c^2(𝒪_L,S, _p(r))) ≃ Ш^2(𝒪_L,S, _p(r)) ≃ K_2r-2^w(𝒪_L,S)_p,

where the last isomorphism is Proposition <ref> (iv). Now (<ref>) and (<ref>) imply (<ref>). The proof also shows that Conjecture <ref> implies the containment (<ref>).

§.§ The relation to a conjecture of Burns, Kurihara and Sano

Let L/K be an abelian extension of number fields with Galois group G and let r be an integer. In <cit.> the authors define a certain ideal in terms of `generalized Stark elements of weight -2r' (in particular, this involves the equivariant L-value L_S^∗(r)) and conjecture that this ideal coincides with the initial Fitting ideal of H^2_ét(𝒪_L,S, _p(1-r)). In this final subsection, we will explain the relation of their conjecture to our Conjecture <ref> if r>1. So let us henceforth assume that r>1. Fix a second finite set T of places of K, which is disjoint from S. Following <cit.> we define RΓ_T(𝒪_L,S, _p(1-r)) to be a complex that lies in an exact triangle in the derived category 𝒟(_p[G]) of the form

RΓ_T(𝒪_L,S, _p(1-r)) → RΓ(𝒪_L,S, _p(1-r)) → ⊕_w ∈ T(L) RΓ(L(w), _p(1-r)),

where the second arrow is induced by the natural morphism. For each i ∈ ℤ we abbreviate H^i RΓ_T(𝒪_L,S, _p(1-r)) by H^i_T(𝒪_L,S, _p(1-r)). The conjecture of Burns, Kurihara and Sano <cit.> concerns the initial Fitting ideal and thus also the annihilator ideal of the finite cohomology group H^2_T(𝒪_L,S, _p(1-r)). In order to relate their conjecture to ours, we have to determine the relation between this cohomology group and the wild kernel K_2r-2^w(𝒪_L,S)_p. Artin-Verdier duality and the triangle (<ref>) give an exact triangle in 𝒟(_p[G]) of the form

⊕_w ∈ T(L) RΓ(L(w), _p(1-r)) → C_S,T^∙(r) → D_S^∙(r)

(see <cit.> or <cit.>), where we have set

C_S,T^∙(r) := RΓ_T(𝒪_L,S, _p(1-r))[1] ⊕ (H_r^+(L) ⊗ _p)[-1];
D_S^∙(r) := RHom__p(RΓ_c(𝒪_L,S, _p(r)), _p)[-2].

For any _p-module M we write M^∗ for its _p-linear dual. We henceforth assume that Schneider's conjecture holds. Then Proposition <ref> implies that H^1_c(𝒪_L,S, _p(r)) ≃ H_-r^+(L) ⊗ _p is _p[G]-projective. Thus the complex D_S^∙(r) is acyclic outside degrees 0 and 1 and we have canonical isomorphisms of _p[G]-modules

H^0(D_S^∙(r))_tor ≃ H^3_c(𝒪_L,S, _p(r))^∨,
H^0(D_S^∙(r))_tf ≃ H^2_c(𝒪_L,S, _p(r))^∗,
H^1(D_S^∙(r)) ≃ (H^2_c(𝒪_L,S, _p(r))_tor)^∨ ⊕ (H_r^+(L) ⊗ _p).

In particular, the triangle (<ref>) yields a right exact sequence of _p[G]-modules

⊕_w ∈ T(L) _p(r-1)_G_w → H^2_T(𝒪_L,S, _p(1-r)) → (H^2_c(𝒪_L,S, _p(r))_tor)^∨ → 0.

Moreover, we have a surjection H^2_c(𝒪_L,S, _p(r)) ↠ K_2r-2^w(𝒪_L,S)_p by Proposition <ref>(iv).
Thus <cit.> and our conjecture predict annihilators of the torsion subgroup and a finite quotient of H^2_c(𝒪_L,S, _p(r)), respectively. In order to compare the two conjectures we will hence assume that H^2_c(𝒪_L,S, _p(r)) is finite so that we have an inclusion

__p[G](H^2_T(𝒪_L,S, _p(1-r)))^♯ ⊆ __p[G](K_2r-2^w(𝒪_L,S)_p).

By Proposition <ref>(v) and (<ref>) this implies that L is totally real and that r is odd. Since H_-r^+(L) vanishes in this case, the wedge product which occurs in <cit.> is empty (see <cit.>) so that this conjecture predicts that the initial Fitting ideal of H^2_T(𝒪_L,S, _p(1-r)) is generated by an element η_L/K,S,T(r) as defined in <cit.>. By its very definition (and taking <cit.> into account) this element is given by

η_L/K,S,T(r)^♯ = (∏_v ∈ T (1 - ϕ_w N(v)^1-r)) · __p[G](ϑ_r^(j)) · j(L_S^∗(r)).

Now the inclusion (<ref>), Remark <ref> and Example <ref> imply the following result. Let L/K be an abelian extension of totally real fields and let r>1 be an odd integer. Assume that Schneider's Conjecture <ref> and Gross' Conjecture <ref> both hold. Then <cit.> for all T implies Conjecture <ref>. The conjecture of Burns, Kurihara and Sano indeed involves the choice of a certain idempotent ε of _p[G]. Under the hypotheses of Proposition <ref> it suffices to consider their conjecture for ε = 1 (which implies their conjecture for all admissible idempotents). However, we point out that in general 1 is not an admissible idempotent. For instance, this happens if L/K is a CM-extension. If we further assume that r is even, then e^- := (1-c)/2 is admissible, where c ∈ G denotes complex conjugation. In this case e^- H^2_c(𝒪_L,S, _p(r)) is finite and one can formulate an analogue of Proposition <ref> on minus parts.
"authors": [
"Andreas Nickel"
],
"categories": [
"math.NT",
"math.KT",
"11R42, 19F27, 11R70"
],
"primary_category": "math.NT",
"published": "20170327140631",
"title": "Annihilating wild kernels"
} |
Statistical and Computational Tradeoff in Genetic Algorithm-Based Estimation

Manuel Rizzo* and Francesco Battaglia
Department of Statistical Sciences, Sapienza University of Rome, Piazzale Aldo Moro 5, I-00185 Rome, Italy
*Corresponding author. Email: [email protected]
=================================================================================================================================================================================================================

When a Genetic Algorithm (GA), or a stochastic algorithm in general, is employed in a statistical problem, the result obtained is affected both by variability due to sampling, which refers to the fact that only a sample is observed, and by variability due to the stochastic elements of the algorithm. This topic can easily be set in the framework of the statistical and computational tradeoff question, crucial in recent problems, for which statisticians must carefully set the statistical and computational parts of the analysis, taking into account some resource or time constraints. In the present work we analyze estimation problems tackled by GAs, for which the variability of the estimates can be decomposed into the two sources of variability, considering some constraints in the form of cost functions, related to both data acquisition and the runtime of the algorithm. Simulation studies will be presented to discuss the statistical and computational tradeoff question.

Keywords: Evolutionary algorithms, Convergence rate, Analysis of variability, Least absolute deviation, Autoregressive model, g-and-k distribution

§ INTRODUCTION

In recent years the huge growth in the size of datasets has introduced many novel problems in the statistical field. In fact the need to carry out successful statistical analyses must now be accompanied by a careful setting of the computational part, which may include the choice of computational methodology and must consider some resource or time constraints, which are crucial in real problems. Questions like these are known in the literature as statistical and computational tradeoff (or time-data tradeoff) problems, which aim at balancing and optimizing statistical efficiency and computational complexity. This is a very general topic, so many different methodologies have been proposed in the literature to deal with many different applications. Chandrasekaran and Jordan <cit.> considered a class of parameter estimation problems for which they studied a theoretical relationship, in the form of a convex relaxation, between the number of statistical observations, the runtime of the selected algorithm and the statistical risk. An algebraic hierarchy of these convex relaxations is built to successfully achieve the time-data tradeoff for different algorithms. Dillon and Lebanon <cit.> studied the consistency of intractable Stochastic Composite Likelihood estimators, whose formula also depends on parameters related to computational elements. Therefore they aimed at balancing statistical accuracy and computational complexity. Shender and Lafferty <cit.> studied the tradeoff in Ridge Regression models introducing sparsity in the sample covariance matrix. Wang et al. <cit.>, in a Sparse Principal Component Analysis framework, addressed the question of whether it is possible to find an estimator that is computable in polynomial time, and then analyzed its minimax optimal rate of convergence.
Several other applications can be found in <cit.>. In the present paper we address the statistical and computational tradeoff discussion in complex model-building problems to be optimized by Genetic Algorithms (GAs), in a purely statistical approach. They have been widely employed in statistical applications <cit.>, but the question could also be discussed considering other evolutionary methods. The key point, in fact, is that algorithms including stochastic elements introduce an additional source of variability in the estimation process along with the variability induced by sampling: the statistical efficiency of the estimates will be evaluated by considering the effect of both of these components. The tradeoff will finally be discussed by introducing in the analysis some cost functions related to both data acquisition and the runtime of the algorithm. The applications considered are a Linear Regression model to be estimated by Least Absolute Deviation, the simultaneous identification and parameter estimation of an Autoregressive model, and a g-and-k distribution maximum likelihood estimation, a kind of so-called intractable likelihood problem. The paper is organized as follows: Section <ref> describes standard GAs and their application in parameter estimation problems; in Section <ref> the tradeoff question is discussed, along with a literature review on GA variability quantification; Section <ref> illustrates the selected applications, for which the tradeoff is analyzed; the last section includes final comments and future developments.

§ GENETIC ALGORITHMS FOR MODELS BUILDING

§.§ Overview of the Algorithm

Genetic Algorithms (GAs) are among the most important methodologies in the Evolutionary Computation field, because of their simplicity and versatility of application. They were introduced by Holland <cit.> as a method to explain the adaptive processes of natural systems, using metaphors from biology and genetics, but soon they found application in complex optimization problems. The complexity may be due to the objective function, which may be non-differentiable for example, or to the search space, possibly very large or irregular. In this framework the goal is to find the global optimum of a function, called fitness, which measures the goodness of solutions. In the metaphor every solution is represented by an individual, coded in a string called a chromosome, whose elements represent the genetic heritage of the individual (genes). In the standard binary coding case genes can only take the values 0 or 1 (bits). In every iteration (or generation, in the terminology of GAs) the algorithm considers a population of individuals of fixed size N that evolves by use of the genetic operators of selection, crossover and mutation. The selection randomly chooses solutions for the subsequent steps, usually proportionally to their fitness value; by crossover two solutions are allowed to combine, with a fixed rate pC, exchanging part of their genes and creating two new individuals; lastly, the mutation step allows every bit to flip its value from 0 to 1, or vice versa, with a fixed probability pM, providing a further exploration of different areas of the search space. The resulting population replaces the previous one, and the flow of generations stops when a certain condition is met, for example a fixed number of generations. It is also possible, adopting the elitist strategy, to maintain the best individual found up to the current generation, in spite of the effect of the genetic operators.
In this case, the user interested in optimization may consider just the flow of these solutions. Elitism is crucial as far as convergence is concerned. In fact most convergence results have been obtained for elitist GAs, generally by use of Markov Chain theory. A fundamental theorem by Rudolph <cit.>, easily adaptable to a wide class of Evolutionary Algorithms (EAs), considers an elitist GA with pM>0 and models X_g, namely the best solution found up to generation g, by a Markov Chain. It states that under the above assumptions the sequence D_g = f^* - f(X_g), where f^* is the global optimum and f(X_g) the fitness of the best solution found up to generation g, is a non-negative supermartingale which converges almost surely to zero. Generalizations have been proposed to extend Rudolph's approach to time-varying mutation and/or crossover rates by modeling the GA as a non-homogeneous Markov Chain <cit.>. Reference <cit.> also includes a review of other ways of studying GA convergence by Markov Chain modeling. In our paper we employ a simple GA, so we shall mainly refer to Rudolph's theorem of convergence, which also allows the framework to be easily generalized to other EAs. It is worth noting that this theorem just states the convergence of a GA, but it gives no information about its rate.

§.§ Statistical Parameters Estimation

There are many statistical problems where complexity is high, for example outlier detection, cluster analysis or design of experiments. Here we consider parametric model-building problems, where the function to be maximized, a likelihood for example, is hardly tractable, and standard methods may fail in finding good estimates. In this situation a sample y is generated from a distribution known up to a parameter vector θ. The inference on θ is made by maximizing an objective function that depends on both θ and y. We shall now specify the coding of GA solutions and the structure of the fitness function. Even if floating-point GAs have been employed in the literature to deal with real-parameter optimization, we shall employ the simple binary-coded GA described above. The standard rule to binary code a real parameter θ in the real interval [a,b] is:

θ = a + (b-a)/(2^H-1) ∑_j=1^H 2^j-1 x_j,

where H is the number of genes considered and x_j is the j-th bit. If the interest is focused on a vector θ=(θ_1,...,θ_k) then a chromosome of length M = k · H includes the coding of each component. The length H of each group of genes is constant, but the coding interval [a,b] can vary for each parameter. Since we are considering a kind of discretization of a continuous search space, we aim at building a grid fine enough that the fitness function is adequately smooth on it, so that the related loss of information is negligible. The fitness f is proportional to the above-cited objective function, say g(θ;y). We shall consider a scaled exponential transformation of g:

f(ψ) = exp{g(θ;y)/τ},

where ψ is the chromosome and τ is a problem-dependent constant. This kind of scaling procedure allows the shape of the fitness function to be modified without changing the ranking of solutions. As far as the choice of genetic operators is concerned, we shall adopt: roulette wheel selection, to select chromosomes proportionally to their fitness value; single-point crossover, so that a chromosome can exchange up to k-1 parameters in every recombination; the standard bit-flip mutation strategy. Lastly, elitism is adopted to guarantee convergence of the procedure.
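To fix ideas, the following minimal R sketch (our own illustration, not the code used for the simulations reported in this paper; the population representation and function names are assumptions) implements the decoding rule above and one elitist generation with the operators just listed.

decode <- function(bits, a, b, H) {
  # theta = a + (b - a)/(2^H - 1) * sum_j 2^(j-1) x_j, one parameter per block of H bits
  k <- length(bits) / H
  sapply(seq_len(k), function(i) {
    x <- bits[((i - 1) * H + 1):(i * H)]
    a[i] + (b[i] - a[i]) / (2^H - 1) * sum(2^(seq_len(H) - 1) * x)
  })
}
# example: decode a 3-parameter chromosome with H = 8 bits per parameter
decode(rbinom(24, 1, 0.5), a = rep(-2, 3), b = rep(2, 3), H = 8)

ga_generation <- function(pop, fit, pC = 0.7, pM = 0.1) {
  # pop: N x M matrix of 0/1 bits; fit: positive fitness of a bit vector,
  # e.g. fit <- function(bits) exp(g(decode(bits, a, b, H)) / tau)
  N <- nrow(pop); M <- ncol(pop)
  f <- apply(pop, 1, fit)
  elite <- pop[which.max(f), ]                          # elitism: keep best individual
  sel <- pop[sample(N, N, replace = TRUE, prob = f / sum(f)), , drop = FALSE]
  for (i in seq(1, N - 1, by = 2)) {                    # single-point crossover
    if (runif(1) < pC) {
      cut <- sample(M - 1, 1)
      tmp <- sel[i, (cut + 1):M]
      sel[i, (cut + 1):M] <- sel[i + 1, (cut + 1):M]
      sel[i + 1, (cut + 1):M] <- tmp
    }
  }
  mut <- matrix(runif(N * M) < pM, N, M)                # bit-flip mutation
  sel[mut] <- 1 - sel[mut]
  sel[1, ] <- elite                                     # reinsert the elite individual
  sel
}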
§ PROBLEM DESCRIPTION

§.§ Variability Decomposition

A statistical parameter estimate is naturally subject to sampling variability; in fact if we make inference using two different samples we may obtain two possibly different results. This issue has been examined in all approaches to statistical inference; in the present work we adopt a classical approach, in which sampling variability is closely related to the variability of the selected estimators. When GAs are employed in the estimation process a new form of variability is introduced in the analysis, due to the stochastic nature of the algorithm. It refers to elements like the starting population, the selection mechanism, the mutation and crossover rates or the random choice of the cutting point of crossover. As a result of this, if we run a GA several times using the same sample we may obtain different results. The total variability of a GA estimate can be easily decomposed into these two forms of variability, as shown in <cit.> for the univariate case. We shall adopt the following notation: y is the sample of observations, θ the parameter of the generating statistical model, θ(y) the best theoretical value, which cannot be computed in practice, and θ^*(y) the result of the optimization obtained via GA, which is an approximation of θ(y) and depends on the observed sample as well. The error of a GA estimate, under the assumption of independence between the data-generating model and the random seeds of the GA, can be decomposed as:

θ^*(y) - θ = [θ(y) - θ] + [θ^*(y) - θ(y)]

The first term in square brackets depends on the consistency of the estimates, while the second depends on the convergence of the GA. Both must be ensured to allow the error of the GA estimate to converge to zero in probability. A similar issue has been analyzed in <cit.>, where a Threshold Accepting algorithm is employed for a GARCH model estimation problem. As long as we focus on models indexed by a vector θ=(θ_1,...,θ_k), in practice we shall consider the corresponding multiparametric version of (<ref>). This means that we must define two random vectors θ(y) and θ^*(y), which are affected, respectively, by sampling variability and GA variability. While θ(y) is defined as the best statistical estimator, the GA component, for which the sample y is held fixed, needs to be defined specifically for our own estimation problem. If an elitist strategy is employed then we define the random vector θ^*(g)(y) as the best estimate obtained up to generation g, which corresponds to the best individual of generation g. The idea is to evaluate GA variability considering the behaviour of this random vector among GA runs, and the key is Rudolph's theorem. In fact it states that if, along with the elitist strategy, the mutation rate is greater than zero, then the sequence θ^*(g)(y) (g=0,1,...) will converge almost surely to the optimum, that is θ(y) in our case. This means that when g increases each GA run is likely to approach the optimum, so the variability between runs tends to decrease as well. So, in our framework, evaluating the variability of the GA coincides with studying the convergence rate of the algorithm. Having defined both random vectors θ(y) and θ^*(y) we shall define their variance-covariance matrices, respectively Σ_S and Σ_GA, to relate to (<ref>). The generic (i,j) elements of these matrices are:

σ^S_ij = 𝔼_S[(θ_i(y) - θ_i)(θ_j(y) - θ_j)], i,j=1,...,k,
σ^*_ij = 𝔼_GA[(θ_i^*(y) - θ_i(y))(θ_j^*(y) - θ_j(y))], i,j=1,...,k.

So σ^S_ij and σ^*_ij measure the dependence between θ_i and θ_j induced, respectively, by sampling and by the GA. As long as we need a scalar summary of these matrices, a possible choice is to consider the traces only, a strategy generally adopted in the literature. This makes good sense in an optimization framework, because the optimum is reached when the variances σ^S_ii and σ^*_ii (i=1,...,k) go to zero, with no practical interest in the covariances. So, if Σ_TOT is the total variance-covariance matrix, then, using the linearity of the trace, and under the same independence assumption of (<ref>), we can write:

tr(Σ_TOT) = tr(Σ_S) + tr(Σ_GA).
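As a purely illustrative R sketch of this decomposition (a crude noisy optimizer stands in for the GA, and all numerical settings below are assumptions), the two components can be approximated separately:

set.seed(2)
theta <- 1; n <- 100                                     # true parameter and sample size
noisy_opt <- function(y) mean(y) + rnorm(1, sd = 0.05)   # stand-in for a GA run
th_y  <- replicate(200, mean(rnorm(n, theta)))           # theta(y) over repeated samples
y0    <- rnorm(n, theta)                                 # one fixed sample
th_ga <- replicate(200, noisy_opt(y0))                   # theta*(y) over runs, y fixed
var(th_y)               # approximates the sampling component
var(th_ga)              # approximates the GA component
var(th_y) + var(th_ga)  # approximate total variance under independence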
§.§ Tradeoff Problem

Now we shall set the variability analysis of subsection <ref> in the framework of the statistical and computational tradeoff. Assuming that both the statistical estimator and the GA configuration are fixed, we must figure out how to optimally balance statistical accuracy and GA efficiency. As long as we consider estimators having the property of consistency, statistical accuracy can naturally be represented by the size n of the sample y=(y_1,...,y_n), because as n increases the precision of the estimators increases (and, correspondingly, their variability decreases). This happens under some regularity conditions (see <cit.>, in the case of Maximum Likelihood Estimators). Concerning GA efficiency, we refer to Rudolph's theorem of convergence. Informally, a GA converges when g tends to infinity, but it is worth noting that in every GA generation each of the N chromosomes in the population is evaluated on the basis of the fitness function. So, instead of considering the number of generations, we represent the GA efficiency component by the number of fitness function evaluations V, also because this is usually the most computationally expensive step. Since we want to figure out if both tr(Σ_S) and tr(Σ_GA) go to zero, we shall study their behaviour when n →∞ and V →∞. Let us consider two functions f(n) and h(V) for which, respectively, f(n) →∞ when n →∞ and h(V) →∞ when V →∞. If we consider a consistent statistical estimator and the assumptions of Rudolph's theorem are fulfilled, then we can write tr(Σ_S) = 𝒪([f(n)]^-1) and tr(Σ_GA) = 𝒪([h(V)]^-1). In that case:

tr(Σ_TOT) = tr(W_S) · 1/f(n) + tr(W_GA) · 1/h(V),

where the matrices W_S and W_GA are constant and composed of elements that depend, respectively, on the statistical model and on the GA. It is possible that the sample size n may also have an effect on W_GA, because the fitness function changes as a consequence. For this reason we include n in our fitness scaling procedure as the constant τ in (<ref>). In this way we can strongly restrict the effect of n on the behaviour of the algorithm and describe the total variability of a GA estimate by considering decomposition (<ref>). To specify the total effort of a GA estimate we shall introduce some cost functions: S(n) represents the cost of obtaining a sample of n observations, while T(n) indicates the cost of one fitness function evaluation, which depends on the number of observations as well, because a solution is evaluated by analyzing the full sample. So the total cost C of obtaining an estimate θ^*(y) using n statistical observations and V fitness function evaluations is given by:

C = S(n) + V·T(n).

Now we can write our tradeoff question as an optimization problem:

min_n,V tr(Σ_TOT) = tr(W_S) · 1/f(n) + tr(W_GA) · 1/h(V)
s.t. C = S(n) + V·T(n)

A particular case that simplifies the latter is the assumption of linearity in n for the cost functions T and S.
This is reasonable because statistical observations are usually collected in sequence and because the GA fitness function typically includes a summation over the considered sample. In such a case T(n)=nT, S(n)=nS and we can incorporate the effort constraint into the objective function to obtain:

min_n tr(Σ_TOT) = tr(W_S) · 1/f(n) + tr(W_GA) · 1/h([C-nS]/nT).

A solution can be found numerically once the consistency and convergence rates f(·) and h(·) have been established. A particular case that allows a simple closed-form expression for the optimal n and V to be obtained is given when f(n)=n and h(V)=V. In this case, computing the derivative of the objective function with respect to n, we obtain the solutions:

ñ = (-SC·tr(W_S) ± C√(CT·tr(W_S)tr(W_GA))) / (CT·tr(W_GA) - S^2·tr(W_S)).

Since n is a sample size, we are interested only in the positive solution ñ of (<ref>). Ṽ is obtained from the constraint:

Ṽ = (C - ñS)/(ñT).
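A small R sketch of this closed-form solution follows; the numeric values of C, S, T, tr(W_S) and tr(W_GA) are illustrative assumptions, not estimates from this paper, and a direct numerical minimisation is included as a sanity check.

optimal_nV <- function(C, S_cost, T_cost, trWS, trWGA) {
  # positive root of the closed form and the effort constraint
  n_opt <- (-S_cost * C * trWS + C * sqrt(C * T_cost * trWS * trWGA)) /
           (C * T_cost * trWGA - S_cost^2 * trWS)
  c(n = n_opt, V = (C - n_opt * S_cost) / (n_opt * T_cost))
}
obj <- function(n, C, S_cost, T_cost, trWS, trWGA)
  trWS / n + trWGA * n * T_cost / (C - n * S_cost)
optimal_nV(1e5, 1, 0.1, 5, 50)
optimize(obj, c(1, 1e5 - 1), C = 1e5, S_cost = 1, T_cost = 0.1,
         trWS = 5, trWGA = 50)$minimum   # agrees with the n above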
§.§ Consistency and Convergence Rates

The functions f(n) and h(V) introduced in the previous subsection specify, respectively, the consistency rate of the statistical part and the convergence rate of the algorithmic part of equation (<ref>). The assumption of linearity is a particular case that simplifies the tradeoff analysis. Linearity of f(n) is satisfied if we consider estimators having the property of asymptotic efficiency: in that case, under some regularity conditions, f(n)=n (see <cit.>, in the case of Maximum Likelihood Estimators). On the other hand, the behaviour of h(V), as said, is related in our case to the convergence rate of GAs. This is an essential issue for any optimization algorithm, and in the field of Evolutionary Algorithms (EAs) it has been analyzed in different ways. Part of the literature focuses on the comparison of EAs with different configurations, to identify the algorithm that reaches the optimum more quickly <cit.>; other researchers have developed more rigorous approaches, focusing among other things on the convergence rate of single chromosome bits, in relation to classic problems like OneMax <cit.>; a different proposal <cit.>, inspired by statistical mechanics, aims to model the GA as a complex system and to summarize its probability distribution through generations by considering cumulants. In such a way GA convergence can be evaluated by considering the limiting cumulants. Recently Clerc <cit.> has proposed a theoretical framework for analyzing optimization performances. For a general stochastic algorithm (deterministic algorithms are considered particular cases of this class) he introduced a bivariate probability density p(ψ,r), called Eff-Res, which is a function of both the optimization result r and the computational effort ψ spent to obtain r. By analyzing this function it is possible to investigate different useful questions: for a given result r, the probability of obtaining r with a generic effort ψ; for a given effort ψ, the probability of obtaining a generic result r. Our interest is focused on the latter question, because if we fix a computational effort, which in our case is related to the number of fitness evaluations, then we are interested in how the result r varies. The theoretical variance of results for fixed effort can be written as:

σ^2(ψ) = μ(ψ) ∫_R̃ (r - r̅(ψ))^2 p(ψ,r) dr,

where R̃ is the set of possible results, r̅(ψ) the theoretical mean result for fixed effort and μ(ψ) the normalization coefficient of p(ψ,r). Expression (<ref>) can be evaluated empirically. If we have observed J results r(1),r(2),...,r(J), obtained with effort ψ, then the estimated variance is given by:

σ̂^2(ψ) = 1/(J-1) ∑_j=1^J [r(j) - r̅_J(ψ)]^2,

where r̅_J(ψ) is the empirical mean of the results. In the present work we shall employ a very similar approach to evaluate GA variability. Since we are interested in the convergence of θ_i^* to the optimum θ_i (i=1,...,k), in both (<ref>) and (<ref>) we plug θ_i in place of the theoretical and empirical means, and θ_i^* in place of the results. In that case (<ref>) corresponds to the variance σ^*_ii of the matrix Σ_GA. If we run a GA J times, obtaining θ_1,i^*, θ_2,i^*, ..., θ_J,i^* (i=1,...,k), we get the estimates by:

σ̂_ii^* = 1/J ∑_j=1^J [θ_j,i^* - θ_i]^2, i=1,...,k.

Since we need quantifications of the convergence rate, we analyze the behaviour of these variances through the generations. Thus we consider the sequence of variances for the k-dimensional parameter θ, given a fixed maximum number of generations G:

σ̂^*(g) = (σ̂^*(g)_11, σ̂^*(g)_22, ..., σ̂^*(g)_kk), g=1,...,G.

We shall now conduct the following regression analysis for each parameter indexed by i:

σ̂_ii^*(g) = w_GA,i · 1/[V^(g)]^a + ϵ_g, g=1,...,G,

where [V^(g)]^a is the a-th power of the number of fitness evaluations up to generation g and w_GA,i is the regression parameter. The goal is to find out whether there exists an a for which [V^(g)]^a can be taken as a satisfactory GA convergence rate h(V). In that case w_GA,i will become part of the matrix W_GA in (<ref>); for this reason a shall be the same for all parameters in the considered model.
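The following R sketch (the input format, a J × G matrix of per-generation estimates from J runs, is an assumption) computes the empirical variances above and fits the regression for a candidate rate a.

ga_var <- function(runs, theta_hat) {
  # runs[j, g]: estimate of one parameter at generation g in run j;
  # theta_hat plays the role of the target value theta_i
  colMeans((runs - theta_hat)^2)
}
fit_rate <- function(sig, V, a) {
  # regress sigma*(g) on 1/V(g)^a without intercept
  m <- lm(sig ~ 0 + I(1 / V^a))
  c(w_GA = unname(coef(m)), R2 = summary(m)$r.squared)
}
# with population size N = 50, V(g) = 50 * g evaluations up to generation g:
# fit_rate(sig, V = 50 * seq_along(sig), a = 1)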
§ APPLICATIONS

The applications selected in this paper are a Least Absolute Deviation Regression estimation (code LAD), an Autoregressive model building (code AR) and a g-and-k distribution maximum likelihood estimation (code gk). In order to discuss the tradeoff question for each of these experiments we now give details on the methods employed for obtaining variability and convergence rate estimates, the motivations for the choices of estimators and the GA implementation. Simulations and computations were implemented using the software R <cit.> for all applications, together with the R package gk <cit.> for the last application. Concerning the GA configuration we adopted the coding, fitness and genetic operators described in subsection <ref>, with coding interval boundaries specific to each parameter. Crossover and mutation rates were fixed at, respectively, 0.7 and 0.1, the maximum number of generations G at 1400 and the population size N at 50. If not otherwise specified the initial population was generated uniformly at random. These configurations have been chosen on the basis of empirical studies to guarantee stability and convergence of the procedure. As mentioned in subsection <ref>, if an estimator is asymptotically efficient then f(n)=n in formula (<ref>). We considered estimators which have this property. We then estimated the variability of the estimators by simulating 10000 samples and computing the mean squared deviations from the true parameters of the estimates obtained by software optimization routines, to get a quantification of W_S in (<ref>). On the other hand, GA variability has been estimated by considering 10 equally-sized datasets. For each sample we computed variance estimates using J=500 GA runs as shown in formulas (<ref>) and (<ref>); then we considered the point-by-point average of these estimates with respect to g, obtaining final estimates to conduct the regression analysis (<ref>). This regression analysis has been conducted for the three applications with a=1/3, 1/2, 1, 2, and the goodness-of-fit results (R^2 coefficient) are summarized in Table <ref>. Concerning experiments LAD and gk the best fits are observed for a=1, while the rate a=1/2 is dominant for experiment AR. We adopted these two convergence rates in the tradeoff analysis of the next section. As an example Figure <ref> shows the fit for parameter β_2 of the LAD experiment. Results of the estimates of tr(W_S) and tr(W_GA) are summarized in Table <ref>. For both estimates we used simulated data of length n=200 for all the experiments. The tradeoff will be discussed for the three applications by evaluating the optimal ñ on a common grid of values for the linear cost functions S and T, assuming a fixed total effort C=10^5. Comments on the optimal V can be derived by complement. We shall also make some remarks for the case where we estimate the computational cost T by the time (in seconds) needed on our computer to evaluate the fitness in the three experiments, using gk as the corner point. In this way we can make more realistic comparative comments.

§.§ Least Absolute Deviation Estimation

Least Absolute Deviations (LAD) regression is an alternative to Ordinary Least Squares regression, which has been proven to be more robust to outliers <cit.>. In this framework the estimator, which is asymptotically efficient <cit.>, minimizes the sum of the absolute values of the errors. This function is not differentiable, so numerical methods must be employed to find an optimal solution. Zhou and Wang <cit.> have already employed a real-valued GA to estimate the parameters of a LAD regression with censored data. In this paper we consider a standard linear regression model:

y_i = β_0 + β_1 x_i,1 + β_2 x_i,2 + ϵ_i, i=1,...,n,

where (y,x) is the observed dataset and ϵ_i ∼ t_5. Since our goal is always maximization, the fitness function shall be:

f(ψ) = exp{- ∑_i=1^n | y_i - β_0 - β_1 x_i,1 - β_2 x_i,2 |/n}.

The true parameter vector is β=(0.5,0.5,-0.5), each chromosome has length M=24 and the coding interval boundaries are [-2,2] for all parameters. Figure <ref> shows the behaviour of the optimal n (on the z axis) with respect to a grid of values for the cost functions S and T. It obviously increases to large values as the costs S and T decrease, and rapidly decreases as they increase.
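A minimal R sketch of this fitness follows; the design matrix below is an illustrative assumption, since the paper does not specify the covariates.

set.seed(1)
n <- 200
X <- cbind(1, rnorm(n), rnorm(n))          # intercept plus two assumed covariates
beta_true <- c(0.5, 0.5, -0.5)
y <- X %*% beta_true + rt(n, df = 5)       # t_5 errors as in the model above

fitness_lad <- function(beta) exp(-sum(abs(y - X %*% beta)) / n)
fitness_lad(beta_true)
fitness_lad(c(0, 0, 0))                    # a worse candidate, smaller fitness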
§.§ Autoregressive Models Building

GAs have been widely applied in the field of time series analysis. In fact optimization problems related to parameter estimation and model identification may present difficulties due to the intractability of objective functions or to the size of search spaces. The latter question is common in model identification problems, and it has also been analyzed for standard ARMA models <cit.>. Here we address the problem of how to simultaneously identify and estimate Autoregressive (AR) models, given a fixed maximum order. The general equation of an AR model of order p is:

Y_t = ϕ_1 Y_t-1 + ... + ϕ_p Y_t-p + ϵ_t,

where Y_t is a zero-mean random process, ϵ_t a Gaussian white noise and ϕ=(ϕ_1,...,ϕ_p) the parameter vector. Model (<ref>) is usually identified by using penalized likelihood criteria like AIC or BIC, to be minimized. In this work we shall consider BIC, because of its property of consistency <cit.>. Since we need to simultaneously identify and estimate the model, we shall not include the maximized likelihood in the criterion, but just the likelihood evaluated at a generic vector ϕ. Thus we get:

BIC(ϕ;y) = n log σ̂^2(p) + k log n,

where y is the observed time series, σ̂^2(p) = ∑_t=1^n (y_t - ϕ_1 y_t-1 - ... - ϕ_p y_t-p)^2 / n and k ≤ p is the number of free parameters in the model. Sampling variability will be estimated considering the asymptotic efficiency of the maximum likelihood estimator for AR models <cit.>. The true model is an AR(1) with ϕ_1=0.8, and we shall consider a maximum order p=8. The chromosome length is M=64, and the coding necessarily includes the case ϕ_i=0 (i=1,...,8), which has a direct impact on the penalization term of (<ref>). To facilitate the identification of subset models we force the starting population to include a chromosome that corresponds to a white noise (all parameters are zero), and also 8 chromosomes for which one of the parameters is zero, so that all ϕ_i=0 (i=1,...,8) are represented; the remaining chromosomes are generated uniformly at random, coherently with the other applications. This may be a reasonable strategy in a situation of total lack of knowledge. The fitness function is:

f(ψ) = exp{- BIC(ϕ;y)/n},

while the coding interval is [-2,2] for each ϕ_i. Figure <ref> shows the plot analogous to Figure <ref>. This experiment has a slower GA convergence rate with respect to LAD and gk, possibly because of the effect of model identification in the fitness (e.g. estimating a ϕ_i value slightly different from zero may imply a slight decrease of the residual sum of squares but a term k one unit larger in the penalization part of BIC). For this reason the values of the optimal n are generally lower than for LAD.
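The corresponding fitness can be sketched in R as follows; conditioning on the first p observations when computing the residuals is our simplifying assumption.

set.seed(3)
n <- 200
y <- arima.sim(list(ar = 0.8), n = n)      # AR(1) series as in the true model
p_max <- 8

fitness_ar <- function(phi) {
  k <- sum(phi != 0)                       # number of free parameters
  res <- y[(p_max + 1):n]                  # residuals for t = p_max+1, ..., n
  for (i in seq_len(p_max)) res <- res - phi[i] * y[(p_max + 1 - i):(n - i)]
  sigma2 <- sum(res^2) / n
  bic <- n * log(sigma2) + k * log(n)
  exp(-bic / n)
}
fitness_ar(c(0.8, rep(0, 7)))              # true subset model
fitness_ar(rep(0, 8))                      # white noise candidate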
§.§ g-and-k Distribution Estimation

The g-and-k distribution was introduced by Haynes et al. <cit.>, and is a family of distributions specified by a quantile function. It is a very flexible tool which has been applied in statistical control chart techniques <cit.> and non-life insurance modelling <cit.>. For a univariate random sample x=(x_1,...,x_n) the quantile function is:

Q_X(u_i|A,B,g,k) = A + B z_u_i (1 + c (1-e^-g z_u_i)/(1+e^-g z_u_i)) (1+z^2_u_i)^k, i=1,...,n,

where u_i = F_X(x_i|A,B,g,k) is the depth corresponding to the data value x_i, z_u_i the u_i-th quantile of the standard normal distribution, A and B>0 are location and scale parameters, g measures skewness in the distribution, k>-0.5 is a measure of kurtosis and c is a constant introduced to make the distribution proper. By combining values of the four parameters several essential distributions like the Normal, Student's t or Chi-square can be derived. Maximum Likelihood estimation of this distribution falls into the class of so-called intractable likelihood problems. The expression of the likelihood is given by:

L(θ | x) = (∏_i=1^n Q'_X(Q^-1_X(x_i| θ)| θ))^-1,

where x is the observed sample, θ=(A,B,g,k) and Q'_X(u| θ) = ∂ Q_X / ∂ u. The main complication in computing (<ref>) is that there is no closed form for the expression Q^-1_X(x_i| θ), which must be obtained numerically, for example with Brent's method, commonly implemented in statistical software. A lot of research on g-and-k distribution estimation has been carried out in a Bayesian framework, using Markov Chain Monte Carlo <cit.> or indirect methods like Approximate Bayesian Computation <cit.>. In this paper we shall follow the pure likelihood approach proposed by Rayner and MacGillivray <cit.>. In this situation a numerical procedure has to be selected to maximize (<ref>). They proposed a Nelder-Mead simplex algorithm, reporting some weaknesses and highlighting the need to use several starting points for the optimization. In the final discussion they also observed that metaheuristic methods like GAs could be more successful in this optimization problem. In our GA approach we shall consider the fitness:

f(ψ) = exp{log L(θ | x)/n}.

We will simulate data using the typical generating parameter vector θ=(A,B,g,k)=(3,1,2,0.5), with c=0.8, which leads to an 'interesting far-from-normal distribution' <cit.>. Each chromosome has length M=28, and the coding interval boundaries are A ∈ [-10,10], B ∈ [0,10], g ∈ [-10,10] and k ∈ [-0.5,10]. If a decoded chromosome yields the unacceptable values B=0 or k=-0.5 it is rejected and regenerated. Concerning sampling variability, Rayner and MacGillivray <cit.> investigated the approximation of maximum likelihood estimator variability by the Cramér-Rao variance bound, which is of order 𝒪(n^-1). In estimating sampling variability we shall allow for this asymptotic approximation of Σ_S. The perspective plot for this experiment (Figure <ref>) shows a behaviour of the optimal n similar to AR: even though in this case there is a linear GA convergence rate, the experiment is more complex (the tr(W_GA)/tr(W_S) ratio is much larger).
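A sketch in R of this log-likelihood evaluation follows, using uniroot (R's Brent-type root finder) on z = qnorm(u) together with a numerical derivative; the bracketing interval for z is an assumption adequate for the parameter values above.

gk_Q <- function(z, A, B, g, k, c = 0.8)
  A + B * z * (1 + c * (1 - exp(-g * z)) / (1 + exp(-g * z))) * (1 + z^2)^k

gk_loglik <- function(theta, x, c = 0.8) {
  A <- theta[1]; B <- theta[2]; g <- theta[3]; k <- theta[4]
  sum(sapply(x, function(xi) {
    # invert Q numerically, assuming the data lie in Q([-30, 30])
    z <- uniroot(function(z) gk_Q(z, A, B, g, k, c) - xi,
                 lower = -30, upper = 30, tol = 1e-10)$root
    eps <- 1e-6
    dQdz <- (gk_Q(z + eps, A, B, g, k, c) -
             gk_Q(z - eps, A, B, g, k, c)) / (2 * eps)
    dnorm(z, log = TRUE) - log(dQdz)     # log f(x) = log phi(z) - log dQ/dz
  }))
}

set.seed(4)
x <- gk_Q(qnorm(runif(50)), 3, 1, 2, 0.5)  # sample from the true distribution
gk_loglik(c(3, 1, 2, 0.5), x)
gk_loglik(c(3, 2, 0, 0), x)                # a poorer candidate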
Lastly we shall make some comments on the behaviour of ñ when the sampling cost S varies and the fitness evaluation cost T is estimated in each experiment by the elapsed execution time (in seconds) on our computer for a single fitness evaluation, taking gk as the corner point. The results are: T_LAD/T_gk = 0.007 and T_AR/T_gk = 0.101. Figure <ref> shows the behaviour of ñ in this more realistic scenario, for which each computational cost ratio has been multiplied by a constant to highlight the behaviour of each experiment. In this case the three curves are ranked with respect to computational cost and experiment complexity, which is related to both the GA convergence rate and the magnitude of the variability ratio tr(W_GA)/tr(W_S). The gk experiment shows the lowest values of ñ, but when S increases the three experiments tend to conform to common values, suggesting that a large sampling cost could have a larger influence in the tradeoff than model complexity.

§ DISCUSSION AND FURTHER DEVELOPMENTS

In this paper we proposed a statistical and computational tradeoff analysis for complex estimation problems tackled by GAs, based on a decomposition of the variability of the estimates into two terms depending, respectively, on sampling and on the stochastic features of the algorithm. The results of the applications showed how the behaviour of the optimal sample size changes with the complexity of the experiment. A comparative analysis of the three experiments, in which the computational cost is estimated, also suggested that a large sampling cost could influence the optimal values more than the complexity of the model, represented by statistical and computational variability. This is an interesting consideration, especially for real applications, where large costs can often decisively restrict the analysis. The present study could be improved by considering other scalar quantifications of statistical and computational variability. For example one could consider the determinant of Σ_S and Σ_GA instead of the trace. Another direction for further research is to generalize this framework to other statistical problems where a GA is involved. In fact, as already said, there are many complex optimization problems in the statistical field, and a deeper understanding of the tradeoff could facilitate the integration of GAs into standard statistical methods. Lastly, the discussion of the statistical and computational tradeoff could also be interesting in estimation problems where nature-inspired algorithms for continuous optimization are employed, like Differential Evolution (DE) <cit.> or Particle Swarm Optimization (PSO) <cit.>, for which there is direct real coding. In fact the specific stochastic elements in these algorithms, for example the differential mutation mechanism in DE or the parameter regulating particle velocity in PSO, could result in different convergence rates for the algorithmic variability.

§ ACKNOWLEDGEMENTS

This research has been financially supported by Sapienza University of Rome (grant Progetti per Avvio alla Ricerca 2016).

Chandrasekaran V, Jordan MI. Computational and statistical tradeoffs via convex relaxation. Proc Natl Acad Sci. 2013; 110(13): E1181-E1190.
Dillon JV, Lebanon G. Stochastic Composite Likelihood. J Mach Learn Res. 2010; 11: 2597-2633.
Shender D, Lafferty J. Computation-Risk Tradeoffs for Covariance-thresholded Regression. In: Proceedings of The 30th International Conference on Machine Learning; 2013: p. 756-764.
Wang T, Berthet Q, Samworth RJ. Statistical and computational trade-offs in estimation of sparse principal components. Ann Stat. 2016; 44(5): 1896-1930.
Jordan MI. On statistics, computation and scalability. Bernoulli. 2013; 19(4): 1378-1390.
Berthet Q, Chandrasekaran V. Resource Allocation for Statistical Estimation. Proc IEEE. 2016; 104(1): 111-125.
Bruer JJ, Tropp JA, Cevher V, Becker SR. Designing Statistical Estimators That Balance Sample Size, Risk, and Computational Cost. IEEE J Sel Top Signal Process. 2013; 9(4): 612-624.
Chen Y, Xu J. Statistical-Computational Tradeoffs in Planted Problems and Submatrix Localization with a Growing Number of Clusters and Submatrices. J Mach Learn Res. 2016; 17(27): 1-57.
Agarwal A. Computational Trade-offs in Statistical Learning [dissertation]. Berkeley (CA): University of California; 2012.
Baragona R, Battaglia F, Poli I. Evolutionary Statistical Procedures: An Evolutionary Computation Approach to Statistical Procedures, Design and Applications. Berlin: Springer-Verlag; 2011.
Kapanoglu M, Ozan Koc I, Erdogmus S. Genetic algorithms in parameter estimation for nonlinear regression models: an experimental approach. J Stat Comput Simul. 2007; 77(10): 851-867.
Rizzo M, Battaglia F. On the Choice of a Genetic Algorithm for Estimating GARCH Models. Comput Econ. 2016; 48(3): 473-485.
Satman MH, Diyarbakirlioglu E. Reducing errors-in-variables bias in linear regression using compact genetic algorithms. J Stat Comput Simul. 2015; 85(16): 3216-3235.
Gürünlü Alma Ö, Kurt S, Uğur A. Genetic algorithms for outlier detection in multiple regression with different information criteria. J Stat Comput Simul. 2011; 81(1): 29-47.
Holland JH. Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press; 1975.
Rudolph G. Convergence Properties of Evolutionary Algorithms. Hamburg: Verlag Dr. Kovac; 1997.
Rojas Cruz JA, Pereira AGC. The elitist non-homogeneous genetic algorithm: Almost sure convergence. Stat Probab Lett. 2013; 83(10): 2179-2185.
Pereira AGC, de Andrade BB. On the genetic algorithm with adaptive mutation rate and selected statistical applications. Comput Stat. 2015; 30(1): 131-150.
Pereira AGC, Campos VSM. Multistage non homogeneous Markov chain modeling of the non homogeneous genetic algorithm and convergence results. Commun Stat-Theory Methods. 2016; 45(6): 1794-1804.
Winker P, Maringer D. The convergence of estimators based on heuristics: theory and application to a GARCH model. Comput Stat. 2009; 24(3): 533-550.
Casella G, Berger RL. Statistical Inference. Pacific Grove, CA: Duxbury; 2002.
Eiben AE, Smit SK. Parameter tuning for configuring and analyzing evolutionary algorithms. Swarm Evol Comput. 2011; 1(1): 19-31.
Derrac J, Garcia S, Hui S, Suganthan PN, Herrera F. Analyzing convergence performance of evolutionary algorithms: A statistical approach. Inf Sci. 2014; 289: 41-58.
Oliveto PS, Witt C. On the runtime analysis of the Simple Genetic Algorithm. Theor Comput Sci. 2014; 545: 2-19.
Auger A, Doerr B, editors. Theory of Randomized Search Heuristics: Foundations and Recent Developments. World Scientific; 2011.
Prügel-Bennett A, Rogers A. Modelling genetic algorithm dynamics. In: Kallel L, Naudts B, Rogers A, editors. Theoretical aspects of Evolutionary Computing. Berlin: Springer-Verlag; 2001. p. 59-58.
Shapiro JL. Statistical mechanics theory of genetic algorithms. In: Kallel L, Naudts B, Rogers A, editors. Theoretical aspects of Evolutionary Computing. Berlin: Springer-Verlag; 2001. p. 87-108.
Reeves CR, Rowe JE. Genetic algorithms: Principles and perspectives - A guide to GA theory. London: Kluwer Academic Publishers; 2003.
Clerc M. Guided Randomness in Optimization. Wiley; 2015.
R Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2013. Available from: http://www.R-project.org/.
Prangle D. gk: g-and-k and g-and-h Distribution Functions. R package version 0.4.0; 2017. Available from: http://CRAN.R-project.org/package=gk.
Bloomfield P, Steiger WL. Least absolute deviations: Theory, applications and algorithms. Boston: Birkhäuser; 1983.
Zhou X, Wang J. A genetic method of LAD estimation for models with censored data. Comput Stat Data Anal. 2005; 48: 451-466.
Gaetan C. Subset ARMA model identification using genetic algorithms. J Time Ser Anal. 2000; 21(5): 559-570.
Minerva T, Poli I. Building ARMA models with genetic algorithms. In: Boers EJW et al., editors. Workshops on Applications of Evolutionary Computation. Berlin Heidelberg: Springer; 2001. p. 335-342.
Hannan EJ. The estimation of the order of an ARMA process. Ann Stat. 1980; 8(5): 1071-1080.
Brockwell PJ, Davis RA. Time series: theory and methods. New York: Springer; 1991.
Haynes MA, Gatton ML, Mengersen KL. Generalized control charts for nonnormal data. Brisbane, Australia: School of Mathematical Sciences, Queensland University of Technology; 1997. (Technical Report No. 97/4).
Haynes MA, Mengersen KL, Rippon P. Generalized Control Charts for Non-Normal Data Using g-and-k Distributions. Commun Stat-Simul Comput. 2008; 37(9): 1881-1903.
Peters GW, Chen W, Gerlach RH. Estimating Quantile Families of Loss Distributions for Non-Life Insurance Modelling via L-Moments. Risks. 2016; 4(2): 14.
Haynes MA, Mengersen KL. Bayesian Estimation of g-and-k Distributions using MCMC. Comput Stat. 2005; 20(1): 7-30.
Allingham D, King RAR, Mengersen KL. Bayesian estimation of quantile distributions. Stat Comput. 2009; 19(2): 189-201.
Grazian C, Liseo B. Approximated Integrated Likelihood via ABC methods. Stat Its Interface. 2015; 8(2): 161-171.
Rayner GD, MacGillivray HL. Numerical maximum likelihood estimation for the g-and-k and generalized g-and-h distributions. Stat Comput. 2002; 12(1): 57-75.
Price K, Storn RM, Lampinen JA. Differential evolution: a practical approach to global optimization. Berlin: Springer Science & Business Media; 2006.
Kennedy J, Eberhart R. Particle Swarm Optimization. In: Proceedings of the IEEE Conference on neural networks; 1995; Perth, Australia. Piscataway, NJ: IEEE Service Center; 1995. p. 1942-1948.
"authors": [
"Manuel Rizzo",
"Francesco Battaglia"
],
"categories": [
"stat.CO"
],
"primary_category": "stat.CO",
"published": "20170325112021",
"title": "Statistical and Computational Tradeoff in Genetic Algorithm-Based Estimation"
} |
Let G be a p-adic group that splits over an unramified extension. We decompose Rep_Λ^0(G), the abelian category of smooth level 0 representations of G with coefficients in Λ=ℚ_ℓ or ℤ_ℓ, into a product of subcategories indexed by inertial Langlands parameters. We construct these categories via systems of idempotents on the Bruhat-Tits building and Deligne-Lusztig theory. Then, we prove compatibilities with parabolic induction and restriction functors and the local Langlands correspondence.

§ INTRODUCTION

Let k be a non-archimedean local field and 𝐆 a connected reductive group defined over k. Write G:=𝐆(k). Let ℓ be a prime number, ℓ≠ p, and set Λ=ℚ_ℓ or ℤ_ℓ. We write Rep_Λ(G) for the abelian category of smooth representations of G with coefficients in Λ and Rep_Λ^0(G) for the full subcategory of level 0 representations. For Λ=ℚ_ℓ, the Bernstein decomposition theorem provides a decomposition of Rep_ℚ_ℓ(G) into a product of blocks (that is, of indecomposable subcategories). The case Λ=ℤ_ℓ, on the other hand, is rather poorly understood. For 𝐆=GL_n, Vignéras obtained in <cit.> a decomposition of the category Rep_𝔽_ℓ(GL_n(k)) into blocks (see also the work of Sécherre and Stevens <cit.>). This subsequently allowed Helm <cit.> to obtain a decomposition of Rep_ℤ_ℓ(GL_n(k)). Take an inertial equivalence class of pairs (L,π) where L is a Levi subgroup of GL_n(k) and π is an irreducible supercuspidal representation of L over 𝔽_ℓ. Then define Rep_ℤ_ℓ(GL_n(k))_[L,π] as the full subcategory of Rep_ℤ_ℓ(GL_n(k)) whose objects are the Π such that every simple subquotient of Π has "mod ℓ inertial supercuspidal support" given by (L,π). The blocks of Rep_ℤ_ℓ(GL_n(k)) are exactly the subcategories Rep_ℤ_ℓ(GL_n(k))_[L,π]. These methods are based on the "unicity of the supercuspidal support", which holds for GL_n but fails in general. A counterexample in Sp_8 over a finite field was recently found by Dudas and then lifted to a p-adic field by Dat in <cit.>. In the level 0 case, Dat proposes (see <cit.>) a new construction of the blocks of GL_n(k) using Deligne-Lusztig theory and systems of idempotents on the semi-simple Bruhat-Tits building (as in the article of Meyer and Solleveld <cit.>). In <cit.> he also reinterprets the parametrizations of the previous decompositions of Rep_Λ(GL_n(k)) in "dual" terms. Let us introduce some notation for a more precise statement. We assume that 𝐆 splits over the maximal unramified extension of k, that is, 𝐆 is an inner form of an unramified group. Let W_k be the absolute Weil group of k and I_k its inertia subgroup.
Le groupe Γ:=Gal(k̅/k)/I_k est topologiquement engendré par le Frobenius inverse Frob. Via le choix d'un épinglage de 𝐆, le dual de 𝐆 sur ℚ_ℓ, Frob induit un automorphisme que l'on nomme ϑ sur 𝐆. On note Φ(G) l'ensemble des morphismes admissibles ϕ : W'_k→^LG(ℚ_ℓ) modulo les automorphismes intérieurs par des éléments de G(ℚ_ℓ), oùW_k'=W_k⋉ℚ_ℓ désigne le groupe de Weil-Deligne et ^LG(ℚ_ℓ):=⟨ϑ⟩⋉G(ℚ_ℓ) le groupe de Langlands dual. Posons I_k^Λ:=I_k si Λ = ℚ_ℓ et I_k^Λ:=I_k^(ℓ), le sous-groupe fermé maximal de I_k de pro-ordre premier à ℓ, si Λ = ℤ_ℓ. Pour I un sous-groupe de W_k, définissons Φ(I,G) l'ensemble des classes de G-conjugaison de morphismes continus I →^LG(ℚ_ℓ) qui admettent une extension à un L-morphisme de Φ(G). On s'intéressera principalement à Φ(I_k^Λ,G). Enfin, si I contient l'inertie sauvage, posons Φ_m(I,G) les éléments de Φ(I,G) qui sont triviaux sur l'inertie sauvage. Il y a alors une bijection entre les blocs de Bernstein pour GL_n(k) et les éléments de Φ(I_k,GL_n) ainsi qu'entre les blocs de Vignéras-Helm et Φ(I_k^(ℓ),GL_n), nous donnant ainsi une décomposition en blocsRep_Λ(GL_n(k))=∏_ϕ∈Φ(I_k^Λ,GL_n) Rep^ϕ_Λ(GL_n(k)).De plus les facteurs apparaissant dans Rep_Λ^0(GL_n(k)), la sous-catégorie de niveau 0, correspondent aux ϕ∈Φ_m(I_k^Λ,GL_n).La méthode de Vignéras ne pouvant s'appliquer dans le cas général, nous nous proposons ici de généraliser la construction de systèmes d'idempotents via la théorie de Deligne-Lusztig pour obtenir des décompositions de Rep_Λ^0(G). Nous obtenons ainsi une décomposition analogue à celle de Dat (voir sections <ref> et <ref>) Soit 𝐆 un groupe réductif connexe défini sur k et déployé sur une extension non-ramifiée de k. Alors la catégorie de niveau 0 se décompose enRep_Λ^0(G) = ∏_ϕ∈Φ_m(I_k^Λ,𝐆) Rep_Λ^ϕ(G).De plus, les catégories Rep_Λ^ϕ(G) vérifient les propriétés suivantes :* Lien entre ℤ_ℓ et ℚ_ℓ : Soit ϕ∈Φ_m(I_k^(ℓ),𝐆), alors Rep_ℤ_ℓ^ϕ(G) ∩ Rep_ℚ_ℓ(G) = ∏_ϕ' Rep_ℚ_ℓ^ϕ'(G) où le produit est pris sur les ϕ' ∈Φ_m(I_k,𝐆) tels que ϕ'_| I_k^(ℓ)∼ϕ. * Représentations irréductibles de Rep_ℚ_ℓ^ϕ(G) : Soit π∈Irr_ℚ_ℓ(G). Alors π∈ Rep_ℚ_ℓ^ϕ(G) si et seulement s'il existe 𝐓 un tore maximal non-ramifié de 𝐆, ϕ_𝐓∈Φ_m(I_k,𝐓) et x un sommet de l'immeuble de G (sur k) qui est dans l'appartement de 𝐓 (sur K) tels que * ⟨π^G_x^+, ℛ__x^_x(θ_𝐓) ⟩≠ 0* ι∘ϕ_𝐓∼ϕoù ι est un plongement ι : ^L𝐓↪^L𝐆, G_x^∘ est le sous-groupe parahorique en x, G_x^+ son pro-p-radical, _x≃ G_x^∘/G_x^+ le quotient réductif, _x est le tore induit par 𝐓 sur _x, ℛ__x^_x est l'induction de Deligne-Lusztig et θ_𝐓 est le caractère de niveau 0 de 𝐓^F correspondant à ϕ_𝐓 via la correspondance de Langlands pour les tores restreinte à l'inertie.(Notons que l'on obtient également une description de Rep_ℤ_ℓ^ϕ(G) ∩Irr_ℚ_ℓ(G) grâce au (1).)* Compatibilité à l'induction et à la restriction parabolique : Soient 𝐏 un sous-groupe parabolique de 𝐆 ayant pour facteur de Levi 𝐌 et ι: ^L𝐌↪^L𝐆 un plongement.* Soit ϕ_M∈Φ_m(I_k^Λ,𝐌), alors i_P^G( Rep_Λ^ϕ_M(M) ) ⊆ Rep_Λ^ι∘ϕ_M(G), où i_P^G désigne l'induction parabolique.* Soit ϕ∈Φ_m(I_k^Λ,𝐆), alors r_P^G( Rep_Λ^ϕ(G) ) ⊆∏_ϕ_M Rep_Λ^ϕ_M(M), où r_P^G désigne la restriction parabolique et le produit est pris sur les ϕ_M∈Φ_m(I_k^Λ,𝐌) tels que ι∘ϕ_M∼ϕ. * Soit ϕ_M∈Φ_m(I_k^Λ,𝐌). Posons ϕ = ι∘ϕ_M∈Φ_m(I_k^Λ,𝐆) et notons C_𝐆(ϕ) le centralisateur dans 𝐆 de l'image de ϕ. Alors si C_𝐆(ϕ) ⊆ι(𝐌) le foncteur i_P^G réalise une équivalence de catégories entre Rep_Λ^ϕ_M(M) et Rep_Λ^ϕ(G). On s'attend à ce que cette décomposition soit compatible à la correspondance de Langlands. 
Nous le vérifions dans le cas des groupes classiques. Supposons que 𝐆 est un groupe classique non-ramifié, Λ=ℚ_ℓ, k est de caractéristique nulle et de caractéristique résiduelle impaire. On obtient alors également les propriétés suivantes * Compatibilité à la correspondance de Langlands : Soient π∈Irr_ℚ_ℓ(G) une représentation irréductible de niveau 0 et ϕ∈Φ_m(I_k,𝐆). Notons φ_π le paramètre de Langlands associé à π via la correspondance de Langlands locale pour les groupes classiques (<cit.> <cit.> <cit.> <cit.> <cit.>). Alors π∈ Rep_ℚ_ℓ^ϕ(G) ⇔φ_π | I_k∼ϕ* Blocs stables :Soit ϕ∈Φ_m(I_k,𝐆) tel que C_𝐆(ϕ) soit connexe. Alors Rep_ℚ_ℓ^ϕ(G) est un "bloc stable" (c'est-à-dire, correspond à un idempotent primitif du centre de Bernstein stable au sens de Haines <cit.>).Contrairement au cas de GL_n les catégories Rep_Λ^ϕ(G) ne sont pas des facteurs indécomposables en général. Dans un prochain article, en cours d'écriture, nous expliquerons comment pousser la méthode utilisée ici, pour obtenir une décomposition minimale construite à partir de systèmes d'idempotents et de la théorie de Deligne-Lusztig. Nous interpréterons également cette nouvelle décomposition à l'aide d'invariants cohomologiques.Dans cet article nous nous limitons à l'étude de Rep_Λ^0(G) la catégorie de niveau 0. On peut cependant espérer que les travaux menés sur la théorie des types permettront d'en déduire des résultats sur Rep_Λ(G). En effet Chinello montre dans <cit.> que l'on a une équivalence de catégories entre chaque bloc de Rep_Λ(GL_n(k)) et un bloc de niveau 0 deRep_Λ^0(G') où G' est un groupe de la forme G'=∏_i=1^r GL_n_i(D_i) avec pour i ∈{1, ⋯ ,r}, n_i un entier et D_i une algèbre à division centrale de dimension finie sur un corps p-adique. Des travaux en cours de Stevens et Kurinczuk tentent d'étendre ces résultats à un G plus général.Cet article se compose de 4 parties. La partie 1 rappelle les résultats sur les systèmes cohérents d'idempotents. Dans la seconde partie nous expliquons comment construire des idempotents à partir de la théorie de Deligne-Lusztig. Nous associons aux paramètres de l'inertie modérés des systèmes cohérents dans la troisième partie pour obtenir la décomposition du théorème précédent. La dernière partie a pour but de montrer les propriétés des facteurs directs ainsi obtenus. Les quatre premières propriétés découlent de la construction de ces catégories et la dernière repose sur les travaux de Moeglin <cit.>, Moussaoui <cit.>, Haines <cit.>, Lust et Stevens <cit.>.Remerciements Je tiens à remercier Jean-François Dat pour son aide précieuse concernant la rédaction de cet article.§ NOTATIONS Soit k un corps local non archimédien et 𝔣 son corps résiduel. Notons q=|𝔣|. On fixe une clôture algébrique k de k et on note K l'extension non-ramifiée maximale de k dans k. On appellera 𝔬_k (resp. 𝔬_K) l'anneau des entiers de k (resp. K). Notons également 𝔉 le corps résiduel de K qui est alors une clôture algébrique de 𝔣. On adopte les conventions d'écriture suivantes. G désignera un groupe réductif connexe défini sur k que l'on identifiera avec G(k) et l'on notera G:=G(k) et G^nr:=G(K). On appellera 𝐆 son groupe dual sur ℚ_ℓ. Pour les groupes réductifs connexes sur 𝔣, nous utiliserons la police d'écritureet l'on identifieraavec (𝔉) et de même on note :=(𝔣). Le groupe dual de G sur 𝔉 sera noté ^*. Si H désigne l'ensemble des points d'un groupe algébrique à valeur dans un corps alors H_ss désigne l'ensemble des classes de conjugaison semi-simples dans H. 
On fixe dans tout ce papier un système compatible de racines de l'unité (ℚ/ℤ)_p'∽→𝔉^× et (ℚ/ℤ)_p'↪ℤ_ℓ^×. Dans la suite G désignera un groupe réductif connexe défini sur k.§ SYSTÈME COHÉRENT D'IDEMPOTENTS Les diverses décompositions obtenues dans cet article sont construites à partir de systèmes cohérents d'idempotents. Cette partie à pour but de rappeler leur définition ainsi que les premières propriétés.Soit K_1 une extension non-ramifiée de k. On note BT(K_1) l'immeuble de Bruhat-Tits semi-simple associé à 𝐆(K_1). L'immeuble est un complexe polysimplicial et l'on note BT_0(K_1) pour l'ensemble des polysimplexes de dimension 0, c'est à dire les sommets. Dans la suite on utilisera des lettres latines x,y,⋯ pour parler des sommets et des lettres grecques pour parler des polysimplexes généraux σ,τ, ⋯. L'immeuble BT(K_1) est partiellement ordonné par la relation d'ordre σ≤τ si σ est une facette de τ. Un ensemble de polysimplexes σ_1,⋯,σ_k est dit adjacent s'il existe un polysimplexe σ tel que ∀ i ∈1,⋯,k, σ_i≤σ. Si x et y sont deux sommets adjacents on notera [x,y] le plus petit polysimplexe contenant x∪ y. Notons également, pour σ,τ deux polysimplexes, H(σ,τ) l'enveloppe polysimpliciale de σ et τ, c'est à dire l'intersection de tous les appartements contenant σ∪τ.Pour simplifier les notations nous noterons BT:=BT(k) et BT_0:=BT_0(k).Soit R un anneau commutatif dans lequel p est inversible. On munit G d'une mesure de Haar et on note ℋ_R(G) l'algèbre de Hecke à coefficients dans R, c'est à dire l'algèbre des fonctions de G dans R localement constantes et à support compact.On dit qu'un système d'idempotents e=(e_x)_x∈ BT_0 de ℋ_R(G) est cohérent si les propriétés suivantes sont satisfaites : * e_xe_y=e_ye_x lorsque x et y sont adjacents.* e_xe_ze_y=e_xe_y lorsque z ∈ H(x,y) et z est adjacent à x.* e_gx=ge_xg^-1 quel que soit x∈ BT_0 et g∈ G.Soit Rep_R(G) la catégorie abélienne des représentations lisses de G à coefficients dans R. Grâce à un résultat de Meyer et Solleveld on a : [<cit.>, Thm 3.1]Soit e=(e_x)_x∈ BT_0 un système cohérent d'idempotents, alors la sous-catégorie pleine Rep_R^e(G) des objets V de Rep_R(G) tels que V=∑_x∈ BT_0e_xV est une sous-catégorie de Serre.Soit σ∈ BT. Notons G_σ le fixateur de σ. Celui-ci contient un sous-groupe appelé sous-groupe parahorique, que l'on note G_σ^∘, qui est le "fixateur connexe" de σ. Enfin on note G_σ^+ le pro-p-radical (pro-p-sous-groupe distingué maximal) de G_σ^∘.Si x est un sommet de l'immeuble BT alors G_x^+ détermine un idempotent e_x^+∈ℋ_ℤ[1/p](G_x).[<cit.>, Section 2.2]Le système d'idempotents (e_x^+)_x∈ BT_0 est cohérent. On a de plus que, pour tout polysimplexe σ, l'idempotent e_σ^+:=∏_x∈σ e_x^+ est l'idempotent associé à G_σ^+.Soient x,y ∈ BT_0 deux sommets. Alors il existe une suite de sommets x_0=x,x_1,⋯,x_ℓ=y joignant x à y, telle que pour tout i ∈{0,⋯,l-1}, x_i+1 est adjacent à x_i et x_i+1∈ H(x_i,y).Pour x_1, il suffit de prendre un sommet dans H(x,y) qui est adjacent à x et tel que la distance de x_1 à y est strictement inférieure à celle de x à y. 
En ré-appliquant ce résultat à x_1 et y on construit x_2 et ainsi de suite pour obtenir le résultat voulu par récurrence.On dit qu'un système (e_σ)_σ∈ BT est 0-cohérent si * e_gx=ge_xg^-1 quel que soit x∈ BT_0 et g∈ G.* e_σ=e_σ^+e_x=e_xe_σ^+ pour x ∈ BT_0 et σ∈ BT tels que x ≤σ.En s'inspirant de <cit.> section 3.2.1 on montre :Soit (e_σ)_σ∈ BT un système d'idempotents 0-cohérent, alors le système d'idempotents (e_x)_x ∈ BT_0 est cohérent.On a de plus, pour x,y ∈ BT_0, e_x^+e_y=e_xe_y.Il ne reste à vérifier que les conditions 1. et 2. de la définition <ref>.Commençons par vérifier la propriété 1. de <ref>: Soient x et y deux sommets adjacents et σ le polysimplexe [x,y]. On sait déjà que e_x^+e_y^+=e_σ^+=e_σ^+e_σ^+. Ainsie_xe_y=e_xe_x^+e_y^+e_y=e_xe_σ^+e_σ^+e_y=e_σe_σ=e_σCe qui montre la propriété 1. de <ref>. Examinons maintenant la propriété 2. de <ref>: Soient x,y et z des sommets de BT tels que z soit dans l'enveloppe polysimpliciale de {x,y} et z adjacent à x. Par la proposition <ref> on sait que e_x^+e_y^+=e_x^+e_z^+e_y^+. Par ce qui précède e_[x,z]=e_xe_z et on ae_xe_y=e_xe_x^+e_y^+e_y=e_xe_x^+e_z^+e_y^+e_y=e_xe_[x,z]^+e_y=e_[x,z]e_y=e_xe_ze_yLe système d'idempotents (e_x)_x ∈ BT_0 est cohérent.Montrons maintenant que e_x^+e_y=e_xe_y.Si x et y sont adjacents on obtient que e_x^+e_y=e_x^+e_y^+e_y=e_[x,y]^+e_y=e_[x,y]=e_xe_yDans le cas général choisissons grâce au lemme <ref> une suite de sommets x_0=x,x_1,⋯,x_ℓ=y joignant x à y, telle que pour tout i ∈{0,⋯,l-1}, x_i+1 est adjacent à x_i et x_i+1∈ H(x_i,y). Alorse_x^+e_y =e_x^+e_y^+e_y=e_x^+e_x_1^+e_y^+e_y=⋯=e_x^+e_x_1^+⋯ e_x_l-1^+e_y^+e_y=e_x^+e_x_1^+⋯ e_x_l-1^+e_y=e_x^+e_x_1^+⋯ e_x_l-1e_y=⋯=e_xe_x_1⋯ e_x_l-1e_y=e_xe_x_1⋯ e_x_l-2e_y=⋯=e_xe_x_1e_y=e_xe_y(La première ligne découle de la propriété 2. de <ref> appliquée aux e_x^+. Pour la seconde, on utilise le fait que e_x_i^+e_x_i+1=e_x_ie_x_i+1 car x_i et x_i+1 sont adjacents. Enfin pour la dernière on applique que e_x_ie_x_i+1e_y=e_x_ie_y car x_i+1 est adjacent à x_i et x_i+1∈ H(x_i,y).)Ainsi∀ x,y ∈ BT_0, e_x^+e_y=e_xe_y § CONSTRUCTION D'IDEMPOTENTS Nous expliquons ici comment construire des idempotents sur l'immeuble à partir de la théorie de Deligne-Lusztig ainsi que les conditions qu'ils doivent vérifier pour obtenir un système cohérent. §.§ Théorie de Deligne-Lusztig Rappelons brièvement le fonctionnement de la théorie de Deligne-Lusztig. Dans cette section les groupes algébriques considérés seront sur 𝔉 et ℓ est un nombre premier différent de p.Soitun groupe réductif défini sur 𝔣. Soitun sous-groupe parabolique de radical unipotentet supposons quecontienne un Levi F-stable . On associe alors à ces données la variété de Deligne-Lusztig Y_ définie parY_={ g∈/, g^-1F(g) ∈F()}C'est une variété définie sur 𝔉 avec une action à gauche de 𝖦:=^F donnée par (γ,g)↦γ g et une action à droite de 𝖫:=^F donnée par (g ,δ)↦ gδ. Le complexe cohomologique RΓ_c(Y_,ℤ_ℓ) est alors un complexe de (ℤ_ℓ[G],ℤ_ℓ[L])-bimodules et induit deux foncteurs adjointsℛ_⊂^=RΓ_c(Y_,ℤ_ℓ) ⊗_ℤ_ℓ[L] - : D^b(ℤ_ℓ[L]) ⟶ D^b(ℤ_ℓ[G]) ^*ℛ_⊂^=RHom_ℤ_ℓ[G](RΓ_c(Y_,ℤ_ℓ) , -) : D^b(ℤ_ℓ[G]) ⟶ D^b(ℤ_ℓ[L])où D^b signifie "catégorie dérivée bornée".Nous avons fixé des systèmes compatibles de racines de l'unité (ℚ/ℤ)_p'∽⟶𝔉^× et (ℚ/ℤ)_p'↪ℤ_ℓ^×. Siest un tore défini sur 𝔣 et ^* est son tore dual, aussi défini sur 𝔣, alors les deux applications précédentes permettent de définir une bijection ^*F→Hom(T,ℤ_ℓ^×). Ainsi un élément t∈^*F détermine un caractère t:T →ℤ_ℓ^×.Soit s une classe de conjugaison semi-simple dans 𝖦^*:=^*F. 
Une représentation irréductible π∈Irr_ℚ_ℓ(𝖦) appartient à la série rationnelle attachée à s s'il existe un tore F-stabledans , un élément t∈^*F qui appartienne à s et un Borelcontenanttel que π apparaisse avec une multiplicité non nulle dans [ℛ_⊂^(t)] (où la notation [·] signifie que l'on prend l'image du complexe dans le groupe de Grothendieck). On note alors ℰ(𝖦,s) l'ensemble des telles séries rationnelles et par e_s,ℚ_ℓ^𝖦∈ℚ_ℓ[] l'idempotent central les sélectionnant. * (<cit.> théorème 8.23) Nous avons 1=∑_s e_s,ℚ_ℓ^𝖦 dans ℚ_ℓ[𝖦].* (<cit.> théorème A' et remarque 11.3) Si s se compose d'éléments ℓ-réguliers, alors nous avons un idempotent e_s,ℤ_ℓ^𝖦=∑_s' ∽_ℓ s e_s',ℚ_ℓ^𝖦∈ℤ_ℓ[𝖦], où s' ∽_ℓ s signifie que s est la partie ℓ-régulière de s'.Soitun Levi F-stable decontenu dans un parabolique . Une classe de conjugaison t dans ^* donne une classe de conjugaison s dans ^*. Ainsi nous avons une application φ_^*,^* à fibres finies définie par[ φ_^*,^* : ^*_ss → ^*_ss; t ↦ s ] Soit Λ= ℚ_ℓ ou ℤ_ℓ. Construisons alors un idempotent e_s,Λ^𝖫:=∑_t ∈φ_^*,^*^-1(s) e_t,Λ^𝖫. Dans le cas oùest lui-même F-stable, on notele radical unipotent deet =^F. Notons e_𝖴:=1/|𝖴|∑_x∈𝖴1_x l'idempotent réalisant la moyenne sur 𝖴. Alors on a[<cit.> section 2.1.4] e_s,Λ^𝖦e_𝖴=e_𝖴e_s,Λ^𝖫=:e_s,Λ^𝖯où e_s,Λ^𝖯 est un idempotent central dansΛ[𝖯].Soient ' un autre groupe réductif défini sur 𝔣 et φ : →' un isomorphisme compatible avec les F-structures. Alors φ induit une bijection (voir annexe <ref>) φ^* : 𝖦^*_ss⟶𝖦'^*_ssOn aφ(e_s,Λ^𝖦)=e_φ^*(s),Λ^𝖦' Par construction un élément t∈^*F appartenant à s est envoyé sur φ^*(t) appartenant à φ^*(s). Ainsi φ envoie ℰ(𝖦,s) sur ℰ(𝖦',φ^*(s)) et on a le résultat.§.§ Construction d'idempotents sur l'immeuble Maintenant que l'on sait fabriquer des idempotents sur les groupes finis, il nous faut les relever en des idempotents sur le groupe p-adique. On utilise pour cela le fait que les sous-groupes parahoriques G_σ^∘ admettent un modèle entier et que le quotient G_σ^∘/G_σ^+ est alors l'ensemble des points d'un groupe réductif connexe à valeur dans un corps fini.Soit σ un polysimplexe dans BT. D'après <cit.> section 3.4 il existe un schéma en groupes affine lisse 𝒢_σ défini sur 𝔬_k, unique à isomorphisme près, tel que * La fibre générique 𝒢_σ,k de 𝒢_σ est G* Pour toute extension galoisienne non-ramifiée K_1 de k, 𝒢_σ(𝔬_K_1) est le sous-groupe compact maximal de 𝐆(K_1)_σ, où σ est identifié avec son image canonique dans BT(K_1). L'application de réduction modulo p fournit un morphisme surjectif G_σ=𝒢_σ(𝔬_k) →𝖦̃_σ:=_σ^F, où _σ, la fibre spéciale de 𝒢_σ, est un groupe algébrique défini sur 𝔣. On note 𝒢_σ^∘ la composante neutre de 𝒢_σ et _σ^∘ celle de _σ. D'après <cit.> section 5.2.6, on a G_σ^∘=𝒢_σ^∘(𝔬_k). D'où un morphisme surjectif G_σ^∘→_σ^∘.Notons _σ le quotient réductif de _σ^∘. On a donc un morphisme surjectif G_σ^∘→𝖦_σ de noyau G_σ^+, d'oùun isomorphisme :G_σ^∘ / G_σ^+∼⟶𝖦_σSoit s ∈ (𝖦^*_σ)_ss (rappelons que 𝖦^*_σ=^*F_σ) d'ordre inversible dans Λ, on peut alors tirer en arrière par cet isomorphisme l'idempotent e_s,Λ^_σ en un idempotent e_σ^s,Λ∈Λ[ G_σ^∘/G_σ^+ ] ⊂ℋ_Λ(G_σ).Soit x∈ BT_0. Si l'on considère la sous-partie de l'immeuble constituée des polysimplexes τ tels que x ≤τ alors d'après <cit.> section 3.5.4, on obtient l'immeuble sphérique ("immeuble des sous-groupes 𝔣-paraboliques") de 𝖦_x.Soit σ∈ BT tel que x ≤σ. Alors G_σ^∘⊂ G_x^∘. On a ainsi un morphisme G_σ^∘→ G_x^∘→𝖦_x. Notons 𝖯_σ l'image de G_σ^∘ dans 𝖦_x qui est un sous-groupe parabolique et 𝖴_σ son radical unipotent. 
L'image réciproque de 𝖴_σ dans G_σ^∘ est G_σ^+. Ceci fournit donc un isomorphisme G_σ/G_σ^+≃𝖦_σ≃𝖯_σ / 𝖴_σ. Considérons _σ^* un sous-groupe de Levi de 𝖦_x^* relevant _σ^*. Dans la section <ref>, nous avons défini une application φ__σ^*,𝖦^*_x : (_σ^*)_ss→ (𝖦^*_x)_ss. Cette application est indépendante du choix du relèvement de _σ^* et nous définit donc une application φ^*_σ,x: (𝖦^*_σ)_ss→ (𝖦^*_x)_ss. On définit alors pour s ∈ (𝖦^*_x)_ss d'ordre inversible dans Λ, l'idempotent e_σ^s,Λ:=∑_t∈φ_σ,x^*-1(s)e_σ^t,Λ.Soient x∈ BT_0, σ∈ BT tel que x ≤σ et s ∈ (𝖦^*_x)_ss d'ordre inversible dans Λ. Alors e_σ^+e_x^s,Λ=e_x^s,Λe_σ^+=e_σ^s,Λ.D'après la proposition <ref> on a e_s,Λ^𝖦_xe_𝖴_σ=e_𝖴_σe_s,Λ^𝖦_σ dans Λ[𝖦_x]. Lorsque l'on tire en arrière ces idempotents par l'isomorphisme G_x^∘ / G_x^+∼⟶𝖦_x, on obtient dans Λ[ G_x^∘/G_x^+ ] :e_x^s,Λe_σ^+=e_σ^+e_σ^s,Λ.Maintenant comme e_σ^+e_σ^s,Λ=e_σ^s,Λ et e_σ^+e_x^s,Λ=e_x^s,Λe_σ^+ on a le résultat.§.§ Systèmes 0-cohérents de classes de conjugaisonÀ partir d'un polysimplexe σ et d'une classe de conjugaison semi-simple s nous savons maintenant construire un idempotent e_σ^s,Λ. On décrit alors dans cette partie les conditions que l'on doit imposer pour que le système d'idempotents formé à partir des e_σ^s,Λ soit un système 0-cohérent.Soient g ∈ G et σ∈ BT, on a g G_σ^∘ g^-1=G_gσ^∘ et g G_σ^+ g^-1= G_g σ^+, d'où g (G_σ^∘ / G_σ^+ )g^-1≃ G_gσ^∘ / G_g σ^+. D'après <cit.> 4.6.30, la conjugaison par g se prolonge en un isomorphisme du 𝔬_k-schéma en groupes 𝒢_σ^∘ sur le 𝔬_k-schéma en groupes 𝒢_gσ^∘ et donc induit un isomorphisme φ_g,σ: _σ∼→_gσ. On obtient alors, comme dans l'annexe <ref>, un isomorphisme sur les classes de conjugaison semi-simples des groupes duauxφ^*_g,σ:(^*_σ)_ss∼⟶ (^*_gσ)_ss Pour σ∈ BT, on note(𝖦^*_σ)_ss,Λ={ s ∈ (𝖦^*_σ)_ss tel quessoit d'ordre inversible dans Λ} Soit S=(S_σ)_σ∈ BT un système d'ensembles de classes de conjugaison avec S_σ⊆ (𝖦^*_σ)_ss,Λ. On dit que S est 0-cohérent si * φ^*_g,x(S_x)=S_gx quel que soit x∈ BT_0 et g∈ G.* φ_σ,x^*-1(S_x)=S_σ pour x∈ BT_0 et σ∈ BT tels que x ≤σ.Soit S=(S_σ)_σ∈ BT un système 0-cohérent. Soit σ∈ BT, on définit alors e_σ^S,Λ=∑_s ∈ S_σ e_σ^s,Λ.Le système (e_σ^S,Λ)_σ∈ BT est 0-cohérent.Commençons par vérifier la condition 1. :Soient x ∈ BT_0 et g∈ G. On a la commutativité du diagramme suivantG_x^∘ / G_x^+@->[r]^-∼@->[d]^g 𝖦_x@->[d]^φ_g,xG_gx^∘ / G_gx^+@->[r]^-∼ 𝖦_gxLe lemme <ref> nous dit que φ_g,x(e_s,Λ^𝖦_x)=e_φ^*_g,x(s),Λ^𝖦_gx. Ainsige_x^S,Λg^-1=∑_s ∈ S_x ge_x^s,Λ g^-1=∑_s ∈ S_x e_gx^φ^*_g,x(s),Λ=∑_s ∈φ^*_g,x(S_x) e_gx^s,Λ=∑_s ∈ S_gx e_gx^s,Λ =e_gx^S,Λ Passons maintenant à la condition 2. : Soient x ∈ BT_0 et σ∈ BT tels que x ≤σ.e_σ^+e_x^S,Λ=∑_s ∈ S_x e_σ^+e_x^s,ΛPar la proposition <ref> on a e_σ^+e_x^s,Λ=∑_t∈φ^*-1_σ,x(s) e_σ^t,Λ. Donce_σ^+e_x^S,Λ=∑_s ∈ S_x∑_t∈φ^*-1_σ,x(s) e_σ^t,Λ=∑_t ∈φ_σ,x^*-1(S_x)e_σ^t,Λ=∑_t ∈ S_σ e_σ^t,Λ=e_σ^S,Λ On note Rep_Λ(G) la catégorie abélienne des représentations lisses de G à coefficients dans Λ. Notons Rep_Λ^0(G) la sous-catégorie des représentations de niveau 0, c'est à dire la sous-catégorie découpée par le système d'idempotents (e_x^+)_x ∈ BT_0.Soit S=(S_σ)_σ∈ BT un système 0-cohérent, il définit alors un système (e_σ^S,Λ)_σ∈ BT 0-cohérent et forme donc, d'après le théorème <ref>, une catégorie Rep_Λ^S(G). Soient S_1=(S_1,σ)_σ∈ BT et S_2=(S_2,σ)_σ∈ BT deux systèmes de classes de conjugaison. On définit alors S_1∪ S_2:=(S_1,σ∪ S_2,σ)_σ∈ BT et S_1∩ S_2:=(S_1,σ∩ S_2,σ)_σ∈ BT. On dit que S_2⊆ S_1 si pour tout σ∈ BT S_2,σ⊆ S_1,σ. 
Enfin, si S_2⊆ S_1, on note S_1\ S_2:=(S_1,σ\ S_2,σ)_σ∈ BT. Soient S_1 et S_2 deux systèmes 0-cohérents tels que S_1∩ S_2 = ∅. Alors les catégories Rep_Λ^S_1(G) et Rep_Λ^S_2(G) sont orthogonales.Soit V un objet de Rep_Λ^S_1(G). Nous devons montrer que pour tout sommet de l'immeuble x, on a e_x^S_2V=0. Fixons un tel x. Par définition V=∑_y ∈ BT_0 e_y^S_1 V, donc e_x^S_2V=∑_y ∈ BT_0 e_x^S_2 e_y^S_1 V. Soit y ∈ BT_0. Comme (e_σ^S_1)_σ∈ BT est 0-cohérent, on sait par <ref> que e_x^+e_y^S_1=e_x^S_1e_y^S_1 et on a quee_x^S_2e_y^S_1=e_x^S_2e_x^+e_y^S_1=e_x^S_2e_x^S_1e_y^S_1 Or si s et s' sont deux éléments distincts de (𝖦_x)_ss d'ordre inversible dans Λ, e_x^s,Λe_x^s',Λ=0 donc e_x^S_2e_x^S_1=0 et on a le résultat. Soient S_1,⋯,S_n des systèmes 0-cohérents tels que S_i∩ S_j = ∅ si i ≠ j et ⋃_i=1^n S_i = ((𝖦^*_σ)_ss,Λ)_σ∈ BT. Alors la catégorie de niveau 0 se décompose enRep_Λ^0(G) = ∏_i=1^n Rep_Λ^S_i(G)D'après le lemme <ref> nous savons déjà que les catégories Rep_Λ^S_i(G) sont deux à deux orthogonales. Prenons maintenant V un objet de Rep_Λ^0(G). Par définition, V=∑_x ∈ BT_0 e_x^+V. Fixons un sommet x ∈ BT_0. D'après <ref>, on a e_x^+=∑_s ∈ (𝖦_x)_ss e_x^s,ℚ_ℓ. Ainsi e_x^+=∑_i=1^n e_x^S_i. On en déduit que V=∑_x ∈ BT_0 e_x^+V=∑_x ∈ BT_0∑_i=1^n e_x^S_iV =∑_i=1^n(∑_x ∈ BT_0 e_x^S_iV). Or ∑_x ∈ BT_0 e_x^S_iV est un objet de Rep_Λ^S_i(G) d'après <cit.> proposition 3.2, d'où le résultat. § PARAMÈTRES DE L'INERTIE MODÉRÉS Dans toute cette section on suppose de plus que G est K-déployé. Cela signifie que 𝐆 est une forme intérieure d'un groupe non-ramifié. On souhaite obtenir une décomposition de Rep_Λ^0(G) indexée par les paramètres de l'inertie modérés ϕ. Pour cela on construit un procédé permettant d'associer à chaque ϕ un système de classes de conjugaison 0-cohérent. §.§ Classes de conjugaison dans ^* Commençons par définir les paramètres de l'inertie modérés et montrons que l'on peut décrire ceux-ci en terme de classes de conjugaison semi-simples dans ^*.On notera 𝒢_k=Gal(k/k) le groupe de Galois absolu de k, W_k le groupe de Weil absolu de k et I_k le sous-groupe d'inertie. Le groupe Γ:=𝒢_k/I_k est topologiquement engendré par un élément Frob dont l'inverse induit l'automorphisme x ↦ x^q sur 𝔉. Ainsi K=k^I_k et k=K^Frob. L'action de 𝒢_k sur 𝐆 donne une action de Γ sur 𝐆(K), complètement déterminée par un automorphisme F ∈ Aut(𝐆(K)) donné par l'action de Frob. On a alors G=𝐆(K)^F. Soit 𝐓 un k-tore maximal K-déployé maximalement déployé, alors I_k agit trivialement sur X_*(𝐓), le groupe des co-caractères de 𝐓, et l'action de 𝒢_k sur X_*(𝐓) se factorise à travers Γ. Notons alors ϑ l'automorphisme de X_*(𝐓) induit par F. La dualité entre X_*(𝐓) et X^*(𝐓) permet d'associer de façon naturelle à ϑ un automorphisme ϑ∈ Aut(X^*(𝐓)). Cet automorphisme s'étend alors alors en un automorphisme ϑ⊗ 1 de 𝐓(ℚ_ℓ):=X^*(𝐓)⊗ℚ_ℓ^× que nous noterons encore ϑ. Fixons un épinglage (𝐆,𝐁,𝐓,{x_α}_α∈Δ) de 𝐆 où 𝐁 est un Borel contenant 𝐓. Celui-ci permet de prolonger ϑ en un automorphisme ϑ∈ Aut(𝐆). On note P_k le groupe d'inertie sauvage, c'est à dire le pro-p sous-groupe maximal de I_k. Le groupe d'inertie modérée est le quotient I_k/P_k et le groupe de Weil modéré est le quotient W_k/P_k. On note W_k'=W_k⋉ℚ_ℓ le groupe de Weil-Deligne. 
Un morphisme φ : W'_k→^LG(ℚ_ℓ):=⟨ϑ⟩⋉G(ℚ_ℓ) est dit admissible si * Le diagramme suivant commute :W'_k@->[r]^φ@->[d]^LG(ℚ_ℓ) @->[d] ⟨Frob⟩@->[r]⟨ϑ⟩* φ est continue, φ(ℚ_ℓ) est unipotent dans G(ℚ_ℓ), et φ envoie W_k sur des éléments semi-simples de ^LG(ℚ_ℓ) (un élément de ^LG(ℚ_ℓ) est semi-simple si sa projection dans ⟨ϑ⟩ / n⟨ϑ⟩⋉G(ℚ_ℓ) est semi-simple, où n est l'ordre de ϑ).On note alors Φ(G) l'ensemble des morphismes admissibles φ : W'_k→^LG(ℚ_ℓ) modulo les automorphismes intérieurs par des éléments de G(ℚ_ℓ).Soit I un sous-groupe de W_k. On note alors Φ(I,G) l'ensemble des classes de G-conjugaison des morphismes continus I →^LG(ℚ_ℓ) (où ℚ_ℓ est muni de la topologie discrète) qui admettent une extension à un L-morphisme de Φ(G). Dans ce qui suit nous allons nous intéresser principalement aux paramètres de Langlands inertiels Φ(I_k,G).Si I contient P_k, l'inertie sauvage, on dit qu'un paramètre ϕ∈Φ(I,G) est modéré s'il est trivial sur P_k, et on note Φ_m(I,G) pour l'ensemble des paramètres de I modérés. Intéressons nous à Φ_m(I_k,G). Comme I_k/P_k est procyclique de pro-ordre premier à p un morphisme continu I_k/P_k→G(ℚ_ℓ) est donné par le choix d'un élément s d'ordre fini premier à p. Nous avons la décomposition W_k/P_k=⟨Frob⟩⋉ (I_k/P_k), où pour x ∈ (I_k/P_k), Frob ^-1xFrob = x^q. Un paramètre de Langlands doit envoyer Frob sur ϑf où f est un élément semi-simple de G(ℚ_ℓ). Un tel morphisme s'étend donc à W_k/P_k si Ad(f) ∘ϑ∘ s^q =s, où Ad(f) désigne la conjugaison par f. Ainsi à un paramètre inertiel modéréϕ∈Φ_m(I_k,G) on peut associer une classe de conjugaison semi-simple dans G(ℚ_ℓ) stable sous ϑ∘ψ où ψ est l'élévation à la puissance q-ième. On a donc une application Φ_m(I_k,G) → (G(ℚ_ℓ)_ss)^ϑ∘ψ. Or nous savons que ( T(ℚ_ℓ)/W_0 ) ∼⟶G(ℚ_ℓ)_ss, où W_0:=N(𝐓)/ 𝐓 désigne le groupe de Weyl de 𝐓, ce qui nous permet de définir l'application Φ_m(I_k,G) → ( T(ℚ_ℓ)/W_0 )^ϑ∘ψ. Réciproquement, prenons un élément de ( T(ℚ_ℓ)/W_0 )^ϑ∘ψ. Ceci nous fournit un élément semi-simple s tel que ϑ∘ψ(s)=w · s, où w∈ W_0. Soit f un relèvement de w, qui est alors un élément semi-simple, on a alors ϑ∘ψ(s)=Ad(f)(s), donc on obtient un ϕ∈Φ_m(I_k,G). Ceci nous montre que l'on a une correspondance Φ_m(I_k,G) ⟷ ( T(ℚ_ℓ)/W_0 )^ϑ∘ψ.Soit s ∈( T(ℚ_ℓ)/W_0 )^ϑ∘ψ, on peut représenter s par s=(a_1, ⋯, a_n), avec a_i∈ℚ_ℓ^× (T≃𝔾_m^n). Soit k∈ℕ, par définition on a ϑ^k(s^p^k)=s dans ( T(ℚ_ℓ)/W_0)^ϑ∘ψ. Comme ϑ est d'ordre fini, disons N, s^p^N=s dans ( T(ℚ_ℓ)/W_0)^ϑ∘ψ. Donc il existe w ∈ W_0 tel que (a_1^p^N, ⋯, a_n^p^N)=w· (a_1,⋯, a_n). Or W_0 est de cardinal fini, donc il existe k ∈ℕ^*, tel que (a_1^p^kN, ⋯, a_n^p^kN)=(a_1, ⋯, a_n). Ainsi ∀ i ∈{1,⋯,n}, a_i^p^kN-1=1. Les a_i sont donc des racines p'-ièmes de l'unité (racines de l'unité d'ordre premier à p).Notre groupe 𝐆 étant K-déployé, il possède une forme intérieure non-ramifiée. Cette dernière permet de définir sur ^*, le groupe dual de 𝐆 sur 𝔉, une 𝔣-structure (et donc un Frobenius F) en choissant un sommet hyperspécial dans l'immeuble. Le choix d'un système compatible de racines de l'unité (que l'on a fixé au début dans les notations) permet d'identifier ( 𝐓(ℚ_ℓ)/W_0 )^ϑ∘ψ⟷ (^*(𝔉)/W_0 )^ϑ∘ψ.(Nous rappelons que 𝐓 désigne le dual de 𝐓 sur ℚ_ℓ et ^* celui sur 𝔉. Ainsi 𝐓(ℚ_ℓ)=X^*(𝐓)⊗ℚ_ℓ^× et ^*(𝔉)=X^*(𝐓)⊗𝔉^×.) Or l'action de ϑ∘ψ sur ^*(𝔉) correspond à l'action du Frobenius F (voir annexe <ref>, ici ϑ=τ_X^-1). 
Ainsi ( ^*(𝔉)/W_0 )^ϑ∘ψ = ( ^*(𝔉)/W_0 )^F⟷ (^*(𝔉)_ss)^F.En résumé nous avons montréLa discussion précédente nous fournit une identification :Φ_m(I_k,𝐆) ⟷ (^*(𝔉)_ss)^F.Dans le but d'étudier les représentations à coefficients dans ℤ_ℓ nous avons besoin de restreindre Φ_m(I_k,𝐆). Introduisons I_k^(ℓ):=ker{I_k→ℤ_ℓ(1)} qui est le sous-groupe fermé maximal de I_k de pro-ordre premier à ℓ. Sous l'identification de la proposition <ref>, Φ_m(I_k^(ℓ),𝐆) correspond aux s ∈ ((𝔉)_ss)^F d'ordre premier à ℓ.Pour unifier les notations, notons I_k^Λ qui vaut I_k si Λ=ℚ_ℓ et I_k^(ℓ) si Λ = ℤ_ℓ. On obtient alorsL'identification de la proposition <ref> se restreint en :Φ_m(I_k^Λ,𝐆) ⟷{ s ∈ (^*(𝔉)_ss)^F, sd'ordre inversible dans Λ}. §.§ Classes de conjugaison dans les quotients réductifs des groupes parahoriques Nous venons de voir que l'on pouvait identifier les paramètres de l'inertie modérés avec des classes de conjugaison semi-simples dans ^*. Pour obtenir des systèmes 0-cohérents nous avons besoin de classes de conjugaison dans les quotients réductifs des groupes parahoriques. Nous construisons alors dans cette section un système d'applications compatibles ((^*_σ)_ss)^F→ (^*(𝔉)_ss)^F, pour σ∈ BT. Soit 𝐒 un tore déployé maximal, tel que σ∈𝒜(𝐒,k), où 𝒜(𝐒,k) est l'appartement associé à S dans BT. Notons 𝐓 un k-tore maximal K-déployé contenant 𝐒 (qui existe par <cit.> 5.1.12). De plus par <cit.> 2.6.1 𝒜(𝐒,k)=BT ∩𝒜(𝐓,K). Notons σ_1 l'image canonique de σ dans BT(K).Notons W le groupe de Weyl affine de 𝐆(K), W_0=N(𝐓)/𝐓, le groupe de Weyl de 𝐆(K) et W_σ_1 le groupe engendré par les réflexions des hyperplans contenant σ_1 dans BT(K). Nous avons W=W_0⋉ T/^∘T, où ^∘T désigne le sous-groupe borné maximal de T. De plus W_σ_1 est un sous-groupe de W, on a donc une application W_σ_1→ W → W_0. Le noyau du morphisme W=W_0⋉ T/^∘T → W_0 est un groupe sans torsion. Or W_σ_1 est un groupe fini donc l'application W_σ_1→ W_0 est injective et nous permet de voir W_σ_1 comme un sous-groupe de W_0.Par <cit.> 3.4.3, 𝒢_σ_1 est obtenu à partir de 𝒢_σ par changement de base. En particulier _σ_1=_σ×_𝔣𝔉. Le tore 𝐒 (resp. 𝐓) se prolonge en un tore de 𝒢_σ, 𝒮_σ (resp. 𝒯_σ), défini sur 𝔬_k de fibre générique 𝒮_σ,k=S (resp. 𝒯_σ,k=T). Notons 𝖲_σ (resp 𝖳_σ) la fibre spéciale de 𝒮_σ (resp 𝒯_σ). Alors 𝖳_σ est un tore maximal de 𝖦_σ défini sur 𝔣. De plus on a que _σ_1=_σ×_𝔣𝔉.Le groupe des caractères de 𝖳_σ_1, X^*(_σ_1), est canoniquement isomorphe à X=X^*(𝐓), on les identifiera désormais. De plus, par <cit.> 3.5.1, le groupe de Weyl de 𝖦_σ_1 associé à 𝖳_σ_1 est W_σ_1. L'action de W_σ_1 sur X^*(𝖳_σ_1) coïncide avec l'action de l'image de W_σ_1→ W_0 sur X^*(𝐓).On obtient alors :((^*_σ)_ss)^F≃( ^*_σ/W_σ_1)^F≃((X⊗_ℤ𝔉^×)/W_σ_1)^FLe morphisme W_σ_1→ W_0 induit((X⊗_ℤ𝔉^×)/W_σ_1)^F→((X⊗_ℤ𝔉^×)/W_0)^F.Et de même que précédemment, on a un isomorphisme((X⊗_ℤ𝔉^×)/W_0)^F≃ (^*(𝔉)_ss)^F.On vient donc de construire une applicationψ̃_σ : ((^*_σ)_ss)^F→ (^*(𝔉)_ss)^FL'application ψ̃_σ est indépendante du choix du tore S.Soit S' un autre tore déployé maximal tel que σ∈𝒜(S'). Nous utiliserons la notation ' pour les éléments se rapportant àS'. D'après <cit.> 4.6.28, G_σ_1^∘ permute transitivement les appartements de BT(K) contenantσ_1. Ainsi, T et T' sont conjugués par un élément g ∈ G_σ_1^∘, c'est à dire T'=g Tg^-1. Comme 𝐓 et 𝐓' sont deux k-tores, g vérifie que g^-1F(g) ∈ N(𝐆,𝐓), le normalisateur de 𝐓 dans 𝐆. La conjugaison par g, Ad(g), induit alors un isomorphisme X ⟶ X':=X^*(𝐓'). 
De plus comme elle envoie 𝒜(T,K) sur 𝒜(T',K) et que les morphismes W_σ_1⟶ W_0 et W_σ_1' ⟶ W_0' sont définis à partir des racines, on a le diagramme commutatifW_σ_1@->[r] @->[d]^Ad(g)W_0@->[d]^Ad(g)W_σ_1' @->[r] W_0'et donc le diagramme commutatif((X⊗_ℤ𝔉^×)/W_σ_1)^F@->[r] @->[d]^Ad(g) ((X⊗_ℤ𝔉^×)/W_0)^F@->[d]^Ad(g) ((X'⊗_ℤ𝔉^×)/W_σ_1')^F@->[r]((X'⊗_ℤ𝔉^×)/W_0')^F L'application G_σ_1^∘→_σenvoie g sur un élément que l'on note g. De plus nous savons que l'action par conjugaison par g qui envoie X_*(_σ) sur X_*('_σ) coïncide avec l'action par conjugaison par g qui envoie X sur X'.La conjugaison par g∈_σ d'un coté et par g ∈𝐆 de l'autre, induit les deux diagrammes commutatifs suivants (lemme <ref>) :((^*_σ)_ss)^F@->[r]^-∼@=[d]((X⊗_ℤ𝔉^×)/W_σ_1)^F@->[d]^Ad(g)((^*_σ)_ss)^F@->[r]^-∼ ((X'⊗_ℤ𝔉^×)/W_σ_1')^F((X⊗_ℤ𝔉^×)/W_0)^F@->[d]^Ad(g)@->[r]^-∼ (^*(𝔉)_ss)^F@=[d] ((X'⊗_ℤ𝔉^×)/W_0')^F@->[r]^-∼ (^*(𝔉)_ss)^FOn obtient alors que le diagramme suivant commute :((^*_σ)_ss)^F@->[r]^-∼@=[d]((X⊗_ℤ𝔉^×)/W_σ_1)^F@->[r] @->[d]^Ad(g) ((X⊗_ℤ𝔉^×)/W_0)^F@->[d]^Ad(g)@->[r]^-∼ (^*(𝔉)_ss)^F@=[d]((^*_σ)_ss)^F@->[r]^-∼ ((X'⊗_ℤ𝔉^×)/W_σ_1')^F@->[r]((X'⊗_ℤ𝔉^×)/W_0')^F@->[r]^-∼ (^*(𝔉)_ss)^FCe qui nous montre le résultat. §.§ Classes de conjugaison dans un groupe fini La partie précédente nous fournit des classes de conjugaison semi-simples géométriques d'un groupe réductif connexe fini. Nous sommes plus intéressé par des classes de conjugaison rationnelles. On rappelle alors ici le lien entre les deux.Dans cette sous-sectiondésigne un groupe réductif connexe défini sur 𝔣. Pour un élément semi-simple x ∈, on note [x] sa classe de conjugaison, [x] ∈_ss.Soit s ∈ (_ss)^F. Alors il existe x ∈:=^F tel que s=[x].s=[y] avec y ∈. La classe de conjugaison s étant F-stable, il existe g ∈ tel que F(y)=g^-1yg. L'application de Lang, Lan:→ définie par Lan(g)=g^-1F(g) est surjective d'après <cit.> Théorème 7.1. Ainsi, il existe h ∈ tel que g=h^-1F(h). AlorsF(hyh^-1)=F(h)F(y)F(h)^-1=F(h)g^-1ygF(h)^-1=hyh^-1Ainsi x=hyh^-1 convient.L'application 𝖦_ss↠ (_ss)^F est surjective.§.§ Systèmes 0-cohérents de classes de conjugaison associés aux paramètres de l'inertie modérés On met bout à bout les résultats des sous-sections précédentes pour obtenir une application qui à un paramètre inertiel modéré associe un système de classes de conjugaison 0-cohérent.En composant la proposition <ref>, l'application ψ̃_σ et la proposition <ref>, on obtient une applicationψ_σ : (𝖦^*_σ)_ss⟶ ((^*_σ)_ss)^Fψ̃_σ⟶ (^*(𝔉)_ss)^F∼⟶Φ_m(I_k,G) Soient σ, ω∈ BT tels que σ≤ω. Nous avons vu que 𝖦_ω est un Levi de 𝖦_σ. Ceci nous donne donc, comme dans la section <ref>, une application φ^*_ω,σ:(𝖦^*_ω)_ss→ (𝖦^*_σ)_ss.Soient σ, ω∈ BT tels que σ≤ω. Alorsψ_ω=ψ_σ∘φ^*_ω,σ W_ω_1 est le groupe engendré par les réflexions des hyperplans contenant ω_1 où ω_1 est l'image canonique de ω dans BT(K). Or σ_1≤ω_1, donc un hyperplan contenant ω_1 contient aussi σ_1 et W_ω_1 est un sous-groupe de W_σ_1. Ainsi le diagramme commutatif W_ω_1@->[r] @->[d]W_0W_σ_1@->[ur]induit le diagramme commutatif((X⊗_ℤ𝔉^×)/W_ω_1)^F@->[r] @->[d] ((X⊗_ℤ𝔉^×)/W_0)^F ((X⊗_ℤ𝔉^×)/W_σ_1)^F@->[ur]D'où la commutativité de(𝖦^*_ω)_ss@->[r] @->[d]^φ^*_ω,σ((^*_ω)_ss)^F@->[r]^-∼@->[d]^φ_^*_ω,^*_σ ((X⊗_ℤ𝔉^×)/W_ω_1)^F@->[r] @->[d] ((X⊗_ℤ𝔉^×)/W_0)^F(𝖦^*_σ)_ss@->[r] ((^*_σ)_ss)^F@->[r]^-∼ ((X⊗_ℤ𝔉^×)/W_σ_1)^F@->[ur]et on a le résultat voulu. 
Soient g ∈ G et σ∈ BT, nous avons déjà vu (au début de la section <ref>) que la conjugaison par g induisait deux applicationsφ^*_g,σ:(^*_σ)_ss⟶ (^*_gσ)_ss φ^*_g,σ:((^*_σ)_ss)^F⟶ ((^*_gσ)_ss)^FSoient g ∈ G et σ∈ BT alorsψ_σ = ψ_g σ∘φ^*_g,σ Soit 𝐒 un tore déployé maximal tel que σ∈𝒜(𝐒). Alors si l'on pose 𝐒'=Ad(g)(𝐒), 𝐒' est un tore déployé maximal tel que gσ∈𝒜(𝐒'). La conjugaison par g induit un isomorphisme de X vers X'. Le lemme <ref> nous donne le diagramme commutatif suivant :((^*_σ)_ss)^F@->[r]^-∼@->[d]^φ^*_g,σ ((X⊗_ℤ𝔉^×)/W_σ_1)^F@->[d]^Ad(g)((^*_gσ)_ss)^F@->[r]^-∼ ((X'⊗_ℤ𝔉^×)/W_gσ_1)^F La conjugaison par g envoie les racines affines pour 𝐒 s'annulant sur σ_1 sur les racines affines pour 𝐒' s'annulant sur gσ_1. On a donc le diagramme commutatif suivant W_σ_1@->[r] @->[d]^Ad(g)W_0@->[d]^Ad(g)W_gσ_1@->[r] W_0'et((X⊗_ℤ𝔉^×)/W_σ_1)^F@->[r] @->[d]^Ad(g) ((X⊗_ℤ𝔉^×)/W_0)^F@->[d]^Ad(g) ((X'⊗_ℤ𝔉^×)/W_gσ_1)^F@->[r]((X'⊗_ℤ𝔉^×)/W_0')^F Enfin la conjugaison par g étant un isomorphisme intérieur sur G, le diagramme ci-dessous commute (lemme <ref>) ((X⊗_ℤ𝔉^×)/W_0)^F@->[d]^Ad(g)@->[r]^-∼ (^*(𝔉)_ss)^F@=[d] ((X'⊗_ℤ𝔉^×)/W_0')^F@->[r]^-∼ (^*(𝔉)_ss)^F Mis bout à bout ces diagrammes donnent la commutativité de(^*_σ)_ss@->[d] @->[r]^φ^*_g,σ(^*_gσ)_ss@->[d] ((^*_σ)_ss)^F@->[d]^-∼@->[r]^φ^*_g,σ ((^*_gσ)_ss)^F@->[d]^-∼ ((X⊗_ℤ𝔉^×)/W_σ_1)^F@->[d] @->[r]^Ad(g) ((X'⊗_ℤ𝔉^×)/W_gσ_1)^F@->[d] ((X⊗_ℤ𝔉^×)/W_0)^F@->[r]^Ad(g)@->[d]^-∼ ((X'⊗_ℤ𝔉^×)/W_0')^F@->[d]^-∼(^*(𝔉)_ss)^F@=[r] (^*(𝔉)_ss)^Fce qui finit la preuve. Construisons maintenant un système 0-cohérent de classes de conjugaison. Soient ϕ∈Φ_m(I_k^Λ,𝐆) et σ∈ BT. On définit le système de classes de conjugaison S_ϕ=(S_ϕ,σ)_σ∈ BT parS_ϕ,σ= ψ_σ^-1(ϕ) Soit ϕ∈Φ_m(I_k^Λ,𝐆). Le système S_ϕ est 0-cohérent. La condition 1. de <ref> est vérifiée par <ref> et la condition 2. par <ref>. Ainsi par la proposition <ref>, si l'on note Rep_Λ^ϕ(G):=Rep_Λ^S_ϕ(G), alorsSoit 𝐆 un groupe réductif connexe défini sur k et K-déployé. Alors la catégorie de niveau 0 se décompose enRep_Λ^0(G) = ∏_ϕ∈Φ_m(I_k^Λ,𝐆) Rep_Λ^ϕ(G)Notons que si 𝐆 est quasi-déployé alors il est non-ramifié et possède donc un sommet hyperspécial o. Dans ce cas, l'application ψ̃_o est bijective, donc ψ_o est surjective et Rep_Λ^ϕ(G) est non vide pour tout ϕ∈Φ_m(I_k^Λ,𝐆). Cependant, lorsque 𝐆 n'est pas quasi-déployé, les catégories Rep_Λ^ϕ(G) peuvent être vides. Nous devons rajouter une condition de "relevance" pour avoir Rep_Λ^ϕ(G) non vide, ce que nous détaillerons dans la partie <ref>. § PROPRIÉTÉS DE REP_Λ^Φ(G) Fixons dans toute cette section un paramètre inertiel modéré ϕ∈Φ_m(I_k^Λ,𝐆). Le but de cette section est d'étudier quelques propriétés vérifiées par Rep_Λ^ϕ(G). Rappelons qu'à ϕ nous avons associé dans la partie <ref> un système 0-cohérent de classes de conjugaison S_ϕ, qui permet de définir e_ϕ=(e_ϕ,x)_x ∈ BT_0 un système0-cohérent d'idempotents défini par e_ϕ,x=∑_s ∈ S_ϕ,x e_x^s,Λ. §.§ Lien entre les décompositions sur ℤ_ℓ et ℚ_ℓ Au vu de la construction de Rep_Λ^ϕ(G) il est assez simple de comprendre le lien entre Λ = ℤ_ℓ et Λ=ℚ_ℓ ce que nous faisons ici.Considérons ici que ϕ∈Φ_m(I_k^(ℓ),𝐆). Soit x ∈ BT_0 et notons S'_ϕ,x l'ensemble des s' ∈ (𝖦^*_x)_ss dont s la partie ℓ-régulière de s' est dans S_ϕ,x. Alors par construction, e_ϕ,x=∑_s ∈ S_ϕ,x e_x^s,ℤ_ℓ=∑_s'∈ S'_ϕ,x e_x^s,ℚ_ℓ. Prenons s' ∈ (𝖦^*_x)_ss et nommons ϕ' ∈Φ_m(I_k,𝐆) le paramètre inertiel qui lui est associé, c'est à dire ϕ':=ψ_x(s'). Soit s ∈ S_ϕ,x (donc ψ_x(s)=ϕ), s est la partie ℓ-régulière de s' si et seulement si ϕ'_| I_k^(ℓ)∼ϕ. 
Le lien entre les décompositions sur ℤ_ℓ et ℚ_ℓ est alors clairSoit ϕ∈Φ_m(I_k^(ℓ),𝐆), alorsRep_ℤ_ℓ^ϕ(G) ∩ Rep_ℚ_ℓ(G) = ∏_ϕ' Rep_ℚ_ℓ^ϕ'(G)où le produit est pris sur les ϕ' ∈Φ_m(I_k,𝐆) tels que ϕ'_| I_k^(ℓ)∼ϕ.§.§ Représentations irréductibles de Rep_Λ^ϕ(G) Nous souhaitons dans cette partie décrire les représentations irréductibles qui sont dans Rep_Λ^ϕ(G).Soit 𝐓 un tore maximal non-ramifié de 𝐆 (𝐓 est un k-tore K-déployé maximal de 𝐆). Nommons 𝐓_0 le tore de référence utilisé pour définir ϑ et 𝐆. Le tore 𝐓 étant non-ramifié il existe g ∈ G^nr tel que T^nr=^gT_0^nr. Dans ce cas g^-1F(g) ∈ N(T_0^nr,G^nr) et définit un élément w ∈ W_0. Ainsi ^L𝐓≃⟨ wϑ⟩⋉𝐓_0(ℚ_ℓ). Le choix d'un relèvement ẇ∈ N(𝐓_0,𝐆) de w permet alors de définir un plongement ^L𝐓↪^L𝐆 par 𝐓_0(ℚ_ℓ) ⊆𝐆(ℚ_ℓ) et wϑ↦ (ẇ,ϑ). Ce plongement dépend (même à 𝐆(ℚ_ℓ)-conjugaison près) du choix du relèvement de w. Il induit cependant une applicationι : Φ_m(I_k,𝐓) →Φ_m(I_k,𝐆)qui elle est indépendante des choix effectués car les paramètres inertiels sont à valeurs dans 𝐓_0(ℚ_ℓ) (ou 𝐆(ℚ_ℓ)). Soit ϕ_𝐓∈Φ_m(I_k,𝐓). Notons X:=X^*(𝐓). Nous avons vu dans les sections <ref> et <ref> que l'on a une bijection Φ_m(I_k,𝐓) ≃ (X ⊗_ℤ𝔉^×)^F. Soit x ∈𝒜(𝐓,K) ∩ BT_0. Nous savons que l'on a également un isomorphisme (X ⊗_ℤ𝔉^×)^F≃ (_x^*)^F≃ Hom(_x^F,ℚ_ℓ^×). On associe donc à ϕ_𝐓 de manière bijective un caractère θ_𝐓:_x^F→ℚ_ℓ^×, qui se relève en un caractère de niveau 0 : θ_𝐓:^0T^F→ℚ_ℓ^×. L'association qui à ϕ_𝐓 donne θ_𝐓 est alors la correspondance de Langlands locale pour les tores restreinte à l'inertie.Soit π∈Irr_ℚ_ℓ(G). Alors π∈ Rep_ℚ_ℓ^ϕ(G) si et seulement s'il existe 𝐓 un tore maximal non-ramifié de 𝐆, ϕ_𝐓∈Φ_m(I_k,𝐓) et x ∈𝒜(𝐓,K) ∩ BT_0 tels que ι(ϕ_𝐓) ∼ϕ et ⟨π^G_x^+, ℛ__x^_x(θ_𝐓) ⟩≠ 0 (où π^G_x^+ est vue comme une représentation de _x≃ G_x^∘/G_x^+ et ℛ__x^_x désigne l'induction de Deligne-Lusztig).Par définition de Rep_ℚ_ℓ^ϕ(G), comme π est une représentation irréductible, π∈ Rep_ℚ_ℓ^ϕ(G) si et seulement s'il existe x ∈ BT_0 tel que e_ϕ,xπ^G_x^+≠ 0. Soit x ∈ BT_0, alors par construction e_ϕ,xπ^G_x^+≠ 0 est équivalent à l'existence d'une classe de conjugaison rationnelle semi-simple s∈ S_ϕ,x telle que e_s,ℚ_ℓ^_xπ^G_x^+≠ 0. Soit _x un 𝔣-tore maximal de _x tel que s ∈ (_x^*)^F. Relevons _x en 𝐓 un tore maximal non-ramifié de 𝐆. Nous avons que Φ_m(I_k,𝐓) ≃ (X ⊗_ℤ𝔉^×)^F≃ (_x^*)^F et donc s correspond à ϕ_𝐓 un paramètre inertiel modéré de 𝐓. La discussion qui précède le théorème montre que s est également associé au caractère θ_𝐓:_x^F→ℚ_ℓ^×. La section <ref> nous dit alors que e_s,ℚ_ℓ^_xπ^G_x^+≠ 0 si et seulement si ⟨π^G_x^+, ℛ__x^_x(θ_𝐓) ⟩≠ 0.On vient donc de montrer que π∈ Rep_ℚ_ℓ^ϕ(G) si et seulement s'il existe 𝐓 un tore maximal non-ramifié de 𝐆,x ∈𝒜(𝐓,K) ∩ BT_0, ϕ_𝐓∈Φ_m(I_k,𝐓) correspondant à s ∈ S_ϕ,x tels que ⟨π^G_x^+, ℛ__x^_x(θ_𝐓)⟩≠ 0. Pour achever la preuve du théorème il ne nous reste donc qu'à montrer que ϕ = ι(ϕ_𝐓). Or cela découle du diagramme commutatif suivant (^*_x)^F@->[d] @->[r] (^*_x)_ss@->[d](X⊗_ℤ𝔉^×)^F@->[d] @->[r]((X⊗_ℤ𝔉^×)/W_x)^F@->[d](X⊗_ℤ𝔉^×)^F@->[r] @->[d]^-∼ ((X⊗_ℤ𝔉^×)/W_0)^F@->[d]^-∼(^*(𝔉))^F@->[r] @->[d]^-∼ (^*(𝔉)_ss)^F@->[d]^-∼ Φ_m(I_k,𝐓) @->[r]^ι Φ_m(I_k,𝐆)Notons que le théorème précédent n'est énoncé que pour Λ=ℚ_ℓ puisque l'on peut en déduire une description de Irr_ℚ_ℓ(G) ∩ Rep_ℤ_ℓ^ϕ(G) grâce à la proposition <ref>. 
Notons également que pour Λ=ℤ_ℓ, les objets simples de Rep_ℤ_ℓ^ϕ(G) sont * Les objets simples de caractéristique 0 qui sont les π∈Irr_ℚ_ℓ(G) ∩ Rep_ℤ_ℓ^ϕ(G) qui ne sont pas entières.* Les objets simples de caractéristique ℓ qui sont les sous-quotients simples des réductions modulo ℓ des π∈Irr_ℚ_ℓ(G) ∩ Rep_ℤ_ℓ^ϕ(G) qui sont entières (voir le lemme 6.8 de <cit.>, les hypothèses peuvent être supprimées ici car on est en niveau 0).§.§ Condition de relevance Nous avons noté précédemment que si 𝐆 n'est pas quasi-déployé alors les catégories Rep_Λ^ϕ(G) peuvent être vides. Nous allons montrer dans cette partie que Rep_Λ^ϕ(G) est non vide si et seulement si ϕ est relevant, au sens suivantSoit ϕ∈Φ(I_k^Λ,𝐆) un paramètre inertiel. On dit que ϕ est relevant s'il existe φ' ∈Φ(𝐆) une extension de ϕ à W_k' qui est relevant, c'est à dire que si l'image de φ' est contenue dans un Levi de ^L𝐆 alors ce dernier est relevant (au sens de <cit.> 3.4).Soit φ∈Φ(W_k,𝐆) et posons ϕ:=φ_|I_k^Λ. Pour w ∈ W_k, l'action par conjugaison de φ(w) normalise ϕ(I_k^Λ) donc normalise également C_𝐆(ϕ)^∘, le centralisateur connexe de l'image de ϕ dans 𝐆. On définit alors ℳ_φ:=C_^L𝐆(Z(C_𝐆(ϕ)^∘)^φ(W_k),∘)qui est un Levi de ^L𝐆 dont la partie connexe est M_φ:=C_𝐆(Z(C_𝐆(ϕ)^∘)^φ(W_k),∘).Soit φ∈Φ(W_k,𝐆). Alors toute extension φ' ∈Φ(𝐆) de φ à W_k' se factorise par ℳ_φ. De plus il existe un φ' ∈Φ(𝐆) étendant ϕ:=φ_|I_k ne se factorisant par aucun sous Levi propre de ℳ_φ.Ici, on écrira plutôt W_k' sous la forme W_k × SL_2. On prendra garde cependant à prendre la bonne "restriction" de W_k × SL_2 à W_k qui est donnée par le plongement W_k ↪ W_k × SL_2, w ↦ (w,diag(|w|^1/2,|w|^-1/2)). Néanmoins, cela ne fait pas de différence lorsque l'on prend les restrictions à l'inertie. Prenons φ' ∈Φ(G) une extension de φ. Par définition ℳ_φ contient φ(W_k). De plus, φ'(SL_2) est contenue dans C_𝐆(ϕ)^∘ donc φ'(SL_2) ⊆ℳ_φ et par conséquent φ'(W_k') ⊆ℳ_φ. Construisons maintenant un φ' ne se factorisant par aucun sous Levi propre de ℳ_φ. Un Levi minimal de ℳ_φ factorisant φ' est obtenu en prenant le centralisateur dans ℳ_φ d'un tore maximal de C_M_φ(φ')^∘. Ainsi pour prouver la propriété demandée, il nous suffit de fabriquer un φ' tel que C_M_φ(φ')^∘⊆ Z(ℳ_φ). Soit ϕ∈Φ(I_k,𝐆). Prenons φ étendant ϕ tel que l'automorphisme semi-simple θ de conjugaison par φ(Frob) préserve un épinglage (C_𝐆(ϕ)^∘,𝐁,𝐓,{x_α}_α∈Δ) de C_𝐆(ϕ)^∘. On définit φ'_|W_k=φ (ici on considère la restriction naïve de W_k × SL_2 à W_k) et φ'_|SL_2:SL_2→ C_𝐆(φ)^∘ le morphisme principal de SL_2 à valeur dans C_𝐆(φ)^∘ associé à l'épinglage choisi. Nous avons alors que C_𝐆(φ')^∘=C_ C_𝐆(φ)^∘(φ'_|SL_2)^∘ = Z(C_𝐆(φ)^∘)^∘. Pour achever la preuve il ne reste donc qu'à montrer que Z(C_𝐆(φ)^∘)^∘ = Z(C_𝐆(ϕ)^∘)^φ(W_k),∘. En effet, on aura alors le résultat voulu puisque C_M_φ(φ')^∘⊆ C_𝐆(φ')^∘ = Z(C_𝐆(ϕ)^∘)^φ(W_k),∘⊆ Z(ℳ_φ). Notons que C_𝐆(ϕ)=C_𝐆(φ)^θ. Pour simplifier les notations on pose H=C_𝐆(ϕ). Il nous reste donc à prouver que Z(H^θ,∘)^∘=Z(H^∘)^θ,∘. Calculons les centres ici présents. Nous avons que Z(H^∘) = ∩_α∈Δ(α) et par conséquent Z(H^∘)^θ,∘ = ((∩_α∈Δ(α)) ∩ T^θ,∘)^∘. Comme θ préserve un épinglage, on a également grâce au théorème 1.8 (v) de <cit.>, Z(H^θ,∘) = ∩_α∈Δ/θ(α_|T^θ,∘) = ∩_α∈Δ/θ(α) ∩ T^θ,∘, d'où le résultat voulu. On appelle tore maximal de ^L𝐆 un sous-groupe 𝒯 de ^L𝐆 qui se surjecte sur ⟨ϑ⟩ et dont l'intersection 𝒯^∘ avec 𝐆 est un tore maximal de 𝐆. Pour un tel tore, on notera 𝐓:=𝒯^∘ sa partie connexe. Nous avons lasuite exacte suivante : 𝐓↪𝒯↠⟨ϑ⟩. 
Le tore 𝒯 agit par conjugaison sur 𝐓 et donc, on en déduit une action de ⟨ϑ⟩ sur 𝐓 qui nous permet de définir une k-structure sur 𝐓 le dual de 𝐓. On dira qu'un tore maximal 𝒯 est relevant si le plongement 𝐓↪𝐆 correspond dualement à un k-plongement 𝐓↪𝐆. Enfin, on dira que 𝒯 est elliptique dans ^L𝐆 si 𝒯 n'est contenu dans aucun Levi propre ℳ de ^L𝐆 ou de façon équivalente si Z(𝒯)^∘=Z(^L𝐆)^∘.Soit ℳ un Levi de ^L𝐆.* Si ℳ contient 𝒯, un tore maximal relevant, alors ℳ est relevant.* Si ℳ est relevant et 𝒯 est un tore maximal elliptique de ℳ alors 𝒯 est relevant.* (Dat) Notons (ℳ^∘)_ab l'abélianisé de ℳ^∘ qui est un tore. Le groupe ℳ agit par conjugaison sur ℳ^∘ donc sur (ℳ^∘)_ab. Cette action est triviale sur ℳ^∘ donc nous donne une action de ⟨ϑ⟩ sur (ℳ^∘)_ab. Le plongement 𝒯^∘→ℳ^∘ induit alors un morphisme ⟨ϑ⟩-équivariant 𝒯^∘→ (ℳ^∘)_ab. On obtient ainsi dualement un k-plongement 𝐒↪𝐓. Si 𝒯 est relevant, on peut choisir un plongement 𝐓↪𝐆 rationnel, et alors C_𝐆(𝐒) est un Levi rationnel, dual de ℳ, qui est donc relevant. * Le tore 𝒯 de ℳ nous fournit dualement un plongement 𝐓↪𝐌_qd, où 𝐌_qd est la forme quasi-déployée de 𝐌, un k-sous-groupe de Levi de 𝐆 dual de 𝐌. Comme 𝒯 est elliptique, le rang déployé de 𝐓 est le même que celui du centre de 𝐌_qd, et par conséquent 𝐓 est elliptique. Ainsi, il se plonge dans toutes les formes intérieures de 𝐌_qd (voir par exemple <cit.> lemme 3.2.1) donc en particulier dans 𝐌 et donc 𝒯 est relevant.Soit φ∈Φ(W_k,𝐆). Notons ϕ=φ_|I_k et posons 𝒞:=C_𝐆(ϕ)^∘φ(W_k) qui est un sous-groupe de ^L𝐆. Alors il existe 𝒯⊆𝒞 un sous-tore maximal de 𝒞 tel que Z(𝒯)^∘=Z(𝒞)^∘.Notons θ la conjugaison par φ(Frob) qui est un automorphisme semi-simple de C:=C_𝐆(ϕ)^∘. Prenons alors (𝐓,𝐁) une paire de Borel deC_𝐆(ϕ)^∘ stable sous θ (existe par <cit.> théorème 7.5). Nous pouvons écrire 𝒞 sous la forme 𝒞:=C ⋊⟨θ⟩. On forme alors 𝒯:=𝐓⋊⟨θ⟩ qui est un tore maximal de 𝒞. Nous allons modifier 𝒯 pour le rendre elliptique. Toute section η de la suite exacte N_C(𝐓) ↪ N_𝒞(𝐓) ↠⟨θ⟩ (la notation N_C(𝐓) signifie le normalisateur de 𝐓 dans C et idem pour N_𝒞(𝐓) avec 𝒞) nous permet de définir un tore maximal 𝒯_η par 𝒯_η:=𝐓·η(⟨θ⟩). Une section η est donnée par η(θ)=n_θθ où n_θ∈ N_C(𝐓). On prend alors pour n_θ un élément de θ-coxeter, c'est à dire un élément du groupe de Weyl formé en prenant un produit de réflexions simples, une pour chaque orbite sous θ. Il découle alors du lemme 7.4 (i) de <cit.> que le tore que l'on obtient est elliptique, ce qui achève la preuve.Rappelons nous que dans la section <ref> nous avons fixé un tore maximal de référence 𝐓_0, qui nous a permis d'associer à 𝐓, un tore maximal non-ramifié de 𝐆, un élément w ∈ W_0, un tore ^L𝐓=𝐓_0⋊⟨ w ϑ⟩ et un plongement ι : ^L𝐓↪^L𝐆 en choisissant un relèvement ẇ∈ N(𝐓_0,𝐆) de w.Soit ϕ∈Φ_m(I_k,𝐆). Alors les propositions suivantes sont équivalentes : * ϕ est relevant* Il existe 𝐓 un tore maximal non-ramifié de 𝐆 et ϕ_T∈Φ_m(I_k,𝐓) tel que ϕ∼ι∘ϕ_T où ι : ^L𝐓↪^L𝐆* Rep_ℚ_ℓ^ϕ(G) est non videL'équivalence (2) ⇔ (3) est donnée par le théorème <ref>. Montrons (1) ⇔ (2). Supposons ϕ relevant. Par définition, il existe φ':W_k' →^L𝐆 relevant qui étend ϕ. Notons φ = φ'_|W_k. Alors le lemme <ref> nous dit que ℳ_φ factorise φ' et est donc un Levi relevant. Le lemme <ref> nous fournit 𝒯 un tore maximal de 𝒞:=C_𝐆(ϕ)^∘φ(W_k) tel que Z(𝒯)^∘=Z(𝒞)^∘. Comme 𝒞⊆ℳ_φ on a également que 𝒯 est un tore maximal de ℳ_φ. 
Maintenant Z(𝒯)^∘=Z(𝒞)^∘⊆ Z(ℳ_φ)^∘ et comme 𝒯 est un tore maximal de ℳ_φ on a aussi Z(ℳ_φ)^∘⊆ Z(𝒯)^∘ et par conséquent Z(ℳ_φ)^∘ = Z(𝒯)^∘, c'est à dire que 𝒯 est un tore maximal elliptique de ℳ_φ. Le lemme <ref> (2) nous dit alors que 𝒯 est un tore relevant. De plus comme 𝒯⊆𝒞, ce tore factorise ϕ et l'on a (2).Supposons maintenant qu'il existe 𝐓⊆𝐆 un tore maximal non-ramifié de 𝐆 et ϕ_T∈Φ_m(I_k,𝐓) tel que ϕ∼ι∘ϕ_T. On rappelle que ^L𝐓=𝐓_0⋊⟨ w ϑ⟩. Le paramètre ϕ_T se prolonge en φ_T∈Φ(W_k,𝐓) et on pose φ = ι∘φ_T∈Φ(W_k,G) qui est une extension de ϕ à W_k. Ainsi Im(ϕ) ⊆T_0, donc T_0⊆ C_G(ϕ)^∘ et donc T_0⊆ℳ_ϕ. Nous avons également que ẇϑ = φ(Frob) ∈ℳ_ϕ. Ainsi ι(^L𝐓) ⊆ℳ_ϕ et le lemme <ref> (1) nous dit alors que ℳ_ϕ est relevant. De plus, le lemme <ref> nous construit φ' ∈Φ(G) un paramètre étendant ϕ et tel que ℳ_φ soit un Levi minimal contenant son image. Comme ℳ_φ est relevant, on en déduit que φ' est relevant et donc que ϕ est relevant.Soit ϕ∈Φ_m(I_k^Λ,𝐆). Alors Rep_Λ^ϕ(G) est non vide si et seulement si ϕ est relevant.La proposition <ref> nous donne le résultat lorsque Λ=ℚ_ℓ. Pour le cas général, notons que par la proposition <ref>, Rep_Λ^ϕ(G) est non vide si et seulement s'il existe ϕ' ∈Φ_m(I_k,𝐆) tel que ϕ'_|I_k^Λ∼ϕ et Rep_Λ^ϕ(G) est non vide, si et seulement s'il existe ϕ' ∈Φ_m(I_k,𝐆) relevant prolongeant ϕ, si et seulement si ϕ est relevant.§.§ Compatibilité à l'induction et à la restriction parabolique Cette sous-partie à pour but d'étudier le comportement des catégories Rep_Λ^ϕ(G) vis à vis de l'induction et de la restriction parabolique.Jusqu'à présent, nous avons considéré l'immeuble de Bruhat-Tits semi-simple, puisque celui-ci est muni d'une structure de complexe polysimplicial. Cependant dans cette section, nous souhaitons comparer l'immeuble d'un Levi et celui de G. L'immeuble deBruhat-Tits "étendu" semble alors plus approprié. Cela ne fait pas une grosse différence. En effet nous traitions la structure polysimpliciale de façon purement combinatoire. De plus, les idempotents e_ϕ,σ auraient très bien pu être définis pour un point quelconque x de l'immeuble, on aurait alors eu que e_ϕ,x=e_ϕ,σ, où σ est le plus petit polysimplexe contenant x. Ainsi, dans cette partie seulement, on utilisera l'immeuble deBruhat-Tits "étendu", que l'on notera BT^e(G).Soit 𝐏 un k-sous-groupe parabolique de 𝐆 de quotient de Levi 𝐌 défini sur k. Prenons 𝐒 un tore déployé maximal de 𝐆 contenu dans 𝐏 et notons 𝐓 son centralisateur dans 𝐆. Il existe alors un unique relèvement de 𝐌 en un sous-groupe de 𝐆 contenant 𝐓. Notons φ le système de racines de G relativement à S et φ_M⊆φ celui de M. L'appartement 𝒜_M^e(𝐒,k) de BT^e(M) relativement à 𝐒 est égal à 𝒜^e:=𝒜^e(𝐒,k) mais en ne gardant que les murs associés aux racines affines dont la partie vectorielle est dans φ_M. Soit x ∈𝒜, alors M_x^∘=M ∩ G_x^∘ et M_x^+=M ∩ G_x^+ (voir <cit.> section 4.3). Rappelons que l'on a déjà défini _x≃ M_x^∘/M_x^+ et posons _x l'image de P_x:=P ∩ G_x^∘ dans _x. _x est un sous-groupe parabolique de 𝖦_x de quotient de Levi _x.Notons φ_P le sous-ensemble de φ des racines α telles que P soit engendré par T et les U_α, α∈φ_P. Notons maintenant φ_x (resp. φ_P,x, resp. φ_M,x) les racines affines passant par x et dont la partie vectorielle est dans φ (resp. φ_P, resp. φ_M). Alors φ_x (resp. φ_P,x, resp. φ_M,x) est le système de racine de _x (resp. _x, resp. _x) relativement à S_x. Choisissons maintenant une forme linéaire f : X^*(𝐒) →ℝ telle que φ_P={α∈φ , f(α) ≥ 0} et φ_M={α∈φ, f(α)=0}. 
Les sous-ensembles φ_P,x et φ_M,x vérifient alors φ_P,x={α∈φ_x , f(α) ≥ 0} et φ_M,x={α∈φ_x, f(α)=0}, ce qui montre que 𝖯_x est bien un parabolique de 𝖦_x de quotient de Levi 𝖬_x.Considérons 𝐌 un dual de 𝐌 sur ℚ_ℓ muni d'un plongement ι:^L𝐌↪^L𝐆 (voir <cit.> section 3.4), qui induit une application Φ_m(I_k^Λ,𝐌) →Φ_m(I_k^Λ,𝐆). Commençons par vérifier la compatibilité à la restriction parabolique.Soit ϕ∈Φ_m(I_k^Λ,𝐆). Alors pour tout sous-groupe parabolique 𝐏 de 𝐆 ayant pour facteur de Levi 𝐌, on ar_P^G( Rep_Λ^ϕ(G) ) ⊆∏_ϕ_M Rep_Λ^ϕ_M(G)où le produit est pris sur les ϕ_M∈Φ_m(I_k^Λ,𝐌) tels que ι∘ϕ_M∼ϕ et r_P^G désigne la restriction parabolique.Soit V ∈ Rep_Λ^ϕ(G). La restriction parabolique préserve le niveau donc r_P^G(V) ∈ Rep_Λ^0(M). Il nous suffit donc de montrer que pour x ∈𝒜^e_M et ϕ' ∈Φ_m(I_k^Λ,𝐌) tel que ι∘ϕ' ≠ϕ, on a e_ϕ',xr_P^G(V)=0.Nous avons r_P^G(V)^M_x^+≃r_𝖯_x^_x (V^G_x^+) (voir <cit.> propositions 3.1 et 6.2), donc e_ϕ',x r_P^G(V)^M_x^+≃ e_ϕ',x r_𝖯_x^_x (V^G_x^+) ≃ e_ϕ',x r_𝖯_x^_x( e_ι∘ϕ',x(V^G_x^+)) (la dernière égalité provient de <ref>). Or par hypothèse ι∘ϕ' ≠ϕ donc e_ι∘ϕ',x(V^G_x^+) = 0 d'où le résultat.Passons maintenant à l'induction parabolique.Soit ϕ_M∈Φ_m(I_k^Λ,𝐌) et notons ϕ∈Φ_m(I_k^Λ,𝐆) son image par Φ_m(I_k^Λ,𝐌) →Φ_m(I_k^Λ,𝐆). Alors pour tout sous-groupe parabolique 𝐏 de 𝐆 ayant pour facteur de Levi 𝐌, on ai_P^G( Rep_Λ^ϕ_M(M) ) ⊆ Rep_Λ^ϕ(G)où i_P^G désigne l'induction parabolique.Cela découle du théorème <ref> et du fait que r_P^G est adjoint à gauche de i_P^G.Soit ϕ∈Φ_m(I_k^Λ,𝐆). Alors si ϕ est discret (c'est-à-dire ne se factorise pas par un Levi rationnel propre) toutes les représentations de Rep_Λ^ϕ(G) sont cuspidales et toutes les représentations irréductibles de Rep_Λ^ϕ(G) sont supercuspidales.De plus, si 𝐆 est quasi-déployé, on a la réciproque pour la cuspidalité, c'est à dire que ϕ est discret si et seulement si Rep_Λ^ϕ(G) ne contient que des cuspidales.La cuspidalité découle immédiatement du théorème <ref>. Pour la supercuspidalité, remarquons que si ϕ est discret, alors le théorème <ref> montre qu'une induite n'as pas de composante dans Rep_Λ^ϕ(G) et donc n'a pas de sous-quotient irréductible dans Rep_Λ^ϕ(G).Maintenant si 𝐆 est quasi-déployé et que ϕ n'est pas discret, alors Rep_Λ^ϕ(G) contient des induites d'après le théorème <ref> (nous utilisons l'hypothèse quasi-déployé pour dire que ces facteurs sont non-nuls). Si 𝐆 n'est pas quasi-déployé l'équivalence peut être fausse, comme le montre l'exemple de G=D^× où D est une k-algèbre à division de dimension finie. Alors toutes les représentations sont cuspidales, en particulier Rep_Λ^1(G) ne contient que des cuspidales.Avoir ϕ discret n'est pas une condition nécessaire pour avoir des cuspidales (supercuspidales) dans Rep_Λ^ϕ(G). En effet, toute cuspidale unipotente se retrouvera dans Rep_Λ^1(G). Soit ϕ_M∈Φ_m(I_k^Λ,𝐌) et posons ϕ = ι∘ϕ_M∈Φ_m(I_k^Λ,𝐆). Nous venons de voir que i_P^G réalise un foncteuri_P^G : Rep_Λ^ϕ_M(M) ⟶ Rep_Λ^ϕ(G).La catégorie Rep_Λ^ϕ_M(M) est un facteur direct de Rep^0_Λ(M), on a donc un foncteur e_ϕ_M : Rep^0_Λ(M) → Rep_Λ^ϕ_M(M). Définissons r_P^G,ϕ_M:=e_ϕ_M∘ r_P^G, de sorte que r_P^G,ϕ_M soit un foncteurr_P^G,ϕ_M : Rep_Λ^ϕ(G) ⟶ Rep_Λ^ϕ_M(M).Le foncteur r_P^G,ϕ_M est adjoint à gauche de i_P^G.Soient V ∈ Rep_Λ^ϕ(G) et W ∈ Rep_Λ^ϕ_M(M). Nous savons déjà que r_P^G est adjoint à gauche de i_P^G, donc Hom(r_P^G(V),W)=Hom(V,i_P^G(W)). 
Maintenant Rep_Λ^ϕ_M(M) est un facteur direct de Rep_Λ^0(M) de sorte que Hom(r_P^G(V),W)=Hom(r_P^G,ϕ_M(V),W) et on a le résultat.Notons C_𝐆(ϕ) le centralisateur dans 𝐆 de l'image de ϕ. Alors si C_𝐆(ϕ) ⊆ι(𝐌) la paire de foncteurs adjoints (i_P^G,r_P^G,ϕ_M) réalise une équivalence de catégories entre Rep_Λ^ϕ_M(M) et Rep_Λ^ϕ(G).Soit V ∈ Rep_Λ^ϕ_M(M). Par adjonction, nous avons une application r_𝐏^𝐆i_𝐏^𝐆(V)→ V. Le lemme géométrique nous dit qu'elle est surjective et que son noyau W, admet une filtration dont les composantes du gradué associé sont isomorphes à (i_M ∩^wP^M∘ w ∘ r_^w^-1P∩ M^M)(V), où w parcourt un ensemble 𝒲^P de représentants particuliers dans G des doubles classes W_M,0\ W_0/W_M,0 ne contenant pas la classe triviale (W_M,0 est le groupe de Weyl de 𝐌). Nous souhaitons montrer que e_ϕ_M(W)=0. Prenons donc w ∈𝒲^P et montrons que e_ϕ_M((i_M ∩^wP^M∘ w ∘ r_^w^-1P∩ M^M)(V))=0.Identifions les paramètres inertiels avec des classes de conjugaison semi-simples F-stables. Ainsi ϕ_M correspond à s ∈ (^*_ss)^F et on note Rep_Λ^s(M) pour Rep_Λ^ϕ_M(M). Par le théorème <ref>r_^w^-1P∩ M^M(V) ∈∏_i=1^n Rep_Λ^s_i(^w^-1M∩ M)où {s_1,⋯,s_n} est l'image réciproque de {s} par l'application ((^w^-1∩)^*_ss)^F→ (^*_ss)^F. Doncw ∘ r_^w^-1P∩ M^M(V) ∈∏_i=1^n Rep_Λ^^ws_i(M∩^wM).Enfin par le théorème <ref>(i_M ∩^wP^M∘ w ∘ r_^w^-1P∩ M^M)(V) ∈∏_i=1^m Rep_Λ^t_i(M)où {t_1,⋯,t_m} est l'image de {^ws_1,⋯,^ws_n} par l'application ((∩^w)^*_ss)^F→ (^*_ss)^F. On veut donc montrer qu'aucun des t_i n'est égal à s. Supposons le contraire et que l'on ait un i tel que t_i=s. Par construction, t_i est dans l'une des classes de conjugaison sur ^* des ^w(s_j), donc il existe g ∈^* et j ∈{1,⋯,n} tels que t_i=^gw(s_j). De même par construction des s_j, il existe h ∈^* tel que s_j=^hs. Donc s=t_i=^gwh(s) et gwh ∈ C_^*(s) ⊆^* (par hypothèse), ce qui est absurde car w ∉^*. Ceci nous montre que e_ϕ_M(W)=0 et donc que r_𝐏^𝐆,ϕ_Mi_𝐏^𝐆(V) ∼→ V est un isomorphisme.Montrons maintenant que le foncteur r_𝐏^𝐆,ϕ_M est conservatif. Ceci nous permettra de conclure grâce au lemme <ref> ci-dessous. Comme les catégories considérées sont abéliennes et que r_𝐏^𝐆,ϕ_M est exact, il nous suffit de montrer que si V≠ 0 alors r_𝐏^𝐆,ϕ_M(V) ≠ 0.Prenons donc V ∈ Rep_Λ^ϕ(G) tel que V≠ 0. Il existe x ∈𝒜^e tel que e_ϕ,xV≠ 0. Notons s ∈ (^*_ss)^F la classe de conjugaison semi-simple correspondant à ϕ_M. Comme e_ϕ,xV≠ 0, il existe s_x∈ (^*_x)_ss dont l'image par (^*_x)_ss^F→ (^*_ss)^F est s et telle que e_s_x,Λ^_xV^G_x^+≠ 0. L'hypothèse C_^*(s) ⊆^* peut se retraduire, si l'on voit s comme un élément de ^*, de la manière suivante : si w ∈ W_0 est tel que ws=s alors w ∈ W_M,0. Or l'application (^*_x)_ss→ (^*_ss) est définie par _x^*/W_M,x→^*/W_M,0 et comme W_M,x=W_M,0∩ W_x on en déduit que s_x vérifie les mêmes hypothèses que s, c'est à dire C__x^*(s_x) ⊆^*_x. Maintenant, nous avons vu dans la preuve du théorème <ref> que e_s_x,Λ^_x(r_P^G(V))^M_x^+≃ e_s_x,Λ^_x r__x^_x(e_s_x,Λ^_x(V^G_x^+)).En notant r__x^_x,s_x le foncteur e_s_x,Λ^_x r__x^_x, on a e_s_x,Λ^_x(r_P^G(V))^M_x^+≃ r__x^_x,s_x(e_s_x,Λ^_x(V^G_x^+)).Comme C__x^*(s_x) ⊆^*_x le théorème B' de <cit.> nous dit que r__x^_x,s_x réalise une équivalence de catégories. En particulier il est conservatif et comme e_s_x,Λ^_x(V^G_x^+) ≠ 0 on a e_s_x,Λ^_x(r_P^G(V))^M_x^+≃ r__x^_x,s_x(e_s_x,Λ^_x(V^G_x^+)) ≠ 0et donc r_𝐏^𝐆,ϕ_M(V) ≠ 0 ce qui achève la preuve. Soient 𝒞, 𝒟 deux catégories et F : 𝒞→𝒟 un foncteur. 
If F is conservative and admits a fully faithful right (or left) adjoint, then F is an equivalence of categories. Let G be a right adjoint; the case of a left adjoint is handled in the same way by duality. Since the functor G is fully faithful, the morphism α : FG → id_𝒟 is a natural isomorphism. We must show that β : id_𝒞→ GF is also an isomorphism. By the adjunction axioms, the compositionF(x) F(β_x)⟶FGF(x) α_F(x)⟶ F(x)is id_F(x) for every x∈𝒞. Since α is already an isomorphism, we deduce that F(β_x) is an isomorphism, and hence that β_x is an isomorphism because F is conservative.§.§ Compatibility with the Langlands correspondence In this part we take Λ=ℚ_ℓ and k of characteristic zero. The local Langlands correspondence predicts a finite-to-one map Irr_ℚ_ℓ(G) →Φ(𝐆), π↦φ_π. In the cases where it is known, we wish to check that the decomposition of Theorem <ref> is compatible with it, that is, that if π∈Irr_ℚ_ℓ(G) is a representation of level 0 then π∈ Rep_ℚ_ℓ^ϕ(G) with φ_π | I_k∼ϕ.The Langlands correspondence is known in several cases, among them: tori (proved by Langlands himself), unipotent representations of adjoint p-adic groups (<cit.>, <cit.>) and classical groups (<cit.> <cit.> <cit.> <cit.> <cit.>). Compatibility with the Langlands correspondence for tori is contained in Theorem <ref>. As for unipotent representations, by construction they all belong to Rep_ℚ_ℓ^1(G). Consequently, in this section we examine the case of classical groups. Note that the case 𝐆=GL_n has already been treated in <cit.> Section 3.2.6, so we concentrate here on the remaining cases. In order to use the results of <cit.>, we further assume that k has odd residual characteristic.Let us begin by explaining what we mean by a classical group. Let k' be an extension of k of degree at most 2 and 𝔣' its residue field. Denote by σ the generator of the Galois group of k'/k and by N_k'/k:k'^×→ k^× the norm map. Fix a sign ε=± 1 and let (V,h) be a non-degenerate k'/k-ε-hermitian space. In this part G:=U(V)^∘ is the connected component of the group of k-points of the reductive group determined by (V,h), that is, U(V):={g ∈ Aut_k'(V), h(gv,gw)=h(v,w) for all v,w∈ V } and U(V)^∘:={g ∈ U(V), N_k'/k det_k'(g)=1}. The Langlands correspondence for classical groups (restricted to W_k) is compatible with parabolic induction (see <cit.> Theorem 4.9). The same holds for Rep_ℚ_ℓ^ϕ(G) by Section <ref>. We may therefore restrict ourselves here to irreducible cuspidal representations. Write 𝒜(G) for the set of equivalence classes of irreducible cuspidal representations of G, and 𝒜_[0](G) for the subset of representations of level 0. Let ρ be an irreducible cuspidal representation of GL_n(k'). Define the representation ρ^σ of GL_n(k') by ρ^σ(g)=ρ(^tσ(g^-1)), where ^tg denotes the transpose of g. We say that ρ is self-dual if ρ^σ≃ρ, and we write 𝒜_n^σ(k') for the set of self-dual irreducible cuspidal representations of GL_n(k'), 𝒜^σ(k'):=∪_n ≥ 1𝒜_n^σ(k'), and 𝒜_[0]^σ(k') for the subset of 𝒜^σ(k') consisting of representations of level 0. Let H=H^-⊕ H^+ be the hyperbolic plane, that is, H^± is a one-dimensional k'-vector space with basis e_±, and H carries the form h_H given by h_H(λ_-e_-+λ_+e_+,μ_-e_-+μ_+e_+)=λ_-σ(μ_+)+ελ_+σ(μ_-). 
For an integer n ≥ 0, set V_n:=V ⊕ nH equipped with the form h_n:=h ⊕ h_H⊕⋯⊕ h_H, and G_n:=U(V_n)^∘. The stabilizer in G_n of the decomposition V_n=nH^-⊕ V ⊕ nH^+ is a Levi subgroup M_n of G_n, and there is an isomorphism M_n ≃ GL_n(k') × G. The stabilizer of nH^-, for its part, is a parabolic subgroup P_n of G_n with Levi factor M_n. Thus if ρ∈𝒜_n^σ(k') and π∈𝒜(G), we may form ρ⊗π, regarded as a representation of M_n. For s ∈ℂ, define I(ρ,π,s):=i_P_n^G_nρ| det(·)|_k'^s⊗π.[<cit.>, Theorem 1.6] If I(ρ,π,s) is reducible for some s∈ℝ, then there exists a unique positive real number s_π(ρ) such that I(ρ,π,s) is reducible if and only if s=± s_π(ρ).If I(ρ,π,s) is irreducible for every s∈ℝ, we set s_π(ρ)=0. We then define the Jordan set Jord(π) by Jord(π) := { (ρ,m) ∈𝒜^σ(k') ×ℕ^*, 2s_π(ρ) -(m+1) ∈ 2 ℕ} Recall that since G is a classical group we have the local Langlands correspondence, Irr_ℚ_ℓ(G) →Φ(𝐆), π↦φ_π. Up to now our Langlands parameters φ∈Φ(𝐆) were of the form φ : W'_k→^LG(ℚ_ℓ) with W'_k=W_k⋉ℚ_ℓ. In this section only, we consider another version of the Weil–Deligne group: W'_k ≃ W_k× SL_2(ℚ_ℓ). The Langlands parameters are then of the form φ:W_k× SL_2(ℚ_ℓ) →^L𝐆. Write N_𝐆 for the dimension of the vector space on which 𝐆̂ naturally acts. Then if φ:W_k× SL_2(ℚ_ℓ) →^L𝐆 is a Langlands parameter, we write φ̃:W_k'× SL_2(ℚ_ℓ) → GL_N_𝐆(ℚ_ℓ) for the map obtained by restricting φ to W_k' and composing with 𝐆̂↪ GL_N_𝐆(ℚ_ℓ). It is expected (and known at least when 𝐆 is quasi-split, see for instance <cit.>) that the parameter φ̃_π is described via the Jordan set Jord(π) as follows.[<cit.>] If 𝐆 is a quasi-split classical group and π∈𝒜(G), thenφ̃_π=⊕_(ρ,m) ∈ Jord(π)φ_ρ⊗ st_mwhere φ_ρ is the irreducible representation of W_k' corresponding to ρ via the local Langlands correspondence for GL_n, and st_m is the m-dimensional irreducible representation of SL_2(ℚ_ℓ). Since the preceding result is known when 𝐆 is quasi-split and Theorem <ref> requires the K-split hypothesis, we restrict ourselves here to the case where 𝐆 is an unramified classical group, that is, an odd special orthogonal group SO_2n+1, an even special orthogonal group SO_2n (split special orthogonal group) or SO_2n^* (quasi-split special orthogonal group associated with an unramified quadratic extension k'/k), a symplectic group Sp_2n, or a unitary group U_n(k'/k) where k' is an unramified extension of k.To understand φ_π we thus need to understand Jord(π), and in particular s_π(ρ). For this we will rely on the results obtained in <cit.>. Let π∈𝒜_[0](G). There then exist x ∈ BT_0 and s ∈ (𝖦_x^*)_ss such that e_s,ℚ_ℓ^𝖦_xπ^G_x^+≠ 0. Since we are in the case where 𝐆 is a classical group, the group 𝖦_x decomposes as a product of two groups 𝖦_x≃𝖦_x,1×𝖦_x,2, where the 𝖦_x,i are classical groups (see for instance <cit.> Section 2). Thus s corresponds via this isomorphism to (s_1,s_2), where s_i ∈ (𝖦_x,i^*)_ss. Let P ∈𝔣'[X] be a monic polynomial. Denote by τ the generator of the Galois group Gal(𝔣'/𝔣). We define P^τ(X):= τ(P(0))^-1X^deg(P)τ(P)(1/X), and we say that P is self-dual if P=P^τ. 
The characteristic polynomial P_s_i of s_i is then a self-dual monic polynomial, and we write it (as in <cit.> Section 7) P_s_i(X)=∏_P P(X)^a_P^(i), where the product runs over the set of self-dual monic irreducible polynomials over 𝔣' (such a factorization is possible because the Deligne–Lusztig series associated with s_i contains a cuspidal representation, and if P_s_i contained a factor of the form P(X)P^τ(X) with P irreducible and P≠ P^τ, then the centralizer of s_i in 𝖦_x,i^* would be contained in a proper rational Levi). Let ρ be a self-dual irreducible cuspidal representation of level zero of some GL_n(k'). Write G'=GL_n(k'). As before, there exist y ∈ BT_0(𝐆',k') and a semisimple conjugacy class s_ρ∈ (𝖦'^*_y)_ss such that e_s_ρ,ℚ_ℓ^𝖦'_yρ^G'^+_y≠ 0. Note that here 𝖦'^*_y≃ GL_n(𝔣'). Likewise, associate to s_ρ its characteristic polynomial Q, which is a self-dual monic irreducible polynomial of degree n.[<cit.> Section 8]Denote by ρ'∈𝒜_n^σ(k') the unique (up to equivalence) non-equivalent unramified twist of ρ that is self-dual. Then for 𝐆 an unramified classical group, we have⌊ s_π(ρ)^2⌋ + ⌊ s_π(ρ')^2⌋=a_Q^(1)+a_Q^(2)except if 𝐆=Sp_2n and Q(X)=X-1, in which case⌊ s_π(ρ)^2⌋ + ⌊ s_π(ρ')^2⌋=a_(X-1)^(1)+a_(X-1)^(2)-1.The results in <cit.> are expressed in terms of self-dual polynomials, whereas we work with semisimple conjugacy classes. We therefore wish to relate the two.Let 𝖦 be a group of type GL_n, Sp_2n, SO_2n+1 or SO_2n over 𝔉. Consider its natural embedding 𝖦⊆ GL_N, where N is a natural number (N=n for GL_n, N=2n+1 for SO_2n+1, and N=2n for Sp_2n or SO_2n). Let s ∈𝖦_ss be a semisimple conjugacy class; it gives rise to a semisimple conjugacy class of GL_N, and we may consider its characteristic polynomial P.A semisimple geometric conjugacy class s ∈𝖦_ss is characterized by its characteristic polynomial P.Let us treat 𝖦=Sp_2n, the other cases being similar. We then have N=2n. Let 𝖳 be the split torus of Sp_2n, that is, 𝖳=Diag(a_1,⋯,a_n,a_n^-1,⋯,a_1^-1), a_i∈𝔉. We may assume that s ∈𝖳 and hence write s=(a_1,⋯,a_2n) with a_n+i =a_n-i+1^-1 for 1≤ i ≤ n. Take s'=(b_1,⋯,b_2n) ∈𝖳 another semisimple geometric conjugacy class of 𝖦 with characteristic polynomial P. We wish to show that s and s' are conjugate in 𝖦. Since both are elements of 𝖳, this is equivalent to the existence of w ∈ W_𝖦^0, the Weyl group of 𝖦, such that w· s'=s. Recall that W_𝖦^0≃𝒮_n⋉ (ℤ/2ℤ)^n. Since s and s' have the same characteristic polynomial, there is a permutation σ∈𝒮_2n such that b_i = a_σ(i). Since s ∈𝖳, {a_σ(1),⋯,a_σ(2n)}={a_1^±,⋯,a_n^±} (counted with multiplicities), so there exists i∈{1,⋯,n} such that a_σ(1)=a_i^±. Set τ(1)=i. Since s' ∈𝖳, a_σ(2n)=b_2n=b_1^-1=a_σ(1)^-1, so {a_σ(2),⋯,a_σ(2n-1)}={a_i^±,1 ≤ i ≤ n, i≠τ(1)}. By induction we thus construct a permutation τ∈𝒮_n such that a_σ(i)=a_τ(i)^± for 1≤ i ≤ n. In other words, we have produced an element w ∈ W_𝖦^0≃𝒮_n⋉ (ℤ/2ℤ)^n such that b_i=a_σ(i)=w · a_i, hence such that s' = w · s, which completes the proof. Let x ∈ BT_0 and s ∈ (𝖦_x^*)_ss. As before, the conjugacy class s corresponds to (s_1,s_2), s_i∈ (𝖦_x,i^*)_ss, with characteristic polynomials P_s_1 and P_s_2. In Section <ref> we constructed a map ψ̃_x : ((𝖦^*_x)_ss)^F→ (𝖦^*(𝔉)_ss)^F. Let s̃:=ψ̃_x(s) be the image of s under this map. 
Denote by P_s̃ the characteristic polynomial of s̃; then: * If 𝐆≠ Sp_2n : P_s̃(X)=P_s_1(X)P_s_2(X)* If 𝐆 = Sp_2n : P_s̃(X)=P_s_1(X)P_s_2(X)/(X-1) Let us treat for instance the case where 𝐆 = Sp_2n is a symplectic group. In this case 𝖦_x_i=Sp_2n_i(𝔣) (with n_1+n_2=n). We have 𝖦^*=SO_2n+1 and 𝖦_x_i^*=SO_2n_i+1. Write 𝖳^* for the split torus of 𝖦^* and 𝖳_i^* for that of 𝖦_x_i^*. We may then regard s̃∈𝖳^*, s_i∈𝖳_i^*, and hence writes_i=(1,a_1^(i),⋯,a_n_i^(i),(a_n_i^(i))^-1, ⋯, (a_1^(i))^-1).We then obtain s̃=(1,a_1^(1),⋯,a_n_1^(1),a_1^(2),⋯,a_n_2^(2),(a_n_2^(2))^-1, ⋯, (a_1^(2))^-1,(a_n_1^(1))^-1, ⋯, (a_1^(1))^-1)whence the result. The other cases are handled in the same way.Suppose that 𝐆 is an unramified classical group, k of characteristic zero and p ≠ 2. Let π∈Irr_ℚ_ℓ(G) be a representation of level 0 and ϕ∈Φ_m(I_k,𝐆) such that π∈ Rep_ℚ_ℓ^ϕ(G). Denote by φ_π the Langlands parameter associated with π via the local Langlands correspondence for classical groups. Then φ_π | I_k∼ϕ.Section <ref> allows us to reduce to the case where π is cuspidal. Theorem <ref> then tells us that φ̃_π=⊕_(ρ,m) ∈ Jord(π)φ_ρ⊗ st_m, so φ̃_π | I_k =⊕_(ρ,m) ∈ Jord(π) m φ_ρ | I_k. Since ∑_m,(ρ,m) ∈ Jord(π) m = ⌊ s_π(ρ)^2⌋ (where ⌊·⌋ denotes the floor function), we obtain φ̃_π | I_k =⊕_ρ∈𝒜^σ_[0](k)⌊ s_π(ρ)^2⌋φ_ρ | I_k. Denote by [ρ] the inertial class of ρ, so that [ρ] ∩𝒜^σ(k') = {ρ,ρ'}. The two Langlands parameters φ_ρ and φ_ρ' have the same restriction to inertia, so φ̃_π | I_k =⊕_[ρ] (⌊ s_π(ρ)^2⌋ + ⌊ s_π(ρ')^2⌋)φ_ρ | I_k. Denote by t ∈ (𝖦^*_ss)^F the semisimple conjugacy class associated with φ_π | I_k. With the previous notation, π provides us with an x ∈ BT_0 and an s ∈ (𝖦_x^*)_ss corresponding to (s_1,s_2), s_i∈ (𝖦_x,i^*)_ss. We also define s̃∈ (𝖦^*_ss)^F by s̃:=ψ̃_x(s). By definition, s̃ is the F-stable semisimple conjugacy class associated with ϕ. We must therefore show that t=s̃. Lemma <ref> tells us that it suffices to show that P_t=P_s̃, where P_t and P_s̃ are the characteristic polynomials of t and s̃.If ρ∈𝒜_[0]^σ(k), we have seen that one can associate to it a semisimple conjugacy class s_ρ, whose characteristic polynomial we denote by Q. The compatibility with Langlands in the case 𝐆=GL_n, proved in <cit.> Section 3.2.6, shows that Q is indeed the characteristic polynomial of the F-stable semisimple conjugacy class associated with φ_ρ| I_k.Theorem <ref> and Lemma <ref> then allow us to conclude. Let us treat for instance the case 𝐆=Sp_2n. We have φ̃_π | I_k =⊕_[ρ] (⌊ s_π(ρ)^2⌋ + ⌊ s_π(ρ')^2⌋)φ_ρ | I_k. Theorem <ref> then gives ⌊ s_π(ρ)^2⌋ + ⌊ s_π(ρ')^2⌋=a_Q^(1)+a_Q^(2) if Q(X) ≠ X-1, and ⌊ s_π(ρ)^2⌋ + ⌊ s_π(ρ')^2⌋=a_(X-1)^(1)+a_(X-1)^(2)-1 if Q(X) = X-1. Hence P_t(X) =(X-1)^(a_(X-1)^(1)+a_(X-1)^(2)-1)∏_Q(X) ≠ X-1 Q(X)^(a_Q^(1)+a_Q^(2))=1/(X-1)(∏_Q Q^a_Q^(1))(∏_Q Q^a_Q^(2))=1/(X-1)P_s_1(X)P_s_2(X)Lemma <ref> then shows that P_t=P_s̃, which is the desired result. The Bernstein decomposition theorem provides a partition of the irreducible representations Irr_ℚ_ℓ(G)=⊔_𝔰∈ℬ(G) Irr_𝔰(G), where ℬ(G) denotes the set of inertial classes of cuspidal data. Assuming the local Langlands correspondence, one obtains another partition Irr_ℚ_ℓ(G)=⊔_φ∈Φ(𝐆)Π_φ, where Π_φ is the L-packet associated with the parameter φ. In <cit.>, Haines introduces the notion of the "stable Bernstein center", which makes it possible to compare these two decompositions. Let λ∈Φ(W_k,𝐆) and let 𝔦 be its inertial class (we recall Haines's definition of inertial equivalence in <ref>). 
We define an inertial packet byΠ_𝔦^+ := ⋃_φ∈Φ(𝐆), φ_|W_k∈𝔦Π_φ(G). Suppose we have the local Langlands correspondence for G and its Levi subgroups, together with compatibility with parabolic induction and with certain isomorphisms (see <cit.> Definition 5.2.1 for details). Then one can define a map ℒ_i which, to an inertial class of cuspidal data 𝔰=[L,σ] ∈ℬ(G), associates ℒ_i(𝔰), the inertial class of the Weil parameter φ_σ|W_k. Inertial packets allow one to compare the decompositions Irr_ℚ_ℓ(G)=⊔_𝔰∈ℬ(G) Irr_𝔰(G)=⊔_φ∈Φ(𝐆)Π_φ:[<cit.> Theorem 2.9] Let G be a split classical group and 𝔦 an inertial class of Weil parameters. ThenΠ_𝔦^+(G)=⊔_𝔰∈ℬ(G), ℒ_i(𝔰)=𝔦 Irr_𝔰(G) Let us return to the study of Rep_ℚ_ℓ^ϕ(G).Let 𝐆 be an unramified classical group, k of characteristic zero, p ≠ 2, and ϕ∈Φ_m(I_k,𝐆) a tame inertial parameter. Then if C_𝐆̂(ϕ), the centralizer of ϕ(I_k) in 𝐆̂, is connected,Irr_ℚ_ℓ(G) ∩ Rep_ℚ_ℓ^ϕ(G) = Π_𝔦^+(G)where 𝔦 is the inertial class formed by the Weil parameters extending ϕ.In other words, Rep_ℚ_ℓ^ϕ(G) is a "stable block" (that is, it corresponds to a primitive idempotent of the stable Bernstein center in the sense of Haines <cit.>).Proposition <ref> shows that if C_𝐆̂(ϕ) is connected, then the set of λ∈Φ(W_k,𝐆) such that λ_|I_k∼ϕ forms an inertial equivalence class. Moreover, Theorem <ref> shows that the construction of Rep_ℚ_ℓ^ϕ is compatible with the Langlands correspondence, whence the result. When C_𝐆̂(ϕ) is not connected, Rep_ℚ_ℓ^ϕ(G) is a sum of "stable blocks". We will explain in a forthcoming article how to decompose the category Rep_ℚ_ℓ^ϕ(G) naturally into "stable blocks". § RECOLLECTIONS ON THE DUAL GROUP Let Ω be an algebraically closed field and 𝐆, 𝐆' two reductive groups defined over Ω. Take an isomorphism φ : 𝐆→𝐆'. Choose a maximal torus 𝐓 of 𝐆 and set 𝐓' = φ(𝐓), which is a maximal torus of 𝐆'. To 𝐆 and 𝐓 one associates a root datum ϕ(𝐆,𝐓)=(X,X^∨,ϕ,ϕ^∨), where X=X^*(𝐓) is the character group of 𝐓, X^∨=X_*(𝐓) is the cocharacter group, ϕ is the set of roots and ϕ^∨ the set of coroots. From ϕ(𝐆,𝐓) one forms the dual root datum ϕ̂(𝐆,𝐓) defined by ϕ̂(𝐆,𝐓)=(X^∨,X,ϕ^∨,ϕ). By Theorem 2.9 of <cit.> there exist a reductive group 𝐆̂ and a torus 𝐓̂ such that ϕ(𝐆̂,𝐓̂)=ϕ̂(𝐆,𝐓). Analogously, there exist 𝐆̂' and 𝐓̂' such that ϕ(𝐆̂',𝐓̂')=ϕ̂(𝐆',𝐓'). The isomorphism φ : 𝐆→𝐆' induces an isomorphism f(φ):ϕ(𝐆',𝐓') →ϕ(𝐆,𝐓). We then also have the isogeny ^tf(φ):ϕ̂(𝐆,𝐓)→ϕ̂(𝐆',𝐓'), that is, ^tf(φ):ϕ(𝐆̂,𝐓̂) →ϕ(𝐆̂',𝐓̂'). Theorem 2.9 of <cit.> also tells us that there exists an isomorphism φ̂ :𝐆̂' →𝐆̂ sending 𝐓̂' to 𝐓̂ and such that f(φ̂)=^tf(φ). This φ̂ is not unique: two such φ̂ differ by an automorphism Int(t̂') with t̂' ∈𝐓̂'.We know that 𝐓̂≃ X_*(𝐓̂)⊗_ℤΩ^× = X ⊗_ℤΩ^×, and likewise 𝐓̂' ≃ X' ⊗_ℤΩ^×. By definition of φ̂, the morphism φ̂:𝐓̂' →𝐓̂ is given by f(φ) ⊗ id : X' ⊗_ℤΩ^×→ X ⊗_ℤΩ^×. We therefore have a commutative square whose horizontal maps are X' ⊗_ℤΩ^×→𝐆̂' and X ⊗_ℤΩ^×→𝐆̂, and whose vertical maps are f(φ) ⊗ id and φ̂. The map φ̂ is defined up to inner conjugation, so it becomes well defined after passing to conjugacy classes. Moreover, we know that 𝐆̂_ss≃ (𝐓̂/W), where W is the Weyl group of 𝐆̂ relative to 𝐓̂. Thus we obtain:With the previous notation, there is a commutative square whose horizontal maps are the isomorphisms (X' ⊗_ℤΩ^×)/W' ≃𝐆̂'_ss and (X ⊗_ℤΩ^×)/W ≃𝐆̂_ss, and whose vertical maps are f(φ) ⊗ id and φ̂_ss.Now take 𝐆'=𝐆 and φ∈ Aut(𝐆). We can canonically identify 𝐆̂_ss and 𝐆̂'_ss as follows. 
There exists g ∈𝐆 such that 𝐓'=Ad(g)(𝐓). The automorphism Ad(g) therefore induces an isomorphism f from X' to X and a canonical isomorphismf ⊗ id : (X' ⊗_ℤΩ^×)/W' ⟶ (X ⊗_ℤΩ^×)/W(since we have quotiented by the Weyl group, f ⊗ id does not depend on the choice of g).Lemma <ref> provides us with a canonical isomorphism f_ss between 𝐆̂_ss and 𝐆̂'_ss. Via this identification, the automorphism φ gives rise to an isomorphism φ̂_ss∈ Aut(𝐆̂_ss). The latter is defined by the commutative diagram whose top row is the isomorphism (X ⊗_ℤΩ^×)/W ≃𝐆̂_ss, whose left column is f(φ) ⊗ id followed by f ⊗ id : (X' ⊗_ℤΩ^×)/W' → (X ⊗_ℤΩ^×)/W, whose bottom row is again (X ⊗_ℤΩ^×)/W ≃𝐆̂_ss, and whose right column is φ̂_ss. It therefore corresponds to the automorphism f ∘ f(φ) of X. Another way of seeing this is the following. Fix a pinning (𝐆,𝐁,𝐓,{u_α}_α∈Δ). We then have a split exact sequence{1}⟶ Int(𝐆) ⟶ Aut(𝐆) ⟶ Out(𝐆) ⟶{1}The automorphism f ∘ f(φ) corresponds to the image of φ under the map Aut(𝐆) ⟶ Out(𝐆). Its image under Out(𝐆) → Out(𝐆̂) is ^t(f ∘ f(φ)). We may therefore take for φ̂ the image of ^t(f ∘ f(φ)) under Out(𝐆̂) → Aut(𝐆̂). The desired automorphism φ̂_ss is then the map induced by φ̂ on semisimple conjugacy classes.There is a commutative square whose horizontal maps are the isomorphisms (X ⊗_ℤΩ^×)/W ≃𝐆̂_ss and (X' ⊗_ℤΩ^×)/W ≃𝐆̂_ss, and whose vertical maps are f(φ) ⊗ id and φ̂_ss, where φ̂ is the image of φ under the composite Aut(𝐆) → Out(𝐆) → Out(𝐆̂) → Aut(𝐆̂). In particular, if φ∈ Int(𝐆), then φ̂_ss=id. § RECOLLECTIONS ON THE FROBENIUS Let 𝖦 be a connected reductive group defined over 𝔉. An 𝔣-structure gives rise to a Frobenius endomorphism F:𝖦→𝖦 such that the 𝔣-points of 𝖦 are G:=𝖦^F=𝖦(𝔣). It is defined as follows: 𝖦 admits an 𝔣-structure if and only if its algebra of functions, which we denote by A, satisfies A=A_0⊗_𝔣𝔉, where A_0 is an 𝔣-algebra. The Frobenius endomorphism F:𝖦→𝖦 is then defined by its comorphism F^*∈ End(A_0⊗_𝔣𝔉), which sends x ⊗λ to x^q⊗λ.The Galois group Gal(𝔉 / 𝔣) also acts on A_0⊗_𝔣𝔉, by x⊗λ↦ x ⊗λ^q, and thus induces an action on 𝖦 which we denote by τ.We need to understand in slightly more detail how these two actions behave when 𝖦=𝖳 is a torus defined over 𝔣. Write X=X^*(𝖳)=Hom(𝖳,𝔾_m) for the character group of 𝖳. We have X≃ Hom_alg-Hopf(𝔉[t,t^-1],A), so τ and F extend to actions on X. Write τ_X for the action induced by τ on X. Let α∈ X and let us compute τ_X· F ·α: it is the composition 𝔉[t,t^-1] →^α A_0⊗_𝔣𝔉→^F A_0⊗_𝔣𝔉→^τ A_0⊗_𝔣𝔉, under which x⊗λ↦ x^q⊗λ↦ x^q⊗λ^q. We thus see that τ_X· F ·α = α^q. Write ψ for raising to the power q. Since 𝖳≃ Hom(X,𝔉^×), these actions transfer to actions on 𝖳. What does ψ correspond to? Take u:X →𝔉^×; then ψ· u is the composition X →^ψ X →^u 𝔉^×, α↦α^q↦ u(α^q)=u(α)^q. So ψ acts as raising to the power q on 𝖳. Finally, we find that the action of F on 𝖳 is given by F=τ_X^-1∘ψ.§ INERTIAL EQUIVALENCE FOR LANGLANDS PARAMETERS In this section we recall the definition of inertial class introduced by Haines in <cit.>. We also prove a proposition, useful for Theorem <ref>, which relates an inertial class to the parameters of inertia.Let 𝐆 be a connected reductive group defined over a p-adic field k. Here we consider representations with complex coefficients.Let us begin by recalling the notion of standard parabolic and standard Levi of ^L𝐆, defined in <cit.> paragraphs 3.3 and 3.4. Fix data 𝐓_0⊆𝐁_0⊆𝐆̂, consisting of a maximal torus and a Borel subgroup, stable under the action of the Galois group. A parabolic subgroup 𝒫 of ^L𝐆 is then called standard if 𝒫⊇^L𝐁_0. 
Its neutral component 𝒫^∘:=𝒫∩𝐆̂ is then a standard parabolic subgroup of 𝐆̂ containing 𝐁_0, and we have 𝒫=𝒫^∘⋊ W_k. Let ℳ^∘ be the unique Levi of 𝒫^∘ containing 𝐓_0. Then ℳ:=N_𝒫(ℳ^∘) is a Levi subgroup of 𝒫, and ℳ=ℳ^∘⋊ W_k. The Levi subgroups of ^L𝐆 constructed in this way are called standard. Every Levi subgroup of ^L𝐆 is 𝐆̂-conjugate to a standard Levi, and for a standard Levi ℳ of ^L𝐆 we write {ℳ} for the set of standard Levi subgroups that are 𝐆̂-conjugate to ℳ.Let λ : W_k→^L𝐆 be an admissible morphism. The image of λ is then contained in a minimal Levi of ^L𝐆, well defined up to conjugation by an element of C_𝐆̂(λ)^∘, where C_𝐆̂(λ) denotes the centralizer of λ(W_k) in 𝐆̂ (see <cit.> Proposition 3.6). Write (λ)_𝐆̂ for the 𝐆̂-conjugacy class of λ. Then λ gives rise to a unique class of standard Levis {ℳ_λ} such that there exists λ^+∈ (λ)_𝐆̂ whose image is minimally contained in ℳ_λ, for some ℳ_λ in this class.[<cit.> Definition 5.3.3]Let λ_1,λ_2 : W_k→^L𝐆 be two admissible parameters. We say that λ_1 and λ_2 are inertially equivalent if * {ℳ_λ_1}={ℳ_λ_2}* there exist ℳ∈{ℳ_λ_1}, λ_1^+∈ (λ_1)_𝐆̂ and λ_2^+∈ (λ_2)_𝐆̂ whose images are minimally contained in ℳ, and z ∈ H^1(⟨ϑ⟩, (Z(ℳ^∘)^I_k)^∘) satisfying(zλ_1^+)_ℳ^∘=(λ_2^+)_ℳ^∘Let ϕ∈Φ(I_k,𝐆) be such that C_𝐆̂(ϕ), the centralizer of ϕ(I_k) in 𝐆̂, is connected. Then the set of λ∈Φ(W_k,𝐆) such that λ_|I_k∼ϕ forms an inertial equivalence class.First note that two inertially equivalent admissible parameters of W_k have conjugate restrictions to inertia. So take two admissible parameters λ_1,λ_2 : W_k→^L𝐆 such that λ_1|I_k∼λ_2|I_k∼ϕ, and let us show that they are inertially equivalent. After conjugating by 𝐆̂, we may assume that λ_1|I_k = λ_2|I_k = ϕ. Fix a pair (𝐓,𝐁) consisting of a maximal torus and a Borel of C_𝐆̂(ϕ). Since λ_1|I_k = ϕ, for w ∈ W_k the element λ_1(w) normalizes ϕ(I_k) and hence also normalizes C_𝐆̂(ϕ). Letting w act by conjugation by λ_1(w), we obtain an action Ad_λ_1:W_k/I_k→ Aut(C_𝐆̂(ϕ)). Conjugation by λ_1(Frob), i.e. Ad_λ_1(Frob), is therefore a semisimple automorphism of C_𝐆̂(ϕ), which we denote by θ_λ_1. By Theorem 7.5 of <cit.> there exists a pair (𝐓_1,𝐁_1) consisting of a maximal torus and a Borel of C_𝐆̂(ϕ), both θ_λ_1-stable. After conjugating λ_1 by an element of C_𝐆̂(ϕ), we may assume that (𝐓_1,𝐁_1)=(𝐓,𝐁), that is, 𝐓 and 𝐁 are θ_λ_1-stable. We do the same for λ_2, so 𝐓 and 𝐁 are also θ_λ_2-stable. Set ℳ_λ_1:=C_^L𝐆([𝐓^θ_λ_1]^∘). Since (𝐓^θ_λ_1)^∘ is a maximal torus of C_𝐆̂(λ_1)^∘ (<cit.> Theorem 1.8 (iii)), ℳ_λ_1 is a minimal Levi containing the image of λ_1 (<cit.> Proposition 3.6). Adopt the same notation for λ_2. Write, for w ∈ W_k, λ_2(w)=η(w)λ_1(w) with η(w) ∈𝐆̂. Then η is a cocycle with values in C_𝐆̂(ϕ) for the action Ad_λ_1, that is, η∈ Z^1_Ad_λ_1(W_k/I_k, C_𝐆̂(ϕ)). Since the pair (𝐓,𝐁) is both θ_λ_1-stable and θ_λ_2-stable, η(Frob) lies in the normalizer in C_𝐆̂(ϕ) of the pair (𝐓,𝐁), which equals 𝐓. Since η(Frob) ∈𝐓, we have in particular 𝐓^θ_λ_1 = 𝐓^θ_λ_2 and hence ℳ_λ_1=ℳ_λ_2. Now write ℳ:=ℳ_λ_1=ℳ_λ_2.The exact sequence 1 →ℳ^∘→ℳ→⟨ϑ⟩→ 1 together with the choice of a section allows one to define an action of ⟨ϑ⟩ on ℳ^∘. By conjugation, it induces an action of ⟨ϑ⟩ on Z(ℳ^∘), which is then independent of the choice of section. We thus have a canonical action of ⟨ϑ⟩ on Z(ℳ^∘). We could have conjugated ϕ from the start so as to make ℳ standard. 
To show that λ_1 and λ_2 are inertially equivalent, it therefore suffices to show the existence of z ∈ H^1(⟨ϑ⟩, (Z(ℳ^∘)^I_k)^∘) such that (zλ_1)_ℳ^∘=(λ_2)_ℳ^∘. Let t ∈𝐓; then ^tλ_2=η'λ_1 with η'=t^-1η Ad_λ_1(t). Thus, to finish the proof of the proposition, it suffices to show that there exists t ∈𝐓 such that t^-1η(Frob) θ_λ_1(t) ∈ (Z(ℳ^∘)^I_k)^∘. Now since (𝐓^θ_λ_1)^∘⊆ (Z(ℳ^∘)^I_k)^∘, this follows from Lemma <ref> below. To be able to apply the latter, it remains to check that θ_λ_1 induces a finite-order automorphism of X_*(𝐓). By duality, it suffices to check that θ_λ_1 acts with finite order on the character group X^*(𝐓). For this we will show the finiteness of the action of θ_λ_1 on Q, the lattice generated by the roots, and then on the center Z(C_𝐆̂(ϕ)). Since the character group of Z(C_𝐆̂(ϕ)) is X^*(𝐓)/Q (<cit.> 2.15), the finiteness of the action on X^*(𝐓) follows. The automorphism θ_λ_1 of C_𝐆̂(ϕ) stabilizes 𝐓, and hence acts on the set of roots of C_𝐆̂(ϕ) with respect to 𝐓. This set being finite, the action is also of finite order. It remains to study the action of θ_λ_1 on Z(C_𝐆̂(ϕ)), the center of C_𝐆̂(ϕ). By <cit.> Lemma 2.1.1, there exist n ∈ℕ^* and an extension λ : W_k→^L𝐆 of ϕ such that λ(nFrob)=(1,nϑ). In particular θ_λ acts with finite order on Z(C_𝐆̂(ϕ)). Now we have seen that two parameters of W_k extending ϕ differ by a cocycle with values in C_𝐆̂(ϕ), so the action on the center Z(C_𝐆̂(ϕ)) is independent of the choice of extension of ϕ. Consequently θ_λ_1 acts with finite order on the center, and the result follows.Let 𝐓 be a torus over ℂ and θ∈ Aut(𝐓). Write X_*:=X_*(𝐓) for the set of cocharacters, and suppose that θ induces a finite-order automorphism of X_*. Let L_θ be the map L_θ:𝐓→𝐓, t↦ t^-1θ(t). Then 𝐓=L_θ(𝐓) · (𝐓^θ)^∘.The cocharacter group of (𝐓^θ)^∘ is X_*((𝐓^θ)^∘)=X_*^θ. Define L_θ^X:X_*→ X_* by L_θ^X(λ)=λ-θ(λ). Then X_*(L_θ(𝐓)) ⊇ Im(L_θ^X). Set X_*,ℚ:=X_*⊗ℚ and likewise L_θ,ℚ^X:X_*,ℚ→ X_*,ℚ. Since θ has finite order, θ is a semisimple endomorphism of the ℚ-vector space X_*,ℚ. We therefore have the decomposition X_*,ℚ=X_*,ℚ^θ⊕ Im(L_θ,ℚ^X). We deduce that X_*^θ+Im(L_θ^X) has finite index in X_*, and hence, since ℂ^* is divisible, that the map (X_*^θ⊗ℂ^*) × (Im(L_θ^X)⊗ℂ^*)→ X_*⊗ℂ^* is surjective. In particular the map (X_*((𝐓^θ)^∘) ⊗ℂ^*) × (X_*(L_θ(𝐓))⊗ℂ^*)→ X_*⊗ℂ^* is surjective, whence the result.
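To make the lemma concrete, here is a minimal worked example of our own (an illustration only, not part of the original argument), written in LaTeX for the one-dimensional torus 𝐓 = ℂ^× with θ the inversion automorphism, which acts on X_* ≃ ℤ as -1 and hence has order 2:

% Worked instance of T = L_theta(T) . (T^theta)^0 for T = C^x, theta(t) = t^{-1}.
\[
L_\theta(t) = t^{-1}\theta(t) = t^{-2}, \qquad
L_\theta(\mathbf{T}) = \{\,t^{-2} : t \in \mathbb{C}^\times\,\} = \mathbb{C}^\times
\ \text{(by divisibility of $\mathbb{C}^\times$)},
\]
\[
\mathbf{T}^\theta = \{\,t : t = t^{-1}\,\} = \{\pm 1\}, \qquad
(\mathbf{T}^\theta)^\circ = \{1\}, \qquad
\text{so } \mathbf{T} = L_\theta(\mathbf{T})\cdot(\mathbf{T}^\theta)^\circ .
\]
% On cocharacters: X_* = Z and theta = -1, so the decomposition
% X_{*,Q} = X_{*,Q}^theta + Im(L^X_{theta,Q}) reads Q = 0 + Q.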
"authors": [
"Thomas Lanard"
],
"categories": [
"math.RT"
],
"primary_category": "math.RT",
"published": "20170325130907",
"title": "Sur les $\\ell$-blocs de niveau zéro des groupes $p$-adiques"
} |
[ Gestalt versus Heisenberg]Gestalt Principles re-investigated within the Heisenberg uncertainty relation Mimar Sinan Fine Arts University, 34380 Sisli, Istanbul, [email protected] 2017Perception, sensation and re-action are central questions in Psychology, the Arts, Neurology and Physics alike. Some hundred years ago, believed to have started with Wertheimer, researchers and artists tried to classify our human "understanding" of Nature in terms of Gestalt principles. During the same period, quantum mechanics was developed by Schroedinger, Heisenberg, Dirac, Majorana and others. In this work we briefly summarize the basic concepts of these two approaches and try to combine them at a simplistic level. We show that perception and sensation can be handled within electrical signal processing utilizing the Fourier transformation, which finds its counterpart in quantum mechanics. § INTRODUCTION During evolution, we as Homo sapiens have reached, as far as we know, the peak of understanding Nature <cit.>, although we have lost our respect for it and despoiled its beauty. With our nervous system together with our brain, we can process and act on the things happening around us <cit.>. However, it is still a major question "how" we perceive, convolve and react to the information coming from outside. For seeing, it is clear that an electromagnetic signal stimulates our eye cells. This input is converted to an electrical signal via a Fourier transform and is conveyed by neurological wires to the acceptor center. By "center" we mean the orthodox understanding, which we as authors only barely understand. Exactly 105 years ago, Wertheimer <cit.> summarized the basics of our perception in what are called the Gestalt Principles. He stated that there should be certain rules in our perception and sensation. In the following years the discussion was pushed forward by many researchers, and these principles are now used by psychologists, artists and others. In spite of all the efforts, the mechanism lying behind them is still under debate. Recently, in a mind-blowing pioneering work <cit.> (and the references given there), the mathematical and neurological background of the Gestalt principles was discussed in detail, deeply and firmly. We think that there is sufficient space to improve on the revised Gestalt Principles, utilizing neurological findings and a quantum mechanical approach.In this letter, we provide an approach that may interconnect the Gestalt principles (GPs) and the uncertainty relation (UR) via the Fourier transformation (FT). First, we very briefly remind the readers of the GPs, then re-introduce the discrete FT and finally review the UR. Second, we show some of our simple but fundamental results and relate the GPs to the UR.§ GESTALT PRINCIPLES Approximately 2.5 million years ago our ancestors woke up, i.e., stood up on their legs. They were called Homo erectus, the erected ape. Their field of vision was much greater than that of the monkeys of the time. Nobody knows why they stood up; the "why?" is beyond our interest here, and our target is to answer "how?". We would like to emphasize that the authors are not sure it was a good idea to stand up, if one observes the contemporary situation of human beings in modern times. This upright posture led to more information due to the new point of view. How we and other sighted animals perceive information by grouping and finally analyze it is not yet clear. How we then, as a step further, process and react to this premature information is even more complicated. 
The GPs of grouping (perception in a limited sense) provide a somewhat consistent, coherent and maybe logical ground for our visualization. They are <cit.>:*Proximity *Similarity *Closure *"Good" Continuation *Common "Fate" *Good "Form" The first three principles are rather clear and scientifically (in mathematics, physics, neurology, etc.) digestible. However, the last three are questionable. As a (simple) example, we show in Fig. 1 the proximity effect together with non-grouping. The GPs impose that, due to the proximity effect, perception is better. We will show that this statement is not true, at least signal-wise. § FOURIER TRANSFORMATION In applied mathematics, counting is fundamental. Historically, as far as we know, it all started with making scratches on wood to map the number of sheep leaving or coming back to the yard (Fig. 2). Hence, there were two different worlds: the reality was singular, sheep to be fed and to feed yourself. To be as simple as possible, mathematicians map the real world to symbols like scratches, numbers, etc. Moreover, they find smart ways to relate two different realities with each other: observables and observers. Many years later, Fourier realized that if one takes any function (a special case of a relation), it is possible to express this function in terms of already known functions. Among the best known are cosine and sine, periodically repeating functions. Therefore, one can in principle expand a given electromagnetic signal (function) in a basis of these (Fig. 3) <cit.>. Mathematically,f̂(k)=∫_-∞^∞f(x)e^-2π i x kdx,where x is a real variable and k is the variable of the dual space — similar to the mapping between scratch and sheep. As a one-line worked example of this duality, the transform of a pure cosine of period 1/a is concentrated at just two points of the dual space, ∫_-∞^∞cos (2π a x) e^-2π i x kdx = 1/2[δ(k-a)+δ(k+a)]: the more spread out a signal is in x, the more localized it is in k, and vice versa. For the interested reader we suggest looking at the duality concept in Mevlana, Goethe, Hegel and Marx <cit.>.In any case, we hope it is now clear even to an untrained reader that there are two different, yet equivalent, sets/spaces, which can be related to each other by the Fourier transformation. Unfortunately, and mathematically ugly, there is a function called the delta function, which relates the x and k spaces. It is presented asδ (x-a)=1/2 π∫_-∞^∞dkcos (kx-ka).Here a is the period, i.e., the repeating number. By period a we mean the repeating distance between peak points of the function, as shown in Fig. 3. The mathematical ugliness of the δ function comes from the fact that it diverges to infinity at a single point of one space (x or k) while vanishing everywhere else in that space. More disgustingly, the area below this function (its integral) is always constant and equal to 1.In the next Section, the basics of the uncertainty relation will be re-introduced, since the FT in perception is strongly related to it.§ THE VERY BASICS OF QUANTUM MECHANICS Early in 1899, Max Planck realized that the very well known classical mechanics and statistics do not explain the observed facts, which are obviously repeatable experimentally. Hence he, as a genius, developed a novel mathematical formulation to explain what is observed in nature, the black-body radiation. However, his work was, in the beginning, not much more than a collection of ideas that already existed. Un(or)fortunately, there was the "catastrophe" at ultra-violet radiation <cit.> that cannot be explained by classical means. He had to invent a new formulation,<N_i >/N=g_i/(e^(ϵ_i_(n) -μ)/kT-1),which essentially expresses that light is composed of quantized particles, as Newton had stated a couple of centuries earlier <cit.>. 
In this formula, i counts the states of the particle number, μ is the chemical potential determining the probability of adding another particle to the system, T is the so-called temperature, and k represents the Boltzmann constant. We added the subscript (n) to emphasize the discreteness of the energy levels, similarly to g_i, which represents the spin degree of freedom. Here, N is the total number of particles and <N_i > is the expected number of particles in the ith state.The report by Max Planck paved the way to quantum mechanics <cit.>. However, it was Heisenberg who formulated QM independently of the physical nature of the system, grounding it on the Cauchy–Schwarz (triangle) inequality|<𝐮,𝐯>|^2 ≤ <𝐮,𝐮>· <𝐯,𝐯>where 𝐮 and 𝐯 are the two sides of a triangle, as shown in Fig 4, and <·,·> is the inner product.As the last step, we re-present the Heisenberg uncertainty relation. The canonical commutation relation reads[x̂,p̂]ψ(x) = iħψ(x),which leads to Δ x ·Δ p ≥ħ/2. In fact, the wave function ψ(x) also depends on time; however, this is usually ignored for pedagogical reasons. In brief, it is a mathematical fact that, using the Schwarz inequality, one can prove the generalized uncertainty relation Δ A ·Δ B ≥1/2|⟨[Â,B̂]⟩|, where[Â,B̂]=ÂB̂-B̂Â,and Â,B̂ are operators that can be replaced by the coordinate and momentum operators x̂ and p̂, which do not commute with each other. For an ordinary reader: the perpendicular sides of a right triangle cannot both be larger than the hypotenuse that connects their open ends. Finally, while knowing that it is a long path to investigate perception, sensation and re-action, we have found an interconnection between GPs, FT and UR; this requires much more effort than presented here. We are now able to discuss the Gestalt principles within the Heisenberg <cit.> uncertainty principle, for the reader who has a sufficient background in all we discussed above, i.e., GPs, FT and UR.§ PERCEPTION, SENSATION AND SO FORTH Hopefully equipped with the above information, we show in Fig. 5 a caricature of perception and signal conversion. The next figure (Fig. 6) shows a simple example of "how" a triangle is converted by the FT. This is just perception: received by eye cells, turned into an electrical signal and sent to the related center. This is how we wire the electromagnetic outer signal to an internal electrical one. Note that the game is not over yet. To derive a sensation from information is much more complicated, and knowledge is even more intricate; this will be discussed in forthcoming papers. Recall that in Fig. 1 we have shown the imposed images extracted from the GPs, as presented by <cit.>. In Fig. 1, the lower panels are just the Fourier transformed images. This is what we have in our brains as an electrical signal, if one has one. However, it is just the beginning: the center starts to compare this signal with previous perceptions and sensations, which were analyzed similarly before.At this point, to keep the reader with us, we summarize: an independent signal (here EM) reaches our cells (the eye). The information is then converted to an electrical signal, and the related cells perform an FT. Next, the connected brain center captures the electrical impulse and compares it with the previous signal store (memory). Now the sense is there: electrical, compared and filtered. Here comes knowledge: after comparison, the brain produces new information composed of previous and recent data. However, it is not finalized yet; with all the previous information, knowledge, etc., it is time to react.§ CONCLUSION The long-standing debate on the explanation of observer and observed in terms of seeing is discussed within the Gestalt Principles utilizing the Fourier transformation. 
Finally, we related the GPs to the uncertainty relation. Our approach is promising in a couple of senses: it rests on firm mathematical grounds and on known physiological and artistic principles, and is moreover supported by physics.§ ACKNOWLEDGEMENTS This work was not financially supported, other than by personal funds. § REFERENCES Moses:5000 Torah, Moses (updated version ∼450 BC); Mikra: Text, Translation, Reading and Interpretation, Irish Theological Quarterly 72: 305–30 (2007)Jeusus:0 Barton, John. "Before the parting of the ways". Times Literary Supplement, p. 12 (2012)muhammed:652 Elmalili Hamdi Efendi, 12th edition, ISBN: 978-975-19-3243-3 (2011)Beethoven:7 https://www.youtube.com/watch?v=-4788Tmz9Zo (2017)Bruno:G http://www.astronomy.ohio-state.edu/~pogge/Essays/Bruno.html (2017)Goedel:t Biographical Memoirs of Fellows of the Royal Society 26: 148–224 (1980) Wertheimer:12 Wertheimer, M. Experimentelle Studien über das Sehen von Bewegung [Experimental studies on the seeing of motion]. Zeitschrift für Psychologie 61, 161–265 (1912)WAGEMANS:12 Wagemans, J., et al. Psychological Bulletin (American Psychological Association), Vol. 138, No. 6, 1172–1217 (2012)GSp:2017 https://en.wikipedia.org/wiki/Principles_of_grouping (2014)FT:14 https://ibmathsresources.com/2014/08/14/fourier-transforms-the-most-important-tool-in-mathematics/ (2014) duality https://en.wikipedia.org/wiki/Duality Loudon:00 Loudon, R. [1973]. The Quantum Theory of Light (third ed.). Cambridge University Press. ISBN 0-19-850177-3 (2000)Newton Newton, I., Philosophiæ Naturalis Principia Mathematica (1687)Planck Planck, Max. Vorlesungen über Thermodynamik (1897)Heisenberg Biographical Memoirs of Fellows of the Royal Society. Royal Society. 23: 212 (1976)
"authors": [
"S. Iyikalender",
"A. Siddiki"
],
"categories": [
"physics.pop-ph"
],
"primary_category": "physics.pop-ph",
"published": "20170327192831",
"title": "Gestalt Principles re-investigated within Heisenberg uncertainty relation"
} |
J. Lou () School of Information Engineering, Changzhou Vocational Institute of Mechatronic Technology, Changzhou, Jiangsu 213164, P.R. China. [email protected] H. Wang · L. Chen · F. Xu · Q. Xia · W. Zhu · M. Ren () School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, P.R. China. [email protected] Home page: <http://www.loujing.com/cns-sod/> Exploiting Color Name Space for Salient Object DetectionJing Lou · Huan Wang · Longtao Chen · Fenglei Xu · Qingyuan Xia · Wei Zhu · Mingwu RenReceived: date / Accepted: date ============================================================================================= In this paper, we investigate the contribution of color names to the task of salient object detection. An input image is first converted to color name space, which consists of 11 probabilistic channels. By exploiting a surroundedness cue, we obtain a saliency map through a linear combination of a set of sequential attention maps. To overcome the limitation of only using the surroundedness cue, two global cues with respect to color names are invoked to guide the computation of a weighted saliency map. Finally, we integrate the above two saliency maps into a unified framework to generate the final result. In addition, an improved post-processing procedure is introduced to effectively suppress image backgrounds while uniformly highlighting salient objects. Experimental results show that the proposed model produces more accurate saliency maps and performs well against twenty-one saliency models in terms of three evaluation metrics on three public data sets.§ INTRODUCTION Visual attention, one of the intrinsic properties of human vision for extracting important information from abundant visual inputs, is concerned with the understanding and modeling of biological perception systems. Psychophysical and physiological studies indicate that the selective attention mechanism, which directs the human visual system to gaze at the most conspicuous location and then shift to the next conspicuous location, plays an important role in the early representation <cit.>. Since these conspicuous locations are likely to be salient regions defined by feature cues, computational visual attention aims to deal with automatic saliency detection in images or videos. In computer vision, the main tasks of saliency research include eye fixation prediction, which attempts to predict human fixation data <cit.>, and salient object detection, for the localization and identification of salient regions in visual scenes <cit.>.Over the past decades, saliency detection has been widely used in many computer vision applications, including image segmentation <cit.>, object detection <cit.>, object recognition <cit.>, visual tracking <cit.>, image and video compression <cit.>, and video summarization <cit.>. Generally, the resultant map of saliency detection is called a "saliency map", which topographically describes the conspicuity of each location in the whole scene. From a computational point of view, saliency detection techniques can be divided into two categories: a slow, top-down, task-dependent manner; and a rapid, bottom-up, task-independent manner <cit.>. 
Although the top-down manner is indispensable for guiding attention to behaviorally relevant objects, the salient-features-based bottom-up attention is more closely related to an early stage of visual processing <cit.> and has been investigated by numerous researchers.In the feature integration theory of attention, a visual scene is initially coded along a number of elementary features, e.g., color, orientation, brightness, and spatial frequency <cit.>. The selective attention mechanism <cit.> suggests computing these elementary features in parallel and combining the resultant cortical topographic maps into a saliency map. Hence, a majority of bottom-up saliency models aim to investigate different visual features and apply them to define the saliency of a pixel or a region. In these models, contrast based detection is one of the most commonly adopted techniques. As no prior knowledge regarding salient objects is provided, contrast based detection mainly focuses on two aspects: local center-surround difference, and global rarity.For local center-surround difference, one of the most influential bottom-up saliency models is introduced by Itti et al. <cit.>. Based on Koch and Ullman's early representation model <cit.>, Itti et al. extract various features at multiple resolutions and use center-surround differences between different resolutions to form a saliency map. Ma and Zhang <cit.> regard an image as a perceptual field and define saliency by measuring differences between the stimuli perceived by different perception units. Goferman et al. <cit.> exploit four basic principles of human visual attention to detect context-aware saliency, i.e., local low-level features, global considerations, visual organization rules, and high-level factors. Furthermore, by means of the Kullback-Leibler divergence, an information-theoretic approach is proposed to extract saliency from multi-scale center-surround feature distributions <cit.>.For another, the global rarity based saliency models tend to find rare features in an image. Achanta et al. <cit.> propose a frequency-tuned (FT) approach, which defines pixel-wise saliency by comparing the color of each pixel with the average image color in LAB color space. In <cit.>, Cheng et al. present a histogram contrast (HC) based saliency method, which uses color statistics to compute saliency. In addition, a regional contrast (RC) based saliency method is introduced in that work, which simultaneously evaluates global contrast differences and spatial coherence. In order to reduce the complexity of calculating the color contrasts between regions, we subsequently followed the RC method and proposed a regional principal color (RPC) based saliency method <cit.> by only retaining the most frequently occurring color of each superpixel. Besides the widely used color features, some other visual cues are also exploited in global contrast based saliency models, such as orientation <cit.>, intensity <cit.>, spectrum <cit.>, and texture <cit.>.In this paper, we also focus on the bottom-up and contrast-based saliency detection technique. Actually, if we review the task of salient object detection, we can see it has two clear implications: one is that the detected regions should be salient in an image; the other is that these salient regions should contain objects of any category. Gestalt psychological studies indicate that objects lying in the foreground tend to be more salient than background elements <cit.>. 
Since salient objects are more likely to be contained in foreground regions, two questions consequently arise: 1) How to extract foreground regions? 2) How to define the contrast-based saliency? For the first question, one answer is to employ figure-ground segregation.Recently, a simple and effective saliency model called "Boolean Map based Saliency" (BMS) was proposed in <cit.>. The BMS model first demonstrates that rarity based models sometimes ignore global structure information and falsely highlight high contrast regions. Then, following the suggestion of Gestalt psychology that surroundedness may influence figure-ground segregation <cit.>, BMS exploits a set of randomly sampled boolean maps to model the saliency of foreground regions. By using different parameter settings, BMS is suitable for both eye fixation prediction and salient object detection, and achieves state-of-the-art performance.Here, we only discuss its results for salient object detection. Although three channels of LAB color space are chosen as the randomly sampled feature maps, the essence of BMS is the use of the closed outer contours of foreground regions. Its effect on salient object detection is somewhat equivalent to applying it to a lightness image. As illustrated in Fig. <ref>, it is interesting that if we convert the input RGB image (Fig. <ref>) to LAB color space and apply BMS to the L channel (normalized to [0,255], see Fig. <ref>), we obtain two similar saliency maps (cf. Figs. <ref> and <ref>). The detected salient regions have similar characteristics, that is, they are enclosed by the outer boundaries and not connected to the image borders. Obviously, the color information is discarded in this case.In this paper, we couple a surroundedness cue with two global color cues in a unified framework by extending BMS to Color Name Space, which is obtained by using the PLSA-bg color naming model <cit.> (also called PLSA-ind in <cit.>). In computer vision, color names are linguistic color labels assigned to image pixels. The linguistic study of Berlin and Kay <cit.> indicates that there are eleven basic color terms (i.e., color names) in the English language, as given in Table <ref>. In the proposed model, both the probabilities and the statistics of the eleven color names are simultaneously incorporated to measure color differences. The topological structure information also participates in the computation of the color names based saliency, producing several weighted master attention maps. Through a simple linear combination and an improved post-processing procedure, we obtain two saliency maps and then fuse them into a single map. Finally, several image processing procedures, including a truncation operation, intensity mapping, and hole-filling, are invoked to infer the final result. Figures <ref> and <ref> show the saliency results produced by the proposed model. We can see that the color name space based saliency shows higher precision, which demonstrates that the color cue is of as much importance as the surroundedness cue.In the following sections, the proposed model will be called "CNS". 
The main contributions of this paper include: 1) By exploiting color name space, we propose an integrated framework to effectively compute the color based saliency.2) A weighted global contrast mechanism is introduced to incorporate more color cues into the topological structure information of an image.3) An improved post-processing procedure is proposed to uniformly highlight salient objects, making them easy to segment. The remainder of this paper is organized as follows. Section <ref> reviews related work. The proposed salient object detection model is presented in Section <ref>. In Section <ref>, performance comparisons are made on three benchmark data sets. Conclusions and possible extensions are presented in Section <ref>. § RELATED WORK We base the proposed model on BMS <cit.> and PLSA-bg <cit.>. The key idea of BMS is the use of the surroundedness cue, which can be characterized by a set of boolean maps. The BMS model first converts an input RGB image I to LAB color space, then scales each channel to [0,255]. Subsequently, BMS chooses each channel as a feature map, and uses a set of fixed thresholds to binarize each feature map into boolean maps B_i as follows <cit.>:B_i = THRESH(ϕ(I), θ),where ϕ(I) is a feature map of I, and θ represents a fixed threshold. Based on a Gestalt principle of figure-ground segregation <cit.>, BMS performs several morphological operations to generate a set of attention maps, in which all the regions connected to the image borders are masked out since they are not surrounded by any closed outer contour. The final saliency map is simply the average of these attention maps, followed by a morphological post-processing step.The surroundedness cue is also invoked in the proposed CNS model. However, different from BMS, CNS uses color name space instead of LAB color space. In the field of document analysis, the standard PLSA model <cit.> computes the conditional probability of a word w in a document d, and estimates the distributions p(z|d) and p(w|z) by using an Expectation-Maximization (EM) algorithm, where z represents a latent topic. Considering that PLSA does not exploit the color name labels of training images, the PLSA-bg model <cit.> represents an image d (i.e., document) as a LAB color histogram with a group of color bins (i.e., words), and decomposes d into a foreground distribution according to a given color name label l_d (i.e., topic) and a background distribution shared between all training images. By estimating the mixing proportion of foreground versus background, the color name distributions, and the background model, the probability of a color name for a given image pixel is represented asp(z|w) ∝ p(z) p(w|z),where the prior over the eleven color names is taken to be uniform.Besides the probability information of color names, the proposed model makes use of a statistical cue. This is achieved by a Color Name Histogram, in which eleven color name bins are involved for measuring color differences. In <cit.>, the HC method directly uses color statistics to define the saliency value of each color bin. Compared with HC, our model solely exploits the color name histogram to compute eleven weighting coefficients, and further produces eleven weighted master attention maps. 
The color name histogram does not participate in the generation of the original attention maps, which are still determined by the surroundedness cue as used in BMS.§ COLOR NAME SPACE BASED SALIENCY DETECTION To incorporate more color information, we extend BMS <cit.> from LAB color space to color name space. Two saliency cues, i.e., surroundedness and color, are separately invoked to produce two saliency maps, which are fused into a single map for generating the final result. These steps are described in the following sections. §.§ General FrameworkAs illustrated in Fig. <ref>, the integrated framework of CNS includes two computational pipelines.Pipeline I. An input RGB image is first resized to 400 pixels in width and converted to color name space. The resultant space is composed of 11 monochrome intensity components, each called a Color Name Channel in this paper. Following BMS <cit.>, a set of attention maps is generated based on figure-ground segregation. The attention maps of each channel are linearly fused to produce a master attention map. Finally, the mean attention map A̅ is obtained by combining the 11 master attention maps, and is further post-processed to generate the saliency map S.Pipeline II. The resized RGB image is first converted to a Color Name Image, from which two statistical characteristics are derived: 1) a color name histogram which consists of 11 color bins, and 2) 11 binary index matrices, each of which represents the distribution of the corresponding color name. By exploiting two kinds of weighting patterns, we generate 11 weighted master attention maps. All the master attention maps obtained in Pipeline I also participate in this process. Finally, the weighted saliency map S_w is obtained by using the same combination and post-processing as used in Pipeline I.Combination. The saliency maps S and S_w are fed into a truncation operation to produce the mean saliency map S̅, which simultaneously codes for the topological structure and the color conspicuity over the entire image. In addition, we apply another post-processing procedure to generate the final saliency result S, in which the salient region is evenly highlighted and smoothed for convenience in the task of salient object segmentation.§.§ Color Name Channel Based Attention Map First, we directly use the im2c function provided by <cit.>[<http://lear.inrialpes.fr/people/vandeweijer/color_names.html>] to generate the color name space 𝐂={C_1,C_2,…,C_11}, where each color name channel C_i has the range of values [0,1]. Thus, for the resized RGB image I, the color representation of each pixel is mapped from a 3-dimensional (3-D) RGB value to a probabilistic 11-dimensional (11-D) vector which sums up to 1. Considering that the topological structure of I is independent of the perceptual color coherence, each color name channel is treated equally and normalized to [0,255] for the subsequent thresholding operation.Then, we use a set of sequential thresholds from 0 to 255 with a step size of δ to binarize each color name channel C_i∈𝐂 into n boolean mapsB_i^j = THRESH(C_i, θ_j),where at each threshold θ_j, the above function generates a boolean map B_i^j by setting all the values above θ_j to 1s and replacing all the others with 0s. After two morphological operations on B_i^j, including closing and hole-filling, we use a clear-border algorithm <cit.> to mask out all the foreground regions connected to the image borders, and obtain a corresponding attention map A_i^j. 
The same processing steps are also executed for the complement map of B_i^j (denoted B̅_i^j), yielding an attention map A̅_i^j. As summarized in Algorithm <ref>, two parameters are required in this stage: the sample step δ, and the kernel radius ω_c of the closing operation. We will discuss their influences in Section <ref>.However, different from BMS, which averages all the attention maps, the proposed model computes the mean attention map A_i of each color name channel C_i separately. All the attention maps A_i^j and A̅_i^j share the same weight and are averaged to A_i, which is called the Master Attention Map in this paper. The mean attention map A̅ of the 11 master attention maps can then be calculated as follows:A_i= 1/2n∑_j=1^n(A_i^j + A̅_i^j ),A̅ = 1/11∑_i=1^11 A_i. Actually, if we merge Eqs. (<ref>) and (<ref>), we get the same computation procedure for A̅ as introduced in the BMS model <cit.>. The slight difference lies in the usage of the 11 master attention maps. In Pipeline I, the computation of A̅ is mainly based on the surroundedness cue. To make better use of color name space, the proposed framework couples the surroundedness cue with two color cues to compute the color based saliency. In Section <ref>, we will again use the 11 master attention maps to produce a weighted mean attention map A̅_w.
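To make Algorithm <ref> concrete, the following Python sketch — our own illustration of Eqs. (<ref>)–(<ref>), not the authors' released code — realizes the boolean-map-to-attention-map pipeline with scikit-image. The channel stack cns (the 11 color name channels scaled to [0,255]), the sample step delta and the closing radius omega_c are assumed inputs.

import numpy as np
from scipy import ndimage
from skimage.morphology import closing, disk
from skimage.segmentation import clear_border

def attention_maps(cns, delta=8, omega_c=5):
    """cns: H x W x 11 stack of color name channels, each scaled to [0, 255].
    Returns the mean attention map (Eq. (5)) and the 11 master attention maps (Eq. (4))."""
    selem = disk(omega_c)
    thresholds = np.arange(0, 256, delta)
    masters = []
    for i in range(cns.shape[2]):
        acc, count = np.zeros(cns.shape[:2]), 0
        for theta in thresholds:
            fg = cns[:, :, i] > theta
            for B in (fg, ~fg):                    # boolean map and its complement
                B = closing(B, selem)              # morphological closing
                B = ndimage.binary_fill_holes(B)   # hole-filling
                A = clear_border(B)                # drop regions touching the image borders
                acc += A.astype(float)
                count += 1
        masters.append(acc / count)                # master attention map A_i (Eq. (4))
    return np.mean(masters, axis=0), masters       # mean attention map (Eq. (5))

In this setting, n = 256/δ thresholds are sampled per channel; since each surrounded region contributes equally to every boolean map in which it survives, the averaged maps naturally emphasize regions enclosed by closed outer contours.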
§.§ Post-processing

The mean attention map A̅ is shown in Fig. <ref>. Due to the existence of other surrounded objects that have clear boundaries and uniform colors (for example, the red flower below the cat), there are several small salient regions in A̅. To make the main salient object (i.e., the cat) stand out, we also follow BMS and remove small salient regions by sequentially performing two steps of morphological reconstruction <cit.>. The structuring element used here is a disk with radius ω_r. Figure <ref> shows the reconstruction result. It can be observed that those small salient regions have been erased, while the original shape of the salient cat is still preserved.

For the task of salient object detection, the ideal output would be a binary map in which the pixel values of salient objects are 1s while the others are 0s. However, the disadvantage of the morphological reconstruction is that the high intensity values of salient pixels are suppressed simultaneously. In addition, the background of the reconstruction result contains some inconspicuous regions with non-black pixels, which would decrease the detection precision. To address these issues, a nonlinear mapping function is introduced to transform the intensity values to a new range. Overall, we wish to weight the mapping toward the lower output values and map all the intensity values above a specific threshold to 1s. Supposing that F is the reconstruction result, the intensity mapping function takes the following form:

G = map(F, [0,T_F], ϑ_g),

where T_F is the truncation threshold and ϑ_g determines the mapping relationship between F and G. To suppress non-salient pixels, the lower limit of the mapping is set to 0, and ϑ_g is set to be greater than 1. In Eq. (<ref>), all the intensity values above T_F (i.e., in the interval [T_F,255]) are clipped and mapped to 1s.

To obtain T_F automatically, we exploit the statistical information extracted from the histogram of F. After scaling F to [0,255] (see Fig. <ref>), we get its histogram H, where H_k, k∈[0,255], denotes the number of pixels at the kth intensity level. By summing up the number of pixels from H_0, we obtain the minimum intensity level T_F which satisfies the following criterion:

(1-ϑ_r) ∑_k=0^255 H_k ⩽ ∑_k=0^T_F H_k,

that is, the non-salient pixels should cover no less than a fraction 1-ϑ_r of the total number of image pixels. For convenience, we abbreviate Eq. (<ref>) as

G = map(F, ϑ_r, ϑ_g),

where ϑ_r is empirically set to be less than 10%. Figure <ref> illustrates the intensity mapping curve with ϑ_r=0.02 and ϑ_g=1.5. By using this mapping, the lower (darker) values in the output map (Fig. <ref>) are further suppressed. From the difference map (Fig. <ref>), we can see that the non-salient regions on the right side of the cat have been eliminated. Finally, we perform a hole-filling operation to generate the saliency map S. The whole post-processing procedure is summarized in Algorithm <ref>. In Section <ref>, we will discuss the influences of the parameters ω_r, ϑ_r, and ϑ_g.
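The threshold selection and the nonlinear mapping can be sketched as follows. The function names and the gamma-style form of the mapping are our own reading of the description above (MATLAB's imadjust offers equivalent behaviour):

```python
import numpy as np

def truncation_threshold(F, theta_r=0.02):
    """Smallest intensity level T_F whose cumulative histogram covers at
    least a (1 - theta_r) fraction of all pixels; F is scaled to [0, 255]."""
    hist = np.bincount(np.round(F).astype(np.uint8).ravel(), minlength=256)
    return int(np.searchsorted(np.cumsum(hist), (1.0 - theta_r) * F.size))

def intensity_mapping(F, theta_r=0.02, theta_g=1.5):
    """Map [0, T_F] to [0, 1] with exponent theta_g; values above T_F are
    clipped to 1, further suppressing non-salient (dark) pixels."""
    T_F = max(truncation_threshold(F, theta_r), 1)
    return np.clip(F / T_F, 0.0, 1.0) ** theta_g
```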
§.§ Global Color Cue Based Saliency

As indicated previously, we introduce a color-based saliency algorithm to overcome the limitation of using the surroundedness cue alone. To take advantage of color attributes, two global color cues, statistic and contrast, are inferred from the color name image and employed to compute weighting coefficients and matrices. The 11 master attention maps obtained in Section <ref> are coupled with the two kinds of weights to produce a weighted saliency map S_w (a code sketch of both weighting schemes is given at the end of this section).

First, we again use the im2c function <cit.> to convert each pixel value of the resized RGB image I from a 3-D RGB value to a probabilistic 11-D vector. By exploiting the index of the largest element in the vector, we construct an index map 𝐌 in which each pixel has an integer value between 1 and 11. Based on 𝐌, we derive two kinds of weights.

§.§.§ Color Name Statistic Based Weights

If we use the corresponding RGB value c_i given in Table <ref> to represent each pixel in 𝐌, we get the color name image shown in Fig. <ref>. The histogram of the color name image has 11 color levels in total, where the ith level corresponds to the number of pixels having the color name t_i. In this paper, this histogram is called the Color Name Histogram, as shown in Fig. <ref>. From the color name histogram, we can obtain 11 probability values. The probability of the ith color name is denoted by f_i.

Another cue is the distribution of the color names in 𝐌. For the purpose of combining with the 11 master attention maps, we use Eq. (<ref>) to construct 11 index matrices. In the ith index matrix M_i, any element equal to i is set to 1, otherwise it is set to 0:

M_i(x,y) = 1, if M(x,y) = i; 0, otherwise.

As discussed in Section <ref>, the attention map A_i of the ith color name channel is computed by linearly averaging boolean maps, where all the foreground regions connected to the image borders are abandoned. For any boolean map, all the pixels in the surrounded regions share the same weight. To jointly consider the frequencies and distributions of different color names, we simply combine f_i and M_i to obtain the first kind of weights, i.e., the weighting matrices

W_i = f_i M_i.

§.§.§ Color Name Contrast Based Weights

Mainly inspired by <cit.> and <cit.>, we calculate the second kind of weights, i.e., the contrast based weighting coefficients. The weight of each color name is defined as its color contrast to all the other color names. All the pixels having the same color name share the same weight. For the color distance metric, we directly use the RGB values of the 11 color names given in Table <ref>. Specifically, the weighting coefficient w_i of the color name t_i is defined as

w_i = ∑_j=1^11 f_j ‖c_i-c_j‖_2^2,

where ‖c_i-c_j‖_2 is the ℓ_2-norm of the color difference between the color names t_i and t_j.

By integrating the two kinds of weights into the 11 master attention maps and averaging them, we compute the weighted mean attention map A̅_w (see Fig. <ref>) as follows:

A̅_w = norm( ∑_i=1^11 w_i·(W_i ∘ A_i) ),

where ∘ denotes the Hadamard product, and norm(·) is the normalization function which scales the values of A̅_w to [0,1]. Figures <ref>–<ref> illustrate the same post-processing procedure introduced in Section <ref>. From Fig. <ref>, we can see that the hole-filling operation completes the closed dark regions inside the cat. Finally, we obtain the second saliency map, i.e., the weighted saliency map S_w with the range [0,255], as shown in Fig. <ref>.

§.§ Combination

To combine the saliency maps S and S_w, we simply average them in the first step of the combination stage. The original output is illustrated in Fig. <ref>. Considering that saliency maps are meant to assist salient object segmentation, this original combination result is clearly not ideal. First, to eliminate the perceptually insignificant regions outside the cat, we perform an intensity mapping in post-processing I, which simultaneously suppresses the inner saliency and subsequently results in an indeterminate object region in S. Second, in S_w the salient object has a clear contour, but shows an apparently nonuniform intensity distribution. Third, the locations of the regions with higher saliency values are completely different between the two saliency maps.

To address these issues, a truncation operation is introduced to clip the original output. Intuitively, we wish the resultant salient object to have a uniform intensity distribution, which can then be further highlighted by a post-processing procedure. Since both S and S_w have been normalized to the range [0,255], we define the improved mean saliency map S̅ as

S̅ = [ S+S_w ]_0^255 / 2,

where [·]_0^255 is the operator that truncates its argument to values between 0 and 255. As illustrated in Fig. <ref>, the above definition causes a piecewise mapping, in which the values above 128 are clipped and the others stay unchanged. From Fig. <ref>, we can see that the resultant map S̅ captures the common salient parts of S and S_w. Although the detected region has lower saliency, the whole region is uniform and clearly stands out from the background. This means that we can also perform a post-processing operation on S̅ to refine its saliency.

This new post-processing procedure is summarized in Algorithm <ref>. Compared with Algorithm <ref>, the difference is that this procedure only includes two operations: intensity mapping and hole-filling. For the former, we use the same parameter settings as before. Figure <ref> shows the histogram of the mean saliency map S̅; the intensity mapping curve is illustrated in Fig. <ref>. Note that, different from Fig. <ref>, here the intensity mapping curve maps the inputs in the range [0,128] to outputs in the range [0,255]. After filling all the small dark holes in the object region, we obtain the final saliency result S of the proposed model, as shown in Fig. <ref>. It can be seen that our model suppresses the image background well and uniformly highlights the foreground object. More importantly, for the subsequent task of salient object segmentation, we can easily perform a thresholding operation on S while generating stable segmentation results over a wide range of thresholds.
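The code sketch of the two weighting schemes and the weighted combination is given below. The helper names are ours, and `rgb` is assumed to hold the 11 color name RGB values of Table <ref> as an array of shape (11, 3):

```python
import numpy as np

def color_weights(M, rgb):
    """Statistic- and contrast-based weights derived from the index map M
    (integer values 1..11) and the 11 color name RGB values `rgb`."""
    f = np.array([(M == i + 1).mean() for i in range(11)])      # f_i
    W = [f[i] * (M == i + 1).astype(float) for i in range(11)]  # W_i = f_i M_i
    d2 = ((rgb[:, None, :].astype(float) - rgb[None, :, :]) ** 2).sum(-1)
    w = d2 @ f                                                  # w_i = sum_j f_j |c_i - c_j|^2
    return w, W

def weighted_mean_attention_map(masters, w, W):
    """Weighted Hadamard combination of the 11 master attention maps,
    normalized to [0, 1]."""
    A_w = sum(w[i] * W[i] * masters[i] for i in range(11))
    return A_w / max(A_w.max(), 1e-12)
```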
§ EXPERIMENTS

We evaluate the proposed model against twenty-one saliency models, including AC <cit.>, BMS <cit.>, CA <cit.>, COV <cit.>, FES <cit.>, FT <cit.>, GC <cit.>, GU <cit.>, HC <cit.>, HFT <cit.>, MSS <cit.>, PCA <cit.>, RC <cit.>, RPC <cit.>, SEG <cit.>, SeR <cit.>, SIM <cit.>, SR <cit.>, SUN <cit.>, SWD <cit.>, and TLLT <cit.>, on three benchmark data sets: ASD <cit.>, ECSSD <cit.>, and ImgSal <cit.>. The saliency maps of the above models are obtained as follows:

* For BMS,[<http://cs-people.bu.edu/jmzhang/BMS/BMS.html>] HFT,[<http://www.escience.cn/people/jianli/DataBase.html>] HS,[<http://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/>] RPC,[<http://www.loujing.com/rpc-saliency/>] and TLLT[<http://www.escience.cn/people/chengong/Codes.html>] on all the data sets, we use the author-provided saliency maps, or run the authors' code to obtain saliency maps.

* For the AC, CA, FT, HC, RC, and SR models on the ASD data set, we directly use the saliency maps provided by Cheng et al. <cit.>.[<http://cg.cs.tsinghua.edu.cn/people/cmm/Saliency/Index.htm>] For the remaining models on ASD, we retrieve the related saliency maps from the MSRA10K database <cit.>.[<http://mmcheng.net/msra10k/>]

* For the remaining models, we employ the implementation of the salient object detection benchmark published by Borji et al. <cit.>.[<http://mmcheng.net/salobjbenchmark/>] On the ECSSD data set, the saliency maps come directly from the author-provided results; on the ImgSal data set, we run the authors' source code to generate saliency maps.

§.§ Data sets

The popular ASD data set (a.k.a. MSRA1000) is a subset of MSRA5000 <cit.>.[<http://research.microsoft.com/en-us/um/people/jiansun/SalientObject/salient_object.htm>] The original MSRA5000 data set contains 5000 images with labeled rectangles from nine participants. Achanta et al. <cit.> consider that saliency maps are ultimately used for salient object segmentation, and accordingly derive the ASD data set of 1000 images from MSRA5000. Instead of the user-drawn rectangles around salient regions used in <cit.>, the ASD data set provides object-contour based ground truth for more accurate comparisons of segmentation results.[<http://ivrl.epfl.ch/supplementary_material/RK_CVPR09/>]

The ECSSD data set is an extension of CSSD.[<http://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/dataset.html>] In order to represent more general situations of natural images than ASD, Yan et al. constructed the CSSD data set, which contains 200 images with diversified patterns in both foreground and background <cit.>. Subsequently, they extended CSSD to a larger data set named ECSSD, which includes 1000 structurally complex images and pixel-wise ground truth masks labeled by five helpers <cit.>.

In addition, we evaluate the proposed model on the ImgSal data set, which is designed for the detection of salient regions of different sizes <cit.>. The ImgSal data set contains 235 images collected using Google, and provides both region ground truth (human labeled) and fixation ground truth (recorded with an eye tracker). For the region ground truth, the authors asked nineteen naive subjects to label the images in a random manner, and generated two kinds of labeling results for each image: a binary map and a probability map. In our experiments, we only use the binary maps for evaluating saliency detection results.
§.§ Experimental Setup

The commonly used metrics for evaluating salient object detection models are Precision-Recall and the F_β-measure. For an input image, the resultant saliency map is a gray-scale image with integer values in the range [0,255]. We can therefore partition it into a binary map M using a threshold, and then compute precision and recall by comparing M with the corresponding ground truth G as follows:

Precision = |M∩G|/|M|, Recall = |M∩G|/|G|,

where |·| indicates the number of foreground pixels. Moreover, to jointly evaluate precision and recall, the F_β-measure can be computed as

F_β = ((1+β^2) × Precision × Recall) / (β^2 × Precision + Recall),

where β^2 is set to 0.3 to emphasize precision, as suggested in <cit.>. In our experiments, two binarization schemes are used to partition saliency maps.

1) Fixed Thresholding: We vary a threshold T_f from 0 to 255 to compute the scores of precision, recall and F_β-measure. Besides plotting the Precision-Recall and F_β-measure curves, we report two statistics for quantitative evaluation, i.e., the average F_β score (denoted "AvgF") and the maximum F_β score (denoted "MaxF").

2) Adaptive Thresholding: As presented in <cit.>, we use an adaptive threshold T_a (cf. Eq. (<ref>)) to partition S and compute the scores of precision, recall and F_β-measure. Besides plotting Precision-Recall bars, we also report the F_β score obtained by using T_a (denoted "AdpF"):

T_a = 2/(W × H) ∑_x=1^W ∑_y=1^H S(x,y),

where W and H are the width and height of S, respectively, and S(x,y) is the saliency value at the coordinate (x,y).
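These metrics follow directly from the definitions above; a minimal sketch:

```python
import numpy as np

def precision_recall_fbeta(S, G, threshold, beta2=0.3):
    """Precision, recall and F_beta of the binarized saliency map S
    (values in [0, 255]) against the binary ground truth mask G."""
    M = S >= threshold
    tp = float(np.logical_and(M, G).sum())
    p = tp / max(M.sum(), 1)
    r = tp / max(G.sum(), 1)
    f = (1 + beta2) * p * r / max(beta2 * p + r, 1e-12)
    return p, r, f

def adaptive_threshold(S):
    """T_a: twice the mean saliency value of the map."""
    return 2.0 * S.mean()
```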
§.§ Parameter Analysis

The proposed model includes five parameters: the sample step δ, the kernel radius ω_c of the closing operation, the kernel radius ω_r of the morphological reconstruction, and the saturation ratio ϑ_r and gamma ϑ_g of the intensity mapping. To find the optimal parameter setting, we exploit the "MaxF" metric suggested in <cit.> to compare the saliency maps obtained with different parameter settings. After 256 F_β scores have been computed by fixed thresholding, the maximum one is chosen as the best score for each parameter setting. In our experiments, the ranges of the five parameters are: δ∈[4:4:40], ω_c∈[1:20], ω_r∈[1:20], ϑ_r∈[0.001:0.001:0.009]∪[0.01:0.01:0.1], and ϑ_g∈[1.0:0.1:3.0], respectively.

Figure <ref> shows the influences of the five parameters on the evaluation data sets. First, the proposed model is not sensitive to the parameter ϑ_g: varying it from 1.0 to 3.0 rarely changes the MaxF scores. Second, the parameters ω_c, ω_r, and ϑ_r have a direct impact on MaxF, especially on the ImgSal data set. Overall, each MaxF curve shows a slight upward trend as the parameter value increases, and starts to drop after MaxF reaches its peak. Compared with ASD or ECSSD, the influences of these three parameters are more apparent on the ImgSal data set. Third, the sample step δ does not significantly impact MaxF, and none of its curves shows a clear unimodal distribution. However, the runtime of the proposed model is directly influenced by the sample step: as the value of δ decreases, the speed performance typically drops.

Given the diversity of the three data sets, we further use the average MaxF metric to determine the optimal parameter values. After the three MaxF curves of each parameter have been obtained, we simply average them and choose the location of the maximum as the optimal value of each parameter. The black curves in Fig. <ref> (indicated by "Average") exhibit the trends of the five parameters. The optimal values of the five parameters are reported in Table <ref>.

§.§ Results

We present the statistical comparison of the proposed model against the twenty-one saliency models. Figures <ref> and <ref> show the precision-recall and F_β-measure curves produced by fixed thresholding. The precision-recall bars generated with the adaptive threshold T_a are presented in Fig. <ref>. More quantitative details are given in Fig. <ref>.

Due to the intensity mapping used in the post-processing procedure, the resultant curves of our model clearly present two noticeable characteristics: one is that the recall scores span a narrower range; the other is that the F_β-measure curves tend to be flatter after they rapidly reach their summits. Although our model has some disadvantages in precision, it achieves higher F_β scores, especially on the ECSSD and ImgSal data sets. This crucial advantage of our model is closely associated with the essential task of salient object detection, which is to solve a salient foreground segmentation problem <cit.>.

A good salient object detection model should generate accurate saliency maps with evenly highlighted foregrounds and thoroughly suppressed backgrounds. An easy way to extract salient objects is then to binarize the saliency maps with a single fixed threshold. However, this threshold is quite difficult to determine automatically. In practice, we usually use the maximum F_β score (i.e., MaxF) to evaluate the performance of a saliency model, and choose the location of the MaxF as the optimal segmentation threshold <cit.>. If a saliency map were identical to its ground truth mask, the F_β-measure curve would be a horizontal line. Conversely, if the F_β-measure curve is a horizontal line, we obtain identical segmentation results at any threshold in [0,255]. Therefore, for two models having the same MaxF, we prefer the one that produces a flatter F_β-measure curve. This means that the segmentation results will be more stable (that is, virtually unchanged) over a wide range of thresholds.

Figure <ref> shows a visual comparison of the saliency maps generated by different models. For these example images, our model generates more accurate saliency maps, which are very close to the corresponding ground truth masks. The salient regions detected by our model have uniform intensities and well-defined boundaries, which enables simple thresholding for the subsequent salient object segmentation.

In Fig. <ref>, we report the quantitative statistics of the three evaluation metrics discussed earlier. The baseline scores, indicated by "Average", are simply the averages of the evaluation scores. With respect to AvgF, TLLT ranks first on ASD. The proposed model outperforms all the others on ECSSD and ImgSal. Obviously, this is mainly owing to its flatter F_β-measure curves over a wide range of thresholds.

However, on the ASD and ECSSD data sets, our model has some disadvantages in terms of MaxF and AdpF. For the MaxF metric, TLLT performs best on the ASD data set and ranks third on ECSSD. For the AdpF metric, TLLT also ranks first on ASD, while on ECSSD the RC model performs best. Nevertheless, our model is among the top three models in terms of both MaxF and AdpF on these two data sets. On the ImgSal data set, our model again outperforms all the others by large margins. Moreover, compared with ASD and ECSSD, the average performances of all the models are lower on ImgSal.
This indicates that the ImgSal data set is more challenging, because the images collected in it contain salient regions of different sizes. Finally, on average, the proposed model performs the best of all the compared models. The next best two models are TLLT and BMS. The MaxF scores of nine models are lower than the average score. The five worst-performing models are SUN, SR, AC, SIM, and SeR. Except for AC, the other four are eye fixation prediction models, which are at a disadvantage for salient object detection because their output saliency maps are blurred and sparse. However, this does not necessarily mean that eye fixation prediction models are unsuitable for detecting salient objects. For example, the BMS model was initially designed for the task of eye fixation prediction; we can see that on average it ranks third and performs better than most of the salient object detection models evaluated in our experiments.

§.§ Discussions

Although the proposed model performs well on the evaluation data sets, it does fail in some cases. These failures are mainly caused by three visual attributes implicitly used for identifying salient objects: location, color, and size. Figure <ref> shows several hard image cases collected from the evaluation data sets. The third row shows the color name images annotated with the RGB colors given in Table <ref>.

* Location: The key idea of BMS is the Gestalt principle of surroundedness; thus, salient regions connected to the image borders are masked out in the generation of attention maps, as shown in Fig. <ref>.

* Color: The proposed model originates from BMS, and exploits eleven color name channels for figure-ground segregation. Sometimes the foreground objects do not directly touch the image borders, but may have colors very similar to the backgrounds. For example, in the third rows of Figs. <ref> and <ref>, the RGB colors of the manually labeled salient objects (the horse and the statue) and some background regions (e.g., the valley and the plinth) are almost the same. When salient objects and image borders are connected by such background elements, the salient objects are removed in the generation of attention maps. Moreover, a color statistics based global contrast is introduced in the proposed model; the color similarities between foreground regions and background elements impair its ability to literally pop out salient objects (cf. Figs. <ref> and <ref>).

* Size: In the proposed model, some morphological operations, including closing and reconstruction, are used to compute saliency maps. The influences of the parameters ω_c and ω_r have been presented in Fig. <ref>. These parameters have a substantial impact on the outputs of our model, especially on the ImgSal data set. As Figs. <ref> and <ref> show, the manually labeled regions are eroded because the morphological structuring elements are larger than these regions.

* Another hard case is caused by the thin artificial borders around some test images, as illustrated in Fig. <ref>. When performing the clear-border operation on boolean maps, the proposed model regards the inner area as a whole region surrounded by an enclosed boundary, and does not set any of the foreground pixels to 0. Such a processing mechanism leaves the background elements inside the artificial borders unchanged, and results in the failure of figure-ground segregation.

Clearly, the proposed model focuses on bottom-up image processing techniques, and only exploits low-level image features.
Therefore, it fails to highlight regions that have colors similar to their surroundings. One way to tackle this issue is to invoke more complex visual features. Second, under the definition of surroundedness, the regions connected to the image borders are not enclosed by any complete outer contour. This results in the absence of object-level information in the attention map computation. This problem could be solved by invoking background priors and top-down cues. Finally, the proposed model works well for detecting large salient objects, but is not suitable for small ones. It would be interesting to adopt a multi-scale strategy, or to automatically seek the optimal scale, for the detection of salient objects of different sizes.

§ CONCLUSIONS

In this paper, we have presented a salient object detection model based on color name space. Considering the outstanding contribution of color contrast to saliency detection, a unified framework is constructed to overcome the limitations of boolean map based saliency. By exploiting several visual features with respect to linguistic color names, we suggest that fusing color attributes provides superior performance over relying on the surroundedness cue alone. Moreover, we propose an improved post-processing procedure to uniformly smooth and highlight salient regions, so that the detected salient objects have high and constant intensity levels, which is convenient for object segmentation. Experimental results indicate the performance improvement of the proposed model on three evaluation data sets.

With regard to future work, first, we intend to invoke a background measure to handle salient objects that are heavily connected to the image borders. Second, it would be interesting to incorporate more visual features and top-down cues to solve the problem of color confusion between foreground regions and backgrounds. Third, for the morphological structures used in the proposed model, only a fixed value is chosen as the optimal kernel radius, which results in the loss of small salient objects. We have noted that an adaptive radius can effectively address this issue; how to automatically determine the radius size is left to future investigation. Finally, the current version of our MATLAB code is implemented for the purpose of academic research. We plan to further optimize the code to improve the speed performance of the proposed model.

J. Lou is supported by the Changzhou Key Laboratory of Industrial Internet and Data Intelligence (No. CM20183002) and the QingLan Project of Jiangsu Province (2018). The work of L. Chen, F. Xu, W. Zhu, and M. Ren is supported by the National Natural Science Foundation of China (Nos. 61231014 and 61727802). H. Wang is supported by the National Defense Pre-research Foundation of China (No. 9140A01060115BQ02002) and the National Natural Science Foundation of China (No. 61703209). Q. Xia is supported by the National Natural Science Foundation of China (No. 61403202) and the China Postdoctoral Science Foundation (No. 2014M561654). The authors thank Andong Wang and Haiyang Zhang for helpful discussions regarding this manuscript.
"authors": [
"Jing Lou",
"Huan Wang",
"Longtao Chen",
"Fenglei Xu",
"Qingyuan Xia",
"Wei Zhu",
"Mingwu Ren"
],
"categories": [
"cs.CV",
"I.4"
],
"primary_category": "cs.CV",
"published": "20170327030858",
"title": "Exploiting Color Name Space for Salient Object Detection"
} |
Cosmic Equilibration: A Holographic No-Hair Theorem from the Generalized Second Law
Sean M. Carroll and Aidan Chatwin-Davies
December 30, 2023
=====================================================================================

Stars form in cold molecular clouds. However, molecular gas is difficult to observe because the most abundant molecule (H_2) lacks a permanent dipole moment. Rotational transitions of CO are often used as a tracer of H_2, but CO is much less abundant and the conversion from CO intensity to H_2 mass is often highly uncertain. Here we present a new method for estimating the column density of cold molecular gas (Σ_gas) using optical spectroscopy. We utilise the spatially resolved Hα maps of flux and velocity dispersion from the Sydney-AAO Multi-object Integral-field spectrograph (SAMI) Galaxy Survey. We derive maps of Σ_gas by inverting the multi-freefall star formation relation, which connects the star formation rate surface density (Σ_SFR) with Σ_gas and the turbulent Mach number (ℳ). Based on the measured range of Σ_SFR and ℳ, we predict Σ_gas in the star-forming regions of our sample of 260 SAMI galaxies. These values are close to previously measured Σ_gas obtained directly with unresolved CO observations of similar galaxies at low redshift. We classify each galaxy in our sample as `Star-forming' (219) or `Composite/AGN/Shock' (41), and find that in `Composite/AGN/Shock' galaxies the average Σ_SFR, ℳ, and Σ_gas are enhanced by factors of 2.0, 1.6, and 1.3, respectively, compared to Star-forming galaxies. We compare our predictions of Σ_gas with those obtained by inverting the Kennicutt-Schmidt relation and find that our new method is a factor of two more accurate in predicting Σ_gas, with an average deviation of 32% from the actual Σ_gas.

galaxies: ISM — galaxies: star formation — galaxies: structure — stars: formation — techniques: spectroscopic — turbulence

§ INTRODUCTION

The coalescence of gases by turbulence and gravity intricately controls star formation within giant molecular clouds <cit.>. On the one hand, turbulence has the ability to hinder star formation by providing kinetic energy that can oppose gravity. On the other, the supersonic turbulence ubiquitously observed in the molecular phase of the interstellar medium (ISM) produces local shocks and compressions, which lead to enhanced gas densities that are key for triggering star formation <cit.>. Understanding the complex effects of turbulence in the ISM is therefore crucial to understanding the process of galaxy evolution.

The cold turbulent gas that provides the fuel for star formation is only visible at millimetre/sub-millimetre to radio wavelengths, and is often faint, making it difficult to detect at high spatial resolution. A standard method to measure the mean column density of molecular gas (Σ_gas) is to use rotational lines of CO. A severe problem with this method is that, because CO is about 10^4 times less abundant than the main mass carrier, H_2, one requires a CO-to-H_2 conversion factor, which is typically calibrated based on measurements in our own galaxy. However, the CO-to-H_2 conversion factor may depend on metallicity, environment and redshift, introducing high uncertainties in the reconstruction of the total gas surface densities from measurements of CO <cit.>. Another method is to measure dust emission or dust extinction and assume a gas-to-dust ratio to infer the molecular gas masses and surface densities. These methods can suffer from uncertainties in the gas-to-dust ratio, especially for low-metallicity galaxies where this ratio becomes increasingly uncertain.
Both CO and dust observations require telescopes and instruments that work at millimetre/sub-millimetre wavelengths, which may not always be available and/or may have relatively low spatial resolution. Here we present a new method to estimate Σ_gas based on the star formation rate (SFR), which can be obtained with optical spectroscopy.

Large optical integral field spectroscopy (IFS) surveys have started to provide us with details regarding the chemical distribution and kinematics of extragalactic sources at a size and uniformity unprecedented until recent times. Large galaxy surveys such as the Sloan Digital Sky Survey <cit.>, the 2-degree Field Galaxy Redshift Survey <cit.>, the Cosmic Evolution Survey <cit.>, the VIMOS VLT Deep Survey <cit.>, and the Galaxy and Mass Assembly survey <cit.> have contributed more than 3.5 million spectra that have been of extraordinary aid to our understanding of galaxy evolution. However, those spectra have been taken with a single fibre or slit, and provide only a single, global spectrum per galaxy <cit.>. These spectra are therefore susceptible to aperture effects, because differing parts or fractions of the galaxies are recorded for each source, making each observation dependent on the size and distance of the galaxy, as well as on the positioning of the fibre <cit.>. Conversely, IFS can spatially resolve each galaxy observed, assigning individual spectra to many locations across the galaxy.

Here we utilise data from the SAMI Galaxy Survey, an IFS survey with the aim to observe 3400 galaxies over a broad range of environments and stellar masses. We use the SFRs measured in SAMI in order to provide a tool for estimating Σ_gas.

The basis of our Σ_gas reconstruction method is a recent star formation relation developed in the multi-freefall framework of turbulent gas <cit.>. There have been many ongoing efforts to find an intrinsic relation between the amount of gas and the rate at which stars form in a molecular cloud. Initiated by <cit.>, Σ_SFR correlates with Σ_gas <cit.>, which can be approximated by an empirical power law with exponent n,

Σ_SFR ∝ Σ_gas^n.

For a sample of low-redshift disc and starburst galaxies, K98 found an exponent of n=1.40±0.15. However, significant scatter and discrepancies between different sets of data exist within this framework, commonly referred to as the Kennicutt-Schmidt relation. These discrepancies suggest that Σ_SFR does not only depend on Σ_gas, but also on factors such as the turbulence and the freefall time of the dense gas on small scales.

Motivated by the fact that dense gas forms stars at a higher rate, a new star formation correlator was derived in <cit.>, hereafter SFK15. This descriptor, denoted by Σ_gas/t_ff,multi and called the `maximum or multi-freefall gas consumption rate' (MGCR), is dependent on the probability density function <cit.> of molecular gas,

Σ_SFR = 0.45% × Σ_gas/t_ff,multi = 0.45% × (Σ_gas/t_ff) × (1 + b^2ℳ^2 β/(β+1))^(3/8),

where ℳ is the Mach number of the turbulence, b is the turbulence driving parameter <cit.> and β is the ratio of thermal to magnetic pressure <cit.> in the molecular gas.

The SFK15 model for Σ_SFR given by Eq. (<ref>) is built upon foundational concepts laid out by <cit.>, hereafter KDM12, which had parameterised Σ_SFR by the ratio between Σ_gas and the average (single) freefall time t_ff, a correlator hereon denoted by Σ_gas/t_ff <cit.>. Our new correlator instead uses the concept of a multi-freefall time, which was pioneered by <cit.>, tested with numerical simulations in <cit.>, and used in SFK15 as a stepping stone to expand upon the KDM12 model.
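For reference, the forward form of this relation is compactly expressed below. This is our own sketch (not part of the SAMI pipeline); the β→∞ case recovers the non-magnetised limit adopted later in this work:

```python
import numpy as np

def sfr_surface_density(Sigma_gas, t_ff, mach, b=0.4, beta=np.inf):
    """Sigma_SFR = 0.45% of the MGCR, i.e.
    0.0045 * (Sigma_gas / t_ff) * (1 + b^2 M^2 beta/(beta+1))^(3/8).
    Output units follow the inputs, e.g. Msun pc^-2 Myr^-1 for Sigma_gas
    in Msun pc^-2 and t_ff in Myr."""
    mag = beta / (beta + 1.0) if np.isfinite(beta) else 1.0
    return 0.0045 * (Sigma_gas / t_ff) * (1.0 + b**2 * mach**2 * mag) ** 0.375
```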
SFK15 found that Σ_SFR is equal to 0.45% of the MGCR by placing observations of Milky Way clouds and the Small Magellanic Cloud (SMC) in the K98, KDM12 and SFK15 frameworks, confirming the measured low efficiency of star formation <cit.>. Statistical tests in SFK15 showed that a significantly better correlation between Σ_SFR and the MGCR was achieved than could be attained with either the Σ_gas or Σ_gas/t_ff parameterisations of the previous star formation relations by K98 and KDM12, respectively. The scatter in the SFK15 relation was found to be a factor of 3–4 lower than in the K98 and KDM12 relations, suggesting that it provides a better physical model for Σ_SFR compared to the empirical relation by K98 and compared to the single-freefall relation by KDM12.

The aim of the current work is to formulate a method to predict the distribution of Σ_gas by inverting Eq. (<ref>) and using optical observations, which will be plentiful in the coming few years. Here we use the Hα luminosities and velocity dispersions provided by the SAMI Galaxy Survey to estimate Σ_gas from measurements of Σ_SFR and ℳ.

In Sec. <ref> we describe the observations of our SAMI galaxy sample. Sec. <ref> introduces our new method to derive Σ_gas by inverting the SFK15 relation. In Sec. <ref> we present our results and compare purely star-forming with Composite/AGN/Shock galaxies in our sample. In Sec. <ref> we compare our own and other observations and predictions to previous star formation relations within the Kennicutt-Schmidt framework. In Sec. <ref> we demonstrate that our new method for predicting Σ_gas is superior to inverting the K98 relation. Our conclusions are summarised in Sec. <ref>. The new data products for each SAMI galaxy in our sample derived here (average turbulent Mach number, cold gas density, freefall time, etc., and finally Σ_gas) are listed in Tab. <ref> in Appendix <ref> and are available for download in the online version of the journal or by contacting the authors.

§ SAMPLE SELECTION

§.§ The SAMI Galaxy Survey

We selected a sample of 260 galaxies from the SAMI Galaxy Survey internal data release version 0.9. The Sydney-AAO Multi-object Integral field spectrograph <cit.> is a front-end fibre feed system for the AAOmega spectrograph <cit.>, consisting of 13 bundles of 61 fibres each <cit.> that can be deployed over a 1 degree diameter field of view. SAMI therefore enables simultaneous spatially-resolved spectroscopy of twelve galaxies and one calibration star with a 15” diameter field-of-view on each object.

The AAOmega spectrograph can be configured to provide different resolutions and wavelength ranges; the SAMI Galaxy Survey employs the 570V grating to obtain a resolution of R=1730 (74 km s^-1) at blue wavelengths and the 1000R grating to obtain R=4500 (29 km s^-1) at red wavelengths (around Hα). SAMI datacubes are reduced and re-gridded to a spatial scale of 0.5”×0.5” <cit.> and the spatial resolution is about 2” (Green et al., in prep).

The SAMI Galaxy Survey plans to include more than 3000 galaxies at redshift z<0.1 covering a wide range of stellar masses and environments. The sample is drawn from GAMA <cit.> with additional entries from eight nearby clusters to cover denser environments <cit.>. Reduced datacubes and a variety of emission line based higher-level data products are included in the first public data release <cit.>. The emission lines of SAMI galaxies have been analysed using the spectral fitting pipeline LZIFU <cit.> to extract emission line fluxes and kinematics for each spectrum.
The spectrum associated with each spectral/spatial pixel (`spaxel') is first fit with a stellar template using the `penalized pixel-fitting' (pPXF) routine <cit.>, before fitting up to three Gaussian line profiles to each of eleven strong emission lines simultaneously. For this paper, we choose to use the single-Gaussian fits, and make use of the emission line flux maps, gas velocity maps, and gas velocity dispersion maps below.

Also available in the SAMI Galaxy Survey database are maps of SFR and Σ_SFR (in units of M_⊙ yr^-1 and M_⊙ yr^-1 kpc^-2, respectively). These maps are made using extinction-corrected Hα fluxes converted to SFRs following the relation derived in <cit.>. The SFR maps are fully described in Medling et al. (in prep).

§.§ Our subsample

From the pool of SAMI galaxies, we select a sub-sample of galaxies according to the criteria described below. We only consider spaxels with a sufficiently high signal-to-noise (S/N) ratio. The S/N was defined to be the ratio of the total emission line flux to the statistical one-sigma error in the line flux. This error was inferred using the Levenberg-Marquardt technique of chi-squared minimisation <cit.>. In the following, we list the selection criteria:

* Source Extractor (SExtractor) ellipticity values are available. These values were obtained from the GAMA database <cit.>. We require the ellipticity for each galaxy to estimate the physical volume of gas within each spaxel (explained in detail in Sec. <ref> below).

* The S/N ratio must be ≥5 in the Hα, Hβ, [N II], [S II], [O I] and [O III] emission lines. This allows reliable classification of the emission mechanism. However, in order to measure velocity dispersions down to about 12 km s^-1, we impose an S/N ratio of ≥34 on the measured velocity dispersion (explained in detail in Sec. <ref> below). We also require that beam-smearing (see Sec. <ref>) did not have a significant effect on the measured velocity dispersion.

* After removing spaxels that have low S/N and/or are affected by beam-smearing, the galaxy must have more than ten star-forming spaxels remaining. The star-forming spaxels were filtered using the optical classification criteria given in <cit.>, an example of which is shown in Fig. <ref>. This classification scheme uses optical emission line ratios <cit.> in order to distinguish between star-forming galaxies and galaxies that are dominated by an active galactic nucleus (AGN) or by shocks. The Hα-to-SFR conversion factor used in this work is only valid for star-forming regions, because spaxels dominated by AGN or shocks are contaminated with emission from AGN/shock regions <cit.>.

Emission line fluxes of each spaxel were corrected for extinction using the Balmer decrement and the <cit.> reddening curve. Standard extinction for the diffuse ISM was assumed, with an R_V value of 3.1 being utilised throughout the analysis <cit.>.

Each galaxy was classified as either a `Star-forming' or a `Composite/AGN/Shock' galaxy. To be classified as Star-forming, the galaxy had to have at least 90% of all valid spaxels lying below and to the left-hand side of the <cit.> classification line in the [O III]/Hβ versus [N II]/Hα diagram, and below and to the left-hand side of the <cit.> line in the [S II]/Hα and [O I]/Hα diagrams, as described in <cit.> (see Fig. <ref>). A galaxy was classified as Composite/AGN/Shock if at least 10% of all valid spaxels lie above the <cit.> classification line in the [O III]/Hβ versus [N II]/Hα diagram and above the <cit.> classification line in the [S II]/Hα and [O I]/Hα diagnostic diagrams.
Thus, galaxies in the `Composite/AGN/Shock' class may include Composite, AGN, or Shock <cit.> galaxies according to the classification in <cit.>. These classifications resulted in a sample of 219 Star-forming and 41 Composite/AGN/Shock classified galaxies.

§ ESTIMATING THE MOLECULAR GAS SURFACE DENSITY (Σ_gas)

Here we exploit the spatially resolved SAMI Hα flux and Σ_SFR maps in combination with the Hα velocity dispersion maps to derive predictions of Σ_gas across each galaxy in our sample. Examples of Σ_SFR maps are shown in the left-hand panels of Fig. <ref> for the Star-forming and Composite/AGN/Shock classified galaxies from Fig. <ref>. We further derive spaxel-averaged values of the physical parameters for each galaxy in our subsample.

§.§ Deriving turbulent Mach number maps (ℳ)

The SFK15 model relies on the availability of the sonic Mach number ℳ, the ratio between the gas velocity dispersion and the local speed of sound. This was prompted by the findings of <cit.>, showing that the observed scatter within the K98 and KDM12 relations may be primarily attributed to the physical variations in ℳ. As direct measurements of ℳ are unavailable for the SAMI sample, for every pixel we estimate a value using the method described in the following sections. The result of these procedures is shown in the middle panels of Fig. <ref>, for the two example galaxies, GAMA 492414 (top) and GAMA 69740 (bottom).

§.§.§ Estimating the sound speed

Molecular clouds in Galactic spiral arms exhibit gas temperatures of around 10 K, while those in the Galactic centre can have temperatures up to 100 K <cit.>. All H_2 gas should lie within this temperature range, otherwise it will cease to be molecular under typical conditions in the ISM <cit.>. The local sound speed (c_s) of the gas is given by

c_s = [k_B T/(μ_p m_H)]^(1/2),

with the Boltzmann constant k_B, the mass of a hydrogen atom m_H, and the mean particle weight μ_p. The latter is μ_p=2.3 for molecular gas and μ_p=0.6 for ionised gas, assuming standard cosmic abundances <cit.>. Therefore, temperatures of T=10 K and T=100 K correspond to molecular sound speeds of c_s=0.2 km s^-1 and 0.6 km s^-1, respectively. We hence estimate the Mach number of the gas in each spaxel by dividing the velocity dispersion by the molecular sound speed of c_s=0.4±0.2 km s^-1, which is appropriate for the dense, cold star-forming phase of the ISM in the temperature range 10–100 K.

§.§.§ Turbulent velocity dispersion

In order to apply our star formation relation, Eq. (<ref>), we need an estimate of the turbulent velocity dispersion of the molecular gas in order to construct the turbulent Mach number. Here we use the Hα velocity dispersion to approximate the velocity dispersion of the cold gas. The Hα velocity dispersion is similar (to within a factor of ∼2) to the molecular gas velocity dispersion, because of the coupling of turbulent gas flows between the hot, warm and cold phases of the ISM. For instance, it has been found that for M33, the second-most luminous spiral galaxy in our Local Group, the atomic H I dispersions are a fair estimator of the CO dispersions <cit.>. In M33, Hα velocities have been found to trace H I velocities reasonably well <cit.>. However, the Hα velocity dispersion is expected to be somewhat higher than the H_2 velocity dispersion, because the ionised emission comes from H II regions close to massive stars, which directly contribute to driving turbulence. We thus expect the velocity dispersion in the direct vicinity of massive stars to be overestimated. In order to take this effect into account, we crudely approximate the H_2 velocity dispersion with half the Hα velocity dispersion, σ_v=σ_v(Hα)/2.
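A sketch of the resulting Mach number estimate (our own illustrative code; the factor of two and the adopted c_s = 0.4 km s^-1 follow the approximations above):

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant [J/K]
m_H = 1.6735575e-27  # hydrogen atom mass [kg]

def sound_speed(T, mu_p):
    """Sound speed in km/s; mu_p = 2.3 for molecular, 0.6 for ionised gas."""
    return np.sqrt(k_B * T / (mu_p * m_H)) / 1.0e3

def mach_number(sigma_halpha, c_s=0.4):
    """Turbulent Mach number per spaxel, approximating the molecular
    velocity dispersion as half the Halpha dispersion (both in km/s)."""
    return (sigma_halpha / 2.0) / c_s

# sound_speed(10, 2.3) ~ 0.19 km/s and sound_speed(100, 2.3) ~ 0.60 km/s,
# bracketing the adopted c_s = 0.4 +/- 0.2 km/s; sound_speed(1e4, 0.6)
# gives ~ 11.7 km/s for the ionised gas, motivating the 12 km/s cutoff.
```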
While this provides only a rough estimate of the molecular velocity dispersion (trustworthy only to within a factor of 2–3), we show below that the uncertainties that this introduces into our Σ_gas estimate are only of the order of 50%. This is because of the relatively weak dependence of Σ_gas on σ_v, as we will derive in Sec. <ref> below. To demonstrate this, we investigate a case below where we assume that the molecular gas velocity dispersion is equal to the Hα velocity dispersion, σ_v=σ_v(Hα), which yields <30% lower Σ_gas. Thus, even though our velocity dispersion estimate is uncertain by factors of 2–3, the final uncertainty in Σ_gas is ≲50%.

S/N requirements: The SAMI/AAOmega spectrograph setup has an instrumental velocity resolution of σ_instr=29 km s^-1 at the wavelength of Hα (see Sec. <ref>). Velocity dispersions below this resolution limit can still be reliably measured if the S/N in the observed (instrument-convolved) velocity dispersion is sufficiently high. In the following, we estimate the S/N required to reconstruct intrinsic velocity dispersions down to σ_true=12 km s^-1. We choose this cutoff of 12 km s^-1 because it is the sound speed of the ionised gas, Eq. (<ref>) with T=10^4 K and μ_p=0.6, and thus represents a physical lower limit for σ.

The intrinsic (true) velocity dispersion (σ_true) can be obtained by subtracting the instrumental velocity resolution (σ_instr) from the observed (instrument-convolved) velocity dispersion (σ_obs) in quadrature, with

σ_true^2 = σ_obs^2 - σ_instr^2.

The same relation holds for the uncertainties (noise) in the velocity dispersion,

d(σ_true^2) = d(σ_obs^2) - d(σ_instr^2),
2σ_true d(σ_true) = 2σ_obs d(σ_obs) - 2σ_instr d(σ_instr).

Assuming that the instrumental velocity resolution is fixed, we can use d(σ_instr)=0 and simplify the last equation to

d(σ_true) = (σ_obs/σ_true) d(σ_obs).

Dividing both sides by σ_true and substituting Eq. (<ref>) yields

σ_obs/d(σ_obs) = [σ_true/d(σ_true)] σ_obs^2/σ_true^2 = [σ_true/d(σ_true)] (1+σ_instr^2/σ_true^2).

Since (S/N)_obs ≡ σ_obs/d(σ_obs) and (S/N)_true ≡ σ_true/d(σ_true) are the observed (instrument-convolved) and intrinsic S/N ratios, respectively, we can estimate the required (S/N)_obs for the target intrinsic (S/N)_true=5 and the target intrinsic velocity dispersion that we want to resolve, σ_true=12 km s^-1, by evaluating

(S/N)_obs ≥ (S/N)_true (1+σ_instr^2/σ_true^2) ≥ 5 [1+(29 km s^-1 / 12 km s^-1)^2] ≈ 34.

Thus, for spaxels with observed (instrument-convolved) velocity dispersion S/N ratios greater than or equal to 34, we can reliably reconstruct the intrinsic (instrument-corrected) velocity dispersion down to 12 km s^-1, with an intrinsic S/N ratio of at least 5. We note that the SAMI database provides the instrument-subtracted velocity dispersion σ_subtracted and its error d(σ_subtracted) based on the LZIFU fits <cit.>. Thus, in order to apply the S/N cut of 34 derived in Eq. (<ref>), we first reconstruct σ_obs=(σ_subtracted^2+σ_instr^2)^(1/2) and its error d(σ_obs)=d(σ_subtracted) σ_subtracted/σ_obs, using error propagation. This criterion is functionally equivalent to setting an S/N cut on the instrument-subtracted velocity dispersion,

(S/N)_subtracted = 34/(1+σ_instr^2/σ_subtracted^2).

After applying our S/N cut of 34 to the observed (instrument-convolved) velocity dispersion, any spaxels with velocity dispersions less than 12 km s^-1 are disregarded. We note that this final cut only removes 1% of the spaxels with (S/N)_obs≥34.
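As a quick numerical check of the last inequality (our own sketch):

```python
def required_obs_snr(sigma_true=12.0, sigma_instr=29.0, snr_true=5.0):
    """Observed (instrument-convolved) S/N needed to recover an intrinsic
    dispersion sigma_true [km/s] at intrinsic S/N snr_true, given the
    instrumental resolution sigma_instr [km/s]."""
    return snr_true * (1.0 + (sigma_instr / sigma_true) ** 2)

print(required_obs_snr())  # ~34.2 for the SAMI/AAOmega values used here
```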
Beam smearing: We also have to account for `beam smearing', a phenomenon that occurs because of the limited spatial resolution of the instrument. Beam smearing occurs for a physical velocity field that changes on spatial scales smaller than the spatial resolution of the observation. If there is a steep velocity gradient across neighbouring pixels, such as near the centre of a galaxy, beam smearing leads to an artificial increase in the measured velocity dispersion at such spatial locations. To account for beam smearing, we follow the method in <cit.> and estimate the local velocity gradient v_grad for a given spaxel with coordinate indices (i,j) as the magnitude of the vector sum of the velocity differences in the adjacent pixels,

v_grad(i,j) = √([v(i+1,j)-v(i-1,j)]^2 + [v(i,j+1)-v(i,j-1)]^2).

Note that the differencing to compute v_grad occurs over a linear scale of three SAMI pixels along i and j, and thus covers roughly the spatial resolution of the seeing-limited SAMI observations with FWHM∼2” (see Sec. <ref>). If a pixel has a neighbour that is undefined (e.g., because of low S/N), the gradient in that direction is not taken into account. As our standard criterion to account for beam smearing, we cut any pixels in which the velocity dispersion is less than twice the velocity gradient (σ_v < 2 v_grad) and disregard such pixels in further analyses, leaving only spaxels that are largely unaffected by beam smearing.

In addition to our fiducial beam-smearing criterion (σ_v < 2 v_grad), we test a case with a relaxed beam-smearing cut of σ_v < v_grad, and find nearly identical results (see Tab. <ref> below). We note that our standard beam-smearing cut with σ_v < 2 v_grad tends to remove spaxels near the centre of some of the galaxies (see e.g., Fig. <ref>). However, using the relaxed beam-smearing cut with σ_v < v_grad yields global (galaxy-averaged) Mach numbers and global Σ_gas estimates that agree to within 4% with our standard beam-smearing cut (see Tab. <ref>), demonstrating that our results are largely unaffected by beam smearing.

Turbulent velocity dispersion versus systematic motions: Beam-smearing is the result of unresolved velocity gradients in the plane of the sky. However, systematic velocity gradients (such as those resulting from rotation or large-scale shear) along the line-of-sight (LOS) also increase the velocity dispersion (even for arbitrarily high spatial resolution) by LOS-blending. These large-scale systematic motions do not represent turbulent gas flows <cit.>. As we have not subtracted or accounted for these factors, our values of the turbulent velocity dispersion may be overestimated.

In summary, we emphasise that the turbulent velocity dispersion has large uncertainties and is only accurate to within a factor of 2–3. However, the uncertainties that this introduces into our final product (Σ_gas) are ≲50%, because of the relatively weak dependence of Σ_gas on σ_v (derived in detail in Sec. <ref> below).
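Our own minimal implementation of the velocity-gradient criterion (edge handling via np.roll is simplified relative to the per-neighbour treatment described above):

```python
import numpy as np

def velocity_gradient(v):
    """Local velocity gradient map: magnitude of the vector sum of velocity
    differences across adjacent pixels. NaNs mark undefined spaxels; a
    missing neighbour simply contributes zero to the sum here."""
    di = np.nan_to_num(np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0))
    dj = np.nan_to_num(np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1))
    return np.hypot(di, dj)

def beam_smearing_mask(sigma_v, v, factor=2.0):
    """True for spaxels kept by the cut sigma_v >= factor * v_grad
    (factor = 2 is the fiducial criterion, factor = 1 the relaxed one)."""
    return sigma_v >= factor * velocity_gradient(v)
```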
§.§ Deriving Σ_gas/t_ff,multi and Σ_gas/t_ff

To find the MGCR Σ_gas/t_ff,multi, we divide Σ_SFR (left-hand panels of Fig. <ref>) by the SFR efficiency of 0.45% found in SFK15. That is, we invert Eq. (<ref>),

Σ_gas/t_ff,multi [M_⊙ pc^-2 Myr^-1] = Σ_SFR/0.0045.

In order to find the ratio between the gas column density and the freefall time at the average gas density, Σ_gas/t_ff, we take the Mach number calculated in Sec. <ref> and convert Σ_gas/t_ff,multi to Σ_gas/t_ff,

Σ_gas/t_ff [M_⊙ pc^-2 Myr^-1] = (Σ_gas/t_ff,multi) / (1 + b^2ℳ^2 β/(β+1))^(3/8).

In the following, we will assume a fixed turbulence driving parameter b=0.4, representing a natural mixture <cit.>, and assume an absence of magnetic fields such that β→∞. Although both of these are strong assumptions, we emphasise that, in the absence of constraints on b or β in these galaxies, we have to assume fixed, typical values for them and allow that these assumptions contribute to the uncertainties of the Σ_gas estimation. However, if these parameters are measured in the future, they can be used in Eqs. (<ref>) and (<ref>) to obtain a more accurate prediction of Σ_gas. For simplicity, here we fix b and β, and only consider the remaining dependence on ℳ.

§.§ Estimating the gas density (ρ) and local freefall time (t_ff)

Now that we have Σ_gas/t_ff from Eq. (<ref>), we need an estimate of the average freefall time t_ff=√(3π/(32Gρ)) to obtain Σ_gas from Σ_gas/t_ff. Thus, we need an estimate of the local gas density ρ, which requires some geometrical considerations and assumptions similar to the ones outlined in KDM12.

First, we make the assumption that the galaxy has a uniform gas disc geometry with a scale height of H=100 pc <cit.>. However, depending on the viewing angle with respect to the orientation of the galactic disc in the plane of the sky, the LOS length through the gas may be greater than the scale height (see Fig. <ref>). This angle can be estimated from the observed ellipticity of the galaxy. To correct for the viewing angle, we obtain SExtractor ellipticity values, ε, for each galaxy from the GAMA database, from which we obtain the inclination angle, θ, of the galaxy:

θ = arccos(1 - ε).

The column length, L, can then be inferred by dividing the scale height, H, by the cosine of the inclination angle, as pictured in Fig. <ref>,

L [pc] = H/cos(θ) = 100 pc/cos(θ) = 100 pc/(1 - ε).

We caution that the assumed cylindrical geometry is a drastic simplification, as there is much evidence to suggest that the scale height of a galaxy follows a relation dependent upon the radius from the galactic centre <cit.>. This may cause our predicted gas maps to underestimate the gas density towards the centre and overestimate the gas density towards the outskirts of the galaxy. However, the general shape of the predicted distribution of gas, and especially the galaxy-averaged gas surface density, should not be affected significantly by this geometrical simplification. A refinement of the geometry is relatively straightforward to implement if one requires more accurate maps. We estimate that the relative uncertainties in L may be up to 100%. However, our final result (Σ_gas) does not depend significantly on L (see the detailed discussion in Sec. <ref>).

Given the column length L, we can write the gas density

ρ = Σ_gas/L.

Since we do not have Σ_gas, because it is our final product, we now substitute a rearrangement of the definition of Σ_gas/t_ff,

Σ_gas = (Σ_gas/t_ff) × t_ff,

as well as the definition of the freefall time in terms of ρ,

t_ff(ρ) = √(3π/(32Gρ)),

where G is the gravitational constant. We combine the three previous equations and solve for the gas density,

ρ = (Σ_gas/t_ff)/L × √(3π/(32Gρ)) ⇒ ρ = [√(3π/(32G)) × (Σ_gas/t_ff)/L]^(2/3).

We substitute ρ back into Eq. (<ref>) to obtain the freefall time t_ff for the average gas density ρ.

§.§ Deriving our final product, Σ_gas

Finally, we obtain our prediction for Σ_gas either by multiplying the freefall time from Sec. <ref> by Σ_gas/t_ff calculated in Sec. <ref>, i.e., using Eq. (<ref>), or by multiplying the volume density ρ from Eq. (<ref>) by the column length L from Eq. (<ref>). In terms of the principal observables, Σ_SFR and ℳ=σ_v/c_s, as well as our assumptions for the parameters L=H/(1-ε), b and β, this corresponds to the final expression for Σ_gas given by

Σ_gas = [3πL/(32G)]^(1/3) [ (Σ_SFR/0.0045) (1 + b^2ℳ^2 β/(1+β))^(-3/8) ]^(2/3).
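Putting the pieces together, the full inversion can be sketched compactly. This is our own illustrative code: the gravitational constant is converted to pc^3 M_⊙^-1 Myr^-2, and the example ellipticity of 0.3 is an arbitrary illustrative value, not a sample average.

```python
import numpy as np

G = 4.5e-3  # gravitational constant [pc^3 Msun^-1 Myr^-2]

def sigma_gas(Sigma_SFR, mach, ellipticity, H=100.0, b=0.4, beta=np.inf):
    """Predicted molecular gas surface density [Msun pc^-2] from
    Sigma_SFR [Msun pc^-2 Myr^-1, numerically equal to Msun yr^-1 kpc^-2],
    the Mach number, and the ellipticity (which sets the column length)."""
    L = H / (1.0 - ellipticity)                              # column length [pc]
    mag = beta / (beta + 1.0) if np.isfinite(beta) else 1.0
    sg_tff = (Sigma_SFR / 0.0045) * (1.0 + b**2 * mach**2 * mag) ** -0.375
    return (3.0 * np.pi * L / (32.0 * G)) ** (1.0 / 3.0) * sg_tff ** (2.0 / 3.0)

# Example with the Star-forming sample averages derived below
# (Sigma_SFR ~ 0.054, Mach ~ 36):
print(sigma_gas(0.054, 36.0, 0.3))  # ~29 Msun/pc^2, close to the sample mean
```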
Two examples of the spatially resolved maps of estimated Σ_gas based on the new method provided by Eq. (<ref>) are shown in the right-hand panels of Fig. <ref>.

§.§ Uncertainties in the Σ_gas reconstruction

Here we estimate the uncertainties in our Σ_gas prediction based on Eq. (<ref>). We derive the uncertainties by error propagation of all variables in Eq. (<ref>). First, we note that the dependence of Σ_gas on L is weak (∝ L^(1/3)) and the dependence on ℳ is also relatively weak (∝ ℳ^(-1/2)), which means that the uncertainties in L and ℳ enter the final uncertainty in Σ_gas with a weight of 1/3 and 1/2, respectively. The strongest dependence of Σ_gas is on the SFR, i.e., Σ_gas ∝ Σ_SFR^(2/3), so the uncertainties in Σ_SFR are weighted by 2/3. Rigorously, the relative uncertainty d(Σ_gas)/Σ_gas from Eq. (<ref>) is given by

d(Σ_gas)/Σ_gas = { [ (1/3) d(L)/L ]^2 + [ (1/2) d(ℳ)/ℳ ]^2 + [ (2/3) d(Σ_SFR)/Σ_SFR ]^2 }^(1/2),

where we approximated the factor (1+b^2ℳ^2) in Eq. (<ref>) as b^2ℳ^2 for the uncertainty propagation (recall that we also assumed β→∞), because b^2ℳ^2≫1, based on our velocity dispersion cut and sound speed (see Sec. <ref>). With typical relative uncertainties of 70% in L, 100% in ℳ (see Sec. <ref>) and 20% in Σ_SFR (based on our S/N cut of 5 on the Hα flux; see Sec. <ref>), we find a relative uncertainty of d(Σ_gas)/Σ_gas=57%, which is dominated by the uncertainty in ℳ. Even if the uncertainties in L and ℳ were 100% and 150%, respectively, we would still be able to estimate Σ_gas with an uncertainty of 83%. In summary, despite the large uncertainties in ℳ and L (see Sec. <ref> and <ref>), our final uncertainties in Σ_gas are less than a factor of 2.
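A sketch of this error propagation, reproducing the two numerical examples quoted above:

```python
import numpy as np

def rel_uncertainty_sigma_gas(dL_L, dM_M, dS_S):
    """Relative uncertainty of Sigma_gas: weights 1/3, 1/2 and 2/3 for the
    relative errors in L, the Mach number and Sigma_SFR, respectively."""
    return np.sqrt((dL_L / 3.0) ** 2 + (dM_M / 2.0) ** 2
                   + (2.0 * dS_S / 3.0) ** 2)

print(rel_uncertainty_sigma_gas(0.7, 1.0, 0.2))  # ~0.57
print(rel_uncertainty_sigma_gas(1.0, 1.5, 0.2))  # ~0.83
```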
§ RESULTS

§.§ Gas surface density estimates

Our main objective is to estimate Σ_gas from Σ_SFR and the turbulence properties (ℳ) in our SAMI galaxy sample. We do this by applying the new method introduced in the previous section (Sec. <ref>), going step-by-step from Σ_SFR to Σ_gas.

Fig. <ref> shows each of the star formation relation parameterisations explored in SFK15, presented in the same order as the computations of our Σ_gas derivation (Sec. <ref>). The framework of the first panel assumes a direct correlation between Σ_SFR and Σ_gas/t_ff,multi. That is, it assumes the star formation relation of Eq. (<ref>) to hold, thus by construction the SAMI data points in this framework lie along the same line. The data points from SFK15 which were used to obtain this relation are also shown. We note that in the SFK15 derivation of Eq. (<ref>), the K98 galaxies were omitted because they did not have ℳ values assigned to them, due to their lack of velocity dispersion measurements. They are thus similarly excluded in this panel.

Compared to the observational data published in SFK15, we updated and corrected some of the previous data, and added new observations in Figure <ref>. First, we replace the <cit.> data for the SMC by the most recent 200 pc resolution data provided in <cit.> (J16). We also add the Large Magellanic Cloud (LMC) data from <cit.> and assume that the SMC and LMC data have Mach numbers within a plausible range, i.e., we basically treat the Mach number as unconstrained, since we currently do not have direct measurements of ℳ in the SMC or LMC.[The Mach number range assumed in SFK15 for the SMC was somewhat too high, because the 200 pc resolution data from <cit.> and <cit.> are more consistent with velocity dispersions that correspond to lower Mach numbers for the SMC and LMC. However, without a direct measurement of the velocity dispersion and gas temperature, the Mach number remains rather unconstrained for the SMC and LMC.] Second, we replace the global CMZ data from <cit.> by the local CMZ cloud G0.253+0.016 `Brick' <cit.>, for which significantly more information is available. We take the values of Σ_gas, ℳ, b, and β measured in <cit.> (F16) and use the SFR per freefall time estimate of 2% from <cit.> to obtain Σ_SFR for the `Brick'. The other cloud data are identical to those published in KDM12, F13 and SFK15, which were taken from <cit.> (H10), <cit.> (G11), <cit.> (W10), and <cit.> (L10). However, we corrected the error bar on the L10 clouds, which showed the standard deviation instead of the standard deviation of the mean in SFK15. We further propagated the uncertainties in Σ_gas, t_ff and ℳ between the three parameterisations shown in Fig. <ref>. Finally, we note that the observational data included in Fig. <ref> cover a wide range in spatial and spectral resolution (for details we refer the reader to the source publications of these data), which allowed us to test the universality of the SFK15 relation. In the future, when turbulence estimates become available for high-redshift data, those need to be included as well, to revisit the question of the universality of the star formation relation derived in SFK15.

The second panel of Fig. <ref> depicts the KDM12 parameterisation, Σ_SFR versus Σ_gas/t_ff. The derivation of this quantity for the SAMI galaxies required inputs from both the Hα flux and the velocity dispersion, with Σ_gas/t_ff computed from Eq. (<ref>). In addition to the observational data shown in the left-hand panel, we added the individual K98 disc and starburst galaxies from KDM12 <cit.>.

The third panel of Fig. <ref> shows the final product of our Σ_gas predictions: the average gas column density estimate for each of the SAMI galaxies in our sample. These predictions span a wide range of Σ_gas. We note that the estimated Σ_gas values for the SAMI galaxies are close to the Σ_gas values of the K98 low-redshift disc galaxies. This is encouraging, because they are the most similar in type to our sample of SAMI galaxies. The offset in Σ_SFR and Σ_gas by ∼0.5 dex between the SAMI and K98 galaxies can be understood as a consequence of spatial resolution. In contrast to our spaxel-resolved analysis of the SAMI galaxies (with spatial resolution of ∼2”; see Sec. <ref>), the K98 galaxies are unresolved, which reduces the inferred Σ_SFR <cit.>. The reason is that, although the total Hα flux (∝SFR) remains similar even at lower resolutions, the area ΔA over which Hα is emitted tends to be overestimated and hence Σ_SFR tends to be underestimated (Σ_SFR=SFR/ΔA) for the global K98 data. Similar holds for Σ_gas, because it depends on ΔA in the same way as Σ_SFR, and indeed, we find that the resolved SAMI galaxies tend to lie at somewhat higher Σ_SFR compared to the unresolved K98 disc galaxy sample.

§.§ Comparison between Star-forming and Composite/AGN/Shock galaxies

§.§.§ Gas surface density in Star-forming and Composite/AGN/Shock galaxies

Fig. <ref> suggests that the distributions of Σ_SFR and Σ_gas are similar between the Star-forming and Composite/AGN/Shock galaxies. To quantify any statistical differences in Σ_gas between these two sub-samples, we investigate the distribution functions of Σ_gas. Fig. <ref> shows the histograms of Σ_gas. We see that Σ_gas is enhanced in Composite/AGN/Shock galaxies compared to Star-forming galaxies. This difference in Σ_gas between Star-forming and Composite/AGN/Shock type galaxies is primarily a consequence of the differences in Σ_SFR, and secondarily a consequence of the differences in ℳ between the two classes, i.e., the dependences of Σ_gas on Σ_SFR and ℳ (see Eqs. <ref> and <ref>). Other dependences are relatively insignificant, such as the dependence on the assumed scale height of the galaxies.
In this context, we have checked that any differences in the ellipticity distributions between the Star-forming and composite galaxies are statistically insignificant.

The measured mean and standard deviation of Σ_SFR, ℳ and Σ_gas in the Star-forming and composite galaxy samples are listed in Table <ref> (the full list of physical parameters derived for each galaxy is provided in Table <ref>). The SFR surface densities and Mach numbers of the Star-forming and composite samples are Σ_SFR = 0.054 and 0.11, and ℳ = 36 and 57, respectively. The resulting average Σ_gas values are 26 and 35 for Star-forming and composite galaxies, respectively.

Table <ref> further shows that changing the beam-smearing cutoff from the fiducial σ_v < 2 v_grad to a less strict cutoff (σ_v < v_grad) yields nearly identical results. Finally, the last two rows of Table <ref> show that using the velocity dispersion of the ionized gas (σ_v=σ_v(Hα)) instead of the approximate velocity dispersion of the molecular gas (σ_v=σ_v(Hα)/2) reduces the derived Σ_gas by 30% (as expected: doubling σ_v doubles ℳ, and Σ_gas ∝ ℳ^-1/2 is then multiplied by 2^-1/2 ≈ 0.71). Thus, even with the large uncertainties in σ_v and hence in ℳ (see Sec. <ref>), our final estimates of Σ_gas can be considered accurate to within a factor of 2.

§.§.§ Mach number in Star-forming and composite galaxies

Here we investigate global differences in the gas kinematics between Star-forming and composite galaxies in our SAMI sample. Fig. <ref> shows our measurements of the Mach number (cf. Sec. <ref>) as a function of the derived Σ_gas for the two galaxy classes. We see that overall, and also at fixed Σ_gas, composite galaxies have higher Mach numbers by a factor of ∼1.5 compared to Star-forming galaxies. This may be a consequence of AGN and/or shocks raising the velocity dispersion over turbulence driven by pure star-formation feedback.

Fig. <ref> further reveals a significant scatter in Mach number at fixed Σ_gas, which is somewhat more pronounced in the case of composite galaxies compared to purely Star-forming ones. This may indicate that different driving sources of the turbulence act together and possibly dominate at different times in different galaxies. Such driving sources can be divided into two main categories: i) stellar feedback (such as supernova explosions, stellar jets, and/or radiation pressure) and ii) galaxy dynamics (such as galactic shear, magneto-rotational instability, gravitational instabilities, and/or accretion onto the galaxy) <cit.>. Our results here suggest that AGN feedback may be another important, potentially highly variable source of the turbulent gas velocity dispersion in galaxies.

§ COMPARISON TO PREVIOUS Σ_SFR VERSUS Σ_gas RELATIONS

Many studies in the literature have attempted to measure the correlation between Σ_SFR and Σ_gas within different sets of data. However, there is no clear consensus on the coefficients and scaling exponents, due to the intrinsic scatter in the Kennicutt-Schmidt relation <cit.>. Some studies find breaks in the power-law relations, which can be interpreted as thresholds <cit.>, while other studies do not find evidence for such thresholds <cit.>. Here we explore how our Σ_gas predictions for the SAMI galaxies compare to the relations published in the literature.

In Fig. <ref>, we show an enhanced version of the right-hand panel of Fig. <ref>, in order to compare our Σ_gas estimates to previously derived star formation relations in the Σ_SFR versus Σ_gas framework. The relations we investigate are described in K98, <cit.> (B08), <cit.> (W10), <cit.> (H10) and <cit.> (B11). These relations are shown as lines in Fig. <ref>.
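Of these relations, only K98's is fully specified within this paper (a = (2.5±0.7)×10^-4 and n = 1.40±0.15, quoted in the next section, Eq. <ref>). The sketch below (our own illustration, not the analysis code) implements that line and the inversion that is used later to predict Σ_gas from Σ_SFR:

```python
# K98 line in the Sigma_SFR versus Sigma_gas plane, and its inversion.
A_K98, N_K98 = 2.5e-4, 1.40

def sfr_k98(sigma_gas):
    """Sigma_SFR on the K98 relation: Sigma_SFR = a * Sigma_gas^n."""
    return A_K98 * sigma_gas ** N_K98

def gas_k98(sigma_sfr):
    """Inverse of the K98 relation: Sigma_gas = (Sigma_SFR / a)^(1/n)."""
    return (sigma_sfr / A_K98) ** (1.0 / N_K98)

# K98-inverted Sigma_gas for the mean Star-forming-sample Sigma_SFR = 0.054:
print(gas_k98(0.054))  # ~46
```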
We see that different sets of data follow different relations, which often show significant deviations from one another. The SAMI galaxies have higher Σ_SFR than described by the K98, B08, H10, or B11 relations, but lower Σ_SFR than described by the W10 relation. A power-law fit to all the SAMI galaxies yields a power-law exponent of 1.6±0.1 instead of 1.4 (K98); see Eq. (<ref>).

In summary, we find that none of the previously proposed scaling relations of Σ_SFR as a function of Σ_gas describes the entirety of the data well. The reason for this is that the SFR (Σ_SFR) depends on more than just gas density (Σ_gas). Instead, star formation also strongly depends on the turbulence of the gas (Mach number and driving mode), the magnetic field, and on the virial parameter <cit.>. A complete understanding and prediction of star formation requires taking into account the dependences on these variables, in addition to gas (surface) density.

§ COMPARING Σ_gas PREDICTIONS BY INVERTING STAR FORMATION LAWS

Here we compare the prediction of Σ_gas based on inverting the K98 relation with the Σ_gas prediction based on the SFK15 framework developed here. First, we note that a popular way of obtaining Σ_gas estimates in the absence of a direct measurement of it, is to invert the K98 relation, i.e., to invert Eq. (<ref>), which yields

Σ_gas,K98 [M_⊙ pc^-2] = ( Σ_SFR [M_⊙ yr^-1 kpc^-2] / a )^1/n,

with a=(2.5±0.7)×10^-4 and n=1.40±0.15 (K98). Here we derived an alternative way to estimate Σ_gas from Σ_SFR, which is given by Eq. (<ref>), and contains additional dependences on the geometry (L), turbulence (ℳ and b), and on the magnetic field (β). For the Σ_gas estimates of the SAMI galaxies here, we fixed L, b and β for simplicity, and only included the Mach number based on the measured velocity dispersion as an additional parameter to Σ_SFR (compared to the K98 relation, which depends on Σ_SFR only).

We now want to see how the Σ_gas estimates based on our new relation (Eq. <ref>) compare to inverting the K98 relation (Eq. <ref>). Fig. <ref> shows the direct comparison of the two as a function of the measured Σ_gas. We plot the logarithmic difference of Σ_gas(predicted) and Σ_gas(measured) on the ordinate of Fig. <ref>, for Σ_gas(predicted) based on K98 (Eq. <ref>) in grey and Σ_gas(predicted) based on Eq. (<ref>) in colour. In addition to the observational data already shown in Figs. <ref> and <ref>, we add direct estimates of Σ_gas for a subset of 56 Star-forming SAMI galaxies for which Herschel 500 μm dust measurements were available, using the methods in <cit.>. A detailed description of how the dust emission was converted to Σ_gas is provided in Appendix <ref>.

In Fig. <ref> we see that our new method of estimating Σ_gas from Σ_SFR, given by Eq. (<ref>), is significantly better than simply inverting the K98 relation, Eq. (<ref>). We find that our new relation provides Σ_gas estimates with an average deviation of 0.12 dex (32%), while inverting the K98 relation yields an average deviation from the true (measured) Σ_gas of 0.42 dex (160%). This shows that our method provides a significantly better Σ_gas prediction from Σ_SFR than inverting the K98 relation. Our improved Σ_gas estimate comes at the cost of requiring an estimate of the Mach number (velocity dispersion) as an additional parameter for the reconstruction (prediction) of Σ_gas. However, if Σ_SFR is obtained from Hα (as for the SAMI galaxies analysed here), then we have shown that the velocity dispersion of Hα can be used to estimate the Mach number (Sec. <ref>). Even better Σ_gas predictions based on Eq.
(<ref>) are expected if the exact scale height H, the turbulence driving parameter b, and the magnetic plasma β are available from future observations and/or by combining different observational datasets.

§ CONCLUSIONS

We presented a new method to estimate the molecular gas column density (Σ_gas) of a galaxy using only optical IFS data, by inverting the star formation relation derived in SFK15. Our method utilises observed values of Σ_SFR and velocity dispersion (here from Hα) as inputs and returns an estimate of the molecular Σ_gas. The derivation of our method is explained in detail in Sec. <ref>, with the final result given by Eq. (<ref>). We apply our new method to estimate Σ_gas for Star-forming and composite galaxies classified and observed in the SAMI Galaxy Survey. Our main findings from this study are the following:

* From the range in Σ_SFR and Mach number ℳ measured for the SAMI galaxies, we predict Σ_gas in the star-forming regions of our SAMI galaxy sample, consisting of 260 galaxies in total. The predicted values of Σ_gas are similar to those of unresolved low-redshift disc galaxies observed in K98. While the K98 galaxies required CO detections, here we estimate Σ_gas solely based on Hα emission lines.

* We classify each galaxy in our sample as Star-forming or composite. Based on the sample-averaged Σ_SFR = 0.054 and 0.11, and ℳ = 36 and 57 for Star-forming and composite galaxies, respectively, we estimate Σ_gas = 26 and 35, respectively (see Table <ref>). We therefore find that on average, the composite galaxies have enhanced Σ_SFR, ℳ, and Σ_gas by factors of 2.0, 1.6, and 1.3, respectively, compared to the Star-forming SAMI galaxies (see Table <ref>; for each individual SAMI galaxy, see Table <ref>).

* We discussed methods to account for finite spectral resolution and beam-smearing in Sec. <ref>. While the uncertainties are large in the velocity dispersion used to estimate the turbulent Mach number of the molecular gas (Sec. <ref>), we show that the final estimate of Σ_gas is accurate to within a factor of 2 (see Sec. <ref>).

* We compare our new method of estimating Σ_gas from Σ_SFR with a simple inversion of the K98 relation (Fig. <ref>). We find that our new method yields a significantly better estimate of Σ_gas than inverting the K98 relation, with average deviations from the intrinsic Σ_gas of 32% for our new method, compared to average deviations of 160% from inverting the K98 relation.

§ ACKNOWLEDGEMENTS

We thank Mark Krumholz and the anonymous referee for their useful comments, which helped to improve this work. CF acknowledges funding provided by the Australian Research Council's (ARC) Discovery Projects (grants DP150104329 and DP170100603). DMS is supported by an Australian Government's New Colombo Plan scholarship. Support for AMM is provided by NASA through Hubble Fellowship grant #HST-HF2-51377 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. BAG gratefully acknowledges the support of the ARC as the recipient of a Future Fellowship (FT140101202). LJK gratefully acknowledges the support of an ARC Laureate Fellowship. SB acknowledges the funding support from the ARC through a Future Fellowship (FT140101166). SMC acknowledges the support of an ARC Future Fellowship (FT100100457). NS acknowledges support of a University of Sydney Postdoctoral Research Fellowship. The SAMI Galaxy Survey is based on observations made at the Anglo-Australian Telescope. The Sydney-AAO Multi-object Integral field spectrograph (SAMI) was developed jointly by the University of Sydney and the Australian Astronomical Observatory.
The SAMI input catalogue is based on data taken from the Sloan Digital Sky Survey, the GAMA Survey and the VST ATLAS Survey. The SAMI Galaxy Survey is funded by the ARC Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and other participating institutions. The SAMI Galaxy Survey website is <http://sami-survey.org/>.

§ ONLINE DATA

Table <ref> lists the derived physical parameters of all SAMI galaxies analysed here. Listed are the first ten galaxies in each of our two galaxy classes (Star-forming and composite). The complete table is available in the online version of the journal or upon request.

§ HERSCHEL DUST-TO-GAS ESTIMATES FOR SAMI

To provide an independent measure of Σ_gas for our SAMI galaxy sample, we used the empirical relation determined by <cit.>, correlating the total gas mass of galaxies with their sub-mm dust luminosities. Using a sample of nearby galaxies, <cit.> found that the total (atomic+molecular) gas mass of galaxies (M_gas,tot) could be determined within 0.12 dex using the monochromatic 500 μm luminosity (L_500), with

M_gas,tot/M_⊙ = 28.5 (L_500/L_⊙).

To determine the sub-mm luminosity of the SAMI galaxies, we made use of the Herschel-ATLAS survey <cit.>, a wide 550 square degrees infrared survey of the sky by the Herschel Space Observatory, that covers the GAMA regions from which the SAMI Galaxy Survey sample arise. In particular, we cross-matched the 219 star-forming SAMI galaxies classified here against the single-entry source catalog from Herschel-ATLAS Data Release 1 <cit.>.[Available at <http://www.h-atlas.org/public-data/download>] Of the 219 SAMI Star-forming galaxies, 128 have Herschel detections. Of these, 56 have significant detections (signal-to-noise >3) in the SPIRE 500 μm band.

To convert the total gas mass to a gas surface density we required a surface area over which the infrared flux is emitted. Given the large beam size of the SPIRE 500 μm observations, the SAMI galaxies are unresolved. However, as can be seen in the radial profiles of the nearby galaxy sample used in <cit.> (in particular their Figure 7 and online figures), the highest surface brightness regions occur within half an optical radius <cit.>, with most of the infrared luminosity (and molecular gas mass) occurring within this radius. <cit.> further find that at this radius, the atomic and molecular gas surface densities are about the same (the ratio of the total atomic and molecular gas masses inside 0.5 R_25 is also about unity). Based on those findings, we approximated the molecular gas mass within 0.5 R_25 with 0.5 M_gas,tot. Therefore, we derive the molecular Σ_gas through

Σ_gas = 0.5 M_gas,tot / [ π (1.8 R_e)^2 (1-ε) ],

where the effective radius R_e and ellipticity ε of the SAMI galaxies are as derived in the GAMA survey <cit.>.

[Abazajian et al.(2009)Abazajian, Adelman-McCarthy, Agüeros, Allam, Allende Prieto, An, Anderson, Anderson, Annis, Bahcall, & et al.]Abazajian:2009aa Abazajian, K. N., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2009, , 182, 543[Allen et al.(2015)Allen, Croom, Konstantopoulos, Bryant, Sharp, Cecil, Fogarty, Foster, Green, Ho, Owers, Schaefer, Scott, Bauer, Baldry, Barnes, Bland-Hawthorn, Bloom, Brough, Colless, Cortese, Couch, Drinkwater, Driver, Goodwin, Gunawardhana, Hampton, Hopkins, Kewley, Lawrence, Leon-Saval, Liske, López-Sánchez, Lorente, McElroy, Medling, Mould, Norberg, Parker, Power, Pracy, Richards, Robotham, Sweet, Taylor, Thomas, Tonini, & Walcher]AllenEtAl2015 Allen, J. T., Croom, S. M., Konstantopoulos, I.
S., et al. 2015, , 446, 1567[Baldry et al.(2010)Baldry, Robotham, Hill, Driver, Liske, Norberg, Bamford, Hopkins, Loveday, Peacock, Cameron, Croom, Cross, Doyle, Dye, Frenk, Jones, van Kampen, Kelvin, Nichol, Parkinson, Popescu, Prescott, Sharp, Sutherland, Thomas, & Tuffs]Baldry:2010aa Baldry, I. K., Robotham, A. S. G., Hill, D. T., et al. 2010, , 404, 86[Baldry et al.(2014)Baldry, Alpaslan, Bauer, Bland-Hawthorn, Brough, Cluver, Croom, Davies, Driver, Gunawardhana, Holwerda, Hopkins, Kelvin, Liske, López-Sánchez, Loveday, Norberg, Peacock, Robotham, & Taylor]Baldry:2014aa Baldry, I. K., Alpaslan, M., Bauer, A. E., et al. 2014, , 441, 2440[Baldwin et al.(1981)Baldwin, Phillips, & Terlevich]Baldwin:1981aa Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, , 93, 5[Barnes et al.(2017)Barnes, Longmore, Battersby, Bally, Kruijssen, Henshaw, & Walker]BarnesEtAl2017 Barnes, A. T., Longmore, S. N., Battersby, C., et al. 2017, , in preparation[Bigiel et al.(2008)Bigiel, Leroy, Walter, Brinks, de Blok, Madore, & Thornley]BigielEtAl2008 Bigiel, F., Leroy, A., Walter, F., et al. 2008, , 136, 2846[Bigiel et al.(2011)Bigiel, Leroy, Walter, Brinks, de Blok, Kramer, Rix, Schruba, Schuster, Usero, & Wiesemeyer]Bigiel:2011aa Bigiel, F., Leroy, A. K., Walter, F., et al. 2011, , 730, L13[Bland-Hawthorn et al.(2011)Bland-Hawthorn, Bryant, Robertson, Gillingham, O'Byrne, Cecil, Haynes, Croom, Ellis, Maack, Skovgaard, & Noordegraaf]BlandHawthornEtAl2011 Bland-Hawthorn, J., Bryant, J., Robertson, G., et al. 2011, Optics Express, 19, 2649[Bolatto et al.(2011)Bolatto, Leroy, Jameson, Ostriker, Gordon, Lawton, Stanimirović, Israel, Madden, Hony, Sandstrom, Bot, Rubio, Winkler, Roman-Duval, van Loon, Oliveira, & Indebetouw]BolattoEtAl2011 Bolatto, A. D., Leroy, A. K., Jameson, K., et al. 2011, , 741, 12[Bourne et al.(2016)Bourne, Dunne, Maddox, Dye, Furlanetto, Hoyos, Smith, Eales, Smith, Valiante, Alpaslan, Andrae, Baldry, Cluver, Cooray, Driver, Dunlop, Grootes, Ivison, Jarrett, Liske, Madore, Popescu, Robotham, Rowlands, Seibert, Thompson, Tuffs, Viaene, & Wright]BourneEtAl2016 Bourne, N., Dunne, L., Maddox, S. J., et al. 2016, , 462, 1714[Bryant et al.(2014)Bryant, Bland-Hawthorn, Fogarty, Lawrence, & Croom]BryantEtAl2014 Bryant, J. J., Bland-Hawthorn, J., Fogarty, L. M. R., Lawrence, J. S., & Croom, S. M. 2014, , 438, 869[Bryant et al.(2015)Bryant, Owers, Robotham, Croom, Driver, Drinkwater, Lorente, Cortese, Scott, Colless, Schaefer, Taylor, Konstantopoulos, Allen, Baldry, Barnes, Bauer, Bland-Hawthorn, Bloom, Brooks, Brough, Cecil, Couch, Croton, Davies, Ellis, Fogarty, Foster, Glazebrook, Goodwin, Green, Gunawardhana, Hampton, Ho, Hopkins, Kewley, Lawrence, Leon-Saval, Leslie, McElroy, Lewis, Liske, López-Sánchez, Mahajan, Medling, Metcalfe, Meyer, Mould, Obreschkow, O'Toole, Pracy, Richards, Shanks, Sharp, Sweet, Thomas, Tonini, & Walcher]BryantEtAl2015 Bryant, J. J., Owers, M. S., Robotham, A. S. G., et al. 2015, , 447, 2857[Calzetti et al.(2000)Calzetti, Armus, Bohlin, Kinney, Koornneef, & Storchi-Bergmann]Calzetti:2000aa Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, , 533, 682[Cappellari(2017)]Cappellari2017 Cappellari, M. 2017, , 466, 798[Cappellari & Emsellem(2004)]CappellariEmsellem2004 Cappellari, M., & Emsellem, E. 2004, , 116, 138[Cardelli et al.(1989)Cardelli, Clayton, & Mathis]Cardelli:1989aa Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 
1989, , 345, 245[Colless et al.(2001)Colless, Dalton, Maddox, Sutherland, Norberg, Cole, Bland-Hawthorn, Bridges, Cannon, Collins, Couch, Cross, Deeley, De Propris, Driver, Efstathiou, Ellis, Frenk, Glazebrook, Jackson, Lahav, Lewis, Lumsden, Madgwick, Peacock, Peterson, Price, Seaborne, & Taylor]Colless:2001aa Colless, M., Dalton, G., Maddox, S., et al. 2001, , 328, 1039[Croom et al.(2012)Croom, Lawrence, Bland-Hawthorn, Bryant, Fogarty, Richards, Goodwin, Farrell, Miziarski, Heald, Jones, Lee, Colless, Brough, Hopkins, Bauer, Birchall, Ellis, Horton, Leon-Saval, Lewis, López-Sánchez, Min, Trinh, & Trowland]CroomEtAl2012 Croom, S. M., Lawrence, J. S., Bland-Hawthorn, J., et al. 2012, , 421, 872[Daddi et al.(2010)Daddi, Elbaz, Walter, Bournaud, Salmi, Carilli, Dannerbauer, Dickinson, Monaco, & Riechers]Daddi:2010aa Daddi, E., Elbaz, D., Walter, F., et al. 2010, , 714, L118[de Grijs & Peletier(1997)]de-Grijs:1997aa de Grijs, R., & Peletier, R. F. 1997, , 320, L21[de Grijs & van der Kruit(1996)]de-Grijs:1996aa de Grijs, R., & van der Kruit, P. C. 1996, , 117, 19[Driver et al.(2009)Driver, Norberg, Baldry, Bamford, Hopkins, Liske, Loveday, Peacock, Hill, Kelvin, Robotham, Cross, Parkinson, Prescott, Conselice, Dunne, Brough, Jones, Sharp, van Kampen, Oliver, Roseboom, Bland-Hawthorn, Croom, Ellis, Cameron, Cole, Frenk, Couch, Graham, Proctor, De Propris, Doyle, Edmondson, Nichol, Thomas, Eales, Jarvis, Kuijken, Lahav, Madore, Seibert, Meyer, Staveley-Smith, Phillipps, Popescu, Sansom, Sutherland, Tuffs, & Warren]Driver:2009aa Driver, S. P., Norberg, P., Baldry, I. K., et al. 2009, Astronomy and Geophysics, 50, 12[Driver et al.(2011)Driver, Hill, Kelvin, Robotham, Liske, Norberg, Baldry, Bamford, Hopkins, Loveday, Peacock, Andrae, Bland-Hawthorn, Brough, Brown, Cameron, Ching, Colless, Conselice, Croom, Cross, de Propris, Dye, Drinkwater, Ellis, Graham, Grootes, Gunawardhana, Jones, van Kampen, Maraston, Nichol, Parkinson, Phillipps, Pimbblet, Popescu, Prescott, Roseboom, Sadler, Sansom, Sharp, Smith, Taylor, Thomas, Tuffs, Wijesinghe, Dunne, Frenk, Jarvis, Madore, Meyer, Seibert, Staveley-Smith, Sutherland, & Warren]DriverEtAl2011 Driver, S. P., Hill, D. T., Kelvin, L. S., et al. 2011, , 413, 971[Druard et al.(2014)Druard, Braine, Schuster, Schneider, Gratier, Bontemps, Boquien, Combes, Corbelli, Henkel, Herpin, Kramer, van der Tak, & van der Werf]Druard:2014aa Druard, C., Braine, J., Schuster, K. F., et al. 2014, , 567, A118[Eales et al.(2010)Eales, Dunne, Clements, Cooray, De Zotti, Dye, Ivison, Jarvis, Lagache, Maddox, Negrello, Serjeant, Thompson, Van Kampen, Amblard, Andreani, Baes, Beelen, Bendo, Benford, Bertoldi, Bock, Bonfield, Boselli, Bridge, Buat, Burgarella, Carlberg, Cava, Chanial, Charlot, Christopher, Coles, Cortese, Dariush, da Cunha, Dalton, Danese, Dannerbauer, Driver, Dunlop, Fan, Farrah, Frayer, Frenk, Geach, Gardner, Gomez, González-Nuevo, González-Solares, Griffin, Hardcastle, Hatziminaoglou, Herranz, Hughes, Ibar, Jeong, Lacey, Lapi, Lawrence, Lee, Leeuw, Liske, López-Caniego, Müller, Nandra, Panuzzo, Papageorgiou, Patanchon, Peacock, Pearson, Phillipps, Pohlen, Popescu, Rawlings, Rigby, Rigopoulou, Robotham, Rodighiero, Sansom, Schulz, Scott, Smith, Sibthorpe, Smail, Stevens, Sutherland, Takeuchi, Tedds, Temi, Tuffs, Trichas, Vaccari, Valtchanov, van der Werf, Verma, Vieria, Vlahakis, & White]EalesEtAl2010 Eales, S., Dunne, L., Clements, D., et al. 2010, , 122, 499[Elmegreen & Scalo(2004)]ElmegreenScalo2004 Elmegreen, B. G., & Scalo, J. 
2004, , 42, 211[Federrath(2013)]Federrath2013sflaw Federrath, C. 2013, , 436, 3167[Federrath(2015)]Federrath2015 —. 2015, , 450, 4035[Federrath & Klessen(2012)]FederrathKlessen2012 Federrath, C., & Klessen, R. S. 2012, , 761, 156[Federrath et al.(2008)Federrath, Klessen, & Schmidt]FederrathKlessenSchmidt2008 Federrath, C., Klessen, R. S., & Schmidt, W. 2008, , 688, L79[Federrath et al.(2010)Federrath, Roman-Duval, Klessen, Schmidt, & Mac Low]Federrath:2010aa Federrath, C., Roman-Duval, J., Klessen, R. S., Schmidt, W., & Mac Low, M.-M. 2010, , 512, A81[Federrath et al.(2016)Federrath, Rathborne, Longmore, Kruijssen, Bally, Contreras, Crocker, Garay, Jackson, Testi, & Walsh]FederrathEtAl2016 Federrath, C., Rathborne, J. M., Longmore, S. N., et al. 2016, , 832, 143[Federrath et al.(2017)Federrath, Rathborne, Longmore, Kruijssen, Bally, Contreras, Crocker, Garay, Jackson, Testi, & Walsh]FederrathEtAl2017iaus Federrath, C., Rathborne, J. M., Longmore, S. N., et al. 2017, in IAU Symposium, Vol. 322, IAU Symposium, ed. R. M. Crocker, S. N. Longmore, & G. V. Bicknell, 123–128[Ferrière(2001)]Ferriere:2001aa Ferrière, K. M. 2001, Reviews of Modern Physics, 73, 1031[Fisher et al.(2017)Fisher, Glazebrook, Damjanov, Abraham, Obreschkow, Wisnioski, Bassett, Green, & McGregor]FisherEtAl2017 Fisher, D. B., Glazebrook, K., Damjanov, I., et al. 2017, , 464, 491[Ginsburg et al.(2016)Ginsburg, Henkel, Ao, Riquelme, Kauffmann, Pillai, Mills, Requena-Torres, Immer, Testi, Ott, Bally, Battersby, Darling, Aalto, Stanke, Kendrew, Kruijssen, Longmore, Dale, Guesten, & Menten]GinsburgEtAl2016 Ginsburg, A., Henkel, C., Ao, Y., et al. 2016, , 586, A50[Glazebrook(2013)]Glazebrook:2013aa Glazebrook, K. 2013, , 30, 56[Groves et al.(2015)Groves, Schinnerer, Leroy, Galametz, Walter, Bolatto, Hunt, Dale, Calzetti, Croxall, & Kennicutt]GrovesEtAl2015 Groves, B. A., Schinnerer, E., Leroy, A., et al. 2015, , 799, 96[Gutermuth et al.(2011)Gutermuth, Pipher, Megeath, Myers, Allen, & Allen]GutermuthEtAl2011 Gutermuth, R. A., Pipher, J. L., Megeath, S. T., et al. 2011, , 739, 84[Heiderman et al.(2010)Heiderman, Evans, Allen, Huard, & Heyer]HeidermanEtAl2010 Heiderman, A., Evans, II, N. J., Allen, L. E., Huard, T., & Heyer, M. 2010, , 723, 1019[Hennebelle & Chabrier(2011)]HennebelleChabrier2011 Hennebelle, P., & Chabrier, G. 2011, , 743, L29[Hennebelle & Chabrier(2013)]HennebelleChabrier2013 —. 2013, , 770, 150[Hennebelle & Falgarone(2012)]HennebelleFalgarone2012 Hennebelle, P., & Falgarone, E. 2012, , 20, 55[Ho et al.(2016)Ho, Medling, Groves, Rich, Rupke, Hampton, Kewley, Bland-Hawthorn, Croom, Richards, Schaefer, Sharp, & Sweet]HoEtAl2016 Ho, I.-T., Medling, A. M., Groves, B., et al. 2016, , 361, 280[Hopkins et al.(2013)Hopkins, Driver, Brough, Owers, Bauer, Gunawardhana, Cluver, Colless, Foster, Lara-López, Roseboom, Sharp, Steele, Thomas, Baldry, Brown, Liske, Norberg, Robotham, Bamford, Bland-Hawthorn, Drinkwater, Loveday, Meyer, Peacock, Tuffs, Agius, Alpaslan, Andrae, Cameron, Cole, Ching, Christodoulou, Conselice, Croom, Cross, De Propris, Delhaize, Dunne, Eales, Ellis, Frenk, Graham, Grootes, Häußler, Heymans, Hill, Hoyle, Hudson, Jarvis, Johansson, Jones, van Kampen, Kelvin, Kuijken, López-Sánchez, Maddox, Madore, Maraston, McNaught-Roberts, Nichol, Oliver, Parkinson, Penny, Phillipps, Pimbblet, Ponman, Popescu, Prescott, Proctor, Sadler, Sansom, Seibert, Staveley-Smith, Sutherland, Taylor, Van Waerbeke, Vázquez-Mata, Warren, Wijesinghe, Wild, & Wilkins]Hopkins:2013aa Hopkins, A. M., Driver, S. P., Brough, S., et al. 
2013, , 430, 2047[Jameson et al.(2016)Jameson, Bolatto, Leroy, Meixner, Roman-Duval, Gordon, Hughes, Israel, Rubio, Indebetouw, Madden, Bot, Hony, Cormier, Pellegrini, Galametz, & Sonneborn]JamesonEtAl2016 Jameson, K. E., Bolatto, A. D., Leroy, A. K., et al. 2016, , 825, 12[Kam et al.(2015)Kam, Carignan, Chemin, Amram, & Epinat]KamEtAl2015 Kam, Z. S., Carignan, C., Chemin, L., Amram, P., & Epinat, B. 2015, , 449, 4048[Kauffmann et al.(2003)Kauffmann, Heckman, Tremonti, Brinchmann, Charlot, White, Ridgway, Brinkmann, Fukugita, Hall, Ivezić, Richards, & Schneider]KauffmannEtAl2003 Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, , 346, 1055[Kauffmann et al.(2008)Kauffmann, Bertoldi, Bourke, Evans, & Lee]KauffmannEtAl2008 Kauffmann, J., Bertoldi, F., Bourke, T. L., Evans, II, N. J., & Lee, C. W. 2008, , 487, 993[Kennicutt & Evans(2012)]KennicuttEvans2012 Kennicutt, R. C., & Evans, N. J. 2012, , 50, 531[Kennicutt(1998)]K98 Kennicutt, Jr., R. C. 1998, , 498, 541[Kennicutt et al.(1994)Kennicutt, Tamblyn, & Congdon]KennicuttEtAl1994 Kennicutt, Jr., R. C., Tamblyn, P., & Congdon, C. E. 1994, , 435, 22[Kewley & Dopita(2003)]Kewley:2003aa Kewley, L. J., & Dopita, M. A. 2003, in Revista Mexicana de Astronomia y Astrofisica Conference Series, Vol. 17, Revista Mexicana de Astronomia y Astrofisica Conference Series, ed. V. Avila-Reese, C. Firmani, C. S. Frenk, & C. Allen, 83–84[Kewley et al.(2013)Kewley, Dopita, Leitherer, Davé, Yuan, Allen, Groves, & Sutherland]Kewley:2013aa Kewley, L. J., Dopita, M. A., Leitherer, C., et al. 2013, , 774, 100[Kewley et al.(2001)Kewley, Dopita, Sutherland, Heisler, & Trevena]KewleyEtAl2001 Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001, , 556, 121[Kewley et al.(2002)Kewley, Geller, Jansen, & Dopita]Kewley:2002aa Kewley, L. J., Geller, M. J., Jansen, R. A., & Dopita, M. A. 2002, , 124, 3135[Kewley et al.(2006)Kewley, Groves, Kauffmann, & Heckman]K06 Kewley, L. J., Groves, B., Kauffmann, G., & Heckman, T. 2006, , 372, 961[Kruijssen & Longmore(2014)]KruijssenLongmore2014 Kruijssen, J. M. D., & Longmore, S. N. 2014, , 439, 3239[Krumholz(2014)]Krumholz2014 Krumholz, M. R. 2014, , 539, 49[Krumholz et al.(2012)Krumholz, Dekel, & McKee]KDM12 Krumholz, M. R., Dekel, A., & McKee, C. F. 2012, , 745, 69[Krumholz et al.(2013)Krumholz, Dekel, & McKee]KDM13 —. 2013, , 779, 89[Krumholz & McKee(2005)]KrumholzMcKee2005 Krumholz, M. R., & McKee, C. F. 2005, , 630, 250[Krumholz & Tan(2007)]KrumholzTan2007 Krumholz, M. R., & Tan, J. C. 2007, , 654, 304[Lada et al.(2010)Lada, Lombardi, & Alves]LadaLombardiAlves2010 Lada, C. J., Lombardi, M., & Alves, J. F. 2010, , 724, 687[Le Fèvre et al.(2004)Le Fèvre, Vettolani, Paltani, Tresse, Zamorani, Le Brun, Moreau, Bottini, Maccagni, Picat, Scaramella, Scodeggio, Zanichelli, Adami, Arnouts, Bardelli, Bolzonella, Cappi, Charlot, Contini, Foucaud, Franzetti, Garilli, Gavignaud, Guzzo, Ilbert, Iovino, McCracken, Mancini, Marano, Marinoni, Mathez, Mazure, Meneux, Merighi, Pellò, Pollo, Pozzetti, Radovich, Zucca, Arnaboldi, Bondi, Bongiorno, Busarello, Ciliegi, Gregorini, Mellier, Merluzzi, Ripepi, & Rizzo]Le-Fevre:2004aa Le Fèvre, O., Vettolani, G., Paltani, S., et al. 2004, , 428, 1043[Leroy et al.(2008)Leroy, Walter, Brinks, Bigiel, de Blok, Madore, & Thornley]LeroyEtAl2008 Leroy, A. K., Walter, F., Brinks, E., et al. 2008, , 136, 2782[Mac Low & Klessen(2004)]MacLowKlessen2004 Mac Low, M.-M., & Klessen, R. S. 2004, RvMP, 76, 125[McKee & Ostriker(2007)]McKeeOstriker2007 McKee, C. F., & Ostriker, E. C. 
2007, , 45, 565[Molina et al.(2012)Molina, Glover, Federrath, & Klessen]MolinaEtAl2012 Molina, F. Z., Glover, S. C. O., Federrath, C., & Klessen, R. S. 2012, , 423, 2680[Padoan et al.(2014)Padoan, Federrath, Chabrier, Evans, Johnstone, Jørgensen, McKee, & Nordlund]PadoanEtAl2014 Padoan, P., Federrath, C., Chabrier, G., et al. 2014, Protostars and Planets VI, 77[Padoan & Nordlund(2011)]PadoanNordlund2011 Padoan, P., & Nordlund, Å. 2011, , 730, 40[Padoan et al.(1997)Padoan, Nordlund, & Jones]PadoanNordlundJones1997 Padoan, P., Nordlund, Å., & Jones, B. J. T. 1997, , 288, 145[Passot & Vázquez-Semadeni(1998)]PassotVazquez1998 Passot, T., & Vázquez-Semadeni, E. 1998, PhRvE, 58, 4501[Renaud et al.(2012)Renaud, Kraljic, & Bournaud]RenaudEtAl2012 Renaud, F., Kraljic, K., & Bournaud, F. 2012, , 760, L16[Rich et al.(2010)Rich, Dopita, Kewley, & Rupke]Rich:2010aa Rich, J. A., Dopita, M. A., Kewley, L. J., & Rupke, D. S. N. 2010, , 721, 505[Rich et al.(2011)Rich, Kewley, & Dopita]Rich:2011aa Rich, J. A., Kewley, L. J., & Dopita, M. A. 2011, , 734, 87[Rich et al.(2012)Rich, Torrey, Kewley, Dopita, & Rupke]Rich:2012aa Rich, J. A., Torrey, P., Kewley, L. J., Dopita, M. A., & Rupke, D. S. N. 2012, , 753, 5[Richards et al.(2016)Richards, Bryant, Croom, Hopkins, Schaefer, Bland-Hawthorn, Allen, Brough, Cecil, Cortese, Fogarty, Gunawardhana, Goodwin, Green, Ho, Kewley, Konstantopoulos, Lawrence, Lorente, Medling, Owers, Sharp, Sweet, & Taylor]RichardsEtAl2016 Richards, S. N., Bryant, J. J., Croom, S. M., et al. 2016, , 455, 2826[Robotham et al.(2010)Robotham, Driver, Norberg, Baldry, Bamford, Hopkins, Liske, Loveday, Peacock, Cameron, Croom, Doyle, Frenk, Hill, Jones, van Kampen, Kelvin, Kuijken, Nichol, Parkinson, Popescu, Prescott, Sharp, Sutherland, Thomas, & Tuffs]Robotham:2010aa Robotham, A., Driver, S. P., Norberg, P., et al. 2010, , 27, 76[Salim et al.(2015)Salim, Federrath, & Kewley]Salim:2015aa Salim, D. M., Federrath, C., & Kewley, L. J. 2015, , 806, L36[Scalo & Elmegreen(2004)]ScaloElmegreen2004 Scalo, J., & Elmegreen, B. G. 2004, , 42, 275[Schmidt(1959)]Schmidt1959 Schmidt, M. 1959, , 129, 243[Schruba et al.(2011)Schruba, Leroy, Walter, Bigiel, Brinks, de Blok, Dumas, Kramer, Rosolowsky, Sandstrom, Schuster, Usero, Weiss, & Wiesemeyer]SchrubaEtAl2011 Schruba, A., Leroy, A. K., Walter, F., et al. 2011, , 142, 37[Scoville et al.(2007)Scoville, Aussel, Brusa, Capak, Carollo, Elvis, Giavalisco, Guzzo, Hasinger, Impey, Kneib, LeFevre, Lilly, Mobasher, Renzini, Rich, Sanders, Schinnerer, Schminovich, Shopbell, Taniguchi, & Tyson]Scoville:2007aa Scoville, N., Aussel, H., Brusa, M., et al. 2007, , 172, 1[Sharp et al.(2006)Sharp, Saunders, Smith, Churilov, Correll, Dawson, Farrel, Frost, Haynes, Heald, Lankshear, Mayfield, Waller, & Whittard]SharpEtAl2006 Sharp, R., Saunders, W., Smith, G., et al. 2006, in SPIE Conference Series, Vol. 6269, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 62690G[Sharp et al.(2015)Sharp, Allen, Fogarty, Croom, Cortese, Green, Nielsen, Richards, Scott, Taylor, Barnes, Bauer, Birchall, Bland-Hawthorn, Bloom, Brough, Bryant, Cecil, Colless, Couch, Drinkwater, Driver, Foster, Goodwin, Gunawardhana, Ho, Hampton, Hopkins, Jones, Konstantopoulos, Lawrence, Leslie, Lewis, Liske, López-Sánchez, Lorente, McElroy, Medling, Mahajan, Mould, Parker, Pracy, Obreschkow, Owers, Schaefer, Sweet, Thomas, Tonini, & Walcher]SharpEtAl2015 Sharp, R., Allen, J. T., Fogarty, L. M. R., et al. 
2015, , 446, 1551[Shetty et al.(2011a)Shetty, Glover, Dullemond, & Klessen]ShettyEtAl2011a Shetty, R., Glover, S. C., Dullemond, C. P., & Klessen, R. S. 2011a, , 412, 1686[Shetty et al.(2011b)Shetty, Glover, Dullemond, Ostriker, Harris, & Klessen]ShettyEtAl2011b Shetty, R., Glover, S. C., Dullemond, C. P., et al. 2011b, , 415, 3253[Toomre(1964)]Toomre:1964aa Toomre, A. 1964, , 139, 1217[Valiante et al.(2016)Valiante, Smith, Eales, Maddox, Ibar, Hopwood, Dunne, Cigan, Dye, Pascale, Rigby, Bourne, Furlanetto, & Ivison]ValianteEtAl2016 Valiante, E., Smith, M. W. L., Eales, S., et al. 2016, , 462, 3146[van der Kruit & Freeman(2011)]van-der-Kruit:2011aa van der Kruit, P. C., & Freeman, K. C. 2011, , 49, 301[van der Kruit & Searle(1981)]van-der-Kruit:1981aa van der Kruit, P. C., & Searle, L. 1981, , 95, 105[van der Kruit & Searle(1982)]van-der-Kruit:1982aa —. 1982, , 110, 61[Varidel et al.(2016)Varidel, Pracy, Croom, Owers, & Sadler]Varidel:2016aa Varidel, M., Pracy, M., Croom, S., Owers, M. S., & Sadler, E. 2016, , 33, e006[Vázquez-Semadeni(1994)]Vazquez1994 Vázquez-Semadeni, E. 1994, , 423, 681[Veilleux & Osterbrock(1987)]VeilleuxOsterbrock1987 Veilleux, S., & Osterbrock, D. E. 1987, , 63, 295[Williams et al.(2010)Williams, Bureau, & Cappellari]WilliamsEtAl2010 Williams, M. J., Bureau, M., & Cappellari, M. 2010, , 409, 1330[Wu et al.(2010)Wu, Evans, Shirley, & Knez]WuEtAl2010 Wu, J., Evans, II, N. J., Shirley, Y. L., & Knez, C. 2010, , 188, 313[York et al.(2000)York, Adelman, Anderson, Anderson, Annis, Bahcall, Bakken, Barkhouser, Bastian, Berman, Boroski, Bracker, Briegel, Briggs, Brinkmann, Brunner, Burles, Carey, Carr, Castander, Chen, Colestock, Connolly, Crocker, Csabai, Czarapata, Davis, Doi, Dombeck, Eisenstein, Ellman, Elms, Evans, Fan, Federwitz, Fiscelli, Friedman, Frieman, Fukugita, Gillespie, Gunn, Gurbani, de Haas, Haldeman, Harris, Hayes, Heckman, Hennessy, Hindsley, Holm, Holmgren, Huang, Hull, Husby, Ichikawa, Ichikawa, Ivezić, Kent, Kim, Kinney, Klaene, Kleinman, Kleinman, Knapp, Korienek, Kron, Kunszt, Lamb, Lee, Leger, Limmongkol, Lindenmeyer, Long, Loomis, Loveday, Lucinio, Lupton, MacKinnon, Mannery, Mantsch, Margon, McGehee, McKay, Meiksin, Merelli, Monet, Munn, Narayanan, Nash, Neilsen, Neswold, Newberg, Nichol, Nicinski, Nonino, Okada, Okamura, Ostriker, Owen, Pauls, Peoples, Peterson, Petravick, Pier, Pope, Pordes, Prosapio, Rechenmacher, Quinn, Richards, Richmond, Rivetta, Rockosi, Ruthmansdorfer, Sandford, Schlegel, Schneider, Sekiguchi, Sergey, Shimasaku, Siegmund, Smee, Smith, Snedden, Stone, Stoughton, Strauss, Stubbs, SubbaRao, Szalay, Szapudi, Szokoly, Thakar, Tremonti, Tucker, Uomoto, Vanden Berk, Vogeley, Waddell, Wang, Watanabe, Weinberg, Yanny, Yasuda, & SDSS Collaboration]York:2000aa York, D. G., Adelman, J., Anderson, Jr., J. E., et al. 2000, , 120, 1579[Yusef-Zadeh et al.(2009)Yusef-Zadeh, Hewitt, Arendt, Whitney, Rieke, Wardle, Hinz, Stolovy, Lang, Burton, & Ramirez]YusefZadehEtAl2009 Yusef-Zadeh, F., Hewitt, J. W., Arendt, R. G., et al. 2009, , 702, 178 | http://arxiv.org/abs/1703.09224v1 | {
"authors": [
"Christoph Federrath",
"Diane M. Salim",
"Anne M. Medling",
"Rebecca L. Davies",
"Tiantian Yuan",
"Fuyan Bian",
"Brent A. Groves",
"I-Ting Ho",
"Robert Sharp",
"Lisa J. Kewley",
"Sarah M. Sweet",
"Samuel N. Richards",
"Julia J. Bryant",
"Sarah Brough",
"Scott Croom",
"Nicholas Scott",
"Jon Lawrence",
"Iraklis Konstantopoulos",
"Michael Goodwin"
],
"categories": [
"astro-ph.GA",
"astro-ph.CO",
"astro-ph.IM",
"astro-ph.SR"
],
"primary_category": "astro-ph.GA",
"published": "20170327180001",
"title": "The SAMI Galaxy Survey: a new method to estimate molecular gas surface densities from star formation rates"
} |
http://arxiv.org/abs/1703.08695v2 | {
"authors": [
"Salvatore Baldino",
"Stefano Bolognesi",
"Sven Bjarke Gudnason",
"Deniz Koksal"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20170325135930",
"title": "A Solitonic Approach to Holographic Nuclear Physics"
} |
|
Trespassing the Boundaries: Labeling Temporal Bounds for Object Interactions in Egocentric Video Davide Moltisanti Michael Wray Walterio Mayol-Cuevas Dima Damen University of Bristol, United Kingdom <FirstName>.<LastName>@bristol.ac.uk ===============================================================================================================================================================

Manual annotations of temporal bounds for object interactions (i.e. start and end times) are typical training input to recognition, localization and detection algorithms. For three publicly available egocentric datasets, we uncover inconsistencies in ground truth temporal bounds within and across annotators and datasets. We systematically assess the robustness of state-of-the-art approaches to changes in labeled temporal bounds, for object interaction recognition. As boundaries are trespassed, a drop of up to 10% is observed for both Improved Dense Trajectories and Two-Stream Convolutional Neural Network. We demonstrate that such disagreement stems from a limited understanding of the distinct phases of an action, and propose annotating based on the Rubicon Boundaries, inspired by a similarly named cognitive model, for consistent temporal bounds of object interactions. Evaluated on a public dataset, we report a 4% increase in overall accuracy, and an increase in accuracy for 55% of classes when Rubicon Boundaries are used for temporal annotations.

§ INTRODUCTION

Egocentric videos, also referred to as first-person videos, have been frequently advocated to provide a unique perspective into object interactions <cit.>. These capture a viewpoint of the object close to that perceived by the user during the interaction. Consider, for example, `turning a door handle'. Similar appearance and motion information will be captured from an egocentric perspective as multiple people turn a variety of door handles. Several datasets have been made available to the research community focusing on object interactions from head-mounted <cit.> and chest-mounted <cit.> cameras. These incorporate ground truth labels that mark the start and the end of each object interaction, such as `open fridge', `cut tomato' and `push door'. These temporal bounds are the basis for automating action detection, localization and recognition. They are thus highly influential in the ability of an algorithm to distinguish one interaction from another. As temporal bounds vary, the segments may contain different portions of the untrimmed video from which the action is extracted. Humans can still recognize an action even when the video snippet varies or contains only part of the action. Machines are not yet as robust, given that current algorithms strongly rely on the data and the labels we feed to them. Should these bounds be incorrectly or inconsistently annotated, the ability to learn, as well as to assess, models for action recognition would be adversely affected. In this paper, we uncover inconsistencies in defining temporal bounds for object interactions within and across three egocentric datasets. We show that temporal bounds are often ill-defined, with limited insight into how they have been annotated.
We systematically show that perturbations of temporal bounds influence the accuracy of action recognition, for both hand-crafted features and fine-tuned CNN classifiers, even when the tested video segment significantly overlaps with the ground truth segment. While this paper focuses on unearthing inconsistencies in temporal bounds, and assessing their effect on object interaction recognition, we take a step further into proposing an approach for consistently labeling temporal bounds inspired by studies in the human mindset. Main Contributions More specifically, we: * Inspect the consistency of temporal bounds for object interactions across and within three datasets for egocentric object interactions. We demonstrate that current approaches are highly subjective, with visible variability in temporal bounds when annotating instances of the same action; * Evaluate the robustness of two state-of-the-art action recognition approaches, namely Improved Dense Trajectories <cit.> and Convolutional Two-Stream Network Fusion <cit.>, to changes in temporal bounds. We demonstrate that the recognition rate drops by 2-10% when temporal bounds are modified, albeit within an Intersection-over-Union of more than 0.5; * Propose, inspired by studies in Psychology, the Rubicon Boundaries to assist in consistent temporal boundary annotations for object interactions; * Re-annotate one dataset using the Rubicon Boundaries, and show more than 4% increase in recognition accuracy, with improved per-class accuracies for most classes in the dataset. We next review related works in Section <ref>, before embarking on inspecting labeling consistencies in Section <ref>, evaluating recognition robustness in Section <ref> and proposing and evaluating the Rubicon Boundaries in Section <ref>. The paper concludes with an insight into future directions.

§ RELATED WORK

In this section, we review all papers that, up to our knowledge, ventured into the consistency and robustness of temporal bounds for action recognition. Temporal Bounds in Non-Egocentric Datasets The leading work of Satkin and Hebert <cit.> first pointed out that determining the temporal extent of an action is often subjective, and that action recognition results vary depending on the bounds used for training. They proposed to find the most discriminative portion of each segment for the task of action recognition. Given a loosely trimmed training segment, they exhaustively search for the cropping that leads to the highest classification accuracy, using hand-crafted features such as HOG, HOF <cit.> and Trajectons <cit.>. Optimizing bounds to maximize discrimination between class labels has also been attempted by Duchenne et al. <cit.>, where they refined loosely labeled temporal bounds of actions, estimated from film scripts, to increase accuracy across action classes. Similarly, two works evaluated the optimal segment length for action recognition <cit.>. From the start of the segment, 1-7 frames were deemed sufficient in <cit.>, with rapidly diminishing returns as more frames were added. More recently, <cit.> showed that 15-20 frames were enough to recognize human actions from 3D skeleton joints. Interestingly, assessing the effect of temporal bounds is still an active research topic within novel deep architectures. Recently, Peng et al. <cit.> assessed how frame-level classifications using multi-region two-stream CNN are pooled to achieve video-level recognition results.
The authors reported that stacking more than 5 frames worsened the action detection and recognition results for the tested datasets, though only compared to a 10-frame stack. The problem of finding optimal temporal bounds is much akin to that of action localization in untrimmed videos <cit.>. Typical approaches attempt to find similar temporal bounds to those used in training, making them equally dependent on manual labels and thus sensitive to inconsistencies in the ground truth labels. An interesting approach that addressed reliance on training temporal bounds for action recognition and localization is that of Gaidon et al. <cit.>. They noted that action recognition methods rely on test video temporal bounds that strictly contain an action, delimited in exactly the same fashion as the training segments. They thus redefined an action as a sequence of key atomic frames, referred to as actoms. The authors learned the optimal sequence of actoms per action class with promising results. More recently, Wang et al. <cit.> represented actions as a transformation from a precondition state to an effect state. The authors attempted to learn such transformations as well as locate the end of the precondition and the start of the effect. However, both approaches <cit.> rely on manual annotations of actoms <cit.> or action segments <cit.>, which are potentially as subjective as the temporal bounds of the actions themselves. Temporal Bounds in Egocentric Datasets Compared to third person action recognition (e.g. 101 action classes in <cit.> and 157 action classes in <cit.>), egocentric datasets have a smaller number of classes (5-44 classes <cit.>), with considerable ambiguities (e.g. `turn on' vs `turn off' tap). Comparative recognition results have been reported on these datasets in <cit.>. Previously, three works noted the challenge and difficulty in defining temporal bounds for egocentric videos <cit.>. In <cit.>, Spriggs et al. discussed the level of granularity in action labels (e.g. `break egg' vs `beat egg in a bowl') for the CMU dataset <cit.>. They also noted the presence of temporally overlapping object interactions (e.g. `pour' while `stirring'). In <cit.>, multiple annotators were asked to provide temporal bounds for the same object interaction. The authors showed variability in annotations, yet did not detail what instructions were given to annotators when labeling these temporal bounds. In <cit.>, the human ability to order pairwise egocentric segments was evaluated as the snippet length varied. The work showed that human perception improves as the size of the segment increases to 60 frames, then levels off. To summarize, temporal bounds for object interactions in egocentric video have been overlooked, and no previous work attempted to analyze the influence of consistency of temporal bounds or the robustness of representations to variability in these bounds. This paper particularly attempts to answer both questions: how consistent are current temporal bound labels in egocentric datasets? and how sensitive are action recognition results to changes in these temporal bounds? We next delve into answering these questions.

§ TEMPORAL BOUNDS: INSPECTING INCONSISTENCY

Current egocentric datasets are annotated for a number of action classes, described using a verb-noun label. Each class instance is annotated with its label as well as the temporal bounds (i.e. start and end times) that delimit the frames used to learn the action model.
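For concreteness, each such ground-truth instance can be thought of as a small record holding the verb-noun label and its temporal bounds. A minimal sketch (Python; the field names are our own, not those used by any of the datasets):

```python
from dataclasses import dataclass

@dataclass
class ActionSegment:
    verb: str     # e.g. "open"
    noun: str     # e.g. "fridge"
    start: float  # start time (seconds) in the untrimmed video
    end: float    # end time (seconds)

    @property
    def length(self) -> float:
        """Duration of the annotated interaction."""
        return self.end - self.start

seg = ActionSegment("pour", "oil", start=12.0, end=14.5)
print(seg.length)  # 2.5
```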
Little information is typically provided on how these manually labeled temporal bounds are acquired. In Section <ref>, we compare labels across and within egocentric datasets. We then discuss in Section <ref> how variability is further increased when multiple annotators for the same action are employed.

§.§ Labeling in Current Egocentric Datasets

We study ground truth annotations for three public datasets, namely BEOID <cit.>, GTEA Gaze+ <cit.> and CMU <cit.>. Observably, many annotations take the start and the end of an action to be, respectively, the first and last frames in which the hands are visible in the field of view. Other annotations tend to segment an action more strictly, including only the most relevant physical object interaction within the bounds. Figure <ref> illustrates an example of three different temporal bounds for the `pour' action across the three datasets. Frames marked in red are those that have been labeled in the different datasets as containing the `pour' action. The annotated temporal bounds in this example vary remarkably: (i) BEOID's are the tightest; (ii) The start of GTEA Gaze+'s segment is delayed: in fact, the first frame in the annotated segment shows some oil already in the pan; (iii) CMU's segment includes picking up the oil container and putting it down before and after pouring. These conclusions extend to other actions in the three datasets.

We observe that annotations are also inconsistent within the same dataset. Figure <ref> shows three intra-dataset annotations. (i) For the action `open door' in BEOID, one segment includes the hand reaching the door, while the other starts with the hand already holding the door's handle; (ii) For the action `cut pepper' in GTEA Gaze+, in one segment the user already holds the knife and cuts a single slice of the vegetable. The second segment includes the action of picking up the knife, and shows the subject slicing the whole pepper through several cuts. Note that the length difference between the two segments is considerable - the segments are respectively 3 and 80 seconds long; (iii) For the action `crack egg' in CMU, only the first segment shows the user tapping the egg against the bowl. While the figure shows three examples, such inconsistencies have been discovered throughout the three datasets. However, we generally observe that GTEA Gaze+ shows more inconsistencies, which could be due to the dataset size, as it is the largest among the evaluated datasets.

§.§ Multi-Annotator Labeling

As noted above, defining when an object interaction begins and finishes is highly subjective. There is usually little agreement when different annotators segment the same object interaction. To assess this variability, we collected 5 annotations for several object interactions from an untrimmed video of the BEOID dataset. First, annotators were only informed of the class name and asked to label the start and the end of the action. We refer to these annotations as conventional. We then asked a different set of annotators to annotate the same object interactions following our proposed Rubicon Boundaries (RB) approach, which we present in Section <ref>. Figure <ref> shows per-class box plots for the Intersection-over-Union (IoU) measure for all pairs of annotations. RB annotations demonstrate gained consistency for all classes. For conventional annotations, we report an average IoU = 0.63 and a standard deviation of 0.22, whereas for RB annotations we report an increased average IoU = 0.83 with a lower standard deviation of 0.11.
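The IoU measure used here is the standard interval intersection-over-union. A minimal self-contained sketch of the per-interaction pairwise consistency computation (our illustration; the annotation values below are hypothetical):

```python
from itertools import combinations
from statistics import mean, stdev

def temporal_iou(a, b):
    """IoU of two 1-D segments a = (start, end), b = (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def pairwise_consistency(annotations):
    """Mean and standard deviation of IoU over all annotator pairs."""
    ious = [temporal_iou(a, b) for a, b in combinations(annotations, 2)]
    return mean(ious), stdev(ious)

# e.g. five hypothetical annotations of the same 'open door' instance:
labels = [(3.1, 5.0), (2.8, 5.2), (3.0, 4.6), (3.3, 5.1), (2.9, 5.4)]
print(pairwise_consistency(labels))
```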
To assess how consistency changes as more annotations are collected, we employ annotators via Amazon Mechanical Turk (MTurk) for two object interactions from BEOID, namely the actions of `scan card' and `wash cup', for which we gathered 45 conventional and RB labels. Box plots for MTurk labels are included in Figure <ref>, showing marginal improvements with RB annotations as well. We will revisit RB annotations in detail in Section <ref>. In the next section, we assess the robustness of current action recognition approaches to variations in temporal boundaries.

§ TEMPORAL BOUNDS: ASSESSING ROBUSTNESS

To assess the effect of temporal bounds on action recognition, we systematically vary the start and end times of annotated segments for the three datasets, and report comprehensive results on the effect of such alterations. Results are evaluated using 5-fold cross validation. For training, only ground truth segments are considered. We then classify both the original ground truth and the generated segments. We provide results using Improved Dense Trajectories <cit.> encoded with Fisher Vector <cit.> (IDT FV)[IDT features have been extracted using GNU Parallel <cit.>.] and Convolutional Two-Stream Network Fusion for Video Action Recognition (2SCNN) <cit.>. The encoded IDT FV features are classified with a linear SVM. Experiments on 2SCNN are carried out using the provided code and the proposed VGG-16 architecture pre-trained on ImageNet and tuned on UCF101 <cit.>. We fine-tune the spatial, temporal and fusion networks on each fold's training set.

Theoretically, the two action recognition approaches are likely to respond differently to variations in start and end times. Specifically, 2SCNN averages the classification responses of the fusion network obtained on n frames randomly extracted from a test video v of length |v|. In our experiments, n = min(20, |v|). This strategy should lend 2SCNN some degree of resilience. IDT densely samples feature points in the first frame of the video, whereas in the following frames only new feature points are sampled to replace the missing ones. This entails that IDT FV should be more sensitive to variations in the start time (especially) and end time, at least for shorter videos. This fundamental difference makes both approaches interesting to assess for robustness.

§.§ Generating Segments

Let v_gt be a ground truth action segment obtained by cropping an untrimmed video from time s_gt to time e_gt, which denote the annotated ground truth start and end times. We vary both s_gt and e_gt in order to generate new action segments with different temporal bounds. More precisely, let s_gen^0 = s_gt - Δ and let s_gen^n = s_gt + Δ. The set containing candidate start times is defined as:

𝒮 = {s_gen^0, s_gen^0 + δ, s_gen^0 + 2δ, ..., s_gen^0 + (n-1)δ, s_gen^n}

Analogously, let e_gen^0 = e_gt - Δ and let e_gen^n = e_gt + Δ; the set containing candidate end times is defined as:

ℰ = {e_gen^0, e_gen^0 + δ, e_gen^0 + 2δ, ..., e_gen^0 + (n-1)δ, e_gen^n}

To accumulate the set of generated action segments, we take all possible combinations of s_gen ∈ 𝒮 and e_gen ∈ ℰ and keep only those such that the Intersection-over-Union between [s_gt, e_gt] and [s_gen, e_gen] is ≥ 0.5. In all our experiments, we set Δ = 2 and δ = 0.5 seconds (hence n = 2Δ/δ = 8, i.e. nine candidate start times and nine candidate end times per segment).

§.§ Comparative Evaluation

Table <ref> reports the number of ground truth and generated segments for BEOID, GTEA Gaze+ and CMU.
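The generation procedure just described is straightforward to implement. The following sketch is our own illustration of it (Δ = 2 s, δ = 0.5 s, IoU ≥ 0.5 filter; function and variable names are ours):

```python
def generate_segments(s_gt, e_gt, big_delta=2.0, delta=0.5, min_iou=0.5):
    """Enumerate all (s_gen, e_gen) pairs with s_gen in [s_gt - D, s_gt + D]
    and e_gen in [e_gt - D, e_gt + D] in steps of delta, keeping only pairs
    whose IoU with the ground truth segment is at least min_iou."""
    n_steps = int(2 * big_delta / delta) + 1  # 9 candidates for D=2, delta=0.5
    offsets = [-big_delta + delta * i for i in range(n_steps)]
    generated = []
    for s in (s_gt + o for o in offsets):
        for e in (e_gt + o for o in offsets):
            if e <= s:
                continue  # discard degenerate segments
            inter = max(0.0, min(e, e_gt) - max(s, s_gt))
            union = (e - s) + (e_gt - s_gt) - inter
            if union > 0 and inter / union >= min_iou:
                generated.append((s, e))
    return generated

# e.g. a 4-second ground truth segment:
print(len(generate_segments(10.0, 14.0)))
```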
Figure <ref> illustrates the segments' length distribution for the three datasets, showing considerable differences: BEOID and GTEA Gaze+ contain mostly short segments (1-2.5 seconds), although the latter also includes videos up to 40 seconds long. CMU has longer segments, with the majority ranging from 5 to 15 seconds.

BEOID <cit.> is the evaluated dataset with the most consistent and tightest temporal bounds. When testing the ground truth segments using both IDT FV and 2SCNN, we observe high accuracy for ground truth segments - respectively 85.3% and 93.5% - as shown in Table <ref>. When classifying the generated segments, we observe a drop in accuracy of 9.9% and 9.7% respectively. Figure <ref> shows detailed results where accuracy is reported vs IoU, start/end shifts and length difference between ground truth and generated segments. We particularly show the results for shifts in the start and the end times independently. A negative start shift implies that a generated segment begins before the corresponding ground truth segment, and a negative end shift implies that a generated segment finishes before the corresponding ground truth segment. These terms are used consistently throughout this section. Results show that: (i) as IoU decreases the accuracy drops consistently for IDT FV and 2SCNN - which calls into question both approaches' robustness to temporal bound alterations; (ii) IDT FV exhibits lower accuracy with both negative and positive start/end shifts; (iii) IDT FV similarly exhibits lower accuracy with negative and positive length differences. This is justified as BEOID segments are tight; by expanding a segment we include new potentially noisy or irrelevant frames that confuse the classifiers; (iv) 2SCNN is more robust to length difference, which is understandable as it randomly samples a maximum of 20 frames regardless of the length. While these are somewhat expected, we also note that (v) 2SCNN is robust to positive start shifts.

CMU <cit.> is the dataset with longer ground truth segments. Table <ref> compares results obtained for CMU's ground truth and generated segments. For this dataset, IDT FV accuracy drops by 2.1% for the generated segments, whereas 2SCNN drops by 4.3%. In Figure <ref>, CMU consistently shows low robustness for both IDT FV and 2SCNN. As IoU changes from > 0.9 to > 0.5, we observe a drop of more than 20% in accuracy for both. However, due to the long average length of segments in CMU, the effect of shifts in start and end times as well as length differences is not visible for IDT FV. Interestingly for 2SCNN, the accuracy slightly improves with positive start shift, negative end shift and negative length difference. This suggests that CMU's ground truth bounds are somewhat loose and that tighter segments are likely to contain more discriminative frames.

GTEA Gaze+ <cit.> is the dataset with the most inconsistent bounds, based on our observations. Table <ref> shows that accuracy for IDT FV drops by 2.1%, while overall accuracy for 2SCNN drops marginally (1.6%). This should not be mistaken for robustness, and that is evident when studying the results in Figure <ref>. For all variations (i.e. start/end shifts and length differences), the generated segments achieve higher accuracy for both IDT FV and 2SCNN.
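For reference, the 2SCNN video-level scoring described earlier - averaging fusion-network class scores over n = min(20, |v|) randomly sampled frames - can be sketched as follows (a minimal illustration with synthetic scores; this is not the authors' released code):

```python
import random

def video_score_2scnn(frame_scores):
    """Average per-class scores over n = min(20, |v|) randomly sampled frames,
    where frame_scores is a list of per-frame class-score lists."""
    n = min(20, len(frame_scores))
    sampled = random.sample(frame_scores, n)
    num_classes = len(sampled[0])
    return [sum(s[c] for s in sampled) / n for c in range(num_classes)]

# e.g. a 30-frame clip with 3 classes:
scores = [[random.random() for _ in range(3)] for _ in range(30)]
print(video_score_2scnn(scores))
```

Because the sampled frames are drawn from the whole test segment, shifting the segment changes which frames can be drawn; even this pooling scheme therefore remains sensitive to the labeled bounds, as the results above show.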
When labels are inconsistent, shifting temporal bounds does not systematically alter the visual representation of the tested segments. The generated segments tend to include (or exclude) frames that increase the similarity between the testing and training segments.

Figure <ref> shows per-class differences between generated and ground truth segments. Positive values entail that the accuracy for the given class is higher when testing the generated segments, and vice versa. Horizontal lines indicate the average accuracy difference. In total, 63% of classes in all three datasets exhibit a drop in accuracy when using IDT FV, compared to 80% when using 2SCNN.

Data augmentation: For completeness, we evaluate the performance when using temporal data augmentation methods on two datasets. Generated segments in Section <ref> are used to augment training. We double the size of the training sets, taking random samples for augmentation. Test sets remained unvaried. Results are reported in Table <ref>. While we observe an increase in robustness, we also notice a drop in accuracy for ground truth segments, respectively of 1% and 4% for BEOID and GTEA Gaze+.

In conclusion, we note that both IDT FV and 2SCNN are sensitive to changes in temporal bounds for both consistent and inconsistent annotations. Approaches that improve robustness using data augmentation could be attempted; however, a broader look at how the methods could be inherently more robust is needed, particularly for CNN architectures.

§ LABELING PROPOSAL: THE RUBICON BOUNDARIES

The problem of defining consistent temporal bounds of an action is most akin to the problem of defining consistent bounding boxes of an object. Attempts to define guidelines for annotating objects' bounding boxes started nearly a decade ago. Among others, the VOC Challenge 2007 <cit.> proposed what has become the standard for defining the bounding box of an object in images. These consistent labels have been used to train state-of-the-art object detection and classification methods. In this same spirit, in this section we propose an approach to consistently segment the temporal scope of an object interaction.

Defining RB: The Rubicon Model of Action Phases <cit.>, developed in the field of Psychology, posits an action as a goal a subject desires to achieve and identifies the main sub-phases the person gets through in order to complete the action. First, a person decides what goal he wants to obtain. After forming his intention, he enters the so-called pre-actional phase, that is, a phase in which he plans to perform the action. Following this stage, the subject acts towards goal achievement in the actional phase. The two phases are delimited by three transition points: the initiation of prior motion, the start of the action and the goal achievement. The model is named after the historical fact of Caesar crossing the Rubicon river, which became a metaphor for deliberately proceeding past a point of no return, which in our case is the transition point that signals the beginning of an action. We take inspiration from this model, specifically from the aforementioned transition points, and define two phases for an object interaction:

Pre-actional phase This sub-segment contains the preliminary motion that directly precedes the goal, and is required for its completion. When multiple motions can be identified, the pre-actional phase should contain only the last one;

Actional phase This is the main sub-segment containing the motion strictly related to the fulfillment of the goal.
The actional phase starts immediately after the pre-actional phase. In the following, we refer to a label as an RB annotation when the beginning of an object interaction aligns with the start of the pre-actional phase and the ending of the interaction aligns with the end of the actional phase. Figure <ref> depicts three object interactions labeled according to the Rubicon Boundaries. The top sequence illustrates the action of cutting a pepper. The sequence shows the subject fetching the knife before cutting the pepper and taking it off the plate. Based on the aforementioned definitions, the pre-actional phase is limited to the motion of moving the knife towards the pepper in order to slice it. This is directly followed by the actional phase, where the user cuts the pepper. The actional phase ends as the goal of `cutting' is completed. The middle sequence illustrates the action of opening a fridge, showing a person approaching the fridge and reaching towards the handle before pulling the fridge door open. In this case, the pre-actional phase would be the reaching motion, while the actional phase would be the pulling motion. Evaluating RB: We evaluate our RB proposal for consistency and intuitiveness, as well as accuracy and robustness. (i) Consistency: We already reported consistency results in Section <ref>, where RB annotations exhibit higher average overlap and less variation for all the evaluated object interactions - the average IoU over all pairs of annotators increased from 0.63 for conventional boundaries to 0.83 for RB. Figure <ref> illustrates per-class IoU box plots for the pre-actional and the actional phases separately, along with the concatenation of the two. For 7 out of the 13 actions, the actional phase was more consistent than the pre-actional phase, and for 12 out of the 13 actions, the concatenation of the phases showed the highest consistency. (ii) Intuitiveness: While RB showed higher consistency in labeling, any new approach for temporal boundaries would require a shift in practice. We collected RB annotations from university students as well as from MTurk annotators. Locally, students successfully used the RB definitions to annotate videos with no assistance. However, this was not the case for MTurk annotators for the two object interactions `wash cup' and `scan card'. The MTurk HIT provided the formal definition of the pre-actional and actional phases, then posed two multiple-choice control questions to assess the ability of annotators to distinguish these phases from a video. The annotators had to select from textual descriptions what the pre-actional and the actional phases entailed. For both object interactions, only a fourth of the annotators answered the control questions correctly. Two possible explanations could be given, namely: annotators were accustomed to the conventional labeling method and did not spend sufficient time studying the definitions, or the definitions were difficult to understand. Further experimentation is needed to understand the cause. (iii) Accuracy: We annotated GTEA Gaze+ using the Rubicon Boundaries, by employing three people to label its 1141 segments[RB labels and video of results are available on the project webpage: <http://www.cs.bris.ac.uk/ damen/Trespass/>]. For these experiments, we asked annotators to label both the pre-actional and the actional phase. In Table <ref>, we report results for the actional phase alone (RBact) as well as the concatenation of the two phases (RB), using 2SCNN on the same 5 folds from Section <ref>.
The concatenated RB segments proved the most accurate, leading to an increase of more than 4% in accuracy compared to conventional ground truth segments. Temporal augmentation on conventional labels (Conv^aug_gt) results in a drop of accuracy by 7.7% compared with the RB segments, highlighting that consistent labeling cannot be substituted by data augmentation. Figure <ref> shows the per-class accuracy difference between the two sets of RB annotations and the conventional labels. When using RBact, 21/42 classes improved, whereas accuracy dropped for 11 classes compared to the conventional annotations. When using the full RB segment, 23/42 classes improved, while 10 classes were better recognized with the conventional annotations. The remaining classes (10 and 9, respectively) were unchanged. Given that the experimental setup was identical to that used for the conventional annotations, the boost in accuracy can be ascribed solely to the new action boundaries. Indeed, the RB approach helped the annotators to more consistently segment the object interactions contained in GTEA Gaze+, which is one of the most challenging datasets for egocentric action recognition. (iv) Robustness: Table <ref> also compares the newly annotated RB segments to generated segments with varied start and end times, as explained in Section <ref>. While RB_gen shows higher accuracy than the Conventional_gen segments (59.6% as reported in Table <ref>), we still observe a clear drop in accuracy between gt and gen segments. Interestingly, we observe improved robustness when using the actional phase alone. Given that the actional segment's start is closer in time to the beginning of the object interaction, when varying the start of the segment we are effectively including part of the pre-actional phase in the generated segment, which assists in making actions more discriminative. Importantly, we show that RB annotations improved both consistency and accuracy of annotations on the largest dataset of egocentric object interactions. We believe these form a solid basis for further discussion and experimentation on consistent labeling of temporal boundaries.§ CONCLUSION AND FUTURE DIRECTIONS Annotating temporal bounds for object interactions is the basis for supervised action recognition algorithms. In this work, we uncovered inconsistencies in temporal bound annotations within and across three egocentric datasets. We assessed the robustness of both hand-crafted features and fine-tuned end-to-end recognition methods, and demonstrated that both IDT FV and 2SCNN are susceptible to variations in start and end times. We then proposed an approach to consistently label temporal bounds for object interactions. We foresee three potential future directions: Other NN architectures? While 2SCNN randomly samples frames from a video segment, its classification accuracy is still sensitive to variations in temporal bounds. Other architectures, particularly those that model temporal progression using recurrent networks (including LSTMs), rely on labeled training samples and would thus equally benefit from consistent labeling. Evaluating the robustness of such networks is an interesting future direction. How can robustness to temporal boundaries be achieved? Developing classification methods that are inherently robust to temporal boundaries, while still learning from supervised annotations, is a topic for future work.
As deep architectures reportedly outperform hand-crafted features and other classifiers, architectures that are designed to handle variations in start and end times are desired. Which temporal granularity? The Rubicon Boundaries address consistent labeling of temporal bounds, but they do not address the question of the granularity of the action. Is the action of cutting a whole tomato composed of several short cuts, or is it one long action? The Rubicon Boundaries model discusses actions relative to the goal a person wishes to accomplish. The granularity of an object interaction is another matter, and annotating the level of granularity consistently has not been addressed yet. Expanding the Rubicon Boundaries to enable annotating the granularity would require further investigation. Data Statement & Ack: Public datasets were used in this work; no new data were created as part of this study. RB annotations are available on the project's webpage. Supported by EPSRC DTP and EPSRC LOCATE (EP/N033779/1). | http://arxiv.org/abs/1703.09026v2 | {
"authors": [
"Davide Moltisanti",
"Michael Wray",
"Walterio Mayol-Cuevas",
"Dima Damen"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170327121407",
"title": "Trespassing the Boundaries: Labeling Temporal Bounds for Object Interactions in Egocentric Video"
} |
Travis Scrimshaw
School of Mathematics and Physics, The University of Queensland, St. Lucia, QLD 4072, Australia
[email protected] · https://people.smp.uq.edu.au/TravisScrimshaw/
2010 Mathematics Subject Classification: 05E10, 17B37, 05A19, 81R50, 82B23
The author was partially supported by the National Science Foundation RTG grant NSF/DMS-1148634.

We give a uniform description of the bijection Φ from rigged configurations to tensor products of Kirillov–Reshetikhin crystals of the form ⊗_i=1^N B^r_i,1 in dual untwisted types: simply-laced types and types A_2n-1^(2), D_n+1^(2), E_6^(2), and D_4^(3). We give a uniform proof that Φ is a bijection and preserves statistics. We describe Φ uniformly using virtual crystals for all remaining types, but our proofs are type-specific. We also give a uniform proof that Φ is a bijection for ⊗_i=1^N B^r_i,s_i when r_i, for all i, map to 0 under an automorphism of the Dynkin diagram. Furthermore, we give a description of the Kirillov–Reshetikhin crystals B^r,1 using tableaux of a fixed height k_r depending on r in all affine types. Additionally, we are able to describe crystals B^r,s using k_r × s shaped tableaux that are conjecturally the crystal basis for Kirillov–Reshetikhin modules for various nodes r.

Uniform description of the rigged configuration bijection
=========================================================

§ INTRODUCTION

Kashiwara began the study of crystals in the early 1990's as a method to explore the representation theory of quantum groups <cit.>. One particular application is that the highest weight elements of a tensor product of Kirillov–Reshetikhin (KR) crystals naturally index solutions of two-dimensional solvable lattice models obtained using Baxter's corner transfer matrix <cit.>. Kerov, Kirillov, and Reshetikhin introduced combinatorial objects called rigged configurations that naturally index solutions to the Bethe ansatz for the isotropic Heisenberg spin model <cit.>. Moreover, the row-to-row transfer matrices can be described by tensor products of KR crystals. This suggests a link between rigged configurations and highest weight elements of a tensor product of KR crystals.

This link was formalized by Kerov, Kirillov, and Reshetikhin, who constructed a bijection Φ between the tensor product (B^1,1)^⊗ N in type A_n^(1) and the corresponding rigged configurations. This was extended to the general case ⊗_i=1^N B^r_i, s_i in type A_n^(1) in <cit.>, and it was soon conjectured that there exists an analogous bijection in all affine types. For the remaining non-exceptional types, such a bijection for (B^1,1)^⊗ N was given in <cit.>, and for type E_6^(1) in <cit.>. Many other special cases are also known: the ⊗_i=1^N B^1, s_i case for non-exceptional types <cit.>; the ⊗_i=1^N B^r_i, 1 case for types D_n+1^(2), A_2n^(2), and C_n^(1) <cit.>, and for type D_n^(1) <cit.>; the case B^r,s for type D_n^(1) <cit.> and other non-exceptional types <cit.>; and both types of tensor products are known for type D_4^(3) <cit.>. Recently, the general case for type D_n^(1) was proven <cit.>, followed soon thereafter by all non-exceptional types <cit.>. Additionally, the bijection Φ was extended to a crystal isomorphism for the full crystal in type A_n^(1) in <cit.> and to a classical crystal isomorphism for type D_n^(1) in <cit.> and for type A_2n-1^(2) in <cit.>.

Despite being defined recursively, which obscures many of its properties, the bijection Φ has many remarkable (conjectural) attributes.
There is a natural statistic defined on tensor products of KR crystals, called energy, that arose from the related statistical mechanics; however, energy is an algebraic statistic whose computation requires using the very intricate combinatorial R-matrix. On the rigged configuration side, there is a combinatorial statistic called cocharge, which also comes from the related physics, and Φ sends energy to cocharge (after interchanging riggings and coriggings). This gives a combinatorial proof of the X = M conjecture of <cit.>. We recall that the X side comes from the sum over the classically highest weight elements of tensor products of KR crystals and is related to the one-point function of 2D lattice models. Additionally, the M side is summed over highest weight rigged configurations and is related to solutions to the Bethe equations of the Heisenberg spin chain. Moreover, the combinatorial R-matrix gets sent to the identity map under Φ. Because of these properties, rigged configurations in type A_n^(1) describe the action-angle variables of box-ball systems <cit.>, which are an ultradiscrete version of the Korteweg-de Vries (KdV) equation. More specifically, the partition ν^(1) describes the sizes of the solitons when there is no interaction <cit.>. A tropicalization of a ratio of (cylindric) loop Schur functions is conjectured to describe Φ for box-ball systems <cit.>, and Φ^-1 can be described using the τ function from the Kadomtsev–Petviashvili (KP) hierarchy <cit.>. Generalizations of box-ball systems, soliton cellular automata <cit.>, are also believed to have deep connections with rigged configurations. In type A_n^(1), the state energy was related to rigged configurations <cit.>. There are many properties of rigged configurations that are known to be uniform. A crystal structure on rigged configurations was first given for simply-laced types in <cit.>, which was then extended to a classical crystal structure for U_q'(𝔤)-crystals of affine types <cit.> and to highest weight crystals and B(∞) for general Kac–Moody algebras in <cit.>. Furthermore, the ∗-involution on B(∞) <cit.> is the map that replaces all riggings with their respective coriggings <cit.>. In <cit.>, the bijection Φ was also extended (uniformly) to describe a bijection between the rigged configurations and marginally large tableaux <cit.> for B(∞). Similarly, there are also uniform descriptions of KR crystals of the form B = ⊗_i=1^N B^r_i,1 using the alcove path model (up to non-dual Demazure arrows) <cit.> and quantum and projected level-zero LS paths <cit.>. This is based upon the work of Kashiwara <cit.>, where B is a crystal basis of the tensor product of the corresponding KR modules and is constructed by projecting the crystal basis of a level-zero extremal weight module. A uniform model of extremal level-zero crystals using Nakajima monomials was given in <cit.>, but the projection onto B was done type-by-type. The connection of (resp. Demazure) characters of B with (resp. non-symmetric) Macdonald polynomials was given in <cit.> (resp. <cit.>). KR crystals also have a number of other additional properties. Their characters (resp. q-characters in the sense of <cit.>) give solutions to Q-systems (resp. T-systems) <cit.>. The existence and combinatorial structure of B^r,s were given for non-exceptional types in <cit.> and in a few other special cases <cit.>. Existence for types G_2^(1) and D_4^(3) was recently proven in <cit.>.
KR crystals are conjectured to be perfect in general, which is known for non-exceptional types <cit.> and some other special cases <cit.>. While many special cases of the conjectured bijection Φ are known (as mentioned above), the description of Φ is given in a type-by-type fashion, meaning that there is no natural extension to the other exceptional types. The original goal of this paper was to extend Φ to ⊗_i=1^N B^r_i, 1 for types E_6,7,8^(1) by using the crystal graph, which was first explicitly used by Okado–Sano <cit.> for ⊗_i=1^N B^1,1 in type E_6^(1). However, it soon became apparent that our description of Φ could be given uniformly for dual untwisted types, and moreover, the proofs given here are uniform. Using this, we are able to prove a number of special cases of the X = M conjecture in all exceptional types, where there has otherwise been very little progress <cit.>. Explicitly, the core of our main result is a description of Φ when the basic map δ removes the left-most factor B^r,1, where Λ̄_r is either a minuscule weight (Section <ref>, Lemma <ref>) or the highest (short) root[Our results also include type A_n^(1), where we instead have B^1,1⊗ B^n,1 as the atomic object.] (i.e., it is the perfect crystal of <cit.>, or B(Λ̄_r) is the ("little") adjoint representation) (Section <ref>, Lemma <ref>). We then extend the bijection to B = ⊗_i=1^N B^r_i,1 (Section <ref>, Proposition <ref>). As stated above, the description and proof of this are uniform for all dual untwisted types. For the remaining types, we give a uniform description using virtual crystals (Section <ref>), and while our proof is essentially uniform, it does contain some type-specific arguments. However, the last part of our main results is a uniform proof that the virtualization map commutes with the bijection Φ (Theorem <ref>). We show that these descriptions of δ are equivalent to those described in <cit.> (in particular, proving the conjectural description of Φ in <cit.>) (Section <ref>). As a secondary result, we provide further evidence for the conjecture that KR crystals B^r,s in the exceptional types correspond to crystal bases of KR modules, and for the X = M conjecture, by showing that the fermionic formula agrees with the conjectured decompositions of <cit.>. We also describe the so-called Kirillov–Reshetikhin (KR) tableaux for B^r,1 in types E_6,7,8^(1), E_6^(2), F_4^(1), and G_2^(1) (Section <ref>). For certain r, we describe the KR tableaux for B^r,s and show that Φ gives a bijection for the single tensor factor. We are further able to extend our bijection to ⊗_i=1^N B^r_i,s_i when Λ̄_{r_i} is a minuscule weight, by using the tableaux description given in <cit.> (Theorem <ref>). Specifically, the tableaux can be thought of as single rows that are weakly increasing with entries in B(Λ̄_{r_i}), which is naturally considered as a poset. Moreover, the proofs that we give are also uniform. This is a generalization of the results of <cit.>. Our results are evidence that there should be a natural bijection between rigged configurations and the aforementioned models for KR crystals. Additionally, they suggest that there should be a uniform description of the U_q'(𝔤)-crystal structure on rigged configurations by considering the Demazure subcrystal of B(ℓΛ_0) following <cit.>.
Furthermore, our results and our proof techniques are further evidence that the map η, which replaces riggings with coriggings in our setting of rigged configurations and is key in the proof that Φ preserves statistics, is connected with the ∗-involution on B(∞). Our results also give a uniform description of the combinatorial R-matrix in the cases we consider (extend Remark <ref> to all of our results), for which a uniform description was given in the alcove path model in <cit.>, but our proof is type-specific. Our results also give further evidence that rigged configurations are intimately connected with the Weyl chamber geometry. Indeed, as rigged configurations are well-behaved under virtualization, the results of <cit.> give the first evidence. Moreover, the fact that our results hold for the types where the fundamental alcove is translated to another alcove by precisely α_a (with some slight modifications needed for type A_2n^(2)) is further evidence. This is additionally emphasized by our result showing that Φ intertwines with the virtualization map, extending results of <cit.>. Additionally, the related results of <cit.> for the untwisted non-simply-laced types appear to be related to our descriptions when the rigged partition (ν, J)^(a) is scaled by the coefficient of α_a required to translate the fundamental alcove. Making this explicit would lead to a completely uniform description of Φ and would more strongly link rigged configurations to the underlying geometry. §.§ Summary of new results Recall that we consider the case ⊗_i=1^N B^r_i,1 and the case ⊗_i=1^N B^r_i,s_i, where r_i is a minuscule node for all i. Our results for the rigged configuration bijection and a combinatorial proof of the X = M conjecture are new for all exceptional types, with the exception of (B^r,1)^⊗ N for r = 1,6 (the case r = 6 is implicit in <cit.> by the diagram symmetry) and of ⊗_i=1^N B^1,s_i and ⊗_i=1^N B^r_i,1 in type D_4^(3). Furthermore, our description of the bijection for single columns in type A_n^(1) and spin columns in type D_n^(1) is new, and it significantly reduces the number of steps needed to compute the bijection Φ. In addition, we note that our proofs are now done almost uniformly. §.§ Organization This paper is organized as follows. In Section <ref>, we describe the necessary background on crystals, KR crystals, rigged configurations, and the bijection Φ. In Section <ref>, we describe the map δ for minuscule nodes for dual untwisted types. In Section <ref>, we describe the map δ for the adjoint node for dual untwisted types. In Section <ref>, we extend the left-box map for dual untwisted types. In Section <ref>, we show that the map δ for untwisted types is well-defined by using the virtualization map to the corresponding dual type. In Section <ref>, we give our results and proofs. In Section <ref>, we show that our description of Φ is the same as the KSS bijections. In Section <ref>, we describe the highest weight rigged configurations and KR tableaux for B^r,s in a number of different cases. In Section <ref>, we give our concluding remarks.§.§ Acknowledgements The author would like to thank Masato Okado for the reference <cit.> and useful discussions. The author would like to thank Anne Schilling for useful discussions. The author would like to thank Ben Salisbury for comments on an early draft of this paper. The author would like to thank the referee for their useful comments.
This work benefited from computations using SageMath <cit.>. The majority of this work was done while the author was at the University of Minnesota.§ BACKGROUNDIn this section, we provide the necessary background.§.§ Crystals Let 𝔤 be an affine Kac–Moody Lie algebra with index set I, Cartan matrix (A_ab)_a,b ∈ I, simple roots (α_a)_a ∈ I, simple coroots (α_a^∨)_a ∈ I, fundamental weights (Λ_a)_a ∈ I, weight lattice P, coweight lattice P^∨, and canonical pairing ⟨·,·⟩: P^∨ × P → ℤ given by ⟨α_a^∨, α_b⟩ = A_ab. We note that we follow the labeling given in <cit.> (see Figure <ref> for the exceptional types and their labellings). Let 𝔤_0 denote the canonical simple Lie algebra given by the index set I_0 = I ∖ {0}. Let Λ̄_a and ᾱ_a denote the natural projections of Λ_a and α_a, respectively, onto the weight lattice P̄ of 𝔤_0. Note that (ᾱ_a)_a ∈ I_0 are the simple roots of 𝔤_0. Let c_a and c_a^∨ denote the Kac and dual Kac labels <cit.>. We define t_a := max(c_a/c_a^∨, c_0^∨), t_a^∨ := max(c_a^∨/c_a, c_0). Note that t_a^∨ for type 𝔤 equals t_a for the dual type of 𝔤. Let T_a denote the translation factors: the smallest factors such that T_a α_a maps the fundamental polygon (the fundamental domain of the action of the root lattice, or the image of the fundamental alcove under the corresponding finite Weyl group) to another polygon. Note that T_a = t_a except for type A_2n^(2) (resp. A_2n^(2)†), where we have T_n = 1/2 (resp. T_0 = 1/2) and T_a = 1 otherwise. We have T_a = 1 for all a ∈ I except in the cases mentioned above and the following: T_n = 2 in type B_n^(1), T_a = 2 for a ≠ 0,n in type C_n^(1), T_3 = T_4 = 2 in type F_4^(1), and T_1 = 3 in type G_2^(1). The null root is δ = ∑_a ∈ I c_a α_a, and the canonical central element is c = ∑_a ∈ I c_a^∨ α_a^∨. The normalized (symmetric) invariant form (·|·): P × P → ℚ is defined by (α_a | α_b) = (c_a^∨/c_a) A_ab. We write a ∼ b if A_ab ≠ 0 and a ≠ b; in other words, the nodes a and b are adjacent in the Dynkin diagram of 𝔤.For 𝔤 not of type A_n^(1), let N_𝔤 denote the unique node such that N_𝔤 ∼ 0, which we call the adjoint node. We say a node r ∈ I_0 is special if there exists a diagram automorphism ϕ: I → I such that ϕ(0) = r. We say a node r ∈ I_0 is minuscule if it is special and 𝔤 is of dual untwisted affine type.An abstract U_q(𝔤)-crystal is a set B with crystal operators e_a, f_a: B → B ⊔ {0}, for a ∈ I, and a weight function wt: B → P that satisfy the following conditions, where the statistics ε_a, φ_a: B → ℤ_{≥0} are given by ε_a(b) := max{k | e_a^k b ≠ 0}, φ_a(b) := max{k | f_a^k b ≠ 0}:(1) φ_a(b) = ε_a(b) + ⟨α_a^∨, wt(b)⟩ for all b ∈ B and a ∈ I.(2) f_a b = b' if and only if b = e_a b' for b, b' ∈ B and a ∈ I.We say an element b ∈ B is highest weight if e_a b = 0 for all a ∈ I. Defineε(b) = ∑_a ∈ I ε_a(b) Λ_a, φ(b) = ∑_a ∈ I φ_a(b) Λ_a.The abstract crystals we consider in this paper are sometimes called regular or seminormal in the literature. We call an abstract U_q(𝔤)-crystal B a U_q(𝔤)-crystal if B is the crystal basis of some U_q(𝔤)-module. Kashiwara has shown that the irreducible highest weight module V(λ) admits a crystal basis <cit.>. We denote this crystal basis by B(λ), and we let u_λ ∈ B(λ) denote the highest weight element, which is the unique element of weight λ. Recall that B(λ) is connected. A U_q(𝔤_0)-crystal B(Λ̄_r) is a minuscule representation if the corresponding finite Weyl group acts transitively on B(Λ̄_r).
In particular, the U_q(𝔤_0)-crystal B(Λ̄_r) is a minuscule representation if and only if r is a minuscule node.We define the tensor product of abstract U_q(𝔤)-crystals B_1 and B_2 as the crystal B_2 ⊗ B_1 that is the Cartesian product B_2 × B_1 with the crystal structuree_a(b_2 ⊗ b_1) = e_a b_2 ⊗ b_1 if ε_a(b_2) > φ_a(b_1), and b_2 ⊗ e_a b_1 if ε_a(b_2) ≤ φ_a(b_1);f_a(b_2 ⊗ b_1) = f_a b_2 ⊗ b_1 if ε_a(b_2) ≥ φ_a(b_1), and b_2 ⊗ f_a b_1 if ε_a(b_2) < φ_a(b_1);ε_a(b_2 ⊗ b_1) = max(ε_a(b_1), ε_a(b_2) - ⟨α_a^∨, wt(b_1)⟩),φ_a(b_2 ⊗ b_1) = max(φ_a(b_2), φ_a(b_1) + ⟨α_a^∨, wt(b_2)⟩),wt(b_2 ⊗ b_1) = wt(b_2) + wt(b_1).Our tensor product convention is opposite to that of Kashiwara <cit.>. Let B_1 and B_2 be two abstract U_q(𝔤)-crystals. A crystal morphism ψ: B_1 → B_2 is a map B_1 ⊔ {0} → B_2 ⊔ {0} with ψ(0) = 0 such that the following properties hold for all b ∈ B_1: (1) If ψ(b) ∈ B_2, then wt(ψ(b)) = wt(b), ε_a(ψ(b)) = ε_a(b), and φ_a(ψ(b)) = φ_a(b).(2) We have ψ(e_a b) = e_a ψ(b) if ψ(e_a b) ≠ 0 and e_a ψ(b) ≠ 0.(3) We have ψ(f_a b) = f_a ψ(b) if ψ(f_a b) ≠ 0 and f_a ψ(b) ≠ 0.An embedding (resp. isomorphism) is a crystal morphism such that the induced map B_1 ⊔ {0} → B_2 ⊔ {0} is an embedding (resp. bijection). A crystal morphism is strict if it commutes with all crystal operators.In type E_n for n = 6,7, we follow <cit.> and label elements b ∈ B(Λ̄_n) (and in B(Λ̄_1) for type E_6) by X ⊂ {1, 1̄, 2, 2̄, …, n, n̄}, where we have a ∈ X (resp. ā ∈ X) if and only if φ_a(b) = 1 (resp. ε_a(b) = 1). To ease notation, we write X as a word in the alphabet {1, 1̄, …, n, n̄}. See Figure <ref> and Figure <ref> for examples. We follow the same notation for an element of any minuscule representation. §.§ Kirillov–Reshetikhin crystals Let U_q'(𝔤) = U_q([𝔤, 𝔤]) denote the quantum group of the derived subalgebra of 𝔤. Denote the corresponding weight lattice by P' = P/ℤδ; therefore, there is a linear dependence relation on the simple roots in P'. As we will not be considering U_q(𝔤)-crystals in this paper, we will abuse notation and also denote the U_q'(𝔤)-weight lattice by P. If B is a U_q'(𝔤)-crystal, then we say b ∈ B is classically highest weight if e_a b = 0 for all a ∈ I_0.Kirillov–Reshetikhin (KR) crystals are the conjectural crystal bases corresponding to an important class of finite-dimensional irreducible U_q'(𝔤)-modules known as Kirillov–Reshetikhin (KR) modules <cit.>. We denote a KR module and crystal by W^r,s and B^r,s, respectively, where r ∈ I_0 and s ∈ ℤ_{>0}. KR modules are classified by their Drinfel'd polynomials, and W^r,s corresponds to the minimal affinization of B(sΛ̄_r) <cit.>. In <cit.>, it was shown that KR modules in all non-exceptional types admit crystal bases, whose combinatorial structure is given in <cit.>. For the exceptional types, this has been done in a few special cases <cit.>. Recently, existence was established in general for types G_2^(1) and D_4^(3) in <cit.>. Moreover, a uniform model for B^r,1 was given using quantum and projected level-zero LS paths <cit.>.We note that there is a unique element u(B^r,s) ∈ B^r,s, called the maximal element, which is characterized by being the unique element of classical weight sΛ̄_r (it is also classically highest weight, so ε_a(u(B^r,s)) = 0 and φ_a(u(B^r,s)) = s δ_{a,r} for all a ∈ I_0). Furthermore, it is known that tensor products of KR crystals are connected <cit.> and that the maximal vector of B ⊗ B' is given by the tensor product of the maximal elements u ⊗ u'.
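The tensor product rule above is purely combinatorial, so it can be sketched directly in code. The following minimal Python sketch implements e_a and f_a on a two-fold tensor product b_2 ⊗ b_1 in this paper's convention; the element interface (methods e, f, epsilon, phi) is an assumption of ours.

```python
# Minimal sketch of the tensor product rule for b_2 (x) b_1 in this paper's
# convention (opposite to Kashiwara's).  Elements are assumed to provide
# e(a) and f(a) (returning None in place of 0) and epsilon(a), phi(a).
def tensor_e(a, b2, b1):
    if b2.epsilon(a) > b1.phi(a):          # e_a acts on the left factor
        x = b2.e(a)
        return None if x is None else (x, b1)
    x = b1.e(a)                            # otherwise on the right factor
    return None if x is None else (b2, x)

def tensor_f(a, b2, b1):
    if b2.epsilon(a) >= b1.phi(a):         # f_a acts on the left factor
        x = b2.f(a)
        return None if x is None else (x, b1)
    x = b1.f(a)                            # otherwise on the right factor
    return None if x is None else (b2, x)
```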
We define the unique U_q'(𝔤)-crystal morphism R: B ⊗ B' → B' ⊗ B, called the combinatorial R-matrix, by R(u ⊗ u') = u' ⊗ u, where u and u' are the maximal elements of B and B' respectively.We say a KR crystal B^r,1 is minuscule if r is a minuscule node. We note that B^r,s ≅ B(sΛ̄_r) as U_q(𝔤_0)-crystals if r is a special node. §.§ Virtual crystals We recall the definition of a virtual crystal from <cit.>. Let ϕ: Γ̂ → Γ denote a folding of the Dynkin diagram Γ̂ of 𝔤̂ onto the Dynkin diagram Γ of 𝔤 that arises from the natural embeddings <cit.>C_n^(1), A_2n^(2), A_2n^(2)†, D_n+1^(2) ⟶ A_2n-1^(1), B_n^(1), A_2n-1^(2) ⟶ D_n+1^(1), F_4^(1), E_6^(2) ⟶ E_6^(1), G_2^(1), D_4^(3) ⟶ D_4^(1).For ease of notation, if X is an object for type 𝔤, we denote the corresponding object for type 𝔤̂ by X̂. By abuse of notation, let ϕ: Î → I also denote the corresponding map on the index sets. We define the scaling factors γ = (γ_a)_a ∈ I by γ_a = (max_{b ∈ I} T_b)/T_a. Note that if |ϕ^{-1}(a)| > 1, then γ_a = 1. See Table <ref>.Furthermore, we have a natural embedding Ψ: P → P̂ given by Λ_a ↦ γ_a ∑_{b ∈ ϕ^{-1}(a)} Λ̂_b, and we note this induces a similar embedding on the root lattice, α_a ↦ γ_a ∑_{b ∈ ϕ^{-1}(a)} α̂_b, with Ψ(δ) = c_0 γ_0 δ̂. Let B̂ be a U_q'(𝔤̂)-crystal and V ⊆ B̂. Let ϕ and (γ_a)_a ∈ I be the folding and the scaling factors given above. The virtual crystal operators (of type 𝔤) are defined ase^v_a := ∏_{b ∈ ϕ^{-1}(a)} ê_b^{γ_a}, f^v_a := ∏_{b ∈ ϕ^{-1}(a)} f̂_b^{γ_a}.A virtual crystal is the quadruple (V, B̂, ϕ, (γ_a)_a ∈ I) such that V has an abstract U_q'(𝔤)-crystal structure defined bye_a := e^v_a, f_a := f^v_a, ε_a := γ_a^{-1} ε̂_b, φ_a := γ_a^{-1} φ̂_b, wt := Ψ^{-1} ∘ ŵt,for any b ∈ ϕ^{-1}(a). When there is no danger of confusion, we simply denote the virtual crystal by V. We say B virtualizes in B̂ if there exists a U_q'(𝔤)-crystal isomorphism v: B → V for some virtual crystal, and we call the resulting isomorphism a virtualization map.It is straightforward to see that virtual crystals are closed under direct sums. Moreover, they are closed under tensor products. Virtual crystals form a tensor category. Furthermore, KR crystals are conjecturally well-behaved under virtualization: there exists a virtualization map from the KR crystal B^r,s intoB̂^{r,s} = B̂^{n,s} ⊗ B̂^{n,s} if 𝔤 = A_2n^(2), A_2n^(2)† and r = n, and B̂^{r,s} = ⊗_{r' ∈ ϕ^{-1}(r)} B̂^{r', γ_r s} otherwise. Conjecture <ref> is known for all non-exceptional types <cit.> and for B^r,1 in all other types, unless r = 2 and 𝔤 is of type F_4^(1) <cit.>. A more uniform proof of Conjecture <ref> for some special cases using projected level-zero LS paths was given in <cit.>; in particular, for B^r,1 when γ_r = 1. §.§ Adjoint crystals We recall the construction of certain level 1 perfect crystals from <cit.>. Define θ := δ/c_0 - α_0, so that θ = (c_1 α_1 + c_2 α_2 + ⋯ + c_n α_n)/c_0. Let Δ = {wt(b) | b ∈ B(θ)} ∖ {0}. We denote the vertices of the U_q(𝔤_0)-crystal B(θ) by{x_α | α ∈ Δ} ⊔ {y_a | a ∈ I_0, α_a ∈ Δ},and the a-arrows of B(θ) are given by * x_α → x_β if and only if α - α_a = β, or* x_{α_a} → y_a → x_{-α_a}.The (classical) weight function is given by wt(x_α) = α and wt(y_a) = 0. Let Δ^± := Δ ∩ P̄^±, and we say wt(b) > 0 if wt(b) ∈ Δ^+ and wt(b) < 0 if wt(b) ∈ Δ^-.If 𝔤 is of untwisted type, then B(θ) is the crystal of the adjoint representation and Δ is the set of all roots of 𝔤_0. For 𝔤 of twisted type, B(θ) is the "little" adjoint representation of 𝔤_0, with highest weight the highest short root of 𝔤_0 and with Δ the set of all short roots of 𝔤_0. For type A_2n^(2), there does not exist an a ∈ I_0 such that α_a ∈ Δ. For type A_2n^(2)†, we have B(θ) = B(Λ̄_1) of type B_n.
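The a-arrows of B(θ) are simple enough to sketch directly. The following minimal Python sketch computes f_a on a vertex of B(θ); the encoding of roots as coordinate tuples and the vertex layout are our own choices, not notation from the sources above.

```python
# Minimal sketch of the a-arrows of B(theta) described above: vertices are
# ('x', beta) for beta in Delta and ('y', a) for a in I_0 with alpha_a in Delta.
def sub(beta, alpha):
    return tuple(x - y for x, y in zip(beta, alpha))

def f_arrow(vertex, a, Delta, alpha):
    if vertex[0] == 'y':                      # y_a --a--> x_{-alpha_a}
        zero = (0,) * len(alpha[a])
        return ('x', sub(zero, alpha[a])) if vertex[1] == a else None
    beta = vertex[1]
    if beta == alpha[a]:                      # x_{alpha_a} --a--> y_a
        return ('y', a)
    target = sub(beta, alpha[a])              # x_beta --a--> x_{beta - alpha_a}
    return ('x', target) if target in Delta else None
```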
For more information on finite (crystallographic) root systems, we refer the reader to standard texts such as <cit.>. Let ∅ be the unique element of B(0). We then define a U_q'(𝔤)-crystal B^θ,1 by the classical decompositionB^θ,1 ≅ B(θ) if 𝔤 is of type A_2n^(2)†, and B^θ,1 ≅ B(θ) ⊕ B(0) otherwise,with 0-arrows * x_α → x_β if and only if α + θ = β and α, β ≠ ±θ, or* x_{-θ} → ∅ → x_θ.The weight is given by requiring it to be level zero. That is to say, we let wt(b) be the lift of the classical weight of b under the natural lift Λ̄_a → Λ_a, for all a ∈ I_0, plus k_0 Λ_0, where k_0 is such that ⟨c, wt(b)⟩ = 0. Thus we haveB^θ,1 ≅ B^n,1 ⊗ B^1,1 if 𝔤 is of type A_n^(1), B^θ,1 ≅ B^{N_𝔤,2} if 𝔤 is of type C_n^(1), and B^θ,1 ≅ B^{N_𝔤,1} otherwise. In the construction of B^θ,1, we can consider ∅ = y_0. Thus, for type A_2n^(2)†, as there are no elements x_{α_0} nor x_{-α_0}, we do not include y_0. This reflects the fact that A_2n^(2)† is isomorphic to A_2n^(2) after forgetting the choice of the affine node. There exist higher level analogs B^θ,s, where, as classical crystals, we have B^θ,s ≅ ⊕_{k=0}^s B(kθ). The U_q'(𝔤)-crystal structure is currently known for all non-exceptional types <cit.> and for type D_4^(3) <cit.>. However, the U_q'(𝔤)-crystal structure is not known in general, much less uniformly. One potential approach might be to generalize the approach of <cit.> by examining various embeddings of B^θ,s-1 into B^θ,s, where the difficulty is in overcoming the fact that the multiplicities of the weights of B(kθ) that do not appear in B((k-1)θ) are not all 1 in general. §.§ Rigged configurations For this section, we assume that 𝔤 is not of type A_2n^(2) or A_2n^(2)†, for simplicity of the exposition. However, the analogous statements, with the necessary modifications for these types, may be found in <cit.>.Denote ℋ_0 := I_0 × ℤ_{>0}. Consider a tensor product of KR crystals B = ⊗_{i=1}^N B^{r_i, s_i}. A configuration ν = (ν^(a))_{a ∈ I_0} is a sequence of partitions. Let m_i^(a) denote the multiplicity of i in ν^(a). Define the vacancy numbers asp_i^(a)(ν; B) = ∑_{j ∈ ℤ_{>0}} L_j^(a) min(i, j) - (1/t_a^∨) ∑_{(b,j) ∈ ℋ_0} (ᾱ_a | ᾱ_b) min(t_b υ_a i, t_a υ_b j) m_j^(b) = ∑_{j ∈ ℤ_{>0}} L_j^(a) min(i,j) - ∑_{b ∈ I_0} (A_ab/γ_b) ∑_{j ∈ ℤ_{>0}} min(γ_a i, γ_b j) m_j^(b),where L_s^(r) equals the number of factors B^r,s that occur in B andυ_a = 2 if a = n and 𝔤 = C_n^(1); 1/2 if a = n and 𝔤 = B_n^(1); 1/2 if a = 3,4 and 𝔤 = F_4^(1); 1/3 if a = 1 and 𝔤 = G_2^(1); and 1 otherwise.When there is no danger of confusion, we will simply write p_i^(a). Note that when t_a = 1 for all a ∈ I_0, we have-p_{i-1}^(a) + 2 p_i^(a) - p_{i+1}^(a) = L_i^(a) - 2m_i^(a) - ∑_{b ∼ a} A_ab m_i^(b).Moreover, when m_i^(a) = 0, we have the convexity inequalityp_{i-1}^(a) + p_{i+1}^(a) ≤ 2 p_i^(a), or equivalently -p_{i-1}^(a) + 2p_i^(a) - p_{i+1}^(a) ≥ 0.The values (υ_a)_{a ∈ I_0} arise from a different convention for rigged configurations than that used in, e.g., <cit.>. A B-rigged configuration is a pair (ν, J), where ν is a configuration and J = (J_i^(a))_{(a,i) ∈ ℋ_0} is such that J_i^(a) is a multiset of integers x ≤ p_i^(a)(ν; B) with |J_i^(a)| = m_i^(a) for all (a, i) ∈ ℋ_0. When B is clear, we call a B-rigged configuration simply a rigged configuration. A highest weight rigged configuration is a rigged configuration (ν, J) such that min J_i^(a) ≥ 0 for all (a, i) ∈ ℋ_0 with m_i^(a) > 0. Let RC_hw(B) denote the set of all highest weight B-rigged configurations.The integers in J_i^(a) are called riggings or labels. The corigging or colabel of a rigging x ∈ J_i^(a) is defined as p_i^(a) - x. We note that we can associate a row of length i in ν^(a) with a rigging x, and we call such a pair (i, x) a string.
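Concretely, the vacancy numbers can be computed directly from the data (m, L). The following minimal Python sketch does so in the case γ_a = 1 for all a (e.g., the simply-laced case), following the second displayed formula above; the dictionary layout is our own choice.

```python
# Minimal sketch of the vacancy numbers p_i^{(a)} when gamma_a = 1 for all a
# (e.g., simply-laced types).  A is the Cartan matrix of g_0 indexed by I_0,
# m[b][j] is the multiplicity of j in nu^{(b)}, and L[a][j] is the number of
# factors B^{a,j} occurring in B.
def vacancy(a, i, A, m, L, I0):
    p = sum(mult * min(i, j) for j, mult in L.get(a, {}).items())
    for b in I0:
        p -= A[a][b] * sum(mult * min(i, j) for j, mult in m.get(b, {}).items())
    return p
```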
We identify each row of the partition ν^(a) with its corresponding string. We say a row (or string) is singular if p_i^(a) = x. We say a row (or string) is quasisingular if p_i^(a) = x + 1 and there does not exist a singular row of length i.Next, let RC(B) denote the closure of RC_hw(B) under the following crystal operators. Fix a rigged configuration (ν, J) and a ∈ I_0. For simplicity, we assume there exists a row of length 0 in ν^(a) with rigging 0. Let x = min{min J_i^(a) | i ∈ ℤ_{>0}}, i.e., the smallest rigging in (ν, J)^(a). e_a: If x = 0, then define e_a(ν, J) = 0. Otherwise, remove a box from the smallest row with rigging x, replace that rigging with x + 1, and change all other riggings so that the coriggings remain fixed. The result is e_a(ν, J). f_a: Add a box to the largest row with rigging x, replace that rigging with x - 1, and change all other riggings so that the coriggings remain fixed. The result is f_a(ν, J), unless the result is not a valid rigged configuration, in which case f_a(ν, J) = 0.We can extend this to a full U_q(𝔤_0)-crystal structure on RC(B) bywt(ν, J) = ∑_{(a, i) ∈ ℋ_0} i (L_i^(a) Λ̄_a - m_i^(a) ᾱ_a).We note that⟨α_a^∨, wt(ν, J)⟩ = p_∞^(a) = ∑_{j ∈ ℤ_{>0}} j L_j^(a) - ∑_{b ∈ I_0} A_ab |ν^(b)|,and we can extend the classical weight wt: RC(B) → P̄ to the affine weight wt: RC(B) → P as in Equation (<ref>).Let B be a tensor product of KR crystals. Fix some (ν, J) ∈ RC(B), and let X_{(ν, J)} denote the closure of (ν, J) under e_a and f_a for all a ∈ I_0. Then X_{(ν, J)} ≅ B(λ), where λ = wt(ν, J). Furthermore, we have the following way to compute the statistics ε_a and φ_a on a rigged configuration.Let x = min{min J_i^(a) | i ∈ ℤ_{>0}}, i.e., the smallest rigging in (ν, J)^(a). We haveε_a(ν, J) = -x, φ_a(ν, J) = p_∞^(a) - x. In particular, Proposition <ref> states that we could equivalently define f_a(ν, J) = 0 if and only if φ_a(ν, J) = p_∞^(a) - x = 0.We will also need the complement rigging involution η: RC(B) → RC(B^rev), where B^rev is B in the reverse order. The map η is given on highest weight rigged configurations by replacing each rigging with its corresponding corigging, and it is then extended as a U_q(𝔤_0)-crystal isomorphism.Additionally, rigged configurations are known to be well-behaved under virtualization <cit.>. We construct a virtualization map v: RC(B) → RC(B̂), where the virtual rigged configuration (ν̂, Ĵ) := v(ν, J) is given bym̂_{γ_a i}^(b) = m_i^(a), Ĵ_{γ_a i}^(b) = γ_a J_i^(a), for all b ∈ ϕ^{-1}(a). Moreover, we havep̂_{γ_a i}^(b) = γ_a p_i^(a)for all b ∈ ϕ^{-1}(a). §.§ Statistics We now describe two important statistics that arise from mathematical physics. The first is defined on tensor products of KR crystals and the second on rigged configurations.Let B^r,s and B^r',s' be KR crystals of type 𝔤. The local energy function H: B^r,s ⊗ B^r',s' → ℤ is defined as follows. Let b̃' ⊗ b̃ = R(b ⊗ b'), and define the following conditions: (LL) e_0(b ⊗ b') = e_0 b ⊗ b' and e_0(b̃' ⊗ b̃) = e_0 b̃' ⊗ b̃;(RR) e_0(b ⊗ b') = b ⊗ e_0 b' and e_0(b̃' ⊗ b̃) = b̃' ⊗ e_0 b̃.The local energy function is given byH(e_i(b ⊗ b')) = H(b ⊗ b') + (-1 if i = 0 and (LL) holds; +1 if i = 0 and (RR) holds; 0 otherwise),and it is known that H is uniquely defined up to an additive constant <cit.>. We normalize H by the condition H(u(B^r,s) ⊗ u(B^r',s')) = 0.Next consider B^r,s, and let b^♯ be the unique element such that φ(b^♯) = ℓΛ_0, where ℓ = min{⟨c, φ(b)⟩ | b ∈ B^r,s}. We then define D_{B^r,s}: B^r,s → ℤ, following <cit.>, byD_{B^r,s}(b) = H(b ⊗ b^♯) - H(u(B^r,s) ⊗ b^♯). Let B = ⊗_{i=1}^N B^{r_i,s_i}.
We define the energy <cit.> D: B → ℤ byD = ∑_{1 ≤ i < j ≤ N} H_i R_{i+1} R_{i+2} ⋯ R_{j-1} + ∑_{j=1}^N D_{B^{r_j,s_j}} R_1 R_2 ⋯ R_{j-1},where R_i and H_i are the combinatorial R-matrix and local energy function, respectively, acting on the i-th and (i+1)-th factors, and D_{B^{r_j,s_j}} acts on the rightmost factor. Note that D is constant on classical components, since H is and since R is a U_q'(𝔤)-crystal isomorphism.For rigged configurations, we define a statistic called cocharge as follows. First consider a configuration ν, and define the cocharge of ν bycc(ν) = (1/2) ∑_{(a,i), (b,j) ∈ ℋ_0} (ᾱ_a | ᾱ_b) min(t_b υ_a i, t_a υ_b j) m_i^(a) m_j^(b).To obtain the cocharge of a rigged configuration (ν, J), we add all of the riggings to cc(ν):cc(ν, J) = cc(ν) + ∑_{(a,i) ∈ ℋ_0} t_a^∨ ∑_{x ∈ J_i^(a)} x.Moreover, it is known that cocharge is invariant under the classical crystal operators: fixing a classical component X_{(ν, J)} as given in Theorem <ref>, the cocharge cc is constant on X_{(ν, J)}. Let q be an indeterminate. The one-dimensional sum is defined asX(B, λ; q) = ∑_{b ∈ 𝒫(B; λ)} q^{D(b)},where 𝒫(B; λ) denotes the set of classically highest weight elements of B of classical weight λ. The fermionic formula is defined asM(B, λ; q) = ∑_{ν ∈ 𝒞(B; λ)} q^{cc(ν)} ∏_{(a, i) ∈ ℋ_0} [ m_i^(a) + p_i^(a); m_i^(a) ]_q,where 𝒞(B; λ) is the set of all B-configurations of classical weight λ and [ a; b ]_q is the usual q-binomial coefficient. Note that J_i^(a) of a highest weight rigged configuration can be considered as a partition in a p_i^(a) × m_i^(a) box for all (a, i) ∈ ℋ_0. Thus we can writeM(B, λ; q) = ∑_{(ν, J) ∈ RC_hw(B; λ)} q^{cc(ν, J)}.Now we recall the X = M conjecture of <cit.>:[To obtain the formulas of <cit.>, we need to substitute q = q^{-1}.] for B a tensor product of KR crystals of type 𝔤, we haveX(B, λ; q) = M(B, λ; q).Consider a virtualization map v: B → B^v. We first define the virtual combinatorial R-matrix R^v: B^v ⊗ B'^v → B'^v ⊗ B^v as the restriction of the type 𝔤̂ combinatorial R-matrix to the image of v. We note that it is not clear that R^v is well-defined, but this follows for B^r,1 ⊗ B^r',1 in dual untwisted types from the results of <cit.>. Thus, we may define virtual analogs of (local) energy and cocharge byH^v(b ⊗ b') := γ_0^{-1} H(v(b) ⊗ v(b')), D^v(b) := γ_0^{-1} D(v(b)), cc^v(ν, J) := γ_0^{-1} cc(v(ν, J)).Hence, we defineX^v(B, λ; q) = ∑_{b ∈ 𝒫(B; λ)} q^{D^v(b)}, M^v(B, λ; q) = ∑_{(ν, J) ∈ RC_hw(B; λ)} q^{cc^v(ν, J)}. Let B^v be a virtual crystal of B. Then we haveD^v(b) = D(b), cc^v(ν, J) = cc(ν, J).Moreover, we haveX^v(B, λ; q) = X(B, λ; q), M^v(B, λ; q) = M(B, λ; q).§.§ Kleber algorithm These results will be used in Section <ref>.We first recall the Kleber algorithm <cit.> for when 𝔤 is an affine type such that 𝔤_0 is simply-laced. For x, y ∈ P̄^+, let d_{xy} := x - y. Let B be a tensor product of KR crystals of type 𝔤. The Kleber tree T(B) is a tree whose nodes are given by weights in P̄^+ and whose edges are labeled by d_{xy} ∈ Q̄^+ ∖ {0}; it is constructed recursively as follows. Begin with T_0 being the tree consisting of a single node of weight 0, and perform the following steps starting with ℓ = 1. (K1) Let T'_ℓ be obtained from T_{ℓ-1} by adding ∑_{a=1}^n Λ̄_a ∑_{i ≥ ℓ} L_i^(a) to the weight of each node.(K2) Construct T_ℓ from T'_ℓ as follows. Let x be a node at depth ℓ - 1. Suppose there is a weight y ∈ P̄^+ such that d_{xy} ∈ Q̄^+ ∖ {0}; if x is not the root, then, letting w be the parent of x, we additionally require d_{wx} - d_{xy} ∈ Q̄^+ ∖ {0}. For all such y, attach y as a child of x.(K3) If T_ℓ ≠ T_{ℓ-1}, then repeat from (K1); otherwise terminate and return T(B) = T_ℓ.Now we convert the tree to highest weight rigged configurations as follows.
Let x be a node at depth p in T(B), and let x_0, x_1, …, x_p = x be the weights of the nodes on the path from the root of T(B) to x. The resulting configuration ν is given bym_i^(a) = (x_{i-1} - 2 x_i + x_{i+1} | Λ̄_a),where we use the convention that x_j = x for all j > p. We then take the riggings over all possible values between 0 and p_i^(a).We can reformulate the construction of the configuration ν in the following ways. Suppose d_{x_{i-1} x_i} = ∑_{b ∈ I_0} k_i^(b) ᾱ_b for all 1 ≤ i ≤ p. Then there are k_i^(a) rows of length i in ν^(a). Equivalently, ν^(a) equals the transpose of the partition (k_1^(a), k_2^(a), …, k_p^(a)); that is, we stack a column of height k_i^(a) for each i. When 𝔤_0 is of non-simply-laced type, we use the virtual Kleber algorithm <cit.>, working with virtual rigged configurations.The virtual Kleber tree is defined from the Kleber tree of B̂ in the ambient type, but we only add a child y to x in step (K2) if the following conditions are satisfied: (V1) (y | α_b) = (y | α_b') for all b, b' ∈ ϕ^{-1}(a).(V2) For all a ∈ I_0, if ℓ - 1 ∉ γ_a ℤ, then the coefficient of α_a in d_{wx} and in d_{xy}, where w is the parent of x, must be equal.Let T̂(B) be the resulting tree, which we call the ambient tree. Let γ = max_{a ∈ I} γ_a. We now select those nodes which satisfy either: (A1) y is at depth ℓ ∈ γℤ, or(A2) (d_{xy} | Λ̄_a) = 0 for every a such that 1 < γ = γ_a, where x is the parent of y.We construct the final rigged configurations from the selected nodes by devirtualizing the (virtual) rigged configurations obtained from the usual Kleber algorithm, i.e., those satisfying Equation (<ref>) (note that Equation (<ref>) is satisfied by (V1) and (V2)).§.§ KSS-type bijectionIn this section, we describe the (conjectural) KSS-type bijection Φ: RC(B) → B.Let B be a tensor product of KR crystals. We consider B expressed in terms of the so-called Kirillov–Reshetikhin (KR) tableaux of <cit.>. KR tableaux, generally speaking, are r × s rectangular tableaux filled with entries of B^1,1 and determined by their classically highest weight elements. Following <cit.>, we define the map Φ: RC(B) → B recursively as a composition of the mapsδ: B^1,1 ⊗ B^∙ → B^∙, b ⊗ b^∙ ↦ b^∙;lb: B^r,1 ⊗ B^∙ → B^1,1 ⊗ B^{r-1,1} ⊗ B^∙ (r ≠ 1), [b_1; ⋮; b_{r-1}; b_r] ⊗ b^∙ ↦ [b_r] ⊗ [b_1; ⋮; b_{r-1}] ⊗ b^∙;ls: B^r,s ⊗ B^∙ → B^r,1 ⊗ B^{r,s-1} ⊗ B^∙ (s ≥ 2), [b_11 b_12 ⋯ b_1s; ⋮ ⋮ ⋱ ⋮; b_r1 b_r2 ⋯ b_rs] ⊗ b^∙ ↦ [b_11; ⋮; b_r1] ⊗ [b_12 ⋯ b_1s; ⋮ ⋱ ⋮; b_r2 ⋯ b_rs] ⊗ b^∙,where B^∙ is a tensor product of KR crystals, together with their corresponding maps on rigged configurations. We do not explicitly recall the map δ on rigged configurations here, as it strongly depends upon the type; instead, we later give a more uniform description of the map. We refer the reader to <cit.> for the explicit (type-dependent) descriptions. Note that δ is currently only described/known for the non-exceptional affine types, type E_6^(1), and type D_4^(3).[A map for when the left factor is B^2,1 of type E_6^(1) was conjectured in <cit.>.] Moreover, δ is the key component of the bijection. On rigged configurations, the map lb is given, for all non-exceptional types, by adding a singular row of length 1 to ν^(a) for all a < r, and the map ls is the identity map.We recall and consolidate some of the conjectures given in <cit.>, which had been known to experts prior.Let B be a tensor product of KR crystals of affine type. Then Φ: RC(B) → B is conjecturally a (classical) crystal isomorphism such that D ∘ Φ ∘ η = cc and such that the diagramRC(B)  --Φ-->  B|𝕀              |RRC(B') --Φ-->  B'commutes, where B' is a reordering of the factors of B.
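Conjecture <ref> can be experimented with directly in SageMath, whose rigged configuration package implements Φ along these lines; the class and method names below are assumptions in the sense that they may differ between SageMath versions.

```python
# SageMath sketch of the bijection Phi on highest weight elements.
RC = RiggedConfigurations(['D', 4, 1], [[2, 1], [1, 1]])   # RC(B^{2,1} x B^{1,1})
for rc in RC.module_generators:                            # highest weight RCs
    t = rc.to_tensor_product_of_kirillov_reshetikhin_tableaux()  # Phi
    assert t.to_rigged_configuration() == rc               # Phi is invertible
    print(rc.cocharge())                                   # the statistic cc(nu, J)
```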
When we restrict Φ to classically highest weight elements, this gives a combinatorial proof of the X = M conjecture of <cit.>.In type A_n^(1), it was shown in <cit.> that Conjecture <ref> holds on classically highest weight elements, and as such, the analogous (conjectural) bijections are known as KSS-type bijections. This was an extension of the pioneering work of Kerov, Kirillov, and Reshetikhin in <cit.>, where they showed that Conjecture <ref> is true for classically highest weight elements in the special case B = (B^1,1)^⊗ N of type A_n^(1). It was shown that Φ is a classical crystal isomorphism in type A_n^(1) in <cit.> and a U_q'(𝔤)-crystal isomorphism in <cit.>. Furthermore, Conjecture <ref> is known to (partially) hold in many different special cases for classically highest weight elements across the non-exceptional types <cit.>, and recently some progress has been made in the exceptional types <cit.>. Furthermore, in <cit.>, it was shown that Φ is a classical crystal isomorphism in type D_n^(1). Recently, the general case for type D_n^(1) was proven <cit.>, followed by all non-exceptional types <cit.>.As we will need it later on, we recall the general steps of the proof that Φ is a statistic preserving bijection on highest weight elements for B = ⊗_{k=1}^N B^{r_k,1}, i.e., when s_k = 1 for all k. Let (ν, J) ∈ RC(B; λ). Define (ν̃, J̃) = δ(ν, J), and let r be the return value of δ. Denote by β_1^{(r_N)} and β̃_1^{(r_N)} the lengths of the first columns of ν^{(r_N)} and ν̃^{(r_N)}, respectively.There are five things that need to be verified to show that Φ is a statistic preserving bijection on classically highest weight elements for B = B^{r_N,1} ⊗ B^∙, where B^∙ is a tensor product of KR crystals: *λ - wt(r) is dominant.*δ(ν, J) ∈ RC(B^∙; λ - wt(r)).*r can be appended to (ν̃, J̃) to give (ν, J).*For N ≥ 2, we havecc(ν, J) - cc(ν̃, J̃) = (t_{r_N}^∨/c_0^∨) β_1^{(r_N)} - χ(b_N = ∅). *For N ≥ 2, we haveH(b_N ⊗ b_{N-1}) = (t_{r_N}^∨/c_0^∨) (β_1^{(r_N)} - β̃_1^{(r_N)}) - χ(b_N = ∅) + χ(b_{N-1} = ∅), where χ(S) is 1 if the statement S is true and 0 otherwise.We remark that Equation (<ref>) and Equation (<ref>) are those given in <cit.>.Next, we will need dual notions of the maps δ, lb, and ls acting on the right, which we denote by δ^∗, rb, and rs. First, we recall the definition of Lusztig's involution ∗: B(λ) → B(λ), the unique U_q(𝔤_0)-crystal involution satisfying(e_i b)^∗ = f_{ξ(i)} b^∗, (f_i b)^∗ = e_{ξ(i)} b^∗, wt(b^∗) = w_0 wt(b),where w_0 is the long element of the Weyl group of 𝔤_0 and ξ: I_0 → I_0 is defined by w_0 ᾱ_i = -ᾱ_{ξ(i)} and w_0 Λ̄_i = -Λ̄_{ξ(i)}. In particular, Lusztig's involution sends the highest weight element to the lowest weight element. We extend Lusztig's involution to an involution B^r,s → B^r,s by defining ξ(0) = 0, requiring Equation (<ref>), and sending the maximal element to the minimal element, the unique element of weight -wt(u(B^r,s)). We can also extend Lusztig's involution to tensor products by the natural isomorphism(B_2 ⊗ B_1)^∗ ≅ B_1^∗ ⊗ B_2^∗given by (b_2 ⊗ b_1)^∗ = b_1^∗ ⊗ b_2^∗. Then we define, on classically highest weight elements,δ^∗ := ↑ ∘ ∗ ∘ δ ∘ ∗, rb := ∗ ∘ lb ∘ ∗, rs := ∗ ∘ ls ∘ ∗,where ↑(b) is the classically highest weight element corresponding to b. By considering ♢ := ↑ ∘ ∗, we also haveδ^∗ = ♢ ∘ δ ∘ ♢, rb = ♢ ∘ lb ∘ ♢, rs = ♢ ∘ ls ∘ ♢.We then extend these maps as classical crystal isomorphisms.§ MINUSCULE DELTA FOR DUAL UNTWISTED TYPESIn this section, we describe the map δ used to construct Φ for minuscule fundamental weights when 𝔤 is of dual untwisted affine type.
More explicitly, we restrict ourselves to simply-laced affine types and types A_2n-1^(2) and D_n+1^(2), as types D_4^(3) and E_6^(2) do not contain any minuscule fundamental weights. Note that for these types, we have t_a = 1 for all a ∈ I.We construct the map δ_r: B^r,1 ⊗ B^∙ → B^∙, where B^∙ is a tensor product of KR crystals and Λ̄_r is a minuscule weight of type 𝔤_0 (i.e., r is a special node), as follows. Start at b_1 = u_{Λ̄_r}, and set ℓ_0 = 1. Consider step j. From b_j, let ℓ_j denote a minimal i_a ≥ ℓ_{j-1} (where a ∈ I_0 also varies) such that f_a b_j ≠ 0 and ν^(a) has a singular row of length i_a that has not been previously selected. If no such row exists, terminate, set ℓ_{j'} = ∞ for all j' ≥ j, and return b_j. Otherwise, select such a row in ν^(a) and repeat the above with b_{j+1} := f_a b_j.We form the new rigged configuration by removing a box from each row selected by δ_r, making the resulting rows singular, and keeping all other rows the same. A schematic sketch of this procedure is given after the following example.Consider type D_5^(1) and B = B^5,1 ⊗ B^4,1 ⊗ B^1,1 ⊗ B^5,1. See Figure <ref> for the crystal graphs of B(Λ̄_4) and B(Λ̄_5) and Figure <ref> for the crystal graph of B(Λ̄_1). We compute the bijection as follows.[Figure: the sequence of rigged configurations obtained by successively applying δ_5, δ_4, δ_1, and δ_5, where at each step the boxes removed are shaded and labeled ℓ_1, ℓ_2, …; the four applications return 4, 1, 4, and 5, respectively, and the final rigged configuration is empty.]At each step, we have labeled the sequence of boxes that are removed under δ_r. Recall that we are using the notation for minuscule nodes, so for an element b, any a ∈ b (resp. ā ∈ b) corresponds to φ_a(b) = 1 (resp. ε_a(b) = 1), and all other values are 0.
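As promised above, here is a schematic Python-flavored sketch of the selection procedure of δ_r; it is pseudocode rather than an executable implementation, and singular_row is a hypothetical helper returning the shortest not-yet-selected singular row of ν^(a) of length at least a given bound, or None if no such row exists.

```python
# Schematic sketch of delta_r: b walks down the minuscule crystal B(Lambda_r)
# while singular rows of minimal admissible length are selected in nu.
def delta_r(rc, b, I0):
    ell, selected = 1, []
    while True:
        cand = {}
        for a in I0:
            if b.f(a) is not None:                    # f_a b != 0
                row = singular_row(rc, a, ell, selected)
                if row is not None:
                    cand[a] = row
        if not cand:
            # form the new rigged configuration: remove one box from each
            # selected row and make the resulting rows singular
            return b, selected                        # return value of delta_r
        a = min(cand, key=lambda c: cand[c].length)   # minimal i_a, giving ell_j
        selected.append((a, cand[a]))
        ell = cand[a].length
        b = b.f(a)                                    # b_{j+1} := f_a b_j
```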
Using the sequence of returned elements from the preceding example, we obtain that Φ maps the initial rigged configuration displayed above to 4 ⊗ 1 ⊗ 4 ⊗ 5.§ ADJOINT DELTA FOR DUAL UNTWISTED TYPESIn this section, we describe the map δ_θ := δ_{N_𝔤} for the adjoint node N_𝔤 in dual untwisted types (i.e., t_a = 1 for all a ∈ I; equivalently, 𝔤 is of simply-laced affine type or of type A_2n-1^(2), D_n+1^(2), D_4^(3), or E_6^(2)). Furthermore, we give a uniform proof that Φ is a statistic preserving bijection.We define the map δ_θ: B^θ,1 ⊗ B^∙ → B^∙, where B^∙ is a tensor product of KR crystals, by the following algorithm. Begin with r_1 = u_θ, the highest weight element of B(θ) ⊆ B^θ,1, and set ℓ_0 = 1. Consider step j such that r_j = x_β, where β > 0 and β ≠ α_a for all a ∈ I_0. From r_j, consider any outgoing arrow labeled by a and find the minimal i_a ≥ ℓ_{j-1} such that ν^(a) has a singular row of length i_a which has not been previously selected. If no such row exists, terminate, set ℓ_{j'} = ∞ for all j' ≥ j and ℓ̄_{j'} = ∞ for all j', and return r_j. Otherwise, select such a row, set ℓ_j = min_a i_a, and repeat the above with r_{j+1} := f_{a'} r_j, where a' is such that i_{a'} = min_a i_a. If r_j = x_{α_a} for some a ∈ I_0, we do one of the following disjoint cases, discarding all previously selected (singular) rows. (S) If there exists a singular row of length i_a ≥ max{ℓ_{j-1}, 2},[Note that if ℓ_{j-1} = 1 and there exists a singular row of length 1, then we would not be in this case, as i_a = 1 < max{ℓ_{j-1}, 2} = 2.] select such a row and set ℓ_j = i_a.(E) If there exists a singular row of length 1 and ℓ_{j-1} = 1, we terminate, set ℓ_j = 1 and ℓ̄_{j'} = ∞ for all j', and return ∅.(Q) If there exists a quasisingular row of length i_a ≥ ℓ_{j-1}, we select the quasisingular string and set ℓ_j = i_a.(T) Otherwise, we terminate, set ℓ_j = ℓ̄_{j'} = ∞ for all j', and return x_{α_a}.If the process has not terminated, set r_{j+1} := y_a and perform the following. Set ℓ̄_0 = ℓ_h, where h = ∑_{a ∈ I_0} c_a is the height of θ, i.e., the number of steps we have currently done. Consider step j, and consider any outgoing arrow labeled by a from r_j. Find the minimal i_a ≥ ℓ̄_{j-1} such that ν^(a) has a singular row of length i_a such that (D) it had been selected at step j' with ℓ_{j'} = i_a, or(N) it had not been previously selected and Case (D) does not occur.If no such row exists, terminate, set ℓ̄_{j'} = ∞ for all j' ≥ j, and return r_j. Otherwise, select such a row, set ℓ̄_j = min_a i_a, redefine ℓ_{j'} := ℓ_{j'} - 1 if Case (D) occurred, and repeat the above with r_{j+1} := f_{a'} r_j, where a' is such that i_{a'} = min_a i_a.We form the new rigged configuration by (1) removing a box from each row each time it was selected by δ_θ (i.e., if Case (D) occurred, then we remove 2 boxes);(2) making the resulting rows singular, unless Case (Q) occurred, in which case we make the row selected by ℓ̄_1 (if ℓ̄_1 ≠ ∞) quasisingular; and(3) keeping all other rows the same. Note that the same row cannot be selected twice by Case (D), due to the redefinition of ℓ_{j'}. We clearly cannot have more than one x_{α_a} in this process: since the α_a are simple roots, there is no directed path between them by the crystal axioms. This is the (conjectural) map δ of bin Mohammad <cit.> for 𝔤 of type E_6^(1). Moreover, this was the map δ for 𝔤 of type D_n+1^(2) in <cit.> and of type D_4^(3) in <cit.>. We can extend this description to types C_n^(1), A_2n^(2), and A_2n^(2)†.
Indeed, since B^θ,1 for type A_2n^(2) (resp. type A_2n^(2)†) does not contain any elements y_a for a ∈ I_0 (resp. a = 0), as noted in Remark <ref> (resp. Remark <ref>), we modify the definition of δ_θ by removing Case (Q) (resp. Case (E)) as a possibility. Likewise, for type C_n^(1) we do not have y_a for any a ∈ I, so we modify the definition of δ_θ by removing both Case (Q) and Case (E) and combining Case (S) with the first Case (D) (think of performing these steps simultaneously to do x_{α_1} → x_{-α_1}); we also need to consider the parts of ν^(n) doubled, as per Remark <ref>. We have the following classification of elements in B(θ), which will be used to describe the KR tableaux of types E_8^(1) and E_6^(2).Let 𝔤 be of simply-laced or twisted type, and fix some b ∈ B(θ). Then b has the following properties. * b is uniquely determined by ε(b) and φ(b).* wt(b) = 0 if and only if there exists a unique i ∈ I_0 such that ε_i(b) = φ_i(b) = 1 and ε_j(b) = φ_j(b) = 0 for all j ≠ i.* ε_i(b) = 2 implies ε_j(b) = 0 for all j ≠ i.* φ_i(b) = 2 implies φ_j(b) = 0 for all j ≠ i. This follows from the description of B(θ). Thus, similar to types E_6,7, we can encode our elements of B(θ) by multisets from {1, 1̄, 2, 2̄, …, n, n̄}, which, as above, we write as words.Consider type E_6^(2) and B = (B^1,1)^⊗ 4. See Figure <ref> for the corresponding classical crystal B(Λ̄_1). We compute the bijection as follows.[Figure: the rigged configurations at each step, with the boxes selected by δ_θ labeled ℓ_1, ℓ_2, …; the four applications of δ_θ return 1, 2, ∅, and 1, respectively.]As with Example <ref>, we label the sequence of boxes removed under δ_θ, but in our labeling here we have ℓ_k = ℓ̄_{k-h} for all k ≥ h. Note that for the first (resp. third) application of δ_θ, we used Case (Q) (resp. Case (E)) when at x_{α_1} = 211. Hence, the result of applying Φ is the element 1 ⊗ 2 ⊗ ∅ ⊗ 1. Consider type E_6^(2) and B = (B^1,1)^⊗ 3. We compute the bijection as follows.[Figure: the rigged configurations at each step, with the boxes selected by the first application of δ_θ labeled ℓ_1, …, ℓ_8; the first application of δ_θ returns 1.]The second application of δ_θ is similar and also returns 1, and the final one returns 1. Note that in the examples above we are in Case (Q) when performing δ_θ, as we disregarded the previously selected singular row in ν^(1) (as in Example <ref>). Hence, the result of applying Φ is the element 1 ⊗ 1 ⊗ 1. Consider type E_6^(2) and B = (B^1,1)^⊗ 3. We compute the bijection as follows.[Figure: the rigged configurations at each step, with the boxes selected by the first application of δ_θ labeled ℓ_1, …, ℓ_16.]The last two applications of δ_θ return 1. In this example, we are in Case (S) when at x_{α_2} = 22, and the remaining strings are then selected according to Case (D). Hence, the result of applying Φ is b ⊗ 1 ⊗ 1, where b denotes the element returned by the first application of δ_θ. § EXTENDING THE LEFT-BOX MAPIn this section, we describe a generalization of the left-box map in order to give a tableau description of the crystals B^r,1 for dual untwisted types.
To do so, we first construct -diagrams, which are digraphs on I_0 such that * every node has at most one outgoing edge,* there is a unique sink σ, and* each arrow rr' is labeled by b ∈ B(_σ) such that ε_a(b) = δ_ar' and φ_a(b) = δ_ar.For a fixed -diagram D with an arrow rr', we define the left-box map on rigged configurations (B^r,1) →(B^σ,1⊗ B^r',1) as follows. Let e_a_1 e_a_2⋯ e_a_m b = u__σ, and define the image of (ν, J) as the rigged configuration obtained by adding a singular row of length 1 to ν^(a_i) for all 1 ≤ i ≤ m. By weight considerations, the map is well-defined since the result is independent of the choice of path from b to the highest weight element. Note that we can consider δ∘lb to be the same procedure as δ except starting at b.Next, we define lb on B^r,1 by requiring that the square with rows (B^r,1) →(B^σ,1⊗ B^r',1) and B^r,1→ B^σ,1⊗ B^r',1, and with both vertical maps given by Φ, commutes, where again σ is the unique sink in the -diagram. In particular, we note that lb is a strict U_q(_0)-crystal embedding. Therefore, we define Kirillov–Reshetikhin (KR) tableaux as the tableaux given by iterating the lb map, where the entries are elements of B^σ,1 and the classical crystal structure is induced by the reverse column reading word. See Appendix <ref> for the description of B^r,1 in types E_6,7,8^(1) and E_6^(2).For example, given lb B^r,1→ B^σ,1⊗ B^σ,1 corresponding to an arrow r σ, we can construct the tableau (x,y), where x, y ∈ B^σ,1, from its image under lb, which is (y) ⊗(x). We also note that the construction of the KR tableaux depends upon the choice of -diagram.We then extend the left-box map to B^r,1⊗ B^∙→ B^σ,1⊗ B^r',1⊗ B^∙, with respect to the -diagram D, as the strict U_q(_0)-crystal embedding given by b ⊗ b^∙↦lb(b) ⊗ b^∙. We note that this is a generalization of the lb map used in the KSS bijection. Specifically, for the non-exceptional types, the defining -diagram is[baseline=-4, xscale=1.3](1) at (0,0) 1;(2) at (2,0) 2;(3) at (4,0) ⋯;(4) at (6,0) n-1;(5) at (8,0) n; [->] (2) – (1) node[midway, above]1; [->] (3) – (2) node[midway, above]2; [->] (4) – (3) node[midway, above]n-2; [->] (5) – (4) node[midway, above]n-1; . For type E_6^(1), we use the -diagram[baseline=-4](1) at (0,0) 1;(2) at (4,1) 2;(3) at (2,0) 3;(4) at (4,0) 4;(5) at (6,1) 5;(6) at (2,1) 6; [->] (3) – (1) node[midway, below]3; [->] (4) – (3) node[midway, below]4; [->] (2) – (6) node[midway, above]2 6;[->] (5) – (2) node[midway, above]5; [->] (6) – (1) node[midway, above left]6; .(Note that the edges are labeled by the elements given in Figure <ref>.) We have chosen the -diagram to minimize the distance from node r to σ, and so that each edge label b has minimal depth from u__σ. In type E_6^(1) for B^6,1⊗ B^6,1, given(ν, J) = [scale=.35,baseline=-18] at (0,-1.5) ∅;[xshift=4cm]100[xshift=8cm]100[xshift=12cm]1,10,00,0[xshift=16cm] 1,10,00,0[xshift=20cm]1,10,00,0,its image under the left-box map is[scale=.35,baseline=-18]1,10,00,0[xshift=4cm]1,10,00,0[xshift=8cm] 1,1,10,0,00,0,0[xshift=12cm]1,1,1,10,0,0,00,0,0,0[xshift=16cm] 1,1,10,0,00,0,0[xshift=20cm]1,10,00,0,which is in (B^1,1⊗ B^1,1⊗ B^6,1). In particular, we added 2 singular rows of length 1 to each of ν^(1), ν^(3), ν^(4) and 1 such row to each of ν^(2), ν^(5) since 1 = e_1 e_3 e_4 e_2 e_5 e_4 e_3 e_1(6 ). Note that 6 comes from the edge 16 in the -diagram.
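In code, the box-adding step of the left-box map used in this example might be sketched as follows; the list `string_to_highest` of letters a_1, …, a_m with e_a_1 ⋯ e_a_m b = u__σ is assumed to be supplied by the crystal, and the dict-of-lists layout for ν is a hypothetical convention.

```python
def left_box(nu, string_to_highest, vacancy):
    """Sketch of lb on the configuration data: add a singular row of
    length 1 to nu[a_i] for every letter a_i on a path from the edge
    label b up to the highest weight element."""
    for a in string_to_highest:
        # the new row is made singular: its rigging equals its vacancy
        # number; the vacancy numbers themselves are unchanged by lb
        # (this invariance is verified later in the paper)
        nu[a].append({'length': 1,
                      'rigging': vacancy(a, 1),
                      'selected': False})
    return nu
```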
Thus, we obtainΦ((ν, J) ) = (Φ(ν, J) ) = [baseline=-2pt][matrix of math nodes,column sep=-.4, row sep=-.5,text height=10,text width=10,align=center,inner sep=3] [draw];; ⊗[baseline=-2pt][matrix of math nodes,column sep=-.4, row sep=-.5,text height=10,text width=10,align=center,inner sep=3] [draw]1;; ⊗[baseline][matrix of math nodes,column sep=-.4, row sep=-.5,text height=10,text width=10,align=center,inner sep=3] [draw]1;[draw]6;; , Φ(ν, J) = [baseline][matrix of math nodes,column sep=-.4, row sep=-.5,text height=10,text width=10,align=center,inner sep=3] [draw]1;[draw];; ⊗[baseline][matrix of math nodes,column sep=-.4, row sep=-.5,text height=10,text width=10,align=center,inner sep=3] [draw]1;[draw]6;; . We could alternatively use the -digram for type E_6^(1) by having the edge 32 instead of 62. However, this results in different KR tableaux.Using δ_6, we define the ^∨-diagram for type E_6^(1) by[baseline=-4](1) at (2,1) 1;(2) at (4,1) 2;(3) at (6,1) 3;(4) at (4,0) 4;(5) at (2,0) 5;(6) at (0,0) 6; [->] (1) – (6) node[midway, above left]1; [->] (2) – (1) node[midway, above]2; [->] (3) – (2) node[midway, above]3; [->] (4) – (5) node[midway, below]4; [->] (5) – (6) node[midway, below]5; .Note that this is a usual -diagram, but we name it in parallel to the contragredient dual (recall B(_1)^∨ = B(_6) and we can define δ_1^∨ = δ_6).For type E_7^(1), the definition of left-box we use is given by the -diagram[baseline=-4](1) at (2,1) 1;(2) at (4,1) 2;(3) at (6,1) 3;(4) at (6,0) 4;(5) at (4,0) 5;(6) at (2,0) 6;(7) at (0,0) 7; [->] (6) – (7) node[midway, below]6; [->] (5) – (6) node[midway, below]5 6; [->] (4) – (5) node[midway, below]4 5; [->] (2) – (1) node[midway, above]2; [->] (1) – (7) node[midway, above left]1; [->] (3) – (2) node[midway, above]3; .We note that other -diagrams are possible, but we use the one in (<ref>) for its similarity to (<ref>).In type E_7^(1) for B^4,1, consider the rigged configuration(ν, J) = [scale=.35,anchor=top,baseline=-18]100[xshift=4cm]1,10,00,0[xshift=8cm]1,11,01,1[xshift=12cm]1,1,1,10,0,0,00,0,0,0[xshift=16cm]1,1,10,0,00,0,0[xshift=20cm]1,10,00,0[xshift=24cm]100.Note that e_7 e_6 e_5(4 ) = 7, and so we have(ν, J) = [scale=.35,anchor=top,baseline=-18]100[xshift=4cm]1,10,00,0[xshift=8cm]1,11,01,1[xshift=12cm]1,1,1,10,0,0,00,0,0,0[xshift=16cm]1,1,1,10,0,0,00,0,0,0[xshift=20cm]1,1,10,0,00,0,0[xshift=24cm]1,10,00,0in (B^7,1⊗ B^5,1). 
Next, by applying δ_7, we remove the following boxes:[scale=.35,baseline=-18] [lightgray] (24,-3) rectangle (25,-2); [scale=.7] at (24.6, -2.5) ℓ_1;[lightgray] (20,-4) rectangle (21,-3); [scale=.7] at (20.6, -3.5) ℓ_2;[lightgray] (16,-5) rectangle (17,-4); [scale=.7] at (16.6, -4.5) ℓ_3;[lightgray] (12,-5) rectangle (13,-4); [scale=.7] at (12.6, -4.5) ℓ_4;[lightgray] (4,-3) rectangle (5,-2); [scale=.7] at (4.6, -2.5) ℓ_5;[lightgray] (8,-2) rectangle (9,-1); [scale=.7] at (8.6, -1.5) ℓ_6;[lightgray] (0,-2) rectangle (1,-1); [scale=.7] at (0.6, -1.5) ℓ_7;[lightgray] (12,-4) rectangle (13,-3); [scale=.7] at (12.6, -3.5) ℓ_8;[lightgray] (16,-4) rectangle (17,-3); [scale=.7] at (16.6, -3.5) ℓ_9;[lightgray] (20,-3) rectangle (21,-2); [scale=.7] at (20.6, -2.5) ℓ_10;[lightgray] (24,-2) rectangle (25,-1); [scale=.7] at (24.6, -1.5) ℓ_11; 100[xshift=4cm]1,10,00,0[xshift=8cm]1,11,01,1[xshift=12cm]1,1,1,10,0,0,00,0,0,0[xshift=16cm]1,1,1,10,0,0,00,0,0,0[xshift=20cm]1,1,10,0,00,0,0[xshift=24cm]1,10,00,0.Thus δ_7 returns 3 and the resulting rigged configuration (δ_7∘)(ν, J) ∈(B^5,1) is the following:[scale=.35,baseline=-18] at (0,-1.5) ∅;[xshift=3cm]100[xshift=7cm]100[xshift=11cm]1,10,00,0[xshift=15cm]1,10,00,0[xshift=19cm]100[xshift=23cm] at (0,-1.5) ∅;.Since e_7 e_6 (5 ) = 7, applyingresults in[scale=.35,baseline=-18] at (0,-1.5) ∅;[xshift=3cm]100[xshift=7cm]100[xshift=11cm]1,10,00,0[xshift=15cm]1,10,00,0[xshift=19cm]1,10,00,0[xshift=23cm]100,and applying δ_7 selects[scale=.35,baseline=-18] [lightgray] (23,-2) rectangle (24,-1); [scale=.7] at (23.6, -1.5) ℓ_1;[lightgray] (19,-3) rectangle (20,-2); [scale=.7] at (19.6, -2.5) ℓ_2;[lightgray] (15,-3) rectangle (16,-2); [scale=.7] at (15.6, -2.5) ℓ_3;[lightgray] (11,-3) rectangle (12,-2); [scale=.7] at (11.6, -2.5) ℓ_4;[lightgray] (3,-2) rectangle (4,-1); [scale=.7] at (3.6, -1.5) ℓ_5;[lightgray] (7,-2) rectangle (8,-1); [scale=.7] at (7.6, -1.5) ℓ_6;[lightgray] (11,-2) rectangle (12,-1); [scale=.7] at (11.6, -1.5) ℓ_7;[lightgray] (15,-2) rectangle (16,-1); [scale=.7] at (15.6, -1.5) ℓ_8;[lightgray] (19,-2) rectangle (20,-1); [scale=.7] at (19.6, -1.5) ℓ_9;at (0,-1.5) ∅;[xshift=3cm]100[xshift=7cm]100[xshift=11cm]1,10,00,0[xshift=15cm]1,10,00,0[xshift=19cm]1,10,00,0[xshift=23cm]100,which yields the empty rigged configuration and a return value of 1 7. Thus, iterating this, we haveΦ(ν, J) = [baseline][matrix of math nodes,column sep=-.4, row sep=-.5,text height=10,text width=15,align=center,inner sep=3] [draw]7;[draw]6;[draw]17;[draw]3;; .For type E_8^(1), the -diagram is[baseline=-4](1) at (2,1) 1;(2) at (4,1) 2;(3) at (6,1) 3;(4) at (8,0) 4;(5) at (6,0) 5;(6) at (4,0) 6;(7) at (2,0) 7;(8) at (0,0) 8; [->] (7) – (8) node[midway, below]7 8; [->] (6) – (7) node[midway, below]6; [->] (5) – (6) node[midway, below]5 6; [->] (4) – (5) node[midway, below]4 5; [->] (2) – (1) node[midway, above]2; [->] (1) – (8) node[midway, above left]1 8; [->] (3) – (2) node[midway, above]3; . For type E_6^(2), we use the -diagram[baseline=-4](1) at (0,0) 1;(2) at (2,0) 2;(3) at (4,0) 3;(4) at (2,1) 4; [->] (2) – (1) node[midway, below]2; [->] (3) – (2) node[midway, below]3; [->] (4) – (1) node[midway, above left]4; .§ UNTWISTED TYPESLetbe of type C_n^(1), F_4^(1), or G_2^(1). For these types, we note that there is a virtualization map v to the corresponding dual type ^∨ and that the scaling factors (γ_a)_a ∈ I are exactly those consideringas a folding of the corresponding simply-laced type . 
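For reference, the targets of these virtualization maps, as used here and in the remainder of this section, can be summarized as follows; this merely tabulates the text, and the dictionary format is purely illustrative.

```python
# untwisted type -> ambient type hosting the virtual crystal
VIRTUALIZATION_TARGET = {
    'C_n^(1)': 'D_{n+1}^(2)',  # the dual type
    'F_4^(1)': 'E_6^(2)',      # the dual type
    'G_2^(1)': 'D_4^(3)',      # the dual type, with 1 <-> 2 interchanged
    'B_n^(1)': 'D_{n+1}^(1)',  # embedding used here instead of A_{2n-1}^(2)
}
```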
For type G_2^(1) to D_4^(3), we also need to interchange 1 ↔ 2 due to our numbering conventions. However, for type B_n^(1), we will use the embedding into type D_n+1^(1), as it affords an easier proof than A_2n-1^(2). Using this, we construct the bijection Φ by showing that it commutes with the virtualization map to the dual untwisted type.§.§ The map δ_r. It is known that B^r,1 can be realized as a virtual crystal inside of the type-^∨ crystal given by B^n+1,1⊗ B^n,1 if the type is B_n^(1), by B^2,1 if the type is G_2^(1), and by B^r,1 otherwise. We want to define the map δ_r := v^-1∘δ^v_r ∘ v, where r and δ_r^v are given in Table <ref>. Thus, we need to show thatδ_r^v (B^r,1⊗ B^∙) →(B^∙)is well-defined when restricted to the image of v.If (ν, J) satisfies Equation (<ref>), then δ_r^v(ν, J) satisfies Equation (<ref>). Moreover, the map δ_r^v is well-defined when restricted to the image of v.We proceed by induction by examining (δ^v_r)^-1, where the base case is δ_r^v(ν, J) = (ν, J), which returns the highest weight element v(u__r). Next, we assume the claim holds when δ_r^v returns b := v(b). Fix a ∈ I_0. Let (ν', J') be the rigged configuration such that (ν, J) := δ_r^v(ν, J) = δ_r^v(ν', J') but with a return value of f_a^v b = v(f_a b).For type C_n^(1), we have that (ν', J') differs from (ν, J) by the addition of γ_a boxes to a row in ν^(a). From Equation (<ref>), all riggings J' are still multiples of γ_a' for all a' ∈I, and the claim follows.Next, we consider type F_4^(1). The case when f_a^v (b) = (f_a^v b_2) ⊗b_1 is similar to the type C_n^(1) case. Now suppose f_a^v (b) = (f_a b_2) ⊗ (f_a b_1). Note that δ^-1 for f_a b_1 starts at ν^(a), and the only singular rows in ν^(a') for γ_a' > 1 are the rows selected by δ_r^-1 by Equation (<ref>). Hence, applying δ^-1 for f_a b_2 must select those same rows in ν^(a') for γ_a' > 1, as there are sufficient singular rows in ν^(a') for γ_a' = 1 of length ℓ_i_j≤ℓ_k ≤ℓ_i_j+1 for all i_j ≤ k ≤ i_j+1, where ℓ_i_1, …, ℓ_i_q are the lengths of the rows selected of ν^(a') for fixed a' such that γ_a' > 1. We note that such rows exist because b_2 ≥b_1. Once all such rows have been paired, we are reduced to the case of b' ⊗ v(u__r) with all sufficiently long rows non-singular. Hence, the claim follows by induction.Now suppose f_a^v (b) = b_2 ⊗ (f_a^v b_1), and let (, ) and (', ') denote δ_r^-1(ν, J) obtained by adding b_1 and f_a^v b_1, respectively. Note that any row selected to obtain (', ') is at most as long as one selected to obtain (, ), and that δ_r^-1 added γ_a boxes to this row in obtaining (')^(a). Therefore, this case follows from our induction assumption for the case where the necessary rows are made to be (non-)singular but with a return value of (b).The proof for type G_2^(1) is similar. For type B_n^(1), we note that if f_a(b_2 ⊗b_1) = (f_a b_2) ⊗b_1 for a ≠ n,n+1, then we must have previously had f_a act on the right. Specifically, this is equivalent to s_a having the same sign in both columns of B(_n) ⊗ B(_n+1). Thus the proof is also similar for type B_n^(1). We note that our proof is almost type-independent, as it uses the same general technique throughout, but we require some mild type dependencies. We also note that Theorem <ref>, <cit.>, and <cit.> imply that we could define δ by considering the virtualization map of B_n^(1) into A_2n-1^(1).Instead of using the scaling factors to enlarge the partitions, we could instead consider scaling each ν^(a) by 1 / T_a. So that the partitions have integer lengths, we scale by (a multiple of) max_a T_a, whose net effect is to multiply by γ_a.
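As a sketch of the scaling just discussed: under the usual virtualization conventions (an assumption here; both the row lengths and the riggings of ν^(a) are multiplied by γ_a on every ambient node lying over a), the map on rigged configurations may be written as follows, with a hypothetical dict-of-lists data layout.

```python
def virtualize_rc(nu, J, gamma, orbit):
    """Sketch: build the virtual rigged configuration (nu_hat, J_hat).

    nu[a], J[a] -- row lengths and riggings of nu^{(a)}
    gamma[a]    -- scaling factor at node a
    orbit[a]    -- ambient (simply-laced) nodes lying over a
    """
    nu_hat, J_hat = {}, {}
    for a in nu:
        for b in orbit[a]:
            # each row (l, x) of nu^{(a)} becomes (gamma[a]*l, gamma[a]*x)
            nu_hat[b] = [gamma[a] * l for l in nu[a]]
            J_hat[b] = [gamma[a] * x for x in J[a]]
    return nu_hat, J_hat
```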
This suggests a strong relationship between the Weyl chamber geometry and rigged configurations through the bijection Φ. §.§ Defining lb and general columns The -diagram for type C_n^(1) is given by Equation (<ref>). For type B_n^(1), the -diagram we consider is[baseline=-4](n) at (0,0) n;(1) at (2,0) 1;(d) at (2,0.75) ⋮;(nm) at (2,1.5) n-1; [->] (1) – (n) node[midway, below]1; [->,dotted] (d) – (n); [->] (nm) – (n) node[midway, above left](n-1); . For type F_4^(1), we use the -diagram[baseline=-4](4) at (0,0) 4;(3) at (2,0) 3;(2) at (4,0) 2;(1) at (2,1) 1; [->] (3) – (4) node[midway, below]3; [->] (2) – (3) node[midway, below]2; [->] (1) – (4) node[midway, above left]1; . For type G_2^(1), we want to consider B(_1) as a virtual crystal inside of B(3_1) ⊆ B(_2) ⊗ B(_2). This corresponds to adding a singular row of length 1 to ν^(2), which would be of length γ_2 in γ_2 ν^(2). This allows us to construct an -diagram as the single arrow 12.§ RESULTSWe gather our results and proofs here. We first prove our results for minuscule nodes; next, we do so for the adjoint node. We then extend our results to all single-column KR crystals. In the following subsection, we collect our main results: a uniform description and proof of the rigged configuration bijection Φ and of X = M for all single-column KR tableaux in dual untwisted types. We then discuss how Φ can be extended to a bijection for all affine types by describing the relation with virtualization. We conclude this section by extending Φ and X = M uniformly to tensor products of higher level KR crystals corresponding to minuscule nodes.§.§ Minuscule nodes We assume thatis a dual untwisted type and r is a minuscule node.We note that δ_1 = δ was described by Okado and Sano <cit.> for type E_6^(1). It is also straightforward to see that δ_1 = δ in type A_n^(1) (given in <cit.>) and in types D_n^(1) and A_2n-1^(2) (given in <cit.>). We collect these results in the following theorem.Let B = ⊗_i=1^N B^1,1 be of type A_n^(1), D_n^(1), E_6^(1), or A_2n-1^(2). The map Φ(B) → B is a bijection such that Φ∘η sends cocharge to energy. We need a few facts about minuscule representations (see, e.g., <cit.>). Let = ⟨ s_a | a ∈ I_0 ⟩ be the Weyl group of _0 with s_a being the simple reflection corresponding to α_a. The cosets / _r, where _r is the parabolic subgroup generated by ⟨ s_i | i ∈ I_0 ∖{r}⟩, parameterize the elements of B(_r). Specifically, we haveB(_r) = { b_w := f_a_1⋯ f_a_ℓ u__r| w = s_a_1⋯ s_a_ℓ∈ / _r},where the elements w are the minimal length coset representatives. Furthermore, the reduced expressions of w give all paths from b_w to u__r.Let _r be a minuscule weight. Then 0 ≤ε_a(b) + φ_a(b) ≤ 1 for all b ∈ B(_r) and a ∈ I_0.The claim follows immediately from Equation (<ref>). The following lemma is the key fact for minuscule nodes; it is a generalization of <cit.>.Let λ∈ P^+. Then λ is a minuscule weight if and only if the crystal graph of B(λ) has the following properties: *Consider a path P in B(λ) such that the initial and terminal arrows have the same color a. Then either * there are exactly two arrows colored by a' and a” in P such that a' ∼ a and a”∼ a, or* there is exactly one arrow colored by a' in P such that A_aa' = -2.*Consider a length 2 path with colors (a, a') in B(λ) with a ≁a'. Then there exists a length 2 path (a', a) with the same initial and terminal vertices in B(λ). We first recall that if λ∈ P^+ is minuscule, then λ is a fundamental weight.
By Equation (<ref>), property (<ref>) is given by <cit.> and (<ref>) was shown in <cit.> for the simply-laced case and the general case by <cit.>.The conditions (2) and (4) of <cit.>, respectively, are consequences of (<ref>) and (<ref>) of Lemma <ref>, respectively, which correspond to (1) and (3) in <cit.>, respectively. Thus we have only stated the necessary properties. One important consequence of Lemma <ref> is that the result of δ_r does not depend on the choice of a' such that i_a' = min_a i_a at each step. Another is that the proof given in <cit.> holds in this generality except for a fact about the local energy function (i.e., part (<ref>)). Recall that we need to show H(b_N ⊗ b_N-1) is equal to the number of length 1 singular rows selected by δ_r in ν^(r), which implies that Φ preserves statistics. Note that we have already shown that Φ is a bijection.Next, we compute the (local) energy function on classically highest weight elements in B^r,1⊗ B^r,1.Let _r be a special node. Then for classically highest weight elements b ⊗ u_s'_r∈ B^r,s⊗ B^r,s', we have H(b ⊗ u_s'_r) equal to the number of r-arrows in the path from b to u_s_r in B(s_r).First recall that for r to be a special node, there exists a diagram automorphism ϕ such that r = ϕ(0). Therefore, if we consider the finite-type _r given by I_r := I ∖{r}, then the corresponding fundamental weight Λ̌_0 is minuscule. We note that B^r,s B(sΛ̌_0) as U_q(_r)-crystals, and the classically highest weight element in B^r,s is the I_r-lowest weight element in B(sΛ̌_0). Hence, for every classically highest weight element b ⊗ u_s'_r∈ B^r,s⊗ B^r,s', there exists a path to u_s_r⊗ u_s'_r using the crystal operators f_a, for a ∈ I ∖{r}. Moreover, the crystal operators only act on the left-most factor since φ(u_s_r) = sΛ_r. The number of 0-arrows is equal to the number of r-arrows in the path from b to u_s_r in B(s_r) because f_r f_0 b = f_0 f_r b as r ≁0. Hence, we have H(b ⊗ u_s'_r) as claimed. It remains to show the local energy function satisfies Equation (<ref>).Part (<ref>) holds forB = ⊗_i=1^N B^r, 1when _r is a minuscule weight.Note that in order for the second application of δ_r to return u__r, there must not exist any singular rows in ν^(r) after the first application of δ_r. Hence all rows selected in ν^(r) must have length 1. Thus the claim holds on classically highest weight elements of B^r,1⊗ B^r,1 by Theorem <ref>.Thus, to show this holds in the general case of b' ⊗ b, we use induction on the classical components in B^r,1⊗ B^r,1. We note that we are not applying f_r to the crystal/rigged configuration, but instead looking at how the two left-most factors differ, which results in a box being added to a row in ν^(r). Indeed, we show the claim holds by showing the additional box removed to obtain f_a(b' ⊗ b) must not have come from a length 1 row as H(b' ⊗ b) = H( f_a(b' ⊗ b) ) for all a ∈ I_0.Suppose f_r(b' ⊗ b) = b' ⊗ (f_r b) and let (ν', J') be the corresponding rigged configuration. We note that the element b' is unchanged, and so δ_r selects the same number of boxes in ν^(r) as in ν'^(r). There must be at least one more singular row in the r-th partition of δ_r(ν',J') than in δ_r(ν, J). This implies we must have removed the same number of (singular) rows of length 1 from ν^(r) and ν'^(r). Hence, the claim follows by induction.The case for a ≠ r is similar as above except the number of singular rows in the r-th partition of δ_r(ν', J') is at least the same as in δ_r(ν, J).Instead suppose f_r(b' ⊗ b) = (f_r b') ⊗ b. 
Let (ν, J) = δ_r(ν, J) = δ_r(, ) with a return value of b' ⊗ b and (f_r b') ⊗ b respectively. Thus, we have ε_r(b') ≥φ_r(b), and we must have ε_r(b') = φ_r(b) = 0 because f_r(b') ≠ 0 and Lemma <ref>. This implies that b ≠ u__r as φ_r(u__r) = 1, and so there must exist a singular string in ν^(r). Suppose we select only length 1 singular strings in ^(r), then in order for these to be singular strings in ν^(r), we must have followed twice as many a-arrows, for a ∼ r, than r-arrows. However, by Lemma <ref>, we can only select the same number of a-arrows as r-arrows. This is a contradiction, and hence, the claim follows. Thus, we collect all of our results for this section in the following.Let B = ⊗_i=1^N B^r,1, where r is a minuscule node. The mapΦ_r (B) → B,which is given by applying δ_r, is a bijection on classically highest weight elements such that Φ_r ∘η sends cocharge to energy.§.§ Adjoint nodes We assume thatis a dual untwisted type and consider the adjoint node N_.As for minuscule representations, there is a bijection between paths x_α to x_θ and reduced expressions of minimal length coset representatives in / _N_ (or / _1, n for type A_n^(1)). We also have an analog of Lemma <ref> for B(θ).Consider the pathx_α_a y_ax_-α_aas a single edge. Then (<ref>) and (<ref>) of Lemma <ref> holds for B(θ).This follows from Proposition <ref> and thatacts transitively on Δ. Letbe such that t_a = 1 for all a ∈ I. Let B = ⊗_i=1^N B^θ,1 be a tensor product of KR crystals. Then the mapΦ(B) → Bis a bijection on classically highest weight elements such that Φ∘η sends cocharge to energy. We follow the proof of the KSS-type bijection <cit.>. Recall that we have p_∞^(a) = α^∨_a(ν, J). We give our proof whennot of type A_n^(1) for simplicity, but the proof for type A_n^(1) follows by considering N_ = {1, n}. Suppose λ = λ - (r) is not dominant. Thus r ≠ y_a, ∅ as (y_a) = (∅) = 0 for all a ∈ I. Thus the only possibilities which make λ not dominant for some a is when 0 ≤α^∨_aλ < φ_a(r). Note that δ_θ terminates at ν^(a) in each of these cases, and let P_δ denote the path taken by δ_θ. Let ℓ = ν_1^(a), i.e., the largest part of ν^(a). Let u_θ denote the highest weight element in B(θ) ⊆ B^N_,1.We start by assuming ℓ = 0. Then we haveα^∨_aλ = ∑_j ∈_>0 j L_j^(a) - ∑_b ∈ I_0 ∖{a} A_abν^(b).Consider the case when α^∨_aλ = 0. If a = N_, then this is a contradiction since L_1^(N_) > 0 and A_ab≤ 0 for all b ∈ I_0 ∖{a}. Now if a ≠ N_, then r ≠ u_θ. Thus we must have removed a box from ν^(b) for some b ∼ a when performing δ_θ. So -A_abν^(b) > 0, which is a contradiction. Next, consider the case α^∨_aλ > 0. Hence, we have φ_a(r) > 1, and so r ≠ u_θ (specifically, r = x_α_a in the types we consider). Note that either * there exists a a' ∼ a such that δ_θ removes a box from ν^(a') with -A_aa'≥φ_a(r), or* there exists a', a” such that a', a”∼ a such that δ_θ removes a box from ν^(a') and ν^(a”) (if a' = a”, then two boxes are removed)from the crystal structure of B(θ). (Note this is essentially Lemma <ref>.) Thus, Equation (<ref>) impliesα^∨_aλ≥φ_a(r), which is a contradiction.Now assume that ℓ > 0. By the definition of the vacancy numbers, we have0 ≤ p_i^(a) + ∑_b ∈ I_0 A_ab m_i^(b) = α^∨_aλ.In particular, we have-p_i-1^(a) + 2 p_i^(a) - p_i+1^(a) = L_i^(a) - ∑_b ∈ I_0 A_ab m_i^(b). α^∨_aλ = 0:We note that this is the same proof of (<ref>) for ℓ > 0 as given in <cit.>.In this case, ν^(a) has a singular string of length ℓ since 0 ≥ p_ℓ^(a)≥ J_ℓ^(a)≥ 0 by convexity. Moreover, we have m_i^(b) = 0 for all b ∼ a and i > ℓ. 
If not all rows of length ℓ in ν^(a) have been selected (or doubly selected in Case (S) if we return x_α with α < 0), then we have a contradiction as we can select a row from m_ℓ. Next, by considering the smallest subpath P of P_δ ending at r and starting with an a arrow, the fact selecting every row of length ℓ implies2m_ℓ^(a)≤∑_b ∼ a A_ab m_i^(a)by Lemma <ref>. Thus, Equation (<ref>) implies that p_ℓ-1^(a) = 0 and that Equation (<ref>) is an equality. Therefore all rows of length ℓ in ν^(b) for b ∼ a are selected by δ between. Hence, when δ_θ selected the first row of length ℓ, then all rows of length at most ℓ-1 must have been selected by the definition of δ_θ, otherwise we would have selected a row of length at most ℓ-1 in μ^(b) for b ∼ a. Thus, we can proceed by induction and show that we select every row of ν^(a).Therefore, we now have0 = L_1^(a) - ∑_b ∈ I_0 A_ab m_1^(b).If a = N_, then this is a contradiction from Equation (<ref>) and L_1 ≥ 1. If a ≠ N_, then we must have selected all boxes in length 1 rows of ν^(b) for b ∼ a between the first box of a length 1 row of ν^(a). However, to get to a, we must have selected a length 1 row of ν^(b) for some b ∼ a. This is a contradiction.α^∨_aλ = 1:As above, we have φ_a(r) > 1 and r = x_α_a. By convexity, we have p_ℓ^(a)≤ p_ℓ+1^(a)≤α^∨_aλ. If p_ℓ^(a) = 1, then we have m_i^(a') = 0 for all a' ∼ a and i > ℓ because α^∨_aλ = p_i^(a) = p_ℓ^(a) and Equation (<ref>). Furthermore, every row of ν^(a) of length ℓ must be selected and singular, as otherwise δ_θ would select such a (quasi)singular row. Therefore, the previous argument holds and results in a contradiction. Hence, we can assume p_ℓ^(a) = 0.Since 0 = p_ℓ^(a) < p_ℓ+1^(a) = 1, we must have m_ℓ+1^(a') = 1 for precisely one a' ∼ a with A_aa' = -1 by Equation (<ref>). Note that there exists a singular row of length ℓ in ν^(a). If δ_θ can select a row of length ℓ from ν^(a) (whether there is a selectable row or not), then we have a contradiction as above. Therefore, we must have δ_θ selecting the row of length ℓ+1 in ν^(a'). If the row of length ℓ+1 was the first row of length ℓ+1 selected, then we would have chosen this row in ν^(a) before the row in ν^(a'), which is a contradiction. Thus suppose δ_θ previously selected a row of length ℓ+1 in ν^(a”) corresponding to some node x_α∈ B(θ). If a”∼ a, then we have p_ℓ+1^(a) > 1, which is a contradiction. We note that f_a x_α≠ 0 as the coefficient of α_a' in α is still 1. Hence, by Lemma <ref>, the map δ_θ would remove a box from ν^(a) before the one from ν^(a”). This is a contradiction. We need to show that0 ≤max J_i^(a)≤ p_i^(a)for all (a,i) ∈_0. Considering the algorithm for δ and considering the change in vacancy numbers, the only way to violate Equation (<ref>) is when the following cases occur. *There exists a singular or quasisingular string of length i in ν^(a) such that p_i^(a) - p_i^(a) = -1, -2 respectively and ℓ' ≤ i < ℓ.*We have m_ℓ-1^(a) = 0, p_ℓ-1^(a) = 0, and ℓ' < ℓ.In both cases, let ℓ be the length of the row selected by δ in ν^(a) and ℓ' be the length of the row selected immediately before ℓ in ν^(c) such that c ∼ a.We will first show that (<ref>) cannot occur. Assume a singular string occurs, then δ would have selected the string of length i, which is a contradiction. Now suppose a quasisingular string occurs which corresponds to p_i^(a) - p_i^(a) = -2, in this case we have a = 1 and again, the map δ would have selected this quasisingular string, which is a contradiction.Now we will show that (<ref>) cannot occur. 
Let t < ℓ be the largest integer such that m_t^(a) > 0, and if no such t exists, set t = 0. By the convexity condition, we have p_i^(a) = p_ℓ-1^(a) for all t < i < ℓ. Thus m_i^(c) = 0 for all c ∼ a and t ≤ i ≤ℓ. Thus ℓ' < t. If t = 0, then this is a contradiction since 1 ≤ℓ'. If t > 0, then we have p_t^(a) = 0 and the row must be singular because m_t^(a) > 0. Thus we must have t = ℓ, which is a contradiction. From the change in vacancy numbers, any string of length not of a length created from δ_θ becomes non-singular. Therefore it is easy to see that the procedure outlined for δ_θ^-1 is the inverse of δ_θ. Recall that (ν, J) = δ_θ(ν, J). We can rewrite Equation (<ref>) using Equation (<ref>) as(ν) = 1/2∑_(a,i) ∈_0t_a^∨( ∑_j ∈_>0 L_j^(a)min(i, j) - p_i^(a)) m_i^(a)Next we express (ℓ_k)_k as (ℓ_m^(a_k))_k by the m-th selection in ν^(a_k) by δ. Let m_i^(a), p_i^(a), J_i^(a), and L_i^(a) denote m_i^(a), p_i^(a), J_i^(a), and L_i^(a), respectively, after applying δ, and we have m_i^(a)= m_i^(a) + ∑_m δ_ℓ^(a)_m-1,i - δ_ℓ^(a)_m, i,L_i^(a)= L_i^(a) - δ_a, N_δ_i, 1, Let Δ( (ν, J) ) := (ν, J) - (ν, J). Next, from a straight-forward, but somewhat tedious, calculation using the changes in vacancy numbers, we obtainΔ((ν)) = ∑_(a,i) ∈_0 t_a^∨( p_i^(a) - p_i^(a)) ( m_i^(a) - ∑_m δ_ℓ_m^(a),i) - χ(r = y_a)+ t_N_^∨/c_0^∨∑_i ∈_≥ 0 m_i^(N_) - χ(r = ∅),where r is the return value of δ_θ. Moreover, from the description of δ_θ, we haveΔ(| J |) = ∑_(a,i) ∈_0 t_a^∨( p_i^(a) - p_i^(a)) ( m_i^(a) - ∑_m δ_ℓ_m^(a),i) + χ(r = y_a),where the last term is because a quasisingular string was changed into a singular string if that holds. Combining this with the fact that b_N = r, we haveΔ((ν, J)) = Δ((ν)) + Δ(| J |) = t_N_^∨/c_0^∨β_1^(N_) - χ(b_N = ∅)as desired.We recall the local energy function on B^θ,1⊗ B^θ,1 from <cit.> in Table <ref>, renormalized to our convention. We note that there are two minor typos in <cit.>, where it is stated H(x_α⊗ x_θ) = 1 and H(x_θ-α_N_⊗ x_θ) = 1 (the only difference is for types D_n+1^(2) and A_2n^(2)).Note that in order for the second application of δ_θ to return x_θ or ∅, we must not have any singular rows in ν^(N_). Hence, β_1^(N_) - β_1^(N_) is the number of N_-arrows in the path from x_θ to b' from the definition of δ_θ. We note that for every type, we have c_N_ = 2, and hence, it is straightforward to see the claim holds in this case.When b' ⊗ b is not classically highest weight, this is similar to the non-highest weight case in the proof of Lemma <ref>.We can also extend our proof to types C_n^(1), A_2n^(2), and A_2n^(2)† (recall Remark <ref>). We note that for the proofs above to hold for type A_2n^(2) and A_2n^(2)†. We have to use the (modified) scaling factors of γ_a = 1 for all a ∈ I and the matrix(A_ab)_a,b∈ I_0 = [2 -1 ; -12 -1; ⋱⋱⋱ ; -12 -1;-11 ]instead of the Cartan matrix in, e.g., Equation (<ref>) and Equation (<ref>). §.§ Onwards and upwards We have shown that δ induces a bijection Φ. Now we need to showinduces a bijection. Note thatis clearly a (classically strict) crystal embedding. Let σ denote the unique sink of the -diagram.The vacancy numbers are invariant under .Given an arrow rr', note that (b) = _r - _r'. Sinceadds rows of length 1 and the number of rows in ν^(a) corresponds to the coefficient of α_a in _σ - (b) = _σ - _r + _r', the claim follows from the definition of vacancy numbers. Suppose r ≠σ, and let rr' be the outgoing arrow of r in the -digram. Let B = B^r,1⊗ B^∙. Then we have∘Φ = Φ∘. 
Suppose the claim holds for r' (so Φ is a bijection for B^σ,1⊗ B^r',1⊗ B^∙ by induction). It is sufficient to show the claim on classically highest weight elements. Consider some classically highest weight element b ⊗ b^∙∈ B^r,1⊗ B^∙, and let (b ⊗ b^∙) = b”⊗ b' ⊗ b^∙. If b” = b_r, then b = u(B^r,1) and b' = u(B^r',1) from the definition ofand there is a unique element of maximal weight. Therefore, Φ^-1(b^∙) has the same partitions and riggings as Φ^-1(b' ⊗ b^∙) (we will see below that applying ^-1 removes the boxes added by δ_σ^-1 with b_r). Now since b' = u(B^r',1), the vacancy numbers p_i^(r') of (ν, J) := Φ^-1(b' ⊗ b^∙) are all one larger than those of Φ^-1(b^∙). Thus, there are no singular rows in ν^(r'). Hence, when we add b_r, we only add singular rows of length 1 as the first box added under δ_σ^-1(b_r) has to be to a row of length 0 in ν^(r) and δ_σ^-1 adds boxes to weakly shorter rows at each subsequent step. Hence, the result is in the image of , and hence ^-1 can be applied.Next, we proceed by induction on the depth of b”⊗ b' to b_r ⊗ u(B^r',1) (i.e., the number of e_a operators, for a ∈ I_0, one needs to apply). Suppose f_a(b”⊗ b') = f_a b”⊗ b', then the claim follows by induction as one additional box is added by δ_σ. Indeed, δ_σ^-1(f_a b”) could only have increased the number of length-one singular rows added compared to δ_σ^-1(b”) as δ_σ^-1(f_a b”) adds an extra box to some singular row in ν^(a) during its initial step and the subsequent selected rows must be weakly smaller. Thus every row selected for δ_σ^-1(f_a b”) must be weakly smaller than the corresponding one from δ_σ^-1(b”). Now instead assume f_a(b”⊗ b') = b”⊗ f_a b'. Let (ν, J) = Φ^-1(b' ⊗ b^∙) and (, ) = Φ^-1(f_a b' ⊗ b^∙). For b^†∈ B^r',1, to ease notation define δ_σ^-1(b^†) := δ_σ^-1(b_σ^†), where (b^†) = b_σ^†⊗b^†. As in the previous case, the rows selected by δ_σ^-1(f_a b') are at most as long as those added byδ_σ^-1(b'). From the tensor product rule, we must have 0 = ε_a(b”) < φ_a(b') = 1, and so the first row that we add a box for δ_σ^-1(b”) cannot be in ν^(a). By considering how the vacancy numbers change after applying δ_σ^-1(b'), the first box added by δ_σ^-1(b”) to (, ) can be at a row at most as long as the one selected by δ_σ^-1(b”) for (ν, J). Hence, the rows selected by δ_σ^-1(b”) in (, ) must be at most as long as those selected for (ν, J). Therefore, we can apply ^-1. Let B = B^r,1⊗⊗_i=1^N B^r_i, s_i. For (ν, J) ∈(B), we have( (ν, J) ) - (ν, J) = ∑_k=1^ℓ t_a_k^∨∑_j ∈_>0 L_j^(a_k) + 1/2∑_k=1^ℓ t_a_k^∨ (δ_a_k r' + δ_a_k σ - δ_a_k r),where we have the path b = f_a_1 f_a_2… f_a_ℓ u_Λ_σ for the arrow rr' in the -diagram.Recall from Equation (<ref>) that the cocharge of a configuration can be expressed as(ν) = 1/2∑_(a,i) ∈_0 t_a^∨( ∑_j ∈_>0 L_j^(a)min(i, j) - p_i^(a)) m_i^(a).Sinceadds a singular string of length 1 to ν^(a_k) for all 1 ≤ k ≤ℓ, where we have b = f_a_1 f_a_2… f_a_ℓ u_θ, for the arrow rr' in the corresponding -diagram. A straightforward tedious calculation givesK := ∑_(a,i) ∈_0 t_a^∨∑_j ∈_>0( L_j^(a)m_i^(a) - L_j^(a) m_i^(a)) min(i, j) = ∑_(a,i) ∈_0 t_a^∨ (δ_ar' + δ_a σ - δ_ar) m_i^(a)+ ∑_k=1^ℓ t_a_k^∨∑_j ∈_>0 L_j^(a_k) + ∑_k=1^ℓ t_a_k^∨ (δ_a_k r' + δ_a_k σ - δ_a_k r) = -∑_k=1^ℓ t_a_k^∨ p_1^(a_k) + 2∑_k=1^ℓ t_a_k^∨∑_j ∈_>0 L_j^(a_k)+ ∑_k=1^ℓ t_a_k^∨ (δ_a_k r' + δ_a_k σ - δ_a_k r).Note that during the derivation, we used Lemma <ref> and note that min(i, j) = 1 if either i = 1 or j = 1. 
Therefore, we have( (ν, J) ) - (ν, J) = 1/2( K - ∑_k=1^ℓ t_a_k^∨ p_1^(a_k)) + ∑_k=1^ℓ t_a_k^∨ p_1^(a_k) = ∑_k=1^ℓ t_a_k^∨∑_j ∈_>0 L_j^(a_k) + 1/2∑_k=1^ℓ t_a_k^∨ (δ_a_k r' + δ_a_k σ - δ_a_k r)as desired. We first need a type-independent proof of <cit.>, which we show for s = 1. This is essentially the same proof as the proof of <cit.>.Let B = ⊗_i=1^N B^r_i,1 for dual untwisted types, and let λ = - ∑_i=1^N Λ_ξ(r_i). Let t_λ denote the translation by λ in the extended affine Weyl group, and let t_λ = v τ, where v is an element in the affine Weyl group. Then there exists a U_q'()-crystal isomorphismjB(Λ_τ(0)) → B ⊗ B(Λ_0)such that the image of the Demazure subcrystal B_v(Λ_τ(0)) is B ⊗ u_Λ_0.Recall from <cit.> that any Littelmann path that stays in the dominant chamber with an endpoint of λ generates B(λ). The claim follows from using (projected) level-zero LS paths <cit.>, that tensor products in the Littelmann path model corresponding to concatenation of the LS paths <cit.>. Let B = ⊗_i=1^N B^r_i, 1⊗ B^r,1. For b ∈ B, we haveD( (b) ) - D(b) = ∑_k=1^ℓ t_a_k^∨( ∑_j ∈_>0 L_j^(a_k) + 1/2 (δ_a_k r' + δ_a_k σ - δ_a_k r) ),where b = f_a_1 f_a_2… f_a_ℓ u__σ, for the arrow r_Nr'_N.Let u denote the maximal vector in B. Note that u^∈ B is the unique element of classical weight w_0 (u) and is in the same classical component as u.Let B^∙ = ⊗_i=1^N B^r_i, 1, and denote the maximal vector in B by u_∙⊗ u_r, where u_∙ and u_r are the maximal elements in B^∙ and B^r,1 respectively. We note that the unique element of classical weight w_0 (u) is given by v = u_∙^⊗ u_r^ (technically we applyto each factor individually, but we do not care about the ordering of B^∙ and u_∙). We note that v is in the same classical component as u_∙⊗ u_r. Therefore, v^ is the maximal element in B^r,1⊗ B^∙. Therefore, we have(v^) = b ⊗ u_r'⊗ u_∙,(v^)^= u_∙^⊗ u_ξ(r')⊗ b^ =: b_0,whereb^ = f_ξ(a_1) f_ξ(a_2)… f_ξ(a_ℓ) u__σ.Next, we want to apply a sequence e__k to b_k-1 such that we obtainb_k := u_∙^⊗ u_ξ(r')⊗( f_ξ(a_k+1)… f_ξ(a_ℓ) u__σ),where b_ℓ = u_∙^⊗ u_ξ(r')⊗ u__σ.Fix some j ∈ I_0 and i. Let I_r_i = I ∖{r_i}. If we have v_r_i⊗ x, where v_r_i is the lowest weight element in B^r_i,1, then if r_i ≠ j, we have e_j(v_r_i⊗ x) = v_r_i⊗ e_j x. If r_i = j, then e_j^2(v_r_i⊗ x) = (e_j v_r_i) ⊗ (e_j x), but we note that there exists a sequence of crystal operators e_j⃗ that acts only on the left that returns to v_i ⊗ (e_j x) that uses precisely t_a_k 0-arrows. This follows from the fact that e_j v_i is the unique I_r_i-lowest weight element in the unique I_r_i-component B(Λ̌_0) ⊆ B^r,1. Note that these 0-arrows are Demazure, not at the beginning of a 0-string, as the extra B^σ,1 results in φ_0((v^)^) - φ_0(v) = 1.From <cit.>, the number of Demazure 0-arrows, a 0-arrow that is not at the end of a 0-string, between two elements determines the difference in energy. Therefore, we haveD( (v) ) - D(v) = ∑_k=1^ℓ t_a_k^∨∑_j ∈_>0L_j^(a_k) + S,where L corresponds to B^∙ and S is the claim for a single tensor factor B^r,1. Thus, it remains to show the claim for a single tensor factor, which is a finite computation using the results from <cit.>. Thus the claim holds for any element in the same classical component of v.Let d(b) denote the affine grading of b <cit.>, the number of Demazure 0-arrows in the path from b to u_Λ_τ(0) in B(Λ_τ(0)). Next, for any other classically lowest weight element v', we have d( (v') ) = d(b') since the number of Demazure 0-arrows is determined by the coefficient of α_0 in Λ_τ(0) - (v') and ( (v') ) = (v'). 
Therefore, we haveD((v')) - D(v') = [ d((v')) - d(v_) ] - [ d(v') - d(v) ]= [ d((v')) - d(v') ] + [ d(v) - d(v_) ]= D((v)) - D(v),where v_ is the minimal element in (B). Hence, the claim follows.The proof of Lemma <ref> and Lemma <ref> is true for general typeand any -diagram.The proof of Lemma <ref> holds for general tensor factors provided there is an analog of <cit.> and <cit.> for all B^r,s in any type. In particular, we are using <cit.>, which is essentially a type-independent proof (other than the single factor case, i.e., <cit.>).§.§ Statistic preserving bijection Letbe of type A_n^(1), D_n^(1), E_6,7,8^(1), or E_6^(2). Let B = ⊗_i=1^N B^r_i,1 and B' = ⊗_i=1^N' B^r_i',1 (except possibly for r_i and r_i' being 4,5 in type E_8^(1)). Then the diagram3pc3.5pc(B ⊗ B') [r]^Φ[d]_𝕀B ⊗ B' [d]^R (B' ⊗ B) [r]_ΦB' ⊗ Bcommutes.For type A_n^(1) (resp. D_n^(1)) was shown in <cit.> (resp. <cit.>). For types E_6,7,8^(1) and E_6^(2), the claim reduces to B^r_1,1⊗ B^r_2,1 by the definition of Φ and is a finite computation. Now we can show that the map Φ is a bijection that sends cocharge to energy.Let B = ⊗_i=1^N B^r_i,1 be a tensor product of KR crystals. Then the mapΦ(B) → Bis a bijection on highest weight elements such that Φ∘η sends cocharge to energy.By Proposition <ref>, Lemma <ref>, and Lemma <ref>, showing the bijection is well-defined and preserves statistics is reduced to showing when the left-most factor for (B) and right-most factor for B^rev is minuscule or adjoint. Thus the claim follows by Lemma <ref> and Lemma <ref>. Let Φ be defined using δ_r such that r is a minuscule node. Then Φ(B) → B is a U_q(_0)-crystal isomorphism.A straightforward check shows that Φ preserves weights. A tedious but straightforward check shows that the arguments in <cit.> extend to all minuscule nodes by the description of δ_r and that the arguments are about the relative position where two boxes are added by applying f_a. Let B be as in Theorem <ref> and B' be some reordering of the tensor factors. From Proposition <ref> and Theorem <ref>, we can construct the combinatorial R-matrix RB → B' by R = Φ' ∘Φ^-1, where Φ' (B') → B' is the corresponding bijection. Note that this provides a uniform combinatorial construction of the combinatorial R-matrix. For r = 1, 2, 6 in type E_6^(1), we can describe e_0 and f_0 on (B^r,s) by using the description in <cit.> as it is given solely in terms of ε_i, φ_i, and the weight. Hence, Theorem <ref> immediately implies the following. Letbe of type E_6^(1) and r = 1, 2, 6. ThenΦ(B^r,s) → B^r,sis a U_q'()-crystal isomorphism.§.§ Virtual bijection In order to extend the bijection to full columns in all other types, we need to extendto commute with virtualization maps. In particular, we introduce the notion of a virtualmap, which we denote by ^v. Specifically, we generalize the notion of an -diagram to use for the map ^v, which we call a ^v-diagram. In all cases below, the resulting ^v-diagram is derived from a virtualization map.For type E_6^(2) as a folding of type E_6^(1), we require having an arrow (r_1, r_2) ⟶ r' defining a map ^vB^r_1,1⊗ B^r_2,1→ B^2,1⊗ B^r',1. Hence, the virtual ^v-diagram we use is[baseline=-4](1) at (0,0) 2;(2) at (2,0) 4;(3) at (4,0) (3,5);(4) at (2,1) (1,6); [->] (2) – (1) node[midway, below]4; [->] (3) – (2) node[midway, below]35; [->] (4) – (1) node[midway, above left]16; . 
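Concretely, the orbits of the E_6^(1) diagram automorphism underlying this ^v-diagram can be recorded as follows. This is a sketch only: the orbit data is read off from the two diagrams above using the E_6^(2) labels of this paper, the helper function is hypothetical, and the scaling factors are taken to be trivial for this folding.

```python
# E_6^(2) node -> orbit of E_6^(1) nodes folding onto it
# (diagram automorphism 1 <-> 6 and 3 <-> 5; node 0 maps to node 0)
E6_2_ORBITS = {0: [0], 1: [2], 2: [4], 3: [3, 5], 4: [1, 6]}

def virtual_factors(ranks):
    """Map a list of E_6^(2) single-column KR factors B^{r,1} to the
    list of E_6^(1) factors of the corresponding virtual crystal."""
    return [(b, 1) for r in ranks for b in E6_2_ORBITS[r]]
```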
For type B_n^(1) as the dual of A_2n-1^(2) (this has a virtualization map with scaling factors given by Table <ref>), the ^v-diagram we consider is[baseline=-4](n) at (0,0) n;(1) at (3,0) [1];(d) at (3,0.75) ⋮;(nm) at (3,1.5) [n-1]; [->] (1) – (n) node[midway, below]1,, …,; [->,dotted] (d) – (n); [->] (nm) – (n) node[midway, above left]1,…,n-1,;where [r] corresponds to B^r,2. Note that the arrows are labeled by a single-column KR tableau [t_1, …, t_k], where we read the column from top to bottom. See also Appendix <ref>.For type F_4^(1) as the dual of E_6^(2), the ^v-diagram is a modification of Equation (<ref>):[baseline=-4](1) at (0,0) 4;(2) at (2,0) 3;(3) at (4,0) [2];(4) at (2,1) [1]; [->] (2) – (1) node[midway, below]1, 3; [->] (3) – (2) node[midway, below]1, 22; [->] (4) – (1) node[midway, above left]1, 1; .The derivation is similar to the type B_n^(1) case.For type G_2^(1) as the dual of D_4^(3), recall that we consider B(_1) as the natural virtual crystal of B(3_1) ⊆ B(_2) ⊗ B(_2) in type G_2^(1). Continuing this in type D_4^(3), we construct an ^v-diagram as 2[[1]], where [[1]] corresponds to B^1,3. Consider type B_4^(1) and B^3,1. Note that the image under the virtualization map B_4^(1)⟶ A_7^(2) is B^3,2. Then we have^v((1,2,3)) = (1,2,3,) ⊗(1,2,3,4) ∈ B^4,1⊗ B^4,1.Let v be one of the virtualization maps ofC_n^(1)⟶ A_2n^(2)⟶ D_n+1^(2)⟶ A_2n-1^(1),with scaling factors (2, 1, …, 1), (1, …, 1, 2), and (1, …, 1) respectively. Thenv ∘δ_θ = δ_θ∘ v. Letbe of type D_n+1^(2) andof type A_2n-1^(2). We note that if δ_θ selects a singular string in ν^(a) for a ≠ n, then it must select the same singular string in ν^(a') = ν^(a”) for all a', a”∈ϕ^-1(a).For the remaining virtualization maps, this follows from the description of δ_θ as per Section <ref>.We can also compose the virtualization maps of Equation (<ref>), and we obtain another proof of <cit.>. We also can useC_n^(1)⟶ A_2n^(2)†⟶ D_n+1^(2)⟶ A_2n-1^(1),with scaling factors (1, …, 1, 2), (2, 1, …, 1), and (1, …, 1), respectively, instead of Equation (<ref>).For type D_n+1^(2) as a folding of type A_2n-1^(1), we use the ^v-diagram:[baseline=-4, xscale=1.3](1) at (0,0) (1,2n-1);(2) at (4,0) ⋯;(3) at (0,-1) ⋯;(4) at (5,-1) (n-1,n+1);(5) at (8.5,-1) [n]; [->] (2) – (1) node[midway, above]2(2n-2)(2n-1); [->] (4) – (3) node[midway, above](n-2)(n-1)(n+1)(n+2); [->] (5) – (4) node[midway, above](n-1)nn(n+1); . Letbe of non-simply-laced affine type. Then we havev ∘ = ^v ∘ v,where v is one of the virtualization maps given above.Supposeis of type E_6^(2). Recall thatadds a length 1 singular string to ν^(a_i), where e_a_1… e_a_k b = u__2. Moreover, recall thatdoes not change the vacancy numbers. It is a finite computation to show that for any arrow rr' in the -diagram in Equation (<ref>), we have an arrow ϕ^-1(r) ϕ^-1(r'), where ϕ E_6^(1)↘ E_6^(2) is the diagram folding, in the ^v-diagram in Equation (<ref>). Thus the claim follows.For the other types, the proof is similar with also considering the doubling (or tripling) map. Letbe of non-simply-laced affine type. Thenv ∘Φ = Φ∘ v,where v is one of the virtualization maps given above. Moreover, Φ is a bijection such that Φ∘η sends cocharge to energy.Lemma <ref> implies that it is sufficient to show v ∘δ = δ∘ v. For untwisted types, this follows by definition and Theorem <ref>. For all dual untwisted types, type A_2n^(2), and type A_2n^(2)†, the proof is similar to the proof of Proposition <ref>. Thus the claim follows. Letbe of affine type. Let B = ⊗_i=1^N B^r_i,1, and let B' = ⊗_i=1^N' B^r_i',1. 
Then the square whose horizontal maps are the combinatorial R-matrices B ⊗ B' → B' ⊗ B and v(B ⊗ B') → v(B' ⊗ B), and whose vertical maps are the virtualization maps v, commutes. Moreover, the combinatorial R-matrix can be defined as the restriction of R to the image of v.This follows from Proposition <ref>.Proposition <ref> holds for all affine types.This follows from Theorem <ref>, Proposition <ref>, and Proposition <ref>. §.§ Higher levels for minuscule nodesLet r be such that _r is a minuscule weight. Then B(s_r) ⊆ B(_r)^⊗ s is characterized by{ b_1 ⊗⋯⊗ b_s | b_1 ≤⋯≤ b_s }. By <cit.>, it is sufficient to prove the claim when s = 2. We show this by induction, where the base case is x_1 = x_2 = u__r. Next, suppose x_1 ≤ x_2, and let x_2 = f_a_m⋯ f_a_1 x_1. Fix some i ∈ I_0. If f_i(x_1 ⊗ x_2) = x_1 ⊗ (f_i x_2), then we have x_1 ≤ f_i x_2. Now, consider the case f_i(x_1 ⊗ x_2) = (f_i x_1) ⊗ x_2. Since B(_r) is minuscule, all elements are parameterized by trailing words of the longest coset representative w ∈ / _r, and w is fully commutative <cit.>. Since w is fully commutative, we must have f_a_1 f_i x = f_i f_a_1 x, and similarly, there exists a k such that a_k = i andx_2 = f_a_m⋯ f_a_k+1 f_a_k-1⋯ f_a_1 f_a_k x_1.
From the definition of e_a_j and the resulting change in vacancy numbers (recall that the classical crystal operators on rigged configurations preserve the coriggings of all unchanged rows), it is straightforward to see that the resulting rigged configurations are equal.Consider B = B^3,1⊗ B^4,1⊗ B^2,2⊗ B^6,1 in type E_6^(1). Then we have(ν, J) = [scale=.33,baseline=-18][lightgray] (0,-2) rectangle (1,-3);[scale=.7] at (0.6, -2.5) ℓ_5;2,10,00,0[xshift=5cm][lightgray] (2,-1) rectangle (3,-2);[scale=.7] at (2.6, -1.5) ℓ_6;3,2,10,1,10,1,1[xshift=11cm][lightgray] (0,-4) rectangle (1,-5);[scale=.7] at (0.6, -4.5) ℓ_4;2,2,1,10,0,0,00,0,0,0[xshift=16cm][lightgray] (0,-6) rectangle (1,-7);[scale=.7] at (0.6, -6.5) ℓ_3;[lightgray] (2,-1) rectangle (3,-2);[scale=.7] at (2.6, -1.5) ℓ_7;3,2,2,1,1,10,0,0,0,0,00,0,0,0,0,0[xshift=22cm][lightgray] (0,-4) rectangle (1,-5);[scale=.7] at (0.6, -4.5) ℓ_2;[lightgray] (2,-1) rectangle (3,-2);[scale=.7] at (2.6, -1.5) ℓ_8;3,2,1,10,0,0,00,0,0,0[xshift=28cm][lightgray] (0,-2) rectangle (1,-3);[scale=.7] at (0.6, -2.5) ℓ_1;[lightgray] (2,-1) rectangle (3,-2);[scale=.7] at (2.6, -1.5) ℓ_9;3,10,00,1δ_6^(ν, J) = [scale=.33,baseline=-18]211[xshift=5cm]2,2,10,1,10,1,1[xshift=11cm]2,2,10,0,00,0,0[xshift=16cm]2,2,2,1,10,0,0,0,00,0,0,0,0[xshift=22cm]2,2,10,0,00,0,0[xshift=28cm]201(where the return value of δ_6^ is 3). Now, we compute e_6 e_5 e_4 e_2 e_1 e_3 e_4 e_5 e_6(ν, J):(ν, J) = [scale=.33,baseline=-18]2,10,00,0[xshift=5cm]3,2,10,1,10,1,1[xshift=11cm]2,2,1,10,0,0,00,0,0,0[xshift=16cm]3,2,2,1,1,10,0,0,0,0,00,0,0,0,0,0[xshift=22cm]3,2,1,10,0,0,00,0,0,0[xshift=28cm]3,1-1,-1-1,0[scale=.33,baseline=-18]2,10,00,0[xshift=5cm]3,2,10,1,10,1,1[xshift=11cm]2,2,1,10,0,0,00,0,0,0[xshift=16cm]3,2,2,1,1,10,0,0,0,0,00,0,0,0,0,0[xshift=22cm]3,2,1,1-1,-1,-1,-1-1,-1,-1,-1[xshift=28cm]311[scale=.33,baseline=-18]2,10,00,0[xshift=5cm]3,2,10,1,10,1,1[xshift=11cm]2,2,1,10,0,0,00,0,0,0[xshift=16cm]3,2,2,1,1,1-1,-1,-1,-1,-1,-1-1,-1,-1,-1,-1,-1[xshift=22cm]3,2,11,1,11,1,1[xshift=28cm]300[scale=.33,baseline=-18]2,10,00,0[xshift=5cm]3,2,1-1,0,0-1,0,0[xshift=11cm]2,2,1,1-1,-1,-1,-1-1,-1,-1,-1[xshift=16cm]3,2,2,1,11,1,1,1,11,1,1,1,1[xshift=22cm]3,2,10,0,00,0,0[xshift=28cm]300[scale=.33,baseline=-18]2,1-1,-1-1,-1[xshift=5cm]3,2,1-1,0,0-1,0,0[xshift=11cm]2,2,11,1,11,1,1[xshift=16cm]3,2,2,1,10,0,0,0,00,0,0,0,0[xshift=22cm]3,2,10,0,00,0,0[xshift=28cm]300[scale=.33,baseline=-18]211[xshift=5cm]3,2,1-1,0,0-1,0,0[xshift=11cm]2,2,10,0,00,0,0[xshift=16cm]3,2,2,1,10,0,0,0,00,0,0,0,0[xshift=22cm]3,2,10,0,00,0,0[xshift=28cm]300[scale=.33,baseline=-18]211[xshift=5cm]2,2,10,0,00,0,0[xshift=11cm]2,2,10,0,00,0,0[xshift=16cm]3,2,2,1,1-1,0,0,0,0-1,0,0,0,0[xshift=22cm]3,2,10,0,00,0,0[xshift=28cm]300[scale=.33,baseline=-18]211[xshift=5cm]2,2,10,0,00,0,0[xshift=11cm]2,2,10,0,00,0,0[xshift=16cm]2,2,2,1,10,0,0,0,00,0,0,0,0[xshift=22cm]3,2,1-1,0,0-1,0,0[xshift=28cm]300[scale=.33,baseline=-18]211[xshift=5cm]2,2,10,0,00,0,0[xshift=11cm]2,2,10,0,00,0,0[xshift=16cm]2,2,2,1,10,0,0,0,00,0,0,0,0[xshift=22cm]2,2,10,0,00,0,0[xshift=28cm]3-1-1[scale=.33,baseline=-18]211[xshift=5cm]2,2,10,0,00,0,0[xshift=11cm]2,2,10,0,00,0,0[xshift=16cm]2,2,2,1,10,0,0,0,00,0,0,0,0[xshift=22cm]2,2,10,0,00,0,0[xshift=28cm]201and note that the result equals δ_6^(ν, J). Let r be a minuscule node. Let δ_r^ := η∘δ_r ∘η on rigged configurations and δ_r^ := ♢∘δ_r ∘♢ on classically highest weight elements in tensor product of KR crystals and extended as a classical crystal isomorphism. We haveδ^∘Φ = Φ∘δ^. 
Recall that δ_r commutes with the classical crystal operators by Theorem <ref>. Note that we can define δ_r^ on classically highest weight elements in a tensor product of KR crystals by removing the rightmost factor and then going to the highest weight element, following <cit.>.[Recall that δ_r^ in <cit.>, where it was denoted by rh, was defined by removing the rightmost factor (and then going to the highest weight element), in contrast to our definition of conjugating the left factor removal by ♢. However, these definitions are equivalent by <cit.>.] Lemma <ref> gives the same description on rigged configurations.To prove the claim, it is sufficient to prove it on classically highest weight elements. Since the rightmost factor is B^r,1, where r is a minuscule node, the corresponding rightmost element in the tensor product must be u__r. Recall that Φ^-1(u__r) is the empty rigged configuration (but with vacancy numbers p_i^(r) shifted by 1 for all i ∈_>0). Note also that Φ^-1 is computed using only singular rows, not the actual rigging values. Therefore, if Φ^-1(b ⊗ u__r) = (ν, J), then Φ^-1(b) = (ν, J) as defined in Lemma <ref>. If e_a_1… e_a_k b, where a_1, …, a_k ∈ I_0, is the corresponding classically highest weight element, then we haveΦ^-1( δ^*(b ⊗ u_s_r) ) = Φ^-1(e_a_1… e_a_k b) = e_a_1… e_a_kΦ^-1(b) = δ^*( Φ^-1(b ⊗ u_s_r) )as desired. We also have the following from Proposition <ref> and the fact that [δ, δ^] = 0 clearly holds on a tensor product of KR crystals.On rigged configurations, we have [δ, δ^] = 0. We remark that this provides an alternative proof of <cit.> and <cit.>.Using Corollary <ref>, the remaining part of the proof of <cit.> that ♢∘Φ = Φ∘θ holds. Hence, the proof of <cit.> goes through by using Theorem <ref>. Thus, we have the following.Let B = ⊗_i=1^N B^r_i,s_i be a tensor product of KR crystals, where r_i is a minuscule node for all i. Then the mapΦ(B) → Bis a U_q(_0)-crystal isomorphism such that Φ∘η sends cocharge to energy.§ EQUIVALENT BIJECTIONSIn this section, we show that all of our defined bijections give the same bijection and Φ = Φ for all types except B_n^(1) (where it remains a conjecture), F_4^(1), and G_2^(1). We show that Φ defined using a combination of various δ_σ defines the same bijection as Φ for types A_n^(1), D_n^(1), and E_6^(1). Note that for type A_n^(1), the map δ_n-1 is the dual bijection of <cit.>. For types A_n^(1) and D_n^(1), we will show that the bijections defined by δ_σ and δ are equal. For type A_2n-1^(2), we only have δ_1 = δ, which was done in <cit.>. For type D_n+1^(2), we will use the virtualization map and type A_2n-1^(1). For type E_6^(1), the map δ_1 = δ is from <cit.>, and δ_6 follows from the diagram automorphism given by 1 ↔ 6.We will first show that all of the bijections defined using δ_r for any r ∈ I_0 in type A_n^(1) are equal.Suppose we are in type A_n^(1) and let B = B^r,1⊗ B^∙. We haveδ_r = δ∘ (δ∘lb)^r-1such that lb^r-1(b_r) = b_1^r ⊗⋯⊗ b_1^1 (with lb always acting on the rightmost factor), where b_1^i is the return value for the i-th application of δ.We note that δ_1 = δ. Thus fix an r > 1. We proceed by induction on r. Note that by our induction assumption it is sufficient to show thatδ_r = δ_r-1∘δ∘lbsuch that lb(b_r) = (b) ⊗ b_r-1, where b_r denotes the return value of δ_r and (b) ⊗ b_r-1 denotes the return values under δ_r-1∘δ∘lb. Recall that lb(u__r) = (r) ⊗ u__r-1, which defines the unique strict crystal embedding B(_r) → B(_1) ⊗ B(_r-1).Suppose δ_r selects (singular) rows ℓ^(a)_1 ≤⋯≤ℓ^(a)_k_a from ν^(a) for all a ∈ I_0; we consider k_a = 0 if no such row was selected in ν^(a).
Note that δ ∘ lb selects the same singular row of length ℓ^(a)_1 in ν^(a) as δ_r does, since it is the smallest selectable singular row in ν^(a) for all r ≤ a < b, where b is the letter returned by δ, and both algorithms (effectively[From the definition of lb and the fact that lb preserves vacancy numbers, we can consider δ ∘ lb as one operation that is the same as δ except that it begins at ν^(r) instead of ν^(1).]) start at ν^(r). Indeed, by the definition of δ_r, we must have ℓ_1^(a) ≤ ℓ_1^(a+1) and no singular rows in ν^(a+1) of length ℓ_1^(a) ≤ i < ℓ_1^(a+1) for all r ≤ a < b-1, as δ_r selects the minimal singular row and would otherwise have selected a singular row of length ℓ_1^(a) ≤ i < ℓ_1^(a+1).

Next, let (ν̃, J̃) = (δ ∘ lb)(ν, J). We claim that δ_{r-1} on (ν̃, J̃) must select exactly all other rows selected by δ_r. If k_{r-1} = 0, then δ_r has selected no other singular rows (note that in this case the crystal structure of B(Λ_r) ⊆ B(Λ_1) ⊗ B(Λ_{r-1}) implies k_a = 1 for all r ≤ a < b and k_a = 0 otherwise). Now we have

p_i^(r-1)(ν̃, J̃) = p_i^(r-1)(ν, J) + 1 if i < ℓ_1^(r), and p_i^(r-1)(ν̃, J̃) = p_i^(r-1)(ν, J) otherwise,

so none of these rows of length i < ℓ_1^(r) are singular, and k_{r-1} = 0 together with Equation (<ref>) implies the other rows are not singular; thus δ_{r-1} returns u_{Λ_{r-1}}, and the claim holds.

Now suppose k_{r-1} > 0. The algorithm for δ_{r-1} begins by selecting the row of length ℓ^(r-1)_1 because of Equation (<ref>) and because δ_r would have selected any singular row of length ℓ_1^(r) ≤ i < ℓ_1^(r-1). Indeed, if there were such a singular row R and δ_r selected ℓ_1^(r-1) at step t, then R would have been selected during an earlier step of δ_r, by the fact that we select a singular row of minimal length and we could always select a string from ν^(r-1) after the first step (and up to step t). Next, we note that there are no singular rows in ν̃^(r) of length ℓ^(r)_1 ≤ i < ℓ^(r+1)_1, as p_i^(r) has been increased by 1 in that region. Therefore, the next selected row in ν̃^(r) is of length ℓ_2^(r) since

* all vacancy numbers p_i^(r) for i ≥ ℓ^(r+1)_1 remain unchanged,
* ℓ_1^(r-1) ≥ ℓ_1^(r),
* all other rows of length at least ℓ_1^(r+1) are singular in (ν̃, J̃)^(r) if and only if they are singular in (ν, J)^(r), and
* we can select a second row in ν^(r) as soon as we select a row in ν^(r-1), from the definition of δ_r.

The argument is similar for all (ν̃, J̃)^(a) with r < a < b-1, but we note there are no singular rows in ν̃^(b-1) of length at least ℓ_1^(b-1). Note that by the definition of δ_r and the crystal B(Λ_r), we must have ℓ_2^(b-2) ≥ ℓ_1^(b-1): otherwise the definition of δ_r would imply that the crystal graph contains a (b-2)-arrow turning an entry b-2 into b-1 in a column already containing b-1, producing a column with two entries equal to b-1 as a subtableau, which is impossible by column strictness (all other entries of the tableau do not matter here, so we do not write them). Thus, we have b ⊗ b_{r-1} ∈ B(Λ_r) ⊆ B(Λ_1) ⊗ B(Λ_{r-1}) (equivalently lb(b_r) = b ⊗ b_{r-1}), as δ_{r-1} must select the row of length ℓ_2^(b-2) in ν̃^(b-2) before selecting a row in ν̃^(b-1) and hence cannot select any more rows in ν̃^(b-1). Additionally, we have (ν̃, J̃)^(a) = (ν, J)^(a) for all a < r with the same vacancy numbers for all a < r-1, and

p_i^(a)(ν̃, J̃) = p_i^(a)(ν, J) + 1 if ℓ_1^(a) ≤ i < ℓ_1^(a+1), and p_i^(a)(ν̃, J̃) = p_i^(a)(ν, J) if i ≥ ℓ_1^(a+1),

for all r ≤ a < b-1. Thus, once δ_{r-1} starts selecting ℓ_2^(a), it must select the same rows of length ℓ_k^(a) for k ≥ 2 as δ_r. Hence the rest of the algorithm for δ_{r-1} selects all the same other rows selected by δ_r, following the same path as δ_r except for those boxes selected by δ ∘ lb.
Therefore, the resulting rigged configurations are equal, and hence the claim follows.

We note that the proof of Lemma <ref> is given by δ_{r-1} ∘ δ ∘ lb following the path

u_{Λ_r} = (u, r) → ⋯ → (u, b) → (x, b) = b_r

in B(Λ_r), where u = u_{Λ_{r-1}}, x = b_{r-1}, and the last arrow is the step δ ∘ lb.

Consider

B = B^3,1 ⊗ B^2,2 ⊗ B^4,2 ⊗ B^1,5 ⊗ B^4,3 ⊗ B^3,2 ⊗ B^4,1 ⊗ B^2,1 ⊗ B^4,2

of type A_5^(1). [The original displays a rigged configuration (ν, J) together with the rows selected by δ_3, which returns (3,5,6), and then the same computation carried out as δ_2 ∘ δ ∘ lb, with δ ∘ lb returning (6) and δ_2 returning (3,5); the explicit diagrams are omitted here.] The result agrees with applying δ_3, as per Lemma <ref>.

In order to show that the bijections agree for type D_n^(1), we recall the map δ_sp : B^{r,1} ⊗ B^∙ → B^∙ for r = n-1, n from <cit.>. Let v : B^{r,1} → B^{r,2} denote the virtualization map given by γ_a = 2 for all a ∈ I. Let lb^{(r)} : B^{r,2} ⊗ B^∙ → B^{1,1} ⊗ B^{n-1,1} ⊗ B^{n,1} ⊗ B^∙ be given on rigged configurations by adding a length-one singular row to ν^(a) for all a ≤ n-2 and for a = n, n-1 if r = n-1, n, respectively. Let κ : B^{n-1,1} ⊗ B^{n,1} ⊗ B^∙ → B^{n-2,1} ⊗ B^∙ be given on rigged configurations by adding a length-one singular row to ν^(a) for all a ≤ n-2. We define δ_sp : B^{r,1} ⊗ B^∙ → B^∙ by

δ_sp := v^{-1} ∘ (δ ∘ lb)^{n-2} ∘ δ ∘ κ ∘ δ ∘ lb^{(r)} ∘ v.

We could also define δ_sp by using the (generalized) diagram given by the chain

[r] → (n-1, n) → n-2 → ⋯ → 1,

with the arrows labelled x, n-1, n-2, …, 1, respectively, where x = ·, n if r = n-1, n, respectively, and where [r] corresponds to B^{r,2}.

Let 𝔤 be of type D_n^(1) and B = B^{r,1} ⊗ B^∙ with r = n-1, n. We have δ_r = δ_sp.
We will use the spin representation for elements of B(Λ_r) ≅ B^{r,1} (as U_q(𝔤_0)-crystals) of type D_n^(1). That is, an element b ∈ B(Λ_r) with wt(b) = ½ ∑_{i=1}^n s_i ε_i (given as an element of the ambient ½ℤ^n lattice) is represented as the ±-sequence (s_1, …, s_n) (see also, e.g., <cit.>). Under the doubling map v, the corresponding tableau has an i (resp. ī) if and only if s_i = + (resp. s_i = −), written in increasing order down the column. Note that u_{Λ_r} = (+, +, …, +, ±), where s_n = −, + if r = n-1, n, respectively.

The proof is essentially the same as the proof of Lemma <ref>. Indeed, it is sufficient to show that the resulting rigged configurations are equal after applying δ_r and δ_sp to a fixed rigged configuration (ν, J), as B(Λ_r) is a minuscule representation. Let b be the return value of δ_r, which selects rows of length ℓ^(a)_1 ≤ ⋯ ≤ ℓ^(a)_{k_a} in ν^(a) for all a ∈ I_0, where we consider k_a = 0 if no row was selected in ν^(a).

Let j_1 < ⋯ < j_m be all indices such that s_{j_h} = −. Let (ν̂, Ĵ) = v(ν, J), and let ř = n, n-1 if r = n-1, n, respectively. For h = 1, we must have applied f_r f_{n-2} ⋯ f_{j_1} to u_{Λ_r} in order to obtain b. Thus, the application of δ ∘ lb^{(r)} results in selecting the row of length 2ℓ^(a)_1 in ν̂^(a) for all a = r, n-2, …, j_1, as a row is singular in (ν, J) if and only if it is singular in (ν̂, Ĵ). Let (ν̃, J̃) be the resulting rigged configuration. For h = 2, we must have also applied f_ř f_{n-2} ⋯ f_{j_2}. When applying δ ∘ κ, we note that we have increased p_i^(ř) for all i < 2ℓ_1^(n-2), and so we must select a row of length 2ℓ_1^(ř) in ν̃^(ř). Similar to the proof of Lemma <ref>, we select rows of length 2ℓ^(a)_2 in ν̃^(a) for all a = n-2, …, j_2. The case of all other h, applying δ ∘ lb, is similar, except that the initial steps, until we reach n-1, n, remove a box from an odd length row. Similarly, all of the positive values in the column v(b) remove a box from an odd length row. Therefore, the resulting rigged configurations are equal, and the claim follows.

Next, we consider the case of D_{n+1}^(2) and recall that the virtual crystal of B^{n,1} is B^{n,1} of type A_{2n-1}^(1). Let δ̂_n denote the map δ_n of type A_{2n-1}^(1). Thus, we have the following from the definition of δ_r.

Let 𝔤 be of type D_{n+1}^(2). Then v ∘ δ_n = δ̂_n ∘ v.

Let 𝔤 be of type E_6^(1) and B = B^{6,1} ⊗ B^∙. Then

δ_6 = δ_1 ∘ δ_1 ∘ lb,    δ_1 = δ_6 ∘ δ_6 ∘ lb^∨.

Similar to the proof of Lemma <ref>.

In type D_{n+1}^(2), we define ι : B^{1,1} ⊗ B^∙ → B^{n,1} ⊗ B^{n,1} ⊗ B^∙ by

(1) ⊗ b^∙ ↦ 1 ⊗ n ⊗ b^∙,    (∅) ⊗ b^∙ ↦ · ⊗ n ⊗ b^∙

(the first tensor factor of the second image, a barred letter, did not survive extraction), extended as a U_q(𝔤_0)-crystal embedding.

Consider B = B^{θ,1} ⊗ B^∙ of type 𝔤. Then, we have

δ_θ = δ_n ∘ δ_1 = δ_1 ∘ δ_n if 𝔤 = A_n^(1),
δ_θ = δ_1 ∘ δ_1 ∘ lb if 𝔤 = D_n^(1),
δ_θ = δ_n ∘ δ_n ∘ ι if 𝔤 = D_{n+1}^(2),
δ_θ = δ_1 ∘ δ_1 ∘ lb ∘ δ_1 ∘ lb if 𝔤 = E_6^(1),
δ_θ = δ_7 ∘ δ_7 ∘ lb if 𝔤 = E_7^(1).

We first consider type E_6^(1). Note that B(θ) ⊆ B(Λ_1)^{⊗3} is the unique factor with highest weight element 2 6 ⊗ 6 ⊗ 1. By Lemma <ref>, we have

δ_1 ∘ δ_1 ∘ lb ∘ δ_1 ∘ lb = δ_6 ∘ δ_1 ∘ lb.

Therefore, it is sufficient to show δ_θ = δ_6 ∘ δ_1 ∘ lb. Note that the return values can be ignored, since there is a unique B(θ) ⊆ B(Λ_6) ⊗ B(Λ_1) and a unique B(Λ_6) ⊆ B(Λ_1) ⊗ B(Λ_1), giving a canonical identification of the return values. Furthermore, the only weight of B(θ) with multiplicity greater than 1 is the weight 0, but these elements are distinguished by ε_a(y_{a'}) = δ_{aa'}, which in terms of the algorithm δ_θ is determined by the partition ν^(a) at which the algorithm terminates.

We proceed by induction on the depth of the return value of δ_θ. The base case is when δ_θ returns u_θ, as no boxes are removed, and so we wish to show that δ_6 ∘ δ_1 ∘ lb returns 2 ⊗ 6. Adding u_θ under δ_θ^{-1} does not change the rigged configuration.
Similarly, adding 6 under δ_6^{-1} only makes all rows in ν^(6) non-singular. Hence, all boxes added by 2 under δ_1^{-1} must be of length 1, and these are all subsequently removed by lb^{-1}. Hence, we have δ_θ = δ_6 ∘ δ_1 ∘ lb.

Consider b' ⊗ b ∈ B(θ) ⊆ B(Λ_1) ⊗ B(Λ_6). If f_a(b' ⊗ b) = (f_a b') ⊗ b, then the claim follows by induction, as the new box is selected on the application of δ_1. Therefore, suppose f_a(b' ⊗ b) = b' ⊗ (f_a b). Then we must have ε_a(b') = 0 < 1 = φ_a(b) by the tensor product rule. If φ_a(b') = 1, then we have φ_a(b' ⊗ b) = 2, and hence b' ⊗ b corresponds to x_{α_a}. In this case, f_a(b' ⊗ b) differs by a quasisingular string in ν^(a); hence, after δ_1 ∘ lb, this string becomes singular and is selected by δ_6. Thus, the result of δ_6 ∘ δ_1 ∘ lb is b' ⊗ f_a b. Now suppose φ_a(b') = 0. Then the new singular row in ν^(6) is selected during δ_6. In either case, we have δ_θ = δ_6 ∘ δ_1 ∘ lb.

Finally, we consider the case when δ_θ returns ∅. In this case, we want to show that δ_6 and δ_1 return 6 and the corresponding element of B^{1,1}, respectively. Adding ∅ under δ_θ^{-1} adds c_a singular rows of length 1 to ν^(a). Next, we note that adding 6 under δ_6^{-1} makes all rows in ν^(6) non-singular. Therefore, adding the return value under δ_1^{-1} can only create singular rows of length 1, and, after performing lb^{-1}, there are precisely c_a such rows. Hence, we have δ_θ = δ_6 ∘ δ_1 ∘ lb.

The other cases are similar.

Let 𝔤 be of type C_n^(1). Then δ_1 = δ.

This follows from the description of δ_1 and Proposition <ref>.

By combining the above results and noting that our proofs did not rely on any specific choice of diagram (with a given sink), we have the following.

Let 𝔤 be of dual untwisted type or of type A_{2n}^(2), A_{2n}^(2)†, or C_n^(1). Let σ and σ' each be either a minuscule or adjoint node, and let Φ and Φ' be the bijections defined by δ_σ and δ_σ', respectively, each with a corresponding diagram. Then we have Φ = Φ'.

Theorem <ref> holds for type B_n^(1).

Theorem <ref> states that there is a unique bijection Φ that is defined by KKR-type algorithms. In other words, for a fixed rigged configuration, we can only obtain different KR tableaux representations of the same element in a tensor product of KR crystals under such a bijection. It is likely that there is a unique bijection that sends cocharge to energy and the combinatorial R-matrix to the identity map.

§ THE FILLING MAP

In this section, we characterize all highest weight rigged configurations appearing in RC(B^{r,s}) for various r ∈ I_0 in types E_{6,7,8}^(1), E_6^(2), and F_4^(1). Note that it has not been shown that B^{r,s} is the crystal basis of the KR module W^{r,s} in general. For non-exceptional types, this was shown in <cit.>, and for certain other special cases in <cit.>. We note that when r is a special (resp. adjoint) node, then W^{r,s} has a crystal basis by <cit.> and <cit.> (resp. <cit.>). Our computations provide further evidence for this, as we show that many of the graded decompositions agree with the conjectures of <cit.>. Using this, we describe the filling map and define Kirillov–Reshetikhin (KR) tableaux for B^{r,s} for certain r in types E_{6,7,8}^(1) and E_6^(2), extending <cit.>. We give a table of the KR tableaux for B^{r,1} in Appendix <ref>.
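The agreement of these bijections can also be spot-checked by machine. The following small SageMath check (again in the style of the session in Appendix <ref>; we assume the round-trip methods shown realize Φ and Φ^{-1}) verifies that applying Φ and then its inverse is the identity on the classically highest weight rigged configurations of a small tensor product:

sage: RC = RiggedConfigurations(['D',4,1], [[2,1], [1,2]])
sage: all(nu.to_tensor_product_of_kirillov_reshetikhin_tableaux().to_rigged_configuration() == nu
....:     for nu in RC.module_generators)
True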
Note that the height of the KR tableaux is the distance from r to σ in the diagram (recall σ is the unique sink of the diagram); in particular, r no longer necessarily corresponds to the height.

For the nodes we do not consider here, the roots which appear as edge labels in the (ambient) Kleber tree, and hence the weights which can appear in the decompositions, are completely determined by those at level 1, i.e., those appearing in the (ambient) Kleber tree for B^{r,1}. Thus, it is possible to determine an explicit parameterization of the classical decomposition of B^{r,s} for all exceptional types, as well as the corresponding rigged configurations (and their cocharge). In particular, this parameterization will be given by integer points in a polytope. However, the author believes any such parameterization is unlikely to be enlightening, as it will involve numerous linear inequalities (and possibly some equalities). For the filling map, the rules can become even more complicated. For example, consider the parameterization for B^{2,s} of type D_4^(3) given in <cit.> and the corresponding filling map.

§.§ Notation

We define some notation to aid in the description of RC(B^{r,s}). Consider a tuple (α^(1), α^(2), …, α^(ℓ)), where α^(k) = ∑_{a ∈ I_0} c_a^(k) α_a ∈ Q^+_0 for all 1 ≤ k ≤ ℓ. We denote by ν(α^(1), α^(2), …, α^(ℓ)) the configuration given by stacking in ν^(a) a column of height c_a^(k) for all 1 ≤ k ≤ ℓ and left justifying this (so it is a partition). We also denote by k ∗ [α] the sequence (α, α, …, α) of length k and by k_1 ∗ [α^(1)] + k_2 ∗ [α^(2)] the concatenation of the two sequences.

§.§ Type E_6^(1)

For r = 1 and r = 6 in type E_6^(1), we have B^{r,s} ≅ B(sΛ_r) as U_q(𝔤_0)-crystals. Thus, the filling map fill : B^{r,s} → T^{r,s} is the identity map. Moreover, we have RC(B^{r,s}) = RC(B^{r,s}; sΛ_r) as U_q(𝔤_0)-crystals.

Consider the KR crystal B^{2,s} of type E_6^(1). We have

RC(B^{2,s}) = ⊕_{k=0}^s RC(B^{2,s}; (s-k)Λ_2).

Moreover, the highest weight rigged configurations in RC(B^{2,s}) are given by ν(k∗[α]) with all riggings 0, where

α = α_1 + 2α_2 + 2α_3 + 3α_4 + 2α_5 + α_6 = Λ_2,

and cc(ν, J) = k.

Note that by Condition (K2) of Definition <ref>, the only root that we can subtract from Λ_2 is α. Thus the Kleber tree T(B^{2,s}) is a path where the node at depth k has weight sΛ_2 − kα for 0 ≤ k ≤ s. Hence, the rigged configuration is given by ν(k ∗ [α]). It is straightforward to check that cc(ν, J) = k.

We define the filling map fill : B^{2,s} → T^{2,s} on a classically highest weight element b ∈ B((s-k)Λ_2) ⊆ B^{2,s} by defining fill(b) as the tableau with the first s-k columns as 1, 6, 2, the next ⌊k/2⌋ columns as the displayed pair of columns with entries 5, 6, · and 1, 6, 2, and, if k is odd, the final column as 1, 6, · (here and below, · marks a tableau entry, typically a barred letter, that did not survive extraction). We then extend fill as a classical crystal isomorphism.

Consider B^{2,8} of type E_6^(1).
Then B(kΛ_2) ⊆ B^{2,8} corresponds, for k = 4 and for k = 5, to classically highest weight elements that are displayed in the original as 3 × 8 KR tableaux whose shaded regions are the parts that are “filled in”; the corresponding rigged configurations are displayed alongside. [These diagrams do not survive extraction and are omitted here.]

Let fill : B^{2,s} → T^{2,s} be given by Definition <ref> and ι be the natural (classical) crystal isomorphism. We have Φ = fill ∘ ι on classically highest weight elements.

If we apply column splitting to B^{2,s} when s > k, then we make all rows in ν^(2) nonsingular. A straightforward computation shows that we obtain the column 1, 3, 2 and that the rigged configuration has not changed. If s = k, then column splitting keeps all rows of length k singular. A straightforward computation shows that the column is 1, 6, 6 when k > 1 and that the new rigged configuration is obtained by deleting two boxes from every row of ν^(a) for all a ∈ I_0. If k = 1, then a finite computation shows that we obtain the column 1, 6, ·.

Consider the KR crystal B^{3,s} of type E_6^(1). We have

RC(B^{3,s}) = ⊕_{k=0}^s RC(B^{3,s}; (s-k)Λ_3 + kΛ_6).

Moreover, the highest weight rigged configurations in RC(B^{3,s}) are given by ν(k∗[α]) with all riggings 0, where

α = α_1 + α_2 + 2α_3 + 2α_4 + α_5 = Λ_3 − Λ_6,

and cc(ν, J) = k.

Similar to the proof of Proposition <ref>.

We define the filling map fill : B^{3,s} → T^{3,s} on a classically highest weight element b ∈ B((s-k)Λ_3 + kΛ_6) ⊆ B^{3,s} by defining fill(b) as the tableau with the first s-k columns as 1, 3, the next ⌊k/2⌋ columns as the displayed pair of columns with entries 16, 6 and 1, 3, and, if k is odd, the last column as 1, 6. We then extend fill as a classical crystal isomorphism.

Let fill : B^{3,s} → T^{3,s} be given by Definition <ref> and ι be the natural (classical) crystal isomorphism. We have Φ = fill ∘ ι on classically highest weight elements.

Similar to the proof of Proposition <ref>.

The case r = 5 is dual to the above. In particular, the proofs of the following propositions are similar to those for the r = 3 case.

Consider the KR crystal B^{5,s} of type E_6^(1).
We have

RC(B^{5,s}) = ⊕_{k=0}^s RC(B^{5,s}; (s-k)Λ_5 + kΛ_1).

Moreover, the highest weight rigged configurations in RC(B^{5,s}) are given by ν(k∗[α]) with all riggings 0, where

α = α_2 + α_3 + 2α_4 + α_5 + α_6 = Λ_5 − Λ_1,

and cc(ν, J) = k.

We define the filling map fill : B^{5,s} → T^{5,s} on a classically highest weight element b ∈ B((s-k)Λ_5 + kΛ_1) ⊆ B^{5,s} by defining fill(b) as the tableau with the first s-k columns as 1, 3, 2, 5, the next ⌊k/2⌋ columns as the displayed pair of columns with entries 1, 26, ·, 1 and 1, 3, 2, 5, and, if k is odd, the last column as 1, 3, 2, 1. We then extend fill as a classical crystal isomorphism.

Let fill : B^{5,s} → T^{5,s} be given by Definition <ref> and ι be the natural (classical) crystal isomorphism. We have Φ = fill ∘ ι on classically highest weight elements.

Consider the KR crystal B^{4,s} of type E_6^(1). We have

RC(B^{4,s}) = ⊕_λ RC(B^{4,s}; λ)^{⊕ (1 + k_2 − k_4 − 2k_5)},

where λ = sΛ_4 − ∑_{i=1}^5 k_i α^(i) with

α^(1) := 2α_1 + 3α_2 + 4α_3 + 6α_4 + 4α_5 + 2α_6 = Λ_4,
α^(2) := α_1 + α_2 + 2α_3 + 3α_4 + 2α_5 + α_6 = Λ_4 − Λ_2,
α^(3) := α_2 + α_3 + 2α_4 + α_5 = Λ_4 − Λ_1 − Λ_6,
α^(4) := α_2 + α_4 = Λ_2 + Λ_4 − (Λ_3 + Λ_5),
α^(5) := α_2 = 2Λ_2 − Λ_4,

such that
* k_1 + k_2 + k_3 + k_4 ≤ s,
* k_i ≥ 0 for i = 1,2,3,4,5, and
* k_2 ≥ k_4 + 2k_5.

Moreover, the highest weight rigged configurations in RC(B^{4,s}) are given by

ν( k_1∗[α^(1)] + k_2∗[α^(2)] + k_3∗[α^(3)] + k_4∗[α^(4)] + k_5∗[α^(5)] ),

with all riggings 0 except for x in the first row of ν^(2), which satisfies 0 ≤ x ≤ k_2 − k_4 − 2k_5, and cc(ν, J) = 3k_1 + k_2 + k_3 + k_4 + k_5 + x.

Note that α^(1) > α^(2) > α^(3) > α^(4) > α^(5) component-wise when expressed in terms of {α_i}_{i ∈ I_0}. The claim for the rigged configurations follows from Definition <ref> and the description of the α^(i) for i = 1,2,3,4,5. The claim for the cocharge is a straightforward computation.

We first note that for r = 1,2,3,5,6, it is clear that these graded decompositions agree with the conjectures in <cit.> (with some straightforward relabeling). Thus, we show the r = 4 case.

Let q^k B(Λ_r) denote that B(Λ_r) is given a grading of k. We have

B^{4,s} ≅ ⊕_{j_1+j_2+2j_3+j_4 ≤ s; j_1,j_2,j_3,j_4 ∈ ℤ_{≥0}} min(1 + j_2, 1 + s − j_1 − j_2 − 2j_3 − j_4) q^{3s − 2j_1 − 3j_2 − 4j_3 − 2j_4} × ∑_{k=0}^{j_1} q^k B(j_1Λ_2 + j_2Λ_4 + j_3(Λ_3 + Λ_5) + j_4(Λ_1 + Λ_6)).

First note that

wt(ν, J) = sΛ_4 − k_1Λ_4 − k_2(Λ_4 − Λ_2) − k_3(Λ_4 − Λ_1 − Λ_6) − k_4(Λ_2 + Λ_4 − (Λ_3 + Λ_5)) − k_5(2Λ_2 − Λ_4)
         = (k_2 − k_4 − 2k_5)Λ_2 + (s − k_1 − k_2 − k_3 − k_4 + k_5)Λ_4 + k_4(Λ_3 + Λ_5) + k_3(Λ_1 + Λ_6).

Hence, we must have

j_1 = k_2 − k_4 − 2k_5,  j_2 = s − k_1 − k_2 − k_3 − k_4 + k_5,  j_3 = k_4,  j_4 = k_3.

Note that the conditions on k_1, k_2, k_3, k_4, k_5 guarantee that j_1, j_2, j_3, j_4 ≥ 0. Next, we have

j_1 + j_2 + 2j_3 + j_4 = s − k_1 − k_5 ≤ s,
3s − 2j_1 − 3j_2 − 4j_3 − 2j_4 = 3k_1 + k_2 + k_3 + k_4 + k_5 = cc(ν).

Additionally, we want the multiplicities to agree, so we need to show that

M := min(1 + j_2, 1 + s − j_1 − j_2 − 2j_3 − j_4)

equals the number of times a node of weight j_1Λ_2 + j_2Λ_4 + j_3(Λ_3 + Λ_5) + j_4(Λ_1 + Λ_6) occurs, since the multiplicity of a node equals 1 + k_2 − k_4 − 2k_5 = 1 + j_1. We note that α^(1) = 2α^(2) + α^(5). Explicitly, we fix some j_1, j_2, j_3, j_4 ∈ ℤ_{≥0} such that j_1 + j_2 + 2j_3 + j_4 ≤ s, and we want to show that M equals the number of tuples k_1, k_2, k_3, k_4, k_5 ∈ ℤ_{≥0} such that s ≥ k_1 + k_2 + k_3 + k_4, k_2 ≥ k_4 + 2k_5, and, from Equation (<ref>):

k_1 + k_5 = s − j_1 − j_2 − 2j_3 − j_4,
k_2 − 2k_5 = j_1 + j_3,
k_3 = j_4,
k_4 = j_3.
Immediately, we have k_3, k_4 ≥ 0, and these are completely determined since we have fixed j_3, j_4 ∈ ℤ_{≥0}. We note that

k_2 = j_1 + j_3 + 2k_5 ≥ j_3 + 2k_5 = k_4 + 2k_5,
k_1 + k_2 + k_3 + k_4 = s − j_2 + k_5,

and so if we take k_5 = 0, then k_1 and k_2 are completely determined and give a valid set. Next, since we want to range over all possible values that give rise to the same weight, the linear dependence α^(1) = 2α^(2) + α^(5) implies that we may trade one copy of α^(1) for 2α^(2) + α^(5), i.e., replace (k_1, k_2, k_5) by (k_1 − 1, k_2 + 2, k_5 + 1). Note that Equation (<ref>), and hence Equation (<ref>), still holds under this replacement, but from Equation (<ref>) and Equation (<ref>), we have k_5 ≤ j_2 (so we can perform this replacement at most j_2 times). However, we can also only perform it at most s − j_1 − j_2 − 2j_3 − j_4 times, since that is the maximum value of k_1. Thus, we have exactly M possible values for (k_1, k_2, k_3, k_4, k_5).

§.§ Type E_7^(1)

For r = 7 in type E_7^(1), we have RC(B^{7,s}) = RC(B^{7,s}; sΛ_7). Therefore the filling map fill : B^{7,s} → T^{7,s} is the identity map.

Consider the KR crystal B^{1,s} of type E_7^(1). We have

RC(B^{1,s}) = ⊕_{k=0}^s RC(B^{1,s}; kΛ_1).

Moreover, the highest weight rigged configurations in RC(B^{1,s}) are given by ν(k∗[α]) with all riggings 0, where

α = 2α_1 + 2α_2 + 3α_3 + 4α_4 + 3α_5 + 2α_6 + α_7 = Λ_1,

and cc(ν, J) = k.

Similar to the proof of Proposition <ref>.

We define the filling map fill : B^{1,s} → T^{1,s} on a classically highest weight element b ∈ B((s-k)Λ_1) ⊆ B^{1,s} by defining fill(b) as the tableau with the first s-k columns as 7, 1, the next ⌊k/2⌋ columns as the displayed pair of columns with entries 7, · and 7, 1, and, if k is odd, the last column as 7, ·. We then extend fill as a classical crystal isomorphism.

Let fill : B^{1,s} → T^{1,s} be given by Definition <ref> and ι be the natural (classical) crystal isomorphism. We have Φ = fill ∘ ι on classically highest weight elements.

Similar to the proof of Proposition <ref>.

Consider the KR crystal B^{2,s} of type E_7^(1). We have

RC(B^{2,s}) = ⊕_{k=0}^s RC(B^{2,s}; (s-k)Λ_2 + kΛ_7).

Moreover, the highest weight rigged configurations in RC(B^{2,s}) are given by ν(k∗[α]) with all riggings 0, where

α = α_1 + 2α_2 + 2α_3 + 3α_4 + 2α_5 + α_6 = Λ_2 − Λ_7,

and cc(ν, J) = k.

Similar to the proof of Proposition <ref>.

We define the filling map fill : B^{2,s} → T^{2,s} on a classically highest weight element b ∈ B((s-k)Λ_2 + kΛ_7) ⊆ B^{2,s} by defining fill(b) as the tableau with the first s-k columns as 7, 1, 2, the next ⌊k/2⌋ columns as the displayed pair of columns with entries 7, 1, 7 and 7, 1, 2, and, if k is odd, the last column as 7, 1, 7. We then extend fill as a classical crystal isomorphism.

Let fill : B^{2,s} → T^{2,s} be given by Definition <ref> and ι be the natural (classical) crystal isomorphism. We have Φ = fill ∘ ι on classically highest weight elements.

Similar to the proof of Proposition <ref>.

Consider the KR crystal B^{6,s} of type E_7^(1). We have

RC(B^{6,s}) = ⊕_{k_1 + k_2 ≤ s; k_1,k_2 ∈ ℤ_{≥0}} RC(B^{6,s}; (s − k_1 − k_2)Λ_6 + k_2Λ_1).

Moreover, the highest weight rigged configurations in RC(B^{6,s}) are given by ν(k_1∗[α^(1)] + k_2∗[α^(2)]), where

α^(1) = 2α_1 + 3α_2 + 4α_3 + 6α_4 + 5α_5 + 4α_6 + 2α_7 = Λ_6,
α^(2) = α_2 + α_3 + 2α_4 + 2α_5 + 2α_6 + α_7 = Λ_6 − Λ_1,

with all riggings 0 and cc(ν, J) = 2k_1 + k_2.

Similar to the proof of Proposition <ref>, as α^(1) > α^(2).

Consider the KR crystal B^{3,s} of type E_7^(1).
We have

RC(B^{3,s}) = ⊕_λ RC(B^{3,s}; λ)^{⊕ (1 + k_2 − k_4 − 2k_5)},

where λ = sΛ_3 − ∑_{i=1}^5 k_i α^(i) with

α^(1) = 3α_1 + 4α_2 + 6α_3 + 8α_4 + 6α_5 + 4α_6 + 2α_7 = Λ_3,
α^(2) = α_1 + 2α_2 + 3α_3 + 4α_4 + 3α_5 + 2α_6 + α_7 = Λ_3 − Λ_1,
α^(3) = α_1 + α_2 + 2α_3 + 2α_4 + α_5 = Λ_3 − Λ_6,
α^(4) = α_1 + α_3 = Λ_1 + Λ_3 − Λ_4,
α^(5) = α_1 = 2Λ_1 − Λ_3,

such that
* k_1 + k_2 + k_3 + k_4 ≤ s,
* k_i ≥ 0 for i = 1,2,3,4,5, and
* k_2 ≥ k_4 + 2k_5.

Moreover, the highest weight rigged configurations in RC(B^{3,s}) are given by

ν( k_1∗[α^(1)] + k_2∗[α^(2)] + k_3∗[α^(3)] + k_4∗[α^(4)] + k_5∗[α^(5)] ),

with all riggings 0 except for x in the first row of ν^(1), which satisfies 0 ≤ x ≤ k_2 − k_4 − 2k_5, and cc(ν, J) = 3k_1 + k_2 + k_3 + k_4 + k_5 + x.

Similar to the proof of Proposition <ref>.

We note that our graded decompositions for r = 1,2,3,6,7 agree with the conjectures of <cit.>, where the case of r = 3 is similar to the proof of Proposition <ref>.

§.§ Type E_8^(1)

For type E_8^(1), we do not have any minuscule nodes, as there is no diagram automorphism. Thus, we begin with the adjoint node.

Consider the KR crystal B^{8,s} of type E_8^(1). We have

RC(B^{8,s}) = ⊕_{k=0}^s RC(B^{8,s}; (s-k)Λ_8).

Moreover, the highest weight rigged configurations in RC(B^{8,s}) are given by ν(k∗[α]) with all riggings 0, where

α = 2α_1 + 3α_2 + 4α_3 + 6α_4 + 5α_5 + 4α_6 + 3α_7 + 2α_8 = Λ_8,

and cc(ν, J) = k.

The proof is similar to that of Proposition <ref>.

We define the filling map fill : B^{8,s} → T^{8,s} on a classically highest weight element b ∈ B((s-k)Λ_8) ⊆ B^{8,s} by defining fill(b) as the tableau with the first s-k columns as 8, the next ⌊k/2⌋ columns as (8), and, if k is odd, the last column as ∅. We then extend fill as a classical crystal isomorphism.

Let fill : B^{8,s} → T^{8,s} be given by Definition <ref> and ι be the natural (classical) crystal isomorphism. We have Φ = fill ∘ ι on classically highest weight elements.

Similar to the proof of Proposition <ref>.

Consider the KR crystal B^{7,s} of type E_8^(1). We have

RC(B^{7,s}) = ⊕_λ RC(B^{7,s}; λ)^{⊕ (1 + k_2 − k_4 − 2k_5)},

where

α^(1) = 4α_1 + 6α_2 + 8α_3 + 12α_4 + 10α_5 + 8α_6 + 6α_7 + 3α_8 = Λ_7,
α^(2) = 2α_1 + 3α_2 + 4α_3 + 6α_4 + 5α_5 + 4α_6 + 3α_7 + α_8 = Λ_7 − Λ_8,
α^(3) = α_2 + α_3 + 2α_4 + 2α_5 + 2α_6 + 2α_7 + α_8 = Λ_7 − Λ_1,
α^(4) = α_7 + α_8 = Λ_7 + Λ_8 − Λ_6,
α^(5) = α_8 = 2Λ_8 − Λ_7,

and λ = sΛ_7 − ∑_{i=1}^5 k_i α^(i), such that
* k_1 + k_2 + k_3 + k_4 ≤ s,
* k_i ≥ 0 for i = 1,2,3,4,5, and
* k_2 ≥ k_4 + 2k_5.

Moreover, the highest weight rigged configurations in RC(B^{7,s}) are given by

ν(k_1∗[α^(1)] + k_2∗[α^(2)] + k_3∗[α^(3)] + k_4∗[α^(4)] + k_5∗[α^(5)]),

with all riggings 0 except for x in the first row of ν^(8), which satisfies 0 ≤ x ≤ k_2 − k_4 − 2k_5, and cc(ν, J) = 3k_1 + k_2 + k_3 + k_4 + k_5 + x.

Similar to the proof of Proposition <ref>.

Consider the KR crystal B^{1,s} of type E_8^(1). We have

RC(B^{1,s}) = ⊕_{k_1,k_2 ≥ 0; k_1+k_2 ≤ s} RC(B^{1,s}; (s − k_1 − k_2)Λ_1 + k_2Λ_8).

Moreover, the highest weight rigged configurations in RC(B^{1,s}) are given by ν(k_1∗[α^(1)] + k_2∗[α^(2)]), where

α^(1) = 4α_1 + 5α_2 + 7α_3 + 10α_4 + 8α_5 + 6α_6 + 4α_7 + 2α_8 = Λ_1,
α^(2) = 2α_1 + 2α_2 + 3α_3 + 4α_4 + 3α_5 + 2α_6 + α_7 = Λ_1 − Λ_8,

with all riggings 0 and cc(ν, J) = 2k_1 + k_2.

Similar to the proof of Proposition <ref>, as α^(1) > α^(2).

We note that our graded decompositions for r = 1,7,8 agree with the conjectures of <cit.>, where the case of r = 7 is similar to the proof of Proposition <ref>. The cases of r = 4 in type E_6^(1), r = 3 in type E_7^(1), and r = 7 in type E_8^(1) are all nodes at distance 2 from the affine node, and all have the same graded decompositions.
Thus it would be interesting to see if there is a uniform description of these KR crystals, and additionally to compare it with B^{r,s} for r = 1, 3 in type D_n^(1) and r = 2, n-2 in type A_n^(1).

§.§ Type E_6^(2)

For r = 1, this is immediately deduced from the devirtualization of Proposition <ref> (r = 2 in type E_6^(1)), as γ_a = 1 for all a ∈ I.

Consider the KR crystal B^{1,s} of type E_6^(2). We have

RC(B^{1,s}) = ⊕_{k=0}^s RC(B^{1,s}; (s-k)Λ_1).

Moreover, the highest weight rigged configurations in RC(B^{1,s}) are given by ν(k∗[α]) with all riggings 0, where

α = 2α_1 + 3α_2 + 2α_3 + α_4 = Λ_1,

and cc(ν, J) = k.

The proof is similar to that of Proposition <ref>, as the ambient Kleber tree is the same as the virtual Kleber tree.

We define the filling map fill : B^{1,s} → T^{1,s} on a classically highest weight element b ∈ B((s-k)Λ_1) ⊆ B^{1,s} by defining fill(b) as the tableau with the first s-k columns as 1, the next ⌊k/2⌋ columns as (1), and, if k is odd, the last column as ∅. We then extend fill as a classical crystal isomorphism.

Let fill : B^{1,s} → T^{1,s} be given by Definition <ref> and ι be the natural (classical) crystal isomorphism. We have Φ = fill ∘ ι on classically highest weight elements.

Similar to the proof of Proposition <ref>.

Consider the KR crystal B^{2,s} of type E_6^(2). We have

RC(B^{2,s}) = ⊕_λ RC(B^{2,s}; λ)^{⊕ (1 + k_2 − k_4 − 2k_5)},

where

α^(1) := 3α_1 + 6α_2 + 4α_3 + 2α_4 = Λ_2,
α^(2) := α_1 + 3α_2 + 2α_3 + α_4 = Λ_2 − Λ_1,
α^(3) := α_1 + 2α_2 + α_3 = Λ_2 − Λ_4,
α^(4) := α_1 + α_2 = Λ_1 + Λ_2 − Λ_3,
α^(5) := α_1 = 2Λ_1 − Λ_2,

and λ = sΛ_2 − ∑_{i=1}^5 k_i α^(i), such that
* k_1 + k_2 + k_3 + k_4 ≤ s,
* k_i ≥ 0 for i = 1,2,3,4,5, and
* k_2 ≥ k_4 + 2k_5.

Moreover, the highest weight rigged configurations in RC(B^{2,s}) are given by

ν(k_1∗[α^(1)] + k_2∗[α^(2)] + k_3∗[α^(3)] + k_4∗[α^(4)] + k_5∗[α^(5)]),

with all riggings 0 except for x in the first row of ν^(1), which satisfies 0 ≤ x ≤ k_2 − k_4 − 2k_5, and cc(ν, J) = 3k_1 + k_2 + k_3 + k_4 + k_5 + x.

Note that α^(1) > α^(2) > α^(3) > α^(4) > α^(5) component-wise when expressed in terms of {α_i}_{i ∈ I_0}. Thus any sequence (k_1, k_2, k_3, k_4, k_5) uniquely determines a path in the ambient Kleber tree, which is the same tree constructed in Proposition <ref>. Hence the claim follows from Definition <ref>, as all α^(i) are symmetric with respect to the diagram automorphism ϕ and γ_a = 1 for all a ∈ I.

Consider the KR crystal B^{4,s} of type E_6^(2). We have

RC(B^{4,s}) = ⊕_{k_1,k_2 ≥ 0; k_1+k_2 ≤ s} RC(B^{4,s}; k_2Λ_1 + (s − k_1 − k_2)Λ_4).

Moreover, the highest weight rigged configurations in RC(B^{4,s}) are given by ν(k_1∗[α^(1)] + k_2∗[α^(2)]), where

α^(1) = 2α_1 + 4α_2 + 3α_3 + 2α_4 = Λ_4,
α^(2) = α_2 + α_3 + α_4 = Λ_4 − Λ_1,

with all riggings 0 and cc(ν, J) = 2k_1 + k_2.

We note that α^(1) > α^(2) (and likewise for their ambient counterparts). The claim follows from Definition <ref>.

It is straightforward to see that our graded decompositions agree with those conjectured in <cit.>, where the r = 2 case is similar to the proof of Proposition <ref>.

§.§ Type F_4^(1)

We can also describe the set RC(B^{r,s}) for r = 1,2,4 in type F_4^(1). For r = 1,2 in type F_4^(1), this is the same as for type E_6^(2) except that ν^(3) and ν^(4) are scaled by 2. This follows from Definition <ref>, the fact that γ_{0,1,2} = 2, and the fact that γ_{3,4} = 1.

Consider the KR crystal B^{4,s} of type F_4^(1).
We have

RC(B^{4,s}) = ⊕_{k_1,k_2 ≥ 0; k_1+k_2 ≤ s/2} RC(B^{4,s}; k_2Λ_1 + (s − 2k_1 − 2k_2)Λ_4).

Moreover, the highest weight rigged configurations in RC(B^{4,s}) are given by

ν^(1) = (k_1, k_1),
ν^(2) = (k_1+k_2, k_1, k_1, k_1),
ν^(3) = (2k_1+2k_2, 2k_1, 2k_1),
ν^(4) = (2k_1+2k_2, 2k_1),

with all riggings 0.

Similar to the proof of Proposition <ref>, except that we select all nodes at even depths, together with the note above about devirtualization.

Given the devirtualization map, the fact that all nodes for r = 1,2,4 appear at even levels in the ambient Kleber tree, and Proposition <ref>, it is straightforward to see that our crystals agree with the conjectured decompositions of <cit.>.

§ OUTLOOK

For untwisted non-simply-laced affine types, the author believes that there is a way to modify the description of the rigged configurations such that each partition is scaled by 1/T_r as in Remark <ref>. In particular, this modification will be similar to the description given here, and the proof would be similar and uniform. We note that this was done for B^{1,1} in types C_n^(1) and B_n^(1) in <cit.>. However, we need to take B^{1,2} for type C_n^(1) and B^{2,1} for type B_n^(1) in order to get the adjoint node (and have a perfect crystal of level 1). As such, a modification to our description would be needed. It seems plausible that the extra possibility of case (Q), a singular row of length one less than previously selected, as given in δ_1 of type B_n^(1) from <cit.>, is the necessary modification. For type G_2^(1), this is an open problem from <cit.>, which we give here in more generality.

Describe explicitly the map δ_θ for 𝔤 of untwisted non-simply-laced affine type.

§ KR TABLEAUX FOR FUNDAMENTAL WEIGHTS

In this section, we list the classically highest weight KR tableaux for B^{r,1} for types E_{6,7,8}^(1), E_6^(2), F_4^(1), and G_2^(1).

Recall that highest weight rigged configurations must have p_i^(a) ≥ 0 for all a ∈ I_0 and i ∈ ℤ_{>0}. Moreover, all rigged configurations in this section will have p_i^(a) = 0 except for possibly one pair (b, j) where p_j^(b) = 1. Therefore we describe a rigged configuration simply by its configuration ν, and if p_i^(a) = 1, then we write the corresponding rigging x as the subscript (x). We also write the column tableau [x_1 x_2 x_3 ⋯ x_r]^t as x_1, x_2, x_3, …, x_r. In type E_7^(1) for B^{4,1}, we denote the rigged configuration [displayed in the original as seven partition diagrams with vacancy numbers and riggings; omitted here] by (1, 11, 1_(1) 1_(0), 1111, 111, 11, 1) or, more compactly, (1, 1^2, 1^2_(1,0), 1^4, 1^3, 1^2, 1).
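For readers following along in SageMath, the rigged configuration just described can be entered with the same constructor that is used in the session of Appendix <ref>; this is only an illustrative sketch, and we assume the stated partitions and riggings are admissible input for that constructor.

sage: RC = RiggedConfigurations(['E',7,1], [[4,1]])
sage: nu = RC(partition_list=[[1],[1,1],[1,1],[1,1,1,1],[1,1,1],[1,1],[1]],
....:         rigging_list=[[0],[0,0],[1,0],[0,0,0,0],[0,0,0],[0,0],[0]])
sage: nu in RC.module_generators   # it appears among the classically highest weight elements
True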
In the remaining part of this section, we give the KR tableaux for B^r,1 for the exceptional types (except for r = 4, 5 in type E_8^(1), which can be generated using SageMath <cit.>).§.§ Type E_6^(1) r=1(∅, ∅, ∅, ∅, ∅, ∅) ↦1 r=2 (∅, ∅, ∅, ∅, ∅, ∅) ↦1,6, 2 (1, 1^2, 1^2, 1^3, 1^2, 1) ↦1,6, r=3 (∅, ∅, ∅, ∅, ∅, ∅) ↦1,3(1, 1, 1^2, 1^2, 1, ∅) ↦1,6 r=4 (∅, ∅, ∅, ∅, ∅, ∅) ↦1,3,4(∅, 1, 1, 1^2, 1, ∅) ↦1,3, 16(1, 1_(0), 1, 1^3, 1^2, 1) ↦1,6, 2(1, 1_(1), 1^2, 1^3, 1^2, 1) ↦1,3, 2 (1^2, 1^3, 1^4, 1^6, 1^4, 1^2) ↦1,6,r=5 (∅, ∅, ∅, ∅, ∅, ∅) ↦1,6, 2 ,5(∅, 1, 1, 1^2, 1^2, 1) ↦1,6, 2 , 1r=6 (∅, ∅, ∅, ∅, ∅, ∅) ↦1,6 §.§ Type E_7^(1) r=1(∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦7, 1 (1^2, 1^2, 1^3, 1^4, 1^3, 1^2, 1) ↦7,r=2 (∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦7, 1 ,2(1, 1^2, 1^2, 1^3, 1^2, 1, ∅) ↦7, 1 ,7 r=3 (∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦7, 1 ,2,3(1, 1, 1^2, 1^2, 1, ∅, ∅) ↦7, 1 ,2,6(1_(0), 1^2, 1^3, 1^4, 1^3, 1^2, 1) ↦7, 1 ,7, 1 (1_(1), 1^2, 1^3, 1^4, 1^3, 1^2, 1) ↦7, 1 , 2, 1 (1^3, 1^4, 1^6, 1^8, 1^6, 1^4, 1^2) ↦7, 1 ,7,r=4 (∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦7, 6 , 5 , 4 (∅, 1, 1, 1^2, 1, ∅, ∅) ↦7, 6 , 5 , 16(∅, 1^2, 1^2, 1^4, 1^3, 1^2, 1) ↦7, 6 , 17, 1 (1, 1_(0), 1^2, 1^3, 1^2, 1, ∅) ↦7, 6 , 17,2(1, 1_(1), 1^2, 1^3, 1^2, 1, ∅) ↦7, 6 , 5 , 27(1, 1^2, 1^2_(0,0), 1^4, 1^3, 1^2, 1) ↦7, 1 ,2,3(1, 1^2, 1^2_(1,0), 1^4, 1^3, 1^2, 1) ↦7, 6 , 17,3 (1, 1^2, 1^2_(1,1), 1^4, 1^3, 1^2, 1) ↦7, 6 , 5 , 3 (1^2, 1^3, 1^4, 1^6, 1^4, 1^2, ∅) ↦7, 6 , 1 7,7(1^2, 1^3, 1^4, 1^6, 1^4, 1^2_(0,0), 1) ↦7, 1 ,2,6(1^2, 1^3, 1^4, 1^6, 1^4, 1^2_(1,0), 1) ↦7, 6 , 17,6 (1^2, 1^3, 1^4, 1^6, 1^4, 1^2_(1,1), 1) ↦7, 6 , 5 ,6(1^2_(0,0), 1^4, 1^5, 1^8, 1^6, 1^4, 1^2) ↦7, 1 ,7, 1 (1^2_(1,0), 1^4, 1^5, 1^8, 1^6, 1^4, 1^2) ↦7, 1 ,2, 1 (1^2_(1,1), 1^4, 1^5, 1^8, 1^6, 1^4, 1^2) ↦7, 6 , 17, (1^4, 1^6, 1^8, 1^12, 1^9, 1^6, 1^3) ↦7, 1 ,7, (2, 21, 2^2, 2^2 1^2, 2 1^2, 1^2, 1) ↦7, 6 , 2 ,6(1^2, 2 1^2, 2 1^3, 2^2 1^4, 2^2 1^2, 2^2, 2) ↦7, 6 , 2 ,1(2^2, 2^2 1^2, 2^3 1^2, 2^4 1^4, 2^3 1^3, 2^2 1^2, 21) ↦7, 6 ,7,r=5 (∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦7, 6 , 5(∅, 1,1, 1^2, 1^2, 1, ∅) ↦7, 6 , 17(1, 1_(0), 1^2, 1^3, 1^3, 1^2, 1) ↦7, 1 ,2(1, 1_(1), 1^2, 1^3, 1^3, 1^2, 1) ↦7, 6 , 2(1^2, 1^3, 1^4, 1^6, 1^5, 1^3, 1_(0)) ↦7, 1 ,7(1^2, 1^3, 1^4, 1^6, 1^5, 1^3, 1_(1)) ↦7, 6 ,7 r=6 (∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦7, 6 (∅, 1, 1, 1^2, 1^2, 1^2, 1) ↦7, 1 (1^2, 1^3, 1^4, 1^6, 1^5, 1^4, 1^2) ↦7,r=7 (∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦7 §.§ Type E_8^(1) r=1(∅, ∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦8, 1(1^2, 1^2, 1^3, 1^4, 1^3, 1^2, 1, ∅) ↦8, ∅(1^4, 1^5, 1^7, 1^10, 1^8, 1^6, 1^4, 1^2) ↦∅, ∅ r=2 (∅, ∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦8, 1,2(1, 1^2, 1^2, 1^3, 1^2, 1, ∅, ∅) ↦8, 1,7(1_(0), 1^3, 1^3, 1^5, 1^4, 1^3, 1^2, 1) ↦8, 1,1(1_(1), 1^3, 1^3, 1^5, 1^4, 1^3, 1^2, 1) ↦8, 1, ∅(1^3, 1^5, 1^6, 1^9, 1^7, 1^5, 1^3, 1_(0)) ↦8, ∅, 8(1^3, 1^5, 1^6, 1^9, 1^7, 1^5, 1^3, 1_(1)) ↦8, ∅, ∅(1^5, 1^8, 1^10, 1^15, 1^12, 1^9, 1^6, 1^3) ↦∅, ∅, ∅ r=3 (∅, ∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦8, 1,2,3(1, 1, 1^2, 1^2, 1, ∅, ∅, ∅) ↦8, 1,2,6(1_(0), 1^2, 1^3, 1^4, 1^3, 1^2, 1, ∅) ↦8, 1,7, 1 8(1_(1), 1^2, 1^3, 1^4, 1^3, 1^2, 1, ∅) ↦8, 1,2, 1 8(1^2, 1^2_(0,0), 1^4, 1^5, 1^4, 1^3, 1^2, 1) ↦8, 1,7, 2 (1^2, 1^2_(1,0), 1^4, 1^5, 1^4, 1^3, 1^2, 1) ↦8, 1,2,2(1^2, 1^2_(1,1), 1^4, 1^5, 1^4, 1^3, 1^2, 1) ↦8, 1,2, ∅(1^3, 1^4, 1^6, 1^8, 1^6, 1^4, 1^2, ∅) ↦8, 1,7,8 8(1^3, 1^4, 1^6, 1^8, 1^6, 1^4, 1^2_(0,0), 1) ↦8, 1,1,7(1^3, 1^4, 1^6, 1^8, 1^6, 1^4, 1^2_(1,0), 1) ↦8, 1,7,7(1^3, 1^4, 1^6, 1^8, 1^6, 1^4, 1^2_(1,1), 1) ↦8, 1,7, ∅(1^3_(0,0,0), 1^5, 1^7, 1^10, 1^8, 1^6, 1^4, 1^2) ↦8, ∅,8, 1 (1^3_(1,0,0), 1^5, 1^7, 1^10, 1^8, 1^6, 1^4, 1^2) ↦8, 1,1,1(1^3_(1,1,0), 1^5, 1^7, 1^10, 1^8, 1^6, 1^4, 1^2) ↦8, 1,1, ∅(1^3_(1,1,1), 1^5, 1^7, 1^10, 1^8, 1^6, 1^4, 
1^2) ↦8, 1, ∅, ∅(1^5, 1^7, 1^10, 1^14, 1^11, 1^8, 1^5, 1^2_(0,0)) ↦8, ∅,8,8(1^5, 1^7, 1^10, 1^14, 1^11, 1^8, 1^5, 1^2_(1,0)) ↦8, ∅,8, ∅(1^5, 1^7, 1^10, 1^14, 1^11, 1^8, 1^5, 1^2_(1,1)) ↦8, ∅, ∅, ∅(1^7, 1^10, 1^14, 1^20, 1^16, 1^12, 1^8, 1^4) ↦∅, ∅, ∅, ∅(21, 2^2, 2^2 1^2, 2^3 1^2, 2^2 1^2, 21^2, 1^2, 1) ↦8, 1,2,7(1^3, 21^3, 21^5, 2^2 1^6, 2^2 1^4, 2^2 1^2, 2^2, 2) ↦8, 1,7, 1 (2^2 1_(0), 2^2 1^3, 2^3 1^4, 2^4 1^6, 2^3 1^5, 2^2 1^4, 2 1^3, 1^2) ↦8, 1,1,8(2^2 1_(1), 2^2 1^3, 2^3 1^4, 2^4 1^6, 2^3 1^5, 2^2 1^4, 2 1^3, 1^2) ↦8, 1, ∅,8(2^2 1^3, 2^3 1^4, 2^4 1^6, 2^6 1^8, 2^5 1^6, 2^4 1^4, 2^3 1^2, 2^2) ↦8, ∅, ∅,r=6 (∅, ∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦8, 7, 6(∅, 1, 1, 1^2, 1^2, 1^1, 1, ∅) ↦8, 7, 1 8(1, 1_(0), 1^2, 1^3, 1^3, 1^3, 1^2, 1) ↦8, 1,2(1, 1_(1), 1^2, 1^3, 1^3, 1^3, 1^2, 1) ↦8, 7, 2 (1^2, 1^3, 1^4, 1^6, 1^5, 1^4, 1^2, ∅) ↦8, 7,8 8(1^2, 1^3, 1^4, 1^6, 1^5, 1^4, 1^2_(0,0), 1) ↦8, 1,7(1^2, 1^3, 1^4, 1^6, 1^5, 1^4, 1^2_(1,0), 1) ↦8, 7,7(1^2, 1^3, 1^4, 1^6, 1^5, 1^4, 1^2_(1,1), 1) ↦8, 7, ∅(1^2_(0,0), 1^4, 1^5, 1^8, 1^7, 1^6, 1^4, 1^2) ↦8,8, 1 (1^2_(1,0), 1^4, 1^5, 1^8, 1^7, 1^6, 1^4, 1^2) ↦8, 1 ,1(1^2_(1,1), 1^4, 1^5, 1^8, 1^7, 1^6, 1^4, 1^2) ↦8, 1 8, ∅(1^4, 1^6, 1^8, 1^12, 1^10, 1^8, 1^5, 1^2_(0,0)) ↦8,8,8(1^4, 1^6, 1^8, 1^12, 1^10, 1^8, 1^5, 1^2_(1,0)) ↦8,8, ∅(1^4, 1^6, 1^8, 1^12, 1^10, 1^8, 1^5, 1^2_(1,1)) ↦8, ∅, ∅(1^6, 1^9, 1^12, 1^18, 1^15, 1^12, 1^8, 1^4) ↦∅, ∅, ∅(1^2, 21^2, 21^3, 2^2 1^4, 2^2 1^3, 2^2 1^2, 2^2, 2) ↦8, 7, 1 (2^2, 2^2 1^2, 2^3 1^2, 2^4 1^4, 2^3 1^4, 2^2 1^4, 21^3, 1^2) ↦8, 1,8(2^2 1^2, 2^3 1^3, 2^4 1^4, 2^6 1^6, 2^5 1^5, 2^4 1^4, 2^3 1^2, 2^2) ↦8, ∅,r=7 (∅, ∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦8, 7(∅, 1, 1, 1^2, 1^2, 1^2, 1^2, 1) ↦8, 1(1^2, 1^3, 1^4, 1^6, 1^5, 1^4, 1^3, 1_(0)) ↦8, 8(1^2, 1^3, 1^4, 1^6, 1^5, 1^4, 1^3, 1_(1)) ↦8, ∅(1^4, 1^6, 1^8, 1^12, 1^10, 1^8, 1^6, 1^3) ↦∅, ∅ r=8 (∅, ∅, ∅, ∅, ∅, ∅, ∅, ∅) ↦8(1^2, 1^3, 1^4, 1^6, 1^5, 1^4, 1^3, 1^2) ↦∅§.§ Type E_6^(2) r=1 (∅, ∅, ∅, ∅) ↦1(1^2, 1^3, 1^2, 1) ↦∅ r=2 (∅, ∅, ∅, ∅) ↦1,2(1, 1^2, 1, ∅) ↦1,4(1_(0), 1^3, 1^2, 1) ↦1,1(1_(1), 1^3, 1^2, 1) ↦1, ∅(1^3, 1^6, 1^4, 1^2) ↦∅, ∅ r=3 (∅, ∅, ∅, ∅) ↦1,2,3(∅, 1, 1, ∅) ↦1,2, 14(∅, 1^2, 1^2, 1) ↦1,2, 11(1, 1^2_(0,0), 1^2, 1) ↦1,4, 2(1, 1^2_(1,0), 1^2, 1) ↦1,2,2(1, 1^2_(1,1), 1^2, 1) ↦1,2, ∅(1^2, 1^4, 1^3, 1_(0)) ↦1, ∅,4(1^2, 1^4, 1^3, 1_(1)) ↦1,4,∅(1^2_(0,0), 1^5, 1^4, 1^2) ↦1,1,1(1^2_(1,0), 1^5, 1^4, 1^2) ↦1,1, ∅(1^2_(1,1), 1^5, 1^4, 1^2) ↦1, ∅,∅(1^4, 1^8, 1^6, 1^3) ↦∅, ∅, ∅(2, 2^2, 21, 1) ↦1,2,4(1^2, 2 1^3, 2 1^2, 2) ↦1,4, 1(2^2, 2^3 1^2, 2^2 1^2, 21) ↦1, ∅,r=4 (∅, ∅, ∅, ∅) ↦1,4(∅, 1, 1, 1) ↦1, ∅(1^2, 1^4, 1^3, 1^2) ↦∅, ∅§.§ Type F_4^(1) We follow Proposition <ref> to describe the elements of B(_4).r=1 (∅, ∅, ∅, ∅) ↦4, 1(1^2, 1^3, 2^2, 2) ↦4,r=4 (∅, ∅, ∅, ∅) ↦4, 3, 2(1, 1^2, 2, ∅) ↦4, 3, 44(1_(0), 1^3, 2^2, 2) ↦4, 4, 1(1_(1), 1^3, 2^2, 2) ↦4, 3, 1(1^3, 1^6, 2^4, 2^2) ↦4, 4,r=3 (∅, ∅, ∅, ∅) ↦4, 3(1, 1^2, 21, 1) ↦4, 4 r=4 (∅, ∅, ∅, ∅) ↦4§.§ Type G_2^(1) r=1 (∅, ∅) ↦1 r=2 (∅, ∅) ↦1,2(3, 1^2) ↦1, § EXAMPLES WITH SAGEMATH We give some examples using SageMath <cit.>, where rigged configurations, KR tableaux, and the bijection Φ has been implemented by the author.We first construct Example <ref>.sage: RC=RiggedConfigurations(['E',6,2], [[1,1]]*4) sage: n=RC(partition_list=[[2,2,1,1],[2,2,2,1,1,1],[2,2,1,1],[2,1]], ....:rigging_list=[[1,0,2,1],[0,0,0,0,0,0],[0,0,0,0],[0,0]]) sage: ascii_art(n) 1[ ][ ]10[ ][ ]00[ ][ ]00[ ][ ]0 1[ ][ ]00[ ][ ]00[ ][ ]00[ ]0 2[ ]2 0[ ][ ]00[ ]02[ ]1 0[ ]0 0[ ]00[ ]00[ ]0sage: n.to_tensor_product_of_kirillov_reshetikhin_tableaux().pp()(1, -2) (X)(-1, 2) (X) E (X)(1,)Next, we construct 
Example <ref>.

sage: RC = RiggedConfigurations(['E',7,1], [[4,1]])
sage: nu = RC.module_generators[6]
sage: ascii_art(nu)
[ascii art of the rigged configuration omitted: seven partitions whose displayed vacancy numbers and riggings are all 0 except for a single 1]
sage: nu.to_tensor_product_of_kirillov_reshetikhin_tableaux().pp()
(7,) (-7, 6) (-6, 7, 1) (-1, -7, 3)
 | http://arxiv.org/abs/1703.08945v2 | {
"authors": [
"Travis Scrimshaw"
],
"categories": [
"math.CO",
"math.RT",
"05E10, 17B37, 05A19, 81R50, 82B23"
],
"primary_category": "math.CO",
"published": "20170327062432",
"title": "Uniform description of the rigged configuration bijection"
} |
 | http://arxiv.org/abs/1703.08749v1 | {
"authors": [
"Márk Mezei",
"Silviu S. Pufu",
"Yifan Wang"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20170326003427",
"title": "A 2d/1d Holographic Duality"
} |
Transductive Zero-Shot Learning with a Self-training dictionary approach Yunlong Yu, Zhong Ji, Xi Li, Jichang Guo, Zhongfei Zhang, Haibin Ling, Fei Wu December 30, 2023 ========================================================================================== Visual scene understanding is an important capability that enables robots to purposefully act in their environment. In this paper, we propose a novel deep neural network approach to predict semantic segmentation from RGB-D sequences. The key innovation is to train our network to predict multi-view consistent semantics in a self-supervised way. At test time, its semantic predictions can be fused more consistently in semantic keyframe maps than the predictions of a network trained on individual views. We base our network architecture on a recent single-view deep learning approach to RGB and depth fusion for semantic object-class segmentation and enhance it with multi-scale loss minimization. We obtain the camera trajectory using RGB-D SLAM and warp the predictions of RGB-D images into ground-truth annotated frames in order to enforce multi-view consistency during training. At test time, predictions from multiple views are fused into keyframes. We propose and analyze several methods for enforcing multi-view consistency during training and testing. We evaluate the benefit of multi-view consistency training and demonstrate that pooling of deep features and fusion over multiple views outperforms single-view baselines on the NYUDv2 benchmark for semantic segmentation. Our end-to-end trained network achieves state-of-the-art performance on the NYUDv2 dataset in single-view segmentation as well as multi-view semantic fusion. § INTRODUCTION Intelligent robots require the ability to understand their environment through parsing and segmenting the 3D scene into meaningful objects. The rich appearance-based information contained in images renders vision a primary sensory modality for this task. In recent years, large progress has been achieved in semantic segmentation of images. Most current state-of-the-art approaches apply deep learning for this task. With RGB-D cameras, appearance as well as shape modalities can be combined to improve the semantic segmentation performance. Less explored, however, is the usage and fusion of multiple views onto the same scene, which appear naturally in the domains of 3D reconstruction and robotics. Here, the camera moves through the environment and captures the scene from multiple view points. Semantic SLAM aims at aggregating several views in a consistent 3D geometric and semantic reconstruction of the environment. In this paper, we propose a novel deep learning approach for semantic segmentation of RGB-D images with multi-view context. We base our network on a recently proposed deep convolutional neural network (CNN) for RGB and depth fusion <cit.> and enhance the approach with multi-scale deep supervision. Based on the trajectory obtained through RGB-D simultaneous localization and mapping (SLAM), we further regularize the CNN training with multi-view consistency constraints as shown in Fig. <ref>. We propose and evaluate several variants to enforce multi-view consistency during training. A shared principle is using the SLAM trajectory estimate to warp network outputs of multiple frames into the reference view with ground-truth annotation. In this way, the network learns features that are invariant under view-point change.
Our semi-supervised training approach also makes better use of the annotated ground-truth data than single-view learning. This alleviates the need for large amounts of annotated training data, which is expensive to obtain. Complementary to our training approach, we aggregate the predictions of our trained network in keyframes to increase segmentation accuracy at testing. The predictions of neighboring images are fused into the keyframe based on the SLAM estimate in a probabilistic way. In experiments, we evaluate the performance gain achieved through multi-view training and fusion at testing over single-view approaches. Our results demonstrate that multi-view max-pooling of feature maps during training best supports multi-view fusion at testing. Overall we find that enforcing multi-view consistency during training significantly improves fusion at test time versus fusing predictions from networks trained on single views. Our end-to-end training achieves state-of-the-art performance on the NYUDv2 dataset in single-view segmentation as well as multi-view semantic fusion. While the fused keyframe segmentation can be directly used in robotic perception, our approach can also be useful as a building block for semantic SLAM using RGB-D cameras. § RELATED WORK Recently, remarkable progress has been achieved in semantic image segmentation using deep neural networks and, in particular, CNNs. On many benchmarks, these approaches excel previous techniques by a great margin. Image-based Semantic Segmentation. As one early attempt, Couprie <cit.> propose a multiscale CNN architecture to combine information at different receptive field resolutions and achieve reasonable segmentation results. Gupta <cit.> integrate depth into the R-CNN approach by Girshick <cit.> to detect objects in RGB-D images. They convert depth into the 3-channel HHA encoding, i.e., disparity, height and angle, and achieve semantic segmentation by training a classifier for superpixels based on the CNN features. Long <cit.> propose a fully convolutional network (FCN) which enables end-to-end training for semantic segmentation. Since CNNs reduce the input spatial resolution by a great factor through pooling layers, the FCN presents an upsampling stage that outputs high-resolution segmentation by fusing low-resolution predictions. Inspired by FCN and auto-encoders <cit.>, encoder-decoder architectures have been proposed to learn upsampling with unpooling and deconvolution <cit.>. For RGB-D images, Eigen <cit.> propose to train CNNs to predict depth, surface normals and semantics with a multi-task network and achieve very good performance. FuseNet <cit.> proposes an encoder-decoder CNN to fuse color and depth cues in an end-to-end training for semantic segmentation, which is shown to be more efficient in learning RGB-D features than direct concatenation of RGB and depth or the use of HHA. Recently, more complex CNN architectures have been proposed that include multi-resolution refinement <cit.>, dilated convolutions <cit.> and residual units (e.g., <cit.>) to achieve state-of-the-art single-image semantic segmentation. Li <cit.> use an LSTM recurrent neural network to fuse RGB and depth cues and obtain smooth predictions. Lin <cit.> design a CNN that corresponds to a conditional random field (CRF) and use piecewise training to learn both unary and pairwise potentials end-to-end. Our approach trains a network on multi-view consistency and fuses the results from multiple view points.
It is complementary to the above single-view CNN approaches. Semantic SLAM. In the domain of semantic SLAM, Salas-Moreno <cit.> developed the SLAM++ algorithm to perform RGB-D tracking and mapping at the object instance level. Hermans <cit.> proposed 3D semantic mapping for indoor RGB-D sequences based on RGB-D visual odometry and a random forest classifier that performs semantic image segmentation. The individual frame segmentations are projected into 3D and smoothed using a CRF on the point cloud. Stückler <cit.> perform RGB-D SLAM and probabilistically fuse the semantic segmentations of individual frames, obtained with a random forest, in multi-resolution voxel maps. Recently, Armeni <cit.> proposed a hierarchical parsing method for large-scale 3D point clouds of indoor environments. They first separate point clouds into disjoint spaces, i.e., single rooms, and then further cluster points at the object level according to handcrafted features. Multi-View Semantic Segmentation. In contrast to the popularity of CNNs for image-based segmentation, it is less common to apply CNNs for semantic segmentation on multi-view 3D reconstructions. Recently, Riegler <cit.> applied 3D CNNs on sparse octree data structures to perform semantic segmentation on voxels. Nevertheless, such volumetric representations may discard details which are present at the original image resolution. McCormac <cit.> proposed to fuse CNN semantic image segmentations on a 3D surfel map <cit.>. He <cit.> propose to fuse CNN semantic segmentations from multiple views in video using superpixels and optical flow information. In contrast to our approach, these methods do not impose multi-view consistency during CNN training and cannot leverage the view-point invariant features learned by our network. Kundu <cit.> extend dense CRFs to videos by associating pixels temporally using optical flow and optimizing their feature similarity. Closely related to our approach for enforcing multi-view consistency is the work of Su et al. <cit.>, who investigate the task of 3D shape recognition. They render multiple views of 3D shape models which are fed into a CNN feature extraction stage that is shared across views. The features are max-pooled across view-points and fed into a second CNN stage that is trained for shape recognition. Our approach uses multi-view pooling for the task of semantic segmentation and is trained using realistic imagery and SLAM pose estimates. Our trained network is able to classify single views, but we demonstrate that multi-view fusion using the network trained on multi-view consistency improves segmentation performance over single-view trained networks. § CNN ARCHITECTURE FOR SEMANTIC SEGMENTATION In this section, we detail the CNN architecture for semantic segmentation of each RGB-D image of a sequence. We base our encoder-decoder CNN on FuseNet <cit.>, which learns rich features from RGB-D data. We enhance the approach with multi-scale loss minimization, which yields additional improvements in segmentation performance. §.§ RGB-D Semantic Encoder-Decoder Fig. <ref> illustrates our CNN architecture. The network follows an encoder-decoder design, similar to previous work on semantic segmentation <cit.>. The encoder extracts a hierarchy of features through convolutional layers and aggregates spatial information by pooling layers to increase the receptive field.
The encoder outputs low-resolution, high-dimensional feature maps, which are upsampled back to the input resolution by the decoder through layers of memorized unpooling and deconvolution. Following FuseNet <cit.>, the network contains two branches to learn features from RGB (ℱ_rgb) and depth (ℱ_d), respectively. The feature maps from the depth branch are consistently fused into the RGB branch at each scale. We denote the fusion by ℱ_rgb ⊕ ℱ_d.

The semantic label set is denoted as ℒ = {1, 2, …, K}, and the category index is indicated with subscript j. Following notational convention, we compute the classification score 𝒮 = (s_1, s_2, …, s_K) at location 𝐱 and map it to the probability distribution 𝒫 = (p_1, p_2, …, p_K) with the softmax function σ(·). Network inference obtains the probability

p_j(𝐱, 𝒲 | ℐ) = σ(s_j(𝐱, 𝒲)) = exp(s_j(𝐱, 𝒲)) / ∑_{k=1}^K exp(s_k(𝐱, 𝒲))

of each pixel 𝐱 in the image being labelled as class j, given the input RGB-D image ℐ and network parameters 𝒲. We use the cross-entropy loss to learn network parameters for semantic segmentation from ground-truth annotations l_gt,

L(𝒲) = −(1/N) ∑_{i=1}^N ∑_{j=1}^K ⟦ j = l_gt(𝐱_i) ⟧ log p_j(𝐱_i, 𝒲 | ℐ),

where N is the number of pixels and ⟦·⟧ denotes the indicator of the ground-truth label. This loss minimizes the Kullback-Leibler (KL) divergence between the predicted distribution and the ground-truth, assuming the ground-truth has a one-hot distribution on the true label.

§.§ Multi-Scale Deep Supervision

The encoder of our network contains five 2×2 pooling layers and downsamples the input resolution by a factor of 32. The decoder learns to refine the low resolution back to the original one with five memorized unpooling layers, each followed by deconvolution. In order to guide the decoder through the successive refinement, we adopt the deeply supervised learning method <cit.> and compute the loss at all upsampling scales. For this purpose, we append a classification layer at each deconvolution scale and compute the loss for the respective resolution of the ground-truth, which is obtained through stochastic pooling <cit.> over the full resolution annotation (see Fig. <ref> for an example).

§ MULTI-VIEW CONSISTENT LEARNING AND PREDICTION

While CNNs have been shown to obtain state-of-the-art semantic segmentation performance on many datasets, most of these studies focus on single views. When observing a scene from a moving camera, such as on a mobile robot, the system obtains multiple different views onto the same objects. The key innovation of this work is to explore the use of temporal multi-view consistency within RGB-D sequences for CNN training and prediction. For this purpose, we perform 3D data association by warping multiple frames into a common reference view. This then enables us to impose multi-view constraints during training. In this section, we describe several variants of such constraints. Notably, these methods can also be used at test time to fuse predictions from multiple views in a reference view.

§.§ Multi-view Data Association Through Warping

Instead of single-view training, we train our network on RGB-D sequences with poses estimated by a SLAM algorithm. We define each training sequence to contain one reference view ℐ_k with ground-truth semantic annotations and several overlapping views ℐ_i that are tracked towards ℐ_k. The relative poses ξ of the neighboring frames are estimated through tracking algorithms such as DVO SLAM <cit.>. In order to impose temporal consistency, we adopt the warping concept from multi-view geometry to associate pixels between view points and introduce warping layers into our CNN. The warping layers synthesize CNN output in a reference view from a different view at any resolution by sampling, given a known pose estimate and the known depth.
The warping layers synthesize CNN output in a reference view from a different view at any resolution by sampling, given a known pose estimate and the known depth. The warping layers can be viewed as a variant of spatial transformers <cit.> with fixed transformation parameters. We now formulate the warping. Given a 2D image coordinate 𝐱∈ℝ^2, the warped pixel location

𝐱^ω := ω(𝐱, ξ) = π(𝐓(ξ) π^-1(𝐱, Z_i(𝐱))),

is determined through the warping function ω(𝐱, ξ), which transforms the location from one camera view to the other using the depth Z_i(𝐱) at pixel 𝐱 in image ℐ_i and the SLAM pose estimate ξ. The function π and its inverse π^-1 project homogeneous 3D coordinates to image coordinates and vice versa, while 𝐓(ξ) denotes the homogeneous transformation matrix derived from pose ξ. Using this association by warping, we synthesize the output of the reference view by sampling the feature maps of neighboring views using bilinear interpolation. Since the interpolation is differentiable, it is straightforward to back-propagate gradients through the warping layers. With a slight abuse of notation, we denote the operation of synthesizing the layer output ℱ given the warping by ℱ^ω := ℱ(ω(𝐱, ξ)). We also apply deep supervision when training for multi-view consistency through warping. As shown in Fig. <ref>, feature maps at each resolution of the decoder are warped into the common reference view. Despite the need to perform warping at multiple scales, the warping grid only needs to be computed once at the input resolution and is normalized to canonical coordinates within the range [-1, 1]. The lower-resolution warping grids can then be generated efficiently through average pooling layers.

§.§ Consistency Through Warp Augmentation

One straightforward way to enforce multi-view segmentation consistency is to warp the predictions of neighboring frames into the ground-truth annotated keyframe and compute a supervised loss there. This approach can be interpreted as a type of data augmentation using the available nearby frames. We implement this consistency method by warping the keyframe into neighboring frames and synthesizing the classification score of the nearby frame from the keyframe's view point. We then compute the cross-entropy loss on this synthesized prediction. Within RGB-D sequences, objects can appear at various scales, image locations and view perspectives, and with color distortion due to uncontrolled lighting as well as shape distortion due to the rolling shutters of RGB-D cameras. Propagating the keyframe annotation into other frames implicitly regularizes the network predictions to be invariant under these transformations.

§.§ Consistency Through Bayesian Fusion

Given a sequence of measurements and predictions at test time, Bayesian fusion is frequently applied to aggregate the semantic segmentations of individual views. Let us denote the semantic labelling of a pixel by y and its measurement in frame i by z_i. We use the notation z^i for the set of measurements up to frame i. According to Bayes' rule,

p(y | z^i) = p(z_i | y, z^i-1) p(y | z^i-1) / p(z_i | z^i-1) = η_i p(z_i | y, z^i-1) p(y | z^i-1).

Suppose the measurements satisfy the i.i.d. condition, i.e.
p(z_i | y, z^i-1) = p(z_i | y), and assume an equal a-priori probability for each class; then Equation (<ref>) simplifies to

p(y | z^i) = η_i p(z_i | y) p(y | z^i-1) = ∏_i η_i p(z_i | y).

Put simply, Bayesian fusion can be implemented by taking the product over the semantic labelling likelihoods of the individual frames at a pixel and normalizing the product to yield a valid probability distribution. This process can also be implemented recursively on a sequence of frames. When training our CNN for multi-view consistency using Bayesian fusion, we warp the predictions of neighboring frames into the keyframe using the SLAM pose estimate. We obtain the fused prediction at each keyframe pixel by summing up the unnormalized log labelling likelihoods instead of the individual frame softmax outputs. Applying the softmax to the sum of log labelling likelihoods yields the fused labelling distribution. This is equivalent to Eq. (<ref>) since

∏_i p_i,j^ω / ∑_k^K ∏_i p_i,k^ω = ∏_i σ(s_i,j^ω) / ∑_k^K ∏_i σ(s_i,k^ω) = σ(∑_i s_i,j^ω),

where s_i,j^ω and p_i,j^ω denote the warped classification scores and probabilities, respectively, and σ(·) is the softmax function as defined in Equation (<ref>).

§.§ Consistency Through Multi-View Max-Pooling

While Bayesian fusion provides an approach to integrate several measurements in probability space, we also explore direct fusion in feature space using multi-view max-pooling of the warped feature maps. We warp the feature maps preceding the classification layers at each scale of our decoder into the keyframe and apply max-pooling over corresponding feature activations at the same warped location to obtain a pooled feature map in the keyframe,

ℱ = max_pool(ℱ^ω_1, ℱ^ω_2, …, ℱ^ω_N).

The fused feature maps are classified and the resulting semantic segmentation is compared to the keyframe ground truth for the loss calculation.

§ EVALUATION

We evaluate our proposed approach on the NYUDv2 RGB-D dataset <cit.>. The dataset provides 1449 pixelwise annotated RGB-D images capturing various indoor scenes and is split into 795 frames for training/validation (trainval) and 654 frames for testing. The original sequences that contain these 1449 images are also available with NYUDv2, whereas sequences are unfortunately not available for other large RGB-D semantic segmentation datasets. Using DVO-SLAM <cit.>, we determine the camera poses of neighboring frames around each annotated keyframe to obtain multi-view sequences. This provides us with a total of 267,675 RGB-D images, although tracking fails for 30 of the 1449 keyframes. Following the original trainval/test split, we use 770 sequences with 143,670 frames for training and 649 sequences with 124,005 frames for testing. For benchmarking, our method is evaluated on the 13-class <cit.> and 40-class <cit.> semantic segmentation tasks. We use the raw depth images without inpainting missing values.

§.§ Training Details

We implemented our approach using the Caffe framework <cit.>. For all experiments, the network parameters are initialized as follows. The convolutional kernels in the encoder are initialized with the pretrained 16-layer VGGNet <cit.> and the deconvolutional kernels in the decoder are initialized using He's method <cit.>. For the first layer of the depth encoder, we average the original three-channel VGG weights to obtain a single-channel kernel. We train the network with stochastic gradient descent (SGD) <cit.> with 0.9 momentum and 0.0005 weight decay.
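The initialization of the depth branch can be illustrated by a short PyTorch-style sketch (our own rendering for illustration; the paper's implementation uses Caffe, and torchvision's vgg16 stands in for the pretrained VGG model):

import torch
import torchvision

vgg = torchvision.models.vgg16(weights='IMAGENET1K_V1')
rgb_kernel = vgg.features[0].weight.data          # shape [64, 3, 3, 3]

# The first conv layer of the depth branch takes a 1-channel input;
# its kernel is the channel-wise average of the pretrained RGB kernel.
depth_conv1 = torch.nn.Conv2d(1, 64, kernel_size=3, padding=1)
depth_conv1.weight.data = rgb_kernel.mean(dim=1, keepdim=True)
depth_conv1.bias.data = vgg.features[0].bias.data.clone()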
The learning rate is set to 0.001 and decays by a factor of 0.9 every 30k iterations. All images are resized to a resolution of 320×240 pixels as input to the network, and the predictions are made at this resolution as well. For downsampling, we use cubic interpolation for RGB images and nearest-neighbor interpolation for depth and label images. During training, we use a minibatch of 6 that comprises two sequences, with one keyframe and two tracking frames for each sequence. We apply random shuffling after each epoch, both across and within sequences. The network is trained until convergence. We observed that multi-view CNN training does not require significantly more iterations to converge. For multi-view training, we sample from the nearest frames first and include 10 further-away frames every 5 epochs. In this way, we account for the fact that tracking errors typically accumulate and image overlap decreases as the camera moves away from the keyframe.

§.§ Evaluation Criteria

We measure the semantic segmentation performance with three criteria: global pixelwise accuracy, average classwise accuracy and average intersection-over-union (IoU) scores. All three criteria can be calculated from the confusion matrix. With K classes, each entry c_ij of the K×K confusion matrix is the total number of pixels belonging to class i that are predicted to be class j. The global pixelwise accuracy is computed as ∑_i c_ii / ∑_ij c_ij, the average classwise accuracy is computed as (1/K) ∑_i (c_ii / ∑_j c_ij), and the average IoU score is calculated as (1/K) ∑_i (c_ii / (∑_j c_ij + ∑_j c_ji - c_ii)).

§.§ Single Frame Segmentation

In a first set of experiments, we evaluate the performance of several variants of our network for direct semantic segmentation of individual frames. This means we do not fuse predictions from nearby frames to obtain the final prediction in a frame. We predict semantic segmentations with our trained models on the 654 test images of the NYUDv2 dataset and compare our methods with state-of-the-art approaches. The results are shown in Table <ref>. Unless otherwise stated, we take the results from the original papers for comparison and report their best results (i.e., the SceneNet-FT-NYU-DO-DHA model for SceneNet <cit.> and the VGG-based model for Eigen <cit.>). The result of Hermans et al. <cit.> is obtained after applying a dense CRF <cit.> for each image and in-between neighboring 3D points to further smooth their results. We also remark that the results reported here for the Context-CRF model are finetuned on NYUDv2, like in our approach, to facilitate comparison. Furthermore, its network output is refined using a dense CRF <cit.>, which is claimed to increase the accuracy of the network by approximately 2%. The results for FuseNet-SF3 are obtained with our own implementation. Our baseline model MVCNet-Mono is trained without multi-view consistency, which amounts to FuseNet with a multi-scale deeply supervised loss at the decoder. However, we apply single-image augmentation to train FuseNet-SF3 and MVCNet-Mono, with random scaling between [0.8,1.2], random cropping and mirroring. This data augmentation is not used for multi-view training. Nevertheless, our results show that the different variants of multi-view consistency training outperform the state-of-the-art methods for single image semantic segmentation.
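(For reference, the three evaluation criteria defined above can be computed from the confusion matrix as in the following short sketch, a direct rendering of the formulas; classes absent from the data would need special handling.)

import numpy as np

def segmentation_metrics(conf):
    # conf: K x K confusion matrix; conf[i, j] is the number of pixels
    # of ground-truth class i that are predicted as class j.
    conf = conf.astype(np.float64)
    diag = np.diag(conf)
    row = conf.sum(axis=1)   # pixels per ground-truth class
    col = conf.sum(axis=0)   # pixels per predicted class
    global_acc = diag.sum() / conf.sum()
    class_acc = np.mean(diag / row)
    mean_iou = np.mean(diag / (row + col - diag))
    return global_acc, class_acc, mean_iou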
Overall, multi-view max-pooling (MVCNet-MaxPool) has a small advantage over the other multi-view consistency training approaches (MVCNet-Augment and MVCNet-Bayesian).

§.§ Multi-View Fused Segmentation

Since we train on sequences, in a second set of experiments we also evaluate the fused semantic segmentation over the test sequences. The number of fused frames is fixed to 50, which are uniformly sampled over the entire sequence. Due to the lack of ground truth for neighboring frames, we fuse the predictions of neighboring frames in the keyframes using Bayesian fusion according to Equation (<ref>). This fusion is typically applied for semantic mapping using RGB-D SLAM. The results are shown in Table <ref>. Bayesian multi-view fusion improves the semantic segmentation by approx. 2% on all evaluation measures compared to single-view segmentation. Also, training for multi-view consistency achieves a stronger gain over single-view training (MVCNet-Mono) when fusing segmentations than it does for single-view segmentation. This performance gain is also visible in the qualitative results in Fig. <ref>. It can be seen that our multi-view consistency training and Bayesian fusion produce more accurate and homogeneous segmentations. Fig. <ref> shows typical challenging cases for our model. We also compare classwise and average IoU scores for 13-class semantic segmentation on NYUDv2 in Table <ref>. The results of Eigen <cit.> are from their publicly available model tested at 320×240 resolution. The results demonstrate that our approach gives high performance gains across all occurrence frequencies of the classes in the dataset.

§ CONCLUSION

In this paper we propose methods for enforcing multi-view consistency during the training of CNN models for semantic RGB-D image segmentation. We base our CNN design on FuseNet <cit.>, a recently proposed CNN architecture in an encoder-decoder scheme for semantic segmentation of RGB-D images. We augment the network with multi-scale loss supervision to improve its performance. We present and evaluate three different approaches for multi-view consistency training. Our methods use an RGB-D SLAM trajectory estimate to warp semantic segmentations or feature maps from one view point to another. Multi-view max-pooling of feature maps overall provides the best performance gains in single-view segmentation and fusion of multiple views. We demonstrate the superior performance of multi-view consistency training and Bayesian fusion on the NYUDv2 13-class and 40-class semantic segmentation benchmarks. All multi-view consistency training approaches outperform single-view trained baselines. They are key to boosting segmentation performance when fusing network predictions from multiple view points during testing. On NYUDv2, our model sets a new state-of-the-art performance using an end-to-end trained network for single-view predictions as well as multi-view fused semantic segmentation, without further postprocessing stages such as dense CRFs. In future work, we want to further investigate the integration of our approach into a semantic SLAM system, for example, through coupling of pose tracking and SLAM with our semantic predictions. | http://arxiv.org/abs/1703.08866v2 | {
"authors": [
"Lingni Ma",
"Jörg Stückler",
"Christian Kerl",
"Daniel Cremers"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170326202802",
"title": "Multi-View Deep Learning for Consistent Semantic Mapping with RGB-D Cameras"
} |
| http://arxiv.org/abs/1703.09226v3 | {
"authors": [
"Andreas Crivellin",
"Dario Müller",
"Toshihiko Ota"
],
"categories": [
"hep-ph",
"hep-ex"
],
"primary_category": "hep-ph",
"published": "20170327180001",
"title": "Simultaneous Explanation of $R(D^{(*)})$ and $b\\to sμ^+μ^-$: The Last Scalar Leptoquarks Standing"
} |
Learning optimal wavelet bases using a neural network approach
Andreas Søgaard
December 30, 2023
==============================================================

Iterative load balancing algorithms for indivisible tokens have been studied intensively in the past, e.g., <cit.>. Complementing previous worst-case analyses, we study an average-case scenario where the load inputs are drawn from a fixed probability distribution. For cycles, tori, hypercubes and expanders, we obtain almost matching upper and lower bounds on the discrepancy, the difference between the maximum and the minimum load. Our bounds hold for a variety of probability distributions, including the uniform and binomial distribution, but also for distributions with unbounded range such as the Poisson and geometric distribution. For graphs with slow convergence like cycles and tori, our results demonstrate a substantial difference between the convergence in the worst case and in the average case. An important ingredient in our analysis is a new upper bound on the t-step transition probability of a general Markov chain, which is derived by invoking the evolving set process.

§ INTRODUCTION

In the last decade, large parallel networks became widely available for industrial and academic users. An important prerequisite for their efficient usage is to balance their work efficiently. Load balancing is known to have applications to scheduling <cit.>, routing <cit.>, numerical computation such as solving partial differential equations <cit.>, and finite element computations <cit.>. In the standard abstract formulation of load balancing, processors are represented by nodes of a graph, while links are represented by edges. The objective is to balance the load by allowing nodes to exchange loads with their neighbors via the incident edges. In this work we study a decentralized and iterative load balancing protocol in which a processor knows only its current load and that of the neighboring processors and, based on this, decides how much load should be sent (or received).

Load Balancing Models. A widely used approach is diffusion, e.g., the first-order diffusion scheme <cit.>, where the amount of load sent along each edge in each round is proportional to the load difference between the incident nodes. In this work, we consider the alternative, the so-called matching model, where in each round only the edges of a matching are used to average the load locally. In comparison to diffusion, the matching model reduces the communication in the network and moreover tends to behave in a more "monotone" fashion than diffusion, since it avoids concurrent load exchanges, which may increase the maximum load or decrease the minimum load in certain cases. We measure the smoothness of the load distribution by the so-called discrepancy, which is the difference between the maximum and minimum load among all nodes. In view of more complex scenarios where jobs are eventually removed or new jobs are generated, the discrepancy seems to be a more appropriate measure than the makespan, which only considers the maximum load.

Many studies in load balancing assume that load is arbitrarily divisible. In this so-called continuous case, load balancing corresponds to a Markov chain on the graph, and one can resort to a wide range of established techniques to analyze the convergence speed <cit.>. In particular, the spectral gap captures the time to reach a small discrepancy fairly accurately; see, e.g., <cit.> for the diffusion model and <cit.> for the matching model.
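To illustrate the matching model in the continuous case, one round of perfect averaging along a matching can be written in a few lines (a minimal sketch of our own; the graph, matchings and initial loads are purely illustrative):

import numpy as np

def matching_round(load, matching):
    # Every matched pair {u, v} replaces its loads by their average.
    new_load = load.copy()
    for u, v in matching:
        new_load[u] = new_load[v] = (load[u] + load[v]) / 2.0
    return new_load

rng = np.random.default_rng(0)
n = 8
odd  = [(j, (j + 1) % n) for j in range(1, n, 2)]   # {1,2}, {3,4}, ...
even = [(j, (j + 1) % n) for j in range(0, n, 2)]   # {0,1}, {2,3}, ...

load = rng.uniform(0, 100, size=n)
for _ in range(10):
    load = matching_round(matching_round(load, odd), even)
    print(load.max() - load.min())   # the discrepancy shrinks over rounds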
In particular, the spectral gap captures the time to reach a small discrepancy fairly accurately, e.g., see <cit.> for the diffusion and see <cit.> for the matching model.However, in many applications a processor's load may consist of tasks which are not further divisible, which is why the continuous case has been also referred to as “idealized case” <cit.>. A natural way to model indivisible tasks is the unit-size token model where one assumes a smallest load entity, the unit-size token, and load is always represented by a multiple of this smallest entity. In the following, we will refer to the unit-size token model as the discrete case. Initiated by the work of <cit.>, there has been a number of studies on load balancing in the discrete case. Unlike <cit.>, <cit.> analyzed a randomized rounding based strategy, meaning that an excess token will be distributed uniformly at random among the two communicating nodes. The authors of <cit.> proved that with this strategy the time to reach constant discrepancy in the discrete case is essentially the same as the corresponding time in the continuous case. Their results hold both for the random matching model, where in each round a new random matching is generated by a simple distributed protocol, and the balancing circuit model (a.k.a. dimension exchange), where a fixed sequence of matching is applied periodically. In this work, we will focus on the balancing circuit model, which is particularly well suited for highly structured graphs such as cycles, tori or hypercubes.Worst-Case vs. Average-Case Inputs. Previous work has almost always adopted the usual worst-case framework for deriving bounds on the load discrepancy <cit.>.That means that any upper bound on the discrepancy holds for an arbitrary input, i.e., an arbitrary initial load vector. While it is of course very natural and desirable to have such general bounds, the downside is that for graphs with poor expansion like cycles or 2D-tori, the convergence is rather slow, i.e., quadratic or linear in the number of nodes n.This serves as a motivation to explore an average-case input. Specifically, we assume that the number of load items at each node is sampled independently from a fixed distribution. Our main results demonstrate that the convergence of the load vector is considerably quicker (measured by the load discrepancy), especially on networks with slow convergence in the worst-case such as cycles and 2D-tori.We point out that many related problems including scheduling on parallel machines or load balancing in a dynamic setting (meaning that jobs are continuously added and processed) have been studied under random inputs, e.g., <cit.>. To the best of our knowledge, only very few works have studied this question in iterative load balancing. One exception is <cit.>, which investigated the performance of continuous load balancing on tori in the diffusion model. In contrast to this work, however, only upper bounds are given and they hold for the multiplicative ratio between maximum and minimum load, rather than the discrepancy. Our main results in this paper hold for all distributions satisfying the following definition, which is satisfied by the uniform, binomial, Poisson and geometric distribution (see Section <ref>).We say that a distribution D over ℕ∪{0} is exponentially concentrated if there is a constant κ >0 so that for any X ∼ D, δ > 0,| X - μ | ≥δ·σ≤exp( -κδ),where μ and σ^2 are the expectation and variance of D. 
In the following, we refer to the average case when the initial number of load items on each vertex is drawn independently from a fixed exponentially concentrated distribution.

Our Results. Our first contribution is a general formula that allows us to express the load difference between an arbitrary pair of nodes in round t. Here the round matrix 𝐌 is the product of the matching matrices that are applied periodically (cf. Section <ref>).

thmmainone Consider the balancing circuit model with an arbitrary round matrix 𝐌 in the average case. Then for any pair of nodes u,v and round t, it holds for any δ > 0 that

ℙ[ |x_u^(t) - x_v^(t)| ≥ δ · (√(128)/κ) · σ · log n · ‖𝐌^t_·,u - 𝐌^t_·,v‖_2 + √(48 log n) ] ≤ 2 · e^-δ^2 + 2n^-2.

Further, for any pair of vertices u,v and any round t satisfying t = ω(1),

ℙ[ |x_u^(t) - x_v^(t)| ≥ σ/(2√(2 log_2 σ)) · ‖𝐌^t_·,u - 𝐌^t_·,v‖_2 - √(48 log n) ] ≥ 1/16.

The proof of the upper bound in Theorem <ref> is the easier direction, and it relies on a previous result relating continuous and discrete load balancing from <cit.>. The lower bound is technically more challenging and applies a generalized version of the central limit theorem. Together, the upper and lower bound in the above result establish that the load deviation between any two nodes u and v is essentially captured by ‖𝐌^t_·,u - 𝐌^t_·,v‖_2. However, in some instances it might be desirable to have a more tangible estimate at the expense of generality. A first step towards this goal is to observe that ‖𝐌^t_·,u - 𝐌^t_·,v‖_2^2 ≤ 4 · max_k ∈ V ‖𝐌^t_·,k - 𝟏/n‖_2^2 (see Lemma <ref>). Hence we are left with the problem of understanding the t-step probability vector 𝐌^t_·,k. For reversible Markov chains, the latter expression has been analyzed in several works; e.g., a result from <cit.> implies that for random walks on graphs, 𝐏_u,v^t = O(deg(v)/√(t)) (cf. <cit.>). However, the Markov chain associated to 𝐌 is not reversible in general. For irreversible Markov chains, <cit.> use the so-called evolving set process to derive a similar bound. Specifically, they proved in <cit.> that if 𝐏 denotes the transition matrix of a lazy random walk (i.e., a random walk with loop probability at least 1/2) on a graph with maximal degree Δ, then for any vertex x ∈ V:

|𝐏^t_x,x - π_x| ≤ √(2) Δ^5/2 / √(t),

where π is the stationary distribution of 𝐏. Such estimates have been used in various applications besides load balancing, including distributed random walks and spanning tree enumeration <cit.>. Here we generalize this result to Markov chains with an arbitrary loop probability and to arbitrary t-step transition probabilities:

thmmarkov Let 𝐏 be the transition matrix of an irreducible Markov chain and π its stationary distribution. Then we have for all states x, y and steps t,

|𝐏^t_x,y - π_y| ≤ (π_max^3/2 / π_min^3/2) · (2/(β^1/2 α)) · √((1-β+α)/(α t)),

where α := min{𝐏_u,v : u ≠ v, 𝐏_u,v > 0} and β := min_u 𝐏_u,u > 0.

Applying this bound to a round matrix 𝐌 that is formed of d = O(1) matchings, we obtain |𝐌^t_u,v - 1/n| = O(t^-1/2). It should be noted that <cit.> proved a weaker version where the upper bound is only O(t^-1/8) instead of O(t^-1/2). As we will prove in Lemma <ref>, the bound O(t^-1/2) is asymptotically tight if we consider the balancing circuit model on cycles. Combining the bound in Theorem <ref> with the upper bound in Theorem <ref> yields:

thmmaintwo Consider the balancing circuit model with an arbitrary round matrix 𝐌 consisting of d = O(1) matchings in the average case.
Then the discrepancy after t rounds is O(t^-1/4 · σ · (log n)^3/2 + √(log n)) with probability 1 - O(n^-1).

Since the initial discrepancy in the average case is O(σ · log n) (see Lemma <ref>), Theorem <ref> implies that in the average case there is a significant decrease (roughly of order t^-1/4) in the discrepancy, regardless of the underlying topology. For round matrices with small second largest eigenvalue, the next result provides a significant improvement:

thmmainthree Consider the balancing circuit model with an arbitrary round matrix 𝐌 consisting of d matchings in the average case. Then the discrepancy after t rounds is O(λ(𝐌)^t/4 · σ · (log n)^3/2 + √(log n)) with probability 1 - O(n^-1).

Hence for graphs where λ is bounded away from 1, we even obtain an exponential convergence.

Graph         disc(x^(t))
Cycle         t^-1/4 · σ
r-dim. Torus  t^-r/4 · σ
Expander      λ^t/4 · σ
Hypercube     2^-t/2 · σ

Figure 1: Discrepancy bounds (without logarithmic factors) for different topologies.

In Section <ref>, we derive bounds on the discrepancy for cycles, r-dimensional tori, expanders and hypercubes. A summary of these results can be found in Figure 1. Finally, we discuss our results and contrast them to the convergence of the discrepancy in the worst case in Section <ref>. On a high level, these results demonstrate that on all the considered topologies we have much faster convergence in the average case than in the worst case. However, if we are only interested in the time to achieve a very small, say, constant or poly-logarithmic discrepancy, then we reveal an interesting dichotomy: we have a quicker convergence than in the worst case if and only if the standard deviation σ is smaller than some threshold, which depends on the actual topology. We observe the same phenomena in our experiments, which are also discussed in Section <ref>.

§ NOTATION AND BACKGROUND

We assume that G = (V,E) is an undirected, connected graph with n nodes labelled in [0,n-1]. Unless stated otherwise, all logarithms are to the base e. The notations ℙ[ℰ] and 𝔼[X] denote the probability of an event ℰ and the expectation of a random variable X, respectively. For any n-dimensional vector x, disc(x) = max_i x_i - min_i x_i denotes the discrepancy.

Matching Model. In the matching model (sometimes also called the dimension exchange model), every two matched nodes in round t balance their load as evenly as possible. This can be expressed by a symmetric n by n matching matrix 𝐌^(t), where with slight abuse of notation we use the same symbol for the matching and the corresponding matching matrix. Formally, the matrix 𝐌^(t) is defined by 𝐌_u,u^(t) := 1/2, 𝐌_v,v^(t) := 1/2 and 𝐌_u,v^(t) = 𝐌_v,u^(t) := 1/2 if {u,v} ∈ 𝐌^(t) ⊆ E, and 𝐌^(t)_u,u = 1, 𝐌^(t)_u,v = 0 (u ≠ v) if u is not matched.

Balancing Circuit. In the balancing circuit model, a specific sequence of matchings is applied periodically. More precisely, let 𝐌^(1),…,𝐌^(d) be a sequence of d matching matrices, also called a period [Note that d may be different from the maximal degree (or degree) of the underlying graph.]. Then in step t ≥ 1, we apply the matching matrix 𝐌^(t) := 𝐌^(((t-1) mod d)+1). We define the round matrix by 𝐌 := ∏_s=1^d 𝐌^(s). If 𝐌 is symmetric, we define λ(𝐌) to be its second largest eigenvalue (in absolute value). Following <cit.>, if 𝐌 is not symmetric (which is usually the case), we define λ(𝐌) as the second largest eigenvalue of the symmetric matrix 𝐌·𝐌^T, where 𝐌^T is the transpose of 𝐌. We always assume that λ(𝐌) < 1, which is guaranteed to hold if the matrix 𝐌 is irreducible. Notice that since 𝐌 is doubly stochastic, all powers of 𝐌 are doubly stochastic as well.
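For concreteness, the following sketch (our own; the period shown is the odd-even scheme on the cycle described below) constructs the matching matrices and the round matrix 𝐌 and verifies that 𝐌 is doubly stochastic:

import numpy as np

def matching_matrix(n, matching):
    # Matching matrix M^(s): every matched pair averages its load.
    M = np.eye(n)
    for u, v in matching:
        M[u, u] = M[v, v] = M[u, v] = M[v, u] = 0.5
    return M

n = 8
# Period of d = 2 matchings (odd-even scheme on the cycle).
period = [[(j, (j + 1) % n) for j in range(1, n, 2)],
          [(j, (j + 1) % n) for j in range(0, n, 2)]]

M = np.eye(n)
for m in period:
    M = M @ matching_matrix(n, m)     # round matrix M = M^(1) ... M^(d)

assert np.allclose(M.sum(axis=0), 1) and np.allclose(M.sum(axis=1), 1)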
A natural choice for the d matching matrices is given by an edge coloring of G. There are various efficient distributed edge coloring algorithms, e.g., <cit.>.

Balancing Circuit on Specific Topologies. For hypercubes, the canonical choice is dimension exchange, consisting of d = log_2 n matching matrices 𝐌^(i) defined by 𝐌_u,v^(i) = 1/2 if and only if the bit representations of u and v differ only in bit i. Then the round matrix 𝐌 is defined by 𝐌 = ∏_i=1^log_2 n 𝐌^(i). For cycles, we will consider the natural "Odd-Even" scheme, meaning that for 𝐌^(1) the matching consists of all edges {j, (j+1) mod n} for odd j, while for 𝐌^(2) the matching consists of all edges {j, (j+1) mod n} for even j. More generally, for r-dimensional tori with vertex set [0, n^1/r - 1]^r, we have 2·r matchings in total, meaning that for every dimension 1 ≤ i ≤ r we have two matchings along dimension i, similar to the definition of the matchings for the cycle.

The Continuous Case. In the continuous case, load is arbitrarily divisible. Let ξ^(0) ∈ ℝ^n be the initial load represented as a row vector, and in every round two matched nodes average their load perfectly. We consider the load vector ξ^(t) after t rounds in the balancing circuit model (that means, after the execution of t·d matchings in total). This process corresponds to a linear system, and ξ^(t), t ∈ ℕ, can be expressed as ξ^(t) = ξ^(t-1) 𝐌, which results in ξ^(t) = ξ^(0) 𝐌^t.

The Discrete Case. Let us now turn to the discrete case with indivisible, unit-size tokens. Let x^(0) ∈ ℤ^n be the initial load vector with average load x̅ := ∑_w ∈ V x_w^(0)/n, and let x^(t) be the load vector at the end of round t. In case the sum of tokens of the two paired nodes is odd, we employ the so-called random orientation (or randomized rounding) <cit.>. More precisely, if two nodes u and v with loads a and b are paired by the matching 𝐌^(t), then node u gets either ⌈(a+b)/2⌉ or ⌊(a+b)/2⌋ tokens, with probability 1/2 each. The remaining tokens are assigned to node v.

The Average-Case Setting. We consider a setting where each entry of the initial load vector x^(0) is chosen from an exponentially concentrated probability distribution D with expectation μ and variance σ^2 (see Definition <ref>). It is not difficult to verify that many natural distributions satisfy the condition of being exponentially concentrated (see the appendix for more details).

lemdistributions The uniform distribution, binomial distribution, geometric distribution and Poisson distribution are all exponentially concentrated.

Note that the uniform distribution Uni[0,k] is trivially exponentially concentrated, since σ = Θ(k). However, also distributions with unbounded range may be exponentially concentrated, with one example being the geometric distribution Geo(p). To verify this, first note that we have μ = 1/p and σ = √((1-p)/p^2) (and so μ = Θ(σ)), and thus ℙ[μ - X ≥ δ·σ] ≤ exp(-κδ) holds trivially for a sufficiently small constant κ > 0. Secondly, for the upper tail, by Markov's inequality, ℙ[X ≥ 2·𝔼[X]] ≤ 1/2, and by the memoryless property of the geometric distribution, for any j ≥ 1, ℙ[X ≥ j·2·𝔼[X]] ≤ 2^-j. For the binomial distribution Bin[m,p] with expectation μ = m·p and standard deviation σ = √(m·p·(1-p)), we will assume w.l.o.g. that p ≤ 1/2, so that σ = Θ(√(mp)). Then by <cit.>, we have for X ∼ Bin[m,p],

ℙ[X - μ ≥ ϵ·μ] ≤ exp( -ϵ^2 μ / (2 + 2ϵ/3) ).

Choosing ϵ = δ·σ/μ yields

ℙ[X - μ ≥ δ·σ] ≤ exp( -(δ^2·σ^2/μ) / (2 + 2δσ/(3μ)) ),

as needed. For the lower tails, we use ℙ[μ - X ≥ ϵ·μ] ≤ exp(-ϵ^2 μ/2) and obtain a similar result as before (see again <cit.>).
For the Poisson distribution Poi(μ), we can verify in an analogous way that it is exponentially concentrated by using the two Chernoff bounds for Poisson random variables given in (<ref>).

The definition of exponentially concentrated implies the following concentration result: Let D be an exponentially concentrated distribution and let X ∼ D. Then,

ℙ[ X ∈ [μ - (8/κ)·σ log n, μ + (8/κ)·σ log n] ] ≥ 1 - n^-2.

In particular, the initial discrepancy satisfies disc(x^(0)) = O(σ · log n) with probability at least 1 - n^-1. The advantage of Lemma <ref> is that we can use a simple conditioning trick to work with distributions that have a finite range and are therefore easier to analyze with concentration tools like Hoeffding's inequality (Theorem <ref>). That is, in the analysis we simply work with a bounded-range distribution D̃, which is the distribution D under the condition that only values in the interval [μ - (8/κ)·σ log n, μ + (8/κ)·σ log n] occur.

§ PROOF OF THE GENERAL BOUND (THEOREM <ref>) *

§.§ Proof of Theorem <ref> (Upper Bound)

We will use the following result from <cit.> that bounds the deviation between the continuous and the discrete load, assuming that we have ξ^(0) = x^(0). Consider the balancing circuit model with an arbitrary round matrix 𝐌. Then for any round t ≥ 1 it holds that

ℙ[ max_w ∈ V |x_w^(t) - ξ_w^(t)| ≤ √(12 · log n) ] ≥ 1 - n^-2.

Recall that the initial vector ξ^(0) = x^(0) consists of n i.i.d. random variables. As explained at the end of Section <ref>, we condition on the event

ℰ := ⋂_w ∈ V { |ξ_w^(0) - μ| ≤ (8/κ)·σ·log n }.

By Lemma <ref>, ℙ[ℰ] ≥ 1 - n^-2. In the remainder of the proof, all random variables are conditional on ℰ, but for simplicity we will not express this conditioning explicitly. Since ξ_u^(t) = ∑_w ∈ V ξ_w^(0) 𝐌^t_w,u, the load ξ_u^(t) is just a weighted sum of i.i.d. random variables, and we obtain

ξ_u^(t) - ξ_v^(t) = ∑_w ∈ V ξ_w^(0) · (𝐌^t_w,u - 𝐌^t_w,v),

which is in fact still a sum of n independent random variables. The expectation is

𝔼[ξ_u^(t) - ξ_v^(t)] = ∑_w ∈ V 𝔼[ξ_w^(0)] · (𝐌^t_w,u - 𝐌^t_w,v) = μ · ∑_w ∈ V (𝐌^t_w,u - 𝐌^t_w,v) = 0,

where the last equality holds since 𝐌 is doubly stochastic. Now applying Hoeffding's inequality (Theorem <ref>) and recalling that, conditional on ℰ, the range of each ξ_w^(0) is (16/κ)·σ·log n, we obtain that

ℙ[ |ξ_u^(t) - ξ_v^(t)| ≥ δ ] ≤ 2 · exp( -2δ^2 / ((256/κ^2) · σ^2 · log^2 n · ‖𝐌^t_·,u - 𝐌^t_·,v‖_2^2) ).

Applying Theorem <ref> yields

ℙ[ |x_u^(t) - x_v^(t)| ≥ δ + √(48 · log n) ] ≤ 2 · exp( -2δ^2 / ((256/κ^2) · σ^2 · log^2 n · ‖𝐌^t_·,u - 𝐌^t_·,v‖_2^2) ) + n^-2.

The statement of the theorem follows by scaling δ and recalling that ℙ[ℰ] ≥ 1 - n^-2.

§.§ Proof of Theorem <ref> (Lower Bound)

The proof of the lower bound will use the following quantitative version of a central limit type theorem for independent but non-identical random variables. Let X_1, X_2, …, X_n be independently distributed with 𝔼[X_i] = 0, 𝔼[X_i^2] = Var[X_i] = σ_i^2, and 𝔼[|X_i|^3] = ρ_i < ∞. If F_n(x) is the distribution function of (X_1 + … + X_n)/√(σ_1^2 + σ_2^2 + … + σ_n^2) and Φ(x) is the standard normal distribution, then |F_n(x) - Φ(x)| ≤ C_0 ψ_0, where ψ_0 = (∑_i=1^n σ_i^2)^-3/2 · ∑_i=1^n ρ_i and C_0 > 0 is a constant.

With this concentration tool at hand, we are able to prove the lower bound in Theorem <ref>. Unfortunately, it appears quite difficult to apply Theorem <ref> directly to equation (<ref>), since we need a good bound on the error term ψ_0. To this end, we will first partition the vertex set V into buckets with equal contribution to ξ_u^(t) - ξ_v^(t).
Then we will apply Theorem <ref> to the bucket with the largest variance, for which we can show that ψ_0 = o(1), thanks to the precondition that t = ω(1) and the bound in Theorem <ref>. As in the derivation of the upper bound, we first consider ξ_u^(t) - ξ_v^(t):

dev := ξ_u^(t) - ξ_v^(t) = ∑_w ∈ V ξ_w^(0) · (𝐌^t_w,u - 𝐌^t_w,v).

Again we are dealing with a weighted sum of i.i.d. random variables with expectation μ and variance σ^2. As mentioned earlier, we have 𝔼[dev] = ∑_w ∈ V 𝔼[ξ_w^(0)] · (𝐌^t_w,u - 𝐌^t_w,v) = 0, since 𝐌 is a doubly stochastic matrix. Of course, we could apply Theorem <ref> directly to dev, but it appears difficult to control the error term ψ_0. Therefore we will first partition the above sum into buckets in which the weights of the random variables are roughly the same. More precisely, we partition V into 2 log_2 σ buckets, where

V_i := { w ∈ V : |𝐌^t_w,u - 𝐌^t_w,v| ∈ (2^-i, 2^-i+1] } for 1 ≤ i ≤ 2 log_2 σ - 1,

and V_2log_2 σ := { w ∈ V : |𝐌^t_w,u - 𝐌^t_w,v| ≤ 1/σ^2 }. Further, the variance of dev is

Var[dev] = σ^2 · ∑_w ∈ V (𝐌^t_w,u - 𝐌^t_w,v)^2.

Then by the pigeonhole principle there exists an index 1 ≤ i ≤ 2 log_2 σ such that

∑_w ∈ V_i (𝐌^t_w,u - 𝐌^t_w,v)^2 ≥ (1/(2 log_2 σ)) · ∑_w ∈ V (𝐌^t_w,u - 𝐌^t_w,v)^2.

Firstly, if that index i is equal to 2 log_2 σ, then ‖𝐌^t_·,u - 𝐌^t_·,v‖_2^2 = O(σ^-1), and the lower bound holds trivially. Therefore, we will assume in the remainder of the proof that i ≤ 2 log_2 σ - 1. We now decompose dev into dev = S + S^c, where

S := ∑_w ∈ V_i ξ_w^(0) · (𝐌^t_w,u - 𝐌^t_w,v) and S^c := ∑_w ∉ V_i ξ_w^(0) · (𝐌^t_w,u - 𝐌^t_w,v).

Let us first analyze S, to which we will now apply Theorem <ref>. In preparation for this, let us first upper bound ψ_0. Using the definition of exponentially concentrated, it follows that for any constant k, the first k moments of D are all bounded from above by O(σ^k). Hence,

ψ_0 = ∑_w ∈ V_i 𝔼[ |ξ_w^(0) · (𝐌^t_w,u - 𝐌^t_w,v)|^3 ] / ( ∑_w ∈ V_i Var[ ξ_w^(0) · (𝐌^t_w,u - 𝐌^t_w,v) ] )^3/2 ≤ O(σ^3) · ∑_w ∈ V_i |𝐌^t_w,u - 𝐌^t_w,v|^3 / ( σ^3 · ( ∑_w ∈ V_i (𝐌^t_w,u - 𝐌^t_w,v)^2 )^3/2 ).

Recalling that for any w ∈ V_i, |𝐌^t_w,u - 𝐌^t_w,v| ∈ (2^-i, 2^-i+1], we can simplify the above expression as follows:

ψ_0 = O( |V_i| · 2^-3i / ( |V_i|^3/2 · 2^-3i ) ) = O( |V_i|^-1/2 ).

However, since we have t = ω(1), Theorem <ref> gives |𝐌^t_x,y - 1/n| = O(t^-1/2), and therefore it must be that |V_i| = ω(1); we conclude that ψ_0 = o(1). Before applying Theorem <ref>, we center the original distribution by setting ξ_w'^(0) = ξ_w^(0) - μ. Since Var(aX) = a^2 Var(X), we have

F_n(x) = ℙ[ ∑_w ∈ V_i ξ_w'^(0) · (𝐌^t_w,u - 𝐌^t_w,v) / ( σ √(∑_w ∈ V_i (𝐌^t_w,u - 𝐌^t_w,v)^2) ) ≤ x ] = ℙ[ ( S - 𝔼[S] ) / ( σ √(∑_w ∈ V_i (𝐌^t_w,u - 𝐌^t_w,v)^2) ) ≤ x ] = ℙ[ S - 𝔼[S] ≤ x · σ √(∑_w ∈ V_i (𝐌^t_w,u - 𝐌^t_w,v)^2) ].

As derived earlier, ψ_0 = o(1), and therefore

ℙ[ S - 𝔼[S] ≥ x · σ √(∑_w ∈ V_i (𝐌^t_w,u - 𝐌^t_w,v)^2) ] ≥ Φ^c(x) - C_0 ψ_0 ≥ 1/( √(π) (x + √(x^2+2)) e^x^2 ) - o(1),

where the last inequality uses <cit.>:

1/(x + √(x^2+2)) < e^x^2 ∫_x^∞ e^-t^2 dt ≤ 1/(x + √(x^2 + 4/π)) (x > 0).

Therefore, by substitution, we get

1/( √(π)(x + √(x^2+2)) e^x^2 ) < Φ^c(x) ≤ 1/( √(π)(x + √(x^2+4/π)) e^x^2 ).

Hence with x = 1,

ℙ[ S - 𝔼[S] ≥ σ √(∑_w ∈ V_i (𝐌^t_w,u - 𝐌^t_w,v)^2) ] ≥ 1/16.

Similarly, we can derive that

ℙ[ 𝔼[S] - S ≥ σ √(∑_w ∈ V_i (𝐌^t_w,u - 𝐌^t_w,v)^2) ] ≥ 1/16.

Hence, independently of the value of S^c, there is still a probability of at least 1/16 that

|S + S^c| ≥ (σ/2) · √(1/(2 log_2 σ)) · √(∑_w ∈ V (𝐌^t_w,u - 𝐌^t_w,v)^2).

§ PROOF OF THE UNIVERSAL BOUNDS (THEOREM <ref>, THEOREM <ref>)

In the previous section we proved that the deviation between the loads of two nodes u and v is essentially captured by ‖𝐌^t_·,u - 𝐌^t_·,v‖_2.
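Before bounding this quantity analytically, a small numerical sketch (our own, again using the odd-even round matrix on the cycle) illustrates the behavior that the next results make precise: the squared norm ‖𝐌^t_·,k - 𝟏/n‖_2^2 decays roughly like t^-1/2, in line with Theorem <ref>:

import numpy as np

def round_matrix_cycle(n):
    def matching_matrix(matching):
        M = np.eye(n)
        for u, v in matching:
            M[u, u] = M[v, v] = M[u, v] = M[v, u] = 0.5
        return M
    odd  = [(j, (j + 1) % n) for j in range(1, n, 2)]
    even = [(j, (j + 1) % n) for j in range(0, n, 2)]
    return matching_matrix(odd) @ matching_matrix(even)

n = 256
M = round_matrix_cycle(n)
for t in [4, 16, 64, 256]:
    Mt = np.linalg.matrix_power(M, t)
    sq_norm = np.linalg.norm(Mt[:, 0] - 1.0 / n) ** 2
    print(t, sq_norm, sq_norm * np.sqrt(t))   # last value stays roughly constant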
However, in some cases it might be hard to compute or estimate this quantity for arbitrary vertices u and v. Therefore we will first prove the following universal upper bound on the discrepancy that works for arbitrary graphs and pair of nodes, as stated on page thm:maintwo. *§.§ Proof of Theorem <ref>The proof of Theorem <ref> is fairly involved and we first sketch the high level ideas. We first show that ^t_.,u - ^t_.,v_2^2 can be upper bounded in terms of the ℓ_2-distance to the stationary distribution.lemtwonodestoavg Consider the balancing circuit model with an arbitrary round matrix . Then for all u,v ∈ V, we have ^t_.,u - ^t_.,v_2^2 ≤ 4 ·max_k ∈ V^t_.,k - 1/𝐧_2^2. Further, for any u ∈ V we have max_v ∈ V^t_.,u - ^t_.,v_2^2 ≥^t_.,u - 1/𝐧_2^2. ∑_w ∈ V( ^t_w,u - ^t_w,v)^2≤ 2 ·( ∑_w ∈ V( ^t_w,u - 1/n)^2 + ( ^t_w,v - 1/n)^2 ) ≤ 4 ·max_k ∈ V∑_w ∈ V( ^t_w,k - 1/n)^2, and the first statement follows. We now prove the second statement: ∀ u max_v ∈ V√(∑_w ∈ V(_w,u^t - _w,v^t )^2)≥√(∑_w ∈ V(_w,u^t - 1/n)^2). We first look at the difference between these two terms squared. That is, for any vertex v ∈ V we have ∑_w ∈ V(_w,u^t - _w,v^t )^2 - ∑_w ∈ V(_w,u^t - 1/n)^2 = -2 ∑_w ∈ V(_w,u^t - 1/n)(_w,v^t - 1/n) + ∑_w ∈ V(_w,v^t - 1/n)^2= -2 ∑_w ∈ V_w,u^t ·_w,v^t + 4/n - 2/n + ∑_w ∈ V(_w,v^t)^2 + 1/n = -2 ∑_w ∈ V_w,u^t ·_w,v^t + ∑_w ∈ V(_w,v^t)^2 + 1/n Now let Z be a uniform random variable over the set V \{u}. Then it follows that Z ∼ V ∖{u}∑_w ∈ V_w,u^t ·_w,Z^t = ∑_z ∈ V, z ≠ u1/n-1·∑_w ∈ V_w,u^t ·_w,z^t= 1/n-1∑_w ∈ V_w,u^t ·∑_z ∈ V, z ≠ u_w,z^t= 1/n-1∑_w ∈ V_w,u^t ·( 1 - _w,u^t )= 1/n-1( 1 - ∑_w ∈ V(_w,u^t)^2 ). Further, by linearity of expectations Z ∼ V ∖{u}-2 ∑_w ∈ V_w,u^t ·_w,Z^t + ∑_w ∈ V( _w,Z^t )^2 + 1/n= -2/n-1( 1 - ∑_w ∈ V(_w,u^t)^2 ) + ∑_z ∈ V, y ≠ u∑_w ∈ V1/n-1( _w,z^t )^2 + 1/n. By definition of expectation, this implies that there exists a vertex v ∈ V, v ≠ u such that -2 ∑_w ∈ V_w,u^t ·_w,v^t + ∑_w ∈ V( _w,v^t )^2 + 1/n≥-2/n-1( 1 - ∑_w ∈ V(_w,u^t)^2 ) + ∑_z ∈ V, z ≠ u∑_w ∈ V1/n-1( _w,z^t )^2 + 1/n. Combining (<ref>) and (<ref>), ∑_w ∈ V(_w,u^t - _w,v^t )^2- ∑_w ∈ V(_w,v^t - 1/n)^2 ≥-2/n - 1·( 1- ∑_w ∈ V(_w,u^t )^2 ) + ∑_w ∈ V∑_z ∈ V, z ≠ u1/n-1·( _w,z^t )^2+ 1/n = -1/n-1 - 1/n · (n-1) + 2/n-1∑_w ∈ V( _w,u^t )^2+ 1/n - 1·∑_w ∈ V∑_z ∈ V, z ≠ u(_w,z^t )^2= -1/n-1 - 1/n · (n-1) + 1/n-1∑_w ∈ V(_w,u^t)^2 + 1/n-1·∑_w ∈ V∑_z ∈ V(_w,z^t )^2≥ -1/n-1 - 1/n · (n-1) + 1/n · (n-1) + 1/n-1≥ 0, where the second last inequality holds sinceis doubly stochastic.The next step and main ingredient of the proof of Theorem <ref> is to establish that ^t_.,k - 1/𝐧_∞ = O(1/√(t)). This result will be a direct application of a general bound on the t-step probabilities of an arbitrary, possibly non-reversible Markov chain, as given in Theorem <ref> from page thm:markov:*In this subsection we prove Theorem <ref>, assuming the correctness of Theorem <ref> whose proof is deferred to Section <ref>. By Theorem <ref> and Lemma <ref>, we obtain | x_u^(t) - x_v^(t)| ≥δ· 16√(2)κ·σ·log n ·max_k ∈ V^t_.,k - 1/𝐧_2+ √(48 log n)≤ 2 · e^- δ^2 + 2n^-2. Hence we can find a δ=√(3log n) so that the latter probability gets smaller than 3 n^-2. Further, by applying Theorem <ref> with α = β = 2^-d to = we conclude that ^t_.,k - 1/𝐧_∞ = O(t^-1/2), since d=O(1). Using the fact ._2^2 ≤._∞·._1, ^t_.,k - 1/𝐧_2^2 = O(t^-1/2), and by the union bound, (x^(t)) = O(t^-1/4·σ· (log n)^3/2 + √(log n)) with probability at least 1-3 n^-1.§.§ Proof of Theorem <ref> This section is devoted to the proof of Theorem <ref>. 
Our proof is based on the evolving-set process, which is a Markov chain based on any given irreducible, not necessarily reversible Markov chain on Ω. For the definition of the evolving set process, we closely follow the exposition in <cit.>.Letdenote the transition matrix of an irreducible Markov chain and π its stationary distribution. ^t is the t-step transition probability matrix. The edge measure Q is defined by Q_x,y := π_x _x,y and Q(A, B) = ∑_x ∈ A, y ∈ B Q_x,y. Given a transition matrix , the evolving-set process is a Markov chain on subsets of Ω defined as follows. Suppose the current state is S ⊂Ω. Let U be a random variable which is uniform on [0,1]. The next state of the chain is the set S̃ = { y ∈Ω : Q(S,y)/π_y≥ U }.This chain is not irreducible because∅ and Ω are absorbing states. It follows thaty ∈ S_t+1 |S_t = Q(S_t, y)/π_ysince the probability that y ∈ S_t+1 is equal to the probability of the event that the chosen value of U is less than Q(S_t, y)/π_y. Let (M_t) be a non-negative martingale with respect to (Y_t), and define T_h := min{t ≥ 0: M_t = 0orM_t ≥ h} Assume that for any h ≥ 0 0pt * For t < T_h, (M_t+1 |Y_0, … , Y_t) ≥σ^2, and * M_T_h≤ Dh. Let T := T_1. If M_0 is a constant, then T > t≤2M_0/σ√(D/t). We now generalize <cit.> to cover arbitrarily small loop probabilities. Let (U_t) be a sequence of independent random variables, each uniform on [0,1], such that S_t+1 is generated from S_t using U_t+1. Then with β := min_u_u,u > 0, π(S_t+1)|U_t+1≤β, S_t = S ≥π(S) + Q(S, S^c) ,π(S_t+1)|U_t+1 > β, S_t = S ≤π(S) - β Q(S,S^c)/1 - β.We list a few auxiliary results from <cit.> about the evolving set process that will be used to prove the result. If (S_t)_t ≥ 0 is the evolving-set process associated to the transition matrix , then for any time t and x,y ∈Ω ^t_x,y = π_y/π_x{x}y ∈ S_t. Recall that (S_t) is the evolving-set process based on the Markov chain whose transition matrix is . {x}y ∈ S_t means the probability of the event y ∈ S_t with the initial state of the evolving set being { x }. The sequence {π(S_t)} is a martingale. Let (M_t) be a martingale and τ a stopping time. If τ < ∞ and |M_t τ| ≤ K for all t and some constant K where t τ := min{t, τ}, then M_τ = M_0. Given U_t+1≤β, the distribution of U_t+1 is uniform on [0, β]. Case 1: For y ∉ S, we know that for y satisfying Q(S,y)/π_y∈ [0, β] Q(S,y)/π_y≥ U_t+1 |U_t+1≤β, S_t = S = Q(S,y)/βπ_y and for y satisfying Q(S,y)/π_y∈ (β,1], Q(S,y)/π_y≥ U_t+1 |U_t+1≤β, S_t = S = 1. We know that Q(S,y)/π_y = ∑_x ∈ Sπ_x_x,y/π_y≤∑_x ∈Ωπ_x_x,y/π_y = 1. Since y ∈ S_t+1 if and only if U_t+1≤ Q(S_t,y)/π_x, we therefore can combine the above results by using an inequality and conclude that y ∈ S_t+1 |U_t+1≤β, S_t = S ≥Q(S,y)/π_y for y ∉ S because β≤ 1 and Q(S,y)/π_y ≤ 1. Case 2: For y ∈ S, we have Q(S,y)/π_y ≥ Q(y,y)/π_y ≥β, it follows that when U_t+1≤β y ∈ S_t+1 |U_t+1≤β, S_t = S= 1 for y ∈ S. We have π(S_t+1)|U_t+1≤β, S_t = S = ∑_y ∈Ω1_{y ∈ S_t+1}π_y|U_t+1≤β, S_t = S= ∑_y ∈ Sπ_yy ∈ S_t+1 |U_t+1≤β, S_t = S+ ∑_y ∉ Sπ_yy ∈ S_t+1 |U_t+1≤β, S_t = S. Based on previous results, we can see that π(S_t+1)|U_t+1≤β, S_t = S≥π(S) + Q(S, S^c). By Lemma <ref> and the formulas above, π(S) = π(S_t+1)|S_t = S = β·π(S_t+1)|U_t+1≤β, S_t = S+ (1 - β) ·π(S_t+1)|U_t+1 > β, S_t = S. Rearranging shows that π(S_t+1)|U_t+1 > β, S_t = S≤π(S) - β Q(S,S^c)/1 - β. The derivation of the next lemma closely follows the analysis in <cit.>. For the sake of completeness, a proof can be found in the appendix. 
lempartone For any two states x,y, | ^t_x,y - π_y | ≤π_y/π_x·{x}τ > t. First of all, let the hitting time τ = min{t ≥ 0 : S_t ∈{∅, Ω}}. We have S_τ∈{∅, Ω} and π(S_τ) = 1_{S_τ = Ω}. We consider an evolving set process with S_0 = {x}. By Theorem <ref> and Lemma <ref>, π_x = {x}π(S_0) = {x}π(S_τ) = {x}1_{S_τ = Ω} = {x}S_τ = Ω = {x}x ∈ S_τ For the last equality, it is true because we know that S_τ can only be ∅ or Ω. Hence, the probability that x is an element in S_τ is equal to the probability that S_τ is Ω. Note that here the second x in the last line can be any other element in Ω. For example, we also know that ∀ y ∈Ω, {x}S_τ = Ω = {x}y ∈ S_τ. For our bound, we know that by Lemma <ref> and (<ref>), | ^t(x,y) - π_y| = π_y/π_x|{x}y ∈ S_t - π_x | = π_y/π_x|{x}y ∈ S_t - {x}S_τ = Ω|. By (<ref>), {x}y ∈ S_t= {x}y∈ S_t, τ > t + {x}y∈ S_t, τ≤ t = {x}y∈ S_t, τ > t + {x}S_τ = Ω, τ≤ t. By simple substitution we obtain | ^t_x,y - π_y | = π_y/π_x|{x}y∈ S_t, τ > t + {x}S_τ = Ω, τ≤ t - {x}S_τ = Ω| = π_y/π_x|{x}y∈ S_t, τ > t - {x}S_τ = Ω, τ > t| ≤π_y/π_x{x}τ > t. The last line is true because we remove all possible intersections. Now we want to use Proposition <ref> to bound {x}τ > t. To apply it, we substitute the following parameters: M_0 is chosen to be π({x}), Y_t is S_t, and T = T_1 := min{t ≥ 0:π(S_t) = 0or π(S_t) ≥ 1 }. Hence in our case, τ is the same as T (or T_1) in the proposition. The following two lemmas elaborate on the two preconditions (i) and (ii) of Proposition <ref>. lemparttwo For any time t and S_0={x}, _S_t(π(S_t+1)) ≥βπ_min^2α^2. Conditioning always reduces variance and S_t ≠∅ or Ω, we have _S_t(π(S_t+1)) ≥_S_t(π(S_t+1)| 1_{U_t+1≤β}). For S_t = S, S_tπ(S_t+1)| 1_{U_t+1≤β} = π(S_t+1)|U_t+1≤β, S_t = S ,w.p. β,π(S_t+1)|U_t+1 > β, S_t = S,w.p.1 - β and by Lemma <ref>, we know that S_tπ(S_t+1)| 1_{U_t+1≤β}≥π(S) + Q(S, S^c) ,w.p. β,≤π(S) - β Q(S,S^c)/1 - β ,w.p.1 - β. For simplicity, we let S_tπ(S_t+1)| 1_{U_t ≤β} be X, π(S_t+1)|U_t+1≤β, S_t = S be x_1 and π(S_t+1 |U_t+1 > β, S_t = S) be x_2. Then we have _S_t (π(S_t+1)| 1_{U_t ≤β}) = X^2 - X^2= β x_1^2 + (1 - β)x_2^2 - (β x_1 + (1-β)x_2)^2 = (β - β^2)(x_1 - x_2)^2 In order to derive a lower bounds on this variance, based on Lemma <ref> we let x_1 = π(S) + Q(S,S^c) and x_2 = π(S) - (β/1 - β)Q(S,S^c). With this we obtain _S_t (π(S_t+1)| 1_{U_t ≤β}) ≥β/1 - βQ^2(S,S^c). Therefore, provided S_t ∉{∅, Ω}, we have _S_t( π(S_t+1) ) ≥β/1 - βQ^2(S, S^c) ≥βπ_min^2 α^2/(1 - β). The last inequality follows from the fact that if S ∉{∅, Ω} then there exist u ∈ S, v ∉ S with _u,v > 0, whence Q(S, S^c) = ∑_s ∈ S w ∈ S^cπ_s _s,w≥π_u _u,v≥π_minα. Since 1 - β < 1, we finally obtain _S_t(π(S_t+1)) ≥βπ_min^2α^2. Finally, we derive an upper bound on the amount by which S_t can increase in one iteration. lempartthree For any time t andS_0={x}, π(S_t+1) ≤(1-β/α + 1)π_max/π_min·π(S_t). Since S_t+1 = { y ∈Ω: ∑_x ∈ S_tπ_x _x,y/π_y≥ U }. If U decreases to 0, then every y ∈ S_t+1 is at least connected to an x ∈ S_t. In other words, _x,y > 0 for x ∈ S_t and y ∈ S_t+1. Hence |S_t+1| ≤ (1 - β/α + 1)|S_t|. We also know that π(S_t+1) ≤ |S_t+1| ·π_max≤(1 - β/α + 1 ) · |S_t| ·π_max≤(1 - β/α + 1 ) ·π(S_t) ·π_max/π_min The proof of Theorem <ref> follows then by combining Proposition <ref>, Lemma <ref>, Lemma <ref>, Lemma <ref> and Lemma <ref>. 
With the help of the previous three lemmas, we can apply Proposition <ref> with M_0 = π_x, σ≥β^1/2π_x α and D = (1-β/α) π_max/π_min to obtain |^t_x,y - π_y|≤π_y/π_x{x}τ > t≤π_y/π_x2π_x/σ√(D/t) = 2π_y/β^1/2π_minα√((1-β/α + 1)π_max/π_min/t)≤π_max^3/2/π_min^3/2·2/β^1/2α√(1-β+α/α t) §.§ Proof of Theorem <ref> We now prove the following discrepancy bound that depends on the λ(), as defined in Section <ref>.By <cit.>, for any pair of vertices u,v ∈ V, | ^t_u,v - 1/n| ≤λ()^t/2. Hence by Lemma <ref> ^t_.,u - ^t_.,v_2 = O(λ()^t/4), and the bound on the discrepancy follows from Theorem <ref> and the union bound over all vertices. § APPLICATIONS TO DIFFERENT GRAPH TOPOLOGIES Cycles. Recall that for the cycle, V={0,…,n-1} is the set of vertices, and the distance between two vertices is (x,y) = min{y-x, x+n-y } for any pair of vertices x < y.The upper bound on the discrepancy follows directly from Theorem <ref>, and it only remains to prove the lower bound. To this end, we will apply the lower bound in Theorem <ref> and need to derive a lower bound on _.,u^t - 1/𝐧_2^2. Intuitively, if we had a simple random walk, we could immediately infer that this quantity is Ω( 1/√(t)), since after t steps, the random walk is with probability ≈ 1/√(t) at any vertex with distance at most O(√(t)). To prove that this also holds for the load balancing process, we first derive a concentration inequality that upper bounds the probability for the random walk to reach a distant state: lemupper Consider the standard balancing circuit model on the cycle with round matrix . Then for any u ∈ V and δ∈ (0,n/2-1), we have∑_v ∈ V (u,v) ≥δ^t_u,v≤ 2 ·exp( - (δ-2)^2/8t). The proof of the lemma above makes uses of the following variant of Azuma's concentration inequality for martingales, which can be for instance found in McDiarmid's survey on concentration inequalities. Let Z_1,Z_2,…,Z_n be a martingale difference sequence with a_k ≤ Z_k ≤ b_k for each k, for suitable constants a_k, b_k. Then for any δ≥ 0, max_1 ≤ j ≤ n| ∑_i=1^j Z_i | > δ ≤ 2 ·exp(- 2 δ^2/∑_k=1^n (b_k-a_k)^2). Note that the balancing circuit on the cycle corresponds to the following random walk (X_1,X_2,…,X_t) on the vertex set V= {-n/2+1,…,0,…,n/2-1}, where for any time-step t ∈ℕ, X_t denotes the position of the random walk after step t. First, we consider the transition for any odd s: If X_s is odd, then with probability 1/2, X_s+1 = X_s+1 and otherwise X_s+1=X_s. If X_s is even, then with probability 1/2, X_s+1=X_s-1 and otherwise X_s+1=X_s (additions and subtractions are under the implicit assumptions that -n/2+1 ≡ n/2-1 and n/2 ≡ -n/2+1). The case for even s is analogous. We will couple the random walk (X_t)_t ≥ 0 with another random walk (Y_t)_t ≥ 0 on the integers ℕ, where again Y_t denotes the position of the walk after step t. The transition probabilities are exactly the same as for the walk (X_t)_t ≥ 0, the only difference is that we don't use the equivalences -n/2+1 ≡ n/2-1 and n/2 ≡ -n/2+1. It is clear that we can couple the transitions of the two walks so that they evolve identically as long as the walks do not reach any of the two boundary points -n/2+1 or n/2-1.Let us first analyze Y_t for an odd time step. As described above, the distribution of Y_t-Y_t-1 depends on whether Y_t-1 is even or not. However, notice regardless of where the random walk is at step t-2, the random walk will be at an odd or even vertex at step t-1 with probability 1/2 each. 
Hence for any starting position y,Y_t - Y_t-1 |Y_0 = y =Y_t-1·( 1/2· 1 + 1/2· 0 ) +Y_t-1·( 1/2· (-1) + 1/2· 0 ) = 0,and further,| Y_1 - Y_0 | ≤ 1.Combining the last two inequalities shows that for any start vertex y,|Y_t |Y_0 = y- y | ≤ 1.With the same arguments as before we conclude that for any fixed start vertex Y_0=y_0,max_a,b ∈ V|Y_t - Y_t-1 |Y_1=a -Y_t - Y_t-1 |Y_1=b | ≤ 2,because the expected differences of Y_t - Y_t-1 are all zero whenever t ≥ 3. Let us now consider the martingale W_i =Y_t|Y_0,Y_1,…,Y_i, and let Z_i := W_i - W_i-1 be the corresponding martingale difference sequence. As shown before, |W_i-W_i-1 | ≤ 2. Hence by Lemma <ref>, max_1 ≤ j ≤ t| ∑_i=1^j Z_i |≥δ ≤ 2 ·exp(-δ^2 / 8 · t)If for every 1 ≤ j ≤ t, ∑_i=1^j W_i< δ holds, then this implies both random walks (X_t)_t ≥ 0 and (Y_t)_t ≥ 0 behave identically since none of them ever reaches any of the two boundary points -n/2+1 or n/2-1. In particular we conclude that for the original walk (X_t)_t ≥ 0,∑_v ∈ V: (u,v) ≥δ^t_u,v = | ∑_i=1^t X_i |≥δ≤max_1 ≤ j ≤ t| ∑_i=1^j X_t |≥δ= max_1 ≤ j ≤ t| ∑_i=1^j Y_t |≥δ≤max_1 ≤ j ≤ t| ∑_i=1^j Z_t |≥δ -2 ≤ 2 ·exp(- (δ-2)^2 / 8 · t ),where the second-to-last inequality is due to the fact that | ∑_i=1^j Y_t | ≤ 2. With the help Lemma <ref>, we can indeed verify our intuition:lemintuition Consider the standard balancing circuit model on the cycle with round matrix . Then for any vertex u ∈ V, _.,u^t - 1/𝐧_2^2 = Ω(1/√(t)). Define S_δ := { w ∈ V: (w, u) ≤δ}, so that | S_δ | = 2δ. With δ = 20√(t) and t ≥ 10 ∑_w ∈ S_δ^t_w,u= 1 - ∑_w ∉ S_δ^t_w,u≥ 1 - 2·exp( -(δ - 2)^2/8t)≥1/2.By Cauchy-Schwarz inequality, _w,·^t _2^2≥∑_w ∈ S_δ(^t_w,u)^2≥1/2δ( ∑_w ∈ S_δ^t_w,u)^2 ≥1/2δ(1/2)^2 = Ω(t^-1/2). Lemma <ref> also proves that the factor √(1/t) in the upper bound in Theorem <ref> is best possible. The lower bound on the discrepancy now follows by combining Lemma <ref> with Theorem <ref> and Lemma <ref> stating that for any vertex u ∈ V, there exists another vertex v ∈ V such that _.,u^t - _.,v^t _2^2 ≥_.,u^t - 1/𝐧_2^2 =Ω(1/√(t)).Tori.In this section we consider r-dimensional tori, where r ≥ 1 is any constant.For the upper bound, note that the computation of ^t_.,. can be decomposed to independent computations in the r dimensions, and each dimension has the same distribution as the cycle on n^1/r vertices. Specifically, if we denote bythe round matrix of the standard balancing circuit scheme on the cycle with n^1/r vertices andis the round matrix of the r-dimensional torus with n vertices, then for any pair of vertices x=(x_1,…,x_r), v=(y_1,…,y_r) on the torus we have _x,y^t = ∏_i=1^r _x_i,y_i^t. From Theorem <ref>, | _x_i,y_i^t - 1/n^1/r | = O(t^-1/2), and therefore, since r is constant,_x,y^t≤∏_i=1^r ( 1/n^1/r + O(t^-1/2) ) = O( t^-r/2 + n^-1),and thus _x,y^t - 1/𝐧_2^2 = O(t^-r/2) for any pair of vertices x,y. Hence by Lemma <ref>, _.,u^t - _.,v^t _2^2 = O( t^-r/2). Plugging this bound into Theorem <ref> yields that the load difference between any pair of the nodes u and v at round t is at most O( t^-r/4·σ·log^3/2 n+ √(log n)) with probability at least 1-2 n^-2. The bound on the discrepancy now simply follows by the union bound.We now turn to the lower bound on the discrepancy. With the same derivation as in Lemma <ref> we obtain the following result: lemintuitiontwo Consider the standard balancing circuit model on the r-dimensional torus with round matrix . 
Then for any vertex u ∈ V, _.,u^t - 1/𝐧_2^2 = Ω(t^-r/2).As before, the lower bound on the torus now follows by combining Lemma <ref> with the general lower bound given in Theorem <ref>.Expanders. The upper bound O(λ()^t/4·σ· (log n)^3/2 + √(log n)) for expanders follows immediately from Theorem <ref>.For the lower bound, since the round matrix consists of d matchings, it is easy to verify that whenever _u,v^t > 0, we have _u,v^t ≥ 2^-d · t. Consequently, for any vertex u ∈ V, _.,u^t - 1/𝐧_2^2 =Ω( 2^-d · t). Plugging this into Theorem <ref> yields a lower bound on the discrepancy which is Ω( 2^-d · t/2·σ / √(logσ)).Hypercubes. For the hypercube, there is a worst-case bound of log_2 log_2 n + O(1) <cit.> for any input after log_2 n iterations of the dimension-exchange, i.e., after one execution of the round matrix. Hence, we will only analyze the discrepancy after s matchings, where 1 ≤ s < log_2 n.The derivation of the lower bound is almost analogous to the one for expanders, since for any pair of vertices u,v, ∏_i=s^t _u,v^(s)∈{0,2^-t} (recall that _.,.^(s) is the matching applied in the s-step of the dimension exchange). The only difference is that we are counting matchings individually and not full periods. By applying the same analysis as in Theorem <ref>, but with the stronger inequality | ∏_s=1^t _u,v^(s) - 1/n | ≤ 2^-t, and we obtain that the upper bound of the discrepancy is O(2^-t/2·σ· (log n)^3/2 + √(log n)). Applying Theorem <ref>, we obtain the lower bound Ω( 2^-t/2·σ / √(logσ) ).§ DISCUSSION AND EMPIRICAL RESULTS§.§ Average-Case versus Worst-Case We will now compare our average-case to a worst-case scenario on cycles, 2D-tori and hypercubes. For the sake of concreteness, we always assume that the input is drawn from the uniform distribution 𝖴𝗇𝗂[0,2K], where K will be specified later. Note that the total number of tokens is ≈ n · K, and the initial discrepancy will be Θ(K). Our choice for the worst-case load vector will have the same number of tokens and initial discrepancy, however, the exact definition of the vector as well as the choice of the parameter K will depend on the underlying topology. Cycles. As one representative of a worst-case setting, fix an arbitrary node u ∈ V and let all nodes with distance at most n/4 initially have a load of 2K while all other nodes have load 0. This gives rise to a load vector with n · K tokens and initial discrepancy 2K.2D-Tori. Again, we fix an arbitrary node u ∈ V and assign a load of 2K to the n/2-nearest neighbors of u and load 0 to the other nodes. Again, this defines a load vector with n · K tokens and initial discrepancy 2K.The next result provides a lower bound on the discrepancy for cycles and 2D-tori in the aforementioned worst-case setting. It essentially shows that for worst-case inputs, Ω(n^2) rounds and Ω(n) rounds are necessary for the cycle, 2D-tori, respectively, in order to reduce the discrepancy by more than a constant factor. This stands in sharp contrast to Theorem <ref>, proving a decay of the discrepancy by ≈ t^-1/4, starting from the first round.propositionworstcaseFor the aforementioned worst-case setting on the cycle, it holds for any round t > 0 that (x^(t)) ≥1/8· K ·(1 - exp( - n^2/2048 t) ) - √(48 log n), with probability at least 1-n^-1. Further, for 2D-tori, it holds for any round t > 0 that (x^(t)) ≥1/8· K ·(1 - exp( - n/2048 t) ) - √(48 log n), with probability at least 1-n^-1. We first consider the case of a cycle. Let S_1 be the subset of nodes that have a non-zero initial load; so |S_1|=n/2. 
Clearly, there is a subset of nodes S_2 ⊆ V with |S_2| = n/8 so that for each node u ∈ S_2, only nodes v with dist(u,v) ≥ n/16 can have x_v^{(0)} > 0. We will now derive a lower bound on the discrepancy in this worst-case setting by upper bounding the load of the vertices in the subset S_2. To lower bound the discrepancy at round t, recall that by Lemma <ref> we have

∑_{v ∈ V : dist(u,v) ≥ δ} 𝐌^t_{u,v} ≤ 2 · exp(−(δ−2)²/(8t)).

Let us now choose δ = n/16, and we thus conclude that

∑_{u ∈ S_1} ∑_{v ∈ S_2} 𝐌^t_{u,v} ≤ ∑_{u ∈ S_1} ∑_{v ∈ V : dist(u,v) ≥ δ} 𝐌^t_{u,v} ≤ 2 · |S_1| · exp(−n²/(2048t)).

This implies for the total load of the vertices in S_2 at time t:

∑_{v ∈ S_2} ξ_v^{(t)} = ∑_{v ∈ S_2} ∑_{u ∈ S_1} ξ_u^{(0)} · 𝐌^t_{u,v} = 2K ∑_{u ∈ S_1} ∑_{v ∈ S_2} 𝐌^t_{u,v} ≤ 2K · n · exp(−n²/(2048t)),

where K is the average load. Recalling that |S_2| = n/8, by the pigeonhole principle there exists a node v ∈ S_2 such that

ξ_v^{(t)} ≤ (1/|S_2|) · 2K · n · exp(−n²/(2048t)) = 16 · K · exp(−n²/(2048t)).

This immediately implies the following lower bound on the discrepancy:

disc(ξ^{(t)}) ≥ ξ̄ − ξ_v^{(t)} ≥ ξ̄ · (1 − 16 · exp(−n²/(2048t))),

where ξ̄ = K is the average load. The corresponding lower bound on disc(x^{(t)}) follows by Theorem <ref> and the union bound.

The proof for the 2-dimensional torus is almost identical. Again, let S_1 be the set of nodes that have a non-zero load. Clearly, there is a subset S_2 ⊆ V with |S_2| = n/8 so that for each node u ∈ S_2, only nodes v with dist(u,v) ≥ √n/16 can have x_v^{(0)} > 0. Let us now view 𝐌 as the transition matrix of a Markov chain. Then 𝐌^t is obtained by running two independent Markov chains (one for each dimension), where each of the two Markov chains corresponds to the round matrix of the cycle. We can still apply Lemma <ref> as before, even though here the size of each cycle is √n, to obtain that

∑_{v ∈ V : dist(u,v) ≥ δ} 𝐌^t_{u,v} ≤ 2 · exp(−(δ−2)²/(8t)).

Here we choose δ = √n/16, and the remaining part of the proof is exactly the same as before.

Hypercube. Regarding the hypercube, we will consider only log_2 n rounds, since the discrepancy is log_2 log_2 n + O(1) after log_2 n rounds and O(1) after 2 log_2 n rounds <cit.>. A natural corresponding worst-case distribution is to have load 2K on all nodes whose log_2 n-th bit is equal to one and load 0 otherwise. This way, the discrepancy is only reduced in the final round log_2 n.

§.§ Experimental Setup

For each of the three graphs (cycles, 2D-tori and hypercubes), we consider two comparative experiments with an average-case initial load vector and a worst-case initial load vector each. The plots and tables on the next two pages display the results, where for each case we took the average discrepancy over 10 independent runs. The first experiment considers a "lightly loaded case", where the theoretical results suggest that a small (i.e., constant or logarithmic) discrepancy is reached well before the expected "worst-case load balancing times", which are ≈ n² for cycles and ≈ n for 2D-tori. The second experiment considers a "heavily loaded case", where the theoretical results suggest that a small discrepancy is not reached faster than in the worst case. Specifically, for cycles and 2D-tori, we choose K = √n for the lightly loaded case and K = n² for the heavily loaded case. The experiments confirm the theoretical results in the sense that for both choices of K, we have a much quicker convergence of the discrepancy than in the corresponding worst cases. However, the experiments also demonstrate that only in the lightly loaded case do we reach a small discrepancy quickly, whereas in the heavily loaded case there is no big difference between worst case and average case when it comes to the time needed to reach a small discrepancy.
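Both settings are easy to simulate. The sketch below (Python) contrasts a Uni[0, 2K] input with the contiguous-block worst case on the cycle; the rounding rule, in which an odd pair total leaves the excess token at a uniformly random endpoint, is an assumption chosen to match the randomized-rounding spirit of the model.

import numpy as np

rng = np.random.default_rng(1)

def run(x, rounds):
    """Token-based balancing circuit on the cycle; returns the discrepancy
    after each round."""
    x = x.copy()
    n = len(x)
    disc = [int(x.max() - x.min())]
    for _ in range(rounds):
        for start in (0, 1):                      # the two matchings
            for u in range(start, n, 2):
                v = (u + 1) % n
                tot = x[u] + x[v]
                x[u] = tot // 2 + (tot % 2) * rng.integers(2)
                x[v] = tot - x[u]
        disc.append(int(x.max() - x.min()))
    return disc

n, K = 256, 64
avg = rng.integers(0, 2 * K + 1, size=n)          # average case: Uni[0, 2K]
worst = np.zeros(n, dtype=np.int64)
worst[: n // 2] = 2 * K                           # worst case: contiguous block
print(run(avg, 200)[-1], run(worst, 200)[-1])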
On the hypercube, since we are interested in the case where 1 ≤ t ≤ log_2 n, our bounds on the discrepancy indicate that we should choose K smaller than in the case of cycles and 2D-tori. That is why we choose K = n^{1/4} in the lightly loaded case and K = n in the heavily loaded case. (As a side remark, we note that due to the symmetry of the hypercube, any initial load vector sampled from [0, β·(n−1)] is equivalent to an initial load vector sampled from [0, n−1].) With these adjustments of K in both cases, the experimental results for the hypercube are in line with the ones for the cycle and 2D-tori. The details of the experiments, containing plots and tables with the sampled discrepancies, can be found on the following two pages (Section <ref>).

§ EXPERIMENTAL DATA AND CHARTS

[Plot (i): discrepancy vs. round t on the cycle, lightly loaded case; curves disc_ac(x^{(t)}) (average case) and disc_wc(x^{(t)}) (worst case); data tabulated below.]

 t      disc_ac   disc_wc
 0      128.0     128.0
 2^0    108.9     128.0
 2^2    75.6      128.0
 2^4    52.7      128.0
 2^6    34.2      128.0
 2^8    22.5      128.0
 2^10   14.7      128.0
 2^12   9.3       128.0
 2^14   5.7       128.0
 2^16   3.8       128.0
 2^18   2.0       88.0
 2^20   1.2       14.0
 2^22   1.0       1.0
 2^24   1.0       1.0

[Plot (ii): discrepancy vs. round t on the cycle, heavily loaded case, logarithmic y-axis; curves disc_ac(x^{(t)}) and disc_wc(x^{(t)}); data tabulated below.]

 t      disc_ac       disc_wc
 0      3.35 × 10^7   3.35 × 10^7
 2^0    2.83 × 10^7   3.35 × 10^7
 2^2    1.96 × 10^7   3.35 × 10^7
 2^4    1.34 × 10^7   3.35 × 10^7
 2^6    9.17 × 10^6   3.35 × 10^7
 2^8    5.94 × 10^6   3.35 × 10^7
 2^10   3.72 × 10^6   3.35 × 10^7
 2^12   2.30 × 10^6   3.35 × 10^7
 2^14   1.43 × 10^6   3.35 × 10^7
 2^16   8.19 × 10^5   3.32 × 10^7
 2^18   3.58 × 10^5   2.30 × 10^7
 2^20   5.34 × 10^4   3.62 × 10^6
 2^22   3.35 × 10^1   2.21 × 10^3
 2^24   1.0           1.0

[Plot (iii): discrepancy vs. round t on the 2D-torus, lightly loaded case; curves disc_ac(x^{(t)}) and disc_wc(x^{(t)}); data tabulated below.]

 t      disc_ac   disc_wc
 0      512.0     512.0
 2^0    281.1     512.0
 2^2    120.0     512.0
 2^4    54.8      512.0
 2^6    23.0      512.0
 2^8    10.0      507.0
 2^10   4.2       381.5
 2^12   1.3       63.0
 2^14   1.0       1.0
 2^16   1.0       1.0

Experimental Results: Experiments (i) on the cycle with n = 2^12 and initial discrepancy 2^7 = 128, (ii) on the cycle with n = 2^12 and initial discrepancy 2^25 = 33,554,432, and (iii) on the 2D-torus with n = 2^16 and initial discrepancy of 2^9. For the heavily loaded case, we used logarithmic scaling on the y-axis to highlight the behavior when t is close to the worst-case load balancing time.

[Plot (iv): discrepancy vs. round t on the 2D-torus, heavily loaded case, logarithmic y-axis; curves disc_ac(x^{(t)}) and disc_wc(x^{(t)}); data tabulated below.]

 t      disc_ac       disc_wc
 0      8.59 × 10^9   8.59 × 10^9
 2^0    4.83 × 10^9   8.59 × 10^9
 2^2    1.89 × 10^9   8.59 × 10^9
 2^4    8.28 × 10^8   8.59 × 10^9
 2^6    3.58 × 10^8   8.59 × 10^9
 2^8    1.44 × 10^8   8.50 × 10^9
 2^10   4.52 × 10^7   6.38 × 10^9
 2^12   5.91 × 10^6   1.03 × 10^9
 2^14   3.59 × 10^3   6.33 × 10^5
 2^16   1.0           2.0

[Plot (v): discrepancy vs. matching step t on the hypercube, lightly loaded case; curves disc_ac(x^{(t)}) and disc_wc(x^{(t)}); data tabulated below.]

 t    disc_ac   disc_wc
 0    128.0     128.0
 1    128.0     128.0
 2    128.0     128.0
 3    116.9     128.0
 4    93.1      128.0
 5    69.1      128.0
 10   13.2      128.0
 15   5.8       128.0
 20   2.5       128.0
 25   2.0       128.0
 27   2.0       128.0
 28   2.0       2.0

[Plot (vi): discrepancy vs. matching step t on the hypercube, heavily loaded case, logarithmic y-axis; curves disc_ac(x^{(t)}) and disc_wc(x^{(t)}); data tabulated below.]

 t    disc_ac       disc_wc
 0    2.68 × 10^8   2.68 × 10^8
 1    2.68 × 10^8   2.68 × 10^8
 2    2.65 × 10^8   2.68 × 10^8
 3    2.41 × 10^8   2.68 × 10^8
 4    1.88 × 10^8   2.68 × 10^8
 5    1.37 × 10^8   2.68 × 10^8
 10   2.21 × 10^7   2.68 × 10^8
 15   3.26 × 10^6   2.68 × 10^8
 20   4.23 × 10^5   2.68 × 10^8
 25   2.98 × 10^4   2.68 × 10^8
 27   1.29 × 10^4   2.68 × 10^8
 28   3.0           3.0

Experimental Results (cntd.): Experiments (iv) on the 2D-torus with n = 2^16 and initial discrepancy 2^33 = 8,589,934,592, (v) on the hypercube with n = 2^28 and initial discrepancy 256, and (vi) on the hypercube with n = 2^28 and initial discrepancy of 2^28 = 268,435,456. For the heavily loaded cases, we used logarithmic scaling on the y-axis to highlight the behaviour when t is close to the worst-case load balancing time.

§ CONCENTRATION TOOLS

Let X have a Poisson distribution with mean μ. Then for any ϵ > 0,

ℙ[X ≤ (1 − ϵ)μ] ≤ e^{−ϵ²μ/2},
ℙ[X ≥ (1 + ϵ)μ] ≤ [e^ϵ (1 + ϵ)^{−(1+ϵ)}]^μ ≤ e^{−min(ϵ, ϵ²)·μ/3}.

Let (M_t) be a martingale and τ a stopping time. If τ < ∞ almost surely and |M_{t ∧ τ}| ≤ K for all t and some constant K, where t ∧ τ := min{t, τ}, then 𝔼[M_τ] = 𝔼[M_0].

Consider a collection of independent random variables X_i ∈ [a_i, b_i] with i ∈ [n]. Then for any number δ > 0,

ℙ[|∑_{i=1}^{n} X_i − 𝔼[∑_{i=1}^{n} X_i]| ≥ δ] ≤ 2 · exp(−2δ² / ∑_{i=1}^{n} (b_i − a_i)²).

| http://arxiv.org/abs/1703.08702v1 | {
"authors": [
"Leran Cai",
"Thomas Sauerwald"
],
"categories": [
"cs.DC",
"cs.DS",
"G.3"
],
"primary_category": "cs.DC",
"published": "20170325150349",
"title": "Randomized Load Balancing on Networks with Stochastic Inputs"
} |
Open domain Question Answering (QA) systems must interact with external knowledge sources, such as web pages, to find relevant information. Information sources like Wikipedia, however, are not well structured and are difficult to utilize in comparison with Knowledge Bases (KBs). In this work we present a two-step approach to question answering from unstructured text, consisting of a retrieval step and a comprehension step. For comprehension, we present an RNN based attention model with a novel mixture mechanism for selecting answers from either retrieved articles or a fixed vocabulary. For retrieval we introduce a hand-crafted model and a neural model for ranking relevant articles. We achieve state-of-the-art performance on the WikiMovies dataset, reducing the error by 40%. Our experimental results further demonstrate the importance of each of the introduced components.

§ INTRODUCTION

Natural language based consumer products, such as Apple Siri and Amazon Alexa, have found widespread use in the last few years. A key requirement for these conversational systems is the ability to answer factual questions from the users, such as those about movies, music, and artists. Most of the current approaches for Question Answering (QA) are based on structured Knowledge Bases (KBs) such as Freebase <cit.> and Wikidata <cit.>. In this setting the question is converted to a logical form using semantic parsing, which is queried against the KB to obtain the answer <cit.>. However, recent studies have shown that even large curated KBs, such as Freebase, are incomplete <cit.>. Further, KBs support only certain types of answer schemas, and constructing and maintaining them is expensive. On the other hand, there is a vast amount of unstructured knowledge available in textual form from web pages such as Wikipedia, and hence an alternative is to directly answer questions from these documents. In this approach, shown in Figure <ref>, articles relevant to the question are first selected (retrieval step). Then, the retrieved articles and the question are jointly processed to extract the answer (comprehension step). This retrieval based approach has a longer history than the KB based approach <cit.>. It can potentially provide much wider coverage over questions, and is not limited to specific answer schemas. However, there are still gaps in its performance compared to the KB-based approach <cit.>. The comprehension step, which requires parsing information from natural language, is the main bottleneck, though suboptimal retrieval can also lead to lower performance. Several large-scale datasets introduced recently <cit.> have facilitated the development of powerful neural models for reading comprehension. These models fall into one of two categories: (1) those which extract answers as a span of text from the document <cit.> (Figure <ref>, top); (2) those which select the answer from a fixed vocabulary <cit.> (Figure <ref>, bottom). Here we argue that depending on the type of question, either (1) or (2) may be more appropriate, and introduce a latent variable mixture model to combine the two in a single end-to-end framework. We incorporate the above mixture model in a simple Recurrent Neural Network (RNN) architecture with an attention mechanism <cit.> for comprehension.
In the second part of the paper we focus on the retrieval step for the QA system, and introduce a neural network based ranking model to select the articles that feed the comprehension model. We evaluate our model on the WikiMovies dataset, which consists of 200K questions about movies, along with 18K Wikipedia articles for extracting the answers. Key-Value Memory Neural Networks (KV-MemNN) <cit.> were previously applied to the dataset, achieving 76.2% accuracy. Adding the mixture model for answer selection improves the performance to 85.4%. Further, the ranking model improves both precision and recall of the retrieved articles, and leads to an overall performance of 85.8%.

§ WIKIMOVIES DATASET

We focus on the WikiMovies[<http://fb.ai/babi>] dataset, proposed by <cit.>. The dataset consists of pairs of questions and answers about movies. Some examples are shown in Table <ref>. As a knowledge source, approximately 18K articles from Wikipedia are also provided, where each article is about a movie. Since movie articles can be very long, we only use the first paragraph of the article, which typically provides a summary of the movie. Formally, the dataset consists of question-answer pairs {(q_j, A_j)}_{j=1}^{J} and movie articles {d_k}_{k=1}^{K}. Additionally, the dataset includes a list of entities: movie titles, actor names, genres, etc. Answers to all the questions are in the entity list. The questions were created by human annotators using SimpleQuestions <cit.>, an existing open-domain question answering dataset, and the annotated answers come from facts in two structured KBs: OMDb[<http://beforethecode.com/projects/omdb/download.aspx>] and MovieLens[<http://grouplens.org/datasets/movielens/>]. There are two splits of the dataset. The "Full" dataset consists of 200K pairs of questions and answers. In this dataset, some questions are difficult to answer from Wikipedia articles alone. A second version of the dataset, "Wiki Entity", is constructed by removing those QA pairs where the entities in the QAs are not found in the corresponding Wikipedia articles. We call these splits WikiMovies-FL and WikiMovies-WE, respectively. The questions are divided into train, dev and test sets such that the same question template does not appear in different splits. Further, they can be categorized into 13 categories[Category labels are only available for the dev/test data]. The basic statistics of the dataset are summarized in Table <ref>. We also note that more than 50% of the entities appear fewer than 5 times in the training set. This makes it very difficult to learn the global statistics of each entity, necessitating the use of an external knowledge source.

§ COMPREHENSION MODEL

Our QA system answers questions in two steps, as shown in Figure <ref>. The first step is retrieval, where articles relevant to the question are retrieved. The second step is comprehension, where the question and retrieved articles are processed to derive answers. In this section we focus on the comprehension model, assuming that relevant articles have already been retrieved and merged into a context document. In the next section, we will discuss approaches for retrieving the articles. The authors who introduced the WikiMovies dataset <cit.> used an improved variant of Memory Networks called Key-Value Memory Networks. Instead, we use an RNN based network, which has been successfully used in many reading comprehension tasks <cit.>. The WikiMovies dataset has two notable differences from many of the existing comprehension datasets, such as CNN and SQuAD <cit.>.
First, with imperfect retrieval, the answer may not be present in the context. We handle this case by using the proposed mixture model. Second, there may be multiple answers to a question, such as a list of actors. We handle this by optimizing a sum of the cross-entropy loss over all possible answers. We also use the attention sum architecture proposed by <cit.>, which has been shown to give high performance for comprehension tasks. In this approach, attention scores over the context entities are used as the output. We term this the attention distribution p_att, defined over the entities in the context. The mixture model combines this distribution with another output probability distribution p_vocab over all the entities in the vocabulary. The intuition behind this is that named entities (such as actors and directors) can be better handled by the attention part, since there are few global statistics available for these, and other entities (such as languages and genres) can be captured by the vocabulary part, for which global statistics can be leveraged.

§.§ Comprehension model detail

Let 𝒱 be the vocabulary consisting of all tokens in the corpus, and ℰ be the set of entities in the corpus. The question is converted to a sequence of lower-cased word ids, (w_i) ∈ 𝒱, and a sequence of 0-1 flags for word capitalization, (c_i) ∈ {0,1}. For each word position i, we also associate an entity id e_i ∈ ℰ if the i-th word is part of an entity (see Figure <ref>). Then, the combined embedding of the i-th position is given by

x_i = W_w(w_i) + W_c(c_i) ‖ W_e(e_i), (i = 1, …, L_q),

where ‖ is the concatenation of two vectors, L_q is the number of words in a question q, and W_w, W_c and W_e are embedding matrices. Note that if there is no entity at the i-th position, W_e(e_i) is set to zero. The context is composed of up to M movie articles concatenated with a special separation symbol. The contexts are embedded in exactly the same way as questions, sharing the embedding matrices.

To avoid overfitting, we use another technique called anonymization. We limit the number of columns of W_e to a relatively small number, n_e, and entity ids are mapped to one of the n_e columns randomly (without collision). The map is common for each question/context pair but randomized across pairs. The method is similar to the anonymization method used in the CNN / Daily Mail datasets <cit.>. It has been shown <cit.> that such a procedure actually helps readers, since it adds coreference information to the system.

Next, the question embedding sequence (x_i) is fed into a bidirectional GRU (BiGRU) <cit.> to obtain a fixed length vector v:

v = h_q^→(L_q) ‖ h_q^←(0),

where h_q^→ and h_q^← are the final hidden states of the forward and backward GRUs, respectively. The context embedding sequence is fed into another BiGRU to produce the output H_c = [h_{c,1}, h_{c,2}, …, h_{c,L_c}], where L_c is the length of the context. An attention score for each word position i is given by s_i ∝ exp(v^T h_{c,i}). The probability over the entities in the context is then given by

p_att(e) ∝ ∑_{i ∈ I(e,c)} s_i,

where I(e,c) is the set of word positions of the entity e within the context c. We next define the probability p_vocab over the complete set of entities in the corpus, given by

p_vocab(e) = Softmax(V u),

where the vector u is given by u = ∑_i s_i h_{c,i}. Each row of the matrix V is the coefficient vector for an entity in the vocabulary. It is computed similarly to Eq. (<ref>):

V(e) = ∑_{w ∈ e} W_w(w) + ∑_{c ∈ e} W_c(c) ‖ W_e(e).

The embedding matrices are shared between question and context.
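At a shape level, the two output distributions can be sketched in a few lines of numpy (illustrative code with random stand-in tensors, not the trained model; the BiGRU encoders are assumed to have already produced v, H_c and V as defined above). The mixture gate that combines them is defined next.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
d, L_c, n_ent = 8, 50, 30
v = rng.normal(size=d)                  # question vector v
H = rng.normal(size=(L_c, d))           # context BiGRU outputs h_{c,i}
V = rng.normal(size=(n_ent, d))         # one row V(e) per vocabulary entity
I = {0: [3, 4], 1: [10]}                # I(e, c): entity -> word positions

s = softmax(H @ v)                      # s_i ∝ exp(v^T h_{c,i})
p_att = {e: s[idx].sum() for e, idx in I.items()}
Z = sum(p_att.values())
p_att = {e: p / Z for e, p in p_att.items()}   # p_att(e) ∝ sum over I(e,c)

u = s @ H                               # u = Σ_i s_i h_{c,i}
p_vocab = softmax(V @ u)                # p_vocab(e) = Softmax(V u)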
The final probability that an entity e answers the question is given by the mixture

p(e) = (1 − g) p_att(e) + g p_vocab(e),

with the mixture coefficient g defined as

g = σ(W_g g_0), g_0 = v^T u ‖ max(V u).

The two components of g_0 correspond to the attention part and the vocabulary part, respectively. Depending on the strength of each, the value of g may be high or low. Since there may be multiple answers for a question, we optimize the sum of the probabilities:

loss = − log(∑_{a ∈ A_j} p(a | q_j, c_j)).

Our overall model is displayed in Figure <ref>. We note that KV-MemNN <cit.> employs a "Title encoding" technique, which uses the prior knowledge that movie titles are often answers. It was shown in <cit.> that this technique substantially improves model performance, by over 7%, on the WikiMovies-WE dataset. In our work, on the other hand, we do not use any dataset specific feature engineering.

§ RETRIEVAL MODEL

Our QA system answers questions in two steps, as in Figure <ref>. Accurate retrieval of relevant articles is essential for good performance of the comprehension model, and in this section we discuss three approaches for it. We use up to M articles as context. A baseline approach for retrieval is to select articles which contain at least one entity also present in the question. We identify maximal intervals of words that match entities in questions and articles. Capitalization of words is ignored in this step because some words in the questions are not properly capitalized. Out of these (say N) articles we can randomly select M. We call this approach (r0). For some movie titles, however, this method retrieves too many articles that are actually not related to the question. For example, there is a movie titled "Love Story" which accidentally picks up the words "love story". This degrades the performance of the comprehension step. Hence, we describe two more retrieval models – (1) a dataset specific hand-crafted approach, and (2) a general learning based approach.

§.§ Hand-Crafted Model (r1)

In this approach, the N articles retrieved using entity matching are assigned scores based on certain heuristics. If the movie title matches an entity in the question, the article is given a high score, since it is very likely to be relevant. A similar heuristic was also employed in <cit.>. In addition, the number of matching entities is also used to score each article. The top M articles based on these scores are selected for comprehension. This hand-crafted approach already gives strong performance for the WikiMovies dataset; however, the heuristic for matching article titles may not be appropriate for other QA tasks. Hence we also study a general learning based approach for retrieval.

§.§ Learning Model (R2)

The learning model for retrieval is trained with an oracle constructed using distant supervision. Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question. For example, for some question types the answer movie articles are the correct articles to be retrieved. On the other hand, for other question types, the movie mentioned in the question should be retrieved. Having collected the labels, we train a retrieval model for classifying a question and article pair as relevant or not relevant.

Figure <ref> gives an overview of the model, which uses a Word Level Attention (WLA) mechanism. First, the question and article are embedded into vector sequences, using the same method as the comprehension model. We do not use anonymization here, to retain simplicity. Otherwise, the anonymization procedure would have to be repeated several times for a potentially large collection of documents. These vector sequences are next fed to a Bi-GRU, to produce the outputs v (for the question) and H_c (for the document), similar to the previous section. To classify the article as relevant or not, we introduce a novel attention mechanism to compute the score

s = ∑_i ((w ṽ + b)^T h̃_{c,i})^4.

Each term in the sum above corresponds to the match between the query representation and a token in the context. This is passed through a 4th-order non-linearity so that relevant tokens are emphasized more[We use exponent d = 4 here. Higher d tends to have better performance. Empirically, this approach works better than exponential and softmax non-linearities.]. Next, we compute the probability that the article is relevant using a sigmoid:

o = σ(w' s + b').

In the above, x̃ denotes the version of a vector x normalized by its L2-norm, and w, b, w', b' are scalar learnable parameters that control scales.
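The scoring function is compact enough to sketch directly (numpy; w, b, w2, b2 below stand in for the learned scalars w, b, w', b', and the inputs are random placeholders):

import numpy as np

def wla_score(v, H, w=1.0, b=0.0, w2=1.0, b2=0.0):
    """Relevance probability o for one question/article pair (sketch)."""
    v_t = v / np.linalg.norm(v)                          # normalized query
    H_t = H / np.linalg.norm(H, axis=1, keepdims=True)   # normalized tokens
    s = np.sum((H_t @ (w * v_t + b)) ** 4)               # 4th-order emphasis
    return 1.0 / (1.0 + np.exp(-(w2 * s + b2)))          # o = sigma(w' s + b')

rng = np.random.default_rng(0)
print(wla_score(rng.normal(size=8), rng.normal(size=(50, 8))))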
§ EXPERIMENTS

We evaluate the comprehension model on both the WikiMovies-FL and WikiMovies-WE datasets. The performance is evaluated using the accuracy of the top hit (single answer) over all possible answers (all entities). This is called the hits@1 metric. For the comprehension model, we use embedding dimension 100 and GRU dimension 128. We use up to M = 10 retrieved articles as context. The order of the articles is randomly shuffled for each training instance to prevent over-fitting. The size of the anonymized entity set, n_e, is 600, since in most cases the number of entities in a question and context pair is less than 600. For training the comprehension model, the Adam <cit.> optimization rule is used with batch size 32. We stop the optimization based on dev-set performance, and training takes around 10 epochs. For the WikiMovies-FL (resp. WikiMovies-WE) dataset, each epoch took approximately 4 (resp. 2) hours on an Nvidia GTX1080 GPU. For training the retrieval model R2, we use a binary cross-entropy objective. Since most articles are not relevant to a question, the ratio of positive and negative samples is tuned to 1:10. Each epoch of training the retrieval model takes about 40 minutes on an Nvidia GTX1080 GPU.

§.§ Performance of Retrieval Models

We evaluate the retrieval models based on precision and recall of the oracle articles. The evaluation is done on the test set. R@k is the ratio of cases where the highest ranked oracle article is in the top k retrieved articles. P@k is the ratio of oracle articles which are in the top k retrieved results. These numbers are summarized in Table <ref>. We can see that both (r1) and (R2) significantly outperform (r0), with (R2) doing slightly better. We emphasize that (R2) uses no domain specific knowledge, and can be readily applied to other datasets where articles may not be about specific types of entities.

We have also tested simpler models based on the inner product of question and article vectors. In these models, a question q_j and article d_k are converted to vectors Φ(q_j), Ψ(d_k), and the relevance score is given by their inner product:

score(j,k) = Φ(q_j)^T Ψ(d_k).

From a computational point of view, those models are attractive because we can compute the article vectors offline, and do not need to compute the attention over words in the article. Maximum Inner Product Search algorithms may also be utilized here <cit.>.
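For concreteness, the offline/online split such models enable can be sketched as follows (an illustrative helper, not the paper's implementation; for large corpora a MIPS index would replace the brute-force argpartition):

import numpy as np

# Offline: encode all articles once. Online: one matrix-vector product
# plus a top-k selection per query.
def top_k_articles(phi_q, Psi, k=10):
    scores = Psi @ phi_q                     # score(j, k) = Phi(q_j)^T Psi(d_k)
    top = np.argpartition(-scores, k)[:k]    # unordered top-k
    return top[np.argsort(-scores[top])]     # ordered by score

rng = np.random.default_rng(0)
Psi = rng.normal(size=(18000, 100))          # ~18K article vectors, precomputed
print(top_k_articles(rng.normal(size=100), Psi))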
However, as shown in the upper block of Table <ref>, those models perform much worse in terms of scoring. The "Sum of Hidden States" and "Query Free Attention" models are similar to the WLA model, using BiGRUs for the question and article. In both of those models, Φ(q) is defined the same way as in the WLA model, Eq. (<ref>). For the "Sum of Hidden States" model, Ψ(d) is given by the sum of the BiGRU hidden states. This is the same as the proposed model with the fourth power in the WLA score replaced by one. For the "Query Free Attention" model, Ψ(d) is likewise built from the BiGRU hidden states, but with an attention weighting that does not depend on the query.

We compare our model and several ablations with the KV-MemNN model. Table <ref> shows the average performance across three evaluations. The (V) "Vocabulary Model" and (A) "Attention Model" are simplified versions of the full (AV) "Attention and Vocabulary Model", using only p_vocab and p_att, respectively. Using a mixture of p_att and p_vocab gives the best performance. Interestingly, for the WE dataset the Attention model works better. For the FL dataset, on the other hand, it is often impossible to select the answer from the context, and hence the Vocab model works better. The number of entities in the full vocabulary is 71K, and some of these are rare. Our intuition for the Vocab model was to use it only for common entities, and hence we next constructed a smaller vocabulary consisting of all entities which appear at least 10 times in the corpus. This results in a subset vocabulary 𝒱_S of 2400 entities. Using this vocabulary in the mixture model (AsV) further improves the performance.

Table <ref> also shows a comparison between (r0), (r1), and (R2) in terms of the overall task performance. We can see that improving the quality of retrieved articles benefits the downstream comprehension performance. In line with the results of the previous section, (r1) and (R2) significantly outperform (r0). Among (r1) and (R2), (R2) performs slightly better.

§.§ Benefit of training methods

Table <ref> shows the impact of anonymization of entities and shuffling of training articles before the comprehension step, described in Section <ref>. Shuffling the context articles before concatenating them works as a data augmentation technique. Entity anonymization helps because without it each entity has one embedding. Since most of the entities appear only a few times in the articles, these embeddings may not be properly trained. Instead, the anonymous embedding vectors are trained to distinguish different entities. This technique is motivated by a similar procedure used in the construction of the CNN / Daily Mail datasets <cit.>, and discussed in detail in <cit.>.

§.§ Visualization

Figure <ref> shows a test example from the WikiMovies-FL test data. In this case, even though the answers "Hindi" and "English" are not in the context, they are correctly estimated from p_vocab. Note the high value of g in this case. Figure <ref> shows another example of how the mixture model works. Here the answer is successfully selected from the document instead of the vocabulary. Note the low value of g in this case.

§.§ Performance in each category

Table <ref> shows the comparison for each category of questions between our model and KV-MemNN on the WikiMovies-WE dataset[Categories "Movie to IMDb Votes" and "Movie to IMDb Rating" are omitted from this table because there is only 0.5% test data for these categories and most of the answers are "famous" or "good".]. We can see that the performance improvements in some categories are relatively large.
The KV-MemNN model has a dataset specific "Title encoding" feature which helps the model on such question types. Without this feature, however, its performance in other categories is poor.

§.§ Analysis of the mixture gate

The benefit of the mixture model comes from the fact that p_att works well for some question types, while p_vocab works well for others. Table <ref> shows how often for each category p_vocab is used (g > 0.5) in the AsV model. For the question types "Movie to Language" and "Movie to Genre" (the so-called "choice questions") the number of possible answers is small. In this case, even if the answer can be found in the context, it is easier for the model to select the answer from an external vocabulary which encodes global statistics about the entities. For the other "free questions", depending on the question type, one approach is better than the other. Our model is able to successfully estimate the latent category and switch the model type by controlling the coefficient g.

§ RELATED WORK

One line of work <cit.> solves the QA problem by selecting a sentence in the document, and shows that joint training of selection and comprehension slightly improves the performance. In our case, joint training is much harder because of the large number of movie articles. Hence we introduce a two-step retrieval and comprehension approach. Recently, a framework was proposed <cit.> to use the performance on a downstream task (e.g. comprehension) as a signal to guide the learning of the neural network which determines the input to the downstream task (e.g. retrieval). This motivates us to introduce a neural network based approach for both retrieval and comprehension, since in this case the retrieval step can be directly trained to maximize the downstream performance. In the context of language modeling, the idea of combining two output probabilities is given in <cit.>; however, our equation for computing the mixture coefficient is slightly different. More recently, Ahn et al. <cit.> used a mixture model to predict the next word from either the entire vocabulary or a set of Knowledge Base facts associated with the text. In this work, we present the first application of such a mixture model to reading comprehension.

§ CONCLUSION AND FUTURE WORK

We have developed a QA system using a two-step retrieval and comprehension approach. The comprehension step uses a mixture model to achieve state-of-the-art performance on the WikiMovies dataset, improving over previous work by a significant margin. We would like to emphasize that our approach uses minimal heuristics and no dataset specific feature engineering. Efficient retrieval while maintaining representation variation is a challenging problem. While there has been a lot of research on comprehension, little focus has been given to designing neural network based retrieval models. We present a simple such model, and emphasize the importance of this direction of research. | http://arxiv.org/abs/1703.08885v1 | {
"authors": [
"Yusuke Watanabe",
"Bhuwan Dhingra",
"Ruslan Salakhutdinov"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20170326234806",
"title": "Question Answering from Unstructured Text by Retrieval and Comprehension"
} |
Essentially all biology is active and dynamic. Biological entities autonomously sense, compute, and respond using energy-coupled ratchets that can produce force and do work. The cytoskeleton, along with its associated proteins and motors, is a canonical example of biological active matter, which is responsible for cargo transport, cell motility, division, and morphology. Prior work on cytoskeletal active matter systems showed either extensile or contractile dynamics. Here, we demonstrate a cytoskeletal system whose network dynamics can be directed to be either extensile, contractile, or static through systematic variation of the crosslinker or microtubule concentrations. Based on these new observations and our previously published results, we created a simple one-dimensional model of the interaction of filaments within a bundle. Despite its simplicity, our model recapitulates the observed activities of our experimental system, implying that the dynamics of our finite networks of bundles are driven by the local filament-filament interactions within the bundle. Finally, we show that contractile phases can result in autonomously motile networks that resemble cells. Our experiments and model allow us to gain a deeper understanding of cytoskeletal dynamics and provide a stepping stone for designing active, autonomous systems that could potentially dynamically switch states.

§ INTRODUCTION

Active matter appears in biology in various forms. Examples range in scale from intracellular transport to cell locomotion and from bacterial turbulence to swarms of birds. These non-equilibrium, driven systems show dynamics that are unlikely or impossible in passive, thermodynamically-equilibrated systems. This makes it profitable to adopt an active matter approach for understanding the complex dynamics of biological systems. Cytoskeletal networks are some of the most important active, mechano-chemical systems in nature. Composed of microtubules, actin, and intermediate filaments, the cytoskeleton utilizes an array of crosslinkers and molecular motors to self-organize the cellular interior. It is responsible for intracellular cargo transport, cell division, locomotion, and cell shape, yet we do not fully understand how the network is actively reorganized to facilitate these activities. One reason for this lack of understanding is the large number of constituent components in the cytoskeleton and the complexity of their interactions. In this work, we study the dynamics of a simplified, reconstituted cytoskeleton: microtubules interacting via weak, transient crosslinkers and driven by molecular motors. We show that this system is capable of reproducing extensile, contractile, and static dynamic phases reminiscent of those observed in cells.

Microtubules inside the cell need to rearrange themselves during different developmental and cell division stages. During cell division, microtubules change from an aster-like configuration during interphase to the bipolar mitotic spindle that aligns the chromosomes and then separates and pushes apart chromosomes during cell division and cytokinesis.
These highly anisotropic structures are formed by bundling microtubules with microtubule-associated proteins (MAPs). MAP65 is one such protein used to control microtubule networks in mitosis and the plant cell cortex to form antiparallel microtubule arrays. MAP65 is a plant protein, but has analogs in yeast (Ase1) and humans (PRC1) <cit.>. It acts by dimerizing and bridging between two microtubules to form a microtubule bundle with antiparallel arrangement and a spacing of 35 nm between the filaments <cit.>. We have shown that the bundles observed at low MAP65 and low microtubule concentrations become networks of bundles when either is increased <cit.>. Further, individual MAP65 proteins are transient binders to single microtubules <cit.>, and the affinity is relatively low in dilute solution, 1.2 μM <cit.>. Despite this low affinity and rapid turn-over, we showed that MAP65 binds rapidly to antiparallel microtubules and can stall microtubules gliding under the power of kinesin-1 motor proteins <cit.>.

Kinesin-1 is a vital enzymatic microtubule-associated motor protein responsible for long range cargo transport inside cells <cit.>. When fixed to a substrate, kinesin-1 can be used to drive microtubule gliding, fueled by ATP hydrolysis. We and other groups have routinely used this experimental platform to investigate the self-organization of propelled filamentous particles <cit.>. Similar studies have been performed using myosin-II propelling actin filaments <cit.>. The facility of such filament gliding assays has led to their being engineered into lab-on-a-chip devices to produce nanoscale transport processes, termed "molecular shuttles" <cit.>. Other active cytoskeletal systems use crosslinked motors (kinesins or myosins) to drive the motion of cytoskeletal filaments and examine network steady-states <cit.>. Most of these steady-states for actin-myosin result in contracting networks. Microtubule-kinesin networks have proven more interesting, creating steady-state asters <cit.> or dynamically extensile active liquid crystals or cilia-like beating <cit.>.

A microtubule, unlike any other filamentous component of the cytoskeleton, has an enormous persistence length, 1 mm <cit.>. Thus, 10-30 μm long microtubules behave, essentially, as rigid rods. Unlike actomyosin networks, this stiffness tends to lead to alignment within bundles and, consequently, extensile motion under motor driving <cit.>. Contractile steady-states of microtubules driven by crosslinking motors typically evolve from asters <cit.>. Some of these experiments were conducted in cell extracts, making it unclear which cellular components are responsible for the novel contractile phases <cit.>. Other experiments in vitro by different groups using the same components have reported contradictory results <cit.>. Together, prior work tells a story that asters are needed for contraction and bundles are extensile. Further, in all previous studies, the system evolves to extension or contraction exclusively, without room for tuning or regulating the observed states.

Using the microtubule gliding assay driven by kinesin-1 motors, and a network of transiently-crosslinked bundles associated via MAP65, we have designed an active, crosslinked microtubule network. This minimal system allowed us to vary the relative concentrations of crosslinkers and microtubules independently to examine their effect on the dynamics of the network. Excitingly, we observed a wide variety of dynamical activities, including extension and contraction.
The dynamics were dependent on the concentration of crosslinkers or the density of filaments, which we varied as control parameters. At low concentrations of crosslinkers or microtubule densities, the network spread out under the driving of kinesin-1 motors. As we increased either the relative concentration of crosslinker or of microtubules, the finite networks of bundles became contractile. At the highest concentrations of crosslinkers or microtubule densities, all network dynamics ceased. Each dynamic activity was characterized by measuring the rate of spreading of the finite microtubule networks. The spreading speed was constant over experimental times irrespective of the network's phase, which could be extensile or contractile. The spreading rate was non-monotonic as a function of either crosslinker concentration or microtubule density. Finally, we modeled the behavior of networks of bundles using a simplified one-dimensional molecular dynamics simulation. Our model was able to recover the crossover from an extensile to a contractile phase that we observed in experiments. Together, our observations and simple one-dimensional model give new insights into the mechanism driving the dynamics of the two-dimensional networks of bundles, showing that the dynamics are driven by the microtubule-microtubule interactions.

§ EXPERIMENTAL

Tubulin, kinesin, and all other reagents and materials were fully described in our prior methods chapter <cit.>. All methodological details can also be found in Stanhope and Ross <cit.> and Pringle et al. <cit.>. Briefly, fluorescent microtubules were incubated with MAP65 at a specific (variable) concentration in the presence of 35 μM Taxol and 35 μM DTT in PEM-100 buffer (100 mM K-PIPES, 1 mM MgSO_4, 1 mM EGTA, pH 6.8) for 30 minutes. We made a 10 μL flow chamber using a slide, double-stick tape, and a cover slip. The chamber was incubated with kinesin-1 at room temperature for 5 minutes. We flowed BSA (5 mg/ml BSA, 60 μM Taxol, 20 mM DTT in PEM-100) through the channel. Prior to inserting the microtubule networks into the chamber, we added to the networks a motility mix containing 1 mM ATP, 1.9 μM phosphocreatine, 68 μg/ml creatine phosphokinase, 4.3 mg/ml glucose, and 0.4 μL of deoxygenation system (6 units/ml of glucose oxidase and 0.177 mg/ml of catalase) for every 10 μL volume in PEM-100.

The network dynamics in the chamber were imaged using epi-fluorescence. We waited approximately 5 minutes to allow the networks to sediment and interact with the kinesin-1 on the cover slip. Images were taken every 20 seconds for a duration of up to 1-3 hours. At least three separate chambers were used for each configuration of microtubule and crosslinker concentrations.

Analysis was performed after networks were in full contact with the surface and in a two-dimensional configuration, to ensure that the changes in fluorescence were only due to motion in the imaging plane and not due to filaments coming into the imaging plane from the third dimension. As the microtubule bundle expanded, the decay in signal-to-noise ratio made tracking the edges difficult. To overcome this problem we used a predictor-corrector type algorithm developed in Matlab (Supplemental Information) <cit.>. The first step of the algorithm used an edge detection function to find the edges of the network. We tuned the parameters so that only the interior was detected. Next, the interior portion was subtracted from the original image. The corrected image was used as the input for the next round of edge detection. By tuning the sensitivity of the edge detection and the number of iterations, we were able to detect edges robustly for all time points.
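Although the published analysis used a Matlab implementation (see the Supplemental Information), the predictor-corrector idea can be sketched in a few lines of Python. The filter choices, iteration count, and erosion depth below are assumptions for illustration only.

import numpy as np
from skimage.feature import canny
from scipy.ndimage import binary_fill_holes, binary_erosion

def network_edges(frame, n_iter=3, sigma=2.0):
    """Iteratively detect the outer edge of a fluorescent network:
    detect edges, estimate the bright interior, subtract it, and
    re-detect so the dim outer boundary dominates in later passes."""
    img = frame.astype(float)
    edges = None
    for _ in range(n_iter):
        edges = canny(img / max(img.max(), 1e-9), sigma=sigma)
        interior = binary_erosion(binary_fill_holes(edges), iterations=5)
        img = np.where(interior, 0.0, img)   # remove the interior signal
    return edges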
We repeated the experiments on at least three different networks in different experimental chambers. The normalized variance of the network shape (see below) was averaged before plotting, to smooth variations in microtubule density that might exist between the distinct networks of bundles.

§ RESULTS AND DISCUSSION

We created microtubule networks by incubating Taxol-stabilized microtubules with MAP65 crosslinkers prior to performing a kinesin-driven filament gliding assay <cit.>. We previously measured the equilibrium phases of microtubules with MAP65 crosslinking in the absence of kinesin motor driving. We found either single filaments at low MAP65 concentrations, individual bundles at higher MAP65 concentrations and low microtubule concentrations, or networks of bundles when the microtubule and MAP65 concentrations were sufficiently large <cit.>. Here, we used the same networks of bundles in the relatively high MAP65 and microtubule regimes and added them to a kinesin gliding assay. We found that these networks could dynamically rearrange under the power of the kinesin-1 driving by either extending, contracting, or remaining static, depending on the relative concentration of crosslinkers and microtubules (Fig. <ref>, Supp. Fig. 1).

We quantified the spatial distribution of microtubules in each network over time by tracking the edges of the network from images obtained via fluorescence microscopy (see supplemental information for details on this tracking). At least three different timelapse movies of network evolution were analyzed for each microtubule and crosslinker concentration, totaling 72 different data sets. Once we identified the edges of the network over time, we used those pixel locations to quantify the shape of the network by measuring the second moment (σ) of the edge:

σ = √(∑_{i=1}^{N} (x_i − μ)² / N).

Here, x_i is the i-th pixel detected on the edge of the network, N is the total number of pixels detected on the edge, and μ is the center of mass of the network. As each network may have an arbitrary initial shape and size, we normalized σ by its value at time t = 0, σ_0. We plot the normalized variance of the shape (σ/σ_0) as a function of time (Fig. <ref>A,B). We observed that the change in σ/σ_0 was linear with time (Fig. <ref>A,B). Accordingly, we performed a least-squares fit of the normalized σ data vs. time to the linear equation:

σ_t/σ_0 = ω t + 1.

This equation defines the spreading speed ω as the slope of the best linear fit. The spreading speed, ω, provides a metric to classify network dynamics as either extensile (ω > 0), contractile (ω < 0), or static (ω = 0). We will use this metric to determine the type of dynamics of the network as a function of the percentage of crosslinker bound, ρ_c (relative to the maximal crosslinker binding), and the concentration of microtubules, ρ_MT.

It is interesting that we always observed the size of the network of bundles to change linearly with time. If the network of bundles were simply being driven apart by a random, thermal process, we would expect the size to change with the square root of time, as one might expect for a droplet spreading under diffusion. Further, the linear evolution is in contrast with the subdiffusive dynamics seen in cell spreading experiments <cit.>. The linear spreading makes sense because the system is driven by processive motors that move with a constant velocity. Previously, processes of network or cell spreading of this type have been dubbed "active wetting" <cit.>.
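The shape metric and the spreading-rate fit amount to only a few lines of numpy. In this sketch the edge pixels are assumed to be supplied as 2D coordinates per frame (so the second moment pools both coordinates), and the intercept is pinned at 1 as in the fit equation above.

import numpy as np

def spreading_rate(edges_by_frame, dt=20.0):
    """Return omega from sigma_t/sigma_0 = omega*t + 1.
    edges_by_frame: list of (N_i, 2) arrays of edge-pixel positions,
    one per frame, acquired every dt seconds."""
    sigma = []
    for xy in edges_by_frame:
        mu = xy.mean(axis=0)                      # network center of mass
        sigma.append(np.sqrt(((xy - mu) ** 2).sum(axis=1).mean()))
    sigma = np.asarray(sigma)
    t = dt * np.arange(len(sigma))
    y = sigma / sigma[0] - 1.0
    omega = (t @ y) / (t @ t)                     # least-squares slope
    return omega    # > 0 extensile, < 0 contractile, ~ 0 static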
§.§ The Dynamics of Driven Networks Depend on Crosslinker and Microtubule Density

We performed a series of experiments for a variety of different microtubule concentrations and crosslinker percentages. Specifically, we bundled the microtubules at each concentration (1.0, 2.5, 5.0, 8.0, 10.0 μM) with a concentration of MAP65 crosslinker that would result in a prescribed percentage of MAP65 bound to the microtubules (13, 24, 35, 43, 57%). The percentage of crosslinker bound to microtubules was controlled by assuming that the equilibrium binding affinity is K_D = 1.2 μM, as previously measured for dilute microtubule-MAP65 conditions <cit.>. We have previously determined the equilibrium phases for MAP65 crosslinkers with microtubules using this parameter <cit.>.

As an example, we show the change in the size of the networks for one microtubule concentration (2.5 μM) (Fig. <ref>A). At low crosslinker concentrations, the spreading rate, ω, was positive because the network was extensile (Fig. <ref>C). With increasing crosslinker concentration, the networks slowed down and changed from extensile to contractile motion with a negative spreading rate ω (Fig. <ref>C). At the highest concentration of crosslinker, there were no dynamics, and the spreading rate was zero (Fig. <ref>C). Together, we observed that the spreading rate is a non-monotonic function of crosslinker density (Fig. <ref>C).

The dynamics of our networks of bundles were not only controlled by the concentration of crosslinker, but also by the concentration of microtubules present. We observed the same pattern of extensile, contractile, and static activities when we increased the concentration of microtubules and held the relative crosslinker density fixed at 13% (Fig. <ref>B). Further, the same non-monotonic behavior of the spreading rate ω was likewise observed as the microtubule concentration increased (Fig. <ref>D).

In order to examine whether the presence of crosslinker is essential to the existence of a contractile phase, we performed a series of control experiments where the microtubules were bundled via depletion interactions using polyethylene glycol (PEG) as a crowding agent. We introduced the PEG bundles to a kinesin-driven gliding assay. We never observed any contraction in this system. Thus, bundling by itself is not the cause of contractility – the specific, cohesive crosslinking activity is required. In our previous work, we showed that MAP65 crosslinking is cooperative and quick-acting in the presence of antiparallel microtubules. Further, MAP65 crosslinking was capable of dynamically slowing down microtubule gliding between only two filaments <cit.>. The slower speed was a function of higher crosslinker concentration.

We were intrigued by the contractile activity of the networks, which had never before been observed in gliding microtubule systems. Prior work with actin filament networks showed that contraction was mediated by filament buckling <cit.>. In order to investigate whether individual filaments were able to buckle, we used two-color experiments to monitor individual tracer microtubules within differentially labeled networks. We never detected microtubule buckling during contractile phases (data not shown). The absence of filament buckling is not surprising given how much stiffer the microtubule filament is, with a persistence length of about 1 mm <cit.>, compared to actin, with a persistence length of 10-20 μm <cit.>.
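As a rough guide to the concentrations involved, the MAP65 concentration corresponding to a target bound percentage can be back-calculated from K_D = 1.2 μM with a one-site Langmuir isotherm. This back-of-envelope sketch ignores ligand depletion and the cooperativity on antiparallel overlaps noted above, so the numbers are only indicative.

K_D = 1.2   # uM, MAP65-microtubule affinity in dilute solution

def map65_for_bound_fraction(f):
    """MAP65 concentration (uM) giving bound fraction f, assuming
    f = L / (K_D + L), i.e. L = K_D * f / (1 - f)."""
    return K_D * f / (1.0 - f)

for f in (0.13, 0.24, 0.35, 0.43, 0.57):
    print(f"{int(f * 100)}% bound -> ~{map65_for_bound_fraction(f):.2f} uM MAP65")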
The absence of filament buckling is not surprising given how much stiffer microtubules are, with a persistence length of about 1 mm <cit.>, compared to actin, with a persistence length of 10-20 μ m <cit.>. We performed these experiments over a range of microtubule concentrations and crosslinker binding percentages, and repeated the measurement for at least three distinct networks at each location in parameter space. For each network, we averaged the normalized variance and used that to quantify the spreading rate metric. Combining all the data into a single state diagram reveals that the types of dynamics (extensile, contractile, or static) cluster depending on the concentration of microtubules and the crosslinker binding percentage (Fig. <ref>).In the absence of any crosslinker, all networks of microtubule bundles are extensile for any density of microtubules. We performed these experiments using PEG to bundle microtubules. At low concentrations of microtubules and percentages of crosslinkers, the networks of bundles were extensile. The dynamics of such extensile networks were driven by the activity of microtubules with the kinesin-1 motors on the surface. The crosslinkers most likely work to set up local bundles of antiparallel microtubules. Thus, at the edges of the networks of bundles, there should be just as many filaments pointing outward as inward. If there were not enough cohesion between microtubules, these filaments would be driven according to their polarity, causing an overall expansion of the edges. We observed that the spreading rate decreases with increasing filaments or crosslinkers (Fig. <ref>, magenta), implying that the crosslinking acts to slow the kinesin-1 driven dynamics. Since crosslinkers only act locally between neighboring microtubules, they provide local cohesion within these bundles of microtubules.At the other extreme, at very high concentrations of microtubules or crosslinkers, the networks are static. When the crosslinker density is fixed, it is possible that the slowing of the dynamics is due to local filament density alone. Prior work has shown that the dynamics of rod-like particles slow down as the volume fraction increases <cit.>. In our experiments, the microtubule concentration was varied from 0.5 μ M to 10 μ M. These values correspond to volume fractions, ϕ, ranging from approximately 10^-4 to 10^-3. Even for these small values of ϕ, the dynamics can slow down significantly due to the large aspect ratio of microtubules <cit.>. Since an increase in volume fraction leads to a loss of rotational and translational degrees of freedom, we would expect a purely monotonic decrease of spreading speed with microtubule density. Instead, we observe a non-monotonic dependence, implying that the crosslinker is important for driving the contraction, and not just the filament density. This fits well with our previous results showing that higher concentrations of MAP65 slowed the gliding of microtubules as a function of crosslinker percentage <cit.>. We also previously showed that the most likely mechanism for the complete halting of microtubules is antiparallel microtubules passing each other and crosslinking dynamically during gliding. If the amount of crosslinker is high enough, the microtubules stall because they are being pulled in opposite directions. In another set of studies, we examined the effect of MAP65 on single molecule kinesin-1 motility <cit.>.
In these studies, we showed that MAP65 does not affect the association time of individual kinesin-1s, but can slow down single kinesin-1 motors <cit.>. Further, bundling with MAP65 actually enhances the association time of single kinesin-1 motors because the bundle offers more sites for kinesin-1 motors to bind <cit.>. Taken together, our current and prior work indicates that the static bundles are static because crosslinkers are holding microtubules tightly together. The motors are trying to push the filaments in opposite directions, but are stalled due to the high cohesion between oppositely oriented filaments (Fig. <ref>). Thus, the “activity” of static networks is likely controlled entirely by the local crosslinking of microtubules into bundles.At intermediate concentrations between these two extremes, however, we observe a prominent, albeit narrow, contractile regime. In the contractile regime, we never observe buckling of microtubules, but we do see rearrangements of individual filaments within the network. We postulate that the rearrangements result in an increase in microtubule overlap at the local, bundle level. The mechanism for these rearrangements is that individual microtubules, which are thermally equilibrated before being added to the kinesin-1 gliding assays, are not maximally overlapped (Fig. <ref>A). When put in contact with the driving kinesin, these filaments will first move together to increase the region of overlap. The increasing overlap will increase the cohesion of the MAP65, as we previously observed between two microtubules <cit.>. The increased cohesion will eventually stall the motors trying to move the microtubules in opposite directions and cause the filaments to reach a new, motor-driven equilibrium position: a locally shorter bundle (Fig. <ref>B). Repeating this driven equilibration over the entire network causes the entire network to contract. Our contractile phase is distinct from those reported previously for actin-myosin or other microtubule systems. For actin-myosin systems, prior work showed that actin networks can contract, but the contraction mechanism depends on actin filament buckling <cit.>. In contrast, our mechanism, supported by both experiments and simulations, is overlap rearrangement. Such rearrangement is likely impossible if the crosslinker is strong or static. Our self-organization is likely made possible because the MAP65 proteins are weak, transient crosslinkers <cit.>. Prior work on microtubule-kinesin systems that contract showed contraction stemming from the creation and mobility of microtubule asters, whereas microtubule bundles were always extensile <cit.>. Our mechanism is distinct from those previously studied. Further, unlike all prior studies, our system has the ability to be tuned from extensile to contractile by changing the crosslinker or filament density.§.§ SimulationsIn order to test the proposed mechanism behind the active network dynamics we observe (Fig. <ref>), we performed a simple simulation of one-dimensional bundles as model networks. As we describe above, all the activity of the networks can be simplified to the activity of the individual bundles because the crosslinkers only work on neighboring microtubules. Further, MAP65 can crosslink microtubules at different angles, but has a preference for shallow angles, and these angles evolve toward antiparallel alignment <cit.>.Based on these previous observations, we performed a simplified simulation of a one-dimensional “network” of microtubules.
We initiated the network in a random configuration. The location of the center of mass of each filament along one dimension and the orientation of the filament polarity, which dictates the direction of gliding, were initially randomized. This initial bundle represented a network of bundles that had reached equilibrium thermally. In the absence of crosslinkers, we assumed the speed of self-propelled filaments, V_free, was constant. Our previous observations showed that the binding of crosslinkers is cooperative and that crosslinking slowed down antiparallel microtubules that overlapped each other <cit.>. Thus, the simulation dynamics depended only on the same two variables that modified the velocity: the microtubule density and the fraction of crosslinkers bound to each microtubule. Finally, we assumed that all microtubules have a fixed length l to simplify the model. We formulated an expression for the speed of a microtubule in the presence of overlapping, antiparallel microtubules as a function of the local density of microtubules, ρ_MT, the average density of crosslinkers, and the polarity. The crosslinker density was parameterized as the fraction f of crosslinkers bound to single microtubules, which is directly proportional to the percentage of MAP65 bound, as used in our experiments. The fraction f ranges from 0 to 1, but in simulations f was varied from 0 to 0.55 to mimic our experiments. The speed decrease arose from both microtubule crowding as well as crosslinker binding between pairs of nearby, antiparallel microtubules. The crosslinker cooperativity that we observed in our previous publication <cit.> suggests that the decay of activity should be more sensitive to the change in crosslinker concentration than to the density of microtubules.We modeled these effects by assuming the microtubule velocity was a product of two linear decay terms, one of which depended on microtubule density and the other on crosslinker concentration,V_i = V_free(1 - ρ_MT/ρ_max)(1 - f∑_-l^+l O_i/O_max),where V_i is the speed of the i^th microtubule, and ρ_max and O_max are, respectively, the maximum density and total overlap at and above which all dynamics halt. To define the overlap, O, we noted that, for an arbitrary microtubule with its center of mass at x pointing along the positive x axis, the only other microtubules it can crosslink with are located within (x-l,x+l). For a neighboring microtubule located at position x', we defined its overlap O with the microtubule at x asO = l - |x-x'| if x-l < x' < x+l, and O = 0 otherwise.Thus, ∑_-l^+l O_i in Eq. (<ref>) is the total overlap of the i^th microtubule at x with its antiparallel, aligned neighbors in the range (x-l,x+l). Complete details of the simulation parameters can be found in the supplemental information (Supp. Fig. S2).We simulated a system of N microtubules of length l randomly oriented in one dimension, with the transport direction either toward the right or left along the x-axis, evolving according to equation (<ref>). The simulations ran for an order of magnitude longer than the time it takes a free microtubule to travel through the system size. Each sample corresponded to a different value of ρ_MT and f and was repeated 100 times to generate statistics on the type of activity observed.
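A minimal sketch of this 1-D update rule is given below; the parameter values, the global density proxy for ρ_MT, the Euler time step, and the state-classification thresholds are illustrative assumptions rather than the exact simulation settings (see Supp. Fig. S2 for those):

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the values used in the paper)
N, L_sys, l = 120, 100.0, 5.0            # filaments, system size, filament length
V_free, rho_max, O_max = 1.0, 20.0, 50.0
f, dt, steps = 0.3, 0.05, 2000

x = rng.uniform(0, L_sys, N)             # centers of mass along one dimension
pol = rng.choice([-1.0, 1.0], N)         # gliding direction set by polarity
x0 = x.copy()

rho_MT = N * l / L_sys                   # crude global proxy for the local density

for _ in range(steps):
    v = np.empty(N)
    for i in range(N):
        d = np.abs(x - x[i])
        anti = (pol != pol[i]) & (d < l)      # antiparallel neighbors in (x-l, x+l)
        O_tot = np.sum(l - d[anti])           # total overlap, per Eq. above
        v[i] = V_free * max(0.0, 1 - rho_MT / rho_max) \
                      * max(0.0, 1 - f * O_tot / O_max)
    x += pol * v * dt                         # synchronous Euler update

mean_travel = np.mean(np.abs(x - x0))
inside = np.sum((x > 0) & (x < L_sys))
# Heuristic classification following the text: filaments leaving the system
# suggest an extensile state; otherwise contractile if they traveled more than
# half a body length before jamming, and static if they barely moved.
state = ("extensile" if inside < N else
         "contractile" if mean_travel > 0.5 * l else "static")
print(state, mean_travel)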
We tracked the number of microtubules within the system and the mean distance traveled by each microtubule. As we varied ρ_MT and f in our simulation, we identified the dynamic states of networks as either extensile, contractile, or static by comparing the mean distance traveled by the microtubules and the final number of microtubules inside the system (Fig. <ref>A-C).In our simulation, both the contractile and static states were ultimately static at long times. In order to differentiate between the contractile and static states, we measured the mean distance traveled by the filaments. Microtubules in the static state move little (less than half the body length) before coming to rest and do not show a significant density change (Fig. <ref>C). On the other hand, microtubules in contractile states travel significantly longer distances (more than half the body length of the filament) before becoming trapped in a few high density locations (Fig. <ref>B). These definitions of static and contractile states were reasonable given our experimental data (Fig. <ref>). As clearly observed in the kymographs, the static networks did not change size or density with time (Fig. <ref>Ciii). Small rearrangements at the beginning of the dynamics did occur before the network froze into place (Fig. <ref>Cii). Contracting networks, on the other hand, not only became smaller, but also increased in intensity, signaling increased filament density in that region (Fig. <ref>B). The resulting phase space plot has excellent qualitative agreement with our experimentally obtained phase space (Figs. <ref>D, <ref>). Prior theoretical and simulation work studying the effect of alignment-induced interactions among discrete particles has been discussed through the Vicsek model, or the “flying XY” model <cit.>. Similar problems have also been studied using a continuum approach <cit.>. A broad theme in these works is how local interaction leads to global alignment. In our simulations, we take the route of a simple, 1D model of interacting filaments. We do not require more complicated, 2D models because we started with the experimentally determined assumption that our crosslinkers align and bundle filaments <cit.>. Future models of similar systems that allow the initial evolution of the microtubules into networks could be of interest to theorists. Systems consisting of filaments interacting in 2D through polymerization or motor proteins will be interesting as well and can take advantage of prior experimental results <cit.>.As the model makes clear, the contraction of bundles can arise entirely from an overlap-dependent slowing down of nearby microtubules, mediated by the crosslinkers, and not from specific contractile forces induced by the crosslinkers or motors themselves. At very high densities of crosslinker, only a little overlap between neighboring microtubules is required to overcome the forces applied by motors, and hence the system comes to a complete halt without showing any significant contraction.Similarly, as the concentration of microtubules increases at fixed crosslinker density, so does the fraction of antiparallel aligned microtubules. This promotes a higher probability for a crosslinker to bind, thereby joining two microtubules. Above a certain concentration there is sufficient overlap of antiparallel, aligned microtubules and also a sufficient density of crosslinkers to bind at these sites and cause contraction.
If we keep increasing the microtubule density, however, the dynamics of the system slow down due to crowding and go to zero above a specific density. This slows the contraction and finally ceases all activity.§.§ Cell-Like NetworksIn our simulations, the contractile state always ended in a static state where the density of the stationary, high density regions was fixed in time. In the experiments, we sometimes saw the contractile state result in a static state at long times, although many networks were still contracting at 45 minutes (Supp. Fig. 2). On other occasions, the filaments of initially contracting networks would eventually make it past each other and spread apart to become extensile at very long times. Perhaps more interestingly, some initially contracting networks would dislodge from the surface and move around as an intact network (Fig. <ref>). We previously observed these cell-like networks at one particular concentration of microtubules and MAP65 <cit.>. In most cases, there was a high density “nucleus” of filaments with long bundles protruding from it. In many cases the bundle would ultimately be pulled away from the nucleus. From our current work, we deduce that these cell-like networks form when the MAP65 is at a sweet spot for cohesion. Further, the fact that we only observe these cell-like networks for small networks of bundles implies that there may be some cut-off size beyond which the networks only extend or become static, but cannot become mobile; consistent with this, the mobile networks were typically smaller initially. At very long times, the dynamics ceased because the network would lose contact with the kinesin-coated surface (Fig. <ref>E). The network would remain self-assembled, but without contact with the motors, it was unable to move in a directed manner.§ CONCLUSIONIn this paper, we showed that the dynamics of active crosslinked networks of microtubules are tuned by both microtubule and crosslinker density. We have shown previously that MAP65 slows down antiparallel aligned microtubules in gliding assays in an overlap dependent manner <cit.>. Our experimental results combined with our simulations show that crosslinking of microtubules is essential for contraction. The mechanism behind contraction is the slowing down of the microtubule velocity by crosslinkers. The crosslinker-induced drag force depends on the bound crosslinker density; this leads to local contraction of bundles and alignment of neighboring microtubules, driving further contraction. It is essential to understand how the various components of a cell interact and produce the dynamics that are hallmarks of all living, cellular systems. To that end, we need to observe and quantify interactions between filaments, motor proteins, and crosslinkers and model those interactions to understand cytoskeletal mechanics. Towards this aim, we designed an experiment to study the behavior of active microtubule networks in which individual microtubules interact through the action of molecular motors and crosslinkers. We systematically and independently varied the concentration of both microtubules and crosslinkers to explore the effect on the dynamics of a network. We showed, for the first time, that an active system of cytoskeletal filaments driven by motor activity can be tuned through the control parameters of relative filament and crosslinker density.
Our system goes through a novel, contractile phase in which networks shrink instead of expanding. To elucidate the mechanism of these dynamics, we simulated a microtubule bundle as a collection of active rods in one dimension interacting via density-driven dynamics. Using our simple model, we were able to recapitulate the extensile, contractile, and static dynamic phases and generate a dynamic phase diagram that is qualitatively similar to the experimentally obtained data. Our work suggests that the cell can tune its dynamics by adjusting the relative density of filaments and crosslinkers to create biologically important phases, such as the mitotic spindle and the contractile ring. Understanding the fundamental principles behind such systems will elucidate how cells autonomously sense, compute, and respond to their environment through shape changes of the cytoskeleton.§ ACKNOWLEDGEMENTSWe would like to thank Dr. Peker Milas and Dr. Art Evans for constructive conversations about particle tracking, active matter theory, and models. K.S. was funded by NSF INSPIRE grant 1344203 to J.L.R., NIH RO1-GM109909 to J.L.R., and The Mathers Foundation. V.Y. was funded by NSF INSPIRE to J.L.R., MURI 67455-CH-MUR, and the Mathers Foundation. C.S. is funded by NSF EFRI ODISSEI 1240441 and the Keck Foundation. J.L.R. was funded by NSF INSPIRE, the Mathers Foundation, NIH, and MURI.unsrt 1Braun2011 M. Braun et al., Nature Cell Biol., 2011, 13, 1259-1264.Loiodice2005 I. Loiodice et al., Mol. Biol. Cell, 2005, 16, 1756-1768.Subramanian2010 R. Subramanian et al., Cell, 2010, 142, 433-443.Lansky2015 Z. Lansky et al., Cell, 2015, 160, 1159-1168.Bieling2010 P. Bieling, I. Telley, and T. Surrey, Cell, 2010, 142, 420-432.Jiang1998 W. Jiang et al., Mol. Cell, 1998, 2, 877-885.Chan1999 J. Chan, C.G. Jensen, C.L. Jensen, M. Bush, and C.W. Lloyd, Proc. Natl. Acad. Sci., 1999, 96, 14931-14936.Pringle2013 J. Pringle, A. Muthukumar, A. Tan, L. Crankshaw, L. Conway, and J.L. Ross, J. Phys. Cond. Mat., 2013, 25, 374103.Tulin2012 A. Tulin, S. McClerkin, Y. Huang, and R. Dixit, Biophys. J., 2012, 102, 802-809.Hendricks2010 A.G. Hendricks et al., Curr. Biol., 2010, 20, 697-702.Vale2000 R.D. Vale and R.A. Milligan, Science, 2000, 288, 88-95.Hirokawa2009 N. Hirokawa, Y. Noda, Y. Tanaka, and S. Niwa, Nat. Rev. Mol. Cell Biol., 2009, 10, 682-696.Surrey2010 C. Hentrich and T. Surrey, J. Cell Biol., 2010, 189, 465-480.Liu2011 L. Liu, E. Tuzel, and J.L. Ross, J. Phys. Cond. Mat., 2011, 23, 374104.Sumino2012 Y. Sumino et al., Nature, 2012, 483, 448-452.Schaller2010 V. Schaller, C. Weber, C. Semmrich, E. Frey, and A.R. Bausch, Nature, 2010, 467, 73-77.Vogel2007 R.K. Doot, H. Hess, and V. Vogel, Soft Matter, 2007, 3, 349-356.Vogel2006 S. Ramachandran, K.H. Ernst, G.D. Bachand, and V. Vogel, Small, 2006, 2, 330-334.Vogel2001 H. Hess and V. Vogel, Reviews in Molecular Biotechnology, 2001, 82, 67-85.Leibler1997 F.J. Nedelec, T. Surrey, A.C. Maggs, and S. Leibler, Nature, 1997, 389, 305-308.Sanchez2012 T. Sanchez, D.T. Chen, S.J. Decamp, M. Heymann, and Z. Dogic, Nature, 2012, 491, 431-434.Murrell2012 M. Murrell and M. Gardel, Proc. Natl. Acad. Sci., 2012, 109, 20820-20825.Alvarado2013 J. Alvarado, M. Sheinman, A. Sharma, F.C. Mackintosh, and G.H. Koenderink, Nature Physics, 2013, 9, 591-597.Koenderink2009 G.H. Koenderink et al., Proc. Natl. Acad. Sci., 2009, 106, 15192-15197.Stachowiak2012 M.R. Stachowiak et al., Biophys. J., 2012, 103, 1265-1274.Sanchez2013 T. Sanchez and Z. Dogic, Methods Enzymol., 2013, 524, 205-224.Hawkins2010 T. Hawkins, M. Mirigian, M.
Selcuk Yasar, and J.L. Ross, J. Biomech., 2010, 43, 23-30.Needleman2015 P.J. Foster, S. Furthauer, M.J. Shelley, and D.J. Needleman, eLife, 2015, 4, e10837.Nedelec2016 J.M. Belmonte and F. Nedelec, eLife, 2016, 5, e14076.Kazuhiro2016 T. Torisawa, S. Taniguchi, S. Ishihara, and K. Oiwa, Biophys. J., 2016, 111, 373-385.Stanhope2015 K. Stanhope and J.L. Ross, Microtubules, MAPs, and Motor Proteins, Eds. J.L. Ross, W. Marshall, Methods in Cell Biology, 2015, 2.NR W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, New York, 2007.Mahadevan2007 D. Cuvelier et al., Curr. Biol., 2007, 17, 694-699.Joanny2013 J.F. Joanny, K. Kruse, J. Prost, and S. Ramaswamy, Euro. Phys. J. E, 2013, 36, 52.Hawkins2012 T.L. Hawkins et al., Cellular and Molecular Bioengineering, 2012, 5, 227-238.Hawkins2013 T.L. Hawkins, D. Sept, B. Moogessie, A. Straube, and J.L. Ross, Biophysical Journal, 2013, 104, 1517-1528.Gittes1993 F. Gittes, B. Mickey, J. Nettleton, and J. Howard, J. Cell Biol., 1993, 120, 923-934.McCullough2011 B.R. McCullough et al., Biophys. J., 2011, 101, 151-159.Doi1999 M. Doi and S.F. Edwards, The Theory of Polymer Dynamics, Oxford Science Publications, 2012.Yadav2012 V. Yadav and A. Kudrolli, Euro. Phys. J. E, 2012, 35(10), 104.Conway2014-1 L. Conway, M.W. Gramlich, S.M.A. Tabei, and J.L. Ross, Cytoskeleton, 2014, 71, 595-610.Conway2014-2 L. Conway and J.L. Ross, 2014, arXiv:1409.3455 [q-bio.BM].Kruse2004 K. Kruse, J.F. Joanny, F. Julicher, J. Prost, and K. Sekimoto, Physical Review Letters, 2004, 92, 078101.Kruse2005 K. Kruse, J.F. Joanny, F. Julicher, J. Prost, and K. Sekimoto, Euro. Phys. J. E, 2005, 16, 5-16.Bertin2009 E. Bertin, M. Droz, and G. Gregoire, Journal of Physics A: Mathematical and Theoretical, 2009, 42, 445001.Farrell2012 F. Farrell, M. Marchetti, D. Marenduzzo, and J. Tailleur, Physical Review Letters, 2012, 108, 248101.Vicsek1995 T. Vicsek, A. Czirok, E. Ben-Jacob, I. Cohen, and O. Shochet, Physical Review Letters, 1995, 75, 1226.Kosterlitz1973 J.M. Kosterlitz and D.J. Thouless, J. Phys. C: Solid State Physics, 1973, 6, 1181-1203.Tu1995 Y. Tu and J. Toner, Physical Review Letters, 1995, 75, 4326-4329. | http://arxiv.org/abs/1703.08755v1 | {
"authors": [
"Kasimira T. Stanhope",
"Vikrant Yadav",
"Christian D. Santangelo",
"Jennifer L. Ross"
],
"categories": [
"cond-mat.soft",
"q-bio.BM"
],
"primary_category": "cond-mat.soft",
"published": "20170326022244",
"title": "Contractility in an Extensile System"
} |
Person Re-Identification by Camera Correlation Aware Feature AugmentationYing-Cong Chen, Xiatian Zhu, Wei-Shi Zheng, Jian-Huang Lai Code is available at the project page: http://isee.sysu.edu.cn/%7ezhwshi/project/CRAFT.htmlFor reference of this work, please cite: Ying-Cong Chen, Xiatian Zhu, Wei-Shi Zheng, and Jian-Huang Lai. Person Re-Identification by Camera Correlation Aware Feature Augmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (DOI: 10.1109/TPAMI.2017.2666805)Bib:@article{craft, title={Person Re-Identification by Camera Correlation Aware Feature Augmentation},author={Chen, Ying-Cong and Zhu, Xiatian and Zheng, Wei-Shi and Lai, Jian-Huang},journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (DOI: 10.1109/TPAMI.2017.2666805)}} The challenge of person re-identification (re-id) is to match individual images of the same person captured by different non-overlapping camera views against significant and unknown cross-view feature distortion. While a large number of distance metric/subspace learning models have been developed for re-id, the cross-view transformations they learn are view-generic and thus potentially less effective in quantifying the feature distortion inherent to each camera view. Learning view-specific feature transformations for re-id (i.e., view-specific re-id), an under-studied approach, becomes an alternative resort for this problem. In this work, we formulate a novel view-specific person re-identification framework from the feature augmentation point of view, called Camera coRrelation Aware Feature augmenTation (CRAFT). Specifically, CRAFT performs cross-view adaptation by automatically measuring camera correlation from the cross-view visual data distribution and adaptively conducting feature augmentation to transform the original features into a new adaptive space. Through our augmentation framework, view-generic learning algorithms can be readily generalized to learn and optimize view-specific sub-models whilst simultaneously modelling view-generic discrimination information. Therefore, our framework not only inherits the strength of view-generic model learning but also provides an effective way to take into account view-specific characteristics. Our CRAFT framework can be extended to jointly learn view-specific feature transformations for person re-id across a large network with more than two cameras, a largely under-investigated but realistic re-id setting. Additionally, we present a domain-generic deep person appearance representation which is designed particularly to be view-invariant for facilitating cross-view adaptation by CRAFT. We conducted extensive comparative experiments to validate the superiority and advantages of our proposed framework over state-of-the-art competitors on contemporary challenging person re-id datasets. Person re-identification, adaptive feature augmentation, view-specific transformation. Person Re-Identification by Camera Correlation Aware Feature AugmentationYing-Cong Chen, Xiatian Zhu, Wei-Shi Zheng, Jian-Huang LaiYing-Cong Chen is with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510275, China, with the Collaborative Innovation Center of High Performance Computing, National University of Defense Technology, Changsha 410073, China, and is also with the Department of Computer Science and Engineering, The Chinese University of Hong Kong.
E-mail: [email protected] Xiatian Zhu is with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510275, China, and also with the School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom. E-mail: [email protected] Wei-Shi Zheng is with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510275, China, and is also with the Key Laboratory of Machine Intelligence and Advanced Computing (Sun Yat-sen University), Ministry of Education, China. E-mail: [email protected]. Jian-Huang Lai is with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510275, China, and is also with the Guangdong Province Key Laboratory of Information Security, P. R. China. E-mail: [email protected] December 30, 2023 ============================================================ § INTRODUCTION The extensive deployment of closed-circuit television cameras in visual surveillance results in a vast quantity of visual data and inevitably necessitates automated data interpretation mechanisms. One of the most essential visual data processing tasks is to automatically re-identify an individual person across non-overlapping camera views distributed at different physical locations, which is known as person re-identification (re-id). However, person re-id by visual matching is inherently challenging due to the existence of many visually similar people and the dramatic appearance changes of the same person arising from the great cross-camera variation in viewing conditions such as illumination, viewpoint, occlusions and background clutter <cit.> (Figure <ref>). In the current person re-id literature, the best performers are discriminative learning based methods <cit.>. Their essential objective is to establish a reliable re-id matching model by learning identity discriminative information from pairwise training data. Usually, this is achieved by either a view-generic modelling scheme (e.g., optimizing a common model for multiple camera views) <cit.> or a view-specific modelling scheme (e.g., optimizing a separate model for each camera view) <cit.>.
The former mainly focuses on the shared view-generic discriminative learning but does not explicitly take the individual view information (e.g., via camera view labels) into account during modelling. Given that person re-id inherently incurs dramatic appearance changes across camera views due to the great differences in illumination, viewpoint or camera characteristics, the view-generic approach is inclined to be sub-optimal in quantifying the feature distortion caused by these variations in individual camera views. While the latter approach may mitigate this problem by explicitly considering view label information during modelling, most of these methods do not take into consideration the feature distribution alignment across camera views, so that cross-view data adaptation cannot be directly quantified and optimized during model learning as view-generic counterparts do. Additionally, existing view-specific methods are often subject to limited scalability for person re-id across multiple (more than two) camera views in terms of implicit assumptions and formulation design.In view of the analysis above, we formulate a novel view-specific person re-identification framework, named Camera coRrelation Aware Feature augmenTation (CRAFT), capable of performing cross-view feature adaptation by measuring cross-view correlation from the visual data distribution and carrying out adaptive feature augmentation to transform the original features into a new augmented space. Specifically, we quantify the underlying camera correlation in our framework by generalizing the conventional zero-padding, a non-parameterized feature augmentation mechanism, to a parameterized feature augmentation. As a result, any two cameras can be modelled adaptively but not independently, whilst the common information between camera views is quantified in the adaptive space. Through this augmentation framework, view-generic learning algorithms can be readily generalized to induce view-specific sub-models whilst simultaneously involving view-generic discriminative modelling. More concretely, we instantialize our CRAFT framework with Marginal Fisher Analysis (MFA) <cit.>, leading to a re-id method instance called CRAFT-MFA. We further introduce a camera view discrepancy regularization in order to add extra modelling capability for controlling the correlation degree between view-specific sub-models. This regularization can be viewed as a complementary means of incorporating camera correlation modelling on top of the proposed view-specific learning strategy. Moreover, our CRAFT framework can be flexibly deployed for re-id across multiple (>2) camera views by jointly learning a unified model, which is largely under-studied in existing approaches.Apart from cross-view discriminative learning, we also investigate domain-generic (i.e., independent of target data or domain) person appearance representation, with the aim of making person features more view-invariant for facilitating the cross-view adaptation process using CRAFT. In particular, we explore the potential of deep learning techniques for person appearance description, inspired by the great success of deep neural networks in other related applications like object recognition and detection <cit.>. Differing significantly from existing deep learning based re-id approaches <cit.> that typically learn directly from the small person re-id training data, we instead utilize large auxiliary less-related image data.
This strategy allows us not only to avoid the insufficient training data problem and the limited scalability challenge in deep re-id models, but also to yield domain-generic person features with more tolerance to view variations. The main contributions of this work include: (I) We propose a camera correlation aware feature augmentation person re-id framework called CRAFT. Our framework is able to generalize existing view-generic person re-identification models to perform view-specific learning. A kernelization formulation is also presented. (II) We extend our CRAFT framework to jointly learn view-specific feature transformations for person re-id across a large network involving more than two cameras. Although this is a realistic scenario, how to build an effective unified re-id model for an entire camera network is still under-explored in existing studies. (III) We present a deep convolutional network based appearance feature extraction method in order to extract domain-generic and more view-invariant person features. To our knowledge, this is the first attempt that explores deep learning with large auxiliary non-person image data for constructing discriminative re-id features.For evaluating our method, we conducted extensive comparisons between CRAFT and a variety of state-of-the-art models on the VIPeR <cit.>, CUHK01 <cit.>, CUHK03 <cit.>, QMUL GRID <cit.>, and Market-1501 <cit.> person re-id benchmarks.§ RELATED WORKDistance metric learning in person re-id. Supervised learning based methods <cit.> dominate current person re-id research by achieving state-of-the-art performance, whilst much fewer unsupervised re-id methods <cit.> have been proposed, with much inferior results. This is because large cross-camera variations in viewing conditions may cause dramatic person appearance changes and give rise to great difficulties in accurately matching identities. Discriminative learning from re-id training data is typically considered a necessary and effective strategy for reliable person re-id. Notable re-id learning models include PRDC <cit.>, LADF <cit.>, KISSME <cit.>, PCCA <cit.>, LFDA <cit.>, XQDA <cit.>, PSD <cit.>, Metric Ensemble <cit.>, DNS <cit.>, SCSP <cit.>, and so forth. All the above re-id methods are mainly designed for learning the common view-generic discriminative knowledge, but largely ignore the individual view-specific feature variation under each camera view. This limitation can be relaxed by the recent view-specific modelling strategy capable of learning an individual matching function for each camera view. Typical methods of this kind include the CCA (canonical correlation analysis) based approaches ROCCA <cit.>, refRdID <cit.>, KCCA-based re-id <cit.>, and CVDCA <cit.>. However, they are less effective in extracting the shared discriminative information between different views, because these CCA-based methods <cit.> do not directly/simultaneously quantify the commonness and discrepancy between views when learning the transformations, so that they cannot accurately identify what information can be shared between views.
While CVDCA <cit.> attempts to quantify the inter-view discrepancy, it is theoretically restricted by the stringent Gaussian distribution assumption on person image data, which may yield sub-optimal modelling in the presence of typically complex/significant cross-view appearance changes. View-specific modelling for person re-id remains largely under-studied.In this work, we present a different view-specific person re-id framework, characterized by a unique capability of generalizing view-generic distance metric learning methods to perform view-specific person re-id modelling, whilst still preserving their inherent learning strength. Moreover, our method can flexibly benefit many existing distance metric/subspace-based person re-id models for substantially improving re-id performance. Feature representation in person re-id. Feature representation is another important issue for re-id. Ideal person image features should be sufficiently invariant against viewing condition changes and generalizable across different cameras/domains. To this end, person re-id images are often represented by hand-crafted appearance pattern based features, designed and computed according to human domain knowledge <cit.>. These features are usually constituted by multiple different types of descriptors (e.g., color, texture, gradient, shape or edge) and are largely domain-generic (i.e., there is no need to learn from labelled target training data). Nowadays, deep feature learning for person re-id has attracted increasing attention <cit.>. These alternatives benefit from the powerful modelling capacity of neural networks, and are thus suitable for joint learning even given very heterogeneous training data <cit.>. However, they often require a very large collection of labelled training data and can easily suffer from the risk of model overfitting in realistic applications when sufficient data are not provided <cit.>. Also, these existing deep re-id features are typically domain-specific. In contrast, our method exploits deep learning techniques for automatically mining more diverse and view-invariant appearance patterns (versus restricted hand-crafted ones) from auxiliary less-relevant image data (versus using the sparse person re-id training data), finally leading to a more reliable person representation. Furthermore, our deep feature is largely domain-generic with no need for labelled target training data. Therefore, our method simultaneously possesses the benefits of the two conventional feature extraction paradigms above. The advantages of our proposed features over existing popular alternatives are demonstrated in our evaluations (Section <ref> and Table <ref>).Domain adaptation. In a broader context, our cross-view adaptation for re-id is related to but vitally different from domain adaptation (DA) <cit.>. Specifically, the aim of existing DA methods is to diminish the distribution discrepancy between the source and target domains, which is similar conceptually to our method. However, DA models typically assume that the training and test classes (or persons) overlap, whilst our method copes with disjoint training and test person (class) sets, with the objective of learning a discriminative model that is capable of generalizing to previously unseen classes (people). Therefore, conventional DA methods are less suitable for person re-id. Feature augmentation. Our model formulation is also related to the zero padding technique <cit.> and the feature data augmentation methods in <cit.>.
However, ours differs substantially from these existing methods. Specifically, <cit.> is designed particularly for a heterogeneous modelling problem (i.e., different feature representations are used in distinct domains), whilst person re-id is typically homogeneous, making that approach unsuitable. More critically, beyond all these conventional methods, our augmentation formulation uniquely considers the relation between transformations of different camera views (even if more than two) and embeds the intrinsic camera correlation into the adaptive augmented feature space for facilitating cross-view feature adaptation and finally person identity association modelling.§ TOWARDS VIEW INVARIANT STRUCTURED PERSON REPRESENTATION We want to construct more view-change-tolerant person re-id features. To this end, we explore the potential of deep convolutional neural networks, encouraged by their great generalization capability in related applications <cit.>. Typically, one has only sparse (usually hundreds or thousands of) labelled re-id training samples due to expensive data annotation. This leads to great challenges for deriving effective domain-generic features using deep models <cit.>. We resolve the sparse training data problem by learning the deep model with a large auxiliary image dataset, rather than attempting to learn person discriminative features from the small re-id training data. Our intuition is that a generic person description relies largely on the quality of atomic appearance patterns. Naturally, our method can be largely domain-generic in that appearance patterns learned from large scale data are likely to be more general. To this end, we first exploit the AlexNet[Other networks such as the VggNet <cit.> and GoogLeNet <cit.> architectures can be considered without any limitation.] <cit.> convolutional neural network (CNN) to learn from the ILSVRC 2012 training dataset for obtaining diverse patterns. Then, we design reliable descriptors to characterize the appearance of a given person image. We mainly leverage the lower convolutional (conv) layers, unlike <cit.> which utilizes the higher layers (e.g., the 5^th conv layer, 6^th/7^th fully connected (fc) layers). The reasons are: (1) Higher layers are more likely to be task-specific (e.g., sensitive to general object categories rather than person identity in our case), and may have worse transferability than lower ones <cit.> (see our evaluations in Table <ref>). In contrast, lower conv layers correspond to low-level visual features such as color, edge and elementary texture patterns <cit.>, and naturally have better generality across different tasks. (2) Features from higher layers are possibly contaminated by dramatic variations in human pose or background clutter due to their large receptive fields, and are thus not sufficiently localized for person re-id. We present two person descriptors based on the feature maps of the first two conv layers[More conv layers can be utilized similarly, but at the cost of increasing the feature dimension size.], as detailed below. Structured person descriptor. The feature data from convnet layers are highly structured, e.g., they are organized in the form of multiple 2-D feature maps.
Formally, we denote by F_c ∈ℝ^h_c× w_c× m_c the feature maps of the c-th (c ∈{1, 2}) layer, with h_c/w_c the height/width of the feature maps, and m_c the number of conv filters. In our case, h_1 = w_1 = 55, m_1 = 96; and h_2=w_2=27, m_2 = 256. For the c-th layer, F_c(i,j,κ) represents the activation of the κ-th filter on the image patch centred at the pixel (i,j). Given the great variations in person pose, we further divide each feature map into horizontal strips with a fixed height h_s for encoding spatial structure information and enhancing pose invariance, similar to ELF <cit.> and WHOS <cit.>. We empirically set h_s = 5 in our evaluation for preserving sufficient spatial structure. We then extract the intensity histogram (with bin size 16) from each strip for every feature map. The concatenation of all these strip-based histograms forms the Histogram of Intensity Pattern (HIP) descriptor, with feature dimension 37376=16(bins)×11(strips)×96(m_1) + 16(bins)×5(strips)×256(m_2). As such, HIP is inherently structured, containing multiple components with different degrees of pattern abstractness.View-invariant person descriptor. The proposed HIP descriptor completely encodes the activation information of all feature maps, regardless of their relative saliency and noise degree. This ensures pattern completeness, but is potentially sensitive to cross-view covariates such as human pose changes and background discrepancy. To mitigate this issue, we propose to selectively use these feature maps for introducing further view-invariance capability. This is realized by incorporating the activation ordinal information <cit.>, yielding another descriptor called Histogram of Ordinal Pattern (HOP). We similarly encode the spatial structure information by the same horizontal decomposition as HIP. Specifically, we rank all activations {F_c(i,j,κ)}_κ=1^m_c in descending order and get the top-κ feature map indices, denoted asp_c(i,j) =[v_1,v_2, ⋯, v_κ],where v_i is the index of the i-th feature map in the ranked activation list. We fix κ = 20 in our experiments. By repeating the same ranking process for each image patch, we obtain a new representation P_c ∈ℝ^h_c× w_c×κ with elements p_c(i,j). Since P_c can be considered as another set of feature maps, we utilize a similar pooling scheme to HIP to construct its histogram-like representation, but with bin size m_c for the c-th layer. Therefore, the feature dimension of HOP is 46720=96(bins, m_1)×11(strips)×20(κ) + 256(bins, m_2)×5(strips)×20(κ). Together with HIP, we call our final fused person feature HIPHOP, with the total dimension 84096=46720+37376.Feature extraction overview. We depict the main steps of constructing our HIPHOP feature. First, we resize a given image into the size of 227×227, as required by AlexNet (Figure <ref>(a)). Second, we forward propagate the resized image through the AlexNet (Figure <ref>(b)). Third, we obtain the feature maps from the 1^st and 2^nd conv layers (Figure <ref>(c)). Fourth, we compute the HIP (Figure <ref>(d)) and HOP (Figure <ref>(e)) descriptors. Finally, we compose the HIPHOP feature for the given person image by vector concatenation (Figure <ref>(f)). For approximately suppressing background noise, we impose an Epanechnikov kernel <cit.> as a weight on each activation map before computing histograms.
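To make the two descriptors concrete, the following is a minimal sketch of the HIP and HOP pooling described above, assuming the conv feature maps have already been extracted (e.g., by forwarding the resized image through AlexNet); the function names, the uniform histogram binning, and the array layout are illustrative assumptions rather than the exact implementation:

import numpy as np

def hip_descriptor(F, n_strips, n_bins=16):
    # F: (h, w, m) conv feature maps for one image; one n_bins-bin intensity
    # histogram per horizontal strip per map, then concatenate everything.
    h, w, m = F.shape
    edges = np.linspace(F.min(), F.max(), n_bins + 1)   # binning is an assumption
    rows = np.array_split(np.arange(h), n_strips)
    feats = [np.histogram(F[r][:, :, k], bins=edges)[0]
             for k in range(m) for r in rows]
    return np.concatenate(feats)

def hop_descriptor(F, n_strips, top_k=20):
    # Rank filters by activation at each location; histogram the indices of
    # the top-k filters (bin size m) within each horizontal strip.
    h, w, m = F.shape
    top = np.argsort(-F, axis=2)[:, :, :top_k]          # (h, w, top_k) filter ids
    rows = np.array_split(np.arange(h), n_strips)
    feats = [np.bincount(top[r][:, :, j].ravel(), minlength=m)
             for j in range(top_k) for r in rows]
    return np.concatenate(feats)

# For the stated layer sizes, n_strips = 11 for the 55x55 maps and 5 for the
# 27x27 maps, recovering the quoted 37376-dim HIP and 46720-dim HOP features.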
§ CAMERA CORRELATION AWARE FEATURE AUGMENTATION FOR RE-ID We formulate a novel view-specific person re-id framework, namely Camera coRrelation Aware Feature augmenTation (CRAFT), to adapt the original image features into a new view-adaptive space, where many view-generic methods can be readily deployed to achieve view-specific discrimination modelling. In the following, we formulate re-id as a feature augmentation problem, and then present our CRAFT framework. We first discuss re-id under two non-overlapping camera views and later generalize our model to multiple (more than two) camera views. §.§ Re-Id Model Learning Under Feature Augmentation Given image data from two non-overlapping camera views, namely camera a and camera b, we reformulate person re-id in a feature augmentation framework. Feature augmentation has been exploited in the domain adaptation problem. For example, Daumé III <cit.> proposed the feature mapping functions ρ^s (x) = [x^⊤, x^⊤, (0_d)^⊤]^⊤ (for the source domain) and ρ^t (x) = [x^⊤, (0_d)^⊤, x^⊤]^⊤ (for the target domain) for homogeneous domain adaptation, with x∈ℝ^d denoting the sample feature, 0_d the d-dimensional column vector of all zeros, d the feature dimension, and the superscript ^⊤ the transpose of a vector or a matrix. This can be viewed as incorporating the original feature into an augmented space for enhancing the similarities between data from the same domain and thus increasing the impact of same-domain (or same-camera) data. For person re-id, this should be unnecessary given its cross-camera matching nature. Without the original features in the augmentation, this reduces to zero padding, a technique widely exploited in signal transmission <cit.>.Zero padding. Formally, the zero padding augmentation can be formulated as:X̃_zp^a =[ I_d × d; O_d × d ]X^a, X̃_zp^b =[ O_d × d; I_d × d ]X^b,where X^a = [x_1^a, …, x_n_a^a] ∈ℝ^d × n_a and X^b = [x_1^b, …, x_n_b^b] ∈ℝ^d × n_b represent the column-wise image feature matrices from camera a and b; X̃_zp^a = [x̃_zp,1^a, …, x̃_zp,n_a^a] ∈ℝ^2d × n_a and X̃_zp^b = [x̃_zp,1^b, …, x̃_zp,n_b^b] ∈ℝ^2d × n_b refer to the augmented feature matrices; n_a and n_b are the training sample numbers of camera a and camera b, respectively; and I_d × d and O_d × d denote the d × d identity matrix and zero matrix, respectively. Re-id reformulation. The augmented features X̃_zp^ϕ (ϕ∈{a,b}) can be incorporated into different existing view-generic distance metric or subspace learning algorithms. Without loss of generality, we take subspace learning as an example in the following. Specifically, the aim of discriminative learning is to estimate the optimal projection Ŵ∈ℝ^2d × m (with m the subspace dimension) such that after the projection z^ϕ = Ŵ^⊤x̃_zp^ϕ, where ϕ∈{a,b} and x̃_zp^ϕ is an augmented feature defined in Eqn. (<ref>), one can effectively discriminate between different identities by Euclidean distance. Generally, the objective function can be written asŴ = min_W f_obj(W^⊤X̃_zp),where X̃_zp = [X̃_zp^a, X̃_zp^b] = [x̃_zp,1, …, x̃_zp,n], with n=n_a+n_b, is the combined feature data matrix from camera a and camera b.Clearly, Ŵ can be decomposed into two parts as:Ŵ = [(Ŵ^a)^⊤, (Ŵ^b)^⊤]^⊤,with Ŵ^a ∈ℝ^d × m and Ŵ^b ∈ℝ^d × m corresponding to the respective projections (or sub-models) for camera a and b. This is due to the zero padding based feature augmentation (Eqn. (<ref>)):Ŵ^⊤X̃^a_zp = (Ŵ^a)^⊤X^a, Ŵ^⊤X̃^b_zp = (Ŵ^b)^⊤X^b.
Clearly, zero padding allows view-generic methods to simultaneously learn two view-specific sub-models, i.e., Ŵ^a for view a and Ŵ^b for view b, and realize view-specific modelling <cit.>, likely better aligning the cross-view image data distributions <cit.>. For better understanding, we take an example in 1-D feature space. Often, re-id feature data distributions from different camera views are misaligned due to the inter-camera viewing condition discrepancy (Figure <ref>(a)). With labelled training data, the transformation learned in Eqn. (<ref>) aims to search for an optimized projection in the augmented feature space (the red line in Figure <ref>(b)) such that cross-view data distributions are aligned and thus suitable for matching images of the same person across camera views. Clearly, zero padding treats each camera view independently by optimizing two separate view-specific sub-models, and therefore allows the feature distortion of either camera view to be better quantified. Nonetheless, as compared to a single view-generic model, this doubled modelling space may unfavourably loosen the inherent inter-camera correlation (e.g., the “same” person with “different” appearance in the images captured by two cameras with distinct viewing conditions). This may in turn make the model optimization less effective in capturing appearance variation across views and extracting shared view-generic discriminative cues.To overcome the above limitation, we design the camera correlation aware feature augmentation, which allows the common information between camera views to be adaptively incorporated into the augmented space whilst retaining the capability of well modelling the feature distortion of individual camera views. §.§ Camera Correlation Aware Feature Augmentation The proposed feature augmentation method is performed in two steps: (I) We automatically quantify the commonness degree between different camera views. (II) We exploit the estimated camera commonness information for adaptive feature augmentation. (I) Quantifying camera commonness by correlation. We propose exploiting the correlation in image data distributions for camera commonness quantification. Considering that many different images may be generated by any camera, we represent a camera by a set of images captured by itself, e.g., a set of feature vectors. We exploit the available training images captured by both cameras for obtaining a more reliable commonness measure. Specifically, given image features X^a and X^b for camera a and b, respectively, we adopt the principal angles <cit.> to measure the correlation between the two views. In particular, first, we obtain the linear subspace representations by principal component analysis, G^a ∈ℝ^n_a ×r for X^a and G^b ∈ℝ^n_b ×r for X^b, with r the number of dominant components. In our experiments, we empirically set r=100. Either G^a or G^b can then be seen as a data point on the Grassmann manifold, i.e., the set of fixed-dimensional linear subspaces of Euclidean space. Second, we measure the similarity between the two manifold points with their principal angles (0 ≤θ_1 ≤⋯≤θ_k ≤⋯≤θ_r≤π/2) defined as:cos(θ_k) = max_q_k ∈span(G^a) max_v_k ∈span(G^b) q_k^⊤v_k, s.t. q_k^⊤q_k = 1, v_k^⊤v_k = 1, q_k^⊤q_i = 0, v_k^⊤v_i = 0, i ∈ [0, k-1],where span(·) denotes the subspace spanned by the column vectors of a matrix.
The intuition is that principal angles have a good geometric interpretation (e.g., they are related to the manifold geodesic distance <cit.>) and their cosines cos(θ_k) are known as canonical correlations. Finally, we estimate the camera correlation (or commonness degree) ω as:ω = 1/r ∑_k=1^r cos(θ_k),with cos(θ_k) computed by Singular Value Decomposition:(G^a)^⊤G^b = Q cos(Θ) V^⊤,where cos(Θ) = diag(cos(θ_1), cos(θ_2), ⋯, cos(θ_r)), Q = [q_1, q_2, ⋯, q_r], and V = [v_1, v_2, ⋯, v_r].(II) Adaptive feature augmentation. Having obtained the camera correlation ω, we want to incorporate it into the feature augmentation. To achieve this, we generalize the zero padding (Eqn. (<ref>)) to:X̃_craft^a =[ R; M ]X^a, X̃_craft^b =[ M; R ]X^b,where R and M refer to d × d augmentation matrices. Thus, zero padding is a special case of the proposed feature augmentation (Eqn. (<ref>)) where R = I_d × d and M = O_d × d. With some view-generic discriminative learning algorithm, we can learn an optimal model W = [(W^a)^⊤, (W^b)^⊤]^⊤ in our augmented space. Then, the feature mapping functions can be written as:f_a (X̃_craft^a) = W^⊤X̃_craft^a = (R^⊤W^a + M^⊤W^b)^⊤X^a, f_b (X̃_craft^b) = W^⊤X̃_craft^b = (M^⊤W^a + R^⊤W^b)^⊤X^b.Compared to zero padding (Eqn. (<ref>)), it is clear that the feature transformation for each camera view is not treated independently in our adaptive feature augmentation (Eqn. (<ref>)). Instead, the transformations of all camera views are intrinsically correlated whilst remaining view-specific. However, it is non-trivial to automatically estimate the augmentation matrices R and M with the estimated camera correlation information accommodated for enabling more accurate cross-view discriminative analysis (Sec. <ref>). This is because a large number (2d^2) of parameters would need to be learned given the typically high feature dimensionality d (e.g., tens of thousands), but only a small number (e.g., hundreds) of training samples are available. Instead of directly learning from the training data, we propose to properly design R and M to overcome this problem as:R = ((2 - ω)/ϖ) I_d × d, M = (ω/ϖ) I_d × d,where ω is the camera correlation defined in Eqn. (<ref>) and ϖ = √((2-ω)^2 + ω^2) is the normalization term. In this way, the camera correlation is directly embedded into the feature augmentation process. Specifically, when ω = 0, which means the two camera views are totally uncorrelated with no common property, we have M = O_d × d and R = I_d × d, and our feature augmentation degrades to zero padding. When ω = 1, which means the largest camera correlation, we have R = M and thus potentially similar view-specific sub-models, i.e., sub-models strongly correlated with each other. In other words, M represents the shared degree across camera views whilst R stands for the view specificity strength, with their balance controlled by the inherent camera correlation. By using Eqn. (<ref>), the view-specific feature mapping functions in Eqn. (<ref>) can be expressed as:f_a (X̃_craft^a) = ((2 - ω)/ϖ)(W^a)^⊤X^a (specificity) + (ω/ϖ)(W^b)^⊤X^a (adaptiveness), f_b (X̃_craft^b) = (ω/ϖ)(W^a)^⊤X^b (adaptiveness) + ((2-ω)/ϖ)(W^b)^⊤X^b (specificity).Obviously, the mapped discriminative features for each camera view depend on its respective sub-model (weighted by (2-ω)/ϖ, corresponding to view-specific modelling) as well as the other sub-model (weighted by ω/ϖ, corresponding to view-generic modelling). As such, our proposed transformations realize the joint learning of both view-generic and view-specific discriminative information; we call this cross-view adaptive modelling.
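A minimal sketch of steps (I) and (II), assuming features are stored column-wise as in the text; here the camera subspaces are taken directly as the top principal directions in feature space via SVD, a simplification of the subspace construction above, and all names are illustrative:

import numpy as np

def camera_correlation(Xa, Xb, r=100):
    # Orthonormal bases of the top-r principal directions of each camera's
    # (centered) features; omega is the mean of the canonical correlations,
    # i.e., the singular values of Ga^T Gb (cosines of principal angles).
    Ga = np.linalg.svd(Xa - Xa.mean(1, keepdims=True), full_matrices=False)[0][:, :r]
    Gb = np.linalg.svd(Xb - Xb.mean(1, keepdims=True), full_matrices=False)[0][:, :r]
    cosines = np.linalg.svd(Ga.T @ Gb, compute_uv=False)
    return cosines.mean()

def craft_augment(Xa, Xb, omega):
    # Stack [R; M] X^a and [M; R] X^b with R = ((2-omega)/w) I and
    # M = (omega/w) I; since R and M are scalar multiples of the identity,
    # the augmentation reduces to scaled stacking. omega = 0 recovers
    # plain zero padding.
    w = np.sqrt((2 - omega) ** 2 + omega ** 2)
    ra, m = (2 - omega) / w, omega / w
    Xa_aug = np.vstack([ra * Xa, m * Xa])
    Xb_aug = np.vstack([m * Xb, ra * Xb])
    return Xa_aug, Xb_aug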
Model formulation analysis. We examine the proposed formulation by analyzing the theoretical connection among the model parameters {R, M, W}. Note that in our whole modelling, the augmentation matrices R and M (Eqn. (<ref>)) are appropriately designed with the aim of embedding the underlying camera correlation into a new feature space, whereas W (Eqn. (<ref>)) is automatically learned from the training data. Next, we demonstrate that learning W alone is sufficient to obtain the optimal solution. Formally, we denote the optimal augmentation matrices:R^opt = R + ∇R, M^opt = M + ∇M,with ∇R the difference (e.g., the part learned from the training data by some ideal algorithm) between our designed R and the assumed optimal one R^opt (similarly for ∇M). The multiplication operation between M (or R) and W in Eqn. (<ref>) suggests that(R + ∇R)^⊤W^a + (M + ∇M)^⊤W^b = R^⊤(W^a + ∇W^a) + M^⊤(W^b + ∇W^b), (M + ∇M)^⊤W^a + (R + ∇R)^⊤W^b = M^⊤(W^a + ∇W^a) + R^⊤(W^b + ∇W^b),where ∇W^a and ∇W^b are given by( [ ∇W^a; ∇W^b ]) = ( [ R^⊤ M^⊤; M^⊤ R^⊤ ])^-1 ( [ ∇R ∇M; ∇M ∇R ])^⊤ ( [ W^a; W^b ]).Eqn. (<ref>) indicates that the learnable parts ∇R and ∇M can be equivalently obtained in the process of optimizing W. This suggests that there is no necessity of directly learning R and M from the training data as long as W is inferred through some optimization procedure. Hence, deriving R and M as in Eqn. (<ref>) should not degrade the effectiveness of our whole model, but instead makes our entire formulation more tractable and more elegant, with different components fulfilling specific functions.§.§ Camera View Discrepancy Regularization As aforementioned in Eqn. (<ref>), our transformations of all camera views are not independent of each other. Although these transformations are view-specific, they are mutually correlated in practice because they quantify the same group of people for association between camera views. View-specific modelling (Eqns. (<ref>) and (<ref>)) naturally allows the mutual relation between different sub-models to be regularized, potentially incorporating complementary correlation modelling between camera views in addition to Eqn. (<ref>). To achieve this, we enforce a constraint on the sub-models by introducing a Camera View Discrepancy (CVD) regularization:γ_cvd = ||W^a - W^b||^2.Moreover, this CVD constraint can be combined well with the common ridge regularization as:γ = ||W^a - W^b||^2 + η_ridge tr(W^⊤W) = tr(W^⊤ [ I -I; -I I ] W) + η_ridge tr(W^⊤W) = (1+η_ridge) tr(W^⊤CW),whereC = [ I -βI; -βI I ], β = 1/(1+η_ridge),tr(·) denotes the trace operation, and η_ridge > 0 is a tradeoff parameter balancing the two terms. The regularization γ can be readily incorporated into existing learning methods <cit.> for possibly obtaining better model generalization. Specifically, we define an enriched objective function on top of Eqn. (<ref>) as:Ŵ = argmin_W f_obj (W^⊤X̃_craft) + λ tr(W^⊤CW),where λ controls the influence of γ. Next, we derive the process for solving Eqn. (<ref>). View discrepancy regularized transformation. Since β = 1/(1+η_ridge) < 1, the matrix C is positive-definite. Therefore, C can be factorized into the form ofC = PΛP^⊤,with Λ a diagonal matrix and PP^⊤ = P^⊤P = I. So, P^⊤CP=Λ. By defining W = PΛ^-1/2H, we have W^⊤CW = H^⊤H. Thus, Eqn. (<ref>) can be transformed equivalently to: Ĥ = argmin_H f_obj (H^⊤Λ^-1/2P^⊤X̃_craft) + λ tr(H^⊤H).
We define the transformed data matrix from all viewsẌ_craft = Λ^-1/2P^⊤X̃_craft =[ ẍ_1,⋯,ẍ_n] , which we call view discrepancy regularized transformation. So, Eqn. (<ref>) can be simplified as:[Ĥ = argmin_H f_obj(H^⊤Ẍ_craft)+ λtr(H^⊤H). ]Optimization.Typically, the same optimization algorithm as the adopted view-generic discriminative learning method can be exploited to solve the optimization problem. For providing a complete picture, we will present a case study with a specific discriminative learning method incorporated into the proposed CRAFT framework. §.§ CRAFT Instantialization In our CRAFT framework, we instantilize a concrete person re-id method usingthe Marginal Fisher Analysis (MFA) <cit.>, due toits several important advantages over the canonical Linear Discriminant Analysis (LDA) model <cit.>: (1) no strict assumption on data distribution and thus more general for discriminative learning, (2) a much larger number of available projection directions, and (3) a better capability of characterizing inter-class separability. We call this method instance as “CRAFT-MFA”.Algorithm <ref> presents an overview of learning the CRAFT-MFA model.Specifically, we consider each person identity as an individual class. That is, all images of the same person form the whole same-class samples, regardless of being captured by either camera. Formally, given the training data X = [X^a, X^b] = [x_1, …, x_n]with n = n_a + n_b,we first transform them to Ẍ_craft = [ ẍ_1,⋯,ẍ_n](Lines 1-9 in Algorithm <ref>).Ĥ can then be obtained by solving the MFA optimization problem (Line 11 in Alg. <ref>):[ min_H∑_ i ≠ jA^c_ij || H^⊤ (ẍ_i - ẍ_j) ||_2^2 + λtr(H^⊤H); s.t. ∑_ i ≠ jA^p_ij|| H^⊤ (ẍ_i - ẍ_j) ||_2^2 = 1, ]where the element A^c_ij of the intrinsic graph A^c is:A^c_ij = {[1 if i ∈ N^+_k_1(j) or j ∈ N^+_k_1(i);0otherwise ].,with N^+_k_1(i) denoting the set of k_1 nearest neighbor indices of sample x_i in the same class. And the elements A^p_ij of the penalty graph A^p are defined as:A^p_ij = {[1 if (i,j) ∈ P_k_2(y_i) or (i,j) ∈ P_k_2(y_j);0otherwise ].,where y_i and y_j refer to the class/identity label of the i^th and j^th sample, respectively, P_k_2(y_i) indicates the set of data pairs that are k_2 nearest pairs among{(i,j)| y_i≠ y_j}.Finally, we compute the optimal Ŵ with Eqn. (<ref>) (Line 12 in Algorithm <ref>). Note that this process generalizes to other view-generic discriminative learning algorithms <cit.> (see evaluations in Table <ref>). §.§ Kernel ExtensionThe objective function Eqn. (<ref>) assumes linear projection. However, given complex variations in viewing condition across cameras, the optimal subspace for person re-id may not be obtainable by linear models. Thus, we further kernelize our feature augmentation (Eqn. (<ref>)) by projecting the original feature data into a reproducing kernel Hilbert space ℋ with an implicit function ϕ(·). The inner-product of two data points in ℋ can be computed by a kernel functionk(x_i, x_j) = ⟨ϕ(x_i), ϕ(x_j) ⟩. In our evaluation, we utilized the nonlinear Bhattacharyya kernel function due to (1) its invariance against any non-singular linear augmentation and translation and (2) its additional consideration of data distribution variance and thus more reliable <cit.>. We denote k(x) as the kernel similarity vector of a sample x, obtained by:[k(x) = [k(x_1,x), k(x_2,x), ⋯, k(x_n,x)]^⊤ , ]where x_1, x_2,⋯,x_n are all samples from all views, and n=n_a+n_b is the number of training samples. 
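For reference, the intrinsic graph A^c and penalty graph A^p defined above can be assembled as in the following simplified sketch (assuming numpy and scipy; tie-breaking and the exact between-class pair-selection convention may differ from the authors' implementation):

```python
import numpy as np
from scipy.spatial.distance import cdist

def mfa_graphs(X, y, k1, k2):
    """Build the MFA intrinsic (A_c) and penalty (A_p) graphs (sketch).
    X: d x n data (one column per sample), y: length-n identity labels."""
    y = np.asarray(y)
    n = X.shape[1]
    D = cdist(X.T, X.T)                              # pairwise Euclidean distances
    A_c = np.zeros((n, n))
    A_p = np.zeros((n, n))
    for i in range(n):                               # k1 same-class nearest neighbours
        same = np.where(y == y[i])[0]
        same = same[same != i]
        nn = same[np.argsort(D[i, same])[:k1]]
        A_c[i, nn] = 1
        A_c[nn, i] = 1
    for c in np.unique(y):                           # k2 nearest between-class pairs
        ii = np.where(y == c)[0]
        jj = np.where(y != c)[0]
        sub = D[np.ix_(ii, jj)]
        for f in np.argsort(sub, axis=None)[:k2]:
            a, b = np.unravel_index(f, sub.shape)
            A_p[ii[a], jj[b]] = 1
            A_p[jj[b], ii[a]] = 1
    return A_c, A_p
```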
Then the mapping function can be expressed as:f_ker(x) = U^⊤ϕ(X)^⊤ϕ(x) = U^⊤k(x) ,where U∈ℝ^n represents the parameter matrix to be learned. The superscript for camera id is omitted for simplicity.Conceptually, Eqn. (<ref>) is similar to the linear case Eqn. (<ref>)if we consider k(x) as a feature representation of x. Hence, by followingEqn. (<ref>), the kernelized version of our feature augmentation can be represented as:[ k̃^a_craft(x) =[ R; M ]k^a(x),k̃^b_craft(x) =[ M; R ]k^b(x), ]whereR∈ℝ^n × n and M∈ℝ^n × n are the same as in Eqn. (<ref>) but with a different dimension, k^a(x) and k^b(x) denote the sample kernel similarity vectors forcamera a and b, respectively. Analogously, both view discrepancy regularization (Eqn. (<ref>))and model optimization (Eqn. (<ref>)) can be performed similarly as in the linear case.§.§ Extension to More than Two Camera Views There often exist multiple (more than two) cameras deployed across a typical surveillance network. Suppose there are J(>2) non-overlapping camera views. Therefore, person re-id across the whole network is realistically critical, but joint quantification on person association across multiple camera views is largely under-studied in the current literature <cit.>.Compared to camera pair based re-id above, this is a more challenging situation due to: (1) intrinsic difference between distinct camera pairs in viewing conditions which makes the general cross-camera feature mapping more complex and difficult to learn; (2) quantifying simultaneously the correlations ofmultiple camera pairs is non-trivial in both formulation and computation.In this work, we propose to address this challenge by jointly learning adaptive view-specific re-id models for all cameras in a unified fashion.Specifically, we generalize our camera correlation aware feature augmentation (Eqn. (<ref>)) into multiple (J) camera cases as:[x̃^ϕ_craft = [M_i,1,M_i,2,⋯,M_i,i-1_#: i-1, R_i,; [0.5cm] M_i,i+1,M_i,i+2,⋯,M_i,J_#:J-i]^⊤x^ϕ , ]withM_i,j = ω_i,j/ϖ_iI_d × d,where ω_i,j denotes the correlation between camera i and j, estimated by Eqn. (<ref>).Similar to Eqn. (<ref>), we design [ R_i = 2 - 1/J - 1∑_j≠ iω_i,j/ϖ_i ],with[ ϖ_i = √((2 - 1/J-1∑_j≠ iω_i,j)^2 + ∑_j ≠ iω_i,j^2) ]. Similarly, we extend the CVD regularizationEqn. (<ref>) to:γ_cvd = ∑_i,j ∈{1,2,…,J} andi ≠ j||W^i - W^j||^2 .Thus, the matrix C of γ (Eqn. (<ref>)) is expanded as: C = ( [I -β'I -β'I⋯ -β'I; -β'II -β'I⋯ -β'I;⋮⋮⋮⋮; -β'I -β'I -β'I⋯I;]),where β'=β/J-1. The following view discrepancy regularized transformation and model optimization can be carried out in the same way as in Sec. <ref>. Clearly, re-id between two cameras is a special case of the extended model when J = 2.Therefore,our extended CRAFT method allows to consider and model the intrinsic correlation between camera views in the entire network.§.§ Person Re-identification by CRAFT Once a discriminative model (W or U) is learned from the training data using the proposed method, we can deploy it for person re-id. Particularly, first, we perform feature augmentation to transform all original features {x} to the CRAFT space {x̃_craft} (Eqn. (<ref>)). Second, we match a given probe person x̃^p_craft from one camera against a set of gallery people {x̃_craft,i^g} from another cameraby computing the Euclidean distance {dist(x̃^p_craft, x̃_craft,i^g)} in the prediction space (induced by Eqn. (<ref>) or (<ref>)). Finally, the gallery people are sorted in ascendant order of their assigned distancesto generate the ranking list. 
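Returning to the J-camera extension above, the generalized augmentation can be sketched as follows (assuming numpy; Omega denotes a hypothetical J × J matrix holding the pairwise correlations ω_{i,j} estimated as before):

```python
import numpy as np

def multi_view_augment(x, i, Omega, d):
    """Augment a d-dim feature x observed in camera i of a J-camera network."""
    J = Omega.shape[0]
    others = [j for j in range(J) if j != i]
    r_i = 2.0 - np.mean([Omega[i, j] for j in others])      # 2 - (1/(J-1)) sum omega_ij
    varpi = np.sqrt(r_i ** 2 + sum(Omega[i, j] ** 2 for j in others))
    blocks = [(r_i / varpi) * np.eye(d) if j == i           # R_i block
              else (Omega[i, j] / varpi) * np.eye(d)        # M_{i,j} blocks
              for j in range(J)]
    return np.vstack(blocks) @ x                            # (J*d)-dim augmented feature
```

For J = 2 this reduces to the two-camera construction, since then r_i = 2 − ω and ϖ_i = √((2 − ω)² + ω²).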
Ideally, the true match(es) can be found among a few top ranks. The pipeline of our proposed re-id approach is depicted in Figure <ref>.§ EXPERIMENTS§.§ Datasets and Evaluation SettingsDatasets.We extensively evaluated the proposed approach on five person re-id benchmarks: VIPeR <cit.>, CUHK01 <cit.>, CUHK03 <cit.>, QMUL GRID <cit.>, and Market-1501 <cit.>. All datasets are very challenging due to unknown large cross-camera divergence in viewing conditions, e.g., illumination, viewpoint, occlusion and background clutter (Figure <ref>).The VIPeR dataset consists of 1264 images from 632 persons observed from two surveillance cameras with various viewpoints and background settings. As these images are of low spatial resolution, it is very difficult to extract reliable appearance features(Figure <ref>(a)). The CUHK01 dataset consists of 971 people observed from two outdoor cameras. Each person has two samples per camera view. Compared with VIPeR, this dataset has higher spatial resolution and thus more appearance information are preserved in images (Figure <ref>(b)).The CUHK03 dataset consists of 13164 images from 1360 people collectedfrom six non-overlapping cameras.In evaluation, we used the automatically detected person imageswhich represent a more realistic yet challenging deployment scenario, e.g., due to more severe misalignment caused by imperfect detection(Figure <ref>(c)). The QMUL GRID dataset consists of 250 people image pairsfrom eight different camera views in a busy underground station.Unlike all the three datasets above, there are 775 extra identities or imposters. All images of this dataset are featured with lower spatial resolution and more drastic illumination variation(Figure <ref>(d)). The Market-1501 dataset contains person images collected in front of a campus supermarket at Tsinghua University. A total of six cameras were used, with five high-resolution and one low-resolution.This dataset consists of 32,668 annotated bounding boxes of 1501 people (Figure <ref>(e)).Evaluation protocol.For all datasets, we followed the standard evaluation settings for performing a fair comparison with existing methods as below:(I) On the VIPeR, CUHK01 and QMUL GRID benchmarks, we split randomly the whole people into two halves: one for training and one for testing. The cumulative matching characteristic (CMC) curve was utilized to measure the performance of the compared methods. As CUHK01 is a multi-shot (e.g., multiple images per person per camera view) dataset, we computed the final matching distance between two people by averaging corresponding cross-view image pairs. We repeated the experiments for 10 trials and reported the average results.(II) On CUHK03, we followed the standard protocol <cit.> – repeating 20 times of random 1260/100 people splits for model training/test andcomparing the averaged matching results. (III) On Market-1501, we utilized the standard training (750) and testing (751) people split provided by the authors of <cit.>. Apart from CMC, we also used other two performance metrics: (1) Rank-1 accuracy, and (2) mean Average Precision (mAP), i.e. first computing the area under the Precision-Recall curve for each probe,then calculating the mean of Average Precision over all probes. §.§ Evaluating Our Proposed Re-Id ApproachWe evaluated and analyzed the proposed person re-id approach in these aspects: (1) Effect of our CRAFT framework(Eqns. (<ref>) and (<ref>)); (2) Comparison between CRAFT and domain adaptation; (3) Effect of our CVD regularization γ_cvd (Eqn. 
(<ref>));(4) Generality of CRAFT instantialization;(5) Effect of our HIPHOP feature; (6) Complementary of HIPHOP on existing popular features.Effect of our CRAFT framework.We evaluated the proposed CRAFT method by comparing with (a) baseline feature augmentation (BaseFeatAug) <cit.>, (b) zero padding (ZeroPad) <cit.>, and (c) original features (OriFeat). Their kernelized versions using the Bhattacharyya kernel function were also evaluated. For fair comparison, our CRAFT-MFA method is utilized for re-id model learningin all compared methods.Table <ref> shows the re-id results. It is evident that our CRAFT approach outperformed consistently each baseline method on all the four re-id datasets, either using kernel or not. For example, CRAFT surpasses OriFeat, ZeroPad and BaseFeatAug by 4.5% / 5.7% /15.1% / 2.5% / 4.4%, 10.3% / 3.6% / 2.5% / 29.7% / 18.2%, 2.3% / 11.0% / 0.2% / 2.6% / 5.3% at rank-1 on VIPeR/CUHK01/CUHK03/Market-1501/QMUL GRID, respectively. Similar improvements were observed in the kernelized case. It is also found that, without feature augmentation, OriFeat produced reasonably good person matching accuracies,whilst both ZeroPad and BaseFeatAug may deteriorate the re-id performance. This plausible reasons are: (1) The small training data size may lead to some degree of model overfitting given two or three times as many parameters as OriFeat; and (2) Ignoring camera correlation can result in sub-optimal discrimination models.The re-id performance can be further boosted using the kernel trickin most cases except QMUL GRID. This exception may be due to the per-camera image imbalance problem on this dataset, e.g., 25 images from the 6^th camera versus 513 images from the 5^th camera.In the following, we used the kernel version of all methods unless otherwise stated.Comparison between CRAFT and domain adaptation.We compared our CRAFT with two representative domain adaptation models: TCA <cit.> and TFLDA <cit.>. It is evident from Table <ref> that the proposed CRAFT always surpasses TCA and TFLDA with a clear margin in the re-id performance on all datasets. This is because: (1) TCA is not discriminant and thus yielded much poor accuracies; and (2) both TCA and TFLDA assume that the target domain shares the same class labels as the source domain, which however is not valid in re-id applications.This suggests the advantage and superiority of our cross-view adaptive modelling overconventional domain adaptation applied to person re-id. Effect of our CVD regularization. For evaluating the exact impact of the proposed regularization γ_cvd (Eqns. (<ref>) and (<ref>)) on model generalization, we compared the re-id performance of our full model with a stripped down variant without γ_cvd during model optimization, called “CRAFT(no γ_cvd)”. The results in Table <ref> demonstrate the usefulness of incorporating γ_cvd for re-id.This justifies the effectiveness of our CVD regularization in controlling the correlation degree between view-specific sub-models.Generality of CRAFT instantialization.We evaluated the generality of CRAFT instantialization by integrating different supervised learning models. Five popular state-of-the-art methods were considered: (1) MFA <cit.>, (2) LDA <cit.>, (3) KISSME <cit.>, (4) LFDA <cit.>, (5) XQDA <cit.>. Our kernel feature was adopted. The results are given in Table <ref>. It is evident that our CRAFT framework is general for incorporating different existing distance metric learning algorithms. 
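For completeness, the CMC and mAP measures used throughout these comparisons (see the evaluation protocol above) can be computed as in this simplified sketch (assuming numpy; single-shot gallery, every probe assumed to have at least one true gallery match, and no junk-image or same-camera filtering):

```python
import numpy as np

def cmc_and_map(dist, probe_ids, gallery_ids):
    """dist: n_probe x n_gallery distance matrix.  Returns (CMC curve, mAP)."""
    n_p, n_g = dist.shape
    cmc = np.zeros(n_g)
    aps = []
    for p in range(n_p):
        order = np.argsort(dist[p])                     # ascending distance
        matches = (gallery_ids[order] == probe_ids[p])
        cmc[np.flatnonzero(matches)[0]:] += 1           # hit at this rank and beyond
        hits = np.cumsum(matches)
        precision = hits / (np.arange(n_g) + 1.0)       # precision at each rank
        aps.append((precision * matches).sum() / matches.sum())
    return cmc / n_p, float(np.mean(aps))
```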
Specifically, we find that CRAFT achieved better re-id results than other feature augmentation competitors with any of these learning algorithms.The observation also suggests the consistent superiority and large flexibility of the proposed approach over alternatives in learning discriminative re-id models.We utilize CRAFT-MFAfor the following comparisons with existing popular person re-id methods.Effect of our HIPHOP feature. We compared the proposed HIPHOP feature with several extensively adopted re-id representations: (1) ELF18 <cit.>, (2) ColorLBP <cit.>, (3) WHOS <cit.>, and (4) LOMO <cit.>.The Bhattacharyya kernel was utilized for all compared visual features.The re-id results are reported in Table <ref>. It is evident that our HIPHOP feature is overall much more effective than the compared alternatives for person re-id. For example, using HIPHOP improved the rank-1 rate from 42.3% by LOMO to 50.3% on VIPeR, from 66.0% by WHOS to 74.5% on CUHK01, from 77.9% by LOMO to 84.3% on CUHK03,and from 52.2% by WHOS to 68.7% on Market-1501. This suggests the great advantage and effectiveness ofmore view invariantappearance representation learned from diverse albeit generic auxiliary data.A few reasons are: (1) By exploiting more complete and diverse filters (Figure <ref>(b)) than ELF18 (Figure <ref>(a)),more view change tolerant person appearance details shall be well encoded for discriminating between visually alike people;(2) By selecting salient patterns, our HOP descriptor possesses some features more tolerant to view variety, which is critical to person re-id due to the potential cross-view pose and viewing condition variation. This is clearly illustrated in Figure <ref>(c-d): (i) Cross-view images of the same person have similar HOP patterns; (ii) Visually similar people (P-2, P-3) share more commonness than visually distinct people (P-1, P-2) in HOP histogram. Besides, we evaluated the discrimination power of features from different conv layers in Table <ref>. More specifically, it is shown in Table <ref> that the 1^st/2^nd conv layer based features (i.e., HIP/HOP, being low-level) are shown more effective than those from the 6^th/7^th conv layer (i.e., fc6/fc7, being abstract). This confirms early findings <cit.> that higher layers are more task-specific, thus less generic to distinct tasks. Recall that the ordinal feature HOP (Eqn. (<ref>)) is computed based on top-κ activations of the feature maps. We further examined the effect of κ on the feature quality on the VIPeR dataset. For obtaining a detailed analysis, we tested the HOP feature extracted from the 1^st/2^nd conv layer and both layers, separately. Figure <ref> shows the impact of setting different κ values ranging from 5 to 50 in terms of rank-1 recognition rate.The observations suggest clearly that κ is rather insensitive with a wide satisfiable range. We set κ=20 in all other evaluations.Complementary of HIPHOP on existing re-id features.Considering the different design nature of our person representation as compared to previous re-id features, we evaluated the complementary effect of our HIPHOP with ELF18, ColorLBP, WHOS and LOMO (see the last four rows in Table <ref>). It is found that: (1) After combining our HIPHOP, all these existing features can produce much better re-id performance. 
This validates the favourable complementary role of our HIPHOP feature for existing ones.(2) Interestingly, these four fusions produce rather similar and competitive re-id performance, as opposite to the large differences in the results by each individual existing feature (see the first four rows in Table <ref>). This justifies the general complementary importance of the proposed feature for different existing re-id features. Next, we utilize “HIPHOP+LOMO” as the default multi-type feature fusion in the proposed CRAFT-MFA method due to its slight superiority over other combinations. This is termed as CRAFT-MFA(+LOMO) in the remaining evaluations.§.§ Comparing State-of-the-Art Re-Id MethodsWe compared extensively our method CRAFT-MFA with state-of-the-art person re-id approaches. In this evaluation, we considered two scenarios: (1) Person re-id between two cameras; (2) Person re-id across multiple (>2) cameras. Finally, we compared the person re-id performance by different methods using multiple feature types. We utilized the best results reported in the corresponding papers for fair comparison across all compared models.(I) Person re-id between two cameras.This evaluation was carried out on VIPeR <cit.> and CUHK01 <cit.> datasets, each with a pair of camera views. We compared our CRAFT-MFA with both metric learning and recent deep learning based methods.Comparisons on VIPeR -We compared with 26 state-of-the-art re-id methods on VIPeR. The performance comparison is reported in Table <ref>.It is evident that our CRAFT-MFA method surpassed all competitors over top ranks clearly. For instance, the rank-1 rate is improved notably from 47.8% (by the 2^nd best method TCP <cit.>) to 50.3%. This shows the advantages and effectiveness of our approach in a broad context.Comparisons on CUHK01 -We further conducted the comparative evaluation on the CUHK01 dataset with the results shown in Table <ref>. It is evident that our CRAFT-MFA method generated the highest person re-id accuracies among all the compared methods. Specifically, the best alternative (DGD) was outperformed notably by our method with a margin of 7.9%(=74.5%-66.6%) rank-1 rate. Relative to VIPeR, CRAFT-MFA obtained more performance increase on CUHK01, e.g., improving rank-1 rate by 2.5%(=50.3%-47.8%) on VIPeR versus 7.9% on CUHK01. This is as expected, because person images from VIPeR are much more challenging for re-id due to poorer imaging quality, more complex illumination patterns, and more severe background clutter (see Figure <ref>(a-b)). This also validates the generality and capability of the proposed method in coping with various degrees of person re-id challenges when learning view-specific and view-generic discriminative re-id information.(II) Person re-id across more than two cameras.Real-world person re-id applications often involve a surveillance network with many cameras. It is therefore critical to evaluate the performance of associating people across a whole camera network, although the joint learning and quantification is largely under-studied in the current literature. In this multi-camera setting, we exploit the generalized CRAFT model (Eqn. (<ref>)) to learn an adaptive sub-model for each camera view in a principled fashion. 
This evaluation was performed on three multi-camera re-id datasets: CUHK03 <cit.> (with 6 cameras in a university campus), QMUL GRID <cit.> (with 8 cameras in an underground station), and Market-1501 <cit.> (with 6 cameras near a university supermarket).Comparisons on CUHK03 -We evaluated our approach by comparing the state-of-the-arts on CUHK03 <cit.>. This evaluation was conducted using detected images. It is shown in Table <ref> that our method significantly outperformed all competitors, e.g., the top-2 Gated-SCNN/DGD by 16.2%/9.0% at rank-1, respectively.Comparisons on QMUL GRID -The re-id results of different methods on QMUL GRID are presented in Table <ref>. It is found that the proposed CRAFT-MFA method produced the most accurate results among all competitors, similar to the observation on CUHK03 above.In particular, our CRAFT-MFA method outperformed clearly the 2^nd best model LSSCDL, e.g., with similar top-1 matching rate but boosting rank-10 matching from 51.3% to 61.8%.This justifies the superiority of our CRAFT model and person appearance feature in a more challenging realistic scenario.Comparisons on Market-1501 -We compared the performance on Market-1501with these methods: Bag-of-Words (BoW) based baselines <cit.>, a Discriminative Null Space (DNS) learning based model <cit.>, and four metric learning methods KISSME <cit.>, MFA <cit.>, kLFDA <cit.>, XQDA <cit.>, S-LSTM <cit.>, Gated-SCNN <cit.>.We evaluated both the single-query and multi-query (using multiple probe/query images per person during the deployment stage) settings. It is evident from Table <ref> that our CRAFT-MFA method outperformed all competitors under both single-query and multi-query settings. By addressing the small sample size problem, the DNS model achieves more discriminative models than other metric learning algorithms. However, all these methods focus on learning view-generic discriminative information whilst overlooking largely useful view-specific knowledge.Our CRAFT-MFA method effectively overcome this limitation by encoding camera correlation into an extended feature space for jointly learning both view-generic and view-specific discriminative information. Additionally, our method benefits from more view change tolerantappearance patterns deeply learned from general auxiliary data source for obtaining more effective person description, and surpassed recent deep methods Gated-SCNN and S-LSTM. All these evidences validate consistently the effectiveness and capability of the proposed person visual features and cross-view re-id model learning approach in multiple application scenarios. (III) Person re-id with multiple feature representations.Typically, person re-id can benefit from using multiple different types of appearance features owing to their complementary effect (see Table <ref>). Here, we compared our CRAFT-MFA(+LOMO) method with competitive re-id models using multiple features. This comparison is given in Table <ref>. It is found that our CRAFT-MFA(+LOMO) method notably outperformed all compared methods utilizing two or more types of appearance features, particularly on CUHK01, CUHK03, and Market-1501. Along with the extensive comparison with single feature based methods above, these observations further validate the superiority and effectiveness of our proposed method under varying feature representation cases. §.§ DiscussionHere, we discuss the performance of HIPHOPin case that deep model fine-tuning on the available labelled target data is performed in prior to feature extraction. 
Our experimental results suggest that this network adaptation can only bring marginal re-id accuracy gain on our HIPHOP feature, e.g., <1% rank-1 increase for all datasets except Market-1501 (1.4%), although the improvement on using fc6 and fc7 feature maps (which are much inferior to HIPHOP either fine-tuned or not, see the supplementary file for more details) is clearer. This validates empirically our argument that lower layer conv filters can be largely task/domain generic and expressive, thus confirming the similar earlier finding <cit.> particularly in person re-id context. This outcome is reasonable considering that the amount of training person images could be still insufficient (e.g., 632 on VIPeR, 1940 on CUHK01, 12197 on CUHK03, 250 on QMUL GRID, 12936 on Market-1501) for producing clear benefit, especially for lower conv layers which tend to be less target task specific <cit.>. Critically, model fine-tuning not only introduces the cost of extra network building complexity but also unfavourably renders our feature extraction method domain-specific – relying on a sufficiently large set of labelled training data in the target domain. Consequently, we remain our domain-generic (i.e., independent of target labelled training data and deployable universally) re-id feature extraction method as our main person image representation generation way.§ CONCLUSION We have presented a new framework called CRAFT for person re-identification. CRAFT is formed based on the camera correlation aware feature augmentation. It is capable of jointly learning both view-generic and view-specific discriminative information for person re-id in a principled manner. Specifically, by creating automatically a camera correlation aware feature space, view-generic learning algorithms are allowed to induce view-specific sub-models which simultaneously take into account the shared view-generic discriminative information so that more reliable re-id models can be produced. The correlation between per-camera sub-models can be further constrained by our camera view discrepancy regularization. Beyond the common person re-id between two cameras, we further extend our CRAFT framework to cope with re-id jointly across a whole network of more than two cameras. In addition, we develop a general feature extraction method allowing to construct person appearance representations with desired view-invariance property by uniquely exploiting less relevant auxiliary object images other than the target re-id training data. That is, our feature extraction method is universally scalable and deployable regardless of the accessibility of labelled target training data. Extensive comparisons against a wide range of state-of-the-art methods have validated the superiority and advantages of our proposed approach under both camera pair and camera network based re-id scenarios on five challenging person re-id benchmarks.§ ACKNOWLEDGEMENTThis work was supported partially by the National Key Research and Development Program of China (2016YFB1001002),NSFC (61522115, 61573387, 61661130157), and the RS-Newton Advanced Fellowship (NA150459), and Guangdong Natural Science Funds for Distinguished Young Scholar under Grant S2013050014265. This work was finished when Mr. Chen was a master student at Sun Yat-sen University. The corresponding author and principal investigator for this paper is Wei-Shi Zheng.IEEEtran [ < g r a p h i c s > ]Ying-Cong Chen received his BEng and Master degree from Sun Yat-sen University in 2013 and 2016 respectively. 
Now he is a PhD student in the Chinese University of Hong Kong. His research interest includes computer vision and machine learning.[ < g r a p h i c s > ]Xiatian Zhu received his B.Eng. and M.Eng. from University of Electronic Science and Technology of China, and his Ph.D. (2015) from Queen Mary University of London.He won The Sullivan Doctoral Thesis Prize (2016), an annual award representing the best doctoral thesissubmitted to a UK University in the field of computer or natural vision.His research interests include computer vision, pattern recognition and machine learning.[ < g r a p h i c s > ] Wei-Shi Zheng is now a Professor at Sun Yat-sen University. He has now published more than 90 papers, including more than 60 publications in main journals (TPAMI,TIP,PR) and top conferences (ICCV, CVPR,IJCAI). His research interests include person/object association and activity understanding in visual surveillance. He has joined Microsoft Research Asia Young Faculty Visiting Programme. He is a recipient of Excellent Young Scientists Fund of the NSFC, and a recipient of Royal Society-Newton Advanced Fellowship. Homepage: http://isee.sysu.edu.cn/%7ezhwshi/[ < g r a p h i c s > ]Jian-Huang Lai is Professor of School of Data and Computer Science in Sun Yat-sen university. His current research interests are in the areas of digital image processing, pattern recognition, multimedia communication, wavelet and its applications. He has published over 100 scientific papers in the international journals and conferences on image processing and pattern recognition e.g., IEEE TPAMI, IEEE TNN, IEEE TIP, IEEE TSMC (Part B), Pattern Recognition, ICCV, CVPR and ICDM. | http://arxiv.org/abs/1703.08837v1 | {
"authors": [
"Ying-Cong Chen",
"Xiatian Zhu",
"Wei-Shi Zheng",
"Jian-Huang Lai"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170326161848",
"title": "Person Re-Identification by Camera Correlation Aware Feature Augmentation"
} |
In this article we investigate the pressure function and affinity dimension for iterated function systems associated to the “box-like” self-affine fractals investigated by D.-J. Feng, Y. Wang and J.M. Fraser. Combining previous results of V. Yu. Protasov, A. Käenmäki and the author we obtain an explicit formula for the pressure function which makes it straightforward to compute the affinity dimension of box-like self-affine sets. We also prove a variant of this formula which allows the computation of a modified singular value pressure function defined by J.M. Fraser. We give some explicit examples where the Hausdorff and packing dimensions of a box-like self-affine fractal may be easily computed. A molecular-dynamics approach for studying the non-equilibrium behavior of x-ray-heated solid-density matter Ian D. Morris December 30, 2023 ============================================================================================================§ INTRODUCTION AND STATEMENT OF RESULTSAn iterated function system or IFS is defined to be any finite collection T_1,…,T_N of contractions of some fixed complete, nonempty metric space. It is a classical result of J. E. Hutchinson <cit.> that for every iterated function system T_1,…,T_N there exists a unique compact nonempty set X which satisfies X=⋃_i=1^N T_iX. We call X the attractor of the IFS T_1,…,T_N. In this article we shall be interested in the case where each T_i is an affine contraction of ℝ^d, in which case the attractor X is called a self-affine set.The dimension theory of self-affine sets has been the subject of ongoing research investigation since the 1980s (see e.g. <cit.>) and has enjoyed an intense burst of activity in recent years (we note for example <cit.>). In many cases the value of the Hausdorff, packing or box dimension of a self-affine set is determined by the values of a pressure functional, which is itself defined in terms of limits of sequences of matrix products arising from the corresponding affine transformations T_i. The calculation of the dimension values defined by those formulas is in general a nontrivial problem: for example, it was shown only in 2014 that the affinity dimension, a dimension formula first defined by K. Falconer in the 1988 article <cit.>, depends continuously on the affine transformations T_i which are used to define it (see <cit.>). It was also shown only in 2016 that the affinity dimension can in principle be computed to any prescribed accuracy in finitely many computational steps (see <cit.>). The purpose of this article is to show that in the special case of the “box-like” affine IFS studied by D.-J. Feng, Y. Wang and J.M. Fraser in <cit.> the affinity dimension admits a simple description which renders it essentially trivial to compute. For each d ≥ 1 let M_d(ℝ) denote the vector space of all d × d real matrices. We recall that the singular values of a matrix A ∈ M_d(ℝ), denoted α_1(A),…,α_d(A), are defined to be the non-negative square roots of the eigenvalues of the positive semidefinite matrix A^TA, listed in decreasing order with repetition in the case of multiple eigenvalues. We note that α_1(A)=A and ∏_i=1^d α_i(A)=| A| for all A ∈ M_d(ℝ), and α_d(A)=A^-1^-1 whenever A ∈ M_d(ℝ) is invertible. We will say that A∈ M_d(ℝ) is a generalised permutation matrix if it has exactly one nonzero entry in every row and in every column, and denote the group of all d × d generalised permutation matrices by P_d(ℝ). 
If A ∈ P_d(ℝ) then it is easy to see that the singular values of A are simply the absolute values of the d nonzero entries of A, listed in decreasing order. Following <cit.>, for each real number s ≥ 0 we define the singular value function φ^sM_d(ℝ) → [0,+∞) byφ^s(A):={[ α_1(A)α_2(A)⋯α_⌊ s⌋(A) α_⌈ s ⌉(A)^s-⌊ s ⌋ if 0≤ s≤ d,;| A|^s/d if s ≥ d. ].For each fixed value of s ≥ 0 the singular value function is upper semi-continuous on M_d(ℝ), is continuous when restricted to the set of invertible matrices, and satisfies φ^s(AB)≤φ^s(A)φ^s(B) for all A,B ∈ M_d(ℝ). If 𝖠=(A_1,…,A_N) ∈ M_d(ℝ)^N and s>0 are given, we define the pressure of (A_1,…,A_N) to be the limitP(𝖠,s):= lim_n →∞1/nlog∑_i_1,…,i_n=1^N φ^s(A_i_n⋯ A_i_1) ∈ [-∞,+∞).The pressure P(𝖠,s) exists by subadditivity, and is a real number whenever A_1,…,A_N are all invertible by virtue of the elementary inequality φ^s(B)≥α_d(B)^s which holds for every B ∈ M_d(ℝ). We refer the reader to the article <cit.> for proofs of these statements.If T_1,…,T_N ℝ^d →ℝ^d are affine contractions then we have T_ix=A_ix+v_i for all x ∈ℝ^d where A_1,…,A_N ∈ M_d(ℝ) and v_1,…,v_N ∈ℝ^d. Since the transformations T_i are contractions we have A_i<1 for every i. The function s ↦ P(𝖠,s) associated to 𝖠:=(A_1,…,A_N) is hence easily seen to be strictly decreasing, and it is in fact continuous. The unique value of s≥ 0 such that P(𝖠,s)=0 is called the affinity dimension of (T_1,…,T_N) and has been investigated for its role in the dimension theory of self-affine fractals since its introduction by Falconer in 1988; see <cit.> as well as the original article <cit.>. Following <cit.>, an affine IFS T_1,…,T_N acting on ℝ^d will be called box-like if we may write each T_i in the form T_ix=A_ix+v_i where A_i ∈ P_d(ℝ) for i=1,…,N. Let e_1,…,e_d denote the standard basis of ℝ^d; we observe that A ∈ M_d(ℝ) belongs to P_d(ℝ) if and only if there exist a_1,…,a_d ∈ℝ∖{0} and π{1,…,d}→{1,…,d} such that Ae_i = a_i e_π(i) for every i=1,…,d. Let us also write ρ(A) for the spectral radius of a matrix A ∈ M_d(ℝ).The contribution of this article is the following formula for the pressure of a box-like iterated function system:Let 𝖠=(A_1,…,A_N) ∈ P_d(ℝ)^N, let 0<s<d and define k:=⌊ s ⌋. For i=1,…,N let π_i {1,…,d}→{1,…,d} be the unique permutation and a^(i)_1,…,a^(i)_d the unique real numbers such that A_i e_j= a^(i)_j e_π_i(j) for every i=1,…,N and j=1,…,d.Let 𝔖 denote the set of all pairs (S,ℓ ) where S ⊂{1,…,d} has k elements and where ℓ∈{1,…,d}∖ S, and observe that 𝔖 has exactly (d-k)dk elements. Let us relabel the basis for ℝ^(d-k)dk as {e_S,ℓ (S,ℓ )∈𝔖}. For each i=1,…,N let us define a matrix Â_i ∈ P_(d-k)dk(ℝ) byÂ_i e_S,ℓ= (∏_j ∈ S|a^(i)_j|)|a^(i)_ℓ|^s-k e_π_i(S),π_i(ℓ).ThenP(𝖠,s)=logρ(∑_i=1^N Â_i). Theorem <ref> arises from the combination of two prior results: first, a formula for a certain pressure-like function for positive matrices given by V. Yu. Protasov in <cit.>; and second, a simplified expression for the function φ^s(A) which is valid for all A ∈ P_d(ℝ) and which was given by A. Käenmäki and the author in <cit.>.Whilst Theorem <ref> can be used to quickly compute the affinity dimension of a box-like IFS of any dimension, in the two-dimensional case it admits a particularly straightforward articulation:Let (T_1,…,T_N) be an affine iterated function system acting on ℝ^2, and letv_1,…,v_N ∈ℝ^2 and A_1,…,A_N ∈ M_2(ℝ) such that T_ix=A_ix+v_i for every x ∈ℝ^2. 
Suppose that for some integer k ∈{0,…,N} we haveA_i ={[ [ a_i 0; 0 d_i ] when 1 ≤ i ≤ k; [ 0 b_i; c_i 0 ] when k+1 ≤ i ≤ N ].where each a_i,b_i,c_i,d_i is nonzero. Then the affinity dimension of (A_1,…,A_N) is the unique real number s>0 such that one of the following holds: either the matrix[ ∑_i=1^k |a_i|^s ∑_i=k+1^N |b_i|^s; ∑_i=k+1^N |c_i|^s ∑_i=1^k |d_i|^s ]has spectral radius 1, and 0<s ≤ 1; or the matrix[ ∑_i=1^k |a_i|.|d_i|^s-1 ∑_i=k+1^N |b_i|.|c_i|^s-1;∑_i=k+1^N |b_i|^s-1|c_i|∑_i=1^k |a_i|^s-1|d_i| ]has spectral radius 1 and 1 ≤ s ≤ 2; or ∑_i=1^N | A_i|^s/2=1 and s ≥ 2.The proof of Theorem <ref> and Corollary <ref> are given in the following section. In 3 we show how these ideas can be adapted to a modified pressure functional considered by J.M. Fraser in <cit.>, and at the end of the paper we present some examples. § PROOF OF THEOREM <REF> AND COROLLARY <REF> The following result is a special case of a theorem of V. Yu. Protasov <cit.>; some related results may be found in <cit.>. Since the proof is both short and simple we include it.Let A_1,…,A_N∈ M_d(ℝ) be non-negative matrices. Thenlim_n →∞(∑_i_1,…,i_n=1^N A_i_n⋯ A_i_1)^1/n=ρ(∑_i=1^N A_i).Let |B| denote the sum of the absolute values of the entries of B∈ M_d(ℝ) and note that if B_1,B_2 are non-negative matrices then |B_1+B_2|=|B_1|+|B_2|. Clearly |·| is a norm on M_d(ℝ) and in particular is equivalent to the Euclidean operator norm ·. Using Gelfand's formula and non-negativity we may calculateρ(∑_i=1^N A_i)=lim_n →∞(∑_i=1^N A_i)^n^1/n=lim_n →∞|(∑_i=1^N A_i)^n|^1/n=lim_n →∞| ∑_i_1,…,i_n=1^N A_i_n⋯ A_i_1|^1/n=lim_n →∞(∑_i_1,…,i_n=1^N |A_i_n⋯ A_i_1|)^1/n=lim_n →∞(∑_i_1,…,i_n=1^N A_i_n⋯ A_i_1)^1/nas required.The following result was introduced in <cit.>, where it was used to investigate the equilibrium states of the pressure functional for box-like affine iterated function systems. We again include the proof since it is barely longer than the statement.Let d ≥ 2, let 0<s<d and define k:=⌊ s ⌋. Let 𝔖 denote the set of all pairs (S,ℓ ) where S ⊂{1,…,d} has k elements and where ℓ∈{1,…,d}∖ S, and observe that 𝔖 has exactly (d-k)dk elements. Let us relabel the basis for ℝ^(d-k)dk as {e_S,ℓ (S,ℓ )∈𝔖}.Define a function 𝔭_sP_d(ℝ) → P_(d-k)dk(ℝ) as follows. Given A ∈ P_d(ℝ), let π{1,…,d}→{1,…,d} be the unique permutation and a_1,…,a_d the unique nonzero real numbers such that Ae_i=a_ie_π(i) for every i=1,…,d. Let 𝔭_s(A) ∈ P_(d-k)dk(ℝ) be the unique matrix such that𝔭_s(A) e_S,ℓ= (∏_j ∈ S|a_j|)|a_ℓ|^s-k e_π(S),π(ℓ)for every (S,ℓ) ∈𝔖. Then 𝔭_s P_d(ℝ) → P_(d-k)dk(ℝ) is a group homomorphism, and 𝔭_s(A)=φ^s(A) for every A ∈ P_d(ℝ). Let us first show that 𝔭_s(A)=φ^s(A) for A ∈ P_d(ℝ). Let Ae_i=a_ie_π(i) for each i=1,…,d. We note that 𝔭_s(A) is the largest of the absolute values of the entries of the generalised permutation matrix 𝔭_s(A), and hence𝔭_s(A) = max_(S,ℓ) ∈𝔖(∏_j ∈ S|a_j|)|a_ℓ|^s-k= max_(S,ℓ)∈𝔖(∏_j∈ Sα_j(A))α_ℓ(A)^s-k = α_1(A)⋯α_k(A)α_k+1(A)^s-k=φ^s(A)as claimed. Now suppose that A,B ∈ P_d(ℝ) such that Ae_i=a_ie_π_A(i) and Be_i=b_ie_π_B(i) for each i=1,…,d. Clearly ABe_i=a_π_B(i)b_i e_(π_A ∘π_B)(i) for every i=1,…,d. 
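As a numerical aside before the remaining proofs are completed: the corollary already yields a direct recipe for two-dimensional box-like systems, since P(𝖠,s) is continuous and strictly decreasing in s. A sketch (assuming numpy, with names ours; it presumes the affinity dimension lies in (0,2], which covers the examples given later):

```python
import numpy as np

def pressure_2d(s, diag, antidiag):
    """exp(P(A,s)) via the corollary.  diag = [(a_i, d_i)] for the diagonal
    maps, antidiag = [(b_i, c_i)] for the antidiagonal ones."""
    if s <= 1.0:
        B = np.array([[sum(abs(a) ** s for a, d in diag),
                       sum(abs(b) ** s for b, c in antidiag)],
                      [sum(abs(c) ** s for b, c in antidiag),
                       sum(abs(d) ** s for a, d in diag)]])
    elif s <= 2.0:
        B = np.array([[sum(abs(a) * abs(d) ** (s - 1) for a, d in diag),
                       sum(abs(b) * abs(c) ** (s - 1) for b, c in antidiag)],
                      [sum(abs(b) ** (s - 1) * abs(c) for b, c in antidiag),
                       sum(abs(a) ** (s - 1) * abs(d) for a, d in diag)]])
    else:
        return sum(m ** (s / 2) for m in
                   [abs(a * d) for a, d in diag] + [abs(b * c) for b, c in antidiag])
    return max(abs(np.linalg.eigvals(B)))               # spectral radius

def affinity_dimension(diag, antidiag, lo=1e-9, hi=2.0, tol=1e-12):
    """Bisection for the unique zero of P(A,s), which is decreasing in s."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pressure_2d(mid, diag, antidiag) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For instance, affinity_dimension([(-13/27, 7/9)], [(13/27, 7/9)]) should reproduce the value s ≈ 1.43035… computed for Example 1 below.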
For every (S,ℓ) ∈𝔖 we have𝔭_s(B)e_S,ℓ =(∏_j ∈ S|b_j|)|b_ℓ|^s-k e_π_B(S),π_B(ℓ)and therefore𝔭_s(A)𝔭_s(B)e_S,ℓ =(∏_j ∈π_B(S)|a_j|)(∏_j ∈ S|b_j|)|a_π_B(ℓ)|^s-k|b_ℓ|^s-k e_(π_A ∘π_B)(S),(π_A ∘π_B)(ℓ)=(∏_j ∈ S|a_π_B(j) b_j|)|a_π_B(ℓ)|^s-k|b_ℓ|^s-k e_(π_A ∘π_B)(S),(π_A ∘π_B)(ℓ) = 𝔭_s(AB)e_S,ℓ.Since (S,ℓ) ∈𝔖 was arbitrary we see that 𝔭_s(AB)=𝔭_s(A)𝔭_s(B) as claimed.Theorem <ref> follows immediately from the two results above : given A_1,…,A_N∈ P_d(ℝ) and s ∈ (0,d) we havee^P(𝖠,s) =lim_n →∞(∑_i_1,…,i_n=1^N φ^s(A_i_n⋯ A_i_1))^1/n = lim_n →∞(∑_i_1,…,i_n=1^N 𝔭_s(A_i_n⋯ A_i_1))^1/n = lim_n →∞(∑_i_1,…,i_n=1^N 𝔭_s(A_i_n)⋯𝔭_s(A_i_1))^1/nby Proposition <ref>, and since the matrices 𝔭_s(A_i) have only non-negative entries,lim_n →∞(∑_i_1,…,i_n=1^N 𝔭_s(A_i_n)⋯𝔭_s(A_i_1))^1/n =ρ(∑_i=1^N 𝔭_s(A_i))by Proposition <ref>. The proof is complete.Let us now derive Corollary <ref>. Let A_1,…,A_N be as in the statement of that result. For each s ≥ 2 we haveP(𝖠,s) =lim_n →∞1/nlog∑_i_1,…,i_n=1^N φ^s(A_i_n⋯ A_i_1)=lim_n →∞1/nlog∑_i_1,…,i_n=1^N | (A_i_n⋯ A_i_1)|^s/2 =log∑_i=1^N | A_i|^s/2.Since each A_i is a contraction this quantity tends to -∞ as s →∞, and since P(𝖠,s) is continuous, strictly decreasing as a function of s, and satisfies P(𝖠,0)=log N ≥ 0 it follows that it has a unique zero, which is the affinity dimension. If the affinity dimension is given by s ≥ 2 then clearly it solves ∑_i=1^N | A_i|^s/2=1 and we are in the third of the three cases mentioned in the statement of the corollary.For every s ∈ (0,1) the construction of Theorem <ref> leads to the basis e_∅,1, e_∅,2 for ℝ^2, with respect to which the matrices Â_i take the formÂ_i ={[ [ |a_i|^s 0; 0 |d_i|^s ] when 1 ≤ i ≤ k; [ 0 |b_i|^s; |c_i|^s 0 ] when k+1 ≤ i ≤ N ].and therefore we havee^P(𝖠,s) = ρ([ ∑_i=1^k |a_i|^s ∑_i=k+1^N |b_i|^s; ∑_i=k+1^N |c_i|^s ∑_i=1^k |d_i|^s ])for all s in this range. It follows that if the affinity dimension is in the range 0<s<1 then it is the unique value for which the above spectral radius equals 1; and if the affinity dimension is not in this range, then the above spectral radius must exceed 1 for all s ∈ (0,1).For s ∈ [1,2), the construction of Theorem <ref> leads to the basis e_{1},2, e_{2},1 for ℝ^2, with respect to which the matrices Â_i take the formÂ_i ={[ [ |a_i|.|d_i|^s-1 0; 0|a_i|^s-1|d_i| ] when 1 ≤ i ≤ k; [ 0 |b_i|.|c_i|^s-1;|b_i|^s-1|c_i| 0 ] when k+1 ≤ i ≤ N ].We deduce that e^P(𝖠,s) =ρ([ ∑_i=1^k |a_i|.|d_i|^s-1 ∑_i=k+1^N |b_i|.|c_i|^s-1;∑_i=k+1^N |b_i|^s-1|c_i|∑_i=1^k |a_i|^s-1|d_i| ])for all s in this range. We observe that the expressions (<ref>) and (<ref>) coincide for s=1, and that (<ref>) evaluates to ∑_i=1^k |a_id_i|+∑_i=k+1^N |b_ic_i|=∑_i=1^N | A_i| when s=2. It follows that if the affinity dimension is in [1,2] then it is the unique value in that range for which the spectral radius (<ref>) equals 1. This completes the proof of the corollary.§ A MODIFIED PRESSURE FUNCTION The methods of this article can easily be adapted to consider certain modifications of the pressure functional discussed in 1. For example, let (T_1,…,T_N) be a box-like affine IFS acting on ℝ^2 and suppose that (T_1,…,T_N) satisfies the following Rectangular Open Set Condition: there exists a nonempty open rectangle R=(a,b)× (c,d) ⊂ℝ^2 such that the sets T_iR are disjoint open subsets of R. Suppose also that at least one of the transformations T_i maps horizontal lines to vertical lines and vice versa. 
Then the attractor X=⋃_i=1^N T_iX includes a (possibly reflected) image of itself rotated through 90^∘, which implies that the projection of X onto each of the two co-ordinate axes has the same box dimension t ∈ [0,1], say. By a theorem of J.M. Fraser <cit.>, in this case the packing dimension and box dimension of X are equal to the unique real number s ∈ [t,2t] such thatlim_n →∞(∑_i_1,…,i_n=1^N α_1(A_i_n⋯ A_i_1)^tα_2(A_i_n… A_i_1)^s-t)^1/n=1,where of course each A_i is the linear part of the associated affine transformation T_i. We note that if t=1 then s is simply the affinity dimension. The above limit can also be computed using the methods of this article:Let A_1,…,A_N ∈ M_2(ℝ), 0<t ≤ 1 and t ≤ s ≤ 2t. Suppose that for some integer k ∈{0,…,N} we haveA_i ={[ [ a_i 0; 0 d_i ] when 1 ≤ i ≤ k; [ 0 b_i; c_i 0 ] when k+1 ≤ i ≤ N ].where each a_i,b_i,c_i,d_i is nonzero. Then the limitlim_n →∞(∑_i_1,…,i_n=1^N α_1(A_i_n⋯ A_i_1)^tα_2(A_i_n… A_i_1)^s-t)^1/nis equal to the spectral radiusρ([ ∑_i=1^k |a_i|^t|d_i|^s-t ∑_i=k+1^N |b_i|^t|c_i|^s-t; ∑_i=k+1^N |b_i|^s-t|c_i|^t ∑_i=1^k |a_i|^s-t|d_i|^t ]).Let us define 𝔮_tP_2(ℝ) → P_2(ℝ) by𝔮_t([ a 0; 0 d ]):=[ |a|^t|d|^s-t0;0 |a|^s-t|d|^t ]and𝔮_t([ 0 b; c 0 ]):=[0 |b|^t|c|^s-t; |b|^s-t|c|^t0 ].It is easily checked that 𝔮_t(AB)=𝔮_t(A)𝔮_t(B) and 𝔮_t(A)=α_1(A)^t α_2(A)^s-t for every A,B ∈ P_2(ℝ), and thereforelim_n →∞(∑_i_1,…,i_n=1^N α_1(A_i_n⋯ A_i_1)^tα_2(A_i_n… A_i_1)^s-t)^1/n=lim_n →∞(∑_i_1,…,i_n=1^N 𝔮_t(A_i_n)⋯𝔮_t(A_i_1) )^1/n=ρ(∑_i=1^N 𝔮_t(A_i))similarly to the previous section.Our methods could also be extended to the computation of further variant pressure functionals considered in <cit.>, but we do not pursue this.§ EXAMPLESExample 1. Define two affine transformations T_1,T_2 ℝ^2 →ℝ^2 byT_1[ x; y ] :=[ -13/270;07/9 ][ x; y ]+[ 13/27; 2/9 ]T_2[ x; y ] :=[ 0 13/27; 7/9 0 ][ x; y ]+[ 14/27; 0 ]and denote the associated 2× 2 matrices by A_1 and A_2 respectively. For 0 <s ≤ 1 the matrix[ 13^s/27^s 13^s/27^s; 7^s/9^2 7^s/9^s ] clearly has determinant zero, and therefore has spectral radius equal to its trace 13^s/27^s+ 7^s/9^s≥13/27+7/9>1. We deduce that the affinity dimension of (T_1,T_2) must exceed one. On the other hand clearly | A_1|+| A_2|=218/243<1 and so the affinity dimension must be less than 2. It follows that the affinity dimension is the unique value of s such that [ 13/27(7/9)^s-1 13/27(7/9)^s-1; 7/9(13/27)^s-1 7/9(13/27)^s-1 ]has spectral radius 1. Since this matrix also has determinant zero we conclude that the affinity dimension solves 13/27(7/9)^s-1+7/9(13/27)^s-1=1 and is therefore equal tos≈ 1.43035 20226 23969 408121 44729 61299 96697 74324 72301 14759 …Let us show using <cit.> that the Hausdorff dimension of the attractor X of (T_1,T_2) has Hausdorff dimension equal to the affinity dimension of (T_1,T_2). According to that proposition this holds true if we can show that: * The entries of the matrices and vectors defining T_1 and T_2 are algebraic;* The IFS (T_1,T_2) satisfies the Strong Open Set Condition: there exists a nonempty open set U ⊂ℝ^2 such that T_1U, T_2U ⊆ U and T_1U ∩ T_2U=∅, and such that additionally T_i_n⋯ T_i_1U⊆ U for some i_1,…,i_n ∈{1,2};* Let P_x, P_y denote projection of ℝ^2 onto the first and second co-ordinates respectively; then for every n≥ 1, if (i_1,…,i_n), (j_1,…,j_n) ∈{1,2}^n are distinct then P_xT_i_n⋯ T_i_1(0) ≠ P_xT_j_n⋯ T_j_1(0) and P_yT_i_n⋯ T_i_1(0) ≠ P_yT_j_n⋯ T_j_1(0).Clearly (i) holds. 
To see that (ii) is satisfied we note that T_1(0,1)^2=(0,13/27)× (2/9,1) and T_2(0,1)^2=(14/27,1)× (0,7/9) are disjoint subsets of (0,1)^2, and that T_2^2[0,1]^2=[14/27,217/243]×[98/243,7/9]⊂(0,1)^2. To prove (iii), suppose for a contradiction that (i_1,…,i_n),(j_1,…,j_n) ∈{1,2}^n are the shortest pair of distinct sequences such that either P_xT_i_n⋯ T_i_1(0) = P_xT_j_n⋯ T_j_1(0) or P_yT_i_n⋯ T_i_1(0) = P_yT_j_n⋯ T_j_1(0). By minimality of n we necessarily have i_n ≠ j_n, so without loss of generality suppose i_n=1 and j_n=2. Let (x_1,y_1)^T:=T_i_n-1⋯ T_i_1(0) and (x_2,y_2)^T:=T_j_n-1⋯ T_j_1(0). These vectors belong to [0,1]^2 and their entries are obtained from 0 by repeatedly adding one of 13/27,14/27 or 2/9 and by repeatedly multiplying by ±13/27 or 7/9, in some order. In particular each entry is of the form p/q where p ∈ℤ and q is a power of three; which is to say, each entry belongs to the ring ℤ[1/3]. By hypothesis we have either P_x T_1(x_1,y_1)^T=P_x T_2(x_2,y_2)^T or P_y T_1(x_1,y_1)^T=P_y T_2(x_2,y_2)^T. If the former, we obtain -13/27x_1 + 13/27 = 13/27y_2+14/27 which yields x_1+y_2=-1/13<0, an impossibility. If the latter we obtain 7/9y_1+2/9=7/9x_2, whence 2/7=x_2-y_1 ∈ℤ[1/3] which contradicts the fundamental theorem of arithmetic. We conclude that criteria (i)–(iii) above are satisfied by (T_1,T_2) and therefore the Hausdorff dimension (and hence also the packing and box dimensions) of X are equal to the affinity dimension of (T_1,T_2) as claimed.Example 2. Define four affine transformations T_1,T_2,T_3,T_4 ℝ^2 →ℝ^2 byT_1[ x; y ] :=[ 1/3 0; 0 2/3 ][ x; y ]+[ 2/3; 0 ]T_2[ x; y ] :=[ -2/30;0 -1/3 ][ x; y ]+[ 2/3; 1 ]T_3[ x; y ] :=[02/9; -1/30 ][ x; y ]+[ 2/3; 1 ]T_4[ x; y ] :=[04/9; -1/30 ][ x; y ]+[ 2/9; 2/3 ]and denote the associated 2× 2 matrices by A_1, A_2, A_3 and A_4 respectively. It is easy to see that (T_1,T_2) has affinity dimension 1 since the matrix produced by applying Corollary <ref> to (A_1,A_2) alone is equal to the identity matrix when s=1. It follows that the affinity dimension of (T_1,T_2,T_3,T_4) must exceed 1. Since on the other hand ∑_i=1^4 | A_i|=2/3<1 the affinity dimension of (T_1,T_2,T_3,T_4) must be less than 2, and hence it is the unique value of s ∈ [1,2] such that the matrix1/3^s[2^s-1 + 22; (2/3)^s-1 +(4/3)^s-12^s-1+2 ]has spectral radius 1. A real 2 × 2 matrix B has 1 as an eigenvalue if and only if 1+ B=tr B by elementary consideration of the characteristic polynomial, and hence the affinity dimension s ∈[1,2] solves1 + 1/9^s(4^s-1+2^s+1+4- 2(2/3)^s-1-2(4/3)^s-1)=2^s+4/3^s.We may thus easily compute the affinity dimension of (T_1,T_2,T_3,T_4) to bes ≈ 1.54202 66478 62956 03651 88932 87043 45802 50254 31511 44645…For this example we ignore the Hausdorff dimension in favour of the simpler task of studying the packing dimension of the attractor. It is easily verified that (T_1,T_2,T_3,T_4) maps the open unit square (0,1)^2 into four pairwise disjoint open subrectangles of (0,1)^2 which are separated by the lines x=2/3 and y=2/3 and which meet at corners at the point (2/3,2/3). In particular (T_1,T_2,T_3,T_4) satisfies the Rectangular Open Set Condition of Feng-Wang and Fraser <cit.>. By <cit.> the projection of the attractor X onto either co-ordinate axis has box dimension 1, and it follows by <cit.> that the packing dimension and box dimension of X are equal to the affinity dimension of (T_1,T_2,T_3,T_4).§ ACKNOWLEDGEMENTSThis research was supported by the Leverhulme Trust (Research Project Grant number RPG-2016-194). The author thanks J.M. 
Fraser, A. Käenmäki and P. Shmerkin for helpful remarks. acm | http://arxiv.org/abs/1703.09097v2 | {
"authors": [
"Ian D. Morris"
],
"categories": [
"math.MG",
"math.DS",
"28A80 (primary), 15A60, 37D35, 37H15 (secondary)"
],
"primary_category": "math.MG",
"published": "20170327142131",
"title": "An explicit formula for the pressure of box-like affine iterated function systems"
} |
| http://arxiv.org/abs/1703.08911v2 | {
"authors": [
"Zhaofeng Kang",
"Jinmian Li",
"Mengchao Zhang"
],
"categories": [
"hep-ph",
"hep-ex"
],
"primary_category": "hep-ph",
"published": "20170327025919",
"title": "Uncover Compressed Supersymmetry via Boosted Bosons from the Heavier Stop/Sbottom"
} |
Paola Pinilla, Hubble fellow Department of Astronomy/Steward Observatory, The University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA [email protected] Department of Astronomy/Steward Observatory, The University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI, USA Leiden Observatory, Leiden University, P.O. Box 9513, 2300RA Leiden, The Netherlands Max-Plank-Institut für Extraterrestrische Physik, Giessenbachstraße 1, D-85748 Garching, Germany Center for Space and Habitability, Physikalisches Institut, Universitaet Bern, 3012 Bern, Switzerland Univ. Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France University Observatory, Faculty of Physics, Ludwig-Maximilians-Universität München, Scheinerstr. 1, 81679 Munich, Germany Institute of Astronomy, Madingley Road, Cambridge CB3 OHA, UK Dublin Institute for Advanced Studies, School of Cosmic Physics, 31 Fitzwilliam Place, Dublin 2, Ireland INAF-Arcetri, Largo E. Fermi 5, I-50125 Firenze Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA Department of Physics and Astronomy, Rice University, 6100 Main Street, 77005 Houston, TX, USA INAF-Arcetri, Largo E. Fermi 5, I-50125 Firenze European Southern Observatory, Karl-Schwarzschild-Str. 2, D85748 Garching, GermanyWe present new Atacama Large Millimeter/sub-millimeter Array (ALMA) 1.3 mm continuum observations of the SR 24S transition disk with an angular resolution ≲0.18” (12 au radius).We perform a multi-wavelength investigation by combining new data with previous ALMA data at 0.45 mm.The visibilities and images of the continuum emission at the two wavelengths are well characterized by a ring-like emission. Visibility modeling finds that the ring-like emission is narrower at longer wavelengths, in good agreement with models of dust trapping in pressure bumps, although there are complex residuals that suggest potentially asymmetric structures. The 0.45 mm emission has a shallower profile inside the central cavity than the 1.3 mm emission.In addition, we find that the ^13CO and C^18O (J=2-1) emission peaks at the center of the continuum cavity.We do not detect either continuum or gas emission from the northern companion to this system (SR 24N), which is itself a binary system. The upper limit for the dust disk mass of SR 24N is≲ 0.12 M_⊕, which gives a disk mass ratio in dust between the two components ofM_dust, SR 24S/M_dust, SR 24N≳840. The current ALMA observations may imply that either planets have already formed in the SR 24N disk or that dust growth to mm-sizes is inhibited there and that only warm gas, as seen by ro-vibrational CO emission inside the truncation radii of the binary, is present. § INTRODUCTION Recent multi-wavelength observations of protoplanetary disks revealed astonishing structures, such as concentric dust rings, spiral arms, and asymmetries <cit.>. These observations suggest that significant evolution has taken place and that probably planets have already imprinted their existence in the parental disks. Transition disks (TD) have been of particular interest due to their inner cavities, which were first identified by the lack of infrared emission <cit.>. 
Different mechanisms have been proposed for the origin of TD cavities, including photo-evaporation, magneto-rotational instabilities (MRI), and planet-disk interaction <cit.>. To understand whether one or several mechanisms dominate the evolution, it is crucial to spatially resolve protoplanetary disks at different wavelengths since each physical process (or the combination of several of them) lead to different structures for the small/large dust and for the gas <cit.>.For instance, when a planet opens a gap in a gaseous disk, at the outer edge of the gap, the gas density increases and the pressure has a local maximum where dust particles stop their fast radial drift and accumulate <cit.>. This process can lead to a spatial difference for the distribution of small (micron-sized) particles, which are well coupled to the gas, and large (mm-sized) particles. As a consequence, smaller and less depleted cavities or gaps are expected in the gas and small grains than in the large mm-dust particles. In this scenario, the possibility of observing rings and cavities in the dust at different wavelengths strongly depends on the disk viscosity <cit.>. Similarly, the formation of a broad and robust pressure bump that can arise from MRI processes, such as dead zones together with MHD winds, can lead tocomparable structures in the gas and dust as in the case of planet-disk interaction <cit.>. Recent ALMA gas and dust observations of TDs show that in most cases, there is gas inside the mm-dust cavity. The gas usually also shows a cavity, but with a lower depletion factor than the mm-emission <cit.>. In this paper, we report ALMAobservations of the transition disk around SR 24S at 1.3 mm of the dust continuum emission and the molecular lines ^13CO (J=2-1) andC^18O (J=2-1). For the analysis, we combine these new data with previous ALMAdata at 0.45 mm. SR 24 is a hierarchical triple system located in the L1688 dark cloud in the Ophiuchus star-formation region. L1688extends over a range of distances, likely between 120 pc and 145 pc <cit.>. In this paper, we adopt a value of 137 pc. Each of the components of SR 24 was identified as a T-Tauri star <cit.>, with infrared excess <cit.>. The separation between the two main components of SR 24 is 5.2” at a position angle (PA) of 348^∘ <cit.>.The northern component, SR 24N, is itself a binary system with a separation of 0.2” at a PA of 87^∘ <cit.>.The primary component SR 24S is a K2 star and its mass is >1.4 M_⊙, while the stars in SR 24N are a K4-M4 star with a mass of 0.61^+0.6_-0.27 M_⊙ and a K7-M5 star with a mass of 0.34^+0.46_-0.18 M_⊙ <cit.>. SR 24S and SR 24N show similar infrared emission, indicating warm dust in the inner part of both disks <cit.>. <cit.> reported ro-vibrational CO emissionat 4.7 μm tracing warm gas in the inner parts of both disks (SR 24S and SR 24N). In addition, both circumprimary (SR 24S) and circumsecondary(SR 24N) disks were resolved in the infrared image obtained with the adaptive optics coronagraph CIAO (in the Subaru telescope). These observations show that the primary disk is more extended than the secondary and the disks seem to be extended enough to fill the effective Roche radius of the system<cit.>. SR 24S and SR 24N are highly accreting, the accretion rates obtained from the hydrogen recombination lines are for SR 24N 10^-6.9 M_⊙year^-1 and forSR 24S 10^-7.15 M_⊙year^-1 <cit.>.However, only the southern component, SR 24S, has been detected in the dust continuum <cit.>. 
<cit.> reported SMA observations of SR 24 at 225 GHz (1.3 mm) in the continuum and in ^12CO (J=2-1) line emission. Stronger CO emission was seen around SR 24N than around SR 24S. Later SMA observations of SR 24S at 0.88 mm detected a dust cavity of ∼32 au radius <cit.>, revealing that SR 24S is a transition disk. <cit.> reported ALMA Cycle 0 observations of the continuum and ^12CO (J=6-5) at 0.45 mm. The CO observations of SR 24S were affected by extended emission and foreground absorption from the dark cloud in Ophiuchus, and it was not possible to infer the amount of gas and its distribution inside the dust cavity. The dust cavity size was inferred to be 25 au from the fitting of the ALMA Cycle 0 continuum observations. This paper is organized as follows. In Sect. <ref> and Sect. <ref>, we summarize the details of the ALMA observations, data reduction and imaging. Sect. <ref> presents the analysis of the data, in particular of the continuum emission and the comparison with previous ALMA observations at 0.45 mm. The discussion and main conclusions are in Sect. <ref> and Sect. <ref>, respectively. § OBSERVATIONS SR 24 was observed with ALMA in Band 6 during Cycle 2 on September 26th, 2015 (#2013.1.00091.S). For these observations 34 antennas were used and the longest baseline was 2269.9 m. The source was observed in four spectral windows, each with a bandwidth of 1875.0 MHz. Two of them were chosen with a smoothed resolution of 976.563 kHz, centered at 219.56035 GHz for the C^18O (J=2-1) transition and at 220.39868 GHz for the ^13CO (J=2-1) transition, giving a channel width of ∼1.35 km s^-1 for the two lines. The other spectral windows were configured to obtain the continuum emission, centered at 235 GHz (∼1.3 mm). The quasar QSO J1517-2422 was observed for bandpass calibration, while the quasars QSO J1617-2537 and QSO J1627-2426 were observed for phase calibration. The asteroid Pallas was observed for the flux calibration. The total observation time was 49.31 min, with a total time on source of 22.67 min. The data were calibrated using the Common Astronomy Software Package (CASA), version 4.4. During the reduction, one antenna was flagged due to anomalously elevated T_sys values. For imaging, the data were centered using two independent procedures. First, to find the center of the image, position angle (PA), and disk inclination (i), a simple Gaussian or disk model was used to fit the data (using uvmodelfit in CASA), either using only short baselines (≲200 kλ) or all the uv coverage. The obtained PA and inclination for both models of the disk are 27.8^∘±1.3^∘ and 49.8^∘±2.4^∘, respectively. We applied the same procedure for fitting the 0.45 mm data with uvmodelfit, finding PA=26.9^∘±1.6^∘ and i=47.2^∘±3.1^∘, in agreement with the values found by <cit.>. Second, since the image does not have significant asymmetries, the center was also checked by minimizing the rms scatter of the imaginary part of the visibilities around zero. Both methods give very similar centers, and α_2000=16:26:58.5, δ_2000=-24:45:37.2 were used to correct the phase center and obtain the visibilities using fixvis. The same procedure was used for the previous ALMA observations at 0.45 mm. Nevertheless, the center, PA and i are again taken as free parameters when the analysis is done in the visibility domain (Sect. <ref>).
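The second centering procedure is simple to illustrate. The following sketch (our own minimal illustration, not the script used for the actual data; the grid of trial offsets and all variable names are assumptions) shifts the phase center of a set of visibilities and selects the offset that minimizes the rms of the imaginary part:

import numpy as np

def shift_phase_center(u, v, vis, d_ra, d_dec):
    # u, v in units of wavelengths; d_ra, d_dec in radians.
    # A phase-center shift multiplies the visibilities by
    # exp(2*pi*i*(u*d_ra + v*d_dec)).
    return vis * np.exp(2j * np.pi * (u * d_ra + v * d_dec))

def best_center(u, v, vis, trial_offsets):
    # For a nearly axisymmetric source, the imaginary part of correctly
    # centered visibilities scatters around zero; pick the offset that
    # minimizes its rms.
    rms = [np.sqrt(np.mean(shift_phase_center(u, v, vis, dr, dd).imag ** 2))
           for dr, dd in trial_offsets]
    return trial_offsets[int(np.argmin(rms))]

arcsec = np.pi / (180.0 * 3600.0)
grid = np.linspace(-0.1, 0.1, 41) * arcsec
offsets = [(dr, dd) for dr in grid for dd in grid]
# best = best_center(u, v, vis, offsets)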
Different studies show that visibility modeling has some advantages over image-based analysis, since it can identify unresolved structures and better constrain the disk morphology, without being limited by deconvolution issues that may arise during imaging <cit.>. Continuum and line imaging were performed using the clean algorithm. We used natural weighting and Briggs weighting (robust = 0.5) to find the best compromise between resolution and sensitivity. For the continuum, with Briggs weighting, we achieved an rms of 63 μJy beam^-1 with a beam size of 0.18” × 0.12”. The continuum was subtracted from the line-containing channels using uvcontsub. Since the ^13CO and C^18O lines are weak detections, we used natural weighting, for a final rms of around 1.4 mJy beam^-1 per 1.35 km s^-1 channel for both lines; the final beam size in this case is 0.21”×0.16”. § RESULTS §.§ Continuum emission Figure <ref> shows the resulting 1.3 mm image of the SR 24 system after the cleaning process (and after primary beam correction). In the left panel, the contour lines correspond to 3σ_N, where σ_N=75 μJy beam^-1 is the rms measured at the location of the northern component <cit.>. Taking a circular area with a radius of 1” centered at the position of SR 24N, the total flux is ∼3.0 σ_N. Assuming optically thin emission, the dust mass can be estimated as <cit.> M_dust ≃ d^2 F_ν/[κ_ν B_ν(T(r))], where d is the distance to the source, κ_ν is the dust opacity at a given frequency, and B_ν(T) is the Planck function for a given temperature radial profile T(r). Assuming a distance of 137 pc, a dust opacity at 1.3 mm of ∼3 cm^2 g^-1 <cit.> and a temperature of 20 K, the upper limit for the dust mass of the SR 24N disk is M_dust, SR 24N ≲ 3.5×10^-7 M_⊙, or equivalently ≲0.12 M_⊕. Figure <ref> presents the continuum ALMA observations of SR 24S at 0.45 mm and at 1.3 mm. The details of the calibration process for the Cycle 0 data are presented in <cit.>. We summarize the properties of each image in Table <ref>. The continuum emission is detected with a signal-to-noise ratio (with respect to the peak) of 146 for the 0.45 mm data and 256 for the 1.3 mm data (see Table <ref>). The total flux is 1.9 Jy at 0.45 mm and 0.22 Jy at 1.3 mm. Calculating the dust disk mass from the 1.3 mm flux, and assuming the same opacity and temperature as for SR 24N, we obtain M_dust, SR 24S ∼ 3.0×10^-4 M_⊙, implying a dust disk mass ratio between the southern and the northern component of ≳840. Nonetheless, Eq. <ref> assumes optically thin emission, which may not be the case for SR 24S, especially close to the location of the ring (see Sect. <ref>). If only part of the emission is optically thin, the dust mass for the disk around SR 24S is underestimated, which would increase the dust disk mass ratio between the southern and the northern component. Taking the total flux at each wavelength, the integrated spectral index is given by α_mm = ln(F_1.3 mm/F_0.45 mm) / ln(0.45 mm/1.3 mm) = 2.02±0.13 (the error includes a calibration uncertainty of 10%), which is lower than the value previously reported <cit.> based on SMA and ATCA observations at 0.88 and 3.0 mm <cit.>. This low value may indicate grain growth and/or a small cavity, but most likely arises from optically thick emission, as discussed in Sect. <ref>.
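The numbers above can be checked directly from Eq. <ref>. The sketch below (our own consistency check; the fluxes are the values quoted in the text, and small differences with respect to the quoted masses are expected, e.g. from the exact frequency adopted) evaluates the dust masses and the integrated spectral index:

import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16      # cgs units
pc, M_sun, M_earth = 3.086e18, 1.989e33, 5.972e27

def planck(nu, T):
    # Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1.
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def dust_mass(F_jy, nu, d_pc=137.0, kappa=3.0, T=20.0):
    # Optically thin estimate M_dust = d^2 F_nu / (kappa_nu B_nu(T)).
    return (d_pc * pc) ** 2 * (F_jy * 1e-23) / (kappa * planck(nu, T))

M_N = dust_mass(3 * 75e-6, nu=235e9)    # 3 sigma_N upper limit for SR 24N
M_S = dust_mass(0.22, nu=235e9)         # SR 24S from the 1.3 mm flux
print(M_N / M_sun, M_N / M_earth)       # ~3e-7 M_sun, ~0.1 M_earth
print(M_S / M_sun, M_S / M_N)           # ~3e-4 M_sun; ratio of order 10^3,
                                        # consistent with >~ 840

alpha_mm = np.log(0.22 / 1.9) / np.log(0.45 / 1.3)
print(alpha_mm)                          # ~2.0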
Figure <ref> also shows the continuum flux at 0.45 and 1.3 mm, normalized to the peak of emission, along a radial cut along the PA of the disk (obtained in Sect. <ref>). Both profiles reveal a cavity and a ring. The 1.3 mm emission strongly decreases inside the cavity, where the flux is reduced by around 85%. In contrast, the 0.45 mm emission shows a shallower cavity, with the emission reduced by about 24% compared to the peak of emission. In addition, the position of the peak of the ring is located further out at 1.3 mm. However, this contrast and the location of the cavity can be affected by the beam convolution, and a more detailed analysis of the intensity profiles is performed in the visibility domain in Sect. <ref>. §.§ Gas emission Both the ^13CO and C^18O (J=2-1) lines are detected, but they are affected by foreground absorption from the nearby dark cloud. In particular, ^13CO is more contaminated than C^18O, because C^18O is optically thinner than ^13CO. Figure <ref> shows the channel maps of the ^13CO and C^18O emission in SR 24S from 0 to 9.45 km s^-1. In addition, the ^13CO and C^18O (J=2-1) spectra are shown in Fig. <ref>; they are obtained by integrating over a circular area centered at the location of SR 24S with a radius of 1”. These channel maps confirm the presence of ^13CO and C^18O in the SR 24S disk, but also the effect of the foreground absorption, in particular in the channels at 2.7 and 4.05 km s^-1. Thus, the asymmetry of the double-peaked velocity profile for both lines is likely due to this foreground absorption. Figure <ref> shows the zero moment map for ^13CO and C^18O obtained from -1.35 km s^-1 to 10.8 km s^-1, where the channels contain significant emission (≳5 σ, with σ=1.4 mJy beam^-1 per 1.35 km s^-1 channel). For ^13CO, inside a circle of ∼0.7” radius centered at the location of SR 24S, the total flux per velocity interval is ∼1 Jy beam^-1 km s^-1, and approximately 60% of this emission comes from the inner part of the disk inside the mm-dust cavity. The emission peaks at the center with a value of ∼0.1 Jy beam^-1 km s^-1, and the rms of the zero moment map is around ∼10 mJy beam^-1 km s^-1, which gives a signal-to-noise ratio of 10 with respect to the peak and 100 with respect to the total flux. For C^18O, inside the same circle of ∼0.7” radius, the total flux per velocity interval is ∼0.5 Jy beam^-1 km s^-1, and approximately 40% of this emission comes from the inner part. This emission also peaks at the center with a value of ∼40 mJy beam^-1 km s^-1, and the rms of the zero moment map is around ∼8.6 mJy beam^-1 km s^-1, which gives signal-to-noise ratios of 5 and 58 with respect to the peak and the total flux, respectively. The flux of ^13CO and C^18O, normalized to the peak, along a radial cut along the PA of the disk is also shown in Fig. <ref>. For comparison, the normalized flux of the continuum mm-emission is over-plotted. The peak of both lines resides inside the mm-cavity. The uncertainties for the flux are also shown; the wiggles of the ^13CO and C^18O emission beyond 0.4” are therefore within the noise of the observations and are not significant. We do not derive the inclination and PA from the gas emission, but from the dust continuum emission. Moreover, because of the foreground absorption in some of the channels, we do not estimate gas masses from the current observations of CO isotopologues, as done by e.g. <cit.> and <cit.>. In the northern component of the SR 24 system (SR 24N), there is no significant detection of ^13CO or C^18O (i.e.
nothing above ∼3σ, where σ is around ∼1.5 mJy beam^-1 per 1.35 km s^-1 channel). § DATA ANALYSIS §.§ Disk morphology All of the following analysis of the disk morphology from the dust continuum emission is performed in the visibility domain for the two wavelengths separately. We work with each observed (u,v) point, since we do not assume any a-priori knowledge of the total flux, inclination, position angle, and center of the image. Hence, these are free parameters of each of the explored models (F_total, i, PA, x_0 and y_0, where x_0 and y_0 are the potential offsets from the center taken at α_2000=16:26:58.5, δ_2000=-24:45:37.2). As a first approximation to the structure, we assume an axisymmetric disk (Fig. <ref>), and thus we focus on fitting the real part of the visibilities. The Fourier transform of a symmetric brightness distribution can be expressed in terms of the zeroth-order Bessel function of the first kind J_0 of the de-projected uv-distance, such that V_Real(r_uv) = 2π∫^∞_0 I(r) J_0(2π r_uv r) r dr, where r_uv=√(u_ϕ^2 cos^2 i + v_ϕ^2), with u_ϕ=u cosϕ+v sinϕ and v_ϕ=-u sinϕ+v cosϕ, where i and ϕ are the inclination and position angle of the disk, respectively. To fit the morphology of the disk, because the visibilities and continuum maps reveal a cavity at the two wavelengths, we explore models where the intensity profile has a ring shape. The fitting is conducted using the Markov chain Monte Carlo (MCMC) method. The first model we use is a radially symmetric Gaussian ring, with two extra free parameters, for a total of seven free parameters (R_peak, R_width, F_total, i, PA, x_0 and y_0), such that the intensity radial profile is given by I(r)=C exp(-(r-R_peak)^2/2R_width^2), where the constant C is set by the total flux of the disk through F_total = V_Real(0) = 2π∫^∞_0 I(r) r dr (note that J_0(0)=1). The cuts along the PA of the disk in Fig. <ref> show that the ring is not necessarily a symmetric Gaussian around the peak in the radial direction (in particular for the 0.45 mm emission), and hence we use two other models to mimic a radially asymmetric ring (still azimuthally symmetric, since our models are focused on fitting the real part of the visibilities). In the first of these models we assume an asymmetric Gaussian with two different widths that coincide at the location of the peak of emission, such that I(r) = C exp(-(r-R_peak)^2/2R_width^2) for r ≤ R_peak, and I(r) = C exp(-(r-R_peak)^2/2R_width2^2) for r > R_peak. This model has a total of eight free parameters: R_peak, R_width, R_width2, F_total, i, PA, x_0 and y_0. This radially asymmetric ring model is also motivated by results of particle trapping in radial pressure bumps. These models of dust evolution predict that the regions where dust accumulates become narrower for larger grains (and therefore for longer wavelengths). Additionally, it is expected that the accumulation is radially narrower at longer times of evolution (≳1 Myr), because micron-sized dust particles require a long time to grow to mm-sizes in the outer parts of the disk, from where they then drift towards the pressure bump. At longer times (∼5 Myr), the emission from the models is expected to be a symmetric ring. As a consequence, the morphology of the trapped dust is expected to be an asymmetric ring in the radial direction (which can be mimicked assuming R_width<R_width2 in Eq. <ref>) at shorter times after the pressure bump is formed (≲1 Myr), becoming narrower and radially symmetric at longer times <cit.>.
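The model visibilities in Eq. <ref> can be evaluated by direct numerical quadrature of the Hankel transform. The following sketch (a simplified stand-in for the actual fitting code; the grids and parameter values are illustrative only) computes V_Real(r_uv) for the symmetric and radially asymmetric Gaussian rings:

import numpy as np
from scipy.special import j0

r = np.arange(1.0, 500.0, 0.1)                 # radial grid [au]

def gaussian_ring(r, R_peak, R_width, R_width2=None):
    # Symmetric ring if R_width2 is None; otherwise the piecewise
    # Gaussian with different inner/outer widths (Eq. <ref>).
    w = np.where(r <= R_peak, R_width,
                 R_width if R_width2 is None else R_width2)
    return np.exp(-(r - R_peak) ** 2 / (2.0 * w ** 2))

def real_visibility(I_r, r_uv):
    # V_Real(r_uv) = 2 pi * int_0^inf I(r) J0(2 pi r_uv r) r dr.
    # r_uv must be in cycles per au: the deprojected baseline in units
    # of the observing wavelength times the angle subtended by 1 au at
    # the distance of the source (137 pc here).
    arg = 2.0 * np.pi * np.outer(r_uv, r)
    return 2.0 * np.pi * np.trapz(I_r * j0(arg) * r, r, axis=-1)

I = gaussian_ring(r, R_peak=35.0, R_width=10.0)
I *= 0.22 / (2.0 * np.pi * np.trapz(I * r, r))   # normalize: F_total = V_Real(0)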
Secondly, we assume a combination of a Gaussian profile with a power-law. This also has eight free parameters, namely R_peak, R_width, γ, F_total, i, PA, x_0 and y_0, and is given by I(r)=C[r^-γ+exp(-(r-R_peak)^2/2R_width^2)]. The motivation for this model is to investigate the potential emission from the inner disk and its possible dependence on wavelength. For the fits, we used emcee <cit.>, which allows us to efficiently sample the parameter space in order to maximize the likelihood for each model. Maximizing the likelihood function ℒ(Θ) is equivalent to minimizing the negative of its logarithm, since the logarithm is an increasing function over the entire range. Therefore we aim to minimize the following function: -log(ℒ(Θ)) = 1/2∑_i=1^n[log(2πσ_i^2) + (V_Real, obs^i-V_Real, model^i)^2/σ_i^2], where σ_i is the uncertainty of each observed (u,v) point, n is the total number of data points, V_Real, obs is the real part of the observed visibilities, and V_Real, model are the visibilities for each model calculated with Eq. <ref>. We adopted a set of uniform prior probability distributions for the free parameters explored by the Markov chain in the three models, specifically: R_peak ∈ [1, 100] au, R_width ∈ [1, 50] au, R_width2 ∈ [1, 50] au, F_total ∈ [1.0, 5.0] Jy for the 0.45 mm data, F_total ∈ [0.02, 2.5] Jy for the 1.3 mm data, i ∈ [10, 80]^∘, PA ∈ [10, 80]^∘, x_0 ∈ [-0.2, 0.2]”, y_0 ∈ [-0.2, 0.2]”, and γ ∈ [-3, 3]. For the radial grid, we assume r ∈ [1, 500] au with steps of 0.1 au. The burn-in phase for convergence is ∼1000 steps, which is ∼10 times the autocorrelation time of 100 steps <cit.>. We let the Markov chain sample the parameter space for several thousand more steps, for a total of 4000 steps with 1000 walkers. Each measurement set is fitted separately, and therefore we used emcee to fit a total of six models (3 models for 2 different data sets). To simplify the fitting process, we obtained the PA and i using the 1.3 mm data (since it has better signal-to-noise) for each model, and kept the best-fit values of these two parameters when fitting the 0.45 mm data. The results are summarized in Tables <ref>, <ref>, <ref>, and Fig. <ref>. All three models show that the peak of the ring is located further out and becomes narrower at the longer wavelength, with a difference of around ∼20 au for the location of the peak and ∼10 au for the width(s). The model of the radially asymmetric ring shows that R_width<R_width2, creating a slightly (radially) asymmetric ring with an outer tail. The model of the power law together with a Gaussian gives a slightly steeper profile for the 0.45 mm emission. The inclination and position angle obtained with the three models have very similar values, with mean values of 46^∘ and 24^∘ respectively, in agreement with the values found by <cit.> and <cit.> and in Sect. <ref>. The values obtained for the shift of the center are very low compared with the pixel size of the observations (0.02” for the 1.3 mm observations and 0.04” for the 0.45 mm observations). We image the models and residuals (data-model) using the same (u,v) coordinates as the actual ALMA observations. Figure <ref> shows the synthetic images of the best fit models and the corresponding residuals. In general, the quality of the three fits is similar. For the 1.3 mm data, all three models leave roughly the same level of residuals with respect to the rms of the observations. This is because all three models resemble an almost symmetric ring and a rather empty cavity, where the intensity decreases by around 80-90% with respect to the peak of emission, as in the observations (left panel of Fig.
<ref>). For the 0.45 mm data, the asymmetric ring is the model that leaves the fewest residuals; there, the emission inside the cavity only decreases by ∼20% with respect to the peak of emission. Figure <ref> shows the profile of the intensity normalized to the peak value, calculated with the best fit parameters (Tables <ref>, <ref>, and <ref>), for each model and each wavelength (for this plot, the inclination and the position angle are taken to have the same value for all three models, namely the mean values i=46^∘ and PA=24^∘). All three models yield a roughly symmetric ring-like emission at the two wavelengths, and the profiles in Fig. <ref> are comparable with the azimuthally averaged radial profile of the de-projected images (left panel in Fig. <ref>). Independently of the model, an asymmetric structure persists in the maps of the residuals (Fig. <ref>). The models under-predict the flux in the north-east and over-predict the flux in the south-west with similar magnitude and morphology. The residuals show a spiral-shaped structure similar to those found in the residual maps of the visibility analysis of SR 21 and HD 135344B <cit.>. The residuals peak along a PA of ∼47^∘±2^∘. Taking a radial cut along this angle, the positive residuals peak around 0.45±0.1” from the center, and the dip of emission has its minimum around the same location on the opposite side of the disk. The residual map of the 1.3 mm data shows an additional structure at ∼0.15” with a swap in the flux emission (negative in the north-east and positive in the south-west). As an experiment, we also performed the MCMC fitting keeping PA, i, x_0, and y_0 fixed, assuming the values obtained with uvmodelfit. In this experiment, we found a higher level of residuals, but with a shape similar to the ones shown in Fig. <ref>. In our analysis, we only fitted the real part of the visibilities, assuming no azimuthal variations. As a result, if an asymmetry arises due to an offset, the model optimizes the fit towards a symmetric emission. Hence, in the framework of our models, it is difficult to assess how significant the residuals and their shape are, and higher angular resolution observations are required to confirm whether these potential asymmetries are real. <cit.> modeled the visibility profile of several disks with multiple rings (such as HL Tau, TW Hya and HD 163296). Applying their method to our data does not seem suitable, since there is not more than one distinctive peak in the visibility profile. However, as a test we performed an experiment of fitting the visibilities at 1.3 mm with at least two rings. In this case, the fit converges to a single ring that dominates the intensity profile. More complex models that include asymmetric structures, such as spiral arms, may improve the fit (Fig. <ref>). However, due to the high degeneracy introduced by several additional parameters when fitting both the imaginary and the real part of the visibilities simultaneously, we do not perform such an analysis. §.§ Optical thickness and spectral index interpretation The slope of the spectral energy distribution (SED) at millimeter wavelengths, or spectral index (α_mm, such that F_mm∝ν^α_mm), has been widely used to trace millimeter grains in protoplanetary disks. If the millimeter emission is optically thin, low values of the spectral index (≲3.5) indicate the growth of particles to millimeter sizes.
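The two quantities used in the remainder of this section follow directly from the intensity profiles at the two wavelengths. A minimal sketch of both computations (our own illustration; the I_nu arrays stand for azimuthally averaged intensities in cgs units per steradian, and T_phys for the assumed mid-plane temperature profile) is:

import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16      # cgs units

def spectral_index(I_045, I_13, nu_045=666e9, nu_13=235e9):
    # alpha(r) such that I_nu is proportional to nu^alpha between the bands.
    return np.log(I_13 / I_045) / np.log(nu_13 / nu_045)

def optical_depth(I_nu, nu, T_phys):
    # Invert I_nu = B_nu(T_phys) * (1 - exp(-tau)) for tau, without
    # assuming the Rayleigh-Jeans regime.
    B = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T_phys))
    return -np.log(1.0 - I_nu / B)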
The spatially integrated spectral index (α_mm) of protoplanetary disks observed in different star forming regions and around different stellar types has values lower than 3.5 <cit.>. Spatial variations of the spectral index have been resolved in different protoplanetary disks, where in most of the cases the spectral index increases radially, evidencing that the grain size decreases with increasing radius <cit.>, as expected from radial drift and as seen in dust evolution models <cit.>. However, for transition disks the spectral index is expected to increase towards the outer edge of the cavity, that is, towards the location where particles are trapped and have grown to mm-sizes. For these disks, the spatially integrated spectral index is also expected to be higher for larger cavities <cit.>. There are a few transition disks where the spectral index has been imaged (HD 142527, IRS 48 and SR 21 <cit.>), and in these few cases the spectral index decreases towards the location where the mm-emission peaks and where a particle accumulation is expected. The middle panel of Fig. <ref> shows the radial profile of the spectral index calculated from the intensity profiles, taking the best fit parameters for each model at each wavelength described in Sect. <ref>. At the location of the ring, the spectral index has values lower than 2.0, specifically from 32±3 au to 81±4 au. The right panel of Fig. <ref> also shows the optical depth τ obtained from the brightness temperature, which is calculated from the azimuthally averaged flux of the de-projected image (displayed in the left panel of Fig. <ref>) and without assuming the Rayleigh-Jeans regime. For the physical temperature, we assume the mid-plane values from dust radiative transfer models for SR 24S in <cit.>. The error bars are obtained from error propagation, taking into account the rms of the observations and the standard deviation of the azimuthally averaged flux values. With the assumed temperature, the emission is optically thick at both wavelengths (τ>1) close to the location of the ring-like emission. In particular, τ>1 from ∼33 to ∼81 au for the 0.45 mm emission and from ∼36 to ∼66 au for the 1.3 mm emission. Therefore, the decrease of the spectral index towards the location of the ring is most likely caused by optically thick emission, and with the current observations α_mm cannot be interpreted as grain growth inside the ring. Observations at shorter (optically thick) wavelengths, which can provide information on the temperature distribution in SR 24S, and at longer (optically thin) wavelengths, which can give direct information on the dust size, are therefore needed to better constrain the dust density distribution in SR 24S. § DISCUSSION Embedded planets inside the mm-cavities are commonly invoked to explain the observed gas and dust structures of transition disks. From planet-disk interaction together with dust evolution models, it is expected that dust particles accumulate and grow to mm-sizes at the outer edge of the planet-carved gap. For the gas, it is expected that the cavity is smaller and less depleted than the mm-cavity, as observed in several transition disks <cit.>. The small (micron-sized) grains are also expected to have a different distribution than the mm-dust, with smaller or no gaps, depending on the planet mass and disk viscosity <cit.>. In the current observations of ^13CO and C^18O in the transition disk SR 24S, the gas emission peaks at the center of the mm-dust cavity.
It is possible that the gas cavity (or gap) remains unresolved, or that the gas is not depleted inside the mm-cavity. The latter case suggests that if embedded planets are the cause of the mm-cavity, they must be low-mass planets <cit.>. An alternative explanation is dust trapping at the outer edge of a dead zone. In this scenario, a strong accumulation of millimeter- and centimeter-sized particles is expected at the location of the outer edge of the dead zone, while the gas is only slightly depleted in the inner part of the disk <cit.>. Information on the scattered light emission of SR 24S can give important insights to distinguish between the two possibilities, since in the dead zone scenario the cavity in scattered light is expected to have the same size as the mm-cavity, contrary to the planet-disk interaction scenario <cit.>. The resolution of previous near-infrared observations was not sufficient to detect a potential cavity in small grains in SR 24S <cit.>. Additional information about the distribution of intermediate grain sizes, which can be obtained from millimeter-wave polarization, can also give significant insights into whether or not planet-disk interaction is the most likely cause of the origin of the cavity in SR 24S <cit.>. Models of internal photoevaporation predict a cavity in gas and dust (and low accretion rates), and therefore this scenario can be ruled out as a potential origin for the mm-cavity in SR 24S <cit.>. Our analysis of the morphology of the dust emission at 0.45 and 1.30 mm in SR 24S shows that the ring-like emission becomes narrower at the longer wavelength. This is in agreement with models of dust trapping in a pressure bump, since larger grains, which are traced at longer wavelengths, accumulate more efficiently at the location of the pressure maximum (or particle trap). Contrary to model predictions, in the observations there is also a shift of the peak of emission. This shift may result from optically thick emission at the two wavelengths, and observations at longer (optically thin) wavelengths are required to see if the peak of emission for larger grains shifts. The models of particle trapping by planets predict that at early times of evolution (∼1 Myr) the ring-like emission is a radially asymmetric ring, where the outer width of the ring is expected to be larger than the inner width <cit.>. As a result of slow grain growth in the outer disk, the intensity profile at early times of evolution is a ring with an outer tail. At longer times (∼5 Myr), the emission from the models is expected to be a symmetric ring. The dust morphology found from the current observations of SR 24S shows that the emission at both wavelengths is well represented by an almost symmetric ring. The models with the radially asymmetric Gaussian give profiles similar to the perfectly symmetric ring (Fig. <ref>). This suggests that if trapping is occurring due to embedded planets, the trapping process has already been operating for a long time, which is in agreement with the age of ∼4 Myr reported for SR 24S by <cit.>. In addition, the emission inside the mm-cavity at 0.45 mm is less depleted than at 1.3 mm. This could happen because smaller grains may not be completely filtered at the outer edge of the planet-induced gap, which allows small grains to pass through the gap and replenish the inner disk. The SED of SR 24S shows near-infrared (NIR) excess emission <cit.>, which suggests the presence of an inner disk.
The combination of NIR excess and ring-like emission can be reproduced in models of partial filtration at the outer edge of a gap opened by a massive planet, with changes of the dust dynamics near the water snow line. Without variations of the fragmentation velocities of the particles near the snow line, no NIR excess is produced, because even in the case of partial filtration, when small grains pass the planet gap, the growth in the inner disk is so efficient that the grains are lost on very short timescales (≲0.1 Myr). The required conditions to keep a long-lived inner disk and an outer ring are in agreement with the gas emission peaking inside the mm-cavity, which suggests the presence of low mass planet(s), allowing the inner disk to be continuously replenished from the outer disk, even at very long times of evolution <cit.>. The analysis of the current data suggests that the morphology of the SR 24S disk is not completely azimuthally symmetric and that complex asymmetric structures may exist. These asymmetric structures can also be linked with the presence of planets, MRI effects, or the interaction with the binary system SR 24N. Higher angular resolution observations are needed to confirm the existence of such structures and to better understand their nature and origin. The current observations of SR 24N suggest that this disk is poor in mm-grains and cold molecular gas. A possible explanation is that the growth of sub-micron sized particles to larger objects is inhibited by the high relative collision velocities of the dust expected around binary systems <cit.>. CIAO observations at H-band showed that the disks in scattered light seem to be extended enough to fill the effective Roche radius of the system, suggesting tidal interactions between the two disks <cit.>. For the gas, it is possible that only warm gas inside the truncation radii is present <cit.>. The truncation radius for close binaries is expected to be ≲0.4a <cit.>, where a is the separation of the two stars. In this case, a∼0.2”, which means that the truncation radius is around 0.08” (∼10 au), smaller than the current resolution of the ALMA observations. An alternative possibility for the lack of mm-dust and gas in SR 24N is that planets have already formed in this binary system, depleting the disk in gas and large grains. § CONCLUSIONS We present ALMA observations of the transition disk SR 24S at 0.45 and 1.3 mm. Our findings are as follows: * The visibilities and images of the continuum emission at the two wavelengths are well described by ring-like emission. The width of this ring is narrower at the longer wavelength, in agreement with particle trapping in pressure bumps. The ring is mostly radially symmetric, suggesting that if the trapping process is due to embedded planets, it must have been operating for a long time. * Inside the mm-cavity, the emission at 0.45 mm is less reduced than at 1.3 mm (∼20% vs. 85% of depletion with respect to the peak of emission). This could be linked with a constant replenishment of small grains from the outer disk (or partial filtration of particles at the outer edge of a gap). This is in agreement with the NIR excess of the SED of SR 24S, i.e. with the presence of an inner disk in the first au. * Analysis in the visibility domain reveals a complex morphology of the disk with a potentially asymmetric shape. Higher angular resolution observations are required to confirm such structure(s). * We detect ^13CO and C^18O (J=2-1) emission in the SR 24S disk.
The current observations show that the emission of both molecular lines peaks at the center of the mm-cavity, in contrast with most of the transition disks observed so far with ALMA. This suggests that whatever the origin of the mm-cavity is, it allows enough gas to reside in the inner part of the disk, as in the case of dead zones <cit.>. Internal photoevaporation is unlikely as a potential origin for the observed structures in SR 24S, given the high accretion rate, the large mm-cavity, and the gas emission peaking in the inner disk. * There is no detection of dust continuum, ^13CO, or C^18O emission at the location of the northern component (SR 24N) of the hierarchical triple system SR 24. This suggests that in SR 24N either planets have already formed in this binary system, depleting the circumbinary disk in gas and mm-grains, or dust growth to mm-sizes is inhibited in this disk and only warm gas inside the truncation radii of the binary is present. * Assuming physical temperatures from previous modeling of the SR 24S disk, we conclude that the emission at both wavelengths is optically thick close to the location of the ring-like emission. High angular resolution observations at longer wavelengths are needed to investigate potential spatial changes (radial and/or azimuthal) of the dust density distribution through spatially resolved spectral index variations. The authors are thankful to the ALMA contact and data reducer, Luke Maud, for his help in understanding the data reduction and calibration. We also acknowledge Catherine Walsh, Marco Tazzari, and Sierk van Terwisga for useful discussions. P.P. acknowledges support by NASA through Hubble Fellowship grant HST-HF2-51380.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. Astrochemistry in Leiden is supported by the European Union A-ERC grant 291141 CHEMPLAN, by the Netherlands Research School for Astronomy (NOVA), and by a Royal Netherlands Academy of Arts and Sciences (KNAW) professor prize. T.B. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 714769. A.N. acknowledges funding from Science Foundation Ireland (Grant 13/ERC/I2907). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2013.1.00091.S and #2011.0.00724.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. [Alexander et al.(2014)]alexander2014 Alexander, R., Pascucci, I., Andrews, S., Armitage, P., & Cieza, L. 2014, Protostars and Planets VI, 475, eds. H. Beuther, R. Klessen, C. Dullemond, & Th. Henning (Univ. of Arizona Press, Tucson). [ALMA Partnership et al.(2015)]alma2015 ALMA Partnership, Brogan, C. L., Pérez, L. M., et al. 2015, , 808, L3 [Andrews & Williams(2005)]andrews2005 Andrews, S. M., & Williams, J. P. 2005, , 619, L175 [Andrews et al.(2010)]andrews2010 Andrews, S. M., Wilner, D. J., Hughes, A. M., Qi, C., & Dullemond, C. P. 2010, , 723, 1241 [Andrews et al.(2011)]andrews2011 Andrews, S. M., Wilner, D. J., Espaillat, C., et al. 2011, , 732, 42 [Andrews et al.(2016)]andrews2016 Andrews, S. M., Wilner, D. J., Zhu, Z., et al. 2016, , 820, L40 [Artymowicz & Lubow(1994)]artymowicz1994 Artymowicz, P., & Lubow, S.
H. 1994, , 421, 651 [Banzatti et al.(2011)]banzatti2011 Banzatti, A., Testi, L., Isella, A., et al. 2011, , 525, A12 [Birnstiel et al.(2010)]birnstiel2010 Birnstiel, T., Ricci, L., Trotta, F., et al. 2010, , 516, L14 [Birnstiel et al.(2012)]birnstiel2012 Birnstiel, T., Klahr, H., & Ercolano, B. 2012, , 539, A148 [Bontemps et al.(2001)]bontemps2001 Bontemps, S., André, P., Kaas, A. A., et al. 2001, , 372, 173 [Brown et al.(2013)]brown2013 Brown, J. M., Pontoppidan, K. M., van Dishoeck, E. F., et al. 2013, , 770, 94 [Bruderer et al.(2014)]bruderer2014 Bruderer, S., van der Marel, N., van Dishoeck, E. F., & van Kempen, T. A. 2014, , 562, A26 [Casassus et al.(2015)]casassus2015 Casassus, S., Wright, C. M., Marino, S., et al. 2015, , 812, 126 [Canovas et al.(2016)]canovas2016 Canovas, H., Caceres, C., Schreiber, M. R., et al. 2016, , 458, L29 [Correia et al.(2006)]correia2006 Correia, S., Zinnecker, H., Ratzka, T., & Sterzik, M. F. 2006, , 459, 909 [Cutri et al.(2003)]cutri2003 Cutri, R. M., Skrutskie, M. F., van Dyk, S., et al. 2003, VizieR Online Data Catalog, 2246 [de Boer et al.(2016)]deboer2016 de Boer, J., Salter, G., Benisty, M., et al. 2016, , 595, A114 [de Juan Ovelar et al.(2016)]maria2016 de Juan Ovelar, M., Pinilla, P., Min, M., Dominik, C., & Birnstiel, T. 2016, , 459, L85 [Dipierro et al.(2016)]dipierro2016 Dipierro, G., Laibe, G., Price, D. J., & Lodato, G. 2016, , 459, L1 [Fedele et al.(2017)]fedele2017 Fedele, D., Carney, M., Hogerheijde, M. R., et al. 2017, arXiv:1702.02844 [Flock et al.(2015)]flock2015 Flock, M., Ruge, J. P., Dzyurkevich, N., et al. 2015, , 574, A68 [Foreman-Mackey et al.(2013)]foreman2013 Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, , 125, 306 [Ghez et al.(1993)]ghez1993 Ghez, A. M., Neugebauer, G., & Matthews, K. 1993, , 106, 2005 [Ginski et al.(2016)]ginski2016 Ginski, C., Stolker, T., Pinilla, P., et al. 2016, , 595, A112 [Greene et al.(1994)]greene1994 Greene, T. P., Wilking, B. A., Andre, P., Young, E. T., & Lada, C. J. 1994, , 434, 614 [Hildebrand(1983)]hildebrand1983 Hildebrand, R. H. 1983, , 24, 267 [Loinard et al.(2008)]loinard2008 Loinard, L., Torres, R. M., Mioduszewski, A. J., & Rodríguez, L. F. 2008, , 675, L29 [Mamajek(2008)]mamajek2008 Mamajek, E. E. 2008, Astronomische Nachrichten, 329, 10 [Mayama et al.(2010)]mayama2010 Mayama, S., Tamura, M., Hanawa, T., et al. 2010, Science, 327, 306 [Miotello et al.(2016)]miotello2016 Miotello, A., van Dishoeck, E. F., Kama, M., & Bruderer, S. 2016, , 594, A85 [Miranda & Lai(2015)]miranda2015 Miranda, R., & Lai, D. 2015, , 452, 2396 [Natta et al.(2006)]natta2006 Natta, A., Testi, L., & Randich, S. 2006, , 452, 245 [Nuernberger et al.(1998)]nuernberger1998 Nuernberger, D., Brandner, W., Yorke, H. W., & Zinnecker, H. 1998, , 330, 549 [Owen & Clarke(2012)]owen2012 Owen, J. E., & Clarke, C. J. 2012, , 426, L96 [Pérez et al.(2012)]perez_L2012 Pérez, L. M., Carpenter, J. M., Chandler, C. J., et al. 2012, , 760, L17 [Pérez et al.(2014)]perez_L2014 Pérez, L. M., Isella, A., Carpenter, J. M., & Chandler, C. J. 2014, , 783, L13 [Pérez et al.(2015)]perez_L2015 Pérez, L. M., Chandler, C. J., Isella, A., et al. 2015, , 813, 41 [Pérez et al.(2016)]perez_L2016 Pérez, L. M., Carpenter, J. M., Andrews, S. M., et al. 2016, Science, 353, 1519 [Perez et al.(2015)]perez_s2015 Perez, S., Casassus, S., Ménard, F., et al. 2015, , 798, 85 [Pinilla et al.(2012b)]pinilla2012 Pinilla, P., Benisty, M., & Birnstiel, T. 2012b, A&A, 545, A81 [Pinilla et al.(2014)]pinilla2014 Pinilla, P., Benisty, M., Birnstiel, T., et al.
2014, , 564, A51 [Pinilla et al.(2015)]pinilla2015_alma Pinilla, P., van der Marel, N., Pérez, L. M., et al. 2015, , 584, A16 [Pinilla et al.(2016a)]pinilla2016a Pinilla, P., Klarmann, L., Birnstiel, T., et al. 2016a, , 585, A35 [Pinilla et al.(2016b)]pinilla2016b Pinilla, P., Flock, M., Ovelar, M. d. J., & Birnstiel, T. 2016b, , 596, A81 [Pohl et al.(2016)]pohl2016 Pohl, A., Kataoka, A., Pinilla, P., et al. 2016, , 593, A12 [Regály et al.(2012)]regaly2012 Regály, Z., Juhász, A., Sándor, Z., & Dullemond, C. P. 2012, , 419, 1701 [Reipurth & Zinnecker(1993)]reipurth1993 Reipurth, B., & Zinnecker, H. 1993, , 278, 81 [Ricci et al.(2010)]ricci2010 Ricci, L., Testi, L., Natta, A., & Brooks, K. J. 2010, , 521, A66 [Rosotti et al.(2013)]rosotti2013 Rosotti, G. P., Ercolano, B., Owen, J. E., & Armitage, P. J. 2013, , 430, 1392 [Rosotti et al.(2016)]rosotti2016 Rosotti, G. P., Juhasz, A., Booth, R. A., & Clarke, C. J. 2016, , 459, 2790 [Simon et al.(1995)]simon1995 Simon, M., Ghez, A. M., Leinert, C., et al. 1995, , 443, 625 [Sokal(1994)]sokal1994 Sokal, A. D. 1994, arXiv:hep-lat/9405016 [Stanke & Zinnecker(2000)]stanke2000 Stanke, T., & Zinnecker, H. 2000, IAU Symposium, 200, 38 [Stolker et al.(2016)]stolker2016 Stolker, T., Dominik, C., Avenhaus, H., et al. 2016, , 595, A113 [Strom et al.(1989)]strom1989 Strom, K. M., Strom, S. E., Edwards, S., Cabrit, S., & Skrutskie, M. F. 1989, , 97, 1451 [Tazzari et al.(2016)]tazzari2016 Tazzari, M., Testi, L., Ercolano, B., et al. 2016, , 588, A53 [Testi et al.(2014)]testi2014 Testi, L., Birnstiel, T., Ricci, L., et al. 2014, Protostars and Planets VI, 339, eds. H. Beuther, R. Klessen, C. Dullemond, & Th. Henning (Univ. of Arizona Press, Tucson). [Trotta et al.(2013)]trotta2013 Trotta, F., Testi, L., Natta, A., Isella, A., & Ricci, L. 2013, , 558, A64 [van der Marel et al.(2013)]marel2013 van der Marel, N., van Dishoeck, E. F., Bruderer, S., et al. 2013, Science, 340, 1199 [van der Marel et al.(2015)]marel2015 van der Marel, N., van Dishoeck, E. F., Bruderer, S., Pérez, L., & Isella, A. 2015, , 579, A106 [van der Marel et al.(2015)]marel2015_irs48 van der Marel, N., Pinilla, P., Tobin, J., et al. 2015, , 810, L7 [van der Marel et al.(2016)]marel2016 van der Marel, N., van Dishoeck, E. F., Bruderer, S., et al. 2016, , 585, A58 [Walsh et al.(2016)]walsh2016 Walsh, C., Juhász, A., Meeus, G., et al. 2016, , 831, 200 [Whipple(1972)]whipple1972 Whipple, F. L. 1972, From Plasma to Planet, 211 [White et al.(2016)]white2016 White, J. A., Boley, A. C., Dent, W. R. F., Ford, E. B., & Corder, S. 2016, arXiv:1612.01648 [Wilking et al.(1989)]wilking1989 Wilking, B. A., Lada, C. J., & Young, E. T. 1989, , 340, 823 [Wilking et al.(2005)]wilking2005 Wilking, B. A., Meyer, M. R., Robinson, J. G., & Greene, T. P. 2005, , 130, 1733 [Williams & Best(2014)]williams2014 Williams, J. P., & Best, W. M. J. 2014, , 788, 59 [Zhang et al.(2016)]zhang2016 Zhang, K., Bergin, E. A., Blake, G. A., et al. 2016, , 818, L16 [Zhu et al.(2012)]zhu2012 Zhu, Z., Nelson, R. P., Dong, R., Espaillat, C., & Hartmann, L. 2012, , 755, 6 [Zsom et al.(2011)]zsom2011 Zsom, A., Sándor, Z., & Dullemond, C. P. 2011, , 527, A10
"authors": [
"P. Pinilla",
"L. M. Pérez",
"S. Andrews",
"N. van der Marel",
"E. F. van Dishoeck",
"S. Ataiee",
"M. Benisty",
"T. Birnstiel",
"A. Juhász",
"A. Natta",
"L. Ricci",
"L. Testi"
],
"categories": [
"astro-ph.EP",
"astro-ph.GA",
"astro-ph.SR"
],
"primary_category": "astro-ph.EP",
"published": "20170327180001",
"title": "A Multi-Wavelength Analysis of Dust and Gas in the SR 24S Transition Disk"
} |
[footnoteinfo]The material in this paper was not presented at any conference. Huong [email protected] and James S. [email protected] (School of Electrical Engineering and Computer Science, The University of Newcastle, Australia); Cristian R. [email protected] and Bo [email protected] (Department of Automatic Control and ACCESS, School of Electrical Engineering, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden). Keywords: SPARSEVA estimate; upper bound; finite sample data. In this paper, we develop an upper bound for the SPARSEVA (SPARSe Estimation based on a VAlidation criterion) estimation error in a general scheme, i.e., when the cost function is strongly convex and the regularized norm is decomposable for a pair of subspaces. We show how this general bound can be applied to a sparse regression problem to obtain an upper bound for the traditional SPARSEVA problem. Numerical results are used to illustrate the effectiveness of the suggested bound. § INTRODUCTION Regularization is a well known technique for estimating model parameters from measured input-output data. It is applied in all fields related to constructing mathematical models from observed data, such as system identification, machine learning and econometrics. The idea of the regularization technique is to solve a convex optimization problem constructed from a cost function and a weighted regularizer (regularized M-estimators). Various types of regularizers have been suggested so far, such as the l_1 <cit.>, l_2 <cit.> and nuclear norms <cit.> <cit.>. During the last few decades, regularization has been utilised extensively in the system identification community <cit.> to impose properties of smoothness and sparsity on the estimated models (see, e.g., <cit.>). Most of this work has focused on analysing the asymptotic properties of an estimator, i.e., when the length of the data goes to infinity. The purpose of this type of analysis is to evaluate the performance of the estimation method to determine if the estimate is acceptable. However, in practice, the data sample size for any estimation problem is always finite; hence, it is difficult to judge the performance of the estimated parameters based on asymptotic properties, especially when the data length is short. Recently, a number of authors have published research (<cit.>, <cit.>, <cit.>) aimed at analysing the estimation error properties of regularized M-estimators when the sample size of the data is finite. Specifically, they develop upper bounds on the estimation error for high dimensional problems, i.e., when the number of parameters is comparable to or larger than the sample size of the data. Most of these activities are from the statistics and machine learning communities. Among these works, the paper <cit.> provides a very elegant and interesting framework for establishing consistency and convergence rates of estimates obtained from a regularized procedure under high dimensional scaling. It determines a general upper bound for regularized M-estimators and then shows how it can be used to derive bounds for some specific scenarios. In this paper, we utilize the framework suggested in <cit.> to develop an upper bound for the estimation error of the M-estimators used in a system identification problem. Here, the M-estimator problems are implemented using the SPARSEVA (SPARSe Estimation based on a VAlidation criterion) framework <cit.>, <cit.>.
The approach in <cit.> has been developed for penalized estimators, so it has to be suitably modified for SPARSEVA, which is not a penalized estimator, but the solution of a constrained optimization problem. Our aim is to derive an upper bound for the estimation error of the general SPARSEVA estimate. We then apply this bound to a sparse linear regression problem to obtain an upper bound for the traditional SPARSEVA problem, under some assumptions on the regression matrix. These assumptions can be considered as the price paid in order to derive the upper bound. In addition, we also provide numerical simulation results to illustrate the suggested bound on the SPARSEVA estimation error. The paper is organized as follows. Section 2 formulates the problem. Section 3 provides definitions and properties required for the later analysis. The general bound for the SPARSEVA estimation error is then developed in Section 4. In Section 5, we apply the general bound to the special case when the model is cast in a linear regression framework. Section 6 illustrates the developed bound by numerical simulation. Finally, Section 7 provides conclusions. §.§ Notation In this paper, we will use the following notation: * f(x|0,σ^2) = (2πσ^2)^-1/2 exp(-x^2/(2σ^2)) denotes the probability density function (pdf) of the Normal distribution 𝒩(0,σ^2). * χ^2_β(N) denotes the value such that P(X < χ^2_β(N)) = 1-β, where X is Chi square distributed with N degrees of freedom. § PROBLEM FORMULATION Let Z_1^N = {Z_1, ..., Z_N}∈𝒵^N denote N identically distributed observations with marginal distribution ℙ in 𝒵⊆ℝ^k. ℒ: ℝ^n×𝒵^N →ℝ denotes a convex and differentiable cost function. Let θ^* ∈ arg min_θ∈ℝ^n 𝔼_Z_1^N[ℒ(θ;Z_1^N)] be a minimizer of the population risk. The task here is to estimate the unknown parameter θ^* from the data Z_1^N. A well known approach to this problem is to use a regularization technique, i.e., to solve the following convex optimization problem, θ̂_λ_N ∈ arg min_θ∈ℝ^n {ℒ(θ; Z_1^N) + λ_N ℛ(θ)}, where λ_N > 0 is a user-defined regularization parameter and ℛ: ℝ^n →ℝ^+ is a norm. A difficulty when estimating the parameter θ^* using the above regularization technique is that one needs to find the regularization parameter λ_N. The traditional method to choose λ_N is to use cross validation, i.e., to estimate the parameter θ^* with different values of λ_N, and then select the value of λ_N that provides the best fit to the validation data. This cross validation method is quite time consuming and very dependent on the data. Here we are specifically interested in the SPARSEVA (SPARSe Estimation based on a VAlidation criterion) framework, suggested in <cit.> and <cit.>, which provides automatic tuning of the regularization parameters. Utilizing the SPARSEVA framework, an estimate of θ^* can be computed using the following convex optimization problem: θ̂_ϵ_N ∈ arg min_θ∈ℝ^n ℛ(θ) s.t. ℒ(θ; Z_1^N) ≤ ℒ(θ̂_NR; Z_1^N)(1+ϵ_N), where ϵ_N > 0 is the regularization parameter and θ̂_NR is the “non-regularized” estimate obtained by minimizing the cost function ℒ(θ; Z_1^N), i.e. θ̂_NR ∈ arg min_θ∈ℝ^n ℒ(θ; Z_1^N). It can be shown <cit.> that (<ref>) and (<ref>) are equivalent in the sense that there exists a bijection between λ_N and ϵ_N such that both estimators coincide. However, as discussed in <cit.>, that bijection is data-dependent and it does not seem possible to derive an explicit expression for it.
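For later reference, the SPARSEVA program (<ref>)-(<ref>) is straightforward to set up with an off-the-shelf convex solver. The following sketch (our own illustration using the cvxpy package, specialized to the least-squares/l_1 case studied in Section 5; it is not the code used for the experiments) computes both θ̂_NR and θ̂_ϵ_N:

import numpy as np
import cvxpy as cp

def sparseva(Phi, Y, eps_N):
    # Phi: n x N regression matrix, Y: length-N output vector.
    n, N = Phi.shape
    # Non-regularized least-squares estimate theta_NR.
    theta_nr = np.linalg.lstsq(Phi.T, Y, rcond=None)[0]
    loss_nr = np.sum((Y - Phi.T @ theta_nr) ** 2) / (2 * N)
    # SPARSEVA: minimize the l1 norm subject to the validation constraint.
    theta = cp.Variable(n)
    loss = cp.sum_squares(Y - Phi.T @ theta) / (2 * N)
    problem = cp.Problem(cp.Minimize(cp.norm1(theta)),
                         [loss <= loss_nr * (1 + eps_N)])
    problem.solve()
    return theta.value

# Natural choices: eps_N = 2 * n / N (AIC), n * np.log(N) / N (BIC),
# or n / N (prediction error criterion).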
The advantage of the SPARSEVA framework, with respect to (<ref>), is that there are some natural choices of the regularization parameter ϵ_N based on the chosen validation criterion. For example, as suggested in <cit.> <cit.>, ϵ_N can be chosen as 2n/N (Akaike Information Criterion (AIC)) or n log(N)/N (Bayesian Information Criterion (BIC)); or, as suggested in <cit.>, n/N (Prediction Error Criterion). For the traditional regularization method described in (<ref>), <cit.> recently developed an upper bound on the estimation error between the estimate θ̂_λ_N and the unknown parameter vector θ^*. This bound is a function of some constants related to the nature of the data, the regularization parameter λ_N, the cost function ℒ and the data length N. The beauty of this bound is that it quantifies the relationship between the estimation error and the finite data length N. Through this relationship, it is easy to confirm most of the properties of the estimate θ̂_λ_N in the asymptotic scenario, i.e. N →∞, which were developed in the literature some time ago (<cit.>, <cit.>). Inspired by <cit.>, our goal is to derive a similar bound for the SPARSEVA estimate θ̂_ϵ_N, i.e., we want to know how much the SPARSEVA estimate θ̂_ϵ_N differs from the true parameter θ^* when the data sample size N is finite. Note that the notation and techniques used in this paper are similar to <cit.>; however, in <cit.>, the convex optimization problem is posed in the traditional regularization framework (<ref>), while in this paper, the optimization problem is based on the SPARSEVA regularization (<ref>). § DEFINITIONS AND PROPERTIES OF THE NORM ℛ(Θ) AND THE COST FUNCTION ℒ(Θ) In this section, we provide descriptions of some definitions and properties of the norm ℛ(θ) and the cost function ℒ(θ; Z_1^N) needed to establish an upper bound on the estimation error. Note that we only provide a brief summary such that the research described in this paper can be understood. Readers can find a more detailed discussion in <cit.>. §.§ Decomposability of a Norm Let us consider a pair of arbitrary linear subspaces of ℝ^n, (ℳ, ℳ̄), such that ℳ⊆ℳ̄. The orthogonal complement of the space ℳ̄ is then defined as ℳ̄^⊥ = { v ∈ℝ^n | ⟨ u,v⟩ = 0 for all u ∈ℳ̄}, where ⟨·,·⟩ is the inner product that maps ℝ^n ×ℝ^n →ℝ. The norm ℛ is said to be decomposable with respect to (ℳ,ℳ̄^⊥) if ℛ(θ+γ) = ℛ(θ) + ℛ(γ) for all θ∈ℳ and γ∈ℳ̄^⊥. There are many combinations of norms and vector spaces that satisfy this property (cf. <cit.>). An example is the l_1 norm and the sparse vector space defined in (<ref>). For any subset S ⊆{1,2,…,n} with cardinality s, define the model subspace ℳ as ℳ(S) = {θ∈ℝ^n | θ_j = 0 for all j ∉S}. Now, if we define ℳ̄(S)=ℳ(S), then the orthogonal complement ℳ̄^⊥(S)=ℳ^⊥(S), with respect to the Euclidean inner product, can be computed as follows, ℳ^⊥(S) = {γ∈ℝ^n | γ_j = 0 for all j ∈ S}. As shown in <cit.>, the l_1-norm is decomposable with respect to the pair (ℳ(S),ℳ^⊥(S)). §.§ Dual Norm For a given inner product ⟨·,·⟩, the dual of the norm ℛ is defined by ℛ^*(v) = sup_u ∈ℝ^n ∖{0} ⟨ u,v⟩/ℛ(u) = sup_ℛ(u) ≤ 1 ⟨ u, v⟩, where sup is the supremum operator. Based on this definition, one can easily see that the dual of the l_1 norm, with respect to the Euclidean inner product, is the l_∞ norm <cit.>.
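Both properties are easy to verify numerically for the l_1 norm. The sketch below (purely illustrative; the support set and sample sizes are arbitrary choices of ours) checks decomposability for the pair (ℳ(S), ℳ^⊥(S)) and approximates the dual norm by sampling the unit l_1 sphere:

import numpy as np

rng = np.random.default_rng(0)
n, S = 10, [1, 4, 7]
theta = np.zeros(n); theta[S] = rng.normal(size=len(S))   # theta in M(S)
gamma = rng.normal(size=n); gamma[S] = 0.0                # gamma in M_perp(S)

# Decomposability: ||theta + gamma||_1 = ||theta||_1 + ||gamma||_1.
assert np.isclose(np.abs(theta + gamma).sum(),
                  np.abs(theta).sum() + np.abs(gamma).sum())

# Dual norm: sup over the unit l1 ball of <u, v> equals ||v||_inf;
# a crude Monte Carlo search approaches it from below.
v = rng.normal(size=n)
U = rng.normal(size=(100000, n))
U /= np.abs(U).sum(axis=1, keepdims=True)   # project onto the l1 sphere
print(U.dot(v).max(), np.abs(v).max())      # MC estimate vs ||v||_inf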
§.§ Strong Convexity A twice differentiable function ℒ(θ): ℝ^n→ℝ is strongly convex on ℝ^n when there exists an m>0 such that its Hessian ▽^2ℒ(θ) satisfies ▽^2ℒ(θ) ≽ mI for all θ∈ℝ^n <cit.>. This is equivalent to the statement that the minimum eigenvalue of ▽^2ℒ(θ) is not smaller than m for all θ∈ℝ^n. An interesting consequence of the strong convexity property in (<ref>) is that for all θ, Δ∈ℝ^n, we have ℒ(θ+Δ) ≥ℒ(θ) + ▽ℒ(θ)^TΔ + (m/2)‖Δ‖_2^2. The inequality in (<ref>) has a geometric interpretation in that the graph of the function ℒ(θ) has a positive curvature at any θ∈ℝ^n. The term m/2, for the largest m satisfying (<ref>), is typically known as the curvature of ℒ(θ). §.§ Subspace Compatibility Constant For a given norm ℛ and an error norm ‖·‖, the subspace compatibility constant of a subspace ℳ⊆ℝ^n with respect to the pair (ℛ, ‖·‖) is defined as Ψ(ℳ) = sup_u ∈ℳ∖{0} ℛ(u)/‖ u ‖. This quantity measures how well the norm ℛ is compatible with the error norm ‖·‖ over the subspace ℳ. As shown in <cit.>, when ℳ is ℝ^s, the regularized norm ℛ is the l_1 norm, and the error norm is the l_2 norm, the subspace compatibility constant is Ψ(ℳ) = √(s). Notice also that Ψ(ℳ) is finite, due to the equivalence of finite dimensional norms. §.§ Projection Operator The projection of a vector u onto a space ℳ, with respect to the Euclidean norm, is defined by Π_ℳ(u) = arg min_v∈ℳ‖ u-v‖_2. In the sequel, to simplify the notation, we will write u_ℳ to denote Π_ℳ(u). § ANALYSIS OF THE REGULARIZATION TECHNIQUE USING THE SPARSEVA In this section, we apply the properties described in Section 3 to derive an upper bound on the error between the SPARSEVA estimate θ̂_ϵ_N and the unknown parameter θ^*. This upper bound is described in the following theorem. Assume ℛ is a norm and is decomposable with respect to the subspace pair (ℳ,ℳ^⊥), and the cost function ℒ(θ) is differentiable and strongly convex with curvature κ_ℒ. Consider the SPARSEVA problem in (<ref>); then the following properties hold: * When ϵ_N > 0, there exists a Lagrange multiplier, λ_N = λ_ϵ_N, such that (<ref>) and (<ref>) have the same solution. * Any optimal solution θ̂_ϵ_N≠ 0 of the SPARSEVA problem (<ref>) satisfies the following inequalities: * If ϵ_N is chosen such that λ_ϵ_N ≤ 1/ℛ^*(∇ℒ(θ^*)), then ‖θ̂_ϵ_N-θ^* ‖^2_2 ≤ 4λ^2_ϵ_NΨ^2(ℳ)/κ^2_ℒ + 4λ_ϵ_Nℛ(θ^*_ℳ^⊥)/κ_ℒ. * If ϵ_N is chosen such that λ_ϵ_N > 1/ℛ^*(∇ℒ(θ^*)), then ‖θ̂_ϵ_N-θ^* ‖^2_2 ≤ 2{ℛ^*(∇ℒ(θ^*))}^2 Ψ^2(ℳ)/κ^2_ℒ + 8{ℛ^*(∇ℒ(θ^*))}^2 Ψ^2(ℳ^⊥)/κ^2_ℒ + 4λ_ϵ_Nℛ(θ^*_ℳ^⊥)/κ_ℒ. Proof. See the Appendix (Section A.2). □ Note that Theorem <ref> is intended to provide an upper bound on the estimation error for the general SPARSEVA problem (<ref>). At this stage, it is hard to evaluate, or quantify, the value on the right hand side of the inequalities (<ref>) and (<ref>), as they still contain the term λ_ϵ_N and other abstract terms. However, in later sections of this paper, we will use this general upper bound to provide bounds on the estimation errors for some specific scenarios. The bound in Theorem <ref> is actually a family of bounds. For each choice of the pair of subspaces (ℳ,ℳ^⊥), there is one bound for the estimation error.
Hence, in the usual sense, to apply Theorem <ref> to any specific scenario, the goal is to choose ℳ and ℳ^⊥ to obtain an optimal rate for the bound. § AN UPPER BOUND FOR SPARSE REGRESSION In this section, we illustrate how to apply Theorem <ref> to derive an upper bound on the error between the SPARSEVA estimate θ̂_ϵ_N and the true parameter θ^* for the following linear regression model, Y_N = Φ_N^Tθ^* + e, where θ^* ∈ℝ^n is the unknown parameter that is required to be estimated; e ∈ℝ^N is the disturbance noise; Φ_N ∈ℝ^n× N is the regression matrix and Y_N∈ℝ^N is the output vector. Here, we make the following assumption on the true parameter θ^*. The true parameter θ^* is “weakly" sparse, i.e. θ^* ∈𝔹_q(R_q), where 𝔹_q(R_q) := {θ∈ℝ^n | ∑_i=1^n |θ_i|^q ≤ R_q}, with q ∈ [0,1] being a constant. Using the SPARSEVA framework in (<ref>) with ℛ chosen as the l_1 norm and the cost function ℒ(θ) chosen as ℒ(θ) = (1/2N)‖ Y_N - Φ_N^Tθ‖_2^2, an estimate of θ^* in (<ref>) can be found by solving the following problem, θ̂_ϵ_N ∈ arg min_θ∈ℝ^n ‖θ‖_1 s.t. ℒ(θ) ≤ ℒ(θ̂_NR)(1+ϵ_N), with θ̂_NR=(Φ_NΦ_N^T)^-1Φ_NY_N and ϵ_N > 0 being the user-defined regularization parameter. Now ϵ_N can be chosen as either 2n/N or log(N)n/N, as suggested in <cit.>, or n/N, as suggested in <cit.>. Note that the sparse regression problem is very common in system identification and is often used to obtain a low order linear model by regularization. Regarding Assumption <ref>, note that when q=0, under the convention that 0^0 = 0, the set in (<ref>) corresponds to an exact sparsity set, where all the elements belonging to the set have at most R_0 non-zero entries. Generally, for q ∈ (0,1], the set 𝔹_q(R_q) forces the ordered absolute values of θ^* to decay at a certain rate. §.§ An Analysis on the Strong Convexity Property and the Curvature of the l_2 norm Cost Function Consider the convex optimization problem in (<ref>); the Hessian matrix of the cost function ℒ(θ) is computed as ▽^2ℒ(θ) = (1/N)Φ_NΦ_N^T. To prove that ℒ(θ) is strongly convex, we need to prove that ∃κ_ℒ >0 s.t. (1/N)Φ_NΦ_N^T ≽ 2κ_ℒI. We see that the requirement in (<ref>) coincides with the requirement of persistent excitation of the input signal in a system identification problem. If an experiment is well-designed, then the input signal u(t) needs to be persistently exciting of order n, i.e., the matrix Φ_NΦ_N^T is a positive definite matrix. This means that the condition in (<ref>) is always satisfied for any linear regression problem derived from a well posed system identification problem; that is, for any choice of the regression matrix Φ_N that satisfies the persistent excitation condition, there exists a positive curvature κ_ℒ of the cost function ℒ(θ). Consider Φ_N ∈ℝ^n× N to be a matrix where each row Φ_N,j is sampled from a Normal distribution with zero mean and covariance matrix Σ∈ℝ^N× N, i.e., Φ_N,j∼𝒩(0,Σ), ∀ j=1,..,n. We then denote the distribution of the smallest eigenvalue of N^-1Φ_NΦ_N^T by P(x|Σ,N,n). This means that, given a probability 1-α, 0≤α≤ 1, there exists a value w_min such that N^-1Φ_NΦ_N^T ≽ w_min I with probability 1-α, for any matrix Φ_N constructed following the above assumption. Then the global curvature κ, i.e. the curvature that satisfies (<ref>) for any regression matrix Φ_N, can be expressed as (1/2) w_min. For the rest of the paper, we will denote by κ_α a lower bound on the global curvature κ with probability 1-α, 0≤α≤ 1.
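In practice, κ_α can be approximated by Monte Carlo sampling of P(x|Σ,N,n). The following sketch (our own illustration; Σ, n and the number of trials are arbitrary choices) draws regression matrices with rows Φ_N,j ∼ 𝒩(0,Σ), records the smallest eigenvalue of N^-1Φ_NΦ_N^T, and takes half of its α-quantile:

import numpy as np

def kappa_alpha(Sigma, n, alpha=0.02, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    N = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    w_min = np.empty(trials)
    for t in range(trials):
        Phi = rng.standard_normal((n, N)) @ L.T   # rows ~ N(0, Sigma)
        w_min[t] = np.linalg.eigvalsh(Phi @ Phi.T / N)[0]
    # With probability ~1-alpha the smallest eigenvalue exceeds its
    # alpha-quantile, so take kappa_alpha = quantile / 2.
    return 0.5 * np.quantile(w_min, alpha)

# White-noise input (FIR model): kappa_alpha(np.eye(450), n=35)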
§.§ Assumptions For the linear regression in (<ref>), the following assumptions are made:The rows Φ_N,j, j=1,…,n, of the regressor matrix Φ_N are distributed as Φ_N,j∼𝒩(0, Σ), where Σ∈ℝ^N × N is a constant, symmetric, positive definite matrix.Note that an obvious practical case where Assumption <ref> is satisfied is when the model is FIR and the input signal is white or coloured noise.The noise vector e∈ℝ^N is Gaussian with i.i.d. 𝒩(0,σ_e^2) entries.[The assumption of Gaussian noise is fairly standard in system identification. However, this assumption can be relaxed to `sub-Gaussian' noise (i.e., when the tails of the noise distribution decay like e^-α x^2) at the expense of longer derivations.]§.§ Developing the Upper Bound The following theorem provides an upper bound on the estimation error ‖θ̂_ϵ_N - θ^* ‖_2 for the optimization problem in (<ref>) in the case of weakly sparse parameters.Suppose Assumptions <ref>, <ref> and <ref> hold. Then, for large N, with probability (1-α)(1 - 4 n β) (0 ≤α≤ 1, 0 ≤β≤ 1), if θ̂_ϵ_N≠ 0 we have the following inequality‖θ̂_ϵ_N - θ^* ‖^2_2 ≤max(a_1, a_2),where a_1 = 8 n_ησ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N) ln(2/β)/(κ_α^2 N^2) + (√(32σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/(κ_α N))‖θ_[n_η+1:n]^* ‖_1, a_2 = (16 n - 12 n_η) σ_e^2 χ^2_β(Σ, I) ln(2/β)/(κ_α^2 N^2)+(√(32σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/(κ_α N))‖θ_[n_η+1:n]^* ‖_1,where κ_α is a lower bound, holding with probability 1-α, on the global curvature of the regression matrix (i.e., half the smallest eigenvalue of N^-1Φ_NΦ_N^T), n_η is any integer between 1 and n, θ_[n_η+1:n]^* is the vector formed from the n-n_η smallest (in magnitude) entries of θ^*, and s_max is the maximum singular value of the matrix Σ. Proof. This proof relies on three preliminary results introduced in Appendix A.3. For an integer n_η∈{1,…,n}, define S_η as the set of the indices of the n_η largest (in magnitude) entries of θ^*, and its complementary set S^c_η asS^c_η = { 1,2,...,n }∖ S_η,with the corresponding subspaces ℳ(S_η) and ℳ^⊥(S_η) defined as,ℳ(S_η)= {θ∈ℝ^n |θ_j=0 ∀ j ∉S_η},ℳ^⊥(S_η)= {γ∈ℝ^n |γ_j=0 ∀ j ∈ S_η}.Using the definition of the subspace compatibility constant described in Section 3, we have, Ψ^2(ℳ(S_η))=| S_η| = n_η,Ψ^2(ℳ^⊥(S_η)) = | S^c_η| = n - n_η,where | S | denotes the cardinality of S.Now, for Theorem <ref> to generate an upper bound for the problem (<ref>), we need to establish an upper bound on ‖θ^*_ℳ^⊥(S_η)‖_1. Based on the definition of the subspace ℳ^⊥(S_η), we have,‖θ^*_ℳ^⊥(S_η)‖_1 = ‖θ_[n_η+1:n]^* ‖_1,where θ_[n_η+1:n]^* denotes the vector formed from the n-n_η smallest (in magnitude) entries of θ^*. Define κ_α as a lower bound, holding with probability 1-α, 0 ≤α≤ 1, on the global curvature of the regression matrix, i.e. half the smallest eigenvalue of N^-1Φ_NΦ_N^T. Substituting the results of Propositions <ref>-<ref> from Appendix A.3, together with (<ref>) and (<ref>), into the bound in Theorem <ref>, with n_η being any integer between 1 and n, we obtain the following bounds:* If ϵ_N is chosen such thatλ_ϵ_N≤ 1/ℛ^*(∇ℒ(θ^*))= 1/‖∇ℒ(θ^*)‖_∞,then, with probability at least (1-α)(1 - 2 n β), ‖θ̂_ϵ_N-θ^* ‖^2_2≤ 8 n_ησ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N) ln(2/β)/(κ_α^2 N^2) + (√(32σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/(κ_α N))‖θ_[n_η+1:n]^* ‖_1. * If ϵ_N is chosen such thatλ_ϵ_N > 1/ℛ^*(∇ℒ(θ^*))= 1/‖∇ℒ(θ^*)‖_∞,then, with probability at least (1-α)(1 - 4 n β), ‖θ̂_ϵ_N-θ^* ‖^2_2≤(16 n - 12 n_η) σ_e^2 χ^2_β(Σ, I) ln(2/β)/(κ_α^2 N^2)+(√(32σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/(κ_α N))‖θ_[n_η+1:n]^* ‖_1.
Therefore, for n_η being any integer between 1 and n, with probability at least (1-α)(1 - 4n β), we have‖θ̂_ϵ_N-θ^* ‖^2_2 ≤max(a_1, a_2),wherea_1 = 8 n_ησ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N) ln(2/β)/(κ_α^2 N^2) + (√(32σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/(κ_α N))‖θ_[n_η+1:n]^* ‖_1, a_2 = (16 n - 12 n_η) σ_e^2 χ^2_β(Σ, I) ln(2/β)/(κ_α^2 N^2)+(√(32σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/(κ_α N))‖θ_[n_η+1:n]^* ‖_1.□ The bound in Theorem <ref> is also a family of bounds, one for each value of n_η. When Σ = σ_u I, i.e. the model is FIR and the input u(t) is white noise, then s_max=σ_u and the generalized Chi square distribution χ^2(Σ, I) becomes the scaled Chi square distribution σ_uχ^2(N).Note that the developed bound in Theorem <ref> depends on the true parameter θ^*, which is unknown but constant. Using a proof similar to that of Proposition 2.3 in <cit.>, under Assumption <ref> we can derive an upper bound on the term ‖θ_[n_η+1:n]^* ‖_1. Specifically, we have,‖θ_[n_η+1:n]^* ‖_1 = ∑_i=n_η+1^n |θ^*_[i]| = ∑_i=n_η+1^n |θ^*_[i]|^1-q |θ^*_[i]|^q.Since S_η is the set of the indices of the n_η largest (in magnitude) entries of θ^*, we have |θ^*_[i]| ≤ |θ^*_[n_η]|, ∀ i = n_η+1,…,n; hence,‖θ_[n_η+1:n]^* ‖_1≤ |θ^*_[n_η]|^1-q∑_i=n_η+1^n |θ^*_[i]|^q.Using the same argument, we have,|θ^*_[n_η]|^1-q = ((1/n_η)∑_i=1^n_η |θ^*_[n_η]|^q )^(1-q)/q≤((1/n_η)∑_i=1^n_η |θ^*_[i]|^q )^(1-q)/q.Therefore,‖θ_[n_η+1:n]^* ‖_1≤((1/n_η)∑_i=1^n_η |θ^*_[i]|^q )^(1-q)/q∑_i=n_η+1^n |θ^*_[i]|^q ≤((1/n_η)∑_i=1^n |θ^*_[i]|^q )^(1-q)/q∑_i=1^n |θ^*_[i]|^q ≤(1/n_η)^1/q-1‖θ^* ‖_q^1-q‖θ^* ‖_q^q = (n_η)^1-1/q‖θ^* ‖_q ≤ (n_η)^1-1/q (R_q)^1/q.This means we can always upper bound the term ‖θ_[n_η+1:n]^* ‖_1 by a known constant which depends on the nature of the true parameter θ^*. Therefore, from Theorem <ref>, we can see that the estimation error satisfies ‖θ̂_ϵ_N-θ^* ‖^2_2 = O_p(N^-1/2) <cit.>. This confirms the result in <cit.> that, asymptotically, when ϵ_N > 0, the SPARSEVA estimate θ̂_ϵ_N converges to the true parameter θ^*. § NUMERICAL EVALUATIONIn this section, numerical examples are presented to illustrate the bound on ‖θ̂_ϵ_N-θ^* ‖^2_2 as stated in Theorem <ref>. In Section 6.1, we consider the case when the input is Gaussian white noise, whilst in Section 6.2 the input is a correlated signal with zero mean. §.§ Gaussian White Noise Input In this section, a random discrete time system with a random model order between 1 and 10 is generated using the command drss from Matlab. The system has poles with magnitude less than 0.9. Gaussian white noise is added to the system output to give different levels of SNR, e.g. 30dB, 20dB and 10dB. For each noise level, 50 different input excitation signals (Gaussian white noise with variance 1) and output noise realizations are generated. For each set of input and output data, the system parameters are estimated using a different sample size, i.e., N=[450, 1000, 5000, 10000, 50000, 100000]. The FIR model structure is used here in order to construct the SPARSEVA problem (<ref>). The number of parameters n of the FIR model is set to be 35. The regularization parameter ϵ_N is chosen as n/N <cit.>.We then compute the upper bound on ‖θ̂_ϵ_N-θ^* ‖_2 using (<ref>) with different values of n_η, i.e. n_η = [10, 15, 25]. The probability parameters α and β are chosen to be 0.02 and 0.001 respectively.
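Given a value of κ_α (its empirical computation is described next), the upper bound max(a_1, a_2) can be evaluated directly in the white-noise case. The sketch below (Python with numpy/scipy assumed; all parameter values are illustrative) implements the formulas of Theorem <ref> with s_max = σ_u and χ^2_β(Σ, I) = σ_u χ^2_β(N).

```python
import numpy as np
from scipy.stats import chi2

def sparseva_bound(theta_star, N, n, n_eta, sigma_e2, sigma_u,
                   kappa_alpha, beta=0.001, eps_N=None):
    """Evaluate max(a1, a2) from the theorem for the white-noise case
    Sigma = sigma_u * I, where chi2_beta(Sigma, I) = sigma_u * chi2_beta(N)."""
    if eps_N is None:
        eps_N = n / N
    log_term = np.log(2.0 / beta)
    c_nm = chi2.ppf(1.0 - beta, N - n)            # chi2_beta(N - n)
    c_gen = sigma_u * chi2.ppf(1.0 - beta, N)     # chi2_beta(Sigma, I), white noise
    tail = np.sort(np.abs(theta_star))[: n - n_eta].sum()  # smallest n - n_eta entries
    cross = np.sqrt(32.0 * sigma_e2 * sigma_u * c_nm * (1 + eps_N) * log_term) \
            / (kappa_alpha * N) * tail
    a1 = 8.0 * n_eta * sigma_e2 * sigma_u * c_nm * (1 + eps_N) * log_term \
         / (kappa_alpha**2 * N**2) + cross
    a2 = (16.0 * n - 12.0 * n_eta) * sigma_e2 * c_gen * log_term \
         / (kappa_alpha**2 * N**2) + cross
    return max(a1, a2)

# Illustrative call mirroring the setup above (n = 35, n_eta = 25, N = 1000).
theta_star = np.r_[np.linspace(1.0, 0.1, 10), np.zeros(25)]
print(sparseva_bound(theta_star, N=1000, n=35, n_eta=25,
                     sigma_e2=0.01, sigma_u=1.0, kappa_alpha=0.3))
```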
Regarding the computation of the global curvature κ corresponding to the distribution 𝒩(0,Σ): in practice it is very difficult to compute the exact distribution P(x|Σ,N,n), hence here we compute P(x|Σ,N,n) empirically. The idea is to generate a large number of random matrices Φ_N, compute the smallest eigenvalue of N^-1Φ_NΦ_N^T for each, and then build a histogram of these values, which approximates P(x|Σ,N,n). From this we compute the value of w_min that ensures the inequality N^-1Φ_NΦ_N^T ≽ w_minI holds with probability 1-α. Finally, κ_α is computed using the formula κ_α = w_min/2.With the setting described above, the probability of the upper bound being correct is (1-α)(1-4nβ)=0.84. This upper bound will be compared with ‖θ̂_ϵ_N-θ^* ‖_2. Note that we plot both the upper bound and the true estimation errors on a logarithmic scale.Plots of the estimation error versus the data length N for different noise levels are displayed in Figures <ref> to <ref>. In Figures <ref> to <ref>, the red lines are the true estimation errors from 50 estimates using the SPARSEVA framework. The magenta, blue and cyan lines are the upper bounds developed in Theorem <ref>, corresponding to n_η = [10,15,25], respectively. The plots confirm the bound developed in Theorem <ref> for all noise levels. When N becomes large, the estimation error and the corresponding upper bound become smaller. When N goes to infinity, the estimation error tends to 0. Note that the bounds differ slightly for the chosen values of n_η, but not significantly; as can be seen, the bounds are relatively insensitive to the choice of n_η.In addition, we plot another graph, shown in Fig. <ref>, to compare the proposed upper bound and the true estimation errors corresponding to different values of ϵ_N, i.e. n/N (PEC), 2n/N (AIC) and log(N)n/N (BIC). The blue lines are the upper bounds developed in Theorem 5.1, corresponding to n_η=25, for the three different values of ϵ_N. The magenta (BIC), green (AIC) and red (PEC) lines are the true estimation errors from 50 estimates (for each value of ϵ_N) using the SPARSEVA framework. The plot again confirms the validity of the proposed upper bound for all choices of ϵ_N. Note that the upper bound is not extremely tight; it is quite conservative. However, this is the price one usually pays for finite sample bounds in the general SPARSEVA setting, where the regularization parameter ϵ_N can be any positive value. When ϵ_N is larger, the upper bound is closer to the true estimation error. §.§ Coloured Noise Input In this section, a random discrete time system with a random model order between 1 and 10 is generated using the command drss from Matlab. The system has poles with magnitude less than 0.9. White noise is added to the system output with different levels of SNR, e.g. 30dB, 20dB and 10dB. For each noise level, 50 different input excitation signals and output noise realizations are generated. For each set of input and output data, the system parameters are estimated using different sample sizes, i.e. N=[450, 1000, 5000, 10000, 50000]. Here, the input signal is generated by filtering a zero mean Gaussian white noise with unit variance through the filter,F_u(q) = 0.9798/(1-0.2q^-1). Due to this filtering, the covariance matrix of the regression matrix distribution will not be of a diagonal form. Note that this is a completely different scenario to that in Section 6.1.
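For reference, the coloured input just described can be generated by filtering unit-variance Gaussian white noise through F_u(q); a minimal sketch (Python with numpy/scipy assumed) is given below. Note that the gain 0.9798 normalizes the stationary variance of the filtered input to approximately 1.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
N = 5000
w = rng.standard_normal(N)                 # zero-mean, unit-variance white noise
u = lfilter([0.9798], [1.0, -0.2], w)      # u(t) = 0.2 u(t-1) + 0.9798 w(t)

# Stationary variance of an AR(1)-filtered signal: b^2 / (1 - a^2) ~ 1 here,
# while the autocovariance decays as 0.2^k, so the covariance matrix Sigma
# of the regressor rows is no longer diagonal.
print(u.var(), 0.9798**2 / (1 - 0.2**2))
```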
The FIR model structure is used here in order to construct the linear regression for the SPARSEVA problem (<ref>). The number of parameters n of the FIR model is set to be 35. The regularization parameter, ϵ_N, is chosen as n/N <cit.>.We then compute the upper bound on ‖θ̂_ϵ_N-θ^* ‖_2 using (<ref>) with different values of n_η, i.e. n_η = [10, 15, 25]. The probability parameters α and β are chosen to be 0.02 and 0.001 respectively. With this setting, the probability of the upper bound being correct is (1-α)(1-4nβ)=0.84. This upper bound will be compared with ‖θ̂_ϵ_N-θ^* ‖_2. Plots of the upper bound as stated in Theorem <ref> and the true estimation error ‖θ̂_ϵ_N-θ^* ‖_2 are displayed in Figures <ref> to <ref>. In Figures <ref> to <ref>, the red lines are the true estimation errors from 50 estimates using the SPARSEVA framework. The magenta, blue and cyan lines are the upper bounds developed in Theorem <ref>, corresponding to n_η = [10,15,25] respectively. The plots confirm the bound developed in Theorem <ref> for all noise levels. When N becomes large, the estimation error and the corresponding upper bound become smaller. When N goes to infinity, the estimation error tends to 0. Note that the bounds differ slightly for the chosen values of n_η, but not significantly; as can be seen, the bounds are relatively insensitive to the choice of n_η. § CONCLUSIONThe paper provides an upper bound on the SPARSEVA estimation error in the general case, for any choice of strongly convex cost function and decomposable norm. We also evaluate the bound for a specific scenario, i.e., a sparse regression estimation problem. Numerical results confirm the validity of the developed bound for different input signals, different output noise levels and different choices of the regularization parameter. § APPENDIX§.§ Background knowledge First, we cite a lemma directly from <cit.>, to enable the proof of Theorem <ref> to be constructed. For any norm ℛ that is decomposable with respect to (ℳ,ℳ^⊥), and any vectors θ, Δ, we haveℛ(θ+Δ) - ℛ(θ) ≥ℛ(Δ_ℳ^⊥)-ℛ(Δ_ℳ)-2ℛ(θ_ℳ^⊥).Recall that Δ_ℳ^⊥ is the Euclidean projection of Δ onto ℳ^⊥ (see Section <ref>), and similarly for the other terms in (<ref>).Proof. See the supplementary material of <cit.>. □We now quote the following lemma from <cit.>, with modifications to fit the notation used in the SPARSEVA problem (<ref>). This lemma establishes some important properties of the SPARSEVA estimate θ̂_ϵ_N. Based on these properties, in the next section we derive the upper bound on the estimation error. Note that for notational simplicity, we will denote ℒ(θ;Z_1^N) as ℒ(θ).Consider the convex optimization problem in (<ref>). Then the pair (θ̂_ϵ_N, λ_ϵ_N), with θ̂_ϵ_N≠ 0, has the property that θ̂_ϵ_N is the solution of the problem (<ref>) and λ_ϵ_N is the Lagrange multiplier if and only if all of the following hold: * λ_ϵ_N∈ℝ^+;* the function ℛ(θ) + λ{ℒ(θ)-ℒ(θ̂_NR)(1+ϵ_N)} attains its minimum over ℝ^n at θ̂_ϵ_N; and* ℒ(θ̂_ϵ_N)-ℒ(θ̂_NR)(1+ϵ_N) = 0. Proof. See the proof of Theorem 2.2 in <cit.>. The third condition in the cited theorem is a complementary slackness condition, which reduces to condition (3) here if θ̂_ϵ_N≠ 0 (cf. <cit.>).□ §.§ Proof of Theorem <ref> First we need to prove that there exists a Lagrange multiplier for the SPARSEVA problem (<ref>). We can assume without loss of generality that ℒ(θ̂_NR) ≠ 0, since otherwise we can take λ_ϵ_N = 0.
According to <cit.>, the Lagrange multiplier for a convex optimization problem with an inequality constraint exists when Slater's condition is satisfied. Specifically, for the SPARSEVA problem (<ref>), the Lagrange multiplier λ_ϵ_N exists when there exists a θ_1 such that ℒ(θ_1) < ℒ(θ̂_NR)(1+ϵ_N). If ϵ_N > 0 and ℒ(θ̂_NR) ≠ 0, there always exists a parameter vector θ_1 such that ℒ(θ_1) < ℒ(θ̂_NR)(1+ϵ_N) (just take θ_1 = θ̂_NR). Therefore, there exists a Lagrange multiplier λ_ϵ_N for the SPARSEVA problem. Now that we have confirmed the existence of the Lagrange multiplier λ_ϵ_N, consider the function ℱ(Δ) defined as follows,ℱ(Δ) = ℛ(θ^*+Δ) - ℛ(θ^*) + λ_ϵ_N{ℒ(θ^*+Δ)-ℒ(θ^*)}.Using the strong convexity of ℒ(θ),ℒ(θ^*+Δ)-ℒ(θ^*)≥⟨∇ℒ(θ^*), Δ⟩ + κ_ℒ‖Δ‖^2_2.From (<ref>), we have that |⟨∇ℒ(θ^*), Δ⟩|≤ℛ^*(∇ℒ(θ^*))ℛ(Δ).Next, combining the inequality (<ref>) and the triangle inequality, i.e., ℛ(Δ) ≤ℛ(Δ_ℳ)+ℛ(Δ_ℳ^⊥), we have,|⟨∇ℒ(θ^*), Δ⟩| ≤ℛ^*(∇ℒ(θ^*))ℛ(Δ) ≤ℛ^*(∇ℒ(θ^*))(ℛ(Δ_ℳ)+ℛ(Δ_ℳ^⊥)),and therefore,⟨∇ℒ(θ^*), Δ⟩≥ -ℛ^*(∇ℒ(θ^*))(ℛ(Δ_ℳ)+ℛ(Δ_ℳ^⊥)).Now, combining (<ref>), (<ref>), (<ref>), and Lemma <ref>, ℱ(Δ) = ℛ(θ^*+Δ) - ℛ(θ^*) + λ_ϵ_N{ℒ(θ^*+Δ)-ℒ(θ^*)} ≥ℛ(Δ_ℳ^⊥)-ℛ(Δ_ℳ)-2ℛ(θ^*_ℳ^⊥) + λ_ϵ_N{ -ℛ^*(∇ℒ(θ^*)) (ℛ(Δ_ℳ)+ℛ(Δ_ℳ^⊥)) + κ_ℒ‖Δ‖^2_2 } ≥{1-λ_ϵ_Nℛ^*(∇ℒ(θ^*))}ℛ(Δ_ℳ^⊥) - { 1 + λ_ϵ_Nℛ^*(∇ℒ(θ^*))}ℛ(Δ_ℳ) + κ_ℒλ_ϵ_N‖Δ‖^2_2 -2ℛ(θ^*_ℳ^⊥). Notice that, when θ̂_ϵ_N is the SPARSEVA estimate, property 2 in Lemma <ref> states that the function ℛ(θ) + λ{ℒ(θ)-ℒ(θ̂_NR)(1+ϵ_N)} attains its minimum over ℝ^n at θ̂_ϵ_N, which means,∀θ∈ℝ^n,ℛ(θ̂_ϵ_N) + λ_ϵ_N(ℒ(θ̂_ϵ_N)-ℒ(θ̂_NR)(1+ϵ_N))≤ℛ(θ) + λ_ϵ_N(ℒ(θ)-ℒ(θ̂_NR)(1+ϵ_N)).Hence,∀θ∈ℝ^n, ℛ(θ̂_ϵ_N) - ℛ(θ) + λ_ϵ_N{ℒ(θ̂_ϵ_N) - ℒ(θ)}≤0,or, taking θ = θ^* and defining Δ̂_ϵ_N := θ̂_ϵ_N - θ^*,ℱ(Δ̂_ϵ_N) ≤ 0.Combining (<ref>) with (<ref>), we then have,0≥κ_ℒλ_ϵ_N‖Δ̂_ϵ_N‖^2_2 + {1-λ_ϵ_Nℛ^*(∇ℒ(θ^*))}ℛ(Δ̂_ϵ_N,ℳ^⊥)- { 1 + λ_ϵ_Nℛ^*(∇ℒ(θ^*))}ℛ(Δ̂_ϵ_N,ℳ)-2ℛ(θ^*_ℳ^⊥). Now we consider two cases.
Case 1: λ_ϵ_N≤ 1/ℛ^*(∇ℒ(θ^*)).From (<ref>), we have,0≥κ_ℒλ_ϵ_N‖Δ̂_ϵ_N‖^2_2 - { 1 + λ_ϵ_Nℛ^*(∇ℒ(θ^*))}ℛ(Δ̂_ϵ_N,ℳ) -2ℛ(θ^*_ℳ^⊥).By the definition of the subspace compatibility constant,ℛ(Δ̂_ϵ_N,ℳ) ≤Ψ(ℳ)‖Δ̂_ϵ_N,ℳ‖_2.We also have that ‖Δ̂_ϵ_N,ℳ‖_2 = ‖Π_ℳ(Δ̂_ϵ_N)‖_2 ≤ ‖Δ̂_ϵ_N‖_2.Therefore,ℛ(Δ̂_ϵ_N,ℳ) ≤Ψ(ℳ)‖Δ̂_ϵ_N‖_2.Substituting this into (<ref>) gives,0 ≥κ_ℒλ_ϵ_N‖Δ̂_ϵ_N‖^2_2 - { 1 + λ_ϵ_Nℛ^*(∇ℒ(θ^*))}Ψ(ℳ)‖Δ̂_ϵ_N‖_2 -2ℛ(θ^*_ℳ^⊥).Note that for a quadratic polynomial f(x) = ax^2+bx+c, with a > 0, if there exists x ∈ℝ^+ that makes f(x) ≤ 0, then such x must satisfyx ≤ (-b+√(b^2-4ac))/(2a).Since (A+B)^2 ≤ 2A^2 + 2B^2 for all A,B ∈ℝ, x^2 ≤ 2[ b^2/(4a^2)+(b^2-4ac)/(4a^2)] = (b^2-2ac)/a^2.Applying this inequality to (<ref>), we have,‖Δ̂_ϵ_N‖^2_2≤ (1/(κ^2_ℒλ^2_ϵ_N)){ 1 + λ_ϵ_Nℛ^*(∇ℒ(θ^*))}^2 Ψ^2(ℳ) + (4/(κ_ℒλ_ϵ_N))ℛ(θ^*_ℳ^⊥) ≤ (4/(κ^2_ℒλ^2_ϵ_N))Ψ^2(ℳ) + (4/(κ_ℒλ_ϵ_N))ℛ(θ^*_ℳ^⊥).Case 2: λ_ϵ_N > 1/ℛ^*(∇ℒ(θ^*)). Using a similar analysis as in Case 1,ℛ(Δ̂_ϵ_N,ℳ^⊥) ≤Ψ(ℳ^⊥)‖Δ̂_ϵ_N‖_2.Substituting (<ref>) and (<ref>) into (<ref>), we obtain,0≥κ_ℒλ_ϵ_N‖Δ̂_ϵ_N‖^2_2 + {1-λ_ϵ_Nℛ^*(∇ℒ(θ^*))}Ψ(ℳ^⊥)‖Δ̂_ϵ_N‖_2- { 1 + λ_ϵ_Nℛ^*(∇ℒ(θ^*))}Ψ(ℳ)‖Δ̂_ϵ_N‖_2 - 2ℛ(θ^*_ℳ^⊥).Now using the inequality (<ref>) yields,‖Δ̂_ϵ_N‖^2_2≤(1/κ^2_ℒ)({1/λ_ϵ_N-ℛ^*(∇ℒ(θ^*))}Ψ(ℳ^⊥)- {1/λ_ϵ_N + ℛ^*(∇ℒ(θ^*))}Ψ(ℳ) )^2+(4/(κ_ℒλ_ϵ_N))ℛ(θ^*_ℳ^⊥).Applying the inequality (A+B)^2 ≤ 2A^2 + 2B^2 to the first term in (<ref>) gives,‖Δ̂_ϵ_N‖^2_2≤(2/κ^2_ℒ){1/λ_ϵ_N-ℛ^*(∇ℒ(θ^*))}^2Ψ^2(ℳ^⊥) + (2/κ^2_ℒ){1/λ_ϵ_N+ℛ^*(∇ℒ(θ^*))}^2Ψ^2(ℳ) +(4/(κ_ℒλ_ϵ_N))ℛ(θ^*_ℳ^⊥).Note that 0 < 1/λ_ϵ_N < ℛ^*(∇ℒ(θ^*)), therefore,{1/λ_ϵ_N - ℛ^*(∇ℒ(θ^*))}^2≤{ℛ^*(∇ℒ(θ^*))}^2.We also have,{1/λ_ϵ_N + ℛ^*(∇ℒ(θ^*))}^2 ≤ 4{ℛ^*(∇ℒ(θ^*))}^2.Therefore, combining (<ref>) and (<ref>), ‖Δ̂_ϵ_N‖^2_2≤ (2/κ^2_ℒ){ℛ^*(∇ℒ(θ^*))}^2 Ψ^2(ℳ) +(8/κ^2_ℒ){ℛ^*(∇ℒ(θ^*))}^2 Ψ^2(ℳ^⊥) + (4/(κ_ℒλ_ϵ_N))ℛ(θ^*_ℳ^⊥).□§.§ Preliminary propositions for Theorem <ref> In this Appendix we present three propositions that assist in the development of the proof of Theorem <ref>.Consider the optimization problem in (<ref>), and denote by λ_ϵ_N the corresponding Lagrange multiplier of its constraint. Then, if θ̂_ϵ_N≠ 0, λ_ϵ_N can be computed asλ_ϵ_N= 1/‖∇ℒ(θ̂_ϵ_N) ‖_∞.Proof. See Appendix A.4. □Suppose Assumptions <ref> and <ref> hold. Then, with probability 1- n β (0 ≤β≤ 1/n), we haveP( ‖∇ℒ(θ^*)‖_∞≤ t | Φ_N) ≥( 1 - 2exp[ -N^2 t^2/(2 σ_e^2 χ^2_β(Σ,I))] )^n.In particular, choosing a specific value for t,‖∇ℒ(θ^*) ‖_∞≤√(2 σ_e^2 χ^2_β(Σ,I)ln(2/β))/N,with probability at least 1 - 2n β (0 ≤β≤ 1/2n).Proof. See Appendix A.5. □Suppose Assumptions <ref> and <ref> hold. Then, with probability at least 1- n β (0 ≤β≤ 1/n), we haveP( ‖∇ℒ(θ̂_ϵ_N) ‖_∞≤ t | e ) ≥{ 1 - 2exp( -N^2 t^2/(2σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N))) }^n,where s_max is the maximum singular value of the matrix Σ. In particular, choosing a specific value for t, ‖∇ℒ(θ̂_ϵ_N) ‖_∞≤√(2σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/N, with probability at least 1 - 2 n β (0 ≤β≤ 1/2n).Proof. See Appendix A.6. □§.§ Proof of Proposition <ref> Let us rewrite the SPARSEVA problem (<ref>),θ̂_ϵ_N∈ arg min_θ∈ℝ^n ‖θ‖_1 s.t. ℒ(θ) - ℒ(θ̂_NR)(1+ϵ_N) ≤ 0,in the Lagrangian form (<ref>) using Lemma <ref>. The Lagrangian of the optimization problem (<ref>) is, g(θ,λ) = ‖θ‖_1+ λ(ℒ(θ) - ℒ(θ̂_NR)(1+ϵ_N)) = ‖θ‖_1 + (λ/2N)(‖ Y_N-Φ_N^Tθ‖^2_2 -‖ Y_N-Φ_N^Tθ̂_NR‖^2_2(1+ϵ_N)).
The subdifferential of g(θ, λ) with respect to θ can be computed as∂ g(θ, λ)/∂θ = v - (λ/N)Φ_N(Y_N-Φ_N^Tθ),where v=(v_1,…,v_n)^T is of the formv_i= 1 if θ_i>0, v_i= -1 if θ_i<0, v_i∈[-1,1] if θ_i=0.Using property 2 of Lemma <ref>, when θ̂_ϵ_N is a solution of the SPARSEVA problem (<ref>) and λ_ϵ_N is a Lagrange multiplier, we have,0= ∂ g(θ, λ)/∂θ|_θ=θ̂_ϵ_N, λ=λ_ϵ_N =-(λ_ϵ_N/N)Φ_N(Y_N-Φ_N^Tθ̂_ϵ_N)+v,for some v of the form in (<ref>) evaluated at θ̂_ϵ_N.Note that when θ̂_ϵ_N≠0, ‖v‖_∞=1, which means thatλ_ϵ_N = N/‖Φ_N(Y_N-Φ_N^Tθ̂_ϵ_N) ‖_∞.Since ∇ℒ(θ̂_ϵ_N)=-(1/N)Φ_N(Y_N-Φ_N^Tθ̂_ϵ_N), we can also write λ_ϵ_N asλ_ϵ_N= 1/‖∇ℒ(θ̂_ϵ_N) ‖_∞.□Note that this proof is similar to the one in <cit.>, where an expression was derived for the Lagrange multiplier in the traditional l_1 norm regularization problem (the LASSO). Here we have derived the Lagrange multiplier for the SPARSEVA problem as given in (<ref>).§.§ Proof of Proposition <ref> For the linear regression (<ref>) and the choice of ℒ(θ) in (<ref>), ∇ℒ(θ^*) = -(1/N)Φ_N(Y_N-Φ_N^Tθ^*) = -(1/N)Φ_Ne.Denoting by R_j the j-th row of the matrix Φ_N, the vector Φ_Ne can be written component-wise asΦ_Ne = (R_1e, R_2e, …, R_ne)^T.Consider the variable Z = N^-1R_je. Using Assumption <ref> on the disturbance noise e, i.e. e ∼𝒩(0, σ_e^2 I), we have,Z |Φ_N ∼𝒩(0, (σ_e^2/N^2)R_jR_j^T).Now, in order to derive a bound for ∇ℒ(θ^*), we first derive an upper bound for the variance (σ_e^2/N^2)R_jR_j^T of the distribution in (<ref>). Since R_j ∼𝒩(0, Σ), we have,R_jR_j^T ∼χ^2(Σ, I),where χ^2(Σ, I) is the generalized Chi square distribution with parameters Σ and I. Hence, with probability 1-β, 0 ≤β≤ 1, we have,R_jR_j^T ≤χ^2_β(Σ, I).Hence, the variance of the distribution of the variable N^-1R_je satisfies,(σ_e^2/N^2)R_jR_j^T ≤ (σ_e^2/N^2)χ^2_β(Σ, I),with probability 1-β.Note that from (<ref>), for any t > 0, we have,P( |N^-1R_je |≤ t | Φ_N ) = ∫^t_-t f(x | 0, (σ_e^2/N^2)R_jR_j^T)dx,where f(x| 0,(σ_e^2/N^2)R_jR_j^T) denotes the pdf of the Normal distribution 𝒩(0,(σ_e^2/N^2)R_jR_j^T). This gives,P( ‖Φ_Ne/N‖_∞≤ t | Φ_N) = ∏_j=1^n{∫^t_-t f(x | 0, (σ_e^2/N^2)R_jR_j^T)dx }.This expression can be bounded from below using the standard result that P(|𝒩(0,σ^2)| > t) ≤ 2 exp(-t^2/(2σ^2)) <cit.>, to obtain P( ‖Φ_Ne/N‖_∞≤ t | Φ_N) ≥∏_j=1^n( 1 - 2exp[ -N^2 t^2/(2 σ_e^2 R_jR_j^T)] ). The expression in parentheses on the right hand side is monotonically decreasing in R_jR_j^T, so using (<ref>) gives P( ‖Φ_Ne/N‖_∞≤ t | Φ_N) ≥( 1 - 2exp[ -N^2 t^2/(2 σ_e^2 χ^2_β(Σ, I))] )^n, which holds with probability[This bound follows because the events A_j that (<ref>) holds are not necessarily independent, but their joint probability can be bounded as P(A_1 ∩⋯∩ A_n) = 1 - P(A_1^C ∪⋯∪ A_n^C) ≥ 1 - P(A_1^C) - ⋯ - P(A_n^C) = 1 - nβ.] at least 1- n β.In particular, taking t = √(2 σ_e^2 χ^2_β(Σ, I)ln(2/β))/N gives P( ‖Φ_Ne/N‖_∞≤√(2 σ_e^2 χ^2_β(Σ, I)ln(2/β))/N | Φ_N) ≥ (1 - β)^n ≥ 1 - n β with probability at least 1- n β, or equivalently, ‖∇ℒ(θ^*) ‖_∞≤√(2 σ_e^2 χ^2_β(Σ, I)ln(2/β))/N with probability at least 1- 2 n β.
□ §.§ Proof of Proposition <ref> When θ̂_ϵ_N is the solution of the problem in (<ref>), we have, ∇ℒ(θ̂_ϵ_N) = -(1/N)Φ_N(Y_N-Φ_N^Tθ̂_ϵ_N).Denote e_ϵ_N = Y_N-Φ_N^Tθ̂_ϵ_N, and denote by R_j the j-th row of the matrix Φ_N; then, component-wise,Φ_N e_ϵ_N = (R_1e_ϵ_N, R_2e_ϵ_N, …, R_ne_ϵ_N)^T.From Assumption <ref>, and using the same argument as in Proposition <ref>, each entry of R_j is Gaussian with variance given by the corresponding diagonal element of Σ.Consider the variable Z=N^-1e_ϵ_N^TR_j^T. Since R_j ∼𝒩(0, Σ),Z ∼𝒩(0, (1/N^2)e_ϵ_N^TΣ e_ϵ_N).Since Σ is a symmetric and positive definite matrix, using the singular value decomposition we can find a diagonal matrix D that satisfies,Σ = Q^TDQ,where Q is an orthogonal matrix, i.e. QQ^T=I. Therefore, we have,(1/N^2)e_ϵ_N^TΣ e_ϵ_N≤ (s_max/N^2) e_ϵ_N^Te_ϵ_N,where s_max is the maximum element on the diagonal of the matrix D, i.e. the maximum singular value of the matrix Σ. Note that, e_ϵ_N^Te_ϵ_N = (Y_N-Φ_N^Tθ̂_ϵ_N)^T(Y_N-Φ_N^Tθ̂_ϵ_N) = 2N ℒ(θ̂_ϵ_N) = 2N ℒ(θ̂_NR)(1+ϵ_N).From Section 4.4 in <cit.>, we have,ℒ(θ̂_NR)|Φ_N ∼ (σ_e^2/2N)χ^2(N-n),which gives,ℒ(θ̂_NR) ≤ (σ_e^2/2N)χ^2_β(N-n),with probability 1-β, 0 ≤β≤ 1. Combining this inequality with (<ref>) and (<ref>) gives, with probability 1-β,(1/N^2)e_ϵ_N^TΣ e_ϵ_N≤ (σ_e^2/N^2)s_maxχ^2_β(N-n) (1+ϵ_N).Hence,P( |R_je_ϵ_N/N|≤ t | e )≥∫^t_-t f(x | 0, (σ_e^2/N^2)s_maxχ^2_β(N-n) (1+ϵ_N))dx,with probability 1-β.This means,P( ‖Φ_Ne_ϵ_N/N‖_∞≤ t | e ) = ∏_j=1^n {∫^t_-t f(x | 0, (σ_e^2/N^2) s_maxχ^2_β(N-n) (1+ϵ_N))dx }≥{ 1 - 2exp( -N^2 t^2/(2σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N))) }^n,with probability at least 1- n β, following the same reasoning as in the proof of Proposition <ref>.Therefore, P( ‖∇ℒ(θ̂_ϵ_N) ‖_∞≤ t | e ) ≥{ 1 - 2exp( -N^2 t^2/(2σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N))) }^n,with probability at least 1 - n β.Taking t = √(2σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/N gives P( ‖∇ℒ(θ̂_ϵ_N) ‖_∞≤√(2σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/N | e ) ≥ (1 - β)^n ≥ 1 - n β, with probability at least 1 - n β, or, equivalently, ‖∇ℒ(θ̂_ϵ_N) ‖_∞≤√(2σ_e^2 s_maxχ^2_β(N-n) (1+ϵ_N)ln(2/β))/N, with probability at least 1 - 2 n β. □ | http://arxiv.org/abs/1703.09351v3 | {
"authors": [
"Huong Ha",
"James S. Welsh",
"Cristian R. Rojas",
"Bo Wahlberg"
],
"categories": [
"math.ST",
"stat.TH"
],
"primary_category": "math.ST",
"published": "20170327234631",
"title": "An analysis of the SPARSEVA estimate for the finite sample data case"
} |
e-mail: [email protected] Veksler and Baldin Laboratory of High Energy Physics, JINR Dubna, 141980 Dubna, Russia Bogolyubov Institute for Theoretical Physics, 03680 Kiev, Ukraine INFN - Sezione di Firenze, I-50019 Sesto Fiorentino (Firenze), Italy Veksler and Baldin Laboratory of High Energy Physics, JINR Dubna, 141980 Dubna, Russia Veksler and Baldin Laboratory of High Energy Physics, JINR Dubna, 141980 Dubna, Russia M. V. Lomonosov Moscow State University, Moscow, Russia D. V. Skobeltsyn Institute of Nuclear Physics, Moscow, Russia Veksler and Baldin Laboratory of High Energy Physics, JINR Dubna, 141980 Dubna, Russia Institute of Theoretical and Experimental Physics (ITEP), Moscow, RussiaVeksler and Baldin Laboratory of High Energy Physics, JINR Dubna, 141980 Dubna, Russia Warsaw University of Technology, Faculty of Physics, Warsaw 00662, Poland Correlation femtoscopy allows one to measure the space-time characteristics of particle production in relativistic heavy-ion collisions due to the effects of quantum statistics (QS) and final state interactions (FSI).The main features of the femtoscopy measurements at top RHIC and LHC energies are considered as a manifestation of strong collective flow and are well interpreted within hydrodynamic models employing equation ofstate (EoS) with a crossover type transition between Quark-Gluon Plasma (QGP) and hadron gas phases. The femtoscopy at lower energies was intensively studied at AGS and SPS accelerators and is being studied now in the Beam Energy Scanprogram (BES) at the BNL Relativistic Heavy Ion Collider in the context of exploration of the QCD phase diagram. In this article we present femtoscopic observables calculated for Au-Au collisions at√(s_NN) = 7.7 - 62.4 GeV in a viscous hydro + cascade modeland their dependence on the EoS of thermalized matter.25.75.-q, 25.75.Gz Correlation femtoscopy study at energies available at the JINR Nuclotron-based Ion Collider fAcility and the BNL Relativistic Heavy Ion Collider within a viscous hydrodynamic plus cascade model D.Wielanek Received ; accepted================================================================================================================================================================================================= § INTRODUCTIONOne of the main motivations for heavy-ion collision programs is to study a new state of matter, the Quark-Gluon Plasma (QGP) which is defined as a deconfined state of quarks and gluons <cit.>.The systematics of transverse momentum spectra, elliptic and higher order flow coefficients measured in heavy-ion collisions at BNL Relativistic Heavy Ion Collider (RHIC) and CERN Large Hadron Collider (LHC) energiesconfirmed the presence of strong collective motion and hydrodynamic behavior of the system <cit.>. While the hydrodynamic approach was successful in reproduction of elliptic flow measured at the top RHIC energies from the very beginning, it was unable to reproduce the femtoscopic correlation measurements. The problem was solved several years ago along with substantial improvements in hydrodynamic modeling <cit.>. The improvements comprise a presence of pre-thermal transverse flow, an inclusion of shear viscous corrections to hydrodynamic evolution, an equation of state compatible with recent lattice QCD calculations, and a consistent treatment of hadronic stage (hadronic cascade phase). 
As a result, existing state-of-the-art hydrodynamic models can reproduce, besides the transverse momentum distributions and elliptic flow coefficients, also the pion femtoscopic measurements <cit.>.Recent lattice QCD calculations show that the transition from QGP to a hadron gas at high temperature and small μ_B is a crossover <cit.>, and there are no signs of critical behavior in the region of μ_B/T<2 <cit.>. This is supported by a recent analysis of combined full RHIC (√(s_NN) = 200 GeV) and LHC (√(s_NN) = 2.76 TeV) data in a viscous hydrodynamic + cascade model, where an elaborate model-to-data comparison using a Bayesian framework suggests that the speed of sound at all temperatures cannot fall below the hadron resonance gas value of ∼ 0.15, and that the resulting posterior distribution over possible equations of state of matter is compatible with the lattice QCD results <cit.>. At the same time, there are predictions inspired by the lattice QCD calculations of a possible change of the existing regime to a first-order phase transition occurring at lower energies and higher chemical potentials <cit.>, thus implying the existence of a critical point on the QCD phase diagram at a moderate value of the chemical potential <cit.>. These considerations motivated the BES program, allowing one to study different parts of the QCD phase diagram at existing accelerators like SPS and RHIC and, in future, at the NICA and FAIR facilities.It was shown many years ago <cit.> that the long duration of particle emission related to a first order phase transition could reveal itself in the energy region of the onset of deconfinement as a strong increase of the Gaussian femtoscopic radius R_out, measured along the pair transverse momentum, compared with the nearly constant radius R_side, measured along the perpendicular direction in the transverse plane. As a result, one may expect a strong increase of the ratio R_out / R_side. In fact, a first order phase transition leads to a stalling of the mean expansion speed and a longer emission duration Δτ, manifested as an increase of the radius measured along the beam direction, R_long, and of the ratio of transverse femtoscopy radii R_out / R_side, respectively. A large data set of correlation functions of identical charged pions has recently been obtained by the STAR Collaboration within the RHIC BES at √(s_NN) = 7.7, 11.5, 19.6, 27, 39, 62.4 GeV. The study of the R_out / R_side and R_out^2 - R_side^2 behavior as a function of √(s_NN) indicates a wide maximum near √(s_NN)∼ 20 GeV, which is reported also by the PHENIX Collaboration <cit.>.Could this wide maximum be related to the expected change of the type of phase transition?To answer this and other questions, we study the sensitivity of Bose-Einstein correlations of identical pions to the EoS using a hybrid model <cit.>. The model combines the UrQMD approach <cit.> for the early and late stages of the evolution with a numerical (3 + 1)-dimensional viscous hydrodynamical solution <cit.> for the hot and dense expanding matter.
A hydrodynamic approach has an essential advantage for the present analysis, since it allows one to simulate different scenarios of the hadron / quark-gluon transition by changing the EoS and other transport coefficient inputs.The paper is organized as follows: the details of the model are discussed in Section <ref>; in Section <ref> the femtoscopy formalism is described; in Section <ref> the results are presented and discussed; Section <ref> is dedicated to conclusions.§ VHLLE+URQMD MODEL The use of multi-component dynamical models for the description of the dynamics of relativistic heavy-ion collisions at RHIC and higher energies is essential because hydrodynamics alone cannot describe the entire reaction. At the early stage of the collision, thermalization of the out-of-equilibrium quark-gluon system is assumed to occur, which allows one to describe the subsequent complex multi-particle dynamics using the relatively simple formalism of relativistic hydrodynamics. This requires an approach to calculate initial conditions for the hydrodynamic evolution[Note that for lower collision energies there exist one-, two-, and three-fluid hydrodynamical models (e.g. <cit.>), which apply a hydrodynamical description already for the incoming nuclei.].As the matter expands, the characteristic mean free path of its constituents (quarks and gluons transforming into hadrons) becomes comparable to the system size. The interactions become less frequent, but they do not cease instantaneously, a process which can be simulated by switching from the hydrodynamical evolution to a hadronic cascade, usually with the help of the Cooper-Frye formula <cit.>.For the early stage of the collision, different approaches (or models) have been used in the literature, such as , , , , the Glauber model etc. Those approaches have been developed for full RHIC or LHC energies, and lose their applicability as the collision energy decreases. As for the Glauber model, it can estimate initial energy density profiles in the transverse direction only. Therefore we choose to use UrQMD to simulate the initial stage of the collision. We enforce a transition to the hydrodynamical description at a hyper-surface of constant longitudinal proper time τ_0=√(t^2-z^2). The minimal value of the starting time τ_0 is taken to be equal to the average time for the two colliding nuclei to completely pass through each other: τ_0=2R/√((√(s_NN)/2m_N)^2-1),where R is the average radius of the nucleus and m_N is the nucleon mass.At τ=τ_0 the energy, momentum and baryon/electric charges of hadrons are distributed to fluid cells ijk around each hadron's position according to Gaussian profiles:Δ P^α_ijk= P^α· C·exp(-(Δ x_i^2+Δ y_j^2)/R_⊥^2-(Δη_k^2/R_η^2)γ_η^2 τ_0^2),Δ N^0_ijk =N^0 · C·exp(-(Δ x_i^2+Δ y_j^2)/R_⊥^2-(Δη_k^2/R_η^2)γ_η^2 τ_0^2),where P^α and N^0 are the 4-momentum and charge of a hadron, {Δ x_i, Δ y_j, Δη_k} are the distances between the hadron's position and the center of a hydro cell ijk in each direction, γ_η= cosh(y_p-η) is the longitudinal Lorentz factor of the hadron as seen in a frame moving with the rapidity η, and C is a normalization constant. The normalization constant C is calculated so that the discrete sum of energy depositions to the hydrodynamic cells equals the energy of the hadron.
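A minimal sketch of this deposition step (Python with numpy assumed; the grid arrays and names are our own, and the grouping of the η term follows the formula above as reconstructed) is:

```python
import numpy as np

def deposit(P, x_h, y_h, eta_h, grid_x, grid_y, grid_eta,
            R_perp, R_eta, gamma_eta, tau0):
    """Spread a hadron's 4-momentum P over fluid cells with the Gaussian
    profile above; C is fixed by requiring the discrete deposits to sum to P."""
    dx = grid_x[:, None, None] - x_h
    dy = grid_y[None, :, None] - y_h
    deta = grid_eta[None, None, :] - eta_h
    w = np.exp(-(dx**2 + dy**2) / R_perp**2
               - (deta**2 / R_eta**2) * gamma_eta**2 * tau0**2)
    w /= w.sum()                                  # normalization constant C
    return np.einsum('a,xyz->axyz', np.asarray(P, dtype=float), w)
```

The charge deposition ΔN^0 proceeds identically, with N^0 in place of P^α.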
The width parameters R_⊥ and R_η control the granularity of the produced initial state.For all collision energies in consideration, the resulting initial energy density is large enough for the dense parts of the system to reside in the QGP phase.The following 3-dimensional viscous hydrodynamic expansion is simulated with the code <cit.>. Another input to the hydrodynamic part is the EoS, for which we use the chiral model EoS <cit.> or the bag model EoS <cit.>. Whereas the present version of the chiral model EoS has a crossover type transition between the QGP and hadronic phases for all baryon densities, the bag model EoS has a first order phase transition between the phases, also for all baryon densities. Therefore below we dub the chiral model EoS “XPT EoS", and the bag model EoS “1PT EoS". For both EoS there are publicly available tables, computed in the full physically allowed T-μ_B region, which makes them particularly useful for hydrodynamic computations with fluctuating initial conditions. Pressure as a function of energy density for both EoS is demonstrated in Fig. <ref>.Fluid to particle transition, or particlization, is set to happen at a hypersurface of constant (hydrodynamic) energy density ϵ_sw=0.5 GeV/fm^3, where the hydrodynamic EoS corresponds to the hadronic phase. The particlization hypersurface is reconstructed with the CORNELIUS subroutine <cit.>. At this hypersurface, individual hadrons are sampled using the Cooper-Frye formula, including shear viscous corrections to the distribution functions. The hadronic rescatterings and decays are treated with the cascade.The initial state parameters R_⊥, R_η, the hydrodynamic starting time τ_0 and the shear viscosity over entropy ratio η/s in the fluid phase are tuned for different collision energies in order to approach basic experimental observables in the RHIC Beam Energy Scan region: (pseudo)rapidity distributions, transverse momentum spectra and the elliptic flow coefficient <cit.>. The resulting values of the parameters are presented in Table <ref>. The tuning has been made with the XPT (chiral model) EoS, and in the present work we use the same set of parameter values (i.e. we do not re-tune them) for the simulations with the 1PT (bag model) EoS.§ FEMTOSCOPY FORMALISM Since the first demonstration of the sensitivity of Bose-Einstein correlations to the spatial scale of the emitting source, done almost 60 years ago by G. Goldhaber, S. Goldhaber, W. Lee and A. Pais <cit.>, the momentum correlation technique has been successfully developed and is known presently as “correlation femtoscopy". It has been successfully applied to the measurement of the space-time characteristics of particle production processes in high energy collisions, especially in heavy-ion collisions <cit.>.Femtoscopy correlations are studied by means of a two-particle correlation function.In a production process of a small enough phase space density, the correlations of two particles emitted with a small momentum k^*= |k^*| in the pair rest frame (PRF)[Calculations made in PRF are denoted by an asterisk.] are dominated by the effects of their mutual final state interaction (FSI) and quantum statistics (QS), depending on the PRF temporal (t^* = t_1^* - t_2^*) and spatial (r^* =r_1^* -r_2^*) separation of the particle emission points.
Usually, one can neglect the temporal separation <cit.> and, in such an equal-time approximation, describe these effects by properly symmetrized wave functions at a given total pair spin S, [ψ_- k^*^S,α^'α( r^*)]^*, representing solutions of the scattering problem viewed in the opposite time direction; hence the complex conjugate, the negative sign of the vector k^* =p_1^* = - p_2^*, and the detected channel α being the entrance one. Since the FSI factorization requires the FSI duration to be much larger than the particle production time, the relative momentum should be small also in the intermediate channels α^', so that one may consider the particles in these channels as belonging to the same isospin multiplets as those in the detected channel α. Particularly, for identical particles, the multi-channel problem reduces to the single elastic transition α→α only. Assuming further a sufficiently smooth behavior of the single-particle spectra in a narrow correlation region (smoothness assumption) <cit.>, one can neglect the space-time coherence and write the correlation function at a given k^* and pair three-momentum P asC( k^*, P)= ∫ d^3 r^* S^α( r^*, P)| ψ_- k^*^S,α^'α( r^*) |^2,where the overline describes the averaging over the total pair spin S and summing over the intermediate channels α^'. It is implied that particles are produced in a complex process with equilibrated spin and isospin projections, so that the separation distribution (source function) S^α( r^*, P) is independent of S and α^'.Experimentally, a two-particle correlation function is defined as a ratio C( q) = A( q) / B( q), where A( q) is the measured distribution of the difference q =p_1 -p_2 of the three-momenta p_1 and p_2 of two particles taken from the same event, while B( q) is a reference distribution built from pairs of particles taken from different events. The momentum difference is usually calculated in the longitudinally co-moving system (LCMS), where the longitudinal pair momentum vanishes. The vector q is usually expressed in terms of q_out, q_side, q_long, where the “long" axis is directed along the beam, “out" along the pair transverse momentum, and “side" perpendicular to the latter in the transverse plane. To perform a quantitative analysis of femtoscopic correlations, an analytical form of S is often used, so that the result of the integration procedure in Eq. (<ref>) can be compared with a correlation function C obtained from an experiment. The source function is usually considered independent of the relative momentum q, and its Gaussian shape is assumed:S(r) ∼exp(-r^*_out^2/(4R^*_out^2)-r^*_side^2/(4R^*_side^2)-r^*_long^2/(4R^*_long^2)).The widths in the three directions (out, side and long) are called “Gaussian femtoscopy radii".In LCMS, q_out=γ_t (q_out^*+β_t (m_1^2-m_2^2)/m_12), q_side=q_side^*, q_long=q_long^*, where γ_t and β_t are the LCMS Lorentz factor and velocity of the pair. Since the space-time separation in PRF and LCMS is related by the Lorentz boost in the out direction: r_out^*=γ_t(r_out-β_t t), r_side^*=r_side, r_long^*=r_long, the PRF and LCMS Gaussian radii coincide except for R_out^*=γ_t R_out. In the present paper we consider the correlation function of two identical pions neglecting their FSI, so that|ψ_- k^*( r^*)|^2= |[exp(-i k^* r^*)+exp(i k^* r^*)]/√(2)|^2 = 1 +cos(2 k^* r^*),and Eqs. (<ref>, <ref>, <ref>) yield the 3-dim Gaussian form of the correlation function.
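The pair-weight construction based on Eq. (<ref>) can be sketched in a few lines. The following illustration (Python with numpy assumed; the binning choices and the Gaussian toy source are ours) assigns each pair the QS weight 1 + cos(2k^*·r^*) and builds a one-dimensional correlation function; for a Gaussian source of radius R it reproduces C(q) = 1 + exp(-R^2q^2).

```python
import numpy as np

def correlation_function(k_star, r_star, bins=40, q_max=0.2):
    """C(q) from pair weights w = 1 + cos(2 k*.r*) for identical pions:
    ratio of the weighted to the unweighted histogram of q = 2|k*|."""
    q = 2.0 * np.linalg.norm(k_star, axis=1)
    w = 1.0 + np.cos(2.0 * np.einsum('ij,ij->i', k_star, r_star))
    edges = np.linspace(0.0, q_max, bins + 1)
    A, _ = np.histogram(q, bins=edges, weights=w)   # "correlated" pairs
    B, _ = np.histogram(q, bins=edges)              # reference pairs
    return 0.5 * (edges[1:] + edges[:-1]), A / np.maximum(B, 1)

rng = np.random.default_rng(3)
R = 5.0 / 0.1973                          # 5 fm source radius in GeV^-1
k = rng.uniform(-0.1, 0.1, size=(200_000, 3))              # k* in GeV
r = rng.normal(0.0, np.sqrt(2.0) * R, size=(200_000, 3))   # pair separations
q_mid, C = correlation_function(k, r)     # C(q) ~ 1 + exp(-R^2 q^2)
```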
This form is usually used to fit the LCMS Gaussian radii according to:C( q)= N (1+λexp(-R_ out^2q_ out^2-R_ side^2q_ side^2-R_ long^2q_ long^2)),where N is the normalization factor and λ is the correlation strength parameter, which can differ from unity due to the contribution of long-lived emitters and a non-Gaussian shape of the correlation function; R_out, R_side, R_long are the Gaussian femtoscopy radii in the LCMS frame. Eq. (<ref>) assumes azimuthal symmetry of the production process, which forbids the presence of the cross-terms except for q_out q_long. We neglect the latter, assuming further invariance under longitudinal boosts. Generally, in the case of a correlation analysis with respect to the reaction plane, all three cross-terms q_iq_j contribute.The described fitting procedure allows one to compare the femtoscopy radii extracted from the model with existing experimental data. This can be considered the standard approach. A disadvantage of this approach is that the Gaussian parametrization can suppress important information that could be derived from the long non-Gaussian tails of the source functions.The PHENIX and STAR collaborations have recently started to apply a new “imaging technique" in order to extract the source function directly <cit.>. In contrast to the standard approach, the source imaging allows one to extract the real, non-Gaussian source function, being in this sense model-independent. Here we will use the model to study the effect of a non-Gaussian shape of the source function and its dependence on the nature of the phase transition. This model provides the information on particle four-momenta and four-coordinates of the emission points, allowing one to calculate the correlation function with the help of the weight procedure. For non-interacting identical pions, the weight is given in Eq. (<ref>).§ RESULTS AND DISCUSSION As mentioned above, the parameters of the model were adjusted to approach experimental data for (pseudo-)rapidity distributions, transverse momentum spectra and elliptic flow coefficients with the XPT EoS, corresponding to a crossover type transition <cit.>. Switching the EoS from XPT to 1PT then leads to only small differences in the multiplicities and rapidity distributions of the produced hadrons in the model. At the same time, hydrodynamic evolution with the 1PT EoS leads to somewhat decreased mean p_⊥ and elliptic and triangular flow coefficients <cit.>. Those trends are explained by a less violent transverse expansion with the 1PT EoS.In this section, we study the space-time characteristics of the hadron emission in the model and present the results for the Bose-Einstein correlations of identical pions, obtained with the two aforementioned EoS's in the wide collision energy range √(s_NN) = 7.7 - 62.4 GeV covered by the BES program at RHIC. §.§ Pion emission time distributions In the model one has access to the space-time points of particle production in last collisions and resonance decays, in addition to their momenta. In Fig. <ref> we visualize the averaged time distributions of the last interaction points of pions from the model simulations at the lowest, middle and highest collision energies (√(s_NN) = 7.7, 19.6 and 62.4 GeV, respectively) using the 1PT and XPT EoS's.Detailed information on the pion emission times as a function of √(s_NN) for all simulated collision energies and EoS's is given in Table <ref>.
The time distributions of midrapidity pions have been obtained at the particlization surface (the points of their creation) and at the points of last interactions.From Table <ref> one can conclude that the average pion creation time t̅ at the particlization surface depends only weakly on the collision energy.This is an interplay of a longer pre-thermal and a shorter hydrodynamic stage at lower collision energies: at √(s_NN) < 39 GeV, the hydro stage starts at τ_0 = 2R/(γ v_z) when the two colliding nuclei have completely passed through each other, and the value of τ_0 is as large as 3.2 fm/c at √(s_NN) = 7.7 GeV. On the other hand, the duration of the hydro stage becomes shorter as the collision energy decreases because of the lower initial energy density at the hydro starting time. For the average pion creation times, the differences between the 1PT EoS and XPT EoS are largest at the lowest collision energy in consideration. In addition, the 1PT EoS leads to larger root-mean-square (RMS) values of the time distributions, and the difference is again largest at the lowest collision energy. Because of re-scatterings and, more importantly, resonance decays in the final stage of the hadronic cascade, the points of last interactions correspond to larger values of t̅, which also depend weakly on the collision energy. The cascade somewhat smears the relative difference between the 1PT and XPT scenarios, both for t̅ and RMS. §.§ Three-dimensional correlation radii in LCMS.An example of one-dimensional projections of the three-dimensional correlation function obtained with the model using the 1PT EoS and XPT EoS is shown in Fig. <ref>. The analysis involved the simulations performed for gold-gold collisions at √(s_NN)= 7.7 GeV with cuts applied on the event centrality, 0-5%, and on the pair transverse momentum k_T; the latter pertains to the range (0.15 - 0.25) GeV/c. One can see that the pion correlation functions at small q_i (i = “out", “side", “long") are not well described by the Gaussian function in Eq. (<ref>). The observed difference between the correlation functions calculated with the 1PT and XPT EoS's is noticeable in the “out" and “long" directions. In Fig. <ref> this fact is demonstrated by the ratios of the individual projections. The ratios in the “out" and “long" directions reach values up to 1.03 at small q_out and q_long. A deviation from unity at the percent level at small q_side values appears due to the finite cuts on q_out and q_long.In Fig. <ref> we present the m_T-dependence of the three-dimensional femtoscopy LCMS radii calculated at √(s_NN) = 7.7, 11.5, 19.6, 27, 39, 62.4 GeV using the 1PT and XPT EoS's, and a comparison of the obtained results with those obtained by the STAR collaboration <cit.>.One can see that the model reasonably describes the m_T-dependence of the radii for all beam energies with both EoS's.As for the radii, they show different trends in the “out", “side" and “long" directions. Whereas the R_side obtained with the two EoS's practically coincide, the R_out with the 1PT EoS is generally larger (however, not by more than 0.5 fm at any collision energy) than for the XPT EoS. This also leads to larger values of the R_out / R_side ratio for the 1PT EoS. The difference comes from a weaker transverse flow developed in the fluid phase with the 1PT EoS as compared with the XPT EoS[A similar influence of the transverse flow on R_out and R_side has been observed for the RHIC and LHC energies <cit.>.].
A longer lifetime of the fluid phase in the 1PT scenario also results in larger values of R_long as compared with the XPT scenario. Whereas one could expect that at lower collision energies with the 1PT EoS a larger fraction of the fluid phase evolution occurs in the mixed phase with zero speed of sound, leading to an increase of the evolution time and of R_long, we did not observe such a trend in the model. The reason is that at lower collision energies a sizable amount of radial flow is developed in the model already at the pre-hydro stage. At the same time, the R_out / R_side ratio at the lowest collision energies shows a clear EoS dependence.The R_out / R_side and R_out^2 - R_side^2 as a function of √(s_NN) were studied at fixed m_T by the STAR collaboration <cit.>. A wide maximum near √(s_NN) ∼ 20 GeV in both excitation functions was observed, although this observation is accompanied by rather large systematic error bars. We have calculated the very same quantities in the model and compared them with the experimental data. The result of the comparison is shown in Fig. <ref>.One can see that, due to the large experimental error bars, the model calculations involving the XPT EoS agree with the data within the error bars at all energies, whereas the 1PT EoS overestimates the data. However, in the model with the XPT EoS we observe a monotonic increase in the excitation functions of both quantities, while the 1PT EoS results in a non-decreasing behavior of the quantities. The XPT EoS “works" better for the lowest collision energies, as might be seen from the better description of the individual radii in that energy region shown in Fig. <ref>. A study of the R_out / R_side ratio is traditional in modern femtoscopy, since the R_out and R_side radii are both reduced by flow and thus their ratio is more robust against flow effects.As mentioned above, the parameters of the model were adjusted to approach the basic hadronic observables: rapidity and transverse momentum distributions and elliptic flow coefficients within the BES region, but not the femtoscopic ones. No model tuning has been made for the femtoscopy, therefore the obtained radii may be considered as a free “prediction", even though the experimental data already exist. §.§ Source emission functionsIn a Monte Carlo model one has access to the space-time characteristics of the produced particles, which allows one to avoid the complicated procedure of solving the integral equation (see Eq. <ref>) as is done in experiment <cit.>. The source emission function can be calculated directly as: S(r^*)=∑_ijδ_Δ(r^*-r_i^*+r_j^*)/(N Δ^3).Here r^∗_i and r^∗_j are the particle positions in PRF and r^∗ is the particle separation in PRF; δ_Δ(x) = 1 if |x| < Δ/2 in each coordinate and 0 otherwise, where Δ is the size of the histogram bin. The denominator in Eq. (<ref>) takes care of the normalization by the product of the number of pairs N = ∑_ij 1 and the bin volume Δ^3. Fig. <ref> demonstrates an example of one-dimensional projections of the source emission function S(r^*) derived from the model directly. One can see that the calculations involving the 1PT EoS lead to longer visible tails in the projections as compared with the XPT EoS, especially in the “out" direction. This is related to the weaker transverse flow developed in the fluid phase and the longer lifetime of the fluid phase in the case of the 1PT EoS.
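A direct implementation of the estimator in Eq. (<ref>) is a three-dimensional histogram of pair separations, as sketched below (Python with numpy assumed; the bin width and range are illustrative). One-dimensional projections such as those in Fig. <ref> are then obtained by integrating over the other two coordinates.

```python
import numpy as np

def source_function(r_star, delta=1.0, half_range=30.0):
    """Histogram estimator of S(r*): counts per 3D bin divided by
    N_pairs * delta^3. r_star is an (N_pairs, 3) array of pair
    separations r_i* - r_j* in fm."""
    edges = np.arange(-half_range, half_range + delta, delta)
    S, _ = np.histogramdd(r_star, bins=(edges, edges, edges))
    return S / (len(r_star) * delta**3), edges

def project_out(S, delta):
    """1D projection onto the 'out' axis: integrate over side and long."""
    return S.sum(axis=(1, 2)) * delta**2
```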
A similar observation has been reported for the “out" femtoscopic radii in the previous section.A set of functions consisting of a single Gaussian, a double Gaussian, a Gaussian + Exponential, a Gaussian + Lorentzian, and the Hump function <cit.> was tested for the description of the one-dimensional projections of the source emission functions. The best description was obtained with the Hump function and the double Gaussian function. The latter gives only a slightly worse χ^2 than the Hump function, but allows for a clear interpretation of the parameters and a more stable fit. The single Gaussian and double Gaussian fit functions are shown in Fig. <ref>. The parameters extracted from these fits are presented in Table <ref>.The fit of the projections of the source emission function with a single Gaussian gives a large value of χ^2/ndf. The fit with a double Gaussian yields much better values of χ^2/ndf and describes the tails of the projections of the source emission functions up to ∼ 60 fm in the “out" and ∼ 25 fm in the “side" and “long" directions, respectively. The radii extracted from the double Gaussian fit have a small component R_i^small of 4-12 fm and a large component R_i^large of 8-20 fm (as usual, i denotes the “out", “side" and “long" directions). This reflects the fact that the pion source consists of directly emitted particles (described by the first component) and re-scattered particles (the second one). The difference between the radii extracted from the source emission functions obtained with the two EoS's is seen for both components, R_i^small and R_i^large, but it is rather small, less than 0.5 fm. The radii are larger for the 1PT scenario, consistent with the three-dimensional femtoscopic radii reported above. It is interesting to note that in the case of the single Gaussian fit the values of the radii are approximately equal to those derived from the double Gaussian fit and averaged quadratically over the relative contributions of the small and large radii. This means that the one-dimensional Gaussian radii roughly reflect the main features of the double Gaussian fits.Fig. <ref> shows the √(s_NN)-dependence of the small and large radii and their relative contributions extracted from the double Gaussian fit, depending on the EoS.The radii increase with increasing √(s_NN) for both types of calculations. The visible difference between the small and large radii in the “out" direction (see Fig. <ref> (a)) decreases with increasing √(s_NN). The relative contributions of the small and large radii in the “out" direction are equal to ∼ 0.65 and ∼ 0.35 (see Fig. <ref> (d)) and are practically independent of both √(s_NN) and the type of EoS.The radii in the “side" direction seem to be independent of √(s_NN) (see Fig. <ref> (b), (e)). The radii in the “long" projection almost coincide for both types of EoS (see Fig. <ref> (c)), but their relative contributions as a function of √(s_NN) demonstrate a difference depending on the EoS. The relative contribution of the large radii has a tendency to increase with √(s_NN) and is larger in the case of the 1PT scenario (see Fig. <ref> (f)).Of course, the best comparison with experiment is a direct comparison of the source emission functions from the model with the ones extracted experimentally.
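For reference, the single and double Gaussian fits of a one-dimensional projection can be performed with a standard least-squares routine; the sketch below (Python with scipy assumed; starting values are illustrative) uses the same exp(-r^2/4R^2) convention as the Gaussian source above.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_gauss(r, A, R):
    return A * np.exp(-r**2 / (4.0 * R**2))

def double_gauss(r, A1, R_small, A2, R_large):
    return single_gauss(r, A1, R_small) + single_gauss(r, A2, R_large)

def fit_projection(r, S1d):
    """Fit a 1D projection of S(r*); returns the single Gaussian parameters
    (A, R) and the double Gaussian parameters (A1, R_small, A2, R_large)."""
    p1, _ = curve_fit(single_gauss, r, S1d, p0=[S1d.max(), 5.0])
    p2, _ = curve_fit(double_gauss, r, S1d,
                      p0=[0.65 * S1d.max(), 5.0, 0.35 * S1d.max(), 15.0])
    return p1, p2
```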
Nevertheless, this study shows that the use of a double Gaussian fit reflects many interesting features of the source emission functions, while the single Gaussian fit used in many experiments can be a rather risky procedure due to the poor description of the source by this function.However, as demonstrated above, the radii extracted from the single Gaussian fit are equal to the properly averaged double Gaussian radii, giving, in principle, realistic information on the source. This result is quite encouraging, since it is much easier to study the three-dimensional radii than the source emission functions.§ SUMMARY We have presented the first study of pion femtoscopy in the viscous hydro + cascade model in the energy range of the BES program at RHIC. It is shown that the chiral model EoS <cit.> (XPT EoS), which has a crossover-type transition between the QGP and hadron gas phases, when used in the fluid phase results in a quite reasonable reproduction of the three-dimensional pion femtoscopic radii measured by the STAR collaboration. The “out" Gaussian femtoscopic radii obtained with the bag model EoS (1PT EoS) are systematically larger as compared with the XPT EoS; the “side" radii coincide for both types of EoS; the “long" radii are also somewhat larger for the 1PT EoS. The 1PT EoS results in a systematically worse reproduction of the data, although the differences between the two EoS's are not so large. The R_out / R_side ratio and R_out^2 - R_side^2 are in agreement with the STAR results within the error bars at all collision energies when using the XPT EoS, but their energy dependences observed in the model are quite monotonic, as opposed to the broad maximum around √(s_NN) = 20 GeV reported by STAR. At the same time, the 1PT EoS overestimates the experimental data points for both R_out / R_side and R_out^2 - R_side^2.In particular, the latter EoS does not reproduce the femtoscopic radii even at the lowest energy considered, √(s_NN) = 7.7 GeV.The parameters of the model were adjusted in <cit.> based on rapidity and transverse momentum spectra and elliptic flow data in the BES region for the XPT EoS scenario. No readjustment for the 1PT EoS has been made, which poses the open question of whether the differences in the femtoscopic radii between the two EoS would be even smaller if the readjustment were made for each EoS scenario individually. Also, no additional parameter tuning has been made for the femtoscopic observables, therefore the results may be considered “free model predictions", even though the experimental data already exist.We find that a better overall description of the femtoscopic radii would require an about 1 fm/c shorter duration of pion emission with the present setup of the model. If this is realized, then at lower energies the 1PT scenario will be closer to the data. This in turn may indicate a change of the nature of the phase transition at energies below √(s_NN)=20 GeV, and this should be an incentive for future experiments at the NICA and FAIR facilities. It is an open question whether a new set of parameters more suitable for the description of the femtoscopic radii can be found.In addition to the traditional femtoscopic radii, we have calculated the source emission functions of pion pairs. We show that it is possible to distinguish the calculations with the two different EoS. The projections of the source emission functions onto the “out" direction are wider when the 1PT EoS is used.
For the “side” direction these projections coincide for both scenarios; for the “long” direction the projections obtained with the 1PT EoS are also wider in comparison with the calculations using the XPT EoS. This observation is related to the weaker transverse flow developed in the fluid phase and the longer lifetime of that phase in the case of the 1PT EoS. In order to describe the source emission functions quantitatively, a set of different fitting functions has been tested. It is shown that a double Gaussian fit to the source emission function gives a reasonable description and allows a simple interpretation of the obtained small and large radii. So far we have performed the femtoscopic analysis with the vHLLE+UrQMD model only. As a next step we plan to extend the analysis using the 3-fluid hydrodynamics-based event generator THESEUS <cit.>. In THESEUS the hydrodynamical description of the heavy-ion reaction starts earlier, which results in a different sensitivity to the hydrodynamic EoS, especially in the NICA energy range.
"authors": [
"P. Batyuk",
"Iu. Karpenko",
"R. Lednicky",
"L. Malinina",
"K. Mikhaylov",
"O. Rogachevsky",
"D. Wielanek"
],
"categories": [
"nucl-th",
"hep-ex",
"hep-ph"
],
"primary_category": "nucl-th",
"published": "20170327172139",
"title": "Correlation femtoscopy study at NICA and STAR energies within a viscous hydrodynamic plus cascade model"
} |
Department of Physics, and Institute for Soft Matter Synthesis and Metrology, Georgetown University, Washington DC, 20057, USA Although 3D printing has the potential to transform manufacturing processes, the strength of printed parts often does not rival that of traditionally-manufactured parts. The fused-filament fabrication method involves melting a thermoplastic, followed by layer-by-layer extrusion of the molten viscoelastic material to fabricate a three-dimensional object. The strength of the welds between layers is controlled by interdiffusion and entanglement of the melt across the interface. However, diffusion slows down as the printed layer cools towards the glass transition temperature. Diffusion is also affected by high shear rates in the nozzle, which significantly deform and disentangle the polymer microstructure prior to welding. In this paper, we model non-isothermal polymer relaxation, entanglement recovery, and diffusion processes that occur post-extrusion to investigate the effects that typical printing conditions and amorphous (non-crystalline) polymer rheology have on the ultimate weld structure. Although we find the weld thickness to be of the order of the polymer size, the structure of the weld is anisotropic and relatively disentangled; reduced mechanical strength at the weld is attributed to this lower degree of entanglement. Fused filament fabrication, polymer melt, welding, disentanglement, non-isothermal § INTRODUCTION Fused filament fabrication (FFF) <cit.> has become an essential tool for the rapid fabrication of custom parts via additive manufacturing. Although there are numerous advantages to this technique <cit.>, including ease of use, cost and flexibility, improving the strength of printed parts to rival that of traditionally-manufactured parts remains an underlying issue. The most common printing materials are amorphous polymer melts such as linear polycarbonate (PC) <cit.> and acrylonitrile butadiene styrene (ABS) <cit.>, a melt containing rubber nano-particles that provide toughness even at low temperatures. FFF printers can also handle semi-crystalline polymers such as poly-lactic acid (PLA) <cit.>. The printing process involves melting a solid filament of the printing material and extruding it through a nozzle. To fabricate a three-dimensional object the melt is deposited layer-by-layer, as illustrated in Fig. <ref>. The key to ensuring the strength of the final printed part is successful interdiffusion and re-entanglement of the polymer melt across the layer-layer interfaces.In general, the weld strength of a polymer-polymer interface grows sub-diffusively with welding time as t^1/4 until the bulk strength plateau is reached <cit.>. Several molecular mechanisms are proposed to explain this scaling. Since the weld thickness arising from interpenetration depth also scales as t^1/4 until the radius of gyration R_g is reached due to polymer reptation, one suggested mechanism is that this interpenetration depth determines the weld strength <cit.>. Others suggest that the formation of bridges between the surfaces is the key strengthening mechanism <cit.>. Both approaches are motivated by the idea of entanglement formation across the interface <cit.>. Whilst some studies assume a simple proportionality between the interpenetration distance and entanglement formation <cit.>, others assume a minimum interpenetration distance for an entanglement to form <cit.>. 
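To make the sub-diffusive healing law reviewed above concrete, the short Python sketch below evaluates σ(t)/σ_bulk = min[1, (t/τ_heal)^1/4]; the healing time τ_heal is a hypothetical value chosen only to illustrate the t^1/4 scaling, not a measured quantity.

import numpy as np

# sigma(t)/sigma_bulk = min[1, (t/tau_heal)^(1/4)]: sub-diffusive growth of
# weld strength up to the bulk plateau. tau_heal is a hypothetical healing
# time chosen only to make the scaling concrete.
tau_heal = 1.0                      # s (assumed)
t = np.logspace(-3, 1, 9)           # welding times (s)
sigma_rel = np.minimum(1.0, (t / tau_heal) ** 0.25)
for ti, si in zip(t, sigma_rel):
    print(f"t = {ti:8.3f} s  ->  sigma/sigma_bulk = {si:.2f}")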
A few experiments find that diffusion over the radius of gyration R_g is required to achieve bulk strength <cit.>, but in many cases welds reach bulk strength in much shorter times <cit.>. In FFF the welding behaviour is essentially a thermally-driven diffusive process <cit.>, and interdiffusion is limited as the melt rapidly cools towards its glass transition temperature <cit.>. In addition, large shear rates in the nozzle deform and align the polymer microstructure prior to welding; it is suggested that this alignment can affect the diffusive behaviour at the weld line, causing de-bonding <cit.>. The deformation induced by the FFF extrusion and deposition process, which involves a 90^o turn, has recently been investigated using a molecularly-aware model for a non-crystalline polymer melt <cit.>. In that paper both stretch and orientation of the polymer are incorporated using the Rolie-Poly constitutive equation <cit.>, and the entanglement density is allowed to vary with the flow <cit.>. Flow through the nozzle followed by deposition into an elliptically-shaped layer induces complex, non-axisymmetric polymer configurations, with the polymer microstructure varying dramatically from the top to the bottom of the printed layer. This deformation significantly disentangles the polymer melt via convective constraint release <cit.> in an inhomogeneous way. Due to this deformation imposed by the FFF extrusion process, interdiffusion does not necessarily occur from an equilibrium state. Non-equilibrium molecular dynamics (NEMD) calculations of a diffusion tensor for relatively short polymer melts under both planar Couette and planar elongational flow show a significant enhancement of diffusion parallel to the flow direction <cit.>. Recently, models that incorporate an anisotropic shear-rate-dependent friction coefficient, or mobility tensor, have been proposed to successfully reproduce the dynamics of polymers under shear <cit.>. Suitably accounting for flow-induced friction-reduction effects is also required to quantitatively model uniaxial extensional data <cit.>. The simple dynamical model of polymers under shear with an anisotropic mobility tensor of Uneyama et al. <cit.> is consistent with NEMD simulations and experimental data; polymer segment alignment is suggested as the main cause of the anisotropy of the diffusion tensor. However, this model is not expected to apply to flows other than planar shear, and it does not capture the anisotropic relaxation dynamics of aligned polymers after flow cessation. Furthermore, due to the nature of the deposition flow <cit.>, polymers on either side of the interface reside in different deformation environments. Thus, a mutual diffusion mechanism should be considered, in which the diffusion coefficient depends on the local composition of chain mobility <cit.>. In particular, the theory of Kramer et al. <cit.>, suggesting that mutual diffusion is controlled by the mobility of the faster-moving chains, can successfully describe experimentally measured diffusion coefficients <cit.>. In this paper, we investigate the post-extrusion diffusive behaviour at the weld between two printed layers of an amorphous (non-crystalline) polymer. We use the procedure developed by McIlroy & Olmsted <cit.> to calculate the polymer conformation tensor and the corresponding disentanglement induced by the extrusion process. We then introduce a spatio-temporal temperature profile to examine how this deformation relaxes in the weld region between two layers whilst cooling.
In particular, we study how the structure of the weld evolves and how entanglements recover, and we calculate an interpenetration distance that incorporates both anisotropic and mutual diffusion effects. Finally, we address how these weld properties are affected by molecular weight, nozzle shear rate and print temperature. § FFF MODEL §.§ Ideal Extrusion Process The solid filament feedstock is melted within the nozzle. Recently, Mackay et al. developed a model that solves an approximate energy balance to correlate the maximum feed velocity with the print temperature T_N <cit.>. In a frame moving with the nozzle, the melt exits the nozzle in steady state at mass-averaged speed U_N. It is then deposited onto the build surface, where the material must speed up and deform to make a 90^o turn. The build plate moves horizontally in relation to the nozzle at mass-averaged speed U_L. Assuming mass conservation and deformation of the extrudate into a deposited filament of elliptical cross section (semi-axes R and H/2), the two speeds are related by π R^2 U_N = (π R H/2) U_L, where R is the nozzle radius and H is the thickness of the deposition. Typically H<2R, so that the deposited filament is elliptically shaped. The deposited filament then cools from the print temperature T_N towards the glass transition temperature T_g. Typical model parameters are detailed in the Appendix (Table <ref>). For comparison with Ref. <cit.>, we consider an extrusion process that deposits a single filament (or layer) in the xy-plane. Subsequent filaments are deposited on top of the previously-deposited filament to create a vertical printed `wall' in the z-direction (Fig. <ref>). Due to this geometry we use the term `layer' to refer to a single deposited filament; note that some authors use `layer' to refer to the planar geometry of the build. Figs. <ref>a and b illustrate these two key stages of FFF printing; layer 𝙻_𝚙-1 is deposited at time t_w(p-1), followed by layer 𝙻_𝚙 on top of 𝙻_𝚙-1 at time t_w(p). During the second stage the previously-printed layer heats up and a weld forms between the adjacent layers. The temperature profile T(t,z) of the two layers drives the welding across the layer-layer interface z=0 in the region of the polymer size (± R_g); we denote the weld sites on either side of the interface by 𝚝_𝚙-1 (at the top of 𝙻_𝚙-1) and 𝚋_𝚙 (at the bottom of 𝙻_𝚙), respectively. Inter-diffusion between the layers occurs until the weld temperature reaches T_g at time t_g^W. §.§ Temperature Profile In typical FFF processes, the nozzle is fixed at temperature T_N, the build plate is held at the ambient temperature T_a (usually just below T_g), and printing occurs within an oven. The temperature profile of the oven is inhomogeneous due to the moving nozzle, which is continually accelerating and decelerating according to the print geometry and generating complex air-flow patterns. Thus, heat flow through the layers and exchange with the air is a very complicated problem. Typically, the boundary conditions at the layer-layer and layer-air interfaces are determined by a combination of convection, conduction and radiation; however, the heat transfer coefficient describing this cooling process is not well understood. We neglect temperature variations in the x-direction and model the one-dimensional temperature profile T(t,z) across two layers (z∈ [-H,H], where z=0 is the layer-layer interface and z=H is the layer-air interface) via the one-dimensional heat equation ∂ T/∂ t = α(T) ∂^2 T/∂ z^2. This gives the temperature evolution through the centre (x=0) of the two layers (Fig. <ref>b); a minimal numerical sketch of this thermal model is given below.
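The following Python sketch integrates this thermal model with the explicit finite-difference scheme, Dirichlet boundary conditions and step initial condition described in the next paragraph, using the linear polycarbonate diffusivity α(T) = α_0(1 - B T) specified there. The numerical values of α_0, B and the half-width H are illustrative stand-ins for the tabulated parameters, not the actual values from Table <ref>.

import numpy as np

# Explicit (forward-time, centred-space) integration of
#   dT/dt = alpha(T) d^2T/dz^2   on z in [-H, H],
# with Dirichlet walls at T_a and a hot-on-cold step initial condition.
# alpha0, B and H are illustrative stand-ins for the tabulated values.
T_N, T_a, T_g = 250.0, 95.0, 140.0   # degC: print, ambient, glass transition
alpha0, B = 2.0e-7, 1.3e-3           # m^2/s and 1/degC (assumed)
H = 0.25e-3                          # m, layer thickness (assumed)

nz = 101
z = np.linspace(-H, H, nz)
dz = z[1] - z[0]
T = np.where(z >= 0.0, T_N, T_a)     # hot new layer deposited on a cold one

def alpha(T):
    return alpha0 * (1.0 - B * T)    # linear alpha(T) for polycarbonate

dt = 0.2 * dz**2 / alpha(T_a)        # conservative explicit-stability margin
t, weld = 0.0, nz // 2               # weld line sits at z = 0
while T[weld] > T_g:                 # evolve until the weld line reaches T_g
    T[1:-1] += dt * alpha(T[1:-1]) * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
    T[0] = T[-1] = T_a               # Dirichlet boundaries
    t += dt
print(f"weld line cools to T_g after t_g^W ~ {t:.2f} s")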
For polycarbonate, which is a typical printing material, the thermal diffusivity α has a linear temperature dependence <cit.> of the form α(T) = α_0(1 - B T), for constants α_0 and B, with T_g=140^oC. Thermal diffusivity changes by ∼ 30% between the melt at T_N and the solid at T_a (see Appendix, Table <ref>). We solve Eqs. <ref>-<ref> via an explicit finite-differencing scheme with the following boundary conditions. We assume Dirichlet boundary conditions at the interfaces, such that T(t,z=± H) = T_a, and the initial temperature profile is a step function, T(t=0,z) = T_N for z > 0 and T(t=0,z) = T_a for z < 0. Although Eqs. <ref>-<ref> give a crude approximation for the cooling, this thermal protocol mimics experimental infrared measurements of the surface temperature of a printed ABS `wall' <cit.>, which has a similar thermal diffusivity to polycarbonate. In particular, Fig. <ref> shows that the one-dimensional model describes well the infrared-measured temperature evolution at the layer midpoints 𝚖_𝚙 and 𝚖_𝚙-1 during the second stage of printing. The temperature at the weld line is hard to measure using infrared imaging due to the curvature of the surface; in Ref. <cit.> it is determined by taking an average of the temperature evolution at 𝚖_𝚙 and 𝚖_𝚙-1. Arguably, the one-dimensional calculation shown in Fig. <ref> gives a more accurate description of the weld line temperature evolution, although complex cooling dynamics during deposition are neglected. The idealised wall geometry used here also does not capture the effects of adjacent layers in the xy-plane on the temperature profile. Fig. <ref>c shows the spatio-temporal temperature profile T(t,z) predicted by Eqs. <ref>-<ref> for polycarbonate properties (see Table <ref>), the material we will consider henceforth, as in Ref. <cit.>. Fig. <ref>d shows the temporal temperature evolution of sites at the top 𝚝, middle 𝚖 and bottom 𝚋 of each layer. We use a log-linear scale for comparison with later results. We choose T_N=250^oC and T_a=95^oC, as in typical FFF systems. Welding at the interface z=0 is governed by the temperature evolution and material dynamics on either side of the weld line at the weld sites 𝚝_𝚙-1 and 𝚋_𝚙. These sites are chosen such that 𝚝_𝚙-1 - 𝚋_𝚙 = 2R_g, where R_g ∼ 10 nm is the radius of gyration of an individual polymer. In particular, Fig. <ref>d shows how the layer-air interface at 𝚝_𝚙-1 rapidly cools below T_g during stage 1. At t_w(p) the next layer is deposited, creating a layer-layer interface, and instantly heats up weld site 𝚝_𝚙-1; the second weld site 𝚋_𝚙 instantly cools by the same amount. During stage 2, the layer-layer interface cools much more slowly than the layer-air interface, reaching T_g in approximately t_g^W=1 s. We will study how this weld-line temperature evolution affects the relaxation of the printing material at the weld sites 𝚝_𝚙-1 and 𝚋_𝚙, and consequently the characteristics of the weld. §.§ Polymer Dynamics We describe the polymer microstructure using a modified version of the Rolie-Poly model <cit.> that includes flow-induced changes in the entanglement fraction, as in Ref. <cit.>. Essentially, the Rolie-Poly model is a variation of the standard Doi-Edwards tube model for linear entangled polymer networks, which approximates the more powerful but unwieldy microscopic GLaMM model <cit.> to provide a simple one-mode constitutive equation for the stress tensor (see Appendix <ref> for more details). Since surrounding chains restrict transverse motion in a melt, a polymer chain is restricted to a tube-like region.
This tube represents topological constraints due to entanglements <cit.>. At equilibrium, the entanglement number of a melt is related to the molecular weight M_w via Z_eq = M_w/M_e, where M_e is the molecular weight between entanglements (see Table <ref>). Motion of a chain along the contour of the tube is unhindered by topological constraints and is known as reptation. The polymer microstructure can be parametrised by a conformation tensor A = ⟨ RR⟩/3R_g^2, for end-to-end vector R and radius of gyration R_g, which satisfies the Rolie-Poly equation <cit.> DA/Dt = K· A + A· K^T - 1/τ_d(T,γ̇) (A - I) - 2/τ_R(T)(1 - √(3/tr A)) (A + β√(tr A/3) (A - I)), where D/Dt = ∂/∂ t + (u·∇) is the material derivative for fluid velocity u, K_αβ = ∂ u_α/∂ x_β is the velocity gradient tensor and tr A denotes the trace of the tensor A. The first term in Eq. <ref> describes how chains become stretched and oriented in the flow field, whereas the last two terms define two relaxation mechanisms: reptation along the tube and Rouse relaxation of the tube stretch, respectively. The convective constraint release (CCR) mechanism, whereby the motion of neighbouring tubes can release a topological constraint, is controlled by the parameter β. In fast flow conditions, CCR together with alignment can lead to a flow-induced decrease in the entanglement fraction ν = Z/Z_eq. Thus, we incorporate flow-induced disentanglement via the recent kinetic equation of Ianniruberto <cit.>: dν/dt = -β(K:A - 1/tr A d/dt(tr A)) ν + (1-ν)/τ_d^eq(T). Entanglement loss can be modified by changing β (see Ref. <cit.>) and entanglements are regained via curvilinear diffusion along the tube, i.e. reptation. At equilibrium, the relaxation times depend on temperature via the typical Williams-Landel-Ferry (WLF) equation <cit.>: the reptation time τ_d^eq governs the orientation of the tube, τ_d^eq(T) = τ_d^0 exp(-C_1(T-T_0)/(T+C_2-T_0)), and the Rouse time τ_R^eq governs the relaxation of the tube stretch, τ_R^eq(T) = τ_R^0 exp(-C_1(T-T_0)/(T+C_2-T_0)). Here C_1 and C_2 are the WLF constants, T_0 is the reference temperature, and τ_d^0 and τ_R^0 are the reptation and Rouse times at T_0, given by <cit.> τ_R^0 = τ_e^0 Z_eq^2, τ_d^0 = 3 τ_e^0 Z_eq^3 (1 - 3.38/√(Z_eq) + 4.17/Z_eq - 1.55/√(Z_eq)^3), respectively, where τ_e^0 is the Rouse time of one entanglement segment at T_0 (see Table <ref>). To incorporate the anisotropic nature of polymers in flow, the reptation time is modified according to <cit.> 1/τ_d(T,γ̇) = 1/τ_d^eq(T) + β(K:A - 1/tr A d tr A/dt), where the temperature dependence of the equilibrium reptation time is given by Eq. (<ref>). Thus, polymers that are more aligned (and therefore partially disentangled) can relax faster, at a rate proportional to CCR. The Rouse time in Eq. <ref> does not depend on the flow. Often the entanglement number is written as Z_eq ≈ 3τ_d^eq/τ_R^eq. However, this relation is strictly only true for Z_eq > 100. Since printing materials typically have fewer entanglements, it is important to acknowledge that from Eq. <ref>b the reptation time scales as τ_d^0 = 3/11 τ_e^0 Z_eq^7/2 in the range 6<Z_eq<50, rather than as Z_eq^3. This is consistent with experiments that observe a 3.4 scaling <cit.>. Flow through the nozzle is characterised by the equilibrium mass-averaged nozzle Weissenberg number, Wi = (U_N/R) τ_d^eq(T_N). For Wi>1, the flow rate exceeds the characteristic relaxation rate and can thus induce significant departure from the equilibrium polymer configuration (the sketch below illustrates these temperature- and Z_eq-dependent time scales).
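The following Python sketch evaluates the WLF-shifted relaxation times with the finite-Z_eq correction quoted above, including the stretch (Rouse) Weissenberg number defined in the next sentence. The entanglement Rouse time τ_e^0, the reference temperature T_0, the WLF constants and the nozzle condition are placeholders for the tabulated polycarbonate values, chosen only so that the printed Wi lands near the Wi = 2 case discussed in the text.

import numpy as np

# WLF-shifted relaxation times with the Likhtman-McLeish finite-Z correction.
# tau_e0, T0, C1, C2 and the nozzle condition (U_N, R) are assumed values.
tau_e0 = 1.0e-6                   # s, entanglement Rouse time at T0 (assumed)
T0, C1, C2 = 250.0, 5.0, 120.0    # degC and WLF constants (assumed)

def wlf(T):
    return np.exp(-C1 * (T - T0) / (T + C2 - T0))

def tau_R(T, Z):
    return tau_e0 * Z**2 * wlf(T)

def tau_d(T, Z):                  # finite-Z correction from the text
    corr = 1.0 - 3.38 / np.sqrt(Z) + 4.17 / Z - 1.55 / Z**1.5
    return 3.0 * tau_e0 * Z**3 * corr * wlf(T)

T_N, Z_eq = 250.0, 37
U_N, R = 10e-3, 0.5e-3            # m/s and m (assumed nozzle condition)
print(f"tau_d^eq = {tau_d(T_N, Z_eq):.3g} s, tau_R^eq = {tau_R(T_N, Z_eq):.3g} s")
print(f"Wi = {U_N / R * tau_d(T_N, Z_eq):.2f}, Wi_R = {U_N / R * tau_R(T_N, Z_eq):.2f}")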
Similarly, the equivalent average Rouse Weissenberg number is given by Wi_R = (U_N/R) τ_R^eq(T_N), and signifies stretching of the tube. § MODELLING THE WELD REGION §.§ Initial Condition Successful welding depends on how the polymer melt interdiffuses and re-entangles across the layer-layer interface. To understand the interdiffusion process, we must first quantify the microstructure induced by the extrusion and deposition process. This configuration will provide an initial condition for calculating the evolution of the polymer structure at the weld. During deposition the polymer melt must deform to make the 90^o turn and transform into the elliptical geometry. Ref. <cit.> calculates the deformation A imposed by the extrusion and deposition process using the Rolie-Poly model (Sec. <ref>). The model parameters are T_N=250^oC, Wi=2, Z_eq=37 and β=0.3. In particular, Fig. <ref>a shows the stretch across the elliptical cross section of the deposit. There is a distinct gradient in the stretch from the top to the bottom of the layer, with the bottom half becoming much more stretched due to the stretching of the free surface during the 90^o turn. Due to polymer realignment during the deposition flow, the melt becomes significantly disentangled across the layer (Fig. <ref>b). §.§ Dynamics at the Weld To determine how the polymer microstructure evolves at the weld after deposition, we solve the modified Rolie-Poly Eqs. <ref> and <ref> under zero-flow conditions (i.e. K = 0). The initial condition is calculated by the procedure in Ref. <cit.> (e.g. Fig. <ref>). We then calculate the relaxation process at the weld sites 𝚝_𝚙-1 and 𝚋_𝚙 on either side of the weld line during the two stages of printing. During stage 1, 𝚝_𝚙-1 is at a layer-air interface, whereas during stage 2, 𝚝_𝚙-1 and 𝚋_𝚙 form a layer-layer interface (Fig. <ref>). The temperature dependence of the reptation and Rouse times (Eqs. <ref> and <ref>) is determined by the temperature profile at the weld calculated in Sec. <ref>. The model parameters are set to T_N=250^oC, Z_eq=37 and Wi=2, which are typical for polycarbonate printing material. The CCR parameter is set to β=0.3, as in Ref. <cit.>. § EVOLUTION OF DISENTANGLED WELD STRUCTURE Fig. <ref> shows an elliptical representation of the tensor A at weld sites 𝚝_𝚙-1 and 𝚋_𝚙; a sphere represents an undeformed polymer at equilibrium, whereas an ellipse represents a stretched and oriented polymer. The grey circles correspond to the equilibrium shape. We discuss the effects of changing the print speed and the CCR parameter β on the relaxation dynamics in the Appendices <ref>, <ref>. Stage 1 (Fig. <ref>a): Layer 𝙻_𝚙-1 exits the nozzle at temperature T_N and is deposited at time t_w(p-1). The left-most ellipse represents the initial polymer configuration at weld site 𝚝_𝚙-1 induced by the deposition process. Since site 𝚝_𝚙-1 is exposed to the air, it rapidly cools to the ambient temperature T_a (Fig. <ref>d); the temperature of the free surface drops below T_g in t_g^F = 0.5 μs. Thus, the extrusion-induced deformation and the corresponding disentanglement fraction do not have time to fully relax, and a non-equilibrium polymer configuration is locked into the weld site prior to the creation of the layer-layer interface. Stage 2 (Fig. <ref>b): Layer 𝙻_𝚙 exits the nozzle at temperature T_N and is deposited on top of the cool layer 𝙻_𝚙-1 at time t_w(p). This creates a layer-layer interface between weld sites 𝚝_𝚙-1 and 𝚋_𝚙. Again, the left-most ellipses represent the initial polymer configuration at time t_w(p).
Each site has a different initial microstructure due to the different degrees of deformation at the top and bottom of the layer during deposition, as well as the thermal history. In particular, the initial ellipse at 𝚝_𝚙-1 represents the deformation induced at the top of a layer, which has been frozen in by the cooling of the layer-air interface during stage 1. On the other hand, the initial ellipse at 𝚋_𝚙 represents the deformation induced at the bottom of a layer at print temperature T_N, which is much larger due to the greater stretch around the outer corner. Since the layer-layer interface cools much more slowly than the layer-air interface (Fig. <ref>d), the polymer has much longer to relax before T_g is reached (t_g^F ≪ t_g^W). Thus we see relaxation of the polymer at 𝚝_𝚙-1. Similarly, the larger deformation at 𝚋_𝚙 also relaxes with a similar temperature evolution, also reaching T_g at time t_g^W. There is still insufficient time for the polymer to fully relax before the onset of the glass transition, so the weld region remains slightly anisotropic at t_g^W. Fig. <ref> shows how the reptation time τ_d, the tube stretch tr A-3, the principal shear component A_yz and the entanglement fraction ν at the weld sites 𝚝_𝚙-1 and 𝚋_𝚙 evolve during the two stages of printing; note that 𝚋_𝚙 only exists once 𝙻_𝚙 has been deposited during the second stage of printing. We discuss these features in turn. Reptation Time (Fig. <ref>a,b): After deposition of 𝙻_𝚙-1, the reptation time τ_d rapidly diverges (Fig. <ref>a), since the temperature at the layer-air interface 𝚝_𝚙-1 drops below T_g in less than 100 μs. When the layer-layer interface is formed at time t_w(p), heat transfer between the two layers causes the polymer at 𝚝_𝚙-1 to instantaneously become more mobile, and the reptation time becomes finite (Fig. <ref>b). Despite having a similar temperature evolution, the two weld sites 𝚝_𝚙-1 and 𝚋_𝚙 have different reptation times due to the different degrees of stretch induced by the extrusion process (Eq. <ref>). Deformation Relaxation (Fig. <ref>c,d,e,f): During stage 1, Figs. <ref>c,e show that the initial tube stretch tr A and shear deformation A_yz at the free surface 𝚝_𝚙-1 do not relax, due to the diverging reptation time, and persist into the second printing stage. Once 𝙻_𝚙 is deposited, the increased temperature of the weld site allows relaxation of tr A and A_yz (Fig. <ref>d,f). Since Wi_R>1, linear relaxation of the deformation at both 𝚝_𝚙-1 and 𝚋_𝚙 does not apply. Instead, the relaxation process is two-stage; the first relaxation mode is Rouse-like and governed by τ_R. Once the tube length returns to the equilibrium value tr A = 3, the usual reptation behaviour prevails and the reptation time τ_d becomes equivalent for both weld sites (Fig. <ref>b), now depending only on the temperature evolution. Convective constraint release (parametrised by β) only contributes to relaxation whilst tr A>3 and therefore only has a small effect on the relaxation dynamics during the first, Rouse, relaxation mode (see Appendix <ref>). Fig. <ref>d shows that the stretch of the tube at both 𝚝_𝚙-1 and 𝚋_𝚙 has sufficient time to relax prior to the onset of the glass transition at time t_g^W. In contrast, the relaxation of the principal shear component A_yz (Fig. <ref>f) is arrested by the glass transition. This anisotropy is particularly prominent at site 𝚋_𝚙. Thus, a non-equilibrium polymer orientation becomes `locked' into the weld region at t_g^W, so that the structure of the weld is slightly anisotropic.
Recovery of Entanglements (Fig. <ref>g,h): Finally, Fig. <ref>h shows how the entanglement fraction ν at both sites recovers towards unity during stage 2, according to Eq. (<ref>). Initially, the evolution of entanglements is governed by the tube stretch; whilst tr A>3, ν remains constant, since entanglements cannot be gained when the tube is retracting. Once the stretch has returned to equilibrium, entanglements recover at a rate determined by the reptation time τ_d^eq, which depends only on the temperature evolution. Due to the arresting glass transition, entanglements are not able to fully recover, and ν≈ 0.5 at t_g^W for both sites 𝚝_𝚙-1 and 𝚋_𝚙. Thus, the weld region is approximately 50% less entangled than the equilibrium material, presumably yielding a lower mechanical strength. Since β only affects entanglement recovery during the Rouse relaxation mode, we see similar ν at t_g^W for all β (see Appendix <ref>). § WELD INTER-PENETRATION THICKNESS We now consider the diffusion dynamics at the two weld sites 𝚝_𝚙-1 and 𝚋_𝚙 located at distances ± R_g from the weld line (Eq. <ref>). First we consider isotropic diffusion that is sped up by the relaxation of the tube stretch, according to Eq. <ref>. We then include the effects that an anisotropic environment has on the diffusion direction. Finally, we incorporate inhomogeneous diffusivity across the chain, due to the polymer diffusing into a different deformation environment as it crosses the weld line. §.§ Isotropic Welding Approximation The curvilinear diffusion coefficient along the contour length of the tube is given by <cit.> D_c = k_B T/(N ζ), where N is the number of Kuhn steps along the path, ζ is the friction coefficient and k_B is the Boltzmann constant. The Kuhn length b is the statistical length of a polymer segment and often represents chain stiffness <cit.>. The curvilinear distance ℓ travelled by a polymer chain in time t is then given by the Einstein relation for a one-dimensional random walk <cit.> ⟨ℓ^2⟩ = 2 D_c t = L_c^2 t/τ_d. The tube contour length is defined by <cit.> L_c = Nb^2/a_T, for Kuhn length b and tube diameter a_T = b√(N_e), where N_e is the number of Kuhn steps between entanglements; at equilibrium, N_e = N/Z_eq. The 1D tube contour executes a random walk in 3D space, leading to an interpenetration distance χ given by <cit.> χ^2 = ⟨ℓ^2⟩^1/2 a_T = N b^2 (t/τ_d)^1/2. Thus, a polymer chain diffuses via a double random-walk process. In the FFF problem the decreasing temperature progressively slows the motion. Thus, the interpenetration distance is calculated via the integral χ/R_g = (36 ∫_t_w^t_g^W 1/τ_d(T(t'),γ̇(t')) dt')^1/4, where R_g^2 = Nb^2/6 and the reptation time depends on both temperature and shear rate (Eq. <ref>); a numerical sketch of this integral is given below. For polycarbonate printed at T_N=250^oC, Fig. <ref> shows the interpenetration depth for polymers located at the weld: polymers located at 𝚋_𝚙 travel slightly further than polymers located at 𝚝_𝚙-1, due to a smaller reptation time, which is a result of the increased tube stretch at this weld site. In particular, for our model parameters χ≈ 2R_g at 𝚋_𝚙 before the glass transition arrests diffusion at t_g^W. Usually, experiments find that bulk strength is achieved once diffusion of the order of R_g has occurred <cit.>, and molecular simulations suggest that only a few entanglement lengths are required <cit.>. Despite a diffusion distance greater than R_g, we find that orientations do not fully relax during this time, thus creating an anisotropic weld structure (Fig. <ref>).
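A minimal Python sketch of this non-isothermal interpenetration integral is given here: it evaluates χ/R_g = (36 ∫ dt'/τ_d(T(t')))^1/4 along a simple exponential cooling history of the weld line, with diffusion arrested once T reaches T_g. The cooling law, τ_d^0 and the WLF parameters are illustrative placeholders, so the printed χ/R_g demonstrates the method rather than reproducing the χ ≈ 2R_g quoted above.

import numpy as np

# chi/R_g = (36 * integral dt'/tau_d(T(t')))**0.25 with a WLF-shifted
# reptation time and an assumed exponential cooling law for the weld line.
T_a, T_g = 95.0, 140.0                         # degC
T_w0, t_c = 172.0, 0.4                         # initial weld T, cooling time (assumed)
tau_d0, T0, C1, C2 = 0.01, 250.0, 4.0, 150.0   # s, degC, WLF constants (assumed)

def tau_d(T):
    return tau_d0 * np.exp(-C1 * (T - T0) / (T + C2 - T0))

t_end = -t_c * np.log((T_g - T_a) / (T_w0 - T_a))   # time at which T hits T_g
t = np.linspace(0.0, t_end, 2000)
T = T_a + (T_w0 - T_a) * np.exp(-t / t_c)           # cooling history T(t)

integral = np.trapz(1.0 / tau_d(T), t)              # integral of dt/tau_d
chi_over_Rg = (36.0 * integral) ** 0.25
print(f"t_g^W ~ {t_end:.2f} s, chi/R_g ~ {chi_over_Rg:.2f}")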
The polymer must diffuse its end-to-end distance R_E = √(6) R_g to fully escape its tube and relax to equilibrium. §.§ Anisotropic Welding Approximation Eq. <ref> does not account for a preferred diffusion direction due to the local anisotropic structure of the polymer melt. Similar to the work of Ilg & Kroger <cit.>, we propose a time-dependent anisotropic diffusion tensor of the form D = D_0 (I + η(A - I)), where D_0 = Nb^2/τ_d(t). The anisotropy parameter was found to be η≃ 1/3 by molecular simulations of long chains <cit.>. The interpenetration distance across the interface is then given by χ_zz/R_g = (36 ∫_t_w^t_g^W 1/τ_d(t)(1 + η(A_zz(t)-1)) dt')^1/4. For polycarbonate printed at T_N=250^oC, Fig. <ref> shows how the interpenetration depth across the weld line is reduced for anisotropic diffusion of polymers located at weld sites 𝚝_𝚙-1 and 𝚋_𝚙. Yet χ_zz is also larger than R_g, which, according to molecular dynamics simulations <cit.>, suggests that bulk strength should be achieved in the weld region. §.§ Mutual Diffusion Approximation When a polymer molecule diffuses across the interface, it is not only affected by the anisotropy of its own structure, but also by the anisotropy of the environment into which it diffuses. We consider polymers located on either side of the interface, of type 𝚝 and 𝚋, to signify the deformation at weld sites 𝚝_𝚙-1 and 𝚋_𝚙, respectively. Similar to the work of Kramer et al. <cit.>, we propose a mutual diffusion coefficient of the form D^M(t,z) = (1 - ϕ(t,z))D^𝚋(t) + ϕ(t,z)D^𝚝(t), where D^𝚋(t) = Nb^2/τ_d^𝚋(t) (I + η(A^𝚋(t) - I)), D^𝚝(t) = Nb^2/τ_d^𝚝(t) (I + η(A^𝚝(t) - I)). The volume fraction occupied by type-𝚝 polymers is denoted by ϕ. Initially ϕ = 1 at weld site 𝚝_𝚙-1 and ϕ=0 at weld site 𝚋_𝚙. The volume fraction evolves according to ∂ϕ/∂ t = ∂/∂ z(D_zz^M(t,z) ∂ϕ/∂ z), where D_zz^M is the zz-component of the mutual diffusion tensor governed by Eq. <ref>. In this way a diffusing chain carries its mobility across the weld line, and the diffusion coefficient depends on the local composition of mobility; a minimal finite-difference sketch of this composition equation is given below. For polycarbonate printed at T_N=250^oC, Fig. <ref>a shows the evolution of ϕ during the relaxation process. The weld is formed by polymers diffusing across the interface. Assuming that a weld is formed in the region 2% < ϕ < 98%, Fig. <ref>b shows the evolution of the interfacial width of the weld region. We highlight the asymmetric nature of diffusion across the interface by plotting the width calculated to the left and to the right of the interface. Asymmetry arises due to the different degrees of deformation at weld sites 𝚝_𝚙-1 and 𝚋_𝚙. The Kramer model <cit.> describes the mutual diffusion of two polymers of different molecular weight <cit.>. However, since the FFF problem discussed here involves a single molecular weight inter-diffusing between different-mobility environments, a polymer chain will inherit the relaxation characteristics of the environment into which it is diffusing. Thus, a similar approach can be taken to calculate the mutual interpenetration distance of a molecule located at weld site 𝚝_𝚙-1, namely χ_M. That is, χ_M/R_g = (36 ∫_t_w^t_g^W D^M_zz(t)/(Nb^2) dt')^1/4, where the diffusion coefficient of this molecule is given by D^M(t) = (1 - χ_M/2R_g)D^𝚝 + (χ_M/2R_g)D^𝚋 for χ_M < 2R_g, and D^M(t) = D^𝚋 for χ_M ≥ 2R_g, and χ_M parametrises how far the polymer has penetrated. A similar relation is used for molecules located at weld site 𝚋_𝚙. D^𝚝 and D^𝚋 are given by Eq. <ref>.
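The finite-difference sketch referenced above is as follows. It evolves ∂ϕ/∂t = ∂_z[D_zz^M(ϕ) ∂_zϕ] with the composition-weighted mobility, but with the two mobilities frozen at representative constant values; in the full model D^𝚋 and D^𝚝 evolve in time with the relaxing conformation tensors, so the numbers below are purely illustrative.

import numpy as np

# Explicit conservative finite differences for
#   d(phi)/dt = d/dz [ D_zz^M(phi) d(phi)/dz ],  D^M = (1-phi) D_b + phi D_t,
# with the two mobilities frozen at assumed representative values.
D_b, D_t = 4.0, 1.0               # nm^2/s; faster "b" side, slower "t" side (assumed)
L, nz = 20.0, 201                 # nm window: z in [-L, L]
z = np.linspace(-L, L, nz)
dz = z[1] - z[0]
phi = np.where(z < 0.0, 1.0, 0.0) # type-t material initially fills z < 0

dt = 0.2 * dz**2 / max(D_b, D_t)  # explicit-stability margin
for _ in range(int(1.0 / dt)):    # evolve for ~1 s of welding time
    D = (1.0 - phi) * D_b + phi * D_t          # composition-weighted mobility
    D_half = 0.5 * (D[1:] + D[:-1])            # mobility at cell interfaces
    flux = D_half * np.diff(phi) / dz
    phi[1:-1] += dt * np.diff(flux) / dz

weld = z[(phi > 0.02) & (phi < 0.98)]          # 2% < phi < 98% weld region
print(f"interfacial width ~ {weld.max() - weld.min():.1f} nm (asymmetric about z=0)")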
In this way, the mutual diffusion coefficient of a chain is given by a weighted average of the two environments on either side of the interface that depends on the polymer's location. Compared to χ_zz (Eq. <ref>), Fig. <ref>a shows how the diffusion of molecules initially located at weld site 𝚋_𝚙 is slowed down by diffusing into a slower-moving environment, for polycarbonate printed at T_N=250^oC. In contrast, the diffusion of molecules initially located at 𝚝_𝚙-1 is increased by diffusing into a faster-moving environment (Fig. <ref>b). For this mutual diffusion model, type-𝚝 polymers ultimately diffuse further across the interface due to the increased mobility gained by diffusing into a faster environment. Thus, despite type-𝚋 polymers initially being more mobile due to the degree of deformation, the type-𝚝 polymers ultimately create a thicker interfacial width. This approach gives a final asymmetric interfacial width similar to that seen in Fig. <ref>b for the Kramer model based on local composition (Eq. <ref>). Including the effects of mutual diffusion also yields χ_M > R_g at this print temperature. Thus, for this case the mechanical strength of the weld region is limited by the molecular structure (and entanglement fraction) of the weld region itself (Fig. <ref>), rather than by the interpenetration depth of the polymer. § CONTROLLING WELD STRENGTH §.§ Final Weld Properties The mechanical properties of a material can be characterised by the plateau modulus G_e <cit.> and the fracture toughness G_c <cit.>. Both properties are controlled by the molecular weight between entanglements, M_e: G_e ∼ 1/M_e, G_c ∼ (1 - M_e/qM_w)^2, for q ≈ 0.6 <cit.>. A smaller M_e results in a greater entanglement density and therefore increases both the plateau modulus and the fracture toughness of a material. For polycarbonate, the entanglement molecular weight is determined by M_e = 1.156 a_T^2 g/mol, where a_T = 37.9 Å is the tube diameter (Eq. <ref>) and the pre-factor accounts for the bond angle, characteristic ratio and monomer weight <cit.>; a worked evaluation is given below. At a welded interface the mechanical strength is determined by how many segments of length a_T cross the interface <cit.>. Thus, the mechanical strength of a weld is attributed to the weld thickness χ_W, as well as to the integrity of the entanglement network at the weld, ν_W. Bulk strength is expected for ν_W=1 and χ_W/R_g>1. For certain printing conditions we have seen that, although the weld thickness exceeds R_g, the ultimate structure of the weld is anisotropic, with a weaker entanglement structure. Thus, reliable prediction of the weld properties from the material properties and printing parameters is key to ensuring weld strength and advancing FFF technology. The FFF extrusion model in <cit.> assumes an idealised deposition process during which there is no cooling or relaxation of the melt. In addition, the assumed temperature profile in the nozzle neglects inhomogeneities due to thermal diffusion and shear-heating effects. Both of these factors may significantly affect the temperature and polymer structure at the weld site, and consequently affect the weld properties. Thus, it is important to validate χ_W against experimentally measured weld thicknesses, which can be challenging to measure reproducibly and reliably from FFF-printed parts. In this section we draw qualitative conclusions about the weld properties based on our model.
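The worked evaluation referenced above: a_T and q follow the values quoted in the text, while M_w is an illustrative choice giving Z_eq ∼ 37.

# Worked numbers for the polycarbonate relations quoted above. a_T and q are
# taken from the text; M_w is an illustrative choice giving Z_eq ~ 37.
a_T = 37.9                        # tube diameter (Angstrom)
M_e = 1.156 * a_T**2              # entanglement molecular weight (g/mol)
q, M_w = 0.6, 61000.0             # q from the text; M_w assumed

G_c_rel = (1.0 - M_e / (q * M_w)) ** 2   # relative fracture-toughness factor
print(f"M_e ~ {M_e:.0f} g/mol, Z_eq = M_w/M_e ~ {M_w / M_e:.0f}, "
      f"(1 - M_e/(q M_w))^2 ~ {G_c_rel:.2f}")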
First we calculate the final weld entanglement number ν_W and the interpenetration depth χ_W at time t_g^W and examine how they vary with the equilibrium mass-averaged nozzle Weissenberg number (Eq. <ref>), the print temperature T_N and the molecular weight Z_eq. Finally, we suggest how these weld properties can increase the entanglement molecular weight in the weld region and therefore decrease the mechanical strength of a printed part. §.§ Effect of Print Speed For a fixed print temperature T_N and a range of typical entanglement numbers Z_eq, Figs. <ref>a and b show the weld properties ν_W and χ_W for increasing Wi. We observe a slight decrease in ν_W with Wi for the largest molecular weight, Z_eq=37, whereas χ_W is independent of the nozzle shear rate for all three Z_eq shown (Fig. <ref>b). Thus, the increased initial stretch imposed by larger shear rates in the nozzle is not large enough to influence the weld thickness by reducing the reptation time. Melts with fewer entanglements are more mobile and are therefore able to diffuse further during the welding process, creating a thicker, more entangled weld structure. For χ_W < R_E the weld structure will be anisotropic at the glass transition. Once the interpenetration depth surpasses R_E we expect an isotropic weld structure. §.§ Effect of Print Temperature In contrast to the Weissenberg number, we find that the print temperature significantly affects the welding behaviour (Figs. <ref>c,d). Both ν_W and χ_W increase as a function of T_N, since higher temperatures significantly speed up diffusion. Notably, even at the low print temperature T_N = 200^oC the weld thickness continues to surpass R_g, suggesting healing of the interface. However, the weld site remains almost fully disentangled in this case (Fig. <ref>c), so that bulk strength is unattainable. This is a consequence of the delay in entanglement recovery due to the relaxation of the tube stretch (Fig. <ref>h). Moreover, since smaller molecular weights have shorter Rouse times, entanglement recovery begins earlier, which allows the formation of a more entangled weld. Only for T_N ≥ 300^oC does the weld become fully entangled for all Z_eq. §.§ Effect of Molecular Weight For a fixed print speed U_N=10 mm/s, Figs. <ref>a,b show how ν_W and χ_W vary with a broader range of molecular weights. Since entanglements only recover once the stretch has relaxed, the final weld entanglement can be predicted from Eq. <ref> as follows: dν/dt = (1-ν)/τ_d^eq(T(t)), ∫_ν_dep^ν_W 1/(1-ν) dν = ∫_t_w^t_g^W 1/τ_d^eq(T(t)) dt, where ν_dep is the entanglement fraction at the weld site after deposition. Eq. <ref> yields ν_W = 1 - (1-ν_dep(Z_eq)) exp(-∫_t_w^t_g^W 1/τ_d^eq(T(t)) dt) and is plotted in Fig. <ref>a. The integral in Eq. <ref> can be approximated as ∫_t_w^t_g^W 1/τ_d^eq(T(t)) dt ≃ C/τ_d^eq(T_N) ∫_t_w^t_g^W dt, where the constant C accounts for τ_d^eq being approximately two orders of magnitude larger when entanglement recovery begins, due to cooling (Fig. <ref>b). Thus, the final weld entanglement can be written as ν_W ≃ 1 - (1-ν_dep(Z_eq)) exp(-C t_g^W/τ_d^eq(T_N)), and is also plotted in Fig. <ref>a for T_N = 250^oC and C=0.016. Z_eq has two effects on entanglement recovery at the weld. First, a larger Z_eq increases the time scale τ_d^eq, so that larger molecular weights diffuse more slowly and therefore recover fewer entanglements. Second, at a fixed print speed, increasing Z_eq yields a larger Weissenberg number, which leads to greater disentanglement during deposition, i.e.
a smaller ν_dep <cit.>. The weld interpenetration depth scales as χ_W/R_g ∼ (1/τ_d^eq(T(t)))^1/4 ∼ 1/Z_eq^7/8, since τ_d^eq scales as Z_eq^7/2 (see Eq. <ref>). The predicted χ_W deviates from Eq. <ref> at some Z_eq, depending on the print temperature (Fig. <ref>b). This critical Z_eq defines the molecular weight at which diffusion is arrested during relaxation of the tube stretch. In this regime the tube stretch reduces the reptation time via Eq. <ref> and speeds up diffusion. This effect results in a larger χ_W than predicted by Eq. <ref>. §.§ Weld Fracture Toughness Mechanical strength at the weld is determined in part by the molecular weight between entanglements (Eq. <ref>). In the weld region we have M_e^W = M_e/ν_W. Thus, from Eq. <ref>b the fracture toughness at the weld is given by G_c^W ∼ (1 - M_e^W/qM_w)^2 ∼ (1 - 1/(qν_W Z_eq))^2. Hence decreasing ν_W increases M_e^W and reduces the toughness of the weld. The equilibrium bulk strength G_c is only achieved for ν_W=1. For polycarbonate, Fig. <ref> shows that G_c^W can decrease by 50% depending on the entanglement number Z_eq and the print temperature T_N. In particular, for a prescribed print temperature, there exists a maximum molecular weight that can be printed whilst maintaining G_c^W = G_c. For example, Z_eq ≤ 40 can be printed at T_N=300^oC before the strength is affected by disentanglement, whereas only Z_eq ≤ 22 maintains bulk strength at the lower temperature T_N=200^oC. § CONCLUSION We have developed a model for the non-isothermal FFF welding process to test the effect of changing the print speed, temperature and entanglement number on the ultimate welding characteristics of a non-crystalline polymer melt. It has previously been shown that the extrusion process can significantly deform and disentangle the polymer microstructure prior to welding <cit.>. After deposition the printed layer cools, and this deformation relaxes via reptation whilst inter-diffusing with the previously-printed layer. The temperature profile at the weld between the two layers is calculated by solving the heat equation in one dimension. The cooling rate inhibits the total relaxation of the deformation induced by printing, so that the ultimate structure of the weld is anisotropic and less entangled than the equilibrium material for typical printing conditions. Solving a diffusion equation that incorporates anisotropic and mutual diffusion yields the thickness of the weld formed between the layers. The model predicts that the weld thickness typically surpasses R_g, but not quite enough to fully relax. Thus, mechanical strength should not be limited by the interpenetration depth. However, despite sufficient weld thickness for bulk strength at the interface, entanglements do not have sufficient time to recover during cooling; ν_W is as low as 50% for typical printing conditions. Since a disentangled weld structure can significantly increase the entanglement molecular weight in the weld region, the mechanical properties at the weld may be significantly reduced. These findings suggest that disentanglement in the nozzle, combined with the delay in entanglement recovery due to the relaxation of the tube stretch, is the key mechanism responsible for a reduced mechanical strength in the weld region. Although the theory of flow-induced disentanglement has been compared to molecular dynamics simulations of planar shear flow <cit.>, the way in which polymers recover entanglements in the event of flow cessation is yet to be addressed.
It is crucial to benchmark this re-entanglement theory against molecular dynamics simulations of disentangled melts, in particular to verify that ν<1 for χ> R_E is a result of the initial delay in entanglement recovery due to tube stretch. Practically, thicker and more entangled welds can be formed by increasing the print temperature or by using a less entangled printing material, since both parameters significantly reduce the reptation time. We find that the weld thickness is independent of the deformation induced by different nozzle shear rates, as is the final entanglement network for Z_eq=22. Consequently, equivalent mechanical integrity is expected across all print speeds. Thus, the maximum print speed available can be exploited to increase productivity. § ACKNOWLEDGEMENTS We thank Jonathan Seppala and Kalman Migler for advice and an enjoyable collaboration, as well as the National Institute of Standards and Technology (NIST), Georgetown University, and the Ives Foundation for funding. § EXTRUSION AND DEPOSITION MODEL Here we summarise the FFF extrusion and deposition model detailed in Ref. <cit.>. The printing material is heated to temperature T_N and extruded through a nozzle of radius R at mass-averaged speed U_N. Assuming a steady state, the momentum balance is given by ∇·σ = 0, for stress tensor σ. The total stress in the polymer melt comprises solvent and polymer contributions, σ = -pI + G_e(A - I) + 2μ_s(K + K^T), where p is the isotropic pressure and G_e is the plateau modulus. For times shorter than τ_e, Rouse modes corresponding to lengths shorter than M_e contribute to a background viscosity defined as <cit.> μ_s = (π^2/12)(G_e/Z_eq) τ_R^eq. The temperature profile is assumed to be uniform across the nozzle, and Eq. <ref> is solved alongside the Rolie-Poly Eqs. (<ref>) and (<ref>) to calculate the plug-like velocity profile, and the polymer deformation and disentanglement across the nozzle. Nozzle Weissenberg numbers (see Eqs. (<ref>) and (<ref>)) for typical fast and slow print speeds are quoted in Table <ref>; the polymer is found to stretch and orient in the nozzle depending on the print speed. The material is then deposited into a layer of thickness H, which travels horizontally at speed U_L in a frame moving with the nozzle. During this deposition the material must speed up and deform to make a 90^o turn and transform from a circular to an elliptical geometry (since typically H < 2R). In order to conserve mass, U_L > U_N (Eq. <ref>). Rather than calculating the full fluid mechanics, the shape of the deposition is prescribed (ignoring die swell) and the fluid is advected using local flux conservation. We assume a uniform temperature profile throughout the deposition and neglect polymer relaxation during this stage. Due to the assumption of zero polymer relaxation, the model breaks down for slower print speeds such that Wi < H/R <cit.>. In this way the velocity profile u is prescribed only by the geometry of the deposition, and the Rolie-Poly Eq. <ref> reduces to (u·∇)A = K· A + A· K^T. § WELD STRUCTURE AND PRINT SPEED Fig. <ref> shows the relaxation dynamics at the weld site 𝚋_𝚙 for the two typical print speeds given in Table <ref>, corresponding to Wi=2 and 13. The parameters are T_N=250^oC, Z_eq=37 and β=0.3. Time is scaled by τ_d^eq(T_N) in order to highlight the effect of the cooling temperature profile, and the case τ_d = τ_d^eq (with an initial polymer configuration imposed by the fast print speed) is plotted to highlight the effect that polymer stretch has on the relaxation process.
The reptation time τ_d(T,γ̇) and the Rouse time τ_R(T) are plotted in Figs. <ref>a,b. Although the stretch induced by printing significantly reduces the reptation time compared to τ_d^eq (Eq. <ref>), the difference in the reptation time for the two typical print speeds is small. The Rouse time is independent of the print speed (Eq. <ref>). Figs. <ref>c,d show the relaxation of the principal shear component A_yz and the tube stretch tr A-3. The principal shear component A_yz relaxes on a time scale t/τ_d^eq(T_N)>1, demonstrating how the cooling temperature profile inhibits the relaxation process. We see a similar two-mode relaxation (Rouse followed by reptation) for both print speeds. The stretch has sufficient time to relax prior to the glass transition, but the polymer orientation remains out of equilibrium for both print speeds. The structure is slightly closer to equilibrium for the fast-printing case, due to a slightly smaller reptation time at t_w(p). Fig. <ref>e shows how the entanglement fraction evolves at the weld for the two print cases. The weld region is approximately 50% less entangled than the bulk material, and the final ν is similar for both printing speeds. Finally, Fig. <ref>f shows the isotropic interpenetration distance χ calculated by Eq. (<ref>). Since the reptation time is initially smaller, faster printing allows for a slightly longer welding time and consequently a slightly thicker weld, although the difference is much smaller than the polymer size (≪ R_g). § EFFECT OF CCR PARAMETER Fig. <ref> shows the relaxation dynamics at the weld site 𝚋_𝚙 for CCR parameters β=0.3 and 1. The parameters are T_N=250^oC, Z_eq=37 and Wi=2. The reptation time (Fig. <ref>a) depends on β whilst the tube is stretched (tr A>3), and β=1 reduces the reptation time by an order of magnitude compared to β=0.3. Fig. <ref>b shows that the Rouse time is independent of β (Eq. <ref>). Due to the reduced reptation time, both the principal shear deformation A_yz and the tube stretch tr A relax faster for β=1 (Fig. <ref>c,d). Since entanglement recovery depends only on τ_d^eq(T), fewer entanglements are recovered for β=1, due to the smaller initial ν generated (Fig. <ref>e). The polymer is able to diffuse further in the case β=1, although the difference is less than R_g (Fig. <ref>f), again as a result of the reduced reptation time. The conclusion that weld properties are independent of the print speed also holds for β=1.
"authors": [
"Claire McIlroy",
"Peter Olmsted"
],
"categories": [
"cond-mat.soft"
],
"primary_category": "cond-mat.soft",
"published": "20170327201542",
"title": "Disentanglement Effects on the Welding Behaviour of Polymer Melts during the Fused-Filament-Fabrication Method for Additive Manufacturing"
} |
Compact stars with sufficiently high densities in their interiors can give rise to deconfined quark phases that open a window for the study of strongly interacting dense nuclear matter. Recent observations of the masses of two pulsars, PSR J1614-2230 and PSR J0348+0432, have placed a strong restriction on their composition, since their equations of state must be hard enough to support masses of at least about two solar masses. The onset of quarks tends to soften the equation of state, but due to their strong interactions, different phases can be realized, with new parameters that affect the corresponding equations of state and ultimately the mass-radius relationships. In this paper I will review how the equation of state of dense quark matter is affected by the physical characteristics of the phases that can take place at different baryonic densities, with and without the presence of a magnetic field, as well as their connection with the corresponding mass-radius relationship, which can be derived by using different models. EoS's of different phases of dense quark matter E J Ferrer Dept. of Engineering Science and Physics, College of Staten Island, CUNY, and CUNY-Graduate Center, New York 10314, USA Received: date / Accepted: date § INTRODUCTION The composition of neutron stars is still an open question. Propositions ranging from nuclear matter (possibly with hyperons and superfluid nucleons) to deconfined quark matter (in any of its possible phases, such as color superconducting phases or phases with inhomogeneous particle-hole pairing, etc.) are under examination. New data on masses and radii, transport properties, as well as the modeling of other phenomena such as glitches, burst episodes, etc., help to constrain the equation of state (EoS) corresponding to the different matter phases that can prevail in their interiors, but no firm conclusion has been drawn yet. In both nuclear and quark matter descriptions, there still exist a few parameters to be adjusted, which should be further constrained by nuclear matter and heavy-ion collision data, so as to be able to discriminate among the different star constituents; however, the region of reliability of such data at low temperature and large chemical potential poses a big challenge. The proposition that quark stars could be a stable configuration for compact stellar objects has been around for almost 50 years <cit.>. The idea is based on the fact that matter composed of up, down, and strange quarks, the so-called strange quark matter (SQM), may have a lower energy per baryon number than the nucleon, thus being absolutely stable. More recent developments brought the idea that color superconductivity should be the favored state of deconfined quark matter, since the Cooper pair condensation lowers the total energy per baryon number of the system <cit.>. On this basis, many works have been devoted to studying the characteristics of the EoS of the different phases of quark matter. In this regard, recent data <cit.> have determined masses and/or radii for some compact objects with high precision (although some of these remain to be confirmed <cit.>), rendering some information about the possible composition of these objects.
In this scenario, the recent observation of the pulsar J1614-2230, a binary system for which the mass of the neutron star was measured rather accurately through the Shapiro delay, yielding M = 1.97±0.04 M_⊙ <cit.>, posed an important question about the existence of SQM. Of course, the answer to this is directly related to the prevailing matter phase, which must produce an EoS stiff enough to reach such a large mass value. In this paper, I review several results obtained in a set of papers that will be conveniently referenced, where different approaches to describing the quark matter EoS were used. A main purpose will be to discuss some effects that can affect the stiffness of the EoS, so as to determine whether it is possible to reach the reported mass value M = 1.97±0.04 M_⊙. I will finish by discussing the magnetic field effect on the EoS of quark matter in the color superconducting color-flavor-locked (CFL) phase, pointing out the new challenges that the presence of a magnetic field poses for the calculation of the M-R relationship. § BAG VS NJL MODEL IN THE CFL PHASE In the study of the EoS of the possible matter phases that could form neutron stars, two frameworks have mainly been used: the MIT bag model and the Nambu-Jona-Lasinio (NJL) model. Both models are inspired by some of the fundamental properties of strongly interacting systems, although some aspects could be lacking. The MIT bag model is a phenomenological model that was proposed to explain hadrons <cit.>. In this model, the quarks are asymptotically free and confinement is provided through the bag constant B, which artificially constrains the quarks inside a finite region in space. In the case of bulk quark matter, the asymptotically free phase of quarks will form a perturbative vacuum (inside a `bag'), which is immersed in the nonperturbative vacuum. The creation of the `bag' costs free energy. Then, in the energy density, the energy difference between the perturbative vacuum and the true one should be added. Essentially, this is the bag constant, which characterizes a constant energy per unit volume associated with the region where the quarks live. From the point of view of the pressure, B can be interpreted as an inward pressure needed to confine the quarks into the bag. Let us now consider the difference between these two approaches in the case of SQM. For the free quark system, the thermodynamic potential is given in the MIT framework by Ω = ∑_i Ω_i + B, where Ω_i = -μ_i^4/(4π^2), with i running over the quarks u, d, s and the electrons, and μ_i being the corresponding chemical potential of each particle species i. When the color superconducting pairing is considered in the CFL phase, which is the most stable phase at high densities <cit.>, the thermodynamical potential is assumed to be the sum of the one corresponding to the unpaired state (<ref>) plus a gap-dependent (Δ) term <cit.>, Ω_CFL^MIT = ∑_i Ω_i - (3/π^2) μ^2 Δ_CFL^2 + B. The term ∑_iΩ_i represents a fictitious unpaired state, proportional to the volume of the Fermi sphere, in which all quarks have a common Fermi momentum, and the extra Δ-dependent term represents a surface contribution of the Fermi sphere that is associated with the Cooper pair binding energy.
Here, as in (<ref>), B represents the bag constant contribution. In the context of the NJL model, the thermodynamic potential of the CFL phase is given by <cit.> Ω_CFL^NJL = -1/4π^2∫_0^∞ dp p^2 e^-p^2/Λ^2(16|ϵ| + 16|ϵ̄|) - 1/4π^2∫_0^∞ dp p^2 e^-p^2/Λ^2(2|ϵ'| + 2|ϵ̄'|) + 3Δ_CFL^2/G + B, where ϵ = ±√((p-μ)^2+Δ_CFL^2), ϵ̄ = ±√((p+μ)^2+Δ_CFL^2), ϵ' = ±√((p-μ)^2+4Δ_CFL^2), ϵ̄' = ±√((p+μ)^2+4Δ_CFL^2) are the quasiparticle and quasi-antiparticle dispersion relations. Note that in (<ref>) the particle masses are neglected, given the high chemical potentials at which the CFL phase is realized. In (<ref>) a smooth cutoff depending on the effective-theory energy scale Λ was introduced to guarantee continuous thermodynamical quantities. The system EoS is then obtained from the thermodynamic potential as the relation between the energy density and the pressure, which for an isotropic system are respectively given by ϵ = Ω - μ ∂Ω/∂μ, P = -Ω. On the other hand, the mass-radius (M-R) relationship is obtained by integrating the relativistic equations for stellar structure, that is, the well-known Tolman-Oppenheimer-Volkoff (TOV) and mass continuity equations, which in natural units, c = G = 1, are given by dM/dR = 4π R^2 ϵ, dP/dR = -ϵ M/R^2 (1+P/ϵ)(1+4π R^3 P/M)(1-2M/R)^-1, with ϵ and P taken from (<ref>) and (<ref>), respectively; a numerical sketch of this integration for the MIT-CFL EoS is given below. A fundamental difference between the models (<ref>) and (<ref>) is related to the way the pairing term Δ is implemented. In the MIT bag model the value of the gap parameter is fixed by hand; hence, it does not explicitly depend on changes in the other parameters characterizing the system. In the NJL approach, on the other hand, the pairing gap is obtained self-consistently through the gap equation ∂Ω_CFL/∂Δ_CFL = 0. In this way, as was shown in Ref. <cit.>, Δ depends on the density and on the diquark coupling constant G, as can be seen in Fig. <ref>. Thus, changing the coupling constant in this case from G = 4.32 to 7.10 GeV^-2, the corresponding gap parameters obtained at μ = 500 MeV change from Δ = 10 to 100 MeV, respectively. We point out that the value G = 7.10 GeV^-2 is used with the sole purpose of comparing the NJL results with the MIT ones, where the use of Δ = 100 MeV is a common practice. The EoS's corresponding to the NJL and MIT models are represented in the region of interest for SQM in Fig. <ref> for two different values of the gap parameter. It can be observed that the splitting between the EoS's for different Δ's is more significant for the values introduced in the MIT model than for those obtained in the NJL model, although they are really significant only for high gap values. As was found in <cit.>, in the MIT case, the higher the gap, the stiffer the EoS. Hence, as the mass supported by a given star configuration is related to the stiffness of the EoS, a higher value of the gap renders higher maximum masses for stable strange stars <cit.>. However, this is not the case for the NJL calculations. When a higher value of G is used, although the corresponding gap parameter increases for a fixed value of μ, the EoS does not change considerably and actually softens weakly in the region of interest for the neutron star interior. Therefore, in the NJL approach it is not possible to increase the maximum star mass of strange stars even when one uses unphysically large values of the coupling constant, as can be seen from Fig. <ref>.
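Because the MIT-CFL thermodynamic potential is analytic in μ, the corresponding EoS can be inverted in closed form (P = 3μ^4/(4π^2) + 3Δ^2μ^2/π^2 - B and ϵ = 3P + 4B - 6Δ^2μ^2/π^2 follow directly from the relations above for three massless flavors at a common μ, neglecting electrons) and fed straight into the TOV equations. The Python sketch below does this for the gap and bag values discussed above; the Euler step and the grid of central pressures are arbitrary numerical choices, so the printed masses and radii are illustrative rather than the figures quoted in the text.

import numpy as np

# Mass-radius sketch for the MIT-CFL EoS with massless quarks:
#   P = 3 mu^4/(4 pi^2) + 3 Delta^2 mu^2/pi^2 - B,
#   eps = 3P + 4B - 6 Delta^2 mu^2/pi^2   (electrons neglected).
HC3 = 197.327**3                    # (hbar c)^3 in MeV^3 fm^3
Delta = 100.0                       # MeV, gap value used in the text
B = 57.3 * HC3                      # bag constant in MeV^4 (57.3 MeV/fm^3)
GEO = 1.3234e-6                     # 1 MeV/fm^3 in geometric units (km^-2)
MSUN = 1.4766                       # solar mass in km (G = c = 1)

def eps_of_P(P):                    # both P and eps in MeV^4
    a, b = 3.0 / (4.0 * np.pi**2), 3.0 * Delta**2 / np.pi**2
    mu2 = (-b + np.sqrt(b * b + 4.0 * a * (B + P))) / (2.0 * a)
    return 3.0 * P + 4.0 * B - 6.0 * Delta**2 * mu2 / np.pi**2

def mass_radius(Pc, dr=1e-3):       # Pc in MeV^4; dr in km (simple Euler)
    r, m, P = dr, 0.0, Pc
    while P > 0.0:
        eg = eps_of_P(P) / HC3 * GEO              # energy density, km^-2
        pg = P / HC3 * GEO                        # pressure, km^-2
        dP = -(eg + pg) * (m + 4.0 * np.pi * r**3 * pg) / (r * (r - 2.0 * m))
        m += 4.0 * np.pi * r**2 * eg * dr
        P += dP * dr / GEO * HC3                  # back to MeV^4
        r += dr
    return r, m / MSUN

for Pc in (50.0, 150.0, 400.0):                   # central pressures, MeV/fm^3
    R, M = mass_radius(Pc * HC3)
    print(f"P_c = {Pc:6.1f} MeV/fm^3 -> R = {R:5.2f} km, M = {M:4.2f} Msun")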
The origin of the softening of the EoS in the CFL-NJL approach is the term 3Δ^2/G, which enters with a negative sign in the pressure. As discussed in <cit.>, from a physical point of view, the increase of G beyond a certain value in the CFL-NJL model implies a softening of the EoS because for large enough G's the system begins to cross over from BCS to BEC <cit.>-<cit.>. The crossover is reflected in the decay of the system pressure, which is due to an increase in the number of diquarks that become Bose molecules and hence cannot contribute to the Pauli pressure of the system. As shown in Refs. <cit.>, if the diquark coupling were high enough to produce a complete crossover from the BCS to the BEC regime, the pressure of the bosonic system at zero temperature would become zero, implying a collapse of the stellar system. To conclude this section, using Eq. (<ref>) one can indeed increase the maximum allowed mass for CFL strange stars by using an arbitrarily high gap value to match the recently reported data in <cit.>-<cit.>. Nevertheless, this effect is not possible within the more consistent NJL approach (<ref>), as can be seen in Fig. <ref>, since to increase the gap the coupling constant must be increased, and this tends to decrease the mass for a given radius.

§ EOS OF THE CFL-GLUONIC PHASE

In the NJL approach, gluon degrees of freedom are usually disregarded as having a negligible effect at zero temperature. Nevertheless, as was shown in Ref. <cit.>, gluons can affect the EoS of the CFL phase. The reason is that, as shown in <cit.>, the polarization effects of quarks in the CFL background endow all the gluons with Debye (m_D) and Meissner (m_M) masses that depend on the baryon chemical potential, m_D^2=(21-8ln 2)/18 m_g^2, m_M^2=(21-8ln 2)/54 m_g^2, m_g^2=g^2μ^2N_f/6π^2. Here, N_f is the number of flavors of massless quarks, and g the quark-gluon gauge coupling constant. As a consequence, the gluons in the CFL phase have nonzero rest energy, which produces a positive contribution to the system's energy density and consequently a negative contribution to the pressure. To incorporate the gluon effect into the system EoS, we started in <cit.> from the three-flavor gauged-NJL Lagrangian at finite baryon density, which is invariant under the SU_c(3)× SU_L(3)× SU_R(3)× U_V(1) symmetry ℒ=-ψ̅(γ^μ D_μ+μγ^0)ψ-G_V(ψ̅γ_μψ)^2+G_S∑_k=0^8 [(ψ̅λ_k ψ)^2 + (ψ̅iγ_5λ_k ψ)^2]-K [det_f( ψ̅(1+γ_5)ψ ) + det_f( ψ̅(1-γ_5)ψ ) ] +G_D/4∑_η(ψ̅P_ηψ̅^T)(ψ^TP_ηψ)+ℒ_G. In (<ref>), the quark fields ψ_i^a have flavor (i=u,d,s) and color (a=r,g,b) indexes. Here, the coupling G_S is associated with the quark-antiquark channel, the coupling K with the 't Hooft determinant term that accounts for the breaking of the U_A(1) symmetry <cit.>, the coupling G_D with the diquark channel P_η=Cγ_5ϵ^abηϵ_ijη, and the coupling G_V with the repulsive vector channel. This last interaction term naturally appears after a Fierz transformation of a point-like four-fermion interaction with the Lorentz symmetry broken by the finite density <cit.>. We neglect the current quark masses m_0i since our main domain of interest will be the CFL phase with μ≫ m_0i. The gluon Lagrangian density ℒ_G is ℒ_G=-1/4 G_μν^A G^μν_A + ℒ_gauge + ℒ_ghost, with G_μν^A=∂_μ G^A_ν-∂_ν G^A_μ + g f^ABC G^B_μ G^C_ν the gluon strength tensor; ℒ_gauge a gauge-fixing term; and ℒ_ghost=-η^A†∂^μ(∂_μη^A+g f^ABC G^B_μη^C) the ghost-field Lagrangian.
The coupling between gluons and quarks occurs through the covariant derivative D_μ= ∂_μ -i gT^AG^A_μ, where T^A=λ^A/2, A=1,…,8, are the generators of the SU_c(3) group in the fundamental representation (λ^A are the Gell-Mann matrices). The model (<ref>) is nonrenormalizable due to the four-fermion interaction terms, so a cutoff Λ needs to be introduced in all the calculations to regularize the theory in the ultraviolet region. The parameter Λ defines the energy scale below which this effective theory is valid. We want to consider the region of densities large enough to have a CFL phase, where the chiral condensate has already vanished and hence only the expectation values of the diquark condensate Δ_η=⟨ψ^TP_ηψ⟩ and the baryon charge density ρ=⟨ψ̅γ_0ψ⟩ need to be considered. One can now bosonize the four-fermion interaction via a Hubbard-Stratonovich transformation and then take the mean-field approximation to obtain the zero-temperature thermodynamic potential <cit.>, Ω=Ω_q+ Ω_g- Ω_vac, where the quark, gluon and vacuum contributions are given respectively by Ω_q= -1/4π^2∫_0^Λ dp p^2 (16|ε|+16|ε̄|)-1/4π^2∫_0^Λ dp p^2 (2|ε'|+2|ε̄'|) +3Δ^2/G_D-G_Vρ^2, Ω_g=2/π^2∫_0^Λ dp p^2 (√(p^2+ m̃^2_D θ(Δ-p)+3m̃^2_gθ(μ̃-p)θ(p-Δ))+3√(p^2+m̃_M^2θ(Δ-p)) ), Ω_vac≡Ω(μ=0, Δ=0), with energy spectra ε=±√((p-μ̃)^2+Δ^2), ε̄=±√((p+μ̃)^2+Δ^2), ε'=±√((p-μ̃)^2+4Δ^2), ε̄'=±√((p+μ̃)^2+4Δ^2), and dynamical quantities, Δ and ρ, found from the equations ∂Ω/∂Δ=0, ρ=-∂Ω_q/∂μ̃. Since the mean value ⟨ψ̅γ_0ψ⟩ enters the covariant derivative as a shift of the baryon chemical potential, the effective chemical potential for the baryon charge is now μ̃=μ-2G_V ρ instead of μ. Moreover, while the solution of the gap equation (first equation in (<ref>)) is a minimum of the thermodynamic potential, the solution of the second equation is a maximum <cit.>, since it defines, as usual in statistics, the particle number density ρ. To find the EoS of this system we need the corresponding pressure P and energy density ϵ, which can be found from the thermodynamic potential (<ref>) as P= -(Ω_q+ Ω_g- Ω_vac)+(B-B_0), ϵ = Ω_q+ Ω_g- Ω_vac + μ̃ρ-(B-B_0). Notice that the chemical potential that multiplies the particle number density ρ in the energy density is μ̃ instead of μ. This result can be derived following the same calculations of Ref. <cit.> to find the quantum-statistical average of the energy-momentum tensor component τ_00. In (<ref>), we added, as usual, the bag constant B (B_0 is introduced to ensure that ϵ=P=0 in vacuum). As we already pointed out, in the MIT bag model B was introduced as a phenomenological input parameter to account for the free energy cost of quark matter relative to the confined vacuum <cit.>. Nevertheless, in the NJL model, the bag constant can be calculated in the mean-field approximation as a dynamical quantity related to the spontaneous breaking of chiral symmetry <cit.>. Following this dynamical approach, we showed in <cit.> that the bag constant in this case is given by B=0 and the only remaining bag parameter in (<ref>) is B_0, which is written as B_0=9/π^2[∫_0^Λ p^2dp(√(m^2+p^2)-p+2G_Sm/√(m^2+p^2) )]-4K (3/π^2 )^3 [ ∫_0^Λ dp p^2 m/√(m^2+p^2) ]^3, with the same vacuum dynamical mass for the three quarks found from 1=4G_S (3/π^2)∫_0^Λ p^2 dp 1/√(m^2+p^2)+(2K/m) (9/π^4) [∫_0^Λ p^2 dp m/√(m^2+p^2) ]^2. For the parameter set under consideration one obtains B_0=57.3 MeV/fm^3 <cit.>, which is a value very close to what we were using in Section 2.
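As an illustration of this step, the vacuum gap equation above can be iterated numerically after multiplying through by m, which puts it in the fixed-point form m = 4G_S I(m) + 2K I(m)^2 with I(m) = (3/π^2)∫_0^Λ dp p^2 m/√(m^2+p^2). The following minimal sketch assumes a commonly used three-flavor NJL parameter set (Λ, G_SΛ^2, KΛ^5), which is not necessarily the set behind B_0 = 57.3 MeV/fm^3, so the resulting mass is only indicative:

import numpy as np

LAM = 602.3                  # cutoff Lambda, MeV (assumed parameter set)
G_S = 1.835/LAM**2           # scalar coupling, MeV^-2
K   = 12.36/LAM**5           # 't Hooft coupling, MeV^-5

p = np.linspace(0.0, LAM, 4001)
dp = p[1] - p[0]

def I(m):
    """Condensate integral (3/pi^2) Int_0^Lambda dp p^2 m / sqrt(m^2 + p^2)."""
    f = p**2*m/np.sqrt(m*m + p*p)
    return (3.0/np.pi**2)*(0.5*(f[0] + f[-1]) + f[1:-1].sum())*dp

m = 400.0                    # initial guess for the dynamical mass, MeV
for _ in range(500):         # damped fixed-point iteration of the gap equation
    m_new = 4.0*G_S*I(m) + 2.0*K*I(m)**2
    if abs(m_new - m) < 1e-9:
        break
    m = 0.5*(m + m_new)

print(f"vacuum dynamical quark mass: m = {m:.1f} MeV")

Once m is known, B_0 follows by evaluating the vacuum-energy expression above at this mass.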
Nevertheless, we should underline that in this calculation no contribution associated with the confinement/deconfinement phase transition has been taken into account. The EoS for a range of G_V, with and without the gluon contribution to the thermodynamic potential, was found in <cit.> and is plotted in Fig. <ref>. As can be seen by comparing the two panels, the inclusion of the gluons' degrees of freedom softens the EoS. Only for relatively high values of G_V does the EoS of the CFL matter with gluons become stiffer than the EoS of the regular CFL matter with no gluons and no vector coupling, at least in the energy density region relevant for compact stars. Moreover, if we compare the curve for the CFL matter at a certain value of G_V with the corresponding one including gluons and the same coupling value, the former is stiffer than the latter, independently of the G_V value. We can also note that the shift in pressure due to a change in G_V for CFL matter with gluons increases with the energy density. On the other hand, while at energy densities smaller than 500 MeV/fm^3 there is a negligible change even for G_V/G_S=0.5, at ϵ=1500 MeV/fm^3 the jump in pressure is quite noticeable. This is a natural result, since the effect of the vector interaction is proportional to the baryon density and should be more significant in the high-density region. Now the M-R relationship for strange stars made of CFL matter with gluon contribution can be calculated by using Eqs. (<ref>) and (<ref>). The corresponding M-R sequences found in <cit.> are shown in Fig. <ref> for the EoS of quark matter with (right panel) and without (left panel) the gluon contribution. Comparing them, it is evident that the gluons decrease the maximum mass of each sequence by up to 20%, an effect that becomes more prominent for lower values of the vector interaction. Sequences including gluons cannot reach 2M_⊙ if G_V/G_S<0.2. Here, it is important to point out that several results actually suggest that a low vector coupling between quarks may be favored at high densities <cit.>. Even more, as discussed in <cit.>, the vector interaction makes the chiral phase transition weaker in the low-T and high-μ region, with the possibility that the transition becomes a crossover in that region when the interaction is strong enough <cit.>, which is not expected from lattice calculations. Then, the absence of the vector interaction would be preferable in the high-density region. The same conclusion was reached by studying the phase transition from the hadronic phase to the quark phase in Ref. <cit.>. There, it was found that the hadron-quark phase transition may take place only at a small G_V/G_S ratio. For larger ratios, the repulsive vector interaction makes the NJL phase too stiff to cross to the hadronic phase. However, taking a zero vector interaction for quark matter and further considering effects which soften the quark matter EoS, like diquark condensation, there could be a phase transition. Then, if a very weak or even vanishing vector interaction must be considered for highly dense quark matter, gluon effects would rule out stars composed entirely of CFL matter.
More precisely, stars composed entirely of CFL quark matter with gluons, within the formalism presented here, should be disregarded if G_V is sufficiently low, as they cannot explain the mass measurements of the mentioned compact stars PSR J1614-2230 and PSR J0348+0432.

§ EOS OF MAGNETIZED QUARK MATTER

In the presence of a uniform magnetic field H, the EoS becomes anisotropic <cit.>. In this case, the pressure of the system develops a splitting into the directions parallel and perpendicular to the applied field. This splitting has to be taken into account in the EoS, which is then modified from the usual form into P_∥ =-Ω-H^2/2, P_⊥=-Ω - Hℳ + H^2/2, ε=Ω+μρ +H^2/2, where Ω is the thermodynamical potential evaluated at the physical minimum, ρ=-∂Ω/∂μ is the particle density, ε the energy density, and ℳ=-∂Ω/∂H the system magnetization. In the case of SQM in the color superconducting phase, the magnetic field gives rise to a new phase that is called the magnetic CFL (MCFL) phase (see <cit.> for a review and references therein). In this case the thermodynamic potential is given by <cit.> Ω_MCFL =Ω_C+Ω_N with Ω_C =-eH/4π^2∑_n=0^∞ (1-δ_n0/2)∫_0^∞ dp_3 e^-(p_3^2+2eHn)/Λ^2[8|ε^(c)|+8|ε̄^(c)|], Ω_N =-1/4π^2∫_0^∞ dp p^2 e^-p^2/Λ^2[6|ε^(0)|+6|ε̄^(0)|]-1/4π^2∫_0^∞ dp p^2 e^-p^2/Λ^2∑_j=1^2[2|ε_j^(0)|+2|ε̄_j^(0)|]+ Δ^2/G+2Δ^2_H/G, and ε^(c)=±√((√(p_3^2+2eHn)-μ)^2+Δ_H^2), ε̄^(c)=±√((√(p_3^2+2eHn)+μ)^2+Δ_H^2), ε^(0)=±√((p-μ)^2+Δ^2), ε̄^(0)=±√((p+μ)^2+Δ^2), ε_1^(0)=±√((p-μ)^2+Δ_a^2), ε̄_1^(0)=±√((p+μ)^2+Δ_a^2), ε_2^(0)=±√((p-μ)^2+Δ_b^2), ε̄_2^(0)=±√((p+μ)^2+Δ_b^2), being the dispersion relations of the charged (c) and neutral (0) quarks. In the above we used the notation Δ_a/b^2=1/4(Δ±√(Δ^2+8Δ_H^2))^2. In this formulation we are again neglecting the quark masses, since they are much smaller than the chemical potential at the densities under consideration. Also, we are not considering the possible effect of the magnetic-field interaction with the quark anomalous magnetic moment, since, as proved in <cit.>, it has a negligible effect. Even the third gap appearing in the MCFL phase, which is associated with the Cooper condensate magnetic moment <cit.>, is not taken into consideration because it is only relevant at extremely high magnetic fields, when the particles are confined to the lowest Landau level. The MCFL gaps Δ and Δ_H correspond respectively to the (d,s) pairing gap, which takes place only between neutral quarks, and to the (u,s) and (u,d) pairing gaps, which receive contributions from pairs of charged and neutral quarks. The separation of the gap into two different parameters in the MCFL case, as compared to the CFL case, where there is only one gap, reflects the symmetry difference between these two phases <cit.>. Here again, Λ-dependent smooth cutoffs were introduced. The notation H is used for the in-medium rotated magnetic field, which due to its long-range nature can penetrate the color superconductor <cit.>. The EoS corresponding to Eqs. (<ref>)-(<ref>), evaluated at the solutions of the gap equations ∂Ω_MCFL/∂Δ=0, ∂Ω_MCFL/∂Δ_H=0, is plotted in Fig. <ref>. There, we can see that in the presence of a magnetic field the EoS becomes highly anisotropic. While for the parallel pressure the magnetic field shifts the EoS curve above the zero-field one, for the perpendicular pressure the magnetic-field effect is the opposite.
Moreover, the magnetic-field effect is much more significant for the ϵ-P_∥ relationship than for the ϵ-P_⊥ one. At H∼ 10^18 G, the shift in the energy density with respect to the zero-field value is ∼ 200 MeV/fm^3 for the same pressure in the parallel-pressure case, while this field effect is much smaller for the same range of field values in the perpendicular-pressure case. Now we can use the obtained EoS to compute the M-R relationship through the TOV equations (<ref>)-(<ref>) for the parallel and perpendicular pressures independently. The results are plotted in Fig. <ref>. There we can see that the effect of the parallel pressure is to increase the star mass for a given radius, while the perpendicular pressure tends to decrease the mass for the same radius. Hence, the anisotropy of the magnetized EoS opens an interesting question with regard to the calculation of the star M-R relationship, since each pressure has an opposite effect. To resolve this paradox, an alternative formulation that includes the anisotropy, and from which the M-R relationship can be obtained, will be needed, since the TOV equations (<ref>)-(<ref>) were obtained under the supposition that the system has an isotropic configuration.

§ SUMMARY

In this paper the EoS's of quark matter were reviewed under several conditions and following different approaches, with the goal of determining under what conditions the observed mass value M = 1.97±0.04 M_⊙ can be reached. First, the results for the EoS in the MIT bag model were compared to those of the NJL model. It was shown that while in the MIT model the introduction by hand of a large gap produces a stiffer EoS, when in the NJL model the gap is increased self-consistently by strengthening the diquark coupling, the EoS becomes softer. This last effect is associated with the fact that a strong coupling can produce a BCS-BEC crossover with a decrease of pressure in the BEC phase. This result implies that using the MIT bag model with large gap values to predict stellar masses can be misleading. Also, it is worth noticing that the maximum value of the mass that can be reached in both models is sensitive to the bag constant value used. Then, the effect of gluons on the EoS of dense quark matter in the CFL phase at zero temperature was considered. Since in this phase the gluons acquire Debye and Meissner masses, they can contribute to the thermodynamic potential calculated in the framework of the gauged-NJL model. The gluon masses increase the system rest energy at zero temperature and hence decrease the pressure. As a consequence, the gluon effect softens the EoS in such a way that only for values of the coupling constant associated with the vector-vector interaction, G_V, larger than a certain critical value can the system reach the observed stellar mass value M = 1.97±0.04 M_⊙. Also, the effect of a strong magnetic field on the EoS and M-R relation of CFL matter was discussed. The main result is that a sufficiently high magnetic field produces a large anisotropy in the EoS that prevents the use of the TOV equations to study the M-R relationship under those conditions, since the TOV equations were derived for an isotropic medium with spherical symmetry. Moreover, it is known that at sufficiently high H new phases of dense quark matter can take place <cit.>, as for example a paramagnetic phase with a ground state of gluon vortices that are formed at rotated magnetic fields higher than the gluon Meissner mass squared <cit.>-<cit.>.
How this magnetic phase can affect the EoS of dense quark matter is an open question. Finally, to extrapolate the results presented here to hybrid stars, other quark phases that are more likely to be realized at the moderately high densities that can prevail in the core, close to the transition from the nuclear matter existing in the envelope, should be considered. Other phases of quark matter, such as 2SC at strong coupling with <cit.> or without <cit.> a magnetic field, or the magnetized neutral DCDW phase <cit.>, are good candidates in this context. I would like to thank my collaborators V. de la Incera, J. E. Horvath and L. Paulucci, who contributed to most of the results reviewed here.

§ REFERENCES

[Bodmer] Bodmer A 1971 Phys. Rev. D 4 1601
[Chin] Chin S A and Kerman A K 1979 Phys. Rev. Lett. 43 1292
[Terazawa] Terazawa H 1979 Tokyo U. Report INS-336
[Wit] Witten E 1984 Phys. Rev. D 30 272
[Pairing1] Alford M, Rajagopal K and Wilczek F 1999 Nucl. Phys. B 537 433
[Pairing2] Rapp R, Schaefer T, Shuryak E V and Velkovsky M 2000 Ann. Phys. (N.Y.) 280 35
[Pairing3] Alford M, Rajagopal K, Reddy S and Wilczek F 2001 Phys. Rev. D 64 074017
[German] Lugones G and Horvath J E 2002 Phys. Rev. D 66 074017
[Ozels1] Ozel F and Psaltis D 2009 Phys. Rev. D 80 103003
[Ozels2] Ozel F, Baym G and Guver T 2010 Phys. Rev. D 82 101301
[Demorest] Demorest P B et al 2010 Nature 467 1081
[Lattimer] Steiner A W, Lattimer J M and Brown E F 2010 Astrophys. J. 722 33
[MIT] Chodos A, Jaffe R L, Johnson K, Thorn C B and Weisskopf V F 1974 Phys. Rev. D 9 3471
[Alford04] Alford M, Kouvaris C and Rajagopal K 2004 Phys. Rev. Lett. 92 222001
[Rajagopal2001] Rajagopal K and Wilczek F 2001 Phys. Rev. Lett. 86 3492
[Alford2001] Alford M, Rajagopal K, Reddy S and Wilczek F 2001 Phys. Rev. D 64 074017
[Paulucci] Paulucci L, Ferrer E J, de la Incera V and Horvath J E 2011 Phys. Rev. D 83 043009
[Incera] Paulucci L, Ferrer E J, Horvath J E and de la Incera V 2013 J. Phys. G 40 125202
[BCS-BEC1] Nishida Y and Abuki H 2005 Phys. Rev. D 72 096004
[BCS-BEC2] Sun G-F, He L and Zhuang P 2007 Phys. Rev. D 75 096004
[BCS-BEC3] He L and Zhuang P 2007 Phys. Rev. D 75 096003
[Israel] Ferrer E J, de la Incera V, Keith J P and Portillo I 2015 Nucl. Phys. A 933 229
[Jason] Ferrer E J and Keith J P 2012 Phys. Rev. C 86 035205
[Gluons] Ferrer E J, de la Incera V and Paulucci L 2015 Phys. Rev. D 92 043010
[Rischke2000] Rischke D H 2000 Phys. Rev. D 62 054017
['tHooft] 't Hooft G 1976 Phys. Rev. Lett. 37 8
[Buballa] Buballa M 2005 Phys. Rep. 407 205
[Kitazawa2002] Kitazawa M, Koide T, Kunihiro T and Nemoto Y 2002 Prog. Theor. Phys. 108 929
[Israel-2] Ferrer E J, de la Incera V, Keith J P, Portillo I and Springsteen P L 2010 Phys. Rev. C 82 065802
[Oertel] Buballa M and Oertel M 1999 Phys. Lett. B 457 261
[GV-Vacuum] Steinheimer J and Schramm S 2014 Phys. Lett. B 736 241
[Kashiwa] Kashiwa K, Kouno H, Sakaguchi T, Matsuzaki M and Yahiro M 2007 Phys. Lett. B 647 446
[Abuki] Abuki H, Gatto R and Ruggieri M 2009 Phys. Rev. D 80 074019
[Bentz] Bentz W, Horikawa T, Ishii N and Thomas A W 2003 Nucl. Phys. A 720 95
[MCFL-review] Ferrer E J and de la Incera V 2013 Magnetism in Dense Quark Matter (Lect. Notes Phys. vol 871) ed Kharzeev D, Landsteiner K, Schmitt A and Yee H-U (Berlin Heidelberg: Springer) pp 399-432
[Angel] Ferrer E J, de la Incera V, Manreza Paret D, Perez Martinez A and Sanchez A 2015 Phys. Rev. D 91 085041
[Bo] Feng B, Ferrer E J and de la Incera V 2011 Nucl. Phys. B 853 213
[Cristina-1] Ferrer E J, de la Incera V and Manuel C 2005 Phys. Rev. Lett. 95 152002
[Cristina-2] Ferrer E J, de la Incera V and Manuel C 2006 Nucl. Phys.
B 747 88
[Gatto] Alford M, Berges J and Rajagopal K 2000 Nucl. Phys. B 571 269
[phase] Ferrer E J and de la Incera V 2007 Phys. Rev. D 76 045011
[vortices] Ferrer E J and de la Incera V 2006 Phys. Rev. Lett. 97 122301
[vortices-2] Ferrer E J and de la Incera V 2007 J. Phys. A 40 6913
[2SC-B] Mandal T and Jaikumar P 2016 Phys. Rev. D 94 074016
[2SC] Rüster S B, Werth V, Buballa M, Shovkovy I A and Rischke D H 2005 Phys. Rev. D 72 034004
[PRD92] Carignano S, Ferrer E J, de la Incera V and Paulucci L 2015 Phys. Rev. D 92 105018 | http://arxiv.org/abs/1703.09270v1 | {
"authors": [
"E. J. Ferrer"
],
"categories": [
"nucl-th",
"astro-ph.HE",
"hep-ph"
],
"primary_category": "nucl-th",
"published": "20170327190316",
"title": "EoS's of different phases of dense quark matter"
} |
The most effective model for describing the universal behavior of unstable surface growth
Yuki Minami, National Institute of Advanced Industrial Science and Technology (AIST), Ibaraki 305-8560, Japan, [email protected]
Shin-ichi Sasa, Department of Physics, Kyoto University, Kyoto 606-8502, Japan, [email protected]
December 30, 2023

We study a noisy Kuramoto-Sivashinsky (KS) equation, which describes unstable surface growth and chemical turbulence. It has been conjectured that the universal long-wavelength behavior of the equation, which is characterized by scale-dependent parameters, is described by a Kardar-Parisi-Zhang (KPZ) equation. We consider this conjecture by analyzing a renormalization-group equation for a class of generalized KPZ equations. We then uniquely determine the parameter values of the KPZ equation that most effectively describes the universal long-wavelength behavior of the noisy KS equation.

§ INTRODUCTION

Eddy viscosity in turbulence, which can explain how a vortex pattern emerges in a non-uniform turbulent flow, depends on the observed length scales <cit.>. As exemplified by the Richardson law <cit.>, there are cases in which a parameter of a macroscopic description is not given as a definite value, but is rather expressed as a function of the length scale. Another example of scale-dependent parameters has been observed in one- or two-dimensional fluid dynamics, where the viscosity is not uniquely defined in the hydrodynamic description <cit.>. Here, it seems reasonable to expect that such scale-dependent parameters in a macroscopic description can be reproduced by an effective stochastic system <cit.>. In this paper, we attempt to determine this effective stochastic system theoretically when scale-dependent parameters are observed. As the simplest example of scale-dependent parameters, we consider the one-dimensional Kardar-Parisi-Zhang (KPZ) equation <cit.>. It is known that the effective surface tension ν(Λ) at a scale 2π/Λ for the equation is ν(Λ) = C_νΛ^-1/2 in the limit Λ→ 0, which is similar to the Richardson law for turbulence. Recently, the KPZ equation was rigorously derived from a stochastic many-particle model <cit.>, and the so-called KPZ class has been extensively discussed both theoretically and experimentally <cit.>. However, in general, even if we find systems that may exhibit scale-dependent parameters similar to those for the KPZ class, a method to determine the parameter values of the corresponding KPZ equation has not yet been reported. Specifically, let us consider a noisy Kuramoto-Sivashinsky (KS) equation, which exhibits spatially extended chaos in the noiseless limit <cit.>. The model describes turbulent chemical waves and unstable interface motions, which are caused by negative surface tension. It has been conjectured that a KPZ equation may be an effective model for describing the long-wavelength behavior of the noisy KS equation; this conjecture is referred to as the Yakhot conjecture <cit.>. Indeed, direct numerical simulations showed that statistical properties of the long-wavelength modes are similar to those of the KPZ equations <cit.>. Here, one may recall the renormalization group (RG), which is a standard method for studying scale-dependent parameters. For a given noisy KS equation, the RG equation was calculated using a perturbation theory <cit.>.
The infrared fixed point of the RG equation determines the scale-dependent behavior ν(Λ)=C_νΛ^-1/2 in the limit Λ→ 0, which has the same power-law form as that for the KPZ equations. Nevertheless, as shown below, the analysis at the infrared fixed point of the RG equation cannot determine the parameter values of the corresponding KPZ equation. In this paper, we present a framework for studying the effective description. We study an RG equation for generalized KPZ equations that include noisy KS equations and KPZ equations. We then consider solution trajectories of the RG equation, in which each point flows to the infrared fixed point of the noisy KS equation we study. The solution trajectories also approach a subspace in the ultraviolet limit, which enables us to define a collection of bare parameters of the generalized KPZ equations. By using the lowest-order perturbation theory for the RG equation, we uniquely determine the most effective model among such KPZ equations that describes the infrared universal behavior of a noisy KS equation in the most efficient manner. This paper is organized as follows. In Sect. <ref>, we introduce the class of models we study, define scale-dependent parameters for the models, and review RG equations for the parameters. We also discuss ultraviolet and infrared behaviors of solution trajectories of the RG equation, and classify their universal and non-universal properties. After that, we give a definition of "the most effective model". In Sect. <ref>, we simplify a representation of the trajectories so as to determine the most effective model. The solution trajectories of the RG equation are expressed as curves in a five-dimensional parameter space. Then, the trajectory for the noisy KS equation is attracted to a two-dimensional subspace, due to the emergence of a time-reversal symmetry. We define the most effective model for the noisy KS equation in this subspace. In Sect. <ref>, we determine the parameter values of the most effective model from its definition. In Sect. <ref>, we provide concluding remarks. We discuss the renormalizability of the KPZ equation and its relevant parameters. We also remark on another application of our formalism to turbulence. In Appendix <ref>, we derive Ward-Takahashi identities for the scale-dependent parameters from symmetries of our model.

§ SETUP

We study models for the stochastic growth of a surface. We assume that the time evolution of the height h(x,t) of the surface is described by a generalized KPZ equation: ∂_t h= ν∂_x^2 h - K ∂_x^4 h +λ/2(∂_x h)^2 + η, ⟨η(x,t) η(x^',t^') ⟩ = 2(D-D_d ∂_x^2)δ(t-t^')δ(x-x^'), where ν is the surface tension, K is the surface diffusion constant, λ is the strength of the non-linearity, and η(x,t) is the white noise. Here, D and D_d are the strengths of the noise. When K=D_d=0, (<ref>) is the KPZ equation, while when D=D_d=0, (<ref>) with ν <0 is the deterministic KS equation. We refer to (<ref>) with ν < 0 and K, D, D_d > 0 as the noisy KS equation. The five parameters in (<ref>) are collectively denoted by 𝒫 ≡ (ν, K, D, D_d, λ). More precisely, these parameters are defined for a field h(x) whose Fourier transform ĥ(k) is assumed to be zero for |k| > Λ. Λ is called a cut-off wavenumber. We explicitly express the cutoff dependence of the parameters as 𝒫(Λ). Here, for a given model with 𝒫(Λ_0), we define a model with 𝒫(Λ) for Λ < Λ_0 by eliminating the contribution Λ≤ |k| ≤Λ_0 in the dynamics, which may be formally expressed as 𝒫(Λ;𝒫(Λ_0)). Note that we do not employ a rescaling transformation after the coarse-graining.
This functional relation trivially satisfies 𝒫(Λ';𝒫(Λ_0))=𝒫(Λ';𝒫(Λ;𝒫(Λ_0))). From this, we obtain the RG equation -Λ d𝒫/dΛ = Ψ(𝒫), which determines 𝒫(Λ;𝒫(Λ_0)) under an initial condition 𝒫(Λ_0)=𝒫_0. In the next subsection, we review the RG equation for the generalized KPZ equations.

§.§ definition of scale-dependent parameters

We first define the scale-dependent parameters ν(Λ), D(Λ), K(Λ), D_d(Λ) and λ(Λ), and then introduce a perturbation theory leading to the equation for determining them. We start with the generating functional Z[J,J̃], by which all statistical quantities of the KPZ equations are determined. Following the Martin-Siggia-Rose-Janssen-deDominicis (MSRJD) formalism <cit.>, Z[J,J̃] is expressed as Z[J,J̃]=∫𝒟[h, h̃] exp[-S[h,h̃;Λ_0] +∫_-∞^∞ dω∫_-Λ_0^Λ_0 dk (J(k,ω) h(-k,-ω)+ J̃(k,ω)h̃(-k,-ω) ) ], where h̃ is the auxiliary field, J and J̃ are source fields, and S[h,h̃;Λ_0] is the MSRJD action for the generalized KPZ equation. Hereafter, we use the notation A(k,ω) for the Fourier transform of A(x,t) for any field A. The action S[h,h̃;Λ_0] is explicitly written as S[h,h̃;Λ_0] = 1/2∫_-∞^∞ dω/2π∫^Λ_0_-Λ_0 dk/2π [ h(-k,-ω), h̃(-k,-ω) ] G_0^-1(k,ω) [ h(k,ω); h̃(k,ω) ] +λ_0/2∫_-∞^∞ dω_1 dω_2/(2π)^2∫^Λ_0_-Λ_0 dk_1 dk_2/(2π)^2 k_1 k_2 h̃(-k_1-k_2,-ω_1-ω_2) h(k_1, ω_1)h(k_2,ω_2), where G_0^-1 is the inverse matrix of the bare propagator, G_0^-1(k,ω)= [ 0, iω+ν_0 k^2 +K_0 k^4; -iω+ν_0 k^2 +K_0 k^4, -2(D_0 +D_d0 k^2) ]. Here, we consider a coarse-grained description at a cutoff Λ < Λ_0. Let us define A^<(k, ω)≡ θ(Λ-|k|) A(k, ω), A^>(k, ω)≡ θ(|k|-Λ) A(k, ω), for any quantity A(k,ω), where θ(x) is the Heaviside step function. The statistical quantities of h^< are described by the generating functional Z[J^<, J̃^<], obtained by replacing (J, J̃) with (J^<, J̃^<). We thus define the effective MSRJD action S[h^<,h̃^<;Λ] by the relation Z[J^<,J̃^<]=∫𝒟[h^<, h̃^<]exp[-S[h^<,h̃^<;Λ]+∫_-∞^∞ dω∫_-Λ^Λ dk (J^<(k,ω) h^<(-k,-ω)+ J̃^<(k,ω)h̃^<(-k,-ω) ) ]. We can then confirm that S[h^<,h̃^<;Λ] is determined as exp[ -S[h^<,h̃^<; Λ] ]= ∫𝒟 [h^>, h̃^>] exp[-S[h^<+h^>,h̃^<+h̃^>;Λ_0]]. Then, the propagator and the three-point vertex function for the effective MSRJD action at Λ are defined as (G^-1)_h̃h(k_1, ω_1; Λ) δ(ω_1+ω_2) δ(k_1+k_2) ≡ δ^2 S[h,h̃; Λ]/δh̃(k_1, ω_1)δ h(k_2, ω_2)|_h,h̃=0, (G^-1)_h̃h̃(k_1, ω_1; Λ) δ(ω_1+ω_2) δ(k_1+k_2) ≡ δ^2 S[h,h̃; Λ]/δh̃(k_1, ω_1)δh̃(k_2, ω_2)|_h,h̃=0, Γ_h̃ h h(k_1, ω_1;k_2,ω_2; Λ) δ(ω_1+ω_2+ω_3)δ(k_1+k_2+k_3)≡ δ^3 S[h,h̃; Λ]/δh̃(k_1, ω_1)δ h(k_2, ω_2)δ h(k_3,ω_3)|_h,h̃=0. From these quantities, we define the parameters as ν(Λ)≡lim_ω, k → 0 1/2! ∂^2 (G^-1)_h̃h(k, ω; Λ)/∂ k^2, K(Λ) ≡lim_ω, k → 0 1/4! ∂^4 (G^-1)_h̃h(k, ω; Λ)/∂ k^4, -2D(Λ) ≡lim_ω, k → 0 (G^-1)_h̃h̃(k, ω;Λ), -2D_d(Λ) ≡lim_ω, k → 0 1/2! ∂^2 (G^-1)_h̃h̃(k, ω; Λ)/∂ k^2, λ(Λ)≡lim_ω_1,ω_2, k_1,k_2 → 0 ∂^2 Γ_h̃ h h(k_1, ω_1;k_2,ω_2;Λ)/∂ k_1 ∂ k_2. From a tilt symmetry of the generalized KPZ equation, we can obtain λ(Λ) = λ_0. In Appendix <ref>, we provide a non-perturbative proof of (<ref>) based on symmetry properties <cit.>. Below, we derive a set of equations that determines ν(Λ), D(Λ), K(Λ), and D_d(Λ).

§.§ renormalization group equations

We can calculate (G^-1)_ij(k,ω; Λ) by using the perturbation theory in λ_0.
At the second-order level, the propagators are calculated as (G^-1)_h̃h(k, ω; Λ) = (G_0^-1)_h̃h (k,ω) +λ_0^2∫^∞_-∞ dΩ/2π∫_Λ≤| q|≤Λ_0 dq/2π[ kq(k-q)^2 (G_0)_h̃h(q,Ω) C_0(k-q,ω-Ω) + k q^2(k-q) (G_0)_h̃h(k-q,ω-Ω) C_0(q,Ω) ], (G^-1)_h̃h̃(k, ω; Λ) = (G_0^-1)_h̃h̃(k,ω) -2λ_0^2∫^∞_-∞ dΩ/2π∫_Λ≤| q|≤Λ_0 dq/2π q^2(k-q)^2 C_0(q,Ω) C_0(k-q,ω-Ω), where C_0(k,ω) is the bare correlation function defined by C_0(k,ω) ≡ 2(D_0 +D_d0 k^2) |(G_0)_h̃h (k, ω)|^2. In the calculation of (<ref>), one should carefully note the relation <cit.> ∫_Λ≤| q|≤Λ_0 dq/2π q(k-q)^2 (G_0)_h̃h(q,Ω) C_0(k-q,ω-Ω) ≠∫_Λ≤| q|≤Λ_0 dq/2π q^2 (k-q) (G_0)_h̃h(k-q,ω-Ω) C_0(q,Ω). We emphasize that the Feynman rule does not distinguish these. By setting (Λ_0-Λ)/Λ_0 ≪ 1 in (<ref>) - (<ref>), (<ref>) and (<ref>), we obtain the RG equation as -Λ dν(Λ)/dΛ =ν(Λ)[ G/(F(1+F)^3) (3+F+(1-F)H/G)], -Λ dK(Λ)/dΛ =K(Λ)[ G/(2(1+F)^5) (26-F+2F^2+F^3+(2-21F+6F^2+F^3)H/G)], -Λ dD(Λ)/dΛ =D(Λ)[ G/((1+F)^3) (1+H/G)^2], -Λ dD_d(Λ)/dΛ =D_d(Λ)[ G^2/(2H(1+F)^5) (16+3F+F^2+2(9-5F)H/G +(2-13F-F^2)H^2/G^2)], where we have introduced the dimensionless parameters F, G and H as F =ν(Λ)/(K(Λ)Λ^2), G =λ_0^2 D(Λ)/(4π K^3(Λ)Λ^7), H =λ_0^2 D_d(Λ)/(4π K^3(Λ)Λ^5). Here, from (<ref>)-(<ref>), we derive the autonomous equations for (F(Λ), G(Λ), H(Λ)) as -Λ dF/dΛ =2F+G/(2(1+F)^5)[6-12F+11F^2-F^4+(2+19F^2-8F^3-F^4)H/G], -Λ dG/dΛ =7G-G^2/(2(1+F)^5)[76-7F+4F^2+3F^3+(2-71F+14F^2+3F^3)H/G-2(1+F)^2H^2/G^2], -Λ dH/dΛ =5H+G^2/(2(1+F)^5)[16+3F+F^2-(60+7F+6F^2+3F^3)H/G-(4-50F+19F^2+3F^3)H^2/G^2].

§.§ Infrared and ultraviolet behaviors of solution trajectories of the RG equation

The stable fixed point of the equations (<ref>) - (<ref>) is found to be (F^*,G^*,H^*)=(10.7593, 680.652, 63.2614). By substituting the fixed-point values into (<ref>)-(<ref>) and solving them, we obtain the scaling laws ν(Λ) =C_νΛ^-0.5, D(Λ) =C_DΛ^-0.5, K(Λ) =C_KΛ^-2.5, D_d(Λ) =C_D_dΛ^-2.5, where C_ν, C_D, C_K, and C_D_d are constants that depend on the initial condition 𝒫_0. We next consider the dimensionless quantities given by 1/F =K(Λ)Λ^2/ν(Λ), H/G =D_d(Λ)Λ^2/D(Λ). Substituting the scaling relations (<ref>) - (<ref>) into these equalities, we have 1/F^* =C_K/C_ν, H^*/G^* =C_D_d/C_D. Since (F,G,H) takes the value (10.7593, 680.652, 63.2614) in the limit Λ→ 0, we obtain C_K/C_ν=C_D_d/C_D=0.0929, which is independent of 𝒫_0. The singular behavior ν(Λ)=C_νΛ^-1/2 implies that the effective surface tension depends on the observed scale Λ. This is contrasted with cases in which each 𝒫(Λ) converges to a finite value in the limit Λ→ 0. Then, 𝒫(Λ=0) is interpreted as the renormalized parameters measured in experiments. Since the exponents characterizing the divergent behaviors are common to all the models given by (<ref>), we refer to the power-law region as the universal range. The smallest characteristic wavenumber scale is also denoted by Λ_IR, the value of which depends on 𝒫_0. Then, the universal range is defined as Λ≪Λ_IR. As another common aspect of the RG equation (<ref>), we observe that 𝒫(Λ) shows a plateau region in the ultraviolet limit when Λ_0 is sufficiently large. This enables us to define a collection of bare parameters, which is denoted by 𝒫_B. Here, we focus on a specific model, a noisy KS equation with 𝒫_0^KS = (ν_0=-1.0, D_d0=0, K_0=D_0=λ_0=1.0), defined at Λ_0=2π. In Fig. <ref>, we display the numerical solution of (<ref>) for this initial condition 𝒫_0^KS. It can be seen that Λ_0 is in the plateau region. Thus, the collection of the bare parameters 𝒫^KS is assumed to be identical to the initial condition 𝒫_0^KS without loss of accuracy.
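Since the flow (<ref>)-(<ref>) closes in (F,G,H), the approach to the fixed point can be checked directly. The following minimal sketch (in Python; the starting point and integration range are illustrative choices, not values from the text) integrates the flow in the variable s = -ln Λ, for which d/ds = -Λ d/dΛ:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(s, y):
    # right-hand sides of the autonomous (F, G, H) flow quoted above
    F, G, H = y
    c = 1.0/(2.0*(1.0 + F)**5)
    r = H/G
    dF = 2*F + G*c*(6 - 12*F + 11*F**2 - F**4
                    + (2 + 19*F**2 - 8*F**3 - F**4)*r)
    dG = 7*G - G*G*c*(76 - 7*F + 4*F**2 + 3*F**3
                      + (2 - 71*F + 14*F**2 + 3*F**3)*r
                      - 2*(1 + F)**2*r**2)
    dH = 5*H + G*G*c*(16 + 3*F + F**2
                      - (60 + 7*F + 6*F**2 + 3*F**3)*r
                      - (4 - 50*F + 19*F**2 + 3*F**3)*r**2)
    return [dF, dG, dH]

sol = solve_ivp(rhs, (0.0, 40.0), [10.0, 600.0, 60.0],
                method="LSODA", rtol=1e-10, atol=1e-10)
F, G, H = sol.y[:, -1]
print(f"(F, G, H) -> ({F:.4f}, {G:.3f}, {H:.4f})")
print(f"K(L)L^2/nu(L) -> {1/F:.4f},  D_d(L)L^2/D(L) -> {H/G:.4f}")

The late-s values should reproduce (F^*, G^*, H^*) = (10.7593, 680.652, 63.2614), and the two printed ratios both tend to the universal constant 0.0929 quoted above.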
On the other hand, the numerical solution in the infrared limit obeys ν(Λ)=C_νΛ^-0.5 and D(Λ) =C_DΛ^-0.5, in accordance with the analysis of the fixed point. We note that D_d(Λ) does not show the plateau region in Fig. <ref>. However, this graph quickly converges to D_d(Λ) with D_d0=0 at Λ_0=∞. As shown in Fig. <ref>, the graphs of D_d(Λ) with D_d0=0 at Λ_0=2π, 10π and 1000π do not exhibit the plateau. Instead, the graphs at Λ_0=2π and 10π quickly approach D_d(Λ) in the limit Λ_0 = ∞ when Λ is smaller than Λ_0. Therefore, we define the bare parameter as D_dB=0 for such cases.

§.§ definition of the most effective model

Now, for the noisy KS equation with 𝒫^KS, we consider the set B(𝒫^KS) of bare parameters 𝒫_B, each of which has the same factors C_ν, C_D, C_K, and C_D_d in the universal range and the same wavenumber scale Λ_IR as those for the noisy KS equation. The graph of 𝒫(Λ) for a given 𝒫_B ∈ B(𝒫^KS) determines the wavenumber scale Λ_UV that represents the end of the ultraviolet plateau. Note that the value of Λ_UV depends on 𝒫_B ∈ B(𝒫^KS). Then, there is a special model with 𝒫_B ∈ B(𝒫^KS) such that Λ_UV=Λ_IR^KS. For this model, as soon as the graph of 𝒫(Λ) exits from the ultraviolet plateau region, it enters the infrared universal range. In other words, this special model represents the universal behavior of the noisy KS equation in the most efficient manner. We refer to it as the most effective model for the universal range of the noisy KS equation with 𝒫^KS. Below, we determine the most effective model.

§ REPRESENTATION OF THE PARAMETER SPACE

The solution trajectories of the RG equation are expressed as curves in the five-dimensional parameter space consisting of 𝒫. We attempt to simplify a representation of the trajectories so as to determine the most effective model. First, recalling λ(Λ) = λ_0, we may restrict the parameter space to the subspace λ=λ_0=1. Next, as shown in Fig. <ref>, we find that D(Λ)/ν(Λ) and D_d(Λ)/K(Λ) converge to the same value, 2.24, in the universal range for the noisy KS equation. We can explain this phenomenon as follows. First, for the generalized KPZ equations with 𝒫_B satisfying D_B/ν_B=D_dB/K_B ≡χ >0, we can show the fluctuation-dissipation relation with the effective temperature χ fixed, by using a time-reversal symmetry. The time-reversal transformation is given as h^'(k,ω) =-h(k,-ω), h̃^'(k,ω) =h̃(k,-ω)-ν_0 k^2/D_0 h (k,-ω). The variation of the action (<ref>) under this transformation is calculated as δ S ≡ S[h^', h̃^';Λ_0]-S[h,h̃;Λ_0] = (D_0/ν_0-D_d0/K_0)ν_0 K_0/D_0∫ dω dk/(2π)^2 (ν_0/D_0 k^2 h(-k,-ω) h(k, ω) -2h̃(-k,-ω) h(k, ω)). The generalized KPZ equation is invariant when D_0/ν_0=D_d0/K_0 or K_0=D_d0=0. This symmetry leads to the invariance property of D(Λ)/ν(Λ) and D_d(Λ)/K(Λ) along the solution trajectories of the RG equation. See Appendix <ref> for the time-reversal symmetry of the generalized KPZ equation and the derivation of the fluctuation-dissipation relation. For the other cases, where D_B/ν_B ≠ D_dB/K_B, including the noisy KS equations, D(Λ)/ν(Λ) and D_d(Λ)/K(Λ) change in Λ. However, they satisfy D(Λ)/ν(Λ)= D_d(Λ)/K(Λ) in the universal range. Therefore, it is reasonable to conjecture that the time-reversal symmetry emerges in the universal range. Now, since the most effective model represents the universal behavior most efficiently, this special model should be in the subspace satisfying D_B/ν_B=D_dB/K_B ≡χ=2.24. On the basis of these results, we express the bare-parameter space by (ν_B, K_B, D_B=2.24 ν_B, D_dB=2.24K_B, λ_B=1), as illustrated in Fig. <ref>.
For each value of (ν_B, K_B), we have a model that exhibits the infrared universal behavior of 𝒫^KS. Finally, for a generalized KPZ equation with 𝒫_B at Λ_0 in the ultraviolet plateau region, we consider the following scale transformation: X =b_x x, T =b_t t, H(X,T) =b_h h(x,t), which yields another generalized KPZ equation with a different collection of bare parameters 𝒫_B' at Λ_0'=b_x^-1Λ_0 in the ultraviolet plateau region. These are equivalent models in different unit systems. For the cases that D=χν and D_d=χ K, the equation for H(X,T) is written as ∂_T H = ν^'∂_X^2 H - K^'∂_X^4 H +λ^'/2(∂_X H)^2 + F, ⟨ F(X,T) F(X^',T^') ⟩ = 2χ^'(ν^'-K^'∂_X^2)δ(T-T^')δ(X-X^'), where we have introduced ν^' =b_t^-1b_x^2ν, K^' =b_t^-1b_x^4 K, λ^' =b_t^-1b_x^2 b_h^-1λ, F(X,T) =b_t^-1b_h η(x,t), χ^' =b_x^-1b_h^2 χ. By imposing χ'=χ and λ'=λ, we obtain b_h=b_x^1/2 and b_t=b_x^3/2. Then, we have the relations ν_B^' =b_x^1/2ν_B, K^'_B=b_x^5/2K_B. We find that J ≡ K_B/ν_B^5 is invariant under the transformation. Thus, we parameterize (ν_B,K_B) as (b_x^1/2, b_x^5/2 J). The next problem is to determine the values of b_x and J of the most effective model for the universal range of the noisy KS equation.

§ THE MOST EFFECTIVE MODEL

Since J is invariant under the scale transformation, the determination of J can be separated from the determination of b_x. Here, we notice the condition Λ_UV=Λ_IR for the most effective model. Because this condition is invariant under the scale transformation, the value of J is uniquely determined. Furthermore, the condition Λ_IR=Λ_IR^KS fixes the value of b_x. Below, we explicitly calculate these values. In order to determine the value of J, we study the dimensionless quantity F(Λ) = ν(Λ)/(K(Λ) Λ^2) as a function of s(Λ) ≡ -ln(J^1/2 b_xΛ), where F and s are invariant under the scale transformation. It should be noted that, for any J and b_x, F approaches F→ e^2s in the ultraviolet limit s → -∞, while F→ F^* = 10.76 in the infrared limit s →∞. In Fig. <ref>, we show graphs of F as functions of s for several values of J. In general, there are two characteristic scales of s, the departure scale from e^2s and the relaxation scale to F^*, as clearly observed for J=0.1. When J increases, the peak of F decreases and eventually vanishes at J=7.3. In this case, the transition scale between the infrared universal region and the ultraviolet region is simply given by the cross point s_c of the ultraviolet behavior F=e^2s and the infrared behavior F^*=10.8. That is, e^2s_c=F^*, which gives s_c=1.2. Thus, we conclude that the value of J of the most effective model is J=7.3. Next, we determine the value of b_x. From the cross point s_c, we define the transition length scale Λ_c^-1 by s_c = -ln(J^1/2 b_xΛ_c), which gives Λ_c^-1= √(J F^*)b_x=8.9 b_x. Here, the value of b_x is determined by identifying Λ_c with Λ_IR^KS. Then, we estimate Λ_IR^KS from the graph of ν(Λ) for the noisy KS equation under study. In Fig. <ref>, we show how ν(Λ) approaches C_νΛ^-0.5. We find that |ν(Λ)-C_νΛ^-0.5| is well fitted by a power-law function of Λ^-1, which does not provide any wavenumber scale. Through more detailed analysis, we find a fitting function ν(Λ)-C_νΛ^-0.5 = -AΛ^B+C exp[ -Λ^-1/D], with A=3.57, B=0.431, C=1.1 × 10^-2, and D=195. From the second term of (<ref>), we obtain the characteristic scale (Λ_IR^KS)^-1=D=195. Now, from the condition (Λ_IR^KS)^-1=Λ_c^-1, we obtain b_x=22.
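As a cross-check of this assembly, the chain of relations above can be evaluated in a few lines (a minimal sketch; F^*, J, χ and (Λ_IR^KS)^-1 = D are taken as given inputs from the text, and the variable names are ours):

import numpy as np

F_star, J, chi, L_IR_inv = 10.76, 7.3, 2.24, 195.0

s_c = 0.5*np.log(F_star)               # crossing point: e^{2 s_c} = F*
b_x = L_IR_inv/np.sqrt(J*F_star)       # from Lambda_c^{-1} = sqrt(J F*) b_x
nu_B = b_x**0.5                        # nu_B = b_x^{1/2}
K_B  = J*b_x**2.5                      # K_B  = J b_x^{5/2}

print(f"s_c = {s_c:.2f},  b_x = {b_x:.1f}")
print(f"nu_B = {nu_B:.2f},  K_B = {K_B:.3g},  "
      f"D_B = {chi*nu_B:.1f},  D_dB = {chi*K_B:.3g}")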
Thus, we have arrived at the most effective model for the universal range of the noisy KS equation with 𝒫^KS, where the collection of the bare parameter values of the most effective model, 𝒫_B^ME, is determined as (ν_B=4.7, D_B=10, K_B=1.6 × 10^4, D_dB=3.7× 10^4, λ_B=1). Now, the linear decay rate of a disturbance of wavenumber k in the universal range is expressed as ν_B k^2+K_B k^4 at early times. Here, we notice that (ν_B/K_B)^0.5 defines one wavenumber scale. Since the most effective model has only one wavelength scale Λ_c, (ν_B/K_B)^0.5≃Λ_c holds. This implies that the linear decay rate ν_B k^2+K_B k^4 is estimated as ν_B k^2 for k ≪Λ_c. In this manner, ν_B can be measured in experiments. Indeed, by applying this method to the numerical simulation of the noisy KS equation, the result ν_B^exp ≃ 5.5 was obtained <cit.>. Thus, our theoretical value ν_B= 4.7 is in good agreement with the numerical value.

§ CONCLUDING REMARKS

The main result of this paper is illustrated in Fig. <ref>. For a given noisy KS equation, we construct the most effective model exhibiting the same infrared universal behavior, with just one cross-over wavenumber scale Λ_IR^KS connecting the infrared behavior and the ultraviolet behavior. We emphasize that our theory enables us to calculate the bare surface tension ν_B of the effective model in the universal range, which could not be obtained by previous studies. We conclude this paper by presenting a few remarks. The first remark is on the relevant parameter space in the universal range. Since λ(Λ) is a conserved quantity along the solution of the RG equation, it obviously depends on the initial condition 𝒫_0. Thus, it is relevant in the universal range. Furthermore, D/ν- D_d/K is not relevant because D(Λ)/ν(Λ)-D_d(Λ)/K(Λ) approaches zero. At the same time, χ=D/ν is a relevant parameter because its value is invariant along the solution trajectory when D_0/ν_0=D_d0/K_0. Finally, in the limit Λ→ 0, K(Λ) Λ^2/ ν(Λ) approaches the universal constant value 0.0929, which is independent of 𝒫_0. Thus, we can state that K(Λ) Λ^2/ ν(Λ) is irrelevant, following the argument in <cit.>. In summary, the relevant parameter space in the universal range is spanned by the three parameters (ν, χ, λ). However, the parameter K cannot be neglected because the irrelevant parameter K(Λ) Λ^2/ ν(Λ) is not zero in the universal range. This is different from many standard RG analyses <cit.>. Second, we remark that the original Yakhot conjecture claims a statistical property of the deterministic KS equation <cit.>. Here, we discuss the noiseless limit D_0 → 0 for the noisy KS equation. In this case, we obtain χ→ 0, which is not consistent with observations. This implies that the lowest-order contribution in loop expansions is not sufficient to yield statistical properties in the small D_0 limit. In order to overcome this situation, we have to formulate a non-perturbative calculation. This is an interesting problem for future work. Finally, we expect that the concept proposed in this paper will be applied to various systems, although we have studied a specific phenomenon as an example of scale-dependent parameters. The most interesting example may be fluid turbulence.
The effective model for the universal range in turbulence is given by a noisy Navier-Stokes equation <cit.> ∂_t v(x,t)= -∇ p(x,t) + η_0 ∇^2 v(x,t)+f(x,t), ⟨ f_i (x,t) f_j(x^', t^') ⟩ = 2P_i j(∇) F(x-x^')δ(t-t^'), where v is the fluid velocity, p is the pressure, η_0 is the viscosity and f is the noise. Here, P_i j and F in Fourier space are given as P_i j(k)=δ_i j-k_i k_j/k^2, F(k)=(2π)^d 2 D_0 k^-y, where d is the space dimension, D_0 is the noise strength, and y is a positive parameter. When y=d=3, this model exhibits the Kolmogorov scaling law E(k)= C k^-5/3, where E(k) is the energy spectrum and C is a universal constant that takes the value ∼ 1.5. The analysis of solution trajectories of an RG equation for such a noisy Navier-Stokes equation may provide fresh insight into the understanding of turbulence, such as the universal constant C. We hope that this paper stimulates the study of whole solutions of RG equations in various research fields.

The authors thank K. A. Takeuchi, M. Itami, and T. Haga for useful discussions. The present study was supported by KAKENHI (Nos. 25103002 and 17H01148).

[Frisch] U. Frisch, Turbulence (Cambridge University Press, Cambridge, 1995).
[Richardson] L. F. Richardson, Proc. R. Soc. Lond. A 110, 709-737 (1926).
[review] Y. Pomeau and P. Résibois, Phys. Rep. 19C, 64 (1975).
[PhysRevA.16.732] D. Forster, D. R. Nelson, and M. J. Stephen, Phys. Rev. A 16, 732 (1977).
[Dominicis] C. DeDominicis and P. C. Martin, Phys. Rev. A 19, 419-421 (1979).
[Fournier-Frisch1978] J. D. Fournier and U. Frisch, Phys. Rev. A 17, 747 (1978).
[Fournier-Frisch1983] J. D. Fournier and U. Frisch, Phys. Rev. A 28, 1000 (1983).
[Yakhot-Orszag] V. Yakhot and S. A. Orszag, Phys. Rev. Lett. 57, 1722 (1986).
[Yakhot-Orszag2] V. Yakhot and S. Orszag, J. Sci. Comput. 1, 3 (1986).
[Yakhot-Smith] V. Yakhot and M. L. Smith, J. Sci. Comput. 7, 35 (1992).
[Eyink] G. L. Eyink, Phys. Fluids 6, 3063 (1994).
[PhysRevLett.56.889] M. Kardar, G. Parisi, and Y.-C. Zhang, Phys. Rev. Lett. 56, 889 (1986).
[Bertini-Giacomin] L. Bertini and G. Giacomin, Comm. Math. Phys. 183, 571 (1997).
[PhysRevLett.104.230602] T. Sasamoto and H. Spohn, Phys. Rev. Lett. 104, 230602 (2010).
[Takeuchi-Sano2010] K. A. Takeuchi and M. Sano, Phys. Rev. Lett. 104, 230601 (2010).
[Takeuchi-Sano-Sasamoto-Spohn] K. A. Takeuchi, M. Sano, T. Sasamoto and H. Spohn, Sci. Rep. 1, 34 (2011).
[Amir-Corwin-Quastel] G. Amir, I. Corwin, and J. Quastel, Comm. Pure Appl. Math. 64, 466 (2010).
[Calabrese-Doussal] P. Calabrese and P. Le Doussal, Phys. Rev. Lett. 106, 250603 (2011).
[Imamura-Sasamoto] T. Imamura and T. Sasamoto, Phys. Rev. Lett. 108, 190603 (2012).
[Hairer] M. Hairer, Annals of Mathematics 178, 559 (2013).
[kuramoto1976persistent] Y. Kuramoto and T. Tsuzuki, Prog. Theor. Phys. 55, 356 (1976).
[sivashinsky1977nonlinear] G. I. Sivashinsky, Acta Astron. 4, 1177 (1977).
[Kuramoto-book] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence (Springer, Berlin, 1984).
[PhysRevA.24.642] V. Yakhot, Phys. Rev. A 24, 642 (1981).
[PhysRevLett.60.1840] V. Yakhot and Z.-S. She, Phys. Rev. Lett. 60, 1840 (1988).
[zaleski1989stochastic] S. Zaleski, Physica D 34, 427 (1989).
[PhysRevA.46.R7351] K. Sneppen, J. Krug, M. H. Jensen, C. Jayaprakash, and T. Bohr, Phys. Rev. A 46, R7351 (1992).
[PhysRevE.47.911] F. Hayot, C. Jayaprakash, and C. Josserand, Phys. Rev. E 47, 911 (1993).
[sakaguchi2002effective] H. Sakaguchi, Prog. Theor. Phys. 107, 879 (2002).
[PhysRevE.71.046138] K. Ueno, H. Sakaguchi, and M. Okamura, Phys. Rev. E 71, 046138 (2005).
[Cuerno-Lauritsen] R. Cuerno and K. B. Lauritsen, Phys. Rev. E 52, 4853 (1995).
[MSR] P. C. Martin, E.
D. Siggia, and H. A. Rose, Phys. Rev. A 8, 423 (1973).
[J] H. K. Janssen, Z. Phys. B 23, 377 (1976).
[D1] C. De Dominicis, J. Phys. Colloq. 37, C1-247 (1976).
[D2] C. De Dominicis, Phys. Rev. B 18, 4913 (1978).
[freyPRE1994] E. Frey and U. C. Täuber, Phys. Rev. E 50, 1024 (1994).
[Canet2011] L. Canet, H. Chaté, B. Delamotte, and N. Wschebor, Phys. Rev. E 84, 061128 (2011).
[polchinski1984] J. Polchinski, Nucl. Phys. B 231, 269 (1984).
[Weingberg1995] S. Weinberg, The Quantum Theory of Fields (Cambridge University Press, Cambridge, 1995).
[Wilson1974] K. G. Wilson and J. Kogut, Phys. Rep. 12, 75 (1974).

§ WARD-TAKAHASHI IDENTITIES

In this section, we prove λ(Λ)=λ_0, for all generalized KPZ equations, and ν(Λ)/D(Λ) =ν_0/D_0, for K_0=D_d0=0 or K_0/D_d0=ν_0/D_0, and K(Λ)/D_d(Λ) =ν_0/D_0, for K_0/D_d0=ν_0/D_0. These results are easily obtained from the following Ward-Takahashi identities <cit.>: (G^-1)_h̃ h(k=0, ω; Λ) = -i ω, iλ_0 k_1 ∂_ω_1 (G^-1)_h̃ h(k_1,ω_1;Λ)= lim_ω, k → 0∂_k Γ_h̃ h h(k_1,ω_1;k,ω; Λ), and G_h̃h^-1(k_1, ω_1;Λ)+G_h̃h^-1(-k_1, -ω_1;Λ) =-ν_0 k_1^2/D_0 G_h̃h̃^-1(k_1, -ω_1;Λ). These identities are related to invariance properties of the MSRJD action under a shift transformation, a tilt transformation, and a time-reversal transformation, respectively. In the next subsections, we will derive (<ref>)-(<ref>) following the arguments of <cit.>. Here, we first derive (<ref>)-(<ref>) from (<ref>)-(<ref>). First, by differentiating (<ref>) with respect to k_1 and taking the limit k_1 → 0, we have iλ_0 ∂_ω_1(G^-1)_h̃ h(k_1=0,ω_1;Λ)= lim_ω, k,k_1 → 0∂_k ∂_k_1Γ_h̃ h h(k_1,ω_1;k,ω; Λ). Next, we substitute (<ref>) into (<ref>) and take the limit ω_1 → 0. Then, we obtain λ_0=lim_ω, ω_1,k,k_1 → 0∂_k ∂_k_1Γ_h̃ h h(k_1,ω_1;k,ω; Λ). By recalling the definition (<ref>), we find that this equality is (<ref>). Second, we differentiate (<ref>) twice with respect to k_1. Then, we have ∂_k_1^2 G_h̃h^-1(k_1, ω_1;Λ)+∂_k_1^2 G_h̃h^-1(-k_1, -ω_1;Λ) = -ν_0/D_0 ( 2 + 2k_1∂_k_1+ k_1^2∂_k_1^2 )G_h̃h̃^-1(k_1, -ω_1;Λ). By taking the limit ω_1, k_1 → 0 and using (<ref>) and (<ref>), we obtain (<ref>). Finally, by differentiating (<ref>) four times with respect to k_1, we arrive at (<ref>).

§.§ Proof of (<ref>)

We consider a shift transformation h^'(x,t)=h(x,t)+c(t), where c(t) is an infinitesimal parameter that depends on time. The variation of the MSRJD action under the transformation is calculated as S[h^',h̃^';Λ_0] - S[h,h̃;Λ_0]= ∫ dt dx h̃(x,t) ∂_t c(t). It should be noted that this simple form comes from the invariance property of the MSRJD action for time-independent c [In general, by assuming time dependence of the infinitesimal parameter for a continuous symmetry transformation, we can obtain non-trivial identities such as (<ref>). This technique, which has been referred to as "gauging a global symmetry", is standard when we derive identities from a continuous global symmetry <cit.>. For such a case, the variation of an action under a time-gauged transformation is expressed as δ S = ∫ dt Q(t) ∂_t ϵ(t), where Q(t) is a Noether charge of the corresponding global symmetry, and ϵ(t) is the time-gauged infinitesimal parameter. The Noether charge of the shift symmetry is calculated as Q_shift = ∫ dx h̃(x,t), which is consistent with (<ref>).].
Then, the variation of the effective MSRJD action is derived as S[h^<',h̃^<';Λ]= -log∫𝒟 [h^>', h̃^>']exp[-S[h^',h̃^'; Λ_0]] = -log∫𝒟 [h^>, h̃^>]exp[-S[h,h̃; Λ_0]-∫ dtdx h̃(x,t) ∂_t c(t) ] = ∫ dtdx h̃^<(x,t) ∂_t c(t) -log∫𝒟 [h^>, h̃^>]exp[-S[h,h̃; Λ_0]-∫ dtdx h̃^>(x,t) ∂_t c(t) ] = S[h^<,h̃^<; Λ]+∫ dtdx h̃^<(x,t) ∂_t c(t). When we obtain the fourth line in (<ref>) from the third line, we have used ∫ dtdx h̃^>(x,t) ∂_t c(t) = ∫ dt ∂_t c(t) ( ∫ dx h̃^>(x,t) ) = ∫ dt ∂_t c(t) h̃^>(k=0,t) =0. Here, noting the trivial relation S[h^<',h̃^<';Λ]=S[h^<,h̃^<; Λ]+ ∫ dt dx δ S[h^<,h̃^<; Λ]/δ h^<(x,t) c(t), we rewrite (<ref>) as ∫ dt dx ( δ S[h^<,h̃^<; Λ]/δ h^<(x,t) c(t)-h̃^<(x,t) ∂_t c(t) )=0, which is further expressed as ∫ dt dx ( δ S[h^<,h̃^<; Λ]/δ h^<(x,t)+∂_t h̃^<(x,t))c(t)=0. Since this equality holds for any c(t), we obtain ∫ dx ( δ S[h^<,h̃^<; Λ]/δ h^<(x,t)+∂_t h̃^<(x,t)) =0. The differentiation of (<ref>) with respect to h̃^<(x^',t^') leads to ∫ dx ( (G^-1)_h̃ h(x^'-x, t^'-t; Λ) +∂_t δ (t-t^')δ(x-x^'))=0. By performing the Fourier transformation, we arrive at (<ref>).

§.§ Proof of (<ref>)

We consider a tilt transformation h^'(x,t) =h(x+λ_0 v t,t)+v x, h̃^'(x,t) =h̃(x+λ_0 v t,t), where v is an infinitesimal parameter. The tilt transformation for their Fourier transforms is expressed as h^'(k,t) =e^iλ_0 v k t h(k,t)-i v∂_k δ (k), h̃^'(k,t) =e^iλ_0 v k th̃(k,t). We then find the symmetry property S[h^<'+h^>',h̃^<'+h̃^>'; Λ_0]=S[h^<+h^>,h̃^<+h̃^>; Λ_0], from which we obtain S[h^<,h̃^<; Λ] =-log∫𝒟 [h^>, h̃^>]exp[-S[h^<+h^>,h̃^<+h̃^>; Λ_0]] =-log∫𝒥𝒟 [h^>', h̃^>']exp[-S[h^<'+h^>',h̃^<'+h̃^>'; Λ_0]] =S[h^<',h̃^<'; Λ]-log𝒥, where 𝒥 =1+ v a is the Jacobian of the tilt transformation, and a is a field-independent quantity. The expansion of (<ref>) in v leads to the identity ∫ dk dt [ iλ_0 kt ( δ S[h,h̃; Λ]/δ h(k,t) h(k,t) +δ S[h,h̃; Λ]/δh̃(k,t)h̃(k,t)) +i δ(k)∂_k δ S[h,h̃; Λ]/δ h(k,t) -a ]=0. We differentiate this identity with respect to h̃(k_1,t_1) and h(k_2,t_2). Then, we have ∫ dk dt [ iλ_0 kt ( δ^2 S[h,h̃; Λ]/δ h(k,t)δh̃(k_1,t_1) δ(k_2-k)δ(t_2-t) +δ^3 S[h,h̃; Λ]/δ h(k,t)δh̃(k_1,t_1) δ h(k_2,t_2) h(k,t) + δ^2 S[h,h̃; Λ]/δh̃(k,t) δ h(k_2,t_2) δ(k_1-k)δ(t_1-t) +δ^3 S[h,h̃; Λ]/δh̃(k,t)δh̃(k_1,t_1)δ h(k_2,t_2) h̃(k,t) ) +i δ(k)∂_k δ^3 S[h,h̃; Λ]/δ h(k,t)δh̃(k_1,t_1) δ h(k_2,t_2)]=0. By taking the limit h,h̃→ 0 and recalling the definitions given in (<ref>) - (<ref>), we obtain λ_0(k_1t_1+k_2t_2)(G^-1)_h̃ h(k_1,t_1-t_2;Λ) δ (k_1+k_2) =-i lim_k → 0∂_k ∫ dt Γ_h̃ h h(k,k_1,k_2;t-t_1, t_2-t_1; Λ ) δ (k+k_1+k_2). The Fourier transform of this equality is (<ref>).

§.§ Proof of (<ref>)

We consider a time-reversal transformation h^'(k,ω) =-h(k,-ω), h̃^'(k,ω) =h̃(k,-ω)-ν_0 k^2/D_0 h (k,-ω). The variation of the action (<ref>) under this transformation is calculated as δ S ≡ S[h^', h̃^';Λ_0]-S[h,h̃;Λ_0] = (D_0/ν_0-D_d0/K_0)ν_0 K_0/D_0∫dω dk/(2π)^2 (ν_0/D_0 k^2 h(-k,-ω) h(k, ω) -2h̃(-k,-ω) h(k, ω)). The generalized KPZ equation is invariant when D_0/ν_0=D_d0/K_0 or K_0=D_d0=0. Here, we focus on the case D_0/ν_0=D_d0/K_0. Then, we obtain S[h^<',h̃^<';Λ ]=S[h^<,h̃^<;Λ]-log𝒥, where 𝒥 is the Jacobian of the time-reversal transformation. By differentiating this equality with respect to h^<(k_1, ω_1) and h̃^<(k_2,ω_2), we have δ^2 S[h^<,h̃^<; Λ]/δ h^<(k_1, ω_1)δh̃^<(k_2, ω_2) = -ν_0 k_1^2/D_0 δ^2 S[h^<',h̃^<'; Λ]/δh̃^<'(k_1, -ω_1)δh̃^<'(k_2, -ω_2)-δ^2 S[h^<', h̃^<'; Λ]/δ h^<'(k_1, -ω_1)δh̃^<'(k_2, -ω_2), where we have used the relations δ/δ h(k,ω) =-δ/δ h^'(k,-ω)-ν_0 k^2/D_0 δ/δh̃^'(k,-ω), δ/δh̃(k,ω) = δ/δh̃^'(k,-ω). By recalling the definitions given in (<ref>) - (<ref>), we obtain G_h̃h^-1(k_1, ω_1;Λ) δ(ω_1+ω_2)δ(k_1+k_2)= -(ν_0 k_1^2/D_0 G_h̃h̃^-1(k_1, -ω_1;Λ)+G_h̃h^-1(-k_1, -ω_1;Λ))δ(ω_1+ω_2)δ(k_1+k_2). By rearranging (<ref>), we arrive at the identities (<ref>).
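The time-reversal identity can also be verified symbolically at tree level, where the exact propagator is replaced by the bare one defined in the setup. The following is a minimal sketch (the symbol names are ours) that imposes K_0/D_d0 = ν_0/D_0 and checks that the identity reduces to zero:

import sympy as sp

k, w = sp.symbols("k omega", real=True)
nu0, K0, D0, Dd0 = sp.symbols("nu0 K0 D0 Dd0", positive=True)

# bare matrix elements:  (G_0^-1)_{ht h} = i w + nu0 k^2 + K0 k^4,
#                        (G_0^-1)_{ht ht} = -2 (D0 + Dd0 k^2)
Ginv_hth  = sp.I*w + nu0*k**2 + K0*k**4
Ginv_htht = -2*(D0 + Dd0*k**2)

lhs = Ginv_hth + Ginv_hth.subs({k: -k, w: -w})
rhs = -(nu0*k**2/D0)*Ginv_htht.subs(w, -w)

diff = sp.simplify((lhs - rhs).subs(K0, nu0*Dd0/D0))
print("identity holds at tree level:", diff == 0)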
| http://arxiv.org/abs/1703.08946v2 | {
"authors": [
"Yuki Minami",
"Shin-ichi Sasa"
],
"categories": [
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.stat-mech",
"published": "20170327063142",
"title": "The most effective model for describing the universal behavior of unstable surface growth"
} |
Liren Shan, Huan Li, and Zhongzhi Zhang ([email protected])
School of Computer Science, Fudan University, Shanghai 200433, China
Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200433, China

The minimum dominating set (MDS) problem is a fundamental subject of theoretical computer science, and has found vast applications in different areas, including sensor networks, protein interaction networks, and structural controllability. However, the determination of the size of a MDS and the number of all MDSs in a general network is NP-hard, and it thus makes sense to seek particular graphs for which the MDS problem can be solved analytically. In this paper, we study the MDS problem in the pseudofractal scale-free web and the Sierpiński graph, which have the same number of vertices and edges. For both networks, we determine explicitly the domination number, as well as the number of distinct MDSs. We show that the pseudofractal scale-free web has a unique MDS, and its domination number is only half of that for the Sierpiński graph, which has many MDSs. We argue that the scale-free topology is responsible for the difference in the size and number of MDSs between the two studied graphs, which in turn indicates that the power-law degree distribution plays an important role in the MDS problem and its applications in scale-free networks.

Keywords: Minimum dominating set; Domination number; Scale-free network; Sierpiński graph; Complex network

§ INTRODUCTION

A dominating set of a graph 𝒢 with vertex set 𝒱 is a subset 𝒟 of 𝒱, such that each vertex in 𝒱∖𝒟 is adjacent to at least one vertex belonging to 𝒟. We call 𝒟 a minimum dominating set (MDS) if it has the smallest cardinality, and the cardinality of a MDS is called the domination number of graph 𝒢. The MDS problem has numerous practical applications in different fields <cit.>, particularly in networked systems, such as routing on ad hoc wireless networks <cit.>, multi-document summarization in sentence graphs <cit.>, and controllability in protein interaction networks <cit.>. Recently, from the angle of MDS, structural controllability of complex networks has been addressed <cit.>, providing an alternative new viewpoint on the control problem for large-scale networks <cit.>. Despite the vast applications, solving the MDS problem of a graph is a challenge, because finding a MDS of a graph is NP-hard <cit.>. Over the past years, the MDS problem has attracted considerable attention from theoretical computer science <cit.>, discrete and combinatorial mathematics <cit.>, as well as statistical physics <cit.>, and continues to be an active object of research <cit.>. Extensive empirical research <cit.> uncovered that most real networks exhibit the prominent scale-free behavior <cit.>, with the degree of their vertices following a power-law distribution P(k) ∼ k^-γ. Although an increasing number of studies have focused on the MDS problem, related works about the MDS problem in scale-free networks are very rare <cit.>. In particular, exact results for the domination number and the number of minimum dominating sets in a scale-free network are still lacking. Due to the ubiquity of the scale-free phenomenon in realistic networks, unveiling the behavior of minimum dominating sets with respect to the power-law degree distribution is important for better understanding the applications of the MDS problem in real-life scale-free networks.
On the other hand, determining the domination number and enumerating all minimum dominating sets in a generic network are formidable tasks <cit.>; it is thus of great interest to find specific scale-free networks for which the MDS problem can be exactly solved <cit.>.

In this paper, we focus on the domination number and the number of minimum dominating sets in a scale-free network called the pseudofractal scale-free web <cit.>, as well as in the Sierpiński graph, which has the same number of vertices and edges. For both networks, we determine the exact domination number and the number of all distinct minimum dominating sets. The domination number of the pseudofractal scale-free web is only half of that of the Sierpiński graph. In addition, in the pseudofractal scale-free web there is a unique minimum dominating set, while in the Sierpiński graph the number of all distinct minimum dominating sets grows exponentially with the number of vertices in the graph. We show that the root of this difference between the studied networks lies in their architectural dissimilarity: the pseudofractal scale-free web is heterogeneous, while the Sierpiński graph is homogeneous.

§ DOMINATION NUMBER AND MINIMUM DOMINATING SETS IN PSEUDOFRACTAL SCALE-FREE WEB
In this section, we determine the domination number of the pseudofractal scale-free web, and show that its minimum dominating set is unique.

§.§ Network construction and properties
The pseudofractal scale-free web <cit.> considered here is constructed using an iterative approach. Let 𝒢_n, n≥ 1, denote the n-generation network. Then the scale-free network is generated in the following way. When n=1, 𝒢_1 is a complete graph of 3 vertices. For n > 1, each subsequent generation is obtained from the current one by creating a new vertex linked to both end vertices of every existing edge. Figure <ref> shows the networks for the first several generations. By construction, the number of edges in network 𝒢_n is E_n=3^n.

The resultant network exhibits the prominent properties observed in various real-life systems. First, it is scale-free, with the degree distribution of its vertices obeying a power-law form P(k)∼ k^-γ with exponent γ = 1+ln 3/ln 2 <cit.>. In addition, it displays the small-world effect, with its average distance increasing logarithmically with the number of vertices <cit.> and the average clustering coefficient converging to a high constant 4/5. Another interesting property of the scale-free network is its self-similarity, which is also pervasive in realistic networks <cit.>.

The initial three vertices of 𝒢_1 have the largest degree, which we call hub vertices. For network 𝒢_n, we denote its three hub vertices by A_n, B_n, and C_n, respectively. The self-similarity of the network can be seen from its alternative construction method <cit.>. Given the nth generation network 𝒢_n, 𝒢_n+1 can be obtained by merging three replicas of 𝒢_n at their hub vertices, see Fig. <ref>. Let 𝒢_n^θ, θ=1,2,3, be three copies of 𝒢_n, the hub vertices of which are represented by A_n^θ, B_n^θ, and C_n^θ, respectively. Then, 𝒢_n+1 can be obtained by joining the 𝒢_n^θ, with A_n^1 (resp. C_n^1, A_n^2) and B_n^3 (resp. B_n^2, C_n^3) being identified as the hub vertex A_n+1 (resp. B_n+1, C_n+1) in 𝒢_n+1.

Let N_n be the number of vertices in 𝒢_n. According to the second construction of the network, N_n obeys the relation N_n+1=3N_n-3, which together with the initial value N_1=3 yields N_n=(3^n+3)/2.
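To make the iterative construction concrete, here is a short Python sketch (ours, not from the paper) that generates the edges of 𝒢_n and checks the counts N_n=(3^n+3)/2 and E_n=3^n derived above.

def pseudofractal_web(n):
    """Edges of the generation-n pseudofractal scale-free web.

    G_1 is a triangle; each later generation adds, for every existing
    edge, a new vertex joined to both of its endpoints.
    """
    edges = [(0, 1), (1, 2), (0, 2)]
    num_vertices = 3
    for _ in range(n - 1):
        new_edges = []
        for u, v in edges:
            w = num_vertices          # the vertex created for edge (u, v)
            num_vertices += 1
            new_edges += [(u, w), (v, w)]
        edges += new_edges
    return num_vertices, edges

for n in range(1, 6):
    N, E = pseudofractal_web(n)
    # matches N_n = (3^n + 3)/2 and E_n = 3^n
    assert N == (3**n + 3) // 2 and len(E) == 3**n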
§.§ Domination number and minimum dominating set
Let γ_n denote the domination number of network 𝒢_n. In order to determine γ_n, we define some intermediate quantities. As shown above, there are three hub vertices in 𝒢_n, so all dominating sets of 𝒢_n can be sorted into four classes: Ω_n^0, Ω_n^1, Ω_n^2, and Ω_n^3, where Ω_n^k (k = 0,1,2,3) represents those dominating sets, each of which includes exactly k hub vertices. Let Θ_n^k, k = 0,1,2,3, be the subset of Ω_n^k consisting of the dominating sets with the smallest cardinality (number of vertices), denoted by γ_n^k. Figure <ref> illustrates the definitions of Θ_n^k, k = 0,1,2,3. By definition, we have the following lemma.

The domination number of network 𝒢_n, n≥1, is γ_n = min{γ_n^0,γ_n^1,γ_n^2,γ_n^3}.

After reducing the problem of determining γ_n to computing γ_n^k, k = 0,1,2,3, we next evaluate γ_n^k by using the self-similar property of the network.

For two successive generation networks 𝒢_n and 𝒢_n+1, n≥1,
γ_n+1^0 = min{3γ_n^0, 2γ_n^0+γ_n^1, 2γ_n^1+γ_n^0, 3γ_n^1},
γ_n+1^1 = min{2γ_n^1+γ_n^0-1, 3γ_n^1-1, γ_n^0+γ_n^1+γ_n^2-1, γ_n^2+2γ_n^1-1, 2γ_n^2+γ_n^0-1, 2γ_n^2+γ_n^1-1},
γ_n+1^2 = min{2γ_n^1+γ_n^2-2, γ_n^3+2γ_n^1-2, 2γ_n^2+γ_n^1-2, γ_n^3+γ_n^2+γ_n^1-2, 3γ_n^2-2, γ_n^3+2γ_n^2-2},
γ_n+1^3 = min{3γ_n^2-3, γ_n^3+2γ_n^2-3, 2γ_n^3+γ_n^2-3, 3γ_n^3-3}.

By definition, γ_n+1^k, k = 0,1,2,3, is the cardinality of Θ_n+1^k. Below, we will show that the four sets Θ_n+1^k, k = 0,1,2,3, constitute a complete set, since each one can be constructed iteratively from Θ_n^0, Θ_n^1, Θ_n^2, and Θ_n^3. Then, γ_n+1^k, k = 0,1,2,3, can be obtained from γ_n^0, γ_n^1, γ_n^2, and γ_n^3.

We first consider Eq. (<ref>), which can be proved graphically. Note that 𝒢_n+1 is composed of three copies of 𝒢_n, namely 𝒢_n^θ (θ=1,2,3). By definition, for any dominating set χ belonging to Θ_n+1^0, the three hub vertices of 𝒢_n+1 are not in χ, which means that the corresponding six identified hub vertices of the 𝒢_n^θ do not belong to χ, see Fig. <ref>. Thus, we can construct χ from Θ_n^0, Θ_n^1, Θ_n^2, and Θ_n^3 by considering whether the other three hub vertices of 𝒢_n^θ, θ=1,2,3, are in χ or not. Figure <ref> shows all possible configurations of dominating sets in Ω_n+1^0 that contain Θ_n+1^0 as subsets. From Fig. <ref>, we obtain γ_n+1^0 = min{3γ_n^0, 2γ_n^0+γ_n^1, 2γ_n^1+γ_n^0, 3γ_n^1}. Eqs. (<ref>), (<ref>), and (<ref>) can be proved similarly. In Figs. <ref>, <ref>, and <ref>, we provide graphical representations of Eqs. (<ref>), (<ref>), and (<ref>), respectively.

For network 𝒢_n, n≥ 3, γ_n^3 ≤ γ_n^2 ≤ γ_n^1 ≤ γ_n^0.

We prove this lemma by mathematical induction on n. For n=3, we can obtain γ_3^3=3, γ_3^2=4, γ_3^1=5 and γ_3^0=6 by hand. Thus, the basis step holds immediately. Assume that the lemma holds for n=t, t≥ 3. Then, from Eq. (<ref>), γ_t+1^3 = min{3γ_t^2-3, γ_t^3+2γ_t^2-3, 2γ_t^3+γ_t^2-3, 3γ_t^3-3}. By the induction hypothesis, we have γ_t+1^3 = 3γ_t^3-3. Analogously, we can obtain the following relations: γ_t+1^2 = γ_t^3 + 2γ_t^2-2, γ_t+1^1 = 2γ_t^2 + γ_t^1 - 1, γ_t+1^0 = 3γ_t^1. By comparing the above Eqs. (<ref>-<ref>) and using the induction hypothesis γ_t^3 ≤ γ_t^2 ≤ γ_t^1 ≤ γ_t^0, we have γ_t+1^3 ≤ γ_t+1^2 ≤ γ_t+1^1 ≤ γ_t+1^0. Therefore, the lemma is true for n=t+1. This concludes the proof of the lemma.

The domination number of network 𝒢_n, n≥3, is γ_n = (3^{n-2}+3)/2.

By Lemma <ref> and Eq. (<ref>), we obtain γ_n+1 = γ_n+1^3 = 3γ_n^3 - 3 = 3γ_n - 3, which, with the initial condition γ_3=3, is solved to yield the result.

Theorem <ref>, especially Eq. (<ref>), implies that any minimum dominating set of 𝒢_n includes the three hub vertices.
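The recursions in the lemma above are easy to check numerically. The following sketch (ours) iterates them from the hand-computed base case γ_3^0,…,γ_3^3 = 6,5,4,3 used in the induction, and verifies them against the closed form in the theorem above and those in the corollary below.

def gamma_pseudofractal(n):
    """Iterate the four coupled min-recursions from the base case n = 3."""
    g0, g1, g2, g3 = 6, 5, 4, 3
    for _ in range(n - 3):
        g0, g1, g2, g3 = (
            min(3*g0, 2*g0 + g1, 2*g1 + g0, 3*g1),
            min(2*g1 + g0 - 1, 3*g1 - 1, g0 + g1 + g2 - 1,
                g2 + 2*g1 - 1, 2*g2 + g0 - 1, 2*g2 + g1 - 1),
            min(2*g1 + g2 - 2, g3 + 2*g1 - 2, 2*g2 + g1 - 2,
                g3 + g2 + g1 - 2, 3*g2 - 2, g3 + 2*g2 - 2),
            min(3*g2 - 3, g3 + 2*g2 - 3, 2*g3 + g2 - 3, 3*g3 - 3),
        )
    return g0, g1, g2, g3

for n in range(3, 12):
    g0, g1, g2, g3 = gamma_pseudofractal(n)
    assert g3 == (3**(n-2) + 3) // 2               # the theorem above
    assert g2 == (3**(n-2) + 1) // 2 + 2**(n-2)    # corollary below
    assert g1 == (3**(n-2) - 1) // 2 + 2**(n-1)
    assert g0 == (3**(n-2) - 3) // 2 + 3 * 2**(n-2)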
The smallest number of vertices in a dominating set of 𝒢_n, n≥3, which contains exactly 2, 1, and 0 hub vertices, is γ_n^2 = (3^{n-2}+1)/2 + 2^{n-2}, γ_n^1 = (3^{n-2}-1)/2 + 2^{n-1}, and γ_n^0 = (3^{n-2}-3)/2 + 3·2^{n-2}, respectively.

By Eqs. (<ref>) and (<ref>), we have γ_n^3 = γ_n = (3^{n-2}+3)/2. Considering Eq. (<ref>), we obtain a recursive equation for γ_n^2: γ_n+1^2 = 2γ_n^2 + (3^{n-2}+3)/2 - 2, which coupled with γ_3^2 = 4 is solved to give Eq. (<ref>). Analogously, we can prove Eqs. (<ref>) and (<ref>).

For network 𝒢_n, n ≥ 3, there is a unique minimum dominating set.

Equation (<ref>) and Fig. <ref> show that for n ≥ 3 any minimum dominating set of 𝒢_n+1 is in fact the union of the minimum dominating sets Θ_n^3 of the three replicas of 𝒢_n (i.e. 𝒢_n^1, 𝒢_n^2, and 𝒢_n^3) forming 𝒢_n+1, with each pair of their identified hub vertices being counted only once. Thus, any minimum dominating set of 𝒢_n+1 is determined by those of 𝒢_n^1, 𝒢_n^2, and 𝒢_n^3. Since the minimum dominating set of 𝒢_3 is unique, there is a unique minimum dominating set for 𝒢_n for all n ≥ 3. Moreover, it is easy to see that the unique minimum dominating set of 𝒢_n, n ≥ 3, is actually the set of all vertices of 𝒢_n-2.

§ DOMINATION NUMBER AND MINIMUM DOMINATING SETS IN SIERPIŃSKI GRAPH
In this section, we study the domination number and the number of minimum dominating sets in the Sierpiński graph, and compare the results with those of the above-studied scale-free network, aiming to uncover the effect of the scale-free property on the domination number and the number of minimum dominating sets.

§.§ Construction of Sierpiński graph
The Sierpiński graph is also built in an iterative way. Let 𝒮_n, n≥ 1, denote the n-generation graph. Then the Sierpiński graph is created as follows. When n=1, 𝒮_1 is an equilateral triangle containing three vertices and three edges. For n=2, perform a bisection of the sides of 𝒮_1, forming four small replicas of the original equilateral triangle, and remove the central downward-pointing triangle to get 𝒮_2. For n>2, 𝒮_n is obtained from 𝒮_n-1 by performing the bisecting and removing operations for each triangle in 𝒮_n-1. Figure <ref> shows the Sierpiński graphs for n=1,2,3. It is easy to verify that both the number of vertices and the number of edges in the Sierpiński graph 𝒮_n are identical to those of the scale-free network 𝒢_n, which are N_n=(3^n+3)/2 and E_n=3^n, respectively.

Different from 𝒢_n, the Sierpiński graph is homogeneous: the degree of the vertices in 𝒮_n is equal to 4, excluding the topmost vertex A_n, the leftmost vertex B_n, and the rightmost vertex C_n, whose degree is 2. We call these three degree-2 vertices the outmost vertices. The Sierpiński graph is also self-similar, which suggests another construction method for the graph. Given the nth generation graph 𝒮_n, the (n+1)th generation graph 𝒮_n+1 can be obtained by joining three replicas of 𝒮_n at their outmost vertices, see Fig. <ref>. Let 𝒮_n^θ, θ=1,2,3, be three copies of 𝒮_n, the outmost vertices of which are represented by A_n^θ, B_n^θ, and C_n^θ, respectively. Then, 𝒮_n+1 can be obtained by joining the 𝒮_n^θ, with A_n^1, B_n^2, and C_n^3 being the outmost vertices A_n+1, B_n+1, and C_n+1 of 𝒮_n+1.

§.§ Domination number
When no confusion can arise, for the Sierpiński graph 𝒮_n we employ the same notation as for 𝒢_n studied in the preceding section. Let γ_n be the domination number of 𝒮_n.
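Before turning to the recursive analysis, here is a Python sketch (ours, not from the paper) of the merge construction just described; the choice of which corner pairs are identified is our reading of the A_n^1, B_n^2, C_n^3 rule above. The assertions confirm that 𝒮_n has the same vertex and edge counts as 𝒢_n.

def sierpinski(n):
    """Edges of the Sierpinski graph S_n, built by gluing three copies
    of S_{n-1} at their outmost (corner) vertices."""
    if n == 1:
        corners = ('A', 'B', 'C')
        return corners, {frozenset(p) for p in [('A', 'B'), ('B', 'C'), ('A', 'C')]}
    (a, b, c), edges = sierpinski(n - 1)
    # tag the three copies 1,2,3 and identify three pairs of corners
    glue = {(1, b): (2, a), (2, c): (3, b), (3, a): (1, c)}
    def rename(t, v):
        return glue.get((t, v), (t, v))
    new_edges = {frozenset(rename(t, u) for u in e) for t in (1, 2, 3) for e in edges}
    # outmost vertices of the new graph: A of copy 1, B of copy 2, C of copy 3
    return (rename(1, a), rename(2, b), rename(3, c)), new_edges

for n in range(1, 7):
    corners, E = sierpinski(n)
    V = {v for e in E for v in e}
    assert len(V) == (3**n + 3) // 2 and len(E) == 3**n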
Note that all dominating sets of 𝒮_n can be classified into four types: Ω_n^0, Ω_n^1, Ω_n^2, and Ω_n^3, where Ω_n^k (k = 0,1,2,3) denotes those dominating sets, each including exactly k outmost vertices. Let Θ_n^k, k = 0,1,2,3, be the subsets of Ω_n^k, each of which has the smallest cardinality, denoted by γ_n^k.The domination number of the Sierpiński graph 𝒮_n, n≥1, is γ_n = min{γ_n^0,γ_n^1,γ_n^2,γ_n^3}.Thus, in order todetermine γ_n for 𝒮_n, we can alternatively evaluate γ_n^k, k = 0,1,2,3, which can be solved by using the self-similar structure of the Sierpinski graph.To determine γ_n^k, k = 0,1,2,3, we introduce some more quantities assisting the calculation. We use 𝒮_n^k to denote a subgraph of the Sierpiński graph 𝒮_n, which is obtained from 𝒮_n by removing k, k =1,2,3, outmost vertices and the edges incident to them. Let ϕ_n^i, i=0,1,2 be the smallest number of vertices in a dominating set of 𝒮_n^1 containing i outmost vertices but excluding either of the two neighbors of the removed outmost vertex; letξ_n^j, j=0,1, be the smallest number of vertices in a dominating set of 𝒮_n^2 including j outmost vertex but excluding any neighbor of the two removed outmost vertices; and let η_n denote the smallest number of vertices in a dominating set of 𝒮_n^3, which does not include the neighbors of the three removed outmost vertices.According to the self-similar property of the Sierpiński graph, we can establish some relations between the quantities defined above. For any integer n ≥ 3, the following relations hold. γ_n+1^0= min{γ_n^0+ϕ_n^0+ξ_n^0, 3ϕ_n^0, 2γ_n^0 +ξ_n^0, γ_n^0+2ϕ_n^0, 2γ_n^0+ϕ_n^0, 3γ_n^0,2γ_n^1+ξ_n^0-1, γ_n^0+2ϕ_n^1-1, γ_n^1+ϕ_n^1+ϕ_n^0-1, 2γ_n^1+ϕ_n^0-1, γ_n^1+γ_n^0+ϕ_n^1-1, 2γ_n^1+γ_n^0-1, γ_n^2+γ_n^1+ϕ_n^1-2, γ_n^2+2γ_n^1-2,3γ_n^2-3}, γ_n+1^1= min{γ_n^0+ϕ_n^1+ξ_n^0, γ_n^1+ϕ_n^0+ξ_n^0, γ_n^0+ϕ_n^0+ξ_n^1, ϕ_n^1+2ϕ_n^0, 2γ_n^0+ξ_n^1, γ_n^0+ϕ_n^1+ϕ_n^0, γ_n^1+γ_n^0+ξ_n^0, γ_n^1+2ϕ_n^0, 2γ_n^0+ϕ_n^1, γ_n^1+γ_n^0+ϕ_n^0, γ_n^1+2γ_n^0, γ_n^2+γ_n^1+ξ_n^0-1, γ_n^0+ϕ_n^2+ϕ_n^1-1, γ_n^1+ϕ_n^2+ϕ_n^0-1,γ_n^2+ϕ_n^1+ϕ_n^0-1, γ_n^1+γ_n^0+ϕ_n^2-1,γ_n^2+γ_n^0+ϕ_n^1-1,γ_n^2+γ_n^1+ϕ_n^0-1,γ_n^2+γ_n^1+γ_n^0-1,2γ_n^1+ξ_n^1-1, γ_n^1+2ϕ_n^1-1, 2γ_n^1+ϕ_n^1-1, 3γ_n^1-1,γ_n^3+γ_n^1+ϕ_n^1-2,γ_n^3+2γ_n^1-2, γ_n^2+γ_n^1+ϕ_n^2-2,2γ_n^2+ϕ_n^1-2, 2γ_n^2+γ_n^1-2,γ_n^3+2γ_n^2-3}, γ_n+1^2= min{γ_n^0+ϕ_n^1+ξ_n^1, γ_n^1+ϕ_n^0+ξ_n^1,γ_n^1+ϕ_n^1+ξ_n^0, 2ϕ_n^1+ϕ_n^0, 2γ_n^1+ξ_n^0, γ_n^1+γ_n^0+ξ_n^1,γ_n^0+2ϕ_n^1, γ_n^1+ϕ_n^1+ϕ_n^0, 2γ_n^1+ϕ_n^0, γ_n^1+γ_n^0+ϕ_n^1, 2γ_n^1+γ_n^0, γ_n^0+2ϕ_n^2-1, γ_n^2+ϕ_n^2+ϕ_n^0-1, 2γ_n^2+ξ_n^0-1, γ_n^2+γ_n^0+ϕ_n^2-1,2γ_n^2+ϕ_n^0-1, 2γ_n^2+γ_n^0-1, γ_n^1+ϕ_n^2+ϕ_n^1-1, γ_n^2+γ_n^1+ξ_n^1-1,γ_n^2+2ϕ_n^1-1,γ_n^2+γ_n^1+ϕ_n^1-1, 2γ_n^1+ϕ_n^2-1, γ_n^2+2γ_n^1-1,γ_n^3+γ_n^2+ϕ_n^1-2, γ_n^3+γ_n^1+ϕ_n^2-2,γ_n^3+γ_n^2+γ_n^1-2,2γ_n^2+ϕ_n^2-2,3γ_n^2-2,2γ_n^3+γ_n^2-3}, γ_n+1^3= min{γ_n^1+ϕ_n^1+ξ_n^1, 3ϕ_n^1, 2γ_n^1+ξ_n^1, γ_n^1+2ϕ_n^1,2γ_n^1+ϕ_n^1,3γ_n^1, 2γ_n^2+ξ_n^1-1,γ_n^1+2ϕ_n^2-1,γ_n^2+ϕ_n^2+ϕ_n^1-1,2γ_n^2+ϕ_n^1-1,γ_n^2+γ_n^1+ϕ_n^2-1, 2γ_n^2+γ_n^1-1,γ_n^3+γ_n^2+ϕ_n^2-2,γ_n^3+2γ_n^2-2,3γ_n^3-3}, ϕ_n+1^0= min{γ_n^0+2ξ_n^0, 2ϕ_n^0+ξ_n^0, γ_n^0+ϕ_n^0+η_n, 2γ_n^0+η_n, γ_n^0+ϕ_n^0+ξ_n^0, 3ϕ_n^0, 2γ_n^0+ξ_n^0, γ_n^0+2ϕ_n^0, 2γ_n^0+ϕ_n^0, γ_n^1+ϕ_n^1+ξ_n^0-1, 2ϕ_n^1+ϕ_n^0-1,γ_n^1+ϕ_n^0+ξ_n^1-1, γ_n^0+ϕ_n^1+ξ_n^1-1, γ_n^0+2ϕ_n^1-1, γ_n^1+γ_n^0+ξ_n^1-1,γ_n^1+ϕ_n^1+ϕ_n^0-1, γ_n^1+γ_n^0+ϕ_n^1-1, 2γ_n^1+η_n-1, 2ϕ_n^1+ϕ_n^0-1, γ_n^1+ϕ_n^1+ξ_n^0-1, 2γ_n^1+ξ_n^0-1, γ_n^1+π_n^1+ϕ_n^0-1, 2γ_n^1+ϕ_n^0-1, γ_n^1+ϕ_n^2+ϕ_n^1-2, 2γ_n^1+ϕ_n^2-2, γ_n^2+γ_n^1+ξ_n^1-2, γ_n^2+2ϕ_n^1-2, γ_n^2+γ_n^1+ϕ_n^1-2, 
2γ_n^2+ϕ_n^2-3}, ϕ_n+1^1= min{γ_n^1+2ξ_n^0, 2ϕ_n^0+ξ_n^1, γ_n^0+ξ_n^1+ξ_n^0, ϕ_n^1+ϕ_n^0 +ξ_n^0, γ_n^1+ϕ_n^0+η_n, γ_n^0+ϕ_n^1+η_n, ϕ_n^1+ϕ_n^0+ξ_n^0, ϕ_n^1+2ϕ_n^0, γ_n^0+ϕ_n^1+ξ_n^0, γ_n^1+ϕ_n^0+ξ_n^0, γ_n^0+ϕ_n^1+ϕ_n^0, γ_n^0+γ_n^1+ξ_n^0,γ_n^1+2ϕ_n^0, γ_n^1+γ_n^0+ϕ_n^0, γ_n^2+ϕ_n^1+ξ_n^0-1, γ_n^0+ϕ_n^2+ξ_n^1-1, γ_n^2+ϕ_n^0+ξ_n^1-1, ϕ_n^2+ϕ_n^1+ϕ_n^0-1, γ_n^0+ϕ_n^2+ϕ_n^1-1, γ_n^2+ϕ_n^1+ϕ^0-1, γ_n^2+γ_n^0+ξ_n^1-1, γ_n^2+γ_n^0+ϕ_n^1-1, γ_n^1+ϕ_n^1+ξ_n^1-1, 3ϕ_n^1-1, γ_n^1+2ϕ_n^1-1, 2γ_n^1+ξ_n^1-1,2γ_n^1+ϕ_n^1-1, γ_n^2+γ_n^1+η_n-1, γ_n^1+ϕ_n^2+ξ_n^0-1, γ_n^2+γ_n^1+ξ_n^0-1, γ_n^1+ϕ_n^2+ϕ_n^0-1, γ_n^2+γ_n^1+ϕ_n^0-1, γ_n^2+ϕ_n^2+ϕ_n^1-2,γ_n^1+2ϕ_n^2-2, γ_n^2+γ_n^1+ϕ_n^2-2,2γ_n^2+ξ_n^1-2, 2γ_n^2+ϕ_n^1-2, γ_n^3+γ_n^1+ξ_n^1-2, γ_n^3+2ϕ_n^1-2, γ_n^3+γ_n^1+ϕ_n^1-2,γ_n^3+γ_n^2+ϕ_n^2-3}, ϕ_n+1^2= min{ϕ_n^1+ϕ_n^0+ξ_n^1, γ_n^1+ϕ_n^1+η_n, 2ϕ_n^1+ξ_n^0, γ_n^1+ξ_n^1+ξ_n^0, 2ϕ_n^1+ϕ_n^0, γ_n^1+ϕ_n^1+ξ_n^0, 2γ_n^1+ξ_n^0, γ_n^1+ϕ_n^1+ϕ_n^0, 2γ_n^1+ϕ_n^0, 2ϕ_n^2+ϕ_n^0-1, 2γ_n^2+η_n-1, γ_n^2+ϕ_n^2+ξ_n^0-1,γ_n^2+ϕ_n^2+ϕ_n^0-1, 2γ_n^2+ξ_n^0-1, 2γ_n^2+ϕ_n^0-1,γ_n^1+ϕ_n^2+ξ_n^1-1,γ_n^2+ϕ_n^1+ξ_n^1-1, ϕ_n^2+2ϕ_n^1-1, γ_n^2+γ_n^1+ξ_n^1-1, γ_n^2+2ϕ_n^1-1, γ_n^1+ϕ_n^2+ϕ_n^1-1,γ_n^2+γ_n^1+ϕ_n^1-1,γ_n^2+2ϕ_n^2-2, 2γ_n^2+ϕ_n^2-2, γ_n^3+γ_n^2+ξ_n^1-2, γ_n^3+ϕ_n^2+ϕ_n^1-2, γ_n^3+γ_n^2+ϕ_n^1-2,2γ_n^3+ϕ_n^2-3}, ξ_n+1^0= min{γ_n^0+ξ_n^0+η_n, ϕ_n^0+2ξ_n^0, 2ϕ_n^0+η_n, γ_n^0+ϕ_n^0+η_n, 2ϕ_n^0+ξ_n^0, γ_n^0+2ξ_n^0, 3ϕ_n^0, γ_n^0+ϕ_n^0+ξ_n^0,γ_n^0+2ϕ_n^0, γ_n^0+2ξ_n^1-1, 2ϕ_n^1+ξ_n^0-1, ϕ_n^1+ϕ_n^0+ξ_n^1-1, 2ϕ_n^1+ϕ_n^0-1, γ_n^0+ϕ_n^1+ξ_n^1-1, γ_n^0+2ϕ_n^1-1,γ_n^1+ϕ_n^1+η_n-1, γ_n^1+ξ_n^1+ξ_n^0-1, γ_n^1+ϕ_n^1+ξ_n^0-1, γ_n^1+ϕ_n^0+ξ_n^1-1, γ_n^1+ξ_n^1+ξ_n^0-1, γ_n^2+ϕ_n^1+ξ_n^1-2, γ_n^2+2ϕ_n^1-2, γ_n^1+ϕ_n^2+ξ_n^1-2, γ_n^1+ϕ_n^2+ϕ_n^1-2, ϕ_n^2+2ϕ_n^1-2, γ_n^2+2ϕ_n^2-3}, ξ_n+1^1= min{γ_n^1+ξ_n^0+η_n, ϕ_n^0+ξ_n^1+ξ_n^0, ϕ_n^1+ϕ_n^0+η_n, ϕ_n^1+2ξ_n^0, ϕ_n^1+ϕ_n^0+ξ_n^0, γ_n^1+2ξ_n^0,ϕ_n^1+2ϕ_n^0, γ_n^1+ϕ_n^0+ξ_n^0, γ_n^1+2ϕ_n^0, γ_n^1+2ξ_n^1-1, 2ϕ_n^1+ξ_n^1-1, 3ϕ_n^1-1, γ_n^1+ϕ_n^1+ξ_n^1-1, γ_n^1+2ϕ_n^1-1, ϕ_n^2+ϕ_n^0+ξ_n^1-1, γ_n^2+ϕ_n^1+η_n-1, γ_n^2+ξ_n^1+ξ_n^0-1, γ_n^2+ϕ_n^1+ξ_n^0 -1, γ_n^2+ϕ_n^1+ξ_n^0-1, ϕ_n^2+ϕ_n^1+ϕ_n^0-1, γ_n^2+ϕ_n^0+ξ_n^1-1, γ_n^2+ϕ_n^1+ϕ_n^0-1, γ_n^3+ϕ_n^1+ξ_n^1-2, γ_n^3+2ϕ_n^1-2, γ_n^2+ϕ_n^2+ξ_n^1-2,2ϕ_n^2+ϕ_n^1-2, γ_n^2+ϕ_n^2+ϕ_n^1-2, γ_n^3+2ϕ_n^2-3}, η_n+1 = min{ϕ_n^0+ξ_n^0+η_n, 3ξ_n^0, ϕ_n^0+2ξ_n^0, 2ϕ_n^0+η_n, 2ϕ_n^0+ξ_n^0,3ϕ_n^0, ϕ_n^0+2ξ_n^1-1, 2ϕ_n^1+η_n-1, ϕ_n^1+ξ_n^1+ξ_n^0-1, 2ϕ_n^1+ξ_n^0-1, ϕ_n^1+ϕ_n^0+ξ_n^1-1, 2ϕ_n^1+ϕ_n^0-1, ϕ_n^2+ϕ_n^1+ξ_n^1-2, ϕ_n^2+2ϕ_n^1-2, 3ϕ_n^2-3}.This lemma can be proved graphically. Figs. <ref>-<ref> illustrate the graphical representations from Eq. (<ref>) to Eq. (<ref>).For arbitrary n ≥ 3,γ_n^1 = γ_n^0 +1 ,γ_n^2 =γ_n^0+2 ,γ_n^3 = γ_n^0+2 ,ϕ_n^0= γ_n^0 ,ϕ_n^1= γ_n^0+1 ,ϕ_n^2= γ_n^0+1 ,ξ_n^0= γ_n^0 ,ξ_n^1= γ_n^0+1 ,η_n= γ_n^0 .We prove this lemma by induction. For n = 3, it is easy to obtain by hand that γ_3^0 =3, γ_3^1 = 4, γ_3^2=5, γ_3^3=5, ϕ_3^0=3, ϕ_3^1 = 4, ϕ_3^2=4, ξ_3^0=3, ξ_3^1=4, and η_3=3. Thus, the result is true for n = 3. Let us assume that the lemma holds for n = t.For n =t+1, by induction assumption and lemma <ref>, it is not difficult to check that the result is true for n =t+1. The domination number of the Sierpiński graph 𝒮_n, n≥3, is γ_n = 3^n-2. Accordingto Lemmas <ref>, <ref>, and <ref>, we obtainγ_n+1 = γ_n+1^0 = 3γ_n^0 = 3 γ_n. 
Considering γ_3 = 3, it follows that γ_n = 3^{n-2} holds for all n ≥ 3.

§.§ Number of minimum dominating sets
Let a_n denote the number of minimum dominating sets of the Sierpiński graph 𝒮_n, b_n the number of minimum dominating sets of 𝒮_n^1 containing no outmost vertices, c_n the number of minimum dominating sets of 𝒮_n^2 containing no outmost vertices, d_n the number of minimum dominating sets of 𝒮_n^3, and e_n the number of minimum dominating sets of 𝒮_n^1 containing two outmost vertices.

For n≥ 3, the five quantities a_n, b_n, c_n, d_n and e_n can be obtained recursively according to the following relations:
a_n+1 = 6a_nb_nc_n + 2b_n^3 + 3a_n^2c_n + 9a_nb_n^2 + 6a_n^2b_n + a_n^3,
b_n+1 = 2a_nc_n^2 + 4b_n^2c_n + 2a_nb_nd_n + a_n^2d_n + 8a_nb_nc_n + 3b_n^3 + 2a_n^2c_n + 4a_nb_n^2 + a_n^2b_n,
c_n+1 = 2a_nc_nd_n + 4b_nc_n^2 + 2b_n^2d_n + 7b_n^2c_n + 3a_nc_n^2 + 2b_n^3 + 2a_nb_nd_n + 4a_nb_nc_n + a_nb_n^2,
d_n+1 = 6b_nc_nd_n + c_n^3 + 9b_nc_n^2 + 3b_n^2d_n + 6b_n^2c_n + b_n^3 + e_n^3,
e_n+1 = b_ne_n^2,
with the initial condition a_3=2, b_3=3, c_3=2, d_3=1 and e_3=1.

We first prove Eq. (<ref>). Note that a_n is actually the number of distinct minimum dominating sets of 𝒮_n, each of which contains no outmost vertices. Then, according to Lemma <ref> and Fig. <ref>, we can establish Eq. (<ref>) by using the rotational symmetry of the Sierpiński graph. The remaining equations (<ref>)-(<ref>) can be proved similarly.

Applying Eqs. (<ref>)-(<ref>), the values of a_n, b_n, c_n, d_n and e_n can be recursively obtained for small n, as listed in Table <ref>, which shows that these quantities grow exponentially with n.

§ COMPARISON AND ANALYSIS
In the preceding two sections, we studied the minimum dominating set problem for the pseudofractal scale-free web and the Sierpiński graph with identical numbers of vertices and edges. For both networks, we determined the domination number and the number of minimum dominating sets. From the obtained results, we can see that the domination number of the pseudofractal scale-free web is only half of that corresponding to the Sierpiński graph. However, there exists only one minimum dominating set in the pseudofractal scale-free web, while in the Sierpiński graph the number of distinct minimum dominating sets grows exponentially with the number of vertices. Because the size and number of minimum dominating sets are closely related to network structure, we argue that this distinction highlights the structural disparity between the pseudofractal scale-free web and the Sierpiński graph.

The above-observed difference of minimum dominating sets between the pseudofractal scale-free web and the Sierpiński graph can be easily understood. The pseudofractal scale-free web is heterogeneous: it contains high-degree vertices that are connected to each other and are also linked to the small-degree vertices in the network. These high-degree vertices are very efficient in dominating the whole network. Thus, to construct a minimum dominating set, we will try to select high-degree vertices instead of small-degree vertices, which substantially reduces the number of minimum dominating sets. Quite different from the pseudofractal scale-free web, the Sierpiński graph is homogeneous; all its vertices, excluding the three outmost ones, have the same degree of four and thus play a similar role in selecting a minimum dominating set. This is why the number of minimum dominating sets in the Sierpiński graph is much larger than that in the pseudofractal scale-free web.
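As a quick illustration of this growth, the following sketch (ours) iterates the counting recursions of the previous subsection from the initial values a_3,…,e_3 given above (using the corrected value c_3 = 2); already for moderate n the count a_n of minimum dominating sets of 𝒮_n is astronomically large.

def count_mds_sierpinski(n):
    """Iterate the counting recursions from the base case n = 3."""
    a, b, c, d, e = 2, 3, 2, 1, 1
    for _ in range(n - 3):
        a, b, c, d, e = (
            6*a*b*c + 2*b**3 + 3*a*a*c + 9*a*b*b + 6*a*a*b + a**3,
            2*a*c*c + 4*b*b*c + 2*a*b*d + a*a*d + 8*a*b*c + 3*b**3
                + 2*a*a*c + 4*a*b*b + a*a*b,
            2*a*c*d + 4*b*c*c + 2*b*b*d + 7*b*b*c + 3*a*c*c + 2*b**3
                + 2*a*b*d + 4*a*b*c + a*b*b,
            6*b*c*d + c**3 + 9*b*c*c + 3*b*b*d + 6*b*b*c + b**3 + e**3,
            b * e * e,
        )
    return a

for n in range(3, 8):
    print(n, count_mds_sierpinski(n))   # grows exponentially with 3^n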
Therefore, the difference of minimum dominating sets between the Sierpiński graph and the pseudofractal scale-free web lies in the inhomogeneous topology of the latter.

We note that although we only studied minimum dominating sets in a specific scale-free network, its domination number is considerably small, taking up only about one-ninth of the number of all vertices in the network. Since the scale-free property appears in a large variety of realistic networks, it is expected that the domination number of realistic scale-free networks is also much smaller than the network size, qualitatively similar to that of the pseudofractal scale-free web. A recent empirical study reported that in many real scale-free networks the domination number is indeed much smaller than the network size <cit.>.

§ CONCLUSIONS
To conclude, we studied the size and the number of minimum dominating sets in the pseudofractal scale-free web and the Sierpiński graph, which have the same number of vertices and edges. For both networks, by using their self-similarity we determined explicit expressions for the domination number, which for the former network is only one-half of that for the latter network. We also demonstrated that the minimum dominating set of the pseudofractal scale-free web is unique, while the number of minimum dominating sets in the Sierpiński graph grows exponentially with the number of vertices in the whole graph. The difference of the minimum dominating sets in the two considered networks is rooted in their architectural disparity: the pseudofractal scale-free web is heterogeneous, while the Sierpiński graph is homogeneous. Our work provides useful insight into applications of minimum dominating sets in scale-free networks.

§ ACKNOWLEDGEMENTS
This work is supported by the National Natural Science Foundation of China under Grant No. 11275049.

§ REFERENCES
[1] T. W. Haynes, S. Hedetniemi, P. Slater, Fundamentals of Domination in Graphs, Marcel Dekker, New York, 1998.
[2] J. Wu, H. Li, A dominating-set-based routing scheme in ad hoc wireless networks, Telecomm. Syst. 18 (2001) 13–36.
[3] J. Wu, Extended dominating-set-based routing in ad hoc wireless networks with unidirectional links, IEEE Trans. Parallel Distrib. Syst. 13 (2002) 866–881.
[4] C. Shen, T. Li, Multi-document summarization via the minimum dominating set, in: Proceedings of the 23rd International Conference on Computational Linguistics, Association for Computational Linguistics, 2010, pp. 984–992.
[5] S. Wuchty, Controllability in protein interaction networks, Proc. Natl. Acad. Sci. USA 111 (2014) 7156–7160.
[6] J. C. Nacher, T. Akutsu, Dominating scale-free networks with variable scaling exponent: heterogeneous networks are not difficult to control, New J. Phys. 14 (2012) 073005.
[7] J. C. Nacher, T. Akutsu, Structural controllability of unidirectional bipartite networks, Sci. Rep. 3 (2013) 1647.
[8] Y.-Y. Liu, J.-J. Slotine, A.-L. Barabási, Controllability of complex networks, Nature 473 (2011) 167–173.
[9] T. Nepusz, T. Vicsek, Controlling edge dynamics in complex networks, Nature Phys. 8 (2012) 568–573.
[10] F. V. Fomin, F. Grandoni, A. V. Pyatkin, A. A. Stepanov, Combinatorial bounds via measure and conquer: Bounding minimal dominating sets and applications, ACM Trans. Algorithms 5 (2008) 9.
[11] A.-R. Hedar, R. Ismail, Simulated annealing with stochastic local search for minimum dominating set problem, Int. J. Mach. Learn. Cybernet. 3 (2012) 97–109.
[12] G. D. da Fonseca, C. M. de Figueiredo, V. G. P. de Sá, R. C. Machado, Efficient sub-5 approximations for minimum dominating sets in unit disk graphs, Theoret. Comput. Sci. 540 (2014) 70–81.
[13] M. Gast, M. Hauptmann, M. Karpinski, Inapproximability of dominating set on power law graphs, Theoret. Comput. Sci. 562 (2015) 436–452.
[14] J.-F. Couturier, R. Letourneur, M. Liedloff, On the number of minimal dominating sets on some graph classes, Theoret. Comput. Sci. 562 (2015) 634–642.
[15] L. R. Matheson, R. E. Tarjan, Dominating sets in planar graphs, Europ. J. Combinatorics 17 (1996) 565–568.
[16] M. M. Kanté, V. Limouzy, A. Mary, L. Nourine, On the enumeration of minimal dominating sets and related notions, SIAM J. Discrete Math. 28 (2014) 1916–1929.
[17] T. Honjo, K.-i. Kawarabayashi, A. Nakamoto, Dominating sets in triangulations on surfaces, J. Graph Theory 63 (2010) 17–30.
[18] J.-H. Zhao, Y. Habibulla, H.-J. Zhou, Statistical mechanics of the minimum dominating set problem, J. Stat. Phys. 159 (2015) 1154–1174.
[19] Z. Li, E. Zhu, Z. Shao, J. Xu, On dominating sets of maximal outerplanar and planar graphs, Discrete Appl. Math. 198 (2016) 164–169.
[20] P. A. Golovach, P. Heggernes, D. Kratsch, Enumerating minimal connected dominating sets in graphs of bounded chordality, Theoret. Comput. Sci. 630 (2016) 63–75.
[21] M. E. J. Newman, The structure and function of complex networks, SIAM Rev. 45 (2003) 167–256.
[22] A. Barabási, R. Albert, Emergence of scaling in random networks, Science 286 (1999) 509–512.
[23] F. Molnár Jr, N. Derzsy, É. Czabarka, L. Székely, B. K. Szymanski, G. Korniss, Dominating scale-free networks using generalized probabilistic methods, Sci. Rep. 4 (2014) 6308.
[24] L. Lovász, M. D. Plummer, Matching Theory, volume 29 of Annals of Discrete Mathematics, North Holland, New York, 1986.
[25] S. N. Dorogovtsev, A. V. Goltsev, J. F. F. Mendes, Pseudofractal scale-free web, Phys. Rev. E 65 (2002) 066122.
[26] Z. Z. Zhang, Y. Qi, S. G. Zhou, W. L. Xie, J. H. Guan, Exact solution for mean first-passage time on a pseudofractal scale-free web, Phys. Rev. E 79 (2009) 021127.
[27] Z. Z. Zhang, S. G. Zhou, L. C. Chen, Evolving pseudofractal networks, Eur. Phys. J. B 58 (2007) 337–344.
[28] C. Song, S. Havlin, H. Makse, Self-similarity of complex networks, Nature 433 (2005) 392–395.
[29] Z. Z. Zhang, H. X. Liu, B. Wu, S. G. Zhou, Enumeration of spanning trees in a pseudofractal scale-free web, Europhys. Lett. 90 (2010) 68002.
| http://arxiv.org/abs/1703.09023v1 | {
"authors": [
"Liren Shan",
"Huan Li",
"Zhongzhi Zhang"
],
"categories": [
"cs.SI",
"cs.DM"
],
"primary_category": "cs.SI",
"published": "20170327120950",
"title": "Domination number and minimum dominating sets in pseudofractal scale-free web and Sierpiński graph"
} |
Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can)

Karl Bringmann (Max Planck Institute for Informatics, Saarland Informatics Campus), Paweł Gawrychowski (University of Haifa; partially supported by the Israel Science Foundation grant 794/13), Shay Mozes (IDC Herzliya; partially supported by the Israel Science Foundation grant 794/13), Oren Weimann (University of Haifa; partially supported by the Israel Science Foundation grant 794/13)
================================================================================

The edit distance between two rooted ordered trees with n nodes labeled from an alphabet Σ is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. Tree edit distance is a well known generalization of string edit distance. The fastest known algorithm for tree edit distance runs in cubic O(n^3) time and is based on a similar dynamic programming solution as string edit distance. In this paper we show that a truly subcubic O(n^3-ε) time algorithm for tree edit distance is unlikely: For |Σ| = Ω(n), a truly subcubic algorithm for tree edit distance implies a truly subcubic algorithm for the all pairs shortest paths problem. For |Σ| = O(1), a truly subcubic algorithm for tree edit distance implies an O(n^k-ε) algorithm for finding a maximum weight k-clique. Thus, while in terms of upper bounds string edit distance and tree edit distance are highly related, in terms of lower bounds string edit distance exhibits the hardness of the strong exponential time hypothesis [Backurs, Indyk STOC'15] whereas tree edit distance exhibits the hardness of all pairs shortest paths. Our result provides a matching conditional lower bound for one of the last remaining classic dynamic programming problems.

§ INTRODUCTION
Tree edit distance is the most common similarity measure between labelled trees. Algorithms for computing the tree edit distance are being used in a multitude of applications in various domains including computational biology <cit.>, structured text and data processing (e.g., XML) <cit.>, programming languages and compilation <cit.>, computer vision <cit.>, character recognition <cit.>, automatic grading <cit.>, answer extraction <cit.>, and the list goes on and on.

Let F and G be two rooted trees with a left-to-right order among siblings and where each vertex is assigned a label from an alphabet Σ. The edit distance between F and G is the minimum cost of transforming F into G by a sequence of elementary operations (at most one operation per node): changing the label of a node v, deleting a node v and setting the children of v as the children of v's parent (in the place of v in the left-to-right order), and inserting a node v (the complement of delete[Since a deletion in F is equivalent to an insertion in G and vice versa, we can focus on finding the minimum cost of a sequence of just deletions and relabelings in both trees that transform F and G into isomorphic trees.]). See Figure <ref>.
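To fix ideas, here is a minimal Python sketch (ours, not from the paper) of the classic memoized recursion on rightmost roots, specialized to unit costs; the general cost model is introduced next. It is exactly the kind of recursive solution discussed below, and its worst-case running time is far from the O(n^3) state of the art.

from functools import lru_cache

# A tree is (label, (child, child, ...)); a forest is a tuple of trees.
def size(forest):
    return sum(1 + size(kids) for _, kids in forest)

@lru_cache(maxsize=None)
def fed(F, G):
    """Forest edit distance with unit costs for insert/delete/relabel."""
    if not F:
        return size(G)                    # insert everything in G
    if not G:
        return size(F)                    # delete everything in F
    (fl, fk), (gl, gk) = F[-1], G[-1]     # rightmost roots and their children
    return min(
        fed(F[:-1] + fk, G) + 1,          # delete rightmost root of F
        fed(F, G[:-1] + gk) + 1,          # insert rightmost root of G
        fed(F[:-1], G[:-1]) + fed(fk, gk) + (fl != gl),  # match the two roots
    )

t1 = (('a', (('b', ()), ('c', ()))),)     # a(b, c)
t2 = (('a', (('c', ()),)),)               # a(c)
print(fed(t1, t2))                        # 1: delete the leaf labeled b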
The cost of these elementary operations is given by two functions: c(a) is the cost of deleting or inserting a vertex with label a, and c(a,b) is the cost of changing the label of a vertex from a to b.

The Tree Edit Distance (TED) problem was introduced by Tai in the late 70's <cit.> as a generalization of the well known string edit distance problem <cit.>. Since then it has been extensively studied. Tai gave an O(n^6)-time algorithm for TED, which was subsequently improved to O(n^4) in the late 80's <cit.>, then to O(n^3 log n) in the late 90's <cit.>, and finally to O(n^3) in 2007 <cit.>. Many other algorithms have been developed for TED; see the popular survey of Bille <cit.> (this survey alone has more than 600 citations) and the books of Apostolico and Galil <cit.> and Valiente <cit.>. For example, Pawlik and Augsten <cit.> recently defined a class of dynamic programming algorithms that includes all the above algorithms for TED, and developed an algorithm whose performance on any input is not worse (and possibly better) than that of any of the existing algorithms. Other attempts achieved better running time by restricting the edit operations or the scoring schemes <cit.>, or by resorting to approximation <cit.>. However, in the worst case no algorithm currently beats Ω(n^3) (not even by a logarithmic factor). Due to their importance in practice, many of the algorithms described above, as well as additional heuristics and optimizations, were studied experimentally <cit.>.

Solving tree edit distance in truly subcubic O(n^3-ε) time is arguably one of the main open problems in pattern matching, and the most important one in tree pattern matching. The fact that, despite the significant body of work on this problem, no truly subcubic time algorithm has been found, leads to the following natural conjecture that no such algorithm exists.

For any ϵ>0, Tree Edit Distance on two n-node trees cannot be solved in O(n^3-ϵ) time.

In the same paper proving the O(n^3) upper bound for TED <cit.>, Demaine et al. prove that their algorithm is optimal within a certain class of dynamic programming algorithms for TED. However, proving Conjecture <ref> seems to be beyond our current lower bound techniques. A recent development in theoretical computer science suggests a more fine-grained classification of problems in P. This is done by showing lower bounds conditioned on the conjectured hardness of certain archetypal problems such as All Pairs Shortest Paths (APSP), 3-SUM, k-Clique, and Satisfiability, i.e., the Strong Exponential Time Hypothesis (SETH).

The APSP Conjecture. Given a directed or undirected graph with n vertices and integer edge weights, many classical algorithms for APSP (such as Dijkstra or Floyd-Warshall) run in O(n^3) time. The fastest to date is the recent algorithm of Williams <cit.> that runs faster than O(n^3/log^C n) time for all constants C. Nevertheless, no truly subcubic O(n^3-ε) time algorithm for APSP is known. This led to the following conjecture, assumed in many papers, e.g. <cit.>.

For any ε > 0 there exists c > 0, such that All Pairs Shortest Paths on n-node graphs with edge weights in {1,…, n^c} cannot be solved in O(n^3-ε) time.

The (Weighted) k-Clique Conjecture. The fundamental k-Clique problem asks whether a given undirected unweighted graph on n nodes and O(n^2) edges contains a clique on k nodes. This is the parameterized version of the famously NP-hard Max-Clique <cit.>.
k-Clique is amongst the most well-studied problems in theoretical computer science, and it is the canonical intractable (W[1]-complete) problem in parameterized complexity. A naive algorithm solves k-Clique in O(n^k) time. A faster O(n^{ωk/3})-time algorithm (where ω < 2.373 is the exponent of matrix multiplication) can be achieved via a reduction to Boolean matrix multiplication on matrices of size n^{k/3} × n^{k/3} if k is divisible by 3 <cit.>[For the case that k is not divisible by 3 see <cit.>.]. Any improvement to this bound immediately implies a faster algorithm for MAX-CUT <cit.>. It is a longstanding open question whether improvements to this bound are possible <cit.>. The k-Clique conjecture asserts that for no k ≥ 3 and ε > 0 the problem has an O(n^{ωk/3-ε}) time algorithm, or an O(n^{k-ε}) algorithm avoiding fast matrix multiplication, and has been used e.g. in <cit.>.

We work with a conjecture on a weighted version of k-Clique. In the Max-Weight k-Clique problem, the edges have integral weights and we seek the k-clique of maximum total weight. When the edge weights are small, one can obtain an Õ(n^{k-δ}) time algorithm for some δ>0 <cit.>. However, when the weights are larger than n^k, the trivial O(n^k) algorithm is the best known (ignoring n^o(k) improvements). This gives rise to the following conjecture, which has been used e.g. in <cit.>.

For any ε>0 there exists a constant c>0, such that for any k≥ 3 Max-Weight k-Clique on n-node graphs with edge weights in {1,…,n^ck} cannot be solved in O(n^{k(1-ε)}) time.

In 2014, with the burst of the conditional lower bound paradigm, Abboud <cit.> highlighted seven main open problems in the field: The first two were to prove quadratic n^{2-o(1)} lower bounds for String Edit Distance and Longest Common Subsequence, which were soon resolved in STOC'15 <cit.> and FOCS'15 <cit.> conditional on SETH. The third problem was to show a cubic n^{3-o(1)} lower bound for RNA-Folding. Surprisingly, in FOCS'16 <cit.> it was shown that RNA-Folding can actually be solved in truly subcubic time, thus ruling out the possibility of such a lower bound. The remaining four problems remain open. In fact, two of them, showing a cubic lower bound for Graph Diameter and an n^{⌈k/2⌉-o(1)} lower bound for k-SUM, have actually been used as hardness conjectures themselves, e.g., in SODA'15 <cit.> and ICALP'13 <cit.>. Until the present work, no progress has been made on the last problem posed by Abboud: a cubic lower bound for Tree Edit Distance. In the absence of progress on either upper bounds or conditional lower bounds for TED, one might think that Conjecture <ref> is yet another fragment in the current landscape of fine-grained complexity, unrelated to other common conjectures.

§.§ Our Results
In this paper we resolve the complexity of tree edit distance by showing a tight connection between edit distance of trees and all pairs shortest paths of graphs. We prove that Conjecture <ref> implies Conjecture <ref>, and that Conjecture <ref> implies Conjecture <ref>, even for constant alphabet.

A truly subcubic algorithm for tree edit distance on alphabet size |Σ| = Ω(n) implies a truly subcubic algorithm for APSP.

A truly subcubic algorithm for tree edit distance on sufficiently large alphabet size |Σ| = O(1) implies an O(n^{k(1-ε)}) algorithm for Max-Weight k-Clique.

Note that the known upper bounds for string edit distance and tree edit distance are highly related.
The O(n^2) algorithm for strings and the O(n^3) algorithm for trees (and forests) are both based on a similar recursive solution: The recursive subproblems in strings (forests) are obtained by either deleting, inserting, or matching the rightmost or leftmost character (root). In strings, it is best to always consider the rightmost character. The recursive subproblems are then prefixes and the overall running time is O(n^2). In trees, however, sticking with the rightmost (or leftmost) root may result in an O(n^4) running time. The specific way in which the recursion switches between leftmost and rightmost roots is exactly what enables the O(n^3) solution. It is interesting that while the upper bounds for both problems are so similar, in terms of lower bounds string edit distance exhibits the hardness of SETH while tree edit distance exhibits the hardness of APSP.

While a considerable share of the recent conditional lower bounds is on string pattern matching problems <cit.>, the only conditional lower bound for a tree pattern matching problem is the recent SODA'16 quadratic lower bound for exact pattern matching <cit.> (the problem of deciding whether one tree is a subtree of another). We solve the main remaining open problem in tree pattern matching, and one of the last remaining classic dynamic programming problems. Indeed, apart from the problems discussed above, for most of the classic dynamic programming problems a conditional lower bound or an improved algorithm has been found recently. This includes the Fréchet distance <cit.>, bitonic TSP <cit.>, context-free grammar parsing <cit.>, maximum weight rectangle <cit.>, and pseudopolynomial time algorithms for subset sum <cit.>. Tree edit distance was one of the few classic dynamic programming problems that so far resisted this approach. Two notable remaining dynamic programming problems without matching bounds are the optimal binary search tree problem (O(n^2)) <cit.> and knapsack (pseudopolynomial O(nW)) <cit.>.

§.§ Our Reductions
APSP to TED. In order to prove APSP-hardness, by <cit.> it suffices to show a reduction from the negative triangle detection problem, where we are given an n-node graph G with edge weights w(.,.) and want to decide whether there are i,j,k with w(i,j) + w(j,k) + w(i,k) < 0. Our first result is a reduction from negative triangle detection to tree edit distance, which produces trees of size O(n) over an alphabet of size O(n). This yields the matching conditional lower bound, ruling out O(n^{3-ε}) time algorithms.

Our reduction constructs trees that are of a very special form: Both trees consist of a single path (called spine) of length O(n) with a single leaf pending from every node (see Figure <ref>). Such instances have already been identified as difficult for a restricted class of algorithms based on a specific dynamic programming approach <cit.>. In our setting we cannot assume anything about the algorithm, and hence need a deeper insight into the structure of any valid sequence of edit operations (see Figure <ref> and Lemma <ref>). Using this structural understanding, we then show that it is possible to carefully construct a cost function so that any optimal solution must obey a certain structure (Figure <ref>).
Namely, for some i,j,k we match the two leaves in depth k, we match the right spine node in depth k to the left leaf in depth i (which encodes w(i,k)), we match the left spine node in depth k to the right leaf in depth j (which encodes w(j,k)), and we match as many spine nodes above depth i and j as possible (which together encode w(i,j) by a telescoping sum).

Constant alphabet size. The drawback of the above reduction is the large alphabet size |Σ|, as essentially each node needs its own alphabet symbol. There are two major difficulties to improving this to constant alphabet size. First, the instances identified as hard by the above reduction (and by Demaine et al. <cit.> for a restricted class of algorithms) are no longer hard for small alphabet! Indeed, in Section <ref> we give an O(n^2|Σ|^2 log n) algorithm for these instances, which is truly subcubic for constant alphabet size. This algorithm is the first to break the barrier by Demaine et al. for such trees, and we believe it is of independent interest. Regarding lower bounds, this algorithm shows that for a reduction with constant alphabet size our trees necessarily need to be more complicated, making it much harder to reason about the structure of a valid edit sequence. We will construct new hard instances by taking the previous ones and attaching small subtrees to all nodes.

The second difficulty is that, since the input size of TED is Õ(n + |Σ|^2), a reduction from negative triangle detection to TED with constant alphabet size would need to considerably compress the Ω(n^2) input size of negative triangle detection. It is a well-known open problem whether such compressing reductions exist. To circumvent this barrier, we assume the stronger Max-Weight k-Clique Conjecture, where the input size Õ(n^2) is very small compared to the running time O(n^k).

Max-Weight k-Clique to TED. Given an instance of Max-Weight k-Clique on an n-node graph G and weights bounded by n^O(k), we construct a TED instance on trees of size O(n^{k/3+2}) over an alphabet of size O(k). One can verify that an O(n^{3-ε}) algorithm for TED now implies an O(n^{k(1-ε/6)}) algorithm for Max-Weight k-Clique, for any sufficiently large k=k(ε). We roughly follow the reduction from negative triangle detection; now each spine node corresponds to a k/3-clique in G. To cope with the small alphabet, we simulate the previous matching costs with small gadgets. In particular, to each spine node, corresponding to some k/3-clique U, we add a small subtree T(U) of size O(n^2) such that the edit distance between T(U) and T(U') encodes the total weight of edges between U and U'.

This raises two issues. First, we need to represent a weight w ∈ {0,…,n^O(k)} by trees over an alphabet of size O(k) (that is, constant). This is solved by writing w in base n as ∑_i=0^O(k) α_i n^i and constructing α_i nodes of type i, such that the cost of matching two type i nodes is n^i.
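For illustration, a tiny sketch (ours) of this base-n encoding; the point is that a weight as large as n^O(k) is represented by only O(kn) gadget nodes, and the weight is recovered as the total cost of the matched type-i pairs.

def digits_base_n(w, n, k):
    """Digits alpha_0..alpha_{k-1} with w = sum(alpha_i * n**i); used to
    realize a weight w as alpha_i matched node pairs of type i, each
    pair contributing n**i to the edit cost."""
    assert 0 <= w < n**k
    alphas = []
    for _ in range(k):
        w, a = divmod(w, n)
        alphas.append(a)
    return alphas

n = 10
alphas = digits_base_n(4072, n, 5)        # [2, 7, 0, 4, 0]
assert sum(a * n**i for i, a in enumerate(alphas)) == 4072
# number of gadget nodes is sum(alphas) <= (n-1)*k, independent of w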
A second issue is that we need the small subtree T(U) to interact with every other small subtree T(U'). So, in a sense, T(U) needs to “prepare” for any possible U', and yet its size needs to be small. We achieve this by creating in T(U'), for every node u in G, a separate component responsible for counting the total weight of all edges between u and all nodes in U'. Then, in T(U) we have a separate component for every node u ∈ U, and make sure that it is necessarily matched to the appropriate component in T(U').

The final and most intricate component of our reduction is to enforce that in any optimal solution we have some control on which small subtrees can be matched to which. A similar issue was present in the negative triangle reduction, where we required control over which spine nodes above depth i are matched to which spine nodes above depth j. This is handled in the negative triangle reduction by assigning a different matching cost depending on the node's depth. Now, however, we cannot afford so many different costs. We overcome this with yet another gadget, called an I-gadget, that achieves roughly the same goal, but in a more “distributed” manner. Both of our reductions are highly non-trivial and introduce a number of new tricks that could be useful for other problems on trees.

§ REDUCING APSP TO TED
We re-define the cost of matching two nodes to be the original cost minus the cost of deleting both nodes. Then, the goal of TED amounts to choosing a subset of red nodes in both trees, so that the subtrees defined by the red nodes are isomorphic (i.e., their left-right and ancestor-descendant relation is the same in both trees) and the total cost of matching the corresponding red nodes is minimized. See Figure <ref>. We work with this formulation from now on.

It turns out that a hard instance for TED is given by two seemingly simple caterpillar trees. These two trees F and G, also called left and right, are shown in Figure <ref>. Each tree consists of spine nodes and leaf nodes. If u is a spine node then we denote by u' the (unique) leaf node attached to u. For any such hard instance of TED, the red nodes in any matching have the structure given by Lemma <ref> below. Informally, it states that starting from the top of the left tree and ordering the nodes by depth, the matching consists of (1) a prefix of a matching subsequence of spine nodes in both trees, (2) a suffix of a matching subsequence of leaf nodes that are in reverse order in the other tree, and (3) at most one final spine node in each of the trees matching a leaf node in the other tree that is located between the prefix part (1) and the suffix part (2).

Let f_1,f_2,… and g_1,g_2,… denote the spine nodes of F and G, respectively, ordered by depth. Then, for some p,q≥ 0 and some i_1<i_2<⋯<i_p<i_p+1<⋯<i_p+q+1<i_p+q+2 and j_1<j_2<⋯<j_p<j_p+1<⋯<j_p+q+1<j_p+q+2 the set of red nodes consists of:
* Spine nodes f_i_1,f_i_2,…,f_i_p matched respectively to spine nodes g_j_1,g_j_2,…,g_j_p,
* Leaf nodes f'_i_p+2,f'_i_p+3,…,f'_i_p+q+1 matched respectively to leaf nodes g'_j_p+q+1,g'_j_p+q,…,g'_j_p+2 (note the reversed order),
* Optionally, a spine node f_i_p+q+2 matched to leaf node g'_j_p+1. Also optionally, a spine node g_j_p+q+2 matched to a leaf node f'_i_p+1.

Consider the subtree defined by the red nodes. It has two isomorphic copies, one in F and one in G. Its nodes are all the red nodes. The children of a node u are all red nodes v_1,v_2,…,v_k whose lowest red ancestor is u. The order is such that v_i precedes v_i+1 in a left-to-right preorder traversal of F (or equivalently of G). Let u be a red node with two or more children v_1,v_2,…,v_k, k≥ 2. Observe that u must correspond to spine nodes in both F and G. Further observe that at most one v_i can correspond to a spine node (otherwise, for two spine nodes one must be an ancestor of the other). Consider any ℓ∈{1,2,…,k-1}.
It is not hard to see that node v_ℓ+1 must correspond to a leaf node in F and node v_ℓ must correspond to a leaf node in G. This implies that both v_ℓ and v_ℓ+1 are leaves in the red subtree. Moreover, v_1 is the only node that may correspond to a spine node in F and v_k is the only node that may correspond to a spine node in G. Consequently, the red subtree has a particularly simple structure: it consists of nodes u_1,u_2,…,u_p such that for every ℓ=1,2,…,p-1 the only child of u_ℓ is u_ℓ+1, and nodes v_1,v_2,…,v_k (for some k≥ 1) that are all children of u_p.

For every ℓ=1,2,…,p, the node u_ℓ must correspond to a spine node f_i_ℓ∈ F and g_j_ℓ∈ G. We immediately obtain <ref> that i_1<i_2<…<i_p and that j_1<j_2<…<j_p. The nodes v_1,v_2,…,v_k are all children of u_p in the subtree. It is possible that all v_i are mapped to leaf nodes f'_i_p+2,f'_i_p+3,…,f'_i_p+q+1 and g'_j_p+2,g'_j_p+3,…,g'_j_p+q+1. In this case, they must be mapped in reverse order since a left-to-right preorder traversal visits the leaves of G in order of their depth and those of F in reverse-depth order. This implies <ref> that i_p≤ i_p+2< … < i_p+q+1 and j_p≤ j_p+2 < … < j_p+q+1. Recall however that v_1 may be mapped to a spine node f_i_p+q+2 in F and a leaf node g'_j_p+1 in G. The requirement that i_p+q+2>i_p+q+1 and that j_p+2 > j_p+1 ≥ j_p follows from the fact that these nodes correspond to a leftmost leaf in the subtree. For symmetric reasons, v_k may be matched to a spine node g_j_p+q+2 ∈ G for some j_p+q+2>j_p+q+1 and i_p+2 > i_p+1 ≥ i_p. This implies <ref> and concludes the proof.

The above lemma characterizes the structure of a solution to what we call the hard instance of TED. We next show how to reduce the negative triangle detection problem to TED on the hard instance. Negative triangle detection is known to be subcubic equivalent to APSP <cit.>. Given a complete weighted n-node undirected graph, where w(i,j) denotes the weight of the edge (i,j), the problem asks whether there are i,j,k such that w(i,j)+w(j,k)+w(i,k) < 0. To solve negative triangle detection, we clearly only need to find i,j,k that minimize w(i,j)+w(j,k)+w(i,k). We will show how to construct, given such a graph, a hard instance of TED of size O(n), such that min_i,j,k w(i,j)+w(j,k)+w(i,k) can be extracted from the edit distance.

Given a complete undirected n-node graph G with weights w(.,.) in {1,…,n^c}, we construct, in linear time in the output size, an instance of TED of size O(n) with alphabet size |Σ|=O(n) such that the minimum weight of a triangle in G can be extracted from the edit distance. Consequently, an O(n^3-ϵ) time algorithm for TED implies an O(n^3-ϵ) algorithm for negative triangle detection, and thus an O(n^3-ϵ/3) algorithm for APSP by a reduction in <cit.>.

We create a hard instance of TED consisting of two trees F and G as in Figure <ref>. Every tree is divided into a top and a bottom part. The spine nodes of these parts are denoted by a_1,a_2,…,a_n for the top left part, b_1,b_2,…,b_n+1 for the bottom left part, c_1,c_2,…,c_n for the top right part, and d_1,d_2,…,d_n+1 for the bottom right part. The labels of all nodes are distinct (hence the alphabet size |Σ| is Θ(n)). We set the cost c(u,v) of matching two nodes u and v as described below, where M denotes a sufficiently large number to be specified later. Intuitively, our assignment of costs ensures that any valid solution to TED must match b'_k to d'_k, b_k+1 to c'_j, and d_k+1 to a'_i for some i,j,k (as shown in Figure <ref>).
Furthermore, the optimal solution (i.e., of minimum cost) must choose i,j,k that minimize w(i,k)+w(k,j)+w(i,j). The costs are assigned as follows:
* c(b'_k,d'_k) = -M^2-2M·k for every k=1,2,…,n.
* c(b_k+1,c'_j) = -M^2+M·k+M·j+w(k,j) for every k=1,2,…,n and j=1,2,…,n.
* c(a'_i,d_k+1) = -M^2+M·k+M·i+w(i,k) for every i=1,2,…,n and k=1,2,…,n.
* c(a_i,c_j) = -2M+w(i,j)-w(i-1,j-1) for every i=2,3,…,n and j=2,3,…,n.
* c(a_i,c_1) = -M(i+1)+w(i,1) for every i=1,2,…,n.
* c(a_1,c_j) = -M(j+1)+w(1,j) for every j=1,2,…,n.
All the remaining costs c(u,v) are set to ∞. The following theorem proves that these costs imply the required structure on the optimal solution as described above. Intuitively, by choosing a sufficiently large M, because of the -M^2 addend in <ref>, <ref> and <ref> we can ensure that any optimal solution matches b'_k to d'_k, b_k'+1 to c'_j, and d_k”+1 to a'_i, for some i,j,k and k',k”≥ k. Then, because of the M·k in <ref> and <ref>, in any optimal solution actually k=k'=k” and the total cost of all these matchings is w(k,j)+w(i,k). Finally, because of the -2M in <ref>, the -M(i+1) in <ref>, and the -M(j+1) in <ref>, in any optimal solution a_i is matched to c_j, a_i-1 to c_j-1, a_i-2 to c_j-2 and so on. The total cost of these matchings is w(i,j), since the w(i,j)-w(i-1,j-1) terms in <ref> form a telescoping sum.

For sufficiently large M, the total cost of an optimal matching in a hard instance with costs <ref>-<ref> is -3M^2+min_i,j,k w(i,k)+w(k,j)+w(i,j).

Consider i,j,k minimizing w(i,k)+w(k,j)+w(i,j). We assume without loss of generality that i≥ j. It is easy to see that it is possible to choose the following matching (see Figure <ref>):
Then, if x_z'+1<x_z'+1 for some z'∈{1,…,L} (where we define x_L+1=i+1), we can increase all x_1,x_2,…,x_z' by 1 to decrease the total cost by M, up to lower order terms. So x_L=i, x_L-1=i-1,…, x_1 = i-L+1. Now if L<j then x_1>1 (recall that we assumed i ≥ j) and also y_z'+1<y_z'+1 for some z'∈{1,…,L} (again, we define y_L+1=j+1). This means that we can increase all y_1,y_2,…,y_z' by 1 and then additionally match a_x_1-1 with c_1 to decrease the total cost by M, up to lower order terms.Second, if x_1=1 a symmetric argument applies. We obtain that indeed L=min(i,j)=j, and a_i-j+1 is matched to c_1, a_i-j+2 is matched to c_2, ..., a_i is matched to c_j. Now, by the same calculations as in the previous paragraph, the total cost is -3M^2+w(i,k)+w(k,j)+w(i,j). § REDUCING MAX-WEIGHT K-CLIQUE TO TEDThe drawback of the reduction described in Section <ref> is the large size of the alphabet. That is, given a complete weighted n-node undirected graph it creates two trees of size O(n) where labels of nodes are distinct, and therefore |Σ|=Θ(n). We would like to refine the reduction so that |Σ|=O(1). However, as the input size of TED on n-node trees and alphabet Σ with O(log n)-bit integer weights is Õ(n + |Σ|^2), such a reduction would need to compress the Õ(n^2) input size of negative triangle detection considerably.To circumvent this barrier, we assume the stronger Max-Weight k-Clique Conjecture, where the input size Õ(n^2) is very small compared to the running time bound O(n^k). Given a complete undirected n-node graph G with weights in {1,…,n^ck}, we construct, in linear time in the output size, an instance of TED of size O(n^k/3+2) with alphabet size |Σ|=O(ck) such that the maximum weight of an k-clique in G can be extracted from the edit distance.Thus, an O(n^3-ϵ') time algorithm for TED for sufficiently large |Σ|=O(1) implies an O(n^(k/3+2)(3-ϵ')) time algorithm for max-weight k-Clique. Setting ϵ=ϵ'/6, we obtain that, for every c>0, there exists k=⌈ 6/ϵ⌉ such that max-weight k-Clique can be solved in timeO(n^(k/3+2)(3-ϵ')) = O(n^k-ϵ'k/3+6-2ϵ') = O(n^k(1-ϵ)-kϵ+6) = O(n^k(1-ϵ)),so Conjecture <ref> is violated. The reduction starts with enumerating all k/3-cliques in the graph and identifying them with numbers 1,2,…,N, where N≤ n^k/3. Let Q(i) denote the set of nodes in the i-th clique. Then, for i,j such that Q(i)∩ Q(j)=∅, W(i,j) is the total weight of all edges connecting two nodes in the i-th clique or a node in the i-th clique with a node in the j-th clique. Our goal is to calculate the maximum value of W(i,z)+W(z,j)+W(j,i) over i,j,z such that Q(i),Q(j) and Q(z) are pairwise disjoint. If we define w(u,u)=0 and increase every other weight w(u,v) by Λ := k^2 n^ck, this is equivalent to maximising over all i,j,z. Indeed, if Q(i),Q(j),Q(z) are pairwise disjoint, the total weight is at least k2Λ, and otherwise it is at most (k2 - 1) (Λ + n^ck) < k2Λ. Note that the new weights are still bounded by n^O(ck).Our construction of a hard instance of size O(N ·(n)) is similar to Section <ref>, however,the costs are set up differently and we attach small additional gadgets to some of the nodes (which is necessary, cf. Section <ref>). The original two trees (with some extra spine nodes without any leaves) are called the macro structure and all small gadgets are called the micro structures. 
With notation as in Section <ref>, the following micro structures are created for every i=1,2,…,N (see Figure <ref>):

* A'_i attached to the leaf a'_i,
* a copy of I attached as the left child of the leaf c'_i,
* C'_i attached as the right child of the leaf c'_i,
* A_i attached to the spine node a_{i-1} between the previously existing children a_i and a'_{i-1},
* B_i attached to the spine node b_i between the previously existing children b_{i+1} and b'_i,
* C_i attached to the spine node c_{i-1} as the rightmost child,
* D_i attached to the spine node d_i between the previously existing children d'_i and d_{i+1}.

Notice that A_i is attached above a_i (and similarly C_j is attached above c_j). Therefore, we need to create dummy spine nodes a_0 and c_0. We also insert an additional spine node b''_i between b_i and b_{i+1}, and similarly d''_i between d_i and d_{i+1}, for every i=1,2,…,N-1. See Figure <ref>.

The costs in the macro structure are chosen as follows, where again M is a sufficiently large value (picking M=n^{O(ck)} will suffice):

* (b_z,c'_j) = -M^8 for every z=1,2,…,N and j=1,2,…,N,
* (a'_i,d_{z'}) = -M^8 for every i=1,2,…,N and z'=1,2,…,N,
* (b'_z,d'_{z'}) = -M^7·2 for every z=1,2,…,N and z'=1,2,…,N,
* (a_i,c_j) = -M^3·2+M^2 for every i=1,2,…,N and j=1,2,…,N.

Additionally, the extra spine nodes b''_i and d''_i can be matched to some of the nodes of I. Each copy of I is a path consisting of k/3 segments I_0,I_1,…,I_{k/3-1} of length n-1, where the root of the whole I belongs to I_0. The label of every u∈I_i is the same, and the costs are set so that (u,u)=-M^7·n^i. The label of every b''_z (and also of every d''_z) is chosen as the label of every u∈I_m, where n^m is the largest power of n dividing N-z. The cost of matching any other two labels used in the macro structure is set to infinity. For the nodes belonging to the other micro structures, the cost of matching is at least -M^6 and will be specified precisely later. This is enough to enforce the following structural property.

For sufficiently large M, any optimal matching has the following structure: there exist i,j,z such that a'_i is matched to d_z, c'_j is matched to b_z, b'_1 is matched to d'_{z-1}, b'_2 is matched to d'_{z-2}, …, b'_{z-1} is matched to d'_1. Furthermore, if z<N then b''_z is matched to a descendant of c'_j and d''_z is matched to a descendant of a'_i. Ignoring the spine nodes a_1,…,a_i,c_1,…,c_j and all micro structures that are not copies of I, the cost of any such solution is -M^8·2-M^7·2(N-1).

For sufficiently large M, any optimal solution must match a'_i to d_z and c'_j to b_{z'}, for some i,j,z,z', as otherwise its cost is larger than -M^8·2. By the reasoning described in Lemma <ref>, these i,j,z,z' are uniquely defined for any optimal solution. Nodes from the copy of I attached as the left child of the leaf c'_j can be matched to some spine nodes below b_z, nodes from the copy of I attached as the right child of the leaf a'_i can be matched to some spine nodes below d_{z'}, and no other nodes from the copies of I can be matched. We claim that the total contribution of these nodes is -M^7(N-z) and -M^7(N-z'), respectively. By symmetry, it is enough to justify the former. Observe that the cost of matching a single u∈I_i is smaller than the total cost of matching all nodes from I_0∪…∪I_{i-1}, therefore an optimal solution must match as many nodes to nodes from I_{k/3-1} as possible.
Looking at the expansions of all the numbers N-z, N-(z+1), …, N-(N-1) in base n, where N-z=∑_{i=0}^{k/3-1} α_i n^i, we see that there are α_{k/3-1} such nodes, namely the nodes b''_{z'} with z ≤ z' < N and N-z' divisible by n^{k/3-1}. Then, an optimal solution must match as many nodes as possible to nodes from I_{k/3-2}, using only nodes above the topmost node matched to a node from I_{k/3-1}. Looking again at the same expansion, we see that there are α_{k/3-2} such nodes, namely the nodes b''_{z'} with z ≤ z' < N - α_{k/3-1} n^{k/3-1} and N-z' divisible by n^{k/3-2}. Continuing in the same fashion, we obtain that there are α_i nodes matched to nodes from I_i, making the total cost -M^7(N-z) as claimed.

We assume without loss of generality that z ≥ z'. Then, an optimal solution must match d'_{z'-1} to b'_{x_{z'-1}}, d'_{z'-2} to b'_{x_{z'-2}}, …, and d'_1 to b'_{x_1}, for some z ≥ x_1 > … > x_{z'-1} ≥ 1, as otherwise its cost is larger than -M^8·2-M^7(2N-z-z')-M^7·2(z'-1). Rewriting the cost we obtain -M^8·2-M^7(2N-2-z+z'), so recalling our assumption z ≥ z' we see that in fact z=z', as otherwise the cost is larger than -M^8·2-M^7·2(N-1).

We are now ready to state properties of the remaining micro structures. Let (T_1,T_2) denote the cost of matching two trees T_1 and T_2. Then, we require that:

* (A'_i,D_{z'}) = -M^6 - M^3(N-i) - W(i,z') for every i=1,2,…,N and z'=1,2,…,N,
* (B_z,C'_j) = -M^6 - M^3(N-j) - W(z,j) for every z=1,2,…,N and j=1,2,…,N,
* (A_i,C_j) = -M^2 - W(j,i) + W(j-1,i-1) for every i=2,3,…,N and j=2,3,…,N,
* (A_i,C_1) = -M^5 - M^3(i-1) - W(1,i) for every i=1,2,…,N,
* (A_1,C_j) = -M^5 - M^3(j-1) - W(j,1) for every j=1,2,…,N.

The labels of the nodes in the micro structures should be partitioned into disjoint subsets corresponding to the following micro structures:

* {A'_1,A'_2,…,A'_N,D_1,D_2,…,D_N},
* {B_1,B_2,…,B_N,C'_1,C'_2,…,C'_N},
* {A_1,A_2,…,A_N,C_1,C_2,…,C_N},

so that two nodes can be matched only if their labels belong to the same subset. The cost of matching any node of A'_i, D_{z'}, B_z, C'_j should be at least -M^6. The cost of matching any node of A_i, C_j should be at least -M^2, except that the root of A_i (C_j) can be matched to the root of C_1 (A_1) with cost larger than -M^5-M but at most -M^5, and, for any non-root node of A_i (C_j) and for any non-root node of C_1 (A_1), the cost of matching is larger than -M^4. Finally, every A_i and C_j should consist of O(n^2) nodes. Now we can show that, assuming these properties, any optimal solution has a specific structure.

For sufficiently large M, the total cost of an optimal matching is

  -M^8·2 - M^7·2(N-1) - M^6·2 - M^5 - M^3·2N + M^2 - max_{i,j,z}{W(i,z) + W(z,j) + W(j,i)}.

Consider i,j,z maximizing W(i,z) + W(z,j) + W(j,i). We may assume that i ≥ j.
Then, it is possible to choose the following matching:

* b_z to c'_j with cost -M^8,
* some nodes from the copy of I being the left child of c'_j to some spine nodes below b_z, with total cost -M^7(N-z),
* a'_i to d_z with cost -M^8,
* some nodes from the copy of I being the right child of a'_i to some spine nodes below d_z, with total cost -M^7(N-z),
* b'_1 to d'_{z-1}, b'_2 to d'_{z-2}, …, b'_{z-1} to d'_1 with cost -M^7·2 each,
* a_i to c_j, a_{i-1} to c_{j-1}, …, a_{i-j+1} to c_1 with cost -M^3·2+M^2 each,
* A'_i to D_z with cost -M^6-M^3(N-i)-W(i,z),
* B_z to C'_j with cost -M^6-M^3(N-j)-W(z,j),
* A_i to C_j, A_{i-1} to C_{j-1}, …, A_{i-j+2} to C_2 with costs -M^2-W(j,i)+W(j-1,i-1), -M^2-W(j-1,i-1)+W(j-2,i-2), …, -M^2-W(2,i-j+2)+W(1,i-j+1),
* A_{i-j+1} to C_1 with cost -M^5-M^3(i-j)-W(1,i-j+1).

Summing up and telescoping, the total cost is

  -M^8 - M^7(N-z)
  -M^8 - M^7(N-z)
  -M^7·2(z-1)
  -M^3·2j + M^2·j
  -M^6 - M^3(N-i) - W(i,z)
  -M^6 - M^3(N-j) - W(z,j)
  -M^2(j-1) - W(j,i) - M^5 - M^3(i-j),

which equals -M^8·2 - M^7·2(N-1) - M^6·2 - M^5 - M^3·2N + M^2 - W(i,z) - W(z,j) - W(j,i).

For the other direction, we need to argue that every solution has cost at least -M^8·2 - M^7·2(N-1) - M^6·2 - M^5 - M^3·2N + M^2 - max_{i,j,z}{W(i,z) + W(z,j) + W(j,i)}. We start with invoking Lemma <ref> and analyse the remaining small micro structures. Due to the leaves b'_1,…,b'_{z-1},d'_1,…,d'_{z-1} being already matched, no node from B_1,…,B_{z-1},D_1,…,D_{z-1} can be matched (as they can in general only be matched to A'_*'s and C'_*'s). Then, due to b''_z and d''_z being already matched (or z=N), no node from B_{z+1},…,B_N,D_{z+1},…,D_N can be matched, and nodes from B_z or D_z can only be matched to nodes from C'_j or A'_i, respectively. The cost incurred by all such nodes is (A'_i,D_z)+(B_z,C'_j), making the total cost -M^8·2-M^7·2(N-1)-M^6·2-M^3(2N-i-j)-W(i,z)-W(z,j). It remains to analyse the contribution of all spine nodes a_1,…,a_N,c_1,…,c_N and of the nodes from the micro structures A_1,…,A_N,C_1,…,C_N.

Consider the micro structures C_1 and A_1. Matching their roots to the roots of some A_{i'} and C_{j'}, respectively, contributes cost at most -M^5, which is much smaller than the cost of matching the remaining nodes. Furthermore, it is not possible to match both the root of C_1 to the root of some A_{i'} and the root of A_1 to the root of some C_{j'} at the same time, unless the root of A_1 is matched to the root of C_1. Therefore, an optimal solution matches exactly one of them or both to each other; say we match the root of C_1 to the root of some A_{i'}, thus adding (A_{i'},C_1) to the total cost. Due to a'_i being matched to d_z, i' ≤ i holds. Now, unless i'=1, no node from A_1 can be matched to a node from C_{j'}, so the cost of matching any a_{i'} to c_{j'} is now much smaller than the cost of matching nodes in the remaining micro structures (for each such node, the cost is at least -M^2, and there are at most O(n^2) of them in a single micro structure, so the total cost contributed by a single micro structure is larger than -M^3 for M large enough) and, by Lemma <ref>, only the nodes a_1,…,a_i,c_1,…,c_j can be matched, so an optimal solution matches as many such pairs as possible. Due to the root of C_1 being matched to the root of A_{i'}, only the nodes a_{i'},a_{i'+1},…,a_i and c_1,…,c_j can be matched, so there are min(i-i'+1,j) such matched pairs.
If i-i'+1 < j and i' > 1 then C_1 can be matched with A_{i'-1} instead of A_{i'}, which allows for an additional pair and decreases the total cost (because matching a pair (a_*,c_*) adds -M^3·2 to the cost, while decreasing i' by one adds M^3 to the cost (A_{i'},C_1), up to lower order terms). If i-i'+1 < j and i'=1 then A_1 can be matched with C_2 instead of C_1, while keeping the number of matched pairs intact, to decrease the total cost. So i-i'+1 ≥ j (implying i ≥ j, which is due to our initial assumption that the root of C_1 is matched to the root of some A_{i'}). Then, if i' < i-j+1, C_1 can be matched with A_{i'+1} instead of A_{i'} without changing the number of matched pairs, to decrease the total cost. Thus, i'=i-j+1 and a_i is matched to c_j, a_{i-1} to c_{j-1}, …, and a_{i-j+1} to c_1. Then nodes from A_i can only be matched to nodes from C_j, nodes from A_{i-1} only to nodes from C_{j-1}, and so on. By the same calculations as in the previous paragraph, the total cost is therefore -M^8·2 - M^7·2(N-1) - M^6·2 - M^5 - M^3·2N + M^2 - max_{i,j,z}{W(i,z) + W(z,j) + W(j,i)}.

To complete the proof we need to design the remaining micro structures. We start with describing some preliminary gadgets that will later be appropriately composed to obtain the micro structures A_i,A'_i,B_z,C_j,C'_j,D_{z'} with the required properties. Each such gadget consists of two trees, called left and right, and we are able to exactly calculate the cost of matching them. The main difficulty here is that we need to keep the size of the alphabet small, so for instance we are not able to create a distinct label for every node of the original graph. At this point it is also important to note that we can assume M=n^{O(ck)}, i.e., there is a constant d = O(ck) such that all weights constructed above have absolute value less than n^d.

Decrease gadget D(x). For any x∈{0,…,n^d-1}, the edit distance of the left and right tree of D(x) is -x, and furthermore the right tree does not depend on the value of x. This is obtained by representing x in base n as x=∑_{i=0}^{d-1} α_i n^i. The left tree is a path composed of d segments, the i-th segment consisting of α_i nodes. The right tree is a path composed of d segments, each consisting of n-1 nodes. Nodes from the i-th segment of the left tree can be matched with nodes from the i-th segment of the right tree with cost -n^i, so the total cost is -x; see Figure <ref> (left). We reuse the same set of distinct labels in every decrease gadget of the same type, hence we need only O(d) distinct labels in total. Furthermore, note that the cost of matching the left tree of D(x) with any tree is at least -x, and the cost of matching any node of D(x) is -n^i for some i∈{0,1,…,d-1}.

Equality gadget E(u,v,c_=). For any u,v∈{1,…,n} and c_=∈{0,…,n^d-1}, the edit distance of the left and right tree of E(u,v,c_=) is -c_=·n if u=v, and at least -c_=·n+c_= otherwise. Also, the left tree does not depend on v and the right tree does not depend on u. The left tree is a path composed of a segment of length u and a segment of length n-u. The right tree is a path composed of a segment of length v and a segment of length n-v. Nodes from the first segment of the left tree can be matched with nodes from the first segment of the right tree with cost -c_=, and similarly for the second segments. Then, if u=v we can match all nodes in both trees, so the total cost is -c_=·n. Otherwise, we can match at most n-1 nodes, making the total cost at least -c_=·n+c_=; see Figure <ref> (right).
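A small sketch (ours, with illustrative function names) of the segment bookkeeping behind the two gadgets just described:

```python
def D_segments(x, n, d):
    """Decrease gadget D(x): the i-th segment of the left tree has
    alpha_i nodes, where x = sum_i alpha_i * n^i in base n (so x < n^d);
    the right tree always has d segments of n-1 nodes each. Matching
    the whole i-th segment at -n^i per node gives total cost -x."""
    alphas = [(x // n ** i) % n for i in range(d)]
    assert sum(a * n ** i for i, a in enumerate(alphas)) == x
    return alphas, [n - 1] * d

def E_cost(u, v, n, c_eq):
    """Equality gadget E(u,v,c_eq): left tree has segments (u, n-u),
    right tree (v, n-v); first segments only match first segments and
    second only second, at cost -c_eq per matched pair of nodes."""
    matched = min(u, v) + min(n - u, n - v)  # equals n iff u == v
    return -c_eq * matched
```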
Furthermore, note that the total cost of matching the left tree of E(u,v,c_=) with any tree is at least -c_=·n, and the cost of matching any node of E(u,v,c_=) is -c_=.

Connection gadget C(i,j,M). For any i,j∈{1,…,N} and sufficiently large M∈{0,…,n^d-1}, the edit distance of the left and right tree of C(i,j,M) is -M-W(i,j). The left tree does not depend on j and the right tree does not depend on i. Let {u_1,…,u_{k/3}} and {v_1,…,v_{k/3}} be the k/3-cliques corresponding to i and j, respectively, where u_1<u_2<…<u_{k/3} and v_1<v_2<…<v_{k/3}. Recall that W(i,j) denotes the total weight of all edges connecting two nodes in the i-th clique or a node in the i-th clique with a node in the j-th clique, where we assume that w(u,u)=0.

We construct the gadget C(i,j,M) as follows. The root of the left tree has degree 1+k/3 and the root of the right tree has degree 1+n. Their rightmost children correspond to the root of the left and the right trees of D(∑_{x<y} w(u_x,u_y)), respectively. Every other child of the left root can be matched with every other child of the right root with cost -M_2 (we fix M_1 and M_2 later). Intuitively, we would like the x-th child of the left root to be matched with the u_x-th child of the right root, and then contribute -∑_y w(u_x,v_y) to the total cost, so that summing up over x=1,2,…,k/3 we obtain the desired sum. To this end, we attach the left tree of E(u_x,·,M_1) and the right tree of D(·) to the x-th child of the left root. Similarly, we attach the right tree of E(·,t,M_1) and the left tree of D(∑_y w(t,v_y)) to the t-th child of the right root. Here we use · to emphasise that a particular tree does not depend on the particular value of the parameter. All decrease gadgets are of the same type. See Figure <ref>.

We can clearly construct a solution with total cost -M_2·k/3-M_1·n·k/3-W(i,j) (because we have enumerated the clique corresponding to i so that u_1<u_2<…<u_{k/3}). We claim that, for appropriately chosen M_1 and M_2, no better solution is possible. Let W=∑_{u,v} w(u,v). We fix M_1=W·(k/3+1). This is enough to guarantee that the total cost contributed by nodes in all decrease gadgets is at least -M_1. The total cost contributed by nodes in all equality gadgets is at least -M_1·n·k/3. Consequently, setting M_2=M_1·n·k/3+M_1 guarantees that any optimal solution must match all children of the left root, so in fact, for every x=1,2,…,k/3, we must match the x-th child of the left root to some child of the right root. Because matching the left tree of any decrease gadget contributes at least -W to the total cost, by the choice of M_1 an optimal solution in fact must match the x-th child of the left root with the u_x-th child of the right root, as otherwise we lose at least M_1 due to the corresponding equality gadget, which cannot be compensated by matching its accompanying decrease gadget. Finally, the corresponding decrease gadget adds -∑_y w(u_x,v_y) to the total cost. Therefore, as long as M ≥ M_2·k/3+M_1·n·k/3, the total cost is indeed -M-W(i,j) after choosing the cost of matching the roots to be -M+M_2·k/3+M_1·n·k/3. For any node in a decrease gadget, the cost of matching is at least -W; for any node in an equality gadget, the cost of matching is -M_1; and finally the cost of matching the children of the roots is -M_2, so the cost of matching any node of C(i,j,M) is at least -M.
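A sketch (ours) of the constant choices inside the connection gadget, following the inequalities above:

```python
def connection_gadget_constants(n, k, W_total):
    """Constants used inside C(i,j,M): W_total is W = sum_{u,v} w(u,v);
    M_1 makes a mismatched equality gadget cost more than its decrease
    gadget can compensate, and M_2 forces all k/3 non-rightmost children
    of the left root to be matched."""
    M1 = W_total * (k // 3 + 1)
    M2 = M1 * n * (k // 3) + M1
    # any M >= M2*(k/3) + M1*n*(k/3) works; the roots are then matched
    # with cost -M + M2*(k/3) + M1*n*(k/3), giving total -M - W(i,j)
    M_min = M2 * (k // 3) + M1 * n * (k // 3)
    return M1, M2, M_min
```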
For the correctness of the construction it is enough that M is at least

  M_2·k/3+M_1·n·k/3 = M_1((nk/3+1)k/3+nk/3) = W(k/3+1)·k/3·(nk/3+1+n) = W(k/3+1)·k/3·(n(k/3+1)+1) ≤ W·n(k/3+1)^3 = n^{O(ck)}.

Micro structures A'_i,D_{z'},B_z,C'_j. We only explain how to construct A'_i and D_{z'}, for any i=1,2,…,N and z'=1,2,…,N, as the construction of B_z and C'_j is symmetric. Recall that we require (A'_i,D_{z'})=-M^6-M^3(N-i)-W(i,z'), and for every node in A'_i and D_{z'} the cost of matching should be at least -M^6. A'_i consists of a root to which we attach the left tree of D(M^6+M^3(N-i)-M) and the left tree of C(i,·,M), while D_{z'} consists of a root to which we attach the right tree of D(·) and the right tree of C(·,z',M). All decrease gadgets attached as the left children of A'_i and D_{z'} are of the same type, and all decrease gadgets used inside the connection gadgets attached as the right children are also of the same but different type. This guarantees that the total cost of matching A'_i to D_{z'} is simply -M^6-M^3(N-i)+M-M-W(i,z')=-M^6-M^3(N-i)-W(i,z'). For sufficiently large M ≥ W·n(k/3+1)^3, the cost of matching any node in D(M^6+M^3(N-i)-M) is at least -M^6 and the cost of matching any node in C(i,j,M) is at least -M.

Micro structures A_i,C_j. Here the situation is a bit more complex, as we simultaneously require that (A_i,C_j)=-M^2-W(j,i)+W(j-1,i-1) for every i=2,3,…,N and j=2,3,…,N, and that (A_i,C_1)=-M^5-M^3(i-1)-W(1,i) and (A_1,C_j)=-M^5-M^3(j-1)-W(j,1) for every i=1,2,…,N and j=1,2,…,N. We must also make sure that the cost of matching a node of A_i to a node of C_j is at least -M^2, except that the root of A_i (C_j) can be matched to the root of C_1 (A_1) with cost larger than -M^5-M but at most -M^5, and, for any non-root node of A_i (C_j) and for any non-root node of C_1 (A_1), the cost of matching is larger than -M^4.

For every i>1 (j>1), A_i (C_j) consists of two subtrees, called left and right, attached to the common root, while A_1 (C_1) consists of a single subtree connected to a root. For every i>1 (j>1), the left subtree of A_i (the right subtree of C_j) consists of a root with two subtrees, called left-left and left-right (right-left and right-right). For every i>1, nodes of the right subtree of A_i can only be matched to nodes of C_1, and nodes of the left subtree of A_i can only be matched to nodes of the right subtree of C_j for any j>1. For every j>1, nodes of the left subtree of C_j can only be matched to nodes of A_1, and nodes of the right subtree of C_j can only be matched to nodes of the left subtree of A_i for any i>1. Nodes of A_1 can be matched to nodes of the left subtree of C_j, for any j>1. Nodes of C_1 can be matched to nodes of the right subtree of A_i, for any i>1. Additionally, the root of A_1 can be matched to the root of C_1 with cost -M^5-W(1,1) > -M^5-M, and for any i>1 (j>1), the root of A_i (C_j) can be matched to the root of C_1 (A_1) with cost -M^5. The subtrees are constructed as follows:

* the right subtree of A_i is the left tree of D(M^3(i-1)+W(1,i)),
* the only subtree of A_1 is the right tree of D(·),
* the left subtree of C_j is the left tree of D(M^3(j-1)+W(j,1)),
* the only subtree of C_1 is the right tree of D(·).

It remains to fully define the left subtree of every A_i and the right subtree of every C_j, for i,j>1. Recall that the goal is to ensure (A_i,C_j)=-M^2-W(j,i)+W(j-1,i-1). We define a new n-node graph with weight function w'(u,v)=M-w(u,v) for any u≠v (for sufficiently large M, the new weights are positive).
Then, let C'(i,j,M) denote the connection gadget C(i,j,M) constructed for the new graph. Nodes of the left-left (left-right) subtree of A_i can only be matched to nodes of the right-left (right-right) subtree of C_j. The subtrees are constructed as follows:

* the left-left subtree of A_i is the left tree of C(i,·,M·(k/3)^2),
* the right-left subtree of C_j is the right tree of C(·,j,M·(k/3)^2),
* the left-right subtree of A_i is the left tree of C'(i-1,·,M^2),
* the right-right subtree of C_j is the right tree of C'(·,j-1,M^2).

See Figure <ref>. For the construction of C(i,j,M·(k/3)^2) and C'(i-1,j-1,M^2) to be correct we need that M·(k/3)^2 ≥ W·n(k/3+1)^3 and M^2 ≥ M·n^2·n(k/3+1)^3, respectively, which holds for sufficiently large M.

Now we calculate (A_i,C_j). C'(i-1,j-1,M^2) contributes -M^2 minus the total cost of edges connecting two subsets of k/3 nodes in the new graph. As the weights in the new graph are defined as w'(u,v)=M-w(u,v), this is exactly -M^2-(M·(k/3)^2-W(i-1,j-1)). C(i,j,M·(k/3)^2) contributes -M·(k/3)^2-W(i,j), so (A_i,C_j)=-M·(k/3)^2-W(i,j)-M^2-(M·(k/3)^2-W(i-1,j-1))=-M^2-W(i,j)+W(i-1,j-1) as required.

It remains to bound the cost of matching nodes. Nodes in the right subtree of A_i (the left subtree of C_j) can be matched only to nodes of C_1 (A_1), with cost at least -M^3·n > -M^4, except that the roots can be matched with cost -M^5. The cost of matching a node of A_i to a node of C_j, for i,j>1, is either at least -M·(k/3)^2 (for the nodes of C(i,j,M·(k/3)^2)) or at least -M^2 (for the nodes of C'(i-1,j-1,M^2)), so for sufficiently large M it is at least -M^2.

Wrapping up. We have shown how to construct, given a complete undirected n-node graph G, two trees such that the weight of the max-weight k-clique in G can be extracted from the cost of an optimal matching (and, as mentioned in the beginning of Section <ref>, by a simple transformation this is equal to the edit distance). To complete the proof of Lemma <ref>, we need to bound the size of both trees and also the size of the alphabet used to label their nodes.

Initially, each tree consists of 2N original spine nodes, where N ≤ n^{k/3}, 2N leaf nodes, and N additional spine nodes. Then, we attach the appropriate micro structures to the original spine nodes and leaf nodes. The micro structures are A'_i,I,C'_j,A_i,B_z,C_j,D_{z'}. Every copy of I consists of k/3·n nodes. To analyse the size of the remaining micro structures, first note that if x∈{0,…,n^d} then the decrease gadget D(x) consists of O(d·n) nodes. The equality gadget always consists of O(n) nodes. Finally, the connection gadget C(·,·,M) with M∈{0,…,n^d} consists of O(n(n+d·n)+d·n)=O(d·n^2) nodes.

Let M=n^d with d to be specified later. Now, the size of the micro structures can be bounded as follows: A'_i and D_{z'} (and also B_z and C'_j) consist of O(6d·n+d·n^2)=O(d·n^2) nodes. The right subtree of A_i (and the left subtree of C_j) consists of O(3d·n) nodes, while the left subtree of A_i (and the right subtree of C_j) consists of O(k^2d·n^2+2d·n^2)=O(k^2d·n^2) nodes. Thus, the total size of all micro structures is O(N·k^2d·n^2). It remains to bound M. Recall that we require M ≥ W·n(k/3+1)^3, M·(k/3)^2 ≥ W·n(k/3+1)^3 and M ≥ n^3(k/3+1)^3, where W ≤ n^2·n^{O(ck)}=n^{O(ck)}. Hence, it is sufficient that M ≥ 8W·n^3k^3. Setting d=Θ(ck) is therefore enough. The size of the whole instance is thus O(n^{k/3+2}·ck)=O(n^{k/3+2}).

We also have to bound the size of the alphabet. We need k/3 distinct labels for the nodes of I.
We need O(d) distinct labels for the nodes of all decrease gadgets of the same type. There is a constant number of types, and all other nodes require only a constant number of distinct labels (irrespective of c and k), so the total size of the alphabet is O(ck)=O(1).

§ ALGORITHM FOR CATERPILLARS ON SMALL ALPHABET

In this section, we show that the hard instances of TED from Section <ref> can be solved in time O(n^2|Σ|^2 log n), where n is the size of the trees and Σ is the alphabet. Recall that in such an instance we are given two trees F and G, both consisting of a single path (called the spine) of length O(n) with a single leaf pending from every node, where all these leaves are to the right of the path in F and to the left of the path in G (see Figure <ref>). In the following we use the same notation as in Lemma <ref>.

At a high level, we want to guess the rootmost non-spine node in the left tree, f'_{i_{p+1}}, and the rootmost non-spine node in the right tree, g'_{j_{p+1}}. The optimal matching of spine nodes above these non-spine nodes can be precomputed in O(n^2) total time with a simple dynamic programming algorithm. It might be tempting to say the same about the situation below, but this is much more complicated due to the fact that leaf nodes in this part are matched in reversed order. To overcome this difficulty, we need the following tool. For strings s[1..n] and t[1..m] over alphabet Σ and matching cost (c,d) for any two letters c,d∈Σ, we define the optimal matching of s and the reverse of t as

  min { ∑_{ℓ=1}^{k} (s[i_ℓ],t[j_ℓ]) : k ≥ 0, 1 ≤ i_1 < … < i_k ≤ n, 1 ≤ j_k < … < j_1 ≤ m }.

Given two strings s[1..n], t[1..n], in O(n^2 log n) total time we can calculate, for every i and j, the optimal matching of s[1..i] and the reverse of t[1..j].

We construct an (n+1)×(n+1) grid graph on nodes v_{i,j}, where i,j=0,1,…,n, as follows. For every i,j, we create a directed edge from v_{i,j} to v_{i+1,j} and to v_{i,j+1} with length zero (whenever the endpoint exists). Also, we create a directed edge from v_{i,j} to v_{i+1,j+1} with length (s[i],t[n+1-j]). Then, paths from v_{1,n+1-j} to v_{i,n+1} are in one-to-one correspondence with matchings of s[1..i] to the reverse of t[1..j]. Therefore, the cheapest such path corresponds to the optimal matching. The grid is a planar directed graph, and all starting nodes v_{1,n+1-j} lie on the external face, so we can use the multiple-source shortest paths algorithm of Klein <cit.> to compute, in O(n^2 log n) time, a representation of shortest paths from all starting nodes v_{1,n+1-j} to all nodes of the grid. [In the presence of negative-length edges, Klein's algorithm requires an initial shortest paths tree from some node on the external face to all other nodes. In our case, computing this initial shortest paths tree can easily be done in O(n^2) time, as our graph is a directed acyclic graph.] This representation can then be queried in O(log n) time to extract the length of any path from v_{1,n+1-j} to v_{i,n+1}. Thus, the total time is O(n^2 log n).

To see how Lemma <ref> can be helpful, consider the (simpler) case when there are no additional spine nodes f_{i_{p+q+2}} and g_{j_{p+q+2}}. We construct two strings s and t by writing down the labels of the leaf nodes f'_n,f'_{n-1},…,f'_1 and g'_n,g'_{n-1},…,g'_1, respectively, and preprocess them using Lemma <ref>. Then, to find the optimal matching we guess i_{p+2} and j_{p+2}. As explained above, the optimal matching of spine nodes above f'_{i_{p+2}} and g'_{j_{p+2}} can be precomputed in O(n^2) time in advance.
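As an aside, the reverse-matching cost defined in the lemma can be computed for a single pair of strings by a standard O(nm) dynamic program; the grid-plus-Klein construction above is exactly what yields all prefix pairs within O(n^2 log n) overall. A minimal sketch (ours, with illustrative names):

```python
def reverse_matching_cost(s, t, cost):
    """Minimum total cost of matching s[0..n-1] against the reverse of
    t[0..m-1]: matched positions i_1 < ... < i_k in s pair with
    j_1 > ... > j_k in t. cost(a, b) may be negative; the empty
    matching has cost 0 (so only beneficial pairs are taken)."""
    n, m = len(s), len(t)
    # dp[i][j]: best cost using the prefix s[:i] and the suffix t[j:]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(m - 1, -1, -1):
            dp[i][j] = min(dp[i - 1][j],        # skip s[i-1]
                           dp[i][j + 1],        # skip t[j]
                           dp[i - 1][j + 1] + cost(s[i - 1], t[j]))
    return dp[n][0]
```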
Then, we need to match some of the leaf nodes f'_{i_{p+2}},f'_{i_{p+2}+1},f'_{i_{p+2}+2},… to some of the leaf nodes g'_{j_{p+2}},g'_{j_{p+2}+1},g'_{j_{p+2}+2},… in the reversed order. This exactly corresponds to matching s[1..n+1-i_{p+2}] to the reverse of t[1..n+1-j_{p+2}] and thus is also precomputed. Iterating over all possible i_{p+2} and j_{p+2} gives us the optimal matching in O(n^2 log n) total time.

Now consider the general case. We assume that both optional spine nodes f_{i_{p+q+2}} and g_{j_{p+q+2}} exist; if only one of them is present, the algorithm is very similar. As in the simpler case, we iterate over all possible i_{p+1} and j_{p+1}. The natural next step would be to iterate over all possible i_{p+q+2} and j_{p+q+2}, but this is too expensive. However, because no spine nor leaf nodes below f_{i_{p+q+2}} (or g_{j_{p+q+2}}) are matched, we can as well replace f_{i_{p+q+2}} with the lowest spine node with the same label (and similarly for g_{j_{p+q+2}}). Thus, instead of guessing i_{p+q+2} we can guess the label of f_{i_{p+q+2}} and choose the lowest spine node with such a label (and similarly for j_{p+q+2}).

Now we retrieve the precomputed optimal matching of spine nodes above f'_{i_{p+1}} and g'_{j_{p+1}}. Then we need to find the optimal matching of the leaf nodes f'_{i_{p+1}+1},f'_{i_{p+1}+2},…,f'_{i_{p+q+2}-1} and g'_{j_{p+1}+1},g'_{j_{p+1}+2},…,g'_{j_{p+q+2}-1}. This can be precomputed in O(n^2|Σ|^2 log n) time with Lemma <ref>. Indeed, there are only |Σ| possibilities for i_{p+q+2}-1 and also |Σ| possibilities for j_{p+q+2}-1, as both of them are defined by the lowest occurrence of a label among the spine nodes of the left and the right tree, respectively. For each such combination, we construct two strings s and t by writing down the labels of the leaf nodes above f_{i_{p+q+2}} and g_{j_{p+q+2}} in the bottom-up order and preprocess them in O(n^2 log n) time. This allows us to retrieve the optimal matching of the leaf nodes, and then we only have to add (f'_{i_{p+1}},g_{j_{p+q+2}}) and (f_{i_{p+q+2}},g'_{j_{p+1}}) to obtain the total cost. Thus, after O(n^2|Σ|^2 log n) preprocessing, we can find the optimal matching by iterating over n^2|Σ|^2 possibilities and checking each of them in constant time.

[YRICALP] A. Abboud. Hardness for easy problems. In YR-ICALP Satellite Workshop of ICALP, 2014.
[AmirIsomorphism] A. Abboud, A. Backurs, T. D. Hansen, V. V. Williams, and O. Zamir. Subtree isomorphism revisited. In SODA, pages 1256–1271, 2016.
[Clique] A. Abboud, A. Backurs, and V. V. Williams. If the current clique algorithms are optimal, so is Valiant's parser. In FOCS, pages 98–117, 2015.
[AbboudLCS] A. Abboud, A. Backurs, and V. V. Williams. Tight hardness results for LCS and other sequence similarity measures. In FOCS, pages 59–78, 2015.
[AbboudPlanar] A. Abboud and S. Dahlgaard. Popular conjectures as a barrier for dynamic planar graph algorithms. In FOCS, pages 476–486, 2016.
[AbboudGrandoniVassilevska] A. Abboud, F. Grandoni, and V. V. Williams. Subcubic equivalences between graph centrality problems, APSP and diameter. In SODA, pages 1681–1697, 2015.
[polylogshaved] A. Abboud, T. D. Hansen, V. V. Williams, and R. Williams. Simulating branching programs with edit distance and friends or: a polylog shaved is a lower bound made. In STOC, pages 375–388, 2016.
[AbboudLewi2013] A. Abboud and K. Lewi. Exact weight subgraphs and the k-SUM conjecture. In ICALP, pages 1–12, 2013.
[AbboudVassilevskaFOCS14] A. Abboud and V. V. Williams. Popular conjectures imply strong lower bounds for dynamic problems. In FOCS, pages 434–443, 2014.
[AbboudVassilevskaWeimann] A. Abboud, V. V. Williams, and O. Weimann. Consequences of faster alignment of sequences. In ICALP, pages 39–51, 2014.
[AmirVassilevskaYu] A. Abboud, V. V. Williams, and H. Yu. Matching triangles and basing hardness on an extremely popular conjecture. In STOC, pages 41–50, 2015.
[akutsu2] T. Akutsu, D. Fukagawa, and A. Takasu. Approximating tree edit distance through string edit distance. In ISAAC, volume 4288, pages 90–99, 2006.
[Alon:1997] N. Alon, Z. Galil, and O. Margalit. On the exponent of the all pairs shortest path problem. JCSS, 54(2):255–262, 1997.
[AlurDGDV] R. Alur, L. D'Antoni, S. Gulwani, D. Kini, and M. Viswanathan. Automated grading of DFA constructions. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13, pages 1976–1982, 2013.
[JumbledHardness] A. Amir, T. M. Chan, M. Lewenstein, and N. Lewenstein. On hardness of jumbled indexing. In ICALP, pages 114–125, 2014.
[ApostolicoBook] A. Apostolico and Z. Galil, editors. Pattern matching algorithms. Oxford University Press, Oxford, UK, 1997.
[AratsuHK10] T. Aratsu, K. Hirata, and T. Kuboyama. Approximating tree edit distance through string edit distance for binary tree codes. Fundam. Inform., 101(3):157–171, 2010.
[BackursDikkalaTzamos] A. Backurs, N. Dikkala, and C. Tzamos. Tight hardness results for maximum weight rectangles. In ICALP, pages 81:1–81:13, 2016.
[BackursIndyk] A. Backurs and P. Indyk. Edit distance cannot be computed in strongly subquadratic time (unless SETH is false). In STOC, pages 51–58, 2015.
[backurs2016regular] A. Backurs and P. Indyk. Which regular expression patterns are hard to match? In FOCS, pages 457–466, 2016.
[BackursTzamos] A. Backurs and C. Tzamos. Improving Viterbi is hard: Better runtimes imply faster clique algorithms. arXiv:1607.04229, 2016.
[BellandoK99] J. Bellando and R. Kothari. Region-based modeling and tree edit distance as a basis for gesture recognition. In Proceedings 10th International Conference on Image Analysis and Processing, pages 698–703, 1999.
[bellman1957dynamic] R. Bellman. Dynamic programming. Princeton University Press, 1957.
[Bille2005] P. Bille. A survey on tree edit distance and related problems. Theoretical Computer Science, 337(1-3):217–239, 2005.
[bringmann2014walking] K. Bringmann. Why walking the dog takes time: Frechet distance has no strongly subquadratic algorithms unless SETH fails. In FOCS, pages 661–670, 2014.
[bringmann2017near] K. Bringmann. A near-linear pseudopolynomial time algorithm for subset sum. In SODA, pages 1073–1084, 2017.
[RNA] K. Bringmann, F. Grandoni, B. Saha, and V. V. Williams. Truly sub-cubic algorithms for language edit distance and RNA-folding via fast bounded-difference min-plus product. In 57th FOCS, pages 375–384, 2016.
[BringmannGL16] K. Bringmann, A. Grønlund, and K. G. Larsen. A dichotomy for regular expression membership testing. CoRR, abs/1611.00918, 2016.
[Bringmann] K. Bringmann and M. Künnemann. Quadratic conditional lower bounds for string problems and dynamic time warping. In FOCS, pages 79–97, 2015.
[BKG03] P. Buneman, M. Grohe, and C. Koch. Path queries on compressed XML. In VLDB, pages 141–152, 2003.
[XML3] S. Chawathe. Comparing hierarchical data in external memory. In VLDB, pages 90–101, 1999.
[CIP09] R. Clifford. Matrix multiplication and pattern matching under Hamming norm. http://www.cs.bris.ac.uk/Research/Algorithms/events/BAD09/BAD09/Talks/BAD09-Hammingnotes.pdf.
[deberg_et_al] M. de Berg, K. Buchin, B. M. P. Jansen, and G. Woeginger. Fine-grained complexity analysis of two classic TSP variants. In ICALP, volume 55, pages 5:1–5:14, 2016.
[DMRW] E. Demaine, S. Mozes, B. Rossman, and O. Weimann. An optimal decomposition algorithm for tree edit distance. ACM Transactions on Algorithms, 6(1):1–19, 2009. Preliminary version in ICALP 2007.
[EG04] F. Eisenbrand and F. Grandoni. On the complexity of fixed parameter clique and dominating set. Theoretical Computer Science, 326(1-3):57–67, 2004.
[FerraginaFOCS2005] P. Ferragina, F. Luccio, G. Manzini, and S. Muthukrishnan. Compressing and indexing labeled trees, with applications. J. ACM, 57:1–33, 2009.
[Gus] D. Gusfield. Algorithms on strings, trees and sequences: computer science and computational biology. Cambridge University Press, 1997.
[Hoffmann82] C. M. Hoffmann and M. J. O'Donnell. Pattern matching in trees. J. ACM, 29(1):68–95, 1982.
[IvkinThesis] E. Ivkin. Comparison of tree edit distance algorithms. B.Sc. thesis, Charles University in Prague, 2012.
[Karp72] R. M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85–103, 1972.
[Klein] P. N. Klein. Computing the edit-distance between unrooted ordered trees. In ESA, pages 91–102, 1998.
[Klein05] P. N. Klein. Multiple-source shortest paths in planar graphs. In SODA, pages 146–155, 2005.
[Computervision] P. N. Klein, S. Tirthapura, D. Sharvit, and B. B. Kimia. A tree-edit-distance algorithm for comparing simple, closed shapes. In SODA, pages 696–704, 2000.
[knuth1971optimum] D. E. Knuth. Optimum binary search trees. Acta Informatica, 1(1):14–25, 1971.
[larsen2015hardness] K. G. Larsen, J. I. Munro, J. S. Nielsen, and S. V. Thankachan. On hardness of several string indexing problems. Theoretical Computer Science, 582:74–82, 2015.
[babai] Miscellaneous Authors. Queries and problems. SIGACT News, 16(3):38–47, 1984.
[NP85] J. Nešetřil and S. Poljak. On the complexity of the subgraph problem. Commentationes Math. Universitatis Carolinae, 026(2):415–419, 1985.
[PawlikA] M. Pawlik and N. Augsten. Efficient computation of the tree edit distance. ACM Trans. Database Syst., 40(1):3:1–3:40, Mar. 2015.
[Rico-JuanM03] J. R. Rico-Juan and L. Micó. Comparison of AESA and LAESA search algorithms using string and tree-edit-distances. Pattern Recognition Letters, 24(9-10):1417–1426, 2003.
[RodittyZwick] L. Roditty and U. Zwick. On dynamic shortest paths problems. Algorithmica, 61(2):389–401, 2011.
[Selkow] S. Selkow. The tree-to-tree editing problem. Information Processing Letters, 6(6):184–186, 1977.
[ShapiroZ90] B. A. Shapiro and K. Zhang. Comparing multiple RNA secondary structures using tree comparisons. Computer Applications in the Biosciences, 6(4):309–318, 1990.
[Shasha] D. Shasha and K. Zhang. Simple fast algorithms for the editing distance between trees and related problems. SIAM Journal on Computing, 18(6):1245–1262, 1989.
[Unit] D. Shasha and K. Zhang. Fast algorithms for the unit cost editing distance between trees. Journal of Algorithms, 11(4):581–621, 1990.
[Tai] K. Tai. The tree-to-tree correction problem. J. ACM, 26(3):422–433, 1979.
[ValienteBook] G. Valiente. Algorithms on Trees and Graphs. Springer-Verlag, 2002.
[StringED] R. A. Wagner and M. J. Fischer. The string-to-string correction problem. J. ACM, 21(1):168–173, 1974.
[Wat] M. Waterman. Introduction to computational biology: maps, sequences and genomes, Chapters 13, 14. Chapman and Hall, 1995.
[williams2005new] R. Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theor. Comput. Sci., 348(2-3):357–365, 2005.
[RyanWilliams] R. Williams. Faster all-pairs shortest paths via circuit complexity. In STOC, pages 664–673, 2014.
[VW10] V. V. Williams and R. Williams. Subcubic equivalences between path, matrix and triangle problems. In FOCS, pages 645–654, 2010.
[CountingWeightedSubgraphs] V. V. Williams and R. Williams. Finding, minimizing, and counting weighted subgraphs. SIAM J. Comput., 42(3):831–854, 2013.
[woeginger] G. J. Woeginger. Space and time complexity of exact algorithms: Some open problems. In IWPEC, pages 281–290, 2004.
[woeginger2008open] G. J. Woeginger. Open problems around exact algorithms. Discr. Appl. Math., 156(3):397–405, 2008.
[YaoDCC13] X. Yao, B. V. Durme, C. Callison-Burch, and P. Clark. Answer extraction as sequence tagging with tree edit distance. In HLT-NAACL 2013, pages 858–867, 2013.
[Constrained] K. Zhang. Algorithms for the constrained editing distance between ordered labeled trees and related problems. Pattern Recognition, 28(3):463–474, 1995.
"authors": [
"Karl Bringmann",
"Paweł Gawrychowski",
"Shay Mozes",
"Oren Weimann"
],
"categories": [
"cs.DS"
],
"primary_category": "cs.DS",
"published": "20170327054836",
"title": "Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can)"
} |
,[email protected] , [email protected]@tfai.vu.lt Institute of Theoretical Physics and Astronomy,Vilnius University, Saulėtekio al. 3, LT-10222 Vilnius, Lithuania ^20O(d,p)^21O transfer reactions are describedusingmomentum-space Faddeev-type equations for transition operators and including the vibrational excitation of the ^20O core.The available experimentalcross section data at 10.5 MeV/nucleon beam energy for the^21Oground state 5/2^+ and excited state 1/2^+are quite well reproduced by our calculations including the core excitation. Its effect can be roughly simulated reducing the single-particle cross section by the corresponding spectroscopic factor. Consequently, the extraction of the spectroscopic factors taking the ratio of experimental data and single-particle cross section at this energyis a reasonable procedure. However, at higher energies core-excitation effects are much more complicated and have no simple relation to spectroscopic factors. We found that core-excitation effects are qualitatively very different for reactions with the orbital angular momentum transfer ℓ=0 and ℓ=2, suppressing the cross sections for the former and enhancing for the latter, and changes the shape of the angular distribution in both cases. Furthermore, the core-excitation effect is a result of a complicated interplay between itscontributions of the two- and three-body nature.Three-body scattering core excitation transfer reactions spectroscopic factor 24.10.-i 21.45.-v 25.45.Hi 25.40.Hs§ INTRODUCTIONInteractions between nucleons (N) and composite nuclei (A) are usually modeled by two-body effective optical or binding potentials acting between structureless particles.This scheme works quite well for stable tightly bound nucleibut may become a poor approximation for exotic nuclei that nowadays are extensively studied both experimentally and theoretically. An improvement of the structureless nucleus model, at a first step, consists in explicitly considering also its lowest excited states (A^*), thereby accounting for the compositeness of the nucleus A in an approximate way. This extension has been proposed long ago <cit.> and applied to numerous studies of elastic and inelastic N+A scattering. However, the application of interaction modelsincluding the excitation of the involved nucleus, also called the core excitation,to three-body nuclear reactions, e.g., deuteron (d) stripping and pickup, is still a complicated task.First studies of (d,p) reactions demonstrating the importance of the core excitation <cit.>were based on two-body-like approaches such as the distorted-wave Born approximation (DWBA) and coupled-channels Born approximation (CCBA) that relied on deuteron-nucleus optical potentials. Only quite recently the three-body calculations have emerged that include the core excitation. Extensions of the DWBA<cit.>and continuum discretized coupled channels (CDCC)method <cit.> mostly focused on the breakup reactions, in particular,of11Be. The calculations for neutron transfer reactions10Be(d,p)11Be and 11Be(p,d)10Be were performed using rigorous Faddeev three-bodyscattering theory <cit.>in the form of Alt, Grassberger, and Sandhas (AGS) equations <cit.> for transition operators,solved in the extended Hilbert space <cit.>. The latter works demonstrated that in the deuteron stripping and pickup the core excitation effectcannot be simply simulated by the reduction of the cross sectionaccording to the respective spectroscopic factor (SF). 
It was found that extracting the SF from the ratio of experimental and theoretical transfer cross sections, as often used with adiabatic distorted wave approximation (ADWA) calculations <cit.>, may lead to a strong underestimation of the SF. Calculations of Refs. <cit.> employed the rotational model <cit.> for the excitation of the 10Be core; the most prominent core-excitation effects have been observed for the 10Be(d,p)11Be transfer to the ground state of 11Be(1/2^+), whose dominant component corresponds to an S-wave neutron coupled to the 10Be(0^+) ground state, i.e., the orbital angular momentum transfer for this reaction is ℓ=0. In contrast, for the ℓ=1 transfer leading to the excited state 11Be(1/2^-) the core-excitation effects have been less remarkable. It is therefore very important to clarify the systematics of the core-excitation effects in transfer reactions, investigating other types of excitation mechanisms and bound states. Furthermore, a deeper understanding may be gained by disentangling the effects of two- and three-body nature.

The study of 20O(d,p)21O transfer reactions intended in the present work leads to the desired goal and is interesting for several reasons. First, the 21O(5/2^+) ground state has a significant component of a D-wave neutron coupled to the 20O(0^+) ground state, thereby allowing the extension of the systematics from Refs. <cit.> to the D-wave neutron state and ℓ=2 transfer. Second, the lowest excitation of the 20O core, 2^+, has a vibrational character, giving the opportunity to investigate the vibrational model for the nucleon-core interaction <cit.> in the context of transfer reactions. Last but not least, there are experimental data for 20O(d,p)21O transfer reactions at 10.5 MeV/nucleon beam energy <cit.> that have not yet been analyzed with rigorous Faddeev-type calculations. In Sec. <ref> we shortly recall the three-body scattering equations with core excitation, and in Sec. <ref> we describe the employed nucleon-20O potentials. Results are presented in Sec. <ref>, and a summary is given in Sec. <ref>.

§ SOLUTION OF THREE-BODY SCATTERING EQUATIONS WITH CORE EXCITATION

The numerical technique for calculating deuteron-nucleus reactions with the inclusion of the core excitation is taken over from Refs. <cit.>, but further developments are needed to get insight into the separate core-excitation contributions of the two- and three-body nature. The method is based on the integral formulation of rigorous Faddeev-type three-body scattering theory for transition operators as proposed by Alt, Grassberger, and Sandhas <cit.>, but extended for the Hilbert space ℋ_g ⊕ ℋ_x whose sectors correspond to the core being in its ground (g) or excited (x) state. These sectors are coupled by the nucleon-core two-body potentials v_α^ji, where the superscripts j and i, being either g or x, label the internal states of the core, and the subscript α, being A, p, or n, labels the spectator particle in the odd-man-out notation. Consequently, the respective two-body transition operators

  T_α^ki = v_α^ki + ∑_{j=g,x} v_α^kj G_0^j T_α^ji

and three-body transition operators

  U_βα^ki = δ̅_βα δ_ki (G_0^i)^{-1} + ∑_{γ=A,p,n} ∑_{j=g,x} δ̅_βγ T_γ^kj G_0^j U_γα^ji

couple ℋ_g and ℋ_x as well. Here δ̅_βα = 1 - δ_βα and G_0^j = (E+i0-δ_jx Δm_A - K)^{-1} is the projection of the free resolvent into ℋ_j, with E, Δm_A, and K being the available energy in the center-of-mass (c.m.) frame, the core-excitation energy, and the kinetic energy operator, respectively.
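For orientation only, here is a schematic numerical sketch (ours, not the production code, which works in the momentum-space partial-wave representation with proper treatment of the kernel singularities) of the coupled-channel structure of the two-body equation above, with channel index 0 for the ground and 1 for the excited core state:

```python
import numpy as np

def solve_coupled_T(V, G0):
    """Solve T^{ki} = v^{ki} + sum_j v^{kj} G0^j T^{ji} as one block
    linear system (1 - V G0) T = V; V[k][j] and G0[j] are matrices on
    a schematic discretized momentum grid."""
    n = V[0][0].shape[0]
    Vb = np.block([[V[0][0], V[0][1]],
                   [V[1][0], V[1][1]]])
    Z = np.zeros((n, n), dtype=complex)
    G0b = np.block([[G0[0], Z],
                    [Z, G0[1]]])
    return np.linalg.solve(np.eye(2 * n) - Vb @ G0b, Vb)
```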
The amplitudes for deuteron stripping reactions A(d,p)B, with B denoting the (An) bound state, are given by the on-shell matrix elements

  ⟨Φ_p^g|U_pA^gg|Φ_A^g⟩ + ⟨Φ_p^x|U_pA^xg|Φ_A^g⟩,

since the final p+B channel state |Φ_p⟩ = |Φ_p^g⟩ + |Φ_p^x⟩ has components in both Hilbert sectors. The core-excitation effects can be separated into contributions of two- and three-body nature. The former consists in modifying T_α^gg through intermediate core excitations, i.e., through the terms of type v_α^gx G_0^x v_α^xg and so on in the iterated coupled-channel Lippmann-Schwinger equation (<ref>). The contribution of the three-body nature arises due to the nondiagonal components T_α^xg and T_α^gx that are responsible for the coupling of the two Hilbert sectors in Eq. (<ref>), i.e., T_β^gx δ̅_βα G_0^x T_α^xg and so on, yielding, in fact, an energy-dependent effective three-body force (E3BF). Lowest-order diagrams for both types are depicted in Fig. <ref>. We note a formal similarity between these contributions and the so-called dispersive and three-nucleon force effects arising in the description of the three-nucleon system with the Δ-isobar excitation <cit.>. Since the full core-excitation effect will be extracted from the solution of Eq. (<ref>), to get insight into the importance of the separate two- and three-body contributions it is enough to exclude one of them. It is most convenient to do so for the E3BF, whose exclusion can be achieved by setting T_γ^kj = δ_kg δ_jg T_γ^gg in Eq. (<ref>). This type of results will be labeled in the following as “no E3BF”.

Although the present work employs the potentials v_α^ji derived from the vibrational model <cit.>, the calculations proceed in the same way as with the rotational model potentials used in Refs. <cit.>. The AGS equations (<ref>) are solved numerically in the momentum-space partial-wave representation. Six sets of basis functions

  |p_α q_α (l_α {[L_α (s_β^i s_γ^i) S_α^i] j_α^i s_α^i} 𝒮_α^i) J M⟩

are employed, with (αβγ) = (Apn), (pnA), or (nAp), and i = g or x. Here p_α and q_α are the magnitudes of the Jacobi momenta for the configuration α(βγ), while L_α and l_α are the associated orbital angular momenta. Furthermore, s_A^i and s_p^i = s_n^i = 1/2 are the spins of the corresponding particles; among them only s_A^i depends on the Hilbert sector i, i.e., s_A^g = 0 and s_A^x = 2 in the considered case of the 20O nucleus with the ground and first excited states 0^+ and 2^+, respectively. All discrete angular momentum quantum numbers, via the intermediate angular momenta S_α^i, j_α^i, and 𝒮_α^i, are coupled to the total angular momentum J with the projection M. We note that the spin s_A^x = 2 implies roughly five times more basis states in ℋ_x as compared to ℋ_g, thereby increasing the demand on computer memory and time by a factor of 20 to 40. Including more states of the core, e.g., the second excited state 4^+, would be even significantly more demanding, and for this reason we restrict our present calculations to the inclusion of the 0^+ and 2^+ states of 20O. Well-converged results for the 20O(d,p)21O transfer reactions are obtained by including J ≤ 25 states with L_A ≤ 3, L_p ≤ 5, and L_n ≤ 10. A higher value of L_n is needed due to the Coulomb force present within the A+p pair, which is included via the screening and renormalization method <cit.>.
§ POTENTIALS

We consider the system of a proton, a neutron, and a 20O core with masses m_p = 0.99931 m_N, m_n = 1.00069 m_N, and m_A = 19.84153 m_N, given in units of m_N = (m_n + m_p)/2 = 938.919 MeV; the core-excitation energy is Δm_A = 1.684 MeV.

To the best of our knowledge, potentials specifically designed for the N+20O interaction including the core excitation are not available. The corresponding experimental data are scarce; we are aware of only two p+20O elastic and inelastic scattering measurements, at 30 <cit.> and 43 <cit.> MeV/nucleon beam energies. In these works the data have been analyzed using DWBA or coupled-channel calculations with global optical potentials, e.g., <cit.>. Extracted values of the quadrupole vibrational coupling parameter β_2 are 0.50 ± 0.04 <cit.> and 0.55 ± 0.06 <cit.>. We also base our calculations on global optical potentials but use more modern parametrizations, namely those of Koning-Delaroche (KD) <cit.> and Chapel Hill 89 (CH) <cit.>. These potentials were designed for A ≥ 24 and A ≥ 40 nuclei, respectively, but one may expect a reasonable extrapolation also to A = 20, especially for the KD potential. To include the core excitation, we extend these potentials for quadrupole vibrations <cit.> and modify them by the subtraction method of Ref. <cit.>, adding a nonlocal contribution. The terms up to second order in β_2, as given in Ref. <cit.>, are taken into account in our calculations. It turns out that such an approach reproduces the experimental data for the elastic and inelastic differential cross sections of Refs. <cit.> reasonably well using the same value β_2 = 0.5, as shown in Fig. <ref>, especially for the KD potential. To study the sensitivity to β_2, we also show CH predictions with β_2 = 0.55, which yield a better description of the inelastic cross section. The observed agreement encourages the application of these potentials to 20O(d,p)21O transfer reactions, not only for the p+20O but also for the n+20O pair, where no experimental scattering data are available.

An exception is the n+20O potential in the 5/2^+ and 1/2^+ partial waves, which must be real to support bound states with binding energies of 3.806 and 2.586 MeV, respectively. In addition, predictions of various shell models <cit.> for the SF's of these states are available, being around 0.33 to 0.34 for 5/2^+ and 0.81 to 0.83 for 1/2^+ <cit.>. We include this information in constraining the n+20O potentials. We start with the undeformed coordinate-space potential

  v_α(r) = -V_c f(r,R,a) + L^2 V_L f(r,R,a) + σ·L V_so (2/r) (d/dr) f(r,R,a),

where f(r,R,a) = [1+exp((r-R)/a)]^{-1} is the Woods-Saxon form factor, a = 0.65 fm, V_so = 6.0 MeV·fm^2, and R is taken from the real part of the optical potential acting in the other waves, i.e., R = 3.13 fm (3.17 fm) for the KD (CH) potentials. In addition to the standard central and spin-orbit terms, a phenomenological L^2 term is taken over from Ref. <cit.>. The core excitation is included by quadrupole vibrations of the central part in (<ref>) with β_2 = 0.5 or 0.55, as described by Tamura <cit.>. The potential strength parameters V_c and V_L are adjusted to reproduce the desired binding energies and SF's. The latter are chosen to be the middle values of several shell model predictions <cit.>, i.e., 0.34 for 5/2^+ and 0.82 for 1/2^+. Deeply-bound Pauli-forbidden states are projected out. The resulting potential parameters are collected in Tables <ref> and <ref>; parameter sets with β_2 = 0.0 correspond to single-particle models without core excitation that are used to isolate its effect.
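As an illustration, a minimal numerical sketch (ours, not the authors' code) of the undeformed potential above; the strength values are placeholders for the table entries, and the σ·L eigenvalue must be supplied per partial wave:

```python
import numpy as np

def f(r, R, a):
    """Woods-Saxon form factor f(r,R,a) = [1 + exp((r-R)/a)]^-1."""
    return 1.0 / (1.0 + np.exp((r - R) / a))

def df_dr(r, R, a):
    """Radial derivative of the Woods-Saxon form factor."""
    e = np.exp((r - R) / a)
    return -e / (a * (1.0 + e) ** 2)

def v_alpha(r, ell, sigma_dot_L, V_c, V_L, V_so=6.0, R=3.13, a=0.65):
    """Undeformed n+20O potential: central + L^2 + spin-orbit terms;
    ell*(ell+1) is the L^2 eigenvalue. Evaluate at r > 0 (fm),
    energies in MeV."""
    return (-V_c * f(r, R, a)
            + ell * (ell + 1) * V_L * f(r, R, a)
            + sigma_dot_L * V_so * (2.0 / r) * df_dr(r, R, a))
```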
§ RESULTS

Taking the p+20O and n+20O potentials from the previous section together with the high-precision charge-dependent (CD) Bonn n+p potential <cit.> as the dynamic input, we solve the AGS equations (<ref>) and calculate 20O(d,p)21O differential cross sections dσ/dΩ as functions of the c.m. scattering angle Θ_c.m.. We start with the 10.5 MeV/nucleon beam energy, corresponding to the deuteron beam energy E_d = 21 MeV, where the experimental data <cit.> are available. The results obtained without (β_2 = 0) and with (β_2 = 0.5) core excitation based on the KD and CH potentials are presented in Fig. <ref>. The core-excitation effect for the transfer to the 21O ground state 5/2^+ is very large. It strongly reduces the differential cross section, bringing it into good agreement with the experimental data. The sensitivity to the potential model is visible except at very small angles but remains smaller than the experimental error bars. To study the sensitivity to β_2 we also include CH-based predictions with β_2 = 0.55; they are almost indistinguishable from the corresponding β_2 = 0.5 results, indicating that the value of β_2 is not critical for transfer observables, provided that the other properties are fixed. The same conclusions regarding the sensitivity to β_2 and the potential apply also to the transfer to the 21O excited state 1/2^+. However, in this case the core-excitation effect is smaller, although it also reduces the differential cross section, bringing it closer to the data, except for a few points at larger angles. There is also some mismatch between the predicted and measured positions of the minimum. We note that for both reactions the KD predictions are slightly higher, possibly due to a larger elastic N+20O cross section.

Obviously, the reduction of the differential cross section due to the core excitation correlates with the reduction of the SF from unity to 0.34 and 0.82 for the ground and excited states, respectively. In naive reaction methods like DWBA or ADWA the dynamic core excitation is usually neglected, i.e., it is assumed that the bound state component |Φ_p^x⟩ takes no part in the reaction, and the core-excitation effect is a reduction of the single-particle differential cross section by the SF. However, this conjecture on factorization may be wrong, as was demonstrated by rigorous Faddeev-type calculations using the 10Be(d,p)11Be transfer to the ground state of 11Be(1/2^+) as an example <cit.>. We therefore investigate in Figs. <ref> and <ref> the validity of the factorization conjecture for 20O(d,p)21O reactions over a broader energy range. Having no more experimental data, we simply take an additional energy value larger by a factor of 3, i.e., E_d = 63 MeV. As the core-excitation effects for KD and CH turn out to be quite similar, we show only the KD results, which in general are closer to the experimental two- and three-body data. We multiply the KD single-particle β_2 = 0 differential cross sections by the respective SF of the model with the core excitation and compare with the KD (β_2 = 0.5) results fully including the core excitation. The difference between these two results, or the deviation of the ratio

  R_x = (dσ/dΩ)(β_2=0.5) / [SF · (dσ/dΩ)(β_2=0)]

from unity, indicates a violation of the factorization conjecture.

We start with the analysis of the excited state 1/2^+ in Fig. <ref>, where we expect some similarities with the 11Be(1/2^+) case <cit.>. At E_d = 21 MeV the two curves are close but, at least below the first minimum, differ by a roughly constant factor, i.e., the core-excitation effect is slightly, by about 6%, stronger than predicted by the factorization conjecture. With the SF of 0.82, the core excitation reduces the differential cross section at forward angles by a factor of 0.77, which is exactly the value of the SF extracted in Ref. <cit.> relying on the factorization conjecture.
Thus, the dynamical core excitation model well explains the stronger reduction of the cross section observed in Ref. <cit.> as compared to the factorization conjecture. The deviation between the two curves in Fig. <ref> increases with increasing energy, and their ratio becomes angle-dependent, thereby indicating that the factorization conjecture fails at higher energies. The reduction of the cross section at forward angles is significantly stronger than the SF, e.g., R_x = 0.59 at E_d = 63 MeV and Θ_c.m. = 0°. Such a behavior is indeed qualitatively consistent with the findings of Refs. <cit.> for 11Be(1/2^+) within the rotational model.

A similar study of the 20O(d,p)21O transfer to the 21O ground state 5/2^+ is presented in Fig. <ref>. At E_d = 21 MeV the two curves are again close, especially at forward angles. Thus, despite the fact that SF = 0.34 deviates significantly from unity, the differential cross section including the core excitation scales well with the SF, and at this energy the factorization conjecture is valid. However, the situation changes dramatically at higher energy, where the two curves deviate from each other in an angle-dependent way. We emphasize that at forward angles this deviation is in the opposite direction as compared to the excited state 1/2^+, e.g., R_x = 1.83 at E_d = 63 MeV and Θ_c.m. = 0°. Thus, at higher energies the factorization conjecture fails for the 21O ground state 5/2^+ as well, but quantitatively the core-excitation effect is very different as compared to the one for the excited state 1/2^+.

In Figs. <ref> and <ref> we also isolate the E3BF core-excitation effect, given as the difference between the solid and dash-dotted curves. Quite surprisingly, even at E_d = 21 MeV it turns out to be significant. Consequently, the core-excitation effect of the two-body nature must be significant as well, canceling the E3BF to a large extent, especially at E_d = 21 MeV, such that their sum reproduces the full core-excitation effect. We note that a substantial cancellation of the corresponding two- and three-body effects due to the Δ-isobar excitation was often observed also in nucleon-deuteron scattering <cit.>.

We also studied the sensitivity of the transfer cross sections to the neutron-proton tensor force and the D-state component in the deuteron. Replacing the CD Bonn potential in the ^3S_1-^3D_1 partial wave by a central one reproducing the deuteron binding and, roughly, the n-p ^3S_1 and ^3D_1 phase shifts, leads to small but visible changes (smaller than the KD - CH difference) in the cross sections. However, we do not consider such an n-p potential as realistic and therefore performed another test calculation with the realistic Argonne V18 potential <cit.>, which has a stronger tensor force and a larger deuteron D-state probability as compared to CD Bonn. In this case the differences were minor, so we conclude that uncertainties in a realistic n-p force do not affect the 20O(d,p)21O transfer cross sections.

Finally, we consider the deuteron pickup reaction 21O(p,d)20O. For the d+20O(0^+) final state it is exactly the time-reverse reaction of 20O(d,p)21O, with the cross sections (at the same c.m. energy) related by time reversal symmetry. In contrast, with the d+20O(2^+) final state it presents a new case that we study in Fig. <ref> at 60.36 MeV/nucleon beam energy. The initial excited state 21O(1/2^+) this time corresponds to the ℓ = 2 transfer, as the 20O(2^+) component is coupled with a D-wave neutron.
The core-excitation effect turns out to be qualitatively similar to another ℓ = 2 case, i.e., 20O(d,p)21O(5/2^+) shown in the bottom part of Fig. <ref>.

§ SUMMARY AND CONCLUSIONS

We analyzed 20O(d,p)21O transfer reactions taking into account the vibrational excitation of the 20O core. Calculations were performed using Faddeev-type equations for transition operators that were solved in the momentum-space partial-wave representation. Well converged results were obtained for several interaction models based on the vibrational extension of the KD and CH potentials. The only available experimental differential cross section data, for the transfer to the 21O ground state 5/2^+ and excited state 1/2^+ at 10.5 MeV/nucleon beam energy, are quite well described by our calculations including the core excitation. Some sensitivity to the underlying potential was observed, but the core-excitation effects turn out to be almost independent of it. The precise value of the quadrupole vibrational coupling β_2 also turns out to be irrelevant, provided that the spectroscopic factors, which we take from shell-model calculations, are fixed.

At this lowest considered energy we found that the core-excitation effect can be approximated to a good accuracy (6% for the 1/2^+ state and even better for the 5/2^+ state) by a simple reduction of the single-particle cross section according to the respective SF. Thus, the extraction of the SF through the ratio of experimental data and single-particle cross section as performed in Ref. <cit.> is a reasonable procedure. Nevertheless, our prediction of a slightly stronger reduction of the 1/2^+ cross section leads to an even better agreement between the shell-model SF and the experimental data.

The situation changes dramatically at higher energy, where the core-excitation effects are much more complicated than a mere reduction of the cross section according to the respective SF. Thus, in this regime one really needs to perform full calculations with the core excitation and should not rely on a single-particle cross section to extract the SF. For example, we found that at 31.5 MeV/nucleon beam energy the SF extracted in this naive way would be about 70% too small for the 1/2^+ state but 80% too large for the 5/2^+ state. This also demonstrates that core excitation acts very differently in the S- and D-wave neutron states. In the S-wave case the results are qualitatively consistent with previous findings for reactions involving 11Be(1/2^+), albeit based on the rotational model. Taking into account also the study of the 21O^*(p,d)20O(2^+) reaction, we are able to draw an important conclusion on a systematic effect of the quadrupole core excitation at higher energies: it substantially suppresses reactions with ℓ = 0 transfer but enhances those with ℓ = 2. The shape of the angular distribution of the differential cross section is changed in both cases. Of course, the quantitative size of these effects depends on the collision, binding, and excitation energies. Furthermore, the core-excitation effect is the result of a complicated interplay between its contributions of two- and three-body nature; including only the two-body effect through the modification of the potential is computationally simpler but not justified.

This work was supported by Lietuvos Mokslo Taryba (Research Council of Lithuania) under Contract No. MIP-094/2015. A.D. acknowledges also the hospitality of the Ruhr-Universität Bochum where a part of this work was performed.

References
[1] T. Tamura, Rev. Mod. Phys. 37 (1965) 679.
[2] R. J. Ascuitto, N. K. Glendenning, Phys. Rev. 181 (1969) 1396.
[3] N. K. Glendenning, R. S. Mackintosh, Nucl. Phys. A 168 (1971) 575.
[4] R. S. Mackintosh, Nucl. Phys. A 170 (1971) 353.
[5] R. Ascuitto, C. King, L. McVay, B. Sorensen, Nucl. Phys. A 226 (1974) 454.
[6] A. M. Moro, R. Crespo, Phys. Rev. C 85 (2012) 054613.
[7] A. Moro, J. A. Lay, Phys. Rev. Lett. 109 (2012) 232502.
[8] N. C. Summers, F. M. Nunes, Phys. Rev. C 76 (2007) 014611.
[9] R. de Diego, J. M. Arias, J. A. Lay, A. M. Moro, Phys. Rev. C 89 (2014) 064609.
[10] L. D. Faddeev, Zh. Eksp. Teor. Fiz. 39 (1960) 1459 [Sov. Phys. JETP 12 (1961) 1014].
[11] E. O. Alt, P. Grassberger, W. Sandhas, Nucl. Phys. B 2 (1967) 167.
[12] A. Deltuva, Phys. Rev. C 88 (2013) 011601(R).
[13] A. Deltuva, Phys. Rev. C 91 (2015) 024607.
[14] A. Deltuva, A. Ross, E. Norvaišas, F. M. Nunes, Phys. Rev. C 94 (2016) 044613.
[15] R. C. Johnson, P. J. R. Soper, Phys. Rev. C 1 (1970) 976.
[16] B. Fernández-Domínguez et al., Phys. Rev. C 84 (2011) 011301.
[17] C. Hajduk, P. U. Sauer, W. Strueve, Nucl. Phys. A 405 (1983) 581.
[18] A. Deltuva, R. Machleidt, P. U. Sauer, Phys. Rev. C 68 (2003) 024005.
[19] J. R. Taylor, Nuovo Cimento B 23 (1974) 313; M. D. Semon, J. R. Taylor, Nuovo Cimento A 26 (1975) 48.
[20] E. O. Alt, W. Sandhas, Phys. Rev. C 21 (1980) 1733.
[21] A. Deltuva, A. C. Fonseca, P. U. Sauer, Phys. Rev. C 71 (2005) 054005.
[22] J. K. Jewell et al., Phys. Lett. B 454 (1999) 181.
[23] E. Khan et al., Phys. Lett. B 490 (2000) 45.
[24] F. D. Becchetti Jr., G. W. Greenlees, Phys. Rev. 182 (1969) 1190.
[25] A. J. Koning, J. P. Delaroche, Nucl. Phys. A 713 (2003) 231.
[26] R. L. Varner, W. J. Thompson, T. L. McAbee, E. J. Ludwig, T. B. Clegg, Phys. Rep. 201 (1991) 57.
[27] E. K. Warburton, B. A. Brown, Phys. Rev. C 46 (1992) 923.
[28] Y. Utsuno, T. Otsuka, T. Mizusaki, M. Honma, Phys. Rev. C 60 (1999) 054315.
[29] K. Amos, L. Canton, G. Pisent, J. Svenne, D. van der Knijff, Nucl. Phys. A 728 (2003) 65.
[30] R. Machleidt, Phys. Rev. C 63 (2001) 024001.
[31] R. B. Wiringa, V. G. J. Stoks, R. Schiavilla, Phys. Rev. C 51 (1995) 38.
"authors": [
"A. Deltuva",
"D. Jurčiukonis",
"E. Norvaišas"
],
"categories": [
"nucl-th",
"nucl-ex"
],
"primary_category": "nucl-th",
"published": "20170327195639",
"title": "Core-excitation effects in ${}^{20}\\mathrm{O}(d,p){}^{21}\\mathrm{O}$ transfer reactions: Suppression or enhancement?"
} |
Efficient Processing of Deep Neural Networks: A Tutorial and Survey

Vivienne Sze, Senior Member, IEEE, Yu-Hsin Chen, Student Member, IEEE, Tien-Ju Yang, Student Member, IEEE, Joel Emer, Fellow, IEEE

V. Sze, Y.-H. Chen and T.-J. Yang are with the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139 USA (e-mail: [email protected]; [email protected]; [email protected]). J. S. Emer is with the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139 USA, and also with Nvidia Corporation, Westford, MA 01886 USA (e-mail: [email protected]).

December 30, 2023

Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry.

The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.

§ INTRODUCTION

Deep neural networks (DNNs) are currently the foundation for many modern artificial intelligence (AI) applications <cit.>. Since the breakthrough application of DNNs to speech recognition <cit.> and image recognition <cit.>, the number of applications that use DNNs has exploded. These DNNs are employed in a myriad of applications from self-driving cars <cit.>, to detecting cancer <cit.>, to playing complex games <cit.>.
In many of these domains, DNNs are now able to exceed human accuracy. The superior performance of DNNs comes from their ability to extract high-level features from raw sensory data after using statistical learning over a large amount of data to obtain an effective representation of an input space. This is different from earlier approaches that use hand-crafted features or rules designed by experts.

The superior accuracy of DNNs, however, comes at the cost of high computational complexity. While general-purpose compute engines, especially graphics processing units (GPUs), have been the mainstay for much DNN processing, increasingly there is interest in providing more specialized acceleration of the DNN computation. This article aims to provide an overview of DNNs, the various tools for understanding their behavior, and the techniques being explored to efficiently accelerate their computation.

This paper is organized as follows:
* Section <ref> provides background on the context of why DNNs are important, their history and applications.
* Section <ref> gives an overview of the basic components of DNNs and popular DNN models currently in use.
* Section <ref> describes the various resources used for DNN research and development.
* Section <ref> describes the various hardware platforms used to process DNNs and the various optimizations used to improve throughput and energy efficiency without impacting application accuracy (i.e., produce bit-wise identical results).
* Section <ref> discusses how mixed-signal circuits and new memory technologies can be used for near-data processing to address the expensive data movement that dominates throughput and energy consumption of DNNs.
* Section <ref> describes various joint algorithm and hardware optimizations that can be performed on DNNs to improve both throughput and energy efficiency while trying to minimize impact on accuracy.
* Section <ref> describes the key metrics that should be considered when comparing various DNN designs.

§ BACKGROUND ON DEEP NEURAL NETWORKS (DNN)

In this section, we describe the position of DNNs in the context of AI in general and some of the concepts that motivated their development. We will also present a brief chronology of the major steps in their history, and some current domains to which they are being applied.

§.§ Artificial Intelligence and DNNs

DNNs, also referred to as deep learning, are a part of the broad field of AI, which is the science and engineering of creating intelligent machines that have the ability to achieve goals like humans do, according to John McCarthy, the computer scientist who coined the term in the 1950s. The relationship of deep learning to the whole of artificial intelligence is illustrated in Fig. <ref>.

Within artificial intelligence is a large sub-field called machine learning, which was defined in 1959 by Arthur Samuel as the field of study that gives computers the ability to learn without being explicitly programmed. That means a single program, once created, will be able to learn how to do some intelligent activities outside the notion of programming. This is in contrast to purpose-built programs whose behavior is defined by hand-crafted heuristics that explicitly and statically define their behavior. The advantage of an effective machine learning algorithm is clear.
Instead of the laborious and hit-or-miss approach of creating a distinct, custom program to solve each individual problem in a domain, the single machine learning algorithm simply needs to learn, via a process called training, to handle each new problem.

Within the machine learning field, there is an area that is often referred to as brain-inspired computation. Since the brain is currently the best `machine' we know for learning and solving problems, it is a natural place to look for a machine learning approach. Therefore, a brain-inspired computation is a program or algorithm that takes some aspects of its basic form or functionality from the way the brain works. This is in contrast to attempts to create a brain; rather, the program aims to emulate some aspects of how we understand the brain to operate.

Although scientists are still exploring the details of how the brain works, it is generally believed that the main computational element of the brain is the neuron. There are approximately 86 billion neurons in the average human brain. The neurons themselves are connected together with a number of elements entering them called dendrites and an element leaving them called an axon, as shown in Fig. <ref>. The neuron accepts the signals entering it via the dendrites, performs a computation on those signals, and generates a signal on the axon. These input and output signals are referred to as activations. The axon of one neuron branches out and is connected to the dendrites of many other neurons. The connection between a branch of the axon and a dendrite is called a synapse. There are estimated to be 10^14 to 10^15 synapses in the average human brain.

A key characteristic of the synapse is that it can scale the signal (x_i) crossing it, as shown in Fig. <ref>. That scaling factor can be referred to as a weight (w_i), and the way the brain is believed to learn is through changes to the weights associated with the synapses. Thus, different weights result in different responses to an input. Note that learning is the adjustment of the weights in response to a learning stimulus, while the organization (what might be thought of as the program) of the brain does not change. This characteristic makes the brain an excellent inspiration for a machine-learning-style algorithm.

Within the brain-inspired computing paradigm there is a subarea called spiking computing. In this subarea, inspiration is taken from the fact that the communication on the dendrites and axons is via spike-like pulses and that the information being conveyed is not just based on a spike's amplitude. Instead, it also depends on the time the pulse arrives, and the computation that happens in the neuron is a function of not just a single value but the width of the pulse and the timing relationship between different pulses. An example of a project that was inspired by the spiking of the brain is the IBM TrueNorth <cit.>. In contrast to spiking computing, another subarea of brain-inspired computing is called neural networks, which is the focus of this article. [Note: Recent work using TrueNorth in a stylized fashion allows it to be used to compute reduced precision neural networks <cit.>. These types of neural networks are discussed in Section <ref>.]

§.§ Neural Networks and Deep Neural Networks (DNNs)

Neural networks take their inspiration from the notion that a neuron's computation involves a weighted sum of the input values.
These weighted sums correspond to the value scaling performed by the synapses and the combining of those values in the neuron. Furthermore, the neuron doesn't just output that weighted sum, since the computation associated with a cascade of neurons would then be a simple linear algebra operation. Instead there is a functional operation within the neuron that is performed on the combined inputs. This operation appears to be a non-linear function that causes a neuron to generate an output only if the inputs cross some threshold. Thus by analogy, neural networks apply a non-linear function to the weighted sum of the input values. We look at what some of those non-linear functions are in Section <ref>.

Fig. <ref> shows a diagrammatic picture of a computational neural network. The neurons in the input layer receive some values and propagate them to the neurons in the middle layer of the network, which is also frequently called a `hidden layer'. The weighted sums from one or more hidden layers are ultimately propagated to the output layer, which presents the final outputs of the network to the user. To align brain-inspired terminology with neural networks, the outputs of the neurons are often referred to as activations, and the synapses are often referred to as weights, as shown in Fig. <ref>. We will use the activation/weight nomenclature in this article.

Fig. <ref> shows an example of the computation at each layer: y_j = f(∑_i=1^3 W_ij × x_i + b), where W_ij, x_i and y_j are the weights, input activations and output activations, respectively, and f(·) is a non-linear function described in Section <ref>. The bias term b is omitted from Fig. <ref> for simplicity. (A short code sketch of this computation appears below.)

Within the domain of neural networks, there is an area called deep learning, in which the neural networks have more than three layers, i.e., more than one hidden layer. Today, the typical numbers of network layers used in deep learning range from five to more than a thousand. In this article, we will generally use the terminology deep neural networks (DNNs) to refer to the neural networks used in deep learning.

DNNs are capable of learning high-level features with more complexity and abstraction than shallower neural networks. An example that demonstrates this point is using DNNs to process visual data. In these applications, pixels of an image are fed into the first layer of a DNN, and the outputs of that layer can be interpreted as representing the presence of different low-level features in the image, such as lines and edges. At subsequent layers, these features are then combined into a measure of the likely presence of higher level features, e.g., lines are combined into shapes, which are further combined into sets of shapes. And finally, given all this information, the network provides a probability that these high-level features comprise a particular object or scene. This deep feature hierarchy enables DNNs to achieve superior performance in many tasks.

§.§ Inference versus Training

Since DNNs are an instance of a machine learning algorithm, the basic program does not change as it learns to perform its given tasks. In the specific case of DNNs, this learning involves determining the value of the weights (and bias) in the network, and is referred to as training the network. Once trained, the program can perform its task by computing the output of the network using the weights determined during the training process. Running the program with these weights is referred to as inference.
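The per-layer computation y_j = f(∑_i W_ij × x_i + b) referenced above can be made concrete in a few lines of code. The following is a minimal sketch only: the shapes, values, and the choice of ReLU are illustrative and not tied to any particular library or network.

```python
import numpy as np

def relu(z):
    # ReLU non-linearity: pass the weighted sum through only where positive
    return np.maximum(z, 0.0)

def layer(x, W, b):
    """One layer: y_j = f(sum_i W[i, j] * x[i] + b[j])."""
    return relu(W.T @ x + b)

# Illustrative shapes: 3 input activations feeding 2 output neurons
x = np.array([0.5, -1.0, 2.0])   # input activations
W = np.random.randn(3, 2)        # weights (synapses), one per input/output pair
b = np.zeros(2)                  # biases
y = layer(x, W, b)               # output activations
```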
In this section, we will use image classification, as shown in Fig. <ref>, as a driving example for training and using a DNN. When we perform inference using a DNN, we give an input image and the output of the DNN is a vector of scores, one for each object class; the class with the highest score indicates the most likely class of object in the image. The overarching goal for training a DNN is to determine the weights that maximize the score of the correct class and minimize the scores of the incorrect classes. When training the network, the correct class is often known because it is given for the images used for training (i.e., the training set of the network). The gap between the ideal correct scores and the scores computed by the DNN based on its current weights is referred to as the loss (L). Thus the goal of training DNNs is to find a set of weights to minimize the average loss over a large training set.

When training a network, the weights (w_ij) are usually updated using a hill-climbing optimization process called gradient descent. A multiple of the gradient of the loss relative to each weight, which is the partial derivative of the loss with respect to the weight, is used to update the weight (i.e., w_ij^t+1 = w_ij^t − α · ∂L/∂w_ij, where α is called the learning rate). Note that this gradient indicates how the weights should change in order to reduce the loss. The process is repeated iteratively to reduce the overall loss.

An efficient way to compute the partial derivatives of the gradient is through a process called backpropagation. Backpropagation, which is a computation derived from the chain rule of calculus, operates by passing values backwards through the network to compute how the loss is affected by each weight. This backpropagation computation is, in fact, very similar in form to the computation used for inference, as shown in Fig. <ref> <cit.>. [To backpropagate through each filter: (1) compute the gradient of the loss relative to the weights from the filter inputs (i.e., the forward activations) and the gradients of the loss relative to the filter outputs; (2) compute the gradient of the loss relative to the filter inputs from the filter weights and the gradients of the loss relative to the filter outputs.] Thus, techniques for efficiently performing inference can sometimes be useful for performing training. It is, however, important to note a couple of points. First, backpropagation requires intermediate outputs of the network to be preserved for the backwards computation, thus training has increased storage requirements. Second, due to the use of gradients for hill-climbing, the precision requirement for training is generally higher than for inference. Thus many of the reduced precision techniques discussed in Section <ref> are limited to inference only.

A variety of techniques are used to improve the efficiency and robustness of training. For example, often the loss from multiple sets of input data, i.e., a batch, is collected before a single pass of weight update is performed; this helps to speed up and stabilize the training process.

There are multiple ways to train the weights. The most common approach, as described above, is called supervised learning, where all the training samples are labeled (e.g., with the correct class). Unsupervised learning is another approach, where all the training samples are not labeled and essentially the goal is to find the structure or clusters in the data.
Semi-supervised learning falls in between the two approaches, where only a small subset of the training data is labeled (e.g., use unlabeled data to define the cluster boundaries, and use the small amount of labeled data to label the clusters). Finally, reinforcement learning can be used to train the weights such that, given the state of the current environment, the DNN can output what action the agent should take next to maximize expected rewards; however, the rewards might not be available immediately after an action, but instead only after a series of actions.

Another commonly used approach to determine weights is fine-tuning, where previously-trained weights are available and are used as a starting point, and then those weights are adjusted for a new dataset (e.g., transfer learning) or for a new constraint (e.g., reduced precision). This results in faster training than starting from a random starting point, and can sometimes result in better accuracy.

This article will focus on the efficient processing of DNN inference rather than training, since DNN inference is often performed on embedded devices (rather than the cloud) where resources are limited, as discussed in more detail later.

§.§ Development History

Although neural nets were proposed in the 1940s, the first practical application employing multiple digital neurons didn't appear until the late 1980s, with the LeNet network for hand-written digit recognition <cit.> [In the early 1960s, single analog neuron systems were used for adaptive filtering <cit.>.]. Such systems are widely used by ATMs for digit recognition on checks. However, the early 2010s have seen a blossoming of DNN-based applications, with highlights such as Microsoft's speech recognition system in 2011 <cit.> and the AlexNet system for image recognition in 2012 <cit.>. A brief chronology of deep learning is shown in Fig. <ref>.

The deep learning successes of the early 2010s are believed to be a confluence of three factors. The first factor is the amount of available information to train the networks. To learn a powerful representation (rather than using a hand-crafted approach) requires a large amount of training data. For example, Facebook receives over 350 million images per day, Walmart creates 2.5 Petabytes of customer data hourly and YouTube has 300 hours of video uploaded every minute. As a result, the cloud providers and many businesses have a huge amount of data to train their algorithms. The second factor is the amount of compute capacity available. Semiconductor device and computer architecture advances have continued to provide increased computing capability, and we appear to have crossed a threshold where the large amount of weighted sum computation in DNNs, which is required for both inference and training, can be performed in a reasonable amount of time. The successes of these early DNN applications opened the floodgates of algorithmic development. They also inspired the development of several (largely open source) frameworks that make it even easier for researchers and practitioners to explore and use DNNs. Combining these efforts contributes to the third factor, which is the evolution of the algorithmic techniques that have improved application accuracy significantly and broadened the domains to which DNNs are being applied.

An excellent example of the successes in deep learning can be illustrated with the ImageNet Challenge <cit.>. This challenge is a contest involving several different components.
One of the components is an image classification task, where algorithms are given an image and they must identify what is in the image, as shown in Fig. <ref>. The training set consists of 1.2 million images, each of which is labeled with one of 1000 object categories that the image contains. For the evaluation phase, the algorithm must accurately identify objects in a test set of images, which it hasn't previously seen.

Fig. <ref> shows the performance of the best entrants in the ImageNet contest over a number of years. One sees that the algorithms initially had an error rate of 25% or more. In 2012, a group from the University of Toronto used graphics processing units (GPUs) for their high compute capability and a deep neural network approach, named AlexNet, and dropped the error rate by approximately 10% <cit.>. Their accomplishment inspired an outpouring of deep learning style algorithms that have resulted in a steady stream of improvements.

In conjunction with the trend to deep learning approaches for the ImageNet Challenge, there has been a corresponding increase in the number of entrants using GPUs: from only 4 entrants using GPUs in 2012 to almost all the entrants (110) using them in 2014. This reflects the almost complete switch from traditional computer vision approaches to deep learning-based approaches for the competition.

In 2015, the ImageNet winning entry, ResNet <cit.>, exceeded human-level accuracy with a top-5 error rate [The top-5 error rate is measured based on whether the correct answer appears in one of the top 5 categories selected by the algorithm.] below 5%. Since then, the error rate has dropped below 3% and more focus is now being placed on more challenging components of the competition, such as object detection and localization. These successes are clearly a contributing factor to the wide range of applications to which DNNs are being applied.

§.§ Applications of DNN

Many applications can benefit from DNNs, ranging from multimedia to the medical space. In this section, we will provide examples of areas where DNNs are currently making an impact and highlight emerging areas where DNNs hope to make an impact in the future.

* Image and Video: Video is arguably the biggest of the big data. It accounts for over 70% of today's Internet traffic <cit.>. For instance, over 800 million hours of video is collected daily worldwide for video surveillance <cit.>. Computer vision is necessary to extract meaningful information from video. DNNs have significantly improved the accuracy of many computer vision tasks such as image classification <cit.>, object localization and detection <cit.>, image segmentation <cit.>, and action recognition <cit.>.
* Speech and Language: DNNs have significantly improved the accuracy of speech recognition <cit.> as well as many related tasks such as machine translation <cit.>, natural language processing <cit.>, and audio generation <cit.>.
* Medical: DNNs have played an important role in genomics to gain insight into the genetics of diseases such as autism, cancers, and spinal muscular atrophy <cit.>. They have also been used in medical imaging to detect skin cancer <cit.>, brain cancer <cit.> and breast cancer <cit.>.
* Game Play: Recently, many of the grand AI challenges involving game play have been overcome using DNNs. These successes also required innovations in training techniques, and many rely on reinforcement learning <cit.>.
DNNs have surpassed human-level accuracy in playing Atari <cit.> as well as Go <cit.>, where an exhaustive search of all possibilities is not feasible due to the unimaginably huge number of possible moves.
* Robotics: DNNs have been successful in the domain of robotic tasks such as grasping with a robotic arm <cit.>, motion planning for ground robots <cit.>, visual navigation <cit.>, control to stabilize a quadcopter <cit.> and driving strategies for autonomous vehicles <cit.>.

DNNs are already widely used in multimedia applications today (e.g., computer vision, speech recognition). Looking forward, we expect that DNNs will likely play an increasingly important role in the medical and robotics fields, as discussed above, as well as finance (e.g., for trading, energy forecasting, and risk assessment), infrastructure (e.g., structural safety, and traffic control), weather forecasting and event detection <cit.>. The myriad application domains pose new challenges to the efficient processing of DNNs; the solutions then have to be adaptive and scalable in order to handle the new and varied forms of DNNs that these applications may employ.

§.§ Embedded versus Cloud

The various applications and aspects of DNN processing (i.e., training versus inference) have different computational needs. Specifically, training often requires a large dataset [One of the major drawbacks of DNNs is their need for large datasets to prevent over-fitting during training.] and significant computational resources for multiple weight-update iterations. In many cases, training a DNN model still takes several hours to multiple days and thus is typically performed in the cloud. Inference, on the other hand, can happen either in the cloud or at the edge (e.g., IoT or mobile).

In many applications, it is desirable to have the DNN inference processing near the sensor. For instance, in computer vision applications, such as measuring wait times in stores or predicting traffic patterns, it would be desirable to extract meaningful information from the video right at the image sensor rather than in the cloud, to reduce the communication cost. For other applications, such as autonomous vehicles, drone navigation and robotics, local processing is desired since the latency and security risks of relying on the cloud are too high. However, video involves a large amount of data, which is computationally complex to process; thus, low cost hardware to analyze video is challenging yet critical to enabling these applications. Speech recognition enables us to seamlessly interact with electronic devices, such as smartphones. While currently most of the processing for applications such as Apple Siri and Amazon Alexa voice services is in the cloud, it is still desirable to perform the recognition on the device itself to reduce latency and dependency on connectivity, and to improve privacy and security.

Many of the embedded platforms that perform DNN inference have stringent limitations on energy consumption, compute and memory cost; efficient processing of DNNs has thus become of prime importance under these constraints. Therefore, in this article, we will focus on the compute requirements for inference rather than training.

§ OVERVIEW OF DNNS

DNNs come in a wide variety of shapes and sizes depending on the application. The popular shapes and sizes are also evolving rapidly to improve accuracy and efficiency. In all cases, the input to a DNN is a set of values representing the information to be analyzed by the network.
For instance, these values can be pixels of an image, sampled amplitudes of an audio wave or the numerical representation of the state of some system or game.

The networks that process the input come in two major forms: feed forward and recurrent, as shown in Fig. <ref>. In feed-forward networks all of the computation is performed as a sequence of operations on the outputs of a previous layer. The final set of operations generates the output of the network, for example a probability that an image contains a particular object, the probability that an audio sequence contains a particular word, a bounding box in an image around an object or the proposed action that should be taken. In such DNNs, the network has no memory and the output for an input is always the same irrespective of the sequence of inputs previously given to the network.

In contrast, recurrent neural networks (RNNs), of which Long Short-Term Memory networks (LSTMs) <cit.> are a popular variant, have internal memory to allow long-term dependencies to affect the output. In these networks, some intermediate operations generate values that are stored internally in the network and used as inputs to other operations in conjunction with the processing of a later input. In this article, we will focus on feed-forward networks since (1) the major computation in RNNs is still the weighted sum, which is covered by the feed-forward networks, and (2) to-date little attention has been given to hardware acceleration specifically for RNNs.

DNNs can be composed solely of fully-connected (FC) layers (also referred to as multi-layer perceptrons, or MLP), as shown in the leftmost layer of Fig. <ref>. In an FC layer, all output activations are composed of a weighted sum of all input activations (i.e., all outputs are connected to all inputs). This requires a significant amount of storage and computation. Thankfully, in many applications, we can remove some connections between the activations by setting the weights to zero without affecting accuracy. This results in a sparsely-connected layer. A sparsely connected layer is illustrated in the rightmost layer of Fig. <ref>.

We can also make the computation more efficient by limiting the number of weights that contribute to an output. This sort of structured sparsity can arise if each output is only a function of a fixed-size window of inputs. Even further efficiency can be gained if the same set of weights are used in the calculation of every output. This repeated use of the same weight values is called weight sharing and can significantly reduce the storage requirements for weights. An extremely popular windowed and weight-shared DNN layer arises by structuring the computation as a convolution, as shown in Fig. <ref>, where the weighted sum for each output activation is computed using only a small neighborhood of input activations (i.e., all weights beyond the neighborhood are set to zero), and where the same set of weights are shared for every output (i.e., the filter is space invariant). Such convolution-based layers are referred to as convolutional (CONV) layers. [Note: the structured sparsity in CONV layers is orthogonal to the sparsity that occurs from network pruning as described in Section <ref>.]

§.§ Convolutional Neural Networks (CNNs)

A common form of DNNs is Convolutional Neural Nets (CNNs), which are composed of multiple CONV layers as shown in Fig. <ref>.
In such networks, each layer generates a successively higher-level abstraction of the input data, called a feature map (fmap), which preserves essential yet unique information. Modern CNNs are able to achieve superior performance by employing a very deep hierarchy of layers. CNNs are widely used in a variety of applications including image understanding <cit.>, speech recognition <cit.>, game play <cit.>, robotics <cit.>, etc. This paper will focus on their use in image processing, specifically for the task of image classification <cit.>.

Each of the CONV layers in a CNN is primarily composed of high-dimensional convolutions, as shown in Fig. <ref>. In this computation, the input activations of a layer are structured as a set of 2-D input feature maps (ifmaps), each of which is called a channel. Each channel is convolved with a distinct 2-D filter from the stack of filters, one for each channel; this stack of 2-D filters is often referred to as a single 3-D filter. The results of the convolution at each point are summed across all the channels. In addition, a 1-D bias can be added to the filtering results, but some recent networks <cit.> remove its usage from parts of the layers. The result of this computation is the output activations that comprise one channel of the output feature map (ofmap). Additional 3-D filters can be used on the same input to create additional output channels. Finally, multiple input feature maps may be processed together as a batch to potentially improve reuse of the filter weights.

Given the shape parameters in Table <ref>, the computation of a CONV layer is defined as

𝐎[z][u][x][y] = 𝐁[u] + ∑_k=0^C-1 ∑_i=0^S-1 ∑_j=0^R-1 𝐈[z][k][Ux+i][Uy+j] × 𝐖[u][k][i][j],
0 ≤ z < N, 0 ≤ u < M, 0 ≤ x < F, 0 ≤ y < E,
E = (H-R+U)/U, F = (W-S+U)/U.

𝐎, 𝐈, 𝐖 and 𝐁 are the matrices of the ofmaps, ifmaps, filters and biases, respectively. U is a given stride size. Fig. <ref> shows a visualization of this computation (ignoring biases). To align the terminology of CNNs with the generic DNN,
* filters are composed of weights (i.e., synapses)
* input and output feature maps (ifmaps, ofmaps) are composed of activations (i.e., input and output neurons)

From five <cit.> to more than a thousand <cit.> CONV layers are commonly used in recent CNN models. A small number, e.g., 1 to 3, of fully-connected (FC) layers are typically applied after the CONV layers for classification purposes. An FC layer also applies filters on the ifmaps as in the CONV layers, but the filters are of the same size as the ifmaps. Therefore, it does not have the weight sharing property of CONV layers. Eq. (<ref>) still holds for the computation of FC layers with a few additional constraints on the shape parameters: H = R, W = S, E = F = 1, and U = 1.

In addition to CONV and FC layers, various optional layers can be found in a DNN, such as the non-linearity, pooling, and normalization. The function and computations for each of these layers are discussed next.
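As a concrete companion to Eq. (<ref>), the sketch below evaluates a CONV layer directly as a nested loop over the shape parameters of Table <ref>, and counts the weights and MACs that the loop nest implies. It is a minimal reference implementation only: it assumes NumPy arrays in batch/channel/height/width layout, takes i and j as the filter row and column offsets, and takes x and y as the ofmap row and column.

```python
import numpy as np

def conv_layer(I, W, B, U=1):
    """Direct evaluation of the CONV layer equation: O[z,u,x,y] =
    B[u] + sum over k, i, j of I[z,k,Ux+i,Uy+j] * W[u,k,i,j]."""
    N, C, H, W_in = I.shape                        # batch, channels, ifmap size
    M, _, R, S = W.shape                           # filters, channels, filter size
    E, F = (H - R + U) // U, (W_in - S + U) // U   # ofmap height/width
    O = np.zeros((N, M, E, F))
    for z in range(N):                   # ifmap/ofmap in the batch
        for u in range(M):               # output channel (one per 3-D filter)
            for x in range(E):           # ofmap row
                for y in range(F):       # ofmap column
                    acc = B[u]
                    for k in range(C):           # input channel
                        for i in range(R):       # filter row
                            for j in range(S):   # filter column
                                acc += I[z, k, U*x + i, U*y + j] * W[u, k, i, j]
                    O[z, u, x, y] = acc
    return O

def conv_cost(N, M, C, R, S, E, F):
    """Storage and work implied by the loop nest above."""
    weights = M * C * R * S               # plus M biases if used
    macs = N * M * E * F * C * R * S      # one MAC per innermost iteration
    return weights, macs
```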
Finally, a non-linearity called maxout, which takes the max value of two intersecting linear functions, has been shown to be effective in speech recognition tasks <cit.>.

§.§.§ Pooling

A variety of computations that reduce the dimensionality of a feature map are referred to as pooling. Pooling, which is applied to each channel separately, enables the network to be robust and invariant to small shifts and distortions. Pooling combines, or pools, a set of values in its receptive field into a smaller number of values. It can be configured based on the size of its receptive field (e.g., 2×2) and pooling operation (e.g., max or average), as shown in Fig. <ref>. Typically pooling occurs on non-overlapping blocks (i.e., the stride is equal to the size of the pooling). Usually a stride of greater than one is used such that there is a reduction in the dimension of the representation (i.e., feature map).

§.§.§ Normalization

Controlling the input distribution across layers can help to significantly speed up training and improve accuracy. Accordingly, the distribution of the layer input activations (σ, μ) is normalized such that it has a zero mean and a unit standard deviation. In batch normalization (BN), the normalized value is further scaled and shifted, as shown in Eq. (<ref>), where the parameters (γ, β) are learned from training <cit.>; ϵ is a small constant to avoid numerical problems:

y = (x - μ)/√(σ^2 + ϵ) · γ + β

Prior to this, local response normalization (LRN) <cit.> was used, which was inspired by lateral inhibition in neurobiology, where excited neurons (i.e., high value activations) should subdue their neighbors (i.e., cause low value activations); however, BN is now considered standard practice in the design of CNNs, while LRN is mostly deprecated. Note that while LRN usually is performed after the non-linear function, BN is mostly performed between the CONV or FC layer and the non-linear function.

§.§ Popular DNN Models

Many DNN models have been developed over the past two decades. Each of these models has a different `network architecture' in terms of number of layers, layer types, layer shapes (i.e., filter size, number of channels and filters), and connections between layers. Understanding these variations and trends is important for incorporating the right flexibility in any efficient DNN engine. In this section, we will give an overview of various popular DNNs such as LeNet <cit.> as well as those that competed in and/or won the ImageNet Challenge <cit.>, as shown in Fig. <ref>, most of whose models with pre-trained weights are publicly available for download; the DNN models are summarized in Table <ref>. Two top-5 error results are reported. In the first row, the accuracy is boosted by using multiple crops from the image and an ensemble of multiple trained models (i.e., the DNN needs to be run several times); these results were used to compete in the ImageNet Challenge. The second row reports the accuracy if only a single crop was used (i.e., the DNN is run only once), which is more consistent with what would likely be deployed in real-time and/or energy-constrained applications.

LeNet <cit.> was one of the first CNN approaches, introduced in 1989. It was designed for the task of digit classification in grayscale images of size 28×28. The most well known version, LeNet-5, contains two CONV layers and two FC layers <cit.>. Each CONV layer uses filters of size 5×5 (1 channel per filter), with 6 filters in the first layer and 16 filters in the second layer.
Average pooling of 2×2 is used after each convolution and a sigmoid is used for the non-linearity. In total, LeNet requires 60k weights and 341k multiply-and-accumulates (MACs) per image. LeNet led to CNNs' first commercial success, as it was deployed in ATMs to recognize digits for check deposits.

AlexNet <cit.> was the first CNN to win the ImageNet Challenge in 2012. It consists of five CONV layers and three FC layers. Within each CONV layer, there are 96 to 384 filters and the filter size ranges from 3×3 to 11×11, with 3 to 256 channels each. In the first layer, the 3 channels of the filter correspond to the red, green and blue components of the input image. A ReLU non-linearity is used in each layer. Max pooling of 3×3 is applied to the outputs of layers 1, 2 and 5. To reduce computation, a stride of 4 is used at the first layer of the network. AlexNet introduced the use of LRN in layers 1 and 2 before the max pooling, though LRN is no longer popular in later CNN models. One important factor that differentiates AlexNet from LeNet is that the number of weights is much larger and the shapes vary from layer to layer. To reduce the amount of weights and computation in the second CONV layer, the 96 output channels of the first layer are split into two groups of 48 input channels for the second layer, such that the filters in the second layer only have 48 channels. Similarly, the weights in the fourth and fifth layers are also split into two groups. In total, AlexNet requires 61M weights and 724M MACs to process one 227×227 input image.

Overfeat <cit.> has a very similar architecture to AlexNet, with five CONV layers and three FC layers. The main differences are that the number of filters is increased for layers 3 (384 to 512), 4 (384 to 1024), and 5 (256 to 1024), layer 2 is not split into two groups, the first fully connected layer only has 3072 channels rather than 4096, and the input size is 231×231 rather than 227×227. As a result, the number of weights grows to 146M and the number of MACs grows to 2.8G per image. Overfeat has two different models: fast (described here) and accurate. The accurate model used in the ImageNet Challenge gives a 0.65% lower top-5 error rate than the fast model at the cost of 1.9× more MACs.

VGG-16 <cit.> goes deeper to 16 layers, consisting of 13 CONV layers and 3 FC layers. In order to balance out the cost of going deeper, larger filters (e.g., 5×5) are built from multiple smaller filters (e.g., 3×3), which have fewer weights, to achieve the same receptive fields, as shown in Fig. <ref> (e.g., two stacked 3×3 filters cover the same 5×5 receptive field as one 5×5 filter using 18 rather than 25 weights per filter channel). As a result, all CONV layers have the same filter size of 3×3. In total, VGG-16 requires 138M weights and 15.5G MACs to process one 224×224 input image. VGG has two different models: VGG-16 (described here) and VGG-19. VGG-19 gives a 0.1% lower top-5 error rate than VGG-16 at the cost of 1.27× more MACs.

GoogLeNet <cit.> goes even deeper, with 22 layers. It introduced an inception module, shown in Fig. <ref>, which is composed of parallel connections, whereas previously there was only a single serial connection. Different sized filters (i.e., 1×1, 3×3, 5×5), along with 3×3 max-pooling, are used for each parallel connection and their outputs are concatenated for the module output. Using multiple filter sizes has the effect of processing the input at multiple scales. For improved training speed, GoogLeNet is designed such that the weights and the activations, which are stored for backpropagation during training, could all fit into the GPU memory.
In order to reduce the number of weights, 1×1 filters are applied as a `bottleneck' to reduce the number of channels for each filter <cit.>. The 22 layers consist of three CONV layers, followed by 9 inception layers (each of which is two CONV layers deep), and one FC layer. Since its introduction in 2014, GoogLeNet (also referred to as Inception) has had multiple versions: v1 (described here), v3 [v2 is very similar to v3.] and v4. Inception-v3 decomposes the convolutions by using smaller 1-D filters, as shown in Fig. <ref>, to reduce the number of MACs and weights in order to go deeper to 42 layers. In conjunction with batch normalization <cit.>, v3 achieves over 3% lower top-5 error than v1 with a 2.5× increase in computation <cit.>. Inception-v4 uses residual connections <cit.>, described in the next section, for a 0.4% reduction in error.

ResNet <cit.>, also known as Residual Net, uses residual connections to go even deeper (34 layers or more). It was the first DNN entry in the ImageNet Challenge that exceeded human-level accuracy with a top-5 error rate below 5%. One of the challenges with deep networks is the vanishing gradient during training: as the error backpropagates through the network, the gradient shrinks, which affects the ability to update the weights in the earlier layers for very deep networks. ResNet introduces a `shortcut' module which contains an identity connection such that the weight layers (i.e., CONV layers) can be skipped, as shown in Fig. <ref>. Rather than learning the function for the weight layers F(x), the shortcut module learns the residual mapping (F(x) = H(x) - x). Initially, F(x) is zero and the identity connection is taken; then gradually during training, the actual forward connection through the weight layer is used. This is similar to the LSTM networks that are used for sequential data. ResNet also uses the `bottleneck' approach of using 1×1 filters to reduce the number of weight parameters. As a result, the two layers in the shortcut module are replaced by three layers (1×1, 3×3, 1×1), where the 1×1 layers reduce and then increase (restore) the number of weights. ResNet-50 consists of one CONV layer, followed by 16 shortcut layers (each of which is three CONV layers deep), and one FC layer; it requires 25.5M weights and 3.9G MACs per image. There are various versions of ResNet with multiple depths (e.g., without bottleneck: 18, 34; with bottleneck: 50, 101, 152). The ResNet with 152 layers was the winner of the ImageNet Challenge, requiring 11.3G MACs and 60M weights. Compared to ResNet-50, it reduces the top-5 error by around 1% at the cost of 2.9× more MACs and 2.5× more weights.

Several trends can be observed in the popular DNNs shown in Table <ref>. Increasing the depth of the network tends to provide higher accuracy. Controlling for the number of weights, a deeper network can support a wider range of non-linear functions that are more discriminative and also provides more levels of hierarchy in the learned representation <cit.>. The number of filter shapes continues to vary across layers, thus flexibility is still important. Furthermore, most of the computation has been placed on CONV layers rather than FC layers. In addition, the number of weights in the FC layers is reduced, and in most recent networks (since GoogLeNet) the CONV layers also dominate in terms of weights.
Thus, the focus of hardware implementations should be on addressing the efficiency of the CONV layers, which in many domains are increasingly important.

§ DNN DEVELOPMENT RESOURCES

One of the key factors that has enabled the rapid development of DNNs is the set of development resources that have been made available by the research community and industry. These resources are also key to the development of DNN accelerators, by providing characterizations of the workloads and facilitating the exploration of trade-offs in model complexity and accuracy. This section will describe these resources such that those who are interested in this field can quickly get started.

§.§ Frameworks

For ease of DNN development and to enable sharing of trained networks, several deep learning frameworks have been developed from various sources. These open source libraries contain software libraries for DNNs. Caffe was made available in 2014 from UC Berkeley <cit.>. It supports C, C++, Python and MATLAB. Tensorflow was released by Google in 2015, and supports C++ and Python; it also supports multiple CPUs and GPUs and has more flexibility than Caffe, with the computation expressed as dataflow graphs to manage the “tensors” (multidimensional arrays). Another popular framework is Torch, which was developed by Facebook and NYU and supports C, C++ and Lua. There are several other frameworks such as Theano, MXNet, and CNTK, which are described in <cit.>. There are also higher-level libraries that can run on top of the aforementioned frameworks to provide a more universal experience and faster development. One example of such libraries is Keras, which is written in Python and supports Tensorflow, CNTK and Theano.

The existence of such frameworks is not only a convenient aid for DNN researchers and application designers, but they are also invaluable for engineering high performance or more efficient DNN computation engines. In particular, because the frameworks make heavy use of a set of primitive operations, such as the processing of a CONV layer, they can incorporate use of optimized software or hardware accelerators. This acceleration is transparent to the user of the framework. Thus, for example, most frameworks can use Nvidia's cuDNN library for rapid execution on Nvidia GPUs. Similarly, transparent incorporation of dedicated hardware accelerators can be achieved, as was done with the Eyeriss chip <cit.>.

Finally, these frameworks are a valuable source of workloads for hardware researchers. They can be used to drive experimental designs for different workloads, for profiling different workloads and for exploring hardware-software trade-offs.

§.§ Models

Pretrained DNN models can be downloaded from various websites <cit.> for the various different frameworks. It should be noted that even for the same DNN (e.g., AlexNet) the accuracy of these models can vary by around 1% to 2% depending on how the model was trained, and thus the results do not always exactly match the original publication.

§.§ Popular Datasets for Classification

It is important to factor in the difficulty of the task when comparing different DNN models. For instance, the task of classifying handwritten digits from the MNIST dataset <cit.> is much simpler than classifying an object into one of 1000 classes as is required for the ImageNet dataset <cit.> (Fig. <ref>). It is expected that the size of the DNNs (i.e., number of weights) and the number of MACs will be larger for the more difficult task than the simpler task, and thus require more energy and have lower throughput.
For instance, LeNet-5 <cit.> is designed for digit classification, while AlexNet <cit.>, VGG-16 <cit.>, GoogLeNet <cit.>, and ResNet <cit.> are designed for the 1000-class image classification.

There are many AI tasks that come with publicly available datasets in order to evaluate the accuracy of a given DNN. Public datasets are important for comparing the accuracy of different approaches. The simplest and most common task is image classification, which involves being given an entire image, and selecting 1 of N classes that the image most likely belongs to. There is no localization or detection.

MNIST is a widely used dataset for digit classification that was introduced in 1998 <cit.>. It consists of 28×28 pixel grayscale images of handwritten digits. There are 10 classes (for 10 digits) and 60,000 training images and 10,000 test images. LeNet-5 was able to achieve an accuracy of 99.05% when MNIST was first introduced. Since then the accuracy has increased to 99.79% using regularization of neural networks with dropconnect <cit.>. Thus, MNIST is now considered a fairly easy dataset.

CIFAR is a dataset that consists of 32×32 pixel colored images of various objects, which was released in 2009 <cit.>. CIFAR is a subset of the 80 million Tiny Images dataset <cit.>. CIFAR-10 is composed of 10 mutually exclusive classes. There are 50,000 training images (5000 per class) and 10,000 test images (1000 per class). A two-layer convolutional deep belief network was able to achieve 64.84% accuracy on CIFAR-10 when it was first introduced <cit.>. Since then the accuracy has increased to 96.53% using fractional max pooling <cit.>.

ImageNet is a large scale image dataset that was first introduced in 2010; the dataset stabilized in 2012 <cit.>. It contains 256×256 pixel images in color, with 1000 classes. The classes are defined using WordNet as a backbone to handle ambiguous word meanings and to combine together synonyms into the same object category. In other words, there is a hierarchy for the ImageNet categories. The 1000 classes were selected such that there is no overlap in the ImageNet hierarchy. The ImageNet dataset contains many fine-grained categories, including 120 different breeds of dogs. There are 1.3M training images (732 to 1300 per class), 100,000 testing images (100 per class) and 50,000 validation images (50 per class).

The accuracy for the ImageNet Challenge is reported using two metrics: Top-5 and Top-1 error. Top-5 error means that if any of the top five scoring categories are the correct category, it is counted as a correct classification. Top-1 error requires that the top scoring category be correct. In 2012, the winner of the ImageNet Challenge (AlexNet) was able to achieve an accuracy of 83.6% for the top-5 (which is substantially better than the 73.8% of the second place entry that year, which did not use DNNs); it achieved 61.9% on the top-1 of the validation set. In 2017, the highest accuracy was 97.7% for the top-5.

In summary, of the various image classification datasets, it is clear that MNIST is a fairly easy dataset, while ImageNet is a challenging one with a wider coverage of classes. Thus, in terms of evaluating the accuracy of a given DNN, it is important to consider the dataset upon which the accuracy is measured.
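To make the two metrics precise, the following sketch computes the Top-k error from a matrix of classification scores. It is a minimal illustration under assumed array shapes, not tied to any particular framework or benchmark harness.

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """scores: (num_images, num_classes); labels: (num_images,).
    An image counts as correct if its true label is among the k
    highest-scoring categories; Top-1 error is the special case k=1."""
    top_k = np.argsort(scores, axis=1)[:, -k:]        # k best categories per image
    correct = np.any(top_k == labels[:, None], axis=1)
    return 1.0 - correct.mean()

# top_k_error(scores, labels, k=1) gives the Top-1 error;
# top_k_error(scores, labels, k=5) gives the Top-5 error.
```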
§.§ Datasets for Other Tasks

Since state-of-the-art DNNs achieve better than human-level accuracy on image classification tasks, the ImageNet Challenge has started to focus on more difficult tasks such as single-object localization and object detection. For single-object localization, the target object must be localized and classified (out of 1000 classes). The DNN outputs the top five categories and top five bounding box locations. There is no penalty for identifying an object that is in the image but not included in the ground truth. For object detection, all objects in the image must be localized and classified (out of 200 classes). The bounding box for all objects in these categories must be labeled. Objects that are not labeled are penalized, as are duplicated detections.

Beyond ImageNet, there are also other popular image datasets for computer vision tasks. For object detection, there is the PASCAL VOC (2005-2012) dataset, which contains 11k images representing 20 classes (27k object instances, 7k of which have detailed segmentation) <cit.>. For object detection, segmentation and recognition in context, there is the MS COCO dataset with 2.5M labeled instances in 328k images (91 object categories) <cit.>; compared to ImageNet, COCO has fewer categories but more instances per category, which is useful for precise 2-D localization. COCO also has more labeled instances per image to potentially help with contextual information.

Most recently, even larger scale datasets have been made available. For instance, Google has an Open Images dataset with over 9M images <cit.>, spanning 6000 categories. There is also a YouTube dataset with 8M videos (0.5M hours of video) covering 4800 classes <cit.>. Google also released an audio dataset comprised of 632 audio event classes and a collection of 2M human-labeled 10-second sound clips <cit.>. These large datasets will be ever more important as DNNs become deeper with more weight parameters to train.

Undoubtedly, both larger datasets and datasets for new domains will serve as important resources for profiling and exploring the efficiency of future DNN engines.

§ HARDWARE FOR DNN PROCESSING

Due to the popularity of DNNs, many recent hardware platforms have special features that target DNN processing. For instance, the Intel Knights Landing CPU features special vector instructions for deep learning; the Nvidia PASCAL GP100 GPU features 16-bit floating point (FP16) arithmetic support to perform two FP16 operations on a single-precision core for faster deep learning computation. Systems have also been built specifically for DNN processing, such as the Nvidia DGX-1 and Facebook's Big Basin custom DNN server <cit.>. DNN inference has also been demonstrated on various embedded Systems-on-Chip (SoC) such as Nvidia Tegra and Samsung Exynos, as well as on FPGAs. Accordingly, it is important to have a good understanding of how the processing is being performed on these platforms, and how application-specific accelerators can be designed for DNNs for further improvement in throughput and energy efficiency.

The fundamental component of both the CONV and FC layers is the multiply-and-accumulate (MAC) operation, which can be easily parallelized. In order to achieve high performance, highly parallel compute paradigms are commonly used, including both temporal and spatial architectures, as shown in Fig. <ref>.
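As a point of reference for the architecture discussion that follows, the computation being parallelized is just a nest of MAC loops; a minimal scalar sketch for an FC layer (the names and shapes are illustrative):

```python
def fc_layer(weights, activations, bias):
    """Naive FC layer: each output is a sum of MACs.
    weights: M x K nested lists; activations: length-K list; bias: length-M."""
    outputs = []
    for m in range(len(weights)):          # one output activation per filter
        acc = bias[m]
        for k in range(len(activations)):  # the MAC: multiply-and-accumulate
            acc += weights[m][k] * activations[k]
        outputs.append(acc)
    return outputs

w = [[0.1, 0.2], [0.3, -0.4]]              # M=2 filters, K=2 weights each
print(fc_layer(w, [1.0, 2.0], [0.0, 0.5])) # [0.5, 0.0]
```

Both architecture styles parallelize these loops, but in different ways, as described next.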
The temporal architectures appear mostly in CPUs or GPUs, and employ a variety of techniques to improve parallelism such as vectors (SIMD) or parallel threads (SIMT). Such temporal architectures use centralized control for a large number of ALUs. These ALUs can only fetch data from the memory hierarchy and cannot communicate directly with each other. In contrast, spatial architectures use dataflow processing, i.e., the ALUs form a processing chain so that they can pass data from one to another directly. Sometimes each ALU can have its own control logic and local memory, called a scratchpad or register file. We refer to an ALU with its own local memory as a processing engine (PE). Spatial architectures are commonly used for DNNs in ASIC and FPGA-based designs. In this section, we will discuss the different design strategies for efficient processing on these different platforms, without any impact on accuracy (i.e., all approaches in this section produce bit-wise identical results); specifically,

* For temporal architectures such as CPUs and GPUs, we will discuss how computational transforms on the kernel can reduce the number of multiplications to increase throughput.

* For spatial architectures used in accelerators, we will discuss how dataflows can increase data reuse from low-cost memories in the memory hierarchy to reduce energy consumption.

§.§ Accelerate Kernel Computation on CPU and GPU Platforms

CPUs and GPUs use parallelization techniques such as SIMD or SIMT to perform the MACs in parallel. All the ALUs share the same control and memory (register file). On these platforms, both the FC and CONV layers are often mapped to a matrix multiplication (i.e., the kernel computation). Fig. <ref> shows how a matrix multiplication is used for the FC layer. The height of the filter matrix is the number of filters and the width is the number of weights per filter (input channels (C) × width (W) × height (H), since R=W and S=H in the FC layer); the height of the input feature map matrix is the number of activations per input feature map (C × W × H), and the width is the number of input feature maps (one in Fig. <ref> and N in Fig. <ref>); finally, the height of the output feature map matrix is the number of channels in the output feature maps (M), and the width is the number of output feature maps (N), where each output feature map of the FC layer has the dimension of 1×1×(number of output channels (M)).

The CONV layer in a DNN can also be mapped to a matrix multiplication using a relaxed form of the Toeplitz matrix, as shown in Fig. <ref>. The downside of using matrix multiplication for the CONV layers is that there is redundant data in the input feature map matrix, as highlighted in Fig. <ref>. This can lead to either inefficiency in storage or a complex memory access pattern.

There are software libraries designed for CPUs (e.g., OpenBLAS, Intel MKL, etc.) and GPUs (e.g., cuBLAS, cuDNN, etc.) that optimize for matrix multiplications. The matrix multiplication is tiled to the storage hierarchy of these platforms, which are on the order of a few megabytes at the higher levels.

The matrix multiplications on these platforms can be further sped up by applying computational transforms to the data to reduce the number of multiplications, while still giving the same bit-wise result. Often this comes at the cost of an increased number of additions and a more irregular data access pattern.

Fast Fourier Transform (FFT) <cit.> is a well-known approach, shown in Fig.
<ref>, which reduces the number of multiplications from O(N_o^2 N_f^2) to O(N_o^2 log_2 N_o), where the output size is N_o×N_o and the filter size is N_f×N_f. To perform the convolution, we take the FFT of the filter and input feature map, and then perform the multiplication in the frequency domain; we then apply an inverse FFT to the resulting product to recover the output feature map in the spatial domain. However, there are several drawbacks to using FFT: (1) the benefits of FFTs decrease with filter size; (2) the size of the FFT is dictated by the output feature map size, which is often much larger than the filter; (3) the coefficients in the frequency domain are complex. As a result, while FFT reduces computation, it requires larger storage capacity and bandwidth. Finally, a popular approach for reducing complexity is to make the weights sparse, which will be discussed in Section <ref>; using FFTs makes it difficult for this sparsity to be exploited.

Several optimizations can be performed on FFT to make it more effective for DNNs. To reduce the number of operations, the FFT of the filter can be precomputed and stored. In addition, the FFT of the input feature map can be computed once and used to generate multiple channels in the output feature map. Finally, since an image contains only real values, its Fourier transform is symmetric, and this can be exploited to reduce storage and computation cost.

Other approaches include Strassen <cit.> and Winograd <cit.>, which rearrange the computation such that the number of multiplications is reduced from O(N^3) to O(N^2.807), and by 2.25× for a 3×3 filter, respectively, at the cost of reduced numerical stability, increased storage requirements, and specialized processing depending on the size of the filter. In practice, different algorithms might be used for different layer shapes and sizes (e.g., FFT for filters greater than 5×5, and Winograd for filters 3×3 and below). Existing platform libraries, such as MKL and cuDNN, dynamically choose the appropriate algorithm for a given shape and size <cit.>.

§.§ Energy-Efficient Dataflow for Accelerators

For DNNs, the bottleneck for processing is in the memory access. Each MAC requires three memory reads (for the filter weight, fmap activation, and partial sum) and one memory write (for the updated partial sum), as shown in Fig. <ref>. In the worst case, all of the memory accesses have to go through the off-chip DRAM, which will severely impact both throughput and energy efficiency. For example, in AlexNet, to support its 724M MACs, nearly 3000M DRAM accesses will be required. Furthermore, DRAM accesses require up to several orders of magnitude higher energy than computation <cit.>.

Accelerators, such as the spatial architectures shown in Fig. <ref>, provide an opportunity to reduce the energy cost of data movement by introducing several levels of local memory hierarchy with different energy costs, as shown in Fig. <ref>. This includes a large global buffer with a size of several hundred kilobytes that connects to DRAM, an inter-PE network that can pass data directly between the ALUs, and a register file (RF) within each processing element (PE) with a size of a few kilobytes or less. The multiple levels of memory hierarchy help to improve energy efficiency by providing low-cost data accesses. For example, fetching data from the RF or neighboring PEs costs one to two orders of magnitude less energy than fetching it from DRAM.

Accelerators can be designed to support specialized processing dataflows that leverage this memory hierarchy.
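Before describing specific dataflows, a back-of-the-envelope sketch shows why shifting accesses down the hierarchy matters. The per-access energies below are assumed, illustrative values chosen only to respect the ordering described above (RF cheapest, DRAM roughly two orders of magnitude more expensive); the access counts are likewise hypothetical:

```python
# Assumed energy per access, normalized to one RF access (illustrative only).
ENERGY = {"RF": 1, "PE_array": 2, "buffer": 6, "DRAM": 200}

def data_movement_energy(accesses):
    """accesses: dict mapping hierarchy level -> number of accesses."""
    return sum(ENERGY[level] * n for level, n in accesses.items())

# Same MACs, two hypothetical scenarios: without reuse every access goes to
# DRAM; with reuse most accesses are served by the RF and global buffer.
no_reuse   = {"DRAM": 3000e6}
with_reuse = {"DRAM": 100e6, "buffer": 400e6, "RF": 2500e6}
print(data_movement_energy(no_reuse) / data_movement_energy(with_reuse))  # ~24x
```

Even with made-up constants, the conclusion is robust: maximizing reuse at the cheap levels of the hierarchy dominates the total data movement energy.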
The dataflow decides what data is read into which level of the memory hierarchy and when it is processed. Since there is no randomness in the processing of DNNs, it is possible to design a fixed dataflow that can adapt to the DNN shapes and sizes and optimize for the best energy efficiency. The optimized dataflow minimizes accesses from the more energy-consuming levels of the memory hierarchy. Large memories that can store a significant amount of data consume more energy than smaller memories. For instance, DRAM can store gigabytes of data, but consumes two orders of magnitude higher energy per access than a small on-chip memory of a few kilobytes. Thus, every time a piece of data is moved from an expensive level to a lower-cost level in terms of energy, we want to reuse that piece of data as much as possible to minimize subsequent accesses to the expensive levels. The challenge, however, is that the storage capacity of these low-cost memories is limited. Thus we need to explore different dataflows that maximize reuse under these constraints.

For DNNs, we investigate dataflows that exploit three forms of input data reuse (convolutional, feature map and filter), as shown in Fig. <ref>. For convolutional reuse, the same input feature map activations and filter weights are used within a given channel, just in different combinations for different weighted sums. For feature map reuse, multiple filters are applied to the same feature map, so the input feature map activations are used multiple times across filters. Finally, for filter reuse, when multiple input feature maps are processed at once (referred to as a batch), the same filter weights are used multiple times across input feature maps.

If we can harness the three types of data reuse by storing the data in the local memory hierarchy and accessing it multiple times without going back to the DRAM, we can avoid a significant number of DRAM accesses. For example, in AlexNet, the number of DRAM reads can be reduced by up to 500× in the CONV layers. The local memory can also be used for partial sum accumulation, so that the partial sums do not have to reach DRAM. In the best case, if all data reuse and accumulation can be achieved by the local memory hierarchy, the 3000M DRAM accesses in AlexNet can be reduced to only 61M.

The operation of DNN accelerators is analogous to that of general-purpose processors, as illustrated in Fig. <ref> <cit.>. In conventional computer systems, the compiler translates the program into machine-readable binary codes for execution given the hardware architecture (e.g., x86 or ARM); in the processing of DNNs, the mapper translates the DNN shape and size into a hardware-compatible computation mapping for execution given the dataflow. While the compiler usually optimizes for performance, the mapper optimizes for energy efficiency.

The following taxonomy (Fig. <ref>) can be used to classify the DNN dataflows in recent works <cit.> based on their data handling characteristics <cit.>:

§.§.§ Weight stationary (WS)

The weight stationary dataflow is designed to minimize the energy consumption of reading weights by maximizing the accesses of weights from the register file (RF) at the PE (Fig. <ref>). Each weight is read from DRAM into the RF of each PE and stays stationary for further accesses. The processing runs as many MACs that use the same weight as possible while the weight is present in the RF; this maximizes convolutional and filter reuse of weights.
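In loop-nest form (a software analogy of the hardware schedule, not a model of any particular chip), weight stationary corresponds to fixing each weight in the outer loop and performing all MACs that use it in the inner loop; a minimal 1-D sketch:

```python
def conv1d_weight_stationary(ifmap, filt):
    """1-D convolution scheduled weight stationary: each weight is loaded
    once (outer loop) and reused across all outputs (inner loop)."""
    out = [0.0] * (len(ifmap) - len(filt) + 1)
    for s, w in enumerate(filt):      # the weight w stays "stationary" here
        for x in range(len(out)):     # all MACs that use this weight
            out[x] += w * ifmap[x + s]
    return out

print(conv1d_weight_stationary([1, 2, 3, 4, 5], [1, 0, -1]))  # [-2, -2, -2]
```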
Meanwhile, the inputs and partial sums must move through the spatial array and global buffer. The input fmap activations are broadcast to all PEs and the partial sums are then spatially accumulated across the PE array.

One example of previous work that implements the weight stationary dataflow is nn-X, or neuFlow <cit.>, which uses eight 2-D convolution engines for processing a 10×10 filter. There are a total of 100 MAC units, i.e., PEs, per engine, with each PE holding a weight that stays stationary for processing. The input fmap activations are broadcast to all MAC units and the partial sums are accumulated across the MAC units. In order to accumulate the partial sums correctly, additional delay storage elements are required, which count toward the required size of local storage. Other weight stationary examples are found in <cit.>.

§.§.§ Output stationary (OS)

The output stationary dataflow is designed to minimize the energy consumption of reading and writing the partial sums (Fig. <ref>). It keeps the accumulation of partial sums for the same output activation value local in the RF. In order to keep the accumulation of partial sums stationary in the RF, one common implementation is to stream the input activations across the PE array and broadcast the weight to all PEs in the array.

One example that implements the output stationary dataflow is ShiDianNao <cit.>, where each PE handles the processing for each output activation value by fetching the corresponding input activations from neighboring PEs. The PE array implements dedicated networks to pass data horizontally and vertically. Each PE also has data delay registers to keep data around for the required number of cycles. At the system level, the global buffer streams the input activations and broadcasts the weights into the PE array. The partial sums are accumulated inside each PE and are then streamed back out to the global buffer. Other examples of output stationary are found in <cit.>.

There are multiple possible variants of output stationary, as shown in Fig. <ref>, since the output activations that get processed at the same time can come from different dimensions. For example, the variant OS_A targets the processing of CONV layers, and therefore focuses on processing output activations from the same channel at a time in order to maximize data reuse opportunities. The variant OS_C targets the processing of FC layers, and focuses on generating output activations from all different channels, since each channel only has one output activation. The variant OS_B is something in between OS_A and OS_C. Examples of variants OS_A, OS_B, and OS_C are <cit.>, <cit.>, and <cit.>, respectively.

§.§.§ No local reuse (NLR)

While small register files are efficient in terms of energy (pJ/bit), they are inefficient in terms of area (μm^2/bit). In order to maximize the storage capacity and minimize the off-chip memory bandwidth, no local storage is allocated to the PE; instead, all of that area is allocated to the global buffer to increase its capacity (Fig. <ref>). The no local reuse dataflow differs from the previous dataflows in that nothing stays stationary inside the PE array. As a result, there will be increased traffic on the spatial array and to the global buffer for all data types. Specifically, it has to multicast the activations, single-cast the filter weights, and then spatially accumulate the partial sums across the PE array.
In an example of the no local reuse dataflow from UCLA <cit.>, the filter weights and input activations are read from the global buffer, processed by the MAC units with custom adder trees that can complete the accumulation in a single cycle, and the resulting partial sums or output activations are then written back to the global buffer. Another example is DianNao <cit.>, which also reads input activations and filter weights from the buffer, and processes them through the MAC units with custom adder trees. However, DianNao implements specialized registers to keep the partial sums in the PE array, which helps to further reduce the energy consumption of accessing partial sums. Another example of the no local reuse dataflow is found in <cit.>.

§.§.§ Row stationary (RS)

A row stationary dataflow is proposed in <cit.>, which aims to maximize the reuse and accumulation at the RF level for all types of data (weights, pixels, partial sums) for overall energy efficiency. This differs from the WS or OS dataflows, which optimize only for weights or partial sums, respectively.

The row stationary dataflow assigns the processing of a 1-D row convolution to each PE, as shown in Fig. <ref>. It keeps the row of filter weights stationary inside the RF of the PE and then streams the input activations into the PE. The PE performs the MACs for one sliding window at a time, using just one memory space for the accumulation of partial sums. Since there are overlaps of input activations between different sliding windows, the input activations can be kept in the RF and reused. By going through all the sliding windows in the row, the PE completes the 1-D convolution and maximizes the data reuse and local accumulation of data in this row.

With each PE processing a 1-D convolution, multiple PEs can be aggregated to complete the 2-D convolution, as shown in Fig. <ref>. For example, to generate the first row of output activations with a filter having three rows, three 1-D convolutions are required. Therefore, we can use three PEs in a column, each running one of the three 1-D convolutions. The partial sums are further accumulated vertically across the three PEs to generate the first output row. To generate the second row of output, we use another column of PEs, where three rows of input activations are shifted down by one row, and the same rows of filters are used to perform the three 1-D convolutions. Additional columns of PEs are added until all rows of the output are completed (i.e., the number of PE columns equals the number of output rows).

This 2-D array of PEs enables other forms of reuse to reduce accesses to the more expensive global buffer. For example, each filter row is reused across multiple PEs horizontally, each row of input activations is reused across multiple PEs diagonally, and each row of partial sums is further accumulated across the PEs vertically. Therefore, 2-D convolutional data reuse and accumulation are maximized inside the 2-D PE array.

To address the high-dimensional convolution of the CONV layer (i.e., multiple fmaps, filters, and channels), multiple rows can be mapped onto the same PE, as shown in Fig. <ref>. The 2-D convolution is mapped to a set of PEs, and the additional dimensions are handled by interleaving or concatenating the additional data. For filter reuse within the PE, different rows of fmaps are concatenated and run through the same PE as a 1-D convolution.
For input fmap reuse within the PE, different filter rows are interleaved and run through the same PE as a 1-D convolution. Finally, to increase local partial sum accumulation within the PE, filter rows and fmap rows from different channels are interleaved and run through the same PE as a 1-D convolution. The partial sums from different channels then naturally get accumulated inside the PE.

The number of filters, channels, and fmaps that can be processed at the same time is programmable, and there exists an optimal mapping for the best energy efficiency, which depends on the shape configuration of the DNN as well as the hardware resources provided, e.g., the number of PEs and the size of the memory in the hierarchy. Since all of these variables are known before runtime, it is possible to build a compiler (i.e., mapper) that performs this optimization off-line to configure the hardware for different mappings of the RS dataflow for different DNNs, as shown in Fig. <ref>.

One example that implements the row stationary dataflow is Eyeriss <cit.>. It consists of a 14×12 PE array, a 108KB global buffer, ReLU and fmap compression units, as shown in Fig. <ref>. The chip communicates with the off-chip DRAM using a 64-bit bidirectional data bus to fetch data into the global buffer. The global buffer then streams the data into the PE array for processing.

In order to support the RS dataflow, two problems need to be solved in the hardware design. First, how can the fixed-size PE array accommodate different layer shapes? Second, although the data will be passed in a very specific pattern, the pattern still changes with different shape configurations; how can the fixed design pass data in different patterns?

Two mapping strategies can be used to solve the first problem, as shown in Fig. <ref>. First, replication can be used to map shapes that do not use up the entire PE array. For example, in the third to fifth layers of AlexNet, each 2-D convolution only uses a 13×3 PE array. This structure is then replicated four times, and runs different channels and filters in each replication. The second strategy is called folding (a toy code sketch is given below). For example, the second layer of AlexNet requires a 27×5 PE array to complete the 2-D convolution. In order to fit it into the 14×12 physical PE array, it is folded into two parts, 14×5 and 13×5, and each is mapped vertically into the physical PE array. Since not all PEs are used by the mapping, the unused PEs can be clock-gated to save energy.

A custom multicast network is used to solve the second problem of flexible data delivery. The simplest way to pass data to multiple destinations is to broadcast the data to all PEs and let each PE decide whether it has to process the data or not. However, this is not very energy efficient, especially when the PE array is large. Instead, a multicast network is used to send data only to the places where it is needed.

§.§.§ Energy comparison of different dataflows

To evaluate and compare the different dataflows, the same total hardware area and number of PEs (256) are used in the simulation of a spatial architecture for all dataflows. The local memory (register file) at each processing element (PE) is on the order of 0.5–1.0 kB and the shared memory (global buffer) is on the order of 100–500 kB. The sizes of these memories are selected to be comparable to a typical accelerator for multimedia processing, such as video coding <cit.>. The memory sizes are further adjusted for the needs of each dataflow under the same area constraint.
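Returning to the folding strategy described above, the following toy sketch (our own simplification, not the actual Eyeriss mapper) splits a logical PE-array dimension that exceeds the physical array into vertically mapped segments:

```python
# Toy folding sketch: a logical PE-array dimension larger than the larger
# physical dimension (14 in Eyeriss' 14x12 array) is split into segments
# of at most that size; each segment is mapped onto the physical array.
PHYS_DIM = 14

def fold(logical_rows, logical_cols):
    """Split a logical PE array into segments that fit the physical array."""
    segments = []
    remaining = logical_rows
    while remaining > 0:
        seg = min(remaining, PHYS_DIM)
        segments.append((seg, logical_cols))
        remaining -= seg
    return segments

# AlexNet layer 2 needs a 27x5 logical array: folded into 14x5 and 13x5.
print(fold(27, 5))  # [(14, 5), (13, 5)]
```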
As an example of this adjustment, since the no local reuse dataflow does not require any RF in the PE, it is allocated a much larger global buffer. The simulation uses the layer configurations from AlexNet with a batch size of 16. The simulation also takes into account the fact that accessing different levels of the memory hierarchy incurs different energy costs.

Fig. <ref> compares the chip and DRAM energy consumption of each dataflow for the CONV layers of AlexNet with a batch size of 16. The WS and OS dataflows have the lowest energy consumption for accessing weights and partial sums, respectively. However, the RS dataflow has the lowest total energy consumption since it optimizes for the overall energy efficiency instead of only for a certain data type.

Fig. <ref> shows the same results with a breakdown in terms of the memory hierarchy. The RS dataflow consumes the most energy in the RF, since by design most of the accesses have been moved to the lowest level of the memory hierarchy. This helps to achieve the lowest total energy consumption, since the RF has the lowest energy per access. The NLR dataflow has the lowest energy consumption at the DRAM level, since it has a much larger global buffer and thus higher on-chip storage capacity compared to the others. However, most of the data accesses in the NLR dataflow are from the global buffer, which still has a relatively large energy consumption per access compared to accessing data from the RF or inside the PE array. As a result, the overall energy consumption of the NLR dataflow is still fairly high. Overall, the RS dataflow uses 1.4× to 2.5× lower energy than the other dataflows.

Fig. <ref> compares the energy efficiency of the different dataflows for the FC layers of AlexNet with a batch size of 16. Since there is not as much data reuse in the FC layers as in the CONV layers, all dataflows spend a significant amount of energy on reading weights. However, the RS dataflow still has the lowest energy consumption because it optimizes for the energy of accessing input activations and partial sums. For the OS dataflows, OS_C now consumes lower energy than OS_A since it is designed for the FC layers. Overall, RS still consumes 1.3× lower energy compared to the other dataflows at a batch size of 16.

Fig. <ref> shows the energy breakdown of the RS dataflow across the different layers of AlexNet. In the CONV layers, the energy is mostly consumed by the RF, while in the FC layers, the energy is mostly consumed by DRAM. However, most of the energy is consumed by the CONV layers, which account for around 80% of the total. As recent DNN models go deeper with more CONV layers, the ratio between the number of CONV and FC layers only gets larger. Therefore, moving forward, significant effort should be placed on energy optimizations for CONV layers.

Finally, up until now, we have been looking at architectures with relatively limited storage on the order of a few hundred kilobytes. With much larger storage on the order of a few megabytes, additional dataflows can be considered. For example, Fused-Layer looks at dataflow optimizations across layers <cit.>.

§ NEAR-DATA PROCESSING

The previous section highlighted that data movement dominates energy consumption. While spatial architectures distribute the on-chip memory such that it is closer to the computation (e.g., into the PE), there have also been efforts to bring the off-chip high-density memory closer to the computation, or to integrate the computation into the memory itself; the latter is often referred to as processing-in-memory or logic-in-memory.
In embedded systems, there have also been efforts to bring the computation into the sensor, where the data is first collected. In this section, we will discuss how moving compute and data closer together to reduce data movement (i.e., near-data processing) can be achieved using mixed-signal circuit design and advanced memory technologies.

Many of these works use analog processing, which has the drawback of increased sensitivity to circuit and device non-idealities. Consequently, the computation is often performed at reduced precision, which can be accounted for during the training of the DNNs using the techniques discussed in Section <ref>. Another factor to take into consideration is that DNNs are often trained in the digital domain; thus, for analog processing, there is an additional overhead cost for analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC).

§.§ DRAM

Advanced memory technology can reduce the access energy for high-density memories such as DRAM. For instance, embedded DRAM (eDRAM) brings high-density memory on-chip to avoid the high energy cost of switching off-chip capacitance <cit.>; eDRAM is 2.85× denser than SRAM and 321× more energy efficient than DRAM (DDR3) <cit.>. eDRAM also offers higher bandwidth and lower latency compared to DRAM. In DNN processing, eDRAM can be used to store tens of megabytes of weights and activations on-chip to avoid off-chip access, as demonstrated in DaDianNao <cit.>. The downside of eDRAM is that it has lower density than off-chip DRAM and can increase the cost of the chip.

Rather than integrating DRAM into the chip itself, the DRAM can also be stacked on top of the chip using through-silicon vias (TSV). This technology is often referred to as 3-D memory, and has been commercialized in the form of the Hybrid Memory Cube (HMC) <cit.> and High Bandwidth Memory (HBM) <cit.>. 3-D memory delivers an order of magnitude higher bandwidth and reduces access energy by up to 5× relative to existing 2-D DRAMs, as TSVs have lower capacitance than typical off-chip interconnects. Recent works have explored the use of HMC for efficient DNN processing in a variety of ways. For instance, Neurocube <cit.> integrates SIMD processors into the logic die of the HMC to bring the memory and computation closer together. Tetris <cit.> explores the use of HMC with the Eyeriss spatial architecture and row stationary dataflow. It proposes allocating more area to computation than to on-chip memory (i.e., a larger PE array and smaller global buffer) in order to exploit the low-energy and high-throughput properties of the HMC. It also adapts the dataflow to account for the HMC memory and the smaller on-chip memory. Tetris achieves a 1.5× reduction in energy consumption and a 4.1× increase in throughput over a baseline system with conventional 2-D DRAM.

§.§ SRAM

Rather than bringing the memory near the compute, recent work has also investigated bringing the compute into the memory. For instance, the multiply-and-accumulate operation can be directly integrated into the bit-cells of an SRAM array <cit.>, as shown in Fig. <ref>.
In this work, a 5-bit DAC is used to drive the word line (WL) to an analog voltage that represents the feature vector, while the bit-cells store the binary weights ±1. The bit-cell current (I_BC) is effectively a product of the value of the feature vector and the value of the weight stored in the bit-cell; the currents from the bit-cells within a column add together to discharge the bitline (V_BL). This approach gives 12× energy savings compared to reading the 1-bit weights from the SRAM and performing the computation separately. To counter circuit non-idealities, the DAC accounts for the non-linear bit-line discharge with respect to the WL voltage, and boosting is used to combine the weak classifiers, which are susceptible to device variations, into a strong classifier <cit.>.

§.§ Non-volatile Resistive Memories

The multiply-and-accumulate operation can also be directly integrated into advanced non-volatile high-density memories by using them as programmable resistive elements, commonly referred to as memristors <cit.>. Specifically, a multiplication is performed with the resistor's conductance as the weight, the voltage as the input, and the current as the output, as shown in Fig. <ref>. The addition is done by summing the currents of different memristors using Kirchhoff's current law. This is the ultimate form of a weight stationary dataflow, as the weights are always held in place. The advantages of this approach include reduced energy consumption, since the computation is embedded within the memory, which reduces data movement, and increased density, since memory and computation can be densely packed with a density similar to DRAM <cit.>.[The resistive devices can be inserted at the cross-point of two wires and in certain cases can avoid the need for an access transistor.]

There are several popular candidates for non-volatile resistive memory devices, including phase change memory (PCM), resistive RAM (RRAM or ReRAM), conductive bridge RAM (CBRAM), and spin-transfer torque magnetic RAM (STT-MRAM) <cit.>. These devices have different trade-offs in terms of endurance (i.e., how many times they can be written), retention time, write current, density (i.e., cell size), variations and speed.

Processing with non-volatile resistive memories has several drawbacks, as described in <cit.>. First, it suffers from the reduced precision and ADC/DAC overhead of analog processing described earlier. Second, the array size is limited by the wires that connect the resistive devices; specifically, wire energy dominates for large arrays (e.g., 1k×1k), and the IR drop along the wires can degrade the read accuracy. Third, the write energy to program the resistive devices can be costly, in some cases requiring multiple pulses. Finally, the resistive devices can also suffer from device-to-device and cycle-to-cycle variations with non-linear conductance across the conductance range.

There have been several recent works that explore the use of memristors for DNNs. ISAAC <cit.> replaces the eDRAM in DaDianNao with memristors. To address the limited precision support, ISAAC computes a 16-bit dot product operation with 8 memristors, each storing 2 bits; a 1-bit×2-bit multiplication is performed at each memristor, and a 16-bit input requires 16 cycles to complete. In other words, the ISAAC architecture trades off area and time for increased precision. Finally, ISAAC arranges its 25.1M memristors in a hierarchical structure to avoid issues with large arrays.
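As a behavioral sketch of the resistive dot product described above (ignoring the non-idealities just listed), each column computes a MAC via Ohm's law and Kirchhoff's current law; the values below are arbitrary:

```python
def memristor_column_mac(voltages, conductances):
    """One column (bitline) of a memristor array: per-cell currents
    I = V * G (Ohm's law) sum along the column (Kirchhoff's current law).
    voltages: input activations encoded as word-line voltages;
    conductances: weights programmed as memristor conductances (siemens)."""
    return sum(v * g for v, g in zip(voltages, conductances))

# Example: four inputs driving one output column.
print(memristor_column_mac([0.2, 0.5, 0.1, 0.9], [1e-6, 2e-6, 4e-6, 1e-6]))
```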
PRIME <cit.> also replaces the DRAM main memory with memristors; specifically, it uses 256×256 memristor arrays that can be configured for 4-bit multi-level cell computation or 1-bit single-level cell storage. It should be noted that the results from ISAAC and PRIME are obtained from simulations. The task of actually fabricating large memristor arrays is still very much a research challenge; for instance, <cit.> uses a fabricated 12×12 memristor array to demonstrate a linear classifier.

§.§ Sensors

In certain applications, such as image processing, the data movement from the sensor itself can account for a significant portion of the system energy consumption. Thus, there has also been research on performing the computation as close as possible to the sensor. In particular, much of the work focuses on moving the computation into the analog domain to avoid using the ADC within the sensor, which accounts for a significant portion of the sensor power. However, as mentioned earlier, lower precision is required for analog computation due to circuit non-idealities.

In <cit.>, the matrix multiplication is integrated into the ADC, where the most significant bits of the multiplications are performed using switched capacitors in an 8-bit successive approximation format. This is extended in <cit.> to perform not only the multiplications but also the accumulations in the analog domain. In this work, it is assumed that 3 bits and 6 bits are sufficient to represent the weights and activations, respectively. This reduces the number of ADC conversions in the sensor by 21×. RedEye <cit.> takes this approach even further by performing the entire convolution layer (including convolution, max pooling and quantization) in the analog domain at the sensor. It should be noted that <cit.> and <cit.> report measured results from fabricated test chips, while the results in <cit.> are from simulations.

It is also feasible to embed the computation not just before the ADC, but into the sensor itself. For instance, in <cit.> an Angle Sensitive Pixels sensor is used to compute the gradient of the input, which, along with compression, reduces the data movement from the sensor by 10×. In addition, since the first layer of the DNN often outputs a gradient-like feature map, it may be possible to skip the computations in the first layer, which further reduces energy consumption, as discussed in <cit.>.

§ CO-DESIGN OF DNN MODELS AND HARDWARE

In earlier work, DNN models were designed to maximize accuracy without much consideration of the implementation complexity. However, this can lead to designs that are challenging to implement and deploy. To address this, recent work has shown that DNN models and hardware can be co-designed to jointly maximize accuracy and throughput, while minimizing energy and cost, which increases the likelihood of adoption. In this section, we will highlight various efforts that have been made towards the co-design of DNN models and hardware. Note that unlike Section <ref>, the techniques discussed in this section can affect the accuracy; thus, the goal is not only to substantially reduce energy consumption and increase throughput, but also to minimize any degradation in accuracy.

The co-design approaches can be loosely grouped into the following categories:

* Reduce precision of operations and operands. This includes going from floating point to fixed point, reducing the bitwidth, non-linear quantization and weight sharing.

* Reduce number of operations and model size.
This includes techniques such as compression, pruning and compact network architectures.

§.§ Reduce Precision

Quantization involves mapping data to a smaller set of quantization levels. The ultimate goal is to minimize the error between the reconstructed data from the quantization levels and the original data. The number of quantization levels reflects the precision and ultimately the number of bits required to represent the data (usually log_2 of the number of levels); thus, reduced precision refers to reducing the number of levels, and thereby the number of bits. The benefits of reduced precision include reduced storage cost and/or reduced computation requirements.

There are several ways to map the data to quantization levels. The simplest method is a linear mapping with uniform distance between each quantization level (Fig. <ref>). Another approach is to use a simple mapping function such as a log function (Fig. <ref>), where the distance between the levels varies; this mapping can often be implemented with simple logic such as a shift. Alternatively, a more complex mapping function can be used, where the quantization levels are determined or learned from the data (Fig. <ref>), e.g., using k-means clustering; for this approach, the mapping is usually implemented with a look-up table. Finally, the quantization can be fixed (i.e., the same method of quantization is used for all data types and layers, filters, and channels in the network); or it can be variable (i.e., different methods of quantization can be used for weights and activations, and for different layers, filters, and channels in the network).

Reduced precision research initially focused on reducing the precision of the weights rather than the activations, since weights directly increase the storage capacity requirement, while the impact of activations on storage capacity depends on the network architecture and dataflow. However, more recent works have also started to look at the impact of quantization on activations. Most reduced precision research also focuses on reducing the precision for inference rather than training (with some exceptions <cit.>) due to the sensitivity of the gradients to quantization.

The key techniques used in recent work to reduce precision are summarized in Table <ref>; both linear and non-linear quantization applied to weights and activations are explored. The impact on accuracy is reported relative to a baseline precision of 32-bit floating point, which is the default precision used on platforms such as GPUs and CPUs.

§.§.§ Linear quantization

The first step of reducing precision is usually to convert values and operations from floating point to fixed point. A 32-bit floating point number, as shown in Fig. <ref>, is represented by (-1)^s × m × 2^(e-127), where s is the sign bit, e is the 8-bit exponent, and m is the 23-bit mantissa, and covers the range of 10^-38 to 10^38. An N-bit fixed point number is represented by (-1)^s × m × 2^-f, where s is the sign bit, m is the (N-1)-bit mantissa, and f determines the location of the decimal point and acts as a scale factor. For instance, for an 8-bit integer, when f=0, the dynamic range is -128 to 127, whereas when f=10, the dynamic range is -0.125 to 0.124023438.

Dynamic fixed point representation allows f to vary based on the desired dynamic range, as shown in Fig. <ref>. This is useful for DNNs, since the dynamic ranges of the weights and activations can be quite different. In addition, the dynamic range can also vary across layers and layer types (e.g., convolutional vs.
fully connected). Using dynamic fixed point, the bitwidth can be reduced to 8 bits for the weights and 10 bits for the activations without any fine-tuning of the weights <cit.>; with fine-tuning, both weights and activations can reach 8 bits <cit.>.

Using 8-bit fixed point has the following impact on energy and area <cit.>:

* An 8-bit fixed point add consumes 3.3× less energy (3.8× less area) than a 32-bit fixed point add, and 30× less energy (116× less area) than a 32-bit floating point add. The energy and area of a fixed-point add scale approximately linearly with the number of bits.

* An 8-bit fixed point multiply consumes 15.5× less energy (12.4× less area) than a 32-bit fixed point multiply, and 18.5× less energy (27.5× less area) than a 32-bit floating point multiply. The energy and area of a fixed-point multiply scale approximately quadratically with the number of bits.

Reducing the precision also reduces the energy and area cost of storage, which is important since memory access and data movement dominate energy consumption, as described earlier. The energy and area of a memory scale approximately linearly with the number of bits. It should be noted, however, that changing from floating point to fixed point, without reducing the bit-width, does not reduce the energy or area cost of the memory.

For completeness, it should be noted that the precision of the internal values of a fixed-point multiply-and-accumulate (MAC) operation is typically higher than that of the weights and activations. To guarantee no precision loss, weights and input activations with N-bit fixed-point precision would require an N-bit×N-bit multiplication, which generates a 2N-bit output product; that output would need to be accumulated with 2N+M-bit precision, where M is determined by the largest filter size, log_2(C×R×S) (from Fig. <ref>), which is in the range of 10 to 16 bits for the popular DNNs described in Section <ref>. After accumulation, the precision of the final output activation is typically reduced to N bits <cit.>, as shown in Fig. <ref>. The reduced output precision does not have a significant impact on accuracy if the distributions of the weights and activations are centered near zero, such that the accumulation does not move only in one direction; this is particularly true when batch normalization is used.

Reduced precision is not only explored in research, but has also been used in recent commercial platforms for DNN processing. For instance, Google's Tensor Processing Unit (TPU), which was announced in May 2016, was designed for 8-bit integer arithmetic <cit.>. Similarly, Nvidia's PASCAL GPU, which was announced in April 2016, also has 8-bit integer instructions for deep learning inference <cit.>. In general-purpose platforms such as CPUs and GPUs, the main benefit of using 8-bit computation is an increase in throughput, as four 8-bit operations rather than one 32-bit operation can be performed in a given clock cycle.

While general-purpose platforms usually support 8-bit, 16-bit and/or 32-bit operations, it has been shown that the minimum bit precision for DNNs can actually vary in a more fine-grained manner. For instance, the weight and activation precision can vary between 4 and 9 bits for AlexNet across different layers without significant impact on accuracy (i.e., a change of less than 1%) <cit.>. This fine-grained variation can be exploited for increased throughput or reduced energy consumption with specialized hardware.
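To make the linear quantization described above concrete, here is a minimal sketch of an N-bit dynamic fixed-point conversion; the function name, the rounding rule, and the saturation behavior are our own illustrative choices:

```python
import numpy as np

def to_fixed(x, n_bits=8, f=6):
    """Quantize to N-bit dynamic fixed point: value = m * 2^-f with a
    signed (N-1)-bit mantissa m; f sets the dynamic range."""
    lo, hi = -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1
    m = np.clip(np.round(x * 2.0 ** f), lo, hi)  # round, then saturate
    return m * 2.0 ** -f

w = np.array([0.731, -0.052, 0.004, -2.3])
print(to_fixed(w, n_bits=8, f=6))  # [0.734375, -0.046875, 0.0, -2.0]
```

Note how the last value saturates: with f=6 the representable range is -2.0 to about 1.98, which is why f is chosen per layer or per data type to match the observed dynamic range.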
For instance, specialized hardware that uses bit-serial processing, where the number of clock cycles to complete an operation is proportional to the bitwidth, can adapt to fine-grained variations in bit precision for a 2.24× speed-up versus 16 bits <cit.>. Alternatively, a multiplier can be designed such that its critical path shortens with the bit precision, as fewer adders are needed to resolve the product; this can be combined with voltage scaling for a 2.56× energy savings versus 16 bits <cit.>. While these bit-scaling results are reported relative to 16 bits, it would be interesting to see their impact relative to the maximum precision required across layers (i.e., 9 bits for <cit.>).

The precision can be reduced even more aggressively to a single bit; this area of research is often referred to as binary nets. BinaryConnect (BC) <cit.> introduced the concept of binary weights (i.e., -1 and 1), where using a binary weight reduces the multiplication in the MAC to addition and subtraction only. This was later extended in Binarized Neural Networks (BNN) <cit.>, which uses binary weights and activations, reducing the MAC to an XNOR. However, BC and BNN have an accuracy loss of 19% and 29.8%, respectively <cit.>.

In order to reduce this accuracy loss, Binary Weight Nets (BWN) and XNOR-Nets introduced several significant modifications to the DNN processing <cit.>. This includes multiplying the outputs with a scale factor to recover the dynamic range (i.e., the weights effectively become -w and w, where w is the average of the absolute values of the weights in the filter)[This can also be thought of as a form of weight sharing, where only two weights are used per filter.], keeping the first and last layers at 32-bit floating point precision, and performing normalization before convolution to reduce the dynamic range of the activations. With these changes, BWN reduced the accuracy loss to 0.8%, while XNOR-Nets reduced the loss to 11%. The loss of XNOR-Net can be further reduced by increasing the precision of the activations to be slightly larger than one bit. For instance, Quantized Neural Networks (QNN) <cit.>, DoReFa-Net <cit.>, and HWGQ-Net <cit.> allow the activations to have 2 bits, while the weights remain at 1 bit; in HWGQ-Net, this reduces the accuracy loss to 5.2%.

All the previously described binary nets limit the weights to two values (-w and w); however, there may be benefits to allowing weights to be zero (i.e., -w, 0, w). Although this requires an additional bit per weight compared to binary weights, the sparsity of the weights can be exploited to reduce computation and storage cost, which can potentially cancel out the cost of the additional bit. This is explored in Ternary Weight Nets (TWN) <cit.> and extended in Trained Ternary Quantization (TTQ), where a different scale is trained for each weight (i.e., -w_1, 0, w_2), for an accuracy loss of 0.6% <cit.>, assuming 32-bit floating point for the activations.

Hardware implementations for binary/ternary nets have been explored in recent publications. YodaNN <cit.> uses binary weights, while BRein <cit.> uses binary weights and activations. Binary weights are also used in the compute-in-SRAM work <cit.> described in Section <ref>. Finally, the nominally spike-inspired TrueNorth chip can implement a reduced-precision neural network with binary activations and ternary weights using TrueNorth's quantized weight table <cit.>.
These works tend not to support state-of-the-art DNN models (with the exception of YodaNN).

§.§.§ Non-linear quantization

The works described above involve linear quantization, where the levels are uniformly spaced. It has been shown that the distributions of the weights and activations are not uniform <cit.>, and thus a non-linear quantization can potentially improve accuracy. Specifically, two popular approaches have been taken in recent works: (1) log domain quantization; (2) learned quantization or weight sharing.

Log domain quantization If the quantization levels are assigned based on a logarithmic distribution, as shown in Fig. <ref>, the weights and activations are more equally distributed across the different levels and each level is used more efficiently, resulting in less quantization error. For instance, using 4 bits in linear quantization results in a 27.8% loss in accuracy versus a 5% loss for log base-2 quantization for VGG-16 <cit.>. Furthermore, when weights are quantized to powers of two, the multiplication can be replaced with a bit-shift <cit.>.[Note however that multiplications do not account for a significant portion of the total energy.] Incremental Network Quantization (INQ) can be used to further reduce the loss in accuracy by dividing the large and small weights into different groups, and then iteratively quantizing and re-training the weights <cit.>.

Weight Sharing forces several weights to share a single value. This reduces the number of unique weights in a filter or a layer. One example is to group the weights by using a hashing function and use one value for each group <cit.>. Alternatively, the weights can be grouped by the k-means algorithm <cit.>. Both the shared weights and the indexes indicating which weight to use at each position of the filter are stored. This leads to a two-step process to fetch the weight: (1) read the weight index; (2) using the weight index, read the shared weights. This approach can reduce the cost of reading and storing the weights if the weight index (log_2 of the number of unique weights) is less than the bitwidth of the weight itself.

For instance, in Deep Compression <cit.>, the number of unique weights per layer is reduced to 256 for convolutional layers and 16 for fully-connected layers in AlexNet, requiring 8-bit and 4-bit weight indexes, respectively. Assuming there are U unique weights and the size of the filters in the layer is C×R×S×M (from Fig. <ref>), there will be energy savings if reading from a CRSM×log_2U-bit memory plus a U×16-bit memory (as shown in Fig. <ref>) costs less than reading from a CRSM×16-bit memory. Note that unlike the previous quantization methods, the weight sharing approach does not reduce the precision of the MAC computation itself; it only reduces the weight storage requirement.

§.§ Reduce Number of Operations and Model Size

In addition to reducing the size of each operation or operand (weight/activation), there is also a significant amount of research on methods to reduce the number of operations and the model size. These techniques can be loosely classified as exploiting activation statistics, network pruning, network architecture design and knowledge distillation.

§.§.§ Exploiting Activation Statistics

As discussed in Section <ref>, ReLU is a popular form of non-linearity used in DNNs that sets all negative values to zero, as shown in Fig.
<ref>. As a result, the output activations of the feature maps after the ReLU are sparse; for instance, the feature maps in AlexNet have sparsity between 19% and 63%, as shown in Fig. <ref>. This sparsity gives ReLU an implementation advantage over other non-linearities such as the sigmoid.

The sparsity can be exploited for energy and area savings using compression, particularly for off-chip DRAM access, which is expensive. For instance, a simple run-length coding that involves signaling non-zero values of 16 bits and then runs of zeros up to 31 can reduce the external memory bandwidth of the activations by 2.1× and the overall external bandwidth (including weights) by 1.5× <cit.>.[This simple run-length compression is within 5-10% of the theoretical entropy limit.] In addition to compression, the hardware can also be modified so that it skips reading the weights and performing the MAC for zero-valued activations, reducing energy cost by 45% <cit.>. Rather than just gating the read and MAC computation, the hardware can also skip the cycle to increase throughput by 1.37× <cit.>.

The activations can be made even more sparse by pruning the low-valued activations. For instance, if all activations with small values are pruned, this can be translated into an additional 11% speed-up <cit.> or 2× power reduction <cit.> with little impact on accuracy. Aggressively pruning more activations can provide additional throughput improvement at the cost of reduced accuracy.

§.§.§ Network Pruning

To make network training easier, networks are usually over-parameterized. Therefore, a large amount of the weights in a network are redundant and can be removed (i.e., set to zero). This process is called network pruning. Aggressive network pruning often requires some fine-tuning of the weights to maintain the original accuracy. This was first proposed in 1989 through a technique called Optimal Brain Damage <cit.>. The idea was to compute the impact of each weight on the training loss (discussed in Section <ref>), referred to as the weight saliency. The low-saliency weights were removed and the remaining weights were fine-tuned; this process was repeated until the desired weight reduction and accuracy were reached.

In 2015, a similar idea was applied to modern DNNs in <cit.>. Rather than using the saliency as a metric, which is too difficult to compute for large-scale DNNs, the pruning was simply based on the magnitude of the weights. Small weights were pruned and the model was fine-tuned to restore the accuracy. Without fine-tuning the weights, about 50% of the weights could be pruned; with fine-tuning, over 80% of the weights were pruned. Overall, this approach can reduce the number of weights in AlexNet by 9× and the number of MACs by 3×. Most of the weight reduction comes from the fully-connected layers (9.9× for fully-connected layers versus 2.7× for convolutional layers).

However, the number of weights alone is not a good metric for energy. For instance, in AlexNet, the number of weights in the fully-connected layers is much larger than in the convolutional layers; however, the energy of the convolutional layers is much higher than that of the fully-connected layers, as shown in Fig. <ref> <cit.>. Rather than using the number of weights and MAC operations as proxies for energy, the pruning of the weights can be directly driven by energy itself <cit.>.
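A minimal sketch of the magnitude-based pruning step described above (the thresholding rule is a simplification, and the fine-tuning that would follow each pruning pass is omitted):

```python
import numpy as np

def magnitude_prune(weights, fraction=0.5):
    """Zero out the given fraction of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    threshold = np.sort(flat)[int(fraction * flat.size)]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned, mask = magnitude_prune(w, fraction=0.5)
print(mask.mean())  # ~0.5 of the weights remain non-zero
```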
For such energy-driven pruning, an energy evaluation method can be used to estimate the DNN energy, accounting for the data movement from different levels of the memory hierarchy, the number of MACs, and the data sparsity, as shown in Fig. <ref>; this energy estimation tool is available at <cit.>. The resulting energy values for popular DNN models are shown in Fig. <ref>. Energy-aware pruning can then be used to prune weights based on energy to reduce the overall energy across all layers by 3.7× for AlexNet, which is 1.74× more efficient than magnitude-based approaches <cit.>, as shown in Fig. <ref>. As mentioned previously, it is well known that AlexNet is over-parameterized. The energy-aware pruning can also be applied to GoogleNet, which is already a small DNN model, for a 1.6× energy reduction.

Recent works have examined how to efficiently support the processing of sparse weights in hardware. One area of interest is how best to store the sparse weights after pruning. Similar to compressing the sparse activations discussed in Section <ref>, the sparse weights can be compressed to reduce memory access bandwidth by 20 to 30% <cit.>.

When DNN processing is performed as a matrix-vector multiplication, as shown in Fig. <ref>, one challenge is to determine how to store the sparse weight matrix in a compressed format. The compression can be applied either in row or column order. A compressed sparse row (CSR) format, as shown in Fig. <ref>, is often used to perform sparse matrix-vector multiplication. However, the input vector needs to be read multiple times even though only a subset of it is used, since each row of the matrix is sparse. Alternatively, a compressed sparse column (CSC) format, as shown in Fig. <ref>, can be used, where the output is updated several times and only one element of the input vector is read at a time <cit.>. The CSC format will provide an overall lower memory bandwidth than CSR if the output is smaller than the input or, in the case of DNNs, if the number of filters is not significantly larger than the number of weights in the filter (C×R×S from Fig. <ref>). Since this is often true, CSC can be an effective format for sparse DNN processing.

Custom hardware has been explored to efficiently support pruned DNN models. Many works aim to perform the processing without decompressing the weights or activations. EIE <cit.> performs the sparse matrix-vector multiplication specifically for the fully-connected layers. It stores the weights in a CSC format along with the start location of each column, which needs to be stored since the compressed weights have variable length. When the input is not zero, the compressed weight column is read and the output is updated. To handle the sparsity, additional logic is used to keep track of the location of the output that should be updated. SCNN <cit.> supports the processing of convolutional layers in a compressed format. It uses an input stationary dataflow to deliver the compressed weights and activations to a multiplier array, followed by a scatter network to add the scattered partial sums.

Recent works have also explored the use of structured pruning to avoid the need for custom hardware <cit.>. Rather than pruning individual weights (also referred to as fine-grained pruning), structured pruning involves pruning groups of weights (also referred to as coarse-grained pruning).
Recent works have also explored the use of structured pruning to avoid the need for custom hardware <cit.>. Rather than pruning individual weights (also referred to as fine-grained pruning), structured pruning involves pruning groups of weights (also referred to as coarse-grained pruning). The benefits of structured pruning are (1) the resulting weights can better align with the data-parallel architecture (e.g., SIMD) found in existing general-purpose hardware, which results in more efficient processing <cit.>; (2) it amortizes the overhead cost required to signal the location of the non-zero weights across a group of weights, which improves compression and thus reduces storage cost. These groups of weights can include a pair of neighboring weights, an entire row or column of a filter, an entire channel of a filter, or the entire filter itself; using larger groups tends to result in a higher loss in accuracy <cit.>.

§.§.§ Compact Network Architectures

The number of weights and operations can also be reduced by improving the network architecture itself. The trend is to replace a large filter with a series of smaller filters, which have fewer weights in total; when the filters are applied sequentially, they achieve the same overall effective receptive field (i.e., the region the filter uses from the input image to compute an output). This approach can be applied during the network architecture design (before training) or by decomposing the filters of a trained network (after training). The latter avoids the hassle of training networks from scratch but is less flexible than the former. For example, existing methods can only decompose a filter in a trained network into a series of filters without non-linearity between them.

Before Training In recent DNN models, filters with a smaller width and height are used more frequently because concatenating several of them can emulate a larger filter as shown in Fig. <ref>. For example, one 5×5 convolution can be replaced with two 3×3 convolutions. Alternatively, one N×N convolution can be decomposed into two 1-D convolutions, one 1×N and one N×1 convolution <cit.>; this basically imposes a restriction that the 2-D filter must be separable, which is a common constraint in image processing <cit.>. Similarly, a 3-D convolution can be replaced by a set of 2-D convolutions (i.e., applied only on one of the input channels) followed by 1×1 3-D convolutions as demonstrated in Xception <cit.> and MobileNets <cit.>. The order of the 2-D convolutions and 1×1 3-D convolutions can be switched.

1×1 convolutional layers can also be used to reduce the number of channels in the output feature map for a given layer, which reduces the number of filter channels and thus the computation cost for the filters in the next layer as demonstrated in <cit.>; this is often referred to as a `bottleneck' as discussed in Section <ref>. For this purpose, the number of 1×1 filters has to be less than the number of channels in each 1×1 filter (i.e., the number of input channels). For example, 32 filters of 1×1×64 can transform an input with 64 channels to an output of 32 channels and reduce the number of filter channels in the next layer to 32. SqueezeNet uses many 1×1 filters to aggressively reduce the number of weights <cit.>. It proposes a fire module that first `squeezes' the network with 1×1 convolution filters and then expands it with multiple 1×1 and 3×3 convolution filters. It achieves an overall 50× reduction in the number of weights compared to AlexNet, while maintaining the same accuracy. It should be noted, however, that reducing the number of weights does not necessarily reduce energy; for instance, SqueezeNet consumes more energy than AlexNet, as shown in Fig. <ref>.
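The weight savings from these substitutions can be checked by simple counting. The sketch below compares one 5×5 layer against two stacked 3×3 layers (equal channel counts), and a standard 3-D convolution against its depthwise-separable replacement; the channel sizes are hypothetical.

```python
def conv_weights(num_filters, channels, height, width):
    # Number of weights in a convolutional layer (biases ignored).
    return num_filters * channels * height * width

C, M = 64, 128   # hypothetical input channels and output filters

# One 5x5 layer (C -> C) versus two stacked 3x3 layers (C -> C -> C);
# both have the same 5x5 effective receptive field.
w_5x5 = conv_weights(C, C, 5, 5)          # 102,400
w_two_3x3 = 2 * conv_weights(C, C, 3, 3)  #  73,728

# Standard 3-D convolution (C -> M) versus a depthwise layer (one 2-D filter
# per channel) followed by 1x1 3-D convolutions, as in Xception/MobileNets.
w_standard = conv_weights(M, C, 3, 3)                              # 73,728
w_separable = conv_weights(C, 1, 3, 3) + conv_weights(M, C, 1, 1)  #  8,768

print(w_5x5, w_two_3x3, w_standard, w_separable)
```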
After Training Tensor decomposition can be used to decompose filters in a trained network without impacting the accuracy. It treats the weights in a layer as a 4-D tensor and breaks it into a combination of smaller tensors (i.e., several layers). Low-rank approximation can then be applied to further increase the compression rate at the cost of accuracy degradation, which can be restored by fine-tuning the weights. This approach is demonstrated using Canonical Polyadic (CP) decomposition, a high-order extension of singular value decomposition that can be solved by various methods, such as a greedy algorithm <cit.> or a non-linear least-squares method <cit.>. Combining CP-decomposition with low-rank approximation achieves a 4.5× speed-up on CPUs <cit.>. However, CP-decomposition cannot be computed in a numerically stable way when the dimension of the tensor, which represents the weights, is larger than two <cit.>. To alleviate this problem, Tucker decomposition is adopted instead in <cit.>.

§.§.§ Knowledge Distillation

Using a deep network or averaging the predictions of different models (i.e., an ensemble) gives a better accuracy than using a single shallower network. However, the computational complexity is also higher. To get the best of both worlds, knowledge distillation transfers the knowledge learned by the complex model (teacher) to the simpler model (student). The student network can therefore achieve an accuracy that would be unachievable if it were directly trained with the same dataset <cit.>. For example, <cit.> shows how using knowledge distillation can improve the speech recognition accuracy of a student net by 2%, which is similar to the accuracy of a teacher net that is composed of an ensemble of 10 networks.

Fig. <ref> shows the simplest knowledge distillation method <cit.>. The softmax layer is commonly used as the output layer in image classification networks to generate the class probabilities from the class scores[Also commonly referred to as logits.]; it squashes the class scores into values between 0 and 1 that sum up to 1. For this knowledge distillation method, soft targets (values between 0 and 1) such as the class scores of the teacher DNN (or an ensemble of teacher DNNs) are used instead of the hard targets (values of either 0 or 1) such as the labels in the dataset. The objective is to minimize the squared difference between the soft targets and the class scores of the student DNN. Class scores are used as the soft targets instead of the class probabilities because small values in the class scores contain important information that may be eliminated by the softmax. Alternatively, class probabilities after the softmax layer can be used as soft targets if the softmax is configured to generate softer class probabilities where the smaller values retain more information <cit.>. Finally, the intermediate representations of the teacher DNN can also be incorporated as extra hints to train the student DNN <cit.>.
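A minimal sketch of the two soft-target objectives just described; the toy logits are invented, and the temperature value is an arbitrary example.

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = z / temperature
    e = np.exp(z - z.max())              # subtract max for numerical stability
    return e / e.sum()

teacher_logits = np.array([6.0, 2.0, 1.0])   # class scores from the teacher
student_logits = np.array([4.0, 3.0, 1.0])

# Variant 1: match the class scores (logits) directly with a squared error.
loss_scores = np.mean((teacher_logits - student_logits) ** 2)

# Variant 2: match softened class probabilities (a higher temperature keeps
# more of the information carried by the small values).
T = 4.0
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)
loss_soft = -np.sum(p_teacher * np.log(p_student))   # cross-entropy

print(loss_scores, loss_soft)
```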
§ BENCHMARKING METRICS FOR DNN EVALUATION AND COMPARISON

As we have seen in this article, there has been a significant amount of research on efficient processing of DNNs. We should consider several key metrics to compare the various strengths and weaknesses of different designs and proposed techniques. These metrics should cover important attributes such as accuracy/robustness, power/energy consumption, throughput/latency and cost. Reporting all these metrics is important in order to provide a complete picture of the trade-offs made by a proposed design or technique. We have prepared a website to collect these metrics from various publications <cit.>.

In terms of accuracy and robustness, it is important that the accuracy be reported on widely-accepted datasets as discussed in Section <ref>. The difficulty of the dataset and/or task should be considered when measuring the accuracy. For instance, the MNIST dataset for digit recognition is significantly easier than the ImageNet dataset. As a result, a DNN that performs well on MNIST may not necessarily perform well on ImageNet. Thus it is important that the same dataset and task are used when comparing the accuracy of different DNN models; currently ImageNet is preferred since it presents a challenge for DNNs, as opposed to MNIST, which can also be addressed with simple non-DNN techniques. To demonstrate primarily hardware innovations, it would be desirable to report results for widely-used DNN models (e.g., AlexNet, GoogLeNet) whose accuracy and robustness have been well studied and tested.

Energy and power are important when processing DNNs at the edge in embedded devices with limited battery capacity (e.g., smart phones, smart sensors, UAVs, and wearables), or in the cloud in data centers with stringent power ceilings due to cooling costs, respectively. Edge processing is preferred over the cloud for certain applications due to latency, privacy or communication bandwidth limitations. When evaluating the power and energy consumption, it is important to account for all aspects of the system, including the chip and external memory accesses.

High throughput is necessary to deliver real-time performance for interactive applications such as navigation and robotics. For data analytics, high throughput means that more data can be analyzed in a given amount of time. As the amount of visual data is growing exponentially, high-throughput big data analytics becomes important, particularly if an action needs to be taken based on the analysis (e.g., security or terrorist prevention; medical diagnosis).

Low latency is necessary for real-time interactive applications. Latency measures the time between when the pixel arrives at a system and when the result is generated. Latency is measured in terms of seconds, while throughput is measured in operations/second. Often high throughput is obtained by batching multiple images/frames together for processing; this results in multiple-frame latency (e.g., at 30 frames per second, a batch of 100 frames results in a 3-second delay). This delay is not acceptable for real-time applications, such as high-speed navigation where it would reduce the time available for course correction. Thus achieving low latency and high throughput simultaneously can be a challenge.

Hardware cost is in large part dictated by the amount of on-chip storage and the number of cores. Typical embedded processors have limited on-chip storage on the order of a few hundred kilobytes. Since there is a trade-off between the amount of on-chip memory and the external memory bandwidth, both metrics should be reported. Similarly, there is a correlation between the number of cores and the throughput. In addition, while many cores can be built on a chip, the number of cores that can actually be used at a given time should be reported. It is often unrealistic to assume peak utilization and performance due to limitations of mapping and memory bandwidth. Accordingly, the power and throughput should be reported for running actual DNNs as opposed to only reporting theoretical limits.
§.§ Metrics for DNN Models

To evaluate the properties of a given DNN model, we should consider the following metrics:

* The accuracy of the model in terms of the top-5 error on datasets such as ImageNet. Also, the type of data augmentation used (e.g., multiple crops, ensemble models) should be reported.
* The network architecture of the model should be reported, including the number of layers, filter sizes, number of filters and number of channels.
* The number of weights impacts the storage requirement of the model and should be reported. If possible, the number of non-zero weights should be reported since this reflects the theoretical minimum storage requirements.
* The number of MACs that needs to be performed should be reported as it is somewhat indicative of the number of operations and potential throughput of the given DNN. If possible, the number of non-zero MACs should also be reported since this reflects the theoretical minimum compute requirements.

Table <ref> shows how these metrics are reported for various well-known DNNs. The accuracy is reported for the case where only a single crop for a single model is used for classification, such that the number of weights and MACs in the table are consistent.[Data augmentation is often used to increase accuracy. This includes using multiple crops of an image to account for misalignment; in addition, an ensemble of multiple models can be used where each model has different weights due to different training settings, such as using different initializations or datasets, or even different network architectures. If multiple crops and models are used, then the number of MACs and weights required would increase.] Note that accounting for the number of non-zero (NZ) operations significantly reduces the number of MACs and weights. Since the number of NZ MACs depends on the input data, we propose using the publicly available 50,000 validation images from ImageNet for the computation. Finally, there are various methods to reduce the weights in a DNN (e.g., network pruning in Section <ref>). Table <ref> shows another example of these DNN model metrics by comparing sparse DNNs pruned using <cit.> to dense DNNs.
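As an illustration of how the weight and MAC counts in such tables are derived, the following sketch counts both for a single convolutional layer. The shape used is AlexNet-like (96 filters of 3×11×11 on a 55×55 output map) but should be read as an example, not an authoritative specification.

```python
def conv_layer_cost(num_filters, channels, R, S, out_h, out_w):
    weights = num_filters * channels * R * S
    macs = weights * out_h * out_w   # each filter is applied at every output position
    return weights, macs

# An AlexNet-like first layer.
weights, macs = conv_layer_cost(num_filters=96, channels=3, R=11, S=11,
                                out_h=55, out_w=55)
print(f"{weights:,} weights, {macs:,} MACs")   # 34,848 weights, ~105M MACs
```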
§.§ Metrics for DNN Hardware

To measure the efficiency of the DNN hardware, we should consider the following additional metrics:

* The power and energy consumption of the design should be reported for various DNN models; the DNN model specifications should be provided, including which layers and bit precisions are supported by the hardware during measurement. In addition, the amount of off-chip accesses (e.g., DRAM accesses) should be included since it accounts for a significant portion of the system power; it can be reported in terms of the total amount of data that is read and written off-chip per inference.
* The latency and throughput should be reported in terms of the batch size and the actual run time for various DNN models, which accounts for mapping and memory bandwidth effects. This provides a more useful and informative metric than peak throughput.
* The cost of the chip depends on the area efficiency, which accounts for the size and type of memory (e.g., registers or SRAM) and the amount of control logic. It should be reported in terms of the core area in square millimeters per multiplier along with the process technology.

In terms of cost, different platforms will have different implementation-specific metrics. For instance, for an FPGA, the specific device should be reported, along with the utilization of resources such as DSP, BRAM, LUT and FF; performance density such as GOPs/slice can also be reported.

Each processor should report various specifications for each metric as shown in Table <ref>, using the Eyeriss chip as an example. It is important that all metrics and specifications are accounted for in order to fairly evaluate all the design trade-offs. For instance, without the accuracy given for a specific dataset and task, one could run a simple DNN and easily claim low power, high throughput, and low cost – however, the processor might not be usable for a meaningful task; alternatively, without reporting the off-chip bandwidth, one could build a processor with only multipliers and easily claim low cost, high throughput, high accuracy, and low chip power – however, when evaluating system power, the off-chip memory access would be substantial. Finally, the test setup should also be reported, including whether the results are measured or obtained from simulation[If obtained from simulation, it should be clarified whether it is from synthesis or post place-and-route and what library corner was used.] and how many images were tested.

In summary, the evaluation process for whether a DNN system is a viable solution for a given application might go as follows: (1) the accuracy determines if it can perform the given task; (2) the latency and throughput determine if it can run fast enough and in real-time; (3) the energy and power consumption will primarily dictate the form factor of the device where the processing can operate; (4) the cost, which is primarily dictated by the chip area, determines how much one would pay for this solution.

§ SUMMARY

The use of deep neural networks (DNNs) has seen explosive growth in the past few years. They are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition and robotics, and are often delivering better-than-human accuracy. However, while DNNs can deliver this outstanding accuracy, it comes at the cost of high computational complexity. Consequently, techniques that enable efficient processing of deep neural networks to improve energy-efficiency and throughput without sacrificing accuracy with cost-effective hardware are critical to expanding the deployment of DNNs in both existing and new domains.

Creating a system for efficient DNN processing should begin with understanding the current and future applications and the specific computations they require, both now and as those computations evolve. This article surveys a number of the current applications, focusing on computer vision applications, the associated algorithms, and the data being used to drive the algorithms. These applications, algorithms and input data are experiencing rapid change. So extrapolating these trends to determine the degree of flexibility desired to handle next-generation computations becomes an important ingredient of any design project.

During the design-space exploration process, it is critical to understand and balance the important system metrics. For DNN computation these include the accuracy, energy, throughput and hardware cost. Evaluating these metrics is, of course, key, so this article surveys the important components of a DNN workload. Specifically, a DNN workload has two major components.
First, the workload is defined by the form of each DNN network, including the `shape' of each layer and the interconnections between layers. These can vary both within and between applications. Second, the workload consists of the specific data input to the DNN. This data will vary with the input set used for training or with the data input during operation for inference.

This article also surveys a number of avenues that prior work has taken to optimize DNN processing. Since data movement dominates energy consumption, a primary focus of some recent research has been to reduce data movement while maintaining accuracy, throughput and cost. This means selecting architectures with favorable memory hierarchies like a spatial array, and developing dataflows that increase data reuse at the low-cost levels of the memory hierarchy. We have included a taxonomy of dataflows and an analysis of their characteristics. Other work is presented that aims to save space and energy by changing the representation of data values in the DNN. Still other work saves energy and sometimes increases throughput by exploiting the sparsity of weights and/or activations.

The DNN domain also affords an excellent opportunity for joint hardware/software co-design. For example, various efforts have noted that efficiency can be improved by increasing sparsity (increasing the number of zero values) or optimizing the representation of data by reducing the precision of values or using more complex mappings of the stored value to the actual value used for computation. However, to avoid losing accuracy it is often useful to modify the network or fine-tune the network's weights to accommodate these changes. Thus, this article both reviews a variety of these techniques and discusses the frameworks that are available for describing, running and training networks.

Finally, DNNs afford the opportunity to use mixed-signal circuit design and advanced technologies to improve efficiency. These include using memristors for analog computation and 3-D stacked memory. Advanced technologies can also facilitate moving computation closer to the source by embedding computation near or within the sensor and the memories. Of course, all of these techniques should also be considered in combination, while being careful to understand their interactions and looking for opportunities for joint hardware/algorithm co-optimization.

In conclusion, although much work has been done, deep neural networks remain an important area of research with many promising applications and opportunities for innovation at various levels of hardware design.

§ ACKNOWLEDGMENTS

Funding provided by DARPA YFA, MIT CICS, and gifts from Nvidia and Intel. The authors thank the anonymous reviewers as well as James Noraky, Mehul Tikekar and Zhengdong Zhang for providing valuable feedback on this paper. | http://arxiv.org/abs/1703.09039v2 | {
"authors": [
"Vivienne Sze",
"Yu-Hsin Chen",
"Tien-Ju Yang",
"Joel Emer"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170327124213",
"title": "Efficient Processing of Deep Neural Networks: A Tutorial and Survey"
} |
A Visual Measure of Changes to Weighted Self-Organizing Map Patterns

Younjin Chung, Joachim Gudmundsson, Masahiro Takatsuka
School of IT, Faculty of Engineering and IT, The University of Sydney, NSW 2006 Australia

December 30, 2023
====================================================================

Estimating output changes by input changes is the main task in causal analysis. In previous work, input and output Self-Organizing Maps (SOMs) were associated for causal analysis of multivariate and nonlinear data. Based on the association, a weight distribution of the output conditional on a given input was obtained over the output map space. Such a weighted SOM pattern of the output changes when the input changes. In order to analyze the change, it is important to measure the difference between the patterns. Many methods have been proposed for the dissimilarity measure of patterns. However, it remains a major challenge to measure how the patterns change. In this paper, we propose a visualization approach that simplifies the comparison of pattern differences in terms of the pattern property. Using this approach, the change can be analyzed by integrating colors and star glyph shapes representing the property dissimilarity. Ecological data is used to demonstrate the usefulness of our approach, and the experimental results show that our approach provides the change information effectively.

§ INTRODUCTION

Analyzing causality is one of the central tasks of science since it influences decision making in such diverse domains as the natural, social and health sciences. Causality is the relationship between two events in which changes of one (cause) trigger changes of the other (effect) <cit.>. In our previous work <cit.>, a causal analysis model was developed for analyzing causality of multivariate and nonlinear data (unlabeled in nature). In that model, different Self-Organizing Maps (SOMs) <cit.> for input and output data sets were networked using a weight association based on the similarity of their connected prototype feature vectors. Given such SOMs, similarity weights conditional on a given input could be assigned to the neurons of the output SOM. Such a weighted SOM pattern of the output is described by two information types: (1) the weight distribution and (2) the property (prototype feature vector) distribution.

For assessing output changes by input changes, it is crucial to measure the property difference of weighted SOM patterns. There have been many attempts to measure the dissimilarity between two distributions (patterns), such as the Minkowski and Shannon's entropy families <cit.>, the Quadratic Form Distance <cit.> and the Earth Mover's Distance (EMD) <cit.>. For weights over adaptive neurons in a weighted SOM pattern, we found the EMD to be the most suitable method for measuring the dissimilarity of weighted SOM patterns. The EMD, like the others, provides only a numerical value for the overall resemblance of patterns. It cannot differentiate between weighted SOM patterns if they have the same dissimilarity but different properties. Therefore, we introduce a method, called Property EMD (PEMD), to measure the property difference through individual feature differences in the pattern change.
However, it is still difficult to represent and compare the overall property dissimilarity in the change for high dimensional data. It is also difficult to observe the possible feature values that drive the pattern change. Due to the limitations of quantitative approaches, we propose a visualization approach for measuring the property change of weighted SOM patterns along with the PEMD. Our visualization integrates colors and graphical star glyph shapes to represent the pattern information in the change. Using this approach, the property dissimilarity of weighted SOM patterns can be captured together with the size and the direction of the change. Possible feature values that drive the pattern change can also be observed by exploring regions of interest in weighted SOM patterns. Ecological data is used to demonstrate that our approach is useful for the pattern property comparison in the pattern change. The experimental results show that our approach provides the change information considered for causal analysis in an effective visual way.

§ BACKGROUND

A Self-Organizing Map (SOM) <cit.> projects high dimensional data onto a low dimensional (typically 2-dimensional) grid map space. A set of neurons of the map, which are prototype feature vectors adaptively fitted to the original feature vectors, reflects the data properties. Using the causal analysis model in our previous work <cit.>, a weight distribution is estimated on the property distribution of the output SOM for a given input. Fig. <ref> shows a simple example illustrating the two information types in a weighted SOM pattern: (1) the weight distribution and (2) the property distribution. The SOM in Fig. <ref>(a) is used as it makes it easy to visualize the 3-dimensional RGB color property and position. Based on the SOM, several weighted SOM patterns ((b) - (g) in Fig. <ref>) are created by different color opacity values (weights). Such a weighted SOM pattern can be depicted as S = {(v_i, w_i)}, i = 1,..,n, where n is the number of neurons (n=9); v_i is the ith prototype feature vector (v_i = {(c_k)_i}, k = 1,..,m, where m is the number of features (m=3) and c_k is the kth component of the prototype feature vector) and w_i is the ith weight; together these represent the two information types.

The perceptual dissimilarity between two weighted SOM patterns in Fig. <ref> can be measured by observing the color properties in the highly weighted neurons. The patterns S_c and S_d are different patterns by their color properties although they have the same distance in relation to their changes from S_b. The pattern S_g shows the change in two perspectives by the two different color properties highlighted in S_f. Such differences can be measured using the 3-dimensional color property. Nonetheless, it is still difficult to estimate the size and the direction of the change with respect to the pattern property. Furthermore, it becomes harder to measure such differences in higher dimensions.
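To fix the notation S = {(v_i, w_i)} in code, here is a small sketch that stores the two information types as arrays and derives a weight distribution from prototype similarity; the softmax-style weighting is an illustrative stand-in for the association scheme of the causal analysis model, not its actual definition.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 9, 3                          # 9 neurons, 3 features (as in the RGB example)
prototypes = rng.random((n, m))      # v_i: the prototype feature vectors of the SOM

def weighted_pattern(reference, prototypes, sharpness=10.0):
    """Weights w_i derived from similarity to a reference vector; w sums to 1."""
    d = np.linalg.norm(prototypes - reference, axis=1)
    w = np.exp(-sharpness * d)       # higher weight for more similar prototypes
    return w / w.sum()

w = weighted_pattern(prototypes[4], prototypes)
print(w.round(3), w.sum())           # a weight distribution over the 9 neurons
```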
There have been many methods to quantitatively measure the dissimilarity between two distributions (patterns) of high dimensional data. The most used families of functions are the Minkowski and Shannon's entropy families <cit.>. However, these functions do not match perceptual dissimilarity well since they only compare the weights of corresponding fixed bins <cit.>. These families do not use the similarity information across neighboring bins, such as the adaptive neurons of weighted SOM patterns. When considering the information across bins, the Quadratic Form Distance (QFD) <cit.> and the Earth Mover's Distance (EMD) <cit.> are the most used functions. However, the QFD tends to overestimate the dissimilarity of patterns as the weight of each bin is simultaneously compared with the weights across all bins <cit.>. On the other hand, the EMD uses the ground distance of feature vectors across bins for the minimum weight flow, providing better perceptual matches.

The EMD, like the other functions, provides only a numeric value for the overall resemblance of patterns. The overall dissimilarity itself cannot differentiate between weighted SOM patterns when the feature vector distance and the weight distribution have the same relation but different feature values, such as S_c and S_d from S_b in Fig. <ref>. This suggests that such patterns can be further differentiated by the information of the pattern property in relation to the weight distribution. Moreover, it is difficult to identify the two different properties in S_f that drive the change to S_g (Fig. <ref>). In an attempt to handle these issues, we propose a visualization approach based on a metric using the EMD to measure the property change of weighted SOM patterns.

§ OUR APPROACH

In this section, we propose a visualization approach that uses a metric for measuring the property difference of weighted SOM patterns in order to capture the change information based on the property dissimilarity.

§.§ A Metric for Pattern Property Difference

In order to measure the pattern property difference, we introduce an extended function of the EMD, called Property EMD (PEMD). The PEMD measures the individual feature differences in the pattern change based on the capability of the EMD for the pattern dissimilarity measure. According to <cit.>, the EMD between two weighted SOM patterns P and Q is defined as follows:

EMD(P,Q) = ∑_i=1^n∑_j=1^n f_ij d_ij / ∑_i=1^n∑_j=1^n f_ij,

where d_ij is the ground distance function and f_ij is the minimum cost flow under the constraints: ∀i,j: f_ij ≥ 0; ∀i: ∑_j=1^n f_ij ≤ w_p_i; ∀j: ∑_i=1^n f_ij ≤ w_q_j; and ∑_i=1^n∑_j=1^n f_ij = min{∑_i=1^n w_p_i, ∑_j=1^n w_q_j}. The weighted SOM patterns P and Q are based on the same SOM; thus, they have the same number (n) of neurons and their weights each sum to 1. Based on the EMD, the difference of a feature c_k in Q for a given P can be measured by the following function:

PEMD_c_k(Q|P) = ∑_i=1^n∑_j=1^n f_ij d_ij (c_k_j - c_k_i) / ∑_i=1^n∑_j=1^n f_ij d_ij.

The direction of the feature change is accounted for by the difference measure. This distance is defined as the resulting feature difference in the change normalized by the total work of the EMD flow. The feature difference is normalized to avoid favoring larger differences between pattern changes in the comparison.

The individual feature differences of the weighted SOM patterns in Fig. <ref> are measured by the PEMD for the property comparison. The EMD is also measured and scaled by the maximum EMD of the data space for the dissimilarity comparison. As the results in Table <ref> show, the individual feature differences can be used to explain the property difference between the patterns S_c and S_d changed from S_b, which show the same EMD. However, this does not explain that the patterns S_f and S_g are not the same, as shown by the EMD. This shows that the patterns can be further explained by the possible feature values that drive the pattern change.
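For concreteness, the EMD and the per-feature PEMD above can be computed from an optimal transport plan. The sketch below uses the POT library (pip install pot) to obtain the flow f_ij; using POT is our implementation choice and not part of the method itself.

```python
import numpy as np
import ot  # Python Optimal Transport (POT)

def pemd(w_p, w_q, prototypes):
    """EMD between two weighted SOM patterns on one SOM, plus the
    per-feature differences PEMD_{c_k}(Q|P)."""
    M = ot.dist(prototypes, prototypes, metric='euclidean')  # ground distances d_ij
    f = ot.emd(w_p, w_q, M)                                  # optimal flow f_ij
    work = (f * M).sum()
    emd = work / f.sum()
    # PEMD: flow-weighted feature differences, normalized by the total work.
    diff = prototypes[None, :, :] - prototypes[:, None, :]   # c_k_j - c_k_i
    pemd_k = (f[:, :, None] * M[:, :, None] * diff).sum(axis=(0, 1)) / work
    return emd, pemd_k

prototypes = np.random.rand(9, 3)
w_p = np.full(9, 1 / 9)                       # reference pattern P
w_q = np.r_[np.full(3, 0.3), np.full(6, 1 / 60)]
w_q = w_q / w_q.sum()                         # changed pattern Q
emd, pemd_k = pemd(w_p, w_q, prototypes)
print(emd, pemd_k)
```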
Furthermore, it is difficult to compare the overall property dissimilarity if the dimensionality is high. Therefore, along with the PEMD, we propose a visualization approach in the next section for better analyzing the pattern change of high dimensional data based on the property difference.

§.§ Visualization of Pattern Property Changes

Our visualization integrates colors and graphical objects, which are perceptually orthogonal <cit.>, to represent the pattern dissimilarity of high dimensional data. Hue colors and star glyph shape objects are used to view a weighted SOM pattern. The scaled hue colors in Fig. <ref>(a) are used to indicate high weight by red and low weight by blue. Mapping a prototype feature vector onto a star glyph shape represents a neuron property in a SOM. A star glyph <cit.> has m evenly angled branches emanating from a central point in the same ordering as the m dimensions. The length of each branch marks the value along the dimension it represents, and the value points are connected, creating a bounded polygon shape. The patterns in Fig. <ref> are illustrated in Fig. <ref>(b) - (g). As the shape is used as a single visual parameter, it is easier to recognize the property difference of neurons by only considering the shape variations in the fixed orientation <cit.>.

The perceptual dissimilarity of hue colors indicates a clear boundary of the weights. Thus, it facilitates the user selection of regions of interest for understanding the main properties of the weighted SOM patterns. The property dissimilarity between weighted SOM patterns can also be visualized by integrating colors and star glyph shape objects. As shown in Fig. <ref>, a star glyph shape created from the individual feature differences is imposed on the 3-branch star glyph shape frame indicating the overall property dissimilarity. The PEMD value for each feature is scaled to the range [0.1, 0.9] for the visualization. The average of the PEMD values is used to indicate the direction of the overall property change by applying it to the color saturation. The property shape is filled with the direction color: red for increase, blue for decrease, and white for no change. The property direction color can differentiate between property changes if they have the same property shape but opposite directions of any individual feature changes. The possible value changes in each feature can be visualized depending on the user selection of regions (e.g., highly weighted regions). The possible value changes are indicated by red and blue for increase and decrease, respectively (empty dots for the reference pattern and full dots for the changed pattern). The line with red or blue from the center to 0.1 indicates the direction of the individual feature change, for increase or decrease, respectively. The overall property dissimilarity of the patterns can then be captured by the property shape variation together with the change information in the fixed orientation. The EMD is also visualized by filling its gray scale in the frame for the dissimilarity comparison.
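A minimal matplotlib sketch of the star glyph element itself: m evenly angled branches whose lengths encode scaled feature values, closed into a filled polygon. The frame, saturation coding, and dot annotations of the full visualization are omitted.

```python
import numpy as np
import matplotlib.pyplot as plt

def star_glyph(ax, values, fill_color):
    """Draw one star glyph; `values` are feature values scaled to [0.1, 0.9]."""
    m = len(values)
    angles = np.linspace(0, 2 * np.pi, m, endpoint=False) + np.pi / 2
    xs, ys = values * np.cos(angles), values * np.sin(angles)
    for x, y in zip(xs, ys):                       # the m branches
        ax.plot([0, x], [0, y], color='gray', lw=0.8)
    ax.fill(xs, ys, color=fill_color, alpha=0.6)   # the bounded polygon
    ax.set_aspect('equal')
    ax.set_axis_off()

fig, ax = plt.subplots()
star_glyph(ax, values=np.array([0.9, 0.3, 0.5, 0.7, 0.2]), fill_color='red')
plt.show()
```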
Fig. <ref>(k) shows that S_f and S_g are not the same pattern by the EMD, while there is no property difference in the change. This can be explained by observing the change (C(S_g|S_f)), which shows possible changes in the features R and G of the same size, one an increase and the other a decrease, while making no change to the pattern property. Fig. <ref>(h) and (i) show that S_c and S_d obtained from S_b are very different patterns by their different property shapes, although they show the same EMD as well as the same size and direction color of the shapes.

In summary, our approach can measure the dissimilarity of weighted SOM patterns in terms of the pattern property. It measures the individual feature differences by the PEMD and captures the overall property dissimilarity in the pattern change by the visualization. It facilitates a simultaneous comparison of weighted SOM patterns and provides information on how the patterns change.

§ EXPERIMENTAL RESULTS

In this section, we test our approach by applying it to ecological domain data[The ecological features: B1 (Shredders), B2 (Filtering-Collectors), B3 (Collector-Gathers), B4 (Scrapers) and B5 (Predators) for the Biological data set; P1 (Elevation), P2 (Slope), P3 (Stream Order), P4 (Embeddedness) and P5 (Water Temperature) for the Physical data set. The feature values are all standardized for the total of 130 data.] <cit.> for analyzing changes of the output pattern by changes of the input in causal analysis. The physical and biological SOMs were trained using 10 × 12 hexagonal grids, selected by the minimum values of the quantization and topological errors. The physical input SOM was associated with the biological output SOM. Among the physical features, it is known that Embeddedness (P4) has a strong impact on biotic integrity <cit.>. Thus, for our experiments, we varied the value of P4 in the physical input to examine the changes of the biological output. The value of P4 was increased by 1 and 2 standard deviations (SD) for the first and the second change, respectively, while the others were fixed at the initial value (-0.5SD). The standardized Z-score values used in the data analysis were converted to T-score values for visualization using our approach.

Fig. <ref> shows the weighted biological output SOM patterns for the physical input given in (a), the first change in (b) and the second change in (c). More than one region is highly weighted in S_0, showing the high possibility of having different biological outputs for the given physical input. The reddish regions of each pattern are selected for analyzing the change. The differences of the patterns are measured and visualized using our approach in (d) for the first change and (e) for the second change. More information can be added to the view, and we have added the significance information, measured on the difference of every weighted feature distribution using the Kolmogorov-Smirnov test <cit.>. The insignificant changes are indicated by yellow and cyan, while the significant changes are indicated by red and blue, for increase and decrease respectively.

In Fig. <ref>(d) and (e), the user can observe that the second change C(S_2 | S_0) is larger than the first change C(S_1 | S_0), with a similar tendency of the property change. This can be explained by comparing the size, the direction color and the similarity of the property shape as well as the EMD gray scale in the frame. The individual change of each feature is also captured by the change information in the center of each branch. It shows that the changes in B1 and B4 become significant and the changes in B1, B2, B3 and B5 become larger when the value of P4 is increased. Based on the selected regions, the possible changes are also detailed on each branch. In particular, both increase and decrease are seen in B4, which cannot be provided by a quantitative measure.
Throughout the experiments, the user can see the impact of P4 (Embeddedness): its increase lowers the balance of the biological composition of the ecosystem. Causal effects are more effectively analyzed by considering all possible changes for well-informed decision making. Our approach supports this by detecting regions of interest and providing the change information visually; thus, it can be very useful for comparing pattern changes in the process of causal analysis.

§ CONCLUSION

In this paper, we have presented our approach for analyzing weighted output SOM pattern changes by input changes in causal analysis. We elucidated the idea of analyzing the change of weighted SOM patterns by comparing the dissimilarity of the pattern properties corresponding to the weight distributions. Our approach measures the property difference using a metric and uses a visualization to measure the property change of weighted SOM patterns. Throughout the experiments, we have shown that our approach is useful for measuring and comparing the pattern property in the change of weighted SOM patterns. We also facilitated exploring regions of user interest and capturing all possible changes to the pattern property. The experimental results show that our approach provides the property change information in an interactive and effective visual way when analyzing causal effects.

[cha2007] Cha, S.H.: Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions. International Journal of Mathematical Models and Methods in Applied Sciences 1, 300–307 (2007)
[chung2015] Chung, Y., Takatsuka, M.: A Causal Model using Self-Organizing Maps. In: Proceedings of ICONIP'15, Lecture Notes in Computer Science, pp. 591–600 (2015)
[giddings2009] Giddings, E.M.P., Bell, A.H., Beaulieu, K.M., Cuffney, T.F., Coles, J.F., Brown, L.R., Fitzpatrick, F.A., Falcone, J., Sprague, L.A., Bryant, W.L., Peppler, M.C., Stephens, C., McMahon, G.: Selected physical, chemical, and biological data used to study urbanizing streams in nine metropolitan areas of the United States, 1999–2004. Technical Report Data Series 423, U.S. Geological Survey (2009)
[johnson2000] Johnson, D.H., Sinanovic, S.: Symmetrizing the Kullback-Leibler Distance. Tech. rep., IEEE Transactions on Information Theory (2000)
[Kohonen2001] Kohonen, T.: Self-Organizing Maps. Information Sciences, Springer-Verlag, Heidelberg, 3rd edn. (2001)
[kullback2012] Kullback, S.: Information Theory and Statistics. Courier Corporation (2012)
[may1970] May, W.E.: Knowledge of Causality in Hume and Aquinas. The Thomist 34, 254–288 (1970)
[niblack1993] Niblack, C.W., Barber, R., Equitz, W., Flickner, M.D., Glasman, E.H., Petkovic, D., Yanker, P., Faloutsos, C., Taubin, G.: QBIC project: querying images by content, using color, texture, and shape. Proc. SPIE 1908, 173–187 (1993)
[novotny2005] Novotny, V., Virani, H., Manolakos, E.: Self Organizing Feature Maps Combined With Ecological Ordination Techniques For Effective Watershed Management. Technical Report 4, Northeastern University, Boston (2005)
[press1992] Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 2nd edn. (1992)
[rubner2000] Rubner, Y., Tomasi, C., Guibas, L.J.: The Earth Mover's Distance as a Metric for Image Retrieval. International Journal of Computer Vision 40(2), 99–121 (2000)
[ward2008] Ward, M.O.: Multivariate Data Glyphs: Principles and Practice, pp. 179–198. Springer Berlin Heidelberg (2008)
[wong1997] Wong, P.C., Bergeron, R.D.: 30 years of multidimensional multivariate visualization. In: Scientific Visualization, Overviews, Methodologies, and Techniques, pp. 3–33. IEEE Computer Society (1997) | http://arxiv.org/abs/1703.08917v1 | {
"authors": [
"Younjin Chung",
"Joachim Gudmundsson",
"Masahiro Takatsuka"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170327034658",
"title": "A Visual Measure of Changes to Weighted Self-Organizing Map Patterns"
} |
Distinguished Cuspidal Representations over p-adic and Finite Fields

Jeffrey Hakim

December 30, 2023
====================================================================

The author's work with Murnaghan on distinguished tame supercuspidal representations is re-examined using a simplified treatment of Jiu-Kang Yu's construction of tame supercuspidal representations of p-adic reductive groups. This leads to a unification of aspects of the theories of distinguished cuspidal representations over p-adic and finite fields.

§ INTRODUCTION

This paper establishes a close connection between the theories of distinguished representations over p-adic fields and finite fields by proving a uniform formula (Theorem <ref>) that was previously stated without proof in <cit.>.

In <cit.>, we presented a new construction of supercuspidal representations for p-adic reductive groups. This construction was built on the same foundation as Yu's construction <cit.>, but supercuspidal representations were directly associated to representations of compact-mod-center subgroups, rather than to generic, cuspidal G-data. From a technical point of view, the new construction simplifies Yu's construction <cit.> largely because it avoids the use of (noncanonical) Howe factorizations. (See <cit.> for the relevant notion of “factorization.”) But perhaps the true test of the construction is how amenable it is to applications. This paper provides the first application of the formula and hence the first basis for comparison with other approaches.

The application considered in <ref> is a re-examination of the theory of distinguished tame supercuspidal representations developed (in <cit.>) by the author and Murnaghan, and we prove stronger versions of the main results with considerably less effort. On the surface, <ref> appears to provide a dramatically shorter treatment of the material from <cit.>; however, part of this reduction results from the fact that substantial portions of <cit.> are quoted in our proofs. So it is important to acknowledge and emphasize the large influence of Fiona Murnaghan's ideas on the present paper.

In <ref>, our main p-adic results are shown to have obvious analogues that are valid in the context of cuspidal Deligne-Lusztig representations over finite fields. The results we develop in the finite field case are compatible with results of Lusztig <cit.>. To get the p-adic and finite field theories to mesh, we articulate Lusztig's results in a new way. In particular, we relate the character η'_θ occurring in <cit.> to a character ε_𝖳,θ occurring in <cit.>. Both characters involve determinants of adjoint actions of groups on Lie algebras over finite fields. In the finite field case, ε_𝖳,θ is computed in Proposition <ref>.
We hope to study the p-adic case in a subsequent paper.

Finally, we direct the reader to two preprints <cit.> that correct an error (discovered by Loren Spice) in the proofs of Proposition 14.1 and Theorem 14.2 in <cit.>. This error affects both this paper and its precursor <cit.>. We attempted to correct the error in 3.10 of <cit.>, but Fintzen discovered an error in our putative correction.

§ STATEMENT OF THE MAIN THEOREM

In order not to recapitulate large amounts of notations and definitions, we follow the conventions of <cit.>. Practically speaking, the reader is therefore required to have a copy of <cit.> readily accessible while reading this paper.

We consider two cases that we refer to as “the p-adic case” and “the finite field case.” In the p-adic case, F is a finite extension of ℚ_p with p odd. In the finite field case, F = 𝔽_q, where q is a power of an odd prime p. Let 𝐆 be a connected, reductive F-group and let G = 𝐆(F). (More generally, we use boldface letters for F-groups and the corresponding non-bold letters for the corresponding groups of F-points.)

Let us selectively recall some of our inherited notations from <cit.> in the p-adic case:

* 𝐇 is an F-subgroup of 𝐆 that is a Levi subgroup of 𝐆 over some tamely ramified finite extension of F, and the quotient 𝐙_𝐇/𝐙_𝐆 of the centers is F-anisotropic.
* x is a vertex in the reduced building ℬ_red(𝐇,F).
* H = 𝐇(F).
* H_x is the stabilizer of x in H. The corresponding (maximal) parahoric subgroup is H_x,0 and its prounipotent radical is H_x,0+.
* 𝐇_sc is the universal cover of the derived group 𝐇_der.
* H_der,x,0+^♭ is the image of H_sc,x,0+ in H_der.

Recall also that a smooth, irreducible, complex representation (ρ, V_ρ) of H_x is permissible (as in Definition 2.1.1 <cit.>) if:

* ρ induces an irreducible (and hence supercuspidal) representation of H,
* the restriction of ρ to H_x,0+ is a multiple of some character ϕ of H_x,0+,
* ϕ is trivial on H_der,x,0+^♭,
* the dual cosets (ϕ|Z^i,i+1_r_i)^* (defined in 2.7 <cit.>) contain elements that satisfy Yu's condition GE2 (stated in 3.6 <cit.>).

In the p-adic case, we take L = H_x. In the finite field case, L is the group T = 𝐓(F) of F-rational points of an F-elliptic maximal F-torus 𝐓 of 𝐆. In the p-adic case, ρ will be a permissible representation of L = H_x. In the finite field case, ρ will be a character in general position of L = T. Let π(ρ) be the irreducible supercuspidal or cuspidal Deligne-Lusztig representation of G associated to ρ.

Let ℐ be the set of F-automorphisms of 𝐆 of order two, and let G act on ℐ by

g·θ = Int(g)∘θ∘Int(g)^-1 = Int(gθ(g)^-1)∘θ,

where Int(g) is conjugation by g. Fix a G-orbit Θ in ℐ. Given θ∈Θ, let G_θ be the stabilizer of θ in G. Let G^θ be the group of fixed points of θ in G. Let L_θ = G_θ∩ L. When ϑ is an L-orbit in Θ, let

m_L(ϑ) = [G_θ : G^θ L_θ],

for some, hence all, θ∈ϑ. Let ⟨Θ,ρ⟩_G denote the dimension of the space Hom_G^θ(π(ρ),1) of ℂ-linear forms on the space of π(ρ) that are invariant under the action of G^θ, for some, hence all, θ∈Θ.

For each θ such that θ(L) = L, we define a character

ε_L,θ : L^θ → {±1}

as follows. In the finite field case,

ε_L,θ(h) = ∏_a∈Gal(F̄/F)\Φ(Z_𝐆((𝐆^θ)^∘),𝐓) a(h).

In other words, ε_L,θ(h) is (-1)^s, where s is the number of Galois orbits of roots a in Φ(Z_𝐆((𝐆^θ)^∘),𝐓) such that a(h) = -1. In the p-adic case,

ε_L,θ(h) = ∏_i=0^d-1 (det_f(Ad(h)|𝔚_i^+)/P_F)_2,

with the following notations. First, we let

𝔚_i^+ = (((⊕_a∈Φ^i+1-Φ^i g_a)^Gal(F̄/F))^θ)_x,s_i:s_i+,

viewed as a vector space over the residue field f of F, where g_a is the root space attached to the root a.
So 𝔚_i^+ may be viewed as the space of θ-fixed points in the Lie algebra of W_i = J^i+1/J^i+1_+. (Here, “the Lie algebra of W_i” really means the image of W_i under a suitable Moy-Prasad isomorphism.) Next, for u∈ f^×, we let (u/P_F)_2 denote the quadratic residue symbol. This is related to the ordinary Legendre symbol by

(u/P_F)_2 = (N_f/𝔽_p(u)/p) = (N_f/𝔽_p(u))^(p-1)/2 = u^(q_F-1)/2.

This is the same as the character η'_θ defined in <cit.>, but we have expressed it on the Lie algebra.

When ϑ is an L-orbit in Θ, we write ϑ∼ρ if θ(L) = L and if the space Hom_L^θ(ρ,ε_L,θ) is nonzero for some, hence all, θ∈ϑ. When ϑ∼ρ, we define

⟨ϑ,ρ⟩_L = dim Hom_L^θ(ρ,ε_L,θ),

where θ is any element of ϑ. (The choice of θ does not matter.) We can now state our main theorem:

⟨Θ,ρ⟩_G = ∑_ϑ∼ρ m_L(ϑ) ⟨ϑ,ρ⟩_L.

In the finite field case, this is contained in Proposition <ref> and it is further refined in Theorem <ref>. In the p-adic case, it is contained in Proposition <ref>. Note that in the special case in which

* 𝐆 is a product 𝐆_1×𝐆_1,
* Θ contains the involution θ(x,y) = (y,x),
* ρ has the form ρ_1×ρ̃_2, where ρ̃_2 is the contragredient of ρ_2,

we have

⟨Θ,ρ⟩_G = dim Hom_G_1(π(ρ_1),π(ρ_2)).

In the finite field case, this is equivalent to the Deligne-Lusztig inner product formula <cit.>. See <cit.> for more details.
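For the reader's convenience, the computation behind this special case can be written out in one display. Here we assume, as the construction provides, that π(ρ_1×ρ̃_2) ≅ π(ρ_1)⊗π(ρ_2)^∼ with π(ρ_2)^∼ the contragredient; the verification is routine.

```latex
% G = G_1 \times G_1,\quad \theta(x,y)=(y,x),\quad
% G^\theta = \{(g,g): g\in G_1\}\cong G_1.
\begin{aligned}
\langle \Theta,\rho\rangle_G
  &= \dim \operatorname{Hom}_{G^\theta}\bigl(\pi(\rho_1)\otimes\widetilde{\pi(\rho_2)},\,\mathbf{1}\bigr)\\
  &= \dim \operatorname{Hom}_{G_1}\bigl(\pi(\rho_1)\otimes\widetilde{\pi(\rho_2)},\,\mathbf{1}\bigr)\\
  &= \dim \operatorname{Hom}_{G_1}\bigl(\pi(\rho_1),\,\pi(\rho_2)\bigr).
\end{aligned}
```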
§ P-ADIC REPRESENTATION THEORY

§.§ θ-symmetry

The present paper should be viewed as a sequel to <cit.>, and for the p-adic theory we use precisely the same notation, terminology, and conventions. As in <cit.>, we fix all of the following objects:

* F: a finite extension of ℚ_p, with p ≠ 2,
* 𝐆: a connected reductive F-group,
* 𝐇: an F-subgroup of 𝐆 that is a Levi subgroup over some tamely ramified finite extension of F such that the quotient 𝐙_𝐇/𝐙_𝐆 of the centers is F-anisotropic,
* x: a vertex in the reduced building ℬ_red(𝐇,F),
* (ρ, V_ρ): a permissible representation of H_x,
* (π, V_π): a supercuspidal representation of G = 𝐆(F) in the isomorphism class associated to ρ.

We refer to F-automorphisms of 𝐆 of order two as involutions of G, and we let G act on its set of involutions by

g·θ = Int(g)∘θ∘Int(g)^-1.

For the rest of this chapter, we assume we have fixed a G-orbit Θ of involutions of G. We define

⟨Θ,ρ⟩_G = dim Hom_G^θ(π,1),

where θ is any element of Θ and G^θ is the group of fixed points of θ in G. The fact that this definition is independent of the choice of θ is a consequence of the fact that for each g∈ G we have a bijection

Hom_G^θ(π,1) ≃ Hom_G^g·θ(π,1), λ ↦ (v↦λ(π(g)^-1 v)).

It is elementary to show that if K is the open, compact-mod-center inducing group for π then

⟨Θ,ρ⟩_G = ∑_𝒪∈Θ^K m_K(𝒪) ⟨𝒪,ρ⟩_K,

where:

* Θ^K is the set of K-orbits in Θ,
* m_K(𝒪) = [G_θ : G^θ K_θ], where θ is any element of 𝒪, G_θ is the stabilizer of θ in G, and K_θ = K∩ G_θ,
* ⟨𝒪,ρ⟩_K = dim Hom_K^θ(κ,1), for any θ∈𝒪, and K^θ = K∩ G^θ, where κ is the representation (see 3.11 <cit.>) of K from which π is induced.

We refer to <cit.> for an explanation of the details, including the facts that the definitions of m_K(𝒪) and ⟨𝒪,ρ⟩_K do not depend on the choice of θ in 𝒪. We also note that m_K(𝒪) is a power of two that is bounded as indicated in <cit.>.

Recall from <cit.> that we have a tamely ramified twisted Levi sequence G⃗ = (𝐆^0,…,𝐆^d) associated to ρ. The purpose of this section is to prove:

Suppose 𝒪 is a K-orbit of involutions of G such that ⟨𝒪,ρ⟩_K is nonzero. Then:

(1) There exists θ∈𝒪 such that θ(G⃗) = G⃗.
(2) Such an involution θ must fix x.
(3) The character ϕ̂ (defined in 3.9 <cit.>) must be trivial on K^θ_+ (defined in 2.6 <cit.>), and hence ϕ must be trivial on H_x,0+^θ.
(4) For each i∈{0,…,d-1}, there must exist an element X^*_i in the dual coset (ϕ|Z^i,i+1_r_i)^* such that θ(X^*_i) = -X^*_i.
(5) Up to scalar multiples, there exists a canonical isomorphism

Hom_K^θ(κ,1) ≅ Hom_H_x^θ(ρ,ε_H_x,θ).

This isomorphism is defined below and it still exists if we replace the hypothesis ⟨𝒪,ρ⟩_K ≠ 0 with the weaker hypotheses that ϕ̂|K^θ_+ = 1 and θ(G⃗) = G⃗.
(6) ⟨𝒪,ρ⟩_K = dim Hom_H_x^θ(ρ,ε_H_x,θ).

For the rest of this section, we state and prove a sequence of lemmas that collectively encompass Proposition <ref>. The first lemma is an analogue of Lemma 5.15 <cit.>, but the proof is much simpler.

If 𝒪 is a K-orbit of involutions of G and ϕ̂|K^θ_+ = 1 for all θ∈𝒪 then there exists θ∈𝒪 such that θ(G⃗) = G⃗.

There is nothing to prove if d = 0, so we assume d > 0. Starting with an arbitrary element θ_d of 𝒪, we recursively construct a sequence θ_d,…,θ_0 of elements of 𝒪 such that θ_i(𝐆^j) = 𝐆^j whenever i ≤ j ≤ d. Assume θ_i+1 has already been constructed. On page 52 <cit.>, we define groups J^i+1 and J^i+1_+. Since ϕ̂|K^θ_i+1_+ = 1, we deduce that ϕ̂ is trivial on G^i+1_der∩ J^i+1_+∩ G^θ_i+1. But on the latter subgroup, and in fact on G^i+1_der∩ J^i+1_+, the character ϕ̂ is represented by each element X^*_i of the dual coset (ϕ|Z^i,i+1_r_i)^* (defined in <cit.>). (See Lemma 3.9.1(5) <cit.>.) Therefore, for a given X^*_i,

1 = ϕ̂(exp(X + g^i+1_x,r_i+)) = ψ(X^*_i(X)),

for all X∈ g^i+1_der∩ J^i+1_+∩ g^θ_i+1. Here, “exp” refers to the isomorphism

exp : J^i+1_+/g^i+1_x,r_i+ → J^i+1_+/G^i+1_x,r_i+

as in <cit.>. (See also <cit.>.) We are using the abbreviation θ_i+1 for dθ_i+1. (More details on our choice of the additive character ψ and its role in the definition of the dual coset are given in 2.7 <cit.>.) It follows that X_i^*∈ (g^i+1_der∩ J^i+1_+∩ g^θ_i+1)^∙, where, as in the proof of Lemma 5.15 <cit.>, we let

s^∙ = {Y^*∈ g^i+1,* : Y^*(s)⊂ P_F},

when s is a subset of g^i+1. We have

(g^i+1_der∩ J^i+1_+∩ g^θ_i+1)^∙ = g^i+1,∙_der + J^i+1,∙_+ + g^θ_i+1,∙,

and

∘ g^i+1,∙_der = z^i+1,*,
∘ J^i+1,∙_+ = (g^i,*, g^i+1,*)_x,((-r_i)+,-s_i),
∘ g^θ_i+1,∙ = {Y^*∈ g^i+1,* : θ_i+1(Y^*) = -Y^*}.

Therefore, we can choose Y^*∈ J^i+1,∙_+ and Z^*∈ z^i+1,* such that X^*_i + Y^* + Z^*∈ g^θ_i+1,∙. Since X^*_i + Z^* is 𝐆^i+1-generic and since G^i+1_x,s_i = J^i+1 G^i_x,s_i, we deduce from Lemma 8.6 <cit.> that

X^*_i + Z^* + J^i+1,∙_+ = Ad^*(J^i+1)(X^*_i + Z^* + g^i,*_x,(-r_i)+).

Therefore, we can choose k∈ J^i+1 and U^*∈ g^i,*_x,(-r_i)+ such that

X^*_i + Y^* + Z^* = Ad^*(k)(X^*_i + Z^* + U^*).

We take θ_i = k^-1·θ_i+1 and observe that

θ_i(X^*_i + Z^* + U^*) = -(X^*_i + Z^* + U^*).

Lemma 5.17 <cit.> implies that θ_i(𝐆^i) = 𝐆^i. In addition, θ_i(𝐆^j) = 𝐆^j for i < j ≤ d. This completes the recursion, and taking θ = θ_0 completes the proof.

If ϕ̂|K^θ_+ = 1, θ(G⃗) = G⃗, and i∈{0,…,d-1} then the dual coset (ϕ|Z^i,i+1_r_i)^* contains elements X^*_i∈ z^i,i+1,*_-r_i such that θ(X^*_i) = -X^*_i.

Choose an arbitrary element U^*_i in the dual coset. It suffices to show that U^*_i represents the same character of z^i,i+1_r_i as the element

X^*_i = (U^*_i - θ(U^*_i))/2.

We first observe that for each Y∈ z^i,i+1_r_i the associated element (Y+θ(Y))/2 lies in the space (z^i,i+1_r_i)^θ of θ-fixed elements of z^i,i+1_r_i.
Hence exp((Y+θ(Y))/2 + z^i,i+1_r_i+) lies in (Z^i,i+1_r_i:r_i+)^θ or, equivalently, in the image of (Z^i,i+1_r_i)^θ in Z^i,i+1_r_i:r_i+. Thus

ψ(U^*_i((Y+θ(Y))/2)) = ϕ(exp((Y+θ(Y))/2 + z^i,i+1_r_i+)) = 1

and therefore

ψ(U^*_i(Y/2)) = ψ(U^*_i(θ(Y)/2))^-1 = ψ(θ(U^*_i)(Y/2))^-1.

Hence

ψ(((U^*_i+θ(U^*_i))/2)(Y)) = 1.

Our claim follows.

We now adapt Proposition 4.2 from <cit.>, whose proof is rather involved. In the statement, (τ_i, V_i) is the Heisenberg representation of the group ℋ_i = W_i×μ_p defined in 2.8 <cit.>. The notations ω_i and 𝒮_i are also defined in 2.8 <cit.>.

If ϕ̂|K^θ_+ = 1, θ(G⃗) = G⃗, and i∈{0,…,d-1} then the spaces Hom_W^+_i×1(τ_i,1) and V_i^τ_i(W_i^+×1) have dimension one, where

W^+_i = J^i+1,θ J^i+1_+/J^i+1_+.

If

𝒫_i = {s∈𝒮_i : sW^+_i⊆ W^+_i}

and ε_i is the unique character of 𝒫_i of order two then

Hom_W^+_i×1(τ_i,1) ⊆ Hom_K^i,θ(ω_i,ε_i).

Our assertions follow directly from Proposition 4.2 <cit.>, where we replace (𝐆',𝐆,ϕ|G'_y,r) by (𝐆^i,𝐆^i+1,ζ_i|G^i_x,r_i). It should be noted that in <cit.> one assumes that one has a quasicharacter of G' that is G-generic of (positive) depth r. However, a line-by-line analysis of the proof of Proposition 4.2 <cit.> reveals that the proof only uses the restriction of the latter quasicharacter to G'_y,r and, moreover, there is no need to require that this restriction extends to a quasicharacter of G'.

If ϕ̂|K^θ_+ = 1, θ(G⃗) = G⃗, and i∈{0,…,d-1} then

V^κ(J^i+1,θ) = V_ρ⊗ V_0⊗⋯⊗ V_i-1⊗ V_i^τ_i(W_i^+×1)⊗ V_i+1⊗⋯⊗ V_d-1.

The representation κ can be constructed as in 3.11 <cit.>. The construction requires the choice of a special homomorphism ν_i (in the sense of Definition 3.8.1 <cit.>) and a character χ_i of J^i+1 (as described in 3.11 <cit.>). These choices (within the restrictions of <cit.>) do not affect the isomorphism class of κ, so we will make choices that are most convenient for our present purposes. In particular, we choose ν_i so that it is associated to Yu's special isomorphism ν_i^∙ (see Definition 3.15 <cit.>) in the sense that the following diagram commutes:

J^i+1 --ν_i--> ℋ_i = W_i⊠μ_p
  |                    | Id×ζ_i^-1
  v                    v
J^i+1/ker ζ_i --ν_i^∙--> W_i⊠(J^i+1_+/ker ζ_i)

(The ⊠ notation is explained in 2.3 <cit.>.) With this choice of ν_i, Proposition 4.2 <cit.> implies ν_i(J^i+1,θ) = W_i^+×1.

The character χ_i is chosen as follows. As in 3.11 <cit.>, define

J^i+1_♭ = G^i+1_der∩ J^i+1, J^i+1_♭,+ = G^i+1_der∩ J^i+1_+.

Then J^i+1/(J^i+1,θ J^i+1_♭) is a compact abelian group and we may view ζ_i^-1(ϕ̂|J^i+1_+) as a character of the subgroup J^i+1_+/(J^i+1,θ_+ J^i+1_♭,+). (Here, we are using Lemma <ref>. We also caution that, unlike in the diagram above, the notation ζ_i^-1 does not denote the inverse function of ζ_i, but rather ζ_i^-1(z) = ζ_i(z)^-1.) We take χ_i to be a character of the compact abelian group J^i+1/(J^i+1,θ J^i+1_♭) that extends the character ζ_i^-1(ϕ̂|J^i+1_+) of the subgroup J^i+1_+/(J^i+1,θ_+ J^i+1_♭,+).

With these choices, if j∈ J^i+1,θ then

κ(j) = 1_ρ⊗1_0⊗⋯⊗1_i-1⊗τ_i(ν_i(j))⊗1_i+1⊗⋯⊗1_d-1,

according to the construction in 3.11 <cit.>. This implies that our assertion holds.
We are interested in studying the space Hom_K^θ(κ, 1). As a preliminary step, we study the space Hom_J^1,θ⋯ J^d,θ(κ,1) of linear forms on V that are fixed by each of the groups J^1,θ,…, J^d,θ, and its subspace Hom_H_x^θ J^1,θ⋯ J^d,θ(κ,1) of H^θ_x-fixed linear forms. In the following discussion, the reader should consult 3.11 <cit.> for basic facts about the construction of κ and the relation of κ to auxiliary objects such as ρ and ω_i. We observe now that the character ε_H_x,θ defined in <ref> may also be expressed as the character ε_H_x,θ = ∏_i=0^d-1(ε_i|H_x^θ) of H^θ_x, and we note that ε_H_x,θ^2=1. (See 5.5 <cit.>.) Suppose ϕ̂|K^θ_+=1 and θ (G⃗) = G⃗, and for each i∈{ 0,…, d-1} choose a nonzero element v_i^∘ in the 1-dimensional space V_i^τ_i (W_i^+× 1). Then λ↦( v_ρ↦λ (v_ρ⊗ v_0^∘⊗⋯⊗ v_d-1^∘)) determines a linear isomorphism Hom_J^1,θ⋯ J^d,θ(κ,1) ≅ Hom_ℂ(V_ρ,ℂ) that is canonical up to scalar multiples. The latter isomorphism restricts to an isomorphism Hom_H_x^θ J^1,θ⋯ J^d,θ(κ,1) ≅ Hom_H_x^θ(ρ,ε_H_x,θ). According to Lemma <ref>, we have an isomorphism V_ρ ≅ V^κ (J^1,θ⋯ J^d,θ) : v_ρ↦ v_ρ⊗ v_0^∘⊗⋯⊗ v_d-1^∘. We also have a projection V→ V^κ (J^1,θ⋯ J^d,θ) that on elementary tensors v_ρ⊗ v_0⊗⋯⊗ v_d-1 replaces each factor v_i (other than v_ρ) with the average of its τ_i (W_i^+× 1)-translates. Thus we obtain a projection V→ V_ρ. A linear form on V is invariant under J^1,θ⋯ J^d,θ precisely when it factors through this projection V→ V_ρ. Our assertion that Hom_J^1,θ⋯ J^d,θ(κ,1) ≅ Hom_ℂ(V_ρ,ℂ) now follows. The assertion that Hom_H_x^θ J^1,θ⋯ J^d,θ(κ,1) ≅ Hom_H_x^θ(ρ,ε_H_x,θ) follows from Lemma <ref>. If ϕ̂|K^θ_+=1 and θ (G⃗) = G⃗ then θ x = x. Our proof is modeled after the proof of Proposition 5.20 <cit.> and makes use of facts deduced within the latter proof. Suppose that θ x ≠ x. There exists an apartment in ℬ_red(H, F) that contains x and θ (x), and within such an apartment we may choose a point z such that x lies on the boundary of the facet of z in ℬ_red(H, F). Over the residue field f of F, we have a reductive group 𝖧_x and a proper parabolic subgroup 𝖯 with unipotent radical 𝖴, such that 𝖧_x(f) = H_x,0:0+, 𝖯(f) = H_z,0/H_x,0+, 𝖴(f) = H_z,0+/H_x,0+. It is shown in the proof of Proposition 5.20 <cit.> that H_z,0+ = H^θ_z,0+H_x,0+. Similarly, replacing H by H_der, we obtain H_der,z,0+ = H^θ_der,z,0+H_der,x,0+. Using the first of the latter two decompositions, as well as the fact that the character ϕ : H_x,0+→ℂ^× is trivial on H^θ_x,0+, we see that we can inflate ϕ over H^θ_z,0+ to obtain a character inf_H_x,0+^H_z,0+(ϕ). We can assume, after passing to a z-extension, that p does not divide the order of the fundamental group of H_der. This allows us to apply Lemma 3.2.1 <cit.> and the definition of permissibility to see that inf_H_x,0+^H_z,0+(ϕ) extends to a quasicharacter χ of H. Here, we use the decomposition H_der,z,0+ = H^θ_der,z,0+H_der,x,0+ to observe that inf_H_x,0+^H_z,0+(ϕ) is trivial on H_der,z,0+. But, by Lemma 3.2.1 <cit.>, H_der,z,0+ is identical to [H,H]∩ H_z,0+. Given χ, we let ρ_0 = ρ⊗ (χ|H_x)^-1. Then ρ_0 has depth zero and, according to Corollary 3.3.3 <cit.>, it induces an irreducible (supercuspidal) representation of H. The restriction of ρ_0 factors to a (possibly reducible) cuspidal representation ρ̅_0 of 𝖧_x(f). Cuspidality implies that Hom_H^θ_z,0+(ρ_0, 1) = Hom_𝖴(f)(ρ̅_0, 1) = 0.
But this yields the following contradiction: Hom_H_x^θ(ρ,ε_H_x,θ) ⊆ Hom_H^θ_z,0+(ρ_0, 1) = 0. Note that we have used the fact that, by construction, χ is trivial on H^θ_z,0+. Moreover, we have used the fact that ε_H_x,θ is also trivial on H^θ_z,0+. This follows from an argument as in the proof of Proposition 5.20 <cit.>. If ϕ̂|K^θ_+=1 and θ (G⃗) = G⃗ then K^θ = H_x^θ J^1,θ⋯ J^d,θ. This result is a variant of Proposition 3.14 <cit.> and it follows from the results cited in the proof of the latter result and Lemma <ref> above. If 𝒪 is a K-orbit of involutions of G such that ⟨𝒪, ρ⟩_K is nonzero then the character ϕ̂ must be trivial on K^θ_+, and hence ϕ must be trivial on H_x,0+^θ. Our claims follow from the fact that κ |K_+ is a multiple of ϕ̂ (according to Theorem 2.8.1(2) <cit.>) and the fact that, by definition, ϕ̂|H_x,0+ = ϕ.
§.§ From K-orbits of involutions to H_x-orbits of involutions
An orbit 𝒪∈Θ^K is relevant if ⟨𝒪,ρ⟩_K is nonzero. An involution θ of G is stabilizing if θ (G⃗) = G⃗ and θ x = x. Proposition <ref> implies that every relevant orbit contains a stabilizing involution. If 𝒪∈Θ^K is relevant then H_x acts transitively on the set of stabilizing involutions in 𝒪. Fix a stabilizing involution θ in 𝒪. Clearly, every element of the H_x-orbit of θ is stabilizing. Now suppose we are given a stabilizing involution in 𝒪. Then this involution must have the form k·θ, for some k∈ K. Then k·θ must stabilize H_x and thus, according to Proposition 3.7 <cit.>, k must lie in H_x. This proves our assertion. Our next result is an adaptation of Lemma 3.3 <cit.>. If θ is a stabilizing involution then K_θ = H_x,θJ^1,θ⋯ J^d,θ, where H_x,θ = H_x∩ G_θ. The desired result follows from repeatedly applying Lemma 3.4 <cit.>; however, we note that the statement of the latter result is missing the hypothesis H^1_θ (A∩ B) = 1 used in the proof. More precisely, one shows K_θ = (H_xJ^1⋯ J^d-1)_θ J^d,θ = ⋯ = H_x,θJ^1,θ⋯ J^d,θ. To see that the hypotheses of Lemma 3.4 <cit.> are satisfied, we refer to Proposition 2.12 <cit.> and Lemma 3.12 <cit.>. When θ is a stabilizing involution and ϑ is its H_x-orbit then we define m_H_x (ϑ) = [ G_θ : G^θ H_x,θ], where H_x,θ = H_x∩ G_θ. It is easy to see that this definition does not depend on the choice of θ in ϑ. If θ is a stabilizing involution then m_H_x (H_x·θ) = m_K(K·θ). This follows directly from the definitions and Lemma <ref>. Recall that if θ∈Θ has H_x-orbit ϑ then we have defined ⟨ϑ,ρ⟩_H_x = dim Hom_H_x^θ(ρ,ε_H_x,θ), and we note that this definition does not depend on the choice of θ in ϑ. We also write ϑ∼ρ when H_x is θ-stable for all θ∈ϑ and, in addition, ⟨ϑ,ρ⟩_H_x is nonzero. ⟨Θ, ρ⟩_G = ∑_ϑ∼ρ m_H_x(ϑ) ⟨ϑ,ρ⟩_H_x. Recall that ⟨Θ, ρ⟩_G = ∑_𝒪∈Θ^K m_K(𝒪) ⟨𝒪,ρ⟩_K. Suppose ⟨𝒪,ρ⟩_K is nonzero or, in other words, 𝒪 is relevant. Then according to Lemma <ref>, 𝒪 contains a unique H_x-orbit ϑ that consists of all the stabilizing involutions in 𝒪. Lemma <ref> implies that m_K(𝒪) = m_H_x (ϑ) and Proposition <ref>(5) implies that ⟨𝒪,ρ⟩_K = ⟨ϑ,ρ⟩_H_x. Proposition <ref> also implies θ (H_x) = H_x and hence ϑ∼ρ. It remains to show that if ϑ is any H_x-orbit in Θ such that ϑ∼ρ then its K-orbit 𝒪 is relevant. Given such an orbit ϑ, fix θ∈ϑ. Lemma 3.5 <cit.> implies that θ stabilizes H and x.
According to Lemma <ref> and Corollary <ref>, it now suffices to show that ϕ̂|K^θ_+=1 and θ (G⃗) = G⃗. One can inductively show that the groups G^d,…, G^0 are θ-stable by using the θ-stability of H and x together with the fact that G^i is defined (in 2.4 <cit.>) to be the (unique) maximal subgroup of G such that: * G^i is defined over F, * G^i contains H, * G^i is a Levi subgroup of G over E, * ϕ |H^i-1_x,r_i = 1, where H^i-1 = G^i_der∩ H. Finally, we observe that ε_H_x,θ |H_x,0+^θ = 1 since it is a character of exponent 2 of a pro-p-group with p odd. Therefore, ϑ∼ρ implies that ϕ |H_x,0+^θ = 1. Since θ (G⃗) = G⃗ and θ x = x, it is easy to see from the definition of ϕ̂ that ϕ |H_x,0+^θ = 1 implies ϕ̂|K^θ_+=1. (See 3.9 <cit.> for information on the definition of ϕ̂.)
§ ADAPTING LUSZTIG'S FINITE FIELD THEORY
The main purpose of this section is to prove Proposition <ref> or, in other words, the finite field case of Theorem <ref>. We also study Lusztig's character ε (see <cit.>) and interpret it in terms of determinants of adjoint actions, and then compute it in Proposition <ref>.
§.§ 𝔽_q-rank and its parity
Let q be a power of an odd prime p. Fix a connected, reductive group 𝖦 defined over 𝔽_q, and let rank(𝖦) denote the 𝔽_q-rank of 𝖦 and, following Lusztig <cit.>, define σ (𝖦) = (-1)^rank(𝖦). Fix a maximal torus 𝖳 in 𝖦 that is defined over 𝔽_q. The product σ (𝖦) σ(𝖳) is ubiquitous in the Deligne-Lusztig theory of virtual characters of 𝖦 associated to 𝖳. A useful formula for computing this product is σ (𝖦) σ(𝖳) = (-1)^dim(𝖴/(𝖴∩ F𝖴)), where 𝖴 is the unipotent radical of a Borel subgroup 𝖡 of 𝖦 containing 𝖳, and F is the Frobenius automorphism. (See <cit.>.) The latter formula is also useful in the form σ (𝖦) = σ(𝖳)(-1)^dim(𝖴/(𝖴∩ F𝖴)) as a tool to compute σ (𝖦). Note that when 𝖡 (and hence 𝖴) is F-stable then 𝖳 is maximally split. In this case, the resulting identity σ (𝖦) = σ(𝖳) can also be seen as a consequence of the fact that the 𝔽_q-rank of 𝖦 is the 𝔽_q-rank of any maximally split torus in 𝖦.
§.§ A formula for σ (𝖦) σ(𝖳)
Let Φ = Φ (𝖦,𝖳). Assume we have fixed a Borel subgroup 𝖡 containing 𝖳 and having unipotent radical 𝖴. Let Φ^+ be the associated system of positive roots in Φ. Let Γ = Gal(𝔽̄_q/𝔽_q) be the absolute Galois group of 𝔽_q. Then Γ is generated by the Frobenius automorphism F(x) = x^q. Suppose 𝒪 is a Γ-orbit in Φ. If a is any root in 𝒪 and if d is the order of 𝒪 then the elements of 𝒪 are precisely the elements a, Fa, F^2a,…, F^d-1a. In this situation, 𝔽_q^d is the (minimal) field of definition of a (and all of the elements of 𝒪). Let -𝒪 = { -a: a∈𝒪}. Then -𝒪 is also a Γ-orbit in Φ. If 𝒪 = -𝒪, we say 𝒪 is a symmetric orbit. Let (Γ\Φ)_sym denote the set of symmetric orbits. σ (𝖦) σ(𝖳) = (-1)^#(Γ\Φ) = (-1)^#(Γ\Φ)_sym. Given a Γ-orbit 𝒪, suppose we fix a∈𝒪 and arrange the elements a, Fa,…, F^d-1a as equally spaced points on a circle listed in clockwise order. Now label each element by + or - according to whether it is a positive or negative root. Let s(𝒪) be the number of sign changes from - to + as one goes clockwise around the circle. Then the formula σ (𝖦) σ(𝖳) = (-1)^dim(𝖴/(𝖴∩ F𝖴)) implies that σ (𝖦) σ(𝖳) = ∏_𝒪∈Γ\Φ (-1)^s(𝒪).
Since s(-𝒪) = s(𝒪) for all 𝒪, we have σ (𝖦) σ(𝖳) = ∏_𝒪∈ (Γ\Φ)_sym (-1)^s(𝒪), where (Γ\Φ)_sym is the set of symmetric orbits. Now suppose 𝒪 is a symmetric orbit and choose a root a∈𝒪. Let c be the minimal positive integer such that F^c a = -a. Then the elements a, Fa,…, F^c-1a, -a, -Fa,…, -F^c-1a are distinct and this sequence must be identical to the sequence a, Fa, F^2a,…, F^d-1a. In particular, we note that d=2c. In the simplest case, the roots a, Fa,…, F^c-1a are all positive and thus the roots -a, -Fa,…, -F^c-1a are all negative. In this case, s(𝒪)=1. If d=2 or 4, given an orbit 𝒪, one can always choose a∈𝒪 so that the sign pattern is of the latter type. Now suppose we have an orbit 𝒪 of length d≥ 6. In general, we will have some configuration s_1,…, s_c, -s_1,…, -s_c of ± signs. It is easy to check that if we reverse the sign s_2 (and hence -s_2) then (-1)^s(𝒪) is unchanged. (One only needs to consider s_1, s_2, s_3 and their negatives and check the eight possible sign configurations, which essentially amounts to checking the case of d=6.) One can also adjust s_3,…, s_c-1 so that one essentially reduces to the case of d=6. We deduce that, in general, s(𝒪) is odd if 𝒪 is a symmetric orbit. Our assertion now follows.
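The sign-change count s(𝒪) in the preceding proof is easy to experiment with numerically. The following sketch is our own illustration (orbits are encoded simply as cyclic ± patterns); it checks that s(𝒪) is odd for every symmetric pattern s_1,…, s_c, -s_1,…, -s_c with c ≤ 5:

from itertools import product

def sign_changes(signs):
    # number of '-' to '+' transitions, read cyclically (clockwise)
    n = len(signs)
    return sum(1 for i in range(n)
               if signs[i] == '-' and signs[(i + 1) % n] == '+')

def symmetric_orbit(half):
    # the pattern s_1,...,s_c followed by -s_1,...,-s_c
    flip = {'+': '-', '-': '+'}
    return list(half) + [flip[s] for s in half]

assert all(sign_changes(symmetric_orbit(h)) % 2 == 1
           for c in range(1, 6) for h in product('+-', repeat=c))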
§.§ A formula for ε_𝖳,θ
Let θ be an 𝔽_q-automorphism of order two of 𝖦 that preserves 𝖳. Let 𝖳_+ = (𝖳^θ)^∘ be the identity component of the fixed points of θ in 𝖳. According to <cit.>, 𝖳_+ is identical to the set of all tθ (t) with t∈𝖳. So if a∈Φ then a|𝖳_+ is trivial precisely when θ a = -a. According to <cit.>, we have the following relation between the centralizers of 𝖳_+ in 𝖦 and in its Lie algebra: Lie(Z_𝖦(𝖳_+)) = Z_Lie(𝖦)(𝖳_+). In addition, we observe that Z_Lie(𝖦)(𝖳_+) = Lie(𝖳) ⊕( ⊕_a∈Φ, a|𝖳_+=1 g_a ). Letting Φ_θ = Φ (Z_𝖦 (𝖳_+),𝖳), we now have: Given a∈Φ, the following are equivalent: * a∈Φ_θ, * a|𝖳_+ = 1, * θ a = -a. Lusztig defines ε = ε_𝖳 = ε_𝖳,θ = ε_𝖳(𝔽_q),θ by ε (t) = σ (Z_𝖦(𝖳_+)) σ (Z_𝖦(𝖳_+)∩ Z_𝖦 (t)^∘) for t∈𝖳^θ (𝔽_q). For all t∈𝖳^θ (𝔽_q), ε_𝖳,θ (t) = ∏_a∈Γ\Φ_θ a(t). Fix t∈𝖳^θ(𝔽_q). Then Proposition <ref> implies that σ (Z_𝖦(𝖳_+)) = (-1)^#(Γ\Φ_θ) and σ (Z_𝖦(𝖳_+)∩ Z_𝖦 (t)^∘) = (-1)^#((Γ\Φ_θ)_t), where (Γ\Φ_θ)_t is the set of Γ-orbits of roots a∈Φ_θ such that a(t)=1. If we multiply these factors together, then the only orbits that are not counted twice are the orbits of roots a∈Φ_θ with a(t) ≠ 1. Appealing to Lemma 4.3.1, we observe that, since θ (t) = t and θ a = -a, we must have a(t) = -1 whenever a(t) ≠ 1. Therefore, ε_𝖳,θ (t) = (-1)^s, where s is the number of Γ-orbits of roots a∈Φ_θ such that a(t) = -1. Since (-1)^s = ∏_a∈Γ\Φ_θ a(t), our claim follows.
§.§ Involutions that do not fix any roots
The results in this section are not used elsewhere in this paper. Our objective is to show how Proposition <ref> and Lemma <ref> may be applied to provide an alternative proof of Lemma 10.5 <cit.>. Let ℐ be the set of 𝔽_q-automorphisms of 𝖦 of order two and let 𝖦(𝔽_q) act on ℐ by g·θ = Int(g)∘θ∘ Int(g)^-1. Given θ∈ℐ and g∈𝖦(𝔽_q), then 𝖳 is (g·θ)-stable precisely when g^-1𝖳g is θ-stable. Therefore studying the elements of ℐ that stabilize 𝖳 is equivalent to studying the 𝖦(𝔽_q)-conjugates of 𝖳 that are θ-stable. We prefer to fix 𝖳 and consider various orbits of elements of ℐ that stabilize 𝖳 without choosing any specific involution θ as our base point. This differs from <cit.> and thus some translating between approaches is required. Lusztig fixes θ∈ℐ and defines an associated set 𝒥_θ. According to <cit.>, the set 𝒥_θ is identical to the set of θ-stable maximal tori 𝖳' in 𝖦 such that θ does not fix any roots in Φ (𝖦,𝖳'). Let us now fix a maximal 𝔽_q-torus 𝖳 in 𝖦 and let Φ = Φ (𝖦,𝖳). The next result is essentially Lemma 10.5 <cit.>. If θ∈ℐ and 𝖳∈𝒥_θ then σ (Z_𝖦(𝖳_+)) = σ (𝖦). According to Proposition <ref>, it suffices to show that the number of Γ-orbits in Φ - Φ_θ is even. Consider a root a∈Φ - Φ_θ. According to Lemma <ref> and the hypothesis that 𝖳∈𝒥_θ, the roots a, -a, θ a, -θ a must be distinct. We claim that these four roots cannot all lie in a single Γ-orbit. Suppose, to the contrary, that they do lie in the same orbit. Let Γ_a be the stabilizer of a in Γ. Let F_a be the fixed field of Γ_a in 𝔽̄_q. There must exist γ∈Γ/Γ_a = Gal(F_a/𝔽_q) such that γ a = -a. Clearly, γ has order two, and hence γ is the unique element of order two in the cyclic group Gal(F_a/𝔽_q). Similarly, one can argue that γ a = θ a. Hence, we deduce θ a = -a, which contradicts Lemma <ref>. The latter contradiction implies that the number of Γ-orbits in the Γ-invariant set generated by a, -a, θ a, -θ a is even. But since Φ - Φ_θ is partitioned into Γ-invariant sets generated by the sets { a, -a, θ a, -θ a} associated to a∈Φ - Φ_θ, our claim follows.
§.§ Revising Lusztig's formula
Our main objective in this section is to establish the finite field case of Theorem <ref> and show that it is consistent with Lusztig's results in <cit.>. We also show in Theorem <ref> how Theorem <ref> simplifies over finite fields. Lusztig gives a symmetric space generalization of Deligne-Lusztig virtual characters in <cit.>, and, in Theorem 3.3 <cit.>, he provides a formula for these generalized virtual characters. A much simpler formula, <cit.>, treats the special case of irreducible cuspidal representations. It is this simpler formula that we revise. We remark that Theorem 3.3 <cit.> has been reformulated and generalized in <cit.>, <cit.> and <cit.>. As in the previous sections, we fix a connected, reductive group 𝖦 defined over 𝔽_q. We also fix a maximal torus 𝖳 in 𝖦 that is defined and elliptic over 𝔽_q. Fix a character λ of 𝖳(𝔽_q) that is in general position. Let π = π(λ) be an irreducible, cuspidal representation in the equivalence class associated by Deligne-Lusztig to (𝖳(𝔽_q),λ). Then the character of π is σ (𝖳) σ (𝖦) R_𝖳^λ, where R^λ_𝖳 is the Deligne-Lusztig virtual character associated to (𝖳(𝔽_q),λ). Lusztig's formula 10.6(a) says dim(π^𝖦^θ (𝔽_q)) = #(𝖳(𝔽_q)\Θ_𝖳,λ(𝔽_q)/𝖦^θ (𝔽_q)), where θ∈ℐ, π^𝖦^θ (𝔽_q) is the space of 𝖦^θ (𝔽_q)-fixed points in the space of π, and Θ_𝖳,λ(𝔽_q) = { g∈𝖦(𝔽_q): (g·θ)(𝖳) = 𝖳, λ |𝖳^g·θ(𝔽_q) = ε_𝖳,g·θ}. Given a 𝖦(𝔽_q)-orbit Θ in ℐ, we define ⟨Θ,λ⟩_𝖦(𝔽_q) = dim Hom_𝖦(𝔽_q)^θ(π,1), where θ is an arbitrary element of Θ. This invariant is identical to the invariant on the left hand side of Lusztig's formula 10.6(a) according to the following standard lemma: Let (ρ,𝒱) be an irreducible, complex representation of a finite group 𝒢. If ℋ is a subgroup of 𝒢 and 𝒱^ℋ is the space of ℋ-fixed points in 𝒱 then dim Hom_ℋ(ρ, 1) = dim 𝒱^ℋ. Let 𝒱^* be the space of ℂ-linear forms on 𝒱 and let (ρ̃, 𝒱^*) be the representation given by (ρ̃(g)λ)(v) = λ (ρ(g)^-1v). Then ρ̃ is contragredient to ρ. The space 𝒱 has a canonical decomposition 𝒱^ℋ⊕𝒱_ℋ into ℋ-submodules, where 𝒱_ℋ is the kernel of the projection 𝒱→𝒱^ℋ given by v↦ (1/|ℋ|)∑_h∈ℋρ (h)v. The contragredient has a similar decomposition 𝒱^* = (𝒱^*)^ℋ⊕ (𝒱^*)_ℋ.
A given linear form λ on 𝒱^ℋ extends uniquely to a linear form on 𝒱 that annihilates 𝒱_ℋ. This yields an embedding of (𝒱^ℋ)^* in (𝒱^*)^ℋ. Similarly, (𝒱_ℋ)^* embeds in (𝒱^*)_ℋ. Counting dimensions, we see that (𝒱^ℋ)^* = (𝒱^*)^ℋ and (𝒱_ℋ)^* = (𝒱^*)_ℋ. We now have dim 𝒱^ℋ = dim (𝒱^ℋ)^* = dim (𝒱^*)^ℋ = dim Hom_ℋ(ρ, 1). The next result is the finite field case of Theorem <ref>: ⟨Θ,λ⟩_𝖦(𝔽_q) = ∑_ϑ∼λ m_𝖳(𝔽_q)(ϑ) ⟨ϑ,λ⟩_𝖳(𝔽_q). The right hand side of Lusztig's formula 10.6(a) is the cardinality of the double coset space 𝖳(𝔽_q)\Θ_𝖳,λ(𝔽_q)/𝖦^θ (𝔽_q), where θ is any element of Θ. Let us now fix such an involution θ. The set Θ_𝖳,λ(𝔽_q) may be partitioned as follows: Θ_𝖳,λ(𝔽_q) = ⨆_ϑ∈𝖳(𝔽_q)\Θ Θ_𝖳,λ(𝔽_q)_ϑ, where Θ_𝖳,λ(𝔽_q)_ϑ = { g∈Θ_𝖳,λ(𝔽_q): g·θ∈ϑ}. (Note that it is elementary to see that both Θ_𝖳,λ(𝔽_q) and the sets Θ_𝖳,λ(𝔽_q)_ϑ are unions of double cosets in 𝖳(𝔽_q)\𝖦(𝔽_q)/𝖦^θ (𝔽_q).) Given a 𝖳(𝔽_q)-orbit ϑ in Θ, recall that we write ϑ∼λ when θ' (𝖳(𝔽_q)) = 𝖳(𝔽_q) and Hom_𝖳(𝔽_q)^θ'(λ,ε_𝖳,θ') is nonzero for θ'∈ϑ. Equivalently, ϑ∼λ when θ' (𝖳) = 𝖳 and λ |𝖳(𝔽_q)^θ' = ε_𝖳,θ' for θ'∈ϑ. We observe that Θ_𝖳,λ(𝔽_q)_ϑ is nonempty precisely when ϑ∼λ. So we have Θ_𝖳,λ(𝔽_q) = ⨆_ϑ∼λ Θ_𝖳,λ(𝔽_q)_ϑ. Moreover, we observe that if ϑ∼λ then Θ_𝖳,λ(𝔽_q)_ϑ = { g∈𝖦(𝔽_q): g·θ∈ϑ}. Next, we recall that if ϑ∼λ then ⟨ϑ,λ⟩_𝖳(𝔽_q) is defined as ⟨ϑ,λ⟩_𝖳(𝔽_q) = dim Hom_𝖳(𝔽_q)^θ'(λ,ε_𝖳,θ'), for θ'∈ϑ. But this simply means that if ϑ∼λ then ⟨ϑ,λ⟩_𝖳(𝔽_q) = 1. It now suffices to show that if ϑ∼λ then the cardinality of 𝖳(𝔽_q)\Θ_𝖳,λ(𝔽_q)_ϑ/𝖦^θ (𝔽_q) is m_𝖳(𝔽_q)(ϑ) = [𝖦_θ' (𝔽_q):𝖦^θ' (𝔽_q) (𝖦_θ'(𝔽_q)∩𝖳(𝔽_q))], where θ'∈ϑ and 𝖦_θ' (𝔽_q) is the stabilizer of θ' relative to the action of 𝖦 (𝔽_q) on ℐ. We have a bijection 𝖦 (𝔽_q)/𝖦_θ (𝔽_q) ≅ Θ given by g𝖦_θ(𝔽_q)↦ g·θ. This yields a bijection 𝖳(𝔽_q)\𝖦(𝔽_q)/𝖦_θ (𝔽_q) ≅ 𝖳(𝔽_q)\Θ. This bijection can be pulled back via the natural projection 𝖳(𝔽_q)\𝖦(𝔽_q)/𝖦^θ(𝔽_q) → 𝖳(𝔽_q)\𝖦(𝔽_q)/𝖦_θ (𝔽_q) to yield a surjection 𝖳(𝔽_q)\𝖦(𝔽_q)/𝖦^θ(𝔽_q) → 𝖳(𝔽_q)\Θ. The constant m_𝖳(𝔽_q)(ϑ) is identical to the cardinality of the fiber of ϑ under the latter surjection. Since this fiber is precisely 𝖳(𝔽_q)\Θ_𝖳,λ(𝔽_q)_ϑ/𝖦^θ (𝔽_q), our assertion follows. The purpose of the latter result was to provide a formula that unifies the p-adic and finite field theories. However, if one is specifically interested in the finite field theory then the following result is stronger and follows from the previous proof. ⟨Θ,λ⟩_𝖦(𝔽_q) = ∑_ϑ m_𝖳(𝔽_q)(ϑ), where the sum is over the set of 𝖳(𝔽_q)-orbits ϑ in Θ such that θ (𝖳) = 𝖳 and λ (t) = ε_𝖳,θ (t) = ∏_a∈Γ\Φ_θ a(t), for some (hence all) θ∈ϑ and for all t∈𝖳^θ (𝔽_q). | http://arxiv.org/abs/1703.08861v3 | {
"authors": [
"Jeffrey Hakim"
],
"categories": [
"math.RT",
"22E50, 11F70"
],
"primary_category": "math.RT",
"published": "20170326190637",
"title": "Distinguished Cuspidal Representations over p-adic and Finite Fields"
} |
[email protected] 1Department of Physics, University of the Pacific, 3601 Pacific Avenue, Stockton, CA 95211, USA 2Department of Astronomy, Pennsylvania State University, University Park, PA 16802, USA 3Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, University Park, PA 16802, USA 4Space Science and Astrobiology Division, MS 245-3, NASA Ames Research Center, Moffett Field, CA 94035, USA 5Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA The outer architectures of Kepler's compact systems of multiple transiting planets remain poorly constrained and few of these systems have lower bounds on the orbital distance of any massive outer planets. We infer a minimum orbital distance and upper limits on the inclination of a hypothetical Jovian-mass planet orbiting exterior to the six transiting planets at Kepler-11. Our constraints are derived from dynamical models together with observations provided by the Kepler mission. Firstly, the lack of transit timing variations (TTV) in the outermost transiting planet Kepler-11 g implies that the system does not contain a Jovian mass perturber within 2 AU from the star. Secondly, we test under what initial conditions a Jovian-mass planet moderately inclined from the transiting planets would make their co-transiting configuration unlikely. The transiting planets are secularly coupled and exhibit small mutual inclinations over long time-scales, although the outermost transiting planet, Kepler-11 g, is weakly coupled to the inner five. We rule out a Jovian-mass planet on a 3^∘ inclination within 3.0 AU, and higher inclinations out to further orbital distances, unless an undetected planet exists orbiting in the dynamical gap between Kepler-11 f and Kepler-11 g. Our constraints depend little on whether we assume the six transiting planets of Kepler-11 were initially perfectly co-planar or whether a minimum initial mutual inclination between the transiting planets is adopted based on the measured impact parameters of the transiting planets.
§ INTRODUCTION
Kepler's discovery of multiple compact, coplanar systems of multi-transiting sub-Neptune planets has sparked renewed interest in planet formation theory. Kepler found 261 systems with three or more transiting planet candidates, and 92 with four or more transiting planets (retrieved from http://exoplanetarchive.ipac.caltech.edu). Most of these planets, super-Earths to sub-Neptunes, are in a size range that is absent from the solar system, as are their remarkably compact configurations. Systems like Kepler-11, with six planets having orbital periods well under that of Venus <cit.>, highlight the difference between what appears common at other systems and our own Solar System. The debate over how these systems formed has, to a certain extent, given rise to two competing paradigms, invoking either the migration of multi-planet systems to close-in orbits <cit.>, or in situ formation determined by local conditions (). Much of the discussion has occurred in the context of interpreting the period-ratio distribution of planet pairs, which is one of the major planet population observables delivered by the Kepler mission. The period ratio distribution of Kepler's multi-transiting planets has narrow valleys and peaks near first-order mean motion resonances, over an otherwise smooth distribution ().
Generally, the assembly of resonances is thought to result from convergent migration and resonant trapping, followed by migration while maintaining constant orbital period ratios. This explains the preponderance of orbital period ratios near 2:1 or 3:2. However, closer orbital period ratios require either initial formation at a small orbital period ratio or a mechanism to avoid or escape from the 2:1 and 3:2 resonances. Furthermore, the migration of planets and resonant trapping leads to the expectation that the majority of multi-planet systems should end up in resonance or even in resonant chains, although that does not seem to be the case <cit.>. In fact, just narrow of first-order resonances, the period ratio distribution has narrow gaps, with sharp peaks just wide of the resonances (). <cit.> invoked tidal eccentricity damping of near resonant pairs to explain the gap and peak structure near resonances in the period ratio distribution. A similar result was found by <cit.>, who further suggested that many of the planets with near-commensurate orbital period ratios are in fact still in resonances. The tidal dissipation model requires rather efficient tidal damping (tidal Q∼ 10), or a different mechanism beyond orbital periods of around 10 days, where tidal damping becomes less efficient. However, even with rocky planets among the sample, tides are not strong enough to move some of the planets to their observed separations <cit.>. Nevertheless, the period ratio distribution changes significantly with orbital distance and the structure near resonance is more pronounced at shorter periods, which makes tidal dissipation a potentially important factor in explaining the near-resonant peaks <cit.>. An additional challenge to the migration model is that the majority of planet pairs are in fact not in or near resonance, but rather fill the period ratio distribution between the peaks and troughs at low order resonances. Various theories that attempt to explain the period-ratio distribution invoke migration within a disk or planetesimal interactions. <cit.> noted the difficulty of forming close first-order mean motion resonances with disk migration, and found that some fine tuning between smooth migration and stochastic interactions with turbulence or planetesimals can reproduce the observed period ratio distribution. Another explanation invokes the interaction between a planet and the wake of its neighbor within a disk, which can reverse convergent migration and increase the orbital period ratio away from resonance <cit.>. Alternatively, <cit.> found that a disk of planetesimals can cause low mass planets to leave resonance after gas dispersal if the planetesimal disk mass is high enough. In their simulations, resonances persist among more massive planets. <cit.> invoked overstable librations in resonances to explain the low number of systems in resonance, where the eccentricities are high enough (given the dynamical masses of the planets) that resonant capture is only temporary. While there has been significant progress in reconciling the expectation of planets trapped in resonant chains with the observed period ratio distribution in migration models, <cit.> showed that the characteristic peak-trough structure near resonances could also be a natural outcome of rapid in situ formation. Therefore, both in situ and migration models remain viable frameworks to explain the orbital period-ratios of Kepler's multi-transiting planets.
Where the two paradigms differ most is in the outer architectures of the compact multi-transiting systems. <cit.> modelled the assembly of compact multi-planet systems as co-migrating ensembles of distantly formed Neptunes. In these models, the likelihood of the arrival of the Neptunes to distances within 1 AU depends on the presence of giant planets, which tend to block access to the inner regions. Under this scenario, the high-multiplicity systems of transiting planets should not have distant jovian planets. On the other hand, in situ formation models allow planets to form in an extended disk over a wide range of distances in relative isolation. In such systems, the presence of a compact group of inner super-Earths or Neptunes would in no way preclude the formation of outer planets like the giant planets of our Solar System (). Until an RV survey of compact multi-transiting planets is largely complete, we can only assess the potential effects of outer Jovian planets on the formation and architectures of these systems. <cit.> considered the effect of the `Grand Tack' scenario <cit.>, whereby Jupiter migrates in as close as ∼1.5 AU before migrating outwards, trapped in resonance with Saturn. In a system like Kepler-11, such a scenario could drive a compact system of inner planets formed in situ into the star. Hence, a lack of outer Jovian planets or an alternative to a Grand Tack scenario is predicted for such compact systems, if they formed in situ. For transiting systems with high multiplicity (four or more transiting planets), it will take some time before RV surveys can answer these questions. Very few high multiplicity systems have published RV constraints on the orbital architectures beyond the known transiting planets, due to the limited time available for observations, the large number of systems with four or more transiting planets and the faintness of many of the most interesting systems. In the case of Kepler-11, RV monitoring with Keck HIRES since 2011 found the presence of a Jovian-mass outer planet (m sin i = 1 M_Jup) unlikely within an orbital period of 1000 days or ≈ 1.93 AU <cit.>. However, dynamical models of the effects of distant Jovian planets on the observed transiting systems can reveal important insights into the outer architectures of multi-planet systems. <cit.> modeled the effects of outer Jovian planets on the assembly of resonances among compact multi-transiting systems like Kepler-11. They found that by disrupting wide, low-order resonances (e.g., 2:1 and 3:2) between inner planets, massive Jovian companions enhance the likelihood of inner planets becoming trapped in closer resonances (e.g., 4:3 and 5:4), or Laplace-like resonance chains. They predict that distant Jovians may be more likely at systems like Kepler-36, which contains transiting planets with a period ratio smaller than 6:5. <cit.> considered the effects of secular perturbations on the multiplicity of transiting systems in the Kepler dataset. They found that the excess of single transiting planets cannot be explained by the pumping of inclination without driving a significant fraction of systems to be dynamically unstable. A similar conclusion was drawn by <cit.>. In their results, the maximum mutual inclination between two close-in planets in the presence of an external perturber is constrained by the mass ratio of the inner planets and their orbital separation.
They found that, given a moderately inclined outer planet, the mutual inclination of a close-in pair increases with m_p/a_p^3, where m_p and a_p are the mass and orbital distance of the outer perturbing planet. In the case of Kepler-11, upper limits on the mass of a perturbing planet beyond the outermost transiting planet can be derived from two lines of inquiry with dynamical simulations, which may be applicable to several systems of high multiplicity discovered by Kepler. The first is to test the extent to which the existing transit timing data can rule out a Jovian companion beyond the transiting planets. In the case of Kepler-11, observed transit timing variations (TTVs) place useful upper limits on non-transiting perturbers that are near a low-order mean motion resonance with Kepler-11 g, or near enough to cause a detectable “synodic chopping". We explain the method and results of this in the next section. The second line of inquiry is to test the effect of a distant Jupiter with a moderately inclined orbit on the remarkably co-planar configuration of Kepler-11 b-g. A similar analysis of the Kepler-20 and WASP-47 systems has been performed by <cit.>. In the case of Kepler-11, the coupling between the six transiting planets can cause non-linear effects in mutual inclinations due to overlapping resonances. Nevertheless, <cit.> have demonstrated that to a large extent the maximum mutual inclination of an inner system of two planets with an inclined perturber can be determined analytically, particularly in the limit of strong secular coupling. Furthermore, the nonlinear effect in systems of higher multiplicity may be neglected in the strong coupling limit if the angular momentum of the inner system is dominated by one planet. In the case of Kepler-11, five of the six known planets have tightly constrained masses, and their masses span much less than one order of magnitude <cit.>. The inner five are all strongly coupled and their mutual inclinations remain small in the presence of a Jovian perturber. However, the wide separation between Kepler-11 f and g leaves this pair moderately coupled, and in this case, the amplitude of mutual inclination cycles is sensitive to the ratio of planetary masses among the inner pair. We refer to the transiting planets of Kepler-11 as the “confirmed" planets throughout (even though Kepler-11 g is technically “validated" by a probabilistic argument rather than being “confirmed" by an alternative signature like TTVs, see ), to distinguish them from added planets in our simulations. As shown in our results, the likelihood of all six confirmed planets to be transiting for our line-of-sight if the system contains a massive outer planet on an inclined orbit is bolstered considerably by their compact configuration that keeps the planets locked with low mutual inclinations over secular timescales, even if the six planets have a wide range of inclinations over the nodal precession cycle. So long as their mutual inclinations are very low, the probability to observe all six planets in transit is not significantly less than the transit probability of the outermost planet. If the planets had significant mutual inclinations, the likelihood that they would all be transiting for an observer from any favorable perspective is very small.
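As a rough numerical guide to the m_p/a_p^3 scaling noted above, the following sketch (our own illustration; the mass–distance pairs are simply the limiting cases quoted later in this paper, not fitted values) compares the secular forcing strength of the perturbers we consider:

cases = {"1.0 M_Jup at 3.0 AU": (1.0, 3.0),
         "0.3 M_Jup at 2.0 AU": (0.3, 2.0),
         "0.1 M_Jup at 1.5 AU": (0.1, 1.5)}
for label, (m_p, a_p) in cases.items():
    print(label, "->", round(m_p / a_p**3, 4), "M_Jup/AU^3")

The three cases have comparable forcing strengths (≈0.03–0.04 M_Jup/AU^3), consistent with lower-mass perturbers being excluded only out to proportionally smaller distances.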
The conditions under which an inclined Jovian companion breaks this coplanarity, and makes it unlikely that all six could be transiting for any observer, provide upper limits on the mass and inclination of a Jovian planet to orbital distances beyond what is possible with existing TTV data and potentially beyond what is possible with existing RV data. We explain our method and results for this problem in the next section.
§ TTV CONSTRAINTS ON PUTATIVE PLANETS
The dataset of transit times for Kepler-11 modeled by <cit.> included all short cadence Kepler data available through Q14, with long cadence data where short cadence was unavailable. We complete the dataset using the long cadence transit times from Q15 through Q17 from <cit.>. We fitted the transit times using the orbital periods, phases, eccentricity vector components and dynamical masses as free parameters for Kepler-11 b-f throughout. Kepler-11 g has a period more than 2.5 times that of Kepler-11 f, and its mass is poorly constrained by the TTVs <cit.>. The planet is 3.3 R_⊕ in size. In some cases, planetary radius can serve as a reasonable proxy for mass, although the diversity in density among sub-Neptunes, both between systems and within the same systems (), leaves a wide range of plausible masses for Kepler-11 g. For most of our simulations, we assumed the system was co-planar and fixed the mass of Kepler-11 g at 5.5 M_⊕. Our estimate for the mass of this planet follows a simple approximate mass-radius relation for Kepler-11 b–f (M_p/M_⊕ ∼ (1/2)(R_p/R_⊕)^2). This is similar to the mass-radius relation for the Solar System found by <cit.> except that the planets of Kepler-11 are roughly half as massive over the same range in radii. This assumed mass for Kepler-11 g is close to the mode of well-characterized planets less massive than Neptune <cit.>.
§.§ A putative planet between Kepler-11 f and Kepler-11 g.
First, we tested the effect of an undetected planet orbiting between Kepler-11 f and Kepler-11 g on the goodness of fit of all measured transit times found via Levenberg-Marquardt minimization (L-M) for a range of orbital periods between Kepler-11 f and Kepler-11 g from 50 to 110 days. In our TTV modeling, for the inner five planets, orbital period, initial orbital phase, eccentricity vector components, and mass were all free parameters (5 per planet). We performed a grid search over a fixed orbital period for Kepler-11 x, with the orbital phase as a free parameter. In these fits, we fixed the eccentricity of Kepler-11 x at zero, under the assumption that since the TTVs in Kepler-11 g are sensitive to the relative eccentricities of Kepler-11 x and Kepler-11 g, freeing the eccentricity of either planet would have a similar effect but freeing both would add two parameters too many. To map out the goodness-of-fit (χ^2) as a function of the orbital period of Kepler-11 x, we began this grid search near the middle of the orbital period range (P_x = 80 days), using the best fit at each orbital period as an initial input model for the next orbital period, changing the fixed orbital period by 0.1 days between each L-M fit. We performed this search for three fixed masses for the undetected Kepler-11 x: 1 M_⊕, 3 M_⊕ and 10 M_⊕. We found that the goodness-of-fit (χ^2) showed sharp features due to the sensitivity of multiple planets' transit times to a perturber between Kepler-11 f and Kepler-11 g. In most cases, we found that the extra parameters degraded rather than improved the goodness of fit (see left panel in Figure <ref>).
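The warm-started grid search described above can be sketched schematically as follows (our illustration, not the authors' code; `residuals' is a trivial placeholder standing in for a full n-body TTV model compared against the measured transit times):

import numpy as np
from scipy.optimize import least_squares

def residuals(params, P_x):
    # Placeholder for (model TTVs - observed TTVs) / sigma; a real
    # implementation would integrate the 7-planet system.
    return np.atleast_1d(params - np.sin(P_x))

P_grid = np.arange(80.0, 110.0, 0.1)   # ascending branch of trial periods (days);
params = np.zeros(1)                   # an analogous loop descends toward 50 d
chi2 = {}
for P_x in P_grid:                     # step 0.1 d, warm-starting from last best fit
    fit = least_squares(residuals, params, args=(P_x,))
    params = fit.x
    chi2[P_x] = 2.0 * fit.cost         # least_squares cost = chi^2 / 2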
The best fit model that we found with a Kepler-11 x had a χ^2 value of 371, with the mass of Kepler-11 x fixed at 1 M_⊕, and 4 additional free parameters, namely, the orbital period and phase of Kepler-11 x, and the eccentricity vector components of Kepler-11 g. The right two panels in Figure <ref> illustrate the effect of Kepler-11 x on the TTVs of Kepler-11 g, at two local minima found in the left panel. In both cases, high frequency TTVs (so-called synodic chopping) improve the goodness-of-fit at very specific orbital periods. The Bayesian Information Criterion (BIC) offers a simplistic way to compare the best fit model with extra parameters to the best fit 6-planet model without Kepler-11 x as follows: BIC = χ^2 + k ln n, where χ^2 measures the goodness-of-fit of a parametric model to the observed transit timing data, k is the number of free parameters and n is the number of measured transit times. In this case, the dataset includes the transit times of all 6 known planets (340 transits). For the 6-planet model, where χ^2_6pl = 374 and k_6pl = 27, we found BIC_6pl = 531. The corresponding estimate for our best fit model including Kepler-11 x gives BIC_x of 552. Since the 6-planet model has a lower BIC than the model with Kepler-11 x, there is no significant evidence for the more complex model. In sum, we find that in most cases an additional planet between Kepler-11 f and Kepler-11 g degrades rather than enhances the fit to the TTV data, and where the fit is improved, the improvement provides no significant evidence of a planet between Kepler-11 f and Kepler-11 g.
§.§ A putative Jovian planet beyond Kepler-11 g.
We tested the effect of a Jovian-mass perturber (“Kepler-11 J") on an eccentric orbit beyond Kepler-11 g on the goodness-of-fit of the transit timing data for the six known planets. For these models, we fixed the orbital period of Kepler-11 J for each L-M fit, increasing the orbital period in increments of 1 day from 250 to 1200 days, for three possible perturbing masses: 1 M_Jup, 0.3 M_Jup, and 0.1 M_Jup. To explore the goodness-of-fit we began our grid search at 800 days, using the local minimum found via L-M fitting for our initial parameters. After finding the local minima at 354 and 700 days, we repeated the grid search using these two locations as starting points. The eccentricity vector components, orbital period and initial orbital phase of the extra planet added four parameters to the TTV model. We show the resulting goodness-of-fit in the left panel of Figure <ref>. At almost all distances, the extra parameters degraded the goodness-of-fit, implying that an extra planet is unlikely out to some distance beyond which its gravity has no material effect on the TTVs. We found two regions that slightly improved the fit to the observed TTVs by causing synodic chopping in the TTVs of Kepler-11 g, at orbital periods of 354 days and a broad region from 600–800 days respectively, as illustrated in the left panel of Figure <ref>. We illustrate the TTVs of Kepler-11 g at both of these local minima in the right two panels of Figure <ref>. We compared the model of a Jovian planet at 354 days with the simpler model of no planets beyond Kepler-11 g (which has the same χ^2 as a model with a Jovian planet beyond 1200 days). The extra free parameters in the Jovian planet model include the orbital period, phase and eccentricity vector components of the Jovian (but its mass is fixed). Our best fit model included a Jovian planet at 354 days, with a goodness-of-fit χ^2_J = 364 and k_J = 31, and hence BIC_J = 545.
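For concreteness, the model comparisons above can be reproduced directly from the quoted values (a sketch; the numbers are those given in the text):

from math import log

def bic(chi2, k, n):
    return chi2 + k * log(n)     # BIC = chi^2 + k ln n

n = 340                          # measured transit times for all six planets
print(round(bic(374, 27, n)))    # six-planet model            -> 531
print(round(bic(371, 31, n)))    # with Kepler-11 x (1 M_Earth) -> 552
print(round(bic(364, 31, n)))    # with a Jovian at 354 days    -> 545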
We repeated this experiment with the mass of the perturber fixed at 0.3 M_Jup and 0.1 M_Jup respectively. Reducing the mass reduced the height and depth of local extrema substantially, whilst leaving the orbital periods of the best-fit solutions roughly unchanged. Since the 6-planet model has a lower BIC than the model with a Jovian planet, there is no significant evidence for a Jovian planet beyond Kepler-11 g from the TTVs. For the range of distances where the goodness-of-fit is substantially degraded, we note that a Jovian-mass planet is most unlikely within 480 days orbital period, with the exception of a narrow possible range of orbital periods around 356 days. For a 0.3 M_Jup perturber, a wider range of solutions are consistent with the data around 356 days orbital period, but otherwise this extra planet is unlikely within about 460 days (1.1 AU). In the case of a 0.1 M_Jup perturber at an orbital period of 241 days, we found a TTV model with χ^2 = 373, comparable to the six-planet model. While such a planet is ruled unlikely over a much smaller range of distances from the TTVs (from about 250–350 days), the improvement to the TTV model around 356 days or further is also substantially weaker. Hence we conclude that the TTVs provide no strong evidence of an additional planet beyond Kepler-11 g, and a weak improvement in TTV fitting for a range of possible masses with a perturber orbiting around 354 days. We note that these lower mass solutions for the perturber may not be ruled out by the existing RV dataset. Hence we look for further dynamical constraints from outer perturbers and the extremely low mutual inclinations between the confirmed planets.
§ COPLANARITY CONSTRAINTS VIA SECULAR INCLINATION EVOLUTION
§.§ Set-up
Any two planets `y' and `z' have a mutual inclination defined by: cos ϕ_y,z = cos i_y cos i_z + sin i_y sin i_z cos(Ω_y-Ω_z), where ϕ_y,z is their true mutual inclination, and i and Ω are orbital inclinations and ascending nodes <cit.>. We consider planet y to be “nearly co-planar" with z (i.e., likely for both planets to transit for any observer that observes the outer planet to be transiting) if: sin ϕ_y,z ≤ R_⋆/a_z, where R_⋆ is the radius of the star and a_z is the semi-major axis of the outer planet `z'. The actual likelihood depends on the impact parameter for the outer planet, ranging from 50% if the outer planet has a grazing transit geometry, to 100% if the outer planet has zero impact parameter for the observer. For a simulation over the secular timescale we count what fraction of time-steps this condition is satisfied, with Kepler-11 g as the outer planet. Note, however, that this condition for coplanarity is commutative. Our results are unaffected by whether we count how many time-steps planet `b' (or `c', or `d') is in the plane of planet `g' or vice-versa. Although Kepler-11 b has a higher transit probability (R_⋆/a) than its neighbors, we choose the varying plane of Kepler-11 b as our reference plane because in practice, we find that the five planets Kepler-11 b–f have negligible mutual inclinations in all our simulations, and our results are driven by the separation of Kepler-11 g from the inner five. In our first set of simulations, we assume the system of transiting planets at Kepler-11 was initially coplanar.
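The geometric coplanarity criterion above is simple to implement; the following sketch is our own illustration (angles in radians, and the example value of R_⋆/a_z is illustrative only):

import numpy as np

def mutual_inclination(i_y, i_z, dOmega):
    # cos(phi) = cos i_y cos i_z + sin i_y sin i_z cos(Omega_y - Omega_z)
    c = np.cos(i_y) * np.cos(i_z) + np.sin(i_y) * np.sin(i_z) * np.cos(dOmega)
    return np.arccos(np.clip(c, -1.0, 1.0))

def nearly_coplanar(i_y, i_z, dOmega, R_star_over_a_z):
    # "nearly co-planar": sin(phi_{y,z}) <= R_* / a_z
    return np.sin(mutual_inclination(i_y, i_z, dOmega)) <= R_star_over_a_z

# For R_*/a_z of order 0.01, two orbits must stay mutually inclined by
# less than about 0.6 degrees to satisfy the criterion:
print(nearly_coplanar(np.radians(1.0), np.radians(1.4), np.radians(30.0), 0.01))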
Although there are partial constraints on the present-day relative inclinations of the planets from their modeled transit durations, these provide no information on the location of the ascending node (Ω). Without this crucial information, the observed transit impact parameters give approximate minimum mutual inclinations between the confirmed planets. We will test this configuration for our initial conditions, but for now assume the system was initially coplanar. This is the conservative choice for our investigation, since any initial non-zero mutual inclinations would likely reduce the fraction of the time that the planets are in a co-transiting configuration and increase the distance out to which we would find an inclined Jovian mass perturber to be unlikely. Therefore, by assuming an initially coplanar configuration we are measuring a minimum distance for an inclined perturber to be unlikely given that the six confirmed planets are co-transiting. Under this setup, without an inclined perturber, the confirmed transiting planets remain coplanar indefinitely. However, the addition of a hypothetical Jovian planet beyond Kepler-11 g (“Kepler-11 J") will cause oscillations in the inclination vector components of each transiting planet over secular timescales if it has some inclination relative to the compact transiting system. Whether or not these secular effects break the coplanarity of the system depends on Kepler-11 J's mass, orbital distance and inclination. The large gap in orbital period between Kepler-11 f and Kepler-11 g, combined with the compact configuration of the five inner planets, ensures that co-planarity is always lost when Kepler-11 g is excited to significant inclinations, while Kepler-11 b–f remain nearly coplanar. We considered a range of distances for the Jovian-mass perturber from 1.5 to 6 AU, with initial inclinations at 1, 3 and 10 degrees, and determined which combinations of inclination and orbital distance would cause the co-transiting configuration of Kepler-11 b–g to be “unlikely". We chose to explore this range of mutual inclinations for the Kepler-11 system to be consistent with the moderate inclinations observed in multi-planet systems, including the Solar System. We note that in the Solar System, Jupiter has a mutual inclination of 2^∘ with Venus, and 6^∘ with Mercury. Furthermore, the normalized transit duration ratios of all of Kepler's multiplanet systems are consistent with typical mutual inclinations of 1.0^∘–2.0^∘ (). For all simulations, we adopt the planetary and stellar parameters of Kepler-11 in Tables 3 and 4 of <cit.>. These assumed perfectly co-planar orbits. Using transit timing data through Q14 of the Kepler mission, <cit.> found tight constraints on the masses and orbital eccentricities of Kepler-11 b-f, and measured the mass of Kepler-11 to be M_⋆ = 0.961 M_⊙. In these simulations we assumed a mass of 5.5 M_⊕ for Kepler-11 g, as explained in §2.2. This assumed mass is based on a simple mass-radius relation for the well-characterized planets at Kepler-11 and the mode of well-characterized low-mass exoplanets. However, it is roughly half of the mean mass for planets of its size, based on the probabilistic planetary mass-radius relationship inferred from follow-up of Kepler planet candidates (), and we tested our results with a mass of 10 M_⊕ for Kepler-11 g. We used a hybrid symplectic integrator from the Mercury package <cit.>, with a time-step set to 0.1 days, roughly 0.01 times the orbital period of the innermost planet in our simulations.
This choice of time-step optimized numerical accuracy and efficiency, since shorter time-steps provided no changes in the output (to ten significant figures) and hence no improvement in accuracy. In our models with an initial inclination for Kepler-11 J at 3^∘, we found that if it is further than 3.0 AU, the co-transiting configuration of Kepler-11 b–g is likely. We set this model of Kepler-11 J as our benchmark for comparison as we considered additional effects on the coplanarity of Kepler-11 b–g, beginning with a test on how sensitive the coplanarity of the Kepler-11 system is to the chosen mass of Kepler-11 g (see §3.2.1). We also considered the possibility of a seventh low-mass coplanar planet, “Kepler-11 x", orbiting between Kepler-11 f and Kepler-11 g, and its effect on the coplanarity of the known transiting planets with a nearby Jovian planet on an inclined orbit (§3.2.2). Such a planet has not been detected in the Kepler light curve, indicating that it is either: i) too small to cause a detectable transit, ii) inclined relative to the six known planets, or iii) non-existent. Nevertheless, if it were to exist and if it were nearly coplanar with the known transiting planets, it could strengthen the coplanar bond between the known planets and weaken the constraints on the inclination, mass and minimum orbital distance of Kepler-11 J. We ran three suites of models with the low mass planet between Kepler-11 f and Kepler-11 g set at ≈1 M_⊕ (2.4× 10^-6 M_⋆), ≈2 M_⊕, and ≈3 M_⊕, respectively, and considered a range of distances from 0.29 to 0.43 AU. This range was chosen to include all distances between Kepler-11 f and Kepler-11 g for which the system would remain stable for 8 Myr. Note that the results of our TTV studies (Figure <ref>) suggest that a 3 M_⊕ or a 10 M_⊕ planet in this region is unlikely for most of the range of possible orbital periods between 60 and 100 days, and we see little or no evidence for (or against) a 1 M_⊕ planet. As an additional test on our nominal result, we relaxed our assumption of initially coplanar orbits for the transiting planets (§3.2.4). <cit.> show that there is a minimum mutual inclination between the planets given the constraints on their transit impact parameters. While substantial mutual inclinations between the planets are possible, this is exceedingly unlikely, since it would require the planets to share a common longitude of ascending node with our line of sight. Hence, the impact parameters give an approximate upper limit to the mutual inclinations of the planets. For our final test, we adopt the nominal lower bounds on inclinations of <cit.> to set initial mutual inclinations between Kepler-11 b and all of its neighbors. Finally, since the angular momentum of the system is dominated by the distant perturber, we tested the effect of reducing its mass on whether the unlikely co-transiting state of Kepler-11 b–g persists with a lower mass perturber inclined at 3^∘ (§3.2.3). We summarize the input parameters of all our simulations in Table <ref>.
§.§ Results
We illustrate the co-planarity of the six known planets in Kepler-11 over 100 kyr in Figure <ref>. The inner five planets of Kepler-11 are tightly coupled over secular timescales, with mutual inclinations never exceeding a few arc-minutes, while Kepler-11 g reaches an inclination of 1.4^∘ during the secular cycle.
The decoupling of Kepler-11 g from its inner neighbors makes it non-transiting, for much of the time, for most observers that would detect Kepler-11 b–f transiting. We compared the numerical findings in Fig. <ref> with the analytical solutions of <cit.>. The coupling between Kepler-11 f and g at their respective semi-major axes (a_f and a_g), when perturbed by Kepler-11 J, is determined by the parameter ϵ_fg = Ω̂_fJ [(Ω_gJ/Ω_fJ) - 1]/[1 + (L_f/L_g)], where L_f/L_g = (m_f/m_g)(a_f/a_g)^1/2 is the ratio of the planets' angular momenta, and Ω_gJ/Ω_fJ ≈ (a_g/a_f)^3/2 is the ratio of the precession rates of Kepler-11 f and Kepler-11 g respectively, driven by Kepler-11 J: Ω_fJ = [G m_f m_g a_f/(4 a_g^2 L_f)] b_3/2^(1)(a_f/a_g). Here, b_3/2^(1)(a_f/a_g) is a Laplace coefficient in the literal expansion of the disturbing function <cit.>. A similar expression exists for Ω_gJ. The measure of coupling ϵ determines how much inclination dispersion can be expected between the inner planets. Where ϵ ≪ 1, the inner planets are tightly coupled and remain coplanar, and where ϵ ≫ 1 mutual inclinations can reach as high as twice the inclination of the distant perturber <cit.>. Treating the system as two inner planets (Kepler-11 f and g) with a Jovian mass perturber at 3.0 AU inclined at 3^∘, Kepler-11 f and Kepler-11 g are moderately coupled (ϵ = 0.2), while the inner five are all strongly coupled such that between any two adjacent neighbors, ϵ ≲ 0.001. Treating the inner six as a group, and treating Kepler-11 e (8.0 M_⊕) as the dominant mass within the group, the averaged coupling parameter ϵ̂ = 0.1 and the spread in mutual inclinations for the inner six is 0.5^∘, in close agreement with the range in mutual inclinations shown in the left panel of Figure <ref>.
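The Laplace coefficient entering these precession rates is straightforward to evaluate numerically. A sketch follows (our own illustration; the semi-major axes are approximate system values, m_f ≈ 2 M_⊕ is approximately the measured mass, and m_g = 5.5 M_⊕ is the assumed value adopted in this paper):

import numpy as np
from scipy.integrate import quad

def laplace_b(s, j, alpha):
    # b_s^(j)(alpha) = (1/pi) * int_0^{2pi} cos(j psi) /
    #                  (1 - 2 alpha cos psi + alpha^2)^s dpsi
    f = lambda psi: np.cos(j * psi) / (1.0 - 2.0 * alpha * np.cos(psi)
                                       + alpha**2)**s
    return quad(f, 0.0, 2.0 * np.pi)[0] / np.pi

a_f, a_g = 0.25, 0.466        # semi-major axes of Kepler-11 f and g (AU)
m_f, m_g = 2.0, 5.5           # Earth masses (m_g is the assumed value)
L_ratio = (m_f / m_g) * np.sqrt(a_f / a_g)     # L_f/L_g ~ 0.27
Omega_ratio = (a_g / a_f)**1.5                 # Omega_gJ/Omega_fJ ~ 2.5
print(laplace_b(1.5, 1, a_f / a_g))            # b_{3/2}^{(1)}(a_f/a_g)
print((Omega_ratio - 1.0) / (1.0 + L_ratio))   # bracketed factor in eps_fg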
For our numerical determination of the likelihood of the inner six to be co-transiting, we counted the fraction of output time-steps over an 8 Myr simulation during which Kepler-11 b–g are all likely transiting and show the result in Figure <ref>. The 8 Myr timescale covers ≳20 secular periods for the inner planets, so that the fraction of time when the planets are co-planar is adequately sampled with at least one hundred time samples within each secular cycle. Figure <ref> highlights the sensitivity of the transiting configuration to a Jovian planet placed at different distances for three possible initial inclinations. For the nominal case of a 3^∘ inclination, the co-transiting configuration becomes unlikely if the Jovian-mass planet is within 3.0 AU. With an initial inclination at 10^∘, a Jovian-mass planet is unlikely interior to 4.5 AU. Even an inclination of just 1^∘ provides an important constraint on a Jovian-mass perturber. Such a planet is unlikely to exist within 2.0 AU, comparable to the known constraint from RV spectroscopy. We tested the sensitivity of our nominal result to the mass of Kepler-11 g, the presence of a planet orbiting between Kepler-11 f and Kepler-11 g, the mass of the perturbing outer planet, and the minimum mutual inclinations among the transiting planets derived from the transit light curve.
§.§.§ Sensitivity to the poorly constrained mass of Kepler-11 g.
Figure <ref> shows the sensitivity of our benchmark result to the mass of Kepler-11 g, which is poorly constrained from observations (Lissauer et al. 2013). We find that increasing the mass of Kepler-11 g increases the fraction of the time that the 6 transiting planets are kept in a coplanar configuration by a few percent if Kepler-11 J orbits between 2 and 3 AU from the star; i.e., increasing the mass of Kepler-11 g by a factor of 2 modestly reduces the distance out to which we can rule a Jovian perturber on a 3^∘ inclination unlikely, from 3.0 to 2.8 AU. This is a far weaker effect than the initial inclination of the perturbing outer planet, and hence our results are insensitive to the mass of Kepler-11 g.
§.§.§ Sensitivity to a hypothetical planet orbiting between Kepler-11 f and Kepler-11 g.
Figure <ref> shows the effects of an undetected planet between Kepler-11 f and Kepler-11 g (with the mass of Kepler-11 g held at 5.5 M_⊕) on the likelihood that the known planets are co-transiting. This would cause the orbital plane of the transiting planets to be more tightly coupled, requiring a perturbing Jovian planet to be either closer or more highly inclined to break the co-transiting configuration, and hence weakening our constraints. For these models, we assumed a Jovian mass planet (“Kepler-11 J") on a 3^∘ inclination orbiting at 3.0 AU. Giving Kepler-11 x a mass of 1 M_⊕ enhances the likelihood of the observed planets to be co-transiting, although only moderately. The likelihood of all planets transiting remains between 50% and 63% over the entire range of orbital distances where the system was stable for at least 8 Myr. Increasing the mass of Kepler-11 x to 2 or 3 M_⊕ raises the likelihood that the known planets remain co-planar significantly. From Figure <ref>, it is clear that the coplanar condition is sensitive to the mass of the intermediate planet. In the case of the 3 M_⊕ model planet, the 6 confirmed planets are co-transiting all of the time for almost the entire range of possible orbital distances for an intermediate planet, with the exception of the feature near 0.3 AU, near the 4:3 resonance with Kepler-11 f and the 2:1 resonance with Kepler-11 g. A 3 M_⊕ intermediate planet Kepler-11 x would reduce the distance out to which a Jovian-mass planet can be ruled out on the condition of co-planarity, but a 3 M_⊕ planet between Kepler-11 f and Kepler-11 g is also disfavored with the transit timing data as shown in section 2. A 2 M_⊕ planet permits enough mutual inclination between Kepler-11 g and the inner five to make the co-transiting configuration likely but not 100% of the time. A planet less massive than 1 M_⊕ between Kepler-11 f and Kepler-11 g has an even weaker effect on the coplanarity of the system. We conclude here that while the TTVs disfavor a putative Kepler-11 x more massive than 3 M_⊕, neither TTVs nor the constraint of coplanarity rule out a lower mass planet between Kepler-11 f and Kepler-11 g.
§.§.§ Sensitivity to initial inclinations of the transiting planets.
Here, we return to our assumption that an initially perfectly co-planar configuration for the transiting planets gives a conservative minimum distance out to which a Jovian perturber is unlikely. We repeated our nominal simulation with Kepler-11 J at 3.0 AU with an inclination of 3^∘ with respect to the initial orbit plane of Kepler-11 b, and with the initial relative inclinations of the transiting planets based on the values listed in Table 2 of <cit.>. This assumes the planets have the same ascending node.
The relative inclinations of the transiting planets at Kepler-11 to Kepler-11 b are all less than 0.2^∘ except Kepler-11 e, which is mutually inclined to the orbit of Kepler-11 b by 0.75^∘ under this assumption. The results of our simulations with these initial mutual inclinations are shown in Figure <ref>. We found that the range of mutual inclinations between Kepler-11 b–f is significantly higher than the initially co-planar model, and the constraints on a Jovian perturber are driven by relative inclination peaks in Kepler-11 e, f and g. The bottom panel of Figure <ref> reveals important behavior that is different when initial mutual inclinations are included. With the Jovian planet at greater distances, instead of the likelihood of the co-transiting configuration increasing rapidly to 100%, the likelihood asymptotes to 100%, since with non-zero initial inclinations, the system is more sensitive to distant perturbations. For a Jovian perturber beyond 3.0 AU, our constraints from an initially co-planar system are more conservative, as we expected from our initial assumption. However, where there is a transition from “unlikely" to “more likely than not" (the 50% line crossed near 3.0 AU), the initial mutual inclinations make little difference, since the range in mutual inclinations between the inner five and the weak coupling of Kepler-11 g have a comparable effect on the overall range in mutual inclinations.
§.§.§ Sensitivity to the mass of the perturbing outer planet.
Finally, we checked the effect of a reduced mass of the distant perturber on the co-planarity of the transiting planets to determine at what distances a lower mass perturber could be ruled unlikely. In Figure <ref> we plot the temporal variations of the mutual inclinations of the six known planets with a 0.3 M_Jup and a 0.1 M_Jup perturber at 3.0 AU with a 3^∘ initial inclination. In this case, the timescale of the secular cycle in inclination increases from 250 kyr for a 1 M_Jup perturber to ∼ 700 kyr for a 0.3 M_Jup perturber and 2 Myr for a 0.1 M_Jup perturber, although the timescale is irrelevant for our purposes. Most importantly, the mutual inclinations between the inner six never exceed the threshold to make their co-transiting configuration unlikely for the lower masses. We compared the maximum mutual inclinations caused by the perturber (at the same distance a_p) for the three choices of mass, shown in the right panels of Figure <ref> and Figure <ref>. The results are consistent with those of <cit.>, as our maximum mutual inclination approximately scales with the perturbing mass m_p, although the high multiplicity of Kepler-11 makes the peak mutual inclination vary slightly between cycles. Figure <ref> shows the distances at which planets on a 3^∘ inclination with different masses (1 M_Jup, 0.3 M_Jup, or 0.1 M_Jup) can be ruled unlikely. We found that a Saturn-mass perturber (0.3 M_Jup) is unlikely within 2 AU, similar to the RV constraint on a potential Jupiter-mass planet. A lower mass planet (0.1 M_Jup) is unlikely within 1.5 AU. This is a tighter constraint than our TTV models provided, where the model fit of the TTVs was significantly degraded by a perturber of 0.1 M_Jup only within 1 AU (see Figure <ref>).
§ DISCUSSION AND CONCLUSIONS
We define a system as unlikely if all six transiting planets of Kepler-11 would be co-transiting to a distant observer less than 50% of the time if Kepler-11 g is transiting.
We have determined that the minimum distance out to which a Jovian-mass planet at Kepler-11 can be deemed unlikely is 3.0 AU, for a moderate inclination of 3^∘. A Jovian-mass planet at higher inclination can be ruled out to greater orbital distances. In principle, this distance would be reduced if an undetected planet were orbiting between Kepler-11 f and Kepler-11 g. However, the transit timing data make the presence of a planet ≳3 M_⊕ in mass unlikely, and a planet less massive than 3 M_⊕ has only a moderate effect on the coplanarity of the known planets. On the other hand, if a Jovian-mass planet is detected within an orbital distance of 3.0 AU from the star, then the most likely scenarios to explain the current co-transiting configuration of Kepler-11 b-g are either a low inclination of the Jovian-mass planet relative to the transiting six, or a planet between Kepler-11 f and g which would enhance the co-planarity of the inner system. Due to the faintness of Kepler-11, very limited RV observations are available, although the data in hand are able to rule out a planet with m sin i = M_Jup within 1.93 AU <cit.>. This constraint is significantly weaker than our nominal result. Therefore, these dynamical models provide the tightest constraints yet available on the presence of a putative Jovian-mass planet orbiting Kepler-11 beyond Kepler-11 g. We thank Soko Matsumura for comments which improved this paper. D.J. acknowledges the support of the University of the Pacific. We also acknowledge support from NASA Exoplanets Research Program awards #NNX17AC23G and #NNX15AE21G. This work was partially supported by funding from the Pennsylvania State University's Office of Science Engagement and Center for Exoplanets and Habitable Worlds. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium. Portions of this research were conducted with Advanced CyberInfrastructure computational resources provided by The Institute for CyberScience at The Pennsylvania State University (http://ics.psu.edu). We thank the Kepler mission for the extraordinary dataset which continues to advance our understanding of exoplanetary systems. This study benefitted from collaborations and/or information exchanged within NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate. [Baruteau & Papaloizou(2013)]Baruteau2013 Baruteau, C., & Papaloizou, J. C. B. 2013, , 778, 7 [Batygin & Laughlin(2015)]BatyginLaughlin2015 Batygin, K., & Laughlin, G. 2015, Proceedings of the National Academy of Science, 112, 4214 [Batygin & Morbidelli(2013)]Batygin2013 Batygin, K., & Morbidelli, A. 2013, , 145, 1 [Becker & Adams(2017)]Becker2017 Becker, J. C., & Adams, F. C. 2017, ArXiv e-prints 1702.07714 [Chambers(1999)]Chambers1999 Chambers, J. E. 1999, , 304, 793 [Chatterjee & Ford(2015)]Chatterjee2015 Chatterjee, S., & Ford, E. B. 2015, , 803, 33 [Chen & Kipping(2017)]Chen2017 Chen, J., & Kipping, D. 2017, , 834, 17 [Chiang & Laughlin(2013)]chi13 Chiang, E., & Laughlin, G. 2013, , 431, 3444 [Delisle & Laskar(2014)]Delisle2014 Delisle, J.-B., & Laskar, J. 2014, , 570, L7 [Fabrycky et al.(2014)Fabrycky, Lissauer, Ragozzine, Rowe, Steffen, Agol, Barclay, Batalha, Borucki, Ciardi, Ford, Gautier, Geary, Holman, Jenkins, Li, Morehead, Morris, Shporer, Smith, Still, & Van Cleve]fab14 Fabrycky, D. C., et al.
2014, , 790, 146[Goldreich & Schlichting(2014)]Goldreich2014 Goldreich, P., & Schlichting, H. E. 2014, , 147, 32[Hands & Alexander(2015)]Hands2015 Hands, T. O., & Alexander, R. D. 2015, ArXiv e-prints, 1512.02649[Hansen(2016)]Hansen2016 Hansen, B. M. S. 2016, ArXiv e-prints 1608.06300[Hansen & Murray(2013)]Hansen2013 Hansen, B. M. S., & Murray, N. 2013, , 775, 53[Izidoro et al.(2015)Izidoro, Raymond, Morbidelli, Hersant, & Pierens]Izidoro2015 Izidoro, A., Raymond, S. N., Morbidelli, A., Hersant, F., & Pierens, A. 2015, , 800, L22[Jontof-Hutter et al.(2014)Jontof-Hutter, Lissauer, Rowe, & Fabrycky]jont14 Jontof-Hutter, D., Lissauer, J. J., Rowe, J. F., & Fabrycky, D. C. 2014, , 785, 15[Jontof-Hutter et al.(2015)Jontof-Hutter, Lissauer, Rowe, & Fabrycky]jont15 —. 2015, , 785, 15[Jontof-Hutter et al.(2016)Jontof-Hutter, Ford, Rowe, Lissauer, Fabrycky, Van Laerhoven, Agol, Deck, Holczer, & Mazeh]jont16 Jontof-Hutter, D., et al. 2016, , 820, 39[Lai & Pu(2016)]Lai2016 Lai, D., & Pu, B. 2016, ArXiv e-prints 1606.08855[Lee et al.(2013)Lee, Fabrycky, & Lin]Lee2013 Lee, M. H., Fabrycky, D., & Lin, D. N. C. 2013, , 774, 52[Lissauer et al.(2011a)Lissauer, Fabrycky, Ford, Borucki, Fressin, Marcy, Orosz, Rowe, Torres, Welsh, Batalha, Bryson, Buchhave, Caldwell, Carter, Charbonneau, Christiansen, Cochran, Desert, Dunham, Fanelli, Fortney, Gautier, Geary, Gilliland, Haas, Hall, Holman, Koch, Latham, Lopez, McCauliff, Miller, Morehead, Quintana, Ragozzine, Sasselov, Short, & Steffen]liss11a Lissauer, J. J., et al. 2011a, , 470, 53[Lissauer et al.(2011b)Lissauer, Ragozzine, Fabrycky, Steffen, Ford, Jenkins, Shporer, Holman, Rowe, Quintana, Batalha, Borucki, Bryson, Caldwell, Carter, Ciardi, Dunham, Fortney, Gautier, Howell, Koch, Latham, Marcy, Morehead, & Sasselov]liss11b —. 2011b, , 197, 8[Lissauer et al.(2013)Lissauer, Jontof-Hutter, Rowe, Fabrycky, Lopez, Agol, Marcy, Deck, Fischer, Fortney, Howell, Isaacson, Jenkins, Kolbl, Sasselov, Short, & Welsh]liss13 —. 2013, , 770, 131[Lithwick & Wu(2012)]lithwu12 Lithwick, Y., & Wu, Y. 2012, , 756, L11[Malhotra(2015)]Malhotra2015 Malhotra, R. 2015, , 808, 71[Murray & Dermott(1999)]Murray1999 Murry, C.D., & Dermott, S.F., 1999, Solar System Dynamics, (Cambridge Univ. Press)[Petrovich et al.(2013)Petrovich, Malhotra, & Tremaine]Petrovich2013 Petrovich, C., Malhotra, R., & Tremaine, S. 2013, , 770, 24[Ragozzine & Holman(2010)]Ragozzine2010 Ragozzine, D., & Holman, M. J. 2010, ArXiv e-prints 1006.3727[Rein et al.(2012)Rein, Payne, Veras, & Ford]Rein2012 Rein, H., Payne, M. J., Veras, D., & Ford, E. B. 2012, , 426, 187[Rowe & Thompson(2015)]rowe15a Rowe, J. F., & Thompson, S. E. 2015, ArXiv e-prints 1504.00707[Steffen & Hwang(2015)]Steffen2015 Steffen, J. H., & Hwang, J. A. 2015, , 448, 1956[Veras & Ford(2012)]veras2012 Veras, D., & Ford, E. B. 2012, , 420, L23[Walsh et al.(2011)Walsh, Morbidelli, Raymond, O'Brien, & Mandell]Walsh2011 Walsh, K. J., Morbidelli, A., Raymond, S. N., O'Brien, D. P., & Mandell, A. M. 2011, , 475, 206[Weiss(2016)]Weiss2016 Weiss, L. M. 2016, PhD thesis, University of California, Berkeley[Wolfgang et al.(2016)Wolfgang, Rogers, & Ford]Wolfgang2016 Wolfgang, A., Rogers, L. A., & Ford, E. B. 2016, , 825, 19 | http://arxiv.org/abs/1703.08829v1 | {
"authors": [
"Daniel Jontof-Hutter",
"Brian P. Weaver",
"Eric B. Ford",
"Jack J. Lissauer",
"Daniel C. Fabrycky"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170326155349",
"title": "Outer Architecture of Kepler-11: Constraints from Coplanarity"
} |
In this paper we strengthen the relationship between Yoneda structures and KZ doctrines by showing that for any locally fully faithful KZ doctrine, with the notion of admissibility as defined by Bunge and Funk, all of the Yoneda structure axioms apart from the right ideal property are automatic. Department of Mathematics, Macquarie University, NSW 2109, Australia [email protected] [2000]18A35, 18C15, 18D05 Yoneda Structures and KZ Doctrines Charles Walker December 30, 2023 ================================== § INTRODUCTION The majority of this paper concerns Kock-Zöberlein doctrines, which were introduced by Kock <cit.> and Zöberlein <cit.>. These KZ doctrines capture the free cocompletion under a suitable class of colimits Φ, with a canonical example being the free small cocompletion KZ doctrine on locally small categories. On the other hand, Yoneda structures as introduced by Street and Walters <cit.> capture the presheaf construction, with the canonical example being the Yoneda structure on (not necessarily locally small) categories, whose basic data is the Yoneda embedding 𝒜→[𝒜^op,𝐒𝐞𝐭] for each locally small category 𝒜. When 𝒜 is small this coincides with the usual free small cocompletion, but not in general. In this paper we prove a theorem tightening the relationship between these two notions, not just in the context of this example, but in general. A key feature of a Yoneda structure (which is not present in the definition of a KZ doctrine) is a class of 1-cells called admissible 1-cells. In the setting of the usual Yoneda structure on 𝐂𝐀𝐓, a 1-cell (that is, a functor) L: 𝒜→ℬ is called admissible when the corresponding functor ℬ(L-,-): ℬ→[𝒜^op,𝐒𝐄𝐓] factors through the inclusion of [𝒜^op,𝐒𝐞𝐭] into [𝒜^op,𝐒𝐄𝐓]. In order to compare Yoneda structures with KZ doctrines, we will also need a notion of admissibility in the setting of a KZ doctrine. Fortunately, such a notion of admissibility has already been introduced by Bunge and Funk <cit.>. In the case of the free small cocompletion KZ doctrine P on locally small categories, these admissible 1-cells, which we refer to as P-admissible, are those functors L: 𝒜→ℬ for which the corresponding functor ℬ(L-,-): ℬ→[𝒜^op,𝐒𝐞𝐭] factors through the inclusion of P𝒜 into [𝒜^op,𝐒𝐞𝐭]. The main result of this paper, Theorem <ref>, shows that given a locally fully faithful KZ doctrine P on a 2-category 𝒞, on defining the admissible maps to be those of Bunge and Funk, one defines all the data and axioms for a Yoneda structure except for the “right ideal property”, which asks that the class of admissible 1-cells 𝐈 satisfies the property that for each L∈𝐈 we have L· F∈𝐈 for all F such that the composite L· F is defined. § BACKGROUND In this section we will recall the notion of a KZ doctrine P as well as the notions of left extensions and left liftings, as these will be needed to describe Yoneda structures, and to discuss their relationship with KZ doctrines. Suppose we are given a 2-cell η: I→ R· L as in the left diagram [diagrams: triangles of 1-cells L: 𝒜→ℬ, I: 𝒜→𝒞 and R: ℬ→𝒞 with a 2-cell η: I→ R· L; the right-hand diagram additionally pastes a 2-cell σ: R→ M onto η] in a 2-category 𝒞. We say that R is exhibited as a left extension of I along L by the 2-cell η when pasting 2-cells σ: R→ M with the 2-cell η: I→ R· L as in the right diagram defines a bijection between 2-cells R→ M and 2-cells I→ M· L.
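For orientation, the universal property just stated can be computed pointwise in the motivating 2-category 𝐂𝐀𝐓. The following display is only a sketch of that standard special case (it is not needed for the abstract development of this paper), valid whenever the indicated colimit exists:

\[
(\mathrm{Lan}_{L} I)(b) \;\cong\; \mathrm{colim}\Big( L \downarrow b \xrightarrow{\ \mathrm{pr}\ } \mathcal{A} \xrightarrow{\ I\ } \mathcal{C} \Big) \;\cong\; \int^{a \in \mathcal{A}} \mathcal{B}(La,b) \cdot Ia ,
\]

with the exhibiting 2-cell η: I → (Lan_L I) · L given componentwise by the colimit coprojections indexed by the identities 1_{La}.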
Moreover, we say such a left extension is respected by a 1-cell E𝒞→𝒟 when the whiskering of η by E given by the following pasting diagram @=1emℬ[rr]^R𝒞@[ld]|-η⟸[rr]^E@[rd]|-id⟸𝒟 𝒜[uu]_I[uull]^L[uurr]_E· Iexhibits E· R as a left extension of E· I along L. Dually, we have the notion of a left lifting. We say a 2-cell η I→ R· L exhibits L as a left lifting of I through R when pasting 2-cells δ L→ K with the 2-cell η I→ R· L defines a bijection between 2-cells L→ K and 2-cells I→ R· K. We call such a lifting absolute if for any 1-cell F𝒳→𝒜 the whiskering of η by F given by the following pasting diagram@=1emℬ[rr]^R𝒞@[ld]|-η⟸@[rr]|-id⟸𝒜[uu]_I[uull]^L𝒳[uu]_F@/^1pc/[uuuull]^L· Fexhibits L· F as a left lifting of I· F through R.There are quite a few different characterizations of KZ doctrines, for example those due to Kelly-Lack or Kock <cit.>. For the purposes of relating KZ doctrines to Yoneda structures, it will be easiest to work with the following characterization given by Marmolejo and Wood <cit.> in terms of left Kan extensions.<cit.> A KZ doctrine (P,y) on a 2-category 𝒞 consists of (i) An assignation on objects Pob𝒞→ob𝒞;(ii) For every object 𝒜∈𝒞, a 1-cell y_𝒜𝒜→ P𝒜;(iii) For every pair of objects 𝒜 and ℬ and 1-cell F𝒜→ Pℬ, a left extension@=1emP𝒜@->[rr]^F@[rd]|-c_F⟸ Pℬ𝒜[rruu]_F[uu]^y_𝒜of F along y_𝒜 exhibited by an isomorphism c_F as above. Moreover, we require that:(a) For every object 𝒜∈𝒞, the left extension of y_𝒜 as in <ref> is given by@=1emP𝒜@->[rr]^id_P𝒜 P𝒜@[ld]|-id⟸𝒜[uu]_y_𝒜[ulul]^y_𝒜Note that this means c_y_𝒜 is equal to the identity 2-cell on y_𝒜.(b) For any 1-cell Gℬ→ P𝒞, the corresponding left extension G Pℬ→ P𝒞 respects the left extension F in <ref>.This definition is equivalent (in the sense that each gives rise to the other) to the well known algebraic definition, which we refer to as a KZ pseudomonad <cit.>. A KZ pseudomonad (P,y,μ) on a 2-category 𝒞 is taken to be a pseudomonad (P,y,μ) on 𝒞 equipped with a modification θ Py→ yP satisfying two coherence axioms <cit.>.Just as KZ doctrines may be defined algebraically or in terms of left extensions, one may also define pseudo algebras for these KZ doctrines algebraically or in terms of left extensions. The following definitions in terms of left extensions are equivalent to the usual notions of pseudo P-algebra and P-homomorphism, in the sense that we have an equivalence between the two resulting 2-categories of pseudo P-algebras arising from the two different definitions <cit.>.Given a KZ doctrine (P,y) on a 2-category 𝒞, we say an object 𝒳∈𝒞 is P-cocomplete if for every Gℬ→𝒳 @=1emPℬ@->[rr]^G@[rd]|-c_G⟸𝒳P𝒜@->[rr]^F@[rd]|-c_F⟸ Pℬ[rr]^G𝒳ℬ[rruu]_G[uu]^y_ℬ 𝒜[rruu]_F[uu]^y_𝒜there exists a left extension G as on the left exhibited by an isomorphism c_G, and moreover this left extension respects the left extensions F as in the diagram on the right. We say a 1-cell E𝒳→𝒴 between P-cocomplete objects 𝒳 and 𝒴 is a P-homomorphism when it respects all left extensions along y_ℬ into 𝒳 for every object ℬ. It is clear that P𝒜 is P-cocomplete for every 𝒜∈𝒞. The relationship between P-cocompleteness and admitting a pseudo P-algebra structure is as below. 
Given a KZ doctrine (P,y) on a 2-category 𝒞 and an object 𝒳∈𝒞, the following are equivalent:(1) 𝒳 is P-cocomplete;(2) y_𝒳𝒳→ P𝒳 has a left adjoint with invertible counit;(3) 𝒳 is the underlying object of a pseudo P-algebra.For (1)(2) see the proof of <cit.>, and for (2)(3) see <cit.>.We now recall the notion of Yoneda structure as introduced by Street and Walters <cit.>.A Yoneda structure 𝔜 on a 2-category 𝒞 consists of:(1) A class of 1-cells 𝐈 with the property that for any L∈𝐈 we have L· F∈𝐈 for all F such that the composite L· F is defined; we call this the class of admissible 1-cells. We say an object 𝒜∈𝒞 is admissible when id_𝒜 is an admissible 1-cell. (2) For each admissible object 𝒜∈𝒞, an admissible map y_𝒜𝒜→ P𝒜. (3) For each L𝒜→ℬ such that L and 𝒜 are both admissible, a 1-cell R_L and 2-cell φ_L as in the diagram@=1emℬ[rr]^R_L P𝒜@[ld]|-φ_L⟸𝒜[uu]_y_𝒜[uull]^LSuch that:(a) The diagram above exhibits L as a absolute left lifting and R_L as a left extension via φ_L.(b) For each admissible 𝒜, the diagram @=1emP𝒜[rr]^id_P𝒜 P𝒜@[ld]|-id ⟸𝒜[uu]_y_𝒜[uull]^y_𝒜exhibits id_P𝒜 as a left extension.(c) For admissible 𝒜,ℬ and L,K as below, the diagram @=1emP𝒜 Pℬ@[ldld]|-φ_y_ℬ·L[ll]_R_y_ℬ· L@[rd]|-φ_K𝒞[ll]_R_K 𝒜[uu]^y_𝒜[rr]_Lℬ[uu]_y_ℬ[urur]_Kexhibits R_y_ℬ· L· R_K as a left extension.We note that when the admissible maps form a right ideal, the admissibility of L in condition (c) is redundant. However, in the following sections we will consider a setting in which the admissible maps are closed under composition, but do not necessarily form a right ideal.There is an additional axiom (d) discussed in “Yoneda structures” <cit.> which when satisfied defines a so called good Yoneda structure <cit.>. This axiom asks for every admissible L and every diagram@=1emℬ@->[rr]^M P𝒜@[ld]|-ϕ⟸𝒜[uu]_y_𝒜[ulul]^Lthat if ϕ exhibits L as an absolute left lifting, then ϕ exhibits M as a left extension. This condition implies axioms (b) and (c) in the presence of (a) <cit.>. However, this condition is often too strong. For example one may consider the free 𝐂𝐚𝐭-cocompletion, and take ℕ to be the monoid of natural numbers seen as a one object category, yielding the absolute left lifting diagram@=1em1@->[rr]^pick ℕ𝐂𝐚𝐭@[ld]|-!⟸1[uu]_pick 1[ulul]^id_1It is then trivial, as we would be extending along an identity, that the left extension property is not satisfied.§ ADMISSIBLE MAPS IN KZ DOCTRINES Yoneda structures as defined above require us to give a suitable class of admissible maps, and so in order to compare Yoneda structures with KZ doctrines we will need a suitable notion of admissible map in the setting of a KZ doctrine. Bunge and Funk defined a map L𝒜→ℬ in the setting of a KZ pseudomonad P to be P-admissible when PL has a right adjoint, and showed this notion of admissibility may also be described in terms of left extensions <cit.>. Our definition in terms of left extensions and KZ doctrines is as follows. Given a KZ doctrine (P,y) on a 2-category 𝒞, we say a 1-cell L𝒜→ℬ is P-admissible when @=1emℬ@->[rr]^R_L P𝒜@[ld]|-φ_L⟸ ℬ@->[rr]^R_L P𝒜@[ld]|-φ_L⟸[rr]^H@[rd]|-c_H⟸𝒳 𝒜[uu]_y_𝒜[ulul]^L 𝒜[uu]_y_𝒜[ulul]^L[rruu]_H there exists a left extension (R_L,φ_L) of y_𝒜 along L as in the left diagram, and moreover the left extension is respected by any H as in the right diagram where 𝒳 is P-cocomplete.Note that such a H is a P-homomorphism, and conversely that a P-homomorphism H P𝒜→𝒳 is a left extension of H:=H· y_𝒜 along y_𝒜 as above. Thus this is saying the left extension R_L is respected by P-homomorphisms. 
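To anchor this definition, it may help to spell it out in the motivating case recalled in the introduction, where P is the free small cocompletion on locally small categories; the following is only an illustrative sketch of that example. There, L: 𝒜→ℬ is P-admissible precisely when each presheaf ℬ(L-,b) is small, in which case one may take

\[
R_{L}(b) \;=\; \mathcal{B}(L-,b), \qquad (\varphi_{L})_{a} \colon \mathcal{A}(-,a) \Rightarrow \mathcal{B}(L-,La)
\]

with φ_L given by the action of L on hom-sets. The requirement that the left extension be respected by P-homomorphisms is then the statement that small-colimit-preserving functors out of P𝒜 preserve this particular left extension.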
Suppose we are given a KZ doctrine (P,y) and a P-admissible 1-cell L𝒜→ℬ where ℬ is P-cocomplete, then the 1-cell R_L in @=1emℬ@->[rr]^R_L P𝒜@[ld]|-φ_L⟸𝒜[uu]_y_𝒜[ulul]^Lhas a left adjoint L P𝒜→ℬ.Taking L to be the left extension @=1emP𝒜@[rd]|-c_L⟸@->[rr]^Lℬ𝒜[rruu]_L[uu]^y_𝒜we then have L⊣ R_L since we may define n:id_P𝒜→ R_L·L and e:L· R_L→id_ℬ respectively as (since L is P-admissible) the unique solutions to@=1em ℬ[rd]^R_L P𝒜[ru]^L[rr]_id_P𝒜 @[u]|-⇑ nP𝒜P𝒜[r]^L ℬ[r]^R_LP𝒜ℬ[r]^R_L@/^2pc/[rr]^id_ℬP𝒜[r]^L@[u]|-⇑ e ℬ @[d]|-= ℬ[rr]^id_ℬℬ@[ur]|-id⇒@[u]|-= @[ru]|- φ_L⇐@[u]|-c_L⇐@[ru]|- c_L⇐@[u]|-φ_L⇐ @[ur]|-id⇐ 𝒜[ulul]^y_𝒜[uu]_y_𝒜𝒜[uul]^L[uu]_y_𝒜@/^1pc/[ulul]^y_𝒜𝒜[uul]^y_𝒜[uu]_L@/^1pc/[ulul]^L𝒜[uu]_L[uull]^LVerifying the triangle identities is then a simple exercise.The following is an easy consequence of this Lemma. Suppose we are given a KZ doctrine (P,y) on a 2-category 𝒞 and a P-admissible 1-cell L𝒜→ℬ. Then the 1-cell res_L defined here as the left extension in the top triangle@=1emP𝒜 Pℬ[ll]_res_L @[ur]|-c_R_L 𝒜[uu]^y_𝒜[rr]_L@[ur]|-φ_Lℬ[uu]_y_ℬ[ulul]^R_Lhas a left adjoint lan_L, and when R_L is P-admissible, a right adjoint ran_L.First note that it is an easy consequence of the left extension pasting lemma (the dual of <cit.>) that y_ℬ· L is P-admissible, which is to say the left extension res_L above is respected by any P-homomorphism H P𝒜→𝒳. This is since such a H will respect the left extension R_L of y_𝒜 along L as well as the left extension res_L of R_L along y_ℬ. Hence by Lemma <ref> res_L has a left adjoint lan_L given as the left extension as on the left (which is how PL is defined given the data of Definition <ref>),@=1emP𝒜@->[rr]^lan_L@[rdrd]|-c_y_ℬ· L⟸ PℬP𝒜[rr]^ran_L Pℬ@[ur]|-φ_R_L 𝒜[rr]_L[uu]^y_𝒜ℬ[uu]_y_ℬ ℬ[uu]_y_ℬ[ulul]^R_Land if R_L is P-admissible then we may define ran_L:=R_R_L (which is the left extension as on the right) and since P𝒜 is P-cocomplete ran_L has a left adjoint given by res_L=R_L again by Lemma <ref>.We have shown that when both L and R_L are P-admissible we have the adjoint triple PL⊣R_L⊣ R_R_L. Of particular interest is the case where L=y_𝒜 for some 𝒜∈𝒞. Clearly in this case both L and R_L are P-admissible and so we may define μ_𝒜:=R_y_𝒜=id_P𝒜 and observe R_R_y_𝒜=R_id_P𝒜=y_P𝒜 to recover the well known sequence of adjunctions Py_𝒜⊣μ_𝒜⊣ y_P𝒜 as in <cit.>.The following result is mostly due to Bunge and Funk <cit.>, though we state it in our notation and from the viewpoint of KZ doctrines in terms of left extensions. Also, we will prove the following proposition in full detail in order to clarify some parts of the argument given by Bunge and Funk <cit.>. For example, in order to check that certain left extensions are respected we will need to know their exhibiting 2-cells. These exhibiting 2-cells will also be needed later to prove our main result. Given a KZ doctrine (P,y) on a 2-category 𝒞 and a 1-cell L𝒜→ℬ, the following are equivalent:(1) L is P-admissible;(2) every P-cocomplete object 𝒳∈𝒞 admits, and P-homomorphism respects, left extensions along L. This says that for any given 1-cell K𝒜→𝒳, where 𝒳 is P-cocomplete, there exists a 1-cell J and 2-cell δ as on the left @=1emℬ@->[rr]^J𝒳@[ld]|-δ⟸ ℬ@->[rr]^J𝒳@[ld]|-δ⟸[rr]^E𝒴 𝒜[uu]_K[ulul]^L 𝒜[uu]_K[ulul]^Lexhibiting J as a left extension, and moreover this left extension is respected by any P-homomorphism E𝒳→𝒴 for P-cocomplete 𝒴 as in the right diagram.(3) PL:=lan_L given as the left extension@=1emP𝒜@->[rr]^PL@[rdrd]|-c_y_ℬ· L⟸ Pℬ𝒜[rr]_L[uu]^y_𝒜ℬ[uu]_y_ℬhas a right adjoint. 
We denote the inverse of the above 2-cell as y_L:=c_y_ℬ· L^-1 for every 1-cell L. The following implications prove the logical equivalence. (2) ⇒ (1): This is trivial as P𝒜 is P-cocomplete. (1) ⇒ (2): Given K: 𝒜→𝒳 as in (2), we take the pasting [diagram: the left extension φ_L: y_𝒜→ R_L· L pasted with c_K: K→ K· y_𝒜, yielding a 2-cell K→ K· R_L· L] as our left extension, using that L is P-admissible. This is respected by any P-homomorphism E: 𝒳→𝒴 where 𝒴 is P-cocomplete as a consequence of the second part of the definition of P-admissibility. (1) ⇒ (3): This was shown in Lemma <ref>. (3) ⇒ (1): This implication is where the majority of the work lies in proving this proposition. We suppose that we are given an adjunction lan_L⊣res_L with unit η where lan_L is defined as in (3). We split the proof into two parts. Part 1: The given right adjoint, res_L, is a left extension of res_L· y_ℬ along y_ℬ as in the diagram [diagram: res_L: Pℬ→ P𝒜 displayed as the left extension of res_L· y_ℬ along y_ℬ] exhibited by the identity 2-cell.[This may be seen as an analogue of <cit.>. However, we emphasize here that considering right adjoints tells us res_L is a P-homomorphism since the adjunctions may be used to construct an isomorphism between res_L and a known P-homomorphism. ] To see this, we consider the isomorphism in the square on the left [diagrams: the square y_L: y_ℬ· L ≅ PL· y_𝒜; its image Py_L: Py_ℬ· PL ≅ P^2L· Py_𝒜 under P; and the corresponding square of right adjoints μ_𝒜· Pres_L ≅ res_L·μ_ℬ] and then apply P to get the isomorphism of left adjoints in the middle square (suppressing pseudofunctoriality constraints[These pseudofunctoriality constraints are those arising from the uniqueness of left extensions up to coherent isomorphism.]), which corresponds to an isomorphism of right adjoints in the right square (which we leave unnamed). Now by <cit.> (and since μ_𝒜· Pres_L respects the left extension Py_ℬ) we have the left extension μ_𝒜· Pres_L· Py_ℬ of res_L· y_ℬ along y_ℬ as below [diagram: the composite Pℬ→ P^2ℬ→ P^2𝒜→ P𝒜 of Py_ℬ, Pres_L and μ_𝒜, assembled from the left extensions y_y_ℬ and y_res_L together with the isomorphism μ_ℬ· Py_ℬ≅ id_Pℬ] and so pasting with the isomorphism μ_𝒜· Pres_L· Py_ℬ≅res_L constructed as above tells us res_L is also an extension of res_L· y_ℬ along y_ℬ. It follows that res_L respects the left extension [diagram: id_Pℬ as the left extension of y_ℬ along y_ℬ, exhibited by the identity 2-cell] and this gives the result. Part 2: The following pasting exhibits [diagram: the pasting of c_H: H→ H· y_𝒜, the unit η: id_P𝒜→res_L·lan_L, and y_L: y_ℬ· L→lan_L· y_𝒜, composing to a 2-cell H→ H·res_L· y_ℬ· L] the composite H·res_L· y_ℬ as a left extension of H along L. Suppose we are given a 1-cell K: ℬ→𝒳. We then see that our left extension is exhibited by the sequence of natural bijections

2-cells H→ K· L
≅ 2-cells H→ K· y_ℬ· L (using K· y_ℬ≅ K)
≅ 2-cells H→ K·lan_L· y_𝒜 (using lan_L· y_𝒜≅ y_ℬ· L)
≅ 2-cells H→ K·lan_L (as c_H exhibits H as a left extension)
≅ 2-cells H·res_L→ K (mates correspondence)
≅ 2-cells H·res_L· y_ℬ→ K· y_ℬ (left extension in Part 1 preserved by H)
≅ 2-cells H·res_L· y_ℬ→ K (using K· y_ℬ≅ K).

It is easily seen this left extension is exhibited by the above 2-cell since, when taking K=H·res_L· y_ℬ, we may take K=H·res_L as a consequence of Part 1 (with the left extension K exhibited by the identity 2-cell).
Tracing through the bijection to find the exhibiting 2-cell is then trivial.Considering Part 2 in the above proposition with H=y_𝒜 and H and c_H being an identity 1-cell and 2-cell respectively, we see that for any P-admissible 1-cell L𝒜→ℬ and corresponding adjunction PL⊣res_L with unit η, we may define our 1-cell R_L and 2-cell φ_L as in Definition <ref> by @=1emℬ@->[rr]^R_L P𝒜@[ld]|-φ_L⟸ℬ[rr]^y_ℬPℬ[r]^res_LP𝒜 := @[ul]|-y_L⟸P𝒜@/^0.5pc/[ul]^lan_L@[ul]|-η⟸[u]_id_P𝒜 𝒜[uu]_y_𝒜[ulul]^L 𝒜@/^1pc/[ulull]^L[u]_y_𝒜We will make regular use of this definition in the next section.It is clear from the above proposition that P-admissible 1-cells are closed under composition as noted by Bunge and Funk <cit.>. We may also note, as in <cit.>, that every left adjoint is P-admissible, as taking PL:=lan_L defines a pseudofunctor <cit.> and so preserves the adjunction. § RELATING KZ DOCTRINES AND YONEDA STRUCTURESWe are now ready to prove our main result. In the following statement we call a KZ doctrine locally fully faithful if the unit components are fully faithful; indeed Bunge and Funk <cit.> noted that a KZ pseudomonad is locally fully faithful precisely when its unit componentsare fully faithful. Here the admissible maps of Bunge and Funk refer to those maps L for which PL:=lan_L has a right adjoint (which we denote by res_L). Suppose we are given a locally fully faithful KZ doctrine (P,y) on a 2-category 𝒞. Then on defining the class of admissible maps L to be those of Bunge and Funk, with chosen left extensions (R_L,φ_L) those of Remark <ref>, we obtain all of the definition and axioms of a Yoneda structure with the exception of the right ideal property (though the admissible maps remain closed under composition).We need only check that: (1) φ_L exhibits L as an absolute left lifting. Thus, we must exhibit a natural bijection between 2-cells L· W→ H and 2-cells y_𝒜· W→ R_L· H for 1-cells W𝒟→𝒜 and H𝒟→ℬ as in the diagram@=1em𝒟[rr]^W@/_1pc/[rdrd]_H 𝒜[rr]^y_𝒜[dd]_L P𝒜 @[u]|-⇓α@[ul]|-⇓φ_L ℬ[dd]_y_ℬ[ruru]_R_L @[ul]|-⇓ c_R_LPℬ@/_1pc/[ruruuu]_res_LSuch a natural bijection is given by the correspondence → 2pt res_L· y_ℬ· Hlan_L· y_𝒜· W[][r]L· W[][l]H y_ℬ fully faithful[][r]y_ℬ· L· W[][l]y_ℬ· Hlan_L· y_𝒜≅ y_ℬ· L [][r]lan_L· y_𝒜· W[][l]y_ℬ· Hlan_L⊣res_L [][r]y_𝒜· W[][l]res_L· y_ℬ· HR_L:=res_L· y_ℬ [][r]y_𝒜· W[][l]R_L· Hand the 2-cell exhibiting this absolute left lifting is easily seen to be the 2-cell as given in Remark <ref> by following the above bijection.(2) res_L· R_K is a left extension. Considering the diagram@=1emP𝒜 Pℬ[ll]_res_L@[dr]|-φ_K𝒞[ll]_R_K @[ru]|-c_R_L@[dl]|-φ_L𝒜[uu]^y_𝒜[rr]_Lℬ[rruu]_K[uu]_y_ℬ[ulul]^R_Lwe first note that res_L· R_K is a left extension of R_L along K since K is P-admissible. We then apply the pasting lemma for left extensions to see the outside diagram also exhibits res_L· R_K as a left extension. We observe that to ask that res_L· R_K be a left extension in the diagram above for every P-admissible L and K, is to ask by the pasting lemma that the pasting of φ_K and c_R_L exhibit res_L· R_K as a left extension. As c_R_L is invertible, this is to say that res_L respects every left extension arising from admissibility. This is equivalent to asking res_L be a P-homomorphism.We note here that we do not necessarily have the right ideal property. Indeed given a KZ doctrine on a 2-category every identity arrow is admissible, and so the right ideal property would require all arrows into all objects being admissible (that is all arrows being admissible). 
This fails for example with the identity KZ doctrine on any 2-category 𝒞 which contains an arrow L with no right adjoint.Given an object 𝒜∈𝒞 with a P-admissible generalized element a𝒮→𝒜 we have a version of the Yoneda lemma in the sense that we have bijections → 2pt res_a· Klan_a· y_𝒮[][r]y_𝒜· a[][l]K lan_a· y_𝒮≅ y_𝒜· a[][r]lan_a· y_𝒮[][l]K lan_a⊣res_a [][r]y_𝒮[][l]res_a· Kfor generalized elements K𝒮→ P𝒜. In the case where P is the usual free small cocompletion KZ doctrine on locally small categories and 𝒮=1 is the terminal category, maps y_𝒮→res_a· K are elements of res_a· K (which may be viewed as K evaluated at a). The purpose of the following is to give an example in which absolute left liftings (also known as relative adjunctions or partial adjunctions) are preserved[In this case respected by the KZ pseudomonad resulting from the KZ doctrine as in <cit.>.]. Also, the following proposition does not require locally fully faithfulness, whereas Theorem <ref> does. Suppose we are given a KZ doctrine (P,y) on a 2-category 𝒞. Then for every P-admissible 1-cell L𝒜→ℬ as on the left,@=1emℬ@->[rr]^R_L P𝒜@[ld]|-φ_L⟸ Pℬ@->[rr]^PR_L P^2𝒜@[ld]|-Pφ_L⟸ 𝒜[uu]_y_𝒜[ulul]^L P𝒜[uu]_Py_𝒜[ulul]^PLthe 2-cell Pφ_L as on the right (in which we have suppressed the pseudofunctoriality constraints) exhibits PL as an absolute left lifting of Py_𝒜 through PR_L.Without loss of generality, we define φ_L as in Remark <ref>. We then have the sequence of natural bijections → 2pt Pres_L· Py_ℬ· HP^2L· Py_𝒜· W[][r]PL· W[][l]H Py_ℬ fully faithful[][r]Py_ℬ· PL· W[][l]Py_ℬ· Hy_ℬ· L≅ PL· y_𝒜 [][r]P^2L· Py_𝒜· W[][l]Py_ℬ· HPL⊣res_L [][r]Py_𝒜· W[][l]Pres_L· Py_ℬ· HR_L:=res_L· y_ℬ [][r]Py_𝒜· W[][l]PR_L· Hfor 1-cells W into P𝒜. Following the bijection we see that the absolute left lifting is exhibited by Pφ_L, suppressing the pseudofunctoriality constraints.Some observations made in “Yoneda structures” <cit.> may be seen more directly in this setting of a KZ doctrine. For example Street and Walters defined an admissible morphism L (in the setting of a Yoneda structure) to be fully faithful when the 2-cell φ_L is invertible (which agrees with a representable notion of fully faithfulness, that is fully faithfulness defined via the absolute left lifting property, when axiom (d) is satisfied). Here we see this in the context of a (locally fully faithful) KZ doctrine. Suppose we are given a KZ doctrine (P,y) on a 2-category 𝒞, and a P-admissible 1-cell L𝒜→ℬ @=1emℬ@->[rr]^R_L P𝒜@[ld]|-φ_L⟸𝒜[uu]_y_𝒜[ulul]^Lwith a left extension R_L as in the above diagram. Then the exhibiting 2-cell φ_L is invertible if and only if PL:=lan_L is fully faithful. We use the well known fact that the left adjoint of an adjunction is fully faithful precisely when the unit is invertible. Now, given that φ_L is invertible we may define our 2-cell η^∗ as the unique solution to @=1em @[d]|-⇑η^∗P𝒜 P𝒜@[ld]|-φ_L^-1@->[ll]_id_P𝒜 P𝒜 Pℬ@[rrdd]|-⇑ c_y_ℬ· L[ll]_res_L P𝒜[ll]_PL@/_2pc/[llll]_id_P𝒜ℬ[ul]^R_L =@[ur]|-⇑ c_R_L 𝒜[uu]_y_𝒜[ul]^Lℬ[uu]^y_ℬ[ulul]^R_L𝒜[uu]_y_𝒜[ll]^LThat η is the inverse of η^∗ follows from an easy calculation using Remark <ref>. 
Conversely, if the unit η is invertible then so is φ_L by Remark <ref>. If we define a map L to be P-fully faithful when PL is fully faithful, then as a consequence of Proposition <ref> (Part 2) and Proposition <ref> we see that for any P-admissible map L, this L is P-fully faithful if and only if every left extension along L into a P-cocomplete object is exhibited by an invertible 2-cell. In the following remark we compare PL being fully faithful with L being fully faithful, and point out sufficient conditions for these notions to agree. Note that if PL is fully faithful then L is fully faithful assuming P is locally fully faithful, as y is pseudonatural. Conversely if L is fully faithful, then (supposing our corresponding left extension R_L is pointwise) the exhibiting 2-cell is invertible <cit.>, equivalent to PL being fully faithful by the above. This converse may also be seen when the KZ doctrine is locally fully faithful and good (meaning axiom (d) is satisfied for P-admissible maps) as we can use the argument of <cit.>. However, as we now see, this converse need not hold in general. An example in which L is fully faithful but PL is not is given as follows. Take 𝒜 to be the 2-category containing the two objects 0,1 and two non-trivial 1-cells x, y: 0→1, and take ℬ to be the same but with an additional 2-cell α: x→ y. Define L as the inclusion of 𝒜 into ℬ. Then for the free 𝐂𝐚𝐭-cocompletion of 𝒜 given by y_𝒜: 𝒜→[𝒜^op,𝐂𝐚𝐭] we note that y_𝒜 and R_L· L are not isomorphic, and so the 2-cell φ_L is not invertible, meaning PL is not fully faithful (despite L being fully faithful). § FUTURE WORK We have seen that the notions of pseudo algebras and admissibility for a given KZ doctrine, and KZ doctrines themselves, may be expressed in terms of left extensions. In a forthcoming paper we show that pseudodistributive laws over a KZ doctrine may be simply expressed entirely in terms of left extensions and admissibility, allowing us to generalize some results of Marmolejo and Wood <cit.>. § ACKNOWLEDGMENTS The author would like to thank his supervisor as well as the anonymous referee for their helpful feedback. In addition, the support of an Australian Government Research Training Program Scholarship is gratefully acknowledged. | http://arxiv.org/abs/1703.08693v3 | {
"authors": [
"Charles Walker"
],
"categories": [
"math.CT",
"18A35, 18C15, 18D05"
],
"primary_category": "math.CT",
"published": "20170325133832",
"title": "Yoneda structures and KZ doctrines"
} |
CTPU-17-11Center for Theoretical Physics of the Universe, Institute for Basic Science (IBS), Daejeon 34051, KoreaWe propose a new class of inflationary modelsin which inflation takes place while the inflaton is climbing up a potential hill due to a coupling to gravity. We study their attractor behavior, and investigate its relation with known attractors. We also discuss a possible realization of this type of models with the natural inflation,and show that the inflationary predictions come well within the region consistent with the observation of the cosmic microwave background.Hillclimbing inflation Kunio KanetaDecember 30, 2023 ======================== § INTRODUCTIONCosmological inflation plays an essential role in addressingvarious cosmological issues <cit.> as well asgenerating the primordial perturbations <cit.>. Although the idea of inflation leads to a successful picture in modern cosmology,the underlying particle physics is still unclear. Among various possibilities, one of the most attractive scenarios is extending the gravity sector. The Starobinsky inflation <cit.> is one of the most successful models along this line, and the Higgs inflation <cit.>, in which a nonminimal coupling between the Higgs field and the Ricci scalar is introduced, has also persisted to date.Over the past few years, our understanding on the behavior of these inflation modelshas been significantly developed: the discovery of attractor.It has been found that a large class of models with a general form of nonminimal coupling to gravity,including the Higgs inflation, has similar inflationary predictions. Generalization of such models has been made (“universal attractor" <cit.> or “induced inflation" <cit.>), and their attractor behavior is now called “ξ-attractor" <cit.>. On the other hand, the recently proposed “α-attractor" <cit.> has revealed a generic feature of attractors appearing in the models with a kinetic pole, which coincide with ξ-attractor with a certain choice of model parameters <cit.>.In this paper, we revisit inflation models with the nonminimal coupling. We first propose a new class of models which is featured by the climbing of the inflaton up the potential hill by considering a specific behavior of the nonminimal coupling. We point out that this class of models has attractor behavior,and discuss its relation with known attractors in a general way. Then we give a concrete example of such modelsusing the natural inflation-type potential <cit.> with a nonminimal coupling, and show that the inflationary predictions come well within the region consistentwith Planck observation of the cosmic microwave background (CMB) <cit.>. We also point out that our setup can be applied to a broad class of inflaton potentialswhich have multiple degenerate vacua.The organization of the paper is as follows. In Sec. <ref>we give general discussion on the attractor behavior in inflation models with the nonminimal coupling. In Sec. <ref> we give a concrete model to illustrate the point discussed in Sec. <ref> and show the inflationary predictions. We finally conclude in Sec. <ref>.§ ATTRACTORS AND HILLCLIMBING INFLATION §.§ ξ-attractorLet us start with general discussion on attractor solutions in inflation models with the nonminimal coupling. Throughout the paper, we focus on the following single-field inflation setup: S= ∫ d^4x √(-g_J)[ M_P^2/2Ω R_J - K_J/2 (∂_J ϕ_J)^2 - V_J ], where M_P is the reduced Planck mass, and ϕ_J, R_J and V_J(ϕ_J) arethe (Jordan-frame) inflaton, Ricci scalar and potential, respectively. 
Hereafter the subscript J indicates the “Jordan frame”. Also, the factor K_J(ϕ_J) in front of the kinetic term (∂_J ϕ_J)^2 ≡ g^μν_J ∂_μϕ_J ∂_νϕ_J is an arbitrary function of ϕ_J, which we retain for generality of the following argument. In addition, Ω(ϕ_J) is an arbitrary function which takes positive values in the case of our interest. Under the Weyl rescaling g_μν≡Ω g_J μν, the Ricci scalar transforms as R_J= Ω[R + 3 □lnΩ - 3/2 (∂lnΩ)^2 ], with which the action (<ref>) becomes S= ∫ d^4x √(-g)[ M_P^2/2 R - K/2 (∂ϕ_J)^2 - V ], where the potential is given by V = V_J / Ω^2, and K=K_J/Ω + 3M_P^2/2Ω^2 ( dΩ/dϕ_J)^2. Here let us suppose that the second term dominates the first term in Eq. (<ref>). Such a setup occurs e.g. when M_P^2 (dΩ/dϕ_J)^2 / Ω^2 ≫ 1/Ω for K_J = 1, or when K_J = 0. Then the kinetic term in the action (<ref>) reduces to <cit.> -K/2 (∂ϕ_J)^2 ≃ - 3M_P^2/4 (∂lnΩ)^2. Now let us assume that Ω evolves from Ω≫ 1 to Ω = 1 during inflation.[ One can take Ω = 1 in the present universe without loss of generality. ] For example, if we take V = V_0(1 - Ω^-1)^2 with V_0 being a constant which determines the potential height at Ω≫ 1, this potential realizes a plateau for Ω≫ 1 and a vanishing cosmological constant in the present universe.[ As we see in the next subsection and show in footnote <ref>, this potential form can be generalized to a broader class.] In terms of the Einstein-frame inflaton, one may identify ϕ / M_P ≃√(3/2) lnΩ to make the kinetic term canonical, and then the potential becomes V=V_0( 1 - e^- √(2/3)ϕ/M_P)^2. This potential realizes the spectral index n_s and the tensor-to-scalar ratio r, n_s ≃ 1 - 2/N, r≃12/N^2, in the large-N limit with N being the e-folding number. It is known that a large class of inflation models predicts Eq. (<ref>), which includes the Higgs inflation Ω = 1 + ξϕ_J^2/M_P^2 <cit.> and its generalizations such as the “universal attractor" Ω = 1 + ξ f(ϕ_J) <cit.> or “induced inflation" Ω = ξ f(ϕ_J) <cit.>. In Ref. <cit.>, the class of models which show this attractor behavior in the inflationary predictions (<ref>) has been dubbed the “ξ-attractor". §.§ η-attractor Here we take a closer look at the arguments in the previous subsection. In discussing the ξ-attractor, we have assumed that the inflationary regime occurs at Ω≫ 1. In this paper, in contrast, we consider * Inflation at Ω≪ 1. This distinction is important from a model-building point of view, as we see in the rest of this subsection and also in the next section with a concrete example. To investigate the properties of such inflation models, let us first consider the requirements to realize a vanishing cosmological constant in the present universe. Assuming that Ω is monotonic for the inflaton values of our interest, we write down the potential in terms of Ω: V=V_0( 1 - ∑_k = n^∞η_k Ω^k ), with η_k being some constants and n ≥ 1 being the leading exponent of Ω which dominantly affects the inflationary predictions. Note that we have not included negative powers of Ω, in order to keep the flatness of the potential. The vanishing cosmological constant is realized by ∑_k = n^∞η_k= 1. When the approximation (<ref>) holds,[ See Appendix <ref> for the cases in which this approximation does not hold.]
the slow-roll parameters ϵ and η are given byϵ ≡ M_P^2/2( V'/V)^2 ≃1/3(nη_nΩ^n)^2, η ≡M_P^2 V”/V≃ -2/3 n^2 η_n Ω^n,where we respectively define V' and V” as dV/dϕ and d^2V/dϕ^2,and thus the inflaton rolls slowly when Ω becomes sufficiently small.The attractor behavior of the inflationary predictions with the potential (<ref>) can be calculated as n_s≃ 1 - 2/N,r ≃12/n^2N^2, in large-N limit. It should be noted that the resultant r is more general than Eq. (<ref>). Of course, in ξ-attractor as well, the inflationary predictions reduce to Eq. (<ref>)by adopting a similar expansion to the potential.[The same line of argument is possible for Ω≫ 1 as well. One may write down the potential as V_E =V_0( 1 - ∑_k = n^∞ξ_k Ω^-k), with ξ_k being some constants and n being the leading exponent. Here we have not included positive powers of Ω because such terms spoil the flatness of the potential for Ω≫ 1. The vanishing cosmological constant in the present universe is realized by ∑_k = n^∞ξ_k= 1. Note that V_E ∼ (1 - Ω^-1)^2 gives one realization of this condition. The setup (<ref>) gives the same prediction as Eq. (<ref>), which is more general than Eq. (<ref>).][ In Ref. <cit.>, generalization of ξ-attractor has been made by setting V_E ∼ (1 - Ω^-p)^2 where p can be other than unity. ] However, as we see in Sec. <ref>,relatively simple setups lead to n ≠ 1 in this type of inflation models,and therefore we call them “η-attractor” in the following to distinguishbetween the two.[ In both of the two classes, inflationary predictions generically deviate from the attractor limit (<ref>) once one considers concrete model constructions. Therefore it would be possible to distinguish such models by future CMB observations.]There is another important reason to distinguish between ξ- and η-attractors: application to existing inflaton potentials. To see this, let us consider the inflaton behavior in the Jordan frame. The Jordan-frame potential is given byV_J = Ω^2 V_E ≃Ω^2 V_0 during inflation. This means that, in the class of inflation models which is featured by Ω≪ 1,the inflaton climbs up the potential hill due to a coupling to gravity. If one considers applying such a gravity effect to existing inflaton potentials, one sees that the existence of η-attractor can help the predictions ofinflation models with multiple potential minima by modifying the inflaton behavior. We call such inflation models with η-attractor behaviorhillclimbing inflation in the rest of the paper. We will illustrate this point in Sec. <ref> with a concrete example.Before moving on to the next topic, let us mention previous studies.It is known that in some specific setup the inflation can take place with a small Ω,for instance, the conformal inflationwith χ=√(6) gauge <cit.> and its extension <cit.>.While they are based on a specific type of inflation models,we stress that this class of inflation can be seen in a rather broader range of modelshaving multiple vacua in the inflaton potential, as we see in the next section. §.§ Relation with α-attractorHere we comment on the relation with α-attractor,which was discovered in the studies ofsuperconformal inflation <cit.> and later developed to the currentform <cit.>. Its action is given by <cit.> S= ∫ d^4x √(-g) ℒ, ℒ =M_P^2/2R- α/(1 - ϕ^2/6M_P^2)^2(∂ϕ)^2- V ( ϕ/√(6)M_P ), with non-negative potential V.[ The pole structure in the inflaton kinetic term has been generalized in e.g. Ref. <cit.>. 
] This class of models coincides with the Starobinsky model with α = 1 and a specific choice of the potential <cit.>. Canonical normalization of the inflation fieldϕ/√(6)M_P = tanhφ/√(6α)M_P leads to ℒ = M_P^2/2R - 1/2(∂φ)^2 - V ( tanhφ/√(6α)M_P). The potential can generically be expanded asV= V_0 ( 1 - ∑_k = nα_k e^- k √(2/3α)φ/M_P), with α_k being some constants andn being the leading exponent as in the previous subsections. Though usually n = 1 is assumed in calculating inflationary predictions in this class ofmodels <cit.>, which holds true for e.g.V ∼ (1 -e^- √(2/3α)φ/M_P)^2, we do not restrict ourselves to such a special case. The inflationary predictions for the potential (<ref>) approach n_s≃ 1 - 2/N,r ≃12α/n^2N^2, in large-N limit. The correspondence between ξ- or η-attractorsand α-attractor is easily seen if one notice thatthe pole structure and the leading exponent dominantly determinethe inflationary predictions <cit.>: α-attractor,whose action is given by Eqs. (<ref>)–(<ref>), shares its inflationary predictions for α = 1 with those of ξ- or η-attractors, whose action is given by Eq. (<ref>) and inflation occurs at Ω≫ 1 and Ω≪ 1, respectively. We summarize the relation in Fig. <ref>.§ HILLCLIMBING NATURAL INFLATION Now let us illustrate our point discussed in the previous section with a specific example, which we call hillclimbing natural inflation. The model setup is the Jordan-frame action (<ref>) with K_J = 1 and Ω = ωsin( ϕ_J/2 η f), V_J= Λ^4 [ 1 - cos( ϕ_J/f) ]. Here ω, η, f and Λ are free parameters of the model (see also Appendix <ref>). We set the coefficient ω to satisfy ω =1 / sin( π/η), to realize Ω =1 in the present universe. Note that other choices of K_J such as K_J = 0 also work, because they only give the negligible part in Eq. (<ref>). Figure <ref> shows typical shapes of the potential and Ω in this model.In this setup the inflation takes place in the vicinity of the origin,and the inflaton is slowly displaced towards ϕ_J > 0 by climbing up the potential hill of V_Jdue to the small conformal factor. This property can also be understood in the Einstein frame. The relation between ϕ_J and ϕ is given by ϕ_J ∝ e^√(2/3)ϕ/M_P around the origin. Inflation occurs at ϕ_J → +0, or ϕ→ -∞, where the potential V = V_J / Ω^2 approaches to a certain constant value, and ends at the neighboring minimum. As long as we take f ≪ M_P, the relevant inflationary dynamics occurs around the origin and therefore the predictions are given by Eq. (<ref>). It is also seen that this model corresponds to the n = 2 case in Sec. <ref>: Noting that the Einstein-frame potential V = V_J / Ω^2 is even in ϕ_J, one sees that only even powers of Ω, which is an odd function of ϕ_J,appear in the expansion of V. As a result n = 2 (more specifically η_2=(2/3)(η/ω)^2)appears as the leading exponent in Eq. (<ref>).[ Note that n = 1 can also be realized easily by choosing Ω not to be odd in ϕ_J. ] On the other hand, in the opposite case f ≫ M_P,the relevant inflation dynamics occurs around the minimum ϕ_J = 2 π f. In this case, the conformal factor is close to unity all the way from the CMB scale to the end of inflation,and thus the model approaches to the quadratic chaotic inflation (see Appendix <ref> for more detail).In Fig. <ref> we show the inflationary predictions of the hillclimbing natural inflation. We have calculated the scalar spectral index n_s and the tensor-to-scalar ratio r, varying f from large f (≫ M_P) to small f (≪ M_P). 
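The approach to these asymptotic values can be checked with a short slow-roll computation. The following sketch is our own illustration, not the code behind the figures: it replaces the full model by the leading-exponent Einstein-frame potential V/V_0 = 1 - η_n e^{n√(2/3)ϕ/M_P} with assumed values n = 2 and η_n = 1 (in units M_P = 1), which captures the small-f regime, and compares the numerical n_s and r with the large-N formulas quoted above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

n, eta_n = 2, 1.0                    # leading exponent and coefficient (assumed)
c = n * np.sqrt(2.0 / 3.0)           # from phi = sqrt(3/2) ln(Omega), M_P = 1

V   = lambda p: 1.0 - eta_n * np.exp(c * p)       # valid for p < 0
Vp  = lambda p: -eta_n * c * np.exp(c * p)
Vpp = lambda p: -eta_n * c**2 * np.exp(c * p)

eps = lambda p: 0.5 * (Vp(p) / V(p))**2           # slow-roll epsilon
eta = lambda p: Vpp(p) / V(p)                     # slow-roll eta

phi_end = brentq(lambda p: eps(p) - 1.0, -10.0, -1e-8)   # end of inflation

def efolds(phi):   # N(phi) = integral of V/V' from phi_end to phi (positive here)
    return quad(lambda p: V(p) / Vp(p), phi_end, phi)[0]

for N in (50, 60):
    phi_N = brentq(lambda p: efolds(p) - N, -10.0, phi_end - 1e-9)
    ns, r = 1.0 - 6.0 * eps(phi_N) + 2.0 * eta(phi_N), 16.0 * eps(phi_N)
    print(f"N={N}: n_s={ns:.4f} (1-2/N={1-2/N:.4f}), "
          f"r={r:.2e} (12/n^2N^2={12/(n*N)**2:.2e})")
```

The printed values land close to the attractor formulas, for example n_s ≈ 0.967 and r of order 10^-3 at N = 60, consistent with the small-f limit described above.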
In this figure, we have fixed the scalar normalization by 𝒫_ζ = 2 × 10^-9 and taken the e-folding number to be N = 50 and 60. It is seen that for f ≫ M_P the predictions coincide with those of quadratic chaotic inflation, while they approach asymptotic values for f ≪ M_P, which corresponds to the n=2 case of the η-attractor discussed in Sec. <ref>. § CONCLUSION In this paper, we have presented a new class of inflationary models in which inflation takes place while the inflaton is climbing up a potential hill due to a coupling to gravity. We have studied the attractor behavior in the resulting inflationary predictions and investigated its relation with known attractors, and as a result, we have proposed the “η-attractor". The inflaton behavior in the η-attractor is well understood in the Einstein frame, where the original potential is lifted up by the Weyl transformation. We have also discussed a possible realization of this type of model with the natural inflation potential, and shown that the inflationary predictions are affected by the existence of the attractor. Though in this paper we have restricted ourselves to an example with the natural inflation, our discussion is also applicable to various types of models which have multiple vacua with vanishing vacuum energy. For example, it would be interesting to investigate the possibility of realizing the hillclimbing inflation when the SM Higgs has another degenerate vacuum around ∼ 10^17 GeV, as suggested by the multiple-point principle <cit.>. As another interesting possibility, the hillclimbing inflation would take place even if the inflaton potential in the Jordan frame is not bounded from below, as long as the conformal factor becomes sufficiently small at a certain relevant scale. We leave such studies as future work <cit.>. § ACKNOWLEDGMENTS This work was supported by IBS under the project code, IBS-R018-D1. The authors are grateful to T. Terada for helpful comments. § MODELS In this appendix we discuss some realizations of the setup (<ref>)–(<ref>) using a complex scalar field. We consider the action S = ∫ d^4x √(-g_J) ℒ_J with ℒ_J= - iM(Φ_J - Φ_J^†)R_J - |∂Φ_J|^2 - V_J, or ℒ_J= - i(Φ_J^2 - Φ_J^† 2)R_J - |∂Φ_J|^2 - V_J, with Φ_J being a complex scalar and M being a dimensionful parameter, and the potential is given by V_J(Φ_J)=λ[ |Φ_J|^p - 1/2(Φ_J^p + Φ_J^† p) ] + V_ SB(Φ_J). Here p is some integer, and V_ SB is introduced in order to fix the radial value of Φ_J. After Φ_J develops a non-zero vacuum expectation value ⟨Φ_J ⟩≠ 0, the inflaton field ϕ residing in the phase of Φ_J, Φ_J=⟨Φ_J ⟩exp(i ϕ/√(2)⟨Φ_J ⟩), acquires the potential of the form given in Eq. (<ref>) by taking Λ^4=λ⟨Φ_J ⟩^p and f=√(2)⟨Φ_J ⟩/p.[ In terms of the shift symmetry along the ϕ direction, the coupling λ is regarded as an explicit breaking parameter. In the gravity sector, the nonminimal coupling also breaks the shift symmetry, and in some cases the quantum corrections to the inflaton potential might affect the inflaton dynamics. In such a case, we may need a UV description of the nonminimal coupling to control the corrections. ] In the hillclimbing natural inflation models, the reheating process is also a non-trivial issue, since the potential shape relevant for the reheating epoch highly depends on the choice of the conformal factor.
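The identification of Λ and f above can be verified mechanically. The following sketch is a quick consistency check of the algebra only: we insert the phase parametrization by hand, writing Φ_J^p = ⟨Φ_J⟩^p e^{ipθ} with θ = ϕ/(√2 ⟨Φ_J⟩), and ignore V_SB.

```python
import sympy as sp

lam, v, phi = sp.symbols('lambda v phi', positive=True)
p = sp.Symbol('p', positive=True)

theta = phi / (sp.sqrt(2) * v)            # phase of Phi_J, with <Phi_J> = v
Phip = v**p * sp.exp(sp.I * p * theta)    # Phi_J^p on the vacuum circle

VJ = lam * (v**p - (Phip + sp.conjugate(Phip)) / 2)   # |Phi_J|^p = v^p
VJ = sp.simplify(VJ.rewrite(sp.cos))

Lambda4 = lam * v**p                      # Lambda^4
f = sp.sqrt(2) * v / p
print(sp.simplify(VJ - Lambda4 * (1 - sp.cos(phi / f))))   # -> 0
```

The vanishing difference confirms that the phase of Φ_J indeed acquires the natural inflation potential with the stated parameter identifications.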
Also, it is known that in inflation with the nonminimal couplingthe direction perpendicular to the inflaton can be violently producedat the onset of preheating <cit.>, and it would be interesting to study whether this occurs in the present setup.§ HILLCLIMBING CONDITION In this appendix we discuss conditions for a successful hillclimbing inflation, focusing on special cases in whichV_J ∝ϕ_J^2 and Ω∝ϕ_J hold around the potential minimum(which we take to be near the origin). In particular, we take a closer look at conditionswith which the approximation given by Eq. (<ref>) is justified as a consequence of the second term domination in the R.H.S. of Eq. (<ref>). In the following we take K_J = 1 for concreteness.Let us parameterize the conformal factor around the potential minimum as Ω ∼ϕ_J/M, where M is some dimensionful quantity. Note that Ω = 1 in the present Universe means thatthe inflaton value at the end of inflation is parameterized as ϕ_J ∼ M. In the hillclimbing natural inflation it corresponds to M ∼η f. The first and second terms in Eq. (<ref>) are estimated as 1/Ω ∼M/ϕ_J, 3M_P^2/2Ω^2( dΩ/dϕ_J)^2 ∼M_P^2/ϕ_J^2, and therefore the second term dominates for ϕ_J ≲ M_P^2 / M. Now let us consider the following two cases: ( i) M ≫ M_P,( ii)M ≪ M_P.For (i), the first term dominates the second term in Eq. (<ref>) for the inflaton value from the CMB scale to the inflation end, and thus the discussions below Eqs. (<ref>) and (<ref>) do not hold. In such a case, the inflationary predictions deviate from those of the η-attractor. Instead, recalling that the hillclimbing inflation needs another minimum in the potential in order to terminate the inflationary regime(see the setup in Sec. <ref>),one sees that the setup approaches to chaotic inflation. This is because the whole inflaton excursion from the CMB scale to the inflation endfalls within the vicinity of this reheating minimum, which makes the evolution of the conformal factor Ω within this inflaton range negligible. This is why the inflationary predictions in the hillclimbing natural inflation approachthe ones in quadratic chaotic inflation for f ≫ M_P.On the other hand, for (ii),the second term dominates in Eq. (<ref>) for the inflaton values of our interest. As a consistency check, let us first assume the second term domination in Eq. (<ref>). Then, from the discussions in Sec. <ref>,inflation occurs for Ω≪ 1 or equivalently ϕ_J ≪ M. Now plugging this back into Eq. (<ref>)one sees that the second term indeed dominates for this inflaton range.99 Guth:1980zmA. H. Guth,Phys. Rev. D 23, 347 (1981).Mukhanov:1981xtV. F. Mukhanov and G. V. Chibisov,JETP Lett.33, 532 (1981) [Pisma Zh. Eksp. Teor. Fiz.33, 549 (1981)]. Starobinsky:1980teA. A. Starobinsky,Phys. Lett.91B, 99 (1980).Lucchin:1985ipF. Lucchin, S. Matarrese and M. D. Pollock,Phys. Lett.167B, 163 (1986).Futamase:1987uaT. Futamase and K. i. Maeda,Phys. Rev. D 39, 399 (1989).Salopek:1988qhD. S. Salopek, J. R. Bond and J. M. Bardeen,Phys. Rev. D 40, 1753 (1989).CervantesCota:1995tzJ. L. Cervantes-Cota and H. Dehnen,Nucl. Phys. B 442, 391 (1995)[astro-ph/9505069]. Bezrukov:2007epF. L. Bezrukov and M. Shaposhnikov,Phys. Lett. B 659, 703 (2008)[arXiv:0710.3755 [hep-th]]. Kallosh:2013tuaR. Kallosh, A. Linde and D. Roest,Phys. Rev. Lett.112, no. 1, 011303 (2014)[arXiv:1310.3950 [hep-th]]. Giudice:2014toaG. F. Giudice and H. M. Lee,Phys. Lett. B 733, 58 (2014)[arXiv:1402.2129 [hep-ph]]. Kallosh:2014laaR. Kallosh, A. Linde and D. Roest,JHEP 1409, 062 (2014)[arXiv:1407.4471 [hep-th]]. 
Galante:2014ifaM. Galante, R. Kallosh, A. Linde and D. Roest,Phys. Rev. Lett.114, no. 14, 141302 (2015)[arXiv:1412.3797 [hep-th]]. Kallosh:2013yoaR. Kallosh, A. Linde and D. Roest,JHEP 1311, 198 (2013)[arXiv:1311.0472 [hep-th]]. Freese:1990rbK. Freese, J. A. Frieman and A. V. Olinto,Phys. Rev. Lett.65, 3233 (1990).Ade:2015xuaP. A. R. Ade et al. [Planck Collaboration],Astron. Astrophys.594, A13 (2016)[arXiv:1502.01589 [astro-ph.CO]]. Yi:2016jqrZ. Yi and Y. Gong,Phys. Rev. D 94, no. 10, 103527 (2016)[arXiv:1608.05922 [gr-qc]]. Kallosh:2013hoaR. Kallosh and A. Linde,JCAP 1307, 002 (2013)[arXiv:1306.5220 [hep-th]]. Kallosh:2013daaR. Kallosh and A. Linde,JCAP 1312, 006 (2013)[arXiv:1309.2015 [hep-th]]. Kallosh:2013maaR. Kallosh and A. Linde,JCAP 1310, 033 (2013)[arXiv:1307.7938 [hep-th]]. Kallosh:2013wyaR. Kallosh and A. Linde,JCAP 1306, 027 (2013)[arXiv:1306.3211 [hep-th]]. Kallosh:2013xyaR. Kallosh and A. Linde,JCAP 1306, 028 (2013)[arXiv:1306.3214 [hep-th]]. Kallosh:2014rgaR. Kallosh, A. Linde and D. Roest,JHEP 1408, 052 (2014)[arXiv:1405.3646 [hep-th]]. Terada:2016nqgT. Terada,Phys. Lett. B 760, 674 (2016)[arXiv:1602.07867 [hep-th]]. Bennett:1993pjD. L. Bennett and H. B. Nielsen,Int. J. Mod. Phys. A 9, 5155 (1994)[hep-ph/9311321]. JKR. Jinno and K. Kaneta, in progress.Ema:2016dnyY. Ema, R. Jinno, K. Mukaida and K. Nakayama,JCAP 1702, no. 02, 045 (2017)[arXiv:1609.05209 [hep-ph]]. | http://arxiv.org/abs/1703.09020v3 | {
"authors": [
"Ryusuke Jinno",
"Kunio Kaneta"
],
"categories": [
"hep-ph",
"gr-qc",
"hep-th"
],
"primary_category": "hep-ph",
"published": "20170327114941",
"title": "Hillclimbing inflation"
} |
ifp,ustem]T. Schachinger cor1 [email protected] ustem,mcmaster]S. Löffler ustem]A. Steiger-Thirsfeld ifp,ustem]M. Stöger-Pollach ifw]S. Schneider ifw]D. Pohl ifw]B. Rellinghaus ifp,ustem]P. Schattschneider [cor1]Corresponding author [ifp]Institute of Solid State Physics, TU Wien, Wiedner Hauptstraße 8-10, 1040 Vienna, Austria [ustem]University Service Centre for Transmission Electron Microscopy, TU Wien, Wiedner Hauptstraße 8-10, 1040 Wien, Austria [mcmaster]Department of Materials Science and Engineering, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4L8, Canada [ifw]Institute for Metallic Materials, IFW Dresden, P.O. Box 270116, D-01171 Dresden, Germany We discuss the feasibility of detecting spin polarized electronic transitions with a vortex filter. This approach does not rely on the principal condition of the standard energy loss magnetic chiral dichroism (EMCD) technique, the precise alignment of the crystal, and thus paves the way for the application of EMCD to new classes of materials and problems. The dichroic signal strength in the L_2,3-edge of ferromagnetic cobalt is estimated on theoretical grounds. It is shown that magnetic dichroism can, in principle, be detected. However, as an experimental test shows, count rates are currently too low under standard conditions. EMCD with an electron vortex filter: Limitations and possibilities [ Accepted XXX. Received YYY; in original form ZZZ ================================================================== § INTRODUCTION The discovery in 2006 that magnetic chiral dichroism can be observed in the transmission electron microscope (TEM) <cit.> provided an unexpected alternative to X-ray magnetic circular dichroism (XMCD) in the synchrotron. Energy-loss magnetic chiral dichroism (EMCD)[Sometimes also called electron magnetic chiral dichroism. Properly speaking, this is incorrect because it is not the electron but the energy loss signal that shows dichroism.] has seen tremendous progress since then <cit.>, achieving nanometre resolution <cit.>, and even sub-lattice resolution <cit.>. The discovery of electron vortex beams (EVBs) <cit.> has spurred efforts to use them for EMCD because of their intrinsic chirality. In spite of much progress in the production and application of vortex beams <cit.>, it soon became clear that atom-sized vortices incident on the specimen are needed for EMCD experiments <cit.>. Attempts to produce such beams and to use them for EMCD measurements have not shown an effect so far <cit.>. Nevertheless, faint atomic resolution EMCD signals have been shown without the need for atom-sized EVBs, using intelligent shaping of the incident wavefront with a C_s corrector instead <cit.>. The fact that orbital angular momentum (OAM) can be transferred to the probing electron when it excites electronic transitions to spin polarized final states in the sample manifests itself in a vortical structure of the inelastically scattered probe electron. The latter could be detected by a holographic mask after the specimen. Using a fork mask as a chiral filter is already established in optics <cit.>. This ansatz opens up the possibility to measure magnetic properties of amorphous materials (or multiphase materials including both crystalline and amorphous magnetic phases <cit.>), since the specimen no longer needs to act as a crystal beam splitter itself.
Crystalline specimens, too, could benefit from the vortex filter setup and its inherent breaking of the Bragg limitation, when, for example, substrate reflections overlap with the two EMCD measurement positions, which would diminish the EMCD signal strength.
§ PRINCIPLE
Dealing with transition metals, dichroism measurements typically involve 2p-core to d-valence excitations at the L_2,3 ionization edges. The L_2,3-edges are used due to the strong spin-orbit interaction in the initial state. Besides, their dichroic signal is an order of magnitude higher than that of K-edges, which were originally used in X-ray magnetic circular dichroism measurements to demonstrate the dichroic effect <cit.>. The dominant contribution to the ionisation edges comes from electric dipole-allowed transitions. Higher multipole transitions show low transition amplitudes, contributing less than 10% at the scattering angles of < 20 relevant in electron energy loss spectrometry (EELS) <cit.>. In the case of an L-edge dipole-allowed transition which changes the magnetic quantum number of an atom by μ, an incident plane wave electron transforms into an outgoing wave <cit.> ψ_μ(r) = e^-iμφ_r f_μ(r), where φ_r is the azimuthal angle, and f_μ(r) = (i^μ/2π) q_E^(1-|μ|) ∫_0^∞ q^(1+|μ|) J_|μ|(qr) ⟨j_1(Q)⟩_ELSj/Q^3 dq, with ⟨j_1(Q)⟩_ELSj the matrix element of the spherical Bessel function between the initial and final radial atomic wave functions, and Q=√(q^2+q_E^2). Here, q is the transverse scattering vector that relates to the experimental scattering angle θ as q=k_0 θ, and ħq_E=ħk_0 θ_E is the scalar difference of linear momenta of the probe electron before and after the inelastic interaction, also known as the characteristic momentum transfer in EELS <cit.>. The characteristic scattering angle θ_E is given by θ_E∼Δ E/2E_0, with Δ E being the threshold energy of the dipole-allowed L-edge and E_0 the primary beam energy. The dichroic signal in the diffraction plane (DP) can readily be calculated by Fourier transforming Eq. <ref>. According to a theorem for the Fourier-Bessel transform of a function of azimuthal variation e^-iμφ <cit.>, one has ψ̃_μ(q) = (i^μ/2π) e^-iμφ_q ∫_0^∞ f_μ(r) J_|μ|(qr) r dr. The outgoing electron in the DP still carries topological charge μ, showing that the wave function is topologically protected. The radial intensity profiles |ψ̃_μ(q)|^2 for the possible transitions with μ∈{-1, 0, 1}, together with their sum ∑_μ=-1,0,1 |ψ̃_μ|^2, which represents the Lorentz profile for non-magnetic isotropic transitions, are shown for the Co L_3-edge in Fig. <ref>. Note that in Fig. <ref>, as well as in all the following simulation results, the parameters used have been adapted to the proof-of-principle experiment given in Sec. <ref>. A fork mask in the DP adds topological charges m ∈ ℤ to the incident beam of topological charge μ. Due to the grating nature of the fork mask, the m-dependent deflections are separated by 2θ_Bragg = λ/g, where λ is the electrons' wavelength and g is the fork mask periodicity. Thus, such a mask creates a line of vortices of topological charge m+μ in the image plane, see Fig. <ref>. The radial profiles in the image plane are given by the back-transform of Eq. <ref> with the respective vortex order m added by the mask: ψ_mμ(r) = (i^(m+μ)/2π) e^-i(m+μ)φ_r ∫_0^q_mask ψ̃_μ(q) J_|m+μ|(qr) q dq, where q_mask=k_0 θ_mask is given by the mask aperture limiting the maximum momentum transfer. The respective intensities are azimuthally symmetric with distinct radial profiles. Fig.
<ref> shows schematically the central three vortices for the three dipole-allowed transition channels. Note that the central vortex (m=0, |μ|=1) does not show a difference for up and down spin polarization. This is the reason why such transitions cannot be distinguished with standard EELS. Fig. <ref> only describes the situation where one transition channel is present.For several transition channels at the same energy, the outgoing probe electron is in a mixed state, described by the reduced density matrix <cit.> and the path of rays cannot be visualized in such an easy way. Note that the total intensity is the trace of the matrix, i.e. the sum over all intensities in the respective channels. For fully spin-polarized systems I_m=∑_μ=-1^1 C_m^μ |ψ_m μ|^2where the C_m^μ are derived from the Clebsch-Gordan coefficients <cit.> and given in Tab. <ref>.In Fig. <ref>, the resulting radial intensity profiles are drawn for the coherent case with no inelastic source broadening added (which will be discussed in more detail at the end of this section). The radial extension of the intensity profile in Fig. <ref> is considerably broader than it is directly at the scattering centre, due to the limited extent q_mask of the vortex filtering mask shown in Fig. <ref>.In this geometry, we define the EMCD signal as the relative difference of the intensities, from vortices with m=± 1EMCD=2 ·I_+1-I_-1/I_+1+I_-1.TheEMCD signal is a function of the radius which has been omitted for clarity here. It depends on the topological charge of the spin polarized transition μ. It is the difference of radial profiles for vortex orders m=-1 and m=1.In practice, there are two technical problems with the setup shown in Fig. <ref>. On the one hand, placing the vortex filtering fork mask in the DP is not straightforward, because, due to the limited space in the pole piece gap, strip apertures are used which cannot be loaded with conventional 3 frame apertures. Even though there are proposals to use spiral-phase-plates in the DP, e.g. to determine chiral crystal symmetries and the local OAM content of an electron wave <cit.>, to date no successful implementation of a vortex mask in the DP of a TEM has been shown. On the other hand, the final image in the selected area aperture (SAA) plane (intermediate image plane) of the atom sized vortices would be too small to be resolvable at all. Both facts make this direct approach problematic.The second obstacle can be removed by defocussing the lens, observing broader vortices. The first one can be overcome by positioning the fork mask in a relatively easily accessible position, which is the SAA holder. These ideas lead to a scattering geometry that can be easily set up in conventional TEMs, see Fig. <ref>a and Fig. <ref>c. Here, the fork mask is positioned in the SAA holder, creating a demagnified virtual image in the eucentric plane with small lattice constant. Additionally, the specimen is lifted in height by dz and the C2 condenser lens is adjusted such that a focused probe is incident on the specimen. Note that focusing the beam onto the specimen guarantees that the probability density current is mostly aligned parallel to the optical axis all over the illuminated area such that the scattering "light cones" all point in the same direction towards the vortex filter mask. 
This is due to the fact that the Rayleigh range of the incident beam[Note that the Rayleigh range was determined using the diffraction limited spot size of the C2 aperture, the final probe diameter is much larger due to incoherent source broadening.] is of the order of 600 (convergence semi-angle 3.8) which is much larger than the sample thickness, in our case ∼70, and thus the incident wavefronts are almost flat everywhere inside the specimen, see Fig. <ref>b. How much they actually deviate from that assumption is estimated in Fig. <ref>c. It can be seen that the tilt angle at a radial position of 0.7 in the entrance plane amounts to ∼70 which can be considered negligible compared to the characteristic scattering angle θ_E ∼1.95 of Co. Furthermore only a small fraction of atoms in the illuminated specimen area actually sees such "high" tilt angles of the incident wavefront, most of the illuminated atoms do see a much less tilted wavefront[Neglecting the crystal field and channelling effects.].Lifting the specimen ensures that the (virtual) fork mask is now in the far field of the excited atom and creates a series of images of the ionization process as depicted in Fig. <ref>a. Practically, this setup is comparable to a standard STEM geometry but with the specimen lifted far off the eucentric plane. For better understanding the scattering setup, Fig. <ref> compares the standard TEM setup in diffraction, Fig. <ref>a, and the standard STEM setup, Fig. <ref>b, to the setup proposed here, Fig. <ref>c.Note that there are slight changes in the focal position of elastic- to inelastically scattered electrons in Fig. <ref>b. It can be seen that when the vortex filter mask is placed in the SAA holder diffracted beams (small dashing) emerge from the vortex mask in Fig. <ref>a,c but not in Fig. <ref>b because there the image of the inelastically scattered electrons in the SAA plane is much smaller than the grating periodicity such that the vortex mask is not illuminated. Lifting the specimen by dz in Fig. <ref>c ensures that the vortex filter mask is properly illuminated. Moreover, due to electron optical reasons this lifting is essentially reducing the size of the first image of the focused probe on the (lifted) specimen, comparable to the reduction of the effective source size in the condenser system of a TEM by adjusting the C1 lens excitation. The dichroic signal is strong in the centre of the vortices but difficult to observe because of their extension of only about 1 nm, as seen in Fig. <ref>. Therefore, the observation plane is set at a defocus df (here 4) from the specimen (preferably with the diffraction lens setting) to enhance the visibility of the dichroic signal. This can be understood from Fig. <ref>a and Fig. <ref>c. Virtual images (green) of the object intensities are observed at a defocus df, making the distribution broader, such that the maximum of the vortices' radial intensity distribution, where no dichroic signal is expected to occur, moves towards higher radii. The orders m ∈{-1, 0, 1},with an angular separation of 2θ_B are shown in Fig. <ref>a.The observed vortices are calculated with Eq. <ref>, but now including the defocus df and the spherical aberration C_s:ψ_m μ( r)= i^m+μ/2 πe^-i(m+μ)φ_r∫_0^q_maskψ̃_μ(q)J_|m+μ|(qr)e^i(df q^2/2 k_0+C_s q^4/4 k_0^3) qdq .When a homogeneous specimen is illuminated all atoms within the beam will contribute incoherently with their respective signals. 
This incoherent broadening effect according to the finite illuminated area of the specimen is taken into account by a convolution with a Gaussian as described in <cit.>. Thus, the final simulated radial intensity distribution is given byI_m^σ(r)= e^-(1/2)(r/σ)^2∫_0^∞ψ_m μ(r')e^-(1/2)(r'/σ)^2 I_0(r r'/σ^2) r'dr',where I_0 represents the modified Bessel function of first kind of order zero and σ the amount of incoherent broadening. The resulting illuminated area (FWHM) at the specimen is ∼2.4 σ. This incoherent broadening effect will reduce the EMCD signal as shown in Fig. <ref>.But still, the defocused case is preferable because the tiny differences in width for the focused case are hardly observable, see Fig. <ref>. In Eq. <ref>, there are two free parameters, defocus and broadening width, which are used to obtain best fits to the experimental data shown in the next section. § EXPERIMENTALThe fundamental method and scattering geometry elaborated above have been realized in a proof-of-principle experiment on a FEI TECNAI F20 TEM equipped with a GATAN GIF Tridiem spectrometer (GIF) and a high-brightness XFEG. The acceleration voltage was set to 200, whereas the condenser system was set up in a way to achieve a high beam current at a sufficiently small spot size, i.e. providing a beam current of ∼500 incident on the sample in a ∼1.9 probe (FWHM) with a convergence semi-angle of 3.8. Fig. <ref> shows the vortex filtering holographic fork mask that was placed in the SAA holder. It was produced by FIB milling into a 300 Pt layer deposited on a 200 Si_3 N_4 membrane.With a diameter of 10 and a grating periodicity of 500 (back-projected: 9.4), it exhibits a Bragg angle of θ_B=5 separating the central spot from the first vortex orders in the eucentric plane by ∼20[The separation distance was calculated using 2θ_Braggdz, with a camera-length of dz=75 and the back-projected grating periodicity g=9.4.]. As a result the vortex orders ±1 are still well separated from the central peak for defocus values of df = 4 and higher, see Fig. <ref>.For the preparation of the Co sample, a 70 nm thin Co layer is deposited onto a NaCl crystal. The thin Co foil is then extracted by dissolving the NaCl in water. Afterwards, the Co foil is netted with a commercially available Cu grid, resulting in a free standing nano-crystalline Co film of 70 thickness,with randomly oriented 20 crystallites. In the following section, the experimental setup of the vortex filter experiment on the Co film is described in detail.In imaging mode, the objective lens is set to the eucentric focus value with the sample in the eucentric height. From that position, the Co sample is then lifted by dz=75. The beam is focused onto the lifted specimen using C2 excitation and observing the ronchigram in the eucentric plane.The microscope is set to the diffraction mode with a camera length of 39.5 (including the GIF magnification), which is necessary to resolve the 5 separated EVBs. Focusing is solely done with the diffraction lens. Then, all three vortex orders are imaged at the GIF camera in the energy filtered TEM mode[In fact we are working in the energy filtered selected area diffraction (EFSAD) mode <cit.> because the microscope is set to diffraction mode and the SAA (vortex filter mask) is used.], operated at the edge threshold energy of the L_3-edge of Co of 780 using a slit width of 15, see Fig. <ref>. 
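The simulated curves that are compared with these measurements below are generated from the equations of the previous section. For orientation, that chain — the diffraction-plane amplitudes, the vortex filter with defocus and spherical aberration, the spin-channel weighting, and the incoherent Gaussian broadening (applied here to the intensities) — can be condensed into a short numerical sketch. All parameters below (mask cutoff, aberrations, broadening width, the model matrix element, and the channel weights C_m^μ) are illustrative placeholders rather than the values used for the figures:

```python
# Minimal numerical sketch of the simulated vortex-filtered radial profiles.
# All material/optics parameters are illustrative stand-ins.
import numpy as np
from scipy.special import jv, i0e

k0    = 2.51e3       # wavenumber at 200 kV, 1/nm
thE   = 1.95e-3      # characteristic angle theta_E of the Co L3 edge, rad (assumed mrad in text)
qE    = k0 * thE     # characteristic momentum transfer, 1/nm
qmax  = k0 * 2.5e-3  # mask-limited cutoff q_mask (assumed), 1/nm
df    = 4.0e3        # defocus, nm (the "4" quoted in the text, assumed to be um)
Cs    = 1.2e6        # spherical aberration, nm (assumed 1.2 mm)
sigma = 0.8          # incoherent broadening width, nm (assumed)

# version-proof trapezoid rule over the last axis
trapz = lambda y, x: np.sum(0.5*(y[..., 1:] + y[..., :-1])*np.diff(x), axis=-1)

def j1_matrix_element(Q, a=0.05):
    """Placeholder radial matrix element <j1(Q)>; a ~ screening length in nm."""
    return Q*a / (1.0 + (Q*a)**2)**2

q = np.linspace(1e-3, qmax, 1500)
Q = np.hypot(q, qE)

def psi_tilde(mu):
    """Diffraction-plane radial amplitude for a transition channel mu."""
    return q**(1 + abs(mu)) * j1_matrix_element(Q) / Q**3

r   = np.linspace(1e-3, 8.0, 400)                   # image-plane radius, nm
chi = df*q**2/(2*k0) + Cs*q**4/(4*k0**3)            # aberration phase

def psi(m, mu):
    """Filtered image-plane amplitude psi_{m,mu}(r) with defocus and Cs."""
    integrand = psi_tilde(mu) * np.exp(1j*chi) * q  # shape (Nq,)
    bess = jv(abs(m + mu), np.outer(r, q))          # shape (Nr, Nq)
    return trapz(integrand * bess, q)

# Placeholder spin-channel weights C_m^mu (the Clebsch-Gordan-derived table is
# not reproduced here); chosen only to make the asymmetry visible.
C = {+1: {-1: 0.1, 0: 0.3, +1: 0.6},
     -1: {-1: 0.6, 0: 0.3, +1: 0.1}}

def intensity(m):
    return sum(C[m][mu] * np.abs(psi(m, mu))**2 for mu in (-1, 0, 1))

def broaden(I):
    """Incoherent source broadening; i0e is the scaled Bessel I0 to avoid overflow:
    exp(-(r^2+r'^2)/2s^2) I0(rr'/s^2) = exp(-(r-r')^2/2s^2) i0e(rr'/s^2)."""
    rp, rr = r[None, :], r[:, None]
    kern = np.exp(-(rr - rp)**2/(2*sigma**2)) * i0e(rr*rp/sigma**2) * rp
    return trapz(I[None, :] * kern, r)

I_plus, I_minus = broaden(intensity(+1)), broaden(intensity(-1))
emcd = 2*(I_plus - I_minus)/(I_plus + I_minus)      # EMCD asymmetry vs radius
print("max |EMCD| over r:", np.abs(emcd).max())
```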
In order to keep the illumination conditions constant, the drift tube was used to adjust the spectrometer to the desired Co L_3-edge energy of 780, instead of the high tension. Fig. <ref> shows the experimental energy filtered image of the vortex aperture from electrons which have transferred 780 to the Co sample. Due to the extremely low count rates, Fig. <ref> was acquired by taking four frames with an acquisition time of 100 s per frame and four-fold binning. Subsequently, the frames were stacked and aligned using ImageJ <cit.>. To extract the radial intensity profiles given in Fig. <ref>, Digital Micrograph scripts are used to determine the exact centres of the vortex orders, crop them, and perform the rotational (azimuthal) average. Fig. <ref> shows that the simulation[For the sake of simplicity, elastic scattering inside the nano-crystalline structure of the specimen was not taken into account in the single atom approach given here.] with the chosen parameters is in very good agreement with the experimental data. Curiously, the experimental radial profiles show a strong difference in the central region (∼15%), which is similar to classical EMCD measurements <cit.> and previous vortex filter EMCD experiments <cit.>. However, the simulation predicts a much smaller EMCD signal (∼3%). This discrepancy is possibly due to (i) skew optic axes, which give rise to slight differences in apparent defocus df for the positive and negative vortex orders, (ii) artefacts from the mask production <cit.>, and (iii) OAM impurities stemming from the SAA vortex mask <cit.>. Fig. <ref> also clearly shows that the EMCD signal is much smaller than the relative root-mean-square (RMS) value of the experimental EMCD signal (∼±40%, magenta shaded region) and thus cannot be detected reliably under present experimental conditions. Since the profiles are azimuthally averaged, the absolute signal-to-noise ratio (SNR) per radial position is best for the largest radius, where an average over 512 pixels was taken, and lowest for the second point (9 pixel average). This was taken into account numerically for the results shown in Fig. <ref>. The magenta shaded region in Fig. <ref> is calculated using Gaussian error propagation and thus shows the relative RMS value in the azimuthal direction of the EMCD signal defined by Eq. <ref>. Furthermore, the error in Fig. <ref> (magenta shaded region) only represents the statistical error; systematic errors such as beam drift, beam damage, artefacts due to non-isotropic vortex rings, etc. have not been included. In practice, the sample's spin polarization may be less than 100 %, and the vorticity may change during the propagation of the outgoing vortex beams through the specimen <cit.>, thereby decreasing the expected EMCD signal as well. In view of these results and considerations, further investigations and improvement of the experimental conditions are necessary to prove the applicability and reliability of the EMCD vortex filter method.
§ CONCLUSIONS
In this work we investigated the feasibility of detecting an EMCD signal when incorporating a fork mask as a vortex filter in the SAA plane in a standard two-condenser lens field emission TEM. By lifting the sample far above the eucentric position, a vortex filter mask in the SAA plane can be properly illuminated. Thus, it produces well separated vortex orders which should, in principle, carry the EMCD information in the asymmetry of their respective central intensities.
This approach could become a promising method for studying the magnetic properties of amorphous or nanocrystalline materials, which is impossible in the classical EMCD setup. So far, the experimental tests show that the SNR is still too low and that substantial improvement of the experimental conditions is required for a successful experimental realization. For example, to improve the SNR, future experiments should incorporate larger SAA fork masks, e.g. at least 30 to 50 in diameter. As the collected signal scales with the mask area, the acquisition times could then be lowered by an order of magnitude. Also, incorporating a holographic mask (HM) in the contrast aperture holder located in the DP would simplify the experimental setup as well as increase the collection efficiency. Finally, using state-of-the-art aberration-corrected microscopes, it is possible to increase the lateral coherence of the probe beam while at the same time keeping the beam current high. This would enhance the EMCD signal strength by an order of magnitude. Thus, in the light of the above considerations and the proof-of-principle experiment, the detection of EMCD signals using an HM as a vorticity filter seems feasible, but it requires thorough control of experimental parameters such as spot size, vortex mask fidelity, and sample and system stability.
§ ACKNOWLEDGEMENTS
T.S. acknowledges financial support by the Austrian Science Fund (FWF), project I543-N20, and by the European Research Council, project ERC-StG-306447. P.S. acknowledges financial support by the FWF, project I543-N20, and S.L. acknowledges financial support by the FWF, project J3732-N27. | http://arxiv.org/abs/1703.09156v1 | {
"authors": [
"Thomas Schachinger",
"Stefan Löffler",
"Andreas Steiger-Thirsfeld",
"Michael Stöger-Pollach",
"Sebastian Schneider",
"Darius Pohl",
"Bernd Rellinghaus",
"Peter Schattschneider"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20170327155424",
"title": "EMCD with an electron vortex filter: Limitations and possibilities"
} |
Given a symmetric Dirichlet form (ℰ,ℱ) on a (non-trivial) σ-finite measure space (E,ℬ,m) with associated Markovian semigroup {T_t}_t∈(0,∞), we prove that (ℰ,ℱ) is both irreducible and recurrent if and only if there is no non-constant ℬ-measurable function u:E→[0,∞] that is ℰ-excessive, i.e., such that T_tu≤ u m-a.e. for any t∈(0,∞). We also prove that these conditions are equivalent to the equality {u∈ℱ_e | ℰ(u,u)=0}=ℝ𝟙, where ℱ_e denotes the extended Dirichlet space associated with (ℰ,ℱ). The proof is based on simple analytic arguments and requires no additional assumption on the state space or on the form. In the course of the proof we also present a characterization of ℰ-excessiveness in terms of ℱ_e and ℰ, which is valid for any symmetric positivity preserving form.
The most effective model for describing the universal behavior of unstable surface growth
Yuki Minami Shin-ichi Sasa
December 30, 2023
=========================================================================================
§ INTRODUCTION AND THE STATEMENT OF THE MAIN THEOREM
Since the classical theorem of Liouville stating that there is no non-constant bounded holomorphic function on ℂ, the non-existence of non-constant bounded (super-)harmonic functions on the whole space, the so-called Liouville property, has been one of the main concerns of harmonic analysis on various spaces. One of the most well-known facts about the Liouville property is that the non-existence of non-constant bounded superharmonic functions on the whole space is equivalent to the recurrence of the corresponding stochastic process. Such an equivalence is known to hold for standard processes on locally compact separable metrizable spaces by Blumenthal and Getoor <cit.> and also for more general right processes by Getoor <cit.>. Getoor <cit.> provides the same kind of equivalence in terms of excessive measures. The purpose of this paper is to give a completely elementary proof of this equivalence in the framework of an arbitrary symmetric Dirichlet form on a (non-trivial) σ-finite measure space. Our proof is purely functional-analytic and free of topological notions on the state space, although we need to assume the symmetry of the Dirichlet form. In the rest of this section, we describe our setting and state the main theorem. We fix a σ-finite measure space (E,ℬ,m) throughout this paper, and below all ℬ-measurable functions are assumed to be [-∞,∞]-valued. Let (ℰ,ℱ) be a symmetric Dirichlet form on L^2(E,m) and let {T_t}_t∈(0,∞) be its associated Markovian semigroup on L^2(E,m). Let L_+(E,m):={f | f:E→[0,∞], f is ℬ-measurable} and L^0(E,m):={f | f:E→ℝ, f is ℬ-measurable}, where we of course identify any two ℬ-measurable functions which are equal m-a.e. Let 𝟙 denote the constant function 𝟙:E→{1}, and we regard ℝ𝟙:={c𝟙 | c∈ℝ} as a linear subspace of L^0(E,m). Also let L^p_+(E,m):=L^p(E,m)∩ L_+(E,m) for p∈[1,∞]∪{0}. Note that T_t is canonically extended to an operator on L_+(E,m) and also to a linear operator from 𝒟[T_t]:={f∈ L^0(E,m) | T_t|f|<∞ m-a.e.} to L^0(E,m); see Proposition <ref> below. u∈ L_+(E,m) is called ℰ-excessive if and only if T_tu≤ u m-a.e. for any t∈(0,∞). Similarly, u∈⋂_t∈(0,∞)𝒟[T_t] is called ℰ-excessive in the wide sense if and only if T_tu≤ u m-a.e. for any t∈(0,∞). As stated in <cit.>, when we call a function u excessive, it is usual to assume that u is non-negative, which is why we have added "in the wide sense" in the latter part of Definition <ref>.
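As a toy illustration (not part of the argument below), the content of Definition <ref> and of the main theorem can be checked on a finite state space, where a weighted graph Laplacian that is symmetric with respect to a measure m generates a Markovian semigroup; a finite irreducible chain is automatically recurrent, so every ℰ-excessive function should be constant. A minimal numerical sketch, with made-up weights:

```python
# Finite-state toy: an m-symmetric Markovian semigroup T_t = e^{-tL} where
# L = M^{-1}(D - W) for symmetric conductances W; the associated Dirichlet
# form is E(u,v) = (1/2) sum_ij W_ij (u_i - u_j)(v_i - v_j).
import numpy as np
from scipy.linalg import expm

m = np.array([1.0, 2.0, 0.5, 1.5])           # reference measure
W = np.array([[0, 1, 0, 2],                  # symmetric conductances of an
              [1, 0, 3, 0],                  # irreducible graph
              [0, 3, 0, 1],
              [2, 0, 1, 0]], float)
L = (np.diag(W.sum(1)) - W) / m[:, None]     # generator, symmetric w.r.t. m

def T(t):
    return expm(-t * L)                      # Markovian: T(t) >= 0, T(t)@1 = 1

def is_excessive(u, ts=(0.1, 1.0, 10.0), tol=1e-10):
    """Check T_t u <= u for a few t, i.e. the defining property above."""
    return all((T(t) @ u <= u + tol).all() for t in ts)

rng = np.random.default_rng(0)
for _ in range(2000):                        # search for a non-constant one
    u = rng.random(4)
    if is_excessive(u) and np.ptp(u) > 1e-8:
        print("non-constant excessive function found:", u)
        break
else:
    print("only constants are excessive, as the theorem predicts")
print(is_excessive(np.ones(4)))              # the constant 1 is always excessive
```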
ℰ-excessive functions will play the role of superharmonic functions on the whole state space, and the main theorem of this paper (Theorem <ref>) asserts that (ℰ,ℱ) is irreducible and recurrent if and only if there is no non-constant ℰ-excessive function. Yet another possible way of formulating harmonicity of functions (on the whole space E) is to use the extended Dirichlet space ℱ_e associated with (ℰ,ℱ); u∈ℱ_e could be called "superharmonic" if ℰ(u,v)≥ 0 for any v∈ℱ_e∩ L_+(E,m), and "harmonic" if ℰ(u,v)=0 for any v∈ℱ_e, or equivalently, if ℰ(u,u)=0. In fact, as a key lemma for the proof of the main theorem, in Proposition <ref> below we prove that u∈ℱ_e is "superharmonic" in this sense if and only if u is ℰ-excessive in the wide sense. Under this formulation of harmonicity, if (ℰ,ℱ) is recurrent, i.e., 𝟙∈ℱ_e and ℰ(𝟙,𝟙)=0, then the non-existence of non-constant harmonic functions amounts to the equality
{u∈ℱ_e | ℰ(u,u)=0}=ℝ𝟙.
Ōshima <cit.> proved (<ref>) (and the completeness of (ℱ_e/ℝ𝟙,ℰ) as well) for the Dirichlet form associated with a symmetric Hunt process which is recurrent in the sense of Harris; note that the recurrence in the sense of Harris is stronger than the usual recurrence of the associated Dirichlet form. Fukushima and Takeda <cit.> (see also <cit.>) showed (<ref>) for irreducible recurrent symmetric Dirichlet forms (ℰ,ℱ) under the (only) additional assumption that m(E)<∞. In the recent book <cit.>, Chen and Fukushima have extended this result to the case of m(E)=∞ when (ℰ,ℱ) is regular, by using the theory of random time changes of Dirichlet spaces. As part of our main theorem, we generalize (<ref>) to any irreducible recurrent symmetric Dirichlet form. In fact, this generalization could be obtained (at least when L^2(E,m) is separable) also by applying the theory of regular representations of Dirichlet spaces (see <cit.>) to reduce the proof to the case where (ℰ,ℱ) is regular. The advantage of our proof is that it is based on totally elementary analytic arguments and is free from any use of time changes or regular representations of Dirichlet spaces. Here is the statement of our main theorem. See <cit.> or <cit.> for basics on ℱ_e, and <cit.> or <cit.> for details about irreducibility and recurrence of (ℰ,ℱ). We remark that ℱ_e⊂⋂_t∈(0,∞)𝒟[T_t] by Lemma <ref>-(1) below. We say that (E,ℬ,m) is non-trivial if and only if both m(A)>0 and m(E∖ A)>0 hold for some A∈ℬ, which is equivalent to the condition that L^2(E,m)⊄ℝ𝟙 since (E,ℬ,m) is assumed to be σ-finite. Consider the following six conditions.
1) (ℰ,ℱ) is both irreducible and recurrent.
2) {u∈ℱ_e | ℰ(u,u)=0}=ℝ𝟙.
3) {u∈ℱ_e∩ L^∞_+(E,m) | ℰ(u,u)=0}={c𝟙 | c∈[0,∞)}.
4) If u∈ℱ_e is ℰ-excessive in the wide sense, then u∈ℝ𝟙.
5) If u∈ L^0_+(E,m) is ℰ-excessive, then u∈ℝ𝟙.
6) If u∈ℱ_e∩ L^∞_+(E,m) is ℰ-excessive, then u∈ℝ𝟙.
The three conditions 1), 2), 3) are equivalent to each other and imply 4), 5), 6). If (E,ℬ,m) is non-trivial, then the six conditions are all equivalent.
The organization of this paper is as follows. In Section <ref>, we prepare basic results about the extended space ℱ_e and ℰ-excessive functions, which are valid as long as (ℰ,ℱ) is a symmetric positivity preserving form. The key results there are Propositions <ref> and <ref>, which are essentially known but seem new in the present general framework. Furthermore, Proposition <ref> provides a characterization of the notion of ℰ-excessive functions in terms of ℱ_e and ℰ.
Making use of these two propositions, we show Theorem <ref> in Section <ref>.§ PRELIMINARIES: THE EXTENDED (DIRICHLET) SPACE AND EXCESSIVE FUNCTIONSAs noted in the previous section, we fix a σ-finite measure space (E,ℬ,) throughout this paper, and all ℬ-measurable functions are assumed to be [-∞,∞]-valued. Note that by the σ-finiteness of (E,ℬ,) we can take η∈ L^1(E,)∩ L^∞(E,) such that η>0 -a.e.We follow the convention that ℕ={1,2,3,…}, i.e., 0∉ℕ. For a,b∈[-∞,∞], we write a∨ b:=max{a,b}, a∧ b:=min{a,b}, a^+:=a∨ 0 and a^-:=-(a∧ 0). For {a_n}_n∈ℕ⊂[-∞,∞] and a∈[-∞,∞], we write a_n↑ a (resp. a_n↓ a) if and only if {a_n}_n∈ℕ is non-decreasing (resp. non-increasing) and lim_n→∞a_n=a. We use the same notation also for (-equivalence classes of) [-∞,∞]-valued functions. As introduced before Definition <ref>, identifying any two ℬ-measurable functions that are equal -a.e., we set L_+(E,):={f| f:E→[0,∞], f is ℬ-measurable}, L^0(E,):={f| f:E→ℝ, f is ℬ-measurable} and L^p_+(E,):=L^p(E,)∩ L_+(E,), p∈[1,∞]∪{0}. We regard ℝ:={c| c∈ℝ} as a linear subspace of L^0(E,). Let ·_p denote the norm of L^p(E,) for p∈[1,∞]. Finally, let ⟨ f,g⟩:=∫_Efg d for f,g∈ L_+(E,) and also for f,g∈ L^0(E,) with fg∈ L^1(E,).Recall the following definitions regarding bounded linear operators on L^2(E,). Let T:L^2(E,)→ L^2(E,) be a bounded linear operator on L^2(E,). T is called positivity preserving if and only if Tf≥ 0 -a.e. for any f∈ L^2_+(E,). T is called Markovian if and only if 0≤ Tf≤ 1 -a.e. for any f∈ L^2(E,) with 0≤ f≤ 1 -a.e. Clearly, if T is positivity preserving then so is its adjoint T^*. Note that if T is Markovian, then it is positivity preserving, Tf_∞≤f_∞ for any L^2(E,)∩ L^∞(E,) and T^*f_1≤f_1 for any f∈ L^1(E,)∩ L^2(E,). Moreover, using the σ-finiteness of (E,ℬ,), we easily have the following proposition. Let T:L^2(E,)→ L^2(E,) be a positivity preserving bounded linear operator on L^2(E,). T|_L^2_+(E,) uniquely extends to a map T:L_+(E,)→ L_+(E,) such that Tf_n↑ Tf -a.e. for any f∈ L_+(E,) and any {f_n}_n∈ℕ⊂ L_+(E,) with f_n↑ f -a.e. Moreover, let f,g∈ L_+(E,) and a∈[0,∞]. Then T(f+g)=Tf+Tg, T(af)=aTf, ⟨ Tf,g⟩=⟨ f,T^*g⟩, and if f≤ g -a.e. then Tf≤ Tg -a.e. Let 𝒟[T]:={f∈ L^0(E,)| T|f|<∞ -a.e.}. Then T:L^2(E,)→ L^2(E,) is extended to a linear operator T:𝒟[T]→ L^0(E,) given by Tf:=T(f^+)-T(f^-), f∈𝒟[T], so that it has the following properties: If f,g∈𝒟[T] and f≤ g -a.e. then Tf≤ Tg -a.e. If {f_n}_n∈ℕ⊂𝒟[T] and f,g∈𝒟[T] satisfy lim_n→∞f_n=f -a.e. and |f_n|≤ |g| -a.e. for any n∈ℕ, then lim_n→∞Tf_n=Tf -a.e. Throughout the rest of this paper, we fix a closed symmetric form (ℰ,ℱ) on L^2(E,) together with its associated symmetric strongly continuous contraction semigroup {T_t}_t∈(0,∞) and resolvent {G_α}_α∈(0,∞) on L^2(E,); see <cit.> for basics on closed symmetric forms on Hilbert spaces and their associated semigroups and resolvents.Let us further recall the following definition.(ℰ,ℱ) is called a positivity preserving form if and only if u^+∈ℱ and ℰ(u^+,u^+)≤ℰ(u,u) for any u∈ℱ, or equivalently, T_t is positivity preserving for any t∈(0,∞). (ℰ,ℱ) is called a Dirichlet form if and only if u^+∧ 1∈ℱ and ℰ(u^+∧ 1,u^+∧ 1)≤ℰ(u,u) for any u∈ℱ, or equivalently, T_t is Markovian for any t∈(0,∞). See, e.g., <cit.> for the equivalences stated in Definition <ref>.In the rest of this section, we assume that (ℰ,ℱ) is a positivity preserving form. The following definition is standard (see <cit.>, <cit.> or <cit.>). We define the extended space ℱ_e associated with (ℰ,ℱ) byℱ_e:={u∈ L^0(E,) | 230ptlim_n→∞u_n=u-a.e. 
for some {u_n}_n∈ℕ⊂ℱ with lim_k∧ℓ→∞ℰ(u_k-u_ℓ,u_k-u_ℓ)=0}.For u∈ℱ_e, such {u_n}_n∈ℕ⊂ℱ as in (<ref>) is called an approximating sequence for u. When (ℰ,ℱ) is a Dirichlet form, ℱ_e is called the extended Dirichlet space associated with (ℰ,ℱ). Obviously ℱ⊂ℱ_e and ℱ_e is a linear subspace of L^0(E,). By virtue of <cit.>, ℱ=ℱ_e∩ L^2(E,), and for u,v∈ℱ_e with approximating sequences {u_n}_n∈ℕ and {v_n}_n∈ℕ, respectively, the limit lim_n→∞ℰ(u_n,v_n)∈ℝ exists and is independent of particular choices of {u_n}_n∈ℕ and {v_n}_n∈ℕ, as discussed in <cit.>. By setting ℰ(u,v):=lim_n→∞ℰ(u_n,v_n), ℰ is extended to a non-negative definite symmetric bilinear form on ℱ_e. Then it is easy to see that lim_n→∞ℰ(u-u_n,u-u_n)=0 for u∈ℱ_e and any approximating sequence {u_n}_n∈ℕ⊂ℱ for u. Moreover, we have the following proposition due to Schmuland <cit.>, which is easily proved by utilizing a version <cit.> of the Banach-Saks theorem. Let u∈ L^0(E,) and {u_n}_n∈ℕ⊂ℱ satisfy lim_n→∞u_n =u -a.e. and lim inf_n→∞ℰ(u_n,u_n)<∞. Then u∈ℱ_e, ℰ(u,u)≤lim inf_n→∞ℰ(u_n,u_n), and lim inf_n→∞ℰ(u_n,v)≤ℰ(u,v) ≤lim sup_n→∞ℰ(u_n,v) for any v∈ℱ_e. In particular, we easily see from Proposition <ref> that u^+∈ℱ_e and ℰ(u^+,u^+)≤ℰ(u,u) for any u∈ℱ_e. For symmetric Dirichlet forms, the properties of ℱ_e stated above are well-known and most of them are proved in the textbooks <cit.> and <cit.> and also in <cit.>. In fact, we can verify similar results in a quite general setting; see Schmuland <cit.> for details. The next proposition (Proposition <ref> below) requires the following lemmas. Let η∈ L^1(E,)∩ L^2(E,) be such that η>0 -a.e., and set u_ℱ_e:=ℰ(u,u)^1/2+∫_E(|u|∧ 1)η d for u∈ℱ_e. Then we have the following assertions: u+v_ℱ_e≤u_ℱ_e+v_ℱ_e and au_ℱ_e≤(|a|∨ 1)u_ℱ_e for any u,v∈ℱ_e and any a∈ℝ. ℱ_e is a complete metric space under the metric d_ℱ_e given by d_ℱ_e(u,v):=u-v_ℱ_e.(1) is immediate and d_ℱ_e is clearly a metric on ℱ_e. For the proof of its completeness, let {u_n}_n∈ℕ⊂ℱ_e be a Cauchy sequence in (ℱ_e,d_ℱ_e). Noting that ℱ is dense in (ℱ_e,d_ℱ_e), for each n∈ℕ take v_n∈ℱ such that v_n-u_n_ℱ_e≤ n^-1. Then {v_n}_n∈ℕ is also a Cauchy sequence in (ℱ_e,d_ℱ_e). A Borel-Cantelli argument easily yields a subsequence {v_n_k}_k∈ℕ of {v_n}_n∈ℕ converging -a.e. to some u∈ L^0(E,), which means that u∈ℱ_e with approximating sequence {v_n_k}_k∈ℕ and hence that lim_k→∞u-v_n_k_ℱ_e=0. The same argument also implies that every subsequence of {v_n}_n∈ℕ admits a further subsequence converging to u in (ℱ_e,d_ℱ_e), from which lim_n→∞u-v_n_ℱ_e=0 follows. Thus lim_n→∞u-u_n_ℱ_e=0. ℱ_e⊂⋂_t∈(0,∞)𝒟[T_t] and T_t(ℱ_e)⊂ℱ_e for any t∈(0,∞). Let η and ·_ℱ_e be as in Lemma , and let u∈ℱ_e. Then ℰ(T_tu,T_tu)≤ℰ(u,u), u-T_tu_2^2≤ tℰ(u,u) and T_tu_ℱ_e≤(3+η_2√(t))u_ℱ_e for any t∈(0,∞), T_sT_tu=T_s+tu for any s,t∈(0,∞), and lim_t↓ 0u-T_tu_ℱ_e=0.Let η, ·_ℱ_e and d_ℱ_e be as in Lemma <ref>. First we prove (2) for u∈ℱ. The fourth assertion is clear. T_tu∈ℱ and ℰ(T_tu,T_tu)≤ℰ(u,u) for t∈(0,∞) by <cit.>, and lim_t↓ 0u-T_tu_ℱ_e=0 by <cit.>. Let t∈(0,∞). Noting that ⟨ f-T_tf,T_tf⟩=T_t/2f_2^2-T_tf_2^2≥ 0 for f∈ L^2(E,), we have u-T_tu_2^2=⟨ u-T_tu,u⟩-⟨ u-T_tu,T_tu⟩≤⟨ u-T_tu,u⟩≤ tℰ(u,u) by <cit.>. 
Applying these estimates to u-T_tu_ℱ_e≤ℰ(u,u)^1/2+ℰ(T_tu,T_tu)^1/2+η_2u-T_tu_2 easily yields T_tu_ℱ_e≤(3+η_2√(t))u_ℱ_e.Now since ℱ is dense in a complete metric space (ℱ_e,d_ℱ_e), it follows from the previous paragraph that T_t|_ℱ is uniquely extended to a continuous map T^e_t from (ℱ_e,d_ℱ_e) to itself, and then clearly T^e_t is linear and the assertions of (2) are true with T^e_t in place of T_t.Let t∈(0,∞) and u∈ℱ_e∩ L_+(E,). It remains to show T^e_tu=T_tu, as v^+,v^-∈ℱ_e for v∈ℱ_e. Since v^+∧ u∈ℱ_e∩ L^2(E,)=ℱ and ℰ(v^+∧ u,v^+∧ u)^1/2≤ℰ(v,v)^1/2+ℰ(u,u)^1/2 for any v∈ℱ by the positivity preserving property of (ℰ,ℱ), an application of the Banach-Saks theorem <cit.> assures the existence of an approximating sequence {w_n}_n∈ℕ for u such that 0≤ w_n≤ u -a.e. A Borel-Cantelli argument yields a subsequence {w_n_k}_k∈ℕ such that lim_k→∞T_tw_n_k=T^e_tu -a.e., and T^e_tu=T_tu follows by letting k→∞ in T_t(inf_j≥ kw_n_j)≤ T_tw_n_k≤ T_tu -a.e.The following proposition (Proposition <ref>), which seems new in spite of its easiness, plays an essential role in the proof of 1)⇒ 2) of Theorem <ref>. Proposition <ref>-(2) is an extension of a result of Chen and Kuwae <cit.> for functions in ℱ to those in ℱ_e, and Proposition <ref>-(3) extends a basic fact for functions in ℱ to those in ℱ_e.Let u∈ℱ_e and v∈ℱ. Thenlim_t↓ 01/t⟨ u-T_tu,v⟩ =ℰ(u,v) 24muand24mu⟨ u-T_tu,v⟩=∫_0^tℰ(u,T_sv)ds, 12mut∈(0,∞).Let u∈ℱ_e. Then u is ℰ-excessive in the wide sense if and only if ℰ(u,v)≥ 0 for any v∈ℱ∩ L_+(E,), or equivalently, for any v∈ℱ_e∩ L_+(E,). Let u∈ℱ_e. Then T_tu=u for any t∈(0,∞) if and only if ℰ(u,u)=0.(1) Let u∈ℱ_e, v∈ℱ and set φ(t):=⟨ u-T_tu,v⟩ for t∈[0,∞), where T_0u:=u. Then t^-1|φ(t)|≤ℰ(u,u)^1/2ℰ(v,v)^1/2 for t∈(0,∞) and lim_t↓ 0t^-1φ(t)=ℰ(u,v) if u∈ℱ by <cit.>, and the same are true for u∈ℱ_e as well by Lemma <ref>. Using Lemma <ref>, we easily see also that φ'(t)=ℰ(u,T_tv) for t∈[0,∞) and that φ' is continuous on [0,∞), proving (<ref>).(2) The third assertion of Proposition <ref> together with the positivity preserving property of (ℰ,ℱ) easily implies that ℰ(u,v)≥ 0 for any v∈ℱ∩ L_+(E,) if and only if the same is true for any v∈ℱ_e∩ L_+(E,). The rest of the assertion is immediate from (<ref>).(3) This is an immediate consequence of (2). The next proposition (Proposition <ref>), which characterizes the notion of ℰ-excessive functions in terms of ℱ_e and ℰ, is of independent interest. The proof is based on a result <cit.> of Ouhabaz which provides a characterization of invariance of closed convex sets for semigroups on Hilbert spaces. A similar argument in a more general framework can be found in Shigekawa <cit.>. Let u∈ L_+(E,). Then u is ℰ-excessive if and only if v∧ u∈ℱ_e and ℰ(v∧ u,v∧ u)≤ℰ(v,v) for any v∈ℱ_e.The notion of ℰ-excessive functionsis determined solely by the pair (ℱ_e,ℰ) of the extended space ℱ_e and the form ℰ:ℱ_e×ℱ_e→ℝ.Let u∈ L_+(E,) be ℰ-excessive and v∈ℱ_e. Suppose u≤ v -a.e. Then u∈ℱ_e and ℰ(u,u)≤ℰ(v,v).Chen and Kuwae <cit.> gave a probabilistic proof of Corollaryfor the Dirichlet forms associated with symmetric right Markov processes.Let K_u:={f∈ L^2(E,)| f≤ u -a.e.}, which is clearly a closed convex subset of L^2(E,). We claim thatu is ℰ-excessiveif and only ifT_t(K_u)⊂ K_u for any t∈(0,∞).Indeed, let t∈(0,∞). If T_tu≤ u -a.e. then T_tf≤ T_tu≤ u -a.e. for any f∈ K_u and hence T_t(K_u)⊂ K_u. 
Conversely if T_t(K_u)⊂ K_u, then choosing η∈ L^2(E,) so that η>0 -a.e., we have (nη)∧ u↑ u -a.e., (nη)∧ u∈ K_u and hence T_tu=lim_n→∞T_t((nη)∧ u)≤ u -a.e.On the other hand, since the projection of f∈ L^2(E,) on K_u is given by f∧ u, <cit.> tells us that T_t(K_u)⊂ K_u for any t∈(0,∞) if and only ifv∧ u∈ℱandℰ(v∧ u,v∧ u)≤ℰ(v,v)for any v∈ℱ.Finally, ℱ_e∩ L^2(E,)=ℱ and Proposition <ref> easily imply that (<ref>) is equivalent to the same condition with ℱ_e in place of ℱ, completing the proof. § PROOF OF THEOREM <REF> We are now ready for the proof of Theorem <ref>. We assume throughout this section that our closed symmetric form (ℰ,ℱ) is a Dirichlet form. The proof consists of three steps. The first one is Proposition <ref> below, which establishes 1)⇒ 2) of Theorem <ref> and whose proof makes full use of Proposition <ref>-(3). Recall the following notions concerning the irreducibility of (ℰ,ℱ); see <cit.> or <cit.> for details.A set A∈ℬ is called ℰ-invariant if and only if AT_t(fE∖ A)=0 -a.e. for any f∈ L^2(E,) and any t∈(0,∞). (ℰ,ℱ) is called irreducible if and only if either (A)=0 or (E∖ A)=0 holds for any ℰ-invariant A∈ℬ.Let u∈ L_+(E,) be ℰ-excessive. Then {u=0} is ℰ-invariant.In fact, the following proof is valid as long as (ℰ,ℱ) is a symmetric positivity preserving form. Let B:={u=0}, f∈ L^2(E,) and set f_n:=|f|∧(nu) for n∈ℕ, so that f_n↑|f|E∖ B -a.e. Then 0≤BT_tf_n≤BT_t(nu)≤ nBu=0 -a.e., and letting n→∞ leads to |BT_t(fE∖ B)|≤BT_t(|f|E∖ B)=0 -a.e. Thus B={u=0} is ℰ-invariant.Suppose that (ℰ,ℱ) is irreducible. If u∈ℱ_e and ℰ(u,u)=0 then u∈ℝ.We follow <cit.>. Let u∈ℱ_e satisfy ℰ(u,u)=0. We may assume that ({u>0})>0. Let λ∈[0,∞) and u_λ:=u-u∧λ. Since (ℰ,ℱ) is assumed to be a Dirichlet form, u_λ∈ℱ_e∩ L_+(E,) and ℰ(u_λ,u_λ)=0 (see Proposition <ref>), and therefore T_tu_λ=u_λ for any t∈(0,∞) by Proposition <ref>-(3). Then {u_λ=0} is ℰ-invariant by Lemma <ref>, and the irreducibility of (ℰ,ℱ) implies that either ({u_λ=0})=0 or ({u_λ>0})=0 holds. Now setting κ:=sup{λ∈[0,∞)|({u_λ=0})=0}, we easily see that κ∈(0,∞) and that u=κ -a.e. For the rest of the proof of Theorem <ref>, let us recall basic notions concerning recurrence and transience of Dirichlet forms. See <cit.> or <cit.> for details. For t∈(0,∞), we define S_t:L^2(E,)→ L^2(E,) by S_tf:=∫_0^tT_sf ds, where the integral is the Riemann integral in L^2(E,). Then t^-1S_t is a Markovian symmetric bounded linear operator on L^2(E,), and therefore it is canonically extended to an operator on L_+(E,) by Proposition <ref>. Furthermore, for any s,t∈(0,∞) we easily see that S_s+t=S_s+T_sS_t=S_s+S_tT_s as operators on L_+(E,) or on L^2(E,).Let f∈ L_+(E,). Then 0≤ S_sf≤ S_tf -a.e. and 0≤ G_βf≤ G_αf -a.e. for 0<s<t, 0<α<β. Therefore there exists a unique Gf∈ L_+(E,) satisfying S_Nf↑ Gf -a.e. It is immediate that Gf_n↑ Gf -a.e. for any {f_n}_n∈ℕ⊂ L_+(E,) with f_n↑ f -a.e. Since, on L^2(E,), {G_α}_α∈(0,∞) is the Laplace transform of {T_t}_t∈(0,∞), we see that S_t_nf↑ Gf -a.e. and G_α_nf↑ Gf -a.e. for any {t_n}_n∈ℕ,{α_n}_n∈ℕ⊂(0,∞) with t_n↑∞, α_n↓ 0. Moreover, since S_t+Nf=S_tf+T_tS_Nf≥ T_tS_Nf -a.e. for t∈(0,∞) and N∈ℕ, by letting N→∞ we have T_tGf≤ Gf -a.e., that is, Gf is ℰ-excessive. We call this operator G:L_+(E,)→ L_+(E,) the 0-resolvent associated with (ℰ,ℱ).[Transience and Recurrence](ℰ,ℱ) is called transient if and only if Gf<∞ -a.e. for some f∈ L_+(E,) with f>0 -a.e. (ℰ,ℱ) is called recurrent if and only if ({0<Gf<∞})=0 for any f∈ L_+(E,). By <cit.>, (ℰ,ℱ) is transient if and only if Gf<∞ -a.e. for any f∈ L^1_+(E,). 
On the other hand, by <cit.>, (ℰ,ℱ) is recurrent if and only if ∈ℱ_e and ℰ(,)=0.The following proposition is the second step of the proof of Theorem <ref>. Assume that (ℰ,ℱ) is recurrent. If u∈ L^0_+(E,) is ℰ-excessive then u∈ℱ_e and ℰ(u,u)=0.Let n∈ℕ. Then u∧ n≤ n -a.e., n∈ℱ_e and ℰ(n,n)=0 by the recurrence of (ℰ,ℱ), and u∧ n is ℰ-excessive since so are u and . Thus u∧ n∈ℱ_e and ℰ(u∧ n,u∧ n)=0 by Corollary <ref>. Lemma <ref>-(2) implies that lim_n→∞v-u∧ n_ℱ_e=0 for some v∈ℱ_e with ·_ℱ_e as defined there, and then we easily have u=v∈ℱ_e and ℰ(u,u)=0. As the third step, now we finish the proof of Theorem <ref>. 1)⇒ 2) follows by Proposition <ref>, and so does 1)⇒ 5) by Propositions <ref> and <ref>. 2)⇒ 3), 4)⇒ 6) and 5)⇒ 6) are trivial.4pt1)⇒ 4): Let u∈ℱ_e be ℰ-excessive in the wide sense, n∈ℕ and u_n:=u∧ n. Then u_n∈ℱ_e, u_n is also ℰ-excessive in the wide sense, n-u_n∈ℱ_e∩ L_+(E,) and hence ℰ(u_n,u_n)=ℰ(u_n,u_n-n)≤ 0 by Proposition <ref>-(2). As in the proof of Proposition <ref>, letting n→∞ we get ℰ(u,u)=0 by Lemma <ref>-(2), and hence u∈ℝ by Proposition <ref>.4pt3)⇒ 1): (ℰ,ℱ) is recurrent since ∈ℱ_e and ℰ(,)=0. Let A∈ℬ be ℰ-invariant. Then A=A∈ℱ_e∩ L^∞_+(E,) and 0≤ℰ(A,A)≤ℰ(,)=0 by <cit.>. Now 3) implies A∈ℝ, and hence either (A)=0 or (E∖ A)=0.4pt6)⇒ 3) when (E,ℬ,) is non-trivial: Choose g∈ L^1(E,) so that g>0 -a.e., and set E_c:={Gg=∞}. Then E_c∈ℱ_e∩ L^∞_+(E,) and ℰ(E_c,E_c)=0 by <cit.>, and 6) together with Proposition <ref>-(3) implies E_c∈ℝ, i.e., either (E_c)=0 or (E∖ E_c)=0. In view of 6) and Proposition <ref>-(3), it suffices to show (E∖ E_c)=0.Suppose (E_c)=0, so that (ℰ,ℱ) is transient, and set η:=g/(1∨ Gg). Then 0<η≤ g -a.e. and ⟨η,Gη⟩≤⟨ g/(1∨ Gg),Gg⟩≤g_1<∞. Let f∈ L^1_+(E,)∩ L^2(E,) and set f_n:=f∧(nη) for n∈ℕ. Then f_n∈ L^2_+(E,), Gf_n≤ nGη<∞ -a.e., ⟨ f_n,Gf_n⟩<∞ and f_n↑ f -a.e. Since ℰ(G_αf_n,G_αf_n) ≤⟨ f_n,G_αf_n⟩≤⟨ f_n,Gf_n⟩<∞ for α∈(0,∞), Proposition <ref> implies Gf_n∈ℱ_e. Since Gf_n is ℰ-excessive, so is n∧ Gf_n∈ℱ_e∩ L^∞_+(E,) and 6) yields n∧ Gf_n∈ℝ. Letting n→∞ and noting Gf<∞ -a.e. by the transience of (ℰ,ℱ), we get Gf∈ℝ. Let α∈(0,∞). Then G_αf∈ L^1_+(E,)∩ L^2(E,) and hence GG_αf∈ℝ. Letting n→∞ in G_αf=G_1/nf-(α-1/n)G_1/nG_αf implies that G_αf=Gf-α GG_αf∈ℝ. Since α G_αf→ f in L^2(E,) as α→∞, we conclude that L^1_+(E,)∩ L^2(E,)⊂ℝ, contradicting the assumption that (E,ℬ,) is non-trivial. Thus (E∖ E_c)=0 follows.§.§ AcknowledgementsThe author would like to express his deepest gratitude toward Professor Masatoshi Fukushima for fruitful discussions and for having suggested this problem to him in <cit.>. The author would like to thank Professor Masanori Hino for detailed valuable comments on the proofs in an earlier version of the manuscript; in particular, the proofs of Propositions <ref> and <ref> have been much simplified by following his suggestion of the use of Lemma <ref> and Corollary <ref>. The author would like to thank also Professor Masayoshi Takeda and Professor Jun Kigami for valuable comments.99 BG Blumenthal, R. M., Getoor, R. K., Markov Processes and Potential Theory, Academic Press, New York (1968), republished by Dover Publications, Inc., New York (2007).CF Chen Z.-Q. Fukushima M., Symmetric Markov Processes, Time Change, and Boundary Theory, London Math. Soc. Monographs, 35, Princeton University Press, Princeton (2012). CK:subh Chen Z.-Q., Kuwae K., On subharmonicity for symmetric Markov processes, J. Math. Soc. Japan, 64 (2012), 1181–1209. 
F:Fe Fukushima M., On extended Dirichlet spaces and the space of BL functions, in “Potential theory and stochastics in Albac”, Theta Ser. Adv. Math., 11, Theta, Bucharest (2009), 101–110. F1 Fukushima M., personal communication (December 17, 2008).FOT Fukushima M., Oshima Y., Takeda M., Dirichlet Forms and Symmetric Markov Processes, 2nd ed., de Gruyter Studies in Math., 19, Walter de Gruyter, Berlin (2011). FT Fukushima M., Takeda M., Markov Processes (in Japanese), Baifukan, Tokyo (2008). Get:Exc Getoor, R. K., Excessive Measures, Birkhäuser, Boston (1990). Get:LMN80 Getoor, R. K., Transience and recurrence of Markov processes, in “Séminaire de Probabilités XIV 1978/79”, Lecture Notes in Math., 784, Springer, Berlin (1980), 397–409.Oshima:LMN82 Ōshima Y., Potential of recurrent symmetric Markov processes and its associated Dirichlet spaces, in “Functional analysis in Markov processes (Katata/Kyoto, 1981)”, Lecture Notes in Math., 923, Springer, Berlin (1982), 260–275. Ouh:PA96 Ouhabaz, E. M., Invariance of Closed Convex Sets and Domination Criteria for Semigroups, Potential Anal., 5 (1996), 611–625. Sch:extend Schmuland, M., Extended Dirichlet spaces, C. R. Math. Acad. Sci. Soc. R. Can., 21 (1999), 146–152. Sch:Fatou Schmuland, M., Positivity preserving forms have the Fatou property, Potential Anal., 10 (1999), 373–378. Shigekawa:convex Shigekawa I., Semigroups preserving a convex set in a Banach space, Kyoto J. Math., 51 (2011), 647–672. | http://arxiv.org/abs/1703.08943v3 | {
"authors": [
"Naotaka Kajino"
],
"categories": [
"math.PR",
"31C05, 31C25, 60J45"
],
"primary_category": "math.PR",
"published": "20170327060813",
"title": "Equivalence of recurrence and Liouville property for symmetric Dirichlet forms"
} |
Open Vocabulary Scene Parsing
Hang Zhao^1, Xavier Puig^1, Bolei Zhou^1, Sanja Fidler^2, Antonio Torralba^1
^1Massachusetts Institute of Technology, USA
^2University of Toronto, Canada
December 30, 2023
============================================================================================================================================================================
§ INTRODUCTION
In a macroscopic system, near-equilibrium phenomena can often be described by classical hydrodynamics. When the microscopic theory contains weakly coupled U(1) gauge fields, long-range correlations mediated by those fields are possible. Maxwell's equations in matter give an effective description of such correlations in terms of classical gauge fields. These equations are useful when the coupling between electromagnetic and thermal/mechanical degrees of freedom can be neglected. We would like to understand the effective description of relativistic systems in which macroscopic electromagnetic degrees of freedom are coupled to the macroscopic thermal and mechanical degrees of freedom. This amounts to coupling Maxwell's equations in matter to hydrodynamic equations. When the matter is electrically conducting and electric fields are neglected, such a classical effective theory is usually called magneto-hydrodynamics (MHD). Our motivation is two-fold. From a fundamental point of view, a number of recent developments in relativistic hydrodynamics have pushed the boundaries of the "traditional" theory, as described for example in the classic textbook <cit.>. These include: a systematic derivative expansion in hydrodynamics <cit.>, an equivalence between hydrodynamics and black hole dynamics <cit.>, the manifestation of chiral anomalies in hydrodynamic equations <cit.>, the relevance of partition functions <cit.>, elucidation of the role of the entropy current <cit.>, new insights into relativistic hydrodynamic turbulence <cit.>, convergence properties of the hydrodynamic expansion <cit.>, and a classification of hydrodynamic transport coefficients <cit.>. It is reasonable to expect that the above insights will also lead to an improved understanding of "traditional" MHD. For example, there does not appear to be an agreement in the current literature on such a basic question as the number of transport coefficients in MHD. From an applied point of view, recent years have seen relativistic hydrodynamics expand from its traditional areas of astrophysical plasmas and hot subnuclear matter into the domain of condensed matter physics. Examples include transport near relativistic quantum critical points <cit.>, in graphene <cit.>, and in Weyl semi-metals <cit.>. For conducting matter, MHD is a natural extension of such hydrodynamic models. In what follows, we will outline the construction of classical relativistic hydrodynamics with dynamical electromagnetic fields, starting from equilibrium thermodynamics. In order to write down the hydrodynamic equations, we will assume that the system is locally in thermal equilibrium. We will further assume that the departures from local equilibrium may be implemented through a derivative expansion, such that the parameters which characterize the equilibrium (temperature, chemical potential, magnetic field, fluid velocity) vary slowly in space and time.
At one-derivative order, transport coefficients such as viscosity and electrical conductivity appear in the constitutive relations. We are not aware of previous treatments that list all one-derivative terms in the constitutive relations of magnetohydrodynamics. For parity-preserving conducting fluids in a magnetic field, we find eleven transport coefficients at one-derivative order. One transport coefficient is thermodynamic, and determines the angular momentum of the charged fluid induced by the magnetic field. Three transport coefficients are non-equilibrium and non-dissipative: these are the two Hall viscosities (transverse and longitudinal), and one Hall conductivity. There are also seven non-equilibrium dissipative transport coefficients: two electrical conductivities (transverse and longitudinal), two shear viscosities (transverse and longitudinal), and three bulk viscosities. The constitutive relations for the energy-momentum tensor are given in eqs. (<ref>), (<ref>), and for the current in eqs. (<ref>), (<ref>). The dissipative coefficients have to satisfy the inequalities in eq. (<ref>), imposed by the positivity of entropy production, or alternatively by the positivity of the spectral function. As a simple application of the hydrodynamic equations, we study eigenmodes of small oscillations near thermal equilibrium in a constant magnetic field. We start in Section <ref> with a discussion of equilibrium thermodynamics in the presence of external electromagnetic and gravitational fields. In Section <ref>, we will discuss hydrodynamics, again when electromagnetic and gravitational fields are external. The magnetic fields are taken as "large" and electric fields as "small" in the sense of the derivative expansion. The smallness of the electric field is due to electric screening. Our procedure will improve on existing studies by taking into account the effects of polarization (magnetic, electric, or both), the effects of electric fields, and by enumerating all transport coefficients at leading order in derivatives. In Section <ref> we discuss hydrodynamics with dynamical electromagnetic fields, as an extension of hydrodynamics with fixed electromagnetic fields. As a simple example, one can study Alfvén and magnetosonic waves in a neutral state (including their damping and polarization), and waves in a dynamically charged (but overall electrically neutral) state. We compare our results with the recent "dual" formulation of MHD in Section <ref>, and with some of the previous studies of transport coefficients of relativistic fluids in a magnetic field in the Appendix.
§ THERMODYNAMICS
Let us start with equilibrium thermodynamics. For a system in equilibrium subject to an external non-dynamical gauge field A_μ and an external non-dynamical metric g_μν, we write the logarithm of the partition function W_s=-i ln Z as W_s[g,A] = ∫ d^d+1x √(-g) F, and we will call F the free energy density. [Conventions: metric is mostly plus, ϵ^0123=1/√(-g).] For a system with short-range correlations in equilibrium and for external sources A and g which only vary on scales much longer than the correlation length, F is a local function of the external sources, and W_s is extensive in the thermodynamic limit. The density F may then be written as an expansion in derivatives of the external sources <cit.>. The current J^μ (defined by varying W_s with respect to the gauge field) and the energy-momentum tensor T^μν (defined by varying W_s with respect to the metric) automatically satisfy
∇_μ T^μν = F^νλ J_λ , ∇_μ J^μ = 0,
owing to gauge- and diffeomorphism-invariance of W_s[g,A]. The object W_s[g,A] is the generating functional of static (zero frequency) correlation functions of T^μν and J^μ in equilibrium. Of course, the conservation laws (<ref>) are also true out of equilibrium, being a consequence of gauge- and diffeomorphism-invariance in the microscopic theory. Being in equilibrium means that there exists a timelike Killing vector V such that the Lie derivative of the sources with respect to V vanishes. The equilibrium temperature T, velocity u^α and the chemical potential μ are functions of the Killing vector and the external sources:
T = 1/(β_0 √(-V^2)) , u^μ = V^μ/√(-V^2) , μ = (V^μ A_μ + Λ_V)/√(-V^2) .
Here β_0 is a constant setting the normalization of temperature, and Λ_V is a gauge parameter which ensures that μ is gauge-invariant <cit.>. The electromagnetic field strength tensor F_μν=∂_μ A_ν - ∂_ν A_μ can be decomposed in 3+1 dimensions as
F_μν = u_μ E_ν - u_ν E_μ - ϵ_μνρσ u^ρ B^σ ,
where E_μ≡ F_μν u^ν is the electric field, and B^μ≡ (1/2) ϵ^μναβ u_ν F_αβ is the magnetic field, satisfying u·E=u·B=0. The decomposition (<ref>) is just an identity, true for any antisymmetric F_μν and any timelike unit u^μ. Electric and magnetic fields are not independent, but are related by the "Bianchi identity" ϵ^μναβ∇_ν F_αβ=0, which in equilibrium becomes
∇·B = B·a - E·Ω , u_μ ϵ^μνρσ ∇_ρ E_σ = u_μ ϵ^μνρσ E_ρ a_σ .
Here Ω^μ≡ϵ^μναβ u_ν∇_α u_β is the vorticity and a^μ≡ u^λ∇_λ u^μ is the acceleration. In equilibrium, the acceleration is related to temperature by ∂_λ T = -T a_λ. Relations (<ref>) are curved-space versions of the familiar flat-space equilibrium identities ∇·B=0 and ∇×E=0. In order to write down the density F in the derivative expansion, we need to specify the derivative counting of the external sources A and g. The natural derivative counting for the metric is g∼ O(1) (assuming we are interested in transport phenomena in flat space), while the derivative counting for A depends on the physical system under consideration. As an example, consider an insulator, such as a system made out of particles which carry electric/magnetic dipole moments, but no electric charges. In such a system, there is no conserved electric charge, and the above μ is not a relevant thermodynamic variable. If we are interested in the thermodynamics of such a system subject to external electric and magnetic fields, we are free to choose B∼ O(1) and E∼ O(1) in the derivative expansion. The free energy density is then
F = p(T, E^2, E·B, B^2) + O(∂) .
The leading-order term is the pressure, whose dependence on E and B encodes the electric, magnetic, and mixed susceptibilities. For the list of O(∂) contributions to F, see ref. <cit.>. As another example, consider a system that has electrically charged degrees of freedom (a conductor), such that μ gives a non-negligible contribution to thermodynamics. In equilibrium, ∂_λ μ = E_λ - μ a_λ is satisfied identically, which suggests that counting μ∼ O(1) leads to E∼ O(∂). This is a manifestation of electric screening. The magnetic field, on the other hand, may still be counted as O(1). The counting B∼ O(1) and E∼ O(∂) is the relevant derivative counting for MHD. The free energy density is then
F = p(T, μ, B^2) + ∑_n=1^5 M_n(T,μ,B^2) s_n^(1) + O(∂^2) ,
where s_n^(1) are O(∂) gauge- and diffeomorphism-invariants, and the coefficients M_n need to be determined by the microscopic theory, just like the pressure p. Following ref. <cit.>, we list the invariants s_n^(1) in Table <ref>.
The rows labeled C, P, T indicate the eigenvalue of the invariant under charge conjugation, parity, and time reversal. The last row shows the weight w of the invariant under a local rescaling of the metric: g_μν→g̃_μν = e^-2φ g_μν, and s_n→s̃_n = e^wφ s_n. The invariant s_3^(1) does not transform homogeneously under the rescaling, and cannot appear in a conformally invariant generating functional. Hence, we expect that in a conformal theory M_3=0. The coefficient M_5 is the usual magneto-electric (or electro-magnetic) susceptibility; similarly M_4 may be termed the magneto-vortical susceptibility. For the rest of the paper, we will adopt the derivative counting B∼ O(1) and E∼ O(∂), as is appropriate for MHD. As an example, consider a parity-invariant theory in a magnetic field. The only O(∂) thermodynamic coefficient is the magneto-vortical susceptibility M_Ω ≡ M_4, which affects ⟨ T^μν⟩ and ⟨ J^μ⟩ when there is non-zero vorticity, and higher-point equilibrium correlation functions of T^μν and J^μ when there is no vorticity. We define static (zero frequency) correlation functions of T^μν and J^μ by varying the generating functional (<ref>) with respect to g_μν and A_μ in the standard fashion. For example, in flat space at constant temperature T_0, constant chemical potential μ_0, and constant magnetic field B_0 in the z-direction, one finds the following static correlation functions at small momentum:
⟨ T^tx J^z ⟩ = -M_Ω k_x k_z , ⟨ T^tx T^yz ⟩ = -i M_Ω B_0 k_z .
The first expression may be used to evaluate the magneto-vortical susceptibility M_Ω in a system that is not subject to a magnetic field and is not rotating.
§ HYDRODYNAMICS WITH EXTERNAL ELECTROMAGNETIC FIELDS
§.§ Constitutive relations
Hydrodynamics is conventionally formulated as an extension of thermodynamics, in the sense that the hydrodynamic variables are inherited from the thermodynamic parameters. This is a strong assumption, and we expect the hydrodynamic description only to be valid for B≪ T^2; otherwise new non-hydrodynamic degrees of freedom (such as those associated with Landau levels) must be taken into account. Let us start by taking the E and B fields as external and non-dynamical. In hydrodynamics, the thermodynamic variables T, u^α, and μ are promoted to time-dependent quantities. Out of equilibrium, they no longer have a microscopic definition, but are merely auxiliary variables used to build the non-equilibrium energy-momentum tensor and the current. The expressions of T^μν and J^μ in terms of the auxiliary variables T, u^α, and μ are called constitutive relations; they contain both thermodynamic contributions (coming from the variation of F) and non-equilibrium contributions (such as the viscosity). It is worth noting that thermodynamic contributions and non-equilibrium contributions to the constitutive relations may appear at the same order in the derivative expansion. The constitutive relations are then used together with the conservation laws (<ref>) to find the energy-momentum tensor and the current. While in thermodynamics Eqs. (<ref>) are mere identities reflecting the symmetries of W_s, solving Eqs. (<ref>) in hydrodynamics can be a challenging endeavour leading to rich physics. We will write the energy-momentum tensor using the decomposition with respect to the timelike velocity vector u^μ,
T^μν = E u^μ u^ν + P Δ^μν + Q^μ u^ν + Q^ν u^μ + 𝒯^μν ,
where Δ^μν≡ g^μν + u^μ u^ν is the transverse projector, Q^μ is transverse to u_μ, and 𝒯^μν is transverse to u_μ, symmetric, and traceless.
Explicitly, the coefficients are ℰ ≡ u_μ u_ν T^μν, 𝒫 ≡ (1/3) Δ_μν T^μν, 𝒬_μ ≡ -Δ_μα u_β T^αβ, and 𝒯_μν ≡ (1/2)(Δ_μα Δ_νβ + Δ_να Δ_μβ - (2/3) Δ_μν Δ_αβ) T^αβ. Similarly, we will write the current as

J^μ = 𝒩 u^μ + 𝒥^μ ,

where the charge density is 𝒩 ≡ -u_μ J^μ, and the spatial current is 𝒥_μ ≡ Δ_μλ J^λ.

Using the equilibrium free energy (<ref>), one can isolate O(1) and O(∂) contributions to the energy-momentum tensor and the current:

ℰ = ϵ(T,μ,B^2) + f_ℰ , 𝒫 = Π(T,μ,B^2) + f_𝒫 , 𝒩 = n(T,μ,B^2) + f_𝒩 , 𝒯^μν = χ_B(T,μ,B^2) ( B^μ B^ν - (1/3) Δ^μν B^2 ) + f^μν_𝒯 ,

where ϵ = -p + T(∂p/∂T) + μ(∂p/∂μ), Π = p - (2/3) χ_B B^2, n = ∂p/∂μ, and the magnetic susceptibility is χ_B = 2 ∂p/∂B^2. The terms f_ℰ, f_𝒫, f_𝒩, f^μν_𝒯, 𝒬^μ, and 𝒥^μ are all O(∂), and contain both equilibrium and non-equilibrium contributions, f_ℰ = f̅_ℰ + f_ℰ^non-eq. etc, where the bar denotes O(∂) contributions coming from the variation of W_s.

§.§ Field redefinitions

Out of equilibrium, the variables T, u^α, and μ may be redefined. Such a redefinition is often referred to as a choice of “frame”, see e.g. ref. <cit.> for a discussion. Consider changing the hydrodynamic variables to T' = T + δT, u'^α = u^α + δu^α, μ' = μ + δμ, where δT, δu^α, and δμ are O(∂). The same energy-momentum tensor and the current may be expressed either in terms of T, u^α, μ, or in terms of T', u'^α, μ' (note that B^2 = B'^2 + O(∂^2)). Physical transport coefficients must be derived from O(∂) quantities which are invariant under such changes of hydrodynamic variables. A direct evaluation shows that the following combinations are invariant under “frame” transformations:

f ≡ f_𝒫 - (∂Π/∂ϵ)_n f_ℰ - (∂Π/∂n)_ϵ f_𝒩 ,
ℓ ≡ (B^α/B) ( 𝒥_α - (n/(ϵ+p)) 𝒬_α ) ,
ℓ^μ_⊥ ≡ 𝔹^μα ( 𝒥_α - (n/(ϵ+p-χ_B B^2)) 𝒬_α ) ,
t^μν ≡ f^μν_𝒯 - ( B^μ B^ν - (1/3) Δ^μν B^2 ) [ (∂χ_B/∂ϵ)_n f_ℰ + (∂χ_B/∂n)_ϵ f_𝒩 ] .

Here 𝔹^μν ≡ Δ^μν - B^μ B^ν/B^2 is the projector onto a plane orthogonal to both u^μ and B^μ, all thermodynamic derivatives are evaluated at fixed B^2, and B ≡ √(B^2). When the magnetic susceptibility χ_B is T- and μ-independent, the stress f^μν_𝒯 is frame-invariant.

As an example, one can choose δT and δμ such that ℰ' = ϵ(T',μ',B'^2), 𝒩' = n(T',μ',B'^2), and further choose δu^α such that 𝒬'_α = 0. This corresponds to the Landau-Lifshitz frame <cit.>. The components of the energy-momentum tensor and the current take the following form in the Landau-Lifshitz frame:

𝒫' = Π(T',μ',B'^2) + f , 𝒥'^μ = ℓ^μ_⊥ + (B'^μ/B') ℓ , 𝒯'^μν = χ_B(T',μ',B'^2) ( B'^μ B'^ν - (1/3) Δ'^μν B'^2 ) + t^μν ,

where the frame invariants are given by eq. (<ref>). In the Landau-Lifshitz frame, a non-zero value of the pseudoscalar frame-invariant ℓ indicates a current flowing along the magnetic field. In a constant external magnetic field such currents arise as consequences of chiral anomalies <cit.>; in an inhomogeneous external field, an electric current flowing along the magnetic field can arise without chiral anomalies, owing to a non-zero magnetic susceptibility.
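That f is insensitive to such redefinitions is a chain-rule statement: under T → T + δT, μ → μ + δμ the O(∂) pieces shift by the first-order change of the corresponding equilibrium functions, f_ℰ → f_ℰ - ϵ_{,T} δT - ϵ_{,μ} δμ, and similarly for f_𝒩 and f_𝒫. A minimal symbolic check (sympy; the explicit equations of state below are made-up placeholders, used only to exercise the chain rule):

```python
import sympy as sp

T, mu, dT, dmu = sp.symbols('T mu delta_T delta_mu')
fE, fP, fN = sp.symbols('f_E f_P f_N')     # arbitrary O(d) fluctuations

# Made-up equations of state at fixed B^2; any smooth choice would do.
eps = T**4 + T**2*mu**2                    # energy density eps(T, mu)
n = T**2*mu + mu**3/3                      # charge density n(T, mu)

a, b = sp.symbols('a b')                   # stand-ins for (eps, n)
Pi_hat = a*b + a**2                        # Pi as a function of (eps, n)
Pi = Pi_hat.subs({a: eps, b: n})           # Pi(T, mu) by composition

dPi_deps = sp.diff(Pi_hat, a).subs({a: eps, b: n})   # (dPi/deps) at fixed n
dPi_dn = sp.diff(Pi_hat, b).subs({a: eps, b: n})     # (dPi/dn) at fixed eps

def f(fP_, fE_, fN_):                      # the frame invariant
    return fP_ - dPi_deps*fE_ - dPi_dn*fN_

# Shifted O(d) pieces under T -> T + dT, mu -> mu + dmu
fE2 = fE - sp.diff(eps, T)*dT - sp.diff(eps, mu)*dmu
fN2 = fN - sp.diff(n, T)*dT - sp.diff(n, mu)*dmu
fP2 = fP - sp.diff(Pi, T)*dT - sp.diff(Pi, mu)*dmu

print(sp.simplify(f(fP2, fE2, fN2) - f(fP, fE, fN)))   # -> 0
```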
§.§ Thermodynamic frame

The energy-momentum tensor and the current derived from the static generating functional W_s correspond to a different frame, termed in <cit.> the thermodynamic frame. Taking the variation of the free energy (<ref>), one finds the following equilibrium O(∂) contributions in the thermodynamic frame:

f̅_ℰ = ∑_{n=1}^{5} ϵ_n s_n^(1) , f̅_𝒫 = ∑_{n=1}^{5} π_n s_n^(1) , f̅_𝒩 = ∑_{n=1}^{5} ϕ_n s_n^(1) , 𝒬̅^μ = ∑_{n=1}^{4} γ_n v_n^(1)μ , 𝒥̅^μ = ∑_{n=1}^{4} δ_n v_n^(1)μ , f̅^μν_𝒯 = ∑_{n=1}^{10} θ_n t_n^(1)μν ,

where the bar signifies equilibrium contributions, and the coefficients ϵ_n, π_n, ϕ_n, γ_n, δ_n, θ_n are all O(1) functions of the five thermodynamic coefficients M_n(T,μ,B^2) and of the magnetic susceptibility χ_B = 2∂p/∂B^2. The explicit expressions are given in Appendix <ref>. The one-derivative scalars s_n^(1) are given in Table <ref>. The one-derivative vectors v_n^(1)μ and tensors t_n^(1)μν are listed in Table <ref>. The table does not list all O(∂) vectors and tensors, but only those that appear in the equilibrium 𝒬^μ and 𝒯^μν. The frame invariants (<ref>) then become

f = ∑_{n=1}^{5} Φ_n s_n^(1) + f_non-eq. , ℓ = ∑_{n=1}^{5} Λ_n s_n^(1) + ℓ_non-eq. , ℓ^μ_⊥ = ∑_{n=1}^{5} Γ_n v_n^(1)μ + ℓ^μ_⊥non-eq. , t^μν = ∑_{n=1}^{10} Θ_n t_n^(1)μν + t^μν_non-eq.

In the vector invariant, we have defined v_5^(1)μ ≡ s_2^(1) B^μ. The subscript “non-eq” denotes non-equilibrium contributions which by definition vanish in equilibrium. The functions Φ_n(T,μ,B^2), Λ_n(T,μ,B^2), Γ_n(T,μ,B^2), Θ_n(T,μ,B^2) are non-dissipative thermodynamic transport coefficients. Explicitly,

Φ_n = π_n - ϵ_n (∂Π/∂ϵ)_n - ϕ_n (∂Π/∂n)_ϵ ,
Λ_{n≠2} = 0 , Λ_2 = (1/B) ( δ_1 - (n/(ϵ+p)) γ_1 ) ,
Γ_{n⩽4} = δ_n - (n/(ϵ+p-χ_B B^2)) γ_n , Γ_5 = -(1/B^2) ( δ_1 - (n/(ϵ+p-χ_B B^2)) γ_1 ) ,
Θ_{n⩽5} = θ_n - (1/2) ϵ_n (∂χ_B/∂ϵ)_n - (1/2) ϕ_n (∂χ_B/∂n)_ϵ , Θ_{n⩾6} = θ_n .

We see that the constitutive relations for the energy-momentum tensor and the current contain twenty-one thermodynamic transport coefficients Φ_n, Λ_2, Γ_n, Θ_n. These twenty-one coefficients are not independent, but can all be expressed in terms of only five parameters M_n of the equilibrium generating functional.

Let us now write down the constitutive relations in the thermodynamic frame that is a natural generalization of the Landau-Lifshitz frame. We will define the thermodynamic frame (primed variables) by redefinitions of T, μ, and u^α that give

ℰ' = ϵ(T',μ',B'^2) + f̅_ℰ , 𝒩' = n(T',μ',B'^2) + f̅_𝒩 , 𝒬'_α = 𝒬̅_α .

In other words, in this thermodynamic frame the coefficients ℰ, 𝒩, and 𝒬_α in the decompositions (<ref>), (<ref>) take their equilibrium values, derived from the equilibrium generating functional W_s. The other coefficients take the following form in the thermodynamic frame:

𝒫' = Π(T',μ',B'^2) + f̅_𝒫 + f_non-eq. , 𝒥'^μ = 𝒥̅^μ + ℓ^μ_⊥non-eq. + (B'^μ/B') ℓ_non-eq. , 𝒯'^μν = χ_B(T',μ',B'^2) ( B'^μ B'^ν - (1/3) Δ'^μν B'^2 ) + f̅^μν_𝒯 + t^μν_non-eq. .

§.§ Non-equilibrium contributions

With the equilibrium contributions out of the way, the next task is to find the non-equilibrium terms in the constitutive relations (<ref>). This amounts to finding one-derivative scalars, vectors (orthogonal both to B_μ and to u_μ), and transverse traceless symmetric tensors that vanish in equilibrium. Note that non-equilibrium contributions (those that vanish in equilibrium) are not the same as dissipative contributions (those that contribute to hydrodynamic entropy production). Every dissipative contribution is non-equilibrium, but not every non-equilibrium contribution is dissipative. The six independent non-equilibrium one-derivative scalars are given in Table <ref>. The scalar u^λ ∂_λ B^2 is not independent as a consequence of the electromagnetic Bianchi identity, and can be expressed as a combination of ∇·u and B^μ B^ν ∇_μ u_ν.
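A flat-space, non-relativistic analogue of this last statement is familiar from ideal MHD: with ∇·B = 0 and ∂_t B = ∇×(v×B), the comoving derivative of B^2 is 2B^iB^j∂_jv_i - 2B^2(∇·v), built precisely from the two velocity gradients above. The underlying vector identity can be checked symbolically (sympy; this is a consistency sketch in our notation, not a computation from the paper):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
v = sp.Matrix([sp.Function(f'v{i}')(x, y, z) for i in range(3)])
B = sp.Matrix([sp.Function(f'B{i}')(x, y, z) for i in range(3)])

def div(F):
    return sum(sp.diff(F[i], X[i]) for i in range(3))

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def adv(A, F):                      # (A . grad) F
    return sp.Matrix([sum(A[j]*sp.diff(F[i], X[j]) for j in range(3))
                      for i in range(3)])

# curl(v x B) = v (div B) - B (div v) + (B.grad) v - (v.grad) B;
# with div B = 0 this is the induction equation, and contracting with B gives
# (d_t + v.grad) B^2 = 2 B^i B^j d_j v_i - 2 B^2 (div v).
identity = curl(v.cross(B)) - (v*div(B) - B*div(v) + adv(B, v) - adv(v, B))
print(identity.applyfunc(sp.simplify))      # -> zero vector
```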
Three scalar equations of motion, ∇_μ J^μ = 0, u_ν ∇_μ T^μν + E_μ J^μ = 0, and B_ν ∇_μ T^μν + (E·B)(u·J) = 0, taken at zeroth order provide three relations among the scalars. We choose to eliminate s^(1)_1 non-eq., s^(1)_2 non-eq., and s^(1)_6 non-eq. and write the scalar and pseudo-scalar constitutive relations as

f_non-eq. = c_1 s^(1)_3 non-eq. + c_2 s^(1)_4 non-eq. + c_3 s^(1)_5 non-eq. ,
ℓ_non-eq. = c_4 s^(1)_3 non-eq. + c_5 s^(1)_4 non-eq. + c_6 s^(1)_5 non-eq. ,

with some undetermined transport coefficients c_n.

The independent non-equilibrium transverse one-derivative vectors are given in Table <ref>, where the shear tensor is σ^μν ≡ Δ^μα Δ^νβ (∇_α u_β + ∇_β u_α - (2/3) Δ_αβ ∇·u). We use the vector equation of motion (<ref>) projected with 𝔹^μν at zeroth order to eliminate one of the vectors, [ Namely, using the equation of motion (<ref>) with the constitutive relations for T^μν and J^μ derived from the generating functional W = ∫√(-g) p(T,μ,B^2) + O(∂). The relation among the vectors that one finds is v_2 non-eq.^(1)μ = v_1 non-eq.^(1)μ n/(ϵ+p) + O(∂^2). ] and write the vector constitutive relation as

ℓ^μ_⊥ non-eq. = c_7 𝔹^μ_ν v_1 non-eq.^(1)ν + c_8 𝔹^μ_ν v_3 non-eq.^(1)ν + c_9 ṽ_1 non-eq.^(1)μ + c_10 ṽ_3 non-eq.^(1)μ .

The tilded vectors are defined as ṽ^μ ≡ ϵ^μνρσ u_ν B_ρ v_σ / B.

There are a number of symmetric transverse traceless non-equilibrium one-derivative tensors besides the shear tensor σ^μν. One such tensor is

σ̃^μν ≡ (1/(2B)) ( ϵ^μλαβ u_λ B_α σ_β^ν + ϵ^νλαβ u_λ B_α σ_β^μ ) .

Other tensors can be formed from B^⟨μ B^ν⟩ s^(1)_n non-eq., or by symmetrizing B^μ with a transverse non-equilibrium vector. Again, we eliminate three scalars and one vector by the zeroth-order equations of motion and write the tensor constitutive relation in terms of b^μ ≡ B^μ/B as

t^μν_non-eq. = c_11 σ^μν + b^⟨μ b^ν⟩ ( c_12 s^(1)_3 non-eq. + c_13 s^(1)_4 non-eq. + c_14 s^(1)_5 non-eq. ) + c_15 b^⟨μ v_1 non-eq.^(1)ν⟩ + c_16 b^⟨μ v_3 non-eq.^(1)ν⟩ + c_17 b^⟨μ ṽ_1 non-eq.^(1)ν⟩ + c_18 b^⟨μ ṽ_3 non-eq.^(1)ν⟩ + c_19 σ̃^μν ,

with some undetermined transport coefficients c_n. Thus there are five equilibrium functions M_n(T,μ,B^2), and nineteen non-equilibrium functions c_n(T,μ,B^2) that determine one-derivative contributions to the energy-momentum tensor and the current in strong magnetic field.

If the microscopic system is parity-invariant, all thermodynamic coefficients M_n vanish except for M_4. In addition, the dynamical coefficients c_3, c_4, c_5, c_8, c_10, c_14, c_15, c_17 must vanish by parity invariance. Thus a conducting parity-invariant system in magnetic field has one thermodynamic coefficient M_4, three “electrical conductivities” c_6, c_7, and c_9, and eight “viscosities” c_1, c_2, c_11, c_12, c_13, c_16, c_18, and c_19. We will see later that the Onsager relations impose a relation between c_2, c_12, and c_13, plus four more relations among the parity-violating coefficients. This leaves eleven transport coefficients (one thermodynamic and ten non-equilibrium) for a conducting parity-invariant system in magnetic field in 3+1 dimensions. In a conformal theory, the tracelessness condition [ In a conformal theory subject to external fields g_μν and A_μ, the trace of the energy-momentum tensor receives an anomalous contribution T^μ_μ = κ F^2 + O(∂^4), where κ is a theory-dependent constant that counts the number of charged degrees of freedom, and the terms O(∂^4) are due to curvature invariants.
It was shown in ref. <cit.> that the conformal anomaly may be captured by a certain local term in the hydrostatic generating functional, which for our purposes amounts to a term in p(T,μ,B^2) proportional to κ. ] will in addition impose c_1 = c_2 = 0.

The constitutive relations may be simplified further if we note that the shear tensor can be decomposed with respect to the magnetic field as

σ^μν = σ^μν_⊥ + ( b^μ Σ^ν + b^ν Σ^μ ) + (1/2) b^⟨μ b^ν⟩ ( 3 S_4 - S_3 ) .

Here σ^μν_⊥ ≡ (1/2)(𝔹^μα 𝔹^νβ + 𝔹^να 𝔹^μβ - 𝔹^μν 𝔹^αβ) σ_αβ is traceless, Σ^μ ≡ 𝔹^μλ σ_λρ b^ρ, and both are orthogonal to the magnetic field B_μ. The scalars are S_3 ≡ ∇·u and S_4 ≡ b^μ b^ν ∇_μ u_ν. The tensor (<ref>) then becomes

σ̃^μν = σ̃^μν_⊥ + (1/2) ( b^μ Σ̃^ν + b^ν Σ̃^μ ) ,

where σ̃^μν_⊥ is transverse to both u_μ and B_μ, symmetric, and traceless.

For completeness, let us summarize the constitutive relations for a parity-invariant theory in the thermodynamic frame. Defining M_Ω ≡ M_4, the energy-momentum tensor is given by eq. (<ref>) with the following coefficients:

ℰ = -p + T p_,T + μ p_,μ + (T M_Ω,T + μ M_Ω,μ - 2 M_Ω) B·Ω ,
𝒫 = p - (4/3) p_,B^2 B^2 - (1/3) (M_Ω + 4 M_Ω,B^2 B^2) B·Ω - ζ_1 ∇·u - ζ_2 b^μ b^ν ∇_μ u_ν ,
𝒬^μ = -M_Ω ϵ^μνρσ u_ν ∂_σ B_ρ + (2 M_Ω - T M_Ω,T - μ M_Ω,μ) ϵ^μνρσ u_ν B_ρ ∂_σ T / T - M_Ω,B^2 ϵ^μνρσ u_ν B_ρ ∂_σ B^2 + (-2 p_,B^2 + M_Ω,μ - 2 M_Ω,B^2 B·Ω) ϵ^μνρσ u_ν E_ρ B_σ + M_Ω ϵ^μνρσ Ω_ν E_ρ u_σ ,
𝒯^μν = 2 p_,B^2 ( B^μ B^ν - (1/3) Δ^μν B^2 ) + M_Ω,B^2 B^⟨μ B^ν⟩ B·Ω + M_Ω B^⟨μ Ω^ν⟩ - η_⊥ σ^μν_⊥ - η_∥ (b^μ Σ^ν + b^ν Σ^μ) - b^⟨μ b^ν⟩ (η_1 ∇·u + η_2 b^α b^β ∇_α u_β) - η̃_⊥ σ̃^μν_⊥ - η̃_∥ (b^μ Σ̃^ν + b^ν Σ̃^μ) ,

and the current is given by eq. (<ref>) with the following coefficients:

𝒩 = p_,μ + M_Ω,μ B·Ω - m·Ω ,
𝒥^μ = ϵ^μνρσ u_ν ∇_ρ m_σ + ϵ^μνρσ u_ν a_ρ m_σ + ( σ_⊥ 𝔹^μν + σ_∥ B^μ B^ν / B^2 ) V_ν + σ̃ Ṽ^μ .

The current is written in terms of the magnetic polarization vector

m^μ = ( 2 p_,B^2 + 2 M_Ω,B^2 B·Ω ) B^μ + M_Ω Ω^μ ,

while the electric polarization vector vanishes at leading order in a parity-invariant system. The comma subscript denotes the derivative with respect to the argument that follows, so that e.g. p_,B^2 = ∂p/∂B^2. Note that we are keeping O(∂^2) thermodynamic terms in the constitutive relations (coming from the variation of M_4 s^(1)_4) that are needed to ensure that the conservation laws (<ref>) are satisfied identically for time-independent background fields.

In writing down the constitutive relations (<ref>), (<ref>), we have relabeled the non-equilibrium transport coefficients as ζ_1 ≡ -c_1, ζ_2 ≡ -c_2, σ_∥ ≡ c_6, σ_⊥ ≡ c_7, σ̃ ≡ c_9, η_⊥ ≡ -c_11, η_∥ ≡ -c_11 - c_16, η_1 ≡ -c_12 + 1/2 c_11 + 2/3 c_16, η_2 ≡ -c_13 - 3/2 c_11 - 2 c_16, η̃_∥ ≡ -c_18 - 1/2 c_19, η̃_⊥ ≡ -c_19, and defined V^μ ≡ E^μ - T Δ^μν ∂_ν(μ/T). The coefficients σ_⊥, σ_∥ are the transverse and longitudinal conductivities, and η_⊥, η_∥ are the transverse and longitudinal shear viscosities. The coefficients ζ_1, ζ_2, η_1 and η_2 may all be called “bulk viscosities”, of which only three are independent due to the Onsager relation. The coefficients η̃_⊥, η̃_∥ are the two Hall viscosities, and σ̃ is the Hall conductivity. [ The actual Hall conductivity, measured as a response to external electric field, must be obtained after the hydrodynamic equations with the constitutive relations (<ref>), (<ref>) have been solved. Doing so in a state with constant charge density n_0 and magnetic field B_0 gives the Hall conductivity n_0/B_0, as expected from elementary considerations of boosting the state in the plane transverse to B_0. See eq. (<ref>) below.
]When the external electromagnetic field vanishes, the system becomes isotropic, and we expect to recover the constitutive relations of the standard isotropic hydrodynamics, with shear viscosity η, bulk viscosity ζ, and electrical conductivity σ. Thus as B→ 0 we expect η_⊥ = η_∥ = -2η_1 = 2/3η_2 = η, η̃_⊥ = η̃_∥ = 0, ζ_1 = ζ, ζ_2 = 0, σ_⊥ = σ_∥ = σ, σ̃=0.§.§ Eigenmodes As a simple application of the hydrodynamic equations (<ref>) together with the constitutive relations (<ref>), (<ref>), one can study the eigenmodes of small oscillations about the thermal equilibrium state. We set the external sources to zero, and linearize the hydrodynamic equations near the flat-space equilibrium state with constant T=T_0, μ=μ_0, u^α=(1, 0), and B^α=(0,0,0,B_0). Taking the fluctuating hydrodynamic variables proportional to exp(-iω t+ i k· x), the source-free system admits five eigenmodes, two gapped (ω( k→0)≠0), and three gapless (ω( k→0)=0). The frequencies of the gapped eigenmodes areω = ±B_0 n_0/w_0 - i B_0^2/w_0( σ_⊥± iσ̃) -i D_c k^2 ,where w_0 ≡ϵ_0 + p_0 is the equilibrium enthalpy density, and we have taken B_0^2 ≪ w_0, _,μ B_0^2 ≪ w_0 in the hydrodynamic regime B_0≪ T_0^2. As the imaginary part of the eigenfrequency must be negative for stability, this implies σ_⊥ >0. The mode has a circular polarization (at k=0), with δ u_x and δ u_y oscillating with a π/2 phase difference. The analogous mode in 2+1 dimensional hydrodynamics was christened the hydrodynamic cyclotron mode in ref. <cit.>, which also explored its implications for transport near two-dimensional quantum critical points. For momenta k∥ B_0, the three gapless eigenmodes are the two sound waves, and one diffusive mode. The eigenfrequencies in the small momentum limit areω = ± k v_s - iΓ_s,∥/2 k^2 ,ω = -iD_∥ k^2 , where v_s is the speed of sound.As in ref. <cit.>, we can write the coefficients in terms of the elements of the susceptibility matrix in the grand canonical ensemble. The non-zero elements of the 3× 3 susceptibility matrix are χ_11 = T (∂ϵ/∂ T)_μ/T, χ_13 = χ_31 = (∂ϵ /∂μ)_T, χ_33 = (∂ n/∂μ)_T, and χ_22=w_0, with derivatives evaluated at constant B^2 in equilibrium.The longitudinal diffusion constant isD_∥ = σ_∥w_0^2/n_0^2 χ_11 + w_0^2 χ_33 - 2 n_0 w_0 χ_13 .The positivity of the diffusion constant implies σ_∥ >0. The speed of sound squared expressed in terms of the elements of the susceptibility matrix is given byv_s^2 = n_0^2 χ_11 + w_0^2 χ_33 -2n_0 w_0 χ_13/(χ) ,and the damping coefficient isΓ_s,∥ = 1/w_0( 43 (η_1 +η_2) + ζ_1 +ζ_2 ) +σ_∥w_0/(χ)(n_0 χ_11 - w_0 χ_13)^2/n_0^2 χ_11 + w_0^2 χ_33 - 2n_0 w_0 χ_13 .The expression for v_s and D_∥ in terms of the thermodynamic functions formally look the same as in hydrodynamics without external O(1) magnetic fields <cit.>. All of v_s, Γ_s,∥, and D_∥ depend on B_0 through p=p(T,μ,B^2) and the transport coefficients.For momenta k⊥ B_0, the three gapless eigenmodes include two diffusive modes, and one “subdiffusive” mode with a quartic dispersion relation,ω = -i D_⊥ k^2 ,ω = -i η_∥ k^2/w_0 ,ω = -i η_⊥ k^4/B_0^2 χ_33 . The transverse diffusion constant is determined by the transverse resistivity. We define the 2×2 conductivity matrix in the plane transverse to B_0 as σ_ab≡σ_⊥δ_ab + (n_0/| B_0| + σ̃)ϵ_ab, and the corresponding resistivity matrix as ρ_ab≡ (σ^-1)_ab = ρ_⊥δ_ab + ρ̃_⊥ ϵ_ab, which defines ρ_⊥ and ρ̃_⊥. The transverse diffusion constant is thenD_⊥ = w_0^3 χ_33/(χ) B_0^2 ρ_⊥ ,again using _,μ B_0^2 ≪ w_0. 
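The inversion from the conductivity matrix σ_ab to (ρ_⊥, ρ̃_⊥) defined above is elementary but easy to get sign-wrong; a minimal symbolic check (sympy, with the symbols as defined in the text, taking B_0 > 0 so that |B_0| = B_0):

```python
import sympy as sp

sig_p, sig_t, n0, B0 = sp.symbols('sigma_perp sigma_tilde n_0 B_0', positive=True)

eps2 = sp.Matrix([[0, 1], [-1, 0]])              # epsilon_{ab} in the transverse plane
hall = n0/B0 + sig_t                             # Hall part of sigma_ab
sigma = sig_p*sp.eye(2) + hall*eps2              # sigma_ab = sigma_perp d_ab + hall e_ab
rho = sigma.inv().applyfunc(sp.simplify)         # rho_ab = rho_perp d_ab + rho_tilde e_ab

print(sp.simplify(rho[0, 0] - sig_p/(sig_p**2 + hall**2)))    # rho_perp       -> 0
print(sp.simplify(rho[0, 1] + hall/(sig_p**2 + hall**2)))     # rho_tilde_perp -> 0
```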
Stability of the equilibrium state now implies η_⊥ >0, η_∥ >0.For modes propagating at an angle θ with respect to B_0, the gapless modes include sound waves (unless θ=π/2), and a diffusive mode. For a fixed value of θ, the small-momentum eigenfrequencies are ω = ± k v_s cosθ -i/2Γ_s(θ) k^2, and ω=-iD(θ)k^2, whereD(θ) = D_∥cos^2θ + n_0^2/v_s^2 w_0 χ_33 D_⊥sin^2θ ,Γ_s(θ) = Γ_s,∥cos^2θ + ( η_∥/w_0 + (n_0 χ_13 - w_0 χ_33)^2/χ_33v_s^2 (χ) D_⊥) sin^2θ .The coefficient D_c in the cyclotron mode eigenfrequency (<ref>) at small B_0 is D_c =( ±i v_s^2 w_0/2 n_0 B_0 + (n_0^2 χ_11- w_0^2 χ_33)w_0/2 n_0^2 (χ)σ + 3ζ+7η/6w_0) sin^2θ + η/w_0cos^2θ + O(B_0) .Note that the limits θ→π/2 and k→0 in the eigenfrequencies do not commute.§.§ Entropy production The simple flat-space eigenfrequency analysis in the previous subsection imposes certain constraints on non-equilibrium transport coefficients. In order to find more general constraints, one method is to impose a local version of the second law of thermodynamics: the existence of a local entropy current with positive semi-definite divergence for every non-equilibrium configuration consistent with the hydrodynamic equations. We will not attempt to construct the most general entropy current from scratch. Rather, we will use the result of <cit.> saying that the constraints on transport coefficients derived from the entropy current are the same as those derived from the equilibrium generating functional, plus the inequality constraints on dissipative transport coefficients. We take the entropy current to beS^μ = S^μ_ canon + S^μ_ eq. ,where the canonical part of the entropy current isS^μ_ canon = 1/T(p u^μ - T^μνu_ν - μ J^μ) ,and S^μ_ eq. is found from the equilibrium partition function, as described in <cit.>. The constraints on transport coefficients follow by demanding ∇_μ S^μ⩾0. Using conservation laws (<ref>), the divergence of the canonical entropy current is∇_μ S^μ_ canon = ∇_μ(p/Tu^μ) - T^μν∇_μu_ν/T + J^μ(E_μ/T-∂_μμ/T).The S^μ_ eq. part of the entropy current is explicitly built to cancel out the part of ∇_μ S^μ_ canon that arises from the equilibrium terms in the constitutive relations, i.e. the terms in T^μν and J^μ derived from the equilibrium generating functional. In fact, ref. <cit.> has already found S^μ_ eq. in the case when the generating functional contains a contribution proportional to B·Ω. We thus focus on non-equilibrium terms, and write the thermodynamic frame constitutive relations (<ref>) as T^μν = T^μν_ eq. + T^μν_non-eq. and J^μ = J^μ_ eq. + J^μ_non-eq.. The divergence of the entropy current is then∇_μ S^μ= 1/TJ^μ_non-eq.( E_μ-T∂_μμ/T)- T^μν_non-eq.∇_μu_ν/T = 1/T( ℓ^μ_⊥non-eq. + B^μ/Bℓ_non-eq.) V_μ -1/T f_non-eq.∇· u- 1/2T t^μν_non-eq.σ_μν .Using the constitutive relations (<ref>), (<ref>), this leads toT ∇_μ S^μ= σ_∥(B· V)^2/B^2+ σ_⊥ (𝔹^μν V_ν)^2+ 12 η_⊥ (σ^μν_⊥)^2 + η_∥Σ^2+ (ζ_1 - 23 η_1) S_3^2 + 2η_2 S_4^2 + (2η_1 + ζ_2 -23 η_2) S_3 S_4 ,where again S_3≡∇·u and S_4 ≡ b^μ b^ν∇_μu_ν. Demanding ∇_μS^μ⩾0 now givesσ_∥⩾0 ,σ_⊥⩾0 ,η_⊥⩾0 ,η_∥⩾0 ,together with the condition that the quadratic form made out of S_3, S_4 in the second line of eq. (<ref>) is non-negative, which impliesη_2 ⩾ 0 ,ζ_1 -23 η_1 ⩾ 0 ,2η_2 (ζ_1 - 23 η_1) ⩾14 (2η_1 + ζ_2 -23 η_2)^2 . The coefficients η̃_⊥, η̃_∥, and σ̃ do not contribute to entropy production, and are not constrained by the above analysis. Thus, η̃_⊥, η̃_∥, and σ̃ are non-equilibrium non-dissipative coefficients. §.§ Kubo formulas When the microscopic system is time-reversal invariant (i.e. 
the only source of time-reversal breaking is due to the external magnetic field), transport coefficients can be further constrained by the Onsager relations. The retarded two-point functions of operators O_a and O_b in a time-reversal invariant theory in equilibrium obeyG_ab(ω,𝐤,B) = ϵ_a ϵ_bG_ba(ω,-𝐤,-B) ,where ϵ_a and ϵ_b are time-reversal eigenvalues of the operators O_a and O_b. We take our operators to be various components of T^μν and J^μ, and evaluate the retarded two-point functions by varying one-point functions in the presence of the external source with respect to the source. Namely, we solve the hydrodynamic equations in the presence of fluctuating external sources δ A, δ g (proportional to exp(-iω t+ i k· x)) to find δ T[A,g], δμ[A,g], δ u^α[A,g], and then vary the resulting hydrodynamic expressions T^μν[A,g] and J^μ[A,g] with respect to g_αβ, A_α to find the retarded functions. Specifically,G_T^μν T^αβ = 2δ/δ g_αβ( √(-g)T^μν_on-shell[A,g] ) , G_J^μ T^αβ = 2δ/δ g_αβ( √(-g)J^μ_on-shell[A,g] ) , G_T^μν J^α = δ/δ A_α T^μν_on-shell[A,g] , G_J^μ J^α = δ/δ A_αJ^μ_on-shell[A,g], where the subscript “on-shell” signifies that the corresponding hydrodynamic T^μν[A,g] and J^μ[A,g] are evaluated on the solutions to (<ref>), and the sources δ A, δ g are set to zero after the variation is taken. The expressions (<ref>) are to be understood as δ(√(-g)T^μν_on-shell)= 12 G_T^μν T^αβ(ω,k) δ g_αβ (ω,k) ,etc. This provides a direct method to evaluate the retarded functions, and allows both to check the Onsager relations and to derive Kubo formulas for transport coefficients. [ Taken at face value, hydrodynamic correlation functions violate Onsager relations at non-zero ω and non-zero k. However these violations do not affect the Kubo formulas and disappear in the limit B≪ T^2, which corresponds to the validity regime of hydrodynamics. ] The constraint on transport coefficients we find by demanding that eq. (<ref>) holds is [ For parity-violating coefficients, we find c_3 = 2/3 (c_14 + c_15) - c_4, c_5 = -2(c_14 + c_15), c_8 = -c_15, c_10 = -c_17. ]3 ζ_2 - 6 η_1 - 2 η_2 = 0 .For the rest of the paper, we will assume that (<ref>) holds, which leaves us with ten non-equilibrium transport coefficients for a parity-invariant microscopic system. Using eq. (<ref>) to eliminate ζ_2, the inequality constraint in eq. (<ref>) turns into2η_2 (ζ_1 - 23 η_1) ⩾4η_1^2 .We next list the expressions for transport coefficients in terms of retarded functions evaluated in flat-space equilibrium with external magnetic field in the z direction, as in sec. <ref>. In the limit k→0 first, ω→ 0 second we find the following Kubo formulas. The two-point function of the longitudinal current J^z gives the longitudinal conductivity,1ω ImG_J^z J^z(ω, k=0) =σ_∥ ,while the two-point functions of the transverse currents J^x, J^y give the transverse resistivities,1ω ImG_J^x J^x(ω, k=0) = ω^2 ρ_⊥w_0^2/B_0^4 ,1ω ImG_J^x J^y(ω, k=0) = n_0/B_0 - ω^2 ρ̃_⊥w_0^2/B_0^4sign(B_0) , where the resistivities ρ_⊥ and ρ̃_⊥ were defined below eq. (<ref>). Alternatively, the resistivities can be found from correlation functions of momentum density,1ω ImG_T_0x T_0x (ω,k=0) = ρ_⊥w_0^2/B_0^2 ,1ω ImG_T_0x T_0y (ω,k=0) = -ρ̃_⊥ sign(B_0) w_0^2/B_0^2 , assuming B_0^2≪ w_0. 
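The reduction quoted above, eliminating ζ_2 from the entropy-current constraint via the Onsager relation, is pure algebra and quick to confirm symbolically (sympy):

```python
import sympy as sp

zeta1, zeta2, eta1, eta2 = sp.symbols('zeta_1 zeta_2 eta_1 eta_2', real=True)

# Entropy-current constraint before using Onsager:
#   2*eta2*(zeta1 - (2/3)*eta1) >= (1/4)*(2*eta1 + zeta2 - (2/3)*eta2)**2
rhs = sp.Rational(1, 4)*(2*eta1 + zeta2 - sp.Rational(2, 3)*eta2)**2

# Onsager: 3*zeta2 - 6*eta1 - 2*eta2 = 0  =>  zeta2 = 2*eta1 + (2/3)*eta2
print(sp.expand(rhs.subs(zeta2, 2*eta1 + sp.Rational(2, 3)*eta2)))   # -> 4*eta_1**2
```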
The shear viscosities are given by 1ω ImG_T^xy T^xy(ω, k=0) =η_⊥ ,1ω ImG_T^xy T^xx(ω, k=0) =η̃_⊥sign(B_0),1ω ImG_T^xz T^xz(ω, k=0) =η_∥ ,1ω ImG_T^yz T^xz(ω, k=0) =η̃_∥sign(B_0) ,while the “bulk” viscosities may be expressed as1ωδ_ij ImG_T^ij T^xx(ω, k=0) = 3 ζ_1 ,13ωδ_ijδ_klImG_T^ij T^kl(ω, k=0) = 3ζ_1 + ζ_2,1ω ImG_O_1 O_1 = ζ_1 - 23 η_1 ,1ω ImG_O_2 O_2 = 2η_2 , where O_1 = 1/2(T^xx+T^yy), and O_2 = T^zz - 1/2(T^xx+T^yy). Correlation functions at non-zero momentum may be obtained in a straightforward way from the variational procedure described earlier.§.§ Inequality constraints on transport coefficientsFinally, let us show that the inequality constraints on transport coefficientsderived from demanding that the entropy production is non-negative can also be obtained from hydrodynamic correlation functions, without using the entropy current. The argument is based on the fact that the imaginary part of the retarded function G_OO(ω, k) must be positive for any Hermitean operator O and ω>0,ImG_OO(ω,k) ⩾ 0 .Now consider the operator O=a O_1 + b O_2, with real coefficients a and b, and Hermitean operators O_1, O_2. The inequality (<ref>) implies Im[ a^2 G_O_1 O_1 + ab G_O_1 O_2 + ab G_O_2 O_1 + b^2 G_O_2 O_2] ⩾ 0 ,for ω⩾0. This quadratic form in a, b must be non-negative for all a,b which implies Im G_O_1 O_1⩾ 0, Im G_O_2 O_2⩾ 0 together with(Im G_O_1 O_1) ( Im G_O_2 O_2) ⩾14 (Im G_O_1 O_2 +Im G_O_2 O_1)^2 .The two terms in the right-hand side of (<ref>) can be related by the Onsager relation (<ref>). As an example, take O_1 = 1/2(T^xx+T^yy), and O_2 = T^zz - 1/2(T^xx+T^yy). Evaluating the correlation functions at k=0 and ω→0, the inequalities (<ref>), (<ref>) immediately imply the entropy current constraint (<ref>). The constraints (<ref>), (<ref>) follow directly from the Kubo formulas given in the previous subsection. § HYDRODYNAMICS WITH DYNAMICAL ELECTROMAGNETIC FIELDS§.§ Dynamical gauge fieldWe now move on to systems where the gauge field A_μ is dynamical rather than external, which will lead us to MHD. In external metric g, the (microscopic) generating functional isZ[g] = ∫ DA e^i S[g,A] ,where S is the action. Let us couple the gauge field to an external conserved current J^μ_ ext. We do this so that the new generating functional isZ[g,J_ ext] = ∫ DADφe^iS[g,A]+ i∫√(-g)(A_μ - ∂_μφ) J^μ_ ext ,and W≡-iln Z. The new field φ is a Lagrange multiplier which shifts under gauge transformations and ensures that the external current is conserved. We define the energy-momentum tensor and the current by the variation of the action: δ_g S[g,A] = 12 ∫√(-g)T^μνδ g_μν ,δ_A S[g,A] = ∫√(-g)J^μδ A_μ .Diffeomorphism invariance of W[g,J_ ext] implies ∇_μ⟨ T^μν⟩ =⟨ F^λν⟩ J_ ext λ . In what follows, we will omit the angular brackets, writing the (non)-conservation of the energy-momentum tensor simply as ∇_μT^μν = F^λν J_ ext λ .In the standard hydrodynamic approach, T^μν and F_μν will then be taken as dynamical variables in the classical hydrodynamic theory.Note that the sign in the right-hand side of eq. (<ref>) is opposite compared to eq. (<ref>), owing to the fact that the current, rather than the gauge field, is now external.In order to proceed with hydrodynamics, we need to specify a) the constitutive relations for the energy-momentum tensor to be used in eq. (<ref>), and b) the equations which determine the evolution of the dynamical gauge field F_μν. 
§.§ Maxwell's equations in matterClassical equations specifying the dynamics of electric and magnetic fields are usually referred to as Maxwell's equations in matter. While we don't have a recipe of deriving them in a most general form in a model-independent way, a useful starting point is provided by matter in thermal equilibrium. Maxwell's equations for equilibrium matter may be then amended to include the non-equilibrium and dissipative effects, such as the electrical conductivity. To this end, as advocated in <cit.>, we take the static generating functional W_s[g,A] to be the effective action for gauge fields in equilibrium,S_ eff[g,A] = ∫d^4x √(-g)F ,where F is a local gauge-invariant function of the sources g_μν and A_μ, and we have ignored the surface terms. To leading order in the derivative expansion, F is simply the pressure. We can always write F=-14 F_μνF^μν +F_, where the vacuum action is -14 F_μν F^μν = 12(E^2 - B^2), and F_ is the “matter" contribution. The isolation of the vacuum term is arbitrary, but it will allow us to make contact with the textbook form of Maxwell's equations in matter. Our (equilibrium) effective theory is then given by the partition function (<ref>), with S replaced by S_ eff, and the total action isS_ tot[A, φ] = W_s[g,A]+ ∫√(-g)(A_μ-∂_μφ) J^μ_ ext .The current derived by varying the total action with respect to A_μ is J^μ_ tot = J^μ + J^μ_ ext, orJ^μ_ tot = -∇_ν ( F^μν - M_^μν) + n u^μ + J^μ_ ext ,where the polarization tensor M_^μν is defined byδ_F ∫ d^4x √(-g)F_ =12 ∫ d^4x √(-g)M_^μν δ F_μν , and the density of “free” charges is n ≡∂ F_/∂μ. The equation of motion for the gauge field follows from δ_A S_ tot=0, or equivalently J^μ_ tot=0, and becomes∇_ν H^μν = n u^μ + J^μ_ ext ,where H^μν≡ F^μν - M_^μν. This is the desired equation that must be satisfied by electromagnetic fields in equilibrium. Following the standard hydrodynamic lore and assuming that eq. (<ref>) also holds for small departures away from equilibrium, one obtains hydrodynamics of “perfect fluids”, now with dynamical electric and magnetic fields.For these perfect fluids, equations (<ref>) have to be solved together with the stress tensor (non)-conservation (<ref>), where T^μν is derived from the effective action (<ref>).In fact, eq. (<ref>) is nothing but the standard Maxwell's equations in matter. The polarization tensor M_^μν defines electric and magnetic polarization vectors P^μ and M^μ through the decompositionM_^μν = P^μ u^ν - P^ν u^μ - ϵ^μνρσ u_ρ M_σ .The antisymmetric tensor H_μν can be decomposed in the same way as the field strength F_μν,H_μν = u_μ D_ν - u_ν D_μ - ϵ_μνρσ u^ρ H^σ ,which defines D_μ≡ H_μνu^ν and H^μ≡12 ϵ^μναβ u_ν H_αβ, so thatD^μ = E^μ + P^μ , H^μ = B^μ - M^μ .It is then clear that eq. (<ref>) is the covariant form of Maxwell's equations in matter: the currents of `free charges' are in the right-hand side, while the effects of polarization appear in the left-hand side through the substitution E^μ→ D^μ, B^μ→ H^μ in the vacuum Maxwell's equations. Action (<ref>) is the action for Maxwell's equations in matter.As an example, consider the following “matter” contribution: F_ = p_(T,μ,E^2,B^2,E·B), where p_ is the “matter” pressure. The polarization tensor is then M^μν_ = 2∂ p_/∂ F_μν, and the polarization vectors areP^μ =E^μ +B^μ , M^μ =E^μ +B^μ , where the susceptibilities ≡ 2∂ p_/∂ E^2, ≡∂ p_/∂(E·B), and ≡ 2∂ p_/∂ B^2 all depend on T, μ, E^2, B^2, and E·B. 
This gives the standard constitutive relations, expressing D and B in terms of E and H,D^μ = ε_ E^μ + β_ H^μ , B^μ = β_ E^μ + μ_ H^μ ,where ε_≡ 1+ + ^2/(1-) is the electric permittivity, μ_≡ 1/(1-) is the magnetic permeability, and β_≡/(1-). We will also use ε_≡ 1+, which coincides with the electric permittivity if =0.§.§ Hydrodynamics We take the MHD equations to be as follows:∇_μT^μν = F^λν J_ ext λ ,J^μ + J^μ_ ext = 0 ,ϵ^μναβ∇_ν F_αβ = 0 . The last equation is the electromagnetic “Bianchi identity”, expressing the fact that the electric and magnetic fields are derived from the vector potential A_μ. The second equation (Maxwell's equations in matter) can be rewritten as ∇_ν (F^μν- M_^μν) = J^μ_ free + J^μ_ ext which defines J^μ_ free, the current of “free charges”. While eqs. (<ref>) and (<ref>) are true microscopically, the Maxwell's equations in matter (<ref>) are written based on the above intuition of the equilibrium effective action. Note that ∇_μ J^μ_ free =0 is a consequence of (<ref>), and is not an independent equation. The hydrodynamic variables are T, u^α, μ, as well as the electric and magnetic fields which satisfy u_α E^α=0, u_α B^α=0. Hydrodynamic equations (<ref>) must be supplemented by constitutive relations, which express T^μν, J^μ (or J^μ_ free and M^μν_) in terms of the hydrodynamic variables. These constitutive relations will contain equilibrium contributions coming from the equilibrium effective action (<ref>). In addition, the constitutive relations will contain non-equilibrium contributions, such as the electrical conductivity and the shear viscosity. Taking the divergence of eq. (<ref>) and using J^μ_ ext = -J^μ gives∇_μT^μν = F^νλ J_λ , ∇_μ J^μ = 0 ,which shows that the variables T, u^α, and μ satisfy exactly the same equations (<ref>) as they did in the theory with a non-dynamical, external A_μ. Thus in order to “solve” the MHD theory (<ref>) one can i) solve the hydrodynamic equations with an external gauge field (<ref>) to find T[A,g], u^α[A,g], μ[A,g], and ii) solve J^μ [T[A,g], u^α[A,g], μ[A,g],A,g] + J^μ_ ext = 0 in order to find A_μ[J_ ext,g], and iii) use the constitutive relations to find the energy-momentum tensor T^μν[J_ ext,g] = T^μν[T[A[J_ ext,g],g], u^α[A[J_ ext,g],g], μ[A[J_ ext,g],g], A[J_ ext,g], g ]. MHD correlation functions may then be obtained through variations with respect to the external sources J^λ_ ext and g_μν.An equivalent way to understand the classical effective theory (<ref>) is to promote the real-time generating functional to the non-equilibrium effective action <cit.>, i.e. to writeS_ tot[A, φ] = W_r[A,g]+ ∫√(-g)(A_μ-∂_μφ) J^μ_ ext ,where W_r[A,g] is low-energy, real-time generating functional for retarded correlation functions in the theory with a non-dynamical A_μ. The functional W_r[g,A] is non-local due to the gapless low-energy degrees of freedom (sound waves etc). However, for the purposes of MHD we do not need the actual generating functional, but only the equations of motion for the effective action S_ tot. These equations of motion are J^μ[A,g] + J^μ_ ext = 0, where J^μ[A,g] is the on-shell current in the theory with a non-dynamical A_μ. One can then solve the theory as described in the previous paragraph.We will thus adopt the simplest hydrodynamic effective theory (<ref>) where the constitutive relations for T^μν and J^μ are the same as in the case of external non-dynamical electromagnetic fields. 
Under this “mean-field” assumption, transport coefficients which are naively independent would still be related by the conditions originating from the static generating functional. Further, any solution T[A,g], u^α[A,g], μ[A,g] to the MHD equations is also a solution to the hydrodynamic equations (<ref>) in the theory with a non-dynamical A_μ. Thus the entropy current with a non-negative divergence on the solutions to (<ref>) will also have non-negative divergence when evaluated on the solutions to the MHD equations (<ref>). This means that the entropy current in MHD may be taken the same as the entropy current in the theory with a non-dynamical gauge field <cit.>, and we do not need to perform a separate entropy current analysis beyond what was already done in sec. <ref>.To sum up, with the MHD scaling B∼ O(1), E∼ O(∂), the equilibrium effective action is given by eq. (<ref>),S_ eff = ∫√(-g)( -12 B^2 + p_(T,μ, B^2) + ∑_n=1^5 M_n(T,μ,B^2) s_n^(1) + O(∂^2) ) .For a parity-invariant theory, only the M_4 term in the sum contributes. The constitutive relations for the energy-momentum tensor and the current were already found in the previous section, where now we have p(T,μ,B^2) = -12 B^2 + p_(T,μ, B^2). The energy-momentum tensor appearing in eq. (<ref>) and the current J^μ satisfying J^μ + J^μ_ ext = 0 take the form (<ref>), (<ref>), and the constitutive relations for a parity-invariant theory in the thermodynamic frame are given by Eqs. (<ref>), (<ref>).We will find it useful to modify the above effective theory by giving dynamics to the electric field. To do so, we add an O(∂^2) term 1/2ε_ E^2 to the effective action (<ref>), where ε_ is the electric permittivity which we take constant. This term is one of the many O(∂^2) terms, and we add it as a “ultraviolet regulator” which improves the high-frequency behaviour of the theory. When studying the near-equilibrium eigenmodes of the system, this term will affect the frequency gaps, but not the leading-order dispersion relations of the gapless modes. With this new term, the following contributions have to be added to the constitutive relations (<ref>), (<ref>):T^μν_ El. = ε_(12 E^2 g^μν + E^2 u^μ u^ν - E^μ E^ν) , J^μ_ El. = -ε_∇_λ( E^λ u^μ - E^μ u^λ) .The current J^μ_ El. contains the kinetic term for the electric field in Maxwell's equations, as well as the “bound” current due to electric polarization. §.§ Eigenmodes As a simple application of the above MHD theory, one can study the eigenmodes of small oscillations about the thermal equilibrium state. As we did earlier, we set the external sources to zero, and linearize the hydrodynamic equations near the flat-space equilibrium state with constant T=T_0, μ=μ_0, u^α=(1, 0), and B^α=(0,0,0,B_0). For simplicity, we will take the magnetic permeability μ_ constant, though it is straightforward to find how the eigenfrequencies below are modified for non-constant μ_ = μ_(T,μ,B^2). §.§.§ Neutral stateWe begin with the neutral state at μ_0 =0 and n_0 = 0. The system admits nine eigenmodes, three gapped, and six gapless.Let us start with the familiar case of vanishing magnetic field in equilibrium. The system is then isotropic, with shear viscosity η, bulk viscosity ζ, and conductivity σ≡σ_⊥ = σ_∥. The fluctuations of δ T, δ u_i decouple from the fluctuations of δμ, δ E_i, δ B_i. The eigenmodes include two transverse shear modes with eigenfrequency ω=-iη k^2/(ϵ_0+p_0), and longitudinal sound waves with v_s^2=∂ p/∂ϵ and Γ_s=(43η+ζ)/(ϵ_0+p_0). 
In addition, there is a longitudinal charge diffusion mode which becomes gapped because of non-zero electrical conductivity,ω = -iσ/ε_ - i(σ/∂ n/∂μ) k^2 .Thus, charge fluctuations in a neutral conducting medium do not diffuse. Instead, what diffuses are the transverse magnetic and electric fields: there are two sets of transverse conductor modes whose eigenfrequencies are determined byω( ω + iσ/ε_) = k^2/ε_μ_ .Recall that ε_ is the electric permittivity and μ_ = 1/(1-2∂ p_/∂ B^2) is the magnetic permeability, so √(ε_μ_) is the elementary index of refraction. The conductor modes have the following frequencies at small momenta:ω = -iσ/ε_ + ik^2/σμ_ , ω = -ik^2/σμ_ .The gapless conductor mode is responsible for the skin effect in metals.We now turn on non-zero magnetic field and consider modes propagating at an angle θ with respect to B_0. Thermal and mechanical fluctuations now no longer decouple from electromagnetic fluctuations.There is one longitudinal gapped mode, and two transverse gapped modes,ω = -iσ_∥/ε_+ O(k^2) ,ω = -iσ_⊥±σ̃/ε_ +O(k^2) .In writing down the transverse eigenfrequencies, we have assumed B_0^2≪ϵ_0+p_0. All six gapless modes have linear dispersion relation at small momenta. Two of the gapless modes are the Alfvén waves,ω = ± v_ A k cosθ - iΓ_ A/2 k^2 ,whose speed and damping are determined byv_ A^2 = B_0^2/μ_ (ϵ_0 + p_0) + B_0^2 ,Γ_ A = 1/ϵ_0 +p_0( η_⊥sin^2θ +η_∥cos^2θ) +1/μ_( /ρ_⊥cos^2θ + ρ_∥sin^2θ), where ρ_∥≡ 1/σ_∥, and ρ_⊥ was defined below eq. (<ref>). In writing down the damping coefficient, we have taken B_0^2≪ϵ_0 + p_0, the corrections of order B_0^2/(ϵ_0 + p_0) are straightforward to write down. The other four gapless modes are the two branches of magnetosonic waves,ω = ± v_ ms k - iΓ_ ms/2 k^2 ,whose speed is determined by the quadratic equation(v_ ms^2)^2 - v_ ms^2 (v_A^2 + v_s^2 - v_A^2 v_s^2 sin^2θ) + v_A^2 v_s^2 cos^2θ = 0 ,where v_s^2 = (s/T)/(∂ s/∂ T) = ∂ p/∂ϵ is the speed of sound at n_0=0. The two solutions of (<ref>) correspond to the sound-type (or “fast”) branch, and the Alfvén-type (or “slow”) branch. At θ=0, the slow branch turns into a second set of Alfvén waves, while the fast branch becomes the sound wave. See e.g. ref. <cit.> for an early derivation of v_ A and v_ ms in relativistic MHD. The damping coefficients of the magnetosonic waves are straightforward to evaluate, but are quite lengthy to write down in general, and we will only present them in the limits of small B_0 and small θ. As B_0→0, the damping coefficients becomeslow:Γ_ ms = η/ϵ_0+p_0 + 1/σμ_ , fast:Γ_ ms = 1/ϵ_0+p_0( 43 η + ζ) .On the other hand, as θ→0, the damping coefficients becomeslow:Γ_ ms = η_∥/ϵ_0+p_0 + ρ_⊥/μ_ , fast:Γ_ ms = 1/ϵ_0+p_0( 103η_1 + 2η_2 + ζ_1 ) . We have again taken B_0^2≪ϵ_0 +p_0, the corrections of order B_0^2/(ϵ_0 + p_0) are straightforward to write down. At θ=0, both polarizations of Alfvén waves have the same damping.Let us now consider gapless modes propagating perpendicularly to the magnetic field, i.e. taking θ→π/2 first, k→0 second. These include sound waves ω = ± k v_π/2 - iΓ_π/2/2k^2 ,where v_π/2 is the non-zero solution of eq. (<ref>) at θ=π/2. In the limit of small B_0 it reduces to v_π/2^2 = v_s^2 = (s/T)/(∂ s/∂ T)=∂ p/∂ϵ, in equilibrium. The damping coefficient is Γ_π/2 = 1/ϵ_0 + p_0( ζ_1 - 23 η_1 + η_⊥) , assuming B_0^2≪ϵ_0 + p_0. The other four gapless modes at θ=π/2 are purely diffusive, ω = -i η_∥/ϵ_0+p_0k^2 ,ω = -i ρ_∥/μ_k^2 ,ω = -iη_⊥/ϵ_0+p_0k^2 ,ω = -iρ_⊥/μ_k^2 , In writing down (<ref>) and (<ref>) we have again taken B_0^2≪ϵ_0+p_0. 
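The quadratic gap equation for the conductor modes can be checked numerically against the quoted small-momentum expansions. In the sketch below (numpy), the unit parameter values are illustrative, and the names eps_e and mu_m for the electric permittivity and magnetic permeability are ours:

```python
import numpy as np

# Conductor modes: roots of w*(w + 1j*sigma/eps_e) = k^2/(eps_e*mu_m),
# compared with the quoted small-k frequencies.
sigma, eps_e, mu_m = 1.0, 1.0, 1.0

for k in [0.05, 0.1, 0.2]:
    roots = np.roots([1.0, 1j*sigma/eps_e, -k**2/(eps_e*mu_m)])
    gapped = roots[np.argmax(np.abs(roots.imag))]
    gapless = roots[np.argmin(np.abs(roots.imag))]
    print(f'k={k}: gapped {gapped:.5f} ~ {-1j*sigma/eps_e + 1j*k**2/(sigma*mu_m):.5f}, '
          f'gapless {gapless:.5f} ~ {-1j*k**2/(sigma*mu_m):.5f}')
```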
§.§.§ Charged state offset by background chargeWe now consider a state with a non-zero value of μ_0, which gives rise to a constant non-zero charge density n_0. In order to ensure that the equilibrium state is stable, we will offset this equilibrium value of the dynamical charge density by a constant non-dynamical external background charge density -n_0. This can be achieved by choosing the external current in the hydrodynamic equations (<ref>) as J^μ_ ext=(-n_0, 0). In the particle language, this would correspond to a state where the excess of electrically charged particles over antiparticles (or vice versa) is compensated by a constant charge density of immobile background “ions”.Even though the system is overall electrically neutral, its dynamics is not equivalent to that of the system with μ_0=0, n_0=0: for example, the fluctuation of the spatial electric current has a convective contribution n_0 δ u_i. More formally, when analyzing hydrodynamic modes, the limits n_0→0 and k→0 do not commute. We now find six gapped modes and three gapless modes.To get some intuition about the gapped modes, let us set all transport coefficients to zero, as well as set B_0=0. Then at small momenta there are two longitudinal gapped modes whose frequencies are determined byω^2 = Ω_p^2 + v_s^2 k^2 ,where Ω_p^2 ≡n_0^2/[(ϵ_0+p_0)ε_], and v_s is the speed of sound that the charged fluid would have, if the electromagnetic fields were not dynamical, see Sec <ref>. These modes are the relativistic analogues of Langmuir oscillations, and Ω_p is the relativistic “plasma frequency" which gaps out the sound waves. In addition, there are four transverse gapped modes whose frequencies are determined byω^2 = Ω_p^2 + k^2/ε_μ_ .These are electromagnetic waves in the fluid, gapped by the same plasma frequency Ω_p as the sound waves. If we now turn on the transport coefficients, the gaps are determined by ω( ω + iσ_∥/ε_) = Ω_p^2 , ω( ω + i(σ_⊥± iσ̃)/ε_) = Ω_p^2 ,indicating the damping of plasma oscillations. At non-zero B_0^2≪ϵ_0+p_0, the gaps will receive dependence on the magnetic field. At B_0=0 the system is isotropic. The gapless modes (B_0→0 first, k→0 second) include two transverse shear modes with quartic dispersion relation, and one longitudinal diffusive mode,ω = -iη k^4/n_0^2 μ_ ,ω = -iσχ_33 w_0^3/n_0^2 (χ)k^2 ,where again w_0≡ T_0 s_0 + μ_0 n_0, and the susceptibility matrix χ was defined below eq. (<ref>).At non-zero B_0, the three gapless modes all have quadratic dispersion relation at small momenta.There are two propagating waves with real frequenciesω = ±B_0 cosθ/n_0μ_k^2 ,where θ is the angle between k and B_0, and one diffusive mode. For B_0^2 _,μ≪ϵ_0 + p_0, the diffusive frequency isω = -iχ_33 w_0^3/ det(χ)( σ_∥cos^2θ/n_0^2 + ρ_⊥sin^2θ/B_0^2) k^2 .For gapless modes propagating at θ=π/2 at small momenta (θ→π/2 first, k→0 second), we again find the diffusive mode ω=-iD_⊥ k^2, with the same coefficient D_⊥ as in sec. <ref>. In addition, at θ=π/2 there are two “subdiffusive” modes with quartic dispersion relation,ω = -iη_⊥ k^4/n_0^2 μ_ ,ω = -iη_∥ k^4/n_0^2 μ_ .The eigenfrequencies are noticeably different from the ones in a theory with fixed, non-dynamical electromagnetic field discussed in sec. <ref>. 
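The longitudinal gap equation above is likewise easy to explore numerically. The sketch below (numpy; parameter values are ours, in units where ϵ_0 + p_0 = 1) exhibits the damped plasma oscillation, with Re ω ≈ ±Ω_p and Im ω ≈ -σ_∥/(2ε_e) in the underdamped regime:

```python
import numpy as np

# Gapped modes: roots of w*(w + 1j*sigma_par/eps_e) = Omega_p^2,
# with Omega_p^2 = n0^2/((eps0+p0)*eps_e) and eps0+p0 = 1 here.
n0, eps_e, sigma_par = 0.1, 1.0, 0.05
Omega_p = np.sqrt(n0**2/(1.0*eps_e))

roots = np.roots([1.0, 1j*sigma_par/eps_e, -Omega_p**2])
print(roots)                                  # ~ +-0.0968 - 0.025j
print(Omega_p, -sigma_par/(2*eps_e))          # compare with Re and Im parts
```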
Compared to the case of n_0=0 earlier in this section, one can say that non-vanishing dynamical charge density gaps out the magnetosonic waves, and turns Alfvén waves into waves whose frequency is quadratic in momentum.§.§ Kubo formulas We can find MHD correlation functions following the same variational procedure outlined in sec. <ref>. As the total current vanishes by the equations of motion, the objects whose correlation functions it makes sense to evaluate in MHD are the energy-momentum tensor T^μν and the electromagnetic field strength tensor F_μν. It is straightforward to evaluate retarded functions in flat space, in an equilibrium state with constant T=T_0, μ=μ_0, u^α=(1, 0), and constant magnetic field.We solve the hydrodynamic equations in the presence of fluctuating external sources δ J_ ext, δ g (proportional to exp(-iω t+ i k· x)) to find δ T[J_ ext,g], δμ[J_ ext,g], δ u^α[J_ ext,g], δ F_μν[J_ ext,g] and then vary the resulting hydrodynamic expressions T^μν[J_ ext,g] and F_μν[J_ ext,g] with respect to g_αβ, J_ ext^α to find the retarded functions. The metric variations are performed as usual,G_T^μν T^αβ = 2δ/δ g_αβ( √(-g)T^μν_on-shell[J_ ext,g] ) ,G_F_μν T^αβ = 2δ/δ g_αβ( √(-g)F_μν^on-shell[J_ ext,g] ) .The subscript “on-shell” signifies that T^μν and F_μν are evaluated on the solutions to (<ref>) with the constitutive relations (<ref>), (<ref>). Further, recall that the external current must be conserved, which can be implemented by choosing δ J^0_ ext = k_i δ J^i_ ext/ω + 1/2 n_0 δ g_μ^μ. The coupling A_μ J^μ_ ext then implies that iω δ/δ J^l_ ext(k) produces an insertion of F_0l(-k), while ik_m ϵ^nmlδ/δ J^l_ ext(k) produces an insertion of 1/2ϵ^nmlF_lm(-k). For example, for electric field correlation functions we have G_T^μν F_0l = iωδ/δ J^l_ extT^μν_on-shell[J_ ext,g],G_F_μν F_0l = iωδ/δ J^l_ extF_μν^on-shell[J_ ext,g],and similarly for the magnetic field. [ Alternatively, one can introduce an antisymmetric “polarization source” M_ ext^μν, by taking the conserved current as J^μ_ ext = ∇_νM_ ext^μν. The coupling A_μ J^μ_ ext then becomes 1/2M_ ext^μν F_μν upon integration by parts, and correlation functions of F_μν may be obtained as variations with respect to M_ ext^μν. ]Choosing the external magnetic field in the z-direction, we find the same Kubo formulas (<ref>) and (<ref>). The electrical resistivities may also be expressed in terms of correlation functions of the electric field. In the zero-density state with μ_0=0, n_0=0 we find1ω ImG_F_z0 F_z0 (ω,k=0) = ρ_∥ ,at small frequency, where ρ_∥≡ 1/σ_∥. Similarly, for the transverse resistivities we find1ω ImG_F_x0 F_x0 (ω,k=0) =ρ_⊥ ,1ω ImG_F_x0 F_y0 (ω,k=0) = - ρ̃_⊥sign(B_0), where again w_0≡ϵ_0+p_0, andρ_⊥, ρ̃_⊥ were defined below eq. (<ref>). We have taken B_0^2≪ w_0, otherwise there is a multiplicative factor of w_0 (w_0 - B_0^2 _,μ) μ_^2/(w_0 μ_+ B_0^2)^2 in the right-hand side of (<ref>), (<ref>).In a charged state (offset by non-dynamical -n_0), the correlation functions change, for example G_F_x0 F_y0(ω,k=0) = iωB_0/n_0 , while σ_∥ can be found from1ω ImG_T_0z T_0z (ω,k=0) = σ_∥ .Retarded functions at non-zero momentum may be found from the above variational procedure. For example, the function G_F_x0 F_x0 (ω,k) in a state with n_0=0 and with k∥ B_0 has singularities at the eigenfrequencies of Alfvén waves for small momenta. § A DUAL FORMULATION As this paper was being completed, an interesting article <cit.> (abbreviated below as GHI) came out which approached magnetohydrodynamics from a different perspective. 
The dual electromagnetic field strength tensor J^μν≡1/2ϵ^μναβF_αβ was taken as a conserved current, and the constitutive relations were written down for J^μν, rather than for the electric current J^μ as was done in MHD historically. This “dual” construction follows the earlier work of ref. <cit.> which studied a similar MHD-like setup for “string fluids”. The paper <cit.> identifies six transport coefficients in MHD, compared to eleven transport coefficients (in a parity-preserving system) found here. In this section we revisit the analysis of GHI, and show that the dual formulation allows for the same eleven transport coefficients we described earlier in Sections <ref> and <ref>. §.§ Constitutive relationsThe conservation laws are taken as follows:∇_μ T^μν = H^ν_ρσ J^ρσ ,∇_μ J^μν = 0 .These are the same equations (<ref>), (<ref>) we had earlier. The conserved external current is taken as J^μ_ ext = 12 ϵ^μνρσ∂_νΠ_ρσ^ ext, where Π^ ext_μν may be viewed as the dual of the external polarization tensor M^μν_ ext. The coupling A_μ J^μ_ ext then becomes 1/2Π^ ext_μν J^μν upon integration by parts, and correlation functions of J^μν may be obtained as variations with respect to Π^ ext_μν. The tensor H in (<ref>) is H=1/2 dΠ^ ext, or in components H_αβγ = 1/4∂_αΠ_βγ^ ext + (signed permutations).In order to relate the GHI thermodynamic parameters to ours, we can compare equilibrium currents. The currents at zeroth order in derivatives are given byT^μν = (ε_ + p_) u^μ u^ν + p_g^μν - μ_ ρ_h^μ h^ν + O(∂) , J^μν = ρ_ (u^μ h^ν - u^ν h^μ) +O(∂) . The subscript “d” for “dual” is used to differentiate the parameters from those used earlier in the paper.The currents can be compared with our eq. (<ref>) and the dual of eq. (<ref>) at zeroth order:T^μν = ( w_ + B^2/μ_)u^μ u^ν + ( -12 B^2 + p_ + B^2/μ_) g^μν - B^μ B^ν/μ_ + O(∂) , J^μν = u^μ B^ν - u^ν B^μ+ O(∂) , where w_≡ T p_,T + μ p_,μ = Ts + μ n is the enthalpy density, and μ_ = 1/(1 - 2∂ p_/∂ B^2) is the magnetic permeability. Using h^2=1, we can identify ρ_ = B, μ_ = B/μ_, h^μ = B^μ/B, p_ = -1/2 B^2 + p_ + B^2/μ_, up to O(∂) terms. Out of equilibrium, h^μ and μ_ are auxiliary dynamical variables (without a unique microscopic definition) designed to capture the dynamics of the magnetic field. The entropy density is s_ = p_,T + μ/T p_,μ, as follows from ε_ + p_ = T s_ + μ_ρ_. The energy densities coincide, ε_ = -p + T s + μ n = ϵ, again with p=-1/2 B^2 + p_(T,μ,B^2).At order O(∂), our constitutive relations can not be directly compared to those of GHI because of different hydrodynamic variables. However, we can compare the number of transport coefficients.The comparison may be done based on the entropy current argument which we review below.In a particular hydrodynamic “frame”, the one-derivative contributions to the GHI constitutive relations are given in eq. (3.4), (3.5) of ref. <cit.>,T^μν_(1) = δ f_ Δ^μν_ + δτ_h^μ h^ν+ ℓ_^μ h^ν + ℓ_^ν h^μ+ t_^μν , J^μν_(1) =m_^μ h^ν - m_^ν h^μ+ s_^μν , where Δ^μν_ = g^μν + u^μ u^ν -h^μ h^ν, andthe coefficients δ f_, δτ_, ℓ_^μ, t_^μν, m_^μ, s_^μν are all O(∂).The quantities ℓ_^μ, t_^μν, m_^μ, s_^μν are all transverse to both u_μ and h_μ, the tensor t_^μν is symmetric and traceless, and the tensor s_^μν is anti-symmetric. We do not write the subscript on the temperature and fluid velocity, even though the GHI's T and u^μ differ from ours at O(∂).Further, GHI impose charge conjugation as a constraint on the dynamics. §.§ Entropy productionThe “canonical” entropy current in the GHI formulation is analogous to eq. 
(<ref>),S^μ_ = 1/T( p_ u^μ - T^μν u_ν- μ_ J^μν h_ν) .This does not take into account the O(∂) contributions to thermodynamics: as we have seen earlier, the only non-trivial thermodynamic susceptibility in a parity-invariant theory is odd under charge charge conjugation C, and gets eliminated if C is imposed as a symmetry of hydrodynamics.Upon using the conservation equations (<ref>) together with the zeroth-order constitutive relations (<ref>), the divergence of the entropy current (<ref>) is∇_μ S^μ_ = -T^μν_(1) ∇_μ(u_ν/T) - J^μν_(1)[ ∇_μ( μ_ h_ν/T) + u_α H^α_μν/T] .Substituting the first-order constitutive relations (<ref>), we findT ∇_μ S^μ_=-δf_(S_3 - S_4) - δτ_ S_4 - ℓ^μ_Σ_μ - 12 t_μν^σ_⊥^μν - m_α^Y^α - 12 s^_ρσ Z^ρσ .Using the notation similar to sec. <ref>, we have the scalars S_3≡∇·u, S_4 ≡ h^μ h^ν∇_μu_ν, as well as σ^μν_⊥≡12 (Δ_^μαΔ_^νβ + Δ_^ναΔ_^μβ - Δ_^μνΔ_^αβ) σ_αβ and Σ^μ≡Δ_^μλσ_λρh^ρ. We have further definedY^λ≡Δ_^λρ[ T∂_ρ(μ_/T) + 2 u_α H^α_ ρσ h^σ - μ_ h^α∇_α h_ρ], Z^αβ≡Δ_^αρΔ_^βσ[ μ_ (∇_ρ h_σ - ∇_σ h_ρ) + 2 u_α H^α_ρσ] .In order to ensure that the entropy production in eq. (<ref>) is non-negative, GHI demandδf_ = -ζ_⊥ (S_3 - S_4) ,δτ_ = -2ζ_∥ S_4 ,ℓ^μ_ = -η_∥Σ^μ , t^μν_ = -η_⊥σ^μν_⊥ , m^α_ = -r_⊥Y^α , s^ρσ_ = -r_∥ Z^ρσ ,with six non-negative coefficients ζ_⊥, ζ_∥, η_⊥, η_∥, r_⊥, r_∥. This clearly gives ∇_μ S^μ_⩾ 0.Note however that while demanding eq. (<ref>) is sufficient to ensure non-negative entropy production, there are more ways besides eq. (<ref>) to make the right-hand side of eq. (<ref>) non-negative. These other options will give rise to extra transport coefficients. Indeed, consider the following coefficients of the O(∂) constitutive relations:δf_ = -f_1 S_3 - f_2 S_4,δτ_ = -τ_1 S_3 - τ_2 S_4 ,ℓ^μ_ = -η_∥Σ^μ - η̃_∥Σ̃^μ , t^μν_ = -η_⊥σ^μν_⊥ - η̃_⊥σ̃^μν_⊥ , m^α_ = -r_⊥Y^α - r̃_⊥Ỹ^α , s^ρσ_ = -r_∥ Z^ρσ . The tilded vectors are defined as Ṽ^μ = ϵ^μναβu_ν h_α V_β, and the tilded shear tensor isσ̃_⊥^μν≡12 ( ϵ^μλα_β u_λ h_ασ_⊥^βν + ϵ^νλα_β u_λ h_ασ_⊥^βμ) ,as in eq. (<ref>). The tensor s^ρσ_ has only one degree of freedom, hence it contains only one transport coefficient. The divergence of the entropy current (<ref>) is thenT ∇_μ S^μ_=f_1 S_3^2 +(τ_1+ f_2 - f_1 ) S_3 S_4 + (τ_2 - f_2)S_4^2+ η_∥Σ_μΣ^μ + 12 η_⊥ (σ_⊥^μν)^2 + r_⊥ Y_μ Y^μ+12 r_∥ (Z^ρσ)^2 .The three tilded coefficients do not contribute to entropy production in eq. (<ref>) due to Ṽ^μ V_μ = 0 and σ_⊥μν σ̃_⊥^μν = 0, and can take any real values,η̃_∥∈ℝ ,η̃_⊥∈ℝ ,r̃_⊥∈ℝ .Demanding that ∇_μ S^μ_ in eq. (<ref>) is non-negative now impliesη_⊥⩾ 0 ,η_∥⩾ 0 , r_⊥⩾ 0 , r_∥⩾ 0 ,together with the condition that the quadratic form in the first line of eq. (<ref>) is positive semi-definite. The latter givesf_1 ⩾ 0 ,τ_2 - f_2 ⩾ 0 , f_1 (τ_2 - f_2) ⩾14 (τ_1 - f_1 + f_2)^2 . Thus there are eleven apriori independent non-equilibrium transport coefficients listed in Eqs. (<ref>) that are consistent with non-negative entropy production, provided the constraints (<ref>) are satisfied. The coefficients r̃_⊥, η̃_⊥, η̃_∥ are odd under charge conjugation C, and can be eliminated if one demands C-invariance of hydrodynamics. An implicit assumption of ref. <cit.> amounts to choosing f_1 = -f_2 = ζ_⊥, τ_1 = 0, τ_2 = 2ζ_∥.§.§ Kubo formulasAssuming time-reversal covariance, the above transport coefficients can be further constrained by the Onsager relation (<ref>). In order to find the retarded functions, we can use exactly the same variational procedure as in sec. 
<ref>:G_T^μν T^αβ = 2 δ/δ g_αβ( √(-g)T^μν_on-shell[Π^ ext,g] ) ,G_J^μν T^αβ = 2 δ/δ g_αβ( √(-g)J^μν_on-shell[Π^ ext,g] ) ,as well asG_T^μν J^αβ = 2δ/δΠ_αβ^ extT^μν_on-shell[Π^ ext,g] ,G_J^μν J^αβ = 2δ/δΠ_αβ^ extJ^μν_on-shell[Π^ ext,g]. Again, the subscript “on-shell” signifies that T^μν and J^μν are evaluated on the solutions to the conservation equations (<ref>) with the constitutive relations (<ref>). We use the above prescription to evaluate correlation functions at zero spatial momentum, which gives rise to Kubo formulas. Demanding that the correlation functions satisfy (<ref>) now gives the Onsager relationτ_1 = f_1 + f_2 .We further find the following Kubo formulas for transport coefficients in the constitutive relations (<ref>). The resistivities are given by 1ω ImG_J^xy J^xy(ω, k=0) = r_∥ ,1ω ImG_J^xz J^xz(ω, k=0) = r_⊥ ,1ω ImG_J^yz J^xz(ω, k=0) = r̃_⊥sign(B_0) ,the “shear viscosities” are given by1ω ImG_T^xz T^xz(ω, k=0) = η_∥ ,1ω ImG_T^xy T^xy(ω, k=0) = η_⊥ ,1ω ImG_T^yz T^xz(ω, k=0) = η̃_∥sign(B_0) ,1ω ImG_T^xy T^xx(ω, k=0) = η̃_⊥sign(B_0) ,and the “bulk viscosities” are given by1ω ImG_T^xx T^xx(ω, k=0) = f_1 + η_⊥ ,1ω ImG_T^xx T^zz(ω, k=0) = f_1 + f_2 ,1ω ImG_T^zz T^zz(ω, k=0) = τ_1 + τ_2 . Correlation functions at non-zero momentum may also be found by using the above variational procedure.§.§ Mapping of transport coefficientsWe can compare the correlation functions of T^μν and J^μν evaluated using(<ref>) with the correlation functions found in sec. <ref>. If the two approaches to MHD (section <ref> and section <ref>) compute the same physical objects G_T^μν T^αβ etc, the results should agree. Comparing correlation functions at zero spatial momentum allows one to relate the transport coefficients in the constitutive relations (<ref>) to transport coefficients introduced in section <ref>, see eq. (<ref>), (<ref>). Doing so in the (dynamically) neutral state with n_0=0 gives the following relations. The resistivities are related byr_∥ = 1/σ_∥ , r_⊥ = σ_⊥/σ_⊥^2 + σ̃^2 ,r̃_⊥ = -σ̃/σ_⊥^2 + σ̃^2 ,the “shear viscosities” η_⊥, η̃_⊥, η_∥, η̃_∥ agree, and the “bulk viscosities” are related byf_1 = ζ_1 -23 η_1, f_2 = ζ_2 - 23 η_2,τ_1 = ζ_1 +43 η_1, τ_2 = ζ_2 + 43 η_2. The Onsager relation (<ref>) maps to the Onsager relation (<ref>), as expected. The entropy current constraints (<ref>) map to the entropy current constraints (<ref>), as expected. Finally, the mapping of transport coefficients (<ref>) can be used to compare the eigenfrequencies of small oscillations of the (dynamically) neutral state found in eq. (<ref>), (<ref>) to those found in ref. <cit.>. Using the map of thermodynamic parameters spelled out below eq. (<ref>), the speed of Alfvén waves agrees with ref. <cit.>. The damping coefficient of Alfvén waves in eq. (<ref>) agrees with ref. <cit.> when B^2/μ_≪ϵ+p. The speed of magnetosonic waves in eq. (<ref>) agrees with ref. <cit.>: in order to see this, note that the assumption of constant magnetic permeability amounts to assuming that the equation of state takes the form p_ = 1/2μ_μ_^2 + F(T), or p=-1/2μ_B^2 + F(T), with some F(T). In general, the speed of magnetosonic waves derived from the formalisms of sec. <ref> and sec. <ref> will not agree, except when B^2/μ_≪ (ϵ+p). One reason is that the chemical potential for the electric charge is treated as a thermodynamic variable in sec. <ref>, hence the magnetosonic wave speed will in general depend on the charge susceptibility (∂ n/∂μ)_μ=0. This thermodynamic derivative is not present in the formalism of sec. <ref>. 
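Two of these mappings can be verified in a few lines (sympy): the dual resistivities literally invert the transverse conductivity matrix (conveniently packaged as the complex numbers σ_⊥ + iσ̃ and r_⊥ + ir̃_⊥), and the bulk-viscosity map turns the dual Onsager relation τ_1 = f_1 + f_2 into the Onsager relation (<ref>) of the original formulation:

```python
import sympy as sp

s_p, s_t = sp.symbols('sigma_perp sigma_tilde', positive=True)
z1, z2, e1, e2 = sp.symbols('zeta_1 zeta_2 eta_1 eta_2')

# (i) r_perp + I*r_tilde should be the inverse of sigma_perp + I*sigma_tilde
r_p = s_p/(s_p**2 + s_t**2)
r_t = -s_t/(s_p**2 + s_t**2)
print(sp.simplify((s_p + sp.I*s_t)*(r_p + sp.I*r_t)))            # -> 1

# (ii) tau_1 = f_1 + f_2 combined with the viscosity map reproduces
#      3*zeta_2 - 6*eta_1 - 2*eta_2 = 0
f1 = z1 - sp.Rational(2, 3)*e1
f2 = z2 - sp.Rational(2, 3)*e2
tau1 = z1 + sp.Rational(4, 3)*e1
z2_sol = sp.solve(sp.Eq(tau1, f1 + f2), z2)[0]
print(sp.expand(3*z2_sol - 6*e1 - 2*e2))                         # -> 0
```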
§ DISCUSSION

In this paper we have presented the equations of relativistic magnetohydrodynamics, by which we mean the hydrodynamics of a conducting fluid in local thermal equilibrium, with dynamical electromagnetic fields. MHD is naturally formulated in a derivative expansion with magnetic field B ∼ O(1). Electric screening does not imply that the electric field vanishes: rather, it implies that E ∼ O(∂) is subleading in the derivative expansion. We have adopted the simplest “mean-field” formulation in which the constitutive relations in the theory with dynamical electromagnetic fields are inherited from the theory with external electromagnetic fields. Our main focus was on transport coefficients. For a parity-symmetric microscopic system, we find eleven transport coefficients at one-derivative order. One transport coefficient is thermodynamic: it is part of the equation of state in curved space, and contributes to flat-space correlations. Transport coefficients of this type in relativistic hydrodynamics were first identified in <cit.>, where they appeared at second order in derivatives. In 2+1 dimensional hydrodynamics, thermodynamic transport coefficients can already appear at first order in derivatives <cit.>. Of the remaining ten transport coefficients, three are non-equilibrium and non-dissipative, and seven are non-equilibrium and dissipative. There are more transport coefficients for parity-violating fluids, as listed in sec. <ref>. We now comment on questions not discussed in detail in the main body of the paper.

Angular momentum generated by the magnetic field.— The thermodynamic transport coefficient determines the response of equilibrium magnetic polarization to vorticity, as can be seen from eq. (<ref>). One way to view this coefficient is to note that a system of charged particles in an external magnetic field will develop angular momentum. One can see this in the thermodynamic framework of sec. <ref>. For a bounded system, the equilibrium energy-momentum tensor obtained by varying the equilibrium free energy (<ref>), (<ref>) with respect to the metric will have a boundary contribution after the variation B · δ_g Ω is integrated by parts <cit.>. The surface momentum density Q^α_s = ϵ^αμνρ u_μ B_ν n_ρ (where n^μ is the unit spacelike normal vector to the boundary) will give rise to angular momentum induced by the magnetic field. Consider a system at rest in flat space at constant temperature, charge density, and constant magnetic field B. The angular momentum L derived from the energy-momentum tensor only receives a boundary contribution, and one finds

L/V = 2 B ,

where V is the spatial volume. In this sense the coefficient determines an “angular momentum density”. As the coefficient is odd under charge conjugation C, this generation of angular momentum only happens in a C-invariant theory if the equilibrium state has non-zero charge density. Similarly, for a system not subject to the magnetic field, in flat space, which rotates uniformly with small (namely |ω|R ≪ 1, where R is the size of the system) angular velocity ω, the magnetization density is m = 2 ω. More generally, the susceptibility provides a macroscopic parametrization of gyromagnetic phenomena such as the Barnett and Einstein-de Haas effects.

Previous work on transport coefficients.— Papers <cit.> studied transport coefficients for relativistic fluids subject to an external magnetic field.
While this does not correspond to MHD in the sense described in this paper (we define MHD as a theory in which the magnetic field or its auxiliary is a dynamical degree of freedom), a fluid in an external field is a fundamental building block for MHD. Parts of Refs. <cit.> overlap with our Section <ref>. Some of our results differ from those in Refs. <cit.>: the analysis of thermodynamics, the number of transport coefficients, the constraints on transport coefficients imposed by the positivity of entropy production, and some of the Kubo formulas. The details are given in Appendix <ref>.

Dual formulation of magneto-hydrodynamics.— In sec. <ref> we compared our results with the recent “dual” formulation of MHD in ref. <cit.>. We found the same number of transport coefficients in the two approaches, provided the bulk viscosity missed in ref. <cit.> is restored, and the constraint of C-invariance imposed in ref. <cit.> is lifted. It would be interesting to investigate the relation between the “dual” and “conventional” formulations of MHD further, in particular with regard to the description of electric charge fluctuations.

Applicability regime.— The MHD described in this paper treats electromagnetic fields classically. This means that the electromagnetic coupling constant must be small, so that quantum fluctuations of the electromagnetic field can be ignored. The applicability regime of MHD also includes B ≪ T^2 (or, restoring the fundamental constants, ħ c eB ≪ (k_B T)^2), as is necessary to restrict the hydrodynamic degrees of freedom to those inherited from thermodynamics. We do not have a method to systematically incorporate the effects of larger magnetic fields within the MHD description of sec. <ref>. The classical hydrodynamic theory also ignores statistical fluctuations, which are known to invalidate classical second-order hydrodynamics in 3+1 dimensions (and classical first-order hydrodynamics in 2+1 dimensions). Understanding the effects of statistical fluctuations in a magnetic field requires further work.

Transport coefficients at strong coupling.— While the small electromagnetic coupling allows one to treat magnetic fields classically, other interactions in the theory do not have to be small. For strongly interacting non-abelian gauge theories in an external U(1) magnetic field, methods of gauge-gravity duality provide a window into non-equilibrium physics, both within and outside the hydrodynamic regime. Some of the hydrodynamic transport coefficients discussed in this paper were evaluated in holographic models in refs. <cit.>. The full set of transport coefficients for fluids in external magnetic field has not yet been explored holographically.

Higher-order terms.— We have not taken into account terms beyond first order in the derivative expansion. In conventional hydrodynamics, higher-order terms are required to render the theory causal <cit.> (see e.g. <cit.> for more recent discussions). We expect that a causal formulation of MHD will involve higher-order relaxation times as well as the electric field dynamics.

Note added: We have communicated with the authors of ref. <cit.>, and it is our understanding that the missing bulk viscosity will be added in an updated version of ref. <cit.>, and that the Kubo formulas for bulk viscosities will agree with ours. We have also communicated with the authors of ref. <cit.>, and it is our understanding that the Kubo formulas for viscosities in an updated version of ref. <cit.> will agree with ours.
We thank the Perimeter Institute for Theoretical Physics, where a large part of this work was completed. We thank the authors of refs. <cit.> for discussing their papers and the connections to our work. We thank Shira Chapman, Kristan Jensen, and Adam Ritz for helpful conversations. This work was supported in part by NSERC of Canada.

§ EQUILIBRIUM T^ΜΝ AND J^Μ

The coefficients ϵ_n, π_n, ϕ_n, γ_n, δ_n, θ_n in the equilibrium energy-momentum tensor and the current (<ref>) have the following expressions in terms of the five parameters M_n(T, μ, B^2) of the generating functional (<ref>). The O(∂) correction to the energy density is determined by

ϵ_1 = -M_1 + T M_1,T + μ M_1,μ + 4B^2 M_1,B^2 + T^4 M_3,B^2 ,
ϵ_2 = -M_2 + T M_2,T + μ M_2,μ ,
ϵ_3 = (4B^2/T^4) ( M_1 - T M_1,T - μ M_1,μ - 4B^2 M_1,B^2 ) - 4B^2 M_3,B^2 ,
ϵ_4 = -2M_4 + T M_4,T + μ M_4,μ ,
ϵ_5 = T M_5,T + μ M_5,μ + (4B^2/T^4) M_1,μ + M_3,μ ,

where the comma denotes the partial derivative: M_1,T ≡ (∂M_1/∂T) evaluated at fixed μ and B^2, etc. The O(∂) correction to the pressure is determined by

π_1 = 0 ,
π_2 = -(2/3) M_2 - (4/3) B^2 M_2,B^2 ,
π_3 = -(4/3) B^2 M_3,B^2 + (4B^2/3T^4) ( M_1 - T M_1,T - μ M_1,μ - 4B^2 M_1,B^2 ) ,
π_4 = -(1/3) M_4 - (4/3) B^2 M_4,B^2 ,
π_5 = -(4/3) B^2 M_5,B^2 + (4B^2/3T^4) M_1,μ .

The O(∂) correction to the charge density is determined by

ϕ_1 = M_1,μ - T^4 M_5,B^2 ,
ϕ_2 = M_2,μ ,
ϕ_3 = M_3,μ + T M_5,T + μ M_5,μ + 4B^2 M_5,B^2 ,
ϕ_4 = - + M_4,μ ,
ϕ_5 = 0 .

The O(∂) correction to the energy flux is determined by

γ_1 = -M_4 ,
γ_2 = 2M_4 - T M_4,T - μ M_4,μ ,
γ_3 = -M_4,B^2 ,
γ_4 = - + M_4,μ .

The O(∂) correction to the spatial current is determined by the magnetic susceptibility,

δ_1 = - ,
δ_2 = -T _,T - μ _,μ ,
δ_3 = -_,B^2 ,
δ_4 = _,μ .

The O(∂) correction to the stress is determined by

θ_1 = 0 ,
θ_2 = M_2,B^2 ,
θ_3 = M_3,B^2 - (1/T^4) ( M_1 - T M_1,T - μ M_1,μ - 4B^2 M_1,B^2 ) ,
θ_4 = M_4,B^2 ,
θ_5 = M_5,B^2 - (1/T^4) M_1,μ ,
θ_6 = 2M_2 ,
θ_7 = -M_2 + T M_2,T + μ M_2,μ ,
θ_8 = M_2,B^2 ,
θ_9 = -M_2,μ ,
θ_10 = M_4 .

§ COMPARISON WITH PREVIOUS WORK

§.§ Comparison with Huang et al

In this appendix we comment on how our work relates to some earlier studies of transport coefficients, for the benefit of the reader who might want to compare different approaches. Ref. <cit.>, abbreviated below as HSR, studied relativistic hydrodynamics of parity-invariant fluids in an external non-dynamical magnetic field. HSR enumerated the transport coefficients, giving a relativistic version of the classification in the book <cit.>, §13, and derived the Kubo formulas for transport coefficients in an operator formalism. Parts of the HSR paper overlap with our Section <ref>. Our counting of non-equilibrium transport coefficients for parity-invariant systems agrees with HSR. Denoting the transport coefficients of ref. <cit.> with the subscript HSR, the relations to our transport coefficients are as follows:

η_⊥ = η_0,HSR ,
η̃_⊥ = -2 η_3,HSR ,
η_∥ = η_0,HSR + η_2,HSR ,
η̃_∥ = -η_4,HSR ,
η_1 = -(1/2) η_0,HSR - (3/8) η_1,HSR - (3/4) ζ_⊥,HSR ,
ζ_1 = ζ_⊥,HSR ,
η_2 = (3/2) η_0,HSR + (9/8) η_1,HSR + (3/4) ζ_⊥,HSR + (3/2) ζ_∥,HSR ,
ζ_2 = ζ_∥,HSR - ζ_⊥,HSR ,
σ_⊥ = κ_⊥,HSR ,
σ_∥ = κ_∥,HSR ,
σ̃ = -κ_×,HSR ,

assuming the convention ϵ^0123 = 1. This lists eleven transport coefficients compared to ten HSR coefficients, hence under this mapping the eleven transport coefficients are not independent. Indeed, the comparison (<ref>) implies ζ_2 = 2η_1 + (2/3) η_2, which is precisely our Onsager constraint (<ref>). Thus our counting of non-equilibrium transport coefficients in Section <ref> agrees with that of HSR.
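The implication just quoted can be checked symbolically; the short sympy sketch below (the symbol names are mine) verifies that ζ_2 = 2η_1 + (2/3)η_2 holds identically under the mapping above.

import sympy as sp

# Symbolic check of the statement above; symbol names are this sketch's choice.
eta0, eta1, zperp, zpar = sp.symbols(
    'eta0_HSR eta1_HSR zetaperp_HSR zetapar_HSR', real=True)

eta_1 = -eta0/2 - sp.Rational(3, 8)*eta1 - sp.Rational(3, 4)*zperp
eta_2 = sp.Rational(3, 2)*eta0 + sp.Rational(9, 8)*eta1 \
        + sp.Rational(3, 4)*zperp + sp.Rational(3, 2)*zpar
zeta_2 = zpar - zperp

# zeta_2 - (2*eta_1 + (2/3)*eta_2) should vanish identically:
print(sp.simplify(zeta_2 - 2*eta_1 - sp.Rational(2, 3)*eta_2))   # -> 0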
There are also some differences between our Section <ref> and HSR. In terms of the setup, the HSR treatment neglects electric fields, while we include them and explain how to do so systematically. Related to that, the treatment of polarization effects in HSR was incomplete. A direct way to obtain the equilibrium energy-momentum tensor and the current in the presence of external fields is by varying the corresponding generating functional with respect to the metric and the gauge field, as was done for example in ref. <cit.>. As a result, HSR did not include the thermodynamic transport coefficient introduced in Section <ref>, and did not distinguish between the Landau-Lifshitz and thermodynamic frames. In the Landau-Lifshitz frame, this coefficient would contribute to all frame invariants in eq. (<ref>), inducing O(∂) contributions to the pressure, electric current, and spatial stress.

We also find that our constraints on transport coefficients imposed by the positivity of entropy production differ somewhat from those presented in HSR. Rewriting our constraints (<ref>) in terms of the HSR coefficients, we find

η_0,HSR ⩾ 0 ,
η_0,HSR + η_2,HSR ⩾ 0 ,
(1/3) η_0,HSR + (1/4) η_1,HSR + (3/2) ζ_⊥,HSR ⩾ 0 ,
3 η_0,HSR + (9/4) η_1,HSR + (3/2) ζ_⊥,HSR + 3 ζ_∥,HSR ⩾ 0 ,
18 ζ_∥,HSR ζ_⊥,HSR + 4 ζ_∥,HSR η_0,HSR + 3 ζ_∥,HSR η_1,HSR + 8 ζ_⊥,HSR η_0,HSR + 6 ζ_⊥,HSR η_1,HSR ⩾ 0 ,
κ_⊥,HSR ⩾ 0 ,
κ_∥,HSR ⩾ 0 .

On the other hand, the constraints coming from the second law in ref. <cit.> state that all the dissipative HSR transport coefficients must be positive. We find that the constraints on dissipative transport coefficients (<ref>) are in fact weaker. In other words, the constraints of ref. <cit.> are too restrictive: some of the dissipative transport coefficients in the HSR notation can be negative, while still satisfying (<ref>), and therefore still leading to positive entropy production.

Finally, there are differences between our Kubo formulas and those of HSR. In particular, our Kubo formulas for conductivities transverse to the external magnetic field are markedly different. Comparing the correlation functions in the neutral state (n_0 = 0), the HSR Kubo formulas give the conductivities κ_⊥,HSR and κ_×,HSR in terms of the iω coefficient of the retarded current-current correlation functions at zero momentum. On the other hand, our Kubo formulas (<ref>), (<ref>) show that the coefficient of iω vanishes, while the subleading coefficient in the small-ω expansion is determined by the resistivity rather than the conductivity. In the charged state, the term n_0/B_0 in our eq. (<ref>) describes the standard Hall effect in the plane transverse to the magnetic field. The Hall effect appears to be missing from the correlation functions in ref. <cit.>.
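The rewriting of the constraints can be reproduced symbolically; the sketch below (symbol names are mine) expands f_1, τ_2 - f_2 and the discriminant condition of eq. (<ref>) under the mapping of the previous paragraphs and recovers the inequalities listed above.

import sympy as sp

# Rederivation of the inequalities above from the constraints on
# (f_1, f_2, tau_1, tau_2), using the HSR mapping; symbol names are mine.
eta0, eta1, zperp, zpar = sp.symbols('eta0 eta1 zperp zpar', real=True)

eta_1 = -eta0/2 - sp.Rational(3, 8)*eta1 - sp.Rational(3, 4)*zperp
eta_2 = sp.Rational(3, 2)*eta0 + sp.Rational(9, 8)*eta1 \
        + sp.Rational(3, 4)*zperp + sp.Rational(3, 2)*zpar
zeta_1, zeta_2 = zperp, zpar - zperp

f1, f2 = zeta_1 - 2*eta_1/3, zeta_2 - 2*eta_2/3
tau1, tau2 = zeta_1 + 4*eta_1/3, zeta_2 + 4*eta_2/3

print(sp.expand(f1))            # eta0/3 + eta1/4 + 3*zperp/2
print(sp.expand(tau2 - f2))     # 3*eta0 + 9*eta1/4 + 3*zperp/2 + 3*zpar
print(sp.expand(4*f1*(tau2 - f2) - (tau1 - f1 + f2)**2))
# -> 8*eta0*zperp + 4*eta0*zpar + 6*eta1*zperp + 3*eta1*zpar + 18*zperp*zpar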
§.§ Comparison with Finazzo et al

In ref. <cit.> (abbreviated below as FCRN), the authors considered hydrodynamics with a fixed non-dynamical magnetic field, and derived Kubo formulas for the transport coefficients that appear in the energy-momentum tensor in the Landau-Lifshitz frame. FCRN use a variational approach to find the retarded functions of the energy-momentum tensor, and Appendix B of FCRN overlaps with our Section <ref>. FCRN follow ref. <cit.> in their constitutive relations for the energy-momentum tensor, so the comments in Section <ref> apply to FCRN as well, where FCRN agree with ref. <cit.>. In particular, FCRN did not include the thermodynamic transport coefficient that appears in the equilibrium free energy at one-derivative order. FCRN use mostly the same convention for transport coefficients as HSR: η_0,FCRN = η_0,HSR, η_1,FCRN = η_1,HSR, η_4,FCRN = η_4,HSR, ζ_⊥,FCRN = ζ_⊥,HSR, ζ_∥,FCRN = ζ_∥,HSR, while η_2,FCRN = -η_2,HSR, η_3,FCRN = -2η_3,HSR, assuming the convention ϵ^0123 = 1. The translation to our convention for transport coefficients can be done through eq. (<ref>). The convention for the variational retarded correlation functions used by FCRN differs from ours by an overall minus sign. We agree with FCRN's Kubo formulas for η_0,FCRN, ζ_⊥,FCRN, and ζ_∥,FCRN. Our Kubo formulas for η_2,FCRN and η_3,FCRN differ from those in ref. <cit.> by a minus sign. Our Kubo formula for η_4,FCRN differs from that in ref. <cit.> by a factor of 1/4. Our Kubo formula for η_1,FCRN + (4/3) η_0,FCRN differs from that in ref. <cit.> by a factor of 2. Ref. <cit.> does not derive Kubo formulas for electrical conductivities in an external magnetic field, so we cannot compare those. JHEP | http://arxiv.org/abs/1703.08757v1 | {
"authors": [
"Juan Hernandez",
"Pavel Kovtun"
],
"categories": [
"hep-th",
"hep-ph",
"nucl-th"
],
"primary_category": "hep-th",
"published": "20170326022839",
"title": "Relativistic magnetohydrodynamics"
} |
Contextuality and truth-value assignment Arkady Bolotin[Email: [email protected] ] Ben-Gurion University of the Negev, Beersheba (Israel) December 30, 2023 ======================================================================================================

In the paper, the question whether truth values can be assigned to propositions before their verification is discussed. To answer this question, a notion of a propositionally noncontextual theory is introduced that, in order to explain the verification outcomes, provides a map linking each element of a complete lattice identified with a proposition to a truth value. The paper demonstrates that no model obeying such a theory and at the same time the principle of bivalence can be consistent with the occurrence of a non-vanishing “two-path” quantum interference term and the quantum collapse postulate.

Keywords: Contextuality, Propositional logic, Truth-value assignment, Many-valued logic.

§ INTRODUCTION

As is known, the presence of contextuality in quantum theory makes it impossible to view a measurement as merely revealing pre-existing properties of a quantum system <cit.>. More specifically, no hidden-variable model in quantum theory can assign {0,1}-valued outcomes to the projections of the measurement in a way that depends only on the projection and not on the context in which it appeared, even though the Born probabilities associated with those projections are independent of the context. [For an overview of different aspects of contextuality in quantum theory and beyond, see <cit.>. Also, see a review of the framework of ontological models in <cit.>.]

On the other hand, thanks to a trivial probability function mapping true propositions to probability 1 and false propositions to probability 0, after the assertion of the truth of propositions the sum of the Born probabilities can be presented as a disjunction of a set of propositions where exactly one proposition is true while the others are false. This naturally raises the question whether one can assign truth values to propositions about properties of a state of a quantum system before the act of verification. In more general terms, can the assignment of pre-existing truth values to all the lattice elements associated with the system under investigation always be made available in quantum theory?

In this paper, we introduce a notion of a propositionally noncontextual theory, which can provide a map linking each element of a complete lattice to a truth value so as to explain the verification outcomes of experimental propositions associated with the state of the system. Using a quantum version of the double-slit experiment as an example, this paper demonstrates that no model based on such a theory and at the same time obeying the principle of bivalence can agree with the occurrence of a non-vanishing “two-path” quantum interference term and the quantum collapse postulate.

§ PRE-EXISTING TRUTH VALUES AND THE PRINCIPLE OF BIVALENCE

Let us consider a double-slit quantum interference set-up in which detectors placed just behind the slits indicate by a record the particle's passage through a particular slit. Let X_1 denote the proposition of a click of the detector behind slit 1, such that X_1 is true (“1”) if the detector 1 clicks (verifying in this way that the particle has indeed passed through slit 1) and X_1 is false (“0”) if this detector does not click (thus verifying that the particle has in fact not passed through slit 1).
Let X_2 analogously denote the proposition of the second detector's click. Let us introduce the proposition

X_12 ≡ X_1 ⊕ X_2 ≡ (X_1 ∧ ¬X_2) ∨ (¬X_1 ∧ X_2) ,

where the symbol ⊕ stands for the associative and commutative operation of exclusive disjunction that outputs true when one of its inputs is true and the other is false. This proposition corresponds to the assertion that the particle passes through exactly one slit – either 1 or 2. Subsequent to the recording of which-slit information (i.e., after the detectors confirm the particle's passage through either slit), the proposition X_12 represents an exact (i.e., sharp) property that the combined system (i.e., the particle plus the detectors) possesses. [For an approach to unsharp (and partial) forms of a quantum logic, see <cit.>.]

To keep things general, let us consider a complete lattice ℒ = (L, ⊔, ⊓) containing any set L where each two-element subset {y,z} ⊆ L has a join (i.e., a least upper bound) and a meet (i.e., a greatest lower bound) defined by y ⊔ z ≡ l.u.b.(y,z) and y ⊓ z ≡ g.l.b.(y,z), correspondingly. In addition to the binary operations ⊔ and ⊓, let the lattice ℒ contain a unary operation ∼ defined in a manner that L is closed under this operation and ∼ is an involution; explicitly, ∼y ∈ L if y ∈ L and ∼(∼y) = y.

Let α(Y) = [[ Y ]]_v, where Y is any proposition associated with an exact property of the system, refer to a valuation, i.e., a mapping α: S → 𝒱_N from a set of propositions S = {Y} to a set 𝒱_N = {𝔳} of truth values 𝔳 ranging from 0 to 1, where N is the cardinality of {𝔳}. Let us introduce the following definition: Suppose that there is a homomorphism f: L → S and let y ∈ L be a lattice element identified with the proposition Y ∈ S. Then, a theory will be defined as propositionally noncontextual if, in order to explain (predict) the verification outcomes (i.e., the truth values of the proposition Y), it provides a truth-function v that maps each lattice element to the truth value of the corresponding proposition, namely, v(y) = [[ Y ]]_v, based on the following principles: v(y)=0 if y=0_L and v(y)=1 if y=1_L, where 0_L is the least and 1_L is the greatest element of the lattice (identified with the always false and always true propositions, respectively). Correspondingly, a theory which is not propositionally noncontextual will be defined as propositionally contextual. [This definition is motivated by a similar one introduced in the paper <cit.>.]

Then, allowing that within a propositionally noncontextual theory the following valuational axioms hold, v(y ⊔ z) = [[ Y ∨ Z ]]_v, v(y ⊓ z) = [[ Y ∧ Z ]]_v, v(∼y) = [[ ¬Y ]]_v, the truth value of the compound proposition X_12 can be expressed in lattice-theoretic terms as

v(x_12) = v( (⊔_i=1^2 x_i) ⊓ (∼(⊓_i=1^2 x_i)) ) = [[ X_12 ]]_v ,

where the lattice elements x_i are attributed to the propositions X_i such that v(x_i) = [[ X_i ]]_v. Alternatively, the truth values of the logical connectives disjunction, conjunction and negation can be decided through the corresponding truth degree functions [[ Y ∨ Z ]]_v = F_∨([[ Y ]]_v, [[ Z ]]_v), [[ Y ∧ Z ]]_v = F_∧([[ Y ]]_v, [[ Z ]]_v), and [[ ¬Y ]]_v = F_¬([[ Y ]]_v). For example, in Łukasiewicz logics, the truth degree function of the negation connective is F_¬([[ Y ]]_v) = 1 - [[ Y ]]_v. To meet the Łukasiewicz version of negation, we will accept that

v(∼y) = 1 - v(y) .

In accordance with this definition, ∼0_L = 1_L and ∼1_L = 0_L, meaning that the lattice greatest element and the lattice least element are complements of each other.
Furthermore, as stated in <cit.>, the Łukasiewicz versions of disjunction and conjunction coincide with the truth-functions of the lattice joins and meets, namely, v(y ⊔ z) = min{v(y)+v(z), 1} and v(y ⊓ z) = max{v(y)+v(z)-1, 0}, whenever these Łukasiewicz operations can be defined.

Now, let us analyze the following assumption: Even before the verification, the lattice elements x_i in the formula (<ref>) can be assigned truth values. Otherwise stated, it is conceivable that the propositions X_i are in possession of pre-existing (i.e., existing before the detectors' clicks) truth values, which are either merely revealed or somehow transformed by the verification.

In agreement with the analyzed assumption, let us suppose that before the verification the elements x_1 and x_2 are either the bottom element and the top element of the lattice or the other way around. In such a case, prior to the verification, v(⊔_i=1^2 x_i) = v(0_L ⊔ 1_L) = 1 while v(⊓_i=1^2 x_i) = v(0_L ⊓ 1_L) = 0, and therefore v(x_12) = 1 in accordance with the formula (<ref>).

Let ℙ be the probability function mapping any proposition Y, Z, … to the real interval [0,1] such that ℙ[Y] = 1 if Y is true, ℙ[Y] = 0 if Y is false, and ℙ[Y ∨ Z] = ℙ[Y] + ℙ[Z] - ℙ[Y ∧ Z]. Along these lines, the probability function ℙ can be considered as the degree of belief that the corresponding proposition is true (or is expected to be true). [This approach to the generalization of the notion of a probability function allows one to accommodate variation in the background logic of the account while maintaining the core of standard probability theory <cit.>.]

Since in the considered case [[ X_1 ∨ X_2 ]]_v = 1 and [[ X_1 ∧ X_2 ]]_v = 0, the probability function mapping the disjunction X_1 ∨ X_2 to the interval [0,1] can be written as the sum of the probabilities ℙ[X_1 ∨ X_2] = ℙ[X_1] + ℙ[X_2] = 1. So, were the pre-existing truth values of the propositions X_1 and X_2 to be such that v(x_12) = 1, the interference pattern ℙ[R|X_1 ∨ X_2] (i.e., the probability of finding the particle at a certain region R on the screen) in the two-slit set-up with none of the detectors present at the slits would be the sum of the one-slit patterns ℙ[R|X_1] and ℙ[R|X_2], namely, ℙ[R|X_1 ∨ X_2] = ½ℙ[R|X_1] + ½ℙ[R|X_2] (on condition that ℙ[X_1] = ℙ[X_2]), and thus the second-order interference term I_12 ≡ ℙ[R|X_1 ∨ X_2] - ½ℙ[R|X_1] - ½ℙ[R|X_2] would be absent. [The wording “the second-order interference term” is from <cit.>.]

By contrast, let us suppose that before the verification v(x_1) = 1 and v(x_2) = 1 (which could be the case if both x_i = 1_L), so that, according to the formula (<ref>), v(x_12) = 0 prior to the verification. But then one would find (in contradiction to the quantum collapse postulate) that v(⊓_i=1^2 x_i) = [[ X_1 ∧ X_2 ]]_v = 1, that is, it would not be true that only one detector will click if the particle's passage through the slits is observed.

Thus, the compound proposition X_12 would be in possession of pre-existing truth values (consistent with the occurrence of quantum interference and quantum collapse) only on condition that v(x_i) ≠ 0 and v(x_i) ≠ 1. Clearly, this condition could be met if, prior to the verification, X_1 and X_2 did not obey the principle “a proposition is either true or false”, i.e., the principle of bivalence.
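The two cases just discussed can be made explicit with a few lines of Python implementing the Łukasiewicz truth functions quoted above (the function names are mine):

# Łukasiewicz truth functions of the lattice operations, as quoted above.
def v_join(a, b):             # v(y join z) = min{v(y)+v(z), 1}
    return min(a + b, 1.0)

def v_meet(a, b):             # v(y meet z) = max{v(y)+v(z)-1, 0}
    return max(a + b - 1.0, 0.0)

def v_neg(a):                 # v(~y) = 1 - v(y)
    return 1.0 - a

def v_x12(a, b):              # v(x_12) = v((x1 join x2) meet ~(x1 meet x2))
    return v_meet(v_join(a, b), v_neg(v_meet(a, b)))

print(v_x12(0.0, 1.0))        # bottom/top assignment -> 1.0 (no interference term)
print(v_x12(1.0, 1.0))        # both top -> 0.0, while v_meet(1, 1) = 1 (both click)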
§ MANY-VALUED LOGICS VS. SUPERVALUATIONISM

From the violation of bivalence, one can infer that results of future non-certain (not consistent with always true and always false propositions) events can be described using many-valued logics. For example, in a series of papers <cit.>, it is argued that for any lattice element y ∈ L one should have

{ v(y) } = { 𝔳 ∈ ℝ | 0 ≤ 𝔳 ≤ 1 } whereas v(0_L) = 0 and v(1_L) = 1 ,

which implies that an infinite-valued logic should be used to describe not-yet-verified properties of quantum objects. But what is more, from the violation of the principle of bivalence it is also possible to conclude that truth values of the future non-certain events simply do not exist, that is,

{ v(y) | y ≠ 0_L and y ≠ 1_L } = ∅ whereas v(0_L) = 0 and v(1_L) = 1 .

Unlike the assumption of pre-existing many-valuedness (<ref>), which supposes that borderline (that is, uncertain) statements should be assigned truth values lying anywhere between the truth and the falsehood, the assumption of supervaluationism (<ref>) suggests that such statements should lack truth values at all. This can neatly explain why it is impossible to know in advance the truth values of the borderline propositions X_1 and X_2 concerning the path that the particle can take getting through the slits.

§ CONCLUDING REMARKS

Suppose a double-slit quantum interference experiment is described in the following manner: After the verification, the proposition that the particle passes through a particular slit comes out true. Now, let us ask the question: is this a complete description of the quantum interference experiment?

The first answer is no: In a complete description, the particle passes through either slit regardless of the verification, since any specific proposition about the properties of the combined system (the particle + the detectors) can be not only either true or false but also neither true nor false. Accordingly, in the complete description, all the elements of a lattice represent properties which the system can possess to some degree of truth.

The second answer is yes: Prior to the detectors' clicks, the particle has by no means passed through either slit. If both slits are open, the passage through the given slit only comes about when the corresponding detector confirms it. As a result, the sentence “the particle passes through a particular slit” can be a proposition, that is, a primary bearer of truth value, only after the detectors have verified the particle's passage through the slit. [One can easily notice that the description of a double-slit interference experiment presented above bears a great deal of similarity to Einstein's example of a particle confined to a two-chambered box. See the detailed analysis of this example in <cit.>.]

It is clear that the assumption of pre-existing many-valuedness, specifically infinite-valuedness, coincides with the first answer. Whereas the assumption of supervaluationism, as per which any element of a lattice other than the greatest element 1_L and the least element 0_L carries no truth values, corresponds to the second answer.

§ ACKNOWLEDGMENT

The author would like to thank the anonymous referee for the inspiring feedback and the insights.
"authors": [
"Arkady Bolotin"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20170327234850",
"title": "Contextuality and truth-value assignment"
} |
Institut d’Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France Sorbonne Universités, UPMC Paris 6 & CNRS, UMR [email protected]

The chaotic nature of planet dynamics in the solar system suggests the relevance of a statistical approach to planetary orbits. In such a statistical description, the time-dependent position and velocity of the planets are replaced by the probability density function (PDF) of their orbital elements. It is quite natural to set up this kind of approach in the framework of statistical mechanics. In the present paper I focus on the collisionless excitation of eccentricities and inclinations by gravitational interactions in a planetary system, the prototype of such a dynamics being the future planet trajectories in the solar system. I thus address the statistical mechanics of the planetary orbits in the solar system and try to reproduce the PDFs numerically constructed by Laskar (2008). I show that the microcanonical ensemble of the Laplace-Lagrange theory accurately reproduces the statistics of the giant planet orbits. To model the inner planets I then investigate the ansatz of equiprobability in the phase space constrained by the secular integrals of motion. The eccentricity and inclination PDFs of Earth and Venus are reproduced with no free parameters. Within the limitations of a stationary model, the predictions also show a reasonable agreement with the Mars PDFs and that of Mercury inclination. The eccentricity of Mercury demands in contrast a deeper analysis. I finally revisit Laskar's random walk approach to the time dependence of the inner planet PDFs. Such a statistical theory could be combined with direct numerical simulations of planet trajectories in the context of planet formation, which is likely to be a chaotic process.

Addressing the statistical mechanics of planet orbits in the solar system Federico Mogavero Received ; accepted =========================================================================

§ INTRODUCTION

The chaotic dynamics of solar system planets <cit.> raises the question of a statistical approach to planetary orbits. When chaos is significant, a single integration of the equations of motion is not representative of the entire possible dynamics. A description based on the probability density function (PDF) of the planet orbital elements, instead of their time-dependent position and velocity, is essentially more suitable. Such a statistical approach could be combined with usual direct numerical integrations, especially in the context of planet formation, which is likely to be a chaotic process <cit.>.

The role of statistical mechanics in assessing the PDF of the planet orbital elements naturally emerges. Recently, the statistical mechanics of terrestrial planet formation has been addressed by <cit.>. In the spirit of a packed planetary systems hypothesis <cit.>, he advances the equiprobability of phase-space configurations verifying a certain criterion for the long-term stability of planetary systems. Within the limitations due to the sheared-sheet approximation and the restriction to the planar case, this ansatz allows one to analytically compute any property derivable from the complete N-planet distribution function, such as the distributions of planet eccentricities and semimajor axis differences. The predictions show an encouraging agreement with N-body simulations and data from the Kepler catalogue, while significant discordances arise in the case of radial-velocity data and solar system planets.
The simplicity of Tremaine's ansatz is fundamental to describe at once the two main steps of the final giant-impact phase of terrestrial planet formation: the excitation of embryo eccentricities and inclinations through mutual gravitational interactions and the subsequent collisions and merging. However, one could ask for a more fundamental approach, such as one exploiting the integrals of motion <cit.>, in line with the spirit of equilibrium statistical mechanics <cit.>. Such an approach can actually be set up by focusing on how eccentricities and inclinations are excited by gravitational interactions in an early, collisionless final phase of terrestrial planet formation, and postponing the analysis of collisions to subsequent treatments. As there is no fundamental physical difference between this problem and that of future planetary trajectories in the solar system <cit.>, the latter can provide a benchmark for any statistical theory of planetary orbits to be applied to exoplanet systems. In the present study I address the problem of setting up the statistical mechanics of planetary trajectories in the solar system.

The statistical mechanics of gravitating systems is notoriously challenging <cit.>. Firstly, the system has to be confined in a spherical box to assure a bounded phase space, similarly to ideal gases. Then, the non-extensive nature of gravitational energy, due to the long-range character of the Newtonian potential, breaks the equivalence between the microcanonical and canonical ensembles, which is typical of systems like neutral gases and plasmas. The long-range nature of gravity also prevents the existence of a thermodynamic limit. In addition to this, the short-range singularity of the gravitational potential requires taking into account the specific nature of small-scale interactions to guarantee the existence of microcanonical equilibrium states. Finally, the number N of planets in a typical planetary system, even during the giant-impact phase of its formation process, is limited. Therefore, the N ≫ 1 regime is generally inappropriate and so is the employment of a mean field approach [The lack of the N → +∞ limit could seem to question the usefulness of employing statistical mechanics. Actually, the foundation for such an approach relies on the chaotic nature of planet dynamics, independently of their number N.].

As pointed out by <cit.>, mainly in the context of stellar systems, one effective way to construct the statistical mechanics of orbits in a planetary system is to average the planet motion over its fastest timescale, that is, the orbital period. As a result of this averaging procedure, each planet is replaced by a massive ring following its Keplerian orbit, whose linear mass density is inversely proportional to the planet orbital velocity, a so-called Gaussian ring <cit.>. In such a system, the rings interact with each other via secular gravitational interactions. Their eccentricities and inclinations relax by exchange of angular momentum, while their semi-major axes are constant, and so are their Keplerian energies <cit.>. Even if there is no theorem assuring that this secular dynamics approaches the actual planet motion, it can nevertheless be the source of important results <cit.>. Indeed, the first clear indications of a chaotic planetary motion in the solar system came from averaged equations of motion <cit.>. Moreover, by using the same secular dynamics, <cit.> numerically computed the PDFs of eccentricity and inclination of the solar system planets.
In the present paper I address the problem of reproducing these PDFs through a statistical mechanics approach.

The paper is structured as follows. In Sect. <ref> I briefly recall how secular planet dynamics can be introduced in its simplest form. Then, in Sect. <ref> I describe the PDFs of eccentricity and inclination of the solar system planets calculated numerically in L08. I introduce in Sect. <ref> the microcanonical ensemble of the Laplace-Lagrange theory to reproduce the statistics of the giant planet orbits. The ansatz of equiprobability in the phase space constrained by the secular integrals of motion is presented in Sect. <ref> for the inner planets. Finally, I revisit in Sect. <ref> the random walk ansatz of L08 to account for the time dependence of the inner planet PDFs. I conclude emphasizing the relevance of a statistical theory of planetary orbits in the context of planet formation.

§ SECULAR PLANET DYNAMICS

Using standard notation <cit.>, the Hamiltonian of the N = 8 solar system planets can be written as

ℋ = ∑_k=1^N ( p⃗_k^2/(2μ_k) - G m_0 m_k/r_k ) + ∑_k=1^N ∑_l=k+1^N ( p⃗_k · p⃗_l/m_0 - G m_k m_l/|r⃗_k - r⃗_l| ) ,

where heliocentric canonical variables r⃗_k, p⃗_k are employed, m_0 is the Sun mass, m_k the planet masses, μ_k = m_0 m_k/(m_0 + m_k) the reduced masses and G the gravitational constant. As usual, the index k lists the planets by increasing semi-major axis, from Mercury to Neptune. The first term on the right side of equation (<ref>) is the Hamiltonian of the Keplerian motion, while the second one contains the gravitational interactions among planets. Therefore, it is worthwhile to introduce a set of action-angle variables for the Keplerian motion, e.g. the modified Delaunay variables <cit.>:

Λ_k = μ_k √(G(m_0 + m_k) a_k) ,   λ_k = M_k + ω_k + Ω_k ,
P_k = Λ_k (1 - √(1-e_k^2)) ,   p_k = -ω_k - Ω_k ,
Q_k = 2 Λ_k √(1-e_k^2) sin^2(i_k/2) ,   q_k = -Ω_k .

Keplerian orbital elements are used in these definitions: a_k is the planet orbit semi-major axis, e_k the eccentricity, i_k the inclination, M_k the mean anomaly, ω_k the argument of perihelion and Ω_k the longitude of node. As long as planets do not experience close encounters and do not lie near mean motion resonances, secular dynamics can be introduced by averaging the Hamiltonian (<ref>) over the fastest motion timescales, i.e. the planet orbital periods. This corresponds to averaging the Hamiltonian over the mean anomalies M_k:

⟨ℋ⟩ = 1/(2π)^N ∫_0^2π dM_1 ⋯ ∫_0^2π dM_N ℋ = ℋ_0 + ℋ_sec ,

where ℋ_0 = ∑_k=1^N -G(m_0+m_k)^2 μ_k^3/(2Λ_k^2) = ∑_k=1^N -G m_0 m_k/(2a_k) is the total Keplerian energy and

ℋ_sec = -∑_k=1^N ∑_l=k+1^N G m_k m_l/(2π)^2 ∫_0^2π ∫_0^2π dM_k dM_l/|r⃗_k - r⃗_l|

are the orbit-averaged gravitational interactions between planets (the kinetic contribution in the second term on the right side of Eq. (<ref>) averages out to zero over a Keplerian orbit). This averaging procedure corresponds to constructing a secular normal form to first order in planetary masses <cit.>. The action variables Λ_k are integrals of motion of the secular dynamics <cit.>, and so are the semi-major axes a_k. As a consequence, ℋ_0 is an additive constant which may be dropped from the Hamiltonian. The single-planet secular phase space is four-dimensional and compact, with the canonical volume element given by dp_k dq_k dP_k dQ_k.
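As an illustration, the change of variables can be coded directly from the definitions above; the following Python sketch is not part of the original text, and the argument names and the G = 1, m_0 = 1 defaults are assumptions of the sketch (angles in radians).

import numpy as np

# Modified Delaunay variables from Keplerian elements, following the
# definitions above; unit conventions are this sketch's assumptions.
def modified_delaunay(a, e, inc, M, omega, Omega, m, m0=1.0, G=1.0):
    mu = m0 * m / (m0 + m)                        # reduced mass mu_k
    Lam = mu * np.sqrt(G * (m0 + m) * a)          # Lambda_k
    P = Lam * (1.0 - np.sqrt(1.0 - e**2))         # P_k
    Q = 2.0 * Lam * np.sqrt(1.0 - e**2) * np.sin(0.5 * inc)**2   # Q_k
    lam = M + omega + Omega                       # lambda_k
    p, q = -(omega + Omega), -Omega               # p_k, q_k
    return Lam, P, Q, lam, p, q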
The secular contribution of general relativity is of primary importance for the long-term dynamics of the solar system inner planets (e.g., L08). The secular Hamiltonian accounting for the leading relativistic correction is given by

ℋ_sec = -∑_k=1^N ∑_l=k+1^N G m_k m_l/(2π)^2 ∫_0^2π ∫_0^2π dM_k dM_l/|r⃗_k - r⃗_l| - ∑_k=1^N 3 G^2 m_0^2 m_k/(c^2 a_k^2 √(1-e_k^2)) ,

where c is the speed of light. A short derivation of the general relativistic contribution is presented in the Appendix <ref> for future reference.

When eccentricities and inclinations are sufficiently small, e_k, sin(i_k/2) ≪ 1, it is valuable to develop the secular Hamiltonian in a power series of these variables. Neglecting terms of the fourth order and smaller, one obtains the Laplace-Lagrange (LL) linear secular theory:

ℋ = -( x⃗^T A x⃗ + y⃗^T A y⃗ + v⃗^T B v⃗ + z⃗^T B z⃗ )/2 ,

where I have introduced the Poincaré canonical variables

x_k = √(Λ_k) e_k sin p_k ,   v_k = 2 √(Λ_k) sin(i_k/2) sin q_k ,
y_k = √(Λ_k) e_k cos p_k ,   z_k = 2 √(Λ_k) sin(i_k/2) cos q_k ,

and, for instance, x⃗^T = (x_1, …, x_N). The elements of the matrices A and B are given in the Appendix <ref>. These matrices are real and symmetric and can therefore be diagonalized through orthogonal matrices O_A and O_B,

A = O_A D_A O_A^T ,   B = O_B D_B O_B^T .

The elements of the diagonal matrices D_A and D_B are the eigenvalues of A and B, respectively, and the columns of O_A and O_B are their normalized eigenvectors. New action-angle coordinates P⃗'⃗, Q⃗'⃗, p⃗'⃗, q⃗'⃗ can be introduced by combining the following canonical transformations:

x⃗ = O_A x⃗'⃗ ,   y⃗ = O_A y⃗'⃗ ,   v⃗ = O_B v⃗'⃗ ,   z⃗ = O_B z⃗'⃗ ,

x'_k = √(2P'_k) sin p'_k ,   v'_k = √(2Q'_k) sin q'_k ,
y'_k = √(2P'_k) cos p'_k ,   z'_k = √(2Q'_k) cos q'_k .

Employing the new variables, the LL Hamiltonian becomes

ℋ = -∑_k=1^N ( g_k P'_k + s_k Q'_k ) ,

where g_k and s_k are the eigenvalues of the matrices A and B, respectively. According to the corresponding Hamilton equations, the actions P'_k and Q'_k are integrals of motion, while the angles p'_k and q'_k change linearly with time at constant frequencies -g_k and -s_k, respectively. Therefore, the time dependence of the Poincaré variables x_k, y_k, v_k, z_k turns out to be the superposition of N harmonics with different frequencies and amplitudes. In the present study I refer to each of these harmonics as an LL mode and I follow the conventional ordering of the frequencies g_k and s_k <cit.>.
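Numerically, the diagonalization step is a single call to a symmetric eigensolver; the sketch below assumes the LL matrices A and B of the Appendix are available as numpy arrays, and the toy matrix only exercises the call.

import numpy as np

# Diagonalization of the real symmetric LL matrices with numpy.linalg.eigh;
# the secular frequencies are the eigenvalues, the orthogonal matrices O_A
# and O_B have the normalized eigenvectors as columns.
def ll_modes(A, B):
    g, O_A = np.linalg.eigh(A)    # A = O_A diag(g) O_A^T
    s, O_B = np.linalg.eigh(B)    # B = O_B diag(s) O_B^T
    return g, O_A, s, O_B

# Toy 2x2 symmetric matrix, just to check the factorization:
A_toy = np.array([[2.0, 0.5], [0.5, 1.0]])
g, O_A, s, O_B = ll_modes(A_toy, A_toy)
print(np.allclose(O_A @ np.diag(g) @ O_A.T, A_toy))   # -> True

Note that eigh returns the eigenvalues in ascending order, so the output must still be relabelled to match the conventional ordering of the g_k and s_k.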
§.§ Integrals of motion

Generally speaking, the dynamics of a planetary system obeys the conservation of mechanical energy, momentum and angular momentum. The secular dynamics introduced above automatically verifies the conservation of momentum: as Keplerian orbits are closed, the average of the planet momenta over the mean anomalies identically vanishes. Therefore, the dynamics resulting from the secular Hamiltonian (<ref>) must conserve energy and angular momentum. This can easily be shown to be the case in the LL dynamics <cit.>. As formula (<ref>) shows, the LL Hamiltonian is a function of the action variables only. These being integrals of motion, the conservation of energy follows. The angular momentum content of a planetary system can be described by the following quantities:

L_x = ∑_k=1^N Λ_k √(1-e_k^2) sin i_k sin Ω_k ,
L_y = -∑_k=1^N Λ_k √(1-e_k^2) sin i_k cos Ω_k ,
C = ∑_k=1^N Λ_k ( 1 - √(1-e_k^2) cos i_k ) ,

where x, y, z are the Cartesian coordinates related to the definition of the Keplerian elements i_k, ω_k and Ω_k <cit.> [Typically, the xy-reference plane is chosen to be perpendicular to the total angular momentum vector. However, in this study I choose the ecliptic plane as the reference one, since this is the choice made in L08.]. L_x and L_y are the components of the angular momentum lying on the xy-reference plane, while C is the angular momentum deficit (AMD) <cit.>, the difference between the angular momentum that the planets would have on coplanar, circular orbits and L_z, the component of the total angular momentum perpendicular to the reference plane. The AMD can be expressed as

C = ∑_k=1^N (P_k + Q_k) = ∑_k=1^N (P'_k + Q'_k) ,

where I have used the definitions (<ref>) and the fact that the matrices O_A and O_B, appearing in the transformation (<ref>), are orthogonal. As the action variables P⃗'⃗ and Q⃗'⃗ are conserved in the LL dynamics, the AMD is also a conserved quantity. Conservation of L_x and L_y derives from the properties of the matrix B. As shown in the Appendix <ref>, one of the eigenvalues s_k is zero and (√(Λ_1), …, √(Λ_N))/√(∑_k Λ_k) is, up to a sign, the corresponding normalized eigenvector. By convention <cit.>, the null eigenvalue is chosen to be s_5. Moreover, neglecting terms of order three in eccentricities and inclinations, one has

L_x = -∑_k=1^N 2 √(Λ_k) v_k ,   L_y = -∑_k=1^N 2 √(Λ_k) z_k .

All this implies that in the LL dynamics the following identities are verified:

v'_5 = ∑_k=1^N O_B^T_5k v_k = -L_x/(2 √(∑_k=1^N Λ_k)) ,
z'_5 = ∑_k=1^N O_B^T_5k z_k = -L_y/(2 √(∑_k=1^N Λ_k)) ,

where I have used the fact that O_B is orthogonal, O_B^-1 = O_B^T, and its columns are the normalized eigenvectors of B. As the frequency s_5 is zero, q'_5 does not change with time. Therefore, v'_5 and z'_5 are integrals of motion and so are L_x and L_y.
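For reference, the AMD is straightforward to evaluate from the orbital elements; a minimal Python sketch, assuming the Λ_k are precomputed:

import numpy as np

# AMD of a system of N planets, following the definition of C above.
def angular_momentum_deficit(Lam, e, inc):
    Lam, e, inc = map(np.asarray, (Lam, e, inc))
    return np.sum(Lam * (1.0 - np.sqrt(1.0 - e**2) * np.cos(inc)))

# Coplanar, circular orbits carry no AMD:
print(angular_momentum_deficit([1.0, 2.0], [0.0, 0.0], [0.0, 0.0]))   # -> 0.0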
§ THE PDFS OF <CIT.>

In L08 Laskar performs a statistical analysis of the future planet orbits in the solar system. He employs 1001 integrations of secular equations over the 5 Gyr of the Sun's remaining lifetime before its red giant phase. These integrations differ in the initial conditions, which are obtained with small variations of the initial Poincaré variables of the VSOP82 solution <cit.>. The total integration time is divided in 250 Myr intervals and statistics are performed over each interval, recording the state of the 1001 solutions with a 1 kyr timestep. Normalized PDFs of planet eccentricities and inclinations are then estimated. The PDFs of the giant planets are plotted in Fig. <ref>, while in Fig. <ref> are shown those of the inner ones [Unfortunately, the data behind the L08 PDFs are not currently available (Laskar, private communication). Therefore, I have extracted these curves from the original plots through WebPlotDigitizer <cit.>. This works fine for the giant planet PDFs and the eccentricity PDFs of the inner ones, as Laskar plotted them separately at three different times. Because of chaotic diffusion, such an extraction is infeasible for the inclination PDFs of the inner planets, which Laskar plotted at several different times on top of each other. In this case I use as a reference Laskar's fits instead of the original numerical PDFs, even if some differences exist between these curves, especially in the tails.]. The dynamics considered by Laskar is more accurate than the one I introduced in Sect. <ref>, as Laskar's secular Hamiltonian contains terms of order two in planetary masses and six in eccentricities and inclinations. This is why he conjectures that, even though his PDFs are obtained through secular equations, they should nevertheless be close to those arising from the full, non-averaged dynamics. While analysing these PDFs, what strikes Laskar is the very different shape between the giant planet curves and those of the inner planets. The PDFs of the outer planets are characterized by two peaks and restricted to a certain range of eccentricities and inclinations. In contrast, those of the inner planets have zero value at e = 0 and i = 0, a single peak and a continuous decaying at large eccentricities and inclinations. On the basis of these differences, Laskar takes two different approaches in explaining his numerical PDFs. By means of a frequency analysis applied to the outer planet motion, he shows that the PDF of a quasiperiodic approximation of their dynamics can reproduce very accurately the numerical PDFs. On the other hand, to take into account the significant chaotic diffusion existing in the statistics of the inner planets, Laskar fits their PDFs through a Rice distribution <cit.>. Indeed, he assumes that, because of chaotic diffusion, the Poincaré variables (<ref>) become practically independently Gaussian distributed over long timescales, with non-zero mean and a variance that increases linearly with time, like in a diffusion process.

§ GIANT PLANETS

According to secular dynamics, the orbital motion of the giant planets is well approximated by a quasiperiodic time function (L08). Indeed, chaotic diffusion is very limited for the outer planets, and their PDFs virtually do not change with time. A certain chaotic behaviour arises from the full equations of motion <cit.>, but according to L08 this is irrelevant to the overall structure of the PDFs. Taking advantage of this regularity of the giant planets, it is straightforward to predict the PDF of their Keplerian orbital elements. Neglecting chaotic diffusion, the most basic quasiperiodic dynamics is that resulting from the LL Hamiltonian, Eq. (<ref>). Such a motion is ergodic and the asymptotic stationary probability density is the microcanonical one <cit.>. It is straightforward to write down the corresponding PDF employing the integrals of motion <cit.>:

ρ(P⃗'⃗, Q⃗'⃗, p⃗'⃗, q⃗'⃗) = δ(q'_5 - q̄'_5)/(2π)^(2N-1) ∏_k=1^N δ(P'_k - P̄'_k) δ(Q'_k - Q̄'_k) ,

where δ stands for the Dirac delta and the bar indicates the initial value of the corresponding variable [In the present study these initial values are chosen to match the initial conditions of the VSOP82 solution <cit.>, since they are the ones employed in L08.]. This PDF reflects the conservation of the action variables P⃗'⃗, Q⃗'⃗ in the LL dynamics, while the angle variables are uniformly distributed over the (2N-1)-dimensional torus 𝐓^2N-1 (the angle q'_5 is conserved because the angular momentum is, see Sect. <ref>). An integral formula for the PDFs of eccentricity and inclination can be obtained <cit.>,

ρ(e_k) = e_k ∫_0^+∞ du u J_0(e_k u) ∏_n=1^N J_0(γ_kn u) ,

where J_0 is the Bessel function of the first kind and order zero and γ_kn = O_A_kn (2P̄'_n/Λ_k)^1/2. A similar formula holds for the inclination i_k. Otherwise, these PDFs can be very rapidly estimated by direct sampling of Eq. (<ref>) and using transformations (<ref>), (<ref>) and (<ref>).
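A minimal version of this direct-sampling estimate reads as follows; this is a sketch assuming O_A, the initial actions P̄'_k and the Λ_k are given as numpy arrays (an analogous routine with O_B and the Q̄'_k gives the inclinations).

import numpy as np

# Monte Carlo estimate of rho(e_k): actions frozen at their initial values
# Pbar, angles uniform on the torus; O_A, Pbar, Lam are assumed given.
def sample_eccentricities(O_A, Pbar, Lam, n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_draws, len(Pbar)))
    xp = np.sqrt(2.0 * Pbar) * np.sin(phases)    # x'_k = sqrt(2 P'_k) sin p'_k
    yp = np.sqrt(2.0 * Pbar) * np.cos(phases)    # y'_k = sqrt(2 P'_k) cos p'_k
    x, y = xp @ O_A.T, yp @ O_A.T                # x = O_A x', y = O_A y'
    return np.sqrt((x**2 + y**2) / Lam)          # e_k = sqrt((x_k^2+y_k^2)/Lambda_k)

# rho(e_k) is then approximated by, e.g.,
# np.histogram(sample_eccentricities(O_A, Pbar, Lam)[:, k], bins=200, density=True).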
The distribution function (<ref>) is time-independent and describes the statistics of planet orbits over timescales much longer than the timescale of the LL dynamics (dozens of kyr to a few Myr for the solar system planets). Since Laskar's PDFs are constructed over 250-Myr intervals, one can expect the statistics of the giant planets resulting from Eq. (<ref>) to agree with that of L08. Therefore, I plot in Fig. <ref> the PDFs predicted by the microcanonical distribution (<ref>), along with Laskar's PDFs. The agreement between the two curves is indeed very satisfactory for the inclination PDFs, with a very good matching for Jupiter and Saturn and minor differences in the case of Uranus and Neptune. The agreement is also good for the eccentricity PDFs of Jupiter and Saturn, while some major discordances arise in the eccentricity PDFs of Uranus and Neptune. Even though the shape of the PDFs is correctly reproduced by the microcanonical density (<ref>) in both cases, the endpoints of the eccentricity interval over which the PDFs vary are somewhat different from L08. However, the reason for such a discrepancy is clear. The LL Hamiltonian (<ref>) employed to derive the microcanonical distribution (<ref>) is valid up to the third order in eccentricities and inclinations. This regime is meaningful for the giant planets of the solar system, whose eccentricities and inclinations are quite moderate. However, terms of the fourth order in e and i in the Hamiltonian (<ref>) yield non-linearities in the corresponding Hamilton equations and produce high-order harmonics in the eccentricity and inclination time dependence [These higher order terms also affect the amplitude and frequency of the LL modes. It is worthwhile to remember that the secular Hamiltonian (<ref>) is itself valid to the first order in planetary masses. Contributions from higher order terms in m_k slightly adjust the amplitude and frequency of the harmonics.]. The amplitude of these additional harmonics is generally much smaller than that of the LL modes, but some of them can nevertheless contribute at a significant level. <cit.> reports the amplitudes of these high-order harmonics. From Bretagnon's tables it is clear that their contribution is bigger for the eccentricities than for the inclinations, and for Uranus and Neptune than for Jupiter and Saturn. This is in agreement with Fig. <ref>. Moreover, the amplitudes of the higher order harmonics are roughly in accord with the size of the discrepancies in Fig. <ref>. Generally speaking, the agreement between the predictions of the microcanonical distribution (<ref>) and Laskar's PDFs is very satisfactory when one realizes the simplicity of the LL approximation. Moreover, a better agreement can be obtained by extending the analysis of Sect. <ref> to higher order terms in eccentricities, inclinations and masses. In principle, this could be done by means of a perturbation approach to the Hamiltonian (<ref>) based on successive quasi-identity canonical transformations and normal forms <cit.>.

§ INNER PLANETS

The analysis of the giant planet orbits is based on the assumption that the effect of chaos on their motion is largely negligible. This is not the case for the inner planets. <cit.> already showed that the Mercury and Mars orbits are affected by a significant chaotic diffusion over 5 Gyr, while this diffusion is moderate for Venus and the Earth. A first interesting step in analysing the structure of Laskar's PDFs of the inner planets is to consider what the distribution function (<ref>) predicts for their eccentricity and inclination. This is illustrated in Fig. <ref>, where I also plot the corresponding L08 PDFs at three different times, t = 500 Myr, 2.5 Gyr and 5 Gyr, to illustrate chaotic diffusion. As one might expect, there is no agreement between the two curves. Nevertheless, it is interesting to note that, contrary to what could emerge from Laskar's analysis in L08, the difference in shape between the giant and inner planet PDFs is not fundamental.
Fig. <ref> shows that Eq. (<ref>), based on the LL dynamics, is a priori able to produce the general shape of the inner planet PDFs, i.e. the zero value and positive linear slope at e = 0 and i = 0, and the continuous decaying at large eccentricities and inclinations. This is particularly evident in the Venus and Earth eccentricity curves and is due to the strong linear coupling existing between several LL modes in the Earth and Venus dynamics. Analogous strong couplings are not present in the LL dynamics of the giant planets. This explains the different shape of their PDFs, with two peaks and a zero probability outside a certain range (Fig. <ref>). This is also the case for Mercury, which justifies the strong differences between the corresponding Laskar's PDFs and Eq. (<ref>) in Fig. <ref>.

§.§ The ansatz of equiprobability of phase-space states

Since the effect of chaos is relevant for the long-term dynamics of the inner planets, I search for a more pertinent statistical description than one based on quasiperiodic motion. The chaotic nature of planet dynamics allows for sharing of angular momentum between the LL action variables, which would otherwise be constant. This chaotic diffusion of angular momentum is already mentioned in <cit.> and has been recently illustrated in <cit.>. The simplest statistical approach is assuming that, by consequence, the motion of the inner planets uniformly visits the subset of the phase space defined by the conservation of angular momentum L⃗ and energy. Clearly, the fact that chaotic diffusion finally leads to such a uniform phase-space measure cannot be strictly true, as the inner planet PDFs depend on time. More probably, a uniform exploration could indeed occur adiabatically on a smaller subset of the phase space than that allowed by the conservation of angular momentum and energy only. Such a domain could in principle be determined by studying the higher order terms in the Hamiltonian (<ref>). However, the hypothesis of equiprobability in the phase space is still the best estimate that one can make without a deeper analysis of the dynamics <cit.>. It is so simple that it is worthwhile to investigate its predictions before undertaking more involved studies. Indeed, the idea of equipartition between LL modes has already been considered from a dynamical perspective by <cit.>. Moreover, even though it leads to a stationary distribution function, such an ansatz is nevertheless quite appropriate for Earth and Venus, since their PDFs diffuse very slowly over time, and can still be useful to understand the main factors contributing to the overall structure of the Mercury and Mars PDFs.

Since the giant planets are well described by the microcanonical distribution (<ref>), in which all the LL action variables are fixed, it is clear that an equiprobability simply based on the conservation of the total angular momentum and energy of all the planets would not produce reasonable results. To achieve a global description of giant and inner planets at once, one needs to statistically disconnect them. To this end, I note that the total angular momentum of the inner planets is quite independent of that of the giant planets. More precisely, <cit.> has shown that the flux of angular momentum deficit (AMD) from the large reservoir of the outer planets to the inner ones is moderate over 5 Gyr. As a first approximation, one can therefore assume that the total AMD of the inner planets is constant.
Moreover, from Eq. (<ref>) one can interpret the action variables P'_k and Q'_k as the AMD content of the corresponding LL mode. These considerations suggest fixing the action variables corresponding to the modes k ∈ {5, 6, 7, 8}, as they are the only ones which are relevant for the dynamics of the outer planets [As stated in Sect. <ref>, I adopt the conventional ordering of the LL modes <cit.>.]. Indeed, with this choice the conservation of the total AMD reduces to that of the quantity ∑_k=1^4 (P'_k + Q'_k), whose initial value is very close to the initial total AMD of the inner planets, as shown in Table <ref>. The remaining action variables, corresponding to the modes k ∈ {1, 2, 3, 4}, can then be allowed to vary stochastically according to equiprobability. The statistics of the giant planet orbits would therefore be virtually identical to the one shown in Fig. <ref>.

Among the integrals of motion, the AMD is of particular interest. For instance, <cit.> has highlighted its role in the orbital configuration of planetary systems. An important reason for this relevance is that, differently from Eqs. (<ref>) and (<ref>), the simple Eq. (<ref>) is valid to all orders in eccentricities and inclinations. I therefore start by investigating the predictions of equiprobability based on AMD conservation only. Conservation of energy will be added later. The ansatz corresponding to the above considerations reads:

ρ(P⃗'⃗, Q⃗'⃗, p⃗'⃗, q⃗'⃗) ∝ δ(C - C̄) f ,
f(P⃗'⃗, Q⃗'⃗, p⃗'⃗, q⃗'⃗) = δ(q'_5 - q̄'_5) ∏_k=5^8 δ(P'_k - P̄'_k) δ(Q'_k - Q̄'_k) .

Conservation of L_x and L_y to second order in eccentricities and inclinations is contained in this distribution function, as the variables Q'_5 and q'_5 are set to their initial values (see Eq. (<ref>)). I do not have a simple analytical formula for the PDFs of eccentricity and inclination. However, they can be calculated numerically very rapidly by direct sampling, as illustrated in the Appendix <ref>. Moreover, I present an analytical characterization of these PDFs in the Appendix <ref>. The predictions of the ansatz (<ref>) are compared in Fig. <ref> with Laskar's PDFs. They reproduce very satisfactorily the Venus and Earth PDFs. A detailed comparison in the case of Earth is shown in Figs. <ref> and <ref>. The agreement is remarkable if one considers the simplicity of the ansatz and that it does not contain any free parameters [To fit his PDFs Laskar employs three free parameters for each planet and each orbital element (eccentricity or inclination).]. For what concerns the PDFs of Mars and that of Mercury inclination, the predictions are still reasonable, as the differences with the numerical PDFs are of the same order of magnitude as the PDF diffusion over time. Indeed, in these cases one could a priori interpret the predicted curves as the long-term, stationary limit of Laskar's PDFs. However, this interpretation breaks down when one considers the eccentricity PDF of Mercury, as shown in detail in Fig. <ref>. In this case, the predicted PDF is substantially different from the numerical one. In particular, the mean of Laskar's curve is not correctly reproduced by the ansatz. It is clear that equiprobability, as applied in Eq. (<ref>), is far from approximating the actual Mercury dynamics.

One can add to the ansatz (<ref>) the conservation of the secular energy at quadratic order in eccentricities and inclinations (see Eq. (<ref>)),

ρ(P⃗'⃗, Q⃗'⃗, p⃗'⃗, q⃗'⃗) ∝ δ(C - C̄) δ(ℋ - ℋ̄) f .

In the Appendix <ref> I suggest an algorithm to efficiently sample this distribution function.
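For the AMD-only ansatz (<ref>), such a sampler is particularly simple: once the modes k ∈ {5, …, 8} are frozen, the measure δ(C - C̄) ∏ dP'_k dQ'_k over the free actions is uniform on a simplex, which a flat Dirichlet draw parametrizes; the free angles are uniform on the torus. A sketch (the energy-constrained version requires the algorithm of the Appendix; function and argument names are mine):

import numpy as np

# Direct sampler for the AMD-only ansatz: the eight free actions
# (P'_1..P'_4, Q'_1..Q'_4) are uniform on the simplex of total AMD C_inner,
# i.e. a flat Dirichlet draw rescaled by C_inner; angles are uniform.
def sample_free_modes(C_inner, n_draws=100_000, n_free=8, seed=0):
    rng = np.random.default_rng(seed)
    actions = C_inner * rng.dirichlet(np.ones(n_free), size=n_draws)
    angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_draws, n_free))
    return actions, angles

acts, _ = sample_free_modes(1.0, n_draws=4)
print(np.allclose(acts.sum(axis=1), 1.0))   # every draw satisfies the constraint

The frozen actions and q̄'_5, together with the remaining uniform angles, are then appended before transforming back to orbital elements as in the giant-planet case.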
The addition of energy conservation does not substantially change the predictions shown in Fig. <ref>. The principal consequence is that the average AMD in a single LL mode now depends, for k ≤ 4, on the mode considered, via the specific values of the frequencies g_k and s_k. In contrast, Eq. (<ref>) predicts an average AMD which is independent of the particular mode (see Appendix <ref>). This slightly shifts the predicted PDFs, without changing the validity of the above considerations. This is shown in the case of Earth in Figs. <ref> and <ref>, and in Fig. <ref> for Mercury.

The major disagreement between the predictions of equiprobability in phase space and Laskar's curves occurs for the eccentricity of Mercury, which is the least massive planet and has the highest typical eccentricity and inclination[This is a crucial point, as this study takes into account the conservation of L_x, L_y and of the energy only at quadratic order in eccentricities and inclinations (see Eqs. (<ref>) and (<ref>)).]. As suggested above, a kind of equiprobability ansatz could still be valuable if applied to a more restricted phase-space domain than that defined by the conservation of the total inner planet AMD and energy. If, as a matter of speculation, one restricts the equiprobability domain by setting the action variable P'_1 to its initial value, i.e.

ρ(P⃗', Q⃗', p⃗', q⃗') ∝ δ(C − C_0) δ(H − H_0) δ(P'_1 − P'_1,0) f,

the predicted PDF of the Mercury eccentricity starts to reproduce the characteristic mean value of the corresponding Laskar's curve, as shown in Fig. <ref>. This is related to the fact that the chaotic dynamics of the Mercury eccentricity keeps a strong memory of the dominant LL mode P'_1[As <cit.> and <cit.> show, the dynamics of Mercury treated as a test particle in the gravitational field of the other planets, whose orbits are fixed, can be described by a time-independent Hamiltonian, which is therefore an integral of motion and keeps memory of the initial conditions.]. However, it is clear that a much more involved analysis, taking into account higher order terms in the Hamiltonian (<ref>), is needed to closely reproduce the numerical PDF.

§ RANDOM WALK APPROACH TO CHAOTIC DIFFUSION

Reproducing the PDFs of the inner planets is particularly difficult because of the time dependence of these curves. The complexity of describing the diffusion of the integrals of motion in weakly nonlinear systems, starting from a given partition between linear modes, is perhaps best illustrated by the famous Fermi–Pasta–Ulam–Tsingou problem <cit.>. As stated in Sect. <ref>, in L08 the inner planet PDFs are fitted by means of a Rice distribution <cit.>, whose parameters depend on time to account for the curve diffusion. The choice of this particular distribution is justified by Laskar's suggestion that, because of chaotic diffusion, the Poincaré variables (<ref>) could behave like independent Gaussian random variables over a long timescale. In what follows I try, in the simplest way, to frame this ansatz in the context of the present paper, showing the difficulties of such a random walk approach. In the following discussion I focus on the degrees of freedom related to the eccentricities, but a similar analysis applies to the inclination ones. According to the microcanonical distribution function (<ref>), the PDF of the rectangular variable h_k = e_k sin(p_k) (i.e.
marginalized over the remaining Poincaré variables) can easily be shown to be

ρ(h_k) = 1/2π ∫_-∞^+∞ du e^{-i h_k u} ∏_n=1^N J_0(γ_kn u),

where i = √(-1) is the imaginary unit, J_0 the Bessel function of the first kind and order zero, and γ_kn = (O_A)_kn (2P'_n/Λ_k)^1/2. Depending on the coefficients γ_kn, the PDF (<ref>) can be double-peaked and strongly different from a Gaussian distribution. Moreover, since ρ(-h_k) = ρ(h_k), the mean value of h_k is zero. It seems doubtful whether chaotic diffusion, acting on the LL modes, can drive the Poincaré variables to the Gaussianity suggested by Laskar. Therefore, I set up the random walk approach on a slightly different basis. On a secular timescale of a few Myr, the motion of the inner planets is well reproduced by a quasiperiodic time function. On a Lyapunov timescale of ≃ 5 Myr, chaos starts to decorrelate the actual motion from the initial quasiperiodic approximation. On a much longer timescale, far beyond the Lyapunov time, the precise nature of the chaotic terms in the Hamiltonian (<ref>) is probably irrelevant in assessing the general form of the inner planet PDFs. Considering such a case, I propose the following ansatz, in which the stochastic dynamics of the Poincaré variables x'_k and y'_k is given by the superposition of a secular term and a chaotic one modelled by means of a white noise:

r⃗ = r⃗_sec + r⃗_chaos,

where r⃗ = (x'_k, y'_k)[Since the action variables P⃗' are conserved in the LL dynamics, it is reasonable to consider them as a starting point. However, as the boundary condition P'_k ≥ 0 applies, the simplest and most straightforward approach is to use the Poincaré variables x'_k and y'_k, which are defined on the entire real line.]. The stationary random variable r⃗_sec describes the statistics of the LL dynamics and corresponds to the microcanonical ensemble (<ref>),

ρ_sec(x'_k, y'_k) = δ[r − (2P'_k)^1/2] / [2π(2P'_k)^1/2],

where r^2 = x'^2_k + y'^2_k. The time-dependent chaotic component is modelled by the following Langevin equation:

ṙ⃗_chaos = √(2D_k) η⃗(t),

where η⃗ is a white noise, i.e. ⟨η⃗(t)⟩ = 0⃗, ⟨η_x(t)η_x(t')⟩ = δ(t−t'), ⟨η_y(t)η_y(t')⟩ = δ(t−t'), ⟨η_x(t)η_y(t')⟩ = 0 for all times t, t' ≥ 0. For simplicity I have assumed an isotropic diffusion in the phase space of the LL mode k. The diffusion coefficient D_k is a free parameter of this model. Moreover, I choose the initial condition on the chaotic term to match the LL dynamics, r⃗_chaos(t=0) = 0⃗. Since the random variables r⃗_sec and r⃗_chaos are independent, the PDF of r⃗ is given by the convolution of the PDF of r⃗_sec, ρ_sec, and that of r⃗_chaos, ρ_chaos. Therefore one has

ρ(r⃗, t) = ∫_ℝ² dz⃗ ρ_sec(z⃗) ρ_chaos(r⃗−z⃗, t).

The PDF of the random variable satisfying Eqs. (<ref>), (<ref>) and (<ref>) is the Green function of the Fokker–Planck equation

∂ρ_chaos(ξ⃗,t)/∂t = D_k ∇⃗² ρ_chaos(ξ⃗,t).

This Green function is

ρ_chaos(ξ⃗,t) = 1/(4πD_k t) exp(−ξ⃗²/4D_k t).

Substituting Eqs. (<ref>) and (<ref>) in Eq. (<ref>) one gets

ρ(r, t) = r/σ² exp(−(r² + m²)/2σ²) I_0(rm/σ²),

where I_0 indicates the modified Bessel function of the first kind and order zero, m = (2P'_k)^1/2 and σ² = 2D_k t. Therefore, one obtains a Rice distribution for the variable r. In the case where the variables x_k and y_k are dominated by one particular LL mode, i.e. |γ_kl| ≫ |γ_kn| for a certain index l and for all n ≠ l, the PDF of the eccentricity e_k turns out to be a Rice distribution too, with parameters m = |(O_A)_kl| (2P'_l/Λ_k)^1/2 and σ² = 2D_k (O_A)²_kl t/Λ_k. In this limiting case one thus recovers Laskar's ansatz. This is due to the fact that the Rice distribution describes the envelope of a sine wave plus Gaussian noise <cit.>, as already mentioned in L08; a numerical check of this limit is sketched below.
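As a consistency check of the derivation above, the following sketch (a toy numerical experiment with arbitrary values of P'_k and D_k, not fitted to any planet) integrates the Langevin equation for r⃗_chaos, adds a secular vector of modulus (2P'_k)^1/2 with uniform phase, and compares the resulting statistics of r = |r⃗| with the Rice distribution.

```python
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(1)

# toy parameters in arbitrary units
Pk, Dk, t = 0.5, 1e-3, 10.0
n_steps, n_samples = 200, 50_000
dt = t / n_steps

# secular term: circle of radius sqrt(2 P'_k), uniform phase
phase = rng.uniform(0.0, 2.0 * np.pi, n_samples)
r_sec = np.sqrt(2.0 * Pk) * np.column_stack([np.cos(phase), np.sin(phase)])

# chaotic term: isotropic 2-D Langevin dynamics, r_chaos(0) = 0
r_chaos = np.zeros((n_samples, 2))
for _ in range(n_steps):
    r_chaos += rng.normal(0.0, np.sqrt(2.0 * Dk * dt), (n_samples, 2))

r = np.linalg.norm(r_sec + r_chaos, axis=1)

# Rice distribution with m = sqrt(2 P'_k) and sigma^2 = 2 D_k t
m, sigma = np.sqrt(2.0 * Pk), np.sqrt(2.0 * Dk * t)
print(r.mean(), rice.mean(b=m / sigma, scale=sigma))  # should agree
print((r**2).mean() / 2.0, Pk + 2.0 * Dk * t)         # <P'_k> grows as 2 D_k t
```

The last line illustrates numerically the linear growth of the average AMD in the mode, ⟨P'_k⟩ = P'_k + 2D_k t, discussed in the text.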
However, within the present framework, the parameter m is not a free parameter as in L08, but is set by the initial LL dynamics. In Table <ref> I compare the value of the parameter m in L08 (Table 6) for the eccentricity PDFs with the prediction of Eq. (<ref>) when one considers, for each planet, only the LL mode with the largest amplitude. The physical meaning of the parameter m, which is free in L08, clearly emerges in the present framework.

As shown in Table <ref>, the PDF (<ref>) casts some light on Laskar's fits in L08. Nevertheless, it cannot reproduce the numerical PDFs accurately, even if the diffusion coefficients D_k are treated as free parameters. There are two main reasons for that. Firstly, the initial distribution (<ref>) is based on the LL dynamics. However, terms of higher order in eccentricities and inclinations in the Hamiltonian (<ref>) are of primary importance for the inner planets. To have a faithful representation of their initial quasiperiodic dynamics one has to go beyond the LL approximation. The other principal limitation of the present framework is that the PDF (<ref>) cannot reproduce the slightly decreasing peak positions of the Mars PDFs. Indeed, one has ⟨P'_k⟩ = ⟨r²⟩/2 = m²/2 + σ² = P'_k + 2D_k t, which means that the average content of AMD in the LL mode k is an increasing function of time. Therefore, the PDFs of eccentricity and inclination can only diffuse towards increasing mean values of the corresponding random variable. The reason for this behaviour is that the random walk considered above takes into account neither the conservation of AMD nor that of energy. Indeed, if one assumes that the AMD of the inner planets is conserved over time, it is understandable that the Mercury, Earth and Venus PDFs diffuse towards increasing mean values, while the Mars PDFs diffuse towards decreasing ones. Therefore, it would be a considerable improvement to set up a random walk exploiting the secular integrals of motion. Such a modification would result in a feedback dynamics between r⃗_sec and r⃗_chaos in Eq. (<ref>).

§ CONCLUSIONS

Motivated by the chaotic dynamics of solar system planets and by the stochastic nature of planet formation, I address in the present paper the construction of a statistical description of planetary orbits. I suggest that such an approach can be based on the statistical mechanics of secular dynamics. Considering the solar system as a benchmark, I try to reproduce the PDFs of eccentricity and inclination calculated numerically by <cit.>. I show that the statistics of the giant planet orbits is well reproduced by the microcanonical ensemble of the Laplace–Lagrange linear secular dynamics. This is particularly relevant as such a theory can be directly applied to a generic planetary system. Minor discrepancies in this description are connected to higher-than-quadratic terms in the planet Hamiltonian. Their contribution could be taken into account by improving the perturbative approach via successive quasi-identity canonical transformations and normal forms. I then try to reproduce the statistics of the inner planet orbits through the ansatz of equiprobability in phase space constrained by the secular integrals of motion, namely angular momentum and energy.
Within the limitations of a stationary model, such an ansatz allows one to easily and accurately reproduce the structure of the Venus and Earth PDFs without any free parameter. However, major discrepancies arise in the case of the Mercury eccentricity. I finally show the difficulties of constructing a random walk to model the chaotic diffusion of the inner planet PDFs, following the original ansatz of <cit.>. Within certain limitations, the PDF I obtain allows one to illustrate the physical meaning of one of the free parameters in Laskar's ansatz.

It is clear that the above description of the inner planet statistics has to be improved. One has to take into account higher-than-quadratic terms in the planet Hamiltonian. Generally speaking, one could think of using the full expression of the secular energy, Eq. (<ref>). The problem would then be how to efficiently sample the corresponding microcanonical ensemble. Another viable approach could be to start by constructing an ad hoc model for Mercury, since it shows the most relevant discrepancies with respect to the proposed ansatz. Such an approach could consist in considering Mercury as a test particle in the gravitational field of the other solar system planets, whose orbits are fixed <cit.>. With a selection of the most important higher order terms relevant for the non-linear dynamics of Mercury, important improvements could be achieved in reproducing the PDFs of its eccentricity and inclination. With respect to the random walk approach presented in Sect. <ref>, a considerable improvement would be to take into account the conservation of the angular momentum deficit and of the energy. Even if such a model is not completely predictive, as the diffusion in the phase space is described by free parameters, it could nevertheless be really instructive in judging how the conservation of the integrals of motion constrains the relaxation of the Laplace–Lagrange action variables.

The relevance of a statistical description of planet orbits in a generic planetary system is particularly manifest if one considers the model of planet spacing by <cit.> <cit.>. In this approach to planet formation, Laskar describes the collisionless dynamics of planetary embryos by a stochastic variation in their orbital elements that conserves the total angular momentum deficit. Conservation of mass and linear momentum is taken into account to model the result of collisions, during which the total angular momentum deficit decreases. Collisions stop when the total angular momentum deficit is too small to allow for further close encounters. With these simple assumptions, Laskar is able to show how the structure of a generic planetary system derives from the initial mass distribution of planetary embryos. As already pointed out by <cit.>, Laskar's model is not unique because it requires an ad hoc prescription for the random evolution of the orbital elements between collisions. Statistical descriptions like those in Sects. <ref> and <ref> can provide such a prescription based on solid physical arguments. A statistical approach to planetary orbits could then be useful in studying planet formation, in combination with standard N-body simulations of planetary trajectories.

I am grateful to A. Morbidelli, J. Touma, J. Laskar, J. Teyssandier, N. Cornuault and L. Pittau for the fruitful discussions. I would also like to thank J.-P.
Beaulieu for his support and help in reviewing the manuscript.

§ GENERAL RELATIVISTIC CORRECTION

I present a short derivation of the leading general relativistic contribution to the secular planetary Hamiltonian. At order (v/c)^2 the Lagrangian of the Sun and planets is given by <cit.>

L = L_0 + 3∑_a ∑_b' (m_a v^2_a/2)(G m_b/c^2 r_ab) + ∑_a m_a v^4_a/8c^2 − ∑_a ∑_b' (G m_a m_b/4c^2 r_ab)[7 v⃗_a·v⃗_b + (v⃗_a·n⃗_ab)(v⃗_b·n⃗_ab)] − ∑_a ∑_b' ∑_c' G^2 m_a m_b m_c/2c^2 r_ab r_ac,

where L_0 = ∑_a m_a v^2_a/2 + ∑_a ∑_b' G m_a m_b/2r_ab is the Newtonian Lagrangian. In the equation above a, b, c ∈ {0, 1, …, 8}, r_ab = |r⃗_a − r⃗_b|, n⃗_ab = (r⃗_a − r⃗_b)/r_ab, and the prime symbol means that the terms b = a and c = a must be excluded from the summation. I also recall that m_i/m_0 ≪ 1 for i ∈ {1, 2, …, 8}. I then simplify Eq. (<ref>) by keeping, among the relativistic terms, only the leading ones in the planet-to-Sun mass ratio: I thus neglect terms of order (m_i/m_0)(v/c)^2 and smaller. One obtains L = L_0 + δL, with

δL = ∑_i (m_i/2c^2)(v^4_i/4 + 3G m_0 v^2_i/r_i − G^2 m^2_0/r^2_i),

where r_i is the heliocentric position of planet i. The general relativistic correction to the planetary Hamiltonian is thus given by δH = −δL <cit.>. To obtain the secular correction one has to average δH over the Keplerian orbits of the non-interacting planets. Neglecting corrections of order m_i/m_0, along a Keplerian orbit one has

v^2_i = (G m_0/a_i)(2a_i/r_i − 1),

where a_i is the semi-major axis of planet i. Moreover, averaging over an orbital period one obtains ⟨r_i^-1⟩ = a_i^-1 and ⟨r_i^-2⟩ = a_i^-2 (1−e^2_i)^-1/2, with e_i the eccentricity of planet i. Substituting Eq. (<ref>) in δH and taking the average, one finally finds

⟨δH⟩ = ∑_i (G^2 m^2_0 m_i/c^2 a^2_i)[15/8 − 3/(1−e^2_i)^1/2].

Discarding the first term, as it is constant in the secular dynamics, one obtains that the dominant secular contribution of general relativity is −3G^2 m^2_0 m_i / c^2 a^2_i (1−e^2_i)^1/2 for each planet i.

§ LAPLACE-LAGRANGE HAMILTONIAN

The matrix A appearing in the quadratic Hamiltonian (<ref>) is given by the following expression:

A_kl = − (G m_k m_l a_k a_l / π√(Λ_k Λ_l) |a_k − a_l|^3) I_2kl   for k ≠ l,
A_kk = ∑_j=1, j≠k^N (G m_k m_j a_k a_j / πΛ_k |a_k − a_j|^3) I_1kj + 3G^2 m_0^2 m_k/c^2 a_k^2,

where I have defined

I_nkl = ∫_0^π/2 dθ cos(2nθ) / [1 + 4a_k a_l sin^2θ / (a_k − a_l)^2]^3/2.

Similarly, the matrix B is given by

B_kl = (G m_k m_l a_k a_l / π√(Λ_k Λ_l) |a_k − a_l|^3) I_1kl   for k ≠ l,
B_kk = − ∑_j=1, j≠k^N (G m_k m_j a_k a_j / πΛ_k |a_k − a_j|^3) I_1kj.

From formula (<ref>) it is easy to check that the matrix B satisfies the following relation:

∑_l=1^N B_kl √(Λ_l) = 0.

This implies that one of the s_k eigenvalues is zero and that (√(Λ_1), …, √(Λ_N))/√(∑_k Λ_k) is, up to a sign, the corresponding normalized eigenvector.

§ ANSATZ OF EQUIPROBABILITY: ANALYTICAL DESCRIPTION

An analytical characterization of the PDFs of eccentricity and inclination predicted by Eq. (<ref>) can be obtained by writing the conservation of the total AMD of the inner planets, as motivated in Sect. <ref>, through the modified Delaunay variables (<ref>). Compared to that of Sect. <ref>, such an approach does not include the statistics of the giant planet orbits. The microcanonical density of states ω then reads

ω(C_0)/(2π)^2N = [∏_k=1^N ∫_0^Λ_k dP_k ∫_0^2(Λ_k−P_k) dQ_k] δ(C_0 − ∑_k=1^N (P_k + Q_k)),

where N = 4 in the present analysis, C_0 is the initial total AMD of the inner planets and the factor (2π)^2N comes from the integration over the angle variables p_k and q_k.
Since Λ_k ≥ 3C_0/2 for all k <cit.>, one can restrict the above integration to the hypercube [0, C_0]^2N. To calculate the density of states one can therefore write ω(C_0) = dΣ(C_0)/dC_0, with

Σ(C_0) = [∏_k=1^N ∫_0^C_0 dP_k ∫_0^C_0 dQ_k] H(C_0 − ∑_k=1^N (P_k + Q_k)),

where H is the Heaviside step function. Performing the integrations by iteration, one obtains Σ(C_0) = (2π)^2N C_0^2N/(2N)! and then

ω(C_0) = (2π)^2N C_0^(2N−1)/(2N−1)!.

The joint microcanonical PDF of P_k and Q_k (i.e. marginalized over the remaining modified Delaunay variables) is then given by

ρ(P_k, Q_k) = [(2π)^2N/ω(C_0)] d/dC_0 [(C_0 − P_k − Q_k)^(2N−2)/(2N−2)!]

for P_k + Q_k ≤ C_0; otherwise it is zero. Thus one obtains

ρ(P_k, Q_k) = [(2N−1)(2N−2)/C_0^2] (1 − (P_k + Q_k)/C_0)^(2N−3)

for P_k + Q_k ≤ C_0. Switching to the variables e_k and i_k, one finds

ρ(e_k, i_k) = [(2N−1)(2N−2)/η^2] e_k sin i_k · (1 − (1 − √(1−e_k^2) cos i_k)/η)^(2N−3)

for η^-1(1 − √(1−e_k^2) cos i_k) ≤ 1, with η = C_0/Λ_k. The PDF of e_k is then given by

ρ(e_k) = ∫_0^i⋆ di_k ρ(e_k, i_k),   e_k ≤ √(1−(1−η)^2),

where cos i⋆ = (1−η)/(1−e_k^2)^1/2. Performing the integral, one obtains

ρ(e_k) = [(2N−1)/η] e_k/√(1−e_k^2) [1 − η^-1(1 − √(1−e_k^2))]^(2N−2)

for e_k ≤ √(1−(1−η)^2). Similarly, to obtain the PDF of i_k one has to calculate

ρ(i_k) = ∫_0^e⋆ de_k ρ(e_k, i_k),   cos i_k ≥ 1−η,

where e⋆ = [1 − (1−η)^2/cos^2 i_k]^1/2. Performing the integration, one finds

ρ(i_k) = tan i_k [1 − η^-1(1−cos i_k)]^(2N−2) · [(2N−1)/η − (1 − η^-1(1−cos i_k))/cos i_k]

for cos i_k ≥ 1−η. Since in the present study one has η ≪ 1 <cit.>, these PDFs can be simplified, obtaining

ρ(e_k) = [(2N−1)/η] e_k (1 − e_k^2/2η)^(2N−2),   e_k ≤ √(2η),
ρ(i_k) = [(2N−1)/η] i_k (1 − i_k^2/2η)^(2N−2),   i_k ≤ √(2η).

In this limit one has

⟨e_k⟩ = ⟨i_k⟩ = √(π/2) Γ(2N)/Γ(2N+1/2) η^1/2,   ⟨e^2_k⟩ = ⟨i^2_k⟩ = η/N,

where Γ is the Gamma function, and the average AMD of planet k is thus given by ⟨P_k + Q_k⟩ = Λ_k ⟨e_k^2 + i_k^2⟩/2 = C_0/N. Therefore one obtains the equipartition of AMD between planets and between the eccentricity and inclination degrees of freedom. Even though they do not coincide, the PDFs (<ref>) and (<ref>) present the same overall structure as those predicted by the ansatz (<ref>). One can also include in the above analytical treatment the conservation of the angular momentum components L_x and L_y at quadratic order in eccentricities and inclinations (see Eq. (<ref>)). While the shape of the eccentricity PDF is unaffected, that of the inclination PDF turns out to be somewhat influenced by the parameter (L_x^2 + L_y^2)^1/2/L_z.

§ ANSATZ OF EQUIPROBABILITY: SAMPLING

Conservation of AMD and energy in Eqs. (<ref>) and (<ref>) is equivalent to the two following conditions:

∑_k=1^4 (P'_k + Q'_k) = C_0,   −∑_k=1^4 (g_k P'_k + s_k Q'_k) = H_0,

where C_0 = ∑_k=1^4 (P'_k,0 + Q'_k,0) and H_0 = −∑_k=1^4 (g_k P'_k,0 + s_k Q'_k,0) are the initial values of the total AMD and of the secular energy of the inner modes. These equations can be rewritten as

1 = ∑_k=1^4 (a_k + b_k),   h = ∑_k=1^4 (g_k a_k + s_k b_k),

where a_k = P'_k/C_0, b_k = Q'_k/C_0 and h = −H_0/C_0. Moreover, I recall that a_k, b_k ≥ 0, g_k > 0 and s_k < 0. The PDF (<ref>) can therefore be evaluated by a uniform sampling of the a_k and b_k satisfying Eq. (<ref>). To evaluate the PDF (<ref>), the a_k and b_k have to satisfy both Eq. (<ref>) and Eq. (<ref>). Uniform sampling of N positive real numbers that add up to unity, as in Eq. (<ref>), can be performed via a direct sampling algorithm. One starts by sampling N−1 random numbers uniformly from the interval [0,1]. After sorting them, one obtains the set {u_1, …, u_N−1}, with u_i−1 ≤ u_i for i ∈ {2, …, N−1}.
Then, the numbers {u_1, u_2−u_1, …, u_N−1−u_N−2, 1−u_N−1} follow the target distribution.

To uniformly sample positive real numbers a_k and b_k from Eqs. (<ref>) and (<ref>) there is probably no direct sampling method. However, I present here an efficient algorithm with rejection. Let us assume that h ≥ 0, as in the present study; a similar algorithm can be constructed in case h is negative. One can multiply Eq. (<ref>) by g_max = max_k{g_k}, subtract Eq. (<ref>) and finally divide by g_max − h (one can assume this to be different from zero, otherwise the sampling is trivial). One obtains

1 = ∑_k=1^4 [(g_max − g_k)/(g_max − h)] a_k + [(g_max − s_k)/(g_max − h)] b_k.

The coefficients of a_k and b_k in Eq. (<ref>) are positive, and the term a_k⋆, such that g_k⋆ = g_max, does not appear in it. Therefore, the other numbers can be sampled from this equation using the direct sampling algorithm presented above. If their sum S turns out to be smaller than or equal to unity, then one can take a_k⋆ = 1 − S to obtain a set of numbers following the target distribution. Otherwise, the numbers are rejected and Eq. (<ref>) is sampled again.
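A compact implementation of both sampling schemes might look as follows; the frequency values g_k, s_k and the parameter h below are arbitrary placeholders, not the actual secular frequencies of the solar system.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_sum_to_one(n):
    """Uniformly sample n positive numbers adding up to unity
    (sorted-uniform direct algorithm)."""
    u = np.sort(rng.random(n - 1))
    return np.diff(np.concatenate(([0.0], u, [1.0])))

def sample_constrained(g, s, h, max_tries=100_000):
    """Rejection sampling of (a_k, b_k) >= 0 satisfying
    sum(a + b) = 1 and sum(g a + s b) = h, with g > 0 > s and 0 <= h < g_max."""
    w = np.concatenate([g, s])        # weights of (a_1..a_4, b_1..b_4)
    k_star = np.argmax(w)             # index with g_{k*} = g_max
    g_max = w[k_star]
    coeff = np.delete(g_max - w, k_star) / (g_max - h)
    for _ in range(max_tries):
        v = sample_sum_to_one(len(coeff))   # weighted variables sum to 1
        x = v / coeff                       # back to the (a, b) variables
        S = x.sum()
        if S <= 1.0:
            out = np.insert(x, k_star, 1.0 - S)
            return out[:len(g)], out[len(g):]
    raise RuntimeError("rejection sampling did not converge")

# toy frequencies (arbitrary units) and energy parameter
g = np.array([1.0, 0.7, 0.5, 0.3])
s = np.array([-0.2, -0.4, -0.6, -0.8])
a, b = sample_constrained(g, s, h=0.2)
print(a.sum() + b.sum(), g @ a + s @ b)   # -> 1.0 and 0.2
```

Note that the acceptance step only checks S ≤ 1; the two linear constraints are then satisfied exactly by construction.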
"authors": [
"Federico Mogavero"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170327180001",
"title": "Addressing the statistical mechanics of planet orbits in the solar system"
} |
Aloisio — Gran Sasso Science Institute, L'Aquila, Italy; INFN - Laboratori Nazionali Gran Sasso, Assergi (AQ), Italy. Berezinsky — Gran Sasso Science Institute, L'Aquila, Italy; INFN - Laboratori Nazionali Gran Sasso, Assergi (AQ), Italy.

Using the Auger mass-composition analysis of ultra high energy cosmic rays, based on the shape-fitting of X_max distributions <cit.>, we demonstrate that the mass composition and the energy spectra measured by Auger, Telescope Array and HiRes can be brought into good agreement. The shape-fitting analysis of the X_max distributions shows that the measured sum of the proton and Helium fractions can, for some hadronic-interaction models, saturate the total flux. Such a p+He model, with a small admixture of other light nuclei, naturally follows from cosmology with recombination and reheating phases. The most radical assumption of the presented model is the assumed unreliability of the experimental separation of Helium and protons, which allows one to consider the He/p ratio as a free parameter. The results presented here show that models with a dominant p+He composition explain well the energy spectrum of the dip in the latest (2015 - 2017) data of Auger and Telescope Array, but show some tension at the highest energies with the expected Greisen-Zatsepin-Kuzmin cutoff. The Auger-Prime upgrade experiment has a great potential to reject or confirm this model.

Towards solving the mass-composition problem in ultra high energy cosmic rays
R. Aloisio and V. Berezinsky December 30, 2023
=============================================================================

§ INTRODUCTION

Mass composition still remains a controversial issue in Ultra High Energy Cosmic Rays (UHECR). The three biggest detectors, Pierre Auger (referred to here as 'Auger'), Telescope Array (referred to as 'TA') and HiRes, have obtained contradictory results concerning the mass composition of primary particles in the energy range (3 - 100) EeV (1 EeV = 1×10^18 eV). At (1-3) EeV all three detectors agree on a light composition, protons or protons and Helium, but in the range (3 - 100) EeV the Auger detector, the biggest one, finds a progressively heavier mass composition with increasing energy, while the other two detectors report a mass composition similar to that at lower energy.

At present there are two basic methods to study the mass composition of UHECR: direct measurements and indirect tests. The more reliable direct method is based on the observation of the fluorescent light produced by Extensive Air Showers (EAS) in the atmosphere. The indirect test is based on the signatures of the mass composition in the primary energy spectrum, and we start with it, being older and less constraining. This approach works most efficiently for protons due to their interaction with the Cosmic Microwave Background (CMB). It results in two very specific spectral features: the Greisen-Zatsepin-Kuzmin (GZK) cutoff <cit.> and the pair-production dip. The former is a sharp cutoff at the end of the spectrum, around E ∼ 50 EeV, due to photo-pion production, and the latter is a rather faint feature at E ∼ 1 - 30 EeV, first calculated in <cit.> and studied in detail in <cit.>. The dip is well confirmed in the spectra of all three detectors, but its origin as the pair-production dip p+γ_cmb → p + e^- + e^+ is now questioned by the Auger mass composition. Before 2011 the data published by HiRes <cit.> and Auger <cit.>, and later confirmed by TA, showed a high energy steepening in good agreement with the predicted GZK cutoff. Nevertheless, the newest data of Auger and TA, released in 2015 - 2017, seem to be in
Nevertheless,the newest data of Auger and TA, released in 2015 - 2017 seem to be incontradiction with this interpretation, see Fig. <ref>.The propagation of UHE nuclei does not leave any clear signature of the mass composition in the energy spectra. The main channel of energy losses, that determines the spectrum of UHE nuclei, is the photo-disintegration process on the Extragalactic Background Light (EBL) and on CMB. This process naturally produces secondary lighter nuclei, mixing thus with the primary composition. As was first predictedby Gerasimova and Rozental <cit.> in 1961, i.e. before the discovery of CMB, nuclei photo-disintegration on EBL results in asuppression of the UHECR energy spectrum (GR steepening). In fact, as wasrealised later, see e.g. <cit.>, a more sharp cutoff occurs at higher energies where the nucleus photo-disintegration time onCMB becomes equal tothat on EBL. This cutoff arises at Lorenz-factor Γ∼ (3-5) × 10^9 for all nuclei. The energy of the cutoffE_cut∝ AΓ is different for primary nuclei with different A. This fact together with the unavoidable mixed composition,due to the production of secondary nuclei makes unclear any composition signature in the observed spectrum.At present the best method to measure the mass compositionis given by the observation of fluorescent light produced by the e-m component of EAS in the atmosphere. All three aforementioned detectors use this method. However, for better accuracy thefluorescent-light method needs additional information, which in the caseof HiRes is given by the stereo observation of fluorescent light, andin the case of Auger (and recently of TA) this additional information is obtained from the data of on-ground detectors (water-Cherenkov detectors inAuger and scintillation detectors in TA).The basic observable parameter related to mass composition is X_max(E), the atmospheric depth where the number N(E) of particles in the cascade, with total energy E, reaches its maximum X_max, is sensitive to the number of nucleons in the primary nucleus. Heavy nuclei interact higher in the atmosphere and have smaller fluctuations. In practice the actual quantity which allows to find the mass composition is the distribution N(X_max) of the showers with total energy E.In the case of large statistics the direct use of N(X_max,E) gives the most reliable estimation of composition. In the case of limited statistics one may use the moments of this distribution, see e.g<cit.>, namely the first moment which is the mean value ⟨ X_max⟩ and the second moment σ(X_max) which is the variance or dispersion (RMS) of the distribution. As was demonstrated in<cit.> using only the first two moments for the analysis, may resultin a false degeneracy: two different mass compositions may produce the same ⟨ X_max⟩ and σ(X_max).The shape-fitting analysis of N(X_max) recently performed by the Auger collaboration <cit.> gives very important results that, summarising, can be described as follows. The mass composition is assumed as a discrete sum of four elements: Iron (Fe), Nitrogen (N), Helium (He) and protons (p).For each element the X_max distribution is calculated by Monte Carlo (MC) simulations and the fraction of each element in the total flux is found from the comparison with observations. These fractions depend on the models of hadronic interaction included in MC simulations.A decisive result is given by the very small fraction of Iron at all energies, almostindependently of the hadronic interaction model (see the upper panelof Fig. 4 in <cit.>). 
Besides, the analysis of <cit.> shows that the fraction of light elements (p+He) is quite large, independently of the hadronic interaction model. It allows the conclusion that at least a large fraction of UHECR, if not the dominant one, is composed of light elements. The small fraction of Iron and the large fraction of p+He seem to be a common conclusion of the N(X_max) shape-fitting analysis of the Auger and HiRes/TA data.

The argument above does not dismiss the question: Auger, HiRes and TA use the same fluorescent data to measure X_max and the same moments-based method for the analysis of the mass composition. Why, then, do their conclusions differ? The most convincing answer to this question is probably given in a recent paper by the TA collaboration <cit.>. The observation of fluorescence light can be performed in two ways: with a monocular observation, when only one telescope observes the fluorescent signal, or in the stereo mode, when more than one telescope simultaneously observes the same shower. Fluorescence detection in monocular mode is less efficient in measuring X_max in comparison with the stereo mode. HiRes and later TA used, apart from monocular, also stereo events with higher precision in the measurement of X_max. This became possible because of the smaller (in comparison with Auger) spatial separation between telescopes. Auger, on the other side, to cover a larger area, has a much larger separation among telescopes and collected mainly monocular fluorescent events. Instead, the Auger collaboration developed the hybrid technique, first proposed in <cit.>, based on an additional accompanying signal from at least one on-ground water-Cherenkov detector. The hybrid method allows one to measure the core location and the geometry of the shower, which improves the measurement precision for X_max and the shower energy E. Auger has now collected the largest number of hybrid events, and we compare our predictions with the hybrid Auger data whenever this is possible.

At present TA is also using the hybrid technique with the help of 507 on-ground scintillation detectors <cit.>. With an accumulated statistics of 5 years of data, TA reports <cit.> that the hybrid measurements of X_max agree with the results of Auger if analysed with the EPOS-LHC hadronic interaction model <cit.>. On the other hand, using the QGSJetII-03 hadronic interaction model <cit.>, the TA collaboration finds a mass composition compatible with only light nuclei.

Another important method to measure the mass composition is given by the observation of muons produced in the EAS. The basic effect that distinguishes a nucleus from a proton with the help of muons is related to the different energy per nucleon, E/A, at fixed total energy E. A low energy nucleon produces low energy charged pions, which decay to muons before the parent pion undergoes new collisions with air nuclei. Produced in the EAS, muons propagate rectilinearly with velocity v ≈ c. As a result they can provide directional and timing information, which can further reduce the uncertainties of the fluorescent method. There are two well known muon quantities relevant for measuring the mass composition: the total number of muons in the shower, N_μ, and the so-called Muon Production Depth (MPD), X_max^μ, which gives the atmospheric depth where the production rate of muons reaches its maximum <cit.>. The total number of muons N_μ, also called the muon size of the shower, is especially important to determine the mass composition at energies below the UHECR regime, <10^17 eV, where the fluorescent emission is too faint to be detected.
At these energies, N_μ gives the only way to determine the mass composition. The analysis of the muon component in the KASCADE-Grande experiment <cit.> recently allowed the identification of the Iron knee in the Galactic cosmic ray spectrum at energy E ∼ 1×10^17 eV, as predicted by the rigidity relation <cit.>.

Among the three biggest UHECR arrays, at present only the Auger experiment has several unique possibilities to measure the muon flux directly and use it to determine the mass composition. The on-ground water-Cherenkov detectors can measure muons in inclined directions, although with a high level of uncertainty in the decoupling of the electron and muon signals. In AMIGA (Auger Muon and Infill Ground Array) there are muon detectors in the form of scintillation counters buried at a depth of 2.3 m underground <cit.>. A new and exciting method of muon detection in the Auger experiment is given by the recently funded Auger-Prime upgrade <cit.>, which has been specifically designed to improve muon detection in the whole energy range of the experiment. Each water-Cherenkov tank will be equipped with a scintillator layer on the top, sensitive only to the e-m component of the shower, while the water-Cherenkov detector is sensitive to both the e-m and muon components. The combination of the two signals allows one to reconstruct each of the fluxes separately. Recently, the TA collaboration also started important upgrades to increase the statistics at the highest energies by enlarging the area covered by the surface detectors; an updated description of the status of the TA experimental set-up can be found in <cit.>. Another important channel for measuring the mass composition is connected with the correlation between the muon signal and X_max. We will discuss this method in some detail in Section <ref>.

Above we discussed the problem of measuring the mass composition. A somewhat different question is whether it is possible to exclude, on the basis of observations, a pure proton composition at E > 3 EeV. There are two challengers for such a task: the quasi-isotropic gamma-radiation and neutrinos, both produced mainly by collisions of UHECR protons with background photons. The most stringent limit on the isotropic component of gamma-radiation in the range 50 MeV - 820 GeV is given by the Fermi-LAT experiment <cit.>. The strongest upper limits on the allowed UHE proton flux were obtained practically simultaneously in 2016 in three works <cit.>. The limit depends on the models for the sources of UHECR, especially strongly on the injection power-law index γ_g and the cosmological evolution of the sources. In <cit.> it was demonstrated that, in a wide range of parameters, the proton models which explain the UHECR flux and spectrum predict gamma-rays and neutrinos below the Fermi-LAT upper limit and the IceCube flux.

In the present paper we use the latest Auger and TA observations, comparing them with the spectral features that arise due to the propagation of UHECR, and with their mass composition. We argue that the spectral features may still be considered as an indication for a light mass composition, solving the problem of the alleged discrepancy between the Auger and TA observations. The paper is organised as follows: In Section <ref> we reconsider the status of the dip model in light of the latest observations of the spectrum. In Section <ref> we show how a mixture of Helium nuclei and protons provides a good description of the observed flux.
In Section <ref> we discuss the correlation of X_max with the muon characteristics: N_μ, the total number of muons in a shower, and X^μ_max, the Muon Production Depth; for this discussion we calculated the spectrum in the p+He+CNO model. The conclusions are given in Section <ref>.

§ MODIFICATION FACTOR AS INDICATION OF PROTON-DOMINATED COMPOSITION

Propagating through the CMB, the proton energy spectrum acquires two characteristic features: the dip, due to the process p+γ_cmb → p+e^-+e^+ <cit.>, and the GZK cutoff <cit.>, due to the reaction p+γ_cmb → N + π. These two features are quite different from the spectral features arising in the flux of UHE nuclei due to the interaction with the CMB and EBL. This difference can be used to distinguish protons from nuclei and provides an additional (indirect) test of the mass composition, to be compared with other observations. This test becomes particularly important in light of the uncertainties in the direct measurements of the mass composition and in the hadronic interaction models at energies above the LHC (CERN) calibrations.

In this section we use the modification-factor method to identify protons in UHECR. Following the works <cit.>, it can be shown that this method favours a proton-dominated mass composition in the observations of four experiments, AGASA, Yakutsk, HiRes and Auger, using the data before 2009, and in the TA data as of 2011. Here we reconsider this analysis using the higher statistics data of Auger and TA as published in 2015-2017 <cit.>. We will demonstrate that the new data may agree, under some reasonable conditions, with the proton dip, but show noticeable differences with the GZK cutoff (see Fig. <ref>).

The proton modification factor used in the forthcoming analysis is defined as the ratio

η_p(E) = J_p(E)/J_p^unm(E),

where J_p(E) is the total flux of protons, measured or calculated, taking into account all energy losses due to pγ_cmb collisions and adiabatic energy losses. In Eq. (<ref>) we also introduced the unmodified proton spectrum J_p^unm(E), which is calculated taking into account only adiabatic energy losses, J_p^unm(E) = KE^-γ_g.

Model-dependent phenomena enter both the numerator and the denominator in Eq. (<ref>) and compensate or even cancel each other, while the interaction with CMB photons does not enter the unmodified flux, i.e. the denominator, and appears only in the numerator of Eq. (<ref>). Thus the modification factor presents, in an unsuppressed way, such features as the dip and the GZK cutoff, directly connected with the propagation of UHE protons, while the model-dependent features are seen there in a suppressed form. Therefore the modification factor is an excellent instrument to search for a proton-dominated mass composition through the proton interaction features, dip and GZK cutoff, but it is not sensitive to the model-dependent features, e.g. to the details of the acceleration models.

The modification factor depends non-trivially on the statistics of the events. First, at very high observational statistics the agreement of the observed modification factor with the predicted one (both for protons) must become worse, because at high resolution the ratio starts to better distinguish the model-dependent effects, which are then less compensated between the denominator and the numerator of Eq. (<ref>). In other words, higher statistics results in a worse agreement between denominator and numerator in Eq. (<ref>) due to model-dependent effects. This phenomenon will be referred to as "high-statistics de-compensation".
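To make the construction of an observed modification factor explicit, the following sketch works on a synthetic binned spectrum: a mock suppression factor stands in for the real dip shape, and the normalization K is fitted on the lowest-energy bins, where η is expected to approach unity. All numbers are illustrative placeholders, not Auger or TA data.

```python
import numpy as np

gamma_g = 2.6

# synthetic binned spectrum: power law times a mock suppression factor
E = np.logspace(18.0, 20.0, 21)                              # eV
J_obs = E**(-gamma_g) * np.exp(-(np.log10(E) - 18.0) / 3.0)  # mock data

# unmodified spectrum K E^-gamma_g, normalized on the lowest bins
K = np.mean(J_obs[:3] * E[:3]**gamma_g)

eta_obs = J_obs * E**gamma_g / K
for Ei, etai in zip(E[::5], eta_obs[::5]):
    print(f"E = {Ei:.2e} eV   eta = {etai:.3f}")
```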
The second effect is caused by the observation technique. There are two kinds of observational errors: flux errors δJ/J and energy errors δE/E. In all published presentations of the measured spectra the errors are given as the flux errors δJ/J, while the energy errors δE/E are just mentioned or only shortly discussed. The systematic energy error was first presented in the Auger ICRC report 2013 <cit.> as δE/E = 0.14. In the present paper we consider two cases: (i) the total error is dominated by the flux error, δJ/J ≫ δE/E; (ii) the energy error δE/E is the dominant component, δE/E > δJ/J or δE/E ≫ δJ/J.

We start with the calculation of the modification factor η(E_obs) ± δη in the general case of two errors, δE_obs and δJ_obs. It is easy to obtain the exact expression for η(E_obs) under the natural condition δE_obs ≪ E_obs:

η(E_obs) ± δη = [J_obs(E_obs)/(K E_obs^-γ_g)] × [1 ± (1/J_obs(E_obs)) (∂J_obs(E_obs)/∂E_obs) δE_obs ± δJ_obs(E_obs)/J_obs(E_obs)].

In case (i), (δE/E)_obs ≪ (δJ/J)_obs, the second term on the rhs of Eq. (<ref>) may be neglected, and the comparison of the proton modification factor with the data of the four experiments released before 2011, namely AGASA, Yakutsk, HiRes, Auger, and also the 2011 data of TA, results in an excellent agreement with the observed spectra in approximately 100 energy bins <cit.>. It is a remarkable fact that this agreement is achieved with only one free parameter, the injection power-law index γ_g ≈ 2.6, together with the source emissivity L_0, which provides the total normalization of the flux.

In Fig. <ref> we compare the theoretical modification factor for protons with the TA (left panel) and Auger (right panel) data as of 2015 and 2017, using the same restriction (i), δE/E < δJ/J. This comparison shows a fairly good agreement with the dip in both datasets, a relatively small discrepancy with the GZK cutoff for Auger and a stronger discrepancy with the GZK cutoff for TA. We postpone the discussion of the GZK cutoff to a forthcoming publication. As to more details concerning the dip, the TA combined spectra of 2015 and 2017 show an excellent agreement with the theoretical modification factor, while the Auger data show a good agreement with the theoretical curve for the 2015 hybrid data, but for the high-statistics 2017 combined data there is a statistically significant difference with both the theoretical spectrum and the 2015 hybrid spectrum. Since the statistics of the combined Auger events is much higher than that of TA and of the 2015 hybrid Auger events, one may suspect the "statistical de-compensation" effect as the explanation for the discrepancy, namely the suppression of the compensation mechanism for model-dependent effects in the numerator and denominator of the modification factor given by Eq. (<ref>).

The reasonable agreement of the modification factor calculated above within the assumption δE/E ≪ δJ/J may be considered as an experimental confirmation of this assumption; however, there is an argument in favour of the alternative relation (ii), δE/E > δJ/J. From the Auger presentations of the combined energy spectra of 2014 - 2017 in the form E^3 J(E) with error bar δ[E^3 J(E)]:

δ(E^3 J(E))/E^3 J(E) = δJ(E)/J(E) + 3 δE/E,

it is important to note that the total error δ(E^3 J(E)) is determined by the errors δJ and δE, which operate in two perpendicular directions. This remark is also valid for the modification factor η(E) ∝ E^γ_g J(E), which results in

δη(E)/η(E) = δJ(E)/J(E) + γ_g δE/E.

One may conclude that the δE/E term can be larger or much larger than δJ/J in the case of the high statistics Auger combined events. Indeed, the first term δJ/J may be very small because of the tremendous Auger statistics (especially for the combined events), while the second term is large, in particular for the systematic error δE/E = 0.14 <cit.>. The following sketch illustrates this numerically.
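This is a minimal numerical illustration of the relation δη/η = δJ/J + γ_g δE/E; the error values are assumed, representative placeholders, with δE/E = 0.07 corresponding to half of the quoted Auger systematic, as used in the right panel of Fig. <ref>.

```python
gamma_g = 2.6

def rel_error_eta(dJ_over_J, dE_over_E, gamma_g=gamma_g):
    """Relative error of the modification factor eta ~ E^gamma_g J(E)."""
    return dJ_over_J + gamma_g * dE_over_E

# the statistical flux error shrinks with the event count,
# the energy-scale systematic does not (values are illustrative)
for dJ in (0.10, 0.03, 0.01):
    print(dJ, rel_error_eta(dJ, 0.07))
# -> the gamma_g * dE/E term (~0.18) dominates at high statistics,
#    wide enough to over-close a 10-20% gap between eta_th and eta_obs
```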
Moreover, the difference between the hybrid and combined spectra in the Auger data (see below) could point toward a higher energy error. However, it is difficult to estimate it, and we restrict our analysis to what is discussed in <cit.>. Thus δE/E ≫ δJ/J, and it may over-close the whole dip in the vertical direction, making it unobservable: the dip exists but cannot be seen because of too large energy errors. We will refer to this effect as the "over-closing of the gap".

This case is illustrated by Fig. <ref>, where the theoretical modification factor is plotted together with the Auger combined spectrum of 2017 and the Auger hybrid spectrum of 2015. The two spectra show the largest difference in the recent Auger data. In the left panel they are presented with error bars δJ/J, and in the right panel the energy errors δE/E are added as half of the systematic energy error of Auger <cit.>. One may see that the energy error bars over-close the gap between the two Auger spectra and between each Auger spectrum and the theoretical modification factor (see the captions to Fig. <ref>). The conclusion above is valid for small energy errors, δE/E < δJ/J. In the case of larger energy errors, δE/E > δJ/J, the energy error δE can exceed the gap between the predicted and observed values, η_th and η_obs, in particular at the most disturbing energies E ∼ 4 - 6 EeV, and the contradiction between η_th(E) and η_obs(E) disappears.

In a more general discussion, the good agreement of the proton theoretical modification factor with the Auger and TA observations, at least at the dip energies 1 - 30 EeV, is a strong indication of a proton or proton-dominated composition. However, one cannot consider it as the final proof. Indeed, on one side we observe the unique shape of the dip produced by p+γ_cmb → p+e^-+e^+ scattering. On the other side, in models with mixed nuclei composition, see for instance <cit.>, one can obtain a theoretical spectrum with practically the same shape as the pair-production dip, but using more than 10 free parameters in the theoretical model. This result demonstrates that the very specific shape of the pair-production dip is not the unique explanation of the observed spectrum. As discussed in the Introduction, there are a few other observations that can in principle challenge the proton-dominated mass composition: the Auger observations of the mass composition using fluorescence and muon signals, the Fermi-LAT data for the diffuse gamma-ray background and the IceCube data for astrophysical neutrinos. If all these experiments provide in the future evidence against the proton-dominated composition, one must conclude that the pair-production dip is an accidental coincidence. Nevertheless, as long as the experimental data are not conclusive, one must consider the proton modification factor as an indication for a proton-dominated mass composition of UHECR.

§ P+HE MODEL

As discussed in the Introduction, there is some tension on the mass composition at E > 3 EeV between the three biggest UHECR experiments, Auger, TA and HiRes.
In this Section we first summarise the basic physics of the mass-composition measurements, then discuss the influence of the cosmological environment and, finally, present calculations relevant to the p+He model.

The measurement of the mass composition is based on the value of X_max, the depth of the atmosphere where the number of particles in the shower reaches its maximum. The value of X_max is a basic parameter to determine the mass composition of UHECR, with the best observable quantity for this determination given by the shape of the distribution N(X_max, E) for showers with fixed total energy E. As a matter of fact, until recently, instead of the distribution N(X_max), the first two moments of this distribution were used: the mean value ⟨X_max⟩ and its RMS σ(X_max). In two recent papers by the Auger collaboration, the mass compositions obtained using the moments analysis <cit.> and the shape-fitting N(X_max) analysis <cit.> are not identical. Realistically, they are not expected to be such, similarly to the already known fact that ⟨X_max⟩ and σ(X_max) give, if analysed separately, somewhat different results. The shape-fitting analysis is obviously the most fundamental and most sensitive method, since, for example, it involves even the faint wings of the distribution. Apart from that, the shape-fitting analysis demonstrated a degeneracy effect where two different mass compositions correspond to the same first two moments. For this reason, in the present paper, we choose the results obtained from the shape-fitting analysis of the Auger data <cit.>, with the measured fractions of four nuclei species: Fe, N, He and p. These fractions, as determined from the Auger measurements <cit.>, reveal some uncertainties due to the different hadronic interaction models, namely QGSJet <cit.>, EPOS-LHC <cit.> and Sybill <cit.>.

The important result obtained in <cit.> is given by a very small fraction of Iron at all energies and for all interaction models, except EPOS-LHC at the two highest energy bins (see the upper panel in Fig. 4 of <cit.>). The other important result, as mentioned in the Introduction and shown in Fig. <ref>, is a large fraction of p+He, consistent with unity. It is interesting to note that both effects have a natural cosmological explanation. Among the heaviest nuclei, Iron is the most natural element to be produced in Supernova (SN) explosions, and the absence of Iron in UHECR implies that other heavy elements must be absent too. Their suppression in the form Fe/p ≪ 1 is very natural for the extragalactic gas and extragalactic cosmic rays. The enhancement of the p+He component has the same nature. At the cosmological epoch of recombination, protons and Helium nuclei were the dominant components and heavy metals were almost completely absent. Meanwhile, the production of metals is compulsory. It is needed to provide the cooling of ordinary stars during their evolution, including the pre-SN phase. The later stage of reionization in the universe, as detected by the WMAP <cit.> and Planck <cit.> satellites, occurs at redshift z = 11.0 ± 1.4 and z ≲ 10.0, respectively. This stage needs at least two early generations of stars with low metallicity, Pop III and Pop II stars. They inject into the extragalactic space a small amount of heavy metals. The main contribution to the Iron observed in the extragalactic space (and thus in extragalactic cosmic rays) is given by the present-time SN explosions.
This scenario is confirmed by the WMAP and Planck observations of the reionization of the Universe and by the observations of the Lyα forest, which indicate that extragalactic space had a very low fraction of heavy elements, at the level Z ∼ 10^-3.5 Z_⊙, at redshift z ∼ 5, see e.g. <cit.>. Iron and other heavy metals are injected into extragalactic space mainly during a short interval Δz at z ∼ 0, mostly due to explosions of the last generation of SNe. This scenario is similar to the model of UHECR produced mostly near our Galaxy <cit.>. One may conclude that Hydrogen and Helium, as the main products of primordial cosmological nucleosynthesis, together with the suppression of SN-produced Iron and other heavy metals in red-shifted gas, naturally result in a p+He dominated extragalactic gas and in UHECR accelerated at redshift z of a few.

In our calculations an additional simplifying assumption is used. Generically, we assume that all existing detectors do not distinguish reliably Helium from protons, so that one can consider the p+He flux as one light component, treating the fraction He/p as a free parameter of the model. However, we will start with the sum p+He as it comes from Fig. 4 (strip 3 for He and strip 4 for p) of <cit.>. Summing these two fractions, with errors summed in quadrature, we obtain the p+He flux presented in Fig. <ref>. One can see that the sum of these two components saturates well the total flux, at least in the case of the QGSJet and Sybil hadronic interaction models. This interesting fact confirms our assumption that the light fraction (p+He) weakly depends on energy and, with good accuracy, saturates the total flux, leaving little room for other components (e.g. N, which will be considered later as a CNO component).

We are now ready to calculate the energy spectra for the p+He models and to compare them with the spectra released by Auger and TA in 2015. We consider a power-law generation spectrum Q(E) = K_i E^-γ_g (i = p, He), with the same generation index γ_g for protons and Helium nuclei. We also assume that the sources are distributed homogeneously and uniformly, so that the calculated spectrum is universal, i.e. not affected by the propagation models. The energy losses include pair production, photo-pion production, and photo-disintegration for Helium on the CMB and EBL. Secondary protons from He and D photo-disintegration and also from neutron decays are included in the calculations. For the interaction with EBL photons the model of <cit.> is used. In all these calculations we follow <cit.>.

In Fig. <ref> and Fig. <ref> the computed spectra for the p+He models are presented for γ_g = 2.6 and γ_g = 2.2, respectively. A generic feature of the p+He spectra is the proton dominance at the highest energies, because of the GR steepening for Helium at E ≲ 5×10^18 eV due to photo-disintegration on the EBL. Therefore the GZK cutoff in the p+He model becomes compulsory, unless the maximum acceleration energy E_max^acc is below the GZK threshold E_max^GZK ≃ 50 EeV.

Consider first the case of generation index γ_g = 2.6, shown in Fig. <ref>. This generation index corresponds to the canonical proton modification factor in the dip model (see Section <ref>). Therefore, taking a small He/p ratio at generation, one should obtain a theoretical spectrum and a theoretical modification factor in agreement with the old (before 2015) observations. One may notice from Fig. <ref> a similar agreement between the theoretical and observational dips for Auger 2015 (hybrid data) and for TA 2015 (combined data). This agreement becomes worse at the highest energies. In Fig.
<ref> we plot the comparison of the observed and calculated fluxes for Auger 2015 (left panel) and TA 2015 (right panel). It is remarkable that, at the dip energies, the TA spectrum can be described by simply rescaling by a factor 1.2 the source emissivity needed to describe the Auger data. The behaviour of the flux at the highest energies is determined by the photo-pion production process. The maximum acceleration energy in Fig. <ref> is taken at the level of E_max = 10^21 eV. In other words, the theoretical spectrum shape in Fig. <ref> is exactly as predicted in the case of the GZK cutoff and, as follows from Fig. <ref>, it seems not well reproduced in either dataset. Auger shows an earlier cutoff, at energies below the GZK cutoff energy (≃ 50 EeV), while TA shows a flux suppression at energies slightly higher than this value.

The fraction of Helium allowed at the sources depends on the assumptions for the injection power-law index: assuming harder spectra, it is possible to increase the fraction of Helium. In Fig. <ref> we assume a maximum acceleration energy E_max = 8×10^19 eV and a flatter injection spectrum with γ_g = 2.2, which allows one to increase the fraction of Helium nuclei in the generation spectrum up to K_He/K_p = 0.35. This procedure improves only a little the agreement with the observational data of Auger at the highest energies, while the good agreement with the dip remains practically as before. These changes are linked with the GR steepening of the He spectrum due to photo-disintegration on the EBL radiation.

§ X_MAX AND MUONS

As discussed in the Introduction, the observation of the muon component of showers is a very efficient tool to measure the mass composition. The total number of muons N_μ in the shower and also the Muon Production Depth X^μ_max are sensitive indicators for it. These two quantities are expected to have correlations with X_max, measured using the fluorescent light, because all three quantities characterise the mass composition. Among the three biggest arrays, Auger, at present, is the most efficient one to measure the muon flux and the correlations of N_μ and X^μ_max with X_max. To pursue such measurements the Auger collaboration has just started an overall upgrade of the experimental set-up (Auger-Prime) to instrument each water-Cherenkov detector with a 4 m^2 plastic scintillator located on the top <cit.>, to disentangle the total signals produced by muons and electrons.

From the theoretical point of view, P. Younk and M. Risse <cit.> were the first to demonstrate that the statistical correlation of the shower-maximum depth X_max and the total number of muons N_μ in the shower depends on the mixture of different nuclei species in the UHECR flux and thus can be used for the analysis of the mass composition. This is a transparent theoretical idea, because both quantities are good characteristics of the mass composition, measured, however, in two independent experimental ways. In <cit.>, the authors introduced the statistical correlation factor r, see Eq. (1) in <cit.>, for the two quantities X_max and N_μ and demonstrated that r is very sensitive to single nuclei compositions and their mixtures. For example, a pure proton and a pure Iron composition give r_p = 0.0 and r_Fe = 0.7, respectively, while for equal ratios of both nuclei r_p+Fe = -0.51. These numbers are given for the case of an 'ideal detector'; for a realistic detector the difference is smaller. This example demonstrates the power of this method to shed additional light on the actual mass composition of UHECR; a toy illustration of the underlying anti-correlation is sketched below.
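The following toy Monte Carlo (a caricature built on the superposition model, not the detailed simulations of <cit.>; all numerical values are schematic) shows why a mixture produces a negative X_max–N_μ correlation: a nucleus of mass number A has a smaller mean X_max (shifted by an amount ∝ ln A) and a larger muon number (∝ A^0.1 at fixed E). For the pure compositions the toy gives r ≈ 0 because the shower-to-shower fluctuations of the two observables are drawn independently here; the realistic positive value r_Fe = 0.7 quoted above comes from intrinsic shower correlations not modelled in this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def showers(A, n):
    """Schematic shower observables for n primaries of mass number A."""
    x_max = 800.0 - 60.0 * np.log(A) + rng.normal(0.0, 40.0, n)  # g/cm^2
    n_mu = A**0.1 * (1.0 + 0.1 * rng.normal(0.0, 1.0, n))        # relative units
    return x_max, n_mu

def corr(masses, fractions, n=200_000):
    counts = rng.multinomial(n, fractions)
    xs, ms = zip(*(showers(A, c) for A, c in zip(masses, counts)))
    x, m = np.concatenate(xs), np.concatenate(ms)
    return np.corrcoef(x, m)[0, 1]

print(corr([1], [1.0]))           # pure p   -> ~0 (by construction)
print(corr([56], [1.0]))          # pure Fe  -> ~0 here (intrinsic correlations neglected)
print(corr([1, 56], [0.5, 0.5]))  # p+Fe mix -> clearly negative
```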
Recently, restricting the analysis to hybrid events, the Auger collaboration presented the measurement of the correlation between the depth of shower maximum X_max, as observed by the fluorescence telescopes, and the signal in the water-Cherenkov detectors <cit.>. The surface array of these detectors is sensitive to muons, particularly in the case of inclined showers with zenith angles between 20 and 90 degrees, in which muons provide from 40% to 90% of the signal S(1000) at a distance of 1000 m from the shower core. In <cit.>, instead of X_max and S(1000), the correlation quantities used were X_max^* and S_38^*, which are the values of X_max and S(1000) recalculated to a shower zenith angle of 38° and to a shower energy of 10 EeV. The statistical correlation factor measured and simulated in this way is denoted in <cit.> as r_G(X_max^*, S_38^*). The measured value obtained is r_G^data = -0.125 ± 0.024 (negative), while the simulated values of the correlation factors for pure protons (p) and for a mixture of protons and Helium (0.8p+0.2He) were found to be approximately r_G ≈ 0, for all three hadronic interaction models EPOS-LHC, QGSJetII-04 and Sibyll 2.1. This result seems to exclude both a pure proton model and a proton model with an admixture of 20% He; moreover, any p+He model without an admixture of heavier nuclei is disfavoured. The last conclusion is a strong argument against the pair-production dip, though this model may survive if He and heavier nuclei are included as small admixtures.

On the other hand, one should realise that the present correlation measurement is undoubtedly a step forward into the muon physics of UHECR, since the correlation involves muon signals. Auger <cit.> and TA <cit.> face a problem here in the form of a muon excess, a factor 1.5 - 2.0 higher than expected at primary energies above 10 EeV. The analysis of <cit.> shows that this excess is difficult to explain by any reasonable modification of the particle interactions and that it may be evidence for a light (proton) composition at these energies. The authors of <cit.> also interpret their measurements in terms of a proton primary composition. Moreover, the muon excess questions the accuracy of the measurement of the muon signals based on inclined showers, and requires some caution towards the mass-composition restrictions obtained in <cit.>. One may hope that in the near future the Auger-Prime measurements of N_μ and X^μ_max will allow one to find the correlation factors r(X_max, N_μ) and r(X_max, X^μ_max) with the needed accuracy.

As discussed in the Introduction, a powerful method to determine the mass composition with the help of muons is given by the measurement of the muon production depth of the shower, X_max^μ <cit.>. Muons trace their parents, heavy nuclei or protons, being produced at different heights in the atmosphere and propagating rectilinearly with velocity v ≈ c. The timing of muon signals, according to the detailed calculations <cit.>, will improve the accuracy of the X_max^μ method as well as the accuracy of its correlation with X_max.

The problem with the mass composition, as it is found in <cit.>, requires the presence of nuclei heavier than Helium. Here, assuming that the result of <cit.> will be confirmed in a more convincing way through the correlations (X_max, X_max^μ) and (X_max, N_μ), we also included in the p+He model CNO nuclei, with ratios at the source He/p = 0.2 and CNO/p = 0.1 (both allowed by the Auger data <cit.>).
The spectra obtained, shown in figure <ref> and calculated following the computation scheme of <cit.>, show quite good agreement with the experimental data of both Auger and TA. The latter, as before, are reproduced assuming a source emissivity multiplied by a factor 1.2 with respect to the Auger case.

§ CONCLUSIONS

As far as mass composition is concerned, there are three methods to analyse the fluorescence data: ⟨ X_max⟩, σ(X_max) and the shape-fitting analysis of the N(X_max) distribution <cit.>. As is well known, the first two methods (moments of the X_max distribution) do not agree well with each other, and both disagree with the shape-fitting analysis (compare the mass composition obtained in <cit.> and in <cit.>). In this paper we used the Auger shape-fitting analysis as the most reliable and free from false degeneracies, see <cit.> and the Introduction. In the shape-fitting analysis the mass composition is described in terms of four nuclear species: p, He, N (which we consider as CNO) and Fe (which can be considered as the Iron group, including the heavy metals). The results of this analysis are given as fractions of the fluxes of these four elements, which depend rather strongly on the hadronic interaction models used. The new and important result of this analysis is the very small fraction of Iron, compatible with zero, for practically all models of hadronic interactions. We argue that this result is natural for the standard cosmology with reionization of the universe.

Our first observation is that, using QGSJet II-4 and Sibyll 2.1 as the hadronic interaction model, the sum of the proton and Helium fractions saturates the total flux with good precision, while EPOS-LHC leaves more space for other elements, especially at the lowest and highest energies. Thus a reasonable model could be a p+He dominated injection with a small admixture of CNO nuclei.

Next we made the ad hoc assumption that at present all existing detectors cannot reliably distinguish Helium nuclei from protons, and we calculated the spectra for Helium and protons considering them as a single component with the same injection power-law index γ_g, equal to 2.6 and 2.2, taking the ratio He/p at the source so as to fit the spectra of Auger and TA. These ratios are 0.15 for γ_g=2.6 and 0.35 for γ_g=2.2. The calculated spectra are shown in Figs. <ref> and <ref>. The highest-energy part of these spectra is always dominated by protons, because high-energy He nuclei are photo-disintegrated in collisions with EBL photons.

In the case of γ_g=2.6 the observed dip is mainly produced by protons: it is the canonical case of the dip model considered in section <ref> in terms of the modification factor. The new element of this study is the inclusion of two errors: the flux error δ J/J and the energy error δ E/E. At very large statistics of events the energy error dominates, and using the δ J/J error, e.g. for a spectrum presented in the form E^3 J(E), is incorrect. This changes the status of the dip model in terms of the combined Auger spectrum (see Section <ref>). In the case γ_g=2.2 the dip at EeV energies of the spectrum is produced by both the Helium and proton components. However, the observed high-energy cutoff in the Auger spectrum is located at an energy below the predicted GZK cutoff. In any case, the shape of the spectrum alone is not enough to accept the model (see Section <ref>).

Recently, the very interesting idea of the correlation of X_max and muon signals was proposed <cit.>, and the Auger experiment <cit.> has developed it. Its results exclude a pure proton p and p+He mass composition.
We argue in Section <ref> that the accuracy of the found correlation is put into question by the anomalous muon excess observed in the Auger <cit.> and TA <cit.> experiments. With the near-future Auger-Prime measurements, the accuracy of the measured muon signals N_μ and X_max^μ will provide reliable values of the correlation functions r(X_max,N_μ) and r(X_max,X^μ_max), together with the conclusions concerning mass composition. With these data the accuracy of the correlation analysis will undoubtedly reach the needed level, providing important information on the mass composition.

In order to account for the correlation models of <cit.>, considering them as preliminary, we excluded the pure two-component p+He models, adding to them CNO nuclei. The CNO component seems also to appear in the analysis of <cit.> with the EPOS-LHC hadronic interaction model. As far as energy spectra are concerned, the model with ratios He/p=0.2 and CNO/p=0.1 gives better agreement with the observed spectrum than the two-component p+He model.

We conclude by emphasising that the understanding reached so far on the mass composition of UHECR is still not conclusive. The observations of mass composition are still contradictory and cannot exclude a pure light composition, while the observations of spectra agree fairly well with such a hypothesis. For these reasons the high-energy muon program of Auger, especially the measurement of X_max^μ(E), will be a crucial test of the models discussed in this paper.

We finish with the following note. This paper is mainly focused on the impact of the Auger analysis <cit.> on the mass-composition problem in UHECR. We avoided the discussion of some accompanying problems. In particular, we calculated the spectra at E ≥ 1 EeV to avoid the discussion about the transition from galactic to extragalactic cosmic rays. We also did not include the (possible) cosmological evolution of the sources, which can "artificially" improve the agreement of theoretical spectra with observations. We did not discuss the absence of the GZK cutoff in all three biggest UHECR detectors: HiRes, Telescope Array and Auger. We hope to address these problems in a forthcoming publication.

The present paper is an updated version of the arXiv paper 1703.08671v1.

§ ACKNOWLEDGEMENTS

We thank Karl-Heinz Kampert, Antonio Villar, Piera Ghia, Michael Unger, Markus Risse and Alexey Yushkov for valuable remarks and critical comments. We are grateful to the anonymous Referee for many valuable remarks.

[Auger-shape2014] A. Aab et al. (Auger Collaboration), Phys. Rev. D90, 122006 (2014).
[GZK] K. Greisen, Phys. Rev. Lett. 16, 48 (1966); G.T. Zatsepin and V.A. Kuzmin, JETP Letters 4, 78 (1966).
[Blum] G.R. Blumenthal, Phys. Rev. D1, 1596 (1970).
[BGG] V. Berezinsky, A. Gazizov, S. Grigorieva, Phys. Rev. D74, 043005 (2006).
[BGG-PL] V. Berezinsky, A. Gazizov, S. Grigorieva, Phys. Lett. B612, 147 (2005).
[Aletal] R. Aloisio, V. Berezinsky, P. Blasi, A. Gazizov, S. Grigorieva, B. Hnatyk, Astropart. Phys. 27, 76 (2007).
[HiResGZK] T. Abu-Zayyad et al., Phys. Rev. Lett. 92, 151101 (2004).
[AugerGZK] J. Abraham et al., Phys. Rev. Lett. 101, 061101 (2008).
[GR] N.M. Gerasimova and I.L. Rozental, JETPh 41, 488 (1961).
[ABG] R. Aloisio, V. Berezinsky, S. Grigorieva, Astropart. Phys. 41, 94 (2013).
[Auger-moments2014] A. Aab et al. (Auger Collaboration), Phys. Rev. D90, 122005 (2014).
[TAhybrid2015] R.U. Abbasi et al. (TA Collaboration), Astropart. Phys. 64, 49 (2015).
[MIA] T. Abu-Zayyad et al. (HiRes-MIA Collaboration), Astrophys. J. 557, 686 (2001).
[EPOS-LHC] K. Werner, F.M. Liu and T. Pierog, Phys. Rev. C74, 044902 (2006).
[QGSJET] S. Ostapchenko, Phys. Rev. D74, 014026 (2006).
[MPR-GG2011] D. Garcia-Gamez et al. (Auger Collaboration), contribution ICRC 2011, arXiv:1107.4807.
[MPR-RC2013] R. Conceicao, S. Andringa, L. Cazon and M. Pimenta (for the Auger Collaboration), EPJ Web Conf. 52, 03004 (2013), arXiv:1301.0507.
[mu-prod-rate] L. Collica et al. (Auger Collaboration), Eur. Phys. J. Plus 131 (2016), arXiv:1609.02498.
[KGknee] W.D. Apel et al. (KASCADE-Grande Collaboration), Phys. Rev. Lett. 107, 171104 (2011).
[Book] V.S. Berezinsky, S.V. Bulanov, V.A. Dogiel, V.L. Ginzburg, V.S. Ptuskin, Astrophysics of Cosmic Rays, North Holland (1990).
[AMIGA2015] B. Wundheiler et al. (Auger Collaboration), contribution ICRC 2015, PoS(ICRC2015) 324 (2015), arXiv:1509.03732.
[AugerPrime] A. Aab et al. (Auger Collaboration), The Pierre Auger Observatory Upgrade - Preliminary Design Report, arXiv:1604.03637.
[TAstatus] R.U. Abbasi et al. (TA Collaboration), Astrophys. J. 858, no. 2, 76 (2018).
[Fermi-LAT] A.A. Abdo et al. (Fermi-LAT Collaboration), Phys. Rev. Lett. 104, 101101 (2015).
[Felix] R.Y. Liu, A.M. Taylor, X.Y. Wang, F. Aharonian, Phys. Rev. D94, 043008 (2016).
[Gavish] E. Gavish, D. Eichler, Astrophys. J. 822, 56 (2016).
[BGK] V. Berezinsky, A. Gazizov, O. Kalashev, Astropart. Phys. 84, 52 (2016).
[Auger2015] I. Valino et al. (Auger Collaboration), contribution ICRC 2015, PoS(ICRC2015) 271 (2015), arXiv:1509.03732.
[TA2015] D. Ivanov et al. (TA Collaboration), contribution ICRC 2015, PoS(ICRC2015) 349 (2015).
[Auger2017] F. Fenu et al. (Auger Collaboration), contribution ICRC 2017, PoS(ICRC2017) 486 (2017), arXiv:1708.06592.
[TA2017] J. Matthews et al. (TA Collaboration), contribution ICRC 2017, PoS(ICRC2017) 1096 (2017).
[Verzi2013] V. Verzi et al. (Auger Collaboration), contribution ICRC 2013, arXiv:1307.5059.
[Rapporteur2015] V. Verzi, rapporteur report ICRC 2015, PoS(ICRC2015) 015 (2016).
[ABB] R. Aloisio, V. Berezinsky, P. Blasi, JCAP 10, 020 (2014).
[Sibyll] E.J. Ahn et al., Phys. Rev. D80, 094003 (2009).
[WMAP] G. Hinshaw et al. (WMAP Collaboration), Astrophys. J. Suppl. Ser. 180, 225 (2009).
[Planck] R. Adam et al. (Planck Collaboration), Astron. Astrophys. 596, A108 (2016).
[lowFe] A. Songaila, Astrophys. J. 561, L153 (2001).
[aharonian] R.Y. Liu, A.M. Taylor, X.Y. Wang, F.A. Aharonian, Phys. Rev. D94, no. 4, 043008 (2016), arXiv:1603.03223.
[EBL] F.W. Stecker, M.M. Malkan and S. Scully, Astrophys. J. 648, 774 (2006).
[correl-th] P. Younk and M. Risse, Astropart. Phys. 35, 807 (2012).
[correl-exp] A. Aab et al. (Auger Collaboration), Phys. Lett. B708, 288 (2016).
[muon-excess] A. Aab et al. (Auger Collaboration), Phys. Rev. Lett. 117, 192001 (2016).
[TAmuon-excess] R. Takcisin et al., JPS Conf. Proc. 19, 011045 (2018).
[Ostap2016] S. Ostapchenko, XXV ECRS 2016 Proceedings, arXiv:1612.09461. | http://arxiv.org/abs/1703.08671v2 | {
"authors": [
"R. Aloisio",
"V. Berezinsky"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20170325101705",
"title": "Towards solving the mass-composition problem in ultra high energy cosmic rays"
} |
The Conformal Field Theory on the Horizon of BTZ Black Hole Chao-Guang Huang Revised 02/2017 ============================================================ For a given smooth compact manifold M, we introduce an open class 𝒢(M) of Riemannian metrics, which we call metrics of the gradient type. For such metrics g, the geodesic flow v^g on the spherical tangent bundle SM → M admits a Lyapunov function (so the v^g-flow is traversing). It turns out that the metrics of the gradient type are exactly the non-trapping metrics. For every g ∈𝒢(M), the geodesic scattering along the boundary Ṃ can be expressed in terms of the scattering map C_v^g: _̣1^+(SM) →_̣1^-(SM). It acts from a domain _̣1^+(SM) in the boundary (̣SM) to the complementary domain _̣1^-(SM), both domains being diffeomorphic. We prove that, for a boundary generic metric g ∈𝒢(M), the map C_v^g allows for a reconstruction of SM and of the geodesic foliation ℱ(v^g) on it, up to a homeomorphism (often a diffeomorphism). Also, for such g, the knowledge of the scattering map C_v^g makes it possible to recover the homology of M, the Gromov simplicial semi-norm on it, and the fundamental group of M. Additionally, C_v^g allows for a reconstruction of the naturally stratified topological type of the space of geodesics on M. We aim to understand the constraints on (M, g), under which the scattering data allow for a reconstruction of M and the metric g on it, up to a natural action of the diffeomorphism group 𝖣𝗂𝖿𝖿(M, Ṃ). In particular, we consider a closed Riemannian n-manifold (N, g) which is locally symmetric and of negative sectional curvature. Let M be obtained from N by removing an n-domain U, such that the metric g|_M is boundary generic, of the gradient type, and the homomorphism π_1(U) →π_1(N) of the fundamental groups is trivial. Then we prove that the scattering map C_v^g|_M makes it possible to recover N and the metric g on it.

§ INTRODUCTION

Let M be a compact connected and smooth Riemannian n-manifold with boundary. In this paper, we apply the Holographic Causality Principle (<cit.>, Theorem 3.1) to the geodesic flow on the space SM of unit tangent vectors on M. Our main observation is that the holographic causality is intimately linked to the classical inverse scattering problems. So the geodesic scattering is the focus of our present investigation.

Let us briefly explain what we mean by the scattering data on a given compact connected Riemannian manifold M with boundary Ṃ. For each geodesic curve γ ⊂ M which “enters" M through a point m ∈Ṃ in the direction of a unit tangent vector u ∈ T_m(M), we register the first “exit point" m' ∈Ṃ along γ and the exit direction, given by a unit tangent vector u' ∈ T_m'(M) at m'. Of course, this construction does not make sense for every geodesic γ on M: the set γ ∖ m may belong to the interior of M. In such a case, the geodesic γ through m ∈Ṃ never reaches the boundary again. In any case, when available, we call the correspondence {(m, v) ⇒ (m', v')}_(m, v) “the metric-induced scattering data".

We strive to restore the metric g on M, up to the action of M-diffeomorphisms that are the identity maps on Ṃ, from the scattering data[This resembles the problem of reconstructing the mass distribution from the gravitational lensing.]. This restoration seems harder when (M, g) has closed geodesics, or geodesics that originate at a boundary point but never reach the boundary Ṃ again. In special cases, the restoration of g is possible.
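In the simplest flat setting the scattering data can be written down explicitly. The following sketch is our own illustration (not part of the original text): M is a planar annulus with the Euclidean metric, so the geodesics are straight lines, and the map (m, u) ⇒ (m', u') amounts to finding the first boundary hit along the line; the radii R_OUT, R_IN are invented for the example.

import numpy as np

R_OUT, R_IN = 1.0, 0.4   # hypothetical annulus M = {R_IN <= |x| <= R_OUT}

def first_hit(p, u, r):
    """Smallest t > 0 with |p + t*u| = r, or None.
    Solves the quadratic |p|^2 + 2 t <p,u> + t^2 = r^2 (|u| = 1)."""
    b = np.dot(p, u)
    c = np.dot(p, p) - r * r
    disc = b * b - c
    if disc < 0:
        return None
    ts = [t for t in (-b - np.sqrt(disc), -b + np.sqrt(disc)) if t > 1e-12]
    return min(ts) if ts else None

def scatter(p, u):
    """Scattering data: entry (p, u) with u pointing inward is sent to the
    first exit point and the exit direction (unchanged for straight lines)."""
    hits = [t for t in (first_hit(p, u, R_IN), first_hit(p, u, R_OUT)) if t is not None]
    t = min(hits)
    return p + t * u, u

p = np.array([1.0, 0.0])                      # entry point on the outer circle
u = np.array([-np.cos(0.3), np.sin(0.3)])     # unit vector pointing inward
q, w = scatter(p, u)
print("exit point:", q, " exit direction:", w)

For this entry direction the line meets the inner circle before returning to the outer one, so the exit point lies on the inner boundary component; directions that miss the inner disk exit through the outer circle again.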
This conclusion is very much in line with the results from <cit.>, <cit.>, <cit.>, as well as with <cit.> - <cit.> and <cit.> - <cit.>. The recent paper <cit.>, which reflects the modern state of the art, contains the strongest results. Recall that there are examples of two analytic Riemannian manifolds with isometric boundaries and identical scattering (even lens) data, but with different C^∞-jets of the metric tensors at the boundaries (see <cit.>, Theorem 4.3)! However, these examples have trapped geodesics; the metrics there are fundamentally different from the ones we study here.

Moving towards the goal of g-reconstruction from the scattering data, we introduce a class of metrics g which we call metrics of the gradient type (see Definition <ref>). By Lemma <ref>, the gradient-type metrics are exactly the non-trapping metrics. In Theorem <ref>, we prove that, given any compact connected Riemannian n-manifold (N, g) with boundary that admits an ε-flat triangulation, where ε > 0 is a universal constant that depends only on dim(N) (see Definition <ref>), it is possible to delete several smooth n-balls {B_α} from N, so that M = N ∖ int(∐_α B_α) is diffeomorphic to N, and the restriction g|_M is of the gradient type. In particular, any connected M with boundary admits a gradient-type Riemannian metric g, provided that M admits an ε-flat triangulation, for a sufficiently small ε > 0, with respect to some other metric g̃. The gradient-type metrics form an open nonempty set 𝒢(M) in the space ℛ(M) of all Riemannian metrics on M (Corollary <ref>).

Then we introduce another class of Riemannian metrics g on M such that the boundary Ṃ is “generically curved" in g (see Definition <ref>). We call such g geodesically boundary generic, or boundary generic for short. We denote by 𝒢^†(M) the space of geodesically boundary generic metrics of the gradient type. We speculate (see Conjecture 2.2) that, for any M, the space 𝒢^†(M) is open and dense in 𝒢(M), and prove that it is indeed open (see Theorem <ref>). We also consider a subspace 𝒢^(M) ⊂𝒢^†(M), formed by metrics g for which the geodesic vector field v^g on SM is traversally generic in the sense of Definition 3.2 from <cit.>. Again, 𝒢^(M) is open in 𝒢^†(M).

In Theorem <ref>, the main result of this paper, we prove that, for a metric g ∈𝒢^†(M), the geodesic flow v^g on SM is topologically rigid for given scattering data. This means that, when two scattering maps, C_v^g_1 and C_v^g_2, are conjugated with the help of a smooth diffeomorphism Φ^:̣(̣SM_1) →(̣SM_2), then the un-parametrized geodesic flows on SM_1 and SM_2 are conjugated with the help of an appropriate homeomorphism (often a diffeomorphism) Φ: SM_1 →SM_2 which extends Φ^$̣. In fact, for all metrics of the gradient type, the geodesic field v^g on SM allows for arbitrarily accurate C^∞-approximations by boundary generic and traversing (or even traversally generic) fields w on SM. For such w, the topological restoration of the w-induced 1-dimensional oriented foliation ℱ(w) on SM from the new “scattering data" C_w: _̣1^+SM(w) →_̣1^-SM(w) becomes possible. However, the difficulty is to find the approximating field w in the form v^g̃ for some metric g̃ on M.

Let 𝒯(v^g) denote the space of geodesics in M. In Theorem <ref>, we prove that, for any metric g ∈𝒢^†(M), the scattering data are sufficient for a reconstruction of the stratified topological type of 𝒯(v^g). In general, 𝒯(v^g) is not a smooth manifold, but for g ∈𝒢^(M), it is a compact CW-complex <cit.>. For g ∈𝒢^†(M), the space 𝒯(v^g) carries some “surrogate smooth structure" <cit.>.
This structure is also captured by the scattering data. In Theorem <ref>, we prove that, for any g ∈𝒢^†(M), the geodesic scattering map C_v^g allows for a reconstruction of the homology spaces H_∗(M; ℝ) and H_∗(M, Ṃ; ℝ), equipped with the Gromov simplicial semi-norms (see <cit.> for the definition). In particular, the simplicial volume of the relative fundamental cycle [M, Ṃ] can be recovered from the scattering map. If dim M ≥ 3, the geodesic scattering map C_v^g also allows for a reconstruction of the fundamental group π_1(M), together with all the homotopy groups {π_i(M)}_i < dim M. Moreover, if the tangent bundle of M is trivial, C_v^g allows for a reconstruction of the stable topological type of the manifold M.

Let (N, g) be a closed smooth locally symmetric Riemannian n-manifold, n ≥ 3, of negative sectional curvature. In Theorem <ref>, we prove that, if M is obtained from N by removing the interior of a smooth n-ball so that g̃ =_𝖽𝖾𝖿 g|_M is of the gradient type and boundary generic, then the knowledge of the scattering map C_v^g̃: S^n-1× D^n-1→ S^n-1× D^n-1 makes it possible to reconstruct N and the metric g, up to a positive scalar factor. However, this result does not imply the possibility of reconstructing g̃ from C_v^g̃.

In Section 4, we study the inverse scattering problem in the presence of additional information about the lengths of the geodesic curves that connect each point (m, v) ∈_̣1^+(SM) to the “scattered" point C_v^g(m, v) ∈_̣1^-(SM). This information is commonly called “the lens data". Our main result here, Theorem <ref>, claims the strong topological rigidity (see Definition <ref>) of the geodesic flow for given lens data. The proof of the theorem requires additional hypotheses about the metric g, which we call “balanced" (see Definition <ref>). We apply Theorem <ref> to the case of manifolds M, obtained from closed Riemannian manifolds (N, g) by removing special domains U ⊂ N. Then the scattering problem on (M, g|_M) is intimately linked to the geodesic flow on SN (see the “Cut and Scatter" Theorem <ref>). By combining Theorem <ref> with some classical results from <cit.>, <cit.>, we are able to reconstruct (N, g) from the scattering and lens data on M, provided that either N is locally symmetric and of a negative sectional curvature, or admits a non-vanishing parallel vector field (see Corollary <ref> and Corollary <ref>).

In general, our approach to the inverse scattering problem relies more on the methods of Differential Topology and Singularity Theory, and less on the more analytical methods of Differential Geometry and Operator Theory. Of course, this topological approach has its limitations: by itself it allows only for a reconstruction of the geodesic flow from the scattering and lens data. The assumption that the Riemannian manifolds in this study are smooth seems to be crucial for the effectiveness of our methods. Perhaps, similar results are valid under the weaker assumption that the n-manifolds we investigate have a C^2n-differentiable structure.

§ BOUNDARY GENERIC METRICS OF THE GRADIENT TYPE

Let M be a compact n-dimensional smooth Riemannian manifold with boundary, and g a smooth Riemannian metric on M. Let SM → M denote the tangent spherical bundle. With the help of g, we may interpret the bundle SM as a subbundle of the tangent bundle TM. The metric g on M induces a partially-defined one-parameter family of diffeomorphisms {ψ^t_g: SM →SM}, the geodesic flow. Each unit tangent vector u at a point m ∈ M ∖Ṃ determines a unique geodesic curve γ_(m, u) ⊂ M through m in the direction of u.
When m ∈Ṃ, the geodesic curve γ_(m, u) is uniquely defined for unit vectors u ∈ T_mM that point inside of M. By definition, ψ^t_g(m, u) is the point (m', u') ∈ SM such that the distance along γ_(m, u) from m' ∈γ_(m, u) to m is t, and u' is the tangent vector to γ_(m, u) at m'. We stress that ψ^t_g(m, u) may not be well-defined for all t ∈ℝ and all u ∈ TM: some geodesic curves γ may reach the boundary Ṃ in finite time, and some tangent vectors u ∈ TM|_Ṃ may point outside of M. However, such constraints are common to our enterprise (<cit.>, <cit.> - <cit.>), which deals with such boundary-induced complications.

In the local coordinates (x^1, …, x^n, p^1, …, p^n) on the tangent space TM, the equations of the geodesic flow are:

ẋ^α = ∑_β g^αβ(x) p_β,
ṗ_α = -1/2 ∑_β,γ (∂ g^βγ(x)/∂ x^α) p_β p_γ,

where (g^αβ(x)) denotes the inverse of the metric tensor (g_αβ(x)). This system can be rewritten in terms of the Hamiltonian function H^g(x, p) =_𝖽𝖾𝖿 1/2 ∑_β,γ g^βγ(x) p_β p_γ—the kinetic energy—in the familiar Hamiltonian form:

ẋ^α = ∂ H^g/∂ p_α,
ṗ_α = -∂ H^g/∂ x^α.

The projections of the trajectories of (<ref>) (or of (<ref>)) on M are the geodesic curves.

Let v^g ∈ T(TM) be the field on the manifold TM, tangent to the trajectories of the geodesic flow Ψ_t^g on TM. Note that the v^g-flow is tangent to SM ⊂ TM, so v^g|_SM ≠ 0. In fact, the integral trajectories of v^g are geodesic curves in the Sasaki metric 𝗀 = 𝗀(g) on T(TM) (<cit.>, Prop. 1.106).

Let X be a compact smooth (m+1)-manifold with boundary. Any smooth vector field v on X, which does not vanish along the boundary X̣, gives rise to a partition _̣1^+X(v) ∪_̣1^-X(v) of the boundary X̣ into two sets: the locus _̣1^+X(v), where the field is directed inward of X or is tangent to X̣, and _̣1^-X(v), where it is directed outwards or is tangent to X̣. We assume that v|_X̣, viewed as a section of the quotient line bundle T(X)/T(X̣) over X̣, is transversal to its zero section. This assumption implies that both sets ^̣±_1 X(v) are compact manifolds which share a common boundary _̣2X(v) =_𝖽𝖾𝖿(̣_̣1^+X(v)) = (̣_̣1^-X(v)). Evidently, _̣2X(v) is the locus where v is tangent to the boundary X̣.

Morse has noticed (see <cit.>) that, for a generic vector field v, the tangent locus _̣2X(v) inherits a similar structure in connection to _̣1^+X(v), as X̣ has in connection to X. That is, v gives rise to a partition _̣2^+X(v) ∪_̣2^-X(v) of _̣2X(v) into two sets: the locus _̣2^+X(v), where the field is directed inward of _̣1^+X(v) or is tangent to _̣2X(v), and _̣2^-X(v), where it is directed outward of _̣1^+X(v) or is tangent to _̣2X(v). Again, we assume that v|__̣2X(v), viewed as a section of the quotient line bundle T(X̣)/T(_̣2X(v)) over _̣2X(v), is transversal to its zero section.

For, so called, boundary generic vector fields (see <cit.> for a formal definition), this structure replicates itself: the cuspidal locus _̣3X(v) is defined as the locus where v is tangent to _̣2X(v); _̣3X(v) is divided into two manifolds, _̣3^+X(v) and _̣3^-X(v). In _̣3^+X(v), the field is directed inward of _̣2^+X(v) or is tangent to its boundary; in _̣3^-X(v), outward of _̣2^+X(v) or is tangent to its boundary. We can repeat this construction until we reach the zero-dimensional stratum _̣m+1X(v) = _̣m+1^+X(v) ∪_̣m+1^-X(v).

To achieve some uniformity in the notations, put _̣0^+X =_𝖽𝖾𝖿 X and _̣1X =_𝖽𝖾𝖿X̣. Thus a boundary generic vector field v on X gives rise to two stratifications:

X̣ =_𝖽𝖾𝖿_̣1X ⊃_̣2X(v) ⊃…⊃_̣m+1X(v),
X =_𝖽𝖾𝖿_̣0^+ X ⊃_̣1^+X(v) ⊃_̣2^+X(v) ⊃…⊃_̣m+1^+X(v),

the first one by closed smooth submanifolds, the second one—by compact ones. Here dim(X) = m+1, and dim(_̣jX(v)) = dim(_̣j^+X(v)) = m+1-j.
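The Hamiltonian system above is straightforward to integrate numerically. The following sketch is our own illustration, under the assumption of a conformal metric g = e^{2φ} · Id (so that the inverse metric is a scalar function); the hyperbolic half-plane, with ginv(x) = y^2, serves as an invented test case, and conservation of the kinetic energy H^g along the flow gives a sanity check.

import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference gradient of a scalar function of x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def geodesic_rhs(x, p, ginv):
    """xdot^a = g^{ab} p_b and pdot_a = -dH/dx^a for the conformal case,
    where ginv(x) = e^{-2 phi(x)} is the (scalar) inverse metric."""
    H = lambda y: 0.5 * ginv(y) * np.dot(p, p)
    return ginv(x) * p, -grad(H, x)

def rk4_step(x, p, dt, ginv):
    k1x, k1p = geodesic_rhs(x, p, ginv)
    k2x, k2p = geodesic_rhs(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p, ginv)
    k3x, k3p = geodesic_rhs(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p, ginv)
    k4x, k4p = geodesic_rhs(x + dt * k3x, p + dt * k3p, ginv)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6)

# test metric: hyperbolic half-plane, g = y^{-2} Id, hence ginv(x) = y^2
ginv = lambda x: x[1] ** 2
x, p = np.array([0.0, 1.0]), np.array([1.0, 0.0])
for _ in range(2000):
    x, p = rk4_step(x, p, 0.005, ginv)
print("point:", x, "  energy:", 0.5 * ginv(x) * np.dot(p, p))

The printed energy stays (up to the integration error) at its initial value 0.5, and the projected trajectory traces the expected half-plane geodesic, the unit semicircle through (0, 1).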
We will often use the notation “_̣j^± X" instead of “_̣j^± X(v)" when the vector field v is fixed or its choice is obvious.

As any non-vanishing vector field, the geodesic field v^g ∈ T(SM) divides the boundary (̣SM) into two portions: _̣1^+(SM), where v^g points inside of SM or is tangent to its boundary, and _̣1^-(SM), where it points outside of SM or is tangent to its boundary. In fact, _̣1^+(SM) and _̣1^-(SM) do not depend on g: the first locus is formed by the pairs (m, v), where m ∈Ṃ and v ∈ T_mM points inside of M or is tangent to Ṃ. Therefore, both _̣1^+(SM) and _̣1^-(SM) are homeomorphic to the tangent (n-1)-disk bundle of the manifold Ṃ. The locus _̣2(SM) = (̣_̣1^+(SM)) = (̣_̣1^-(SM)) is also g-independent; it is the space of the sphere bundle, associated with the tangent (n-1)-bundle T(Ṃ).

Let (M, g) be a compact connected smooth Riemannian manifold with boundary. We say that a metric g on M is of the gradient type if the vector field v^g ∈ T(SM) that governs the geodesic flow is of the gradient type: that is, there exists a smooth function F: SM → ℝ such that dF(v^g) > 0. This condition is equivalent to the property {F̃, H^g} > 0, where F̃: TM ∖ M → ℝ is a smooth extension of F, the Hamiltonian H^g is defined by equations (<ref>) and (<ref>), and { , } stands for the Poisson bracket of functions on TM. We denote by 𝒢(M) the space of the gradient-type Riemannian metrics on M.

Example 2.1. Consider a flat metric g on the torus T^2 = ℝ^2/ℤ^2 and form a punctured torus M by removing an open disk D^2 from T^2. If D^2 is convex in the fundamental square domain Q^2 ⊂ℝ^2, then there exist closed geodesics (with a rational slope with respect to the lattice ℤ^2) that miss D^2. For such M ⊂ T^2, the flat metric is not of the gradient type. However, it is possible to position D^2 ⊂ T^2 so that its lift D̃^2 to ℝ^2 will have intersections with any line that passes through Q^2 (see Figure 1). We restrict the flat metric g to M = T^2 ∖ D^2. For such a choice of (M, g|_M), thanks to Lemma <ref> below, the metric g|_M is of the gradient type. Moreover, by Theorem <ref>, any metric g' on M, sufficiently close to this flat metric g|_M, is also of the gradient type.

The set 𝒢(M) of Riemannian metrics g of the gradient type is open in the space ℛ(M) of all Riemannian metrics on M, considered in the C^∞-topology.

Let σ: M → TM denote the zero section. If dF(v^g) > 0 on SM, then F: SM → ℝ extends in a compact neighborhood U of SM in TM ∖σ(M) to a smooth function F̃: U → ℝ so that dF̃(v^g) > 0 in U. In this neighborhood, dF̃(v^g')|_U > 0 for all metrics g', sufficiently close to g. For such metrics g', the space of unit spheres S'M ⊂ TM is fiberwise close to SM ⊂ TM; in particular, we may assume that S'M ⊂ U. Recall that the geodesic field v^g' is tangent to S'M, thus dF̃(v^g') > 0 on S'M.

A Riemannian metric g on a connected compact manifold M with boundary is called non-trapping if (M, g) has no closed geodesics and no geodesics of infinite length (the latter are homeomorphic to an open or a semi-open interval).

Let M be a compact connected smooth manifold with boundary. A metric g on M is of the gradient type if and only if any trajectory of the geodesic flow is homeomorphic to a closed interval or to a singleton. In other words, the non-trapping metrics and the metrics of the gradient type are the same.

If, for a smooth function F, dF(v^g) > 0, then each v^g-trajectory is a singleton residing in (̣SM), or a closed segment with both of its ends residing in (̣SM) (we call such vector fields traversing).
Evidently, F prevents v^g from having a closed trajectory. Let γ be a trajectory that starts at a point w ∈ SM and is homeomorphic to a semi-open interval. So γ extends beyond any point on γ that can be reached from w (γ cannot “exit" SM in a finite time). Consider the closure K of γ. It is a compact and v^g-invariant set. So F attains its maximum at a point w_⋆ ∈ K. However, dF(v^g) > 0 at w_⋆ and a germ of a v^g-trajectory through w_⋆ belongs to K, a contradiction to the assumption that F attains its maximum at w_⋆. A similar argument rules out the (-v^g)-trajectories that are homeomorphic to a semi-open interval. Conversely, by Lemma 5.6 from <cit.>, any traversing field is of the gradient type. So v^g admits a Lyapunov function F.

Thus g is of the gradient type if and only if the image of any v^g-trajectory under the map SM → M is either a singleton in Ṃ or a compact geodesic curve γ whose ends reside in Ṃ (by our convention, γ does not extend beyond its two ends). In particular, any g ∈𝒢(M) has no closed geodesics in M and no geodesics that originate at the boundary Ṃ and are trapped in int(M) for all positive times.

Let (N, g) be an open Riemannian manifold such that no geodesic curve in N is closed or has an end that is contained in a compact set. Let M ⊂ N be a smooth compact codimension zero submanifold. Then the restriction g|_M is a metric of the gradient type, and so are all the metrics g̃ on M that are sufficiently close to g|_M. In particular, for any compact domain M with a smooth boundary in the Euclidean space 𝖤^n or in the hyperbolic space 𝖧^n, the Euclidean metric g_𝖤 or the hyperbolic metric g_𝖧 on M are of the gradient type, and so are all the metrics g̃ that are sufficiently close to g_𝖤 or g_𝖧, respectively.

Using the hypotheses, no positive-time geodesic γ ⊂ M in the metric g|_M is an image of a semi-open interval or a closed loop. By Lemma <ref>, the pair (M, g|_M) is of the gradient type. By Lemma <ref>, any metric g̃ on M, which is sufficiently close to g|_M, is of the gradient type as well.

In order to prove Theorem <ref> below, we will need a few lemmas, dealing with smooth triangulations of compact smooth Riemannian manifolds, triangulations that are specially adjusted to the given metric.

Let N be a smooth compact n-manifold. A smooth triangulation T: K → N is a homeomorphism from a finite simplicial complex K to N. The triangulation is assembled out of several homeomorphisms {T_j: Δ → N}_j, where Δ denotes the standard n-simplex Δ ⊂ ℝ^n+1, the restriction of T_j to the interior of each subsimplex Δ' ⊂ Δ being a smooth diffeomorphism. The homeomorphisms {T_j: Δ → N}_j commute with the affine maps of subsimplicies Δ' ⊂ Δ, the maps that assemble K out of several copies of Δ.

Consider a smooth triangulation T: K → N of a smooth compact n-manifold N with a Riemannian metric g. Let γ ⊂ T_j(Δ) be a geodesic arc, and γ_j =_𝖽𝖾𝖿 (T_j)^-1(γ) ⊂ Δ its preimage in the standard n-simplex Δ. We denote by l_𝖤(γ_j) its length in the Euclidean metric on Δ ⊂ ℝ^n+1. Let λ(γ_j) ⊂ Δ be the line segment that shares its ends with γ_j. We denote by l_𝖤(λ(γ_j)) its length.

Pick a number ε > 0. We say that a smooth triangulation T: K → N is ε-flat with respect to g if, for each index j and any geodesic arc γ ⊂ T_j(Δ), the inequality

l_𝖤(γ_j) < (1+ε) · l_𝖤(λ(γ_j))

is valid. Note that this inequality remains valid under the conformal scaling of the simplex Δ.

Let N be a compact smooth n-manifold, equipped with a Riemannian metric g, and U a convex domain in ℝ^n, equipped with the Euclidean metric g_𝖤. Consider a diffeomorphism S: U → S(U) ⊂ N.
Let γ ⊂ S(U) be a geodesic arc, and γ̃ its S-preimage. We denote by λ(γ̃) the segment in U that shares its ends with the arc γ̃. Assume that, for some ε > 0 and any geodesic arc γ ⊂ S(U), the Euclidean lengths of γ̃ and λ(γ̃) satisfy the inequality l_𝖤(γ̃) < (1+ε) · l_𝖤(λ(γ̃)). Then the arc γ̃ is contained in the δ-neighborhood of the segment λ(γ̃), where δ = l_𝖤(λ(γ̃)) · √(ε/2) · √(1+ε/2).

Let λ =_𝖽𝖾𝖿 λ(γ̃) and let ã, b̃ be the two ends of the segment λ. Put ℓ = d_𝖤(ã, b̃) = l_𝖤(λ(γ̃)). Consider the solid ellipsoid

ℰ = {c̃ ∈ U | d_𝖤(ã, c̃) + d_𝖤(c̃, b̃) < (1+ε)ℓ}.

The maximal distance from the main axis μ of the ellipsoid ℰ to its boundary is the radius δ of the (n-2)-sphere, obtained by intersecting the bisector hyperplane H, orthogonal to λ at its midpoint, with ℰ. From elementary 2D-geometry, we get δ^2 + (ℓ/2)^2 = ((1+ε)(ℓ/2))^2. Thus δ = ℓ · √(ε/2) · √(1+ε/2). As a result, γ̃ must be contained in the δ-neighborhood of μ (μ contains the segment λ). Additional elementary computations show that ℰ is contained in the δ-neighborhood of λ as well. Take a typical point x ∈ γ̃. By the hypotheses, l_𝖤(γ̃) < (1+ε)ℓ. Thus (1+ε)ℓ > l_𝖤(γ̃) ≥ d_𝖤(ã, x) + d_𝖤(x, b̃). Therefore x ∈ ℰ. As a result, γ̃ is contained in the neighborhood of λ of the radius δ.

Let Δ be the standard n-simplex, residing in ℝ^n+1 and equipped with the Euclidean metric. We denote by Δ̱ and Δ̱^2 the first and the second barycentric subdivisions of Δ. For any vertex a ∈ Δ̱, we form its star St(a) in Δ̱^2. For any 0 < θ ≤ 1, we denote by St(a, θ) ⊂ Δ the θ-homothetic image of St(a), the center of the homothety being at a ∈ Δ̱. Consider the set St(θ) =_𝖽𝖾𝖿 ∐_a ∈ Δ̱ St(a, θ). By a line in Δ we mean the intersection of an affine line in ℝ^n ⊂ ℝ^n+1 with Δ.

There exists a number θ_⋆(n) ∈ (0, 1) such that every line ℒ ⊂ Δ has a point a that belongs to the interior of St(θ_⋆(n)).[It is desirable to find an elementary argument that explicitly computes θ_⋆(n) as a function of n.]

Let Δ' be a typical simplex of Δ̱. It will suffice to show that there exists a universal θ_⋆(n) ∈ (0, 1) such that any line ℒ ⊂ Δ' has a point b that belongs to the interior of some set St(a, θ_⋆(n)) ∩ Δ', for a vertex a ∈ Δ̱. For each θ ∈ (0, 1], consider the polyhedron P(θ) =_𝖽𝖾𝖿 Δ ∖ int(St(θ)). For each simplex Δ' of Δ̱, put P_Δ'(θ) = P(θ) ∩ Δ'. The space of lines 𝖫 in Δ' is compact; in fact, it is a continuous image of a compact subset 𝖪 of the Grassmanian 𝖦𝗋(n+1, 2). 𝖪 consists of the 2-planes through the point (1, 0, …, 0) ∈ ℝ^n+1 that have a nonempty intersection with Δ' ⊂ ℝ^n. Here the hyperplane ℝ^n ⊂ ℝ^n+1 is defined by equating the first coordinate with zero. This construction gives rise to a continuous map π: 𝖪 → 𝖫. Any line whose intersection with Δ' ⊂ ℝ^n is not a singleton (equivalently, whose intersection with Δ' is not a vertex of Δ') defines a unique point in 𝖦𝗋(n+1, 2).

Consider an increasing sequence {θ_i ∈ (0, 1)}_i that converges to 1. Contrary to the claim of the lemma, assume that for each i, there exists a line ℒ_i that is contained in the polyhedron P_Δ'(θ_i). Using the compactness of 𝖫, there exists a subsequence {ℒ_i_k}_k that converges to a limiting line ℒ ⊂ Δ'. Then ℒ ⊂ ⋂_k P_Δ'(θ_i_k) = P_Δ'(1). Note that ℒ misses the vertices of Δ'. On the other hand, by the construction of Δ̱, if a line ℒ ⊂ Δ' has a pair of distinct points b, c such that the segment [b, c] ⊂ P_Δ'(1), then ℒ ∩ int(St(1)) ≠ ∅. This contradiction proves that there exists θ_⋆(n) ∈ (0, 1) for which no line in Δ' is contained in P(θ_⋆(n)). The definition of the polyhedron P_Δ'(θ_⋆(n)) ⊂ P(θ_⋆(n)) is given by an affine (metric-independent) construction. So the polyhedra P_Δ'(θ_⋆(n)) for different Δ' ⊂ Δ̱ match automatically.
By the same token, P(θ_⋆(n)) does not contain any lines for any simplex Δ, not necessarily the standard one. Thus there is a number θ_⋆(n) < 1 such that, for each θ ∈ [θ_⋆(n), 1], every line ℒ ⊂ Δ hits the set St(θ).

Let N be a compact smooth Riemannian manifold. For any sufficiently small ε > 0, N admits a smooth ε-flat triangulation T: K → N.

Let θ_⋆(n) ∈ (0,1) be as in Lemma <ref>. Put θ'_⋆(n) =_𝖽𝖾𝖿 (1 + θ_⋆(n))/2, and let ε_⋆(n) denote the Euclidean distance between the sets St(θ_⋆(n)) and P(θ'_⋆(n)) in the standard simplex Δ.

* Let (N, g) be a closed connected smooth Riemannian n-manifold that admits an ε_⋆(n)-flat smooth triangulation[By Conjecture <ref>, any (N,g) will do.]. Then there exists a smooth n-ball B ⊂ int(N) such that the restriction of the metric g to M = N ∖ int(B) is of the gradient type.
* If (N, g) is a compact connected Riemannian n-manifold with boundary that admits an ε_⋆(n)-flat smooth triangulation, then, for each connected component _̣α N of the boundary Ṇ, there exists a relative n-ball (B_α, δ B_α) ⊂ (N, _̣α N), where δ B_α =_𝖽𝖾𝖿 B_α ∩ _̣α N is an (n-1)-ball, so that all the balls are disjoint and the restriction of the metric g to the manifold M = N ∖ ∐_α int(B_α) is of the gradient type. The manifolds M and N are diffeomorphic.

The idea is first to construct a number of disjoint balls in N so that each geodesic curve will hit some ball. Thus deleting such balls from N will produce a “geodesically traversing Swiss cheese". When N is closed, we will incorporate all the balls into a single smooth one. When N has a boundary, the balls will be incorporated into a domain whose removal from N does not change the smooth topology of N.

Let Ṯ and ^̱2T denote the first and second barycentric subdivisions of a given smooth triangulation T: K → N. As before, T is assembled from a collection of singular simplicies {T_j: Δ → N}_j. Put ε_⋆ = ε_⋆(n), θ_⋆ = θ_⋆(n). By the hypotheses, there exists a smooth ε_⋆-flat smooth triangulation T: K → N. Then, for any geodesic curve γ ⊂ N, its T_j-preimage γ_j in the simplex Δ is contained in the ε_⋆-neighborhood of a line ℒ_j ⊂ Δ. By Lemma <ref>, that line contains a point a_j ∈ St(θ_⋆) whose ε_⋆-neighborhood U_j ⊂ Δ is contained in the set St(θ'_⋆). So, by Definition <ref>, the curve γ_j must intersect the neighborhood U_j. As a result, γ has a non-empty intersection with the set T(St(θ'_⋆)), a finite disjoint union of n-dimensional 𝖯𝖫-balls {B_v}_v ∈ Ṯ, centered on the vertices {T(v)}_v ∈ Ṯ in N.

We can smoothen their boundaries by encapsulating each ball B_v into a smooth ball B̂_v so that B̂_v ∩ B̂_v' = ∅ for all distinct v, v' ∈ Ṯ. When N is closed, we place the disjoint union A =_𝖽𝖾𝖿 ∐_v ∈ Ṯ B̂_v inside of a single smooth ball B ⊂ N. This may be accomplished by attaching 1-handles to A so that the cores of the handles form a tree. Any geodesic in N hits B since it hits A.

In the case of a non-empty boundary Ṇ, attaching first some relative 1-handles {H_i ≈ D^n-1_+ × [0, 1]}, whose cores {0 × [0, 1]} reside in Ṇ, transforms A ∩ Ṇ into a disjoint union of several (n-1)-balls, each ball residing in its connected component of Ṇ. Again, in each component _̣α N of Ṇ, the attaching of the handles is guided by a tree. In the process, we encapsulate A into a disjoint union of several smooth balls that reside in the interior of N and several relative balls, each of which is touching the corresponding boundary component _̣α N. These relative balls are in 1-to-1 correspondence with the boundary components.
Then connecting the balls in the interior of N to the balls that touch the boundary Ṇ by 1-handles (which reside in int(N)) produces the desired relative pairs {(B_α, B_α ∩ Ṇ)}_α, one pair per component of Ṇ. Again, any geodesic in N hits the disjoint union of these pairs. The removal of the union from N results in a smooth manifold M, which is diffeomorphic to N.

If a compact connected smooth Riemannian n-manifold (M, g) with boundary admits an ε_⋆(n)-flat triangulation, then M admits a Riemannian metric g̃ of the gradient type. In fact, the subspace 𝒢(M) of such metrics g̃ is nonempty and open in the space ℛ(M) of all Riemannian metrics on M.

Use Theorem <ref> to conclude that 𝒢(M) is a nonempty set. By Lemma <ref>, 𝒢(M) is open in the space ℛ(M).

Note that the smooth balls, whose removal from N (the “geodesic Swiss cheese") delivers, by Theorem <ref>, a metric of the gradient type on M, are not necessarily convex in the original metric g.

For any compact smooth Riemannian manifold (N, g), there exists a finite disjoint union of smooth convex balls whose removal from N delivers a metric g_M of the gradient type on their complement M.

Let (M, g) be a compact connected Riemannian manifold with boundary.
* We say that a metric g on M is geodesically boundary generic if the geodesic vector field v^g ∈ T(SM) is boundary generic[See the discussion that precedes Definition <ref> or Definition 2.1 from <cit.>.] with respect to the boundary (̣SM) = SM|_Ṃ.
* We say that a metric g on M is geodesically traversally generic if the vector field v^g ∈ T(SM) is of the gradient type and is traversally generic[See Definition 3.2 from <cit.>.] with respect to (̣SM) = SM|_Ṃ.

We denote the space of all gradient-type metrics on M by the symbol 𝒢(M), the space of all geodesically boundary generic metrics of the gradient type on M by the symbol 𝒢^†(M), and the space of all geodesically traversally generic metrics on M by the symbol 𝒢^(M). So we get 𝒢^(M) ⊂𝒢^†(M) ⊂𝒢(M).

Remark 2.2. If (M, g) is such that there exists a geodesic curve γ ⊂ M whose arc is contained in Ṃ, then the metric g is not geodesically boundary generic. For example, the Euclidean metric on ℝ^3 is not geodesically boundary generic with respect to the ruled surface S = {z = x^2 - y^2}: indeed, S is comprised of lines (geodesics).

Example 2.2. Let M be a domain in the Euclidean plane ℝ^2, bounded by simple smooth closed curves. Then the flat metric g_𝖤 is boundary generic on M if and only if Ṃ is comprised of strictly concave and convex loops or arcs that are separated by the cubic inflection points. In particular, no line, tangent to the boundary, has an order of tangency that exceeds 3 (see Example 3.1 for the details).

How to formulate the property of the geodesic vector field v^g being boundary generic/traversally generic with respect to (̣SM) in terms of the geodesic curves and Jacobi fields in M and their interactions with Ṃ?

Remark 2.3. Of course, not any metric g on M is of the gradient type. At the same time, thanks to Theorem <ref>, the gradient-type metrics form a massive set. Examples of geodesically boundary generic metrics are also not so hard to exhibit. They require only a localized control of the geometry of Ṃ in terms of g (see Lemmas <ref> and <ref>). For instance, if all the components of Ṃ are either strictly convex or strictly concave in g, then g is geodesically boundary generic. In contrast, to manufacture a geodesically traversally generic metric is a more delicate task.
In fact, we know of only a few examples where gradient-type metrics are proven to be of the traversally generic type: these examples have gradient-type metrics for which the boundary Ṃ is strictly convex (see Corollary <ref>). However, we suspect that traversally generic metrics are abundant (see Conjecture <ref>). In any case, by Theorem <ref> below, the property of a metric g to be traversally generic is stable under small smooth perturbations of g. So we have only weak evidence for the validity of the following conjecture; however, the world in which it is valid seems to be a pleasing place...

The sets 𝒢^†(M) and 𝒢^(M) are open and dense in the space 𝒢(M).

The openness of 𝒢^†(M) and 𝒢^(M) in 𝒢(M) follows from the theorem below.

Let M be a compact smooth connected manifold with boundary. In the space ℛ(M) of all Riemannian metrics on M, equipped with the C^∞-topology, the spaces 𝒢^†(M) and 𝒢^(M) are open. Each of these spaces is invariant under the natural action of the smooth diffeomorphism group 𝖣𝗂𝖿𝖿(M) on ℛ(M). If a metric g on M is of the gradient type, then the geodesic field v^g on SM can be approximated arbitrarily well in the C^∞-topology by a traversally generic field ṽ ∈𝒱^(SM).

The construction of the geodesic flow g ⇒ v^g defines a continuous map

ℱ: ℛ(M) →𝒱(SM),

where 𝒱(SM) denotes the space of all vector fields on SM. By Theorem 6.7 and Corollary 6.4 from <cit.>, the subspace 𝒱^(SM), formed by traversally generic (and thus gradient-like) vector fields, is open in 𝒱(SM). Similarly, the boundary generic and traversing fields form an open set 𝒱^†(SM) ∩𝒱_𝗍𝗋𝖺𝗏(SM) in 𝒱(SM). Since the germ of the geodesic γ through a point m ∈ M in the direction of a given unit tangent vector u depends smoothly on the metric g, we conclude that ℱ is a continuous map. Therefore,

𝒢^(M) =_𝖽𝖾𝖿 ℱ^-1(𝒱^(SM)) and 𝒢^†(M) =_𝖽𝖾𝖿 ℱ^-1(𝒱^†(SM) ∩𝒱_𝗍𝗋𝖺𝗏(SM))

are open sets in ℛ(M). By definition, for any g ∈𝒢(M), the geodesic field v^g on SM is of the gradient type (and thus traversing). Again, by Theorem 6.7 from <cit.>, v^g can be approximated by a traversally generic field ṽ ∈𝒱^(SM). Note however that the projections of ṽ-trajectories under the map SM → M may not stay C^∞-close to the geodesic lines in the original metric g, due to the concave boundary effects. By Theorem <ref>, 𝒱^†(SM) ≠∅. Nevertheless, the question whether 𝒢^(M) ≠∅ for a given M remains open! Evidently, by the “naturality" of the geodesic flow, the spaces 𝒢^†(M), 𝒢^(M) are invariant under the natural action of the smooth diffeomorphism group 𝖣𝗂𝖿𝖿(M) on ℛ(M).

Remark 2.4. Let M be a codimension 0 compact submanifold of a compact Riemannian manifold N such that M ⊂𝗂𝗇𝗍(N). If a metric g on the ambient N is of the gradient type, then, by Corollary <ref>, its restriction g|_M is of the gradient type on M. Of course, if g is geodesically traversally generic on a compact manifold N, it may not be geodesically traversally generic on M.

Example 2.3. Consider the hyperbolic space ^̋n with its virtual spherical boundary ̣̋^n and hyperbolic metric g. The space ^̋n is modeled by the open unit ball in the Euclidean space 𝖤^n. Each geodesic line hits ̣̋^n at a pair of points, where it is orthogonal (in the Euclidean metric) to ̣̋^n. For each oriented geodesic line γ through a given point x ∈^̋n in the direction of a given vector w, consider the distance d_(̋x, w) between x and the unique point y ∈γ ∩̣̋^n that can be reached from x by moving along γ in the direction of -w; that is, d_(̋x, w) is the length of the circular arc (y, x) ⊂ γ in 𝖤^n.
Evidently, d_(̋x, w) is strictly increasing as one moves along the oriented γ. Let M ⊂^̋n be a compact codimension 0 smooth submanifold, equipped with the induced hyperbolic metric. Then the geodesic field v^g on the space SM is of the gradient type, since d_(̋x, w) is strictly increasing along the oriented trajectories of v^g. Again, by Theorem <ref>, any metric g' on M, sufficiently close to the hyperbolic metric g|_M, is also of the gradient type.

§ THE GEODESIC SCATTERING AND HOLOGRAPHY

In this section, we will apply the Holographic Causality Principle <cit.> to geodesic flows on the spaces SM of unit tangent vectors on compact smooth Riemannian manifolds M with boundary. We will be guided by a single important observation: if a metric g on M is of the gradient type, then the causality map

C_v^g: _̣1^+(SM) →_̣1^-(SM),

introduced in <cit.> (for generic smooth traversing vector fields v on compact manifolds X with boundary), is available! To get a feel for the nature of the causality map from <cit.>, the reader may glance at Figure 3. It depicts the causality map C_v for a traversing field v on a surface X with boundary.

The map C_v^g represents the g-induced geodesic scattering: indeed, with the help of C_v^g, each unit tangent vector u ∈ T^+M|_Ṃ is mapped (“scattered") to a unit tangent vector u' ∈ T^-M|_Ṃ. Here T^± M|_Ṃ⊂ TM|_Ṃ denote the half-spaces, formed by vectors along Ṃ that are tangent to M and point inside/outside of M. We will employ boundary generic, or even traversally generic, metrics g of the gradient type in order to control well the local structure of the causality map C_v^g.

For each tangent vector w ∈ T_xM|_Ṃ, consider its orthogonal decomposition w = n ⊕ u with respect to the metric g, where n is the exterior normal to Ṃ and u ∈ T_x(Ṃ). We denote by τ_g(x, w) the point (x, -n ⊕ u) ∈ T_xM|_Ṃ.

The g-independent manifolds _̣1^+(SM)(v^g) and _̣1^-(SM)(v^g) are diffeomorphic via the orientation-reversing involution τ_g: _̣1(SM) →_̣1(SM).

Examining the definition of the strata ^̣±_1(SM)(v^g) and the construction of τ_g, we see that τ_g maps _̣1^+(SM)(v^g) to _̣1^-(SM)(v^g) by an orientation-reversing diffeomorphism.

Given a compact Riemannian manifold (M, g), we denote by ℱ(v^g) the oriented 1-dimensional foliation on SM, produced by the geodesic field v^g. Let ψ^t: SM → SM be the local diffeomorphism generated by the v^g-flow; for each x ∈ SM, the image ψ^t(x) is well-defined only for some values of t, a time interval in ℝ.

Given two compact Riemannian n-manifolds, (M_1, g_1) and (M_2, g_2), consider the geodesic fields v^g_1 on SM_1 and v^g_2 on SM_2.
* We say that the metrics g_1 and g_2 are geodesic flow topologically strongly conjugate if there is a homeomorphism Φ: SM_1 → SM_2 such that (Φ∘ψ^t_1)(x) = (ψ^t_2 ∘Φ)(x) for all x ∈ SM_1 and all moments t ∈ℝ for which ψ^t_1(x) is well-defined[This implies that Φ preserves the lengths of the corresponding trajectories in the Sasaki metrics, and thus the lengths of the corresponding geodesic curves in M_1 and M_2 are equal.]. The restriction of Φ to each v^g_1-trajectory is required to be an orientation-preserving diffeomorphism.
* We say that the metrics g_1 and g_2 are geodesic flow topologically conjugate if there is a homeomorphism Φ: SM_1 → SM_2 such that it maps each leaf of ℱ(v^g_1) to a leaf of ℱ(v^g_2), the map Φ on every leaf being an orientation-preserving diffeomorphism.

Both notions of conjugacy come in different flavors by requiring that g_i, Φ^$̣, and Φ belong to various classes of C^k-smooth objects, where k = 0, 1, 2, …, ∞.
For example, we may consider Φ^,̣ (Φ^)̣^-1 of the class C^∞, while Φ, (Φ)^-1 may be just homeomorphisms (of the class C^0).

For complete (in particular, closed) manifolds, the investigation of geodesic flow topologically conjugate metrics[These investigations employ a notion of geodesic conjugacy similar to the one in the first bullet of Definition <ref>.] led to a variety of strong results <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Let us describe their spirit: under certain conditions, imposed on the metrics a priori, the geodesic flow topological conjugacy implies an isometry of the underlying metrics! In some cases, to establish the isometry, one needs to know also the lengths of geodesic lines (the so-called lens data). In particular, in <cit.>, the following result has been established.

(Croke, Eberlein, Kleiner) Let (N_1, g_1) and (N_2, g_2) be two closed Riemannian manifolds, dim N_1 = dim N_2 ≥ 3. Assume that both manifolds have nonpositive sectional curvatures and that one of them has rank k ≥ 2. Denote by {ψ^t_i}_t ∈ℝ, i = 1, 2, the g_i-induced geodesic flow on SN_i. If there is a homeomorphism Φ: SN_1 → SN_2 such that Φ∘ψ^t_1 = ψ^t_2 ∘Φ for all t ∈ℝ, then g_1 and g_2 are isometric.

Besson, Courtois, and Gallot proved the following equally striking theorem (<cit.>, Theorem 1.3).

(Besson, Courtois, and Gallot) Let (N, g) be a closed locally symmetric manifold of a negative sectional curvature and of dimension ≥ 3. Then any Riemannian manifold (N_1, g_1) whose geodesic flow is C^1-conjugate[That is, the geodesic flows conjugating map Φ: SN_1 → SN_2 is a C^1-diffeomorphism.] to that of (N, g) is isometric to (N, g).

Motivated by the spirit of these theorems (as far as we understand, they do not apply directly to the geodesic flows on manifolds with boundary), we move towards linking the geodesic flow topologically conjugate Riemannian manifolds with the problem of inverse geodesic scattering. In its more daring formulation, the problem of inverse geodesic scattering asks to reconstruct the metric g on M from the scattering map C_v^g: _̣1^+(SM) →_̣1^-(SM), the reconstruction being sought up to the natural action of 𝖣𝗂𝖿𝖿(M, Ṃ), the group of diffeomorphisms that are the identity maps on Ṃ, on the Riemannian metrics (see Remark 3.2).

In the definition below, we assume that a given smooth compact Riemannian manifold M is embedded properly into a larger open manifold M̂, and the metric g on M is extended smoothly to a metric ĝ on M̂. However, the properties, described in the definition, do not depend on a particular extension (M̂, ĝ) ⊃ (M, g).

We say that a boundary generic metric g of the gradient type on a compact manifold M has the property 𝖠, if one of the following statements holds:
* Any geodesic curve γ ⊂ M, but a singleton, has at least one point of transversal intersection with the boundary Ṃ. If the geodesic γ ⊂M̂ is such that γ ∩ M is a singleton m, then γ is quadratically tangent to Ṃ at m.
* Any geodesic curve γ ⊂M̂ that is tangent to Ṃ is quadratically tangent to it.

(A toy numerical test of the quadratic-tangency condition is sketched below.)

Remark 3.1. We will see later that the first bullet in property 𝖠 implies that any trajectory of the geodesic flow v^g on SM, but a singleton, has at least one point of transversal intersection with the boundary (̣SM). The trajectories-singletons are quadratically tangent to (̣SM) in SM̂.
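To make the quadratic-tangency condition concrete, here is a minimal numerical sketch, our own illustration (not from the original text): M is taken to be the Euclidean unit disk, so geodesics are straight lines, and the order of tangency of a line to Ṃ at a contact point is read off from the order of vanishing of z(c(t)), where z is a boundary-defining function; the helpers z and tangency_order are hypothetical names.

import numpy as np
from math import comb

def z(x):
    """Boundary function of the toy domain: z = 0 on the boundary, z < 0 inside.
    Here M is the unit disk, so the geodesics are straight lines."""
    return x[0] ** 2 + x[1] ** 2 - 1.0

def tangency_order(p, u, t0, h=1e-2, kmax=3, tol=1e-5):
    """Order of vanishing of f(t) = z(p + t*u) at t = t0, estimated via
    central finite differences: 1 = transversal crossing, 2 = quadratic tangency."""
    f = lambda t: z(p + t * u)
    for k in range(1, kmax + 1):
        dk = sum((-1) ** (k - j) * comb(k, j) * f(t0 + (j - k / 2) * h)
                 for j in range(k + 1)) / h ** k
        if abs(dk) > tol:
            return k
    return kmax + 1   # tangency of higher order than we resolved

line_tangent = (np.array([-1.0, 1.0]), np.array([1.0, 0.0]))   # the line y = 1, touching (0, 1)
line_through = (np.array([0.0, -1.0]), np.array([0.0, 1.0]))   # a line crossing the boundary
print(tangency_order(*line_tangent, t0=1.0))   # -> 2  (quadratic tangency)
print(tangency_order(*line_through, t0=0.0))   # -> 1  (transversal intersection)

For the strictly convex disk every tangency is quadratic, so property 𝖠 holds in this toy setting; a boundary with an inflection would produce a contact of order 3 and violate the second bullet.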
In terms of <cit.> and <cit.>, the combinatorial tangency types of such trajectories do not belong to the closed poset (33)_≽ ∪ (4)_≽ ⊂ Ω^∙. Similarly, the second bullet in property 𝖠 implies that, if a v^ĝ-trajectory is tangent to (̣SM) at a point, then it is quadratically tangent there. In other words, the combinatorial tangency types of such trajectories do not belong to the closed poset (3)_≽ ⊂ Ω^∙.

The property 𝖠 reflects the shortcomings of our proof of the smooth version of the Holography Theorem 3.1 from <cit.>. We suspect that it is a superfluous assumption, and that the conjugating homeomorphism in the Holography Theorem is actually a diffeomorphism. Therefore, property 𝖠 is likely a superfluous constraint when the conjugating homeomorphism Φ: SM_1 →SM_2 is desired to be a diffeomorphism. Regrettably, we must include property 𝖠 as a hypothesis in some of the theorems to follow.

Remark 3.2. Note that the inverse scattering problems on Riemannian manifolds (M, g) have an unavoidable intrinsic ambiguity. It arises from the action of the M-diffeomorphisms that are the identity on Ṃ and whose differentials are the identity on TM|_Ṃ. Let us denote by 𝖣𝗂𝖿𝖿(M, Ṃ) the group of such diffeomorphisms. Each diffeomorphism ϕ∈𝖣𝗂𝖿𝖿(M, Ṃ) acts naturally on g, producing a new metric ϕ^∗(g). Since such ϕ maps geodesics in g to geodesics in ϕ^∗(g), (M, g) and (M, ϕ^∗(g)) share the same scattering map. Therefore, for a given random M and C_v^g, the best we can hope for is to reconstruct g up to the 𝖣𝗂𝖿𝖿(M, Ṃ)-action. For any ϕ∈𝖣𝗂𝖿𝖿(M, Ṃ), the geodesic flows v^g and v^ϕ^∗(g) on TM ∖ M are strongly conjugated with the help of the diffeomorphism Dϕ.

Our main result, Theorem <ref> below, claims:
When property𝖠is valid for one of the two manifolds, by the proof of the Holography Theorem 3.1 from<cit.>, it is valid for the other one. So in this case, by the Holography Theorem,Φis a diffeomorphism. Assume that a compact connected and smooth manifold M with boundary admits a boundary genericRiemannian metric g of the gradient type. Let ℱ(v^g) denote the oriented 1-dimensional foliation, induced by the geodesic flow. Then the scattering map C_v^g: _̣1^+(SM) →_̣1^-(SM) allows for a reconstruction of the pair (SM, ℱ(v^g)), up to a homeomorphism of SM which is the identity on (̣SM) and an orientation-preserving diffeomorphism on each v^g-trajectory. Ifg possess property 𝖠 from Definition <ref>, then it is possible to reconstruct the pair (SM, ℱ(v^g)), up to a diffeomorphism of SM which is the identity on (̣SM).Example 3.1. Consider a shellM, produced by removing a strictly convex domain from the interior of a strictly convex domain in the Euclidean space𝖤(so topologicallyMis a shell). Any geodesic lineinMthat is tangent to the boundary of the interior convex domain is transversal to the boundary of the exterior convex domain. The intersection∩Ṃof any geodesic line⊂𝖤that is tangent to the boundary of the exterior convex domain is a singleton. For such a pair(M, g_𝖤), the geodesic fieldv^gonSMis traversally generic; in particular, it is boundary generic and of the gradient type. Thus Theorem <ref> is applicable to(M, g_𝖤), as well as to any metricginM, sufficiently close tog_𝖤.Under the hypotheses of Theorem <ref>, the scattering maps faithfully distinguish between manifolds whose spherical tangent bundles have distinct topological types: Let (M_1, g_1) and(M_2, g_2) be two smooth compact connected Riemannian n-manifolds with boundaries. Let the metrics g_1, g_2 be of the gradient type and geodesically boundary generic.Assume that the boundaries Ṃ_1 and Ṃ_2 are diffeomorphic[This implies that (̣SM_1) and (̣SM_2) are diffeomorphic.], but the spaces SM_1 and SM_2 are not homeomorphic. Then no diffeomorphism Φ^:̣(̣SM_1) →(̣SM_2) conjugates the two scattering maps C_v^g_1 andC_v^g_2. Example 3.2. Recall that the classical knots are determined by the topological types of their complements inS^3or^3(the Gordon-Luecke Theorem <cit.>), while the links are not; in fact, there are infinitely many distinct links whose complements are all homeomorphic to the complement of the Whitehead link. Let us examine Corollary <ref>, while keeping these facts in mind.LetL_1 ⊂ D^3_1 ⊂ N_1andL_2 ⊂ D^3_2 ⊂ N_2be two links in closed3-folds(N_1, g_1)and(N_2, g_2), respectively. We denote byU_1andU_2the regular tubular neighborhoods ofL_1 ⊂ D^3_1andL_2 ⊂ D^3_2. PutM_1 = N_1 ∖(U_1),M_2 = N_2 ∖(U_2). We assume thatL_1andL_2have the same number of components; thus there exists a diffeomorphismψ: U_1 → U_2Ifg_1|_M_1andg_2|_M_2are boundary generic and of the gradient type, andSM_1andSM_2are not homeomorphic, then the scattering mapsC_v^g_1andC_v^g_2are not conjugated via any diffeomorphismΦ^:̣(̣SU_1) →(̣SU_2). So the scattering maps distinguish between the linksL_1andL_2with non-homeomorphic stabilized complementsSM_1andSM_2.WhenN_i = S^3, thenSM_iis homeomorphic toM_i × S^2.In particular, consider the sphereS^3, equippedwith the standard metricg_S. LetD^3_∙⊂ S^3be a smooth ball that properly contains a hemisphereS^3_+. Then any geodesic inS^3hitsD^3_∙.Take a pair of linksL_i ⊂ S^3,i=1, 2, such that their tubular neighborhoodsU_iare formed by attaching solid1-handles toD^3_∙. 
Then any geodesic on S^3 hits D^3_∙ and thus each U_i. Therefore the metric g_S on M_i = S^3 ∖ U_i is of the gradient type. Assuming that the spherical metric is boundary generic on each M_i (conjecturally, this property may be achieved by a smooth perturbation of ∂U_i), we conclude that if the two scattering maps on M_1 and M_2 are smoothly conjugated, then M_1 × S^2 must be homeomorphic to M_2 × S^2.

Note that a Riemannian manifold (M, g) produces a (2n-2)-bundle ξ^g over SM. It is a subbundle of T(SM), transversal to the 1-foliation ℱ(v^g). The isomorphism class of ξ^g is well-defined by v^g. In fact, for a gradient-like g, the distribution ξ^g is integrable: the tangent bundle to the (2n-2)-foliation {F^{-1}(c)}_{c ∈ ℝ}, where dF(v^g) > 0, delivers ξ^g. On the other hand, employing any v^g-invariant Riemannian metric 𝗀 on SM ⊂ TM, we may consider the (2n-2)-distribution (v^g)^⊥, orthogonal in 𝗀 to v^g. That distribution is locally invariant under the v^g-flow, and thus generates a (2n-2)-bundle τ^g over the space of un-parametrized geodesics 𝒯(v^g), such that Γ^∗(τ^g) = ξ^g, where Γ: SM → 𝒯(v^g) is the obvious map.

Under the notations and hypotheses of Theorem <ref>, the geodesic flows conjugating homeomorphism Φ: SM_1 → SM_2 (which extends Φ^∂) induces a bundle isomorphism Φ^∗: ξ^g_2 → ξ^g_1. Similarly, the Φ^∂-generated natural map Φ^𝒯: 𝒯(v^g_1) → 𝒯(v^g_2) of the spaces of geodesics induces the “tangent" bundle isomorphism (Φ^𝒯)^∗: τ^g_2 → τ^g_1.

By the proof of the Holography Theorem 3.1 from <cit.>, the homeomorphism Φ has the property Φ^∗(F_2) = F_1, where the function F_i: SM_i → ℝ is such that dF_i(v^g_i) > 0 (i = 1, 2). Since both geodesic flows are traversing, 𝒯(v^g_i) = ∂(SM_i)/C_v^g_i, where ∂(SM_i)/C_v^g_i denotes the quotient space by the partially-defined scattering map C_v^g_i. Since Φ^∂ ∘ C_v^g_1 = C_v^g_2 ∘ Φ^∂, the diffeomorphism Φ^∂: ∂(SM_1) → ∂(SM_2) generates the homeomorphism Φ^𝒯: 𝒯(v^g_1) → 𝒯(v^g_2). Moreover, by the proof of Theorem <ref>, the homeomorphism Φ^𝒯 has the property Γ_2 ∘ Φ = Φ^𝒯 ∘ Γ_1. Therefore, using that Γ_i^∗(τ_i) = ξ_i, we conclude that (Φ^𝒯)^∗: τ^g_2 → τ^g_1 is a bundle isomorphism.

Remark 3.3. If a pair (M, g) is such that the geodesic field v^g is traversing on SM, then the trajectory space 𝒯(v^g) — the space of un-parametrized geodesics on M — is a separable compact space. It is given the quotient topology via the obvious map π: SM → 𝒯(v^g). For a geodesically boundary generic metric g of the gradient type, the map π|: ∂(SM) → 𝒯(v^g) has finite fibers. The space of geodesics 𝒯(v^g), although not a manifold in general, for traversing geodesic flows, inherits a surrogate smooth structure from SM (as in Definitions 2.2 and 2.3 from <cit.>). This structure manifests itself in the form of the algebra C^∞(𝒯(v^g)) of smooth functions f ∈ C^∞(SM) such that the directional derivatives ℒ_v^g(f) = 0. The bundle τ^g may be viewed as a surrogate tangent bundle of the stratified singular space 𝒯(v^g), since its restriction to the open and dense set 𝒯(v^g, ω_0) of trajectories of the combinatorial tangency type ω_0 = (1,1) may be identified with the “honest" tangent bundle of the open manifold 𝒯(v^g, ω_0) (<cit.>, <cit.>).

Recall that for any traversing boundary generic field v on a compact smooth manifold X, each v-trajectory γ produces an ordered finite sequence ω = (ω_1, ω_2, …) of positive integral multiplicities, associated with the finite ordered set γ ∩ ∂X. These sequences ω can be organized in a universal poset Ω^∙ <cit.>.
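For orientation, here is a worked instance of these conventions (ours; it follows the multiplicity counts of <cit.> recalled above): a trajectory γ that crosses ∂X transversally, then touches ∂X with a quadratic tangency, and finally exits transversally has the combinatorial tangency type ω = (1, 2, 1). Its multiplicity and reduced multiplicity (defined below) are m(γ) = 1 + 2 + 1 = 4 and m'(γ) = 0 + 1 + 0 = 1. The minimal type ω_0 = (1, 1) from Remark 3.3 describes the generic trajectories: those that enter and exit transversally, with no tangencies to ∂X in between.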
In view of Remark 3.2 and Corollary 3.2 from <cit.>, Theorem <ref> has an instant, but philosophically important implication:

Assume that M admits a geodesically boundary generic Riemannian metric g of the gradient type. Then the scattering map C_v^g: ∂_1^+(SM) → ∂_1^-(SM) allows for a reconstruction of:

* the Ω^∙-stratified topological type of the space 𝒯(v^g) of un-parametrized geodesics on M,
* the isomorphism class of the “tangent" (2n-2)-bundle τ^g of the space 𝒯(v^g),
* the smooth topological type of the space 𝒯(v^g), provided that the property 𝖠 from Definition <ref> is valid.

By definition, 𝒯(v^g) is the quotient space of SM under the obvious map Γ that takes each v^g-trajectory to a singleton. Now just apply Theorem <ref> and Corollary <ref> to reconstruct the map Γ. By Corollary 3.2 from <cit.>, C_v^g allows for a reconstruction of the Ω^∙-stratified topological type of the space 𝒯(v^g). By combining Corollary <ref> with Theorem <ref>, we get the second claim. When property 𝖠 is valid, this application will lead to a reconstruction, up to an algebra isomorphism, of the algebra C^∞(𝒯(v^g), ℝ) of smooth functions on 𝒯(v^g), the kernel of the directional derivative operator ℒ_v^g: C^∞(SM, ℝ) → C^∞(SM, ℝ).

Given a boundary generic metric g on a compact smooth n-manifold M, how does one describe the Morse strata {∂_j^± SM(v^g)}_{1 ≤ j ≤ 2n-1} of ∂(SM) in terms of the local Riemannian geometry [like the curvature tensor of M and the normal curvature of ∂M] of the pair (M, ∂M)?

The next two lemmas represent a step towards answering Question 3.1. The language in which the answer is given is reminiscent of the language in <cit.>. As before, we assume that a given smooth compact Riemannian manifold M is embedded properly into a larger open manifold M̂, and the metric g on M is extended smoothly to a metric ĝ on M̂.

For a boundary generic metric g on M, the stratum ∂_j SM(v^g) consists of pairs (x, w) ∈ SM|_∂M such that the germ of the geodesic curve γ ⊂ M̂ through x in the direction of w is tangent to ∂M with the multiplicity m(x) = j - 1. Moreover, the stratum ∂_j^± SM(v^g) also has a similar description [Its exact formulation is described in the proof.] in terms of the germ of γ ⊂ M.

Take a smooth function z: M̂ → ℝ with the properties: (1) 0 is a regular value of z, (2) z^{-1}(0) = ∂M, (3) z^{-1}((-∞, 0]) = M [We may use the distance functions -d_g(∼, ∂M) in M and d_g(∼, ∂M) in M̂ ∖ M for the role of z.]. Let z̃: SM̂ → ℝ be the composition of the projection π: SM̂ → M̂ with z. Evidently, z̃: SM̂ → ℝ has the same three properties with respect to ∂(SM) as z: M̂ → ℝ has with respect to ∂M.

For any pair (x, w) ∈ ∂(SM), consider the germ of the geodesic curve γ ⊂ M̂ through x ∈ ∂M in the direction of w ∈ T_xM and its lift γ̃ ⊂ SM̂, the geodesic v^ĝ-flow curve through (x, w) ∈ ∂(SM). Then π: γ̃ → γ is locally an orientation-preserving diffeomorphism of curves. Therefore the k-jet jet^k(z̃|_γ̃) = 0 if and only if the k-jet jet^k(z|_γ) = 0. By Lemmas 3.1 and 3.4 from <cit.>, (x, w) ∈ ∂_j(SM)(v^g) if and only if jet^{j-1}(z̃|_γ̃) = 0; similarly, (x, w) ∈ ∂_j^+(SM)(v^g) if, in addition, d^j/dt̃^j (z̃|_γ̃)(x, w) ≥ 0 (here t̃ is the natural parameter along γ̃). Thanks to the orientation-preserving diffeomorphism π: γ̃ → γ, these properties of z̃|_γ̃ are equivalent to the similar properties jet^{j-1}(z|_γ) = 0 and d^j/dt^j (z|_γ)(x) ≥ 0 of z|_γ. The latter ones describe the multiplicity m(x) =_𝖽𝖾𝖿 j - 1 of tangency of γ to ∂M at x.

Let M be a smooth compact Riemannian manifold of dimension n.
For a boundary generic metric g on M, the order of tangency of any geodesic curve to ∂M does not exceed 2n-1.

For any boundary generic vector field w on SM, the locus ∂_{dim(SM)+1}(SM)(w) = ∅. In particular, since dim(SM) = 2n-1, we get ∂_{2n}(SM)(v^g) = ∅.

Let us rephrase the previous arguments in the proof of Lemma <ref> in terms of the exponential maps. For each point x ∈ ∂M, consider the ĝ-induced exponential map exp_x^ĝ: T_xM → M̂ and its locally-defined inverse ln_x^ĝ: M̂ → T_xM. We may assume that exp_x^ĝ is well-defined for all sufficiently small w ∈ T_xM and ln_x^ĝ for all y ∈ M̂ that are close to x. The local maps {exp_x^ĝ}_{x ∈ ∂M} can be organized into a single map exp^ĝ: TM|_∂M → M̂, well-defined in a neighborhood of the zero section σ: ∂M → TM|_∂M.

We denote by H_x: T_xM → ℝ the pull-back of the function z: M̂ → ℝ under the map exp_x^ĝ. Similarly, we denote by H: TM|_∂M → ℝ the pull-back of z: M̂ → ℝ under the map exp^ĝ. Pick w ∈ T_xM, where x ∈ ∂M. Then the order of tangency of the hypersurface ∂M = {z = 0} with the geodesic curve exp_x^ĝ(tw) ⊂ M̂ is equal to the order of tangency of the hypersurface {H_x(w) = 0} ⊂ T_xM with the line {tw}_{t ∈ ℝ}. Therefore the property (x, w) ∈ ∂_j(SM)(v^g) is equivalent to the property jet^{j-1}_{t=0}(H_x(tw)) = 0, where jet^{j-1}_{t=0} denotes the (j-1)-jet of the t-function H_x(tw) at t = 0, and w is a unitary tangent vector at x.

Let π: TM → M denote the tangent bundle, and σ: M → TM its zero section. Let π^∗: T^∗M → M denote the cotangent bundle. We say that two smooth functions f, h ∈ C^∞(TM, ℝ) share the same vertical k-jet field and write f ≡_π^k h, if jet^k_0(f|_{π^{-1}(x)}) = jet^k_0(h|_{π^{-1}(x)}) for all x ∈ M. Here jet^k_0(F|_{π^{-1}(x)}) stands for the k-jet at the point σ(x) = 0 ∈ T_xM of a smooth function F: TM → ℝ, being restricted to the π-fiber T_xM. We denote by 𝒥^k(π) [not to be confused with the standard notation “J^k(π)" that refers to the equivalence classes of local sections of the bundle π, a “horizontal" construction!] the quotient space of C^∞(TM, ℝ) by the equivalence relation “≡_π^k".

Consider the k-jet space Jet^k(ℝ^n, ℝ), jets being formed at the origin 0 ∈ ℝ^n. We may interpret Jet^k(ℝ^n, ℝ) as the space of real polynomials of degree k at most in n variables. Let 𝒮^j((ℝ^n)^∗) denote the space of homogeneous polynomials of degree j in n variables. Any jet τ ∈ Jet^k(ℝ^n, ℝ) has a unique representation as a sum τ_0 + … + τ_k, where the “homogeneous" summands τ_j ∈ 𝒮^j((ℝ^n)^∗). So the k-jets can be decomposed: Jet^k(ℝ^n, ℝ) ≈ ⊕_{j=0}^k 𝒮^j((ℝ^n)^∗).

The group 𝖦𝖫(n) acts on ℝ^n, and thus on Jet^k(ℝ^n, ℝ). Note that each subspace 𝒮^j((ℝ^n)^∗) is preserved by this action. Let Π: PM → M be the principal 𝖦𝖫(n)-bundle, associated with the bundle π: TM → M. We can form the bundle Jet^k_vert(π) → M, associated with the bundle Π: PM → M. Its total space is PM ×_𝖦𝖫(n) Jet^k(ℝ^n, ℝ) and its fiber is Jet^k(ℝ^n, ℝ). Similarly, we may construct a bundle 𝒮^j(π^∗) = P^∗M ×_𝖦𝖫(n) 𝒮^j((ℝ^n)^∗) over M, associated with the cotangent bundle π^∗: T^∗M → M. Employing the 𝖦𝖫(n)-equivariant isomorphism β: Jet^k(ℝ^n, ℝ) ≈ ⊕_{j=0}^k 𝒮^j((ℝ^n)^∗), we get a bundle isomorphism B: Jet^k_vert(π) ≈ ⊕_{j=0}^k 𝒮^j(π^∗).

Given a smooth bundle η: E(η) → M, we denote by Γ(η) the space of its smooth sections over M. By the definition of the space 𝒥^k(π), there is the obvious injection ℰ: 𝒥^k(π) ⟶ Γ(Jet^k_vert(π)). We use it to form the composite map 𝖳^k: C^∞(TM, ℝ) ⟶ 𝒥^k(π) ⟶^ℰ Γ(Jet^k_vert(π)) ≈ ⊕_{j=0}^k Γ(𝒮^j(π^∗)), the first arrow being the quotient map by the relation ≡_π^k. Therefore, for a fixed k, each function f ∈ C^∞(TM, ℝ) gives rise to a sequence of sections {T^k_j(f) ∈ Γ(𝒮^j(π^∗))}_{j ∈ [0, k]}. In particular, T^k_0(f) = f|_{σ(M)}, T^k_1(f) = d_π f|_{σ(M)}, where d_π stands for the differential along the π-fibers.
We call the sum 𝖳^k(f) = ∑_{j=0}^k T^k_j(f) the vertical Taylor polynomial of f of degree k. Each of the sections T^k_j(f) can be evaluated at any point (x, w) ∈ TM by forming the symmetric contravariant tensor T^k_j(f)(x) ∈ 𝒮^j(T^∗_xM) and evaluating it at the polyvector w^{⊗_S j} =_𝖽𝖾𝖿 w ⊗_S w ⊗_S … ⊗_S w (j factors), where “⊗_S" stands for the symmetrized tensor product of vectors. We denote by τ^k_j(f)(x, w) the result of this evaluation.

Now let us choose k = 2n - 1 = dim(SM). Consider the vertical Taylor expansion 𝖳^k(H) = ∑_{j=0}^k T_j^k(H) of the function H(x, w) =_𝖽𝖾𝖿 z(exp^ĝ_x(w)), where x ∈ M̂ is in the vicinity of ∂M and T_j^k(H)(x, ·) ∈ 𝒮^j(T^∗_xM̂) is a homogeneous polynomial of degree j on the fiber T_xM̂. As before, we denote by τ_j^k(x, w) =_𝖽𝖾𝖿 τ_j^k(H)(x, w) the result of evaluation of the symmetric differential form T_j^k(H) at the vector w^{⊗_S j} ∈ 𝒮^j(T_xM̂). Note that τ^k_0(x, w) = z(x).

Using the homogeneity of τ^k_j(x, w) in w and in light of the arguments that precede Definition <ref>, we conclude that (x, w) ∈ ∂_j(SM)(v^g) is equivalent to the requirement {τ^k_l(x, w) = 0}_{0 < l < j}, where w is a unit tangent vector and k ≥ j. Moreover, (x, w) belongs to the pure stratum ∂_j^∘(SM)(v^g) if and only if {τ^k_l(x, w) = 0}_{0 < l < j} and τ^k_j(x, w) ≠ 0. Although the function H depends on the choice of the auxiliary function z, the solution set {τ^k_l(x, w) = 0}_{0 < l < j} does not. Indeed, replacing the function z with the new function z·q, where q|_∂M > 0, and using the multiplicative properties of the local Taylor expansions, leads to a new system of constraints {τ̃^k_l(x, w) = 0}_{0 < l < j} which exhibits a “solvable triangular pattern", as compared with the original system {τ^k_l(x, w) = 0}_{0 < l < j}.

Remark 3.1. For each x ∈ ∂M, the degree l homogeneous in w equations {τ^k_l(x, w) = 0}_{l < j} define a z-independent real algebraic subvariety 𝒱_{< j}(x) of the unit sphere S_x ⊂ T_xM. The pure stratum ∂_j^∘(SM)(v^g) gives rise to the semialgebraic set 𝒱^∘_j(x) =_𝖽𝖾𝖿 𝒱_{< j}(x) ∖ 𝒱_{< j+1}(x). The subvariety 𝒱_{< j+1}(x) separates 𝒱_{< j}(x) into two semialgebraic sets: 𝒱^+_{< j}(x), where τ^k_j(x, w) ≥ 0, and 𝒱^-_{< j}(x), where τ^k_j(x, w) ≤ 0. These loci reflect the generalized concavity and convexity of ∂M (in the metric g) at x ∈ ∂M in the direction of a given unit vector w ∈ S(T_xM). Note that, for given x and j, the sets 𝒱_{< j}(x), 𝒱^+_{< j}(x), 𝒱^-_{< j}(x) may be empty. So, with ∂M and g being fixed, we may regard the requirements 𝒱_{< j}(x) ≠ ∅, 𝒱^+_{< j}(x) ≠ ∅, 𝒱^-_{< j}(x) ≠ ∅ as constraints, imposed on x ∈ ∂M.

Consider now the restrictions of the smooth functions τ_j^{2n-1}(H): TM̂ → ℝ to the subspace SM̂ in the vicinity of the boundary ∂(SM). Together, {τ_j^{2n-1}(H)}_j generate a smooth map τ⃗^{2n-1}(H): SM̂ → ℝ^{2n}. Let y_0, …, y_{2n-1} denote the coordinates in ℝ^{2n}. Let L_j be the subspace of ℝ^{2n}, defined by the equations {y_0 = 0, …, y_j = 0}. Consider the complete flag 𝖥(2n) = {ℝ^{2n} ⊃ L_0 ⊃ … ⊃ L_{2n-2} ⊃ L_{2n-1} = {0}}.

Let us focus on the interaction of the map τ⃗^{2n-1}(H): SM̂ → ℝ^{2n} with the flag 𝖥(2n). The considerations above imply that ∂_j(SM)(v^g) = (τ⃗^{2n-1}(H))^{-1}(L_{j-1}). The definition of the boundary generic geodesic field v^ĝ ∈ T(SM̂) with respect to ∂(SM) = {τ^{2n-1}_0(H) = 0} implies the linear independence of the differential 1-forms dτ^{2n-1}_0(H), …, dτ^{2n-1}_{j-1}(H) along the locus ∂_j(SM)(v^g) = {τ^{2n-1}_0(H) = 0, …, τ^{2n-1}_{j-1}(H) = 0}. In turn, their linear independence is equivalent to the non-vanishing of the j-form dτ^{2n-1}_0(H) ∧ … ∧ dτ^{2n-1}_{j-1}(H) ∈ ⋀^j T^∗(SM) along the locus ∂_j(SM)(v^g). In light of the considerations above, we get a “more constructive" version of Lemma <ref>.

Let n = dim M and k = 2n-1.
Consider an auxiliary function z: M̂ → ℝ with properties (1)-(3) from the proof of Lemma <ref> and the associated function H: TM̂|_∂M → ℝ, the composition of the map exp^ĝ: TM̂|_∂M → M̂ with z: M̂ → ℝ. The metric ĝ on M̂ is boundary generic with respect to ∂M = z^{-1}(0) if and only if the map τ⃗^k(H): SM̂ → ℝ^{2n} is transversal to each space L_j from the flag 𝖥(2n) [By the properties of z, τ⃗^k(H) is transversal to L_0 and (τ⃗^k(H))^{-1}(L_0) = ∂(SM).]. Moreover, ∂_j(SM)(v^g) = (τ⃗^k(H))^{-1}(L_{j-1}). In other words, ĝ is boundary generic if and only if, for each j ∈ [1, 2n], the differential j-form dτ^k_0(H) ∧ dτ^k_1(H) ∧ … ∧ dτ^k_{j-1}(H) ∈ ⋀^j T^∗(SM) does not vanish along the locus ∂_j(SM)(v^g) = {τ^k_0(H) = 0, τ^k_1(H) = 0, …, τ^k_{j-1}(H) = 0}.

Example 3.1. Let M be a domain in ℝ^2, bounded by a smooth simple curve ∂M, given by an equation {z(x_1, x_2) = 0}. We assume that 0 is a regular value of the smooth function z: ℝ^2 → ℝ and that z ≤ 0 defines M. Since ℝ^2 carries the Euclidean metric g = g_𝖤, the exponential map is given by the formula exp_x⃗(w⃗) = x⃗ + w⃗, where x⃗ ∈ ℝ^2 and w⃗ ∈ T_x(ℝ^2) ≈ ℝ^2. For each point x⃗ = (x_1, x_2), we consider the Taylor expansion T(x⃗, w⃗) of the function H(x⃗, w⃗) = z(x⃗ + w⃗) as a sum: T(x⃗, w⃗) = τ_0(x⃗) + τ_1(x⃗, w⃗) + τ_2(x⃗, w⃗) + τ_3(x⃗, w⃗) + …, where w⃗ = (w_1, w_2) ∈ T_x(ℝ^2), and the summands are of the form:

τ_0(x⃗) = z(x⃗),
τ_1(x⃗, w⃗) = τ^1_1(x⃗) w_1 + τ^1_2(x⃗) w_2,
τ_2(x⃗, w⃗) = τ^2_{11}(x⃗) w_1^2 + τ^2_{12}(x⃗) w_1 w_2 + τ^2_{22}(x⃗) w_2^2,
τ_3(x⃗, w⃗) = τ^3_{111}(x⃗) w_1^3 + τ^3_{112}(x⃗) w_1^2 w_2 + τ^3_{122}(x⃗) w_1 w_2^2 + τ^3_{222}(x⃗) w_2^3.

The manifold SM is diffeomorphic to the product M × S^1. The boundary ∂(SM) is a 2-torus, given by the equation {τ_0(x⃗) = 0}. The locus ∂_2 SM(v^g) = S(∂M) ⊂ ∂(SM) is given by the equations, homogeneous in w_1, w_2: τ_0(x⃗) = 0 and τ^1_1(x⃗) w_1 + τ^1_2(x⃗) w_2 = 0, where (w_1, w_2) is a unitary vector. In ∂_1^+(SM)(v^g), we get τ^1_1(x⃗) w_1 + τ^1_2(x⃗) w_2 ≥ 0. For a curve ∂M, boundary generic relative to g_𝖤, the locus ∂_2 SM(v^g) is represented by two smooth parallel loops. They separate ∂(SM) into two annuli, ∂_1^+(SM)(v^g) and ∂_1^-(SM)(v^g).

The locus ∂_3(SM)(v^g) ⊂ ∂_2(SM)(v^g) is given by the homogeneous equations τ_0(x⃗) = 0, τ^1_1(x⃗) w_1 + τ^1_2(x⃗) w_2 = 0, τ^2_{11}(x⃗) w_1^2 + τ^2_{12}(x⃗) w_1 w_2 + τ^2_{22}(x⃗) w_2^2 = 0, where (w_1, w_2) is a unitary vector. For a boundary generic (relative to g_𝖤) curve ∂M, the locus ∂_3(SM)(v^g) is a collection of an even number of points. The locus ∂_4(SM)(v^g) is given by adding the equation τ_3(x⃗, w⃗) = 0 to the previous homogeneous system of three equations. The resulting system must have only the trivial solution w_1 = 0, w_2 = 0 for all x⃗ ∈ ∂M.

For a boundary generic (relative to g_𝖤) curve ∂M, the differential 1-form dτ_0 ∈ T^∗(SM) does not vanish at the points of the surface {τ_0 = 0}, the differential 2-form dτ_0 ∧ dτ_1 ∈ ⋀^2 T^∗(SM) does not vanish at the points of the curve {τ_0 = 0, τ_1 = 0}, and the differential 3-form dτ_0 ∧ dτ_1 ∧ dτ_2 ∈ ⋀^3 T^∗(SM) does not vanish at the points of the finite locus {τ_0 = 0, τ_1 = 0, τ_2 = 0}. So a boundary generic (relative to g_𝖤) curve ∂M may be either strictly convex, or the union of several arcs, along which the strict concavity and convexity of ∂M alternate. These arcs are bounded by finitely many points of cubic inflection. No line tangent to ∂M has tangency of an order that exceeds 3.

The validity of the next conjecture would imply half of Conjecture <ref>, concerned with the space of boundary generic metrics being dense in the space of all metrics. Perhaps, Lemma <ref> could be useful in validating the conjecture.
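Before turning to the conjecture, here is a small symbolic sketch illustrating Example 3.1 (ours, not part of the original argument; it assumes the hypothetical data z(x_1, x_2) = x_1^2 + x_2^2 - 1, so that M is the unit disk and ∂M is the strictly convex unit circle). It computes the coefficients τ_j of the t-function H(x⃗ + tw⃗) and reads off the stratum index j of a pair (x⃗, w⃗):

import sympy as sp

x1, x2, w1, w2, t = sp.symbols('x1 x2 w1 w2 t')
z = x1**2 + x2**2 - 1                        # hypothetical boundary function; M is the unit disk
H = z.subs({x1: x1 + t*w1, x2: x2 + t*w2})   # pull-back of z along the Euclidean exponential map

# tau_j = j-th Taylor coefficient of the t-function H(x + t w) at t = 0
taus = [sp.expand(H.diff(t, j).subs(t, 0) / sp.factorial(j)) for j in range(4)]

def stratum_index(point, direction):
    # Returns j such that (x, w) lies in the pure stratum: the number of
    # consecutive vanishing coefficients tau_0, tau_1, ...; the tangency
    # multiplicity of the text is then m(x) = j - 1.
    vals = {x1: point[0], x2: point[1], w1: direction[0], w2: direction[1]}
    j = 0
    for tau in taus:
        if sp.simplify(tau.subs(vals)) != 0:
            break
        j += 1
    return j

print(stratum_index((1, 0), (0, 1)))   # 2: the vertical line is quadratically tangent at (1, 0)
print(stratum_index((1, 0), (1, 0)))   # 1: a transversal direction at the same boundary point

For the circle, τ_2 = w_1^2 + w_2^2 never vanishes on unit vectors, which matches the strict convexity: no tangency of order higher than 2 occurs.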
Let M be an open smooth n-manifold with a Riemannian metric g. Consider a smooth compact submanifold N ⊂ M of codimension one. Then there is a small smooth isotopy A: N × [0, 1] → M of the embedding, such that the metric g is boundary generic [That is, the geodesic flow v^g on SM is boundary generic with respect to the submanifold Σ N that is produced by restricting the spherical bundle SM → M to A(N × {1}).] with respect to the submanifold A(N × {1}).

The next proposition claims that these refined stratified convexity/concavity properties of the boundary ∂M with respect to the metric g on M can be recovered from the scattering data.

Assume that a smooth compact and connected n-manifold M admits a geodesically boundary generic Riemannian metric g of the gradient type. Then the scattering map C_v^g: ∂_1^+(SM) → ∂_1^-(SM) allows for a reconstruction of the loci {∂_j^± SM(v^g)}_{j ∈ [1, 2n-1]}. As a result, for each j ∈ [2, 2n-1], the locus of pairs (x, w) ∈ SM|_∂M, such that the germ of the geodesic curve γ ⊂ M through x in the direction of w is tangent to ∂M with the multiplicity j - 1, can be reconstructed from the scattering map.

By the proof of the Holography Theorem 3.1 from <cit.>, the causality map C_v: ∂_1^+X(v) → ∂_1^-X(v) of any boundary generic traversing field v on a smooth compact manifold X with boundary allows for a reconstruction of all the strata {∂_j^± X(v)}. With this fact in hand, the claims of the corollary are the immediate implications of Lemma <ref> and Theorem <ref>.

Assume that a metric g on M is geodesically boundary generic (see Definition <ref>). For an oriented geodesic curve γ ⊂ M, consider its intersections {x} with ∂M. We denote by m(x) the multiplicity of an intersection point x ∈ γ ∩ ∂M (see the proof of Lemma <ref>). We define the multiplicity of γ by the formula m(γ) =_𝖽𝖾𝖿 ∑_{x ∈ γ ∩ ∂M} m(x), and the reduced multiplicity of γ by the formula m'(γ) =_𝖽𝖾𝖿 ∑_{x ∈ γ ∩ ∂M} (m(x) - 1) (see <cit.> for the properties of these quantities).

The following corollary describes the ways in which geodesic arcs can be inscribed in M, provided that the metric on M is traversally generic, that is, the geodesic field v^g is traversally generic on SM (see Definition 3.2 from <cit.> for the notion of a traversally generic vector field). For example, if a metric on a surface M is traversally generic, then any geodesic curve may have two simple tangencies to ∂M at most. Similarly, any geodesic curve may have one cubic tangency to ∂M at most. No tangencies of multiplicity ≥ 4 are permitted.

Assume that an n-dimensional compact smooth manifold M admits a traversally generic Riemannian metric g. Then any geodesic curve in M has 2n - 2 simple points of tangency to the boundary ∂M at most. In general, any geodesic curve γ ⊂ M interacts with the boundary ∂M so that the multiplicity and the reduced multiplicity of γ satisfy the inequalities: m(γ) ≤ 4n - 2 and m'(γ) ≤ 2n - 2. Moreover, these inequalities hold for any metric g' sufficiently close, in the C^∞-topology, to g.

Since, by Lemma <ref>, the multiplicity of tangency of a geodesic curve γ to ∂M at a point x and the multiplicity of tangency of its lift γ̃ ⊂ SM to ∂(SM) at the corresponding point of γ̃ over x are equal, the claim follows from the second bullet in Theorem 3.5 from <cit.>. The last claim follows from two facts: (1) the field v^g depends smoothly on g, and (2) the space of traversally generic vector fields is open in the space 𝒱(SM).

In one special case of (M, g), the traversal genericity of the geodesic flow on SM comes “for free" at the expense of a very restricted topology of M. So a random manifold M does not admit a non-trapping metric g such that ∂M is convex!
Let M be a compact connected Riemannian n-manifold with boundary. Assume that the boundary ∂M is strictly convex with respect to a metric g of the gradient type on M. Then the geodesic field v^g on SM is traversally generic. Moreover, the manifold SM is diffeomorphic to the product D(∂M) × [0,1], the corners in the product being rounded. Here D(∂M) denotes the tangent (n-1)-disk bundle of ∂M.

For such a metric g, any geodesic curve γ ⊂ M, tangent to ∂M, is a singleton. Therefore ∂_2 SM(v^g) = ∂_2^- SM(v^g) — the field v^g on SM is convex. Under these assumptions, the geodesic field v^g on SM is traversally generic, since no strata {∂_j SM(v^g)}_j interact, with the help of the geodesic flow, through the bulk SM. Also, for a strictly convex ∂M and g of the gradient type, ∂_1^+ SM(v^g) = {(x, v) | x ∈ ∂M, v points inside M}. Thus ∂_1^+ SM(v^g) fibers over ∂M with a fiber being the hemisphere D^{n-1}_+ ⊂ S^{n-1} of dimension n - 1 = dim(∂M). This fibration is isomorphic to the unit disk tangent bundle of ∂M. Hence, by Lemma 4.2 from <cit.>, the manifold SM must be diffeomorphic to the product D(∂M) × [0,1], the corners in the product being rounded.

As a result, SM fibers over ∂M with the fiber D^n and over M with the fiber S^{n-1}. This property of SM puts severe restrictions on the topology of M: in particular, the space SM must be homotopy equivalent to ∂M. Note that M = D^n has the desired property: M × S^{n-1} ≈ ∂M × D^n.

The next corollary should be compared with Theorem D from <cit.>, which is, in a way, a stronger result than Corollary <ref>, but uses more data (the distance “across M" between points of ∂M).

Let M ⊂ ℍ^n be a codimension 0 compact smooth submanifold of the hyperbolic space, such that the metric g = (g_ℍ)|_M is geodesically boundary generic. Then the metric g is of the gradient type, and the scattering map C_v^g: ∂_1^+(SM) → ∂_1^-(SM) allows for a reconstruction of the pair (SM, ℱ(v^g)), up to a homeomorphism (a diffeomorphism when the property 𝖠 is valid) of SM which is the identity on ∂(SM).

In view of Corollary <ref> and Example 2.3, the hyperbolic metric on M is of the gradient type. By Theorem <ref>, the corollary follows.

One might hope that, for a “random" (bumpy) metric g ∈ 𝒢^†(M) on M, the reconstruction C_v^g ⇒ (M, g) is possible, up to the action of diffeomorphisms that are the identity on ∂M (see <cit.>, <cit.>, <cit.>, and especially <cit.> and <cit.> for special interesting cases of such reconstructions). At the moment, this is just wishful thinking, weakly supported by <cit.>. Here is the main difficulty in reaching this conclusion, as we see it now from the holographic viewpoint. Although, by Theorem <ref>, C_v^g allows for a reconstruction of the pair (SM, ℱ(v^g)), the fibration map SM → M seems to resist a reconstruction.
In other words, the scattering map C_v^g “does not know how to project the geodesic flow trajectories in SM to the geodesic curves in M". Perhaps, some additional structure should be brought into play. So, in the absence of a faithful reconstruction of the geometry g on M from the scattering data C_v^g, with a mindset of a “humble topologist", we will settle for less:

Assume that a smooth compact connected n-manifold M with boundary admits a boundary generic Riemannian metric g of the gradient type. Then the following statements are valid:

* The geodesic scattering map C_v^g: ∂_1^+(SM) → ∂_1^-(SM) allows for a reconstruction of the cohomology rings H^∗(M; ℝ) and H^∗(M, ∂M; ℝ), as well as for the reconstruction of the homotopy groups {π_i(M)}_{i < n}.
* Moreover, the Gromov simplicial semi-norms on the vector spaces H^∗(M; ℝ) and on H^∗(M, ∂M; ℝ) can be reconstructed from C_v^g. In particular, the simplicial volume of the fundamental cycle [M, ∂M] can be recovered from C_v^g.
* If, in addition, M has a trivial tangent bundle, then the stable topological (smooth when the property 𝖠 from Definition <ref> holds) type of M [Here we say that M and M' share the same stable topological (smooth) type, if M × S^{n-1} and M' × S^{n-1} are homeomorphic (diffeomorphic).] is also reconstructable from the scattering map.

Since, by Theorem <ref>, the topological type of SM can be reconstructed from the scattering map C_v^g, so is the cohomology/homology of SM with arbitrary coefficients. Moreover, the Gromov simplicial semi-norms on H^∗(SM; ℝ) and on H_∗(SM; ℝ), being invariants of the homotopy type of the pair (SM, ∂(SM)), can be recovered from C_v^g. Since H^n(M; ℝ) = 0 due to the property ∂M ≠ ∅, the fibration π: SM → M admits a section σ. Thus M is a retract of SM. The cohomology/homology spectral sequence of the spherical fibration π: SM → M is trivial, again since H^n(M; ℝ) = 0. As a result, H^∗(SM; ℝ) ≈ H^∗(M; ℝ) ⊕ H^{∗-n+1}(M; ℝ). Note that H^∗(M; ℝ) ≈ H^∗(SM; ℝ) for all ∗ < n-1. Therefore H^∗(M; ℝ), a direct summand of H^∗(SM; ℝ), can be recovered from H^∗(SM; ℝ) and thus from the scattering map C_v^g. Similar considerations are valid for H^∗(M, ∂M; ℝ), a direct summand of H^∗(SM, ∂(SM); ℝ).

The simplicial semi-norm of a homology class does not increase under continuous maps of spaces <cit.>. Therefore, with the help of σ^∗, we get that the natural homomorphisms π^∗: H^∗(M; ℝ) → H^∗(SM; ℝ), π^∗: H^∗(M, ∂M; ℝ) → H^∗(SM, ∂(SM); ℝ), induced by the map π: SM → M, are isometries with respect to the semi-norm. Hence, the Gromov simplicial semi-norms on H^∗(M, ∂M; ℝ) and H^∗(M; ℝ) can be recovered from C_v^g as well. In particular, the simplicial volume of the fundamental class [M, ∂M] can be recovered from the scattering data C_v^g.

The long exact homotopy sequence of the fibration SM → M identifies π_i(M) with π_i(SM) for all i < n. As a result, the homotopy groups {π_i(M)}_{i < n} can be recovered from the scattering map C_v^g as well. For a trivial tangent bundle TM, the “reconstructible" space SM is diffeomorphic to the product M × S^{n-1}. Thus the stable smooth type of M (the stabilization being understood as the multiplication with a sphere) is “reconstructible" as well. In particular, the homotopy type of M can be reconstructed from the scattering data.

By Theorem <ref>, if two scattering maps are smoothly conjugated, then the two metrics are geodesic flow topologically conjugated, provided they are boundary generic and of the gradient type.
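A minimal sketch of the splitting used in the proof above (standard Gysin-sequence reasoning, spelled out here for orientation; it is not detailed in the original text): for the sphere bundle S^{n-1} → SM →^π M, the Euler class e lies in H^n(M; ℝ) = 0, since ∂M ≠ ∅. Hence the Gysin sequence

… → H^k(M; ℝ) →^{π^∗} H^k(SM; ℝ) → H^{k-n+1}(M; ℝ) →^{∪e} H^{k+1}(M; ℝ) → …

has ∪e = 0 and breaks into short exact sequences 0 → H^k(M; ℝ) → H^k(SM; ℝ) → H^{k-n+1}(M; ℝ) → 0. The section σ splits them, giving H^k(SM; ℝ) ≈ H^k(M; ℝ) ⊕ H^{k-n+1}(M; ℝ); in particular, for k < n-1 the second summand vanishes and π^∗ is an isomorphism.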
In general, we do not know when two metrics, which are geodesic flow topologically/strongly conjugate in the sense of Definition <ref>, are isometric/proportional (see <cit.>, <cit.>, <cit.> for the positive answers in special cases; for example, Croke proved that, for n ≥ 2, the flat product metric on D^n × S^1 is scattering rigid). However, for one family of special cases, dealing with metrics of negative sectional curvature, we get a pleasing answer in Theorem <ref>. This theorem should be compared with more general results of a similar nature in <cit.>, <cit.>, <cit.>, obtained by powerful analytic techniques.

In order to illustrate a connection between the geodesic flow topological conjugacy and isometry of Riemannian manifolds, let us recall the famous Mostow Rigidity Theorem <cit.>. Let X and Y be complete finite-volume hyperbolic n-manifolds with n ≥ 3. If there exists an isomorphism ϕ_∗: π_1(X) → π_1(Y) of the fundamental groups, then the Mostow Theorem claims that ϕ_∗ is induced by a unique isometry ϕ: X → Y. Here is another formulation of the Mostow Rigidity Theorem, better suited for our goals:

(Mostow) Let (N_1, g_1) and (N_2, g_2) be two closed locally symmetric Riemannian manifolds with negative sectional curvatures and of dimension ≥ 3. Given an isomorphism ϕ: π_1(N_1) → π_1(N_2) of the fundamental groups, there exists a unique isometry Φ: (N_1, c·g_1) → (N_2, g_2), where c > 0 is a constant, so that the map Φ induces the isomorphism ϕ of the fundamental groups.

In the spirit of Theorem 1.3 from <cit.> and <cit.>, by combining the Mostow Theorem <ref> with Theorem <ref>, we get the following result. It and its Corollary <ref> should be compared with Theorem D from <cit.>, which deals with domains in the hyperbolic space ℍ^n.

Let n ≥ 3. Consider two closed locally symmetric Riemannian n-manifolds, (N_1, g_1) and (N_2, g_2), with negative sectional curvatures. Let a connected manifold M_i (i = 1, 2) be obtained from N_i by removing the interior of a smooth codimension zero submanifold U_i ⊂ N_i, such that the induced homomorphism π_1(M_i) → π_1(N_i) of the fundamental groups is an isomorphism [For example, by a general position argument, this is the case when U_i has a spine of codimension 3 at least. In particular, U_i may be a disjoint union of n-balls.]. Assume that the restriction of the metric g_i to M_i is boundary generic and of the gradient type [Thanks to Theorem <ref>, the latter hypothesis is not restrictive; by Conjecture <ref>, the boundary generic hypothesis does not seem restrictive either.]. Assume also that the two geodesic scattering maps C_v^g_1: ∂_1^+(SM_1) → ∂_1^-(SM_1), C_v^g_2: ∂_1^+(SM_2) → ∂_1^-(SM_2) are conjugated via a smooth diffeomorphism Φ^∂: ∂(SM_1) → ∂(SM_2) [Thus the boundaries ∂U_1 and ∂U_2 are stably diffeomorphic.]. Then Φ^∂ determines a unique diffeomorphism ϕ: N_1 → N_2 such that ϕ^∗(g_2) = c·g_1 for a constant c > 0.

By Theorem <ref> (see also Theorem <ref>), the spaces SM_1 and SM_2 are homeomorphic via a homeomorphism Φ which is an extension of Φ^∂. In particular, Φ induces an isomorphism Φ_∗: π_1(SM_1) → π_1(SM_2) of the fundamental groups. Each space SM_i fibers over M_i with the spherical fiber S^{n-1}. Thus, when n ≥ 3, the fundamental groups π_1(M_1) and π_1(M_2) are isomorphic via an isomorphism ϕ_∗: π_1(M_1) → π_1(M_2), which is determined by Φ_∗. In fact, ϕ_∗ is induced by taking a section σ_1: M_1 → SM_1 (the bundle TM_1 → M_1 admits a non-vanishing section since ∂M_1 ≠ ∅), composing σ_1 with Φ, and then applying the projection SM_2 → M_2.
By the hypotheses, the inclusion homomorphism π_1(M_i) → π_1(N_i) is an isomorphism. Therefore, π_1(N_1) and π_1(N_2) are isomorphic via an isomorphism ϕ̂_∗, which is determined by ϕ_∗ (and thus eventually by Φ). By Mostow's Rigidity Theorem <ref>, for a given ϕ̂_∗, there exists a unique diffeomorphism ϕ̂: N_1 → N_2 such that ϕ̂^∗(g_2) = c·g_1, where c is a positive constant.

We claim that the diffeomorphism ϕ̂: N_1 → N_2 is actually determined by the diffeomorphism Φ^∂: ∂(SM_1) → ∂(SM_2). Indeed, consider the obvious map Γ_i: SM_i → 𝒯(v^g_i), where 𝒯(v^g_i) denotes the trajectory space of the geodesic v^g_i-flow. Since v^g_i is traversing, Γ_i is a quasifibration with contractible fibers. Therefore, (Γ_i)_∗: π_1(SM_i) → π_1(𝒯(v^g_i)) is an isomorphism. By Theorem <ref>, we get a commutative diagram Φ^𝒯 ∘ Γ_1 = Γ_2 ∘ Φ, where Φ^𝒯: 𝒯(v^g_1) → 𝒯(v^g_2) is a homeomorphism, which is determined by Φ^∂. In fact, Φ^𝒯 ∘ Γ_1^∂ = Γ_2^∂ ∘ Φ^∂, where Γ_i^∂ = Γ_i|_{∂(SM_i)}. Therefore Φ_∗: π_1(SM_1) → π_1(SM_2) is determined by Φ^∂. As a result, ϕ_∗: π_1(M_1) → π_1(M_2) and thus ϕ̂_∗: π_1(N_1) → π_1(N_2) are determined by Φ^∂. So by Mostow's Rigidity Theorem <ref>, the corresponding unique diffeomorphism ϕ̂: N_1 → N_2 that delivers the isometry between g_2 and c·g_1 is determined by Φ^∂.

Remark 3.4. Under the hypotheses of Theorem <ref>, consider the case when the diffeomorphism Φ^∂ that conjugates the scattering maps is the identity. Let U_1 = U_2 be an n-ball, and M_1 = M_2 =_𝖽𝖾𝖿 M. Then ∂(SM) ≈ S^{n-1} × S^{n-1} and C_v^g is a map whose source and target are both canonically diffeomorphic to S^{n-1} × D^{n-1} (with the help of the involution that orthogonally reflects tangent vectors with respect to the boundary ∂M). By Theorem <ref>, the scattering map C_v^g: S^{n-1} × D^{n-1} → S^{n-1} × D^{n-1} allows for a reconstruction of the pair (N, g), up to a positive scalar. In other words, the scattering map distinguishes (up to a constant conformal factor) between different locally symmetric spaces of negative sectional curvature.

The next immediate corollary of Theorem <ref> is inspired by the image of geodesic motion of a bouncing particle in the complement M to a number of disjoint balls, placed in a closed hyperbolic manifold N of a dimension greater than two. The balls are placed so “densely" in N that every geodesic curve hits some ball. Under these assumptions, the collisions of a probe particle in M with the boundary ∂M “feel the shape of N".

Let (N_1, g_1) and (N_2, g_2) be two closed hyperbolic manifolds of dimension n ≥ 3. Let M_i be produced from N_i by removing the interiors of k disjoint smooth n-balls, so that the geodesic flow on each M_i is boundary generic [This is the case when the boundaries of the balls are strictly convex in N_i.] and of the gradient type. If the scattering maps C_v^g_1 and C_v^g_2 are conjugated with the help of a smooth diffeomorphism Φ^∂: ∂(SM_1) → ∂(SM_2), then (N_1, g_1) is isometric to (N_2, g_2).

Let (N, g) be a closed locally symmetric Riemannian manifold with negative sectional curvature, and let U ⊂ N be a smooth codimension zero submanifold. Take M = N ∖ 𝗂𝗇𝗍(U) and assume that g|_M is boundary generic and of the gradient type. Recall that, by the Mostow Theorem, the isometry group 𝖨𝗌𝗈(N, g) is isomorphic to the group 𝖮𝗎𝗍(π_1(N)) of outer automorphisms of the fundamental group π_1(N). For an isometry ψ: N → N, put M_ψ =_𝖽𝖾𝖿 N ∖ 𝗂𝗇𝗍(ψ(U)). Evidently any isometry ψ: N → N induces a diffeomorphism Ψ: SN → SN which commutes with the geodesic flow on SN. Moreover, Φ =_𝖽𝖾𝖿 Ψ|: SM → SM_ψ strongly conjugates the geodesic flows, delivered by the metrics g^M = g|_M and g^{M_ψ} = g|_{M_ψ}.
As a result, the scattering maps C_v^{g^M} and C_v^{g^{M_ψ}} are conjugated with the help of the diffeomorphism Φ^∂ = Φ|: ∂(SM) → ∂(SM_ψ). Is this construction the only way in which the scattering maps on the complements of isometric domains in N can be conjugated? More accurately, we ask the following question.

Let (N, g) be a closed locally symmetric Riemannian manifold with negative sectional curvature. Consider two smooth codimension zero submanifolds U, V ⊂ N. Let ψ: U → V be an isometry and denote by Ψ: SU → SV the diffeomorphism, induced by ψ. Form the submanifolds M = N ∖ 𝗂𝗇𝗍(U) and L = N ∖ 𝗂𝗇𝗍(V). Assume that the metrics g_M =_𝖽𝖾𝖿 g|_M and g_L =_𝖽𝖾𝖿 g|_L are boundary generic and of the gradient type. Also assume that the scattering maps C_v^{g_M} and C_v^{g_L} are conjugated with the help of a diffeomorphism Ψ^∂ =_𝖽𝖾𝖿 Ψ|_{∂(SU)}. Can we conclude that there exists an isometry ψ̃: N → N such that ψ̃|_U = ψ?

§ THE INVERSE GEODESIC SCATTERING PROBLEM IN THE PRESENCE OF LENGTH DATA

Now we will enhance the scattering data C_v^g by adding information about the length (equivalently, travel time) along each v^g-trajectory. This new combination of data is commonly called “lens data".

Let g be a smooth metric on a compact manifold M, and F: SM → ℝ a smooth function such that dF(v^g) > 0. For each point w ∈ ∂_1^+(SM), let γ_w be the segment of the v^g-trajectory that connects w and C_v^g(w). We denote by γ̄_w its image under the projection π: SM → M, and by l_g(w) the length of the geodesic arc γ̄_w. We say that the function F is balanced if, for each point w ∈ ∂_1^+(SM), the variation F(C_v^g(w)) - F(w) is equal (up to a universal constant) to the arclength l_g(w). We call a gradient type Riemannian metric g balanced, if v^g admits a balanced Lyapunov function F.

Example 4.1. Let M be a compact smooth domain in the hyperbolic space (ℍ^n, g_ℍ). Using a modification F of the Lyapunov function d_ℍ from Example 2.3, we will see that the restriction of the hyperbolic metric to M is F-balanced. Let us sketch the argument, based on the Poincaré model of ℍ^n. Any geodesic in ℍ^n is orthogonal to the virtual boundary S_∞^{n-1} of ℍ^n. Therefore, using the compactness of M, there exists a big Euclidean ball B ⊃ M, whose center is in M and such that any geodesic curve γ in ℍ^n, which intersects M, has two distinct transversal intersections with the boundary ∂B. The orientation of γ picks a single point a_γ ∈ ∂B ∩ γ, where the velocity vector points inside of B. For any w = (m, v) ∈ SM, we consider the geodesic γ_w through m in the direction of v and a point a_{γ_w} ∈ ∂B. We introduce the smooth function F: SM → ℝ by the formula F(w) =_𝖽𝖾𝖿 d_ℍ(a_{γ_w}, m). Evidently, dF(v^{g_ℍ}) > 0 and the variation of F along any geodesic arc is the length of that arc.

By a very similar argument, any compact domain M in the Euclidean space 𝖤^n inherits a flat balanced metric of the gradient type. These observations can be generalized. Let M be a compact smooth codimension zero submanifold of a compact connected Riemannian manifold (N, g) with boundary. Assume that every geodesic γ in N, such that γ ∩ M ≠ ∅, intersects ∂N transversally at a pair of points. Then, by a construction, similar to the previous one, the metric g|_M is of the gradient type and balanced.

For a traversing vector field v on a compact manifold X, let I_x^y denote the segment of the v-trajectory γ that is bounded by a pair of points x, y ∈ γ. Given a 1-form θ on X, we use the notation “∫_x^y θ" for the integral ∫_{I_x^y} θ.

Let (X, 𝗀) be a compact smooth Riemannian manifold with boundary.
Let v ≠ 0 be a smooth traversing and boundary generic vector field, and let α be a smooth 1-form on X such that α(v) > 0. Assume that for each x ∈ ∂_1^+X(v), the integrals ∫_x^{C_v(x)} ‖v‖_𝗀 d𝗀 [Here C_v: ∂_1^+X(v) → ∂_1^-X(v) denotes the v-generated causality map, and d𝗀 denotes the measure on the trajectory γ_x induced by the metric 𝗀|_{γ_x}.] and ∫_x^{C_v(x)} α are equal. Then there exists a homeomorphism Θ: X → X with the following properties:

* Θ|_∂X is the identity,
* each v-trajectory γ is invariant under Θ,
* each restriction Θ|: γ → γ is a smooth diffeomorphism,
* (Θ|_γ)^∗(α)(v) = ‖v‖_𝗀.

If v is such that any v-trajectory γ is either transversal to ∂X at some point from γ ∩ ∂X, or γ ∩ ∂X is a singleton of multiplicity 2, then the homeomorphism Θ is a smooth diffeomorphism.

Consider two strictly monotone smooth y-functions on the arc I_x^{C_v(x)}: K(y) = ∫_x^y α, L(y) = ∫_x^y ‖v‖_𝗀 d𝗀. Put Θ(y) =_𝖽𝖾𝖿 (K^{-1} ∘ L)(y). In other words, for each x ∈ ∂_1^+X(v) and y ∈ I_x^{C_v(x)}, Θ(y) is well-defined by the identity ∫_x^{Θ(y)} α = ∫_x^y ‖v‖_𝗀 d𝗀. By the lemma hypotheses, Θ(x) = x and Θ(C_v(x)) = C_v(x) for all x ∈ ∂_1^+X(v). Thus, Θ(I_x^{C_v(x)}) = I_x^{C_v(x)}. As a result, Θ is the identity on the boundary ∂X.

For any y' ∈ I_x^{C_v(C_v(x))}, put K̃(y') =_𝖽𝖾𝖿 ∫_x^{y'} α = L + ∫_{C_v(x)}^{y'} α, L̃(y') =_𝖽𝖾𝖿 ∫_x^{y'} ‖v‖_𝗀 d𝗀 = L + ∫_{C_v(x)}^{y'} ‖v‖_𝗀 d𝗀, where L denotes ∫_x^{C_v(x)} ‖v‖_𝗀 d𝗀. Since K̃ and K, as well as L̃ and L, differ by the same parallel shift L, we have (K^{-1} ∘ L)(y') = (K̃^{-1} ∘ L̃)(y'). By the continuity of 𝗀 and α, we get that the y'-function K̃^{-1} ∘ L̃ is continuous in the vicinity of C_v(x) ∈ γ_x. Thus Θ(y) and Θ(y') are close for any two points y, y' ∈ γ_x that are sufficiently close to the point C_v(x). By a similar inductive argument in i ∈ [1, k], applied to the intervals {[x, C_v^{(∘i)}(x)]}_{i ∈ [1, k]}, where C_v^{(∘i)} stands for the i-th iteration of the partially defined map C_v, we get that Θ|: γ_x → γ_x is a homeomorphism. Note that the derivative Θ' > 0 in I_x^{C_v(x)} since, by the definition of Θ, Θ'(y) = ‖v(y)‖_𝗀 / α(v(y)) for all y ∈ γ_x. Therefore, the restriction of Θ to any v-trajectory γ is a smooth diffeomorphism.

In order to show that Θ: X → X is a homeomorphism, we embed X into an open manifold X̂ and extend smoothly the field v, the form α, and the metric 𝗀 into an open neighborhood U of X in X̂ so that the extensions v̂, α̂, 𝗀̂ have the property α̂(v̂) > 0 in U. In what follows, we may adjust the size of U according to the needs of the arguments. So we will treat X̂ = U, v̂, α̂, 𝗀̂ as germs. Note that any v-trajectory γ is contained in a unique v̂-trajectory γ̂.

In the next paragraph, we will rely on a few basic facts about boundary generic traversing flows (see <cit.>). Let γ_0 be a v-trajectory. Since v is boundary generic, the intersection γ_0 ∩ ∂X is a finite ordered collection of points {x_1, x_2, …, x_k}. So C_v(x_i) = x_{i+1} for all i ∈ [1, k-1], unless γ_0 ∩ ∂X is a singleton (k = 1), in which case C_v(x_1) = x_1.

Let V be a cylindrical neighborhood of γ̂_0 in X̂ which consists of v̂-trajectories. By choosing an appropriate U, we may assume that γ̂_0 ∩ X = γ_0. We may also assume that V admits a system of coordinates (u, w⃗) such that the v̂-trajectories are given by the equations {w⃗ = 𝖼𝗈𝗇𝗌𝗍}. For any x ∈ γ̂_0, we denote by S_x the hypersurface {u = u(x)}. For any γ̂ ⊂ V, the intersection γ̂ ∩ ∂X consists of finitely many points, while γ̂ ∩ X consists of finitely many closed intervals or singletons.
Picking V sufficiently narrow, the set γ̂ ∩ ∂X may be divided into at most k disjoint (possibly empty) subsets A_{γ̂, i} that correspond to the elements x_i ∈ γ_0 ∩ ∂X. The cardinality of each set A_{γ̂, i} does not exceed m(x_i), the multiplicity of tangency of γ_0 to ∂X at x_i, and #(A_{γ̂, i}) ≡ m(x_i) mod 2. Let {S_i =_𝖽𝖾𝖿 S_{x_i}}_{1 ≤ i ≤ k} be disjoint transversal sections of the v̂-flow in the vicinity of γ̂_0 ⊂ V. By the choice of V and by the construction of the non-empty sets A_{γ̂, i} ⊂ γ̂, the points of A_{γ̂, i} are located in the vicinity of the point y_i = S_i ∩ γ̂.

Let us compare Θ(x) and Θ(y) for a pair of points x ∈ γ_0 and y ∈ γ̂ ∩ X that are close in X. We may assume that, for some i, x ∈ [x_i, x_{i+1}) ⊂ γ_0 and that z =_𝖽𝖾𝖿 S_x ∩ γ̂ and y are close in γ̂. By the previous argument, Θ(z) and Θ(y) are close, so it suffices to compare Θ(z) = (K^{-1} ∘ L)(z) and Θ(x) = (K^{-1} ∘ L)(x). Here, by the definition, K(x) = ∫_{x_i}^x α, L(x) = ∫_{x_i}^x ‖v‖_𝗀 d𝗀, and K(z) = ∫_{z^⋆}^z α, L(z) = ∫_{z^⋆}^z ‖v‖_𝗀 d𝗀, where z^⋆ is the lowest point in the connected component of the set γ̂ ∩ X that contains z. By the choice of V, for some j ≤ i, z^⋆ is a point of the set A_{γ̂, j}, and thus is close to both x_j ∈ γ_0 and to the point z_j^⋆ =_𝖽𝖾𝖿 S_j ∩ γ̂.

By the continuity of α and 𝗀 and by the choice of V ⊃ γ̂_0, the value K_†(x) = ∫_{x_j}^x α is close to the value K_♯(z) = ∫_{z^⋆_j}^z α = ∫_{z^⋆_j}^{z^⋆} α + K(z), and the value L_†(x) = ∫_{x_j}^x ‖v‖_𝗀 d𝗀 is close to L_♯(z) = ∫_{z^⋆_j}^z ‖v‖_𝗀 d𝗀 = ±∫_{z^⋆_j}^{z^⋆} ‖v‖_𝗀 d𝗀 + L(z). Therefore, (K_†^{-1} ∘ L_†)(x) is close to (K_♯^{-1} ∘ L_♯)(z). Since, by the lemma hypotheses, ∫_{x_j}^{x_i} α = ∫_{x_j}^{x_i} ‖v‖_𝗀 d𝗀, we get that (K_†^{-1} ∘ L_†)(x) is close to (K^{-1} ∘ L)(x). Since ∫_{z^⋆_j}^{z^⋆} α and ∫_{z^⋆_j}^{z^⋆} ‖v‖_𝗀 d𝗀 are small, we conclude that (K_♯^{-1} ∘ L_♯)(z) is close to (K^{-1} ∘ L)(z). As a result, Θ(x) and Θ(z) are close in V. Thus Θ: X → X is continuous. By the same token, Θ^{-1}: X → X is continuous as well.

The argument validating that Θ is a smooth diffeomorphism (under the hypotheses of the last paragraph of Lemma <ref>) is similar to the one in the proof of Theorem 3.1 <cit.>. It employs that the boundary ∂X, at a point of transversal intersection γ ∩ ∂X, is a smooth section of the v-flow, together with the hypotheses that the metric 𝗀 and the form α are smooth. This implies that the transformation Θ, defined by the formula ∫_{γ̂_x ∩ ∂X}^{Θ(x)} α = ∫_{γ̂_x ∩ ∂X}^x ‖v‖_𝗀 d𝗀, and the similarly defined transformation Θ^{-1} are smooth in X̂.

For i = 1, 2, let {ψ_i^t: SM_i → SM_i}_t denote the geodesic flow transformations, partially-defined for appropriate moments t ∈ ℝ.

(the strong topological rigidity of the geodesic flow for the inverse scattering problem in the presence of length data) Let (M_1, g_1) and (M_2, g_2) be two smooth compact connected Riemannian n-manifolds with boundaries. Let the metric g_1 be boundary generic, and g_2 be of the gradient type, boundary generic, and balanced. Assume that the scattering maps C_v^g_1: ∂_1^+(SM_1) → ∂_1^-(SM_1) and C_v^g_2: ∂_1^+(SM_2) → ∂_1^-(SM_2) are conjugated by a smooth diffeomorphism Φ^∂: ∂_1(SM_1) → ∂_1(SM_2). Moreover, assume that for each w ∈ ∂_1^+(SM_1), the length data agree: l_g_2(Φ^∂(w)) = l_g_1(w). Then

* g_1 is a balanced metric of the gradient type.
* Φ^∂ extends to a homeomorphism Φ: SM_1 → SM_2 such that Φ ∘ ψ_1^t(w) = ψ_2^t ∘ Φ(w) for each w ∈ SM_1 and all t ∈ ℝ for which ψ_1^t(w) is well-defined.
* The restriction of Φ to each v^g_1-trajectory is a smooth diffeomorphism.
* If g_2 is such that no geodesic curve γ ⊂ M_2 is cubically tangent to ∂M_2 at a pair of distinct points, and no geodesic curve is a singleton of multiplicity 4, then the conjugating homeomorphism Φ: SM_1 → SM_2 is a smooth diffeomorphism.
The proof is a modification of the arguments in the Holography Theorem 3.1 from <cit.>. As in that paper, we start with a balanced function F_2: SM_2 → ℝ such that dF_2(v^g_2) > 0. We consider the pull-back F_1^∂ =_𝖽𝖾𝖿 (Φ^∂)^∗(F_2|_{∂(SM_2)}): ∂(SM_1) → ℝ, and using Lemma 3.2 from <cit.>, extend F_1^∂ to a smooth function F_1: SM_1 → ℝ so that dF_1(v^g_1) > 0. Then, as in the proof of the Holography Theorem, we define the scattering maps conjugating homeomorphism Ψ: SM_1 → SM_2 so that: Ψ|_{∂(SM_1)} = Φ^∂, Ψ maps each v^g_1-trajectory γ to a v^g_2-trajectory, the restriction Ψ|_γ is a diffeomorphism, and, due to the construction of Ψ, F_1 = Ψ^∗(F_2). By the latter property, for any w ∈ ∂_1^+(SM_1), we get the equality ∫_w^{C_v^g_1(w)} dF_1 = ∫_{Ψ(w)}^{C_v^g_2(Ψ(w))} dF_2.

Let 𝗀_i, i = 1, 2, denote the Sasaki metric on SM_i, induced by the metric g_i on M_i. Let π_i: SM_i → M_i be the obvious map. Since v^g_i is orthogonal in 𝗀_i to the fibers π_i^{-1}(∗), we conclude that ‖v^g_i‖_{𝗀_i} = ‖Dπ_i(v^g_i)‖_{g_i} = 1. Since F_2 is balanced, we get ∫_{Ψ(w)}^{C_v^g_2(Ψ(w))} dF_2 = ∫_{Ψ(w)}^{C_v^g_2(Ψ(w))} ‖v^g_2‖_{𝗀_2} d𝗀_2 = ∫_{Ψ(w)}^{C_v^g_2(Ψ(w))} d𝗀_2, the length of the geodesic arc γ̄_{Ψ(w)} in M_2. On the other hand, by the hypotheses of the theorem, l_g_1(w) =_𝖽𝖾𝖿 ∫_w^{C_v^g_1(w)} d𝗀_1 = ∫_{Ψ(w)}^{C_v^g_2(Ψ(w))} d𝗀_2 =_𝖽𝖾𝖿 l_g_2(Ψ(w)). So, F_1 is balanced as well.

Therefore, Lemma <ref> is applicable to both pairs (dF_1, 𝗀_1) and (dF_2, 𝗀_2). By the lemma, there exist homeomorphisms Θ_1: SM_1 → SM_1 and Θ_2: SM_2 → SM_2 with the properties as in (<ref>). Finally, we construct a homeomorphism Φ =_𝖽𝖾𝖿 (Θ_2)^{-1} ∘ Ψ ∘ Θ_1. For such a choice, thanks to properties (<ref>), we get d𝗀_1|_γ = Φ^∗(d𝗀_2|_{Φ(γ)}) for all v^g_1-trajectories γ. So Φ ∘ ψ_1^t(w) = ψ_2^t ∘ Φ(w) for each w ∈ SM_1 and all t ∈ ℝ for which ψ_1^t(w) is well-defined. If g_2 is such that no geodesic curve γ ⊂ M_2 is cubically tangent to ∂M_2 at a pair of distinct points and no geodesic curve is a singleton of multiplicity 4, then by Lemma <ref>, v^g_2, and thus v^g_1, both possess property 𝖠 from Definition <ref>. So, by Lemma <ref>, the conjugating homeomorphism Φ: SM_1 → SM_2 is a smooth diffeomorphism.

Theorem <ref> leads to the following “Cut & Scatter Theorem":

Let (N_1, g_1) and (N_2, g_2) be two closed smooth Riemannian n-manifolds. For i = 1, 2, let U_i be a codimension zero submanifold of N_i with a smooth boundary ∂U_i. Put M_i =_𝖽𝖾𝖿 N_i ∖ 𝗂𝗇𝗍(U_i). Consider a compact neighborhood V_i ⊂ N_i of U_i whose interior contains U_i. Assume that the metric g_2|_{M_2} is boundary generic, of the gradient type, balanced, and satisfies property 𝖠 [If U_2 is strictly concave in N_2, then these hypotheses reduce to the requirement that g_2|_{M_2} is of the gradient type and balanced.]. Let a bijection ψ: V_1 → V_2 be an isometry with respect to g_1|_{V_1} and g_2|_{V_2}. Consider the diffeomorphism Ψ: SV_1 → SV_2, induced by ψ, and its restriction Ψ^∂ to ∂(SU_1). If the scattering maps C_v^g_1|_M_1: ∂_1^+(SM_1) → ∂_1^-(SM_1) and C_v^g_2|_M_2: ∂_1^+(SM_2) → ∂_1^-(SM_2) are conjugated with the help of the diffeomorphism Ψ^∂ and if, for each w ∈ ∂_1^+(SM_1), the length data agree: l_{g_2|_{M_2}}(Ψ^∂(w)) = l_{g_1|_{M_1}}(w), then the geodesic flows on SN_1 and SN_2 are strongly smoothly conjugated in the sense of Definition <ref>.

Let γ be a v^g_i-trajectory in SN_i, i = 1, 2.
Put γ_{M_i} =_𝖽𝖾𝖿 γ ∩ SM_i, γ_{U_i} =_𝖽𝖾𝖿 γ ∩ SU_i, and γ_{V_i} =_𝖽𝖾𝖿 γ ∩ SV_i. For each point x ∈ ∂(SU_1), denote by γ(x) the v^g_1-trajectory through x in SN_1. Since ψ: V_1 → V_2 is an isometry, the diffeomorphism Ψ maps γ_{V_1}(x) to γ_{V_2}(Ψ(x)), while preserving the (𝗀_i|_{V_i})-induced natural parameterizations on the two curves. On the other hand, by Theorem <ref>, the diffeomorphism Φ maps γ_{M_1}(x) to γ_{M_2}(Ψ(x)) = Φ(γ_{M_1}(x)), while preserving the (𝗀_i|_{M_i})-induced natural parameterizations on the two curves.

Let us denote by Σ_i the union of all segments of the v^g_i-trajectories in SM_i ∩ SV_i that have a nonempty intersection with ∂(SM_i). Thus, any point of Σ_i is connected to a point of ∂(SM_i) by an arc δ ⊂ Σ_i of a v^g_i-trajectory. Note that Σ̄_i, the closure of Σ_i in SN_i, is a compact set, containing ∂(SM_i). We denote by Σ_i^∙ the set SU_i ∪ Σ̄_i. Since Ψ|_{∂(SM_1)} = Φ|_{∂(SM_1)}, and both diffeomorphisms, Φ and Ψ, preserve the natural parameterizations along the trajectories in the source and the target, we conclude that Φ|_{Σ_1} = Ψ|_{Σ_1} as maps. It is possible that Ψ and Φ may differ in SV_1 ∖ Σ_1^∙.

Consider the homeomorphism Ξ: SN_1 → SN_2, defined by the formula Ψ ⋃_{Ψ^∂} Φ: SU_1 ⋃_{∂(SU_1)} SM_1 → SU_2 ⋃_{∂(SU_2)} SM_2. By the properties of Φ and Ψ, the homeomorphism Ξ maps the v^g_1-trajectories in SN_1 to the v^g_2-trajectories in SN_2. By the definitions of the sets Σ_i, Σ_i^∙ and using that Φ|_{Σ_1} = Ψ|_{Σ_1}, we may interpret Ξ as the homeomorphism Ψ ⋃_{Ψ|_{∂(Σ_1^∙)}} Φ: Σ_1^∙ ⋃_{∂(Σ_1^∙)} (SM_1 ∖ 𝗂𝗇𝗍(Σ_1)) → Σ_2^∙ ⋃_{∂(Σ_2^∙)} (SM_2 ∖ 𝗂𝗇𝗍(Σ_2)). Since Φ|_{Σ_1} = Ψ|_{Σ_1}, Φ and Ψ are smooth diffeomorphisms, and Σ_1 ⊃ ∂(SU_1), we conclude that Ξ is a smooth diffeomorphism in the vicinity of ∂(SU_1).

By Theorem <ref>, the diffeomorphism Φ = Ξ|_{SM_1} has the property Ξ^∗(d𝗀_2|_{Ξ(γ^#)}) = d𝗀_1|_{γ^#} for every (v^g_i|_{M_i})-trajectory γ^# ⊂ SM_i. By the hypotheses, ψ: U_1 → U_2 is an isometry. Thus, for every (v^g_i|_{U_i})-trajectory γ^† ⊂ SU_i, we have Ξ^∗(d𝗀_2|_{Ξ(γ^†)}) = d𝗀_1|_{γ^†}. Let γ be an integral curve of the geodesic field v^g_1 on SN_1. The set γ_{U_1} = γ ∩ SU_1 is the union of trajectories {γ^†} of the geodesic field v^g_1|_{U_1}, while the set γ_{M_1} = γ ∩ SM_1 is the union of v^g_1|_{M_1}-trajectories {γ^#}. Therefore, by the arguments above, Ξ^∗(d𝗀_2|_{Ξ(γ)}) = d𝗀_1|_γ for all v^g_1-trajectories γ in SN_1. This implies that Ξ ∘ ϕ_1^t = ϕ_2^t ∘ Ξ for all moments t ∈ ℝ, where ϕ_i^t denotes the geodesic flow diffeomorphism of SN_i. Thus the metrics g_1 and g_2 are geodesic flow strongly conjugate by a diffeomorphism Ξ: SN_1 → SN_2 from the class C^∞(SN_1, SN_2).

In the next three corollaries, we combine Theorem <ref> with a number of classical results that make it possible to reconstruct the metric on special closed manifolds from the corresponding geodesic flows.

Let the pairs of smooth Riemannian n-manifolds, (N_1, g_1) ⊃ (M_1, g_1|) and (N_2, g_2) ⊃ (M_2, g_2|), be as in Theorem <ref>. If the scattering maps C_v^g_1|_M_1 and C_v^g_2|_M_2 are conjugated with the help of the diffeomorphism Ψ^∂ =_𝖽𝖾𝖿 Ψ|_{∂(SU_1)} [As in Theorem <ref>, Ψ is induced by an isometry ψ: V_1 → V_2.] and if, for each w ∈ ∂_1^+(SM_1), the length data agree: l_{g_2|_{M_2}}(Ψ^∂(w)) = l_{g_1|_{M_1}}(w), then the manifolds (M_1, g_1|_{M_1}) and (M_2, g_2|_{M_2}) share the same volume.

By Theorem <ref>, the metrics g_1 and g_2 are geodesic flow strongly conjugate by a diffeomorphism Ξ: SN_1 → SN_2 from the class C^∞(SN_1, SN_2). By <cit.>, Proposition 1.2, the manifolds (N_1, g_1) and (N_2, g_2) share the same volume.
Since ψ: U_1 → U_2 is an isometry, the manifolds (M_1, g_1|_{M_1}) and (M_2, g_2|_{M_2}) share the same volume as well.

Let the pairs of smooth Riemannian n-manifolds, (N_1, g_1) ⊃ (M_1, g_1|) and (N_2, g_2) ⊃ (M_2, g_2|), be as in Theorem <ref>. In addition, assume that N_2 admits a non-vanishing parallel vector field w [The existence of a parallel vector field is equivalent to (N_2, g_2) being isometric to the quotient of the Riemannian product L × ℝ by the isometry subgroup G ⊂ 𝖨𝗌𝗈(L) × 𝖨𝗌𝗈_+(ℝ), where L is a simply connected complete Riemannian manifold (<cit.>).]. If the scattering maps C_v^g_1|_M_1 and C_v^g_2|_M_2 are conjugated with the help of the diffeomorphism Ψ^∂ =_𝖽𝖾𝖿 Ψ|_{∂(SU_1)} and if, for each w ∈ ∂_1^+(SM_1), the length data agree: l_{g_2|_{M_2}}(Ψ^∂(w)) = l_{g_1|_{M_1}}(w), then the manifolds (N_1, g_1) and (N_2, g_2) are isometric.

By Theorem <ref>, the metrics g_1 and g_2 are geodesic flow strongly conjugate by a diffeomorphism Ξ: SN_1 → SN_2 from the class C^∞(SN_1, SN_2). By <cit.>, Theorem 1.1, the manifolds (N_1, g_1) and (N_2, g_2) are isometric.

Let n ≥ 3. Let the pairs of Riemannian n-manifolds, (N_1, g_1) ⊃ (M_1, g_1|) and (N_2, g_2) ⊃ (M_2, g_2|), be as in Theorem <ref>. Assume, in addition, that (N_2, g_2) is locally symmetric and of negative sectional curvature. If the scattering maps C_v^g_1|_M_1 and C_v^g_2|_M_2 are conjugated with the help of the diffeomorphism Ψ^∂ =_𝖽𝖾𝖿 Ψ|_{∂(SU_1)} and if, for each w ∈ ∂_1^+(SM_1), the length data agree: l_{g_2|_{M_2}}(Ψ^∂(w)) = l_{g_1|_{M_1}}(w), then the spaces (N_1, g_1) and (N_2, g_2) are isometric.

By Theorem <ref>, the C^1-diffeomorphism Ξ: SN_1 → SN_2 conjugates the geodesic flows on SN_1 and SN_2. Since (N_2, g_2) is a locally symmetric space of negative sectional curvature, the Besson-Courtois-Gallot Theorem <ref> applies; so we get that, for an appropriate constant c > 0, there exists an isometry χ: (N_1, c·g_1) → (N_2, g_2). Using that ψ: U_1 → U_2 is an isometry, we conclude that c = 1.

Acknowledgments: The author is thankful to Christopher Croke and Gunther Uhlmann for very stimulating, informative conversations. He is also grateful to Reed Meyerson for helping with the proof of Lemma 2.3.

[Be] Besse, A.L., Manifolds all of whose Geodesics are Closed, Springer-Verlag, Berlin Heidelberg New York, 1978.
[BCG] Besson, G., Courtois, G., Gallot, S., Minimal entropy and Mostow's rigidity theorems, Ergod. Th. & Dynam. Syst. 16 (1996), 623-649.
[Cr] Croke, C., Scattering Rigidity with Trapped Geodesics, arXiv:1103.5511v2 [math.DG], 21 Nov 2012.
[Cr1] Croke, C., Rigidity Theorems in Riemannian Geometry, chapter in Geometric Methods in Inverse Problems and PDE Control, C. Croke, I. Lasiecka, G. Uhlmann, and M. Vogelius, eds., IMA Vol. Math. Appl. 137, Springer, 2004.
[Cr2] Croke, C., Rigidity and the distance between boundary points, J. Diff. Geometry 33 (1991), 445-464.
[CEK] Croke, C., Eberlein, P., Kleiner, B., Conjugacy and rigidity for nonpositively curved manifolds of higher rank, Topology 35 (1996), 273-286.
[CK] Croke, C., Kleiner, B., Conjugacy and rigidity for manifolds with a parallel vector field, J. Differential Geometry 39 (1994), 659-680.
[GL] Gordon, C., Luecke, J., Knots are determined by their complements, J. Amer. Math. Soc. 2 (1989), no. 2, 371-415.
[G] Gromov, M., Volume and bounded cohomology, Publ. Math. I.H.E.S. 56 (1982), 5-99.
[K] Katz, G., Convexity of Morse Stratifications and Spines of 3-Manifolds, JP Journal of Geometry and Topology, vol. 9, no. 1 (2009), 1-119.
[K1] Katz, G., Stratified Convexity and Concavity of Gradient Flows on Manifolds with Boundary, Applied Mathematics 5 (2014), 2823-2848 (SciRes, http://www.scirp.org/journal/am); also arXiv:1406.6907v1 [math.GT] (26 June 2014).
[K2] Katz, G., Traversally Generic and Versal Flows: Semi-algebraic Models of Tangency to the Boundary, Asian J. of Math. 21, no. 1 (2017), 127-168; also arXiv:1407.1345v1 [math.GT] (4 July 2014).
[K3] Katz, G., The Stratified Spaces of Real Polynomials and Trajectory Spaces of Traversing Flows, JP Journal of Geometry and Topology 19, no. 2 (2016), 95-160.
[K4] Katz, G., Causal Holography of Traversing Flows, arXiv:1409.0588v3 [math.GT], 7 Aug 2017.
[K5] Katz, G., Flows in Flatland: A Romance of Few Dimensions, Arnold Math. J., DOI 10.1007/s40598-016-0059-1, Springer (2016).
[Mat] Matveev, V.S., Geodesically equivalent metrics in general relativity, arXiv:1101.2069v2 [math.DG], 7 Apr 2011.
[Mo] Morse, M., Singular points of vector fields under general boundary conditions, Amer. J. Math. 51 (1929), 165-178.
[Most] Mostow, G.D., Strong rigidity of locally symmetric spaces, Annals of Mathematics Studies 78, Princeton University Press, 1973, ISBN 978-0-691-08136-6, MR 0385004.
[SU] Stefanov, P., Uhlmann, G., Stability estimates for the X-ray transform for tensor fields and boundary rigidity, Duke Math. J. 123(3) (2004), 445-467.
[SU1] Stefanov, P., Uhlmann, G., Boundary rigidity and stability for generic simple metrics, J. Amer. Math. Soc. 18(4) (2005), 975-1003.
[SU2] Stefanov, P., Uhlmann, G., Boundary and lens rigidity, tensor holography, and analytic microlocal analysis, in Algebraic Analysis of Differential Equations, Springer, 2008.
[SU3] Stefanov, P., Uhlmann, G., Rigidity for metrics with the same lengths of geodesics, Math. Res. Lett. 5(1-2) (1998), 83-96.
[SU4] Stefanov, P., Uhlmann, G., Local lens rigidity with incomplete data for a class of non-simple Riemannian manifolds, J. Differential Geom. 82(2) (2009), 383-409.
[SUV] Stefanov, P., Uhlmann, G., Vasy, A., Boundary rigidity with partial data, J. Amer. Math. Soc. 29 (2016), 299-332; arXiv:1306.2995.
[SUV1] Stefanov, P., Uhlmann, G., Vasy, A., Local recovery of the compressional and shear speeds from the hyperbolic DN map, preprint.
[SUV2] Stefanov, P., Uhlmann, G., Vasy, A., Inverting the local geodesic X-ray transform on tensors, Journal d'Analyse Mathematique, to appear; arXiv:1410.5145.
[SUV3] Stefanov, P., Uhlmann, G., Vasy, A., Local and global boundary rigidity and the geodesic X-ray transform in the normal gauge, arXiv:1702.03638, 2017.
[Te] Teschl, G., Ordinary Differential Equations and Dynamical Systems, Graduate Studies in Mathematics 140, AMS, 2012.
[We] Wen, H., Simple Riemannian Surfaces are Scattering Rigid, arXiv:1405.1712v1 [math.DG], 7 May 2014.
[Zh] Zhou, X., Recovery of the C^∞-jet from the boundary distance function, arXiv:1103.5509v1 [math.DG], 28 March 2011. | http://arxiv.org/abs/1703.08874v4 | {
"authors": [
"Gabriel Katz"
],
"categories": [
"math.GT",
"math.DG"
],
"primary_category": "math.GT",
"published": "20170326211642",
"title": "Causal Holography in Application to the Inverse Scattering Problems"
} |
| http://arxiv.org/abs/1703.09272v1 | {
"authors": [
"Ioannis Florakis",
"John Rizos"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20170327190642",
"title": "A Solution to the Decompactification Problem in Chiral Heterotic Strings"
} |
Experimental Evidence of Quantum Radiation Reaction in Aligned Crystals Tobias N. Wistisen,^1∗ Antonino Di Piazza,^2 Helge V. Knudsen,^1 Ulrik I. Uggerhøj^1 ^1Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, Aarhus, 8000, Denmark ^2Max Planck Institute for Nuclear Physics, Saupfercheckweg 1, Heidelberg, 69117, Germany ^∗To whom correspondence should be addressed; E-mail: [email protected] December 30, 2023 ========================================================================================================================================================================================================================================================================================================================================================================== Radiation reaction is the influence of the electromagnetic field emitted by a charged particle on the dynamics of the particle itself. Taking into account radiation reaction is essential for the correct description of the motion of high-energy particles driven by strong electromagnetic fields. Classical theoretical approaches to radiation reaction lead to physically inconsistent equations of motion. A full understanding of the origin of radiation reaction and its consistent description are possible only within the more fundamental quantum electrodynamical theory. However, radiation-reaction effects have never been measured, which has prevented a complete understanding of this problem. Here we report experimental radiation emission spectra from ultrarelativistic positrons in silicon in a regime where both quantum and radiation-reaction effects dominate the dynamics of the positrons. We found that each positron emits multiple photons with energy comparable to its own energy, revealing the importance of quantum photon recoil. Moreover, the shape of the emission spectra indicates that photon emissions occur in a nonlinear regime where positrons absorb several quanta from the crystal field. Our theoretical analysis shows that only a full quantum theory of radiation reaction is capable of explaining the experimental results, with radiation-reaction effects arising from the recoils undergone by the positrons during multiple photon emissions. This experiment is the first fundamental test of quantum electrodynamics in a new regime where the dynamics of charged particles is determined not only by the external electromagnetic fields but also by the radiation field generated by the charges themselves. Future experiments carried out along the same lines will, in principle, also be able to shed light on the fundamental question about the structure of the electromagnetic field close to elementary charges. A complete understanding of the dynamics of charged particles in external electromagnetic fields is of fundamental importance in several branches of physics, spanning e.g. from pure theoretical areas like particle physics to more applicative ones like accelerator physics.
Since accelerated charges, electrons for definiteness, emit electromagnetic radiation, in the realm of classical electrodynamics a self-consistent equation of motion of the electron in an external electromagnetic field must take into account the resulting loss of energy and momentum <cit.>. However, the inclusion of the reaction of the radiation emitted by an electron on the motion of the electron itself (radiation reaction) leads to one of the most famous and controversial equations of classical electrodynamics, the Lorentz-Abraham-Dirac (LAD) equation <cit.>. The LAD equation is plagued by serious inconsistencies like the existence of “runaway” solutions, with the electron acceleration diverging exponentially in time even if the external field identically vanishes. The mentioned exponential growth is found to occur at time scales of the order of the time that light needs to cover the classical electron radius r_e=e^2/mc^2=2.8× 10^-15 m, i.e. τ=r_e/c=9.5× 10^-24 s <cit.>. Now, the classical electron radius is α=e^2/ħ c≃1/137 times smaller than the reduced Compton wavelength λ_C=ħ/mc=3.9× 10^-13 m, which is the typical length of quantum electrodynamics (QED) <cit.>, and this also applies to the time scale τ. This occurrence suggests that a complete understanding of the problem of radiation reaction requires an approach based on QED. Below we will concentrate on a regime where both quantum and radiation-reaction effects are substantial, and we therefore refer the reader to the recent reviews <cit.> concerning classical aspects of radiation reaction. We only recall here that the discussed overlapping between classical and quantum scales implies that within the realm of classical electrodynamics the LAD equation can be consistently approximated by an equation, known as the Landau-Lifshitz (LL) equation, which is not plagued by the above-mentioned inconsistencies <cit.>. In order for quantum effects to be negligible, in the instantaneous rest frame of a relativistic electron the external field has to vary slowly on space (time) distances of the order of λ_C (λ_C/c) and its amplitude has to be much smaller than the so-called critical electric (magnetic) field of QED, E_cr=m^2c^3/ħ|e|=1.3× 10^18 V/m (B_cr=m^2c^3/ħ|e|=4.4× 10^9 T) <cit.>. These conditions guarantee, among others, that the recoil undergone by the electron during the emission of a single photon inside the considered field is much smaller than the electron energy, and the whole emission process can be treated classically as a continuous emission. In the situation investigated below, the second mentioned condition plays a major role and, if we define the parameter χ=γ E/E_cr, where E is a measure of the amplitude of the crystal electric field and where γ is the electron Lorentz factor, it can be expressed as χ≪ 1. The relation between radiation reaction and the emission of multiple photons in a regime where quantum recoil is substantial has been pointed out in <cit.> in the interaction of ultrarelativistic electrons with intense laser fields. In the quantum radiation reaction regime the typical energy of the emitted photons is comparable to the energy of the incoming electron (χ≳ 1) and multiple photon emission is more probable than single photon emission. In laser-electron interaction it is customary to assume the laser field to be “strong” in the sense that the parameter ξ_l=|e|E_l/mcω_l, where E_l and ω_l are the laser amplitude and the angular frequency, respectively, is much larger than unity <cit.>.
At ξ_l≫ 1 each photon emission occurs with the absorption of several laser photons and the radiation is formed over a length (formation length) much shorter than the laser wavelength. Thus, the formula of the radiation emission probability in a constant crossed field (CCF) can be employed <cit.>. A more general definition of the parameter ξ_l (indicated below as ξ) is related to the importance of relativistic effects in the electron transverse motion, with respect to the direction of its average velocity: ξ=p_⊥,max/mc, where p_⊥=γ mv_⊥ is the transverse momentum <cit.>. In the ultrarelativistic regime the parameter ξ also represents the maximum angular deflection during the electron's motion from its average direction, divided by the characteristic angle of radiation 1/γ. In the terminology used in <cit.>, the conditions ξ≪ 1 and ξ≫ 1 thus correspond to the dipole and the magnetic bremsstrahlung (or CCF) limits, respectively. One of the main reasons why such an old, fundamental and outstanding problem as the radiation reaction problem is still unsolved lies in the difficulties in detecting it experimentally. As we have hinted above, the rapid development of laser technology has renewed the interest in this problem because the strong fields provided by intense laser facilities may allow for the experimental measurement of radiation-reaction effects (we refer to the review <cit.> for papers until 2012 and we also mention the recent studies <cit.>). In <cit.> we have realized that the strong electric fields in aligned crystals may also be suitable for measuring radiation-reaction effects and testing the LL equation. In an aligned crystal, in fact, under suitable conditions identifying the so-called channeling regime, an electric charge also oscillates similarly as in a laser field and may thus radiate a substantial fraction of its energy. In the experiment described below, ultrarelativistic positrons cross a Si crystal in the channeling regime. The dynamics of the positrons is characterized by χ≤ 1.4 and 0.7≲ξ≲7, such that one is in the quantum regime and the field is either classically strong or in an intermediate regime below the CCF regime ξ≫ 1. The experiment has been performed at the SPS NA facility at CERN employing positrons with incoming energy of ε_0=178.2 GeV and two Si crystals with thicknesses of 3.8 mm and 10 mm, respectively, aligned along the ⟨ 111⟩ axis. The measured photon emission spectra show features which can only be explained theoretically by including both 1) quantum effects related to the recoil undergone by the positrons in the emission of photons and the stochasticity of photon emission, and 2) radiation-reaction effects stemming from the emission of multiple photons. Several experiments have studied the emission of radiation in crystals in the quantum regime, mostly in thin crystals to avoid pile-up effects in the calorimeter, i.e., the emission of multiple photons by a single particle has been avoided. Due to pile-up, in fact, only the sum of the energies of all the photons emitted by each charged particle is measured in such experiments, which prevents the possibility of reconstructing the single-photon spectrum (in e.g. <cit.> such a pile-up effect can be seen). In the present experiment we have instead employed a thin converter foil and a magnetic spectrometer to obtain the single-photon spectrum, see figure <ref>.
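As a quick consistency check on these orders of magnitude, the following short Python sketch (our illustration, not part of the original analysis; the effective field is inferred from the quoted χ rather than computed from a crystal potential) relates the quoted beam energy and quantum parameter to the strength of the axial crystal field:

```python
# Back-of-the-envelope check of the quantum parameter chi = gamma * E / E_cr.
# Numbers taken from the text: eps0 = 178.2 GeV, chi_max ~ 1.4, E_cr ~ 1.3e18 V/m.
m_e_GeV = 0.000511            # electron/positron rest energy in GeV
E_cr = 1.3e18                 # critical (Schwinger) field of QED in V/m

eps0 = 178.2                  # incoming positron energy in GeV
gamma = eps0 / m_e_GeV        # Lorentz factor, ~3.5e5

chi_max = 1.4                 # maximum quantum parameter quoted in the text
E_eff = chi_max * E_cr / gamma  # effective crystal field implied by chi_max

print(f"gamma = {gamma:.3e}")
print(f"E_eff = {E_eff:.3e} V/m")  # ~5e12 V/m, the scale of axial crystal fields
```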
Therefore, in the radiation-reactionregime where many photons are emitted by a single positron, thecurrent experiment clearly provides more information on the dynamicsof the positrons and a stronger test of the theory than previous experimental campaigns.In figure <ref> a schematic of the experimental setup is shown. The incoming positron encounters the scintillators S1, S2 and S3 which are used to make the trigger signal. The positron rate is sufficiently low such that in each event only asingle positron enters the setup. The positron then enters a He chamber where the two first position sensitive (2 cm × 1 cm) MIMOSA-26 detectors are placed. Shortly after the He chamber the crystal target is placed. The He chamber reduces multiple scattering of the positron such that the incoming particle angle can be measured precisely using the detectorsM1 and M2. After the positron enters the crystal, multiple photons and charged particles will leave the crystal. We have ensured alsotheoretically that electron-positron pair production by the emittedphotons is negligible in the considered experimental conditions. Tosweep away the charged particles, two large magnets were placed beforethe final set of tracking detectors. The photons emitted inside thecrystal then reach a thin converter foil, 200 µ m of Ta,corresponding to approximately 5% of the radiation length X_0 , which in turn corresponds to 7/9 of the mean free pathfor pair production by a high-energy photon <cit.>. Thethickness was optimized such that most of the time a single photonamong those emitted by each positron converts to an electron-positronpair. The produced pair then passes through M3 and M4 before enteringa small magnet, such that the momenta of the electron and the positroncan be determined based on the resulting angular deflection. Finally,the deflected electron and positron pass through M5 and M6. As we have mentioned, unlikeusing a calorimeter, this setup has the great advantage that itallows one to measure the single-photon radiation spectrum sinceonly a single, randomly chosen, of the several emitted photonsconverts to a pair in the thin foil. It is important to pointout that for photon energies much larger than the electron rest energy,as most of those emitted in our experiment, the conversion of aphoton into an electron-positron pair in the thin foil is independentof the photon energy <cit.>. Thus, the presence ofthe thin foil does not alter the spectrum of the photons emitted inthe crystal. The tracking algorithm used in the analysis of the datato correctly determine the energy of the photon which originated fromthe measured electron and positron is described in the Supplementary Materials.It is clear that the spectrum originating from this procedure can not be directly compared to the theory since the response of the setup is complicated by “practical” effects such as multiple scattering inthe converter foil and the presence of air. Therefore a simulation ofthe experimental setup which can “translate” the theoretical photon spectra into the corresponding experimental ones has been developed, thedetails of which can also be found in the Supplementary Materials.In figure <ref>, left panel, we show the experimentally obtained counting spectra forthe “background” case, when no crystal is present, for the “random” case whenthe crystal is present but not aligned with respect to the positron beam,and for the “align” case, when the crystal's ⟨ 111⟩ axis isaligned with the positron beam. 
In the right panel we show a comparison of the experimental and the theoretical results in the amorphous case. The theoretical, simulated curves are denoted by “sim” (see also the Supplementary Materials). In the vertical label of this plot, X_0=9.37 cm is the radiation length of Si. In the random orientation the radiation emission is the well understood Bethe-Heitler bremsstrahlung <cit.>, and the agreement here therefore shows that the simulation of the setup is accurate. The result in the random orientation was used to normalize the theoretical results to the experiment by a scaling factor. This is necessary since the efficiency of the setup depends not only on the geometry of the setup, multiple scattering etc., but also on the inherent efficiency of the MIMOSA detectors. We have considered four different theoretical models to compare with the experiment. These models are described in the section Materials and Methods in the Supplementary Materials and, depending on which effects they include, are indicated as the classical plus radiation reaction model (CRRM), the semiclassical plus radiation reaction model (SCRRM), the quantum plus radiation reaction model (QRRM), and the quantum with no radiation reaction model (QnoRRM). In figure <ref> we show the result of such a comparison in the cases of the 3.8 mm crystal (left) and the 10.0 mm crystal (right). As we have anticipated, among the four models described in the Supplementary Materials, only the QRRM can be considered in reasonable agreement with the experimental data, indicating the importance of including both quantum and radiation-reaction effects in the modeling. As we have also hinted, the remaining discrepancy can likely be attributed to the use of the CCF approximation in regions at the limits of its applicability. However, to the best of our knowledge, no complete theory of quantum radiation reaction, valid in all regimes, has yet been devised, as it would essentially imply an exact computation of the emission probability of an arbitrary number of hard photons. For the sake of completeness, in figure <ref> we show the positron power spectra according to the four mentioned theoretical models before the translation based on the simulation of the setup has been carried out. Here it is seen that for both thicknesses the curves corresponding to the QnoRRM are the same, but that this is not the case after the translation is carried out (see figure <ref>). The main reason for this is that the efficiency of the experimental setup depends on the total number of produced photons. This effect becomes severe when the number of photons that can convert in the foil becomes appreciable compared to ∼ 26, considering the 5% of the radiation length X_0 converter foil, such that multi-photon conversion becomes likely. In such events the original photon energy cannot be found and is thus rejected (this also shows the necessity of doing such a simulation of the experimental setup). It is seen in the 3.8 mm case that there is a qualitative agreement between figure <ref> and figure <ref> in the relative sizes of the spectra compared to each other. However, in the case of the 10.0 mm crystal it is seen, for example, that the spectrum corresponding to the QRRM model is higher than that corresponding to the SCRRM in figure <ref>, whereas the opposite occurs in figure <ref>.
This is possible due to the many more soft photons being predicted in the SCRRM calculation than in the QRRM, which lowers the translated spectrum because of the discussed rejection of multi-photon conversion events in the foil.

§ SUPPLEMENTARY MATERIALS AND METHODS

Theoretical Models

We have considered four different theoretical models to compare with the experiment:

* Classical plus radiation reaction model (CRRM). In this model, we include radiation reaction classically, i.e., we determine the positron trajectory via the Landau-Lifshitz (LL) equation mc du^i/ds=(q/c)F^ij u_j+g^i, where (see, e.g., <cit.>) g^i=(2q^3/3mc^3)(∂ F^ik/∂ x^l)u_k u^l-(2q^4/3m^2c^5)F^il F_kl u^k+(2q^4/3m^2c^5)(F_kl u^l)(F^km u_m)u^i. Here, u^i is the positron four-velocity, s its proper time, q=e>0 and m its charge and mass, respectively, and F^ij the electromagnetic field tensor of the crystal (see, e.g., <cit.>). The crystal field has been modeled starting from the sum of Doyle-Turner string potentials <cit.>, centered on a regular grid according to the diamond cubic crystal structure. Once the positron trajectory has been determined, the emission spectrum has been computed starting from the Liénard-Wiechert potential and following the standard procedure, as described, e.g., in <cit.>.

* Semiclassical plus radiation reaction model (SCRRM). In this model we “partially” include quantum effects following an approach described, e.g., in <cit.>, where the term involving the derivative in Eq. (<ref>) is neglected and the remaining two are multiplied by the ratio between the quantum total emitted power and the corresponding classical quantity. The emission spectrum has then been evaluated as in the CRRM. This model phenomenologically takes into account that quantum effects reduce the total radiation yield, but it does not account for the intrinsic stochasticity of the photon emission process (see, e.g., <cit.>).

* Quantum plus radiation reaction model (QRRM). In this model the radiation emission is taken into account fully quantum mechanically, i.e., the positron propagates classically within the crystal according to the Lorentz equation and, in a genuinely random way, it emits photons and undergoes the corresponding recoil. A numerical program has been written to determine the positron dynamics and emission spectrum according to the following procedure. At each step in the time evolution of the positron trajectory, the probability of single photon emission in that time interval is calculated within the constant crossed field (CCF) approximation (see, e.g., Eq. (4.36) in <cit.>) and a random number generator decides if the emission takes place or not (the time step has been chosen sufficiently small such that the resulting single-photon emission probability is much smaller than unity). If an emission takes place, on the one hand, the photon energy is determined by sampling another random number using the procedure shown in <cit.>, such that the emitted photons are consistently distributed in accordance with the formula for radiation emission, also in the CCF approximation (see Eq. (4.24) in <cit.>). On the other hand, the momentum of the emitted photon is directed along the positron momentum at the instant of emission (which is an excellent approximation in the ultrarelativistic regime) and it is subtracted from the positron momentum before the trajectory solver starts out with the new initial conditions according to the Lorentz equation.
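The core of this stochastic procedure can be sketched as follows (a schematic illustration only, not the actual analysis code; the field push, the CCF emission probability and the photon-energy sampler are placeholders for the cited formulas, and natural units with E ≈ |p| are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def qrrm_positron(x0, p0, t_end, dt, push, emission_prob, sample_photon_energy):
    """Schematic QRRM loop: classical Lorentz propagation punctuated by
    stochastic photon emissions with full recoil on the positron.

    push(x, p, dt)             -> (x, p): one Lorentz-equation step in the crystal field
    emission_prob(x, p, dt)    -> float : CCF single-photon emission probability (<< 1)
    sample_photon_energy(x, p) -> float : photon energy drawn from the CCF spectrum
    """
    x, p = x0, p0
    photons = []
    for _ in range(int(t_end / dt)):
        x, p = push(x, p, dt)                       # deterministic Lorentz step
        if rng.random() < emission_prob(x, p, dt):  # stochastic emission decision
            omega = sample_photon_energy(x, p)
            photons.append(omega)
            # Photon emitted along the positron momentum (ultrarelativistic limit);
            # recoil: rescale |p| so the positron loses the photon energy.
            p = p * (1.0 - omega / np.linalg.norm(p))
    return photons

# The QnoRRM variant described below is identical except that the recoil line is skipped.
```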
As we have mentioned, the emission probabilities within the CCF approximation are employed here (like in e.g. <cit.>). However, since ξ is in some cases only comparable to unity in the experiment, the model has been improved. In fact, for low energies of the emitted photons, ħω≪ε_0, the formation length of the emission process is given by l_f(ω)=2γ^2c/ω, where γ is the Lorentz factor of the positron at the instant of emission <cit.>. Thus, if we denote as λ_0 the typical oscillation length of a channeled positron, we expect that the CCF approximation does not work for frequencies lower than ω_c, with l_f(ω_c)=aλ_0/2, where a is a constant of the order of unity. In order to determine the constant a, we have used the fact that at low photon energies, where it is not valid, the CCF approximation significantly overestimates the emitted radiation yield with respect to the more general and more accurate approach of Baier et al. described in <cit.> and also used numerically in several channeling codes <cit.>. Thus, as the simplest approach, we have modified the CCF emission probability by setting it to zero for photon frequencies below ω_c. The constant a has been fixed by requiring that the resulting total yield coincides with the total yield given by the more accurate approach by Baier et al. <cit.> in the case of a thin crystal, where multiple photon emissions are negligible. Indeed, we have found numerically that in this way a turns out to be approximately equal to 0.52.

* Quantum with no radiation reaction model (QnoRRM). This model is the same as the QRRM described above, but whenever the positron emits a photon, its recoil energy-momentum is not subtracted from the positron. The spectrum in the QRRM approaches the spectrum of this model when the crystal becomes thin, because for a thin crystal the probability of multiple photon emission becomes negligible and each positron essentially emits a single photon. Thus, the difference between this model and the QRRM shows the size of the effects of the recoil of multiple photon emission, i.e. of radiation reaction.

Tracking Algorithm

A tracking algorithm has been employed in the analysis of the experimental data in order to correctly characterize the created electrons and positrons, and to determine whether they arise from a converting photon in the foil. This is decided based on a series of conditions: Hypothetical rectilinear tracks in the detectors M3-M4 and M5-M6 (see figure 1 in the main text) are constructed by connecting all possible pairs of hits in the two planes of M3-M4 and in the two planes of M5-M6. These track candidates in M5-M6 must be matched with those in M3-M4, giving a full particle track, identified by the following conditions:

* The tracks for individual particles arising from two points in M3-M4 and from two points in M5-M6 are ideally continued into the magnet and, in order to be accepted, they must have a distance to each other within 0.8 mm in the center of the magnet.
* The two tracks from the detectors M3-M4 and M5-M6 should be at the shortest distance from each other approximately at the z position of the center of the magnet.
* The size of the deflection angle between the tracks in M5-M6 and the tracks in M3-M4 in the y direction must be smaller than 2 mrad, because the magnet deflects only along the x direction.

Now, tracks of electrons and positrons have been individually identified. Moreover, these must also be paired to stem from the same photon.
This identification is carried out by requiring that an electron and a positron track must originate from within a distance of 20 µm on the x-y plane in the converter foil. After the identification of the tracks has been carried out, it may happen that for a given electron or positron, more than one particle of opposite charge matches within the mentioned distance in the converter foil. If this happens, the event is discarded because more than one photon must have converted in the foil and it is not possible to unequivocally associate the electron-positron pair with a photon. This also implies that if the number of photons above the pair-production threshold in the converter foil exceeds ∼ 26, one will begin to see the experimental photon spectrum drop due to multiple photon conversion. We recall that, as we have mentioned in the main text, the thickness of the converter foil corresponds to about (7/9)×5%≃ 1/26 of the average length that a photon covers before converting into an electron-positron pair. Therefore, optimally, this regime is avoided. In each event all tracks are determined in M1, M2, and M3 as well. The chosen track in these detectors is the one with the closest approach to the pair origin already determined in the converter foil. Finally, the positron entry angle is determined from the hits in M1 and M2 of this track. It is clear that the photon energy spectrum originating from this procedure cannot be directly compared to the theoretical spectra because the response of this setup is complicated by practical issues. For example, a positron entering the setup at the center of the detector M1 and another one entering at the border of the same detector will have a different chance of leading to a detected pair. The reason is that the pair originating from the positron hitting M1 at the border is more likely to be deflected outside M5 or M6. A similar effect takes place when considering the angle of the incoming positrons. In addition to this, multiple scattering in the converter foil, air and detectors influences both the efficiency and the resolution. In order to deal with these issues, a code simulating the setup has been written. The beam distribution in position and angle, as experimentally measured, is given as input to the program simulating the setup, and then the effects of multiple Coulomb scattering between and inside the detectors and converter foil, as well as of Bethe-Heitler pair production, are included for determining the particles' dynamics <cit.>. The only non-trivial input to such a simulation of the setup is the spectrum of the radiation emitted by the positrons in the crystal, which we have determined theoretically according to the four models described above. Finally, the simulation of the setup produces data files of the same format as those obtained from the data acquisition in the experiment, which are then both sent through the tracking algorithm. Jackson_b_1975 J. D. Jackson, Classical Electrodynamics (Wiley, New York, 1975). Landau_b_2_1975 L. D. Landau, E. M. Lifshitz, The Classical Theory of Fields (Elsevier, Oxford, 1975). Abraham_b_1905 M. Abraham, Theorie der Elektrizität (Teubner, Leipzig, 1905). Lorentz_b_1909 H. A. Lorentz, The Theory of Electrons (Teubner, Leipzig, 1909). Dirac_1938 P. A. M. Dirac, Proc. R. Soc. London, Ser. A 167, 148 (1938). Barut_b_1980 A. O. Barut, Electrodynamics and Classical Theory of Fields and Particles (Dover Publications, New York, 1980). Rohrlich_b_2007 F. Rohrlich, Classical Charged Particles (World Scientific, Singapore, 2007). Landau_b_4_1982 V. B. Berestetskii, E. M. Lifshitz, L. P. Pitaevskii, Quantum Electrodynamics (Elsevier Butterworth-Heinemann, Oxford, 1982). Hammond_2010_b R. T. Hammond, Electron. J. Theor. Phys. 7, 221 (2010). Di_Piazza_2012 A. Di Piazza, C. Müller, K. Z. Hatsagortsyan, C. H. Keitel, Rev. Mod. Phys. 84, 1177 (2012). Burton_2014 D. A. Burton, A. Noble, Contemp. Phys. 55, 110 (2014). Dittrich_b_1985 W. Dittrich, M. Reuter, Effective Lagrangians in Quantum Electrodynamics (Springer, Heidelberg, 1985). Fradkin_b_1991 E. S. Fradkin, D. M. Gitman, Sh. M. Shvartsman, Quantum Electrodynamics with Unstable Vacuum (Springer, Berlin, 1991). Baier_b_1998 V. N. Baier, V. M. Katkov, V. M. Strakhovenko, Electromagnetic Processes at High Energies in Oriented Single Crystals (World Scientific, Singapore, 1998). PhysRevLett.105.220403 A. Di Piazza, K. Z. Hatsagortsyan, C. H. Keitel, Phys. Rev. Lett. 105, 220403 (2010). PhysRevLett.112.015001 T. G. Blackburn, C. P. Ridgers, J. G. Kirk, A. R. Bell, Phys. Rev. Lett. 112, 015001 (2014). PhysRevLett.111.054802 N. Neitz, A. Di Piazza, Phys. Rev. Lett. 111, 054802 (2013). Ritus V. Ritus, Journal of Soviet Laser Research 6, 497 (1985). Kravets_2013 Y. Kravets, A. Noble, D. Jaroszynski, Phys. Rev. E 88, 011201 (2013). Kumar_2013 N. Kumar, K. Z. Hatsagortsyan, C. H. Keitel, Phys. Rev. Lett. 111, 105001 (2013). Bashinov_2013b A. V. Bashinov, A. V. Kim, Phys. Plasmas 20, 113111 (2013). Ilderton_2013 A. Ilderton, G. Torgrimsson, Phys. Lett. B 725, 481 (2013). Ji_2014 L. L. Ji, A. Pukhov, I. Y. Kostyukov, B. F. Shen, K. Akli, Phys. Rev. Lett. 112, 145003 (2014). Li_2014 J.-X. Li, K. Z. Hatsagortsyan, C. H. Keitel, Phys. Rev. Lett. 113, 044801 (2014). Capdessus_2015 R. Capdessus, P. McKenna, Phys. Rev. E 91, 053105 (2015). Vranic_2016 M. Vranic, T. Grismayer, R. A. Fonseca, L. O. Silva, New J. Phys. 18, 073035 (2016). Dinu_2016 V. Dinu, C. Harvey, A. Ilderton, M. Marklund, G. Torgrimsson, Phys. Rev. Lett. 116, 044801 (2016). Di_Piazza_2017 A. Di Piazza, T. N. Wistisen, U. I. Uggerhøj, Phys. Lett. B 765, 1 (2017). PhysRevLett.63.2827 R. Medenwaldt, et al., Phys. Rev. Lett. 63, 2827 (1989). PhysRevD.86.010001 J. Beringer, et al., Phys. Rev. D 86, 010001 (2012). Moller1995403 S. P. Møller, Nucl. Instrum. Methods Phys. Res. A 361, 403 (1995). PhysRevE.81.036412 I. V. Sokolov, J. A. Nees, V. P. Yanovsky, N. M. Naumova, G. A. Mourou, Phys. Rev. E 81, 036412 (2010). Yokoya1985ab K. Yokoya, SLAC KEK-Report-85-9 (1985). Belkacem198586 A. Belkacem, N. Cue, J. Kimball, Phys. Lett. A 111, 86 (1985). PhysRevD.90.125008 T. N. Wistisen, Phys. Rev. D 90, 125008 (2014). PhysRevD.92.045045 T. N. Wistisen, Phys. Rev. D 92, 045045 (2015). Bandiera201544 L. Bandiera, E. Bagli, V. Guidi, V. V. Tikhomirov, Nucl. Instr. Methods Phys. Res. B 355, 44 (2015). PhysRevA.86.042903 V. Guidi, L. Bandiera, V. V. Tikhomirov, Phys. Rev. A 86, 042903 (2012). § ACKNOWLEDGMENTS We acknowledge the technical help and expertise from Per Bluhme Christensen, Erik Loft Larsen and Frank Daugaard (AU) in setting up the experiment and data acquisition. § AUTHORS' CONTRIBUTION TNW and UIU conceived and carried out the experiment. TNW and ADP carried out the theoretical calculations. TNW carried out the data analysis. TNW and ADP wrote the paper with input and discussion from HVK and UIU. | http://arxiv.org/abs/1704.01080v2 | {
"authors": [
"Tobias N. Wistisen",
"Antonino Di Piazza",
"Helge V. Knudsen",
"Ulrik I. Uggerhøj"
],
"categories": [
"hep-ex",
"hep-ph",
"physics.plasm-ph"
],
"primary_category": "hep-ex",
"published": "20170327133848",
"title": "Experimental Evidence of Quantum Radiation Reaction in Aligned Crystals"
} |
IFS-RedEx, a redshift extraction software for integral-field spectrographs: Application to MUSE data Markus Rexroth, Jean-Paul Kneib, Rémy Joseph, Johan Richard, Romaric Her December 30, 2023 ==========================================================================================================We present IFS-RedEx, a spectrum and redshift extraction pipeline for integral-field spectrographs. A key feature of the tool is a wavelet-based spectrum cleaner. It identifies reliable spectral features, reconstructs their shapes, and suppresses the spectrum noise. This gives the technique an advantage over conventional methods like Gaussian filtering, which only smears out the signal. As a result, the wavelet-based cleaning allows the quick identification of true spectral features. We test the cleaning technique with degraded MUSE spectra and find that it can detect spectrum peaks down to S/N≈ 8 while reporting no fake detections. We apply IFS-RedEx to MUSE data of the strong lensing cluster MACSJ1931.8-2635 and extract 54 spectroscopic redshifts. We identify 29 cluster members and 22 background galaxies with z ≥ 0.4. IFS-RedEx is open source and publicly available. Techniques: Imaging spectroscopy – Techniques: Image processing – Galaxies: clusters: individual: MACSJ1931.8-2635 – Galaxies: high-redshift § INTRODUCTION Astrophysical research has benefited greatly from publicly available open source software, and programs like SExtractor <cit.> and Astropy <cit.> have become standard tools for many astronomers. Their public availability allows researchers to focus on the science and to reduce the programming overhead, while the open source nature facilitates the code's further development and adaptation. In this spirit, we developed the Integral-Field Spectrograph Redshift Extractor (IFS-RedEx), an open source software package for the efficient extraction of spectra and redshifts from integral-field spectrographs[The software can be downloaded at <http://lastro.epfl.ch/software>]. The software can also be used as a complement to other tools such as the Multi Unit Spectroscopic Explorer (MUSE) Python Data Analysis Framework (mpdaf)[Available at <https://git-cral.univ-lyon1.fr/MUSE/mpdaf>]. Our redshift extraction tool includes a key feature, a wavelet-based spectrum cleaning tool which removes spurious peaks and reconstructs a cleaned spectrum. Wavelet transformations are well suited for astrophysical image and data processing <cit.> and have been successfully applied to a variety of astronomical research projects. To name only a few recent examples, wavelets have been used for source deblending <cit.>, gravitational lens modeling <cit.> and the removal of contaminants to facilitate the detection of high redshift objects <cit.>. The paper is organized as follows: Sections 2 and 3 present the spectrum and redshift extraction routines of IFS-RedEx. In section 4, we describe and test the wavelet-based spectrum cleaning tool. In section 5, we illustrate the use of our software by applying it to MUSE data of the strong lensing cluster MACSJ1931.8-2635 (henceforth called MACSJ1931). We summarize our results in section 6. § SPECTRUM EXTRACTION & CATALOG CLEANING It is advantageous to combine Integral-Field Unit (IFU) data cubes with high resolution imaging, as this allows us to detect small, faint sources which might remain undetected if we used the image obtained by collapsing the data cube along the wavelength axis (henceforth called white-light image) for source detection.
For example, <cit.> used this combination in their analysis of MUSE observations of the Hubble Deep Field South. Therefore we exploit this case in the following, but in principle the software can be used without high resolution data. IFS-RedEx uses the center positions of stars provided by the user to align the IFU and high resolution images. It utilizes a SExtractor <cit.> catalog of the high-resolution data to extract the spectra and the associated standard deviation noise estimate for each source from the data cube. It extracts the signal in an area with a radius of 3 to 5 data cube pixels, depending on the SExtractor full width at half maximum (FWHM) estimate. Sources with FWHM < 2 high resolution pixels are discarded as these are typically spurious detections, e.g. due to cosmic rays.IFS-RedEx shows the user each source and extraction radius overplotted on the high resolution image and the IFU data cube. The user can now quickly examine each detection and decide to either keep it in the database or to remove it, for example because it is too close to the data cube boundary and suffers from edge effects.The tool also supports line emission and continuum emission catalogs. These are for example created by the MUSELET[MUSELET is part of the mpdaf package. A tutorial and the documentation are available at <http://mpdaf.readthedocs.io/en/latest/muselet.html>] software, which uses narrow-band images to perform a blind search for the respective signal. IFS-RedEx displays the detected sources and their extraction radius of 3 pixels on the IFU data cube. The user labels sources which cannot be used, e.g. because the signal is only a spurious detection in one pixel or it is too close to the image boundary. The spectra and noise of the good sources are automatically extracted. Finally, the cleaned SExtractor, line emission, and continuum emission catalogs are merged into a master catalog. In this step, the sources are displayed on the high-resolution image so that the user can decide if the MUSELET and SExtractor detections are part of the same source. This visual inspection is more reliable than an automatic association and the number of sources is typically small enough for a manual inspection in reasonable time.§ REDSHIFT EXTRACTIONEach 1D spectrum is displayed in an interactive plot and a second window shows the corresponding high resolution image, see figure <ref>. The position of sky lines with a flux ≥ 50 × 10^-20 erg s^-1 cm^-2 arcsec^-2 are labeled in green. The sky line fluxes are taken from <cit.>. IFS-RedEx also lists the emission line identifications from MUSELET if available.The user can now adjust the position of the emission and absorption line template by changing the source redshift. Once the template matches the source spectrum, the right redshift is found. IFS-RedEx has several features to facilitate the correct identification of spectral features. The user can zoom in and out, overplot the noise on the spectrum, smooth the signal with a Gaussian filter and perform a wavelet-based spectrum cleaning, see figure <ref>. When IFS-RedEx plots the noise, it shows the standard deviation around an offset. The offset is calculated by smoothing the spectrum signal with a Gaussian with σ = 100 pixels. Thus the noise is centered on the smoothed signal and it follows signal drifts. The wavelet cleaning is described in detail in the next section. As can be seen in figure <ref>, it reconstructs the shape of the reliable spectrum features and suppresses the noise. 
The Gaussian filter only smears out the signal. Thus the wavelet-based reconstruction makes it easier to distinguish true from false peaks. Finally, the user can fit a Gaussian to the most prominent spectral line. IFS-RedEx combines the error of the fitted center position with the wavelength calibration error from the IFU data reduction pipeline into the final statistical redshift error. The software creates a final catalog with all source redshifts and errors. In addition, it produces a document with all spectral feature identifications and high resolution images for later use, e.g. for verification by a colleague. § WAVELET-BASED SPECTRUM RECONSTRUCTION §.§ Wavelet transform algorithms The wavelet-based cleaning algorithm reconstructs only spectral features above a given significance threshold. For this purpose, we use the “à trous” wavelet transform with a B_3-spline scaling function of the coordinate x ∈ℝ, ϕ(x) = 1/12(|x-2|^3 -4|x-1|^3 + 6|x|^3 - 4|x+1|^3 + |x+2|^3), which is well suited for isotropic signals such as emission lines <cit.>. In contrast to a Fourier transform, wavelets possess both frequency and location information. We note that the measured spectrum signal is discrete and not continuous, and we denote the unprocessed, noisy spectrum data c_0, where the subscript indicates the scale s, and its value at pixel position l with c_0,l. We assume that c_0,l is the scalar product of the continuous spectrum function f(x) and ϕ(x) at pixel l. Now we can filter this data, where each filtering step increases s by one and leads to c_s+1, which no longer includes the highest frequency information from c_s. The filtered data for each scale is calculated by using a convolution. The coefficients of the convolution mask h derive from the scaling function, 1/2 ϕ(x/2) = ∑_l h(l)ϕ(x-l), and they are (1/16, 1/4, 3/8, 1/4, 1/16) <cit.>. By noting that h(k) is symmetric <cit.>, we have c_s,l = ∑_k h(k) c_s-1,l+2^s-1k, and we define the double-convolved data on the same scale by cd_s,l = ∑_k h(k) c_s,l+2^s-1k. The wavelet coefficients are now given by w_s,l = c_s-1,l - cd_s,l, and they include the information between these two scales <cit.>. A low scale s implies high frequencies and vice versa. The final wavelet transform is the set {w_1, …, w_L, c_L}, where L is the highest scale level we use, and it includes the full spectrum information. We impose an upper limit for L depending on the spectrum wavelength range and resolution: L ≤ log_2((P-1)/(H-1)), where P is the number of pixels of the spectrum signal and H the length of h, which is in our case H=5. Otherwise s could become so large that the filtering equation <ref> would require data outside of the wavelength range. We compute the wavelet transform according to algorithm <ref>, and we transform back into real space by using algorithm <ref> <cit.>. The cleaning in wavelet space is performed following <cit.>: We transform a discretized Dirac δ-distribution to obtain the wavelet set {w_1^δ, …, w_L^δ}. Subsequently, we convolve each squared w_s^δ with the squared standard deviation spectrum noise extracted from the IFU data cube and take the square root of the result. This gives us the noise coefficients in wavelet space. In the next step, we build the multiresolution support M, which is a (L+1) × P matrix. We compare the absolute value of the signal and noise wavelet coefficients at each pixel, w_s,l and w_s,l^N.
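As an illustration of the filtering and wavelet-coefficient relations above, a minimal NumPy sketch of the à trous decomposition might look as follows (our own illustration; function names are ours, not taken from the IFS-RedEx code, and the edge clamping is an assumption — the boundary treatment in the actual pipeline may differ):

```python
import numpy as np

H = np.array([1, 4, 6, 4, 1]) / 16.0   # B3-spline convolution mask h

def atrous_step(c, s):
    """One 'a trous' filtering step at scale s: convolve with h whose taps
    are spaced 2**(s-1) pixels apart, clamping indices at the edges."""
    out = np.zeros_like(c)
    gap = 2 ** (s - 1)
    idx = np.arange(len(c))
    for k, hk in zip((-2, -1, 0, 1, 2), H):
        out += hk * c[np.clip(idx + k * gap, 0, len(c) - 1)]
    return out

def atrous_transform(c0, L):
    """Return ([w_1, ..., w_L], c_L) for a 1D spectrum c0."""
    w, c = [], c0.astype(float)
    for s in range(1, L + 1):
        c_next = atrous_step(c, s)   # c_s from c_{s-1}
        cd = atrous_step(c_next, s)  # double-convolved cd_s at the same scale
        w.append(c - cd)             # w_s = c_{s-1} - cd_s
        c = c_next
    # The inverse transform (algorithm 2 in the text) recombines {w_s, c_L}.
    return w, c

spectrum = np.random.default_rng(1).normal(size=512)
w, cL = atrous_transform(spectrum, L=6)  # L = 6 respects log2((512-1)/(5-1))
```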
We take a threshold T set by the user, for example 5 for a 5σ cleaning in wavelet space, and set the corresponding matrix entry to 1 if |w_s,l| ≥ T |w_s,l^N|, and 0 otherwise. Note that for s=1, we use a higher threshold of T+1, as this wavelet scale corresponds to high frequencies, where we expect the noise to dominate. The matrix coefficients for the smoothed signal c_L are automatically set to 1. Now we perform the cleaning: We set all w_s,l associated with a vanishing M value to zero and transform back into real space to obtain a first clean spectrum. However, there is still some signal to be harnessed in the residuals. Therefore we subtract the clean spectrum from the full spectrum to obtain the residual spectrum, and we compare its standard deviation, σ_res, with the standard deviation of the full spectrum (in the first iteration) or of the residual used in the previous iteration (all subsequent iterations), which we indicate in both cases with σ_prev. If |(σ_prev - σ_res)/σ_res| > ϵ, we transform the residual spectrum into wavelet space, set wavelets with vanishing M values to zero, transform back into real space, and add the resulting signal to obtain our new clean signal. Note that the same multiresolution support as before is used. Subsequently, we calculate again the residual and continue until the ϵ criterion is no longer fulfilled and all the signal has been extracted. The value of ϵ is set by the user and must satisfy the condition 0 < ϵ < 1. Algorithm <ref> summarizes this cleaning procedure. §.§ Testing the wavelet-based reconstruction To test our software, we use the spectrum of the brightest cluster galaxy (BCG) from our MUSE data set described in the next section. MUSE provides both the spectrum signal and a noise estimate over the full wavelength range. The original spectrum can be considered clean due to its very high signal-to-noise. We rescale it to simulate fainter sources at low signal-to-noise. We calculate the rescaling factor R by looking at the highest spectrum signal peak and dividing the associated MUSE noise estimate by this signal. This results in R ≈ 0.0015. We investigate three cases, namely a good, an intermediate, and a low signal-to-noise case, where we rescale the full signal spectrum by 10 R, 5 R, and 2 R, respectively. Subsequently we add Gaussian noise simulating the real noise estimate of the MUSE data cube. For each spectral wavelength pixel l we obtain the realized noise by drawing from a Gaussian probability distribution with a standard deviation equal to the MUSE standard deviation noise estimate at this pixel. We repeat this process 10 times to obtain spectra with different noise realizations. We calculate the signal-to-noise of six emission lines by summing over their respective wavelength ranges, S/N = ∑_l signal_l/√(∑_l'std^2_l'), where std_l is the MUSE standard deviation noise estimate at pixel l. We will refer to the lines according to their wavelength order, i.e. the first line is situated at the lowest wavelength and the last line at the highest. We apply our wavelet cleaning software to the spectra using the MUSE noise estimate and different wavelet parameters as input. We investigate 5σ and 3σ cleaning and ϵ parameters of 0.1, 0.01, and 0.001. The cleaning procedure is fast and takes about 1 second per spectrum on a laptop. Figure <ref> shows reconstructed spectra for the three different signal-to-noise cases. Note that the last two emission lines in the true spectrum are actually comprised of merged individual lines.
As can be seen in figure <ref>, the wavelet tool can detect if a line consists of two merged lines and reconstruct them correctly if their signal-to-noise is high enough. If it is too low, it will reconstruct them as a single line. For all 90 spectra which we analyzed with a 5σ wavelet reconstruction, we find no fake detections of emission lines. For signal-to-noise larger than 20, all 6 test emission lines are detected. For S/N between 10 and 20, all emission lines but the third are found. The third peak is no longer recovered due to its proximity to the fourth peak, which has typically a twice larger S/N value. In general, the wavelet software might reconstruct two close-by peaks as a single peak unless they have each a sufficiently large signal. When the signal-to-noise of both peaks was similar, both the third and the fourth emission line were detected and reconstructed. For emission lines with low signal-to-noise values between 5 and 10, we can reconstruct the stronger lines with S/N ≳ 8, while the weaker peaks remain typically undetected. However, as the bottom plot in figure <ref> shows, even weaker peaks can occasionally be reconstructed.Emission lines modeled with a wavelet reconstruction do sometimes not reach the full peak height of the signal, in particular for high ϵ values, and their tails can suffer from ringing effects which might be due to the wavelet shape, see for example the first emission line of the intermediate S/N case in figure <ref>. For low signal-to-noise emission lines (S/N ≤ 10), care has therefore to be taken not to mistake the signal dip due to ringing effects as an absorption signal, as the ringing effect might occasionally have a similar (negative) amplitude as the signal peak of the reconstructed emission line, see figure <ref>. When this effect occurs in practice, it might be improved by changing the wavelet setup, e.g. by lowering the ϵ value. A lower ϵ is designed to detect a larger fraction of the signal peak and should thus increase its height. However, care has to be taken as a lower ϵ might also lead to stronger ringing effects. The 3σ wavelet reconstruction recovered more emission lines than the 5σ cleaning, but it also produced false detections. We therefore adopted a conservative approach and used the 5σ wavelet cleaning when applying the code to real data.Finally, we compared the noise free emission line shapes with the reconstructed ones. We find that the shape reconstruction is generally good, but the reconstructed line shape and height recovered from the noisy data can differ from the original, clean ones, in particular in low signal-to-noise scenarios. Therefore we use the wavelet cleaning only to distinguish true from false spectrum peaks, and we perform all data operations such as fitting a Gaussian to obtain the centering error on the real, noisy data. § APPLICATION TO MUSE DATA: MACSJ1931We apply IFS-RedEx to our data set of the strong lensing cluster MACSJ1931 obtained with MUSE <cit.> on the Very Large Telescope (VLT). We combine our data with the publicly available Hubble Space Telescope (HST) imaging from the Cluster Lensing And Supernova survey with Hubble <cit.>. The cluster is part of the MAssive Cluster Survey (MACS), which comprises more than one hundred highly X-ray luminous clusters <cit.>.The core of MACSJ1931 (z = 0.35) was observed with MUSE on June 12 and July 17 2015 (ESO program 095.A-0525(A), PI: Jean-Paul Kneib). 
The 1 x 1 arcmin^2 field of view was pointed at α = 19:31:49.66 and δ = -26:34:34.0 (J2000) and we observed for a total exposure time of 2.44 hours, divided into 6 exposures of 1462 seconds each. We rotated the second exposure of each exposure pair by 90 degrees to allow for cosmic ray rejection and improve the overall image quality. The data were taken using the WFM-NOAO-N mode of MUSE in good seeing conditions with FWHM ≈ 0.7 arcseconds.We reduced the data using the MUSE pipeline version 1.2.1 <cit.>, which includes bias and flat-field corrections, sky subtraction, and wavelength and flux calibrations. The six individual exposures were finally combined into a single data cube and we subtracted the remaining sky residuals with ZAP <cit.>. The wavelength range of the data cube stretches from 4750 to 9351 Å in steps of 1.25 Å. The spatial pixel size is 0.2 arcseconds.We used the HST data for MACSJ1931 obtained as part of the CLASH program <cit.> in the bands F105W, F475W, F625W, and F814W with a spatial sampling of 0.03 arcsec/pixel. The HST data products are publicly available on the CLASH website[<https://archive.stsci.edu/prepds/clash/>]. We use only redshift identifications which we consider secure because we see e.g. several lines or a clear Lyα emission line shape. We extract 54 sources with redshifts ranging from 0.21 to 5.8. Among them, 29 are cluster members with 0.3419 ≤ z ≤ 0.3672 and 22 are background sources with 0.4 ≤ z ≤ 5.8. A table of all sources with spectroscopic redshifts is presented in the companion paper Rexroth et al. 2017 (in preparation), in which we use the data to improve the cluster lens model. Figure <ref> shows a histogram of the source distribution in redshift space. § SUMMARYWe describe IFS-RedEx, a public spectrum and redshift extraction pipeline for integral-field spectrographs. The software supports SExtractor catalogs as well as MUSELET narrow-band detection catalogs as input. The pipeline has several features which allow a quick identification of reliable spectrum features, most notably a wavelet-based spectrum cleaning tool. The tool only reconstructs spectral features above a given significance threshold. We test it with degraded MUSE spectra and find that it can detect spectral features with S/N≳ 8. We find no fake detections in our test. Finally, we apply IFS-RedEx to a MUSE data cube of the strong lensing cluster MACSJ1931 and extract 54 spectroscopic redshifts. § ACKNOWLEDGEMENTSMR thanks Timothée Delubac for verifying the spectral line identifications and Yves Revaz and the ESO user support center for their help with a non-critical issue in the MUSE pipeline. He thanks Thibault Kuntzer, Pierre North and Frédéric Vogt for fruitful discussions and Anton Koekemoer for his help with processing the Simple Imaging Polynomial (SIP) distortion information from FITS image headers. MR and JPK gratefully acknowledge support from the ERC advanced grant LIDA. RJ gratefully acknowledges support from the Swiss National Science Foundation. JR gratefully acknowledges support from the ERC starting grant 336736-CALENDS. This research made use of SAOImage DS9, numpy <cit.>, scipy <cit.>, matplotlib <cit.>, PyFITS, PyRAF/IRAF <cit.>, Astropy <cit.>, pyds9, GPL ghostscript, and TeX Live. PyRAF and PyFITS are public software created by the Space Telescope Science Institute, which is operated by AURA for NASA. This research has made use of NASA's Astrophysics Data System. | http://arxiv.org/abs/1703.09239v1 | {
"authors": [
"Markus Rexroth",
"Jean-Paul Kneib",
"Rémy Joseph",
"Johan Richard",
"Romaric Her"
],
"categories": [
"astro-ph.IM",
"astro-ph.GA"
],
"primary_category": "astro-ph.IM",
"published": "20170327180052",
"title": "IFS-RedEx, a redshift extraction software for integral-field spectrographs: Application to MUSE data"
} |
| http://arxiv.org/abs/1703.09093v1 | {
"authors": [
"M. Agaoglou",
"E. G. Charalampidis",
"T. A. Ioannidou",
"P. G. Kevrekidis"
],
"categories": [
"hep-th",
"nlin.PS"
],
"primary_category": "hep-th",
"published": "20170327141546",
"title": "Discrete BPS Skyrmions"
} |
Active Convolution: Learning the Shape of Convolution for Image Classification Yunho Jeon, EE, [email protected] Junmo Kim, EE, [email protected] December 30, 2023 ================================================================================================In recent years, deep learning has achieved great success in many computer vision applications. Convolutional neural networks (CNNs) have lately emerged as a major approach to image classification. Most research on CNNs thus far has focused on developing architectures such as the Inception and residual networks. The convolution layer is the core of the CNN, but few studies have addressed the convolution unit itself. In this paper, we introduce a convolution unit called the active convolution unit (ACU). The new convolution has no fixed shape, because of which we can define any form of convolution. Its shape can be learned through backpropagation during training. Our proposed unit has a few advantages. First, the ACU is a generalization of convolution; it can define not only all conventional convolutions, but also convolutions with fractional pixel coordinates. We can freely change the shape of the convolution, which provides greater freedom to form CNN structures. Second, the shape of the convolution is learned while training and there is no need to tune it by hand. Third, the ACU can learn better than a conventional unit; we obtained the improvement simply by changing the conventional convolution to an ACU. We tested our proposed method on plain and residual networks, and the results showed significant improvement using our method on various datasets and architectures in comparison with the baseline. § INTRODUCTION Following the success of deep learning in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) <cit.>, the best performance in classification competitions has almost invariably been achieved on convolutional neural network (CNN) architectures. AlexNet <cit.> is composed of three types of receptive field convolutions (3 × 3, 5 × 5, 11 × 11). VGG <cit.> is based on the idea that a stack of two convolutional layers with a receptive field 3 × 3 is more effective than a 5 × 5 convolution. GoogleNet <cit.> introduced an Inception layer for the composition of various receptive fields. The residual network <cit.>, which adds shortcut connections to implement identity mapping, allows more layers to be stacked without running into the gradient vanishing problem. Recent research on CNNs has mostly focused on composing layers rather than the convolution itself. Other basic units, such as activation and pooling units, have been studied with many variations. Sigmoid <cit.> and tanh were the basic activations for the very first neural networks. The rectified linear unit (ReLU) <cit.> was suggested to overcome the gradient vanishing problem, and achieved good results without pre-training. Since then, many variants of the ReLU have been suggested, such as the leaky ReLU (LReLU) <cit.>, randomized LReLU <cit.>, parametric ReLU <cit.>, and exponential linear unit <cit.>. Other types of activation units have been suggested to learn subnetworks, such as Maxout <cit.> and local winner-take-all <cit.>. Pooling is another basic operation in a CNN to reduce the resolution and enable translation invariance. Max and average pooling are the most popular methods. Spatial pyramid pooling <cit.> was introduced to deal with inputs of varying resolution. The ROI pooling method was used to speed up detection <cit.>.
Recently, fractional pooling <cit.> has been applied to image classification. Lee et al. <cit.> proposed a general pooling method that combines pooling and convolution units. On the other hand, Springenberg et al. <cit.> showed that using only convolution units is sufficient without any pooling.

However, only a few studies have considered convolution units themselves. Dilated convolution <cit.> has been suggested for dense prediction in segmentation. It reduces the post-processing needed to enhance the resolution of the segmented result. Permutohedral lattice convolution <cit.> is used to expand the convolved dimension from the spatial domain to the color domain. It enables pairwise potentials to be learned for conditional random fields.

In this paper, we propose a new convolution unit. Unlike conventional convolution and its variants, this unit does not have a fixed shape of the receptive field, and it can take more diverse forms of receptive fields for convolutions. Moreover, its shape can be learned during the training procedure. Since the shape of the unit is deformable and learnable, we call it the active convolution unit (ACU).

The main contribution of this paper is this convolution unit, which provides greater flexibility and representational power to a CNN and yields a meaningful improvement on the image classification benchmarks. We explain the new convolution unit in Section <ref>. In Sections <ref> and <ref>, we report experiments with our unit on plain and residual networks. Finally, we show the results on a general dataset in Section <ref>, and conclude the paper in Section <ref>.

§ ACTIVE CONVOLUTION UNIT

In this section, we describe the basic concept of the ACU. The ACU is a new type of convolution unit with position parameters. Conventionally, the shape of a convolution unit is fixed when the network is initialized and is not flexible. Fig. <ref> shows the concept of the ACU. The ACU can define more diverse forms of receptive fields for convolutions with learnable position parameters. Inspired by the nervous system, we call one acceptor of the ACU a synapse. The position parameters can be differentiated, and the shape can be learned through backpropagation.

§.§ Capacity of ACU

The ACU can be considered a generalization of the convolution unit. Any conventional convolution can be represented with the ACU by setting the positions of the synapses properly and fixing all positions. Dilated convolution can also be represented by multiplying the position parameters by the dilation factor. Compared to a conventional convolution, the ACU can generate fractionally dilated convolutions and can directly calculate the results of the interpolated convolution. It can also be used to define K synapses without any restriction (e.g., a cross-shaped convolution with five synapses, or a circular convolution with many synapses).

At the network level, the ACU converts a discrete input space to a continuous one (Fig. <ref>). Since the ACU uses bilinear interpolation between adjacent neurons, synapses can connect inter-neuron spaces. This lends greater representational power to the convolution units. The position parameters control the synapses that connect neuron spaces, and the synapses can move around the neuron space to reduce error.

§.§ Formulation

A convolution unit has a number of learnable filters, and each filter is convolved with its receptive field. A filter of the convolution unit has a weight W and a bias b.
For simplicity, we provide a formulation for the case with one output channel, but we can easily expand this formulation to cases with multiple output channels. A conventional convolution can be formulated as shown in Eqs. (<ref>) and (<ref>):

Y = W * X + b

y_m,n = ∑_c∑_i,j w_c,i,j· x_c,m+i,n+j + b

where c is the index of the input channel and b is the bias. m and n are the spatial positions, and w_c,i,j and x_c,m,n are the weight of the convolution filter and the value in the given channel and position, respectively.

Compared with the basic convolution, the ACU has a learnable position parameter θ_p, which is the set of positions of the synapses (Eq. (<ref>)):

θ_p = {p_k | 0 ≤ k < K}

where k is the index of the synapse and p_k = (α_k, β_k) ∈ℝ^2. The parameters α_k and β_k define the horizontal and vertical displacements, respectively, of the synapse from the origin. With parameter θ_p, the ACU can be defined as given in Eqs. (<ref>) and (<ref>):

Y = W * X_θ_p + b

y_m,n = ∑_c∑_k w_c,k·x^p_k_c,m,n + b = ∑_c∑_k w_c,k· x_c,m+α_k,n+β_k + b

For instance, the conventional 3 × 3 convolution can be represented by the ACU with θ_p = {(-1,-1), (0,-1), (1,-1), (-1,0), (0,0), (1,0), (-1,1), (0,1), (1,1)}.

In this paper, θ_p is shared across all output units y_m,n. If the numbers of synapses, input channels, and output channels are K, C, and D, respectively, the dimensions of the weight W should be D × C × K. The ACU adds only 2 × K position parameters; this number is very small compared to the number of weight parameters.

§.§ Forward Pass

Because the position parameter p_k is real-valued, x_c,m+α_k,n+β_k can refer to a nonlattice point. To obtain the value at a fractional position, we use bilinear interpolation:

x^p_k_c,m,n = x_c,m+α_k,n+β_k = {Q^11_c,k·(1-Δα_k)·(1-Δβ_k) + Q^21_c,k·Δα_k·(1-Δβ_k) + Q^12_c,k·(1-Δα_k)·Δβ_k + Q^22_c,k·Δα_k·Δβ_k}

where

Δα_k = α_k - ⌊α_k⌋, Δβ_k = β_k - ⌊β_k⌋

m^1_k = m + ⌊α_k⌋, m^2_k = m^1_k + 1, n^1_k = n + ⌊β_k⌋, n^2_k = n^1_k + 1

Q^11_c,k = x_c,m^1_k,n^1_k, Q^12_c,k = x_c,m^1_k,n^2_k, Q^21_c,k = x_c,m^2_k,n^1_k, Q^22_c,k = x_c,m^2_k,n^2_k.

We can obtain the value at the fractional position by using the four nearest integer points Q^ab_c,k. The interpolated value x^p_k_c,m,n is continuous with respect to any p_k, even at lattice points.

§.§ Backward Pass

The ACU has three types of parameters: weight, bias, and position. They can all be differentiated and learned through backpropagation. The partial derivative with respect to the weight w is the same as that for a conventional convolution, except that the interpolated value x^p_k_c,m,n (Eq. (<ref>)) is used. The gradient of the bias is the same as that of the conventional convolution (Eq. (<ref>)):

∂ y_m,n/∂ w_c,k = x^p_k_c,m,n , ∂ y_m,n/∂ b = 1.

Because we use bilinear interpolation, the positions are simply differentiated as defined in Eqs. (<ref>) and (<ref>):

∂ y_m,n/∂α_k = ∑_c w_c,k·{(1-Δβ_k)·(Q^21_c,k-Q^11_c,k) + Δβ_k·(Q^22_c,k-Q^12_c,k)},

∂ y_m,n/∂β_k = ∑_c w_c,k·{(1-Δα_k)·(Q^12_c,k-Q^11_c,k) + Δα_k·(Q^22_c,k-Q^21_c,k)}.

The derivatives of the positions are valid in subpixel regions, i.e., between lattice points. They may not be differentiable at lattice points because we use the four nearest neighbors for interpolation. Empirically, however, we found that this was not a problem if we set an appropriate learning rate for the position parameters. This is explained in Section <ref>.

The derivative with respect to the input of the ACU is given simply as

∂ y_m,n/∂x^p_k_c,m,n = w_c,k.
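To make the forward interpolation and the position gradients above concrete, the following is a minimal NumPy sketch of a single ACU output for one output channel. This is our illustrative reimplementation of the equations above, not the authors' Caffe code; the function names and array layout are our own, and border handling is omitted.

import numpy as np

def acu_output(x, w, b, alpha, beta, m, n):
    """One ACU output y_{m,n} for a single output channel.
    x: input map (C, H, W); w: synapse weights (C, K); b: scalar bias;
    alpha, beta: horizontal/vertical synapse displacements, each (K,)."""
    C, K = w.shape
    y = b
    for c in range(C):
        for k in range(K):
            m1 = int(np.floor(m + alpha[k]))   # m^1_k
            n1 = int(np.floor(n + beta[k]))    # n^1_k
            da = m + alpha[k] - m1             # Delta alpha_k
            db = n + beta[k] - n1              # Delta beta_k
            Q11, Q21 = x[c, m1, n1], x[c, m1 + 1, n1]
            Q12, Q22 = x[c, m1, n1 + 1], x[c, m1 + 1, n1 + 1]
            # Bilinear interpolation of x at the fractional position p_k.
            y += w[c, k] * (Q11 * (1 - da) * (1 - db) + Q21 * da * (1 - db)
                            + Q12 * (1 - da) * db + Q22 * da * db)
    return y

def position_gradients(x, w, alpha, beta, m, n):
    """dy_{m,n}/d alpha_k and dy_{m,n}/d beta_k from the backward-pass equations."""
    C, K = w.shape
    g_alpha, g_beta = np.zeros(K), np.zeros(K)
    for k in range(K):
        m1 = int(np.floor(m + alpha[k])); n1 = int(np.floor(n + beta[k]))
        da = m + alpha[k] - m1; db = n + beta[k] - n1
        for c in range(C):
            Q11, Q21 = x[c, m1, n1], x[c, m1 + 1, n1]
            Q12, Q22 = x[c, m1, n1 + 1], x[c, m1 + 1, n1 + 1]
            g_alpha[k] += w[c, k] * ((1 - db) * (Q21 - Q11) + db * (Q22 - Q12))
            g_beta[k] += w[c, k] * ((1 - da) * (Q12 - Q11) + da * (Q22 - Q21))
    return g_alpha, g_beta

During training, the loss gradients accumulated over all outputs would then be rescaled to unit magnitude before the position update, as described in the next subsection.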
§.§ Normalized Gradient

The backpropagated value of the position of a synapse controls its movement. If the value is too small, the synapse stays in almost the same position and the ACU has no effect. By contrast, a large value diversifies the synapses. Hence, controlling the magnitude of movement is important. The partial derivatives with respect to position depend on the weights, and the backpropagated error can fluctuate across layers. Hence, determining the learning rate of the position is difficult.

One way to reduce the fluctuation of the gradient across layers is to use only the direction of the derivatives, and not the magnitude. When we use the normalized gradient of the position, we can easily control the magnitude of movement. We observed in experiments that the use of a normalized gradient made it easier to train and achieve a good result. The normalized gradients of the position are defined as

(∂ L/∂α_k)_norm = (∂ L/∂α_k) / Z, (∂ L/∂β_k)_norm = (∂ L/∂β_k) / Z

where L is the loss function and Z is the normalization factor:

Z = √((∂ L/∂α_k)^2 + (∂ L/∂β_k)^2)

An initial learning rate of 0.001 is typically appropriate, which means that a synapse moves 0.001 pixels per iteration. In other words, a synapse can move a maximum of one pixel in 1,000 iterations.

§.§ Warming up

As noted above, the direction of movement is influenced by the weights. Since the weights are initialized with a random distribution, the synapses wander randomly in early iterations. This can cause the positions to stick to a local minimum; hence, we first warm up the network without learning the position parameters. In early iterations, the network only learns weights with a fixed shape. It then learns positions and weights concurrently. This helps the synapses learn a more stable shape; in experiments, we obtained an improvement of 0.1%–0.2% on the benchmark data.

§.§ Miscellaneous

The index of a synapse, k, starts at 0, which represents the base position. In experiments, we fixed p_0 to the origin, where p_0 = (α_0, β_0) = (0,0) and x^p_0_c,m,n = x_c,m,n, and the learning rate for p_0 was set to 0. Since p_0 was fixed, we did not need to assign a parameter to it, but we set the index to 0 for convenience.

The ACU was implemented in Caffe <cit.>. Owing to the calculation of bilinear interpolations, the ACU was approximately 1.6 times slower than the conventional convolution for the forward pass with a plain graphics processing unit (GPU) implementation. If we implemented the ACU using a dedicated library, like the CUDA Deep Neural Network library (cuDNN) <cit.>, it could run faster than the current implementation.

§ ACU WITH A PLAIN NETWORK

To evaluate the performance of the proposed ACU, we created a simple network consisting of convolution units without a pooling layer <cit.> (Table <ref>). Batch normalization <cit.> and ReLU were applied after all convolutions.

We used the CIFAR dataset for the experiment <cit.>. CIFAR-10/100 is widely used for image classification and consists of 60k 32 × 32 color images: 50k for training and 10k for testing, in 10 and 100 classes, respectively. The weight parameters were initialized using the method proposed by He et al. <cit.> for all experiments. The positions of the synapses were initialized with the shape of a conventional 3 × 3 convolution.

§.§ Experimental results

We simply changed all 3 × 3 convolution units of the baseline to ACUs. The 3 × 3 convolution unit can be substituted by an ACU with nine synapses (one is fixed: p_0).
The position parameters were shared across all kernels in the same layer; thus, 8 × 2 additional parameters were required per ACU layer. We replaced six convolution layers with ACUs; hence, only 96 parameters were added compared to the baseline. To test the performance of the ACU alone, all other parameters were kept constant.

The network was trained for 64k iterations using stochastic gradient descent with Nesterov momentum. The initial learning rate was 0.1 and was divided by 10 after 32k and 48k iterations. The batch size was 64, and the momentum was 0.9. We used L2 regularization, and the weight decay was 0.0001. We warmed up the network for 10k iterations without learning positions. We used the normalized gradient for the synapses, and their learning rate was set to 0.01 times the base learning rate. When the base learning rate decreased, so did the learning rate of the synapses, which meant that their movement was limited over iterations.

We preprocessed the input data with commonly used methods <cit.>: global contrast normalization and ZCA whitening. For data augmentation, we performed random cropping on images padded by four pixels on each side, as well as horizontal flipping, as in previous studies <cit.>.

Table <ref> presents the experimental results. The baseline network obtained error rates of 8.01% with CIFAR-10 and 27.85% with CIFAR-100. Once we changed the convolution units to ACUs, the error rate dropped to 7.33% with CIFAR-10, a 0.68% improvement over the baseline. We obtained a similar result with CIFAR-100: the ACU reduced the error rate by 0.74%. Fig. <ref> shows the training curve for CIFAR-10. Both the training loss and the test error were lower than those of the baseline.

§.§ Learned Position

Fig. <ref> shows the results of position learning. The exact positions were different in each trial due to the weight initialization, but they had similar characteristics. In the lower layers, the ACU settled down to a conventional 3 × 3 convolution. By contrast, the upper layers tended to grow. The last ACU was similar to a two-dilated convolution. Since we normalized the magnitude of movement, all synapses moved by the same magnitude in an iteration; thus, this phenomenon was not due to gradient flow. We think that the lower layers focused on capturing local features, while the higher layers became wider to integrate the spatial features from the preceding layers.

Fig. <ref> shows the movement of α_k in the first and the last layers. We see that the first layer tended to maintain the initial shape of the receptive field. Unlike in the first layer, the positions of the synapses grew monotonically in the last layer.

Although the lower layers were similar to a conventional convolution, we found that the ACU was still effective for them. Since the synapses move continuously during training, the same inputs and weights do not yield the same outputs. The output of the ACU is spatially deformed in every iteration, which lends an augmentation effect without explicit data augmentation.

§ ACU WITH THE RESIDUAL NETWORK

Recent work has shown that a residual structure <cit.> can improve performance using very deep layers. We experimented with our convolution unit on a residual network and investigated how the ACU can collaborate with the residual architecture. All our experiments were based on the pre-activation residual structure <cit.>, and the experimental parameters were the same as those described in the previous section, except for the network structure.
§.§ Basic Residual Network

The basic residual network consists of residual blocks with two 3 × 3 convolutions in series. To check the applicability of the ACU, we formed a 32-layer residual network with five residual blocks. We used projection shortcuts to increase the number of dimensions, while the other shortcuts were identity mappings.

Table <ref> presents the experimental results. The basic residual network achieved an 8.01% error rate with CIFAR-10. We replaced all convolution units in the residual blocks with ACUs. This yielded a 7.54% error, an improvement of 0.47% over the baseline. With CIFAR-100, we obtained a 0.68% improvement, which was better than that with CIFAR-10.

§.§ Bottleneck Residual Network

A bottleneck residual unit consists of three successive convolution layers: 1 × 1, 3 × 3, and 1 × 1 <cit.>. The first layer reduces the number of dimensions, and the last 1 × 1 convolution restores the original dimensions. Fig. <ref> shows the difference between the basic and the bottleneck residual blocks. This architecture has a complexity similar to that of the basic residual block, but it can increase the network depth.

The 32-layer basic residual network was converted into a 47-layer bottleneck network. The number of parameters was nearly the same, and this network achieved a 7.64% error rate (Table <ref>). Since the bottleneck block has only one 3 × 3 convolution, we changed only that convolution to an ACU. We obtained a 7.12% error with CIFAR-10, a 0.52% improvement over the bottleneck baseline. We obtained a similar result (a 0.46% gain) with CIFAR-100. The training loss and the test error are shown in Fig. <ref>.

§.§ Learned Position

Fig. <ref> shows the learned filters for the bottleneck structure. In total, 15 convolutions were changed to ACUs. Each row shows the learned positions of the ACUs in the residual blocks with the same number of dimensions. As in the plain network, the positions of the synapses became wider at higher layers. ACUs in different residual blocks with the same number of dimensions tended to learn different shapes of the receptive field. This shows that each ACU extracted unique features with different views.

§ EXPERIMENT ON PLACE365

In the previous experiments, we used the CIFAR-10/100 datasets to check the validity of our approach. However, these sets consist of small image patches and are of limited size. To verify our approach in more realistic situations, we tested the ACU on the Place365-Standard benchmark set <cit.>.

Place365 is the latest subset of the Place2 database, which contains more than 10 million images. Place365-Standard is a benchmark set with more than 1.8 million training images in 365 scene categories. To represent plain and residual networks, we trained AlexNet <cit.> and a 26-layer residual network with a bottleneck structure. We resized the images to 256 × 256 pixels, and randomly cropped and horizontally flipped them for augmentation. The networks were trained for 400k iterations using stochastic gradient descent. We divided the learning rate by 10 after 200k and 300k iterations, and used four GPUs with a batch size of 32 each.

When we trained a network with the ACU, we warmed it up for 50k iterations with fixed positions. We tested the networks with a validation set containing 100 images per class. Top-1 and top-5 accuracies were obtained by the standard 10-crop testing <cit.>.

§.§ AlexNet

Following the conventions of AlexNet, we used a weight decay of 0.0005, and the momentum was 0.9. The initial learning rate was 0.01.
We could have changed the 11 × 11 and 5 × 5 convolutions to ACUs as well, but we decided to change only the 3 × 3 convolutions because we had not completely analyzed the effects of large convolutions. Since the second and third 3 × 3 convolution layers in AlexNet split the channels in two (Fig. <ref>), we assigned two sets of positions to each of these convolutions, and five sets of position parameters were used in total.

Simply by changing the 3 × 3 convolution layers to ACUs, we obtained an improvement of 0.79% for AlexNet (Table <ref>). Fig. <ref> shows the training curves of AlexNet and its ACU network. Their accuracies were almost identical during the warm-up iterations, but after warming up (50k iterations), the test error began to decrease. This shows the effect of the ACU.

Fig. <ref>(a) shows the positions learned after training. The first ACU layer (conv3) was not the first layer of the network (conv1), and the shape of this layer was not similar to that of conventional convolutions, unlike in the previous experiments. The learned shape of the synapses is interesting in that it resembles a combination of two receptive fields of a convolution. With a small number of weight parameters, the ACU tried to learn features over multiple receptive fields.

§.§ Residual Network

To test with the residual network, we constructed a 26-layer residual network, which was an 18-layer basic network <cit.> converted into a bottleneck structure. Due to hardware restrictions, we were not able to use a deeper architecture. In the residual network, the weight decay was 0.0001 and the initial learning rate was 0.1. To prevent overfitting, we stopped training at 350k iterations.

With the residual network, we replaced all 3 × 3 convolutions in the residual blocks, as in the previous experiment. In total, eight convolutions were replaced with ACUs, and thus 128 parameters were added. With this small number of parameters, we obtained a 0.49% gain in top-5 accuracy.

On both the plain and the residual networks, we obtained a meaningful improvement using the ACU. These results show that the ACU can be applied not only to simple image classification as in CIFAR-10/100, but also to more general problems.

The final shape of the ACU is shown in Fig. <ref>(b). As in our previous experiments, the higher layers tended to enlarge their receptive fields. However, their coverage was greater than in the previous CIFAR-10/100 experiments. We think that since the images of Place365 are larger than those in the CIFAR dataset, the last layer preferred a larger receptive field.

§ CONCLUSION

In this study, we introduced the ACU, which contains position parameters that provide more freedom than a conventional convolution and allow its shape to be learned through backpropagation. Through various experiments, we showed the effectiveness of our approach. Simply by changing the convolution layers to our unit, we were able to boost the performance of equivalent networks.

Since the shape of the ACU can be freely defined, we believe that it can expand network architectures in a more diverse manner. In this study, we shared only one set of position parameters per layer; it is possible to define additional sets of positions in a layer. Using multiple sets of positions, the model's representational power could be expanded. In future work, we plan to investigate variations of the ACU in greater detail. | http://arxiv.org/abs/1703.09076v1 | {
"authors": [
"Yunho Jeon",
"Junmo Kim"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170327134426",
"title": "Active Convolution: Learning the Shape of Convolution for Image Classification"
} |
Department of Physics, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
Department of Physics, NRCN, P.O. Box 9001, Beer-Sheva 84190, Israel
Physikalisches Institut, Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
Physikalisches Institut, Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
Institut für Theorie der Kondensierten Materie, KIT, 76131 Karlsruhe, Germany
Physikalisches Institut, Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
Physikalisches Institut, Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
Russian Quantum Center, National University of Science and Technology MISIS, Leninsky prosp. 4, Moscow, 119049, Russia
Department of Physics, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel

Understanding the nature of two-level tunneling defects is important for minimizing their disruptive effects in various nano-devices. By exploiting the resonant coupling of these defects to a superconducting qubit, one can probe and coherently manipulate them individually. In this work we utilize a phase qubit to induce Rabi oscillations of single tunneling defects and measure their dephasing rates as a function of the defect's asymmetry energy, which is tuned by an applied strain. The dephasing rates scale quadratically with the external strain and are inversely proportional to the Rabi frequency. These results are analyzed and explained within a model of interacting standard defects, in which pure dephasing of coherent high-frequency (GHz) defects is caused by interaction with incoherent low-frequency thermally excited defects.

Rabi noise spectroscopy of individual two-level tunneling defects

Moshe Schechter

December 30, 2023
=================================================================

Since the early 1970s, various experiments over a wide range of amorphous solids have revealed a universality in their thermal, acoustic, and dielectric properties below 1 K. <cit.> In an attempt to account for this universal behavior, the existence of two-level tunneling defects as a generic property of amorphous systems was postulated. <cit.> This phenomenological standard tunneling model (STM) explains many of the universal low-temperature properties of the amorphous state of matter. <cit.> However, despite extensive efforts, the exact nature of the two-level systems (TLSs) remains unknown.

With the recent progress in fabrication, manipulation, and measurement of quantum devices, it has become crucial to understand the microscopic nature of the environment responsible for decoherence. There exists abundant experimental evidence that TLS baths form a ubiquitous source of noise in various devices such as superconducting microwave resonators, <cit.> single-electron transistors, <cit.> and nanomechanical resonators. <cit.> In superconducting qubits, TLSs residing in the amorphous tunnel barrier of Josephson junctions were found to constitute a major source of decoherence. <cit.> Via their electric dipole moment, TLSs couple to the ac microwave fields in the circuit. <cit.> Whereas this coupling is deleterious from the point of view of qubit operation, it opens up the possibility to use superconducting qubits as tools for the detection, manipulation, and characterization of individual TLSs. The transfer of an arbitrary quantum state from a superconducting phase qubit to a resonant TLS was first demonstrated by Neeley et al. <cit.> This method was used to probe the coherence times of individual TLSs. <cit.> Furthermore, in Ref.
LJ10a it was shown that there exists an effective qubit-mediated coupling between TLSs and an externally applied electromagnetic ac field. This effective coupling was utilized to directly control the quantum state of individual TLSs by coherent resonant microwave driving. <cit.>Previously, <cit.> we measured the Ramsey (free induction decay) and spin-echo pure dephasing rates of individual TLSs in a phase qubit as a function of their asymmetry energy, which was tuned by an applied strain via a piezo actuator. <cit.> Since the mutual longitudinal coupling between TLSs is proportional to the product of their asymmetry energies, strain-tuning allows one to gradually increase the longitudinal coupling of a single probed TLS to a bath of other TLSs and study its dephasing rates as a function of this coupling. This yields information about the spectrum of the environment to which a TLS couples, and provides a test to distinguish between different TLS models. The experimental data on Ramsey dephasing indicate that the main low-frequency noise is quasi-static and can be attributed to slow thermal TLSs (with energy splitting smaller than the temperature), which flip between their energy eigenstates with maximum rates Γ_1,max≈ 10(ms)^-1, <cit.> much smaller than the dephasing rates of the probed TLS which are of the order of Γ_Ramsey≈ 1(μs)^-1. For such an environment, the echo protocol should be very efficient. Surprisingly, the experiment shows that the echo dephasing rates are not negligible, revealing the existence of an additional noise source with a flat power spectral density. It was suggested that this white noise may arise due to fast relaxing TLSs that interact much more strongly with strain fields compared to the weakly interacting TLSs of the STM, <cit.> or may be a result of quasiparticle excitations. <cit.>Here we study experimentally and theoretically the decoherence of Rabi oscillations of individual TLSs as a function of their strain-tuned asymmetry energy. At resonance with the driving field, the Rabi decay rate consists of three contributions from noise at different frequencies. <cit.> The first contribution is due to noise at frequency equal to the energy splitting of the probed TLS (≈ 2π·7GHz), and arises from degrees of freedom other than thermal TLSs, such as phonons or microwave photons. The other two contributions are due to noise at the Rabi frequency of the probed TLS (several MHz) and low-frequency quasi-static noise similar to the one responsible for the Ramsey dephasing discussed in Refs. LJ16 and MS16. The last two contributions result from a transverse noise in the rotating frame of reference, the origin of which is suggested to be thermal TLSs. Due to its transverse nature, this noise leads to a quadratic strain dependence of the Rabi dephasing rate near the symmetry point of the probed TLS, making the dephasing rates smaller than those in a Ramsey experiment.We begin with a model of a high-frequency (GHz) single TLS driven by a microwave field at frequency ω_d and interacting with a thermal bath. In the basis of local states of the TLS, the Hamiltonian of the system is (we set ħ=k_B=1)ℋ̂=1/2(ετ̂_z+Δτ̂_x)-Ω^0_Rcos(ω_dt)τ̂_z+1/2τ̂_zÔ+ℋ̂_b.The first term describes the TLS, characterized by the asymmetry and tunneling energies, ε and Δ, with τ̂_x and τ̂_z being the Pauli matrices. 
The second term describes its coupling to the driving field with Ω^0_R=pE_z being the maximum Rabi frequency, where p is the electric dipole moment of the TLS and E_z is the component of the electric field along its dipole moment. The third term is the coupling of the TLS to the bath observable Ô. Motivated by the experimental data below, we write Ô=X̂+Ŷ, separating the environmental degrees of freedom which couple to the TLS into those fluctuating at frequencies below ∼MHz (X̂) and those fluctuating at frequencies of the order of the energy splitting of the TLS, E=√(ε^2+Δ^2) ≈ 2π·7GHz (Ŷ).In the eigenbasis of the TLS, Eq. (<ref>) readsℋ̂=E/2σ̂_z-Ω^0_Rcos(ω_dt)(σ̂_zcosθ-σ̂_xsinθ)+1/2(σ̂_zcosθ-σ̂_xsinθ)(X̂+Ŷ)+ℋ̂_b,where σ̂_x and σ̂_z are the Pauli matrices in the eigenbasis of the TLS, cosθ=ε/E and sinθ=Δ/E. All these are strain-dependent via ε(ϵ), where ϵ is the strain at the position of the TLS. Taking into account the characteristic frequencies of X̂ and Ŷ, the relevant terms are <cit.>ℋ̂=E/2σ̂_z+Ω_Rcos(ω_dt)σ̂_x+1/2(cosθ σ̂_zX̂-sinθ σ̂_xŶ)+ℋ̂_b,where Ω_R=Ω^0_Rsinθ is the Rabi frequency. We are interested in the decay of Rabi oscillations, Γ_D, which is the equivalent of Γ_2 in the rotating frame of reference. At resonance, ω_d=E, the contribution of the high-frequency part Ŷ to Γ_D is known to be 3/4Γ_1, <cit.> where Γ_1=1/2sin^2θ S_Y(ω=E) is the relaxation rate in the laboratory frame of reference, with S_Y(ω) being the spectral density of fluctuations in Ŷ. We now discuss the contribution of X̂ to Γ_D. To this end we consider the Hamiltonian ℋ̂=E/2σ̂_z+Ω_Rcos(ω_dt)σ̂_x+cosθ/2σ̂_zX̂+ℋ̂_b.We now move to the rotating frame of reference by applying the unitary transformation Û_R=e^iω_dtσ̂_z/2. Using the rotating wave approximation, in which counter-rotating terms with frequencies 2ω_d are neglected, and assuming resonant driving (ω_d=E), the transformed Hamiltonian ℋ̂_R=Û_Rℋ̂Û^†_R+idÛ_R/dtÛ^†_R isℋ̂_R=Ω_R/2σ̂_x+cosθ/2σ̂_zX̂+ℋ̂_b.We observe that in the rotating frame and at resonance, the noise is purely transverse. This transverse noise gives rise to relaxation in the rotating frame of reference, <cit.> for which the golden rule yieldsΓ_ν=1/2cos^2θ S_X(ω=Ω_R),where S_X(ω) is the spectral density of fluctuations in X̂. As usual, this results in a contribution of Γ_ν/2 to Γ_D. In an echo experiment, the same noise X̂ is longitudinal and the dephasing rate isΓ_Echo= 1/2cos^2θ S_X(ω≈Γ_Echo).Equation (<ref>) coincides with the golden rule result if S_X(ω) is flat on a scale of Γ_Echo around zero frequency. Otherwise, it provides self-consistently an order of magnitude estimate for the dephasing time. Both Eqs. (<ref>) and (<ref>) are determined by the spectral density S_X(ω) at frequencies ≲ 1MHz (see the experimental data below).Based on the Ramsey dephasing observed and discussed in Refs. LJ16 and MS16, X̂ contains contributions from many slow fluctuators (thermal TLSs). Using standard estimations within the STM, one expects the relaxation rates Γ_1 of thermal TLSs (at T=35mK) to be smaller than 10ms^-1. <cit.> Consequently, these fluctuators are in the regime Γ_1≪Ω_R, Γ_Echo and thus give rise to quasi-static noise, for which X̂ is constant during each run of the experiment but fluctuates between different runs. These fluctuators will not contribute to Γ_ν and Γ_Echo. However, due to their large number, their second-order contribution to pure dephasing may be important. 
The noise due to these fluctuators can be treated classically, X̂→ X(t)=∑_jv_jα_j(t), where each fluctuator is represented by a random telegraph process (RTP) α_j(t) which randomly switches between the states α_j=± 1 with flipping rate γ_1,j, <cit.> and interacts with the probed TLS with a coupling strength v_j. As shown in Refs. LJ16 and MS16, the interaction between the probed TLS and its closest thermal TLS, v_T, is of the order of a few MHz. As a result, close to the symmetry point ε=0 (thus cosθ≪ 1) one may assume Xcosθ≪Ω_R. We therefore expand

√(Ω^2_R+X^2cos^2θ)≈ Ω_R+X^2cos^2θ/2Ω_R=Ω_R+cos^2θ/2Ω_R∑_jv^2_j +cos^2θ/2Ω_R∑_i≠ jv_iv_jα_i(t)α_j(t).

In contrast to the Ramsey dephasing, <cit.> which is caused by individual thermal TLSs, the last term of Eq. (<ref>) shows that the second-order contribution to the Rabi dephasing is due to pairs of thermal TLSs. However, since a product of two RTPs with flipping rates γ_1 and γ_2 is also an RTP with flipping rate γ_1+γ_2, the second-order contribution to the Rabi dephasing is essentially similar to the Ramsey dephasing. <cit.> The decay law due to this low-frequency noise is

f_Rabi(t)=1/2^N∑_{ξ_k}e^itcos^2θ/2Ω_R∑_i≠ jv_iv_jξ_iξ_j,

where N is the number of thermal TLSs and the sum is over all the configurations of the variables ξ_k=±1. Similarly to the Ramsey dephasing, it is expected to be dominated by the few closest fluctuators, and the typical decay rate is <cit.>

Γ^(2)_φ≈v^2_T/Ω_Rcos^2θ.

The Rabi decay rate is given by the sum of the three contributions discussed above,

Γ_D=3Γ_1/4+Γ_ν/2+Γ^(2)_φ.

We now discuss the experimental results for this decay rate. The distinct strain dependence of the first and the other two terms of Eq. (<ref>) allows us to separate the effect of the noise in the two spectral ranges discussed above (i.e., below ∼MHz and at GHz frequencies).

The TLSs studied here are contained in the amorphous AlO_x tunnel barrier of a Josephson junction, which is part of a superconducting phase qubit circuit. <cit.> We apply mechanical strain to the qubit chip by means of a piezo actuator, <cit.> allowing us to tune the asymmetry energy ε of the TLSs while their resonance frequencies are tracked with the qubit, <cit.> as shown in the top row of Fig. <ref>. At each strain value, standard microwave pulse sequences are applied <cit.> to measure the energy relaxation rate Γ_1 and the dephasing rates of Rabi oscillations and echo signals, Γ_Rabi and Γ_Echo, of the probed TLS (bottom row of Fig. <ref>). In addition, we present the frequency of Rabi oscillations in the middle row of Fig. <ref> for a fixed microwave driving power. Peaks and dips, which appear symmetrically in these data with respect to ε=0, mainly originate from the frequency dependence of the transmitted microwave power due to cable resonances. When these fluctuations are smoothed out, the Rabi frequency tends to increase with ε. This is because the TLS is driven via a transition that involves a virtual state of the qubit, such that the effective driving strength depends on the detuning between the TLS and the qubit, <cit.> where the latter was tuned to the fixed frequency of 8.8 GHz during all measurements. This increase is partly compensated by the factor sinθ=Δ/E appearing in the definition of the Rabi frequency.

The strain dependence of the energy relaxation and echo dephasing rates was discussed in Ref. LJ16. Here we focus on the Rabi dephasing rate.
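As a rough numerical illustration of the decomposition Γ_D = 3Γ_1/4 + Γ_ν/2 + Γ^(2)_φ, the following is a minimal sketch. All parameter values below are order-of-magnitude assumptions chosen from the ranges quoted in the text (not the measured values of any particular TLS), and the S_X(Ω_R) term is dropped, anticipating the conclusion reached below that S_X(Ω_R) ≪ v^2_T/Ω_R.

import numpy as np

Gamma_1 = 1.0   # TLS relaxation rate, ~1 (us)^-1 (assumed)
v_T = 2.0       # coupling to nearest thermal TLS, ~a few (us)^-1, i.e. a few MHz (assumed)
Omega_R = 30.0  # Rabi angular frequency, ~2*pi*5 MHz in (us)^-1 (assumed)
Delta = 1.0     # tunneling energy, arbitrary energy units

def rabi_decay_rate(eps):
    """Gamma_D vs. asymmetry energy eps; cos^2(theta) = eps^2 / (eps^2 + Delta^2)."""
    cos2 = eps**2 / (eps**2 + Delta**2)
    Gamma_phi2 = (v_T**2 / Omega_R) * cos2   # second-order dephasing from thermal-TLS pairs
    return 0.75 * Gamma_1 + Gamma_phi2

# At the symmetry point the strain-dependent part vanishes (Gamma_D -> 3*Gamma_1/4);
# for small eps it grows quadratically, as observed in the experiment.
for eps in (0.0, 0.1, 0.2, 0.4):
    print(f"eps = {eps:.1f}, Gamma_D = {rabi_decay_rate(eps):.3f} (us)^-1")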
Due to limitations in the applicable microwave power, the Rabi frequency is at most of order 5 MHz, such that only a few oscillations can be observed during the coherence time of the TLS [see Fig. <ref>(a)]. Together with the experimental measurement uncertainties, it therefore becomes practically impossible to determine the exact functional form of the decay envelope. We thus extract an effective decay rate Γ_D from a fit to an exponentially damped sinusoid.

At the symmetry point ε=0 (cosθ=0), one expects Γ_D=3/4Γ_1, so that the Rabi decay rate is independent of the Rabi frequency. The experimental observation is in good agreement with this prediction, as shown in the inset of Fig. <ref>(b). To analyze the effect of the noise at frequencies below the Rabi frequency, one has to subtract this contribution of fluctuations at GHz frequencies from Γ_D. In the bottom row of Fig. <ref> we show the Rabi dephasing rate Γ_Rabi≡Γ_D-3/4Γ_1. For all investigated TLSs, Γ_Rabi vanishes at the symmetry point and increases quadratically with ε. This quadratic strain dependence is also predicted by the above model, which shows that both Γ_ν and Γ^(2)_φ scale as cos^2θ=(ε/E)^2≈(ε/Δ)^2 [Eqs. (<ref>) and (<ref>)]. Table <ref> summarizes the measured tunneling energies Δ, the deformation potentials ∂ε/∂ V, where V is the applied piezo voltage, and the relaxation times T_1 measured at ε=0. The strengths of the Rabi and echo dephasing rates are obtained from quadratic fits to Γ_i = A_i(ε/E)^2, where i={Rabi, Echo}. We now compare the values extracted for A_Rabi and A_Echo with the expressions derived from the above model,

A_Rabi = Γ_Rabi/cos^2θ = (Γ_ν/2+Γ^(2)_φ)/cos^2θ = S_X(Ω_R)/4 + v^2_T/Ω_R,

A_Echo = Γ_Echo/cos^2θ = S_X(Γ_Echo)/2.

Further information on A_Rabi comes from the inverse dependence of the Rabi dephasing rate on the Rabi frequency, Γ_Rabi=A/Ω_R, observed at fixed ε and displayed in Fig. <ref>(b) for TLS 2, with the fit parameters A listed in the legend. Comparing this observation with Eq. (<ref>), we identify two possibilities: I) The first term of Eq. (<ref>) dominates, in which case S_X(Ω_R)∝Ω^-1_R (1/f noise). II) The second contribution dominates. In the first case Γ_Echo≫Γ_Rabi because Γ_Echo≪Ω_R, which is inconsistent with the experimental observations. Thus we adopt the second scenario. Indeed, with v_T estimated at several MHz, <cit.> the contribution v^2_T/Ω_R due to thermal standard TLSs gives the correct order of magnitude for A_Rabi, and therefore this scenario seems plausible. Moreover, according to our previous results, <cit.> the strain dependence and the ratio between the Ramsey and the echo dephasing rates are inconsistent with a 1/f spectrum at frequencies around Γ_Echo, but rather suggest a flat spectrum. It is therefore improbable that S_X(Ω_R)∝Ω^-1_R, leading to the conclusion that S_X(Ω_R)≪ v^2_T/Ω_R.

What could then be learned from comparing A_Rabi with A_Echo? Based on our previous results in Ref. LJ16, we try to associate the two contributions to Γ_Rabi with the two types of TLSs discussed in Refs. MS16 and SM13. In Refs. LJ16 and MS16 we attributed the Ramsey dephasing to an ensemble of slow (quasi-static) thermal TLSs (denoted as τ-TLSs), with parameters consistent with the STM, for which the echo protocol should be more efficient than observed. This discrepancy was explained by the existence of a few fluctuators with maximum relaxation rates of the order of 10 (μs)^-1 (denoted as S-TLSs). In contrast to the standard fluctuators, these fast fluctuators contribute to S_X(ω) at MHz frequencies.
Let us assume the existence of a single fast fluctuator with relaxation rate γ_1 and coupling constant v_Sτ to the probed TLS, for which S_X(ω)=2v^2_Sτγ_1/(ω^2+γ^2_1). If the second contribution to A_Rabi is dominant, the observation that A_Rabi and A_Echo are comparable (see Fig. <ref> and Table <ref>) implies that S_X(Γ_Echo)≫ S_X(Ω_R). Since Ω_R≫Γ_Echo, one obtains the condition Ω_R>γ_1, which sets an upper bound of ∼10 (μs)^-1 for γ_1 of the fast relaxing fluctuator.

In summary, we have measured the decay of Rabi oscillations as a function of the asymmetry energy of individual TLSs, which is controlled by applying an external strain. We employ a theoretical model based on interacting TLSs and find agreement with the experimentally observed magnitude of the Rabi dephasing rate and its dependence on the applied strain and on the Rabi frequency. In conjunction with measurements of energy relaxation, Ramsey, and echo dephasing, <cit.> Rabi noise spectroscopy provides information about the spectrum of the environment to which TLSs couple, within three different spectral ranges. This allows one to distinguish between contributions from distinct environmental degrees of freedom. Such information is important for minimizing noise due to TLSs in various nano-devices, for exploiting TLSs as useful degrees of freedom, and for a basic understanding of amorphous systems at low temperatures.

This work was supported by the German-Israeli Foundation (GIF), grant 1183/2011, by the Israel Science Foundation (ISF), grant 821/14, and by the Deutsche Forschungsgemeinschaft (DFG), grants SH 81/2-1 and LI2446/1-1. AB acknowledges support from the Helmholtz International Research School for Teratronics (HIRST) and the Landesgraduiertenförderung-Karlsruhe (LGF).

ZRC71 R. C. Zeller and R. O. Pohl, Phys. Rev. B 4, 2029 (1971).
BJF88 J. F. Berret and M. Meissner, Z. Phys. B: Condens. Matter 70, 65 (1988).
PRO02 R. O. Pohl, X. Liu, and E. Thompson, Rev. Mod. Phys. 74, 991 (2002).
PWA72 W. A. Phillips, J. Low-Temp. Phys. 7, 351 (1972).
APW72 P. W. Anderson, B. I. Halperin, and C. M. Varma, Philos. Mag. 25, 1 (1972).
PWA87 W. A. Phillips, Rep. Prog. Phys. 50, 1657 (1987).
ZJ12 J. Zmuidzinas, Annu. Rev. Condens. Matter Phys. 3, 169 (2012).
PA14 A. Pourkabirian, M. V. Gustafsson, G. Johansson, J. Clarke, and P. Delsing, Phys. Rev. Lett. 113, 256801 (2014).
AKH03 K.-H. Ahn and P. Mohanty, Phys. Rev. Lett. 90, 085504 (2003).
SRW04 R. W. Simmonds, K. M. Lang, D. A. Hite, S. Nam, D. P. Pappas, and J. M. Martinis, Phys. Rev. Lett. 93, 077003 (2004).
CKB04 K. B. Cooper, M. Steffen, R. McDermott, R. W. Simmonds, S. Oh, D. A. Hite, D. P. Pappas, and J. M. Martinis, Phys. Rev. Lett. 93, 180401 (2004).
CJH10 J. H. Cole, C. Müller, P. Bushev, G. J. Grabovskij, J. Lisenfeld, A. Lukashenko, A. V. Ustinov, and A. Shnirman, Appl. Phys. Lett. 97, 252501 (2010).
NM08 M. Neeley, M. Ansmann, R. C. Bialczak, M. Hofheinz, N. Katz, E. Lucero, A. O'Connell, H. Wang, A. N. Cleland, and J. M. Martinis, Nature Phys. 4, 523 (2008).
SY10 Y. Shalibo, Y. Rofe, D. Shwa, F. Zeides, M. Neeley, J. M. Martinis, and N. Katz, Phys. Rev. Lett. 105, 177001 (2010).
LJ10a J. Lisenfeld, C. Müller, J. H. Cole, P. Bushev, A. Lukashenko, A. Shnirman, and A. V. Ustinov, Phys. Rev. B 81, 100511(R) (2010).
LJ10b J. Lisenfeld, C. Müller, J. H. Cole, P. Bushev, A. Lukashenko, A. Shnirman, and A. V. Ustinov, Phys. Rev. Lett. 105, 230504 (2010).
LJ16 J. Lisenfeld, A. Bilmes, S. Matityahu, S. Zanker, M. Marthaler, M. Schechter, G. Schön, A. Shnirman, G. Weiss, and A. V. Ustinov, Sci. Rep. 6, 23786 (2016).
GGJ12 G. J. Grabovskij, T. Peichl, J. Lisenfeld, G. Weiss, and A. V. Ustinov, Science 338, 232 (2012).
MS16 S. Matityahu, A. Shnirman, G. Schön, and M. Schechter, Phys. Rev. B 93, 134208 (2016).
SM13 M. Schechter and P. C. E. Stamp, Phys. Rev. B 88, 174202 (2013).
ZS16 S. Zanker, M. Marthaler, and G. Schön, IEEE Trans. Appl. Supercond. 26, 1 (2016).
BA16 A. Bilmes, S. Zanker, A. Heimes, M. Marthaler, G. Schön, G. Weiss, A. V. Ustinov, and J. Lisenfeld, arXiv:1609.06173 (2016).
HJ08 J. Hauss, A. Fedorov, S. André, V. Brosco, C. Hutter, R. Kothari, S. Yeshwanth, A. Shnirman, and G. Schön, New J. Phys. 10, 095018 (2008).
BJ09 J. Bergli, Y. M. Galperin, and B. L. Altshuler, New J. Phys. 11, 025002 (2009).
Comment1 Here we use the term "rate" to refer to the inverse of the time for which the signal has decayed to e^-1. The decay in this case is not exponential but rather Gaussian.
Steffen06 M. Steffen, M. Ansmann, R. McDermott, N. Katz, R. C. Bialczak, E. Lucero, M. Neeley, E. M. Weig, A. N. Cleland, and J. M. Martinis, Phys. Rev. Lett. 97, 050502 (2006).
LisenfeldNatureComm J. Lisenfeld, G. J. Grabovskij, C. Müller, J. H. Cole, G. Weiss, and A. V. Ustinov, Nature Comm. 6, 6182 (2015). | http://arxiv.org/abs/1703.09303v1 | {
"authors": [
"Shlomi Matityahu",
"Jürgen Lisenfeld",
"Alexander Bilmes",
"Alexander Shnirman",
"Georg Weiss",
"Alexey V. Ustinov",
"Moshe Schechter"
],
"categories": [
"cond-mat.dis-nn"
],
"primary_category": "cond-mat.dis-nn",
"published": "20170327203937",
"title": "Rabi noise spectroscopy of individual two-level tunneling defects"
} |
WPaxos is a multileader Paxos protocol that provides low-latency and high-throughput consensus across wide-area network (WAN) deployments. WPaxos uses multileaders, and partitions the object-space among these multileaders. Unlike statically partitioned multiple Paxos deployments, WPaxos is able to adapt to the changing access locality through object stealing. Multiple concurrent leaders coinciding in different zones steal ownership of objects from each other using phase-1 of Paxos, and then use phase-2 to commit update-requests on these objects locally until they are stolen by other leaders. To achieve fast phase-2 commits, WPaxos adopts the flexible quorums idea in a novel manner, and appoints phase-2 acceptors to be close to their respective leaders. We implemented WPaxos and evaluated it on WAN deployments across 5 AWS regions. The dynamic partitioning of the object-space and emphasis on zone-local commits allow WPaxos to significantly outperform both partitioned Paxos deployments and leaderless Paxos approaches.Our results show that, for a ∼70% access locality workload, WPaxos achieves 2.4 times faster average request latency and 3.9 times faster median latency than EPaxos due to the reduction in WAN communication. For a ∼90% access locality workload, WPaxos improves further and achieves 6 times faster average request latency and 59 times faster median latency than EPaxos.Increased access locality enabled WPaxos to achieve improved performance. dynamically partitions the object-space across multiple concurrent leaders, andWe implemented WPaxos and evaluated it across WAN deployments using the benchmarks introduced in the EPaxos work. Our results show that WPaxos achieves up to 9 times faster average request latency and 54 times faster median latency than EPaxos due to the reduction in WAN communication.The ability to quickly react to changing access locality not only speeds up the protocol, but also enables support for mini-transactions.We present WPaxos, a lightweight WAN Paxos protocol, that achieves low-latency by adopting flexible quorums idea for WAN deployments and achieves efficiency by using locality of accesses to weigh in leader election. To improve performance in WAN deployments, we use flexible quorums idea to use close vicinity acceptors as the Q2 (the phase-2 quorum acceptors). This ideally means that the Q2 acceptors are all at the same site, since intra-site network latency is very small compared to across site network latency. The selection of Q2 suggests that Q1 acceptors need to be across site, but this is still tolerable if phase-1, the leader election phase is executed rarely. We have implemented WPaxos and performed extensive evaluations across WAN deployments. Our results show that WPaxos significantly outperforms EPaxos using the evaluation benchmarks introduced in the EPaxos paper. While WPaxos uses a significantly simpler protocol than EPaxos, it achieves better performance.While WPaxos uses multiple concurrent leaders, each responsible for an object-space, it differs from the existing solutions, because it manages the object-spaces in a dynamic fashion. The concurrent leaders can steal objects from each others' object-spaces using phase-1 of Paxos. The commit decisions for updates in an object-space are fast as the phase-2 acceptors are located at the same zone as the zone-leader. 
Finally transactions across keyspaces (such as a consistent read of multiple objects across various keyspaces) are implemented efficiently in one round-trip-time via the key-stealing mechanism using phase-1 of Paxos, and there is no need for a separate 2PC protocol across zone-leaders.These novel features are achieved by adopting the “flexible quorums” idea (which was introduced in 2016 summer as part of FPaxos <cit.>) for WAN Paxos deployments. WPaxos adopts the flexible quorums rule in a novel way for across WAN zone/site deployments, and unlike the single-leader FPaxos protocol, it introduces a multi-leader Paxos protocol.In WPaxos multiple leaders manages multiple keyspaces in a dynamic fashion to optimize and adapt to the access patterns across the WAN sites. WPaxos uses key-stealing from dynamic keyspaces in order to ensure that subsequent accesses to an object that belongs to a different keyspace is not persistently penalized. We present these locality-adaptive key-stealing strategies in WPaxos.We implemented WPaxos and performed extensive evaluations across WAN deployments. Our results in Section <ref> show that WPaxos significantly outperforms EPaxos using the evaluation benchmarks introduced in the EPaxos paper. This is because, while the EPaxos opportunistic commit protocol requires 3/4ths of the Paxos acceptors to agree and incurs almost one WAN round-trip latency, WPaxos is able to achieve site-local-latency Paxos commits using the site-local phase-2 acceptors.Our evaluation shows that WPaxos significantly outperforms EPaxos under various operation conflict rates. Median request latency, as observed by the clients, is 30 times smaller for WPaxos compared to EPaxos due to the reduction in WAN communication.WPaxos achieves the same consistency guarantees as EPaxos. Linearizability is ensured within a keyspace, and serializability and causal-consistency is ensured across keyspaces. We prove these properties in Section <ref>.While achieving low-latency and high-throughput, WPaxos also achieves incessant high-availability. Since WPaxos uses multileaders even the failure of a leader does not stall the availability of the overall system as other leaders are able to make progress. Finally since leader re-elections are handled through the Paxos protocol, safety is always upheld to the face of node failure/recovery, message loss, and asynchronous concurrent execution. We discuss fault-tolerance properties of WPaxos in Section <ref>.While WPaxos helps most for slashing WAN latencies, it is also possible to deploy WPaxos entirely inside the same datacenter across clusters to improve throughput and high-availability. Throughput is improving by load-balancing and parallel usage of multiple WPaxos leaders across object space. High availability is achieved by having multi-leaders: failure of a leader is handled without affecting the clients as other leaders can serve the requests previously handled by that leader by object stealing mechanism. 
Distributed systems, distributed applications, wide-area networks, fault-tolerance

WPaxos: Wide Area Network Flexible Consensus

Ailidani Ailijiang, Aleksey Charapko, Murat Demirbas and Tevfik Kosar
Department of Computer Science and Engineering
University at Buffalo, SUNY
Email: {ailidani,acharapk,demirbas,tkosar}@buffalo.edu

December 30, 2023
================================================================================================================================================================================================================

§ INTRODUCTION

Paxos <cit.> provides a formally proven solution to the fault-tolerant distributed consensus problem. Notably, Paxos never violates the safety specification of distributed consensus (i.e., no two nodes decide differently), even in the case of fully asynchronous execution, crash/recovery of the nodes, and arbitrary loss of messages. When the conditions improve such that distributed consensus becomes solvable <cit.>, Paxos also satisfies the progress property (i.e., nodes decide on a value as a function of the inputs).

Paxos and its variants have been deployed widely, including in Chubby <cit.> based on Paxos <cit.>, Apache ZooKeeper <cit.> based on Zab <cit.>, and etcd <cit.> based on Raft <cit.>. These Paxos implementations depend on a centralized primary process (i.e., the leader) to serialize all commands. Due to this dependence on a single centralized leader, these Paxos implementations suit local-area deployments but cannot deal well with write-intensive scenarios across wide-area networks (WANs).

In recent years, however, coordination over wide-area networks (e.g., across zones, such as datacenters and sites) has gained greater importance, especially for database applications and NewSQL datastores <cit.>, distributed filesystems <cit.>, and social networks <cit.>.

In order to eliminate the single-leader bottleneck in Paxos, leaderless and multileader solutions were proposed. EPaxos <cit.> is a leaderless extension of the Paxos protocol where any replica at any zone can propose and commit commands opportunistically, provided that the commands are non-interfering. This opportunistic commit protocol requires an agreement from a fast-quorum of roughly 3/4th of the acceptors[For a deployment of size 2F+1, the fast-quorum size is F+⌊F+1/2⌋], which means that WAN latencies are still incurred. Moreover, if the commands proposed by multiple concurrent opportunistic proposers do interfere, the protocol requires performing a second phase to record the acquired dependencies, and agreement from a majority of the Paxos acceptors is needed.

Another way to eliminate the single-leader bottleneck is to use a separate Paxos group deployed at each zone. Systems like Google Spanner <cit.>, ZooNet <cit.>, and Bizur <cit.> achieve this via a static partitioning of the global object-space across different zones, each responsible for a shard of the object-space. However, such static partitioning is inflexible, and WAN latencies are incurred persistently to access/update an object mapped to a different zone. Moreover, in order to perform transactions involving objects in different zones, a separate mechanism (such as two-phase commit) would need to be implemented across the corresponding Paxos groups.

Contributions. We present WPaxos, a novel multileader Paxos protocol that provides low-latency and high-throughput consensus across WAN deployments. WPaxos leverages the flexible quorums <cit.> idea to cut WAN communication costs.
It deploys flexible quorums in a novel manner to appoint multiple concurrent leaders across the WAN. Unlike the FPaxos protocol <cit.>, which uses a single leader and does not scale to WAN distances, WPaxos uses multileaders and partitions the object-space among these multileaders. This allows the protocol to process requests for objects under different leaders concurrently. Each object in the system is maintained in its own commit log, allowing for per-object linearizability. By strategically selecting the phase-2 acceptors to be close to the leader, WPaxos achieves fast commit decisions. On the other hand, WPaxos differs from the existing statically partitioned multiple Paxos deployment solutions because it implements a dynamic partitioning scheme: leaders coinciding in different zones steal ownership/leadership of an object from each other using phase-1 of Paxos, and then use phase-2 to commit update-requests on the object locally until the object is stolen by another leader. This object-stealing mechanism also enables transactions across leaders (such as a consistent read of multiple objects in different partitions) naturally within the Paxos updates, obviating the need for a separate two-phase commit protocol across zone-leaders.

With its multileader protocol, WPaxos guarantees linearizability per object. We model WPaxos in TLA+/PlusCal <cit.> and present the algorithm using the PlusCal specification in Section <ref>. The consistency properties of WPaxos are verified by model checking this specification[The TLA+ specification of WPaxos is available at <http://github.com/ailidani/paxi/tree/master/tla>]. Since object stealing is an integrated part of phase-1 of Paxos, WPaxos remains a simple, pure flavor of Paxos and obviates the need for another service/protocol. There is no need for a configuration service for relocating objects to zones as in Spanner <cit.> and vertical Paxos <cit.>.

Since the base WPaxos protocol guarantees safety under concurrency, asynchrony, and faults, the performance can be tuned orthogonally and aggressively, as we discuss in Section <ref>. To improve performance, we present a locality-adaptive object stealing extension in Section <ref>. To quantify the performance benefits of WPaxos, we implemented WPaxos in Go[The Go implementation of WPaxos is available at <http://github.com/ailidani/paxi>] and performed evaluations on WAN deployments across 5 AWS regions. Our results in Section <ref> show that WPaxos outperforms EPaxos, achieving 15 times faster average request latency than EPaxos using a ∼70% access locality workload in some regions. Moreover, for a ∼90% access locality workload, WPaxos improves further and achieves 39 times faster average request latency than EPaxos in some regions.
This is because, while the EPaxos opportunistic commit protocol requires about 3/4th of the Paxos acceptors to agree and incurs one WAN round-trip latency, WPaxos is able to achieve low-latency commits using the zone-local phase-2 acceptors. Moreover, WPaxos is able to maintain low-latency responses under a heavy workload: under 10,000 requests/sec, using a ∼70% access locality workload, WPaxos achieves 9 times faster average request latency and 54 times faster median latency than EPaxos. Finally, we evaluate WPaxos with a shifting locality workload and show that WPaxos seamlessly adapts and significantly outperforms statically partitioned multiple Paxos deployments.

While achieving low latency and high throughput, WPaxos also achieves seamless high-availability by having multileaders: failure of a leader is handled gracefully, as other leaders can serve the requests previously processed by that leader via the object stealing mechanism. Since leader re-election (i.e., object stealing) is handled through the Paxos protocol, safety is always upheld in the face of node failure/recovery, message loss, and asynchronous concurrent execution.

While WPaxos helps most for slashing WAN latencies, it is also suitable for intra-datacenter deployments for its high-availability and throughput benefits. In WPaxos, multiple leaders manage multiple keyspaces in a dynamic fashion to optimize and adapt to the access patterns across the WAN sites. WPaxos uses key-stealing from dynamic keyspaces in order to ensure that subsequent accesses to an object that belongs to a different keyspace are not persistently penalized. We present these locality-adaptive key-stealing strategies of WPaxos in Section <ref>. We show leader-handover rules that use momentum-based techniques to delay the election of a leader for an object to avoid dithering penalties.

In our discussion section, we present extensions to the basic WPaxos protocol. WPaxos allows transactions that involve multiple object updates. We also discuss the fault-tolerance of WPaxos.

The flexible quorums result is simple and surprising. It states that we can weaken the Paxos requirement that “all quorums intersect” to require that “only quorums from different phases intersect”. In other words, majority quorums are not necessary for Paxos, provided that phase-1 quorums (Q1s) intersect with phase-2 quorums (Q2s). Flexible Paxos allows trading off Q1 and Q2 sizes to improve performance. Assuming failures and the resulting leader changes are rare, phase-2, where the leader tells the acceptors to decide values, is run more often than phase-1, where a new leader is elected. So it is possible to improve performance by reducing the size of Q2 at the expense of making the infrequently used Q1 larger. To improve performance in WAN deployments, WPaxos uses close-vicinity acceptors as the Q2. This ideally means that the Q2 acceptors are all at the same site, since intra-site network latency is very small compared to across-site network latency. The selection of Q2 suggests that Q1 acceptors need to be across sites, but this is still tolerable if phase-1, the leader-election phase, is executed rarely. This type of flexible quorum is realizable by using grid quorums. Consider that all the acceptors at all sites form a grid; then Q2 sets are columns in the acceptor grid, and Q1 sets are rows in the acceptor grid, as the sketch below illustrates.
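The following is a minimal sketch of this grid arrangement (our illustration, not code from the WPaxos implementation); it checks that any row-Q1 intersects any column-Q2:

from itertools import product

# Acceptors arranged in a grid: `rows` nodes per zone, one zone per column.
rows, cols = 3, 5   # e.g., 3 acceptors in each of 5 zones, N = 15 total
grid = [[(r, c) for c in range(cols)] for r in range(rows)]

# Q1 (phase-1) quorums are rows: one acceptor from every zone.
q1_quorums = [set(grid[r]) for r in range(rows)]
# Q2 (phase-2) quorums are columns: the acceptors within a single zone.
q2_quorums = [{(r, c) for r in range(rows)} for c in range(cols)]

# Any Q1 intersects any Q2 in exactly one acceptor, even though
# |Q1| + |Q2| = 5 + 3 = 8 <= N = 15.
assert all(len(q1 & q2) == 1 for q1, q2 in product(q1_quorums, q2_quorums))

# Quorums within the same phase need not intersect:
assert q2_quorums[0] & q2_quorums[1] == set()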
An attractive property of this grid quorum arrangement is that |Q1|+|Q2| does not need to be greater than N, the total number of acceptors, in order to guarantee the intersection of any Q1 and Q2. Since Q1s are chosen from rows and Q2s are chosen from columns, any Q1 and Q2 are guaranteed to intersect even when |Q1|+|Q2| < N. In addition, a Q1 does not need to intersect with other Q1s, and a Q2 does not need to intersect with other Q2s. Even in the case of all Q1s failing, progress is still possible as long as there is no need to change the leader. For example, in a system of 10 acceptors, we can safely allow any set of only 3 acceptors to participate in phase-2, provided that we require 8 acceptors to participate in phase-1. This shrinking of phase-2 quorums at the cost of enlarging phase-1 quorums is called simple quorums. Alternatively, we can use grid quorums, where every row forms a phase-1 quorum and every column a phase-2 quorum. In grid quorums, the quorums within either phase do not intersect with each other. Quorum-2 (for phase-2) consists of site-local acceptors, while Quorum-1 (for phase-1) consists of across-site acceptors. In a local-area network, the performance benefits we can achieve by tweaking and rearranging the quorums may not be as pronounced, but the ability to manage flexible quorums becomes very beneficial in a wide-area network. In this work we investigate this problem.

In many cases, the application logic can dictate that certain objects are independent of each other and may be decided separately. In such scenarios we do not need to order the operations performed against such objects and can run them in parallel as part of separate instances of WPaxos. We call this consensus sharding. In this setup, different leaders may be executing Paxos phase-2 rounds on their respective objects at different sites. Since phase-2 quorums involve site-local acceptors only, the throughput of the system can grow linearly as the leaders at different sites execute in parallel and independently of each other.

This linear scaling of performance assumes that each object sees locality of access from one site. WPaxos uses weighted leader election so that a rare access from another site does not incur the penalty of an immediate leader change coordinated through a high round-trip-latency Q1 set across sites. Even when using consensus sharding, the shards need not be static. There are many scenarios where an object starts to see accesses from another site, and this pattern persists over a period of time. One example is circadian rhythms of access: when it is morning in Europe and night in the US, most accesses come from the European sites; when it is morning in the US and night in Europe, most accesses come from the US sites. In scientific collaboration applications or distributed filesystems, when some data is accessed by a program at a site, due to the locality of access in most programs, the same data will see repeated accesses by that program and site. Finally, in online trading applications there is also locality of access to the objects: one access suggests at least a couple more accesses to that data or related data.

When persistent access from another site occurs, instead of paying the penalty of routing to the current leader's site, it is better to execute a leadership change by running phase-1, so that the remaining accesses are performed via site-local Q2 acceptors at that site.
WPaxos enables leader change via a Q1 quorum as part of the normal Paxos protocol, so this does not introduce extra complexity or require a separate safety or progress proof. We rely on access locality (i.e., that object ownership does not change frequently) to ensure that leader changes are rare.

Finally, if an access involves multiple objects, this can again be coordinated via WPaxos. (Note that if the multiple objects x and y have the same leader responsible for them, the transaction is executed by the common leader at that leader's corresponding site.) WPaxos provides multiple-object transactions by using Paxos with phase-1 and phase-2 to choose a new leader for the objects x and y. In other words, the leadership of objects x and y changes to a newly elected leader, which performs the transaction. This can be thought of as merging the histories of x and y to perform the transaction. When objects x and y see other accesses from different sites, their leadership again changes based on the locality of access and the weighted leadership rule. (The weighted leadership rule does not apply to a transaction.) Provided that such multiple-object (or multiple-conflict-domain) transactions are rare, WPaxos continues to provide improved performance for WAN coordination.

WPaxos provides a robust strategy to deal with occasional faults. Since flexible quorums change the Q1 and Q2 quorums, this impacts the fault tolerance of the system. WPaxos can make progress as long as there is at least one valid Q1 and Q2 quorum. Dynamic reconfiguration in Paxos allows us to add new acceptors to repopulate the quorums before all nodes in a quorum crash. We show an arrangement of flexible quorums that masks the failure of one Q2 acceptor; this is achieved by enlarging the Q1 set to include two Q2 nodes from each site. If, due to failures, no Q2 set is available at the leader's site, the leadership of those objects is moved to another site with an available Q2 set. This is done via a normal phase-1 leader election. When the failed Q2 acceptors are restored, that site becomes eligible for taking back the leadership. The failure of a Q1 acceptor has little impact, because there are many possible Q1 sets. Having small Q2 quorums means we do not replicate data to many nodes; as such, we can tolerate fewer Q2 node failures before losing data, but such faults must all happen at the same time on the Q2 quorum deciding the operations.

§ RELATED WORK

Here we give an overview of the core Paxos protocol and provide an architectural classification of algorithms in the Paxos family.

§.§ Paxos Protocol

Paxos separates its operation into three phases, as illustrated in Figure <ref>. In phase-1, a node proposes itself as a leader over some ballot b. Other nodes accept the proposal only if ballot b is the highest they have seen so far. If some node has seen a leader with a greater ballot, that node will reject ballot b. Receiving a rejection fails the aspiring leader and causes it to start again with a higher ballot. However, if a majority of the nodes accepts the leader, it moves to phase-2 of the protocol. In phase-1, the leader also learns uncommitted commands from earlier ballots in order to finish them later. In phase-2, the leader tells its followers to accept a command into their log. The command depends on the results of the prior phase, as the leader is obliged to finish the highest-ballot uncommitted command it may have learned earlier. Similarly to phase-1, the leader requires a majority of acks to complete phase-2.
However, if a follower has learned of a higher-ballot leader, it will not accept the command and rejects the leader, causing the leader to go back to the leader-election phase and retry. Once a majority of nodes ack the command, the command becomes anchored and cannot be lost even in case of failures or leader changes, since that command is guaranteed to be learned by any future leader. Finally, Paxos commits the command in phase-3: the leader sends a message to all followers to commit the command in their respective logs.

Many practical Paxos systems continue with the same leader for many rounds to avoid paying the cost of phase-1 repeatedly. This optimization, commonly known as Multi-Paxos <cit.>, iterates over phase-2 for different slots of the same ballot number. Safety is preserved because a new leader needs to obtain a majority of followers with a higher ballot, and this causes the original leader to get rejected in phase-2, stopping its progress.

§.§ Paxos Variants

Many Paxos variants optimize Paxos for specific needs, significantly extending the original protocol. We categorize non-Byzantine consensus protocols into five different classes, as illustrated in Figure <ref>, and provide an overview of these protocol families.

Single-leader protocols, such as Multi-Paxos and Raft <cit.>, rely on one node to drive the progress forward (Figure <ref>). Due to the overheads imposed by communication with all of the followers, the leader node often becomes a bottleneck in single-leader protocols. Mencius <cit.> tries to reduce this load imbalance by rotating the single leader in a round-robin fashion.

Multi-leader protocols in Figure <ref>, such as M^2Paxos <cit.> and ZooNet <cit.>, operate on the observation that not all commands need to be totally ordered in the system. Multi-leader algorithms can run many commands in parallel at different leaders, as long as these commands belong to different conflict domains. This results in each leader node often serving as an acceptor for leaders of other conflict domains. Unlike single-leader architectures that guarantee a single total order of commands in the system, multi-leader protocols provide a partial order, where only commands belonging to the same conflict domain are ordered with respect to each other. The node despecialization allows multi-leader consensus to improve resource utilization in the cluster by spreading the load more evenly.

WPaxos goes one step further and allows different leaders to use different quorums, as depicted in Figure <ref>, as long as inter-quorum communication can ensure the required safety properties. Such a multi-leader, multi-quorum setup helps with both WAN latency and throughput due to smaller and geographically localized quorums. The DPaxos data-management/replication protocol <cit.> cites our original WPaxos technical report <cit.> and adopts a similar protocol for the edge-computing domain to bring highly granular, high-access-locality data to the consumers at the edge.

Hierarchical multi-leader protocols, such as WanKeeper <cit.> and Vertical Paxos <cit.>, establish a chain of command between the leaders or quorums. A higher-level leader oversees and coordinates the lower-level children leaders/quorums. In a two-layer WanKeeper deployment, the master leader is responsible for assigning conflict-domain ownership to the lower-level leaders. Additionally, the master leader also handles operations that are in high demand by many lower-level quorums, so as to avoid changing the leadership back and forth.
In Vertical Paxos, the master cluster is responsible for overseeing the lower-level configurations and does not participate in handling actual commands from the clients. Compared to the flat multi-quorum setup of WPaxos, the hierarchical composition has quorum specialization, since the master quorum is responsible for different or additional work compared to its children quorums.

Leaderless solutions in Figure <ref> also build on the idea of parallelizing the execution of non-conflicting commands. Unlike multi-leader approaches, however, leaderless systems, such as EPaxos <cit.>, do not impose a partitioning of conflict domains between nodes, and instead try to opportunistically commit any command at any node. Any node in EPaxos becomes an opportunistic leader for a command and tries to commit it by running phase-2 of Paxos in a fast quorum system. If some other node in the fast quorum is also working on a conflicting command, then an additional round of communication is used to establish the order of the conflicting commands.

§ WPAXOS OVERVIEW

We assume a set of nodes communicating through message passing in an asynchronous environment. The nodes are deployed in a set of zones, which are the unit of availability isolation. Depending on the deployment, a zone can range from a cluster or datacenter to a geographically isolated region. Each node is identified by a tuple consisting of a zone ID and a node ID, i.e., Nodes ≜ 1..Z × 1..N. Every node maintains a sequence of instances ordered by an increasing slot number. Every instance is committed with a ballot number, and each ballot has a unique leader. Similar to the Paxos implementation in <cit.>, we construct the ballot number as a lexicographically ordered pair of an integer and its leader identifier, s.t. Ballots ≜ Nat × Nodes. Consequently, ballot numbers are unique and totally ordered, and any node can easily retrieve the ID of the leader from a given ballot.

§.§ WPaxos Quorums

WPaxos leverages the flexible quorums idea <cit.>. This result shows that we can weaken Paxos' "all quorums should intersect" assertion to instead require that "only quorums from different phases should intersect". That is, majority quorums are not necessary for Paxos, provided that phase-1 quorums (Q_1) intersect with phase-2 quorums (Q_2). Flexible Paxos, i.e., FPaxos, allows trading off Q_1 and Q_2 sizes to improve performance. Assuming failures and the resulting leader changes are rare, phase-2 (where the leader tells the acceptors to decide values) is run more often than phase-1 (where a new leader is elected). Thus it is possible to improve the performance of Paxos by reducing the size of Q_2 at the expense of making the infrequently used Q_1 larger.

Definition: A quorum system over the set of nodes is safe if the quorums used in phase-1 and phase-2, named Q_1 and Q_2, intersect. That is, ∀ q_1 ∈ Q_1, q_2 ∈ Q_2 : q_1 ∩ q_2 ≠ ∅.

WPaxos adapts the flexible quorum idea to WAN deployments. Our quorum system derives from the grid quorum layout, shown in Figure <ref>, in which rows and columns act as Q_1 and Q_2 quorums, respectively. An attractive property of this grid quorum arrangement is that |q_1|+|q_2| does not need to exceed N, the total number of acceptors, in order to guarantee the intersection of any Q_1 and Q_2. Let q_1 and q_2 denote specific quorums in Q_1 and Q_2. Since the q_1 ∈ Q_1 are chosen from rows and the q_2 ∈ Q_2 are chosen from columns, any q_1 and q_2 are guaranteed to intersect even when |q_1|+|q_2| < N. In WPaxos quorums, each column represents a zone and acts as a unit of availability or geographical partitioning.
The collection of all zones forms a grid. In this setup, we further generalize the grid quorum constraints on both Q_1 and Q_2 to achieve a more fault-tolerant and flexible alternative. Instead of using rigid grid columns, we introduce two parameters: f_z, the number of zone failures tolerated, and f_n, the number of node failures a zone can tolerate before losing availability. In order to tolerate f_n crash failures in every zone, WPaxos picks f_n+1 nodes out of the l nodes in a zone, regardless of their row position. In addition, to tolerate f_z zone failures within Z zones, a q_1 ∈ Q_1 is selected from Z−f_z zones, and a q_2 ∈ Q_2 from f_z+1 zones. Below, we formally define the WPaxos quorums in TLA+ <cit.> and prove that the Q_1 and Q_2 quorums always intersect.

Q_1 ≜ {q ∈ SUBSET Nodes : Cardinality(q) = (f_n+1) × (Z − f_z) ∧ ∃ k ∈ SUBSET q : ∀ i, j ∈ k : i[1] = j[1] ∧ Cardinality(k) > f_n+1}

Q_2 ≜ {q ∈ SUBSET Nodes : Cardinality(q) = (l−f_n) × (f_z+1) ∧ ∃ k ∈ SUBSET q : ∀ i, j ∈ k : i[1] = j[1] ∧ Cardinality(k) > l−f_n}

(SUBSET S denotes the set of subsets of S.)

The generalized Q_1 quorum, which tolerates f_n crash failures in each zone of 2f_n+1 nodes and f_z zone failures out of a total of Z zones, is defined by selecting a majority (f_n+1) of the nodes per zone in Z−f_z zones. The generalized Q_2 quorum is defined by selecting a majority of the nodes per zone in f_z+1 zones.

Theorem: The WPaxos Q_1 and Q_2 quorums satisfy the intersection requirement (Definition <ref>). Proof: (1) WPaxos q_1s involve Z−f_z zones and q_2s involve f_z+1 zones; since (Z−f_z)+(f_z+1) = Z+1 > Z, there is at least one zone selected by both quorums. (2) Within the common zone, q_1 selects f_n+1 nodes and q_2 selects l−f_n nodes out of the l nodes forming the zone; since (l−f_n)+(f_n+1) = l+1 > l, there is at least one node in the intersection.

Figure <ref> shows a 4-by-3 grid with f_n=f_z=0, and Figure <ref> shows a 4-by-3 grid with f_n=f_z=1. In the latter deployment, each zone has 3 nodes, and each q_2 includes 2 out of the 3 nodes from 2 zones. The q_1 quorum spans 3 out of the 4 zones and includes any 2 nodes from each zone. Using a two-row q_1 rather than a one-row q_1 has a negligible effect on performance (as we show in Section <ref>) and provides more fault tolerance.

When using grid-based flexible quorums (as opposed to unified majority quorums), the total number of faults tolerated, F, becomes topology-dependent. In EPaxos, quorums are selected from an unstructured set, so it does not matter which nodes are picked. In WPaxos, quorums are selected from a grid, so the location of the picked nodes matters. We define F_min to be the value of F under the worst possible failure placement, i.e., one that violates availability of read/write operations on some item. If all the faults conspire to target a particular q_2 ∈ Q_2, then after Cardinality(q_2) faults that q_2 is wiped out. By definition, any Q_2 quorum intersects with any Q_1 quorum, so by wiping out a q_2 in its entirety, the faults make every Q_1 quorum unavailable as well. To account for the case where the cardinality of a Q_1 quorum could be less than that of a Q_2 quorum, we make the formula symmetric and define:

F_min = min(Cardinality(q_1), Cardinality(q_2)) − 1

We define F_max to be the value of F under the best possible failure placement in the grid. In this case, the faults miss the union of a Q_1 quorum and a Q_2 quorum, leaving at least one q_1 and one q_2 intact so that WPaxos can continue operating. Note that the Q_2 quorum may be completely embedded inside the Q_1 quorum (or vice versa, if Q_1 quorums are smaller than Q_2 quorums). So the formula is derived by subtracting from N the cardinalities of q_1 and q_2, and adding back the maximum cardinality of their intersection:

F_max = N − Cardinality(q_1) − Cardinality(q_2) + (f_z+1) × (f_n+1)
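As a quick sanity check of these bounds, the following Go sketch (our own hypothetical helper, not part of the paper's artifacts) computes the quorum sizes and the resulting F_min and F_max from the deployment parameters:

```go
package main

import "fmt"

// boundsFor computes |q1|, |q2|, Fmin and Fmax for a deployment of
// Z zones with l nodes each, tolerating fz zone and fn node failures.
func boundsFor(Z, l, fz, fn int) (q1, q2, fmin, fmax int) {
	q1 = (fn + 1) * (Z - fz) // f_n+1 nodes from each of Z-f_z zones
	q2 = (l - fn) * (fz + 1) // l-f_n nodes from each of f_z+1 zones
	fmin = min(q1, q2) - 1
	// Best case: faults miss one q1 and one q2; the two quorums overlap
	// in at most (f_z+1)*(f_n+1) nodes, counted once.
	fmax = Z*l - q1 - q2 + (fz+1)*(fn+1)
	return
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	// The two 4-by-3 deployments discussed in the text.
	for _, c := range [][4]int{{4, 3, 0, 0}, {4, 3, 1, 1}} {
		q1, q2, fmin, fmax := boundsFor(c[0], c[1], c[2], c[3])
		fmt.Println(q1, q2, fmin, fmax) // prints 4 3 2 6, then 6 4 3 6
	}
}
```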
For the 4-by-3 grid with f_n=f_z=0 in Figure <ref>, F_min=2 and F_max=6. For the deployment in Figure <ref> with f_n=f_z=1, F_min=3 and F_max=6. [For a deployment of size 2F+1, the EPaxos fast quorum is of size F+(F+1)/2. Therefore, for N=12 and F=5, the EPaxos fast quorum is 8, and EPaxos can tolerate up to 4 failures before the fast quorum size is breached. After 4 failures, EPaxos operations deteriorate because they time out waiting for responses from the nonexistent fast quorum, and then proceed to a second round in order to make progress with majority nodes. Progress is still possible until 5 node failures. The EPaxos paper suggests invoking a reconfiguration upon node failures to reduce N and shrink the fast quorum size.]

§.§ Multi-leader

In contrast to FPaxos <cit.>, which uses flexible quorums with a classical single-leader Paxos protocol, WPaxos presents a multi-leader protocol over flexible quorums. Every node in WPaxos can act as a leader for a subset of objects in the system. This allows the protocol to process requests for objects under different leaders concurrently. Each object in the system is allotted its own commit log, allowing for per-object linearizability. A node can lead multiple objects at once, all of which may have different ballot and slot numbers in their corresponding logs.

The WPaxos protocol consists of two phases. The concurrent leaders steal ownership/leadership of objects from each other using phase-1 of Paxos, executed over some q_1 ∈ Q_1. Then phase-2 commits the update-requests on the object over some q_2 ∈ Q_2, selected from the leader's zone (and nearby zones) for improved locality. The leader can execute phase-2 multiple times until some other node steals the object. Phase-1 of the protocol starts only when a node needs to steal an object from a remote leader or when a client has a request for a brand-new object that is not yet in the system. This phase of the algorithm causes the ballot number to grow for the object involved. After a node becomes the owner/leader of an object, it repeats phase-2 multiple times on that object to commit commands/updates, incrementing the slot number at each iteration, while the ballot number for the object stays the same. Figure <ref> shows the normal operation of both phases and references each operation to the algorithms in Section <ref>.

§.§ Object Stealing

WPaxos dynamically partitions objects across leaders in various zones, which creates cases where a client needs an object that belongs to a leader in a different zone. Our protocol makes this remote operation transparent to the client. When a node needs to steal an object from another leader in order to carry out a client request, it first consults its internal cache to determine the last ballot number used for the object and performs phase-1 on some q_1 ∈ Q_1 with a larger ballot. Object stealing is successful if the candidate node can out-ballot the existing leader. This is achieved in just one phase-1 attempt, provided that the local cache is current and a remote leader is not engaged in another phase-1 on the same object. Once the object is stolen, the old leader cannot act on it, since the object is now associated with a higher ballot number than the ballot it had at the old leader.
This is true even when the old leader was not in the q_1 used when the object was stolen, because the intersecting node in the q_2 will reject any object operations attempted with the old ballot. Object stealing may occur while some commands for the object are still in progress; therefore, a new leader must recover any accepted but not yet committed commands for the object.

WPaxos maintains separate ballot numbers for all objects, isolating the effects of object stealing. Keeping per-leader ballot numbers, i.e., a single ballot number for all objects maintained by a leader, would necessitate out-balloting all objects of a remote leader when trying to steal one object. This would create a leader-dueling problem in which two nodes try to steal different objects from each other by constantly proposing a higher ballot than the opponent. Using separate ballot numbers for each object alleviates ballot contention, although contention can still happen when two leaders are trying to take over the same object currently owned by a third leader. To mitigate this issue, we use two additional safeguards: (1) resolving ballot conflicts by zone ID and node ID in case the ballot counters are the same, and (2) implementing a random back-off mechanism in case a new dueling iteration starts anyway.

Object stealing is part of the core WPaxos protocol. In contrast to the simplicity and agility of object stealing in WPaxos, object relocation in other systems requires the integration of another service, such as movedir in Spanner <cit.>, or performing multiple reconfiguration or coordination steps, as in Vertical Paxos <cit.>. Vertical Paxos depends on a reliable master service that oversees configuration changes. Object relocation involves a configuration change of the node responsible for processing commands on that object. When a node in a different region attempts to steal the object, it must first contact the reconfiguration master to obtain the current ballot number and the next ballot to be used. The new leader must then complete phase-1 of Paxos on the old configuration to learn the previous commands. Upon finishing phase-1, the new leader can commit any uncommitted slots with its own set of acceptors. At the same time, the new leader notifies the master of completing phase-1 with its ballot. Only after the master replies and activates the new configuration can the leader start serving user requests. This process can be extended to multiple objects by keeping track of separate ballot numbers for each object. Vertical Paxos requires three separate WAN communications to change the leadership, while WPaxos can do so with just one WAN communication: in the normal case, the object's leadership changes (with a higher ballot number) in just a single phase-1 execution performed on one of the Q_1 quorums.

§ WPAXOS ALGORITHM

In the basic algorithm, every node maintains a set of variables and a sequence of commands written into the command log. The command log can be committed out of order, but has to be executed against the state machine in order, without any gaps. Every command accesses only one object o. Every node keeps the set of objects it leads in a set called own. All nodes in WPaxos initialize their state with these variables. We assume no prior knowledge of the ownership of the objects; a user can optionally provide initial object assignments. The highest known ballot numbers for objects are constructed by concatenating counter=0 and the node ID (line 1). The slot numbers start from zero (line 2), and the set of self-owned objects is initially empty (line 3). Inside the log, an instance contains three components: the ballot number b for that slot, the proposed command/value v, and a flag c that indicates whether the instance is committed (line 4).
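A minimal Go sketch of this ballot representation and of the out-balloting step used for stealing (type and method names are our own, not necessarily those of the Paxi codebase):

```go
package main

import "fmt"

// Node identifies a WPaxos node by its zone ID and node ID.
type Node struct{ Zone, ID int }

// Ballot is the lexicographically ordered pair (counter, leader),
// mirroring Ballots = Nat x Nodes. The counter dominates; zone and
// node IDs break ties between equal counters (safeguard (1) above).
type Ballot struct {
	Counter int
	Leader  Node
}

// Less orders ballots lexicographically.
func (b Ballot) Less(o Ballot) bool {
	if b.Counter != o.Counter {
		return b.Counter < o.Counter
	}
	if b.Leader.Zone != o.Leader.Zone {
		return b.Leader.Zone < o.Leader.Zone
	}
	return b.Leader.ID < o.Leader.ID
}

// Next returns the ballot a node proposes in phase-1 to out-ballot b
// when stealing the object that b currently protects.
func (b Ballot) Next(self Node) Ballot {
	return Ballot{Counter: b.Counter + 1, Leader: self}
}

func main() {
	owner := Node{Zone: 2, ID: 1}
	b0 := Ballot{Counter: 0, Leader: owner} // initial ballot of a fresh object
	thief := Node{Zone: 1, ID: 3}
	b1 := b0.Next(thief)                  // ballot proposed during stealing
	fmt.Println(b0.Less(b1), b1.Less(b0)) // true false
}
```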
§.§ Phase-1: Prepare

WPaxos starts with a client sending a request to one of the nodes. A client typically chooses a node in its local zone to minimize the initial communication costs. The request message includes a command and some object o on which the command needs to be executed. Upon receiving the request, the node checks whether the object exists in its own set, and starts phase-1 for any new object by invoking the p1a() procedure in Algorithm <ref>. If the object is already owned by this node, the node can directly start phase-2 of the protocol. In p1a(), a larger ballot number is selected and a "1a" message is sent to a q_1 quorum.

The p1b() procedure processes the incoming "1a" message sent during phase-1 initiation. A node can accept the sender as the leader for object o only if the sender's ballot m.b is greater than or equal to the highest ballot number it knows of (line 4). If object o is owned by the current node, it is removed from the set own (line 6). Finally, the "1b" message acknowledging the accepted ballot number is sent (line 7). The highest slot associated with o is also attached to the reply message, so that any unresolved commands can be committed by the new leader.

§.§ Phase-2: Accept

Phase-2 of the protocol starts after the completion of phase-1, or when it is determined that no phase-1 is required for a given object. WPaxos carries out this phase on a q_2 quorum residing in the closest f_z+1 zones; thus all communication is kept local, greatly reducing the latency. The p2a() procedure in Algorithm <ref> collects the "1b" messages (lines 4-6). The node becomes the leader of the object only if a q_1 quorum is satisfied (lines 7-8). The new leader then recovers any uncommitted slots with the suggested values and starts the accept phase for the pending requests that have accumulated in the queue. Phase-2 is launched by increasing the highest slot (line 9), creating a new entry in the log (line 10), and sending the "2a" message (line 11). Once the leader of the object sends out the "2a" message at the beginning of phase-2, the replicas respond to this message as shown in Algorithm <ref>. A replica updates its instance at slot m.s only if the message ballot m.b is greater than or equal to its accepted ballot (lines 4-6).

§.§ Phase-3: Commit

The leader collects replies from its q_2 acceptors. The request proposal either gets committed, once the replies satisfy a q_2 quorum, or aborted, if some acceptors reject the proposal citing a higher ballot number. In case of rejection, the node updates its local ballot and puts the request from this instance back into the main request queue, to be retried later.
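The acceptor-side ballot checks at the heart of these handlers can be sketched as follows, extending the Ballot sketch above (a simplified, single-object illustration with our own names, not the actual Paxi handlers):

```go
// Instance is one slot of an object's log.
type Instance struct {
	Ballot    Ballot // highest ballot accepted for this slot
	Command   string // proposed command/value
	Committed bool   // whether the instance is committed
}

// ObjAcceptor holds one node's state for a single object.
type ObjAcceptor struct {
	ballot Ballot            // highest ballot promised for the object
	log    map[int]*Instance // slot -> instance
}

// onP1a handles a "1a" message: promise ballot b if it is at least as
// high as anything seen, and return the log so that the new leader can
// recover accepted-but-uncommitted commands.
func (a *ObjAcceptor) onP1a(b Ballot) (ok bool, log map[int]*Instance) {
	if a.ballot.Less(b) || a.ballot == b {
		a.ballot = b
		return true, a.log
	}
	return false, nil // reject: a higher ballot was already promised
}

// onP2a handles a "2a" message: accept the command at slot s only if
// the message ballot is at least the promised ballot.
func (a *ObjAcceptor) onP2a(b Ballot, s int, cmd string) bool {
	if b.Less(a.ballot) {
		return false // reject; the leader must retry with a higher ballot
	}
	a.ballot = b
	a.log[s] = &Instance{Ballot: b, Command: cmd}
	return true
}
```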
§.§ Properties

Non-triviality. For any node n, the set of committed commands is always a sequence σ of proposed commands, i.e., ∃σ : committed[n] = ∙σ. Non-triviality is straightforward, since nodes only start phase-1 or phase-2 for commands proposed by clients in Algorithm 1.

Stability. For any node n, the sequence of committed commands at any time is a prefix of the sequence at any later time, i.e., ∃σ : committed[n] = γ at t ⟹ committed[n] = γ∙σ at t+Δ. Stability asserts that a committed command cannot be overridden later. It is guaranteed and proven by Paxos that any leader with a higher ballot number will learn previous values before proposing new slots; WPaxos inherits the same process.

Consistency. For any slot of any object, no two leaders can commit different values. This property asserts that the object stealing and failure recovery procedures do not override any previously accepted or committed values. We verified this consistency property by model checking a TLA+ specification of the WPaxos algorithm. The WPaxos consistency guarantees are on par with those of other protocols, such as EPaxos, that solve the generalized consensus problem <cit.>. Generalized consensus relaxes the consensus requirement by allowing non-interfering commands to be processed concurrently: it no longer enforces a totally ordered set of commands; instead, only conflicting commands need to be ordered with respect to each other, making the command log a partially ordered set. WPaxos maintains a separate log for every object and provides per-object linearizability.

Liveness. A proposed command γ will eventually be committed by all non-faulty nodes n, i.e., ♢∀ n ∈ Nodes: γ ∈ committed[n]. The PlusCal code presented in Section <ref> specifies what actions each node is allowed to perform, but not when to perform them, which affects liveness. The liveness property satisfied by the WPaxos algorithm is the same as that of ordinary Paxos: as long as some q_1 ∈ Q_1 and q_2 ∈ Q_2 are alive, the system will make progress.

§ EXTENSIONS

§.§ Locality Adaptive Object Stealing

The basic protocol migrates an object from a remote region to the local region upon the first request, but this causes performance degradation when an object is frequently accessed across many zones. With locality-adaptive object stealing, we can delay or deny the object transfer to a zone issuing the request, based on an object migration policy. The intuition behind this approach is to move objects to the zone whose clients will benefit the most from not having to communicate over the WAN, while clients accessing the object from less frequent zones get their requests forwarded to the remote leader. In this adaptive mode, clients still communicate with the local nodes; however, the nodes may not steal the objects right away and may instead choose to forward the requests to the remote leader. Our majority-zone migration policy aims to improve the locality of reference by transferring an object to the zone that sends out the highest number of requests for it, as shown in Figure <ref>. Since the current object leader handles all the requests, it has the information about which clients access the object more frequently. If the leader α detects that the object receives more requests from a remote zone, it initiates the object handover by communicating with a node β in that zone, and in its turn, β starts the phase-1 protocol to steal the leadership of that object.
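One way such a majority-zone policy could be realized is sketched below (our own simplification, not the exact Paxi policy): the current leader counts requests per zone over a window and triggers a handover once a remote zone accounts for a majority of them.

```go
package main

import "fmt"

// majorityZonePolicy tracks per-zone request counts for one object.
type majorityZonePolicy struct {
	counts map[int]int // zone ID -> requests seen in the current window
	total  int
}

// record notes a request from zone z and returns the zone the object
// should be handed over to, or -1 if it should stay with homeZone.
func (p *majorityZonePolicy) record(z, homeZone int) int {
	p.counts[z]++
	p.total++
	for zone, c := range p.counts {
		if zone != homeZone && 2*c > p.total {
			return zone // a remote zone generates a majority of requests
		}
	}
	return -1
}

func main() {
	p := &majorityZonePolicy{counts: map[int]int{}}
	home := 1
	for _, z := range []int{1, 3, 3, 3} { // mostly remote accesses from zone 3
		if target := p.record(z, home); target != -1 {
			fmt.Println("hand over to zone", target) // zone 3
		}
	}
}
```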
§.§ Replication Set

WPaxos provides flexibility in selecting the replication set. The phase-2 (p2a) message need not be broadcast to the entire system, but only to a subset of Q_2 quorums, denoted the replication Q_2, or RQ_2. The user has the freedom to choose the replication factor across zones, from the minimal required f_z+1 zones up to the total number of Z zones. This choice can be seen as a trade-off between communication overhead and more predictable latency, since the replication zones may not always be the fastest to reply. Additionally, if a node outside of the RQ_2 becomes the new leader of an object, the new phase-2 may be delayed, as the leader needs to catch up on the log entries it missed in previous ballots. One way to minimize this delay is to let the RQ_2 reply to phase-2 messages for replication, while the potential leader nodes learn the state as non-voting learners.
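A sketch of how a replication set might be picked (a hypothetical helper under the assumption that zones are ranked by measured round-trip time; the actual selection in Paxi may differ): keep the leader's own zone and add the nearest other zones until the configured replication factor is reached.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// pickRQ2 returns the zones forming the replication Q2: the home zone
// plus the closest other zones by RTT, rf zones in total, where rf
// lies between fz+1 and the total number of zones.
func pickRQ2(rtt map[int]time.Duration, home, rf int) []int {
	zones := make([]int, 0, len(rtt))
	for z := range rtt {
		if z != home {
			zones = append(zones, z)
		}
	}
	sort.Slice(zones, func(i, j int) bool {
		return rtt[zones[i]] < rtt[zones[j]]
	})
	return append([]int{home}, zones[:rf-1]...) // assumes rf <= len(rtt)
}

func main() {
	rtt := map[int]time.Duration{ // RTTs measured from the leader's zone
		1: 0, 2: 11 * time.Millisecond,
		3: 49 * time.Millisecond, 4: 80 * time.Millisecond,
	}
	fmt.Println(pickRQ2(rtt, 1, 2)) // [1 2]
}
```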
§.§ Fault Tolerance and Reconfiguration

WPaxos can make progress as long as it can form valid q_1 and q_2 quorums. The flexibility of WPaxos enables the user to deploy the system with a quorum configuration tailored to their needs: some configurations are geared towards performance, while others may prioritize fault tolerance. By default, WPaxos configures the quorums to tolerate one zone failure and minority node failures per zone, and thus provides fault tolerance similar to Spanner with Paxos groups deployed over three zones. For a WPaxos deployment in 4 zones with 3 nodes per zone, the default configuration requires forming a q_1 across 3 zones with 2 nodes in each zone. Thus, the q_2 quorum must span 2 zones with 2 nodes per zone to guarantee that all possible quorums in Q_1 intersect with all possible quorums in Q_2. Such a cluster can tolerate an entire zone crash, since the system can still form valid q_1 and q_2 quorums to proceed with normal operation. Since each quorum in Q_2 spans two zones, phase-2 operation in such a configuration requires inter-zone WAN communication. However, systems like Spanner rely on Paxos for their fault tolerance and must be able to form a majority in each Paxos group. Therefore, Spanner must be replicated across 3 zones and requires even more WAN communication. This may negatively affect the performance of Spanner if a Paxos group leader is geographically far from both acceptors.

WPaxos remains partially available when more zones fail than the tolerance threshold it was configured for. In such a case, no valid q_1 quorum can be formed, which halts the object-stealing routine; however, operations can proceed for objects owned in the remaining live regions, as long as there are enough zones left to form a q_2 quorum. Our system can also tolerate node failures in a zone without making the entire zone appear failed. With the default configuration, a single node can fail per zone. Under such a failure, the zone can still participate in both Q_1 and Q_2 quorums, and thus it remains active. Failing more nodes than the failure limit is the same as the failure of the entire zone.

Failures in a q_2 quorum cause the system to find another quorum to decide on the operations. Our system does not need to go through the first phase of Paxos to elect a new leader in case of such a failure, as we can simply use acceptors from a different region's q_2 quorum. During such operation, the leader no longer acts as an acceptor and acts only as the proposer, since it is not part of the active q_2. Once the failed node's participation in the system is restored and it has rejoined all of its q_1s, we can move the operation back to the local region. The theoretical performance of the system under such a failure should be comparable to a regular Multi-Paxos deployment with the same number of nodes, as we pay a penalty of one WAN RTT to reach the acceptors in a different region.

Leader recovery is handled naturally by the object-stealing procedure. Upon a leader failure, all of its objects become unreachable through that leader, forcing the system to start the object-stealing phase. A failed node does not prevent a new leader from forming the q_1 quorum needed for acquiring an object, so the new leader can proceed and obtain the leadership of any object. The object-stealing procedure also recovers accepted but not committed instances in the previous leader's log for the object, and the same procedure is carried out when recovering from a failure.

A zone failure makes forming a q_1 impossible in WPaxos, halting all object movement in the system. However, leaders in all unaffected zones can continue to process requests on the objects they already own. Immediate WPaxos suffers more, as it will not be able to perform remote requests on unaffected objects; adaptive WPaxos, on the other hand, can still serve all requests (remote or local) for all unaffected objects. In both WPaxos modes, objects in the affected regions remain unavailable for the duration of the zone recovery. Network partitioning affects the system in a similar manner to a zone failure: if a region is partitioned from the rest of the system, no object movement is possible, but both partitions are able to make some progress in WPaxos under the adaptive object-stealing rules.

The problem with transactions is that once a transactional operation is done, the objects may be taken over by new object leaders and diverge again. We can impose a lease strategy to prevent independent objects from getting a new leader unless the WPaxos client explicitly finishes the transaction or the lease expires. The lease approach can help us achieve multi-object linearizability, but it works best when such a linearizability guarantee does not need to be permanent.

The ability to reconfigure, i.e., to dynamically change the membership of the system, is critical for providing reliability over long periods, as it allows crashed nodes to be replaced. WPaxos achieves high throughput by allowing pipelining (like the Paxos and Raft algorithms), in which new commands may begin phase-2 before any previous instances/slots have been committed. The pipelining architecture brings more complexity to reconfiguration, as there may be another reconfiguration operation in the pipeline which could change the quorum and invalidate a previous proposal. Paxos <cit.> solves this by limiting the length of the pipeline window to α > 0 and only activating the new configuration C', chosen at slot i, at slot i+α. Depending on the value of α, this approach limits either the throughput or the latency of the system. Raft <cit.>, on the other hand, does not impose any limitation on concurrency and proposes two solutions. The first solution is to restrict the reconfiguration operation, i.e., what can be reconfigured: for example, if each operation only adds one node or removes one node, a sequence of these operations can be scheduled to achieve arbitrary changes. The second solution is to change the configuration in two phases: a union of the old and new configurations, C+C', is proposed in the log first and committed by the combined quorums; only after the commit may the leader propose the new configuration C'. During the two phases, any election or command proposal must be committed by quorums in both C and C'. To ensure safety during reconfiguration, all these solutions essentially prevent the two configurations C and C' from making decisions at the same time, which could lead to divergent system states.

WPaxos adopts the more general two-phase reconfiguration procedure from Raft for arbitrary C's, where C = ⟨Q_1, Q_2⟩ and C' = ⟨Q_1', Q_2'⟩. WPaxos further reduces the two phases to one in certain special cases, since adding or removing one zone or one row are the most common reconfiguration operations in a WAN topology. These four operations are equivalent to Raft's first solution, because the combined quorum system of C+C' is equivalent to the quorum system of C'. We show an example of adding a new zone of dashed nodes in Figure <ref>: the previous configuration Q_1 involves two zones, whereas the new configuration Q_1' involves three zones, including the newly added zone, so the union of the quorums in Q_1 and Q_1' is Q_1' itself; both Q_2 and Q_2' remain the same size of two zones. The general quorum intersection assumption and the restrictions

Q_1' ∪ Q_1 = Q_1'

Q_2' ∪ Q_2 = Q_2'

ensure that the old and new configurations cannot make separate decisions, providing the same safety property.
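These restrictions amount to checking that every quorum of the old configuration is also a quorum of the new one. A sketch of such a check (hypothetical types; quorum systems are modeled as explicit lists of node sets for illustration):

```go
package main

import "fmt"

// quorumSystem models Q1 or Q2 as an explicit list of quorums, each a
// set of node IDs.
type quorumSystem []map[string]bool

// contains reports whether qs includes the quorum q.
func (qs quorumSystem) contains(q map[string]bool) bool {
	for _, cand := range qs {
		if len(cand) == len(q) && subset(q, cand) {
			return true
		}
	}
	return false
}

func subset(a, b map[string]bool) bool {
	for n := range a {
		if !b[n] {
			return false
		}
	}
	return true
}

// onePhaseSafe checks Q1' ∪ Q1 = Q1' and Q2' ∪ Q2 = Q2', i.e. every
// quorum of the old configuration C is also a quorum of C', so a
// decision under C is a decision under C' and the one-phase
// reconfiguration shortcut is safe.
func onePhaseSafe(q1, q2, q1n, q2n quorumSystem) bool {
	for _, q := range q1 {
		if !q1n.contains(q) {
			return false
		}
	}
	for _, q := range q2 {
		if !q2n.contains(q) {
			return false
		}
	}
	return true
}

func main() {
	old := quorumSystem{{"a1": true, "b1": true}}
	grown := quorumSystem{{"a1": true, "b1": true}, {"a1": true, "c1": true}}
	fmt.Println(onePhaseSafe(old, old, grown, grown)) // true: C is contained in C'
}
```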
§ EVALUATION

We developed a general framework, called Paxi, to conduct our evaluation. The framework allows us to compare WPaxos, EPaxos, M2Paxos, and other Paxos protocols in the same controlled environment under identical workloads. We implemented Paxi along with WPaxos and EPaxos in Go and released it as an open-source project on GitHub at <https://github.com/ailidani/paxi>. The framework provides extended abstractions to be shared between all Paxos variants, including location-aware configuration, network communication, a client library with a RESTful API, and a quorum management module (which accommodates majority quorums, fast quorums, grid quorums, and flexible quorums). Paxi's networking layer encapsulates a message-passing model, exposes basic interfaces for a variety of message-exchange patterns, and transparently supports TCP, UDP, and simulated connections with Go channels. Additionally, the Paxi framework incorporates mechanisms to facilitate the startup of the system by sharing the initial parameters through the configuration management tool.

§.§ Setup

We evaluated WPaxos using the key-value store abstraction provided by our Paxi framework. We used AWS EC2 <cit.> nodes to deploy WPaxos across 5 different regions: Tokyo (T), California (C), Ohio (O), Virginia (V), and Ireland (I). In our experiments, we used 4 m5.large instances at each AWS region to host 3 WPaxos nodes and 20 concurrent clients. WPaxos in our experiments uses the adaptive mode by default, unless otherwise noted.

We conducted all experiments with the Paxi microbenchmark. Paxi provides benchmarking capabilities similar to YCSB <cit.>, with both benchmarks generating similar workloads. However, the Paxi benchmark adds more tuning knobs to facilitate testing in the wide area with workloads that exhibit different conflict and locality characteristics. In order to simulate workloads with tunable access-locality patterns, we used a normal distribution to control the probability of generating a request on each object. As shown in Figure <ref>, we used a pool of 1000 common objects, with the probability function of each region denoting how likely an object is to be selected at a particular zone. Each region has a set of objects it is more likely to access; we define locality as the percentage of the requests pulled from such a set of likely objects. We introduce locality to our evaluation by drawing the conflicting keys from a normal distribution 𝒩(μ, σ^2), where μ can be varied for different zones to control the locality, and σ is shared between the zones.
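A sketch of such a locality-controlled key generator, together with a helper computing the resulting locality metric defined next (our own simplification; the exact Paxi generator may differ):

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// nextKey draws a key for a zone whose accesses follow N(mu, sigma^2),
// clamped to the shared pool [0, pool). Spacing the per-zone means
// apart while keeping sigma fixed tunes how much the zones overlap.
func nextKey(rng *rand.Rand, mu, sigma float64, pool int) int {
	k := int(math.Round(rng.NormFloat64()*sigma + mu))
	if k < 0 {
		k = 0
	} else if k >= pool {
		k = pool - 1
	}
	return k
}

// locality computes L = Phi1(xhat) - Phi2(xhat) for two equal-sigma
// normals, where xhat = (mu1+mu2)/2 is their intersection point.
func locality(mu1, mu2, sigma float64) float64 {
	phi := func(z float64) float64 { return 0.5 * (1 + math.Erf(z/math.Sqrt2)) }
	xhat := (mu1 + mu2) / 2
	return phi((xhat-mu1)/sigma) - phi((xhat-mu2)/sigma)
}

func main() {
	rng := rand.New(rand.NewSource(1))
	fmt.Println(nextKey(rng, 300, 50, 1000)) // a key drawn for one zone
	// Means 150 and 300 with sigma=50, as in one of the evaluated
	// workloads, give L of about 0.866, i.e. ~86.6% locality.
	fmt.Printf("%.3f\n", locality(150, 300, 50))
}
```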
The locality can be visualized as the non-overlapping area under the probability density functions in Figure <ref>. In our workload we used distributions with identical standard deviations but different, equally spaced means, giving a locality of roughly 70%. This means that 70% of the commands access the objects a zone is more likely to own; however, any zone can still potentially request any object. Locality L is the complement of the overlapping coefficient (OVL)[The overlapping coefficient (OVL) is a measure of the similarity between two probability distributions; it refers to the area lying under both probability density functions simultaneously <cit.>.] among the workload distributions: L = 1 − OVL. Let Φ((x−μ)/σ) denote the cumulative distribution function (CDF) of a normal distribution with mean μ and deviation σ, and let x̂ be the x-coordinate of the intersection point of the two distributions; the locality is then given by L = Φ_1(x̂) − Φ_2(x̂). At the two ends of the spectrum, locality equals 0 if the two overlapping distributions are congruent, and locality equals 1 if the two distributions do not intersect.

Paxi provides a replicated key-value store as the state-machine application on top of the protocols under evaluation. The client library of Paxi's key-value store has both synchronous and asynchronous versions of the update (put) and query (get) operations. Our workloads are generated by exercising two primary parameters: conflict and locality. The conflict c is the proportion of commands operating on objects shared across zones. A workload with conflicting objects exhibits no locality if the objects are selected uniformly at random at each zone. In our experiments we vary the conflict and locality parameters to test WPaxos under different scenarios. We run each experiment for 5 minutes, given 500 keys that are shared across regions and 500 designated keys local to each region.

§.§ Object Space

We begin by presenting our evaluation of the overhead with an increasing number of objects in the WPaxos system. Every object in WPaxos is fully replicated. We preload the system with one thousand to one million keys evenly distributed among three regions (Virginia, Oregon, and California), then generate requests with random keys from every region. To evaluate the performance impact, we measure the average latency in each of the three regions. The results shown in Figure <ref> indicate no significant impact on request latency. This is expected, since a hash-map index has O(1) lookup time to keep track of an object and its current leader. The index does not consume extra memory, because the leader ID is already maintained in the ballot number of the last Paxos log entry. At the end of our experiment, one million keys without log snapshots and garbage collection consume about 1.6 GB of memory on our 8 GB VM.
As more keys are inserted into the system, we expect steady performance as long as they fit into memory.

§.§ WPaxos Quorum Latencies

Ideally, we want to minimize the latency of the most frequently used quorum type, while containing the performance degradation of the less frequently used quorums. In this set of experiments, we compare the latency of Q_1 and Q_2 accesses in three different fault-tolerance configurations: (f_z=f_n=0), (f_z=0, f_n=1), and (f_z=f_n=1). The configurations with f_n=0 use a single node per zone/region for Q_1 and require all nodes in one zone/region to form Q_2, while the configurations with f_n=1 require one fewer node in Q_2. When f_z=0, Q_1 uses all 5 zones and Q_2 remains in a single region. With f_z=1, Q_1 uses 4 zones, which reduces phase-1 latency significantly, but Q_2 requires 2 zones/regions and thus exhibits WAN latency. In each region we simultaneously generated a fixed number (1000) of phase-1 and phase-2 requests, and measured the latency of each phase. Figure <ref> shows the average latency in phase-1 (left) and phase-2 (right). The Q_1 quorums of the f_n=1 configurations are twice the size of those of WPaxos with f_n=0, but both experience an average latency of about one round trip to the farthest peer region, since the communication happens in parallel. Within a zone, however, f_n=1 can tolerate one straggler node, reducing the latency of the most frequently used Q_2 quorum type.

§.§ Conflicting Commands

Here we evaluate the performance of WPaxos with respect to conflicting commands. In WPaxos, we treat any two commands operating on the same object in the Paxi key-value store as conflicting. In our experiments, the workload ranges from 0% conflicts (i.e., all requests are completely local to their leaders) to 100% conflicts (i.e., every request targets a conflicting object). While WPaxos can only denote object-based conflicts and non-conflicts, EPaxos can denote operation-based conflicts and non-conflicts in the general case. In the context of our experiments on the Paxi key-value store, for EPaxos we treat any two update operations on the same object as conflicting, and treat read operations on any object as non-conflicting with any other read operations. This is the same setup used in the evaluation of the EPaxos paper <cit.>.

As shown in Figure <ref>, WPaxos without zone-failure tolerance (f_z=0) performs better than WPaxos tolerating one zone failure (f_z=1) in every case, because a Q_2 within a region avoids the RTT to neighboring regions for non-conflicting commands. Since the Ohio region is located at the relative center of our topology, it becomes the leader of the conflicting objects, and the performance in that region becomes independent of the conflict ratio. More interestingly, even though both WPaxos f_z=1 and EPaxos require two RTTs for conflicting commands, WPaxos is able to reduce the latency by committing requests with two closer regions. For example, in the 100% conflict workload, requests from C are committed with one RTT between C and O (49 ms) plus one RTT between V and O (11 ms), instead of two C-O RTTs as in EPaxos.

§.§ Latency Comparison

We compare the latency of the WPaxos (with f_z=0,1,2), EPaxos, and M2Paxos protocols using three sets of workloads: random (Figure <ref>), ∼70% locality (Figure <ref>), and ∼95% locality (Figure <ref>). Before each round of experiments, we divide 1000 objects among the regions and preload them, such that every region owns 200 objects, according to Figure <ref>.
For each experiment, the clients in each region generate requests concurrently for a duration of one minute.

Figure <ref> compares the average latency of the random workload in the 5 regions. Each region experiences a different latency due to its asymmetric location in the geographical topology. WPaxos in the f_z=1 and f_z=2 tolerance configurations shows higher latency than EPaxos, because requests are forwarded to leaders in other regions most of the time, which incurs extra wide-area RTTs, whereas EPaxos initiates its PreAccept phase at any local node. WPaxos with f_z=0 performs better than all the other protocols due to its local phase-2 quorums.

Figure <ref> shows that, under the ∼70% locality workload (𝒩(μ_z, σ=100)), regions located close to the geographic center improve their average latencies. Given the wide standard deviation of the accessed keys, EPaxos experiences a slightly higher conflict rate, and WPaxos and M2Paxos still experience request forwarding to remote leaders. When the phase-2 quorums cover the same number of regions, all three protocols show a similar average latency. Since WPaxos provides more flexibility in configuring the fault-tolerance factor, WPaxos with f_z=0,1 outperforms all the other protocols in all regions. In Figure <ref>, we increase the locality to ∼95% (𝒩(μ_z, σ=50)). EPaxos shows a similar pattern as in the previous experiments, whereas WPaxos achieves much lower latency by largely avoiding WAN forwarding in all regions.

Figure <ref> shows the tail latencies caused by object stealing in the WPaxos immediate and adaptive modes and compares them with EPaxos in 5- and 15-node deployments. In the figure, all request latencies from every region are aggregated to produce the cumulative distribution function (CDF). Using WPaxos immediate, the edge regions suffer from high object-stealing latencies, because their Q_1 latencies are longer due to their location in the topology. WPaxos adaptive alleviates and smooths these effects. Even under low locality, about half of the requests are committed with local-area latency in WPaxos.

Figure <ref> shows the request latencies under varying degrees of conflict in three regions. EPaxos always has to pay the price of WAN communication, while WPaxos tries to keep as many operations local as possible. With small conflicts, c ≤ 50%, the median latency of EPaxos is about 1 RTT (60-70 ms) between the region and its closest neighboring region, while WPaxos reduces the median latency to the local commit time (1-4 ms). Under full conflict (c = 100%), both EPaxos and WPaxos degrade to a full WAN RTT, as EPaxos is no longer able to commit most commands in a fast quorum, and WPaxos is forced to do frequent object stealing. WPaxos, however, can still achieve a good median latency in VA, which is the geographically central region in our topology. This is because the performance penalty for stealing an object to this region is significantly lower, allowing VA to process more requests, some of which will be local thanks to the previously stolen objects.

We repeat the 100% conflict experiment with 4 regions, adding Tokyo (JP), in Figure <ref>. In the new topology, EPaxos's fast quorum size expands to 3 regions instead of 2, hence the minimum commit latency increases to the second-smallest RTT. The median latency of EPaxos, however, reflects mostly normal Paxos rounds under high conflicts. WPaxos's ability to commit locally greatly depends on the locality of the conflicting objects. As a result, WPaxos reduces the median latency by 96% at all conflict ratios. Even with 100% conflicts, WPaxos is able to reduce the median latency by exploiting the intermediate locality among the shared objects.
The 99th-percentile latency under high conflicts (≥ 50%) reflects the small fraction of slow commits: where EPaxos has to take two RTTs on its slow path, WPaxos is able to commit in one RTT. Figure <ref> shows the average latency of the workload with locality derived from Figure <ref>, where the conflict is c = 100%, σ = 50, μ = 150, 300, 450, respectively, and the locality is l = 86.6%. The EPaxos replica in the VA region experiences the highest average commit latency, because its distribution overlaps with those of two other regions, whereas in WPaxos this disadvantage is canceled out by the favorable location that enables the VA region to steal more objects. The average latency of WPaxos is one-third to one-fifth (depending on the region) of that of EPaxos. Figure <ref> shows the average latency of the same experiment but with 4 regions; the gap between WPaxos and EPaxos increases even more in the two edge regions. We also present the median and 95th-percentile latencies in Table <ref>. While the WPaxos median latency is a local RTT (1-3 ms), the EPaxos median latency is a WAN RTT. Even in the case of the 95th-percentile latency, the latency of WPaxos is smaller than that of EPaxos in most regions, with VA being the exception.

§.§ Throughput Comparison

We experiment on the scalability of WPaxos with respect to the number of requests it processes by driving a steady workload at each zone. Instead of the medium instances, we used a cluster of 15 large EC2 nodes to host the WPaxos deployments. EPaxos is hosted on the same nodes, but with only one EPaxos node per zone. We opted out of using EPaxos with 15 nodes, because our preliminary experiments showed significantly higher latencies with such a large EPaxos deployment. We limit the WPaxos deployments to a single leader per zone to be better comparable to EPaxos. We gradually increase the load on the systems by issuing more requests and measure the latency at each throughput level.

Figure <ref> shows the latencies as the aggregate throughput increases. At low load, we observe that both immediate and adaptive WPaxos significantly outperform EPaxos, as expected. With a relatively small number of requests coming through the system, WPaxos has low contention for object stealing and can perform many operations locally within a region. As the number of requests increases and contention rises, the performance of both EPaxos and WPaxos with immediate object stealing deteriorates. EPaxos suffers from high conflict in the WAN, degrading its performance further: cross-datacenter latencies increase the commit time, resulting in a higher conflict probability, and at high conflict EPaxos falls back to two-phase operation, which negatively impacts both latency and maximum throughput. Immediate WPaxos suffers from leaders competing for objects with neighboring regions, degrading its performance faster than EPaxos. The median request latency graph in Figure <ref> clearly illustrates this deterioration. This behavior of WPaxos with immediate object stealing is caused by dueling leaders: as two nodes in neighboring zones try to acquire ownership of the same object, each restarts phase-1 of the protocol before the other leader has a chance to finish its phase-2. On the other hand, WPaxos in the adaptive object-stealing mode scales better and shows almost no degradation until it starts to reach the CPU and networking limits of the individual instances. The adaptive WPaxos median latency actually slightly decreases under medium workloads, while EPaxos shows a gradual latency increase.
At a workload of 10,000 req/s, adaptive WPaxos outperforms EPaxos by 9 times in terms of average latency and by 54 times in terms of median latency.

§.§ Shifting Locality Workload

Workloads with shifting locality are common for large-scale applications that process human input. For instance, social networks may experience drifting access patterns as the activity of people changes depending on the time of the day. WPaxos is capable of adapting to such changes in the workload access patterns while still maintaining the benefits of locality. While statically partitioned solutions perform well when there are many local requests accessing each partition, they require a priori knowledge of the best possible partitioning. Moreover, statically partitioned Paxos systems cannot adjust to changes in the access pattern, so their performance degrades when the locality shifts. Many applications in the WAN setting may experience workloads with shifting access patterns, such as diurnal patterns <cit.>.

Figure <ref> illustrates the effects of shifting workload locality on WPaxos and on a statically key-partitioned Paxos (KPaxos). KPaxos starts in the optimal state, with most of the requests performed on local objects. When the access locality is gradually shifted by changing the means of the locality distributions at a rate of 2 objects/sec, the access pattern moves further from the optimum for the statically partitioned Paxos, and its latency increases. WPaxos, on the other hand, does not suffer from the shifts in locality: the adaptive algorithm slowly migrates the objects to the regions with more demand, providing stable and predictable performance under shifting access locality.

§.§ Fault Tolerance

In this section we evaluate WPaxos availability by using the Paxi framework's fault-injection API to introduce different failures, measuring the latency and throughput of a normal workload every second. Every fault injection lasts for 10 seconds, after which the failed components recover. Figure <ref> shows the results for the first deployment, where f_n = 1 and f_z = 0; the throughput and latency are measured in region V. For the first 10 seconds of normal operation, the latency and throughput are steady at less than 1 millisecond and 1000 operations/second, respectively. We first crash one node in region V; this has no effect on performance, since |q_2|=2 out of the 3 nodes in that region. At the 30th second, we crash two local nodes, so that a local q_2 can no longer be formed. The requests then have to wait for two acks from the neighboring region O, which adds an 11 ms RTT to the latency.

Figure <ref> shows the results for the same deployment but with f_z = 1, where we can tolerate any one zone failure. The latency remains at 11 ms, as q_2 requires 2 nodes from both V and O. At the 10th second, we crash region O entirely; the leader then has to wait for acks from region C, and the latency becomes 60 ms. When region O recovers, we partition 4 of the 9 nodes away from the system as a minority: 3 nodes from C and one node from O. As expected, such a partition has no effect on system performance. In all of the above failure scenarios, WPaxos always remains available.

§ CONCLUDING REMARKS

WPaxos achieves fast wide-area coordination by dynamically partitioning the objects across multiple leaders that are strategically deployed using flexible quorums.
Such partitioning and the emphasis on local operations allow our protocol to significantly outperform other WAN Paxos solutions. Since object stealing is an integrated part of phase-1 of Paxos, WPaxos remains simple, as a pure Paxos flavor, and obviates the need for another service/protocol for relocating objects to zones. Since the base WPaxos protocol guarantees safety in the face of concurrency, asynchrony, and faults, the performance can be tuned orthogonally and aggressively. In future work, we will investigate smart object stealing policies that can proactively move objects to zones with high demand. We will also investigate implementing transactions more efficiently by leveraging WPaxos optimizations.

§ ACKNOWLEDGMENTS

This project is in part sponsored by the National Science Foundation (NSF) under award number CNS-1527629.

| http://arxiv.org/abs/1703.08905v4 | {
"authors": [
"Ailidani Ailijiang",
"Aleksey Charapko",
"Murat Demirbas",
"Tevfik Kosar"
],
"categories": [
"cs.DC"
],
"primary_category": "cs.DC",
"published": "20170327023401",
"title": "WPaxos: Wide Area Network Flexible Consensus"
} |
1] Naveen Kumar Garg, [email protected], [email protected]
2] Michael Junk, [email protected]
3] S. V. Raghurama Rao, [email protected]
4] M. Sekhar, [email protected]

[1] Research Scholar, IISc Mathematical Initiative (IMI), Indian Institute of Science, Bangalore, India
[2] Professor, Fachbereich Mathematik und Statistik, Universität Konstanz, Germany
[3] Associate Professor, Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India
[4] Professor, Department of Civil Engineering, Indian Institute of Science, Bangalore, India

In this article, we develop an upwind scheme based on Flux Difference Splitting, using Jordan canonical forms, to simulate genuinely weakly hyperbolic systems. The theory of Jordan canonical forms is used to complete the defective set of linearly independent eigenvectors. The proposed FDS-J scheme is capable of recognizing various shocks accurately.

Weakly hyperbolic systems, Jordan canonical forms, Upwind scheme

§ INTRODUCTION

Central and upwind discretization schemes are the popular categories of numerical methods for simulating hyperbolic conservation laws. A system is said to be hyperbolic if its Jacobian matrix has all real eigenvalues with a complete set of linearly independent eigenvectors. Upwind schemes based on Flux Difference Splitting (FDS) are usually more accurate than others. Two popular schemes belonging to this category are the approximate Riemann solvers of Roe <cit.> and Osher <cit.>, and these depend heavily on the eigenvector structure. Thus their applications are limited to systems which have a complete set of linearly independent eigenvectors. Several other numerical schemes also depend strongly on the eigenstructure and thus share the same difficulty. Recently, an attempt was made <cit.> to extend the Roe scheme to weakly hyperbolic systems by adding a perturbation parameter ϵ to make such systems strictly hyperbolic.

In this article we develop an upwind method based on the concept of flux difference splitting together with Jordan forms, thus naming it the FDS-J scheme, to simulate genuinely weakly hyperbolic systems. We use the theory of Jordan canonical forms to complete the defective set of linearly independent (LI) eigenvectors. The pressureless gas dynamics system, which happens to be weakly hyperbolic, is considered; it is known to produce delta shocks in the density variable. Next, we consider the modified Burgers' system as given in <cit.>; for this system, too, delta shocks occur exactly at the same locations where normal shocks occur in the primary variables. Similarly, other types of discontinuities, namely δ^'-shocks and δ^''-shocks, are observed if we further extend the modified Burgers' system as given in <cit.> and <cit.>. The FDS-J solver is capable of recognizing these shocks accurately. A comparison is made with the simple Local Lax-Friedrichs (LLF) <cit.> method. The contribution of the generalized eigenvectors is not seen directly in the final FDS-J scheme for the considered genuinely weakly hyperbolic systems. This is because, for each considered system, all eigenvalues are equal, with arithmetic multiplicity (AM) greater than one, in the resulting single Jordan block.
§ 1-D PRESSURELESS SYSTEM

Consider the one-dimensional pressureless gas dynamics system

∂U/∂t + ∂F(U)/∂x = 0

where U is the conserved variable vector and F(U) is the flux vector, defined by

U = [ ρ; ρu ],  F(U) = [ ρu; ρu^2 ]

This system can also be written in quasilinear form as follows:

∂U/∂t + A ∂U/∂x = 0

Here A is the Jacobian matrix for the pressureless system and is given by

A = [ 0  1; -u^2  2u ]

The eigenvalues of the Jacobian matrix A are λ_1 = λ_2 = u, so the algebraic multiplicity (AM) of the eigenvalue is 2, and we have to examine the eigenvector space to see whether A has a complete set of linearly independent eigenvectors or not. The analysis of the matrix A shows that the given system is weakly hyperbolic, as there is no complete set of linearly independent eigenvectors; the only eigenvector is

R_1 = [ 1; u ]

Since the given system does not have a complete set of linearly independent (LI) eigenvectors, it is difficult to apply any upwind scheme based on either the Flux Vector Splitting (FVS) method or the Flux Difference Splitting (FDS) method. But from the theory of Jordan canonical forms we can still recover a complete set of LI generalized eigenvectors.

§ JORDAN CANONICAL FORMS AND FDS FOR PRESSURELESS GAS DYNAMICS

Every square matrix is similar to a triangular matrix with all eigenvalues on its main diagonal. A square matrix is similar to a diagonal matrix only if it has a complete set of LI eigenvectors, but every square matrix can be made similar to a Jordan matrix. An n×n matrix J with repeated eigenvalue λ is called a Jordan matrix of order n if each diagonal entry in a Jordan block is λ, each entry on the superdiagonal is 1, and every other entry is zero. Here we provide a brief procedure to reduce a given square matrix to a Jordan matrix.

§.§ Re-visit of typical cases

Let A be an n × n matrix with n real eigenvalues λ_1, λ_2, λ_3, ⋯, λ_n. The following typical cases may arise.

Case 1: When all λ_i, where 1 ≤ i ≤ n, are distinct. In this case the matrix A will have a complete set of LI eigenvectors and hence will be similar to a diagonal matrix.

Case 2: When some λ_i are equal, i.e., λ_1 = λ_2 = λ_3 = ⋯ = λ_p = λ, where p is a natural number ≤ n, either of the following sub-cases may occur.

Sub-case 1: If the algebraic multiplicity (AM) of an eigenvalue λ, which is p in the assumed case, is equal to its geometric multiplicity (GM), and moreover if this is true for all subsets of equal eigenvalues, then the square matrix A will again be similar to a unique diagonal matrix.

Sub-case 2: Now consider the case in which the GM is strictly less than the AM; then the LI set of eigenvectors will not be complete. Here we can invoke the theory of Jordan canonical forms to recover a full LI set of generalized eigenvectors and to make the given square matrix similar to a Jordan matrix, which is not much different from a diagonal matrix.

Definition: An n × n matrix is called a defective matrix if it does not possess a full set of linearly independent eigenvectors.

Procedure to find generalized eigenvectors: In this article we mainly focus on systems which belong to the category discussed in Sub-case 2. If all eigenvalues of a given defective matrix are equal, and further if there is only a single Jordan block corresponding to the given matrix, then the following steps recover the full set of LI generalized eigenvectors:

(i) For an eigenvalue λ, compute the ranks of the matrices A-λI, (A-λI)^2, ⋯, and find the least positive integer s such that rank(A-λI)^s = rank(A-λI)^s+1.
There will be a single Jordan block only if s equals the dimension of the given matrix.

(ii) Once s is equal to the dimension of the defective matrix, the generalized eigenvectors can be computed from the system of equations AP = PJ, where

J(λ) = [ λ 1; ⋱ ⋱; ⋱ 1; λ ]_s×s

Let P = [X_1, X_2, X_3, ⋯, X_s] be a matrix of column vectors which need to be evaluated. Then

A [X_1, X_2, X_3, ⋯, X_s] = [X_1, X_2, X_3, ⋯, X_s] [ λ 1; ⋱ ⋱; ⋱ 1; λ ]_s×s

gives

AX_1 = λX_1
AX_2 = λX_2 + X_1
AX_3 = λX_3 + X_2
⋮
AX_s = λX_s + X_s-1

Now we can compute all true and generalized eigenvectors from the system of relations (<ref>).

For the present case, u is the repeated eigenvalue with arithmetic multiplicity (AM) 2, and on computing the ranks of the matrices A-uI, (A-uI)^2 and (A-uI)^3, we find rank(A-uI)^2 = 0 = rank(A-uI)^3. Thus s is 2 in this case, so there is one Jordan block of order 2. On expanding the relation AP = PJ, we get

A [X_1 X_2] = [X_1 X_2] [J_1 J_2]

where the X_i's are linearly independent 2 × 1 column vectors. Similarly, the J_i's are the column vectors which form the Jordan matrix J, given as

J_1 = [ λ; 0 ]_2×1,  J_2 = [ 1; λ ]_2×1

On solving (<ref>), we get the following relations for the eigenvectors:

AX_1 = λX_1
AX_2 = λX_2 + X_1

The first relation of (<ref>) gives X_1 = R_1, and on using this value in the second relation of (<ref>), we get

X_2 = R_2 = [ x_1; 1 + ux_1 ]

which is a generalized eigenvector of the pressureless gas dynamics system, with x_1 ∈ ℝ.
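To make this construction concrete, the following minimal Python/NumPy sketch (the velocity value is an illustrative assumption, and the free constant x_1 is set to zero) builds the defective pressureless Jacobian, checks the Jordan chain relations AX_1 = λX_1 and AX_2 = λX_2 + X_1, and verifies that {R_1, R_2} is a complete basis:

import numpy as np

def jordan_basis(u, x1=0.0):
    # True eigenvector R1 and generalized eigenvector R2 of the
    # pressureless Jacobian A = [[0, 1], [-u**2, 2*u]], eigenvalue lambda = u.
    R1 = np.array([1.0, u])            # A R1 = u R1
    R2 = np.array([x1, 1.0 + u * x1])  # A R2 = u R2 + R1
    return R1, R2

u = 1.5  # illustrative velocity
A = np.array([[0.0, 1.0], [-u**2, 2.0 * u]])
R1, R2 = jordan_basis(u)

assert np.allclose(A @ R1, u * R1)       # first Jordan chain relation
assert np.allclose(A @ R2, u * R2 + R1)  # second Jordan chain relation

P = np.column_stack([R1, R2])
print("det P =", np.linalg.det(P))  # nonzero: {R1, R2} spans R^2

Any jump △U can then be expanded in this basis by solving P α̅ = △U, which is exactly the decomposition used in the FDS formulation below.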
§.§ Formulation of an FDS scheme for the Pressureless System

System (<ref>) can be written in quasi-linear form as

∂U/∂t + A ∂U/∂x = 0

Because of the non-linearity of the Jacobian matrix A, it is difficult to solve the above system directly. But locally, inside each cell, A can be linearized to form a constant matrix A̅, which is then a function of the left and right state variables U_L and U_R, i.e., A̅ = A̅(U_L, U_R). So (<ref>) becomes

∂U/∂t + A̅ ∂U/∂x = 0

On comparing (<ref>) and (<ref>), we get dF = A̅ dU. The finite difference analogue of this differential relation is

△F = A̅ △U

where

△F = F_R - F_L
△U = U_R - U_L

In the above equations, the subscripts R and L represent the right and left states respectively. Relation (<ref>) ensures the conservation property. As already explained, the present system is weakly hyperbolic, but based on the above procedure we can construct a basis of true and generalized eigenvectors for the column vector △U, i.e.,

△U = ∑_i=1^2 α̅_i R̅_i

where the α̅_i's are the coefficients attached to the two LI eigenvectors of the given system. On using the above equation in (<ref>), we get

△F = A̅ ∑_i=1^2 α̅_i R̅_i

For weakly hyperbolic systems, A̅ is non-diagonalizable, resulting in A̅R̅_i ≠ λ̅_i R̅_i for some i. We now have R̅_2 as a generalized eigenvector, and

A̅R̅_1 = λ̅_1 R̅_1
A̅R̅_2 = λ̅_2 R̅_2 + R̅_1

On using the above relations in (<ref>), we get

△F = α̅_1 λ̅_1 R̅_1 + α̅_2 λ̅_2 R̅_2 + α̅_2 R̅_1

We now define the standard Courant splitting for the eigenvalues, with λ̅^+_i - λ̅^-_i = |λ̅_i|. After splitting each of the eigenvalues into a positive and a negative part, △F^+ and △F^- can be written as

△F^+ = α̅_1 λ̅^+_1 R̅_1 + α̅_2 λ̅^+_2 R̅_2 + α̅_2 R̅_1

and

△F^- = α̅_1 λ̅^-_1 R̅_1 + α̅_2 λ̅^-_2 R̅_2 + α̅_2 R̅_1

Taking a cue from traditional flux difference splitting methods, we now write the interface flux as

F_I = 1/2 [F_L + F_R] - 1/2 [△F^+ - △F^-]

On using (<ref>) and (<ref>) in the upwinding part of the FDS formulation for the pressureless system, we get

△F^+ - △F^- = ∑_i=1^2 α̅_i |λ̅_i| R̅_i

Since both eigenvalues are the same, the above relation becomes

△F^+ - △F^- = |λ̅| △U

Now △U_2 equals △(ρu), which can be further expressed as

△(ρu) = u̅ △ρ + ρ̅ △u

where u̅ is some average of u_L and u_R, and ρ̅ is another average of ρ_L and ρ_R, both to be determined. We then have

ρ_R u_R - ρ_L u_L = u̅ (ρ_R - ρ_L) + ρ̅ (u_R - u_L)

We need to find average values for both the density and velocity variables, and both must satisfy relation (<ref>) to yield meaningful interface fluxes inside each cell. Again consider the relation △F = A̅ △U, which in expanded form reads

[ △(ρu); △(ρu^2) ] = [ 0  1; -u̅^2  2u̅ ] [ △(ρ); △(ρu) ]

The first relation is automatically satisfied for any average values. From the second relation, we get

△(ρu^2) = -u̅^2 △(ρ) + 2u̅ △(ρu)

where

△(ρ) = (ρ)_R - (ρ)_L
△(ρu) = (ρu)_R - (ρu)_L
△(ρu^2) = (ρu^2)_R - (ρu^2)_L

After rearrangement of terms we obtain

u̅^2 △(ρ) - 2u̅ △(ρu) + △(ρu^2) = 0

which is a quadratic equation in u̅, the solution of which, after a little algebra, is

u̅ = (√(ρ_L) u_L ± √(ρ_R) u_R) / (√(ρ_L) ± √(ρ_R))

We discard the root with negative signs in both numerator and denominator, as it is not physical and may blow up as √(ρ_R) ⟶ √(ρ_L) (or vice versa). Thus the average value of u is defined as

u̅ = (√(ρ_L) u_L + √(ρ_R) u_R) / (√(ρ_L) + √(ρ_R))

On using u̅ in relation (<ref>) we get

ρ_R u_R - ρ_L u_L = [(√(ρ_L) u_L + √(ρ_R) u_R) / (√(ρ_L) + √(ρ_R))] (ρ_R - ρ_L) + ρ̅ (u_R - u_L)

Using (ρ_R - ρ_L) = (√(ρ_R) + √(ρ_L))(√(ρ_R) - √(ρ_L)) in the above equation, after rearrangement of terms we get

ρ̅ = √(ρ_R) √(ρ_L)

Since the density is always positive, the average value ρ̅ equals √(ρ_R ρ_L). One can check that relation (<ref>) becomes an identity for the above-defined averages of the density and velocity variables. As the interface flux is now completely defined, the final update formula in the finite volume framework is written as

U^n+1_j = U^n_j - (Δt/Δx) [ F^n_j+1/2 - F^n_j-1/2 ]
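A minimal one-dimensional sketch of the resulting method is given below (Python/NumPy; the boundary treatment and any particular grid or time step are illustrative assumptions). It evaluates the interface flux F_I = (1/2)(F_L + F_R) - (1/2)|u̅| △U with the average u̅ derived above, and applies the finite volume update:

import numpy as np

def flux(U):
    # Pressureless flux F(U) = (rho*u, rho*u**2) for U = (rho, rho*u).
    rho, m = U
    return np.array([m, m * m / rho])

def interface_flux(UL, UR):
    rhoL, rhoR = UL[0], UR[0]
    uL, uR = UL[1] / rhoL, UR[1] / rhoR
    # Average velocity satisfying dF = A_bar dU (derived above).
    u_bar = (np.sqrt(rhoL) * uL + np.sqrt(rhoR) * uR) / (np.sqrt(rhoL) + np.sqrt(rhoR))
    # Both eigenvalues equal u_bar, so the upwind part reduces to |u_bar| dU.
    return 0.5 * (flux(UL) + flux(UR)) - 0.5 * abs(u_bar) * (UR - UL)

def step(U, dt, dx):
    # U has shape (N, 2); interior cells are updated, boundary cells held fixed.
    F = np.array([interface_flux(U[j], U[j + 1]) for j in range(len(U) - 1)])
    U_new = U.copy()
    U_new[1:-1] -= (dt / dx) * (F[1:] - F[:-1])
    return U_new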
§.§ Numerical examples

Here we consider two test cases for 1-D pressureless gas dynamics. The first test case is taken from <cit.>, with initial conditions (ρ_L, u_L) = (1.0, 1.5) and (ρ_R, u_R) = (0.2, 0.0), with x_o = 0.0; all solutions are obtained at final time t = 0.2 units. In this case, a δ-shock develops in the density variable, and our FDS-J scheme captures this feature accurately, as seen in Figure <ref>. The formation of a step discontinuity in the velocity variable is shown in Figure <ref>.

The second test case is taken from <cit.>. It is designed to check the positivity property and the maximum principle for the density and velocity variables, respectively. For this problem, the FDS-J scheme generates insufficient numerical diffusion. To obtain a meaningful solution, we use Harten's entropy fix <cit.>, which increases the diffusion in the scheme, i.e.,

|λ̃| = |λ| if |λ| ≥ ϵ, and |λ̃| = (1/2)(λ^2/ϵ + ϵ) if |λ| < ϵ

for some small value of ϵ. The density variable plot is shown in Figure <ref>.

§ MODIFIED BURGERS' SYSTEM

Next we consider the modified Burgers' system, which is formed by augmenting the inviscid Burgers equation with the equation obtained by taking its derivative, forming a 2×2 system. Consider the one-dimensional inviscid Burgers' equation

u_t + f_x(u) = 0

where u is the conserved variable and f(u) is the flux function, given by f(u) = (1/2)u^2. On differentiating the above equation w.r.t. x, we obtain

(u_t)_x + (f_x(u))_x = 0

which can further be written as

(u_x)_t + (f'(u) u_x)_x = 0

or

v_t + g_x(u) = 0

where we define v = u_x and g(u) = f'(u)v. Equations (<ref>) and (<ref>) together form the 2×2 system

∂U/∂t + A ∂U/∂x = 0

where U is a column vector and A is a 2×2 matrix, i.e.,

U = [ u; v ] and A = [ u 0; v u ]

The eigenvalues of the Jacobian matrix A are λ_1 = u = λ_2, so the algebraic multiplicity (AM) of the eigenvalue u is 2. For v ≠ 0, analysis of the matrix A shows that the system is weakly hyperbolic, as it has only one LI eigenvector, given by

R_1 = [ 0; 1 ]

We find that there is one Jordan block of order two, since rank(A - uI)^2 = 0 = rank(A - uI)^3. As in the previous case, to find a generalized eigenvector we need to solve the relation AP = PJ. After a little algebra, R_2 comes out as

R_2 = [ 1/v; x_2 ]

where x_2 ∈ ℝ.

§.§ Formulation of the FDS scheme for the Modified Burgers' system

The analysis used for the pressureless gas dynamics system carries over to the modified Burgers' system up to equation (<ref>), which is

△F^+ - △F^- = |λ̅| △U

In this case △U is defined as

△U = [ △u; △v ]

and λ̅ = u̅. In order to evaluate (<ref>) fully, we need the average value of u from the relation △F = A̅ △U, which in expanded form is

[ △(u^2/2); △(uv) ] = [ u̅ 0; v̅ u̅ ] [ △(u); △(v) ]

From the first equation, we get

△(u^2/2) = u̅ △(u)

or

(u^2_R/2 - u^2_L/2) = u̅ (u_R - u_L)

If u_L ≠ u_R, then u̅ = (u_L + u_R)/2; the same value holds in the limiting sense when u_L = u_R. The second expression (involving v̅) need not be solved, as the interface flux requires only u̅. It is important to note that even if v = 0, relation (<ref>) still holds.

§.§ Numerical examples

We consider numerical test cases from <cit.> for the modified Burgers' system. The first test case has smooth initial conditions given by

U(x,0) = [ 1/2 + sin(πx); π cos(πx) ]  ∀ x ∈ [0,2]

with a 2-periodic boundary condition. Near time t = 3/(2π), the system develops a normal shock and a δ-shock in the u and v variables, respectively. Theoretically, v = π cos(πx) may vanish at the points x = 1/2, 3/2, but computationally it does not. Results with the FDS-J scheme are given in Figures <ref> and <ref>; results with the Local Lax-Friedrichs (LLF) method, a simple central solver, are given in Figures <ref> and <ref>.

The second test case, with initial conditions (u_L, v_L) = (-2.0, 1.0) and (u_R, v_R) = (4.0, -2.0) with x_o = 1.0, contains a sonic point. Final solutions are obtained at time t = 0.125 units, as in <cit.>. Harten's entropy fix is employed to obtain meaningful solutions, and the results are given in Figures <ref> and <ref>.
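The entropy fix quoted above is a one-line modification of the upwind term; in the notation of the sketch following the pressureless update formula, abs(u_bar) is simply replaced by harten_fix(u_bar). The default ε below is an illustrative choice:

def harten_fix(lam, eps=0.1):
    # Harten's entropy fix: |lam~| = |lam| if |lam| >= eps,
    # and (lam**2/eps + eps)/2 otherwise.
    a = abs(lam)
    return a if a >= eps else 0.5 * (lam * lam / eps + eps)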
§ FURTHER MODIFIED BURGERS' SYSTEM

Shelkovich <cit.> showed the existence of δ^'-shocks in addition to δ-shocks. These shocks occur in the system formed by taking one more derivative of the second equation of the modified Burgers' system, leading to a 3×3 system. Similarly, Joseph <cit.> showed the existence of δ^''-shocks in the solution of a 4×4 system. Consider again both equations of the modified Burgers' system,

u_t + f_x(u) = 0 and v_t + g_x(u) = 0

On differentiating the latter w.r.t. x, we get

w_t + (v^2 + uw)_x = 0

Differentiating once more, we have

z_t + (3vw + uz)_x = 0

In quasi-linear form, the above set of four equations can be written as

U_t + A U_x = 0

where U is a 4×1 column vector and A is the Jacobian matrix given by

A = [ u 0 0 0; v u 0 0; w 2v u 0; z 3w 3v u ]

The eigenvalues of the matrix A are u, u, u, u, and for v≠0, w≠0, z≠0 the matrix A is weakly hyperbolic; indeed it has only one LI eigenvector, e_4. In this case also we find that there is only one Jordan block, of order 4, since rank(A - uI)^4 = 0 = rank(A - uI)^5. This means that for the present system a Jordan chain of order four corresponding to the eigenvalue λ = u will form, i.e.,

AR_1 = λR_1
AR_2 = λR_2 + R_1
AR_3 = λR_3 + R_2
AR_4 = λR_4 + R_3

where R_1 = e_4; using R_1 in the second relation, R_2 comes out as (0, 0, 1/(3v), x_4)^t, with x_4 a real constant. Similarly, using R_2 in the next relation, R_3 comes out as (0, 1/(6v^2), x_4/(3v) - w/(6v^3), y_4)^t, where x_4 is as already defined and y_4 is another real constant. Finally, the last relation gives R_4 = (1/(6v^3), x_4/(6v^2), y_4/(3v) - z/(18v^4) - wx_4/(6v^3), t_4)^t. Let P denote the matrix with column vectors [R_1 | R_2 | R_3 | R_4]; one can check that the determinant of P is 1/(108v^6) ≠ 0.

§.§ Formulation of the FDS scheme for the Further Modified Burgers' System

In this case △F is written as

△F = α̅_1 λ̅R̅_1 + α̅_2 (λ̅R̅_2 + R̅_1) + α̅_3 (λ̅R̅_3 + R̅_2) + α̅_4 (λ̅R̅_4 + R̅_3)

After splitting each eigenvalue into a positive part and a negative part, △F^+ and △F^- can be written as

△F^+ = α̅_1 λ̅^+ R̅_1 + α̅_2 (λ̅^+ R̅_2 + R̅_1) + α̅_3 (λ̅^+ R̅_3 + R̅_2) + α̅_4 (λ̅^+ R̅_4 + R̅_3)

and

△F^- = α̅_1 λ̅^- R̅_1 + α̅_2 (λ̅^- R̅_2 + R̅_1) + α̅_3 (λ̅^- R̅_3 + R̅_2) + α̅_4 (λ̅^- R̅_4 + R̅_3)

⇒ △F^+ - △F^- = |λ̅| △U

In this case △U is defined as

△U = [ △u; △v; △w; △z ]

and λ̅ = u̅. In order to evaluate (<ref>) fully, we need the average value of u; in this case also it turns out to be (u_L + u_R)/2. We take the same test case as considered for the modified Burgers' system, with smooth initial conditions

U(x,0) = [ 1/2 + sin(πx); π cos(πx); -π^2 sin(πx); -π^3 cos(πx) ]  ∀ x ∈ [0,2]

As already explained, at time t = 3/(2π) the system develops a normal shock and a δ-shock in the u and v variables. At the same position where the normal shock forms, the third variable w exhibits a δ^'-shock and the fourth variable z a δ^''-shock. Results for the FDS-J scheme are compared with the simple central solver LLF in Figures <ref> and <ref>.

§ SUMMARY

In this study, we developed a Flux Difference Splitting scheme for genuinely weakly hyperbolic systems to simulate various shocks, including δ-shocks, δ^'-shocks and δ^''-shocks. The newly constructed FDS-J scheme, developed using Jordan canonical forms together with an upwind flux difference splitting method, is capable of recognizing these shocks accurately. For the considered weakly hyperbolic systems, there is no direct contribution of the generalized eigenvectors in the final formulation of the scheme.

Bouchut_Jin_Li F. Bouchut, S. Jin and X. Li (2003). Numerical approximations of pressureless and isothermal gas dynamics.
SIAM Journal on Numerical Analysis, 41(1), 135-158.
Capdeville G. Capdeville (2008). Towards a compact high-order method for non-linear hyperbolic systems, II. The Hermite-HLLC scheme. Journal of Computational Physics, 227(22), 9428-9462.
Chen_Liu G. Q. Chen and H. Liu (2003). Formation of δ-shocks and vacuum states in the vanishing pressure limit of solutions to the Euler equations for isentropic fluids. SIAM Journal on Mathematical Analysis, 34(4), 925-938.
Shelkovich V. G. Danilov and V. M. Shelkovich (2005). Dynamics of propagation and interaction of δ-shock waves in conservation law systems. Journal of Differential Equations, 211(2), 333-381.
Harten_entropy_fix A. Harten (1984). On a class of high resolution total-variation-stable finite-difference schemes. SIAM Journal on Numerical Analysis, 21(1), 1-23.
Joseph K. T. Joseph and M. R. Sahoo (2013). Vanishing viscosity approach to a system of conservation laws admitting δ^'' waves. Communications on Pure & Applied Analysis, 12(5).
Osher S. J. Osher and F. Solomon (1982). Upwind difference schemes for hyperbolic systems of conservation laws. Mathematics of Computation, 38(158), 339-374.
Roe P. L. Roe (1981). Approximate Riemann solvers, parameter vectors, and difference schemes. Journal of Computational Physics, 43(2), 357-372.
LLF V. V. E. Rusanov (1962). The calculation of the interaction of non-stationary shock waves and obstacles. USSR Computational Mathematics and Mathematical Physics, 1(2), 304-320.
Smith_et_al T. A. Smith, D. J. Petty and C. Pantano (2016). A Roe-like numerical method for weakly hyperbolic systems of equations in conservation and non-conservation form. Journal of Computational Physics, 316, 117-138.

| http://arxiv.org/abs/1703.08751v1 | {
"authors": [
"Naveen Kumar Garg",
"Michael Junk",
"S. V. Raghurama Rao",
"M. Sekhar"
],
"categories": [
"math.NA"
],
"primary_category": "math.NA",
"published": "20170326014322",
"title": "An upwind method for genuine weakly hyperbolic systems"
} |
[Corresponding author: [email protected]] [On leave from A. F. Ioffe Physical Technical Institute, 194021 St. Petersburg, Russian Federation.]

^1 School of Mathematics and Physics, Queen's University Belfast, BT7 1NN Belfast, Northern Ireland, UK
^2 Department of Physics, Oakland University, Rochester, Michigan 48309, USA
^3 Department of Physical Sciences, The Open University, MK7 6AA Milton Keynes, England, UK
^4 MBN Research Center, Altenhöferallee 3, 60438 Frankfurt am Main, Germany

The passage of energetic ions through tissue initiates a series of physico-chemical events which lead to biodamage. The study of this scenario using a multiscale approach brought about the theoretical prediction of shock waves initiated by the energy deposited within ion tracks. These waves are explored in this letter from several aspects. The radial dose that sets their initial conditions is calculated using diffusion equations extended to include the effect of energetic δ-electrons. The resulting shock waves are simulated by means of reactive classical molecular dynamics. The simulations predict a characteristic distribution of reactive species which may contribute significantly to biodamage, and also suggest experimental means to detect the shock waves.

The effect of ion-induced shock waves on the transport of reacting species around energetic ion tracks
Andrey V. Solov'yov^4

The basic understanding of the interaction of energetic ions with biomaterials is important both for radiotherapy and for radiation protection from natural or human-made sources (cosmic radiation during manned space travel, Earth's natural radioactivity, or nuclear reactors) <cit.>. In radiotherapy, proton and heavier ion beams have been exploited since the 1990s in the advanced technique referred to as ion-beam cancer therapy (IBCT) <cit.>. Macroscopically, ion beams feature a depth-dose curve in which the maximum of energy loss (the Bragg peak) is reached close to the end of the ions' trajectories, allowing precise energy delivery to the tumor region while sparing surrounding healthy tissues. Microscopic patterns of energy deposition around each ion path feature extremely high radial doses which decrease steeply on a nanometer scale. In the Bragg peak region, secondary electrons, free radicals, and other reactive species are produced in large numbers, leading to much higher (compared to photon radiation) concentrations of DNA lesions and the formation of multiply-damaged sites. These processes increase the probability of cell death or sterilization because of the suppressed capability of enzymes to repair such complex damage. A thorough understanding of radiation damage with ions is needed to develop reliable optimization for IBCT treatment planning.

The relation of the cell survival probability to the physical dose is non-trivial. In common x-ray therapy a single physical parameter, the dose, is involved, but the biological diversity is so staggering that empirical components are essential in all existing models. A number of molecular models have been developed since the 1960s in an effort to mathematically explain the experimental dependence of cell survival probabilities on dose <cit.>. The microdosimetric kinetic model (MKM) <cit.> is one of the most advanced approaches of that kind; it predicts cell survival as a function of dose and linear energy transfer (LET).
The popular local effect model (LEM) <cit.> relates the radial dose to cell survival using its relation to the dose for cells irradiated with x-rays. Another approach is being pursued by the track-structure community, with the idea of including all relevant processes, from ionization/excitation of the medium with ions to nuclear DNA damage, using Monte Carlo (MC) simulations <cit.>.

Yet another alternative is the multiscale approach (MSA) <cit.>, which took a radically different direction. Instead of starting with the observed cell survival probabilities, the pertinent physical, chemical, and biological processes are analyzed across a variety of temporal, spatial and energy scales. One important distinction from the track-structure approach is the prediction of shock waves initiated by each ion propagating in the medium. The strength of these shock waves increases with LET, so they might be a substantial part of the radiation damage scenario around the Bragg peak region. Thus far, these shock waves have not been observed directly, but there is evidence that makes their existence plausible. The detected acoustic waves from the Bragg peak region <cit.> are likely to be artifacts of shock waves. Moreover, a successful comparison of experimental cell survival with the MSA, which included the shock waves in the scenario, has been made for a variety of cell lines, values of LET, oxygen environments and cell repair conditions <cit.>.

The predicted shock waves are a consequence of the localized energy transfer from ions to the medium, due to the propagation of secondary electrons (most of them having very low energies of ∼45 eV in the vicinity of the Bragg peak <cit.> and thus travelling just a few nanometers) and the lack of mechanisms to quickly propagate this energy away from ion paths. Indeed, the diffusion mechanism is too slow, and the production of energetic δ-electrons in the Bragg peak vanishes <cit.>. Thus the development of high pressures inside a narrow cylinder is expected. This happens by the time of ∼10^-13 s after the ion's traversal, when all electrons have been thermalized, and suggests the onset of a cylindrical shock wave propagating radially away from the ion path. Thus, the shock wave propagates on the time scale just between the so-called physical and chemical stages of radiation, typically considered to be fairly well separated in time, both preceding the biological stage <cit.>. While the evolution of the track structure finishes by ∼10^-14 s (end of the physical stage) and reactive chemical species (free radicals and solvated electrons) are created by ∼10^-13–10^-12 s, the chemical stage is deemed to start by ∼10^-12 s after irradiation and last until ∼10^-6 s. However, the emerging waves may have a substantial influence on the chemistry of this scenario, effectively mixing the physical and chemical stages of irradiation.

Classical molecular dynamics (MD) simulations were previously used to investigate the direct mechanical effect of these waves on radiation damage <cit.>. It was shown that covalent bonds can be ruptured by stress from the shock wave if the target is close enough to the ion path and the LET is sufficiently large. In Ref. <cit.> it was also predicted that shock waves may play a significant role in the transport of reactive species such as free radicals, due to the radial collective flows they initiate. Even though this idea has been heavily exploited in Refs. <cit.>, it has never been properly studied.
The analysis of the formation and diffusion of free radicals in Ref. <cit.> suggests that, in the absence of shock waves, most of the radicals do not leave the ion tracks, since they annihilate due to the high rates of chemical reactions, the high concentrations, and the inability of the diffusion mechanism to steer them outside.

This letter reports results of MD simulations of the transport of hydroxyl radicals by a shock wave. After the initial distributions of energy deposition and radicals around the ion path are obtained by solving the diffusion equations for the propagation of electrons, MD is used to simulate the shock waves. We investigate the effect of the radial dose, including δ-electrons, on the strength of the shock waves both in and out of the Bragg peak region. Then the reactive force field implemented in MBN Explorer <cit.> allows us to simulate one of the most representative chemical reactions occurring around the ion path (OH recombination) in the presence of the wave.

In Refs. <cit.> the transport of electrons in the vicinity of the Bragg peak was treated analytically using the diffusion equations, and the radial dose deposition profile at a given time was calculated <cit.>. In Fig. <ref> the time evolution of the radial dose produced by a carbon ion (a) in the Bragg peak region (energy 200-keV/u) and (b) at 2-MeV/u (out of the Bragg peak), calculated with the diffusion equations, is shown by thin lines. The radial dose increases with time until it saturates at ∼50 fs; then all electrons thermalize. It should be noted that there is no mechanism by which the energy deposited so quickly can be dissipated gradually, since processes such as electron-phonon interaction or diffusion take place on much longer time scales <cit.>. A limitation of the diffusion equations is that, treating all first-generation electrons as having the average kinetic energy of 45 eV, they cannot reproduce the large-radii dose tail coming from the contribution of ballistic δ-electrons, which are much less common but more energetic (see MC results for 2-MeV/u carbon ions in water in Fig. <ref>(b)). In Ref. <cit.> δ-electrons were not accounted for explicitly, since their production is suppressed in the Bragg peak region. However, to also simulate the effect of shock waves out of the Bragg peak, in this work we have implemented a recipe to account for δ-electrons based on a spatially-restricted LET formula <cit.>, while the low-energy electrons are still treated by the diffusion equations. The technical implementation is beyond the scope of this letter and will be described in detail in another work <cit.>.

The radial doses at the end of the track structure, including δ-electrons, are represented in Fig. <ref> by thick lines; they are compared to different MC simulations for 2-MeV/u carbon ions <cit.>. The present calculations are in rather good agreement with the MC results, demonstrating the capacity of this approach to correctly predict the radial doses. Our calculations correspond to the end of the track structure, but before the shock wave is formed, which is why they agree with the MC simulations, where shock waves are not present. Interestingly, the radial dose profile also gives an upper estimate for the pressure developed around the ion path <cit.>. The pressure profiles for carbon ions are also shown in Fig. <ref> (the axis is given on the right). Such large pressures are sufficient to initiate a shock wave in the liquid medium. The hydrodynamic equations with the initial conditions corresponding to a “strong explosion” were solved in Ref.
<cit.>. As discussed below, this solution yields useful physical characteristics of the ion-induced shock waves, which serve as a benchmark for the MD simulations <cit.>.

Besides the dose and pressure profiles, the diffusion equations also yield the initial distribution of free radicals and pre-solvated electrons <cit.>. As a first approximation, it can be assumed that each inelastic collision leads to the formation of one OH radical. Under this assumption, the initial distribution of OH follows the profile of the radial dose <cit.>, as shown on one of the right axes in Fig. <ref>. The proton transfer in the dissociation of the ionized water molecule is a fast process, occurring by ∼10^-14 s <cit.>. Therefore, by the time the shock wave develops (∼10^-13 s <cit.>), the radicals are almost at the same location where they were created.

The radial doses shown in Fig. <ref> can be used to set up MD simulations of carbon ion-induced shock waves, both in and out of the Bragg peak region. This is an improvement over previous simulations, where the energy lost by the ion was assumed to be deposited in a cylinder of radius 1 nm (the so-called hot cylinder) <cit.>. Simulations are arranged as described in Ref. <cit.>, i.e., by scaling atomic velocities according to the energy deposited, but in this case using the radial dose distributions shown in Fig. <ref> for different concentric cylindrical shells around the ion path. Thus we can assess the effect of the realistic initial conditions on the shock wave development. All simulations were done using MBN Explorer <cit.>.

The dependencies of the pressure at the wave front on the radius of the wave front, obtained from MD simulations for carbon ions in and out of the Bragg peak region as explained in Ref. <cit.>, are shown in Fig. <ref>. The insets show the evolution of the radii of the wave fronts as a function of time. The results for the step-function-distributed initial pressure (hot cylinder, circles) are compared to those for the initial pressure distributed in accordance with the radial dose (squares). The hydrodynamic calculations (dashed lines) <cit.> agree with the hot cylinder simulations, but the use of the radial dose reduces the intensity of the wave front. This reduction is up to 30 % for a given radius in the case of the Bragg peak region (for which the hot cylinder model has been applied previously <cit.>), mainly due to the dispersion of the initial wave front, but it reaches a striking 65 % (for a given radius) for 2-MeV/u carbon ions, where δ-electrons become important. Out of the Bragg peak region the hydrodynamic model also reproduces the simulation results (solid lines in the figure), but only assuming a reduced “effective” stopping power. The latter corresponds (for both energies) to the amount of energy deposited within the first ∼1–2 nm from the ion path <cit.>. While in the Bragg peak region most of the energy is deposited within this cylinder, this is not the case at higher energies outside the Bragg peak, due to the production of more energetic δ-electrons.

Once the strength of the shock waves is determined, we turn our attention to their effects on the chemical stage in the Bragg peak region, where they are expected to be stronger. As described above, the diffusion equations also give the initial distribution of OH radicals around the ion path. This distribution can be used as the initial condition for MD simulations in which the OH transport and reaction can be included. Converting the OH concentration n_OH(r) shown in Fig.
<ref> into a histogram with a 5 Å bin width, water molecules were randomly selected and deprotonated, leaving neutral OH radicals. As a first approximation, only one of the most relevant reactions was considered, i.e., the OH recombination reaction

OH + OH ⟶ H_2O_2

OH being the most important species in chemical biodamage <cit.>. For simplicity, in this work we disregarded other chemical reactions <cit.>, so pre-solvated electrons were not included in the simulations. To ensure charge neutrality, the protons coming from the water dissociation were removed from the system.

Reaction (<ref>) can be simulated within classical MD by virtue of the new reactive CHARMM force field introduced and implemented in MBN Explorer <cit.>. The CHARMM force field is very effective for simulating biomolecules in a water medium <cit.>, and this extension makes it possible to include chemistry merely by defining a few additional parameters, introducing reactivity almost without increasing the computational cost. The performance of this reactive force field has been demonstrated, e.g., with the description of water dissociation at high temperatures <cit.>. In the present simulations, O-H bonds, being considerably stronger than the O-O bond in peroxide <cit.>, are assumed to be non-reactive, while the only possible reaction considered is the formation and breaking of the O-O bond. The parameters needed for the simulation are the O-H and O-O bond distances and force constants, the O-O bond dissociation energy and cutoff distance, the atomic partial charges in the OH radical and peroxide molecule, and the equilibrium angle and force constant for the O-O-H angle <cit.>. Typical values have been taken from the literature: the O-H distances are similar to those in water <cit.>, so the CHARMM parameters have been kept. The values for the O-O bond and the O-O-H angle have been taken from Ref. <cit.>. Partial charges of ±0.375e have been used for OH <cit.> and ±0.35e for peroxide <cit.>. Inspection of the resulting O-O Morse potential <cit.> gives a cutoff distance of ∼3 Å. The appropriateness of the parameters has been checked by comparing H_2O_2 formation G-values for 500 keV proton impact (without shock wave) to GEANT4-DNA simulations, which implement the well-known diffusion-reaction algorithms <cit.>, finding good agreement at least up to ∼100 ps.

Figure <ref> illustrates the effects of a carbon ion-induced shock wave on the OH chemistry in the Bragg peak region. Panel (a) shows the OH mean square displacements with (solid line) and without (dashed line) a shock wave. In these simulations the OH recombination is not included, so essentially the diffusion of OH radicals without a wave is compared to collective flow transport with the wave. According to the Einstein relation, the slope of the dashed line gives 6D_OH, so the diffusion coefficient is D_OH = 0.166 Å^2/ps. This is in relatively good agreement with the results of recent simulations giving 0.3 Å^2/ps <cit.> and with the value used in the popular simulation packages PARTRAC <cit.> and GEANT4-DNA <cit.>, 0.28 Å^2/ps. In the case of the shock wave the propagation is also linear with time, but the slope is more than 80 times larger. This result clearly demonstrates the ability of the shock wave to propagate free radicals (and other reactive species) more efficiently.
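For reference, the diffusion coefficient quoted above follows from the Einstein relation MSD(t) = 6Dt by fitting the slope of the mean square displacement; a short sketch is given below (Python/NumPy, with a hypothetical MSD time series standing in for the simulation output):

import numpy as np

def diffusion_coefficient(t, msd):
    # Einstein relation in 3D: MSD = 6*D*t, so D is the fitted slope
    # (least squares through the origin) divided by six.
    slope = np.sum(t * msd) / np.sum(t * t)
    return slope / 6.0

t = np.linspace(1.0, 16.0, 16)  # ps (hypothetical sampling times)
msd = 6.0 * 0.166 * t           # A^2, consistent with D_OH = 0.166 A^2/ps
print(diffusion_coefficient(t, msd))  # ~0.166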
Finally, Fig. <ref>(b) shows the results for the number of OH and H_2O_2 molecules in the simulations where the chemistry has been included, again with and without a shock wave. The lines show averages over three independent simulations, with the error bars representing standard deviations. The evolutions of the radicals with and without the shock wave differ drastically. The shock wave prevents OH recombination, both by spreading out the radicals (as discussed above) and by creating harsh conditions in which the formation of the O-O bond is suppressed. Although only short times were simulated here, this tendency may persist over longer times. After 16 ps of transport with a wave, the number of surviving OH radicals is 40% larger than with diffusion alone. Even if this situation may change over longer times, it is worth noting that the experimental G-values for OH after ns–μs times, for the largest measured LET, are somewhat larger than the MC simulation results, which do not include the shock waves <cit.>. However, these experiments are only reported for relatively low values of LET, and experiments in the Bragg peak region would be much more desirable. Such long times are still challenging for MD simulations; however, G-values can be probed down to some hundreds of picoseconds <cit.>. Also, ultrafast measurements using the technique described by Dromey et al. <cit.> could potentially probe the time dependence of the OH signal on the timescales presented here, creating an opportunity to detect a chemical signature of the shock waves and directly prove their existence.

In summary, in this letter we have investigated the strength of the ion-induced shock waves inside and outside of the Bragg peak region, as well as assessed their impact on the chemistry around the ion path, by calculating realistic radial doses by means of the diffusion equations (where the contribution of δ-electrons has been incorporated) and using them as the initial conditions for reactive classical MD. It was shown that the production of more energetic δ-electrons outside of the Bragg peak region substantially weakens the shock waves, this phenomenon being confined to the Bragg peak region. The collective flow of the shock wave in the Bragg peak region propagates the radicals 80 times faster than the diffusion mechanism, which is the only means of transport of radicals in the absence of shock waves. The waves also prevent OH recombination into hydrogen peroxide. This demonstrates how the shock waves can not only produce direct physical effects (such as rupture of DNA molecules by high-pressure stresses) but also modify the chemical stage of radiation damage. This fact, apart from implying a strong overlap of the physical and chemical stages, also suggests an indirect way of experimentally detecting the ion-induced shock waves, thus far only predicted theoretically. The chemical effect has been exemplified here, for simplicity, only by the main radiochemical reaction of OH recombination. However, further development of the reactive force field in MBN Explorer, where other reactions can be included and/or water dissociation by the shock wave accounted for, may reveal further influence of the shock waves on the chemical stage of radiation.

The authors acknowledge financial support from the European Union's FP7-People Program within the Initial Training Network No. 608163 "ARGENT".

Cucinotta2006 F. A. Cucinotta, M. Durante, Lancet Oncol. 7, 431 (2006).
Schardt2010 D. Schardt, T. Elsässer, D. Schulz-Ertner, Rev. Mod. Phys. 82, 383 (2010).
Loeffler2013 J. S. Loeffler, M. Durante, Nature Rev. Clin. Oncol. 10, 411 (2013).
Surdutovich2014 E. Surdutovich, A. V. Solov'yov, Eur.
Phys. J. D 68, 353 (2014).
Solovyov2017 A. V. Solov'yov (ed.), Nanoscale Insights into Ion-Beam Cancer Therapy (Springer Int. Pub. Switzerland, 2017).
Alpen E. L. Alpen, Radiation Biophysics (Academic Press, 1998).
MKM R. Hawkins, Int. J. Radiat. Biol. 69, 739 (1996).
Scholz1996 N. Scholz, G. Kraft, Adv. Space Res. 18, 5 (1996).
StewartMCDS R. Stewart et al., Phys. Med. Biol. 60, 8249 (2015).
Friedland2017 W. Friedland, E. Schmitt, P. Kundrat et al., Sci. Rep. 7, in press (2017).
Solovyov2009 A. V. Solov'yov, E. Surdutovich, E. Scifoni, I. Mishustin, W. Greiner, Phys. Rev. E 79, 011909 (2009).
Baily N. Baily, Med. Phys. 19, 525 (1992).
Verkhovtsev2016 A. V. Verkhovtsev, E. Surdutovich, A. V. Solov'yov, Sci. Rep. 6, 27654 (2016).
deVera2013 P. de Vera, R. Garcia-Molina, I. Abril, A. V. Solov'yov, Phys. Rev. Lett. 110, 148104 (2013).
deVera2013b P. de Vera, I. Abril, R. Garcia-Molina, A. V. Solov'yov, J. Phys.: Conf. Ser. 438, 012015 (2013).
Surdutovich2015 E. Surdutovich, A. V. Solov'yov, Eur. Phys. J. D 69, 193 (2015).
Mozumder2004 A. Mozumder, Y. Hatano, Charged Particle and Photon Interaction with Matter: Chemical, Physicochemical and Biological Consequences with Applications (Marcel Dekker Inc., New York, 2004).
Yakubovich2012 A. V. Yakubovich, E. Surdutovich, A. V. Solov'yov, Nucl. Instrum. Methods B 279, 135 (2012).
Surdutovich2013 E. Surdutovich, A. V. Yakubovich, A. V. Solov'yov, Sci. Rep. 3, 1289 (2013).
deVera2016 P. de Vera, N. J. Mason, F. J. Currell, A. V. Solov'yov, Eur. Phys. J. D 70, 183 (2016).
Solovyov2012 I. A. Solov'yov, A. V. Yakubovich, P. V. Nikolaev, I. Volkovets, A. V. Solov'yov, J. Comp. Chem. 33, 2412 (2012).
Sushko2016 G. B. Sushko, I. A. Solov'yov, A. V. Verkhovtsev, S. N. Volkov, A. V. Solov'yov, Eur. Phys. J. D 70, 12 (2016).
Gerchikov2000 L. G. Gerchikov, A. N. Ipatov, A. V. Solov'yov, W. Greiner, J. Phys. B 33, 4905 (2000).
Xapsos1992 M. A. Xapsos, Rad. Res. 132, 282 (1992).
deVera2017 P. de Vera, E. Surdutovich, N. J. Mason, A. V. Solov'yov, submitted (2017). arXiv:1703.04602 [physics.bio-ph].
Waligorski1986 M. P. R. Waligórski, R. N. Hamm, R. Katz, Int. J. Rad. Appl. Instrum. 11, 309 (1986).
Liamsuwan2013 T. Liamsuwan, H. Nikjoo, Phys. Med. Biol. 58, 673 (2013).
Incerti2014 S. Incerti et al., Nucl. Instr. Meth. B 333, 92 (2014).
Surdutovich2010 E. Surdutovich, A. V. Solov'yov, Phys. Rev. E 82, 051915 (2010).
Douki1998 T. Douki, J. Onuki, M. H. G. Medeiros, E. J. Bechara, J. Cadet, P. Di Mascio, FEBS Lett. 428, 93 (1998).
vonSonntag1987 C. von Sonntag, The Chemical Basis of Radiation Biology (Taylor & Francis, London, 1987).
MacKerel1998 A. D. MacKerell, Jr. et al., J. Phys. Chem. B 102, 3586 (1998).
Ruscic2002 B. Ruscic et al., J. Phys. Chem. A 106, 2727 (2002).
Blanksby2003 S. J. Blanksby, G. Barney Ellison, Acc. Chem. Res. 36, 255 (2003).
Pabis2011 A. Pabis, J. Szala-Bilnik, D. Swiatla-Wojcik, Phys. Chem. Chem. Phys. 13, 9458 (2011).
DeGioia1999 L. De Gioia, P. Fantucci, J. Molec. Struct. 469, 41 (1999).
Karamitros2014 M. Karamitros et al., J. Comput. Phys. 274, 841 (2014).
Kreipl2009 M. S. Kreipl, W. Friedland, H. G. Paretzke, Radiat. Environ. Biophys. 48, 11 (2009).
Karamitros2011 M. Karamitros et al., Progr. Nucl. Sci. Tech. 2, 503 (2011).
Maeyama2011 T. Maeyama et al., Rad. Phys. Chem. 80, 1352 (2011).
Jonah1977 C. D. Jonah, J. R. Miller, J. Phys. Chem. 81, 1974 (1977).
Dromey2016 B. Dromey et al., Nature Comms. 7, 10642 (2016).

| http://arxiv.org/abs/1703.09150v1 | {
"authors": [
"Pablo de Vera",
"Eugene Surdutovich",
"Nigel J. Mason",
"Fred J. Currell",
"Andrey V. Solov'yov"
],
"categories": [
"physics.bio-ph"
],
"primary_category": "physics.bio-ph",
"published": "20170327154453",
"title": "The effect of ion-induced shock waves on the transport of reacting species around energetic ion tracks"
} |
Debris discs are the dusty aftermath of planet formation processes around main-sequence stars. Analysis of these discs is often hampered by the absence of any meaningful constraint on the location and spatial extent of the disc around its host star. Multi-wavelength, resolved imaging ameliorates the degeneracies inherent in the modelling process, making such data indispensable in the interpretation of these systems. The Herschel Space Observatory observed HD 105211 (η Cru, HIP 59072) with its PACS instrument in three far-infrared wavebands (70, 100 and 160 μm). Here we combine these data with ancillary photometry spanning optical to far-infrared wavelengths in order to determine the extent of the circumstellar disc. The spectral energy distribution and multi-wavelength resolved emission of the disc are simultaneously modelled using radiative transfer and imaging codes. Analysis of the Herschel/PACS images reveals the presence of extended structure in all three wavebands. From a radiative transfer model we derive a disc extent of 87.0 ± 2.5 au, with an inclination of 70.7 ± 2.2 degrees to the line of sight and a position angle of 30.1 ± 0.5 degrees. Deconvolution of the Herschel images reveals a potential asymmetry, but this remains uncertain, as a combined radiative transfer and image analysis replicates both the structure and the emission of the disc using a single axisymmetric annulus.

stars: individual: HD 105211 – circumstellar matter – infrared: stars: planetary systems – disc-planet interactions

Accepted to appear in the Monthly Notices of the Royal Astronomical Society on 25th March, 2017

§ INTRODUCTION

In the early 1980s, the InfraRed Astronomical Satellite <cit.> announced the surprising discovery that several young, nearby stars were brighter at infrared wavelengths than was expected <cit.>. It was soon realised that this excess infrared emission was the signature of dusty debris in orbit around those stars, analogous to, but far more massive than, the Solar system's Asteroid Belt <cit.>. In the three decades since, the detection of such infrared excesses has become routine, with many main-sequence stars being found to host significant amounts of circumstellar debris at mid- and far-infrared wavelengths <cit.>.

We now know that such debris discs are a normal result of the stellar and planetary formation process. Observations have shown that most of the youngest, low-mass stars are attended by massive gas- and dust-rich circumstellar discs <cit.>. It is within these discs that the formation of planetary systems like our own occurs, assembling planetesimals and planets <cit.> from dust grains over the first 1 to 100 Myr of the star's life <cit.>. These protoplanetary discs undergo significant evolution over the first 10 Myr of the star's life, which can be traced through the decay of excess emission at near- and mid-infrared wavelengths <cit.>, and a corresponding drop in emission at sub-millimetre wavelengths <cit.>. Around older stars, the tenuous, dusty remnants of these protoplanetary discs remain observable through the presence of excess emission above that of the stellar photosphere at infrared wavelengths <cit.>.
These faint, gas-poor discs are replenished by the mutual collision of unseen planetesimals – hence these systems are called “debris discs” <cit.>. To date, the vast majority of known debris discs remain unresolved. Most of our current knowledge of the evolution of debris discs has been derived from the analysis of multi-wavelength photometry <cit.> and mid-infrared spectroscopy <cit.>. At the most basic level, the observational properties of debris discs are limited to their temperature and brightness (fractional luminosity, L_dust/L_⋆), derived from fitting simple (modified) blackbody models to their SEDs <cit.>. By definition, debris discs have fractional luminosities lower than 10^-2 <cit.>. Drawing parallels to the Solar system, debris discs are most commonly divided into warm and cool discs, analogous to the Solar system's Asteroid Belt <cit.> and Edgeworth-Kuiper Belt <cit.>, respectively. Recent surveys of nearby stars reveal an incidence of cool dust around Sun-like stars (FGK spectral types) of 20 ± 2 per cent <cit.>. Observational constraints place the faintest known debris discs at fractional luminosities (L_dust/L_⋆) of a few ×10^-7 <cit.>, around five to ten times brighter than the predicted brightness of the Edgeworth-Kuiper Belt <cit.>. It is therefore worth noting that, with current technology, we would so far have failed to detect the Solar system's debris components; we might therefore assume that the Solar system is dust free.

The SED of an unresolved debris disc system can be used to broadly infer the structure of the underlying planetary system. Assuming thermal equilibrium between the dust and the incident stellar radiation means that the radial location of dust derived from a disc's blackbody temperature is the minimum separation at which the dust could lie from the star (see the sketch below). Studies of resolved debris discs revealed that the disc extent is often much greater than that predicted from the dust temperature <cit.>. A trend between host star luminosity and dust temperature for A- to G-type stars has been noted <cit.>, with larger discs, relative to the blackbody radius, observed around later-type stars. Modelling of a sample of 34 extended debris discs parameterised this relationship, allowing more plausible estimates of the true extent of unresolved discs to be made <cit.>.

For many debris discs, modelling their excesses using a single component often fails to accurately reproduce the observed SEDs. For this reason, a growing number of studies have modelled systems using two (or more) components. Such systems are often inferred to have multiple dust-producing planetesimal belts; however, the migration of dust and planetesimals within a system could also explain such observations <cit.>. A more detailed understanding of the architectures of these systems is limited by the fundamental degeneracies between the assumed disc radial location, minimum dust grain size and dust composition <cit.>. Such knowledge is particularly important in placing circumstellar debris in context as a component of the planetary system, alongside planets, around the host star <cit.>.
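For reference, the blackbody radius implied by the thermal-equilibrium argument above follows from the standard relation T_bb = 278.3 K (L_⋆/L_⊙)^1/4 (R/au)^-1/2, which inverts to give the minimum dust separation. A short Python sketch follows (the input temperature and luminosity are illustrative, not fitted values from this work):

def blackbody_radius_au(T_dust, L_star):
    # Minimum (blackbody) dust radius in au for dust at T_dust [K]
    # around a star of luminosity L_star [L_sun], assuming thermal
    # equilibrium: T_bb = 278.3 * L_star**0.25 * R**-0.5.
    return (278.3 / T_dust) ** 2 * L_star ** 0.5

print(blackbody_radius_au(50.0, 7.0))  # ~82 au for these illustrative inputs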
The Herschel Space Observatory <cit.>, with its large, 3.5-m primary mirror, offered an unprecedented opportunity to spatially resolve circumstellar emission at far-infrared and sub-millimetre wavelengths. The Herschel Photodetector Array Camera and Spectrometer <cit.> had imaging capabilities in three wavebands centred on 70, 100 and 160 μm. The analysis of resolved images of debris discs in the far-infrared, made possible by the high angular resolution of PACS (5.8 arcsec FWHM at 70 μm), in conjunction with more densely sampled source SEDs, greatly refined models of nearby debris disc systems. As a result, Herschel/PACS led to the detection of many new debris disc systems. In addition, thanks to its excellent spatial resolution, Herschel was able to resolve approximately half of the discs imaged by PACS <cit.>. These results made possible, for the first time, the quantification of the relationship between the presence of debris and the presence of planets <cit.>. They also allowed the architectures of a number of planetary systems to be more precisely determined, as a direct result of the measurement of the orientation of the debris in those systems <cit.>.

HD 105211 (η Cru, HIP 59072) was identified as a debris disc host star through observations at 70 μm by the Spitzer Space Telescope's <cit.> Multiband Imaging Photometer for Spitzer instrument <cit.>. The detection image revealed a bright disc with extended emission <cit.>. Here we analyse the higher angular resolution Herschel/PACS images of this target in combination with available data from the literature, seeking to better determine the architecture of this nearby, bright debris disc.

In Section <ref> we describe the observations obtained and the associated analysis for HD 105211. In Section <ref>, the results of the analysis of the SED, disc images and radial profiles at PACS 70/100/160 μm are presented, along with measurements of the disc observational properties: temperature T_dust, radial extent R_dust (both from the blackbody assumption and resolved/deconvolved images), and fractional luminosity L_dust/L_⋆. In Section <ref>, we discuss the state of the disc in comparison with other studied systems and the potential asymmetry present. In Section <ref>, we summarise our conclusions and discuss our plans for future work in studying this system.

§ OBSERVATIONS AND ANALYSIS

In this section, we present the observational data for the HD 105211 system. This includes the characterisation of the host star with the fitting of a stellar photosphere model, a summary of the ancillary photometry compiled for the SED, and a description of the Herschel observations, their reduction and analysis. The compiled SED for HD 105211, a scaled photospheric model, and a blackbody fit to the dust excess are presented in Figure <ref>.

§.§ Stellar parameters

HD 105211 (η Crucis, HIP 59072) is a nearby <cit.>, main-sequence star. Currently, HD 105211 is classified as both a spectroscopic binary <cit.> and a main-sequence star with a spectral type of F2 V <cit.>; it was originally classified as a yellow-white giant <cit.>. HD 105211 has an estimated age between 1.30 and 3.99 Gyr, with an effective temperature of 6950 K <cit.> and a sub-solar metallicity [Fe/H] of -0.37. A summary of the stellar physical properties is given in Table <ref>. The stellar photospheric contribution to the SED was modelled using an appropriate model taken from the Castelli-Kurucz atlas[Castelli-Kurucz models can be obtained from: http://www.stsci.edu/hst/observatory/crds/castelli_kurucz_atlas.html] <cit.>. The chosen photospheric model was scaled to the observations at optical and near-infrared wavelengths between 0.5 and 10 μm, weighted by their uncertainties, using a least-squares fit.
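The scale factor in such a fit has a closed form: minimising Σ_i ((F_i - s M_i)/σ_i)^2 over s gives s = Σ_i (F_i M_i/σ_i^2) / Σ_i (M_i^2/σ_i^2). A Python sketch with hypothetical photometry follows:

import numpy as np

def photosphere_scale(F_obs, F_model, sigma):
    # Weighted least-squares scale factor s minimising
    # sum(((F_obs - s*F_model)/sigma)**2).
    w = 1.0 / sigma ** 2
    return np.sum(w * F_obs * F_model) / np.sum(w * F_model ** 2)

# Hypothetical 0.5-10 micron fluxes (Jy), uncertainties and model values.
F_obs = np.array([28.1, 19.7, 12.4, 6.3])
sigma = np.array([0.6, 0.4, 0.3, 0.2])
F_model = np.array([29.0, 20.5, 12.8, 6.5])
print(photosphere_scale(F_obs, F_model, sigma))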
§.§ Ancillary data

For the purposes of modelling HD 105211's SED, the Herschel PACS photometry were supplemented with a broad range of observations from the literature, spanning optical to far-infrared wavelengths. A summary of the compiled photometry is given in Table <ref>. The optical Johnson BV photometry were taken from the Hipparcos & Tycho Catalogues <cit.>, whilst the near-infrared RI and JHK photometry were taken from the catalogues of Morel & Magnenat <cit.> and Epchtein <cit.>, respectively. These measurements were supplemented by Strömgren ubvy photometry from <cit.>. Mid-infrared photometry included in the SED were taken from the Midcourse Space eXperiment <cit.>, the AKARI IRC all-sky survey at 9 and 18 μm <cit.>, the WISE survey at 3.4, 12 and 22 μm <cit.>, and a Spitzer MIPS measurement at 24 μm <cit.>. Colour corrections were applied to the AKARI IRC measurements assuming a blackbody temperature of 7000 K (factors of 1.184 at 9 μm and 0.990 at 18 μm), and also to the WISE data assuming a Rayleigh-Jeans slope (factors of 1.0088 at 12 μm and 1.0013 at 22 μm). A Spitzer InfraRed Spectrograph <cit.> low-resolution spectrum spanning ∼7.5 to 38 μm (PID 2324, PI Beichman, AOR 16605440) was taken from the CASSIS[The Cornell Atlas of Spitzer/IRS Sources (CASSIS) is a product of the Infrared Science Center at Cornell University, supported by NASA and JPL.] archive <cit.>. The IRS spectrum was scaled to match the model photosphere at wavelengths < 10 μm, where no excess emission from the disc was expected, by a least-squares fitting process. The rescaling factor for the IRS spectrum was 0.94, in line with other works <cit.>. For inclusion in the SED modelling process, the IRS spectrum was binned with a weighted mean (and associated uncertainty) at 27, 30, 33 and 36 μm in order to trace the rise of the excess emission from the dust above the photosphere at mid-infrared wavelengths (see Section <ref> for details).

§.§ Herschel data

At far-infrared wavelengths, the previous Spitzer/MIPS 70 μm measurement <cit.> was supplemented by the new Herschel/PACS observations. HD 105211 was observed as part of the Herschel Open Time 1 programme ot1_sdodson_1 (PI: S. Dodson-Robinson; `A study into the conditions for giant planet formation'), the results of which were presented in <cit.>. The log of Herschel observations is given in Table <ref>. PACS scan map observations of HD 105211 were taken in both the 70/160 and 100/160 μm channel combinations. A summary of the PACS observations is presented in Table <ref>. Each scan map was conducted at a slew rate of 20″ per second (medium scan speed) and is composed of eight scan legs of 3′ length, each separated by 4″. The target was observed twice, at instrument orientation angles of 70° and 110°, for each channel combination. This resulted in a total of two scan maps each at 70 and 100 μm and a total of four scan maps at 160 μm. Reduction of the Herschel/PACS data was carried out in the Herschel Interactive Processing Environment[hipe is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortia. http://www.cosmos.esa.int/web/herschel/hipe-download] <cit.>, user release 13.0.0, with PACS calibration version 69 (the latest available public release at the time), using the standard reduction script starting from the level 0 products. All available images (i.e. two each at 70 and 100 μm, and four at 160 μm) were combined to produce a final mosaic for each waveband.
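As an illustration of the colour-correction step described above, a minimal sketch follows (ours; the band labels and flux values are hypothetical). Note that whether a tabulated factor multiplies or divides the catalogue flux depends on the convention of the relevant release notes, so the direction used here is an assumption.

# Correction factors quoted in the text; fluxes (Jy) are invented examples.
corrections = {"AKARI_9": 1.184, "AKARI_18": 0.990, "WISE_12": 1.0088, "WISE_22": 1.0013}
fluxes = {"AKARI_9": 1.45, "AKARI_18": 0.63, "WISE_12": 1.02, "WISE_22": 0.37}
# Assumed convention: divide the quoted flux by the tabulated factor.
corrected = {band: fluxes[band] / corrections[band] for band in fluxes}
print(corrected)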
The image scales of the mosaicked images were 1.2″ per pixel for the 70 and 100 μm images, and 2.4″ per pixel for the 160 μm image. To remove large-scale background emission, the images were high-pass filtered with widths of 15 (70/100 μm) and 25 (160 μm) frames, corresponding to spatial scales of 62″ and 102″, respectively. Fluxes were measured using an aperture photometry task implemented in Matlab. In the 70 and 100 μm images we adopted aperture radii of 24″, in order to encompass the whole disc within the measurement aperture. At 160 μm the disc is contaminated by the presence of bright, nearby background structure to the west. We therefore used a smaller aperture radius of 17″ at 160 μm, despite the larger beam size (11.8″ FWHM at 160 μm compared to 5.8″ FWHM at 70 μm), to reduce the contribution of this background to the measurement. These values were scaled by appropriate aperture correction factors of 0.877, 0.867 and 0.758 for the 70, 100 and 160 μm photometry, respectively <cit.>. We note that whilst applying aperture corrections derived for point sources may be contentious when dealing with extended sources, this method has been widely used in the literature <cit.>. The Herschel flux measurements in each waveband were also colour corrected, treating the stellar and dust contributions separately. Appropriate factors were taken from the PACS Photometer colour correction release note[http://herschel.esac.esa.int/twiki/pub/Public/PacsCalibrationWeb/cc_report_v1.pdf], corresponding to temperatures of 6950 K for the star (interpolating between tabulated values) and 50 K for the dust. At 70, 100 and 160 μm we used values of 1.016 (0.982), 1.033 (0.985) and 1.074 (1.010) for the stellar (dust) contributions, respectively. The final fluxes used in the modelling were further corrected for losses (1 to 2 per cent) induced by the high-pass filtering process, as detailed in <cit.>. The PACS fluxes and associated uncertainties are shown in Table <ref>. The sky background in each mosaic was estimated from the median values of ten randomly placed 10×10 pixel sub-regions of each mosaic. These regions were masked to avoid HD 105211, any bright background objects (e.g. CL Cru; see Section <ref>) and the borders of the mosaics (where the noise increases due to lower coverage). The median value was scaled to the relevant aperture size corresponding to the target radius to yield the sky background contribution, which was then subtracted from the measured flux (before the corrections detailed above). The sky noise was estimated from the standard deviation of the ten sub-regions.

§ RESULTS

Here we present the results of our analysis of HD 105211's disc. We examine the PACS images for evidence of extended emission, before applying a deconvolution routine to better determine the disc extent and geometry. The new photometric points are combined with the ancillary data and the disc architecture measured from the images to model the disc, fitting the excess emission with both modified blackbody and power-law disc models, from which we deduce the dust grain properties.

§.§ Images

HD 105211 was resolved by the Herschel/PACS instrument <cit.>, showing the extent of the disc at 70, 100 and 160 μm (see top row of Figure <ref>).
The 70 and 100 μm maps originally had a scale of 1.2″ per pixel, whilst the 160 μm map had 2.4″ per pixel. All maps were rescaled to 1″ per pixel for subsequent analysis.

§.§.§ Stellar position

The optical position of HD 105211 at the epoch of the Herschel observations is 12h06m52.81s -64°36′49.91″, utilising the proper motions from the re-reduction of Hipparcos data <cit.>. Two methods were used to estimate the star's position in the maps: finding the position of the peak pixel brightness in all maps, and calculating the centre of a fitted 2D Gaussian profile, allowing for rotation and ellipticity, in the 70 and 100 μm maps. Fitting a 2D Gaussian to the 160 μm PACS map was not feasible due to the background contamination surrounding HD 105211. The peak pixel and the centre of the disc profile are both within ∼2.5″ of the optical position, which is just outside the Herschel absolute pointing accuracy of 2″ at the 1-σ level <cit.>. In the deconvolution presented here, we have assumed that the star's position in the 70 and 100 μm maps is the corresponding centre of the 2D Gaussian disc profile fit.

§.§.§ Radial profiles and deconvolution

In the original maps, the extents of the semi-major and semi-minor axes were derived from the FWHM of model 2D Gaussian profiles along the major and minor axes. In the 70 and 100 μm maps the extents were found to be similar, with semi-major and semi-minor values of 160.4 ± 1.7 au and 91.0 ± 0.8 au, respectively. However, the 160 μm map yielded larger estimates of the disc extent along both axes, of 192 ± 32 au and 141 ± 14 au. The disc inclination angle was determined from the inverse cosine of the ratio of the semi-minor axis to the semi-major axis. The inclination was consistent between the 70 and 100 μm maps at 55.4 ± 5.2°. The inclination angle for the 160 μm map, 42.7 ± 8.3°, is just within the uncertainty limits when compared with the values derived from the shorter wavelength maps. This could be attributed to both the rising level of background contamination to the west of the source and the larger PACS beam FWHM of 11.8″ at 160 μm (cf. 5.8″ at 70 μm), comparable to the disc extent, such that the disc is poorly resolved in the 160 μm image. A reference point spread function (PSF) was used to estimate the stellar contribution to the images, and to deconvolve the original maps. Here, we have adopted a similar technique to that used in <cit.> and <cit.>. Observations of the chosen reference PSF, α Boötis (HD 124897, Arcturus), were reduced in the same way as the HD 105211 images and rotated to the same telescope pointing angle. The stellar photosphere contribution was modelled separately by scaling the PSF to the photosphere model flux at 70, 100 and 160 μm, respectively. The stellar contribution was centred on the previously determined stellar position in each image and subtracted from the three original maps. After stellar subtraction, the Lucy-Richardson method <cit.> was used to deconvolve the original maps with the corresponding instrument PSFs (see bottom row of Figure <ref>). After deconvolution, the semi-major and semi-minor axes of the disc were determined by fitting an ellipse to the region of the map, centred on HD 105211, which exceeded a 3-σ threshold. These measured values are larger than those from the original PACS images owing to the difference between the 2D Gaussian and ellipse fits. The semi-major and semi-minor axes were used to estimate the inclination angle, as before. The inclination angles measured in the deconvolved images are consistent, with values of 70.1 ± 2.3° at 70 μm and 70.7 ± 2.2° at 100 μm.
The increase in the inclination angle relative to the original PACS maps can be attributed to the deconvolution resolving the axes of the disc more precisely. However, the inclination angle at 160 μm, 60.1 ± 7.7°, is not consistent, owing to the poorer resolution of the original image. Radial profiles are shown in Figure <ref> and a summary of the associated measurements is given in Table <ref>.

§.§.§ Asymmetry

The deconvolved images at 70 and 100 μm reveal distinct peaks along the NE and SW arms. The disc extent along the major axis of the 70 and 100 μm deconvolutions was measured peak-to-peak. A peak-to-peak extent could not be measured in the 160 μm deconvolution, as there is a peak only along the NE arm. The NE and SW extents were measured from the stellar position to the peak emission of the disc. The NW and SE extents are half the FWHM of the deconvolved disc profile from the stellar position. Examining the 70 and 100 μm deconvolutions, the NE clump is larger and lies closer to the stellar position (86.0 ± 5.7 au) than its SW counterpart (116.0 ± 8.2 au), while the two are comparable in brightness, clearly showing an asymmetry in the disc. The disc at 160 μm has no peak emission on either side of the disc centre but still shows asymmetry, with a smoothly decreasing brightness profile from the NE arm to the SW arm. A summary of the disc profile measurements is given in Table <ref>.

§.§ Spectral energy distribution

§.§.§ Modified blackbody

The SED reveals no significant excess emission at mid-infrared wavelengths ≤ 20 μm, as shown in Fig. <ref>. We therefore determined the disc observational properties of temperature, (blackbody) radial extent and fractional luminosity (T_ dust, R_ dust, L_ dust/L_⋆) by fitting a single-component, modified blackbody model to the photometry at wavelengths where a significant (> 3-σ) excess was measured. Due to the absence of any photometry at wavelengths beyond 160 μm, the break wavelength (λ_ 0) and sub-millimetre slope (β) of the modified blackbody model are unconstrained; for the purposes of plotting the best-fit model we assume a break wavelength of 210 μm and a β of 1, as typical values <cit.>. A least-squares fit to the photometry, weighted by the uncertainties, produces a fractional luminosity of L_ dust/L_⋆ = (7.4 ± 0.4) ×10^-5, in line with previous estimates of the disc brightness <cit.>, and a best-fit temperature of T_ dust = 49.7 ± 0.6 K. The temperature is equivalent to a blackbody radius of R_ dust = 80.2 ± 2.4 au. A comparison with the resolved extent from the images (80 or 120 au) gives a ratio of the actual to blackbody radius of Γ = 1 to 1.5. This is surprisingly low for a star of 6.6 L_⊙, as we would expect a value of Γ in the region of 2.65^+0.57_-0.24 following the models of <cit.>. The radiation pressure blow-out size for dust grains <cit.> around HD 105211 is 3.5 μm. The small value of Γ leads us to infer that the disc emission is dominated by large, cool dust grains and that any population of smaller grains, which would be warmer at a given stellocentric distance <cit.>, is thus small. The HD 105211 system has not been imaged in scattered light; however, aperture polarisation observations in the optical (SDSS g^' and r^' measurements) have been made and the analysis of these data is ongoing (D. Cotton, priv.
comm.). Compact residual emission centred on the star is visible in the 70 μm disc-subtracted image (see top right panel of Figure <ref>). This emission is weak, constituting only a 2-σ detection at 70 μm. However, it could represent the marginal detection of a warm inner belt in the HD 105211 system. We therefore estimated the properties of this inner belt based on the available mid-infrared photometry between 24 and 37 μm, along with the 70 μm measurement. Using an error-weighted least squares fit to these measurements, we obtain a temperature of 215 ± 43 K (equivalent to a radial distance of 4.3 ± 1.9 au, assuming the same properties as the cold disc) with a fractional luminosity of approximately 5 × 10^-6.

§.§.§ Radiative transfer

Combining the SED and resolved imaging, we now apply a more physically motivated model to the available data. Here we simultaneously fit the extended emission and the SED with a radiative transfer model using a power-law size distribution of dust grains, assuming a dust composition of astronomical silicate <cit.>. The debris disc is assumed to lie at some distance from the star in an annulus between R_ in and R_ in + Δ R, where R_ in was constrained to lie between 50 and 150 au, and Δ R was a free parameter. The exponent α of the surface density of the disc was allowed to vary between 2.0 and -5.0. As noted in the previous section, there was some evidence for an asymmetry in the disc from the deconvolved images; here we assumed an axisymmetric model in the first instance. The constituent grains are represented by a power-law size distribution with exponent q between a_ min, a free parameter, and a_ max, which is fixed at 1 mm. Due to the lack of sub-millimetre photometry, we assumed that the value of q lies in the range 3 to 4, typical of the theoretically expected <cit.> and observed <cit.> values for debris disc systems. The brightness profiles of the disc along its major and minor axes in all three PACS images were used to represent the resolved emission in the fitting process. The profiles were measured as described in Section <ref>. For the weighting of the fit, each image was given a weight equal to that of the SED in determining the best fit (i.e. the SED and the three images each contribute 25 per cent of the best fit). We found a simultaneous best-fit model to the images and SED with a reduced χ^2 of 1.61 (six free parameters). The inner edge of the disc lies at R_ in = 87.0^+2.5_-2.3 au, close to the blackbody radius inferred for the disc. The disc was found to be broad, Δ R = 100^+10_-20 au, with an outwardly decreasing radial surface density exponent α = -1.00^+0.22_-0.24. The minimum dust grain size was calculated to be 5.16^+0.36_-0.35 μm, mostly dictated by the fit to the rising Spitzer IRS spectrum; this is perhaps surprising given how close the disc inner edge lies to the blackbody radius. The exponent of the size distribution, q = 3.90^+0.10_-0.14, lies within the assumed range, and is consistent with theoretical expectations assuming a velocity-dependent dispersion of collision fragments <cit.>. A dust mass of 2.45^+0.05_-0.13×10^-2 M_⊕ is inferred from the model, but the validity of this model-derived value is highly uncertain in the absence of sub-millimetre photometry. The results of the modelling are summarised in Table <ref>. In Figure <ref>, we present the residual emission after subtraction of the disc model from the observations. Overall the model replicates the observations well, with no significant residual flux at the disc position in any of the three images.
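The blackbody radius quoted in the modified blackbody fit above follows from the standard equilibrium relation R_bb ≈ (278.3 K / T_ dust)^2 (L_⋆/L_⊙)^1/2 au. A short sketch (our own consistency check, not code from the paper) reproduces the numbers in this section:

def blackbody_radius_au(t_dust_k, l_star_lsun):
    # Equilibrium blackbody radius (au) for dust at temperature
    # t_dust_k (K) around a star of luminosity l_star_lsun (L_sun).
    return (278.3 / t_dust_k) ** 2 * l_star_lsun ** 0.5

r_bb = blackbody_radius_au(49.7, 6.6)  # ~80 au, matching R_dust above
gamma = 87.0 / r_bb                    # ~1.1, the ratio quoted in the Discussion
print(round(r_bb, 1), round(gamma, 2))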
This is interesting given the potential asymmetry inferred from the deconvolved images. In this instance, the evidence favours an axisymmetric disc capable of replicating both the structure and the thermal emission of the disc, such that we cannot confirm the presence of asymmetry in the disc with the data to hand. However, at 70 μm, a 2-σ peak is present at the stellar position. The model brightness profile has a deficit of flux compared to the observations close to the stellar position (see top left panel of Figure <ref>); this could be taken to infer the presence of an additional, unresolved warm component of the disc. A two-component model was previously used to fit the Spitzer IRS spectrum in <cit.>, but the SED shows no significant remaining excess emission that would require the presence of an additional disc component (see Figure <ref>), so we opted not to include one in our analysis. Radial drift of material from the outer belt toward the star could account for the presence of this faint emission.

§.§ CL Cru

The Mira variable star CL Cru (IRAS 12043-6417) is clearly visible in the 70 and 100 μm PACS mosaics, lying in the northwest corner, but is barely detected in the 160 μm mosaic. Note that this object is not visible in Figure <ref>, as it lies outside the cropped region presented there. Its flux measurements are included here as they may be of interest to those involved in research on evolved stars. Circular apertures of 4″, 5″ and 8″ radius were used to measure the fluxes at 70, 100 and 160 μm, respectively. The background contribution and uncertainty were estimated from an annulus spanning 15″ to 25″ from the source. CL Cru shows no evidence of extended emission, based on a 2D Gaussian fit to the source, and we therefore extracted the fluxes using aperture radii chosen for optimal signal-to-noise. The measurements were aperture corrected, with correction factors of 0.487, 0.521 and 0.527 <cit.> for the 70, 100 and 160 μm measurements, respectively. At 70 and 100 μm the noise is dominated by the calibration uncertainty, whereas at 160 μm the sky noise is the dominant contribution. The aperture corrected (but not colour corrected) flux density measurements are given in Table <ref>.

§ DISCUSSION

Here we place the results of our analysis into context, and examine the origins and implications of a potential asymmetry of the disc around HD 105211.

§.§ State of the disc

The radiative transfer model determined that the minimum grain size for the disc (assuming a composition of pure astronomical silicate) was 5.16^+0.36_-0.35 μm. This is only marginally larger than the blow-out size for grains in the disc. This is unexpected: the disc extent determined from the resolved images and SED fitting, 87.0^+2.5_-2.3 au, is close to the blackbody radius of 80 au, so we would perhaps expect a larger minimum grain size to account for such a low ratio of measured to blackbody radius. Additionally, the ratio of the observed radius to the blackbody radius of the disc (Γ = 1.1) is smaller than the expected ratio of ∼ 2.5 when correlated with stellar luminosity <cit.>. HD 105211's debris disc has been previously noted as being bright in thermal emission for its age <cit.> when compared to the expected trends for Sun-like stars <cit.>. We therefore have a sketch of the disc around HD 105211 as being bright (albeit not anomalously so), with large dust grains similar in size to those found in HD 207129 <cit.>.
The inner edge of the disc is close to the minimum (blackbody) extent, with a broad disc component of about 100 au. The inferred disc extent (down to the blackbody radius) requires a minimum grain size above the blow-out size for the star's luminosity. The large dust grains will likely be broken down into smaller, warmer fragments that could persist around HD 105211. These fragments would then migrate inward through radiation transport processes, which could account for the marginal excess seen in the 70 μm image.

§.§ Potential asymmetry

The deconvolved images revealed an asymmetry in the separation of the two ansae from the stellar position, but the structure and emission of the disc around HD 105211 can be effectively replicated with a single, axisymmetric annulus. We do not therefore propose that the disc is asymmetric, but here we investigate the possible origins of such an asymmetry, and discuss future directions to refine our understanding of the disc architecture. Asymmetries are often cited as the result of perturbation by an unseen planetary companion influencing the dynamics of the dust-producing planetesimals <cit.>. The disc asymmetry is observed to be brighter at longer wavelengths on the side closer to the star, which suggests the dust there is colder, perhaps due to trapping of larger grains by a perturbing body. Additional observations at higher angular resolution in the far-infrared, to fill in the SED between the Spitzer and Herschel photometry, would be invaluable in pinning down the location of the star relative to the disc, and in determining whether a second warm component is present in the inner regions of the disc. There are other possible scenarios to explain the presence of an asymmetry. Given HD 105211's location close to the galactic plane (b ∼ -2°), background contamination along the line of sight could offer an explanation. Bright, extended emission is present across the images, particularly at 160 μm, but there is little at either 70 or 100 μm, where the asymmetry is most obvious. Additionally, a background point source would need to have a flux of ∼ 50 mJy, lying at a separation of ≤ 9″ (along the semi-major axis) from the star, to replicate the observed asymmetry. Considering extragalactic contamination <cit.>, the binomial probability of such an occurrence is low (P = 0.009), which should give us confidence that the asymmetry is real. However, HD 105211 lies close to the galactic plane and such estimates, based on galaxy counts, should be considered a lower limit in such a scenario. Alternatively, interaction between the disc and the interstellar medium might also induce an asymmetry, as has been seen for a number of systems resolved in scattered light <cit.>. Further observations of HD 105211 at sub-millimetre wavelengths would enable the grain size distribution and dust mass to be precisely determined, and these derived values, used in combination, would confirm the disc's asymmetry. This would also clarify whether the asymmetry is exclusive to smaller grains that are susceptible to radiation pressure and stellar winds, or whether it persists for larger grains that more accurately trace the parent planetesimal belts of the disc.

§.§ Comparison with Dodson-Robinson et al.

Our analysis of HD 105211 is generally consistent with the results of <cit.>. However, we differ in several particulars. We note that our single-component model, while broad, matches the detected excesses measured from the Spitzer IRS through to the Herschel wavebands, as shown in Figure <ref>.
Most importantly, we find no evidence for significant warm excess emission from the system at 24 μm. This is likely due to the different photometry used to scale the stellar photosphere models in each work: <cit.> used 2MASS JHK_s photometry (which is of poor quality) and AllWISE data (which suffered saturation not present in the all-sky survey). The lack of significant warm excess in our analysis led us to adopt a single-component disc model, with the result that we require a broad disc to replicate both the SED and the radial profiles. <cit.>, with the freedom of an additional dust component, found the outer disc to be more narrowly confined at a larger radial distance (175 ± 20 au). The validity of both models can be tested by observations at high spatial resolution with ALMA (which would resolve the width of the outer cool component) and JWST (searching for a distinct warm component). <cit.> also determined a smaller flux density for HD 105211 at 160 μm (440 ± 146 mJy) than in this work. There is significant contamination from bright extended structure within the image at 160 μm, so the determination of the correct flux is fraught with difficulty. We note that our values are consistent within the uncertainties, and that obtaining longer wavelength observations, either from SOFIA/HAWC+ at 210 μm or from ALMA in the sub-millimetre, would better determine the shape of the SED and confirm whether or not the disc emission deviates from a blackbody at 160 μm.

§ CONCLUSION

HD 105211 is a nearby star (d = 19.76 ± 0.05 pc), slightly more massive and more luminous than the Sun. It was reported to exhibit significant infrared excess by Spitzer. In this work, we have presented new Herschel/PACS far-infrared observations of the debris disc around HD 105211 at 70, 100 and 160 μm. These images revealed extended emission from the disc along both its semi-major and semi-minor axes in all three wavebands, from which we determine the disc to have an inclination of 71 ± 2° and a position angle of 30 ± 1°. Of particular interest is the presence of an intriguing asymmetry between the two sides of the disc after deconvolution, interpreted as ansae lying at 86.0 ± 5.7 au and 116.0 ± 8.2 au, from which we infer a disc eccentricity of 0.20 ± 0.03. However, the asymmetry could not be considered significant when we modelled the structure and thermal emission of the disc: we found that the disc is broad, 100 ± 20 au wide, starting at an inner edge of 87 ± 2.5 au. The star's location close to the galactic plane makes contamination a possibility, requiring further observations at higher angular resolution and longer wavelengths to better determine the architecture of this disc. Combining the disc SED with the multi-wavelength resolved images, we fitted a power-law disc model and derived a minimum grain size of 5.16^+0.36_-0.35 μm, assuming the disc lies in a single annulus around the star. The lack of any constraint on the sub-millimetre emission from the disc leaves the grain size distribution unconstrained in our model, again pointing the way to further observations to determine the dust properties of this system.

§ ACKNOWLEDGEMENTS

The authors wish to thank the anonymous referee for their feedback, which improved the clarity of this paper. They would also like to thank S. Dodson-Robinson for the use of her Herschel observing programme that included HD 105211.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. This research has made use of NASA's Astrophysics Data System. JPM is supported by a UNSW Vice-Chancellor's Postdoctoral Fellowship.
"authors": [
"S. Hengst",
"J. P. Marshall",
"J. Horner",
"S. C. Marsden"
],
"categories": [
"astro-ph.EP",
"astro-ph.SR"
],
"primary_category": "astro-ph.EP",
"published": "20170327234411",
"title": "A Herschel resolved debris disc around HD 105211"
} |
The greedy walk is a walk on a point process that always moves from its current position to the nearest not yet visited point. We consider here various point processes on two lines. We look first at the greedy walk on two independent one-dimensional Poisson processes placed on two intersecting lines and prove that the greedy walk almost surely does not visit all points. When a point process is defined on two parallel lines, the result depends on the definition of the process: If each line has a copy of the same realisation of a homogeneous Poisson point process, then the walk almost surely does not visit all points of the process. However, if each point of this process is removed with probability p from either of the two lines, independently of the other points, then the walk almost surely visits all points. Moreover, the greedy walk on two parallel lines, where each line has a copy of the same realisation of a homogeneous Poisson point process, but one copy is shifted by some small s, almost surely visits all points.

Keywords: Poisson point process; greedy walk. MSC 2010: 60K37; 60G55, 60K25.

Greedy walks on two lines
Katja Gabrysch
December 30, 2023
=========================

§ INTRODUCTION

Consider a simple point process Π in a metric space (E,d). We think of Π as a collection of points (the support of the measure) and we use the notation |Π∩ B| to indicate the number of points on the Borel set B ⊂ E. If x ∈ E, the notation x ∈Π is used instead of Π∩{x}≠∅. We define a greedy walk on Π as follows. The walk starts from some point S_0 ∈ E and always moves on the points of Π by picking the point closest to its current position that has not been visited before. Thus a sequence (S_n)_n≥ 0 is defined recursively as the point attaining the minimum

S_n+1 = arg min{d(X, S_n) : X ∈Π, X∉{S_0, S_1, …, S_n}}.

The greedy walk is a model in queuing systems where the points of the process represent positions of customers and the walk represents a server moving towards customers. Applications of such a system can be found, for example, in telecommunications and computer networks or transportation. As described in <cit.>, the model of a greedy walk on a point process can be defined in various ways and on different spaces. For example, Coffman and Gilbert <cit.> and Leskelä and Unger <cit.> study a dynamic version of the greedy walk on a circle with new customers arriving to the system according to a Poisson process. The greedy walk defined as in (<ref>) on a homogeneous Poisson process on ℝ almost surely does not visit all points. More precisely, the expected number of times the walk jumps over 0 is 1/2 <cit.>. Foss et al. <cit.> and Rolla et al. <cit.> study two modifications of this model where they introduce some extra points on the line, which they call “rain” and “dust”, respectively. Foss et al. <cit.> consider a space-time model, starting with a Poisson process at time 0. The positions and times of arrival of new points are given by a Poisson process on the half-plane. Moreover, the expected time that the walk spends at a point is 1. In this case the walk, almost surely, jumps over the starting point finitely many times and the position of the walk diverges logarithmically in time. Rolla et al. <cit.> assign to the points of a Poisson process one or two marks at random. The walk always moves to the point closest to the current position which still has at least one mark left and then removes exactly one mark from that point. The authors show that introducing points with two marks will force the walk to change sides infinitely many times.
Thus, unlike the walk on a Poisson process with single marks, the walk here almost surely visits all points of the point process. There is not much known about the behaviour of the greedy walk on a homogeneous Poisson process in higher dimensions. For example, it is an open problem whether the greedy walk on the points of a homogeneous Poisson process on ℝ^2 visits all points <cit.>.

In this paper, we study various point processes defined on the union of two lines E⊂ℝ^2, where the distance function d on E is the Euclidean distance, d((x_1,y_1),(x_2,y_2))=√((x_1-x_2)^2+(y_1-y_2)^2). For all point processes Π considered in this paper, every step of the greedy walk is almost surely uniquely defined, that is, for every n≥ 0 there is, almost surely, only one point for which the minimum in (<ref>) is attained. We study first a point process Π on two lines intersecting at (0,0), with independent homogeneous Poisson processes on each line. The greedy walk starts from (0,0). When the walk visits a point that is far away from (0,0), then the distance to (0,0) and to any point on the other line is large. Thus, the probability of changing lines or crossing (0,0) is small. In Section <ref> we show, using the Borel-Cantelli lemma, that the walk almost surely crosses (0,0) or changes lines only finitely many times, which implies that almost surely the walk does not visit all points of Π.

Thereafter, we look at the greedy walk on two parallel lines at a fixed distance r, ℝ×{0,r}, with a point process on each line. The behaviour here depends on the definition of the process. The first case we study is a process Π consisting of two identical copies of a homogeneous Poisson process on ℝ, placed on the two parallel lines. We show in Section <ref> that the greedy walk does not visit all the points of Π, but it visits all the points on one side of the vertical line {0}×ℝ and just finitely many points on the other side. In the second case, we modify the definition of the process above by deleting exactly one of the copies of each point with probability p>0, independently of the other points, where the line is chosen with probability 1/2. In particular, if p=1 we have two independent Poisson processes on these lines. For any p>0, the greedy walk almost surely visits all points. The reason is that the greedy walk skips some of the points when it moves away from the vertical line {0}×ℝ, and those points force the walk to return and cross the vertical line {0}×ℝ infinitely many times. We prove this in Section <ref> using arguments from <cit.>. The greedy walk also visits all points of Π in the case when Π consists of two identical copies of a homogeneous Poisson process on ℝ, where one copy is shifted by |s|<r/√(3). This is discussed in Section <ref>. Note that all results are independent of the choice of r.

For the greedy walk on a homogeneous Poisson process on ℝ, with single or double marks assigned to the points, Rolla et al. <cit.> show that, even though the walk visits all the points, the expected first crossing time of 0 is infinite. One can in a similar way show analogous results for the greedy walk on the processes on two parallel lines defined in Sections <ref> and <ref>, but that is not included in this paper.

§ TWO INTERSECTING LINES

Let E={(x,y)∈ℝ^2 : y=m_1x or y=m_2x}, where m_1,m_2∈ℝ, m_1≠ m_2. The point (0,0) divides E into four half-lines.
Let d be the Euclidean distance and denote the distance of a point (x,y) from the origin by ||(x,y)||=d((0,0),(x,y)). Let Π^1, Π^2 be two independent Poisson processes on ℝ with rate 1. Then, for any a,b∈ℝ, let |Π∩{(x,m_ix) : x√(1+m_i^2)∈(a,b)}| = |Π^i∩(a,b)|, so that the distances between the points of Π^1 and Π^2 are preserved in E. The greedy walk (S_n)_n≥ 0 on Π defined by (<ref>) starts from (0,0).

Almost surely, the greedy walk does not visit all points. More precisely, the greedy walk almost surely visits only finitely many points on three half-lines.

Let B_(0,0)(R) be the ball in ℝ^2 of radius R around the point (0,0). Then |Π∩ B_(0,0)(R)| = |Π^1∩(-R,R)| + |Π^2∩(-R,R)| < ∞ a.s. Thus, lim_n→∞||S_n|| = ∞ almost surely. To show that the walk does not visit all points of Π, it suffices to prove that the sequence (S_n)_n≥ 0 changes half-lines only finitely many times. We can define a subsequence of the times when the walk changes half-lines as follows: j_0=0, j_n = inf{k>j_n-1 : S_k and S_k+1 are not on the same half-line and ||S_k|| > max{n, ||S_j_n-1||}}, k_0=0, k_n = inf{k≤ j_n : S_k, S_k+1, …, S_j_n are on the same half-line}, where inf∅=∞. Let (U_n)_n≥ 0 and (V_n)_n≥ 0 be the corresponding subsequences of (S_n)_n≥ 0, that is, U_n=S_k_n and V_n=S_j_n when j_n,k_n<∞, and U_n=(1,1), V_n=(0,0) otherwise. Moreover, define the events B_n={||U_n||≤||V_n||} and C_n={||U_n||>||V_n||}. If the greedy walk changes half-lines infinitely often, then j_n<∞ for all n and exactly one of the events B_n and C_n occurs for each n.

Assume that C_n occurs for some n such that j_n<∞. Let X=S_k_n-1 be the last visited point before U_n. Then, by the definition of the sequence (V_n)_n≥ 0, ||X|| ≤ max{n, ||V_n-1||} < ||V_n||. From ||U_n||>||V_n||>||X|| it follows that d(X,V_n)<d(X,U_n), which contradicts the definition of the greedy walk, and therefore also the assumption that C_n occurs.

Assume now that B_n occurs for some n such that j_n<∞. Denote by α the acute angle between the lines y=m_1x and y=m_2x. By the definition of the sequence (V_n)_n≥ 0, the walk up to time j_n never changed lines from a point whose distance from the origin is greater than ||V_n||. Moreover, because of the assumption ||U_n||≤||V_n||, between times k_n and j_n the walk never visited points further away than V_n on the corresponding half-line. Thus, the points on this half-line outside B_(0,0)(||V_n||) are not yet visited. The distance from V_n to the other line is ||V_n||sinα. Since the walk changes lines after visiting the point V_n, we can conclude that there are no unvisited points of Π in B_V_n(||V_n||sinα) at time j_n. Hence, for n≥ n_0, where n_0 is such that n_0 sinα>1, we have

B_n ⊂ ⋃_i=1^2 {Π^i∩(||V_n||, ||V_n||(1+sinα))=∅} ∪ {Π^i∩(-||V_n||(1+sinα), -||V_n||)=∅} ⊂ ⋃_i=1^2 ⋃_R=n^∞ {Π^i∩(R+1, R(1+sinα))=∅} ∪ {Π^i∩(-R(1+sinα), -(R+1))=∅}.

Then, ℙ(B_n) ≤ 4∑_R=n^∞ e^-Rsinα+1 = 4e^-nsinα+1/(1-e^-sinα) and ∑_n=n_0^∞ ℙ(B_n) ≤ 4e^-n_0sinα+1/(1-e^-sinα)^2 < ∞. Hence, by the Borel-Cantelli lemma, ℙ(B_n for infinitely many n≥ 1)=0. Since the event C_n does not occur for any n such that j_n<∞, and B_n occurs almost surely for only finitely many n, there exists, almost surely, n_0 such that j_n=∞ for n≥ n_0. Thus, the greedy walk changes lines finitely many times.
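The behaviour established in the proof above can be illustrated numerically. The following simulation sketch (ours, not part of the paper; the rates, window size and seed are arbitrary choices, and truncating the processes to a finite window introduces boundary effects) runs the greedy walk on two unit-rate Poisson processes placed on intersecting lines, parametrised by arc length so that inter-point gaps on each line are Exp(1), and counts how often the walk changes lines; by the theorem this count stays bounded as the window grows.

import numpy as np

rng = np.random.default_rng(0)

def poisson_points(rate, half_width, rng):
    """Points of a rate-`rate` Poisson process on (-half_width, half_width)."""
    n = rng.poisson(2 * rate * half_width)
    return rng.uniform(-half_width, half_width, size=n)

def greedy_walk(points, start, n_steps):
    """Greedy walk on a finite planar point set: repeatedly move to the
    nearest not yet visited point."""
    pts = np.asarray(points, dtype=float)
    visited = np.zeros(len(pts), dtype=bool)
    pos, path = np.asarray(start, dtype=float), []
    for _ in range(min(n_steps, len(pts))):
        dist = np.linalg.norm(pts - pos, axis=1)
        dist[visited] = np.inf
        i = int(np.argmin(dist))
        visited[i] = True
        pos = pts[i]
        path.append((pos[0], pos[1]))
    return path

# Unit-rate processes on the lines y = 0 and y = x (m = 1).
m = 1.0
c = 1.0 / np.sqrt(1.0 + m * m)
pts = [(t, 0.0) for t in poisson_points(1.0, 200.0, rng)]
pts += [(c * t, m * c * t) for t in poisson_points(1.0, 200.0, rng)]
path = greedy_walk(pts, (0.0, 0.0), 400)
on_line0 = [abs(y) < 1e-12 for _, y in path]
changes = sum(a != b for a, b in zip(on_line0, on_line0[1:]))
print(changes)  # stays small: the walk eventually sticks to one half-line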
Moreover, we can generalise this theorem as follows.Let E be a space of finitely many intersecting lines(every two lines are intersecting, but not necessarily all in the same point) and Π is a point process on E consisting of a homogeneous Poisson process (with possibly different rates)on every line. Then the greedy walk starting from a point in E does not visit all points of Π. Similarly as above, one can show that the walk changes lines finitely many times, and therefore, visitsfinitely many points on all but one line. § TWO PARALLEL LINES WITH THE SAME POISSON PROCESS Let E = ×{0,r} and let d be the Euclidean distance. Sometimes we refer to the lines ×{0} and ×{r} as line 0 and line r, respectively.We define a point process Π on E in the following way. Let Π be a homogeneous Poisson process onwith rate 1and let |Π∩ (B×{0})|=|Π∩ (B×{r})|=|Π∩ B| for all Borel sets B⊂. Denote the points of the process Π by…<X_-2<X_-1≤ 0<X_1<X_2<….The greedy walk on Π starts from the point S_0=(0,0), which is with probability 1 not a point of Π. Almost surely, the greedy walk does not visit all points of Π. More precisely,the greedy walk almost surely visits all points on one side of the vertical line {0}× and finitely many points on the other side. We can divide the points (X_i)_i∈ into clusters,so that successive points in the same cluster have a distance less than rand the distance between any two points in different clusters is greater than r. Let (τ_i)_i∈ be the indices of the closest point to 0 in each cluster.More specifically,τ_0=-1 if | X_-1| <X_1 and τ_0=1otherwise, (τ_i)_i≥ 1 is the unique sequence of integers such that X_τ_i-X_τ_i-1>r and X_k-X_k-1≤ r for τ_i-1<k<τ_i, and, similarly, (τ_-i)_i≥ 1 is a sequence of integers such that X_τ_-i+1-X_τ_-i>r and X_k+1-X_k≤ r for τ_-i-1<k<τ_-i. Moreover, we call the cluster containing the point X_τ_i cluster i. See Figure <ref> for an example of clusters -1,0,1 and 2.The points {X_τ_-1,X_τ_0, X_τ_1, X_τ_2}×{0,r}are marked with gray colour.The greedy walk starting from (0,0)visits several points around (0,0) and then moves to line r from one of the outermost points of cluster 0on line 0. Then it visits all points of cluster 0 on line r and if there are points left it changes lines again to visit the remaining points on line 0 before moving to the next cluster. Later, when the greedy walk is in cluster i, i≠ 0,it always visits first a point at X_τ_i, that is (X_τ_i,0) or (X_τ_i,r).Then the walk visits successively all the points of cluster i on the same line and it changes lines at the other outermost point of the cluster. Thereafter the walk visits the corresponding points on the other linein reverse order until it reaches a point at X_τ_i. Thus, the walk visits all points ofcluster i consecutively and it ends at the starting position X_τ_i,but on the other line. Therefore, to know whether the walk visits all points, it is enough to knowthe positions of the points in cluster0 and the position of the points at X_τ_i, i≠ 0. Since the points of cluster 0 almost surely do not change the asymptotic behaviour of the walk,for the proof we look at the greedy walk (S_n)_n≥ 0 on(X_τ_i)_i∈ with 0 as starting point. Note that the distances X_τ_i+1-X_τ_i, i∈∖{-1,0}, are independent and identically distributed random variables with finite expectation. 
To visit all points (X_τ_i)_i∈ℤ, the greedy walk needs to cross over 0 infinitely many times. Let A_m be the event that the walk (S̄_n)_n≥ 0 crosses 0 after visiting a point in the interval (rm, r(m+1)), that is, A_m = {∃ n : rm ≤ S̄_n < r(m+1) and S̄_n+1 < 0}. This can be written as

A_m = {∃ n,i : rm ≤ X_τ_i < r(m+1), S̄_n = X_τ_i and S̄_n+1 < 0} ⊂ {∃ i : rm ≤ X_τ_i < r(m+1), X_τ_i+1-X_τ_i > rm} = ⋃_i≥ 0 {rm ≤ X_τ_i < r(m+1), X_τ_i+1-X_τ_i > rm}.

For i≥ 1, the random variable X_τ_i+1-X_τ_i is independent of X_τ_i and has the same distribution as X_τ_2-X_τ_1. Moreover, X_τ_i∈[rm, r(m+1)) for at most one i. Thus, we have

ℙ(A_m) ≤ ℙ(rm ≤ X_τ_0 < r(m+1)) + ∑_i=1^∞ ℙ(rm ≤ X_τ_i < r(m+1), X_τ_i+1-X_τ_i > rm) ≤ ℙ(rm ≤ X_τ_0 < r(m+1)) + ∑_i=1^∞ ℙ(rm ≤ X_τ_i < r(m+1)) ℙ(X_τ_2-X_τ_1 > rm) = ℙ(rm ≤ X_τ_0 < r(m+1)) + ℙ(X_τ_2-X_τ_1 > rm) ∑_i=1^∞ 𝔼(𝟙_{rm ≤ X_τ_i < r(m+1)}) ≤ 1/2 e^-2rm(1-e^-2r) + ℙ(X_τ_2-X_τ_1 > rm).

Then ∑_m=1^∞ ℙ(A_m) ≤ ∑_m=1^∞ 1/2 e^-2rm(1-e^-2r) + ∑_m=1^∞ ℙ(X_τ_2-X_τ_1 > rm) < 1/2 e^-2r + (1/r)𝔼(X_τ_2-X_τ_1) < ∞. Now the Borel-Cantelli lemma implies that ℙ(A_m for infinitely many m≥ 1)=0. Hence, the walk (S̄_n)_n≥ 0 almost surely crosses 0 finitely many times. Therefore, also the walk (S_n)_n≥ 0 crosses the vertical line {0}×ℝ finitely many times and visits just finitely many points on one side of that line.

§ TWO PARALLEL LINES WITH THINNED POISSON PROCESSES

Let E = ℝ×{0,r} and let d be the Euclidean distance. Moreover, let 0<p≤ 1. We define a point process Π on E as follows. Let Π̄ be a homogeneous Poisson process on ℝ with rate 1. Let Π^0 and Π^r be two (dependent, in general) thinnings of Π̄ generated as follows. For all X∈Π̄, do one of the following:
* With probability 1-p, duplicate the point X and make it a point of both processes Π^0 and Π^r.
* With probability p, assign X to either Π^0 or Π^r (but not both), with probability 1/2 each.
Let now Π be the point process on E with Π∩(B×{0})=(Π^0∩ B)×{0} and Π∩(B×{r})=(Π^r∩ B)×{r} for all Borel sets B⊂ℝ. We study the greedy walk on Π defined by (<ref>) that starts at S_0=(0,0). Note that (0,0)∉Π with probability 1. Sometimes in the proofs we consider the greedy walk starting from another point x∈E; we emphasise this by writing the superscript x in the sequence (S^x_n)_n≥ 0. If p=0, then Π^0 and Π^r are identical. This was studied in Section <ref>, where we showed that the greedy walk does not visit all points of the process. In this section we consider p>0 and we get the opposite result:

For any p>0, the greedy walk visits all points of Π almost surely.

For p=1, the processes Π^0 and Π^r defined above are two independent Poisson processes with rate 1/2. When 0<p<1, the process Π has some “double” points and Π^0 and Π^r are not independent. However, for p=1 and p∈(0,1) the behaviour of the walk is similar and, thus, we study these cases together.

Let us now introduce some notation and definitions that we use throughout this section. We call the projection of an element of E onto its first coordinate the shadow of that element, and we denote it by a bar: the shadow of a point x=(x_1,x_2)∈E is x̄=x_1. The point process Π is defined from the process Π̄, which can be seen as the shadow of Π; that is, x∈Π̄ if and only if there exists ι∈{0,r} such that (x,ι)∈Π. Also, (S̄_n)_n≥ 0 is the shadow of the greedy walk (S_n)_n≥ 0, that is, (S̄_n)_n≥ 0 contains just the information about the first coordinates of the locations of the walk. Let Π_n=Π∖{S_0,S_1,S_2,…,S_n-1} be the set of all points of Π that are not visited before time n.
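The thinning construction above is straightforward to sample; here is a minimal sketch (ours; the rate, window and parameter values are arbitrary choices):

import numpy as np

def thinned_two_line_process(rate, half_width, p, rng):
    """Sample Pi on (-half_width, half_width) x {0, r} as described above:
    each point of a rate-`rate` Poisson process is kept on both lines
    with probability 1 - p, otherwise on a single line chosen uniformly."""
    xs = np.sort(rng.uniform(-half_width, half_width,
                             rng.poisson(2 * rate * half_width)))
    line0, liner = [], []
    for x in xs:
        u = rng.random()
        if u < 1.0 - p:            # duplicated point, on both lines
            line0.append(x)
            liner.append(x)
        elif u < 1.0 - p / 2.0:    # single point, assigned to line 0
            line0.append(x)
        else:                      # single point, assigned to line r
            liner.append(x)
    return line0, liner

rng = np.random.default_rng(1)
line0, liner = thinned_two_line_process(1.0, 100.0, p=0.5, rng=rng)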
Let Π̄^0_n and Π̄^r_n be the shadows of Π_n∩(ℝ×{0}) and Π_n∩(ℝ×{r}), respectively. Moreover, Π̄_n denotes the shadow of Π_n, that is, the set of the first coordinates of the points of Π_n. Define the shift operator θ_(x,ι), (x,ι)∈E, on Π by θ_(x,0)Π=((Π^0-x)×{0})∪((Π^r-x)×{r}) and θ_(x,r)Π=((Π^r-x)×{0})∪((Π^0-x)×{r}). Define also the mirroring operator σ by σΠ=((-Π^0)×{0})∪((-Π^r)×{r}). For any subset A of ℝ define T_A=inf{n≥ 1 : S̄_n∈A} to be the first time the shadow of the greedy walk enters A, and write T_x for T_{x}. Let T^R_x=inf{n≥ 0 : Π̄_n+1(x)=0}, that is, T^R_x is the time when both points (x,0) and (x,r) are visited. If exactly one of (x,0) and (x,r) is in Π, then T^R_x=T_x. Note that for any x>0 we have T_[x,∞)<∞ or T_(-∞,0)<∞, because Π̄[0,x]<∞ almost surely and the walk exits [0,x]×{0,r} in a finite time.

Define now the variable D_x, x∈Π̄, x>0, as follows. If T_(-∞,0)<T_[x,∞), then D_x=0. Otherwise, T_[x,∞)<T_(-∞,0) and 0<S̄_1,S̄_2,…,S̄_T_[x,∞)-1<x, S̄_T_[x,∞)≥ x. We can label the remaining points of Π̄_T_[x,∞) in the interval (0,x) by z_1,z_2,…,z_n-1 so that 0=z_n<z_n-1<…<z_1<z_0=x. Let then

D_x = max_0≤ i≤ n-1{(z_i-z_i+1)-(x-z_i)} = max_0≤ i≤ n-1{2z_i-z_i+1-x}.

Since 2z_0-z_1-x=x-z_1>0 and 2z_i-z_i+1-x ≤ 2z_i-x ≤ x for 0≤ i≤ n-1, we have 0<D_x≤ x. The variable D_x measures how large the distance between x and the closest point of Π̄∩(x,∞) must be for the walk to possibly visit a point in (-∞,0)×{0,r} before visiting any point in (x,∞)×{0,r}.

We prove Theorem <ref> in a similar way as Rolla et al. <cit.> prove that the greedy walk visits all marks attached to the points of a homogeneous Poisson process on ℝ, where each point has one mark with probability p or two marks with probability 1-p. The idea of the proof is the following. We define first a subset Ξ of Π^0, which is stationary and ergodic. Then, using the definition and properties of Ξ, we are able to show that there exists d_0>0 such that, almost surely, D_X<d_0 for infinitely many X∈Π̄. This we use to show that the events A_k, which we define later in (<ref>), occur for infinitely many k>0, almost surely. Then we show that whenever A_k occurs, the greedy walk visits (-∞,0)×{0,r} in a finite time. Therefore, we can conclude that T_(-∞,0)<∞, almost surely. Finally, using repeatedly the fact that T_(-∞,0)<∞, almost surely, we are able to show that the greedy walk on Π crosses the vertical line {0}×ℝ infinitely many times and, thus, almost surely visits all points of Π.

Let us first discuss general properties of the greedy walk on E that do not depend on the definition of the point process Π. If the greedy walk visits two points (x,ι) and (y,ι) on a line ι∈{0,r} without changing lines in between those visits, then it visits also all the points between x and y on line ι. So the walk clears some intervals on the lines, but because of changing the lines it possibly leaves some unvisited points between those intervals. The following lemma shows that if the walk omits two points on different lines between visited intervals, then the horizontal distance between those points is greater than r. Moreover, looking from a point that is to the right of those two points, the closer point is always the point that is more to the right. Thus, when the walk returns to a partly visited set, it always visits first the rightmost remaining point in this set.

Let a=min_0≤ i≤ n S̄_i and b=max_0≤ i≤ n S̄_i. (a) Let a≤ x<y≤ b be such that (x,ι_x),(y,ι_y)∈Π_n+1 and ι_x≠ι_y. Then y-x>r. (b) Let a≤ x<y≤ z≤ b be such that (x,ι_x),(y,ι_y)∈Π_n+1 and (z,ι_z)∈Π_n. Then d((y,ι_y),(z,ι_z))<d((x,ι_x),(z,ι_z)).
(a) Let x, y be as in the lemma and suppose, on the contrary, that y-x≤ r. Let I_1=((a,x)×{ι_x})∪((a,y)×{ι_y}) and I_2=((x,b)×{ι_x})∪((y,b)×{ι_y}). Since the greedy walk has visited points at a and b up to time n, but not the points (x,ι_x) and (y,ι_y), the walk before time n moved from I_1 to I_2 or in the opposite direction. Because of the assumption y-x≤ r, a point in I_1 is closer to (x,ι_x) or (y,ι_y) than to any point in I_2, and conversely. Thus, if y-x≤ r it is impossible to visit a point at a and a point at b without visiting either (x,ι_x) or (y,ι_y), which contradicts the assumptions of the lemma.

(b) If ι_y=ι_z then d((y,ι_y),(z,ι_z))=z-y<z-x≤ d((x,ι_x),(z,ι_z)). If ι_y≠ι_z and ι_x=ι_y, then d((y,ι_y),(z,ι_z))^2=(z-y)^2+r^2<(z-x)^2+r^2≤ d((x,ι_x),(z,ι_z))^2. If ι_y≠ι_z and ι_x≠ι_y, it follows from part (a) of this lemma that y-x>r. This yields d((y,ι_y),(z,ι_z))^2 = (z-y)^2+r^2 < (z-y+r)^2 < (z-y+y-x)^2 = d((x,ι_x),(z,ι_z))^2.

Let (c,ι_c)∈Π, c>0, be a point far enough from (0,0). If (c,ι_c) is not visited when the walk goes for the first time from (-∞,c)×{0,r} to (c,∞)×{0,r}, then this point is visited before the walk returns to (-∞,c)×{0,r}. Furthermore, as we show in the following lemma, when the walk is finally at the location (c,ι_c), then there are no unvisited points left in (c, max_0≤ i≤ T^R_c S̄_i]×{0,r}. If (c,ι_c) is close to (0,0) and ι_c=r, then this is not always true, because the greedy walk might jump several times over (0,0) (and (c,0)) without visiting any point on line r.

Let a=min_0≤ i≤ n S̄_i and b=max_0≤ i≤ n S̄_i. Let a≤ c≤ b, ι_c∈{0,r}, be such that (c,ι_c)∈Π_n+1. If S̄_n≥ c and (c,ι_c) is visited before any other point of Π_n in (-∞,c)×{0,r}, then all points of Π in (c, max_0≤ i≤ T^R_c S̄_i]×{0,r} are visited before time T^R_c.

Let us denote max_0≤ i≤ T^R_c S̄_i by M. It is easy to see that the lemma is true for c=M. Thus, assume that M>c and let j be the first time for which S̄_j=M. Then, for j+1≤ k≤ T^R_c-1, S_k is in (c,M]×{0,r}. Let x be the rightmost point of Π̄_j+1 in the interval [c,M] and let ι_x be the corresponding line of that point. Note that at time j+1 there is only one point left at x. If x=c, the claim of the lemma follows directly. Otherwise, by Lemma <ref> (b), the point (x,ι_x) is closer to S_j than any other point in [c,x)×{0,r}. Thus, S_j+1=(x,ι_x) and there are no points left in [x,M]×{0,r} after time j+1. Repeating the same arguments, we see that the walk successively visits all the remaining points in [c,M]×{0,r} until it reaches c. Hence, at time T^R_c the set (c,M]×{0,r} is empty.
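The quantity D_x defined above is easy to evaluate once the remaining shadow points are known; a small sketch (ours) with a worked example:

def D(x, remaining):
    """D_x from the definition above: `remaining` lists the shadows of
    the not yet visited points in (0, x) at time T_[x, infinity)."""
    z = [x] + sorted(remaining, reverse=True) + [0.0]
    return max(2 * z[i] - z[i + 1] - x for i in range(len(z) - 1))

# With x = 10 and remaining points {7, 3}:
#   i=0: 2*10 - 7 - 10 = 3;  i=1: 2*7 - 3 - 10 = 1;  i=2: 2*3 - 0 - 10 = -4
print(D(10.0, [7.0, 3.0]))  # 3.0, and indeed 0 < D_x <= x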
The process Ξ is defined as a function of the points in Π and thereforeΞ is a stationary and ergodic process in .Therefore, Ξ is almost surely the empty set or it is almost surely a non-empty set, in which case Ξ has a positive rate.We look at these two cases in the next two lemmas.For the first lemma we need the random variable W_(x,ι), x∈, ι∈{0,r}, which measures a sufficient horizontal distance from (x,ι) to the rightmost point in the set (-∞,x)×{0,r}, so that the greedy walk starting from (x,ι) never visits that set. For x∈ let Π^x=Π∩([x,∞)×{0,r}). Consider for the moment the greedy walk on Π^x starting from (x,ι) defined by (<ref>) and let W_(x,ι)=inf_i≥ 0{S_i-d(S_i,S_i+1)}.Let L^0(x)=max{Y∈Π^0:Y<x} and L^r(x)=max{Y∈Π^r:Y<x}. If for some x∈, ι∈{0,r}, | W_(x,ι)|<∞ and max{L^0(x),L^r(x)}<W_(x,ι), then for all i≥ 0min{d(S_i,(L^0(x),0)),d(S_i,(L^r(x),r))} ≥S_i-max{L^0(x),L^r(x)}> S_i-W_(x,ι)≥ d(S_i,S_i+1).Therefore, the walk on Π starting from (x,ι) coincides with the walk on Π^x, because for every i≥ 0 the point S_i is closer to S_i+1 than to any point in Π∖Π^x. Thus, the walk does not visit Π∖Π^x. The opposite is also true, i.e. if the walk on Π starting from (x,ι) coincides with the walk on Π^x, then | W_(x,ι)|<∞. If Ξ is almost surely the empty set, then Ψ={(X,ι)∈Π:| W_(X,ι)| <∞}is also almost surely the empty set.For x∈, let L^0(x) and L^r(x) be as above. Since W_(X,ι) is identically distributed for all (X,ι)∈Π,Ψ is stationary and ergodic. Suppose, on the contrary, that Ψ is a non-empty set. For d>0 let Ψ_d={(X,r)∈Ψ:| W_(X,r)|<d} and note that ⋃_d> 0Ψ_d=Ψ∩ (×{r}). Thus, there exists d large enough so that Ψ_d is a non-empty set which is stationary and ergodic. Let Ψ_d be the set of all (X,r)∈Ψ_d such that the points (L^0(X),0), (L^r(X),r) and (L^0(L^0(X)),0) satisfy the following. First, these points are in (-∞,X-d)×{0,r}. Second, the point (L^r(X),r) is closer to (X,r) than to (L^0(X),0).Third, the greedy walks starting from (L^0(L^0(X)),0) and (L^0(L^0(X)),r) visit only the points in [L^0(L^0(X)),∞)×{0,r} and these walks visit (L^r(X),r)before visiting (L^0(X),0). Since these three conditions have a positive probability, which is independent ofW_(X,r),Ψ_d is also almost surely anon-empty set. But, by definition of Ξ, for every (X,r)∈Ψ_d, L^0(X)∈Ξ and Ξ is a non-empty set, which is a contradiction.In the following lemma we use the random variable D_X, X∈Π_0, which can be compared to D_X defined in (<ref>). For X∈Π^0∖Ξ set D_X=0. For X∈Ξ denote the points of Ξ∩(∞,X] in decreasing order…<z_2<z_1<z_0=Xand define D_X asD_X = sup_i≥ 0{2z_i-z_i+1-X}. For d_1, d_2>0 defineΞ_d_1,d_2={X∈Ξ: D_X<d_1and there exists Y∈Π such that 0<Y-X<d_2 and Π∩(Y, Y+r)=∅},i.e. X∈Ξ belongs to Ξ_d_1,d_2ifD_X<d_1 and at the distance less than d_2 from X there is an interval of length r where there is no points of Π. We consider the empty intervals of length r in Π, because wheneverΠ∩(Y, Y+r)=∅ for some Y∈Π, Y>0, the greedy walk is forced to visit the points(Y,0) and (Y,r), before crossing the interval and visiting a point in [Y+r,∞)×{0,r}.If Ξ is almost surely a non-empty set, thenthere exist d_1, d_2>0 such that Ξ_d_1,d_2 is almost surely a non-empty set. Moreover,Ξ_d_1,d_2 is a stationary and ergodic process. If Ξ is non-empty, the rate δ of Ξ is positive. Then for X∈Ξ,lim_i→∞X-z_i/i =δ^-1, almost surely. Also, lim_i→∞z_i-z_i+1/i=0, almost surely. 
Hence, for all large i, 2z_i-z_i+1-X= (z_i-z_i+1)-(X-z_i)<0and we can conclude that D_X is almost surely finite.Therefore, there exists d_1 such that {X∈Π^0: X∈Ξ, D_X<d_1} iswith positive probability a non-empty set. Moreover, since D_X is identically distributed for all X∈Ξ, {X∈Π^0:X∈Ξ, D_X<d_1} is a stationary and ergodic process with positive rate. Almost surely, the gap between two neighbouring points of the homogeneous Poisson process Πisinfinitely often greater than r.Thus, also Π∩((Y, Y+r)×{0,r})=∅ for infinitely many Y∈Π and all such Y form a stationary and ergodic process.Thus, Ξ_d_1,d_2 is also stationary and ergodic for all d_2>0. Since ⋃_d_2> 0Ξ_d_1, d_2={X∈Π^0: X∈Ξ, D_X<d_1} and this is not empty when d_1 is large enough,we can choose d_2 large enough so that Ξ_d_1,d_2 is almost surely a non-empty set. We study the greedy walk starting from the point (0,0), which is, almost surely, not a point of Π. From now on denote the points of Π by…<X_-2 <X_-1≤ 0< X_1<X_2< …and denote the points of Π^0 and Π^r by …<X_-2^0 <X_-1^0≤ 0< X_1^0<X_2^0< …and…<X_-2^r <X_-1^r≤ 0< X_1^r<X_2^r< …, respectively.Now we are ready to prove that D_X<d_0 for infinitely many X∈Π.We use here the definition of Ξ and divide the proof in two parts, depending weatherΞ is almost surely the empty set or a non-empty set. There exists d_0<∞ such that, almost surely, D_X_k<d_0 and X_k+1-X_k>r for infinitely many k>0.Assume first that Ξ is almost surely the empty set. Then, by Lemma <ref>, Ψ={(X,ι)∈Π: | W_(X,ι)| <∞} is almost surely the empty set. Moreover, {(X,0)∈Π:X∉Π^r, | W_(X,0)| <∞} is also the empty set.The greedy walk (S^(0,0)_n)_n≥ 0 on Π∩([0,∞)×{0,r}) has the same law as the greedy walk (S^(X,0)_n)_n≥ 0 on Π∩([X,∞)×{0,r}) shifted for (X,0), where X∈Π^0, X∉Π^r. This implies that (| W_(0,0)| <∞)=0 andthe greedy walk (S^(0,0)_n)_n≥ 0 almost surely visits a point in (-∞,0)×{0,r} in a finite time. Then it follows from the definition of D_x that D_x=0 for all large enough x> 0.Since X_k+1-X_k>r for infinitely many k>0, theclaim of the lemma holds for any d_0>0.Assume now that Ξ is not empty.Then, by Lemma <ref>, we can find d_1 and d_2 large enough so that Ξ_d_1,d_2 is a non-empty set. We first show that D_X≤ d_1 for infinitely many X∈Ξ, X>0, and then we prove that D_X_k<d_1+d_2 and X_k+1-X_k>r for infinitely many k>0.For X∈Ξ, X>0, denote the points of Ξ in [0,X] by z_0,z_1,…,z_n-1 so that 0=z_n<z_n-1<…<z_1<z_0=Xand define D_X', an analogue of D_X and a restricted version of D_X, asD_X'=max_0≤i ≤n-1{2z_i-z_i+1-X}= max{max_0≤i ≤n-2{2z_i-z_i+1-X},2z_n-1-X}. Let ξ=min{Y∈Ξ: Y>0} and note that z_n-1=ξ for every X∈Ξ, X>0. Then for X∈Ξ, X>2ξ we have 2z_n-1-X=2ξ-X<0. Since D_X'≥ 2z_0-z_1-X=X-z_1>0, the term 2z_n-1-X does not contribute to D_X'. From the definition of Ξ_d_1,d_2, we have D_X<d_1 for infinitely many X∈Ξ, almost surely. When X>2ξ, D_X' is the maximum of a finite subset of the values in (<ref>) and thus D_X'≤D_X<d_1 for infinitely many X∈Ξ, X>2ξ. We prove now that D_X≤D_X' in two steps.First, we show that the points used in the definition of D_X' are a subset of the points used in the definition of D_X. Second, we show that adding a new point to the definition (<ref>) decreases the value of the maximum. Let ξ∈Ξ, ξ>0.Before visiting any point in [ξ,∞)×{0,r}, the greedy walk starting from (0,0) visitsthe leftmost pointon one of the lines in [L^0(ξ),∞)×{0,r}, where L^0(ξ)=max{Y∈Π^0:Y<ξ}. 
That is, the greedy walk visits (L^0(ξ),0) or the closest point on line r to the right of (L^0(ξ),r) (which is (L^0(ξ),r) itself if that point exists). By the definition of Ξ, the greedy walk starting from one of these two points never visits (ξ,0) and it never visits any point to the left of {L^0(ξ)}×{0,r}. Therefore, once the greedy walk starting from (0,0) enters [L^0(ξ),∞)×{0,r}, it continues on the path of one of these two walks. Thus, the greedy walk starting from (0,0) does not visit (ξ,0). From this we can conclude that none of the points in (Ξ∩(0,∞))×{0} is visited by the greedy walk, and that for X∈Ξ∩(0,∞) the points z_0,z_1,…,z_n from (<ref>) used in the definition of D_X' form a subset of the points used in the definition of D_X.

Let Y be a point used in the definition of D_X but not in the definition of D_X', and find j such that z_j<Y<z_{j-1}. Adding Y to the set {z_0,z_1,…,z_n} in the definition (<ref>) removes the value 2z_{j-1}-z_j-X and adds the values 2z_{j-1}-Y-X and 2Y-z_j-X. Since 2z_{j-1}-Y-X<2z_{j-1}-z_j-X and 2Y-z_j-X<2z_{j-1}-z_j-X, the point at Y added to {z_0,z_1,…,z_n} decreases the value of D_X' or leaves it unchanged. Since D_X' is defined using the points {z_0,z_1,…,z_n} only, while the definition of D_X uses these points together with additional ones, we can conclude that D_X≤D_X' for all X∈Ξ, X>0. Thus D_X<d_1 for infinitely many X∈Ξ, X>0.

This together with Lemma <ref> implies that for infinitely many X∈Ξ∩(0,∞) we have D_X<d_1 and there exists Y∈Π such that 0<Y-X<d_2 and Π∩((Y,Y+r)×{0,r})=∅. Choose one such X and let k be such that X_k-X<d_2 and X_{k+1}-X_k>r. If T_{(-∞,0)}<T_{[X_k,∞)}, then D_{X_k}=0<d_1+d_2. Otherwise, T_{[X_k,∞)}<T_{(-∞,0)}, and we can denote the points of Π_{T_{[X_k,∞)}} in (0,X_k) by z_1,z_2,…,z_{n-1}, so that 0=z_n<z_{n-1}<…<z_1<z_0=X_k. By the definition of Ξ, the point (X,0) is never visited by the walk and thus there exists j such that z_j=X. Then we have

D_{X_k} = max_{0≤i≤n-1}{2z_i-z_{i+1}-X_k} ≤ max{max_{j≤i≤n-1}{2z_i-z_{i+1}-X}-(X_k-X), max_{0≤i≤j-1}{2z_i-z_{i+1}-X_k}} ≤ max{D_X, X_k-X} ≤ D_X+X_k-X < d_1+d_2,

where in the second inequality we use the fact that, by the definition of the points in Ξ, the walk does not visit any point in (0,X)×{0,r} after time T_{[X,∞)} and therefore D_X=max_{j≤i≤n-1}{2z_i-z_{i+1}-X}. Let now d_0=d_1+d_2. Since there are, almost surely, infinitely many X∈Ξ and k such that D_X<d_1, X_k-X<d_2 and X_{k+1}-X_k>r, it follows that D_{X_k}<d_0 and X_{k+1}-X_k>r for infinitely many k>0, almost surely, which proves the claim of the lemma.

Since D_{X_k}<d_0 for infinitely many k, almost surely, one should expect that also X_{k+1}-X_k>d_0>D_{X_k} for infinitely many k. That is exactly what we show next, but let us first state the extended Borel–Cantelli lemma which we use in the proof.

Let ℱ_n, n≥0, be a filtration with ℱ_0={∅,Ω} and let A_n∈ℱ_n, n≥1. Then, almost surely, {A_n i.o.}={∑_{n=1}^∞ ℙ[A_n | ℱ_{n-1}]=∞}.

Almost surely, the events A_k={X_{k+1}-X_k>D_{X_k}-X_{-1}+r} occur for infinitely many k>0.

For k>0 let j_k^0=max{i: X_i^0≤X_k} and j_k^r=max{i: X_i^r≤X_k}. Furthermore, define the σ-algebra ℱ_k=σ((X_{-1}^0,0),(X_0^0,0),…,(X_{j_k^0}^0,0),(X_{-1}^r,r),(X_0^r,r),(X_1^r,r),…,(X_{j_k^r}^r,r)) and denote by T_A^σ and D_x^σ the analogues of T_A and D_x for the greedy walk on the set of points which generates ℱ_k. Assume T_{[X_k,∞)}<T_{(-∞,0)}. Then the greedy walk on Π and the walk on the restricted set are the same until time T_{[X_k,∞)}. If X_{k+1}-X_k>r, then S_{T_{[X_k,∞)}}=X_k, T_{X_k}^σ=T_{[X_k,∞)} and D_{X_k}^σ=D_{X_k}. Let A_k^σ={X_{k+1}-X_k>D_{X_k}^σ-X_{-1}+r} and observe that A_k^σ∈ℱ_{k+1}. For d_0>0 we have

ℙ(A_k^σ | ℱ_k) ≥ ℙ(D_{X_k}^σ<d_0, X_{k+1}-X_k>d_0-X_{-1}+r | ℱ_k) = 𝟙_{D_{X_k}^σ<d_0} ℙ(X_{k+1}-X_k>d_0-X_{-1}+r | ℱ_k) = 𝟙_{D_{X_k}^σ<d_0} e^{-(d_0-X_{-1}+r)} a.s.
The first equality above holds because {D_{X_k}^σ<d_0}∈ℱ_k. The second equality follows from the facts that X_{-1}∈ℱ_k and that X_{k+1}-X_k is exponentially distributed with mean 1 and independent of ℱ_k. By Lemma <ref>, there exists d_0 such that D_{X_k}<d_0 and X_{k+1}-X_k>r for infinitely many k, almost surely. Since D_{X_k}^σ=D_{X_k} whenever X_{k+1}-X_k>r, also D^σ_{X_k}<d_0 for infinitely many k and, thus, ∑_{k=1}^∞ ℙ(A_k^σ | ℱ_k)=∞ a.s. It follows now from the extended Borel–Cantelli lemma (Lemma <ref>) that ℙ(A_k^σ for infinitely many k≥1)=1. Since A_k^σ⊂{X_{k+1}-X_k>r} and A_k=A_k^σ whenever X_{k+1}-X_k>r, also ℙ(A_k for infinitely many k≥1)=1.

Whenever A_k occurs, as we show in the next lemma, the greedy walk is forced to visit (-∞,0)×{0,r} before visiting [X_{k+1},∞)×{0,r}. Note that the arguments do not depend on the definition of the point process Π.

Almost surely, T_{(-∞,0)}<∞.

By Lemma <ref>, the events A_k={X_{k+1}-X_k>D_{X_k}-X_{-1}+r} occur for infinitely many k, almost surely. To prove the lemma, it suffices to prove that whenever A_k occurs, then T_{(-∞,0)}<T_{[X_{k+1},∞)}. Because the walk exits [0,X_{k+1}]×{0,r} in, almost surely, finite time, it follows that T_{(-∞,0)}<∞, almost surely. Assume that T_{[X_k,∞)}<T_{(-∞,0)} and that A_k occurs for some k. Then X_{k+1}-X_k>D_{X_k}-X_{-1}+r>r, and a point in (0,X_k)×{0,r} is closer to a point at X_k than to any point in [X_{k+1},∞)×{0,r}. Hence, S_{T_{[X_k,∞)}}=X_k. Denote the remaining points of Π_{T_{[X_k,∞)}} in the interval [0,X_k] as in (<ref>). Note that at time T_{[X_k,∞)} there is exactly one unvisited point left at each position z_1,z_2,…,z_{n-1}, and denote by ι_1,ι_2,…,ι_{n-1} the corresponding lines of these points. If there is only one point at X_k, let ι_0 be the line of this point. If there are two points at X_k, let ι_0 be the line of the point that is not visited at time T_{X_k}. The point S_{T_{X_k}} is closer to the second point at X_k, if such a point exists, than to any point in (X_{k+1},∞)×{0,r}, because X_{k+1}-X_k>r, and than to any of the remaining points with shadows at z_1,z_2,…,z_{n-1}, because of Lemma <ref> (b).

For i=0,1,…,n-2, from the definition of D_{X_k} (<ref>) we have D_{X_k}≥2z_i-z_{i+1}-X_k and

d((z_i,ι_i),(z_{i+1},ι_{i+1}))^2 ≤ (z_i-z_{i+1})^2+r^2 < (z_i-z_{i+1}+r)^2 ≤ (D_{X_k}+X_k-z_i+r)^2 < (D_{X_k}-X_{-1}+r+X_k-z_i)^2 < (X_{k+1}-z_i)^2 ≤ d((z_i,ι_i),(X_{k+1},ι_i))^2.

Thus the point (z_i,ι_i) is closer to (z_{i+1},ι_{i+1}) than to any point in [X_{k+1},∞)×{0,r}. Moreover, by Lemma <ref> (b), (z_{i+1},ι_{i+1}) is closer to (z_i,ι_i) than any other point in [0,z_{i+1})×{0,r}. Thus, when the walk is at (z_i,ι_i) it visits (z_{i+1},ι_{i+1}) next, except if z_i<r and there is a point in (-∞,0)×{0,r} which is closer. In the latter case we have T_{(-∞,0)}<T_{[X_{k+1},∞)}.

Assume that the walk visits successively the points at z_1,z_2,…,z_{n-1}. Hence, all points in (z_{n-1},X_{k+1})×{0,r} are visited. When the greedy walk is at (z_{n-1},ι_{n-1}), the closest unvisited point is in (-∞,0)×{0,r}, because a point with shadow X_{-1} is closer to (z_{n-1},ι_{n-1}) than any point in [X_{k+1},∞)×{0,r}. This follows from

d((z_{n-1},ι_{n-1}),(X_{-1},r-ι_{n-1}))^2 = (z_{n-1}-X_{-1})^2+r^2 ≤ (D_{X_k}+X_k-z_{n-1}-X_{-1})^2+r^2 < (D_{X_k}+X_k-z_{n-1}-X_{-1}+r)^2 < (X_{k+1}-z_{n-1})^2 = d((z_{n-1},ι_{n-1}),(X_{k+1},ι_{n-1}))^2,

where in the first inequality above we use that, by the definition of D_{X_k} (<ref>), D_{X_k}≥2z_{n-1}-X_k. Thus, the walk visits (-∞,0)×{0,r} next. Therefore, also in this case T_{(-∞,0)}<T_{[X_{k+1},∞)}, which completes the proof.

Now we are ready to prove Theorem <ref>, where we use Lemma <ref> repeatedly to show that the greedy walk crosses the vertical line {0}×ℝ infinitely often and therefore visits all the points of Π.
[Proof of Theorem <ref>] From Lemma <ref> we have ℙ(T_{(-∞,0)}<∞)=1, which is equivalent to

ℙ(T_{(-∞,0)}<∞ | X_{-1}^0, X_{-1}^r, X_1^0, X_1^r)=1 a.s.

Moreover, the conditional probability (<ref>) is almost surely 1 for any absolutely continuous distribution of (X_{-1}^0, X_{-1}^r, X_1^0, X_1^r) on (-∞,0)^2×(0,∞)^2 which is independent of Π∩(((X_1^0,∞)×{0})∪((X_1^r,∞)×{r})).

Let T_0=0 and let T_i, i≥1, be the first time the greedy walk visits a point in (-∞,min_{0≤n≤T_{i-1}}S_n)×{0,r} for i odd and in (max_{0≤n≤T_{i-1}}S_n,∞)×{0,r} for i even. That is, for i odd (even), T_i is the time when the walk visits the part of Π on the left (right) of {0}×ℝ which is unvisited up to time T_{i-1}. We prove first that T_i is almost surely finite for all i and, thus, that the greedy walk almost surely crosses the vertical line {0}×ℝ infinitely many times. Then we show that it is not possible that a point of Π is never visited, and thus the walk almost surely visits all points of Π.

Assume that T_i is finite for some even i. Let Y_0=S_{T_i}, Y_1^0=min{Y∈Π^0: Y>Y_0} and Y_1^r=min{Y∈Π^r: Y>Y_0}. Furthermore, let Y_{-1}^0=max{Y∈Π^0: Y<min_{0≤n≤T_i}S_n} and Y_{-1}^r=max{Y∈Π^r: Y<min_{0≤n≤T_i}S_n}. By the definition of Y_{-1}^0 and Y_{-1}^r, at time T_i the set I_1=((-∞,Y_{-1}^0]×{0})∪((-∞,Y_{-1}^r]×{r}) is not yet visited. Also, by definition, Y_1^0, Y_1^r>S_{T_i} and the set I_2=([Y_1^0,∞)×{0})∪([Y_1^r,∞)×{r}) is never visited before time T_i. Moreover, because of the strong Markov property, the distributions of Π∩I_1 and Π∩I_2 are independent of the points of Π outside I_1 and I_2. Let Π'=Π∩(I_1∪I_2). From (<ref>) we have that the greedy walk on θ_{S_{T_i}}Π' starting from (0,0) visits the set (-∞,0)×{0,r} in finite time, almost surely. In other words, the greedy walk on Π' starting from S_{T_i} almost surely visits a point in I_1 at some time T_{i+1}'<∞.

The greedy walk on Π_{T_i} starting from S_{T_i} might differ from the walk on Π' if there are some points outside I_1 and I_2 that are not visited up to time T_i. Denote the shadows of these points by c_1,c_2,…,c_j, so that S_{T_i}≥c_1>c_2>…>c_j>max{Y_{-1}^0, Y_{-1}^r}. Because of Lemma <ref> (b), a point in (S_{T_i},∞)×{0,r} is closer to the point at c_1 than to any of the points at c_2,c_3,…,c_j. Hence the walk visits the point at c_1 before visiting any of the points at c_2,c_3,…,c_j. Let T_{i+1}^1=min{T_{c_1}^R, T_{I_1}}, where I_1=(-∞,max{Y_{-1}^0,Y_{-1}^r}). The greedy walks on Π_{T_i} and Π' starting from S_{T_i} are the same until the time T_{i+1}^1. If T_{i+1}^1=T_{I_1}, then T_{i+1}'=T_{i+1}^1=T_{i+1} and, thus, T_{i+1} is finite. Otherwise, the point at c_1 is visited before any point in I_1, so T_{c_1}^R<T_{i+1}' and T_{c_1} is, almost surely, finite. In this case, similarly as above, let us define Y_0^{c_1}=c_1, Y_1^{0,c_1}=min{Y∈Π^0: Y>max_{0≤n≤T_{c_1}}S_n} and Y_1^{r,c_1}=min{Y∈Π^r: Y>max_{0≤n≤T_{c_1}}S_n}. Moreover, let I_1^{c_1}=I_1 and let I_2^{c_1}=([Y_1^{0,c_1},∞)×{0})∪([Y_1^{r,c_1},∞)×{r}). From Lemma <ref> we can deduce that (c_1,max_{0≤n≤T_{c_1}^R}S_n]×{0,r} is empty at time T_{c_1}^R and, therefore, I_2^{c_1} contains all points of Π_{T_{c_1}^R} to the right of {(c_1,0),(c_1,r)}. Now, by the same arguments as above, it follows that the walk starting from the point at c_1 almost surely visits a point in I_1 or a point at c_2 in finite time. If the walk, starting from a point at c_k, 1≤k≤j-1, visits a point at c_{k+1} before I_1, we repeat the same procedure.
Since there are only finitely many such points, the walk eventually visits I_1 in almost surely finite time, and thus T_{i+1}<∞, almost surely. When i is odd, we can look at the walk on σΠ_{T_i} starting from σS_{T_i}. Then the same procedure as above yields T_{i+1}<∞, almost surely. Therefore, T_i is almost surely finite for all i.

Assume now that the walk does not visit all points of Π and let (x,ι_x) be a point of Π that is never visited. Then there is an even i_0 such that x<S_{T_{i_0}} and x-min_{0≤n≤T_{i_0}}S_n>r. Then for all n∈(T_{i_0},T_{i_0+1}-1) such that S_n≥x, by the choice of i_0, S_n is closer to (x,ι_x) than to any point in (-∞,min_{0≤n≤T_{i_0}}S_n)×{0,r}. Also, by Lemma <ref> (b), S_n is closer to (x,ι_x) than to any remaining point in [min_{0≤n≤T_{i_0}}S_n,x). Hence, the greedy walk visits (x,ι_x) before time T_{i_0+1}, which is a contradiction.

§ TWO PARALLEL LINES WITH SHIFTED POISSON PROCESSES

Let E=ℝ×{0,r} and let d be the Euclidean distance. We define a point process Π on E in the following way. Let Π^0 be a homogeneous Poisson process on ℝ with rate 1 and let Π^r be a copy of Π^0 shifted by s, 0<|s|<r/√3, i.e. Π^r={x: x-s∈Π^0}. Then, let Π be the point process on E with Π∩(B×{0})=(Π^0∩B)×{0} and Π∩(B×{r})=(Π^r∩B)×{r} for all Borel sets B⊂ℝ. We consider the greedy walk on Π defined by (<ref>) starting from S_0=(0,0). We call the pair of points (Y,0) and (Y+s,r) shifted copies. Moreover, we say that a point of Π is an indented point if it is further away from the vertical line {0}×ℝ than its shifted copy. That is, for s>0 the indented points are in (-∞,0)×{0} and (0,∞)×{r}; for s<0 the indented points are in (0,∞)×{0} and (-∞,0)×{r}.

We can divide the points of Π into clusters in the following way. Any two successive points on line 0 are in the same cluster if their distance is less than √(r^2+s^2); otherwise they are in different clusters. Moreover, any two points on line 0 are in the same cluster only if all points between those two points belong to that cluster. A point on line r belongs to the cluster of its shifted copy. Throughout this section, we call the point of a cluster on each line that is closest to the vertical line {0}×ℝ the leading point of the cluster. Every cluster has one leading indented and one leading unindented point, except the cluster around (0,0), which possibly has points in both (-∞,0)×{0,r} and (0,∞)×{0,r}.

In Section <ref> the points were divided into clusters in a similar way and we observed that the walk always visits all points of a cluster before moving to a new cluster. This is not the case here. See Figure <ref> for an example where the greedy walk moves to a new cluster before visiting all points in the current cluster. The points that are not visited during the first visit of a cluster cause the walk to jump over the vertical line {0}×ℝ infinitely many times. Therefore, we obtain here the same result as in Section <ref>:

Almost surely, the greedy walk visits all points of Π.

The proof follows similarly to the proof of Theorem <ref>. We change here the definition of the set Ξ. In addition, the arguments in the first part of the proof of Lemma <ref>, where we show that if Ξ is almost surely the empty set then the walk almost surely jumps over the starting point, are different from those in Lemma <ref>.
Furthermore, in the proof of Theorem <ref> we use the fact that whenever the walk enters a cluster at its leading indented point, it visits successively all points of the cluster. This can be explained as follows. Let (X,r) be the leading indented point and assume that (X,r) is the first point that the greedy walk visits in its cluster. (The case when the walk first visits the leading indented point on line 0 can be handled in the same way.) Denote the closest point to (X,r) on line r by (Y,r) and let a=|X-Y| be the distance between the points (X,r) and (Y,r). The distance between (X,r) and (X-s,0) is √(r^2+s^2) and the distance between (X,r) and (Y-s,0) is √(r^2+(a-s)^2). Because of the choice s<r/√3, we have 2s<√(r^2+s^2). Therefore, if r≤a<√(r^2+s^2), then

r^2+(a-s)^2 = a^2+r^2+s^2-2sa > a^2+r^2+s^2-2s√(r^2+s^2) = a^2+√(r^2+s^2)(√(r^2+s^2)-2s) > a^2.

Also, if a<r, then r^2+(a-s)^2 ≥ r^2 > a^2. Thus, when a<√(r^2+s^2), the point (Y,r) is closer to (X,r) than the shifted copies of these two points on line 0, and thus (Y,r) is visited next. We can argue in the same way for all the points of this cluster on line r, until the walk reaches the outermost point. The distance from the outermost point of the cluster to the closest point on line r is greater than √(r^2+s^2), and the closest unvisited point is its shifted copy. Once the walk is on line 0, it visits all remaining points of the cluster, because the distances between successive points in the cluster are less than √(r^2+s^2), all points of the cluster on line r are already visited, and the distance to any point in another cluster is greater than √(r^2+s^2).

We now define the set Ξ in a slightly different way than in Section <ref>. For x∈ℝ, let L^0(x)=max{Y∈Π^0: Y<x}. Then define

Ξ = s+{X∈Π^0: S_n^{(X,0)}∈((X,∞)×{0,r})∖{(X+s,r)} for all n≥1 and X-L^0(X)>(r^2+s^2)/(2s)},

that is, Ξ contains all points X+s∈Π^r such that the distance between (X,0) and the closest point on line 0 to the left of (X,0) is greater than (r^2+s^2)/(2s) and, if we start a walk from (X,0), then the walk always stays in [X,∞)×{0,r} but never visits (X+s,r). If X-L^0(X)>(r^2+s^2)/(2s) and the walk approaches (X,0) and (X+s,r) from their right, then the walk visits the point (X,0) before visiting (X+s,r). Hence, if X+s∈Ξ and X>0, then the point (X+s,r) is never going to be visited. Note that the set Ξ is a function of a homogeneous Poisson process and hence it is stationary and ergodic.

Let us now define the random variable W_x, x∈ℝ, that corresponds to the random variable W_{(x,ι)} from Section <ref>. For x∈ℝ, let Π^x=Π∩(([x,∞)×{0})∪([x+s,∞)×{r})). Consider for the moment the greedy walk on Π^x starting from (x,0) defined by (<ref>) and let W_x=inf_{i≥0}{S_i-d(S_i,S_{i+1})}. If s>0 and if for some x∈ℝ we have |W_x|<∞ and L^0(x)+s<W_x, then for the walk starting from (x,0) and for any i≥0 it holds that

min{d(S_i,(L^0(x),0)), d(S_i,(L^0(x)+s,r))} ≥ S_i-L^0(x)-s > S_i-W_x ≥ d(S_i,S_{i+1}),

that is, the point S_i is closer to S_{i+1} than to any point in Π∖Π^x. Therefore, the walk on Π starting from (x,0) coincides with the walk on Π^x and the walk does not visit any point in Π∖Π^x. The opposite is also true, i.e. if the walk on Π starting from (x,0) coincides with the walk on Π^x, then |W_x|<∞. The same holds also for s<0: if |W_x|<∞ and L^0(x)<W_x for x∈ℝ, then the walk on Π starting from (x,0) does not visit Π∖Π^x, and if the walk does not visit Π∖Π^x then |W_x|<∞.
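The clustering rule and the distance inequality above are easy to probe numerically. The following minimal Python sketch (our own illustration, not part of the analysis; the window width and the values of r and s are arbitrary choices, with a finite interval standing in for ℝ) samples the shifted process, groups the points of Π^0 into clusters according to the √(r^2+s^2) rule, and verifies that r^2+(a-s)^2>a^2 for all a<√(r^2+s^2).

import numpy as np

rng = np.random.default_rng(0)

r, s = 1.0, 0.3          # illustrative values with 0 < s < r/sqrt(3)
width = 50.0             # finite window standing in for the real line

# Sample Pi^0 as a rate-1 Poisson process on (-width, width);
# Pi^r is its copy shifted by s.
n = rng.poisson(2 * width)
pi0 = np.sort(rng.uniform(-width, width, n))
pir = pi0 + s

# Clustering rule: successive points on line 0 belong to the same
# cluster iff their gap is smaller than sqrt(r^2 + s^2); points on
# line r join the cluster of their shifted copies.
thr = np.sqrt(r**2 + s**2)
breaks = np.where(np.diff(pi0) >= thr)[0] + 1
clusters = np.split(pi0, breaks)
print(len(clusters), "clusters; first sizes:", [len(c) for c in clusters[:5]])

# Check of the distance inequality used above: for all a < sqrt(r^2+s^2),
# r^2 + (a-s)^2 > a^2, so within a cluster the walk prefers the next
# point on line r over the shifted copies on line 0.
a = np.linspace(0.0, thr, 1000, endpoint=False)
assert np.all(r**2 + (a - s)**2 > a**2)
print("inequality r^2+(a-s)^2 > a^2 holds for all a <", round(thr, 3))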
If s>0 and if Ξ is almost surely the empty set, then Ψ={X∈Π^0: |W_X|<∞} is almost surely the empty set.

Since W_X is identically distributed for all X∈Π^0, Ψ is stationary and ergodic. Suppose, on the contrary, that Ψ is almost surely a non-empty set. For d>0 let Ψ_d={X∈Π^0: |W_X|<d} and note that ⋃_{d>0}Ψ_d=Ψ. Then there exists d such that Ψ_d is almost surely a non-empty set. Let Ψ̃_d be the set of all X∈Ψ_d which satisfy the following. First, there are no points in (X-d,X)×{0,r}. Secondly, there is Y∈Π^0, Y<X, such that the distance between Y and max{Z∈Π^0: Z<Y} is greater than (r^2+s^2)/(2s). Thirdly, the walk starting from (Y,0) stays in (Y,∞)×{0,r} until it visits (X,0) and never visits (Y+s,r). Since all three conditions occur with positive probability, and this probability is independent of W_X, Ψ̃_d is almost surely a non-empty set. But then, by the definition of Ξ, for every X∈Ψ̃_d we have Y+s∈Ξ, so Ξ is a non-empty set, which is a contradiction.

In the next two lemmas we use the random variable D_X, X∈Π^r, which can be compared with the corresponding random variable of Section <ref> defined in (<ref>). For X∈Π^r∖Ξ set D_X=0. For X∈Ξ denote the points of Ξ∩(-∞,X] in decreasing order …<z_2<z_1<z_0=X and define D_X as

D_X = sup_{i≥0}{2z_i-z_{i+1}-X}.

Also, we define the set Ξ_{d_1,d_2} in the same way as in Section <ref>. For d_1,d_2>0 define

Ξ_{d_1,d_2}={X∈Ξ: D_X<d_1 and there exists Y∈Π^r such that 0<Y-X<d_2 and Π̄∩(Y,Y+r)=∅},

where Π̄=Π^0∪Π^r. The following lemma corresponds to Lemma <ref>. Since the proof is very similar, it is not included here.

If s>0 and if Ξ is almost surely a non-empty set, then there exist d_1,d_2>0 such that Ξ_{d_1,d_2} is almost surely a non-empty set. Moreover, Ξ_{d_1,d_2} is a stationary and ergodic process.

We study the greedy walk starting from the point (0,0), which is almost surely not a point of Π. From now on denote the points of Π^0 by …<X_{-2}<X_{-1}≤0<X_1<X_2<…. Also, we let Π̄=Π^0∪Π^r be the shadow of all the points of the process Π. As in Section <ref>, we denote by Π_n the set of points that are not visited until time n. Similarly, Π^0_n and Π^r_n denote the unvisited points of Π^0 and Π^r until time n, respectively, and Π̄_n=Π^0_n∪Π^r_n. We define T_A=inf{n≥0: S_n∈A} to be the first time the walk visits A×{0,r}, where A is a subset of ℝ.

Define the variable D_x, x∈ℝ, x>0, as follows. If T_{(-∞,0)}<T_{[x,∞)}, then D_x=0. Otherwise, 0<S_1,S_2,…,S_{T_{[x,∞)}-1}<x, S_{T_{[x,∞)}}≥x, and we label the remaining points of Π̄_{T_{[x,∞)}} in the interval (0,x) by z_1,z_2,…,z_{n-1}, so that 0=z_n<z_{n-1}<…<z_1<z_0=x. Let then

D_x=max_{0≤i≤n-1}{(z_i-z_{i+1})-(x-z_i)}=max_{0≤i≤n-1}{2z_i-z_{i+1}-x}.

From the definition it follows that 0≤D_x≤x. As in Section <ref>, this random variable measures how large the minimal distance between (x,0) or (x,r) and the points in (x,∞)×{0,r} should be so that the walk possibly visits a point in (-∞,0)×{0,r} before visiting any point in (x,∞)×{0,r}.

We prove next that D_X<d_0 for infinitely many X∈Π^0. The proof is divided into two parts, one discussing the case when Ξ is almost surely the empty set and one discussing the case when Ξ is a non-empty set. The proof of the second case follows in a similar way to the second part of the proof of Lemma <ref>, so we do not write out all the details here.

If s>0, then there exists d_0<∞ such that, almost surely, D_{X_k+s}<d_0 and X_{k+1}-X_k>r+s for infinitely many k>0.

Assume first that Ξ is almost surely the empty set. Then, by Lemma <ref>, {X∈Π^0∩(0,∞): |W_X|<∞} is almost surely empty.
Observe that d((X_1,0),(0,0))<d((X_1+s,r),(0,0)) and, if X_{-1}+s>0, then d((X_{-1},0),(0,0))<s<r<d((X_{-1}+s,r),(0,0)). Thus, if the walk starts from (0,0) and S_1>0, then S_1 must be (X_1,0). Assume that {S_n>0 for all n≥1} occurs with positive probability. Then one of the following three events also has positive probability.

First, {X_{-1}+s<0, S_n>0 for all n≥1}. But this event implies that |W_{X_1}|<∞ and X_1∈Ψ, which has probability 0.

Secondly, {0<X_{-1}+s, S_n>0 for all n≥1 and the walk does not visit (X_{-1}+s,r)}. If this event occurs, then the walk does not visit any point in (0,X_{-1}+s)×{r}, because X_{-1}+s<r/√3 and the distance from any point in the interval (0,X_{-1}+s)×{r} to the point (X_{-1}+s,r) is smaller than the distance from any point in (0,X_{-1}+s)×{r} to a point in (0,∞)×{0}. Therefore, from S_1=(X_1,0) the walk visits just the points of Π^{X_1} and we can conclude that |W_{X_1}|<∞, which is impossible.

Thirdly, {S_n>0 for all n≥1 and the walk visits (X_{-1}+s,r)}. Assume that this event occurs. There are almost surely finitely many points in (0,X_{-1}+s]×{r} and thus there is a time k when that interval is visited for the last time. From Lemma <ref> it follows that all points in (S_k,max_{0≤i≤k}S_i]×{0,r} are visited up to time k. Therefore, S_{k+1} is in (max_{0≤i≤k}S_i,∞)×{0,r}. Since the distance from S_k to (X_{-1},0) is at most √(r^2+s^2), the distance from S_k to S_{k+1} is less than √(r^2+s^2). Thus, we can conclude that both S_k and S_{k+1} belong to the cluster around (X_1,0). Moreover, the greedy walk did not visit another cluster before time k and it moved from line 0 to line r only once. At time k there might be some unvisited points of the cluster around (X_1,0) on line r whose shifted copies were visited before time k. Those points are visited directly after S_k, because they are closer to S_k than any unvisited point on line 0. After visiting those points, all remaining unvisited points of the cluster around (X_1,0) have an unvisited shifted copy. We can think of that part of the cluster as a new cluster. The walk visits the next cluster starting from the indented or the unindented leading point. If it visits the indented point first, then the walk visits consecutively all the points of that cluster and afterwards it visits the unindented leading point of the next cluster. Once the walk is at the unindented leading point (Y,0), all points in (0,Y)×{0,r} are visited, except possibly some points in (0,X_{-1}+s]×{r} which are never visited. Since S_n>0 for all n≥1, we can conclude that the walk after visiting (Y,0) stays in Π^Y. But then |W_Y|<∞ and Ψ is not empty, which is a contradiction.

Since these three events almost surely do not occur, {S_n>0 for all n≥1} almost surely does not occur either. Thus T_{(-∞,0)}<∞, almost surely. This, together with the definition of D_x, implies that D_x=0 for all large enough x≥0. Since X_{k+1}-X_k>r+s for infinitely many k>0, the claim of the lemma holds for any d_0>0.

If Ξ is a non-empty set, the proof follows in the same way as the corresponding part of the proof of Lemma <ref>. Thus we omit the proof here and only emphasize that the points in Ξ∩(0,∞) are never visited, because the condition X-L^0(X)>(r^2+s^2)/(2s) implies that for X∈Ξ∩(0,∞) the points in (-∞,X-s)×{0,r} are closer to (X-s,0) than to (X,r). Thus, the point (X-s,0) is visited first and then, by the definition of Ξ, the walk never visits (X,r).
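The recurrence statements of this section can also be probed by direct simulation. The sketch below (again our own illustration; the window width, the parameter values and the step budget are arbitrary, with a finite window in place of the infinite process) runs the greedy walk of (<ref>) on the shifted process started at (0,0) and records how often it visits (-∞,0)×{0,r} within the step budget.

import numpy as np

rng = np.random.default_rng(1)

def first_left_crossing(width=200.0, r=1.0, s=0.3, max_steps=200):
    """Greedy walk started at (0,0) on a finite window of the shifted
    process; return the step of the first visit to (-inf,0) x {0,r},
    or None if it does not occur within the budget."""
    n = rng.poisson(2 * width)                       # rate-1 Poisson on (-width, width)
    x0 = rng.uniform(-width, width, n)
    pts = np.vstack([np.c_[x0, np.zeros(n)],         # line 0
                     np.c_[x0 + s, np.full(n, r)]])  # line r (shifted copies)
    visited = np.zeros(len(pts), dtype=bool)
    pos = np.array([0.0, 0.0])
    for step in range(1, min(max_steps, len(pts)) + 1):
        dist = np.linalg.norm(pts - pos, axis=1)     # greedy rule: nearest
        dist[visited] = np.inf                       # unvisited point wins
        i = int(np.argmin(dist))
        visited[i] = True
        pos = pts[i]
        if pos[0] < 0:
            return step
    return None

crossings = [first_left_crossing() for _ in range(100)]
hits = [c for c in crossings if c is not None]
print(f"crossed to the left of 0 within the budget in {len(hits)}/100 runs")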
Using the facts that D_{X_k+s}<d_0 occurs for infinitely many k>0 and that X_{k+1}-X_k>d_0 occurs for infinitely many k>0, we show in the next lemma that there are infinitely many k>0 such that both events occur simultaneously.

If s>0 then, almost surely, the events A_k={X_{k+1}-X_k>D_{X_k+s}-X_{-1}+r+s} occur for infinitely many k>0.

Let j_s=max{i: X_i<-s}, B_k={X_{j_s},X_{j_s+1},…,X_{-1},X_1,…,X_{k-1},X_k} and ℱ_k=σ(B_k). Let T_A^σ and D_x^σ be the analogues of T_A and D_x for the greedy walk on the set of points (B_k×{0})∪((B_k+s)×{r}). When T_{[X_k+s,∞)}<T_{(-∞,0)}, the greedy walk on Π and the walk on the restricted set are the same until time T_{[X_k+s,∞)}. Moreover, if X_{k+1}-X_k>r+s, then S_{T_{[X_k+s,∞)}}=X_k+s, T_{X_k+s}^σ=T_{[X_k+s,∞)} and D_{X_k+s}^σ=D_{X_k+s}. Let A_k^σ={X_{k+1}-X_k>D_{X_k+s}^σ-X_{-1}+r+s} and observe that A_k^σ∈ℱ_{k+1}. For d_0>0 we have

ℙ(A_k^σ | ℱ_k) ≥ ℙ(D_{X_k+s}^σ<d_0, X_{k+1}-X_k>d_0-X_{-1}+r+s | ℱ_k) = 𝟙_{D_{X_k+s}^σ<d_0} ℙ(X_{k+1}-X_k>d_0-X_{-1}+r+s | ℱ_k) = 𝟙_{D_{X_k+s}^σ<d_0} e^{-(d_0-X_{-1}+r+s)} a.s.

The first equality above holds because {D_{X_k+s}^σ<d_0}∈ℱ_k. The second equality follows from the facts that X_{-1}∈ℱ_k and that X_{k+1}-X_k is exponentially distributed with mean 1 and independent of ℱ_k. By Lemma <ref>, we can choose d_0 such that D_{X_k+s}<d_0 and X_{k+1}-X_k>r+s for infinitely many k, almost surely. Since D_{X_k+s}^σ=D_{X_k+s} whenever X_{k+1}-X_k>r+s, also D^σ_{X_k+s}<d_0 for infinitely many k and, thus, ∑_{k=1}^∞ ℙ(A_k^σ | ℱ_k)=∞ a.s. It follows now from Lemma <ref> that ℙ(A_k^σ for infinitely many k≥1)=1. Since A_k^σ⊂{X_{k+1}-X_k>r+s} and A_k=A_k^σ whenever X_{k+1}-X_k>r+s, also ℙ(A_k for infinitely many k≥1)=1.

Whenever A_k occurs, the greedy walk is forced to visit (-∞,0)×{0,r} before visiting [X_{k+1},∞)×{0,r}. This together with Lemma <ref> implies that T_{(-∞,0)}<∞ when s>0. The same is also true if s<0, and to prove this we use that T_{(-∞,0)}<∞, almost surely, for s>0.

Almost surely, T_{(-∞,0)}<∞.

When s>0, one can show in the same way as in Lemma <ref> that whenever A_k={X_{k+1}-X_k>D_{X_k+s}-X_{-1}+r+s} occurs, the walk visits (-∞,0)×{0,r} before visiting [X_{k+1},∞)×{0,r}. By Lemma <ref>, the event A_k occurs for some k, almost surely, and hence T_{(-∞,0)}<∞, almost surely. Furthermore,

ℙ(T_{(-∞,0)}<∞ | X_{-1},X_1)=1 a.s.,

for any absolutely continuous distribution of (X_{-1},X_1) on (-∞,0)×(0,∞) which is independent of Π^0∩(X_1,∞).

Assume now, on the contrary, that ℙ(T_{(-∞,0)}=∞)>0 for s<0. If T_{(-∞,0)}=∞, then S_1 is in (0,∞)×{0} or in (0,∞)×{r}. If S_1 is on line 0, then S_1 is indented and the walk consecutively visits all points of its cluster. Since the walk stays in (0,∞)×{0,r}, the last visited point of this cluster is (X_1+s,r) and X_1+s>0. Let T be the time when the walk visits (X_1+s,r) and let Y_0=(X_1+s,r). Moreover, let Y_{-1}=X_{-1}+s and let Y_1=min{Y∈Π^r: Y>max_{0≤n≤T}S_n} be the leading indented point of the next cluster on the right of Y_0. If S_1 is on line r, then the greedy walk is the same as if the walk started from (0,r). Thus let Y_0=(0,r), T=0, Y_{-1}=X_{-1}+s and Y_1=X_1+s. From (<ref>) it follows that the walk on θ_{Y_0}(Π_T), starting from (0,0) and given X_{-1}=Y_{-1} and X_1=Y_1, visits (-∞,0)×{0,r} in finite time, almost surely. In other words, the walk on Π_T starting from Y_0 visits (X_{-1},0) or (X_{-1}+s,r) in finite time, almost surely. This contradicts the assumption that ℙ(T_{(-∞,0)}=∞)>0. Therefore, the claim of the lemma holds also for s<0.

[Proof of Theorem <ref>] From Lemma <ref> it follows that for any |s|<r/2,

ℙ(T_{(-∞,0)}<∞ | X_{-1},X_1)=1 a.s.,

for any absolutely continuous distribution of (X_{-1},X_1) on (-∞,0)×(0,∞) which is independent of Π^0∩(X_1,∞).
We prove the theorem for s>0; the proof for s<0 follows in a similar way. Let us first look at the cluster around the starting point of the greedy walk, (0,0). If X_1-X_{-1}>√(r^2+s^2), this cluster is empty. If the cluster is not empty, it has finitely many points and the walk visits a point in another cluster in finite time. Let T_0 be the first time the walk visits a point in another cluster (T_0=1 if the cluster around (0,0) is empty). We assume for the moment that S_{T_0}>0. For i≥1 let T_i be the first time the greedy walk visits (-∞,min_{0≤n≤T_{i-1}}S_n)×{0,r} for i odd and (max_{0≤n≤T_{i-1}}S_n,∞)×{0,r} for i even. That is, for i odd (even), T_i is the time when the walk visits the part of Π on the left (right) of the vertical line {0}×ℝ which is not visited up to time T_{i-1}.

Let Y_0=S_{T_0-1} be the last visited point in the cluster around (0,0) before the first visit to another cluster and let ι be the line of S_{T_0-1}. Moreover, let Y_{-1} and Y_1 be the closest not yet visited points of Π^ι to Y_0 such that their shifted copies are also not visited, that is,

Y_{-1}=max{Y∈Π^ι_{T_0}: Y<min_{0≤n<T_0}S_n, Y+(-1)^{𝟙_r(ι)}·s∈Π^{r-ι}_{T_0}} and Y_1=min{Y∈Π^ι_{T_0}: Y>max_{0≤n<T_0}S_n, Y+(-1)^{𝟙_r(ι)}·s∈Π^{r-ι}_{T_0}}.

Let I_1=((-∞,Y_{-1})×{ι})∪((-∞,Y_{-1}+(-1)^{𝟙_r(ι)}·s)×{r-ι}) and I_2=((Y_1,∞)×{ι})∪((Y_1+(-1)^{𝟙_r(ι)}·s,∞)×{r-ι}).

Because of the strong Markov property, the distribution of Π in I_1 and I_2 is independent of the points of Π outside these sets. Now let Π'=Π∩(I_1∪I_2). From (<ref>), we know that the walk on σθ_{Y_0}(Π'), starting from (0,0) and given X_{-1}=Y_{-1} and X_1=Y_1, visits (-∞,0)×{0,r} in finite time, almost surely. Hence, the walk on Π_{T_0} starting at S_{T_0} visits I_1 or a point of the cluster around (0,0) in almost surely finite time. Denote that time by T_1'. If S_{T_1'} is in I_1, then T_1=T_1'. Otherwise, S_{T_1'} is in ((-∞,Y_0)×{0,r})∖I_1 and from the definition of I_1 we can deduce that the shifted copy of the point S_{T_1'} must have been visited before T_1'. Set now Y_0=S_{T_1'} and redefine Y_{-1}, Y_1, I_2 and Π' with respect to the time T_1' instead of T_0. Observe that, by Lemma <ref>, at time T_1' the set (S_{T_1'},max_{0≤n≤T_1'}S_n)×{0,r} is empty. Moreover, if there are some points in (max_{0≤n≤T_1'}S_n,∞)×{0,r} whose shifted copies are visited, then these points are on line r and belong to one cluster. The closest point with a still unvisited shifted copy is at a horizontal distance of at least r-s from those points. Since the distance from S_{T_1'} to (max_{0≤n≤T_1'}S_n,∞)×{0,r} is at least √(r^2+s^2), the remaining points on line r which do not have an unvisited shifted copy are closer to S_{T_1'} than the closest point on line 0. Thus these points, whose shifted copies are already visited, are visited before the walk visits (Y_1,ι) or its shifted copy. Now we can conclude that the walk starting from S_{T_1'} visits, in almost surely finite time, I_1 or a point of Π∖(I_1∪I_2), that is, one of the remaining points of the cluster around (0,0) or a point in (max_{0≤n≤T_1'}S_n,Y_1)×{r}. There are finitely many points in Π∖(I_1∪I_2), and every time the walk visits one of these points we redefine Y_0, Y_{-1}, Y_1, I_2 and Π' and repeat the same arguments as above. Thus the walk visits I_1 in almost surely finite time.

Assume now that T_i is finite for some odd i≥1. Let Y_0=S_{T_i-1} and define Y_{-1}, Y_1, I_1, I_2 and Π' as before. Then the points of Π_{T_i}∖Π' are in (Y_{-1},0)×{r}, in (0,Y_1)×{0} or in the cluster around (0,0), and there are almost surely finitely many such points.
By the observation above, the greedy walk visits all points in (Y_{-1},Y_0)×{0} before it visits I_1, and it visits all points in (Y_0,Y_1)×{r} before it visits a point in I_2. Denote by T_i' the time when the walk visits a point of Π_{T_i}∖Π' before visiting I_2. Set then Y_0=S_{T_i'} and redefine Y_{-1}, Y_1 and Π' with respect to the time T_i'. Again, by (<ref>), the walk on σθ_{Y_0}Π' visits (-∞,0)×{0,r} in almost surely finite time. Thus, the walk on Π_{T_i'} visits, in almost surely finite time, I_2 or another point in Π_{T_i}∖Π'. Repeating these arguments for every visited point in Π_{T_i}∖Π', we can see that the walk eventually visits I_2 and that T_{i+1} is almost surely finite.

Similarly, one can show that if T_i, i≥2 even, is finite, then T_{i+1} is also almost surely finite. Therefore, inductively we can conclude that the walk almost surely crosses the vertical line {0}×ℝ infinitely many times and, thus, it eventually visits all points of Π.

We conjecture that Theorem <ref> holds also for |s|>r/2. For those s the idea of clustering the points of Π does not work in the same way. For example, the greedy walk does not always visit all points of a cluster when it starts from the leading indented point of the cluster. Thus, the walk more often does not visit all points of a cluster successively, and we expect that the points that are not visited during the first visit of a cluster cause the walk to return and to cross the vertical line {0}×ℝ infinitely often.

The author thanks Svante Janson, Takis Konstantopoulos and Erik Thörnblad for valuable comments.

Katja Gabrysch, Department of Mathematics, Uppsala University, PO Box 480, 751 06 Uppsala, Sweden. E-mail address: [email protected]
"authors": [
"Katja Gabrysch"
],
"categories": [
"math.PR"
],
"primary_category": "math.PR",
"published": "20170325155153",
"title": "Greedy walks on two lines"
} |
A near-field study on the transition from localized to propagating plasmons on 2D nano-wedges

Thorsten Weber,^1,2 Thomas Kiel,^3 Stephan Irsen,^2 Kurt Busch^3,4 and Stefan Linden^1,∗

^1 Physikalisches Institut, Rheinische Friedrich-Wilhelms Universität, 53115 Bonn, Germany
^2 Electron Microscopy and Analytics, Center of Advanced European Studies and Research, 53175 Bonn, Germany
^3 Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
^4 Max-Born-Institut, 12489 Berlin, Germany
^∗ [email protected]

Abstract: In this manuscript we report on a near-field study of two-dimensional plasmonic gold nano-wedges using electron energy loss spectroscopy in combination with scanning transmission electron microscopy, as well as discontinuous Galerkin time-domain computations. With increasing nano-wedge size, we observe a transition from localized surface plasmons on small nano-wedges to non-resonant propagating surface plasmon polaritons on large nano-wedges. Furthermore, we demonstrate that nano-wedges with a groove cut can support localized as well as propagating plasmons in the same energy range.

Hartschuh2003 A. Hartschuh, E. J. Sánchez, X. S. Xie, and L. Novotny, High-resolution near-field Raman microscopy of single-walled carbon nanotubes, PRL 90, 095503 (2003). Keilmann2004 F. Keilmann and R. Hillenbrand, Near-field microscopy by elastic light scattering from a tip, Phil. Trans. R. Soc. A 362, 787–805 (2004). Neacsu2010 C. C. Neacsu, S. Berweger, R. L. Olmon, L. V. Saraf, C. Ropers, and M. B. Raschke, Near-field localization in plasmonic superfocusing: A nanoemitter on a tip, Nano Lett. 10, 592–596 (2010). Kruger2011 M. Krüger, M. Schenk, and P. Hommelhoff, Attosecond control of electrons emitted from a nanoscale metal tip, Nature 475, 78–81 (2011). Herink2012 G. Herink, D. R. Solli, M. Gulde, and C. Ropers, Field-driven photoemission from nanostructures quenches the quiver motion, Nature 483, 190–193 (2012). Maier2007 S. A. Maier, Plasmonics: Fundamentals and Applications (Springer, 2007). Muehlschlegel2005 P. Mühlschlegel, H.-J. Eisler, O. J. F. Martin, B. Hecht, and D. W. Pohl, Resonant optical antennas, Science 308, 1607–1609 (2005). Hentschel2010 M. Hentschel, M. Saliba, R. Vogelgesang, H. Giessen, A. P. Alivisatos, and N. Liu, Transition from isolated to collective modes in plasmonic oligomers, Nano Lett. 10, 2721–2726 (2010). Boudarham2010 G. Boudarham, N. Feth, V. Myroshnychenko, S. Linden, F. J. García de Abajo, M. Wegener, and M. Kociak, Spectral imaging of individual split-ring resonators, PRL 105, 255501 (2010). Nerkararyan1997 K. Nerkararyan, Superfocusing of a surface polariton in a wedge-like structure, Phys. Lett. A 237, 103–105 (1997). Stockman2004 M. Stockman, Nanofocusing of optical energy in tapered plasmonic waveguides, PRL 93, 137404 (2004). Schroder2015 B. Schröder, T. Weber, S. V. Yalunin, T. Kiel, C. Matyssek, M. Sivis, S. Schäfer, F. von Cube, S. Irsen, K. Busch, C. Ropers, and S. Linden, Real-space imaging of nanotip plasmons using electron energy loss spectroscopy, PRB 92, 085411 (2015). Yalunin2016 S. V. Yalunin, B. Schröder, and C. Ropers, Theory of electron energy loss near plasmonic wires, nanorods, and cones, PRB 93, 115408 (2016). Guo2016 S. Guo, N. Talebi, W. Sigle, R. Vogelgesang, G. Richter, M. Esmann, S. F. Becker, C. Lienau, and P. A. van Aken, Reflection and phase matching in plasmonic gold tapers, Nano Lett. 16, 6137–6144 (2016). Nelayah2007 J. Nelayah, M. Kociak, O. Stéphan, F. J. García de Abajo, M. Tencé, L. Henrard, D. Taverna, I. Pastoriza-Santos, L.
M. Liz-Marzán, and C. Colliex, Mapping surface plasmons on a single metallic nanoparticle, Nature Phys. 3, 348–353 (2007). Bosman2007 M. Bosman, V. J. Keast, M. Watanabe, A. I. Maaroof, and M. B. Cortie, Mapping surface plasmons at the nanometre scale with an electron beam, Nanotechnology 18, 165505 (2007). Rossouw2011 D. Rossouw, M. Couillard, J. Vickery, E. Kumacheva, and G. A. Botton, Multipolar plasmonic resonances in silver nanowire antennas imaged with a subnanometer electron probe, Nano Lett. 11, 1499–1504 (2011). Cube2011 F. von Cube, S. Irsen, J. Niegemann, C. Matyssek, W. Hergert, K. Busch, and S. Linden, Spatio-spectral characterization of photonic meta-atoms with electron energy-loss spectroscopy [Invited], Opt. Mater. 1, 1009–1018 (2011). Cube2013 F. von Cube, S. Irsen, R. Diehl, J. Niegemann, K. Busch, and S. Linden, From isolated metaatoms to photonic metamaterials: Evolution of the plasmonic near-field, Nano Lett. 13, 703–708 (2013). Huth2013 F. Huth, A. Chuvilin, M. Schnell, I. Amenabar, R. Krutokhvostov, S. Lopatin, and R. Hillenbrand, Resonant antenna probes for tip-enhanced infrared near-field microscopy, Nano Lett. 13, 1065–1072 (2013). Schoen2015 D. T. Schoen, A. C. Atre, A. García-Etxarri, J. A. Dionne, and M. L. Brongersma, Probing complex reflection coefficients in one-dimensional surface plasmon polariton waveguides and cavities using STEM EELS, Nano Lett. 15, 120–126 (2015). Walther2016 R. Walther, S. Fritz, E. Müller, R. Schneider, D. Gerthsen, W. Sigle, T. Maniv, H. Cohen, C. Matyssek, and K. Busch, Coupling of surface-plasmon-polariton-hybridized cavity modes between submicron slits in a thin gold film, ACS Photonics 3, 836–843 (2016). GarciadeAbajo2008 F. J. García de Abajo and M. Kociak, Probing the photonic local density of states with electron energy loss spectroscopy, PRL 100, 106804 (2008). GarciaDeAbajo2010 F. J. García de Abajo, Optical excitations in electron microscopy, Rev. Mod. Phys. 82, 209–275 (2010). Hohenester2009 U. Hohenester, H. Ditlbacher and J. R. Krenn, Electron-energy-loss spectra of plasmonic nanoparticles, PRL 103, 106801 (2009). Busch2011 K. Busch, M. König, and J. Niegemann, Discontinuous Galerkin methods in nanophotonics, Laser Photon. Rev. 5, 773–809 (2011). Matyssek2011 C. Matyssek, J. Niegemann, W. Hergert, and K. Busch, Computing electron energy loss spectra with the discontinuous Galerkin time-domain method, Photon. Nanostruct. Fundam. Appl. 9, 367–373 (2011). Ritchie1957 R. H. Ritchie, Plasma losses by fast electrons in thin metal films, Phys. Rev. 106, 874–881 (1957). Schmidt2015 F. P. Schmidt, H. Ditlbacher, A. Trügler, U. Hohenester, A. Hohenau, F. Hofer, and J. R. Krenn, Plasmon modes of a silver thin film taper probed with STEM-EELS, Optics Letters 40, 5670–5673 (2015).

§ INTRODUCTION

Focusing light down to nanometric volumes is of great interest for various applications. For instance, nanoscale spectroscopy and imaging techniques such as tip-enhanced Raman scattering, apertureless near-field optical microscopy (A-NSOM), and ultrafast photoemission of electrons make use of highly confined electromagnetic fields <cit.>. Many of these techniques rely on the excitation of plasmonic modes in suitable metallic nano-structures, where the collective oscillations of the conduction band electrons in the metal lead to strongly enhanced electromagnetic near-fields <cit.>. In metallic nano-structures that are small compared to the relevant free-space wavelength, e.g.
nano-antennas <cit.>, plasmonic oligomers <cit.>, or split-ring resonators <cit.>, the boundary conditions give rise to standing-wave patterns of the charge carrier oscillations. The corresponding resonant modes are the so-called localized surface plasmons (LSPs), which typically exhibit hot-spots of the electromagnetic field in the vicinity of the surface. The spectral and spatial properties of the LSPs can be controlled by designing the shape, size, or surrounding media of the nano-structure <cit.>. Extended metal nano-structures support propagating surface plasmon polaritons (SPPs), which are formed by charge carrier density waves moving along the surface. On tapered nano-structures such as nano-wedges, the SPP dispersion relation varies along the propagation direction. This can be utilized to compress SPPs and to achieve strong local-field enhancements. For instance, an SPP propagating along a nano-wedge towards the apex is gradually slowed down as it reaches the apex region <cit.>. For most experimentally accessible wedge parameters, the SPP is, however, not brought to a complete stop but rather is reflected at the apex <cit.>.

Precise knowledge of the spatio-spectral distribution of the plasmonic near-field is of utmost importance for the applications mentioned above. A very powerful experimental method that allows one to map localized as well as propagating plasmon modes is electron energy loss spectroscopy (EELS) in combination with scanning transmission electron microscopy (STEM) <cit.>. In STEM-EELS, a highly focused electron beam scans over the sample. Passing nearby or through the structure, the electrons can excite plasmon modes. As a result, the fast electrons lose kinetic energy by interacting with the self-induced electric field in the vicinity of the nano-structure. The probability for this process, the so-called electron energy loss probability (EELP), is related to the plasmonic local density of states (LDOS) and can thereby be used to characterize the plasmonic nano-structure <cit.>.

In this work, we report on a near-field study of two-dimensional plasmonic gold nano-wedges using STEM-EELS. With increasing nano-wedge size, we observe a transition from LSPs on resonant nano-wedges to non-resonant propagating SPPs on large nano-wedges. Furthermore, we demonstrate that nano-wedges with a groove cut can support both localized and propagating plasmon modes. The experimental data is compared to numerical computations based on the discontinuous Galerkin time-domain (DGTD) method.

§ METHODS AND INSTRUMENTATION

All samples have a wedge-like form with an opening angle of 14° and are fabricated by standard electron beam lithography on 30 nm thin silicon nitride membranes <cit.>. The structures themselves consist of 30 nm thin thermally evaporated gold films. For the STEM-EELS experiments we use a Zeiss Libra200 MC Cs-STEM CRISP (Corrected Illumination Scanning Probe) operated at 200 kV. The instrument is equipped with a monochromated Schottky-type field-emission cathode and a double hexapole-design corrector for spherical aberrations of the illumination system (Cs-corrector). An in-column omega-type energy filter, fully corrected for second-order aberrations, is integrated into the microscope. The spectra are recorded using a Gatan UltraScan 2k × 2k CCD camera with an acquisition time of 5 ms for each spectrum and a spectrometer dispersion of 0.016 eV per channel.
The energy resolution, which is defined by the FWHM of the spectrum's zero-loss peak measured through the silicon nitride membrane, is 0.1 eV in the center of the scan area. In the present STEM-EELS experiments, the electron beam raster scans over the sample with a step width ranging between 5 nm and 20 nm, depending on the size of the investigated area. All spectra are recorded for normal incidence of the electron beam. For data postprocessing, each spectrum is normalized to its total number of electron counts. Subsequently, the first moment of the zero-loss peak is centered to 0 eV and a background spectrum is subtracted. In the presented EEL maps, the signal is normalized to the maximum value found in the relevant area for the given electron loss energy.

To support our experimental findings, we have performed numerical computations for selected structures using the DGTD method <cit.>. The geometrical parameters for the nano-wedge are taken from the corresponding HAADF micrographs. The nano-wedges are embedded in vacuum and have a dielectric permittivity which is approximated using a Drude–Lorentz model for gold, similar to Ref. <cit.>. The swift electron's speed is set to v = 0.77c. We obtain the induced polarization of a relativistically moving electron passing the gold nano-wedges and determine the back-action of the induced field on this electron <cit.>. The expansion of the electric and magnetic fields into Lagrange polynomials of third order is carried out for each mesh element of the structure. The mesh geometry is determined by inspection of the micrograph of the corresponding structure, and the mesh elements have side lengths down to 5 nm (see figure <ref>).

§ RESULTS

As an example for a small resonant plasmonic nano-structure, we first investigate a gold nano-wedge with a length of 600 nm (see electron micrograph in figure <ref>). As expected, the nano-wedge exhibits a number of distinct LSP modes with resonance energies of 0.5 eV, 1.0 eV, 1.35 eV, and 1.73 eV, respectively. LSP modes with larger resonance energies cannot be clearly resolved because of the onset of the interband transitions in gold. The EEL maps of the observed LSP modes are displayed in panels (a)-(d) of figure <ref>. The fundamental LSP mode (0.5 eV) has two pronounced EELP maxima near the ends of the nano-wedge. This clearly indicates that this mode is of dipolar character. The higher-order LSP modes exhibit an increasing number of EELP maxima along the nano-wedge axis (z-axis). In a simple standing-wave picture, the maxima correspond to the locations of nodes of the charge carrier oscillations. Note that these LSP modes could be excited in an optical far-field experiment with light polarized along the z-axis. Figure <ref>(f) displays the EELP recorded along the edge of the nano-wedge (see blue box in the micrograph) for different loss energies on a false-color scale. Here, the EELP signal was integrated over 3 pixels perpendicular to the edge for each z-position along the nano-wedge. In this representation, we can clearly identify the different discrete LSP modes. Our experimental findings are reproduced by the numerical computations. Figure <ref> displays the computed EELP along a line parallel to the edge of a 600 nm long nano-wedge. The thickness of the gold film as well as the opening angle of the nano-wedge have been chosen as in the experiment. As in the experimental data, we observe a set of discrete modes, which can be assigned to the localized surface plasmons of the nano-wedge.
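As a concrete illustration of the spectrum postprocessing described at the beginning of this section (per-spectrum normalization, centering of the zero-loss peak, background subtraction, and per-energy normalization of the EEL maps), the following Python sketch is a schematic of our own and not the actual acquisition pipeline; in particular, the zero-loss-peak window of 0.5 eV is an arbitrary assumption, and the background is assumed to be given on the shifted energy grid.

import numpy as np

def postprocess_spectrum(energy, counts, background, zlp_window=0.5):
    """Normalize a spectrum to its total electron counts, shift the energy
    axis so that the first moment of the zero-loss peak (taken inside the
    crude window |E| < zlp_window, an assumption) lies at 0 eV, and
    subtract a background spectrum given on the same grid."""
    spec = counts / counts.sum()
    zlp = np.abs(energy) < zlp_window
    shift = np.sum(energy[zlp] * spec[zlp]) / np.sum(spec[zlp])
    return energy - shift, spec - background

def normalize_map(eelp_map):
    """Normalize an EEL map at a fixed loss energy to the maximum value
    found in the region of interest."""
    return eelp_map / eelp_map.max()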
The offset in energy of the modes between experiment and numerical computations is most likely caused by deviations of the fabricated structure from the geometry assumed in the computations and by a shift in energy caused by the influence of the underlying substrate, which was not taken into account in the computations.

A qualitatively different behavior is expected for metal wedges whose lateral dimensions are large compared to the plasmon decay length. In this case, a discrete standing-wave pattern of the charge carrier oscillations cannot form, since multiple reflections from the nano-wedge's ends are suppressed by absorption. Hence, long nano-wedges are expected to support a continuum of propagating SPPs instead of discrete LSPs. As an example for a non-resonant structure, we consider a 15 μm long gold nano-wedge. Panels (a)-(d) of figure <ref> display EEL maps of the first 2 μm of this nano-wedge for the same electron loss energies as displayed in figure <ref>. In contrast to the short nano-wedge, the long nano-wedge shows a non-resonant behavior. The strong signals at the edge and in the interior of the nano-wedge correspond to the SPP edge modes and the film modes, respectively, as studied by Schmidt et al. <cit.> on 2D silver tapers. In the following, we will concentrate on the SPP edge modes. For this purpose, we consider the EELP recorded along the lower edge of the nano-wedge (see blue box in figure <ref>(e)). This data is displayed for different loss energies on a false-color scale in figure <ref>(f). The EELP exhibits at all loss energies a maximum near the apex of the nano-wedge. In addition, we find several maxima along the edge that shift towards the apex with increasing loss energy. These observations can be understood in the following way <cit.>: The incident electrons excite SPP wave packets propagating along the edge away from the impact position in the positive and negative z directions. If the distance from the impact position to the nano-wedge's apex is not too large, i.e., small compared to the SPP decay length, reflection of the SPPs at the apex leads to the formation of a standing-wave interference pattern. Hence, we observe for each electron loss energy a set of EELP maxima along the edge of the nano-wedge. With increasing electron loss energy, the period of this standing-wave pattern decreases. Moreover, propagation losses of the SPP result in a reduction of the modulation of the standing-wave pattern with increasing distance to the apex.

Naturally, the question arises for which nano-wedge size a transition from a set of discrete LSPs to a continuum of SPPs takes place in the considered energy range. By investigating nano-wedges of different sizes, we find that the crossover between these two regimes occurs for a length between 2.0 μm and 2.5 μm. Figure <ref>(b) depicts the relative EELP recorded parallel to the edge of a 2.0 μm long nano-wedge. This nano-wedge clearly shows a resonant behavior with several distinct LSP modes. In contrast, a qualitatively different behavior is observed for a 2.5 μm long nano-wedge. In addition to the maxima at the ends, the corresponding EELP (see figure <ref>(d)) features two pronounced continuous bands. With increasing electron energy loss, they shift towards the apex and the base of the wedge, respectively. These bands can be interpreted as maxima of the two standing-wave patterns resulting from the reflection of SPPs at the apex and the base, respectively. Adding grooves to a long nano-wedge leads to interesting new effects.
Figure <ref>(e) shows the apex region of a 15 μm long nano-wedge. 600 nm away from the nano-wedge's apex, two 50 nm broad and 55 nm deep grooves are cut, leaving a fillet with a width of 100 nm separating the front section from the rear section. The front section has the same dimensions as the short wedge discussed above. For low electron loss energies, one expects that the grooves have a minor effect on the SPP propagation. Hence, the structure should behave similarly to the long nano-wedge without grooves. Our experimental data confirms this prediction. The EEL map for an electron loss energy of 0.5 eV (see figure <ref>(a)), as well as the low-energy region (<0.85 eV) of the EELP distribution recorded along the edge (see figure <ref>(f)), show a clear SPP-like behavior. In particular, we observe that the maxima in the EELP distribution continuously shift towards the apex with increasing energy loss. This is a clear signature of the formation of a standing-wave pattern, as also observed in figure <ref>.

In contrast to this, for loss energies above approximately 0.85 eV, a clear influence of the grooves can be observed. In this case, the front section of the structure (z<600 nm) shows a similar behavior as the small nano-wedge. More specifically, we can identify two distinct resonances in the front section at 1.0 eV and 1.35 eV with three and four maxima, respectively. These resonances correspond to the second and third LSP modes of the short nano-wedge (cf. figure <ref>). Furthermore, we observe in the EELP distribution a pronounced maximum at the right side of the grooves for loss energies above 0.85 eV. Additionally, a second maximum occurs in the rear section that continuously shifts towards the grooves with increasing electron loss energy. This indicates the formation of a standing-wave pattern, where the reflection of the SPPs takes place at the grooves.

Figure <ref> depicts a cross-section of the three-dimensional mesh of the modified nano-wedge used in the DGTD computations. The EELP is computed along a line parallel to one edge of the nano-wedge (see blue line in the upper part of figure <ref>). The impact parameter b between the line and the edge of the nano-wedge is 10 nm, and the line does not follow the shape of the groove. The computed EELP data (see figure <ref>, lower part) qualitatively reproduces the experimental results. At low electron loss energies, a clear signature of SPPs running towards the apex is observed. For electron loss energies exceeding approximately 1.2 eV, distinct maxima in the EELP indicative of LSPs occur at 1.5 eV, 1.8 eV, and 2.1 eV in the front section of the structure. As before, the deviation in energy of the computations from the experimental data can most likely be traced back to deviations of the assumed geometry from the real sample parameters and an energy shift caused by the underlying substrate.

§ CONCLUSION

In this work, we demonstrated the transition from LSP-supporting nano-structures to SPP-supporting structures by extending the structural dimensions from some hundred nanometers to some ten micrometers. For the shorter samples, multiple reflections at the ends lead to the formation of LSPs. For the longer samples, also propagating plasmons can be found. These are SPPs that are reflected at one end of the structure.
The complex behavior of a modified sample shows that the addition of sub-wavelength features to extended nano-structures can result in a rich interplay of localized LSPs and delocalized SPPs within the same energy range.

§ FUNDING

S.L., S.I., and T.W. gratefully acknowledge financial support by the Deutsche Forschungsgemeinschaft project LI 1641/5-1. K.B. and T.K. acknowledge support by the Einstein Foundation within the project "ActiPLAnt".
"authors": [
"Thorsten Weber",
"Thomas Kiel",
"Stephan Irsen",
"Kurt Busch",
"Stefan Linden"
],
"categories": [
"physics.optics"
],
"primary_category": "physics.optics",
"published": "20170327200535",
"title": "A near-field study on the transition from localized to propagating plasmons on 2D nano-wedges"
} |
[t1] Supplementary material, including proofs of all propositions and additional numerical results, can be found in the online version of the paper. A stand-alone package for implementing LANOVA penalization for matrices and three-way tensors can be downloaded from https://github.com/maryclare/LANOVA.

Maryclare Griffin,^1,cor1 [email protected]; Peter Hoff,^2 [email protected]
[cor1] Corresponding author
[1] Center for Applied Mathematics, Cornell University, Ithaca, NY, USA
[2] Department of Statistical Science, Duke University, Durham, NC, USA

Consider the problem of estimating the entries of an unknown mean matrix or tensor given a single noisy realization. In the matrix case, this problem can be addressed by decomposing the mean matrix into a component that is additive in the rows and columns, i.e. the additive ANOVA decomposition of the mean matrix, plus a matrix of elementwise effects, and assuming that the elementwise effects may be sparse. Accordingly, the mean matrix can be estimated by solving a penalized regression problem, applying a lasso penalty to the elementwise effects. Although solving this penalized regression problem is straightforward, specifying appropriate values of the penalty parameters is not. Leveraging the posterior mode interpretation of the penalized regression problem, moment-based empirical Bayes estimators of the penalty parameters can be defined. Estimation of the mean matrix using these moment-based empirical Bayes estimators can be called LANOVA penalization, and the corresponding estimate of the mean matrix can be called the LANOVA estimate. The empirical Bayes estimators are shown to be consistent. Additionally, LANOVA penalization is extended to accommodate sparsity of row and column effects and to estimate an unknown mean tensor. The behavior of the LANOVA estimate is examined under misspecification of the distribution of the elementwise effects, and LANOVA penalization is applied to several datasets, including a matrix of microarray data, a three-way tensor of fMRI data and a three-way tensor of wheat infection data.

Adaptive estimation · Method of moments · Multiway data · Transposable data · Regularized regression · Lasso

ANOVA Decompositions for Matrix and Tensor Data[t1]

December 30, 2023
=======================================================

§ INTRODUCTION

Researchers are often interested in estimating the entries of an unknown n×p mean matrix M given a single noisy realization, Y = M + Z, where the entries of Z are assumed to be independent, identically distributed mean-zero normal random variables with unknown variance σ^2_z. Consider a noisy matrix Y of gene expression measurements for different genes and tumors. Researchers may be interested in which tumors have unique gene expression profiles, and which genes are differentially expressed across different tumors. This is challenging because no replicates are observed. Each unknown m_ij corresponds to a single observation y_ij, and so the maximum likelihood estimate Y has high variability. Accordingly, simplifying assumptions that reduce the dimensionality of M are often made. Many such assumptions relate to a two-way ANOVA decomposition of M:

M = μ1_n1_p' + a1_p' + 1_nb' + C,

where μ is an unknown grand mean, a is an n×1 vector of unknown row effects, b is a p×1 vector of unknown column effects, C is a matrix of elementwise "interaction" effects, and 1_n and 1_p are n×1 and p×1 vectors of ones, respectively. In the absence of replicates, implicitly assuming C = 0 is common.
This reduces the number of freely varying unknown parameters from np to n + p, but is also unlikely to be appropriate in practice. Alternatively, one might assume that elements of C can be written as a function of a small number R of multiplicative components, i.e. c_ij = ∑_r = 1^R u_r,i v_r,j, where u_r and v_r are n × 1 and p × 1 row and column factors. This corresponds to a low-rank matrix C and an additive-plus-low-rank mean matrix M. Additive-plus-low-rank models have a long history and continue to be very popular <cit.>. However, in the settings we consider it is reasonable to expect that C may be sparse with a relatively small number of nonzero elements, e.g. some tumor-gene combinations may have large interaction effects while others may have negligible interaction effects. In such settings, it is easy to imagine scenarios in which a low-rank estimate of M may fail, e.g. if Y were an n × n square matrix and all c_ii were large while all c_ij, i ≠ j, were equal to zero. In this case, a low-rank estimate of C would not suffice because a full rank R = n estimate would be needed. If M is approximately additive in the sense that large deviations from additivity are rare, then C is sparse and estimation of M may be improved by penalizing elements of C: min_μ, a, b, C 1/(2σ^2_z) ||vec{Y - (μ 1_n 1_p' + a 1_p' + 1_n b' + C)}||^2_2 + λ_c ||vec(C)||_1. The ℓ_1 penalty induces sparsity among the estimated entries of C, and solving this penalized regression problem yields unique estimates of M and C. Elements of C can be interpreted as interactions insofar as they indicate deviation from a strictly additive model. Although (<ref>) is a standard lasso regression problem that can be solved easily given values of λ_c and σ^2_z, specifying values of λ_c and σ^2_z is uniquely challenging in this setting. The methods suggested by <cit.> and <cit.> are not appropriate because they are specific to orthogonal design matrices; columns of the design matrix corresponding to the regression problem in (<ref>) are correlated. The same is true of the unbiased risk estimate minimization procedure suggested by <cit.>. Although columns of the design matrix will become less correlated as n and p→∞, the correlations may not be negligible in practice, especially if n or p is relatively small. <cit.> also suggested cross-validation, which could be performed after rewriting Equation (<ref>) to depend on a single parameter η = λ_c σ^2_z. However, cross-validation is also poorly suited to this setting. Consider leave-one-out cross-validation to select a value of η, and suppose we hold out y_11 and solve (<ref>) for any fixed value of η using the elements of Y excluding y_11. We obtain estimates of μ, a, b, and all elements of C except c_11, as only the held out test data point y_11 contains any information about c_11. For this reason, we cannot compute an out-of-sample prediction for y_11 without making additional assumptions that relate μ, a, b and all of the elements of C except c_11 to c_11, and selecting η by cross-validation without additional assumptions is not possible. This penalized regression problem has been considered in the literature on outlier detection, as nonzero elements of C can alternatively be interpreted as outliers. <cit.> interpret elements of C in this way and consider the more general problem with an arbitrary full rank design matrix X.
They approach specification of λ_c and σ^2_z by introducing a conservative extension of the methods suggested by <cit.> for orthogonal design matrices, setting np different values λ_i = σ√(2(1 - h_ii)log(np)), where h_ii is a diagonal element of the hat matrix H = X(X'X)^-1X'. Because σ^2 can be very challenging to estimate, they suggest setting λ_i = λ√(1 - h_ii) in a data-adaptive way using a modified BIC. Although the methods proposed by <cit.> have the advantage of applying to general regression problems with arbitrary design matrices X, they have computational disadvantages in high dimensions because they require computing an initial robust estimate of C and iteratively re-estimating it. We take another approach and view the ℓ_1 penalty on C as a Laplace prior distribution, in which case λ_c and σ^2_z can be interpreted as nuisance parameters that can be estimated from the data. The relationship between the ℓ_1 penalty and the Laplace prior has long been acknowledged <cit.>. It offers not only a framework for specifying λ_c and σ^2_z, but also decision theoretic justifications for using estimates of M and C obtained by solving (<ref>) using estimated λ_c and σ^2_z, because the posterior mode is known to minimize a specific data-adaptive loss function <cit.>. The challenge is in the estimation of λ_c and σ^2_z, because computing maximum marginal likelihood estimates may be prohibitively computationally demanding and intractable in practice <cit.>. In this paper we instead present moment-based empirical Bayes estimators of the nuisance parameters λ_c and σ^2_z that are easy to compute, consistent and independent of assumptions made regarding a and b. As our approach to estimating λ_c and σ^2_z uses the Laplace prior interpretation of the ℓ_1 penalty, we refer to estimation of M via optimization of Equation (<ref>) using these nuisance parameter estimators as LANOVA penalization, and we refer to the estimate M as the LANOVA estimate. The paper proceeds as follows: In Section <ref>, we introduce moment-based estimators for λ_c and σ^2_z and show that they are consistent as either the number of rows or columns of Y goes to infinity. We show that their efficiency is comparable to that of asymptotically efficient marginal maximum likelihood estimators (MMLEs). In Section <ref>, we discuss estimation of M via Equation (<ref>) given estimates of λ_c and σ^2_z and introduce a test of whether or not elements of C are heavy-tailed, which allows us to avoid LANOVA penalization in settings where it is especially inappropriate. We also investigate the performance of LANOVA estimates of M relative to strictly additive estimates, strictly non-additive estimates, additive-plus-low-rank estimates, IPOD estimates from <cit.>, and approximately minimax estimates based on <cit.>, and examine robustness to misspecification of the distribution of elements of C. In Section <ref>, we extend LANOVA penalization to include penalization of lower-order mean parameters a and b and also to apply to the case where Y and M are K-way tensors. In Section <ref>, we apply LANOVA penalization to a matrix of gene expression measurements, a three-way tensor of fMRI data and a three-way tensor of wheat infection data. In Section <ref> we discuss extensions, specifically multilinear regression models and opportunities that arise in the presence of replicates. § LANOVA NUISANCE PARAMETER ESTIMATION Consider the following statistical model for deviations of Y from a strictly additive model: Y = μ 1_n 1_p' + a 1_p' + 1_n b' + C + Z, C = {c_ij} ∼ i.i.d. Laplace(0, λ_c^-1), Z = {z_ij} ∼ i.i.d.
N(0, σ^2_z). The posterior mode of μ, a, b and C under this Laplace prior for C and flat priors for μ, a and b corresponds to the solution of the LANOVA penalization problem given by Equation (<ref>). We construct estimators of λ_c and σ^2_z as follows. Letting H_k = I_k - 1_k 1_k'/k be the k × k centering matrix, we define R = H_n Y H_p. R depends on C and Z alone, specifically R = H_n (C + Z) H_p. We construct estimators of λ_c and σ^2_z from R by leveraging the difference between Laplace and normal tail behavior as measured by fourth order moments. The fourth order central moment of any random variable x with mean μ_x and variance σ^2_x can be expressed as 𝔼[(x - μ_x)^4] = (κ + 3)σ^4_x, where κ is interpreted as the excess kurtosis of the distribution of x relative to a normal distribution. A normally distributed variable has excess kurtosis equal to 0, whereas a Laplace distributed random variable has excess kurtosis equal to 3. It follows that the second and fourth order central moments of elements of C + Z are 𝔼[(c_ij + z_ij)^2] = σ^2_c + σ^2_z and 𝔼[(c_ij + z_ij)^4] = 3σ^4_c + 3(σ^2_c + σ^2_z)^2, respectively, where σ^2_c = 2/λ^2_c is the variance of a Laplace(0, λ_c^-1) random variable. Given values of 𝔼[(c_ij + z_ij)^2] and 𝔼[(c_ij + z_ij)^4], we see that σ^2_c and σ^2_z, and accordingly λ_c, can easily be recovered. We do not observe C + Z directly, but we can use the second and fourth order sample moments of R, an estimate of C + Z, given by r^(2) = (1/np)∑_i = 1^n ∑_j = 1^p r^2_ij and r^(4) = (1/np)∑_i = 1^n ∑_j = 1^p r^4_ij, respectively, to separately estimate σ^2_c and σ^2_z. These estimators are: σ^4_c = {n^3 p^3/((n - 1)(n^2 - 3n + 3)(p - 1)(p^2 - 3p + 3))}{r^(4)/3 - (r^(2))^2}, σ^2_c = √(σ^4_c), σ^2_z = {np/((n - 1)(p - 1))} r^(2) - σ^2_c. An estimator of λ_c is then given by λ_c = √(2/σ^2_c). Studying the properties of these estimators is slightly challenging, as elements of R are neither independent nor identically distributed. The estimator σ^4_c is biased. It is possible to obtain an unbiased estimator for σ^4_c; however, the unbiased estimator will not be consistent as n→∞ with p fixed or p→∞ with n fixed. Because these estimators depend on higher-order terms which can be very sensitive to outliers, it is desirable to have consistency as either the number of rows or columns grows. Accordingly, we prefer the biased estimator and examine its bias in the following proposition. Under the model given by Equation (<ref>), 𝔼[σ^4_c] - σ^4_c = -{n^3 p^3/((n - 1)(n^2 - 3n + 3)(p - 1)(p^2 - 3p + 3))}[{3(n - 1)^2(p - 1)^2/(n^3 p^3)}σ^4_c + {2(n - 1)(p - 1)/(n^2 p^2)}(σ^2_c + σ^2_z)^2]. A proof of this proposition and all other results presented in this paper are given in a web appendix. The bias is always negative and accordingly yields overpenalization of C. When both n and p are small, σ^4_c tends to underestimate σ^4_c. Recalling that σ^4_c is inversely related to λ_c, this reflects a tendency to overpenalize and accordingly overshrink elements of C when both n and p are small. This is desirable, in that it reflects a tendency to prefer the simple additive model when few data are available. We also observe that the bias depends on both σ^2_c and σ^2_z. Holding n, p and σ^2_c fixed, we will overestimate λ_c more when σ^2_z is larger. Again, this is desirable, in that it reflects a tendency to prefer the simple additive model when the data are very noisy. Last, we see that the bias is O(1/np), i.e. the bias approaches zero as either the number of rows or the number of columns increases.
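As a concrete illustration, the estimators above amount to only a few lines of code. The following Python sketch is ours and is not the interface of the authors' LANOVA package linked in the title footnote; the function name is illustrative, and the clipping of negative variance estimates at zero anticipates the special cases discussed at the start of the next section.

```python
import numpy as np

def lanova_moments(Y):
    """Moment-based nuisance parameter estimates from a single noisy matrix Y.

    Double-centers Y to obtain R = H_n Y H_p, then combines the second and
    fourth sample moments of R as in the displayed estimators.
    """
    n, p = Y.shape
    # R = H_n Y H_p: subtract row and column means, add back the grand mean.
    R = Y - Y.mean(axis=0) - Y.mean(axis=1, keepdims=True) + Y.mean()
    r2 = np.mean(R ** 2)   # second sample moment
    r4 = np.mean(R ** 4)   # fourth sample moment
    # Finite-sample correction factors from the displayed estimators.
    cn = n ** 3 / ((n - 1) * (n ** 2 - 3 * n + 3))
    cp = p ** 3 / ((p - 1) * (p ** 2 - 3 * p + 3))
    sigma4_c = cn * cp * (r4 / 3.0 - r2 ** 2)
    sigma2_c = np.sqrt(max(sigma4_c, 0.0))   # clip at zero (special case below)
    sigma2_z = max(n * p / ((n - 1) * (p - 1)) * r2 - sigma2_c, 0.0)
    lam_c = np.sqrt(2.0 / sigma2_c) if sigma2_c > 0 else np.inf
    return sigma4_c, sigma2_c, sigma2_z, lam_c
```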
The large sample behavior of our estimators of the nuisance parameters is similar. Under the model given by Equation (<ref>), σ^4_c p→ σ^4_c, σ^2_c p→ σ^2_c, λ_c p→ λ_c and σ^2_z p→ σ^2_z as n →∞ with p fixed, p→∞ with n fixed, or n, p→∞. Although these nuisance parameter estimators are easy to compute and consistent as n or p→∞, they are not maximum likelihood estimators and may not be asymptotically efficient even as n and p→∞. Accordingly, we compare the asymptotic efficiency of our moment-based estimator of σ^2_c to that of the corresponding asymptotically efficient marginal maximum likelihood estimator (MMLE) as n and p→∞. As noted in the Introduction, obtaining the MMLE is computationally demanding because maximizing the marginal likelihood of the data requires a Gibbs-within-EM algorithm that can be slow to converge <cit.>. Fortunately, computing the asymptotic variance of the MMLE is simpler than computing the MMLE itself. The asymptotic variance of the MMLE is given by the Cramér-Rao lower bound for σ^2_c, which can be computed numerically from the density of the sum of Laplace and normally distributed variables <cit.>. The asymptotic variance of the moment-based estimator is also straightforward to compute, as √(np)(σ^4_c - σ^4_c) converges in distribution to a normal random variable, as is typical for a moment estimator. We note that the asymptotic variance of λ_c is similarly straightforward to compute; both asymptotic variances are given in the web appendix. We plot the asymptotic efficiency of the moment-based estimator of σ^2_c relative to the MMLE, i.e. the ratio of their asymptotic variances, over values of σ^2_c, σ^2_z ∈ [0, 1] in Figure <ref>. Note that the relative efficiency of the moment-based estimator of σ^2_c compared to the MMLE also reflects the relative efficiency of our estimators of λ_c and σ^2_z compared to the corresponding MMLEs, because both are simple functions of the estimator of σ^2_c. When σ^2_c is small relative to σ^2_z, the MMLE tends to be slightly more efficient. When σ^2_c is large relative to σ^2_z, the MMLE tends to be much more efficient. However, in such cases the interactions will not be heavily penalized and LANOVA penalization will not tend to yield a simplified, nearly additive estimate of M. Put another way, Figure <ref> indicates that our estimators of λ_c and σ^2_z will be nearly as efficient as the corresponding MMLEs when LANOVA penalization is useful for producing a simplified, nearly additive estimate of M with sparse interactions. We also note that because they are moment-based, our estimators may be more robust to misspecification of the distribution of elements of C and Z than the MMLEs. § MEAN ESTIMATION, INTERPRETATION, MODEL CHECKING AND ROBUSTNESS §.§ Mean Estimation In practice, our nuisance parameter estimators are not guaranteed to be nonnegative, and two special cases can arise. When σ^4_c < 0, we set σ^2_c = 0, C = 0, and M = M_ADD, where M_ADD = Y - H_n Y H_p is the strictly additive estimate. When σ^2_z < 0, we reset σ^2_z = 0 and set M = M_MLE, where M_MLE = Y is the strictly non-additive estimate. Neither special case prohibits estimation of M. We assess how often these special cases arise via a small simulation study. Setting σ^2_z = 1, n = p = 25, μ = 0, a = 0 and b = 0, we simulate 10,000 realizations of Y = C + Z under the model given by Equation (<ref>) for each value of σ^2_c ∈ {1/2, 1, 3/2}. We obtain σ^2_c ≤ 0 in 13.7%, 1.64% and 0.02% of simulations for σ^2_c equal to 1/2, 1 and 3/2, respectively.
This means that when the magnitude of elements of C is smaller, we are more likely to obtain a strictly additive estimate of M. We do not obtain σ^2_z = 0 in any simulations. When σ^2_c > 0 and σ^2_z > 0, we can obtain an estimate of M from Equation (<ref>) using block coordinate descent. Setting C^0 = H_n Y H_p and k = 1, our block coordinate descent algorithm iterates the following until the objective function in Equation (<ref>) converges: * Set μ^k = 1_n'(Y - C^k-1)1_p/np, a^k = H_n(Y - C^k-1)1_p/p, b^k = H_p(Y - C^k-1)'1_n/n and R^k = Y - μ^k 1_n 1_p' - a^k 1_p' - 1_n (b^k)'; * Set C^k = sign(R^k)(|R^k| - λ_c σ^2_z)_+, where λ_c = √(2/σ^2_c), and sign(·) and the soft-thresholding function (·)_+ are applied elementwise. Set k = k + 1. §.§ Interpretation The nonzero entries of C correspond to the r largest residuals from fitting a strictly additive model with C = 0, where r is determined by λ_c and σ^2_z. Elements of C can be interpreted as interactions insofar as they indicate deviation from a strictly additive model for M. However, because we do not impose the standard ANOVA zero-sum constraints, we cannot interpret elements of C directly as population average interaction effects, i.e. c_ij ≠ 𝔼[y_ij] - (1/p)∑_j = 1^p 𝔼[y_ij] - (1/n)∑_i = 1^n 𝔼[y_ij] + (1/np)∑_i = 1^n ∑_j = 1^p 𝔼[y_ij]. For the same reason, μ, a and b cannot be interpreted as the grand mean and population average main effects. To obtain estimates that have the standard population average interpretation, we recommend performing a two-way ANOVA decomposition of M. In the appendix, we show that the grand mean and population average main effects obtained via ANOVA decomposition of M are identical to those obtained by performing an ANOVA decomposition of Y. §.§ Testing LANOVA penalization assumes that the distribution of entries of C has tail behavior consistent with a Laplace distribution. It is natural to ask if this assumption is appropriate, but it is difficult to test because C and Z enter into the observed data through their sum C + Z. Accordingly, we suggest a test of the more general assumption that elements of C are heavy-tailed. This allows us to rule out LANOVA penalization when it is especially inappropriate, i.e. when the data suggest elements of C are normal-tailed. When the distribution of elements of C is heavy-tailed, the distribution of elements of C + Z will also be heavy-tailed and will have strictly positive excess kurtosis. In contrast, when elements of C are either all zero or have a distribution with normal tails, elements of C + Z will have excess kurtosis equal to exactly zero. We construct a test of the null hypothesis H_0: c_ij + z_ij ∼ i.i.d. N(0, σ^2_c + σ^2_z), which encompasses the cases in which C = 0 or elements of C are normally distributed. Conveniently, the test statistic is a simple function of σ^2_c and σ^2_z and can be computed at little additional computational cost. We can also think of this as a test of deconvolvability of C + Z, where the null hypothesis is that deconvolution of C + Z is not possible. For Y = μ 1_n 1_p' + a 1_p' + 1_n b' + C + Z, as n and p→∞ an asymptotically level-α test of H_0: c_ij + z_ij ∼ i.i.d. N(0, σ^2_c + σ^2_z) is obtained by rejecting H_0 when √(np) σ^4_c/{√(8/3)(σ^2_c + σ^2_z)^2} > z_1-α, where z_1-α denotes the 1-α quantile of the standard normal distribution. This test gives us power against the alternative where elements of C are heavy-tailed and LANOVA penalization may be appropriate. Because this is an approximate test, we assess its level in finite samples in a small simulation study.
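Before turning to that simulation, we note that both the block coordinate descent above and the test statistic are only a few lines of code on top of the lanova_moments sketch given earlier. The following Python sketch is again ours, with illustrative names rather than the authors' package interface.

```python
def lanova_fit(Y, tol=1e-8, max_iter=1000):
    """Sketch of the block coordinate descent above.

    Returns the LANOVA estimate of M and the sparse estimate of C.
    Convergence is checked on C for simplicity; the paper monitors the
    objective function instead.
    """
    _, s2c, s2z, lam_c = lanova_moments(Y)
    thresh = lam_c * s2z   # soft threshold lambda_c * sigma^2_z
    C = Y - Y.mean(axis=0) - Y.mean(axis=1, keepdims=True) + Y.mean()  # C^0
    for _ in range(max_iter):
        # Step 1: refit grand mean and main effects given C.
        D = Y - C
        mu = D.mean()
        a = D.mean(axis=1, keepdims=True) - mu   # row effects, n x 1
        b = D.mean(axis=0, keepdims=True) - mu   # column effects, 1 x p
        R_k = Y - mu - a - b                     # broadcasts to n x p
        # Step 2: soft-threshold the residuals elementwise.
        C_new = np.sign(R_k) * np.maximum(np.abs(R_k) - thresh, 0.0)
        if np.max(np.abs(C_new - C)) < tol:
            C = C_new
            break
        C = C_new
    return mu + a + b + C, C

def kurtosis_test_statistic(Y):
    """Statistic of the test above; reject H_0 when it exceeds z_{1-alpha}."""
    n, p = Y.shape
    s4c, s2c, s2z, _ = lanova_moments(Y)
    return np.sqrt(n * p) * s4c / (np.sqrt(8.0 / 3.0) * (s2c + s2z) ** 2)
```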
Setting σ^2_z = 1, n = p, μ = 0, a = 0 and b = 0, we simulate 10,000 realizations of Y = C + Z under H_0 for each value of n ∈ {25, 100} and σ^2_c ∈ {1/2, 1, 3/2}. When n = p = 25, the test rejects at a slightly higher rate than the nominal level. It rejects in 7.98%, 7.65% and 8.66% of simulations for σ^2_c equal to 1/2, 1 and 3/2, respectively. When n = p = 100, the test nearly achieves the desired level. It rejects in 6.13%, 5.60% and 6.00% of simulations for σ^2_c equal to 1/2, 1 and 3/2, respectively. We compute the approximate power of this test under two heavy-tailed distributions for elements of C: the Laplace distribution assumed for LANOVA penalization and a Bernoulli-normal spike-and-slab distribution. Assume that elements of C are independent, identically distributed mean zero Laplace random variables with variance σ^2_c and let ϕ^2 = σ^2_c/σ^2_z. Then as n and p→∞, the asymptotic power of the test given by Proposition <ref> is: 1 - Φ[{z_1-α - √(3np/8)(ϕ^2/(ϕ^2 + 1))^2}/√(1 + (68ϕ^8 + 36ϕ^6 + 9ϕ^4)/(1 + ϕ^2)^4)]. The power depends on the variances σ^2_c and σ^2_z only through their ratio ϕ^2. It is plotted for α = 0.05, ϕ^2 ∈ [0,2] and np = {100, 200, …, 1000} in Figure <ref>. The power of the test is increasing in ϕ^2, and increasing more quickly when np is larger and more data are available. Now we consider the power for Bernoulli-normal distributed elements of C. Assume that elements of C are independent, identically distributed Bernoulli-normal random variables. An element of C is exactly equal to zero with probability 1 - π_c, and normally distributed with mean zero and variance τ^2_c otherwise. Letting ϕ^2 = τ^2_c/σ^2_z, as n and p→∞, the asymptotic power of the test given by Proposition <ref> is: 1 - Φ[{z_1-α - π_c(1 - π_c)√(3np/8)(ϕ^2/(π_c ϕ^2 + 1))^2}/√(1 + π_c(1 - π_c){((20π_c^2 - 28π_c + 35)ϕ^8 + 16(5 - π_c)ϕ^6 + 72ϕ^4)/(8(π_c ϕ^2 + 1)^4)})]. The approximate power depends on the variance of the nonzero effects τ^2_c and the noise variance σ^2_z only through their ratio ϕ^2. It is plotted for α = 0.05, π_c ∈ [0,1], ϕ^2 ∈ {0, 0.2, …, 2} and np = {100, 1000} in Figure <ref>. The approximate power is always increasing in ϕ^2 and np. For fixed ϕ^2 and np, power diminishes as the probability of an element of C being nonzero, π_c, approaches 0 or 1 and C + Z becomes more normally distributed. Overall, the test is more powerful when estimating C separately from Z is more valuable, e.g. when elements of C are large in magnitude relative to the noise and when many entries of C are exactly zero. §.§ Robustness and Comparative Performance If the true model is not the LANOVA model and elements of C are drawn from a different heavy-tailed distribution, it is natural to ask how our estimates σ^2_c, σ^2_z, and M perform. As M is a function of μ, a, b and C, the performance of M also reflects the performance of μ, a, b and C indirectly. We find that the excess kurtosis κ of the “true” distribution of elements of C determines our ability to estimate σ^2_c separately from σ^2_z. Under the model Y = μ 1_n 1_p' + a 1_p' + 1_n b' + C + Z, where elements of C are independent, identically distributed draws from a mean zero, symmetric distribution with variance σ^2_c, excess kurtosis κ and finite eighth moment, and elements of Z are normally distributed with mean zero and variance σ^2_z, σ^2_c p→ √(κ/3) σ^2_c and σ^2_z p→ σ^2_z + (1 - √(κ/3))σ^2_c as n →∞ with p fixed, p →∞ with n fixed, or n and p→∞.
Proposition <ref> indicates that we underestimate σ^2_c when elements of C are lighter-than-Laplace tailed and we overestimate σ^2_c when elements of C are heavier-than-Laplace tailed. To see how this affects estimation of M, we consider exponential power and Bernoulli-normal distributed elements of C. Exponential power distributed c_ij have density p(c_ij | σ^2_c, q_c) = (q_c/(2σ_c))√(Γ(3/q_c)/Γ(1/q_c)^3) exp{-(Γ(3/q_c)/Γ(1/q_c))^q_c/2 |c_ij/σ_c|^q_c}, which is parameterized in terms of the variance σ^2_c and a shape parameter q_c, and Bernoulli-normal distributed c_ij are exactly equal to zero with probability 1 - π_c and normally distributed with mean zero and variance τ^2_c otherwise. Both distributions can be lighter- or heavier-than-Laplace tailed. The excess kurtosis of exponential power and Bernoulli-normal c_ij is Γ(5/q_c)Γ(1/q_c)/Γ(3/q_c)^2 - 3 and 3(1 - π_c)/π_c, respectively. As a result, elements of C will be heavier-than-Laplace tailed when q_c < 1 or π_c < 0.5 and lighter-than-Laplace tailed when q_c > 1 or π_c > 0.5. Note that when q_c = 1 the exponential power distribution corresponds to the Laplace distribution and the LANOVA model is correct, and when π_c = 0.5 the spike-and-slab distributed C have the same excess kurtosis as Laplace distributed C and the variance estimators σ^2_c and σ^2_z will be consistent. We compare the risk of the LANOVA estimate M to the risk of the maximum likelihood estimate M_MLE, the risk of the strictly additive estimate M_ADD, the risk of additive-plus-low-rank estimates M_LOW,1 and M_LOW,5, which assume rank-one and rank-five C respectively, the risk of the soft- and hard-thresholding IPOD estimates of <cit.>, M_IPOD,S and M_IPOD,H, and the risk of approximately minimax estimates M_MINI,U and M_MINI,S obtained using the universal threshold and Stein's unbiased risk estimate (SURE) described by <cit.>. Additive-plus-low-rank estimates are computed according to <cit.>. Approximately minimax estimates are computed according to M_MINI,U = M_ADD + C_MINI,U and M_MINI,S = M_ADD + C_MINI,S, where C_MINI,U and C_MINI,S are obtained by applying the universal threshold or SURE soft thresholding methods given by <cit.> to elements of Y - M_ADD, as if elements of Y - M_ADD were independent and identically distributed. We compute Monte Carlo estimates of the relative risks for n = p = 25, μ = 0, a = 0, b = 0 and σ^2_z = 1. For exponential power distributed C, we vary σ^2_c ∈ {1/2, 1, 2} and q_c ∈ {0.1, …, 1.9}. For Bernoulli-normal distributed C, we vary τ^2_c ∈ {1/2, 1, 2} and π_c ∈ {0, 0.1, …, 0.9, 1}. For each (σ^2_c, q_c) and (τ^2_c, π_c), the Monte Carlo estimate is based on 500 simulated Y. The results shown in Figure <ref> indicate generally favorable performance of the LANOVA estimate M. The top four plots show log relative risk estimates when elements of C are exponential power distributed, whereas the bottom four plots show log relative risk estimates when elements of C are Bernoulli-normal distributed. As expected, the LANOVA estimate performs as well as or better than all alternative estimators when q_c = 1 and the LANOVA model is true. Interestingly, the LANOVA estimate also performs as well as or better than all alternative estimators when π_c = 0.5, even though the LANOVA model is not true. This suggests that the LANOVA estimate will tend to perform well relative to alternative estimators when the excess kurtosis of elements of C is similar to the excess kurtosis of a Laplace distribution.
This is consistent with Proposition <ref>, which states that the asymptotic bias of the variance estimators σ^2_c and σ^2_z will depend on the excess kurtosis of the true distribution of elements of C. In the first column of Figure <ref>, we see that the LANOVA estimate M tends to outperform the strictly non-additive estimate M_MLE when q_c < 1 or π_c < 0.5, especially when the variance of the interactions σ^2_c or τ^2_c is small relative to the variance of the noise σ^2_z. Intuitively, this makes sense. When q_c is small we expect that many elements of C will be very close to zero, and when π_c is small, many elements of C will be exactly equal to zero. Furthermore, when σ^2_c or τ^2_c are small we expect even the largest interactions to be small relative to the noise. Accordingly, this suggests that M tends to outperform the strictly non-additive estimate M_MLE when M is more nearly additive. Analogously, the LANOVA estimate M tends to outperform the strictly additive estimate M_ADD when q_c > 1 or π_c > 0.5, especially when the variance of the interactions σ^2_c or τ^2_c is large relative to the variance of the noise σ^2_z. Again, this makes sense because we expect that fewer elements of C will be nearly or exactly equal to zero when q_c > 1 or π_c > 0.5, and more elements of M will be strictly non-additive. In the second column of Figure <ref>, we see that the performance of the LANOVA estimate M relative to the additive-plus-low-rank estimates M_LOW,1 and M_LOW,5 depends on the distribution of elements of C. When elements of C are exponential power distributed, the LANOVA estimate M outperforms the additive-plus-low-rank estimates M_LOW,1 and M_LOW,5 as long as q_c is neither too small nor too large, especially when the variance of the interactions σ^2_c is large relative to the variance of the noise σ^2_z. When elements of C are Bernoulli-normal distributed, the LANOVA estimate M almost always outperforms both additive-plus-low-rank estimates M_LOW,1 and M_LOW,5. This makes sense, as low rank approximations of sparse matrices tend to perform poorly. In the third column, we see that the LANOVA estimate M tends to outperform the soft- and hard-thresholding IPOD estimates M_IPOD,S and M_IPOD,H for values of q_c > 0.5 and all values of π_c and τ^2_c. The soft-thresholding IPOD estimate M_IPOD,S performs almost identically to the strictly additive estimate M_ADD, which suggests that it tends to overpenalize elements of C. Surprisingly, M_IPOD,S tends to slightly outperform M_IPOD,H. As discussed in <cit.>, outlier detection based on convex penalties, which are used to compute M and M_IPOD,S, tends to perform worse than outlier detection based on nonconvex penalties, which are used to compute M_IPOD,H. However, the relatively better performance of M and M_IPOD,S relative to M_IPOD,H can be attributed to the fact that outlier detection based on convex penalties performs well when all observations have equal leverage, as is the case in this setting <cit.>.
The LANOVA estimate M is also much faster to compute than both IPOD estimates; the soft- and hard-thresholding IPOD estimates take over 1,000 times longer than the LANOVA estimate to compute on average across all simulations. In the last column, we see that the LANOVA estimate M tends to outperform the approximately minimax estimates M_MINI,U and M_MINI,S as long as elements of C are not extremely heavy tailed or extremely sparse, especially when the variance of the interactions σ^2_c is large relative to the variance of the noise σ^2_z. Additionally, the improvements offered by the LANOVA estimate M over the approximately minimax estimates M_MINI,U and M_MINI,S are greater when C is exponential power distributed versus Bernoulli-normal distributed, which suggests that LANOVA penalization may be particularly useful when we expect that some elements of C are nearly but not exactly equal to zero. Altogether, these results suggest that the LANOVA estimate M can offer better or comparable performance relative to alternatives as long as the tail behavior of elements of C is not too different from the tail behavior of a Laplace distribution. Relatedly, the results also suggest that we might be able to construct an improved estimate of M based on LANOVA penalization if prior information on the tail behavior of elements of C is available. The results displayed in Figure <ref> suggest that the biased estimation of σ^2_c and σ^2_z that occurs when elements of C have lighter- or heavier-than-Laplace tails, as described in Proposition <ref>, leads to poorer LANOVA estimates of M. Fortunately, this suggests that estimation of M could be improved if less biased estimates of σ^2_c and σ^2_z could be obtained. Proposition <ref> suggests a correction of multiplying σ^2_c by √(3/κ) and subtracting (1 - √(κ/3))√(3/κ)σ^2_c from σ^2_z. However, because excess kurtosis κ is not a readily interpretable quantity, specifying a more appropriate value of κ a priori may be difficult. If a Bernoulli-normal distribution for elements of C is plausible, however, a more appropriate value of π_c can be used to specify a more appropriate value of κ. If the new value of π_c is close to the “true” proportion of nonzero elements of C, this may improve estimation of σ^2_c and σ^2_z and accordingly, M. § EXTENSIONS §.§ Penalizing Lower-Order Parameters When Y has many rows or columns, it may be reasonable to believe that many elements of a or b are exactly zero. A natural extension of Equation (<ref>) is given by min_μ, a, b, C 1/(2σ^2_z)||vec(Y - M)||^2_2 + λ_a ||a||_1 + λ_b ||b||_1 + λ_c ||vec(C)||_1, where we still have M = 1_n 1_p' μ + a 1_p' + 1_n b' + C. Again, using the posterior mode interpretation of Equation (<ref>), we can estimate σ^2_a and σ^2_b from the observed data Y: σ^2_a = (1/(n-1))∑_i = 1^n ǎ^2_i - {n/((n - 1)(p - 1))} r^(2), σ^2_b = (1/(p-1))∑_j = 1^p b̌^2_j - {p/((n - 1)(p - 1))} r^(2), where ǎ = H_n Y 1_p/p and b̌ = H_p Y' 1_n/n are OLS estimates of a and b. The estimators λ_a = √(2/σ^2_a) and λ_b = √(2/σ^2_b) can be shown to be consistent for λ_a and λ_b as n→∞ and p→∞, respectively. Because our estimators for λ_c and σ^2_z do not depend on a and b, they are unchanged. A block coordinate descent algorithm for solving Equation (<ref>) is given in the web appendix. With respect to interpretation, population average row and column main effects can be obtained via ANOVA decomposition of M. §.§ Tensor Data LANOVA penalization can be extended to a p_1 × p_2 ×…× p_K K-mode tensor Y. We consider: vec(Y) = Wβ + vec(C) + vec(Z), C = {c_i_1… i_K} ∼ i.i.d.
Laplace(0, λ_c^-1), Z = {z_i_1… i_K} ∼ i.i.d. N(0, σ^2_z), where vec(Y) is the ∏_k = 1^K p_k × 1 vectorization of the K-mode tensor Y with “lower” indices moving “faster”, and W and β are the design matrix and unknown mean parameters corresponding to a K-way ANOVA decomposition treating the K modes of Y as factors. The matrix W = [W_1, …, W_2^K-1] is obtained by concatenating the 2^K - 1 unique matrices of the form W_l = (W_l,1 ⊗…⊗ W_l,K), where each W_l,k is equal to either I_p_k or 1_p_k, excluding the identity matrix I_p_K ⊗…⊗ I_p_1. As in the matrix case, approaches that assume a low rank C are common <cit.>. We penalize elements of the highest order mean term C, for which no replicates are observed. In the three-way tensor case, the first part of Equation (<ref>) refers to the following decomposition: y_ijk = μ + a_i + b_j + d_k + e_ij + f_ik + g_jk + c_ijk + z_ijk. Estimates of σ^2_z and λ_c are constructed from vec(R) = (H_K ⊗…⊗ H_1)vec(Y), where H_k = I_p_k - 1_p_k 1_p_k'/p_k is the p_k × p_k centering matrix and ‘⊗’ is the Kronecker product. As in the matrix case, vec(R) is independent of the lower-order unknown mean parameters β, i.e. vec(R) = (H_K ⊗…⊗ H_1)vec(C + Z). Our estimates of σ^2_z and λ_c are still functions of the second and fourth sample moments of R: r^(2) = (1/p)∑_i = 1^p r^2_i and r^(4) = (1/p)∑_i = 1^p r^4_i, where p = ∏_k = 1^K p_k. We extend our empirical Bayes estimators as follows: σ^4_c = {∏_k = 1^K p_k^3/((p_k - 1)(p_k^2 - 3p_k + 3))}{r^(4)/3 - (r^(2))^2}, σ^2_c = √(σ^4_c), σ^2_z = {∏_k = 1^K p_k/(p_k - 1)} r^(2) - σ^2_c, where λ_c = √(2/σ^2_c). As in the matrix case, we can compute the bias of σ^4_c. Under the model given by Equation (<ref>), 𝔼[σ^4_c] - σ^4_c = -{∏_k = 1^K p_k^3/((p_k - 1)(p^2_k - 3p_k + 3))}[{3∏_k = 1^K (p_k - 1)^2/p_k^3}σ^4_c + {2∏_k = 1^K (p_k - 1)/p_k^2}(σ^2_c + σ^2_z)^2]. Interpretation of this result is analogous to the matrix case. We tend to prefer the simpler model with vec(C) = 0 over a more complicated model with nonzero elements of vec(C) when few data are available or when the data are very noisy. Additionally, 𝔼[σ^4_c] - σ^4_c = O(1/p), i.e. the bias of σ^4_c diminishes as the number of levels of any mode increases. We also assess the large-sample performance of our empirical Bayes estimators in the K-way tensor case. Under the model given by Equation (<ref>), σ^4_c p→ σ^4_c, σ^2_c p→ σ^2_c, λ_c p→ λ_c and σ^2_z p→ σ^2_z as p_k'→∞ with p_k, k ≠ k', fixed or p_1, …, p_K→∞. A block coordinate descent algorithm for estimating the unknown mean parameters is given in the web appendix. Results for testing the appropriateness of assuming heavy-tailed C and robustness carry over to K-way tensors. K-way tensor analogues of Propositions <ref>-<ref>, where we replace np with p and assume all p_1, …, p_K→∞, are shown to hold in the web appendix. Lastly, we can also extend LANOVA penalization for tensor data to penalize lower-order mean parameters. Because a tensor-variate Y includes even more lower-order mean parameters, penalizing lower-order parameters is especially useful. We give nuisance parameter estimators for penalizing lower-order parameters in the three-way case in the web appendix. § NUMERICAL EXAMPLES Brain Tumor Data: We consider a 356 × 43 matrix of gene expression measurements for 356 genes and 43 brain tumors. The 43 brain tumors include 24 glioblastomas and 19 oligodendroglial tumors, which include 5 astrocytomas, 8 oligodendrogliomas and 6 mixed oligoastrocytomas.
This dataset is contained in the package for <cit.>, and it has been used to identify genes associated with glioblastomas versus oligodendroglial tumors <cit.>. We focus on comparison to <cit.>, who used a variation of principal components analysis of Y to identify differentially expressed genes and groups of tumors, which is similar to using an additive-plus-low-rank estimate of M. To ensure a straightforward comparison to the methods used in <cit.>, we do not perform any additional preprocessing of the data, e.g. adjustments for possible low rank confounding factors that are often observed in gene expression data <cit.>. Unlike pairwise test-based methods, which require prespecified tumor groupings, LANOVA penalization and additive-plus-low-rank estimates can be used to examine differential expression both within and across types of brain tumors. Differential expression within types of brain tumors in particular is of recent interest <cit.>. We apply LANOVA penalization with penalized interaction effects and unpenalized main effects. The test given by Proposition <ref> supports a non-additive estimate of M; we obtain a test statistic of 18.45 and reject the null hypothesis of normally distributed elementwise variability at level α = 0.05 with p < 10^-5. We estimate that 11,188 elements of C (73%) are exactly equal to zero, i.e. most genes are not differentially expressed. Figure <ref> shows C and a subset containing fifty genes with the lowest gene-by-tumor sparsity rates. The results of LANOVA penalization are consistent with those of <cit.>. We observe that 49% and 56% of the elements of C involving the genes ASPA and PDPN are nonzero. Examination of M indicates overexpression of these genes among glioblastomas relative to oligodendroglial tumors, as observed in <cit.>. LANOVA penalization yields additional results that are consistent with the wider literature. The gene DLL3 has the highest rate of gene-by-tumor interactions at 74% and tends to be underexpressed in glioblastomas. This is consistent with findings of overexpression of DLL3 in brain tumors with better prognoses <cit.>. The KLRC genes KLRC1, KLRC2 and KLRC3.1 all have very high rates of gene-by-tumor interactions at 72%, 70% and 60%. <cit.> has found evidence for differential KLRC expression across glioma subtypes. LANOVA penalization also indicates that several brain tumors have unique gene expression profiles. Glioblastomas 3, 4 and 30 have rates of nonzero gene-by-tumor interactions exceeding 50% and similar gene expression profiles. Specifically, we observe overexpression of FCGR2B and HMOX1 and underexpression of RTN3 for glioblastomas 3, 4 and 30. Overexpression of FCGR2B or HMOX1 is associated with poorer prognosis <cit.>, and RTN3 is differentially expressed across subgroups of glioblastoma that differ with respect to prognosis <cit.>. This suggests that glioblastomas 3, 4 and 30 may correspond to an especially aggressive subtype. fMRI Data: Second, we consider a tensor of fMRI data which appeared in <cit.>. During each of 36 tasks, fMRI activations were measured at 55 time points and 4,698 locations (voxels). Accordingly, the data can be represented as a 36 × 55 × 4,698 three-way tensor. Because the data are so high dimensional, many methods of analysis are prohibitively computationally burdensome. Accordingly, parcellation approaches that reduce the spatial resolution of fMRI data by grouping voxels into spatially contiguous groups are common; however, the choice of a specific parcellation can be difficult <cit.>.
Instead, we propose LANOVA penalization as an exploratory method to identify relevant dimensions of spatial variation that should be accounted for in a subsequent analysis. The test given by Proposition <ref> supports a non-additive estimate of M; we obtain a test statistic of 298.87 and reject the null hypothesis of normally distributed elementwise variability at level α = 0.05 with p < 10^-5. Having found support for the use of a non-additive estimate of M, we also penalize lower-order mean parameters, as Y is high dimensional and sparsity of lower-order mean parameters could result in substantial dimension reduction and improved interpretability. The LANOVA estimate has 1,751,179 nonzero parameters, a small fraction of the 9,302,040 parameters needed to represent the raw data Y (18.83%). Recalling that we are primarily interested in spatial variation, we examine estimated task-by-location interactions F and task-by-time-by-location elementwise interactions C, as defined in Equation (<ref>). Figure <ref> shows the percent of nonzero entries of F and C at each location. At each location, the proportion of nonzero entries of F is much larger than the proportion of nonzero entries of C. This suggests that much of the spatial variation of activations by task can be attributed to an overall level change in activation over the duration of the task, as opposed to time-specific changes in activation. In a subsequent analysis, it may be reasonable to ignore task-by-time-by-location interactions. By examining the percent of nonzero entries of F by location, we can get a sense of which locations correspond to level changes in fMRI activity response by task. There is evidence for an overall level change in response to at least some tasks for all locations; the minimum percent of nonzero entries of F per location is 33%. However, voxels in the parietal region, the calcarine fissure and the right- and left-dorsolateral prefrontal cortex have particularly high proportions of nonzero entries of F, suggesting that overall activation in these regions is particularly responsive to tasks. By examining the percent of nonzero entries of C by location, we can get a sense of which locations correspond to time-specific differential activity by task over time. We see that nonzero entries of C are concentrated among voxels in the upper supplementary motor area, the calcarine fissure and the left- and right-temporal lobes. In this way, we can use LANOVA estimates to identify subsets of relevant voxels that should be included in a subsequent analysis. Fusarium Data: Last, we consider the problem of checking for nonzero three-way interactions in experimental data without replicates. The data is a 20 × 7 × 4 three-way tensor containing severity of disease incidence ratings for 20 varieties of wheat infected with 7 strains of Fusarium head blight over 4 years, from 1990-1993, that appeared in <cit.>. There is scientific reason to believe that several nonzero three-way variety-by-strain-by-year interactions are present. <cit.> examined these interactions using a rank-one tensor model for C, i.e. c_ijk = α_i γ_j δ_k. However, as noted in Section <ref>, a low rank model may not be sufficient even if few nonzero interactions are present. As in <cit.>, we transform the severity ratings to the logit scale before estimating LANOVA parameters.
The test given by Proposition <ref> supports a non-additive estimate of M; we obtain a test statistic of 3.99 and reject the null hypothesis of normally distributed elementwise variability at level α = 0.05 with p = 3.34 × 10^-5. We obtain 87 nonzero entries of C (16%). Figure <ref> shows nonzero elements of C as well as nonzero elements of C_IPOD,H obtained using the hard-thresholding IPOD method of <cit.>, with dashed lines separating groups of wheat and blight by country of origin: Hungary (H), Germany (G), France (F) or the Netherlands (N). Estimates of elements of C_IPOD,S obtained using the soft-thresholding IPOD method of <cit.> are not pictured because they are all exactly equal to zero. We can interpret nonzero elements of estimates of C as evidence for variety-by-year-by-strain interactions that cannot be expressed as additive in variety-by-year, year-by-strain and variety-by-strain effects. Like <cit.>, both the LANOVA estimate C and the hard-thresholding IPOD estimate C_IPOD,H include large three-way interactions in 1992, during which there was a disturbance in the storage of blight strains. Specifically, we observe interactions involving Dutch variety 2, the only variety with no infections at all in 1992, and interactions between Hungarian varieties 21 and 23 and foreign blight strains, which despite the storage disturbance were still able to cause infection in these two Hungarian varieties alone. The soft-thresholding IPOD estimate C_IPOD,S fails to include these real interactions, suggesting that it overpenalized C just as we observed in simulations. We do not have enough information about the data to assess whether or not the remaining interactions identified by C and C_IPOD,H are related to features of the study or known patterns in the behavior of certain varieties and strains; however, they suggest that further investigation of these varieties and strains may be warranted. § DISCUSSION This paper demonstrates the use of the common lasso penalty and the corresponding Laplace prior distribution for estimating elementwise effects of scientific interest in the absence of replicates. Our procedure, LANOVA penalization, can also be interpreted as assessing evidence for nonadditivity. We show that our nuisance parameter estimators are consistent and explore their behavior when assumptions are violated. We demonstrate that the corresponding mean parameter estimates can perform favorably relative to strictly additive, strictly non-additive, additive-plus-low-rank, IPOD, and approximately minimax estimates when elements of C are exponential power or Bernoulli-normal distributed, especially if the tail behavior of elements of C is similar to the tail behavior of a Laplace distribution and the variance of the interactions σ^2_c is large relative to the variance of the noise σ^2_z. We emphasize that LANOVA penalization is computationally simple. The nuisance parameter estimators are easy to compute for arbitrarily large matrices, and estimates of M can be computed using a fast block coordinate descent algorithm that exploits the structure of the problem. We also extend LANOVA penalization to penalize lower-order mean parameters and to apply to tensors. Finally, we show that LANOVA estimates can be used to examine gene-by-tumor interactions using microarray data, to perform exploratory analysis of spatial variation in activation response to tasks over time in high dimensional fMRI data and to assess evidence for “real” elementwise interaction effects in experimental data.
To conclude, we discuss several limitations and extensions. One limitation is that we assume heavy-tailed elementwise variation is of scientific interest and should be incorporated into the mean M, whereas normally distributed elementwise variation is spurious noise Z. If the noise is heavy-tailed, it may erroneously be incorporated into the estimate of C. Similar issues arise with low rank models for C, insofar as systematic correlated noise can erroneously be incorporated into the estimate of C. Furthermore, if the elementwise variation of scientific interest includes low rank components, these low rank components may not be included in LANOVA estimates of the mean M. This is of particular concern in the brain tumor data example, but also of more general concern in applications involving gene expression data, because unobserved confounders with low rank structure may be present and need to be accounted for <cit.>. That said, any method that aims to separate elementwise variation into components that are of scientific interest and spurious noise requires strong assumptions that must be considered in the context of the problem. Another limitation is that the results of the simulation study assessing the performance of the LANOVA estimate M do not necessarily suggest favorable performance of the LANOVA estimate in all settings. First, we only consider exponential power and Bernoulli-normal elements of C. Although we expect the LANOVA estimate to perform well when elements of C are heavy-tailed, we do not expect the LANOVA estimate to perform well when elements of C are light-tailed. Second, our simulations consider C with independent, identically distributed elements. In some settings, it may be reasonable to expect dependence across elements of C, and additive-plus-low-rank estimates may perform better. The methods presented in this paper could be extended in several ways. Although we use our estimators of λ_c and σ^2_z in posterior mode estimation, the same estimators could also be used to simulate from the posterior distribution using a Gibbs sampler. The output of a Gibbs sampler could be used to construct credible intervals for elements of M, which would be one way of addressing uncertainty. We may also want to account for additional uncertainty induced by estimating λ_c and σ^2_z. To address this, a fully Bayesian approach with prior distributions set for λ_c and σ^2_z could be taken, at the cost of losing a sparse estimate of C. Our empirical Bayes nuisance parameter estimators could be used to set parameters of prior distributions for λ_c or σ^2_z. Last, LANOVA penalization for matrices is a specific case of the more general bilinear regression model, where we assume Y = AW + BX + C + Z, given known W and X. We chose to focus on a simpler case in this paper to facilitate the derivation of interpretable expressions for the bias of σ^4_c as well as estimators that are consistent as either n or p→∞, because researchers often encounter “fat” or “skinny” matrices in practice. However, the same logic could easily be extended to the more general bilinear regression context, as well as the even more general multilinear context for tensor Y. Finally, our intuition combined with the results of Section <ref> suggests that the Laplace distributional assumptions used in this paper are likely to be violated in many settings.
The strength of the Laplace distributional assumption for elements of C can be justified by the need to make some assumptions to estimate σ^2_c and σ^2_z when C and Z are always observed as a sum C + Z. Given that we need to estimate fourth order moments of C and Z just to separately estimate σ^2_c and σ^2_z, we expect that we would need to estimate even higher order moments to assess the appropriateness of the Laplace distributional assumption for C, which is notoriously difficult to do well in practice. However, if we observed replicate measurements, as is the case for the lower-order mean parameters a and b, we would not need to estimate fourth order moments to separately estimate σ^2_a or σ^2_b from σ^2_z. Instead, we could use fourth order moments to assess the appropriateness of the Laplace distributional assumptions for a or b and possibly improve distribution specification. In recent work, we have considered this question in the general regression setting and have found that it can be possible to test the appropriateness of Laplace distributional assumptions and, when Laplace distributional assumptions are inappropriate, choose better ones <cit.>. § ACKNOWLEDGEMENTS This work was partially supported by NSF grants DGE-1256082 and DMS-1505136. | http://arxiv.org/abs/1703.08620v2 | {
"authors": [
"Maryclare Griffin",
"Peter D. Hoff"
],
"categories": [
"stat.ME",
"stat.AP"
],
"primary_category": "stat.ME",
"published": "20170324230417",
"title": "Lasso ANOVA Decompositions for Matrix and Tensor Data"
} |
Combustion in thermonuclear supernova explosions
Friedrich K. Röpke
Heidelberger Institut für Theoretische Studien, Schloss-Wolfsbrunnenweg 35, 69118 Heidelberg, Germany and Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Philosophenweg 12, 69120 Heidelberg, Germany
[email protected]
December 30, 2023
======================
Type Ia supernovae are associated with thermonuclear explosions of white dwarf stars. Combustion processes convert material in nuclear reactions and release the energy required to explode the stars. At the same time, they produce the radioactive species that power radiation and give rise to the formation of the observables. Therefore, the physical mechanism of the combustion processes, as reviewed here, is the key to understanding these astrophysical events. Theory establishes two distinct modes of propagation for combustion fronts: subsonic deflagrations and supersonic detonations. Both are assumed to play an important role in thermonuclear supernovae. The physical nature and theoretical models of deflagrations and detonations are discussed together with numerical implementations. A particular challenge arises due to the wide range of spatial scales involved in these phenomena. Neither the combustion waves nor their interaction with fluid flow and instabilities can be directly resolved in simulations. Substantial modeling effort is required to consistently capture such effects, and the corresponding techniques are discussed in detail. They form the basis of modern multidimensional hydrodynamical simulations of thermonuclear supernova explosions. The problem of deflagration-to-detonation transitions in thermonuclear supernova explosions is briefly mentioned. § INTRODUCTION Theory associates Type Ia supernovae with thermonuclear explosions of white dwarf stars. Therefore, any model of these spectacular astrophysical events is based on describing the physical processes leading to such explosions. Thermonuclear reactions release the energy necessary to overcome the gravitational binding of the star. What are these reactions? How do they spread over the stellar material after ignition? These questions are answered by combustion theory, which was extensively developed for chemical terrestrial processes because of their importance for technical applications. Over the past two decades, there has been an increased interest in modeling the combustion processes in thermonuclear supernovae. As we will discuss below, the phenomena known from chemical combustion find many counterparts here. This led to a rapid improvement of multidimensional simulations of thermonuclear supernova explosions that greatly benefited from knowledge and techniques in technical combustion modeling. Some of the regimes of turbulent combustion that we will identify in the context of supernova modeling are encountered, for instance, in car engines, and are well known and studied there. We will argue in the following sections that thermonuclear burning in Type Ia supernovae propagates as combustion waves subject to various instabilities. This makes the explosion physics of thermonuclear supernovae rich and its numerical simulation challenging. A variety of physical effects determine the propagation of such combustion waves, and the scales of interaction range from microscopic lengths below millimeters up to the radius of the exploding stars (thousands of kilometers).
Modeling thermonuclear combustion in white dwarf stars is therefore a demanding multi-scale multi-physics problem. § COMBUSTION IN WHITE DWARF MATTER The phenomenon of combustion in white dwarf matter is generally modeled as an interplay of fluid mechanics and reactions: reactions convert species and release the energy that drives the process. This causes fluid flow and energy transport that propagate the burning into fuel material. All this is captured in a set of partial differential equations that result from combining the Navier-Stokes equations of fluid dynamics with reactive and other source terms and additional terms that account for energy transport by diffusion and conduction. The equations are based on the concepts of mass conservation ∂ρ/∂ t + ∇· (ρv) = 0, momentum balance ∂ρv/∂ t + ∇· (ρvv) + ∇·Π = ρf, species balance[Note that the combination of mass conservation with the equations describing balance of species overdetermines the system and thus Eqs. (<ref>) and (<ref>) have to be treated in a consistent way.] ∂ρ X_i/∂ t + ∇· (ρ X_i v) = - ∇· (ρv^D_i X_i) + ρω_X_i, i = 1… N, and energy balance ∂ρ e_tot/∂ t + ∇· (ρ e_tot v) + ∇· (vΠ) = ρv·f + ρ∑_i=1^N X_i v_i^D·f_i - ∇·q + ρ S. Here, ρ, v, e_tot, Π, f, X_i, v^D, ω, q, and S denote mass density, fluid velocity, specific (i.e. per unit mass) total energy, pressure tensor, external forces, mass fraction of species i (index i running from 1 to the number of considered species N), diffusion velocity, reaction rate, heat flux, and energy source terms due to reactions, respectively. Eq. (<ref>) equates the temporal change of the mass density to the divergence of the mass flux density (the sign is convention). Integrated over a control volume, Eq. (<ref>) states that mass can only change inside it by a mass flux over its surface. Species, momentum, and energy are not strictly conserved. In addition to changes due to fluxes over the surface, they can be produced or destroyed by physical mechanisms inside the volume. This is captured in the source terms on the right-hand sides of Eqs. (<ref>) – (<ref>): momentum can be changed under the action of an external force, species are changed by diffusion and reactions, and energy is affected by external forces, diffusion processes, heat flux, and energy release or consumption in reactions. The list of source terms included in Eqs. (<ref>) – (<ref>) is not exhaustive. As pointed out by <cit.>, radiative heat flux is unimportant in white dwarf matter, and we therefore neglect it here. The potential dynamic effects of magnetic fields are also not considered. Our choice of effects accounted for in the above equations, however, allows for a model that captures the basics of combustion physics in thermonuclear supernovae. The system of Equations (<ref>) – (<ref>) is capable of describing complex physical processes. In particular, we emphasize the importance of the second term on the left-hand side of Eq. (<ref>), the divergence of the momentum flux density, often referred to as the “advection term.” This term is nonlinear in the fluid velocity, leading to the rich (and theoretically still not satisfactorily understood) phenomenon of turbulence. Its impact on combustion processes will be discussed in Sect. <ref>. The ratio of the magnitude of the advection term to that of viscosity is measured by the Reynolds number Re on a certain spatial scale.
In the situations encountered for thermonuclear burning in white dwarf stars, the value of the Reynolds number is huge (it can reach Re ∼ 10^14 on length scales comparable to that of the white dwarf star). To a good approximation, viscosity effects can be neglected in models that consider such astrophysical combustion processes on large scales. The pressure tensor then reduces to the hydrodynamical pressure P, and Eq. (<ref>) simplifies to the Euler equation. To close the system of Eqs. (<ref>) – (<ref>), an appropriate equation of state for white dwarf matter has to be provided. It relates pressure to other quantities such as mass density, internal energy, and composition. White dwarf stars are compact objects in which matter is fully ionized. For white dwarf matter, the ions dominate the mass density and are treated as an ideal gas. The electrons, however, provide the main contributions to pressure and energy. They form a degenerate gas and may be relativistic to a variable degree. Additional effects due to radiation, electron-positron pairs, and Coulomb interactions may also be incorporated in the equation of state. Unlike the ideal gas equation of state widely used in computational fluid dynamics, astrophysical equations of state are often provided in the form of tables rather than as a closed expression. In addition, relations modeling heat flux and diffusion have to be specified. In the case of thermonuclear supernova explosions, the most important external force is gravity. The corresponding terms in the above equations can be determined from solving the Poisson equation for the given mass distribution. Reaction rates and the energy source term due to reactions depend on temperature, density, and composition, ω_X_i = ω_X_i(ρ, T, X_i), S = S(ω_X_i). The reactions in a combustion system can be chemical in nature, as is usually the case in terrestrial technical applications, or nuclear, as in Type Ia supernovae. Here, we are interested in the latter case. Nuclear reactions are formally treated with a nuclear reaction network <cit.>. This set of coupled ordinary differential equations describes the temporal change of the abundance of each species i due to electron captures, β-decays, photodisintegrations, two-, and three-body reactions (for higher-order reactions, the probability is low and they can be neglected). These effects depend on temperature, density, and the abundances of the other species in the system. The numerical implementation of the hydrodynamics solver in thermonuclear supernova explosion simulations usually follows finite volume techniques. For a textbook on that matter, we refer to <cit.>. Combustion is encountered in many situations in nature and technology. The characteristics of such processes, however, vary widely. Chemical combustion is classified depending on how the reactants are brought together in the fuel. In premixed combustion, all necessary ingredients for the reaction are contained in the fuel mixture, whereas diffusion processes bring the reactants into contact in non-premixed combustion. Both types are discussed in detail in <cit.>. In thermonuclear supernovae, the fuel material contains all species required to set off nuclear burning, and therefore this situation resembles that of premixed combustion, which we will focus on in the following discussion. § COMBUSTION FRONTS Reactions in combustion systems are usually initiated by an ignition in a certain small volume from which they spread out.
In the case of thermonuclear supernovae, the extremely strong dependence of the associated thermonuclear reaction rates on temperature localizes the burning in space so that it propagates as a combustion wave. For example, the reaction rate of carbon fusion, which initiates thermonuclear burning in carbon-oxygen white dwarf stars, scales at T ∼ 10^10 K as T^12 <cit.>. Therefore, burning peaks sharply at the places of highest temperature, and the reaction is confined to a thin layer in space that propagates through the material. At high fuel densities, the width of such combustion waves may be tiny fractions of a millimeter. At lower densities it broadens, but it still remains small compared to the scales of the exploding white dwarf star with a radius of thousands of kilometers. The idealized treatment of a combustion wave as a sharp front, where fuel is instantaneously converted to ash and the corresponding energy is released, thus provides an excellent approximation for combustion modeling on these large scales.

This discontinuity approximation establishes the simplest model of a combustion wave. Its internal structure is not resolved, but the correct relation between the hydrodynamical states ahead of and behind the combustion front is recovered. The discontinuity approximation simplifies the equations describing the system: microphysical transport due to diffusion and heat conduction can be neglected. If the fluid can be treated as ideal and in the absence of gravity and other external forces, the system reduces to the Euler equations with source terms due to reactions.

Discontinuities are not among the solutions of the system as stated above. This differential form of the equations of fluid dynamics allows only for continuous (“strong”) solutions. The integral form, however, also allows for discontinuous (“weak”) solutions. If treated in the differential form, discontinuous solutions require the introduction of additional jump conditions relating the hydrodynamical states on both sides of the discontinuity. Neglecting reactions, conservation of mass, momentum, and energy over a discontinuity requires continuous fluxes of these quantities over the discontinuity (which may carry jumps in other quantities such as density, temperature, etc.).

For simplicity, consider the system in one spatial dimension in the rest frame of the combustion front, and let u denote the velocity component normal to it. Combining the continuity of the mass flux density, M := ρu, and of the momentum flux density, ρu^2 + P, over the front [the corresponding expressions can easily be read off from the terms on the left-hand side of Eqs. (<ref>) and (<ref>)], we arrive at the Rayleigh criterion <cit.> relating the unburnt and burnt states (denoted with subscripts “u” and “b”, respectively):
−M^2 = −(ρ_u u_u)^2 = −(ρ_b u_b)^2 = (P_b − P_u)/(V_b − V_u).
The specific volume is defined as V := 1/ρ. Eq. (<ref>) describes a straight line in the P-V-plane, the so-called Rayleigh line. It must have a negative slope corresponding to −M^2, because the mass flux cannot be imaginary. The burnt state is characterized by a density and a pressure that are connected to the unburnt state by this line. The Rayleigh criterion (<ref>) derives from the principles of mass and momentum conservation and confines the mechanically possible burnt state to a line in the P-V-plane. A second relation fixes the possible burnt states to a point.
If no energy were released in the combustion wave, the energy flux (ρe_tot + P)u = (u^2/2 + e_int + P/ρ)ρu over it would be continuous. Here, we introduce the specific internal energy e_int := e_tot − u^2/2. This results in a thermodynamic condition for the “burnt” state, the so-called Hugoniot adiabat <cit.>:
e_int,u − e_int,b = (P_u + P_b)/2 (V_b − V_u).
Of course, no energy release in the burning has been accounted for up to now, so it would be more correct to speak of the “post-discontinuity” state here; we refrain from doing so to avoid an excessive introduction of notation.

For common equations of state, the Hugoniot adiabat is a parabola in the P-V-plane connecting the unburnt state to all thermodynamically allowed “burnt” states. It is important to note that the slope of the tangent to the Hugoniot adiabat at a certain point measures the speed of sound in the corresponding state, while the slope of the Rayleigh line through that same point represents the velocity of the front with respect to that state <cit.>. The curve's intersection with the Rayleigh line marks the physically realized (i.e. both mechanically and thermodynamically admissible) “burnt” state (V_b, P_b).

The process modeled so far without energy release over the combustion front corresponds to a hydrodynamical shock. If energy release is considered, the Hugoniot curve shifts upward in the P-V-plane and no longer contains the unburnt state:
e_int,u − e_int,b = Δh_0 + (P_u + P_b)/2 (V_b − V_u).
Here, Δh_0 denotes the difference in the formation enthalpies of the burnt and unburnt material (negative for exothermic burning). This situation is illustrated in Fig. <ref>. The shifted Hugoniot curve is usually referred to as the detonation adiabat. It can intersect with Rayleigh lines issuing from the unburnt state (V_u, P_u) for different values of M. Since these lines must have a negative slope, the possible burnt state is located on one branch of the detonation adiabat, either above point A or below point A' in Fig. <ref>. Combustion in these two branches is caused by different physical mechanisms, as we will discuss below.

The speed of sound in the unburnt material is graphically represented by the slope of the tangent to the Hugoniot adiabat in the point (V_u, P_u). By construction (see Fig. <ref>), all Rayleigh lines connecting (V_u, P_u) to the upper branch of the upward-shifted detonation adiabat have a steeper slope, which, in turn, measures the velocity of the combustion wave with respect to the unburnt state <cit.>. This means that all processes leading to burnt states on the upper branch of the detonation adiabat are caused by combustion fronts that propagate supersonically with respect to the fuel. These are called detonations. The converse holds for burnt states on the lower branch of the detonation adiabat, and the corresponding subsonically propagating combustion fronts are called deflagrations.

The Rayleigh lines from (V_u, P_u) to the points O and O' in Fig. <ref> mark a special case and are indicated with dashed lines in the figure. They are tangents to the detonation adiabat. By the same line of argument as above, one can show <cit.> that combustion waves terminating in the points O and O' propagate with the speed of sound in the burnt material. These are the so-called Chapman-Jouguet points.
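These tangency points can be located numerically. The sketch below does so for an ideal-gas equation of state; the γ-law gas and the assumed heat release q are illustrative stand-ins (white dwarf matter requires a degenerate equation of state), so the numbers only demonstrate the construction:

```python
import numpy as np
from scipy.optimize import brentq

gamma = 1.4          # ideal-gas stand-in, NOT the degenerate WD equation of state
P_u, rho_u = 1.0, 1.0
V_u = 1.0 / rho_u
q = 20.0             # specific heat release (code units), assumed value

def P_hugoniot(V):
    """Detonation adiabat P(V) for an ideal gas with heat release q, from
    e_u - e_b = -q + (P_u + P_b)/2 * (V_b - V_u) and e = P V/(gamma - 1)."""
    num = q + P_u * V_u / (gamma - 1.0) - 0.5 * P_u * (V - V_u)
    den = V / (gamma - 1.0) + 0.5 * (V - V_u)
    return num / den

def tangency(V):
    """Zero when the Rayleigh-line slope equals the adiabat slope (CJ point)."""
    dV = 1e-6
    slope_H = (P_hugoniot(V + dV) - P_hugoniot(V - dV)) / (2 * dV)
    slope_R = (P_hugoniot(V) - P_u) / (V - V_u)
    return slope_H - slope_R

V_cj = brentq(tangency, 0.2 * V_u, 0.999 * V_u)   # upper (detonation) branch
P_cj = P_hugoniot(V_cj)
M2 = -(P_cj - P_u) / (V_cj - V_u)   # (mass flux)^2 from the Rayleigh criterion
D_cj = np.sqrt(M2) * V_u            # detonation speed with respect to the fuel
print(f"CJ state: V_b = {V_cj:.3f}, P_b = {P_cj:.2f}, D_CJ = {D_cj:.2f}")
```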
Above these points, the velocity of the front is subsonic with respect to the burnt state; below, it is supersonic.

§ DEFLAGRATIONS IN WHITE DWARF MATTER

The simplified picture of a combustion front modeled as a discontinuity discussed in Sect. <ref> provides useful relations between burnt and unburnt states that directly derive from the balance laws of fluid dynamics. It does not, however, address the question of the physical processes that lead to the phenomenon of propagating combustion waves on a microscopic level.

In contrast to detonations (see Sect. <ref>), deflagrations propagate due to microphysical transport processes. Energy is released by reactions in the burning zone. Conduction and diffusion lead to a heating of the fuel ahead of this zone so that it also reaches conditions for burning. In this sense, deflagrations resemble the picture of a propagating flame front. Their velocity is set by the microphysical transport processes and is therefore slow compared to that of detonations, which will be discussed below.

The full structure of deflagration waves results from solving Eqs. (<ref>)–(<ref>). The reaction zone, where the reaction rate is significant, typically occupies only a small fraction of the width of the structure, and it trails a more extended preheat zone.

Deflagrations in white dwarf matter are more complex than this schematic picture suggests. To solve the equations of combustion hydrodynamics, the relevant microphysical processes have to be modeled. Nuclear burning proceeds in a multitude of reactions, starting out with carbon burning, proceeding over oxygen burning to silicon burning, and finally reaching nuclear statistical equilibrium, provided the fuel densities are sufficiently high. The relevant reactions have to be identified, and the corresponding reaction rates have to be provided. In addition, energy transport has to be accounted for. It turns out that the most important of these effects is thermal conduction, an energy flux caused by a temperature gradient:
q = −σ∇T.
In white dwarf matter, different energy carriers are present, e.g., electrons, ions, and photons. The transport properties of these carriers have to be accounted for in determining the thermal conductivity σ. <cit.> discuss the corresponding contributions and conclude that electron conduction dominates over all other effects under conditions typical for white dwarf stars. The reason is that final states for electron scattering processes are occupied below the difference between the Fermi energy E_F of the electron gas, which is ∼1 MeV, and the thermal energy k_BT ∼ 10 keV. This implies extremely large mean free paths of the electrons, leading to high values of σ. An important consequence is observed when comparing the magnitudes of thermal conduction and diffusive energy transport, a ratio that defines the Lewis number. Typical values for white dwarf matter are ∼10^7 <cit.>, while the Lewis number is of order unity in terrestrial combustion processes. The Prandtl number compares viscous transport to thermal conduction, and its value is typically very small in white dwarf matter <cit.>. These figures of merit reflect the efficiency of electron conduction due to the high degeneracy of white dwarf matter. This situation greatly simplifies the solution of the equations of combustion hydrodynamics (<ref>)–(<ref>). The huge Lewis number indicates that diffusive processes are unimportant for the energy transport and can be neglected.
The low value of the Prandtl number demonstrates the subdominance of viscous effects. Therefore, a model neglecting diffusive transport and based on the Euler equations of hydrodynamics rather than on the Navier-Stokes equations is justified. Another argument for this approach is that the full set of equations can only be solved numerically. Hydrodynamics solvers, however, introduce a considerable numerical viscosity, and explicitly accounting for small physical viscosities is pointless.

Numerical solutions of the resulting system determine the laminar (i.e. in the case of a planar front in the absence of any geometrical perturbation) propagation speed s_l of deflagration waves as a function of fuel density, temperature, and composition. Results are given by <cit.>. The different reactions occurring in a deflagration in white dwarf matter are illustrated in Fig. <ref>. It shows the density, the temperature, and the abundances of different species inside the structure. Note that the thermodynamical structure evolves monotonically, although the various reactions give rise to burning on different length scales due to differences in their reaction rates. Therefore, the width of the “carbon flame” is almost an order of magnitude smaller than that of the “oxygen flame.”

§ DEFLAGRATION INSTABILITIES AND INTERACTION WITH TURBULENCE

In more than one spatial dimension, subsonic deflagrations are subject to various instabilities. Some of them are relevant or even fundamental for flame propagation in thermonuclear supernovae. Deflagrations in Chandrasekhar-mass white dwarfs are one prominent example. They can potentially unbind the stars <cit.>, or at least lead to the ejection of a substantial part of their masses <cit.>. This is not trivial, as subsonic deflagrations significantly expand the fuel material ahead of them due to their energy release. This drop in fuel density eventually inhibits further burning, and there is a competition between the star's expansion and the fuel consumption by the flame. A simple laminar flame propagates too slowly. It does not burn sufficient amounts of material to explode the star. The required efficiency in energy release and species conversion is only possible due to the action of instabilities causing turbulence with which the flame interacts and thus accelerates. We will discuss below that instabilities are an inevitable physical consequence of any multidimensional treatment of fluid dynamics and flame propagation. These effects can only be parametrized in one-dimensional models, and thus multidimensional simulations are mandatory when a solid and predictive explosion model is aimed for.

Not all potential flame instabilities are relevant for the case of thermonuclear burning in supernovae. The diffusive-thermal instability <cit.>, for instance, is suppressed at the prevailing high Lewis numbers. The three main instabilities that potentially affect the flame propagation are the Rayleigh-Taylor instability, the Kelvin-Helmholtz instability, and the Landau-Darrieus instability. The first two are general fluid flow instabilities, while the latter is specific to burning fronts.

The Rayleigh-Taylor instability is caused by buoyancy in the corresponding supernova explosion scenarios. The flame ignites near the stellar center and burns toward the surface. Behind the flame, energy is released by nuclear reactions.
This partially lifts the degeneracy, and therefore the density in the ashes is lower than in the fuel. The result is an inverse density stratification in the gravitational field of the star, which is buoyancy unstable. Perturbations at the interface between fuel and ash grow. In the nonlinear stage, the Rayleigh-Taylor instability leads to the formation of bubbles of hot ash material that rise into the cold fuel. In between, downdrafts transport fuel material toward the star's center. These counterflows at the fuel-ash interface lead to shear at the flame. The Kelvin-Helmholtz instability amplifies initial perturbations and forms a wave pattern that grows into eddies in the nonlinear stage.

The flame itself, however, is subject to an instability that arises from its self-propagation. This so-called Landau-Darrieus instability is active on all scales that are significantly larger than the internal flame width and results from a refraction of stream lines in the vicinity of a flame front due to the change in density over it. While the tangential velocity component over the front is steady, mass flux conservation requires a jump in the normal component. This widens streamlines in the vicinity of bulges of the flame, and consequently the fluid velocity is locally lower there. The laminar burning speed thus exceeds the local fluid velocity, and this causes bulges to grow. Recesses become deeper by the opposite effect. Thus perturbations deforming the flame front from planar geometry grow.

In the nonlinear regime, however, the flame stabilizes into a cellular pattern <cit.>. Such a stabilization was shown to be effective for thermonuclear flames in white dwarf matter <cit.>. The Landau-Darrieus instability is therefore not expected to have a significant impact on burning in thermonuclear supernovae.

The Rayleigh-Taylor and Kelvin-Helmholtz instabilities, in contrast, show no such nonlinear stabilization. They are fundamental for the evolution of the flame structure on the scales of the exploding white dwarf star and lead to a considerable acceleration of burning in thermonuclear supernovae. Both instabilities, however, do not act on the smallest spatial scales. The growth time of the Rayleigh-Taylor instability competes with the burning time scale. At the smallest scales, front distortions grow so slowly that they will be overrun by the flame. As the growth time of the buoyancy-driven instability increases with the length scale of the perturbations, there exists a minimum scale for the Rayleigh-Taylor instability to develop in the presence of a self-propagating flame. This so-called fire-polishing length can be estimated as <cit.>
λ_fp = (2π s_l^2/g) (ρ_u + ρ_b)/(ρ_u − ρ_b),
with g denoting the gravitational acceleration.

The Kelvin-Helmholtz instability acts as a secondary effect, because the shear motions necessary to trigger it are produced by the large-scale uprising plumes of hot ashes in the nonlinear stage of the Rayleigh-Taylor instability. In its nonlinear regime, the Kelvin-Helmholtz instability results in vortices that develop in the shear region. A prerequisite for a Kelvin-Helmholtz unstable configuration is a tangential discontinuity without a flow over it. This is clearly not the case encountered for burning fronts. The finite mass flow over them stabilizes flames against the Kelvin-Helmholtz instability; its effect is similar to that of viscosity in shear layers, which also leads to some stabilization.
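For orientation, the fire-polishing length can be evaluated with representative figures; all inputs in this sketch are assumed order-of-magnitude values, not results from the literature:

```python
import numpy as np

# Fire-polishing length lambda_fp = 2*pi*s_l**2/g * (rho_u+rho_b)/(rho_u-rho_b).
# All inputs are representative assumptions for the dense interior of a
# carbon-oxygen white dwarf, chosen only to illustrate the scale involved.
g     = 1.0e9    # gravitational acceleration (cm/s^2), assumed
s_l   = 1.0e7    # laminar flame speed (cm/s), assumed
rho_u = 2.0e9    # fuel density (g/cm^3), assumed
rho_b = 1.4e9    # ash density (g/cm^3), assumed ~30% lighter than the fuel

lam_fp = 2.0 * np.pi * s_l**2 / g * (rho_u + rho_b) / (rho_u - rho_b)
print(f"lambda_fp ~ {lam_fp/1e5:.0f} km")   # of order tens of kilometers
```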
With buoyancy acting to form fast-rising bubbles of ash, however, this stabilization against shear no longer holds. The resulting shear velocities are much larger than the flow velocities over the flame fronts. Indeed, in numerical simulations <cit.> found that the flame becomes subject to the Kelvin-Helmholtz instability once the shear velocity exceeds the laminar burning speed. This is typically the case for thermonuclear deflagrations in white dwarf stars.

The action of the Kelvin-Helmholtz instability is the primary cause for the generation of turbulence. The importance of this effect can be estimated from the Reynolds numbers of the shear flows around rising bubbles of burning material, which are as high as 10^14. This gives rise to the following picture: turbulent eddies are generated by the shear instability at the location of the flame on the large scales of the buoyant plumes of burnt material. These eddies themselves are unstable and decay to smaller eddies, thus establishing a turbulent cascade. In this cascade, turbulent kinetic energy injected at the largest scales is transported without loss through the so-called inertial range. The velocity of the turbulent eddies steadily decreases toward smaller scales. At some microscopic scale (much below a millimeter), the local (scale-dependent) Reynolds number drops to ∼1, indicating that viscous effects become important. At this so-called Kolmogorov scale they dissipate the turbulent kinetic energy into heat.

The flame interacts with turbulent eddies on a wide range of scales. On scales much larger than its internal width, they deform and corrugate the flame front. The scale down to which this effect is active depends on the flame speed. Similar to the argument of the fire-polishing length for the buoyancy instability, there should be a scale on which the turbulent eddy velocities have become so low that they are comparable to the laminar flame speed. This implies that below this so-called Gibson scale l_Gibs the flame burns faster through the turbulent eddies than they turn over, and the flame is nearly undistorted by the action of eddies. For constant turbulent velocities, the size of the Gibson scale depends on the fuel density. With lower fuel density the flame speed decreases and the Gibson scale becomes smaller (see Fig. <ref>). One important question is how far down in scale space the effect of turbulent velocity fluctuations affects the flame, i.e. the size of the Gibson scale. If this scale is significantly above the width of the flame, turbulence will not interact with the internal flame structure. The only effect is then a corrugation of the flame front on large scales. This is the so-called flamelet regime of turbulent combustion <cit.>. While the microphysics of the burning is unaffected and the laminar flame speed does not change, the flame wrinkling has a significant effect on the overall burning efficiency. The flame surface is enlarged and therefore the net burning rate increases.

If, however, the Gibson scale is smaller than the internal flame width, turbulent eddies will affect the internal flame structure. They can now transport material into and out of the flame structure and thus disrupt it. This is the so-called distributed burning regime.
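Assuming Kolmogorov scaling for the eddy velocities, v'(l) = v'(L)(l/L)^(1/3), setting v'(l_Gibs) = s_l gives l_Gibs = L(s_l/v'(L))^3. The sketch below, with assumed injection-scale values, shows how strongly the Gibson scale shrinks as the laminar flame speed drops:

```python
# Gibson scale from v'(l) = v'(L) * (l/L)**(1/3) (Kolmogorov scaling assumed):
# setting v'(l_gibs) = s_l gives l_gibs = L * (s_l / v'(L))**3.
L_inj = 1.0e7   # turbulence injection scale (cm), assumed
v_L   = 1.0e7   # eddy velocity at the injection scale (cm/s), assumed

for s_l in (1.0e7, 1.0e6, 1.0e5):   # decreasing flame speed ~ lower fuel density
    l_gibs = L_inj * (s_l / v_L)**3
    print(f"s_l = {s_l:.0e} cm/s  ->  l_Gibs ~ {l_gibs:.1e} cm")
```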
Depending on the size of the Gibson scale, different sub-regimes are distinguished <cit.>: in the thin reaction zone regime, turbulence will alter the structure of the preheat zone, but not the reaction zone itself. When the Gibson scale reaches down to the width of the actual reaction zone, burning is said to take place in the broken reaction zones regime, and ultimately turbulence completely dominates the burning process and spreads it out over space in the well-stirred reactor regime.

In a thermonuclear supernova explosion resulting from a deflagration, the flame will enter the distributed burning regime. Towards the end of the burning, turbulence freezes out and the gravitationally unbound material approaches homologous expansion <cit.>. Before this happens, however, the flame will transition from the flamelet regime of turbulent combustion to the distributed burning regime. As the exploding white dwarf expands and the flame burns towards its surface, the fuel density decreases, thus reducing the Gibson scale (see Fig. <ref>). At the same time, the internal width of the deflagration grows with decreasing fuel density <cit.>. Once the flame reaches material with densities ≲10^7 g cm^-3, the Gibson scale falls below the flame width, and turbulence affects its internal structure.

Overall, the picture is that deflagrations in thermonuclear supernova explosions are far from propagating as laminar fronts. They possess a pronounced multidimensional structure, being wrinkled by instabilities on large scales, and the turbulence induced in this way interacts with them on a wide range of scales. A one-dimensional model therefore is necessarily approximate and introduces free parameters. For a consistent treatment, multidimensional hydrodynamical simulations are inevitable.

§ MODELING DEFLAGRATIONS

The internal structure of deflagrations in the dense core of carbon-oxygen white dwarfs cannot be resolved in current multidimensional explosion simulations. Therefore, modeling approaches for deflagrations are required. Two general strategies can be distinguished that have been used in such simulations: the first broadens the flame artificially by tuning the microphysical transport coefficients and the progress of the reaction such that a structure emerges that can be resolved on the scale of the computational grid. The second approach treats the flame as a sharp discontinuity with no attempt to model its internal structure. It has to be emphasized that neither of the approaches captures the physical processes inside the flame correctly.

The approach of artificially broadened flames was introduced to the field of thermonuclear supernova simulations by <cit.> <cit.>. It is based on a reaction progress variable ϕ that governs energy release and species conversion. The evolution of ϕ is described by an advection-diffusion-reaction (ADR) equation
∂ϕ/∂t + v·∇ϕ = κ∇^2ϕ + (1/τ) R(ϕ),
and hence the flame model is often referred to as the ADR approach. In the absence of advection (v = 0), ϕ is propagated by diffusion and produced by reaction. The diffusion coefficient κ and the time scale of the reaction τ do not reflect microphysical processes but are model parameters chosen such that the global flame properties are obtained in the desired way: a model flame is defined that matches the laminar flame propagation speed of the physical flame but is broad enough to be resolved with a few cells on the computational grid.
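A minimal one-dimensional sketch of Eq. (<ref>) without advection illustrates this calibration idea: diffusion and reaction together select a front that propagates at a speed set by κ and τ. The KPP-type reaction term R(ϕ) = ϕ(1 − ϕ) is an assumption made here for the toy example:

```python
import numpy as np

# Explicit finite differences for d(phi)/dt = kappa d2(phi)/dx2 + R(phi)/tau
# with R(phi) = phi*(1 - phi); the front speed approaches 2*sqrt(kappa/tau).
kappa, tau = 1.0, 1.0
nx, dx, dt = 800, 0.5, 0.05          # dt < dx**2/(2*kappa) for stability
x = np.arange(nx) * dx
phi = np.where(x < 20.0, 1.0, 0.0)   # ash (phi = 1) left, fuel (phi = 0) right

def front(phi):
    """Interpolated position where phi = 0.5."""
    return np.interp(0.5, phi[::-1], x[::-1])

times, positions = [], []
for n in range(3001):
    lap = (np.roll(phi, 1) - 2*phi + np.roll(phi, -1)) / dx**2
    lap[0] = lap[-1] = 0.0           # keep the boundary cells fixed
    phi = phi + dt * (kappa*lap + phi*(1.0 - phi)/tau)
    if n % 500 == 0:
        times.append(n*dt); positions.append(front(phi))

speed = (positions[-1] - positions[1]) / (times[-1] - times[1])
print(f"measured front speed ~ {speed:.2f} (KPP value 2*sqrt(kappa/tau) = 2)")
```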
A more sophisticated version of this flame model involves individual progress variables for the different burning stages in the flame <cit.>.

The level-set approach, in contrast, models the flame as a discontinuity. Energy is released, and species are converted instantaneously at the position of the discontinuity. Such a discontinuity can be treated numerically by associating it with the zero-level set of a signed distance function G(x, t), |∇G| = 1, such that the flame is modeled as the moving hypersurface Γ(t):
Γ(t) := {x | G(x, t) = 0}
<cit.>. The propagation of this hypersurface is given by the evolution of the signed distance function
∂G/∂t = (v_u·n + s_u) |∇G|,
where v_u, n, and s_u denote the fluid velocity in the unburnt material, the normal vector to the flame front, and the flame speed with respect to the fuel, respectively. The G-function is defined to be negative in the fuel and positive in the ashes <cit.>. From G, the location of the flame can be reconstructed with subgrid cell resolution.

Clearly, neither approach treats the burning microphysics consistently; both are models to propagate a flame-like structure on scales resolvable in a thermonuclear supernova simulation. Therefore, the model parameters have to be calibrated to reproduce properties of the physical flames (in particular the flame speed and the energy release). These have to be known independently and are taken from one-dimensional microscopic flame simulations <cit.>.

Both the ADR and the level-set approaches are widely used to model deflagrations in thermonuclear supernova explosion simulations. Although the ADR model may seem physically better motivated, it has disadvantages: because the model flame has an artificial finite width, it suffers from curvature effects close to the grid scale, and it modifies fluid flows in neighboring cells that are physically very far away from the actual flame. This may alter the response of ADR model flames to instabilities and turbulence. Minimizing such effects is one of the reasons why ADR approaches are usually combined with adaptive mesh refinement (AMR) techniques. The level-set approach, in contrast, is in principle able to localize the flame front with subgrid-scale resolution as a discontinuity (in the hydrodynamical variables, at least as far as the employed hydrodynamics solver is able to represent discontinuities), and AMR is usually not employed here.

As discussed in Sect. <ref>, flames do not propagate as laminar fronts in white dwarfs. They are subject to instabilities and interaction with turbulent motions, which significantly enhances the burning efficiency. Such effects have to be taken into account in any flame model for simulating thermonuclear supernova explosions. This is not automatically guaranteed because of the finite spatial resolutions these simulations reach. Consequently, the numerical flame model will lack structure from the unresolved scales and be artificially smooth. For the flamelet regime, surface area enhancement and burning acceleration due to flame-turbulence interaction are therefore not fully reproduced in simulations. To compensate for the effect of the missing flame substructure, the model flame front is propagated on the grid scale with an effective turbulent flame speed s_t instead of the laminar value s_l, which applies only on unresolved microscopic scales.
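The following one-dimensional sketch illustrates the level-set evolution described above and shows where an effective turbulent speed would enter: the burning speed in the update below is precisely the quantity that is replaced by s_t on the grid scale (all values are illustrative):

```python
import numpy as np

# 1D sketch of the G-equation dG/dt = (v_u + s_u)|dG/dx| with |dG/dx| = 1.
# G < 0 in the fuel and G > 0 in the ashes; the flame sits at G = 0.
nx, dx = 200, 1.0
x = np.arange(nx) * dx
x0 = 50.0                    # initial front position (ashes at x < x0), assumed
G = x0 - x                   # signed distance: positive in ash, negative in fuel
v_u, s_u = 2.0, 0.5          # fluid and burning speed (code units); s_u is where
                             # an effective turbulent s_t would be substituted

dt, nsteps = 0.5, 100
for _ in range(nsteps):
    grad = np.abs(np.gradient(G, dx))   # stays ~1 for a signed distance function
    G += dt * (v_u + s_u) * grad

i = np.argmax(G < 0)                               # first fuel cell behind front
x_front = x[i-1] + dx * G[i-1] / (G[i-1] - G[i])   # sub-cell zero crossing
print(f"front at x = {x_front:.2f}, expected {x0 + (v_u + s_u)*dt*nsteps:.2f}")
```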
Propagating the model flame at such an effective turbulent speed is critical for the success of many thermonuclear supernova models. An approach used in some of the published thermonuclear supernova simulations <cit.> is to scale the effective turbulent flame speed to the velocity of buoyantly rising bubbles,
s_t ∝ √(g L (ρ_u − ρ_b)/(ρ_u + ρ_b)),
where g is the gravitational acceleration and L denotes the length scale of the bubble, here associated with the scale of the computational grid. <cit.> discusses a self-similarity in the structures of turbulent flame fronts subject to buoyancy as a motivation for this approach. The assumption of turbulent flame speeds being set by buoyancy effects has been called into question by <cit.>, who argue that a turbulence-driven speed should be used instead <cit.>.

An elaborate technique to account for flame-turbulence interaction is based on turbulent subgrid-scale models. According to <cit.>, the turbulent burning speed s_t of a flame on a certain scale should be proportional to the turbulent velocity fluctuations v' on that scale. These are determined from a subgrid-scale turbulence model in a Large-Eddy Simulation (LES) approach. In an LES, only the largest scales of the turbulent cascade, where energy is injected into it, are resolved, together with the onset of the inertial range. At the grid scale, the turbulent energy cascade is cut off, and additional modeling is required at and below this scale. Such turbulent subgrid-scale models were introduced by <cit.>, <cit.> to the field of thermonuclear supernova explosion simulations <cit.>. They are based on a balance equation that describes the budget of turbulent kinetic energy on the unresolved scales, with terms accounting for the transport of kinetic energy from the resolved to the unresolved scales, for energy transport on the unresolved scales, for dissipation by viscosity on unresolved scales, and for the action of the Archimedean force on unresolved scales. These subgrid-scale models for turbulence-flame interactions were used in several multidimensional simulations of deflagrations in white dwarfs leading to thermonuclear supernova explosions, e.g., <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.

§ DETONATIONS

The physical mechanism of detonations differs fundamentally from that of deflagrations. A detonation propagates due to the progression of a shock wave that compresses and heats fuel material such that it starts to burn. This burning, in turn, releases the energy to further support the shock wave. The temperature increase enhances the reaction rate; the shock structure itself, however, is too thin to allow for significant reactions while it passes. Behind this shock wave, reactions take place and release energy, leading to a further increase in temperature, which again raises the reaction rate. Fuel is consumed until depleted. Unlike deflagrations, the propagation of detonations is not determined by microphysical transport processes, but instead by hydrodynamical effects. Consequently, these are fast combustion fronts advancing with the speed of the shock wave, supersonically with respect to the fuel state.

For planar detonations, a theoretical picture was independently developed by <cit.>, <cit.>, and <cit.>. This so-called ZND model can be illustrated in the (P,V)-plane of Fig. <ref>. Consider the detonation branch only (see Fig. <ref>). The shock in the ZND model is assumed to be infinitely thin, as in the discontinuity approximation discussed in Sect. <ref>.
It evolves the initial state from point a along the detonation adiabat (bold dashed curve) over point e to state d, which marks the highest pressure in the detonation structure and is called the von Neumann spike. Here, reactions set in and lead to a heating and expansion of the material. Contrary to the above-discussed discontinuity approximation, the reactions are no longer assumed to be instantaneous but proceed with finite rates behind the shock. The corresponding path toward the final burnt state, e.g. c, must follow down a Rayleigh line, i.e. from d to c, because mass and momentum are conserved for all intermediate states. This path can be pictured as consisting of a sequence of intermediate states in which adiabats corresponding to partial but progressively more complete burning intersect with the Rayleigh line, until the detonation adiabat of the full energy release is reached. Relaxing the assumptions of the ZND theory leads to a more realistic path that is indicated as a dashed-dotted line in Fig. <ref>.

A special case is obtained when the final state is located at the Chapman-Jouguet point O. The detonation then propagates sonically with respect to the burnt material. Detonations often propagate in this mode and attain a steady structure <cit.>. In white dwarf material of sufficiently high density to burn to nuclear statistical equilibrium, detonations diverge from the Chapman-Jouguet type because of the endothermic nature of the involved photodissociation reactions <cit.>. They are of pathological type and travel with greater speeds. At lower densities, however, the Chapman-Jouguet case is a valid approximation.

§ MULTIDIMENSIONAL STRUCTURE OF DETONATIONS

The detonation model outlined above describes the simplified one-dimensional case of planar detonation waves. Because detonations propagate into the fuel with supersonic speeds, they are not subject to external hydrodynamical instabilities, such as the Rayleigh-Taylor or the Kelvin-Helmholtz instabilities. Nonetheless, real detonations possess a three-dimensional structure. Planar detonations are unstable and form incident shocks, transverse shock waves, and triple points. Their continuous interactions cause the emergence of cellular patterns <cit.> that are observed in terrestrial experiments.

Transverse waves move back and forth along the detonation, perpendicular to its direction of motion. At the points where they collide, so-called triple points emerge. Between these triple points, the shocks show significant curvature and are too weak to sustain the detonation. In the triple points, the compression is stronger, and this drives the reactions and the propagation of the detonation. The tracks of these triple points form a cellular pattern in the downstream material. This effect is observed in small-scale simulations of thermonuclear detonations in white dwarf matter <cit.>. It alters the characteristics of the detonation. Burning is more complete in the triple points, and thus the chemical composition of the ashes is inhomogeneous behind the front, with pockets of less completely burned material off the paths of the triple points.
The emerging complex multidimensional detonation structure is widened compared to the prediction of the one-dimensional model, and its propagation speed is lower. The one-dimensional theory therefore applies only in an average sense and provides an acceptable approximation if the multidimensional structure cannot be resolved. According to <cit.>, this is the case for detonations in dense white dwarf material. At lower densities, these structures may grow to sizes comparable to the scales of the exploding star. Even if unresolved, the changes in propagation velocity and ash composition due to the multidimensional structure may affect the characteristics of thermonuclear supernova explosions, although not by much <cit.>. Overall, multidimensional detonations are weaker and may quench at higher densities than expected from one-dimensional theory.

§ MODELING DETONATIONS

In many models of thermonuclear supernova explosions, detonations play an important role. They seem to be required to produce the stratified chemical composition observed in the outer layers of normal Type Ia supernovae. Because the associated shock wave compresses the material, they lead to more complete burning than deflagrations for the same fuel density. To produce the intermediate-mass elements observed in the spectra of Type Ia supernovae, substantial burning has to take place in low-density material. This is not given for detonations in Chandrasekhar-mass white dwarfs in hydrostatic equilibrium. The high densities in these objects would lead to an exclusive production of iron-group elements. Therefore, detonations either have to trigger after a phase of expansion caused by an initial deflagration in a Chandrasekhar-mass white dwarf, or they lead to explosions of less compact sub-Chandrasekhar-mass configurations.

Contrary to the case of deflagrations, detonations do not rely on microphysical transport, but arise directly from reactive fluid dynamics. Therefore, no additional modeling of microphysical processes is required for capturing these processes in numerical simulations that are based on the reactive Euler equations. In multidimensional simulations of thermonuclear explosions of white dwarfs, however, it is impossible to resolve the inner structure of detonations. Similar to the case of deflagrations, two approaches have been taken: one that broadens the structure to fit on the numerical grid and one that treats detonations as sharp discontinuities.

Broadened detonations arise naturally in a system modeled with the reactive Euler equations, provided they are triggered and the thermodynamic conditions are sufficient to allow for their propagation. The correct jumps in the hydrodynamical quantities as well as the detonation speed are retained for Chapman-Jouguet detonations even if their microphysical structure is not resolved. This means that, contrary to broadened deflagrations, the detonation speed is determined consistently in the model and does not have to be provided externally. It is, however, necessary to artificially suppress unphysical burning inside the (too wide) shock structure. This approach has been employed in a number of thermonuclear supernova simulations <cit.>.

The level-set approach allows detonations to be treated numerically as discontinuities and has been employed in simulations of thermonuclear supernovae <cit.>. Here, the material crossed by the model detonation is instantaneously converted into nuclear ash, and the corresponding energy is released.
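The specific energy released in such an instantaneous conversion can be estimated directly from tabulated nuclear mass excesses; the 50/50 carbon-oxygen composition in this sketch is an assumed example:

```python
# Specific nuclear energy release for complete burning of a carbon-oxygen
# mixture to 56Ni, from tabulated mass excesses (MeV); C12 = 0 by definition.
MEV_TO_ERG = 1.602e-6
AMU_TO_G   = 1.6605e-24
dm = {"C12": 0.000, "O16": -4.737, "Ni56": -53.90}   # mass excesses (MeV)

def q_per_gram(fuel, A_fuel):
    """Energy per gram for fuel nuclei of mass number A_fuel burning to 56Ni."""
    n_fuel = 56.0 / A_fuel                       # fuel nuclei per 56Ni nucleus
    dE = n_fuel * dm[fuel] - dm["Ni56"]          # MeV released per 56Ni made
    return dE * MEV_TO_ERG / (56.0 * AMU_TO_G)   # erg per gram of material

q = 0.5 * q_per_gram("C12", 12.0) + 0.5 * q_per_gram("O16", 16.0)
print(f"q ~ {q:.1e} erg/g")   # ~8e17 erg/g for an assumed 50/50 C/O mixture
```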
Such an instantaneous conversion does not require an extensive reaction network to be included, as the details of the burning are not resolved. Therefore, energy release, species conversion, and detonation speed are parameters of the model that have to be determined externally. Such a parametrization can lead to inaccuracies, and therefore, contrary to the case of deflagrations, it may be preferable to use the broadened-detonation approach. The advantages of the level-set technique, however, are that the unphysical propagation of detonations over deflagration ash regions <cit.> in the delayed detonation model can easily be prevented and that the speed of non-Chapman-Jouguet detonations can be set correctly.

§ DEFLAGRATION-TO-DETONATION TRANSITIONS

The model of delayed detonations for Type Ia supernova explosions <cit.> assumes a spontaneous transition of the burning mode from an initial subsonic deflagration to a supersonic detonation. Such transitions are indeed observed in terrestrial combustion, where they usually arise from the interaction of the deflagration flame with walls or obstacles in the combustion region. In the case of thermonuclear supernovae, deflagration-to-detonation transitions (DDTs) have to take place in an unconfined medium. It is unclear, however, whether such unconfined DDTs occur in nature.

One possibility <cit.> is a suitable spatial gradient of the autoignition delay (also called induction) times of the reactions. Such a configuration may lead to a coherent runaway of the reactions with a phase velocity that is sufficient to ramp up to a detonation wave. The original idea of <cit.>, the Zeldovich gradient mechanism, was later extended to the so-called shock wave amplification through coherent energy release (SWACER) picture by <cit.>. This detonation initiation mechanism requires a preconditioning of the fuel material with a shallow temperature gradient to arrange for the required induction time gradient. It has been speculated that sufficiently strong turbulence in a late phase of deflagration burning in a thermonuclear supernova explosion can provide such conditions, in particular once the fuel density has dropped to ≲10^7 g cm^-3 <cit.>, but it is difficult to identify such regions in simulations of thermonuclear supernova explosions <cit.>. Several simulations therefore artificially prescribe the DDT spot or trigger the transition once the deflagration flame reaches a certain density threshold. <cit.> propose a subgrid-scale model for DDTs that takes turbulence properties into account. This model was employed in the delayed detonation simulations of <cit.>.

§ CONCLUSIONS

The theory of combustion provides the basis for models of thermonuclear supernova explosions. Because of its technological applications, combustion is well studied, and the theory of the basic phenomena has reached a rather mature state. It can therefore be claimed that the physical principles of combustion wave propagation – at least for the one-dimensional case – are understood to the level of precision needed for modeling the astrophysical events. Complications arise in multidimensional models because of instabilities and interaction with turbulence. The numerical implementation of combustion waves remains challenging because of the limited spatial resolution in multidimensional supernova explosion simulations.
Both deflagration and detonation propagation modes can either be represented as an artificially broadened structure fitting on the numerical grid or as a sharp discontinuity between fuel and ash. Advantages and disadvantages of both approaches have been discussed. It seems that for deflagration modeling a discontinuity representation is favorable, while detonations are better captured in the broadened-wave approach, but this may depend on the particular situation and explosion scenario under consideration. Different possibilities exist for modeling the interaction of deflagrations with instabilities and turbulence, and elaborate subgrid-scale techniques have been employed in supernova simulations. The multidimensional structure of detonations is not recovered (nor accounted for) in current thermonuclear supernova explosion simulations. Although it seems unlikely to have a significant impact on the results of such simulations in terms of ejecta composition and predicted observables, future model improvements may require accounting for this effect.

Deflagration-to-detonation transitions in unconfined media are an unsolved problem in combustion theory. In delayed detonation models of Type Ia supernovae, this adds an uncertainty. Although necessary conditions for such a transition have been derived, it remains unclear whether the mechanism is actually realized in nature.

Current combustion wave models are successful in providing a qualitative picture of Type Ia supernova explosion scenarios. Future efforts, however, are required to make them precise enough to allow for a detailed quantitative comparison with observed supernova events.

This research was supported by the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence “Origin and Structure of the Universe.” In particular, participation in the MIAPP workshop “The physics of supernovae,” where this article was initiated, is gratefully acknowledged. The work of FKR is supported by the Klaus-Tschira Foundation.

[Barenblatt et al.1962]barenblatt1962a Barenblatt, G. I., Y. B. Zel'dovich, and A. G. Istratov. 1962. On the diffusional-thermal stability of a laminar flame. Zh. Prikl. Mekh. Tekh. Fiz. 4: 21–26.
[Boisseau et al.1996]boisseau1996a Boisseau, J. R., J. C. Wheeler, E. S. Oran, and A. M. Khokhlov. 1996. The Multidimensional Structure of Detonations in Type IA Supernovae. 471: 99–102.
[Calder et al.2007]calder2007a Calder, A. C., D. M. Townsley, I. R. Seitenzahl, F. Peng, O. E. B. Messer, N. Vladimirova, E. F. Brown, J. W. Truran, and D. Q. Lamb. 2007. Capturing the Fire: Flame Energetics and Neutronization for Type Ia Supernova Simulations. 656: 313–332.
[Ciaraldi-Schoolmann et al.2013]ciaraldi2013a Ciaraldi-Schoolmann, F., I. R. Seitenzahl, and F. K. Röpke. 2013. A subgrid-scale model for deflagration-to-detonation transitions in Type Ia supernova explosion simulations. Numerical implementation. 559: 117.
[Ciaraldi-Schoolmann et al.2009]ciaraldi2009a Ciaraldi-Schoolmann, F., W. Schmidt, J. C. Niemeyer, F. K. Röpke, and W. Hillebrandt. 2009. Turbulence in a three-dimensional deflagration model for Type Ia supernovae. I. Scaling properties. 696: 1491–1497.
[Damköhler1940]damkoehler1940a Damköhler, G. 1940. Der Einfluß der Turbulenz auf die Flammengeschwindigkeit in Gasgemischen. Z. f. Elektroch.
46 (11): 601–652.
[Döring1943]doering1943a Döring, W. 1943. Über den Detonationsvorgang in Gasen. Annalen der Physik 435: 421–436.
[Fickett and Davis1979]fickett1979a Fickett, W., and C. Davis. 1979. Detonation. Los Alamos series in basic and applied sciences. University of California Press.
[Fink et al.2010]fink2010a Fink, M., F. K. Röpke, W. Hillebrandt, I. R. Seitenzahl, S. A. Sim, and M. Kromer. 2010. Double-detonation sub-Chandrasekhar supernovae: can minimum helium shell masses detonate the core? 514: 53.
[Fink et al.2014]fink2014a Fink, M., M. Kromer, I. R. Seitenzahl, F. Ciaraldi-Schoolmann, F. K. Röpke, S. A. Sim, R. Pakmor, A. J. Ruiter, and W. Hillebrandt. 2014. Three-dimensional pure deflagration models with nucleosynthesis and synthetic observables for Type Ia supernovae. 438: 1762–1783.
[Gamezo et al.2005]gamezo2005a Gamezo, V. N., A. M. Khokhlov, and E. S. Oran. 2005. Three-dimensional delayed-detonation model of Type Ia supernovae. 623: 337–346.
[Gamezo et al.1999]gamezo1999a Gamezo, V. N., J. C. Wheeler, A. M. Khokhlov, and E. S. Oran. 1999. Multilevel Structure of Cellular Detonations in Type IA Supernovae. 512: 827–842.
[Gamezo et al.2003]gamezo2003a Gamezo, V. N., A. M. Khokhlov, E. S. Oran, A. Y. Chtchelkanova, and R. O. Rosenberg. 2003. Thermonuclear supernovae: Simulations of the deflagration stage and their implications. Science 299: 77–81.
[Golombek and Niemeyer2005]golombek2005a Golombek, I., and J. C. Niemeyer. 2005. A model for multidimensional delayed detonations in SN Ia explosions. 438: 611–616.
[Hansen and Kawaler1994]hansen1994a Hansen, Carl J., and Steven D. Kawaler. 1994. Stellar interiors: physical principles, structure, and evolution. Astronomy and astrophysics library. New York: Springer.
[Hicks2015]hicks2015a Hicks, E. P. 2015. Rayleigh-Taylor unstable flames – Fast or faster? 803: 72.
[Hix and Meyer2006]hix2006a Hix, W. R., and B. S. Meyer. 2006. Thermonuclear kinetics in astrophysics. Nuclear Physics A 777: 188–207.
[Jackson et al.2014]jackson2014a Jackson, A. P., D. M. Townsley, and A. C. Calder. 2014. Power-law wrinkling turbulence-flame interaction model for astrophysical flames. 784: 174.
[Jackson et al.2010]jackson2010a Jackson, A. P., A. C. Calder, D. M. Townsley, D. A. Chamulak, E. F. Brown, and F. X. Timmes. 2010. Evaluating systematic dependencies of type Ia supernovae: The influence of deflagration to detonation density. 720: 99–113.
[Jordan et al.2008]jordan2008a Jordan IV, G. C., R. T. Fisher, D. M. Townsley, A. C. Calder, C. Graziani, S. Asida, D. Q. Lamb, and J. W. Truran. 2008. Three-dimensional simulations of the deflagration phase of the gravitationally confined detonation model of Type Ia supernovae. 681: 1448–1457.
[Jordan et al.2012a]jordan2012a Jordan IV, G. C., C. Graziani, R. T. Fisher, D. M. Townsley, C. Meakin, K. Weide, L. B. Reid, J. Norris, R. Hudson, and D. Q. Lamb. 2012a. The detonation mechanism of the pulsationally assisted gravitationally confined detonation model of type Ia supernovae. 759: 53.
[Jordan et al.2012b]jordan2012b Jordan IV, G. C., H. B. Perets, R. T. Fisher, and D. R. van Rossum. 2012b. Failed-detonation Supernovae: Subluminous Low-velocity Ia Supernovae and their Kicked Remnant White Dwarfs with Iron-rich Cores. 761: 23.
[Kasen et al.2009]kasen2009a Kasen, D., F. K. Röpke, and S. E. Woosley. 2009. The diversity of type Ia supernovae from broken symmetries. 460: 869–872.
[Khokhlov1989]khokhlov1989a Khokhlov, A. M. 1989. The structure of detonation waves in supernovae. 239: 785–808.
[Khokhlov1991]khokhlov1991a Khokhlov, A. M. 1991. Delayed detonation model for type Ia supernovae. 245: 114–128.
[Khokhlov1995]khokhlov1995a Khokhlov, A. M. 1995. Propagation of turbulent flames in supernovae. 449: 695–713.
[Kromer et al.2013]kromer2013a Kromer, M., M. Fink, V. Stanishev, S. Taubenberger, F. Ciaraldi-Schoolman, R. Pakmor, F. K. Röpke, A. J. Ruiter, I. R. Seitenzahl, S. A. Sim, G. Blanc, N. Elias-Rosa, and W. Hillebrandt. 2013. 3D deflagration simulations leaving bound remnants: a model for 2002cx-like Type Ia supernovae. 429: 2287–2297.
[Landau and Lifshitz1987]landau1987a Landau, L. D., and E. M. Lifshitz. 1987. Fluid mechanics (course of theoretical physics: volume 6), 2nd edn. Oxford: Butterworth-Heinemann.
[Lee et al.1978]lee1978a Lee, J. H. S., R. Knystautas, and N. Yoshikawa. 1978. Photochemical initiation of gaseous detonations. Acta Astronautica 5: 971–982.
[Liñan and Williams1993]linan1993a Liñan, Amable, and Forman A. Williams. 1993. Fundamental aspects of combustion. Oxford, New York: Oxford University Press.
[Lisewski et al.2000]lisewski2000b Lisewski, A. M., W. Hillebrandt, and S. E. Woosley. 2000. Constraints on the delayed transition to detonation in Type Ia supernovae. 538: 831–836.
[Maier and Niemeyer2006]maier2006a Maier, A., and J. C. Niemeyer. 2006. C+O detonations in thermonuclear supernovae: interaction with previously burned material. 451: 207–212.
[Marquardt et al.2015]marquardt2015a Marquardt, K. S., S. A. Sim, A. J. Ruiter, I. R. Seitenzahl, S. T. Ohlmann, M. Kromer, R. Pakmor, and F. K. Röpke. 2015. Type Ia supernovae from exploding oxygen-neon white dwarfs. 580: 118.
[Meakin et al.2009]meakin2009a Meakin, C. A., I. Seitenzahl, D. Townsley, G. C. Jordan, J. Truran, and D. Lamb. 2009. Study of the detonation phase in the gravitationally confined detonation model of Type Ia supernovae. 693: 1188–1208.
[Moll and Woosley2013]moll2013a Moll, R., and S. E. Woosley. 2013. Multi-dimensional Models for Double Detonation in Sub-Chandrasekhar Mass White Dwarfs. 774: 137.
[Niemeyer and Hillebrandt1995]niemeyer1995b Niemeyer, J. C., and W. Hillebrandt. 1995. Turbulent nuclear flames in type Ia supernovae. 452: 769–778.
[Niemeyer and Hillebrandt1997]niemeyer1997a Niemeyer, J. C., and W. Hillebrandt. 1997. Microscopic and macroscopic modeling of thermonuclear burning fronts. In NATO ASIC Proc. 486: Thermonuclear supernovae, eds. P. Ruiz-Lapuente, R. Canal, and J. Isern, 441–456. Dordrecht: Kluwer Academic Publishers.
[Ohlmann et al.2014]ohlmann2014a Ohlmann, S. T., M. Kromer, M. Fink, R. Pakmor, I. R. Seitenzahl, S. A. Sim, and F. K. Röpke. 2014. The white dwarf's carbon fraction as a secondary parameter of Type Ia supernovae. 572: 57.
[Osher and Sethian1988]osher1988a Osher, S., and J. A. Sethian. 1988. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton–Jacobi formulations. Journal of Computational Physics 79: 12–49.
[Pakmor et al.2010]pakmor2010a Pakmor, R., M. Kromer, F. K. Röpke, S. A. Sim, A. J. Ruiter, and W. Hillebrandt. 2010. Sub-luminous type Ia supernovae from the mergers of equal-mass white dwarfs with mass ∼0.9 M_⊙. 463: 61–64.
[Pakmor et al.2012]pakmor2012a Pakmor, R., M. Kromer, S. Taubenberger, S. A. Sim, F. K. Röpke, and W. Hillebrandt. 2012. Normal Type Ia supernovae from violent mergers of white dwarf binaries. 747: 10.
[Pakmor et al.2013]pakmor2013a Pakmor, R., M. Kromer, S. Taubenberger, and V. Springel. 2013. Helium-ignited violent mergers as a unified model for normal and rapidly declining type Ia supernovae. 770: 8.
[Peters2000]peters2000a Peters, Norbert. 2000. Turbulent combustion. Cambridge: Cambridge University Press.
[Poludnenko et al.2011]poludnenko2011a Poludnenko, A. Y., T. A. Gardiner, and E. S. Oran. 2011. Spontaneous Transition of Turbulent Flames to Detonations in Unconfined Media. Physical Review Letters 107 (5): 054501.
[Reinecke et al.2002a]reinecke2002b Reinecke, M., W. Hillebrandt, and J. C. Niemeyer. 2002a. Refined numerical models for multidimensional type Ia supernova simulations. 386: 936–943.
[Reinecke et al.2002b]reinecke2002d Reinecke, M., W. Hillebrandt, and J. C. Niemeyer. 2002b. Three-dimensional simulations of type Ia supernovae. 391: 1167–1172.
[Reinecke et al.1999]reinecke1999a Reinecke, M., W. Hillebrandt, J. C. Niemeyer, R. Klein, and A. Gröbl. 1999. A new model for deflagration fronts in reactive fluids. 347: 724–733.
[Röpke2003]roepke_phd Röpke, F. K. 2003. On the stability of thermonuclear flames in type Ia supernova explosions. PhD diss., Technical University of Munich.
[Röpke2005]roepke2005c Röpke, F. K. 2005. Following multi-dimensional type Ia supernova explosion models to homologous expansion. 432: 969–983.
[Röpke2007]roepke2007d Röpke, F. K. 2007. Flame-driven deflagration-to-detonation transitions in Type Ia supernovae? 668: 1103–1108.
[Röpke and Hillebrandt2004]roepke2004c Röpke, F. K., and W. Hillebrandt. 2004. The case against the progenitor's carbon-to-oxygen ratio as a source of peak luminosity variations in type Ia supernovae. 420: 1–4.
[Röpke and Hillebrandt2005]roepke2005b Röpke, F. K., and W. Hillebrandt. 2005. Full-star type Ia supernova explosion models. 431: 635–645.
[Röpke and Niemeyer2007]roepke2007b Röpke, F. K., and J. C. Niemeyer. 2007. Delayed detonations in full-star models of type Ia supernova explosions. 464: 683–686.
[Röpke et al.2004a]roepke2004a Röpke, F. K., W. Hillebrandt, and J. C. Niemeyer. 2004a. The cellular burning regime in type Ia supernova explosions. I. Flame propagation into quiescent fuel. 420: 411–422.
[Röpke et al.2004b]roepke2004b Röpke, F. K., W. Hillebrandt, and J. C. Niemeyer. 2004b. The cellular burning regime in type Ia supernova explosions. II. Flame propagation into vortical fuel. 421: 783–795.
[Röpke et al.2003]roepke2003a Röpke, F. K., J. C. Niemeyer, and W. Hillebrandt. 2003. On the small-scale stability of thermonuclear flames in Type Ia supernovae. 588: 952–961.
[Röpke et al.2007b]roepke2007a Röpke, F. K., S. E. Woosley, and W. Hillebrandt. 2007b. Off-center ignition in Type Ia supernovae. I. Initial evolution and implications for delayed detonation. 660: 1344–1356.
[Röpke et al.2006]roepke2006a Röpke, F. K., W. Hillebrandt, J. C. Niemeyer, and S. E. Woosley. 2006. Multi-spot ignition in type Ia supernova models. 448: 1–14.
[Röpke et al.2007a]roepke2007c Röpke, F. K., W. Hillebrandt, W. Schmidt, J. C. Niemeyer, S. I. Blinnikov, and P. A. Mazzali. 2007a. A three-dimensional deflagration model for Type Ia supernovae compared with observations. 668: 1132–1139.
[Schmidt et al.2006a]schmidt2006b Schmidt, W., J. C. Niemeyer, and W. Hillebrandt. 2006a. A localised subgrid scale model for fluid dynamical simulations in astrophysics. I. Theory and numerical tests. 450: 265–281.
[Schmidt et al.2006b]schmidt2006c Schmidt, W., J. C. Niemeyer, W. Hillebrandt, and F. K. Röpke. 2006b. A localised subgrid scale model for fluid dynamical simulations in astrophysics. II. Application to type Ia supernovae. 450: 283–294.
[Seitenzahl et al.2013]seitenzahl2013a Seitenzahl, I. R., F. Ciaraldi-Schoolmann, F. K. Röpke, M. Fink, W. Hillebrandt, M. Kromer, R. Pakmor, A. J. Ruiter, S. A. Sim, and S. Taubenberger. 2013. Three-dimensional delayed-detonation models with nucleosynthesis for Type Ia supernovae. 429: 1156–1172.
[Seitenzahl et al.2016]seitenzahl2016a Seitenzahl, I. R., M. Kromer, S. T. Ohlmann, F. Ciaraldi-Schoolmann, K. Marquardt, M. Fink, W. Hillebrandt, R. Pakmor, F. K. Röpke, A. J. Ruiter, S. A. Sim, and S. Taubenberger. 2016. Three-dimensional simulations of gravitationally confined detonations compared to observations of SN 1991T. 592: 57.
[Sharpe1999]sharpe1999a Sharpe, G. J. 1999. The structure of steady detonation waves in Type Ia supernovae: pathological detonations in C-O cores. 310: 1039–1052.
[Sim et al.2010]sim2010a Sim, S. A., F. K. Röpke, W. Hillebrandt, M. Kromer, R. Pakmor, M. Fink, A. J. Ruiter, and I. R. Seitenzahl. 2010. Detonations in sub-Chandrasekhar-mass C+O white dwarfs. 714: 52–57.
[Timmes and Woosley1992]timmes1992a Timmes, F. X., and S. E. Woosley. 1992. The conductive propagation of nuclear flames. I. Degenerate C+O and O+Ne+Mg white dwarfs. 396: 649–667.
[Timmes et al.2000]timmes2000e Timmes, F. X., M. Zingale, K. Olson, B. Fryxell, P. Ricker, A. C. Calder, L. J. Dursi, H. Tufo, P. MacNeice, J. W. Truran, and R. Rosner. 2000. On the Cellular Structure of Carbon Detonations. 543: 938–954.
[Toro2009]toro2009a Toro, E. F. 2009. Riemann solvers and numerical methods for fluid dynamics: A practical introduction. Berlin Heidelberg: Springer. http://books.google.de/books?id=SqEjX0um8o0C.
[Townsley et al.2007]townsley2007a Townsley, D. M., A. C. Calder, S. M. Asida, I. R. Seitenzahl, F. Peng, N. Vladimirova, D. Q. Lamb, and J. W. Truran. 2007. Flame evolution during Type Ia supernovae and the deflagration phase in the gravitationally confined detonation scenario. 668: 1118–1131.
[Townsley et al.2009]townsley2009a Townsley, D. M., A. P. Jackson, A. C. Calder, D. A. Chamulak, E. F. Brown, and F. X. Timmes. 2009. Evaluating systematic dependencies of Type Ia supernovae: The influence of progenitor ^22Ne content on dynamics. 701: 1582–1604.
[Vladimirova et al.2006]vladimirova2006a Vladimirova, N., G. V. Weirs, and L. Ryzhik. 2006. Flame capturing with an advection-reaction-diffusion model. Combustion Theory and Modelling 10 (5): 727–747.
[von Neumann1942]vonneumann1942a von Neumann, J. 1942. Theory of detonation waves. Prog. Rept. No. 238; O.S.R.D. Rept. No. 549, Ballistic Research Laboratory File No. X-122, Aberdeen Proving Ground, MD.
[Woosley et al.2009]woosley2009a Woosley, S. E., A. R. Kerstein, V. Sankaran, A. J. Aspden, and F. K. Röpke. 2009. Type Ia Supernovae: Calculations of Turbulent Flames Using the Linear Eddy Model. 704: 255–273.
[Zel'dovich1940]zeldovich1940a Zel'dovich, Y. B. 1940. On the theory of the propagation of detonations in gaseous systems. Zh. Eksp. Teor. Fiz. 10: 542–568. In Russian.
[Zel'dovich1966]zeldovich1966a Zel'dovich, Y. B. 1966. An effect which stabilizes the curved front of a laminar flame. Journal of Applied Mechanics and Technical Physics 7: 68–69.
[Zel'dovich et al.1970]zeldovich1970a Zel'dovich, Y. B., V. B. Librovich, G. M. Makhviladze, and G. I. Sivashinskii. 1970. On the onset of detonation in a nonuniformly heated gas. Journal of Applied Mechanics and Technical Physics 11: 264–270.
"authors": [
"Friedrich K. Roepke"
],
"categories": [
"astro-ph.SR",
"astro-ph.HE"
],
"primary_category": "astro-ph.SR",
"published": "20170327191838",
"title": "Combustion in thermonuclear supernova explosions"
} |
Halo-independent determination of the unmodulated WIMP signal in DAMA: the isotropic case

[a]Paolo Gondolo [email protected] [a]Department of Physics, University of Utah, 115 South 1400 East #201, Salt Lake City, Utah 84112-0830 [b]Stefano Scopel [email protected] [b]Department of Physics, Sogang University, Seoul 121-742, South Korea

We present a halo-independent determination of the unmodulated signal corresponding to the DAMA modulation if interpreted as due to dark matter weakly interacting massive particles (WIMPs). First we show how a modulated signal gives information on the WIMP velocity distribution function in the Galactic rest frame from which the unmodulated signal descends. Then we describe a mathematically sound profile likelihood analysis in which the likelihood is profiled over a continuum of nuisance parameters (namely, the WIMP velocity distribution). As a first application of the method, which is very general and valid for any class of velocity distributions, we restrict the analysis to velocity distributions that are isotropic in the Galactic frame. In this way we obtain halo-independent maximum-likelihood estimates and confidence intervals for the DAMA unmodulated signal. We find that the estimated unmodulated signal is in line with expectations for a WIMP-induced modulation and is compatible with the DAMA background+signal rate. Specifically, for the isotropic case we find that the modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.

§ INTRODUCTION

Discovering the nature of Dark Matter (DM) is one of the most important endeavors in today's particle physics and cosmology. Weakly Interacting Massive Particles (WIMPs), the most popular and natural DM candidates, provide the correct thermal relic density in the early Universe when their cross section with Standard Model particles is at the level of, or not much smaller than, weak-interaction cross sections. Many experiments are currently trying to exploit this fact to detect the WIMPs supposed to form the dark halo of our Galaxy through their scattering off atomic nuclei in low-background laboratory detectors. An expected feature of halo WIMP scattering is an annual modulation of the scattering rate due to the revolution of the Earth around the Sun <cit.>. The importance of such a yearly modulation for WIMP direct detection rests on the fact that, in the absence of sensitivity to the direction of the incoming particles, it is the only known signature that allows one to distinguish a WIMP signal from the background due to radioactive contamination, since the latter may have an energy spectrum indistinguishable from that predicted for WIMPs. An annual modulation with WIMP characteristics has been claimed for many years by the DAMA experiment <cit.>. The low-energy event rate in the DAMA sodium iodide scintillators is well represented by a signal of the form

S(t) = S_0 + S_m cos[ω(t-t_0)],

with ω=2π/T, T=1 yr, and t_0 ≃ 2^nd of June, as expected for a nonrotating dark halo of WIMPs.
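To make the decomposition above concrete, the following minimal sketch (in Python; the rate values, noise level, and sampling cadence are illustrative assumptions, not DAMA data) recovers S_0 and S_m from a rate sampled over one year by projecting onto the constant and cosine Fourier modes.

```python
import numpy as np

T = 1.0                              # period: 1 yr
omega = 2.0 * np.pi / T
t0 = 152.5 / 365.25                  # ~June 2nd, in years

# synthetic rate over one year (illustrative numbers, not DAMA data)
t = np.linspace(0.0, T, 365, endpoint=False)
rng = np.random.default_rng(0)
S0_true, Sm_true = 1.0, 0.02         # e.g. in cpd/kg/keVee
S = S0_true + Sm_true * np.cos(omega * (t - t0)) + rng.normal(0.0, 0.005, t.size)

# S_0 = (1/T) int_0^T S dt ;   S_m = (2/T) int_0^T cos[omega (t - t0)] S dt
dt = t[1] - t[0]
S0_hat = np.sum(S) * dt / T
Sm_hat = 2.0 * np.sum(np.cos(omega * (t - t0)) * S) * dt / T
print(S0_hat, Sm_hat)                # close to (1.0, 0.02)
```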
The statistical significance of the DAMA modulation signal exceeds 9 standard deviations, and the effect has been recorded through 14 yearly periods. For a typical Maxwellian distribution of WIMP velocities in the Galactic rest frame with rms velocity below 300 km/s, the modulated component S_m of the signal is predicted to be less than 10% of the unmodulated component S_0 in all of the energy bins of the detected DAMA spectrum. The DAMA claim has prompted a world-wide effort by other direct detection experiments to confirm or disprove the signal <cit.>. The experiments that have by now reached a background level low enough to be sensitive to the DAMA modulation use targets different from sodium iodide. As a consequence, while for standard hypotheses on the WIMP–nucleon cross section and WIMP velocity distribution (i.e., spin-dependent or spin-independent interactions with a truncated Maxwellian distribution) the DAMA signal appears to be in strong tension with constraints from other detectors, when such assumptions are relaxed several WIMP models have been shown to exist for which the yearly modulation effect measured by DAMA can still be reconciled with the non-observation of a dark matter signal in other experiments <cit.>. This shows that there is still a clear need to assess the compatibility of the DAMA excess with other detectors in a model-independent way. Eliminating the dependence on astrophysics is the underlying goal of the halo-independent approach. Its first formulation <cit.> was based on the observation that the elastic spin-independent scattering rate of WIMPs in a detector depends on the velocity distribution only through a single velocity integral η̃(v_min), the same for all experiments,

η̃(v_min) = (ρ_χ σ_χN / m_χ) ∫_{|v| > v_min} [f_lab(v)/|v|] d^3v .

Here m_χ is the WIMP mass, σ_χN is the WIMP–nucleon cross section, ρ_χ is the local WIMP mass density, v_min is the minimum WIMP speed that can produce the recoil energy under consideration, and f_lab(v) is the WIMP velocity distribution in the frame of the detector (the laboratory). The method of <cit.> has been applied to the comparison of experiments in <cit.>. It has been generalized to arbitrary WIMP–nucleon interactions and any direct detection experiment (with arbitrary efficiency and energy resolution) in <cit.> by defining weighted averages of η̃(v_min) over the range(s) of velocities measured in an experiment. Applications of the latter method to the comparison of experiments for various WIMP–nucleon interactions can be found in <cit.>. Maximum-likelihood methods to determine the velocity integral and particle physics parameters have been used in <cit.>, and statistical methods to assess the compatibility of experiments have been considered in <cit.>. Alternative methods to place halo-independent bounds on particle physics parameters have been put forward in <cit.>. A weak point of the halo-independent methods above is the way they compare modulated and unmodulated rates. Some authors define two separate velocity integrals η̃_0(v_min) and η̃_m(v_min), one for the unmodulated part S_0 and one for the modulated part S_m in Eq. (<ref>), and then proceed to impose either the simple inequality η̃_m(v_min) < η̃_0(v_min) <cit.> or more sophisticated inequalities valid for smooth distributions <cit.>. Comparing two separate velocity integrals with proper statistical significance is not straightforward. Other authors replace the modulation amplitude S_m, which is a coefficient in a Fourier time series, with half the difference between the maximum and minimum signal during a year, i.e., replace η̃_m(v_min) with η̃_1/2(v_min) = [η̃_0(v_min, t_0) - η̃_0(v_min, t_0 + T/2)]/2 <cit.>.
This replacement is inaccurate in a halo-independent approach, where one must include velocity distributions for which the modulation is not sinusoidal near the threshold region in which the DAMA signal is present (in this case, the theoretical values of η̃_1/2 and S_m may be very different, and without a control on the sinusoidal character of the modulation, it would be inappropriate to compare the theoretical η̃_1/2 with the measured S_m).The main goal of this paper is to show that it is possible to obtain information on the unmodulated signal S_0 from a measurement of the modulation amplitudes S_m without specifying the WIMP velocity distribution (and without assuming two separate velocity integrals or approximating the Fourier coefficient with a difference). For this purpose, (a) we transfer the modulation from being a property of the velocity distribution to being a property of the detector, indeed of the relative motion of the detector with respect to the rest frame of the WIMP population, and (b) we profile the likelihood at fixed S_0 over all WIMP velocity distributions (a continuum of nuisance parameters) using rigorous mathematical methods based on the theory of linear optimization in spaces of functions that have the dimension of the continuum. As a first application of the method, which is very general and valid for any elastic or inelastic cross section and any class of velocity distributions, we estimate the DAMA unmodulated signal starting from data on the modulation amplitudes under some simplifying assumptions that have allowed us to explore and understand the difficulties and merits of the method. We restrict the analysis to velocity distributions that are isotropic in the Galactic frame. We apply our method to the DAMA data for the case of WIMP–nucleus elastic scattering and for WIMP masses m_χ< 15 GeV, for which only sodium targets contribute to the expected signal.[At higher WIMP masses the expected rate in DAMA also takes contributions from scattering off iodine. This part of the signal is in principle constrained in a model–independent way by KIMS <cit.>, which uses a CsI detector, and COUPP <cit.>, which uses a CF_3I detector, since such detectors employ the same nuclear target (iodine).] In this way we will obtain quantitative confidence intervals for the unmodulated components of the WIMP signal in each energy bin, for the first time disentangling the unmodulated signal from the background in a halo–independent way.The S_0 confidence intervals we find are valid for any WIMP–nucleus interaction in which the ^23Na cross section varies negligibly in the 2–4 keVee energy range where the DAMA modulation is present. This includes the standard elastic spin–independent and spin–dependent interactions, with arbitrary ratios of the proton–WIMP and neutron–WIMP coupling constants, but does not include inelastic scattering or effective WIMP–nucleon operators that show an explicit and fast dependence on the WIMP–nucleus relative velocity and/or on the exchanged momentum (like some of those in <cit.>). Since the properties discussed in the present paper pertain exclusively to sodium targets in DAMA, while other detectors that constrain DAMA use different target nuclei, we will not discuss the latter any further. The plan of this paper is as follows. 
In Section <ref> we show how to compare the modulated and the unmodulated rates directly in terms of a single velocity distribution function, namely the velocity distribution function in the Galactic rest frame. In Section <ref> we present and discuss our method to compute the profile likelihood of the unmodulated signal by using linear optimization theory in the continuum to profile the likelihood over the whole velocity distribution (a continuum of nuisance parameters). Section <ref> is devoted to our numerical analysis of the DAMA data for velocity distributions that are isotropic in the Galactic frame. Figs. <ref>, <ref> and <ref> and Table <ref> of this Section contain our main quantitative results. Finally, the Appendix contains details of the calculation of the modulated and unmodulated Galactic response functions for isotropic velocity distributions.

§ RATES IN TERMS OF THE GALACTIC VELOCITY DISTRIBUTION

Let S_i(t) be the expected signal in a dark matter detector, where t is time, and the index i, which may be continuous, specifies the quantity measured in the experiment, for example detected recoil energy or energy bin or number of photoelectrons. Let f_lab(v,t) be the WIMP velocity distribution function in the frame of the detector, normalized to

∫ f_lab(v,t) d^3v = 1 .

The signal S_i(t) depends on f_lab(v,t) according to the general formula <cit.>:

S_i(t) = ∫ ℛ_i(v) f_lab(v,t) d^3v ,

where ℛ_i(v), called the response function, equals the value the signal would have if all the WIMPs had the same velocity v. Formula (<ref>) can be understood for example by writing the scattering rate per unit target mass dR_χT/dE_R of a WIMP χ off an isotope T in the target, differential in the nucleus recoil energy E_R, as a product of the differential cross section dσ_χT/dE_R and the WIMP flux n_χ v f_lab(v,t) d^3v (where n_χ = ρ_χ/m_χ is the χ number density), the whole quantity divided by the mass m_T of the target isotope,

dR_χT/dE_R = (1/m_T) ∫ (dσ_χT/dE_R) (ρ_χ/m_χ) v f_lab(v,t) d^3v .

Here it is understood that the differential cross section dσ_χT/dE_R is nonzero only in the kinematically allowed region (e.g., for elastic scattering, only for E_R ≤ E_R^max(v) given in equation (<ref>) below). Furthermore, if G_T(E,E_R) indicates the probability of actually observing an event with observed energy E when a WIMP has scattered off an isotope T in the detector target with recoil energy E_R, the expected observed event rate per unit target mass dR/dE is given by the convolution

dR/dE = ∑_T C_T ∫ G_T(E,E_R) (dR_χT/dE_R) dE_R .

Here C_T is the mass fraction of isotope T in the target. Inserting the scattering rate in equation (<ref>) into the latter expression, and exchanging the order of the integrations over E_R and v, leads to the following expression of the response function ℛ_E(v), where i=E, for the differential event rate dR/dE,

ℛ_E(v) = v (ρ_χ/m_χ) ∫ dE_R ∑_T G_T(E,E_R) (C_T/m_T) (dσ_χT/dE_R) .

The response function depends on the particle physics model for the interaction of the WIMP with the target and includes the probability that a WIMP scattering in the detector is actually observed. The nonzero values of the response function ℛ_i(v) also indicate the WIMP velocities to which the observed signal S_i(t) is sensitive. General expressions of the response functions ℛ_i(v) for experiments counting numbers of events in observed energy bins can be found in <cit.>. One commonly assumes that the response function ℛ_i(v) is stable, i.e., that it does not depend on time (as already implied in the notation above).
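As a numerical illustration of the response-function construction above, the sketch below computes the fraction of scatters at WIMP speed v that are observed in a given energy bin (i.e., a quantity of the ℛ_E(v)/v type) for a hypothetical single-isotope target with a flat differential cross section below the kinematic endpoint, unit acceptance, a constant quenching factor, and an assumed Gaussian resolution width. None of the numbers are the paper's DAMA-specific inputs, which enter only in Section 4.

```python
import numpy as np
from scipy.special import erf

# Hypothetical inputs (for illustration only; not the paper's DAMA values)
m_chi, m_T = 10.0, 21.4          # WIMP and target nucleus masses, GeV
mu = m_chi * m_T / (m_chi + m_T) # reduced mass, GeV
Q = 0.3                          # assumed constant quenching factor
def sigma_ee(Eee):               # assumed resolution width, keVee
    return 0.2 + 0.1 * np.sqrt(Eee)

def ER_max_keV(v_kms):
    """Kinematic endpoint E_R^max = 2 mu^2 v^2 / m_T, in keV (v in km/s)."""
    beta = v_kms / 2.998e5
    return 2.0 * mu**2 * beta**2 / m_T * 1e6

def bin_fraction(v_kms, E1, E2, nER=400):
    """Fraction of scatters at speed v observed in [E1, E2] keVee, for a
    flat dsigma/dE_R spectrum, Gaussian smearing G_T, unit acceptance."""
    ERm = ER_max_keV(v_kms)
    if ERm <= 0.0:
        return 0.0
    ER = np.linspace(0.0, ERm, nER)
    Eee = Q * ER                              # mean observed energy
    s = sigma_ee(np.maximum(Eee, 1e-3))
    # integral of the Gaussian G_T over the bin, for each recoil energy
    p = 0.5 * (erf((E2 - Eee) / (np.sqrt(2.0) * s))
               - erf((E1 - Eee) / (np.sqrt(2.0) * s)))
    return float(np.mean(p))                  # average over the flat spectrum

print(bin_fraction(400.0, 2.0, 2.5))          # e.g. a 2-2.5 keVee bin
```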
One also commonly assumes that the only time dependence in the laboratory velocity distribution f_lab(v,t) comes from the motion of the Earth around the Sun or the daily rotation of the Earth. In other words, one assumes that the WIMP velocity distribution in the Galactic frame f_gal(u), where u is the Galactic WIMP velocity, is stationary (on the time scale of the experiment). The laboratory velocity distribution f_lab(v,t) is related to the Galactic velocity distribution f_gal(u) by a Galilean transformation:

f_lab(v,t) = f_gal(u),  u = v + v_⊙ + v_⊕(t) .

Here v is the WIMP velocity relative to the laboratory, u is the WIMP velocity relative to the rest frame of the Galaxy, v_⊙ is the velocity of the Sun with respect to the Galactic rest frame, and v_⊕(t) is the velocity of the Earth with respect to the Sun.[Since we are interested in the annual modulation, we neglect the daily rotation of the Earth, but our general considerations apply using the velocity of the detector with respect to the Sun in place of v_⊕(t).] A change of integration variables in Eq. (<ref>) gives

S_i(t) = ∫ ℛ^gal_i(u,t) f_gal(u) d^3u ,

where

ℛ^gal_i(u,t) = ℛ_i(u - v_⊙ - v_⊕(t)) .

Conceptually, this passage to the Galactic frame means that the time dependence of the signal is a property of the motion of the detector in the Galaxy and not of the (Galactic) distribution function. In particular, the characteristics of a modulated signal are ultimately a property of the detector (composition, energy threshold, motion in the Galaxy) and not of the WIMP velocity distribution in the Galactic frame. The DAMA modulation amplitude is equal to the coefficient of the cos[ω(t-t_0)] term in the Fourier time-series analysis of the signal. Here ω = 2π/T with the period T = 1 yr, and the time t = t_0 corresponds to the time of maximum modulation. So:

S_m,i = (2/T) ∫_0^T dt cos[ω(t-t_0)] S_i(t) .

The unmodulated signal is the time average of the signal over the course of a year,

S_0,i = (1/T) ∫_0^T dt S_i(t) .

In the Galactic frame, the time dependence is in the response functions, so one can write

S_0,i = ∫ ℛ^gal_0,i(u) f_gal(u) d^3u ,  S_m,i = ∫ ℛ^gal_m,i(u) f_gal(u) d^3u ,

where

ℛ^gal_0,i(u) = (1/T) ∫_0^T dt ℛ_i(u - v_⊙ - v_⊕(t)),

ℛ^gal_m,i(u) = (2/T) ∫_0^T dt cos[ω(t-t_0)] ℛ_i(u - v_⊙ - v_⊕(t)).

§ CONSTRAINING THE UNMODULATED SIGNAL

Given N experimental modulation amplitude measurements S_m,j^exp (j=1,…,N) and their 1σ errors ΔS_j^exp, and assuming Gaussian fluctuations, the likelihood function L, function of the set of parameters {S_m,j}, is given by

-2 ln L({S_m,j}) = ∑_{j=1}^N ((S_m,j - S_m,j^exp)/ΔS_j^exp)^2 .

Here, for clarity and simplicity, we only show the dependence of L on the expected modulation amplitudes

S_m,j = ∫ ℛ^gal_m,j(u) f_gal(u) d^3u ,

which contain all the dependence on the WIMP velocity distribution function f_gal(u). We are interested in constraining the unmodulated signals

S_0,i = ∫ ℛ^gal_0,i(u) f_gal(u) d^3u ,

in the given energy bins i=1,…,N. For this purpose, we construct a joint profile likelihood L_p({S_0,i}) for the S_0,i (i=1,…,N), treating the velocity distribution f_gal(u) as a continuum of nuisance parameters. The joint profile likelihood L_p({S_0,i}) is defined as the maximum value of the likelihood function over the set A({S_0,i}) of distribution functions that satisfy Eq.
(<ref>),L_ p({S_0,i}) = sup_f_∈ A({S_0,i}) L({S_m,j}) .(Technically, we use the notation sup instead of max because in the infinite-dimensional space of distribution functions it is not automatically guaranteed that there is a distribution that achieves the maximum, although this does happen in our case.)The maximum-likelihood estimator of the S_0,i's then follows as the location of the maximum L_ p,max in the joint profile likelihood L_ p({S_0,i}), and confidence regions for any combination of the S_0,i's can be obtained through the usual profile likelihood procedure. For example, the standard error ellipsoid in the N-dimensional S_0,i parameter space (or `pseudo-ellipsoid' in the case of non-Gaussian likelihoods) can be obtained by the condition -2Δln L_ p({S_0,i}) = 1 ,where -2Δln L_ p({S_0,i}) =- 2 ln L_ p({S_0,i}) + 2 ln L_ p,max .Similarly, projections of the standard error ellipsoid onto any pair of two variables S_0,i and S_0,j can be obtained either by simple geometric construction or by using a profile likelihood further profiled over the other S_0,i parameters. In Section <ref> we show an example of such 2-dimensional standard error ellipses.We are interested in particular in finding confidence intervals on each of the N unmodulated signals S_0,i. For this purpose, we use the profile likelihood functionL_i(S_0,i) = max_S_0,j j i L_ p({S_0,j}) ,which is the joint profile likelihood L_ p({S_0,i}) further profiled over the S_0,j with j i, and is thus a function of S_0,i only. A 1σ confidence interval on a single S_0,i can then be obtained through the condition-2 Δln L_i(S_0,i) ≤ 1,where-2Δln L_i(S_0,i) =- 2 ln L_i(S_0,i) + 2 ln L_ p,max .Notice that these 1σ confidence intervals, sometime called 1σ likelihood intervals, have a 68% coverage probability in the limit of large samples when the likelihood is well approximated by a Gaussian, but do not necessarily have a coverage probability of 68% if the likelihood is non-Gaussian.We examined various ways of computing L_ p({S_0,i}) and L_i(S_0,i), and we have adopted the following procedure. Since both S_0,i and S_m,i are functionals of the distribution function f_ gal, we can at least conceptually construct a parametric plot of L({S_m,i}) vs. {S_0,i} by using f_ gal as the parameter. At each point {S_0,i} there will be many values of L({S_m,i}), and our goal is to find the maximum of those values, which is L_ p({S_0,i}). In other words, in this geometrical representation, the joint profile likelihood L_ p({S_0,i}) is the boundary of all possible values of the likelihood L({S_m,i}) when plotted vs. {S_0,i}. In practice we cannot implement an infinite number of functions f_ gal (we tried discretizing the distribution function but the maximization procedure did not converge). However, we can think of constructing the boundary of the likelihood values by “rotating the plot by 90 degrees,” i.e., finding the boundary of the {S_0,i} that have L({S_m,i}) ≥ L_ p({S_0,i}). The latter problem can be written as an extremization problem for {S_0,i} for which powerful mathematical theorems exist that reduce the infinitely-many functions f_ gal to a finite number of parameters, making the solution attainable in practice.For clarity, we illustrate our procedure for L_i(S_0,i) only, although we used it for the joint profile likelihood L_ p({S_0,i}). 
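A minimal numerical sketch of the boundary-extraction step just described, assuming one already has Monte-Carlo samples of pairs (S_0,i, χ²) such as those produced in Section 4 (the input arrays here are placeholders): bin in S_0,i, take the lower χ² envelope, and read off the 1σ interval from the -2Δln L ≤ 1 condition.

```python
import numpy as np

def profile_interval(S0_samples, chi2_samples, nbins=100):
    """Lower chi^2 envelope vs S0 and the 1-sigma likelihood interval
    (-2 Delta ln L <= 1).  Inputs: arrays of sampled S0 and chi2 values."""
    edges = np.linspace(S0_samples.min(), S0_samples.max(), nbins + 1)
    idx = np.clip(np.digitize(S0_samples, edges) - 1, 0, nbins - 1)
    envelope = np.full(nbins, np.inf)
    np.minimum.at(envelope, idx, chi2_samples)      # per-bin minimum chi^2
    centers = 0.5 * (edges[:-1] + edges[1:])
    ok = np.isfinite(envelope)
    centers, envelope = centers[ok], envelope[ok]
    chi2_min = envelope.min()
    inside = centers[envelope <= chi2_min + 1.0]    # 1-sigma region
    best = centers[np.argmin(envelope)]
    return best, inside.min(), inside.max()

# usage (placeholder arrays):
# best, lo, hi = profile_interval(S0_arr, chi2_arr)
```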
We write our problem as a linear optimization problem for {S_0,i} over the distribution functions f_() subject to the constraint that thelikelihood function L({S_m,j}) is greater than or equal to a given number L_0, which we will later vary. Our goal is to find the lower and upper boundsS_0,i^ inf(L_0) = inf_f_∈ A(L_0) ∫^_0,i() f_() d^3uandS_0,i^ sup(L_0) = sup_f_∈ A(L_0) ∫^_0,i() f_() d^3u,over the set A(L_0) of distribution functions that satisfy the constraintL({S_m,j}) ≥ L_0 .Varying L_0, we obtain the lines S_0,i^ inf(L_0) and S_0,i^ sup(L_0) in the S_0,i–L plane, which we then invert to obtain the profile likelihood L_i(S_0,i) as a function of S_0,i. Figure <ref> is a schematic illustration of our procedure.To compute S_0,i^ inf(L_0) and S_0,i^ sup(L_0) we notice that the likelihood function is a function of f_() only through the integrals S_m,j in Eq. (<ref>). So we rephrase our goal as the following mathematical problem. ExtremizeS_0,i = ∫_0,i^() f_() d^3u over the set A(L_0) of distribution functions f_() that satisfy the N+1 moment conditions ∫ f_() d^3u = 1 , ∫^_m,j() f_() d^3u = S_m,j (j=1,…,N) , where the moments S_m,j are allowed to vary within the region defined by the likelihood conditionL({S_m,j}) ≥ L_0.Mathematically, this is an optimization problem of the kind discussed for example in <cit.>. Specifically, an optimization problem in which the moment set (i.e., the set of distribution functions that obey the moment conditions) is defined by a subset C(L_0) ⊆ℝ^N+1, namely the subset C(L_0) in which the moments (1,S_m,1,…,S_m,N) satisfy L({S_m,j})≥ L_0.The fundamental theorem in this context <cit.> states that S_0,i achieves its extreme values S_0,i^ inf(L_0) and S_0,i^ sup(L_0) on the extreme distributions of the moment set, which are sums with positive coefficients of a finite number K≤ N+1 of Dirac delta functions. Here N+1 is the number of moment conditions. More precisely,the extreme distributions ofthe moment set defined by the moment conditions (<ref>–<ref>) have the formf_ e() = ∑_k=1^Kλ_kδ( - _k) ,where 1≤ K ≤ N+1,∑_k=1^Kλ_k^_m,j(_k) = S_m,j (j=1,…,N), ∑_k=1^Kλ_k = 1 , λ_k > 0(k=1,…,K),and the K N+1-dimensional vectors (1,^_m,1(_k),…,^_m,N(_k)), where the index k=1,…,K specifies the vector, are linearly independent.Geometrically, the extreme distributions of the moment set are analogous to the vertices of a polyhedron.In the finite-dimensional case, the fundamental theorem states that the maximum and minimum values of a linear function defined over a polyhedron are achieved at one or more vertices of the polyhedron, and thus to find these extrema it suffices to compute the value of the linear function on the vertices. In the continuum case, the fundamental theorem states that the extrema of a linear functional of the distribution (in our case, each S_0,i) are achieved at one or more “vertices” of the moment set (i.e., at the extreme distributions), and thus to find these extrema it suffices to compute the value of the linear functional on the extreme distributions. The computational advantage is that the fundamental theorem reduces an extremization problem in infinite dimensions (the moment set) into an extremization problem in a finite number of dimensions (the space of extreme distributions, which has dimension at most (1+d) N', where N'=N+1 is the number of moment conditions and d is the dimensionality of the velocity space). Physically, each delta-function distribution δ( - _k) in the expression of an extreme distribution, Eq. 
(<ref>), represents a stream of velocity _k and zero velocity dispersion. An extreme distribution is a weighted average of streams with weights λ_k. The fundamental theorem allows the computation of S_0,i^ inf(L_0) and S_0,i^ sup(L_0) by parametrizing the velocity distribution as a weighted average of no more than N+1 streams in velocity space, where N+1 is the number of moment conditions, including the normalization condition.[The fundamental theorem is also the rigorous mathematics behind the “interesting” facts that the number of steps in η̃() is less than or equal to the number of bins for a binned likelihood <cit.>, or that N_O streams suffice to minimize an unbinned likelihood with N_O events <cit.>, or that only 1+p+q grid points have a nonzero distribution function in the presence of 1+p+q moment conditions <cit.>.]The fundamental theorem translates the mathematical problem (<ref>–<ref>) into the following one.ExtremizeS_0,i = ∑_k=1^Kλ_k^_0,i(_k)over λ_k and _k subject to1≤ K ≤ N+1, λ_k > 0(k=1,…,K),∑_k=1^Kλ_k = 1 ,S_m,j= ∑_k=1^Kλ_k^_m,j(_k)(j=1,…,N),L({S_m,j}) ≥ L_0 . In practice this means that at fixed K and given L_0, the maximal range of the S_0,i integral computed using Eq. (<ref>) is swept by the λ_k, _k parameters that satisfy the constraints (<ref>-<ref>). The full range of S_0,i is then obtained by combining the N+1 intervals for 1≤ K ≤ N+1.It is very important to understand that this method does not in general give the optimal velocity distribution, or the maximum-likelihood velocity distribution. In fact, given a value L_0 of the likelihood, there are in general many S_m,i that have the same likelihood (all those on the likelihood contour level L({S_m,i})=L_0). But even if there is only one set of S_m,i that corresponds to a given value of the likelihood (and this happens at the point of absolute maximum likelihood for concave likelihood functions), there are in general many velocity distributions with the same moments S_m,i: some of them are extreme distributions (sums of streams), and some are continuous distributions, or more precisely, continuous linear combinations of sums of streams of the formf() = ∫_0^1 ∑_k=1^Kλ_k(α)δ(- _k(α) ) dα.In particular, although the value of the maximum likelihood can be obtained using only sums of streams, there is in general an infinite number of distributions, some discrete and some continuous, that maximize the likelihood. So even if we use extreme distributions to find the extreme values of S_0,i, it is not correct to think that in general these sums of streams are the only velocity distributions giving those extreme values. The reason we can use the methods of this Section to estimate the unmodulated signal is that we are not interested in finding the optimal velocity distribution but in performing a maximum-profile-likelihood analysis of quantities like S_0,i that are integrals of the velocity distribution. For this task, the method described in this Section is adequate and mathematically sound.§ ANALYSIS: ISOTROPIC CASEIn this Section, we apply the general method described in Section <ref> to a specific case: a halo-independent estimate of the unmodulated DAMA signal. Since this is the first implementation of our method, we have made some simplifying assumptions that have allowed us to explore and understand the difficulties and merits of the method itself. 
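Before specializing further, here is a minimal sketch of how a single candidate extreme distribution, i.e. K streams with velocities u_k and weights λ_k, is scored in the extremization problem stated above. The response-function callables are hypothetical placeholders standing for the Galactic response functions evaluated at the stream velocities.

```python
import numpy as np

def score_streams(u, lam, R0_funcs, Rm_funcs, Sm_exp, dSm_exp):
    """Evaluate a sum-of-streams distribution f(u) = sum_k lam_k delta(u - u_k).

    u, lam          : K stream velocities (here one number each, as in the
                      isotropic case below) and positive weights
    R0_funcs        : N callables for the unmodulated Galactic responses
    Rm_funcs        : N callables for the modulated Galactic responses
    Sm_exp, dSm_exp : measured modulation amplitudes and their 1-sigma errors
    Returns the predicted S_0,i, the predicted S_m,j, and chi^2 = -2 ln L."""
    S0 = np.array([np.dot(lam, R0(u)) for R0 in R0_funcs])
    Sm = np.array([np.dot(lam, Rm(u)) for Rm in Rm_funcs])
    chi2 = float(np.sum(((Sm - Sm_exp) / dSm_exp) ** 2))
    return S0, Sm, chi2
```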
First and foremost, to reduce the computing time, we have restricted our analysis to WIMP velocity distributions that are isotropic in the Galactic reference frame, i.e., distributions for which f_gal depends on the velocity u only through its magnitude u = |u|, so that the distribution functions depend on one variable only (the magnitude u) instead of three (the components of u). Under this assumption, the Galactic response functions can be replaced by the angle-averaged Galactic response functions, defined by an average over the directions of the vector u as

ℛ̄^gal_0,i(u) = (1/4π) ∫ ℛ^gal_0,i(u) dΩ_u ,  ℛ̄^gal_m,i(u) = (1/4π) ∫ ℛ^gal_m,i(u) dΩ_u .

Then our extremization problem reads: Extremize

S_0,i = ∑_{k=1}^K λ_k ℛ̄^gal_0,i(u_k)

over λ_k and u_k subject to

1 ≤ K ≤ N+1,  λ_k > 0 (k=1,…,K),  ∑_{k=1}^K λ_k = 1 ,

S_m,j = ∑_{k=1}^K λ_k ℛ̄^gal_m,j(u_k)  (j=1,…,N),

L({S_m,j}) ≥ L_0 .

Moreover, the isotropic extreme distribution functions are

f̄_e(u) = ∑_{k=1}^K λ_k δ(u - u_k) ,

where f̄_e(u) = 4π u^2 f_e(u). While mathematically appropriate and well-defined, physically these isotropic extreme distributions do not describe a collection of streams in velocity space but rather some sort of spherical shells in velocity space. An additional simplifying assumption we make in this first application of our method is to consider only spin-independent scattering off sodium in the DAMA NaI detector. Thus we restrict our analysis to light WIMP masses (m_χ ≤ 15 GeV) for which WIMP elastic scattering off iodine is below threshold (for a constant iodine quenching factor Q_I = 0.09 and Galactic escape speeds less than ∼580 km/s). It must be noted that the results of our analysis apply also to cross sections that are not spin-independent but have a mild energy dependence in the 2–4 keVee energy range. For our analysis we use the N=12 DAMA cosine modulation amplitudes in the lowest energy bins in Fig. 8 of Ref. <cit.>. We list them in Table <ref>. These measurements were obtained using a total exposure of 1.33 ton yr. The signal is concentrated in the first 6 bins, and the other 6 bins act as a control set with no modulation signal. Data are also available for the sine modulation amplitudes <cit.>, but under our simplifying assumption of an isotropic velocity distribution the sine modulation response functions vanish identically (see Appendix), and thus including them in the likelihood would amount to adding an irrelevant constant. The DAMA Collaboration has also published time series of its modulation data <cit.>, with time binnings ranging between 30 days (close to the maxima and minima of the oscillation) and 70 days (close to its equilibrium points). However, it has been shown that the corresponding error bars can easily accommodate sizeable distortions from a sinusoidal time dependence of the signal (see for instance the discussion in Section 5 of Ref. <cit.>, relative to the case of a Maxwellian distribution yielding modulation fractions of order unity when the incoming WIMP velocities are very close to the escape velocity), so the ensuing constraint has no impact on our analysis and we neglect it. For the DAMA response functions off sodium, we take i to be the index of the energy bin in the electron-equivalent energy E_ee. The latter is related to the recoil energy E_R on average by E_ee = Q(E_R) E_R, where Q(E_R) is the quenching factor, with an additional smearing due to a finite energy resolution.
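A quick kinematic check of the iodine-below-threshold statement above, as a sketch with approximate nuclear masses: the maximum quenched iodine recoil energy stays at or below about 1 keVee, the hardware threshold, for the WIMP masses we consider.

```python
# Quick kinematic check (approximate masses) that iodine recoils stay at or
# below the 1 keVee hardware threshold for m_chi <= 15 GeV, as stated above.
m_I = 0.9315 * 127.0      # iodine nucleus mass, GeV (A * amu, approximate)
Q_I = 0.09                # assumed constant iodine quenching factor
v_max = 580.0 / 2.998e5   # maximal WIMP speed in the detector frame, units of c

for m_chi in (5.0, 10.0, 15.0):                    # GeV
    mu = m_chi * m_I / (m_chi + m_I)               # reduced mass, GeV
    ER_max_keV = 2.0 * mu**2 * v_max**2 / m_I * 1e6
    print(m_chi, Q_I * ER_max_keV)                 # max quenched energy, keVee
```

For m_χ = 15 GeV this gives roughly 1.0 keVee, i.e., right at the boundary quoted in the text; lighter masses fall well below it.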
We use the DAMA response functions for elastic spin-independent WIMP–nucleus scattering, but our results extend practically unchanged to other elastic interactions in which the velocity and/or energy dependence of the cross section is negligible in the DAMA energy bins (e.g., spin-dependent interactions or other nonrelativistic effective operators that do not show a large variation with recoil energy or WIMP velocity). The DAMA response function for the i-th bin with electron-equivalent energies in the range E_ee,i ≤ E_ee ≤ E_ee,i+1 is <cit.>:

ℛ_i(v) = [N_T/(M_det ΔE)] (ρ_χ/m_χ) σ_χT ℛ̄_i(v),

ℛ̄_i(v) = [v/E_R^max(v)] ∫_0^{E_R^max(v)} dE_R F(E_R,v) ∫_{E_ee,i}^{E_ee,i+1} dE_ee G_T(E_ee,E_R) ϵ(E_ee).

Here N_T is the number of target nuclei in the detector, M_det is the mass of the detector, ΔE = E_ee,i+1 - E_ee,i is the width of the energy bins, ρ_χ is the WIMP density in the neighborhood of the Sun, σ_χT is a reference cross section representing the strength of the WIMP–nucleus interaction,

F(E_R,v) = [E_R^max(v)/σ_χT] dσ_χT/dE_R

is a form factor which depends on the assumed interaction operator, G_T(E_ee,E_R) is the energy resolution smearing function, ϵ(E_ee) is an acceptance function including the effect of experimental cuts, and

E_R^max(v) = 2 μ_χT^2 v^2 / m_T

is the maximum recoil energy achievable for a WIMP of speed v scattering off a nucleus of mass m_T (here μ_χT = m_χ m_T/(m_χ + m_T) is the reduced WIMP–nucleus mass). The reduced response function ℛ̄_i(v) has dimensions of velocity. The dimensionless ratio ℛ̄_i(v)/v has the immediate physical interpretation as the fraction of an incoming monochromatic flux of speed v that is detected in the i-th electron-equivalent energy bin. We take G_T(E_ee,E_R) to be a Gaussian in E_ee centered at E_ee = Q(E_R) E_R and with width σ_rms/keVee = 0.0091 (E_ee/keVee) + 0.448 (E_ee/keVee)^{1/2}. We further assume that G_T(E_ee,E_R) vanishes below the hardware threshold of 1 keVee. We assume a constant quenching factor Q(E_R) = 0.3 for sodium. For the form factor F(E_R,v) we use the spin-independent form factor of ^23Na as given by the Helm form in <cit.>, in which case σ_χT is the point-like ^23Na–WIMP cross section. For the WIMP masses we consider (m_χ ≲ 15 GeV), this form factor varies negligibly in our analysis: by less than 1% over the 2–4 keVee range where the DAMA modulation is significant, and by ≲3% over the whole 2–8 keVee range. By the same token, our analysis applies to all cases in which the variation of the ^23Na form factor F(E_R,v) is negligible. Notice in addition that for such ^23Na form factors, any difference in strength between WIMP–proton and WIMP–neutron interactions can be included in the reference cross section σ_χT. Thus our analysis applies equally well to ^23Na–WIMP elastic scattering that is, for example, any combination of isoscalar and isovector spin-independent interactions, any combination of spin-dependent interactions (for which the ^23Na form factors vary by ≲1% over the whole 2–8 keVee range), and so on. We implement the angle-averaged Galactic response functions ℛ̄^gal_0,i(u) and ℛ̄^gal_m,i(u) for elastic spin-independent WIMP–sodium scattering according to Eqs. (<ref>–<ref>) in the Appendix, with ℛ̄_i(v) given by Eq. (<ref>) and v_⊙ = 232 km/s. Figs. <ref> and <ref> show these response functions for m_χ = 10 GeV.
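The angle- and time-averaged Galactic response functions can also be assembled numerically without the elliptic-integral kernels of the Appendix. The sketch below performs the direct double average, over directions and over one year for a circular Earth orbit, given any lab-frame response ℛ̄(v). The Earth orbital speed of 29.8 km/s and cos β ≈ 0.49 are standard values we assume here (they are not quoted in the text); v_⊙ = 232 km/s as above, and the toy ℛ̄(v) at the end is a placeholder.

```python
import numpy as np

V_SUN, V_EARTH = 232.0, 29.8       # km/s; Earth's orbital speed is approximate
COS_BETA = 0.49                    # assumed cos of the angle between v_sun
                                   # and the Earth-orbit plane direction

def V_of_t(t_frac):
    """|v_sun + v_earth(t)| for a circular orbit, t_frac in years from t_0."""
    return np.sqrt(V_SUN**2 + V_EARTH**2
                   + 2.0 * V_SUN * V_EARTH * COS_BETA * np.cos(2.0 * np.pi * t_frac))

def _angle_avg(u, R_lab, nt=128, nc=256):
    # midpoint grids over one year and over cos(theta_u)
    t = (np.arange(nt) + 0.5) / nt
    c = -1.0 + 2.0 * (np.arange(nc) + 0.5) / nc
    V = V_of_t(t)[:, None]
    v = np.sqrt(u * u + V * V - 2.0 * u * V * c[None, :])   # |u - V|
    return t, np.mean(R_lab(v), axis=1)    # (1/4pi) integral over directions

def Rgal_0(u, R_lab):
    """Unmodulated (time-averaged) angle-averaged Galactic response at speed u."""
    t, ang = _angle_avg(u, R_lab)
    return np.mean(ang)

def Rgal_m(u, R_lab):
    """Cosine modulation amplitude of the angle-averaged Galactic response."""
    t, ang = _angle_avg(u, R_lab)
    return 2.0 * np.mean(np.cos(2.0 * np.pi * t) * ang)

# toy lab response, e.g. a threshold ramp R(v) = max(v - 250, 0) km/s
print(Rgal_0(300.0, lambda v: np.clip(v - 250.0, 0.0, None)))
print(Rgal_m(300.0, lambda v: np.clip(v - 250.0, 0.0, None)))
```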
In these figures, we plot the reduced response functions

ℛ̄^gal_0,i(u) = [m_χ M_det ΔE/(N_T ρ_χ σ_χT)] ℛ^gal_0,i(u),  ℛ̄^gal_m,i(u) = [m_χ M_det ΔE/(N_T ρ_χ σ_χT)] ℛ^gal_m,i(u).

The reduced response functions have dimensions of velocity and represent the Galactic response functions normalized in the same way as ℛ̄_i(v) in Eq. (<ref>). Conceptually, the reduced response functions divided by u give the fraction of WIMPs of Galactic speed u that contribute to the detectable event rate. As anticipated, we cast the modulation effect as a property of the detector Galactic response function: the only information needed to obtain the reduced Galactic response functions for a given WIMP–nucleus cross section, both modulated and unmodulated, are the experimental properties of the detector and the motion of the detector in the Galaxy. Once the Galactic response functions are given, the procedure outlined in Section <ref>, namely solving the extremization problem (<ref>–<ref>), can be used to estimate the unmodulated signals S_0,i from the DAMA data on S_m,i. The following three considerations are relevant in actually implementing the method of Section <ref>.

(1) Although no assumptions on f_gal(u) are needed in the extremization problem (<ref>–<ref>), it appears natural to assume that there is a maximum speed for the WIMPs in the Galaxy. Thus we assume that f_gal(u) vanishes when u exceeds a maximum velocity u_max, which we take to be the escape speed from the Galaxy as quoted in <cit.>, u_max = 550 km/s. (This value is within the updated measurements in <cit.>.) We therefore restrict u ≤ u_max.

(2) As seen in Eq. (<ref>), and as true in general, the absolute normalization of the response functions contains the unknown factor ρ_χ σ_χT. However, its value cancels out in the determination of the S_0,i from the S_m,i, essentially because the same factor ρ_χ σ_χT appears in both. In principle, one could fix the value of ρ_χ σ_χT that appears in the nonreduced response functions ℛ^gal_0,i(u) and ℛ^gal_m,i(u) in the extremization problem (<ref>–<ref>), solve it as it stands, and then combine the solutions as the value of ρ_χ σ_χT is varied from zero to infinity. Alternatively, and this is the procedure we actually implement, one can use the reduced response functions ℛ̄^gal_0,i(u) and ℛ̄^gal_m,i(u), which do not contain the factor ρ_χ σ_χT, rescale the coefficients λ_k to

λ̃_k = [N_T ρ_χ σ_χT/(m_χ M_det ΔE)] λ_k ,

so that

λ_k ℛ^gal_0,i(u_k) = λ̃_k ℛ̄^gal_0,i(u_k) ,  λ_k ℛ^gal_m,i(u_k) = λ̃_k ℛ̄^gal_m,i(u_k) ,

and solve the modified extremization problem in which the normalization condition ∑_{k=1}^K λ_k = 1 is replaced by the condition

∑_{k=1}^K λ̃_k > 0 .

Since this condition is already contained in the conditions λ̃_k > 0, one of the moment conditions effectively disappears, and the extreme values of the S_0,i can be found with sums of up to N, instead of N+1, streams in velocity space, provided the extreme distributions contain the rescaled coefficients λ̃_k instead of λ_k.

(3) Having dropped the normalization condition on the λ̃_k as described in the previous paragraph, the K streams (1 ≤ K ≤ N) of an extreme distribution must have speeds u_k such that the K N-dimensional vectors (ℛ̄^gal_m,1(u_k),…,ℛ̄^gal_m,N(u_k)) are linearly independent. Now the experimental threshold in observed energy (1 keVee in our treatment of DAMA; see our discussion of G_T(E_ee,E_R) after Eq. (<ref>)) induces a region below threshold in velocity space, comprised of all speeds u for which the response functions vanish simultaneously, ℛ̄^gal_m,i(u) = 0 (i=1,…,N) for u below threshold.
For example, in the isotropic case we consider, the threshold speed for the modulated Galactic response functions turns out to be u_thr = 210.13, 30.91, 0 km/s for m_χ = 5, 10, 15 GeV, respectively. Thus, if the velocity of one or more of the K streams is below threshold, the K vectors mentioned above are not linearly independent (one or more of them is the zero vector). On the other hand, the streams with velocity below threshold do not contribute to the S_m,i signals at all (indeed, all the response functions are zero for these streams). Thus a sum of K streams in which some streams are below threshold is effectively an extreme distribution with fewer than K streams. Since we let K vary from 1 to N, it is obvious that it is enough for extreme distributions to include only streams above threshold. Therefore we allow only u > u_thr.

The practical implementation of the method described above is conceptually quite simple, although the use of the parametrization (<ref>) for the extreme distributions of the moment set requires exploring a parameter space of large dimensionality (2N = 24 in our N = 12 case with an isotropic Galactic velocity distribution; 4N = 48 if we explored anisotropic Galactic velocity distributions). This kind of task is efficiently performed by using the technique of Markov chains, which makes use of the likelihood function itself to optimize the sampling procedure.[Alternatively one could use grids in velocity space and increase the grid resolution to increase the precision of the computed extreme values, or one could use algorithms to directly maximize and minimize S_0,i under the given constraints. We tried some of those methods without success. We chose to generate Monte-Carlo samples of the likelihood to gain confidence in our method.] To this aim we use the Markov-Chain Monte Carlo (MCMC) code emcee <cit.> to generate a large number of sets {u, λ̃} = {u_1, …, u_K, λ̃_1, …, λ̃_K} of Galactic speeds u_k and coefficients λ̃_k with 1 ≤ k ≤ K, 1 ≤ K ≤ N, u_thr < u_k ≤ u_max and λ̃_k > 0. For each value of K = 1, …, N, we generate a Markov chain of 5×10^6 points using 250 independent walkers and a standard Metropolis-Hastings sampler. For each MCMC-generated set {u, λ̃} we calculate both χ^2 = -2 ln L and S_0,i for i = 1, …, N (N = 12) and produce N scatter plots of χ^2 vs. each of the S_0,i.
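A minimal sketch of this sampling step, scaled down from the run just described: the parameters are K stream speeds plus K log-weights (which enforces λ̃_k > 0), the log-probability is -χ²/2 within the box u_thr < u_k ≤ u_max, and the response interpolators and data arrays below are dummy stand-ins, flagged as such, to make the sketch self-contained. We use emcee's default ensemble sampler here rather than a Metropolis-Hastings move; either can be used for this likelihood.

```python
import numpy as np
import emcee

# Dummy stand-ins so the sketch runs; replace with the real reduced Galactic
# response interpolators and the tabulated DAMA modulation amplitudes.
N = 12
Sm_exp = np.full(N, 0.01)
dSm_exp = np.full(N, 0.005)
Rm_funcs = [lambda u, i=i: 1e-3 * np.exp(-((u - 150.0 - 25.0 * i) / 80.0) ** 2)
            for i in range(N)]

def make_log_prob(u_thr, u_max):
    def log_prob(theta):
        k = theta.size // 2
        u, loglam = theta[:k], theta[k:]
        if np.any(u <= u_thr) or np.any(u > u_max) or np.any(np.abs(loglam) > 30.0):
            return -np.inf
        lam = np.exp(loglam)                       # lambda_k > 0 by construction
        Sm = np.array([np.dot(lam, Rm(u)) for Rm in Rm_funcs])
        return -0.5 * np.sum(((Sm - Sm_exp) / dSm_exp) ** 2)   # -chi^2 / 2
    return log_prob

K, nwalkers, nsteps = 12, 64, 2000                 # scaled-down run
rng = np.random.default_rng(1)
p0 = np.column_stack([rng.uniform(31.0, 550.0, (nwalkers, K)),
                      rng.uniform(-8.0, 0.0, (nwalkers, K))])
sampler = emcee.EnsembleSampler(nwalkers, 2 * K, make_log_prob(30.91, 550.0))
sampler.run_mcmc(p0, nsteps)
chain = sampler.get_chain(flat=True)               # then compute S_0,i per sample
```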
<ref> we show the regions -2Δln L_ p({S_0,i})≤1 (red points) and -2Δln L_ p({S_0,i})≤3 (black points) in the plane S_0,1–S_0,2 of the unmodulated signals in the first and second energy bins for m_χ=10 GeV. The 1σ confidence intervals on S_0,1, for example, are obtained by projecting the red ellipse onto the S_0,1 axis.The maximum-likelihood estimates and the 1σ confidence intervals of S_0,i (i=1,…,12), obtained from Figs. <ref>, <ref> and <ref> (horizontal blue lines best visible in Fig. <ref>),[For the higher masses, the sampling becomes sparse at large values of S_0,i and the exact value of S^ sup_0,i(L_0) becomes harder to determine from the figures. This problem is there because the MCMC concentrates points near the maximum-likelihood value of S_0,i. A few attempted methods of direct minimization of the χ^2 at fixed S_0,i did not converge. We have performed additional dedicated MCMC runs focused at large values of S_0,i and the value of S^ sup_0,i(L_0) did not change significantly.] are listed in Table <ref>and plotted versus the electron–equivalent energy E_ee in Figs. <ref>, <ref> and <ref>,for m_χ=5, 10, and 15 GeV, respectively. These are the main results of our paper. Table <ref> also lists the DAMA modulated amplitudes S_m,i (from Fig. 8 of <cit.>) and the DAMA background+signal rates B_i+S_i (from Fig. 27 of <cit.>, rebinned from 0.25-keVee- to 0.5-keVee-width bins). We see that for each m_χ, the S_0,i decrease with energy, as reasonable for a WIMP signal. We also see that the error bars on the S_0.i are small enough to allow a rather good determination of the unmodulated signal, although the error bars become very asymmetric for m_χ = 15 GeV. Moreover, an examination of Table <ref> leads to the conclusion that the unmodulated signals S_0,i are much smaller than the DAMA background+signal measurements. Since it is not trivial to identify what contributes to the DAMA background, we refrain from subtracting an estimated model background like done for instance in <cit.>. It suffices for us to conclude that the S_0,i values we estimate in this paper are reasonable and compatible with the measured DAMA background+signal level. It is finally interesting to estimate the fraction of the signal that is modulated. In the energy range where the bulk of the DAMA modulation is present, i.e., 2 keVee<E_ee<4 keVee, we find, for χ^2_i≤χ^2_i, min+1 (examining the ratio S_m,i/S_0,i for each point in red in Figs. <ref>, <ref> and <ref>): 0.04 S_m,i/S_0,i 0.14form_χ = 5 GeV, 0.05 S_m,i/S_0,i 0.17form_χ = 10 GeV,0.03 S_m,i/S_0,i 0.24form_χ = 15 GeV.The modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass. This is in line with expectations for a signal due to dark matter WIMPs.This conclusion is limited to the case at study, i.e., to velocity distributions that are isotropic in the frame of the galaxy. However it is worth pointing out that solutions with higher modulation fractions are present in the allowed parameter space also in the isotropic case, but are rejected by the DAMA data. This can be seen in Fig. <ref>, where we plot the ratios between modulated and unmodulated response functions ^_m,i/^_0,i for m_χ=10 GeV. In this specific example it is clear that, if f_ gal(u) is parametrized according to Eq. (<ref>) with a set of u_j velocities all below ∼60 km/s, modulation fractions close to 1 are possible. These configurations, however, have a very small likelihood and do not appear in the 1σ confidence intervals. 
Notice that at even lower velocities, values of the ^_m,i/^_0,i ratios larger than one signal the need of additional harmonics besides the cosine in the description of the modulation time dependence of the signal.§ CONCLUSIONSWe have estimated the unmodulated signal corresponding to the DAMA modulation if interpreted as due to scattering of dark matter WIMPs off ^23Na in the DAMA detector. Our analysis, covering WIMPs lighter than ∼ 15 GeV, is to large extent independent of the dark halo model (we profile the likelihood over velocity distributions that are isotropic in the Galactic rest frame) and of the particle physics model (we cover all ^23Na–WIMP interactions in which the ^23Na cross section vary little over the 2–4 keVee energy range where the DAMA modulation is significant).The method outlined in Section <ref> is however very general and valid for any class of velocity distribution.We have presented and used an exact and sound mathematical set-up to profile the likelihood over a continuum of nuisance parameters (the whole WIMP velocity distribution). We have used the profile likelihood for each of the unmodulated rates S_0,i (i=1,…,12 indexing the first 12 DAMA energy bins) to find their maximum-likelihood estimates and their 1σ confidence intervals.Our halo-independent estimates of the unmodulated rates are reasonable and in line with expectations for a signal from WIMP dark matter. The unmodulated rates we estimate give a modulated/unmodulated ratio ranging between a few percent and ∼ 25%. The unmodulated rates are comfortably below the background+signal level measured by DAMA.§ ACKNOWLEDGEMENTS This work was completed during Stefano Scopel's sabbatical semester at the University of Utah in the Spring of 2016, and was presented at various conferences thereafter.We thank the many audience members, and in particular Riccardo Catena, for asking many questions that have helped sharpening the writing of this paper. Paolo Gondolo thanks Sogang University for the financial support and the kind hospitality during his research visits. This work was partially supported by NSF award PHY-1415974 at the University of Utah and by the National Research Foundation of Korea grant number 2016R1D1A1A09917964 at Sogang University.§ ANGLE-AVERAGED GALACTIC RESPONSE FUNCTIONS In the application of the optimization method to the modulated/unmodulated DAMA rates in this paper we focus on velocity distributions that are isotropic in the Galactic frame, i.e., for which f_()=f_(u),where u=||. In this case, integrals of the form (<ref>) becomeS_i(t) = ∫^_i(u,t)f_(u) du,where_i^(u,t) = 1/4π∫^_i(,t)dΩ_uis the angle-averaged Galactic response function, and f_(u) = 4 π u^2 f_(u)is the speed distribution in the Galactic frame normalized to one (∫_0^∞f_(u) du = 1). A Fourier time-series analysis then gives, in this case of isotropic Galactic velocity distributions,S_0,i = ∫^_0,i(u)f_(u) du,S_m,i = ∫^_m,i(u)f_(u) du,where^_0,i(u) = 1/T∫_0^T dt^_i(u,t) , ^_m,i(u) = 2/T∫_0^T dtcos[ω(t-t_0)]^_i(u,t) .For nondirectional dark matter detectors, the laboratory response functions _i() are isotropic, i.e., _i() = _i(v). In this case, Eqs. (<ref>), (<ref>) and (<ref>–<ref>) give^_0,i(u) = 1/4π∫ dΩ_u1/T∫_0^T dt_i(|-|),^_m,i(u) = 1/4π∫ dΩ_u2/T∫_0^T dtcos(ω t)_i(|-|),where= +(t).These integrals can be recast as integrals over v=|-| with analytically computed kernels by means of the following manipulations. Let the polar angles (θ_u,ϕ_u) ofin dΩ_u=sinθ_u dθ_u dϕ_u be defined with respect to the direction of . 
Then v = |-| = √(u^2+V^2-2uVcosθ_u) .Perform the trivial integration over ϕ_u, and change integration variable from cosθ_u to v. Then interchange the order of the t and v integrations. This gives^_0,i(u) = ∫_0^∞ dv v_i(v)1/T∫_0^T dt1/2uV Θ( |u-v| ≤ V ≤ u+v ) ,^_m,i(u) = ∫_0^∞ dv v_i(v)2/T∫_0^T dtcos(ω t)/2uV Θ( |u-v| ≤ V ≤ u+v ) .Here Θ(x)=1 if x is true and Θ(x)=0 if x is false. The integrals over the time t in Eqs. (<ref>) and (<ref>) can be performed analytically for a circular Earth orbit. Write (t) = _ 1cos(ω t) + _ 2sin(ω t),where t=0 whenandform their smallest angle β, i.e., ·_ 1 = cosβ and ·_ 2 = 0. Then ·_(t) = cosβcos(ω t) andV = √(^2+^2+2cosβcos(ω t)) = v_a √( 1+ϵcos(ω t)),where v_a = √(^2+^2), ϵ = 2cosβ/^2+^2.In terms of these variables,^_0,i(u) = 1/2uv_a∫_0^∞ dv v_i(v) K_0( u/v_a, v/v_a) ,^_m,i(u) = - cosβ/2uv_a^3∫_0^∞ dv v_i(v) K_m( u/v_a, v/v_a) ,where we defineK_0(x,y) = 1/T∫_0^T dt1/√(1+ϵcos(ω t))Θ( (x-y)^2 ≤ 1+ ϵcos(ω t) ≤ (x+y)^2 ) , K_m(x,y) = - 4/ϵT∫_0^T dtcos(ω t)/√(1+ϵcos(ω t))Θ( (x-y)^2 ≤ 1+ ϵcos(ω t) ≤ (x+y)^2 ) .The functions K_0(x,y) and K_m(x,y) are symmetric in x and y, and the integrals appearing in them can be expressed in terms of the incomplete elliptic functions E(ϕ|m) and F(ϕ|m). K_0(x,y) = I_0(α(x-y))-I_0(α(x+y)), K_m(x,y) = I_m(α(x-y))-I_m(α(x+y)),whereI_0(α) = 1/π√(1+ϵ)F(α/2|2ϵ/1+ϵ) , I_m(α) = - 8/πϵ√(1+ϵ)[ (1+ϵ) E(α/2|2ϵ/1+ϵ) - F(α/2|2ϵ/1+ϵ) ],andα(ξ) =π, |ξ|≤√(1-ϵ) , arccos(ξ^2-1/ϵ),√(1-ϵ)≤ |ξ| ≤√(1+ϵ), 0, |ξ|≥√(1+ϵ) .We used Eqs. (<ref>) and (<ref>) to compute the DAMA angle-averaged Galactic response functions.We notice lastly that the angle–averaged Galactic response functions for the sin(ω t) Fourier mode vanish identically. This follows from the observation that the formula for the angle–averaged sine modulation response functions is analogous to equation (<ref>) with the replacement of cos(ω t) with sin(ω t) in the numerator but the same dependence on V. Since V depends on cos(ω t) only, the integrand is thus a product of an odd and an even function of ω t and vanishes when integrated over a whole period.99Drukier:1986tm A. K. Drukier, K. Freese, and D. N. Spergel, Detecting Cold Dark Matter Candidates,Phys. Rev. D33 (1986) 3495–3508. dama DAMA, LIBRA Collaboration, R. Bernabei et al., Final model independent result of DAMA/LIBRA-phase1,Eur. Phys. J. C73 (2013) 2648, [http://arxiv.org/abs/1308.5109arXiv:1308.5109].lux LUX Collaboration, D. S. Akerib et al., First results from the LUX dark matter experiment at the Sanford Underground Research Facility,Phys. Rev. Lett. 112 (2014) 091303, [http://arxiv.org/abs/1310.8214arXiv:1310.8214].xenon100 XENON100 Collaboration, E. Aprile et al., Dark Matter Results from 225 Live Days of XENON100 Data,Phys. Rev. Lett. 109 (2012) 181301, [http://arxiv.org/abs/1207.5988arXiv:1207.5988].xenon10 XENON10 Collaboration, J. Angle et al., A search for light dark matter in XENON10 data,Phys. Rev. Lett. 107 (2011) 051301, [http://arxiv.org/abs/1104.3088arXiv:1104.3088]. [Erratum: Phys. Rev. Lett.110,249901(2013)].kims S. C. Kim et al., New Limits on Interactions between Weakly Interacting Massive Particles and Nucleons Obtained with CsI(Tl) Crystal Detectors, Phys. Rev. Lett. 108 (2012) 181301, [http://arxiv.org/abs/1204.2646arXiv:1204.2646].cdms_ge CDMS-II Collaboration, Z. Ahmed et al., Results from a Low-Energy Analysis of the CDMS II Germanium Data,Phys. Rev. Lett. 106 (2011) 131302, [http://arxiv.org/abs/1011.2482arXiv:1011.2482].cdms_lite SuperCDMS Collaboration, R. 
The intra-cluster magnetic field power spectrum

F. Govoni (email: [email protected])

(1) INAF - Osservatorio Astronomico di Cagliari, Via della Scienza 5, I-09047 Selargius (CA), Italy
(2) Dip. di Fisica, Università degli Studi di Cagliari, Strada Prov.le Monserrato-Sestu Km 0.700, I-09042 Monserrato (CA), Italy
(3) Dip. di Fisica, Università degli Studi di Trieste - Sezione di Astronomia, via Tiepolo 11, I-34143 Trieste, Italy
(4) INAF - Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34143 Trieste, Italy
(5) INAF - IASF Milano, Via Bassini 15, I-20133 Milano, Italy
(6) Dep. of Physics and Astronomy, University of California at Irvine, 4129 Frederick Reines Hall, Irvine, CA 92697-4575, USA
(7) INAF - Istituto di Radioastronomia, Bologna, Via Gobetti 101, I-40129 Bologna, Italy
(8) Dip. di Fisica e Astronomia, Università degli Studi di Bologna, Viale Berti Pichat 6/2, I-40127 Bologna, Italy
(9) Agenzia Spaziale Italiana (ASI), Roma, Italy
(10) SKA SA, 3rd Floor, The Park, Park Road, Pinelands 7405, Cape Town, South Africa
(11) Department of Physics and Electronics, Rhodes University, PO Box 94, Grahamstown 6140, South Africa
(12) Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany
(13) Fundación G. Galilei - INAF TNG, Rambla J. A. Fernández Pérez 7, E-38712 Breña Baja (La Palma), Spain
(14) Instituto de Astrofísica de Canarias, C/Vía Láctea s/n, E-38205 La Laguna (Tenerife), Spain
(15) Dep. de Astrofísica, Univ. de La Laguna, Av. del Astrofísico Francisco Sánchez s/n, E-38205 La Laguna (Tenerife), Spain
(16) ASTRON, the Netherlands Institute for Radio Astronomy, Postbus 2, 7990 AA Dwingeloo, The Netherlands
(17) Kapteyn Astronomical Institute, Rijksuniversiteit Groningen, Landleven 12, 9747 AD Groningen, The Netherlands
(18) Naval Research Laboratory, Washington, District of Columbia 20375, USA
(19) School of Physics, University of the Witwatersrand, Private Bag 3, 2050 Johannesburg, South Africa
(20) University of Leiden, Rapenburg 70, 2311 EZ Leiden, The Netherlands
(21) Astronomy Department, University of Geneva, 16 ch. d'Ecogia, CH-1290 Versoix, Switzerland
(22) Max Planck Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85740 Garching, Germany
(23) Laboratoire Lagrange, UCA, OCA, CNRS, Blvd de l'Observatoire, CS 34229, 06304 Nice cedex 4, France
(24) School of Chemical & Physical Sciences, Victoria University of Wellington, PO Box 600, Wellington 6140, New Zealand
(25) Argelander-Institut für Astronomie, Auf dem Hügel 71, D-53121 Bonn, Germany
(26) National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801, USA
(27) Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131, USA

We study the intra-cluster magnetic field in the poor galaxy cluster Abell 194 by complementing radio data, at different frequencies, with data in the optical and X-ray bands. We analyze new total intensity and polarization observations of Abell 194 obtained with the Sardinia Radio Telescope (SRT). We use the SRT data in combination with archival Very Large Array observations to derive both the spectral aging and Rotation Measure (RM) images of the radio galaxies 3C 40A and 3C 40B embedded in Abell 194. To obtain additional insights into the cluster structure, we investigate the redshifts of 1893 galaxies, resulting in a sample of 143 fiducial cluster members. We analyze the available ROSAT and Chandra observations to measure the electron density profile of the galaxy cluster. The optical analysis indicates that Abell 194 does not show a major and recent cluster merger, but rather agrees with a scenario of accretion of small groups, mainly along the NE-SW direction.
Under the minimum energy assumption, the lifetime of the synchrotron electrons in 3C 40B measured from the spectral break is found to be 157±11 Myr. The break frequency image and the electron density profile inferred from the X-ray emission are used in combination with the RM data to constrain the intra-cluster magnetic field power spectrum. By assuming a Kolmogorov power-law power spectrum with a minimum scale of fluctuations of Λ_min=1 kpc, we find that the RM data in Abell 194 are well described by a magnetic field with a maximum scale of fluctuations of Λ_max=(64±24) kpc. We find a central magnetic field strength of ⟨B_0⟩=(1.5±0.2) μG, the lowest measured so far in galaxy clusters on the basis of Faraday rotation analysis. Further out, the field decreases with radius, following the gas density to the power of η=1.1±0.2. Comparing Abell 194 with a small sample of galaxy clusters, there is a hint of a trend between central electron densities and magnetic field strengths.

Sardinia Radio Telescope observations of Abell 194

F. Govoni^1, M. Murgia^1, V. Vacca^1, F. Loi^1,2, M. Girardi^3,4, F. Gastaldello^5,6, G. Giovannini^7,8, L. Feretti^7, R. Paladino^7, E. Carretti^1, R. Concu^1, A. Melis^1, S. Poppi^1, G. Valente^9,1, G. Bernardi^10,11, A. Bonafede^7,12, W. Boschin^13,14,15, M. Brienza^16,17, T. E. Clarke^18, S. Colafrancesco^19, F. de Gasperin^20, D. Eckert^21, T. A. Enßlin^22, C. Ferrari^23, L. Gregorini^7, M. Johnston-Hollitt^24, H. Junklewitz^25, E. Orrù^16, P. Parma^7, R. Perley^26, M. Rossetti^5, G. B. Taylor^27, F. Vazza^7,12

Received; accepted

§ INTRODUCTION

Galaxy clusters are unique laboratories for the investigation of turbulent fluid motions and large-scale magnetic fields (e.g. Carilli & Taylor 2002, Govoni & Feretti 2004, Murgia 2011). In the last few years, several efforts have focused on determining the effective strength and the structure of magnetic fields in galaxy clusters, and this topic represents a key project in view of the Square Kilometre Array (e.g. Johnston-Hollitt et al. 2015). Synchrotron radio halos at the center of galaxy clusters (e.g. Feretti et al. 2012, Ferrari et al. 2008) provide direct evidence of the presence of relativistic particles and magnetic fields associated with the intra-cluster medium. In particular, the detection of polarized emission from radio halos is the key to investigating the magnetic field power spectrum in galaxy clusters (Murgia et al. 2004, Govoni et al. 2006, Vacca et al. 2010, Govoni et al. 2013, Govoni et al. 2015). However, detecting this polarized signal is a very hard task with current radio facilities, and so far only three examples of large-scale filamentary polarized structures possibly associated with halo emission have been detected (A2255; Govoni et al. 2005, Pizzo et al.
2011, MACS J0717.5+3745; Bonafede et al. 2009, A523; Girardi et al. 2016). Highly polarized elongated radio sources named relics are also observed at the periphery of merging systems (e.g. Clarke & Ensslin 2006, Bonafede et al. 2009b, van Weeren et al. 2010). They trace the regions where the propagation of mildly supersonic shock waves compresses the turbulent intra-cluster magnetic field, enhancing the polarized emission and accelerating the relativistic electrons responsible for the synchrotron emission. A complementary set of information on galaxy cluster magnetic fields can be obtained from high quality RM images of powerful and extended radio galaxies. The presence of a magnetized plasma between an observer and a radio source changes the properties of the incoming polarized emission. In particular, the position angle of the linearly polarized radiation rotates by an amount that is proportional to the line integral of the magnetic field along the line-of-sight times the electron density of the intervening medium, i.e. the so-called Faraday rotation effect. Therefore, information on the intra-cluster magnetic fields can be obtained, in conjunction with X-ray observations of the hot gas, through the analysis of the RM of radio galaxies in the background or in the galaxy clusters themselves. RM studies have been performed on statistical samples (e.g. Clarke et al. 2001, Johnston-Hollitt & Ekers 2004, Govoni et al. 2010) as well as on individual clusters (e.g. Perley & Taylor 1991, Taylor & Perley 1993, Feretti et al. 1995, Feretti et al. 1999, Taylor et al. 2001, Eilek & Owen 2002, Pratley et al. 2013). These studies reveal that magnetic fields are widespread in the intra-cluster medium, regardless of the presence of a diffuse radio halo emission. The RM distribution seen over extended radio galaxies is generally patchy, indicating that magnetic fields are not regularly ordered on cluster scales, but instead have turbulent structures down to linear scales as small as a few kpc or less. Therefore, RM measurements probe the complex topology of the cluster magnetic field, and indeed state-of-the-art software tools and approaches based on a Fourier domain formulation have been developed to constrain the magnetic field power spectrum parameters on the basis of the RM images (Enßlin & Vogt 2003, Murgia et al. 2004, Laing et al. 2008, Kuchar & Enßlin 2011, Bonafede et al. 2013). In some galaxy clusters and galaxy groups containing radio sources with very detailed RM images, the magnetic field power spectrum has been estimated (e.g. Vogt & Enßlin 2003, Murgia et al. 2004, Vogt & Enßlin 2005, Govoni et al. 2006, Guidetti et al. 2008, Laing et al. 2008, Guidetti et al. 2010, Bonafede et al. 2010, Vacca et al. 2012). RM data are usually consistent with volume-averaged magnetic fields of ≃0.1-1 μG over 1 Mpc^3. The central magnetic field strengths are typically a few μG, but stronger fields, with values exceeding ≃10 μG, are measured in the inner regions of relaxed cooling core clusters. There are several indications that the magnetic field intensity decreases going from the centre to the periphery, following the cluster gas density profile. This has been illustrated by magneto-hydrodynamical simulations (see e.g. Dolag et al. 2002, Brüggen et al. 2005, Xu et al. 2012, Vazza et al. 2014) and confirmed by RM data. In this paper we aim at improving our knowledge of the intra-cluster magnetic field in Abell 194. This nearby (z=0.018; Struble & Rood 1999) and poor (richness class R=0; Abell et al.
1989) galaxy cluster belongs to the SRT Multi-frequency Observations of Galaxy clusters (SMOG) sample, an early science program of the new SRT radio telescope. For our purpose, we investigated the total intensity and the polarization properties of two extended radio galaxies embedded in Abell 194, in combination with data in the optical and X-ray bands. The paper is organized as follows. In Sect. 2, we describe the SMOG project. In Sect. 3, we present the radio observations and the data reduction. In Sect. 4, we present the radio, optical, and X-ray properties of Abell 194. In Sect. 5, we complement the SRT total intensity data with archival VLA observations to derive spectral aging information on the radio galaxies. In Sect. 6, we complement the SRT polarization data with archival VLA observations to derive detailed multi-resolution RM images. In Sect. 7, we use numerical simulations to investigate the cluster magnetic field, by analyzing the RM and polarization data. Finally, in Sect. 8 we summarize our conclusions. Throughout this paper we assume a ΛCDM cosmology with H_0=71 km s^-1 Mpc^-1, Ω_m=0.27, and Ω_Λ=0.73. At the distance of Abell 194, 1″ corresponds to 0.36 kpc.
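As an illustrative cross-check (not part of the original analysis), the conversion quoted above follows from the low-redshift approximation of the angular-diameter distance for the adopted cosmology:

```python
# Illustrative cross-check (not from the paper): angular scale at the
# distance of Abell 194 for the adopted cosmology. At such a low redshift
# the angular-diameter distance is well approximated by
# D_A ~ (c z / H0) / (1 + z).
import math

C_KM_S = 2.998e5   # speed of light [km/s]
H0 = 71.0          # Hubble constant [km/s/Mpc]
Z = 0.018          # cluster redshift

d_a_mpc = (C_KM_S * Z / H0) / (1.0 + Z)               # ~74.7 Mpc
kpc_per_arcsec = d_a_mpc * 1e3 * math.radians(1.0 / 3600.0)

print(f"D_A ~ {d_a_mpc:.1f} Mpc; 1 arcsec ~ {kpc_per_arcsec:.2f} kpc")
# -> 1 arcsec ~ 0.36 kpc, as quoted in the text
```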
§ THE SRT MULTI-FREQUENCY OBSERVATIONS OF GALAXY CLUSTERS (SMOG)

The SRT is a new 64-m single dish radio telescope located north of Cagliari, Italy. In its first light configuration, the SRT is equipped with three receivers: a 7-beam K-Band receiver (18-26 GHz), a mono-feed C-Band receiver (5700-7700 MHz), and a coaxial dual-feed L/P-Band receiver (305-410 MHz, 1300-1800 MHz). The antenna was officially opened on September 30th 2013, upon completion of the technical commissioning phase (Bolli et al. 2015). The scientific commissioning of the SRT was carried out in the period 2012-2015 (Prandoni et al., submitted). At the beginning of 2016 the first call for single dish early science programs was issued, and the observations started on February 1st, 2016. The SMOG project is an SRT early science program (PI M. Murgia) focused on a wide-band and wide-field single dish spectral-polarimetric study of a sample of galaxy clusters. By comparing and complementing the SRT observations with archival radio data at higher resolution and at different frequencies, but also with data in the optical and X-ray bands, we want to improve our knowledge of the non-thermal components of the intra-cluster medium on large scales. Our aim is also to understand the interplay between these components (relativistic particles and magnetic fields) and the life-cycles of cluster radio galaxies by studying both the spectral and polarization properties of the radio sources with the SRT (see e.g. the case of 3C 129; Murgia et al. 2016). For this purpose, we selected a suitable sample of nearby galaxy clusters known from the literature to harbor diffuse radio halos, relics, or extended radio galaxies. We included Abell 194 in the SMOG sample because it is one of the rare clusters hosting more than one luminous and extended radio galaxy close to the cluster center. In particular, it hosts the radio source 3C 40 (PKS 0123-016), which is in fact composed of two radio galaxies with distorted morphologies (3C 40A and 3C 40B). This galaxy cluster has been extensively analyzed in the literature with radio interferometers (e.g. O'Dea & Owen 1985, Jetha et al. 2006, Sakelliou et al. 2008, Bogdán et al. 2011). Here we present, for the first time, total intensity and polarization single-dish observations obtained with the SRT at 6600 MHz. The importance of mapping the radio galaxies in Abell 194 with a single dish is that interferometers suffer from the technical problem of not measuring the total power, the so-called missing zero spacing problem. Indeed, they filter out structures larger than the angular size corresponding to their shortest spacing, limiting the synthesis imaging of extended structures. Single dish telescopes are optimal for recovering all of the emission on large angular scales, especially at high frequencies (>1 GHz). Although single dishes typically have a low resolution, the radio galaxies at the center of Abell 194 are sufficiently extended to be well resolved by the SRT at 6600 MHz at the resolution of 2.9′. The SRT data at 6600 MHz are used in combination with archival Very Large Array observations at lower frequencies to derive the trend of the synchrotron spectra along 3C 40A and 3C 40B. In addition, linearly polarized emission is clearly detected for both sources, and the resulting polarization angle images are used to produce detailed RM images at different angular resolutions. 3C 40B and 3C 40A are quite extended both in angular and in linear size, and therefore represent ideal cases for studying the RM of the cluster along different lines-of-sight. In addition, the close distance of Abell 194 permits a detailed investigation of the cluster magnetic field structure. Following Murgia et al. (2004), we simulated Gaussian random magnetic field models, and we compared the observed data and the synthetic images with a Bayesian approach (Vogt & Enßlin 2005), in order to constrain the strength and structure of the magnetic field associated with the intra-cluster medium. Until recently, most of the work on cluster magnetic fields has been devoted to rich galaxy clusters. Little attention has been given in the literature to magnetic fields associated with poor galaxy clusters like Abell 194 and with galaxy groups (see e.g. Laing 2008, Guidetti et al. 2008, Guidetti et al. 2010). Magnetic fields in these systems deserve to be investigated in detail since, being more numerous, they are more representative than those of rich clusters.

§ RADIO OBSERVATIONS AND DATA REDUCTION

§.§ SRT observations

We observed with the SRT, for a total exposure time of about 14.4 hours, an area of 1 deg × 1 deg centered on the galaxy cluster Abell 194, using the C-Band receiver. Full-Stokes parameters were acquired with the SARDARA back-end (SArdinia Roach2-based Digital Architecture for Radio Astronomy; Melis et al., submitted), one of the back-ends available at the SRT (Melis et al. 2014). We used the correlator configuration with a bandwidth of 1500 MHz and 1024 frequency channels of approximately 1.46 MHz in width. We observed in the frequency range 6000-7200 MHz, at a central frequency of 6600 MHz. We performed several on-the-fly (OTF) mappings in the equatorial frame, alternating the right ascension (RA) and declination (DEC) directions. The telescope scanning speed was set to 6 arcmin/s and the scans were separated by 0.7′ to properly sample the SRT beam, whose full width at half maximum (FWHM) is 2.9′ in this frequency range. We recorded the data stream sampling at 33 spectra per second; therefore, individual samples were separated on the sky by 10.9″ along the scanning direction. A summary of the SRT observations is listed in Table <ref>.
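As a quick illustration (the numbers, but not the script, come from the text above), the sample spacing and beam sampling follow directly from the scanning speed, dump rate, and scan separation:

```python
# Illustrative check (not from the paper) of the on-the-fly sampling
# figures quoted in the text for the SRT C-Band observations.

SCAN_SPEED = 6.0 * 60.0   # scanning speed [arcsec/s] (6 arcmin/s)
DUMP_RATE = 33.0          # recorded spectra per second
SCAN_SEP = 0.7 * 60.0     # separation of adjacent scans [arcsec]
BEAM_FWHM = 2.9 * 60.0    # SRT beam FWHM at 6600 MHz [arcsec]

sample_sep = SCAN_SPEED / DUMP_RATE      # spacing along the scan direction
scans_per_beam = BEAM_FWHM / SCAN_SEP    # cross-scan sampling of the beam

print(f"sample separation: {sample_sep:.1f} arcsec")   # ~10.9 arcsec
print(f"scans per beam FWHM: {scans_per_beam:.1f}")    # ~4.1
```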
Data reduction was performed with the proprietary Single-dish Spectral-polarimetry Software (SCUBE; Murgia et al. 2016). Band-pass, flux density, and polarization calibration were done with at least four cross scans on source calibrators, performed at the beginning and at the end of each observing section. Band-pass and flux density calibration were performed by observing 3C 286 and 3C 138, assuming the flux density scale of Perley & Butler (2013). After a first bandpass and flux density calibration cycle, persistent radio frequency interference (RFI) was flagged using observations of a cold part of the sky. The flagged data were then used to repeat the bandpass and flux density calibration for a finer RFI flagging. The procedure was iterated a few times, eliminating all of the most obvious RFI. We applied the gain-elevation curve correction to account for the gain variation with elevation due to gravitational stress changes in the telescope structure. We performed the polarization calibration by correcting for the instrumental polarization and the absolute polarization angle. The on-axis instrumental polarization was determined through observations of the bright unpolarized source 3C 84. The leakage of Stokes I into Q and U is in general less than 2% across the band, with a rms scatter of 0.7-0.8%. We fixed the absolute position of the polarization angle using as reference the primary polarization calibrators 3C 286 and 3C 138. The difference between the observed and predicted position angle according to Perley & Butler (2013) was determined and corrected channel-by-channel. The calibrated position angle was within the expected values of 33° and -11.9° for 3C 286 and 3C 138, respectively, with a rms scatter of ±1°. In the following we describe the total intensity and the polarization imaging. For further details on the calibration and data handling of the SRT observations see Murgia et al. (2016).

§.§.§ Total intensity imaging

The imaging was performed in SCUBE by subtracting the baseline from the calibrated telescope scans and by projecting the data onto a regular three-dimensional grid. At 6600 MHz we used a spatial resolution of 0.7′/pixel (corresponding to the separation of the telescope scans), which is enough in our case to sample the beam FWHM with four pixels. As a first step, the baseline was subtracted by a linear fit involving only the 10% of data at the beginning and the end of each scan. The baseline removal was performed channel-by-channel for each scan. All frequency cubes obtained by gridding the scans along the two orthogonal axes (RA and DEC) were then stacked together to produce full-Stokes I, Q, U images of an area of 1 square degree centered on the galaxy cluster Abell 194. In the combination, the individual image cubes were averaged and de-stripped by mixing their Stationary Wavelet Transform (SWT) coefficients (see Murgia et al. 2016, for details). We then used the higher signal-to-noise (S/N) image cubes obtained from the SWT stacking as a prior model to refine the baseline fit. The baseline subtraction procedure was then repeated, including not just the 10% of data at the beginning and the end of each scan. In the refinement stage, the baseline was represented with a 2nd order polynomial, using the regions of the scans free of radio sources. A new SWT stacking was then performed, and the process was iterated a few more times until convergence was reached. Close to the cluster center, the dynamic range of the C-Band total intensity image was limited by the telescope beam pattern rather than by the image sensitivity.
We used in SCUBE a beam model pattern (Murgia et al. 2016) for a proper deconvolution of the sky image from the antenna pattern. The deconvolution algorithm iteratively finds the peak in the image obtained from the SWT stacking of all images, and subtracts a fixed gain fraction (typically 0.1) of this point source flux, convolved with the re-projected telescope 'dirty beam model', from the individual images. In the re-projection, the exact elevation and parallactic angle for each pixel in the unstacked images are used. The residual images were stacked again and the CLEAN continued until a threshold condition was reached. Given the low level of the beam side lobes compared to interferometric images, a shallow deconvolution was sufficient in our case, and we decided to stop the CLEAN at the first negative component encountered. As a final step, CLEAN components at the same position were merged, smoothed with a circular Gaussian beam with FWHM 2.9′, and then restored back into the residual image to obtain a CLEANed image.

§.§.§ Polarization imaging

The polarization imaging at C-Band of the Stokes parameters Q and U was performed following the same procedures described for the total intensity imaging: baseline subtraction, gridding, and SWT stacking. There were no critical dynamic range issues with the polarization image, and thus no deconvolution was required. However, since the contribution of the off-axis instrumental polarization can affect the quality of polarization data if bright sources are present in the image, we corrected for the off-axis instrumental polarization by deconvolving the Stokes Q and U beam patterns. In particular, SCUBE uses the CLEAN components derived from the deconvolution of the beam pattern from the total intensity image to subtract the spurious off-axis polarization from each individual Q and U scan before their stacking. The off-axis instrumental polarization level compared to the Stokes I peak is 0.3%. Finally, polarized intensity P=√(Q^2+U^2) (corrected for the positive bias), fractional polarization FPOL=P/I, and position angle of polarization Ψ=0.5 tan^-1(U/Q) images were then derived from the I, Q, and U images.
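The following minimal sketch illustrates how P, FPOL, and Ψ maps can be derived from Stokes images; the specific first-order bias correction shown (subtracting σ_QU² in quadrature) is a common convention and is assumed here, not necessarily the exact recipe implemented in SCUBE:

```python
# Minimal sketch (not the SCUBE implementation): deriving polarization
# maps from Stokes I, Q, U images. The first-order positive-bias
# correction P = sqrt(Q^2 + U^2 - sigma_QU^2) is a common convention
# and is assumed here.
import numpy as np

def polarization_maps(I, Q, U, sigma_qu):
    p2 = Q**2 + U**2 - sigma_qu**2                 # debiased P^2
    P = np.sqrt(np.clip(p2, 0.0, None))            # clip unphysical values
    FPOL = np.divide(P, I, out=np.full_like(P, np.nan), where=I > 0)
    PSI = 0.5 * np.arctan2(U, Q)                   # polarization angle [rad]
    return P, FPOL, PSI
```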
§.§ Archival VLA observations

We analyzed archival observations obtained with the VLA at different frequencies and configurations. The details of the observations are provided in Table <ref>. The data were reduced using the NRAO Astronomical Image Processing System (AIPS) package. The data in C-Band and in L-Band were calibrated in amplitude, phase, and polarization angle using the standard calibration procedure. The flux densities of the calibrators were calculated according to the flux density scale of Perley & Butler (2013). Phase calibration was derived from nearby sources, periodically observed over a wide range in parallactic angle. The radio sources 3C 286 or 3C 138 were used as reference for the absolute polarization angles. Phase calibrators were also used to separate the source polarization properties from the antenna polarizations. Imaging was performed following the standard procedure: Fourier transform, clean, and restore. A few cycles of self-calibration were applied when they proved useful to remove residual phase variations. Images of the Stokes parameters I, Q, and U were produced for each frequency and configuration separately, providing different angular resolutions. Since the C-Band data consist of three separate short pointings, in this case the final I, Q, and U images were obtained by mosaicing the three different pointings with the AIPS task FLATN. Finally, P, FPOL, and Ψ images were then derived from the I, Q, and U images. Data in P-Band were obtained in spectral line mode, dividing the bandwidth of 3.125 MHz into 31 channels. Data editing was performed to excise RFI channel by channel. We performed the amplitude and bandpass calibration with the source 3C 48. The flux density of the calibrator was calculated according to the low frequency coefficients of Scaife & Heald (2012). In the imaging, the data were averaged into five channels. The data were mapped using a wide-field imaging technique, which corrects for distortions in the image caused by the non-coplanarity of the VLA over a wide field of view. A set of small overlapping maps was used to cover the central area of about ∼2.4° in radius (Cornwell & Perley 1992). However, at this frequency confusion lobes of sources far from the center of the field are still present. Thus, we also obtained images of strong sources in an area of about ∼6° in radius, searched in the NRAO VLA Sky Survey (NVSS; Condon et al. 1998) catalog. All these 'facets' were included in CLEAN and used for several loops of phase self-calibration (Perley 1999). To improve the uv-coverage and sensitivity, we combined the data sets in B and C configurations.

§ MULTI-WAVELENGTH PROPERTIES OF ABELL 194

In the following we present the radio, optical, and X-ray properties of the galaxy cluster Abell 194.

§.§ Radio properties

In Fig. <ref>, we show the radio, optical, and X-ray emission of Abell 194. The field of view of the left panel of Fig. <ref> is ≃1.3×1.3 Mpc^2. In this panel, the contours of the CLEANed radio image obtained with the SRT at 6600 MHz are overlaid on the ROSAT PSPC X-ray image in the 0.4-2 keV band (see Sect. <ref>). The SRT image was obtained by averaging all the frequency channels from 6000 MHz to 7200 MHz. We reached a final noise level of 1 mJy/beam and an angular resolution of 2.9′ FWHM. The radio galaxy 3C 40B, close to the cluster X-ray center, extends for about 20′. The peak brightness of 3C 40B (≃600 mJy/beam) is located in the southern lobe. The narrow angle-tail radio galaxy 3C 40A (peak brightness ≃150 mJy/beam) is only slightly resolved at the SRT resolution and is not clearly separated from 3C 40B. The details of the morphology of the two radio galaxies can be appreciated in the right panel of Fig. <ref>, where we show a field of view of ≃0.5×0.5 Mpc^2. In this panel, the contours of the radio image obtained with the VLA at 1443 MHz are overlaid on the optical emission of the cluster. We retrieved the optical image in the g^Mega band from the CADC Megapipe[http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/megapipe/] archive (Gwyn 2008). The VLA radio image was obtained from the L-Band data set in C configuration. It has a sensitivity of 0.34 mJy/beam and an angular resolution of 19″. The distorted morphology of the two radio galaxies 3C 40B and 3C 40A is well visible. Both radio galaxies show an optical counterpart, and their host galaxies are separated by 4.6′ (≃100 kpc). The core of the extended source 3C 40B is associated with NGC 547, which is known to form a dumbbell system with the galaxy NGC 545 (e.g. Fasano et al. 1996). 3C 40A is a narrow angle-tail radio galaxy associated with the galaxy NGC 541 (O'Dea & Owen 1985). The jet emanating from 3C 40A is believed to be responsible for triggering star formation in Minkowski's object (e.g. van Breugel et al. 1985, Brodie et al. 1985, Croft et al. 2006), a star-forming peculiar galaxy near NGC 541.
In addition to 3C 40A and 3C 40B, some other faint radio sources are detected in the Abell 194 field. In the left panel of Fig. <ref>, SRT sources with an NVSS counterpart are labeled with the letters A to S. The sources labeled E, J, and Q are actually blends of multiple NVSS sources. There are also a few sources visible in the NVSS but not detected at the sensitivity level of the SRT image. These are likely steep spectral index radio sources. The SRT contours show an elongation of the northern lobe of 3C 40B toward the west. This elongation is likely due to the point source labeled S, which is detected both in the 1443 MHz VLA image at 19″ resolution in the right panel of Fig. <ref> and in the NVSS. The 1443 MHz image also shows the presence of another point source, located east of the northern lobe of 3C 40B. This point source is blended with 3C 40B both at the SRT and at the NVSS resolution. Given that 3C 40A and 3C 40B are not clearly separated at the SRT resolution, we calculated the flux density for the two sources together, by integrating the total intensity image at 6600 MHz down to the 3σ_I isophote. The result is ≃1.72±0.05 Jy. This flux also contains the contribution of the two discrete sources located in the northern lobe of 3C 40B mentioned above. In Table <ref>, we list the basic properties of the faint radio sources detected in the field of the SRT image. For the unresolved sources, we calculated the flux density by means of a two-dimensional Gaussian fit. Along with the SRT coordinates and the SRT flux densities, we also report their NVSS name, the NVSS flux density at 1400 MHz, and the global spectral indices (S_ν∝ν^-α) between 1400 and 6600 MHz.
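For reference, the spectral index convention adopted above translates into the following one-line computation (the example flux densities are hypothetical, not values from Table <ref>):

```python
# Illustrative only: spectral index alpha between two frequencies for
# the convention S_nu ~ nu^(-alpha). The flux densities below are
# hypothetical and do not correspond to any source in Table <ref>.
import math

def spectral_index(s1, nu1, s2, nu2):
    return math.log(s1 / s2) / math.log(nu2 / nu1)

print(f"alpha = {spectral_index(10.0, 1400.0, 4.0, 6600.0):.2f}")  # ~0.59
```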
In Fig. <ref>, we show the total intensity SRT contour levels overlaid on the linearly-polarized intensity image P at 6600 MHz. The polarized intensity was corrected for both the on-axis and the off-axis instrumental polarization. The noise level, after the correction for the polarization bias, is σ_P=0.5 mJy/beam. Polarization is detected for both 3C 40B and 3C 40A, with a global fractional polarization of ≃9%. The peak polarized intensity is ≃19 mJy/beam, located in the southern lobe of 3C 40B, and does not match the total intensity peak. The origin of this mismatch could be attributed to different effects. A possibility is that the magnetic field inside the radio source's lobe is not completely ordered. Indeed, along the line-of-sight at the position of the peak intensity, we may have by chance two (or more) magnetic field structures not perfectly aligned in the plane of the sky, so that the polarized intensity is reduced while the total intensity is unaffected. This depolarization is a pure geometrical effect, related to the intrinsic ordering of the source's magnetic field, and may be present even if there is no Faraday rotation inside and/or outside the radio emitting plasma. Another possibility is that Faraday rotation is occurring in an external screen and the peak intensity is located, in projection, in a region of high RM gradient (see the RM image in Fig. <ref>). In this case, the beam depolarization is expected to reduce the polarized signal but not the total intensity. Finally, there could also be internal Faraday rotation, but the presence of an X-ray cavity (see Sect. <ref>) suggests that the southern lobe is devoid of thermal gas and, furthermore, the observed trend of the polarization angles is consistent with the λ^2-law, which points in favour of an external Faraday screen (see Sect. <ref>). In 3C 40B the fractional polarization increases along the source, from 1-3% in the central brighter part up to 15-18% in the low surface brightness regions associated with the northern lobe. The other faint sources detected in the Abell 194 field are not significantly polarized in the SRT image.

§.§ Optical properties

One of the intriguing properties of Abell 194 is that it appears as a 'linear cluster', as its galaxy distribution and X-ray emission are both linearly elongated along the NE-SW direction (Rood & Sastry 1971, Struble & Rood 1987; Chapman et al. 1988; Nikogossyan et al. 1999; Jones & Forman 1999). Previous studies of Abell 194, based on redshift data, have found a low value of the velocity dispersion of galaxies within the cluster (∼350-400 km s^-1; Chapman et al. 1988, Girardi et al. 1998) and a low value of the mass. Two independent analyses agree in determining a mass of M_200∼1×10^14 M_⊙ within the radius[The radius R_δ is the radius of a sphere with mass overdensity δ times the critical density at the redshift of the galaxy system.] R_200∼1 Mpc (Girardi et al. 1998, Rines et al. 2003). Early analyses of redshift-data samples of ∼100-150 galaxies within 3-6 Mpc, mostly derived by Chapman et al. (1988), confirmed that Abell 194 is formed of a main system elongated along the NE-SW direction, and detected minor substructure in both the central and external cluster regions (Girardi et al. 1997, Barton et al. 1998, Nikogossyan et al. 1999). To obtain additional insights into the cluster structure, we considered more recent redshift data. Rines et al. (2003) compiled redshifts from the literature as collected by the NASA/IPAC Extragalactic Database (NED), including the first Sloan Digital Sky Survey (SDSS) data. Here, we also added further data extracted from the latest SDSS release (DR12). In particular, to analyze the 2R_200 cluster region, we considered galaxies within 2 Mpc (∼93′) from the cluster center (here taken as the galaxy 3C 40B/NGC 547). Our galaxy sample consists of 1893 galaxies. After the application of the P+G membership selection (Fadda et al. 1996; Girardi et al. 2015) we obtained a sample of 143 fiducial members. The P+G membership selection is a two-step method. First, we used the 1D adaptive-kernel method DEDICA (Pisani 1993) to detect the significant cluster peak in the velocity distribution. All the galaxies assigned to the cluster peak are analyzed in the second step, which uses the combination of position and velocity information: the 'shifting gapper' method by Fadda et al. (1996). This procedure rejects galaxies that are too far in velocity from the main body of galaxies, within a fixed bin that shifts along the distance from the cluster center. The procedure is iterated until the number of cluster members converges to a stable value. The comparison with the corresponding galaxy sample of Chapman et al. (1988), formed of 84 galaxies (67 members), shows that we have doubled the data sample, and stresses the difficulty of improving the Abell 194 sample when going down to fainter luminosities. Unlike the Chapman et al. (1988) sample, the sampling and completeness of both our sample and that of Rines et al. (2003) are not uniform; in particular, since the center of Abell 194 is at the border of an SDSS strip, the southern cluster regions are undersampled with respect to the northern ones. As a consequence, we limited our analysis of substructure to the velocity distribution and to the combined position-velocity data.
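A highly simplified sketch of a shifting-gapper-style rejection step, as described above, is given below for illustration; the equal-occupancy radial bins and the fixed velocity gap assumed here are placeholders, and the actual procedure of Fadda et al. (1996) differs in its details:

```python
# Highly simplified sketch of a "shifting gapper"-style rejection.
# The fixed velocity gap and the equal-occupancy radial bins assumed
# here are placeholders; the actual procedure of Fadda et al. (1996)
# differs in its details.
import numpy as np

def shifting_gapper(radius, vel, nbin=10, gap=1000.0):
    """radius in Mpc, vel in km/s; returns a boolean membership mask."""
    keep = np.ones(radius.size, dtype=bool)
    changed = True
    while changed and keep.sum() > nbin:
        changed = False
        edges = np.quantile(radius[keep], np.linspace(0.0, 1.0, nbin + 1))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = keep & (radius >= lo) & (radius <= hi)
            if sel.sum() < 3:
                continue
            v_med = np.median(vel[sel])
            out = sel & (np.abs(vel - v_med) > gap)
            if out.any():
                keep[out] = False
                changed = True
    return keep
```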
By applying the biweight estimator to the 143 cluster members (Beers et al. 1990, ROSTAT software), we computed a mean cluster line-of-sight (LOS) velocity <V>=<cz>=(5375±143) km s^-1, corresponding to a mean cluster redshift <z>=0.017929±0.0004, and a LOS velocity dispersion σ_V=425_-30^+34 km s^-1, in good agreement with the estimates by Rines et al. (2003, see their Table 2). There is no evidence of non-Gaussianity in the galaxy velocity distribution according to robust shape estimators: the asymmetry index, the tail index, and the scaled tail index (Bird & Beers 1993). We also verified that the three luminous galaxies in the cluster core (NGC 547, NGC 545, and NGC 541) show no evidence of peculiar velocity according to the indicator test by Gebhardt & Beers (1991). We applied the Δ-statistics devised by Dressler & Shectman (1988, hereafter DS-test), which is a powerful test for 3D substructure. The significance is based on 1000 Monte Carlo simulated clusters, obtained by shuffling the galaxy velocities with respect to their positions. We detected marginal evidence of substructure (at the 91.3% confidence level). Fig. <ref> shows the comparison of the DS bubble-plot, here obtained considering only the local velocity kinematical DS indicator, with the subgroups detected by Nikogossyan et al. (1999). The most important subgroup detected by Nikogossyan et al. (1999), their No. 3, traces the NE-SW elongated structure. Inside this, the regions characterized by low or high local velocity correspond to their No. 1 and No. 5 subgroups. A SE region characterized by a high local velocity corresponds to their No. 4 subgroup, the only one outside the main NE-SW cluster structure. The above agreement is particularly meaningful when considering that the results of Nikogossyan et al. (1999) and ours are based on quite different samples and analyses. In particular, their hierarchical-tree analysis weights galaxies with their luminosity, while no weight is applied in our DS-test and plot. The galaxy with a very large blue/thin circle has a difference from the mean cz of 488 km/s and lies at 0.42 Mpc from the cluster center, that is, well inside the caustic lines reported by Rines et al. (2003, see their Fig. 2), and thus it is definitely a cluster member. However, this galaxy lies at the center of a region inhabited by several galaxies at low velocity, resulting in the large size of the circle. In fact, as mentioned in the caption, the circle size refers to the local mean velocity as computed with respect to the galaxy and its ten neighbors. We also applied the 3D optimized adaptive-kernel method of Pisani (1993, 1996; hereafter 3D-DEDICA, see also Girardi et al. 2016). The method detects two important density peaks, significant at the >99.99% confidence level, of 31 and 15 galaxies. Minor subgroups have very low density and/or richness and are not discussed. Fig. <ref> shows that both DEDICA subgroups are strongly elongated and trace the NE-SW direction, but have different velocities. The main one has a velocity peak of 5463 km s^-1, close to the mean cluster velocity, and contains NGC 547, NGC 545, and NGC 541. The secondary one has a lower velocity (4897 km s^-1).
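For illustration, a minimal implementation of the DS-test described above might look as follows; the choice of ten neighbors matches the local indicator mentioned in the text, while the rest of the code is a generic sketch rather than the authors' pipeline:

```python
# Minimal sketch of the Dressler & Shectman (1988) Delta-statistic with
# Monte Carlo calibration by velocity shuffling (illustrative only; the
# ten nearest neighbors match the local indicator mentioned in the text).
import numpy as np

def ds_delta(x, y, v, nnn=10):
    v_mean, sig = v.mean(), v.std(ddof=1)
    delta = np.empty(v.size)
    for i in range(v.size):
        d2 = (x - x[i])**2 + (y - y[i])**2
        grp = np.argsort(d2)[:nnn + 1]     # the galaxy plus nnn neighbors
        vloc, sloc = v[grp].mean(), v[grp].std(ddof=1)
        delta[i] = np.sqrt((nnn + 1) / sig**2 *
                           ((vloc - v_mean)**2 + (sloc - sig)**2))
    return delta.sum()

def ds_significance(x, y, v, nshuffle=1000, seed=0):
    rng = np.random.default_rng(seed)
    obs = ds_delta(x, y, v)
    fake = [ds_delta(x, y, rng.permutation(v)) for _ in range(nshuffle)]
    return float(np.mean(np.asarray(fake) >= obs))  # fraction >= observed
```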
The picture resulting from the new and previous optical results agrees in that Abell 194 does not show traces of a major and recent cluster merger (e.g., as in the case of a bimodal head-on merger), but rather supports a scenario of accretion of small groups, mainly along the NE-SW axis.

§.§ X-ray properties

The cluster Abell 194 has been observed in X-rays with the ASCA, ROSAT, Chandra, and XMM-Newton satellites. Sakelliou et al. (2008) investigated the cluster relying on XMM-Newton and radio observations. The X-ray data do not show any signs of the features expected from recent cluster merger activity. They concluded that the central region of Abell 194 is relatively quiescent and is not suffering a major merger event, in agreement with the optical analysis described in Sect. <ref>. Bogdán et al. (2011) analyzed X-ray observations with the Chandra and ROSAT satellites. They mapped the dynamics of the galaxy cluster and also detected a large X-ray cavity formed by the southern radio lobe arising from 3C 40B. Therefore, this target is particularly interesting for Faraday rotation studies, because the presence of an X-ray cavity indicates that the rotation of the polarization plane is likely to occur entirely in the intra-cluster medium, since comparatively little thermal gas should be present inside the radio-emitting plasma. The temperature profile and the cooling time of 21.6±4.41 Gyr determined by Lovisari et al. (2015), and the temperature map by Bogdán et al. (2011), indicate that Abell 194 does not harbor a cool core. We analyzed the ROSAT PSPC pointed observation of Abell 194, following the same procedure of Eckert et al. (2012), to which we refer for a more detailed description. Here we briefly mention the main steps of the reduction and of the analysis. The ROSAT Extended Source Analysis Software (Snowden et al. 1994) was used for the data reduction. The contribution of the various background components, such as scattered solar X-rays, long-term enhancements, and particle background, was taken into account; these components were combined to get a map of all the non-cosmic background to be subtracted. We then extracted the image in the R37 energy band (0.4-2 keV), corrected for the vignetting effect with the corresponding exposure map. Point sources were detected and excluded up to a constant flux threshold, in order to ensure a constant resolved fraction of the Cosmic X-ray Background (CXB) over the field of view. The surface brightness profile was computed in 30 arcsec bins centered on the centroid of the image (RA=01 25 54; DEC=-01 21 05), out to 50 arcmin. The surface brightness profile was fitted with a single β-model (Cavaliere & Fusco-Femiano 1976) plus a constant (to take into account the sky background, composed of Galactic foregrounds and residual CXB), with the software PROFFIT v1.3 (Eckert et al. 2011). The best fitting model has a core radius r_c=11.5±1.2 arcmin (248.4±26 kpc) and β=0.67±0.06 (1σ) for a χ^2/dof=1.77. In Fig. <ref> we show the surface brightness profile of the X-ray emission of Abell 194, with the best fit β-model shown in blue. For the determination of the spectral parameters representative of the core properties, we analyzed the Chandra ACIS-S observation of Abell 194 (ObsID: 7823) with CIAO 4.7 and CALDB 4.6.8. All data were reprocessed from the level=1 event file, following the standard reduction threads and flare cleaning.
We used blank-sky observations to subtract the background components; to account for variations in the normalization of the particle background, we rescaled the blank-sky background template by the ratio of the count rates in the 10-12 keV energy band. We extracted a spectrum from a circular region of radius 1 arcmin around the centroid position. We fitted the data with an APEC (Smith et al. 2001) model with the ATOMDB code v2.0.2 in XSPEC v12.8.2 (Arnaud 1996). We fixed the Galactic column density at N_H=4.11×10^20 cm^-2 (Kalberla et al. 2005); the abundance is quoted in the solar units of Anders & Grevesse (1989), and we used the Cash statistic. The best fit model gives a temperature of kT=2.4±0.3 keV, an abundance of 0.27^+0.15_-0.11 Z_⊙, and a XSPEC normalization of (1.0±0.1)×10^-4 cm^-5, for a cstat/dof = 82/83. Using the best fit β-model and the spectral parameters obtained with Chandra, the central electron density can be expressed by a simple analytical formula (Eq. 2 of Ettori et al. 2004). In order to assess the error, we repeated the measurement after 1000 random realizations of the normalization and of the β-model parameters, drawn from Gaussian distributions with mean and standard deviation set by the best-fit results. We obtain a value for the central electron density of n_0=(6.9±0.6)×10^-4 cm^-3. The distribution of the thermal electron density with distance r from the cluster X-ray center was thus modeled with:

n_e(r) = n_0 (1 + r^2/r_c^2)^(-3β/2).
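With the best-fit values quoted above, this profile can be evaluated directly; the short script below is illustrative only:

```python
# Illustrative evaluation (not from the paper) of the beta-model
# electron density profile with the best-fit values quoted in the text.

N0 = 6.9e-4    # central electron density [cm^-3]
RC = 248.4     # core radius [kpc] (11.5 arcmin)
BETA = 0.67

def n_e(r_kpc):
    return N0 * (1.0 + (r_kpc / RC)**2) ** (-1.5 * BETA)

for r in (0.0, 100.0, 248.4, 500.0):
    print(f"n_e({r:6.1f} kpc) = {n_e(r):.2e} cm^-3")
# at one core radius n_e is ~0.5 n_0, i.e. ~3.4e-4 cm^-3
```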
§ SPECTRAL AGING ANALYSIS

We investigated 3C 40A and 3C 40B at different frequencies in order to study their spectral index behavior in detail. We analyzed the images obtained from the VLA Low-Frequency Sky Survey redux (VLSSr at 74 MHz; Lane et al. 2014), the VLA archive data in P-Band (330 MHz), the VLA archive data in L-Band (1443 and 1630 MHz), and the SRT (6600 MHz) data. We smoothed all the images to the same resolution as that of the SRT image (see top panels of Fig. <ref>). The relevant parameters of the images smoothed to a resolution of 2.9′ are reported in Table <ref>. We calculated the flux densities of 3C 40A and 3C 40B together. The total spectrum of the sources is shown in the bottom panel of Fig. <ref>. Although the radio sources in Abell 194 have a large angular extension, the interferometric VLA data at 1443 and 1630 MHz do not seem to suffer from the missing flux problem. To calculate the error associated with the flux densities, we added in quadrature the statistical noise and an additional uncertainty of 10%, to take into account the different uv-coverages and a possible bias due to the different flux density scales of the data sets. By using the software SYNAGE (Murgia et al. 1999), we fitted the integrated spectrum with the continuous injection model (CI; Pacholczyk 1970). The CI model is characterized by three free parameters: the injection spectral index (α_inj), the break frequency (ν_b), and the flux normalization. In the context of the CI model, it is assumed that the spectral break is due to the energy losses of the relativistic electrons. For high-energy electrons, the energy losses are primarily due to the synchrotron radiation itself and to the inverse Compton scattering of the Cosmic Microwave Background (CMB) photons. During the active phase, the evolution of the integrated spectrum is determined by the shift with time of ν_b to lower and lower frequencies. Indeed, the spectral break can be considered a clock indicating the time elapsed since the injection of the first electron population. Below and above ν_b, the spectral indices are respectively α_inj and α_inj+0.5. For 3C 40A and 3C 40B, the best fit of the CI model to the observed radio spectrum yields a break frequency ν_b≃700±280 MHz. To limit the number of free parameters, we fixed α_inj=0.5, which is the value of the spectral index calculated in the jets of 3C 40B, at a relatively high resolution (≃20″), between the L-Band (1443 MHz) and the P-Band (330 MHz) images. By using the images in the top panels of Fig. <ref>, we also studied the pixel-by-pixel variation of the synchrotron spectrum along 3C 40A and 3C 40B. 3C 40B is extended enough to investigate the spectral trend along its length. A few plots showing the radio surface brightness as a function of the observing frequency at different locations in 3C 40B are shown in the bottom panels of Fig. <ref>. The plots show some pixels, located in different parts of 3C 40B, with representative spectral trends. The morphology of the radio galaxy and the trend of the spectral curvature at different locations indicate that 3C 40B is currently an active source, whose synchrotron emission is dominated by the electron populations with GeV energies injected by the radio jets and accumulated during their entire lives (Murgia et al. 2011). We then fitted the observed spectra with the JP model (Jaffe & Perola 1973), which, like the CI model, has three free parameters: α_inj, ν_b, and the flux normalization. The JP model, however, describes the spectral shape of an isolated electron population, with an isotropic distribution of the pitch angles, injected at a specific instant in time with an initial power-law energy spectrum of index p=2α_inj+1. According to synchrotron theory (e.g. Blumenthal & Gould 1970, Rybicki & Lightman 1979), it is possible to relate the break frequency to the time elapsed since the start of the injection:

t_syn = 1590 B^0.5 / {(B^2+B_IC^2) [(1+z)ν_b]^0.5} Myr,

where the source magnetic field B and the inverse Compton equivalent magnetic field associated with the CMB, B_IC=3.25(1+z)^2, are expressed in μG, and ν_b is expressed in GHz. The resulting map of the break frequency derived from the fit of the JP model is shown in the top left panel of Fig. <ref>. The break frequency is computed in the regions where the brightness of the VLSSr image smoothed to a resolution of 2.9′ is above 5σ_I (>1.3 Jy/beam). The measured break frequency decreases systematically along the lobes of 3C 40B, in agreement with a scenario in which the oldest particles are those at larger distances from the AGN. The minimum break frequency measured in the faintest part of the lobes of 3C 40B is ν_b≃850±120 MHz, in agreement, within the errors, with the spectral break measured in the integrated spectrum of 3C 40A and 3C 40B. We derived the radiative age from Eq. <ref>, using the minimum energy magnetic field strength. The minimum energy magnetic field was calculated assuming, for the electron energy spectrum, a power law with index p=2α_inj+1=2 and a low-energy cut-off at a Lorentz factor γ_low=100. In addition, we assumed that non-radiating relativistic ions have the same energy density as the relativistic electrons. We used the luminosity at 330 MHz, since radiative losses are less important at low frequencies. Modeling the lobes of 3C 40B as two cylinders in the plane of the sky, the resulting minimum energy magnetic field is ≃1.8 μG in the northern lobe and ≃1.7 μG in the southern lobe. Assuming for the source's magnetic field ⟨B_min⟩=1.75 μG and the lowest measured break frequency of ν_b=850±120 MHz, the radiative age of 3C 40B is found to be t_syn=157±11 Myr.
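As a numerical cross-check (illustrative, not part of the original analysis), inserting ⟨B_min⟩=1.75 μG, ν_b=0.85 GHz, and z=0.018 into the expression above reproduces the quoted radiative age:

```python
# Numerical cross-check (illustrative) of the radiative age from
# Eq. (<ref>): B and B_IC in microgauss, break frequency in GHz.

Z = 0.018
B = 1.75                       # minimum energy field [uG]
B_IC = 3.25 * (1 + Z)**2       # CMB-equivalent field [uG], ~3.37 uG
NU_B = 0.85                    # break frequency [GHz]

t_syn = 1590.0 * B**0.5 / ((B**2 + B_IC**2) * ((1 + Z) * NU_B)**0.5)
print(f"t_syn ~ {t_syn:.0f} Myr")   # ~157 Myr, as quoted in the text
```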
Sakelliou et al. (2008) applied a similar approach and found a spectral age perfectly consistent with our result. Furthermore, we note that the radiative age of 3C 40B is consistent with that found in the literature for other sources of similar size and radio power (see Fig. 6 of Parma et al. 1999). In the top right panel of Fig. <ref>, we show the corresponding radiative age map of 3C 40B, obtained by applying in Eq. <ref> the break frequency image and the minimum energy magnetic field. In the computation above we assumed an electron-to-proton ratio k=1, in agreement with previous works that we use for comparison. We are aware that this ratio could be larger and, in particular, that there could be local changes due to the significant radiative losses of electrons, with respect to protons, in the oldest source regions. We can estimate the impact of a different value of k on the radiative age. By keeping the assumptions adopted above and considering k=100, the minimum energy magnetic field is a factor of three higher, and the radiative age of 3C 40B is found to be t_syn=100±7 Myr, which is slightly lower than the value derived by using k=1; however, the main results of the paper are not affected. For extended sources like 3C 40B, high-frequency interferometric observations suffer from the so-called missing flux problem. On the other hand, the total intensity SRT image at 6600 MHz permitted us to investigate the curvature of the high-frequency spectrum across the source 3C 40B, which is essential to obtain a reliable aging analysis. The above analysis is important not only to derive the radiative age of the radio galaxy but also for the interpretation of the cluster magnetic field. Indeed, the break frequency image and the cluster X-ray emission model (see Sect. <ref>) are used in combination with the RM data to constrain the intra-cluster magnetic field power spectrum.

§ FARADAY ROTATION ANALYSIS

In this section we investigate the polarization properties of the radio galaxies in Abell 194, by using data sets at different resolutions. First, we produced polarization images at 2.9′ resolution, useful to derive the variation of the magnetic field strength in the cluster volume. Second, we produced polarization images at higher resolution (19″ and 60″), useful to determine the correlation length of the magnetic field fluctuations.

§.§ Polarization data at 2.9′ resolution

In the top panels of Fig. <ref> we compare the polarization image observed with the SRT at 6600 MHz (right) with that obtained with the VLA at 1443 MHz (left), smoothed with a FWHM Gaussian of 2.9′. The fractional polarization at 1443 MHz is similar to that at 6600 MHz, being ≃3-5% in 3C 40A and in the brightest part of the radio lobes of 3C 40B. The fractional polarization increases in the low surface brightness regions of the northern lobe of 3C 40B, where it is ≃15%. The oldest low-brightness regions are visible both in total intensity and in polarization at 1443 MHz, but not at 6600 MHz, due to the sharp high-frequency cut-off of the synchrotron spectrum. These structures, located east of 3C 40B and in the southern lobe of 3C 40B, show a fractional polarization as high as ≃20-50% at 1443 MHz (see Sect. <ref>). By comparing the orientation of the polarization vectors at the different frequencies, it is possible to note a rotation of the position angle.
We interpret this as due to the Faraday rotation effect. The observed polarization angle Ψ of a synchrotron radio source is modified from its intrinsic value Ψ_0 by the presence of a magneto-ionic Faraday screen between the source of polarized emission and the observer. In particular, the plane of polarization rotates according to: Ψ=Ψ_0+RM×λ^2. The RM is related to the thermal electron density of the plasma, n_e, and to the magnetic field along the line-of-sight, B_∥, of the magneto-ionic medium by the equation: RM = 812∫_0^L n_e B_∥ dl rad/m^2, where B_∥ is measured in μG, n_e in cm^-3, and L is the depth of the screen in kpc. Following Eq. <ref>, we obtained the RM image of Abell 194 by performing a pixel-by-pixel fit of the polarization angle images as a function of λ^2 using the FARADAY software (Murgia et al. 2004). Given as input multi-frequency images of Q and U, the software produces the RM, the intrinsic polarization angle Ψ_0, and the corresponding error images. To improve the RM image, the algorithm can be iterated in several self-calibration cycles. In the first cycle only the pixels with the highest signal-to-noise ratio are fitted. In the next cycles the algorithm uses the RM information in these high signal-to-noise pixels to solve the nπ-ambiguity in adjacent pixels of lower signal-to-noise, in a way similar to the PACERMAN algorithm by Dolag et al. (2005). In the middle left panel of Fig. <ref>, we show the resulting RM image at an angular resolution of 2.9′. The values of RM range from -60 to 60 rad/m^2, with the higher absolute values in the central region of the cluster, in between the cores of 3C 40B and 3C 40A. We obtained this image by performing the λ^2 fit of the polarization angle images at 1443, 1465, 1515, 1630, and 6600 MHz. The total intensity contours at 1443 MHz are overlaid on the RM image. The RM image was derived only in those pixels in which the following conditions were satisfied: the total intensity signal at 1443 MHz was above 3σ_I, the resulting RM fit error was lower than 5 rad/m^2, and in at least four frequencies the error in the polarization angle was lower than 10°. In the bottom panels of Fig. <ref>, we show a few plots of the position angle Ψ as a function of λ^2 at different source locations. The plots show some pixels, located in different parts of the sources, with representative RM. Thanks to the SRT image, we can effectively verify that the data are generally quite well represented by a linear λ^2 relation over a broad λ^2 range, which supports the external Faraday screen hypothesis. We can characterize the RM distribution in terms of a mean ⟨RM⟩ and a root mean square σ_RM. In the middle right panel of Fig. <ref>, we show the histogram of the RM distribution. The mean and root mean square of the RM distribution are ⟨RM⟩=15.2 rad/m^2 and σ_RM=14.4 rad/m^2, respectively. The mean fit error of the RM error image is ≃1.6 rad/m^2. The dispersion σ_RM of the RM distribution is higher than the mean RM error. Therefore, the RM fluctuations observed in the image are significant and can give us information on the cluster magnetic field power spectrum. For a partially resolved foreground with Faraday rotation in the short-wavelength limit, the depolarization of the signal due to the unresolved RM structures in the external screen can be approximated, to first order, with the Burn-law (Burn 1966, see also Laing et al.
2008 for a more recent derivation): FPOL=FPOL_0 exp(-aλ^4), where FPOL_0 is the fractional polarization at λ=0 and a=2|∇RM|^2σ^2 is related to the depolarization due to the RM gradient within the observing beam, considered to be a circular Gaussian with FWHM=2√(2ln2)σ. The effect of depolarization between two wavelengths λ_1 and λ_2 is usually expressed in terms of the ratio of the degrees of polarization, DP=FPOL(λ_2)/FPOL(λ_1). The Burn-law can provide depolarization information by considering the data at all frequencies simultaneously. However, we note that the polarization behaviour follows the Burn-law at short wavelengths and goes over to a simple generic power-law form at long wavelengths (Tribble 1991). Following Laing et al. (2008), in the top left panel of Fig. <ref> we show the image of the Burn-law exponent a derived from a fit to the data (a minimal sketch of the per-pixel fits is given below). We obtained this image by performing the λ^4 fit of the fractional polarization FPOL images at 1443, 1465, 1515, 1630, and 6600 MHz, by using at least four of these frequencies in each pixel. The values of a range from -500 to 1300 rad^2/m^4. The histogram of the Burn-law a distribution is shown in the top right panel of Fig. <ref>. A large number of pixels have a≃0, indicating no depolarization. Significant depolarization (a>0) is found close to 3C 40A. Finally, a minority of pixels have a<0. This may be due to the noise or to the effect of Faraday rotation on a non-uniform distribution of intrinsic polarization (Laing et al. 2008). We note that, indeed, re-polarisation of the signal is not unphysical and is perfectly possible (e.g. Farnes et al. 2014, Lamee et al. 2016). Example plots of a as a function of λ^4 are shown in the bottom panels of Fig. <ref>. The plots show the same pixels as Fig. <ref>. In these plots, the Burn-law gives adequate fits in the frequency range and at the resolution of these observations. We used the images presented in Fig. <ref> and in Fig. <ref> to investigate the variation of the magnetic field strength in the cluster volume. In addition, to measure the correlation length of the magnetic field fluctuations, we used the archival VLA observations to improve the RM resolution in the brightest parts of the sources.
§.§ Polarization data at 19″ and 60″ resolution
In the left panel of Fig. <ref> we show the resulting RM image at an angular resolution of 19″. In the middle panel of Fig. <ref>, we show the resulting Burn-law image at the same angular resolution. We derived these images by using VLA data at 1443, 1630, 4535, and 4885 MHz and by adopting the same strategy described in Sect. <ref> for the lower resolution images. The noise levels of the images at a resolution of 19″ are given in Table <ref>. Qualitatively, the brightest parts of the 3C 40A and 3C 40B sources show RM and Burn-law images in agreement with what we find at lower resolution. However, at this resolution the two sources are well separated and the RM structures can be investigated in finer detail. In particular, the RM distribution seen over the two radio galaxies is patchy, indicating a cluster magnetic field with turbulent structures on scales of a few kpc. In the left and middle panels of Fig. <ref> we show the resulting RM and Burn-law images at an angular resolution of 60″. We derived these images by using VLA data at 1443, 1465, 1515, and 1630 MHz and by adopting the same strategy described in Sect. <ref>. The noise levels of the images at a resolution of 60″ are given in Table <ref>.
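For a single well-behaved pixel, the fits behind the RM and Burn-law images reduce to two linear least-squares problems: Ψ against λ² and ln FPOL against λ⁴. A minimal sketch (Python; the iterative nπ-ambiguity handling performed by FARADAY is omitted here):

```python
import numpy as np

c = 2.998e8                                   # speed of light (m/s)
freqs = np.array([1.443e9, 1.465e9, 1.515e9, 1.630e9, 6.600e9])  # Hz
lam2 = (c / freqs) ** 2                       # lambda^2 (m^2)

def fit_rm(psi_rad):
    """Linear fit Psi = Psi_0 + RM * lambda^2 for one pixel (no n*pi ambiguity)."""
    rm, psi0 = np.polyfit(lam2, psi_rad, 1)
    return rm, psi0                           # rad/m^2, rad

def fit_burn(fpol):
    """Burn law FPOL = FPOL_0 exp(-a lambda^4), fitted linearly in log space."""
    slope, ln_f0 = np.polyfit(lam2 ** 2, np.log(fpol), 1)
    return -slope, np.exp(ln_f0)              # a (rad^2/m^4), FPOL_0
```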
The RM image at 60″ is consistent with that at 2.9′, although it was obtained in a narrower λ^2 range. We note that low polarization structures may potentially cause some artefacts in the images. However, these effects are taken into account in the magnetic field modeling, since the simulations and the data are filtered in the same way. The data at 19″ and 60″ are used to derive realistic models for the intrinsic properties of the radio galaxies. These models are used to produce synthetic polarization images at arbitrary frequencies and resolutions in our magnetic field modeling (see Sect. <ref>). In the right panel of Fig. <ref> we show the intrinsic polarization image at 19″ of 3C 40A and 3C 40B. The intrinsic polarization (Ψ_0 and FPOL_0) was obtained by extrapolating to λ=0 both the RM (Eq. <ref>) and the Burn-law (Eq. <ref>). The image reveals regions of high fractional polarization. The panel shows some representative values of fractional polarization with the corresponding uncertainty. We note that the southern lobe of 3C40B shows to the south-east an outer rim of high fractional polarization (≃50%). These polarisation rims are commonly observed at the boundary of the lobes of both low- (e.g. Capetti et al. 1993) and high- (e.g. Perley & Carilli 1996) luminosity radio galaxies. The rim-like morphology of high fractional polarisation is an expected feature, usually interpreted as the compression of the lobe magnetic field along the contact discontinuity. The normal field components near the edge are suppressed and only the tangential components survive, giving rise to very high fractional polarisation levels. This image provides a description of the intrinsic polarization properties in the brightest part of the sources. However, for the modeling at large distances from the cluster center we need information also in the fainter parts of the sources. In this case we obtained the information from the data at a resolution of 60″. In the right panel of Fig. <ref> we show the intrinsic polarization image at 60″, derived by extrapolating to λ=0 both the RM and the Burn-law. The two images are in good agreement in the common parts. In addition, they complement the intrinsic polarization information in different sampling regions of the sources. The intrinsic (λ=0) fractional polarization images shown in Fig. <ref> and Fig. <ref> confirm very low polarization levels in coincidence with the peak intensity in the southern lobe, in agreement with the small length measured for the fractional polarization vectors superimposed on the polarization image obtained with the SRT at 6600 MHz and shown in Fig. <ref>. Furthermore, the higher S/N ratio of the low resolution image reveals a few steep-spectrum diffuse features not observable in Fig. <ref>. In particular: a trail to the east (see Sakelliou et al. 2008, for an interpretation), the northern part of the tail of 3C40A, which overlaps in projection with the tip of the northern lobe of 3C40B, and a tail extending from the southern lobe of 3C40B. A common property of these features is that they are all highly polarised, as indicated in the figure. These are likely very old and relaxed regions characterized by an ordered magnetic field. The combination of these two effects could explain the high fractional polarization observed.
§.§ Polarization results summary
The SRT high-frequency observations are essential to determine the large scale structure of the RM in Abell 194. While the angular size of the lobes shown in Fig.
<ref> is comparable with the largest angular scale detectable by the VLA in D configuration at C-Band (≃5′), the full angular extent of 3C40 B is so large (≃20′) that the intrinsic structure of the U and Q parameters can be properly recovered only with single-dish observations. Indeed, the SRT observation at 6600 MHz is fundamental because it makes it possible to verify that the polarization position angles follow the λ^2 law over a broad range of wavelengths, thus confirming the hypothesis that the RM originates in an external Faraday screen. The observed RM fluctuations in Abell 194 appear similar to those observed for other sources in comparable environments, e.g. NGC 6251, 3C 449, NGC 383 (Perley et al. 1984, Feretti et al. 1999, Laing et al. 2008), but are much smaller in amplitude than in rich galaxy clusters (e.g. Govoni et al. 2010). We summarize the RM results obtained at the different resolutions in Table <ref>. The values of ⟨RM⟩ given in Table <ref> include the contribution of the Galaxy. We determined the Galactic RM contribution by using the results by Oppermann et al. (2015), mostly based on the catalog by Taylor et al. (2009), which contains the RM for 34 radio sources within a radius of 3 deg from the cluster center of Abell 194. Within this radius, the Galactic RM contribution turns out to be 8.7±4.5 rad/m^2. Therefore, half of the ⟨RM⟩ value measured in the Abell 194 sources is Galactic in origin. In 3C 40B the two lobes show a similar RM and Burn-law a; this is in contrast to what was found in 3C 31 (Laing et al. 2008), where an asymmetry between the north and south sides is evident. While 3C 31 is inclined with respect to the plane of the sky and the receding lobe is more rotated than the approaching lobe (Laing–Garrington effect; Laing 1988, Garrington et al. 1988), in Abell 194 our results seem to indicate that the two lobes of 3C 40B lie in the plane of the sky, and thus they are affected by a similar physical depth of the Faraday screen. It is also interesting to note that 3C 40A is located at a projected distance from the cluster center very similar to that of the southern lobe of 3C 40B, but 3C 40A is more depolarized than the southern lobe of 3C 40B. This behavior may be explained if 3C 40A is located at a deeper position along the line of sight than 3C 40B. Thus, passing across a longer path, the signal is subject to a higher depolarization, due to the higher RM. Other explanations may be connected to the presence of the X-ray cavity in the southern lobe of 3C 40B, which might reduce its depolarization with respect to 3C 40A. The FPOL trend with λ^4 and the polarization angle trend with λ^2 argue in favor of an RM produced by a foreground Faraday screen (Burn 1966, Laing 1984, Laing et al. 2008). In addition, the presence of a large X-ray cavity formed by the southern radio lobe arising from 3C 40B (Bogdán et al. 2011) supports the scenario in which the rotation of the polarization plane occurs entirely in the intra-cluster medium, since the radio lobe is likely devoid of thermal gas. In a few cases (Guidetti et al. 2011) the RM morphologies tend to display peculiar anisotropic RM structures. These anisotropies appear highly ordered (‘RM bands’), with iso-contours orthogonal to the axes of the radio lobes, and are likely related to the interaction between the sources and their surroundings. The RM images of Abell 194 show patchy patterns without any obvious preferred direction, in agreement with many of the published RM images.
Therefore, in Abell 194, the standard picture in which the Faraday effect is due to the turbulent intra-cluster medium, and is not affected by the presence of the radio source itself, is self-consistent. The above considerations suggest that the effect of the external Faraday screen is dominant over the internal Faraday rotation, if any. As shown by the optical and X-ray analyses (see Sect. <ref> and <ref>), Abell 194 is not undergoing a major merger event. Rather, it is likely accreting several smaller clumps. Under these conditions, we do not expect injection of turbulence on cluster-wide scales in the intra-cluster medium. However, we may expect injection of turbulence on scales of a few hundred kpc due to the accretion flows of small groups of galaxies, the motion of galaxy cluster members, and the expansion of the radio lobes. This turbulent energy will cascade to smaller scales and will finally be dissipated, heating the intra-cluster medium. The footprint of this cascade could be observable in the intra-cluster magnetic field structure through the RM images. In this case, the patchy structure characterizing the RM images in Figs. <ref>, <ref>, and <ref> can be interpreted as a signature of the turbulent intra-cluster magnetic field, and the values of σ_RM and ⟨RM⟩ can be used to constrain the strength and the structure of the magnetic field.
§ INTRA-CLUSTER MAGNETIC FIELD CHARACTERIZATION
We constrained the magnetic field power spectrum in Abell 194 by using the polarization information (RM and fractional polarization) derived for the radio galaxies 3C 40A and 3C 40B. A realistic description of the cluster magnetic fields must simultaneously provide a reasonable representation of the following observables:
- the RM structure function, defined by S(Δx, Δy)=⟨[RM(x,y)-RM(x+Δx,y+Δy)]^2⟩_(x,y), where ⟨…⟩_(x,y) indicates that the average is taken over all the positions (x,y) in the RM image. The structure function S(Δr) is then computed by azimuthally averaging S(Δx,Δy) over annuli of increasing separation Δr=√(Δx^2+Δy^2) (a minimal sketch of this computation is given after this list);
- the Burn-law (see Eq. <ref>);
- the trends of σ_RM and ⟨RM⟩ against the distance from the cluster center.
The RM structure function and the Burn-law are better investigated by using the data at higher resolution (19″), while the trends of σ_RM and ⟨RM⟩ with the distance from the cluster center are better described by the data at lower resolution (2.9′).
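The structure function defined above can be computed directly from an RM image; a brute-force sketch (Python; `rm` is a 2D array with NaN outside the source silhouette, an assumption of this example):

```python
import numpy as np

def rm_structure_function(rm, pix_kpc, n_bins=30):
    """Azimuthally averaged S(dr) = <[RM(x,y) - RM(x+dx,y+dy)]^2> from a 2D RM map.

    Brute force over all valid pixel pairs: fine for small cut-outs, O(N^2) in memory.
    """
    ok = np.isfinite(rm)
    yy, xx = np.indices(rm.shape)
    x, y, v = xx[ok].astype(float), yy[ok].astype(float), rm[ok]
    dr = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :]) * pix_kpc
    dS = (v[:, None] - v[None, :]) ** 2
    edges = np.linspace(0.0, dr.max(), n_bins + 1)
    idx = np.digitize(dr.ravel(), edges)
    S = np.array([dS.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                  for i in range(1, n_bins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), S   # bin centers (kpc), S(dr)
```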
Our modeling is based on the assumption that the Faraday rotation occurs entirely in the intra-cluster medium. In particular, we supposed that there is no internal Faraday rotation inside the radio lobes. To interpret the RM and the fractional polarization data, we need a model for the spatial distribution of the thermal electron density and of the intra-cluster magnetic field. For the thermal electron density profile we assumed the β-model derived in Sect. <ref>, with r_c=248.4 kpc, β=0.67, and n_0=6.9×10^-4 cm^-3. For the power spectrum of the intra-cluster magnetic field fluctuations we adopted a power-law with index n of the form |B_k|^2∝k^-n in the wave number range from k_min=2π/Λ_max to k_max=2π/Λ_min. Moreover, we supposed that the power-spectrum normalization varies with the distance from the cluster center, such that the average magnetic field strength at a given radius scales as a function of the thermal gas density according to ⟨B(r)⟩=⟨B_0⟩[n_e(r)/n_0]^η, where ⟨B_0⟩ is the average magnetic field strength at the center of the cluster [since the simulated magnetic field components follow a Gaussian distribution, the total magnetic field is distributed according to a Maxwellian; in this work we quote the average magnetic field strength, which is related to the rms magnetic field strength through ⟨B⟩=√(8/(3π-8)) B_rms], and n_e(r) is the radial profile of the thermal electron gas density. The magnetic field fluctuations are considered isotropic on scales Λ≫Λ_max, i.e. the fluctuation phases are completely random in Fourier space. Overall, our magnetic field model depends on the five parameters listed in Table <ref>: (i) the strength at the cluster center ⟨B_0⟩; (ii) the radial index η; (iii) the power spectrum index n; (iv) the minimum scale of the fluctuations Λ_min; (v) the maximum scale of the fluctuations Λ_max. As pointed out in Murgia et al. (2004), there are a number of degeneracies between these parameters. In particular, the most relevant to us are the degeneracy between ⟨B_0⟩ and η, and that between Λ_min, Λ_max, and n. Fitting all of these five parameters simultaneously would be the best way to proceed but, due to the computational burden, this is not feasible. The aim of this work is to constrain the magnetic field radial profile, therefore we proceeded in two steps. First, we performed two-dimensional simulations to derive the shape of the magnetic field power spectrum. Second, we performed three-dimensional simulations, varying the values of ⟨B_0⟩ and η, to derive the magnetic field profile that best reproduces the RM observations. We focused our analysis on ⟨B_0⟩, η, and Λ_max. In our modeling, Λ_max should represent the injection scale of the turbulent energy. Since Abell 194 is a rather relaxed cluster, we expect Λ_max to be of the order of a hundred kpc or less. The turbulent energy will cascade down to smaller and smaller scales until it is dissipated. Determining the slope of the magnetic field power spectrum and the outer scale of the magnetic field fluctuations is not trivial because of the degeneracy between these parameters (see e.g. Murgia et al. 2004, Bonafede et al. 2010). To reduce the number of free model parameters, on the basis of the Kolmogorov theory for a turbulent medium, we fixed the slope of the power-law power spectrum to n=11/3. In addition, we fixed Λ_min=1 kpc. We note that a different Λ_min has a negligible impact on these simulation results since, for a Kolmogorov spectral index, most of the magnetic field power is on larger scales and our observations do not resolve sub-kpc structures. In particular, by using the 19″ resolution data set, we performed a two-dimensional analysis to constrain the maximum scale of the magnetic field fluctuations Λ_max, and by using the 2.9′ resolution data set, we performed three-dimensional numerical simulations to constrain the strength of the magnetic field and its scaling with the gas density. In both cases, we used the FARADAY code (Murgia et al. 2004) to produce simulated RM and fractional polarization images and to compare them with the observed ones. The simulated images were produced by numerically integrating the product of the gas density and the magnetic field along the line-of-sight.
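For reference, the density profile and the field scaling just introduced are encoded below, using the β-model parameters quoted above and, for illustration, the best-fit values ⟨B_0⟩=1.5 μG and η=1.1 obtained later in this section:

```python
import numpy as np

r_c, beta, n0 = 248.4, 0.67, 6.9e-4   # kpc, -, cm^-3 (beta-model of Sect. above)
B0, eta = 1.5, 1.1                    # muG and radial index (best fit, see below)

def n_e(r_kpc):
    """Beta-model thermal electron density (cm^-3)."""
    return n0 * (1.0 + (r_kpc / r_c) ** 2) ** (-1.5 * beta)

def B_mean(r_kpc):
    """Average field strength, <B(r)> = <B_0> [n_e(r)/n_0]^eta (muG)."""
    return B0 * (n_e(r_kpc) / n0) ** eta

for r in (0, 250, 500, 1000):
    print(r, n_e(r), B_mean(r))       # the field drops well below 1 muG by ~500 kpc
```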
We considered the sources 3C40 A and 3C40 B as lying in the plane of the sky (not inclined), and the limits of the integrals were [0, 1 Mpc], i.e. with the sources located at the cluster center. In the simulated RM images we took into account the Galactic RM contribution, which is supposed to be uniform over the radio sources 3C40 A and 3C40 B. Due to the heavy window function imposed by the limited projected size of the radio galaxies with respect to the size of the cluster, the simulated fractional polarization and RM images are filtered like the observations before being compared with the data. The two-dimensional and three-dimensional simulations are filtered following these steps:
(1) We created a source model at 19″ (for the two-dimensional analysis) and at 60″ (for the three-dimensional analysis) by considering the CLEAN components (CC model) of the images at 1443 MHz. For each observing frequency, we then derived the expected total intensity images I(ν) by associating to each CLEAN component the corresponding flux density, calculated by assuming a JP model with α_inj=0.5 and ν_b taken from the observed break frequency image (see Fig. <ref> and Sect. <ref>);
(2) We used the images in the right panels of Figs. <ref> and <ref>, obtained by extrapolating the Burn-law to λ=0, as a model of the intrinsic fractional polarization FPOL_0, thus removing the wavelength-dependent depolarization due to the foreground Faraday screen. In this way, FPOL_0 accounts for the intrinsic disorder of the magnetic field inside the radio source, which may in general vary from point to point. For each observing frequency, we then derived the expected “full-resolution” fractional polarization FPOL(ν) by associating to each CLEAN component the intrinsic fractional polarization corrected for the aging effect derived from the break frequency image shown in Fig. <ref>: FPOL(ν)=FPOL_0×FPOL_theor(ν/ν_b). Murgia et al. (2016) pointed out that, as the synchrotron plasma ages, the theoretical fractional polarization [the theoretical fractional polarization level FPOL_theor(ν/ν_b), expected for α_inj=0.5, is shown in Fig. 15 of Murgia et al. (2016)] approaches 100% for ν≫ν_b. The canonical value FPOL_theor=(3p+3)/(3p+7), valid for a power-law emission spectrum, is obtained in the limit ν≪ν_b;
(3) For the intrinsic polarization angle Ψ_0, we assigned to each CLEAN component the values taken from the images in the right panels of Figs. <ref> and <ref>, obtained by extrapolating Ψ to λ=0;
(4) By using I(ν), FPOL(ν), and Ψ_0, we produced the expected Q(ν) and U(ν) images corresponding to the simulated RM, obtained by employing Eq. <ref>;
(5) We added Gaussian noise matching that of the observations to the I(ν), Q(ν), and U(ν) images, and we smoothed them with the FWHM beam of the observations used in the two-dimensional and three-dimensional analysis (19″ or 2.9′). In this way we have taken into account the beam depolarization of the polarized signal;
(6) Finally, we derived the fractional polarization images FPOL(ν)=P(ν)/I(ν) from the synthetic I(ν), Q(ν), and U(ν) images, and we produced the synthetic RM image by fitting pixel by pixel the polarization angle images Ψ(ν) versus the squared wavelength, using exactly the same algorithm and strategy described in Sect. <ref>, as if they were actual observations.
Examples of observed and synthetic images used in our analysis are shown in Fig. <ref>. Following Vacca et al.
(2012), we compared the synthetic images and the data using Bayesian inference, whose use in RM analyses was first introduced by Enßlin & Vogt (2003).
§.§ Bayesian inference
Because of the random nature of the intra-cluster magnetic field fluctuations, the RM and fractional polarization images we observe are just one possible realization of the data. Thus, rather than trying to determine the particular set of power spectrum parameters, θ⃗, that best reproduces the given realization of the data, it is perhaps more meaningful to search for the distribution of model parameters that maximizes the probability of the model given the data. Bayesian inference offers a natural theoretical framework for this approach. The Bayes rule relates our prior information on the distribution P(θ⃗) of the model parameters θ⃗ to their posterior probability distribution P(θ⃗|D) after the data D have been acquired: P(θ⃗|D)=P(D|θ⃗)P(θ⃗)/P(D), where P(D|θ⃗) is the likelihood function, while P(D) is called the evidence. The evidence acts as a normalizing constant and represents the integral of the likelihood function weighted by the prior over the whole parameter space: P(D)=∫P(D|θ⃗)P(θ⃗)dθ⃗. The most probable configuration of the model parameters is obtained from the posterior distribution, given by the product of the likelihood function with the prior probability. We used a Markov Chain Monte Carlo (MCMC) method to extract samples from the posterior probability distribution. In particular, we implemented the Metropolis–Hastings algorithm, which is capable of generating a sample of the posterior distribution without the need to calculate the evidence explicitly; the evidence is often extremely difficult to compute, since it would require exploring the entire parameter space. The MCMC is started from a random initial position θ⃗_0 and the algorithm is run for many iterations by selecting new states according to a transitional kernel, Q(θ⃗,θ⃗'), between the actual position, θ⃗, and the proposed one, θ⃗'. The proposed position is accepted with probability h=min[1, P(D|θ⃗')P(θ⃗')Q(θ⃗',θ⃗) / (P(D|θ⃗)P(θ⃗)Q(θ⃗,θ⃗'))]. We chose for Q a multivariate Gaussian kernel. The MCMC starts with a number of “burn-in” steps during which, according to common practice, the standard deviation of the transitional kernel is adjusted so that the average acceptance rate stays in the range 25%-50%. After the burn-in period, the random walk forgets its initial state and the chain reaches an equilibrium distribution. The burn-in steps are discarded, and the remaining set of accepted values of θ⃗ is a representative sample of the posterior distribution, which can be used to compute the final statistics on the model parameters.
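A minimal version of the sampler described above, with a symmetric Gaussian kernel so that the ratio Q(θ⃗',θ⃗)/Q(θ⃗,θ⃗') cancels in h (`log_post` is a placeholder for the log of likelihood times prior):

```python
import numpy as np

def metropolis(log_post, theta0, step, n_steps=20000, n_burn=2000, seed=0):
    """Metropolis-Hastings with a symmetric multivariate Gaussian kernel."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for i in range(n_steps):
        prop = theta + rng.normal(0.0, step, size=theta.size)
        lp_prop = log_post(prop)
        # accept with probability h = min(1, posterior ratio); kernel ratio cancels
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if i >= n_burn:
            chain.append(theta.copy())  # burn-in samples are discarded
    return np.array(chain)
```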
§.§ 2D simulations
The polarization images at a resolution of 19″ give a detailed view of the magnetic field fluctuations toward the brightest parts of the radio sources. For this reason, we performed a two-dimensional analysis of these data to constrain the maximum scale of the magnetic field fluctuations Λ_max, by fixing n=11/3 and Λ_min=1 kpc. Within the power-law scaling regime (Enßlin & Vogt 2003), the two-dimensional analysis relies on the proportionality between the magnetic field and the RM power spectra. On the basis of this proportionality, the slope n of the two-dimensional RM power spectrum is the same as that of the three-dimensional magnetic field power spectrum: |RM_k|^2∝k^-n. We simulated RM images with a given power spectrum on a two-dimensional grid. The simulations start in Fourier space, where the amplitudes of the RM components are selected according to Eq. <ref>, while the phases are completely random. The RM image in real space is obtained by a fast Fourier transform (FFT) inversion. We took into account the dependence of the RM on the radial decrease of the thermal gas density and of the intra-cluster magnetic field by applying a tapering to the RM image: RM_taper(r)=(1+r^2/r_c^2)^(-3β(1+η)+0.5), for which we fixed the parameters of the β-model (Sect. <ref>) and assumed, as a first approximation, η=1. As a compromise between computational speed and spatial dynamic range, we adopted a computational grid of 1024×1024 pixels with a pixel size of 0.5 kpc. This grid allowed us to explore RM fluctuations on scales as small as Λ_min=1 kpc, which is smaller than the linear size of the 19″ beam of the observations (FWHM≃6.8 kpc). At the same time, we were able to investigate fluctuations as large as Λ_max=512 kpc, i.e., comparable to the linear size of the radio galaxy 3C 40B in Abell 194.
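In code, the 2D machinery just described amounts to a few lines; the sketch below (unit power-spectrum normalization, which in the actual fit is the free parameter `norm`) draws random phases in Fourier space, inverts the FFT, and applies the radial taper of Eq. <ref>:

```python
import numpy as np

def simulate_rm_2d(npix=1024, pix_kpc=0.5, n=11/3, lam_min=1.0, lam_max=64.0,
                   r_c=248.4, beta=0.67, eta=1.0, seed=1):
    rng = np.random.default_rng(seed)
    k1 = 2 * np.pi * np.fft.fftfreq(npix, d=pix_kpc)        # rad/kpc
    kx, ky = np.meshgrid(k1, k1)
    kk = np.hypot(kx, ky)
    band = (kk >= 2 * np.pi / lam_max) & (kk <= 2 * np.pi / lam_min)
    amp = np.where(band, kk, np.inf) ** (-n / 2.0)          # |RM_k|^2 ~ k^-n, zero outside band
    phase = rng.uniform(0.0, 2 * np.pi, kk.shape)           # completely random phases
    rm = np.fft.ifft2(amp * np.exp(1j * phase)).real
    # radial taper for the declining gas density and field strength (Eq. above)
    x = (np.arange(npix) - npix / 2) * pix_kpc
    xx, yy = np.meshgrid(x, x)
    taper = (1 + (np.hypot(xx, yy) / r_c) ** 2) ** (-3 * beta * (1 + eta) + 0.5)
    return rm * taper
```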
For the Bayesian inference outlined in Sect. <ref>, we adopted a likelihood of the form: P(D|θ⃗)=∏_i 1/(σ_synth(i)√(2π)) exp[-(1/2)[D_obs(i)-D_synth(i)]^2/σ_synth(i)^2], where the observed data D_obs are compared to the synthetic data D_synth. The synthetic data are obtained by filtering the simulations like the observations (as described in Sect. <ref>). The scatter σ_synth includes both the simulated noise measurements, σ_noise, and the intrinsic statistical scatter of the synthetic data, σD_synth, due to the random nature of the intra-cluster magnetic fields: σ_synth^2=σD_synth^2+σ_noise^2. At each step of the MCMC, for a given combination of the free parameters θ⃗, we performed five simulations with different realizations of the fluctuation phases, and we evaluated the dispersion σ_synth. In our two-dimensional analysis, the data D (observed and synthetic) are represented by the azimuthally averaged RM structure function S(Δr) and by the depolarization ratio FPOL/FPOL_0(λ^4). The posterior distribution for a given set of model parameters is computed by considering the joint likelihood of the RM structure function and of the depolarization ratio all together. We applied the Bayesian method by choosing priors uniform in logarithmic scale for the two free parameters: the normalization of the magnetic field power spectrum, norm, and the minimum wave number, k_min. In the top right panel of Fig. <ref> we show the image of the posterior distribution for the two free parameters norm and k_min. In addition, the one-dimensional marginalizations are shown as histograms along each axis of the image. The one-dimensional marginalizations represent the projected density of samples in the MCMC, which is proportional to the posterior probability of that particular pair of model parameters. To provide a visual comparison between synthetic and observed data we present the fan-plots of the structure function and of the depolarization ratio. In the top left panel of Fig. <ref> the dots represent the observed RM structure function S(Δr), calculated using the RM image of Fig. <ref>. The observed RM structure function S(Δr) is shown together with the population of synthetic structure functions extracted from the posterior distribution. Brighter pixels in the fan-plot occur where many different synthetic structure functions overlap each other. Thus, the fan-plot gives a visual indication of the marginalized posterior probability of the synthetic data at a given separation. Overall, the fan-plot shows that the Kolmogorov power spectrum is able to reproduce the shape of the observed RM structure function, including the large scale oscillation observed before the turnover at Δr>100 kpc. The turnover is likely caused by the under-sampling of the large separations imposed by the window function, due to the silhouette of the radio sources, combined with the effect of the tapering imposed by the dimming of the gas density and magnetic field strength at increasing distances from the cluster center. On small separations, Δr<6.8 kpc, the slope of the structure function is affected by the observing beam. It is worth mentioning that all these effects are fully taken into account when simulating the synthetic I, Q, and U images. In the bottom panels of Fig. <ref> the dots represent FPOL/FPOL_0 as a function of λ^4, calculated in the southern (left panel) and in the northern (right panel) lobe of 3C 40B, respectively. The southern lobe is slightly more depolarized than the northern lobe. The statistics of the fractional polarization have been calculated considering only the pixels in which FPOL>3σ_FPOL. The observed depolarization ratio as a function of λ^4 is shown together with the fan-plots of the synthetic depolarization. In general, the Kolmogorov power spectrum is able to reproduce the observed depolarization, as shown by the fan-plots. To summarize, the combined two-dimensional analysis of the RM structure function and of the fractional polarization allowed us to constrain the maximum scale of the magnetic field fluctuations to Λ_max=(64±24) kpc, where the given error represents the dispersion of the one-dimensional marginalizations. The magnetic field auto-correlation length is calculated according to: Λ_B=(3π/2) ∫_0^∞|B_k|^2 k dk / ∫_0^∞|B_k|^2 k^2 dk, where the wave number is k=2π/Λ (Enßlin & Vogt 2003). We obtained Λ_B=(20±8) kpc. In the next step we fix this value and constrain the strength of the magnetic field and its scaling with the gas density with the aid of three-dimensional simulations.
§.§ 3D simulations
The data at a resolution of 2.9′ are best suited to derive RM information up to a large distance from the cluster center. Therefore, we used these data to constrain the strength of the magnetic field and its scaling with the gas density using three-dimensional simulations. We constructed three-dimensional models of the intra-cluster magnetic field by following the numerical approach described in Murgia et al. (2004). The simulations begin in Fourier space by extracting the amplitude of the magnetic field potential vector, Ã(k), from a Rayleigh distribution whose standard deviation varies with the wave number according to |A_k|^2∝k^(-n-2). The phase of the potential vector fluctuations is taken to be completely random. The magnetic field is formed in Fourier space via the cross product B̃(k)=ik×Ã(k). This ensures that the magnetic field is effectively divergence free. We then perform a three-dimensional FFT inversion to produce the magnetic field in the real space domain. The field is then Gaussian and isotropic, in the sense that there is no privileged direction in space for the magnetic field fluctuations.
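A down-sized sketch of this construction is given below (the 1024³ grid of the paper is reduced here for memory; taking the real part of the inverse FFT, rather than enforcing Hermitian symmetry, is a simplification that only affects the overall normalization, which is set afterwards via Eq. <ref>). Note also that evaluating Eq. 16 for this spectrum (n=11/3, Λ_min=1 kpc, Λ_max=64 kpc) gives Λ_B≈20 kpc, consistent with the value quoted above.

```python
import numpy as np

def simulate_B_3d(npix=128, pix_kpc=0.5, n=11/3, lam_min=1.0, lam_max=64.0, seed=2):
    rng = np.random.default_rng(seed)
    k1 = 2 * np.pi * np.fft.fftfreq(npix, d=pix_kpc)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kz**2)
    band = (kk >= 2 * np.pi / lam_max) & (kk <= 2 * np.pi / lam_min)
    sigma = np.where(band, kk, np.inf) ** (-(n + 2) / 2.0)   # |A_k|^2 ~ k^(-n-2)
    # Rayleigh amplitudes and random phases for each potential component
    A = [rng.rayleigh(scale=sigma) * np.exp(1j * rng.uniform(0, 2 * np.pi, kk.shape))
         for _ in range(3)]
    # B(k) = i k x A(k): divergence-free by construction
    Bk = [1j * (ky * A[2] - kz * A[1]),
          1j * (kz * A[0] - kx * A[2]),
          1j * (kx * A[1] - ky * A[0])]
    return np.array([np.fft.ifftn(b).real for b in Bk])
```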
The power-spectrum normalization is set such that the average magnetic field strength scales as a function of the thermal gas density according to Eq. <ref>. The gas density profile is described by the β-model in Eq. <ref>, modified, as done for A2199 by Vacca et al. (2012), to take into account the presence of the X-ray cavity detected at the position of the southern lobe of 3C 40B. We simulated the random magnetic field by using a cubic grid of 1024^3 cells with a cell size of 0.5 kpc/cell. The simulated magnetic field is periodic at the boundaries, and thus the computational grid is replicated to cover a larger cluster volume. The simulated RM images were obtained by numerically integrating the product of the magnetic field and the gas density along the line-of-sight, according to Eq. <ref>. In the case of Abell 194, the integration was performed from the cluster centre up to 1 Mpc along the line-of-sight. In a similar way to the two-dimensional simulations, the simulated RM images were filtered like the observations. Finally, we used the Bayesian approach to find the posterior distribution for ⟨B_0⟩ and η. We derived the observed profiles of σ_RM and ⟨RM⟩ as a function of the distance from the cluster center by calculating the statistics in five concentric annuli of 2.9′ in size. The annuli are centered at the position of the X-ray centroid (see Sect. <ref>).
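These profiles are simple annulus statistics around the X-ray centroid; a sketch (hypothetical `rm` map, pixel scale in arcmin, and centroid position):

```python
import numpy as np

def rm_profiles(rm, center_xy, pix_arcmin, width_arcmin=2.9, n_ann=5):
    """<RM> and sigma_RM in concentric annuli around the X-ray centroid."""
    yy, xx = np.indices(rm.shape)
    r = np.hypot(xx - center_xy[0], yy - center_xy[1]) * pix_arcmin
    mean, disp = [], []
    for i in range(n_ann):
        sel = (r >= i * width_arcmin) & (r < (i + 1) * width_arcmin) & np.isfinite(rm)
        mean.append(rm[sel].mean())
        disp.append(rm[sel].std())
    return np.array(mean), np.array(disp)   # <RM> and sigma_RM per annulus
```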
The slope n=11/3 and the scales Λ_min=1 kpc and Λ_max=64 kpc were kept fixed at the values found in the two-dimensional analysis. We applied the Bayesian method by choosing a uniform prior for the distribution of η, and a prior uniform in logarithmic scale for the distribution of ⟨B_0⟩. At each step of the MCMC we simulated the synthetic I, Q, and U images of 3C 40A and 3C 40B, and we derived the synthetic RM image following the same procedure as for the two-dimensional analysis. Using Eq. 15, we evaluated the likelihood of the σ_RM and ⟨RM⟩ profiles by means of ten different random configurations of the magnetic field phases, used to calculate the model scatter σD_synth. The result of the Bayesian analysis is shown in Fig. <ref>. In the top right panel of Fig. <ref>, we present the image of the posterior distribution of the model parameters, along with the one-dimensional marginalizations represented as histograms. In the top left panel of Fig. <ref> we show the observed σ_RM profile along with the fan-plot of the synthetic profiles from the posterior. The dispersion of the RM increases toward the cluster centre, as expected, due to the higher density and magnetic field strength. The drop at the cluster center is likely caused by the under-sampling of the RM statistics, as a result of the smaller size of the central annulus with respect to the external annuli. In the bottom left panel of Fig. <ref> we show the observed ⟨RM⟩ profile along with the fan-plot of the synthetic RM profiles extracted from the posterior distribution. In this plot it is evident that the cluster shows an enhancement of the ⟨RM⟩ signal with respect to the Galactic foreground of 8.7 rad/m^2 (Oppermann et al. 2015). Finally, in the bottom right panel of Fig. <ref> we show the trend of the mean magnetic field with the distance from the cluster center. The degeneracy between the central magnetic field strength and the radial index is evident in the posterior distribution of the model parameters: the higher ⟨B_0⟩, the higher η. However, we were able to constrain the central magnetic field strength to ⟨B_0⟩=(1.5±0.2) μG, and the radial index to η=(1.1±0.2). The average magnetic field over a volume of 1 Mpc^3 is as low as ⟨B⟩≃0.3 μG. This result is obtained by fixing the slope of the magnetic field power spectrum to the Kolmogorov index. However, we note that the central magnetic field strength depends on the square root of the field correlation length Λ_B (see Eq. 16). The field correlation length in turn depends on a combination of the power spectrum slope and Λ_max, and these two parameters are degenerate: a flatter slope, for instance, leads to a larger Λ_max, so that Λ_B is preserved. Therefore, a different choice of the power spectrum slope n should have only a second-order effect on the estimated central magnetic field strength. Nevertheless, as a further check, we repeated the 2D Bayesian analysis (not shown) by freeing all the power spectrum parameters: normalization, Λ_min, Λ_max, and n. The larger dimensionality implies larger uncertainties, due to the computational burden needed to fully explore the parameter space. The maximum posterior probability is found for a slope n≃3.6±1.6, which is in agreement, to within the large uncertainty, with the Kolmogorov slope n=11/3. To our knowledge, the central magnetic field we determined in Abell 194 is the weakest ever found using RM data in galaxy clusters. We note, however, that Abell 194 is a poor galaxy cluster with no evidence of a cool core. It is interesting to compare the intra-cluster magnetic field of Abell 194 to that of other galaxy clusters for which a similar estimate is present in the literature. These are listed in Table <ref>. In the left panel of Fig. <ref> we show a plot of the central magnetic field strength ⟨B_0⟩ versus the mean cluster temperature, while in the right panel of Fig. <ref> we show a plot of ⟨B_0⟩ versus the central electron density n_0. Although the cluster sample is still rather small, and thus all the following considerations should be taken with care, we note that there is a hint of a positive trend between ⟨B_0⟩ and n_0 measured among different clusters. On the other hand, no correlation seems to be present between the central magnetic field and the mean cluster temperature, in agreement with what was found by Govoni et al. (2010) in a statistical analysis of a sample of RM in rich galaxy clusters. There are three merging clusters in the sample: Coma, A2255, and A119. They all have quite similar central magnetic field strengths, despite the fact that Coma and A2255 host a giant radio halo while A119 is radio quiet. We confirm that cool core clusters like Hydra and A2199 tend to have higher central magnetic fields. In general, fainter central magnetic fields seem to be present in less dense galaxy clusters. This result is indeed corroborated by the low central magnetic field found in this work for the poor galaxy cluster Abell 194. The scaling obtained by a linear fit of the log-log relationship is ⟨B_0⟩∝n_0^0.47. This result is in line with the theoretical prediction by Kunz et al. (2011), who assume that parallel viscous heating due to turbulent dissipation balances radiative cooling at all radii inside the cluster core. Kunz et al. (2011) found, in the bremsstrahlung regime (that is, T≳1 keV): B≃11 ξ^(-1/2) (n_e/0.1 cm^-3)^(1/2) (T/2 keV)^(3/4) μG, where T is the temperature, while ξ is expected to range between 0.5 and 1 in a turbulent plasma. Following this formula, the expected magnetic field at the center of Abell 194 is 1.05 ξ^(-1/2) μG.
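Plugging the Abell 194 numbers into this formula (with the central density n_0=6.9×10^-4 cm^-3 quoted above and a mean cluster temperature of ≈2.4 keV, which is an assumption of this sketch) reproduces the quoted prediction; note that for ξ=0.5 the predicted field is ≈1.5 μG, remarkably close to the measured ⟨B_0⟩:

```python
def B_kunz_muG(n_e_cm3, T_keV, xi=1.0):
    """Central field from Kunz et al. (2011), Eq. above (bremsstrahlung regime)."""
    return 11.0 * xi ** -0.5 * (n_e_cm3 / 0.1) ** 0.5 * (T_keV / 2.0) ** 0.75

# Abell 194: n_0 = 6.9e-4 cm^-3; T ~ 2.4 keV is assumed here
print(B_kunz_muG(6.9e-4, 2.4))           # ~1.05 muG for xi = 1
print(B_kunz_muG(6.9e-4, 2.4, xi=0.5))   # ~1.48 muG for xi = 0.5
```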
Although the central magnetic field found in our analysis is in remarkable agreement with the value obtained from Eq. <ref>, we note that we found a magnetic field which scales with density as η=1.1±0.2, while according to Eq. <ref> we should expect η=0.5. However, we did not include in our analysis a possible scaling of B with the cluster temperature. If the intra-cluster medium is not isothermal, it is possible to bring the magnetic field radial profile found in our analysis into agreement with that in Eq. <ref> by supposing for Abell 194 a temperature profile as steep as T∝r^-1.4 at large radii (r>0.2 R_180). The temperature profile of Abell 194 is not known at large radii, but the required profile is much steeper than the typical temperature decrease found in galaxy clusters, which is T∝r^-μ with μ between 0.2 and 0.5 (e.g. Leccardi & Molendi 2008). In Fig. <ref>, we plot the expected range of magnetic field strengths obtained by using Eq. <ref>. In the right panel, the upper bound corresponds to ξ=0.5 and T=10 keV, while the lower bound corresponds to ξ=1 and T=1 keV. In the left panel, the upper bound corresponds to ξ=0.5 and n_0=5×10^-4 cm^-3, while the lower bound corresponds to ξ=1 and n_0=0.15 cm^-3. Indeed, the model by Kunz et al. (2011) seems to reproduce the scaling between the central magnetic field and the gas density among different galaxy clusters, even if in the case of Abell 194 the observed scaling of the magnetic field with radius is significantly steeper than what they predict.
§ SUMMARY AND CONCLUSIONS
In the context of the SRT early science program SMOG (SRT Multifrequency Observations of Galaxy clusters), we have presented a spectral-polarimetric study of the poor galaxy cluster Abell 194. We have analyzed total intensity and polarization images of the extended radio galaxies 3C 40A and 3C 40B, located close to the cluster center. By complementing the SRT observations with radio data at higher resolution and at different frequencies, but also with data in the optical and X-ray bands, we aimed at inferring the dynamical state of the cluster, the radiative age of the radio galaxies, and the intra-cluster magnetic field power spectrum. The SRT data were used in combination with archival Very Large Array observations at lower frequencies to derive both spectral aging and RM images of 3C 40A and 3C 40B. The break frequency image and the cluster electron density profile inferred from the X-ray observations are used in combination with the RM data to constrain the intra-cluster magnetic field power spectrum. Following Murgia et al. (2004), we simulated Gaussian random two-dimensional and three-dimensional magnetic field models with different power-law power spectra, and we compared the observed data and the synthetic images with a Bayesian approach (Vogt & Enßlin 2005, Vacca et al. 2012), in order to constrain the strength and structure of the magnetic field associated with the intra-cluster medium. Our results can be summarized as follows:
- To investigate the dynamical state of Abell 194 and to obtain new insights into the cluster structure, we analyzed the redshifts of 1893 galaxies from the literature, resulting in a sample of 143 fiducial cluster members.
The picture resulting from the new and previous results indicates that Abell 194 does not show any trace of a major and recent cluster merger, but rather agrees with a scenario of accretion of small groups, mainly along the NE-SW axis.
- The measured break frequency in the radio spectra decreases systematically along the lobes of 3C 40B, down to a minimum of 850±120 MHz. If we assume that the magnetic field intrinsic to the source 3C 40B is at equipartition, the lifetime of the radiating electrons turns out to be 157±11 Myr, in agreement with the spectral age estimate calculated by Sakelliou et al. (2008). Furthermore, the radiative age of 3C40 B is consistent with that found in the literature for other sources of similar size and radio power (Parma et al. 1999).
- Linearly polarized emission is clearly detected for both sources, and the resulting polarization angle images are used to produce detailed RM images at different angular resolutions (2.9′, 60″, and 19″). The RM images of Abell 194 show patchy patterns without any obvious preferred direction, in agreement with the standard picture that the Faraday effect is due to the turbulent intra-cluster medium.
- The results on the magnetic field power spectrum in Abell 194 are reported in Table <ref>. By assuming a Kolmogorov magnetic field power-law power spectrum (slope n=11/3) with a minimum scale of fluctuations Λ_min=1 kpc, we find that the RM data in Abell 194 are well described by a magnetic field with a maximum scale of fluctuations of Λ_max=(64±24) kpc. The corresponding magnetic field auto-correlation length is Λ_B=(20±8) kpc. We find a magnetic field strength of ⟨B_0⟩=(1.5±0.2) μG at the cluster center. To our knowledge, the central magnetic field we determined in Abell 194 is the weakest ever found using RM data in galaxy clusters. We note, however, that Abell 194 is a poor galaxy cluster with no evidence of a cool core. Our results also indicate a magnetic field strength which scales with the thermal gas density. In particular, the field decreases with radius following the gas density to the power η=(1.1±0.2).
- We compared the intra-cluster magnetic field determined for Abell 194 to that of a small sample of galaxy clusters for which a similar estimate was available in the literature. No correlation seems to be present between the mean central magnetic field and the cluster temperature. On the other hand, there is a hint of a trend between the central electron densities and the magnetic field strengths among different clusters: ⟨B_0⟩∝n_0^0.47.
We thank the anonymous referee for the useful comments, which helped to improve the paper. The Sardinia Radio Telescope is funded by the Italian Ministry of Education, University and Research (MIUR), the Italian Space Agency (ASI), and the Autonomous Region of Sardinia (RAS), and is operated as a National Facility by the National Institute for Astrophysics (INAF). The development of the SARDARA back-end has been funded by the Autonomous Region of Sardinia (RAS) using resources from the Regional Law 7/2007 "Promotion of the scientific research and technological innovation in Sardinia" in the context of the research project CRP-49231 (year 2011, PI Possenti): "High resolution sampling of the Universe in the radio band: an unprecedented instrument to understand the fundamental laws of the nature". This research was partially supported by PRIN-INAF 2014. The National Radio Astronomy Observatory (NRAO) is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
This research made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. F. Loi gratefully acknowledges the Sardinia Regional Government for the financial support of her PhD scholarship (P.O.R. Sardegna F.S.E. Operational Programme of the Autonomous Region of Sardinia, European Social Fund 2007-2013 - Axis IV Human Resources, Objective l.3, Line of Activity l.3.1.). Basic research in radio astronomy at the Naval Research Laboratory is funded by 6.1 Base funding. This research was supported by the DFG Forschergruppe 1254 “Magnetisation of Interstellar and Intergalactic Media: The Prospects of Low-Frequency Radio Observations”. F. Vazza acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie-Sklodowska-Curie grant agreement No 664931.
Abell, G.O., Corwin, H.G.Jr., Olowin, R.P., 1989, , 70, 1 Anders, E. & Grevesse, N., 1989, , 53, 197 Arnaud, K.A., 1996, in ASP Conf. Ser. 101: Astronomical Data Analysis Software and Systems V, Vol. 5, 17 Barton, E.J., De Carvalho, R.R., Geller, M.J., 1998, , 116, 1573 Beers, T.C., Flynn, K., Gebhardt, K., 1990, AJ, 100, 32 Bird, C.M., & Beers, T.C., 1993, AJ, 105, 1596 Blumenthal, G.R., & Gould, R.J., 1970, Reviews of Modern Physics, 42, 237 Bogdán, Á., Kraft, R.P., Forman, W.R., et al., 2011, , 743, 59 Bolli, P., Orlati, A., Stringhetti, L., et al., 2015, Journal of Astronomical Instrumentation, Vol. 4, Nos. 3 & 4, 1550008 Bonafede, A., Feretti, L., Giovannini, G., et al., 2009a, , 503, 707 Bonafede, A., Giovannini, G., Feretti, L., et al., 2009b, , 494, 429 Bonafede, A., Feretti, L., Murgia, M., et al., 2010, , 513, 30 Bonafede, A., Vazza, F., Brüggen, M., et al., 2013, , 433, 3208 Briel, U.G., Henry, J.P., Boehringer, H., 1992, , 259, L31 Brodie, J.P., Bowyer, S., McCarthy, P., 1985, , 293, L59 Brüggen, M., Ruszkowski, M., Simionescu, A., et al., 2005, , 631, L21 Burn, B.J., 1966, , 133, 67 Capetti, A., Morganti, R., Parma, P., & Fanti, R. 1993, , 99, 407 Carilli, C.L., & Taylor, G.B., 2002, , 40, 319 Cavaliere, A., & Fusco-Femiano, R., 1976, , 49, 137 Chapman, G.N.F., Geller, M.J., Huchra, J.P., 1988, , 95, 999 Cirimele, G., Nesci, R., Trèvese, D., 1997, , 475, 11 Clarke, T.E., & Ensslin, T.A., 2006, , 131, 2900 Clarke, T.E., Kronberg, P.P., Böhringer, H., 2001, ApJ 547, L111 Condon, J.J., Cotton, W.D., Greisen, E.W., et al., 1998, AJ 115, 1693 Cornwell, T.J., & Perley, R.A. 1992, A&A, 261, 353 Croft, S., van Breugel, W., de Vries, W., et al., 2006, , 647, 1040 Croston, J.H., Hardcastle, M.J., Birkinshaw, M., Worrall, D.M., Laing, R. A., 2008, , 386, 1709 Dolag, K., Bartelmann, M., & Lesch, H. 2002, , 387, 383 Dolag, K., Vogt, C., & Enßlin, T.A., 2005, MNRAS 358, 726 Dressler, A., & Shectman, S. A. 1988, , 95, 985 Ebeling, H., Voges, W., Böhringer, H., et al.
1996, , 281, 799 Eckert, D., Molendi, S., Paltani, S., 2011, , 526, A79Eckert, D., Vazza, F., Ettori, S., et al., 2012, , 541, A57 Eilek, J.A., & Owen, F.N., 2002, , 567, 202Enßlin, T.A., & Vogt, C., 2003, A&A, 401, 835 Ettori, S., Tozzi, P., Borgani, S., Rosati, P., 2004, , 417, 13 Fadda, D., Girardi, M., Giuricin, G., Mardirossian, F., Mezzetti, M., 1996, , 473, 670 Farnes, J.S., Gaensler, B.M., & Carretti, E., 2014, , 212, 15 Fasano, G., Falomo, R., & Scarpa, R., 1996, , 282, 40Feretti, L., Boehringer, H., Giovannini, G., Neumann, D., 1997, , 317, 432Feretti, L., Perley, R., Giovannini, G., Andernach, H., 1999, , 341, 29 Feretti, L., Dallacasa, D., Giovannini, G., & Tagliani, A., 1995, A&A, 302, 680Feretti, L., Dallacasa, D., Govoni, F., et al., 1999, A&A 344, 472 Feretti, L., Giovannini, G., Govoni, F., & Murgia, M. 2012, , 20, 54Ferrari, C., Govoni, F., Schindler, S., et al., 2008, Space Science Reviews, 134, 93 Garrington, S.T., Leahy, J.P., Conway, R.G., Laing, R.A., 1988, Nature 331, 147 Gebhardt, K., & Beers, T.C. 1991, , 383, 72 Girardi, M., Escalera, E., Fadda, D., et al. 1997, , 482, 41 Girardi, M., Giuricin G., Mardirossian F., Mezzetti M., Boschin W., 1998, ApJ, 505, 74 Girardi, M., Mercurio, A., Balestra, I., et al. 2015, A&A, 579, A4 Girardi, M., Boschin, W., Gastaldello, F., et al., 2016, MNRAS, 456, 2829 Govoni, F., & Feretti, L., 2004, International Journal of Modern Physics D 13, 1549Govoni, F., Murgia, M., Feretti, L., et al., 2005, A&A, 430, L5Govoni, F., Murgia, M., Feretti, et al., 2006, A&A 460, 425 Govoni, F., Dolag, K., Murgia, M., et al., 2010, , 522, A105Govoni, F., Murgia, M., Xu, H., et al., 2013, , 554, A102Govoni, F., Murgia, M., Xu, H., et al., 2015, Advancing Astrophysics with the Square Kilometre Array (AASKA14), 105Guidetti, D., Murgia, M., Govoni, F., et al., 2008, , 483, 699Guidetti, D., Laing, R.A., Murgia, M., et al., 2010, , 514, 50 Guidetti, D., Laing, R.A., Bridle, A.H., Parma, P., Gregorini, L., 2011, , 413, 2525Gwyn, S.D.J. 2008, , 120, 212 Jaffe, W.J., & Perola, G.C., 1973, , 26, 423 Jetha, N.N., Hardcastle, M.J., Sakelliou, I., 2006, , 368, 609Johnstone, R.M., Allen, S.W., Fabian, A.C., Sanders, J.S., 2002, , 336, 299Johnston-Hollitt, M., & Ekers, R.D., 2004, arXiv:astro-ph/0411045Johnston-Hollitt, M., Govoni, F., Beck, R., et al., 2015,Advancing Astrophysics with the Square Kilometre Array (AASKA14), 92Jones, C., & Forman, W. 1999, , 511, 65 Kalberla, P.M.W., Burton, W.B., Hartmann, D., et al., 2005, , 440, 775 Komossa, S., & Böhringer, H., 1999, , 344, 755Kuchar, P., & Enßlin, T.A., 2011, , 529, A13 Kunz, M.W., Schekochihin, A.A., Cowley, S.C., Binney, J.J., Sanders, J.S., 2011, , 410, 2446 Laing, R.A., 1984, Physics of Energy Transport in Extragalactic Radio Sources, 90Laing, R.A., 1988, Nature 331, 149Laing, R.A., Bridle, A.H., Parma, P., Murgia, M., 2008, , 391, 521 Lamee, M., Rudnick, L., Farnes, J.S., et al., 2016, , 829, 5Lane, W.M., Cotton, W.D., van Velzen, S., et al., 2014, , 440, 327Leccardi, A., & Molendi, S. 2008, , 486, 359 Lovisari, L., Reiprich, T.H., Schellenberger, G., 2015, , 573, A118Melis, A., Valente, G., Tarchi, A., et al. 
2014, , 9153, 91532M Melis, A., Concu, R., Trois, A., et al., 2017, Journal for Astronomical Instrumentation, submitted Murgia, M., Fanti, C., Fanti, R., et al., 1999, A&A, 345, 769 Murgia M., Govoni F., Feretti L., et al., 2004, A&A 424, 429 Murgia, M., Parma, P., Mack, K.-H., et al., 2011, , 526, A148 Murgia, M., 2011, , 82, 507 Murgia, M., Govoni, F., Carretti, E., et al., 2016, , 461, 3516 Nikogossyan, E., Durret, F., Gerbal, D., Magnard, F., 1999, , 349, 97 O'Dea, C.P., & Owen, F. N., 1985, , 90, 954 Oppermann, N., Junklewitz, H., Greiner, M., et al., 2015, , 575, A118 Pacholczyk, A.G., 1970, Series of Books in Astronomy and Astrophysics, San Francisco: Freeman, 1970 Parma, P., Murgia, M., Morganti, R., et al., 1999, , 344, 7 Perley, R.A., Bridle, A.H., Willis, A.G., 1984, , 54, 291 Perley, R.A., & Taylor, G.B., 1991, AJ 101, 1623 Perley, R.A., & Carilli, C.L., 1996, Cygnus A – Study of a Radio Galaxy, 168 Perley, R.A., 1999, Synthesis Imaging in Radio Astronomy II, 180, 383 Perley, R.A., & Butler, B.J., 2013, , 204, 19 Pisani, A., 1993, MNRAS, 265, 706 Pisani, A. 1996, MNRAS, 278, 697 Pizzo, R.F., de Bruyn, A.G., Bernardi, G., Brentjens, M.A., 2011, A&A, 525, A104 Prandoni, I., Murgia, M., Tarchi, A., et al., 2017, A&A, submitted Pratley, L., Johnston-Hollitt, M., Dehghan, S., Sun, M., 2013, , 432, 243 Reiprich, T.H., & Böhringer, H., 2002, , 567, 716 Rybicki, G.B., & Lightman, A.P., 1979, New York, Wiley-Interscience, 1979, 393 p. Rines, K., Geller, M. J., Kurtz, M. J., Diaferio, A., 2003, , 126, 2152 Rood, H.J., & Sastry, G. N., 1971, , 83, 313 Sakelliou, I., Hardcastle, M.J., Jetha, N.N., 2008, , 384, 87 Scaife, A.M.M., & Heald, G.H., 2012, , 423, L30 Smith, R.K., Brickhouse, N.S., Liedahl, D.A., Raymond, J.C., 2001, , 556, L91 Snowden, S.L., McCammon, D., Burrows, D.N., Mendenhall, J.A., 1994, , 424, 714 Struble, M.F., & Rood, H. J. 1987, ApJS, 63, 555 Struble, M.F., & Rood, H.J., 1999, ApJS, 125, 35 Taylor, A.R., Stil, J. M., Sunstrum, C., 2009, , 702, 1230 Taylor, G.B., & Perley, R.A., 1993, ApJ, 416, 554 Taylor, G.B., Govoni, F., Allen, S., Fabian, A.C., 2001, MNRAS, 326, 2 Tribble, P.C., 1991, MNRAS, 250, 726 Vacca, V., Murgia, M., Govoni, F., et al., 2010, , 514, A71 Vacca, V., Murgia, M., Govoni, F., et al., 2012, , 540, A38 van Breugel, W., Filippenko, A.V., Heckman, T., & Miley, G., 1985, , 293, 83 van Weeren, R.J., Röttgering, H.J.A., Brüggen, M., Hoeft, M., 2010, Science, 330, 347 Vazza, F., Brüggen, M., Gheller, C., Wang, P., 2014, , 445, 3706 Vogt, C., & Enßlin, T. A. 2003, , 412, 373 Vogt, C., & Enßlin, T. A. 2005, , 434, 67 Wise, M.W., McNamara, B.R., Nulsen, P.E.J., Houck, J.C., David, L.P., 2007, , 659, 1153 Xu, H., Govoni, F., Murgia, M., et al. 2012, , 759, 40
"authors": [
"F. Govoni",
"M. Murgia",
"V. Vacca",
"F. Loi",
"M. Girardi",
"F. Gastaldello",
"G. Giovannini",
"L. Feretti",
"R. Paladino",
"E. Carretti",
"R. Concu",
"A. Melis",
"S. Poppi",
"G. Valente",
"G. Bernardi",
"A. Bonafede",
"W. Boschin",
"M. Brienza",
"T. E. Clarke",
"S. Colafrancesco",
"F. de Gasperin",
"D. Eckert",
"T. A. Ensslin",
"C. Ferrari",
"L. Gregorini",
"M. Johnston-Hollitt",
"H. Junklewitz",
"E. Orru'",
"P. Parma",
"R. Perley",
"M. Rossetti",
"G. B Taylor",
"F. Vazza"
],
"categories": [
"astro-ph.CO",
"astro-ph.GA"
],
"primary_category": "astro-ph.CO",
"published": "20170325130317",
"title": "Sardinia Radio Telescope observations of Abell 194 - the intra-cluster magnetic field power spectrum"
} |
[email protected]^1Department of Physics, Chongqing University, Chongqing 401331, P.R. China ^2 School of Science, Guizhou Minzu University, Guiyang 550025, P.R. China In this paper, we study the B → K^* transition form factors (TFFs) within the QCD light-cone sum rules (LCSR) approach. Two correlators, i.e. the usual one and the right-handed one, are adopted in the LCSR calculation. The resultant LCSRs for the B → K^* TFFs are arranged according to the twist structure of the K^*-meson light-cone distribution amplitudes (LCDAs), whose twist-2, twist-3 and twist-4 terms behave quite differently by using different correlators. We observe that the twist-4 LCDAs, though generally small, shall have sizable contributions to the TFFs A_1/2, V and T_1, thus the twist-4 terms should be kept for a sound prediction. We also observe that even though different choices of the correlator lead to different LCSRs with different twist contributions, the large correlation coefficients for most of the TFFs indicate that the LCSRs for different correlators are close to each order, not only for their values at the large recoil point q^2=0 but also for their ascending trends in whole q^2-region. Such a high degree of correlation is confirmed by their application to the branching fraction of the semi-leptonic decay B → K^* μ^+ μ^-. Thus, a proper choice of correlator may inversely provide a chance for probing uncertain LCDAs, i.e. the contributions from those LCDAs can be amplified to a certain degree via a proper choice of correlator, thus amplifying the sensitivity of the TFFs, and hence their related observables, to those LCDAs.13.25.Hw, 11.55.Hx, 12.38.Aw, 14.40.BeReconsideration of the B → K^* Transition Form Factors within the QCD Light-Cone Sum Rules Hai-Bing Fu^2 December 30, 2023 ==========================================================================================§ INTRODUCTION The heavy-to-light B-meson decay provides an excellent platform for testing the CP-violation phenomena and for seeking new physics beyond the Standard Model (SM). The heavy-to-light transition form factors (TFFs) are key components in those studies, which however are non-trivial due to the fact that for practical values of the momentum transfer and the b-quark mass (m_b), the soft contributions are always numerically important and are often dominant.The Shifman-Vainshtein-Zakharov (SVZ) sum rules <cit.>, which provides an important step forward for studying those non-perturbative hadron phenomenology. It is a method of expanding the correlation function (correlator) into the QCD vacuum condensates with subsequent matching via dispersion relations. The vacuum condensates are non-perturbative but universal, whose contributions follow from the usual power-counting rules at the large q^2-region and the first several ones are enough to achieve the required accuracy. 
Many hadron properties have been successfully described since its invention, and the SVZ sum rules have become a useful tool for studying hadron phenomenology. Following this strategy, one has to deal with a two-point correlator for the heavy-to-light transition form factors (TFFs) <cit.>, which, however, meets specific problems such as the breaking of power counting and the contamination of the sum rules by “non-diagonal" transitions <cit.>, severely restricting the precision and applicability of the SVZ sum rules. To avoid the problems of the two-point SVZ sum rules, the QCD light-cone sum rules (LCSR) approach was later suggested to deal with the heavy-to-light TFFs <cit.>. Its main idea is to make a partial resummation of the operator product expansion (OPE) to all orders and to reorganize the OPE in terms of the twists of the relevant operators rather than their dimensions. The vacuum condensates of the SVZ sum rules are then substituted by the light meson's light-cone distribution amplitudes (LCDAs) of increasing twist. The LCDA, which parameterizes the matrix elements of nonlocal light-ray operators sandwiched between the hadronic state and the vacuum, has a direct physical significance and provides the underlying link between hadronic phenomena at small and large distances. Generally, contributions from the LCDAs obey power-counting rules based on their twists, i.e. the high-twist LCDAs are usually power-suppressed relative to the lower-twist ones in the large Q^2-region, and the first several LCDAs usually provide the dominant contributions to the LCSR. Since its invention, the LCSR approach has been widely adopted for studying B→ light meson decays. In this paper, we concentrate our attention on its application to the B → K^* decays, which is helpful for studying the K^*-meson LCDAs. How to “design" a proper correlator is a tricky problem for the LCSR approach. By choosing a proper correlator, one can not only study the properties of the hadrons but also reduce the theoretical uncertainties effectively. Usually, the correlator is constructed by using currents with definite quantum numbers, such as those with definite J^P, where J is the total angular momentum and P is the parity of the bound state. Such a direct way of constructing the correlator is not the only choice adopted in the literature; e.g. a chiral correlator, with a chiral current inserted in the matrix element, has also been suggested so as to suppress part of the hazy contributions from the uncertain LCDAs <cit.>. The LCDAs of the K^*-meson have a much more complex structure than those of the light pseudoscalar mesons. They comprise two leading-twist (or twist-2) LCDAs ϕ^_2;K^* and ϕ_2;K^*^, seven twist-3 LCDAs ϕ_3;K^*^, ψ_3;K^*^, Φ_3;K^*^, Φ_3;K^*^, ϕ_3;K^*^, ψ_3;K^*^ and Φ_3;K^*^, and twelve twist-4 LCDAs ϕ_4;K^*^, ψ_4;K^*^, Ψ_4;K^*^, Ψ_4;K^*^, Φ_4;K^*^(1), Φ_4;K^*^(2), Φ_4;K^*^(3), Φ_4;K^*^(4), ϕ_4;K^*^, ψ_4;K^*^, Φ_4;K^*^ and Ψ_4;K^*^ <cit.>. By taking the usual correlator, we shall show that, at twist-4 accuracy, the LCSRs of the B→ K^* TFFs contain almost all of the mentioned LCDAs, and that the twist-3 and twist-4 LCDAs have sizable contributions to the B→ K^* TFFs.
At present, some of the high-twist LCDAs have been studied within the QCD sum rules <cit.>, which, however, still suffer from large uncertainties. As an attempt, we have suggested to use a chiral correlator with a right-handed chiral current to deal with the B→ K^* TFFs, so as to suppress the uncertainties from the high-twist LCDA contributions <cit.>. The resultant LCSRs derived there show that the contributions from most of the high-twist LCDAs are suppressed by δ^2 ∼ (m_K^*/m_b)^2 ∼ 0.03, thus uncertainties from the high-twist LCDAs themselves are effectively suppressed. In previous discussions <cit.>, some of the terms that are proportional to high-twist LCDAs have been omitted due to the δ-power counting rule. In this paper, for a sound prediction, we keep all of them in our present calculations; as will be shown later, those terms provide sizable contributions to certain TFFs. It is interesting to check whether the LCSRs under different choices of the correlator are consistent with each other. In this paper, as a step forward, we compare the LCSRs for the TFFs under the usual correlator and the right-handed chiral correlator with the help of the correlation coefficient ρ_XY <cit.>. The remaining parts of the paper are organized as follows. In Sec.II, we present the calculation technology for the B → K^* TFFs within the LCSR approach, where the results for the usual correlator are presented. The results for the right-handed chiral correlator are presented in the Appendix. In Sec.III, we make a comparative study of the various LCSRs for the B→ K^* TFFs, and their application to the branching fraction dℬ(B→ K^* μ^+μ^-)/dq^2 is also presented. Sec.IV is reserved for a summary.§ THE B → K^* TFFS WITHIN THE LCSR APPROACH The B → K^* TFFs, V(q^2), A_0,1,2(q^2) and T_1,2,3(q^2), are related to the matrix elements ⟨ K^* |s̅γ^μ b|B⟩, ⟨ K^* |s̅γ^μγ^5 b|B⟩, ⟨ K^* |s̅σ^μνq_ν b|B⟩ and ⟨ K^* |s̅σ^μνγ^5 q_ν b|B⟩ in the following way <cit.>, ⟨ K^*(p,λ)|s̅γ_μ(1 - γ_5)b|B(p+q)⟩ = - ie_μ^*(λ)(m_B + m_K^*)A_1(q^2) + i(e^*(λ)· q)(2p + q)_μ/(m_B + m_K^*) A_2(q^2) + iq_μ (e^*(λ)· q)(2 m_K^*/q^2)[A_3(q^2) - A_0(q^2)] + ϵ_μναβ e^*(λ)ν q^α p^β 2V(q^2)/(m_B + m_K^*) and ⟨ K^*(p,λ)|s̅σ_μν q^ν (1 + γ_5)b|B(p + q)⟩ = 2iϵ_μναβ e^*(λ)ν q^α p^β T_1(q^2) + e_μ^*(λ)(m_B^2 - m_K^*^2)T_2(q^2) - (2p + q)_μ (e^*(λ)· q)T_3(q^2) + q_μ (e^*(λ)· q)T̃_3(q^2), where p is the momentum of the K^*-meson and q = p_B - p is the momentum transfer, and e^(λ) stands for the K^*-meson polarization vector, with λ denoting its transverse (⊥) or longitudinal (∥) component, respectively. The following relations are helpful, T̃_3(q^2) = [(m_B^2 - m_K^*^2)/q^2] [T_3(q^2) - T_2(q^2)], A_3(q^2) = [(m_B+m_K^*)/(2 m_K^*)] A_1(q^2) - [(m_B-m_K^*)/(2m_K^*)] A_2(q^2), together with A_0(0)=A_3(0) and T_1(0)=T_2(0)=T_3(0). To derive the LCSRs for the B→ K^* TFFs, we introduce the following correlator Π_μ^I,II(p,q) = -i ∫ d^4 x e^iq· x⟨ K^*(p,λ)|T{j_W^I,II(x),j_B^†(0)}|0⟩, where the currents are j_W^I(x) = s̅(x)γ_μ(1 - γ_5)b(x) and j_W^II(x) = s̅(x)σ_μν q^ν(1 + γ_5)b(x). The current j_B^†(x) is usually chosen as i m_b b̅(x)γ_5 q(x), which has the same quantum numbers as the pseudoscalar B-meson, J^P=0^-. For simplicity, we call its corresponding LCSR LCSR-U. As mentioned in the Introduction, the current j_B^†(x) can also be chosen as a chiral current, e.g. the right-handed chiral current i m_b b̅(x)(1+ γ_5)q(x). We call its corresponding LCSR LCSR-R.
The calculation technology for the LCSR is the same for both cases, and we take j_B^†(x) = i m_b b̅(x)γ_5 q(x) as an explicit example to show how to derive the LCSRs for the B→ K^* TFFs up to twist-4 accuracy. The correlator (<ref>) is analytic in the whole q^2-region. In the time-like region, one can insert a complete series of intermediate hadronic states into the correlator and obtain its hadronic representation by isolating the pole term of the lowest pseudoscalar B-meson. More explicitly, the correlator Π_μ^H(I) can be written as Π_μ^H(I)(p,q) = ⟨ K^*|s̅γ_μ(1 - γ_5)b|B⟩⟨ B|b̅ i m_b γ_5 q_1|0⟩/[m_B^2 - (p + q)^2] + ∑_H ⟨ K^*|s̅γ_μ(1 - γ_5)b|B^H⟩⟨ B^H|b̅ i m_b γ_5 q_1|0⟩/[m_B^H^2 - (p + q)^2] = Π_1^H(I) e_μ^*(λ) + Π_2^H(I)(e^*(λ)· q)(2p+q)_μ + Π_3^H(I)(e^*(λ)· q)q_μ + iΠ_4^H(I) ϵ_μ^ναβ e_ν^*(λ) q_α p_β. Here the matrix element ⟨ B|b̅ i m_b γ_5 q_1|0⟩ = m_B^2 f_B, with f_B the B-meson decay constant. Representing the contributions from the higher resonances and continuum states by dispersion integrals, the invariant amplitudes can be written as Π_i^H(I) = m_B^2 f_B(m_B + m_K^*)/[m_B^2 - (p+q)^2] Ã_i(q^2) + ∫_s_0^∞ ρ_i^H(I)(s)/[s - (p+q)^2] ds + ⋯, where i=(1,⋯,4), s_0 is the threshold parameter, and the ellipsis stands for subtraction constants or a finite q^2-polynomial, which do not contribute to the final sum rules. The reduced functions Ã_i are Ã_1=A_1, Ã_2=A_2/(m_B+ m_K^*)^2, Ã_3 = 2m_K^*/[q^2(m_B+ m_K^*)] [A_3(q^2) - A_0(q^2)], Ã_4 = 2V(q^2)/(m_B+ m_K^*)^2. The spectral densities ρ^H(I)_i(s) are estimated by using the quark-hadron duality ansatz <cit.>, i.e. ρ^H(I)_i(s) = ρ^QCD(I)_i(s) θ(s-s_0). In the space-like region, the correlator can be calculated by using the operator product expansion (OPE). With the help of the b-quark propagator <cit.>, ⟨ 0| T{ b(x)b̅(0)} | 0⟩ = i∫ d^4k/(2π)^4 e^{-ik·x} (k̸ + m_b)/(m_b^2 - k^2) - i g_s ∫ d^4k/(2π)^4 e^{-ik·x} ∫_0^1 dv [ (k̸ + m_b)/(2(m_b^2 - k^2)^2) G^μν(vx)σ_μν + v x_μ G^μν(vx)γ_ν/(m_b^2 - k^2) ], the correlator can be expressed as Π_μ^OPE(I)(p,q) = ∫ d^4 x d^4 k/(2π)^4 e^{i(q-k)·x}/(m_b^2 - k^2) { k^ν⟨ K^*(p,λ)| T{s̅(x)γ_μ γ_ν γ_5 q(0) }|0⟩ + k^ν⟨ K^*(p,λ)| T{s̅(x)γ_μ γ_ν q(0)}|0⟩ + m_b ⟨ K^*(p,λ)| T{s̅(x) γ_μ γ_5 q(0)}|0⟩ - m_b⟨ K^*(p,λ)| T{s̅(x) γ_μ q(0) }|0⟩ + ⋯ }. The nonlocal matrix elements on the right-hand side of the above equation can be re-expressed in terms of the LCDAs of various twists <cit.>. We present the relations between the nonlocal matrix elements and the LCDAs in the Appendix. The correlator Π_μ^II(p,q) can be treated in a similar way. With the help of the analytic properties of the correlator in the different q^2-regions, the LCSRs for the B → K^* TFFs are ready to be derived. As a further step, we apply the usual Borel transformation to the sum rules, which removes the subtraction terms in the dispersion relation and effectively suppresses the contributions from the unknown excited resonances and continuum states heavier than the B-meson.
After applying the Borel transformation, our final LCSRs read A_1^𝒰(q^2)= m_K^* m_b/f_B m_B^2(m_B + m_K^*)∫_0^1 du e^( m_B^2 - s(u)) / M^2{ m_K^* f_K^*^ C/2u^2 m_K^* ^2Θ(c(u,s_0))ϕ_2;K^*^(u) +m_K^* f_K^*^/2uΘ(c(u,s_0)) ×ψ_3;K^*^(u)+m_b f_K^*^∥/uΘ(c ( u,s_0 ) ) ϕ_3;K^*^ ( u ) - m_K^* f_K^*^[ m_b^2 C/8u^4M^4Θ(c(u,s_0)) +C-2m_b^2/8u^3M^2Θ(c(u,s_0)) - 1/8u^2Θ(c(u,s_0))]ϕ_4;K^*^(u) - m_b m_K^*^2 f_K^*^∥/u^2 M^2Θ( c( u,s_0 ) )C_K^*(u) - m_K^* f_K^*^[ C/u^3M^2Θ(c(u,s_0))-1/u^2 ×Θ(c(u,s_0))]I_L(u) - m_K^* f_K^*^[2m_b^2/2u^2M^2Θ(c(u,s_0))+1/2uΘ(c(u,s_0)) ] H_3(u)}+∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟 × e^( m_B^2 - s(u)) / M^2Θ(c(u,s_0))/u^2 M^2m_b m_K^*^2/12 f_B m_B^2( m_B + m_K^*){f_K^*^[Ψ _4;K^*^ (α) - 12(Ψ _4;K^*^ (α)-2v Ψ _4;K^*^ (α) +2Φ _4;K^*^(1) (α) -2Φ _4;K^*^(2) (α)+4v Φ _4;K^*^(2) (α))](m_B^2-m_K^*^2+2 u m_K^*^2)+ 2 m_b m_K^* f_K^*^∥(Φ _3;K^*^∥ (α)+ 12 Φ _3;K^*^∥ (α))}, A_2^𝒰(q^2)= m_K^* m_b( m_B + m_K^*)/2f_B m_B^2∫_0^1 du e^( m_B^2 - s(u)) / M^2{m_K^* f_K^*^/u m_K^* ^2Θ(c(u,s_0))ϕ_2;K^*^(u)- m_b f_K^*^/u M^2Θ(c(u,s_0)) ×ψ_3;K^*^(u) - m_K^* f_K^*^/4[m_b^2/u^3M^4Θ(c(u,s_0)) + 1/u^2M^2Θ(c(u,s_0))]ϕ_4;K^*^(u)+ 2 m_b f_K^*^∥/u^2 M^2Θ( c( u,s_0) )× A_K^*(u) - m_K^*^2 m_b^3 f_K^*^∥/2u^4 M^6Θ( c( u,s_0 ) )B_K^*(u) + 2 m_b m_K^*^2 f_K^*^∥/u^2 M^4Θ( c( u,s_0 ))C_K^*(u) + 2 m_K^* f_K^*^ ×[ C - 2m_b^2/u^3 M^4Θ(c(u,s_0)) - 1/u^2M^2Θ(c(u,s_0))]I_L(u)- m_K^* f_K^*^/u M^2Θ(c(u,s_0))H_3(u)}+∫_0^1 dv ∫_0^1 du ×∫_0^1 d 𝒟 e^( m_B^2 - s(u)) / M^2 m_b m_K^*^2 f_K^*^/12 f_B m_B^2 m_B + m_K^*/u^2 M^2Θ(c(u,s_0))[Ψ _4;K^*^ (α) + 12( 2v Ψ _4;K^*^ (α) -Ψ _4;K^*^ (α)+(4v-2) Φ _4;K^*^(1) (α) + 2Φ _4;K^*^(2) (α) )], A_3^𝒰(q^2)-A_0^𝒰(q^2) = m_b q^2/2f_B m_B^2∫_0^1 du e^( m_B^2 - s(u)) / M^2{ - f_K^*^/2u m_K^*Θ (c(u,s_0))ϕ_2;K^*^(u) - (2-u)m_K^*f_K^*^/2u^2M^2Θ(c(u,s_0))×ψ_3;K^*^ (u) + m_K^*f_K^*^/8[m_b^2/u^3M^4Θ(c(u,s_0))+ 1/u^2M^2Θ(c(u,s_0))]ϕ_4;K^*^(u) -m_b f_K^*^/m_K^*u^2M^2Θ(c(u,s_0))A_K^*(u)+ m_K^* m_b^3 f_K^*^/u^4 M^6Θ(c(u,s_0))B_K^*(u)+ m_bm_K^*f_K^*^(2 - u)/u^3 M^4Θ(c(u,s_0))C_K^*(u) + m_K^* f_K^*^[(4 - 2u)×( C/2u^4 M^4Θ(c(u,s_0)) - 1/u^3 M^2Θ(c(u,s_0)))+ (2m_b^2/u^3 M^4Θ(c(u,s_0)) + 1/u^2M^2Θ(c(u,s_0)))]I_L(u) - m_K^*(2-u)f_K^*^/2u^2M^2Θ(c(u,s_0))H_3(u)}-∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟e^( m_B^2 - s(u)) / M^2 m_b q^2 f_K^*^/24 f_B m_B^2Θ(c(u,s_0))/u^2 M^2 ×[12(2v Ψ _4;K^*^ (α)-Ψ _4;K^*^ (α) +(4v-2) Φ _4;K^*^(1) (α)+ 2Φ _4;K^*^(2) (α))+Ψ _4;K^*^ (α)], T_1^𝒰(q^2)= m_b m_K^*/2m_B^2 f_B∫_0^1 du e^( m_B^2 - s(u)) / M^2{m_b m_K^* f_K^*^/u m_K^*^2Θ (c(u,s_0))ϕ _2;K^*^ (u)+ f_K^*^[ ℰ/4u^2M^2Θ(c(u,s_0)) + 1/4u ×Θ(c(u,s_0)) ] ψ_3;K^*^(u) -m_K^* m_b^3 f_K^*^/4u^3M^4Θ (c(u,s_0)) ϕ_4;K^*^(u) + f_K^*^Θ (c(u,s_0))ϕ _3;K^*^(u) + f_K^*^/u ×Θ (c(u,s_0))A_K^*(u) - [ 1/4u^2 M^2Θ(c(u,s_0)) + m_b^2/4u^3 M^4Θ(c(u,s_0)) ] m_K^*^2 f_K^*^ B_K^*(u)-2m_b m_K^* f_K^*^/u^2M^2 ×Θ(c(u,s_0)) I_L(u) - m_b m_K^* f_K^*^/uM^2Θ(c(u,s_0)) H_3(u) }+∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟 e^( m_B^2 - s(u)) / M^2m_K^*/12 f_B m_B^2 ×Θ(c(u,s_0))/u^2 M^2{ m_K^* m_b f_K^*^[Ψ _4;K^*^ (α)-12(Ψ _4;K^*^ (α)+2Φ _4;K^*^(1) (α) - 2 Φ _4;K^*^(2) (α))] + f_K^*^∥ [u m_K^*^2 ×(12(1-2v)Φ_3;K^*^∥ (α)+Φ _3;K^*^∥ (α)) -2 v uΦ _3;K^*^∥ (α) + v (-12Φ_3;K^*^∥ (α)+ Φ _3;K^*^∥ (α))( m_B ^2- m_K^*^2) ] }, T_2^𝒰(q^2)= m_b m_K^*/2m_B^2f_B∫_0^1 du e^( m_B^2 - s(u)) / M^2{m_b m_K^*f_K^*^(1 - ℋ)/u m_K^*^2Θ (c(u,s_0)) ϕ_2;K^*^(u)+ f_K^*^[1 + (2 - u) H/u]×Θ (c(u,s_0)) ϕ _3;K^*^(u) +[ ℱ (1-ℋ) - 4u m_K^*^2 ℋ/4u^2 M^2Θ(c(u,s_0))+ 1 -ℋ/4uΘ (c(u,s_0)) ] f_K^*^ψ _3;K^*^ (u) - m_K^*m_b^3 f_K^*^/4u^3M^4(1 - ℋ)Θ(c(u,s_0))ϕ_4;K^*^(u) + f_K^*^(1 - H)/uΘ (c(u,s_0)) A_K^*(u)-[ 1 - ℋ/4u^2M^2Θ(c(u,s_0)) + (1 - ℋ)m_b^2/4u^3 
M^4Θ(c(u,s_0))]m_K^*^2 f_K^*^ B_K^*(u)- 2m_b m_K^*f_K^*^ (1 - ℋ)/u^2M^2Θ(c(u,s_0))I_L(u) - m_b m_K^* f_K^*^/uM^2 ×[ 1+ ( 2/u-1 )ℋ]Θ(c(u,s_0)) H_3(u) }+∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟 e^( m_B^2 - s(u)) / M^2m_K^*/12 f_B m_B^2Θ(c(u,s_0))/u^2 M^2 ×{ m_K^*m_b f_K^*^[Ψ _4;K^*^ (α)-12(Ψ _4;K^*^ (α)+2Φ _4;K^*^(1) (α) - 2 Φ _4;K^*^(2) (α))]+ f_K^*^∥/2(m_B^2-m_K^*^2)Θ(c(u,s_0))/u^2 M^2 ×[ v (Φ _3;K^*^∥ (α)-12Φ_3;K^*^∥ (α) ) ( m_B^2 - m_K^*^2)^2 + u m_K^*^2(Φ _3;K^*^∥ (α)+12(1-2v)Φ_3;K^*^∥ (α))( m_B - m_K^*^2) + 2 q^2 m_K^*^2 (-2v Φ _3;K^*^∥ (α)+Φ _3;K^*^∥ (α)+12 Φ _3;K^*^∥ (α) )]}, V^𝒰(q^2)= m_b ( m_B + m_K^*)/2f_B m_B^2∫_0^1 du e^( m_B^2 - s(u)) / M^2{f_K^*^Θ(c(u,s_0))ϕ_2;K^*^(u)+m_K^* m_b f_K^*^∥/2u^2 M^2Θ( c( u,s_0 ) )×ψ _3;K^*^(u) - [m_b^2/u^2M^4Θ(c(u,s_0))+1/uM^2Θ(c(u,s_0))]m_K^*^2 f_K^*^/4ϕ_4;K^*^ (u)}+ ∫_0^1 dv ∫_0^1 du ×∫_0^1 d 𝒟 e^( m_B^2 - s(u)) / M^2 m_K^*^2 f_K^*^/6Θ(c(u,s_0))/u^2 M^2[(2v-1)Ψ _4;K^*^ (α) + 12(Ψ _4;K^*^ (α)-2(v-1) × (Φ _4;K^*^(1) (α)-Φ _4;K^*^(2) (α)))], T_3^𝒰(q^2)= m_b m_K^*/2m_B^2 f_B∫_0^1 du e^( m_B^2 - s(u)) / M^2{m_b m_K^*f_K^*^/u m_K^*^2Θ(c(u,s_0))ϕ_2;K^*^(u) + f_K^*^(1 - 2/u)Θ(c(u,s_0))ϕ_3;K^*^ (u) + f_K^*^[ ℱ + 4um_K^*^2/4u^2 M^2Θ(c(u,s_0)) + 1/4uΘ (c(u,s_0))] ψ _3;K^*^(u)-m_K^* m_b^3 f_K^*^/4u^3M^4Θ(c(u,s_0))ϕ_4;K^*^(u) + 2f_K^*^ ×[ m_B^2 - m_K^*^2/u^2 M^2Θ(c(u,s_0)) + 1/2uΘ (c(u,s_0))] A_K^*(u) -[ 1/4u^2 M^2Θ(c(u,s_0))+ 2q^2 + m_b^2 + 2𝒬/4u^3M^4 ×Θ(c(u,s_0))+ (m_K^*^2 + 𝒬)m_b^2/2u^4 M^6Θ(c(u,s_0)) ] m_K^*^2f_K^*^ B_K^*(u)- m_b m_K^* f_K^*^[2/u^2M^2Θ(c(u,s_0)) + 4/u^3 M^4Θ(c(u,s_0))(m_B^2 - m_K^*^2)] I_L(u) + m_b m_K^* f_K^*^[ 2/u^2M^2 - 1/uM^2] Θ(c(u,s_0))H_3(u) }+ ∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟 e^( m_B^2 - s(u)) / M^2m_K^*/12 f_B m_B^2Θ(c(u,s_0))/u^2 M^2{ m_K^*m_b f_K^*^[Ψ _4;K^*^ (α)-12(Ψ _4;K^*^ (α) +2Φ _4;K^*^(1) (α)- 2 Φ _4;K^*^(2) (α))]+f_K^*^∥[ m_K^*^2 ( Φ_3;K^*^∥ (α)(4v-u-2) + 12 Φ_3;K^*^∥ (α)(2vu-u-2) )- v ( m_B^2 - m_K^*^2 ) (Φ _3;K^*^∥ (α)-12Φ_3;K^*^∥ (α))]}, where the superscript U indicates those LCSRs are for the usual correlator with j_B^† (x)=i m_b b̅(x)γ_5 q(x). The LCDAs are generally scale-dependent, and for convenience we have implicitly omitted the factorization scale μ in the LCDAs. ∫ d D=∫ dα_1 dα_2 dα_3 δ(1 - ∑_i= 1^3α_i). ℋ = q^2/(m_B^2 - m_K^*^2), ℰ = m_b^2 - u^2 m_K^*^2 + q^2, 𝒞=m_b^2+u^2m_K^*^2-q^2, 𝒬 = m_B^2 - m_K^*^2 - q^2, ℱ = m_b^2 - u^2 m_K^*^2 - q^2, c(ϱ,s_0) = ϱ s_0 - m_b^2 + ϱ̅q^2 - ϱϱ̅m_K^*^2 and s(ϱ)=[ m_b^2 - ϱ̅(q^2 - ϱ m_K^*^2)] / ϱ (ϱ = u) with ϱ̅= 1 - ϱ. Θ(c(u,s_0)) is the usual step function. Θ(c(u,s_0)) and Θ(c(u,s_0)) come from the surface terms δ(c(u_0,s_0)) and Δ (c(u_0,s_0)), whose explicit forms have been given in Ref.<cit.>.The reduced functions I_L(u), H_3(u), A_K^*(u), B_K^*(u), and C_K^*(u) are defined asI_L(u)=∫_0^u dv ∫_0^v dw [ϕ_3;K^*^(w) -1/2ϕ_2;K^*^(w) -1/2ψ_4;K^*^(w)],H_3(u)=∫_0^u dv [ψ_4;K^*^(v)-ϕ_2;K^*^(v)], A_K^*(u) =∫_0^u dv [ ϕ _2;K^*^ (v) - ϕ _3;K^*^(v) ], B_K^*(u) =∫_0^u dv ϕ _4;K^*^ (v), C_K^*(u) =∫_0^u dv ∫_0^v dw[ψ _4;K^*^(w) + ϕ _2;K^*^(w)..- 2 ϕ_3;K^*^(w) ]. By using the same correlator and keeping only the first term of the b-quark propagator (<ref>), Ref.<cit.> calculated the LCSRs for the B→ρ TFFs A_1, A_2 and V, and Ref.<cit.> calculated the LCSRs for the B→ K^* TFFs A_1, A_2, A_3 - A_0, V, T_1, T_2 and T_3. 
All those LCSRs are given up to twist-3 accuracy [In those two references the surface terms have not been taken into consideration, and because only the 2-particle terms have been kept in the matrix elements, the twist-3 LCDAs involving 3-particle contributions have also been missing from the LCSRs.]. As a cross-check, we find that, if we keep terms up to the same twist-3 accuracy and transform to the same definitions of the form factors, we recover the expressions listed in Refs.<cit.>. Up to twist-4 accuracy, we present the required K^*-meson LCDAs in Table <ref>. All of those LCDAs appear in the LCSRs (<ref>, <ref>, <ref>, <ref>, <ref>, <ref>, <ref>). The accuracy of the LCSRs thus depends heavily on how well we know those LCDAs. In general, the contributions from the twist-4 terms are numerically small, so the uncertainties from the twist-4 LCDAs themselves are highly suppressed. For the numerical calculation we directly adopt the twist-4 LCDAs derived by applying the conformal expansion of the matrix elements <cit.>. The twist-3 contributions are generally suppressed by certain δ-powers (δ=m_K^*/m_b∼ 0.17) and 1/M^2-powers relative to the leading twist-2 terms. For example, the twist-3 contributions from the LCDAs ϕ_3;K^*^, ψ_3;K^*^, Φ_3;K^*^ and Φ_3;K^*^ are suppressed by δ^1, and the twist-3 contributions from the LCDAs ϕ_3;K^*^, ψ_3;K^*^ and Φ_3;K^*^ are suppressed by δ^2. However, the twist-3 contributions are sizable and important in certain kinematic regions, so special care must be taken to obtain a precise prediction. On the one hand, one may use the more accurate twist-2 LCDAs to predict the twist-3 contributions. This can be achieved by applying the relations among the twist-2 and twist-3 LCDAs. For example, under the Wandzura-Wilczek approximation <cit.>, the 2-particle twist-3 LCDAs ψ_3;K^*^, ϕ_3;K^*^, ψ_3;K^*^ and ϕ_3;K^*^ can be related to the twist-2 LCDAs ϕ_2;K^*^ and ϕ_2;K^*^ via the following relations <cit.> ψ_3;K^*^(u) = 2[u̅∫_0^u dv ϕ_2;K^*^(v)/v̅ + u ∫_u^1 dv ϕ_2;K^*^(v)/v], ϕ_3;K^*^(u) = 1/2[∫_0^u dv ϕ_2;K^*^(v)/v̅ + ∫_u^1 dv ϕ_2;K^*^(v)/v], ψ_3;K^*^(u) = 2[u̅∫_0^u dv ϕ_2;K^*^(v)/v̅ + u ∫_u^1 dv ϕ_2;K^*^(v)/v], ϕ_3;K^*^(u) = (1-2u̅)[∫_0^u dv ϕ_2;K^*^(v)/v̅ - ∫_u^1 dv ϕ_2;K^*^(v)/v], where u̅ = 1-u and v̅ = 1-v. The contributions from the remaining three 3-particle twist-3 LCDAs to the B→ K^* TFFs are numerically small; thus, as with the twist-4 LCDAs, we directly take them from Ref.<cit.>. On the other hand, it has been suggested that, by using the improved LCSR approach <cit.> and by taking a chiral correlator, the less certain high-twist contributions can be strongly suppressed. Ref.<cit.> has shown that, by taking a right-handed correlator with j_B^†(x) = i m_b b̅(x)(1+ γ_5)q(x), the twist-3 LCDAs ϕ_3;K^*^, ψ_3;K^*^, Φ_3;K^*^, Φ_3;K^*^, and even the twist-2 LCDA ϕ_2;K^*^, disappear from the LCSRs. Thus the uncertain twist-3 contributions can be highly suppressed. Following the standard LCSR procedures, and keeping all the terms that contribute to the LCSRs up to twist-4 accuracy, we recalculate the B→ K^* TFFs for the right-handed chiral correlator. Our final LCSRs are presented in the Appendix. The hadronic representation of the chiral correlator contains an extra resonance with J^P = 0^+ in addition to the usual one with J^P=0^-, introducing an extra uncertainty into the LCSR-R.
The LCSR-R eliminates the large uncertainties from the twist-2 and twist-3 structures that enter at δ^1-order, and the pollution from the 0^+ resonance can be suppressed by a proper choice of the continuum threshold s_0; it is therefore worthwhile to use a chiral correlator. Numerically, we confirm our previous observation that the final LCSRs have only a slight s_0 dependence <cit.>, so the uncertainties from the J^P = 0^+ resonance are small.§ NUMERICAL ANALYSIS §.§ Basic inputs In doing the numerical calculation, we take the K^*-meson decay constants f^⊥_K^* = 0.185(9) GeV and f^∥_K^* = 0.220(5) GeV <cit.>, the b-quark mass m_b = 4.80 ± 0.05 GeV, the K^*-meson mass m_K^* = 0.892 GeV, the B-meson mass m_B = 5.279 GeV <cit.>, and the B-meson decay constant f_B = 0.160 ± 0.019 GeV <cit.>. The factorization scale μ is set to the typical momentum of the heavy b-quark, i.e. μ≃ (m^2_B-m^2_b)^1/2 ∼ 2.2 GeV <cit.>, and we estimate its error by taking Δμ = ± 1.0 GeV. The choices of twist-3 and twist-4 LCDAs have been explained in the last subsection. As for the twist-2 LCDAs, we adopt a model following the idea of the Wu-Huang model for the pion LCDA <cit.> to do the calculation <cit.>, ϕ_2;K^*^λ(x) = [A_2;K^*^λ √(3x x̅) Y / (8π^3/2 f̂_K^*^λ b_2;K^*^λ)] [1 + B_2;K^*^λ C_1^3/2(ξ) + C_2;K^*^λ C_2^3/2(ξ)] exp[- (b_2;K^*^λ)^2 (x̅ m_s^2 + x m_q^2 - Y^2)/(x x̅)] × [Erf(b_2;K^*^λ √(μ_0^2 + Y^2/(x x̅))) - Erf(b_2;K^*^λ √(Y^2/(x x̅)))], where λ = ∥ or ⊥, f̂_K^*^⊥ = f_K^*^⊥/√(3) and f̂_K^*^∥ = f_K^*^∥/√(5) are reduced decay constants, ξ = 2x-1, Y = x m_s + x̅ m_q, and Erf(x) = (2/√(π))∫_0^x e^{-t^2} dt is the error function. The model incorporates the transverse momentum dependence together with the longitudinal one, following the Brodsky-Huang-Lepage prescription <cit.> and the Wigner-Melosh rotation <cit.>. Including the transverse effects in the light-meson wavefunction in this way helps to effectively suppress the end-point singularity in high-energy processes involving light mesons, cf. the review <cit.>. The model parameters A_2;K^*^λ, B_2;K^*^λ, C_2;K^*^λ and b_2;K^*^λ can be fixed by applying the following criteria: * The normalization condition of the twist-2 LCDA, i.e. ∫ϕ_2;K^*^λ(x)dx=1; * As shown by Ref.<cit.>, the square root of the average squared transverse momentum, ⟨ k_⊥^2 ⟩_K^*^1/2, can be determined from the light-cone wavefunction, which is related to the LCDA by integrating out its transverse momentum dependence. To fix the parameter, we adopt ⟨ k_⊥^2 ⟩_K^*^1/2 = 0.37(2) GeV <cit.>. * Generally, the twist-2 LCDA can be expanded in Gegenbauer polynomials, ϕ_2;K^*^λ(x) = 6x x̅ (1 + ∑_n=1,2,⋯ a_n^λ C_n^3/2(ξ)), whose Gegenbauer moments a_n^λ can be calculated in the following way, owing to the orthogonality of the Gegenbauer functions, i.e. a_n^λ = [∫_0^1 dx ϕ_2;K^*^λ(x) C_n^3/2(ξ)] / [∫_0^1 dx 6x x̅ [C_n^3/2(ξ)]^2]. Generally, the behavior of the twist-2 LCDA is dominated by its first several terms. We adopt the first two Gegenbauer moments derived from the QCD sum rules <cit.> to fix the parameters, i.e. a_1^⊥(1 GeV) = 0.04(3) and a_2^⊥(1 GeV) = 0.10(8) for ϕ^⊥_2;K^*, and a_1^∥(1 GeV) = 0.03(2) and a_2^∥(1 GeV) = 0.11(9) for ϕ^∥_2;K^*. This way, we get the LCDA at the scale of 1 GeV, and its behavior at any other scale can be obtained via the renormalization group evolution <cit.>. The parameters of ϕ_2;K^*^⊥ for a_1^⊥(1 GeV) = 0.04(3) and a_2^⊥(1 GeV) = 0.10(8) have been given in Ref.<cit.>. We present the parameters of ϕ_2;K^*^∥ for a_1^∥(1 GeV) = 0.03(2) and a_2^∥(1 GeV) = 0.11(9) in Table <ref>, and the corresponding LCDA behavior in Fig.<ref>.
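For readers who wish to reproduce this parameter fixing, the orthogonality relation above translates directly into a short numerical routine. The following Python sketch is our own illustration (the function names and the use of SciPy are our assumptions, not part of the original analysis); it evaluates the Gegenbauer moments of a given model LCDA and can be embedded in a least-squares fit of A_2;K^*^λ, B_2;K^*^λ, C_2;K^*^λ and b_2;K^*^λ to the three constraints listed above.

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer

def gegenbauer_moment(phi, n):
    """n-th Gegenbauer moment a_n of a twist-2 LCDA phi(x) on 0 <= x <= 1."""
    cn = lambda x: eval_gegenbauer(n, 1.5, 2.0 * x - 1.0)   # C_n^{3/2}(2x-1)
    num, _ = quad(lambda x: phi(x) * cn(x), 0.0, 1.0)
    den, _ = quad(lambda x: 6.0 * x * (1.0 - x) * cn(x) ** 2, 0.0, 1.0)
    return num / den

# Sanity check: the asymptotic LCDA 6 x (1-x) is normalized and has a_n = 0.
phi_asym = lambda x: 6.0 * x * (1.0 - x)
print(quad(phi_asym, 0.0, 1.0)[0])        # normalization -> 1.0
print(gegenbauer_moment(phi_asym, 1))     # -> 0 (up to numerical noise)
print(gegenbauer_moment(phi_asym, 2))     # -> 0

Fixing the model parameters then amounts to adjusting them until the normalization, ⟨ k_⊥^2 ⟩_K^*^1/2, and the moments a_1^λ and a_2^λ reproduce the input values quoted above.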
§.§ Criteria for the LCSRs The Borel parameter M^2 and the continuum threshold s_0 are determined by the following criteria: * The continuum contribution, which is the part of the dispersive integral from s_0 to ∞, should not be too large. We take it to be less than 50% of the total LCSR, ∫_s_0^∞ ds ρ^tot(s)e^-s/M^2 / ∫_m_b^2^∞ ds ρ^tot(s)e^-s/M^2 ≤ 50%. * All high-twist LCDAs' contributions are less than 35% of the total LCSR, qualitatively ensuring the usual power counting of twist contributions. * The derivatives of the TFFs with respect to 1/M^2 give LCSRs for m_B. We require all predicted B-meson masses to reproduce the experimental value with high accuracy, e.g. | m_B^LCSR - m_B^exp| / m_B^exp ≤ 0.1%. The determined continuum threshold s_0 and Borel parameter M^2 for the various B→ K^* TFFs at the large recoil point q^2 = 0 are listed in Table <ref>. §.§ Properties of the LCSRs We present the sum rules for the B→ K^* TFFs T_1,2,3(q^2), A_0,1,2(q^2) and V(q^2) for the right-handed chiral correlator (LCSR-R) and for the usual correlator (LCSR-U) in Figs.(<ref>, <ref>), in which the solid line stands for the central value and the shaded band is the theoretical error. The error is the squared average of the errors caused by all the mentioned error sources; e.g., we adopt Δ M^2_R/U = ±0.5 GeV^2 and Δ s_0,R/U = ± 0.5 GeV^2 [The TFFs change very slightly when taking Δ M^2_R/U = ±0.5 GeV^2, and the change is still only ∼3% when setting Δ M^2_R/U = ±1.0 GeV^2. Thus our predictions are consistent with the usual flatness criterion for determining the Borel window <cit.>.]. Figs.(<ref>, <ref>) indicate that all the TFFs increase with increasing q^2. We present the LCSRs together with their errors in the large recoil region q^2 → 0 in Table <ref>. As a comparison, the Ball and Zwicky (BZ) prediction <cit.>, the AdS/QCD prediction <cit.>, and the LCSR prediction <cit.> are also presented. Those TFFs are consistent with each other within errors. A smaller Borel parameter implies a stronger M^2-dependence, due to the weaker convergence of the 1/M^2 expansion, and hence a larger M^2-uncertainty. This explains why a larger M^2-uncertainty than our present one is observed in Ref.<cit.>, whose Borel parameter is taken as M^2 = 1.00 ± 0.25 GeV^2 by using a rough scaling relation, M^2 ∼ 2m_b τ∼ 1 GeV^2 <cit.>, which is much smaller than the M^2-values shown in Table <ref>. We present the contributions from the K^*-meson LCDAs up to twist-4 in Table <ref>. For the LCSR-U of the usual correlator, the relative importance of the different twist LCDAs follows the trend twist-2 > twist-3 > twist-4; for the LCSR-R of the right-handed chiral correlator, we have twist-2 ≫ twist-3 ∼ twist-4. The dominance of the twist-2 term indicates that a more convergent twist expansion is achieved by using the chiral correlator. In Table <ref>, a somewhat larger twist-4 contribution is observed for A^R/U_1 and T^R/U_1, which comes from the twist-4 LCDA ψ_4;K^*^ in the reduced function H_3=∫_0^u dv [ψ_4;K^*^(v)- ϕ_2;K^*^(v)]; because of the large cancellation against the twist-2 LCDA ϕ_2;K^*^, the net contribution of H_3 is small, about 0.5% of the twist-2 ones. Except for H_3, the remaining twist-4 contributions are still about 10% of the twist-2 ones for A^R/U_1/2, V^R/U and T^R/U_1, so the twist-4 terms are important and should be kept for a sound prediction. As shown by Table <ref>, the factorization scale dependence is small for all the B→ K^* TFFs, e.g. less than 3% when taking μ = (2.2 ± 1.0) GeV. Table <ref> shows that, when setting μ > 2.2 GeV, the TFFs are almost unchanged.
This negligible dependence at larger scale values is consistent with the fact that the K^* LCDAs change only slightly when run from 2.2 GeV to higher scales via the renormalization group evolution <cit.>. §.§ An extrapolation of the TFFs and the correlation coefficient ρ_XY for the two LCSRs The LCSRs are valid when the K^*-meson has a large energy in the rest frame of the B-meson, E_K^* ≫ Λ_QCD; using the relation q^2 = m_B^2 - 2m_B E_K^*, one usually adopts 0 ≤ q^2 ≤ 14 GeV^2. We adopt the simplified series expansion (SSE) to extrapolate the TFFs to the whole physically allowed q^2-region, i.e. the TFFs F_i(q^2) are expanded as <cit.> F_i(q^2) = 1/(1-q^2/m_R,i^2) ∑_k=0,1,2 a_k^i [ z(q^2) - z(0) ]^k, where F_i stands for A_0,1,2(q^2), V(q^2) and T_1,2,3(q^2), respectively. The function z(t) = [√(t_+-t) - √(t_+ - t_0)]/[√(t_+ - t) + √(t_+-t_0)] with t_± = (m_B ± m_K^*)^2 and t_0 = t_+ (1 - √(1 - t_- / t_+)). The resonance masses m_R,i have been given in Ref.<cit.>. The coefficients a_0^i=F_i(0), a_1^i and a_2^i are determined such that the quality of fit (Δ) is at the level of a few percent. The quality of fit is defined as <cit.> Δ = [∑_t|F_i(t)-F_i^fit(t)| / ∑_t|F_i(t)|] × 100, where t ∈[0, 1/2, ⋯, 27/2, 14] GeV^2. We list the determined parameters a^i_1,2 in Table <ref>, in which all the LCSR parameters are set to their central values. We present the extrapolated B→ K^* TFFs in Fig.(<ref>) and Fig.(<ref>), where the AdS/QCD prediction <cit.> and the Lattice QCD prediction <cit.> are also given for comparison. Figs.(<ref>, <ref>) show that the sum rules of LCSR-R and LCSR-U are close in shape. We adopt the correlation coefficient ρ_XY to show to what degree those LCSRs are correlated. The correlation coefficient is defined as <cit.> ρ_XY = Cov(X,Y)/σ_X σ_Y. X and Y stand for the LCSR-R and LCSR-U sum rules for the TFFs, respectively. The covariance is Cov(X,Y) = E[(X-E(X))(Y-E(Y))] = E(XY)-E(X)E(Y), where E denotes the expectation value of a function, and σ_X and σ_Y are the standard deviations of X and Y. The range of |ρ_XY| is from 0 to 1; a larger |ρ_XY| indicates a higher consistency between X and Y. The correlation coefficients for the various TFFs are listed in Table <ref>. The correlation coefficients for most of the TFFs are larger than 0.5, implying that those TFFs are consistent with each other, or significantly correlated, even though they are calculated by using different correlators. In the LCSRs, the twist-2, twist-3 and twist-4 terms behave differently for different correlators. A large ρ_XY shows that the net contributions from the various twists for LCSR-R and LCSR-U are close to each other, not only in their values at the large recoil point q^2=0 but also in their ascending trends in the whole q^2-region.
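As an illustration of the extrapolation procedure, the following Python sketch implements z(t), the SSE ansatz and the quality-of-fit measure Δ. The code and any numerical choices in it are our own illustrative assumptions; the actual fitted coefficients a_k^i and resonance masses m_R,i are those of Table <ref> and Ref.<cit.>.

import numpy as np

MB, MKST = 5.279, 0.892                       # m_B and m_K* in GeV (inputs above)
T_PLUS, T_MINUS = (MB + MKST) ** 2, (MB - MKST) ** 2
T0 = T_PLUS * (1.0 - np.sqrt(1.0 - T_MINUS / T_PLUS))

def z(t):
    """Conformal variable z(t) entering the simplified series expansion."""
    a, b = np.sqrt(T_PLUS - t), np.sqrt(T_PLUS - T0)
    return (a - b) / (a + b)

def f_sse(q2, a0, a1, a2, m_res):
    """SSE ansatz: a simple pole times a quadratic polynomial in z(q^2)-z(0)."""
    zz = z(q2) - z(0.0)
    return (a0 + a1 * zz + a2 * zz ** 2) / (1.0 - q2 / m_res ** 2)

def fit_quality(f_lcsr, f_fit, grid):
    """Quality of fit Delta, in percent, over the LCSR window."""
    num = sum(abs(f_lcsr(t) - f_fit(t)) for t in grid)
    den = sum(abs(f_lcsr(t)) for t in grid)
    return 100.0 * num / den

grid = np.arange(0.0, 14.0 + 0.25, 0.5)       # t = 0, 1/2, ..., 14 GeV^2
# The correlation coefficient between two sampled sum rules X and Y follows
# from rho_XY = np.corrcoef(X, Y)[0, 1].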
Numerically, we find that the correlation coefficient between the LCSR-U and LCSR-R predictions for the branching fraction is the same for the channels B^+→ K^{*+}μ^+μ^- and B^0→ K^{*0}μ^+μ^-, with a significant value, ρ_XY=0.64. This is due to the fact that the TFFs A_1 and A_2 dominate the branching fraction, and their correlation coefficients, as shown by Table <ref>, are large.§ SUMMARY In this paper, we have studied the B → K^* TFFs under the LCSR approach by applying two correlators, i.e. the usual one with j_B^†(x) = i m_b b̅(x)γ_5 q(x) and the right-handed chiral one with j_B^†(x) = i m_b b̅(x)(1+ γ_5)q(x), which lead to different light-cone sum rules for the TFFs, i.e. LCSR-U and LCSR-R, respectively. The LCSRs for the B → K^* TFFs are arranged according to the twist structure of the K^*-meson LCDAs, whose twist-2, twist-3 and twist-4 terms behave quite differently for different correlators. The 2-particle and 3-particle LCDAs up to twist-4 accuracy have been kept explicitly in the LCSRs. For the LCSR-U, almost all of the LCDAs contribute, and the relative importance of the different twists follows the usual trend, twist-2 > twist-3 > twist-4. For the LCSR-R, only part of the LCDAs enter the TFFs, so the uncertainty from the poorly known high-twist LCDAs is greatly suppressed; moreover, the relative importance of the different twists changes to twist-2 ≫ twist-3 ∼ twist-4. The dominance of the twist-2 term indicates that a more convergent twist expansion can be achieved by using a chiral correlator. Two exceptions to the twist power-counting rule are caused by the twist-4 LCDA ψ_4;K^*^; however, it contributes to the TFFs via the reduced function H_3=∫_0^u dv [ψ_4;K^*^(v)- ϕ_2;K^*^(v)], whose net contribution is negligible. Except for H_3, the remaining twist-4 contributions are about 10% of the twist-2 ones for the TFFs A^R/U_1/2, V and T^R/U_1, so the twist-4 terms should be kept for a sound prediction. We have observed that the different LCSRs for the B→ K^* TFFs, i.e. LCSR-U and LCSR-R, are consistent with each other even though they have been calculated by using different correlators. As shown by Table <ref>, the large correlation coefficients for most of the TFFs show that the net twist contributions for LCSR-R and LCSR-U are close to each other, not only in their values at the large recoil point q^2=0 but also in their ascending trends in the whole q^2-region. The high correlation of those LCSRs is further confirmed by their application to the branching fraction of the semi-leptonic decay B → K^* μ^+ μ^-, i.e. they are significantly correlated, with ρ_XY=0.64. The K^*-meson LCDAs contribute differently to the LCSRs for different correlators. The consistency of the different LCSRs in turn provides a suitable platform for probing unknown or uncertain LCDAs, i.e. the contributions from those LCDAs to the TFFs can be amplified to a certain degree via a proper choice of correlator, thus amplifying the sensitivity of the TFFs, and hence of their related observables, to those LCDAs. Acknowledgments: This work was supported in part by the Natural Science Foundation of China under Grants No.11625520 and No.11647112.§ THE RELATIONS BETWEEN THE LCDAS AND THE NONLOCAL MATRIX ELEMENTS The nonlocal matrix elements on the right-hand side of the above equation can be re-expressed in terms of the LCDAs of various twists <cit.>, i.e.
⟨K^*(p,λ )|s̅(x) q_1(0)|0⟩ =- i/2 f_K^*^(e^*(λ )· x) m_K^*^2∫_0^1 du e^iup· xψ_3;K^*^∥(u), ⟨K^*(p,λ )|s̅ (x)γ _βγ _5q_1(0)|0⟩ = 1/4ε^*(λ )_βm_K^*f_K^*^∥∫_0^1 d ue^iup· xψ_3;K^*^(x), ⟨K^*(p,λ )|s(x)γ _βq_1(0)|0⟩ = m_K^*f_K^*^∥∫_0^1 d ue^iu (p· x){e^*(λ )· x/p · xp_β[ ϕ _2;K^*^∥ (u) - ϕ _3;K^*^(u)] + e^*(λ )· x/p · xp_βm_ρ ^2x^2/16ϕ _4;K^*^∥ (u) + e^*(λ )_βϕ _3;K^*^(u) - 1/2x_βe^*(λ )· x/(p · x)^2m_K^*^2[ ψ _4;K^*^∥ (u) + ϕ _2;K^*^∥ (u) - 2ϕ _3;K^*^(u)]}, ⟨K^* (p,λ)|s̅(x)σ_μνq_1(0)|0⟩ =- i f_K^*^∫_0^1 du e^iu(p· x){(e^*(λ )_μ p_ν - e^*(λ )_ν p_μ)[ϕ_2;K^*^(u)+ m_K^*^2 x^2/16ϕ_4;K^*^(u)]+(p_μ x_ν - p_ν x_μ) e^*(λ )· x/(p· x)^2 m_K^*^2 [ϕ_3;K^*^(u) - 1/2ϕ_2;K^*^(u) - 1/2ψ_4;K^*^(u)] +1/2( e^*(λ )_μ x_ν- e^*(λ )_ν x_μ) m_K^*^2/p· x[ ψ_4;K^*^(u)-ϕ_2;K^*^(u) ] }. ⟨ 0 |q (0)gG_μν(v x)s( - x)| K^*(P,λ )⟩ =if_K^*^m_K^*^2[e_μ^(λ )p_ν - e_ν^(λ )p_μ]Ψ _4;K^*^(v,px) ⟨ 0 |q (0)igG̃ _μν(v x)γ _5s( - x)| K^*(P,λ )⟩ =if_K^*^m_K^*^2[e_μ^(λ )p_ν - e_ν^(λ )p_μ]Ψ̃_4;K^*^(v,px) ⟨ 0 |q (x)γ _μγ _5gG̃ _αβ(vx)s( - x)| K^*(P,λ )⟩ = p_μ(e_α^(λ )p_β - e_β^(λ )p_α)f_K^*^m_K^*Φ̃_3;K^*^(v,px) + (p_α g_βμ^ - p_β g_αμ^)e^(λ )x/px f_K^*^m_K^*^3 Φ̃_4;K^*^(v,px)+ p_μ (p_α x_β- p_β x_α )e^(λ )x/(p z)^2 f_K^*^ m_K^*^3 Ψ̃_4;K^*^(v,px) ⟨ 0 |q (x)iγ _μ g G_αβ(v x)s( - x)| K^*(P,λ ) ⟩ =p_μ (e_α^(λ )p_β- e_β^(λ ) p_α ) f_K^*^ m_K^*Φ _3;K^*^(v,px) + (p_α g_βμ^ - p_β g_αμ^)e^(λ )x/p x f_K^*^ m_K^*^3 Φ _4;K^*^(v,px) + p_μ (p_α x_β- p_β x_α ) e^(λ )x/(px)^2 f_K^*^ m_K^*^3 Ψ _4;K^*^(v,px) ⟨ 0 |q (x)σ _αβ g G_μν(v x)s( - x)| K^*(P,λ ) ⟩ =f_K^*^m_K^*^2e^(λ )· x/2(p· x)[p_α p_μ g_βν^ - p_β p_μ g_αν^ - p_α p_ν g_βμ^ + p_β p_ν g_αμ^]Φ _3;K^*^(v,px) + f_K^*^m_K^*^2[p_α e_μ^(λ )g_βν^ - p_β e_μ^(λ )g_αν^ - p_α e_ν^(λ )g_βμ^ + p_β e_ν^(λ )g_αμ^]Φ _4;K^*^ (1)(v,px) + f_K^*^m_K^*^2[p_μ e_α^(λ )g_βν^ - p_μ e_β^(λ )g_αν^ - p_ν e_α^(λ )g_βμ^ + p_ν e_β^(λ )g_αμ^]Φ _4;K^*^ (2)(v,px) + f_K^*^m_K^*^2/p· x[p_α p_μ e_β^(λ ) x_ν- p_β p_μ e_α^(λ )x_ν- p_α p_ν e_β^(λ )x_ν + p_β p_ν e_α^(λ )x_ν ]Φ _4;K^*^ (3)(v,px) + f_K^*^m_K^*^2/p· x[p_α p_μ e_ν^(λ )x_β- p_β p_μ e_ν^(λ )x_α- p_α p_ν e_μ^(λ )x_β+ p_β p_ν e_ν^(λ )x_α ]Φ _4;K^*^ (4)(v,px) ⟨ 0 |q (x)i σ_αβ g G̃_μν (v x)s( - x)| K^*(P,λ ) ⟩ =f_K^*^m_K^*^2e^(λ )· x/2(p· x)[p_α p_μ g_βν^ - p_β p_μ g_αν^ - p_α p_ν g_βμ^ + p_β p_ν g_αμ^]Φ _3;K^*^(v,px) + f_K^*^m_K^*^2[p_α e_μ^(λ )g_βν^ - p_β e_μ^(λ )g_αν^ - p_α e_ν^(λ )g_βμ^ + p_β e_ν^(λ )g_αμ^] Φ̃_4;K^*^ (1)(v,px) + f_K^*^m_K^*^2[p_μ e_α^(λ )g_βν^ - p_μ e_β^(λ )g_αν^ - p_ν e_α^(λ )g_βμ^ + p_ν e_β^(λ )g_αμ^] Φ̃_4;K^*^ (2)(v,px) + f_K^*^m_K^*^2/p· x[p_α p_μ e_β^(λ ) x_ν- p_β p_μ e_α^(λ )x_ν- p_α p_ν e_β^(λ )x_ν + p_β p_ν e_α^(λ )x_ν ] Φ̃_4;K^*^ (3)(v,px) + f_K^*^m_K^*^2/p· x[p_α p_μ e_ν^(λ )x_β- p_β p_μ e_ν^(λ )x_α- p_α p_ν e_μ^(λ )x_β+ p_β p_ν e_ν^(λ )x_α ] Φ̃_4;K^*^ (4)(v,px) Here f_K^*^ and f_K^*^ are K^*-meson decay constants, which are defined as ⟨K^*(P,λ )|s̅ (0)γ _μq(0)| 0 ⟩= f_K^*^∥m_K^*e^*(λ )_μ and ⟨K^*(P,λ )|s̅ (0)σ _μνq(0)| 0 ⟩ = if_K^*^(e^*(λ )_μp_ν - e^*(λ )_νp_μ).§ LCSRS FOR THE B → K^* TFFS BY USING THE RIGHT-HANDED CHIRAL CORRELATOR We list the LCSRs for the B → K^* TFFs by using the right-handed chiral correlator in the following A_1^ℛ(q^2)= m_b m_K^*^2 f_K^*^/f_B m_B^2(m_B+m_K^*){∫_0^1du/ue^( m_B^2 - s(u)) / M^2{ C/u m_K^* ^2Θ(c(u,s_0))ϕ_2;K^*^(u,μ) +Θ(c(u,s_0))(u) ×ψ_3;K^*^-1/4[ m_b^2 C/u^3M^4Θ(c(u,s_0)) +C-2m_b^2/u^2M^2Θ(c(u,s_0))- 1/uΘ(c(u,s_0))]ϕ_4;K^*^(u)- 2[ C/u^2M^2 ×Θ(c(u,s_0)) -1/uΘ(c(u,s_0))]I_L(u)-[2m_b^2/uM^2Θ(c(u,s_0)) + Θ(c(u,s_0)) ] H_3(u)}+∫_0^1 dv ∫_0^1 du×∫_0^1 d 𝒟 e^( m_B^2 - s(u)) / M^2f_K^*^ m_b m_K^*^2/12 f_B m_B^2( m_B + 
m_K^*)Θ(c(u,s_0))/u^2 M^2[Ψ _4;K^*^ (α) - 12(4v Φ _4;K^*^(2) (α)-2v Ψ _4;K^*^ (α)+2Φ _4;K^*^(1) (α)-2Φ _4;K^*^(2) (α)+Ψ _4;K^*^ (α))] (m_B^2-m_K^*^2+2 u m_K^*^2 ), A_2^ℛ(q^2)= m_b(m_B + m_K^* )m_K^*^2 f_K^*^/f_B m_B^2{∫_0^1 du/u e^( m_B^2 - s(u)) / M^2{1/m_K^* ^2Θ(c(u,s_0))ϕ_2;K^*^(u,μ )- 1/M^2 ×Θ(c(u,s_0))ψ_3;K^*^(u)- 1/4[m_b^2/u^2M^4Θ(c(u,s_0)) + 1/uM^2Θ(c(u,s_0))]ϕ_4;K^*^(u)+ 2[ C - 2m_b^2/u^2 M^4 ×Θ(c(u,s_0)) - 1/uM^2Θ(c(u,s_0))]I_L(u)- 1/M^2Θ(c(u,s_0))H_3(u)}+∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟 × e^( m_B^2 - s(u)) / M^2f_K^*^ m_b m_K^*^2/12 f_B m_B^2 m_B + m_K^*/u^2 M^2Θ(c(u,s_0))[Ψ _4;K^*^ (α) + 12((4v-2) Φ _4;K^*^(1) (α) +2v Ψ _4;K^*^ (α)+2Φ _4;K^*^(2) (α)-Ψ _4;K^*^ (α))], A_3^ℛ(q^2)-A_0^ℛ(q^2) =m_b m_K^* f_K^*^ q^2/2f_B m_B^2{∫_0^1 du/u e^( m_B^2 - s(u)) / M^2{-1/m_K^*^2Θ (c(u,s_0))ϕ_2;K^*^(u)- 2-u/uM^2 ×Θ(c(u,s_0))ψ_3;K^*^ (u,μ)+ 1/4[m_b^2/u^2M^4Θ(c(u,s_0))+ 1/uM^2Θ(c(u,s_0))]ϕ_4;K^*^(u)+[(4-2u)×( C/u^3 M^4Θ(c(u,s_0))- 2/u^2 M^2Θ(c(u,s_0)))+ (4m_b^2/u^2 M^4Θ(c(u,s_0)) + 2/uM^2Θ(c(u,s_0)))]I_L(u)- 2-u/uM^2Θ(c(u,s_0))H_3(u)}-∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟e^( m_B^2 - s(u)) / M^2 m_b q^2 f_K^*^/24 f_B m_B^2Θ(c(u,s_0))/u^2 M^2[ Ψ _4;K^*^ (α)+ 12((4v-2) Φ _4;K^*^(1) (α)+2v Ψ _4;K^*^ (α)+2Φ _4;K^*^(2) (α)-Ψ _4;K^*^ (α))], T_1^ℛ(q^2)= m_b^2 m_K^*^2 f_K^*^/m_B^2f_B∫_0^1du/u e^( m_B^2 - s(u)) / M^2{1/m_K^*^2Θ (c(u,s_0))ϕ _2;K^*^ (u) - m_b^2/4u^2M^4Θ (c(u,s_0)) ϕ_4;K^*^(u) -2/uM^2Θ(c(u,s_0))I_L(u) - 1/M^2Θ(c(u,s_0)) H_3(u) }+∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟 e^( m_B^2 - s(u)) / M^2Θ(c(u,s_0))/u^2 M^2 ×f_K^*^m_K^*^2 m_b/12 f_B m_B^2[Ψ _4;K^*^ (α)-12(2Φ _4;K^*^(1) (α) - 2 Φ _4;K^*^(2) (α)+Ψ _4;K^*^ (α))], T_2^ℛ(q^2)= m_b^2f_K^*^ m_K^*^2/m_B^2 f_B∫_0^1 du/ue^( m_B^2 - s(u)) / M^2{1 - ℋ/m_K^*^2Θ (c(u,s_0)) ϕ_2;K^*^(u) - m_b^2/4u^2M^4 (1 - ℋ) Θ(c(u,s_0))×ϕ_4;K^*^(u)- 2(1 - ℋ)/uM^2Θ(c(u,s_0))I_L(u) - 1/M^2[ 1+ ( 2/u-1 )ℋ]Θ(c(u,s_0)) H_3(u) }+∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟 e^( m_B^2 - s(u)) / M^2f_K^*^m_K^*^2 m_b/12 f_B m_B^2Θ(c(u,s_0))/u^2 M^2[Ψ _4;K^*^ (α)-12(2Φ _4;K^*^(1) (α) - 2 Φ _4;K^*^(2) (α) +Ψ _4;K^*^ (α))], T_3^ℛ(q^2)= m_b^2f_K^*^ m_K^*^2/m_B^2f_B∫_0^1du/u e^( m_B^2 - s(u)) / M^2{1/m_K^*^2Θ(c(u,s_0))ϕ_2;K^*^(u) - m_b^2/4u^2M^4Θ(c(u,s_0))ϕ_4;K^*^(u) - [2/uM^2Θ(c(u,s_0)) + 4/u^2 M^4Θ(c(u,s_0))(m_B^2 - m_K^*^2)] I_L(u) + [ 2/uM^2 - 1/M^2] Θ(c(u,s_0))H_3(u) } +∫_0^1 dv∫_0^1 du ∫_0^1 d 𝒟e^( m_B^2 - s(u)) / M^2f_K^*^m_K^*^2 m_b/12 f_B m_B^2 u^2 M^2Θ(c(u,s_0))[Ψ _4;K^*^ (α)-12(2Φ _4;K^*^(1) (α)- 2 Φ _4;K^*^(2) (α)+Ψ _4;K^*^ (α))], V^ℛ(q^2)= m_b(m_B + m_K^*)f_K^*^/f_B m_B^2∫_0^1 du/u e^( m_B^2 - s(u)) / M^2{Θ(c(u,s_0)) ϕ_2;K^*^(u,μ) - [m_b^2/u^2M^4Θ(c(u,s_0))+1/uM^2Θ(c(u,s_0))]m_K^*^2/4ϕ_4;K^*^ (u)}+∫_0^1 dv ∫_0^1 du ∫_0^1 d 𝒟e^( m_B^2 - s(u)) / M^2f_K^*^ m_K^*^2/6u^2 M^2Θ(c(u,s_0)) ×[(2v-1)Ψ _4;K^*^ (α) + 12(Ψ _4;K^*^ (α)-2(v-1)(Φ _4;K^*^(1) (α)-Φ _4;K^*^(2) (α)))]. To compare with previous LCSRs given by Ref.<cit.>, in the above formulas, we keep all the three-particle twist-4 terms in the LCSRs.99Shifman:1978bx M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, “QCD and Resonance Physics. Theoretical Foundations,” Nucl. Phys. B 147, 385 (1979).Shifman:1978by M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, “QCD and Resonance Physics: Applications,” Nucl. Phys. B 147, 448 (1979).Nesterenko:1982gc V. A. Nesterenko and A. V. Radyushkin, “Sum Rules and Pion Form-Factor in QCD,” Phys. Lett. B 115, 410 (1982).Ioffe:1982qb B. L. Ioffe and A. V. Smilga, “Meson Widths and Form-Factors at Intermediate Momentum Transfer in Nonperturbative QCD,” Nucl. Phys. B 216, 373 (1983).Reinders:1984sr L. J. Reinders, H. 
Rubinstein and S. Yazaki, “Hadron Properties from QCD Sum Rules,” Phys. Rept.127, 1 (1985).Braun:1997kw V. M. Braun, “Light cone sum rules,” hep-ph/9801222.Balitsky:1989ry I. I. Balitsky, V. M. Braun and A. V. Kolesnichenko, “Radiative Decay σ^+ → p γ in Quantum Chromodynamics,” Nucl. Phys. B 312, 509 (1989).Chernyak:1990ag V. L. Chernyak and I. R. Zhitnitsky, “B meson exclusive decays into baryons,” Nucl. Phys. B 345, 137 (1990).Ball:1991bs P. Ball, V. M. Braun and H. G. Dosch, “Form-factors of semileptonic D decays from QCD sum rules,” Phys. Rev. D 44, 3567 (1991).Belyaev:1993wp V. M. Belyaev, A. Khodjamirian and R. Ruckl, “QCD calculation of the B →π, K form-factors,” Z. Phys. C 60, 349 (1993).Ball:1997rj P. Ball and V. M. Braun, “Use and misuse of QCD sum rules in heavy to light transitions: The Decay B →ρ e neutrino reexamined,” Phys. Rev. D 55, 5561 (1997).Ball:2004rg P. Ball and R. Zwicky, “B_(ds)→ρ, ω, K^*, ϕ decay form-factors from light-cone sum rules revisited,” Phys. Rev. D 71, 014029 (2005).Huang:1998gp T. Huang and Z. H. Li, “B → K^* gamma in the light cone QCD sum rule,” Phys. Rev. D 57, 1993 (1998).Huang:2001xb T. Huang, Z. H. Li and X. Y. Wu, “Improved approach to the heavy to light form-factors in the light cone QCD sum rules,” Phys. Rev. D 63, 094001 (2001).Wan:2002hz Z. G. Wang, M. Z. Zhou and T. Huang, “B π weak form-factor with chiral current in the light cone sum rules,” Phys. Rev. D 67, 094006 (2003).Zuo:2006dk F. Zuo, Z. H. Li and T. Huang, “Form Factor for B → D l ν in Light-Cone Sum Rules With Chiral Current Correlator,” Phys. Lett. B 641, 177 (2006).Wu:2007vi X. G. Wu, T. Huang and Z. Y. Fang, “SU(f)(3)-symmetry breaking effects of the B → K transition form-factor in the QCD light-cone sum rules,” Phys. Rev. D 77, 074001 (2008).Wu:2009kq X. G. Wu and T. Huang, “Radiative Corrections on the B → P Form Factors with Chiral Current in the Light-Cone Sum Rules,” Phys. Rev. D 79, 034013 (2009).Ball:2007rt P. Ball and G. W. Jones, “Twist-3 distribution amplitudes of K* and phi mesons,” JHEP 0703, 069 (2007).Ball:2007zt P. Ball, V. M. Braun and A. Lenz, “Twist-4 distribution amplitudes of the K^* and ϕ mesons in QCD,” JHEP 0708, 090 (2007).Fu:2014uea H. B. Fu, X. G. Wu and Y. Ma, “B→ K^* Transition Form Factors and the Semi-leptonic Decay B → K^* μ^+ μ^-,” J. Phys. G 43, 015002 (2016).Agashe:2014kda K. A. Olive et al. [Particle Data Group Collaboration], “Review of Particle Physics,” Chin. Phys. C 38, 090001 (2014).Aliev:1996hb T. M. Aliev, M. Savci and A. Ozpineci, “Rare B → K^* lepton+ lepton- decay in light cone QCD,” Phys. Rev. D 56, 4260 (1997).Wandzura:1977qf S. Wandzura and F. Wilczek, “Sum Rules for Spin Dependent Electroproduction: Test of Relativistic Constituent Quarks,” Phys. Lett. B 72, 195 (1977).Fu:2014pba H. B. Fu, X. G. Wu, H. Y. Han and Y. Ma, “B →ρ transition form factors and the ρ-meson transverse leading-twist distribution amplitude,” J. Phys. G 42, 055002 (2015).Belyaev:1994zk V. M. Belyaev, V. M. Braun, A. Khodjamirian and R. Ruckl, “D* D pi and B* B pi couplings in QCD,” Phys. Rev. D 51, 6177 (1995).Ball:1998tj P. Ball, “B → pi and B → K transitions from QCD sum rules on the light cone,” JHEP 9809, 005 (1998).Wu:2010zc X. G. Wu and T. Huang, “An Implication on the Pion Distribution Amplitude from the Pion-Photon Transition Form Factor with the New BABAR Data,” Phys. Rev. D 82, 034024 (2010)BHL S.J. Brodsky, T. Huang and G.P. Lepage, in Particles and Fields-2, edited by A.Z. Capri and A.N. Kamal (Plenum, New York, 1983), p.143; S.J. Brodsky, G.P. 
Lepage, T. Huang and P.B. MacKenzis, in Particles and Fieds 2, edited by A.Z. Capri and A.N. Kamal (Plenum, New York, 1983), p.83.Huang:1994dy T. Huang, B. Q. Ma and Q. X. Shen, “Analysis of the pion wave function in light cone formalism,” Phys. Rev. D 49, 1490 (1994).Cao:1997hw F. G. Cao and T. Huang, “Large corrections to asymptotic F(η_c γ) and F(η_b γ) in the light cone perturbative QCD,” Phys. Rev. D 59, 093004 (1999).Huang:2004su T. Huang and X. G. Wu, “A Model for the twist-3 wave function of the pion and its contribution to the pion form-factor,” Phys. Rev. D 70, 093013 (2004).Wu:2013lga X. G. Wu and T. Huang, “Heavy and light meson wavefunctions,” Chin. Sci. Bull.59, 3801 (2014).Huang:2013yya T. Huang, T. Zhong and X. G. Wu, “Determination of the pion distribution amplitude,” Phys. Rev. D 88, 034013 (2013)Ball:2006nr P. Ball and R. Zwicky, “|V_td / V_ts| from B → V γ,” JHEP 0604, 046 (2006).Ahmady:2014sva M. Ahmady, R. Campbell, S. Lord and R. Sandapen, “Predicting the B → K^* form factors in light-cone QCD,” Phys. Rev. D 89, 074021 (2014).Colangelo:2000dp P. Colangelo and A. Khodjamirian, “QCD sum rules, a modern perspective,” hep-ph/0010175.AKhodjamirian:2010 A. Khodjamirian, T. Mannel, A. A. Pivovarov and Y. M. Wang, “Charm-loop effect in B → K^* ℓ^+ ℓ^- and B → K^* γ,” JHEP1009, 089 (2010).Pietro Colangelo:2000 P. Colangelo and A. Khodjamirian “QCD sum rules, a modern perspective ,” At the frontier of particle physics, 3, 1495 (2000).Straub:2015ica A. Bharucha, D. M. Straub and R. Zwicky, “B→ Vℓ^+ℓ^- in the Standard Model from Light-Cone Sum Rules,” hep-ph/1503.05534.Horgan:2013hoa R. R. Horgan, Z. F. Liu, S. Meinel and M. Wingate, “Lattice QCD calculation of form factors describing the rare decays B → K^* ℓ^+ ℓ^- and B_s →ϕℓ^+ ℓ^-,” Phys. Rev. D 89, 094501 (2014).Akar:2014hda S. Akar [BaBar Collaboration], “Penguin and rare decays in BABAR,” J. Phys. Conf. Ser.556, 012047 (2014).Wei:2009zv J.-T. Wei et al. [Belle Collaboration], “Measurement of the Differential Branching Fraction and Forward-Backword Asymmetry for B → K^(*)ℓ^+ℓ^-,” Phys. Rev. Lett.103, 171801 (2009).Aaij:2012cq R. Aaij et al. [LHCb Collaboration], “Measurement of the isospin asymmetry in B → K^(*)μ^+μ^- decays,” JHEP 1207, 133 (2012).Aaij:2013qta R. Aaij et al. [LHCb Collaboration], “Measurement of Form-Factor-Independent Observables in the Decay B^0→ K^*0μ^+ μ^-,” Phys. Rev. Lett.111, 191801 (2013).LHCb:2012aja[LHCb Collaboration], “Differential branching fraction and angular analysis of the B^0→ K^*0μ^+μ^- decay,” LHCb-CONF-2012-008. | http://arxiv.org/abs/1703.08677v2 | {
"authors": [
"Wei Cheng",
"Xing-Gang Wu",
"Hai-Bing Fu"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170325114055",
"title": "Reconsideration of the $B \\to K^*$ Transition Form Factors within the QCD Light-Cone Sum Rules"
} |
L.D. Valdez^1, G.J. Sibona^1, L.A. Diaz^{2,3}, M.S. Contigiani^3, C.A. Condat^1 ^1Facultad de Matemática, Astronomía, Física y Computación, Universidad Nacional de Córdoba, Instituto de Física Enrique Gaviola, CONICET, Ciudad Universitaria, 5000 Córdoba, Argentina ^2Instituto de Investigaciones Biológicas y Tecnológicas–CONICET–Universidad Nacional de Córdoba, Córdoba, Argentina ^3Laboratorio de Arbovirus–Instituto de Virología “Dr. J. M. Vanella”–Facultad de Ciencias Médicas–Universidad Nacional de Córdoba, Córdoba, Argentina The dynamics of a mosquito population depends heavily on climatic variables such as temperature and precipitation. Since climate change models predict that global warming will affect the frequency and intensity of rainfall, it is important to understand how these variables affect mosquito populations. We present a model of the dynamics of a Culex quinquefasciatus mosquito population that incorporates the effect of rainfall, and use it to study the influence of the number of rainy days and the mean monthly precipitation on the maximum yearly abundance of mosquitoes M_max. Additionally, using a fracturing process, we investigate the influence of the variability in daily rainfall on M_max. We find that, given a constant value of monthly precipitation, there is an optimum number of rainy days for which M_max is a maximum. On the other hand, we show that increasing daily rainfall variability reduces the dependence of M_max on the number of rainy days, leading also to a higher abundance of mosquitoes in the case of low mean monthly precipitation. Finally, we explore the effect of the rainfall in the months preceding the wettest season, and we find that a regime with high precipitation throughout the year and a higher variability tends to slightly advance the time at which the peak mosquito abundance occurs, but could significantly change the total mosquito abundance in a year. Keywords: Culex, rainfall, arbovirus, mosquito abundance, mathematical modeling § INTRODUCTION Mosquito-transmitted flaviviruses are an increasing health threat. In particular, members of Culex species (Diptera: Culicidae) such as Cx. quinquefasciatus and Cx. interfor are responsible for transmitting the West Nile and St. Louis encephalitis (SLEV) viruses to humans and domestic animals (<cit.>). For instance, the SLEV is endemic in Argentina, where the principal vector is postulated to be Cx. quinquefasciatus (<cit.>). In recent decades, mosquito-borne diseases have emerged and re-emerged as a result of multiple factors such as increasing urbanization, international travel, and climate change (<cit.>). The development of mathematical models is essential to quantify the effect of each of these factors on the dynamics of the mosquito population, and to determine the most effective strategies to control the epidemic outbreaks transmitted by mosquito vectors (<cit.>). Multiple studies have shown that the life cycle of Cx. quinquefasciatus is closely related to temperature (<cit.>). <cit.> demonstrated that its reproductive activity increases with temperature, and <cit.> showed that this species can only live in environments with a temperature above 10^∘C. Other studies have found that temperature has a strong influence on the development and survival of both adult and immature mosquitoes (<cit.>). In turn, it was observed that Cx.
quinquefasciatus does not enter diapause, but it may undergo quiescence or remain gonoactive in protected (indoor or underground) habitats (<cit.>). Urbanization therefore helps quinquefasciatus populations survive the mild winters of temperate regions. Similarly, rainfall is an important climatological variable for predicting the abundance of Culex mosquitoes, since its copiousness and distribution determine the production and size of mosquito breeding sites. <cit.> studied the changes in the Cx. tarsalis population in California and found that, in most regions, it is positively correlated with an increase in total precipitation. However, these authors also found that in some places of the driest region of California the correlation between these two variables was negative. On the other hand, <cit.> showed that a very large rainfall is not always accompanied by proportionately large increases in the abundance of Cx. tritaeniorhynchus and Cx. gelidus. Consequently, these results indicate that there exists a nonlinear relationship between rainfall and Culex abundance, which should be modeled in order to predict mosquito abundance. Additionally, since climatological projections suggest that global warming will alter the frequency and intensity of rainfall, it is crucial to understand how different rainfall patterns will affect mosquito populations. In this paper we develop a dynamic model of the Cx. quinquefasciatus population, adapting a fracturing procedure (<cit.>) to describe the rainfall distribution. We use a system of compartmental ordinary differential equations that describes the immature and adult mosquito populations, in which we introduce the influence of temperature and rainfall on the reproduction rate. To study the influence of different rainfall patterns, we use a synthetic time series of rainfall based on the amount of rainfall per month and the monthly number of rainy days. We find that, for a given constant value of monthly precipitation, there is an optimum number of rainy days for which the yearly maximum M_max of the mosquito population is highest. On the other hand, we also study the variability of daily rainfall intensity through a fracturing process (<cit.>), which allows us to study homogeneous and heterogeneous rainfall regimes, including those characterized by heavy rain events. We show that increasing daily rainfall variability reduces the dependence of M_max on the number of rainy days, leading also to a higher abundance of mosquitoes in the case of low mean monthly precipitation. Finally, we explore the effect of different winter precipitation regimes on the mosquito abundance in the summer season, finding that a higher variability tends to slightly advance the time of peak mosquito abundance. Interestingly, we predict that the accumulated abundance of mosquitoes will decrease in a regime with high variability in the rainfall intensity. The boundary of the Cx. quinquefasciatus habitat in South America runs across central Argentina. This region is thus expected to exhibit intense changes in mosquito populations due to the ongoing climatic change; this is likely to have a strong impact on flavivirus prevalence. For this reason, we use the climatic data for the city of Córdoba to calibrate our model in the period 2008-2009. The paper is organized as follows: in Sec. <ref> we present the model of the mosquito population dynamics, and in Secs. <ref> and <ref> we explain the methods for generating two different synthetic time series. Then in Sec.
§ METHODS

§.§ The model of mosquito abundance

In this section, we construct a compartmental, ordinary differential equation model for the mosquito abundance. We consider that the total vector population is stage-structured, with an immature class consisting of all aquatic stages and a mature or adult class. We assume that these population groups are restricted to female mosquitoes as the reproductive sex. In our model, the total birth rate, i.e. the total number of new immature female mosquitoes per unit of time, is proportional to the number of adult female mosquitoes and to β_L λ(t) θ(t), where β_L corresponds to the reproduction rate under optimal conditions of temperature and water availability, and λ(t) and θ(t) are normalized factors describing the influence of rainfall and temperature on the total birth rate, respectively. In Sec. <ref> we explain how we construct these factors. In addition, we assume that the total birth rate is also regulated by a carrying-capacity effect that depends on the occupation of the available immature habitats. Therefore we propose that the immature population growth is logistic-like with a carrying capacity K_L. Additionally, immature individuals either move to the mature class at a rate m_L or die at a rate μ_L. We stress that the ratio 1/m_L gives the average development time from immature mosquito to adult. Finally, adult mosquitoes die at a mortality rate μ_M. For simplicity, we assume that m_L, μ_L and μ_M do not depend on temperature or rainfall. With these definitions, we propose the following dynamic mass-balance equations for the abundance of immature mosquitoes, L(t), and adult mosquitoes, M(t):

L(t+Δt) = L(t) + Δt [ β_L θ(t) λ(t) M(t) (1 − L(t)/K_L) − m_L L(t) − μ_L L(t) ],

and

M(t+Δt) = M(t) + Δt [ m_L L(t) − μ_M M(t) ],

where Δt is the time step size; here we use Δt = 0.1 days. Equation (<ref>) can be derived by assuming that the fraction of the carrying capacity corresponding to female immature mosquitoes is the same as the fraction of females in the immature population. Table <ref> summarizes the different state variables and parameters used in this paper.

§.§.§ Effect of temperature and rainfall on the total birth rate

We add the effect of temperature through a temperature factor θ(t). Several studies have shown that Cx. quinquefasciatus can breed only at temperatures above 10^∘C (<cit.>) and that the number of egg rafts collected per day is closely correlated with temperature (<cit.>). Therefore we propose that the factor θ(t) is a piecewise linear function,

θ(t) = { (T(t) − T_A)/(T_B − T_A)   if T_A ⩽ T(t) ⩽ T_B;   1   if T(t) > T_B;   0   if T(t) < T_A,

where T(t) is the average daily temperature and T_A = 10^∘C corresponds to a minimum temperature below which the net birth rate vanishes. The choice of the function θ(t) is not unique: for instance, a power law may be used instead (<cit.>). Here we assume that the positive effect of temperature on the birth rate saturates at T_B, since <cit.> observed that the number of egg rafts per female does not change significantly between 21^∘C and 30^∘C.
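To make the time discretisation explicit, the following minimal Python sketch implements the update equations and the temperature factor θ(t); the parameter values are placeholders rather than the calibrated values of Table <ref>, and the rainfall factor λ(t) is constructed in the next subsection.

```python
import numpy as np

def theta(T, T_A=10.0, T_B=21.0):
    """Piecewise-linear temperature factor; T in deg C.
    T_B = 21 deg C is a placeholder, motivated by the reported saturation
    between 21 and 30 deg C."""
    return np.clip((T - T_A) / (T_B - T_A), 0.0, 1.0)

def step(L, M, theta_t, lambda_t, p, dt=0.1):
    """One explicit Euler step (dt in days) of the immature/adult balance equations."""
    birth = p['beta_L'] * theta_t * lambda_t * M * (1.0 - L / p['K_L'])
    L_new = L + dt * (birth - (p['m_L'] + p['mu_L']) * L)
    M_new = M + dt * (p['m_L'] * L - p['mu_M'] * M)
    return L_new, M_new
```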
On the other hand, mosquito reproduction is also triggered by rainfall, since it increases the number of breeding sites, such as temporary ground pools. In order to introduce the effect of rainfall on the mosquito birth rate, we compute the accumulated amount of rainwater H, whose variation is given by the total daily rainfall R(t) minus the evapotranspiration E(t) (<cit.>),

H(t+1) = H(t) + [ R(t) − E(t) ].

The function H(t) is a convenient measure of the quantity of water available for breeding sites. It should represent the average level of puddles, ponds, drains, small streams, and underground sources such as waste water channels in urban environments. As we explain below, equation (<ref>) is applied on a daily time scale. The evapotranspiration term is estimated using the Ivanov model (<cit.>), which is based on the mean temperature and the relative humidity [Hum(t)], and is given by

E(t) = 6 × 10^{-5} (25 + T(t))^2 (100 − Hum(t)).

Note that the evapotranspiration E(t) is a monotonically increasing function of temperature and a decreasing function of humidity. In particular, E(t) vanishes for Hum = 100%. In addition, we assume that the level of accumulated water H varies only between a minimum (H_min) and a maximum (H_max) boundary level, i.e.,

H(t+1) = { H_min   if H(t) + [ R(t) − E(t) ] ⩽ H_min;   H_max   if H(t) + [ R(t) − E(t) ] ⩾ H_max;   H(t) + [ R(t) − E(t) ]   otherwise.

Here, H_min represents a minimum amount of water that is always available for mosquito breeding, for instance in permanent streams or in the drainage system, while H_max is a level of water above which the breeding sites overflow (<cit.>). Finally, the factor λ(t) that takes into account the effect of rainfall on the birth rate (see Eq. (<ref>)) is the normalization of H(t):

λ(t) = H(t)/H_max.

Note that the minimum value of λ(t) is H_min/H_max.

Although we take the integration time step to be Δt = 0.1 days, the data for temperature, rainfall and humidity are available only on a daily time scale. Therefore, for all integration time steps within a day “d” (i.e. d ≤ t < d+1), we set λ(t) and θ(t) to the values computed through Eqs. (<ref>) and (<ref>) using the corresponding meteorological data for day “d”.
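The daily water balance can be written compactly; the following is a schematic Python implementation of Eqs. (<ref>)–(<ref>), with H_min and H_max treated as free parameters (their calibrated values are discussed in <ref>):

```python
def evapotranspiration(T, Hum):
    """Ivanov model: T in deg C, Hum in per cent; returns mm per day."""
    return 6e-5 * (25.0 + T)**2 * (100.0 - Hum)

def update_water_level(H, R, T, Hum, H_min, H_max):
    """Advance the accumulated-water level H (mm) by one day; return (H, lambda)."""
    H = H + R - evapotranspiration(T, Hum)
    H = min(max(H, H_min), H_max)   # enforce the boundary levels
    return H, H / H_max             # lambda(t) = H(t)/H_max
```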
§.§ Model of synthetic rainfall using the monthly number of rainy days and the monthly precipitation

The average monthly rainfall P and the number of rainy days D per month are two parameters commonly used to characterize the long-term precipitation trend (<cit.>). In this section, we show how they can be used to construct a synthetic rainfall time series. The average monthly rainfall is assumed to follow a sinusoidal function,

P(m) = [(P_max − P_min)/2] cos(2π(m − m_0)/12) + (P_max + P_min)/2,

where m = 1,...,12 represents the month (with m = 1 for January and m = 12 for December), m_0 corresponds to the month of maximum rainfall, P_max is the total precipitation of the wettest month m_0, and P_min is the total precipitation of the driest month, corresponding to m = m_0 + 6. Note that a higher value of either P_max or P_min increases the annual amount of precipitation but, while a higher P_max enhances the precipitation difference between the rainy and dry seasons, a higher P_min reduces this difference. Similarly, we propose that the number of rainy days is given by

D(m) = [(D_max − D_min)/2] cos(2π(m − m_0)/12) + (D_max + D_min)/2,

where D_max and D_min correspond to the number of rainy days in the months labeled by m_0 and m_0 + 6, respectively. Choosing m_0 = 2, this distribution is suitable for the city of Córdoba. In Fig. <ref>(a) and (b) we show a schematic of the parameters used in Eqs. (<ref>) and (<ref>).

From Eqs. (<ref>) and (<ref>) we construct the amount of daily rainfall R(t) (see Eq. (<ref>)), placing the rainy days of each month at random and assuming that the amount of rain specified in Eq. (<ref>) is distributed equally over these days. We then calculate the factor λ(t) and integrate Eqs. (<ref>) and (<ref>). For simplicity, we use in these equations the values of humidity and temperature obtained from meteorological data. It is important to note that, by construction, R(t) is a stochastic time series, since the rainy days of each month are chosen at random; therefore our results for the effect of the synthetic series R(t) on the mosquito abundance must be averaged over a large number (we take 10^4) of realizations. Using this model of the rainfall time series, we measure the highest peak of mosquito abundance M_max and the time τ_max at which this peak is reached (see Fig. <ref>(c)). In the following section, we explain how we introduce variability in the daily rainfall amounts.
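As an illustration, one homogeneous synthetic year of daily rainfall can be generated as in the following Python sketch; the uniform 30-day month length is an assumption of this sketch, not of the model.

```python
import numpy as np

def monthly_profile(m, x_max, x_min, m0=2):
    """Sinusoidal annual cycle, used for both P(m) and D(m)."""
    return (x_max - x_min) / 2 * np.cos(2 * np.pi * (m - m0) / 12) \
        + (x_max + x_min) / 2

def synthetic_rainfall(P_max, P_min, D_max, D_min, m0=2,
                       days_per_month=30, rng=None):
    """Daily rainfall R(t) for one year: equal rain on randomly chosen rainy days."""
    rng = rng or np.random.default_rng()
    R = []
    for m in range(1, 13):
        P = monthly_profile(m, P_max, P_min, m0)
        D = max(int(round(monthly_profile(m, D_max, D_min, m0))), 1)  # D_min = 1
        month = np.zeros(days_per_month)
        rainy = rng.choice(days_per_month, size=D, replace=False)
        month[rainy] = P / D
        R.append(month)
    return np.concatenate(R)
```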
§.§ Model of synthetic rainfall with variable daily intensity of precipitation

In general, the daily rainfall intensity can range from drizzles of less than 1 mm to torrential downpours exceeding 200 mm (<cit.>). Various distributions, such as the Weibull (<cit.>), lognormal (<cit.>) and generalized Pareto (<cit.>) distributions, have been proposed to model the precipitation amount. While these distributions could be used to generate a sequence of rainfalls with variable or heterogeneous intensity, the disadvantage of this approach is that the total monthly rainfall is then also a stochastic variable. In order to isolate the effect of the heterogeneity of the rainfall, we instead use a fracturing or fragmentation (FT) process, which allows us to keep the total monthly rainfall constant. This method is related to a cascading procedure that was used by physicists to study the fragmentation of brittle material (<cit.>). Recently the FT process was also applied to obtain empirical distributions with a finite tail (<cit.>). From a geometrical point of view (<cit.>), this process performs a sequential breakage of a segment or interval of length ℓ to obtain D subintervals of variable length ℓ̃, which can be used to decompose the total monthly rainfall into daily rainfalls of heterogeneous intensity. In this representation, ℓ and ℓ̃ stand for the total amount of rainfall in a month and in a day, respectively. In <ref> we explain the steps of the FT process in detail. This method depends on a parameter α ∈ [0,1] which controls the heterogeneity of the segment lengths. In particular:

* α = 0 corresponds to the case where an interval is split into two subintervals of equal length ℓ̃ = ℓ/2,
* α = 1 corresponds to the special case where two "intervals" are generated, one of length zero and the other of length ℓ̃ = ℓ.

In Fig. <ref> we show how α controls the shape of the fragment length distribution 𝒫(ℓ̃), using ℓ = 150 mm and D = 10. Note that the resulting length of each subinterval corresponds to the intensity of rainfall on one day. For small values of α the distribution is concentrated around a mean value close to ℓ/D, while for intermediate values of α the distribution of lengths has a longer tail. Finally, for high values of α, 𝒫(ℓ̃) has a peak near ℓ, which corresponds to a regime where most of the month's total rainfall is confined to a single day. Interestingly, we also note that in this case the rainfall distribution has a region in which it decays as a power law.

In order to model the temporal variation of the precipitation R(t) (see Sec. <ref>), we apply a fracturing process for each month, partitioning an interval whose length is the monthly precipitation given by Eq. (<ref>), and where the number of subintervals (the number of rainy days) is given by Eq. (<ref>). See <ref> for further details on the construction of R(t).

§ RESULTS

§.§ Calibration

We calibrate our model of mosquito abundance using a Metropolis-Hastings algorithm (see <ref>) and the number of female Culex quinquefasciatus mosquitoes collected in Córdoba city (31^∘24'30” S, 64^∘11'02” W, Córdoba province, Argentina) every two weeks from January 2008 to December 2009 (see <cit.> for details on the data and their sources). The climate of Córdoba is temperate, with dry winters and hot rainy summers. The mean annual temperature ranges between 16^∘C and 17^∘C, and the mean annual rainfall is 800 mm (<cit.>). The temperature [T(t)], relative humidity [Hum(t)], and rainfall [R(t)] data for Córdoba were obtained from the website <cit.>. We calibrate the following parameters: β_L, H_max, H_min, and K_L. In <ref> we perform a sensitivity analysis of these calibrated parameters. As initial conditions of Eqs. (<ref>) and (<ref>), we set M(0) = L(0) = 20. To attenuate the effect of these initial conditions, the integration of the equations starts 12 months before we implement the fitting and study our model.

Fig. <ref>(a) shows the fit of our model to the data, where the abundance M(t) is given as the number of female mosquitoes per night per trap. We observe that the mosquito population, which is assumed to be proportional to M(t), increases in summer, as expected. Although there are no daily abundance data to compare with, we remark that our model predicts day-to-day changes in the abundance of adult mosquitoes M(t) due to fluctuations in temperature, humidity, and rainfall. In Fig. <ref>(b) we plot the evolution of the immature mosquito population L(t), which shows that temperature and precipitation lead to more abrupt fluctuations in this group than in the compartment of adult mosquitoes. This was to be expected, since these climatic variables enter directly into the equation for the immature mosquito population (see Eq. (<ref>)).
In turn, we find that after a rainfall event there is a large increase in the abundance of this group, which frequently approaches the carrying capacity, leading to a subsequent population decline due to competition among immature mosquitoes (<cit.>). In the following section we study how different rainfall patterns affect the mosquito population dynamics.

§.§ Effect of different rainfall regimes on M_max

The proposed model of synthetic rainfall allows us to explore the evolution of mosquito abundance under possible scenarios in which the weather becomes, for instance, rainier, or develops persistent drought conditions. In this section we discuss how different rain regimes influence the mosquito population when it is at its highest (see Fig. <ref>(c)). To do this, we first assume that the total rainfall P(m) (see Eq. (<ref>)) is distributed equally over D(m) days (see Eq. (<ref>)), the remaining days of the month being rainless. In Fig. <ref>(a) we use the weather data (temperature and humidity) of the austral summer season 2008-2009 to show the influence of P_max and D_max on the highest peak M_max of mosquito abundance, with fixed values of the parameters P_min = 10 mm and D_min = 1.

From Fig. <ref>(a), we note that, for a fixed number of rainy days D_max, the highest peak of mosquito abundance increases as P_max grows since, as expected, a greater amount of water promotes breeding sites for mosquitoes. Similarly, it can be seen from Figs. <ref>(a) and (b) that for P_max ≈ 190 mm, M_max is an increasing function of D_max, because a higher frequency of rainfall events provides more opportunities for mosquitoes to breed. The opposite happens for the lowest level of precipitation (P_max ≈ 10 mm), as there is very little daily rainfall in this regime and the accumulated water evaporates quickly. Interestingly, we find that at moderate precipitation levels M_max has a maximum at an intermediate number of rainy days. If the rainfall is distributed equally over a few days, the mosquito population will increase with more rainy days but, beyond a certain point, the rain on any given day becomes too light to keep all breeding sites active, and the mosquito population must decrease.

However, as mentioned above, the intensity of daily rainfall is usually far from uniform (<cit.>), and recent studies suggest that mosquito abundance depends on the distribution of precipitation (<cit.>). Therefore, in the following we study how heterogeneity in the daily rainfall affects the dynamics of mosquito abundance. In Fig. <ref>, using the weather data (temperature and humidity) of the austral summer season 2008-2009, we show the peak mosquito abundance for different values of P_max, D_max, and α, as obtained from Eqs. (<ref>)-(<ref>). For simplicity, we use the same value of α for each month. From Figs. <ref>(a), (b) and (d), we note that, similarly to the case of constant rainfall intensity, the abundance M_max is a decreasing (increasing) function of D_max for a low (high) amount of monthly precipitation P_max. Furthermore, Figs. <ref>(b)-(d) show that a higher variability in the intensity of rainfall reduces the dependence of M_max on D_max with respect to the case of homogeneous rainfall. In particular, for P_max = 10 mm (see Fig. <ref>(b)), we note that M_max is higher in the heterogeneous case (α > 0) than in the homogeneous one, since a greater variability tends to confine most of the total monthly precipitation to a few days, on which there is a higher level of water (see Eq.
(<ref>)) available for the immature mosquito population. In contrast, for P_max = 190 mm (see Fig. <ref>(d)) the heterogeneity diminishes the abundance M_max with respect to the homogeneous case. Here, even though a larger amount of precipitation concentrated in a few days would increase the immature mosquito birth rate, the effect of the intense rain days is limited by the threshold H_max, since the impact of precipitation saturates once the accumulated water reaches H_max (i.e., λ(t) = 1, see Eqs. (<ref>) and (<ref>)). Moreover, in this regime there are more rainy days with low precipitation, which further reduces the growth of the mosquito population. As a consequence, under heterogeneous rainfall intensity the total monthly precipitation is a more relevant variable for predicting the maximum abundance of mosquitoes than the number of rainy days.

Another aspect of relevance for the prediction of vector-borne diseases is the effect of dry-season (winter) rainfall on the future abundance of mosquitoes in summer. To study this relationship, we measure the maximum abundance of mosquitoes M_max and the time τ_max at which the peak of mosquito abundance is reached (see Fig. <ref>(c)) for different values of P_min and D_min, which are the parameters that control the intensity and frequency of rainfall in the driest month, respectively. Here, we keep the parameters P_max = 150 mm and D_max = 10 fixed, and assume that February is always the wettest month of the year (m_0 = 2, see Eq. (<ref>)). We note from Figs. <ref>(a) and (c) that, in the case of a homogeneous distribution of rainfall, the peak time τ_max moves forward for high values of P_min and D_min, because in this case the rainfall is regular and abundant throughout the year, which favors mosquito breeding. However, for the explored values of P_min and D_min, the position of this peak is in late February or March, i.e., just after the wettest month of the year. On the other hand, in a scenario of heterogeneous rainfall with α = 0.9 (see Figs. <ref>(b) and (d)), τ_max is advanced by only approximately 10 days with respect to the homogeneous-intensity case. Correspondingly, for constant values of α (0.1, 0.5 and 0.9), the changes in P_min and D_min affect M_max by less than 5% (not shown here). These results therefore suggest that the peak mosquito abundance and τ_max are mainly determined by the summer weather conditions and the carrying capacity of the system, and not by the intensity and distribution of precipitation throughout the rest of the year.

Despite the weak effect of P_min on M_max, Figs. <ref>(e) and (f) show, as expected, that an increasing value of P_min can have a remarkable effect on the accumulated abundance of mosquitoes (one measure of which is the time integral of M(t) over the period of interest): from P_min = 10 mm to P_min = 130 mm, it can increase by more than 40%. However, for a fixed value of P_min and higher values of α, we find that the accumulated abundance of mosquitoes drops to about 50% of the value for a homogeneous rain distribution. Consequently, the heterogeneity could help to attenuate the enhancement of the mosquito population.
These findings therefore suggest that, in order to predict the total annual abundance, it is necessary to take into account not only the overall amount of rainfall throughout the year but also the heterogeneity in the daily rainfall intensity.

§ DISCUSSION

Since climate change projections (<cit.>) suggest that in the late twenty-first century the precipitation pattern in regions of South America, such as Córdoba province, will shift towards a regime with rainier autumns and an increase in extreme events, it is crucial to study how this variation would affect the mosquito abundance. In this paper we studied the effects of the total intensity, the number of rainy days and the heterogeneity of rainfall on the mosquito population. We found that for a regime with low total rainfall, the abundance of mosquitoes is a decreasing function of the number of rainy days, while for a high total rainfall regime it is an increasing function of this number. Interestingly, for an intermediate precipitation regime, we found that there is an intermediate number D_max of rainy days for which M_max is optimized. Since P_max is fixed, fewer rainy days would imply dry intervals, leading to a lessening of the mosquito abundance. If the number of rainy days exceeds the optimal value of D_max, a considerable fraction of the rainwater resulting from the typically meager daily rainfall would disappear due to evapotranspiration, again leading to a reduction of the mosquito abundance.

In order to study the effect of heterogeneity in the daily rainfall, we used a fracturing process that keeps the total amount of monthly precipitation constant. We observed that a higher heterogeneity reduces the dependence of M_max on the number of rainy days. However, an increasing variability favors mosquito production in the low-rainfall regime, while the opposite behavior takes place in the case of high precipitation P_max. Therefore, if climatic models predict the intensification of storms, but not an increase in the total amount of precipitation, our model predicts that the enhancement of mosquito abundance will be more significant in semiarid areas than in humid climates.

Finally, we studied the effect of an increasing amount of rainfall in the dry season on the mosquito abundance dynamics, finding that high precipitation throughout the year does not significantly alter the maximum abundance or the time at which this peak occurs, but it can notably increase the accumulated abundance of mosquitoes. However, we also observed that a regime with a higher variability of rainfall intensity could reduce this increase.

While our model captures multiple relationships between rainfall and the mosquito population, additional extensions could be considered. For instance, there is evidence that rainfall reduces the immature population in the short term due to flushing of breeding sites (<cit.>) and affects the bacterial concentration used as food by mosquito larvae (<cit.>); it would therefore be interesting to study the relevance of these effects for the dynamics of the mosquito population. We think that our findings could serve as reference guidance for assessing the influence of different rainfall regimes on mosquito population dynamics, using the weather data for any specific region. Such an assessment would improve our ability to make predictions for the spread of various possible arboviruses. It is also known that rainfall can have a substantial effect on insecticide residence times (<cit.>).
The model presented here can be used to optimize the efficacy of mosquito control campaigns, using temperature and rainfall data to select the best times for the application of population reduction procedures.

§ CALIBRATION

The Metropolis-Hastings (MH) algorithm is a stochastic optimization tool for fitting statistical models to data that has been used in cosmology (<cit.>), epidemiology (<cit.>), and the study of mosquito population dynamics (<cit.>). This algorithm allows us to estimate the unknown values of some parameters Θ (where Θ represents either a single parameter or a parameter set) by means of a stochastic search in parameter space that generates a sequence or chain Θ^(i), where i represents the step number of the MH algorithm (<cit.>). Each value of this chain is sampled from a proposal distribution and accepted with a probability σ defined by an acceptance function, which depends on the likelihood of the observations.

In our model we estimate the parameters β_L, H_max, H_min and K_L using the MH algorithm and the data for adult mosquito abundance in Córdoba city in the period 2008-2009 (<cit.>). We propose that the likelihood of the observations is given by

L = ∏_{j=1}^{n} p(x_j(H_max, H_min, K_L, β_L); k_j),

where n is the number of data points and p(x_j(H_max, H_min, K_L, β_L); k_j) is the probability of observing the abundance k_j of mosquitoes obtained from the data. Here we assume that p(·) follows a Poisson distribution whose mean x(H_max, H_min, K_L, β_L) is the number of adult mosquitoes predicted by Eqs. (<ref>)-(<ref>). The MH algorithm implemented in this paper has the following steps:

* Step 1: Initialize the starting values of the parameters, Θ^(i=0), using a uniform distribution in order to avoid favoring any initial value.
* Step 2: Generate a new sample of the parameters, Θ^New, from a proposal distribution that indicates a candidate for the next sample value. To ensure that the new parameter values are positive, we use as a proposal distribution a log-normal density whose mean equals the logarithm of the current parameter value, with constant variance δ. The value of this variance is chosen so as to guarantee an acceptance rate between 10% and 30% in the burn-in period.
* Step 3: Accept the new candidate Θ^New with probability σ:

σ = min{1, L(Θ^New)/L(Θ^(i))}.

* Step 4: Repeat steps 2 and 3 until convergence is reached.

We perform 2 × 10^6 iterations and check convergence by visual inspection of the chain Θ^(i). In order to construct the posterior distribution of the parameters, we discard the first 10^5 iterations as burn-in and keep only every 20th sampled value of the remaining iterations, to reduce the autocorrelation between successive samples. Finally, the parameter values that we use in our model are the averages of the medians of the posterior distributions obtained from 5 different initial conditions (see step 1 of the MH algorithm). Fig. <ref> shows the posterior distributions obtained for the parameters β_L, H_max, H_min and K_L. We note that all of these distributions are unimodal, except for that of H_max. Although in this paper we set H_max = 9.86 mm, since this is the average of the medians obtained from the MH algorithm, we also checked our model for H_max ≈ 7 mm and H_max ≈ 13 mm, which are the positions of the highest peaks of the posterior distribution (see Fig. <ref>(b)). For these cases, our results presented in Section <ref> do not change qualitatively.
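For concreteness, the sampling loop can be sketched as follows in Python. The log-likelihood function is a placeholder for the Poisson likelihood above; note that, following the acceptance rule as quoted, σ uses only the likelihood ratio, i.e. the Hastings correction for the asymmetric log-normal proposal is not included.

```python
import numpy as np

def metropolis_hastings(log_like, theta0, delta=0.1, n_iter=200_000, rng=None):
    """Random-walk MH in log-parameter space (keeps parameters positive).

    log_like: function returning the log-likelihood of a parameter vector.
    delta: proposal scale, tuned for a 10-30 per cent acceptance rate.
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    ll = log_like(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = np.exp(np.log(theta) + delta * rng.standard_normal(theta.size))
        ll_new = log_like(proposal)
        if np.log(rng.random()) < ll_new - ll:   # sigma = min(1, L_new / L_old)
            theta, ll = proposal, ll_new
        chain[i] = theta
    return chain
```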
§ FRACTURING PROCESS

The fracturing process (FT) is a stochastic iterative process which generates a finite sequence of numbers with the property that their sum is always constant (<cit.>). From a geometrical point of view, this method consists of partitioning an interval of length ℓ into D subintervals or segments, with the property that the sum of their lengths is always ℓ. Following <cit.>, the FT process starts with an interval or segment of length ℓ which is split into two subintervals of lengths ℓ̃ and ℓ − ℓ̃, where ℓ̃ is a stochastic variable generated by the following function:

ℓ̃ = ℓ × { ρ (1−α)/α   if 0 ⩽ ρ < α/2;   1/2 + (ρ − 1/2) α/(1−α)   if α/2 ⩽ ρ ⩽ 1 − α/2;   1 − (1−ρ)(1−α)/α   if 1 − α/2 < ρ ⩽ 1.

Here ρ is a uniform random variable and α ∈ (0,1) is a parameter that controls the heterogeneity of the subinterval lengths ℓ̃. In a second step, the partition function, Eq. (<ref>), is applied again to the intervals resulting from the previous step, generating a total of 4 intervals. This procedure is repeated until the required number of intervals is reached [For example, in order to generate a total of five subintervals, three iterations of the fracturing process must be performed: step 1) the initial interval is divided into two subintervals, step 2) each of the previous subintervals is divided into two parts, and finally step 3) one randomly chosen subinterval of the previous step is split into two subintervals.].

In order to model the variability in the daily rainfall R(t), we apply an FT process for each month, in which:

* the length of the initial segment ℓ is given by Eq. (<ref>), i.e., the monthly precipitation,
* the number D of subintervals is given by Eq. (<ref>), i.e., the number of rainy days.

After we apply the fracturing process, the length of each resulting interval ℓ̃_i (with i = 1,⋯,D) represents the total amount of water that falls on the day d_i, which we choose at random as shown in the schematic of Fig. <ref>, and then we set R(d_i) = ℓ̃_i. In Fig. <ref> we plot the distribution of segment lengths obtained from an FT process for ℓ = 150 mm and D = 10, which are the average rainfall and the number of rainy days in February, respectively. Here we use the value α = 0.43, which gives the best fit to the February rainfalls in the city of Córdoba in the period 2001-2015.
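A schematic Python implementation of the fragmentation is given below. For brevity it splits one randomly chosen piece at a time until D pieces are obtained, a slight simplification of the sweep-based procedure described in the footnote above; the sum of the pieces is preserved exactly.

```python
import numpy as np

def split(ell, alpha, rng):
    """Draw the sub-length from the piecewise partition function; alpha in (0, 1)."""
    rho = rng.random()
    if rho < alpha / 2:
        f = rho * (1 - alpha) / alpha
    elif rho <= 1 - alpha / 2:
        f = 0.5 + (rho - 0.5) * alpha / (1 - alpha)
    else:
        f = 1 - (1 - rho) * (1 - alpha) / alpha
    return ell * f, ell * (1 - f)

def fracture(ell, D, alpha, rng=None):
    """Break a segment of length ell into D pieces whose lengths sum to ell."""
    rng = rng or np.random.default_rng()
    pieces = [ell]
    while len(pieces) < D:
        i = rng.integers(len(pieces))           # split one randomly chosen piece
        a, b = split(pieces.pop(i), alpha, rng)
        pieces += [a, b]
    return np.array(pieces)                     # daily amounts; sum == ell
```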
§ SENSITIVITY ANALYSIS

A sensitivity analysis allows us to measure the impact of different parameters on the relevant variables of our model. In this section, we perform a one-way sensitivity analysis of the model of Sec. <ref>, varying the baseline values of individual parameters by ±25%, one at a time, while keeping the other parameters constant, in order to analyze their individual impact on the maximum abundance of mosquitoes M_max. The parameters examined are β_L, H_max, H_min, and K_L. The results of the sensitivity analysis are summarized in Table <ref>. As expected, M_max increases for higher values of β_L, H_max, H_min, and K_L. Although the influence of the first three is rather weak, M_max is heavily influenced by K_L, which is therefore a critical parameter for the estimation of mosquito abundance.

§ ACKNOWLEDGMENTS

This work was supported by SECyT-UNC (Projects 103/15 and 313/16), CONICET (PIP 11220110100794), PICT Cambio Climático (Ministerio de Ciencia y Técnica de la Provincia de Córdoba), and PICT Nro. 2013-1779 (ANPCYT-MYNCYT), Argentina. We also thank Dr. A. M. Visintin and Biól. M. Beranek for useful discussions.
"authors": [
"L. D. Valdez",
"G. J. Sibona",
"L. A. Diaz",
"M. S. Contigiani",
"C. A. Condat"
],
"categories": [
"q-bio.PE"
],
"primary_category": "q-bio.PE",
"published": "20170327034218",
"title": "Effects of rainfall on Culex mosquito population dynamics"
} |
Jets from supermassive black holes in the centres of galaxy clusters are a potential candidate for moderating gas cooling and subsequent star formation through depositing energy in the intra-cluster gas. In this work, we simulate the jet–intra-cluster medium interaction using the moving-mesh magnetohydrodynamics code Arepo. Our model injects supersonic, low-density, collimated and magnetised outflows in cluster centres, which are then stopped by the surrounding gas, thermalise and inflate low-density cavities filled with cosmic rays. We perform high-resolution, non-radiative simulations of the lobe creation, expansion and disruption, and find that its dynamical evolution is in qualitative agreement with simulations of idealised low-density cavities that are dominated by a large-scale Rayleigh-Taylor instability. The buoyant rising of the lobe does not create energetically significant small-scale chaotic motion in a volume-filling fashion, but rather a systematic upward motion in the wake of the lobe and a corresponding back-flow perpendicular to it. We find that, overall, 50 per cent of the injected energy ends up in material which is not part of the lobe, and about 25 per cent remains in the inner 100 kpc. We conclude that jet-inflated, buoyantly rising cavities drive systematic gas motions which play an important role in heating the central regions, while mixing of lobe material is sub-dominant. Encouragingly, the main mechanisms responsible for this energy deposition can be modelled already at resolutions within reach in future, high-resolution cosmological simulations of galaxy clusters.

black hole physics – galaxies: clusters: general – galaxies: jets – galaxies: nuclei – ISM: jets and outflows – methods: numerical

§ INTRODUCTION

The short cooling times in galaxy clusters, combined with the paucity of cold gas and star formation <cit.>, suggest the presence of a central heating source. The energy from jets driven by the central supermassive black hole (SMBH) is widely considered to be a promising candidate to balance the cooling losses <cit.>. This is observationally supported by the fact that most galaxy clusters with short cooling times show signatures of jet activity, and their jet power correlates with the cooling rate <cit.>. Yet, how the highly collimated jets distribute energy in a volume-filling fashion to the cluster gas remains a topic of debate. Suggested mechanisms include heating by weak shocks and sound waves <cit.>, mixing of the lobe with surrounding material <cit.>, cosmic rays <cit.>, turbulent dissipation <cit.>, and mixing by turbulence <cit.>, which may be promoted by anisotropic thermal conduction <cit.>. Recent X-ray spectroscopic observations of the Perseus cluster <cit.>, however, indicate that turbulence is unlikely to distribute the energy in a volume-filling fashion if it is generated close to the lobe, because it would then dissipate on a much shorter timescale than needed to advect the energy to the cooling gas.

An important aspect of the jet–intra-cluster medium (ICM) interaction is the dynamics and lifetime of the jet-inflated cavities. To quantify this, a number of idealised simulations have been carried out, typically starting with idealised under-dense structures.
<cit.> performed such hydrodynamical simulations in 2D to explain the observed X-ray and radio morphology in M87. <cit.> and <cit.> present a generalised study of hydrodynamical and magnetohydrodynamical simulations of buoyant cavities of different shapes. One result of their work is that the cavities need to have some initially favoured direction to rise buoyantly without being disrupted by Rayleigh-Taylor instabilities. <cit.> find that buoyant cavities in idealised 3D hydrodynamical simulations are disrupted quickly by emerging Rayleigh-Taylor and Kelvin-Helmholtz instabilities, which, however, can be prevented by assuming a non-negligible amount of shear viscosity. <cit.> come to a similar conclusion using smoothed particle hydrodynamics simulations with physical viscosity. External magnetic fields could in principle have a similar effect <cit.>.

More recently, a number of studies have been published on the efficiency of different coupling mechanisms. <cit.> show in an idealised simulation that turbulent driving via explosively injected, buoyantly rising bubbles is not efficient enough to balance cooling losses via turbulent dissipation. <cit.> find in simulations of jet-inflated lobes that turbulent mixing is the main energy distribution channel, dominating over turbulent dissipation and shocks. Studying the effect of a clumpy interstellar medium on the early phases of jet propagation, <cit.> show that low-power jets are dispersed by high-density clouds and distribute their energy at small radii. Simulations that include a self-regulated cycle of gas cooling, black hole accretion and gas heating <cit.> were generally able to prevent excessive cooling of gas, yet sometimes at the cost of dramatically changing the thermodynamic profiles <cit.>. Using a different estimate for the accretion rate, a steady state can also be reached, maintaining a cool-core temperature structure <cit.>. More recent results indicate that this discrepancy is due to insufficient numerical resolution <cit.>. In high-resolution simulations, cold clumps form along the outflows via thermal instability, which plays an important role in the overall heating-cooling cycle <cit.>. The dominant mechanism of energy dissipation in these simulations is weak shocks <cit.>. This is particularly the case in the external regions at large angles from the jet direction, while in the `jet cones', mixing of lobe material is energetically dominant <cit.>.

Jets from SMBHs that interact with the ICM cover an enormous dynamic range in space and time, being launched at several Schwarzschild radii and propagating outwards to tens, sometimes even 100 kpc. Given this dynamic-range challenge, there are a number of different techniques to model jets in simulations, depending on the topic of investigation. In particular, the implementation of the jet injection has to be adjusted to the available resolution of the simulation, and some simplifications are inevitable. Recently, some studies <cit.> used magneto-centrifugal launching of jets, which is likely closest to real jet launching. However, this technique requires quite high resolution and strong magnetic fields. In lower-resolution studies that target only hydrodynamical jets, other techniques have to be applied. A widely used method for injecting a collimated outflow on kpc scales, as presented in <cit.>, is based on adding a predefined momentum and energy in a kernel-weighted fashion to all cells in a given region. This approach is also used in the model of <cit.>.
<cit.> place all available energy in kinetic form in the injection region, which implies a variable momentum input. In an alternative approach, the thermodynamic and kinetic state of an injected region is explicitly modified instead of adding a given flux to the cells, i.e. a predefined density, velocity and energy density are set <cit.>. This gives full control over the jet properties at the injection scale and has been shown to produce low-density cavities, yet has the disadvantage that the injected energy depends on the external pressure, which makes such a scheme difficult to use in simulations with self-regulated feedback.

In this paper, we analyse a new set of high-resolution magnetohydrodynamical simulations of jets from SMBHs and their interaction with the surrounding medium. We use idealised magnetohydrodynamical simulations which conserve, apart from the energy injection by the jet, the total energy of the gas in a stationary, spherically symmetric gravitational potential. This simple setup allows us to simulate the evolution of the jet inflating a low-density cavity in the surrounding ICM, and the subsequent lobe evolution and disruption after a few hundred Myr, at unprecedented resolution.

This paper is structured as follows. We describe the simulation methodology and our implementation of the jet injection in Section <ref>, followed by the details of the simulation setup in Section <ref>. We discuss the results in Section <ref>, and give our conclusions in Section <ref>.

§ METHODOLOGY

We carry out 3D magnetohydrodynamic (MHD) simulations in a prescribed external gravitational potential using the moving-mesh code Arepo. The equations of ideal MHD are discretised on an unstructured, moving Voronoi mesh <cit.>. The MHD Riemann problems at cell interfaces are solved using an HLLD Riemann solver <cit.>, and the divergence constraint of the magnetic field is addressed by a Powell eight-wave cleaning scheme <cit.>. The gravitational acceleration is imparted in the same way as in <cit.>, using the local gradient of the analytic potential and ignoring gas self-gravity. In addition to ideal MHD, we include a cosmic ray (CR) component in a two-fluid approximation <cit.>. The CR component has an adiabatic index of γ_CR = 4/3 and is injected as part of the jet. Throughout this paper, we restrict ourselves to advective transport of the CR component, leaving a study of CR transport relative to the gas, as well as of energy dissipation from the CR to the thermal component, to future work.

§.§ Jet model

In this work, we study jets from SMBHs in simulations that reach resolutions better than 200 pc (target cell size). However, the model is designed such that it is still applicable for simulations with 10 times coarser (spatial) resolution. We do not model the actual jet launching, or early propagation effects such as self-collimation, but instead set up the thermodynamic, magnetic and kinetic state of the jet at a distance of a few kpc from the black hole. In practice, this means that we want to create, in a numerically robust way, a kinetically dominated, low-density, collimated outflow in pressure equilibrium with its surroundings. If desired, this outflow can contain a predefined fraction of the pressure in a (toroidal) magnetic field and in cosmic rays.
We only set up the thermodynamic state of this `effective jet' if the required energy, composed of the energy ΔE_redist (including the thermal and CR components) for redistributing the gas and the energy ΔE_B connected to the magnetic field change, is smaller than the energy available from the black hole, viz.

ΔE = ∫_{t_last}^{t} Ė_jet dt′ = ΔE_kin + ΔE_B + ΔE_redist.

Here t is the current time, t_last is the time of the last injection event, and Ė_jet is the jet power, a free parameter in our setup[Ė_jet can be computed using a black hole accretion rate estimate in future work.]. In other words, the injected kinetic energy ΔE_kin in the jet region has to be positive. Due to this criterion, the injection does not necessarily happen every (local) hydrodynamical timestep. During a jet injection phase lasting 50 Myr, there are typically several thousand small injection events that effectively yield a continuous launching of the jet.

§.§.§ Jet thermodynamic state

To achieve the targeted thermodynamic state of the jet, we select a spherical region around the black hole with a given radius h (5 kpc throughout the paper). We split this volume into two spherical sub-volumes, located off-centre along the jet direction n̂⃗̂ (see Figure <ref>). The union of all cells that have their mesh-generating points within these spherical sub-volumes is referred to as jet regions (1, 2) in the following. In these regions, we set the jet thermodynamic state. The third volume, outside the jet regions, is referred to as the buffer region (3), to which we add (or from which we take) the mass needed to set up the desired thermodynamic state in the jet regions while simultaneously ensuring overall mass conservation.

The density in jet regions 1, 2 is calculated as

ρ_1,2 = ρ_target (V_1 + V_2)/(2 V_1,2),

respectively, where ρ_target is treated as a free parameter. V_1 and V_2 are the volumes of jet regions 1 and 2. We emphasise that these two volumes can be slightly different due to the nature of the unstructured computational grid in Arepo. The volume factor in the density ensures equal mass in both jet regions. The specific thermal energy u in these regions is

u_1,2 = P_target / [(1 + β^{-1}_jet + β^{-1}_CR,jet)(γ − 1) ρ_1,2],

where γ = 5/3 is the adiabatic index of the gas and P_target is the kernel-weighted pressure in the buffer region. We use an SPH-smoothing kernel of the form

w(r,h) = 8/(π h^3) × { 1 − 6(r/h)^2 + 6(r/h)^3   for 0 ≤ r/h ≤ 1/2;   2(1 − r/h)^3   for 1/2 < r/h ≤ 1;   0   for r/h > 1.

β^{-1}_jet = B⃗^2/(8π P_th) and β^{-1}_CR,jet = P_CR/P_th are the magnetic and cosmic ray pressure contributions relative to the thermal pressure P_th in the jet region, respectively, and are treated as free parameters[We use β^{-1} to be consistent with the nomenclature of the commonly used plasma-beta parameter.]. The cosmic ray specific energy u_CR is

u_CR,1,2 = P_target / [(1 + β_CR,jet + β_CR,jet β_jet^{-1})(γ_CR − 1) ρ_1,2],

where γ_CR = 4/3 is the adiabatic index of the cosmic ray component.
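The target state in the two injection spheres follows directly from these free parameters; a minimal Python sketch (with P_target assumed to be already computed from the kernel-weighted buffer pressure) is:

```python
def jet_target_state(P_target, V1, V2, rho_target,
                     beta_inv_jet=0.1, beta_inv_cr=1.0,
                     gamma=5.0/3.0, gamma_cr=4.0/3.0):
    """Target density and specific (thermal, CR) energies in jet regions 1 and 2."""
    rho = [rho_target * (V1 + V2) / (2.0 * V) for V in (V1, V2)]
    P_th = P_target / (1.0 + beta_inv_jet + beta_inv_cr)   # thermal share of P_target
    P_cr = beta_inv_cr * P_th                              # CR share
    u_th = [P_th / ((gamma - 1.0) * r) for r in rho]
    u_cr = [P_cr / ((gamma_cr - 1.0) * r) for r in rho]
    return rho, u_th, u_cr
```

One can verify that u_cr reproduces the expression for u_CR,1,2 above, since β^{-1}_CR,jet/(1 + β^{-1}_jet + β^{-1}_CR,jet) = 1/(1 + β_CR,jet + β_CR,jet β_jet^{-1}).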
The mass that is removed from the jet regions is added adiabatically to the buffer region (or the mass which is added to the jet regions is removed adiabatically from the buffer region, depending on the initial density of the jet regions) in a mass-weighted fashion, adding the total momentum associated with this redistribution to the buffer cells. Additionally, we make sure that the total thermal energy change in the jet regions is added to (or subtracted from) the buffer region in a mass-weighted fashion, which ensures that the overall thermal energy change is only due to adiabatic contraction or expansion in the buffer region. This means that the thermal energy change in the buffer region, ΔE_therm,3, is given by

ΔE_therm,3 = ∑_{region 3} u_i,init m_i,init [(ρ_i,final/ρ_i,init)^{γ−1} − 1] + ∑_{regions 1,2} (u_i,init m_i,init − u_i,final m_i,final),

where u_i,init, m_i,init, u_i,final and m_i,final are the specific thermal energy and mass of cell i before and after the redistribution, respectively. We denote the overall energy change due to this redistribution as

ΔE_redist = ∑_{regions 1,2,3} [ (1/2) m_i,final v⃗_i,final^2 + (u_i,final + u_CR,i,final) m_i,final − (1/2) m_i,init v⃗_i,init^2 − (u_i,init + u_CR,i,init) m_i,init ],

where u_CR,i,init and u_CR,i,final are the CR specific energy of cell i before and after the redistribution, respectively.

§.§.§ Magnetic field

In addition to the thermal and cosmic ray specific energies, we determine the magnetic energy injection ΔE_B needed to reach a specified magnetic pressure relative to the thermal pressure, β_jet^{-1}, via

β_jet^{-1} = [∑_i B⃗_i,init^2 V_i (8π)^{-1} + ΔE_B] / [(γ − 1) ∑_i u_i m_i],

where the sum includes all cells in both jet regions. Note that we set ΔE_B = 0 if the magnetic field energy already exceeds the desired value. The injected magnetic field is purely toroidal, with direction

B̂⃗̂_i = r⃗_i × n̂⃗̂ / |r⃗_i × n̂⃗̂|,

where r⃗_i is the position of cell i relative to the black hole. We parametrise the injected magnetic field as

ΔB⃗_i = w_B,i f_B B̂⃗̂_i,

insert this parametrisation into the energy equation

ΔE_B = ∑_i (B⃗_i,init + ΔB⃗_i)^2 V_i (8π)^{-1} − ∑_i B⃗_i,init^2 V_i (8π)^{-1},

and solve it for f_B. B⃗_i,init and V_i are the magnetic field and volume of cell i before injection. Here

w_B,i = w(|dr⃗_i|, 0.33h) [ (dr_i^2 − (dr⃗_i · n̂⃗̂)^2) / (0.33h)^2 ]^4

is a weighting kernel for the magnetic field, and dr⃗_i is the position of cell i relative to the centre of its jet region. Note that the radius of the jet regions is 0.33h.

§.§.§ Momentum injection

The momentum kick for each cell, Δp⃗_i, is given by

Δp⃗_i = w_i m_i f [ (n̂⃗̂ · r⃗_i)/|n̂⃗̂ · r⃗_i| ] n̂⃗̂,

where w_i = w(|dr⃗_i|, 0.33h), and the sign factor directs the kicks away from the black hole in the two opposite jet regions. f is determined by the desired kinetic energy input,

ΔE_kin = ∑_i [ (p⃗_i,old + Δp⃗_i)^2/(2 m_i) − p⃗_i,old^2/(2 m_i) ].
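Since Δp⃗_i is linear in f, the last equation is a quadratic equation in f that can be solved in closed form. The following sketch (with vector quantities as numpy arrays; the closed-form root is our own algebraic rearrangement, not a quoted formula) illustrates this:

```python
import numpy as np

def solve_momentum_norm(dE_kin, w, m, p, n_hat, r):
    """Solve dE_kin = sum_i [|p_i + dp_i|^2 - |p_i|^2] / (2 m_i) for f,
    with dp_i = w_i m_i f s_i n_hat and s_i = sign(n_hat . r_i).
    w, m: (N,) arrays; p, r: (N, 3) arrays; n_hat: (3,) unit vector.
    Requires dE_kin >= 0, matching the injection criterion in the text."""
    s = np.sign(r @ n_hat)
    A = np.sum(w**2 * m)                 # coefficient of f^2 / 2
    B = np.sum(w * s * (p @ n_hat))      # coefficient of f
    f = (-B + np.sqrt(B**2 + 2.0 * A * dE_kin)) / A   # positive root
    return f, (w * m * f * s)[:, None] * n_hat        # f and the kicks dp_i
```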
§.§ Local time-stepping

Collimated outflows can have a very high velocity. Also, in the early phase of a jet, the velocity in the jet region, which dominates over the sound speed, changes rapidly with time. This has consequences for the Courant-Friedrichs-Lewy condition in the jet region, as well as in neighbouring cells, and demands very fine timestepping. For the jet region itself, this can be accounted for at any timestep by choosing a smaller timestep. However, it is also important to ensure that the neighbouring cells are evolved on timesteps that are short enough to handle an incoming jet. This is in general a problem for simulations that operate with local timesteps and include such source terms. To overcome this problem, we use a tree-based nonlocal timestep criterion <cit.>, setting the signal speed of the cells in the jet region to

c_j = max(2 v_jet, 0.1c, c_s + v_jet),

where v_jet is the gas velocity of the respective cell relative to the velocity of its mesh-generating point, c is the speed of light, c_s is the speed of sound[Here, we use the effective sound speed of thermal gas, magnetic fields and cosmic rays, c_s^2 = γ P_th ρ^{-1} + γ_CR P_CR ρ^{-1} + B⃗^2 (4πρ)^{-1}.], and c_j the signal speed of a cell as in <cit.>. We do not have a good way yet of reliably predicting the precise parameter values required, but practical experience shows that this choice works well and enforces that neighbouring cells outside the jet regions (1, 2) are on timesteps short enough to properly model the incoming supersonic flows. In practice, we do not apply this procedure to all cells in the jet regions (1, 2), but only to those that are at most a specified distance away from the spherical shell bounding the corresponding jet region. This distance is specified for each cell individually as the radius of the largest circumsphere of the Delaunay tessellation involved in the generation of the Voronoi cell. This ensures that at least the outermost layer of cells in the jet regions is considered, while the inner cells are not. In this way, we considerably reduce the computational cost of the timestep calculations.

§ SIMULATION SETUP

To study the interaction between jets and the ICM, we set up a halo in the form of an analytic Navarro-Frenk-White (NFW) profile <cit.> with mass M_200,c = 10^15 M_⊙, concentration c_NFW = 5.0 and virial radius R_200,c = 2.12 Mpc. We use a fit to the Perseus cluster electron number density profile from <cit.>, originally from <cit.>, scaled by a constant factor such that the gas fraction within R_200,c reaches 16%:

n = 26.9 × 10^{-3} (1.0 + (r/57 kpc)^2)^{-1.8} cm^{-3} + 2.80 × 10^{-3} (1.0 + (r/200 kpc)^2)^{-0.87} cm^{-3}.

The energy density is derived from the pressure needed for hydrostatic equilibrium, together with the assumption of a vanishing pressure at a radius of 3 Mpc. The dotted black lines in Figure <ref> show the initial density, temperature and entropy profiles, respectively. At the centre of the halo, we consider a black hole which injects energy at a constant rate Ė_jet for a given amount of time. Apart from the energy injection of the jet, the simulation is non-radiative and does not include gravitational interactions between gas cells or from the black hole itself. Thus the gravitational force originates purely from the analytic NFW potential. We choose this approach to maximise the possible hydrodynamic resolution with moderate computational resources. The magnetic field strength in the initial conditions is zero. Throughout the analysis, we assume a constant chemical composition with 76% hydrogen and 24% helium.
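The initial pressure profile follows from integrating the hydrostatic equilibrium condition dP/dr = −ρ(r) g(r) inward from the boundary condition P(3 Mpc) = 0. A rough numerical sketch in cgs units is given below; the mean molecular weight per electron, μ_e ≈ 1.14 for the quoted composition, is our assumption for converting the electron number density to a gas mass density.

```python
import numpy as np

G, MSUN, MPC, KPC, MP = 6.674e-8, 1.989e33, 3.086e24, 3.086e21, 1.673e-24

def nfw_mass(r, M200=1e15 * MSUN, c=5.0, R200=2.12 * MPC):
    """Enclosed NFW mass; r in cm."""
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)
    return M200 * mu(r / (R200 / c)) / mu(c)

def n_e(r):
    """Scaled Perseus-like electron density fit; r in cm, result in cm^-3."""
    return (26.9e-3 * (1.0 + (r / (57 * KPC))**2)**-1.8
            + 2.80e-3 * (1.0 + (r / (200 * KPC))**2)**-0.87)

r = np.linspace(3 * MPC, 1 * KPC, 4096)      # integrate inward from P(3 Mpc) = 0
rho = 1.14 * MP * n_e(r)                     # gas density; mu_e ~ 1.14 for X = 0.76
dPdr = -rho * G * nfw_mass(r) / r**2
P = np.concatenate(([0.0],
    np.cumsum(0.5 * (dPdr[1:] + dPdr[:-1]) * np.diff(r))))
# P(r) in erg cm^-3; since diff(r) < 0, the pressure increases inward as required
```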
§.§ Refinement

Simulating jets from active galactic nuclei on the scale of full galaxy clusters represents a challenging numerical problem. Jets operate on scales of a kpc and below, and involve correspondingly short timescales of a few hundred kyr, while galaxy clusters have typical sizes of a Mpc and dynamical timescales in the range of a Gyr. The aim here is to resolve both simultaneously, which requires a high adaptivity of the resolution, both in space and time.

A standard approach when using Arepo consists of prescribing a fixed target mass m_target,0 for each cell <cit.>, refining a cell once it is a factor of two more massive than this target mass (and derefining it once it is a factor of two less massive than the target mass). We use this criterion in the region of the unperturbed ICM. But the jets inflate low-density cavities: using only this criterion would imply that the gas cells in the lobes attain volumes orders of magnitude larger than in the surrounding medium. This would mean in particular that gas flows within the lobe, and the surface of the lobes, would be very poorly resolved. As this structure is one of the regions of interest in our simulations, we instead apply a refinement criterion based on a target volume to the cells in the lobe. This target volume is significantly smaller than the cell volumes of the surrounding medium. Technically, this is done by defining an adaptive target mass for each cell via

m_target,i = f ρ_i V_target + (1 − f) m_target,0,
f = 0.5 + 0.5 tanh[(x_jet,i − 10^{-4})/10^{-5}],

where x_jet,i is the mass fraction of jet material in cell i. Note that x_jet,i = 1.0 in a jet injection cell, and that this mass fraction is advected with the fluid according to the fluxes at the interfaces of each cell. In practice this ensures that the complete jet and lobe structure has uniform spatial resolution. Due to the very low numerical diffusivity of the Arepo code, the outside is not affected.

One region of particular interest is the boundary layer between the jet/lobe and the surrounding ICM, which we want to resolve as well as possible in order to study arising hydrodynamic instabilities. To achieve this, we refine a cell whenever

V_i^{1/3} |∇ρ_i| > 0.5 ρ_i.

This ensures that we refine boundary layers until they are well resolved, and this criterion replaces the above-mentioned criteria whenever applicable. We note that, by construction, this criterion is violated at the boundaries of the jet injection region, where the density contrast between neighbouring cells is significant and the estimated gradients can therefore be much larger. To avoid runaway refinement, we employ a minimum cell volume V_min, irrespective of all other refinement criteria.

Because of these variable refinement criteria and target resolutions, it is important to ensure a smooth transition between resolutions. To achieve this, we enforce that the volume of every cell is at most a factor of 3 larger than that of its smallest neighbour, refining the cells where this is not satisfied[Note that the jump in resolution between neighbouring cells used here is smaller than usually present in adaptive mesh refinement simulations (factor of 8).].

§.§ Mesh-movement and refinement criteria

Because these non-standard refinement and derefinement criteria produce significant changes in the computational mesh as the system evolves, we also change the mesh-regularisation options slightly compared to the standard settings of the Arepo code, allowing for more aggressive refinement and cell shape changes. To this end, we apply a slightly faster mesh regularisation value of ξ = 1.0, in agreement with <cit.>. Furthermore, we do not allow derefinement of a gas cell if

max(√(A/π) h^{-1}) > 6.75,

where A is the area of the interface between two cells and h is the distance between the mesh-generating point and the cell interface. The maximum denotes the maximum over all faces of a cell. Note that, due to the nature of Voronoi cells, this criterion, if satisfied, always applies to a pair of neighbouring cells. This means that the code does not derefine heavily distorted cells <cit.>.
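The refinement logic described above can be summarised in a short per-cell predicate; the following is a schematic simplification (for instance, the neighbour-volume and derefinement conditions are omitted, and the interplay of the criteria in the actual code may differ):

```python
import numpy as np

def target_mass(rho, x_jet, V_target, m_target0):
    """Blend between volume- and mass-based targets via the jet tracer fraction."""
    f = 0.5 + 0.5 * np.tanh((x_jet - 1e-4) / 1e-5)
    return f * rho * V_target + (1.0 - f) * m_target0

def should_refine(m, V, rho, grad_rho, x_jet, V_target, m_target0, V_min):
    if V <= V_min:                              # hard floor on the cell volume
        return False
    if V**(1.0 / 3.0) * grad_rho > 0.5 * rho:   # unresolved boundary layer
        return True
    return m > 2.0 * target_mass(rho, x_jet, V_target, m_target0)
```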
§.§ Simulation set

We perform a number of simulations with different jet powers (10^44 erg s^-1, 3 × 10^44 erg s^-1, and 10^45 erg s^-1). In all runs, the jet is active for 50 Myr, which corresponds to total energy injections of ∼ 1.6 × 10^59 erg, 4.7 × 10^59 erg, and 1.6 × 10^60 erg, respectively. We run the simulation setup at various resolution levels (see Table <ref>), always changing all resolution parameters, i.e. the resolution of the ICM (the target mass per cell, m_target,0), the target volume in the jet and lobe, V_target, and the minimum volume of a cell, V_min (always half the target volume), by the same amount (factors of 10). Due to the high computational cost, we do not simulate the high-power jet at the highest resolution level. All runs are performed with a purely hydrodynamic jet (β^{-1}_jet = 0) and with a magnetised jet (β^{-1}_jet = 0.1), both with the same HLLD Riemann solver. For the further analysis, unless stated otherwise, we focus on the high-resolution simulation with a jet power of 3 × 10^44 erg s^-1 and a magnetised jet. This is the simulation with the highest number of cells within the lobes (∼ 1.7 × 10^7 cells in both lobes combined, and in total 2.7 × 10^8 cells in the simulation box after 168 Myr).

Additionally, we run a set of simulations with the low-resolution target mass m_target,0 = 1.5 × 10^7 M_⊙ in which we successively abandon or relax the refinement criteria that are special to this simulation (density gradient, neighbour refinement criterion, and target volume). In this way, we evaluate a potential use of the presented model in future cosmological simulations of galaxy cluster formation at much lower resolution[We note that the adopted `low res' target gas mass is larger than the one in some already published cosmological zoom-in simulations of galaxy clusters <cit.>.]. This set of simulations has a varying h, determined by the weighted number of neighbouring cells, n_ngb = 64 ± 20, as is usually done in cosmological simulations <cit.>. h is then calculated iteratively by solving

n_ngb = ∑_i [4π h^3/(3 m_target,0)] m_i w(r_i, h)

via bisection. These special simulations, as well as two intermediate-resolution runs with 3 × 10^44 erg s^-1 and magnetised jets with varying parameters h and ρ_target, are only analysed in Appendix <ref> and <ref>, respectively.
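The smoothing-length equation above is monotonic in h for typical configurations and can therefore be solved by simple bisection; a schematic version, reusing the kernel w(r, h) defined in the jet model section (bracket values and units are placeholders), is:

```python
import numpy as np

def kernel(r, h):
    """SPH smoothing kernel w(r, h) as defined in the jet model section."""
    q = r / h
    w = np.where(q <= 0.5, 1.0 - 6.0 * q**2 + 6.0 * q**3,
                 np.where(q <= 1.0, 2.0 * (1.0 - q)**3, 0.0))
    return 8.0 / (np.pi * h**3) * w

def solve_h(r, m, m_target0, n_target=64.0, h_lo=0.1, h_hi=100.0, tol=1e-3):
    """Bisect for h such that the kernel-weighted neighbour number hits n_target.
    r: (N,) distances of cells from the black hole; m: (N,) cell masses."""
    n_ngb = lambda h: np.sum(4.0 * np.pi * h**3 / (3.0 * m_target0)
                             * m * kernel(r, h))
    for _ in range(100):
        h = 0.5 * (h_lo + h_hi)
        if n_ngb(h) < n_target:
            h_lo = h
        else:
            h_hi = h
        if h_hi - h_lo < tol:
            break
    return 0.5 * (h_lo + h_hi)
```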
§ RESULTS

In this section, we analyse the effect of the jet from the injection scale to successively larger spatial and temporal scales. Figure <ref> shows the evolution of both a magnetised (`MHD') and an unmagnetised (`hydro') jet, where the colormap indicates the mass fraction of jet material. As long as the jet is active, it drills a low-density channel into the ICM and inflates elongated, low-density cavities that expand until they reach pressure equilibrium with the surroundings. The buoyant timescale of these cavities is larger than the jet timescales, but a persistent buoyancy force over several hundred Myr changes the shape of the lobe, first reducing its ellipticity and ultimately forming a torus (two disconnected round patches in the slice). This torus structure is gradually diluted and mixed with the surroundings. The magnetised lobe mixes less efficiently with the surroundings.

§.§ Jet properties

One of the key properties of a jet is its internal Mach number |v⃗|/c_s (Figure <ref>). Although we set up a low-density jet in pressure equilibrium, i.e. with a high sound speed, we paid attention to ensuring that the jet actually reaches supersonic speeds (the maximum absolute velocity is ∼ 1.0 × 10^5 km s^-1) in the black hole rest frame, so that it transports its kinetic energy flux outwards and thermalises it in a low-density cavity. The magnetic fields are frozen into the plasma and transported outward with the fluid flow, staying confined within the cavity. Note that the magnetic energy flux here is about two orders of magnitude lower than the kinetic energy flux. In this particular simulation, we chose the thermal and cosmic ray pressures in the injection region to be in equipartition, while the magnetic pressure is 10% of the thermal pressure.

The momentum flux of the jet in the black hole rest frame is lower than the momentum flux of the surrounding medium outside the expanding lobe (in the post-bow-shock region). This is the case because we have set up a low-density jet, which has important consequences for the resulting dynamics as well as for the morphology of the cavity <cit.>: the surrounding material is pushed aside by pressure forces of the expanding lobe, which itself is fuelled by the jet, rather than being directly displaced by a jet with a high momentum flux. Consequently, the lobe expands in all directions, not just in the jet propagation direction, thereby naturally acquiring a considerable horizontal extent. A higher-density jet, on the other hand, would propagate further with the same amount of energy (see Appendix <ref>).

The jet shown here reaches remarkably large distances of more than 75 kpc, which is surprising given its moderate power of 3× 10^44 erg s^-1. This is in qualitative agreement with <cit.>, who find that the transition from Fanaroff-Riley type I to type II morphology occurs at Ė_jet ∼ 10^43 erg s^-1 for purely hydrodynamic jets. However, there are several effects that could in principle obstruct the jet propagation. First, in our run the surrounding material has a favourable uniform density and no prior fluid motions or magnetic fields. A clumpy medium would be more readily capable of stopping the jet or delaying its propagation <cit.>, while large-scale density, velocity and magnetic field fluctuations can also redirect and deform the resulting low-density channels <cit.>, making it more difficult for a jet to propagate outwards. Second, instabilities of the jet, such as a magnetic kink instability, can help to disperse the jet <cit.>, limiting its range. We avoided such instabilities by choosing a low degree of magnetisation, mainly because we expect their occurrence to be very sensitive to the details of the injection of the magnetic field (which is toroidal in our case, not helical as expected in jets). For these reasons, we expect the jet range to be slightly overestimated in our study.

Another interesting detail is the absence of a backflow down to the injection base, connecting the two lobes <cit.>. We note that for some of our simulations, in particular the high-power jets, such backflows are present. We suspect that the absence of the backflows is partially due to the (intentional) separation of the injection regions by a few kpc, as well as to possible resolution effects on these small scales. However, we do not expect this to have a large impact on scales of a few tens to a few hundred kpc from the centre, which is the main focus of our study.
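For reference, the internal Mach number maps used in this subsection follow directly from the cell fields; a minimal sketch (our assumption: an ideal-gas sound speed c_s = √(γ P_th/ρ) with γ = 5/3 for the thermal component):

```python
import numpy as np

GAMMA = 5.0 / 3.0  # adiabatic index of the thermal component

def internal_mach_number(vel, p_th, rho):
    """|v| / c_s per cell, with c_s = sqrt(gamma * P_th / rho).
    vel: (N, 3) cell velocities in the black hole rest frame."""
    c_s = np.sqrt(GAMMA * p_th / rho)
    return np.linalg.norm(vel, axis=1) / c_s
```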
§.§ Lobe properties

After the jet has terminated, the low-density cavities quickly reach pressure equilibrium with their surroundings. Figure <ref> shows the jet-material-weighted density, the total pressure, the plasma-beta parameter β = P_th / (B⃗^2 / 8π), and the ratio of thermal to cosmic ray pressure, β_CR = P_th / P_CR. Note that at injection we chose β = 10 and β_CR = 1.

As the jet inflates the lobe, kinetic energy thermalises and, correspondingly, the thermal pressure content of the lobe exceeds the magnetic and cosmic ray content[As the density of the lobe differs at most by a factor of a few from the target density ρ_target = 10^-28 g cm^-3, this cannot be explained by the different adiabatic expansion behaviour of the thermal, magnetic and CR components.]. This means that about 90% of the lobe thermal energy originates from the thermalisation of kinetic energy, not from an initial thermal energy injection[We neglect diffusive shock acceleration at these shocks, which should inject a CR population that would form a dynamically significant component after adiabatic expansion in the lobe.]. However, even though subdominant, the energies in magnetic fields and cosmic rays are still significant, especially if their dynamics differs from that of an ideal fluid. For example, as seen in Figure <ref>, magnetic fields have a stabilising effect on the lobe with respect to instabilities <cit.>. The cosmic rays are advected with the thermal fluid throughout this study. This means that the only difference between the thermal and cosmic-ray fluids is the adiabatic index (5/3 for the thermal component, 4/3 for the cosmic rays). When CRs are subdominant, the effective adiabatic index stays close to 5/3 in the lobe and its dynamics is not significantly changed. In an astrophysical plasma, however, cosmic rays can propagate along magnetic field lines and thus behave very differently from the thermal component <cit.>. We will study this in more detail in a forthcoming paper (Ehlert et al., in prep.).

§.§.§ Lobe dynamics

Studying the evolution of an individual lobe with a time series of slices through the mid-plane showing the mass fraction of jet material (Figure <ref>), it becomes clear that the lobe evolution and disruption are not governed by the onset of Kelvin-Helmholtz (KH) instabilities on ever larger scales, but rather by a Rayleigh-Taylor-(RT-)like instability, which causes surrounding ICM material in the wake of the lobe to rise and shred the lobe inside out. This behaviour has already been seen in previous studies that started out with under-dense lobes at rest <cit.>. We verified that setting up bubbles at rest in pressure equilibrium in our setup leads to the same dynamics, and therefore conclude that the jet is unimportant for this stage of lobe evolution. It is important to note that the buoyancy force is proportional to the absolute difference of the densities, which means that it does not make a big difference whether the density is reduced by a factor of a few or by ∼ 3 orders of magnitude, as in our case. For Kelvin-Helmholtz instabilities on the surface layer, however, the growth time depends on the ratio of the densities.
Because of the large density contrast and the magnetisation, the lobe does not develop Kelvin-Helmholtz instabilities on scales larger than a few kpc. Quantitatively, the growth timescales for the KH and RT instabilities in ideal hydrodynamics <cit.> are

τ_KH = ( (ρ_1 + ρ_2) / √(ρ_1 ρ_2) ) · 1/(Δv k) ≈ √(ρ_2/ρ_1) · 1/(Δv k),
τ_RT = | ( (ρ_1 + ρ_2) / (ρ_1 - ρ_2) ) · 1/(g k) |^1/2 ≈ 1/√(g k),

where ρ_1 and ρ_2 are the densities in the lobe and the surrounding ICM, respectively. We assume ρ_1/ρ_2 ≈ 10^-3 (see Figure <ref>, left panel). Δv ≈ 500 km s^-1 is the relative velocity of the shear flow parallel to the surface, g ≈ 3.1 × 10^-8 cm s^-2 is the acceleration of the lobe, and k is the wavenumber of the perturbation. We assume that the acceleration arises purely from gravitational forces (i.e. that the lobe rises with constant velocity) at a distance of 80 kpc. Using these values, we obtain

τ_KH ≈ 600 (k · 10 kpc)^-1 Myr,
τ_RT ≈ 30 (k · 10 kpc)^-0.5 Myr,

which means that large-scale KH instabilities with k < (10 kpc)^-1 do not have enough time to grow. KH eddies on smaller scales, however, do grow (consistent with Fig. <ref>, top panel).
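These order-of-magnitude estimates are straightforward to reproduce; a short sketch evaluating both timescales for the quoted parameters (CGS units; the unit-conversion constants are standard values, and the absolute ICM density is an illustrative placeholder since only the density ratio enters):

```python
import numpy as np

KPC = 3.0857e21   # cm
MYR = 3.156e13    # s
KM_S = 1.0e5      # cm s^-1

def tau_kh(rho_lobe, rho_icm, dv, k):
    # KH growth time for a shear flow dv between two fluids.
    return (rho_lobe + rho_icm) / np.sqrt(rho_lobe * rho_icm) / (dv * k)

def tau_rt(rho_lobe, rho_icm, g, k):
    # RT growth time; reduces to 1/sqrt(g k) for rho_lobe << rho_icm.
    return np.sqrt(np.abs((rho_lobe + rho_icm) /
                          ((rho_lobe - rho_icm) * g * k)))

k = 1.0 / (10.0 * KPC)         # perturbation wavenumber
rho_icm = 1.0e-25              # illustrative value; only the ratio matters
rho_lobe = 1.0e-3 * rho_icm    # density contrast of ~10^-3
print(tau_kh(rho_lobe, rho_icm, 500.0 * KM_S, k) / MYR)  # ~600 Myr
print(tau_rt(rho_lobe, rho_icm, 3.1e-8, k) / MYR)        # ~30 Myr
```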
This suppression of large-scale KH modes differs from the finding of <cit.>, who report that Kelvin-Helmholtz instabilities develop on the lobe surfaces and mix the lobe material significantly. We explain the difference mainly by the different ways in which the jet is injected. The presence of magnetic fields and a high density contrast might also contribute. Also recall that the simulations are run with ideal MHD. In particular, our modelling does not include any physical viscosity, which would stabilise the lobe further <cit.>. However, even in our simulations, the lifetime of the lobes can be up to a few times the time the jet is active (Figure <ref>). This implies that, assuming the jet is active most of the time, the model would naturally produce multiple generations of observable, buoyantly rising cavities, as observed in some cool-core galaxy clusters.

While Kelvin-Helmholtz instabilities require a very high numerical resolution of a few hundred parsec, the large-scale nature of the Rayleigh-Taylor instability, which dominates in our lobes, implies that a resolution of a few kiloparsec is enough to capture the lobe dynamics. This has important consequences for the possible modelling in future (lower-resolution) cosmological simulations of galaxy clusters, as will be discussed in detail in Appendix <ref>.

§.§.§ Lobe mixing

The slow growth time of large-scale KH instabilities raises the question of how fast the lobe material mixes with the surrounding medium. Qualitatively, this is shown in Figure <ref>. We quantify the degree of mixing in Figure <ref>, which shows the mass fraction of the jet material (normalised by the overall integrated mass flux of the jets) enclosed in a sphere of a given radius, as a function of this radius, for the different simulations. In all simulations, the dominant part of the jet material ends up (after 336 Myr) at radii larger than 70 kpc, which indicates that the mixing timescale of the jet material is larger than the buoyant timescale. This effect is more pronounced for the high-power jets. For the low-power jet (10^44 erg s^-1), however, a significant fraction of the material stays at distances of less than 100 kpc. Keeping this in mind, we analyse the volume-filling fraction of the jet material within the inner 100 kpc in Figure <ref>, where we show the volume fraction of cells with a jet mass contribution higher than x_jet, as a function of x_jet. Even accounting for extremely small mass fractions (x_jet ≥ 10^-12), the volume fraction stays below 10% after 336 Myr. We note that there might be other transport processes, such as thermal conduction <cit.>, active CR propagation (Ehlert et al., in prep.) or externally induced turbulence, which promote the mixing of the lobe's internal energy, increasing the volume-filling factor.

§.§.§ Lobe energetics

Figure <ref> shows the evolution of the energy in the lobe as a function of time. We split the velocity into a bulk velocity v⃗_b, which is the volume-weighted average velocity in the lobe, and a turbulent component[All small-scale chaotic motions are referred to as turbulent motions here. We note that, strictly speaking, the decomposition into small-scale and large-scale energy contributions can only be done using a mass-weighted velocity, which we do not use in order to avoid contamination from the lobe surface layers. In our case, there is a non-vanishing cross-term between the two velocities, contributing to the energy. We calculate this cross-term and find that it is at least 4 orders of magnitude lower than the other components.] v⃗_t, which is the gas velocity of each cell in the lobe relative to the bulk velocity. This decomposition allows us to study the energies separately. The turbulent kinetic energy in the lobe dominates over the bulk kinetic energy as long as the jet is active. Once the jet has terminated, i.e. to the right of the vertical dashed line in Figure <ref>, the bulk kinetic energy increases due to buoyancy forces, while the turbulent energy decreases. The cosmic ray energy increases at a slightly slower rate than the thermal energy, being subdominant by an order of magnitude after the jet has terminated. This is consistent with the pressure fractions shown in Figure <ref>. In the jet region, the magnetic pressure is 10% of the cosmic ray pressure. In the further evolution, the ratio of magnetic to cosmic ray energy, as well as the ratio of magnetic to thermal energy, drops, mainly due to a decline in magnetic energy between 50 and 100 Myr (likely caused by numerical resistivity). It is remarkable, however, that the magnetic field, though energetically subdominant by a factor of ∼ 500 relative to the thermal component, and even subdominant by an order of magnitude compared to the kinetic energy, still has a significant impact on the lobe morphology and mixing properties (Figure <ref>), which highlights the need for simulations to model MHD in this context. The reason for this behaviour is that, in the lobe, the force density due to magnetic tension is almost as high as the net force density due to pressure gradients and gravity.
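The bulk/turbulent split described above amounts to the following illustrative sketch (volume-weighted mean over lobe cells; function and variable names are ours):

```python
import numpy as np

def lobe_energy_decomposition(m, V, vel, in_lobe):
    """Split the lobe kinetic energy into bulk and turbulent parts.
    m, V: cell masses and volumes; vel: (N, 3) cell velocities;
    in_lobe: boolean mask selecting lobe cells."""
    m, V, vel = m[in_lobe], V[in_lobe], vel[in_lobe]
    # Volume-weighted mean velocity (avoids contamination from the
    # denser surface layers that a mass weighting would pick up).
    v_bulk = np.sum(V[:, None] * vel, axis=0) / np.sum(V)
    v_turb = vel - v_bulk
    e_bulk = 0.5 * np.sum(m) * np.dot(v_bulk, v_bulk)
    e_turb = 0.5 * np.sum(m * np.einsum('ij,ij->i', v_turb, v_turb))
    # Cross-term (non-vanishing for a volume-weighted mean); found to
    # be negligible in the text.
    e_cross = np.sum(m * (v_turb @ v_bulk))
    return e_bulk, e_turb, e_cross
```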
§.§ ICM properties

One of the key aspects of AGN jet feedback is the question of how the radio lobe interacts with the surrounding ICM. We study this by looking at the entropy and kinetic properties of the gas in Figure <ref>. Excluding the lobes, the entropy profile is barely changed, except for a radial feature in the wake of the lobe, in agreement with Figure <ref>. The kinetic energy fraction around the lobe is increased, but does not exceed unity, even in the wake of the lobe where the velocities are highest. At distances of more than a few tens of kiloparsec from the lobe surface, the kinetic energy fraction drops to a sub-per cent level. In the ICM, the kinetic energy fraction stays below the per cent level in our simulations. The map of vorticity squared confirms that the turbulent motions are largely restricted to the wake of the lobe and the cavity itself, but there is a ring-like feature in the ICM at a distance of ∼ 75 kpc from the centre. The rising of the lobe induces a systematic outward motion in its wake and a corresponding slow inflow perpendicular to it. This is in agreement with <cit.>, who obtain this pattern for a simulation with a (fixed) directional jet over a simulation time of several Gyr. In our case, however, this happens for each buoyantly rising lobe individually. Another feature is the presence of ripples in the radial velocity map and in the kinetic-to-thermal energy ratio. They appear to be located outside the lobe trajectory, filling a large fraction of the volume. A careful inspection of the entropy map, as well as of the pressure and density maps (not shown here), indicates that these ripples in velocity coincide with adiabatic fluctuations. This is in qualitative agreement with the idea that sound waves dissipate energy in the ICM in a volume-filling fashion <cit.>. We leave a quantitative analysis of the ICM perturbations induced by the jet-lobe system for a future study.

§.§ Energy coupling

Figure <ref> shows the energy gain of the gas inside a sphere that encloses a specific mass, as a function of the radius this enclosed mass corresponds to in the initial conditions. At small radii, where the lobe has already passed, the gain in thermal energy dominates the total energy gain (Figure <ref>). Overall, around 25% of the total energy is deposited in the inner 100 kpc of the ICM. The lobe itself, after having risen buoyantly to a distance of more than 100 kpc, still contains half of the injected energy. The drop and subsequent increase in the energy change at radii larger than the lobe position can be attributed to the post-shock uplift of the gas (increase in gravitational potential energy in Figure <ref>), an associated adiabatic cooling (simultaneous decrease in thermal energy), and the bow shock (increase of the thermal energy gain to the final value), respectively. The remaining energy is transported outward by a shock to radii of more than 200 kpc. We note that this number does not correspond to the total energy thermalised in shocks. Overall, about 40% of the energy gain goes into an increase in gravitational energy, about 35% into a thermal energy increase (this includes the thermal energy in the lobe), and more than 20% into kinetic energy, mostly outside the lobe. The energy gains via magnetic fields and cosmic rays (< 5%) are subdominant in this simulation and mostly confined to the lobe region. The overall energy gain outside the lobe region is about 50%.

§.§ Shortcomings and missing physics

In this paper, we have introduced a new model for launching jets in magnetohydrodynamical simulations. For the sake of clarity and simplicity, we did not include some additional effects that are known, or at least suspected, to be important in this context. These include a clumpy interstellar medium, which might significantly change the range and energy deposition of the jets <cit.>.
Additionally, we only solve the equations of non-relativistic magnetohydrodynamics, which is somewhat inappropriate for the jet velocities reached <cit.>, and we treat the jet material and the corresponding lobe as a thermal fluid with a non-relativistic equation of state (apart from a small contribution of cosmic rays), which is highly approximate at these temperatures. The jet power is constant for a specific simulation and not yet linked to the black hole spin and accretion rate, which likely determine the jet power in real systems. On the galaxy cluster side, potential future improvements include radiative gas cooling and subsequent star formation, stellar feedback and related processes. Furthermore, our simulations do not include the infall of substructure, the resulting large-scale turbulent velocity field, and a self-consistent magnetic field. From a plasma-physics perspective, thermal conduction and viscosity, as well as diffusive shock acceleration and the transport and interaction processes of CRs with the gas, are not included in our set of simulations. Neglecting CR acceleration may be responsible for the artificial dominance of thermal over CR pressure in the lobes. We leave the systematic study of these effects to future work, though we emphasise that simultaneous improvement of both small-scale jet modelling and galaxy cluster modelling is restricted by computational and numerical limits. We therefore advocate studying, wherever possible, the importance of each of the above-listed effects individually, at the appropriate level of simplification, using the same implementation for launching jets and carefully assessing the possibilities to account for the corresponding effects in larger-scale simulations.

§ CONCLUSIONS

In this paper, we present a new model for jets in the Arepo code. It is based on the preparation of the thermodynamic state of the jet material on marginally resolved scales close to the SMBH, and on a redistribution of material to (or from) the surrounding gas for mass conservation. We study the evolution of light, magnetised jets in idealised simulations of hydrostatic cluster-sized halos. Here, the jet represents a kinetically dominated energy flux which reaches mildly supersonic velocities. At the head of the jet, the low-density jet material is slowed down by the ram pressure of the denser, ambient ICM and thermalises most of its kinetic energy via shocks. This leads to the inflation of low-density, hot, magnetised cavities containing a population of CRs, in pressure equilibrium with the surrounding ICM. The cavities rise buoyantly and get deformed and eventually disrupted by a Rayleigh-Taylor-like instability, similar to what has been seen in previous simulations of idealised radio lobes. In the wake of the lobe, an upward flow is induced which shows high vorticity and a kinetic energy of up to a few per cent of the thermal energy. Very close to the cavity, this fraction rises to almost unity. Overall, the rising cavities induce an upward motion in their wake, which is compensated by a slow downward motion at the sides and perpendicular to it, similar to what is reported by <cit.> and <cit.>. The shear flow at the lobe surface can cause Kelvin-Helmholtz instabilities, yet we find that their growth is sufficiently suppressed with respect to the Rayleigh-Taylor growth time in our simulations. Consequently, the mixing of lobe material with the surrounding ICM is energetically unimportant in the centre of the halo.
Overall, we find that about half of the injected jet energy is deposited in regions outside the lobe. After the passage of the lobe, ∼ 25% of the injected energy is deposited in the inner 100 kpc, which is dominated by an increase in thermal energy, while the remaining energy can be found in material affected by the bow shock at large radii, which mostly gained gravitational energy.

This study of the jet-ICM interaction at very high resolution has allowed us to identify some of the main mechanisms governing lobe dynamics and to quantify the energy coupling efficiency. It also provides guidance for modelling jets from AGN more realistically in simulations of galaxy clusters. We find that the main requirements for such a model are to resolve the (lobe-scale) Rayleigh-Taylor instability and to maintain a large density contrast between the lobe and the surrounding ICM, which calls for sufficiently good control of numerical mixing in the hydrodynamic scheme. In Appendix <ref>, we study at which resolution these requirements can be fulfilled. We conclude that, while still highly challenging or beyond reach for present simulations, the corresponding resolutions should be achievable in the next generations of cosmological `zoom-in' simulations of galaxy clusters.

§ ACKNOWLEDGEMENTS

We thank Volker Gaibler, Alexander Tchekhovskoy and Svenja Jacob for discussions and helpful comments. RW acknowledges support by the IMPRS for Astronomy and Cosmic Physics at the University of Heidelberg. RW, RP and VS acknowledge support through the European Research Council under ERC-StG grant EXAGAL-308037. CP acknowledges support through the European Research Council under ERC-CoG grant CRAGSMAN-646955. The authors would like to thank the Klaus Tschira Foundation.

§ REFERENCES

Barniol Duran R., Tchekhovskoy A., Giannios D., 2016, preprint (arXiv:1612.06929)
Bîrzan L., Rafferty D. A., McNamara B. R., Wise M. W., Nulsen P. E. J., 2004, ApJ, 607, 800
Brüggen M., Kaiser C. R., 2001, MNRAS, 325, 676
Brüggen M., Kaiser C. R., Churazov E., Enßlin T. A., 2002, MNRAS, 331, 545
Cattaneo A., Teyssier R., 2007, MNRAS, 376, 1547
Chandrasekhar S., 1981, Hydrodynamic and Hydromagnetic Stability. Dover, New York
Churazov E., Brüggen M., Kaiser C. R., Böhringer H., Forman W., 2001, ApJ, 554, 261
Churazov E., Forman W., Jones C., Böhringer H., 2003, ApJ, 590, 225
Cielo S., Antonuccio-Delogu V., Macciò A. V., Romeo A. D., Silk J., 2014, MNRAS, 439, 2903
Cielo S., Antonuccio-Delogu V., Silk J., Romeo A. D., 2017, preprint (arXiv:1701.07825)
Dubois Y., Devriendt J., Slyz A., Teyssier R., 2010, MNRAS, 409, 985
Dunn R. J. H., Fabian A. C., 2006, MNRAS, 373, 959
Dursi L. J., Pfrommer C., 2008, ApJ, 677, 993
English W., Hardcastle M. J., Krause M. G. H., 2016, MNRAS, 461, 2025
Enßlin T. A., Vogt C., 2006, A&A, doi:10.1051/0004-6361:20053518
Enßlin T., Pfrommer C., Miniati F., Subramanian K., 2011, A&A, doi:10.1051/0004-6361/201015652
Fabian A. C., 1994, ARA&A, doi:10.1146/annurev.aa.32.090194.001425
Fabian A. C., 2012, ARA&A, doi:10.1146/annurev-astro-081811-125521
Fabian A. C., Walker S. A., Russell H. R., Pinto C., Sanders J. S., Reynolds C. S., 2017, MNRAS, 464, L1
Fujita Y., 2005, ApJ, 631, L17
Fujita Y., Ohira Y., 2011, ApJ, 738, 182
Gaibler V., Krause M., Camenzind M., 2009, MNRAS, 400, 1785
Gan Z., Li H., Li S., Yuan F., 2017, preprint (arXiv:1703.01740)
Gaspari M., Melioli C., Brighenti F., D'Ercole A., 2011, MNRAS, 411, 349
Guo F., 2015, ApJ, 803, 48
Guo F., Oh S. P., 2008, MNRAS, 384, 251
Hardcastle M. J., Krause M. G. H., 2013, MNRAS, 430, 174
Hardcastle M. J., Krause M. G. H., 2014, MNRAS, 443, 1482
Hillel S., Soker N., 2016, MNRAS, 455, 2139
Hillel S., Soker N., 2017, MNRAS, 466, L39
Hitomi Collaboration et al., 2016, Nature, 535, 117
Jacob S., Pfrommer C., 2017a, MNRAS, doi:10.1093/mnras/stx131
Jacob S., Pfrommer C., 2017b, MNRAS, doi:10.1093/mnras/stx132
Kannan R., Vogelsberger M., Pfrommer C., Weinberger R., Springel V., Hernquist L., Puchwein E., Pakmor R., 2017, ApJ, 837, L18
Kim W.-T., Narayan R., 2003, ApJ, 596, L139
Krause M., 2003, A&A, doi:10.1051/0004-6361:20021649
Kunz M. W., Schekochihin A. A., Cowley S. C., Binney J. J., Sanders J. S., 2011, MNRAS, 410, 2446
Li Y., Bryan G. L., 2014a, ApJ, 789, 54
Li Y., Bryan G. L., 2014b, ApJ, 789, 153
Li Y., Bryan G. L., Ruszkowski M., Voit G. M., O'Shea B. W., Donahue M., 2015, ApJ, 811, 73
Li Y., Ruszkowski M., Bryan G. L., 2016, preprint (arXiv:1611.05455)
Loewenstein M., Zweibel E. G., Begelman M. C., 1991, ApJ, 377, 392
Massaglia S., Bodo G., Rossi P., Capetti S., Mignone A., 2016, A&A, doi:10.1051/0004-6361/201629375
McNamara B. R., Nulsen P. E. J., 2007, ARA&A, doi:10.1146/annurev.astro.45.051806.110625
McNamara B. R., Nulsen P. E. J., 2012, New Journal of Physics, 14, 055023
Meece G. R., Voit G. M., O'Shea B. W., 2016, preprint (arXiv:1603.03674)
Mukherjee D., Bicknell G. V., Sutherland R., Wagner A., 2016, MNRAS, 461, 967
Navarro J. F., Frenk C. S., White S. D. M., 1996, ApJ, 462, 563
Navarro J. F., Frenk C. S., White S. D. M., 1997, ApJ, 490, 493
Omma H., Binney J., Bryan G., Slyz A., 2004, MNRAS, 348, 1105
Pakmor R., Springel V., 2013, MNRAS, 432, 176
Pakmor R., Bauer A., Springel V., 2011, MNRAS, 418, 1392
Pakmor R., Pfrommer C., Simpson C. M., Kannan R., Springel V., 2016, MNRAS, 462, 2603
Peterson J. R., Fabian A. C., 2006, Phys. Rep., 427, 1
Pfrommer C., 2013, ApJ, 779, 10
Pfrommer C., Pakmor R., Schaal K., Simpson C. M., Springel V., 2017, MNRAS, 465, 4500
Pinzke A., Pfrommer C., 2010, MNRAS, 409, 449
Prasad D., Sharma P., Babul A., 2015, ApJ, 811, 108
Reynolds C. S., McKernan B., Fabian A. C., Stone J. M., Vernaleo J. C., 2005, MNRAS, 357, 242
Reynolds C. S., Balbus S. A., Schekochihin A. A., 2015, ApJ, 815, 41
Ruszkowski M., Oh S. P., 2011, MNRAS, 414, 1493
Ruszkowski M., Brüggen M., Begelman M. C., 2004, ApJ, 611, 158
Ruszkowski M., Enßlin T. A., Brüggen M., Heinz S., Pfrommer C., 2007, MNRAS, 378, 662
Ruszkowski M., Yang H.-Y. K., Reynolds C. S., 2017, preprint (arXiv:1701.07441)
Sijacki D., Springel V., 2006, MNRAS, 371, 1025
Sijacki D., Springel V., Di Matteo T., Hernquist L., 2007, MNRAS, 380, 877
Sijacki D., Pfrommer C., Springel V., Enßlin T. A., 2008, MNRAS, 387, 1403
Springel V., 2010, MNRAS, 401, 791
Tchekhovskoy A., Bromberg O., 2016, MNRAS, 461, L46
Vogelsberger M., Sijacki D., Kereš D., Springel V., Hernquist L., 2012, MNRAS, 425, 3024
Voit G. M., Meece G., Li Y., O'Shea B. W., Bryan G. L., Donahue M., 2016, preprint (arXiv:1607.02212)
Weinberger R., et al., 2017, MNRAS, 465, 3291
Yang H.-Y. K., Reynolds C. S., 2016, ApJ, 829, 90
Zhuravleva I., et al., 2014, Nature, 515, 85

§ RESOLUTION DEPENDENCE

Figure <ref> shows, as an example, a map of the internal Mach number |v⃗|/c_s in simulations with the same jet properties but different numerical resolution. The three panels on the left-hand side correspond to our high-, intermediate- and low-resolution results, respectively. The first thing to notice is that the propagation distance of the jet increases with resolution. This is likely linked to the fact that a poorly resolved velocity gradient across the jet leads to a widening of the jet. We note that the computational grid does not line up with the jet direction in our case, which causes a numerical widening of the jet if the flow is not sufficiently resolved.
For our high-resolution simulation, where the jet diameter is resolved by ∼ 25 cells, this effect is significantly reduced, which means that the loss of momentum and kinetic energy flux is small and the jet therefore propagates further. The three panels on the right-hand side (from centre to right) show simulations with a variable injection kernel, as would be used in cosmological simulations, and successively relaxed refinement criteria. While the first of these panels (`variable h') has the same resolution settings as the low-resolution run, the middle one (`coarse lobe ref.') has a reduced target volume (V_target^1/3 ≈ 1.5 kpc) and no refinement criteria on the density gradient or volume limitations. The rightmost panel (`mass ref') shows a run where the only refinement criterion is the target mass of a cell. The change to an injection region that varies in size depending on the surrounding density leads to a less well-defined jet, which is in general slower, while the main structure is still captured. When the resolution in the outflow is decreased further, to kpc scales, the nature of the outflow changes, as it no longer reaches supersonic velocities. Having a pure mass criterion for refinement is (as expected) an inappropriate choice for the problem at hand. In particular, the redistribution of mass from the jet region to the buffer region (see Section <ref>) leads to low-mass cells in the jet region. With only a mass criterion for refinement and derefinement, these cells are derefined immediately (causing merging with the surrounding higher-density cells) and thus become numerically mixed. It is therefore not possible to simulate a low-density outflow in a meaningful way with this numerical treatment.

The morphology of the lobe is more robust to resolution changes than the jet itself. Figure <ref> shows the jet-material-weighted density of the lobe. Each panel is 75 kpc wide and centred on the median lobe position, to compensate for the different heights due to the different jet propagation properties discussed previously. The projections are made after 168 Myr, i.e. at a stage where the lobe is already relatively evolved (see Figure <ref>). The three left panels, i.e. the high-, intermediate- and low-resolution lobes, show a similar overall shape and density, which means that we expect a similar dynamics for them. For the runs with a variable injection kernel, we see a successively smaller and denser lobe, forming a less coherent structure, which indicates that more material has mixed with the surroundings for numerical reasons. For the reasons discussed above, the run using only a refinement criterion based on a target mass never forms a significantly under-dense structure and mixes with its surroundings very efficiently.

One of the key features of a predictive model for AGN jet-mode feedback is the ability to deposit the jet energy radially in the same way, and in the same form, as in the high-resolution simulations presented in this work. To assess this issue, we plot the energy change in a sphere with a given enclosed mass as a function of the corresponding radius in Figure <ref>. The general shape of this function is preserved for different resolutions, indicating that the overall structure of the lobe (inner maximum) and of the bow shock (maximum at larger distances) is captured at all resolution levels. The position of the bow shock is very robust, too, even considering the drastic changes in resolution.
This can be explained by the excellent shock-capturing properties of the finite-volume approach used in this study. The lobe structure, however, is located at successively larger distances when going to higher resolution. This can be explained by the different jet propagation properties, as shown in Figure <ref>. Apart from this change, there is an additional difference concerning the relative height of the first peak, which is lower for the high-resolution simulations. This indicates that the bow shock is energetically more important in the higher-resolution simulations than in the low-resolution run. We note, however, that the relative energies do not indicate the energy dissipated in the bow shock versus the energy retained in the lobe.

For the potential use of such a model in cosmological simulations, or in general in simulations with lower resolution, this means that the energy deposition is in general too centrally concentrated at low resolution, whereas the radial distribution of the energy up to the height of the lobe is approximately the same. Keeping in mind that simulations of galaxy clusters develop a self-regulated cooling-heating balance, one would expect the black hole accretion rate to drop because of the overestimated central heating, which would further decrease the range of the low-resolution jet. A possible way to compensate for such a propagation effect is to artificially prolong the duty cycle of poorly resolved jets, such that the resulting lobes end up at the same height. Given the uncertainties in both the duty cycle and the conversion efficiency of the black hole accretion rate to jet power, this could be an acceptable way to compensate for the above-mentioned resolution effects, allowing the use of the model in lower-resolution simulations.

§ DEPENDENCE ON MODEL PARAMETERS

We already discussed the effect of jet magnetisation in the main text. For completeness, in this section we discuss variations of the jet density ρ_target and of the precise choice of the size of the jet injection region, parameterised by h. Figure <ref> shows the density projections in panels of 200 × 400 kpc^2 for the fiducial (`fid', intermediate resolution) runs, as well as for a jet with ρ_target = 10^-26 g cm^-3, i.e. a 100 times higher initial jet density (`heavy'). Unsurprisingly, the heavy jet, carrying more momentum for the same amount of kinetic energy, propagates further, leading to an extremely elongated cavity extending far beyond 100 kpc. Consequently, such a jet will have a very different impact on the surrounding ICM, which is why we consider this parameter the main uncertainty in modelling jet-mode feedback by AGN. The `wide' panel of Figure <ref> shows a jet where we increased the parameter h to 20 kpc, i.e. by a factor of 4. This has significant consequences for the width of the jet and consequently for its propagation distance. This effect is similar to a decrease in resolution (see `low res'), albeit for a different reason. As already discussed in Appendix <ref>, the precise range of the jet is not converged for all resolutions used in this study, and it is subject to additional uncertainties due to the modelled jet and galaxy cluster effects (see the discussion in Section <ref>). We therefore do not consider the parameter h to be a dominant factor of uncertainty in our model, in particular as the lobe density is largely unaffected by it.

| http://arxiv.org/abs/1703.09223v1 | {
"authors": [
"Rainer Weinberger",
"Kristian Ehlert",
"Christoph Pfrommer",
"Rüdiger Pakmor",
"Volker Springel"
],
"categories": [
"astro-ph.GA",
"astro-ph.HE"
],
"primary_category": "astro-ph.GA",
"published": "20170327180000",
"title": "Simulating the interaction of jets with the intra-cluster medium"
} |
Rui Pinto^label1,label2, Ricardo J. Bessa^label1,cor2 ([email protected]), Manuel A. Matos^label1,label2

[label1] INESC Technology and Science (INESC TEC), Campus da FEUP, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
[label2] Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
[cor2] Corresponding author

The operation of near-future electric distribution grids will have to rely on demand-side flexibility, both through the implementation of demand response strategies and by taking advantage of the intelligent management of increasingly common small-scale energy storage. The home energy management system (HEMS), installed at low-voltage residential clients, will play a crucial role in the provision of flexibility to both system operators and market players such as aggregators. Modeling and forecasting multi-period flexibility from residential prosumers, with resources such as battery storage and electric water heaters, while complying with internal constraints (comfort levels, data privacy) and handling uncertainty, is a complex task. This paper describes a computational method that is capable of efficiently learning and defining the feasible flexibility space of the controllable resources connected to a HEMS. An Evolutionary Particle Swarm Optimization (EPSO) algorithm is adopted and reshaped to derive a set of feasible temporal trajectories for the residential net-load, considering storage, flexible appliances and predefined customer preferences, as well as load and photovoltaic (PV) forecast uncertainty. A support vector data description (SVDD) algorithm is used to build models capable of classifying feasible and non-feasible HEMS operating trajectories upon request from an optimization/control algorithm operated by a DSO or market player.

Keywords: renewable energy, multi-temporal flexibility, forecast, storage, uncertainty, prosumers

§ INTRODUCTION

Distributed Renewable Energy Sources (DRES) have been experiencing fast growth in medium voltage (MV) and low voltage (LV) grids as solar power technology becomes more and more affordable <cit.>. Conventional electrical network infrastructures were designed to accommodate unidirectional power flows, from the large power plants to the more populated zones where most of the consumers are located. With the increasing presence of DRES in LV and MV distribution grids, there is a paradigm change as power flows start to reverse direction, particularly during sunny days or windy periods. Consequently, technical difficulties regarding the operation of distribution grids start to arise for Distribution System Operators (DSOs), with bus voltage limits being violated or even line congestion events occurring.

Microgrids, composed of flexible loads, small-scale storage and their intelligent management by means of a home energy management system (HEMS) combined with smart meter capabilities, can bring flexibility into the operation of distribution grids, taking advantage of increasingly frequent types of flexible loads, such as the electric vehicle <cit.>, and of the presence of significant levels of photovoltaic (PV) microgeneration <cit.>. The flexible character that microgrids bring to distribution grid operation can be used to provide ancillary services <cit.> during periods of stressful grid operation. Optimizing the operation of these small-scale distribution grids is seen as crucial to endow DSOs with the means to accommodate current levels of DRES and to allow a deeper penetration of micro-generation in LV distribution grids in the short-term future.
Controlling, modeling and optimizing the schedules of flexible microgrid assets has been a topic of research in recent years <cit.>, with some focus given to energy storage <cit.>. DSOs can also take advantage of the flexible nature of microgrids during periods of stressful grid operation, using it for voltage control at MV/LV substations <cit.>, for load frequency control <cit.>, for power loss reduction, to support unintentional microgrid operation <cit.>, and to create operational conditions that maximize the DRES hosting capacity, which might bring financial benefits for prosumers and have a positive impact on the decarbonization of the electric power system.

Domestic small-scale storage, heat pumps, thermostatically controlled loads (TCL), air conditioners and the electric vehicle will certainly become more common in residential buildings in the near future, increasing the flexibility that can be provided within LV distribution grids, either by means of demand response programmes, through flexibility aggregators participating in dedicated market sessions, or even through the smart operation of a single HEMS aiming at maximizing customers' profits <cit.> while supporting DSOs in meeting specific operational criteria <cit.>. The provision of flexibility from HEMS monitoring and control capabilities is a theme of significant relevance as a result of the added value that can be brought to interested agents such as DSOs and flexibility aggregators. Therefore, there is a need to develop flexibility models that capture the following characteristics:

* Multi-period flexibility from behind-the-meter storage and TCL, due to the inter-temporal nature of the state of charge (SoC) and water temperature equations;
* LV net-load patterns driven by weather conditions (e.g., PV generation) that introduce high uncertainty in the forecasting task <cit.>.

This paper produces contributions that address these two research challenges.

§.§ Related Work and Contributions

Different authors have studied the economic and technical benefits of flexibility from energy storage technologies. The focus was mainly on large-scale storage and renewable energy power plants, which, combined, create the virtual power plant (VPP) concept. For instance, in <cit.> the value of price arbitrage for energy storage is evaluated for different European electricity markets. The authors found that in fuel-import-based markets, such as the United Kingdom, there are opportunities for investors to benefit from energy storage arbitrage, while the presence of hydro power plants decreases the interest in storage investment. Gomes et al. proposed a two-stage stochastic optimization method for the joint coordination of wind and PV systems considering energy storage in the day-ahead market and adopted a mixed-integer linear programming formulation <cit.>. Results show that, although the total amount of traded energy is lower when considering a joint coordination, the final profit is greater than when considering a disjoint operation. A comprehensive literature review about VPP scheduling can be found in <cit.>.

The present paper differs from these works since it is focused on residential prosumers with small-scale storage and PV generation, and flexible appliances combined through a HEMS.
In this segment, concerns like data protection and privacy, multi-period flexibility representation and the need for local computational units are important challenges that are not fully covered by the current literature.

A standard approach for modeling flexible distributed energy resources (DER) is to characterize their flexibility with a set of specific parameters. For instance, for planning the operation of service buildings with thermal energy storage on a yearly basis, Stinner et al. defined the maximum flexibility provision in each hour considering three dimensions: temporal, power and energy <cit.>. However, in this work the flexibility was modeled and calculated for planning purposes rather than for a short-term time horizon. For the design process, or for selecting a set of buildings to participate in demand response schemes, De Coninck et al. proposed flexibility cost curves, which correspond to the amount of energy that can be shifted to or from a flexibility interval and the associated cost <cit.>. Operational decision-making is not covered by this methodology.

Multi-period flexibility modeling and forecasting for short-term horizons is a rather recent topic, with few relevant research works dedicated to it. Zhao et al. studied a geometric approach that is capable of aggregating the flexibility provided by TCL (represented as a "virtual" battery), where the set of feasible power profiles of each TCL has been demonstrated to be a polytope <cit.>. The aggregation of several sets is performed by means of the Minkowski sum of the individual polytopes. The computational burden issue is tackled by the authors by adopting several approximations in the calculation of the Minkowski sums. Despite the merits of this work, domestic battery storage is not modeled and neither is the impact of net-load uncertainty, both of which are relevant and challenging issues in this problem.

The "virtual" battery model was also explored by other authors, like Hughes et al., who proposed a first-order linear dynamical model for flexibility provision from heating, ventilation and air conditioning (HVAC) systems in frequency regulation services, and generalized the method to many other types of loads <cit.>. The results showed that the developed technique still has challenges to overcome in modeling small-scale systems. Another example is Hao et al., who considered demand aggregation using battery models to model the set of feasible power profiles that a collection of TCLs can provide to track frequency regulation signals <cit.>. The TCL modeling assumes a simplified continuous power model, where the error related to this simplification decreases as the size and homogeneity of the TCL aggregation increase.

Nosair et al. proposed a method to construct flexibility envelopes that describe the flexibility potential of the power system and its individual resources <cit.>. The proposed envelopes comprise all possible intra-hourly deviations and variations of the modeled DRES, considering that at a certain sub-hourly time there is maximum output variability. Using the 95th percentile of the probability distribution of all the sub-hourly time steps, the authors define an envelope which comprises the majority of realizations of the flexibility requirements for those intervals. HEMS are not considered in this study, particularly the customers' preferences regarding the operation of their equipment, which make the modeling problem more complex and simultaneously more realistic.
Moreover, the multi-period nature of flexibility is not modeled in the envelope. A similar concept is also proposed by Nuytten et al. <cit.>. The authors presented a methodology to estimate the maximum and minimum curves regarding the operation of a combined heat and power (CHP) plant coupled with thermal storage. The difference between these two curves is indicated by the authors as being the theoretical maximum flexibility that the system can provide. This methodology can only be used for modeling the maximum flexibility that the system is capable of providing for one specific time step of the considered horizon, assuming that no flexibility has been activated in preceding periods. No multi-temporal formulation has been adopted in this work, which means that a set of feasible power set-points for flexibility provision over more than one time step cannot be provided by the proposed method. This flexibility representation was also adopted in the European project IndustRE as flexgraphs <cit.>.

Ulbig and Andersson <cit.> described a methodology to analyze the available flexibility, for each time step, of an ensemble of diverse units in a confined grid zone. This flexibility is modeled as a Power Node, which allows for the detailed modeling of specific constraints such as maximum ramp rates, power limits, and energy storage operation ranges. The authors propose a visual representation of the available flexibility during a specific time horizon. Nevertheless, this visual representation and modeling approach does not account for a multi-temporal formulation, meaning that the flexibility availability is only depicted for a single time step.

Pan et al. proposed a method to calculate the feasible region of a linear programming formulation for the operation of a district heating system, considering the thermal inertia of buildings <cit.>. The main objective was to have a condensed and privacy-preserving data exchange between district heating and electricity control centers. The method can be generalized to estimate the flexibility region for multiple time periods and non-linear formulations, but showed the following limitations: a) uncertainty is not included in this work (this is pointed out by the authors as future work); b) the visual representation of a multi-period flexibility region with more than three time intervals is not possible.

A different representation of flexibility is proposed in <cit.>. Bremer and Sonnenschein propose two sampling methods for defining the technically feasible flexibility set of DER. The authors, for instance, propose a Monte Carlo sampling method that starts with a feasible operating schedule and then, in each step, modifies it in at least one point in time with a random mutation factor. The methodology is used by the authors in a subsequent work, where the obtained trajectories are used as a learning sample for a support vector data description (SVDD) algorithm <cit.>.
Despite some methodological similarities with the work presented in this paper, that approach neither considered the modeling of customers' preferences nor accounted for the forecast uncertainty.

Considering the reviewed state-of-the-art, the main original contribution of the present paper is a novel trajectory generation algorithm that is capable of modeling the multi-period flexibility from HEMS, including information about base net-load (i.e., inflexible load plus PV generation) forecast uncertainty, represented by a set of short-term scenarios generated from probabilistic forecasts that take into account the temporal interdependency of forecast errors. The main limitation of the reviewed literature is that information about net-load forecast uncertainty is not included in the multi-period flexibility model, which, when included in DSO or aggregator optimization models, might lead to solutions with low robustness to uncertainty. In contrast to the methods described in <cit.>, which proposed multi-period flexibility estimation but without forecast uncertainty characterization, in our method the uncertainty in the PV and net-load profile is tackled by means of a novel trajectory-based evaluation procedure using the convergence features of the EPSO (Evolutionary Particle Swarm Optimization <cit.>) algorithm, which has been adapted to this work. Compared to <cit.>, the proposed method is intended for operational (or short-term) decision-making and extends flexibility estimation to multiple time intervals (i.e., a set of flexibility trajectories), which represents an added value compared to the works described in <cit.>. Similarly to <cit.>, the modeling strategy allows the interested parties to avoid modeling the equipment within the HEMS, which reduces the computational complexity and effort of problems such as multi-period OPF and maintains data privacy. In a second stage, the flexibility trajectory set is used as input to an SVDD model that is capable of delimiting the feasible flexibility set of the respective HEMS. The potentially interested parties, DSOs or demand/flexibility aggregators, only need to receive a reduced number of flexibility trajectories, called support vectors, which, by means of a specific function to be embedded in their optimization tools, allow them to classify unknown HEMS flexibility trajectories as feasible or not.

§.§ Structure of the Paper

The remainder of this paper is organized as follows: section <ref> introduces the concept of multi-temporal flexibility adopted in this work; in section <ref> the developed methodology is detailed, presenting the structure of the flexibility set generation algorithm and the approach to encode and distribute the generated information to interested parties; section <ref> presents the results regarding the performance of the proposed method; finally, in section <ref> the main conclusions are presented.

§ HEMS MULTI-PERIOD FLEXIBILITY: THE CONCEPT

Multi-period flexibility from HEMS can be defined as the ability to change the expected (baseline) net-load profile for a specific period of time (e.g., a 24-hour period), by jointly considering information about flexible/inflexible load, PV generation, hot water demand, water temperature inside the electric water heater and the state of charge (SoC) of domestic battery storage. The visual representation of the multi-period flexibility envelope is not as straightforward as one might assume.
Actually, when dealing with problems that aim at defining the feasible flexibility set for more than three time steps ahead, the visual representation of such a domain becomes impossible. There is a difference between the visual representation of the power limits in each time step that characterize the maximum flexibility band (like the flexibility envelope in <cit.>) and the actual visual representation of the feasible flexibility set considering different temporal activations of flexible resources.

For the sake of clarity, assume in the following example that flexibility can only be provided by a single domestic battery with 3.2 kWh of electrical energy capacity, maximum charge and discharge power of 1.5 kW, an initial SoC of 0.64 kWh (20%) and a minimum allowed SoC of 15% of capacity, i.e., 0.48 kWh. In this example the battery efficiency is neglected. The upward and downward hourly limits of the flexibility band for this battery, considering the stated conditions, are depicted in Figure <ref>. This would represent the flexibility envelope or flexgraph from <cit.>.

The outer limits of the flexibility domain represented in Figure <ref> are defined by the previously mentioned upward and downward flexibility limits. The search domain defined in Figure <ref> relates to what is commonly referred to as the flexibility power band. As the initial SoC is 0.64 kWh, the upward flexibility limit for the first considered operating time step is equal to the maximum battery charging power, since the resulting SoC would then be 2.14 kWh, considering hourly time steps and neglecting the battery's efficiency. Analyzing the defined downward flexibility limits, namely the limit for the first time step considered: as the minimum allowed SoC is 15% of the storage capacity, 0.48 kWh, the maximum admissible discharge power is just 0.16 kW for an hourly time step. From the second operation time step considered onwards, both the upward and downward flexibility provision power limits correspond to the maximum charge and discharge rates, respectively. This occurs because there is always a possible flexibility trajectory that remains feasible while representing a choice of maximum charging or discharging power in any of the remaining time steps considered.

With that said, it is important to stress that, as previously stated, there is an important difference between the definition of the flexibility power band limits and the limits of the feasible flexibility provision envelope being tackled in this study. An example of this is the trajectory also depicted in Figure <ref>, representing a possible flexibility offer expressed in kW: [0.0, -0.5, 0.0]. Although the power set-points composing this trajectory are all within the limits of the defined flexibility power band, analyzing the SoC response to this trajectory (in kWh), [0.64, 0.14, 0.14], one can verify that the trajectory becomes infeasible from the second time step onwards, as the minimum SoC constraint (SoC >= 0.48 kWh) is not complied with.

The representation of the flexibility space adopted in this paper is through a set of technically feasible net-load trajectories, which represent alternative paths to the expected (baseline) net-load profile (trajectory).
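The bookkeeping behind this example is simple enough to sketch in a few lines of Python. The constants mirror the example's assumptions (3.2 kWh capacity, 1.5 kW power limits, 0.64 kWh initial SoC, 0.48 kWh minimum, hourly steps, efficiency neglected); the code is illustrative, not the implementation used later in the paper.

CAP = 3.2             # battery energy capacity [kWh]
P_MAX = 1.5           # maximum charge/discharge power [kW]
SOC_INI = 0.64        # initial state of charge [kWh] (20%)
SOC_MIN = 0.15 * CAP  # minimum allowed SoC: 0.48 kWh
DT = 1.0              # hourly time steps; efficiency neglected

def soc_response(flex_kw):
    """SoC after each set-point of a flexibility trajectory (charging > 0)."""
    soc, path = SOC_INI, []
    for p in flex_kw:
        soc += p * DT
        path.append(soc)
    return path

def is_feasible(flex_kw):
    """Within the power band at every step, and SoC within [SOC_MIN, CAP]."""
    if any(abs(p) > P_MAX for p in flex_kw):
        return False
    return all(SOC_MIN <= s <= CAP for s in soc_response(flex_kw))

# The trajectory from the example: inside the power band, yet infeasible,
# since the SoC drops to 0.14 kWh < 0.48 kWh from the second step onwards.
print(soc_response([0.0, -0.5, 0.0]))  # [0.64, 0.14, 0.14]
print(is_feasible([0.0, -0.5, 0.0]))   # False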
In other words, these trajectories are samples taken from the multi-dimensional space forming the feasible flexibility set.

Concluding, the concept of multi-temporal flexibility provision relates to the potential that a certain HEMS has to reshape its expected net-load profile over a determined number of temporally interdependent periods, while complying with its technical and physical internal constraints. The main focus of this study is the delimitation of the flexibility set that encompasses all the possible multi-temporal net-load profile variations that can be performed by the HEMS control functions. The next section describes the methodology that generates this set of temporal flexibility trajectories.

§ METHODOLOGY FOR MODELING THE FEASIBLE FLEXIBILITY SET

§.§ General Framework

The methodology developed in this work extends the previous work reported in <cit.>, in which the feasible flexibility set is estimated by generating semi-random feasible trajectories and then feeding an SVDD algorithm with those trajectories. In that previous version of the algorithm, random sampling routines were used to generate a sufficient number of feasible trajectories. In this new version, the construction of feasible trajectories no longer depends on a random sampling routine; instead, an EPSO algorithm is used to search for feasible trajectories. The use of the EPSO algorithm also enables the inclusion of information about base net-load forecast uncertainty (i.e., inflexible load plus PV generation) by means of solution evaluation for a set of different uncertainty scenarios, which greatly increases the complexity and computational effort. Accordingly, a feasible solution will be one that complies with all the constraints for a predefined probability threshold over all the possible HEMS base net-load scenarios considered, instead of relying on simple point (or deterministic) forecast information as in the previous algorithm version.

The final set of feasible trajectories resulting from the EPSO search procedure is aggregated to create a learning dataset for the SVDD algorithm. The SVDD is a one-class support vector machine algorithm that is commonly used in novelty detection, where a given set of samples is provided to the function, which in turn builds a model by detecting the soft boundary of that set <cit.>. Inspired by the methodology proposed in <cit.> for the encoding of search spaces in virtual power plant applications, the SVDD is used in this work to classify new flexibility trajectories as belonging to that set or not; in other words, to check whether a potential net-load profile is technically feasible.

Figure <ref> depicts the main stages of the proposed methodology in the form of a block diagram. The first step is the generation of short-term scenarios for the forecast uncertainty, corresponding to a discrete multi-temporal representation of uncertainty, which is described in section <ref>. Then, there are three main boxes representing: the trajectory construction process; the learning of the feasible domain; and the validation of the multi-period flexibility. The first stage concerns the use of the EPSO algorithm to generate feasible trajectories that comply with the defined customer's preferences (see section <ref>).
These customer preferences are embedded into the proposed algorithm and are responsible for modeling the desire to minimize the waste of energy coming from PV generation, which means that the battery's storage capacity must be used as much as possible to accommodate the energy surplus from PV generation. Another considered customer preference is the definition of the water temperature range inside the EWH tank: the temperature must always remain between the defined minimum and maximum values during the period under study. The formulation of the optimization problem is discussed in section <ref>. In the second stage, section <ref>, the constructed feasible trajectories are used as input for the SVDD function, which results in the construction of the model and the identification of the support vectors that define the boundaries of the feasibility domain. Finally, in the validation of multi-period flexibility stage, the interested agent (DSO or flexibility aggregator) can take advantage of the feasible domain knowledge coming from the information embedded in the provided support vectors and define the optimal multi-period flexibility trajectory that is aligned with its operational needs while remaining viable for the HEMS (or aggregation of HEMS) to provide.

§.§ Representation of Forecast Uncertainty

The proposed method to generate flexibility trajectories for HEMS uses as input a sample of temporal scenarios without any assumption about the parametric model of their joint distribution. Each scenario can be interpreted as a sample taken from the joint probability distribution. This representation is called random vectors in statistics <cit.>, path forecasts in econometrics <cit.> and weather ensembles in atmospheric sciences <cit.>.

Mathematically, a set of M temporal scenarios for a time horizon of length T can be defined as follows:

Y^M = [ y_{t+1|t}^1   y_{t+2|t}^1   ⋯   y_{t+T|t}^1
        y_{t+1|t}^2   y_{t+2|t}^2   ⋯   y_{t+T|t}^2
        ⋮             ⋮                 ⋮
        y_{t+1|t}^M   y_{t+2|t}^M   ⋯   y_{t+T|t}^M ]

where each row (y^[m]) of Y^M contains one scenario member. Collectively, the scenario set should exhibit the correct temporal dependence structure between the marginal probability distributions or probabilistic forecasts (see <cit.> for more details).

These scenarios can be empirical, analytical, physical or generated by a stochastic model. Yet, it is important to underline that the proposed method is independent of the scenario-generating process and can be applied even if this process is unknown or known only in the form of a simulation model.

Empirical scenarios can be constructed with analog-based methods like the analog ensemble approach <cit.>, which searches the historical forecast data for situations when the forecast was most similar (or analogous) to the current forecast. For each of those analogous forecasts, the corresponding observation is collected. An analytical solution to construct these scenarios is epi-spline basis functions, which approximate the stochastic process for renewable energy and control the degree to which extreme errors are captured <cit.>. A well-known physical approach consists of ensemble prediction systems that are designed to model three sources of weather uncertainty: initial conditions, physical approximation and boundary conditions <cit.>.

In this paper, a stochastic simulation method, based on the Gaussian copula proposed in <cit.> for wind and solar energy, was adopted.
The method works as follows:

* Probabilistic forecasts (marginal distribution functions) for solar power are generated with a combination of feature engineering and gradient boosting trees, using a grid of weather forecasts <cit.>. Exponential functions are used for the distribution's tails, as described in <cit.>. For load time series, probabilistic forecasts are generated with conditional kernel density estimation <cit.>.

* Scenarios (or random vectors) are generated by plugging an exponential covariance matrix into a Gaussian copula and using the inverse of the forecasted cumulative distribution function. The details of the scenario generation method can be found in Appendix A.

§.§ Problem Formulation

In this work, flexibility from domestic EWH and battery storage is included in the flexibility model. However, the methodology can be easily generalized to other flexible resources at the domestic and network level. The problem formulation has two decision variables and two state variables. The decision variables are the power flow in the domestic electric battery's inverter, P_bat, and the operating point of the EWH, P_ewh. The two state variables refer to the battery SoC and the water temperature inside the EWH tank. The constraints used in this problem formulation are presented next.

traj_h = Pbat_h + Pewh_h
Pbat^min <= Pbat_h <= Pbat^max
Pewh_h = 0, for off status; Pewh_h = Pewh^nom, for on status
SoC^ini + ∑_{h=1}^{H} Pbat_h <= SoC^max
SoC^ini + ∑_{h=1}^{H} Pbat_h >= SoC^min
θ^min <= θ_h <= θ^max

The trajectories representing the flexibility that can be provided by the HEMS are limited by the battery's charging and discharging powers (Pbat^max and Pbat^min) and the EWH nominal power (Pewh^nom), (<ref>) and (<ref>) respectively. Regarding the maximum charging power, in this study a dynamic model is adopted where the maximum charging power depends on the SoC of the battery, which is a typical behavior for Li-ion batteries. This is explained by the two most common charging stages that occur when charging a lithium-ion battery: constant current and constant voltage <cit.>. During the constant current charging stage the battery is basically connected to a current-limited power supply until reaching around 70-80% of its energy capacity. For higher SoC the battery enters the constant voltage stage, where the charger acts as a voltage-limited power supply and the charging current gradually decreases as the SoC approaches full capacity. The modeling used in this work is presented in Figure <ref>, where the maximum charging power starts to decrease for SoC above 80% and a minimum charging power of 20% of nominal power is assumed. The inclusion of this non-linear behavior of battery storage also highlights the added value of the proposed method, since it is not constrained by first-order linear models as in <cit.>.

State of charge limits are enforced by equations (<ref>) and (<ref>), with the maximum allowed SoC being the total energy capacity of the battery and the minimum allowed SoC being limited to a certain percentage of total capacity, e.g. 15%. There is an allowed water temperature range inside the EWH tank, which is represented by Equation (<ref>). Equation (<ref>) represents the water temperature variation along the time horizon, which depends on the expected volume of hot water usage and the decision variable regarding the operating status of the EWH, Pewh. The physically-based load model adopted for the EWH modeling is aligned with the one used in <cit.>.
θ_h = θ_{h-1} + (Δt/C) [ -α(θ_{h-1} - θ_house) - c_p v_h (θ_des - θ_inl) + Pewh_h ]

In (<ref>), Δt is the time step [h], C is the thermal capacity [kWh/°C] set to 0.117, α is the thermal admittance [kWh/°C] set to -9.42×10^-4, θ_house is the house indoor temperature set to 20 °C, c_p is the water specific heat [kWh/(ltr.°C)], v_h is the hot water consumption at time h, θ_des is the desired temperature for water consumption set to 38 °C, and θ_inl is the inlet water temperature set to 17 °C.

In line with Figure <ref>, there is a need to validate the customers' preferences. Thus, the SoC state variable must be updated taking into account the PV generation surplus, which must be accommodated by the battery, accounting for battery storage capacity and maximum charging power limitations. Accordingly, for each time step of the operation horizon considered, the SoC variable is updated by summing up the PV generation surplus, PV^sur, which consequently increases the SoC, and subtracting the possible EWH power in those time steps. The combination of the PV generation surplus to be accommodated, the decision variable regarding the charging (or not) of the battery, and the decision variable representing the EWH operating status must respect the maximum charging power, by verifying (<ref>).

Pbat_h + PV_h^sur - Pewh_h <= Pbat^max, ∀ h

Additionally, (<ref>) must be replaced by (<ref>) to account for the PV generation surplus. The maximum physically possible amount of PV surplus energy that the battery can absorb without being used for flexibility provision must still be assured when defining the feasible trajectories for flexibility provision.

SoC^ini + ∑_{h=1}^{H} (Pbat_h + PV_h^sur - Pewh_h) <= SoC^max

There is a maximum amount of PV generation surplus that the battery is capable of accommodating, which is related to its storage capacity, and it can vary along the time horizon depending on the preceding operation decisions. The maximum energy that the battery can accommodate results from the difference between the maximum and minimum SoC limits. To assess whether this specification is respected along all the time steps considered, an auxiliary variable, capacity, was created. Its initial value is set to the previously referred maximum energy that the battery can accommodate. This auxiliary variable tracks the remaining PV surplus energy accommodation capacity at each time step.

capacity_h = capacity_{h-1} - (PV_h^sur - Pewh_h)
capacity_h = capacity_{h-1} + Pbat^max

In each time step where there is a PV generation surplus, its value is updated by subtracting from its current value the PV surplus to be accommodated in that time step (<ref>), limited by the battery charging power. On the other hand, in each time step where there is no PV generation surplus, its value is updated by increasing the capacity to absorb by the discharging power limit of the battery (<ref>). A trajectory is classified as non-feasible if it limits the theoretical maximum capacity of the storage unit to accommodate the PV surplus. If a certain trajectory respects all the problem constraints besides the one regarding the accommodation of PV, that trajectory is modified to cope with that requirement.

Pbat_h = capacity_h - (PV_h^sur - Pewh_h)

Accordingly, the decision variable regarding the operation of the electric battery will be modified to be equal to the remaining battery capacity after accommodating the PV surplus (<ref>).
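To make the recursion concrete, the following is a minimal Python sketch of the tank-temperature update in Equation (<ref>), together with the comfort-range check. The constants mirror those stated in the text, except the water specific heat c_p, whose value is not given there; the standard 1.163×10^-3 kWh/(ltr.°C) is assumed here, and the 15-minute step matches the resolution used later in the results.

DT = 0.25           # time step [h]; 15-minute resolution
C = 0.117           # thermal capacity [kWh/degC]
ALPHA = -9.42e-4    # thermal admittance, as stated in the text
THETA_HOUSE = 20.0  # house indoor temperature [degC]
CP = 1.163e-3       # water specific heat [kWh/(ltr.degC)] -- assumed value
THETA_DES, THETA_INL = 38.0, 17.0
THETA_MIN, THETA_MAX = 45.0, 80.0  # comfort range used in the case study

def water_temperature(theta0, p_ewh, hot_water):
    """Roll theta forward; p_ewh in kW and hot_water in litres per step."""
    theta, path = theta0, []
    for p, v in zip(p_ewh, hot_water):
        exchange = -ALPHA * (theta - THETA_HOUSE)  # exchange with the house
        draw = CP * v * (THETA_DES - THETA_INL)    # hot-water consumption
        theta = theta + (DT / C) * (exchange - draw + p)
        path.append(theta)
    return path

def comfort_ok(path):
    """Constraint theta^min <= theta_h <= theta^max at every step."""
    return all(THETA_MIN <= t <= THETA_MAX for t in path)

# Example: EWH off for four steps, one 10-litre draw in the second step.
print(comfort_ok(water_temperature(60.0, [0, 0, 0, 0], [0, 10, 0, 0])))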
Part of the PV surplus might be consumed by the operation of the EWH while providing flexibility, which is accounted for in (<ref>) and (<ref>). Figure <ref> illustrates the evolution of the state variable regarding the battery SoC after this evaluation procedure takes place.

Figure <ref> represents, at the top, one of the 100 possible base net-load scenarios. As previously mentioned, to meet the customer's preferences, the battery SoC must be updated whenever the PV generation exceeds the load levels (represented by negative net-load values). One consequence of this customer preference modeling is that, in moments of PV surplus, the battery cannot present downward flexibility (only possible with battery discharging), since the battery must be used to accommodate the PV energy surplus through the charging mode (although HEMS downward flexibility might be presented depending on the operation of the EWH). An example of this effect can be observed in Figure <ref>. The initial SoC variation depicted in the bottom chart shows that the SoC either stays unchanged or increases during the identified PV surplus moments. At the bottom, the resulting SoC variation along the considered time horizon can be compared to the original variation. As can be seen from the figure, the final SoC starts to differ from the initial SoC variation at the moment when the first negative net-load value occurs. The shape difference represents the PV generation surplus minus the power consumed by the EWH in the flexibility provision.

§.§ EPSO Implementation for the Trajectory Searching Process

The fundamental ideas behind the EPSO algorithm come from the population-based evolutionary programming concept, where each combination of a generated solution, X, and the respective strategic parameters (weights, w) is called a particle. There are five main steps in the general scheme of EPSO, namely:

* Replication: where each particle is replicated;
* Mutation: where each particle has its weights, w, mutated;
* Reproduction: where an offspring is generated from each mutated particle according to the movement rule;
* Evaluation: where each particle in the population has its fitness evaluated;
* Selection: where, by means of stochastic tournament, the best particles survive to form the next generation.

For a given particle X_i, the new resulting particle, X_i^new, results from:

X_i^new = X_i + V_i^new
V_i^new = w_{i0}^* V_i + w_{i1}^* (b_i - X_i) + w_{i2}^* (b_g^* - X_i)

This movement rule has the terms of inertia, memory and cooperation. The weights are subjected to mutation:

w_{ik}^* = w_{ik} + τ N(0, 1)

where N(0, 1) is a random variable with Gaussian distribution with mean 0 and variance 1. Additionally, the global best b_g is randomly disturbed:

b_g^* = b_g + τ' N(0, 1)

In (<ref>) and (<ref>), τ and τ' are learning parameters.

Using the EPSO method, the developed algorithm incorporates in each particle information regarding the decision variables, P_bat and P_ewh. Accordingly, each particle in this reshaped EPSO algorithm becomes a two-dimensional object representing the two decision variables. The final fitness value of each particle, which represents the trajectory feasibility verification, is the result of the combined fitness evaluation of the two dimensions of each particle. During the fitness evaluation process the state variables are updated for the time steps considered. In the end, each feasible trajectory, traj, will result from the sum, for each time step, of the two dimensions of each of the selected particles, according to (<ref>).
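The reproduction step defined by the movement rule above fits in a few lines. The sketch below is illustrative (the τ values are placeholders, not the paper's settings) and operates on one dimension of a particle; in the two-dimensional formulation it would be applied to both the P_bat and the P_ewh components.

import numpy as np

rng = np.random.default_rng(0)

def epso_reproduce(x, v, w, b_i, b_g, tau=0.1, tau_p=0.1):
    """One EPSO reproduction step for a particle dimension.
    x, v, b_i, b_g: arrays of set-points (position, velocity, own best,
    global best); w = (w0, w1, w2): inertia/memory/cooperation weights."""
    w_star = w + tau * rng.standard_normal(3)              # weight mutation
    b_g_star = b_g + tau_p * rng.standard_normal(x.shape)  # disturbed global best
    v_new = (w_star[0] * v
             + w_star[1] * (b_i - x)          # memory term
             + w_star[2] * (b_g_star - x))    # cooperation term
    return x + v_new, v_new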
The problem constraints enumerated in the previous section must be complied with. In each iteration of the algorithm, as part of the EPSO fitness evaluation process, the state variables must be assessed for the generated particles. Accordingly, the battery SoC and the water temperature of the EWH are computed based on the values of the previous time steps for the entire time horizon considered. As the developed EPSO algorithm considers a two-dimensional formulation, this state variable assessment procedure includes two independent functions. The trajectory feasibility relies on the fulfillment of (<ref>), (<ref>), and (<ref>). For each time step where (<ref>), (<ref>), and (<ref>) are not respected, a penalty term is added in the penalty verification function of the particle being evaluated (<ref>).

Penalty_k = Pen_socmax + Pen_socmin + Pen_tempwater

In line with the information presented in Figure <ref>, the feasible trajectories generation process that takes place in the developed EPSO algorithm is not complete without the validation of the customer's preferences. In this study, besides the assurance that the water temperature inside the EWH tank remains within the pre-established temperature range (<ref>), one must ensure that the main purpose of the battery use prior to the HEMS flexibility offering remains the accommodation of the PV generation surplus, (<ref>) and (<ref>).

The validation of this customer requirement is accomplished by a scenario-based approach that allows the EPSO resulting feasible flexibility set (trajectories) to incorporate the forecast uncertainty regarding the base net-load of the HEMS. Hence, a set of short-term scenarios is used to represent the forecast uncertainty of base load and PV generation in this methodology. This probabilistic information is included in the constraints of the optimization problem, resulting in a chance-constrained optimization problem <cit.> that is solved with EPSO. Let ς be the indicator function on the fulfillment of constraint (<ref>), as represented by (<ref>).

ς = 1, if Penalty_k = 0
ς = 0, if Penalty_k > 0

The indicator function ς can be seen as a binary indicator that takes value 1 if the penalty verification function in (<ref>) equals 0 (i.e., no constraint violation), and takes value 0 if the penalty verification function results in a value greater than 0, representing a constraint violation. In other words, if the constraints are violated in one interval of the net-load scenario, ς takes value 0 independently of the violation or not of the constraints in other intervals, which is guaranteed by (<ref>). The EPSO fitness function comes from the sum of the indicator function over all the considered base net-load scenarios (<ref>). Accordingly, and as defined in (<ref>), for a trajectory to be considered robustly feasible it must comply with a scenario percentage threshold τ, i.e., the number of base net-load scenarios in which the trajectory remains feasible must be greater than a determined percentage of the total number of scenarios.

Fit_k^uncert = ∑_{s=1}^{Nscen} ς_s
Fit_k^uncert > τ·Nscen

The trajectory fitness evaluation can be carried out according to (<ref>) for the different conditions that each of the considered base net-load scenarios represents. Consequently, one can classify a certain trajectory as robustly feasible or not by checking (<ref>).

In each EPSO iteration, the best global particle encountered so far (i.e., the one with the best fitness evaluation) needs to be updated.
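Before turning to that update, the chance-constrained feasibility test defined by the last two expressions can be sketched as follows; penalty stands in for the SoC, water-temperature and PV-accommodation checks described above, and the 90% threshold anticipates the value used in the numerical results.

TAU_PROB = 0.9  # scenario percentage threshold

def robustly_feasible(trajectory, scenarios, penalty):
    """penalty(trajectory, scenario) -> 0 iff no constraint is violated;
    the indicator over scenarios is summed and compared to the threshold."""
    fit = sum(1 for s in scenarios if penalty(trajectory, s) == 0)
    return fit > TAU_PROB * len(scenarios)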
As mentioned, the fitness function used in this problem formulation only penalizes solutions that do not comply with the defined constraints. This means that there is no optimal solution in the search process for which the EPSO algorithm is used. Each particle that is feasible for all the considered scenarios will have the same fitness value. Therefore, the best global particle, b_g, is selected based on the relative position of all the feasible trajectories identified so far. The aim here is that b_g, which is used as reference in the movement rule, is chosen to increase the diversity of the feasible trajectory set. Accordingly, this selection procedure evaluates the relative position of all the elements in the set of feasible trajectories, looking for the one with the greatest distance relative to all the others.

Distance = ∑_{t=1}^{T} ( |P_bat,t - P_bat,t^mean| + |P_ewh,t - P_ewh,t^mean| )

Thus, for each dimension and for all the time steps considered, the absolute distance between the particle position and the mean position of the feasible set is computed. The final distance comes from the accumulated distance along the time horizon considered, following (<ref>).

§.§ Surrogate Model for the HEMS Flexibility

The final EPSO algorithm output will be a large set of feasible temporal trajectories, which represent the feasible flexibility domain. The generated trajectories are used as the learning set of an SVDD function. The one-class support vector machine function available in the Scikit-Learn Python Library <cit.> was used. This learning dataset must have sufficient diversity among the built trajectories so that the resulting model is capable of efficiently delimiting and learning the feasibility domain boundary.

The trained SVDD model identifies the necessary support vectors (trajectories representative of the domain boundary) that describe the high-dimensional sphere representing the feasible domain, together with the respective coefficients. The support vectors together with the respective coefficients compose the data that is transmitted to the interested agents in order to quantify the HEMS flexibility potential. From the entire set of feasible trajectories that feed the SVDD function, some are selected by it according to the significance they have in delimiting the feasible domain. During this identification process, support vector coefficients are also computed, which impose more or less significance on certain support vectors. This means that some support vectors are more decisive in delimiting the feasible domain, which implies that the respective coefficients have greater values. Applying (<ref>), the SVDD model is capable of classifying new flexibility trajectories as feasible or not.

R^2(x) = 1 - 2 ∑_i β_i k(x_i, x) + ∑_{i,j} β_i β_j k(x_i, x_j)

This classification is based on the comparison between the radius of the high-dimensional sphere and the radius in the high-dimensional domain that the trajectory being classified represents. The formula that calculates the corresponding trajectory (and sphere) radius is expressed in equation (<ref>), where R^2 is the square of the radius being calculated, x_i and x_j are support vectors, β_i and β_j are the respective coefficients, k refers to the kernel used by the SVDD function, and x is the trajectory being evaluated. More detailed information regarding this methodology can be consulted in <cit.> and <cit.>.
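The radius computation in the last equation translates directly into code. The sketch below is a literal transcription (it assumes, as the leading 1 in the formula does, a kernel with k(x, x) = 1) and is not tied to any particular SVDD implementation.

import numpy as np

def svdd_radius_sq(x, support_vectors, betas, kernel):
    """R^2(x): squared distance of trajectory x to the sphere centre."""
    k_x = np.array([kernel(sv, x) for sv in support_vectors])
    K = np.array([[kernel(si, sj) for sj in support_vectors]
                  for si in support_vectors])
    return 1.0 - 2.0 * betas @ k_x + betas @ K @ betas

def is_inside(x, support_vectors, betas, kernel, r_sq_sphere):
    """A trajectory is feasible iff its radius does not exceed the sphere's."""
    return svdd_radius_sq(x, support_vectors, betas, kernel) <= r_sq_sphere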
The information in equation (<ref>) can be integrated into any meta-heuristic optimization framework (like EPSO) to optimize the available flexibility according to the end-user's goals <cit.>. To be classified as feasible, a trajectory must represent a radius in the high-dimensional domain that is equal to or smaller than the radius of the sphere representing the feasibility boundary. Figure <ref> illustrates the SVDD classification procedure, where the sphere representing the feasible flexibility space is defined by the identified support vectors. Trajectories whose projection falls within the flexibility space delimited by the sphere (in other words, trajectories that lead to a radius smaller than the sphere's radius) are considered feasible. On the other hand, trajectories leading to a radius greater than the sphere's radius are considered unfeasible, which in Figure <ref> are represented by the triangle-shaped symbols outside the domain defined by the sphere.

The discrete nature of the EWH operation brings some challenges when defining the HEMS flexibility set, namely due to the discontinuities that it can introduce. These discontinuities cannot be fully captured by the SVDD algorithm, which might lead to misleading trajectory classifications. In order to clearly illustrate this effect, consider an EWH and an electric battery as flexible assets inside a HEMS. If at a certain time step the battery, for reasons of its operating strategy, does not have flexibility provision capability, the only flexibility that the HEMS can provide comes from the EWH discrete operation. Thus, the SVDD function will learn that for that time step the HEMS can provide upward flexibility by an amount equal to the nominal power of the EWH. The resulting trajectory classification model will then consider as feasible flexibility provision values between 0 kW and the EWH nominal power, instead of a discrete representation. Nevertheless, these operating conditions are not frequent and this modeling glitch can be neglected when the battery has enough SoC margin to adjust its power output.

According to the methodology introduced by this work, in order to define its flexibility potential the HEMS only needs to provide to the interested parties the computed support vectors and the respective coefficients, following (<ref>), where x_i and x_j are the support vectors and β_i and β_j the respective coefficients. Accordingly, no information regarding the customer's demand patterns or installed equipment (like battery specifications) needs to be revealed. This surrogate model for the HEMS flexibility complies with recent concerns regarding customers' data privacy arising with the smart meter deployment <cit.>.

§ NUMERICAL RESULTS

The performance assessment of the developed methodology was based on two main analyses: the generation of feasible trajectories and the classification accuracy of SVDD models.

§.§ Generation of Feasible Flexibility Trajectories

As detailed in section <ref>, a trajectory will be classified as feasible when complying with the considered constraints for at least a pre-defined minimum percentage of base net-load scenarios. The constraints refer to the battery's SoC limits, the EWH water temperature and the use of the battery during PV surplus periods.
The EPSO algorithm is used to generate the set of feasible trajectories that will feed the SVDD function responsible for constructing a model capable of identifying new trajectories as feasible or not.

For the base net-load forecast uncertainty of the HEMS, 100 short-term scenarios were used. In this study, for a certain trajectory to be considered feasible, it must comply with the problem constraints for at least 90 scenarios, which represents a probability of 90%. The time horizon corresponds to a 24-hour window with a 15-minute resolution, leading to 96 time steps.

The EPSO algorithm was configured with a population size of 30 particles, a maximum number of iterations of 5000, a feasible trajectory target of 1000, communication factors of 0.15 for both dimensions of the particles, maximum and minimum mutation rates of 0.50 and 0.05, respectively, and a learning parameter, τ, of 5. The mutation rate is dynamic throughout the process, beginning with the maximum value and decreasing to the minimum value as the number of iterations increases. To help the convergence of the algorithm, a constructed initial population is used instead of random values. This initial population fills the initial particles with decision values that respect the limits for the respective decision variables, P_bat and P_ewh. Additionally, using one of the scenarios of the HEMS net-load profile, the PV energy surplus periods were identified and used to zero the decision variable P_bat, which brings some of these initial particles closer to the customer's preference constraint of not discharging the battery during those periods and of using it to accommodate the PV energy surplus.

The HEMS modeled in this study has as flexible assets a domestic battery and an EWH unit. The battery considered has 3.2 kWh capacity, maximum charging and discharging power of 1.5 kW and an efficiency of 92.5%. The initial SoC was set to 60% of the maximum capacity and the minimum SoC level was defined as 15% of the maximum capacity. Regarding the EWH, it has a nominal power of 0.5 kW, maximum and minimum water temperatures of 80°C and 45°C, respectively, and an initial water temperature of 60°C.

Figure <ref> depicts the 100 HEMS base net-load scenarios used in this study (top-left corner) and also a set of 100 feasible flexibility trajectories generated by the EPSO algorithm (bottom-left corner).

The analysis of the net-load profile scenarios (Figure <ref>, top-left corner) lets one identify the period of the day with typically higher PV generation as the one that brings the most uncertainty regarding the HEMS net-load. It is during this period that the HEMS net-load can present negative values, which relate to the customer's preference constraint of using the battery to minimize, as far as physically possible for the battery, the injection of PV energy into the grid. The impact of this constraint on the set of feasible trajectories constructed can be observed in the bottom charts depicted in Figure <ref>. The period between 07:30 and 13:00 has been zoomed in for improved clarity (right-hand-side charts of Figure <ref>). From 08:00 to 12:30 most of the net-load scenarios represent a power injection into the LV distribution grid, which imposes that the battery should not be used to provide downward flexibility. The right-hand-side bottom chart shows the flexibility trajectories for that period. As one can observe, most of the trajectories are providing 0 kW or 0.5 kW, the latter referring to the EWH nominal power.
Values greater than 0.5 kW can occur if the respective trajectory has room to provide upward flexibility while accommodating the PV energy surplus. Negative values are very uncommon and can only occur if, when evaluating a certain particle against the customer preference on battery use, the EWH power-related decision variable counterbalances the negative net-load trajectories in some time steps. If that occurs, the battery can be used freely during such time steps to provide flexibility.

Regarding the increase in the diversity of the solutions generated when comparing the current version of the algorithm with the preliminary one that used semi-random routines for the trajectory construction <cit.>, an analysis was performed using the principal component analysis method, which applies an orthogonal transformation to convert a set of variables (with a set of observations) into a set of values of linearly uncorrelated variables (principal components). Basically, the aim is to verify which version of the algorithm produces a set of 1000 feasible trajectories that needs more components to explain a certain percentage of the variance of the respective set. Results show that the newest algorithm version presented in this work needs 5 components to explain 50% of the variance and 16 components to explain 80%, while, for the same conditions, the older version needs only 2 and 5, respectively. This shows that the set of feasible trajectories produced by the newest version of the algorithm is more diverse, which leads to a better representation of the feasible search space when using the computed trajectories as input to build the SVDD model.

The final set of 1000 feasible trajectories generated by the EPSO routine needed around 25 minutes to be constructed on a desktop computer with an Intel Core i7-2600 CPU running at 3.40 GHz and 8.00 GB of installed RAM. The algorithm was developed in the Python programming language.

§.§ Classification Accuracy of SVDD-based Model

The other analysis carried out regarding the performance of the algorithm refers to the assessment of the SVDD classification accuracy for the multi-period trajectories. With that objective, two different sets of trajectories were used: one set composed of feasible trajectories and a second set of unfeasible trajectories. The purpose of using these two sets relates to the necessity of evaluating the classification process not only for the correct classification of feasible trajectories but also for correctly identifying unfeasible trajectories.

Regarding the SVDD function, hyper-parameters had to be defined before the construction of the classification models. These hyper-parameters are the kernel type and coefficient (γ), which influence the quality of the classification performance. For this problem, it was found that the most suitable kernel type was the sigmoid; Table <ref> presents the performance results for that kernel type together with the results from models using the radial basis function and polynomial kernels. Additionally, it was found that the fine-tuning of the nu parameter can have a strong influence on the quality of the constructed SVDD model, which can be consulted in Table <ref>. For better model performance, the input trajectories were normalized to values between 0 and 1, using the minimum and maximum values from the EPSO feasible trajectory set.
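As a concrete illustration of this model construction, the sketch below uses scikit-learn's one-class SVM with the hyper-parameter values reported below (sigmoid kernel, γ = 0.05, ν = 0.15); random data stands in for the normalized trajectory set.

import numpy as np
from sklearn.svm import OneClassSVM

trajectories = np.random.rand(1000, 16)  # stand-in for the 16-step dataset

model = OneClassSVM(kernel="sigmoid", gamma=0.05, nu=0.15)
model.fit(trajectories)

# The surrogate transmitted to the DSO/aggregator: the support vectors and
# their coefficients; predict() returns +1 for feasible, -1 for unfeasible.
support_vectors, betas = model.support_vectors_, model.dual_coef_
print(model.predict(trajectories[:5]))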
The time period used for this analysis ranged from 09:00 to 13:00 with 15-minute time steps, resulting in a problem with 16 time steps.

The nu parameter is a model configuration parameter that refers to an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. Based on the results displayed in Table <ref>, one can verify that increasing the value of the nu parameter decreases the error in the classification of unfeasible trajectories, while increasing the error in classifying feasible ones. Therefore, there is a trade-off in defining the best parametrization for the SVDD models.

It was found that the configuration with γ = 0.05 and nu = 0.15 produces the most balanced classification model, resulting in errors for classifying feasible and non-feasible trajectories of 14.98% and 15.45%, respectively. In the last iteration of the proposed approach, as depicted in Figure <ref>, the flexibility trajectories selected by the DSO or flexibility aggregator still have to be validated locally by the HEMS. In the case of a non-feasible flexibility trajectory selection, the HEMS is responsible for indicating the most similar feasible trajectory that it can provide as flexibility to the interested agent.

§ CONCLUSIONS

This paper proposed a novel concept, called multi-period flexibility forecast for LV prosumers (behind-the-meter flexibility), which combines small-scale DER flexibility (e.g., storage, EWH) and forecast uncertainty for PV and net-load time series. This new function can be embedded in a HEMS and explored in the context of smart grid and microgrid technology. The flexibility potential also accounts for a comfort constraint regarding the temperature of the water in the EWH tank and a customer-defined mode of operation of the electric battery. In a first stage, an EPSO-based generation mechanism is proposed to create a set of feasible flexibility trajectories that are robust to net-load forecast uncertainty. Then, in a second phase, it explores a support vector data description function as a “black-box” model to communicate the flexibility set to different types of users, like DSOs and flexibility aggregators. Nevertheless, the discrete representation with the flexibility trajectory set can also be integrated into the management functions of DSOs and market players.

The modified EPSO algorithm used for searching feasible flexibility trajectories showed a high diversity in producing feasible solutions and, consequently, improved the efficiency of the created SVDD models that are responsible for learning the boundaries of the HEMS feasible flexibility set. More than that, the forecast uncertainty regarding the HEMS net-load profile is fully considered by means of the fitness evaluation of the computed solutions for various base net-load short-term scenarios, which was one of the major gaps identified in the state of the art. Consequently, the proposed methodology can be used to control the robustness of the flexibility trajectory set in a probabilistic fashion (i.e., chance-constrained optimization).

It is crucial that the flexibility set, which can be transmitted to the interested stakeholders, clearly incorporates the physical and operating constraints of the DER and also the desired strategic modes of operation defined by the HEMS end-user. The trajectories resulting from the developed EPSO-based algorithm respect the problem constraints regarding DER power limits, the state of charge of the battery and the temperature of the water inside the EWH tank.
Additionally, the constraint related to the customer's preferences about the use of the battery during PV surplus periods is also respected by the generated flexibility trajectories.

Furthermore, the performance of the SVDD model was assessed for the classification of feasible and non-feasible flexibility trajectories. Several configurations regarding the different hyper-parameters of the SVDD algorithm were analyzed. The configuration found to be the most balanced regarding the trajectory classification procedure, with sigmoid kernel, γ = 0.05, and ν = 0.15, resulted in a misclassification error of around 15%. These errors were computed for a problem formulation with 16 time steps. As the total number of time steps of the problem decreases, the errors also tend to decrease. The SVDD parameters showed a significant impact on the model classification performance, imposing a trade-off between the misclassification of feasible and non-feasible trajectories.

The final outcome of this work is a forecast methodology with benefits for two groups of stakeholders: a) when integrated into DSO operational planning tools, it can explore the local flexibility potential within LV distribution grids in a time-efficient fashion while complying with customer privacy concerns; b) when used by a prosumer flexibility aggregator, it can efficiently assess the true flexibility provision capacity of the assets belonging to its portfolio.

Future work will focus on improving the efficiency of the classification procedure by the SVDD model and also on investigating new approaches to transmit the HEMS flexibility set to the interested agents, e.g., using virtual batteries derived from the flexibility trajectory set.

§ ACKNOWLEDGEMENTS

The research leading to this work is being carried out as a part of the InteGrid project (Demonstration of INTElligent grid technologies for renewables INTEgration and INTEractive consumer participation enabling INTEroperable market solutions and INTErconnected stakeholders), which received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under grant agreement No. 731218. The work of Rui Pinto was also supported in part by Fundação para a Ciência e a Tecnologia (FCT) under PhD Grant SFRH/BD/117428/2016.

§ GAUSSIAN COPULA METHOD

The Gaussian copula method <cit.> generates M temporal scenarios of solar power forecasts. First, the variable

Z_{t+k|t} = Φ^{-1}( F̂_{t+k|t}( y_{t+k} ) )

is calculated, where y_{t+k} is the observed value for lead time t+k, F̂_{t+k} the forecasted probability distribution function and Φ^{-1} the inverse of the Gaussian cumulative distribution function. This random variable Z_{t+k|t} is Gaussian distributed with zero mean and unit standard deviation.

Then, the scenarios are generated with the following process:

* Generate M random vectors Z^[m] from a multivariate Gaussian distribution (i.e., Gaussian copula) with zero mean and covariance matrix Σ_Z. The components of this random vector cover the lead times between t+1 and t+T, where T is the maximum lead time or time horizon.
* Transform Z^[m] with the following equation to obtain a random vector on the same scale as y (note that Z was a Gaussian variable):

y^[m] = F̂^{-1}( Φ( Z^[m] ) )

where F̂^{-1} is the vector of inverse forecasted distribution functions for lead times between t+1 and t+T, Φ is the distribution function of a standard normal random variable and y^[m] is the m-th solar power scenario.

The dependency structure between the lead times is modeled with a Gaussian copula that uses an exponential covariance matrix given by:

cov( Z_{t+k_1}, Z_{t+k_2} ) = exp( -| k_1 - k_2 | / ν )

where Z_{t+k_1} is the Gaussian random variable for lead time t+k_1 and where ν is the range parameter controlling the strength of the correlation of the random variables among the set of lead times. The parameter ν is determined by trial-and-error experiments using the p-variogram score as a performance metric <cit.>.
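The whole procedure fits in a short function. The sketch below is an illustrative implementation rather than the authors' code: the forecasted marginals are passed in as per-lead-time quantile functions (inverse CDFs), and toy Gaussian marginals are used in the example.

import numpy as np
from scipy.stats import norm

def copula_scenarios(inv_cdfs, M, nu, rng=None):
    """Draw M temporal scenarios; inv_cdfs[k] is the forecasted quantile
    function for lead time t+k+1 and nu is the range parameter."""
    rng = rng or np.random.default_rng()
    T = len(inv_cdfs)
    k = np.arange(T)
    cov = np.exp(-np.abs(k[:, None] - k[None, :]) / nu)    # exponential covariance
    Z = rng.multivariate_normal(np.zeros(T), cov, size=M)  # Gaussian copula draws
    U = norm.cdf(Z)                                        # Phi(Z) in (0, 1)
    return np.column_stack([inv_cdfs[t](U[:, t]) for t in range(T)])

# Toy example: Gaussian marginals whose mean ramps with the lead time.
inv_cdfs = [lambda u, m=m: norm.ppf(u, loc=m, scale=0.1)
            for m in np.linspace(0.2, 0.8, 24)]
scenarios = copula_scenarios(inv_cdfs, M=100, nu=4.0)  # shape (100, 24)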
"authors": [
"Rui Pinto",
"Ricardo Bessa",
"Manuel Matos"
],
"categories": [
"cs.NE",
"cs.AI"
],
"primary_category": "cs.NE",
"published": "20170326152634",
"title": "Multi-Period Flexibility Forecast for Low Voltage Prosumers"
} |
Hidden space reconstruction inspires link prediction in complex networks

Hao Liao^1,2, Mingyang Zhou^1,3[[email protected]], Zong-wen Wei^1,3, Rui Mao^1, Alexandre Vidmer^1,2, Yi-Cheng Zhang^2

December 30, 2023
===========================================================================================================================

Programming language definitions assign formal meaning to complete programs. Programmers, however, spend a substantial amount of time interacting with incomplete programs – programs with holes, type inconsistencies and binding inconsistencies – using tools like program editors and live programming environments (which interleave editing and evaluation). Semanticists have done comparatively little to formally characterize (1) the static and dynamic semantics of incomplete programs; (2) the actions available to programmers as they edit and inspect incomplete programs; and (3) the behavior of editor services that suggest likely edit actions to the programmer. This paper serves as a vision statement for a research program that seeks to develop these “missing” semantic foundations. Our hope is that these contributions, which will take the form of a series of simple formal calculi equipped with a tractable metatheory, will guide the design of a variety of current and future interactive programming tools, much as various lambda calculi have guided modern language designs. Our own research will apply these principles in the design of Hazel, an experimental live lab notebook programming environment designed for data science tasks. We plan to co-design the language with the editor so that we can explore concepts such as edit-time semantic conflict resolution mechanisms and mechanisms that allow library providers to install library-specific editor services.

§ INTRODUCTION

Language-aware program editors (like Eclipse or Emacs, with the appropriate extensions installed <cit.>) offer programmers a number of useful editor services. Simple examples include (1) syntax highlighting, (2) type inspection, (3) navigation to variable binding sites, and (4) refactoring services. More sophisticated editors provide context-aware code and action suggestions to the programmer (using various code completion, program synthesis and program repair techniques). Many editors also offer live programming <cit.> services, e.g. by displaying the run-time value of an expression directly within the editor as the program runs.

When these editor services encounter complete programs – programs that are well-formed and semantically meaningful (i.e. assigned meaning) according to the definition of the language in use – they can rely on a variety of well-understood reasoning principles and program manipulation techniques. For example, a syntax highlighter for well-formed programs can be generated automatically from a context-free grammar <cit.> and the remaining editor services enumerated above can follow the language's type and binding structure as specified by a standard static semantics. Live programming services can additionally follow the language's dynamic semantics.

The problem, of course, is that many of the edit states encountered by a program editor do not correspond to complete programs. For example, the programmer may be in the midst of a transient edit, or the programmer may have introduced a type error somewhere in the program.
Standard language definitions are silent about incomplete programs, so in these situations, simple program editors disable various editor services until the program is again in a complete state. In other words, useful editor services become unavailable when the programmer needs them most! More advanced editors attempt to continue to provide editor services during these incomplete states by using various ad hoc and poorly understood heuristics that rely on idiosyncratic internal representations of incomplete programs.

This paper advocates for a research program that seeks to understand both incomplete programs, and the editor services that interact with them, as semantically rich mathematical objects. This research program will broaden the scope of the “programming language theory” (PLT) tradition, which has made significant advances by treating complete programs, programming languages and logics as semantically rich mathematical objects.

In following the PLT tradition, we intend to start by developing a series of minimal calculi that build upon well-understood typed lambda calculi to capture the essential character of incomplete programs and various editor services of interest. Editor designers will be able to apply the insights gained from studying these calculi (together with insights gained from the study of human factors and other topics) to design more sophisticated program editors. Some of these editors will evolve directly from editors already in use today. In parallel with these efforts, we plan to design a “clean-slate” programming environment, Hazel, based directly on these first principles. This will allow researchers to explore the frontier of what is possible when one considers languages and editors within a common theoretical framework. Such a clean-slate design will also likely prove useful in certain educational settings, and may even some day evolve into a practical tool.

Figure <ref> shows a mockup of the Hazel user interface, which is loosely modeled after the widely adopted IPython / Jupyter lab notebook interface <cit.>. This figure will serve as our running example throughout the remainder of the paper. Each section below briefly summarizes a fundamental problem that we must confront as we seek to develop a semantic foundation for advanced program editors. For each problem, we discuss existing approaches, including those advanced by our own recent research, and suggest a number of promising future research directions that we hope that the community will pursue.

§ PROBLEM 1: SYNTACTICALLY MALFORMED EDIT STATES

Textual program editors frequently encounter edit states that are not well-formed with respect to the textual syntax of complete programs. For example, consider a programmer midway through constructing a call to a function std. There is a syntax error, so editor services that require a syntactically complete program must be disabled. This is unsatisfying. Sophisticated editors like Eclipse, and editor generators like Spoofax <cit.>, use error recovery heuristics that silently insert tokens so that the editor-internal representation is well-formed <cit.>. These heuristics are typically provided manually by the grammar designer, though certain heuristics can be generated semi-automatically by tools that are given a description of the scoping conventions of the language or of secondary notational conventions (e.g. whitespace) <cit.>.
Error recovery heuristics require guessing at the programmer's intent, so they are fundamentally ad hoc and can confuse the programmer <cit.>. A more systematic alternative approach, and the approach that we plan to explore with Hazel, is to build a structure editor – a program editor where every edit state maps onto a syntax tree, with holes representing leaves of the tree that have not yet been constructed. This representation choice sidesteps the problem of syntactically malformed edit states. Notice that in Figure <ref>, the program fragment in cell (a) contains holes, appearing as squares. This design also permits non-textual projections of expressions, e.g. the 2D projection of a matrix value in cell (b). We will return to the topic of non-textual projections below.

Structure editors have a long history. For example, the Cornell Program Synthesizer was developed in the early 1980s <cit.>. Although text-based syntax continues to predominate, there remains significant interest in structure editors today, particularly in practice. For example, Scratch is a structure editor that has achieved success as a tool for teaching children how to program <cit.>. mbeddr is an editor for a C-like language <cit.>, built using the commercially supported MPS structure editor workbench <cit.>. TouchDevelop is an editor for an object-oriented language <cit.>. Lamdu <cit.> and Unison <cit.> are open source structure editors for functional languages similar to Haskell.

Most work on structure editors has focused on the user interfaces that they present. This is important work – presenting a fluid user interface involving higher-level edit actions is a non-trivial problem, and some aspects of this problem remain open even after many years of research. There is reason to be optimistic, however, with recent studies suggesting that programmers experienced with a modern keyboard-driven structure editor (e.g. mbeddr) can be highly productive <cit.>.

Researchers have also explored various “hybrid” approaches, which incorporate holes into an otherwise textual program editor. These hybrid approaches are appealing in part because tools for interacting with text, like regular expressions and various differencing techniques used by version control systems, are already well-developed. For example, recent work on syntactic placeholders envisions a textual program editor where edit actions cause textual placeholders (a.k.a. holes) of various sorts to appear, rather than leaving the program transiently malformed <cit.>. This “approximates” the experience of a structure editor in common usage, while allowing the programmer to perform arbitrary text edits when necessary. Some programming systems, e.g. recent iterations of the Glasgow Haskell Compiler (GHC) <cit.> and the Agda proof assistant <cit.>, support a workflow where the programmer places holes manually at locations in the program that remain under construction. Another hybrid approach would be to perform error recovery by attempting to insert holes into the internal representation used by the program editor, without including them in the surface syntax exposed to programmers. If “pure” structure editing proves too rigid as we design Hazel, we will explore hybrid approaches.
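To make the tree-with-holes representation concrete, the following is a minimal sketch, in Python, of an edit state as a syntax tree whose unfilled leaves are holes; the constructors are illustrative and are not Hazel's actual internal representation.

from dataclasses import dataclass

class Exp:
    """An expression that may contain holes."""

@dataclass
class Hole(Exp):
    """An empty hole: a leaf that has not yet been constructed."""

@dataclass
class Var(Exp):
    name: str

@dataclass
class Ap(Exp):
    """Function application (curried)."""
    fn: Exp
    arg: Exp

def holes(e):
    """Enumerate the unfilled leaves of an edit state."""
    if isinstance(e, Hole):
        yield e
    elif isinstance(e, Ap):
        yield from holes(e.fn)
        yield from holes(e.arg)

# The edit state of the std example above: a call whose second argument
# has not yet been entered, i.e. std(m, <hole>).
state = Ap(Ap(Var("std"), Var("m")), Hole())
print(sum(1 for _ in holes(state)))  # 1

Every edit action (constructing a node, filling a hole) maps one such tree to another, so there is no malformed intermediate state to recover from.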
For example, the following value member definition (assuming an ML-like language) has a type inconsistency: val x : float = std(m, ColumnWise), because std has type matrix(float) * dimension -> vec(float), but the type annotation on x is float, rather than vec(float). This leaves the entire surrounding program formally meaningless according to a standard static semantics. In the presence of syntactic holes, the problem of reasoning statically about incomplete programs becomes even more interesting. Consider the incomplete expression from cell (a) in Figure <ref>. Although it is intuitively apparent that the type of this expression, after hole instantiation, could only be vec(float) (the return type of std), and that the hole must be instantiated with values of type dimension, the static semantics of complete expressions is again silent about these matters. Various heuristic approaches are implemented in Eclipse and other sophisticated tools, but the formal character of these heuristics is obscure, buried deep within their implementations. What is needed is a clear static semantics for incomplete programs, i.e. programs that contain holes (in both expressions and types), type inconsistencies, binding inconsistencies (i.e. unbound variables), and other static problems. Such a static semantics is necessary for Hazel to be able to provide type inspection services. For example, in the right column of Figure <ref>, Hazel is informing the programmer that the expression at the cursor, highlighted in blue in cell (a), must be of type dimension. Similarly, Hazel must be able to assign the incomplete function summary_stats an incomplete function type for it to be able to understand subsequent applications of summary_stats. Here, the function body has been filled out enough to be able to assign the function an incomplete function type, i.e. a function type that itself contains holes. We have investigated a subset of this problem in recent work <cit.> by defining a static semantics for a simply typed lambda calculus (with a single base type, for simplicity) extended with holes and type inconsistencies (but no binding inconsistencies). Figure <ref> defines the syntactic objects of this calculus – H-types are types with holes, and H-expressions are expressions with holes and marked type inconsistencies. We call marked type inconsistencies non-empty holes, because they mark portions of the syntax tree that remain incomplete and behave semantically much like empty holes. Types and expressions that contain no holes are complete types and complete expressions, respectively. We will not reproduce further details here. Instead, let us simply note some interesting connections with other work. First, type holes behave much like unknown types, ?, from Siek and Taha's pioneering work on gradual typing <cit.>. This discovery is quite encouraging, given that gradual typing is also motivated by a desire to make sense of one class of “incomplete program” – programs that have not been fully annotated with types. Empty expression holes have also been studied formally, e.g. as the metavariables of contextual modal type theory (CMTT) <cit.>. In particular, expression holes can have types and are surrounded by contexts, just as metavariables in CMTT are associated with types and contexts. This begins to clarify the logical meaning of a typing derivation in Hazelnut – it conveys well-typedness relative to an (implicit) modal context that extracts each expression hole's type and context. The modal context must be emptied – i.e.
the expression holes must be instantiated with expressions of the proper type in the proper context – before the expression can be considered complete. This relates to the notion of modal necessity in contextual modal logic. For interactive proof assistants that support a tactic model based directly on hole filling, the connection to CMTT and similar systems is quite salient. For example, Beluga <cit.> is based on dependent CMTT, and aspects of Idris' editor support <cit.> are based on a similar system – McBride's OLEG <cit.>. As we will discuss in Sec. <ref>, our notion of a program editor supports actions beyond hole filling. There are a number of future research directions that are worth exploring. Binding inconsistencies. In the simple calculus developed so far, all variables must be bound before they are used, including those in holes. We plan to extend Hazelnut to support reasoning when a variable is mentioned without having been bound (as is a common workflow). Dagenais and Hendren also studied how to reason statically about programs with binding errors using a constraint system, focusing on Java programs whose imports are not completely known <cit.>. They neither considered programs with holes or other type inconsistencies, nor did they formally specify their technique. However, they provide a useful starting point. Expressiveness. The simple calculus discussed above is only as expressive as the typed lambda calculus with numbers. We must scale up the semantics to handle other modern language features. Our plan is to focus initially on functional language constructs (so that Hazel can be used to teach courses that are today taught using Standard ML, OCaml or Haskell). This will include recursive and polymorphic functions, recursive types, and labeled product (record) and sum types. We also propose to investigate ML-style structural pattern matching. All of these will require defining new sorts of holes and static inconsistencies, including: (1) non-empty holes at the type level, to handle kind inconsistencies; (2) holes in label position; and (3) holes and type inconsistencies in patterns. Automation. Although we plan to explore some of these language extensions “manually,” extending our existing mechanized metatheory, we ultimately plan to automatically generate a statics for incomplete terms from a standard statics for complete terms, annotated perhaps with additional information. There is some precedent for this in recent work on the Gradualizer, which is capable of producing a gradual type system from a standard type system with lightweight annotations that communicate the intended polarities of certain constructs <cit.>. However, although it provides a good starting point, gradual type systems only consider the problem of holes in types. Our plan is to build upon existing proof automation techniques, e.g. Agda's reflection <cit.> (in part because our present mechanization effort is in Agda). § PROBLEM 3: DYNAMICALLY MEANINGLESS EDIT STATES Modern programming tools are increasingly moving beyond simple “batch” programming models by incorporating live programming features that interleave editing and evaluation <cit.>. These tools provide programmers with rapid feedback about the dynamic behavior of the program they are editing, or selected portions thereof <cit.>. Examples include lab notebooks, e.g.
the popular IPython/Jupyter <cit.>, which allow the programmer to interactively edit and evaluate program fragments organized into a sequence of cells (an extension of the read-eval-print loop (REPL)); spreadsheets; live graphics programming environments, e.g. SuperGlue <cit.>, Sketch-n-Sketch <cit.> and the tools demonstrated by Bret Victor in his lectures <cit.>; the TouchDevelop live UI framework <cit.>; and live visual and auditory dataflow languages <cit.>. In the words of Burckhardt et al. <cit.>, live programming environments “capture the imagination of today's programmers and promise to narrow the temporal and perceptive gap between program development and code execution”. Our proposed design for Hazel combines aspects of several of these designs to form a live lab notebook interface. It will use the edit state of each cell to continuously update the output value displayed for that cell and subsequent cells that depend on it. Uniquely, rather than providing meaningful feedback about the dynamic behavior only once a cell becomes complete, Hazel will provide meaningful feedback also about the dynamic behavior of incomplete cells (and thereby further tighten Burckhardt's “perceptive gap”). For example, in cell (c) of Figure <ref>, the programmer applies the incomplete function summary_stats to the matrix my_data, and the editor is still able to display a result. The value of the column-wise mean is fully determined, because evaluation does not encounter any holes, whereas the standard deviation and median computations cannot be fully evaluated. Notice, however, that the standard deviation computation does communicate the substitution of the applied argument, my_data, for the variable m. [To avoid exposing the internals of imported library functions, evaluation does not step into functions, like std, that have been imported from external libraries indicated by the row at the top of Figure <ref> (unless specifically requested, not shown).] To realize this functionality, we need a dynamic semantics for incomplete programs that builds upon our proposed static semantics. There is some precedent for this: research in gradual typing considers the dynamic semantics of programs with holes in types, and our proposed static semantics for incomplete programs borrows technical machinery from theoretical work on gradual typing <cit.>. However, we need a dynamic semantics for incomplete programs that also have expression holes (and in the future, other sorts of holes). Research on CMTT has not yet considered the problem of evaluating expressions under a non-empty metavariable context. Normally, this would violate the classical notion of Progress – evaluation can neither proceed, nor has it produced a value. We conjecture that this is resolved by (1) positively characterizing indeterminate evaluation states, those where a hole blocks progress at all locations within the expression, and (2) defining a notion of Indeterminate Progress that allows for evaluation to stop at an indeterminate evaluation state. By gradualizing CMTT and defining these notions, we believe we can achieve the basic functionality described above. There are several more applications that we aim to explore after developing these initial foundations. For example, it would be useful for the programmer to be able to select a hole that appears in an indeterminate state and be taken to its original location. There, they should be able to inspect the value of a subexpression under the cursor in the environment of the selected hole (rather than just its type).
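To make the notion of an indeterminate evaluation state concrete, the following minimal sketch (in Python; a hypothetical toy model, not Hazel's actual dynamic semantics) evaluates arithmetic expressions containing empty holes: determined subcomputations are reduced, while holes propagate, so evaluation stops at an indeterminate result instead of failing.

```python
# A minimal sketch of indeterminate evaluation (hypothetical toy model,
# not Hazel's actual dynamic semantics): holes do not abort evaluation;
# they propagate, and evaluation stops at an "indeterminate" result.
from dataclasses import dataclass
from typing import Union

@dataclass
class Num:
    value: float

@dataclass
class Hole:           # an empty expression hole, labeled by a name
    name: str

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

Expr = Union[Num, Hole, Add]

def evaluate(e: Expr) -> Expr:
    """Evaluate as far as possible; any result containing a Hole is indeterminate."""
    if isinstance(e, (Num, Hole)):
        return e
    left, right = evaluate(e.left), evaluate(e.right)
    if isinstance(left, Num) and isinstance(right, Num):
        return Num(left.value + right.value)  # fully determined subcomputation
    return Add(left, right)                   # indeterminate: a hole blocks progress

# (1 + 2) + hole reduces to Add(Num(3), Hole('u1')): the determined part
# is evaluated, and the remaining hole marks where progress must stop.
print(evaluate(Add(Add(Num(1), Num(2)), Hole("u1"))))
```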
Again, CMTT's closures provide a theoretical starting point for this debugger service. It would also be useful to be able to continue evaluation where it left off after making an edit to the program that corresponds to hole instantiation. This would require proving a commutativity property regarding hole instantiation. Fortunately, initial research on commutativity properties for holes has been conducted for CMTT, which will serve as a starting point for this work <cit.>. There are likely to be interesting new theoretical questions (and, likely, some limitations) that arise if one adds non-termination and memory effects. Relatedly, IPython/Jupyter <cit.> support a feature whereby numeric variable(s) in cells can be marked as being “interactive”, which causes the user interface to display a slider. As the slider value changes, the value of the cell is recomputed. It would be useful to be able to use the mechanisms just proposed to incrementalize parts of this recomputation. § PROBLEM 4: A CALCULUS OF EDIT ACTIONS The previous sections considered the structure and meaning of intermediate edit states. However, to understand the act of editing itself, we need a calculus of edit actions that governs transitions between these edit states. In a structure editor, the ideal would be for every possible edit state to be both statically and dynamically meaningful according to the semantics proposed in the previous two sections. This corresponds formally to proving a metatheorem about the action semantics: when the initial edit state is semantically meaningful, the edit state that results from performing an action is as well. In a textual or hybrid setting, these structured edit actions would need to be supplemented by lower-level text edit actions that may not maintain this invariant. In addition to this crucial metatheorem, which we call sensibility, there are a number of other metatheorems of interest that establish the expressive power of the action semantics, e.g. that every well-typed term can be constructed by some sequence of edit actions. In our recent work on Hazelnut, we have developed an action calculus for the minimal calculus of H-types and H-expressions described in Section <ref> <cit.>. We have mechanically proven the sensibility invariant, as well as expressivity metatheorems, using the Agda proof assistant. What remains is to investigate action composition principles. For example, it would be worthwhile to investigate the notion of an action macro, whereby functional programs could themselves be lifted to the level of actions to compute non-trivial compound actions. Such compound actions would give a uniform description of transformations ranging from the simple (like “move the cursor to the next hole to the right”) to quite complex whole program refactorings, while remaining subject to the core semantics. Using proof automation, it should be possible to prove that an action macro implements derived action logic that is admissible with respect to the core semantics. This would eliminate the possibility of “edit-time” errors. This is closely related to work on tactic languages in proof assistants, e.g. the Mtac typed macro language for Coq <cit.>, differing again in that the action language involves notions other than hole filling. § PROBLEM 5: MEANINGFUL SUGGESTION GENERATION AND RANKING The simplest edit actions will be bound to keyboard shortcuts.
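To illustrate what such structured edit actions can look like, here is a minimal sketch (hypothetical, loosely inspired by the Hazelnut action calculus rather than a reproduction of it): every action maps a syntax tree with holes to another such tree, so no sequence of actions can produce a syntactically malformed edit state.

```python
# A minimal sketch of structured edit actions (hypothetical; loosely
# modeled on the Hazelnut action calculus, not a reproduction of it).
# Every action maps a tree with holes to another tree with holes, so
# edit states are well-formed by construction.
from dataclasses import dataclass
from typing import Union

@dataclass
class Hole:
    pass

@dataclass
class Var:
    name: str

@dataclass
class App:
    fn: "Expr"
    arg: "Expr"

Expr = Union[Hole, Var, App]

def construct_app(e: Expr) -> Expr:
    """Wrap the focused expression as the function position of an
    application whose argument is a new empty hole."""
    return App(e, Hole())

def fill_var(e: Expr, name: str) -> Expr:
    """Fill the focused expression with a variable if it is an empty hole."""
    return Var(name) if isinstance(e, Hole) else e

# Build std(<hole>) starting from a bare hole: every intermediate state
# is a complete tree, never a malformed token stream.
state: Expr = Hole()
state = fill_var(state, "std")   # std
state = construct_app(state)     # std(<hole>)
print(state)
```

A faithful treatment would additionally thread a cursor (a zipper) through the tree so actions apply at the focus, and would prove the sensibility invariant for each action.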
However, Hazel will also provide suggestions to help the programmer edit incomplete programs by providing a suggestion palette, marked (d) in Figure <ref>. This palette will suggest semantically relevant code snippets when the cursor is on an empty hole. It will also suggest other relevant edit actions, including high-level edit actions implemented by imported action macros (e.g. the refactoring action in Figure <ref>). When the cursor is on a non-empty hole, indicating a static error, it will suggest bug fixes. We plan to also consider bugs that do not correspond to static errors, including those identified explicitly by the programmer, and those related to assertion failures or exceptions encountered when using the live programming features of Hazel. In these situations, we plan to build on existing automated fault localization techniques <cit.>. Note that features like these are not themselves novel. Many editors provide contextually relevant suggestions. Indeed, suggestion generation is closely related to several major research areas: code completion <cit.>, program synthesis <cit.>, and program repair <cit.>. The problem that such existing systems encounter is exactly the one we have been discussing throughout this proposal: when attempting to integrate these features into an editor, it is difficult to reason about malformed or meaningless edit states. Many of these systems therefore fall back onto tokenized representations of programs <cit.>. Because Hazel will maintain the invariant that every edit state is a syntactically and semantically meaningful formal structure, we can develop a more principled solution to the problem of generating meaningful suggestions. In particular, we will be able to prove that every action suggestion generated for a particular edit state is meaningful for that edit state. In addition to investigating the problem of populating the suggestion palette with semantically valid actions, we will consider the problem of evaluating the statistical likelihood of the suggestions. This requires developing a statistical model over actions. We will prove that this statistical model is a proper probability distribution (e.g. that it “integrates” to 1), and that it assigns zero probability to semantically invalid actions. We will also develop techniques for estimating the parameters of these distributions from a corpus of code or a corpus of edit actions. Collectively, we refer to these contributions as a statistical action suggestion semantics. Ultimately, we envision this work as being the foundation for an intelligent programmer's assistant that is able to integrate semantic information gathered from the incomplete program with statistics gathered from other programs and interactions that the system has observed to do much of the “tedious” labor of programming, without hiding the generated code from the programmer (as is the case with fully automated program synthesis techniques). § LANGUAGE-EDITOR CO-DESIGN In designing Hazel, we are intentionally blurring the line between the programming language and the program editor. This opens up a number of interesting research directions in language-editor co-design. For example, it may be possible to recast “tricky” language mechanisms, like function overloading, type classes <cit.>, implicit values, and unqualified imports, as editor mechanisms. Because we will be treating programming as a structured conversation between the programmer and the programming environment, the editor can simply ask the programmer to resolve ambiguities when they arise.
The programmer's choice is then stored unambiguously in the underlying syntax tree. Another important research direction lies in exploring how types can be used to control the presentation of expressions in the editor. In the textual setting, we have developed type-specific languages (TSLs) <cit.>. It should be possible to define an analogous notion of type-specific projections (TSPs) in the setting of a structure editor. For example, the matrix projection shown in Figure <ref> need not be built in to Hazel. Instead, the Numerics library provider will be able to introduce this logic. In particular, TSPs will define not only derived visual forms, but also derived edit actions (e.g. “add new column” for the example just given.) It should be possible to switch between multiple projections (including purely textual projections) while editing code and interacting with values. This line of research is also related to our work on active code completion, which investigated type-specific code generation interfaces in a textual program editor (Eclipse) <cit.>. Another interesting direction is that of semantic, interactive documentation. In particular, in Hazel, references to program structures that appear in documentation will be treated in the same way as other references and be subject to renaming and other operations. Documentation will also be capable of containing expressions of arbitrary types (e.g. of the Image or Link type). Together with the type-specific projection mechanism mentioned above, we hope that this will allow Hazel to function not only as a structured programming environment, but also as a structured document authoring environment! By understanding hyperlinks as variable references (in, perhaps, a different modality <cit.>), we may be able to blur the line between a module and a webpage. § CONCLUSION To summarize, there are a number of interesting semantic questions that come up in the design of program editors. We advocate a research program that studies these problems using mathematical tools previously used to study programming languages and complete programs. This work will both demystify the design of program editors and open up the doors for a number of advanced editor services. Ultimately, we envision an intelligent programmer's assistant that combines a deep semantic understanding of incomplete programs with a broad statistical understanding of common idioms to help humans author both programs and documents (as one and the same sort of artifact.) § ACKNOWLEDGMENTS We thank the SNAPL 2017 reviewers and our paper shepherd Nate Foster for the thoughtful comments and suggestions. This work is supported in part through a gift from Mozilla; by NSF under grant numbers CCF-1619282, 1553741 and 1439957; by AFRL and DARPA under agreement #FA8750-16-2-0042; and by the NSA under lablet contract #H98230-14-C-0140. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Mozilla, NSF, AFRL, DARPA or NSA. | http://arxiv.org/abs/1703.08694v1 | {
"authors": [
"Cyrus Omar",
"Ian Voysey",
"Michael Hilton",
"Joshua Sunshine",
"Claire Le Goues",
"Jonathan Aldrich",
"Matthew A. Hammer"
],
"categories": [
"cs.PL"
],
"primary_category": "cs.PL",
"published": "20170325133951",
"title": "Toward Semantic Foundations for Program Editors"
} |
[][email protected] Institut für Materialphysik im Weltraum, Deutsches Zentrum für Luft- und Raumfahrt (DLR), 82234 Weßling, Germany We study a complex plasma under microgravity conditions that is first stabilized with an oscillating electric field. Once the stabilization is stopped, the so-called heartbeat instability develops. We study how the kinetic energy spectrum changes during and after the onset of the instability and compare with the double cascade predicted by Kraichnan and Leith for two-dimensional turbulence. The onset of the instability manifests clearly in the ratio of the reduced rates of cascade of energy and enstrophy and in the power-law exponents of the energy spectra.52.27.Lw, 52.35.Ra Instability onset and scaling laws of an autooscillating turbulent flow in a complex (dusty) plasma C. Räth December 30, 2023 =================================================================================================== *Introduction Turbulence is often cited as one of the main challenges of modern theoretical physics <cit.>, despite being a long-standing subject of study. Many turbulent systems are more complex than Navier-Stokes flow in a fluid - involving, for instance, complicated ways of energy injection, transfer and dissipation <cit.>.Turbulent pulsations or its analogues occur in systems as diverse as soap films <cit.>, insect flight <cit.>, lattices of anharmonic oscillators <cit.>, Bose-Einstein condensates <cit.>, and microswimmers <cit.>. The development or, inversely, the decay of turbulence is of special interest and can be studied, for instance, in flames <cit.> or behind grids <cit.>.In general, when energy is put into a two-dimensional turbulent system, the energy spectrum splits into the inverse energy and direct enstrophy ranges – the so-called double cascade develops <cit.>. Its presence has been supported by computer simulations, but not unambiguously <cit.>. Only a few numerical simulations <cit.> and experiments <cit.> were able to simultaneously observe both cascades, which is challenging due to the large range of scales necessary to cover both the inverse and direct cascade <cit.>. Recently, it was suggested that the inverse cascade might not be robust, but depend on friction <cit.>. The evolution of the spectrum during the onset of two-dimensional forced turbulence was investigated in <cit.>. A recent topic of interest is the transition of weak wave turbulence to wave turbulence with intermittent collapses <cit.> and intermittency, i.e. strong non-Gaussian velocity fluctuations, in wave turbulence <cit.>, and so-called Janus spectra which differ in streamwise and transverse direction <cit.>.In this paper, we present the first study of developing turbulence in dusty/complex plasmas. Complex plasmas consist of microparticles embedded in a low-temperature plasma. The microparticles acquire high charges and interact with each other. They can be visualized individually and thus enable observations on the kinetic level of, for instance, vortices <cit.>, tunneling <cit.>, and channeling <cit.>. Gravity is a major force acting on the microparticles in ground-based experiments. Under its influence, the particles are located close to the sheath region of the plasma, where strong electric fields compensate for gravity, and strong ion fluxes are present. It is desirable to perform experiments with microparticles under microgravity conditions. 
Then, the microparticles are suspended in the more homogeneous plasma bulk with a weak electric field <cit.>, and the strong ion flux effects such as wake formation <cit.> are avoided. The data presented in this paper were measured using the PK-3 Plus Laboratory <cit.> in the microgravity environment on board the International Space Station. Here, large, symmetric microparticle clouds form in the plasma bulk which typically contain a small central, particle-free void caused by the interplay between the ion drag and electric forces acting on the microparticles <cit.>. Several authors have recently begun to study turbulence in complex plasmas <cit.>. Using complex plasmas to study turbulence has the advantage that the particles that carry the interaction can be visualized individually, in contrast to traditional experiments on turbulence <cit.>, in which the use of tracer particles might not be reliable <cit.>. Turbulence in complex plasmas occurs at low Reynolds numbers, which is common in viscoelastic fluids <cit.>. Furthermore, complex plasmas are usually highly dissipative. Therefore, it is advantageous to study forced turbulence. A good mechanism to induce turbulent pulsations is the self-sustained heartbeat oscillation <cit.>. This type of instability is characterized by a regularly pulsating (auto-oscillating) microparticle cloud <cit.>. The heartbeat-induced auto-oscillations pump energy into the microparticle cloud and are able to induce turbulence and effectively channel the flow in two dimensions <cit.>. In the experiment presented in this letter, we use a developing heartbeat instability and auto-oscillations to study the onset of turbulence and, specifically, the development of the kinetic energy spectrum. In addition to the channeling due to the heartbeat, we limit our analysis to particles that move within a plane for 0.2 s or longer, thereby excluding particles with a significant transverse velocity component. *Experiment details Here we present data obtained with the PK-3 Plus Laboratory on board the International Space Station <cit.>. The heart of the laboratory consists of a parallel plate plasma reactor. The experiment was performed in argon at a pressure of 9 Pa. Melamine-formaldehyde particles with a diameter of (9.2 ± 1%) μm and a mass density of 1510 kg/m^3 are inserted into the plasma via dispensers. They are illuminated with the light from a laser that is spread into a vertical plane, and their positions are recorded at a frame rate of 50 fps. The rate of damping caused by the friction between microparticles and neutral gas is γ_damp = 10.7 s^-1 <cit.>. The microparticles' velocity of sound is C_DAW = 6–7 mm/s under these conditions <cit.>. At the beginning of the experiment presented here, the particle positions are stabilized by applying a rapidly alternating additional electric field (frequency between 42 and 51 Hz). This electric field suppresses a heartbeat instability. When it is switched off, first the particles move out of the cloud center, and a void develops. This void then begins to regularly expand and contract - it undergoes the heartbeat instability <cit.>. The heartbeat in the present experiment has a frequency of f_HB = 2.81 ± 0.03 Hz <cit.>. We analyze a time series of 80 s duration. The heartbeat develops at t = 23 s (t = 0 s corresponds to an arbitrarily chosen point during the stabilization stage); see Fig. <ref>.
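As a plausibility check on these parameters, the following minimal sketch estimates the Epstein damping rate; the gas temperature (300 K) and the reflection coefficient (δ = 1.44) are assumptions not stated in the text.

```python
# Sketch: Epstein drag estimate for the quoted experiment parameters.
# Assumptions (not given in the text): room temperature T = 300 K and
# a diffuse-reflection coefficient delta = 1.44.
import math

k_B   = 1.381e-23          # Boltzmann constant, J/K
m_Ar  = 39.95 * 1.661e-27  # argon atom mass, kg
T     = 300.0              # neutral gas temperature, K (assumed)
p     = 9.0                # gas pressure, Pa
rho_p = 1510.0             # particle mass density, kg/m^3
r_p   = 0.5 * 9.2e-6       # particle radius, m
delta = 1.44               # reflection coefficient (assumed)

v_th = math.sqrt(k_B * T / m_Ar)   # thermal velocity scale of the gas atoms
gamma = delta * math.sqrt(8.0 / math.pi) * p / (rho_p * r_p * v_th)

print(f"gamma_Epstein ~ {gamma:.1f} 1/s")    # ~ 12 1/s with these assumptions
print(f"friction time ~ {1.0/gamma:.3f} s")  # ~ 0.09 s
```

With these assumptions the estimate of roughly 12 s^-1 is consistent with the quoted γ_damp = 10.7 s^-1, and the corresponding friction time scale of about 0.1 s is the reference for the transition time discussed next.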
The transition from stabilized cloud to heartbeat takes about 2 s as seen from the video images, which is approximately 20 times longer than the time scale defined by friction. A short movie showing experimental data before, during and after the transition can be found in the supplemental material <cit.>. *Velocity distribution Figure <ref> shows the evolution of the velocity distribution before and after the stabilization is switched off. During the stabilization stage, the mean horizontal velocity is approximately zero. When the stabilization is turned off (at t = 23 s), an oscillation caused by the heartbeat instability develops, causing a drift of the particles leftwards (towards negative x values), see Fig. <ref>. Details on the fully developed instability can be found in <cit.>. Here we are interested in the transition from the stabilized complex plasma to the unstable system. This transition is clearly seen in the velocity distribution of the microparticles (Fig. <ref>). The particles start drifting to the left edge of the cloud, as can be seen in the displacement of the particle track (Fig. <ref>a), in the shift of the distribution towards negative values (Fig. <ref>b), and in the nonzero mean axial velocity (Fig. <ref>c). The movement of the particles with the heartbeat manifests in an oscillating mean velocity. The standard deviation of the velocity, which is a measure of the particle kinetic temperature, also increases once the instability sets in [A series of strong short-time excitations around t = 36–57 s, caused by external shock compressions of the particle cloud, are also visible. See <cit.> for more details.]. Images visualizing the particle movement during the instability are given in the supplementary material <cit.>. *Energy spectra Next, we are going to explore the evolution of the energy spectra of the microparticles. To calculate the spectra, we follow the method presented in <cit.>: We calculate velocity maps, i.e., the average horizontal or vertical velocity of the particles as a function of position. The energy is then calculated from the squares of the Fourier transformed velocity maps by associating every energy value with the corresponding wave vector modulus k = |𝐤|. As we are interested in the transition process, we calculate the velocity maps averaging over a number of frames. A small number of frames is desirable to increase the resolution during the transition, but introduces the problem of “holes” in the velocity maps. These holes are positions in space for which no average velocity data is available, as no particles were present or detected at these positions in the frame range used to calculate the map. The presence of the holes/gaps in the data can potentially have a significant influence on calculated energy spectra <cit.>. The size and number of holes depends on the radius over which mean velocities are calculated and on the number of frames involved in averaging. We test the influence on the spectra by first calculating the spectrum from a map without holes. Then, we artificially remove the data at the position of the holes in a different velocity map and recalculate the energy spectrum. We repeat this method using various averaging radii and frame ranges to find the optimal parameters. According to this analysis, we determine an averaging radius of 5 pixels and a sliding window of 5 s width [The averaging time is smaller than, for example, the large eddy turnover time and the oscillon period τ_LE ≈ τ_osc ≈ 10 s.]
as the optimal choice to calculate velocity maps. The result is shown in Fig. <ref>, which clearly demonstrates that there is no significant influence of the holes for the selected parameters. These are thus the parameters selected to calculate the velocity maps used in the following analysis, especially Fig. <ref>, <ref>, and <ref>. *Evolution of energy spectra Figure <ref> shows energy spectra calculated in the stages before, during, and after the heartbeat instability sets in. The energy rise once the instability begins is clearly seen, as is the change in slope of the spectra. We plot the average energy spectra before and after the onset of the instability in Figure <ref>. Before the onset of the instability, the spectrum (blue crosses in Fig. <ref>) shows an exponential dependence. During the instability (red circles in Fig. <ref>), the slope of the spectrum changes at small wave numbers, and a range with E ∝ k^-5/3 develops. This is more easily visible in the compensated energy spectra in Fig. <ref>b) and <ref>c), which depict the energy multiplied by k^5/3 and k^3, respectively. There is an almost constant E × k^5/3 range in Fig. <ref>b) at k < 4 mm^-1, whereas E × k^3 in Fig. <ref>c) is almost constant at k > 4 mm^-1. The transition between the two ranges occurs at k_exc ≈ 4 mm^-1. The enstrophy spectrum is given in the supplementary material <cit.>. Next, we determine the power-law exponents n of the energy spectra in the two ranges as a function of time and plot them in Figure <ref>b). The transition when the instability sets in is well visible for both ranges (note that the obtained values of the slopes are sensitive to the exact k ranges selected, but the results remain qualitatively unchanged). *Comparison to 2d forced turbulence Our experiment is intrinsically three-dimensional (3d) in nature. Typically, in 3d turbulence, the energy spectrum follows a E ∝ k^-5/3 law over a suitable range <cit.>. It is for a two-dimensional (2d) system into which energy is injected at a length scale ℓ_exc that <cit.> and <cit.> predicted a separation of the spectrum into scales larger and smaller than ℓ_exc: For k < k_exc = 2π/ℓ_exc, energy is transferred to lower wave numbers k at zero vorticity flow by elongation and thinning of vortices <cit.>. For this inverse energy cascade, it holds that E = C ϵ^2/3 k^-5/3, where E is the energy and C is a dimensionless positive coefficient. The rate of cascade of kinetic energy per unit mass is signified by ϵ. For k > k_exc, it holds that E = C̃ η^2/3 k^-3, where C̃ is another positive constant, and η is the rate of cascade of mean-square vorticity. In this enstrophy cascade range (the direct cascade), vorticity flows from large to small spatial scales, and there is a minor energy cascade in the same direction <cit.> [There is a possible logarithmic correction to Eq. (<ref>) of the form [ln(k/k_exc)]^-1/3 <cit.>. The presence of linear friction can lead to steeper power laws than given by Eq. (<ref>) <cit.>.]. This power law is also displayed by freely decaying two-dimensional turbulence <cit.>. The double cascade structure is depicted schematically in the inset of Figure <ref>. The power-law exponents n obtained from linear fits to the spectra in the present experiment are shown in Fig. <ref>b. The average values for t ≥ 25 s are n = 3.1 ± 0.3 for (4 ≤ k ≤ 15) mm^-1 and n = 1.5 ± 0.5 for (0.3 ≤ k ≤ 4) mm^-1.
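The spectrum estimation and slope fitting described above can be summarized in a short sketch (using a random stand-in for the velocity maps; the grid size and pixel pitch are assumed values): the velocity maps are Fourier transformed, the squared amplitudes are binned over the wave vector modulus, and power-law exponents are fitted in the two ranges. Applied to the measured velocity maps, such fits yield the exponents quoted above.

```python
# Sketch of the spectrum estimation described above (hypothetical
# synthetic data; the actual analysis uses measured velocity maps).
import numpy as np

N, dx = 256, 0.05                 # grid size and pixel pitch in mm (assumed)
rng = np.random.default_rng(0)
vx = rng.standard_normal((N, N))  # stand-in velocity maps (mm/s); a flat
vy = rng.standard_normal((N, N))  # white-noise spectrum gives n ~ 0 here

# Energy density in k-space from the squared Fourier amplitudes.
Ek2d = 0.5 * (np.abs(np.fft.fft2(vx))**2 + np.abs(np.fft.fft2(vy))**2)

# Radial binning over the wave-vector modulus k = |k|, in 1/mm.
kx = np.fft.fftfreq(N, d=dx) * 2 * np.pi
kmod = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
kbins = np.linspace(kmod.min(), kmod.max(), 60)
which = np.digitize(kmod.ravel(), kbins)
E = np.array([Ek2d.ravel()[which == i].mean() for i in range(1, len(kbins))])
k = 0.5 * (kbins[1:] + kbins[:-1])

# Power-law exponent n from a linear fit of log E vs. log k in a range.
def slope(k, E, kmin, kmax):
    m = (k >= kmin) & (k <= kmax) & (E > 0)
    return -np.polyfit(np.log(k[m]), np.log(E[m]), 1)[0]

print("n (inverse range):", slope(k, E, 0.3, 4.0))
print("n (direct range): ", slope(k, E, 4.0, 15.0))
```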
Apparently, these values agree well (within the experimental uncertainties) with Kraichnan's power-law exponents n = 3 and n = 5/3. Applying Kraichnan and Leith's theory, it is possible to calculate the reduced rates of energy and enstrophy transfer, η' = C̃^3/2 η and ϵ' = C^3/2 ϵ, with equations (<ref>) and (<ref>) using the energy at two wave numbers k_1 and k_2 that fall into the two ranges. We select k_1 = 2 mm^-1 and k_2 = 6 mm^-1, which are indicated in Figure <ref> with vertical grey lines. Next, we define a parameter κ via κ^2 = η'/ϵ' = ( E k^3|_{k_2} / E k^{5/3}|_{k_1} )^{3/2}. Figure <ref>a) shows κ determined with Eq. (<ref>) as a function of time. As can be seen, κ behaves stepwise: Before the instability begins, ⟨κ⟩ = (7 ± 1) mm^-1. Around the onset of the instability, there is a strong decrease of κ during approximately 3 s; afterwards ⟨κ⟩ = (3.9 ± 0.5) mm^-1. When a double cascade is present, κ can be used to determine the excitation wave number k_exc: It is exactly at this wave number that the direct and indirect ranges are linked. Thus, at k_exc, Eqs. (<ref>) and (<ref>) can be equated, giving ϵ'^{2/3} k_exc^{-5/3} = η'^{2/3} k_exc^{-3}, and k_exc = √(η'/ϵ') = κ. The value of k_exc ≈ 4 mm^-1 fits very well with the transition between the two ranges observed in Fig. <ref>. Note that this value of k_exc agrees with, but is somewhat larger than, the previously estimated value of k_exc = 2 π f_HB / C_DAW ≈ 2.7 mm^-1 <cit.>. *Discussion and conclusion We speculate that the energy spectra that we observe indicate a double cascade as predicted by <cit.> and <cit.> for forced two-dimensional turbulence. The fact that the excitation wave number estimated with the ratio of the reduced energy and enstrophy transfer rates corresponds well to that determined directly from the energy spectrum furthermore supports this speculation. If true, this would be one of the few experiments on turbulence in which both ranges of the double cascade are observed simultaneously. However, the coincidence between the scaling exponents from Kraichnan's work and those found in the present work is surprising, as the former exponents were obtained originally under conservation laws for an inviscid 2D flow, and our system is dissipative due to particle-particle interactions. We cannot exclude that the coincidence between the power-law exponents might be caused by other effects such as a flow from the third dimension. A future, more precise investigation is warranted. A simultaneous simulation of the plasma and microparticle dynamics would be best, but is complicated by the vastly different time scales. Previous simulations on (unforced) two-dimensional turbulence in complex plasmas are promising <cit.>. Further experiments dedicated to turbulence in complex plasmas could include measuring fluxes, the decay of turbulent motion, and investigating in more detail the microparticle trajectories. *Acknowledgments We would like to thank Dr. Hubertus Thomas for useful discussions and the PK-3 Plus Team at DLR Oberpfaffenhofen, Germany, and at JIHT Moscow, Russia, for relinquishing the data to us. PK-3 Plus was funded by DLR/BMWi under the contract Nos. FKZs 50 WM 0203 and 50 WM 1203.
*References
V. Bratanov, F. Jenko, and E. Frey, PNAS 112, 15048 (2015).
T. Shakeel and P. Vorobieff, Exp. Fluids 43, 125 (2007).
Z. J. Wang, Phys. Rev. Lett. 85, 2216 (2000).
M. Peyrard and I. Daumont, Europhys. Lett. 59, 834 (2002).
I. Daumont and M. Peyrard, Chaos 13, 624 (2003).
M. T. Reeves, B. P. Anderson, and A. S. Bradley, Phys. Rev. A 86, 053621 (2012).
P. Tierno, R. Golestanian, I. Pagonabarraga, and F. Sague, J. Phys. Chem. B 112, 16525 (2008).
M. Z. Haq, J. Eng. Gas Turbines Power 128, 455 (2006).
M. S. Mohamed and J. C. LaRue, J. Fluid Mech. 219, 195 (1990).
R. H. Kraichnan, Phys. Fluids 10, 1417 (1967).
C. E. Leith, Phys. Fluids 11, 671 (1968).
M. A. Rutgers, Phys. Rev. Lett. 81, 2244 (1998).
U. Frisch, Turbulence: The Legacy of A. N. Kolmogorov (Cambridge University Press, 1995).
G. Boffetta, J. Fluid Mech. 589, 253 (2007).
G. Boffetta and S. Musacchio, Phys. Rev. E 82, 016307 (2010).
Y. X. Xia and Y. H. Qian, Phys. Rev. E 90, 023004 (2014).
C. H. Bruneau and H. Kellay, Phys. Rev. E 71, 046305 (2005).
A. von Kameke, F. Huhn, et al., Phys. Rev. Lett. 107, 074502 (2011).
G. Boffetta and R. E. Ecke, Annu. Rev. Fluid Mech. 44, 427 (2012).
R. K. Scott, Phys. Rev. E 75, 046301 (2007).
J. Paret and P. Tabeling, Phys. Rev. Lett. 79, 4162 (1997).
B. Rumpf and T. Y. Sheffield, Phys. Rev. E 92, 022927 (2015).
E. Falcon, S. Fauve, and C. Laroche, Phys. Rev. Lett. 98, 154501 (2007).
E. Falcon, Discrete Contin. Dyn. Syst. Ser. B 13, 819 (2010).
E. Falcon, S. G. Roux, and C. Laroche, EPL 90, 34005 (2010).
E. Falcon, S. G. Roux, and B. Audit, EPL 90, 50007 (2010).
C.-C. Liu, R. T. Cerbus, and P. Chakraborty, Phys. Rev. Lett. 117, 114502 (2016).
M. R. Akdim and W. J. Goedheer, Phys. Rev. E 67, 056405 (2003).
T. Bockwoldt, O. Arp, K. O. Menzel, and A. Piel, Phys. Plasmas 21, 103703 (2014).
M. Schwabe, S. Zhdanov, et al., Phys. Rev. Lett. 112, 115002 (2014).
G. E. Morfill, U. Konopka, et al., New J. Phys. 8, 7 (2006).
C.-R. Du, V. Nosenko, S. Zhdanov, H. M. Thomas, and G. E. Morfill, Phys. Rev. E 89, 021101(R) (2014).
J. Goree, G. E. Morfill, V. N. Tsytovich, and S. V. Vladimirov, Phys. Rev. E 59, 7055 (1999).
R. Kompaneets, G. E. Morfill, and A. V. Ivlev, Phys. Rev. E 93, 063201 (2016).
H. M. Thomas, G. E. Morfill, et al., New J. Phys. 10, 033036 (2008).
Y.-Y. Tsai, M.-C. Chang, and L. I, Phys. Rev. E 86, 045402 (2012).
A. Gupta, R. Ganesh, and A. Joy, Phys. Plasmas 21, 073707 (2014).
S. Zhdanov, M. Schwabe, C. Räth, H. M. Thomas, and G. E. Morfill, EPL 110, 35001 (2015).
A. Arnèodo, R. Benzi, et al., Phys. Rev. Lett. 100, 254504 (2008).
R. Monchaux, New J. Phys. 14, 095013 (2012).
V. Mathai, E. Calzavarini, J. Brons, C. Sun, and D. Lohse, Phys. Rev. Lett. 117, 024501 (2016).
A. Groisman and V. Steinberg, Nature 405, 53 (2000).
S. K. Zhdanov, M. Schwabe, et al., New J. Phys. 12, 043006 (2010).
R. Heidemann, L. Couëdel, et al., Phys. Plasmas 18, 053701 (2011).
M. Y. Pustylnik, A. V. Ivlev, et al., Phys. Plasmas 19, 103701 (2012).
P. Epstein, Phys. Rev. 23, 710 (1924).
See Supplemental Material for a movie of the microparticle motion.
See Supplemental Material for a figure visualizing the particle flow.
See Supplemental Material for an enstrophy map and spectrum.
P. Arévalo, E. Churazov, et al., Mon. Not. R. Astron. Soc. 426, 1793 (2012).
S. Chen, R. E. Ecke, G. L. Eyink, M. Rivera, M. Wan, and Z. Xiao, Phys. Rev. Lett. 96, 084502 (2006).
G. L. Eyink, Physica D 91, 97 (1996).
R. H. Kraichnan, J. Fluid Mech. 47, 525 (1971).
G. K. Batchelor, Phys. Fluids 12, II-233 (1969).
J. Laurie, U. Bortolozzo, S. Nazarenko, and S. Residori, Phys. Rep. 514, 121 (2012).
§ SUPPLEMENTAL FIGURES | http://arxiv.org/abs/1703.09070v1 | {
"authors": [
"Mierk Schwabe",
"Sergey Zhdanov",
"Christoph Räth"
],
"categories": [
"physics.plasm-ph"
],
"primary_category": "physics.plasm-ph",
"published": "20170327133942",
"title": "Instability onset and scaling laws of an autooscillating turbulent flow in a complex plasma"
} |
Localization of fermions in coupled chains with identical disorder J. Sirker December 30, 2023 ==================================================================§ INTRODUCTIONMicrowave imaging systems provide unique sensing capabilities that are advantageous for a variety of applications. Their features include the construction of three-dimensional (3D) images, the ability to penetrate optically-opaque materials, and the use of non-ionizing electromagnetic radiation—collective traits that are desirable in applications ranging from security screening to biomedical diagnostics <cit.>. Most conventional microwave systems take the form of mechanically-scanned antennas or use electronic beamforming to retrieve scene content from backscatter measurements <cit.>. Excellent imaging performance can be obtained from these systems, but these approaches often suffer from implementation drawbacks. Specifically, mechanically-scanned antennas tend to be slow and bulky, while electronic beamforming systems, such as phased arrays or electronically scanned antennas (ESAs) are complex, expensive, and often exhibit significant power draw <cit.>. Since the radiation pattern from an antenna does not vary with range in the far-field, targets distant from a static antenna must be probed with a band of frequencies to resolve depth information in addition to cross-range information. Such scenarios are common, for example, in synthetic aperture radar imaging, which makes use of an antenna scanned over a large area <cit.>. Imaging paradigms that require wide bandwidths complicate the radio frequency (RF) signal generation/detection hardware and necessitate allocation of large portions of the electromagnetic spectrum. When broadband operation is needed, the RF components—filters, power dividers, etc.—become more expensive and performance is sacrificed relative to narrowband components. Using a large frequency bandwidth also increases the possibility for interference with other electronic devices, hindering performance and complicating processing techniques. In addition, wideband systems have limited data acquisition rates (due to the extended dwell times needed to settle the phase locking circuitry). From an imaging perspective, objects exhibiting frequency dispersion (such as walls in through-wall imaging) are problematic for wideband systems <cit.>. From an implementation perspective, large bandwidths can also contribute to the complexity of system integration because they increase sensitivity to misalignment and necessitate calibration of the aperture/feed layers <cit.>. Given these numerous challenges, reliance on frequency bandwidth has become the bottleneck of many microwave imaging systems. If the requirement of bandwidth can be removed, a microwave imaging platform may experience considerable benefits in many of these features.The notion of simplifying hardware is in line with many emerging applications which require a capacity for real-time imaging within strict economic constraints. Efforts have recently been made to simplify the hardware required for high resolution imaging and instead rely more heavily on post-processing algorithms. In these approaches, termed computational imaging, spatially-distinct waveforms can be used to interrogate a scene's spatial content, with computational techniques used to reconstruct the image. This idea, which transfers much of the burden of imaging from hardware to software, has been demonstrated across the electromagnetic spectrum as well as in acoustics <cit.>. 
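The measurement model underlying this idea can be stated compactly: the scene is probed with a set of spatially-distinct patterns, stacked as rows of a sensing matrix, and the scene reflectivity is estimated in post-processing. The following generic sketch (random patterns and a simple matched-filter estimate; purely illustrative, not the system considered in this paper) shows the principle.

```python
# Generic computational-imaging sketch: measurements g = H f, where the
# rows of H are spatially-distinct probing patterns; the scene f is
# estimated in post-processing (matched filter shown; illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_meas = 100, 60
H = rng.standard_normal((n_meas, n_pix))      # stand-in for measured patterns

f_true = np.zeros(n_pix)
f_true[[20, 55, 70]] = 1.0                    # a few point scatterers
g = H @ f_true + 0.01 * rng.standard_normal(n_meas)   # noisy measurements

f_est = H.conj().T @ g                        # matched-filter reconstruction
# The brightest estimated pixels should coincide with the scatterers.
print("brightest estimated pixels:", np.argsort(np.abs(f_est))[-3:])
```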
Applying this idea to microwave imaging, the conventional dense antenna arrays (or mechanically-swept systems) can be replaced with large antennas that offer radiation pattern tailoring capabilities, e.g. metasurface antennas. Metasurface antennas often take the form of a waveguide or cavity structure that leaks out the contained wave through metamaterial elements. It has been shown that metasurface antennas can radiate spatially-diverse, frequency-indexed wavefronts in order to multiplex a scene's spatial information <cit.>. In these demonstrations, post-processing decodes the measurements to resolve an image in all three dimensions. Since these systems rely on large antennas and collect information across a sizable surface in parallel <cit.>, the architecture is favorable compared to systems that are densely populated with independent antenna modules <cit.>. Nonetheless, such systems remain particularly dependent on a wide bandwidth due to their use of frequency diversity to generate a sequence of spatially diverse radiation patterns.Alternative efforts have focused on reducing the dependence of metasurface antennas on a wide bandwidth by leveraging electronic tuning <cit.>. These dynamic metasurface antennas are composed of individually addressable metamaterial elements whose resonance frequencies can be varied by application of an external voltage. A generic example of a dynamic metasurface antenna in the form of a microstrip waveguide, similar to the one used in this work, is shown in Fig. <ref>. By tuning the resonance frequency of each element—which influences the amplitude and phase of radiation emitted from that point—spatially-distinct radiation patterns are generated which multiplex a scene's cross-range content without requiring a large frequency bandwidth. An example of a spatially-diverse far-field pattern from a dynamic metasurface aperture (taken from the device used in this work) is shown in Fig. <ref>. Using a dynamic metasurface as a transmitter in conjunction with a simple probe as a receiver, high-fidelity imaging has been experimentally demonstrated using a small bandwidth, even down to a single frequency point <cit.>. However, in these demonstrations range resolution was limited due to the small bandwidth of operation. In the extreme case of a single frequency point, imaging in the cross-range dimension was viable but range information became negligible to the point that resolving in range was considered infeasible.The seemingly indispensable bandwidth requirement, considered necessary to resolve objects in the range direction, is not a fundamental limitation arising from the physics of microwave imaging systems. This can be understood by interpreting image formation through spatial frequency components (i.e. in k-space). In this framework, sampling each spatial frequency component (k-component) can yield information in the corresponding spatial direction. Analysis in k-space has been successfully employed in synthetic aperture radar (SAR) for decades to ensure images free of aliasing. More recently, it has been used to realize phaseless imaging and to implement massive, multi-static imaging systems for security screening <cit.>.More generally, wavefronts propagating in all directions (sampling all spatial frequency components through the decomposed plane waves) can retrieve information along all directions, even when a single spectral frequency is used. 
The key to achieving this capability is to operate in the Fresnel zone of an electrically-large aperture, where electromagnetic fields exhibit variation along both the range and cross-range directions, allowing the spatial frequencies in both directions to be sampled with no bandwidth. To properly sample the required k-components at a single frequency, a set of relatively uncorrelated wavefronts must be generated spanning a wide range of incidence angles. This type of wavefront shaping has been explored from a k-space perspective in the optical regime, with the most famous examples being the early works on holography <cit.>. In these works, a monochromatic source was sculpted into complex, volumetric shapes in the near-field of a recorded hologram. At microwave frequencies, however, generating waveforms to sample all k-components in an imaging application has not been straightforward. In a recent attempt, a pair of simple antennas was mechanically scanned along separate, parallel paths and range/cross-range imaging was demonstrated using a single spectral frequency. This approach was successful, but proved to be prohibitively time consuming because every permutation of the two antennas' locations was needed <cit.>. In some quantitative imaging efforts, scanned antennas have been used in a tomographic setting to sample k-space and resolve objects in the along-range direction <cit.>. Other works have focused on single-frequency imaging, but these efforts have not been directed at obtaining range information and do not possess favorable form factors <cit.>. A more recent study has suggested using fields with orbital angular momentum (with different topological charges creating different spiral patterns emanating from the antenna) to obtain range information with a single frequency, but the proposed structure is complicated and imaging performance has so far been limited to locating simple point scatterers <cit.>. Considering the results of k-space analysis and the ideas behind computational imaging, we investigate dynamic metasurface antennas as a simple and unique means for realizing single-frequency Fresnel-zone imaging. The proposed concept is illustrated in Fig. <ref>, where an electrically-large, linear, transmit-receive aperture is formed by a pair of dynamic metasurfaces: one acting as transmitter and the other as receiver. To better visualize the unique features of this configuration, we have contrasted its sampling of k-space against the configuration used in <cit.>. Figure <ref>(b) demonstrates the configuration in <cit.> where two simple antennas were mechanically moved to sample all the possible k-components. This sampling is denoted by the rays from two arbitrary points on the transmit (Tx) and receive (Rx) antennas drawn according to the corresponding spatial frequencies at a point in the scene. Such a configuration can only sample a small portion of k-space at each measurement (i.e. each synthetic aperture position). In Fig. <ref>(c) we show the k-space sampling of the configuration proposed in this paper. Since we are assuming Fresnel zone operation, k-components in 2D (both range and one cross-range dimension) can be probed from a 1D aperture radiating a series of distinct patterns at a single frequency. In other words, larger portions of the required k-space are sampled in each measurement. In this paper, we experimentally demonstrate single-frequency imaging using dynamic metasurface antennas.
We employ the 1D dynamic metasurface antenna previously reported in <cit.>, although the proposed concepts can be extended to other 1D and 2D architectures. The 1D dynamic metasurface considered here consists of an array of metamaterial resonators, with size and spacing that are both subwavelength. Each metamaterial element can be tuned to a radiating or non-radiating state (binary operation) using voltage-driven PIN diodes integrated into the metamaterial element design <cit.>. By tuning the elements in the transmitting (and receiving) aperture we modulate the radiation patterns illuminating the scene (and being collected), thus accessing large portions of k-space simultaneously (as demonstrated in Fig. <ref>(c)). The backscatter measurements collected in this manner are then post-processed to form 2D (range/cross-range) images. To facilitate the image formation, we modify the range migration algorithm (RMA) to be compatible with this specific metasurface architecture and mode of operation <cit.>. Since this algorithm takes advantage of fast Fourier transform (FFT) techniques, it can reconstruct scenes at real-time rates <cit.>. Given that dynamic metasurface antennas are planar and easily-fabricated, the added benefit of single-frequency operation further simplifies the RF hardware requirements. The imaging system proposed in Fig. <ref>(a) therefore promises a fast, low-cost device for applications such as security screening, through-wall imaging, and biomedical diagnostics <cit.>.
We begin by reviewing the underlying mechanisms of microwave and computational imaging. Next, we outline the specific aperture that has been implemented and detail its operation in a single-frequency imaging scheme. Experimental images are shown to highlight the performance and a point spread function analysis is carried out across the field of view <cit.>. We also demonstrate that single-frequency imaging is robust to practical misalignments—a property especially attractive in the emerging application of imaging from unmanned aerial/terrestrial vehicles with imprecise tracking capabilities <cit.>.
§ IMAGING BACKGROUND
§.§ Single-Frequency Imaging
The range resolution of a microwave imaging system is typically set by the time domain resolution of the measured signal, equivalently stated in the frequency domain as <cit.> δ_range = c/(2B) where c is the speed of light and B is the frequency bandwidth. For a finite aperture with largest dimension D, the cross-range resolution, governed by diffraction, is given by <cit.> δ_cross-range = R λ_0/D for a mean operating wavelength of λ_0 and a standoff distance R. For radar-type schemes that rely on a frequency bandwidth to estimate range (or, equivalently, time-of-flight in the time domain), it is implied from (<ref>) that range resolution can only be improved by using a large frequency bandwidth.
In general, Eqs. (<ref>) and (<ref>) can be derived by analyzing the support in k-space. To better understand this relationship, we designate x as the range direction and y as the cross-range direction, as shown in Fig. <ref>. In both cases the resolution is given by <cit.> δ_x/y = 2π/Δk_x/y = 2π/(k_x/y,max - k_x/y,min). Thus, to determine the range resolution we must identify the maximum and minimum values for k_x, which itself can be written as the sum of the transmitter (denoted by subscript t) and receiver (subscript r) components as k_x = k_xt + k_xr = √( k^2 - k_yt^2) + √( k^2 - k_yr^2) where we have used the dispersion relation k_x = √(k^2 - k_y^2).
When x ≫ y, then k_yt, k_yr ≪ k, which results in the bound k_x,max = k_xt,max + k_xr,max = 2 π f_max/c + 2 π f_max/c = 4 π f_max/c. This is a hard upper bound on what k_x,max can be and it is true regardless of the system geometry. With the same x ≫ y assumption, the minimum k_x would similarly be k_x,min = 4 π f_min/c. The difference can be used to derive δ_x = c/(2B) from (<ref>), which would suggest that B > 0 is necessary for range resolution. However, when x ≈ y the k_yr and k_yt terms become non-negligible and k_x can be significantly smaller than 4 π f_min/c. In this case, k_x,min becomes heavily dependent on the system's geometry.
When operating in the Fresnel zone of the imaging aperture, the geometry will have significant effects on the generated wavefronts. For example, a transmitting source located at the edge of the aperture will illuminate points in the scene with plane waves at oblique angles, sampling both the k_x- and k_y-components. In other words, different locations along the aperture generate different plane waves, spanning both k_x and k_y based purely on spatial diversity. This can be seen when examining the case of a multiple-input multiple-output (MIMO) system and tracking the resultant k-vectors, as seen in Fig. <ref>. This case is contrasted with the single-input multiple-output (SIMO) version in Fig. <ref>(a), where it is seen that the enhancement in spatial diversity delivered by using two large apertures is substantial. Having a single transmitter (or, equivalently, receiver) only allows for an arc in k-space to be sampled with a single frequency, whereas the pairwise combinations in a MIMO array can sample an area in k-space. Even though the configuration in Fig. <ref> can be viewed as a virtual MIMO system, it is important to note that the transmit aperture is only excited by a single source and the received signal is only sampled by a single detector.
In this imaging strategy, the k-space support becomes spatially-variant because different portions of the scene are probed by different sets of k-components. This results in resolution that depends on the position in the scene, as we will show in Section <ref><ref>. The most important conclusion from this analysis is that, since the transmit-receive aperture samples both k_x- and k_y-components, it is possible to retrieve range information even at a single frequency, i.e. without any spectral bandwidth—contrary to Eq. (<ref>).
The analysis above sets guidelines for the imaging apertures and the effective sources across them. Specifically, two conditions should be met in order to sample the k_x- and k_y-components at a single spectral frequency: (i) an electrically-large transmit-receive aperture should be realized and (ii) different locations on this aperture must be sampled independently. A proof-of-concept effort was recently conducted to this end in which dipole antennas were translated to create an aperture satisfying these conditions <cit.>. These antennas were mechanically scanned through all locations on each aperture. (For each location on the synthetic Tx aperture, the receiver was scanned along the whole synthetic Rx aperture.) This experiment demonstrated the possibility of retrieving high-fidelity range/cross-range information using a single frequency. However, relying on mechanically-scanned synthesized apertures resulted in an impractical hardware implementation whose acquisition time was extremely slow.
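To make the geometric argument above concrete, the short sketch below evaluates how far k_x falls below the paraxial bound 4πf/c once oblique plane waves are available. The operating frequency matches the one used later in this paper; the set of incidence angles is our own illustrative assumption, not a measured quantity.

```python
import numpy as np

# Illustration (geometry assumed): at a single frequency, the component
# k_x = sqrt(k^2 - k_yt^2) + sqrt(k^2 - k_yr^2) shrinks well below the
# paraxial bound 4*pi*f/c once the Tx/Rx plane waves arrive obliquely.
f = 19.03e9                              # single operating frequency (Hz)
k = 2 * np.pi * f / 3e8                  # free-space wavenumber (rad/m)
theta = np.linspace(0, 60, 7)            # symmetric Tx/Rx obliquity (deg)
k_y = k * np.sin(np.radians(theta))
k_x = 2 * np.sqrt(k**2 - k_y**2)

print(f"hard upper bound 4*pi*f/c = {2 * k:.1f} rad/m")
for t, kx in zip(theta, k_x):
    print(f"theta = {t:4.1f} deg -> k_x = {kx:6.1f} rad/m")
```

With the assumed 60 degree span, Δk_x is of order k even at one frequency, which by Eq. (<ref>) already corresponds to a range resolution on the order of the wavelength scale times a geometric factor.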
An alternative would be to use an array of independent transmitters and receivers, but this approach generally scales to a complex and expensive system. Instead, computational imaging has been studied as a paradigm which offers greater simplicity and more flexibility in hardware.
§.§ Computational Imaging
Interest in computational imaging has been on the rise due to the promise of simpler and less expensive hardware <cit.>. The overarching concept behind computational imaging is to alleviate the burden of complex hardware and instead rely on post-processing to reconstruct an image. This often involves multiplexing data within the physical layer and extracting the important portions in software post-processing to build an image <cit.>. At optical and terahertz frequencies, several computational imaging systems have been proposed and experimentally demonstrated, including multiply-scattering media, random-lens arrays, and metamaterial spatial light modulators <cit.>. In the microwave regime, various metasurface designs have proven particularly useful for crafting apertures well-suited for computational imaging systems <cit.>.
Metasurface apertures, which consist of waveguides loaded with numerous subwavelength elements, have been an architecture of much interest. They allow flexibility in design and also possess a key advantage in that the feed is directly integrated within the radiative layer (in contrast to a reflect array or a transmission mask <cit.>). Due to the subwavelength spacing of the individual elements and the large overall aperture size, a vast quantity of radiation patterns can be generated with significant spatial variation. These diverse patterns are vital to the computational imaging paradigm because they illuminate a large portion of the scene with complex field structures (as exemplified in Figs. <ref>(a) and <ref>(a)), allowing for the scene's spatial content to be encoded into a series of backscatter measurements <cit.>. In order to alleviate the dependence on frequency diversity, dynamically-tunable metasurface antennas have been pursued <cit.>. These antennas have tuning components, embedded into either the radiating elements <cit.> or the feed structure <cit.>, which enable the generation of diverse patterns as a function of externally applied voltage rather than driving frequency. In this work, we utilize the 1D dynamic metasurface pictured in Fig. <ref>. In this aperture, which is further described in Section <ref>, each metamaterial element is loaded with a tunable component and is independently addressable via an applied voltage. This allows for modulation of the radiation as depicted in Fig. <ref>(c), even with a single frequency.
In <cit.>, this dynamic metasurface antenna was used as the transmitter of an imaging system while several dipole antennas were used as receivers. This configuration demonstrated high-quality imaging with a much smaller bandwidth than would be required for a frequency-diverse system <cit.>. The role of the dynamic metasurface was to generate spatially-distinct radiation patterns to resolve objects in the cross-range direction. Resolving objects in the range direction was still primarily accomplished using a finite bandwidth. However, the aperture from <cit.> can be adapted for multiplexing a 2D scene's spatial content (range and cross-range directions) with a single frequency.
We use two apertures (one as a transmitter and another as a receiver) to cover a large area and we sample the spatial points along the apertures through linear combinations of the constituting elements. The dynamic metasurface can thus provide a practical alternative to the brute force bistatic SAR method utilized in <cit.>. In other words, instead of probing the scene from each location on the imaging aperture independently (as is the case in conventional schemes <cit.> and is schematically shown in Fig. <ref>(b)), the signal is transmitted from and received at many locations on the aperture simultaneously, as depicted in Fig. <ref>(c). In a k-space interpretation, it can be said that the dynamic metasurface samples many k-components simultaneously, which are then de-embedded in post-processing. To better illustrate this concept, we review the operation of the dynamic metasurface and describe the imaging process in the next section.
§ SINGLE-FREQUENCY COMPUTATIONAL IMAGING SYSTEM
§.§ Dynamic Metasurfaces
The dynamic metasurface used in this paper is extensively examined in <cit.>. In this subsection, we briefly review its properties and describe its operation. This antenna is depicted in Fig. <ref> and consists of a 40-cm-long microstrip line that excites 112 metamaterial elements. The metamaterial elements are complementary electric-LC (cELC) resonators <cit.>, with every other element having resonance frequencies of 17.8 GHz and 19.2 GHz (covering the lower part of the K band). Each element is loaded with two PIN diodes (MACOM MADP-000907-14020W) and is individually addressed using an Arduino microcontroller which applies a DC voltage bias. The microstrip line is excited through a center feed (which generates two counter-propagating waves) and two edge feeds (each creating a single propagating wave) to ensure ample signal stimulates all of the elements. The edge feed can be seen in Fig. <ref> and the connectors for the end launch and the central launch are Southwest Microwave 292-04A-5 and 1012-23SF, respectively.
The signal injected into the aperture is distributed through power dividers and carried by the waveguide, but the structure is effectively a single-port device because a single RF signal is inserted/collected. The combination of the transmitter and receiver thus creates a single-input single-output (SISO) system, but emulates the performance of a MIMO system because tuning the elements allows us to multiplex the transmitted/received signal into a single feed. In this sense, the device retains simplicity in RF circuitry (only needing one feed, and allowing the RF components to be designed around a single frequency), but with the addition of the DC tuning circuitry. This DC circuitry, composed of shift registers, resistors, and diodes, is vastly simpler than the design that goes into creating a broadband RF source/detector circuit.
The resulting radiation pattern generated from this single-port antenna is the superposition of the contributions from each constituting element. By turning each element on or off using the PIN diodes, the radiation pattern can be modulated without active phase shifters or moving parts. By leveraging the resonant behavior of the metamaterial elements, we can change the phase and magnitude of each radiator and manipulate the radiation pattern. The phase and amplitude of each metamaterial element cannot be controlled independently, but rather are tied together through the Lorentzian form of the element's resonance <cit.>.
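As a rough illustration of this amplitude-phase coupling, the sketch below evaluates a generic Lorentzian response at the operating frequency for the two element resonances quoted above. The oscillator strength and damping rate are our own assumptions, not measured values for the cELC elements.

```python
import numpy as np

# Generic Lorentzian element response (illustrative parameters only):
# alpha(f) = F f^2 / (f0^2 - f^2 + j*gamma*f).  Shifting f0 with the PIN
# diode changes |alpha| and arg(alpha) together, never independently.
def lorentzian_response(f, f0, F=1.0, gamma=0.4e9):
    return F * f**2 / (f0**2 - f**2 + 1j * gamma * f)

f_op = 19.03e9                      # single operating frequency (Hz)
for f0 in (17.8e9, 19.2e9):         # the two element resonances in the text
    a = lorentzian_response(f_op, f0)
    print(f"f0 = {f0 / 1e9:.1f} GHz: |alpha| = {abs(a):.2f}, "
          f"phase = {np.degrees(np.angle(a)):.1f} deg")
```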
While this connection places a limitation on the available field patterns, there is nevertheless enormous flexibility in possible patterns that can be exercised in this otherwise very inexpensive device. For example, in <cit.> this structure was experimentally demonstrated to generate directive or diverse patterns depending on the applied tuning state. It is important to emphasize that this structure, while sharing some similarities with well-established leaky-wave antennas (LWAs), is distinct in both its architecture and operation <cit.>. Both structures consist of a waveguide whose guided mode is gradually leaked through many small openings. However, leaky-wave antennas are designed such that the leaked radiation from each opening has a specific phase relative to the other openings, thus forming a directive beam which steers as a function of input frequency <cit.>. This is different from the dynamic metasurface, where the metamaterial radiators are placed at subwavelength distances apart and are not located/actuated to have a specific phase profile.
It should also be noted that the dynamic metasurface is different from the microwave camera described recently in <cit.>. While both structures use diodes to turn resonant radiators on/off, the microwave camera is designed such that only one element is radiating at a time and it has a complex corporate feed. This is quite different from a dynamic metasurface, where the radiation pattern is the superposition of several elements distributed over an electrically-large aperture. Phased arrays can also be compared to the dynamic metasurface antenna, but these structures often have phase shifters and amplifiers (and sometimes a full radio module) behind each element <cit.>. While phased arrays exercise total control over the radiated fields, these structures are significantly more complicated than our dynamic aperture which has a single feed and no active components.
§.§ Imaging Setup
The imaging setup used in this work consists of two identical dynamic metasurfaces placed side by side with 20 cm in between (edge-to-edge), as shown in Fig. <ref>. One dynamic metasurface is connected to Port 1 of an Agilent E8364c vector network analyzer (VNA) and acts as the transmitter, while the other is connected to Port 2 of the VNA and acts as the receiver. The RF signal on each side is distributed with a power divider to excite all three feed points, simultaneously generating four traveling waves with equal magnitude. Each dynamic metasurface is addressed separately using distinct Arduino microcontrollers. The DC control signals sent from the Arduinos are distributed to each independent element through a series of shift registers, allowing us to turn on/off combinations of elements at any time. The microcontrollers and VNA are controlled via MATLAB. Figure <ref>(c) shows a schematic of the control circuitry and the RF paths.
To characterize one of the dynamic metasurfaces, its near-field is scanned for different tuning states using a reference antenna <cit.>. The scanned data is then used along with the surface equivalence principle to represent the aperture as a set of discrete, effective sources. These sources can then be propagated to arbitrary locations in the scene, a process completed for both the transmit aperture and the receive aperture.
The subwavelength metamaterial elements have mutual coupling which affects the overall radiation pattern, but these contributions (as well as any fabrication tolerances) are all captured in this experimental characterization and thus do not pose a problem.
During the imaging process, the VNA is set to measure a single frequency, 19.03 GHz, which provides ample SNR and diverse patterns from the aperture <cit.>. The transmitter's microcontroller enforces a tuning state a_1 and then the receiver's microcontroller sweeps through all the characterized tuning states b_q, q=1,…,Q. The transmitter then moves on to the next tuning state and repeats, continuing this process through a_p, p=1,…,P until reaching the final state a_P. The raw data, a single complex number for each tuning state pair, is inserted into a [Q × P] matrix 𝐆. For the following experiments, we will use Q=P such that the total number of measurements is P^2—e.g. for 50 transmitter tuning states and 50 receiver tuning states, 2500 measurements will be taken at a single spectral frequency. It should be noted that the amount of information that a finite aperture can probe is inherently limited by its physical size and the operating frequency. The space-bandwidth product imposes a physical limit on the number of independent free fields that can be generated from any aperture <cit.>. We have examined this consideration in previous works <cit.>, but will generally use a larger number of tuning states than is strictly necessary (P = 100, compared to the space-bandwidth product, which is 50 based on the antenna's size) to overcome potential correlation among the random patterns.
For each tuning state, a different combination of 30 elements (out of the 112) is chosen at random and set to the on state. Randomness is chosen to ensure low correlation among the radiated patterns. Though a more deliberate set of on/off elements may give a minor enhancement in performance, the random selection process has been shown to provide ample diversity in the patterns <cit.>. The choice of 30 on elements is based on the analysis completed in <cit.>, which found that this quantity ensures that the guided mode is not depleted too quickly and that sufficient power is radiated. It is worth emphasizing that this process is purely electronic and could be accomplished in real-time by using an FPGA in conjunction with a customized RF source/detector.
§.§ Image Reconstruction
In this paper, we use two different approaches to reconstruct images from the raw data. In the first approach, a sensing matrix, 𝐇, is calculated based on the fields projected from the Tx and Rx antennas. By formulating the forward model of the Tx/Rx fields with the scene's reflectivity, σ, the set of measurements, 𝐠, can be represented as 𝐠 = 𝐇σ where the quantities are discretized based on the diffraction limit <cit.>. The measurement column vector 𝐠 is composed of the entries of the matrix 𝐆 with order given by the indexing of the (a_p, b_q) states. Assuming the Born approximation (weak scatterers) <cit.>, entries of 𝐇 are written as the dot product 𝐇_ij = E_Tx^i (r_j) ·E_Rx^i (r_j). The quantities E_Tx^i and E_Rx^i are, respectively, the electric fields of the Tx and Rx dynamic metasurface apertures for the ith tuning state combination. With the two apertures, this index is essentially counting the pairs (a_1, b_1), (a_1, b_2), …, (a_1, b_P), (a_2, b_1), …, (a_P, b_P).
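A minimal sketch of this forward model is given below. The field arrays are random stand-ins for the propagated effective sources obtained from the near-field characterization, the tuning-state count is reduced to keep the example light, and the per-pixel product stands in for the dot product with scalar fields; the inversion itself is discussed next.

```python
import numpy as np

# Sketch of the forward model: each row of H is the product of the Tx and
# Rx fields over the scene grid for one tuning-state pair (a_p, b_q).
rng = np.random.default_rng(0)
P, n_pix = 30, 1643                       # P reduced from the experimental 100
E_tx = rng.normal(size=(P, n_pix)) + 1j * rng.normal(size=(P, n_pix))
E_rx = rng.normal(size=(P, n_pix)) + 1j * rng.normal(size=(P, n_pix))

# Index i counts the pairs (a_1,b_1), (a_1,b_2), ..., (a_P,b_P).
H = np.empty((P * P, n_pix), dtype=complex)
for p in range(P):
    for q in range(P):
        H[p * P + q] = E_tx[p] * E_rx[q]  # scalar stand-in for E_Tx . E_Rx

sigma = np.zeros(n_pix)
sigma[[100, 700]] = 1.0                   # two hypothetical point scatterers
g = H @ sigma                             # simulated measurement vector
```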
The r_j's are the positions in the region of interest, resulting in each row of the 𝐇 matrix representing the interrogating fields for all locations in the scene for a given tuning state, (a_p, b_q). To reconstruct the scene reflectivity, Eq. (<ref>) must be inverted to find an estimate of the scene σ̂. Since 𝐇 is generally ill-conditioned, this process must be completed with computational techniques. Here, we use the iterative, least squares solver GMRES to estimate the scene reflectivity map <cit.>.
While the method above is intuitive, it may require significant resources to calculate and store 𝐇. In the system analyzed here, this issue does not cause practical difficulty; however, such limitations become problematic as the system scales up. A more efficient approach is to use range migration techniques, which employ fast Fourier transforms to rapidly invert a signal represented in k-space <cit.>. The conventional range migration algorithm (RMA) assumes simple, dipolar antennas with well-defined phase centers—a condition not met by our dynamic metasurface aperture <cit.>. This issue has recently been addressed in <cit.> and a pre-processing technique was introduced in <cit.> to make the RMA and dynamic metasurfaces compatible. In the following, we will review this pre-processing step and then recap the salient portions of the RMA before adapting this method to the single-frequency case used here.
The required additional step lies in transforming the data obtained by a complex radiating aperture into independent spatial measurements <cit.>. In this case, where two distinct metasurface antennas are used as the transmitter and receiver, the pre-processing step must be applied independently for each metasurface antenna. For a single tuning state, the effective source distribution on the aperture plane can be represented as a [1×N] row vector with the entries being the discretized sources. By cycling through P tuning states, the effective sources can be stacked to obtain the [P×N] matrix Φ, which represents the linear combinations that make up the radiating aperture. An inverse (or pseudo-inverse when P ≠ N) is then used to diagonalize the matrix of measurements and find the contributions for each source location. The procedure is carried out through a singular value decomposition (SVD) with a truncation threshold, the latter technique eliminating subspaces that grow toward infinity and corrupt the data during inversion. The transformed signal 𝐬_0 (as processed from the raw 𝐆 matrix from Section <ref><ref>) is thus 𝐬_0 = Φ_Tx^+ 𝐆 (Φ_Rx^+)^T where Φ_Tx = 𝐔_TxΛ_Tx𝐕_Tx^* is the SVD of the transmitter's source matrix and Φ_Tx^+ = 𝐕_TxΛ_Tx^+ 𝐔_Tx^* is its pseudo-inverse. Within these equations, + stands for the pseudo-inverse operator, * is the conjugate transpose operator, and T stands for the transpose operator. The SVD entails splitting Φ into two unitary matrices, 𝐔 and 𝐕^*, and a diagonal matrix, Λ, which includes the singular values. All singular values falling below 1% of the largest singular value are truncated. The same is done for the receiver.
In essence, the process in (<ref>) mathematically reverts the SISO problem back to the simplest physical situation (the MIMO case, where each pair of positions on the Tx and Rx apertures are sampled separately <cit.>), but allows for simpler hardware to carry out the process in a faster, more practical implementation <cit.>.
It should be noted that the SVD has long been used for image inversion and information theoretic techniques <cit.>.
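A minimal implementation of this pre-processing step, assuming only what is stated above (an SVD with a 1% truncation threshold and the pseudo-inverse product), could read:

```python
import numpy as np

def truncated_pinv(Phi, rel_tol=0.01):
    """Pseudo-inverse of a source matrix; singular values below 1% of the
    largest one are truncated, as described in the text."""
    U, s, Vh = np.linalg.svd(Phi, full_matrices=False)
    s_inv = np.where(s > rel_tol * s[0], 1.0 / np.where(s == 0, 1.0, s), 0.0)
    return (Vh.conj().T * s_inv) @ U.conj().T

# Illustrative shapes: P tuning states, N discrete effective sources, Q = P.
P, N = 100, 160
rng = np.random.default_rng(1)
Phi_tx = rng.normal(size=(P, N)) + 1j * rng.normal(size=(P, N))
Phi_rx = rng.normal(size=(P, N)) + 1j * rng.normal(size=(P, N))
G = rng.normal(size=(P, P)) + 1j * rng.normal(size=(P, P))   # stand-in data

# Decode the SISO measurements into pairwise spatial samples.
s0 = truncated_pinv(Phi_tx) @ G @ truncated_pinv(Phi_rx).T
print(s0.shape)   # (N, N): one entry per (Tx position, Rx position) pair
```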
Recent applications have studied system resolution based on the SVD <cit.> and have also studied how many independent measurements can be taken in an imaging system <cit.>. More commonly, the SVD has been used to invert the matrix equation (<ref>) and directly solve for σ̂ in the inverse scattering problem <cit.>. The 𝐇 matrix, however, is significantly larger than Φ so this approach becomes cumbersome when the problem is large. When the SVD is used for image inversion, the singular values are often truncated based on the signal-to-noise ratio (SNR) of the system. In our case, the SVD truncation is acting on Φ which is not directly related to the SNR of the system. Relating the truncation level to the system's noise characteristics will be a topic of future exploration in this algorithm.
We now briefly outline the reconstruction strategy employed in <cit.> which provides physical insight into the imaging process and highlights how the spatial diversity of the aperture enables monochromatic imaging in the range dimension. It should be kept in mind that the physical measurements will be obtained as in Fig. <ref>(c), as outlined in Section <ref>, but that the mathematical framework detailed here is based on the depiction in Fig. <ref>(b).
From the SVD inversion we get the decoupled spatial contributions s_0(y_t,y_r) along the aperture. These can be represented in the spatial frequency domain by taking the Fourier transform as S_0(k_yt,k_yr) = ∑_y_t∑_y_r s_0(y_t,y_r) e^j(k_yt y_t + k_yr y_r). After completing phase compensation, invoking the stationary phase approximation, and accounting for the physically offset locations of the two apertures <cit.>, we can write this signal as S_M(k_yt,k_yr) = S_0(k_yt,k_yr) e^-j k_yt y_t0 e^-j k_yr y_r0 e^j k_y y_C e^j k_x x_C where (x_C, y_C) is the center of the scene, y_t0 and y_r0 are the central locations of the transmitting and receiving apertures, and the k-components are defined in accordance with the free space dispersion relation. Specifically, the spatial frequency components combine as k_x=k_xt+k_xr (and similarly for k_y) and k_xt=√(k^2-k_yt^2) (and similarly for k_xr). Thus, the first two exponentials account for the locations of the apertures and the last two exponentials compensate to reference the phase with respect to the center of the scene.
Since S_M(k_yt,k_yr) is defined on an irregular grid, it must be resampled onto a regular grid if we wish to use the traditional fast Fourier transform to reconstruct the spatial representation of the scene. Here we perform Stolt interpolation, a process that resamples our k-space data onto a uniform grid, and the regularly-sampled data is S_I(k_x,k_y) <cit.>. After we have completed our resampling, the image can be efficiently found as σ̂(x,y) = ℱ_2D^-1 [S_I(k_x,k_y)]. The last two steps of this reconstruction process are shown in Fig. <ref>. The signal interpolated onto a regular grid, S_I, is shown in Fig. <ref>(a). Note that all signal should fall within the outlined region because the components outside are evanescent and will not contribute to imaging. For the signal in Fig. <ref>(a), generated from two scatterers in the scene, the inversion has been completed and the result is shown in Fig. <ref>(b).
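The sketch below strings the above steps together for the monochromatic case. The aperture sampling, scene center, and the choice of zero aperture offsets are our own illustrative assumptions, and the decoded data is a random stand-in.

```python
import numpy as np
from scipy.interpolate import griddata

# Minimal monochromatic RMA sketch: FFT, phase referencing, Stolt
# interpolation, inverse FFT.  Geometry values are assumptions.
k = 2 * np.pi * 19.03e9 / 3e8            # single-frequency wavenumber
N = 64
y_t = np.linspace(-0.2, 0.2, N)          # decoded Tx sample positions (m)
s0 = np.random.randn(N, N) + 1j * np.random.randn(N, N)  # stand-in data

S0 = np.fft.fftshift(np.fft.fft2(s0))    # transform over (y_t, y_r)
k_y1 = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=y_t[1] - y_t[0]))
KYT, KYR = np.meshgrid(k_y1, k_y1, indexing="ij")

valid = (np.abs(KYT) < k) & (np.abs(KYR) < k)    # keep propagating waves
KX = np.sqrt(np.maximum(k**2 - KYT**2, 0)) + np.sqrt(np.maximum(k**2 - KYR**2, 0))
KY = KYT + KYR

# Phase referencing to the scene center (aperture offsets y_t0 = y_r0 = 0).
x_C, y_C = 0.5, 0.0
S_M = S0 * np.exp(1j * (KX * x_C + KY * y_C))

# Stolt interpolation onto a regular (k_x, k_y) grid, then 2D inverse FFT.
kx_g, ky_g = np.meshgrid(np.linspace(KX[valid].min(), KX[valid].max(), N),
                         np.linspace(KY[valid].min(), KY[valid].max(), N))
S_I = griddata((KX[valid], KY[valid]), S_M[valid], (kx_g, ky_g), fill_value=0.0)
image = np.abs(np.fft.ifft2(np.fft.ifftshift(S_I)))
```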
This reconstruction method will be the primary one utilized in this work and it will be contrasted with the GMRES backprojection method in Section <ref><ref>.
The mapping from S_M(k_yt,k_yr) to S_I(k_x,k_y) is one of the more computationally expensive steps in the reconstruction process and can become cumbersome when the problem scales up or is extended to 3D. An alternative approach is to use the non-uniform fast Fourier transform (NUFFT)—to combine the resampling and FFT steps and complete the reconstruction directly <cit.>—or to implement lookup tables to streamline the interpolation process. These schemes are beyond the scope of the present paper and will be left for future work.
§ EXPERIMENTAL DEMONSTRATION
In this section we demonstrate the single-frequency imaging approach for a set of test targets, comparing experimentally-measured and analytically-predicted resolution in both range and cross-range. Then we investigate systematic misalignments and we contrast their impact on single-frequency imaging as compared to the finite bandwidth case. Finally, we compare the modified RMA with a least squares solver, GMRES, which solves (<ref>). Through each of these studies, we comment on the impact of using only a single frequency versus a finite bandwidth.
§.§ Point Spread Function
Based on the k-space analysis above, the illuminating fields should create sufficient support to resolve objects on the order of centimeters when located within 1 m of the aperture. The k-space support can be determined analytically from the component-wise contributions of the transmitting and receiving apertures. Specifically, for the x component and for the transmitter we can find k_xt = 2 π f/c cos (θ (r̅_0-r̅_t)) for r̅_t locations on the aperture and a location r̅_0 in the scene (similarly for k_xr)—here θ is the angle of the vector between these two points, measured from the antenna's broadside. The combined spatial frequency in the x direction from the transmitter and receiver is the same as the sum in (<ref>). A similar expression can be written for k_yt and k_yr, with the cosine replaced by sine <cit.>.
With all k-components calculated across the aperture for a specific location in the scene, the maximum/minimum k_x and k_y can be found. At the specific location r̅_0=(x_0, y_0) the best-case range/cross-range (x/y) resolution is then determined according to (<ref>). Calculating explicit equations for resolution in terms of scene location does not return a clean formula and is heavily dependent on the system geometry. For our setup (described in Section <ref><ref>) the resulting resolutions have been plotted in Fig. <ref>. The plotted resolution is determined from substituting Δ k_x and Δ k_y, which are specifically based on the geometry of our system, into (<ref>). It should be noted that this resolution is an optimistic upper bound since the k-space sampling is non-uniform and only has certain areas with sufficiently dense sampling.
When a sub-diffraction-limited object is placed in the scene, we can experimentally estimate the resolution by taking the full width at half maximum (FWHM) of the resulting reconstructed image. The overall shape of this reconstruction will also identify the magnitudes and locations of the sidelobes, which would reveal any potential aliasing issues.
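A sketch of this FWHM estimate is given below; the synthetic sinc-shaped cut is a stand-in for a measured slice, with its scale chosen to match the cross-range value reported next.

```python
import numpy as np

# FWHM of a 1D cut through a reconstructed point image, with linear
# interpolation of the half-power crossings.
def fwhm(axis, cut):
    mag = np.abs(cut) / np.abs(cut).max()
    above = np.where(mag >= 0.5)[0]           # indices of the main lobe
    i0, i1 = above[0], above[-1]
    left = np.interp(0.5, [mag[i0 - 1], mag[i0]], [axis[i0 - 1], axis[i0]])
    right = np.interp(0.5, [mag[i1 + 1], mag[i1]], [axis[i1 + 1], axis[i1]])
    return right - left

y = np.linspace(-0.2, 0.2, 401)               # cross-range axis (m)
cut = np.sinc(y / 0.0224)                     # scale chosen to give ~2.7 cm
print(f"FWHM = {100 * fwhm(y, cut):.1f} cm")
```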
This point spread function (PSF) analysis is thus an important characterization of the system's overall response <cit.>.
Figure <ref>(a) shows the experimental results of a PSF analysis with a metallic rod of diameter 3 mm (a 2D point scatterer) placed at x = 0.5 m, y = 0. For this result we reconstruct with the RMA method detailed in Section <ref><ref> and use P=100 tuning states to obtain a high-quality image. Slices of this image taken along the major axes—plotted in Fig. <ref>(c)—are used to determine the cross-range and range resolution. The same process is carried out for a simulation of the same configuration and these simulated results are also plotted. The simulation is conducted with the forward model in <cit.>, which uses the near-field scans and propagates the sources to the objects with the free space Green's function—this is essentially the formulation in Eq. (<ref>).
By comparing the simulated result to the experiment, it is seen that qualitatively similar behavior appears in both reconstructions. The PSF resembles the shape of an "X", due to the geometric placement of the antennas, which is seen in both reconstructions. Additionally, the 1D cross sections show that the expected resolution is met. For the cross-range resolution, the experiment results in 2.7 cm, the numeric simulation returns 2.7 cm, and the analytic expectation gives 1.8 cm. Overall, the agreement between the three cases is reasonable. The experimental PSF shows excellent agreement with the simulated PSF and it also confirms the expectation that the analytic approach is overconfident. The range result reveals a similar trend. Experimental results show a resolution of 5.3 cm, matching the expectation of 5.3 cm from simulation and in contrast to 3.4 cm from the analytic result. This good agreement confirms that the configuration matches the expected performance.
The resolution of this imaging system varies across different locations in the region of interest. According to the analytic resolution plots in Fig. <ref> we expect the cross-range and range resolutions to be spatially-variant. A PSF analysis is also conducted experimentally at several locations to demonstrate this effect. Figure <ref> shows that the resolutions obtained from the reconstructed images are comparable to those from the analytic expressions. We also plot a reconstruction of an off-center object in Fig. <ref>(c) as an example (corresponding to the star in Fig. <ref>(a),(b)).
While the results presented in Figs. <ref>-<ref> establish that the proposed configuration is capable of resolving objects on the order of centimeters in both directions, it also raises an important question: how does the single-frequency imaging system compare to a traditional system which utilizes bandwidth? To answer this question, we examine a hypothetical situation where the same dynamic aperture, with length D, is used for both transmission and reception. In the same manner that Fig. <ref> was generated (by calculating the k-space from the geometry) we can make this comparison when bandwidth is included. The range resolution along the x-axis can then be calculated as <cit.> δ_x = (c/2)( B + f_min(1 - 1/√(1+(D/2x)^2)) )^-1 for the case of a transceiver (collocated Tx-Rx aperture) with size D and bandwidth B. We have plotted curves for the case of 0%, 10%, and 20% fractional bandwidth in Fig. <ref> for a point at (x=0.6 m, y=0). Although a larger bandwidth always enhances resolution, the resulting resolutions with and without bandwidth are fairly comparable and converge as the aperture size grows.
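Evaluating the formula above directly illustrates this convergence; the specific aperture sizes chosen below are illustrative.

```python
import numpy as np

# Range resolution of a collocated Tx-Rx aperture of size D at on-axis
# distance x, for several fractional bandwidths (relative to f_max).
c, f_max = 3e8, 19.03e9
x = 0.6                                   # on-axis point (m), as in the text

def delta_x(D, frac_bw):
    B = frac_bw * f_max
    f_min = f_max - B
    geom = f_min * (1 - 1 / np.sqrt(1 + (D / (2 * x))**2))
    return c / (2 * (B + geom))

for D in (0.4, 0.75, 1.5):                # aperture sizes in meters
    vals = ", ".join(f"{frac:.0%}: {100 * delta_x(D, frac):.1f} cm"
                     for frac in (0.0, 0.1, 0.2))
    print(f"D = {D} m -> " + vals)
```

For D = 0.75 m this reproduces roughly the factor of 2 between the 20% and 0% bandwidth cases discussed next.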
For a system defined as ultra wideband (20% fractional bandwidth), the resolution is better than the no-bandwidth case (0%) by a factor of 2× when the aperture size is 0.75 m, a feasible design for an implementation. Given the hardware benefits offered by operating at a single frequency, this factor of 2 difference can be acceptable in many applications.
§.§ Misalignment Sensitivity
The aperture-to-aperture experiment here is an example of an electrically-large system with complex radiation patterns. In a recent work it was shown that such systems can suffer severely from misalignments between the antennas <cit.>. Appreciable rotational or displacement errors can quickly degrade an image beyond recognition <cit.>. This issue, for example, necessitated special efforts to ensure precise alignment, as outlined in <cit.>, for the system examined in <cit.>. However, this sensitivity is exacerbated when a large bandwidth is used. Here we investigate the sensitivity of the single-frequency imaging system to misalignment and demonstrate that it is robust to practical misalignments.
We first obtain experimental images of objects when the antennas are ideally aligned, again with P=100 tuning states per antenna. After displacing the Tx antenna by a moderate amount (-2.54 cm = 1.6λ in x and -2.54 cm = 1.6λ in y), we take new measurements and image assuming the location of the aperture has not been changed. The results for the aligned and misaligned cases are shown in Fig. <ref>. As expected, misalignment has slightly degraded the quality of the image, but the image is still well-resolved and distortions are minor.
With a single frequency, the effects of misalignment in k-space are less pronounced than in wideband systems. Since the geometric factors alone determine the k-space sampling for the single frequency case, the result of a misalignment will be a smooth shifting with slight warping of the sampled k-components. Thus, except for extreme misalignments, we expect that the gradual shifting of the k-components will not significantly harm the imaging performance, but rather will lead to a minor displacement in the reconstructed objects' locations. Wideband systems, on the other hand, are more prone to degradation because of increased distortion in k-space. A physical interpretation of this effect would be that a given misalignment length has different electrical lengths at different portions of the frequency band of operation. Thus, the phase distortions are different for the different frequency points, leading to data that seems self-contradictory. We have plotted the sampled k-components for the single frequency case in Fig. <ref>(a) and for a case with three frequencies in Fig. <ref>(b)—the overlapping areas in the B ≠0 case lead to conflicting data which harms the reconstruction.
To show the effects of bandwidth and misalignment in imaging, we carry out simulations for the case of B = 0 and B = 1.53 GHz. When doing this, we use GMRES and solve Eq. (<ref>) so that the inversion in Eq. (<ref>) will be fair across the simulations—this will become clearer in the next section. The same maximum frequency is used in both cases, with the B > 0 simulation spanning 17.5 to 19.03 GHz with 18 points. To reduce the size of the problem (when P = 100 with 18 frequency points the 𝐇 matrix becomes large) we use a smaller number of tuning states. It was found that 18 frequencies combined with 8 tuning patterns results in a reasonable image. To make the case with B = 0 match, we reduce the number of tuning states to P = 34.
In this manner, both methods have approximately 1150 measurements.
An image of two point scatterers is reconstructed with ideal alignment for both cases in Fig. <ref>(a),(b). Misalignment of x=2 cm, y=3 cm is then introduced (compared to a wavelength of λ=1.6 cm) and the simulation is repeated. The results for B = 0 and B = 1.53 GHz are shown in Fig. <ref>(c),(d) and it is clearly seen that the case with null bandwidth has greater fidelity. The case without bandwidth only exhibits a shift in the location and some minor distortion of the sidelobes. On the other hand, for the B = 1.53 GHz result the sidelobe of the object at y=+0.2 m is larger than the target itself at y=-0.2 m. Furthermore, the object at y=-0.2 m has sidelobes that start to merge with the actual object. The alignment sensitivity exemplifies yet another advantage of the single-frequency imaging paradigm. In addition, we note that the calibration procedure for the two cases also presents a contrasting feature. Calibration of a coherent system typically involves defining a reference plane (after the RF source) which the measurements can be taken with respect to. Specifically, it is essential that the phases of the different measurements can be considered together and compensated for. In the case of a single frequency, the reference plane can be chosen arbitrarily and all of the measurements will effectively have the same phase offset. For the case with multiple frequencies, each measurement will have a different phase offset at the reference plane and therefore it is necessary to correct for the differences. We do not experimentally demonstrate this notion here, but simply comment that this consideration becomes more important when the system scales up. For example, the imager in <cit.> has 24 Rx ports and 72 Tx ports, necessitating the development of special procedures to perform this phase calibration <cit.>. A single-frequency system can operate without compensation so long as the radio has internal stability.
§.§ GMRES vs. RMA Reconstructions
Lastly, we demonstrate that the RMA method has faster processing times and higher fidelity. To this end, we compare the performance of the RMA with GMRES (as implemented by the built-in MATLAB function). GMRES was completed with 1,643 locations in the scene, with a ROI depth range of 40 cm and a ROI width of 68 cm, sampled on a 1.3 cm × 1.3 cm grid. The solver was run for 20 iterations, taking a total time of 1.46 s. Pre-computation of the 𝐇 matrix took a total of 13.2 s. On the other hand, the RMA reconstructs over a significantly larger area (defined by the sampling in k-space) in a shorter amount of time. The pre-computation (including SVD truncation of the source matrices) took 0.22 s and the direct inversion took 0.16 s. The direct computation time shows an order of magnitude speed enhancement. With the exception of the backprojection 𝐇 matrix build, all of these computations were run on a CPU with a 3.50 GHz Intel Xeon processor and 64 GB of RAM. The computation of the 𝐇 matrix was completed on an NVIDIA Quadro K5000 GPU. All of these computations can generally be implemented on a GPU which will provide further time advantages, but the GMRES is inherently iterative and therefore does not receive as large a benefit from this transition.
Further research in this area is a topic of much interest and optimized reconstruction methods geared around single-frequency imaging will be investigated in future works.
A contrast between the two methods, GMRES and RMA, lies in how the source distribution across the aperture Φ is taken into consideration. In the implementation based on Eq. (<ref>), the sources representing the dynamic metasurface aperture are considered directly in the forward propagation, whereas in the RMA method a pre-processing inversion step is required (completed for each frequency if bandwidth is used). When the number of tuning states is low (P < 20), the inversion in (<ref>) becomes less accurate and the subsequent reconstruction quality degrades. Since GMRES acts on the total 𝐇 matrix, the number of frequencies versus number of tuning states is irrelevant to its operation; thus it was the method of choice for comparing the misalignment simulations which had a variable number of frequencies (and thus a variable P). A more detailed study on the effects of SVD truncation and inversion can be found in <cit.>.
The result that we emphasize here is the superior reconstruction quality of the RMA in Fig. <ref> and the shorter processing time. For this reconstruction, P = 100 tuning states were used and a complex scene, shown in Fig. <ref>, was imaged. It is worth noting that the RMA reconstructs a significantly larger ROI (over 1 m in both range and cross-range) and that we have zoomed-in on the targets after reconstructing.
§ CONCLUSION AND DISCUSSION
We have shown that the fundamental idea of range-imaging with a single frequency is tenable with the appropriate hardware. Additionally, a dynamic metasurface aperture implementation was demonstrated that can conduct single-frequency computational imaging from a low-profile and cost-effective platform. The dynamic aperture can be rapidly reconfigured to multiplex the information in a scene with no bandwidth.
The numerous advantages of single-frequency imaging have been highlighted and demonstrated. Specifically, this architecture is attractive as a favorable hardware layer capable of obtaining high-fidelity images in the Fresnel zone. For applications involving dispersive objects, this platform may find particular advantage because the single-frequency measurements are not sensitive to dispersion. Additionally, calibration of the feed network is not required and the system is highly robust to misalignment errors. Lastly, removing the requirement for a large bandwidth means that many portions of the electromagnetic spectrum will be available and that interference from other devices may be less pronounced. The high-quality performance and unique advantages of this system position it as a powerful architecture for next-generation microwave computational imaging applications.
§ FUNDING INFORMATION
This work was supported by the Air Force Office of Scientific Research (AFOSR, Grant No. FA9550-12-1-0491). | http://arxiv.org/abs/1704.03303v2 | {
"authors": [
"Timothy Sleasman",
"Michael Boyarsky",
"Mohammadreza F. Imani",
"Thomas Fromenteze",
"Jonah N. Gollub",
"David R. Smith"
],
"categories": [
"physics.class-ph",
"physics.optics"
],
"primary_category": "physics.class-ph",
"published": "20170327221114",
"title": "Single-Frequency Microwave Imaging with Dynamic Metasurface Apertures"
} |
^1Departamento de Teoría do Sinal e Comunicacións, Universidade de Vigo, Campus Lagoas-Marcosende, Vigo, ES-36310 Spain.
^2Área de Óptica, Departamento de Física Aplicada, Universidade de Vigo, As Lagoas s/n, Ourense, ES-32004 Spain.
We analyze theoretically the Schrödinger-Poisson equation in two transverse dimensions in the presence of a Kerr term. The model describes the nonlinear propagation of optical beams in thermo-optical media and can be regarded as an analogue system for a self-gravitating self-interacting wave. We compute numerically the family of radially symmetric ground state bright stationary solutions for focusing and defocusing local nonlinearity, keeping in both cases a focusing nonlocal nonlinearity. We also analyze excited states and oscillations induced by fixing the temperature at the borders of the material. We provide simulations of soliton interactions, drawing analogies with the dynamics of galactic cores in the scalar field dark matter scenario.
42.65.Tg, 05.45.Yv, 42.65.Jx, 95.35.+d
Spatial solitons in thermo-optical media from the nonlinear Schrödinger-Poisson equation and dark matter analogues
Alvaro Navarrete^1, Angel Paredes^2, José R. Salgueiro^2 and Humberto Michinel^2
December 30, 2023
=======================================================================================================================
§ I. INTRODUCTION
Optical solitons have been a subject of intense research during the last decades <cit.>. The interplay of dispersion, diffraction and different types of nonlinearities gives rise to an amazing variety of phenomena. This has led to an ever increasing control on light propagation and to qualitative and quantitative connections to other areas of physics.
The Schrödinger-Poisson equation, sometimes also called Schrödinger-Newton or Gross-Pitaevskii-Newton equation, was initially introduced to describe self-gravitating scalar particles, as a non-relativistic approximation to boson stars <cit.>. Since then, it has found application in very disparate physical contexts. For instance, it has been used in foundations of quantum mechanics to model wavefunction collapse <cit.>, in particular situations of cold boson condensates with long-range interactions <cit.> or for fermion gases in magnetic fields <cit.>. In cosmology, it plays a crucial role for two different dark matter scenarios, namely those of quantum chromodynamic axions <cit.> and scalar field dark matter <cit.> (usually abbreviated as ψDM or SFDM, it also goes under the name of fuzzy dark matter FDM <cit.>). In nonlinear optics, it can describe the propagation of light in liquid nematic crystals <cit.> or thermo-optical media <cit.>. This broad applicability underscores the interest of theoretical analysis of different versions of the Schrödinger-Poisson equation. Moreover, it paves the way for the design of laboratory analogues of gravitational phenomena <cit.>. It is worth remarking that nonlinear optical analogues have been useful in the past to make progress in different disciplines, e.g. the generation of solitons in condensed cold atoms <cit.> or the understanding of rogue waves in the ocean <cit.>. Gravity-optics analogies have been studied for Newtonian gravity <cit.>, aspects of general relativity <cit.> and even issues related to quantum gravity <cit.>. In the present context, we envisage the possibility of mimicking certain qualitative aspects of gravitating galactic dark matter waves in the ψDM model by studying laser beams in thermo-optical media.
Although this can only be a partial analogy (the optical dynamics is 1+2 dimensional instead of 1+3), it is certainly appealing and can lead to novel views on both sides.
Consequently, this article focuses on the following system of equations:
i∂ψ/∂ z = -1/2∇^2 ψ - λ_K |ψ|^2ψ + Φψ
∇^2 Φ = C |ψ|^2
The constant C can be fixed to any value and we will choose C=2π. |ψ|^2 represents the laser beam intensity and Φ corresponds to the temperature, see section II for their precise definitions. In the ψDM scenario, |ψ|^2 is associated to the dark matter density and Φ to the gravitational potential. All coefficients in (<ref>), (<ref>) have been rescaled to bring the expressions to their canonical form and all quantities are dimensionless. This rescaling can be performed without loss of generality, see section II and the appendix for the details of the relation to dimensionful parameters. Notice that the wavefunction ψ is complex while the potential Φ is real. The coordinate z is the propagation distance in optics and plays the role of time in condensed matter waves. The Laplacian acts on two transverse dimensions d=2. We constrain ourselves to the case in which the Poissonian interaction is attractive and thus fix a positive sign for the Φψ term. The constant λ_K=± 1 is related to the focusing (defocusing) Kerr nonlinearity for positive (negative) sign. For matter waves, it is proportional to the s-wave scattering length which leads to attractive (repulsive) local interactions. We remark that in the ψDM model of cosmology, this term is sometimes absent <cit.> but there are also numerous works considering λ_K ≠ 0, e.g. <cit.>.
Equation (<ref>) has to be supplemented with boundary conditions. This is a compelling property of d=2 since it allows one to partially control the dynamics by tuning the boundary conditions, as demonstrated in <cit.>. In this aspect, there is a marked difference with the three-dimensional case d=3, in which Φ→ 0 at spatial infinity if the energy distribution |ψ|^2 is confined to a finite region. Another point that raises interest in the study of equations (<ref>) and (<ref>) is that they include competing nonlinearities, one of which is nonlocal. Different kinds of competing nonlinearities have been thoroughly studied since they can improve the tunability of optical media and lead to rich dynamics, see e.g. <cit.>. On the other hand, it is well known that nonlocal interactions can stabilize nonlinear solitary waves since they tend to arrest collapse <cit.>. Moreover, nonlocality can lead to long-range interaction between solitons <cit.> and other interesting phenomena such as, for instance, the stabilization of multipole solitons <cit.>. The existence of robust solitons for the Schrödinger-Poisson equation with d=3 has been demonstrated and their properties have been thoroughly analyzed <cit.>. However, the two-dimensional case has received less attention, although we must stress that relevant results in similar contexts without the Kerr term can be found in <cit.>. Here, we intend to close this gap by performing a systematic analysis of the basic stationary solutions of (<ref>), (<ref>) and by simulating some simple interactions between them.
In section II, we discuss the implementation of equations (<ref>), (<ref>) in nonlinear optical setups. Section III is devoted to the analysis of the simplest eigenstates of these equations, namely the spatial optical solitons.
The cases of focusing and defocusing Kerr nonlinearities are discussed in turn. In section IV, we comment on their interactions and on analogies with dark matter theories. The different sections are (mostly) independent and can be read separately. Section V summarizes our findings.
§ II. THE OPTICAL SETUP
The Schrödinger-Poisson system, without the Kerr term, describes the propagation of a continuous wave laser beam in a thermo-optical medium, see e.g. <cit.> and references therein. In this section, we briefly review the formalism in order to make the discussion reasonably self-contained and to fix notation. Then, we argue that the Kerr term can play a significant role in certain situations. Finally, we discuss some details of interest for a possible experimental implementation and the conserved quantities.
§.§ a. Formalism
The paraxial propagation equation for a beam of angular frequency ω in a medium with refractive index n=n_0 + Δ n is given by:
-2ik_0 n_0 ∂ A/∂z̃ = ∇̃^2 A + 2 Δ n k_0^2 n_0 A
where we have assumed that n_0 is a constant and neglected terms of order O( Δ n^2). We denote with a tilde the dimensionful coordinates such that the Laplacian is ∇̃^2 ≡∂_x̃^2 + ∂_ỹ^2. The electric field is E = Re[A e^i( n_0 k_0 z̃-ωt̃)], the intensity is given by I = n_0 |A|^2/(2η_0) where η_0 = √(μ_0/ϵ_0) and k_0=ω/c is the wavenumber in vacuum. In the paraxial approximation, the electromagnetic wave envelope A is assumed to vary slowly at the scale of the wavelength ∂_z^2 A ≪ k ∂_z A. We will consider a model in which Δ n is the sum of an optical Kerr term Δ n_K = n_2 I = n_2 n_0 |A|^2/(2η_0) and of a thermo-optical variation of the refractive index Δ n_T = βΔ T, where β is the thermo-optic coefficient which we assume to be positive and the temperature is defined as T=T_0 + Δ T where T_0 is a fiducial constant.
We now write down the equation determining the temperature distribution in the material. In a stationary situation, it is given by κ∇̃^2 T = q, where κ is the thermal conductivity (with units of W/m K) and q the heat-flux density of the source (power exchanged per unit volume), which comes from the absorption of the beam in the material q=-α I where α is the linear absorption coefficient of the optical medium. Thus,
κ∇̃^2 Δ T = -α n_0 |A|^2/(2 η_0)
We are taking here a two dimensional Laplacian, therefore assuming ∂^2 Δ T/∂z̃^2 ≪∇̃^2 Δ T. This is a kind of paraxial approximation for the temperature distribution motivated by the paraxial distribution of the source beam. Notice, however, that the neglected term may play a role in non-stationary situations under certain circumstances <cit.>.
We can rewrite (<ref>), (<ref>) in dimensionless form (<ref>), (<ref>) taking λ_K to be the sign of n_2, C=2π and performing the following rescaling (see the appendix):
z̃ = (2πκ n_0 |n_2| k_0/(αβ)) z, (x̃,ỹ) = √(2πκ |n_2|/(αβ)) (x,y)
A = √(η_0 αβ/(πκ n_0^2 |n_2|^2 k_0^2)) ψ, Δ T = -(α/(2πκ n_0 |n_2| k_0^2)) Φ
The power of the beam P̃=∫ I dx̃ dỹ is given by:
P̃ = P/(n_0 |n_2| k_0^2) ≡ (1/(n_0 |n_2| k_0^2)) ∫ |ψ|^2 dx dy
The limit P ≪ 1 corresponds to negligible Kerr nonlinearity, a fact that will be made explicit in section III when discussing the eigenstates.
§.§ b. The Kerr term
The goal of this paper is to perform a general analysis of the dimensionless equations (<ref>), (<ref>), which can be associated to a particular physical scenario through Eqs. (<ref>), (<ref>) or, in general, Eq. (<ref>) in the appendix. Nevertheless, it is interesting to consider a particular case in order to provide benchmark values for the physical quantities.
Thus, let us quote the values associated with the experiments in <cit.>, in which a continuous wave laser beam with a power P̃ of a few watts and λ=488 nm propagates through lead glass with κ=0.7 W/(mK), β=14 × 10^-6 K^-1, n_0=1.8, α=0.01 cm^-1 (values taken from <cit.>) and n_2=2.2× 10^-19 m^2/W <cit.>. In this setup, P ≈ 10^-4 and the Kerr term is inconsequential. In order to motivate the inclusion of this term in (<ref>), it is worth commenting on different experimental options to increase P.
The first possibility is to treat the material in order to enlarge |n_2|. This can be done by doping it with metallic nanoparticles <cit.> and/or ions <cit.>. Another option is to use a pulsed laser. The thermo-optical term, being a slow nonlinearity, mostly depends on the average power and therefore does not change much with the temporal structure of the pulse. On the other hand, the Kerr term does of course depend on the peak power. Thus, for a pulsed laser, we can use the same formalism (<ref>), (<ref>) and, compared to a continuous wave laser of the same average power, it amounts to enhancing n_2 by a factor which is approximately (τ R_r)^-1 where τ is the pulse duration and R_r the repetition rate. In fact, this kind of interplay between slow (nonlocal) and fast (local) nonlinearities has been demonstrated for spatiotemporal solitons, also called light bullets <cit.>. In our case, Eqs. (<ref>), (<ref>) do not include the temporal dispersion and would not be valid for very short pulses, but are well suited for, e.g., Q-switched lasers, where both kinds of nonlinearities can be comparable for the spatial dynamics of the beam.
§.§ c. Measurable quantities and boundary conditions
We now briefly comment on certain details of interest for an eventual experimental implementation. We do so by quoting the techniques employed in experiments of laser propagation in lead glass, see <cit.> and references therein.
The first question is what observables can actually be measured. As in <cit.>, we envisage the possibility of taking images of the laser power distribution at the entrance and exit facets. Below, we present plots of the evolution of the spatial profile of the beam at different values of the propagation distance z. They would correspond to propagation within sections of the thermo-optical material of different lengths, with the rest of conditions fixed. Notice that, in the absence of the Kerr term (λ_K=0), there is a scaling symmetry: γ^2ψ(γ x,γ y, γ^2 z), γ^2 Φ(γ x,γ y, γ^2 z) solve (<ref>), (<ref>) for any γ if they are a solution for γ=1. Thus, in this case, different adimensional propagation lengths can be studied just by changing the initial power and width of the beam, and not the medium itself. Of course, it would be of interest to measure the intensity profile within the material, but we are unaware of techniques that can achieve that goal without distorting the beam propagation itself. We are also unaware of techniques to measure the temperature distribution within the material and, thus, Δ T can be estimated through the modeling equations but can only be indirectly compared to measurements.
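A quick numerical check of the quoted estimate, assuming P̃ = 2 W for the "few watts" of beam power, follows directly from Eq. (<ref>) and the rescaling above:

```python
import numpy as np

# Checking P ~ 1e-4 for lead glass: P = n0*|n2|*k0^2 * P_phys, together
# with the transverse and longitudinal units of the rescaling.
lam    = 488e-9          # wavelength (m)
n0     = 1.8
n2     = 2.2e-19         # m^2/W
P_phys = 2.0             # beam power (W), assumed within "a few watts"
kappa, beta, alpha = 0.7, 14e-6, 1.0   # W/(m K), 1/K, 1/m (0.01 cm^-1)

k0 = 2 * np.pi / lam
P  = n0 * n2 * k0**2 * P_phys
r_unit = np.sqrt(2 * np.pi * kappa * n2 / (alpha * beta))      # transverse unit (m)
z_unit = 2 * np.pi * kappa * n0 * n2 * k0 / (alpha * beta)     # propagation unit (m)
print(f"P = {P:.1e} (dimensionless), transverse unit = {1e6 * r_unit:.2f} um, "
      f"z unit = {1e6 * z_unit:.2f} um")
```

Running it gives P of order 10^-4, consistent with the statement above that the Kerr term is inconsequential at these powers.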
The boundary value of Φ can be tuned to be different at different positions of the perimeter, giving rise to a tuning knob useful to control the laser dynamics from the exterior of the sample <cit.>. However, for simplicity, in this work we will consider Φ at the border to be constant. Non-Dirichlet boundary conditions for Φ are also feasible in experimental implementations. For instance, if the edge of the material is thermally isolated, Neumann boundary conditions are in order. It is also worth commenting on the behavior of the electromagnetic wave at the borders. If at most a negligible fraction of the optical energy reaches the boundary facets that are parallel to propagation, the boundary conditions for ψ used in computations become irrelevant. However, as we show below, capturing some aspects of dark matter evolution requires long propagations during which, unavoidably, part of the radiation does reach the border. In an actual experiment, the simplest choice is to have reflecting boundary conditions, with the interface acting as a mirror for light. Physically, for a dark matter analogue, it would be better in turn to avoid reflections and therefore to have open or absorbing boundary conditions. This can be accomplished by attaching an absorbing element with the same real part of the refractive index as the bulk material. We will come back to this question in section IV. §.§ d. Conserved quantities We close this section by mentioning the quantities conserved upon propagation in z. We assume here that ψ is vanishingly small near the boundary of the sample and that generic Dirichlet conditions hold for Φ. It is then straightforward to check from (<ref>), (<ref>) that the norm N = ∫ |ψ|^2 d^2x and the Hamiltonian: H = (1/2) ∫ (∇⃗ψ^* · ∇⃗ψ - λ_K |ψ|^4 + Φ |ψ|^2) dx dy do not change during evolution in z. § III. RADIALLY SYMMETRIC BRIGHT SOLITONS In this section, we study stationary solutions of Eqs. (<ref>), (<ref>) of the form ψ = e^{iμz} f(r), Φ = ϕ(r), where we have introduced r = √(x^2+y^2). With a usual abuse of language, we refer to these solutions as solitons. The system gets reduced to: μ f(r) = (1/2) d^2 f(r)/dr^2 + [1/(2r)] df(r)/dr + λ_K f(r)^3 - ϕ(r) f(r) 2π f(r)^2 = d^2 ϕ(r)/dr^2 + (1/r) dϕ(r)/dr Equation (<ref>) has to be supplemented with a boundary condition for ϕ(r) since, unlike in the d=3 case, it is not possible to require that lim_{r→∞} ϕ(r) = 0. We consider a boundary condition that preserves radial symmetry. Non-radially symmetric boundary conditions lead to non-radially symmetric solitons <cit.>, whose systematic study we leave for future work. Therefore, we impose ϕ(R) = ϕ_R, where ϕ_R is an arbitrary constant and R is much larger than the bright soliton radius, R ≫ r_sol. The particular values of R and ϕ_R are unimportant because for r ≫ r_sol the optical field vanishes, f(r) ≈ 0, and the potential reads ϕ = ϕ_R + P log(r/R), where P is the adimensional power defined in eq. (<ref>). Thus, changing ϕ_R and R only amounts to adding a constant to ϕ, which can be absorbed as a shift in μ, while the beam profile f(r) is unaffected. However, in order to compare the propagation constant μ of different solutions, it is important to compute them with the same convention. In our computations we take, without loss of generality, ϕ_R = 0, R = 100. For large r, the function f(r) decays as exp(-r√(2P log(r/R))).
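Decaying profiles of this radial boundary-value problem can be obtained by shooting on φ_0 = ϕ(0) + μ; a minimal sketch of our own implementation follows, with starting values near the axis taken from the regular small-r expansion quoted in the next paragraph (the bisection bracket lo, hi is a heuristic, not from the original work):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y, lam_K):
    f, fp, ph, pp = y          # f, f', varphi = phi + mu, varphi'
    return [fp, 2*ph*f - 2*lam_K*f**3 - fp/r,
            pp, 2*np.pi*f**2 - pp/r]

def shoot(f0, phi0, lam_K=1.0, r_max=15.0):
    r0 = 1e-4                  # start slightly off-axis using the expansion
    y0 = [f0 + 0.5*f0*(phi0 - lam_K*f0**2)*r0**2,
          f0*(phi0 - lam_K*f0**2)*r0,
          phi0 + 0.5*np.pi*f0**2*r0**2,
          np.pi*f0**2*r0]
    return solve_ivp(rhs, (r0, r_max), y0, args=(lam_K,),
                     rtol=1e-10, atol=1e-12)

def ground_state(f0, lam_K=1.0, lo=-5.0, hi=5.0):
    """Bisect on varphi_0: too negative -> f crosses zero; too positive ->
    f diverges without a node.  The nodeless profile sits in between."""
    for _ in range(60):
        mid = 0.5*(lo + hi)
        f = shoot(f0, mid, lam_K).y[0]
        if np.any(f < 0):
            lo = mid
        else:
            hi = mid
    return mid
```

Excited states with i nodes are found analogously by bisecting between brackets where f(r) develops i and i+1 zero crossings.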
We remark that in the case of defocusing nonlocal nonlinearity there are no decaying solutions of this kind and, accordingly, there are no bright solitons. Enforcing regularity at r=0, we find the following expansion in terms of two constants f_0 and φ_0. The propagation constant μ can be absorbed as a shift in ϕ for the computation, taking φ(r) = ϕ(r) + μ: f(r) = f_0 + (f_0/2)(φ_0 - λ_K f_0^2) r^2 + O(r^4) φ(r) = φ_0 + (π/2) f_0^2 r^2 + O(r^4) §.§ a. Focusing Kerr term We start by discussing the Kerr focusing case λ_K=1. For any positive value of f_0 there is a discrete set of values φ_{0,i}(f_0) which yield normalizable solutions with i=0,1,2,… nodes for f(r). These values can be found numerically, for instance using a simple shooting technique, as sketched above. For each solution, the value of μ is computed from the boundary condition for ϕ(R). In figure <ref>, we plot the f(r) profiles of the solutions with i=0,1,2 for different values of f_0 and λ_K=1. We compare the ground state solutions i=0 to Gaussians with the same value of f(r=0) and norm. Gaussians are a usual approximate trial function for soliton profiles in nonlocal media <cit.>, and the figure shows that, in the present case, the approximation is rather precise and becomes better for smaller f_0. §.§.§ Ground state In figure <ref>, we depict how the power and propagation constant vary within the family of ground state solutions with λ_K=1, which interpolate between the solution without Kerr term for f_0 → 0 and the one with only the Kerr term, namely the Townes profile <cit.>, for f_0 → ∞. Explicitly, for small f_0, we have P ≈ 2.40 f_0, μ ≈ 10.53 f_0 + 1.20 f_0 log f_0, where the logarithmic term is related to the boundary condition for ϕ(R). For large f_0, we have P ≈ 5.85 and μ ≈ 0.205 f_0^2 + (26.9 + 5.85 log f_0). This value of μ is the sum of the one of the Townes solution plus a term coming from the value of ϕ(r) at small r. Notice that the light intensity is confined to a small region r < r_sol where ϕ reaches its minimum. Away from it, ϕ(R) ≈ ϕ(r_sol) + P log(R/r_sol). Thus, from our boundary condition ϕ(100)=0, we find ϕ(r_sol) = -P log(100/r_sol), where r_sol ≈ f_0^{-1} for the Townes profile. Both P(f_0) and μ(f_0) are monotonically increasing functions. Thus dP/dμ is always positive within the family and all solutions are stable according to the Vakhitov-Kolokolov criterion. Clearly, this derivative approaches zero for large f_0, as expected for the Townes profile. §.§.§ Excited states and details on evolution algorithms Let us now turn to the excited states, i ≥ 1. In figure <ref>, we show an example of the disintegration of the solution with f_0=3 with two nodes. The computation of figure <ref> and the rest of the dynamical simulations displayed in this paper are performed setting boundary conditions on the sides of a square. The reason is that radial symmetry is not usually preserved by the actual pieces of thermo-optical material used in experiments <cit.>. In order to compare to the dark matter scenario, the best option would be to fix the boundary conditions to match the monopolar contribution of the Poisson field sourced by the energy distribution. In a non-cylindrical sample material, this requires generating a particular space-dependent temperature distribution at the boundary, which seems difficult to implement experimentally.
Thus, we still fix constant Φ conditions at the borders of the square. We remark that radial symmetry is broken in a soft way if the square is much larger than the beam size, and the main features of the evolution are not affected by this mismatch. Our simulations are performed using the beam propagation method to solve Eq. (<ref>) and a finite difference scheme to solve Eq. (<ref>) at each step. Convergence of the method has been checked by comparing simulations with different spacings for the spatial computational grids and steps in z. We have not found any excited state stationary solution that preserves its shape for long propagation distances. Fig. <ref> shows the initial stages of the disintegration of an unstable solution. Following the propagation to larger z, the system typically tends towards a ground state solution, surrounded by some radiation that takes the excess energy. This is similar to what we will discuss in section IV.c. The analogue behavior in three dimensions was analyzed in <cit.>. When the total power is above the Townes critical value P ≈ 5.85, the evolution can eventually result in collapse, with ψ diverging at finite z. When the beam profile becomes extremely narrow, the nonlocal term is negligible and the collapse is equivalent to that with only the focusing Kerr term. §.§.§ Oscillations around the center of the material The radially symmetric solutions we have discussed require that the center of the thermo-optical material coincides with the center of the soliton. In a first approximation, if the light beam is shifted from the center of the material, the beam profile remains unchanged but its center feels a refractive index gradient which induces an oscillation <cit.>. One can think of this phenomenon as a self-force mediated by boundary conditions. It can be understood in terms of the Green's function of the two-dimensional Laplace equation on the disk: G(ρ,θ) = (P/2) log [ (ρ̂^2 + ρ^2 - 2ρ̂ρ cos(θ - θ̂)) / (R^2 + ρ̂^2 ρ^2/R^2 - 2ρ̂ρ cos(θ - θ̂)) ] This expression solves (<ref>) for a point source of power P placed at (x̂, ŷ), namely |ψ|^2 = P δ(x-x̂) δ(y-ŷ), and satisfies the boundary condition ϕ(R)=0. We have introduced ρ^2 = x^2 + y^2 ≤ R^2, ρ̂^2 = x̂^2 + ŷ^2 < R^2, θ = arctan(y/x), and θ̂ = arctan(ŷ/x̂). The Green's function (<ref>) is computed by considering an image at a distance R^2/ρ̂ from the center of the disk. We can think of the self-force mediated by boundary conditions as the force exerted by the image on the source <cit.>. Without loss of generality, consider a soliton centered at x = x_s, with y_s = 0. Taking the gradient of the potential generated by the image, keeping only the leading terms in |x_s|/R < 1 and using the Ehrenfest theorem, we find the approximate expression: d^2 x_s/dz^2 ≈ -2P x_s/R^2 - 2P x_s^3/R^4 where x_s is the position of the soliton at propagation distance z. Notice that this restoring force induced by boundary conditions becomes negligible for large R and fixed x_s. This means that, if the electromagnetic wave is confined in a given region, the role of boundaries diminishes when the piece of optical material is taken wider. From (<ref>), it is immediate to infer a periodic motion with a period in the propagation distance Z ≈ √2 π R/√P - 3π x_i^2/(4√(2P) R) for a soliton initially at rest at x = x_i. We have performed a series of simulations, with boundary conditions ϕ = 0 set at the boundary of a square of side L, by placing solitons of different powers initially displaced a distance x_i from the center of the square.
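These runs use the beam-propagation scheme described above. For concreteness, a minimal split-step sketch of a single propagation step of Eq. (<ref>) is given below; we use Fourier kinetic steps for brevity, which implicitly impose periodic conditions on ψ, whereas the finite-difference treatment with reflecting walls described in the text would replace the FFTs:

```python
import numpy as np

def bpm_step(psi, dz, k2, lam_K, poisson_solve):
    """One split step for  i psi_z = -lap(psi)/2 - lam_K |psi|^2 psi + Phi psi:
    kinetic half-steps in Fourier space around a nonlinear/potential step,
    with Phi refreshed from |psi|^2 at every step."""
    psi = np.fft.ifft2(np.exp(-0.25j*dz*k2)*np.fft.fft2(psi))
    phi = poisson_solve(np.abs(psi)**2)     # e.g. the Dirichlet solver above
    psi = psi*np.exp(1j*dz*(lam_K*np.abs(psi)**2 - phi))
    return np.fft.ifft2(np.exp(-0.25j*dz*k2)*np.fft.fft2(psi))

# k2 = KX**2 + KY**2 on the FFT grid, e.g.
# kx = 2*np.pi*np.fft.fftfreq(n, d=h); KX, KY = np.meshgrid(kx, kx)
```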
As expected from the discussion above, there is an oscillation induced by the boundary conditions. The period of the oscillation follows the same kind of dependence on P, L and x_i as the one found analytically for the disk. From our numerics, we infer that the period in z of the oscillation is: Z ≈ 3.38 L/√P - 8.2 x_i^2/(√P L) In figure <ref> we depict an example of the oscillation found by numerically solving the evolution equations, together with a comparison of Eq. (<ref>) with the computed value of Z for several cases. §.§ b. Defocusing Kerr term We now turn to the case of defocusing Kerr nonlinearity, λ_K=-1. As in the previous case, for any f_0>0 there is a discrete set of values φ_{0,i}(f_0) which yield normalizable solutions with i=0,1,2,… nodes for f(r). Again, the ground state is always stable, while we have not found stable excited solutions. Figure <ref> shows several examples. Unsurprisingly, the solutions with f_0=0.1 are almost indistinguishable from those in figure <ref>, since the f_0 → 0 limit corresponds to negligible Kerr term. On the other hand, for large f_0, the difference becomes apparent. Curiously, in the intermediate case with f_0=1, the numerical solution is remarkably close to a Gaussian. Figure <ref> represents the power and propagation constant as a function of f_0 for the family of ground state solutions with λ_K=-1. For small f_0, we have P ≈ 2.40 f_0, μ ≈ 10.53 f_0 + 1.20 f_0 log f_0, as in the focusing case, since the Kerr term is unimportant in this limit. For large f_0, both nonlinearities play a decisive role. We have found by directly fitting the numerical data that both P and μ grow quadratically: P ≈ 1.27 f_0^2, μ ≈ 5.88 f_0^2. A remarkable feature is that the size of the soliton solutions tends to a constant for large f_0. More precisely, the full width at half maximum of the distribution, defined by f(r = fwhm/2) = f_0/√2, asymptotes to lim_{f_0→∞} fwhm ≈ 1.21, see the inset of figure <ref>. Regarding oscillations mediated by boundary conditions, the dynamics with λ_K=-1 is rather similar to the one with focusing Kerr term, because this effect is linked to the nonlocal nonlinearity, while the local nonlinear term only affects the shape of the soliton itself and is hardly related to its overall motion. § IV. SOLITON INTERACTIONS AND DARK MATTER ANALOGUES In this section, we provide several examples of the dynamics of interacting solitons by numerically analyzing Eqs. (<ref>), (<ref>). These simulations are relevant for the propagation of light in thermo-optical media, as described in section II. Moreover, they can be considered as analogues of galactic dark matter dynamics. In the context of scalar field dark matter (ψDM), soliton interactions have been studied in different situations, including head-on collisions <cit.>, dipole-like structures <cit.> and soliton mergers <cit.>. These works deal with Eqs. (<ref>), (<ref>) with one more transverse dimension, d=3, but, as we will show, there are many qualitative similarities with the d=2 case. We remark that the dynamics of soliton collisions is of great importance in cosmology, since the wave-like evolution of ψDM provides different outcomes from those of particle-like dark matter scenarios. Thus, it may furnish a way of improving our understanding of the nature and dynamics of dark matter, allowing us to discriminate between different scenarios and to make progress in one of the most important open problems of fundamental physics.
For instance, wave interference at a galactic scale can induce large offsets between dark matter distributions and stars that might explain some recent puzzling observations <cit.>. Some technical details of the numerical methods were briefly explained for Fig. <ref> above. In order to generate initial conditions, we use the f(r) profiles of the eigenstates discussed in section III: ψ|_{z=0} = ∑_{i=1}^{n_sol} f_i(|x - x_i|) e^{i(v_i · x + ϕ_i)} where the sum runs over a number n_sol of initial solitons with initial positions x_i, phases ϕ_i and “velocities” v_i. From now on, for dx_s/dz we use the word velocity, which is appropriate for matter waves. In the optical setup, this quantity is of course the angle of propagation with respect to the axis. Boldface characters represent two-dimensional vectors, x = (x,y), etc. In the examples displayed below, boundary conditions Φ=0 are set at the perimeter of a square of side L=20 and center at x=0. §.§ a. Head-on collisions We start by analyzing the encounter of two solitons of the same power with equal phases. In the cosmological three-dimensional setup, this kind of problem has been addressed in <cit.>. Qualitative results are very similar in the present d=2 optical setup. What happens during evolution largely depends on the initial relative velocity, as we describe below. For large velocities, where by large we mean that |v| is larger than the inverse size of the initial structures, the solitons cross each other. During the collision, a typical interference fringe pattern is produced. Moreover, some tiny fraction of the energy is radiated away from the solitons. We stress that we are using the word soliton in a loose sense, since the theory is not integrable and therefore, even if the solitary waves cross each other, they do not come out undistorted. This behavior is depicted in figure <ref>. After the crossing, the non-local attraction and the self-interaction mediated by boundary conditions pull the solitons towards each other again and, depending on the particular case, this might result in a second collision. For intermediate velocities, the solitons also cross each other, but the associated wavelength is too small to generate a pattern with multiple fringes. Figure <ref> shows an example. For small velocities, the solitons merge in a fashion similar to subsection c below. §.§ b. Dipole-like configuration As is well known in different contexts, solitons in phase opposition repel each other. Possible consequences of this fact for galactic clusters were explored in <cit.>. In order to illustrate the fact, we consider a dipole-like structure: two solitons of the same size and power in phase opposition bounce back from each other. Due to the nonlocal nonlinearity, they attract each other until they bounce back again, and so on (see <cit.> for similar considerations in the dark matter context). There is a competition between the attraction due to the Poisson term and the repulsion due to destructive wave interference. The Kerr term contributes to attraction or repulsion depending on its sign. At this point, it is important to comment on the boundary conditions for the wave, since for long propagations part of the electromagnetic energy can reach the boundary of the domain. We will consider absorbing conditions, which are the best suited for cosmological analogues. They can be implemented by introducing at the borders a material with an imaginary part of the refractive index, but with the same real part as the one of the bulk.
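Before turning to the absorber, we note for completeness how the initial data of the superposition above are assembled in practice; a minimal sketch, where each f is an interpolated radial profile from section III (names and the example values are ours):

```python
import numpy as np

def initial_state(X, Y, solitons):
    """Superpose radial profiles with positions, velocities and phases:
    solitons = [(f, (x0, y0), (vx, vy), phase), ...]."""
    psi = np.zeros_like(X, dtype=complex)
    for f, (x0, y0), (vx, vy), ph in solitons:
        r = np.hypot(X - x0, Y - y0)
        psi = psi + f(r)*np.exp(1j*(vx*X + vy*Y + ph))
    return psi

# e.g. a dipole-like pair: two equal solitons at rest in phase opposition
# psi0 = initial_state(X, Y, [(f, (-4, 0), (0, 0), 0.0),
#                             (f, ( 4, 0), (0, 0), np.pi)])
```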
Mathematically, such an absorbing boundary can be modeled by introducing a “sponge”, as discussed for the three-dimensional dark matter case in <cit.>. This amounts to adding a term: -(i/4) V_0 [4 - tanh((x+γ)/δ) + tanh((x-γ)/δ) - tanh((y+γ)/δ) + tanh((y-γ)/δ)] ψ to the right hand side of (<ref>). This is a smooth version of a step function <cit.>: γ fixes the position of the step and δ controls how steep the step is. In our simulations we fix γ=0.4, δ=0.2, V_0=1. Figure <ref> shows an example of the bounces of a dipolar configuration, comparing the evolution for cases with λ_K = ±1 and the same soliton power. It is shown that, eventually, the bouncing pattern becomes unstable and the solitons merge. As one could expect, this happens earlier for focusing Kerr nonlinearity, λ_K=+1. Notice, however, that the values of z reached in fig. <ref> are much larger than those of the other figures of this section. This means that the instabilities only become manifest for long propagations. In fig. <ref>, we also plot the z-evolution of the norm N = ∫|ψ|^2 dx dy, which decreases when the radiation approaches the boundary, because of the absorbing condition described above. This is the analogue of having scalar radiation flowing away from the region of interest in ψDM. The figure shows that the decrease starts when the instability breaks the initial solitons. The system eventually evolves into a pseudo-stationary state, similar to the one described in the next subsection. We have not found any stable dipolar structure, but this aspect might deserve further research. §.§ c. Soliton mergers In the ψDM model of cosmology, the outcome of the merging of solitons has important consequences for the galactic dark matter distributions. In <cit.>, it was shown that the final configuration of such a process is a new, more massive soliton (to be identified with a galactic core) surrounded by an incoherent distribution of matter with its density decreasing with a power law. The gravitational attraction prevents this halo from being radiated away. Here, we will show that a very similar behavior takes place in d=2, paving the way for optical experiments partially mimicking galactic mergers. Figure <ref> depicts an example of the initial stages of the evolution of four equal merging solitons. The Kerr nonlinearity has been taken to be focusing and the total power to be below the Townes threshold in order to avoid a possible collapse. The solitons rapidly coalesce and form a peaked narrow structure with a faint distribution of power around it. Continuing the numerical evolution of Eqs. (<ref>), (<ref>) to large values of z, a pseudo-stationary situation is attained, with an oscillation about a soliton profile at its center and incoherent radiation around it. This is shown in figure <ref>, where we depict an average in z of the density profile as a function of the distance to the center. We do the computation for the absorbing boundary condition presented in section IV.b. We also include an example with defocusing Kerr term, λ_K=-1. The graphs show that, roughly speaking, the merging results in a soliton surrounded by a halo of energy trapped by “gravity”. Part of the initial energy has been radiated away during the merging of the solitonic structures. The qualitative agreement with the results of <cit.> is apparent. The force mediated by boundary conditions for Φ affects the result but does not change the qualitative picture with respect to the three-dimensional case.
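To close this section, we sketch the sponge term used in the runs above; over a step dz, the -iWψ contribution amounts to multiplying ψ by a smooth damping factor (grid arrays X, Y as in the earlier sketches):

```python
import numpy as np

def sponge(X, Y, gam=0.4, delt=0.2, V0=1.0):
    """Smooth absorber W(x, y): ~0 in the bulk, ~V0 beyond the steps at
    |x|, |y| ~ gam, with width delt, following the tanh expression above."""
    g = (4 - np.tanh((X + gam)/delt) + np.tanh((X - gam)/delt)
           - np.tanh((Y + gam)/delt) + np.tanh((Y - gam)/delt))
    return 0.25*V0*g

# applied once per propagation step:
# psi *= np.exp(-dz*sponge(X, Y))
```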
§ V. CONCLUSIONS We have analyzed the Schrödinger-Poisson equations (<ref>), (<ref>) in two dimensions (d=2) in the presence of a Kerr term. We have limited the discussion to a positive sign for the Poisson term. The model is relevant to describe laser propagation in thermo-optical materials, among other physical systems. With radially symmetric boundary conditions for Φ, we have found uniparametric families of radially symmetric stable solitons. When the Kerr term is focusing, the family interpolates between the solution without Kerr term and the Townes profile. For defocusing Kerr term, the family interpolates between the solution without Kerr term and solitons which asymptotically tend to a particular finite size. The fixed boundary conditions induce effective forces which push the solitons towards the center of the material. Regarding interactions, the Poisson term produces attraction at a distance, resembling gravity. This fact has been studied experimentally <cit.>. Both the local and nonlocal nonlinearities shape the solitons and the results of dynamical evolution. However, we remark that in soliton collisions a prominent role is played by the wave nature of the Schrödinger equation. As in many nonlinear systems, interference fringes appear for appropriate initial conditions and there is attraction/repulsion for phase coincidence/opposition. We have remarked that the same equations, in one more dimension (d=3), are the basis of the scalar field dark matter (ψDM) model of cosmology, which relies on the hypothesis of the existence of a cosmic Bose-Einstein condensate of an ultralight axion. In this scenario, the physics of solitons is connected to phenomena taking place at length scales comparable to galaxies. Certainly, there are differences between the cosmological d=3 case and the possible laboratory d=2 setups. First of all, we have not considered situations with evolution of the cosmic scale factor. Furthermore, the Poisson interaction is stronger in smaller dimension, and the monopolar “gravitational force” decays as 1/r in d=2 rather than as 1/r^2. The role of boundary conditions is of particular importance. In the Universe, one typically has open boundary conditions. In a laboratory experiment, they have to be specified at a finite distance from the center. For the Poisson field (temperature), the simplest choice is to make it constant at the borders of the sample, even if this generates restoring forces towards the center. For the electromagnetic wave, the best option is to introduce an absorbing element that prevents the radiation reaching the borders from being reflected back towards the center. This simulates the sponge typically used in the numerical 3d computations. In any case, the effects related to boundary conditions are reduced by taking larger two-dimensional sections of the optical material. We see no obstruction, apart from cost, to utilizing large pieces of glass in this kind of experiment. Let us also remark that, apart from the analogy discussed here, boundary conditions are a useful tuning knob in optical experiments with nonlocal nonlinearities. Despite these discrepancies and the disparate physical scales involved, there are apparent strong similarities between the families of self-trapped waves, their stability and their interactions in the d=2 and d=3 cases. We emphasize that this resemblance holds with or without the Kerr term, corresponding to the presence or absence of non-negligible local self-interactions of the scalar in the cosmological setup.
We hope that these considerations will pave the way for the experimental engineering of optical experiments that introduce analogues of dark matter dynamics in the ψDM scenario. § APPENDIX: SCALING THE EQUATION TO ITS CANONICAL FORM Consider the equations: i a_1 ∂ψ̃/∂z̃ = -(1/2) a_2 ∇̃^2 ψ̃ - λ_K a_3 |ψ̃|^2 ψ̃ + a_4 Φ̃ ψ̃, ∇̃^2 Φ̃ = a_5 |ψ̃|^2 where the a_i > 0 are constants and tilded quantities correspond to dimensionful coordinates, potential and wave function. Equations (<ref>), (<ref>) are transformed into the canonical dimensionless form (<ref>), (<ref>) by the following rescalings: z̃ = [C a_1 a_3/(a_2 a_4 a_5)] z, (x̃,ỹ) = [C a_3/(a_4 a_5)]^{1/2} (x,y), ψ̃ = [a_2 a_4 a_5/(C a_3^2)]^{1/2} ψ, Φ̃ = [a_2 a_5/(C a_3)] Φ. Acknowledgements. We thank Alessandro Alberucci, Jisha Chandroth Pannian and Camilo Ruiz for useful comments. We also thank two anonymous referees who helped us improve the discussions on the physical implementation and the analogy to cosmological setups. This work is supported by grants FIS2014-58117-P and FIS2014-61984-EXP from Ministerio de Economía y Competitividad and grants GPC2015/019 and EM2013/002 from Xunta de Galicia. Kivshar2003xv Y. S. Kivshar and G. P. Agrawal, Optical Solitons. Academic Press, 2003. 1464-4266-7-5-R02 B. A. Malomed, D. Mihalache, F. Wise, and L. Torner, “Spatiotemporal optical solitons,” J. Opt. B: Quantum Semiclass. Opt. 7, R53 (2005). 0034-4885-75-8-086401 Z. Chen, M. Segev, and D. N. Christodoulides, “Optical spatial solitons: historical overview and recent advances,” Rep. Prog. Phys. 75, 086401 (2012). PhysRev.187.1767 R. Ruffini and S. Bonazzola, “Systems of self-gravitating particles in general relativity and the concept of an equation of state,” Phys. Rev. 187, 1767 (1969). diosi L. Diósi, “Gravitation and quantum-mechanical localization of macro-objects,” Phys. Lett. A 105, 199 (1984). penrose1 R. Penrose, “On gravity's role in quantum state reduction,” Gen. Rel. Gravit. 28, 581 (1996). penrose2 R. Penrose, “On the gravitization of quantum mechanics 1: Quantum state reduction,” Found. Phys. 44, 557 (2014). 1367-2630-16-11-115007 M. Bahrami, A. Grossardt, S. Donadi, and A. Bassi, “The Schrödinger-Newton equation and its foundations,” New J. Phys. 16, 115007 (2014). atom1 D. O'Dell, S. Giovanazzi, G. Kurizki, and V. M. Akulin, “Bose-Einstein condensates with 1/r interatomic attraction: Electromagnetically induced “gravity”,” Phys. Rev. Lett. 84, 5687 (2000). PhysRevLett.115.023901 J. Qin, G. Dong, and B. A. Malomed, “Hybrid matter-wave microwave solitons produced by the local-field effect,” Phys. Rev. Lett. 115, 023901 (2015). guth A. H. Guth, M. P. Hertzberg, and C. Prescod-Weinstein, “Do Dark Matter Axions Form a Condensate with Long-Range Correlation?,” Phys. Rev. D 92, 103513 (2015). 1475-7516-2007-06-025 C. G. Böhmer and T. Harko, “Can dark matter be a Bose-Einstein condensate?,” J. Cosmol. Astropart. Phys. 2007, 025 (2007). Matos A. Suárez, V. H. Robles, and T. Matos, “A review on the scalar field/Bose-Einstein condensate dark matter model,” in Accelerated Cosmic Expansion (C. Moreno González, J. E. Madriz Aguilar, and L. M. Reyes Barrera, eds.), vol. 38 of Astrophysics and Space Science Proceedings, pp. 107–142, Springer International Publishing, 2014. Marsh:2015xka D. J. Marsh, “Axion cosmology,” Phys. Rep. 643, 1 (2016). schive H.-Y. Schive, T. Chiueh, and T. Broadhurst, “Cosmic structure as the quantum interference of a coherent dark wave,” Nat. Phys. 10, 496 (2014). schiveprl H.-Y. Schive, M.-H. Liao, T.-P. Woo, S.-K. Wong, T. Chiueh, T. Broadhurst, and W.-Y. P.
Hwang, “Understanding the core-halo relation of quantum wave dark matter from 3d simulations,” Phys. Rev. Lett. 113, 261302 (2014). witten L. Hui, J. P. Ostriker, S. Tremaine, and E. Witten, “On the hypothesis that cosmological dark matter is composed of ultra-light bosons,” arXiv:1610.08297 (2016). PhysRevLett.91.073901 C. Conti, M. Peccianti, and G. Assanto, “Route to nonlocality and observation of accessible solitons,” Phys. Rev. Lett. 91, 073901 (2003). 2040-8986-18-5-054006 Y. Izdebskaya, W. Krolikowski, N. F. Smyth, and G. Assanto, “Vortex stabilization by means of spatial solitons in nonlocal media,” J. Opt. 18, 054006 (2016). segev R. Bekenstein, R. Schley, M. Mutzafi, C. Rotschild, and M. Segev, “Optical simulations of gravitational effects in the Newton-Schrödinger system,” Nat. Phys. 11, 872 (2015). PhysRevA.57.3837 V. M. Pérez-García, H. Michinel, and H. Herrero, “Bose-Einstein solitons in highly asymmetric traps,” Phys. Rev. A 57, 3837 (1998). StreckerSolitons K. E. Strecker, G. B. Partridge, A. G. Truscott, and R. G. Hulet, “Formation and propagation of matter-wave soliton trains,” Nature 417, 150 (2002). Khaykovich1290 L. Khaykovich, F. Schreck, G. Ferrari, T. Bourdel, J. Cubizolles, L. D. Carr, Y. Castin, and C. Salomon, “Formation of a matter-wave bright soliton,” Science 296, 1290 (2002). OpticalRogue D. R. Solli, C. Ropers, P. Koonath, and B. Jalali, “Optical rogue waves,” Nature 450, 1054 (2007). Philbin1367 T. G. Philbin, C. Kuklewicz, S. Robertson, S. Hill, F. König, and U. Leonhardt, “Fiber-optical analog of the event horizon,” Science 319, 1367 (2008). Braidotti:2016ido M. C. Braidotti, Z. H. Musslimani, and C. Conti, “Generalized uncertainty principle and analogue of quantum gravity in optics,” Physica D: Nonlinear Phenomena, in press, 2016. PhysRevD.53.2236 J. W. Lee and I. G. Koh, “Galactic halos as boson stars,” Phys. Rev. D 53, 2236 (1996). Goodman J. Goodman, “Repulsive dark matter,” New Astron. 5, 103 (2000). guzman1 A. Bernal and F. S. Guzmán, “Scalar field dark matter: Head-on interaction between two structures,” Phys. Rev. D 74, 103002 (2006). PhysRevLett.95.213904 C. Rotschild, O. Cohen, O. Manela, M. Segev, and T. Carmon, “Solitons in nonlinear media with an infinite range of nonlocality: First observation of coherent elliptic solitons and of vortex-ring solitons,” Phys. Rev. Lett. 95, 213904 (2005). Alberucci:07 A. Alberucci, M. Peccianti, and G. Assanto, “Nonlinear bouncing of nonlocal spatial solitons at the boundaries,” Opt. Lett. 32, 2795 (2007). Alberucci:07a A. Alberucci and G. Assanto, “Propagation of optical spatial solitons in finite-size media: interplay between nonlocality and boundary conditions,” J. Opt. Soc. Am. B 24, 2314 (2007). Alfassi:07 B. Alfassi, C. Rotschild, O. Manela, M. Segev, and D. N. Christodoulides, “Boundary force effects exerted on solitons in highly nonlocal nonlinear media,” Opt. Lett. 32, 154 (2007). PhysRevLett.102.203903 I. B. Burgess, M. Peccianti, G. Assanto, and R. Morandotti, “Accessible light bullets via synergetic nonlinearities,” Phys. Rev. Lett. 102, 203903 (2009). Gurgov:09 H. C. Gurgov and O. Cohen, “Spatiotemporal pulse-train solitons,” Opt. Express 17, 7052 (2009). Laudyn:15 U. A. Laudyn, M. Kwasny, A. Piccardi, M. A. Karpierz, R. Dabrowski, O. Chojnowska, A. Alberucci, and G. Assanto, “Nonlinear competition in nematicon propagation,” Opt. Lett. 40, 5235 (2015). PhysRevA.83.053838 K.-H. Kuo, Y. Y. Lin, R.-K. Lee, and B. A. Malomed, “Gap solitons under competing local and nonlocal nonlinearities,” Phys. Rev. A 83, 053838 (2011). Cavitation A.
Paredes, D. Feijoo, and H. Michinel, “Coherent cavitation in the liquid of light,” Phys. Rev. Lett. 112, 173901 (2014). PhysRevLett.116.163902 F. Maucher, T. Pohl, S. Skupin, and W. Krolikowski, “Self-organization of light in optical media with competing nonlinearities,” Phys. Rev. Lett. 116, 163902 (2016). PhysRevLett.115.253902 Y.-C. Zhang, Z.-W. Zhou, B. A. Malomed, and H. Pu, “Stable solitons in three dimensional free space without the ground state: Self-trapped Bose-Einstein condensates with spin-orbit coupling,” Phys. Rev. Lett. 115, 253902 (2015). 0295-5075-98-4-44003 D. Novoa, D. Tommasini, and H. Michinel, “Ultrasolitons: Multistability and subcritical power threshold from higher-order Kerr terms,” EPL 98, 44003 (2012). PhysRevE.62.4300 V. M. Pérez-García, V. V. Konotop, and J. J. García-Ripoll, “Dynamics of quasicollapse in nonlinear Schrödinger systems with nonlocal interactions,” Phys. Rev. E 62, 4300 (2000). PhysRevE.66.046619 O. Bang, W. Krolikowski, J. Wyller, and J. J. Rasmussen, “Collapse arrest and soliton stabilization in nonlocal nonlinear media,” Phys. Rev. E 66, 046619 (2002). 1464-4266-6-5-017 W. Krolikowski, O. Bang, N. I. Nikolov, D. Neshev, J. Wyller, J. J. Rasmussen, and D. Edmundson, “Modulational instability, solitons and beam propagation in spatially nonlocal nonlinear media,” J. Opt. B: Quantum Semiclass. Opt. 6, S288 (2004). segev2 C. Rotschild, B. Alfassi, O. Cohen, and M. Segev, “Long-range interactions between optical solitons,” Nat. Phys. 2, 769 (2006). Kartashov:06 Y. V. Kartashov, L. Torner, V. A. Vysloukh, and D. Mihalache, “Multipole vector solitons in nonlocal nonlinear media,” Opt. Lett. 31, 1483 (2006). Lopez-Aguayo:06 S. Lopez-Aguayo, A. S. Desyatnikov, Y. S. Kivshar, S. Skupin, W. Krolikowski, and O. Bang, “Stable rotating dipole solitons in nonlocal optical media,” Opt. Lett. 31, 1100 (2006). penrose I. M. Moroz, R. Penrose, and P. Tod, “Spherically-symmetric solutions of the Schrödinger-Newton equations,” Class. Quantum Grav. 15, 2733 (1998). harrison R. Harrison, I. Moroz, and K. P. Tod, “A numerical study of the Schrödinger-Newton equations,” Nonlinearity 16, 101 (2003). atom2 I. Papadopoulos, P. Wagner, G. Wunner, and J. Main, “Bose-Einstein condensates with attractive 1/r interaction: The case of self-trapping,” Phys. Rev. A 76, 053604 (2007). PhysRevA.78.013615 H. Cartarius, T. Fabcic, J. Main, and G. Wunner, “Dynamics and stability of Bose-Einstein condensates with attractive 1/r interaction,” Phys. Rev. A 78, 013615 (2008). PhysRevA.82.023611 S. Rau, J. Main, H. Cartarius, P. Köberle, and G. Wunner, “Variational methods with coupled Gaussian functions for Bose-Einstein condensates with long-range interactions. II. Applications,” Phys. Rev. A 82, 023611 (2010). 0004-637X-645-2-814 F. S. Guzmán and L. A. Ureña-López, “Gravitational cooling of self-gravitating Bose condensates,” Astrophys. J. 645, 814 (2006). Chavanis P.-H. Chavanis and L. Delfini, “Mass-radius relation of Newtonian self-gravitating Bose-Einstein condensates with short-range interactions. II. Numerical results,” Phys. Rev. D 84, 043532 (2011). Pop D. J. E. Marsh and A.-R. Pop, “Axion dark matter, solitons and the cusp-core problem,” Mon. Not. R. Astron. Soc. 451, 2479 (2015). PhysRevE.71.065603 A. I. Yakimenko, Y. A. Zaliznyak, and Y. Kivshar, “Stable vortex solitons in nonlocal self-focusing nonlinear media,” Phys. Rev. E 71, 065603 (2005). PhysRevA.76.053833 S. Ouyang and Q. Guo, “(1+2)-dimensional strongly nonlocal solitons,” Phys. Rev. A 76, 053833 (2007). PhysRevA.91.013841 A. Alberucci, C. P.
Jisha, N. F. Smyth, and G. Assanto, “Spatial optical solitons in highly nonlocal media,” Phys. Rev. A 91, 013841 (2015). Alberucci:14 A. Alberucci, C. P. Jisha, and G. Assanto, “Accessible solitons in diffusive media,” Opt. Lett. 39, 4317 (2014). breather A. Alberucci, C. P. Jisha, and G. Assanto, “Breather solitons in highly nonlocal media,” arXiv:1602.01722 (2016). singh V. Singh and P. Aghamkar, “Surface plasmon enhanced third-order optical nonlinearity of Ag nanocomposite film,” Appl. Phys. Lett. 104 (2014). Can-Uc:16 B. Can-Uc, R. Rangel-Rojo, A. Peña Ramírez, C. B. de Araújo, H. T. M. C. M. Baltar, A. Crespo-Sosa, M. L. Garcia-Betancourt, and A. Oliver, “Nonlinear optical response of platinum nanoparticles and platinum ions embedded in sapphire,” Opt. Express 24, 9955 (2016). Snyder1538 A. W. Snyder and D. J. Mitchell, “Accessible solitons,” Science 276, 1538 (1997). PhysRevLett.13.479 R. Y. Chiao, E. Garmire, and C. H. Townes, “Self-trapping of optical beams,” Phys. Rev. Lett. 13, 479 (1964). PRD69 F. S. Guzmán and L. A. Ureña-López, “Evolution of the Schrödinger-Newton system for a self-gravitating scalar field,” Phys. Rev. D 69, 124033 (2004). Shou:09 Q. Shou, Y. Liang, Q. Jiang, Y. Zheng, S. Lan, W. Hu, and Q. Guo, “Boundary force exerted on spatial solitons in cylindrical strongly nonlocal media,” Opt. Lett. 34, 3523 (2009). guzman2 J. A. González and F. S. Guzmán, “Interference pattern in the collision of structures in the Bose-Einstein condensate dark matter model: Comparison with fluids,” Phys. Rev. D 83, 103513 (2011). Paredes:2015wga A. Paredes and H. Michinel, “Interference of dark matter solitons and galactic offsets,” Phys. Dark Univ. 12, 50 (2016). PhysRevD.93.103535 F. S. Guzmán, J. A. González, and J. P. Cruz-Pérez, “Behavior of luminous matter in the head-on encounter of two ultralight BEC dark matter halos,” Phys. Rev. D 93, 103535 (2016). cotner E. Cotner, “Collisional interactions between self-interacting non-relativistic boson stars: effective potential analysis and numerical simulations,” Phys. Rev. D 94, 063503 (2016). PhysRevD.94.043513 B. Schwabe, J. C. Niemeyer, and J. F. Engels, “Simulations of solitonic core mergers in ultralight axion dark matter cosmologies,” Phys. Rev. D 94, 043513 (2016). PRD74 A. Bernal and F. S. Guzmán, “Scalar field dark matter: Nonspherical collapse and late-time behavior,” Phys. Rev. D 74, 063504 (2006).
"authors": [
"Alvaro Navarrete",
"Angel Paredes",
"Jose R. Salgueiro",
"Humberto Michinel"
],
"categories": [
"physics.optics",
"astro-ph.GA",
"nlin.PS"
],
"primary_category": "physics.optics",
"published": "20170327141809",
"title": "Spatial solitons in thermo-optical media from the nonlinear Schrodinger-Poisson equation and dark matter analogues"
} |
[pages=1-last]simple2.pdf | http://arxiv.org/abs/1703.08944v1 | {
"authors": [
"Ahmed Hussain Qureshi",
"Yasar Ayaz"
],
"categories": [
"cs.RO",
"cs.AI"
],
"primary_category": "cs.RO",
"published": "20170327061938",
"title": "Intelligent bidirectional rapidly-exploring random trees for optimal motion planning in complex cluttered environments"
} |
refs.bibtheoremTheorem[section] corollaryCorollary *mainMain Theorem lemma[theorem]Lemma propositionProposition conjectureConjecture *problemProblem definition definition[theorem]Definition remarkRemark *notationNotation | http://arxiv.org/abs/1703.09027v1 | {
"authors": [
"Irina Pettersson"
],
"categories": [
"math.AP",
"35B27"
],
"primary_category": "math.AP",
"published": "20170327121651",
"title": "Two-scale convergence in thin domains with locally periodic rapidly oscillating boundary"
} |
P. S. Vinayagam [kmu,uae], R. Radha [kmu] (corresponding author, [email protected]), S. Bhuvaneswari [bdu], R. Ravisankar [bdu], P. Muruganandam [bdu] ([email protected]) [kmu] Centre for Nonlinear Science (CeNSc), PG and Research Department of Physics, Government College for Women (Autonomous), Kumbakonam 612001, India [uae] Department of Physics, United Arab Emirates University, P.O. Box 15551, Al-Ain, United Arab Emirates [bdu] Department of Physics, Bharathidasan University, Palkaliperur Campus, Tiruchirapalli 620024, India We investigate the dynamics of spin-orbit (SO) coupled BECs in a time dependent harmonic trap and show the dynamical system to be completely integrable by constructing the Lax pair. We then employ the gauge transformation approach to witness the rapid oscillations of the condensates for a relatively smaller value of SO coupling in a time independent harmonic trap compared to their counterparts in a transient trap. Keeping track of the evolution of the condensates in a transient trap during its transition from confining to expulsive, we notice that they collapse in the expulsive trap. We further show that one can manipulate the scattering length through Feshbach resonance to stretch the lifetime of the confining trap and revive the condensate. Considering a SO coupled state as the initial state, numerical simulation indicates that the reinforcement of Rabi coupling on SO coupled BECs generates the striped phase of bright solitons and does not impact the stability of the condensates, despite destroying the integrability of the dynamical system. Keywords: Coupled nonlinear Schrödinger system, Bright soliton, Gauge transformation, Lax pair 2000 MSC: 37K40, 35Q51, 35Q55 § INTRODUCTION The advent of Bose-Einstein condensates (BECs) in rubidium <cit.> and the subsequent experimental identification of bright <cit.> and dark solitons <cit.> for attractive and repulsive binary interactions, respectively, contributed to a resurgence in the investigation of ultracold matter. At ultralow temperatures, the macroscopic wave function of BECs can be described by the mean field Gross-Pitaevskii (GP) equation, which is essentially a variant of the celebrated nonlinear Schrödinger (NLS) equation <cit.>. It is worth pointing out at this juncture that the behavior of single (scalar) component BECs is influenced by the external trapping potential and the binary interatomic interaction. The experimental realization of vector (or two component) BECs, in which two (or more) internal states or different atoms can be populated, has given a fillip to the investigation of multi-component BECs. In contrast to single component BECs, multi-component BECs exhibit rich dynamics by virtue of inter-species and intra-species binary interactions, which can be either attractive or repulsive. This extra freedom associated with multi-component BECs enables them to display novel and rich phenomena like multidomain walls <cit.> and spin switching soliton pairs (either bright-bright <cit.>, dark-dark <cit.> or bright-dark, etc.), which can never be witnessed in single component BECs. Recently, in a landmark experiment, the Spielman group at NIST engineered a synthetic spin-orbit (SO) coupling for a BEC <cit.>. In the experiment, two Raman laser beams were used to couple a two component BEC consisting of (predominantly) two hyperfine states of ^87Rb. The momentum transfer between laser beams and atoms contributes to the rich possibility of creating synthetic electric and magnetic fields.
Recent investigations have explored the possibility of identifying tunable spin-orbit coupled BECs <cit.> with various trapping potentials, and stable regimes of the condensates have been observed. The identification of the stripe phase in the investigation of spin-orbit coupled BECs <cit.> has only reignited the enthusiasm in this domain of interest, as the striped phase is believed to be an indication of the phase transition taking place in a spin-orbit coupled BEC. Also, the influence of spin-orbit coupling in a one-dimensional BEC, particularly the interplay of the SOC, Raman coupling, and nonlinearity-induced precession of the soliton's spin, has been studied recently <cit.>. The striped phase, which essentially consists of a linear combination of plane waves, has also contributed to the idea of using ultracold atoms for the implementation of a quantum simulator <cit.>. At this juncture, it should be mentioned that even though nonlinear excitations like vortices <cit.>, skyrmions <cit.> and bright solitons <cit.> have been generated in a SO coupled BEC, the dynamics of SO coupled BECs in a time dependent harmonic trap governed by two coupled GP equations has not been exactly solved analytically. In this paper, we construct the linear eigenvalue problem of the SO coupled GP equation in a transient harmonic trap and show that it is completely integrable. We then generate bright soliton solutions and track their evolution in a transient harmonic trap. We observe that the addition of SO coupling contributes to rapid oscillations of the real and imaginary parts of the order parameter, and this occurs at a relatively lower value of the SO coupling parameter in a time independent harmonic trap compared to its counterpart in a transient trap. Tracing the evolution of the bright solitons (or the condensates) during the transition from confining to expulsive trap, we notice that the condensates collapse suddenly in the expulsive trap. By employing Feshbach resonance management, we show that one can retrieve the condensates by stretching the lifespan of the confining trap. Then, considering a SO coupled state as the initial state, we numerically study the impact of Rabi coupling on the condensates. The results of our investigation indicate that the reinforcement of Rabi coupling on SO coupled BECs generates striped solitons and leaves no impact on the stability of the BECs. We also emphasize that all of the above occurs despite the transition of the dynamical system to the nonintegrable regime. § THE MODEL AND LAX PAIR We consider a spin-orbit coupled quasi-one dimensional BEC in a parabolic trap with longitudinal and transverse frequencies ω_x ≪ ω_⊥. Assuming equal contributions of Rashba <cit.> and Dresselhaus <cit.> SO coupling (as in the experiment of Ref. <cit.>), the system can be described at sufficiently low temperatures by a set of coupled GP equations of the form <cit.>: i∂ψ_1/∂t = [-1/2 ∂^2/∂x^2 + V(x,t) - γ(t)(|ψ_1|^2+|ψ_2|^2) - ik_L ∂/∂x] ψ_1 + Ωψ_2, i∂ψ_2/∂t = [-1/2 ∂^2/∂x^2 + V(x,t) - γ(t)(|ψ_1|^2+|ψ_2|^2) + ik_L ∂/∂x] ψ_2 + Ωψ_1.
In the above equations, the term ±ik_L ∂/∂x represents the momentum transfer between the laser beams and the atoms due to SO coupling, γ(t) represents the attractive binary interaction, the linear cross coupling parameter Ω denotes the Rabi coupling, and V(x,t) = λ(t)^2 x^2/2, where λ(t) = ω_x/ω_⊥ is the time dependent trap frequency. Switching off the Rabi coupling (Ω=0) and employing the following transformation ψ_1(x,t) = q_1(x,t) exp[(i/2) k_L (k_L t - 2x)], ψ_2(x,t) = q_2(x,t) exp[(i/2) k_L (k_L t + 2x)], Equation (<ref>) can be written in a simpler form by eliminating the SO coupling term as i∂q_1/∂t = [-1/2 ∂^2/∂x^2 + V(x,t) - γ(t)(|q_1|^2+|q_2|^2)] q_1, i∂q_2/∂t = [-1/2 ∂^2/∂x^2 + V(x,t) - γ(t)(|q_1|^2+|q_2|^2)] q_2. We emphasize that the model governed by equations (<ref>) is exactly integrable if either the Rabi coupling (Zeeman splitting) (Ω) or the SO coupling (ik_L) is taken into account, but not both of them <cit.>. The above coupled equations (<ref>) and (<ref>) admit the following Lax pair Φ_x + UΦ = 0, Φ_t + VΦ = 0, where Φ = (ϕ_1, ϕ_2, ϕ_3)^T is a three-component Jost function, U = [iζ(t) U_12 U_13; U_21 -iζ(t) 0; U_31 0 -iζ(t)], V = [V_11 V_12 V_13; V_21 V_22 V_23; V_31 V_32 V_33], with U_12 = √(γ(t)) q_1(x,t) exp[iϕ(x,t)], U_13 = √(γ(t)) q_2(x,t) exp[iϕ(x,t)], U_21 = -√(γ(t)) q_1^∗(x,t) exp[-iϕ(x,t)], U_31 = -√(γ(t)) q_2^∗(x,t) exp[-iϕ(x,t)], V_11 = iζ(t)[c(t)x - ζ(t)] + (i/2)γ(t)(|q_1(x,t)|^2 + |q_2(x,t)|^2), V_12 = √(γ(t))[c(t)x - ζ(t)] q_1(x,t) exp[iϕ(x,t)] + (i/2)√(γ(t))[q_1(x,t) exp[iϕ(x,t)]]_x, V_13 = √(γ(t))[c(t)x - ζ(t)] q_2(x,t) exp[iϕ(x,t)] + (i/2)√(γ(t))[q_2(x,t) exp[iϕ(x,t)]]_x, V_21 = -√(γ(t))[c(t)x - ζ(t)] q_1^∗(x,t) exp[-iϕ(x,t)] + (i/2)√(γ(t))[q_1^∗(x,t) exp[-iϕ(x,t)]]_x, V_22 = -iζ(t)[c(t)x - ζ(t)] - (i/2)γ(t)|q_1(x,t)|^2, V_23 = -(i/2)γ(t) q_1^∗(x,t) q_2(x,t), V_31 = -√(γ(t))[c(t)x - ζ(t)] q_2^∗(x,t) exp[iϕ(x,t)] + (i/2)√(γ(t))[q_2^∗(x,t) exp[iϕ(x,t)]]_x, V_32 = -(i/2)γ(t) q_1(x,t) q_2^∗(x,t) exp[2iϕ(x,t)], V_33 = -iζ(t)[c(t)x - ζ(t)] - (i/2)γ(t)|q_2(x,t)|^2. In the above, ϕ(x,t) ≡ c(t)x^2/2, and [...]_x denotes differentiation with respect to x. The compatibility condition U_t - V_x + [U, V] = 0 generates the SO coupled GP equation (<ref>) without the Rabi coupling (Ω=0), while the spectral parameter ζ(t) obeys the following equation: dζ(t)/dt = c(t)ζ(t), with λ(t)^2 = dc(t)/dt - c(t)^2, and c(t) = (d/dt) ln γ(t). It may be noted that a similar Riccati equation (<ref>) has been employed to solve GP-type equations <cit.>. In fact, the identification of the Riccati-type equation (<ref>) gives the first signature of the complete integrability of Equation (<ref>) with Ω=0. Equation (<ref>), which determines the parabolic potential strength λ^2(t), demonstrates that it is related to the interaction strength γ(t) through the integrability condition, which can be derived by simply substituting Equation (<ref>) into Equation (<ref>): γ(t) d^2γ(t)/dt^2 - 2(dγ(t)/dt)^2 + λ(t)^2 γ^2(t) = 0. Thus, the system of coupled GP equations (<ref>) is completely integrable for suitable choices of λ(t) and γ(t) which are consistent with equation (<ref>).
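As an independent consistency check of the reduction above, the following symbolic sketch (our own, not from the original work) verifies that the phases of the transformation cancel the SO terms in the linear part of the equations; the nonlinear and potential terms are insensitive to the phase since |ψ_j| = |q_j|:

```python
import sympy as sp

x, t, kL = sp.symbols('x t k_L', real=True)
q = sp.Function('q')(x, t)

def residual(sign):
    # phase of the transformation; sign = -1 for psi_1, +1 for psi_2
    phase = sp.exp(sp.I*kL*(kL*t + 2*sign*x)/2)
    psi = q*phase
    # linear part of the GP equation with the SO term -/+ i k_L d/dx on the left
    lin = sp.I*psi.diff(t) + psi.diff(x, 2)/2 - sign*sp.I*kL*psi.diff(x)
    return sp.simplify(sp.expand(lin/phase) - sp.I*q.diff(t) - q.diff(x, 2)/2)

print(residual(-1), residual(+1))   # -> 0 0
```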
For condensates in a time independent harmonic trap, λ(t) = λ_0 (a constant), Equation (<ref>) yields γ(t) = γ_0 exp(λ_0 t), where γ_0 is an arbitrary constant. Considering a vacuum seed solution, i.e., q_1(x,t) = q_2(x,t) = 0, and employing the gauge transformation approach <cit.>, we obtain q_1(x,t) = [2ε_1 β(t)/√(γ(t))] exp[iξ(x,t) - iϕ(x,t)] sech θ(x,t), q_2(x,t) = [2ε_2 β(t)/√(γ(t))] exp[iξ(x,t) - iϕ(x,t)] sech θ(x,t), where θ(x,t) = 2β(t)x - 4∫α(t)β(t) dt + 2δ, ξ(x,t) = 2α(t)x - 2∫[α(t)^2 - β(t)^2] dt - 2χ, α(t) = α_0 exp∫c(t) dt, β(t) = β_0 exp∫c(t) dt, γ(t) = γ_0 exp∫c(t) dt, ϕ(x,t) = (1/2)c(t)x^2, δ and χ are arbitrary real parameters, and ε_1 and ε_2 are coupling constants subject to the constraint |ε_1|^2 + |ε_2|^2 = 1. A formal solution to the coupled GP equation (<ref>), in the absence of the Rabi term, can then be straightforwardly written using the transformation (<ref>). The solutions given by Equations (2) illustrate the momentum transfer between the laser and the atoms when a vector BEC with order parameters q_1 and q_2, driven by the bright solitons given by Equations (12), is irradiated with a laser beam. § SOLITON DYNAMICS OF SPIN-ORBIT-RABI COUPLED BEC IN A TRANSIENT AND TIME INDEPENDENT TRAP Choosing the transient trap shown in figure <ref>, we show the real, imaginary and absolute values of the macroscopic order parameters ψ_1 and ψ_2 without and with spin-orbit coupling in figures <ref>(a) and <ref>(b). Comparison of figures <ref>(a) and <ref>(b) indicates that the addition of SO coupling contributes to rapid oscillations of the real and imaginary parts of the order parameters ψ_1 and ψ_2. The fact that the phases of ψ_1 and ψ_2 oscillate in space and time, as is evident from (<ref>), underlies the oscillating nature of the real and imaginary parts of the SO coupled order parameters. Keeping track of the evolution of the condensates in the transient trap, we observe that the condensate collapses during the time evolution, as shown in figure <ref>(b) for t=10. In other words, the oscillating nature of the real and imaginary parts of the order parameter is sustained as long as the trap remains confining in nature (λ(t)^2 > 0), and the condensates collapse as soon as the trap becomes expulsive (λ(t)^2 < 0). In addition to the expulsive trap, the fact that the scattering length γ(t) is driven by exp(0.02 t^2) [by virtue of (<ref>)] contributes to the collapse of the BECs in quasi-one dimension. However, by employing Feshbach resonance and manipulating the trap frequency appropriately, one can increase the longevity of the confining trap and recover the condensates, as shown in figure <ref>. For a quasi-one dimensional attractive BEC confined in a harmonic trap with frequencies ω_x = 2π × 20 Hz and ω_⊥ = 2π × 1000 Hz, the trap frequency ratio becomes (ω_x/ω_⊥)^2 = 0.0004. The fact that the condensates are recovered for c(t) = 0.0004 t (see figure <ref>) is consistent with the results observed recently in <cit.>. The stabilization of the condensates by manipulating the trapping frequency through Feshbach resonance is also numerically verified, and the corresponding density profiles are shown in figure <ref>. It should also be emphasized that the present integrable model offers the luxury of retrieving the condensates by tuning the time dependent trap frequency through Feshbach resonance. Switching off the time dependence of the trap, one observes the same oscillatory behavior for a relatively small value of SO coupling and a larger trap frequency, as shown in figure <ref>.
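For reference, the profiles plotted in these figures follow directly from Equations (12)-(13); a minimal sketch for the time-independent-trap choice c(t) = λ_0 is given below (integration constants are absorbed into δ and χ, ε_1 is taken real, and all parameter values are illustrative rather than those of the figures):

```python
import numpy as np

def bright_soliton(x, t, lam0=0.1, gamma0=1.0, alpha0=0.5, beta0=1.0,
                   eps1=1/np.sqrt(2), kL=0.5, delta=0.0, chi=0.0):
    """Evaluate Eqs. (12)-(13) for c(t) = lam0, so that alpha, beta and
    gamma all scale as exp(lam0*t); then reinstate the SO phases of Eq. (2)."""
    e = np.exp(lam0*t)
    alpha, beta, gamma = alpha0*e, beta0*e, gamma0*e
    theta = 2*beta*x - 2*alpha0*beta0*(np.exp(2*lam0*t) - 1)/lam0 + 2*delta
    xi = 2*alpha*x - (alpha0**2 - beta0**2)*(np.exp(2*lam0*t) - 1)/lam0 - 2*chi
    phi = 0.5*lam0*x**2
    q1 = 2*eps1*beta/np.sqrt(gamma)*np.exp(1j*(xi - phi))/np.cosh(theta)
    q2 = (np.sqrt(1 - eps1**2)/eps1)*q1      # |eps1|^2 + |eps2|^2 = 1
    psi1 = q1*np.exp(0.5j*kL*(kL*t - 2*x))   # Eq. (2): SO phases restored
    psi2 = q2*np.exp(0.5j*kL*(kL*t + 2*x))
    return psi1, psi2
```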
In this time independent trap too, the subsequent time evolution leads to collapse of the condensates, which can once again be stabilized by employing Feshbach resonance. We then study the impact of Rabi coupling on the bright solitons numerically by considering the spin-orbit coupled profile in (<ref>) at t=0 as the initial condition. The numerical simulations are performed using a split-step Crank-Nicolson method <cit.>. The initial profile is refined to a stationary state using imaginary time propagation, which is then evolved with real-time propagation after adding the correct phases ξ(x,0) and θ(x,0) from (<ref>). Figure <ref> depicts the initial profile (Ω = 0) from (<ref>) at t=0 and the final profile of the stationary state as obtained from imaginary time propagation with Ω = 10. Figures <ref> and <ref> show the numerically simulated density profiles of the SO coupled condensates in the presence of Rabi coupling in a transient and a time independent harmonic trap, respectively. The presence of stripes in figures <ref> and <ref> is a clear signature of SO coupling, consistent with the results of <cit.>. The observation of the striped phase, which is an indication of the phase transition taking place in SO coupled BECs, is also consistent with the results for spin-orbit coupled spin-1 BECs <cit.>. Even though we have shown the system governed by (<ref>) to be completely integrable only in the absence of Rabi coupling (Ω=0), we observe that the addition of Rabi coupling does not impact the stability of the condensates, as is evident from figures <ref> and <ref>. In other words, even though the addition of Rabi coupling to SO coupled condensates takes the dynamical system to the nonintegrable regime, one observes no obvious signature of this on the condensates, and they remain in the stable regime. The addition of different trap strengths (or different interaction strengths) merely changes the trajectories of the bright solitons, without impacting the stability of the condensates either, as shown in figures <ref>(a)-(c). The tunability of the transient harmonic trap to produce stable condensates is reminiscent of the tunability of stable SOC-BECs in a double well potential with an adjustable Raman frequency <cit.>. § CONCLUSION In this paper, we have unearthed the Lax pair of SO coupled BECs in a transient harmonic trap and shown the system to be completely integrable. We observe that the addition of SO coupling contributes to rapid oscillations in the real and imaginary parts of the order parameter. Tracing their evolution in a transient harmonic trap, we observe that the condensates collapse in the expulsive trap. However, by employing Feshbach resonance management and manipulating the trap frequency appropriately, we are able to retrieve the condensates. The oscillatory behavior of the condensates occurs at a relatively small SO coupling parameter in a time independent harmonic trap compared to its counterpart in a transient trap. Then, considering a SO coupled state as the ground state, we have studied numerically the impact of Rabi coupling on the BECs. We observe that the reinforcement of Rabi coupling in a SO coupled BEC gives rise to striped solitons, and one does not see any signature of instability in the condensates. It would be interesting to study the impact of SO coupling by considering a Rabi coupled BEC as the initial state; the results will be published later. § ACKNOWLEDGEMENTS RR wishes to acknowledge the financial assistance received from the Department of Atomic Energy-National Board for Higher Mathematics (DAE-NBHM) (No.
NBHM/R.P.16/2014) and the Council of Scientific and Industrial Research (CSIR) (No. 03(1323)/14/EMR-II) for financial support in the form of Major Research Projects. The work of PM forms a part of the Science & Engineering Research Board, Department of Science & Technology, Govt. of India sponsored research project (No. EMR/2014/000644). PSV acknowledges the support of UAE University through the grants UAEU-UPAR(7) and UAEU-UPAR(4). § REFERENCES ref3 Dalfovo F, Giorgini S, Pitaevskii L P, Stringari S. Theory of Bose-Einstein condensation in trapped gases. Rev Mod Phys 1999; 71: 463. bright Strecker K E, Partridge G B, Truscott A G, Hulet R G. Formation and propagation of matter-wave soliton trains. Nature (London) 2002; 417: 150. bright1 Khaykovich L, Schreck F, Ferrari G, Bourdel T, Cubizolles J, Carr L D, Castin Y, Salomon C. Formation of a matter-wave bright soliton. Science 2002; 296: 1290. bright2 Strecker K E, Partridge G B, Truscott A G, Hulet R G. Bright matter wave solitons in Bose-Einstein condensates. New J Phys 2003; 5: 73. dark Burger S, Bongs K, Dettmer S, Ertmer W, Sengstock K, Sanpera A, Shlyapnikov G V, Lewenstein M. Dark solitons in Bose-Einstein condensates. Phys Rev Lett 1999; 83: 5198. dark1 Cornish S L, Thompson S T, Wieman C E. Formation of bright matter-wave solitons during the collapse of attractive Bose-Einstein condensates. Phys Rev Lett 2006; 96: 170401. meanfield Pethick C J, Smith H. Bose-Einstein Condensation in Dilute Gases. 2nd ed. Cambridge: Cambridge University Press; 2008. domainwalls Kasamatsu K, Tsubota M. Multiple domain formation induced by modulation instability in two-component Bose-Einstein condensates. Phys Rev Lett 2004; 93: 100402. domainwalls1 Kevrekidis P G, Nistazakis H E, Frantzeskakis D J, Malomed B A, Carretero-Gonzalez R. Families of matter-waves in two-component Bose-Einstein condensates. Eur Phys J D 2004; 28: 181. domainwalls2 Sabbatini J, Zurek W H, Davis M J. Phase separation and pattern formation in a binary Bose-Einstein condensate. Phys Rev Lett 2011; 107: 230402. b-b Perez-García V M, Beitia J B. Symbiotic solitons in heteronuclear multicomponent Bose-Einstein condensates. Phys Rev A 2005; 72: 033620. b-b1 Adhikari S K. Bright solitons in coupled defocusing NLS equation supported by coupling: Application to Bose-Einstein condensation. Phys Lett A 2005; 346: 179. b-b2 Salasnich L, Malomed B A. Vector solitons in nearly one-dimensional Bose-Einstein condensates. Phys Rev A 2006; 74: 053610. d-d Öhberg P, Santos L. Dark solitons in a two-component Bose-Einstein condensate. Phys Rev Lett 2001; 86: 2918. SOC-experiment Lin Y J, García K J, Spielman I B. Spin-orbit-coupled Bose-Einstein condensates. Nature (London) 2011; 471: 83. tunable Salerno M, Abdullaev F Kh, Gammal A, Tomio L. Tunable spin-orbit coupled Bose-Einstein condensates in deep lattices. Phys Rev A 2016; 94: 043602. two-dim Jiang X, Fan Z, Chen Z, Pang W, Li Y, Malomed B A. Two-dimensional solitons in dipolar Bose-Einstein condensates with spin-orbit coupling. Phys Rev A 2016; 93: 023633. vortex Sakaguchi H, Sherman E Ya, Malomed B A. Vortex solitons in two-dimensional spin-orbit coupled Bose-Einstein condensates: Effects of the Rashba-Dresselhaus coupling and Zeeman splitting. Phys Rev E 2016; 94: 032202. loc Salasnich L, Cardoso W B, Malomed B A. Localized modes in quasi-two-dimensional Bose-Einstein condensates with spin-orbit and Rabi couplings. Phys Rev A 2014; 90: 033629. bichromatic-soc Cheng Y, Tang G, Adhikari S K.
Localization of a spin-orbit coupled Bose-Einstein condensate in a bichromatic optical lattice. Phys Rev A 2014; 89: 063602.doublewell Wang W Y, Cao H, Liu J, Fu L B. Spin-orbit coupled BEC in a double-well potential: Quantum energy spectrum and flat band. Phys Lett A 2015; 379: 1762.kueisun Sun K, Qu C, Xu Y, Zhang Y, Zhang C. Interacting spin-orbit-coupled spin-1 Bose-Einstein condensates. Phys Rev A 2016; 93: 023615.Wen2016 Wen L, Sun Q, Chen Y, Wang D-S, Hu J, Chen H, Liu W-M, Juzeliūnas G, Malomed B A, Ji A-C. Motion of solitons in one-dimensional spin-orbit-coupled Bose-Einstein condensates. Phys Rev A 2016; 94: 061602(R).quantumsimulator Mazza L, Bermudez A, Goldman N, Rizzi M, Martin-Delgado M A, Lewenstein M. An optical-lattice-based quantum simulator for relativistic field theories and topological insulators. New J Phys 2012; 14: 015007.vortices Xu X Q, Han J H. Spin-orbit coupled Bose-Einstein condensate under rotation. Phys Rev Lett 2011; 107: 200401.skymions Kawakami T, Mizushima T, Nitta M, Machida K. Stable skyrmions in SU(2) gauged Bose-Einstein condensates. Phys Rev Lett 2012; 109: 015301.moto Achilleos V, Frantzeskakis D J, Kevrekidis P G, Pelinovsky D E. Matter-wave bright solitons in spin-orbit coupled Bose-Einstein condensates. Phys Rev Lett 2013; 110: 264101.rashba Bychkov Y A, Rashba E I. Oscillatory effects and the magnetic susceptibility of carriers in inversion layers. J Phys C 1984; 17: 6039.dressel-soc Dresselhaus G. Spin-orbit coupling effects in zinc blende structures. Phys Rev 1955; 100: 580.ho96 Ho T L, Shenoy V B. Binary mixtures of Bose condensates of alkali atoms. Phys Rev Lett 1996; 77: 3276.esry97 Esry B D, Greene C H, Burke J P, Bohn J L. Hartree-Fock theory for double condensates. Phys Rev Lett 1997; 78: 3594.pu98 Pu H, Bigelow N P. Properties of two-species Bose condensates. Phys Rev Lett 1998; 80: 1130.konotop Kartashov Y V, Konotop V V, Zezyulin D A. Bose-Einstein condensates with localized spin-orbit coupling: Soliton complexes and spinor dynamics. Phys Rev A 2014; 90: 063621.riccati Wu L, Zhang J-F, Li L. Modulational instability and bright solitary wave solution for Bose-Einstein condensates with time-dependent scattering length and harmonic potential. New J Phys 2007; 9: 69.riccati1 Ramesh Kumar V, Radha R, Panigrahi P K. Dynamics of Bose-Einstein condensates in a time-dependent trap. Phys Rev A 2008; 77: 023611.riccati2 Radha R, Vinayagam P S, Sudharsan J B, Liu W-M, Malomed B A. Engineering bright solitons to enhance the stability of two-component Bose-Einstein condensates. Phys Lett A 2015; 379: 2977.riccati3 Radha R, Vinayagam P S, Sudharsan J B, Malomed B A. Persistent bright solitons in sign-indefinite coupled nonlinear Schrödinger equation with a time-dependent harmonic trap. Commun Nonlinear Sci Numer Simulat 2016; 31: 30.riccati4 Radha R, Vinayagam P S. An analytical window into the world of ultracold atoms. Rom Rep Phys 2015; 67(1): 89.llc Chau L-L, Shaw J C, Yen H C. An alternative explicit construction of N-soliton solutions in 1+1 dimensions. J Math Phys 1991; 32: 1737.gaugetrans Ramesh Kumar V, Radha R, Wadati M. Collision of bright vector solitons in two-component Bose-Einstein condensates. Phys Lett A 2010; 374: 3685.gaugetrans1 Radha R, Vinayagam P S. Stabilization of matter wave solitons in weakly coupled atomic condensates. Phys Lett A 2012; 376: 944.gaugetrans2 Radha R, Vinayagam P S, Porsezian K.
Rotation of the trajectories of bright solitons and realignment of intensity distribution in the coupled nonlinear Schrödinger equation. Phys Rev E 2013; 88: 032903.Muruganandam2009 Muruganandam P, Adhikari S K. Fortran programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap. Comput Phys Commun 2009; 180: 1888.Muruganandam20091 Vudragovic D, Vidanovic I, Balaž A, Muruganandam P, Adhikari S K. C programs for solving the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap. Comput Phys Commun 2012; 183: 2021.Muruganandam20092 Kishor Kumar R, Young-S L, Vudragovic D, Balaž A, Muruganandam P, Adhikari S K. Fortran and C programs for the time-dependent dipolar Gross-Pitaevskii equation in an anisotropic trap. Comput Phys Commun 2015; 195: 117.Muruganandam20093 Lončar V, Balaž A, Bogojevic A, Skrbic S, Muruganandam P, Adhikari S K. CUDA programs for solving the time-dependent dipolar Gross-Pitaevskii equation in an anisotropic trap. Comput Phys Commun 2016; 200: 406. | http://arxiv.org/abs/1703.09119v1 | {
"authors": [
"P. S. Vinayagam",
"R. Radha",
"S. Bhuvaneswari",
"R. Ravisankar",
"P. Muruganandam"
],
"categories": [
"cond-mat.quant-gas",
"cond-mat.other",
"37K40, 35Q51, 35Q55"
],
"primary_category": "cond-mat.quant-gas",
"published": "20170327144502",
"title": "Bright soliton dynamics in Spin Orbit-Rabi coupled Bose-Einstein condensates"
} |
| http://arxiv.org/abs/1703.09696v1 | {
"authors": [
"Marina Cortês",
"Lee Smolin"
],
"categories": [
"gr-qc",
"astro-ph.CO",
"nlin.AO",
"physics.class-ph"
],
"primary_category": "gr-qc",
"published": "20170327180010",
"title": "Reversing the irreversible: from limit cycles to emergent time symmetry"
} |
§ INTRODUCTION The basic unit of information is the 'bit'; it represents the change in uncertainty from a state of two equally probable possibilities (0,1) to a definite measurement of the outcome <cit.>. The left- and right-handed photon polarisation basis states, a manifestation of the spin angular momentum (SAM), provide a physical realisation of a 'bit'. When states representing bits exist in coherent superpositions, they are known as qubits and are normally depicted as unit vectors in a two-dimensional complex vector space <cit.>. Operations on qubits offer the potential for vast improvements over classical computing; for example, the execution of algorithms that run exponentially faster than the best-known classical equivalent. Despite this, the number of retrievable bits in a quantum system is bounded by that contained in its classical counterpart <cit.>. A simple corollary is that any theorem that limits classical information content is directly applicable to quantum information. Information can be encoded into a single photon using any measurable degree of freedom <cit.>, including: frequency <cit.>, spatial structure <cit.>, complex polarisation <cit.>, and temporal structuring <cit.>. Light with a phase that varies with azimuthal angle around the wavevector axis is endowed with orbital angular momentum (OAM) and is commonly termed optical vortex (OV) radiation. These beams have Poynting vectors that spiral around the direction of propagation <cit.> and have an azimuthally varying phase factor e^-ilϕ, where l is the OAM quantum number and may be a positive or negative integer. In some situations the effects of SAM and OAM can be equivalent <cit.>, but in general they have different physical manifestations. In randomly oriented samples, SAM is responsible for differential interactions with chiral molecules <cit.>. However, the enantiomer-dependent electric quadrupole transition moments can engage with OAM to produce some chiral effects <cit.>. The recent interest in optical vortex light is due to the large number of potential, and realised, applications, for example: detecting the rotary or lateral motion of particles in the beam cross-section <cit.>, masking parent stars with an OV coronagraph to allow direct imaging of companion objects <cit.>, or imposing one form of optical torque on a nano- or micro-scale particle <cit.>. The interest in the information content of OV photons stems from the assumption that, since |l⟩ forms a countably infinite set of basis states, the number of bits encoded in a photon is bounded only by experimental effects. Specifically, the question of whether there is a maximum information capacity for OAM light is an active area of research <cit.>. The study of singularimetry suggests a practical limit on the use of OAM beams for information transfer, since interaction with a topological aberration results in the decomposition of an optical vortex into multiple lower-order beams <cit.>. However, the benefits of OAM beams for eavesdropping-resistant free-space information transfer <cit.> and optical-fibre-to-free-space coupling with artificial turbulence <cit.> have been experimentally verified.
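To make the azimuthal structure concrete, here is a small illustrative Python sketch (not taken from the Letter): it samples the phase factor e^-ilϕ on a ring around the beam axis and recovers the topological charge from the azimuthal Fourier spectrum. The grid size and the charge are arbitrary choices.

import numpy as np

l_true = 3
n_phi = 256
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
field = np.exp(-1j * l_true * phi)                 # unit amplitude, phase e^{-il*phi}

spectrum = np.abs(np.fft.fft(field)) / n_phi
harmonics = np.fft.fftfreq(n_phi, d=1.0 / n_phi)   # integer harmonic labels
print(int(harmonics[np.argmax(spectrum)]))         # -3: e^{-il*phi} carries charge -l

A single sharp peak in the azimuthal spectrum is the numerical counterpart of the statement that |l⟩ states form a discrete (countably infinite) basis.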
It has been demonstrated that four light beams with different values of orbital angular momentum can be multiplexed and demultiplexed, allowing transmission of over one terabit per second <cit.>. The issue of single-photon detection is an active area of study <cit.>, as quantum information processing is dependent on the entanglement of pairs of <cit.> (or multiple <cit.>) OAM photons.§ MODIFICATION OF THE PHASE AND GROUP VELOCITY OF LIGHT The constraint typically applied when modelling an electromagnetic field in a waveguide is that the field must be zero at the boundaries. The consequences are that the axial wavevector becomes dependent on the frequency, ω, and the width of the guide <cit.>, and that below a certain critical frequency the field decays exponentially in the guide. Dispersion may also be caused by other geometric boundary conditions or by interaction with a medium, and through either constraint the phase and group velocities become distinct. The latter is the velocity at which the envelope of a wave packet travels through space. Here, the geometric boundary conditions are enforced by the light's spatial structure. As structured light has a transverse component of the wavevector, the path length has an additional contribution. To secure a result that is z-independent the paraxial approximation is required, and the group velocity for an arbitrary optical field becomes: v_g = c/(1 + ⟨k̂_⊥^2⟩/2k_0^2), where k̂_⊥ = -i∇_⊥ and k_0 is the magnitude of the wavevector. Any structured beam with a non-zero expectation value of k̂_⊥^2 will experience v_g < c. Here we consider a Laguerre-Gaussian (LG) optical field as representative of an OV. However, Bessel beams, also endowed with OAM, have similarly been experimentally verified as having a subluminal velocity <cit.>. Using the LG optical field profile delivers the expectation value as: ⟨k̂_⊥^2⟩ = (2/w_0^2)(2p + |l| + 1), where w_0 is the minimum beam waist and p is the radial quantum number <cit.>. The OAM photon (group) velocity is then inversely proportional to l. The integer p is a measurable quantised degree of freedom for a photon, can encode information <cit.>, and plays a role in angular beam shifts <cit.>. For clarity, the following analysis assumes p=0 and l ≥ 0. The inclusion of the radial and negative l modes will be discussed at the end of this Letter. § LIMITS ON PHOTON INFORMATION CONVEYANCE Theoretically, the number of symbols, N, required to communicate B bits of information is 2^B, assuming an equal probability of each symbol occurring <cit.>. Let us take an example: using a basis of l ∈{0,1} conveys one bit per photon. The addition of states l=2 and 3 provides four total outcomes and therefore the possibility of encoding two bits of information. Visualising three photons, each with two possible states (for example, left and right circular polarisation), provides justification for the requirement of eight possible outcomes to encode three bits. In the case of OAM photons, the encoding of three bits of information requires the ability to perfectly distinguish between the first eight l values. Extending this argument and including the Gaussian (l=0) mode, a measured state with l+1 possible symbols conveys B = log_2(l+1) bits of information.
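The two expressions above are straightforward to evaluate numerically. The following sketch computes the subluminal group velocity of an LG mode together with the bits carried by l+1 perfectly distinguishable OAM symbols; the wavelength and beam waist are illustrative choices, not values taken from the Letter.

import numpy as np

c = 299_792_458.0          # speed of light, m/s
wavelength = 1.55e-6       # illustrative infrared wavelength, m
w0 = 1.0e-3                # illustrative minimum beam waist, m
k0 = 2.0 * np.pi / wavelength

def group_velocity(l, p=0):
    """v_g = c/(1 + <k_perp^2>/(2 k0^2)), with <k_perp^2> = (2/w0^2)(2p + |l| + 1)."""
    k_perp_sq = (2.0 / w0**2) * (2 * p + abs(l) + 1)
    return c / (1.0 + k_perp_sq / (2.0 * k0**2))

for l in (0, 1, 100, 10**6):
    print(f"l = {l:>7}:  c - v_g = {c - group_velocity(l):9.3e} m/s,"
          f"  bits = {np.log2(l + 1):6.2f}")

Even the fundamental (l=0) Gaussian mode is slightly slow; the velocity deficit grows essentially linearly with l, which is what ultimately caps the information rate derived below.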
The exponential increase in required states is well known in spatial encoding: to encode 10.5 bits per photon, Tendrup et al <cit.> needed to employ 8 times more symbols than previous work, which reported 7 bits per photon as the highest value for random keys <cit.>, and is comparable to what has been achieved in temporal and polarization encoding <cit.>. To elucidate the issues surrounding detection, we consider a device that generates a photon with the phase structure of either an l=1 or an l=2 LG optical vortex (p=0). The detection of multiple photons allows for the resolution of either of the donut modes and a precise determination of whether l=1 or 2 light is being sent. However, since the intensity distributions of each mode overlap considerably, the detection of a single photon will not provide enough information to reduce the uncertainty in l to zero, and will therefore convey only a fraction of a bit. One might use two distant l values, for example l=1 and l=20, so that the overlap of the intensity distributions is essentially zero; the detection of a single photon would then spatially resolve the OAM modes within a high confidence interval. It is worth noting that this method requires increasingly large gaps between l-values, since the overlap between consecutive modes increases as l →∞. It has been shown that a Mach-Zehnder interferometer with a rotated Dove prism in each light path can form the base units of a device that can, in principle, distinguish an arbitrarily large number of OAM states at the individual photon level <cit.>. Thus, to proceed we assume that photon OAM states are perfectly distinguishable. In fact, high-fidelity OAM sorting may be possible with nano-scale detectors <cit.>.
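The growth of the overlap between consecutive modes is simple to check numerically. The sketch below integrates the shared probability of the normalized p=0 ring profiles I_l(r) ∝ r^2l e^-2r²/w²; the pointwise-minimum overlap used here is one convenient measure, not the Letter's analysis, and the grid parameters are arbitrary.

import numpy as np

w = 1.0                                 # beam waist, arbitrary units
r = np.linspace(1e-6, 12.0, 100_000)    # radial grid, units of w
dr = r[1] - r[0]

def ring(l):
    """Normalized p = 0 LG intensity ring, computed in log form to avoid overflow of r^(2l)."""
    log_i = 2 * l * np.log(r) - 2.0 * r**2 / w**2
    i = np.exp(log_i - log_i.max())
    return i / np.sum(i * 2.0 * np.pi * r * dr)

for l in (1, 5, 20, 100):
    shared = np.sum(np.minimum(ring(l), ring(l + 1)) * 2.0 * np.pi * r * dr)
    print(f"overlap of l = {l} and l = {l + 1}: {shared:.3f}")

The shared fraction creeps towards 1 as l grows, because the ring radius scales as √l while the ring width stays essentially constant, illustrating why ever larger gaps between usable l-values are needed.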
In the limit l →∞, similarly ℐ→ 0 as the light slows to a halt, thus all values of k_0 have a maximum value. The UV example in Figure <ref> has a maximum value beyond the range of the graph of l ≈ 10^9, where ℐ_max=28.6 ×ℐ_2S. Choosing an experimentally realisable OAM content of l=300 <cit.> for a photon with λ =400nm delivers an information content of 8.2 ×ℐ_2S. In the latter example, an approximate three-fold increase in the information capacity toℐ_max=23.9 ×ℐ_2S requires more than 4.2 × 10^7 extra distinguishable l-values. A curious observation is that the imaginary parts of the analytic continuation of the Lambert-W function describes a surface with striking similarity to a plane of constant phase in an optical vortex.§ CONCLUSIONOften discussed in the recent studies on optical OAM states of high dimensionality is the potential for large photonic information capacity <cit.>. This Letter sets out an analytically tractable upper bound on information conveyance for photons endowed with OAM. The conclusion is clear: it becomes progressively more difficult to encode information in the OAM degree of freedom of a photon as l approaches l_max, given in Equation <ref>. For clarity, the arguments presented above ignore the radial quantum number, p and values of l<0. However, Equation <ref> is easily modified by the replacement l →2| l |+p in the numerator and l →| l |+2p in the denominator. It is worth noting that the theory presented here applies only to information conveyance; stationary light pulses <cit.> may still encode a theoretically unbounded amount of information - limited only by the required exponential increase in detectable basis states. Thus, there are less implications for OAM beam use as optical memory <cit.>. In fact, the difference in group velocity can aid with separation of possible OAM states in the temporal domain. Holevo's theorem <cit.> states that no more than n bits can be obtained from n qubits. Thus, although the results of this Letter have been framed in terms of classical bits, they are equally applicable to quantum information transfer. In the special case of superdense coding, at most 2n bits may be retrieved from n qubits and Equations <ref> and <ref> are multiplied by a factor of two <cit.>.The author gratefully acknowledges helpful insights and comments from Professor David L. Andrews and Dr David S. Bradshaw.ol ollib | http://arxiv.org/abs/1703.08718v1 | {
"authors": [
"Matt M Coles"
],
"categories": [
"physics.optics",
"quant-ph"
],
"primary_category": "physics.optics",
"published": "20170325173511",
"title": "An Upper Bound on the Rate of Information Transfer in Optical Vortex Beams"
} |
16 April 2018; published 25 July 2018. Almost four decades ago, Gacs, Kurdyumov, and Levin introduced three different cellular automata to investigate whether one-dimensional nonequilibrium interacting particle systems are capable of displaying phase transitions and, as a by-product, introduced the density classification problem (the ability to classify arrays of symbols according to their initial density) in the cellular automata literature. Their model II became a well-known model in theoretical computer science and statistical mechanics. The other two models, however, did not receive much attention. Here we characterize the density classification performance of Gacs, Kurdyumov, and Levin's model IV, a four-state cellular automaton with three absorbing states—only two of which are attractive—by numerical simulations. We show that model IV compares well with its sibling model II in the density classification task: the additional states slow down the convergence to the majority state but confer a slight advantage in classification performance. We also show that, unexpectedly, initial states diluted in one of the nonclassifiable states are more easily classified. The performance of model IV under the influence of noise was also investigated and we found signs of an ergodic-nonergodic phase transition at some small finite positive level of noise, although the evidence is not entirely conclusive. We set an upper bound on the critical point for the transition, if any.DOI: https://doi.org/10.1103/PhysRevE.98.012135 Density classification performance and ergodicity of the Gacs-Kurdyumov-Levin cellular automaton model IV Rolf E. O. Simões December 30, 2023 ===========================================================================================================§ INTRODUCTION In 1978, Gacs, Kurdyumov, and Levin (GKL) introduced three different cellular automata (CA), which they called models II, IV, and VI, together with their probabilistic versions (PCA) to investigate whether nonequilibrium interacting particle systems are capable of displaying phase transitions <cit.>. Their goal was to examine the so-called “positive probabilities conjecture,” according to which one-dimensional systems with short-range interactions and positive transition probabilities are always ergodic <cit.>. This conjecture has been disproved—much to the awe of the practising community—many times since then, with the introduction of several models that have become archetypal in theoretical computer science and nonequilibrium statistical mechanics <cit.>.As a by-product of their investigations, GKL introduced the density classification problem in the cellular automata literature. The density classification problem consists in classifying arrays of symbols according to their initial density using local rules, and is completed successfully if a correct verdict as to which was the initial majority state is obtained in time at most linear in the size of the input array. Density classification is a nontrivial task for CA in which cells interact over finite neighbourhoods, because then the cells have to achieve a global consensus by cooperating only locally. Ultimately, that means that information should flow through the entire system, be processed by the cells, and not be destroyed or become incoherent in the process—entropy must lose to work, a relevant property in the theoretical analysis of data processing and storage under noise <cit.>.
For one-dimensional locally interacting systems of autonomous and memoryless cells, the emergence of collective behavior is required in these cases. In this context, GKL model II has been extensively scrutinized as a model system related to the concepts of emergence, communication, efficiency, and connectivity <cit.>. The search for efficient density classifiers is an entire subfield in the theory of cellular automata. The usual candidates are the so-called eroders, a class of CA to which the GKL models belong (see Sec. <ref>). Unfortunately, the general problem of defining or discerning eroders is algorithmically unsolvable <cit.>. As such, the proposition and analysis of particular models has always been carried out with great interest. Current trends, advances, and open problems related to the density classification problem are reviewed in <cit.>.In this paper we characterize the density classification performance of Gacs, Kurdyumov, and Levin's model IV, a four-state cellular automaton with three absorbing states, by numerical simulations. To our knowledge, the model has never received a thorough examination of its basic dynamics and properties since its proposition. We show that GKL model IV compares well with its sibling model II in the density classification task, although it takes longer to converge to the right answer. We also investigate the performance of model IV under the influence of noise and show that, most likely, it displays an ergodic-nonergodic transition at some finite small level of noise, although the evidence is not conclusive.The paper is organized as follows: in Section <ref> we introduce the GKL model IV, describe its transition rules, and discuss some of its properties. In Section <ref> we describe our numerical simulations and discuss the density classification performance of the model in its deterministic version, including a comparison with GKL model II, while in Section <ref> we examine the behavior of the model under the influence of noise. In Section <ref> we summarize our findings and discuss our results. An appendix displays the complete rule table of GKL model IV.§ GACS, KURDYUMOV, AND LEVIN'S CA MODEL IV Gacs, Kurdyumov, and Levin's model IV (GKL-IV for short) is a four-state CA with state space given by Ω_Λ = {→,←,↑,↓}^Λ, with Λ⊆ℤ a finite array of |Λ|=L ≥ 1 cells under periodic boundary conditions, and transition function Φ_IV: Ω_Λ→Ω_Λ that, given the state x^t = (x_1^t, x_2^t, …, x_L^t) of the CA at instant t, determines the state x_i^t+1 = [Φ_IV(x^t)]_i = ϕ_IV(x^t_i-1, x^t_i, x^t_i+1) of the CA at instant t+1 by the rules ϕ_IV(→, x_i, x_i+1) = →, if x_i, x_i+1 ≠ ←, ϕ_IV(x_i-1, →, x_i+1) = ↓ if x_i-1 ∈{←,↑}, → otherwise, ϕ_IV(x_i-1, x_i, x_i+1) = ↑, if x_i ∈{↑,↓} and rule (<ref>) does not apply. Rules (<ref>)–(<ref>) are redundant—for example, the transitions ϕ_IV(→,→,{→,↑,↓}) are defined both by rules (<ref>) and (<ref>)—and incomplete, since they define only 42 of the 64 possible transitions. For example, they do not define the important transition ϕ_IV(←,←,→), see Figure <ref>. The missing transitions are determined by the supplemental reflection rule ϕ_IV(x_i-1, x_i, x_i+1) = ϕ_IV(x^*_i+1, x^*_i, x^*_i-1)^*, with →^* = ←, ←^* = →, ↑^* = ↑, and ↓^* = ↓. The reflection rule supplements rules (<ref>)–(<ref>) in their order of appearance and does not substitute a transition that has already been defined.
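As a concrete companion to these definitions, the following Python sketch assembles the full 64-entry transition table from rules (<ref>)–(<ref>) and the reflection rule; it is an illustration written for this purpose, not the authors' simulation code. One interpretive assumption is made explicit: the proviso of the third rule is read as deferring to the first rule and to its mirror image, which is the reading that reproduces the count of 42 directly defined transitions quoted above and keeps the table reflection-symmetric.

from itertools import product

RIGHT, LEFT, UP, DOWN = 0, 1, 2, 3                      # ->, <-, up, down
STATES = (RIGHT, LEFT, UP, DOWN)
STAR = {RIGHT: LEFT, LEFT: RIGHT, UP: UP, DOWN: DOWN}   # the * conjugation

def right_propagates(a, b, c):
    # first rule: phi(->, x_i, x_{i+1}) = -> if x_i, x_{i+1} != <-
    return a == RIGHT and b != LEFT and c != LEFT

def left_propagates(a, b, c):
    # mirror image of the first rule: phi(x_{i-1}, x_i, <-) = <- if x_{i-1}, x_i != ->
    return c == LEFT and a != RIGHT and b != RIGHT

table = {}
for a, b, c in product(STATES, repeat=3):
    if right_propagates(a, b, c):
        table[(a, b, c)] = RIGHT                            # first rule
    elif b == RIGHT:                                        # second rule
        table[(a, b, c)] = DOWN if a in (LEFT, UP) else RIGHT
    elif b in (UP, DOWN) and not left_propagates(a, b, c):  # third rule, see lead-in
        table[(a, b, c)] = UP
assert len(table) == 42          # the 42 directly defined transitions

# Reflection rule: fill the 22 missing entries, never overwriting a defined one.
for a, b, c in product(STATES, repeat=3):
    if (a, b, c) not in table:
        table[(a, b, c)] = STAR[table[(STAR[c], STAR[b], STAR[a])]]

# Sanity checks against statements in the text:
assert table[(LEFT, LEFT, RIGHT)] == DOWN    # the transition left to reflection
for s in (RIGHT, LEFT, UP):                  # the three absorbing states
    assert table[(s, s, s)] == s
assert table[(DOWN, DOWN, DOWN)] == UP       # (down,down,down) is not absorbing

Under this construction, the transition ϕ_IV(←,←,→) left undefined by the three displayed rules evaluates to ↓ through reflection, consistent with the annihilation mechanism described next.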
Since the GKL-IV rules are somewhat unwieldy, we give the complete rule table of the CA in the appendix.The rationale behind the GKL-IV rules is that of an “eroder.” An eroder CA is capable of erasing “errors” in the initial configuration, which for a density classifier means erasing the symbols of the minority phases. In GKL-IV, this is achieved by the propagation of state → over states ↑ and ↓ from the left, rule (<ref>), and of state ← over states ↑ and ↓ from the right, reflection (<ref>) of rule (<ref>). Rule (<ref>) and its mirror-symmetric counterpart, along with rule (<ref>), define two other processes. The first consists in the annihilation of states ← and → when they are adjacent; the second process consists in the propagation of state ↑ over states → and ← from the right and left, respectively. These processes of annihilation and propagation occur by substitution, as they are intermediated by state ↓, which is then converted to state ↑ in the next time step, leading to the continued propagation of state ↑. Overall, these rules promote the propagation of state ↑ over states → and ← at half the speed of the inverse propagation of states → and ← over state ↑, because of the intermediate step involving state ↓. The only role played by state ↓ in the dynamics of GKL-IV is that of delaying the conversion of states → and ← into state ↑, such that the CA can erode states ↑ and ↓ within islands of the minority phase towards the stationary configuration of the majority state. Figure <ref> displays the eroder mechanism in action in two schematic situations.In <cit.>, the authors state that the states → and ← are attracting states for models II and IV, further noting that “evidently [models II and IV] do not have other attracting states.” It happens, however, that GKL-IV has three absorbing states, as can be seen from the transitions ϕ(→,→,→) = →, ϕ(←,←,←) = ←, and ϕ(↑,↑,↑) = ↑. That the states (→,→,…,→) and (←,←,…,←) are attracting is a theorem of GKL <cit.>, revisited in <cit.>. The state (↑,↑,…,↑), despite being absorbing, may not be attracting, since it may not be true that if we disturb it in finitely many places it will recur in finite time. In our simulations on relatively small systems, however, we observed the convergence of the GKL-IV CA to the state (↑,↑,…,↑) many times. Rough initial estimates indicated that for random, uncorrelated initial configurations in which cells initially get one of the four possible states with equal probabilities, the final configuration converges to (↑,↑,…,↑) about 1% of the time. We thus asked whether GKL-IV can classify initial configurations with a majority of cells in state ↑, even if it was not designed for the task. As we will see in Section <ref>, the answer is nearly never.§ GKL-IV DENSITY CLASSIFICATION PERFORMANCE§.§ GKL-IV vs. GKL-II performance Let N_s be the number of cells in the state s ∈{→, ←, ↑, ↓}, with N_→+N_←+N_↑+N_↓=L, the size of the array. Given an initial assignment of the numbers N_s, the main observables of the CA are the empirical time-dependent numbers of cells in state s, given by N_s(t) = ∑_i=1^L δ(x^t_i,s), where δ(·,·) is the Kronecker delta symbol.
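To make the observables concrete, the sketch below (again illustrative Python, reusing the table dictionary from the previous snippet) performs synchronous updates on a periodic array and computes the occupation numbers N_s(t); the array length and the number of steps are arbitrary choices.

import numpy as np

TABLE = np.empty((4, 4, 4), dtype=np.int8)     # pack the dict for vectorized lookups
for (a, b, c), out in table.items():
    TABLE[a, b, c] = out

def gkl4_step(x):
    """One synchronous GKL-IV step; x holds values in {0,1,2,3} on a ring."""
    return TABLE[np.roll(x, 1), x, np.roll(x, -1)]

def occupations(x):
    """N_s(t) = sum_i delta(x_i, s) for s = right, left, up, down."""
    return np.bincount(x, minlength=4)

rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=400).astype(np.int8)   # uncorrelated random start
for t in range(4 * x.size):
    x = gkl4_step(x)
print(occupations(x))    # typically all 400 cells end up in state right or left

Rerunning this with many seeds on small arrays also reproduces the occasional absorption into the all-↑ configuration mentioned above.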
GKL argued in <cit.> that in the stationary state either the state → or the state ← completely dominates the CA, with the dominance depending on which state, → or ←, respectively, prevails in the initial configuration; states ↑ and ↓ are washed out by the dynamics and do not survive to the stationary state.We assess the density classification performance of the GKL-IV CA by direct simulations as follows. We set the array size L to a multiple of 4 and then assign L/4 randomly chosen cells to each of the states ↑ and ↓, L/4+k cells to state →, and L/4-k cells to state ←, with k an integer parameter that can be varied in the range -L/4 ≤ k ≤ L/4. If GKL-IV can classify density, we expect that when k>0 the stationary state will be the all → state, while when k<0 the stationary state will be the all ← state. We then evolve the CA array and track the empirical densities until at some time t^* either N_→(t^*)=L or N_←(t^*)=L. If initially k>0 (k<0) and the stationary state becomes the all → (respectively, all ←) state, then GKL-IV has classified the initial state successfully; otherwise it has failed. We also consider that the CA failed if after 4L time steps the array did not converge to one of those two states, but this did not happen in our simulations. When k=0, we compute the performance of the GKL-IV array as the fraction of times that it converges to the all → state. The choice between the all → or the all ← states in this case is irrelevant because the GKL-IV rules are reflection-symmetric with respect to these states and our array is periodic; see Table <ref>.For each given k, the performance of the CA is estimated as the fraction p̂ of correct classifications measured over n=10 000 random initial configurations, with standard deviation estimated as √(p̂(1-p̂)/n). The results appear in Figure <ref> for CA of lengths L=400, 800, 1600, and 3200. At the “worst case” of k/L=0 for a CA of L=400 cells we measured ⟨ n ⟩ = 0.503 ± 0.005.For comparison, results for the sibling GKL model II (GKL-II for short) are also displayed in Figure <ref>. GKL-II is a two-state CA that evolves by the following rule <cit.>: if the state x_i^t of the cell at instant t is +1 (or, equivalently, →), then at instant t+1 it takes the state of the majority of itself and the first and the third neighbors to its right; otherwise, if the state of the cell at instant t is -1 (or ←), it takes at instant t+1 the state of the majority of the same neighborhood but in the opposite direction. In symbols, x_i^t+1 = [Φ_II(x^t)]_i = ϕ_II(x_i^t, x_i+s^t, x_i+3s^t) with ϕ_II(x_i^t, x_i+s^t, x_i+3s^t) = maj(x_i^t, x_i+s^t, x_i+3s^t) and s = x_i^t = ± 1. The GKL-II rule is not nearest-neighbor but observes a generalization of the reflection rule (<ref>), to wit, ϕ_II(x_i,x_j,x_k) = ϕ_II(x_σ(i)^*, x_σ(j)^*, x_σ(k)^*)^*, where x^* = -x and σ is any permutation of {i,j,k}. The GKL-II automaton achieves 81.6% performance in a test consisting of classifying something between 10^4 and 10^7 random initial configurations of an array of L=149 (sometimes also 599 and 999) cells close to the “critical density” N_→/L = 1/2 <cit.>. Improvements of the GKL-II rule by humans as well as by genetic and co-evolution programming techniques were able to upgrade the success rate of GKL-II (under the same evaluation protocol) to 86.0% <cit.>. There are other figures published for the GKL-II and other CA (see, e.
g., <cit.>), but the reader must be aware that the direct comparison of CA performance numbers is not straightforward because of the use of different array lengths, sets of initial configurations, and evaluation/measurement protocols. Some of these numbers and issues are reviewed in <cit.>.Note that, since GKL-IV has 4 states instead of the 2 states of GKL-II, one might argue that the proper quantity to be used in the comparison of the two CA would be the relative imbalance between the classifiable states only, which in our case would read k/(L/2) = 2k/L. We have adopted, however, the point of view of a “client application” that wants to classify the majority between two possible states with a CA. From this point of view, it does not matter if the CA has 2 or more states.We see from Figure <ref> that GKL-IV is not a perfect classifier: for random initial configurations, sometimes it converges to the wrong answer. Otherwise, when the imbalance k/L between the → and the ← states in the initial state is larger than ∼ 2%, the GKL-IV classification performance exceeds 95%, an excellent result. This performance is considerably better than the one for GKL-II, which at k/L=2% is only about ∼ 85% and reaches the 95% mark only for k/L ≳ 3%.It should be remarked that the density classification task cannot be achieved without misclassifications by any single locally interacting two-state cellular automaton. Indeed, under the requirement that all the cells of the automaton must converge to the same state as the majority state in the initial configuration, no automaton can achieve 100% efficiency <cit.>. These results establish the need for probabilistic rules and imperfect quality measurements.The panels on the right in Figure <ref> display the time needed to converge to the stationary state as a function of k/L. We see that as the imbalance becomes smaller the time to converge grows, but it never grows more than linearly with the size of the array. Even for large instances of the problem (large L) in the difficult region k/L ≪ 1, GKL-IV converges fast to the solution. In this regard, however, GKL-II exceeds GKL-IV almost by a factor of 3. It seems that the additional states of GKL-IV provide more “error-correction,” while at the same time retarding the convergence to the majority state. §.§ Influence of the nonattractive state (↑,↑,…,↑) We found that GKL-IV sometimes converges to the all ↑ state, which is also one of its absorbing configurations. For relatively small array sizes and random initial configurations in which each cell starts in one of the four possible states with equal probability 1/4, we observed that the array converges to the state (↑,↑,…,↑) approximately 1% of the time. To quantify this behavior, we performed the following numerical experiment: we initially assign a fraction 1/4 ≤ f ≤ 1 of the L cells to state ↑ and distribute the remaining (1-f)L cells randomly among the other three possible states—such that N_↑(0)=fL exactly and, on average, N_s(0)=1/3(1-f)L for each of the other possible states—, evolve the dynamics and observe the approach to stationarity. The results are summarized in Figure <ref>. As we can see from that figure, even with as much as 95% of the cells initially in the state ↑, GKL-IV cannot really classify initial states with a majority of cells in the state ↑ except for the smallest arrays—and even in these cases, only very badly, at a rate of ∼ 10%. Data suggest that as L ↗∞ the stationary state is never (↑,↑,…,↑) unless f=1 exactly.
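A condensed sketch of this last experiment, using the gkl4_step function and state constants from the previous snippets, is given below; the system size and number of trials are kept small for illustration and are not the values behind the figures in the text.

import numpy as np

def all_up_rate(L, f, n_trials, seed=0):
    """Fraction of runs absorbed into the all-up state when N_up(0) = f*L exactly."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        x = np.empty(L, dtype=np.int8)
        n_up = int(f * L)
        x[:n_up] = UP                                    # exactly f*L up cells
        x[n_up:] = rng.choice([RIGHT, LEFT, DOWN], size=L - n_up)
        rng.shuffle(x)
        for _ in range(4 * L):
            x = gkl4_step(x)
            if np.all(x == x[0]):                        # an absorbing state was hit
                break
        hits += bool(np.all(x == UP))
    return hits / n_trials

print(all_up_rate(L=100, f=0.95, n_trials=200))          # small even at f = 0.95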
The occasional convergence to the stationary state of all-↑ arrows is thus a feature of finite, small systems.A closely related question is whether the density of states ↑ (and perhaps ↓) impacts the classification performance of GKL-IV. We assess this impact by measuring the classification performance of GKL-IV as follows: given an initial assignment of fL cells in state ↑, the remaining (1-f)L cells are divided between 1/3(1-f)L cells in state ↓, 1/3(1-f)L+k cells in state → and 1/3(1-f)L-k cells in state ←, with 1/4 ≤ f ≤ 1 such that initially N_↑ ≥ N_↓. We then measure the performance of the CA in terms of f and k/L for a fixed L=3192 (which is close to 3200 but is divisible by 3 and 4). The results appear in Figure <ref>. We see that the impact of the number of ↑ in the initial state is noticeable, with an unexpected improvement in the density classification performance of GKL-IV at higher densities of ↑ arrows in the initial state near the “critical density” k/L=0. For example, at k/L=0.25%, the performance of GKL-IV increases from ∼ 67% at f=0.3 to ∼ 82% at f=0.8. As the imbalance k/L becomes larger, the performance gain tapers off, but remains measurable. The same phenomenon was observed when the initial state is diluted in ↓ arrows. In both cases, the performance remains larger than the one measured by the protocol of Section <ref>, which corresponds to N_↑ = L/4 (i.e., f=0.25), displayed in Figure <ref> as thick solid black lines. One possible explanation for this improvement is that upon dilution in a field of ↑ or ↓ arrows, extended regions of ← and → states separated by domain walls of the type ← ← → or → → ←—configurations that may lead to misclassifications in the long run, see Figure <ref>—hardly form. Other GKL-IV processes certainly play a role in this behavior, although their identification is not immediate.§ THE GKL-IV MODEL UNDER NOISE A CA with rules depending on a random variable becomes a probabilistic CA (PCA) <cit.>. Let us denote the probabilistic transition function of the GKL-IV model under noise by Φ_IV^(α), where the real parameter α ∈ [0,1] denotes the level of noise imposed on the dynamics. If α = 0, GKL-IV becomes the deterministic CA given by transition rules (<ref>)–(<ref>); otherwise, with probability α > 0 the transition rules fail in some specific manner, leading to an evolved state that may be at variance with the one prescribed by the deterministic transition rules. In their paper <cit.>, GKL considered mainly random writing errors: at every time step, with probability 1-α the transition follows rules (<ref>)–(<ref>) and with probability α the final state is chosen at random with equal probabilities. In other words, for the GKL-IV model under noise level α, at every time step the probability of writing the new state to a cell according to the rules is (1-α)+α/4, while the probability of doing it incorrectly is 3α/4.A PCA is ergodic if it eventually forgets its initial state, meaning that it has a unique invariant measure—a unique probability distribution of states over the configuration space of the model that does not change under the dynamics <cit.>. Remarkably, by means of numerical experiments GKL found evidence that GKL-IV may be nonergodic below a certain small level of noise α^* ≈ 0.05 <cit.>. If true, this would provide a counterexample to the positive probabilities conjecture, according to which all one-dimensional PCA with positive rates, short-range interactions and finite local state space are ergodic.
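In code, the random writing errors described above amount to one extra step layered on top of the deterministic update of the earlier snippets; the sketch below is again illustrative, not the authors' implementation. With probability α a cell's freshly written state is replaced by a uniform draw from the four states, so the correct symbol survives with probability (1-α)+α/4.

import numpy as np

def gkl4_noisy_step(x, alpha, rng):
    """One PCA step: deterministic GKL-IV update plus random writing errors."""
    y = gkl4_step(x)
    hit = rng.random(x.size) < alpha             # cells struck by noise
    y[hit] = rng.integers(0, 4, size=int(hit.sum()))
    return y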
The positive probabilities conjecture is deeply rooted in the theory of Markov processes and has a counterpart in the well-known statistical physics lore that one-dimensional systems do not display phase transitions at finite (T>0) temperature <cit.>. It took nearly three decades to disprove this conjecture in general <cit.>, while counterexamples also appeared in the physics literature <cit.>. The roles played by the size of the rule spaces, symmetries, number of absorbing states, irreversibility, and the thermodynamic limit in the phenomenon are still under debate. §.§ Empirical stationary density Little is known about the ergodicity of GKL-IV beyond the loose estimate α^* ≈ 0.05 mentioned in <cit.>, in contrast with the same problem for GKL-II <cit.>. To improve this situation, we performed relatively large simulations of GKL-IV under noise to verify whether there may be some sort of ergodic-nonergodic transition upon the variation of α.Our simulations ran as follows. For a given level of noise α, we initialize a PCA of length L=400 with all cells in the state →, relax the initial state for 4L time steps, and start sampling the number N_→ of cells in the state → in the PCA every 5 time steps.We choose 4L for the initial relaxation time because without noise GKL-IV converges to the absorbing state from an arbitrary initial configuration in at most ∼ 2L time steps, see Figure <ref>. The choice of 4L time steps for relaxation seems reasonable, since we are already starting from an absorbing configuration.We collected 100 million samples of the stationary state for each level of noise in the range 0 < α ≤ 0.15 in steps of Δα=0.001, and the results are displayed as a probability density plot in Figure <ref>. Each vertical line (fixed α) in Figure <ref> is a histogram of bin size 1/L and unit area. Note that the lack of points scattered near ⟨N_→⟩/L=1 except for the smallest values of α in Figure <ref> corroborates ex post the choice of 4L time steps for the initial relaxation time. In fact, the initial relaxation time is utterly irrelevant—the worst that it can do is to contribute a couple of hundred “bad samples” to the set of 100 million samples for each value of α.Figure <ref> clearly displays the two extreme behaviors expected of the noisy GKL-IV. When α=0, the distribution of ⟨N_→⟩/L is a zero-width distribution concentrated at 1. At the other extreme, when the level of the noise is high, in our case α ≳ 0.12, all the states become equiprobable and the density ⟨N_→⟩/L concentrates around 1/4 with a more or less symmetric distribution that becomes sharper as α increases. The difficult question is whether there is a finite positive α^*>0 such that the noisy GKL-IV is ergodic above α^* and nonergodic below it. Figure <ref> indicates that the probability distribution of ⟨N_→⟩/L becomes bimodal at α ≈ 0.05, with one peak concentrated near the majority of states → and the other peak near the majority of states ←, becoming narrower as α ↘ 0. We also see that flipping between the two majority phases ceases completely at α ≈ 0.016, at least within the span of 5 × 10^8 time steps for each given α of our simulations. These seem to indicate that the noisy GKL-IV may be nonergodic for some finite α ≲ 0.016.Figure <ref> depicts typical space-time diagrams of the noisy GKL-IV with L=400 cells for some selected levels of noise. In diagram A (α=0.012, upper left corner), the state of the PCA just fluctuates about the majority state of → to which it would have converged if it were not for the noise.
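Before examining the space-time diagrams in more detail, here is a condensed sketch of the sampling protocol behind the histograms above (all-→ start, 4L relaxation steps, one sample of N_→/L every 5 steps), built on the noisy update of the previous snippet; the sample count is kept tiny here, whereas the data in the text use 10^8 samples per noise level.

import numpy as np

def sample_density(L, alpha, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(L, RIGHT, dtype=np.int8)
    for _ in range(4 * L):                    # discard the relaxation stage
        x = gkl4_noisy_step(x, alpha, rng)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        for _ in range(5):                    # sample every 5 time steps
            x = gkl4_noisy_step(x, alpha, rng)
        samples[i] = np.count_nonzero(x == RIGHT) / L
    return samples

# At strong noise the empirical density concentrates around 1/4:
print(sample_density(400, alpha=0.14, n_samples=2000).mean())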
Still in diagram A, small islands of contiguous ↑ (white) states that form could in principle foster the spread of the minority state ← (purple), but these islands are too small and short-lived to make any difference. Ergodicity at this level of noise would imply that an island of the minority phase (or of the ↑ (white) state) large enough to thrive in the background of the majority phase and noise can form randomly—an exceedingly unlikely event. The overall result is a spotted spatiotemporal pattern of the majority state that on average occupies ∼ 95% of the cells.As the noise α increases, larger islands of ↑ (white) states form and the states → (yellow) and ← (purple) tend to coexist for longer periods. In diagram B (α=0.05, upper right corner of Figure <ref>), we see larger islands of ↑ (white) states allowing ← (purple) states to spread to the left until being annihilated. Eventually, however, those sliding ← (purple) domains meet to form bigger ones, survive for longer periods and become the majority state. Such an event was captured in diagram C (α=0.065, lower left corner of Figure <ref>). Note how the majority of states → (yellow) in the top of the diagram is superseded by the ← (purple) states after some time; at the bottom of diagram C the ← (purple) states occupy ∼ 60% of the cells. The “border” in the picture is a remnant of the initial condition; if we followed the evolution of the PCA for a little longer, we would see a more homogeneous mixture of states. The PCA has flipped between two majority phases, an indication that at α=0.065 it is ergodic.Finally, under the presence of strong noise, the PCA loses almost all structure except very locally and for short times. This can be seen in diagram D (α=0.14, lower right corner of Figure <ref>). Although islands of the attractive states → (yellow) and ← (purple) endure more than islands of the other two states (and this is particularly true of the ↓ (black) states), on average all four states are present approximately in the same amount. Note, in Figure <ref>, how the stationary probability density at C still displays a bimodal profile, while at D it is clearly a single-peaked distribution centered at ∼ 1/4. §.§ Flipping times It is possible to qualitatively spot an ergodic phase by the analysis of the flipping times between the different stationary configurations of the model. The idea is that this quantity diverges as “potential barriers” grow between the metastable configurations of the system as it gets larger, with the system getting trapped deeper and deeper inside one configuration until ultimately ergodicity is broken in the limit of a system of infinite length. Based on an analogy between the flipping time τ(L,α) between the majority phases of a PCA of L cells subject to noise level α and the correlation length ξ(L,T) of a 2D equilibrium interacting classical spin model of linear size L at temperature T (see <cit.> for details), we expect that τ(L,α) ∼ exp[u(L,α)]. A nonergodic dynamics implies that τ(L,α) diverges as L ↗∞, while for an ergodic dynamics u(L,α) remains bounded in L, signaling that the PCA forgets about its initial condition in finite time, wandering over the entire configuration space and making the invariant measure unique. Clearly, for GKL-IV τ(L,α) must diverge at α=0. Based on these observations, in a nonergodic phase we must have, to first order, u(L,α) ∼ b(L)/α for α ↘ 0 and fixed L, and u(L,α) ∼ c(α)L for fixed α and L ↗∞, where b(L) and c(α) are bounded functions of their arguments.We measured τ(L,α) for the noisy GKL-IV as follows.
For a given level of noise α, we initialize the PCA with all cells in the state → and run the dynamics until a state with a majority of cells in the state ← is observed (N_←(τ) > L/2), signaling that the PCA has crossed the “potential barrier.” We choose the initial configuration with all cells in the state → because, being an attractive and absorbing state, it provides the “worst case scenario” if the PCA has to reach a configuration with a majority of ←. We then obtain the flipping time τ(L,α) for each given L and α as an average over 1000 such hitting times. In our simulations 80 ≤ L ≤ 400 and 0.024 ≤ α ≤ 0.100. We did not take measurements under α ≲ 0.02 because it would take several thousand hours (months, literally) of CPU time on modern workstations to obtain one point. Note that we wrote low-level C code to run the simulations, and that even the pseudo-random number generator was chosen to run as fast as possible (we employ Vigna's superb generator <cit.>). The relatively small L also allows us to investigate the flipping times without having to wait too much to observe the flips. Our results appear in Figures <ref> and <ref>.The behavior of τ(L,α) with α seems to indicate that the PCA is nonergodic at least up to α ≲ 0.05. Otherwise, we do not observe any sign of divergence of τ(L,α) with increasing L up to L=400 and down to α=0.025, indicating that the PCA is likely to be ergodic in these regions of parameters. The best that we can do with these mixed signals, then, is to combine the bound α^* ≤ 0.025 provided by the behavior of τ(L,α) with L together with the bound α^* ≲ 0.016 provided by Figure <ref> to set an upper bound on the critical level of noise separating the ergodic from the nonergodic phase of GKL-IV, if any, at α^* ≈ 0.016.§ SUMMARY AND CONCLUSIONS We found that the GKL-IV model performs well in the density classification problem, with a performance comparable to that of the more well-known model GKL-II. In fact, GKL-IV performs slightly better at the task, even having more states to deal with. The additional states ↑ and ↓ enable GKL-IV to annihilate isolated ← and → states and create local islands of majority states ↑ and ↓ that are then eroded from the boundaries by means of transitions involving the states ← and → that propagate twice as fast as the former processes, thus leading the CA to converge to the majority state among these states. Its somewhat elaborate eroder mechanism turns out to be very effective. Surprisingly, we also found in Section <ref> that dilution of the state to be classified by random insertion of ↑ or ↓ states enhances the performance of the CA. This suggests a procedure to boost the performance of the CA in more difficult situations of small imbalance between the number of states ← and →: enlarge the CA array (say, by 50%) with randomly inserted ↑ arrows, at the cost of increased time to complete the classification. On the negative side, GKL-IV takes longer (almost 3 times more, see Figure <ref>) to reach consensus. If performance is to be preferred over speed, however, then GKL-IV is a better classifier than GKL-II at only a moderate increase in runtime.We also investigated the performance of GKL-IV under the influence of noise and found signs of an ergodic-nonergodic phase transition at some small finite positive level of noise.
Indeed, from Figure <ref> we see that the stationary density of → states clearly becomes bimodal below α ≈ 0.05, indicating that GKL-IV apparently becomes nonergodic for levels of noise below this value, but the exact location of α^* is not very clear from that figure. The behavior of the flipping times τ(L,α) displayed in Figures <ref> and <ref> also indicates that GKL-IV is probably nonergodic for α < 0.05, although the behavior of this quantity with the system size L indicates that the system is ergodic at least down to α ≈ 0.025. Combining this somewhat conflicting information together with the fact that flipping between the two majority phases ceases completely at about α ≈ 0.016, the best that we can do is to set α^* ≲ 0.016 as an upper bound on the critical point for a putative ergodic-nonergodic phase transition of GKL-IV. Note that estimates of α^* from Figures <ref>, <ref> and <ref> are affected by the finite size of the system and the finite time of the simulations. Our data indicate that the noisy GKL-IV may be nonergodic but are not conclusive; larger systems simulated for longer periods could tell better.In <cit.> the authors advanced the idea that the noisy GKL-IV (as well as its siblings GKL-II and GKL-VI) is “quasinonergodic,” in the sense that while the models are ergodic for any α>0, convergence to the unique invariant measure is extremely slow. The behavior displayed by ⟨N_→⟩/L and τ(L,α) in Figures <ref>, <ref> and <ref> supports this idea. It should be remarked that while the ergodicity of one-dimensional deterministic CA is in general undecidable, most PCA are believed to be ergodic, with the notable exception of Gacs' very complicated (and still controversial) counterexample <cit.>. We established an upper bound on the critical level of noise of GKL-IV above which it becomes ergodic. Whether this critical level is smaller or zero remains an open question.The GKL-II and, principally, GKL-IV CA and PCA deserve more analytical studies. We believe that already at the level of the single-cell mean-field approximation <cit.> the equations may reveal an interesting structure. In the same vein, a study of the spreading of damage <cit.> in the GKL-II and IV PCA may help to clarify the rate of convergence of the dynamics to the stationary states and help to understand CA and PCA that are able to classify density.We thank Peter Gacs (BU) for useful correspondence, library specialist Angela K. Gruendl (UIUC) for excellent service in providing a copy of reference <cit.>, and Yeva Gevorgyan (USP) for help with the literature in Russian. This work was partially supported by FAPESP (Brazil) under grant nr. 2015/21580-0 (JRGM) and CNPq (Brazil) through Ph.D. grant nr. 140684/2016-6 (REOS).*§ COMPLETE GKL-IV RULE TABLE We had to tinker a bit with GKL-IV before getting its rule table right, so we share the result of our labor here. Table <ref> displays all the elementary transitions of GKL-IV according to rules (<ref>)–(<ref>) supplemented by their reflections (<ref>), as described in Section <ref>.kurdyumovG. L. Kurdyumov, An example of a nonergodic homogeneous one-dimensional random medium with positive transition probabilities, Sov. Math. Dokl. 19 (1), 211–214 (1978).gklP. Gach, G. L. Kurdyumov, and L. A. Levin, One-dimensional uniform arrays that wash out finite islands, Probl. Inf. Transm. 14 (3), 223–226 (1978).dobrushinN. B. Vasil'ev, R. L. Dobrushin, and I. I.
Pyatetskii-Shapiro, Markov processes on an infinite product of discrete spaces (in Russian), Soviet-Japanese Symposium on Probability Theory (Reports), Khabarovsk, USSR, August 1969 (AN SSSR, Novosibirsk, 1969), Vol. 2: Soviet Papers, pp. 3–30.petrovskayaN. B. Vasil'ev, M. B. Petrovskaya, and I. I. Pyatetskii-Shapiro, Modelling of voting with random error, Automat. Remote Control 30 (10), 1639–1642 (1970).vasersteinL. N. Vaserstein, Markov processes over denumerable products of spaces, describing large systems of automata, Probl. Inf. Transm. 5 (3), 47–52 (1969).kuznetsovA. V. Kuznetsov, Information storage in a memory assembled from unreliable components, Probl. Inf. Transm. 9 (3), 254–264 (1973).stavskayaO. N. Stavskaya, Gibbs invariant measures for Markov chains on finite lattices with local interaction, Math. USSR Sbornik 21 (3), 395–411 (1973).toomA. L. Toom, Nonergodic multidimensional system of automata, Probl. Inf. Transm. 10 (3), 239–246 (1974).matzakO. N. Stavskaya, Sufficient conditions for the uniqueness of a probability field and estimates for correlations, Math. Not. Acad. Sci. USSR 18 (4), 950–956 (1975).tsirelsonB. S. Cirel'son, Reliable storage of information in a system of unreliable components with local interactions, in Locally Interacting Systems and their Application in Biology, Proc. School-Seminar on Markov Interaction Processes in Biology, Pushchino, USSR, 1976, edited by R. L. Dobrushin, V. I. Kryukov, and A. L. Toom, LNM 653 (Springer, New York/Berlin, 1978), pp. 15–30.multiA. L. Toom, Stable and attractive trajectories in multicomponent systems, in Advances in Probability Vol. 6, edited by R. L. Dobrushin and Ya. G. Sinai (Marcel Dekker, New York, 1980), pp. 549–576.russkiyeA. L. Toom, N. B. Vasilyev, O. N. Stavskaya, L. G. Mityushin, G. L. Kurdyumov, and S. A. Pirogov, Discrete local Markov systems, in Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis, edited by R. L. Dobrushin, V. I. Kryukov, and A. L. Toom (Manchester University Press, Manchester, 1990), pp. 1–182.holleyR. Holley and D. W. Stroock, In one and two dimensions, every stationary measure for a stochastic Ising model is a Gibbs state, Commun. Math. Phys. 55 (1), 37–45 (1977).bennett85C. H. Bennett and G. Grinstein, Role of irreversibility in stabilizing complex and nonergodic behavior in locally interacting discrete systems, Phys. Rev. Lett. 55 (7), 657–660 (1985).bennett90C. H. Bennett, Dissipation, anisotropy, and the stabilization of computationally complex states of homogeneous media, Physica A 163 (1), 393–397 (1990).grinsteinG. Grinstein, Can complex structures be generically stable in a noisy world?, IBM J. Res. & Dev. 48 (1), 5–12 (2004).cuestaJ. A. Cuesta and A. Sánchez, General non-existence theorem for phase transitions in one-dimensional systems with short range interactions, and physical examples of such transitions, J. Stat. Phys. 115 (3–4), 869–893 (2004).grinjayaG. Grinstein and C. Jayaprakash, Statistical mechanics of probabilistic cellular automata, Phys. Rev. Lett. 55 (23), 2527–2530 (1985).ledoussalA. Georges and P. Le Doussal, From equilibrium spin models to probabilistic cellular automata, J. Stat. Phys. 54 (3), 1011–1064 (1989).lebowitzJ. L. Lebowitz, C. Maes, and E. R. Speer, Statistical mechanics of probabilistic cellular automata, J. Stat. Phys. 59 (1), 117–190 (1990).maesP. Gonzaga de Sá and C. Maes, The Gacs-Kurdyumov-Levin automaton revisited, J. Stat. Phys. 67 (3–4), 507–522 (1992).reliableP. Gács, Reliable computation with cellular automata, J.
Comput. Syst. Sci. 32 (1), 15–78 (1986).gacsP. Gács, Reliable cellular automata with self-organization, J. Stat. Phys. 103 (1–2), 45–267 (2001).grayL. F. Gray, A reader's guide to Gács's `positive rates' paper, J. Stat. Phys. 103 (1–2), 1–44 (2001).evansM. R. Evans, D. P. Foster, C. Godreche, and D. Mukamel, Spontaneous symmetry breaking in a one dimensional driven diffusive system, Phys. Rev. Lett. 74 (2), 208–211 (1995).kafriM. R. Evans, Y. Kafri, H. M. Koduvely, and D. Mukamel, Phase separation in one-dimensional driven diffusive systems, Phys. Rev. Lett. 80 (3), 425–429 (1998).bjp30M. R. Evans, Phase transitions in one-dimensional nonequilibrium systems, Braz. J. Phys. 30 (1), 42–57 (2000).rakosA. Rákos and M. Paessens, Ergodicity breaking in one-dimensional reaction-diffusion systems, J. Phys. A: Math. Gen. 39 (13), 3231–3252 (2006).paessensA. Rákos, M. Paessens, and G. M. Schütz, Broken ergodicity in driven one-dimensional particle systems with short-range interaction, Markov Proc. Rel. Fields 12, 309–322 (2006).mitchell93M. Mitchell, P. T. Hraber, and J. P. Crutchfield, Revisiting the edge of chaos: Evolving cellular automata to perform computations, Complex Systems 7 (2), 89–130 (1993).crutchJ. P. Crutchfield and M. Mitchell, The evolution of emergent computation, Proc. Natl. Acad. Sci. USA 92 (23), 10742–10746 (1995).landbelewM. Land and R. K. Belew, No perfect two-state cellular automata for density classification exists, Phys. Rev. Lett. 74 (25), 5148–5150 (1995).fuksH. Fukś, Solution of the density classification problem with two cellular automata rules, Phys. Rev. E 55 (3), R2081–R2084 (1997).sipcapronM. Sipper, M. S. Capcarrere, and E. Ronald, A simple cellular automaton that solves the density and ordering problems, Int. J. Mod. Phys. C 9 (7), 899–902 (1998).juilleH. Juillé and J. B. Pollack, Coevolving the “ideal” trainer: Application to the discovery of cellular automata rules, in Genetic Programming 1998, Proc. Third Annual Genetic Programming Conference, July 22–25, 1998, University of Wisconsin, Madison, edited by J. R. Koza, W. Banzhaf, K. Chellapilla, M. Dorigo, D. B. Fogel, M. H. Garzon, D. E. Goldberg, H. Iba, and L. Riolo (Morgan Kaufmann, San Francisco, CA, 1998), pp. 519–527.booleanB. Mesot and C. Teuscher, Deducing local rules for solving global tasks with random Boolean networks, Physica D 211 (1–2), 88–106 (2005).amaralA. A. Moreira, A. Mathur, D. Diermeier, and L. A. N. Amaral, Efficient system-wide coordination in noisy environments, Proc. Natl. Acad. Sci. USA 101 (33), 12085–12090 (2004).mitchell05M. Mitchell, Computation in cellular automata: A selected review, in T. Gramß, S. Bornholdt, M. Groß, M. Mitchell, and T. Pellizzari, Non-Standard Computation: Molecular Computation – Cellular Automata – Evolutionary Algorithms – Quantum Computers (Wiley-VCH, Weinheim, 2005), pp. 95–140.moreiraS. M. D. Seaver, A. A. Moreira, M. Sales-Pardo, R. D. Malmgren, D. Diermeier, and L. A. N. Amaral, Micro-bias and macro-performance, Eur. Phys. J. B 67 (3), 369–375 (2009).bricenoR. Briceño, P. M. de Espanés, A. Osses, and I. Rapaport, Solving the density classification problem with a large diffusion and small amplification cellular automaton, Physica D 261, 70–80 (2013).stoneC. Stone and L. Bull, Evolution of cellular automata with memory: The density classification task, BioSystems 97 (2), 108–116 (2009).regnaultD. Regnault, Proof of a phase transition in probabilistic cellular automata, in Developments in Language Theory – DLT 2013, Proc.
17th International Conference, Marne-la-Vallée, France, June 18–21, 2013, edited by M.-P. Beal and O. Carton, LNCS 7907 (Springer, Berlin, 2013), pp. 433–444.taati15S. Taati, Restricted density classification in one dimension, in Cellular Automata and Discrete Complex Systems – AUTOMATA 2015, Proc. 21st IFIP WG 1.5 International Workshop, Turku, Finland, June 8–10, 2015, edited by J. Kari, LNCS 9099 (Springer, Berlin, 2015), pp. 238–250.stavsJ. R. G. Mendonça, Monte Carlo investigation of the critical behavior of Stavskaya's probabilistic cellular automaton, Phys. Rev. E 83 (1), 012102 (2011).assemblyJ. R. G. Mendonça, Sensitivity to noise and ergodicity of an assembly line of cellular automata that classifies density, Phys. Rev. E 83 (3), 031112 (2011).Wolnik2017B. Wolnik, M. Dembowski, W. Bołt, J. M. Baetens, and B. De Baets, Density-conserving affine continuous cellular automata solving the relaxed density classification problem, J. Phys. A: Math. Theor. 50 (34), 345103 (2017).fatesN. Fatès, Stochastic cellular automata solutions to the density classification problem – When randomness helps computing, Theory Comput. Syst. 53 (2), 223–242 (2013).ppbortotP. P. B. de Oliveira, J. C. Bortot, and G. M. B. Oliveira, The best currently known class of dynamically equivalent cellular automata rules for density classification, Neurocomputing 70 (1–3), 35–43 (2006).ppreviewP. P. B. de Oliveira, On density determination with cellular automata: Results, constructions and directions, J. Cell. Autom. 9 (5–6), 357–385 (2014).mairesseJ. Mairesse, I. Marcovici, Around probabilistic cellular automata, Theor. Comput. Sci. 559, 42–72 (2014).busicA. Bušić, J. Mairesse, and I. Marcovici, Probabilistic cellular automata, invariant measures, and perfect sampling, Adv. Appl. Probab. 45 (4), 960–980 (2013).marcoviciI. Marcovici, Ergodicity of noisy cellular automata: The coupling method and beyond, in Pursuit of the Universal – CiE 2016, Proc. 12th Conference on Computability in Europe, Paris, France, June 27–July 1, 2016, edited by A. Beckmann, L. Bienvenu, and N. Jonoska, LNCS 9709 (Springer, Cham, 2016), pp. 153–163.taati17I. Marcovici, M. Sablik, and S. Taati, Ergodicity of some classes of cellular automata subject to noise, arXiv:1712.05500 [math.PR] (2017).robertoR. Fernández, P.-Y. Louis, and F. R.Nardi, Overview: PCA models and issues, in Probabilistic Cellular Automata, Emergence, Complexity and Computation Vol. 27, edited by P.-Y. Louis and F. R. Nardi (Springer, Cham, 2018), pp. 1–30.mfrsosJ. R. G. Mendonça, Mean-field critical behavior and ergodicity break in a nonequilibrium one-dimensional RSOS growth model, Int. J. Mod. Phys. C 23 (3), 1250019 (2012).xoroshiroS. Vigna, xoroshiro+/xorshift*/xorshift+ generators and the PRNG shootout (2018). <http://xoroshiro.di.unimi.it/> (accessed 10 March 2018).dickmanJ. Marro and R. Dickman, Nonequilibrium Phase Transitions in Lattice Models (Cambridge University Press, Cambridge, 1999).bollobasP. Balister, B. Bollobás, and R. Kozma, Large deviations for mean field models of probabilistic cellular automata, Random Struct. Algor. 29 (3), 399–415 (2006).kohringG. A. Kohring and M. Schreckenberg, The Domany-Kinzel cellular automaton revisited, J. Phys. I (France) 2 (11), 2033–2037 (1992).tomeT. Tomé, Spreading of damage in the Domany-Kinzel cellular automaton: A mean-field approach, Physica A 212 (1–2), 99–109 (1994). | http://arxiv.org/abs/1703.09038v3 | {
"authors": [
"J. Ricardo G. Mendonça",
"Rolf E. O. Simões"
],
"categories": [
"cond-mat.stat-mech",
"nlin.CG"
],
"primary_category": "cond-mat.stat-mech",
"published": "20170327124149",
"title": "Density classification performance and ergodicity of the Gacs-Kurdyumov-Levin cellular automaton model IV"
} |
Token-based Function Computation with Memory
Saber Salehkaleybar, Student Member, IEEE, and S. Jamaloddin Golestani, Fellow, IEEE Dept. of Electrical Engineering, Sharif University of Technology, Tehran, Iran Emails: [email protected], [email protected] December 30, 2023 ============================================================================================================================================================================================================================================= In distributed function computation, each node has an initial value and the goal is to compute a function of these values in a distributed manner. In this paper, we propose a novel token-based approach, which we refer to as the “Token-based function Computation with Memory” (TCM) algorithm, to compute a wide class of target functions. In this approach, node values are attached to tokens that travel across the network. Each pair of travelling tokens coalesces upon meeting, forming a token with a new value that is a function of the original token values. In contrast to the Coalescing Random Walk (CRW) algorithm, where token movement is governed by random walk, the meeting of tokens in our scheme is accelerated by adopting a novel chasing mechanism. We prove that, compared to the CRW algorithm, the TCM algorithm reduces the time complexity by a factor of at least √(n/log(n)) in Erdös-Renyi and complete graphs, and by a factor of log(n)/log(log(n)) in torus networks. Simulation results show that there is at least a constant factor improvement in the message complexity of the TCM algorithm in all considered topologies. The robustness of the CRW and TCM algorithms in the presence of node failure is analyzed. We show that their robustness can be improved by running multiple instances of the algorithms in parallel.
§ INTRODUCTION Distributed function computation is an essential building block in many network applications where it is required to compute a function of the initial values of the nodes in a distributed manner. For instance, in wireless sensor networks, distributed inference algorithms can be executed by computing the average of the sensor measurements as a subroutine. Examples of distributed inference in sensor networks include transmitter localization <cit.>, parameter estimation <cit.>, and data aggregation <cit.>. As another application, consider a network with n processors in which each processor has a local utility function and the goal is to obtain the optimal solution of the sum of the utility functions subject to some constraints. This problem arises frequently in network optimization algorithms such as distributed learning <cit.>, link scheduling <cit.>, and network utility maximization <cit.>. All these algorithms utilize a distributed sum or average computation subroutine in solving the optimization problems. Consider the problem of computing a target function f_n(v_1^0,⋯,v_n^0) in a network with n nodes, where v_i^0 is the initial value of node i. A common approach is based on constructing spanning trees <cit.>. In this solution, the values are sent toward the root, where the final result is computed and then sent back to all nodes over the spanning tree.
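To make the tree-based approach concrete, here is a minimal Python sketch of a convergecast followed by a downward broadcast over a rooted spanning tree. This is our own illustration — the rooted-tree representation, function names, and example values are assumptions, not taken from the cited solutions.

```python
# Hypothetical sketch of spanning-tree aggregation (convergecast + broadcast).
# `children` maps each node to its children in a rooted spanning tree;
# `g` is an associative updating rule (e.g. addition for the sum function).

def aggregate(node, children, values, g):
    """Return the target function evaluated over the subtree rooted at `node`."""
    result = values[node]
    for child in children.get(node, []):
        result = g(result, aggregate(child, children, values, g))
    return result

def broadcast(node, children, result, outputs):
    """Send the final result back down the tree."""
    outputs[node] = result
    for child in children.get(node, []):
        broadcast(child, children, result, outputs)

# Example: sum over a 4-node star rooted at node 0.
children = {0: [1, 2, 3]}
values = {0: 5, 1: 2, 2: 7, 3: 1}
outputs = {}
total = aggregate(0, children, values, lambda a, b: a + b)
broadcast(0, children, total, outputs)
assert all(v == 15 for v in outputs.values())
```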
Although the spanning tree-based solution is quite efficient in terms of message and time complexities, it is not robust against network perturbations such as node failures or time-varying topologies. For example, the final result may be dramatically corrupted if a node close to the root fails. To overcome the above drawback of spanning tree-based solutions, recent approaches take advantage of local interactions between nodes <cit.>. In these approaches, each node i which has a value chooses one of its neighbors, say node j; the two nodes then update their values based on a predefined rule function g(.,.) which is determined by the target function f_n(.) (see Lemma <ref>). By iterating this process over the entire network, the target function is computed in a distributed manner. Let v_i and v_j be the current values of nodes i and j, respectively. Two possible options for executing the rule function g(v_i,v_j) are: 1) v_i^+=v_j^+=g(v_i,v_j), 2) v_i^+=e, v_j^+=g(v_i,v_j), where v_i^+ and v_j^+ are the updated values of nodes i and j, respectively. The value e is the identity element of the rule function g(.,.), i.e. g(v,e)=g(e,v)=v for any value v. The first option in (<ref>) corresponds to the class of distributed algorithms commonly called gossip algorithms <cit.>. The main advantage of these algorithms is that they are robust against network perturbations due to their simple structure. However, this robust structure is obtained at the expense of huge time and message complexities <cit.>. For the first option, various updating rule functions have been proposed for specific target functions like average <cit.>, min/max, and sum <cit.>. For instance, the updating rules g(v_i,v_j)=(v_i+v_j)/2 and g(v_i,v_j)=min(v_i,v_j) can be used to compute the average and min functions, respectively. The second updating option can compute a wide class of target functions, including the ones computable by gossip algorithms (see Lemma <ref>), and it is much more energy-efficient than the gossip algorithms <cit.>. This approach can be easily implemented by a token-based algorithm: Suppose that each node has a token at the beginning of the algorithm and passes its initial value to its token. A node is said to be inactive when it does not have a token. When the local clock of an active node, say node i, ticks, the node chooses a random neighbor, say node j, and sends its token carrying its value. Upon receiving the token, node j updates its value and becomes active (if it is not already)[In case of computing the sum function, the updating rule function g(v_i,v_j) is v_i+v_j and the identity element is equal to zero.]. Then, node i sets its own value to e and becomes inactive. From the token's point of view, each token walks in the network randomly until it meets another token. The two tokens will then coalesce and form a token with an updated value. This process continues until the result is aggregated in one token. Finally, the last active node can broadcast the result by a controlled flooding mechanism[In Section II, we will explain how the last active node broadcasts the final result.]. This computation scheme is called the Coalescing Random Walk (CRW) algorithm, after the coalescing random walks <cit.>. The CRW algorithm offers comparable performance to spanning tree-based solutions in terms of message complexity <cit.>, making it much more energy-efficient than the gossip algorithms. However, it is still slow due to the deficiency of token coalescence when only a few tokens remain in the network. The sketch below illustrates both updating options for the average and sum functions.
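A minimal Python sketch of one interaction step under each updating option (our own illustration; the paper itself provides no code). Option 1 uses the average rule g(v_i,v_j)=(v_i+v_j)/2, and option 2 uses the sum rule with identity element e = 0:

```python
import random

def gossip_step(values, i, j):
    """Option 1: both nodes keep g(v_i, v_j); with g = average this
    drives all values toward the global mean (a gossip update)."""
    merged = (values[i] + values[j]) / 2.0
    values[i] = values[j] = merged

def token_step(values, active, i, j, e=0):
    """Option 2: node i hands its value to node j and becomes inactive."""
    values[j] = values[i] + values[j]       # g(v_i, v_j) for the sum function
    values[i] = e                           # identity element of g
    active.discard(i)
    active.add(j)

pair = [4.0, 8.0]
gossip_step(pair, 0, 1)                     # pair becomes [6.0, 6.0]

# One full run of the token-based (CRW-style) scheme on a complete graph:
n = 8
values = [random.randint(0, 9) for _ in range(n)]
target = sum(values)
active = set(range(n))
while len(active) > 1:
    i = random.choice(tuple(active))
    j = random.choice([k for k in range(n) if k != i])  # random neighbour
    token_step(values, active, i, j)
last = active.pop()
assert values[last] == target   # the whole sum is aggregated in one token
```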
To improve the running time of the CRW algorithm, the authors of <cit.> modified it as follows. In the modified algorithm, which we call the truncated CRW algorithm, at some point in time the execution of the CRW algorithm is terminated and each active node broadcasts the value of its token via a controlled flooding mechanism, leaving the completion of the computation to each network node. However, this solution does not lead to a significant improvement in time or message complexity <cit.>. In this paper, we propose a mechanism to speed up the coalescence of tokens. Suppose that each token has a unique identifier (UID) besides its carried value. In the proposed mechanism, each node registers the maximum UID of the tokens seen so far, and the outgoing edge taken by the token with that maximum UID. When a token enters a node previously visited by a token with higher UID, it follows the registered outgoing edge. Otherwise, it will go to a randomly chosen neighbor node, according to a predefined probability. Figure <ref> illustrates a scenario where two tokens are left in the network and shows how coalescing is expedited in the proposed scheme. Since nodes memorize the outgoing edge of the token with the maximum UID they have seen, we call the proposed scheme the “Token-based function Computation with Memory” (TCM) Algorithm. It is interesting to mention an analogy between this scheme and cosmology. Think of tokens in the network as cosmic dust grains in space. Accordingly, the process of function computation is like forming a planet from cosmic dust. By running the TCM algorithm, tokens with small UID (light grains) are trapped in the set of nodes visited by tokens with higher UID (in the gravitational field of heavy grains). The coalescing process continues until a single token is left, similar to the birth of a planet. The main contributions of the paper are as follows: * We show that the proposed TCM algorithm, by accelerating the coalescing of tokens, reduces the average time complexity by a factor of √(n/log(n)) in complete graphs and the Erdös-Renyi model compared to the CRW algorithm and its truncated version. Furthermore, there is at least a log(n)/log(log(n)) factor improvement in torus networks. Simulation results show that the TCM algorithm also outperforms the CRW algorithm in terms of message complexity. * In the CRW and TCM algorithms, the final result may be corrupted if an active node fails. Hence, it is quite important to study the robustness of these algorithms under node failures. In this regard, we evaluate the performance of the CRW and TCM algorithms based on a proposed robustness metric. We show that the robustness can be substantially improved by running multiple instances of the TCM and CRW algorithms in parallel. We prove that, for the CRW algorithm, the required number of instances in order to tolerate the failure rate α/n in complete graphs is of the order O(n^α), while the TCM algorithm needs to run only O(1) instances in parallel. * We study the performance of the TCM and CRW algorithms under the random walk mobility model <cit.>. Simulation results show that both algorithms can compute the class of target functions defined in Lemma II.1 successfully even in high mobility conditions. The remainder of the paper is organized as follows: In Section II, the TCM algorithm is described. In Section III, the performances of TCM and CRW algorithms are analyzed and compared for different network topologies. In Section IV, we study the robustness of both algorithms in complete graphs.
In Section V, the performances of TCM and CRW algorithms are evaluated through simulations and then compared with analytical results. Finally, we conclude with Section VI.
§ THE TCM ALGORITHM §.§ System model Consider a network of n nodes, where each node i has an initial value v_i^0 and the goal is to compute a function f_n(v_1^0,⋯,v_n^0) of the initial values in a distributed manner. The topology of the network is represented by a bidirected graph, G=(V,E), with the vertex set V={1,...,n} and the edge set E⊆ V× V, such that (i,j)∈ E if and only if nodes i and j can communicate directly. We index the ports of node i with {1,⋯,d_i}, where d_i is the degree of node i. It is assumed that the function f_n(.) is symmetric under any permutation π of the set {1,⋯,n}, i.e. f_n(v_1^0,⋯,v_n^0)=f_n(v_π_1^0,⋯,v_π_n^0). This means that it does not matter which node of the network holds which part of the initial values. §.§ Description of the TCM algorithm Assume that a UID is assigned to each node i.[One can use randomized algorithms to assign UIDs. Each node randomly chooses an integer number in the set {1,⋯,kn^2}. From the birthday problem <cit.>, it can be shown that each node gets a distinct UID with high probability if k is large enough. Furthermore, each node can encode its UID with O(log(n)) bits.] At the beginning of the algorithm, each node has a token to which it passes its UID and initial value. It is also assumed that each node has an independent clock which ticks according to a Poisson process with rate one. Let the value and UID of the token at node i be value(i) and ID(i), respectively. We denote the token at node i by the vector [value(i),size(i),ID(i)]. The role of the parameter size(i) will be explained in the next subsection. The TCM algorithm computes the target function f_n(.) by passing and merging tokens in the network. When a node does not have a token, it becomes inactive until a neighbor node gets in contact with it. Let memory(i) be the maximum UID of the tokens node i has seen so far. Algorithm <ref> describes how and when an active node i sends or merges tokens. The subroutine Send() is executed at each tick of the local clock, while the subroutine Receive() is activated upon receiving a token from some neighbor node. Suppose that the local clock of an active node i ticks. Node i decides to send the token [value(i),size(i),ID(i)] to a neighbor node. In this respect, we make a distinction between two cases: Case 1- memory(i)=ID(i): In this case, node i decides to pass the token to a random neighbor node with probability p_send. Thus, node i waits an average of 1/p_send clock ticks before sending out the token. To implement the waiting mechanism, node i will exit the subroutine Send() with probability 1-p_send each time its clock ticks (line 6). Otherwise, it chooses a random port j, sets path(i) to j, and sends the token on that port (lines 7-8). Case 2- ID(i)<memory(i): In this case, node i sends the token on the port path(i) with probability one. Now, suppose that node i receives a token [value,size,ID]. If node i is inactive, then the received token remains unchanged. Otherwise, it will coalesce with the token at node i, and the token with the greater UID remains in the network (line 15). Then, the parameters value(i), size(i), and memory(i) are updated to g(value(i),value), size(i)+size, and max(memory(i),ID), respectively (lines 16-18). The updating rule function g(.,.) is determined by the target function f_n(.) as explained in Lemma <ref>.
Furthermore, the value e is the identity element of the rule function g(.,.), i.e. g(v,e)=g(e,v)=v for any value v. From a top-level view, each token walks randomly in the network until it enters a node visited by a token with higher UID (Case 1). Then, it follows a path to meet the token with higher UID (Case 2). We call the walking modes in the first and second cases the random walk and chasing modes, respectively. In the random walk mode, a token walks at the lower speed p_send. Thus, it can be caught by tokens with lower UID more quickly. §.§ Termination of the TCM algorithm The process in Algorithm <ref> continues until a few tokens remain in the network. In order to terminate the algorithm, we consider two options: * Option 1- Assume that the exact network size, n, is known by all nodes. Furthermore, each node i has a parameter size(i), besides its initial value, which is equal to one at the beginning. The sum of the parameters {size(i), i∈{1,⋯,n}} can be computed in parallel to the target function. If the parameter size in an active node reaches n, it can identify itself as the unique active node in the network. Then, it broadcasts the output of the TCM algorithm to all nodes by controlled flooding, further explained below. * Option 2- Suppose that there exists an upper bound on the network size. Then, the execution time of the TCM algorithm can be adjusted to a time T_run such that, on average, at most a constant number of active nodes remain after time T_run. Afterwards, each active node broadcasts the value of its token including the UID. All nodes can obtain the final result by combining the values received from the active nodes. In analyzing the performances of the CRW and TCM algorithms, we consider the first option. In controlled flooding, an active node i sends the value and UID of its token to all neighbor nodes. Each node j, upon receiving this message from a node k for the first time, forwards it to all its neighbor nodes except node k. Since each message is transmitted on each edge at most twice, the time and message complexities of controlled flooding are Θ(diam(G)) and Θ(|E|), respectively[In complete graphs, we can employ the gossip algorithm proposed in <cit.> to broadcast the output with time and message complexities of the order O(log(n)) and O(nlog(n)), respectively.]. The allocation of memory at node i would be (memory(i),path(i),size(i),value(i)), where the possible values of the first three entries are in the set {1,⋯,n}. Thus, the TCM algorithm requires at most Θ(log(n)) bits more storage capacity compared to the CRW algorithm. The next Lemma identifies the class of target functions f_n(v_1^0,⋯,v_n^0) which can be computed by the TCM algorithm. The TCM algorithm can compute a collection of symmetric functions {f_n(.)} if there exists an updating rule function g(.,.) such that for any permutation π of the set {1,⋯,n}, we have: f_n(v_1^0,⋯,v_n^0)=g(f_k(v_π_1^0,⋯,v_π_k^0),f_n-k(v^0_π_k+1,⋯,v_π_n^0)), 1≤ k≤ n, ∀ n. The proof is the same as Lemma 3.1 in <cit.>. A wide class of target functions fulfils this requirement, such as min/max, average, sum, and exclusive OR. For instance, the updating rule functions g(v_i,v_j)=v_i+v_j, g(v_i,v_j)=max(v_i,v_j), and g(v_i,v_j)=v_i ⊕ v_j are used for computing the sum, maximum, and exclusive OR functions, respectively.
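The following Python sketch is an illustrative, single-threaded simulation of the Send/Receive logic described above on a complete graph. It is our own sketch under simplifying assumptions — one uniformly chosen active node ticks per step (standing in for the independent Poisson clocks), and inactive nodes are modelled as holding no token rather than the value e — not the authors' reference implementation.

```python
import random

def tcm_simulate(n, values, g, p_send=0.5, seed=None):
    """Simulate the TCM scheme on a complete graph; returns the surviving
    token [value, size, uid] and the number of messages sent."""
    rng = random.Random(seed)
    token  = [[values[i], 1, i + 1] for i in range(n)]   # UIDs are 1..n
    memory = [i + 1 for i in range(n)]   # max UID node i has seen so far
    path   = [None] * n                  # registered outgoing edge
    messages = 0

    def receive(j, tok):
        value, size, uid = tok
        if token[j] is None:                 # inactive node: token unchanged
            token[j] = tok
        else:                                # coalesce; the larger UID survives
            token[j] = [g(token[j][0], value), token[j][1] + size,
                        max(token[j][2], uid)]
        memory[j] = max(memory[j], uid)

    while sum(t is not None for t in token) > 1:
        i = rng.choice([k for k in range(n) if token[k] is not None])
        if memory[i] == token[i][2]:         # Case 1: random walk mode
            if rng.random() >= p_send:
                continue                     # wait: exit Send() this tick
            j = rng.choice([k for k in range(n) if k != i])
            path[i] = j                      # register the outgoing edge
        else:                                # Case 2: chasing mode
            j = path[i]
        tok, token[i] = token[i], None       # node i becomes inactive
        receive(j, tok)
        messages += 1
    return next(t for t in token if t is not None), messages

# Computing the sum of random initial values over n = 50 nodes:
vals = [random.randint(0, 9) for _ in range(50)]
(final, size, uid), msgs = tcm_simulate(50, vals, g=lambda a, b: a + b)
assert final == sum(vals) and size == 50
```

The message counter increments only when a token is actually sent, matching the convention used later that passing a token counts as one message.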
The average function can also be computed by dividing the output of the sum function by the network size, which is obtained by summing the size parameters of the nodes in parallel with the sum computation.
§ PERFORMANCE ANALYSIS OF THE CRW AND TCM ALGORITHMS In this section, we study the performances of the CRW and TCM algorithms in complete graphs, the Erdös-Renyi model, and torus networks. The considered network topologies may resemble different practical networks. For instance, the topology of a wireless network, in which all stations are in transmission range of each other, is typically modelled by a complete graph. A peer-to-peer network such that all nodes can communicate with each other in the overlay network is another example of a complete graph. As we explain later, the Erdös-Renyi model is frequently used as a model to represent social networks. Furthermore, the torus network is a simple structure widely used to model distributed processing systems with grid layout or grid-based wireless sensor networks. As a prelude to analyzing the performance of the TCM algorithm, we first present an analysis of the time and message complexities of the CRW algorithm for complete graphs, although the CRW algorithm is already analyzed in <cit.>. Then, we study the time complexity of the TCM algorithm in complete graphs. We also give a naive analysis of the message complexity of the TCM algorithm in complete graphs and the time/message complexity of both algorithms in the Erdös-Renyi model and torus networks. The summary of time and message complexities for the TCM algorithm and the CRW/truncated CRW algorithms is given in Table 1. In complete graphs and the Erdös-Renyi model, the TCM algorithm reduces the time complexity at least by a factor of √(n/log(n)). In the case of torus networks, there is an improvement at least by a factor of log(n)/log(log(n)) with respect to the CRW algorithm. Furthermore, the message complexity of the TCM algorithm is at most the same as that of the CRW and truncated CRW algorithms. Simulation results show that there is at least a constant factor improvement in the message complexity by employing the TCM algorithm in all considered topologies. In analyzing the CRW and TCM algorithms, we assume that each token is transmitted instantaneously. Furthermore, passing a token is counted as sending one message in the network. §.§ Time and message complexities of the CRW algorithm on complete graphs Let T_CRW and M_CRW be the average time and message complexities of the CRW algorithm, respectively. The next theorem gives a tight bound on T_CRW and M_CRW. The average time and message complexities of the CRW algorithm in complete graphs are of the orders Θ(n) and Θ(nlog(n)), respectively. We can represent the process of token coalescing by a Markov chain with the number of active nodes remaining in the network defined as the state (see Fig. <ref>). The chain undergoes a transition from state k to state k-1 if a token chooses an active node for the next step, which occurs with rate k(k-1)/(n-1). Let T_k be the sojourn time in state k. Then the average time complexity is: T_CRW=∑_k=2^n 𝔼{T_k}= ∑_k=2^n (n-1)/(k(k-1))= (n-1)(1-1/n)≈ n-2. Besides, in state k, on average, (n-1)/(k-1) messages are transmitted before observing a coalescing event.
Therefore, the average message complexity would be[∑_k=1^n 1/k ≈log(n) +c, where c ≈ 0.577 is the Euler-Mascheroni constant.]: M_CRW= ∑_k=2^n (n-1)/(k-1)≈ (n-1) (log(n-1)+0.577). Thus, the average time and message complexities of the CRW algorithm are of the orders Θ(n) and Θ(nlog(n)), respectively. §.§ Time complexity of TCM algorithm on complete graphs Let the UIDs of the n tokens at the beginning of the algorithm be denoted as ID_1,⋯,ID_n. Without loss of generality, assume that ID_1>⋯ >ID_n. Throughout this section, we also assume that p_send=1/2. Let T_coal(ID_i), i=2,⋯,n, denote the time at which token ID_i coalesces with a token with a larger UID. Thus, the algorithm running time would be: T_run(n)=max_i ∈{2,⋯, n} T_coal(ID_i). In the TCM algorithm, token ID_1 walks randomly in the network. In each step, it chooses a random node from the whole set of network nodes except the node where it currently resides. After taking j steps, the average number of nodes visited by token ID_1 would be: n-(n-1)×(1-1/(n-1))^j. We call the set of nodes visited by token ID_1 during its first j movements the event horizon of ID_1, and denote it by EH1(j). Notice that, in the TCM algorithm, when a token gets in the event horizon of token ID_1, it cannot escape and will eventually coalesce with token ID_1. We borrowed the term event horizon from general relativity, where it refers to “the point of no return”. The size of the event horizon of token ID_1 after taking 2j steps, i.e. |EH1(2j)|, is at least 𝔼{|EH1(j)|}≈ n(1-(1-1/n)^j) with probability greater than 1-e^-n/4- j η, where the constant η≥ 0.05. See Appendix A in the supplemental material. Now, we can obtain an upper bound on the average time complexity of the TCM algorithm from Lemma <ref>. In complete graphs, the average time complexity of the TCM algorithm is of the order O(√(n log(n))). For a complete proof, see Appendix B in the supplemental material. Here, in order to provide better insight into the algorithm, we present a naive analysis which is based on a modified model of the network, where the Poisson assumption for clock ticks is relaxed. Instead, we adopt a slotted model for time, where each token in the chasing mode takes one step in each time slot. Furthermore, in the random walk mode, we replace the assumption of p_send=1/2 with sending the token every other slot. Tokens which are scheduled to move in a time slot take steps in a random order. In our analysis, we utilize the following inequality, which we trust is correct based on intuition and simulation verification: {T_coal(ID_i)≤ t}≥{T_coal(ID_2)≤ t}, 2≤ i≤ n. As an example, simulation results are given for a network with n=100 nodes in Fig. <ref>. First, we derive an upper bound on the probability that token ID_2 gets in the event horizon of ID_1 after time slot t. According to the simplified timing model, token ID_1 moves at even time slots and token ID_2 tries to get in the event horizon of token ID_1 at the same time slots. In order to obtain the upper bound, we wait for 2k time slots to have a big enough event horizon of token ID_1. Since the size of the event horizon in the next 2k time slots is equal to or greater than the one at time slot 2k, the probability of not hitting the event horizon in the time interval [2k,4k] is less than (1-|EH_1(k)|/n)^k.
By bounding |EH_1(k)| from below (see Lemma <ref>), we have for k≥ 2√(nlog(n)): { T_EH1(ID_2)>4k}≤(1-𝔼{|EH_1(k/2)|}/n)^k×{|EH_1(k)|>𝔼{|EH_1(k/2)|}}+ {|EH_1(k)|≤𝔼{|EH_1(k/2)|}}× 1 ≤(1-𝔼{|EH_1(k/2)|}/n)^k + e^-n/4-η k/2 ≤e^-√(log(n)/n) k + e^-n/4-η k/2, where the last inequality is obtained by using 𝔼{|EH_1(k/2)|}≥𝔼{|EH_1(⌊√(nlog(n))⌋ )|} for k≥ 2√(nlog(n)). When token ID_2 reaches the event horizon of token ID_1 at time slot 4k, it takes at most another 4k time slots to coalesce with token ID_1, because the size of the event horizon is at most 2k and the relative velocity of the two tokens is 1/2. From this fact, we have: {T_coal(ID_2)≤ 8k}≥{T_EH1(ID_2)≤ 4k}. From (<ref>), we can obtain the following: {T_coal(ID_2)>k} <e^-√(log(n)/n) k/8 + e^-n/4-η k/16, k≥ 16 √(nlogn). Now, an upper bound can be derived on the average time complexity: 𝔼{T_run(n)} =∑_k=1^∞{T_run(n)>k}= ∑_k=1^∞{max_i ∈{2,⋯, n} T_coal(ID_i)>k}≤∑_k=1^∞min(1,∑_i∈{2,⋯,n}{T_coal(ID_i)>k})≤^a 16 √(nlog(n)) + ∫_16 √(nlog(n))^∞min(1,(n-1)× (e^-√(log(n)/n) t/8 + e^-n/4-η t/16))dt ≤^b 16 √(nlog(n)) + 8/√(nlog(n)) +16n/η e^-n/4-η√(nlog(n)). (a) From the inequalities in (<ref>) and (<ref>). (b) Due to (n-1)× (e^-√(log(n)/n) t/8 + e^-n/4-η t/16)≤ 1 for t≥ 16√(nlog(n)). From (<ref>), we conclude that the average time complexity is of the order O(√(nlogn)). Compared with the CRW algorithm, the TCM algorithm improves the time complexity by at least a factor of √(n/ log(n)). §.§ Message complexity of TCM algorithm on complete graphs In this part, we give a naive analysis of the message complexity of the TCM algorithm in complete graphs. To obtain the bound on the message complexity, we will show that the average number of messages sent in the TCM algorithm until observing a coalescing event is less than the corresponding number for the CRW algorithm. The average message complexity of the TCM algorithm is of the order O(nlog(n)) in complete graphs. Assume that the clock of an active node i ticks at time t and k tokens remain in the network. Suppose that token ID_r is in node i. The token ID_r may be in two different modes: walking randomly or following another token with higher UID. In the first mode, it will choose any node, say j, with probability 1/(n-1). Thus, the probability of coalescing is: 1/(n-1)∑_j∈{1,⋯,n}∖{i}{ζ_j(t)=1}, where ζ_j(t) is an indicator parameter which is equal to one if node j is active at time t and zero otherwise. But the expected number of active nodes excluding node i is: ∑_j∈{1,⋯,n}∖{i} 1×{ζ_j(t)=1}=k-1. Hence, the probability of coalescing in this mode is (k-1)/(n-1). In the second mode, token ID_r follows another token with higher UID and decides to go to a neighbor node, say node l. We know that there exist k-1 tokens excluding token ID_r which walk randomly or follow another token on a trajectory of a random walk. Thus, node l is active with probability at least (k-1)/(n-1). Following the same arguments as in analyzing the message complexity of the CRW algorithm, the message complexity is of the order O(nlog(n)). §.§ Time and message complexities of TCM and CRW algorithms in Erdös-Renyi model In some network applications, it is required to compute a specific function in social networks, such as majority voting <cit.>. Hence, it is quite important to study the performances of the TCM and CRW algorithms in these scenarios. The Erdös-Renyi model is frequently used as a simple model to represent social networks <cit.>.
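As a standalone illustration (our own sketch, not from the paper): in the Erdös-Renyi graph G(n,p), each of the n(n-1)/2 possible edges appears independently with probability p, and sampling at p = 2log(n)/n — the connectivity threshold invoked in the analysis below — almost always yields a connected graph.

```python
import math, random

def erdos_renyi_connected(n, p, rng):
    """Sample G(n, p) and test connectivity with a simple graph traversal."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j); adj[j].append(i)
    seen, frontier = {0}, [0]
    while frontier:
        v = frontier.pop()
        for u in adj[v]:
            if u not in seen:
                seen.add(u); frontier.append(u)
    return len(seen) == n

rng = random.Random(1)
n = 400
p = 2 * math.log(n) / n
trials = 200
hits = sum(erdos_renyi_connected(n, p, rng) for _ in range(trials))
print(f"connected in {hits}/{trials} samples at p = 2*log(n)/n")
```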
In this part, we use this model to give a naive analysis of the time and message complexities of the TCM and CRW algorithms in social networks. In the Erdös-Renyi model, there exists an edge between any two nodes with probability p. It can be shown that the graph is almost certainly connected if p≥ 2log(n)/n <cit.>. The next two propositions give upper bounds on the time and message complexities of the CRW and TCM algorithms. In the Erdös-Renyi model, the average time and message complexities of the CRW algorithm are of the orders O(n) and O(nlog(n)), respectively. Assume that k tokens remain in the network. Consider a token ID_i that walks randomly until it meets another token. In each step, it may be located in any node. From the token's view, it seems that edges are randomly established with probability p in each step. Suppose that token ID_i is in node l at time t. It will choose an active node with probability P_selec: P_selec=∑_m∈{q|ζ_q(t)=1}∑_j=0^n-2 p×{d^'_l=j}× 1/(j+1) =(k-1)× p ×𝔼{1/(d^'_l+1)}, where d^'_l is the degree of node l excluding an active node m. The first term in the summation shows the probability of having an edge between the two nodes l and m. The second term represents the probability that node l has j neighbor nodes excluding the node m, and the last term is the probability that node l chooses the active node m from the set of its neighbor nodes. From Jensen's inequality and the convexity of the function f(x)=1/(x+1) over x>0, we have: P_selec≥ (k-1)p/(𝔼{d^'_l}+1)=(k-1)/(n-2+1/p)≥ (k-1)/(n-2+n/(2log(n))). It can be easily verified that P_selec≥ (k-1)/(1.12(n-1))=Θ((k-1)/(n-1)) for n≥ 100. Following the same arguments as in analyzing the performance of the CRW algorithm in complete graphs, we can deduce that the time and message complexities are of the orders O(n) and O(nlog(n)), respectively. In the Erdös-Renyi model, the average time and message complexities of the TCM algorithm are of the orders O(√(nlog(n))) and O(nlog(n)), respectively. Suppose that the token ID_i is in the random walk mode. In each step, it visits each node with probability p×𝔼{1/(d^'_l+1)}≥ 1/(n-2+1/p)≈ 1/(n-1) for large enough n. Intuitively, we still have the same bounds on the probabilities {T_coal(ID_i)> t}, 2≤ i≤ n. By the same arguments as for the case of complete graphs, the time and message complexities are of the orders O(√(nlog(n))) and O(nlog (n)), respectively. §.§ Time complexity of TCM algorithm on torus networks In this part, we give a naive analysis of the time complexity of the TCM algorithm in torus networks. We will show that the average running time of the algorithm is of the order O(nlog(log(n))). To obtain the bound, we first need to review two lemmas about single random walks. <cit.> Consider a √(n)×√(n) discrete torus. Let T_hit be the time for a single random walk to hit the set of nodes contained in a disc of radius r<ℛ/2 around a point x, starting from the boundary of a disc of radius ℛ around x. Then, we have: 𝔼{T_hit}=Θ(nlog(r^-1)). <cit.> Let V_k be the number of nodes visited by a single random walk on ℤ^2 after k steps. Then, we have: 𝔼{V_k }=π k/log(k) and variance Var(V_k )=O(k^2 log(log(k))/log(k)^3). In torus networks, the average time complexity of the TCM algorithm is of the order O(nlog(log(n))). Consider the token ID_1. From Lemma <ref>, π k/log(k) nodes are visited on average by token ID_1 after k steps. To simplify the analysis, we approximate the region of visited nodes with a disc of radius √(k/(nlog(k))) on a unit torus (see Fig. <ref>).
Hence, after k=β n steps, the radius of the disc would be √(β/log(β n)), where β<<1. Furthermore, any other token ID_i (i≥ 2) walks randomly or follows another token on a trajectory of a random walk. Hence, from Lemma <ref>, token ID_i hits the disc after Θ (nlog(log(n))) average time units if it does not coalesce with any other token during this time interval. Following that, at most 2n time slots are required to reach token ID_1. Therefore, the time complexity is of the order O(nlog(log(n))).
§ ROBUSTNESS ANALYSIS In this section, we study the robustness of the CRW and TCM algorithms. In the literature on distributed systems, identifying robust algorithms is done mostly from a qualitative rather than a quantitative perspective. For instance, there is a common belief that gossip algorithms have a robust structure against network perturbations such as node failures or time-varying topologies <cit.>. Nevertheless, this advantage is achieved at the cost of huge time and message complexities <cit.>. To the best of our knowledge, there exist only a few works <cit.> on analyzing the robustness of distributed function computation (DFC) algorithms. One of the main challenges is that it is difficult to devise a well-defined robustness metric. Despite the challenges, there exist some methodologies for defining a robustness metric in a computing system <cit.>. Here, we follow the same approach as in these methodologies. To do so, three steps should be taken: 1) First, a metric should be considered for the system performance. In our case, we consider it as the probability of successful computation at the end of the algorithm, i.e. {v_i=f(v_1^0,⋯,v_n^0), ∀ i ∈{1,⋯,n}}, where v_i is the output of node i. Note that the correct result is a function of the initial values of all nodes. 2) In the second step, network perturbations should be modelled. In the CRW and TCM algorithms, the final result may be corrupted if an active node fails. Thus, studying the impact of such an event on the robustness of these algorithms is quite important. In order to model node failures, we assume that each node may crash according to an exponential distribution with rate λ. Therefore, the average lifespan of a node is 1/λ. As a result, at most n× (1-e^-λ𝔼{T_run(n)}) nodes fail on average. We assume that the expected number of crashed nodes during the execution of the algorithm is at most a small fraction of the network size, i.e. λ𝔼{T_run(n)}<-log(1-α)≈α, where α<<1. 3) At the end, it should be identified how much perturbation the algorithm can tolerate such that the performance metric remains in an acceptable region. For this purpose, we define the following robustness metric. The robustness metric, r(ϵ), is defined by the following equation: r(ϵ) ≜maxλ_0 s.t. {v_i=f(v_1^0,⋯,v_n^0), ∀ i∈{1,⋯ ,n} | λ=λ_0}≥ 1-ϵ. Intuitively, the robustness metric shows the maximum failure rate that an algorithm can tolerate such that the probability of successful computation is greater than a desired threshold, 1-ϵ. In order to execute the CRW and TCM algorithms in the presence of node failure, it is assumed that each token chooses a random neighbor node for the next clock tick if the contacting node at the current moment has failed. §.§ Robustness of CRW algorithm in complete graphs We first derive the probability that node i is active at time t, i.e. {ζ_i(t)=1}. In the non-failure scenario, node i is active at time t with probability {ζ_i(t)=1}=1/(t+1). We use the mean field theorem to calculate the probability p(t)={ζ_i(t)=1} (for more on the mean field theorem, see <cit.>).
Due to the symmetry of the complete graph, each node is active at time t with the same probability p(t). Thus, the fraction of active nodes will decrease at rate p^2(t). Therefore, we have: dp(t)/dt=-p^2(t). By solving the differential equation and considering the fact that p(0)=1, we have: p(t)=1/(t+1) and 𝔼{c(t)}=n/(t+1), where c(t)=∑_i=1^nζ_i(t) is the number of active nodes at time t. In the CRW algorithm, the probability of successful computation is greater than n^-λ n for the node failure rate λ <α/𝔼{T_run(n)}. The function computation is successful iff none of the active nodes fails up to time T_run(n).[In the controlled flooding mechanism, the value of the last active node is broadcast to all nodes. Thus, node failures have negligible impact on the final result in this phase, and we neglect them in our analysis.] Let F_[t_0,t_1) be the event that none of the active nodes fails in the time interval [t_0,t_1). Thus, the probability P_succ(t)≜{F_[0,t)},(t<T_run(n)), satisfies the following equation: P_succ(t+dt) =P_succ(t)×{F_[t,t+dt)|F_[0,t)}, =P_succ(t)×𝔼_c(t){{F_[t,t+dt)|c(t), F_[0,t)}}, =^aP_succ(t) ×𝔼_c(t){ e^-λ c(t)dt}, =P_succ(t) ×𝔼_c(t){1-λ c(t) dt}+ O(dt^2), =^bP_succ(t) × (1-λ n/(t+1) dt). (a) From the property of the exponential distribution considered in modelling node failures. (b) We assume that 𝔼{c(t)}≈ n/(t+1) is not affected by missing a small fraction of nodes. Therefore, we have: dP_succ(t)/dt=-P_succ(t)λ n/(t+1). By solving the above differential equation, we have: P_succ(t)=(t+1)^-λ n. Hence, we can obtain a lower bound on the probability of successful computation, P_succ, as follows: P_succ= 𝔼_T_run(n){P_succ(T_run(n))}≥ (𝔼{T_run(n)}+1)^-λ n≥ n^-λ n. The above inequality holds due to Jensen's inequality and the fact that the function f(x)=(x+1)^-nλ, x>0, is convex. After some manipulations, it can be easily verified that: r(ϵ)> log((1-ϵ)^-1)/(nlog (n)). Hence, a single CRW instance can tolerate failure rates of order O(1/(nlog(n))). But how can we improve the performance of this algorithm such that it tolerates failure rates of order α/𝔼{T_run(n)}=α/n? One effective solution is to run multiple CRWs in parallel. More specifically, we run R instances of the CRW algorithm denoted by 1,…,R; as a result, if an active node fails in some instances of the CRW algorithm, it might be inactive in the other instances, and those instances survive that node failure. In order to run multiple instances of the algorithm, tokens carry the index of the corresponding instance during the execution of the algorithm and can only coalesce with tokens of the same index. At the end of the algorithm, nodes decide on the output of an instance which includes as many values as possible in computing the target function. To do so, we can assume that each node i has a count parameter size(i) which is equal to one at the beginning of the algorithm (see Section II). The sum of these count parameters is obtained alongside computing the target function of the initial values for each instance of the algorithm. Nodes decide on the output of the instance with the maximum count parameter. To tolerate the failure rate of α/n and get the correct result with probability 1-ϵ, the number of instances of the CRW algorithm should be greater than: R>log(ϵ^-1)n^α. Assuming that the multiple instances are approximately independent and considering λ= α/n and Lemma <ref>, the probability of successful computation of the target function with R instances of the CRW algorithm is greater than 1-(1- n^-α)^R. Requiring 1-(1- n^-α)^R≥ 1-ϵ yields R ≥ log(ϵ)/log(1-n^-α)≈log(ϵ^-1)n^α.
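As a quick numeric reading of this bound (our own example values; the helper below is hypothetical):

```python
import math

def required_instances(n, alpha, eps):
    """Smallest R satisfying 1 - (1 - n**(-alpha))**R >= 1 - eps."""
    return math.ceil(math.log(eps) / math.log(1.0 - n ** (-alpha)))

for n in (10**3, 10**4):
    for alpha in (0.05, 0.1):
        exact = required_instances(n, alpha, eps=0.05)
        approx = math.log(1 / 0.05) * n ** alpha
        print(f"n={n:6d} alpha={alpha:4.2f}: exact R={exact:3d}, "
              f"log(1/eps)*n^alpha={approx:5.1f}")
```

Only a handful of instances are needed in these parameter ranges, consistent with the small number of parallel instances observed to suffice in the simulations of Section V.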
The CRW algorithm is made robust against failure of an α fraction of nodes by running O(n^α) instances of the CRW algorithm in parallel. Thus, the message complexity is of the order O(n^1+αlog(n)). Since α<<1, this solution imposes low message overhead. §.§ Robustness of TCM algorithm in complete graphs To study the robustness of the TCM algorithm, we first need to obtain the average percentage of active nodes at time t. However, deriving 𝔼{c(t)}/n for the TCM algorithm in complete graphs is not as easy a task as for the CRW algorithm, since it requires computing the following sum: 𝔼{c(t)}/n=1/n∑_i=1^n{T_coal(ID_i)>t}, where obtaining {T_coal(ID_i)>t}, ∀ i∈{2,⋯,n} (or even bounds on them) is quite challenging. In order to simplify the analysis, we consider a functional form 𝔼{c(t)}/n≈log_2(t+2)/(at^2+bt+1), where a=0.23 and b=1.8. The reason for choosing this form is that the average running time is of the order O(√(nlog(n))) and it can also be fitted properly to the simulation results[From simulation results, the root mean square error (RMSE) of the fitted function is less than 10^-3 for all n∈ [100,2500].]. Under this assumption, we can derive the probability of successful computation by the following lemma. The probability of successful computation by the TCM algorithm is greater than e^-γ nλ in complete graphs, where γ≈ 4.13. By the same arguments as in the proof of Lemma <ref>, we have: P_succ(t)=exp(-λ∫_0^t 𝔼{c(τ)}dτ). Since h(t)=e^-λ t is convex and non-increasing and g(t)=∫_0^t 𝔼{c(τ)}dτ is concave (d/dt𝔼{c(t)}<0, t>0), the composition P_succ(t)=h(g(t)) is convex. Hence, we have from Jensen's inequality: P_succ=𝔼_T_run(n){P_succ(T_run(n))} ≥exp(-n λ∫_0^𝔼{T_run(n)}log_2(τ+2)/(aτ^2+b τ+1)dτ) ≥ e^-γ nλ, where ∫_0^𝔼{T_run(n)}log_2(τ+2)/(aτ^2+b τ+1)dτ≤∫_0^∞log_2(τ+2)/(aτ^2+b τ+1)dτ=γ. From Lemma <ref>, we can see that r(ϵ) is at least ϵ/(γ n) for a single TCM instance. Similar to the CRW algorithm, we can run multiple instances of the TCM algorithm in parallel to improve its robustness. In order to tolerate the failure rate of α/n, the required number of instances running in parallel is of the order O(1).
§ SIMULATION RESULTS In this section, we evaluate the performances of the TCM and CRW algorithms through simulation. Simulation results are averaged over 10000 runs for both algorithms in complete graphs, torus networks, and the Erdös-Renyi model. In Fig. <ref>, the average time complexities of the TCM and CRW algorithms are given for complete graphs. In the TCM algorithm, p_send is set to 1/2. As can be seen, the simulation results are close to our analysis. Furthermore, the TCM algorithm outperforms the CRW algorithm by a scale factor of √(n). For instance, for n=256, the average time complexities of the TCM and CRW algorithms are 67 and 255 time units, respectively. Hence, the amount of improvement is 255/67=3.81 ≈ n/(4.5n^0.5)=3.56. In Fig. <ref>, the average message complexities of the TCM and CRW algorithms are depicted for complete graphs. As can be seen, the average message complexity of the TCM algorithm is always less than half of that of the CRW algorithm. In order to study the effect of the parameter p_send on the running time of the TCM algorithm, the average time complexity is plotted versus p_send for complete graphs in Fig. <ref>. Intuitively, the event horizon of token ID_1 grows at a pace proportional to p_send. On the other hand, the relative velocity of two tokens is approximately related to 1-p_send. Thus, the average time complexity increases as p_send goes to zero or one.
Furthermore, the optimal p_send gets close to 0.5 as the network size increases. In Fig. <ref>, we evaluate the average time and message complexities of the TCM and CRW algorithms in torus networks. We can see that the TCM algorithm has at least a gain of log(n) in time complexity and a scale factor of 2.85 in message complexity. In Fig. <ref>, the average time and message complexities of the TCM and CRW algorithms are depicted for the Erdös-Renyi model. According to Fig. <ref>, the TCM algorithm has an improvement in time complexity by a factor of √(n). Furthermore, the average message complexity of the TCM algorithm is approximately half of that of the CRW algorithm. In Fig. <ref>, the probability of successful computation by running one instance of the TCM and CRW algorithms is depicted in the case of complete graphs. The failure rate is set to 0.05/n. For the TCM algorithm, P_succ is approximately equal to 0.83 for different values of n in the range [100,400]. Besides, the results from the analysis are within an offset of 0.001 of this value. In the case of the CRW algorithm, the results from the simulation and the analysis are also close to each other. For this algorithm, P_succ is greater than 0.74 for various values of n in the range [100,400]. In Fig. <ref>, the message complexities of the TCM and CRW algorithms are plotted versus the failure rate in a complete graph with n=100 nodes. The number of parallel instances is determined such that the probability of successful computation is equal to 0.95. As can be seen, it is required to run a few more instances of the TCM and CRW algorithms to tolerate higher failure rates. Furthermore, the message complexity of the TCM algorithm is less than that of the CRW algorithm. In Fig. <ref>, the time complexities of both algorithms are given versus the failure rate. For higher failure rates, we need to run more instances of the TCM/CRW algorithm to have P_succ=0.95. On the other hand, executing multiple instances of the algorithms improves the time complexity, since the target function is computed if any of the instances terminates successfully. In Fig. <ref>, the probabilities of successful computation of the TCM and CRW algorithms are plotted versus the number of parallel instances in a complete graph with n=400 nodes for the failure rates λ=0.05/n,0.1/n. It can be seen that the analytical lower bounds in (<ref>) and (<ref>) are close to the simulation results. Furthermore, P_succ goes to one in all cases when 6 instances are executed in parallel. Thus, the proposed solution makes both algorithms robust against node failures by running a small number of instances in parallel, as expected from Corollaries <ref> and <ref>. Studying the impact of dynamic topologies on the performance of distributed algorithms is quite important. Here, we evaluate the performance of the TCM and CRW algorithms under node mobility. There exist different mobility models in the literature on mobile ad hoc networks <cit.>. In the simulations, we consider the Random Walk (RW) mobility model, which is frequently used in evaluating protocol performance and can mimic the movements of mobile nodes walking in an unpredictable way <cit.>. Initially, suppose that nodes are located randomly over a square of unit area. Let [x_i(t),y_i(t)] be the location of node i at time t. In the RW mobility model, the differences x(t+h)-x(t) and y(t+h)-y(t) are two independent normally distributed random variables with zero mean and variance 2Dh, ∀ h>0, where D is the diffusion coefficient <cit.>.
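A sketch of one position update under this mobility model (our own code; the diffusion coefficient and step values are hypothetical):

```python
import random

def rw_mobility_step(x, y, D, h, rng):
    """One step of the random-walk mobility model on the unit square:
    Gaussian increments with zero mean and variance 2*D*h, reflected at
    the boundary (adequate for the small steps used here)."""
    sigma = (2.0 * D * h) ** 0.5
    x += rng.gauss(0.0, sigma)
    y += rng.gauss(0.0, sigma)
    # bounce off the boundary at the same angle (reflection)
    x = -x if x < 0 else (2.0 - x if x > 1 else x)
    y = -y if y < 0 else (2.0 - y if y > 1 else y)
    return x, y

rng = random.Random(0)
x, y = rng.random(), rng.random()
for _ in range(1000):
    x, y = rw_mobility_step(x, y, D=1e-4, h=1.0, rng=rng)
```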
The mean square displacement of a node is thus governed by the parameter D. In particular, the probability of a large displacement increases as the diffusion coefficient D grows. We assume that if a node reaches the boundary of the simulated area, it bounces off the boundary at the same angle. Furthermore, two nodes are neighbors if the distance between them is less than a fixed transmission range. The transmission range is set to a value such that the graph remains connected with high probability in the static case, i.e. D=0 <cit.>. In the TCM algorithm, we assume that each node i registers the UID of the node to which the token with UID memory(i) was passed. Whenever an active node should send a token to a node which is no longer in its transmission range, it passes its token to a random neighbor node instead. In Fig. <ref>, the time and message complexities of the TCM and CRW algorithms are depicted versus the parameter D in a network with n=100 nodes. It is noteworthy that both algorithms can compute the class of target functions defined in Lemma <ref> successfully even in highly mobile networks. Furthermore, the time and message complexities of the TCM algorithm increase as the parameter D grows, while node mobility improves the performance of the CRW algorithm. In fact, higher mobility weakens the advantage of the chasing mechanism. On the other hand, it gives an opportunity to a completely randomized solution, i.e. the CRW algorithm, to reduce the coalescing time of distant tokens. Nevertheless, simulation results show that the TCM algorithm outperforms the CRW algorithm in both time and message complexities.
§ CONCLUSIONS In this paper, we proposed the TCM algorithm to compute a wide class of target functions (such as sum, average, min/max, XOR) in a distributed manner. In the complete graph and Erdös-Renyi model, we showed that it reduces the running time by a factor of at least √(n/log(n)) with respect to the completely randomized solution, i.e. the CRW algorithm, and there is at least a factor of log(n)/log(log(n)) improvement in torus networks. We defined a robustness metric to study the impact of node failures on the performance of the CRW and TCM algorithms. The CRW and TCM algorithms can tolerate a failure rate of α/n by running O(n^α) and O(1) instances in parallel, respectively. Furthermore, simulation results showed that both algorithms can compute the target functions successfully even in high mobility conditions.
§ APPENDIX A Proof of Lemma III.1: The pdf of |EH_1(k)| can be approximated with the Gaussian distribution 𝒩(μ_k,σ_k), where μ_k=n-n(1-1/n)^k and σ_k^2=n^2(1-1/n)(1-2/n)^k+n(1-1/n)^k-n^2(1-1/n)^2k [29]. After some manipulations, we have: {|EH_1(2k)|≤𝔼{|EH_1(k)|}}≤1/2e^-(μ_k-μ_2k)^2/2σ_2k^2≤ e^-n/4-kη, where η≥ 0.05. Hence, the size of the set EH_1(2k) is greater than 𝔼{|EH_1(k)|} with probability at least 1-e^-n/4 -kη.
§ APPENDIX B Proof of Theorem III.2: Consider token ID_i (i>1). Let x_k^i be the node visited by token ID_i at the k-th step and S_k^i={x_1^i,⋯,x_k^i} be the history of the corresponding walk. We define the walk taken by token ID_i as a weakly self-avoiding walk, provided that: {x_k+1^i|S_k^i} = α_k if x_k+1^i∉S_k^i, and ≤α_k if x_k+1^i∈ S_k^i, for some α_k where α_k≥1/(n-1). Thus, in a weakly self-avoiding walk, token ID_i visits new nodes with probability at least as high as previously visited nodes. In the TCM algorithm, the path traced by token ID_i (i>1) is a weakly self-avoiding walk. Suppose that the token ID_i enters a node x_k^i visited by some other token with higher UID.
Let ID_i^' be the maximum UID node x_k^i has seen so far. Furthermore, assume that token ID_i^' has taken k^' steps and visited node x_k^i at step j for the last time, i.e. j=max_ω≤ k^'ω s.t. x_ω^i^'= x_k^i. We denote the chasing and random walk modes of token ID_i by chase_i and RW_i, respectively. Now, for a given history S_k^i, we have: {x_k+1^i=a|S_k^i} =∑_i^'=1^i-1[∑_j=1^k^'{x_k+1^i=a|x_j^i^'=x_k^i,chase_i,S_k^i}×{x_j^i^'=x_k^i,chase_i|S_k^i}]+{x_k+1^i=a|RW_i,S_k^i}×{RW_i|S_k^i}. Suppose that token ID_i was at its l-th step when token ID_i^' was leaving node x_j^i^' (see Fig. <ref>). We prove that token ID_i will not visit nodes in the set {x_l^i,⋯,x_k^i} in the next step. By contradiction, assume that there exists l≤ p ≤ k where x_p^i=x_j+1^i^'. However, we have: ∀ω_1∈{p,⋯,k}, ∃ω_2 ∈{j+1,⋯,k^'}, s.t. x_ω_1^i=x_ω_2^i^', due to the fact that token ID_i is chasing token ID_i^'. For ω_1=k, the above equation asserts that token ID_i^' revisited node x_k^i at some step later than j, which is a contradiction. We know that token ID_i^' was in the random walk mode at step j. Hence, each node in the set {1,⋯,n}\{x_l^i,⋯,x_k^i} is selected with probability 1/(n-|{x_l^i,⋯,x_k^i}|) at the (k+1)-th step. Consequently, we have: {x_k+1^i=a|x_j^i^'=x_k^i,chase_i,S_k^i} ≥{x_k+1^i=b|x_j^i^'=x_k^i,chase_i,S_k^i} ∀ j,k, ∀ a,b∈{1,⋯,n}, a∉S_k^i, b∈ S_k^i. From (<ref>) and (<ref>), it can be concluded that: {x_k+1^i=a|S_k^i}≥{x_k+1^i=b|S_k^i}, ∀ a,b∈{1,⋯,n}, a∉S_k^i, b∈ S_k^i. Thus, the proof is complete. Assume that if token ID_i coalesces with token ID_j (where ID_j>ID_i), it virtually sticks to token ID_j. Now, if token ID_j meets another token, say ID_k with higher UID, token ID_j and all tokens attached to it stick to token ID_k. This process continues until token ID_i hits the event horizon of ID_1 by itself or via another token. We denote the time for token ID_i to hit the event horizon of token ID_1 by T_EH1(ID_i). Furthermore, let EH_i(t) be the set of nodes visited by token ID_i up to time t. Token ID_1 takes steps in the network according to a Poisson process with rate 1/2 (assuming that p_send=1/2). At each step, it chooses one of the nodes except its current node with probability 1/(n-1). Thus, each node (excluding the initial node holding token ID_1) is not visited by token ID_1 up to time t with probability e^-t/(2(n-1)), independently from other nodes. Hence, the pdf of the number of visited nodes at time t is: {|EH_1(t)|=r}=(n-1 choose r-1) (1-e^-t/(2(n-1)))^r-1× (e^-t/(2(n-1)))^n-r, for 1≤ r ≤ n-1. We have the following probabilistic bound on the number of nodes visited by token ID_1 at time 2t: {|EH_1(2t)|≤𝔼{|EH_1(t)|}}≤e^-α_0 t, t≤ 2n, where α_0=(1-log(2))/4. From (<ref>) and the upper bound for the binomial distribution proposed in [30], we have: {|EH_1(2t)|≤𝔼{|EH_1(t)|}}≤e^-nD, where D=alog(a/b)+(1-a)log((1-a)/(1-b)), a=1-e^-t/(2(n-1)) and b=1-e^-t/(n-1). Besides, we have: nD =t/2e^-t/(2(n-1))-(n-1)(1-e^-t/(2(n-1)))(1+e^-t/(2(n-1))) >t/2(1-log(2))+t^2(log(2)-1)/(8n). From the above equation, it can be easily seen that nD>t(1-log(2))/4 for t≤ 2n. Therefore, the proof is complete. Let N_i(t_0,t_0+2t) be the number of steps taken by token ID_i in the time interval [t_0,t_0+2t]. Then, we have the following bound: {N_i(t_0,t_0+2t)<⌊ t/2⌋}≤ e^-α_1t, where α_1=log(√(e/2)). The random variable N_i(t_0,t_0+2t) counts a Poisson process with rate at least 1/2, so it is Poisson distributed with mean at least λ_2t=2t× (1/2)=t.
Thus, we have from the Chernoff bound: {N_i(t_0,t_0+2t)≤⌊ t/2⌋}≤∑_i=0^⌊ t/2⌋e^-λ_2t(λ_2t)^i/i!≤ e^-t(et)^t/2/(t/2)^t/2=(2/e)^t/2. The proof is complete. By the same arguments as in Lemma <ref>, it can be shown that { N_i(t_0,t_0+t)>2t}≤ e^-α_2 t, where α_2=log(4/e). Given a time t, we say that the event E_i(t) occurs if |EH_1(t)\ EH_i(t)|≥ (1/8)√(nlogn). Let E(t)=⋂_i E_i(t) and define t^⋆=√(nlogn). We have: { E^c(t^⋆)} ≤^a ∑_i=2^n{E_i^c(t^⋆)}≤ (n-1) 𝔼{∑_j= |EH_1(t^⋆)|-(1/8)√(nlogn)^N_i(0,t^⋆) (N_i(0,t^⋆) choose j) P_EH_1(t^⋆)^j P_EH^c_1(t^⋆)^N_i(0,t^⋆)-j}≤^b 𝔼{(n-1) ∑_j=|EH_1(t^⋆)|-(1/8)√(nlogn)^N_i(0,t^⋆) (N_i(0,t^⋆) choose j)(|EH_1(t^⋆)|/(n-N_i(0,t^⋆)))^j} ≤^c (n-1) (∑_j=⌊ (1/8)√(nlogn)⌋^⌈ 2√(nlogn)⌉ (2√(nlogn) choose j)((1/4)√(nlogn)/(n-2√(nlogn)))^j+ e^-α_0 √(nlogn)/2 + e^-α_2 √(nlogn)) ≤^d 1/√(nlogn). (a) The first sum is given according to the union bound. The second sum is greater than the probability of having |EH_1(t^⋆)∩ EH_i(t^⋆)|≥ j, where P_EH_1(t^⋆) and P_EH^c_1(t^⋆) are the probabilities of choosing a node from the sets EH_1(t^⋆) and {1,⋯,n}\ EH_1(t^⋆), respectively. (b) From Lemma <ref>, the path traced by token ID_i (i>1) is a weakly self-avoiding walk. Thus, we have: P_EH_1(t^⋆)≤|EH_1(t^⋆)|/(n-N_i(0,t^⋆)). (c) The sum takes a greater value for larger |N_i(0,t)| and smaller |EH_1(t)|. We can obtain this inequality by bounding the probabilities {|EH_1(t^⋆)|<(1/4)√(nlogn)} and {N_i(0,t^⋆)>2√(nlogn)} from Lemma <ref> and Remark <ref>, respectively. (d) From Stirling's approximation, the probability is in the order of O(e^-logn√(nlogn)). Thus, it is less than 1/√(nlogn) for large enough n. Assume that the event E(t^⋆) occurs. Then, the probability that token ID_i has not hit the event horizon of token ID_1 after t^⋆+2t is less than the following: {T_EH1(ID_i)>t^⋆+2t|E(t^⋆)}≤ e^-(1/16)√(logn/n)t + e^-α_1 t. Suppose that the event E(t^⋆) occurs at time t^⋆. Thus, the size of the set EH_1(t)\ EH_i(t), t>t^⋆, will be greater than (1/8)√(nlog(n)) as long as token ID_i does not hit it. Hence, the probability of not hitting the event horizon of ID_1 in the time interval [t^⋆,t^⋆+2t] is less than (1-(1/8)√(nlogn)/n)^N_i(t^⋆,t^⋆+2t). By bounding N_i(t^⋆,t^⋆+2t) from below (see Lemma <ref>), we have: { T_EH1(ID_i)>t^⋆+2t| E(t^⋆)} ≤{ N_i(t^⋆,t^⋆+2t)>⌊ t/2⌋}× (1-(1/8)√(nlogn)/n)^t/2 + {N_i(t^⋆,t^⋆+2t)≤⌊ t/2⌋}× 1 ≤ e^-(1/16)√(logn/n)t + e^-α_1 t. Suppose that token ID_i hits the event horizon of token ID_1 at time t. Then, it will coalesce with token ID_1 within the next 3t time units with probability greater than 1-(e^-α_3 t+e^-α_4 t), where α_3=1/36 and α_4=log(2/√(e)). In the worst-case scenario, the event horizon of token ID_1 is a line with length N_1(0,t) and token ID_i hits the end of the line at time t. Thus, token ID_i reaches token ID_1 at the time t^' given by the following equation: N_i(t,t^')=N_1(0,t)+N_1(t,t^'). Let us define the random variable Y(t^')=N_i(t,t^')-N_1(t,t^'), which is the difference of two independent Poisson random variables N_i(t,t^') and N_1(t,t^') with rates (t^'-t) and (t^'-t)/2, respectively. Hence, the random variable Y(t^') has a Skellam distribution and we have: {Y(t^')< N_1(0,t)}≤ e^-(N_1(0,t)-(1/2)(t^'-t))^2/(3(t^'-t)). Since token ID_1 takes more than ⌈ t ⌉ steps in the time interval [0,t] with probability at most ∑_i=⌈ t⌉+1^∞ e^-t/2(t/2)^i/i!≤ e^-α_4 t, we have: {Y(4t)< N_1(0,t)}≤ e^-α_3 t+e^-α_4 t. From Lemmas <ref> and <ref>, we have: {T_coal(ID_i)>4t^⋆+8t| E(t^⋆)} ≤e^-(1/16)√(logn/n)t+e^-α_1 t+e^-α_3 t+e^-α_4 t ≤ e^-(1/16)√(logn/n)t+ 3e^-α_3 t.
Now, we can obtain an upper bound on the average time complexity:

𝔼{T_run(n)} = 𝔼{T_run(n)|E(t^⋆)}{E(t^⋆)} + 𝔼{T_run(n)|E^c(t^⋆)}{E^c(t^⋆)}
≤^a 𝔼{T_run(n)|E(t^⋆)} + (4nlog(n)+2t^⋆) × 1/√(nlogn)
= ∫_0^∞ {T_run(n)>τ|E(t^⋆)} dτ + 4√(nlogn) + 2
≤^b ∫_0^∞ min(1, ∑_{i∈{2,⋯,n}} {T_coal(ID_i)>τ|E(t^⋆)}) dτ + 4√(nlogn) + 2
≤^c 4t^⋆ + ∫_0^∞ min(1, (n-1)×(e^-(1/128)√(logn/n)τ + 3e^-α_3 τ/8)) dτ + 4√(nlogn) + 2
≤^d ∫_0^{128√(nlog(n))} 1 dτ + ∫_{128√(nlog(n))}^∞ n×(e^-(1/128)√(logn/n)τ + 3e^-α_3 τ/8) dτ + 8√(nlogn) + 2
≤ 128√(nlog(n)) + 128√(n/logn) + (24n/α_3) e^-16α_3√(nlogn) + 8√(nlogn) + 2 = O(√(nlogn)).

(a) Regardless of the event E(t^⋆), token ID_1 covers the complete graph in t^⋆+2nlog(n) time units on average [13]. Thus, any token ID_i (i>1) will coalesce with it in at most 2×(2nlogn+t^⋆) time units on average. Hence, we have: 𝔼{T_run(n)|E^c(t^⋆)} ≤ 4nlogn + 2t^⋆. Besides, we know that {E^c(t^⋆)} ≤ 1/√(nlogn) according to (<ref>).

(b) According to the union bound.

(c) From Corollary <ref>.

(d) From the fact that ne^-(1/128)√(logn/n)t ≥ 1 for t ≤ 128√(nlogn).

[29] H.-K. Hwang and S. Janson, "Local limit theorems for finite and infinite urn models," The Annals of Probability, pp. 992-1022, 2008.

[30] R. Arratia and L. Gordon, "Tutorial on large deviations for the binomial distribution," Bulletin of Mathematical Biology, vol. 51, no. 1, pp. 125-131, 1989. | http://arxiv.org/abs/1703.08831v1 | {
"authors": [
"Saber Salehkaleybar",
"S. Jamaloddin Golestani"
],
"categories": [
"cs.DC",
"stat.ML"
],
"primary_category": "cs.DC",
"published": "20170326160128",
"title": "Token-based Function Computation with Memory"
} |
The kinematics of the white dwarf population from the SDSS DR12

Accepted Mar 2017. Received Jan 2017
==================================================================

We use the Sloan Digital Sky Survey Data Release 12, which is the largest available white dwarf catalog to date, to study the evolution of the kinematical properties of the population of white dwarfs in the Galactic disc. We derive masses, ages, photometric distances and radial velocities for all white dwarfs with hydrogen-rich atmospheres. For those stars for which proper motions from the USNO-B1 catalog are available the true three-dimensional components of the stellar space velocity are obtained. This subset of the original sample comprises 20,247 objects, making it the largest sample of white dwarfs with measured three-dimensional velocities. Furthermore, the volume probed by our sample is large, allowing us to obtain relevant kinematical information. In particular, our sample extends from a Galactocentric radial distance R_G=7.8 kpc to 9.3 kpc, and vertical distances from the Galactic plane ranging from Z=-0.5 kpc to 0.5 kpc. We examine the mean components of the stellar three-dimensional velocities, as well as their dispersions with respect to the Galactocentric and vertical distances. We confirm the existence of a mean Galactocentric radial velocity gradient, ∂⟨V_R⟩/∂R_G=-3±5 km s^-1 kpc^-1. We also confirm North-South differences in ⟨V_z⟩. Specifically, we find that white dwarfs with Z>0 (in the North Galactic hemisphere) have ⟨V_z⟩<0, while the reverse is true for white dwarfs with Z<0. The age-velocity dispersion relation derived from the present sample indicates that the Galactic population of white dwarfs may have experienced an additional source of heating, which adds to the secular evolution of the Galactic disc.

(stars:) white dwarfs; Galaxy: general; Galaxy: evolution; Galaxy: kinematics and dynamics; (Galaxy:) solar neighborhood; Galaxy: stellar content

§ INTRODUCTION

White dwarfs are the most usual stellar evolutionary endpoint, and account for about 97 per cent of all evolved stars — see, for instance, the comprehensive review of <cit.>. Once the progenitor main-sequence star has exhausted all its available nuclear energy sources, it evolves to the white dwarf cooling phase, which for the coolest and faintest stars lasts for ages comparable to the age of the Galactic disc. Hence, the white dwarf population constitutes a fossil record of our Galaxy, thus allowing us to trace its evolution. Moreover, the population of Galactic white dwarfs also conveys crucial information about the evolution of most stars. Furthermore, since the cooling process itself, as well as the structural properties of white dwarfs, are reasonably well understood <cit.>, white dwarfs can be used to retrieve information about our Galaxy that would complement that obtained using other stars, like main sequence stars.

The ensemble properties of the white dwarf population are recorded in the white dwarf luminosity function, which therefore carries crucial information about the star formation history, the initial mass function, or the nature and history of the different components of our Galaxy — see the recent review of <cit.> for an extensive list of possible applications, as well as for updated references on this topic. Among these applications perhaps the most well known of them is that white dwarfs are frequently used as reliable cosmochronometers. Specifically, since the bulk of the white dwarf population has not had time enough to cool down to undetectability, white dwarfs provide an independent estimate of the age of the Galaxy <cit.>.
But thisis notthe onlyinteresting application of white dwarfs. In particular, the observed properties of the population of white dwarfs canalso be employed to study the mass lost duringthe previous stellar evolutionaryphases.Additionally, onthe asymptoticgiant branchwhite dwarfprogenitors ejectmass which has beenenriched in carbon, nitrogen, andoxygen during theevolutionarystages. Hence, theprogenitorsofwhite dwarfsare significant contributors to the chemical evolution of the Galaxy.Finally, binary systems made of a main sequence star and a white dwarf can also be used to probe the evolution of the metal content of the Galaxy <cit.>. All these studies are done analyzingthe mass distribution of field white dwarfs. Several studieshave focusedonobtaining themass distributionofGalacticwhitedwarfs.Inparticular,themass distributionof whitedwarfs ofthe mostcommon spectraltype — namely thosewith hydrogen-rich atmospheres,also known asDA white dwarfs — has been extensivelyinvestigated in recent years — see, for example<cit.> and <cit.>, and references therein.Thefirststudy ofthekinematicpropertiesof thewhitedwarf population was thatof <cit.>, where a sampleof over 1,300 degeneratestarsfromthe catalogof<cit.>was employed.Sincethen, severalstudies havebeen done,focusing in different important aspects that canbe obtained from the kinematical distributions,suchastheidentificationofhalowhitedwarfs candidates <cit.>, thedark matter content of a potentialhalo population <cit.>, or the possible existence of amajor mergerepisode inthe Galacticdisc <cit.>. However,thesmall samplesizesusedin thesestudiesprevented obtaining conclusiveresults. But thisis not the onlyproblem that these studiesfaced.Specifically,in addition topoor statistics, themajordrawback oftheseworkswasthe lackofreliable determinations oftrue three-dimensional velocities.Thereason for this is that the surface gravity ofwhite dwarf stars is so high that gravitationalbroadening oftheBalmer linesis important. Thus, disentangling thegravitational redshift from thetrue Doppler shift is, in most of the cases, a difficult task.Consequently, determining the true radialvelocity of white dwarfs require a model that predictsstellar masses and radii and hence, the value of the gravitational redshift.Evenmore, asizable fraction ofcool whitedwarfs has featureless spectra, and hence for these stars only proper motions can be determined.All this precluded accurate measurements of the radial component of the velocity, and the assumption of zero radial velocity, or the method proposed by <cit.> were adopted in most studies— see,forinstance, <cit.>and <cit.>,and referencestherein.Morerecently, <cit.> presented a way to overcome this drawback.Theauthorsstudied asample ofwhitedwarfs incommon propermotion binary systems. Among these pairsthey selected a sub-sample in which the secondary starwas an M dwarf,for which the sharplines in its spectrumallowed toderiveeasilyreliable radialvelocities. Nevertheless, itwas not untilthe ESO SNIa ProgenitorsurveY (SPY) project— see<cit.>and referencestherein —that radial velocitieswere measuredfor thefirst timewith reasonable precisions. 
This wasmade because within thissurvey high resolution UVES VLT spectrawere obtained for a sampleof stars.Nevertheless, thesample of<cit.>only contained∼400 DAwhite dwarfswithradialvelocitiesmeasurementsbetterthan2km s^-1, ofwhich they estimated that2 per cent ofstars were members ofthe Galactichalo and7 per centbelonged tothe thick disc.Insummary,theneedofa complete sample,oratleaststatistically significant, withaccurate measurementsof truespace velocities, distances, masses and ages, is crucial for studying theevolution of ourGalaxy.In this sense, it is worth emphasizing thatlittle progresshas been doneto usethe Galactic white dwarf populationto unravel the evolution ofthe Galactic disc studying theage-velocity relationship (AVR). Age-velocity diagrams reflect the slow increase of the random velocities with age due to the heating of thedisc by massive objects. The term“disc heating” is often applied to the sum of the effects that may cause larger velocity dispersions inthe population ofdisc stars.Inprinciple, heating injects kinetic energy into the random component of the stellar motion overtime. Inordertounderstandtheoriginofthepresent assemblage ofdisc stars, itis necessary to quantifythe kinematic propertiesofthepopulationsin thediscandcharacterizethe properties of their stars asaccurately as possible.In other words, the space motions of the stars as a function of age allows us to probe the dynamical evolution of the Galactic disc.Severalheating mechanismshavebeen proposedinthe lastyears. Among themwe explicitly mention transientspiral arms <cit.>,giantmolecularclouds, <cit.>,massiveblack holesinthehalo <cit.>,repeateddiscimpact oftheoriginalGalactic globular clusterpopulation <cit.>,satellite galaxy mergers<cit.>. Recently,<cit.>studiedthe impactof differentmergeractivityin theshapeofthe AVRin simulated disc galaxiesand found that the shape ofthe AVR strongly depends on themerger history at low redshift forstars younger than 9 Gyr. A mechanismcalledradialmixing <cit.>was suggested asa possible source ofdisc heating <cit.>.However,<cit.> and<cit.>found that the contributionof radialmixing to discheating isnegligible in their simulations.Recently,<cit.> used state-of-the-art cosmological magnetohydrodynamical simulations andfound that thedominantheating mechanism is the bar,whereas spiralarms andradial migration are all subdominant inMilky Way-sized galaxies.They also foundthat thestrongest source,though lessprevalent thanbars, originatesfrom externalperturbations fromsatellites/subhaloes of masses log_ 10(M/M_) ≥ 10.From the observational point of viewit is worth mentioning that some of the propertiesof the AVR are stillcontroversial.For instance, <cit.>, <cit.>,and <cit.> found that the stellar velocity dispersion increases steadily for all times, following a power law. In the case of the vertical velocity dispersion theyfound thatσ_ z∝t^α,where αis close to 0.5. On the other hand, <cit.> found that the AVR rises fairly steeply for stars younger than 6 Gyr, thereafter becoming nearly constant withage.Another observational findingis thatheating takes place for the first2 to 3 Gyr, but then saturates when σ_ z reaches ∼ 20 km s^-1. This suggests that stars of highervelocity dispersion spendmost of their orbitaltime away from theGalactic planewhere thesources ofheating lie <cit.>. 
<cit.>found thatverticaldischeating modelsthat saturate after 4.5 Gyr areequally consistent with observations.The difficultyof obtainingprecise agesfor fieldstars <cit.>, and selection effectsin the different samples used by differentauthors might be responsiblefor the discrepanciesmentioned above.In thispaper we usethe sampleof white dwarfswith hydrogen-rich atmospheres from the Sloan Digital Sky Survey (SDSS) data release (DR) 12,which isthe largestcatalogavailable todate, todetermine velocity gradientsin the(R_ G, Z) space. Moreover, since whitedwarfs areexcellent naturalclockswe usethem tocompute accurateages andin thiswaywe determinethe AVRin thesolar vicinity.All thisallows us to investigatethe kinematic evolution oftheGalactic disc. Thispaperisorganized asfollows. In Sect. <ref> we describe the white dwarf catalog and we explain how weassess the quality ofwhite dwarf spectra. Sect. <ref> is devoted to explain how we derive effectivetemperaturesandsurfacegravities,aswellastheir respective errors, and todiscuss the corresponding distributions. We alsopresentthe setofderivedmasses, ages,distances,radial velocities and proper motions. This is later used in Sect. <ref>wherewe presentthevelocitymaps andage gradients. TheAVRis discussedinSect. <ref>,whilein Sect. <ref> we summarize our most important results and we draw our conclusions.§ THE WHITE DWARF CATALOGModern large scalesurveys have been veryprofficient at identifying whitedwarfs. AmongthemwementionthePalomarGreensurvey <cit.>, the Kisosurvey <cit.>, and the LAMOSTspectroscopic surveyof theGalactic Anti-Center <cit.>. However,ithas beentheSDSS <cit.>thesurveythatinrecentyearshas significantly increased the number of known white dwarfs.Indeed, the SDSS hasproduced the largestspectroscopic sample ofwhite dwarfs, and itslatest release,the DR 12, containsmore than30,000 stars <cit.>. Becauseofitsconsiderablylargersizeas compared toother available whitedwarf catalogs, we adoptthe SDSS catalog asthe sampleof studyin this work. From thissample we select only white dwarfs withhydrogen dominated atmospheres, that is of the DA spectral type. Forthese white dwarfs radial velocities and the relevantstellar parameterscan bederived usingthe technique describedbelow.TheGalactic coordinatesof the20,247 DAwhite dwarfsselectedinthisway areshowninFig. <ref>, whereasTable <ref>liststhestellarparameters,proper motions and radial velocities of these stars.This table is published inits integrityin machine-readableformat.However,for obvious reasons, only a portion is shownhere for guidance regarding its form and content. §.§ Signal-to-noise ratio of the SDSS white dwarf spectraBefore providing details on howwe measure the stellar parameters and radial velocities from the SDSS white dwarf spectra it is important to mention that these spectra are in several cases rather noisy.This is because white dwarfs are generallyfaint objects.Hence, the stellar parameters and radial velocities derived from their spectra often have large uncertainties. In order to selecta clean DA white dwarf sample for the AVR of theGalactic disc we henceneed to exclude all badquality data.We willdo this based onthe signal-to-noise ratio (SNR) of the SDSS spectra, which is calculated for each spectrum in this section.The featureless spectralregion of the continuum in awhite dwarf DA spectrum allows us to derive a statistical estimate of the SNR using a rather simple approach. 
This is done comparing the flux level (signal) within aparticular wavelengthrange to theintrinsic noiseof the spectrum in the same wavelength region.That is, we compute the ratio of the average signal to itsstandard deviation — see, for example, <cit.>.We defined four continuum bands centered at λ_ C of width Δλ (seeTable <ref>) fromthe observed spectrum, f(λ). We applied acorrection for the presence of a slopewithinthecontinuumband, whereσ_Cisthe standard deviation in the difference between f(λ) and a linear fit tof(λ). We thencalculated the statistical SNRfor the four continuumbands employingthe expression (SNR)_C_i = μ_ C_i/σ_C_i, where μ_ C_iis the mean flux in the given continuumband and σ_ C_i is the standarddeviationof thefluxwithinthe givencontinuumband, dominated by noise instead ofreal features.We found that SNR_ C1>SNR_C2>SNR_ C3>SNR_C4,for allthe spectra, asexpected for thesestars, where emergent fluxappears in theblue wavelengths. We finallyderived thecontinuum SNRof an observed spectrum as the average of the individual (SNR)_ C_i calculated foreach of the pseudocontinuumbands, C1, C2, C3and C4 respectively.Fig. <ref>shows the histogramof the distributionof thestatistical SNRfor thetotal of20,247 white dwarf spectraselected forthis study. We foundthat halfof the sample hasa statistical SNR forthe continuum larger than12 while most of the spectra a SNR around 5.§ DISTRIBUTIONS OF STELLAR PARAMETERSWhite dwarfs are classified into several different sub-types depending on their atmospheric composition.Themost common of these sub-types areDAwhite dwarfs,whichhavehydrogen-rich atmospheres. They comprise∼85percentofall whitedwarfs—see,e.g., <cit.>.The most distinctive spectral feature of DA white dwarfs is the Balmer series. These lines are sensitive to both theeffective temperatureandthe surfacegravity.We fittedthe Balmerlines sampledby theSDSS spectrawith theone-dimensional modelatmospherespectraof <cit.>,forwhichthe parameterizationof convectionfollows themixing lengthformalism ML2/α =0.8.In orderto account forthe higher-dimensional dependence of convection, which is important for cool white dwarfs, we applied the three-dimensional corrections of <cit.>. §.§ Effective temperatures and surface gravitiesFig. <ref> shows thedistributions of effective temperatures for two sets of data. Inthe left panel the distribution of effective temperaturesfor asub-sample ofwhite dwarfswith spectrahaving SNR>5isdisplayed,whereasthe rightpanelshowsthesame distributionfor thosewhitedwarfs withspectraof anexcellent quality,namely thosewithSNR>20. Forthosewhite dwarfswith spectra withSNR>5 the effective temperaturesare within 6,000 K << 100,000K, butmost whitedwarfs have effective temperaturesbetween 7,000 Kand 30,000 K,while although forthesub-sample withSNR>20thesameistrue, thereisa secondary peak at T_ eff∼10,000 K. The origin of this peak remains unclear, but it is related to the longer cooling timescales of faint white dwarfs.However, aprecise explanation of this peak must beaddressed withdetailed populationsynthesis studies,which are beyondthe scopeofthis paper. Fig. <ref> displaysthe distribution of surface gravities for both the sub-sample with SNR>5 (left panel) and that with SNR>20(right panel). 
As can be seen, in bothcases surface gravitiesranges from6.5 dex <log g< 9.5 dex witha narrow peak around7.85 dex.These distributions are very similar to those presented by other authors using SDSS DA whitedwarfs <cit.>.In summary, we concludethat the sub-sample of whitedwarfs with SNR>20 istotallyrepresentativeofthesampleofwhitedwarfs.This sub-samplewillbe usedbelowtoderiveina reliablewaythe kinematical properties of the white dwarf population.Fig. <ref>shows thedistributionoferrors ofthe effective temperatures as a functionof the effective temperature for all 20,427 DAwhite dwarfs with spectra havingSNR>5 (upper panel), andthecorrespondingdistribution ofuncertainties in surface gravities as a function of surface gravity (lower panel). It is worth noting that bothdistributions have very narrowpeaks. Actually, the typical uncertainties peak around σ_T_ eff∼200K and σ_log g∼0.15dex.These values are comparable to thoseobtainedinequivalentstudiesofthiskind <cit.>.Weinterpolatedthemeasuredeffectivetemperaturesandsurface gravities on the coolingtracks of <cit.> and <cit.> toderive thewhite dwarf masses,radii, cooling ages and absolute UBVRI magnitudes. The absolute magnitudes in the UBVRIsystemwere convertedintotheugriz systemusingthe equations of<cit.>.In passing wenote that although white dwarfscool down atalmost constant radii, theradius (hence, the surface gravity)slightly evolves as timepasses by. Consequently,massdeterminations, alongwiththecorresponding uncertainties, depend on both T_ eff and log g. §.§ Mass distributionIn Fig. <ref> weshow the mass distributionfor the white dwarfsub-samplewith spectrawithSNR>5(left panel)andthe sub-sample with SNR>20(right panel).For thesample with spectra with SNR>5the observedmass distributionexhibits anarrow peak with broadand flattails whichextend bothto largerand smaller masses.The meanmass of the distributionis near 0.55 M_. We found that most white dwarfs in this sample have mass uncertainties σ_M_WD<0.15 M_.Thesub-sample withSNR>20 showsasimilarbehavior,but althoughthemeanmassofthe distributionisessentiallythesame,themassdistributionis narrower, and the tails at lowand high masses are much less evident. This isparticularly true forwhite dwarfs with massessmaller than 0.45 M_ — those that populate the black shaded area in this figure.This isa direct consequence of thebetter determination of white dwarf masses for the sub-sample with spectra with larger SNR.Theoretical models predictthat white dwarfs ofmasses below 0.45M_have heliumcores. Theexistence ofsuchlow-mass helium-core white dwarfs cannot be explained by single stellar evolution, as the main sequence lifetimesof the corresponding progenitors would belarger thantheHubble time. It isthenexpected thatthese low-mass whitedwarfs areformed as aconsequence ofmass transfer episodes in binary systems, which lead to a common envelope phase, and hence to adramatic decrease of thebinary orbit <cit.>.Wetherefore expect the vastmajority of all low-mass (those withmasses ≤0.45 M_) white dwarfs in our sample to be members of close binaries.Additionally,thereisanotherpotential sourceofclosebinary contamination in thewhite dwarf mass distribution,for masses above 0.45 M_.Post-common-envelopebinaries, typically including a carbon-oxygen corewhite dwarf, represent anoticeable fraction of theGalacticwhitedwarf population—see,e.g., <cit.>and<cit.>.Someof these systems arelikely present in our sample whenthe companion is an unseenlow-mass star or asecond (less luminous) whitedwarf. 
In those cases, the brighter and lowermass white dwarf will likely have experienced a massloss episode.<cit.> and <cit.>estimated thatthe numberof closebinary systems represent ∼10 per cent of the population. §.§ AgesThe ageof a white dwarfis computed asthe sum of itscooling age plus themain sequence lifetimeof its progenitor.Usingthe white dwarfeffectivetemperaturesand surfacegravitiesobtainedin Sect. <ref> we interpolated these values on the cooling tracks of <cit.>to obtainthe corresponding coolingages.To obtain theirmain sequenceprogenitor lifetimesan initial-to-final massrelation (IFMR)— i.e.,the relationshipbetween thewhite dwarf massand the mass ofits main sequence progenitor— must be adopted. GiventhatthecurrentlyavailableIFMRssufferfrom relativelylarge observationaluncertainties,especially forwhite dwarfwithmassesbelow0.55 M_, hereweadoptedthree different relationships. More specifically, we used the semi-empirical IFMR of <cit.> as our reference case.However, to assess the uncertaintiesin the cooling ageswe also employed theIFMRs of <cit.>and<cit.>.AllthreeIFMRs cover the rangeof white dwarf masses withcarbon-oxygen cores. Even more,we onlycalculatedagesfor starswithmasses largerthan 0.55 M_. Using theseIFMRswederived threeindependent valuesofthe progenitormassesforeachwhite dwarf. Wethen interpolatedthe progenitormassesinthe solar-metallicityBASTI isochrones of <cit.> andcomputed the time that the white dwarfprogenitors spent on the main sequence,and thus the total age.In Fig. <ref>we compare the totalages obtained using different IFMRs for white dwarfs with spectra with SNR>5.As can be seen, although thereis a sizable fraction of starsthat have nearly equal ages, systematiceffects arise when using theIFMRs derived by different authors.Thus, we turn our attention to quantify the impact ofthe differentIFMRson theagesof starsinour sample. In Fig. <ref> wealso showthe average agesand standard deviations forour sample of whitedwarfs when stars aregrouped in binsof 1 Gyrduration.Ascanbe seenthe averageages arein excellent agreement witheach other, and thestandard deviations are substantiallysmaller thanthe sizeofthe bin. This meansthat althoughtheagesofindividualwhite dwarfsmaydifferdepending onthe adopted IFMR, the averageages in each 1 Gyr bin are in goodagreement. Thus,these agescan besafely employedto derive average structural properties of our Galaxy.We alsostudy the distribution of ages of whitedwarfsin our sample. Fig. <ref> shows thedistribution of white dwarf ages for thethree IFMRs already mentioned. The three distributions showa peakaround 1 Gyr. However,for thedistribution ofages obtained usingthe IFMRs of<cit.> and <cit.> the peak isnarrower, whereas the distribution of ages obtained using the IFMR of <cit.> has a smaller peak and shows anextended tail with a significant numberof white dwarfs with ages rangingfrom 2 to 5 Gyr.This is because theslope of the IFMR of <cit.>is steeper, andthus results in an extended agedistribution. Finally, in all three casesthere is a clear paucity of white dwarfs with ages larger than ∼ 6 Gyr. 
This is because relatively massive white dwarfs older than ∼6 Gyr are typically rathercool (< 7,000K) andhence toofaint and difficult to detect by the SDSS.Hence,we will not be able to probe the very first stages of the Galactic disc.Finally, we emphasize that the age uncertainty forindividual white dwarfs dependson bothits mass(or, equivalently,the surface gravity) andits luminosity(namely, the effectivetemperature).Wefound thattypicalageuncertainties cluster aroundσ_ age=0.5 Gyr and that most of the SDSS DA whitedwarfs in this study have σ_ age< 2.5 Gyr. This finalerror budget may be slightly underestimated as we employed solarmetallicity isochrones for all the objects, however this is a reasonableassumption for our local sample as most of the objects belong to thethin disc where a metallicity close to solar is the most commonvalue for a sample between R_ G=7 kpc to 9 kpc, andvertical distances fromthe Galacticplane ranging from Z=-0.5 kpc to 0.5 kpc <cit.>. §.§ Rest-frame radial velocitiesWedetermined theradialvelocities (RVs)fromthe observedSDSS spectra usinga cross-correlationprocedure originallydescribed by <cit.>. We used128 DA white dwarf modelspectra <cit.>attheSDSSnominal spectral resolution. The models cover effectivetemperatures in the range6,000K to80,000K andsurface gravitiesfrom 7.5to 9 dex.We used the IRAF[IRAFis distributed by the National Optical Astronomy Observatories, which are operated by the Association ofUniversities forResearch inAstronomy, Inc.,under cooperative agreement withthe NationalScience Foundation] packagervsao <cit.>.The low spectralresolution and the lowSNR (see Fig. <ref>) for most of SDSS DAwhite dwarf spectra togetherwith the broad Balmer lines makederiving precise RVs a challengingtask.Thus, to improve the RV determinations we implemented a Fourier filter. The aim of this filteris to suppress some of thelow-frequency power, making the Balmer lines narrower.We also designed the filter for an optimal removal ofphoton noiseat the high-frequencyend ofthe spectrum. Also, toderive theRV uncertaintieswe usedthe cross-correlation coefficient(r), wherer isthe ratioof thecorrelation peak heighttotheamplitude oftheantisymmetricnoise <cit.>.Fig. <ref> shows the uncertaintiesfrom the cross-correlation in RV asa function ofthe SNRof the SDSS spectra for our20,247 DA white dwarfs. For starswith SNR<5, σ_ RVis always larger than ∼25 km s^-1. Forspectra with SNR larger than 20 we found σ_RV<15 km s^-1.The spectra of whitedwarfs are affected by gravitational redshiftsdue totheir high surface gravities. Thus, the velocities measured by the cross-correlationtechnique (hereafter RV_ cross)are the sum of two individualcomponents <cit.>,anintrinsic radial velocity component (RV)and the contribution of the gravitational redshift RV_ grav=GM/Rc. In this expression G is the gravitationalconstant, c is the speed oflight and M and R are the white dwarf massand radius respectively. Note that these quantities are known foreach star (see Sect. <ref>).Hence, the radial velocities of our SDSS DA white dwarfs are finally obtained as RV=RV_ cross - GM/Rc. The error contribution from thegravitational redshift RV_ grav has a typical value of1.5 km s^-1, and all the sample has σ < 4 km s^-1when SNR>20.§.§ Distances and spatial distribution Wederived thedistancesto allSDSS DAwhitedwarfs fromtheir distance moduli M_g-g-A_g, where M_g is the SDSSg-band absolute magnitude (seeSect. <ref>), g isthe SDSSg-band apparentmagnitude andA_g isthe g-bandextinction. 
A_g=E(B-V)R_g,with E(B-V) interpolated in the tables of <cit.> atthespecificcoordinatesofeach whitedwarf,andweadopt R_g=3.3<cit.>.Becausethe valuesof E(B-V)givenby<cit.> areforsources located at infinity, it islikely that they have been over-estimated, as whitedwarfs areintrinsically faint andare generallyfound at distances 0.5–1 kpc <cit.>.Hence, we computedan independent estimateof E(B-V)using the three-dimensional extinction modelof <cit.>, where our preliminary distance determination was used as input parameter.If the difference betweenboth valuesof E(B-V) waslarger than0.01, we adoptedthe estimateobtained usingthe three-dimensionalextinction map. We then computed again the extinction A_g.Afterwards, we determine again the distance, which was then used to calculate a new value ofE(B-V) from thethree-dimensional map.Thiswas repeated for each white dwarf until thedifference between the adopted and the calculated E(B-V) became ≤0.01.To check which regionsof the sky are probed byour SDSS white dwarf sample,andbefore examiningthespacevelocities, westudythe spatial distribution ofstars in our catalog. Fig. <ref> showsthe spatial distribution of the white dwarfs sample. We usea right-handed Cartesiancoordinate system with X increasing towards theGalactic center, Y in the direction ofrotationandZpositive towardstheNorthGalacticPole (NGP). The white dwarf sample probes a region between 0.02<d<2kpc, but the vast majority of white dwarfs are located at distances between 0.1 and0.6kpc.Note aswell thatour sample haswhite dwarfs with relativelylarge altitudes fromthe Galactic plane —see the middleand bottompanelsofFig. <ref>. Finally,we mentionthat mostwhite dwarfshave distanceuncertainties smaller than 0.1kpc,being thetypical relative uncertainty in distance around 5%. All the stars with a SNR>20 have a relative error better than20%. Finally, we also estimated theGalactocentric distances R_ G of each whitedwarf in oursample. Thiswas done usingtheir distances d,andGalacticcoordinates (l,b)(seeFig.<ref>). Adopting a SunGalactocentric distance, R_=8.34±0.16kpc <cit.>, wefound that mostwhite dwarfs in thisstudy are located between 7.8<R_ G<9.3kpc. In the following sections, we will use R_ G togetherwith Z to investigate the velocity distributions. §.§ Proper motions We obtained proper motions andtheir associated uncertainties for the objects ofour sample usingthe casjobs[http://casjobs.sdss.org/casjobs/]interface <cit.>,which combinesSDSS andre-calibrated USNO-B astrometry <cit.>.These propermotions are calculatedfrom theUSNO-B1.0platepositions re-calibratedusing nearby galaxiestogether withthe SDSS positionso thatthe proper motionsare moreaccurateand absolute. Bymeasuring theproper motionsofquasars,<cit.>estimatethat 1σerror is∼4mas yr^-1.InFig. <ref> we compare themeasurement of theproper motion, in rightascension, and declination for 125 objectsin common in our white dwarf data-setbetweenSDSS-USNO-B<cit.> andUCAC4catalogue <cit.>. Wefounda goodagreementbetweenboth measurements for most of the common objects.Unfortunately, for 4,360 DA white dwarfs in our sample there are no available proper motions. §.§ Space velocitiesWe computedthe velocities ina cartesian Galacticsystem following the method developed by <cit.>. 
That is, we derived the space velocity components (U, V, W) from the observed radial velocities, proper motions and distances. We considered a right-handed Galactic reference system, with U positive towards the anticenter, V in the direction of rotation, and W positive towards the North Galactic Pole (NGP). The uncertainties in the velocity components U, V and W were derived using the formalism of <cit.>. Within this framework the equation for the variance of a function of several variables is used. <cit.> assumed that the matrix used to transform coordinates into velocities introduces no error in U, V or W. Consequently, the only sources of error are the distances, proper motions and radial velocities. Furthermore, this method assumes that the errors of the measured quantities are uncorrelated, i.e. that the covariances are zero. We found that the typical error is (ΔU, ΔV, ΔW) ∼ (6.5, 8.3, 10.5) km s^-1 and that nearly the entire sample has ΔU, ΔV, ΔW < 35 km s^-1.

Fig. <ref> shows the U, V and W distributions for the SDSS DA white dwarfs. Only white dwarfs with M_WD>0.45 M_⊙ — to avoid contamination from close binaries, see Sect. <ref> — and spectra with SNR>20 — to ensure a RV error smaller than 20 km s^-1, see Sect. <ref> — and reasonable uncertainties ΔU, ΔV, and ΔW < 35 km s^-1 were selected. Also, in Fig. <ref> we show the phase-space diagrams for the same sample. These diagrams reveal interesting substructure in the Galactic disc, i.e. moving groups in the U-V plane <cit.>. A detailed study of the overdensities and outlier objects seen in the phase space using a population synthesis code <cit.> is underway.

Since white dwarfs in our sample are located at relatively large distances from the Sun, we also use cylindrical coordinates, with V_R, V_ϕ and V_z defined as positive with increasing R_G, ϕ and Z, with the latter towards the NGP. We took the motion of the Sun with respect to the Local Standard of Rest (LSR) <cit.>, that is (U_⊙, V_⊙, W_⊙)=(-11, +12, +7) km s^-1. The LSR was assumed to be on a circular orbit, with circular velocity Θ = 240 ± 8 km s^-1 <cit.>. With these values we computed the cylindrical velocities of our sample: V_R, positive towards the Galactic anticenter, in consonance with the usual U velocity component, V_ϕ, towards the direction of rotation, and V_z = W, positive towards the NGP — see the appendix in <cit.> for more details.

To compute the uncertainties of the velocities in the cylindrical coordinate system we cannot assume that the observables are uncorrelated. To propagate the non-independent random errors in the velocities in this coordinate frame we used a Monte Carlo technique that takes into account the uncertainties in the distances (ΔX, ΔY, ΔZ) and in the space velocities in cartesian coordinates (ΔU, ΔV, ΔW). This Monte Carlo algorithm for error propagation allows us to easily track the error covariances. In essence, we generated a distribution of 1,000 test particles around each input value of (XYZ, UVW), assuming Gaussian errors with standard deviations given by the formal errors in the measurements for each white dwarf, and then we calculated the resulting (V_R, V_ϕ, V_z) and their corresponding 1σ associated uncertainties.
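The following is a minimal numerical sketch of this Monte Carlo error propagation (illustrative only, not the code used in this work). The coordinate conventions are one plausible reading of those defined above — X towards the Galactic centre, U positive towards the anticenter, V_R positive with increasing R_G — and the function name and example values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
R_SUN, THETA = 8.34, 240.0                  # kpc and km/s, as adopted in the text
U_SUN, V_SUN, W_SUN = -11.0, 12.0, 7.0      # solar motion w.r.t. the LSR (km/s)

def mc_cylindrical(xyz, uvw, sig_xyz, sig_uvw, n_mc=1000):
    """Scatter n_mc Gaussian test particles around one star's measured
    Cartesian position (kpc) and velocity (km/s), transform each particle
    to Galactocentric cylindrical velocities, and return the means and
    1-sigma dispersions (the propagated errors)."""
    p = rng.normal(xyz, sig_xyz, size=(n_mc, 3))   # X towards the GC
    v = rng.normal(uvw, sig_uvw, size=(n_mc, 3))   # U towards the anticentre
    v_x = -(v[:, 0] + U_SUN)             # velocity along +X (towards the GC)
    v_y = v[:, 1] + V_SUN + THETA        # along the rotation direction
    v_z = v[:, 2] + W_SUN                # towards the NGP
    dx, dy = p[:, 0] - R_SUN, p[:, 1]    # position relative to the GC
    r = np.hypot(dx, dy)
    v_r = (v_x * dx + v_y * dy) / r      # positive with increasing R_G
    v_phi = (v_x * dy - v_y * dx) / r    # positive along Galactic rotation
    samples = np.vstack([v_r, v_phi, v_z])
    return samples.mean(axis=1), samples.std(axis=1)

# hypothetical star at d ~ 0.37 kpc with roughly 5 per cent position errors
vals, errs = mc_cylindrical([0.1, 0.2, 0.3], [-15.0, -25.0, 8.0],
                            [0.005, 0.010, 0.015], [6.5, 8.3, 10.5])
print(vals, errs)
```

Because the same resampled particles feed all three cylindrical components, the covariances between V_R, V_ϕ and V_z are tracked automatically, which is the point of the Monte Carlo approach.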
§ VELOCITY MAPS AND AGE GRADIENTS

In the previous sections we have introduced our SDSS DA white dwarf sample and we have explained how we have derived the stellar parameters, ages, distances, radial velocities and proper motions for each star. With this information at hand we analyze the kinematic properties of the observed sample. This analysis is fully justified because our sample of DA white dwarfs culled from the SDSS DR 12 catalog is statistically significant, independently of the restrictions applied to the full sample to use only high-quality data. These restrictions are explained below and, quite naturally, reduce the total number of white dwarfs. Finally, we emphasize that a thorough analysis of the effects of the selection procedures and observational biases using a detailed population synthesis code <cit.> is underway, and we refer the reader to a forthcoming publication.

As mentioned, the catalog used in this work contains 20,247 DA white dwarfs culled from the SDSS DR 12. To study the behavior of the velocity components with respect to the Galactocentric distance (R_G) and to the vertical distance (Z) we restricted our sample to employ only stars with data of the highest quality. First of all, we explored the effects of selecting stars with spectra with qualities above a given SNR threshold. We found that both the resulting velocity dispersions and the average velocities are sensitive to this choice. Hence, a natural question arises, namely what is the optimal SNR cut to achieve our science goals without discarding an excessive number of stars. To address this question, in Fig. <ref> we examine the behavior of the velocity dispersion as a function of the SNR cut. The error bars are the standard deviation of the uncertainties in the velocities for the corresponding SNR bin. Note that for white dwarfs with spectra with SNR<20 random errors dominate the velocity dispersion, while for stars with spectra with SNR>20 the dispersion remains constant within the error bars, suggesting that a physical dispersion, which is not due to the uncertainties in the observables, exists. Hence, we only selected stars with spectra having SNR>20. We also excluded from this sub-sample all white dwarfs with masses below 0.45 M_⊙. In this way we avoid an undesirable contamination by close binaries — see the discussion in Sect. <ref>. Furthermore, we only considered white dwarfs with reasonable determinations, and consequently we excluded those stars for which the velocity errors were large — see Sect. <ref>. In particular, we adopted a cut Δ_U,V,W<35 km s^-1. We also kept only those white dwarfs with velocities -600 < V_R, V_ϕ, V_z < 600 km s^-1, in order to remove outliers. This resulted in a sub-sample of 3,415 DA white dwarfs. Since the aim of this section is to gain a preliminary insight on the general behavior of the velocities and velocity dispersions, we do not distinguish between halo, thick and thin disc white dwarfs.

This sub-sample allowed us to study the behavior of the components of the velocity as a function of the Galactocentric distance (R_G) and the vertical distance (Z). We used weighted means and weighted velocity dispersions. The expression for the weighted mean of any (say ω) of the three components of the velocities is:

⟨V_ω⟩ = ∑_{i=1}^N η_i V_ω_i / ∑_{i=1}^N η_i

where V_ω_i is the ω component of the velocity for each star. In this expression the weights are given by η_i = 1/σ^2_ω_i, σ_ω_i being the error associated to the three components of the velocity (ΔV_R, ΔV_ϕ, ΔV_z) of each individual white dwarf. The variance of ⟨V_ω⟩ is given by:

σ_ω^2 = 1 / ∑_{i=1}^N (1/σ_ω_i^2)

To compute the velocity dispersion of the stellar velocity components we used a weighted variance:

σ_V_ω^2 = ∑_{i=1}^N η_i (V_ω_i - ⟨V_ω⟩)^2 / (k ∑_{i=1}^N η_i)

where k=(N'-1)/N', and N' is the number of non-zero weights. The error bars of the velocity dispersion in Figs.
<ref> and <ref> were computed employing the expression ΔV_ω = (2N)^-1/2 σ_V_ω, where N is the number of white dwarfs in each bin of R_G and Z, respectively (see also Table <ref> and Table <ref>), used to calculate σ_V_ω.

§.§ Space velocities in the local volume

In Figs. <ref> and <ref> we show the three components of the space velocity (V_R, V_ϕ, V_z) for each white dwarf in this sub-sample as a function of R_G and Z, respectively. We also show the average (black lines). In Fig. <ref> we show the velocity dispersion as a function of R_G and Z. The velocity dispersion components are corrected for the measuring errors using the standard deviation of the error distribution for each bin. Most white dwarfs in this sub-sample with precise kinematics are close to the Galactic plane, with altitudes from the Galactic plane within ±0.5 kpc. Also, the vast majority of these stars are located at Galactic radii ranging from R_G = 8.0 kpc to 8.8 kpc.

In the local volume covered by our sample we found that the average value of the radial velocity V_R first slightly increases for increasing distances from the center of the Galaxy, and then decreases up to R_G ≃ 8.5 kpc. From this point outwards the average velocity remains nearly constant. The radial velocity dispersion σ_R slightly increases as R_G increases (see Table <ref>). For white dwarfs around R_G ∼ 8.0 kpc we found V_R ∼ +2.5±1.0 km s^-1, while for R_G ∼ 9.0 kpc we have V_R ∼ +3.4±2.8 km s^-1. However, at larger distances, for example R_G<7.9 kpc and Z<-0.5 kpc, Poisson noise contributes significantly. This noise is beyond the formal measurement errors of the measured kinematic properties, as these bins contain only a few white dwarfs (note that Poisson noise scales as N^-1/2, where N is the number of objects in the bin). <cit.> and <cit.>, using a sample of red clump stars from the RAVE survey that covers a volume larger than that probed by our sample of white dwarfs, also reported a radial gradient in the mean Galactocentric radial velocity, V_R. In particular, if we restrict ourselves to stars with altitudes -0.5<Z<0.5 kpc we find a small negative gradient ∂⟨V_R⟩/∂R_G = -3±5 km s^-1 kpc^-1, while the gradient of the velocity dispersion is ∂σ_R/∂R_G = +3±4 km s^-1 kpc^-1. Interestingly, V_R shows a small gradient in the vertical direction, ∂⟨V_R⟩/∂Z = -5±6 km s^-1 kpc^-1. However, the errors are too large to draw any robust conclusion. For the variation of σ_R with respect to the vertical distance we found the following: ∂σ_R/∂Z = -23±5 km s^-1 kpc^-1 for Z < 0, suggesting a substantial gradient, while ∂σ_R/∂Z = +19±3 km s^-1 kpc^-1 for Z > 0. σ_R clearly increases when moving away from the Galactic plane (see Table <ref>).

As is well known from the Jeans equation, V_ϕ decreases as the asymmetric drift increases, and hence when σ_ϕ increases <cit.>. We found a positive gradient with respect to R_G, ∂⟨V_ϕ⟩/∂R_G = +15±5 km s^-1 kpc^-1. Moving inwards, V_ϕ decreases. For the velocity dispersion we found a negative gradient in terms of R_G, ∂σ_ϕ/∂R_G = -3±1 km s^-1 kpc^-1; the velocity dispersion increases when moving inwards. Also, it is interesting to investigate the profile of V_ϕ with respect to the vertical height. Our data allow us to detect a significant gradient, ∂⟨V_ϕ⟩/∂Z = +17±4 km s^-1 kpc^-1, suggesting that the outer region of the South Galactic Pole rotates slower than the regions close to the North. However, as mentioned above, the number of white dwarfs with Z<-0.5 kpc is small, and Poisson noise, among other selection biases, may play a role. The velocity dispersion σ_ϕ also shows a small gradient.
For Z < 0, we have ∂σ_ϕ/∂Z = +3±18 km s^-1 kpc^-1, while for Z > 0, ∂σ_ϕ/∂Z = +6±2 km s^-1 kpc^-1. We also found that σ_ϕ increases (Δσ_ϕ ∼ 4 km s^-1) when |Z| increases from zero to 0.6 kpc, reaching a minimum value of σ_ϕ = 21.6±0.4 km s^-1 at 0<Z<+0.25 kpc.

Figs. <ref>, <ref>, <ref> also show how the mean value of V_z and σ_z behave with respect to R_G and Z. A small gradient is found, ∂V_z/∂R_G = -6±7 km s^-1 kpc^-1, suggesting that for this range of values of Z the mean vertical velocity increases when R_G decreases. The vertical velocity dispersion, σ_z, clearly decreases when moving away from the Galactic center, the gradient being ∂σ_z/∂R_G = -10±4 km s^-1 kpc^-1. The profile of the mean value of V_z as a function of the vertical distance, Z, also shows an interesting trend. Specifically, while for white dwarfs with positive Z (in the North Galactic hemisphere) we have negative values of ⟨V_z⟩, for white dwarfs with Z<0 (in the South Galactic hemisphere) ⟨V_z⟩ is positive. Our dataset shows a gradient ∂V_z/∂Z = -18±8 km s^-1 kpc^-1. We also found that the velocity dispersions increase when moving away from the Galactic plane, Δσ_z ∼ 10 km s^-1 from the Galactic plane to 0.6 kpc, with a significant gradient, ∂σ_z/∂Z = -15±1 km s^-1 kpc^-1 for Z < 0 and ∂σ_z/∂Z = +15±2 km s^-1 kpc^-1 for Z > 0. We note that the variation of σ_z with Z has received significant attention, as it can be used to trace the vertical potential of the disc <cit.>.

For the local sample of white dwarfs used in this study we found that the inner part of the Galaxy is hotter than the outer part, σ_R,ϕ,z(R_G<R_⊙) > σ_R,ϕ,z(R_G>R_⊙). The radial gradient in the velocity dispersion is well known and is also detected in other galaxies, e.g. <cit.>. We also found that the velocity dispersions increase when moving away from the Galactic plane, σ_R,ϕ,z(|Z| = 0) < σ_R,ϕ,z(|Z| > 0). Figs. <ref>, <ref>, <ref> and Tables <ref> and <ref> summarize the results discussed here.

§.§ Radial and vertical age gradients

In this section we study the radial and vertical age gradients of the Milky Way disc using the derived white dwarf ages. <cit.> reported variations in the fraction of active M dwarfs of similar spectral type at increasing Galactic latitudes as indirect evidence for the existence of a vertical age gradient in the Milky Way disc. Recently, <cit.>, using red giants and seismic ages from the Kepler mission, reported that old red giants dominate at increasing Galactic heights. They also reported a vertical gradient of approximately 4 Gyr kpc^-1, although with a large dispersion of ages at all heights and considerably large uncertainties, which prevented them from deriving meaningful conclusions. Note that, as mentioned before, our data-set contains white dwarfs with ages <4.5 Gyr. Hence, intermediate-old stars are missing in this study and this may introduce a bias in our results.

By selecting intervals in R_G and |Z| we estimated the median age. For stars within each bin we also computed the mean values of R_G and |Z|. Then, we used a linear regression to compute the age gradients as well as their corresponding 1σ uncertainties. In Fig. <ref> the median stellar age as a function of Galactocentric distance R_G (left panel) and height |Z| (right panel) for 7.6<R_G<9.2 kpc and 0.1<|Z|<1.3 kpc are displayed, together with the resulting linear fit (red line). The error bars in any given bin were computed as Δ_τ = (2N)^-1/2 σ_τ, where N is the number of white dwarfs in each bin.
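For concreteness, the weighted estimators defined above and the binned linear fits used for the age (and velocity) gradients can be sketched as follows. This is a schematic illustration, not the actual pipeline used in this work; the function names and the minimum bin occupancy are assumptions.

```python
import numpy as np

def weighted_stats(v, sig):
    """Weighted mean, its error, and the weighted dispersion of one
    component, following the estimators defined in the text."""
    w = 1.0 / sig**2
    mean = np.sum(w * v) / np.sum(w)
    mean_err = np.sqrt(1.0 / np.sum(w))
    n_eff = np.count_nonzero(w)            # N', the number of non-zero weights
    k = (n_eff - 1) / n_eff
    disp = np.sqrt(np.sum(w * (v - mean) ** 2) / (k * np.sum(w)))
    disp_err = disp / np.sqrt(2 * len(v))  # the (2N)^(-1/2) sigma error bar
    return mean, mean_err, disp, disp_err

def binned_gradient(x, v, sig, edges, min_per_bin=10):
    """Bin the stars in x (R_G, Z or age), compute the weighted mean of v
    per bin, and fit a straight line; the slope estimates d<V>/dx."""
    xc, mu, mu_err = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x < hi)
        if m.sum() < min_per_bin:          # skip sparsely populated bins
            continue
        mean, mean_err, _, _ = weighted_stats(v[m], sig[m])
        xc.append(x[m].mean()); mu.append(mean); mu_err.append(mean_err)
    slope, intercept = np.polyfit(xc, mu, 1, w=1.0 / np.asarray(mu_err))
    return slope, intercept
```

Applying such a fit to, e.g., the binned median ages against R_G or |Z| yields the slope and its formal uncertainty from the scatter of the bin values, which is how the gradients quoted below should be read.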
We found very small negative radial and positive vertical age gradients, ∂⟨τ⟩/∂R_G = -0.2±0.1 Gyr kpc^-1 and ∂⟨τ⟩/∂Z = +0.1±0.2 Gyr kpc^-1, respectively. However, these results together with their associated uncertainties are also compatible with zero radial and vertical gradients for the volume probed in our study.

§ THE AGE-VELOCITY RELATION

In this section we explore the age-velocity and the age-velocity dispersion relationships of the Galactic disc. We restricted our sample, as discussed in Sect. <ref>, to those stars with the highest quality data. Our final sample contains 3,415 DA white dwarfs for which the typical age uncertainty is ≤ 0.5 Gyr. Fig. <ref> shows the three components of the velocity in cylindrical coordinates (V_R, V_ϕ, V_z) as a function of age when the IFMR of <cit.> is used. Unfortunately, when this IFMR is used our sample does not contain stars older than 4.0 Gyr, and the same occurs when the relationships of <cit.> and <cit.> are employed. This is due to the tight quality requirements we have introduced. Hence, we are restricted to the study of the latest few Gyr of evolution from the formation of the Milky Way disc.

§.§ Thin, thick disc and halo sample

There is observational evidence that a sizable fraction of the thick disc is chemically different from the dominant thin disc. Studies of abundance populations based on high resolution spectroscopy find that the abundance distribution is bimodal — see, e.g., <cit.>, <cit.>, and <cit.>. This strongly suggests that the thin and thick disc have a different physical origin. Moreover, most of the thick Galactic disc population is kinematically hotter than that of the thin disc <cit.>. However, the strong gravities of white dwarfs do not allow us to use the metallicity to classify white dwarfs, because all metals are diffused inwards in short timescales, resulting in atmospheres made of pristine hydrogen. Thus, to study whether the white dwarfs in our sample belong to the thick or to the thin disc we rely exclusively on their kinematical properties.

In view of this, a Toomre diagram is useful to understand the characteristics of the final DA white dwarf sample. This diagram combines vertical and radial kinetic energies as a function of the rotational energy. Several studies based on the analysis of the abundances of the Galactic populations — e.g., <cit.> and <cit.> — have concluded that, to a first approximation, low-velocity stars (that is, those with V_tot<70 km s^-1) mainly belong to the thin disc, whereas stars with V_tot>70 km s^-1 but with velocities smaller than ∼ 200 km s^-1 are likely to belong to the thick disc. However, the velocity cutoff used to select populations strongly depends on the size of the sample. Moreover, a pure kinematical selection of thin/thick disc stars can introduce a severe bias in our studies, as observed by different authors where subpopulations were selected using individual chemical abundances <cit.>. According to this, from Fig. <ref> it follows that our sample is dominated by thin disc white dwarfs, although certainly some stars in the sample exhibit thick disc kinematics. Moreover, Fig. 17 shows that the σ_z velocity dispersion rises with |Z|, which could be at least partly due to a component hotter than the thin disc. In order to study the fraction of thin, thick and halo stars in our WD sample, we fitted a Gaussian mixture model to their V_z distribution, to see if the velocity distribution shows multiple components.
The model uses maximum likelihood to obtain the parameters of the Gaussian components, and takes into account the measuring errors for the velocities of individual stars. We find that the z-velocity distribution is dominated by a single colder component, with a velocity dispersion of about 23 ± 0.5 km s^-1 and showing no significant change with age. A weak hotter component is also seen, containing about 5% of the total sample (about 100 stars), with a velocity dispersion of 72 ± 6 km s^-1, again showing no significant change with age.

We can compare the dispersion of the colder component of the white dwarfs (23 km s^-1) with the AVR from the <cit.> sample of Geneva-Copenhagen Survey (GCS) stars. This is probably the best source of age-velocity data available for nearby stars. The GCS stars have isochrone ages and are mostly turnoff stars. Fig. <ref> shows the AVR for GCS stars with [Fe/H] > -0.3; this cut removes most of the thick disc stars from the sample. The GCS stars show a well defined AVR, with the σ_z velocity dispersion rising with age from about 10 km s^-1 for the youngest stars, and levelling out at about 23 km s^-1 for stars older than about 5 Gyr. We note that the 23 km s^-1 dispersion of the colder component of the white dwarfs (ages mostly < 4 Gyr) is in excellent agreement with the dispersion of the older thin disc stars in Fig. 23.

The velocity dispersion of the weak hotter component from the mixture model is about 72 km s^-1. This is much larger than the typical z-velocity dispersion of thick disc stars near the Sun (about 40 km s^-1), and suggests that there may also be a few halo WDs in the sample. However, in this work we will not attempt to fully analyze and characterize each kinematic component of our sample. Thus, we do not apply any a priori kinematical cut to distinguish between thin and thick disc stars to study the AVR.

§.§ The age-velocity dispersion relation

The age-velocity dispersion relation is a fundamental relation for understanding local Galactic dynamics. It reflects the slow increase of the random velocities with age due to the heating of the stellar disc. We study this relation using the ages estimated using the three IFMRs discussed in Sect. <ref>. To this end, we first binned the data in intervals of 1 Gyr, and then we computed the stellar velocity dispersions and their associated errors following the expressions discussed in Sect. <ref>. The velocity dispersion components were also corrected for the measuring errors using the standard deviation of the error distribution for each age bin.

Our results indicate that the three components of the stellar velocity dispersion increase with time, independently of the adopted IFMR. This is illustrated in Fig. <ref> — see also Table <ref>. When we use the IFMR of <cit.>, for stars older than 2.5 Gyr σ_ϕ increases with time, while σ_R remains nearly constant within the errors, and σ_z also remains constant. However, it should be taken into account that the number of stars older than 3.5 Gyr is small for all three IFMRs, although the number of stars older than 3.5 Gyr is significantly larger for the IFMR of <cit.>. Thus, only in this case can the age-velocity dispersion relation for ages longer than 4 Gyr, but still shorter than 5.5 Gyr, be explored. For the rest of the IFMRs the results are less significant.
Nevertheless, it is interesting to realize that, as it occurs for the IFMR of <cit.>, for both the IFMRs of <cit.> and <cit.> all three components of the stellar velocity dispersion increase for ages smaller than ∼2.5 Gyr, while for ages larger than this σ_R and σ_ϕ increase in both cases, while σ_z saturates.

Age-velocity dispersion relations are suitable to constrain heating mechanisms taking place in the Galactic disc during the first epochs. Since dynamical streams are well mixed in the vertical direction <cit.>, we will use the age-σ_z relation. The vertical velocity dispersion obtained when using the ages derived employing the semi-empirical IFMR of <cit.>, together with the associated uncertainties, is displayed in Fig. <ref> using black dots. In this figure we also show two dynamical heating functions following a power law as a function of the age (τ), σ_z ∝ τ^α, with α=0.35 (solid line) and 0.50 (dashed line), as labelled in this figure <cit.>. Clearly, our results fall well above the predictions of these power laws for the younger ages. In particular, we found that σ_z ∼ 22±0.5 km s^-1 for white dwarfs with ages between 0.5 and 2.0 Gyr. Newly born stars, with ages ∼ 300 Myr, typically have σ_z ∼ 5.0 km s^-1 <cit.>. For open clusters with ages around 1.5 Gyr it is found that σ_z ∼ 15.0±5 km s^-1 <cit.>. These values are significantly smaller than those obtained using the population of Galactic white dwarfs.

A highly effective mechanism for scattering stars, especially during the first ∼3 Gyr <cit.>, is interactions with Giant Molecular Clouds (GMCs). In particular, using simulations of the orbits of tracer stars embedded in the local Galactic disc, <cit.> derived an age-velocity dispersion relation σ ∝ τ^0.2 for the heating caused by GMCs. Another widely accepted mechanism is heating by transient spiral arms in the disc <cit.>. Combining both heating mechanisms leads to a power law with a larger exponent, σ_z ∝ τ^0.4±0.1 <cit.>.

The age-velocity dispersion relation derived from the present WD sample indicates that the Galactic population of white dwarfs may have experienced an additional source of heating, which adds to the secular evolution of the Galactic disc seen in Fig. <ref>. The GCS stars in the age range of our WDs (< 4 Gyr) have velocity dispersions that are clearly less than the 23 km s^-1 found for the colder component of the WDs.

The origin of this heating mechanism remains unclear. One possibility is that some thick disc-halo stars may have contaminated the younger bins of the AVR (see Sect. <ref>, where we reported the existence of a hot component in our sample), thus making the age-velocity dispersion hotter. Another possibility is that there is an intrinsic dispersion for these stars. Given that white dwarfs are the final products of the evolution of stars with low and intermediate masses, they might have experienced a velocity kick of ∼10 km s^-1 during the final phases of their evolution <cit.>. It is also important to keep in mind that, although we have excluded in our analysis all white dwarfs with masses smaller than 0.45 M_⊙, because they are expected to be members of close binaries, a possible contamination by close pairs — of the order of ∼10 per cent <cit.> — may influence the age-velocity dispersion relation. Finally, there is a last possible explanation. Our sample of DA white dwarfs is not homogeneously distributed on the sky, but instead is drawn from the SDSS. The geometry of the SDSS is complicated — see Fig. <ref> — and there is a large concentration of stars around the NGP.
Even more, the fields of the SDSS are not equally distributed in the polar cap. This means that, possibly, important selection effects could affect the results. All these alternatives need to be carefully explored; detailed population synthesis models must be employed that take into account the sample biases and selection procedures to check their verisimilitude. We postpone this study for a future publication.

§ CONCLUSIONS

We have used the largest catalog of white dwarfs with hydrogen-rich (DA) atmospheres currently available (20,247 stars), obtained from the SDSS DR12, to compute effective surface temperatures, surface gravities, masses, ages, photometric distances and radial velocities, as well as the components of their velocities when proper motions are available. For the first time we investigated how the space velocities V_R, V_ϕ, V_z depend on the Galactocentric radial distance R_G and on the vertical height Z in a large volume around the Sun using a sample of DA white dwarfs.

Our understanding of the chemical and dynamical evolution of the Milky Way disc has been hampered over the years by the difficulty of measuring accurate ages of stars in our Galaxy. However, white dwarfs are natural cosmochronometers, and this motivated us to use them to study the kinematical evolution of our Galaxy. Accordingly, we derived ages for each individual white dwarf in our catalog and we computed averaged ages as well. We did this using three different initial-to-final mass relationships. In a second step we studied the sensitivity of the individual and averaged ages to the adopted initial-to-final mass relationship, finding that average ages are not affected by the choice of this relationship, although individual ages can be significantly different. Additionally, we found that when our preferred choice, the initial-to-final mass relationship of <cit.>, is employed the number of stars older than 2.5 Gyr is larger than that predicted by the initial-to-final mass relationships of <cit.> and <cit.>. In all three cases, nevertheless, we found a paucity of old white dwarfs. Naturally, this arises because most of the white dwarfs older than 6 Gyr have small effective surface temperatures, T_eff<7000 K, and luminosities. Consequently, they fall outside the magnitude limit of the SDSS.

In a subsequent effort we studied the age-velocity relation of the Galactic disc during the last few Gyr. To do this we selected a sub-sample of stars for which proper motions and accurate radial velocities were available. For this sub-sample of stars precise kinematics were derived (see Sect. <ref>). White dwarfs belonging to this sub-sample are within ±0.5 kpc in Z and between 7.8 and ∼ 9.0 kpc in R_G, and allowed us to study how their kinematical properties depend on R_G and Z. Our results can be summarized as follows.

We found that the mean value of the radial velocity, ⟨V_R⟩, increases when moving from the inner regions of the Galaxy to distances beyond the Solar circle, while the radial velocity dispersion σ_R slightly increases as R_G increases. Similar radial gradients have been reported for the RAVE survey <cit.>, which uses red clump stars and probes a volume larger than the one studied here. Additionally, we found that V_R shows a small gradient in the vertical direction. However, the uncertainties are still large, preventing us from drawing any robust conclusion. Finally, we also found that σ_R clearly increases when moving away from the Galactic plane.
We also found that V_ϕ decreases for decreasing R_G, while σ_ϕ increases, and that V_ϕ has a significant gradient in the Z direction, ∂⟨V_ϕ⟩/∂Z = +17±4 km s^-1 kpc^-1, suggesting that the regions of the South Galactic hemisphere probed by our sample rotate slower than those of the North Galactic hemisphere. We also found that σ_ϕ increases when |Z| increases from zero to 0.6 kpc. We found that the mean vertical velocity increases as R_G decreases, while the vertical velocity dispersion, σ_z, clearly decreases when moving away from the Galactic center. Interestingly, while for white dwarfs with positive Z (belonging to the North Galactic hemisphere) we found negative values of ⟨V_z⟩, for stars with Z<0 (belonging to the South Galactic hemisphere) ⟨V_z⟩ is positive. We also found that the velocity dispersions increase when moving away from the Galactic plane. The age-velocity dispersion relation was also studied. We found that the age-velocity dispersion relation derived using the present sample of DA white dwarfs indicates that this population may have experienced an additional heating, in addition to the secular evolution of the Galactic disc. However, the origin of this heating mechanism remains unclear. We advanced several hypotheses in the previous section which may explain it, but we defer a thorough evaluation of these alternatives to a future publication, where we will employ population synthesis techniques to carefully reproduce the selection procedures and observational biases. To conclude, we demonstrated that white dwarfs can be used to study the dynamical evolution of our Galaxy. This has been possible because we now have a large database of white dwarfs with accurate measurements of their stellar parameters, a byproduct of the SDSS. However, the catalog of white dwarfs presented here has inherent limitations, due to the selection procedures and observational biases. Thus, modeling the kinematical properties derived from this sample requires significant theoretical efforts, and deserves further studies. These studies are currently underway, and will be the subject of future publications. Nevertheless, the results will be rewarding, as we will have independent determinations of the dynamical properties of our Galaxy. § ACKNOWLEDGMENTS BA gratefully acknowledges the financial support of the Australian Research Council through Super Science Fellowship FS110200035. BA also thanks Maurizio Salaris (Liverpool John Moores University) and Daniel Zucker (Macquarie University/AAO) for lively discussions. TZ thanks the Slovenian Research Agency (research core funding No. P1-0188). This research has been partially funded by MINECO grant AYA2014-59084-P and by the AGAUR. | http://arxiv.org/abs/1703.09152v1 | {
"authors": [
"Borja Anguiano",
"Alberto Rebassa-Mansergas",
"Enrique Garcia-Berro",
"Santiago Torres",
"Ken Freeman",
"Tomaz Zwitter"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170327155028",
"title": "The kinematics of the white dwarf population from the SDSS DR12"
} |
PSPACE hardness of approximating the capacity of time-invariant Markov channels with perfect feedback Mukul Agarwal It is proved that approximating the capacity of a time-invariant Markov channel with perfect feedback is PSPACE hard. Updated: March 2017 ======================= § INTRODUCTION In this paper, it is proved that approximating the capacity of a Markov channel with perfect feedback is PSPACE-hard. By `approximating,' we mean computing the capacity within a certain given additive error e. A class of channels will be constructed for which it will be proved that approximating the capacity to within 0.1 bits of the correct capacity is PSPACE-hard. The observation that approximating the capacity of a Markov channel with perfect feedback can be formulated as a stochastic dynamic programming problem with partial observations, thereby connecting it to the work of Tsitsiklis and Papadimitriou on the complexity of partially observed Markov decision processes <cit.>, is what is novel here, along with the result. The authors of <cit.> demonstrate that the complexity of Markov decision processes with partially observed states is PSPACE-hard. The constructions and arguments in our paper depend significantly on, and are in many cases the same as, the arguments in <cit.>. The output of the channel in this paper is the partial state information in <cit.>. By carefully choosing the reset probabilities from the final decision states, we are able to map the complexity result about Markov decision processes in <cit.> to a result concerning the hardness of capacity approximation in this paper. The utter simplicity with which a result in control theory can be `transported' into a result in communication theory can be seen. For the relevant background on complexity theory, see <cit.> and references therein. An understanding of Section 4 in <cit.>, in particular the statement of Theorem 6 and its proof, is needed to understand the proof here. Once this is understood, and the reader has an understanding of information theory and basic probability theory, the proofs presented here can be understood. § LITERATURE SURVEY In <cit.>, the complexity of Markov decision processes and partially observed Markov decision processes has been considered, and in <cit.>, the Witsenhausen and team decision problems have been considered. In both these papers, it is proved that there are problems in each category which are hard. See further references therein for problems which have been proved to be hard, especially in a control setting. There are various papers in information theory demonstrating hardness of source and channel coding algorithms and code constructions; see, for example, <cit.>, <cit.>, and <cit.>. In <cit.>, it has been proved that the generalized Lloyd-Max algorithm is NP-complete. In <cit.>, it has been proved that the general decoding problem for linear codes and the general problem of finding the weights of a linear code are both NP-complete. In <cit.>, the problem of encoding complexity of network coding is considered, and one of the results in this paper is that approximating the minimum number of encoding nodes required for the design of multicast coding networks is NP-hard.
In this paper, it is proved that even just approximately calculating the capacity to within a constant additive positive number is PSPACE-hard for the problem of a Markov channel with perfect feedback, and thus, for a general network, the problem of approximating the capacity region is PSPACE-hard. § THE RESULT This section contains the channel construction, lemmas and the theorem. For literature on Markov channels with feedback, the reader is referred to <cit.> and references therein, though this paper is not particularly needed to understand our paper. §.§ Channel construction and reliable communication Starting from any quantified formula Q = ∃x_1 ∀x_2 ∃x_3 … ∀x_n F(x_1, x_2, …, x_n) with n variables and m clauses C_1, C_2, …, C_m, where F is in conjunctive normal form, we construct a channel as follows. The channel states s_0, A_ij, A'_ij, T_ij, T'_ij, F_ij, F'_ij and the sets A_j, A'_j, T_j, T'_j, F_j, F'_j, 1 ≤ i ≤ m, 1 ≤ j ≤ n, are the same as those in <cit.>. The states A_i,n+1 are lumped into a single state A_n+1 and the states A'_i,n+1 are lumped into a single state A'_n+1. Also, there is no terminating state, and from A_n+1 the channel goes back to itself or to s_0, and from A'_n+1 the channel goes back to itself or to s_0, in a way described later. The state transitions are, as stated above, exactly the same as in <cit.> (but for the exception stated above). In order to make the paper self-contained, we reproduce them here. Based on each clause C_i of Q and variable x_j there are 6 states, A_ij, A'_ij, T_ij, T'_ij, F_ij, F'_ij. There are also 2 additional states, A_n+1 and A'_n+1. The initial state s_0 should be thought of as a set. For each variable j, the states A_ij, 1 ≤ i ≤ m, form a set A_j, and the states A'_ij, 1 ≤ i ≤ m, form a set A'_j; similarly, the sets T_j, T'_j, F_j, F'_j are defined (this will be the partial state information, as we shall see below). The channel can transition from s_0 to the states A'_i1, 1 ≤ i ≤ m, with equal probability. If x_j is an existential variable, there are two possible state transitions out of the set A_j, leading with certainty from A_ij to T_ij and F_ij respectively, and similarly, there are two transitions out of the set A'_j, leading with certainty from A'_ij to T'_ij and F'_ij respectively. If x_j is a universal variable, there is one transition out of the set A_j, leading with equal probability from A_ij to T_ij and F_ij, and similarly, there is one transition out of the set A'_j, leading with equal probability from A'_ij to T'_ij and F'_ij. From the sets T_j, F_j, T'_j, F'_j, there is only one transition, which leads with certainty from T_ij, F_ij, T'_ij and F'_ij to (respectively) A_i,j+1, A_i,j+1, A'_i,j+1, A'_i,j+1, with two exceptions: if x_j appears positively in C_i, the transition from T'_ij is to A_i,j+1 instead of A'_i,j+1, and if x_j appears negatively, the transition from F'_ij is to A_i,j+1. When the channel reaches the state A_i,n+1 (the states A_i,n+1 being lumped, as stated above, into a single state A_n+1), the channel stays in A_n+1 with probability 1-p and transitions back to the state s_0 with probability p, for a value of p stated later. Similarly, when the channel reaches state A'_i,n+1 (the states A'_i,n+1 being lumped, as stated previously, into a single state A'_n+1), the channel stays at state A'_n+1 with probability 1-q and transitions to the state s_0 with probability q, for a value of q stated later. The state A_n+1 will be called the `good' state and the rest of the states will be called `bad' states; a minimal simulation sketch of this transition structure is given below.
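To make the construction concrete, the following Python sketch (ours, not part of the paper) simulates a single transition of this channel. The representation of each clause C_i as a set of signed variable indices, the `universal` argument, the 0-based clause index, and the labels 'Agood'/'Abad' for the lumped states A_n+1/A'_n+1 are our own illustrative choices.

```python
import random

def step(state, decision, clauses, universal, n, p, q):
    """One transition of the channel built from the formula Q (a sketch).
    Each clause C_i is a set of signed variable indices: +j if x_j appears
    positively in C_i, -j if negatively; `universal` is the set of
    universally quantified variable indices; `decision` in {True, False}
    is the encoder's input D, used only at existential A/A' states."""
    if state == 's0':                        # move to a uniformly random A'_{i,1}
        return ("A'", random.randrange(len(clauses)), 1)
    if state == 'Agood':                     # lumped state A_{n+1}
        return 's0' if random.random() < p else 'Agood'
    if state == 'Abad':                      # lumped state A'_{n+1}
        return 's0' if random.random() < q else 'Abad'
    kind, i, j = state
    if kind in ("A", "A'"):                  # assign a truth value to x_j
        val = (random.random() < 0.5) if j in universal else decision
        return (("T" if val else "F") + kind[1:], i, j)
    # kind is one of T, F, T', F': move deterministically to layer j+1
    primed = kind.endswith("'")
    if primed and ((kind[0] == "T" and j in clauses[i]) or
                   (kind[0] == "F" and -j in clauses[i])):
        primed = False                       # the literal satisfies C_i: rejoin the A-chain
    if j == n:
        return 'Abad' if primed else 'Agood'
    return ("A'" if primed else "A", i, j + 1)
```

Under the text's choice p = 2^-(mn)^100 ≫ q = 2^-(mn)^200, a run that reaches 'Agood' resets to s_0 far sooner than a run trapped in 'Abad', which is the asymmetry exploited in the lemmas below.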
The reason for this will become clear below. The output of the channel is the partial state information, that is, one of s_0, A_j, A'_j, T_j, T'_j, F_j, F'_j, 1 ≤ j ≤ n, and A_n+1, A'_n+1, along with an output bit, either 0 or 1, depending on the channel input and channel functioning. Thus, the output space of the channel is 𝕆 ≜ {s_0, {T_j}, {T'_j}, {F_j}, {F'_j}, {A_j}, {A'_j}, 1 ≤ j ≤ n, A_n+1, A'_n+1} × {0,1}. The output of the channel is `fed back' directly to the encoder without delay. The input to the channel is {D_1, D_2} × {0, 1}. Intuitively, this should be thought of as a bit being transmitted through 0 or 1, while D_1 and D_2 determine the state transition (note that there are at most 2 possible state transitions out of a state). Of course, another policy can be used instead of transmitting the information bit and the state transition information; the above is just an intuitive way of thinking about the channel input. The input to the encoder is a sequence of bits, each bit taking the value 1 with probability 1/2 and 0 with probability 1/2, along with the output of the channel, which, as stated above, is `fed back' directly to the encoder without delay. At each time, the bit input to the encoder should be thought of as a single bit, the bit which has not yet been communicated. Thus, the input space of the encoder is {0, 1} × 𝕆 = {0, 1} × {s_0, {T_j}, {T'_j}, {F_j}, {F'_j}, {A_j}, {A'_j}, 1 ≤ j ≤ n, A_n+1, A'_n+1} × {0,1}. Based on all past inputs (a bit stream and all past feedback from the channel output), the encoder makes an encoding and `feeds it' into the channel. It will be assumed that the channel starts in state s_0. Note that if the channel starts in state s_0, in 2n+1 units of time it reaches either state A_n+1 or state A'_n+1. For the purpose of understanding, it is best to think of the problem as a sequential problem where a set of bits `enter' the encoder at a certain rate R, which causes an encoding, and the channel produces the outputs from which a decoding needs to happen. A rate R is achievable if, for every δ, ∃ t_δ such that for t > t_δ, Rt bits can be communicated and the average error, that is, ℙ(B̂_tR ≠ B_tR), is less than δ, where B_tR denotes the bit input up to time t and B̂_tR is the corresponding decoding. Let p = 2^-(mn)^100 and q = 2^-(mn)^200. §.§ Lemmas, theorem and proofs As has been stated previously, assume, in what follows, that the channel starts in state s_0. Given ϵ > 0. Then, ∃ m_0, n_0, depending only on ϵ, such that for m > m_0, n > n_0, if Q is true, the capacity of the channel corresponding to Q is larger than 1-ϵ. By <cit.>, Q being true implies that we can choose the channel input (decisions in <cit.>) so that we always end up in A_n+1, not A'_n+1. The transition from s_0 to A_n+1 takes 2n+1 units of time, where at worst no bits can be communicated, and the channel stays in state A_n+1 for an average of order of magnitude 2^(mn)^100 transitions, where 1 bit is transmitted noiselessly per channel use. By these considerations, it follows that given any κ > 0, ∃ m_1, n_1 such that for m > m_1, n > n_1, the stationary probability of the state A_n+1 of the Markov chain with the above chosen channel inputs is > 1-κ. By use of the ergodic theory for Markov chains, the lemma follows. Given α > 0. Then, ∃ m_0, n_0 sufficiently large, depending only on α, such that for m > m_0, n > n_0, the capacity of the channel corresponding to the formula Q being larger than α implies that Q is true.
If there were some way for the channel to enter the state A'_n+1 irrespective of the decisions, this would happen with probability at least 2^-n/m (see <cit.>), and then the channel would stay in this state for an average order of magnitude of 2^(mn)^200 amount of time (1 unit of time refers to one state transition). Note that no transmission of information is possible in state A'_n+1. Even if the channel ended in A_n+1 with the remaining probability 1-2^-n/m, the average order of magnitude amount of time the channel stays in state A_n+1 is 2^(mn)^100, which is `much less' than 2^(mn)^200 · 2^-n/m. Also, there are the 2n+1 units of time when the channel transitions from s_0 to A_n+1 or A'_n+1, which are `negligible'. It follows that given any λ > 0, ∃ m_2, n_2 such that for m > m_2, n > n_2, the fraction of time the channel spends in state A'_n+1 is > (1-λ) with high probability. Finally, note that the amount of transmission of information during the (2n+1) units of time when the channel transitions from s_0 to A_n+1 or A'_n+1 is at most log⌈6mn+3⌉. This is because the output space of the channel has cardinality 6mn+3. By taking into account the above numbers, it follows, then, that if the channel could enter A'_n+1, there would exist m_0, n_0 sufficiently large such that the capacity of the channel would be less than α, which would contradict the assumption on the channel. It follows, then, that there is a set of decisions (channel inputs) for which the channel never enters the state A'_n+1, which implies, by <cit.>, that the formula is true. Computing the capacity of this set of Markov channels (the set of channels formed by taking a channel corresponding to each formula Q, where m, n and the particular formula can be any positive integers) when perfect feedback is available, to within an accuracy of 0.1 bits per channel use, is PSPACE hard. If the capacity of this set of Markov channels with feedback could be computed to within an accuracy of 0.1, it would be known whether the capacity of the channel is less than 0.2 or larger than 0.8. This would imply, from the previous lemmas, that we would know, for sufficiently large m, n, whether Q is true or not, and this problem is PSPACE hard. § COMMENTS It has been assumed that the channel starts in state s_0. This is only for simplicity of presentation. Minor modifications can be made in order to make the channel start in any state. In addition, a transition from state A_i,n+1 to A'_i,n+1 with a probability 2^-(mn)^500 could be added, and thus, from state A_i,n+1 to state s_0 with probability 1-2^-(mn)^500 - 2^-(mn)^100, to make the picture a little more realistic. Also, the bound in Lemma 2 concerning the information transmission from input to output is log⌈6nm + 3⌉; however, a bound based on the cardinality of the input space of the channel should also be possible. There is nothing special about the number 0.1 in Theorem <ref>. Also, p and q need not be doubly exponential. Doubly exponentials work, and for that reason they have been chosen this way. PSPACE hardness implies NP hardness, and thus, the problem dealt with in this paper is also NP hard. A block-coding model can be considered instead of a sequential model. For the purpose of intuition, it is best to think of a sequential model. § IMPORTANCE OF THE RESULT In information theory, one important research direction is to find single-letter characterizations, appropriately defined, for capacity regions of networks. However, it is not concretely known whether single-letter characterizations exist in general.
An example is the two-way channel <cit.>. In order to prove that in general there is no single-letter characterization, all one needs is an example for which it is not possible to get one. This paper takes a different viewpoint and firmly establishes a limit from the viewpoint of complexity theory by providing an example for which approximating the capacity of a general network is indeed hard in the sense of PSPACE-hardness. § RECAP AND RESEARCH DIRECTIONS A class of Markov channels with perfect feedback was constructed for which it was proved that approximating the capacity to within 0.1 bits is PSPACE hard. It would be worthwhile exploring the application of this proof idea to other channels with potentially noisy feedback, channels with perfect state information, and networks in general. It would be helpful to see whether this result puts restrictions on the kind of single-letter characterizations that may exist for capacity regions of networks; for example, this paper will rule out certain characterizations which can be approximated in a way that is not PSPACE-hard. The simplicity with which a result in control theory has been used to prove a result in information theory may be noted, and further possibilities of the same may be explored. § ACKNOWLEDGMENTS The author thanks Prof. John Tsitsiklis for suggesting his paper <cit.>, which led to proving the result in this paper. The author also thanks Dr. Tom Richardson for carefully reviewing this paper, suggesting extensive changes in the writing of the proof, and encouraging the author to submit the paper. The author thanks Prof. Vincent Tan for helpful discussions. Finally, the author thanks Prof. Sanjoy Mitter for suggesting that the author talk to Prof. John Tsitsiklis and Dr. Tom Richardson, for looking through the first version of this paper, and for general encouragement.
TsPaCo C. H. Papadimitriou and J. N. Tsitsiklis, The complexity of Markov decision processes, Mathematics of Operations Research 12 (1987), no. 3.
TsPaIn C. H. Papadimitriou and J. N. Tsitsiklis, Intractable problems in control theory, SIAM Journal on Control and Optimization 24 (1986), no. 4.
Wit M. R. Garey, D. S. Johnson, and H. S. Witsenhausen, The complexity of the generalized Lloyd-Max problem, IEEE Transactions on Information Theory IT-28 (1982), no. 2.
Ber E. Berlekamp, R. McEliece, and H. van Tilborg, On the inherent intractability of certain coding problems, IEEE Transactions on Information Theory 24 (1978), no. 3.
Lang M. Langberg, A. Sprintson, and J. Bruck, The encoding complexity of network coding, IEEE Transactions on Information Theory 52 (2006), no. 6.
Tat S. Tatikonda and S. K. Mitter, Capacity of channels with feedback, IEEE Transactions on Information Theory 55 (2009), no. 11.
S2way C. E. Shannon, Two-way communication channels, Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, University of California Press, 1961, pp. 611–644.
Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, [email protected] | http://arxiv.org/abs/1703.08625v3 | {
"authors": [
"Mukul Agarwal"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170324233204",
"title": "PSPACE hardness of approximating the capacity of Markoff channels with noiseless feedback"
} |
Department of Applied Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan. Using a variational Monte Carlo method, we study the competition of strong electron-electron and electron-phonon interactions in the ground state of the Holstein-Hubbard model on a square lattice. At half filling, an extended intermediate metallic or weakly superconducting (SC) phase emerges, sandwiched by antiferromagnetic (AF) and charge order (CO) insulating phases. Upon carrier doping into the CO insulator, the SC order dramatically increases for strong electron-phonon couplings, but is largely hampered by wide phase separation (PS) regions. Superconductivity is optimized at the border to the PS. 63.20.Kr, 71.10.Fd, 71.27.+a, 74.25.Kc, 74.72.-h Competition among Superconducting, Antiferromagnetic, and Charge Orders with Intervention by Phase Separation in the 2D Holstein-Hubbard Model Masatoshi Imada December 30, 2023 ================================================================================================================================================ Introduction. —The electron-phonon interaction in condensed matter is the origin of many important phenomena such as conventional superconductivity (SC) and charge density waves. In a class of strongly-correlated materials, the interplay between electron correlations and electron-phonon interactions is believed to induce novel phenomena such as the unconventional high-T_c s-wave SC in the alkali-doped fullerenes<cit.>. Even for high-T_c cuprates, some experiments<cit.> and theoretical studies<cit.> have suggested important roles of phonons for fully understanding the electronic properties, including the SC. However, they are still controversial because the relevance of the electron-phonon interaction addressed in previous theoretical works relies largely on adjustable model parameters introduced in an ad hoc fashion. In addition, a computationally accurate framework to study the interplay between the electron correlation and the electron-phonon interaction has not been fully explored. To establish the roles of phonons in a wide range of strongly correlated materials including the cuprates, we need a flexible method which can accurately treat strong electron-electron and electron-phonon interactions on an equal footing. For decades, variational Monte Carlo (VMC) methods have been applied to investigate strongly correlated electrons<cit.>. Their advantage is that they do not suffer from the notorious negative-sign problem, whereas their accuracy depends on the assumed variational wave function. However, owing to improved efficient optimization methods such as the stochastic reconfiguration method<cit.>, their accuracy and flexibility have improved through the introduction of many variational parameters<cit.>. The method has recently been applied to complicated ab initio multi-orbital effective Hamiltonians<cit.>. Recently, we have successfully extended this many-variable VMC (mVMC) method to electron-phonon coupled systems<cit.>. The Holstein-Hubbard model is the simplest model for studying the interplay of electron-electron and electron-phonon interactions. However, the phase diagram and physical properties under these two competing interactions are controversial even for the ground states. In one dimension and on the Bethe lattice with infinite coordination, its phase diagrams have been obtained by the density matrix renormalization group (DMRG)<cit.> and the dynamical mean-field theory (DMFT)<cit.>, respectively.
At half filling, the DMRG studies have reported the existence of an intermediate metallic phase between a Mott insulating and a CO phase in the ground-state phase diagram. On the other hand, the DMFT study at zero temperature has not found evidence for it<cit.>. For square lattices, a finite-temperature quantum Monte Carlo (QMC) study has also suggested the emergence of an intermediate paramagnetic metallic phase between the AF and CO phases<cit.>. However, such a phase diagram cannot be conclusive in finite-temperature studies because of the Mermin-Wagner theorem. Another important open issue arises when carriers are doped into the half-filled system. The DMFT study of the Holstein model has revealed the presence of a coexisting phase of CO and SC which is not prevented by the PS<cit.>. It is interesting to ask whether this coexistence also exists in two dimensions. The connection between the SC and PS is also intriguing and has been discussed in the literature<cit.> in a different context, that of the three-band Hubbard model as a model for the cuprates. Recently, their strong connection was observed in the mVMC studies of the Hubbard model<cit.> and of the ab initio effective Hamiltonian of electron-doped LaFeAsO<cit.>. A natural question here is whether a phonon-driven PS also has such a connection in the case of the s-wave SC. In this paper, we study these issues by using the mVMC method. Model. —The Hamiltonian we consider here is given by H = -t ∑_⟨i,j⟩,σ (c_iσ^† c_jσ + h.c.) + U ∑_i n_i↑ n_i↓ + g ∑_i x_i n_i + ∑_i ( p_i^2/2M + MΩ^2 x_i^2/2 ), where t, U, g, and Ω represent the hopping amplitude, the on-site interaction strength between electrons, the electron-phonon interaction strength, and the phonon frequency, respectively. c_iσ (c_iσ^†) represents the annihilation (creation) operator of an electron with spin σ (=↑ or ↓) at the site i. The particle number operators n_iσ and n_i are defined by n_iσ = c_iσ^† c_iσ and n_i = n_i↑ + n_i↓. x_i and p_i are the lattice displacement operator and its conjugate momentum operator, respectively. x_i relates to the annihilation/creation boson (phonon) operators b_i/b_i^† as x_i = √(1/2MΩ) (b_i + b_i^†). The dimensionless electron-phonon interaction strength λ is defined as the ratio of the lattice deformation energy to half the bandwidth, W/2 = 4t, and we obtain λ = g^2/(MΩ^2 W), where M is the mass of the single-component nuclei. If we consider the path-integral representation of the partition function and integrate out the phonon degrees of freedom, the model is exactly mapped onto the Hubbard model with the effective dynamical on-site interaction U_eff(ω) = U - λW/[1-(ω/Ω)^2]. In this paper, we set M = t = 1 as the units of mass and energy. We consider N = L^2 systems on the square lattice with N_e electrons and impose the periodic/anti-periodic boundary condition in the x/y-direction to satisfy the closed-shell condition. The filling factor and doping (hole) concentration are given by ρ = N_e/N and δ = 1 - ρ, respectively. Method. —Our variational wave function takes the following form: |ψ⟩ = P^el-ph (|ψ^el⟩ |ψ^ph⟩)<cit.>. Here, |ψ^el⟩ and |ψ^ph⟩ represent variational wave functions for electrons and phonons, respectively. P^el-ph is the correlation factor which takes into account the entanglement between electrons and phonons. Its explicit form is given by P^el-ph = exp( ∑_i,j α_ij x_i n_j ), where α_ij are variational parameters.
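Before turning to the electronic part of the trial state, a small numerical illustration (ours, not the authors') of the effective interaction U_eff(ω) introduced above; the function name and default parameters are our own choices, in units of t = 1 with W = 8t:

```python
# Minimal sketch (ours): the dynamical effective interaction obtained by
# integrating out the phonons, U_eff(w) = U - lam*W / (1 - (w/Omega)**2).
def u_eff(w, U, lam, Omega, W=8.0):
    return U - lam * W / (1.0 - (w / Omega) ** 2)

# Static limit: U_eff(0) = U - lam*W, attractive once lam > U/W.
print(u_eff(0.0, U=8.0, lam=1.3, Omega=8.0))   # -2.4, the value quoted below
# Retardation: for 0 < w < Omega the attraction is even stronger.
print(u_eff(4.0, U=8.0, lam=1.3, Omega=8.0))   # 8 - 10.4/0.75 ~ -5.87
```

In the antiadiabatic limit Ω/t → ∞ only the static value U - λW survives, which is the static Hubbard mapping used repeatedly below.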
As |ψ^el⟩, we adopt the generalized pairing wave function with the Gutzwiller<cit.> and Jastrow correlation factors<cit.>: |ψ^el⟩ = P^J P^G |ϕ^pair⟩. The generalized pairing wave function takes the form |ϕ^pair⟩ = ( ∑_i,j=1^N f_ij c_i↑^† c_j↓^† )^N_e/2 |0⟩, where f_ij are variational parameters. This is a generalization of the Hartree-Fock-Bogoliubov type wave function with AF/CO and SC orders<cit.>, and thus it flexibly describes these states as well as paramagnetic metals (PM). In order to reduce the number of independent variational parameters, we assume that the f_ij have a sublattice structure such that f_ij depends on the relative vector r_i - r_j and a sublattice index of the site i, which we denote as η(i). Thus, f_ij = f_η(i)(r_i - r_j). In the present study, we assume a 2 × 2 sublattice structure, and the number of independent f_ij reduces from N^2 to 2 × 2 × N. We also assume translational symmetry for the variational parameters in the correlation factors. For |ψ^ph⟩, we use the tensor product of phonon wave functions with wave vectors q: |ψ^ph⟩ = ∏_q |ψ^ph_q⟩. |ψ^ph_q⟩ is expanded in terms of phonon Fock states |m_q⟩ as |ψ^ph_q⟩ = ∑_m_q=0^m_q^max c_m_q |m_q⟩. Here, the m_q^max are controllable cutoffs for the number of phonons, and the c_m_q are treated as real variational parameters. The number of these variational parameters is ∑_q (m_q^max + 1), which is equal to N(m^max + 1) if we take m_q^max = m^max. In this study, we checked the convergence of physical quantities as a function of the cutoff, and we typically took m_q^max = 10-40 for q = (π, π) and m_q^max = 5 for the others. As initial states in the optimization of the variational parameters, we considered the non-interacting Fermi sea (PM state), SC, AF, CO, and coexisting SC+AF and SC+CO states. Half-filled case. — We consider two phonon frequencies: an intermediate frequency Ω = 8t (equal to the bandwidth W) and a smaller one, Ω = t. In Fig. <ref>, we summarize our results in the ground-state phase diagram in the U-λ plane. The phase diagram includes the boundary of the AF and CO phases. To distinguish each phase, we measured the spin structure factor S_s(q) = 1/(3N) ∑_i,j ⟨S_i · S_j⟩ e^iq·(r_i - r_j) and the charge structure factor S_c(q) = 1/N ∑_i,j (⟨n_i n_j⟩ - ρ^2) e^iq·(r_i - r_j). One of the main findings in this Letter is the existence of an intermediate phase sandwiched by the AF and CO phases around U ∼ λW. For Ω = 8t, we found a wide intermediate phase. For the smaller frequency Ω = t, it is narrowed but still exists for U ≲ 2. The shrinkage of the intermediate region for small Ω was also observed in one<cit.> and infinite<cit.> dimensions. Previous QMC studies suggested the intermediate region at U = 5t<cit.>. However, the wider intermediate region there is probably because their calculation is at finite temperature, T/t = 0.25. In Fig. <ref>(a) and (b), we plot the spin/charge structure factor S_s/c(π,π)/N as a function of λ at (Ω/t, U/t) = (8,8) and (1,8), respectively. In the intermediate region, the values of S_s/c(π,π)/N vanish after extrapolation to the thermodynamic limit (see the Supplemental Material<cit.> for the extrapolation procedure). The presence of the intermediate phase is further evidenced by two first-order transitions, signaled by two energy-level crossings as a function of λ: with increasing λ at fixed U and Ω, the AF phase energy crosses that of the intermediate phase at λ ∼ 0.91, as shown in Fig. <ref>(a), and then the latter crosses that of the CO phase slightly above λ ∼ 1.07, as in Fig.
<ref>(b). One may infer that the antiadiabatic or the adiabatic limits may give further useful insights. These are examined in the Supplemental Material<cit.>. In the intermediate region, it is likely that weak SC orders emerge, but the expected amplitudes of the order are too weak for us to distinguish them from PM states in the available data for finite systems (see the Supplemental Material<cit.>). Doped case. — We now study the doped region. In Fig. <ref>, we first present our ground-state phase diagram at U = 0 in the δ-λ plane for Ω = 8t and Ω = t, because the U = 0 phase diagram captures an essential aspect. For U = 0, the effective interaction U_eff(ω) has negative parts for ω < Ω, which lead to s-wave SC states except for the gapped CO phase at half filling. In our phase diagram, the SC+CO phase is absent. Instead, the PS region appears adjacent to the CO phase at half filling. We find that for the smaller phonon frequency, the PS region is enlarged. In the Supplemental Material<cit.>, we present the phase diagram in the adiabatic limit as the extreme case. In Fig. <ref>, we also plot S_c(π,π)/N and the long-range part of the s-wave SC correlation function P_s^∞, which is defined by P_s^∞ = 1/M ∑_√(2)L/4<|r| P_s(r). Here, r runs over the relative position vectors belonging to (-L/2, L/2]^2, and M is the number of vectors satisfying √(2)L/4 < |r| < √(2)L/2; the SC correlation function P_s(r) is defined by P_s(r) = 1/N ∑_r_i ⟨Δ_s^†(r_i) Δ_s(r_i + r)⟩ with the order parameter Δ_s(r_i) = c_r_i↑ c_r_i↓. In Fig. <ref>(a), we show, for an example at (Ω/t, U/t, λ) = (8, 0, 0.3), the physical quantities which were used to determine the phase diagrams in Fig. <ref>. We also show an interacting case, (Ω/t, U/t, λ) = (8, 8, 1.3), in Fig. <ref>(b) for comparison. Since in the antiadiabatic limit the model is mapped to the standard Hubbard model with the on-site interaction U_eff = U - Wλ, the comparison between the interacting and noninteracting cases with the same U_eff may provide us with insight for large Ω. The cases (a) and (b) indeed have the same U_eff = -2.4. The value of S_c(π,π)/N decreases monotonically, and the CO eventually disappears at δ ≃ 0.14 and 0.37 for (a) (U/t = 0) and (b) (U/t = 8), respectively. On the other hand, the value of P_s^∞ increases as δ increases, and we clearly observe an SC phase. For small δ, the CO and s-wave SC orders coexist. By the Maxwell construction for the δ-μ curve, however, we find that the SC+CO phase is swallowed up by the PS region in our phase diagrams. Here, μ is the chemical potential, which was calculated by μ(N̅_e) = [E(N_e) - E(N'_e)]/(N_e - N'_e). Here, E is the total energy, (N_e, N'_e) are the electron numbers, and we obtain the chemical potential at the mid filling N̅_e = (N_e + N'_e)/2. Our Hamiltonian has particle-hole symmetry at μ = -8λ - U/2 = -2.4 and -6.4 for (a) and (b), respectively. Since this value is above the line used for the Maxwell construction, there is a charge gap at half filling. For the interacting case (b), the charge gap is even larger. We also present the negative inverse uniform charge susceptibility -χ_c^-1 = dμ/dρ in Fig. <ref>. In our model, the spinodal point δ_s, where the uniform charge susceptibility diverges (χ_c^-1 = 0), coincides with the critical point of the CO, and therefore the PS is driven by the CO (see also the results for the adiabatic limit in the Supplemental Material<cit.>).
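To make the Maxwell construction used here concrete, the following sketch (ours, not the authors' code; it assumes a grid of fillings sorted in increasing order) flags the fillings where the energy per site lies above its lower convex envelope, i.e., inside a thermodynamically unstable PS region:

```python
import numpy as np

def maxwell_unstable(rho, e):
    """Sketch of a Maxwell construction: given energies per site e(rho) on a
    grid of increasing fillings rho, return a mask that is True inside the
    phase-separation region (e strictly above its lower convex envelope)."""
    pts = np.column_stack([rho, e])
    hull = [0]                               # indices of the lower convex envelope
    for k in range(1, len(rho)):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = pts[hull[-2]], pts[hull[-1]]
            x3, y3 = pts[k]
            # drop hull[-1] if it lies on or above the chord from hull[-2] to k
            if (y2 - y1) * (x3 - x2) >= (y3 - y2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(k)
    envelope = np.interp(rho, pts[hull, 0], pts[hull, 1])
    return e > envelope + 1e-12
```

Equivalently, inside the flagged region the chemical potential μ = dE/dN is non-monotonic, which is the δ-μ criterion quoted in the text; the spinodal point δ_s sits where dμ/dρ changes sign.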
Comparisons between (a) and (b) show a quantitative difference: the CO/SC orders are enhanced/suppressed for large U/t. However, we find a universal feature common to both (a) and (b): a clear one-to-one correspondence among the peak of the SC order, the spinodal point, and the border of the CO phase, which indicates tight connections among the mechanisms of the SC, the CO, and the uniform charge instability. The strong effective attractive interaction between carriers is certainly a key, because it causes all three of these properties. The strong attraction is caused by the electron-phonon interaction here, while the resultant charge fluctuations may also work as additional glue for the Cooper pairs. The same trend between the enhancement of the s-wave SC and the uniform charge susceptibility has been reported for the d-wave SC in the Hubbard model<cit.> and the extended s-wave SC in the ab initio effective Hamiltonian for LaFeAsO<cit.> as well. To summarize, by studying the ground states of the Holstein-Hubbard model on a square lattice, we have clarified where the s-wave SC is enhanced in the phase diagram. At half filling, we have found an intermediate metallic or weakly SC region sandwiched by the CO and AF phases. In the doped case, the SC is dramatically enhanced, but a wide PS region triggered by the CO largely hinders the SC and completely preempts the SC+CO phase. We have revealed that the SC is optimized at the border of the PS. These findings have been obtained by the VMC method extended to electron-phonon coupled systems. Our method is quite flexible, and therefore it will also be useful for studying more complicated systems such as ab initio Hamiltonians of high-T_c cuprates, where several different phonon modes are present. We thank Kota Ido for useful discussions. T.O. also thanks Yuta Murakami for discussions. The code was developed based on the open-source software mVMC<cit.>. This work is financially supported by the MEXT HPCI Strategic Programs for Innovative Research (SPIRE), the Computational Materials Science Initiative (CMSI) and Creation of New Functional Devices and High-Performance Materials to Support Next Generation Industries (CDMSI). This work was also supported by a Grant-in-Aid for Scientific Research (No. 22104010, No. 22340090 and No. 16H06345) from MEXT, Japan. The simulations were partially performed on the K computer provided by the RIKEN Advanced Institute for Computational Science under the HPCI System Research project (project numbers hp130007, hp140215, hp150211, hp160201, and hp170263). The simulations were also performed on computers at the Supercomputer Center, Institute for Solid State Physics, University of Tokyo. Supplemental Material for “Competition among Superconducting, Antiferromagnetic, and Charge Orders with Intervention by Phase Separation in 2D Holstein-Hubbard Model” § PHASE TRANSITIONS TO AF/CO PHASES AT HALF FILLING In Fig. <ref>, we show the extrapolation of S_s/c(π, π)/N to the thermodynamic limit. Here, we plot S_s/c(π, π)/N as a function of 1/L and perform a linear fit based on spin-wave theory<cit.>. § SUPERCONDUCTING CORRELATION FUNCTION AT HALF FILLING Here, we show the superconducting correlation functions at half filling for a small effective attraction U_eff (< 0), where the charge order does not appear. In Fig. <ref>, we plot the s-wave superconducting correlation functions P_s(r) at (U/t, Ω/t, λ) = (0, 8, 0.14) (U_eff = -1.12t). P_s(r) does not show a clear power-law decay; instead, its long-range part P_s^∞ seems to show saturated behavior.
However, after the size extrapolation to the thermodynamic limit<cit.> (the inset of Fig. <ref>), its value does not exclude 0 within the error of ∼10^-4. Because of this extrapolation error, we cannot estimate its value accurately, although it is expected to be finite. Since our model is mapped onto the attractive Hubbard model with U/t = -1.12 in the antiadiabatic limit Ω/t → ∞, and the latter is equivalent to the repulsive Hubbard model with U/t = 1.12 at half filling, we may estimate an upper bound of P_s^∞ from previous determinant quantum Monte Carlo (DQMC) results for the repulsive Hubbard model<cit.>. The antiferromagnetic order parameter defined by m_s = √(lim_L→∞ S_s(π, π)/N) for the repulsive Hubbard model relates to P_s^∞ for the attractive Hubbard model as P_s^∞ = 2m_s^2, due to the spin-rotational symmetry. Since there are no available DQMC data of m_s for U/t < 2, we estimate the value at U/t = 1.12 from a rescaled Hartree-Fock result. Here, we rescaled the Hartree-Fock result such that it reproduces the DQMC result at U/t = 2 (shown in Fig. <ref>), although it seems to overestimate m_s for U/t < 2. Note that, by considering the uncertainty in the fitting by the rescaled Hartree-Fock result, this should be regarded as a rough estimate. Nevertheless, from this estimation, we obtain P_s^∞ ∼ 0.001 for the attractive Hubbard model with U/t = -1.12, whose order of magnitude looks robust. (In the absence of the spin-rotational symmetry, which is the case for finite Ω/t, we need to multiply by an additional factor of 1.5.) Thus, we confirm that the value of P_s^∞ for Ω/t = 8 is smaller than the estimated value for the attractive Hubbard model. We believe that this is mainly due to the retardation effect. In any case, although the difficulty in estimating a small order parameter remains, the expected order of the superconducting order from this simple analysis, P_s(r=∞) ∼ O(10^-3-10^-4), suggests that our numerical results are consistent with the existence of a weak superconducting order. More quantitative analyses are left for future studies. § ANTIADIABATIC REGIME For Ω ≫ t, the Holstein-Hubbard model reduces to the Hubbard model with the static interaction U_eff. Here, we confirm this property by comparing results for these two models. Through the celebrated canonical transformation<cit.>, the attractive Hubbard model (AHM) at half filling is transformed into the repulsive Hubbard model (RHM) at the same filling. Owing to this transformation, the physical quantities of the AHM and the RHM can be mapped to each other. In particular, S_c(π)/4 and the structure factor for the s-wave superconductivity, P(0), of the AHM are transformed to the spin structure factor of the z-component, S_s^z(π), and of the x-y plane components, S_s^x(π) + S_s^y(π), of the RHM, respectively. Here, P(q) = 1/N ∑_i,j ⟨c_i↑^† c_i↓^† c_j↓ c_j↑⟩ e^iq·(r_i-r_j) and S_s^α(q) = 1/N ∑_i,j ⟨S_i^α S_j^α⟩ e^iq·(r_i-r_j) (α = x, y, z). The results are shown in Fig. <ref>. We plot S_s(π)/N and [S_c(π) + 4P(0)]/12N at (ρ, U/t) = (1, 8) for large Ω. The Ω → ∞ results are obtained by calculations for the Hubbard model with the effective interaction U_eff. In this limit, these two quantities should be symmetric with respect to U_eff = 0 if the results are exact. For the Holstein-Hubbard model, we show results for Ω/t = 50, 100, and 200 at U/t = 8. From these results, we can confirm that the results of the Holstein-Hubbard model approach those of the Hubbard model with U_eff for Ω ≫ t.
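For reference, a common form of the canonical (partial particle-hole) transformation invoked above (our reconstruction; the text does not spell out the convention) acts only on the down spins of the bipartite lattice: d_i↑ = c_i↑, d_i↓ = (-1)^i c_i↓^†. It leaves the hopping invariant, flips the sign of U, and maps the charge operator n_i - 1 and the on-site pair operator c_i↑^† c_i↓^† of the AHM onto 2S_i^z and (-1)^i S_i^+ of the RHM, consistent, up to normalization conventions, with the identifications S_c(π)/4 ↔ S_s^z(π) and P(0) ↔ S_s^x(π) + S_s^y(π) quoted above.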
In the antiadiabatic limit, there is no intermediate region between the AF and CO phases. To show the absence of energy crossings in this limit, we present the energy curves for the Hubbard model in Fig. <ref>. Because of the SU(2) spin-rotational symmetry, there is a degeneracy between the SC and CO states for the half-filled attractive Hubbard model. To see how accurately we can describe this degeneracy, we here compare these two states. In Fig. <ref>, we present the charge/superconducting structure factors (L = 14). The charge/superconducting structure factors are enhanced for large -U/t in the CO/SC states. We obtained these states by optimizing initial states with SC or CO orders. We define the energy difference between these states as ΔE = (E_CO - E_SC)/N, where E_CO and E_SC are the energies of the CO and SC states, respectively. We obtained ΔE/t = 0.00264(6) for U/t = -4. Within this difference, ascribed to the error of the VMC calculations, the two states are nearly degenerate. The difference is quite small compared with the Holstein-Hubbard model, where there is no such degeneracy. For instance, we obtained ΔE/t = -0.1019(2) at Ω/t = U/t = 8 and λ = 1.5 (U_eff/t = -4). Finally, we present results for the doped case. In the main text, we showed the presence of phase-separated regions for finite Ω/t. In contrast, a QMC study reported the absence of phase separation for the static attractive Hubbard model<cit.>. In addition, the superconductivity becomes dominant away from half filling, and there is no degeneracy or coexistence with the charge order. To show that our VMC method can describe these properties in the antiadiabatic limit, we present physical quantities as functions of δ for the Hubbard model at U/t = -2.4 as well as for the Holstein model at λ = 0.3 (U_eff/t = -2.4) and Ω/t = 200 in Fig. <ref>. From the good agreement between these two cases, we can again confirm that the Holstein model approaches its effective static Hubbard model for large Ω/t. These results can be regarded as the antiadiabatic limit of Fig. 5(a), where a phase separation was observed for Ω/t = 8. In contrast, the monotonic behavior of μ here indicates the absence of phase separation in the antiadiabatic limit. We can also see that superconducting states are indeed dominant away from half filling, in contrast to the degeneracy with the charge-ordered state shown in Fig. <ref>. More concretely, we obtained ΔE/t = 0.0004(4) for U/t = -2.4 at half filling, whereas we found that the SC states always have lower energies than the CO states away from half filling. § DOPED HOLSTEIN MODEL IN THE ADIABATIC REGIME In Fig. 4, we have shown the phase diagrams for the doped Holstein model at Ω/t = 1 and 8. The physical quantities for Ω/t = 1 and λ = 0.3 are presented in Fig. <ref>. From these results, we found that for smaller phonon frequencies the PS region is enlarged. As the extreme case, we here present an analysis in the adiabatic limit. The limit Ω/t → 0 with the spring constant K = MΩ^2 kept fixed (M → ∞) is called the adiabatic (classical) limit. In this limit, the Hamiltonian is reduced to H = -t ∑_⟨i,j⟩,σ (c_iσ^† c_jσ + h.c.) + g ∑_i x_i n_i + ∑_i K x_i^2/2, where K is the spring constant and the lattice displacements {x_i} become classical variables. By changing variables as x_i = x̃_i - (g/K)ρ, we can rewrite it as H = -t ∑_⟨i,j⟩,σ (c_iσ^† c_jσ + h.c.) + g ∑_i x̃_i δn_i + ∑_i K x̃_i^2/2 - (Wλ/2)ρ^2 N. Here, δn_i = n_i - ρ and λ = g^2/(KW).
In this form, the new lattice displacements {x̃_i} are 0 if the electrons are uniformly distributed (⟨δn_i⟩ = 0 for all i). By completing the square in the second and third terms, the total energy ⟨H⟩ is minimized if x̃_i = -(g/K)⟨δn_i⟩. We assume a charge-ordered state with x̃_i = (-1)^i x̃ = -(g/K) δn (-1)^i, where x̃ and δn are order parameters. By substituting this into the Hamiltonian, we obtain H = -t ∑_⟨i,j⟩,σ (c_iσ^† c_jσ + h.c.) + g x̃ ∑_i (-1)^i n_i + (K x̃^2/2) N - (Wλ/2) ρ^2 N. We can diagonalize this by the following unitary transformation: a_kσ^† = u_k c_kσ^† + v_k c_k+Q,σ^† and b_kσ^† = -v_k c_kσ^† + u_k c_k+Q,σ^†, with u_k (v_k) = √( 1/2 [ 1 - (+) ε(k)/√(ε(k)^2 + Δ^2) ] ). Here, Q = (π, π). By this transformation, we obtain H = ∑_k∈folded BZ, σ [ E^-(k) a_kσ^† a_kσ + E^+(k) b_kσ^† b_kσ ] + (K x̃^2/2) N - (Wλ/2) ρ^2 N, with E^±(k) = ±√(ε(k)^2 + Δ^2). Here, BZ denotes the Brillouin zone, ε(k) is the energy dispersion of the non-interacting system, and Δ = g x̃. Based on this result, we can numerically calculate the total energy as a function of x̃ (or δn) and obtain the ground-state energy as the minimum of E(x̃). In Fig. <ref>, we present the obtained ground-state phase diagram. In the phase diagram for finite Ω (Fig. 4 of the main text), we observe that the phase separation region expands as Ω decreases. Especially for large λ, a broad phase separation region is reasonable, because we have the term -(Wλ/2)ρ^2 N in the Hamiltonian. The second derivative of -(Wλ/2)ρ^2 with respect to ρ gives the negative constant -Wλ, and this easily makes the curve of E/N convex upward for large λ. As is evident, the origin of the term -(Wλ/2)ρ^2 N is the uniform shift of the original lattice displacement due to the change of the particle density. As seen for finite Ω, the spinodal point δ_s coincides with the critical point of the charge order in the adiabatic limit as well. Actually, this is not accidental, because the vanishing of Δ makes a kink in the chemical potential. To show this, we plot the δ-dependence of Δ and μ in Fig. <ref>. This fact indicates that the charge order in the doped system is necessarily preempted by the phase separation. § ENERGY PER HOLE IN DOPED REGIONS In the main text, we have shown the chemical potential μ(δ) to identify the PS region. An alternative way, which is often used, is to examine the energy per hole, defined by e_h(δ) = (E/N - E_0/N)/δ. Here, E_0 is the energy at δ = 0. If e_h(δ) has a minimum at δ = δ_c, a PS region is identified as δ < δ_c. In Fig. <ref>, we plot this quantity for the two parameter sets adopted in the main text. Minima are clearly seen, and the PS regions are perfectly consistent with Fig. 5. § POSSIBILITY OF INCOMMENSURATE ORDERS In the present study, we assumed a 2 × 2 sublattice structure for {f_ij}, and thus disregarded the possibility of incommensurate orders, which may appear at finite δ<cit.>. The inclusion of such states may modify our phase diagram. Here, we discuss this possibility with additional data. In Fig. <ref>, we show results for (Ω/t, U/t, λ) = (1, 0, 0.25), which include charge-density-wave (CDW) states with long periodicity l = (l_x, l_y). Here, l_α represents the period in the α-direction, and we consider l = (8, 2), (10, 2), and (12, 2), which have peaks at q = (π - 2π/l_x, π) in S_c(q). To obtain these states, we extended the sublattice structure to 8 × 2, 10 × 2, and 12 × 2, respectively. In addition to them, we show a CDW state with l = (2, 2), which is the one considered in the main text. As seen in Fig.
<ref>, we found that CDW states with l = (10, 2) or (12, 2) have lower energies than the states with l = (2, 2) at some values of δ, whereas the states with l = (8, 2) have higher energies. This suggests the presence of incommensurate orders in the thermodynamic limit. However, the inclusion of these states does not change the result of the Maxwell construction (the dashed line), and they are unstable against the phase separation. | http://arxiv.org/abs/1703.08899v2 | {
"authors": [
"Takahiro Ohgoe",
"Masatoshi Imada"
],
"categories": [
"cond-mat.supr-con"
],
"primary_category": "cond-mat.supr-con",
"published": "20170327015638",
"title": "Competition among Superconducting, Antiferromagnetic, and Charge Orders with Intervention by Phase Separation in the 2D Holstein-Hubbard Model"
} |
Randomized CP Tensor Decomposition N. Benjamin Erichson [email protected] Department of Statistics, University of California, Berkeley, USA Krithika Manohar [email protected] Department of Applied & Computational Mathematics, California Institute of Technology, Pasadena, USA Steven L. Brunton [email protected] Department of Mechanical Engineering, University of Washington, Seattle, USA J. Nathan Kutz [email protected] Department of Applied Mathematics, University of Washington, Seattle, USA December 30, 2023 =========================================================================================================== The CANDECOMP/PARAFAC (CP) tensor decomposition is a popular dimensionality-reduction method for multiway data. Dimensionality reduction is often sought, since many high-dimensional tensors have low intrinsic rank relative to the dimension of the ambient measurement space. However, the emergence of `big data' poses significant computational challenges for computing this fundamental tensor decomposition. By leveraging modern randomized algorithms, we demonstrate that coherent structures can be learned from a smaller representation of the tensor in a fraction of the time. Thus, this simple but powerful algorithm enables one to compute the approximate CP decomposition even for massive tensors. The approximation error can thereby be controlled via oversampling and the computation of power iterations. In addition to theoretical results, several empirical results demonstrate the performance of the proposed algorithm. randomized algorithms; randomized least squares; dimension reduction; multilinear algebra; CP decomposition; canonical polyadic tensor decomposition. § INTRODUCTION Advances in data acquisition and storage technology have enabled the acquisition of massive amounts of data in a wide range of emerging applications. In particular, numerous applications across the physical, biological, social and engineering sciences generate large multidimensional, multi-relational and/or multi-modal data. Efficient analysis of this data requires dimensionality reduction techniques. However, traditionally employed matrix decomposition techniques such as the singular value decomposition (SVD) and principal component analysis (PCA) can become inadequate when dealing with multidimensional data. This is because reshaping multi-modal data into matrices, or data flattening, can fail to reveal important structures in the data. Tensor decompositions overcome the information loss from flattening. The canonical CP tensor decomposition expresses an N-way tensor as a sum of rank-one tensors to extract multi-modal structure. It is particularly suitable for data-driven discovery, as shown by <cit.> for various learning tasks on real-world data. However, tensor decompositions of massive multidimensional data pose a tremendous computational challenge. Hence, innovations that reduce the computational demands have become increasingly relevant in this field.
The idea of tensor compression, for instance, eases computational bottlenecks by constructing smaller (compressed) tensors, which are then used as a proxy to efficiently compute CP decompositions. Compressed tensors may be obtained, for example, using the Tucker decomposition of a tensor into one small core tensor and N unitary matrices. However, this approach requires the expensive computation of the left singular vectors for each mode. Related Work. This computational challenge can be tackled using modern randomized techniques developed to compute the SVD. <cit.> presents a randomized Tucker decomposition algorithm based on the random sparsification idea of <cit.> for computing the SVD. <cit.> proposed an accelerated randomized CP algorithm using the ideas of <cit.>, without the use of power iterations. An alternative randomized tensor algorithm proposed by <cit.> uses random column selection. Related work by <cit.> proposed a sparsity-promoting algorithm for incomplete tensors using compressed sensing. Another efficient approach to building large-scale tensor decompositions is the subdivision of a tensor into a set of blocks. These smaller blocks can then be used to approximate the CP decomposition of the full tensor in a parallelized or distributed fashion <cit.>. <cit.> fused random projection and blocking into a highly computationally efficient algorithm. More recently, <cit.> proposed a block-sampling CP decomposition method for the analysis of large-scale tensors using randomization, demonstrating significant computational savings while attaining near-optimal accuracy. These block-based algorithms are particularly relevant if the tensor is too large to fit into fast memory. Contributions. We present the randomized CP algorithm, which is closely related to the randomized methods of <cit.>. Our method proceeds in two stages. The first stage uses random projections with power iterations to obtain a compressed tensor (Algorithm <ref>). The second stage applies either alternating least squares (ALS) or block coordinate descent (BCD) to the compressed tensor (Algorithms <ref> and <ref>), at significantly reduced cost. Finally, the CP decomposition of the original tensor is obtained by projecting the compressed factor matrices back using the basis derived in Algorithm <ref>. Randomized algorithms for the CP decomposition have been proposed <cit.>, albeit without incorporating power iterations. The power iteration concept is fundamental for modern high-performance randomized matrix decompositions but, to the best of our knowledge, has not been applied in the context of tensors. Without power iterations, the performance of randomized algorithms based on random projections can suffer considerably in the presence of white noise. We combine power iterations with oversampling to further control the error of the decomposition. Embedding the CP decomposition into this probabilistic framework allows us to achieve significant computational savings, while producing near-optimal approximation quality. For motivation, Figure <ref> shows the computational savings of a rank k=20 approximation for varying tensor dimensions. Here we compare the ALS and BCD solvers for computing the deterministic and randomized CP decompositions. The computational advantage of the randomized algorithms is significant, while sacrificing a negligible amount of accuracy. Organization. The paper is organized as follows: Section <ref> briefly reviews the CP decomposition and randomized matrix decompositions.
Section <ref> introduces the randomized CP tensor decomposition algorithm. Section <ref> presents the evaluation of the computational performance, together with examples. Finally, Section <ref> summarizes the research findings. § BACKGROUND Ideas for multi-way factor analysis emerged in the 1920s with the formulation of the polyadic decomposition by <cit.>. However, the polyadic tensor decomposition only achieved popularity much later, in the 1970s, with the canonical decomposition (CANDECOMP) in psychometrics, proposed by <cit.>. Concurrently, the method of parallel factors (PARAFAC) was introduced in chemometrics by <cit.>. Hence, this method became known as the CP (CANDECOMP/PARAFAC) decomposition. However, in the past, computation was severely inhibited by the available computational power. Today, tensor decompositions enjoy increasing popularity, yet runtime bottlenecks persist. §.§ Notation Scalars are denoted by lower case letters x, vectors as bold lower case letters 𝐱, and matrices by bold capitals 𝐗. Tensors are denoted by calligraphic letters 𝒳. The mode-n unfolding of a tensor is expressed as 𝒳_(n), while the mode-n folding of a matrix is defined as 𝐗_(n). The vector outer product, the Kronecker product, the Khatri-Rao product and the Hadamard product are denoted by ∘, ⊗, ⊙, and ∗, respectively. The inner product of two tensors is expressed as ⟨·,·⟩, and ‖·‖_F denotes the Frobenius norm for both matrices and tensors. §.§ CP Decomposition The CP decomposition is the tensor equivalent of the SVD, since it approximates a tensor by a sum of rank-one tensors. Here, tensor rank is defined as the smallest number of rank-one tensors whose sum generates the tensor <cit.>. The CP decomposition approximates these rank-one tensors. Given a third order tensor 𝒳 ∈ ℝ^I × J × K, the rank-R CP decomposition is expressed as 𝒳 ≈ ∑_r=1^R 𝐚_r ∘ 𝐛_r ∘ 𝐜_r, where ∘ denotes the outer product. Specifically, each rank-one tensor is formulated as the outer product of the rank-one components 𝐚_r ∈ ℝ^I, 𝐛_r ∈ ℝ^J, and 𝐜_r ∈ ℝ^K. Components are often constrained to unit length, with the weights absorbed into the vector λ = [λ_1, ..., λ_R] ∈ ℝ^R. Equation (<ref>) can then be re-expressed as (see Fig. <ref>) 𝒳 ≈ ∑_r=1^R λ_r · 𝐚_r ∘ 𝐛_r ∘ 𝐜_r. More compactly, the components can be expressed as factor matrices, i.e., 𝐀 = [𝐚_1, 𝐚_2, ..., 𝐚_R], 𝐁 = [𝐛_1, 𝐛_2, ..., 𝐛_R], and 𝐂 = [𝐜_1, 𝐜_2, ..., 𝐜_R]. Using the Kruskal operator as defined by <cit.>, (<ref>) can be more compactly expressed as 𝒳 ≈ [[λ; 𝐀, 𝐁, 𝐂]]. §.§ Randomized Matrix Algorithms The efficient computation of low-rank matrix approximations is a ubiquitous problem in machine learning and data mining. Randomized matrix algorithms have been demonstrated to be highly competitive and robust when compared to traditional deterministic methods. Randomized algorithms aim to construct a smaller matrix (henceforth called a sketch) designed to capture the essential information of the source matrix <cit.>. The sketch can then be used for various learning tasks. There exist several strategies for obtaining such a sketch, with random projections being the most robust off-the-shelf approach. Randomized algorithms have in particular been studied for computing the near-optimal low-rank SVD <cit.>. Following the seminal work by <cit.>, a randomized algorithm computes the low-rank matrix approximation 𝐀 ≈ 𝐐𝐁, with 𝐀 ∈ ℝ^m×n, 𝐐 ∈ ℝ^m×k and 𝐁 ∈ ℝ^k×n, where the target rank is denoted as k and is assumed to satisfy k ≪ min{m,n}. Intuitively, we construct a sketch 𝐘 ∈ ℝ^m×k by forming a random linear weighted combination of the columns of the source matrix 𝐀.
More concisely, we construct the sketch as 𝐘 = 𝐀Ω, where Ω ∈ ℝ^n×k is a random test matrix with entries drawn from, for example, a Gaussian distribution. If 𝐀 has exact rank k, then the sketch 𝐘 spans, with high probability, a basis for the column space of the source matrix. However, most data matrices do not feature exact rank (i.e., the singular values {σ_i}_i=k+1^n are non-zero). Thus, instead of using just k samples, it is favorable to construct a slightly oversampled sketch 𝐘 ∈ ℝ^m×l which has l = k+p columns, where p denotes the number of additional samples. In most situations small values p = {10, 20} are sufficient to obtain a good basis that is comparable to the best possible basis <cit.>. An orthonormal basis 𝐐 ∈ ℝ^m×l is then obtained via the QR decomposition 𝐘 = 𝐐𝐑, such that 𝐀 ≈ 𝐐𝐐^⊤𝐀 is satisfied. Finally, the source matrix 𝐀 is projected to the low-dimensional space 𝐁 = 𝐐^⊤𝐀, where 𝐁 ∈ ℝ^l×n. (Note that Eq. (<ref>) requires a second pass over the source matrix.) The matrix 𝐁 can then be used to efficiently compute the matrix decomposition of interest, e.g., the SVD. The approximation error can be controlled by a combination of oversampling and power iterations <cit.>. Randomized matrix algorithms are not only pass-efficient; they also have the ability to exploit modern parallelized and distributed computing architectures. Implementations in MATLAB, C and R are provided by <cit.>, <cit.>, and <cit.>.

§ RANDOMIZED CP DECOMPOSITION

Given a third order tensor 𝒳 ∈ ℝ^I×J×K, the objective of the CP decomposition is to find a set of R normalized rank-one tensors {𝐚_r ∘ 𝐛_r ∘ 𝐜_r}_r=1^R which best approximates 𝒳, i.e., minimizes the Frobenius norm: minimize ‖𝒳 − 𝒳̂‖_F^2 subject to 𝒳̂ = ∑_r=1^R 𝐚_r ∘ 𝐛_r ∘ 𝐜_r. The challenge is that this problem is highly nonconvex and, unlike PCA, there is no closed-form solution. Solution methods for this optimization problem therefore rely on iterative schemes (e.g., alternating least squares). These solvers are not guaranteed to find a global minimum, yet for many practical problems they can find high-quality solutions. Iterative schemes, however, can be computationally demanding if the dimensions of 𝒳 are large. Fortunately, only the column spaces of the modes are of importance for obtaining the factor matrices 𝐀, 𝐁, 𝐂, rather than the individual columns of the mode matricizations 𝐗_(1), 𝐗_(2), 𝐗_(3) of the tensor 𝒳. This is because the CP decomposition learns the components based on proportional variations in inter-point distances between the components. Therefore, a compressed tensor ℬ ∈ ℝ^k×k×k must preserve pairwise Euclidean distances, where k ≥ R. This in turn requires that column spaces, and thus pairwise distances, are approximately preserved by compression; this can be achieved by generalizing the concepts of randomized matrix algorithms to tensors. We build upon the methods introduced by <cit.> and <cit.>, as well as related work on randomized tensors by <cit.>, who proposed a randomized algorithm based on random column selection. Figure <ref> shows the schematic of the randomized CP decomposition architecture. Our approach proceeds in two stages. The first stage, detailed in Section <ref>, applies random projections with power iterations to obtain a compressed tensor ℬ, with expressivity analysis in Section <ref>. In Section <ref>, we describe two algorithms for performing CP on the compressed tensor ℬ and approximating the original CP factor matrices. Section <ref> provides additional details about our implementation in Python.
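To make the matrix QB step described above concrete, the following is a minimal Python/NumPy sketch of the randomized range finder with oversampling and normalized subspace (power) iterations. The function name and default parameters are our illustrative choices, not the published implementation.

    import numpy as np

    def randomized_qb(A, k, p=10, q=2, rng=None):
        # Sketch A ~= Q @ B with target rank k, oversampling p,
        # and q normalized subspace (power) iterations.
        rng = np.random.default_rng() if rng is None else rng
        m, n = A.shape
        Omega = rng.standard_normal((n, k + p))  # Gaussian test matrix
        Q, _ = np.linalg.qr(A @ Omega)           # sample the range of A
        for _ in range(q):                       # stabilized power iterations
            Z, _ = np.linalg.qr(A.T @ Q)
            Q, _ = np.linalg.qr(A @ Z)
        B = Q.T @ A                              # project to low dimension
        return Q, B

The second pass over A in the final step mirrors the remark after Eq. (<ref>).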
§.§ Randomized Tensor Algorithm

The aim is to use randomization as a computational resource to efficiently build a suitable basis that captures the action of the tensor 𝒳. Assuming an N-way tensor 𝒳 ∈ ℝ^I_1×⋯×I_N, the aim is to obtain a smaller compressed tensor ℬ ∈ ℝ^k×⋯×k, so that its N tensor modes capture the action of the input tensor modes. Hence, we seek a natural basis in the form of a set of orthonormal matrices {𝐐_n ∈ ℝ^I_n×R_n}_n=1^N, so that 𝒳 ≈ 𝒳 ×_1 𝐐_1𝐐_1^⊤ ×_2 ⋯ ×_N 𝐐_N𝐐_N^⊤. Here the operator ×_n denotes tensor-matrix multiplication, defined as follows. The n-mode matrix product 𝒳 ×_n 𝐐_n𝐐_n^⊤ multiplies the tensor by the matrix 𝐐_n𝐐_n^⊤ in mode n, i.e., each mode-n fiber is multiplied by 𝐐_n𝐐_n^⊤: ℳ = 𝒳 ×_n 𝐐_n𝐐_n^⊤ ⇔ 𝐌_(n) = 𝐐_n𝐐_n^⊤𝐗_(n). Given a fixed target rank k, these basis matrices can be efficiently obtained using a randomized algorithm. First, the approximate basis for the n-th tensor mode is obtained by drawing k random vectors ω_1, …, ω_k ∈ ℝ^∏_i≠n I_i from a Gaussian distribution. (Note that if N is large, it might be favorable to draw the entries of ω from a quasi-random sequence, e.g., a Halton or Sobol sequence; these so-called quasi-random numbers are known to be less correlated in high dimensions.) Stacked together, the k random vectors ω form the random test matrix Ω_n ∈ ℝ^∏_i≠n I_i × k, which is used to sample the column space of 𝐗_(n) ∈ ℝ^I_n × ∏_i≠n I_i as follows: 𝐘_n = 𝐗_(n)Ω_n, where 𝐘_n ∈ ℝ^I_n×k is the sketch. The sketch serves as an approximate basis for the range of the n-th tensor mode. Probability theory guarantees that the set of random vectors {ω_i}_i=1^k is linearly independent with high probability. Hence, the corresponding random projections 𝐲_1, …, 𝐲_k efficiently sample the range of a rank-deficient tensor mode 𝐗_(n). The economic QR decomposition of the sketch, 𝐘_n = 𝐐_n𝐑_n, is then used to obtain a natural basis, so that 𝐐_n ∈ ℝ^I_n×k is orthonormal and has the same column space as 𝐘_n. The final step restricts the tensor mode to this low-dimensional subspace: ℬ_n = 𝒳 ×_n 𝐐_n^⊤ ⇔ 𝐁_n = 𝐐_n^⊤𝐗_(n). Thus, after N iterations a compressed tensor ℬ and a set of orthonormal matrices are obtained. Since this is an iterative algorithm, we set 𝒳 ← ℬ_n after each iteration, so that each subsequent mode is compressed on the already partially compressed tensor (see the code sketch below). The number of columns of the basis matrices forms a trade-off between accuracy and computational performance. We aim to use as few columns as possible, yet allow an accurate approximation of the input tensor. Assuming that the tensor 𝒳 exhibits low-rank structure, or equivalently that the rank R is much smaller than the ambient dimensions of the tensor, the basis matrices will be an efficient representation. In practice, the compression performance is improved by using oversampling, i.e., drawing l = k+p random vectors, where k is the target rank and p the oversampling parameter. The randomized algorithm as presented requires that the mode-n unfolding of the tensor has a rapidly decaying spectrum in order to achieve good performance. However, this assumption is often not satisfied, and the spectrum exhibits slow decay, for example if the tensor is compressed several times. To overcome this issue, the algorithm's performance can be substantially improved using power iterations <cit.>. Power iterations turn a slowly decaying spectrum into a rapidly decaying one by taking powers of the tensor modes. Thus, instead of sampling 𝐗_(n) we sample from the powered tensor mode 𝐗_(n)^q ≡ (𝐗_(n)𝐗_(n)^⊤)^q 𝐗_(n), where q denotes a small integer. This power operation enforces that the singular values of 𝐗_(n)^q are {σ_j^2q+1}_j, while the singular vectors are unchanged.
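As an illustration, a minimal NumPy sketch of this mode-by-mode compression loop is given below. It reuses the randomized_qb helper sketched earlier (our naming, not the paper's Algorithm <ref>), so oversampling and the power iterations just introduced are folded into each per-mode sketch; details of the published algorithm may differ.

    import numpy as np

    def compress_tensor(X, k, p=10, q=2, rng=None):
        # Compress an N-way tensor mode by mode; returns the small tensor B
        # and the list of orthonormal basis matrices Q_n.
        B = X.copy()
        Qs = []
        for n in range(X.ndim):
            # mode-n unfolding of the current, partially compressed tensor
            Bn = np.moveaxis(B, n, 0).reshape(B.shape[n], -1)
            Q, Bn_small = randomized_qb(Bn, k, p=p, q=q, rng=rng)
            Qs.append(Q)
            # fold the compressed unfolding back into a tensor
            new_shape = (Bn_small.shape[0],) + tuple(np.delete(B.shape, n))
            B = np.moveaxis(Bn_small.reshape(new_shape), 0, n)
        return B, Qs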
Instead of using (<ref>), an improved sketch is therefore computed as 𝐘_n = (𝐗_(n)𝐗_(n)^⊤)^q 𝐗_(n)Ω_n. However, if (<ref>) is implemented in this form, the basis may be distorted due to round-off errors. Therefore, in practice, normalized subspace iterations are used to form the sketch, meaning that the sketch is orthonormalized between each power iteration in order to stabilize the algorithm. For implementation details, see <cit.> and <cit.>. The combination of oversampling and additional power iterations can be used to control the trade-off between approximation quality and computational efficiency of the randomized tensor algorithm. Our results, for example, show that just q=2 subspace iterations and an oversampling parameter of about p=10 achieve near-optimal results. Algorithm <ref> summarizes the computational steps.

§.§.§ Expressivity analysis

The average behavior of the randomized tensor algorithm is characterized using the expected residual error 𝔼 e_F = 𝔼‖𝒳 − 𝒳̂‖_F, where 𝒳̂ = 𝒳 ×_1 𝐐_1𝐐_1^⊤ ×_2 ⋯ ×_N 𝐐_N𝐐_N^⊤. Theorem <ref> generalizes the matrix version of Theorem 10.5 formulated by <cit.> to the tensor case.

[Expected Frobenius error] Consider a low-rank real N-way tensor 𝒳 ∈ ℝ^I_1×⋯×I_N. Then the expected approximation error, given a target rank k ≥ 2 and an oversampling parameter p ≥ 2 for each mode, is 𝔼‖𝒳 − 𝒳 ×_1 𝐐_1𝐐_1^⊤ ×_2 ⋯ ×_N 𝐐_N𝐐_N^⊤‖_F ≤ √(1 + k/(p−1)) · √(∑_n=1^N ∑_j>k σ_nj^2), where σ_nj denotes the j-th singular value of the mode-n unfolding of the source tensor 𝒳.

For the proof, see Appendix <ref>. Intuitively, the projection of each tensor mode onto a low-dimensional space introduces an additional residual; this is expressed by the double sum on the right-hand side. If the low-rank approximation captures the column space of each mode accurately, then the singular values beyond j > k for each mode n are small. Moreover, the error can be reduced using the oversampling parameter. The computation of additional power (subspace) iterations can improve the error further; this result again follows by generalizing the results of <cit.> to tensors. Sharper performance bounds for both oversampling and additional power iterations can be derived following, for instance, the results of <cit.>.

§.§ Optimization Strategies

There are several optimization strategies for minimizing the objective function defined in (<ref>), of which we consider two: alternating least squares (ALS) and block coordinate descent (BCD). Both methods are suitable to operate on a compressed tensor ℬ ∈ ℝ^k×⋯×k, where k ≥ R. The optimization problem (<ref>) is reformulated as: minimize ‖ℬ − ℬ̂‖_F^2 subject to ℬ̂ = ∑_r=1^R ã_r ∘ 𝐛̃_r ∘ 𝐜̃_r. Once the compressed factor matrices Ã ∈ ℝ^k×R, 𝐁̃ ∈ ℝ^k×R, 𝐂̃ ∈ ℝ^k×R are estimated, the full factor matrices can be recovered as 𝐀 ≈ 𝐐_1Ã, 𝐁 ≈ 𝐐_2𝐁̃, 𝐂 ≈ 𝐐_3𝐂̃, where 𝐐_1 ∈ ℝ^I×k, 𝐐_2 ∈ ℝ^J×k, 𝐐_3 ∈ ℝ^K×k denote the orthonormal basis matrices. For simplicity we focus on third order tensors, but the results generalize to N-way tensors.

§.§.§ ALS Algorithm

Due to its simplicity and efficiency, ALS is the most popular method for computing the CP decomposition <cit.>. We note that the optimization (<ref>) is equivalent to minimizing ‖ℬ − ∑_r=1^R ã_r ∘ 𝐛̃_r ∘ 𝐜̃_r‖_F^2 with respect to the factor matrices Ã, 𝐁̃ and 𝐂̃. The tensor ℬ can further be expressed in matricized form: 𝐁_(1) ≈ Ã(𝐂̃ ⊙ 𝐁̃)^⊤, 𝐁_(2) ≈ 𝐁̃(𝐂̃ ⊙ Ã)^⊤, 𝐁_(3) ≈ 𝐂̃(𝐁̃ ⊙ Ã)^⊤, where ⊙ denotes the Khatri-Rao product. The optimization problem in this form is non-convex; however, an estimate for the factor matrices can be obtained using the least-squares method as follows.
The ALS algorithm updates one component, while holding the other two components fixed, in an alternating fashion until convergence. It iterates over the following subproblems: Ã^j+1 = argmin_Ã ‖𝐁_(1) − Ã(𝐂̃^j ⊙ 𝐁̃^j)^⊤‖, 𝐁̃^j+1 = argmin_𝐁̃ ‖𝐁_(2) − 𝐁̃(𝐂̃^j ⊙ Ã^j+1)^⊤‖, 𝐂̃^j+1 = argmin_𝐂̃ ‖𝐁_(3) − 𝐂̃(𝐁̃^j+1 ⊙ Ã^j+1)^⊤‖. Each step therefore involves a least-squares problem which can be solved using the Khatri-Rao product pseudo-inverse. Algorithm <ref> summarizes the computational steps (a compact code sketch also follows at the end of this subsection). The Khatri-Rao product pseudo-inverse is defined as (𝐀 ⊙ 𝐁)^† = (𝐀^⊤𝐀 ∗ 𝐁^⊤𝐁)^†(𝐀 ⊙ 𝐁)^⊤, where the operator ∗ denotes the Hadamard product, i.e., the elementwise multiplication of two equal-sized matrices. There exist few general convergence guarantees for the ALS algorithm <cit.>. Moreover, the final solution tends to depend on the initial guess Ã^0, 𝐁̃^0 and 𝐂̃^0. A standard initial guess uses the leading eigenvectors of the Gram matrices of the mode unfoldings <cit.>. Further, it is important to note that normalization of the factor matrices after each iteration is necessary to achieve good convergence; this prevents singularities of the Khatri-Rao product pseudo-inverse <cit.>. The algorithm can be further improved by reformulating the above subproblems as regularized least-squares problems; see for instance <cit.> for technical details and convergence results. The structure imposed by the ALS algorithm on the factor matrices permits the formulation of non-negative or sparsity-constrained tensor decompositions <cit.>.

§.§.§ BCD Algorithm

While ALS is the most popular algorithm for computing the CP decomposition, many alternative algorithms have been developed. One alternative approach is based on block coordinate descent (BCD) <cit.>; <cit.> first proposed this approach for computing nonnegative tensor factorizations. The BCD algorithm is based on the idea of successive rank-one deflation. Unlike ALS, which updates an entire factor matrix at each step, BCD computes the rank-1 tensors in a hierarchical fashion. The algorithm therefore treats each component ã_r, 𝐛̃_r, 𝐜̃_r as a block. First, the most correlated rank-1 tensor is computed; then the second most correlated rank-1 tensor is learned on the residual tensor, and so on. Assuming that R̃ = r−1 components have been computed, the r-th compressed residual tensor ℬ_res is defined as ℬ_res = ℬ − ∑_i=1^R̃ ã_i ∘ 𝐛̃_i ∘ 𝐜̃_i. The algorithm then iterates over the following subproblems: ã_r^j+1 = argmin_ã_r ‖𝐁_res(1) − ã_r(𝐜̃_r^j ⊙ 𝐛̃_r^j)^⊤‖, 𝐛̃_r^j+1 = argmin_𝐛̃_r ‖𝐁_res(2) − 𝐛̃_r(𝐜̃_r^j ⊙ ã_r^j+1)^⊤‖, 𝐜̃_r^j+1 = argmin_𝐜̃_r ‖𝐁_res(3) − 𝐜̃_r(𝐛̃_r^j+1 ⊙ ã_r^j+1)^⊤‖. Note that the computation can be evaluated more efficiently without explicitly constructing the residual tensor ℬ_res <cit.>. Algorithm <ref> summarizes the computation.

§.§ Implementation Details

The algorithms we present are implemented in the programming language Python, using numerical linear algebra tools provided by the SciPy (Open Source Library of Scientific Tools) package <cit.>. SciPy provides MKL (Math Kernel Library) accelerated high-performance implementations of BLAS and LAPACK routines; thus, all linear algebra operations are threaded and highly optimized on Intel processors. The implementation of the CP decomposition follows the MATLAB Tensor Toolbox implementation <cit.>. This implementation normalizes the components after each step to achieve better convergence. Furthermore, we use eigenvectors (see above) to initialize the factor matrices.
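For concreteness, a compact NumPy sketch of the ALS cycle on a compressed third-order tensor follows. It is a simplified stand-in rather than the Tensor Toolbox port described above: it omits the normalization, eigenvector initialization and fit-based stopping just discussed, and it uses NumPy's C-order unfoldings (with the Khatri-Rao factor order adjusted to match) rather than the column ordering of the displayed equations.

    import numpy as np

    def khatri_rao(A, B):
        # column-wise Kronecker product; row index runs over (i_A, i_B),
        # with the index of B varying fastest
        R = A.shape[1]
        return (A[:, None, :] * B[None, :, :]).reshape(-1, R)

    def cp_als(B, R, n_iter=50, seed=0):
        # Minimal ALS cycle for a 3-way (compressed) tensor B.
        I, J, K = B.shape
        rng = np.random.default_rng(seed)
        A1 = rng.standard_normal((I, R))
        A2 = rng.standard_normal((J, R))
        A3 = rng.standard_normal((K, R))
        # mode unfoldings, consistent with khatri_rao above
        B1 = B.reshape(I, -1)
        B2 = np.moveaxis(B, 1, 0).reshape(J, -1)
        B3 = np.moveaxis(B, 2, 0).reshape(K, -1)
        for _ in range(n_iter):
            A1 = B1 @ np.linalg.pinv(khatri_rao(A2, A3)).T
            A2 = B2 @ np.linalg.pinv(khatri_rao(A1, A3)).T
            A3 = B3 @ np.linalg.pinv(khatri_rao(A1, A2)).T
        return A1, A2, A3

The full factor matrices would then be recovered as 𝐀 ≈ 𝐐_1 A1, and similarly for the other modes, using the basis matrices from the compression stage.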
Interestingly, randomly initialized factor matrices can achieve slightly better approximation errors, but re-running the algorithms several times with different random seeds can display significant variance in the results. Hence, only the former approach is used for initialization. We note that the randomized algorithm introduces some randomness and slight variations into the CP decompositions as well. However, randomization can also act as an implicit regularization on the CP decomposition <cit.>, meaning that the results of the randomized algorithm can in some cases be even `better' than the results of the corresponding deterministic implementation. Finally, the convergence criterion is defined as the change in fit, following <cit.>. The algorithm therefore stops when the improvement of the fit ρ is less than a predefined threshold, where the fit is computed as ρ = 1 − (‖𝒳‖_F^2 + ‖𝒳̂‖_F^2 − 2·⟨𝒳, 𝒳̂⟩)/‖𝒳‖_F^2.

§ NUMERICAL RESULTS

The randomized CP algorithm is evaluated on a number of examples, where near-optimal approximations of massive tensors are achieved in a fraction of the time required by the deterministic algorithms. Approximation accuracy is computed with the relative error ‖𝒳 − 𝒳̂‖_F/‖𝒳‖_F, where 𝒳̂ denotes the approximated tensor.

§.§ Computational Performance

The robustness of the randomized CP algorithm is first assessed on random low-rank tensors. Here we illustrate the approximation quality in the presence of additive white noise. Figure <ref> shows the average relative errors over 100 runs for varying signal-to-noise ratios (SNR). In the case of high SNRs, all algorithms converge towards the same relative error. However, at excessive levels of noise (i.e., SNR < 4) the deterministic CP algorithms exhibit small gains in accuracy over the randomized algorithms using q=2 power iterations, for which the ALS and BCD algorithms show the same performance. The performance of the randomized algorithm without power iterations (q=0) is, however, poor, and stresses the importance of the power operation for real applications. The oversampling parameter for the randomized algorithms is set to p=10; increasing p can further improve the accuracy. Next, the reconstruction errors and runtimes for tensors of varying dimensions are compared. Figure <ref> shows the average evaluation results over 100 runs for random low-rank tensors of different dimensions and for varying target ranks k. The randomized algorithms achieve near-optimal approximation accuracy while demonstrating substantial computational savings. Interestingly, the relative error achieved by the BCD decreases sharply, by about one order of magnitude, when the target rank k matches the actual tensor rank (here R=50). The computational advantage becomes more pronounced with increasing tensor dimensions and as the number of iterations required for convergence increases. Using random tensors as presented here, all algorithms rapidly converge after about 4 to 6 iterations. However, it is evident that the computational cost per iteration of the randomized algorithm is substantially lower. Thus, the computational savings can be even more substantial in real-world applications that may require several hundred iterations to converge. Overall, the ALS algorithm is computationally more efficient than BCD, i.e., the deterministic ALS algorithm is faster than BCD by nearly one order of magnitude, while the randomized algorithms exhibit similar computational timings. Similar performance results are achieved for higher order tensors.
Figure <ref> shows the computational performance for a 4-way tensor of dimension 100 × 100 × 100 × 100. Again, the randomized algorithms achieve speedups of 1-2 orders of magnitude, while attaining good approximation errors.

§.§ Examples

The examples demonstrate the advantages of the randomized CP decomposition. The first is a multiscale toy video example, and the second is a simulated flow field behind a stationary cylinder. Due to the better and more natural interpretability of the BCD algorithm, only this algorithm is considered in the subsequent sections.

§.§.§ Multiscale Toy Video Example

The approximation of the underlying spatial modes and temporal dynamics of a system is a common problem in signal processing. In the following, we consider a toy example presenting multiscale intermittent dynamics in the time direction. The data consist of four Gaussian modes, each undergoing a different frequency of intermittent oscillation, for 215 time steps in the temporal direction on a two-dimensional spatial grid (200 × 200). The resulting tensor is of dimension 200 × 200 × 215. Figure <ref> shows the corresponding modes and the time dynamics. This problem becomes even more challenging when the underlying structure needs to be reconstructed from noisy measurements. Traditional matrix decomposition techniques such as the SVD are known to face difficulties capturing such intermittent, multiscale structure. Figure <ref> displays the decomposition results of the noisy (SNR=2) toy video for both the randomized CP decomposition and the SVD. The first subplot shows the results of a rank k=4 approximation computed using the rCP algorithm with q=2 power iterations and a small oversampling parameter p=10. The method faithfully captures the underlying spatial modes and the time dynamics. For illustration, the second subplot shows the decomposition results without additional power iterations. It can clearly be seen that this approach introduces distinct artifacts, and the approximation quality is relatively poor. The SVD, shown in the last subplot, demonstrates poor performance at separating the modes and mixes the spatiotemporal dynamics of modes 2 & 3. Table <ref> further quantifies the observed results. Interestingly, the relative error using the randomized algorithm with q=2 power iterations is slightly better than that of the deterministic algorithm. This is due to the intrinsic regularizing behavior of randomization. However, the reconstruction error without power iterations is large, as is the error resulting from the SVD. The achieved speedup of the randomized CP is substantial, with a speedup factor of about 15.

A Note on Compression. The CP decomposition provides a more parsimonious representation of the data. Comparing the compression ratios between the CP decomposition and the SVD illustrates the difference. For a rank R=4 tensor of dimension 100 × 100 × 100, the compression ratios are c_SVD = (I·J·K)/(R·(I·J + K + 1)) = 100^3/(4·(100^2 + 100 + 1)) ≈ 24.75 and c_CP = (I·J·K)/(R·(I + J + K + 1)) = 100^3/(4·(100 + 100 + 100 + 1)) ≈ 830.56. Note that the SVD requires the tensor to be reshaped (matricized) in some direction. The comparison illustrates the striking difference between the compression ratios. It is evident that the CP decomposition requires computing many fewer coefficients in order to approximate the tensor.
Thus, less storage is required to approximate the data. While the CP decomposition yields a parsimonious approximation that is more robust to noise, the advantage of the SVD is that data can be approximated with an accuracy as low as machine precision.

§.§.§ Flow behind a cylinder

Extracting the dominant coherent structures from fluid flows helps to better characterize them for modeling and control <cit.>. The workhorse algorithm in fluid dynamics and for model reduction is the SVD. However, fluid simulations generate high-resolution spatiotemporal grids of data which naturally manifest as tensors. In the following we examine the suitability of the CP decomposition for decomposing flow data, and compare the results to those of the SVD. The fluid flow behind a cylinder, a canonical example in fluid dynamics, is presented here. The data are a time series generated from a simulation of fluid vorticity behind a stationary cylinder <cit.>. The corresponding flow tensor has dimension 199 × 449 × 151, consisting of 151 snapshots on a 449 × 199 spatial grid. Figure <ref> shows a few example snapshots of the fluid flow. The flow is characterized by a periodically shedding wake structure and is inherently low-rank in the absence of noise. The characteristic frequencies of the flow oscillations occur in pairs, reflecting the complex-conjugate pairs of eigenvalues corresponding to harmonics in the temporal direction.

Results in Absence of White Noise. Figure <ref> shows both the approximated spatial modes and the temporal dynamics for the randomized CP decomposition and the SVD. Observe that both methods extract similar patterns, although, unlike the SVD, the rCP spatial modes are sparse on the domain. The reason is two-fold: (i) CP does not impose orthogonality constraints (which in general reveal dense structure), and (ii) CP imposes rank-one outer product structure in the x and y directions via the columns of the factor matrices. In doing so, CP isolates the contributions of single wavenumbers (spatial frequencies) to the steady-state vortex shedding regime. These are commonly analyzed to study energy transfer between scales in complex flows and turbulence <cit.>; hence CP can help reveal new physically meaningful insights. In particular, randomization enables the tensor decomposition of high-resolution vector fields that otherwise may not be tractable using deterministic methods. Thus rCP provides a novel decomposition of multimodal fields for downstream tasks in model reduction, pattern extraction and control. Note that, for a fixed target rank of k=30 across all methods, the SVD achieves a substantially lower reconstruction error (see Table <ref>). However, the compression ratios for the CP and SVD methods are c_CP ≈ 562.17 and c_SVD ≈ 5.02, i.e., the CP compresses the data by nearly two orders of magnitude more.

Results in Presence of White Noise. Next, the analysis of the same flow is repeated in the presence of additive white noise. While this is not of concern when dealing with flow simulations, it is realistic when dealing with flows obtained from measurements. We chose a signal-to-noise ratio of 2 to demonstrate the robustness of the CP decomposition to noise. Figure <ref> again shows the corresponding dominant spatial modes and temporal dynamics. Both the SVD and the CP decomposition faithfully capture the temporal dynamics. However, comparing the modes of the SVD to Figure <ref>, it is apparent that the spatial modes contain a large amount of noise.
The spatial modes revealed by the CP decomposition provide a significantly better approximation. Again, it is crucial to use power iterations to achieve a good approximation quality (see Table <ref>). By inspection, the relative reconstruction error using the SVD is poor compared to the error achieved using the CP decomposition. Here, we show the error for a rank k=30 and a rank k=6 approximation. The target rank was determined using the optimal hard threshold for singular values <cit.>. The CP decomposition overcomes this disadvantage, and is able to approximate the first k=30 modes with only a slight loss of accuracy. Note that here the randomized CP decomposition performs better than the deterministic algorithm; we assume that this is due to the favorable intrinsic regularization effect of randomized methods.

§ CONCLUSION

The emergence of massive tensors requires efficient algorithms for obtaining tensor decompositions. To address this challenge, we have presented a randomized algorithm which substantially reduces the computational demands of the CP decomposition. Indeed, randomized algorithms have established themselves as highly competitive methods for computing traditional matrix decompositions. A key advantage of the randomized algorithm is that modern computational architectures are fully exploited; the algorithm benefits substantially from multithreading on a multi-core processor. In contrast to previously proposed high-performance tensor algorithms, which are based on computational concepts such as distributed computing, our proposed randomized algorithm provides substantial computational speedups even on standard desktop computers. Our proposed algorithm achieves these speedups by reducing the computational cost per iteration, which enables the user to decompose real-world examples that typically require a large number of iterations to converge. In addition to the computational savings, the randomized CP decomposition demonstrates outstanding performance on several examples using artificial and real-world data, including decompositions of high-resolution flow fields that may not be tractable with deterministic methods. Moreover, our experiments show that the power iteration concept is crucial in order to achieve a robust tensor decomposition. Thus, our algorithm has a practical advantage over previous randomized tensor algorithms, at a slightly increased computational cost due to the additional power iterations.

We would like to thank Alex Williams and Michael W. Mahoney for insightful discussions on tensor decompositions and randomized numerical linear algebra. Further, we would like to express our gratitude to the two anonymous reviewers for their valuable feedback, which helped us greatly improve the manuscript. NBE would like to acknowledge DARPA and NSF for providing partial support of this work. KM acknowledges support from NSF MSPRF Award 1803663. JNK acknowledges support from the Air Force Office of Scientific Research (AFOSR) grant FA9550-17-1-0329. SLB acknowledges funding support from the Air Force Office of Scientific Research (AFOSR) grant FA9550-18-1-0200.

§ DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available upon request.

§ PROOF OF THEOREM 1

In the following, we sketch a proof of Theorem <ref>, which yields an upper bound on the error of the approximate basis for the range of a tensor. To assess the quality of the basis matrices {𝐐_n}_n=1^N, we first show that the problem can be expressed as a sum of subproblems.
Define the residual error as e_F = ‖𝒳 − 𝒳̂‖_F = ‖𝒳 − 𝒳 ×_1 𝐐_1𝐐_1^⊤ ×_2 ⋯ ×_N 𝐐_N𝐐_N^⊤‖_F, and note that the Frobenius norm of a tensor and of its matricized forms are equivalent. Defining the orthogonal projector 𝐏_n ≡ 𝐐_n𝐐_n^⊤, we can reformulate (<ref>) compactly as e_F = ‖𝒳 − 𝒳 ×_1 𝐏_1 ×_2 ⋯ ×_N 𝐏_N‖_F. Assuming that 𝐏_n yields an exact projection onto the column space of the matrix 𝐐_n, we first need to show that the error can be bounded by the sum of the errors of the N projections: e_F ≤ ∑_n=1^N ‖𝒳 − 𝒳 ×_n 𝐏_n‖_F = ∑_n=1^N ‖𝒳 ×_n (𝐈 − 𝐏_n)‖_F, where 𝐈 denotes the identity matrix. Following <cit.>, let us add and subtract the term 𝒳 ×_N 𝐏_N in Equation (<ref>), so that we obtain

e_F = ‖𝒳 − 𝒳 ×_N 𝐏_N + 𝒳 ×_N 𝐏_N − 𝒳 ×_1 𝐏_1 ×_2 ⋯ ×_N 𝐏_N‖_F
≤ ‖𝒳 − 𝒳 ×_N 𝐏_N‖_F + ‖𝒳 ×_N 𝐏_N − 𝒳 ×_1 𝐏_1 ×_2 ⋯ ×_N 𝐏_N‖_F
= ‖𝒳 − 𝒳 ×_N 𝐏_N‖_F + ‖(𝒳 − 𝒳 ×_1 𝐏_1 ×_2 ⋯ ×_N−1 𝐏_N−1) ×_N 𝐏_N‖_F
≤ ‖𝒳 − 𝒳 ×_N 𝐏_N‖_F + ‖𝒳 − 𝒳 ×_1 𝐏_1 ×_2 ⋯ ×_N−1 𝐏_N−1‖_F.

The first bound follows from the triangle inequality for a norm. Next, the common term 𝐏_N is factored out. The final bound follows from the properties of orthogonal projectors: applying the projection 𝐏_N cannot increase the Frobenius norm, so that ‖(𝒳 − 𝒳 ×_1 𝐏_1 ×_2 ⋯ ×_N−1 𝐏_N−1) ×_N 𝐏_N‖_F ≤ ‖𝒳 − 𝒳 ×_1 𝐏_1 ×_2 ⋯ ×_N−1 𝐏_N−1‖_F. See Proposition 8.5 by <cit.> for a proof using matrices. Subsequently, the residual error e_N−1 can be bounded in the same way: e_N−1 ≤ ‖𝒳 − 𝒳 ×_N−1 𝐏_N−1‖_F + ‖𝒳 − 𝒳 ×_1 𝐏_1 ×_2 ⋯ ×_N−2 𝐏_N−2‖_F. Repeating this argument for each mode, Equation (<ref>) follows. We then take the expectation of Equation (<ref>): 𝔼 e_F ≤ 𝔼[∑_n=1^N ‖𝒳 ×_n (𝐈 − 𝐏_n)‖_F]. Recall that Theorem 10.5 formulated by <cit.> states the following expected approximation error (formulated here using tensor notation): 𝔼‖𝒳 ×_n (𝐈 − 𝐏_n)‖_F ≤ √(1 + k/(p−1)) · √(∑_j>k σ_j^2), assuming that the sketch in Equation (<ref>) is constructed using a standard Gaussian test matrix Ω. Here σ_j denotes the singular values of the matricized tensor 𝐗_(n) beyond the chosen target rank k. Combining Equations (<ref>) and (<ref>) then yields the result of the theorem, Equation (<ref>). Figure <ref> evaluates the theoretical upper bound over 100 runs for both a third and a fourth order random low-rank (R=25) tensor. Here, we use a fixed oversampling parameter p=2. The results show that the empirical error is faithfully bounded by the theoretical upper bound for varying target ranks.
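A minimal numerical check of this bound is straightforward. The sketch below (our construction, under the stated assumptions k ≥ 2 and p ≥ 2) computes the right-hand side of Theorem <ref> from the tail singular values of each mode unfolding, for comparison against the empirical error.

    import numpy as np

    def frobenius_error_bound(X, k, p):
        # Right-hand side of the expected Frobenius error bound:
        # sqrt(1 + k/(p-1)) * sqrt(sum over modes of the tail spectrum)
        tail = 0.0
        for n in range(X.ndim):
            Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
            s = np.linalg.svd(Xn, compute_uv=False)
            tail += np.sum(s[k:] ** 2)   # discarded spectral energy in mode n
        return np.sqrt(1.0 + k / (p - 1.0)) * np.sqrt(tail)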
"authors": [
"N. Benjamin Erichson",
"Krithika Manohar",
"Steven L. Brunton",
"J. Nathan Kutz"
],
"categories": [
"cs.NA",
"stat.CO"
],
"primary_category": "cs.NA",
"published": "20170327134100",
"title": "Randomized CP Tensor Decomposition"
} |
1University of Utah. Department of Physics & Astronomy, 115 S 1400 E, Salt Lake City, UT 84105 [email protected] 3ETH Zurich, Switzerland 4Michigan State University 5University of Queensland 6Max-Planck-Institut für Astronomie 7Smithsonian Astrophysical Observatory, 60 Garden St. MS09, 02138 Cambridge, MA, USA 8Sternberg Astronomical Institute, M.V. Lomonosov Moscow State University, 13 Universitetsky prospect, 119992 Moscow, Russia 9Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königsstuhl 12, D-69117 Heidelberg, Germany 10European Southern Observatory, Garching 11Australian Astronomical Observatory, PO Box 915 North Ryde NSW 1670, Australia 12San Jose State University 13University of California Observatories/Lick Observatory 14Research Centre for Astronomy, Astrophysics & Astrophotonics, Macquarie University, Sydney, NSW 2109, Australia 15Department of Physics & Astronomy, Macquarie University, Sydney, NSW 2109, Australia 16University of California Santa Cruz 17George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843We present the detection of supermassive black holes (BHs) in two Virgo ultracompact dwarf galaxies (UCDs), VUCD3 and M59cO. We use adaptive optics assisted data from the Gemini/NIFS instrument to derive radial velocity dispersion profiles for both objects. Mass models for the two UCDs are created using multi-band Hubble Space Telescope (HST) imaging, including the modeling of mild color gradients seen in both objects. We then find a best-fit stellar mass-to-light ratio (M/L) and BH mass by combining the kinematic data and the deprojected stellar mass profile using Jeans Anisotropic Models (JAM). Assuming axisymmetric isotropic Jeans models, we detect BHs in both objects with masses of 4.4^+2.5_-3.0× 10^6M_⊙ in VUCD3 and 5.8^+2.5_-2.8× 10^6 M_⊙ in M59cO (3σ uncertainties).The BH mass is degenerate with the anisotropy parameter, β_z; for the data to be consistent with no BH requires β_z = 0.4 and β_z = 0.6 for VUCD3 and M59cO, respectively. Comparing these values with nuclear star clusters shows that while it is possible that these UCDs are highly radially anisotropic, it seems unlikely. These detections constitute the second and third UCDs known to host supermassive BHs. They both have a high fraction of their total mass in their BH; ∼13% for VUCD3 and ∼18% for M59cO.They also have low best-fit stellar M/Ls, supporting the proposed scenario that most massive UCDs host high mass fraction BHs.The properties of the BHs and UCDs are consistent with both objects being the tidally stripped remnants of ∼10^9 M_⊙ galaxies. § INTRODUCTION Ultracompact dwarf galaxies (UCDs) are stellar systems discovered in the late 1990s through spectroscopic surveys of the Fornax cluster <cit.>. With masses ranging from a few million to a hundred million solar masses and sizes ≲ 100 pc, UCDs are among the densest stellar systems in the Universe. In the luminosity-size plane, UCDs occupy the region between globular clusters (GCs) and compact ellipticals (cEs) <cit.>. The smooth transition between these three classes of objects has led to significant debate as to how UCDs were formed. 
Explanations have ranged from UCDs being the most massive GCs <cit.>, to UCDs being the tidally stripped nuclear remnants of dwarf galaxies <cit.>.Recently, analyses of the integrated dispersions of UCDs revealed an interesting property; the dynamical mass appears to be elevated ∼50% for almost all UCDs above 10^7 M_⊙ when compared to the mass attributed to stars alone <cit.>. These dynamical mass estimates have been made combining structural information from Hubble Space Telescope (HST) imaging with ground-based, global velocity dispersion measurements. These models assume that mass traces light, stars are on isotropic orbits, and are formed from a Kroupa-like initial mass function (IMF) <cit.>. Possible explanations for this unique phenomenon have included ongoing tidal stripping scenarios <cit.>, and central massive black holes (BHs) making up ∼10-15% of the total mass <cit.>. Alternatively, the elevated dynamical-to-stellar mass ratios can be explained by a change in the stellar IMF in these dense environments. For example, a bottom-heavy IMF would imply an overabundance of low-mass stars that contribute mass but very little light <cit.>, and a top-heavy IMF would allow for an overabundance of stellar remnants contributing mass but virtually no light. The former case has been suggested in giant ellipticals <cit.>, while <cit.> argued that the relative abundance of X-ray binaries in UCDs favored a top-heavy IMF, although an increased X-ray luminosity in UCDs was not found in subsequent work <cit.>.In the context of the tidal stripping scenario, the elevated dynamical-to-stellar mass ratios could potentially be explained if UCDs still reside within progenitor dark matter haloes. However, to have a measurable effect on the kinematics of compact objects such as UCDs, the central density of the dark matter halo would need to be orders of magnitude higher than expected for dark matter halos of the stripped galaxies <cit.>. In addition, the search for an extended dark matter halo in Fornax UCD3, based on its velocity dispersion profile, yielded a non-detection <cit.>.In this paper, we follow up on the idea that the elevated values of the dynamical-to-stellar mass ratios, which we denote in this paper as Γ (≡ (M/L)_dyn / (M/L)_*), can be explained by the presence of a supermassive BH <cit.>. This scenario was confirmed in one case, M60-UCD1, which hosts a BH that makes up 15% of the total dynamical mass of the system and a best-fit Γ of 0.7±0.2 <cit.>.As M60-UCD1 is one of the highest density UCDs, its low stellar mass-to-light ratio (M/L) suggests that a systematic variation of the IMF with density is not the cause for high M/L estimates found in most massive UCDs, and strengthens the case that these may be due to high mass fraction BHs. As part of our ongoing adaptive optics kinematics survey of UCDs, we investigate internal kinematics of two Virgo UCDs, VUCD3 and M59cO, with the goal of constraining the mass of putative central massive BHs.While we resolve the stellar kinematics of these two UCDs, they are fainter than M60-UCD1, and therefore have lower S/N data. This forces us to make more assumptions in our modeling. 
However, making reasonable assumptions, we clearly detect BHs in both objects.Images of VUCD3, M59cO, and their host galaxies are shown in Figure <ref>.VUCD3 is located 14 kpc in projection from the center of M87 and has M_V = -12.75 <cit.>.The metallicity of VUCD3 has been estimated to be between -0.28 and 0.35 in several studies <cit.>, and it has an [α/Fe]∼0.5 <cit.>.M59cO is located 10 kpc in projection from the center of M59 and has M_V = -13.26 <cit.>.Its metallicity has been measured in several studies with [Z/H] between 0.0 and 0.2, with [α/Fe]∼0.2 <cit.>.We assume a distance of 16.5 Mpc for both objects.All magnitudes are reported in the AB magnitude system unless otherwise noted.All magnitudes and colors have been corrected for extinction; in VUCD3 we use A_F606W = 0.061 and A_F814W = 0.034, while for M59cO we used A_F475W = 0.107 and A_F850LP = 0.041 <cit.>.This paper is organized as follows: in Section <ref> we discuss the data used for analysis and how the kinematics were modeled. In Section <ref> we present our methods for determining the density profile of our UCDs. We present our dynamical modeling methods in Section <ref>. Our results for the best-fit BH are presented in Section <ref>, and we conclude in Section <ref>. § OBSERVATIONS AND KINEMATICSIn this section we discuss the data and reduction techniques used for our analysis. Section <ref> discusses the HST archival images and Section <ref> explains the reduction of our Gemini NIFS integral field spectroscopy.The derivation of kinematics is discussed in Section <ref>. §.§ HST dataWe obtained images of VUCD3 and M59cO from the Hubble Legacy Archive. VUCD3 was originally imaged as part of HST snapshot program 10137 (PI: Drinkwater). These data were taken using the High Resolution Channel (HRC) on the Advanced Camera for Survey (ACS), through the F606W and F814W filters. The ACS/HRC pixel scale is 0.025 pixel^-1. The exposure times were 870s in F606W and 1050 s in F814W. M59cO was originally imaged as part of GO Cycle 11 program 9401 (PI: Côté). These data were taken using the ACS, Wide Field Camera (WFC), through the F475W and F850LP filters. The ACS/WFC pixel scale is 0.05 pixel^-1. The exposure times were 750s in F475W and 1210s in F850LP.We synthesized a point spread function (PSF) for the HST images in each filter using the TinyTim software[<http://www.stsci.edu/software/tinytim/>], MultiDrizzle[<http://stsdas.stsci.edu/multidrizzle/>], and following the procedure described in <cit.>. First, we generated the distorted PSFs with TinyTim, such that they represent the PSF in the raw images. We then place the PSFs in an empty copy of the raw HST flat fielded images at the location of the observed target. Finally, we pass these images through MultiDrizzle using the same parameters as were used for the data. This procedure produces model PSFs that are processed in the same way as the original HST data. The size of the PSFs was chosen to be 10 × 10, which is much larger than the minimum size recommended by TinyTim in an attempt to minimize the effects of the ACS/HRC PSF halo problem (see §4.3.1 of <cit.>). Although the ACS/WFC PSF does not suffer the same problems, we modeled the PSFs the same for consistency.The background (sky) level was initially subtracted from the HST images by MultiDrizzle. Instead of following the conventional way of estimating the sky level from empty portions of the image, we elected to add the MultiDrizzle sky level back in and model the background ourselves. 
This choice was motivated by the fact that these UCDs fall within the envelope of their host galaxy and thus the background is not uniform across the image. To accomplish this, we first masked our original object and all foreground/background objects in the image. Next, we weighted the remaining good pixels (from the DQ extension of the image) by their corresponding error. Finally, we fitted a plane to the image to determine the sky level in each individual pixel; this was then subtracted from the image. §.§ Gemini NIFS data Our spectroscopic data were obtained on May 2nd and 3rd, 2015 and May 20th, 2014 using the Gemini North telescope with the Near-Infrared Integral Field Spectrometer (NIFS) instrument <cit.>. The observations were taken using Altair laser guide star adaptive optics <cit.>. The Gemini/NIFS instrument supplies infrared spectroscopy with a 3 field of view in 0.1× 0.04 pixels with spectral resolution λ/δλ∼ 5700 (σ_ inst = 22 km s^-1). The observations were taken in the K band, covering wavelengths from 2.0 to 2.4 μm. The NIFS data were reduced following a similar procedure to the previous work done by <cit.>. The data were reduced using the Gemini version 1.13 IRAF package. Arc lamp and Ronchi mask images were used to determine the spatial and spectral geometry of the images. For M59cO, the sky images were subtracted from their closest neighboring on-source exposure. The images were then flat-fielded, had bad pixels removed, and split into long-slit slices. The spectra were then corrected for atmospheric absorption using an A0V telluric star taken on the same night at similar airmass with the NFTELLURIC procedure. Minor alterations were made to the Gemini pipeline to enable error propagation of the variance spectrum. This required the creation of our own IDL versions of NIFCUBE and NSCOMBINE programs. Each dithered cube was rebinned using our version of NIFCUBE to a 0.05 × 0.05 pixel scale from the original 0.1 × 0.04.These cubes were aligned by centroiding the nucleus,and combined using our own IDL program which includes bad pixel rejection based on the nearest neighbor pixels to enhance its robustness. For VUCD3, sixteen 900s on-source exposures were taken with a wide range of image quality. We selected eight of the images with the best image quality and highest peak fluxes (two taken on May 2nd, and six taken on May 3rd, 2015) for a total on-source exposure time of two hours to create our final data cube. The data were dithered on chip in a diagonal pattern with separations of ∼1. Due to the very compact nature of VUCD3, the surface brightness of the source in the sky region of the exposure that was subtracted from the neighboring exposure had very little signal (S/N of the sky portion of each individual image was always < 1).The final data cube for M59cO was created using twelve 900s on-source exposures (six taken on May 20th, 2014, two taken on May 2nd 2015, and four taken on May 3rd, 2015; an additional five exposures were not used due to poor image quality) for a total on-source exposure time of three hours. We used an object-sky-object exposure sequence, and sky images were subtracted from both of their neighboring exposures. 
The object exposures were dithered to ensure the object did not fall on the same pixels in the two exposures with the same sky frame subtracted: this gives independent sky measurements for each exposure, improving the S/N relative to undithered exposures. The PSF for the kinematic data was derived by convolving an HST model image to match the continuum emission in the kinematic data cube. The HST model image was derived in the K band to best match the kinematic observations, as follows. First, we generated 2D model images in each band using our best-fit Sérsic profiles described in Section <ref>. We then determined the color in each pixel. Next, we generated simple stellar population (SSP) models from <cit.> using their PADOVA 1994 models at solar metallicity and assuming a Chabrier IMF; this is a reasonable assumption, as both objects have near-solar metallicities (see Section <ref>). Using our derived color and a color-color diagram from the SSPs, we determined the K-band luminosity for each pixel. The resulting image was convolved with a double Gaussian and a Gauss+Moffat function and fitted to the NIFS image using the MPFIT2DFUN code[<http://www.physics.wisc.edu/~craigm/idl/fitting.html>] <cit.>. In each case, we assessed which function provided the best fit to the PSF. For VUCD3, a Gauss+Moffat function was determined to provide the best fit; the Gaussian had a FWHM of 0.138 and contained 29% of the light, while the Moffat contained the remaining 71% of the light with a FWHM of 1.08 and was parameterized by a series of Gaussians using the MGE-FIT-1D code developed by <cit.>. For M59cO, a double Gaussian model was determined to provide the best fit, with the inner component having a FWHM of 0.216 and containing 53% of the light, and the outer component having a FWHM of 0.931 and containing 47% of the light. We note that, in order to quantify the systematic effects of our choice of model PSF, we also report the results with the best-fit counterpart function (i.e., a double Gaussian for VUCD3, and a Gauss+Moffat function for M59cO). Furthermore, we verified the reliability of our PSF determination by comparing the results of fits to the HST model image to those from Lucy-Richardson deconvolved images.

§.§ Kinematic Derivation

The kinematics were measured from the NIFS data by fitting the wavelength region between 2.29 μm and 2.37 μm, which contains the CO bandheads. Due to the low S/N of the data (central pixel median S/N = 10 per 2.13 Å pixel for VUCD3 and S/N = 11 per 2.13 Å pixel for M59cO), it was not possible to make kinematic maps as were made for M60-UCD1 <cit.>. We therefore constructed a radial dispersion profile for dynamical modeling. To create our dispersion profile, the data were binned radially such that the median S/N was ≈ 25 in each bin. The line spread function (LSF) was determined in each bin by combining sky exposures in the same dither pattern as the science exposures; we convolved the high-resolution <cit.> stellar templates (λ/Δλ = 45000) by the LSF in each radial bin before fitting. We fitted the radial velocity and dispersion to the data using the penalized pixel fitting algorithm pPXF <cit.>; a fourth-order additive polynomial was also included in the fit. The integrated spectra and their corresponding fits are shown in Figure <ref>. The one-sigma random uncertainties on the derived kinematics were estimated using Monte Carlo simulations: Gaussian random noise was added to each spectral pixel, the kinematics were refit, and the standard deviation of the results was taken as the uncertainty (a schematic sketch of this procedure is given below).
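Schematically, this Monte Carlo estimate can be written as follows; fit_kinematics is a hypothetical stand-in for the pPXF fit described above, returning (v, σ) for a given spectrum, and is not part of any published code.

    import numpy as np

    def mc_uncertainties(spec, noise, fit_kinematics, n_trials=100, rng=None):
        # Perturb the spectrum by its per-pixel noise, refit, and take the
        # standard deviation of the fitted (v, sigma) values as the error.
        rng = np.random.default_rng() if rng is None else rng
        fits = np.array([fit_kinematics(spec + rng.normal(0.0, noise))
                         for _ in range(n_trials)])
        return fits.std(axis=0)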
For VUCD3, a portion of the fit was left out due to bad sky subtraction. We found the integrated (r < 0.6) barycentric-corrected radial velocity to be 707.3 ± 1.4 km s^-1, and the integrated dispersion to be 39.7 ± 1.2 km s^-1. This dispersion value is in agreement with the measurement of 41.2 ± 1.5 km s^-1 in <cit.>, obtained using Keck/ESI data and a 1.5 aperture. For M59cO we found the integrated (r < 1.1) velocity to be 721.2 ± 0.5 km s^-1, and the integrated dispersion to be 31.3 ± 0.5 km s^-1. The dispersion of M59cO is significantly lower than the measurement of 48 ± 5 km s^-1 in <cit.>; however, that study was based on relatively low resolution SDSS spectra. Our values are consistent with the higher resolution Keck/DEIMOS observations presented in <cit.>, which find σ = 29.0 ± 2.5 km s^-1. Our binned, resolved values for the radial kinematics are given in Table <ref>. For each bin, we give the light-weighted radius, the S/N in the bin, the line-of-sight velocity, the dispersion, the corresponding uncertainties on both the velocity and dispersion, as well as the rotational speed and angle (see below). We also tested our velocity and dispersion values by fitting the Phoenix stellar templates <cit.>. Our results are consistent within one sigma for all dispersion values, while we see velocity discrepancies of up to 6 km s^-1, suggesting the velocity measurement uncertainties may be underestimated due to template mismatch or sky subtraction issues (especially in VUCD3, due to its low S/N). To calculate the rotational speed, we first split the integrated velocity bin in half and rotated the line separating the two halves through 360° in increments of 5°, fitting the velocity on each side. Next, we fitted a sinusoidal curve to the difference between the two halves as a function of angle. Using the angle where the sinusoidal curve was either maximum or minimum, we created a line to split the radial bins in half, and we quote the difference in Table <ref> (a code sketch of this procedure follows below). We note that the rotational speed quoted is of the order of the true rotation value: assuming smooth azimuthal variation, the measured velocity difference is a factor 4/π times v_rot. Neither object is rotation dominated, but VUCD3 shows substantial rotation oriented roughly along the major axis, with v/σ ∼ 0.5; this is similar to M60-UCD1 <cit.>. We note that only the dispersion values for the full annuli are used for the V_rms profile in fits to our dynamical models. We note two important characteristics of the M59cO kinematics. First, from the low rotational velocity, combined with the nearly circular Sérsic profile fits discussed below, M59cO appears to be nearly face-on or simply non-rotating. Second, there appears to be an upturn in the velocity dispersion in the last radial bin. This upturn could be due to several possibilities. First, it may be due to a problem in the characterization of the PSF on large scales, resulting in scattered light from the center of the UCD to large radii; however, tests with alternative PSF models with more power in the wings did not create the observed upturn in our dynamical models. Alternatively, it may be due to poor sky subtraction at large radii, tidal stripping at the edges of the UCD, or background light contamination from the host galaxy. Assuming one of these is the cause, we exclude this data point from our dynamical modeling.
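The rotation measurement can be sketched as follows; the helper below is our simplified, illustrative version of the procedure (the light weighting and the pPXF velocity fits on each half are replaced by a plain mean over spaxel velocities).

    import numpy as np
    from scipy.optimize import curve_fit

    def rotation_amplitude(x, y, v, step=5.0):
        # For each position angle, split the spaxels into two halves with a
        # line through the center and record the velocity difference; then
        # fit a sinusoid to the differences versus angle. Assumes spaxels
        # fall on both sides of every dividing line.
        thetas = np.deg2rad(np.arange(0.0, 360.0, step))
        dv = np.array([v[np.sin(t) * x - np.cos(t) * y > 0].mean()
                       - v[np.sin(t) * x - np.cos(t) * y <= 0].mean()
                       for t in thetas])
        model = lambda t, amp, t0, c: amp * np.sin(t - t0) + c
        popt, _ = curve_fit(model, thetas, dv, p0=[np.ptp(dv) / 2, 0.0, 0.0])
        amp, t0, c = popt
        # |amp| ~ (4/pi) v_rot for smooth azimuthal variation; t0 maps to the PA
        return abs(amp), np.rad2deg(t0) % 360.0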
Table <ref>: Resolved Kinematics

Object | Radius [arcsec] | S/N | v [km s^-1] | v_err [km s^-1] | σ [km s^-1] | σ_err [km s^-1] | Rotation^1 [km s^-1] | PA^2 [°]
VUCD3 | 0.033 | 38.27 | 709.6 | 3.0 | 52.9 | 2.5 | 4.7 | 77
VUCD3 | 0.069 | 41.66 | 713.7 | 3.0 | 51.2 | 2.2 | 7.7 | 77
VUCD3 | 0.118 | 40.10 | 715.6 | 2.8 | 49.0 | 2.1 | 18.7 | 82
VUCD3 | 0.217 | 25.96 | 704.0 | 1.7 | 40.3 | 1.6 | 22.5 | 112
VUCD3 | 0.437 | 15.72 | 706.7 | 1.8 | 33.1 | 2.1 | 4.3 | 127
M59cO | 0.025 | 33.07 | 720.8 | 2.0 | 40.2 | 1.6 | 5.3 | 82
M59cO | 0.065 | 45.84 | 719.5 | 1.7 | 39.9 | 1.4 | 7.5 | 82
M59cO | 0.105 | 47.87 | 719.4 | 1.7 | 37.6 | 1.3 | 8.3 | 47
M59cO | 0.145 | 49.65 | 718.5 | 1.6 | 34.9 | 1.2 | 7.0 | 52
M59cO | 0.191 | 47.92 | 720.1 | 1.4 | 33.6 | 1.1 | 3.7 | 22
M59cO | 0.245 | 42.31 | 720.3 | 1.3 | 31.8 | 1.0 | 4.7 | 7
M59cO | 0.317 | 43.49 | 724.5 | 1.1 | 28.4 | 1.0 | 5.9 | -23
M59cO | 0.475 | 33.42 | 725.2 | 0.9 | 29.6 | 0.7 | 3.8 | -28

^1 Rotation is the maximum difference (amplitude) between the two halves of the radial bin split by a line at the PA. This value is of the order of the true rotational velocity (the amplitude is a factor 4/π times v_rot). ^2 The PA orientation is N = 0° and E = 90°.

§ CREATING A MASS MODEL

To create dynamical models predicting the kinematics of our UCDs, we first needed to create a model for the luminosity and mass distribution in each object. While the mass is typically assumed to trace the light <cit.>, previous works have detected color gradients in both of the UCDs considered here, suggesting a non-constant M/L <cit.>. Fortunately, imaging in two filters is available for both UCDs, and we make use of the surface brightness profile fits in both filters to estimate the luminosity and mass profiles of the UCDs. We consider the uncertainties from different combinations of luminosity and mass models in our best-fit dynamical models in Section <ref>, and find that they do not create a significant uncertainty in our BH mass determinations. Neither source is well fit by a single Sérsic profile, and both appear to have two components <cit.>. We therefore determine the surface brightness profile by fitting the data in each filter to a PSF-convolved, two-component Sérsic profile using the two-dimensional fitting algorithm GALFIT <cit.>. The parameters in our fits are shown in Table <ref> and include, for each Sérsic profile: the total magnitude (m_tot), effective radius (R_e), Sérsic exponent (n), position angle (PA), and axis ratio (q). The fitting was done in two ways. First, we allowed all of the above free parameters to vary in both filters; these fits are henceforth referred to as the “free” fits. Next, we fitted the data again, fixing the shape parameters in one filter to the best-fit model from the other filter; specifically, we fixed the effective radius, Sérsic exponent, position angle and axis ratio, allowing only the total magnitude to vary. For example, in VUCD3, our fixed fit in F814W was done by fixing the shape parameters to the best-fit free model in F606W. By using the same shape parameters, these “fixed” fits provide a well-defined color for the inner and outer Sérsic profiles. These Sérsic profile fits, shown in Figure <ref> for the filters used to create the luminosity and mass models, were performed on a 5 × 5 image centered on each UCD with a 100 × 100 pixel convolution box. We note that M59cO also has data available in the F850LP filter, which we use to model the color gradients as discussed below. However, due to the lack of a red cutoff in this filter, the PSF is difficult to characterize[<http://www.stsci.edu/hst/HST_overview/documents/calworkshop/workshop2002/CW2002_Papers/gilliland.pdf>]; therefore, we chose to use the Sérsic fits to the F475W filter as the basis for our luminosity and mass models.
The outputs for the best-fitting Sérsic profiles in each filter are shown in Table <ref>. For VUCD3, the total luminosity and effective radius calculated from the double Sérsic profile were found to be L_F814W = 17.8 × 10^6 L_⊙ and R_e = 18 pc or 0.225, with Sérsic indices of 3.25 for the inner component and 1.74 for the outer component. These indices are similar to what was found for M60-UCD1 <cit.>. For M59cO, the total luminosity and effective radius were found to be L_F475W = 20.3 × 10^6 L_⊙ and R_e = 32 pc or 0.4, with Sérsic indices of 1.06 and 1.21 for the inner and outer components, respectively. These values are comparable to the n ∼ 1 used to fit the system in previous work <cit.>. The best-fit Sérsic profiles were then parameterized by a series of Gaussians, or MGEs, for use in our dynamical models <cit.>.

Table <ref>: Best-fit Sérsic Parameters

Object | χ² | Inner: m_tot | R_e [arcsec] | n | q | PA [°] | Outer: m_tot | R_e [arcsec] | n | q | PA [°] | m_in,tot (fixed) | m_out,tot (fixed)
VUCD3 (F606W) | 14.16 | 18.99 | 0.08 | 3.51 | 0.66 | 19.0 | 18.83 | 0.61 | 1.28 | 0.91 | 18.4 | 19.13 | 18.66
VUCD3 (F814W) | 6.342 | 18.55 | 0.07 | 3.25 | 0.62 | 18.0 | 17.96 | 0.62 | 1.74 | 0.89 | 20.7 | 18.40 | 18.11
M59cO (F475W) | 1.065 | 19.30 | 0.16 | 1.06 | 0.97 | -65.2 | 18.27 | 0.64 | 1.09 | 0.98 | 88.4 | 19.45 | 18.21
M59cO (F850LP) | 0.974 | 18.13 | 0.15 | 1.02 | 0.99 | 34.1 | 16.68 | 0.61 | 1.21 | 0.98 | 17.7 | 17.94 | 16.74

Notes. The last two columns show the total magnitudes when all shape parameters of the Sérsic profiles are held fixed to those of the other filter.

Table <ref>: Multi-Gaussian Expansions (MGEs) used as the default mass and luminosity models in dynamical modeling.

Mass [M_⊙ pc^-2]^1 | I_K [L_⊙ pc^-2]^2 | σ [arcsec] | q | PA [°]

VUCD3:
1687276. | 537235. | 0.0001 | 0.66 | 19.04
1435008. | 456912. | 0.0003 | 0.66 | 19.04
1145834. | 364838. | 0.0008 | 0.66 | 19.04
798986.4 | 254400. | 0.002 | 0.66 | 19.04
489960.9 | 156005. | 0.005 | 0.66 | 19.04
265194.5 | 84439.0 | 0.009 | 0.66 | 19.04
123165.2 | 39216.4 | 0.019 | 0.66 | 19.04
48162.50 | 15335.1 | 0.038 | 0.66 | 19.04
15761.48 | 5018.52 | 0.072 | 0.66 | 19.04
4225.199 | 1345.32 | 0.131 | 0.66 | 19.04
948.0368 | 301.859 | 0.228 | 0.66 | 19.04
173.9253 | 55.3785 | 0.387 | 0.66 | 19.04
25.42110 | 8.09415 | 0.640 | 0.66 | 19.04
2.945087 | 0.93773 | 1.046 | 0.66 | 19.04
0.192683 | 0.06135 | 1.836 | 0.66 | 19.04
938.0369 | 135.776 | 0.012 | 0.91 | 18.43
1724.193 | 249.400 | 0.044 | 0.91 | 18.43
2272.561 | 328.719 | 0.121 | 0.91 | 18.43
2018.410 | 291.958 | 0.262 | 0.91 | 18.43
1101.361 | 159.309 | 0.484 | 0.91 | 18.43
348.8148 | 50.4551 | 0.795 | 0.91 | 18.43
57.96370 | 8.38429 | 1.210 | 0.91 | 18.43
3.441089 | 0.49774 | 1.808 | 0.91 | 18.43

M59cO:
3820.00 | 328.197 | 0.005 | 0.99 | 34.06
8870.65 | 762.126 | 0.019 | 0.99 | 34.06
13691.4 | 1176.31 | 0.047 | 0.99 | 34.06
12547.7 | 1078.04 | 0.092 | 0.99 | 34.06
6057.40 | 520.425 | 0.151 | 0.99 | 34.06
1376.19 | 118.236 | 0.223 | 0.99 | 34.06
95.0511 | 8.16636 | 0.315 | 0.99 | 34.06
2082.22 | 75.4811 | 0.013 | 0.98 | 17.69
4051.69 | 146.874 | 0.047 | 0.98 | 17.69
5736.21 | 207.938 | 0.123 | 0.98 | 17.69
5457.66 | 197.841 | 0.262 | 0.98 | 17.69
3284.28 | 119.056 | 0.473 | 0.98 | 17.69
1137.78 | 41.2449 | 0.761 | 0.98 | 17.69
206.300 | 7.47872 | 1.138 | 0.98 | 17.69
12.7642 | 0.46271 | 1.670 | 0.98 | 17.69
See Section <ref> for a discussion on how the K-band luminosity was determined.

For a uniform stellar population, the luminosity profile in any band can be used to obtain an accurate mass model. However, if the stellar populations vary with position, we need to take this into account in our dynamical modeling. We tested for stellar population variations by creating color profiles, as shown in Figure <ref>. It is clear that both objects show a trend toward redder colors as a function of radius, confirming the color gradients shown in previous works <cit.>. Therefore, we could not assume a mass-follows-light model as was done with M60-UCD1 <cit.>. To incorporate the stellar population variations into our dynamical models, we needed two ingredients: (1) a mass profile of the object for the dynamical modeling, and (2) a K-band luminosity profile to enable comparison of our dynamical models with our observed data. The unconvolved fixed models (dashed blue and red lines in Figure <ref>) provide a well-defined color for the inner and outer components of the Sérsic fits since the shape parameters are held fixed. For VUCD3, the inner component color is F606W - F814W = 0.59 mag, while the outer component color is F606W - F814W = 0.72 mag. In M59cO we found the color to be F475W - F850LP = 1.36 and 1.53 mag for the inner and outer components, respectively. These colors were then used to find the mass-to-light ratio, assuming solar metallicity, using the <cit.> Padova 1994 SSP models. To evaluate the errors on our M/L, we assumed an error of ±0.02 mag in our color determinations. For VUCD3, we found the inner component M/L_F814W to be 1.4±0.4, with a corresponding age of 9.6 Gyr, and the outer component to be 2.7±0.4, with a corresponding age of 11 Gyr. For M59cO, we found 2.8^+0.3_-0.2 and 5.5±0.5 for the inner and outer M/L_F475W, with corresponding ages of 5.5 and 11.5 Gyr, respectively. To determine a mass density profile in M_⊙ pc^-2 to be used in the dynamical models, we multiplied the luminosities of each MGE subcomponent by these M/Ls. These M/L values can also be used to estimate the total masses of the inner and outer Sérsic components. For VUCD3, we found the inner component to contain 11 ± 3 × 10^6 M_⊙ and the outer component 27 ± 4 × 10^6 M_⊙. For M59cO, we found 14^+2_-1 and 84 ± 8 × 10^6 M_⊙ for the inner and outer components, respectively. We note that we also computed a mass profile for the free Sérsic fits, where we determined the M/L from the color at the FWHM of each Gaussian component in the MGE light profile (discussed in Section <ref> and shown in Figure <ref>). Our best-fit mass model MGE for each UCD is given in the first column of Table <ref>. The color profiles and SSP models were also used to calculate the K-band luminosity MGEs that are used to compare our dynamical models to the kinematic data. Since the unconvolved fixed models provide an accurate determination of the color profile of each UCD, we used these colors, described above, to create a K-band luminosity profile for the dynamical models. In both cases, we use the BC03 models to infer the colors between our best-fit model and the K-band. For VUCD3 we find that the inner component has F814W - K = 2.14 and the outer component has F814W - K = 2.26. This leads to a scale factor in luminosity surface density of 0.44 and 0.40 for the inner and outer components, respectively. For M59cO, we found the inner component F475W - K = 3.35 with a scale factor of 0.24 and the outer component F475W - K = 3.58 with a scale factor of 0.20.
These scale factors were multiplied by the inner and outer component luminosity profiles to make K-band MGEs for use in the dynamical models. Our best-fit K-band luminosity model MGE for each UCD is given in the second column of Table <ref>.

Finally, the color profiles and SSP models were used to calculate the total stellar population M/L. This was accomplished by first calculating the flux within the central 2.5″ from model images of the inner and outer Sérsic profiles. Next, we used the M/L calculated from the color profiles to find a flux-weighted total M/L. For VUCD3, we found M/L_F814W,* = 2.1 ± 0.6, which, assuming V-I = 1.27 based on observations <cit.>, corresponds to M/L_V,* = 5.2 ± 1.5. We found M/L_F475W,* = 4.8^+0.6_-0.5 for M59cO. For the overall object we estimate g - V ∼ 0.47, yielding M/L_V,* = 4.1^+0.5_-0.4. Both values of M/L_V are consistent with the 13 Gyr population estimates in <cit.>.

§ DYNAMICAL MODELING

In this section we describe the technical details of the dynamical modeling, while the results of the modeling are presented in the next section. We fit the radial dispersion profiles of each UCD to dynamical models using the Jeans Anisotropic Models (JAM) method, with the corresponding code discussed in detail in <cit.>. To briefly summarize, the dynamical models are made in a series of steps, making two general assumptions: (1) the velocity ellipsoid is aligned with the cylindrical coordinate system (R,z,ϕ), and (2) the anisotropy is constant. Here, the anisotropy is defined as β_z = 1-(σ_z/σ_R)^2, where σ_z is the velocity dispersion parallel to the rotation axis and σ_R is the velocity dispersion in the radial direction in the plane of the galaxy. The first step in the dynamical modeling process is to construct a three-dimensional mass model by deprojecting the two-dimensional mass model MGEs discussed in the previous section. In the self-consistent case, the luminosity and mass profiles are the same. However, in our case, we used the mass profile to construct the potential and we used the light profile to calculate the observable properties of the model, both described below. The choice to parameterize the light profile with MGEs is motivated by the ease of deprojecting Gaussians and the accuracy in reproducing the surface brightness profiles <cit.>. The second step in the dynamical modeling process is to construct a gravitational potential using our mass model. This potential also contains a Gaussian to represent a supermassive BH, with axis ratio q = 1 and width σ ≲ r_min/3, where r_min is the smallest distance from the BH that needs to be accurately modeled. Although a supermassive BH can be modeled by adding a Keplerian potential, it is much simpler to model the BH as this small Gaussian <cit.>. Next, the MGE formalism is applied to the solution of the axisymmetric anisotropic Jeans equations (see Section 3.1.1 of <cit.>). Finally, the intrinsic quantities are integrated along the line-of-sight (LOS) and convolved with the PSF from the kinematic data to generate observables that can be compared with the radially binned dispersion profiles.
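As a concrete illustration of one of these steps, the sketch below appends a supermassive BH to a mass MGE as a tiny round Gaussian, following the prescription just described (q = 1 and σ ≲ r_min/3). The function name, the example values, and the distance scale are our own illustrative choices, not part of the JAM code itself:

```python
import numpy as np

def add_bh_to_mge(surf, sigma, qobs, m_bh, r_min, pc_per_arcsec):
    """Append a small round Gaussian of total mass m_bh to a mass MGE.

    surf  : peak surface densities of the MGE components [Msun/pc^2]
    sigma : Gaussian dispersions [arcsec]
    qobs  : projected axis ratios
    r_min : smallest radius [arcsec] that must be modeled accurately
    """
    sig_bh = r_min / 3.0                        # width <~ r_min/3, q = 1
    sig_pc = sig_bh * pc_per_arcsec
    surf_bh = m_bh / (2.0 * np.pi * sig_pc**2)  # 2-D Gaussian: M = 2*pi*Sigma0*sigma^2
    return (np.append(surf, surf_bh),
            np.append(sigma, sig_bh),
            np.append(qobs, 1.0))

# e.g., a 4.4e6 Msun BH at the Virgo distance (~80 pc/arcsec, an assumption):
surf, sig, q = add_bh_to_mge(np.array([1.7e6]), np.array([1e-4]),
                             np.array([0.66]), 4.4e6, 0.05, 80.0)
```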
Supermassive BH masses are frequently measured with dynamical models that allow for fully general distribution functions (e.g., Schwarzschild models), which is important because of the BH mass–anisotropy degeneracy in explaining central dispersion peaks in galaxies. Since plunging radial orbits have an average radius that is far from the center of the galaxy, these orbits can raise the central dispersion without significantly enhancing the central mass density. Similarly, a supermassive BH also raises the dispersion near the center of the galaxy. Other dynamical modeling techniques break this degeneracy by fitting the full orbital distribution without assumptions about the anisotropy. However, given the quality of our kinematic data, a more sophisticated dynamical modeling technique is not feasible; we further discuss the assumptions and limitations of our modeling at the beginning of Section <ref>. For our dynamical models, we created a grid of the four free parameters: Γ, BH mass, inclination angle, and anisotropy. For VUCD3, our initial run consisted of:

* 40 values of Γ, ranging from 0.05 to 2.0 in increments of 0.05. Note that the best-fit dynamical M/L_F814W = 2.1 Γ. We use this translation when reporting the dynamical M/Ls for VUCD3.
* 16 values for the BH mass, from 0.0 to 7.0 × 10^6 M_⊙ in increments of 5 × 10^5 M_⊙, plus one point at 1 × 10^5 M_⊙.
* 11 values for the anisotropy parameter, in increments of 0.1 from -0.2 to 0.8.
* 4 values for the inclination angle, from 60^∘ to 90^∘ in 10^∘ increments.

M59cO was fitted using:

* 35 values of Γ, from 0.1 to 3.5 in increments of 0.1. Note that the best-fit dynamical M/L_F475W = 4.8 Γ. We use this translation when reporting the dynamical M/Ls for M59cO.
* 19 values for the BH mass, from 0.0 to 8.5 × 10^6 M_⊙ in increments of 5 × 10^5 M_⊙, plus one point at 1 × 10^5 M_⊙.
* 11 values for the anisotropy parameter, from -0.2 to 0.8 in 0.1 increments.
* 9 values for the inclination angle, from 14.5^∘ (the lowest possible) to 90^∘ in 10^∘ increments.

The grids for the Γ, BH mass, and anisotropy values are shown in Figure <ref>, and are explained in further detail below. To determine the best-fit BH mass, we assumed isotropy (motivation explained in Section <ref>) and marginalized over Γ and the inclination angle. Next, we computed the cumulative likelihood, shown in Figure <ref>. Here, the different linestyles and colors represent different models and variations in the kinematic PSF. Unless explicitly stated, all of our dynamical models make use of the K-band luminosity MGEs; we note that the Γ values are scalings of our mass model, and the luminosity model is used only to calculate the model dispersion values. Our default dynamical models (shown in black) are as follows:

* For VUCD3, the default mass model was obtained by fixing the best-fit double Sérsic model from the F814W data to the F606W data, allowing only the Sérsic amplitudes to vary. The best-fit PSF was a Gauss+Moffat profile.
* For M59cO, the default mass model was obtained by fixing the best-fit double Sérsic model from the F475W data to the F850LP data, allowing only the amplitude to vary. The best-fit PSF was a double Gaussian profile.

To explore the systematic errors created by our choices of mass modeling and the fitting of the kinematic (NIFS) PSF, we also ran JAM models varying the mass model and PSF.
We used our default mass model and varied only the PSF (shown in red) as:

* Solid: the best-fit PSF from the function that did not best match the continuum (i.e., a double Gaussian for VUCD3, and a Gauss+Moffat profile for M59cO).
* Dotted: the PSF created using the HST model image in the reddest filter available for convolution.

We also ran three separate JAM models with various mass models (shown in blue) as:

* Solid: mass model using the best-fit free double Sérsic profile, with the mass profile determined from the color at the FWHM of the individual MGEs.
* Dashed: model using the best-fit free double Sérsic profile, assuming mass follows light.
* Dotted: model where the shape parameters of the double Sérsic profile were fixed, assuming mass follows light.

Finally, we tested the effects of our choice of the luminosity model by running one dynamical model with the default mass model and PSF, but using the luminosity model from the original filter (F814W for VUCD3 and F475W for M59cO; shown in cyan). The default model was chosen based on the accuracy of reproducing the surface brightness profiles, as well as the ease and accuracy of determining the luminosity and mass profiles. The systematic effects of our model and PSF variations were taken into account when reporting the uncertainties on our final results (see Section <ref>). We also ran a finer grid of models for our default isotropic models to better sample and obtain a best-fit value for the cumulative distributions (Figure <ref>) and the predicted V_rms profiles (Figure <ref>). This smaller grid sampled the BH mass in 18 linear steps of 10^5 M_⊙, ranging from 3.4 × 10^6 to 5.0 × 10^6 M_⊙ for VUCD3 and from 5.2 × 10^6 to 6.8 × 10^6 M_⊙ for M59cO. For comparison with the dispersion profile, we also included a zero-mass BH for both objects. For VUCD3, Γ was modeled in 40 linear steps of 0.05, ranging from 0.05 to 2.0. For M59cO, we sampled Γ in 35 linear steps of 0.1, ranging from 0.1 to 3.5. This final grid was modeled at the best-fit inclination angle from the first grid, which was 60^∘ for VUCD3 and 14.5^∘ for M59cO. However, the inclination angle has a negligible effect on the best-fit BH mass and M/L. The best-fit model from this grid was used to determine the final results (see Section <ref>).
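The marginalization over such a grid reduces to a few lines of NumPy. In the sketch below the χ² grid file, the uniform BH-mass axis, and the β_z = 0 slice index are illustrative assumptions (the file name jam_grid_chi2.npy is hypothetical, and the real grid axes are not exactly uniform), but the mechanics — convert χ² to relative likelihood, sum out the nuisance axes, and read the 1σ interval off the cumulative distribution — follow the procedure described above:

```python
import numpy as np

chi2 = np.load("jam_grid_chi2.npy")        # shape (n_gamma, n_mbh, n_beta, n_inc)
mbh_grid = np.linspace(0.0, 7.0e6, chi2.shape[1])  # assumed uniform axis

like = np.exp(-0.5 * (chi2 - chi2.min()))  # relative likelihood on the grid

# Assume isotropy: take the beta_z = 0 slice (index is illustrative),
# then marginalize over Gamma (axis 0) and inclination (last axis).
i_iso = 2
like_mbh = like[:, :, i_iso, :].sum(axis=(0, 2))
like_mbh /= like_mbh.sum()

cum = np.cumsum(like_mbh)                  # cumulative likelihood of M_BH
mbh_lo, mbh_med, mbh_hi = np.interp([0.16, 0.50, 0.84], cum, mbh_grid)
print(f"M_BH = {mbh_med:.2e} +{mbh_hi - mbh_med:.1e} -{mbh_med - mbh_lo:.1e} Msun")
```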
§ RESULTS

In this section we report the results of our dynamical modeling, including the best-fit BH mass and stellar M/L, assuming isotropy. We also report the impact of including anisotropic orbits in our dynamical modeling.

§.§ Isotropic Model Results

We start by considering the best-fit results for isotropic Jeans models. We are forced to adopt simple dynamical models given the quality of our data sets, with just a small number of radially integrated kinematic measurements. The assumption of isotropy is a reasonable one; in M60-UCD1, for which higher fidelity data were available, the results from isotropic Jeans models were fully consistent with the more sophisticated Schwarzschild model results <cit.>, but with somewhat smaller error bars due to the lack of orbital freedom in the Jeans models. Nearby nuclei, including that of the Milky Way, have also been found to be nearly isotropic <cit.>, and their transformation into UCDs by tidal stripping is not expected to affect the mass distribution near the center of the galaxy <cit.>. We also note that other works have shown JAM models to be consistent with Schwarzschild model and maser BH mass estimates <cit.>. Given all of these factors, we present our results assuming isotropic Jeans models, and then consider the effects of anisotropy in the following section. For the BH mass, we found the best fit to be 4.4^+0.6_-0.7 × 10^6 M_⊙ and 5.8^+0.9_-0.5 × 10^6 M_⊙ for VUCD3 and M59cO, respectively. We found the best-fit M/L_F814W to be 1.8 ± 0.3 for VUCD3 and M/L_F475W = 1.6^+0.3_-0.4 for M59cO. Here, the uncertainties are quoted as the one-sigma deviations calculated from the cumulative likelihood. Due to the lack of orbital freedom in the JAM models, we also quote the three-sigma deviations for both objects, which also encompass the systematic effects of the model/PSF variations. For VUCD3, we found the best-fit BH mass and M/L_F814W to be 4.4^+2.5_-3.0 × 10^6 M_⊙ and 1.8 ± 1.2, respectively. For M59cO, we found 5.8^+2.5_-2.8 × 10^6 M_⊙ for the BH mass and 1.6^+1.2_-1.1 for the M/L_F475W. Using the color information from Section <ref>, we found M/L_V = 3.0 and 1.4 for VUCD3 and M59cO, respectively. Figure <ref> shows the comparison of our kinematic data (black points) with the best-fitting dynamical model, using the values stated above for the mass of the BH and M/L. The red line represents the best-fit dynamical model without a BH, and the blue line represents the best-fit dynamical model with a BH. The grey line indicates the best-fit dynamical model without a BH, but including anisotropy (discussed in Section <ref>). Changing the mass of the BH affects the overall shape of the dispersion profile, while the M/L merely scales the model dispersion vertically. In both objects, it is clear that when isotropy is assumed, a central massive BH better reproduces the kinematic dispersion profile. From Figure <ref>, we see that adding a central massive BH to the dynamical modeling has the effect of reducing the best-fit M/L (as well as increasing the anisotropy, as discussed below). Therefore, we determined the total dynamical mass and Γ, assuming isotropy, as a check on our original hypothesis: the addition of a central massive BH reduces Γ to values comparable to those of globular clusters and compact elliptical galaxies. The total dynamical mass was calculated by multiplying the dynamical M/L by the total luminosity. For VUCD3, we found that with a BH mass of 4.4 × 10^6 M_⊙, Γ with three-sigma error bars was 0.8 ± 0.6, resulting in a total dynamical mass of 32 ± 21 × 10^6 M_⊙. For comparison, without a BH component Γ with three-sigma error bars was 1.7 ± 0.2, resulting in a total dynamical mass of 66 ± 8 × 10^6 M_⊙, which is consistent with previous results <cit.>. For M59cO, we found that with a BH mass of 5.8 × 10^6 M_⊙, Γ with three-sigma error bars was 0.3 ± 0.2, resulting in a total dynamical mass of 32^+24_-22 × 10^6 M_⊙. Without a BH component, Γ with three-sigma error bars was 0.9 ± 0.1, resulting in a total dynamical mass of 83^+5_-12 × 10^6 M_⊙.
These results for Γ and the total dynamical mass without a BH are inconsistent with previous works; our value is lower than those based on a low-resolution dispersion determination <cit.>, and higher than the measurement by <cit.>, which was based on a lower integrated dispersion. Taking the ratio of the best-fit BH mass and the total dynamical mass (including both stars and BH), we found a central massive BH making up ∼13% and ∼18% of the total mass for VUCD3 and M59cO, respectively. These large mass fractions imply a large black hole sphere of influence, which quantifies the ability to detect a BH given a set of observations. Using the conventional definition of the black hole sphere of influence (r_infl = G M / σ^2), we find r_infl = 0.15″ for VUCD3 and r_infl = 0.31″ for M59cO (assuming the integrated dispersion values). This large sphere of influence is the reason for the large uncertainty in our stellar masses. The BH mass fractions are comparable to the mass fraction found in M60-UCD1 <cit.>, and consistent with the estimates made by <cit.>. They are also similar to the mass fractions of BHs within nuclear star clusters in the Milky Way, M32, and NGC 4395 <cit.>. Furthermore, these BH mass fractions reduce Γ to values comparable to those in many globular clusters and compact ellipticals <cit.>. Figure <ref> illustrates this effect. Here the grey points represent globular clusters and UCDs. The stars represent objects which our group will analyze and test for the presence of central massive BHs. The blue stars are VUCD3 and M59cO, while the red star shows M60-UCD1. These colored stars show Γ after accounting for a central massive BH. The colored arrows illustrate the effect that the central massive BH has on Γ and the dynamical estimate of the stellar mass. Figure <ref> also illustrates the fact that all three UCDs with central massive BHs have stellar components with lower than expected dynamical masses (i.e., their Γ value is below one), assuming a Kroupa/Chabrier IMF. In both objects we assumed solar metallicities. However, if the metallicity were significantly below solar, this could lead to an overestimate of the population mass estimates; this seems possible in VUCD3, where the existing measurements span a wide range from [Z/H] = -0.28 to 0.35 <cit.>. Globular cluster dynamical mass estimates also seem to be lower than expected <cit.>, although this appears to be in part because of mass segregation within the clusters combined with the assumption of mass-traces-light models <cit.>. However, we note that no mass segregation is expected in any of the UCDs with BHs due to their long relaxation times (the half-mass relaxation times are 203 Gyr, 624 Gyr, and 350 Gyr in VUCD3, M59cO, and M60-UCD1, respectively). The most massive globular clusters also have long relaxation times, and these clusters also seem to have lower than expected M/Ls for clusters with [Fe/H] > -1 <cit.>. Both these clusters and the less massive metal-rich clusters in the Milky Way with dynamical mass estimates based on N-body models by <cit.> have masses 70–80% of the values expected for a Kroupa IMF, consistent with all three UCDs we have measured so far. We also note that Γ (assuming a Chabrier IMF) was recently found to be significantly below unity in the nucleus of NGC 404 <cit.> and in many compact elliptical galaxies <cit.>.
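The sphere-of-influence numbers quoted above are easy to check directly; a minimal verification (the 16.5 Mpc Virgo distance adopted for the angular conversion is our assumption):

```python
G = 4.301e-3                 # gravitational constant in pc Msun^-1 (km/s)^2
D_PC = 16.5e6                # assumed Virgo distance in pc
ARCSEC_PER_RAD = 206265.0

for name, m_bh, sigma in [("VUCD3", 4.4e6, 39.7), ("M59cO", 5.8e6, 31.3)]:
    r_infl_pc = G * m_bh / sigma**2                    # r_infl = G M / sigma^2
    r_infl_arcsec = r_infl_pc / D_PC * ARCSEC_PER_RAD
    print(f"{name}: r_infl = {r_infl_pc:.1f} pc = {r_infl_arcsec:.2f} arcsec")
# -> VUCD3: ~12 pc (0.15 arcsec); M59cO: ~25 pc (0.31 arcsec)
```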
§.§ Impact of Anisotropy

Due to the intrinsic degeneracy between the BH mass, stellar M/L, and anisotropy parameter, we also tested the impact of including anisotropic orbits in our JAM models. These degeneracies are represented by the contour plots shown in Figure <ref>. From left to right, top to bottom, the panels represent VUCD3 anisotropy vs. BH mass, VUCD3 Γ vs. BH mass, M59cO anisotropy vs. BH mass, and M59cO Γ vs. BH mass. The blue points represent our grid sample and the green point is the best fit determined over the entire grid. The colored lines represent the best-fit anisotropy and Γ assuming the BH mass makes up 1% (green), 5% (orange), and 10% (yellow) of the total dynamical mass of the system. The contours were calculated by taking, at each pair of grid points shown in the plot, the minimum chi-squared over the two free parameters not shown (i.e., for each pair of grid points shown in Figure <ref>, we marginalized over the two hidden parameters). Here, it is clear that the BH mass scales inversely with both the anisotropy and the M/L. We note that the green points in Figure <ref> show that the best-fit BH mass and Γ determined over the entire grid are consistent with the results we obtained when we assumed isotropy. For the kinematic data to be consistent with no BH, the anisotropy parameter needs to be as high as β_z = 0.4 and β_z = 0.6 for VUCD3 and M59cO, respectively (shown as a grey line in Figure <ref>). This would require both of these objects to have a high degree of radial anisotropy. However, we recognize that a lower value for the mass of the BH could lead to a more reasonable value for β_z. Therefore, we also tested what the best-fit values for β_z and Γ would be assuming the mass of the BH was 1%, 5%, and 10% of the total dynamical mass. The best-fit values are represented by the green (1%), orange (5%), and yellow (10%) colored lines in Figure <ref>, and shown again as dispersion profile fits in Figure <ref>. In each case the dynamical models fit the dispersion profile well, but β_z at each BH mass remains high, at 0.3–0.4 for VUCD3 and 0.5–0.6 for M59cO, similar to the no-BH case. As discussed at the beginning of Section <ref>, high β_z would be at odds with existing nuclei and the one previous UCD where this measurement has been made. Furthermore, Γ remains elevated in the case of VUCD3. The best-fit stellar M/L with no BH would be a factor of 1.9 above the population estimate. This is inconsistent with the stellar population results in M60-UCD1 and Local Group globular clusters, which fall below the stellar population estimates. We compared our anisotropy values to similar objects tested for anisotropic orbits. M60-UCD1, the only other UCD with a known value, was shown to be nearly isotropic <cit.>. Other compact objects, such as the nuclear star clusters in the Milky Way, Cen A, NGC 404, and NGC 4244 and the compact object M32, have also been shown to have nearly isotropic orbits <cit.>. Therefore, we conclude that while it is possible to have highly anisotropic orbits, it seems unlikely.
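A sketch of the contour construction just described — profiling the χ² grid over the two hidden parameters for each pair of displayed parameters — is given below; the grid file name is the same hypothetical placeholder used earlier, and the Δχ² levels are the standard two-parameter confidence values:

```python
import numpy as np

chi2 = np.load("jam_grid_chi2.npy")     # shape (n_gamma, n_mbh, n_beta, n_inc)

# beta_z vs. M_BH plane: minimize over Gamma (axis 0) and inclination (axis 3)
chi2_beta_mbh = chi2.min(axis=(0, 3))   # shape (n_mbh, n_beta)
dchi2 = chi2_beta_mbh - chi2_beta_mbh.min()

# 1, 2, 3 sigma contour levels for two interesting parameters:
levels = [2.30, 6.18, 11.83]
# e.g., pass dchi2.T and levels to matplotlib's plt.contour(...)
```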
§ DISCUSSION & CONCLUSIONS

In this paper we have tested the hypothesis that the existence of central massive BHs making up ∼10% of the total mass of UCDs can explain the elevated dynamical-to-stellar mass ratios observed in almost all UCDs above 10^7 M_⊙ <cit.>. For our analysis, we observed two Virgo UCDs, VUCD3 and M59cO, using adaptive optics assisted kinematic data from the Gemini/NIFS instrument combined with multi-band HST archival imaging. The Gemini/NIFS data were used to determine radial dispersion profiles for each object. We found integrated dispersion values of 39.7 ± 1.2 km s^-1 for VUCD3 and 31.3 ± 0.5 km s^-1 for M59cO, with central dispersion values peaking at 52.9 km s^-1 and 40.2 km s^-1 for each object, respectively. The HST archival images were fitted with a double Sérsic profile to model the mass density and total luminosity, and to test for the presence of stellar population variations. We found a total luminosity of L_F814W = 17.8 × 10^6 L_⊙ and L_F475W = 20.3 × 10^6 L_⊙ for VUCD3 and M59cO, respectively. Both objects showed a mild positive color gradient as a function of radius, implying multiple stellar populations. These effects were accounted for in our mass models by multiplying the luminosity by the M/L determined from SSP models. Combining our mass models and velocity dispersion profiles, we created dynamical models using JAM. We found that the best-fit dynamical models contained central massive BHs with masses and three-sigma uncertainties of 4.4^+2.5_-3.0 and 5.8^+2.5_-2.8 × 10^6 M_⊙ for VUCD3 and M59cO, respectively, assuming isotropy. These BHs make up an astonishing ∼13% of VUCD3's and ∼18% of M59cO's total dynamical mass. The addition of a central massive BH has the effect of reducing Γ, as illustrated by the red and blue arrows in Figure <ref>. For comparison, the best-fit dynamical model, assuming isotropy, without a central BH returns a Γ value of 1.7 with a total dynamical mass of 66 × 10^6 M_⊙ for VUCD3, and 0.9 with a total dynamical mass of 83 × 10^6 M_⊙ for M59cO. The best-fit dynamical models reduce Γ to 0.8 with a total dynamical mass of 32 × 10^6 M_⊙ for VUCD3, and to 0.3 with a total dynamical mass of 32 × 10^6 M_⊙ for M59cO. Due to the intrinsic degeneracy between the BH mass and the anisotropy parameter, β_z, in the JAM models, we also tested the impact of including anisotropic orbits. We found that β_z values of 0.4 for VUCD3 and 0.6 for M59cO allow the kinematic data to be consistent with no BH. Furthermore, a central massive BH making up 1%, 5%, or 10% of the total dynamical mass also matches the kinematic data, but leaves β_z relatively unchanged at ∼0.4 for VUCD3 and ∼0.6 for M59cO. Comparing these values with other nuclear star clusters and UCDs shows that highly radially anisotropic orbits in UCDs are improbable. We conclude that both VUCD3 and M59cO host supermassive BHs and are likely tidally stripped remnants of once more massive galaxies. We can estimate the progenitor mass assuming that these UCDs follow the same scaling relations between BH mass and bulge or galaxy mass as unstripped galaxies <cit.>, as well as similar scaling relations for NSCs <cit.>.
<cit.> used BH scaling relations to show that today's UCDs are consistent with typically having ∼1% of the luminosity of their progenitor galaxy, suggesting progenitor galaxies for VUCD3 and M59cO of ∼10^9 M_⊙. With the measured BH masses, we can use scaling relations to estimate more precise progenitor masses; using the <cit.> relations for all galaxies, we estimate dispersions of ∼100 km s^-1 and bulge masses of 1.2 × 10^9 and 1.7 × 10^9 M_⊙ for VUCD3 and M59cO, respectively, with the scatter in the latter relationship suggesting about an order of magnitude uncertainty. We note that the high inferred galaxy σ values are not necessarily expected to be observed in the nucleus <cit.>. Assuming the inner components of the UCDs represent the nuclear star clusters of the progenitors <cit.>, the apparent magnitudes of these components are ∼19.5–20 in g, with inferred masses of 4–10 × 10^6 M_⊙. Nuclei with these magnitudes and similar effective radii are seen in Virgo Cluster galaxies ranging from M_g of -16 to -18 <cit.> (with galaxy masses of 1–8 × 10^9 M_⊙, assuming an M/L_g similar to that of M59cO). Both inner components are slightly bluer; if interpreted as being due to a younger age population (rather than lower metallicity), this suggests that they formed stars as recently as 6–8 Gyr ago (Section <ref>), which may indicate that they were stripped more recently than this. Measurements for both VUCD3 and M59cO suggest they have near-solar metallicity and are α enhanced (see Section <ref>). The metallicity seems consistent with present-day nuclei in the luminosity range expected for the progenitors <cit.>. However, only a small fraction of Virgo NSCs seem to be significantly α enhanced <cit.>. Overall, the BH mass and the NSC luminosity and metallicity suggest that the host galaxies for both objects were of order 10^9 M_⊙. This is about an order of magnitude lower than the likely progenitor of M60-UCD1 <cit.>, and is in a galaxy mass regime where very few BH masses have been measured <cit.>. In systems with measured BH masses, the ratio of BH to NSC masses ranges from 10^-4 to 10^4 <cit.>, which is consistent with the measurements here of roughly equal NSC and BH masses. These UCDs constitute the second and third UCDs known to host supermassive BHs. All UCDs with adaptive optics kinematic data available thus far have been shown to host central massive BHs. After taking these BHs into account, the stellar mass of UCDs is no longer higher than expected, suggesting that other UCDs with high Γ may host BHs (Figure <ref>). Non-detections of BHs in two UCDs, based on ground-based data, have been published.
In NGC4546-UCD1 (M_⋆ ∼ 3 × 10^7 M_⊙), <cit.> suggests that any BH is ≲3% of the stellar mass, despite finding evidence that this UCD is in fact a stripped nucleus. This result depends on the assumption of a stellar M/L based on the age estimate of the stellar population; a lower stellar M/L, such as those we find (Figure <ref>), would result in a higher possible BH mass. Another BH non-detection was reported by <cit.> using isotropic models of the bright and extended UCD3; a 5% mass fraction BH is consistent with their data within 1σ, while a 20% BH mass fraction is excluded at 96% confidence. Our high resolution results reinforce the hypothesis that UCD BHs could represent a large increase in the number density of massive BHs <cit.>. Tidal stripping calculations based on cosmological simulations suggest that all high-mass UCDs (>10^7.3 M_⊙) are consistent with being stripped nuclei <cit.>, with a mix of globular clusters and stripped nuclei at lower masses. Depending on how common stripped nuclei are, these objects may represent the best way of studying the population of BHs in lower mass galaxies, a critical measurement for understanding the origin of supermassive BHs <cit.>. This emphasizes the value in making similar studies of nearer, lower mass UCDs. For example, some Local Group globular clusters are also thought to be tidally stripped remnants <cit.>. BH detections have been claimed in ω Cen <cit.>, M54 <cit.>, and 47 Tucanae <cit.>, but these remain controversial <cit.>. A BH has also been claimed in the Andromeda globular cluster G1 <cit.>, but accretion evidence for this BH has been elusive <cit.>. In all these cases, the mass fraction of the black hole is certainly lower than the mass fractions of >10% that we find here. The lack of knowledge of BH demographics in low-mass host galaxies prevents easy comparison with non-stripped systems. Nonetheless, nearby UCDs represent the best place to push towards lower masses; we have ongoing observing programs for six additional UCDs, including objects in M31 and NGC 5128.

Facilities: Gemini:Gillett (NIFS/ALTAIR), HST (ACS/HRC), HST (ACS/WFC)

Acknowledgments: CA and ACS acknowledge financial support from NSF grant AST-1350389. JB acknowledges financial support from NSF grant AST-1518294. JS acknowledges financial support from NSF grant AST-1514763 and a Packard Fellowship. IC acknowledges support by the Russian–French PICS International Laboratory program (no. 6590), co-funded by CNRS and the RFBR (project 15-52-15050). AJR was supported by National Science Foundation grant AST-1515084. Based on observations obtained at the Gemini Observatory (programs GN-2014A-Q-4 and GN-2015A-Q-6; acquired through the Gemini Observatory Archive and processed using the Gemini IRAF package), which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil). Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).

[End of arXiv:1703.09221, "Detection of Supermassive Black Holes in Two Virgo Ultracompact Dwarf Galaxies", by C. P. Ahn, A. C. Seth, M. den Brok, J. Strader, H. Baumgardt, R. van den Bosch, I. Chilingarian, M. Frank, M. Hilker, R. McDermid, S. Mieske, A. J. Romanowsky, L. Spitler, J. Brodie, N. Neumayer, & J. L. Walsh (astro-ph.GA, 2017).]
^1Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
^2Laboratoire AIM, CEA/IRFU, CNRS/INSU, Université Paris Diderot, CEA-Saclay, 91191 Gif-Sur-Yvette, France
^3Dipartimento di Fisica, Università di Roma Tor Vergata, via della Ricerca Scientifica 1, 00133 Roma, Italy
^4Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse, 85748 Garching, Germany
^5Naval Research Laboratory, 4555 Overlook Avenue SW, Code 7213, Washington, DC 20375, USA
^6U.S. Naval Research Laboratory, Code 7213, 4555 Overlook Ave. SW., Washington, DC 20375
^7Dipartimento di Fisica dell'Università di Trieste, Sezione di Astronomia, via Tiepolo 11, I-34131 Trieste, Italy
^8INAF, Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34131 Trieste, Italy
^9Institut d'Astrophysique Spatiale, CNRS (UMR8617), Université Paris-Sud 11, Bâtiment 121, Orsay, France
^10CNRS; IRAP; 9 Av. colonel Roche, BP 44346, F-31028 Toulouse cedex 4, France
^11Université de Toulouse; UPS-OMP; IRAP; Toulouse, France
^12Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo, Norway
^13Department of Physics & Astronomy, The University of Iowa, Iowa City, IA, 52245
^14Institut d'Astrophysique Spatiale, CNRS, Univ. Paris-Sud, Université Paris-Saclay, Bât. 121, 91405 Orsay Cedex, France

We derive and compare the fractions of cool-core clusters in the Planck Early Sunyaev-Zel'dovich sample of 164 clusters with z ≤ 0.35 and in a flux-limited X-ray sample of 100 clusters with z ≤ 0.30, using Chandra observations. We use four metrics to identify cool-core clusters: 1) the concentration parameter: the ratio of the integrated emissivity profile within 0.15 r_500 to that within r_500; 2) the ratio of the integrated emissivity profile within 40 kpc to that within 400 kpc; 3) the cuspiness of the gas density profile: the negative of the logarithmic derivative of the gas density with respect to the radius, measured at 0.04 r_500; and 4) the central gas density, measured at 0.01 r_500. We find that the sample of X-ray selected clusters, as characterized by each of these metrics, contains a significantly larger fraction of cool-core clusters compared to the sample of SZ selected clusters (44±7% vs. 28±4% using the concentration parameter in the 0.15–1.0 r_500 range, 61±8% vs. 36±5% using the concentration parameter in the 40–400 kpc range, 64±8% vs. 38±5% using the cuspiness, and 53±7% vs. 39±5% using the central gas density). Qualitatively, cool-core clusters are more X-ray luminous at fixed mass. Hence, our X-ray flux-limited sample, compared to the approximately mass-limited SZ sample, is over-represented with cool-core clusters. We describe a simple quantitative model that uses the excess luminosity of cool-core clusters compared to non-cool-core clusters at fixed mass to successfully predict the observed fraction of cool-core clusters in X-ray selected samples.

§ INTRODUCTION

Clusters of galaxies are the largest gravitationally bound structures in the Universe. In the standard ΛCDM cosmology, massive halos dominated by dark matter assemble by the accretion of smaller groups and clusters <cit.>. Under the influence of gravity, diffuse matter and smaller collapsed halos fall into larger halos and, occasionally, halos of comparable mass merge with one another.
X-ray observations of substructures in clusters of galaxies <cit.> and measurements of the growth of structure <cit.> demonstrate that clusters are still in the process of formation. Early X-ray observations of galaxy clusters revealed that a significant fraction present a bright and dense core, whose central cooling time is much shorter than the Hubble time. These observations led to the development of the cooling flow model <cit.>. Analyzing deep Chandra observations of Hydra A, <cit.> showed that the spectral fits yielded significantly smaller mass deposition rates than expected. Using XMM-Newton, <cit.> presented high-resolution X-ray spectra of 14 putative cooling-flow clusters that exhibit a severe deficit of very cool emission relative to the predictions of the isobaric cooling-flow model. However, as predicted by the cooling-flow model, a temperature drop in the center of many clusters is observed, typically reaching one third of the peak temperature <cit.>. Clusters presenting such a temperature drop in their cores are referred to as cool-core (CC) clusters <cit.>. Using a very large, high-resolution cosmological N-body simulation (Millennium-XXL), <cit.> showed that cosmological conclusions based on galaxy cluster surveys depend critically on the selection biases, which include the wavelength used for the identification of clusters of galaxies. <cit.> presented the first observational evidence of different morphological properties in X-ray vs. SZ selected samples: SZ selected clusters have a less peaked density distribution and are less X-ray luminous at a given mass than X-ray selected clusters. <cit.> presented a method to diagnose substructure and dynamical state for 2092 optical galaxy clusters. They found that the fraction of relaxed clusters is 28% in the full sample, while the fraction increases to 46% for the subsample matched with ROSAT detections, indicating that the wavelength used for detecting clusters plays a significant role in the dynamical state of the population that is selected. <cit.> showed that CC clusters in his SZ sample represent 40±10% of the cluster population at low redshift. <cit.> constructed near-complete samples based on X-ray and SZ catalogs. They found that roughly 70±10% of the clusters in the X-ray selection have no radio halos (indicating they are relaxed), whereas the fraction in the Planck selection is only 30±10%, in agreement with findings from <cit.>. More recently, <cit.> compared the dynamical state of the 132 clusters with the highest signal-to-noise ratio from the Planck Sunyaev-Zel'dovich (SZ) catalogue to that of three X-ray-selected samples (HIFLUGCS[HIFLUGCS – The HIghest X-ray FLUx Galaxy Cluster Sample <cit.>], MACS[MACS – The MAssive Cluster Survey <cit.>], and REXCESS[REXCESS – The REpresentative XMM-Newton ClustEr Structure Survey <cit.>]). They showed that the fractions of relaxed clusters in the X-ray samples are significantly larger than that in the Planck sample, and interpreted this result as an indication of a cool core bias <cit.> affecting X-ray selection. Recently, <cit.> compared the fraction of cool core clusters in a Planck cosmology SZ sample <cit.> to that in the MACS X-ray sample. Using the concentration parameter, which measures the ratio of the integrated surface brightness in two fixed physical apertures, as defined by <cit.>, they showed that the cool core fraction is significantly higher in the MACS X-ray selected sample than in the Planck cosmology SZ sample (59±5% in their X-ray sample vs. 29±4% in their SZ sample).
This result agrees with that presented by <cit.>, which is fully described in this paper. The X-ray sample presented in this paper spans a higher mass range compared to the mass range of the X-ray sample presented by <cit.>. Therefore, we also make use of three parameters which are computed at radii that scale with the total mass. We note that the work presented by <cit.> and the work presented here were developed in parallel <cit.>. In this paper we compare the nature of the cores of the 164 Planck ESZ clusters at z < 0.35 to that of the 100 highest flux X-ray clusters at Galactic latitudes |b| > 20^∘ and 0.025 < z < 0.30. The X-ray sample is extended to 100 clusters from the sample of 52 clusters presented by <cit.> by lowering the flux limit to f_X > 7.5 × 10^-12 erg s^-1 cm^-2 in the 0.5–2.0 keV band. Throughout this paper, we assume a standard ΛCDM cosmology with H_0 = 70 km s^-1 Mpc^-1, Ω_M = 0.3, and Ω_Λ = 0.7. All uncertainties are quoted at the 1σ level.

§ SZ AND X-RAY SELECTED CLUSTERS

The main goal of this work is to compare the fraction of CC clusters in X-ray and SZ selected samples. In this section, we describe the two samples.

§.§ SZ sample

The first catalog of 189 SZ clusters detected by the Planck mission was released in early 2011 <cit.>. An XVP (X-ray Visionary Program – PI: Jones) and HRC Guaranteed Time Observations (PI: Murray) combined to form the Chandra-Planck Legacy Program for Massive Clusters of Galaxies[http://hea-www.cfa.harvard.edu/CHANDRA_PLANCK_CLUSTERS/]. For each of the 164 ESZ Planck clusters at z ≤ 0.35, we obtained exposures sufficient to collect at least 10,000 source counts.

§.§ X-ray sample

<cit.> compiled a sample of the X-ray brightest clusters in the local universe by selecting the highest flux clusters detected in the ROSAT All-Sky Survey at |b| > 20^∘ and z > 0.025, using the HIFLUGCS catalog <cit.> as reference. The sample used here is an extension of the <cit.> sample, where the flux limit in the 0.5–2.0 keV band was lowered to f_X > 7.5 × 10^-12 erg s^-1 cm^-2. This sample contains 100 clusters and has an effective redshift depth of z < 0.3. All have Chandra observations. Of the 100 X-ray selected clusters, 49 are also in the ESZ sample, and 47 are in the HIFLUGCS catalog.

§.§ Comparisons between the X-ray and ESZ selected clusters

Figure <ref> presents the redshift distribution in both samples. The Planck detected clusters are clearly more broadly distributed in redshift than are the X-ray clusters. This is due to the nature of the selection: for resolved clusters, the Sunyaev-Zel'dovich signal is independent of the redshift of the cluster, because it is the CMB that is distorted (the CMB photons originated at the epoch of recombination, from a constant redshift of z ∼ 1000), while the X-ray selected clusters constitute a flux-limited sample, which strongly favors the X-ray brighter, lower redshift clusters. Figure <ref> presents the mass distribution of both samples. The X-ray sample spans a much larger mass range, extending to lower masses than the Planck ESZ sample. The difference between the lowest observed mass in the X-ray and ESZ samples is caused by the different detection thresholds. Note that the highest observed mass is the same for both samples (see Figure <ref>).

§.§ Subclusters

A small fraction (∼10–20%) of the clusters in both the X-ray and SZ samples present X-ray bright subclusters. In our analyses we exclude the secondary subclusters.
Only the principal cluster component is used in the comparisons between the X-ray and SZ samples. However, we present the metrics for all cluster components in Tables <ref> and <ref>.

§ DATA REDUCTION

Our Chandra data reduction followed the process described in <cit.>. We applied the calibration files. The data reduction included corrections for the time dependence of the charge transfer inefficiency and gain, and also a check for periods of high background, which were then omitted. Standard blank sky background files and readout artifacts were subtracted. We also detected compact X-ray sources in the 0.7–2.0 keV and 2.0–7.0 keV bands using CIAO wavdetect and then masked these sources before performing the spectral and spatial analyses of the cluster emission. For each cluster, we used all available Chandra observations within 2 Mpc of the cluster center with all CCDs (ACIS-I and ACIS-S).

§ EMISSION MEASURE PROFILES

We refer to <cit.> for a detailed description of the procedures we used to compute the emission measure profile for each cluster. We outline here only the main aspects of the method. We measured the surface brightness profiles in the 0.7–2.0 keV energy band, which maximizes the signal-to-noise ratio in Chandra observations for typical cluster gas temperatures. We used the X-ray halo peak as the cluster center. The readout artifacts and blank-field background <cit.> were subtracted from the X-ray images, and the results were then exposure-corrected, using exposure maps computed assuming an absorbed optically-thin thermal plasma with kT = 5.0 keV, abundance = 0.3 solar, and the Galactic column density, including corrections for bad pixels and CCD gaps; these exposure maps do not take into account spatial variations of the effective area. We subtracted any small uniform component corresponding to soft X-ray foreground adjustments, if required (determined by fitting a thermal model in a region of the detector field distant from the cluster center, taking into account the expected thermal contribution from the cluster). Following these steps, we extracted the surface brightness in narrow concentric annuli (r_out/r_in = 1.05) centered on the X-ray halo peak and computed the Chandra area-averaged effective area for each annulus (see <cit.> for details on calculating the effective area). To compute the emission measure and temperature profiles, we assumed spherical symmetry. The spherical assumption is expected to introduce only small deviations in the emission measure profile <cit.>. Using the modeled deprojected temperature (see Section <ref>), effective area, and metallicity as a function of radius, we converted the Chandra count rate in the 0.7–2.0 keV band into the emission integral, EI = ∫ n_e n_p dV, within each cylindrical shell. Tables <ref> and <ref> list the maximum cluster radius at which the emission integral is computed (r_max) for each cluster. Seven clusters in the ESZ sample have r_max < r_500; in the X-ray sample, nine clusters have this condition (four of them are also in the ESZ sample). These numbers represent only 4% and 9% of the clusters in the ESZ and X-ray samples, respectively. We fit the emission measure profile assuming the gas density profile follows that given by <cit.>:

n_e n_p = n_0^2 (r/r_c)^{-α} (1 + r^2/r_c^2)^{-(3β - α/2)} (1 + r^γ/r_s^γ)^{-ϵ/γ} + n_02^2 (1 + r^2/r_c2^2)^{-3β_2},

where the parameters n_0 and n_02 determine the normalizations of the two additive components; α, β, β_2, and ϵ are indexes controlling the slope of the curve at the characteristic radii given by the parameters r_c, r_c2, and r_s; and γ controls the width of the transition region given by r_s. Although the relation above is based on a classic β-model <cit.>, it is modified to account for a central power-law-type cusp and a steeper emission measure slope at large radii. In addition, a second β-model is included to better characterize the cluster core. For further details on this equation, we refer the reader to <cit.>.
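A direct transcription of this model into Python is a useful reference for the metrics defined below; the n_e/n_p = 1.1995 conversion for a 0.3 Z_⊙ plasma is the value quoted in the text that follows:

```python
import numpy as np

def ne_np(r, n0, rc, alpha, beta, rs, gamma_, eps, n02, rc2, beta2):
    """Emissivity model above: n_e * n_p as a function of radius r."""
    term1 = (n0**2 * (r / rc)**(-alpha)
             / (1.0 + r**2 / rc**2)**(3.0 * beta - alpha / 2.0)
             / (1.0 + (r / rs)**gamma_)**(eps / gamma_))
    term2 = n02**2 / (1.0 + r**2 / rc2**2)**(3.0 * beta2)
    return term1 + term2

def n_e(r, *pars):
    """Electron density, using n_e/n_p = 1.1995 for Z ~ 0.3 Zsun (see text)."""
    return np.sqrt(1.1995 * ne_np(r, *pars))
```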
In the fit to the emissivity profile, all parameters are free to vary. For a typical metallicity of 0.3 Z_⊙, the reference values from <cit.> yield n_e/n_p = 1.1995. Examples of projected emissivity and gas density profiles are presented in Figures <ref> and <ref>.

§ TOTAL CLUSTER MASS ESTIMATES

Using the gas mass and temperature, we estimated the total cluster mass from the Y_X–M scaling relation of <cit.>,

M_500,Y_X = E^{-2/5}(z) A_YM (Y_X / 3×10^14 M_⊙ keV)^{B_YM},

where Y_X = M_gas,500 × kT_X, M_gas,500 is computed using the best-fit parameters of Equation (<ref>), and T_X is the measured temperature in the (0.15–1) × r_500 range. A_YM = 5.77 × 10^14 h^{1/2} M_⊙ and B_YM = 0.57 <cit.>. Here, M_500,Y_X is the total mass within r_500, and E(z) = [Ω_M (1+z)^3 + (1 - Ω_M - Ω_Λ)(1+z)^2 + Ω_Λ]^{1/2} is the function describing the evolution of the Hubble parameter. Using Equation (<ref>), r_500 is computed by solving

M_500,Y_X ≡ 500 ρ_c (4π/3) r_500^3,

where ρ_c is the critical density of the Universe at the cluster redshift. In practice, Equation (<ref>) is evaluated at a given radius, and the result is compared to the evaluation of Equation (<ref>) at the same radius. This process is repeated in an iterative procedure until the fractional mass difference is less than 1%.

§ COOL-CORE METRICS

It is not possible to measure the temperature profile, and thus directly establish the presence of a cool core, for all clusters in our SZ and X-ray samples because of the X-ray data quality. Figures <ref> and <ref> illustrate the difference in data quality within the sample. Instead, we apply a more robust approach, using the four metrics described below to characterize the presence of cool cores.

§.§ Concentration parameter in the 40–400 kpc range

The presence of cooler gas in the cores of clusters usually implies a larger gas density in the core, compared to the gas density outside the core, to maintain pressure balance. This increased gas density produces an X-ray bright core, since the X-ray emissivity is roughly proportional to the square of the gas density. Evaluating the X-ray brightness in the cluster core compared to the brightness within a given larger radius is a powerful method to determine if the cluster contains a cool core. This metric is referred to as the concentration parameter and was originally introduced by <cit.>:

C_SB4 = Σ(<40 kpc) / Σ(<400 kpc),

where Σ(<r) is the integrated projected emissivity within a circle of radius r.

§.§ Concentration parameter in the 0.15–1.0 r_500 range

Here we also use a modification of the original definition <cit.>, which is scaled by the characteristic radius r_500 as <cit.>:

C_SB = Σ(<0.15 r_500) / Σ(<r_500).

§.§ Cuspiness of the gas density profile

A third related metric is the cuspiness of the gas density profile, computed at a fixed scaled radius of 0.04 r_500 <cit.>:

δ = -d log n(r) / d log r |_{r = 0.04 r_500},

where n(r) is the gas density at a distance r from the cluster center.

§.§ Central gas density

A fourth useful quantity that indicates if a cluster presents a cool core is the central gas density <cit.>. Here we calculate the central density at 0.01 r_500 from the core (which will be called n_core), since the equation used to fit the density profile may diverge at r = 0 (if α > 0 in Equation (<ref>)).
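Continuing the earlier sketch, all four metrics can be evaluated from a fitted density model. The parameter values and r_500 below are illustrative placeholders rather than fitted values, and the projection simply integrates the emissivity model along a (truncated) line of sight:

```python
import numpy as np
from scipy.integrate import quad

pars = (0.01, 0.1, 0.5, 0.6, 0.4, 3.0, 2.0, 0.002, 0.03, 1.0)  # placeholder fit
r500 = 1.2                                                      # Mpc, placeholder

def sigma_proj(R):
    """Projected emissivity at projected radius R (arbitrary normalization)."""
    los = lambda z: ne_np(np.hypot(R, z), *pars)
    return 2.0 * quad(los, 0.0, 5.0 * r500)[0]   # truncate the LOS integral

def sigma_within(r_ap, n=64):
    """Integrated projected emissivity within a circular aperture r_ap."""
    rr = np.linspace(1e-3, r_ap, n)
    return np.trapz(2.0 * np.pi * rr * np.array([sigma_proj(x) for x in rr]), rr)

c_sb = sigma_within(0.15 * r500) / sigma_within(r500)  # concentration, scaled radii
c_sb4 = sigma_within(0.040) / sigma_within(0.400)      # concentration, 40-400 kpc (in Mpc)

r0, eps = 0.04 * r500, 1e-3                            # cuspiness at 0.04 r500
delta = -(np.log(n_e(r0 * (1 + eps), *pars))
          - np.log(n_e(r0 * (1 - eps), *pars))) / (2.0 * eps)

n_core = n_e(0.01 * r500, *pars)                       # central density at 0.01 r500
```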
Table: Spearman Rank Test

Relation                                        Correlation (X-ray)   Correlation (ESZ)
C_SB(0.15–1.0 r_500) vs. C_SB(40–400 kpc)       0.87                  0.84
C_SB(0.15–1.0 r_500) vs. Cuspiness              0.75                  0.69
C_SB(0.15–1.0 r_500) vs. Central Density        0.90                  0.87
C_SB(40–400 kpc) vs. Cuspiness                  0.93                  0.90
C_SB(40–400 kpc) vs. Central Density            0.93                  0.91
Cuspiness vs. Central Density                   0.92                  0.88

Notes. Columns list the metric relations and their Spearman correlations for both the SZ and X-ray samples. The Spearman p-values for all correlations were lower than 1.5 × 10^-19, indicating an extremely low probability that the metrics are not correlated.

§ TEMPERATURE PROFILES

In this paper, we present the temperature profiles for only two clusters, although we have temperature profiles for all clusters in our samples, which vary in the number of fitted parameters according to the quality of the data. We provide the fitted parameters for all clusters in Andrade-Santos et al. (2017). The two temperature profiles presented in this paper are examples of CC and NCC clusters, as well as of clusters with very different data quality (see Figures <ref> and <ref>). In this section, we present the analytic equations used to obtain the profiles, referring the reader to the papers where the full description of the calculations is presented <cit.>. <cit.> give a 3D temperature profile that describes the general features of the temperature profiles of clusters:

T_3D(r) = T_0 × (x + T_min/T_0)/(x + 1) × (r/r_t)^{-a} / [1 + (r/r_t)^b]^{c/b},

where x = (r/r_cool)^{a_cool}; r_t and r_cool are the transition and cool-core radii, respectively; T_min is the central temperature; and a, b, c, and a_cool are indexes that determine the slopes of the temperature profile in different radial ranges. We derive the deprojected 3D temperature by projecting a model to compare to the projected measured temperature. The 3D temperature model, T_3D, is weighted by the density squared according to the spectroscopic-like temperature <cit.>:

T_2D = T_spec ≡ ∫ n_e^2 T_3D^{1/4} dz / ∫ n_e^2 T_3D^{-3/4} dz,

to give values of T_2D for comparison with the measured values. Here, n_e is the electron density, given by Equation (<ref>), and T_3D is the deprojected 3D temperature, given by Equation (<ref>).

§ RESULTS

With the cluster gas density and emission measure profiles, we are able to compute the cuspiness of the gas density profile, the concentration, and the central gas density for the X-ray and Planck ESZ cluster samples. The uncertainties on the metrics for each cluster were computed using 100 Monte Carlo realizations of the density profile, including also a Gaussian distribution for r_500 (r_500 ± σ_r_500). The top left panel of Figure <ref> presents the distribution of concentration parameters in the 0.15–1.0 r_500 range for both cluster samples. We used a Kolmogorov–Smirnov (K-S) test for the SZ and X-ray samples to evaluate the probability that the two samples were drawn from the same distribution. We obtained p-values smaller than 3.1 × 10^-2 for all metrics, which suggests that the fraction of cool cores in the sample of X-ray selected clusters is different from that in the SZ sample.
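Both the two-sample test just described and the bootstrap uncertainties on the cool-core fractions quoted below can be reproduced with standard tools. A minimal sketch follows, with random placeholder data standing in for the measured metrics, and without the additional Poisson term included in our error budget:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
c_xray = rng.lognormal(-1.0, 0.6, 100)   # placeholder metric values, X-ray sample
c_esz = rng.lognormal(-1.2, 0.6, 164)    # placeholder metric values, ESZ sample

stat, p_value = ks_2samp(c_xray, c_esz)  # two-sample Kolmogorov-Smirnov test

def cc_fraction(metric, cut, n_boot=10000):
    """Fraction of clusters above the break value, with bootstrap error."""
    boot = [np.mean(rng.choice(metric, metric.size) > cut)
            for _ in range(n_boot)]
    return np.mean(metric > cut), np.std(boot)

frac_x, err_x = cc_fraction(c_xray, 0.4)   # e.g., C_SB > 0.4 defines a cool core
frac_sz, err_sz = cc_fraction(c_esz, 0.4)
```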
Defining a cool-core cluster as one with a concentration parameter in the 0.15–1.0 r_500 range of C_SB > 0.4, the fraction of cool cores in the X-ray sample is 44±7%, whereas in the SZ sample the fraction is 28±4%. The uncertainties on the fraction of cool-core clusters were computed using a bootstrap re-sampling method, including Poisson statistics on the number of clusters satisfying the cool-core criterion: a metric value greater than the break value used to segregate clusters into CC and NCC. With a break value of 0.075 <cit.> for the concentration parameter in the 40–400 kpc range (Figure <ref>, top right panel), we have a CC fraction of 61±8% in the X-ray sample and 36±5% in the SZ sample. The high fraction of CC clusters in the X-ray selected sample compared to that in the SZ sample agrees quite well with the recent results presented by <cit.> (59±5% in their X-ray sample vs. 29±4% in their SZ sample). With a break value of 0.5 <cit.> for the cuspiness of the gas density profile (Figure <ref>, bottom left panel), we have a CC fraction of 64±8% in the X-ray sample and 38±5% in the SZ sample. <cit.> used a value of 0.8, more appropriate for moderate to strong CC clusters. They also used a break value of 0.5 for the concentration parameter in the 0.15–1.0 r_500 range, which we chose to be 0.4 to also include weak CC clusters. Finally, using a break value of 1.5 × 10^-2 cm^-3 <cit.> for the central gas density to distinguish cool and non-cool core clusters (Figure <ref>, bottom right panel), we find that 53±7% of the clusters in the X-ray sample have cool cores, whereas the SZ sample shows a fraction of 39±5%. The fractions of CC clusters and the K-S test results are listed in Table <ref>. We also include in Table <ref> a systematic uncertainty on the fraction of CC clusters, obtained by varying the break value by ±10%. The magnitude of this systematic uncertainty is comparable to the statistical uncertainty. Using all four comparisons of the X-ray and SZ cluster samples, we find that CC clusters are significantly more common in X-ray selected cluster samples than in SZ selected samples. Figure <ref> shows the correlations between all metrics. Visually, we observe a strong correlation between all metrics, which we verified numerically using a Spearman test. This provides a correlation coefficient ranging between 0 (no correlation) and (-)+1, in the case of perfect (anti)correlation. The results for our metrics are listed in Table <ref>. Using a set of cosmological hydrodynamic simulations of galaxy clusters, <cit.> found that 38% (11/29) of their simulated clusters at z = 0 are classified as CC using the central entropy <cit.> and the pseudo-entropy ratio <cit.> as metrics. This result agrees very well with our observed fraction of CC clusters in the SZ sample (28–39%, according to the metric used), suggesting that the fraction of CC clusters in SZ samples is representative of the fraction of CC clusters in the universe. <cit.> showed that constraints on the fraction of CC clusters in SZ selected datasets are only subject to a systematic bias of order one percent, a significant reduction compared to X-ray selected samples, supporting the view that SZ selected samples of galaxy clusters are robust cosmological probes.

§.§ Numerical Simulations

In this section, we apply the four metrics used in this paper to the set of numerical simulations of galaxy clusters from <cit.>.
We obtain the following results: a) using the concentration parameter in the 0.15–1.0 r_500 range, the fraction of CC clusters is 33±11%; b) using the concentration parameter in the 40–400 kpc range, the fraction of CC clusters is 26±10%; c) using the cuspiness of the gas density, the fraction of CC clusters is 38±11%; and d) using the central gas density, the fraction of CC clusters is 48±13%. These numbers are in good agreement with the fraction of CC clusters in our ESZ sample.

§ SELECTION EFFECTS: MALMQUIST BIAS IN X-RAY

The X-ray cluster sample used in this paper is derived from the ROSAT X-ray catalogs, which formally used the total X-ray flux as the only selection criterion. Thus, it can be affected by the Malmquist bias, leading to an over-representation of CC clusters. CC clusters tend to be more X-ray luminous at the same mass, and thus they become over-represented in a purely X-ray flux-limited survey. To estimate the fractional increase in X-ray luminosity of the CC subsample, we compare the ratio of the observed luminosity to the expected luminosity for the measured mass (using the L_X–M relation given by Equation (22) of <cit.>) for CC and NCC clusters, for all four metrics used to identify CC clusters. We find that CC clusters are on average ∼1.6–1.8 times more X-ray luminous at the same mass (Table <ref>) than are NCC clusters. These results are consistent with early studies based on Einstein imaging data (central excesses over the β-model; <cit.>) and with the normalizations of the L_X–T relations for the CC and NCC populations <cit.>. The impact of this difference in the total X-ray luminosity on the fractions of CC and NCC clusters is substantial. In a low-z flux-limited survey, the search volume is ∝ L^{3/2}, so a subpopulation which is intrinsically more luminous by a factor of ∼1.7 becomes over-represented by a factor of 2.2 above a fixed mass threshold. A similar bias is still present if we consider clusters in a narrow redshift range, where there is no difference in the search volume. In this case, a flux limit is equivalent to an X-ray luminosity threshold. CC clusters are less massive than NCC clusters at a fixed L_X; since the mass function rises toward lower masses, they are correspondingly more numerous than NCC clusters. We can quantify the selection effects of CC clusters in X-ray surveys. For simplicity, let us approximate the cluster mass function locally as a power law,

N(>M) ∝ M^{-γ},

and assume a power-law M–L_X relation,

L_X ∝ M^β.

Let l be the ratio of the average luminosities (0.5–2.0 keV band, in the 0–r_500 range) at a fixed mass for the CC and NCC populations,

l ≡ ⟨L_CC⟩ / ⟨L_NCC⟩;

then we would expect the ratio of the number of CC to NCC clusters to be

Δ ≡ l^{γ/β}.

We computed the ratio of the average luminosities at a fixed mass for the CC and NCC populations, l, to be in the range 1.6–1.8 (Table <ref>). From <cit.>, β = 1.61 ± 0.14. To compute the slope of the halo mass function, we averaged the slope of the mass function at the location of each cluster mass in our X-ray sample, using the mass function provided by <cit.>. We obtained γ = 2.54 ± 0.79. With these numbers in hand, we estimate that CC clusters are over-represented in our X-ray sample by a factor of Δ = 2.1–2.7 (depending on the metric) because of the Malmquist bias.
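The arithmetic behind this estimate is a one-liner; the sketch below reproduces the bracketing values (the quoted upper value of 2.7 presumably follows from the unrounded per-metric luminosity ratios):

```python
beta, gamma_ = 1.61, 2.54        # L_X ~ M^beta slope and local mass-function slope
for l in (1.6, 1.8):             # <L_CC>/<L_NCC> at fixed mass (range quoted above)
    print(f"l = {l}: Delta = l^(gamma/beta) = {l ** (gamma_ / beta):.2f}")
# -> l = 1.6 gives Delta = 2.10; l = 1.8 gives Delta = 2.53
```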
§ CONCLUSIONS

Using Chandra observations, we derived and compared the fraction of cool-core clusters in the Planck Early Sunyaev-Zel'dovich (ESZ) sample of 164 detected clusters with z ≤ 0.35 and in a flux-limited X-ray sample of 100 clusters with z ≤ 0.30. We use four metrics to identify the presence of a cool core: 1) the concentration parameter in the 0.15–1.0 r_500 range: the ratio of the integrated surface brightness within 0.15 r_500 to that within r_500; 2) the concentration parameter in the 40–400 kpc range: the ratio of the integrated surface brightness within 40 kpc to that within 400 kpc; 3) the cuspiness of the gas density profile: the negative of the logarithmic derivative of the gas density with respect to the radius, measured at 0.04 r_500; and 4) the central gas density, measured at 0.01 r_500. We find that:

* In all four metrics that we used, the sample of X-ray selected clusters contains a significantly larger fraction of cool-core clusters than the sample of SZ selected clusters (44±7% vs. 28±4% using the concentration parameter in the 0.15–1.0 r_500 range as a metric for cool-cores, 61±8% vs. 36±5% using the concentration parameter in the 40–400 kpc range, 64±8% vs. 38±5% using the cuspiness, and 53±7% vs. 39±5% using the central density). Our results for the concentration parameter in the 40–400 kpc range agree well with the recent results by <cit.>.

* Qualitatively, cool-core clusters are more X-ray luminous at fixed mass. Hence, our X-ray flux-limited sample, compared to the approximately mass-limited SZ sample, is over-represented with cool-core clusters. We describe a simple quantitative model that successfully predicts the observed difference based on the selection bias. Our model predicts an over-representation of CC clusters in our X-ray selected sample relative to SZ samples by a factor of 2.1–2.7, depending on the metric used to identify CC clusters, with a typical uncertainty of ∼ 0.8, which is consistent within the uncertainties with the observed values in the range 1.4–1.7 with a typical uncertainty of ∼ 0.3.

* The results of the four metrics we used to measure the over-representation of CC clusters in X-ray samples compared to that in SZ samples are all consistent within their uncertainties.

* While differences in X-ray and SZ cluster selection are significant, they can be quantitatively explained by the effect of cool cores on X-ray luminosities. CC clusters are more X-ray luminous than NCC clusters at fixed cluster mass; thus, an X-ray flux-limited sample will select a larger fraction of CC clusters than an SZ selected cluster sample.

The determination of cosmological parameters from an X-ray flux-limited sample in the local Universe can be summarized by determining confidence levels in the highly degenerate Ω_M–σ_8 plane. If cluster masses are determined using a proxy other than the X-ray luminosity (e.g., gas mass, the M–Y_X scaling relation, T_X, weak lensing, or hydrostatic mass), the Malmquist bias does not propagate into the cosmological parameters: when the mass function is constructed, the CC clusters that entered the selection only because of their boosted luminosity are excluded, since their masses fall below the mass limit at their redshifts. If, on the other hand, cluster masses are determined from the L_X–M scaling relation, the masses of the CC clusters will be biased high, and their inclusion in the mass function will shift Ω_M and σ_8 towards higher values.

F.A-S. acknowledges support from Chandra grant GO3-14131X. C.J., W.R.F., and A.V. are supported by the Smithsonian Institution. R.J.W. is supported by a Clay Fellowship awarded by the Harvard-Smithsonian Center for Astrophysics. Basic research in radio astronomy at the Naval Research Laboratory by S.G. and T.E.C. is supported by 6.1 base funding.
S.B. acknowledges financial support from the PRIN-MIUR 201278X4FL grant. M.A., G.W.P., and J.D. acknowledge support from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n° 340519. L.L. acknowledges support from the Chandra X-ray Observatory grant GO3-14130B and by the Chandra X-ray Center through NASA contract NAS8-03060. We thank the anonymous referee for their useful comments.

References

Allen, S. W., & Fabian, A. C. 1998, , 297, L57
Allen, S. W., Evrard, A. E., & Mantz, A. B. 2011, , 49, 409
Anders, E., & Grevesse, N. 1989, , 53, 197
Andrade-Santos, F., Lima Neto, G. B., & Laganá, T. F. 2012, , 746, 139
Andrade-Santos, F., Nulsen, P. E. J., Kraft, R. P., et al. 2013, , 766, 107
Andrade-Santos, F., Jones, C., Forman, W. R., et al. 2015, , 803, 108
Andrade-Santos, F., Bogdán, Á., Romani, R. W., et al. 2016, , 826, 91
Andrade-Santos, F., Jones, C., Forman, W. R., Lovisari, L., & Chandra-Planck Collaboration 2017, American Astronomical Society Meeting Abstracts, 229, 404.04
Andrade-Santos, F., Jones, C., Forman, W. R., et al. 2017, , in preparation
Angulo, R. E., Springel, V., White, S. D. M., et al. 2012, , 426, 2046
Benson, B. A., de Haan, T., Dudley, J. P., et al. 2013, , 763, 147
Böhringer, H., Schuecker, P., Pratt, G. W., et al. 2007, , 469, 363
Böhringer, H., Pratt, G. W., Arnaud, M., et al. 2010, , 514, A32
Buote, D. A., & Tsai, J. C. 1996, , 458, 27
Cavagnolo, K. W., Donahue, M., Voit, G. M., & Sun, M. 2009, , 182, 12
Cavaliere, A., & Fusco-Femiano, R. 1976, , 49, 137
David, L. P., Nulsen, P. E. J., McNamara, B. R., et al. 2001, , 557, 546
Ebeling, H., Edge, A. C., & Henry, J. P. 2001, , 553, 668
Eckert, D., Molendi, S., & Paltani, S. 2011, , 526, A79
Fabian, A. C., & Nulsen, P. E. J. 1977, , 180, 479
Fabian, A. C., Nulsen, P. E. J., & Canizares, C. R. 1984, , 310, 733
Fabian, A. C. 1994, , 32, 277
Fabian, A. C. 2012, , 50, 455
Forman, W., & Jones, C. 1982, , 20, 547
Hudson, D. S., Mittal, R., Reiprich, T. H., et al. 2010, , 513, A37
Jeltema, T. E., Canizares, C. R., Bautz, M. W., & Buote, D. A. 2005, , 624, 606
Jones, C., Mandel, E., Schwarz, J., et al. 1979, , 234, L21
Jones, C., & Forman, W. 1984, , 276, 38
Jones, C., & Forman, W. 1999, , 511, 65
Jones, C., Santos, F. A., Forman, W. R., et al. 2016, American Astronomical Society Meeting Abstracts, 228, 110.04
Kravtsov, A. V., & Borgani, S. 2012, , 50, 353
Laganá, T. F., Andrade-Santos, F., & Lima Neto, G. B. 2010, , 511, A15
Leccardi, A., Rossetti, M., & Molendi, S. 2010, , 510, A82
Lin, H. W., McDonald, M., Benson, B., & Miller, E. 2015, , 802, 34
Mantz, A., Allen, S. W., Rapetti, D., & Ebeling, H. 2010, , 406, 1759
Maughan, B. J., Giles, P. A., Randall, S. W., Jones, C., & Forman, W. R. 2012, , 421, 1583
Mazzotta, P., Rasia, E., Moscardini, L., & Tormen, G. 2004, , 354, 10
McDonald, M., Benson, B. A., Vikhlinin, A., et al. 2013, , 774, 23
Mohr, J. J., Evrard, A. E., Fabricant, D. G., & Geller, M. J. 1995, , 447, 8
Molendi, S., & Pizzolato, F. 2001, , 560, 194
Owen, F. N., O'Dea, C. P., Inoue, M., & Eilek, J. A. 1985, , 294, L85
Peterson, J. R., Kahn, S. M., Paerels, F. B. S., et al. 2003, , 590, 207
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2011, , 536, A8
Planck Collaboration, Aghanim, N., Arnaud, M., et al. 2011, , 536, A9
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2013, , 550, A130
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2014, , 571, A20
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, , 594, A24
Piffaretti, R., Jetzer, P., & Schindler, S. 2003, , 398, 41
Rasia, E., Borgani, S., Murante, G., et al. 2015, , 813, L17
Reiprich, T. H., & Böhringer, H. 2002, , 567, 716
Rossetti, M., Gastaldello, F., Ferioli, G., et al. 2016, , 457, 4515
Rossetti, M., Gastaldello, F., Eckert, D., et al. 2017, arXiv:1702.06961
Santos, J. S., Rosati, P., Tozzi, P., et al. 2008, , 483, 35
Sommer, M. W., & Basu, K. 2014, , 437, 2163
Vikhlinin, A., Markevitch, M., Murray, S. S., et al. 2005, , 628, 655
Vikhlinin, A., Kravtsov, A., Forman, W., et al. 2006, , 640, 691
Vikhlinin, A., Burenin, R., Forman, W. R., et al. 2007, Heating versus Cooling in Galaxies and Clusters of Galaxies, 48
Vikhlinin, A., Burenin, R. A., Ebeling, H., et al. 2009a, , 692, 1033
Vikhlinin, A., Kravtsov, A. V., Burenin, R. A., et al. 2009b, , 692, 1060
Voevodkin, A., & Vikhlinin, A. 2004, , 601, 610
Warren, M. S., Abazajian, K., Holz, D. E., & Teodoro, L. 2006, , 646, 881
Wen, Z. L., & Han, J. L. 2013, , 436, 275

We present in Table <ref> the values of the metrics for all clusters in the ESZ sample, including the secondary subclusters (on the lines following the primary subcluster, indicated by –).
Columns list the cluster name (the prefix PLCKESZ is omitted for simplicity), RA, DEC, redshift, concentration parameter in the 0.15–1.0 r_500 range (C_SB), concentration parameter in the 40–400 kpc range (C_SB4), cuspiness of the gas density profile (δ), central gas density (n_core, in cm^-3), and the maximum radius where the emission integral is computed (r_max, in units of r_500). Each metric value is followed by its uncertainty.

Table: Concentration parameter, cuspiness, and central density for the Planck ESZ sample.

Cluster  RA  DEC  z  C_SB  σ_C_SB  C_SB4  σ_C_SB4  δ  σ_δ  n_core (cm^-3)  σ_n_core (cm^-3)  r_max (r_500)
G000.44-41.83  21:04:18.603  -41:20:39.36  0.165  0.275  0.008  0.0585  0.0044  0.611  0.084  0.01482  0.00233  1.68
G002.74-56.18  22:18:39.822  -38:53:58.47  0.141  0.393  0.007  0.0604  0.0030  0.254  0.047  0.01423  0.00371  1.55
G003.90-59.41  22:34:27.334  -37:44:07.88  0.151  0.391  0.007  0.0335  0.0011  0.135  0.029  0.00729  0.00113  1.37
[... per-cluster entries for the remaining ESZ clusters and their secondary subclusters (indicated by –), through G349.46-59.94 ...]
We present in Table <ref> the values of the metrics for all clusters in the X-ray sample, including the secondary subclusters (on the lines following the primary subcluster, indicated by –). Columns list the cluster name, Planck name (the prefix PLCKESZ is omitted for simplicity; a dagger marks names where the prefix PSZ1 is omitted), RA, DEC, redshift, concentration parameter in the 0.15–1.0 r_500 range, concentration parameter in the 40–400 kpc range, cuspiness of the gas density profile, central gas density, and the maximum radius where the emission integral is computed. Each metric value is followed by its uncertainty.

Table: Concentration parameter, cuspiness, and central density for the X-ray sample.

Cluster  Planck name  RA  DEC  z  C_SB  σ_C_SB  C_SB4  σ_C_SB4  δ  σ_δ  n_core (cm^-3)  σ_n_core (cm^-3)  r_max (r_500)
2A0335  G176.28-35.05  03:38:40.698  +09:58:03.07  0.035  0.718  0.004  0.3876  0.0047  1.242  0.017  0.05916  0.00104  1.09
A85  G115.16-72.09  00:41:50.390  -09:18:09.53  0.056  0.455  0.004  0.1535  0.0030  1.012  0.040  0.04534  0.00113  1.40
A119  G125.58-64.14  00:56:16.210  -01:14:58.98  0.044  0.164  0.004  0.0229  0.0007  0.221  0.052  0.00258  0.00027  1.58
[... per-cluster entries for the remaining X-ray sample clusters, through ZwCl1742 ...]

| http://arxiv.org/abs/1703.08690v2 | {
"authors": [
"Felipe Andrade-Santos",
"Christine Jones",
"William R. Forman",
"Lorenzo Lovisari",
"Alexey Vikhlinin",
"Reinout J. van Weeren",
"Stephen S. Murray",
"Monique Arnaud",
"Gabriel W. Pratt",
"Jessica Démoclès",
"Ralph Kraft",
"Pasquale Mazzotta",
"Hans Böhringer",
"Gayoung Chon",
"Simona Giacintucci",
"Tracy E. Clarke",
"Stefano Borgani",
"Laurence P. David",
"Marian Douspis",
"Etienne Pointecouteau",
"Håkon Dahle",
"Shea Brown",
"Nabila Aghanim",
"Elena Rasia"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20170325131528",
"title": "The fraction of cool-core clusters in X-ray vs. SZ samples using Chandra observations"
} |
Derivation of Semiclassical Kinetic Theory in the Presence of Non-Abelian Berry Curvature

Eldad Bettelheim
Racah Inst. of Physics, Hebrew University of Jerusalem, Edmund J. Safra Campus, Jerusalem, 91904, Israel

In quantum mechanics it is often required to describe in a semiclassical approximation the motion of particles moving within a given energy band. Such a representation leads to the appearance of analogues of fictitious forces in the semiclassical equations of motion, associated with the Berry curvature. The purpose of this paper is to derive systematically the kinetic Boltzmann equations displaying these effects in the case that the band is degenerate, and as such the Berry curvature is non-Abelian. We use the formalism of phase-space quantum mechanics to derive the results.

§ INTRODUCTION

The semiclassical motion of a particle inside a crystal displays anomalous terms in the presence of external electromagnetic fields (see, e.g., Ref. <cit.>). To understand the nature of such effects we first recall that the eigenstates of the Hamiltonian have a Bloch form. Namely, an eigenstate in band n having Bloch momentum p is written as χ^n_p(x)e^{(i/ħ)p·x}, where χ^n_p is periodic in any period of the lattice (here n is an index, not an exponent). For various reasons, including the development of a semiclassical theory, it is often advantageous to use rather the following basis of states

φ_{n,p}(x) = χ^n_{p=0}(x) e^{(i/ħ)p·x},

in which the Bloch eigenfunctions with zero Bloch momentum are used to span Bloch functions with non-zero momentum. The advantage of this basis is that the momentum dependence appears only in the plane-wave factor, while the periodic function is momentum independent. This simplifies the analysis in various settings and is convenient in developing a semiclassical theory. It should be noted that here we have used the zero-momentum Bloch eigenfunctions, but nothing substantial changes in the sequel if a different point is chosen, this point usually being chosen in practice as a point of higher symmetry in momentum space. One may now write the Hamiltonian in the basis φ_{n,p}, at which point one obtains a p-dependent matrix Hamiltonian, which constitutes the starting point <cit.> of the analysis underlying many well-known models of condensed matter such as the Kane model <cit.>, the Luttinger model <cit.>, models of Dirac and Weyl semimetals, etc. An eigenstate of the Hamiltonian in band m and with a given Bloch momentum p (which we denoted by χ^m_p, up to a plane-wave factor) can be written as a superposition of the functions φ_{n,p} as follows:

χ^m_p = ∑_n U^m_n(p) φ_{n,p}.

Consider now an electronic wave packet moving within the crystal, created in a given energy band m. We may create such a wave packet by taking a superposition of χ^m_p with fixed m:

ψ = ∫ α_p χ^m_p d^3p = ∑_n ∫ α_p U^m_n(p) φ_{n,p} d^3p.

As a result of the motion of the packet through the crystal and in the presence of external fields, its position and momentum change. This leads to a change of the amplitude α_p U^m_n(p) of the component φ_{n,p} in the wave packet.
The effect of the changing amplitude can be incorporated into the semiclassical equations of motion in analogy to the appearance of fictitious forces in classical mechanics. The modification of the semiclassical equations of motion allows one to understand a number of physical effects, such as the anomalous Hall effect <cit.>, the anomalous Nernst effect <cit.> and negative magnetoresistance <cit.>, to name but a few of the developments related to this problem. The corrections to the semiclassical equations of motion were derived in Refs. <cit.>. The purpose of this paper is to derive the Boltzmann kinetic equation describing such effects, especially in the case where the bands are degenerate and, as such, the Berry curvature is generically non-Abelian. In the non-Abelian case, it is not directly possible to recover the Boltzmann kinetic equation by considering the equations of motion for the averaged momentum and center of a wave packet. Thus our approach will be to obtain an equation for the density matrix directly, in contrast to the approach described in Ref. <cit.>. Our approach also has the advantage of allowing for a first-principles derivation of the phase-space volume element and of how it should be incorporated in different calculations, rather than recovering it from additional considerations <cit.>. Our approach is quite similar to that of Ref. <cit.>, with some notable differences, the most prominent being that here we derive the kinetic equation in the general non-Abelian case. The approach here can also be recast in terms of the Keldysh formalism, as employed in Refs. <cit.>. We also mention Ref. <cit.>, which makes use of a field-theory approach.

§ DERIVATION

Our starting point is a J×J matrix Hamiltonian. Such Hamiltonians are derived in the condensed matter setting by writing the full Hamiltonian in the basis of states given by φ_{n,p} and truncating the infinite-dimensional space into a smaller, J-dimensional subspace. In such a manner Kane's model or Luttinger's Hamiltonian may be derived <cit.>. The analysis of this paper is valid, however, whenever there is a matrix Hamiltonian, and as such is not restricted to the condensed matter setting. We may diagonalize the matrix Hamiltonian by finding J eigenfunctions, U_j^k(p). Namely,

∑_{j'} H^0_{jj'}(p) U_{j'}^k(p) = U_j^k(p) ε_k(p).

Here k denotes that this is the k-th eigenfunction, and the superscript 0 on H^0 denotes that external fields are absent. In fact, here and throughout, raised indices pertain to band indices while lowered indices are vector indices in the space on which the Hamiltonian acts. We wish to project onto a subset of the bands consisting of, say, M bands. We introduce here the convention in which Greek indices denote band indices taking values from 1 to M, while Roman indices denote indices taking values from 1 to J. The J×M matrix U_j^σ(p) is a projector onto the M bands for given p. If we wish to write an operator-valued matrix acting on the Hilbert space such that it will achieve the same projection for any state, then we may define û_j^σ = U_j^σ(p̂). The matrix û can be written as:

û_j^σ = ∫ U_j^σ(p) |p⟩⟨p| d^3p/(2πħ)^3.

It is easy to show the following properties for û:

û^†û = 1,  H^0(p̂)û = ûĥ,  with ĥ^{στ} = δ^{στ}ε^σ(p̂).

We shall refer to ĥ as the `projected Hamiltonian'. The operator ûû^† is a J×J projection operator onto the M bands. The fact that it is a projection operator with rank M can be surmised from the first equation in the display above, which says that û is unitary on its image, an image which in turn has dimension M, the latter statement being trivial given the J×M dimensions of the matrix û.
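As a concrete illustration of these definitions (a minimal sketch of ours, not part of the original derivation), one can construct U(p) and verify the properties above numerically for a hypothetical 4×4 Hamiltonian with a doubly degenerate spectrum; all model parameters below are invented for the example:

```python
# Build the band-projection frame U(p) for a toy 4x4 Hamiltonian and check
# u^dag u = 1, H^0 u = u h and [u u^dag, H^0] = 0 at a fixed momentum p.
import numpy as np

J, M = 4, 2                                   # total bands J, kept bands M

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H0(p):
    """Toy Dirac-like Bloch Hamiltonian; parameters are invented."""
    px, py, pz = p
    h = px * np.kron(sx, sx) + py * np.kron(sx, sy) + pz * np.kron(sx, sz)
    return h + 0.3 * np.kron(sz, np.eye(2))   # anticommuting 'mass' term

p = np.array([0.2, -0.1, 0.4])
eps, V = np.linalg.eigh(H0(p))                # eigenvalues in ascending order
U = V[:, :M]                                  # frame of the M lowest bands
h = np.diag(eps[:M])                          # projected Hamiltonian

assert np.allclose(U.conj().T @ U, np.eye(M))     # u^dag u = 1
P = U @ U.conj().T                                # rank-M projector
assert np.allclose(P @ H0(p), H0(p) @ P)          # [u u^dag, H^0] = 0
assert np.allclose(H0(p) @ U, U @ h)              # H^0 u = u h
```

Each level of this toy model is doubly degenerate, which is precisely the situation in which the Berry curvature of the kept doublet is generically non-Abelian.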
We now wish to include external fields. The Hamiltonian may be written as

H(p̂) = H^0(p̂) − gB·S − χE·P,

where p̂ is the kinematical momentum,

[p̂_i, p̂_j] = iħ ∑_k ε_{ijk} B_k,

while S and P are matrices describing the spin and polarization of the basis states in which the Hamiltonian H is written. In the presence of external fields we define û similarly as in Eq. (<ref>):

û^†û = 1,  H(p̂)û = ûĥ.

We do not demand that ĥ be diagonal, only that it is an M×M matrix. Effectively, what has been done by finding ĥ is to block-diagonalize H into an M×M block and a (J−M)×(J−M) block, ĥ being the former block. Due to the second equation in (<ref>), the operator ûû^† commutes with the Hamiltonian:

H(p̂)ûû^† = ûĥû^† = ûû^† H(p̂),

as is to be expected from a projection operator onto an invariant space of the Hamiltonian, the invariant space in question being composed of the M bands. It is then the fact that û is a J×M matrix satisfying the properties in (<ref>) that justifies the designation of û as a projection operator onto M bands. As such, we shall use (<ref>) as the defining properties of û in the case where electromagnetic fields are present. Indeed, in the presence of a magnetic field the components of p, which denotes the kinematical momentum, are no longer good quantum numbers, such that (<ref>) is no longer valid. Instead we shall seek out a solution of Eq. (<ref>) in a semiclassical expansion, namely, a formal expansion in ħ. Before continuing to carry out this expansion, we digress to note that we shall be interested in writing down the dynamics of the density matrix, with the assumption that the density matrix acts only in the invariant subspace of M bands. This requirement may be written as:

ρ̂ = ρ̂ûû^† = ûû^†ρ̂.

The property defined by Eq. (<ref>) is invariant under time translations. Namely, it is obeyed at all times if it is obeyed at any single point in time. Indeed, due to (<ref>),

−iħ∂_t ρ̂ = −iħ(∂_t ρ̂)ûû^† = [ρ̂, H(p̂)]ûû^† = [ρ̂ûû^†, H(p̂)] = −iħ∂_t(ρ̂ûû^†).

This allows us to define the operator û^†ρ̂û as an M×M density matrix that contains all the information on the quantum state of the system at all times. This is exhibited by the following relation, which may be derived making use of (<ref>):

−iħ∂_t(û^†ρ̂û) = [û^†ρ̂û, û^†H(p̂)û] = [û^†ρ̂û, ĥ],

where we have used the following relation, which will also be useful in the sequel:

û^†H(p̂)û = ĥ.

Let us comment that we are using a single-particle formalism. This poses no loss of generality in the absence of interactions. If ultimately interactions are to be included in the form of a collision integral, the drawback of the one-particle formalism will be encountered when one wishes to analyze Berry-phase effects on the collisions themselves. If one excludes such effects from the analysis, the current formalism is sufficient. We seek now to find a semiclassical expansion of û_j^σ, assuming knowledge of the eigenstates of the Hamiltonian, U_j^σ, which are themselves defined for the problem in the strict semiclassical approximation (lowest order in ħ). Here we shall only deal with the expansion to subleading order in ħ, where the effects we wish to derive are displayed. The semiclassical expansion is facilitated by using a phase-space formulation of quantum mechanics.
We thus take the Wigner transform of û to obtain functions ũ_j^σ. The defining equations of û, Eq. (<ref>), become the following equations for the Wigner transform, ũ:

ũ^†⋆ũ = 1,  H⋆ũ = ũ⋆h,

where matrix multiplication is implied and the star denotes the usual star product. We need to solve these equations order by order in ħ. We recount the expansion of the star product:

f⋆g = fg + (iħ/2){f,g} + ….

Here the Poisson bracket is given by

{f,g} = ∇f·∇^{(p)}g − ∇^{(p)}f·∇g + (q/c) B·∇^{(p)}f × ∇^{(p)}g,

where ∇ denotes spatial derivatives while ∇^{(p)} denotes derivatives with respect to the momentum. From here on, p denotes the kinematical momentum throughout. The solution of Eq. (<ref>) to leading order in ħ is obtained by ignoring the star product and replacing it with a regular product, such that we may write

ũ_j^σ = U_j^σ(p) + O(ħ),

with U_j^σ(p) spanning an M-dimensional eigenspace of the semiclassical Hamiltonian:

∑_{j'} H_{jj'}(p) U_{j'}^σ(p) = ∑_τ U_j^τ(p) h^{τσ}(p).

Finding the functions U_j^σ(p) is a problem of diagonalizing a J-dimensional matrix for each p. The additional terms in the semiclassical equations of motion that we derive below will be written in terms of these functions. In particular, the Berry connection,

𝒜 ≡ iU^†∇^{(p)}U,

and the Berry curvature,

Ω ≡ ∇^{(p)}×𝒜 − i𝒜×𝒜,

associated with these functions will feature in the corrections to the semiclassical equations of motion. We shall need to compute the projected Hamiltonian, h, and the dynamics it dictates in the M bands to subleading order in ħ. We thus first expand ũ in powers of ħ:

ũ = U + ħδU + O(ħ^2).

From ũ^†⋆ũ = 1 (Eq. (<ref>)) we have, at order ħ,

U^†δU + δU^†U + (i/2){U^†, U} = 0.

Namely, we may choose:

U^†δU = −(q/4c) B·∇^{(p)}×𝒜.

The projected Hamiltonian is given by h = ũ^†⋆H⋆ũ, from the phase-space representation of Eq. (<ref>), such that we may now derive an ħ expansion of it by using the expansion of the star product, Eq. (<ref>), and making use of (<ref>) and (<ref>):

h = ε^{eff}(p) + qδ^{στ}Φ(x) − ħqE·𝒜 + ħ(q/c) B·∇^{(p)}ε×𝒜,

where

ε^{eff} = ε − (ħq/c) ℳ·B − g𝒮·B − χ𝒫·E,

with

ℳ ≡ (i/2) ∇^{(p)}U^† × (H−ε) ∇^{(p)}U,  𝒮 = U^†SU,  𝒫 = U^†PU.

A derivation of the evolution equation for ρ is obtained by applying the expansion of the star product to (<ref>). This computation is standard and yields:

∂_t ρ + 𝒮(∇ρ·∇^{(p)}h − ∇^{(p)}ρ·∇h + (q/c) B×∇^{(p)}ρ·∇^{(p)}h) + [ρ,h]/(iħ) = 0,

where 𝒮 denotes the symmetrization of matrix products, such that, e.g., 𝒮(AB) ≡ (AB+BA)/2. For purposes of symmetrization a commutator is considered a single matrix, hence, e.g.,

𝒮([A,B]) = [A,B],  𝒮([A,B]C) = (C[A,B] + [A,B]C)/2.

Equation (<ref>) may be understood as the collisionless kinetic (Boltzmann) equation, valid to subleading order in ħ in the presence of non-Abelian Berry curvature. Nevertheless, the formalism that we have used thus far is not gauge invariant, and in the next section we wish to correct that. This is not to say that Eq. (<ref>) is somehow incorrect, but rather that it is usually preferred to work in a formalism where gauge invariance is manifest.
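To make the objects 𝒜 and Ω concrete, here is a small numerical sketch of ours (not from the paper): it evaluates the non-Abelian Berry curvature of the degenerate lower doublet of the toy Hamiltonian H0 from the previous sketch, using gauge-invariant Wilson-loop plaquettes rather than derivatives of an explicit gauge choice; the plaquette method, step size and sign convention are our own choices:

```python
# Non-Abelian Berry curvature of the lower doublet of H0 (defined above)
# from a small Wilson-loop plaquette in the px-py plane.
import numpy as np
from scipy.linalg import logm

def doublet(p):
    _, V = np.linalg.eigh(H0(p))
    return V[:, :2]                       # 4x2 frame of the lower doublet

def link(p1, p2):
    """Unitarized 2x2 overlap between neighbouring frames."""
    W = doublet(p1).conj().T @ doublet(p2)
    u, _, vh = np.linalg.svd(W)
    return u @ vh

def omega_z(p, d=1e-3):
    """2x2 matrix Omega_z from the small px-py plaquette based at p."""
    dx, dy = np.array([d, 0, 0]), np.array([0, d, 0])
    loop = (link(p, p + dx) @ link(p + dx, p + dx + dy)
            @ link(p + dx + dy, p + dy) @ link(p + dy, p))
    return 1j * logm(loop) / d**2         # loop ~ exp(-i Omega_z d^2)

Om = omega_z(np.array([0.2, -0.1, 0.4]))
assert np.allclose(Om, Om.conj().T, atol=1e-6)   # curvature is Hermitian
```

Because the frames returned by the eigensolver carry an arbitrary U(2) gauge at each p, only gauge-covariant combinations such as this plaquette are meaningful, which is precisely why the kinetic equation is recast below in a manifestly gauge-invariant form.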
We should stress, however, that the difficulty of working with ρ̅ is that it is not an exact projection onto an invariant space of the Hamiltonian. In order to write a kinetic equation one must utilize equations (<ref>) and (<ref>), which are more naturally written for ρ rather than for ρ̅. Nevertheless, an evolution equation may be written for ρ̅ by different means; the most straightforward at this point, having already derived an equation for ρ, is to relate ρ̅ to ρ and then translate Eq. (<ref>) into an evolution equation for ρ̅. The actual calculation is rather cumbersome, but mechanical. The relation between ρ and ρ̅ is obtained by writing out the definitions of both objects, expanding in ħ, and relating the two. The result is: ρ=ρ̅-ħ q/2cB×𝒜ρ̅·𝒜+ħ𝒮( ∇ρ̅·𝒜 +q/cB×∇^(p)ρ̅·𝒜+q/2cρ̅B·∇^(p)×𝒜+ρ̅q/2cB·Ω). §.§ Collisionless Kinetics Plugging Eq. (<ref>) into (<ref>), and after the requisite calculus, the following equation, which is the gauge invariant collisionless kinetic equation we seek, is obtained: 𝒮{∂_t ρ̅+∇·(ρ̅v)+𝒟·(ρ̅F)-i/ħ[ρ̅,ε^eff]}=0. The equation is derived under the assumption that ρ̅= ρ̅_I 1+ħρ̅_T, where ρ̅_T is traceless; namely, terms involving the commutator of ρ̅ are automatically of a lower order. From here on this assumption will be made throughout. The effective energy ε^eff is defined in (<ref>), and the covariant momentum derivative, 𝒟, is defined making use of the Berry connection (Eq. (<ref>)) as follows: 𝒟 g= ∇^(p)g-i[𝒜,g]. As for the definitions of the velocity, v, and force, F, we have made use of the following notations: v=v_0+ħΩ×(qE+q/cv_0×B), v_0=𝒟ε^eff, F=qE+ q/cv×B, where the Berry curvature, Ω, is defined in Eq. (<ref>). The velocity and force in Eqs. (<ref>,<ref>) may be viewed as the next-to-leading order in ħ solution of the following equations derived in Refs. <cit.>: v=v_0+ħΩ×F, F=qE+q/cv×B. §.§ Expectation Values Let us note that ρ̅ was defined such that it does not require the introduction of a phase space volume element. Indeed, making use of (<ref>), (<ref>), the fact that Tr(ÂB̂)=∫tr(ÃB̃) d^3xd^3p/(2πħ)^3, and the expansion of the star product, one may write the expression for the trace of ρ̂ as: Trρ̂= ∫tr[ρ̃(u ⋆ u^†)]d^3xd^3 p = ∫tr[ρ̅]d^3xd^3 p. We present also the calculation of the expectation value of scalar observables in terms of ρ̅. We use the term `scalar observable' for a quantum operator, f̂, the representation of which in terms of the Wigner transform takes the form f̃(r,p)δ_ij. We define f̅^στ≡f̃δ^στ. We will need the following relation: f≡ u^†⋆f̃⋆ u= f̅+ħ∇f̅·𝒜+ħ q/cB×∇^(p)f̅·𝒜, which is derived by the standard means already employed thus far. The expectation value of f̂ is given by: ⟨f̂⟩=Tr(ρ̂f̂)=∫tr(ρ f) d^3xd^3p. Substituting into this Eq. (<ref>) and Eq. (<ref>) and integrating by parts yields simply: ⟨f̂⟩ =∫ f̅ tr(ρ̅) d^3xd^3p. §.§ Equilibrium We conclude this section by deriving the equilibrium distribution function described by ρ̂_0=f(β H), where f may be chosen, e.g., as the Fermi-Dirac or Bose-Einstein distribution, depending on the statistics of the particle described. From this distribution we may compute ρ_0=ũ^†⋆ρ̃_0⋆ũ=f(h). One may easily derive: ρ_0=f(ε)+β(h-ε)f'(ε). Computing ρ̅ by making use of (<ref>) gives: ρ̅_0=𝒱^2f(ε^eff). This result is somewhat counterintuitive since it shows that, although expectation values do not require the introduction of a phase space factor due to Eq. (<ref>), one does have to include the phase space factor 𝒱^2 when averaging with the quantum distribution (Fermi-Dirac or Bose-Einstein). In other words, within the current formalism, solving the semiclassical kinetics described by the Boltzmann equation (possibly with a collision term) leads to a distribution ρ̅_0 which includes a factor that may be interpreted as a phase space volume, such that the phase space volume need not be posited as an extra factor to be included, but rather appears automatically after solving the kinetic equation.
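A toy illustration of this point is the following hedged sketch, in which a single-band dispersion and Berry curvature are assumed (all values illustrative rather than taken from the text, and the magnetization corrections to ε^eff are omitted for brevity); the B·Ω density-of-states correction to the particle number emerges from ρ̅_0 = 𝒱^2 f(ε^eff) itself:

```python
import numpy as np

# Toy single-band 2D example of rho_bar_0 = V^2 f(eps_eff) with B along z.
hbar = q = c = 1.0
B, beta, mu, m = 0.05, 5.0, 1.0, 0.8

p = np.linspace(-4, 4, 401)
PX, PY = np.meshgrid(p, p)
eps = (PX**2 + PY**2) / 2                        # assumed band dispersion
Omega = m / (2 * (PX**2 + PY**2 + m**2) ** 1.5)  # assumed Berry curvature (z)

f = lambda e: 1 / (np.exp(beta * (e - mu)) + 1)  # Fermi-Dirac
V = 1 - hbar * q / (2 * c) * B * Omega           # phase space factor
dp2 = (p[1] - p[0]) ** 2

N_flat = np.sum(f(eps)) * dp2                    # no phase-space factor
N_eq = np.sum(V**2 * f(eps)) * dp2               # from rho_bar_0 = V^2 f
print(N_flat, N_eq, N_eq - N_flat)               # O(B*Omega) shift appears
```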
§ COLLISION INTEGRAL We wish now to demonstrate how the effect of collisions enters the Boltzmann equation. We derive the collision integral in the case where the collisions are with the disorder potential, assuming that the disorder potential is smooth enough that it may be treated within the semiclassical approach. Namely, the entire effect of collisions with disorder can be incorporated by assuming a disordered electric field in the semiclassical Boltzmann equation that was already derived, Eq. (<ref>). The procedure we implement in this section, then, is to show that averaging over disorder allows us to represent the effect of collisions as a collision integral. Our starting point is then Eq. (<ref>). We write it as: e^-ℒ_0t∂_t e^ℒ_0tρ̅=ℒ_Vρ̅, where the differential operators ℒ_0 and ℒ_V are defined through their action on any function f as follows: ℒ_0 f= ∇·(vf)+𝒟·(Ff), ℒ_Vf=∇V·𝒟 f+ħ q/c𝒟·((Ω×∇V)×B f)+ħΩ×∇V·∇f. The evolution can be written using objects defined in the interaction picture (designated here by the superscript (I)): ∂_t ρ̅^(I)(t)=ℒ^(I)_V(t)ρ̅^(I)(t), where ρ̅^(I)(t) and ℒ^(I)_V(t) are defined by: ρ̅^(I)(t)≡ e^tℒ_0ρ̅(t), ℒ^(I)_V(t)≡ e^tℒ_0ℒ_V e^-tℒ_0. The evolution equation for ρ̅^(I), Eq. (<ref>), is solved in perturbation theory to first order as follows: ρ̅^(I)(t)=ρ̅(0)+ ∫_0^t dt'ℒ^(I)_V(t') ρ̅(0); hitting (<ref>) with e^-ℒ_0t and combining with (<ref>) leads to ∂_t ρ̅+ℒ_0ρ̅=ℒ_Ve^-tℒ_0ρ̅(0)+ℒ_V∫_-t^0 dt'ℒ^(I)_V(t')e^-tℒ_0ρ̅(0). Motivated by the assumption of self-averaging, we now wish to consider averaging Eq. (<ref>) over the disorder potential V. Without loss of generality, one may assume that the average electric field produced by the disorder potential vanishes, and as a result the term ℒ_Ve^-tℒ_0ρ̅(0) in Eq. (<ref>) vanishes after averaging, which leaves the second term on the right hand side as the collision integral, I_coll: I_coll≡⟨ℒ_V∫_-t^0 dt'ℒ^(I)_V(t')e^-tℒ_0ρ̅(0)⟩, where the angled brackets denote disorder averaging. To compute ℒ^(I)_V(t), which features in this equation, we write down the following differential equation for it: ∂_t ℒ^(I)_V(t)=[ℒ_0,ℒ^(I)_V(t)], with initial condition ℒ^(I)_V(0)=ℒ_V, where the latter is given in Eq. (<ref>). We now derive an expression for the collision integral in leading order in ħ. We further assume that the momentum change of the particle due to the electric field during the collision time is negligible. To this approximation, a solution for ℒ^(I)_V(t) of Eq. (<ref>) and an expression for e^-tℒ_0ρ̅(0) can be written as follows: ℒ^(I)_V(t)=∇V(x+v_0t)·∇^(p), e^-tℒ_0ρ̅(0) =ρ̅(x-v_0t,p,0). The collision integral in this approximation is designated as I^(0)_coll. It takes the form: I^(0)_coll≃⟨∇V(x)·∇^(p)∫_-t^0 dt'∇V(x-v_0t')·∇^(p)ρ̅(x-v_0t,p,0)⟩, where we have assumed, as usual, that the density matrix is diagonal in the leading order in ħ. To bring this expression into a more familiar form, we write it in terms of the Fourier transform of V.
We further assume that the density is constant within a region of size comparable to the distance a particle travels within the collision time (this allows us to replace ρ̅(x-vt,p,0) by ρ̅(x,p,0)). We implement the disorder average by a simplified procedure whereby it is assumed that any two realizations of the disorder are related by a translation. The disorder ensemble is then modelled as a uniform measure over these translations. This ensemble is sufficient to obtain the result; a more realistic model of disorder does not affect the derivation beyond adding complexity to the formalism, so we forgo such more realistic models for the sake of notational brevity. We thus introduce a translation vector, R, the integral over which signifies averaging over the disorder. This, together with the Fourier transform of the potential, V_q, leads to the following expression for the collision integral: I_coll=∫dRdqdq'dt'/(2π)^6 V_-q'V_q e^i/ħ[q·(x-v(t-t'))-q'·x+(q-q')·R] q'·∇^(p) q·∇^(p)ρ̅(x,p,0). We may now perform the integral with respect to R, which forces q' to be equal to q. In addition, the semiclassical limit requires a small momentum transfer for collisions, such that one may replace derivatives with respect to the momentum with finite differences involving the transferred momentum, q. This yields the following for the collision integral: I_coll=ħ∫dq/(2π)^2 |V_q|^2/(v·q-i0^+)(ρ̅(x,p+q,0)+ρ̅(x,p-q,0)-2ρ̅(x,p,0)). Simple manipulations involving the change of integration variable from q to -q and replacing v·q by ε(p)-ε(p+q) (justified again in the limit of small momentum transfer) lead to the familiar form of the collision integral: I_coll=ħ∫dq/(2π)^2|V_q|^2δ(ε(p)-ε(p+q))(ρ̅(x,p+q,0)-ρ̅(x,p,0)). Various effects can be recovered by lifting some of the assumptions made in the derivation. For example, we may consider ħ corrections in the presence of a constant electric field but in the absence of a magnetic field. Coming back to the differential equation for ℒ^(I)_V(t), Eq. (<ref>), we may write in the current approximation: ∂_t ℒ^(I)_V(t)=[(v_0+ħ qΩ×E)·∇,ℒ^(I)_V(t)]. The solution of this equation to leading and subleading order in ħ is given by: ℒ^(I)_V(t)=∇V(x-v_0t)·𝒟 +qħ t((Ω×E·∇)∇V(x-v_0t))·𝒟, where we have also neglected any terms in ℒ^(I)_V(t) that are proportional to the spatial derivative operator, ∇, since they will not be important in the following once we let ℒ^(I)_V(t) act on the density matrix, which we assume does not depend strongly on position within the collision distance. The terms proportional to ħ may now be collected to yield I^(1)_coll, the correction to I^(0)_coll in the current setting: I^(1)_coll =ħ/(2π)^2∫_-t^0 dq dt' t' q(Ω×E·q)|V_q|^2 e^-i/ħ q·v t' q·∇^(p) q·∇^(p)ρ̅(x,p,0). Following the same steps leading to Eq. (<ref>) now gives: I^(1)_coll=ħ^3∫ dq q(Ω×E·q)|V_q|^2δ'(ε(p)-ε(p+q))(ρ̅(x,p+q,0)-ρ̅(x,p,0)). This correction to the collision integral is related to side jumps, a subject that was discussed on several occasions in the past; see, e.g., Refs. <cit.>. We have neglected terms involving a commutator of the density matrix with the Berry connection. These can be readily recovered. Corrections proportional to ħ that appear when a magnetic field is turned on can likewise be recovered.
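As a numerical illustration of the golden-rule form just derived, the following hedged sketch evaluates I_coll at a single momentum p on a grid of momentum transfers q, with the energy delta function broadened into a narrow Lorentzian. The dispersion, distribution, and |V_q|^2 below are assumed model inputs, not quantities taken from the text:

```python
import numpy as np

# Hedged sketch of I_coll = hbar * int dq/(2pi)^2 |V_q|^2 delta(e_p - e_{p+q})
#                            * (rho(p+q) - rho(p))  at one fixed p.
hbar, eta = 1.0, 0.02                             # eta: delta broadening
eps = lambda px, py: (px**2 + py**2) / 2          # assumed dispersion
rho = lambda px, py: np.exp(-(px**2 + py**2))     # assumed distribution
Vq2 = lambda qx, qy: 1.0 / (qx**2 + qy**2 + 0.1)  # assumed screened |V_q|^2

delta = lambda x: (eta / np.pi) / (x**2 + eta**2) # Lorentzian-broadened delta

px, py = 1.0, 0.0
q = np.linspace(-3, 3, 301)
QX, QY = np.meshgrid(q, q)
dE = eps(px, py) - eps(px + QX, py + QY)
integrand = Vq2(QX, QY) * delta(dE) * (rho(px + QX, py + QY) - rho(px, py))
I_coll = hbar * np.sum(integrand) * (q[1] - q[0])**2 / (2 * np.pi)**2
print(I_coll)   # net in-scattering minus out-scattering at this p
```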
§ CONCLUSION In conclusion, we wish to reiterate the purpose of this paper, which is to derive a kinetic theory including non-Abelian Berry phase effects, making use only of pertinent formalisms. Indeed, the effects discussed here are a common feature of the semiclassics of theories described by matrix Hamiltonians, and as such the development of the formalism requires only quantum mechanics and a semiclassical expansion, the latter being straightforward within the phase space formulation (that is, the formulation through the Wigner transform) of quantum mechanics. We believe that a derivation of the equation in this manner allows one to better grasp how to use the formalism when more subtle points are encountered, for example, when dealing with questions related to the phase space volume factor, or the proper formulation of collision integrals. § ACKNOWLEDGEMENT I wish to thank B. Spivak, A. Andreev, M. Khodas, and P. Wiegmann for extensive discussions. I acknowledge the Israeli Science Foundation, which supported this research through grant 1466/15. | http://arxiv.org/abs/1703.08673v1 | {
"authors": [
"Eldad Bettelheim"
],
"categories": [
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.stat-mech",
"published": "20170325102338",
"title": "Derivation of Semiclassical Kinetic Theory in the Presence of Non-Abelian Berry Curvature"
} |
Transductive Zero-Shot Learning with a Self-training dictionary approach Yunlong Yu, Zhong Ji, Xi Li, Jichang Guo, Zhongfei Zhang, Haibin Ling, Fei Wu December 30, 2023 ========================================================================================== Clustering is the process of finding underlying group structures in data. Although mixture model-based clustering is firmly established in the multivariate case, there is a relative paucity of work on matrix variate distributions, and none for clustering with mixtures of skewed matrix variate distributions. Four finite mixtures of skewed matrix variate distributions are considered. Parameter estimation is carried out using an expectation-conditional maximization algorithm, and both simulated and real data are used for illustration. Keywords: Clustering; matrix variate; mixture models; skewed distributions. § INTRODUCTION Over the years, there has been increased interest in applications involving three-way (matrix variate) data. Although there are countless examples of clustering for multivariate distributions using finite mixture models, as discussed in Section <ref>, there is very little work for matrix variate distributions. Moreover, the examples in the literature deal exclusively with symmetric (non-skewed) matrix variate distributions such as the matrix variate normal and the matrix variate t distributions. There are many different areas of application for matrix variate distributions. One area is multivariate longitudinal data, where multiple variables are measured over time <cit.>. In this case, each row of a matrix would correspond to a time point and the columns would represent each of the variables. Furthermore, the two scale matrices, a defining characteristic of matrix variate distributions, allow for simultaneous modelling of the inter-variable covariances as well as the temporal covariances. A second application, considered herein, is image recognition. In this case, an image is analyzed as an n× p pixel intensity matrix. Herein, finite mixtures of four different skewed matrix variate distributions, the matrix variate skew-t, generalized hyperbolic, variance-gamma and normal inverse Gaussian (NIG) distributions, are considered. These mixture models are illustrated for both clustering (unsupervised classification) and semi-supervised classification using both simulated and real data. § BACKGROUND §.§ Model-Based Clustering and Mixture Models Clustering and classification look at finding and analyzing underlying group structures in data. One common method used for clustering is model-based, and it generally makes use of a G-component finite mixture model. A multivariate random variable X from a finite mixture model has density f(x | ϑ)=∑_g=1^G π_g f_g(x | θ_g), where ϑ=(π_1,π_2,…,π_G,θ_1,θ_2,…,θ_G), f_g(·) is the gth component density, and π_g>0 is the gth mixing proportion such that ∑_g=1^G π_g=1. <cit.> traces the association between clustering and mixture models back to <cit.>, and the earliest use of a finite mixture model for clustering can be found in <cit.>, who uses a Gaussian mixture model. Other early work in this area can be found in <cit.> and <cit.>, and a recent review of model-based clustering is given by <cit.>. Although the Gaussian mixture model is well-established for clustering, largely due to its mathematical tractability, quite some work has been done in the area of non-Gaussian mixtures.
For example, some work has been done using symmetric component densities that parameterize concentration (tail weight), e.g., the t distribution <cit.> and the power exponential distribution <cit.>. There has also been work on mixtures for discrete data <cit.>, as well as several examples of mixtures of skewed distributions such as the NIG distribution <cit.>, the skew-t distribution <cit.>, the shifted asymmetric Laplace distribution <cit.>, the variance-gamma distribution <cit.>, the generalized hyperbolic distribution <cit.>, and others <cit.>. Recently, there has been interest in mixtures of matrix variate distributions, e.g., <cit.> consider multivariate longitudinal data with the matrix variate normal distribution and <cit.> consider a finite mixture of matrix variate t distributions. §.§ Matrix Variate Distributions Three-way data such as multivariate longitudinal data or greyscale image data can be easily modelled using a matrix variate distribution. There are many examples of such distributions presented in the literature, the most notable being the matrix variate normal distribution. In Section <ref>, X was used in the typical way to denote a multivariate random variable and x was used to denote its realization. Hereafter, X is used to denote a realization of a random matrix 𝒳. An n× p random matrix 𝒳 follows an n× p matrix variate normal distribution with location parameter M and scale matrices Σ and Ψ of dimensions n× n and p× p, respectively, denoted by 𝒳∼𝒩_n× p(M, Σ, Ψ), if the density of 𝒳 can be written as f(X | M, Σ, Ψ)=1/((2π)^np/2|Σ|^p/2|Ψ|^n/2) exp{-1/2 tr(Σ^-1(X-M)Ψ^-1(X-M)')}. A well known property of the matrix variate normal distribution <cit.> is 𝒳∼𝒩_n× p(M,Σ,Ψ) if and only if vec(𝒳)∼𝒩_np(vec(M), Ψ⊗Σ), where 𝒩_np(·) is the multivariate normal density with dimension np, vec(·) is the vectorization operator, and ⊗ is the Kronecker product. The matrix variate normal distribution has many elegant mathematical properties that have made it popular; e.g., <cit.> uses a mixture of matrix variate normal distributions for clustering. However, there are non-normal examples such as the Wishart distribution <cit.> and the skew-normal distribution, e.g., <cit.>, <cit.>, and <cit.>. More information on matrix variate distributions can be found in <cit.>. §.§ The Generalized Inverse Gaussian Distribution The generalized inverse Gaussian distribution has two different parameterizations, both of which will be useful. A random variable Y has a generalized inverse Gaussian (GIG) distribution parameterized by a, b and λ, denoted by GIG(a,b,λ), if its probability density function can be written as f(y|a, b, λ)=((a/b)^λ/2 y^λ-1)/(2K_λ(√(ab))) exp{-(ay+b/y)/2}, where K_λ(u)=1/2∫_0^∞ y^λ-1 exp{-u/2(y+1/y)}dy is the modified Bessel function of the third kind with index λ. Expectations of some functions of a GIG random variable have a mathematically tractable form, e.g.: 𝔼(Y)=√(b/a) K_λ+1(√(ab))/K_λ(√(ab)), 𝔼(1/Y)=√(a/b) K_λ+1(√(ab))/K_λ(√(ab))-2λ/b, 𝔼(log Y)=log(√(b/a))+(1/K_λ(√(ab))) ∂/∂λ K_λ(√(ab)). Although this parameterization of the GIG distribution will be useful for parameter estimation, for the purposes of deriving the density of the matrix variate generalized hyperbolic distribution, it is more useful to take the parameterization g(y|ω,η,λ)= ((y/η)^λ-1)/(2η K_λ(ω)) exp{-ω/2(y/η+η/y)}, where ω=√(ab) and η=√(b/a). For notational clarity, we will denote the parameterization given in (<ref>) by I(ω,η,λ).
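These expectations are exactly the quantities needed in the E-step below, and they are straightforward to evaluate numerically. The following sketch (with arbitrary parameter values) computes them via scipy's modified Bessel function of the third kind, using a central difference for the derivative with respect to the index, and cross-checks against Monte Carlo draws; the matching of scipy's geninvgauss parameterization to GIG(a,b,λ) is our own derivation:

```python
import numpy as np
from scipy.special import kv
from scipy.stats import geninvgauss

# GIG expectations E(Y), E(1/Y), E(log Y) from the formulas above.
def gig_moments(a, b, lam, h=1e-5):
    s = np.sqrt(a * b)
    r = kv(lam + 1, s) / kv(lam, s)
    EY = np.sqrt(b / a) * r
    EinvY = np.sqrt(a / b) * r - 2 * lam / b
    dK_dlam = (kv(lam + h, s) - kv(lam - h, s)) / (2 * h)  # d/d(lambda)
    ElogY = np.log(np.sqrt(b / a)) + dK_dlam / kv(lam, s)
    return EY, EinvY, ElogY

a, b, lam = 2.0, 3.0, 1.5
# geninvgauss(p, b) has density prop. to x^{p-1} exp(-b(x + 1/x)/2); matching
# GIG(a, b, lam) gives p = lam, b_scipy = sqrt(a*b), scale = sqrt(b/a).
y = geninvgauss.rvs(lam, np.sqrt(a * b), scale=np.sqrt(b / a),
                    size=200_000, random_state=1)
print(gig_moments(a, b, lam))
print(y.mean(), (1 / y).mean(), np.log(y).mean())  # should agree closely
```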
§.§ Skewed Matrix Variate Distributions The work of <cit.> presents a total of four skewed matrix variate distributions: the matrix variate skew-t, generalized hyperbolic, variance-gamma and NIG distributions. Each of these distributions is derived from a matrix variate normal variance-mean mixture. In this representation, the random matrix 𝒳 has the representation 𝒳=M+WA+√(W)𝒱, where M and A are n× p matrices representing the location and skewness, respectively, 𝒱∼𝒩_n× p(0,Σ,Ψ), and W>0 is a random variable with density h(w|θ). In <cit.>, the matrix variate skew-t distribution, with ν degrees of freedom, is shown to arise as a special case of (<ref>) with W^ST∼IGamma(ν/2,ν/2), where IGamma(·) denotes the inverse gamma distribution with density f(y | a,b)=(b^a/Γ(a)) y^-a-1 exp{-b/y}. The resulting density of 𝒳 is f_MVST(X | ϑ)= [2(ν/2)^ν/2 exp{tr(Σ^-1(X-M)Ψ^-1A')}/((2π)^np/2|Σ|^p/2|Ψ|^n/2Γ(ν/2))] ((δ(X;M,Σ,Ψ)+ν)/ρ(A,Σ,Ψ))^-(ν+np)/4 × K_-(ν+np)/2(√(ρ(A,Σ,Ψ)[δ(X;M,Σ,Ψ)+ν])), where δ(X;M,Σ,Ψ)=tr(Σ^-1(X-M)Ψ^-1(X-M)'), ρ(A,Σ,Ψ)=tr(Σ^-1AΨ^-1A'), and ν>0. For notational clarity, this distribution will be denoted by MVST(M,A,Σ,Ψ,ν). In <cit.>, one of the distributions considered is a matrix variate generalized hyperbolic distribution. This again is the result of a special case of (<ref>) with W^GH∼I(ω,1,λ). This distribution will be denoted by MVGH(M,A,Σ,Ψ,λ,ω), and the density is f_MVGH(X|ϑ)= [exp{tr(Σ^-1(X-M)Ψ^-1A')}/((2π)^np/2|Σ|^p/2|Ψ|^n/2 K_λ(ω))] ((δ(X;M,Σ,Ψ)+ω)/(ρ(A,Σ,Ψ)+ω))^(λ-np/2)/2 × K_(λ-np/2)(√([ρ(A,Σ,Ψ)+ω][δ(X;M,Σ,Ψ)+ω])), where λ∈ℝ and ω>0. The matrix variate variance-gamma distribution, also derived in <cit.> and denoted MVVG(M,A,Σ,Ψ,γ), is a special case of the matrix variate normal variance-mean mixture (<ref>) with W^VG∼gamma(γ,γ), where gamma(·) denotes the gamma distribution with density f(y | a,b)=(b^a/Γ(a)) y^a-1 exp{-by}. The density of a random matrix 𝒳 with this distribution is f_MVVG(X|ϑ)= [2γ^γ exp{tr(Σ^-1(X-M)Ψ^-1A')}/((2π)^np/2|Σ|^p/2|Ψ|^n/2Γ(γ))] (δ(X;M,Σ,Ψ)/(ρ(A,Σ,Ψ)+2γ))^(γ-np/2)/2 × K_(γ-np/2)(√([ρ(A,Σ,Ψ)+2γ][δ(X;M,Σ,Ψ)])), where γ>0. Finally, the matrix variate NIG distribution arises when W^NIG∼IG(1,κ), where IG(·) denotes the inverse-Gaussian distribution with density f(y | δ,γ)=(δ/√(2π)) exp{δγ} y^-3/2 exp{-1/2(δ^2/y+γ^2y)}. The density of 𝒳 is f_MVNIG(X|ϑ) =[2 exp{tr(Σ^-1(X-M)Ψ^-1A')+κ}/((2π)^(np+1)/2|Σ|^p/2|Ψ|^n/2)] ((δ(X;M,Σ,Ψ)+1)/(ρ(A,Σ,Ψ)+κ^2))^-(1+np)/4 × K_-(1+np)/2(√([ρ(A,Σ,Ψ)+κ^2][δ(X;M,Σ,Ψ)+1])). This distribution is denoted by MVNIG(M,A,Σ,Ψ,κ).
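The variance-mean mixture representation (<ref>) also gives a direct way to simulate from each of the four distributions: draw W from the appropriate mixing distribution and then form X = M + WA + √(W)V. The following hedged sketch does so for one observation from each family; all parameter values are illustrative only, and the symbol κ for the NIG parameter is our reconstruction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, p = 3, 4
M = np.zeros((n, p)); A = np.ones((n, p))
Sigma = np.eye(n); Psi = np.eye(p)
Ls, Lp = np.linalg.cholesky(Sigma), np.linalg.cholesky(Psi)

def matnorm():
    # V ~ N_{n x p}(0, Sigma, Psi)  via  V = Sigma^{1/2} Z Psi^{1/2}
    return Ls @ rng.standard_normal((n, p)) @ Lp.T

def draw(w):
    return M + w * A + np.sqrt(w) * matnorm()

nu, lam, omega, gamma, kappa = 4.0, 2.0, 4.0, 7.0, 2.0
W = {
    "MVST":  stats.invgamma.rvs(nu / 2, scale=nu / 2, random_state=rng),
    "MVGH":  stats.geninvgauss.rvs(lam, omega, random_state=rng),  # I(omega,1,lam)
    "MVVG":  stats.gamma.rvs(gamma, scale=1 / gamma, random_state=rng),
    "MVNIG": stats.invgauss.rvs(1 / kappa, random_state=rng),      # IG(1, kappa)
}
samples = {name: draw(w) for name, w in W.items()}
for name, X in samples.items():
    print(name, X.shape)
```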
Thus, for n=p, there is a reduction from quartic to quadratic complexity in n and, for almost all values of n and p, there will be an (often substantial) reduction in the number of free scale parameters. § METHODOLOGY §.§ Likelihoods In the mixture model context, 𝒳 is assumed to come from a population with G subgroups, each distributed according to the same one of the four skewed matrix variate distributions discussed previously. Now suppose N n× p matrices X_1,X_2,…,X_N are observed; then the observed-data likelihood is L(ϑ) =∏_i=1^N∑_g=1^G π_g f(X_i | M_g,A_g,Σ_g,Ψ_g,θ_g), where θ_g are the parameters associated with the distribution of W_ig. For the purposes of parameter estimation, we proceed as if the observed data are incomplete. In particular, we introduce the missing group membership indicators z_ig, where z_ig=1 if X_i is in group g, and z_ig=0 otherwise. In addition to the missing z_ig, we also have the latent variables W_ig, and we denote their densities by h(w_ig | θ_g). The complete-data log-likelihood, in its general form for any of the distributions already discussed, is then ℓ_c(ϑ)=ℒ_1+(ℒ_2+C_2)+(ℒ_3+C_3), where C_2 and C_3 are constant with respect to the parameters, ℒ_1=∑_i=1^N∑_g=1^G z_ig logπ_g, ℒ_2=∑_i=1^N∑_g=1^G z_ig log h(w_ig | θ_g)-C_2, and ℒ_3= 1/2∑_i=1^N∑_g=1^G z_ig[ tr(Σ_g^-1(X_i-M_g)Ψ_g^-1A_g')+tr(Σ_g^-1A_gΨ_g^-1(X_i-M_g)')-1/w_ig tr(Σ_g^-1(X_i-M_g)Ψ_g^-1(X_i-M_g)')-w_ig tr(Σ_g^-1A_gΨ_g^-1A_g')-p log(|Σ_g|)-n log(|Ψ_g|)]. §.§ Parameter Estimation Parameter estimation is performed by using an expectation-conditional maximization (ECM) algorithm <cit.>. 1) Initialization: Initialize the parameters Σ_g, Ψ_g, M_g, A_g and the other parameters related to the distribution. Set t=0. 2) E Step: Update ẑ_ig, a_ig, b_ig, c_ig, where ẑ_ig^(t+1) = π_g^(t) f(X_i | ϑ^(t)_g)/∑_h=1^G π_h^(t) f(X_i | ϑ^(t)_h), a_ig^(t+1)=𝔼(W_ig|X_i,z_ig=1,ϑ_g^(t)), b_ig^(t+1) =𝔼(1/W_ig|X_i,z_ig=1,ϑ_g^(t)), c_ig^(t+1)=𝔼(log(W_ig)|X_i,z_ig=1,ϑ_g^(t)). Note that the specific updates will depend on the distribution. However, in each case, the conditional distribution of W_ig given the observed data and group memberships is a generalized inverse Gaussian distribution. Specifically, W_ig^ST | X_i, z_ig=1 ∼GIG(ρ(A_g,Σ_g,Ψ_g), δ(X_i;M_g,Σ_g,Ψ_g)+ν_g, -(ν_g+np)/2), W_ig^GH | X_i, z_ig=1 ∼GIG(ρ(A_g,Σ_g,Ψ_g)+ω_g, δ(X_i;M_g,Σ_g,Ψ_g)+ω_g, λ_g-np/2), W_ig^VG | X_i, z_ig=1 ∼GIG(ρ(A_g,Σ_g,Ψ_g)+2γ_g, δ(X_i;M_g,Σ_g,Ψ_g), γ_g-np/2), W_ig^NIG | X_i, z_ig=1 ∼GIG(ρ(A_g,Σ_g,Ψ_g)+κ_g^2, δ(X_i;M_g,Σ_g,Ψ_g)+1, -(1+np)/2). Therefore, the exact updates are obtained by using the expectations given in (<ref>)–(<ref>) for the appropriate values of λ, a, and b. 3) First CM Step: Update the parameters π_g, M_g, A_g: π̂_g^(t+1) =N_g/N, M̂_g^(t+1)=∑_i=1^N ẑ_ig^(t+1) X_i(a_g^(t+1) b^(t+1)_ig-1)/(∑_i=1^N ẑ_ig^(t+1) a_g^(t+1) b_ig^(t+1)-N_g), Â_g^(t+1) =∑_i=1^N ẑ_ig^(t+1) X_i(b_g^(t+1)-b^(t+1)_ig)/(∑_i=1^N ẑ_ig^(t+1) a_g^(t+1) b_ig^(t+1)-N_g), where N_g=∑_i=1^N ẑ_ig^(t+1), a_g^(t+1)=∑_i=1^N ẑ_ig^(t+1) a_ig^(t+1)/N_g, and b_g^(t+1)=∑_i=1^N ẑ_ig^(t+1) b_ig^(t+1)/N_g. 4) Second CM Step: Update Σ_g: Σ̂_g^(t+1) =1/(N_g p)[∑_i=1^N ẑ_ig^(t+1)(b^(t+1)_ig(X_i-M̂_g^(t+1))(Ψ̂_g^(t))^-1(X_i-M̂_g^(t+1))' - Â_g^(t+1)(Ψ̂_g^(t))^-1(X_i-M̂_g^(t+1))' - (X_i-M̂_g^(t+1))(Ψ̂_g^(t))^-1(Â_g^(t+1))' + a_ig^(t+1)Â_g^(t+1)(Ψ̂_g^(t))^-1(Â_g^(t+1))')]. 5) Third CM Step: Update Ψ_g: Ψ̂_g^(t+1) =1/(N_g n)[∑_i=1^N ẑ_ig^(t+1)(b^(t+1)_ig(X_i-M̂_g^(t+1))'(Σ̂_g^(t+1))^-1(X_i-M̂_g^(t+1)) - (Â_g^(t+1))'(Σ̂_g^(t+1))^-1(X_i-M̂_g^(t+1)) - (X_i-M̂_g^(t+1))'(Σ̂_g^(t+1))^-1Â_g^(t+1) + a_ig^(t+1)(Â_g^(t+1))'(Σ̂_g^(t+1))^-1Â_g^(t+1))]. 6) Other CM Steps: The additional parameters introduced by the distribution of W_ig are now updated.
These updates vary according to the distribution, and the particulars for the MVST, MVGH, MVVG and MVNIG distributions are given below. 7) Check Convergence: If not converged, set t=t+1 and return to step 2). §.§.§ Matrix Variate Skew-t Distribution In the case of the matrix variate skew-t distribution, the degrees of freedom ν_g need to be updated. This update cannot be obtained in closed form, and thus needs to be performed numerically. We have ℒ_2^MVST=∑_i=1^N∑_g=1^G z_ig[ν_g/2 log(ν_g/2)-log(Γ(ν_g/2))-ν_g/2(log(w_ig)+1/w_ig)]. Therefore, the update ν_g^(t+1) is obtained by solving (<ref>) for ν_g, i.e., log(ν_g/2)+1-φ(ν_g/2)-1/N_g∑_i=1^N ẑ_ig^(t+1)(b^(t+1)_ig+c^(t+1)_ig)=0, where φ(·) denotes the digamma function. §.§.§ Matrix Variate Generalized Hyperbolic Distribution In the case of the matrix variate generalized hyperbolic distribution, updates for λ_g and ω_g are needed. In this case, ℒ_2^MVGH=∑_i=1^N∑_g=1^G z_ig[log(K_λ_g(ω_g))-λ_g log w_ig-1/2ω_g(w_ig+1/w_ig)]. The updates for λ_g and ω_g cannot be obtained in closed form. However, <cit.> discuss numerical methods for these updates and, because the portion of the likelihood function that includes these parameters is the same as in the multivariate case, the updates described in <cit.> can be used directly here. The updates for λ_g and ω_g rely on the log-convexity of K_s(t) in both s and t <cit.> and maximize (<ref>) via conditional maximization. The resulting updates are λ̂_g^(t+1) =c̅_g^(t+1)λ̂_g^(t)[∂/∂ s log(K_s(ω̂_g^(t)))|_s=λ̂_g^(t)]^-1, ω̂_g^(t+1) =ω̂_g^(t)-[∂/∂ s q(λ̂_g^(t+1),s)|_s=ω̂_g^(t)][∂^2/∂ s^2 q(λ̂_g^(t+1),s)|_s=ω̂_g^(t)]^-1, where the derivative in (<ref>) is calculated numerically, q(λ_g,ω_g)=∑_i=1^N z_ig[log(K_λ_g(ω_g))-λ_g log w_ig-1/2ω_g(w_ig+1/w_ig)], and c̅_g^(t+1)=(1/N_g)∑_i=1^N ẑ^(t+1)_ig c_ig^(t+1). The partials in (<ref>) are described in <cit.> and can be written as ∂/∂ω_g q(λ_g,ω_g)=1/2[R_λ_g(ω_g)+R_-λ_g(ω_g)-(a̅_g^(t+1)+b̅_g^(t+1))] and ∂^2/∂ω_g^2 q(λ_g,ω_g)=1/2[R_λ_g(ω_g)^2-((2λ_g+1)/ω_g)R_λ_g(ω_g)-1+R_-λ_g(ω_g)^2-((1-2λ_g)/ω_g)R_-λ_g(ω_g)-1], where R_λ_g(ω_g)=K_λ_g+1(ω_g)/K_λ_g(ω_g). §.§.§ Matrix Variate Variance-Gamma Distribution In the case of the matrix variate variance-gamma distribution, ℒ_2^MVVG=∑_i=1^N∑_g=1^G z_ig[γ_g logγ_g-logΓ(γ_g)+γ_g(log w_ig-w_ig)]. The update for γ_g, as in the skew-t case, cannot be obtained in closed form. Instead, the update γ_g^(t+1) is obtained by solving (<ref>) for γ_g, where logγ_g+1-φ(γ_g)+c̅_g^(t+1)-a̅_g^(t+1)=0. §.§.§ Matrix Variate NIG Distribution In this case, κ_g needs to be updated. Note that ℒ_2^MVNIG=∑_i=1^N∑_g=1^G [z_igκ_g-(κ_g^2/2) z_ig w_ig], and, therefore, the closed form update for κ_g is κ_g^(t+1)=N_g/∑_i=1^N ẑ_ig^(t+1) a_ig^(t+1). §.§ Numerical Considerations and Convergence Criterion The main numerical problem encountered is the calculation of the Bessel function K_λ(x) and the calculation of its partial derivative with respect to λ. When x becomes large relative to λ, the Bessel function is rounded to zero. One solution is to consider the evaluation of exp{x}K_λ(x) and then make adjustments to subsequent calculations. In most of the simulations (Section 4), this helped with the evaluation of the densities and the numerical derivatives. However, for the real data application (Section 5), the issue is that |λ| becomes too large due to the dimension and the Bessel function tends to infinity. This is an indication that dimension reduction techniques will need to be considered in the future (see Section <ref>).
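The rescaling just described corresponds directly to scipy's exponentially scaled Bessel function; a minimal sketch of the idea (not tied to any particular dataset, with arbitrary λ and x) is:

```python
import numpy as np
from scipy.special import kv, kve

# For large x, kv underflows to 0, but
# log K_lambda(x) = log(exp(x) K_lambda(x)) - x  stays finite, so log-densities
# of the four distributions can be accumulated entirely on the log scale.
lam, x = 2.5, 800.0
print(kv(lam, x))                 # 0.0 -- underflow
log_K = np.log(kve(lam, x)) - x   # finite: kve(v, x) = exp(x) * kv(v, x)
print(log_K)
```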
There are several options for determining convergence of this ECM algorithm. The criterion used in the simulations in Section 4 is based on the Aitken acceleration <cit.>. The Aitken acceleration at iteration t is a^(t) = (l^(t+1)-l^(t))/(l^(t)-l^(t-1)), where l^(t) is the (observed) log-likelihood at iteration t. We then define l_∞^(t+1) = l^(t) + (1/(1-a^(t)))(l^(t+1)-l^(t)) <cit.>. The quantity l_∞ is an asymptotic estimate (i.e., an estimate of the value after many iterations) of the log-likelihood at iteration t+1. As in <cit.>, we stop our EM algorithms when l_∞^(t+1)-l^(t) < ϵ, provided this difference is positive. The main benefit of this criterion, when compared to lack of progress, is that the likelihood can sometimes “plateau” before increasing again. Accordingly, if lack of progress is used, the algorithm may terminate prematurely <cit.>. However, the criterion in (<ref>) helps overcome this problem by considering the likelihood after very many iterations, i.e., l_∞. §.§ A Note on Identifiability It is important to note that, for each of the distributions discussed herein, the estimates for Σ_g and Ψ_g are only unique up to a strictly positive constant. Therefore, to eliminate this identifiability issue, a constraint needs to be imposed on Σ_g or Ψ_g. <cit.> suggest taking the trace of Ψ_g to be equal to p; however, it is much simpler to set the first diagonal element of Ψ_g to be 1, and this is the constraint we use in the analyses herein. Discussion of identifiability would not be complete without mention of the label switching problem. This well-known problem is due to the invariance of the mixture model to relabelling of the components <cit.>. While the label switching problem is a real issue in the Bayesian paradigm <cit.>, it is of no practical concern for the work carried out herein. However, it is a theoretical identifiability issue, and we note that it can be resolved by specifying some ordering on the model parameters; e.g., simply requiring that π_1>π_2>⋯>π_G often works, and ordering on other parameters can be imposed as needed. §.§ Number of Components and Performance Evaluation In a general clustering scenario, the number of components (groups) G is not known a priori. It is, therefore, necessary to select an adequate number of components. There are two methods that are quite common in the literature. The first is the Bayesian information criterion <cit.>, which is defined as BIC=2ℓ(ϑ̂)-p log N, where ℓ(ϑ̂) is the maximized log-likelihood, N is the number of observations, and p is the number of free parameters. Another criterion common in the literature is the integrated completed likelihood <cit.>. The ICL can be approximated as ICL≈BIC+2∑_i=1^N∑_g=1^G MAP(ẑ_ig)logẑ_ig, where MAP(ẑ_ig)=1 if arg max_h=1,…,G{ẑ_ih}=g, and MAP(ẑ_ig)=0 otherwise. The ICL can be viewed as a penalized version of the BIC, where the penalty is for uncertainty in the component memberships. To evaluate clustering performance, we consider the adjusted Rand index <cit.>. The ARI compares two different partitions, i.e., two different classifications in our applications. When the predicted classification is compared with the actual classification, an ARI of 1 corresponds to perfect classification, whereas a value of 0 indicates the predicted classification is no better than randomly assigning the labels. A detailed review and discussion of the ARI is provided by <cit.>.
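For concreteness, the following hedged sketch computes the BIC, the ICL approximation above, and the ARI; the inputs ll, rho, N, and the posterior-probability matrix zhat are hypothetical placeholders rather than quantities from the text, and rho is used for the number of free parameters to avoid clashing with the column dimension p:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def bic(ll, rho, N):
    # BIC = 2 * max log-likelihood - (free parameters) * log N
    return 2 * ll - rho * np.log(N)

def icl(ll, rho, N, zhat):
    # ICL approx: BIC plus an entropy-like penalty using MAP classifications.
    hard = (zhat == zhat.max(axis=1, keepdims=True)).astype(float)  # MAP(z)
    return bic(ll, rho, N) + 2 * np.sum(hard * np.log(np.clip(zhat, 1e-300, 1)))

# The ARI is invariant to relabelling of the groups:
true = [0, 0, 1, 1, 2, 2]
pred = [1, 1, 0, 0, 2, 2]
print(adjusted_rand_score(true, pred))   # 1.0: same partition, permuted labels
```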
§.§ Semi-Supervised Classification In addition to clustering (unsupervised classification), the matrix variate mixture models introduced here can also be applied for semi-supervised classification. Suppose that N matrices are observed but that we know the labels for K of the N matrices; specifically, suppose that K of the N matrices come from one of G classes. By analogy with <cit.>, and without loss of generality, order these matrices so that it is the first K that have known labels: X_1,…,X_K,X_K+1,…,X_N. Now, we know the values of z_ig for i=1,…,K, and the observed-data likelihood is L(ϑ) =∏_i=1^K∏_g=1^G[π_g f(X_i | M_g,A_g,Σ_g,Ψ_g,θ_g)]^z_ig×∏_j=K+1^N∑_h=1^H π_h f(X_j | M_h,A_h,Σ_h,Ψ_h,θ_h), where θ_g are the parameters associated with the distribution of W_ig. In general, H≥ G; however, for the analyses herein, we make the common assumption that H=G. Parameter estimation, identifiability, etc., follow in a fashion analogous to the clustering case already described herein. Further details on semi-supervised classification in the mixture model setting are given in <cit.> and <cit.>. § SIMULATIONS §.§ Overview Two simulations are performed, where the first simulation has two groups and the second has three. The chosen parameters have no intrinsic meaning; however, they can be viewed as representations of multivariate longitudinal data, and the parameters introduced by the distribution of W_ig are meant to illustrate the flexibility in concentration. Simulation 1 considers 3× 4 data, and Simulation 2 illustrates 4× 3 data. In the first simulation, Σ_g and Ψ_g are set to Σ_1=([1 0.5 0.1; 0.5 1 0.5; 0.1 0.5 1]), Σ_2=([1 0.1 0.1; 0.1 1 0.1; 0.1 0.1 1]), Ψ_1=([1 0.5 0.5 0.5; 0.5 1 0 0; 0.5 0 1 0; 0.5 0 0 1]), Ψ_2=([1 0 0 0; 0 1 0.5 0.5; 0 0.5 1 0.2; 0 0.5 0.2 1]). For notational purposes, let Σ_g^* and Ψ_g^* be the scale matrices used in Simulation 2. We set Σ_1^*=Ψ_1, Σ_2^*=Σ_3^*=Ψ_2, Ψ_1^*=Ψ_3^*=Σ_1, and Ψ_2^*=Σ_2. For each distribution, the models are fitted for G∈{1,2,3,4}, and the BIC is used to choose the number of groups. §.§ Simulation 1 In Simulation 1, for all four distributions, we take the location and skewness matrices to be: M_1=([1 0 0 -1; 0 1 -1 0; -1 0 2 -1]), M_2=([3 4 2 4; 4 3 3 3; 3 4 2 4]), A_1=([1 -1 0 1; 1 -1 0 1; 1 -1 0 1]), A_2=([1 1 1 -1; 1 1 0.5 -1; 1 1 0 -1]). For the additional parameters, we took ν_1=4, ν_2=20 for the skew-t distribution, λ_1=λ_2=2 and ω_1=4, ω_2=2 for the generalized hyperbolic distribution, γ_1=7, γ_2=14 for the variance-gamma distribution, and κ_1=1/2, κ_2=2 for the NIG distribution. Figure <ref> in Appendix A shows a typical dataset for each distribution. For visualization, we look at the marginal columns, which we label V1, V2, V3 and V4. We see that, for all of the columns except column 4, there is a clear separation between the two groups. We also note that, for the skew-t distribution, there was a severe outlier in group 2 (due to the small degrees of freedom) that we do not show, for better visualization. The orange dotted line is the marginal location parameter for the first group, and the yellow dotted line is the marginal location for the second group. Table <ref> displays the number of groups (components) chosen and the average ARI values with the associated standard deviations. The ICL results are identical and thus are not shown here. We see that the correct number of groups is chosen, with perfect classification, for all 30 of the datasets when using the MVST, MVVG, and MVNIG mixtures. However, this is not the case with the MVGH mixture, which underperforms when compared to the other three.
However, the eight datasets for which the incorrect number of components is chosen correspond to datasets for which the two-component MVGH solution did not converge and, in a real application, alternative starting values would be pursued until convergence is achieved for the G=2 component case. In Table <ref>, we show the average amount of time per dataset to run the algorithm for G=1, 2, 3, 4. We note that these simulations were performed in parallel. §.§ Simulation 2 In Simulation 2, a three-group mixture was considered with 200 observations per group and the following location and skewness parameters: M_1=([1 -1 0; 0 0 -1; 0 1 0; -1 0 -1]), M_2=([-1 1 0; 0 0 1; 0 -1 0; 1 0 1]), M_3=([1 1 2; 1 2 0; 0 1 1; 0 1 0]), A_1=([1 -1 -1; 1 -0.5 -1; 1 0 -1; 1 0 -1]), A_2=A_3=([1 1 -1; 1 0.5 0.5; 1 0 0; 1 0 0]). The other parameters were set to ν_1=4, ν_2=8, ν_3=20 for the MVST, λ_1=4, λ_2=0, λ_3=-2 and ω_1=4, ω_2=ω_3=2 for the MVGH, γ_1=7, γ_2=9, γ_3=14 for the MVVG, and κ_1=1/2, κ_2=1, κ_3=2 for the MVNIG. Again, the marginal distributions of a typical dataset are shown in Figure <ref> in Appendix A. The dotted lines again represent the marginal locations, with orange for the first group, yellow for the second, and purple for the third. In Table <ref>, the number of groups chosen by the BIC as well as the average ARI values, and associated standard deviations, are presented. As before, the MVST, MVVG and MVNIG mixtures outperform the MVGH mixture and, once again, this is due to convergence issues. Table <ref> shows the average runtime per dataset for Simulation 2. Notice that, for the MVGH, MVVG and MVNIG mixtures, each dataset took longer on average, with the MVGH mixture having the longest runtime as well as the largest increase in runtime over Simulation 1. This is to be expected because there is an increase in the number of groups and observations; however, for the MVVG and MVNIG mixtures, the time differences between Simulations 1 and 2 are less notable. The MVST mixture actually took less time on average; however, this is because a few datasets for Simulation 1 ran to the maximum number of iterations (without converging) for the G=4 group mixture, thus increasing the runtime. § IMAGE RECOGNITION EXAMPLE We now apply the matrix variate mixture models introduced herein to image recognition with the MNIST handwriting dataset <cit.>. The original dataset consists of 60,000 training images of handwritten digits 0 to 9, which can be represented as 28× 28 pixel matrices with greyscale intensities ranging from 0 to 255. However, these dimensions resulted in an infinite calculation for the Bessel function and its derivative with respect to λ. Moreover, because two unstructured 28× 28 dimensional covariance matrices would need to be estimated, model fitting would be infeasible. We stress that this alone is an indication that dimension reduction techniques will need to be developed in the future. However, the main goal of this application is to demonstrate the discussed methods outside of the theoretical confines of the simulations. Therefore, we resized the original images to 10× 10 pixel matrices using the resize function in the EBImage package <cit.> for the R software <cit.>. However, there are problems with sparsity. Specifically, the outside columns and rows all contain values of 0 because they are outside of the main writing space.
Accordingly, there is no variation in these outer columns and rows, resulting in exactly singular updates for Σ_g and Ψ_g. To solve this problem, we replaced each value of 0 with a value between 0 and 2 (in increments of 0.1) and added 50 to the non-zero values to make sure the noise did not interfere with the true signal. Each of the matrix variate mixtures introduced herein is applied within the semi-supervised classification paradigm (Section 3.6). A total of 500 observations from digit 1 and 500 from digit 7 are sampled from the training set, and then 100 of each of these digits are considered unlabelled, i.e., 80% of the data are labelled. We performed the analysis on 30 such sets. In Table <ref>, we show aggregate classification tables for the points considered unlabelled for each of the matrix variate mixtures. In Table <ref>, we show the average ARI values and the average misclassification rates for the unlabelled points. Note that, for some of the datasets, not all four mixtures converged; therefore, the total number of observations in the tables need not be the same for all four distributions. Looking at the classification tables, it is clear that all of these matrix variate mixtures misclassify digit 1 as digit 7 more often than digit 7 as digit 1. From both the ARI and MCR results, the MVVG mixture slightly outperforms the other three mixtures. It is interesting to note that the MVGH mixture did not experience the same convergence issues as seen in the simulations. This is almost certainly because 80% of the data points have known labels here whereas, in the simulations, we used the clustering (unsupervised classification) scenario and so all of the labels were unknown. § DISCUSSION Four matrix variate mixture distributions, with component densities that parameterize skewness, have been used for model-based clustering (and its semi-supervised analogue) of three-way data. Specifically, we considered MVST, MVGH, MVVG, and MVNIG mixtures, respectively, and an ECM algorithm was used for parameter estimation in each case. Simulated and real data were used for illustration. In the first simulation, there was good separation between the two groups; in the second, we increased the number of groups, decreased the separation between the groups, and obtained similar results to the first. In both simulations, the MVGH mixture often underperformed when compared to the other three mixtures due to convergence issues. This could be resolved, for example, by restricting the index parameter λ; however, doing so would essentially eliminate the additional flexibility enjoyed by the MVGH mixture. In the real data application, the MVVG mixture outperformed the other three mixtures in terms of both average ARI and average misclassification rate, and the MVVG mixture consistently ran faster than the other three mixtures. The next step in this work is to introduce parsimony into the matrix variate mixture models introduced herein. A very simple way to introduce parsimony is to take the eigenvalue decomposition of the scale matrices to form a family of parsimonious mixture models, along similar lines to <cit.>. Another important area, though slightly more difficult, is dimension reduction. Recall, in the MNIST data application, that the original data had to be resized due to problems evaluating the Bessel function as well as feasibility issues.
One possible solution is to consider a matrix variate analogue of the mixture of factor analyzers model <cit.>, and this will be a topic of future work. Another possibility for future work is the application of these models to multivariate longitudinal data <cit.>, in which case it would be important to impose a structure on Σ_g. Finally, the unsupervised and semi-supervised classification paradigms have been investigated herein, but some future work will focus on applying these matrix variate mixtures within the fractionally-supervised classification framework <cit.>. § ACKNOWLEDGEMENTS The authors gratefully acknowledge the support of a Vanier Canada Graduate Scholarship (Gallaugher) and the Canada Research Chairs program (McNicholas).
aitken26 Aitken, A. C. 1926, `A series formula for the roots of algebraic and transcendental equations', Proceedings of the Royal Society of Edinburgh 45, 14–22.
Anderlucci15 Anderlucci, L. & Viroli, C. 2015, `Covariance pattern mixture models for the analysis of multivariate heterogeneous longitudinal data', The Annals of Applied Statistics 9(2), 777–800.
andrews11a Andrews, J. L. & McNicholas, P. D. 2011, `Extending mixtures of multivariate t-factor analyzers', Statistics and Computing 21(3), 361–373.
andrews12 Andrews, J. L. & McNicholas, P. D. 2012, `Model-based clustering, classification, and discriminant analysis via mixtures of multivariate t-distributions: The tEIGEN family', Statistics and Computing 22(5), 1021–1029.
baricz10 Baricz, A. 2010, `Turán type inequalities for some probability density functions', Studia Scientiarum Mathematicarum Hungarica 47, 175–189.
baum70 Baum, L. E., Petrie, T., Soules, G. & Weiss, N. 1970, `A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains', Annals of Mathematical Statistics 41, 164–171.
biernacki00 Biernacki, C., Celeux, G. & Govaert, G. 2000, `Assessing a mixture model for clustering with the integrated completed likelihood', IEEE Transactions on Pattern Analysis and Machine Intelligence 22(7), 719–725.
biernacki03 Biernacki, C., Celeux, G. & Govaert, G. 2003, `Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models', Computational Statistics and Data Analysis 41, 561–575.
bohning94 Böhning, D., Dietz, E., Schaub, R., Schlattmann, P. & Lindsay, B. 1994, `The distribution of the likelihood ratio for mixtures of densities from the one-parameter exponential family', Annals of the Institute of Statistical Mathematics 46, 373–388.
bouguila09 Bouguila, N. & ElGuebaly, W. 2009, `Discrete data clustering using finite mixture models', Pattern Recognition 42(1), 33–42.
browne15 Browne, R. P. & McNicholas, P. D. 2015, `A mixture of generalized hyperbolic distributions', Canadian Journal of Statistics 43(2), 176–198.
celeux95 Celeux, G. & Govaert, G. 1995, `Gaussian parsimonious clustering models', Pattern Recognition 28(5), 781–793.
celeux00 Celeux, G., Hurn, M. & Robert, C. P. 2000, `Computational and inferential difficulties with mixture posterior distributions', Journal of the American Statistical Association 95, 957–970.
chen2005 Chen, J. T. & Gupta, A. K. 2005, `Matrix variate skew normal distributions', Statistics 39(3), 247–253.
dang15 Dang, U. J., Browne, R. P. & McNicholas, P. D. 2015, `Mixtures of multivariate power exponential distributions', Biometrics 71(4), 1081–1089.
dougru16 Doğru, F. Z., Bulut, Y. M. & Arslan, O. 2016, `Finite mixtures of matrix variate t distributions', Gazi University Journal of Science 29(2), 335–341.
dominguez2007 Domínguez-Molina, J. A., González-Farías, G., Ramos-Quiroga, R. & Gupta, A. K. 2007, `A matrix variate closed skew-normal distribution with applications to stochastic frontier analysis', Communications in Statistics–Theory and Methods 36(9), 1691–1703.
elguebaly15 Elguebaly, T. & Bouguila, N. 2015, `Simultaneous high-dimensional clustering and feature selection using asymmetric Gaussian mixture models', Image and Vision Computing 34(6), 27–41.
franczak14 Franczak, B. C., Browne, R. P. & McNicholas, P. D. 2014, `Mixtures of shifted asymmetric Laplace distributions', IEEE Transactions on Pattern Analysis and Machine Intelligence 36(6), 1149–1157.
franczak15 Franczak, B. C., Tortora, C., Browne, R. P. & McNicholas, P. D. 2015, `Unsupervised learning via mixtures of skewed distributions with hypercube contours', Pattern Recognition Letters 58(1), 69–76.
gallaugher17a Gallaugher, M. P. B. & McNicholas, P. D. 2017a, `A matrix variate skew-t distribution', Stat 6(1), 160–170.
gallaugher17b Gallaugher, M. P. B. & McNicholas, P. D. 2017b, `Three skewed matrix variate distributions', arXiv preprint arXiv:1704.02531.
gallaugher17c Gallaugher, M. P. B. & McNicholas, P. D. 2017c, `On fractionally-supervised classification: Weight selection and extension to the multivariate t-distribution', arXiv preprint arXiv:1709.08258.
ghahramani97 Ghahramani, Z. & Hinton, G. E. 1997, The EM algorithm for factor analyzers, Technical Report CRG-TR-96-1, University of Toronto, Toronto, Canada.
gupta99book Gupta, A. K. & Nagar, D. K. 1999, Matrix Variate Distributions, Chapman & Hall/CRC Press, Boca Raton.
harrar08 Harrar, S. W. & Gupta, A. K. 2008, `On matrix variate skew-normal distributions', Statistics 42(2), 179–194.
hubert85 Hubert, L. & Arabie, P. 1985, `Comparing partitions', Journal of Classification 2(1), 193–218.
karlis07 Karlis, D. & Meligkotsidou, L. 2007, `Finite mixtures of multivariate Poisson distributions with application', Journal of Statistical Planning and Inference 137, 1942–1960.
karlis09 Karlis, D. & Santourian, A. 2009, `Model-based clustering with non-elliptically contoured distributions', Statistics and Computing 19(1), 73–83.
MNIST LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. 1998, `Gradient-based learning applied to document recognition', Proceedings of the IEEE 86(11), 2278–2324.
lee14 Lee, S. & McLachlan, G. J. 2014, `Finite mixtures of multivariate skew t-distributions: some recent and new results', Statistics and Computing 24(2), 181–202.
lee16 Lee, S. X. & McLachlan, G. J. 2016, `Finite mixtures of canonical fundamental skew t-distributions', Statistics and Computing 26(3), 573–589.
lin10 Lin, T.-I. 2010, `Robust mixture modeling using multivariate skew t distributions', Statistics and Computing 20(3), 343–356.
lin14 Lin, T.-I., McNicholas, P. D. & Hsiu, J. H. 2014, `Capturing patterns via parsimonious t mixture models', Statistics and Probability Letters 88, 80–87.
lindsay95 Lindsay, B. G. 1995, Mixture models: Theory, geometry and applications, in `NSF-CBMS Regional Conference Series in Probability and Statistics', Vol. 5, Hayward, California: Institute of Mathematical Statistics.
mclachlan00 McLachlan, G. & Peel, D. 2000, Finite Mixture Models, John Wiley & Sons, New York.
mcnicholas10c McNicholas, P. D. 2010, `Model-based classification using latent Gaussian mixture models', Journal of Statistical Planning and Inference 140(5), 1175–1181.
mcnicholas16a McNicholas, P. D. 2016a, Mixture Model-Based Classification, Chapman & Hall/CRC Press, Boca Raton.
mcnicholas16b McNicholas, P. D. 2016b, `Model-based clustering', Journal of Classification 33(3), 331–373.
mcnicholas10a McNicholas, P. D., Murphy, T. B., McDaid, A. F. & Frost, D. 2010, `Serial and parallel implementations of model-based clustering via parsimonious Gaussian mixture models', Computational Statistics and Data Analysis 54(3), 711–723.
smcnicholas17 McNicholas, S. M., McNicholas, P. D. & Browne, R. P. 2017, A mixture of variance-gamma factor analyzers, in Ahmed, S. E. (ed.), `Big and Complex Data Analysis: Methodologies and Applications', pp. 369–385, Springer International Publishing, Cham, Switzerland.
meng93 Meng, X.-L. & Rubin, D. B. 1993, `Maximum likelihood estimation via the ECM algorithm: a general framework', Biometrika 80, 267–278.
morris13b Morris, K. & McNicholas, P. D. 2013, `Dimension reduction for model-based clustering via mixtures of shifted asymmetric Laplace distributions', Statistics and Probability Letters 83(9), 2088–2093.
murray14b Murray, P. M., Browne, R. P. & McNicholas, P. D. 2014, `Mixtures of skew-t factor analyzers', Computational Statistics and Data Analysis 77, 326–335.
murray17 Murray, P. M., Browne, R. P. & McNicholas, P. D. 2017, `Hidden truncation hyperbolic distributions, finite mixtures thereof, and their application for clustering', Journal of Multivariate Analysis 161, 141–156.
murray14a Murray, P. M., McNicholas, P. D. & Browne, R. P. 2014, `A mixture of common skew-t factor analyzers', Stat 3(1), 68–82.
EBImage Pau, G., Fuchs, F., Sklyar, O., Boutros, M. & Huber, W. 2010, `EBImage: an R package for image processing with applications to cellular phenotypes', Bioinformatics 26(7), 979–981.
peel00 Peel, D. & McLachlan, G. J. 2000, `Robust mixture modelling using the t distribution', Statistics and Computing 10(4), 339–348.
R16 R Core Team 2016, R: A Language and Environment for Statistical Computing, Vienna, Austria: R Foundation for Statistical Computing.
render84 Redner, R. A. & Walker, H. F. 1984, `Mixture densities, maximum likelihood and the EM algorithm', SIAM Review 26, 195–239.
schwarz78 Schwarz, G. 1978, `Estimating the dimension of a model', The Annals of Statistics 6(2), 461–464.
scott71 Scott, A. J. & Symons, M. J. 1971, `Clustering methods based on likelihood ratio criteria', Biometrics 27, 387–397.
steinley04 Steinley, D. 2004, `Properties of the Hubert-Arabie adjusted Rand index', Psychological Methods 9, 386–396.
stephens00 Stephens, M. 2000, `Dealing with label switching in mixture models', Journal of the Royal Statistical Society: Series B 62, 795–809.
subedi14 Subedi, S. & McNicholas, P. D. 2014, `Variational Bayes approximations for clustering via mixtures of normal inverse Gaussian distributions', Advances in Data Analysis and Classification 8(2), 167–193.
tiedeman55 Tiedeman, D. V. 1955, On the study of types, in S. B. Sells, ed., `Symposium on Pattern Analysis', Air University, U.S.A.F. School of Aviation Medicine, Randolph Field, Texas.
viroli11 Viroli, C. 2011, `Finite mixtures of matrix normal distributions for classifying three-way data', Statistics and Computing 21(4), 511–522.
vrbik12 Vrbik, I. & McNicholas, P. D. 2012, `Analytic calculations for the EM algorithm for multivariate skew-t mixture models', Statistics and Probability Letters 82(6), 1169–1174.
vrbik14 Vrbik, I. & McNicholas, P. D. 2014, `Parsimonious skew mixture models for model-based clustering and classification', Computational Statistics and Data Analysis 71, 196–210.
vrbik15 Vrbik, I. & McNicholas, P. D. 2015, `Fractionally-supervised classification', Journal of Classification 32(3), 359–381.
Wishart Wishart, J. 1928, `The generalised product moment distribution in samples from a normal multivariate population', Biometrika 20A(1–2), 32–52.
wolfe65 Wolfe, J. H. 1965, A computer program for the maximum likelihood analysis of types, Technical Bulletin 65-15, U.S. Naval Personnel Research Activity.
§ FIGURES | http://arxiv.org/abs/1703.08882v3 | {
"authors": [
"Michael P. B. Gallaugher",
"Paul D. McNicholas"
],
"categories": [
"stat.ME",
"stat.CO"
],
"primary_category": "stat.ME",
"published": "20170326224931",
"title": "Finite Mixtures of Skewed Matrix Variate Distributions"
} |
Service Overlay Forest Embedding for Software-Defined Cloud Networks Jian-Jhih Kuo1, Shan-Hsiang Shen12, Ming-Hong Yang13, De-Nian Yang1, Ming-Jer Tsai4 and Wen-Tsuen Chen14 1Inst. of Information Science, Academia Sinica, Taipei, Taiwan 2Dept. of Computer Science & Information Engineering, National Taiwan University of Science & Technology, Taipei, Taiwan 3Dept. of Computer Science & Engineering, University of Minnesota, Minneapolis MN, USA 4Dept. of Computer Science, National Tsing Hua University, Hsinchu, Taiwan E-mail: {lajacky,sshen3,curtisyang,dnyang,chenwt}@iis.sinica.edu.tw and [email protected] December 30, 2023 ========================================================================================== Network Function Virtualization (NFV) on Software-Defined Networks (SDN) can effectively optimize the allocation of Virtual Network Functions (VNFs) and the routing of network flows simultaneously. Nevertheless, most previous studies on NFV focus on unicast service chains and thereby are not scalable enough to support a large number of destinations in multicast. On the other hand, the allocation of VNFs has not been supported in current SDN multicast routing algorithms. In this paper, therefore, we make the first attempt to tackle a new challenging problem of finding a service forest with multiple service trees, where each tree contains multiple VNFs required by each destination. Specifically, we formulate a new optimization, named Service Overlay Forest (SOF), to minimize the total cost of all allocated VNFs and all multicast trees in the forest. We design a new 3ρ_ST-approximation algorithm to solve the problem, where ρ_ST denotes the best approximation ratio of the Steiner Tree problem, and the distributed implementation of the algorithm is also presented. Simulation results on real networks for data centers manifest that the proposed algorithm outperforms the existing ones by over 25%. Moreover, the implementation of an experimental SDN with HP OpenFlow switches indicates that SOF can significantly improve the QoE of the Youtube service. § INTRODUCTION The media industry is now experiencing a major change that alters user subscription patterns and thereby inspires the architects to rethink the design <cit.>. For example, the live video streaming on Anvato <cit.> enables online video editing for content providers, ad insertion for advertisers, caching, and transcoding for heterogeneous user devices.
Google has acquired Anvato, together with the abundant functions described above, and integrated its architecture into Google Cloud to develop the next-generation Youtube.[http://www.msn.com/en-us/news/technology/google-buys-a-backbone-for-pay-tv-services/ar-BBu61eB?li=AA4Zoy&ocid=spartanntp] Therefore, it is envisaged that the next-generation Youtube will require more computation functionalities and resources in the cloud. For distributed collaborative virtual reality (VR), it is also crucial to allocate distributed computation resources for important tasks such as collision detection, geometric constraint matching, synchronization, view consistency, concurrency and interest management <cit.>.

Network Function Virtualization (NFV) has been regarded as a promising approach <cit.> that exploits Virtual Machines (VMs) to divide the required function into building blocks connected by a service chain <cit.>. A service chain passes through a set of Virtual Network Functions (VNFs) in sequence, and Netflix <cit.> has adopted AWS <cit.> to support service chains. Current commercial solutions usually assign an individual service chain to each end user for unicast <cit.>. Nevertheless, this approach is not scalable because duplicated VNFs and network traffic are involved to serve all users if they require the same content, such as live/linear content broadcast.

The global consumer research in <cit.> manifests that although unicast video on demand is becoming more and more popular, live/linear content broadcast and multicast nowadays still account for over 50% of viewing hours per week, from companies such as Sony Crackle <cit.> and Pluto TV <cit.>, because they effectively attract users through a shared social experience to instantly access the contents. However, there is currently no effective solution to support large-scale content distribution with abundant computation functionalities for content providers and end users. For scalable one-to-many communications, multicast exploits a tree to replicate packets in branching routers. Compared with unicast flows, a multicast tree can effectively reduce the bandwidth consumption in backbone networks by over 50% <cit.>, especially for multimedia traffic <cit.>. Currently, shortest-path trees are employed by Internet standards (such as PIM-SM <cit.>) because they can be efficiently constructed in a distributed manner. Nevertheless, the routing is not flexible since the path from the source to each destination needs to follow the corresponding shortest path. Recently, flexible routing for traffic engineering has become increasingly important with the emergence of Software-Defined Networks (SDNs), where centralized computation can be facilitated in an SDN controller to find the optimal routing, such as the Steiner Tree <cit.> in Graph Theory or its variations <cit.>. Thus, multicast traffic engineering has been regarded as very promising for SDNs.

Nevertheless, the above approaches and other existing multicast routing algorithms <cit.> are not designed to support NFV because the nodes (e.g., the source and destinations) that need to be connected in a tree are specified as the problem input. On the contrary, here VNFs are also required to be spanned in a tree for NFV, and the problem is more challenging since VMs also need to be selected, instead of being assigned as the problem input.
Moreover, multicast NFV is indeed more complicated when it is necessary to employ multiple multicast trees as a forest for a group of destinations, and this feature is crucial for Content Delivery Networks (CDNs) with multiple video source servers. In this case, the video source also needs to be chosen for each end user <cit.>. In this paper, therefore, we make the first attempt to explore the resource allocation problem (i.e., the VM selection, source selection, and tree routing) for a service forest involving multiple multicast trees, where the path from a source to each destination needs to traverse a sequence of demanded services (i.e., a service chain) in the tree.[In this paper, we first consider static multicast, and then clarify that the static case is already a good step forward and discuss how to adapt the proposed algorithm to the dynamic case in Sections <ref> and <ref>, respectively.] We formulate a new optimization problem for multi-tree NFV in software-defined cloud networks, named Service Overlay Forest (SOF), to minimize the total cost of the selected VMs and trees. Given the sources and the destinations with each destination requiring a chain of services, the SOF problem aims at finding an overlay forest that 1) connects each destination to a source and 2) visits the demanded services in selected VMs in sequence before reaching the destinations.

Fig. <ref> first compares a service tree and a service forest. Fig. <ref> is the input network with the cost labeled beside each node and edge to represent the VM setup cost and the link connection cost, respectively. Assume that there are two destinations 9 and 10, and their demanded service chain consists of two VNFs, f_1 and f_2, in order. A Steiner tree in Fig. <ref> spanning source node 1 and both destinations incurs a total cost of 34 if VMs 2 and 3 are employed. Note that the edge between VMs 2 and 3 is visited twice to reach destination 10, and the cost of the edge is thus included twice. More specifically, the edge costs from source 1 to VM 3 (f_1), from VM 3 to VM 2 (f_2), and from VM 2 to destinations 9 and 10 are 1, 3, and 20+1+1+1+3+1+1=28, respectively. Thus, the edge cost is 1+3+28=32, while the node cost is 1+1=2. By contrast, the cost of a service forest with two trees and four selected VMs is 14 in Fig. <ref>, which significantly reduces the cost by about 60%. This example manifests that consolidating the services in a few VMs may not always lead to the smallest cost because the edges to connect multiple destinations are also important. Therefore, multiple trees with multiple sources are promising to further reduce the cost.[In this paper, we assume that the setup cost for a source node is negligible. The source with a setup cost is further discussed in Appendix <ref>.]

In this paper, we first prove that the problem is NP-hard. To investigate the problem in depth, we reveal the thinking process of the algorithm design step by step from the single-source case to the general case, and then propose a 3ρ_ST-approximation algorithm,[Compared with the traditional Steiner Tree problem, the problem considered in this paper is more difficult due to the new SDN/NFV constraints involved. Indeed, several recent research works <cit.> on SDN multicast and NFV service chain embedding (e.g., <cit.>) have large approximation ratios (e.g., O(|D|), where |D| denotes the number of destinations, and O(|𝒞|), where |𝒞| denotes the length of the demanded service chain).
By contrast, the approximation ratio in this paper is 3ρ_ST, where ρ_ST is the best approximation ratio of the Steiner Tree problem (e.g., the current best one is 1.39), which is smaller than those of the above works. Moreover, the simulation results manifest that empirically the performance is very close to the optimal solutions obtained by the proposed Integer Programming formulation.] named Service Overlay Forest Deployment Algorithm (SOFDA) for the general case, where ρ_ST denotes the best approximation ratio of the Steiner Tree problem (e.g., the current best one is 1.39). The single-source case is more difficult than the traditional Steiner tree problem because not only do the terminal nodes (i.e., source and destinations) need to be spanned, but a set of VMs is also required to be selected and spanned to install VNFs in sequence. Also, the general case is more challenging than the single-source case, because a service tree needs to be created for each source, and the VNF conflict (i.e., a VM is allocated too many VNFs from multiple trees) tends to occur in this case. Therefore, SOFDA is designed to 1) assign multiple sources for varied trees with multiple VMs, 2) allocate the VMs for each tree to provide a service chain for each destination, and 3) find the routing of each tree to span the selected source, VMs, and destinations. Simulation on real topologies manifests that SOFDA can effectively reduce the total cost for data center networks. In addition, a distributed SOFDA is proposed to support multi-controller SDNs. Implementation on an experimental SDN for Youtube traffic also indicates that the user QoE can be significantly improved for transcoded and watermarked video streams.

The rest of this paper is organized as follows. The related works are summarized in Section <ref>. We formulate the SOF problem in Section <ref> and design the approximation algorithms in Sections <ref> and <ref>. The distributed algorithm is presented in Section <ref>. Some important issues are discussed in Section <ref>. The simulation and implementation results are presented in Section <ref>, and we conclude this paper in Section <ref>.

§ RELATED WORK

Traffic engineering for unicast service chains in SDN has drawn increasing attention recently. Lukovszki et al. <cit.> point out that the length of a service chain needs to be bounded and present an efficient online algorithm to maximize the number of deployed service chains, while the maximal number of VMs hosted on a node is also guaranteed. Xia et al. <cit.> jointly consider the optical and electrical domains and minimize the number of domain conversions in all service chains. Moreover, Kuo et al. <cit.> strike a balance between link utilization and server usage to maximize the total benefit. Nevertheless, the above studies only explore unicast routing for service chains and do not support multicast.

Multicast traffic engineering for SDN is more complicated than traditional unicast traffic engineering. Huang et al. <cit.> first incorporate flow table scalability in the design of multicast tree routing in SDN. Shen et al. <cit.> then further consider packet loss recovery in reliable multicast routing for SDN. Recently, the routing of multiple trees in SDN <cit.> has been studied to ensure that the routing respects both the link capacity and the TCAM size.
The problem is more challenging due to the above two constraints, and the best approximation ratio that can be achieved is only D (i.e., the maximum number of destinations in a tree). However, the dimension of service allocation in VMs has not been explored in the above work. Recently, special cases on a tree <cit.> with only one source and one VM have been explored. Overall, the above approaches are not designed to support a service forest with multiple VNFs and multiple trees, and the problem here is more challenging because VNF conflicts due to the overlapping of trees will occur. To the best of the authors' knowledge, this paper is the first one that explores both routing and VM selection for multiple trees to construct a forest in SDN. As explained in Section <ref>, the service forest is important for many emerging and crucial multimedia applications in CDNs that require intensive cloud computing.

§ THE SERVICE OVERLAY FOREST PROBLEM

A service overlay forest consists of a set of service overlay trees. Each service overlay tree spans one source, a set of VMs for enabled VNFs, and a subset of destinations. Any two service overlay trees do not overlap since each destination only needs to connect to a source via a service chain in a tree. In the following, we first formally define the problem. We are given:

* a network G={V=M∪U, E}, where each link e∈E is associated with a nonnegative cost c(e) denoting the connection cost of link e to forward the demand of destinations, each virtual machine (VM) v∈M is associated with a nonnegative cost c(v) denoting the setup cost of VM v to run a virtual network function (VNF), and each switch v∈U is associated with cost 0,
* a set of destinations D⊆V requesting the same demand,
* a set of sources S⊆V having the demands of destinations, and
* a chain of VNFs 𝒞=(f_1,f_2,⋯,f_|𝒞|) required to process the demand of destinations.

The Service Overlay Forest (SOF) problem is to construct a service overlay forest consisting of the service overlay trees with the roots in S, the leaves in D, and the remaining nodes in V, so that there exists a chain of VNFs from a source to each destination. A chain of VNFs is represented by a walk, which is allowed to traverse a node (i.e., a VM or a switch) multiple times. In each walk, a clone of a node and the corresponding incident links are created to foster an additional one-time pass of the node, and only one of its clones is allowed to run a VNF to avoid duplicated counting of the setup cost. For example, in the second feasible forest (colored with light gray) of Fig. <ref>, a walk from source 1 to destination 8 passes VM 2 twice without running any VNF, and there are two clones of VM 2 on the walk. For each destination t∈D, SOF needs to ensure that there exists a path with clone nodes (i.e., a walk on the original G) on which f_1,f_2,⋯,f_|𝒞| are performed in sequence from a source s∈S to t in the service overlay forest. The objective of SOF is to minimize the total setup and connection cost of the service overlay forest, where the setup and connection costs denote the total cost of the VMs and links, respectively. Note that the cost of a link in G is counted twice if the link is duplicated because its terminal nodes are cloned. In this paper, it is assumed that a VM can run at most one VNF in the network G. The scenario that requires a VM to support multiple VNFs can be simply addressed by first replicating the VM multiple times in the input graph G.
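To make the problem input concrete, the following is a minimal sketch of how a SOF instance could be represented in code; all names are illustrative assumptions and are not taken from any implementation of this paper.

```python
from dataclasses import dataclass

@dataclass
class SOFInstance:
    nodes: set          # V = M | U (VMs and switches)
    vms: set            # M; the switches are U = nodes - vms
    edges: dict         # {(u, v): c(e)}, nonnegative connection costs
    node_cost: dict     # {v: c(v)}, setup costs; 0 for every switch
    sources: set        # S, candidate sources of the demand
    destinations: set   # D, destinations requesting the same demand
    chain_len: int      # |C|, number of VNFs f_1, ..., f_|C| in the chain
```

A feasible solution is then a set of non-overlapping trees together with, for each destination, a walk in its tree on which the |𝒞| VNFs are installed in order.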
Fig. <ref> presents three examples of service overlay forests. The first service overlay forest consists of two service overlay trees, where the demand of destination 8 (or 9) is routed from source 1 (or 0) along the walk (1,2,4,10,6,8) (or (0,3,11,5,7,9)), and the demand is processed by VNFs f_1 and f_2 at VMs 4 and 6 (or 3 and 7), respectively. The total cost of the first service overlay forest is 82, where the setup cost and connection cost are 50 and 32, respectively. In the second service overlay forest (including only one tree), source 1 first routes the demand to VM 4 for VNF f_1. Subsequently, VM 4 forwards the demand to VM 7 and VM 2 for VNF f_2, respectively. Finally, the demand is forwarded towards destinations 8 and 9, respectively. The setup cost and connection cost of the second service overlay forest are 30 and 29, respectively. In the third service overlay forest (tree), the demand is first routed from source 1 to VM 3 for VNF f_1 and then toward VM 4 for VNF f_2, and finally to destinations 8 and 9, respectively. The third service overlay forest is an optimal service overlay forest with the setup cost and connection cost being 20 and 27, respectively.

§.§ Integer Programming

In the following, we present the Integer Programming (IP) formulation for SOF. Our IP formulation first identifies the service chain for each destination and then constructs the whole service forest accordingly. To find the walk of the service chain, we first assign the VMs corresponding to each VNF in the walk and then find the routing of the walk between every two consecutive VMs. More specifically, SOF includes the following binary decision variables. Let γ_d,f,u denote whether node u is assigned as the enabled VM for VNF f in the walk to destination d. Let π_d,f,u,v denote whether edge e_u,v is located in the walk connecting the enabled VM of VNF f and the enabled VM of the next VNF f_N. Note that the above walk will belong to a service tree rooted at the enabled candidate node of VNF f. Therefore, to find the service forest for f, let the binary variable τ_f,u,v represent whether edge e_u,v is located in the forest. On the other hand, the binary variable σ_f,u represents whether node u is assigned as the enabled VM of service f for the whole service forest. Notice that each destination d may desire a different VNF f on the same enabled VM u according to γ_d,f,u, but a constraint later in this section will ensure that only one VNF is allowed to be allocated to u by properly assigning σ_f,u accordingly. The objective function for SOF is as follows.

min ∑_f∈𝒞 ∑_u∈V c(u)σ_f,u + ∑_f∈𝒞 ∑_e_u,v∈E c(e_u,v)τ_f,u,v,

where the first term represents the total setup cost of all VMs, and the second term is the connection cost of the service forest. The IP formulation contains the following constraints.

1) Service Chain Constraints. The following four constraints first assign the enabled VMs for each service chain.

∑_s∈S γ_d,f_S,s = 1, ∀ d∈D, (1)
∑_u∈M γ_d,f,u = 1, ∀ d∈D, f∈𝒞, (2)
γ_d,f_D,d = 1, ∀ d∈D, (3)
γ_d,f_D,u = 0, ∀ d∈D, u∈V−{d}. (4)

Constraint (1) ensures that each destination chooses one source s in S as its service source, where f_S denotes the function acting as the source of the service chain. Constraint (2) finds a node u from M as the enabled VM of each VNF f for each destination. Constraints (3) and (4) assign only destination d to function f_D, where f_D denotes the function acting as the destination of the service chain. Here the notations f_S and f_D are incorporated in our IP formulation in order to support the routing constraints described later.
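As a concreteness check, the following is a hedged sketch of how the objective and the service chain constraints (1)-(4) might be encoded with the open-source PuLP solver interface, using the illustrative SOFInstance container above; index 0 plays the role of f_S, index |𝒞|+1 the role of f_D, and the remaining constraints (5)-(8) presented below would be added to the model analogously.

```python
import pulp

def build_sof_ip(inst):
    """Sketch of the IP: objective plus constraints (1)-(4); not a full model."""
    C = list(range(1, inst.chain_len + 1))      # VNF indices for f_1..f_|C|
    F = [0] + C + [inst.chain_len + 1]          # plus f_S (0) and f_D (|C|+1)
    D, V = list(inst.destinations), list(inst.nodes)
    prob = pulp.LpProblem("SOF", pulp.LpMinimize)
    gamma = pulp.LpVariable.dicts("gamma", (D, F, V), cat="Binary")
    sigma = pulp.LpVariable.dicts("sigma", (C, V), cat="Binary")
    tau = pulp.LpVariable.dicts("tau", ([0] + C, list(inst.edges)), cat="Binary")
    # Objective: setup cost of enabled VMs plus connection cost of the forest.
    prob += (pulp.lpSum(inst.node_cost[u] * sigma[f][u] for f in C for u in V)
             + pulp.lpSum(inst.edges[e] * tau[f][e] for f in [0] + C for e in inst.edges))
    for d in D:
        prob += pulp.lpSum(gamma[d][0][s] for s in inst.sources) == 1   # (1)
        for f in C:
            prob += pulp.lpSum(gamma[d][f][u] for u in inst.vms) == 1   # (2)
        prob += gamma[d][inst.chain_len + 1][d] == 1                    # (3)
        for u in V:
            if u != d:
                prob += gamma[d][inst.chain_len + 1][u] == 0            # (4)
    return prob
```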
In other words, constraints (1)-(4) ensure that a service chain traverses the nodes with f_S, f_1, ..., f_|𝒞|, f_D sequentially.

2) Service Forest Constraints. The following two constraints assign the enabled VMs for the whole service forest.

γ_d,f,u ≤ σ_f,u, ∀ d∈D, f∈𝒞, u∈V, (5)
∑_f∈𝒞 σ_f,u ≤ 1, ∀ u∈V. (6)

Constraint (5) assigns u as the enabled VM of VNF f for the whole service forest if u has been selected by at least one destination d for VNF f. Constraint (6) ensures that each node u is in charge of at most one VNF.

3) Chain and Forest Routing Constraints. The following two constraints find the routing of the whole service forest.

∑_v∈N_u π_d,f,u,v − ∑_v∈N_u π_d,f,v,u ≥ γ_d,f,u − γ_d,f_N,u, ∀ d∈D, f∈𝒞∪{f_S}, u∈V, (7)
π_d,f,u,v ≤ τ_f,u,v, ∀ d∈D, f∈𝒞∪{f_S}, e_u,v∈E. (8)

Constraint (7) is the most complicated one. It first finds the routing of the service chain for each destination d. For the source u of a service chain, γ_d,f_S,u=1 and γ_d,f_N,u=γ_d,f_1,u=0, where f_1 is the first VNF in 𝒞. In this case, the constraint becomes ∑_v∈N_u π_d,f_S,u,v − ∑_v∈N_u π_d,f_S,v,u ≥ 1. It ensures that at least one edge e_u,v incident from u is selected for the service chain because no edge e_v,u incident to u is chosen (i.e., ∑_v∈N_u π_d,f_S,v,u=0 for the source u). By contrast, for any intermediate switch u in the walk from the source to the enabled VM of f_1, γ_d,f_S,u=0 and γ_d,f_1,u=0, and the constraint becomes ∑_v∈N_u π_d,f_S,v,u ≤ ∑_v∈N_u π_d,f_S,u,v. When any edge e_v,u incident to u has been chosen in the walk, the above constraint states that at least one edge e_u,v incident from u must also be selected in order to construct the service chain iteratively. The above induction starts from the source of the walk and proceeds to the node preceding the enabled VM of f_1. Afterward, for the enabled VM u of f_1, γ_d,f_1,u=1 and γ_d,f_S,u=0, and the constraint becomes ∑_v∈N_u π_d,f_S,v,u − ∑_v∈N_u π_d,f_S,u,v ≤ 1. Since π_d,f_S,v,u=1 for only one edge e_v,u in the walk incident to u, the above constraint is identical to ∑_v∈N_u π_d,f_S,u,v ≥ 0. Therefore, π_d,f_S,u,v is allowed to be 0 for every edge e_u,v to minimize the objective function, implying that no data of f_S will be sent from the enabled VM of f_1. By contrast, π_d,f_1,u,v will be 1 for one edge e_u,v due to constraint (7) applied with f=f_1 (note that u cannot also be the enabled VM of f_2 by constraints (5) and (6)), implying that the enabled VM of f_1 will deliver the data on one edge incident from u, and the above induction repeats sequentially for every service f in 𝒞 until it reaches the destination d. Finally, constraint (8) states that any edge e_u,v is in the service forest if it is in the service chain for at least one destination d.

§.§ The Hardness

The SOF problem is NP-hard since a metric version of the Steiner Tree problem (see Definition <ref>) can be reduced to the SOF problem in polynomial time. The complete proof is presented in Appendix <ref>.

<cit.> Given a weighted graph G={V,E} with edge costs, a root r∈V and a node set U⊆V∖{r}, a Steiner Tree is a minimum-cost tree that is rooted at r and spans all the nodes in U, where U≠∅.

The SOF problem is NP-hard.

§ SPECIAL CASE WITH SINGLE TREE

In this section, we propose a (2+ρ_ST)-approximation algorithm, named Service Overlay Forest Deployment Algorithm with Single Source (SOFDA-SS), to explore the fundamental characteristics of the problem; a more complicated algorithm for the general case with multiple sources will be presented in the next section. SOFDA-SS includes the following two phases.
The first phase chooses the most suitable VM to install the last VNF (called the last VM in the rest of this paper) and then finds a minimum-cost service chain[The next section will extend the service chain into a service tree with multiple last VMs.] between the source and the last VM. Afterward, the second phase finds a minimum-cost Steiner tree to span the last VM and all the destinations. The selection of the last VM is crucial due to the following trade-offs. Choosing a VM closer to the source tends to generate a shorter service chain, but it may create a larger service tree if the last VM is distant from all destinations. Also, it is important to address the trade-off between the setup cost and the connection cost, because a VM with a smaller setup cost will sometimes generate a larger tree. The pseudo code of SOFDA-SS is presented in Appendix <ref> (see Algorithm <ref>).

Therefore, to achieve the approximation ratio, it is necessary for SOFDA-SS to carefully examine every possible VM to derive a Steiner tree and evaluate the corresponding cost. For every VM u, to obtain a walk W_G (i.e., a service chain) from source s to u with |𝒞| VMs (so that the VNFs f_1,f_2,⋯,f_|𝒞| can be installed in sequence) in G, we first propose a graph transformation from G to 𝒢 and then find the k-stroll <cit.> from s to u defined as follows.

Given a weighted graph 𝒢={𝒱,ℰ} and two nodes s and u in 𝒱, the k-stroll problem is to find the shortest walk that visits at least k distinct nodes (including s and u) from s to u in 𝒢.

SOFDA-SS constructs an instance 𝒢={𝒱,ℰ} of the k-stroll problem from G as follows. Let 𝒱 consist of s and all VMs in G (i.e., 𝒱=M∪{s}). Let ℰ contain all edges between any two nodes in 𝒱 (i.e., 𝒢 is a complete graph). The cost of the edge between nodes v_1 and v_2 in ℰ is defined as

c(v_1,v_2) = ∑_(a,b)∈P c((a,b)) + { (c(u)+c(v_2))/2 if v_1=s; (c(v_1)+c(u))/2 else if v_2=s; (c(v_1)+c(v_2))/2 otherwise },

where u and P denote the last VM and the shortest path between nodes v_1 and v_2 in G, respectively. In other words, the cost of each shortest path in G is first included in the cost of the corresponding edge in ℰ. Afterward, since the data always enter and leave the VM running an intermediate VNF (≠ f_|𝒞|), the setup cost of the VM is shared by the incoming and outgoing edges of the VM. Finally, the setup cost of the last VM u is shared by the outgoing edge of s and the incoming edge of u. The edge costs of 𝒢 are assigned in the above way to ensure that the shortest walk with |𝒞| VMs in G is identical to the shortest path with |𝒞|+1 nodes in 𝒢. Clearly, 𝒢 can be constructed in polynomial time. Then, SOFDA-SS finds a k-stroll walk W_𝒢^' that visits exactly |𝒞|+1 distinct nodes from source s to the last VM u (i.e., k=|𝒞|+1) in 𝒢. Then, SOFDA-SS finds the corresponding walk W_G (i.e., a service chain from s to u in G) that visits exactly |𝒞| distinct VMs in G by concatenating the shortest paths corresponding to the selected edges of W'_𝒢 in 𝒢, where each path connects two consecutive nodes, u_j and u_j+1, on the walk, 1≤j≤|𝒞|. Finally, the demanded VNFs f_1,f_2,...,f_|𝒞| can be deployed in order on the walk with |𝒞| VMs from s to u.
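To illustrate the transformation, the following is a minimal sketch of how 𝒢 and its edge costs could be computed, assuming an adjacency-list representation of G; the k-stroll solver itself is the cited 2-approximation and is treated as a black box here.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path lengths from src; adj maps each node to a list of (neighbor, cost)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for w, c in adj[v]:
            if d + c < dist.get(w, float("inf")):
                dist[w] = d + c
                heapq.heappush(pq, (d + c, w))
    return dist

def build_kstroll_graph(adj, node_cost, vms, s, u):
    """Edge costs of the complete graph on {s} | M for candidate last VM u:
    shortest-path length in G plus the shared setup costs of this section."""
    nodes = [s] + sorted(v for v in vms if v != s)
    sp = {v: dijkstra(adj, v) for v in nodes}
    cost = {}
    for i, v1 in enumerate(nodes):
        for v2 in nodes[i + 1:]:
            if v1 == s:   # s's outgoing edge carries half of the last VM's cost c(u)
                share = (node_cost[u] + node_cost[v2]) / 2
            else:         # interior VMs split their own setup costs over both edges
                share = (node_cost[v1] + node_cost[v2]) / 2
            cost[(v1, v2)] = cost[(v2, v1)] = sp[v1].get(v2, float("inf")) + share
    return cost
```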
Fig. <ref> presents an illustrative example for SOFDA-SS. First, for VM 7, the walk W_G=(u_1,u_2,⋯,u_|𝒞|+1) with u_1=1 and u_|𝒞|+1=7 is obtained as follows. An instance 𝒢={𝒱,ℰ} of the k-stroll problem is first constructed with s=1, M={2,3,4,5,6,7}, and u=7, where 𝒱 is set to {1,2,3,4,5,6,7}, ℰ is set to {(x,y)|x,y∈𝒱}, the cost of the edge between nodes 1 and 6 is set to c((1,2))+c((2,4))+c((4,6))+(c(7)+c(6))/2=14, and the cost of the edge between nodes 2 and 6 is set to c((2,4))+c((4,6))+(c(2)+c(6))/2=13. Subsequently, we acquire a walk W'_𝒢=W_𝒢=(1,2,4,3,5,7) in 𝒢 and the corresponding walk W_G=(1,2,4,2,3,5,7) in G. After W_G is obtained, the service overlay forest with the last VM (i.e., 7) is constructed, where the demand is first routed from source 1 to VM 7 along the walk W_G=(1,2,4,2,3,5,7), and f_1, f_2, f_3, f_4, and f_5 are processed at VMs 2, 4, 3, 5, and 7, respectively. After finding the Steiner tree rooted at VM 7, the demand is then routed to destination 8 by traversing switches 4 and 6, and directly to destination 9. The total cost at the end of the second phase is 45.

In the following, we present several important characteristics of graph 𝒢, which play crucial roles in deriving the approximation ratio. First, the cost of a walk (u_1,u_2,⋯,u_|𝒞|+1) from s=u_1 to the last VM u=u_|𝒞|+1 without traversing a node multiple times in 𝒢 is equal to the sum of the total setup cost of u_2,u_3,⋯,u_k plus the total connection cost of the shortest paths between every u_j and u_j+1 for j=1,2,⋯,k−1 in G. Second, the edge costs in 𝒢 satisfy the triangular inequality, as described in the following lemma. For readability, the detailed proof is presented in Appendix <ref>.

The graph 𝒢 satisfies the triangular inequality.

Let c(ℱ_M^OPT) and c(ℱ_E^OPT) denote the setup and connection costs of the optimal service overlay forest ℱ^OPT, respectively. Based on the above two characteristics, the following theorem derives the approximation ratio of SOFDA-SS. The complete proof is presented in Appendix <ref>.

The cost of F is bounded by (2+ρ_ST)c(ℱ^OPT). That is, SOFDA-SS is a (2+ρ_ST)-approximation algorithm for the SOF problem with one tree.

Time Complexity Analysis. SOFDA-SS constructs |M| instances of the k-stroll problem, and each of them employs the Dijkstra algorithm |M| times to compute the edge costs of the instance, where O(T_d) denotes the time to run the Dijkstra algorithm. Moreover, let O(T_k) denote the time to solve a k-stroll instance <cit.>, and let O(T_s) represent the time to append a Steiner tree by <cit.>. Therefore, the overall time complexity is O(|M|(T_d|M|+T_k+T_s)).
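Putting the two phases together, a compact sketch of the SOFDA-SS loop over candidate last VMs might look as follows; solve_k_stroll and steiner_tree are assumed placeholders for the cited 2-approximation and ρ_ST-approximation subroutines, and build_kstroll_graph is the illustrative helper above.

```python
def sofda_ss(adj, node_cost, vms, s, destinations, chain_len,
             solve_k_stroll, steiner_tree):
    best = None
    for u in vms:  # examine every candidate last VM, as the ratio proof requires
        cost_g = build_kstroll_graph(adj, node_cost, vms, s, u)
        walk, walk_cost = solve_k_stroll(cost_g, s, u, k=chain_len + 1)      # phase 1
        tree, tree_cost = steiner_tree(adj, root=u, terminals=destinations)  # phase 2
        if best is None or walk_cost + tree_cost < best[0]:
            best = (walk_cost + tree_cost, u, walk, tree)
    return best  # (total cost, last VM, service chain walk, Steiner tree)
```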
§ GENERAL CASE WITH MULTIPLE TREES

In this section, we propose a 3ρ_ST-approximation algorithm, named Service Overlay Forest Deployment Algorithm (SOFDA), for the general SOF problem with multiple sources. Different from SOFDA-SS, here we select multiple sources to exploit multiple trees for further reducing the total cost, and it is necessary to choose a different subset of destinations for each source to form a forest. In other words, both the last VMs and the set of destinations must be carefully chosen for the tree corresponding to each source. To effectively solve the above problem, our idea is to identify a short service chain from each source to each destination as a candidate service chain and then encourage more destinations to merge their service chains into a service tree; those destinations will belong to the same tree in this case. More specifically, SOFDA first constructs an auxiliary graph 𝔾 with each candidate service chain represented by a new virtual edge connecting the source and the last VM of the chain. Also, every source is connected to a common virtual source. SOFDA finds a Steiner tree spanning the virtual source and all destinations, and we will prove that the cost of the tree in 𝔾 is no greater than 3ρ_STc(ℱ^OPT). Nevertheless, a new challenge arises here because the service chains corresponding to the selected virtual edges in the above approach may overlap in a few nodes in G, and the solution thereby is not feasible if any overlapping node in this case needs to support multiple VNFs (see the definition of SOF). SOFDA in Section <ref> thereby revises the above solution into multiple feasible trees, and we prove that SOFDA can still maintain the desired approximation ratio in Section <ref>. The pseudo code of SOFDA is presented in Appendix <ref> (see Algorithm <ref>).

§.§ Cost-Bounded Steiner Tree

SOFDA first constructs an auxiliary graph 𝔾 to effectively extract multiple service chains and group the destinations. Specifically, let 𝕍_𝕊 consist of the duplicate v̂ of each source v∈S, and let 𝕍_𝕄 contain the duplicate v̂ of each VM v∈M. Therefore, 𝕍=V∪{ŝ}∪𝕍_𝕊∪𝕍_𝕄, where ŝ denotes the virtual source. Also, let 𝔼_ŝ𝕊 include the edges between ŝ and v̂ for each v̂∈𝕍_𝕊. Let 𝔼_𝕊𝕄 consist of the virtual edges (representing the candidate service chains) between v̂ and û for each v̂∈𝕍_𝕊 and û∈𝕍_𝕄, and let 𝔼_𝕄M include the edges between v and v̂ for each v∈M. Then, 𝔼=E∪𝔼_ŝ𝕊∪𝔼_𝕊𝕄∪𝔼_𝕄M. Moreover, the cost of each edge in 𝔼_ŝ𝕊∪𝔼_𝕄M is assigned to 0, and the cost of the virtual edge between v̂∈𝕍_𝕊 and û∈𝕍_𝕄 in 𝔼_𝕊𝕄 is equal to the cost of the k-stroll walk that visits |𝒞| VMs between v and u in G. We first present an illustrative example for the above graph transformation.

Fig. <ref> presents an example of constructing the instance 𝔾={𝕍,𝔼} of the Steiner tree problem from the graph G shown in Fig. <ref>, where S={0,1} and M={2,3,4,5,6,7}. The output 𝔾 is presented in Fig. <ref>. SOFDA first replicates G in 𝔾. Subsequently, it duplicates the sources 0 and 1 by creating nodes 0̂ and 1̂, and the VMs 2,3,4,5,6,7 by creating nodes 2̂,3̂,4̂,5̂,6̂,7̂ in 𝔾. Then, the costs of edges (ŝ,0̂), (ŝ,1̂), (2̂,2), (3̂,3), (4̂,4), (5̂,5), (6̂,6), and (7̂,7) are all set to 0. To derive the cost of the virtual edge (1̂,6̂), SOFDA finds the walk from source 1 to VM 6 in G as follows. First, it constructs an instance of the k-stroll problem 𝒢 shown in Fig. <ref>. Then, we obtain a walk W'_𝒢=W_𝒢=(1,4,2,3,5,6) in 𝒢. By combining the shortest paths with each path connecting two consecutive nodes in W'_𝒢, we find the desired walk W_G=(1,2,4,2,3,5,6) in G. Thus, the cost of link (1̂,6̂) is set to the cost of W_G, which is equal to c(2)+c(4)+c(3)+c(5)+c(6)+c(1,2)+c(2,4)+c(4,2)+c(2,3)+c(3,5)+c(5,6)=21.

The following lemma first indicates that the cost of the constructed Steiner tree in 𝔾 is bounded by 3ρ_ST·c(ℱ^OPT), by showing that there is a feasible Steiner tree 𝕋={𝕍_𝕋,𝔼_𝕋} in 𝔾 with the cost bounded by 3c(ℱ^OPT).

A feasible Steiner Tree with the cost no greater than 3c(ℱ^OPT) exists in 𝔾.

We first show that there is a 𝕋-like graph, 𝕋^'={𝕍_𝕋^',𝔼_𝕋^'}, with a cost of at most 3c(ℱ^OPT) in 𝔾. Afterward, we extract the desired 𝕋 from 𝕋'. Let D_v^OPT denote the set of the destinations in the service overlay tree rooted at source v in ℱ^OPT. In addition, for the service overlay tree rooted at source v in ℱ^OPT, let m_v^OPT be the representative last VM chosen from all the VMs running f_|𝒞| on the paths from v to the destinations in D_v^OPT.
Moreover, let T_v be the optimal Steiner tree rooted at m_v^OPT that spans all destinations in D_v^OPT in G. Then, let 𝕍_𝕋^' consist of 1) ŝ, 2) the duplicate v̂ (in 𝕍_𝕊) of each source v in ℱ^OPT, 3) the duplicate m̂_v^OPT (in 𝕍_𝕄) of each m_v^OPT in ℱ^OPT, 4) each m_v^OPT in ℱ^OPT, and 5) all VMs and switches (including all destinations in D) in all optimal Steiner trees T_v in G. Let 𝔼_𝕋^' include the edges between 1) ŝ and v̂, 2) v̂ and m̂_v^OPT, and 3) m_v^OPT and m̂_v^OPT for each spanned source v in ℱ^OPT, as well as 4) all links in all optimal Steiner trees T_v in G for each used source v in ℱ^OPT. Note that for each source v in ℱ^OPT, the cost of the edge between v̂ and m̂_v^OPT in 𝕋^' is bounded by twice the cost of the shortest walk that visits |𝒞| VMs between v and m_v^OPT in G. Since there is a walk between v and m_v^OPT in ℱ^OPT, the total cost of the edges in 𝔼_𝕋^'∩𝔼_𝕊𝕄 is bounded by 2c(ℱ^OPT). In addition, the cost of T_v is bounded by the connection cost of the service overlay tree with root v in ℱ^OPT, because the latter not only spans m_v^OPT and the destinations but also spans the source v and other VMs (running f_1,f_2,...,f_|𝒞|). Thus, the total cost of the edges in 𝔼_𝕋^'∩E is bounded by c(ℱ^OPT). Since the cost of each edge in 𝔼_𝕋^'∩𝔼_ŝ𝕊 or 𝔼_𝕋^'∩𝔼_𝕄M is 0, the cost of 𝕋^' is bounded by 3c(ℱ^OPT). Furthermore, there is a subgraph (more specifically, a tree) 𝕋 of 𝕋^' that spans the virtual source and all the destinations in 𝔾. Hence, the cost of 𝕋 is no greater than that of 𝕋^' and is bounded by 3c(ℱ^OPT).

§.§ Cost-Bounded Service Overlay Forest

After finding a Steiner tree 𝕋 in 𝔾 with a cost bounded by 3ρ_STc(ℱ^OPT) by the above ρ_ST-approximation algorithm, to limit the total cost of the service overlay forest, SOFDA deploys each service chain with the corresponding virtual edge in 𝕋∩𝔼_𝕊𝕄 and routes the traffic via the edges in 𝕋∩E. Specifically, SOFDA first 1) adds each corresponding walk of the spanned virtual edges one by one in G and then 2) adds all VMs, switches, and links in 𝕋∩G to F.

Fig. <ref> presents an example for the construction of the service overlay forest with 𝒞=(f_1,f_2,f_3,f_4,f_5) in Fig. <ref> by SOFDA. First, an instance 𝔾={𝕍,𝔼} of the Steiner Tree problem is constructed with the input parameters G, S={0,1}, M={2,3,4,5,6,7}, and 𝒞=(f_1,f_2,f_3,f_4,f_5), and a Steiner tree 𝕋 in 𝔾 is obtained using the ρ_ST-approximation algorithm in <cit.>, as shown in Fig. <ref>.

Nevertheless, multiple walks in G corresponding to the spanned virtual edges in 𝕋 may overlap in a few VMs, and the solution is infeasible if any overlapping VM needs to perform different VNFs (see the definition of SOF in Section <ref>). This situation is called a VNF conflict in this paper. In the following, we present an effective way to eliminate the conflict by tailoring the overlapping walks without increasing the cost. To address the VNF conflict, when a walk W_G=(v_1,v_2,⋯,v_n) in G is added to the service overlay forest F, SOFDA augments F with a modified walk W=(u_1,u_2,⋯,u_n) based on W_G. Note that a VM or switch is allowed to be passed without processing any VNF by simply forwarding the data. Moreover, a VNF conflict happens when two walks compete for a clone to perform different VNFs. Fig. <ref> presents an example of the VNF conflict, where W_1 and W_2 respectively desire to run f_1 and f_4 on VM 4.
Suppose that a walk W (between source s and VM v) faces a VNF conflict with another walk W_1 (between source s_1 and VM v_1) in F. We solve the conflict between W and W_1 by changing the source of W from s to s_1 (attaching W to W_1), or changing the source of W_1 from s_1 to s (attaching W_1 to W), without adding new links, VMs, and switches to F and without enabling new VMs in F for VNFs.

Following Example <ref>, SOFDA finds walks W_G,1=(1,2,4,2,3,5,6) and W_G,2=(0,3,5,3,2,4,7) in G, where f_1, f_2, f_3, f_4, f_5 are installed at VMs 4, 2, 3, 5, 6 on W_G,1, and at VMs 3, 5, 2, 4, 7 on W_G,2, respectively. After W_G,1 is added to F, we have F={W_1}, where W_1 consists of one clone for source 1, two clones of VM 2, and one clone for each of VMs 4, 3, 5, 6, since F=∅ in the beginning. As W_G,2 is added to F, SOFDA augments F with W_2, where W_2 includes one clone for source 0, one clone for each of VMs 3, 5, 2, 4 (on which f_3, f_4, f_2, and f_1 are already running on W_1), and two new clones for VMs 3 and 7, as shown in Fig. <ref>.

Specifically, let u be the first VM at which W experiences a VNF conflict with W_1 when backtracking W. Recall in Fig. <ref>, for example, that VM 4 is the first conflict node with W_1 when backtracking W. Let f_1,f_2,⋯,f_|𝒞| denote the VNFs required to be performed in sequence on W and W_1. Let f_i and f_j be the VNFs located at u on W_1 and W, respectively. SOFDA effectively addresses the VNF conflict in detail as follows. First, if j≤i, SOFDA attaches W to W_1 through u by changing W to the concatenation of the sub-walk of W_1 from s_1 to u (on which f_1,f_2,⋯,f_i are installed in sequence, identical to W_1) and the sub-walk of W from u to v (on which f_i+1,f_i+2,⋯,f_|𝒞| are running in sequence, identical to W), as shown in Fig. <ref>.

Following Example <ref>, W_2 first experiences a VNF conflict with W_1 at (the clone of) VM 4, where f_4 and f_1 are installed on W_2 and W_1, respectively. The sequence numbers of the VNFs at VM 4 on W_2 and W_1 are 4 and 1, respectively. The condition j≤i is not satisfied since j=4 and i=1. SOFDA then checks the next condition. Note that one of the three conditions must be satisfied. Second, if there is another VM w such that W experiences a VNF conflict with W_1 at w, where f_h with h≥j is on W_1, SOFDA attaches W to W_1 through w by changing W to the concatenation of the sub-walk of W_1 from s_1 to w (on which f_1,f_2,⋯,f_h are running in sequence, identical to W_1), the sub-walk of W from w to u, and the sub-walk of W from u to v (on which f_h+1,f_h+2,⋯,f_|𝒞| are running in sequence, identical to W), as illustrated in Fig. <ref>.

Following Example <ref>, W_2 experiences another VNF conflict with W_1 at VM 5, where f_2 and f_4 are performed on W_2 and W_1, respectively. Since the sequence number of the VNF at VM 5 on W_1 is not smaller than that of the VNF at VM 4 on W_2, SOFDA attaches W_2 to W_1 through VM 4 as follows. SOFDA first steers W_2 along the sub-walk of W_1 from source 1 to VM 5 (i.e., the walk (1,2,4,2,3,5)), on which f_1,f_2,f_3,f_4 are running in sequence at VMs 4, 2, 3, 5, respectively, identical to W_1. Subsequently, it continues steering W_2 along the sub-walk of W_2 from VM 5 to VM 4 (i.e., the walk (5,3,2,4)) and the sub-walk of W_2 from VM 4 to VM 7 (i.e., the walk (4,7)), on which f_5 is run at VM 7, identical to W_2. Finally, the sub-walk (5,3,2,4,7) on the revised W_2 can be shortened to the walk (5,7). The constructed service overlay forest for G is displayed in Fig. <ref>.
Otherwise, SOFDA attaches W_1 to W through u by changing W_1 to the concatenation of the sub-walk of W from s to u (on which f_1,f_2,⋯,f_j are running in sequence, identical to W) and the sub-walk of W_1 from u to v_1 (on which f_j+1,f_j+2,⋯,f_|𝒞| are running in sequence, identical to W_1), as shown in Fig. <ref>. Moreover, when a walk W experiences VNF conflicts with multiple walks W_1,W_2,⋯,W_l in F in sequence by backtracking W, SOFDA resolves the VNF conflicts between W and W_1,W_2,⋯,W_l one by one. The following theorem derives the approximation ratio for SOFDA.

The cost of the constructed service overlay forest F is bounded by 3ρ_STc(ℱ^OPT).

First, the cost of the Steiner tree 𝕋 in 𝔾 is bounded by ρ_ST times that of the optimal Steiner tree in 𝔾. Since the cost of the optimal Steiner tree in 𝔾 is bounded by 3c(ℱ^OPT) according to Lemma <ref>, the cost of 𝕋 is limited by 3ρ_STc(ℱ^OPT). In addition, since the cost of the edge between v̂∈𝕍_𝕊 and û∈𝕍_𝕄 in 𝕋 is identical to the cost of the walk that visits |𝒞| VMs between v and u in G, the cost of F constructed in G is equal to the cost of 𝕋 and thereby bounded by 3ρ_STc(ℱ^OPT) if no VNF conflict occurs in F. On the other hand, when the VNF conflict between two walks happens, one of the two walks in F is updated, no new link, VM, or switch is added to F, and no VM in F is newly assigned to perform a VNF. Thus, the cost of F revised for resolving the VNF conflicts is still bounded by 3ρ_STc(ℱ^OPT). The theorem follows.

Time Complexity Analysis. We follow the notations in the time complexity analysis of SOFDA-SS. To generate the instance of the Steiner tree problem, SOFDA constructs |S||M| instances of the k-stroll problem, and each of them employs the Dijkstra algorithm |M| times to compute the edge costs of the instance. Then, SOFDA solves the k-stroll instances by <cit.> to derive the costs of the virtual edges (i.e., the corresponding candidate service chains). To eliminate the conflicts, in the worst case, all the added walks in F are appended to the newly added walk, and the complexity is O(|M|^3). Therefore, the total time complexity is dominated by constructing and solving the k-stroll instances and finding a Steiner tree, i.e., O(|S||M|(|M|T_d+T_k)+T_s).

§ DISTRIBUTED IMPLEMENTATION

For large SDNs, it is important to employ multiple SDN controllers, where each one monitors and controls a subset of the network <cit.>, and communication protocols <cit.> between controllers are developed to facilitate scalable control of the whole network. In the following, therefore, we discuss the distributed implementation of the proposed algorithm in Section <ref> to support multi-controller SDNs. Note that the controller that receives the request is elected to be the leader, which is responsible for progress tracking and phase switching. First, shortest-path routing plays a fundamental role in SOFDA to build the auxiliary graph 𝔾 and the service chain corresponding to each edge in 𝔾. To find a shortest path traversing multiple domains, it is necessary for each controller to first abstract a matrix that consists of the lengths between every pair of border routers over the Southbound interface <cit.> within its domain. Afterward, each controller propagates the matrix to the other controllers along with the Network Layer Reachability Information of the SDNi Wrapper over the East-West Interface, which is used to share the connectivity information with the neighboring controllers.
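A minimal sketch of this per-domain abstraction is given below: each controller runs Dijkstra from its border routers and exports the resulting length matrix, reusing the dijkstra helper sketched earlier; message formats such as the SDNi Wrapper fields are not modeled.

```python
def border_matrix(adj, border_routers):
    """Length matrix a controller exports for its domain: shortest-path
    lengths between every ordered pair of its border routers."""
    matrix = {}
    for b in border_routers:
        dist = dijkstra(adj, b)
        for b2 in border_routers:
            if b2 != b:
                matrix[(b, b2)] = dist.get(b2, float("inf"))
    return matrix
```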
More specifically, let s and t denote the source and the destination, respectively. The controller C_s covering s can find the corresponding domain by the IP prefix of t. Then, controller C_s informs the controller C_t that covers t of the lengths of all shortest paths from s to all border routers of C_t. Afterward, controller C_t can respond with the best border router to controller C_s, and the length of a shortest path can be acquired accordingly. Equipped with the shortest-path computation from multiple controllers, each controller can acquire the length of each shortest path between a VM in its domain and any other VM (or source). Thus, once the forest construction is initiated, every controller that covers a source will communicate with the other controllers to collect the matrices of lengths between any two VMs and the lengths between any source and any VM. Then, the controller can find all candidate service chains from its covered source to each VM and create a virtual link in 𝔾 representing each service chain to connect the virtual source and the corresponding last VM. Afterward, a distributed Steiner tree algorithm <cit.> can be employed by multiple controllers to find the Steiner tree, where the computation load originally assigned to each switch in the distributed algorithm can be finished by its controller instead. In SOFDA, it is important to address the VNF conflicts in multiple domains. To achieve this goal, each controller first removes the useless candidate service chains that do not connect with any destination, and then informs any other controller whose coverage is visited by any remaining service chain. When one of the informed controllers observes a VNF conflict of two service chains, it notifies the other controller to collaboratively remove the conflict according to the conflict elimination algorithm described in Section <ref>. Finally, each controller deletes the virtual source, deploys the remaining service chains, and forwards the content to the destinations by SOF.

§ DISCUSSION

§.§ Static Multicast Trees with Service Chaining

To the best of the authors' knowledge, this paper is the first one that explores the notion of the service forest, i.e., the fundamental multi-tree multicast problem with service chaining, and provides approximation algorithms with theoretical bounds. Therefore, we first consider the fundamental problems for static SDN/NFV multicast and then extend the proposed algorithms to the dynamic case in Section <ref>. Actually, static multicast is crucial for backbone ISPs. In this situation, each terminal node of a multicast tree is usually an edge router or a local proxy server of the ISP, instead of a dynamic user client. For example, current live streams are sent by the source (i.e., headends or content servers) and travel through the high-speed backbone network to the access nodes and edge nodes (e.g., a Digital Subscriber Line Access Multiplexer (DSLAM) <cit.> or a Mobile Edge Computing (MEC) server <cit.>) via static multicast trees <cit.> (e.g., Chung-Hwa Telecom MOD <cit.>), whereas dynamic user joins and leaves are handled by the local access nodes and edge nodes. Static multicast trees can significantly reduce the backbone bandwidth consumption for each stream and are thereby much more scalable to support a large number of video channels. In this case, each access node usually serves hundreds or thousands of end users and streams one (or a few) channel(s) to each user according to the available bandwidth between the access node and the user devices (e.g., set-top boxes).
Moreover, for massively multi-user virtual reality (e.g., gaming) <cit.>, the servers create a virtual environment with a 3D model, player avatars, and scripts, and then transmit the data by static multicast to several MEC servers <cit.>, which always need to appear in a multicast group. In the above real-life examples, our proposed algorithms can facilitate static multicast with service chaining (i.e., multiple stages of servers) to support a large number of streams between the headend server and the local access nodes/edge nodes.

§.§ Cost Model and Online Deployment

In the online scenario, when a new request arrives, SOFDA allocates the required resources for the request by constructing a service forest according to the current link and node costs. To balance the network resource consumption and accommodate more requests in the future, congested links and nodes need to be assigned higher costs so as to encourage SOFDA to employ the links and nodes with low loads <cit.>. In this paper, therefore, we exploit <cit.>, which is designed for online adaptive routing in the Internet, to assign a convex cost to each link or node. The cost significantly increases as the load grows, to avoid overwhelming the link or node. More specifically, let l and p denote the current load and the capacity of the link or node, respectively; the cost c is set according to the utilization (i.e., l/p) as follows and illustrated in Fig. <ref>.

c = { l if l/p ≤ 1/3;
      3l − (2/3)p if 1/3 < l/p ≤ 2/3;
      10l − (16/3)p if 2/3 < l/p ≤ 9/10;
      70l − (178/3)p if 9/10 < l/p ≤ 1;
      500l − (1468/3)p if 1 < l/p ≤ 11/10;
      5000l − (16318/3)p otherwise. }

Note that the constants are chosen so that c is continuous at every breakpoint; e.g., at l/p=1 both 70l−(178/3)p and 500l−(1468/3)p equal (32/3)p. The cost model properly handles the online situation by assigning a huge cost to a more congested node or link. Therefore, SOFDA will avoid choosing such a congested node or link in order to minimize the total cost of the service forest. SOFDA can thereby mitigate the impact on a VM from other VMs co-located on an overloaded node. Indeed, the cost model can be applied to both private and public cloud networks, where resource optimization and load balancing usually need to be addressed. For example, Chung-Hwa Telecom MOD <cit.> is built in its private clouds, while Netflix <cit.> adopts AWS <cit.>. Nevertheless, each request has a different duration, and an approach that does not consider the duration of the request is inclined to incur fragmentation of the network resources and degrade the performance. However, the duration of a stream (e.g., a VR multi-player game) is usually difficult to precisely predict, and many current approaches thereby adaptively reroute <cit.> and migrate the VMs <cit.> to relocate the network resources when congestion occurs. Similar to the above approaches, when a node or link becomes congested, SOFDA reroutes the service forest by letting the users downstream of the congested node or link re-join the forest, where the current path in the forest is removed only after the new join path is created to avoid service interruption <cit.>.
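For reference, a direct transcription of the piecewise cost above (a sketch; the function applies equally to a link or a node by letting l and p be its load and capacity):

```python
def convex_cost(l, p):
    """Piecewise-linear convex cost of a link or node with load l and
    capacity p, following the online traffic-engineering cost model above."""
    u = l / p
    if u <= 1 / 3:
        return l
    if u <= 2 / 3:
        return 3 * l - (2 / 3) * p
    if u <= 9 / 10:
        return 10 * l - (16 / 3) * p
    if u <= 1:
        return 70 * l - (178 / 3) * p
    if u <= 11 / 10:
        return 500 * l - (1468 / 3) * p
    return 5000 * l - (16318 / 3) * p
```

For example, convex_cost(0.95 * p, p) evaluates to about 7.17p, so a nearly saturated resource is heavily penalized and is very unlikely to be chosen for a new forest.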
§.§ Adjustments for Various Dynamic Cases

In the following, we extend SOFDA to support the dynamic join and leave of destination users and the addition and deletion of VNFs in a service forest after a session starts. To address the dynamic case, a simple approach is to run SOFDA again for the whole forest. Nevertheless, this approach tends to incur massive computation loads in the SDN controller, especially when users frequently join and leave the multicast group or change the computation tasks in the service forest. In the following, therefore, we extend SOFDA to properly handle the dynamic case <cit.>.

* Destination Leave. When a destination v leaves the service forest, if v is a leaf node, SOFDA removes v and all intermediate nodes and links in the path connecting v and the closest upstream branch node in the service forest, where a branch node is a node in the forest with at least two child nodes. By contrast, if v is not a leaf node, because there are other destination users in the subtree rooted at v, SOFDA is not allowed to remove the path connecting to the upstream branch node.

* Destination Join. When a new destination user v joins the service forest, SOFDA finds the walk from v to the forest with the lowest cost (see the sketch after this list). More specifically, for each node u in the forest ℱ that can be a candidate branch node to connect v, let f(u) denote the index of the last installed VNF between a source s and u in the forest. To derive the cost of the walk from u to v, SOFDA finds the walk with k=|𝒞|−f(u)+1 from u to v to install the (|𝒞|−f(u)) new VNFs on the walk by exploiting k-stroll in the transformed graph (see Section <ref>). Let W_G(u,v)=(u_1,...,u_k) denote the acquired walk, where u_1=u and u_k=v. In this case, SOFDA needs to install the new VNFs f_f(u)+1,...,f_|𝒞| on the above walk from u to v, and the cost of the forest is increased by min_u∈ℱ{c(W_G(u,v))}. SOFDA carefully examines every possible u in the existing service forest to effectively minimize the incremental cost, and the node u leading to the smallest cost is selected to serve the new destination user v accordingly.

* VNF Deletion. When VNF f_j is removed from the service forest, for each VM v that installs a VNF f_j, SOFDA connects the VM u with the upstream VNF f_j−1 to the VM w with the downstream VNF f_j+1 (along the minimum-cost path from u to w in the original G) in the forest, where the source (or destination) can be regarded as the VM with the upstream (or downstream) VNF f_j−1 (or f_j+1) if f_j is the first (or last) VNF.

* VNF Insertion. When VNF f_j is inserted into the service forest, for each pair of VMs u and w with VNFs f_j−1 and f_j+1, respectively, SOFDA installs f_j on an available VM v and connects u to v and v to w in the forest such that the sum of 1) the connection cost of the path between u and v, 2) the setup cost of v, and 3) the connection cost of the path between v and w is minimized. When f_j is the first (or last) VNF, the source (or destination) is regarded as the VM with VNF f_j−1 (or f_j+1). In addition, if two pairs of VMs (u_1,w_1) and (u_2,w_2) with VNFs f_j−1 and f_j+1 choose the same VM v to install VNF f_j, SOFDA removes all intermediate nodes and links in the path connecting u_2 and v in the forest in order to reduce the total cost of the forest (i.e., avoid creating redundant paths in the forest).

* Link Congestion. For any congested link e between the VMs with VNFs f_j and f_j+1, SOFDA updates the link cost according to <cit.> and then re-connects the two VMs with the path associated with the lowest cost. SOFDA can effectively avoid choosing a congested link because the cost of the link will be extremely large. On the other hand, if e is between the source and a VM (or a VM and a destination), the source (or the destination) is regarded as the upstream VM (or the downstream VM) and handled in a similar manner.

* VM Overload. For any overloaded VM v between the VMs with VNFs f_j−1 and f_j+1, SOFDA updates the node cost according to <cit.>, finds an available VM v', and then re-connects it to the upstream VM and the downstream VM with the paths having the lowest costs. Therefore, SOFDA can also avoid selecting an overloaded VM. On the other hand, if v runs the first VNF (or the last VNF), the source (or the destinations) is regarded as the upstream VM (or the downstream VM) and handled in a similar manner.
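The Destination Join step above can be sketched as follows; min_cost_walk(u, v, k) is an assumed placeholder for the k-stroll-based subroutine on the transformed graph, as is the f_idx map recording f(u) for each node in the forest.

```python
def join_destination(forest_nodes, f_idx, v, chain_len, min_cost_walk):
    """Pick the in-forest branch node u whose walk to the new destination v
    (with room for the chain_len - f_idx[u] missing VNFs) adds the least cost."""
    best = None
    for u in forest_nodes:
        k = chain_len - f_idx[u] + 1          # nodes needed on the walk from u to v
        walk, cost = min_cost_walk(u, v, k)   # k-stroll on the transformed graph
        if best is None or cost < best[0]:
            best = (cost, u, walk)
    return best  # (incremental cost, chosen branch node, walk serving v)
```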
§ NUMERICAL RESULTS

§.§ Simulation Setup

We conduct simulations to compare different approaches on two inter-data-center networks: IBM SoftLayer <cit.> and Cogent <cit.>. SoftLayer contains 27 access nodes with 49 links and 17 data centers, whereas Cogent has 190 access nodes with 260 links and 40 data centers. We also generate a synthetic network with 5000 access nodes, 10000 links, and 2000 data centers by Inet <cit.>. The edge costs and the node costs are set according to <cit.> and <cit.> based on the corresponding loads (described in Section <ref>), respectively. The sources and destinations are chosen uniformly at random from the nodes in the network. We examine the performance of different approaches in two scenarios: one-time deployment and online deployment. Moreover, we also implement all algorithms in a small-scale SDN with HP SDN switches.

In the one-time deployment scenario, the link bandwidth is set to 100 Mbps, and each requested demand is set to 5 Mbps. The link usage is randomly chosen in (0,1) so as to derive the edge cost according to <cit.>. Also, the total number of VMs ranges over {5,15,25,35,45}, and each VM is randomly attached to a data center. The service chain length (i.e., the number of VNFs in the chain) ranges over {3,4,5,6,7} in both the Cogent and SoftLayer networks. The numbers of destinations and candidate sources range over {2,4,6,8,10} and {2,8,14,20,26}, respectively, in both networks. The default numbers of candidate sources, destinations, and VMs, and the default service chain length are 14, 6, 25, and 3, respectively. Afterward, for the online deployment scenario, the node/link usages are zero initially. Each data center has 5 VMs with the cost set according to the host machine utilization. Afterward, we incrementally generate new requests, and the node costs and edge costs are updated according to <cit.>. The numbers of destinations and candidate sources in a request are randomly chosen from 13 to 17 and from 8 to 12 in SoftLayer, and from 20 to 60 and from 10 to 30 in Cogent, respectively. The number of demanded services in a request is 3.

We compare the proposed algorithm with the following ones. 1) CPLEX <cit.>. It finds the optimal solution by solving the IP formulation in Section <ref>. 2) Enhanced Steiner Tree (eST). Since the Steiner tree algorithm <cit.> does not select VMs in the tree, we extend it for SOF as follows. We find the minimum-cost tree among all Steiner trees rooted at different sources. Afterward, we construct the shortest service chain that is closest to the tree from <cit.> and then connect it to the tree with the minimum cost. 3) Enhanced algorithm for the NFV-enabled multicast problem (eNEMP).
Since the algorithm for the NFV-enabled multicast problem (NEMP) <cit.> does not support multiple sources and VNFs, similar to the above extension, we construct a service chain and then connect it to the tree, where the chain spans the VM that has been chosen in the tree. Moreover, we enable eST and eNEMP to support multiple sources via the following modification. The idea is to iteratively add a service tree to the solution until no tree can reduce the total cost. At each iteration, we elect the minimal-cost service tree among all candidate trees rooted at each unused source, run the VNFs sequentially on unused VMs, and span all the destinations in D. To estimate the profit of the tree addition, we calculate the total cost of the current forest with the elected tree, where each destination is spanned and served by the closest tree. Hence, we add the elected tree and proceed to the next iteration if it can decrease the total cost. Otherwise, we output the forest. Furthermore, a special case with only one Steiner tree connected with a service chain (denoted by ST in the figures) is also evaluated.

§.§ One-Time Deployment

We compare the performance of SOFDA, eNEMP, eST, ST, and the optimal solution generated by CPLEX with different numbers of sources, destinations, VMs, and demanded services. Because SOF is NP-hard, CPLEX is able to find the optimal solutions only for small instances, and thus only SoftLayer is tested in this case. Figs. <ref> and <ref> manifest that SOFDA is very close to the optimal solutions, and choosing multiple sources effectively reduces the total cost. The improvement in Fig. <ref> and Fig. <ref> is more significant because the larger networks (i.e., Cogent and the synthetic network) contain more candidate nodes and links to generate a more proper forest. Since eNEMP and eST do not choose multiple sources and VMs during the multicast routing, they tend to miss many good opportunities for allocating the VMs with small costs to the tree with fewer edges. By contrast, the results indicate that SOFDA effectively reduces the total cost by 30%. Also, when the number of sources increases, the destinations have more candidate trees to join, and thus the total cost is effectively reduced. However, the total cost increases when the number of destinations grows, because a service tree is necessary to span more destinations. Fortunately, when we have more VMs, there are more candidate machines to deploy the VNFs, and the total cost thereby can be reduced.

Fig. <ref> presents the impact of different setup costs. The forest cost increases as the setup cost (i.e., 1x, 3x, ..., 9x) or the length of a demanded service chain (i.e., |𝒞|) grows, as shown in Fig. <ref>. Fig. <ref> manifests that the average number of selected VMs in a forest is effectively reduced by SOFDA as the setup cost of a VM increases. Moreover, when the length of a demanded service chain (i.e., |𝒞|) becomes larger, the number of required VMs increases in order to satisfy the new user requirements. Table <ref> shows the running time of SOFDA with different numbers of sources and network sizes. The running time is less than 2 seconds for small networks, such as the one with 1000 nodes and 2 sources. As |S| and |V| increase, the running time grows, but SOFDA only requires around 19 seconds for the largest case.

§.§ Online Deployment

In the following, we explore the online scenario with the requests arriving sequentially. The edge costs also grow incrementally due to more traffic demand.
§.§ One-Time Deployment We compare the performance of SOFDA, eNEMP, eST, ST, and the optimal solution generated by CPLEX with different numbers of sources, destinations, VMs, and demanded services. Because SOF is NP-hard, CPLEX can find the optimal solutions only for small instances, and thus only SoftLayer is tested in this case. Figs. <ref> and <ref> show that SOFDA is very close to the optimal solutions, and that choosing multiple sources effectively reduces the total cost. The improvement in Fig. <ref> and Fig. <ref> is more significant because the larger networks (i.e., Cogent and the synthetic network) contain more candidate nodes and links from which to generate a more suitable forest. Since eNEMP and eST do not choose multiple sources and VMs during the multicast routing, they tend to miss many good opportunities for allocating the VMs with small costs to the tree with fewer edges. By contrast, the results indicate that SOFDA effectively reduces the total cost by 30%. Also, when the number of sources increases, the destinations have more candidate trees to join, and thus the total cost is effectively reduced. However, the total cost increases when the number of destinations grows, because a service tree then needs to span more destinations. Fortunately, when we have more VMs, there are more candidate machines on which to deploy VNFs, and the total cost thereby can be reduced. Fig. <ref> presents the impact of different setup costs. The forest cost increases as the setup cost (i.e., 1x, 3x, ..., 9x) or the length of a demanded service chain (i.e., |𝒞|) grows, as shown in Fig. <ref>. Fig. <ref> shows that the average number of selected VMs in a forest is effectively reduced by SOFDA as the setup cost of a VM increases. Moreover, when the length of a demanded service chain (i.e., |𝒞|) becomes larger, the number of required VMs increases in order to satisfy the new user requirements. Table <ref> shows the running time of SOFDA with different numbers of sources and network sizes. The running time is less than 2 seconds for small networks, such as the one with 1000 nodes and 2 sources. As |S| and |V| increase, the running time grows, but SOFDA only requires around 19 seconds for the largest case. §.§ Online Deployment In the following, we explore the online scenario with the requests arriving sequentially. The edge costs also grow incrementally due to increasing traffic demand. Fig. <ref> presents the accumulative costs (i.e., the total cost from the beginning to the current time slot) of different approaches. It shows that SOFDA outperforms the others because the existing approaches focus on minimizing the traditional tree cost and thus tend to miss many good opportunities to deploy the VNFs on a longer path with sufficient VMs. By contrast, SOFDA carefully examines the edge costs and node costs and acquires the best trade-off between utilizing more VMs (leading to a smaller forest) and reducing the number of VMs, especially when the network load increases. §.§ Implementation To evaluate SOF in real environments, we implement SOFDA in Emulab <cit.>. The version and build of Emulab are 4.570 and 03/17/2017, respectively. We create the topology using the NS format defined by Emulab and run Ubuntu 14.04 on each end host. We also deploy an experimental SDN with HP ProCurve 5406zl OpenFlow-enabled switches and HP DL380G8 servers, where OpenDaylight is the OpenFlow controller, and OpenStack is employed to manage VMs. To support distributed computation, we run multiple OpenDaylight instances in VMs deployed on different servers and leverage the ODL-SDNi architecture <cit.>, which enables inter-controller communications. In addition, SOFDA is implemented as an application on top of OpenDaylight and relies on OpenDaylight APIs to install forwarding rules into the switches. It also calls OpenStack APIs to launch VM instances on which the VNFs are enabled. The goal is to evaluate the transcoded and watermarked video performance in an environment with limited resources. Our testbed includes 14 nodes and 20 links, where the link capacity is set to 50 Mbps, and each node can support one VNF, as explained in Section <ref>. Two nodes are randomly selected as the video sources connecting to YouTube, and the full-HD test video is 137 seconds long, encoded in H.264 with an average bit rate of 8 Mbps. Four nodes are randomly selected as destinations playing the videos with the VLC player. The video streams are processed by VNFs, including a video transcoder and a watermarker implemented with FFmpeg, before reaching the destinations. The available bandwidth of each link ranges from 4.5 Mbps to 9 Mbps to emulate a scenario with network congestion, in which the video playback may stall and re-buffer. During the video playback, we measure the startup latency and the total video re-buffering time, which are crucial for user QoE. Table <ref> summarizes the average startup latency and re-buffering time of different approaches. The results show that the startup latency of SOFDA is 20% and 33% shorter than that of eNEMP and eST, and the video stalling time of SOFDA is 16% and 21% smaller than that of eNEMP and eST, respectively. The experiment results also indicate that SOFDA routes traffic to less congested links compared with eNEMP and eST, and fewer packets are thereby lost. § CONCLUSION In this paper, we investigated a new optimization problem (i.e., SOF) for cloud SDN. Compared with previous studies, the problem is more challenging because both the routing of a forest with multiple trees and the allocation of multiple VNFs in each tree must be considered. We proposed a 3ρ_ST-approximation algorithm (SOFDA) to effectively handle the VNF conflict, which has not been explored by previous Steiner tree algorithms. We also discussed the distributed implementation of SOFDA. Simulation results show that SOFDA outperforms the existing algorithms by over 25%.
Implementation results indicate that SOFDA can significantly improve the QoE of YouTube traffic. Since current IP multicast supports dynamic group membership (i.e., each user can join and leave a tree at any time), our future work is to explore the online problem of rerouting the forest and relocating the VNFs in cloud SDN with a performance guarantee. § THE PROOF OF THEOREM <REF> We prove the theorem by a polynomial-time reduction from a variant of the Steiner tree problem, where all the edge costs are positive and satisfy the triangle inequality, to the SOF problem. Given any instance G of the Steiner tree problem, we construct a corresponding instance G' of the SOF problem as follows. We first replicate G into G', set |𝒞|=1 in G', and add one source s to G'. We let the root r be the only VM in G' and let the nodes in U of G be the destinations in D of G'. Root r is connected to s with an edge whose cost is set to an arbitrary value w>0, so as to obtain the instance G'. In the following, we prove OPT_G'=OPT_G+w. Because the edge e_r,s exists only in G', and any solution of G' must contain a subgraph of G that is a tree rooted at r spanning all the nodes in U, OPT_G≤ OPT_G'-w holds. In addition, OPT_G≥ OPT_G'-w; otherwise, such a Steiner tree of G plus the edge e_r,s would be a solution of G' with a smaller cost than OPT_G'. Hence, given OPT_G (or OPT_G'), we can obtain OPT_G' (or OPT_G) by adding (or removing) the edge e_r,s. The theorem follows. § THE PROOF OF LEMMA <REF> Consider any three nodes a, b, and c in 𝒢. Since 𝒢 is a complete graph, the edge between a and b, the edge between b and c, and the edge between a and c form a triangle in 𝒢. Clearly, the total cost of the edge between a and b and the edge between b and c cannot be smaller than the cost of the edge between a and c; otherwise, the connection cost of the shortest path between a and c in G would be greater than the total connection cost of the shortest path between a and b and the shortest path between b and c, which is a contradiction. The lemma follows. § THE PROOF OF THEOREM <REF> There are two possible cases for the last VM u: either u is one of the last VMs in ℱ^OPT (the first case), or it is not (the second case). For the first case, let ℱ^OPT(u) be a subgraph of ℱ^OPT that connects u to all the destinations in D. Thus, c(ℱ^OPT(u))≤ c(ℱ_E^OPT). It is worth noting that the subgraph ℱ^OPT(u) may also span s (and even the other last VMs in ℱ^OPT) so as to connect u and all the destinations. Hence, the cost of the minimum Steiner tree that spans u and all the destinations in D must be no greater than c(ℱ^OPT(u))≤ c(ℱ_E^OPT) when u also runs the last VNF in ℱ^OPT. On the other hand, the k-stroll problem is NP-hard and has a 2-approximation algorithm in metric graphs, which satisfy the triangle inequality. According to Lemma <ref>, 𝒢 satisfies the triangle inequality, and the cost of W_𝒢 is thereby no greater than twice the cost of the shortest walk W_𝒢^OPT that visits at least |𝒞|+1 distinct nodes from s to u in 𝒢. Since the cost of W_𝒢^OPT is equal to the minimum setup and connection costs of a walk that visits at least |𝒞| VMs from s to u in G, the cost of W_𝒢^OPT is bounded by the total setup and connection costs of the walk from s to u in ℱ^OPT. Therefore, the cost of W_𝒢 is bounded by 2c(ℱ^OPT). On the other hand, the connection cost of connecting u to all destinations is bounded by ρ_ST· c(ℱ_E^OPT). Thus, the cost of the service overlay forest with the last VM u is bounded by (2+ρ_ST)c(ℱ^OPT) when u runs f_|𝒞| in ℱ^OPT.
Note that SOFDA-SS constructs a service overlay forest for every possible last VM u and chooses the forest with the minimum cost. Since at least one VM runs f_|𝒞| in ℱ^OPT, the cost of the service overlay forest generated by SOFDA-SS is bounded by (2+ρ_ST)c(ℱ^OPT). The theorem follows. § SCENARIOS WITH SETUP COSTS ON SOURCES In the following, we extend SOFDA-SS to support the case with a setup cost assigned to the source. Let c(s) denote the cost to enable source s. Let the cost of the edge between nodes v_1 and v_2 in ℰ be

c(v_1,v_2) = ∑_(a,b)∈P c((a,b)) +
  c(s)+c(u),              if v_1=s, v_2=u, or v_1=u, v_2=s;
  (c(s)+c(u)+c(v_2))/2,   if v_1=s, v_2≠u, or v_1=u, v_2≠s;
  (c(v_1)+c(s)+c(u))/2,   if v_1≠s, v_2=u, or v_1≠u, v_2=s;
  (c(v_1)+c(v_2))/2,      otherwise,

where u and P denote the last VM and the shortest path between nodes v_1 and v_2 in G, respectively. For a reason similar to that in Section <ref>, we set the cost of the edges in ℰ in a similar way, but additionally account for the source cost c(s). The approximation ratio still holds, and the proof is similar to that of Theorem <ref>. § PSEUDO CODES This appendix contains the pseudocodes of the following procedures: Instance Construction of the k-Stroll Problem; Identification of the Walk with |𝒞| VMs; Instance Construction of the Steiner Tree Problem; and Augmentation of the Service Overlay Forest. | http://arxiv.org/abs/1703.09025v1 | {
"authors": [
"Jian-Jhih Kuo",
"Shan-Hsiang Shen",
"Ming-Hong Yang",
"De-Nian Yang",
"Ming-Jer Tsai",
"Wen-Tsuen Chen"
],
"categories": [
"cs.NI"
],
"primary_category": "cs.NI",
"published": "20170327121255",
"title": "Service Overlay Forest Embedding for Software-Defined Cloud Networks"
} |
http://arxiv.org/abs/1703.09153v3 | {
"authors": [
"Cheng-Wei Chiang",
"Hiroshi Okada",
"Eibun Senaha"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170327155048",
"title": "Dark Matter, Muon $g-2$, Electric Dipole Moments and $Z\\to \\ell_i^+ \\ell_j^-$ in a One-Loop Induced Neutrino Model"
} |
|
Finding the Size and the Diameter of a Radio Network Using Short Labels [1] Barun Gorain [2] Andrzej Pelc [3] =========================================================================== footnote [1] A preliminary version of this paper appeared in Proc. 19th International Conference on Distributed Computing and Networking (ICDCN 2018). [2] Department of Electrical Engineering and Computer Science, Indian Institute of Technology Bhilai, India. [email protected] [3] Département d'informatique, Université du Québec en Outaouais, Gatineau, Québec J8X 3X7, Canada, [email protected]. Partially supported by NSERC discovery grant 2018-03899 and by the Research Chair in Distributed Computing at the Université du Québec en Outaouais. The number of nodes of a network, called its size, and the largest distance between nodes of a network, called its diameter, are among the most important network parameters. Knowing the size and/or the diameter (or a good upper bound on those parameters) is a prerequisite of many distributed network algorithms, ranging from broadcasting and gossiping, through leader election, to rendezvous and exploration. A radio network is a collection of stations, called nodes, with wireless transmission and receiving capabilities. It is modeled as a simple connected undirected graph whose nodes communicate in synchronous rounds. In each round, a node can either transmit a message to all its neighbors, or stay silent and listen. At the receiving end, a node v hears a message from a neighbor w in a given round, if v listens in this round, and if w is its only neighbor that transmits in this round. If v listens in a round, and two or more neighbors of v transmit in this round, a collision occurs at v. If v transmits in a round, it does not hear anything in this round. Two scenarios are considered in the literature: if listening nodes can distinguish collision from silence (the latter occurs when no neighbor transmits), we say that the network has the collision detection capability; otherwise there is no collision detection. We consider the tasks of size discovery and diameter discovery: finding the size (resp. the diameter) of an unknown radio network with collision detection. All nodes have to output the size (resp. the diameter) of the network, using a deterministic algorithm. Nodes have labels which are (not necessarily distinct) binary strings. The length of a labeling scheme is the largest length of a label. We concentrate on the following problems: What is the shortest labeling scheme that permits size discovery in all radio networks of maximum degree Δ? What is the shortest labeling scheme that permits diameter discovery in all radio networks? Our main result states that the minimum length of a labeling scheme that permits size discovery is Θ(loglogΔ). The upper bound is proven by designing a size discovery algorithm using a labeling scheme of length O(loglogΔ), for all networks of maximum degree Δ. The matching lower bound is proven by constructing a class of graphs (in fact even of trees) of maximum degree Δ, for which any size discovery algorithm must use a labeling scheme of length at least Ω(loglogΔ) on some graph of this class.
By contrast, we show that diameter discovery can be done in all radio networks using a labeling scheme of constant length. Keywords: radio network, collision detection, network size, network diameter, labeling scheme § INTRODUCTION §.§ The model and the problem The number of nodes of a network, called its size, and the largest distance between nodes of a network, called its diameter, are among the most important network parameters. Knowing the size and/or the diameter (or a good upper bound on those parameters) by nodes of a network, or by mobile agents operating in it, is a prerequisite of many distributed network algorithms, ranging from broadcasting and gossiping, through leader election, to rendezvous and exploration. A radio network is a collection of stations, called nodes, with wireless transmission and receiving capabilities. It is modeled as a simple connected undirected graph. As is usually assumed in the algorithmic theory of radio networks <cit.>, all nodes start simultaneously and communicate in synchronous rounds. In each round, a node can either transmit a message to all its neighbors, or stay silent and listen. At the receiving end, a node v hears a message from a neighbor w in a given round, if v listens in this round, and if w is its only neighbor that transmits in this round. If v listens in a round, and two or more neighbors of v transmit in this round, a collision occurs at v. If v transmits in a round, it does not hear anything in this round. Two scenarios are considered in the literature: if listening nodes can distinguish collision from silence (the latter occurs when no neighbor transmits), we say that the network has the collision detection capability; otherwise there is no collision detection. We consider the tasks of size discovery and diameter discovery: finding the size (resp. the diameter) of an unknown radio network with collision detection. All nodes have to output the size (resp. the diameter) of the network, using a deterministic algorithm. Nodes have labels which are (not necessarily distinct) binary strings. These labels are given to (otherwise anonymous) nodes by an oracle knowing the network, whose aim is to help the nodes in executing a size or diameter discovery algorithm using these labels. Such informative labeling schemes, also referred to as advice given to nodes, have been previously studied, e.g., in the context of ancestor queries <cit.>, MST computation <cit.>, and topology recognition <cit.>, for wired networks, and in the context of topology recognition <cit.> and broadcasting <cit.> for radio networks. The length of a labeling scheme is the largest length of a label. A priori, every node knows only its own label. In this paper we concentrate on the problem of finding a shortest labeling scheme permitting size and diameter discovery in radio networks with collision detection. Clearly, some labels have to be given to nodes, because otherwise (in anonymous radio networks) no deterministic communication is possible. Indeed, for any deterministic algorithm in an anonymous network, all nodes would transmit in exactly the same rounds, and hence no node would ever hear anything. On the other hand, labeling schemes of length Θ(log n), for n-node networks, are certainly enough to discover the size of the network, as it can then be coded in the labels. Similarly, length Θ(log D) is enough to discover the diameter D.
Our aim is to answer the following questions. What is the shortest labeling scheme that permits size discovery in all radio networks of maximum degree Δ? What is the shortest labeling scheme that permits diameter discovery in all radio networks? §.§ Our results Our main result states that the minimum length of a labeling scheme that permits size discovery is Θ(loglogΔ). The upper bound is proven by designing a size discovery algorithm using a labeling scheme of length O(loglogΔ), for all networks of maximum degree Δ. The matching lower bound is proven by constructing a class of graphs (in fact even of trees) of maximum degree Δ, for which any size discovery algorithm must use a labeling scheme of length at least Ω(loglogΔ) on some graph of this class. By contrast, we show that diameter discovery can be done in all radio networks using a labeling scheme of constant length. §.§ Related work Algorithmic problems in radio networks modeled as graphs were studied for such distributed tasks as broadcasting <cit.>, gossiping <cit.> and leader election <cit.>. In some cases <cit.>, the model without collision detection was used; in others <cit.>, the collision detection capability was assumed. Providing nodes of a network, or mobile agents circulating in it, with information of arbitrary type (in the form of binary strings) that can be used by an algorithm to perform some network task, has been proposed in <cit.>. This approach was referred to as algorithms using informative labeling schemes, or equivalently, algorithms with advice. When advice is given to nodes, two variations are considered: either the binary string given to nodes is the same for all of them <cit.>, or different strings may be given to different nodes <cit.>, as in our present case. If strings may be different, they can be considered as labels assigned to (otherwise anonymous) nodes. Several authors studied the minimum length of labels required for a given network problem to be solvable, or to solve a network problem in an efficient way. The framework of advice or of labeling schemes permits us to quantify the amount of needed information, regardless of the type of information that is provided and of the way the algorithm subsequently uses it. In <cit.>, the authors compared the minimum size of advice required to solve two information dissemination problems, using a linear number of messages. In <cit.>, given a distributed representation of a solution for a problem, the authors investigated the number of bits of communication needed to verify the legality of the represented solution. In <cit.>, the authors established the size of advice needed to break competitive ratio 2 of an exploration algorithm in trees. In <cit.>, it was shown that advice of constant size permits to carry out the distributed construction of a minimum spanning tree in logarithmic time. In <cit.>, short labeling schemes were constructed with the aim to answer queries about the distance between any pair of nodes. In <cit.>, the advice paradigm was used for online problems. In the case of <cit.>, the issue was not efficiency but feasibility: it was shown that Θ(nlog n) is the minimum size of advice required to perform monotone connected graph clearing. There are three papers studying the size of advice in the context of radio networks. In <cit.>, the authors studied radio networks without collision detection for which it is possible to perform centralized broadcasting in constant time.
They proved that a total of O(n) bits of additional information (i.e., not counting the labels of nodes) given to all nodes are sufficient for performing broadcast in constant time in such networks, and a total of o(n) bits are not enough. In <cit.>, the authors considered the problem of topology recognition in wireless trees without collision detection. Similarly to the present paper, they investigated short labeling schemes that permit this task to be accomplished. It should be noted that the results in <cit.> and in the present paper are not comparable: <cit.> studies a harder task (topology recognition) in a weaker model (no collision detection), but restricts attention only to trees, while the present paper studies easier tasks (size and diameter discovery) in a stronger model (with collision detection), but our results hold for arbitrary networks. In a recent paper <cit.>, the authors considered the problem of broadcasting in radio networks without collision detection, and proved that this can be done using a labeling scheme of constant length. § PRELIMINARIES According to the definition of labeling schemes, a label of any node should be a finite binary string. For ease of comprehension, in our positive result concerning size discovery, we present our labels in a more structured way, namely as sequences (a,b,c,d), where a is a binary string of length 7, and each of b, c and d is a pair whose first term is a binary string, and the second term is a bit. Each of the components a, b, c, d is later used in the size discovery algorithm in a particular way. It is well known that such a sequence (a,b,c,d) can be unambiguously coded as a single binary string whose length is a constant multiple of the sum of lengths of all binary strings that compose it. Hence, presenting labels in this more structured way and skipping the details of the encoding does not change the order of magnitude of the length of the constructed labeling schemes. In our algorithms, we use the subroutine Wave(x), for a positive integer x, that can be implemented in radio networks with collision detection (cf. <cit.>, where a similar procedure was called Algorithm Encoded-Broadcast). We describe the subroutine below, for the sake of completeness. The aim of Wave(x) is to transmit the integer x to all nodes of the network, bit by bit. During the execution of Wave(x), each node is colored either blue, or red, or white. Blue and white nodes know x, and after each phase red neighbors of blue nodes learn x and become blue, while blue nodes become white. Wave(x) is initiated by some node v. At the beginning, v is blue and all other nodes are red. Let p=(a_1a_2… a_k) be the binary representation of the integer x. Consider the binary sequence p^*=(b_1,b_2, …, b_2k+2) of length 2k+2 that is formed from p by replacing every bit 1 by 10, every bit 0 by 00, and adding 11 at the end. For example, if p=(1101) then p^*=(1010001011). Each phase of Wave(x) lasts 2k+2 rounds, starting in some round r+1. In consecutive rounds r+1,… ,r+2k+2, every blue node transmits some message m in round r+i, if b_i=1, and remains silent if b_i=0. A red node w listens until a round when it hears either a collision or a message (this is round r+1), and then until two consecutive rounds occur when it hears either a collision or a message. Suppose that the second of these two rounds is round s. Then w decodes p^* by putting b_i=1 if it heard a message or a collision in round t+i, and b_i=0 if it heard silence in round t+i, where t=s-(2k+2).
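To make the bit-stuffed code concrete, the following minimal Python sketch (our illustration, not part of the original description) computes p^* from x and decodes it back. Since the stuffed body consists of the pairs 10 and 00, the terminating pair 11 can never occur earlier, which is why the code is prefix-free:

```python
def encode_wave(x: int) -> str:
    """Encode x for Wave(x): bit 1 -> '10', bit 0 -> '00', then append '11'."""
    p = bin(x)[2:]                       # binary representation of x
    return "".join("10" if b == "1" else "00" for b in p) + "11"

def decode_wave(pstar: str) -> int:
    """Recover x from p*: read two bits at a time until the '11' terminator."""
    bits = []
    for i in range(0, len(pstar), 2):
        pair = pstar[i:i + 2]
        if pair == "11":                 # end marker: '11' never occurs earlier
            return int("".join(bits), 2)
        bits.append("1" if pair == "10" else "0")
    raise ValueError("missing terminator")

assert encode_wave(0b1101) == "1010001011"   # the example from the text
assert decode_wave("1010001011") == 0b1101
```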
From p^*, the node w unambiguously computes p and then x. Round s=r+2k+2 is the round in which all blue nodes finished transmitting in the current phase of Wave(x). In round s+1, which starts the next phase, all blue nodes become white, and all red nodes that heard a collision or a message in round s become blue. In this way, the subroutine Wave(x) proceeds from level to level, where the i-th level is formed by the nodes at distance i from v in the graph. Every node at a level i>0 is involved in the subroutine in two consecutive phases, first as a red node and then as a blue node. The initiating node v is involved only in the first phase. Since a sequence of the form p^* cannot be a prefix of another sequence of the form q^*, every node can determine when the transmissions from the previous level are finished, and can correctly decode x. In our applications, no other transmissions are performed simultaneously with the transmissions prescribed by Wave(x), and hence nodes can compute when a given Wave will terminate. § FINDING THE SIZE OF A NETWORK This section is devoted to the task of finding the size of a network. §.§ The Algorithm Size Discovery In this section, we construct a labeling scheme of length O(loglogΔ) and a size discovery algorithm using this scheme and working for any radio network of maximum degree Δ. §.§.§ Construction of the labeling scheme Let G be a graph of maximum degree Δ. Let r be any node of G of degree Δ. For l ≥ 0, a node is said to be in level l if its distance from r is l. Let h be the maximum level in G. Let V(l) be the set of nodes in level l. For any node v∈ V(l), let N(v) be the set of neighbors of v which are in level l+1. Before giving the detailed description of the labeling scheme and of the algorithm, we give a high-level idea of our size discovery method. The algorithm is executed level by level in a bottom-up fashion. Each node of a level maintains an integer variable, weight, such that the sum of the weights of all nodes in level l, for 0≤ l≤ h, is equal to the total number of nodes in levels l'≥ l. Using the assigned labels, these weights are transmitted to a special set of nodes, called an upper set, in level l-1. An upper set in level l-1 is an ordered subset of the nodes in level l-1 which covers all the nodes in level l, i.e., each node in level l is a neighbor of at least one node in the upper set of level l-1. Using this property of upper sets, the capability of collision detection, and a specially designed labeling of the nodes, multiple accounting of the weights is prevented. The weights are transmitted upwards level by level, and finally the node at level 0, i.e., the root, calculates its weight, which is the size of the network. In the final stage, this size is transmitted to all other nodes. Let U={v_1,v_2,⋯ ,v_k} be an ordered set of nodes in level l. For all v ∈ V(l)∖ U, we define N'(v,U)= N(v)∖ (∪_w ∈ UN(w)), and for v_j∈ U, N'(v_j,U)=N(v_j)∖ (∪_i=1^j-1N(v_i)). An ordered subset U of V(l) is said to be an upper set at level l if, for each v ∈ U, N'(v,U) ≠ ∅ and ∪_w∈ UN(w)=V(l+1). Fig. <ref> shows an example of an upper set U={v_1,v_3,v_4} for a two-level graph. The node v_2 is not part of the upper set, as the set of neighbors of v_2 is a subset of the neighbors of v_1. Below, we propose an algorithm that computes an upper set at each level l, for 1≤ l≤ h-1. The algorithm works in a recursive way. The first node v_1 of the set is chosen arbitrarily. At any step, let US(l)={v_1, ⋯, v_i} be the set computed by the algorithm in the previous step.
Let u_1^j, u_2^j, ⋯, u_|N'(v_j,US(l))|^j be the nodes in N'(v_j,US(l)), for 1 ≤ j≤ i. Let k_j=⌊log|N'(v_j,US(l))|⌋+1. If V(l+1)∖ (∪_w∈ US(l)N(w)) ≠ ∅, then the next node of US(l) is added using the following rules. * Find the last node v_a in US(l) that has a common neighbor in {u_1^a,u_2^a, ⋯, u_k_a^a} with some node v ∈ V(l)∖ US(l) such that N(v)∖ (∪_w∈ US(l)N(w)) ≠ ∅. Among such nodes v, choose one that has a common neighbor u_b^a with v_a, where b is the smallest index in {1,2,⋯,k_a} for which u_b^a is such a common neighbor. Add v to US(l) as the node v_i+1. * If no such node v_a exists in US(l), add any node v ∈ V(l)∖ US(l) with N(v)∖ (∪_w∈ US(l)N(w)) ≠ ∅ as the node v_i+1. The construction of US(l) is completed when ∪_w∈ US(l)N(w)=V(l+1). Also, for every node v_i ∈ US(l), the nodes u_1^i,u_2^i, ⋯, u_k_i^i are assigned unique ids from the set {1,2,⋯, ⌊logΔ⌋+1}. Moreover, if a node v_m is added to US(l) according to the first rule, where v_m has the common neighbor u_c^i with v_i, then the node u_1^m gets the same id as u_c^i. If a node v_m is added according to the second rule, then u_1^m gets the id 1. These ids will later be used to construct the labels of the nodes. In Algorithm <ref> we give the pseudocode of the procedure that constructs an upper set US(l) for each level l, and that assigns ids to some nodes of V(l+1), as explained above. This procedure uses in turn the subroutine Compute(v,j), whose pseudocode is presented in Algorithm <ref>. The nodes in the upper set US(l) and the assignment of ids are shown in Fig. <ref>. The node v_1 is chosen in US(l) arbitrarily. The number of neighbors of v_1 in V(l+1) is 5 (the node v_1 and its neighbors in V(l+1) are shown as gray circles). Three neighbors of v_1 (as ⌊log 5⌋+1=3) are assigned ids 1, 2, 3, as shown in the figure. Note that the node with id 3 is also a neighbor of another node in V(l). Hence, that node of V(l) is selected as the next node v_2 in US(l), according to Step <ref> of Algorithm <ref>. The size of N'(v_2, US(l)) is 4 (the node v_2 and the nodes in N'(v_2, US(l)) are shown as dotted pink circles). According to Step <ref> of Algorithm <ref>, the node u_1^2 gets the same id 3 as the common neighbor u_3^1 of v_1 and v_2. In a similar fashion, the node v_3 (shown as a dashed green circle) is chosen as the next node in US(l). After adding v_3, no node can be added to US(l) according to the first rule. The node v_4 is chosen according to the second rule and added to US(l), and the construction of US(l) is complete, as all nodes of V(l+1) are covered. For 0≤ l≤ h, we define the weight W(v) of a node v ∈ V(l) as follows. If v ∈ V(h), we define W(v)=1. For a node v ∈ V(l), where 0≤ l≤ h-1, we define W(v)=1+∑_u∈ N'(v,US(l)) W(u). Thus, for any level l, the sum of the weights of nodes at level l is equal to the total number of nodes in levels l'≥ l. Hence the weight of the node r is the size of the network.
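The construction and the weight computation can be summarized by the following simplified Python sketch (identifiers are ours, and the tie-breaking via tagged common neighbors used for the id assignment is omitted for brevity; it is not the pseudocode of the algorithms referenced above):

```python
# Simplified sketch: build an ordered upper set US(l) greedily and compute
# the node weights bottom up. levels is a list of node lists (level 0 first),
# and N maps each node to its neighbors in the next level.
def upper_set(level_l, level_l1, N):
    US, covered, chosen = [], set(), set()
    while covered != set(level_l1):
        # pick any node of level l that still covers new nodes of level l+1
        v = next(v for v in level_l if v not in chosen
                 and (set(N[v]) & set(level_l1)) - covered)
        new = (set(N[v]) & set(level_l1)) - covered   # this is N'(v, US(l))
        US.append((v, new)); chosen.add(v); covered |= new
    return US

def weights(levels, N):
    W = {v: 1 for v in levels[-1]}            # level h: every weight is 1
    for l in range(len(levels) - 2, -1, -1):
        for v in levels[l]:
            W[v] = 1                          # default; covers V(l) \ US(l)
        for v, new in upper_set(levels[l], levels[l + 1], N):
            W[v] = 1 + sum(W[u] for u in new)
    return W                                   # W[root] is the network size
```

Since the sets N'(v,US(l)) partition level l+1, the weights at each level sum to the number of nodes at levels below, which is exactly the invariant used by the algorithm.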
The ids assigned to the nodes in the above fashion have the following purpose. The objective of the algorithm is to let the nodes in US(l) jointly learn the sum of the weights of the nodes in levels ≥ l. This can only be done if the weight of a node in level l+1 is transmitted to exactly one node in level l. The ids assigned to the nodes in level l+1 help to ensure this. In Fig. <ref>, the node v_2 has two neighbors in level l+1 with id 3. The algorithm asks every node with a specific id to transmit in a specific round in different phases. Now, when the nodes with id 3 transmit, the node v_1 successfully receives the messages, but a collision happens at v_2. The node v_2 immediately learns that the ongoing message transmissions are dedicated to some other node, and therefore it ignores the activities for the remaining rounds of the current phase. Once the node v_1 learns its weight, it asks its neighbors in N'(v_1,US(l)) not to participate in the subsequent rounds. In the next phase, v_2 does not hear any collision (as one node with id 3 does not transmit) while its neighbors with positive ids transmit; hence it successfully learns its weight. But the node v_3 hears a collision when the node with id 2 transmits. Therefore, v_3 ignores all the activities in this phase. In the next phase, v_3 successfully learns its weight. In this way, the consecutive nodes added to US(l) by rule 1 learn their weights one by one in different phases, and hence no multiple accounting can occur. We are now ready to define the labeling scheme Λ that will be used by our size discovery algorithm. The label Λ(v) of each node v contains two parts. The first part is a vector of markers, which is a binary string of length 7, used to identify nodes with different properties. The second part is a vector of three tags. Each tag is a pair (id,b), where id is the binary representation of an integer from the set {1,2,⋯, ⌊logΔ⌋+1}, and b is either 0 or 1. Every node will use the tags to identify the time slot when it should transmit and what it should transmit in this particular time slot. We first describe how the markers are assigned to different nodes of G. * The node r gets the marker 0, and one of the nodes in level h gets the marker 1. * Choose any set of ⌊logΔ⌋+1 nodes in N(r) and give them the marker 2. * Let P be a simple path from r to the node with marker 1. All the internal nodes of P get the marker 3. * For each l, 0≤ l≤ h-1, all the nodes in US(l) get the marker 4. The last node of US(l) gets the marker 5, and a unique node from V(l+1) with maximum weight in this set gets the marker 6. The first part of every label is a binary string M of length 7, where the markers are stored. Note that a node can be marked by multiple markers. If the node is marked by the marker i, for i=0,… ,6, we have M(i)=1; otherwise, M(i)=0. The markers are assigned to the nodes in the network in order to identify different types of nodes that play different roles in the proposed algorithm. Some specific rounds are allotted to each level, during which all the nodes of that level transmit their weights. Every node learns from its label in which time slot it has to transmit. The root is distinguished by the marker 0. In the algorithm, the node with marker 0 first learns the value of its degree, which is Δ, using the messages transmitted by the nodes with marker 2. Then it transmits this value to all other nodes using the subroutine Wave(Δ). The node with marker 1 is recognized as one of the nodes at the last level of the BFS tree rooted at the node r. This node is the first to learn the value of h, and it then transmits h to r using the internal nodes, marked by marker 3, on the shortest path from this node to r. Markers 4 and 5 are used to identify nodes which are responsible for transmitting the values of the weights, and the node assigned marker 6 is the node of a level which transmits its weight last among the nodes in that level.
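Recall from the Preliminaries that the structured label, i.e., the marker vector M together with the three tags defined below, must eventually be flattened into a single binary string of proportional length. The paper leaves the encoding unspecified; one simple possibility (our choice: bit doubling with a 01 separator) is sketched here:

```python
# One possible self-delimiting flattening of a structured label into a single
# binary string (an illustration; the paper does not fix a particular code).
# Each data bit b becomes 'bb'; components are separated by '01', which can
# never be read as a doubled data bit.
def flatten(components):
    return "01".join("".join(b + b for b in c) for c in components)

def unflatten(s):
    parts, cur, i = [], [], 0
    while i < len(s):
        if s[i] == s[i + 1]:           # a doubled data bit
            cur.append(s[i]); i += 2
        else:                          # the '01' separator
            parts.append("".join(cur)); cur = []; i += 2
    parts.append("".join(cur))
    return parts

label = flatten(["1010000", "011", "101", "0"])   # M and three tag strings
assert unflatten(label) == ["1010000", "011", "101", "0"]
```

The flattened string has length at most a constant multiple of the sum of the component lengths, matching the claim made in the Preliminaries.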
The second part of the label of each node v is a vector [L_1(v),L_2(v),L_3(v)] containing three tags, namely, the Δ-learning tag L_1(v), the collision tag L_2(v), and the weight-transmission tag L_3(v). The assignment of these tags is described below. * The Δ-learning tags will be used for learning the value of Δ by the root r. The node r and all the nodes with marker 2 get their Δ-learning tags as follows. The nodes with marker 2 are neighbors w_1, w_2, …, w_⌊logΔ⌋+1 of the node r. For each i, 1≤ i≤⌊logΔ⌋+1, node w_i is assigned the tag (B(i),b_i), where B(i) is the binary representation of the integer i and b_i is the i-th bit of the binary representation of Δ. The node r gets the tag (B,0), where B is the binary representation of the integer ⌊logΔ⌋+1. All other nodes of G get the Δ-learning tag (0,0). * The collision tags will be used to create collisions. For each l, 0 ≤ l ≤ h-1, each node in V(l+1) gets its collision tag as follows. Let US(l)={v_1,v_2,⋯ ,v_k}. For 1 ≤ i ≤ k and 1≤ j≤⌊log |N'(v_i,US(l))|⌋+1, the node u_j^i ∈ V(l+1) gets the collision tag (id(u_j^i), b_m), where m is the position of the integer id(u_j^i) in the set ID(v_i) of ids assigned to u_1^i,…,u_k_i^i, taken in increasing order, and b_m is the m-th bit of the binary representation of |N'(v_i,US(l))|. All other nodes v ∈ V(l+1) get the collision tag (0,0). * The weight-transmission tags will be used by nodes to transmit their weights to a unique node in the previous level. For each l, 0 ≤ l ≤ h-1, each node in V(l+1) gets its weight-transmission tag as follows. Let US(l)={v_1,v_2,⋯, v_k}. For 1≤ i≤ k, let Q_i(x)={u ∈ N'(v_i,US(l)) | W(u)=x}. Choose any subset {w_1,w_2,…, w_⌊log |Q_i(x)|⌋+1} of Q_i(x). For 1 ≤ j ≤⌊log |Q_i(x)|⌋+1, the node w_j gets the weight-transmission tag (B(j),b_j), where B(j) is the binary representation of the integer j, and b_j is the j-th bit of the binary representation of |Q_i(x)|. All other nodes v ∈ V(l+1) get the weight-transmission tag (0,0). This completes the description of the labeling scheme Λ. §.§.§ Description of Algorithm Size Discovery Algorithm Size Discovery using the scheme Λ consists of three procedures, namely Procedure Parameter Learning, Procedure Size Learning, and Procedure Final. The high-level idea and the detailed descriptions of each of these procedures are given below. Procedure Parameter Learning. The aim of this procedure is for every node in G to learn three integers: Δ, the number of the level to which the node belongs, and h. The procedure consists of two stages. In the first stage, which starts in round 1, every node with M(2)=1 and M(0)=0 (i.e., a neighbor of r with marker 2) transmits its Δ-learning tag in round i, if the id in the first component of this tag is i. The node with M(0)=1, i.e., the node r, collects all the tags until it receives a message from a node whose id equals the id in the Δ-learning tag of r. After receiving this message, the node r has learned all pairs (B(1),b_1), ..., (B(m), b_m), where m is the id of r and B(i) is the binary representation of the integer i, corresponding to the Δ-learning tags at the respective nodes. Then node r computes the string s=(b_1b_2 … b_m). This is the binary representation of Δ.
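The decoding step performed by r can be pictured in a few lines of Python (an illustration with our names; the radio rounds themselves deliver the tags):

```python
# Illustrative decoding at node r: given the Delta-learning tags (i, b_i)
# heard from its neighbors with marker 2, sort the bits by the index i and
# read them as the binary representation of Delta.
def reconstruct_delta(tags):
    # tags: list of (i, b_i) pairs, one per neighbor with marker 2
    bits = "".join(str(b) for _, b in sorted(tags))
    return int(bits, 2)

# If Delta = 13 = 1101 in binary, r hears (1,1), (2,1), (3,0), (4,1):
assert reconstruct_delta([(3, 0), (1, 1), (4, 1), (2, 1)]) == 13
```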
In the second stage, after learning Δ, the node r initiates the subroutine Wave(Δ). Every node other than r waits until it detects two consecutive non-silent rounds. This indicates the end of the wave at this node, and happens 2m+2 rounds after the wave has been started by the nodes of the previous level. The node computes s, learns Δ, computes m=⌊logΔ⌋+1, and sets its level number to j, if the end of the wave at this node occurred in round m+j(2m+2). When the unique node with M(1)=1 learns its level number (which is h), it transmits the value of h in the next round. After receiving the first message containing an integer, a node with M(3)=1 sets h to this integer and retransmits it. When the node with M(0)=1, i.e., the node r, gets the first message after round m that contains an integer, it learns h and initiates Wave(h). The stage and the entire procedure end in round t_1=m+h(2m+2)+h+h(2(⌊log h⌋+1)+2). Note that after learning h, every node can compute t_1, and thus knows when Procedure Parameter Learning ends. Procedure Size Learning. This is the crucial procedure of the algorithm. Its aim is the learning of the size of the graph by the node r, i.e., the learning of its weight W(r). This procedure consists of h phases. In the i-th phase, where 1≤ i≤ h, the participating nodes are from level h-i+1 and from level h-i. We will show by induction on i that at the end of the i-th phase, all nodes of level h-i correctly compute their weights. Thus at the end of the h-th phase, the node r will learn its weight, i.e., the size of the network. The high-level idea of the i-th phase is the following. In order to learn its weight, a node v in US(h-i) must learn the weights of all nodes u in N'(v,US(h-i)) and subsequently add all these weights. Weight-transmission tags are used to achieve this. The difficulty consists in preventing other neighbors in level h-i of such nodes u from adding these weights when computing their own weights, as this would result in multiple accounting (see Fig. <ref>). This is done using collision tags to create collisions at other such nodes, so that nodes z in US(h-i) can identify neighbors in level h-i+1 outside of N'(z,US(h-i)) and ignore their weights. A node transmits its weight-transmission tag in a round which is an increasing function of its weight. Since the nodes in US(h-i) do not have any knowledge of their degrees, they must learn the maximum possible weight of a node in level h-i+1, in order to determine how long they must wait before receiving the last message from such a node. We now give a detailed description of the i-th phase. At the beginning of the first phase, all nodes in level h set their weight to 1. The i-th phase starts in round t_2(i)+1, where t_2(1)=t_1, and ends in round t_2(i+1). We will show that t_2(i+1) will be known by every node of the graph by the end of the i-th phase, i.e., by round t_2(i+1). In round t_2(i)+1 (which starts the i-th phase), the unique node u' of level h-i+1 with M(6)=1 (which is a node of this level with maximum weight) initiates Wave(W(u')). Every node in G learns the value x_i, which is the maximal weight of a node in level h-i+1, by round t'_2(i)=t_2(i)+2h(2(⌊log x_i⌋+1)+2). Since every node knows h and t_2(i), and learns x_i during the wave subroutine, it can compute the value t'_2(i) by which Wave(W(u')) is finished. After learning this integer, every node in level h-i+1 and every node in level h-i maintains a variable status, which can be either complete or incomplete. (The variable status is proper to a particular phase. In what follows we consider the status for phase i.) Initially, the status of every node in level h-i with M(4)=1 (the nodes in US(h-i)) and of every node in level h-i+1 is incomplete. The initial status of the nodes in level h-i with M(4)=0 is complete. At any time, only incomplete nodes will participate in this phase. The nodes with M(4)=0, i.e., the nodes outside US(h-i), set their weights to 1 and never participate in this phase. After learning its weight, a node v in US(h-i) gets status complete and transmits a stop message in a special round.
All the nodes in N'(v,US(h-i)) learn this stop message, either by receiving it or by detecting a collision in this special round, and become complete. Thus the nodes in N'(v,US(h-i)) never transmit in subsequent rounds, and this prevents multiple accounting of the weights. Let z be a node in level h-i+1 with status incomplete. If the id in the collision tag of z is a positive integer e, then z performs the following steps. * The node z transmits its collision tag for the first time in the i-th phase in round t'_2(i)+e. After that, the node z transmits its collision tag in every round t'_2(i)+e+jτ_i, where τ_i=⌊logΔ⌋+1+x_i(⌊logΔ⌋+1)+1 and j≥ 1, until it gets a stop message or detects a collision in round t'_2(i)+j'τ_i, for some integer j' ≥ 1. In the latter case, node z updates its status to complete. If the id in the weight-transmission tag of z is a positive integer e', then z performs the following steps. * The node z transmits the pair (t,W(z)), where t is its weight-transmission tag and W(z) is its weight, for the first time in the i-th phase in round t'_2(i)+(W(z)-1)(⌊logΔ⌋+1)+e'. After that, the node z transmits (t,W(z)) in every round t'_2(i)+(W(z)-1)(⌊logΔ⌋+1)+e'+jτ_i, where τ_i=⌊logΔ⌋+1+x_i(⌊logΔ⌋+1)+1 and j≥ 1, until it gets a stop message or detects a collision in round t'_2(i)+j'τ_i, for some integer j' ≥ 1. In the latter case, it updates its status to complete. Let z' be a node with M(4)=1, i.e., a node in US(h-i). The node z' (with status incomplete) performs the following steps. * If z' does not detect any collision in the time interval [t'_2(i)+(j-1)τ_i, t'_2(i)+(j-1)τ_i+⌊logΔ⌋+1], for some integer j ≥ 1, then the node changes its status to complete. In this interval, the node z' received the collision tags from the nodes in N'(z',US(h-i)). Suppose that the node z' learns the pairs (B(g_1),b_1), (B(g_2),b_2), ⋯, (B(g_k), b_k), where B(g_1), B(g_2), ⋯, B(g_k) are the binary representations of the integers g_1, g_2, ⋯, g_k, respectively, in increasing order, corresponding to the collision tags of the respective nodes. The node z' computes s'=(b_1b_2⋯ b_k). Let d be the integer whose binary representation is s'. The integer d is the size of N'(z',US(h-i)). Then z' waits until round t'_2(i)+jτ_i. By this time, all nodes in level h-i+1 that transmitted according to their collision tags and weight-transmission tags have already completed all these transmissions. If z' detects any collision in the time interval [t'_2(i)+(j-1)τ_i+⌊logΔ⌋+2, t'_2(i)+jτ_i-1], it changes its status back to incomplete. Otherwise, for 1 ≤ f ≤ x_i, let (B(1),b_1), (B(2),b_2), ⋯, (B(g(f)), b_g(f)) be the weight-transmission tags that the node z' received from nodes with weight f, where B(a) is the binary representation of the integer a. Let s_f=(b_1b_2⋯ b_g(f)), and let d_f be the integer whose binary representation is s_f. The integer d_f is the total number of nodes of weight f in N'(z',US(h-i)). The node z' computes the value ∑_f d_f. If the node z' had received any message from a node which is not in N'(z',US(h-i)), then the sum ∑_f d_f cannot be equal to the integer d, and hence the node learns that there is a danger of multiple accounting of weights. In that case, the node changes its status back to incomplete. Otherwise, if ∑_f d_f=d, node z' assigns W(z')=1+∑_f (f· d_f). After computing W(z'), the node z' transmits a stop message in round t'_2(i)+jτ_i. If z' is the node with M(5)=1 (i.e., the last node of US(h-i)), then after sending the stop message, it initiates Wave(T), where T is the current round number. After learning T from Wave(T), every node in G computes t_2(i+1)=T+2h(2(⌊log T⌋+1)+2). This is the round by which Wave(T) is finished. In this round, the i-th phase of the procedure is finished as well.
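The tags thus partition each repetition of length τ_i into a collision-tag window, x_i weight windows, and a stop round. The following fragment (our illustration, with our names; we place the weight windows right after the collision window, which is the layout consistent with the length of τ_i) spells out the slot arithmetic:

```python
from math import floor, log2

# Our illustrative layout of one repetition of phase i; offsets are relative
# to t'_2(i), and log2 of the maximum degree gives the window width.
def phase_layout(delta, x_i):
    m = floor(log2(delta)) + 1        # number of id slots per window
    tau = m + x_i * m + 1             # tau_i as defined in the text
    collision_window = range(1, m + 1)                 # offsets for ids 1..m
    weight_window = {w: range(m + (w - 1) * m + 1, m + w * m + 1)
                     for w in range(1, x_i + 1)}       # offsets per weight w
    stop_round = tau                                   # stop-message offset
    return tau, collision_window, weight_window, stop_round
```

For example, with ⌊logΔ⌋+1=4 and x_i=2, one repetition has length τ_i=13: offsets 1-4 carry collision tags, 5-8 and 9-12 carry the weight-transmission tags of nodes with weights 1 and 2, and offset 13 is the stop round.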
At the end of the h-th phase, the node r learns its weight, sets n=W(r), and the procedure ends. Procedure Final. After computing n, the node r initiates Wave(n). Every node in G computes the value of n and outputs it. The procedure ends after all nodes output n. Now our algorithm can be succinctly formulated as follows: execute Procedure Parameter Learning, then Procedure Size Learning, and finally Procedure Final. §.§ Correctness and analysis The proof of the correctness of Algorithm Size Discovery is split into two lemmas. Upon completion of Procedure Parameter Learning, every node in G correctly computes Δ, h, and its level number. Moreover, every node computes the round number t_1=m+h(2m+2)+h+h(2(⌊log h⌋+1)+2) by which the procedure is over. After round m=⌊logΔ⌋+1, the node r has learned all pairs (B(1),b_1), ..., (B(m), b_m), where B(i) is the binary representation of the integer i, corresponding to the Δ-learning tags at the respective nodes with M(2)=1. According to the assignment of the tags to the nodes, the binary string s=(b_1b_2… b_m) is the binary representation of the integer Δ. Therefore, the node r correctly learns Δ. After learning Δ, the node r initiates Wave(Δ). For every level i≥ 1, the wave ends at the nodes in level i in round m+i(2m+2). The nodes learn the value of Δ from the wave and calculate their level numbers. The node in the h-th level for which M(1)=1 learns h and transmits the value of h along the path whose internal nodes have M(3)=1. The node r learns h and initiates Wave(h). Every node computes h from the wave. Knowing m and h, every node computes t_1. At the end of the i-th phase of Procedure Size Learning, every node in level h-i correctly computes its weight. We prove this lemma in two steps. First, we prove the following two claims, and then we prove the lemma by induction using these claims. Claim 1: In the i-th phase of Procedure Size Learning, if the status of a node v_p∈ US(h-i) is changed from incomplete to complete in the time interval [t'_2(i)+(j-1)τ_i, t'_2(i)+jτ_i-1], for some integer j≥ 1, and remains complete forever, then the node v_p correctly computes W(v_p) in round t'_2(i)+jτ_i, provided that all nodes of level h-i+1 know their weights at the beginning of the i-th phase. In order to prove this claim, suppose that the status of v_p∈ US(h-i) is changed from incomplete to complete in the interval [t'_2(i)+(j-1)τ_i, t'_2(i)+jτ_i-1], for some integer j≥ 1. Since the status of v_p is complete in round t'_2(i)+jτ_i, the node v_p did not detect any collision in the above time interval. Suppose that v_p received messages only from nodes in N'(v_p,US(h-i)). In round t'_2(i)+(j-1)τ_i+⌊logΔ⌋+1, the node computes the integer d from the collision tags of the nodes from which it received messages in the time interval [t'_2(i)+(j-1)τ_i, t'_2(i)+(j-1)τ_i+⌊logΔ⌋+1]. According to the labeling scheme, the bits in the collision tags of the nodes in N'(v_p,US(h-i)) were assigned in such a way that the string s' formed by these bits is the binary representation of the integer |N'(v_p,US(h-i))|. After that, the nodes in N'(v_p,US(h-i)) whose weight-transmission tags contain a positive integer as the id transmit their tags to v_p one by one.
Let X_f ⊆ N'(v_p,US(h-i)) be the set of nodes in N'(v_p,US(h-i)) with weight f, for 1≤ f≤ x_i. The weight-transmission tags are given to ⌊log |X_f|⌋+1 nodes in X_f in such a way that the binary string formed by the bits of the weight-transmission tags of these nodes, in the increasing order of their ids, is the binary representation of the integer |X_f|. Hence, for 1≤ f≤ x_i, ∑_f |X_f|=d, as the sum of the numbers of nodes in N'(v_p,US(h-i)) with different weights is equal to the total number of nodes in N'(v_p,US(h-i)). If the node v_p received messages only from the nodes in N'(v_p,US(h-i)), it learns ∑_f |X_f|=d, and hence correctly computes W(v_p)=1+∑_f f|X_f|. Otherwise, there exists a node u ∈ N(v_p)∩ N'(v_q,US(h-i)), for some node v_q ∈ US(h-i) with q<p, such that the id in the weight-transmission tag of u is non-zero. Then the integer ∑_f |X_f| that v_p computes cannot be equal to the integer d, as explained above, and the node v_p changes its status back to incomplete. This is a contradiction. Therefore, the node v_p correctly computes its weight by the end of round t'_2(i)+jτ_i-1, which proves the claim. Claim 2: Let US(h-i)={v_1,v_2,⋯,v_k}. In the i-th phase of Procedure Size Learning, each node v_j changes its status from incomplete to complete during the time interval [t'_2(i)+(q_j-1)τ_i+1, t'_2(i)+q_jτ_i], for some q_j ≤ j, and remains complete forever. We prove this claim by induction on j. As the base case, we prove that in the time interval [t'_2(i)+1, t'_2(i)+τ_i-1], the status of the node v_1 ∈ US(h-i) is changed from incomplete to complete. According to the labeling scheme and to the construction of the set US(h-i), ⌊log |N'(v_1,US(h-i))|⌋+1 nodes from N'(v_1,US(h-i)) have distinct positive ids in their collision tags, and all other nodes from N'(v_1,US(h-i)) have the id 0. Hence, the node v_1 detects no collision in the time interval [t'_2(i)+1, t'_2(i)+(⌊logΔ⌋+1)], and it changes its status to complete. In the next x_i(⌊logΔ⌋+1) rounds, the nodes of level h-i+1 with positive ids in their weight-transmission tags transmit. Since the ids in the weight-transmission tags of ⌊log |N'(v_1,US(h-i))|⌋+1 nodes are distinct positive integers, and N(v_1)=N'(v_1,US(h-i)), the node v_1 does not detect any collision. Also, since the node v_1 received messages only from nodes in N'(v_1,US(h-i)), it computes ∑_f |X_f|=d, for 1≤ f≤ x_i, and hence v_1 remains complete forever. Suppose by induction that Claim 2 holds for the nodes v_1,… ,v_j. Let y=max{q_1,q_2,…, q_j}. Consider the following two cases. Case 1: There exists an integer q_j+1≤ y such that the node v_j+1 changes its status from incomplete to complete during the time interval [t'_2(i)+(q_j+1-1)τ_i+1, t'_2(i)+q_j+1τ_i] and remains complete forever. In this case the claim holds for v_j+1, because q_j+1≤ y ≤ j+1. Case 2: Case 1 does not hold. Therefore, the status of v_j+1 is incomplete in round t'_2(i)+yτ_i. The status of all the nodes in N'(v_j+1,US(h-i)) is incomplete in this round as well, as they neither received a stop message from v_j+1 nor detected a collision in round t'_2(i)+yτ_i. The status of the nodes in N(v_j+1)∖ N'(v_j+1,US(h-i)) is complete, as N(v_j+1)∖ N'(v_j+1,US(h-i)) ⊆∪_a=1^j N(v_a) and the nodes v_1,v_2,⋯,v_j are complete. Consider the time interval [t'_2(i)+yτ_i+1, t'_2(i)+(y+1)τ_i-1]. In this time interval, the node v_j+1 receives messages only from the nodes in N'(v_j+1,US(h-i)).
Since the positive ids in the collision tags and the positive ids in the weight-transmission tags are unique for the nodes in N'(v_j+1,US(h-i)), the node v_j+1 does not detect any collision in the interval [t'_2(i)+yτ_i+1, t'_2(i)+(y+1)τ_i-1]. Also, since the node v_j+1 received messages only from nodes in N'(v_j+1,US(h-i)), we have ∑_f=1^x_i d_f=d, and hence v_j+1 remains complete forever. Since y ≤ j, we have y+1≤ j+1. Therefore, the proof of the claim follows by induction. Now we prove the lemma by induction on the phase number. According to the definition of the weight of a node, all the nodes in level h have weight 1. Therefore, by Claim 2, at the end of round t'_2(1)+jτ_1, the node v_j in US(h-1) becomes complete, and hence, by Claim 1, it correctly computes its weight, since all the nodes in level h already know their weight, which is 1. This implies that all the nodes in level h-1 correctly compute their weights at the end of phase 1. Suppose that for i≥ 1, all the nodes in level h-i correctly compute their weights at the end of phase i. Then, by Claim 2, all the nodes in US(h-i-1) become complete in the (i+1)-th phase, and hence, by Claim 1, they correctly compute their weights in this phase. Therefore, the lemma follows by induction. Applying Lemma <ref> for i=h, we get the following corollary. Upon completion of Procedure Size Learning, the node r correctly computes the size of the graph. Now we are ready to formulate our main positive result. The length of the labeling scheme used by Algorithm Size Discovery on a graph of maximum degree Δ is O(loglogΔ). Upon completion of this algorithm, all nodes correctly output the size of the graph. According to the labeling scheme Λ, the label of every node has two parts. The first part is a vector M of constant length, and each term of M is one bit. The second part is a vector L containing three tags, each of which is of length O(loglogΔ). Therefore, the length of the labeling scheme Λ is O(loglogΔ). By Corollary <ref>, node r correctly computes the size n of the network upon completion of Procedure Size Learning. In Procedure Final, node r initiates Wave(n), and hence every node correctly computes n upon completion of the algorithm. §.§ The lower bound In this section, we show that the length of the labeling scheme used by Algorithm Size Discovery is optimal, up to multiplicative constants. We prove the matching lower bound by showing that for some class of graphs of maximum degree Δ (indeed, even of trees), any size discovery algorithm must use a labeling scheme of length at least Ω(loglogΔ) on some graph of this class. Let S be a star with the central node r of degree Δ. Denote one of the leaves of S by a. For ⌊Δ/2⌋≤ i ≤Δ-1, we construct a tree T_i by attaching i leaves to a. The maximum degree of each tree T_i is Δ. Let 𝒯 be the set of trees T_i, for ⌊Δ/2⌋≤ i ≤Δ-1, cf. Fig. <ref>. Hence the size of 𝒯 is at least Δ/2. The class 𝒯 of trees was used in <cit.> to prove an analogous lower bound for the problem of topology recognition (which, for the class 𝒯, is equivalent to size discovery). However, it should be stressed that the proof of the lower bound in our present scenario is much more involved, because we work under the more powerful model assuming the capability of collision detection, while <cit.> assumed no collision detection. The negative result under our more powerful model is more difficult to obtain because of the potential possibility of acquiring information by nodes from hearing collisions.
More precisely, our negative argument is based on the fact that, in a deterministic algorithm, nodes with the same history (see the formal definition below) must behave identically. In the model with collision detection, histories are more complicated, because they are composed not only of messages heard by nodes in previous rounds but also of collisions heard by them. Let R be the set of leaves attached to r and let A be the set of leaves attached to a. For a tree T∈𝒯, consider a labeling scheme L(T) of length β, and let 𝒜 be an algorithm that finds the size of every tree T ∈𝒯, using L(T). Let L(T) assign the label l(v) to each node v in T. Let T ∈𝒯 be any tree. We define the notion of history (a similar notion was defined in <cit.> for anonymous radio networks without collision detection) for each node v in T in round t. The history of a node in round t is denoted by H(v,t,L,𝒜). This is the information that node v acquires by round t, using the algorithm 𝒜. The action of a node v in round t+1 is a function of the history H(v,t,L,𝒜); hence, for every round t, if two nodes have the same history in round t, then they behave identically in round t+1. As in <cit.>, we assume, without loss of generality, that whenever a node transmits a message in round t+1, it sends its entire history in round t. We define the history by induction on the round number as follows. H(v,0,L,𝒜)=l(v), for each node v in T. For t≥ 0, the history in round t+1 is defined as follows, using the histories of the nodes in T in round t. * If v receives a message from a node u in round t+1, i.e., v is silent in this round, and u is its only neighbor that transmits in this round, then H(v,t+1,L,𝒜)=[H(v,t,L,𝒜),H(u,t,L,𝒜)]. * If v detects a collision in round t+1, i.e., v is silent in this round, and there are at least two neighbors of v that transmit in this round, then H(v,t+1,L,𝒜)=[H(v,t,L,𝒜),*]. * Otherwise, H(v,t+1,L,𝒜)=[H(v,t,L,𝒜),λ]. Hence, histories are nested sequences of labels and of the symbols λ and *, where, intuitively, λ stands for silence in a given round, and * stands for a collision.
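These three update rules can be captured in a few lines of Python (our illustrative model; the names are ours), which is convenient when checking the case analyses below by hand:

```python
# Illustrative model of the history update rules.
# transmitting: set of neighbors of v that transmit in round t+1;
# v_transmits: whether v itself transmits in that round.
def update_history(history, v_transmits, transmitting, neighbor_histories):
    if not v_transmits and len(transmitting) == 1:
        (u,) = transmitting
        return [history, neighbor_histories[u]]   # v hears u's message
    if not v_transmits and len(transmitting) >= 2:
        return [history, "*"]                      # v hears a collision
    return [history, "lambda"]                     # silence, or v transmitted
```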
The following lemma shows that the histories of nodes in the sets A and R are equal if and only if the labels of these nodes are the same.

For any tree T ∈ 𝒯, consider a labeling scheme L(T). Let 𝒜 be any algorithm that finds the size of every tree T ∈ 𝒯 using the scheme L(T). Then for any t ≥ 0, we have:
* For v_1,v_2 ∈ R, H(v_1,t,L,𝒜)=H(v_2,t,L,𝒜) if and only if l(v_1)=l(v_2).
* For v_1,v_2 ∈ A, H(v_1,t,L,𝒜)=H(v_2,t,L,𝒜) if and only if l(v_1)=l(v_2).

We prove the first part of the lemma; the proof of the second part is similar. By definition, for two nodes v_1 and v_2 with different labels, we have H(v_1,t,L,𝒜) ≠ H(v_2,t,L,𝒜) for all t ≥ 0. To prove the converse, we use induction on t. Let v_1,v_2 ∈ R be such that l(v_1)=l(v_2). For t=0, H(v_1,0,L,𝒜)=l(v_1)=l(v_2)=H(v_2,0,L,𝒜). Suppose that the statement is true for round t, i.e., H(v_1,t,L,𝒜)=H(v_2,t,L,𝒜). Note that the history of any node in R in round t+1 does not depend on any action performed by the node a or by the nodes in A in round t+1. Also, since the nodes v_1 and v_2 have the same histories in round t, they must behave identically in round t+1. Therefore, in round t+1, only the following four cases can occur.

Case 1. The node r transmits and the nodes v_1,v_2 do not transmit. According to the definition of history, H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),H(r,t,L,𝒜)] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),H(r,t,L,𝒜)]. This implies H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜).

Case 2. The nodes v_1 and v_2 transmit and the node r does not transmit. According to the definition of history, H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),λ] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),λ]. This implies H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜).

Case 3. The nodes v_1 and v_2 and the node r all transmit. According to the definition of history, H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),λ] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),λ]. This implies H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜).

Case 4. Neither v_1 nor v_2 nor r transmits. In this case, H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),λ] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),λ]. This implies H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜).

Hence the proof of the lemma follows by induction.

With a labeling scheme of length β, there can be at most z = 2^{β+1} different labels of at most this length. Let {l_1,l_2,…,l_z} be the set of distinct labels of length at most β. We define the pattern of a tree T with the labeling scheme L(T) as the pair (P(r),P(a)), where P(r) and P(a) are defined as follows.

P(r)=(l(r),b_1,b_2,…,b_z), where b_i ∈ {0,1,2} and: b_i=0 if no node in R has label l_i; b_i=1 if there is exactly one node in R with label l_i; b_i=2 if there is more than one node in R with label l_i.

P(a)=(l(a),b'_1,b'_2,…,b'_z), where b'_i ∈ {0,1,2} and: b'_i=0 if no node in A has label l_i; b'_i=1 if there is exactly one node in A with label l_i; b'_i=2 if there is more than one node in A with label l_i.

The following lemma states that the history of the node r in a tree from 𝒯 depends only on the pattern and not on the tree itself.

Let 𝒜 be any algorithm that solves the size discovery problem for all trees T ∈ 𝒯 using the labeling scheme L(T). If trees T_1 and T_2 have the same pattern, then for any t ≥ 0, the node r in T_1 and the node r in T_2 have the same history in round t.

Let T_1 and T_2 be two trees with the same pattern (P(r),P(a)). For j=1,2, denote the node r in T_j by r_j, the node a in T_j by a_j, the set R in T_j by R_j, and the set A in T_j by A_j. For any t ≥ 0, we prove the following statements by simultaneous induction. (To prove the lemma, we need only the first of them.)

* H(r_1,t,L,𝒜)=H(r_2,t,L,𝒜).
* H(a_1,t,L,𝒜)=H(a_2,t,L,𝒜).
* For a node v_1 in R_1 and a node v_2 in R_2 with the same label, H(v_1,t,L,𝒜)=H(v_2,t,L,𝒜).
* For a node v_1 in A_1 and a node v_2 in A_2 with the same label, H(v_1,t,L,𝒜)=H(v_2,t,L,𝒜).

Since the patterns of the two trees are the same, we have l(r_1)=l(r_2) and l(a_1)=l(a_2). Therefore, according to the definition of the history, the above statements are true for t=0. Suppose that all the above statements are true for round t, and consider the execution of the algorithm in round t+1.

Induction step for (1): The actions of the nodes in A_1 and A_2 in round t+1 do not affect the histories of the nodes r_1 and r_2 in this round. Hence we have the following cases in round t+1.

(a) r_1 transmits, a_1 does not transmit, and no node in R_1 transmits. According to the definition of history, H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),λ]. Since H(r_1,t,L,𝒜)=H(r_2,t,L,𝒜) and r_1 transmits in round t+1, r_2 also transmits in round t+1. Similarly, a_2 does not transmit in this round, since a_1 does not transmit. We prove that no node in R_2 transmits in round t+1. Suppose otherwise, let v_2 be a node in R_2 that transmits in round t+1, and suppose that the label of v_2 is l_i. Then in P(r), either b_i=1 or b_i=2. If b_i=1, then v_2 is the unique node with label l_i in R_2. Since the patterns of the two trees are the same, there exists a unique node v_1 in R_1 with label l_i.
Since H(v_1,t,L,𝒜)=H(v_2,t,L,𝒜), by the induction hypothesis for (3), v_1 must transmit in round t+1, which contradicts the fact that no node in R_1 transmits. A similar argument applies when b_i=2. Hence no node in R_2 transmits in round t+1. Therefore, H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),λ]. This implies that H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜).

(b) r_1 transmits, a_1 transmits, no node in R_1 transmits. Since H(r_1,t,L,𝒜)=H(r_2,t,L,𝒜) and H(a_1,t,L,𝒜)=H(a_2,t,L,𝒜), and r_1 and a_1 transmit in round t+1, the nodes r_2 and a_2 also transmit in round t+1. Hence H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),λ] and H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),λ]. This implies that H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜).

(c) r_1 transmits, a_1 does not transmit, some nodes in R_1 transmit. Similarly as in (b), we have H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),λ] and H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),λ]. This implies that H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜).

(d) r_1 transmits, a_1 transmits, some nodes in R_1 transmit. Similarly as in (b), we have H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),λ] and H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),λ]. This implies that H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜).

(e) r_1 does not transmit, a_1 does not transmit, no node in R_1 transmits. Since a_1 does not transmit in round t+1, a_2 does not transmit in round t+1. Also, as explained in (a), no node in R_2 transmits in round t+1. Therefore, H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),λ] and H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),λ]. This implies that H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜).

(f) r_1 does not transmit, a_1 transmits, no node in R_1 transmits. Since a_1 transmits in round t+1, a_2 transmits in round t+1. Also, as explained in (a), no node in R_2 transmits in round t+1. Therefore, according to the definition of history, H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),H(a_1,t,L,𝒜)] and H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),H(a_2,t,L,𝒜)]. Since H(r_1,t,L,𝒜)=H(r_2,t,L,𝒜) and H(a_1,t,L,𝒜)=H(a_2,t,L,𝒜), we get H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜).

(g) r_1 does not transmit, a_1 does not transmit, some nodes in R_1 transmit. Let v_1 be a node in R_1 with label l_i, such that v_1 transmits in round t+1. Then, by Lemma <ref>, all the nodes in R_1 with label l_i must transmit in round t+1. Suppose that the nodes with labels l_{i_1},l_{i_2},…,l_{i_k} transmit in round t+1. If each of the integers b_{i_1},b_{i_2},…,b_{i_k} is 0, then no node in R_1 transmits, which contradicts the assumption of case (g). If at least two of the integers b_{i_1},b_{i_2},…,b_{i_k} are 1, or at least one of them is 2, then there exist at least two nodes in R_1 and at least two nodes in R_2 that transmit in round t+1. Hence, a collision is heard at the node r_1 and a collision is heard at the node r_2. Therefore, H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),*] and H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),*]. This implies that H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜). Otherwise, exactly one of the integers b_{i_1},b_{i_2},…,b_{i_k} is 1 and all the others are 0. W.l.o.g. let b_{i_1} be the unique integer equal to 1. Then there is exactly one node v_1 with label l_{i_1} in R_1, and exactly one node v_2 with label l_{i_1} in R_2, which transmit in round t+1. Therefore, H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),H(v_1,t,L,𝒜)] and H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),H(v_2,t,L,𝒜)]. Since H(v_1,t,L,𝒜)=H(v_2,t,L,𝒜) and H(r_1,t,L,𝒜)=H(r_2,t,L,𝒜), we get H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜).

(h) r_1 does not transmit, a_1 transmits, some nodes in R_1 transmit. Since a_1 transmits in round t+1, a_2 transmits in round t+1. Also, since some node in R_1 transmits in round t+1, some node in R_2 transmits in round t+1, as explained in (a). Therefore, a collision is heard at r_1 and a collision is heard at r_2.
Hence, H(r_1,t+1,L,𝒜)=[H(r_1,t,L,𝒜),*] and H(r_2,t+1,L,𝒜)=[H(r_2,t,L,𝒜),*]. This implies that H(r_1,t+1,L,𝒜)=H(r_2,t+1,L,𝒜).

Induction step for (2): This is similar to the induction step for (1).

Induction step for (3): For j=1,2, the histories of the nodes in R_j in round t+1 do not depend on the action of the node a_j or on the actions of the nodes in A_j in this round. Hence we have the following cases in round t+1.

* The node r_1 transmits and no node in R_1 transmits. This implies that the node r_2 transmits and no node in R_2 transmits. Therefore, H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),H(r_1,t,L,𝒜)] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),H(r_2,t,L,𝒜)]. Since H(r_1,t,L,𝒜)=H(r_2,t,L,𝒜) and H(v_1,t,L,𝒜)=H(v_2,t,L,𝒜), we get H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜).
* The node r_1 transmits and some nodes in R_1 transmit. This implies that the node r_2 transmits and some nodes in R_2 transmit. There are two cases. If v_1 transmits, then v_2 also transmits in round t+1. Hence H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),λ] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),λ], and hence H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜). If v_1 does not transmit, then v_2 does not transmit either in round t+1. Hence H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),H(r_1,t,L,𝒜)] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),H(r_2,t,L,𝒜)], and hence H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜).
* The node r_1 does not transmit and some nodes in R_1 transmit. Since r_1 does not transmit, r_2 does not transmit. According to the definition of history, H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),λ] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),λ], and hence H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜).
* The node r_1 does not transmit and no node in R_1 transmits. In this case, H(v_1,t+1,L,𝒜)=[H(v_1,t,L,𝒜),λ] and H(v_2,t+1,L,𝒜)=[H(v_2,t,L,𝒜),λ], and hence H(v_1,t+1,L,𝒜)=H(v_2,t+1,L,𝒜).

Induction step for (4): This is similar to the induction step for (3). Therefore, the lemma follows by induction.

Let ℋ_t be the set of all possible histories of the node r over all trees in 𝒯 in round t, and let 𝒫 be the set of all possible patterns of trees in 𝒯. Then |ℋ_t| ≤ |𝒫|.

The following theorem gives the lower bound Ω(loglogΔ) on the length of a labeling scheme for size discovery, which matches the length of the labeling scheme used by Algorithm Size Discovery.

For any tree T ∈ 𝒯, consider a labeling scheme L(T). Let 𝒜 be any algorithm that finds the size of T, for every tree T ∈ 𝒯, using the scheme L(T). Then there exists a tree T' ∈ 𝒯 for which the length of the scheme L(T') is Ω(loglogΔ).

It is enough to prove the theorem for sufficiently large Δ. We proceed by contradiction. Suppose that there exists an algorithm 𝒜 that solves the size discovery problem in the class 𝒯, in time t, with labels of length at most (1/4)loglogΔ. There are at most z = 2(logΔ)^{1/4} different labels of at most this length, and at most z²·3^{2z} different possible patterns for these z labels. Therefore, by Corollary <ref>, the total number of histories of the node r, in round t, over the entire class 𝒯, is at most

z²·3^{2z} < (2(logΔ)^{1/4})²·3^{4(logΔ)^{1/4}} < Δ/2 ≤ |𝒯|,

for sufficiently large Δ. Therefore, by the pigeonhole principle, there exist two trees T', T'' in 𝒯 such that the history of r in T' in round t is the same as the history of r in T'' in round t. This implies that the node r in T' and the node r in T'' must behave identically in every round until round t, hence they must output the same size. This contradicts the fact that the trees T' and T'' have different sizes. This completes the proof.
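As an illustration of the two counting objects in this proof, here is a small Python sketch (our own; the tuple encoding of patterns is an assumption) that computes the pattern (P(r),P(a)) of a labeled tree and numerically checks the pigeonhole step: with labels of length (1/4)·log₂log₂Δ, the number of patterns eventually drops below Δ/2, hence below the number of trees in 𝒯.

```python
import math
from collections import Counter

def pattern_vector(center_label, leaf_labels, all_labels):
    """(l(center), b_1, ..., b_z) with b_i in {0,1,2}: the multiplicity of
    label l_i among the leaves, capped at 2, exactly as in the text."""
    counts = Counter(leaf_labels)
    return (center_label,) + tuple(min(counts[l], 2) for l in all_labels)

def pattern(l_r, R_labels, l_a, A_labels, all_labels):
    return (pattern_vector(l_r, R_labels, all_labels),
            pattern_vector(l_a, A_labels, all_labels))

# Pigeonhole step: write L = log2(Delta); labels of length L**0.25 / 4 give
# z = 2 * L**0.25 labels and at most z^2 * 3^(2z) patterns. We check, in
# log2 scale, that this is < Delta/2 once Delta (hence L) is large.
for L in (2**10, 2**20, 2**40):
    z = 2 * L**0.25
    log2_patterns = 2 * math.log2(z) + 2 * z * math.log2(3)
    print(L, log2_patterns < L - 1)   # prints True for each of these L
```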
§ FINDING THE DIAMETER OF A NETWORK

In contrast to the task of finding the size of a network, it turns out that finding the diameter can be done with a much shorter labeling scheme: in fact, we will show that a scheme of constant length is sufficient. In our solution the labeling scheme has length 2.

Consider any graph G and let u and v be two nodes at distance equal to the diameter D of the graph. The labeling scheme is as follows: node u has label (00), node v has label (11), all other nodes at distance D from u have label (10), and all nodes at positive distance < D from node u have label (01). Algorithm Diameter Discovery consists of two procedures: Procedure Find Diameter, initiated by the node with label (00), and Wave(D), initiated by the node with label (11). Upon completion of Find Diameter, the node with label (11) learns the correct value of D, which it subsequently broadcasts to all other nodes using Wave(D).

We now describe Procedure Find Diameter. It is a modification of the subroutine Wave described in Section 2. Its aim is for all nodes at distance i from the initiating node u with label (00) to learn i. The set of these nodes is called level i. As soon as the node v with label (11) learns its distance from u, it knows that this distance is D, and it then initiates Wave(D) to spread this knowledge to all other nodes. Procedure Find Diameter works in phases i=0,1,…,D-1. At the beginning of phase i, all nodes at levels at most i know their level number and all nodes at levels larger than i do not know it. Phase i of Procedure Find Diameter is identical to any phase of Wave(i+1), with nodes at level i playing the role of blue nodes, nodes at levels smaller than i playing the role of white nodes, and nodes at levels larger than i playing the role of red nodes, with the modification that nodes at level i initiate it only if their label starts with bit 0. (Hence nodes with labels (11) and (10), which are exactly the nodes at distance D from u, do not initiate phase D, which would be useless.) Upon completion of phase i, all nodes at level i+1 learn the value of i+1. Hence at the end of phase D-1 all nodes know their level number. The node with label (11), which knows that its level number is D, knows that this is the diameter of the graph. It initiates Wave(D). At this point, all nodes have finished all transmissions of Procedure Find Diameter, and hence there are no interferences with the transmissions of Wave(D), which starts with node v colored blue and all other nodes colored red.

Procedure Spread, which follows Procedure Find Diameter, can be described as follows. After computing D, node v with label (11) initiates Wave(D). Every node in G computes the value of D and outputs it. The procedure ends after all nodes output D. Now our algorithm can be succinctly formulated as follows: execute Procedure Find Diameter, and then Procedure Spread. The above explanations prove the following proposition.

The length of the labeling scheme used by Algorithm Diameter Discovery on any graph is 2. Upon completion of this algorithm, all nodes correctly output the diameter of the graph.
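For concreteness, the following Python sketch (our own illustration; for general connected graphs the diametral pair (u,v) is assumed to be known to the scheme designer, while the double-BFS selection used below is guaranteed to find such a pair only on trees) computes the four labels of the constant-length scheme from an adjacency list.

```python
from collections import deque

def bfs_distances(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def diameter_labels(adj):
    """Assign (0,0) to u, (1,1) to v, (1,0) to the other nodes at distance D
    from u, and (0,1) to the remaining nodes, as in the scheme above."""
    d0 = bfs_distances(adj, next(iter(adj)))
    u = max(d0, key=d0.get)          # endpoint of a diametral pair (on trees)
    du = bfs_distances(adj, u)
    v = max(du, key=du.get)
    D = du[v]
    labels = {u: (0, 0), v: (1, 1)}
    for x in adj:
        if x not in labels:
            labels[x] = (1, 0) if du[x] == D else (0, 1)
    return labels, D
```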
§ CONCLUSION

We established the minimum length Θ(loglogΔ) of a labeling scheme permitting to find the size of arbitrary radio networks of maximum degree Δ with collision detection, and we designed a size discovery algorithm using a labeling scheme of this length. For the task of diameter discovery, we showed an algorithm using a labeling scheme of constant length.

Our algorithms heavily use the collision detection capability, hence the first open question is whether our results hold in radio networks without collision detection. Secondly, in this paper we were concerned only with the feasibility of the size and diameter discovery tasks using short labels. The running time of our size discovery algorithm is O(Dn²logΔ) for n-node networks of diameter D and maximum degree Δ. We did not try to optimize this running time. A natural open question is: what is the fastest size discovery algorithm using a shortest possible labeling scheme, i.e., a scheme of length Θ(loglogΔ)? On the other hand, the running time of our diameter discovery algorithm is O(Dlog D). It is also natural to ask what is the optimal time of a diameter discovery algorithm using a labeling scheme of constant length. Another direction of future research could be to consider our problems in the context of dynamic networks and/or in the presence of faults.

[AKM01] S. Abiteboul, H. Kaplan, T. Milo, Compact labeling schemes for ancestor queries, Proc. 12th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2001), 547–556.
[CGGPR] B.S. Chlebus, L. Gasieniec, A. Gibbons, A. Pelc, W. Rytter, Deterministic broadcasting in ad hoc radio networks, Distributed Computing 15 (2002), 27–38.
[CGR] M. Chrobak, L. Gasieniec, W. Rytter, Fast broadcasting and gossiping in radio networks, Journal of Algorithms 43 (2002), 177–189.
[CFIKP] R. Cohen, P. Fraigniaud, D. Ilcinkas, A. Korman, D. Peleg, Label-guided graph exploration by a finite automaton, ACM Transactions on Algorithms 4 (2008).
[CD] A. Czumaj, P. Davies, Exploiting spontaneous transmissions for broadcasting and leader election in radio networks, Proc. 36th ACM Symposium on Principles of Distributed Computing (PODC 2017), 3–12.
[DP] D. Dereniowski, A. Pelc, Drawing maps with advice, Journal of Parallel and Distributed Computing 72 (2012), 132–143.
[EGMP] F. Ellen, B. Gorain, A. Miller, A. Pelc, Constant-length labeling schemes for deterministic radio broadcast, Proc. 31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2019), 171–178.
[EFKR] Y. Emek, P. Fraigniaud, A. Korman, A. Rosen, Online computation with advice, Theoretical Computer Science 412 (2011), 2642–2656.
[FGIP] P. Fraigniaud, C. Gavoille, D. Ilcinkas, A. Pelc, Distributed computing with advice: Information sensitivity of graph coloring, Distributed Computing 21 (2009), 395–403.
[FIP1] P. Fraigniaud, D. Ilcinkas, A. Pelc, Communication algorithms with advice, Journal of Computer and System Sciences 76 (2010), 222–232.
[FIP2] P. Fraigniaud, D. Ilcinkas, A. Pelc, Tree exploration with advice, Information and Computation 206 (2008), 1276–1287.
[FKL] P. Fraigniaud, A. Korman, E. Lebhar, Local MST computation with short advice, Theory of Computing Systems 47 (2010), 920–933.
[FP] E. Fusco, A. Pelc, Trade-offs between the size of advice and broadcasting time in trees, Algorithmica 60 (2011), 719–734.
[FPP] E. Fusco, A. Pelc, R. Petreschi, Topology recognition with advice, Information and Computation 247 (2016), 254–265.
[GPPR] L. Gasieniec, A. Pagourtzis, I. Potapov, T. Radzik, Deterministic communication in radio networks with large labels, Algorithmica 47 (2007), 97–117.
[GPX] L. Gasieniec, D. Peleg, Q. Xin, Faster communication in known topology radio networks, Distributed Computing 19 (2007), 289–300.
[GPPR02] C. Gavoille, D. Peleg, S. Pérennes, R. Raz, Distance labeling in graphs, Journal of Algorithms 53 (2004), 85–112.
[GHK] M. Ghaffari, B. Haeupler, M. Khabbazian,
Randomized broadcast in radio networks with collision detection, Proc. 32nd Annual ACM Symposium on Principles of Distributed Computing (PODC 2013), 325–334.
[GMP] C. Glacet, A. Miller, A. Pelc, Time vs. information tradeoffs for leader election in anonymous trees, ACM Transactions on Algorithms 13 (2017), 31:1–31:41.
[GP] B. Gorain, A. Pelc, Short labeling schemes for topology recognition in wireless tree networks, Proc. 24th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2017), 37–52.
[IKP] D. Ilcinkas, D. Kowalski, A. Pelc, Fast radio broadcasting with advice, Theoretical Computer Science 411 (2012), 1544–1557.
[KKKP02] M. Katz, N. Katz, A. Korman, D. Peleg, Labeling schemes for flow and connectivity, SIAM Journal of Computing 34 (2004), 23–40.
[KKP05] A. Korman, S. Kutten, D. Peleg, Proof labeling schemes, Distributed Computing 22 (2010), 215–233.
[KP] D. Kowalski, A. Pelc, Leader election in ad hoc radio networks: a keen ear helps, Journal of Computer and System Sciences 79 (2013), 1164–1180.
[SN] N. Nisse, D. Soguet, Graph searching with advice, Theoretical Computer Science 410 (2009), 1307–1318.
[Pe] A. Pelc, Activating anonymous ad hoc radio networks, Distributed Computing 19 (2007), 361–371. | http://arxiv.org/abs/1704.08713v2 | {
"authors": [
"Barun Gorain",
"Andrzej Pelc"
],
"categories": [
"cs.DC"
],
"primary_category": "cs.DC",
"published": "20170427184618",
"title": "Finding the Size and the Diameter of a Radio Network Using Short Labels"
} |
L-functions and sharp resonances of infinite index congruence subgroups of SL_2(ℤ)

Dmitry Jakobson, McGill University, Department of Mathematics and Statistics, 805 Sherbrooke Street West, Montreal, Quebec, Canada H3A 0B9, [email protected]
Frédéric Naud, Laboratoire de Mathématiques d'Avignon, Campus Jean-Henri Fabre, 301 rue Baruch de Spinoza, 84916 Avignon Cedex, [email protected]

Let Γ be a convex co-compact subgroup of SL_2(ℤ) and consider the "congruence subgroups" Γ(p) ⊲ Γ, for p prime. Let X_p := Γ(p)\ℍ be the associated family of hyperbolic surfaces covering X := Γ\ℍ; we investigate the behaviour of the resonances of the Laplacian on X_p as p goes to infinity. We prove a factorization formula for the Selberg zeta function Z_Γ(p)(s) in terms of L-functions L_Γ(s,ρ) attached to the irreducible representations ρ of the Galois group 𝔾 = SL_2(𝔽_p) of the covering, together with a priori bounds and analytic continuation. We use this factorization property, combined with an averaging technique over representations, to prove a new existence result for non-trivial resonances in an effective low frequency strip.

§ INTRODUCTION AND RESULTS

In mathematical physics, resonances generalize the L²-eigenvalues in situations where the underlying geometry is non-compact. Indeed, when the geometry has infinite volume, the L²-spectrum of the Laplacian is mostly continuous, and the natural replacement data for the missing eigenvalues are provided by resonances, which arise from a meromorphic continuation of the resolvent of the Laplacian. To be more specific, in this paper we will work with the positive Laplacian Δ_X on hyperbolic surfaces X = Γ\ℍ, where Γ is a convex co-compact subgroup of PSL_2(ℝ). A good reference on the subject is the book of Borthwick <cit.>. Here ℍ is the hyperbolic plane endowed with its metric of constant curvature -1.

Let Γ be a geometrically finite Fuchsian group of isometries acting on ℍ. This means that Γ admits a finite sided polygonal fundamental domain in ℍ. We will require that Γ has no elliptic elements different from the identity and that the quotient Γ\ℍ has infinite hyperbolic area. We assume in addition that Γ has no parabolic elements (no cusps). Under these assumptions, the quotient space X = Γ\ℍ is a Riemann surface (called convex co-compact) whose ends have the following geometry. The surface X can be decomposed into a compact surface N with geodesic boundary, called the Nielsen region, onto which infinite area ends F_i are glued: the funnels. A funnel F_i is a half cylinder isometric to

F_i = (ℝ/l_iℤ)_θ × (ℝ^+)_t,

where l_i > 0, with the warped metric ds² = dt² + cosh²(t)dθ². The limit set Λ(Γ) is defined as

Λ(Γ) := \overline{Γ.z} ∩ ∂ℍ,

where z ∈ ℍ is a given point and Γ.z is the orbit under the action of Γ, which accumulates on the boundary ∂ℍ. The limit set Λ does not depend on the choice of z, and its Hausdorff dimension δ(Γ) is the critical exponent of Poincaré series <cit.>.

The spectrum of Δ_X on L²(X) has been described completely by Lax and Phillips and Patterson in <cit.> as follows:

* The half line [1/4, +∞) is the continuous spectrum.
* There are no embedded eigenvalues inside [1/4, +∞).
* The pure point spectrum is empty if δ ≤ 1/2, and finite and starting at δ(1-δ) if δ > 1/2.

Using the above notations, the resolvent

R(s) := (Δ_X - s(1-s))^{-1} : L²(X) → L²(X)

is a holomorphic family for Re(s) > 1/2, except at a finite number of possible poles related to the eigenvalues.
From the work of Mazzeo-Melrose <cit.>, it can be meromorphically continued (to all of ℂ) from C_0^∞(X) → C^∞(X), and its poles are called resonances. We denote in the sequel by ℛ_X the set of resonances, written with multiplicities. To each resonance s ∈ ℂ (counted with multiplicity) are associated generalized eigenfunctions (so-called purely outgoing states) ψ_s ∈ C^∞(X), which provide stationary solutions of the automorphic wave equation given by

ϕ(t,x) = e^{(s-1/2)t} ψ_s(x),   (D_t² + Δ_X - 1/4)ϕ = 0.

From a physical point of view, Re(s) - 1/2 is therefore a rate of decay while Im(s) is a frequency of oscillation. Resonances that live the longest are called sharp resonances: they are those for which Re(s) is closest to the unitary axis Re(s) = 1/2. In general, s = δ is the only explicitly known resonance (or eigenvalue if δ > 1/2). There are very few effective results on the existence of non-trivial sharp resonances, and to our knowledge the best statement so far is due to the authors <cit.>, where it is proved that for all ϵ > 0, there are infinitely many resonances in the strip

{Re(s) > δ(1-2δ)/2 - ϵ}.

It is conjectured in the same paper <cit.> that for all ϵ > 0, there are infinitely many resonances in the strip {Re(s) > δ/2 - ϵ}. However, the above result, while proving the existence of non-trivial resonances, does not provide estimates on the imaginary parts (the frequencies), and it is a notoriously hard problem to locate non-trivial resonances precisely. The goal of the present work is to obtain a different type of existence result by looking at families of "congruence" surfaces.

Let Γ be an infinite index, finitely generated, free subgroup of SL_2(ℤ), without parabolic elements. Because Γ is free, the projection map π : SL_2(ℝ) → PSL_2(ℝ) is injective when restricted to Γ, and we will thus identify Γ with π(Γ), i.e., with its realization as a Fuchsian group. Under the above hypotheses, Γ is a convex co-compact group of isometries. For every prime p > 2, we define the congruence subgroup Γ(p) by

Γ(p) := {γ ∈ Γ : γ ≡ Id mod p},

and we set Γ(0) = Γ. Recently, these "infinite index congruence subgroups" have attracted a lot of attention because of the key role they play in number theory and graph theory. We mention the early work of Gamburd <cit.> and the more recent works of Bourgain-Gamburd-Sarnak <cit.>, Bourgain-Kontorovich <cit.> and Oh-Winter <cit.>. In all of the previously mentioned works, the spectral theory of the surfaces

X_p := Γ(p)\ℍ

plays a critical role, and knowledge on resonances is mandatory, in particular knowledge on uniform spectral gaps as p → +∞. In <cit.>, the authors started investigating the behaviour of resonances in the large p limit, and the present paper goes in the same direction with different techniques, involving sharper tools from representation theory.

A way to attack any problem on resonances of hyperbolic surfaces is through the Selberg zeta function, defined for Re(s) > δ by

Z_Γ(s) := ∏_{𝒞∈𝒫} ∏_{k∈ℕ} (1 - e^{-(s+k)l(𝒞)}),

where 𝒫 is the set of primitive closed geodesics and l(𝒞) is the length. This zeta function extends analytically to ℂ, and it is known from the work of Patterson-Perry <cit.> that the non-trivial zeros of Z_Γ(s) are resonances, with multiplicities. Our first goal is to establish a factorization formula for Z_Γ(p)(s) using the representation theory of SL_2(𝔽_p). Indeed, it is known from Gamburd <cit.> that the map

π_p : Γ → SL_2(𝔽_p),  γ ↦ γ mod p,

is onto for all p large, and we thus have a family of Galois covers X_p → X with Galois group SL_2(𝔽_p).
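As a toy illustration (our own; the particular matrix below is just a generic hyperbolic element of SL_2(ℤ), not a generator of a specific Schottky group), the reduction π_p and membership in Γ(p) can be computed directly:

```python
def mod_p(g, p):
    """Reduction SL_2(Z) -> SL_2(F_p); a matrix is encoded as (a, b, c, d)."""
    return tuple(x % p for x in g)

def mul(g, h):
    a, b, c, d = g
    e, f, i, j = h
    return (a*e + b*i, a*f + b*j, c*e + d*i, c*f + d*j)

def in_gamma_p(g, p):
    """gamma lies in Gamma(p) iff gamma = Id mod p."""
    return mod_p(g, p) == (1, 0, 0, 1)

g = (2, 1, 1, 1)          # det = 1 and |trace| > 2: a hyperbolic element
p = 5
print(mod_p(g, p))        # its image under pi_5 in SL_2(F_5)
k, h = 1, g
while not in_gamma_p(h, p):
    h, k = mul(h, g), k + 1
print(k)                  # smallest k with g^k in Gamma(5); here k = 10
```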
Let {ρ} denote the set of irreducible complex representations of 𝔾 := SL_2(𝔽_p); given ρ, we denote by χ_ρ its character and by V_ρ its representation space, and we set d_ρ := dim(V_ρ). Our first result is the following.

For Re(s) > δ, consider the L-function defined by

L_Γ(s,ρ) := ∏_{𝒞∈𝒫} ∏_{k∈ℕ} det(Id_{V_ρ} - ρ(𝒞) e^{-(s+k)l(𝒞)}),

where ρ(𝒞) is understood as ρ(π_p(γ_𝒞)), γ_𝒞 ∈ Γ being any representative of the conjugacy class defined by 𝒞. Then we have the following facts.

* For all irreducible ρ, L_Γ(s,ρ) extends to an analytic function on ℂ.
* There exist C_1, C_2 > 0 such that for all p large, all irreducible representations ρ of 𝔾, and all s ∈ ℂ, we have

|L_Γ(s,ρ)| ≤ C_1 exp(C_2 d_ρ log(1+d_ρ)(1+|s|²)).

* For all p large, we have the formula

Z_Γ(p)(s) = ∏_{ρ irreducible} (L_Γ(s,ρ))^{d_ρ}.

Notice that the L-function of the trivial representation is just Z_Γ(s), so that Z_Γ(s) is always a factor of Z_Γ(p)(s). There is a long history of L-functions associated with compact extensions of geodesic flows in negative curvature; see for example <cit.> and <cit.>. In the case of pairs of hyperbolic pants with symmetries, a similar type of factorization has been considered for numerical purposes by Borthwick and Weich <cit.>. The above factorization is very similar to the factorization of Dedekind zeta functions as a product of Artin L-functions in the case of number fields. In the context of hyperbolic surfaces with infinite volume, the above statement is new and is interesting in itself for various applications. In <cit.>, by combining trace formulae techniques with some a priori upper bounds for Z_Γ(p)(s) obtained via transfer operator techniques, we proved the following fact.

For all ϵ > 0, there exists C_ϵ > 0 such that for all p large enough,

C_ϵ^{-1} p³ ≤ #{ℛ_{X_p} ∩ {|s| ≤ (log(p))^ϵ}} ≤ C_ϵ p³ (log(p))^{1+2ϵ}.

We point out that p³ ≍ Vol(N_p), where Vol(N_p) is the volume of the convex core of X_p, so these bounds can be thought of as a Weyl law in the large p regime. In the case of covers of compact or finite volume manifolds, precise results for the Laplace spectrum in the "large degree" limit have been obtained in the past in <cit.>. In the case of infinite volume hyperbolic manifolds, we also mention the density bound obtained by Oh <cit.>. While this result has near optimal upper and lower bounds, it does not provide much information on the precise location of this wealth of non-trivial resonances. The main result of this paper is as follows.

Using the above notations, assume that δ > 3/4. Then for all ϵ, β > 0 and for all p large,

#{ℛ_{X_p} ∩ {δ - 3/4 - ϵ ≤ Re(s) ≤ δ and |Im(s)| ≤ (log(log(p)))^{1+β}}} ≥ (p-1)/2.

The existence of convex co-compact subgroups Γ of SL_2(ℤ) with δ_Γ arbitrarily close to 1 is guaranteed by a theorem of Lewis Bowen <cit.>; see also <cit.> for some hand-made examples. Theorem <ref> shows that as p → ∞, there are plenty of resonances with Re(s) ≥ δ - 3/4 - ϵ and small imaginary part Im(s), providing an existence theorem with an effective control of both the decay rate and the frequency. Notice that from <cit.>, we also know that for any σ ≤ δ, β > 0, and all large p,

#{ℛ_{X_p} ∩ {σ ≤ Re(s) ≤ δ and |Im(s)| ≤ (log(log(p)))^{1+β}}} ≤ C_{β,σ} p³ log(p).

One of the points of Theorem <ref> is that we can produce non-trivial resonances without having to affect δ, just by moving to a finite cover. The outline of the proof is as follows. Having established the factorization formula, we first notice that since the dimension of any non-trivial representation of 𝔾 is at least (p-1)/2, it is enough to show that at least one of the L-functions L_Γ(s,ρ) vanishes in the described region as p → ∞.
We achieve this goal through an averaging technique (over the irreducible representations ρ) which takes into account the "explicit" knowledge of the conjugacy classes of 𝔾, together with the high multiplicities in the length spectrum of X. Unlike in finite volume cases, where one can take advantage of a precise location of the spectrum (for example by assuming GRH), none of that strategy applies here, which makes it much harder to mimic existing techniques from analytic number theory.

The remainder of the paper is divided into three sections. In §2 we prove the analytic continuation of the L-functions, together with an a priori bound, using transfer operators and singular value estimates. In §3, we relate the existence of zero-free regions for L-functions to upper bounds on certain twisted sums over closed geodesics, via some arguments of harmonic and complex analysis. In §4, we recall the algebraic facts around the group 𝔾 = SL_2(𝔽_p) and apply them to prove Theorem <ref>, averaging over ρ to produce a relevant lower bound and concluding by contradiction.

Acknowledgements. Both authors are supported by ANR grant "GeRaSic". DJ was partially supported by NSERC, FRQNT and a Peter Redpath fellowship. FN is supported by Institut Universitaire de France.

§ VECTOR VALUED TRANSFER OPERATORS AND ANALYTIC CONTINUATION

§.§ Bowen-Series coding and transfer operator

The goal of this section is to prove Theorem <ref>. The technique follows closely previous works <cit.>, with the notable addition that we have to deal with vector valued transfer operators. We start by recalling the Bowen-Series coding and the holomorphic function spaces needed for our analysis. Let ℍ denote the Poincaré upper half-plane

ℍ = {x + iy ∈ ℂ : y > 0}

endowed with its standard metric of constant curvature -1,

ds² = (dx² + dy²)/y².

The group of isometries of ℍ is PSL_2(ℝ), through the action of 2×2 matrices viewed as Möbius transforms,

z ↦ (az+b)/(cz+d),  ad - bc = 1.

Below we recall the definition of Fuchsian Schottky groups, which will be used to define transfer operators. A Fuchsian Schottky group is a free subgroup of PSL_2(ℝ) built as follows. Let 𝒟_1, …, 𝒟_m, 𝒟_{m+1}, …, 𝒟_{2m}, m ≥ 2, be 2m Euclidean open discs in ℂ orthogonal to the line ℝ ≃ ∂ℍ. We assume that for all i ≠ j, 𝒟_i ∩ 𝒟_j = ∅. Let γ_1, …, γ_m ∈ PSL_2(ℝ) be m isometries such that for all i = 1, …, m, we have

γ_i(𝒟_i) = ℂ̂ ∖ \overline{𝒟_{m+i}},

where ℂ̂ := ℂ ∪ {∞} stands for the Riemann sphere. For notational purposes, we also set γ_i^{-1} =: γ_{m+i}.

[Figure: a Schottky configuration of 2m pairwise disjoint discs orthogonal to ℝ.]

Let Γ be the free group generated by γ_i, γ_i^{-1} for i = 1, …, m; then Γ is a convex co-compact group, i.e., it is finitely generated and has no non-trivial parabolic element. The converse is true: up to isometry, convex co-compact hyperbolic surfaces can be obtained as quotients by groups as above; see <cit.>. We assume from now on that each γ_i ∈ PSL_2(ℝ) comes from an element of SL_2(ℤ), so that Γ is naturally identified with an infinite index, finitely generated free subgroup of SL_2(ℤ).

For all j = 1, …, 2m, set I_j := 𝒟_j ∩ ℝ. One can define a map

T : I := ∪_{j=1}^{2m} I_j → ℝ ∪ {∞}

by setting T(x) = γ_j(x) if x ∈ I_j. This map encodes the dynamics of the full group Γ and is called the Bowen-Series map; see <cit.> for the genesis of this type of coding. The key properties are orbit equivalence and uniform expansion of T on the maximal invariant subset ∩_{n≥1} T^{-n}(I), which coincides with the limit set Λ(Γ); see <cit.>.

We now define the function space and the associated transfer operators. Set Ω := ∪_{j=1}^{2m} 𝒟_j. Each complex representation space V_ρ is endowed with an inner product ⟨·,·⟩_ρ which makes each representation ρ : 𝔾 → End(V_ρ) unitary.
Consider now the Hilbert space H²_ρ(Ω), defined as the set of vector valued holomorphic functions F : Ω → V_ρ such that

‖F‖²_{H²_ρ} := ∫_Ω ‖F(z)‖²_ρ dm(z) < +∞,

where dm is the Lebesgue measure on ℂ. On the space H²_ρ(Ω), we define a transfer operator ℒ_{ρ,s}, "twisted" by ρ, by

ℒ_{ρ,s}(F)(z) := ∑_{j≠i} (γ_j'(z))^s F(γ_j z) ρ(γ_j),  if z ∈ 𝒟_i,

where s ∈ ℂ is the spectral parameter and the sum runs over the inverse branches of the Bowen-Series map T. Here ρ(γ_j) is understood as ρ(π_p(γ_j)), γ_j ∈ SL_2(ℤ), where π_p : SL_2(ℤ) → SL_2(𝔽_p) is the natural projection. We also point out that the linear map ρ(g) acts "on the right" on vectors U ∈ V_ρ simply by fixing an orthonormal basis ℬ = (e_1, …, e_{d_ρ}) of V_ρ and setting

U ρ(g) := (U_1, …, U_{d_ρ}) Mat_ℬ(ρ(g)).

Notice that for all j ≠ i, γ_j : 𝒟_i → 𝒟_{m+j} is a holomorphic contraction, since γ_j(𝒟_i) ⊂ 𝒟_{m+j}. Therefore, ℒ_{ρ,s} is a compact trace class operator and thus has a Fredholm determinant. We start by recalling a few facts and introducing some more notations. Given a finite sequence α with

α = (α_1, …, α_n) ∈ {1, …, 2m}^n,

we set γ_α := γ_{α_1} ∘ … ∘ γ_{α_n}. We then denote by 𝒲_n the set of admissible sequences of length n:

𝒲_n := {α ∈ {1, …, 2m}^n : ∀ i = 1, …, n-1, α_{i+1} ≠ α_i + m mod 2m}.

The set 𝒲_n is simply the set of reduced words of length n. For all j = 1, …, 2m, we define 𝒲_n^j by

𝒲_n^j := {α ∈ 𝒲_n : α_n ≠ j}.

If α ∈ 𝒲_n^j, then γ_α maps 𝒟_j into 𝒟_{α_1+m}. Using this set of notations, we have, for all z ∈ 𝒟_j, j = 1, …, 2m, the formula

ℒ_{ρ,s}^N(F)(z) = ∑_{α∈𝒲_N^j} (γ_α'(z))^s F(γ_α z) ρ(γ_α).

A key property of the contraction maps γ_α is that they are eventually uniformly contracting (see <cit.>, Prop. 15.4): there exist C > 0 and 0 < ρ_2 < ρ_1 < 1 such that for all j, all n ≥ 1 and all α ∈ 𝒲_n^j, we have

C^{-1} ρ_2^n ≤ sup_{z∈𝒟_j} |γ_α'(z)| ≤ C ρ_1^n.

In addition, they enjoy the bounded distortion property (see <cit.> for proofs): there exists M_1 > 0 such that for all n, j and all α ∈ 𝒲_n^j, we have, for all z ∈ 𝒟_j,

|γ''_α(z)/γ'_α(z)| ≤ M_1.

We will also need the topological pressure as a way to estimate certain weighted sums over words. We will rely on the following fact <cit.>. Fix σ_0 ∈ ℝ; then there exists C(σ_0) > 0 such that for all n and all σ ≥ σ_0, we have

∑_{j=1}^{2m} (∑_{α∈𝒲_n^j} sup_{𝒟_j} |γ'_α|^σ) ≤ C(σ_0) e^{nP(σ_0)}.

Here σ ↦ P(σ) is the topological pressure, a strictly convex decreasing function which vanishes at σ = δ; see <cit.>. In particular, whenever σ > δ, we have P(σ) < 0. The topological pressure can be defined by a variational formula:

P(σ) = sup_μ (h_μ(T) - σ ∫_Λ log|T'| dμ),

where μ ranges over the set of T-invariant probability measures and h_μ(T) is the measure theoretic entropy. For general facts on topological pressure and the thermodynamical formalism, we refer to <cit.>. Since we will only use the pressure once, for the spectral radius estimate below, we do not feel the need to elaborate on its various other definitions, and refer the reader to the above references.
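For readers who like to experiment, here is a short Python sketch (ours; purely illustrative) that enumerates the reduced words 𝒲_n and checks the count #𝒲_n = 2m(2m-1)^{n-1}, which is the combinatorics underlying the sums over 𝒲_n^j above.

```python
def reduced_words(n, m):
    """W_n: words over the alphabet {1, ..., 2m} with the constraint
    alpha_{i+1} != alpha_i + m (mod 2m), i.e. no letter is immediately
    followed by its inverse (recall gamma_{j+m} = gamma_j^{-1})."""
    letters = range(1, 2*m + 1)
    def forbidden(a, b):
        return b == (a + m - 1) % (2*m) + 1   # b is the inverse letter of a
    words = [(a,) for a in letters]
    for _ in range(n - 1):
        words = [w + (b,) for w in words for b in letters
                 if not forbidden(w[-1], b)]
    return words

m, n = 2, 4
W = reduced_words(n, m)
assert len(W) == 2*m * (2*m - 1)**(n - 1)     # 4 * 27 = 108 reduced words
W_1 = [w for w in W if w[-1] != 1]            # the subfamily W_n^1
```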
First we point out that given z∈_i then for all j≠ i, γ'_j(z) belongs to ∖ (-∞,0], simply because each γ_j is in PSL_2().This make it possible to define γ'_j(z)^s by γ'_j(z)^s:=e^s𝕃(γ'_j(z)),where 𝕃(z) is the complex logarithm definedon ∖ (-∞,0] by the contour integral𝕃(z):=∫_1^zdζ/ζ.By analytic continuation, the same identity holds for iterates. In particular, because of bound (<ref>) and also bound (<ref>) one can easily show that there exists C_1>0 such that for all N,j and allα∈𝒲_N^j, we havesup_z∈_j|γ'_α(z)^s|≤ e^C_1|(s)|sup__j|γ'_α|^σ,where σ=(s). We can now compute, given F∈ H^2_(Ω),‖_,s^N(F)‖^2_H^2_:=∑_j=1^2m∑_α,β∈𝒲_N^j∫__jγ'_α(z)^s γ'_β(z)^s⟨ F(γ_α z) (γ_α),F(γ_β z)(γ_β) ⟩_ dm(z).By unitarity ofand Schwarz inequality we obtain‖_,s^N(F)‖^2_H^2_≤e^2C_1|(s)|∑_j ∑_α,βsup__j|γ'_α|^σsup__j|γ'_β|^σ∫_D_j‖ F(γ_α z) ‖_‖ F(γ_β z) ‖_ dm(z). We now remark that z↦ F(z) has components in H^2(Ω), the Bergman space of L^2 holomorphic functions on Ω=∪_j _j, so we can use the scalar reproducing kernel B_Ω(z,w) to write (in a vector valued way)F(γ_α z )=∫_ΩF(w) B_Ω(γ_α z,w)dm(w).Therefore we get‖ F(γ_α z )‖_≤∫_Ω‖ F(w)‖_| B_Ω(γ_α z,w)| dm(w),and by Schwarz inequality we obtainsup_z∈_j‖ F(γ_α z )‖_≤‖ F ‖_H^2_ (∫_Ω| B_Ω(γ_α z,w)|^2 dm(w))^1/2.Observe now that by uniform contraction of branches γ_α:_j→Ω,there exists a compact subset K⊂Ω such that for all N,j and α∈𝒲_N^j,γ_α(_j)⊂ K.We can therefore bound ∫_Ω| B_Ω(γ_α z,w)|^2 dm(w)≤ Cuniformly in z,α. We have now reached‖_,s^N(F)‖^2_H^2_≤‖ F ‖_H^2_^2 C_2e^2C_1|(s)|∑_j ∑_α,βsup__j|γ'_α|^σsup__j|γ'_β|^σ,and the proof is now done using the topological pressure estimate (<ref>). □The main point of the above estimate is to obtain a bound which is independent of d_. In particular the spectral radiusρ_sp(_,s) of _,s:H^2_(Ω)→ H^2_(Ω) is bounded byρ_sp(_,s)≤ e^P((s)),which is uniform with respect to the representation , and also shows that it is a contraction whenever σ=(s)>δ. Notice also that using the variational principle for the topological pressure, it is possible to show that there exist a_0,b_0>0 such that for all σ∈,P(σ)≤ a_0-σ b_0. We continue with a key determinantal identity. We point out that representations of Selberg zeta functions as Fredholm determinants of transfer operators have a long history going back to Fried <cit.>, Pollicott<cit.> and also Mayer <cit.> for the Modular surface. For more recent works involving transfer operators and unitary representations we also mention <cit.>.For all (s) large, we have the identity :(I-_,s)=L_Γ(s,), Proof. Remark that the above statement implies analytic continuation toof each L-function L_Γ(s,), since each s↦(I-_,s) is readily an entire function of s.For all integer N≥ 1, let us compute the trace of _,s^N.Our basic reference for the theory of Fredholm determinants on Hilbert spaces is <cit.>. Let (e_1,…,e_d_) be an orthonormal basis of V_. For each disc _j let (φ_ℓ^j)_ℓ∈ be a Hilbert basis of the Bergmann space H^2(_j), that is the space of square integrable holomorphic functions on _j. Then the family defined byΨ_j,ℓ,k(z):={φ_ℓ^j(z)e_k if z∈_j0 otherwise, .is a Hilbert basis of H^2_(Ω). Writing⟨_,s^N(Ψ_j,ℓ,k) ,Ψ_j,ℓ,k⟩_H^2_(Ω)= ∑_α∈𝒲_N^j∫__j (γ_α'(z))^s φ^j_ℓ(γ_α z)φ^j_ℓ(z)⟨ e_k (γ_α), e_k ⟩_ dm(z),we deduce thatTr( _,s^N)=∑_j,ℓ,k⟨_,s^N(Ψ_j,ℓ,k) ,Ψ_j,ℓ,k⟩_H^2_(Ω) =∑_j ∑_α∈𝒲_N^j α_1=m+jχ_(γ_α) ∫__j (γ_α'(z))^s B__j(γ_α z,z)dm(z),where χ_ is the character ofand B__j(w,z) is the Bergmann reproducing kernel of H^2(_j). 
There is an explicit formula for the Bergman kernel of a disc 𝒟_j = D(c_j, r_j):

B_{𝒟_j}(w,z) = r_j² / (π [r_j² - (w - c_j)\overline{(z - c_j)}]²).

It is now an exercise involving Stokes' and Cauchy's formulas (for details we refer to Borthwick <cit.>, p. 306) to obtain the Lefschetz identity

∫_{𝒟_j} (γ_α'(z))^s B_{𝒟_j}(γ_α z, z) dm(z) = (γ_α'(x_α))^s / (1 - γ_α'(x_α)),

where x_α is the unique fixed point of γ_α : 𝒟_j → 𝒟_j. Moreover, γ_α'(x_α) = e^{-l(𝒞_α)}, where 𝒞_α is the closed geodesic represented by the conjugacy class of γ_α ∈ Γ and l(𝒞_α) is its length. There is a one-to-one correspondence between prime reduced words (up to circular permutations) in

⋃_{N≥1} ⋃_{j=1}^{2m} {α ∈ 𝒲_N^j such that α_1 = m+j}

and prime conjugacy classes in Γ (see Borthwick <cit.>, p. 303); therefore each prime conjugacy class in Γ, together with its iterates, appears in the above sum when N ranges from 1 to +∞. We have therefore reached, formally,

∑_{N≥1} (1/N) Tr(ℒ_{ρ,s}^N) = ∑_{N≥1} (1/N) ∑_j ∑_{α∈𝒲_N^j, α_1=m+j} χ_ρ(γ_α) (γ_α'(x_α))^s / (1 - γ_α'(x_α)) = ∑_{𝒞∈𝒫} ∑_{k≥1} (χ_ρ(𝒞^k)/k) e^{-skl(𝒞)} / (1 - e^{-kl(𝒞)}).

The prime orbit theorem for convex co-compact groups says that, as T → +∞ (see for example <cit.>),

#{(k,𝒞) ∈ ℕ^* × 𝒫 : kl(𝒞) ≤ T} = (e^{δT}/δT)(1 + o(1)).

On the other hand, since χ_ρ obviously takes finitely many values on 𝔾, we get absolute convergence of the above series for Re(s) > δ. For all Re(s) large, we get, again formally,

det(I - ℒ_{ρ,s}) = exp(-∑_{N≥1} (1/N) Tr(ℒ_{ρ,s}^N)) = exp(-∑_{𝒞,k,n} (χ_ρ(𝒞^k)/k) e^{-(s+n)kl(𝒞)}) = ∏_{𝒞∈𝒫} ∏_{n∈ℕ} exp(-∑_{k≥1} (χ_ρ(𝒞^k)/k) e^{-(s+n)kl(𝒞)}) = ∏_{𝒞∈𝒫} ∏_{k∈ℕ} det(Id_{V_ρ} - ρ(𝒞) e^{-(s+k)l(𝒞)}).

These formal manipulations are justified for Re(s) > δ by using the spectral radius estimate (<ref>) and the fact that if A is a trace class operator on a Hilbert space ℋ with ‖A‖_ℋ < 1, then we have

det(I - A) = exp(-∑_{N≥1} (1/N) Tr(A^N))

(this is a direct consequence of Lidskii's theorem; see <cit.>, chapter 3). The proof is finished, and we have claim 1) of Theorem <ref>. □

Claim 3) follows from the formula (valid for Re(s) > δ)

det(I - ℒ_{ρ,s}) = exp(-∑_{𝒞,k,n} (χ_ρ(𝒞^k)/k) e^{-(s+n)kl(𝒞)})

and the identity for the character of the regular representation (see <cit.>, chapter 2)

∑_{ρ irreducible} d_ρ χ_ρ(g) = |𝔾| 𝒟_e(g),

where 𝒟_e is the Dirac mass at the neutral element e. Indeed, using (<ref>), we get

∏_{ρ irreducible} (det(I - ℒ_{ρ,s}))^{d_ρ} = exp(-|𝔾| ∑_{k,n} ∑_{𝒞∈𝒫 : 𝒞^k ≡ Id mod p} (1/k) e^{-(s+n)kl(𝒞)}).

To see that this is exactly the Euler product defining Z_Γ(p)(s), observe that since for p large we have (by Gamburd's result <cit.>) Γ/Γ(p) ≃ 𝔾, each conjugacy class in Γ of elements belonging to Γ(p) splits into |𝔾| conjugacy classes in Γ(p). The details of the group theoretic arguments are in <cit.>, section 2; they rest on the fact that the only abelian subgroups of Γ are elementary subgroups.
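The scalar identity behind the last Euler product, det(Id - Ux) = exp(-∑_{k≥1} tr(U^k) x^k / k) for |x| < 1, is easy to test numerically. The following sketch (ours; a random unitary matrix stands in for ρ(𝒞), and the truncation at k = 200 is an arbitrary choice) does exactly that.

```python
import numpy as np

# Check det(Id - U x) = exp(-sum_k tr(U^k) x^k / k) for unitary U, |x| < 1.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                    # a random 4x4 unitary matrix
x = 0.3
lhs = np.linalg.det(np.eye(4) - x * U)
rhs = np.exp(-sum(np.trace(np.linalg.matrix_power(U, k)) * x**k / k
                  for k in range(1, 200)))
assert abs(lhs - rhs) < 1e-10             # the two sides agree to rounding
```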
§.§ Singular value estimates

The proof of claim 2) requires more work and uses singular value estimates for vector-valued operators. We first recall a few facts on singular values of trace class operators; our reference for this matter is, for example, the book <cit.>. If T : ℋ → ℋ is a compact operator acting on a Hilbert space ℋ, the singular value sequence is by definition the sequence μ_1(T) = ‖T‖ ≥ μ_2(T) ≥ … ≥ μ_n(T) ≥ … of the eigenvalues of the positive self-adjoint operator √(T^*T). To estimate singular values in a vector valued setting, we will rely on the following fact.

Assume that (e_j)_{j∈J} is a Hilbert basis of ℋ, indexed by a countable set J. Let T be a compact operator on ℋ. Then for all subsets I ⊂ J with #I = n, we have

μ_{n+1}(T) ≤ ∑_{j∈J∖I} ‖T e_j‖_ℋ.

Proof. By the min-max principle for bounded self-adjoint operators, we have

μ_{n+1}(T) = min_{dim(F)=n} max_{w∈F^⊥, ‖w‖=1} ⟨√(T^*T) w, w⟩.

Set F = Span{e_j : j ∈ I}. Given w = ∑_{j∉I} c_j e_j with ∑_{j∉I} |c_j|² = 1, we obtain via the Cauchy-Schwarz inequality

|⟨√(T^*T) w, w⟩| ≤ ‖√(T^*T)(w)‖ = ‖T(w)‖ ≤ ∑_{j∉I} ‖T(e_j)‖,

which concludes the proof. □

Our aim is now to prove the following bound. Let (λ_k(ℒ_{ρ,s}))_{k≥1} denote the eigenvalue sequence of the compact operator ℒ_{ρ,s}. There exist C > 0 and η > 0 such that for all s ∈ ℂ, all representations ρ and all k, we have

|λ_k(ℒ_{ρ,s})| ≤ C d_ρ e^{C|s|} e^{-ηk/d_ρ}.

Before we prove this bound, let us show quickly how its combination with (<ref>) gives estimate 2) of Theorem <ref>. By the definition of Fredholm determinants, we have

log|L_Γ(s,ρ)| ≤ ∑_{k=1}^∞ log(1 + |λ_k(ℒ_{ρ,s})|) = ∑_{k=1}^N log(1 + |λ_k(ℒ_{ρ,s})|) + ∑_{k=N+1}^∞ log(1 + |λ_k(ℒ_{ρ,s})|),

where N will be adjusted later on. The first term is estimated via (<ref>) as

∑_{k=1}^N log(1 + |λ_k(ℒ_{ρ,s})|) ≤ C(|s| + 1)N,

for some large constant C > 0. On the other hand, by the eigenvalue bound from Proposition <ref>, we have

∑_{k=N+1}^∞ log(1 + |λ_k(ℒ_{ρ,s})|) ≤ ∑_{k=N+1}^∞ |λ_k(ℒ_{ρ,s})| ≤ C d_ρ e^{C|s|} ∑_{k≥N+1} e^{-ηk/d_ρ} = C d_ρ e^{C|s|} e^{-(N+1)η/d_ρ}/(1 - e^{-η/d_ρ}) ≤ C' (d_ρ²/η) e^{C|s|} e^{-Nη/d_ρ}.

Choosing N = B[|s| d_ρ] + B[d_ρ log d_ρ] for some large B > 0 leads to

∑_{k=N+1}^∞ log(1 + |λ_k(ℒ_{ρ,s})|) ≤ B̃

for some constant B̃ > 0, uniform in |s| and d_ρ. Therefore we get

log|L_Γ(s,ρ)| ≤ O(d_ρ log(d_ρ)(|s|² + 1)),

which is the bound claimed in statement 2).

Proof of Proposition <ref>. We first recall that if 𝒟_j = D(c_j, r_j), an explicit Hilbert basis of the Bergman space H²(𝒟_j) is given by the functions (ℓ = 0, …, +∞, j = 1, …, 2m)

φ_ℓ^{(j)}(z) = √((ℓ+1)/π) (1/r_j) ((z - c_j)/r_j)^ℓ.

By the Schottky property, one can find η_0 > 0 such that for all z ∈ 𝒟_j and all i ≠ j, we have γ_i(z) ∈ 𝒟_{i+m} and

|γ_i(z) - c_{m+i}|/r_{m+i} ≤ e^{-η_0},

so that we have, uniformly in i and z,

|φ_ℓ^{(i+m)}(γ_i z)| ≤ C e^{-η_1 ℓ},

for some 0 < η_1 < η_0. Going back to the basis Ψ_{j,ℓ,k}(z) of H²_ρ(Ω), we can write

‖ℒ_{ρ,s}(Ψ_{j,ℓ,k})‖²_{H²_ρ} = ∑_{n=1}^{2m} ∑_{i,i'≠n} ∫_{𝒟_n} (γ_i'(z))^s \overline{(γ_{i'}'(z))^s} ⟨Ψ_{j,ℓ,k}(γ_i z) ρ(γ_i), Ψ_{j,ℓ,k}(γ_{i'} z) ρ(γ_{i'})⟩_ρ dm(z).

Using the Cauchy-Schwarz inequality and the unitarity of the representation for the inner product ⟨·,·⟩_ρ, we get, by (<ref>) and (<ref>),

‖ℒ_{ρ,s}(Ψ_{j,ℓ,k})‖²_{H²_ρ} ≤ C e^{C|s|} e^{-2η_1 ℓ},

for some large constant C > 0. We can now use Lemma <ref> to write

μ_{2m d_ρ n + 1}(ℒ_{ρ,s}) ≤ ∑_{j=1}^{2m} ∑_{ℓ=n}^{+∞} ∑_{k=1}^{d_ρ} ‖ℒ_{ρ,s}(Ψ_{j,ℓ,k})‖_{H²_ρ} ≤ C d_ρ e^{C|s|} e^{-η_1 n},

for some C > 0. Given N ∈ ℕ, we write N = 2m d_ρ k + r, where 0 ≤ r < 2m d_ρ and k = [N/(2m d_ρ)]. We end up with

μ_{N+1}(ℒ_{ρ,s}) ≤ μ_{2m d_ρ k + 1}(ℒ_{ρ,s}) ≤ C' d_ρ e^{C|s|} e^{-η_2 N/d_ρ},

for some η_2 > 0. To produce a bound on the eigenvalues, we then use a variant of the Weyl inequalities (see <cit.>, Thm 1.14) to get

|λ_N(ℒ_{ρ,s})|^N ≤ ∏_{k=1}^N |λ_k(ℒ_{ρ,s})| ≤ ∏_{k=1}^N μ_k(ℒ_{ρ,s}),

which yields

|λ_N(ℒ_{ρ,s})| ≤ C_1 d_ρ e^{C_2|s|} e^{-(η_2/(N d_ρ)) ∑_{k=1}^N k}.

Using the well known identity ∑_{k=1}^N k = N(N+1)/2, we finally recover

|λ_N(ℒ_{ρ,s})| ≤ C_1 d_ρ e^{C_2|s|} e^{-ηN/d_ρ},

for some η > 0, and the proof is done. □

The reader will have noticed that we have used no specific information at all about the representation theory of 𝔾 = SL_2(𝔽_p), and that the above routine works verbatim for an abstract finite group 𝔾, as long as we have a natural group homomorphism Γ → 𝔾.

§ ZERO-FREE REGIONS FOR L-FUNCTIONS AND EXPLICIT FORMULAE

The goal of this section is to prove the following result, which allows us to convert zero-free regions into upper bounds on certain sums over closed geodesics. Fix α > 0, 0 ≤ σ < δ and ε > 0.
Then there exists a C_0^∞ test function φ_0, with φ_0 ≥ 0 and Supp(φ_0) = [-1,+1], such that for ρ non-trivial, if L_Γ(s,ρ) has no zeros in the rectangle

{σ ≤ Re(s) ≤ 1 and |Im(s)| ≤ (log T)^{1+α}}

for some T large enough, then we have

∑_{𝒞,k} (χ_ρ(𝒞^k) l(𝒞)/(1 - e^{-kl(𝒞)})) φ_0(kl(𝒞)/T) = O(d_ρ log(d_ρ) e^{(σ+ε)T}),

where the implied constant is uniform in T and d_ρ. The proof occupies the rest of the section and is broken into several elementary steps.

§.§ Preliminary Lemmas

We start this section with the following fact from harmonic analysis.

For all α > 0, there exist C_1, C_2 > 0 and a positive test function φ_0 ∈ C_0^∞(ℝ) with Supp(φ_0) = [-1,+1] such that for all |ξ| ≥ 2, we have

|φ̂_0(ξ)| ≤ C_1 e^{|Im(ξ)|} exp(-C_2 |Re(ξ)|/(log|Re(ξ)|)^{1+α}),

where φ̂_0(ξ) is the Fourier transform, defined as usual by

φ̂_0(ξ) = ∫_{-∞}^{+∞} φ_0(x) e^{-ixξ} dx.

Proof. It is known from the Beurling-Malliavin multiplier theorem, or from the Denjoy-Carleman theorem, that for compactly supported test functions ψ, one cannot beat the rate (ξ ∈ ℝ, large)

|ψ̂(ξ)| = O(exp(-C|ξ|/log|ξ|)),

because this rate of Fourier decay implies quasi-analyticity (hence no compactly supported test functions). We refer the reader to <cit.>, chapter 5, for more details. The above statement is definitely a folklore result; however, since we need precise control for complex valued ξ and could not find an exact reference, we provide an outline of the proof, which follows closely the construction in <cit.>, chapter 5, Lemma 2.7.

Let (μ_j)_{j≥1} be a sequence of positive numbers such that ∑_{j=1}^∞ μ_j = 1. For all k ∈ ℤ, set

φ_N(k) = ∏_{j=1}^N sin(μ_j k)/(μ_j k),  φ(k) = ∏_{j=1}^∞ sin(μ_j k)/(μ_j k).

Consider the Fourier series given by

f(x) := ∑_{k∈ℤ} φ(k) e^{ikx},  f_N(x) := ∑_{k∈ℤ} φ_N(k) e^{ikx};

then one can observe that, by the rapid decay of φ(k), f(x) defines a C^∞ function on [-2π, 2π]. On the other hand, one can check that f_N(x) converges uniformly to f as N goes to ∞, and that

f_N(x) = (g_1 ⋆ g_2 ⋆ … ⋆ g_N)(x),

where ⋆ is the convolution product and each g_j is given by

g_j(x) = 2π/μ_j if |x| ≤ μ_j, and 0 elsewhere.

From this observation one deduces that f is positive and supported in [-1,+1], since we assume ∑_{j=1}^∞ μ_j = 1. We now extend f outside [-1,+1] by zero and write, by integration by parts and the Cauchy-Schwarz inequality,

|f̂(ξ)| ≤ (e^{|Im(ξ)|}/|ξ|^N) ‖f^{(N)}‖_{L²(-1,+1)}.

By the Plancherel formula, we get

‖f^{(N)}‖²_{L²(-1,+1)} = ∑_{k∈ℤ} k^{2N} (φ(k))² ≤ C ∏_{j=1}^{N+1} μ_j^{-2},

where C > 0 is some universal constant. Fixing ϵ > 0, we now choose

μ_j = C/(j (log(1+j))^{1+ϵ}),

where C is adjusted so that ∑_{j=1}^∞ μ_j = 1, and we get

|f̂(ξ)| ≤ (e^{|Im(ξ)|}/|ξ|^N) (C_1)^N N! e^{N(1+ϵ)loglog(N)}.

Using Stirling's formula and choosing N of size

N = [|Re(ξ)|/(log|Re(ξ)|)^{1+2ϵ}]

yields (after some calculations)

|f̂(ξ)| = O(e^{|Im(ξ)|} e^{-C_2 |Re(ξ)|/(log|Re(ξ)|)^{1+2ϵ}}),

and the proof is finished. □

One can obviously push the above construction further below the threshold |ξ|/log|ξ| and obtain decay rates of the type

exp(-|ξ|/(log|ξ| log(log|ξ|) ⋯ (log_{(n)}|ξ|)^{1+α})),

where log_{(n)}(x) = loglog⋯log(x), iterated n times. However, this would only yield a very mild improvement of the main statement, so we content ourselves with the above lemma.

We continue with another result, which allows us to estimate the size of the logarithmic derivative of L_Γ(s,ρ) in a narrow rectangular zero-free region. More precisely, we have the following. Fix σ < δ. For all ϵ > 0, there exist C(ϵ), R(ϵ) > 0 such that for all R ≥ R(ϵ), if L_Γ(s,ρ) (ρ non-trivial) has no zeros in the rectangle

{σ ≤ Re(s) ≤ 1 and |Im(s)| ≤ R},
We will use Caratheodory's Lemma and take advantage of the a priori bound from Theorem <ref>. More precisely, our goal is to rely on this estimate (see Titchmarsh <cit.>, 5.51).Assume that f is a holomorphic function on a neighborhood of the closed disc D(0,r), thenfor all r'<r, we have max_| z|≤ r'| f'(z) |≤8r/(r-r')^2 ( max_| z|≤ r|(f(z))|+| f(0)|). First we recall that for all (s)>δ, L_Γ(s,) does not vanish and has a representation as L_Γ(s,)=exp(-∑_𝒞,kχ_(𝒞^k)/ke^-skl(𝒞)/1-e^kl(𝒞)),so that we get for all (s)≥ A>δ,|log| L_Γ(s,)||≤ C_A d_,|L'_Γ(s,)/L_Γ(s,)|≤C'_A d_where C_A, C'_A>0 are uniform constants on all half-planes {(s)≥ A>δ}.We have simply used the prime orbit theorem and the trivial bound on characters of unitary representations: |χ_(g)|≤ d_, for allg∈. Let us now assume that L_Γ(s,) does not vanish on the rectangle {σ≤(s) ≤ 1 and |(s)|≤ R}.Consider the disc D(M,r) centered at M and with radius r where M(σ,R) and r(σ,R) are given by M(σ,R)=R^2/2(1-σ)+σ+1/2;r(σ,R)=M(σ,R)-σ, see the figure below. < g r a p h i c s >Since by assumption s↦ L_Γ(s,) does not vanish on the closed disc D(M,r), we can choose a determination of the complex logarithm of L_Γ(s,) on this disc to which we can apply Lemma <ref> on the smaller discD(M,r-ε), which yields (using the a priori bound from Theorem <ref> and estimate (<ref>))|L'_Γ(s,)/L_Γ(s,)|≤ Cr/ε (d_log(d_) r^2 +A_1d_ )=O ( R^6 d_log(d_) ), where the implied constant is uniform with respect to R and d_.Looking at the picture, the smaller disc D(M,r-ε) contains a rectangle {σ+2ε≤(s) ≤ 1 and |(s)|≤ L(ε)}, where L(ε) satisfies the identity (Pythagoras Theorem!) L^2(ε)=ε(2M-2σ-3ε),which shows that L(ϵ)≥ C(ε)R,with C(ε)>0, as long as R≥ R_0(ϵ), for some R_0>0. The proof is done. □ §.§ Proof of the Proposition <ref>We are now ready to prove the main result of this section, by combining the above facts with a standard contour deformation argument. We fix a small ε>0 and 0<α<α. We use Lemma <ref> to pick a test function φ_0 with Fourier decay as described, with same exponent α. We set for all T>0, and s∈,ψ_T(s)=∫_-∞^+∞ e^sxφ_0(x/T)dx =Tφ_0(isT).By the estimate from Lemma <ref>,we have |ψ_T(s)|≤ C_1 Te^T|(s)|exp(-C_2 |(s)| T/(log(T|(s)|)^1+α). We fix now A>δ and consider the contour integralI(,T)=1/2iπ∫_A-i∞^A+i∞L'_Γ(s,)/L_Γ(s,)ψ_T(s)ds.Convergence is guaranteed by estimate (<ref>) and rapid decay of |ψ_T(s)| on vertical lines. Because we choose A>δ, we have absolute convergence of the seriesL'_Γ(s,)/L_Γ(s,)=∑_𝒞,kχ_(𝒞^k) l(𝒞)e^-skl(𝒞)/1-e^kl(𝒞)on the vertical line {(s)=A},and we can use Fubini to writeI(,T)=∑_𝒞,kχ_(𝒞^k) l(𝒞)/1-e^kl(𝒞)e^-Akl(𝒞)1/2π∫_-∞^+∞ e^-itkl(𝒞)ψ_T(iA-t)dt,and Fourier inversion formula givesI(,T)=∑_𝒞,kχ_(𝒞^k) l(𝒞)/1-e^kl(𝒞)φ_0( kl(𝒞)/T).Assuming that L_Γ(s,) has no zeros in {σ≤(s) ≤ 1 and |(s)|≤ R},where R will be adjusted later on, our aim is to use Proposition <ref> to deform the contour integral I(,T) as depicted in the figure below.< g r a p h i c s >Writing I(,T)=∑_j=1^5 I_j (see the above figure), we need to estimate carefully each contribution. In the course of the proof, we will use the following basic fact.Let ϕ:[M_0,+∞)→^+ be a C^2 map with ϕ'(x)>0 on [M_0,+∞) and satisfying (*)sup_x≥ M_0|ϕ”(x)/(ϕ'(x))^3|≤ C, then we have for all M≥ M_0, ∫_M^+∞ e^-ϕ(t)dt≤e^-ϕ(M)/ϕ'(M)+Ce^-ϕ(M). Proof. First observe that condition (*) implies thatx↦1/(ϕ'(x))^2 has a uniformly bounded derivative, which is enough to guarantee thatlim_x→ +∞e^-ϕ(x)/ϕ'(x)=0. 
In particular, lim_{x→+∞} ϕ(x) = +∞, and for all M ≥ M_0, ϕ : [M,+∞) → [ϕ(M),+∞) is a C²-diffeomorphism. A change of variables gives

∫_M^{+∞} e^{-ϕ(t)} dt = ∫_{ϕ(M)}^{+∞} e^{-u} du/ϕ'(ϕ^{-1}(u)),

and integrating by parts yields the result. □

* We first deal with I_1 and I_5. Using estimate (<ref>) combined with (<ref>), we have

|I_5| ≤ C d_ρ T e^{TA} ∫_{C(ε)R}^{+∞} e^{-C_2 tT/(log(tT))^{1+α'}} dt,

which by a change of variables leaves us with

|I_5| ≤ C d_ρ e^{TA} ∫_{C(ε)RT}^{+∞} e^{-C_2 u/(log(u))^{1+α'}} du.

This is where we use Lemma <ref>, with ϕ(x) = C_2 x/(log(x))^{1+α'}. Computing the first two derivatives, one checks that condition (*) is fulfilled, and therefore

∫_M^{+∞} e^{-C_2 u/(log(u))^{1+α'}} du ≤ C (log(M))^{1+α'} e^{-C_2 M/(log(M))^{1+α'}},

for some universal constant C > 0. We have finally obtained

|I_5| ≤ C d_ρ e^{TA} (log(RT))^{1+α'} e^{-C_2 RT/(log(RT))^{1+α'}}.

Choosing R = (log(T))^{1+α}, with α > α', gives

|I_5| = O(d_ρ e^{TA} (log(T))^{1+α'} e^{-C_2 T(log(T))^{α-α'}}) = O(d_ρ e^{-BT}),

where B > 0 can be taken as large as we want. The exact same estimate is valid for I_1.

* The case of I_4 and I_2. Here we use the bound from Proposition <ref> and again (<ref>) to get

|I_4| + |I_2| = O(d_ρ log(d_ρ) e^{-BT}),

where B can again be taken as large as we want.

* We are left with I_3, where

I_3 = (1/2π) ∫_{-C(ϵ)R}^{+C(ϵ)R} (L'_Γ(σ+ε+it,ρ)/L_Γ(σ+ε+it,ρ)) ψ_T(σ+ε+it) dt.

Using Proposition <ref> and (<ref>), we get

|I_3| = O(d_ρ log(d_ρ) (log(T))^{7(1+α)} e^{(σ+ϵ)T}).

Clearly the leading term in the contour integral is provided by I_3, and the proof of Proposition <ref> is now complete.

We conclude this section with a final observation. If ρ = id is the trivial representation, then L_Γ(s,id) = Z_Γ(s) has a zero at s = δ, so the best estimate for the contour integral I(id,T) is the one given by (<ref>) and (<ref>), which yields (by a change of variables)

|I(id,T)| ≤ C_A d_ρ ∫_{-∞}^{+∞} |ψ_T(A+it)| dt ≤ C_A d_ρ T e^{TA} ∫_{-∞}^{+∞} exp(-C_2 |t| T/(log(T|t|))^{1+α'}) dt = O(d_ρ e^{TA}).

Since d_ρ = 1 and A can be taken as close to δ as we want, the contribution from the trivial representation is of size

I(id,T) = O(e^{(δ+ϵ)T}).

§ EXISTENCE OF "LOW LYING" ZEROS FOR L_Γ(s,ρ)

§.§ Conjugacy classes in 𝔾

In this section, we use more precise knowledge of the group structure of 𝔾 = SL_2(𝔽_p). Our basic reference is the book <cit.>; see chapters 3 and 6 for much more general statements over finite fields. We start by describing the conjugacy classes in 𝔾. Since we are only interested in the large p behaviour, we assume that p is an odd prime strictly bigger than 3. The conjugacy class of an element g ∈ 𝔾 is essentially determined by the roots of the characteristic polynomial

det(xI_2 - g) = x² - tr(g)x + 1,

which are denoted by λ, λ^{-1}, where λ lies in the multiplicative group of an algebraic closure of 𝔽_p. There are three different possibilities.

* λ ≠ λ^{-1} ∈ 𝔽_p^×. In that case g is diagonalizable over 𝔽_p and g is conjugate to the matrix

D(λ) = ([ λ 0; 0 λ^{-1} ]).

The centralizer 𝒵(D(λ)) = {h ∈ 𝔾 : hD(λ)h^{-1} = D(λ)} is then equal to the "maximal torus"

A = {([ a 0; 0 a^{-1} ]) : a ∈ 𝔽_p^×},

and we have |A| = p-1; the conjugacy class of g has p(p+1) elements.

* λ ≠ λ^{-1} ∉ 𝔽_p. In that case λ belongs to ℱ ≃ 𝔽_{p²}, the unique quadratic extension of 𝔽_p. The root λ can be written as

λ = a + b√(ϵ),  λ^{-1} = a - b√(ϵ),

where {1, √(ϵ)} is a fixed 𝔽_p-basis of ℱ. Therefore g is conjugate to

([ a ϵb; b a ]),

|𝒵(g)| = p+1, and its conjugacy class has p(p-1) elements.

* λ = λ^{-1} ∈ {±1}.
In that case g is non-diagonalizable unless g ∈ 𝒵(𝔾) = {±I_2}, and g is conjugate to ±u or ±u', where

u = ([ 1 1; 0 1 ]),  u' = ([ 1 ϵ; 0 1 ]).

The centralizer 𝒵(g) has cardinality 2p, and each of the four corresponding conjugacy classes has (p²-1)/2 elements.

There are two facts that we highlight and will use in the sequel:

* For all g ∈ 𝔾, |𝒵(g)| ≥ p-1.
* For all non-trivial ρ, we have d_ρ ≥ (p-1)/2.

We will also rely on the very important observation below.

Let Γ be a convex co-compact subgroup of SL_2(ℤ) as above. Fix 0 < β < 2, and consider the set ℰ_T of conjugacy classes γ ⊂ Γ ∖ {Id} such that for all γ ∈ ℰ_T, we have l(γ) ≤ T := β log(p). Then for all p large and all γ_1, γ_2 ∈ ℰ_T, the following are equivalent:

* tr(γ_1) = tr(γ_2);
* γ_1 and γ_2 are conjugate in 𝔾.

Proof. Clearly (1) implies that γ_1 and γ_2 have the same trace modulo p. Unless we are in the cases tr(γ_1) = tr(γ_2) = ±2 mod p, we know from the above description of conjugacy classes that they are determined by the knowledge of the trace. To eliminate these "parabolic mod p" cases, we observe that if γ ∈ ℰ_T satisfies tr(γ) = ±2 + kp with k ≠ 0, then

2cosh(l(γ)/2) = |tr(γ)| ≥ p - 2,

while 2cosh(l(γ)/2) ≤ 2cosh(T/2) ≤ 1 + p^{β/2}, so that

p - 2 ≤ 1 + p^{β/2},

which leads to an obvious contradiction if p is large; therefore k = 0. But then |tr(γ)| = 2, which is impossible since Γ has no non-trivial parabolic element (convex co-compact hypothesis).

Conversely, if γ_1 and γ_2 are conjugate in 𝔾, then we have tr(γ_1) = tr(γ_2) mod p. If tr(γ_1) ≠ tr(γ_2), then this gives

p ≤ |tr(γ_1) - tr(γ_2)| ≤ 4cosh(T/2) ≤ 2(p^{β/2} + 1),

again a contradiction for p large. □

§.§ Proof of the main result

Before we can rigorously prove Theorem <ref>, we need one last fact from representation theory, which is a handy folklore formula.

Let 𝔾 be a finite group and let ρ : 𝔾 → End(V_ρ) be an irreducible unitary representation. Then for all x, y ∈ 𝔾, we have

χ_ρ(x) \overline{χ_ρ(y)} = (d_ρ/|𝔾|) ∑_{g∈𝔾} χ_ρ(x g y^{-1} g^{-1}).

Proof. Writing

∑_{g∈𝔾} χ_ρ(x g y^{-1} g^{-1}) = Tr(ρ(x) ∑_g ρ(g y^{-1} g^{-1})),

we observe that

U_y := ∑_g ρ(g y^{-1} g^{-1})

commutes with the irreducible representation ρ, so by Schur's lemma (<cit.>, chapter 2) it has to be of the form U_y = λ(y) I_{V_ρ}, with λ(y) ∈ ℂ, which shows that

∑_{g∈𝔾} χ_ρ(x g y^{-1} g^{-1}) = χ_ρ(x) λ(y).

Taking traces in U_y = λ(y) I_{V_ρ} gives d_ρ λ(y) = |𝔾| χ_ρ(y^{-1}) = |𝔾| \overline{χ_ρ(y)}, by unitarity of ρ, and the proof is done. □
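Both facts highlighted earlier in this subsection are easy to verify by brute force for a small prime. The following Python sketch (our own sanity check, not part of the argument) enumerates SL_2(𝔽_5), confirms |𝔾| = p(p²-1), and checks the centralizer bound |𝒵(g)| ≥ p-1 against the four possible centralizer sizes listed above.

```python
from itertools import product

p = 5
G = [g for g in product(range(p), repeat=4)
     if (g[0]*g[3] - g[1]*g[2]) % p == 1]       # g = (a, b, c, d) in SL_2(F_p)
assert len(G) == p * (p*p - 1)                  # |G| = 120 for p = 5

def mul(g, h):
    a, b, c, d = g
    e, f, i, j = h
    return ((a*e + b*i) % p, (a*f + b*j) % p,
            (c*e + d*i) % p, (c*f + d*j) % p)

# every centralizer has size p-1, p+1, 2p or |G|, hence at least p-1
for g in G:
    cent = sum(1 for h in G if mul(g, h) == mul(h, g))
    assert cent in (p - 1, p + 1, 2*p, len(G))
```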
Let us compute S(p).
S(p) = ∑_{ρ irreducible} ∑_{𝒞,k} ∑_{𝒞',k'} [ℓ(𝒞)ℓ(𝒞')/((1-e^{kℓ(𝒞)})(1-e^{k'ℓ(𝒞')}))] φ_0( kℓ(𝒞)/T ) φ_0( k'ℓ(𝒞')/T ) χ_ρ(𝒞^k) χ̄_ρ(𝒞'^{k'}).
Using Lemma <ref>, we have
χ_ρ(𝒞^k) χ̄_ρ(𝒞'^{k'}) = (d_ρ/|G|) ∑_{g∈G} χ_ρ( 𝒞^k g (𝒞')^{-k'} g^{-1} ),
and Fubini plus the identity
∑_{ρ irreducible} d_ρ χ_ρ(g) = |G| 𝒟_e(g),
where 𝒟_e denotes the Dirac mass at the neutral element, allow us to obtain
S(p) = ∑_{𝒞,k} ∑_{𝒞',k'} [ℓ(𝒞)ℓ(𝒞')/((1-e^{kℓ(𝒞)})(1-e^{k'ℓ(𝒞')}))] φ_0( kℓ(𝒞)/T ) φ_0( k'ℓ(𝒞')/T ) Φ_G(𝒞^k,𝒞'^{k'}),
where
Φ_G(𝒞^k,𝒞'^{k'}) := ∑_{g∈G} 𝒟_e( 𝒞^k g (𝒞')^{-k'} g^{-1} ).
Since all terms in this sum are now positive and Supp(φ_0) = [-1,+1], we can fix a small ε>0 and find a constant C_ε>0 such that
S(p) ≥ C_ε ∑_{kℓ(𝒞) ≤ T(1-ε), k'ℓ(𝒞') ≤ T(1-ε)} Φ_G(𝒞^k,𝒞'^{k'}).
Observe now that
Φ_G(𝒞^k,𝒞'^{k'}) = ∑_{g∈G} 𝒟_e( 𝒞^k g (𝒞')^{-k'} g^{-1} ) ≠ 0
if and only if 𝒞^k and (𝒞')^{k'} are in the same conjugacy class mod p, and in that case
Φ_G(𝒞^k,𝒞'^{k'}) = |𝒵(𝒞^k)| = |𝒵(𝒞'^{k'})|.
Using the lower bound for the cardinality of centralizers, we end up with
S(p) ≥ C_ε (p-1) ∑_{𝒞^k = 𝒞'^{k'} mod p, kℓ(𝒞), k'ℓ(𝒞') ≤ T(1-ε)} 1.
Notice that since we have taken T = β log(p) with β < 2, we can use Proposition <ref>, which says that 𝒞^k and (𝒞')^{k'} are in the same conjugacy class mod p iff they have the same traces (in SL_2(ℤ)). It is therefore natural to rewrite the lower bound for S(p) in terms of traces. We need to introduce a bit more notation. Let ℒ_Γ be the set of traces, i.e.
ℒ_Γ = { tr(γ) : γ ∈ Γ } ⊂ ℤ.
Given t ∈ ℒ_Γ, we denote by m(t) the multiplicity of t in the trace set:
m(t) = #{ conjugacy classes γ ⊂ Γ : tr(γ) = t }.
We have therefore (notice that multiplicities are squared in the double sum)
S(p) ≥ C_ε (p-1) ∑_{t ∈ ℒ_Γ, |t| ≤ 2cosh(T(1-ε)/2)} m^2(t).
To estimate this sum from below, we use a trick that goes back to Selberg. By the prime orbit theorem <cit.> applied to the surface Γ\ℍ, we know that for all T large we have
C e^{(δ-2ε)T} ≤ ∑_{t ∈ ℒ_Γ, |t| ≤ 2cosh(T(1-ε)/2)} m(t),
and by the Schwarz inequality we get, for T large,
C e^{(δ-2ε)T} ≤ C_0 ( ∑_{t ∈ ℒ_Γ, |t| ≤ 2cosh(T(1-ε)/2)} m^2(t) )^{1/2} e^{T/4},
where we have used the obvious bound
#{ n ∈ ℤ : |n| ≤ 2cosh(T(1-ε)/2) } = O(e^{T/2}).
This yields the lower bound
∑_{t ∈ ℒ_Γ, |t| ≤ 2cosh(T(1-ε)/2)} m^2(t) ≥ C_ε' e^{(2δ-1/2-ε)T},
which shows that one can take advantage of exponential multiplicities in the length spectrum when δ > 1/2, thus beating the simple bound coming from the prime orbit theorem. In a nutshell, we have reached the lower bound (for all ε>0)
S(p) ≥ C_ε (p-1) e^{(2δ-1/2-ε)T}.
Keeping that lower bound in mind, we now turn to upper bounds, using Proposition <ref>. Writing
S(p) = | I(id,T) |^2 + ∑_{ρ ≠ id} | I(ρ,T) |^2,
and using the bound (<ref>) combined with the conclusion of Proposition <ref>, we get
S(p) = O( e^{(2δ+ε)T} ) + O( ∑_{ρ ≠ id} d_ρ^2 (log(d_ρ))^2 e^{2(σ+ε)T} ).
Using the formula
|G| = ∑_{ρ} d_ρ^2,
combined with the fact that |G| = p(p^2-1) = O(p^3), we end up with
S(p) = O( e^{(2δ+ε)T} ) + O( p^3 log(p) e^{2(σ+ε)T} ).
Since T = β log(p), we have obtained for all p large [Note that the log(p) term has been absorbed in p^ε.]
C p^{(2δ-1/2-ε)β} ≤ p^{(2δ+ε)β-1} + p^{2+2(σ+ε)β+ε}.
Remark that since β < 2, if ε is small enough we always have
(2δ+ε)β - 1 < (2δ-1/2-ε)β,
so up to a change of constant C, we actually have for all large p
C p^{(2δ-1/2-ε)β} ≤ p^{2+2(σ+ε)β+ε}.
We have a contradiction for p large provided
σ < (δ - 1/4 - 1/β) - ε - ε/(2β).
Since β can be taken arbitrarily close to 2 and ε arbitrarily close to 0, we have a contradiction whenever δ > 3/4 and σ < δ - 3/4. Therefore, for all p large, at least one of the L-functions L_Γ(s,ρ) with non-trivial ρ has to vanish inside the rectangle
{ δ - 3/4 - ε ≤ Re(s) ≤ δ and |Im(s)| ≤ (log(log(p)))^{1+α} },
but then by the product formula we know that this zero appears as a zero of Z_{Γ(p)}(s) with multiplicity d_ρ, which is greater than or equal to (p-1)/2 by Frobenius. The main theorem is proved. □

We end with a few comments.
It is rather clear to us that this strategy should work, without major modification, for congruence subgroups of arithmetic groups arising from quaternion algebras, and also in higher dimension, i.e. for convex co-compact subgroups of SL_2(ℤ[i]). What is less clear is the possibility of obtaining similar results for more general families of Galois covers, since we have used arithmeticity in a rather fundamental way (via exponential multiplicities in the length spectrum). It would be interesting to know if the log^{1+ε}(log(p)) bound can be improved to a uniform constant. However, this would likely require a completely different approach, since log(log(p)) is the very limit one can achieve with compactly supported test functions. Indeed, to achieve a uniform bound with our approach would require the use of test functions φ ≢ 0 whose Fourier transforms satisfy bounds of the type |φ̂(ξ)| ≤ C_1 e^{|Im(ξ)|} e^{-C_2 |Re(ξ)|}, but an application of the Paley-Wiener theorem shows that such test functions do not exist (they would be both compactly supported and analytic on the real line).

Borthwick David Borthwick. Spectral theory of infinite-area hyperbolic surfaces, volume 318 of Progress in Mathematics. Birkhäuser/Springer, [Cham], second edition, 2016.
BW1 David Borthwick and Tobias Weich. Symmetry reduction of holomorphic iterated function schemes and factorization of Selberg zeta functions. J. Spectr. Theory, 6(2):267–329, 2016.
BGS Jean Bourgain, Alex Gamburd, and Peter Sarnak. Generalization of Selberg's 3/16 theorem and affine sieve. Acta Math., 207(2):255–290, 2011.
Kontorovich1 Jean Bourgain and Alex Kontorovich. On representations of integers in thin subgroups of SL_2( Z). Geom. Funct. Anal., 20(5):1144–1174, 2010.
LewisBowen Lewis Bowen. Free groups in lattices. Geom. Topol., 13(5):3021–3054, 2009.
Bowen1 Rufus Bowen. Hausdorff dimension of quasicircles. Inst. Hautes Études Sci. Publ. Math., (50):11–25, 1979.
BowenSeries1 Rufus Bowen and Caroline Series. Markov maps associated with Fuchsian groups. Inst. Hautes Études Sci. Publ. Math., (50):153–170, 1979.
Button Jack Button. All Fuchsian Schottky groups are classical Schottky groups. In The Epstein birthday schrift, volume 1 of Geom. Topol. Monogr., pages 117–125. Geom. Topol. Publ., Coventry, 1998.
Mayer2 C.-H. Chang and D. Mayer. Thermodynamic formalism and Selberg's zeta function for modular groups. Regul. Chaotic Dyn., 5(3):281–312, 2000.
Degeorge David L. de George and Nolan R. Wallach. Limit formulas for multiplicities in L^2(Γ\ G). Ann. of Math. (2), 107(1):133–150, 1978.
Donnelly Harold Donnelly. On the spectrum of towers. Proc. Amer. Math. Soc., 87(2):322–329, 1983.
Fried1 David Fried. The zeta functions of Ruelle and Selberg. I. Ann. Sci. École Norm. Sup. (4), 19(4):491–517, 1986.
Gamburd1 Alex Gamburd. On the spectral gap for infinite index “congruence” subgroups of SL_2( Z). Israel J. Math., 127:157–200, 2002.
JN2 Dmitry Jakobson and Frédéric Naud. On the critical line of convex co-compact hyperbolic surfaces. Geom. Funct. Anal., 22(2):352–368, 2012.
JN1 Dmitry Jakobson and Frédéric Naud. Resonances and density bounds for convex co-compact congruence subgroups of SL_2(Z). Israel J. Math., 213(1):443–473, 2016.
KatSun Atsushi Katsuda and Toshikazu Sunada. Closed orbits in homology classes. Inst. Hautes Études Sci. Publ. Math., (71):5–32, 1990.
Katz Yitzhak Katznelson. An introduction to harmonic analysis. Dover Publications, Inc., New York, corrected edition, 1976.
Lalley Steven P. Lalley.
Renewal theorems in symbolic dynamics, with applications to geodesic flows, non-Euclidean tessellations and their fractal limits. Acta Math., 163(1-2):1–55, 1989.LP1 Peter D. Lax and Ralph S. Phillips. Translation representation for automorphic solutions of the wave equation in non-Euclidean spaces. I. Comm. Pure Appl. Math., 37(3):303–328, 1984.Mayer1 Dieter H. Mayer. The thermodynamic formalism approach to Selberg's zeta function for PSL(2, Z). Bull. Amer. Math. Soc. (N.S.), 25(1):55–60, 1991.MM Rafe R. Mazzeo and Richard B. Melrose. Meromorphic extension of the resolvent on complete spaces with asymptotically constant negative curvature. J. Funct. Anal., 75(2):260–310, 1987.Naud3 Frédéric Naud. Precise asymptotics of the length spectrum for finite-geometry Riemann surfaces. Int. Math. Res. Not., (5):299–310, 2005.Naud1 Frédéric Naud. Density and location of resonances for convex co-compact hyperbolic surfaces. Invent. Math., 195(3):723–750, 2014.Oh1 Hee Oh. Eigenvalues of congruence covers of geometrically finite hyperbolic manifolds. J. Geom. Anal., 25(3):1421–1430, 2015.OhWinter Hee Oh and Dale Winter. Uniform exponential mixing and resonance free regions for convex cocompact congruence subgroups of SL_2(Z). J. Amer. Math. Soc., 29(4):1069–1115, 2016.ParryPollicott1 William Parry and Mark Pollicott. The Chebotarov theorem for Galois coverings of Axiom A flows. Ergodic Theory Dynam. Systems, 6(1):133–148, 1986.ParryPollicott2 William Parry and Mark Pollicott. Zeta functions and the periodic orbit structure of hyperbolic dynamics. Astérisque, (187-188):268, 1990.Patterson S. J. Patterson. The limit set of a Fuchsian group. Acta Math., 136(3-4):241–273, 1976.PatPerry S. J. Patterson and Peter A. Perry. The divisor of Selberg's zeta function for Kleinian groups. Duke Math. J., 106(2):321–390, 2001. Appendix A by Charles Epstein.Pohl1 Anke D. Pohl. A thermodynamic formalism approach to the Selberg zeta function for Hecke triangle surfaces of infinite area. Comm. Math. Phys., 337(1):103–126, 2015.Pohl2 Anke D. Pohl. Symbolic dynamics, automorphic functions, and Selberg zeta functions with unitary representations. In Dynamics and numbers, volume 669 of Contemp. Math., pages 205–236. Amer. Math. Soc., Providence, RI, 2016.Pollicott1 Mark Pollicott. Some applications of thermodynamic formalism to manifolds with constant negative curvature. Adv. Math., 85(2):161–192, 1991.Roblin Thomas Roblin. Ergodicité et équidistribution en courbure négative. Mém. Soc. Math. Fr. (N.S.), (95):vi+96, 2003.Sarnak Peter Clive Sarnak. PRIME GEODESIC THEOREMS. ProQuest LLC, Ann Arbor, MI, 1980. Thesis (Ph.D.)–Stanford University.serre Jean-Pierre Serre. Représentations linéaires des groupes finis. Hermann, Paris, revised edition, 1978.Simon Barry Simon. Trace ideals and their applications, volume 120 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, second edition, 2005.Suzuki Michio Suzuki. Group Theory Vol. 1, volume 247 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, 1982.Tits E. C. Titchmarsh. The theory of functions. Oxford University Press, Oxford, 1958. Reprint of the second (1939) edition. | http://arxiv.org/abs/1704.08546v1 | {
"authors": [
"Dmitry Jakobson",
"Frederic Naud"
],
"categories": [
"math.SP",
"35P25, 11M36"
],
"primary_category": "math.SP",
"published": "20170427130400",
"title": "L-functions and sharp resonances of infinite index congruence subgroups of $SL_2(\\mathbb{Z})$"
} |
Disentangling the near infrared continuum spectral components of the inner 500 pc of Mrk 573: two-dimensional maps

M. R. Diniz, R. A. Riffel, R. Riffel, D. M. Crenshaw, T. Storchi-Bergmann, T. C. Fischer, H. R. Schmitt, S. B. Kraemer
=======================================================================

We present a near infrared study of the spectral components of the continuum in the inner 500×500 pc^2 of the nearby Seyfert galaxy Mrk 573, using adaptive optics near-infrared integral field spectroscopy with the instrument NIFS of the Gemini North Telescope at a spatial resolution of ∼50 pc. We performed spectral synthesis using the starlight code and constructed maps for the contributions of different age components of the stellar population: young (age ≤ 100 Myr), young-intermediate (100 < age ≤ 700 Myr), intermediate-old (700 Myr < age ≤ 2 Gyr) and old (age > 2 Gyr), to the near-IR K-band continuum, as well as their contributions to the total stellar mass. We found that the old stellar population is dominant within the inner 250 pc, while the intermediate-age components dominate the continuum at larger distances. A young stellar component contributes up to ∼20% within the inner ∼70 pc, while hot dust emission and featureless continuum components are also necessary to fit the nuclear spectrum, contributing up to 20% of the K-band flux there. The radial distribution of the different age components in the inner kiloparsec of Mrk 573 is similar to those obtained by our group for the Seyfert galaxies Mrk 1066, Mrk 1157 and NGC 1068 in previous works using a similar methodology. Young stellar populations (≤100 Myr) are seen in the inner 200–300 pc for all galaxies, contributing ≥20% of the K-band flux, while the near-IR continuum is dominated by the contribution of intermediate-age stars (t = 100 Myr–2 Gyr) at larger distances. Older stellar populations dominate in the inner 250 pc.

Keywords: galaxies: individual (Mrk 573) – galaxies: Seyfert – infrared: galaxies – galaxies: stellar population

§ INTRODUCTION

Stellar population (SP) synthesis using multi-wavelength spectra has been used to constrain the star formation history (SFH) of the host galaxies of active galactic nuclei (AGN), looking in particular for the presence of recent star formation close to the AGN, in support of the so-called AGN-Starburst connection <cit.>. Indeed, previous studies have shown that massive star-forming regions are commonly detected in the inner kiloparsec of active galaxies <cit.>. Both nuclear activity and star formation can be fed by gas inflows towards the nucleus, which provide a gas reservoir in the central region of the galaxy. Indeed, inflows of gas have been observed using optical and near-infrared (near-IR) integral field spectroscopy of nearby galaxies <cit.>. This feeding process is associated with the presence of nuclear bars or non-axisymmetric features, spiral arms, or tidal interactions <cit.>.

The SFH of galaxies can be used as a constraint on their formation and evolution, being a fundamental ingredient of theoretical models. Many studies of galaxy evolution in the infrared spectral range are strongly based on Evolutionary Population Synthesis (EPS) models <cit.>. The main parameters of the stellar populations (SPs), such as their ages, SFHs and stellar masses, are derived through the EPS models. However, these models are still being refined, and of particular importance is the contribution of Thermally Pulsing Asymptotic Giant Branch (TP-AGB) stars, which play an important role in defining the shape of the spectra at near-IR wavelengths <cit.>.
Constructing EPS models for evolved stars, such as TP-AGB stars, is not an easy task, due to their complex inner structures, convection, and eventual mass ejections. Recently, SP studies of nearby Seyfert galaxies using near-IR spectroscopy have become more frequent <cit.>. The near-IR spectral range, besides allowing us to access regions highly obscured by dust, also hosts spectral fingerprints originating in massive and evolved stars, such as those found in the Red Supergiant (RSG) and TP-AGB phases. These phases are responsible for a large fraction of the near-IR stellar continuum and can be used as tracers of young to intermediate-age SPs, with ages between 200 Myr and 2 Gyr <cit.>. Additionally, in this spectral range one can detect hot dust emission associated with the putative torus surrounding the central AGN, as well as some contribution from the AGN featureless continuum (FC) emission, originating in the accretion disk <cit.>. The detection and characterization of these components is fundamental to understand the AGN spectral energy distribution and to investigate the impact of the AGN on the evolution of its host galaxy.

For the reasons pointed out above, near-IR spectroscopy is a powerful tool to investigate both the stellar populations and the unresolved emission from the dusty torus and accretion disk surrounding the supermassive black hole (SMBH). With the aid of the integral field capability at 8-10 meter telescopes, it is also possible to map the spatial distribution of the stellar population in the circumnuclear region of nearby active galaxies, a study that our group, AGNIFS, has been carrying out using the instrument NIFS (Near-Infrared Integral Field Spectrograph) at the Gemini North Telescope. In a recent work, we used near-IR integral field spectroscopy and optical long-slit spectra to map the emission-line and stellar kinematics of the inner 700×2100 pc^2 of Mrk 573, using NIFS at Gemini North and the Dual Imaging Spectrograph (DIS) at Apache Point Observatory, respectively <cit.>. From this work, we found that the flux distributions of ionized and molecular gas, while distinctly different, are morphologically related, as arcs of molecular H_2 gas connect ionized [S iii] gas features from outside the NLR bicone. We also found that the molecular gas kinematics outside the NLR, and the ionized gas kinematics at large distances from the nucleus in the extended NLR (ENLR), show signatures of rotation, as observed in our stellar kinematics analysis. These observations suggest that the ionized gas kinematics and morphology in Mrk 573 can largely be attributed to material originating in the rotating disk of the host galaxy. Deviations from pure rotation were observed along the projected NLR axis at radii r < 750 pc and interpreted as being due to the radiative acceleration of material in the host disk. As the radiatively accelerated gas in the host disk reaches distances smaller than the length of the full NLR/ENLR, we concluded that these outflows may have a smaller range of impact than previously expected. The host disk galaxy presents an inclination of i = 43^∘, with the north edge corresponding to the side nearest us <cit.>.

The stellar content of the central region of Mrk 573 was studied by <cit.> using long-slit spectra obtained with the 4-m Mayall telescope of Kitt Peak National Observatory. They oriented the 1.5''-wide slit along PA=161^∘ and mapped the stellar populations in the inner 8'' (2.7 kpc). They found that the flux at 4020 Å is dominated by an old SP component.
Within the inner 2'', up to 70 % of the flux is due to the emission of 10 Gyr SPs, while the contribution of these populations decreases at larger distances, being responsible for about 40 % of the observed flux at 8'' from the nucleus. Intermediate-age (1 Gyr) SPs contribute about 20 % of the nuclear flux, and their contribution increases at larger distances, reaching 60 % outwards. The contribution of younger SPs is very small at all locations. The SP reddening values are in the range E(B-V)=0-0.4, with the highest values seen at 2'' southeast of the nucleus. On the other hand, <cit.> found that the near-IR continuum in the inner 0.8''×1.6'' of Mrk 573 is dominated by intermediate-age populations, with ages ranging from 100 Myr to 2 Gyr. The contribution of these populations reaches 53 % of the flux at 1.2 μm and is diluted by a featureless continuum (FC) component, which contributes 22 % of the continuum. <cit.> also found that intermediate-age stars dominate the near-IR nuclear continuum, but they did not find evidence for the FC component.

This paper is organized as follows: in Section 2 we discuss the observations and data reduction procedures, Section 3 shows maps of the flux- and mass-weighted contribution of each SP, which are discussed in Section 4. Section 5 summarizes the conclusions of this work.

§ OBSERVATIONS, DATA REDUCTION AND ANALYSIS

In this paper we use the spectral synthesis technique to map the ages of the stellar populations in the inner 500 pc radius of the Seyfert 2 galaxy Mrk 573. Mrk 573 is a nearly face-on, early-type galaxy, morphologically classified as RSAB(rs)^+ <cit.>, presenting a bright extended emission-line region (∼1 kpc) along the direction of its radio emission <cit.> and high-ionization emission lines <cit.>. Three radio continuum sources were first detected by <cit.>: one in the nucleus of the galaxy and two radio lobes along position angle PA=122^∘. It harbors a Seyfert 2 nucleus <cit.> and is located at a distance of ∼73 Mpc <cit.>, for which 1'' corresponds to 350 pc at the galaxy.

Z-, J- and K-band integral field spectroscopy (IFS) of Mrk 573 was obtained with the Gemini North Near-Infrared Integral Field Spectrograph <cit.> operating with the Gemini North adaptive optics system ALTAIR. Observations of Mrk 573 were obtained under the Gemini programme GN-2010B-Q-8 (PI: Michael Crenshaw) in the 2010B and 2011A semesters, following the standard Object-Sky-Object dither sequence. Six exposures of 600 s each were performed in the K band, centred at 2.3 μm and covering the spectral range from 2.1 μm to 2.5 μm; 5 exposures of 600 s in the J band, centred at 1.25 μm and covering the spectral region from 1.1 μm to 1.3 μm; and 6 exposures of 500 s in the Z band, centred at 1.05 μm and covering the spectral region from 0.94 μm to 1.14 μm. NIFS has a square field of view of 3''×3'', divided into 29 slices with an angular sampling of 0.1''×0.04'', and was oriented along position angle PA=133^∘, measured relative to the orientation of the slices. The data reduction was accomplished using tasks contained in the nifs package, which is part of the gemini iraf package, as well as standard iraf tasks and Interactive Data Language (IDL) routines. The process followed the standard procedure of near-IR spectroscopic data reduction, including trimming of the images, flat-fielding, sky subtraction, and wavelength and s-distortion calibrations.
The telluric absorption bands were removed by dividing the spectra of the galaxy by a normalized spectrum of a telluric standard star, observed just before and/or after the galaxy exposures. The final spectra were then flux calibrated by interpolating a black-body function to the spectrum of the telluric standard star. Individual exposure datacubes were created with an angular sampling of 0.05''×0.05'' and combined into a single cube for each band, using the nucleus of the galaxy as reference for the astrometry. The final data cubes cover the inner ≈3''×3'' of Mrk 573, corresponding to ∼1×1 kpc^2 at the galaxy. The spatial resolution is ∼45 pc for the J and K bands, as estimated from the full width at half maximum (FWHM) of the brightness profile of the telluric standard star, while for the Z band the performance of ALTAIR is worse and the resulting spatial resolution is about 55 pc. The spectral resolution is ∼50 km s^-1 for all bands, as obtained from the typical FWHM of the arc lamp lines.

Since the performance of the adaptive optics in the Z band is worse than in the J and K bands, and the signal-to-noise ratio of the Z-band continuum spectra is also lower, we used only the J- and K-band datacubes to perform the spectral synthesis. These cubes were combined into a single datacube with a constant spectral bin of 5 Å and 0.15''×0.15'' angular sampling. More details about the observations and the data reduction procedure can be found in <cit.>.

§.§ Spectral synthesis

The integrated spectrum of an active galaxy comprises a set of components, such as stellar and gas emission and dust, as well as nuclear components such as black-body emission (from the torus) and power-law emission from the accretion disk. One way to disentangle these components is through the technique of spectral synthesis, which allows one to quantify the contribution of each of these components to the spectrum. A widely used code is starlight <cit.>, which fits the continuum by searching for the combination of base components, in different proportions, that best reproduces the observed spectrum. These components comprise the base of spectral elements. In this way, the "key" to the spectral synthesis technique is to provide a base of elements including all possible components observed in galaxies <cit.>. When fitting near-IR spectral data, one needs to bear in mind that this region hosts characteristic absorption features, the most common being the CN, CO, VO, ZrO and TiO absorption bands, which are attributed to evolved stars, such as those in the RGB and TP-AGB phases <cit.>. Thus, the simple stellar population (SSP) models used to fit near-IR data need to include these features. Therefore, we selected the <cit.> SSP models, which include empirical data for the TP-AGB evolutionary phase. The set of spectral elements used here is composed of the SSP models of <cit.> and is described in <cit.>. In short, it includes 12 ages (t = 0.01, 0.03, 0.05, 0.1, 0.2, 0.5, 0.7, 1, 2, 5, 9, 13 Gyr) and 4 metallicities (Z = 0.02, 0.5, 1, 2 Z_⊙). We also included black-body functions for temperatures in the range 700-1400 K, in steps of 100 K, and a power law (F_λ ∼ λ^-0.5), in order to account for possible contributions from hot dust emission and from a featureless continuum (FC) in the nucleus of the galaxy. For details see <cit.>.
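Before writing down the actual fit, the basic idea of such a decomposition (recovering a non-negative population vector from a base of continua normalized at a common wavelength) can be illustrated with a few lines of code. The sketch below is a deliberately minimal stand-in of ours: the "SSP-like" templates, the input weights and the noise level are invented, and the recovery uses plain non-negative least squares, whereas starlight additionally handles reddening and kinematic broadening, as described next.

```python
# Toy continuum decomposition: observed spectrum = non-negative mix of
# normalized base elements (two fake SSP-like continua, a black body
# and a power law F_lambda ~ lambda^-0.5), recovered with NNLS.
import numpy as np
from scipy.optimize import nnls

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants
lam = np.linspace(1.1e-4, 2.4e-4, 400)     # 1.1-2.4 micron, in cm
lam0 = 2.1955e-4                           # normalization wavelength

def planck(T):                             # B_lambda(T)
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

def normalize(flux):                       # unit flux at lam0
    return flux / np.interp(lam0, lam, flux)

base = np.column_stack([
    normalize(lam**-3.0),      # stand-in for an old SSP continuum
    normalize(lam**-1.0),      # stand-in for a young SSP continuum
    normalize(planck(1200.0)), # hot-dust black body
    normalize(lam**-0.5),      # AGN featureless continuum
])

x_true = np.array([0.55, 0.20, 0.15, 0.10])          # input mix
obs = base @ x_true + np.random.default_rng(0).normal(0, 5e-3, lam.size)

x_fit, _ = nnls(base, obs)                           # recover the mix
print("input  vector:", x_true)
print("fitted vector:", x_fit.round(3))
```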
The fit is carried out in starlight by minimizing the following equation:
χ^2 = ∑_λ [ (O_λ - M_λ) ω_λ ]^2,
where O_λ is the observed spectrum, M_λ is the fitted model, ω_λ = 1/e_λ, and e_λ corresponds to the uncertainty associated with the observed spectrum. The emission lines and spurious data were excluded from the fit by setting their weights to zero. Each model spectrum is obtained by:
M_λ = M_λ0 [ ∑_{j=1}^{N_⋆} x_j b_{j,λ} r_λ ] ⊗ G(υ_⋆,σ_⋆),
where M_λ0 is the synthetic flux at the normalization wavelength, free of any emission or absorption line; x⃗ is the population vector, whose components x_j (j=1,...,N_⋆) represent the fractional contribution of each SSP of the spectral base; b_{j,λ} is the normalized spectrum of the jth SSP component of the base; r_λ is the reddening term; ⊗ denotes the convolution operator; and G(υ_⋆,σ_⋆) is the Gaussian distribution used to model the line-of-sight velocity distribution, centered at velocity υ_⋆ with dispersion σ_⋆. The extinction due to dust is modeled as a uniform screen, following the extinction law of <cit.>.

§ RESULTS

An optical image of Mrk 573 obtained with the Hubble Space Telescope (HST) Wide Field Planetary Camera 2 (WFPC2) through the filter F606W is shown in the top-left panel of Fig. <ref>. The central black square indicates the FOV of the NIFS observations. The right panel shows a continuum image of the nuclear region, in logarithmic flux units per pixel, obtained from the NIFS datacube in the spectral range between 2.25 and 2.28 μm, which is free of emission and absorption lines. Spectra from two positions (N and A), extracted within an aperture of 0.15''×0.15'', are shown in the bottom panels. The central "cross" corresponds to the location of the nucleus, defined as the peak of the continuum emission.

The bottom panels of Fig. <ref> show the results of the spectral synthesis overplotted on the observed spectrum (black), for the nucleus (N) and for position A, indicated on the continuum map (top-right panel). In the spectral synthesis, we masked out emission lines and spurious features, fitting only the regions with stellar absorption features and featureless continua. The fits for all spaxels are very similar to those shown in Fig. <ref>. The quality of the fits can be evaluated using the rightmost panel of Fig. <ref>, where we show the map of the mean percent deviation over all fitted pixels (Adev = |O_λ - M_λ|/O_λ).

Since small stellar population differences in the spectra are washed out by the uncertainties of the observations, we have followed <cit.> and binned the contributions of the individual SSPs, x_j, into a coarser population vector as follows <cit.>: young (x_y: t ≤ 100 Myr); young-intermediate (x_yi: 100 < t ≤ 700 Myr); intermediate-old (x_io: 700 Myr < t ≤ 2 Gyr) and old (x_o: 2 < t ≤ 15 Gyr).

Fig. <ref> (top) shows the results of the spectral synthesis as maps of the percent contributions of the main stellar population components (SPCs) to the 2.2 μm continuum light within the inner ∼500 pc of Mrk 573. Grey regions in these maps correspond to masked locations where we were not able to obtain good fits, with Adev > 15. The x_y map shows that the young population contributes up to 50% of the continuum within ∼0.5'', with the highest contribution seen at 0.3'' south of the nucleus. It decreases outwards and then increases again at ∼1'' from the nucleus, in a partial ring structure showing values of up to 100% to the south and south-east. The rest of the ring is dominated by the contribution from the x_yi population, which reaches up to 100% at the ring.
The last two panels (x_io and x_o) show a flux contribution of the older SPs in the central region of up to 60%, mostly inside the ring. Since the SSPs are in units of L_⊙ Å^-1 M_⊙^-1, i.e. a light-to-mass-ratio spectrum, the starlight code computes the mass fractions based on the L/M ratio[More details can be found in the STARLIGHT User Guide at www.starlight.ufsc.br.]. The maps of the mass fractions for each binned age group are shown in the bottom panels of Fig. <ref>. The spatial distribution of each mass contribution is, as expected, similar to that observed for the light fractions; however, due to the nonlinear M/L relation, the x_o maps show higher contributions in mass than in flux, with values of up to 95%.

In Fig. <ref> we present the maps of the FC and BB components, which contribute to the observed continuum emission within 0.45'' (180 pc). Although from the maps it seems that both have the same value, in fact the FC contributes up to 35% and the BB up to 50% in the central pixels. In the same figure we show the E(B-V) reddening map, which displays values of up to 0.8, and the Adev map, with most of the values ≲10%.

§ DISCUSSION

§.§ Distribution of the stellar populations

Studies of the stellar populations in nearby Seyfert galaxies based on near-IR long-slit spectra show a substantial fraction of intermediate-age SPs in the inner few hundred parsecs <cit.>, while at optical wavelengths signatures of recent episodes of star formation are seen in about 40-50 % of nearby bright Seyfert 2 galaxies at similar spatial scales <cit.>. Young and intermediate-age stellar populations in the inner few hundred parsecs of Seyfert galaxies are also detected in recent near-IR IFS studies <cit.>. In addition, a correlation between the distribution of intermediate-age stars and rings of low stellar velocity dispersion (σ_*∼50-70 km s^-1) has been observed for some objects <cit.>, indicating that these low-σ_* rings originate from SPs that still preserve the "cold" kinematics of the gas from which they were formed (being thus younger than their surroundings), as claimed to explain the σ_*-drops that have been commonly reported for more than 20 years <cit.>.

Our results for Mrk 573 show that old SPs are dominant in the inner 0.8'' (300 pc), while at larger distances from the nucleus (up to ≈1.2'', covered by the FOV of our observations) young-intermediate SPs dominate the stellar mass and the K-band continuum emission (see Fig. <ref>). At optical wavelengths, <cit.> found that 82% of the nuclear continuum at 5870 Å of Mrk 573 is due to a 10 Gyr stellar population. They used an integrated spectrum within an aperture of 2''×2'', and their result is in agreement with other studies of the stellar populations at optical wavelengths <cit.>. On the other hand, near-IR spectral synthesis of the nucleus for an aperture of 0.8''×1.6'' indicates that intermediate-age stars contribute 53 % of the K-band flux <cit.>, also in agreement with the results of <cit.>, who found that the nuclear H- and K-band emission is dominated by late-type giants with ages between 100 Myr and 1 Gyr.

The simplest way to represent the mixture of stellar populations of a galaxy is to estimate its mean light-weighted (<log t_⋆>_L) and mass-weighted (<log t_⋆>_M) stellar age. Following <cit.>,
<log t_⋆>_L = ∑_{j=1}^{N} x_j log t_j
and
<log t_⋆>_M = ∑_{j=1}^{N} m_j log t_j.
While the former is more representative of the younger ages, the latter is enhanced by the old SPC <cit.>. In Fig.
<ref> we show maps of the light-weighted (left panel) and mass-weighted (right panel) mean ages, in logarithmic units (years). The mean light- and mass-weighted ages over the whole field of view are <log t_⋆>_L = 8.39 and <log t_⋆>_M = 8.61, respectively. These values are also in good agreement with those found by <cit.> for the nuclear 0.8''×1.6''.

As discussed above, studies based on seeing-limited observations found that old SPs dominate the emission in optical bands, while in the near-IR intermediate-age stars are dominant <cit.>. These studies are based on measurements of a nuclear spectrum that actually integrates the light within a few hundred pc of the nucleus, which is comparable to the whole NIFS FOV. The NIFS adaptive optics observations allowed us to spatially resolve the distribution of the stellar populations in the inner 600 pc of Mrk 573 at a spatial resolution of ∼50 pc, at least 5 times better than that of the previous long-slit observations. We have shown that recent (t < 700 Myr) star formation dominates at distances larger than 300 pc from the nucleus (at least up to ≈500 pc from the nucleus), while at smaller distances older stars dominate the near-IR continuum emission and the stellar mass content of Mrk 573.

§.§ Spatial correlation among stellar populations, velocity dispersion and H_2 emission

Our group AGNIFS (AGN Integral Field Spectroscopy) has started to characterize the stellar population of the inner kiloparsec of galaxies using the starlight code <cit.> via spectral synthesis of IFS obtained with the Gemini Near-infrared Integral Field Spectrograph <cit.>. To date, we have studied four nearby Seyfert galaxies (Mrk 1066, Mrk 1157, NGC 1068 and NGC 5548). For NGC 1068, we found two episodes of recent star formation: one 300 Myr ago, extending over the inner 300 pc of the galaxy, and another 30 Myr ago, observed in a ring at ∼100 pc from the nucleus and associated with an expanding ring observed in warm H_2 gas emission <cit.>. For Mrk 1066 and Mrk 1157, rings of intermediate-age stars have been found, correlated with low stellar velocity dispersion values (σ_*∼50 km s^-1) and interpreted as originating from stars that still preserve the kinematics of the gas from which they formed <cit.>. In the case of NGC 5548, the stellar population is dominated by an old (>2 Gyr) component between 160 and 300 pc from the nucleus, while closer to the nucleus intermediate-age stars (50 Myr–2 Gyr) are dominant <cit.>. Hot dust emission was detected for three galaxies (Mrk 1066, NGC 1068 and NGC 5548), accounting for 30-90 % of the observed K-band nuclear flux, while for two galaxies the FC component was detected, contributing ∼25 % of the K-band nuclear flux for NGC 1068 and ∼60 % for NGC 5548 <cit.>.

In order to look for similar correlations for Mrk 573, we present in the left panel of Fig. <ref> contours (in green) of the H_2 λ2.1218 μm flux distribution overlaid on the young SPC distribution, while in the right panel we show the stellar velocity dispersion map with overlaid contours (in green) of the young-intermediate SPC. The H_2 λ2.1218 μm fluxes were measured by direct integration of the H_2 line profile from the datacube, after subtracting the adjacent continuum. The σ_* map was obtained by using the penalised pixel-fitting (pPXF) method of <cit.> to fit the CO absorption band-heads at ∼2.3 μm, using as templates spectra of late-type stars from <cit.>.
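As a concrete illustration of the two mean-age formulas given above, before we compare the maps themselves, the short script below evaluates <log t_⋆>_L and <log t_⋆>_M for a hypothetical population vector. The twelve ages are those of the base quoted in Section 2, while the light and mass fractions are invented numbers chosen only to show that the mass-weighted age is pulled towards the old components.

```python
# Light- and mass-weighted mean ages for a made-up population vector.
import numpy as np

t = np.array([0.01, 0.03, 0.05, 0.1, 0.2, 0.5,
              0.7, 1, 2, 5, 9, 13]) * 1e9                  # ages in yr
x = np.array([5, 5, 5, 5, 10, 10, 10, 10, 10, 10, 10, 10], float)
m = np.array([0, 0, 1, 1, 3, 5, 5, 10, 15, 20, 20, 20], float)
x /= x.sum()            # light fractions, normalized to 1
m /= m.sum()            # mass fractions, normalized to 1

print(f"<log t*>_L = {np.sum(x * np.log10(t)):.2f}")
print(f"<log t*>_M = {np.sum(m * np.log10(t)):.2f}")  # older, as expected
```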
The H_2 flux distribution shows two arc-shaped structures extended along the east-west direction, with the highest emission observed within a blob centred at 0.3'' south of the nucleus. The σ_* map shows that most values range between 80 and 180 km s^-1, with the lowest values observed mainly to the south, west and north-west of the nucleus, at distances of 0.5'' from it. More details about the molecular gas distribution and kinematics, and about the stellar kinematics, are presented in <cit.>.

Fig. <ref> shows no clear correlation between the young SP distribution and the H_2 emission-line map for Mrk 573, except perhaps at 0.3'' south of the nucleus, where a blob of higher H_2 flux shows some overlap with the distribution of the young SPC around the nucleus. A similar trend is observed between low-σ_* values and the distribution of the young-intermediate age SP, as shown by the green contours overplotted on the sigma map. This supports our previous similar findings for other active galaxies that the low-σ_* structures are due to stars formed ∼100–700 Myr ago, which are not yet in orbital equilibrium with the bulge stars and possibly were formed from gas recently accreted to the inner kiloparsec <cit.>.

§.§ Radial distribution of the stellar populations in Mrk 573 and comparison with other galaxies

In order to compare the radial distribution of the stellar population contributions in Mrk 573 with those of previous studies by our AGNIFS team for other active galaxies, using similar data and techniques, we have built radial profiles of the young, young-intermediate, intermediate-old and old population contributions to the flux at 2.2 μm for all galaxies studied so far. The results are shown in Fig. <ref>. The age bins for all galaxies are the same (as described in Section <ref>), and the derived values for Mrk 1066, Mrk 1157 and NGC 1068 are described in <cit.>, <cit.> and <cit.>, respectively. These plots were constructed by calculating the average contribution of each SPC to the flux at 2.2 μm within circular rings of 0.3'' width in the (deprojected) plane of the disk of the galaxy. The orientation of the line of nodes (Ψ_0) and the disk inclination (i) used in the deprojection were obtained from the modeling of the stellar velocity field and are presented in <cit.> for Mrk 573 (Ψ_0=97^∘, i=26^∘), in <cit.> for Mrk 1066 (Ψ_0=120^∘, i=50^∘) and Mrk 1157 (Ψ_0=114^∘, i=45^∘), and in <cit.> for NGC 1068 (Ψ_0=85^∘, i=40^∘).

Although contributions of young stellar populations of at least 20% are observed at the nucleus only for Mrk 573 and Mrk 1066, they are present for all studied galaxies within the inner 200–300 pc, in good agreement with results obtained from optical <cit.> and near-IR studies <cit.>. These young stars possibly originate from a reservoir of gas recently accumulated in the central region of active galaxies. One possibility is that these reservoirs have been built by gas streaming motions along spiral arms and nuclear bars, seen at similar scales in many active galaxies <cit.>. The nuclear activity may also have been triggered by the presence of this gas reservoir, due to gas directly accreted by the SMBH or to accretion of gas ejected by the recently formed young stars <cit.>. Some contribution of young SPs is also observed at distances larger than 500 pc from the nucleus, while the young-intermediate age SPs show their highest contribution at distances of 300–500 pc from it.
The intermediate-old population is observed mainly within the inner 400 pc (with its highest contribution at ∼200–300 pc from the nucleus), and old populations are mainly observed within the inner 250 pc. The results for the oldest SPCs are in agreement with the inside-out star formation scenario, in which a gradient of stellar population ages is observed, with old stellar populations seen at the nucleus and decreasing ages outwards <cit.>.

§.§ Dust mass and AGN featureless continuum

We found that both the AGN FC (power-law) component and black-body emission contribute up to 20 % of the observed K-band nuclear flux of Mrk 573, as seen in Fig. <ref>. The nucleus of Mrk 573 is classified as Seyfert 2, and the inclusion of these components is necessary to fit the nuclear spectrum of at least 25 % of Seyfert 2 galaxies, while more than 50 % of Seyfert 1 galaxies show FC and hot dust contributions <cit.>. The detection of the power-law component for Mrk 573 suggests that radiation from the accretion disk is coming out through the torus, in good agreement with the detection of an obscured Narrow-Line Seyfert 1 nucleus, as indicated by the presence of a broad component in the Paβ emission line <cit.>. The spectral synthesis confirmed the presence of an unresolved black-body component at the nucleus, previously detected in the infrared <cit.> and possibly due to the dusty torus surrounding the SMBH.

We estimated the mass of the hot dust following <cit.> and using the formalism of <cit.>, for dust composed of graphite grains. The IR spectral luminosity of each dust grain, in erg s^-1 Hz^-1, can be written as
L^gr_ν,ir = 4π^2 a^2 Q_ν B_ν(T_gr),
where a = 0.05 μm is the grain radius, Q_ν = 1.4 × 10^-24 ν^1.6 is its absorption efficiency and B_ν(T_gr) is its spectral distribution, assumed to be a Planck function for a temperature T_gr. The total number of graphite grains can be obtained from
N_HD ∼ L^HD_ir / L^gr_ir,
where L^HD_ir is the total luminosity of the hot dust, obtained by integrating the flux of each black-body component contribution from the synthesis. To do so, we multiplied the integrated normalized flux by the normalization flux at 21955 Å and converted it to the adequate units (from erg s^-1 cm^-2 Å^-1 to erg s^-1 Hz^-1). In order to obtain L^gr_ir, we integrated Eq. <ref> for all temperatures in the range 700 to 1400 K, in steps of 100 K. Finally, the hot dust mass can be obtained from the equation <cit.>:
M_HD ∼ (4π/3) a^3 N_HD ρ_gr,
where ρ_gr = 2.26 g cm^-3 is the density of the grain. The total dust mass estimated for the nucleus of Mrk 573, obtained by integrating over the entire field of view, is M_HD = 1.3×10^-2 M_⊙, which is within the range of masses observed for other active galaxies <cit.>.

§.§ Extinction

The E(B-V) map (Fig. <ref>) obtained for the stellar population shows higher values on the southwest side of the nucleus. We can compare this map with that for the gas extinction. A gas reddening map can be obtained by using the Paβ/Brγ emission-line ratio via the following equation:
E(B-V) = 4.74 log[ 5.88 / (F_Paβ/F_Brγ) ],
where F_Paβ and F_Brγ are the fluxes of the Paβ and Brγ emission lines, respectively. This equation was obtained using the reddening law of <cit.> and adopting the intrinsic ratio F_Paβ/F_Brγ = 5.88, corresponding to case B recombination <cit.>.
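Both pocket calculations of this section, the graphite-grain dust-mass estimate and the reddening from the Paβ/Brγ ratio, are simple enough to script. The sketch below implements them in Python; the line fluxes, the total hot-dust luminosity and the crude averaging over the base temperatures are placeholder assumptions of ours, so only the formulas, not the numbers, should be read off it.

```python
# (i) Gas reddening from the Pa-beta / Br-gamma ratio and
# (ii) hot-dust mass for graphite grains (cgs units throughout).
import numpy as np

def ebv_gas(f_pab, f_brg):
    """E(B-V) = 4.74 log10[5.88 / (F_Pab / F_Brg)]."""
    return 4.74 * np.log10(5.88 / (f_pab / f_brg))

print("E(B-V) = %.2f mag" % ebv_gas(f_pab=4.0, f_brg=1.0))  # fake fluxes

h, c, k = 6.626e-27, 2.998e10, 1.381e-16
a, rho = 0.05e-4, 2.26                    # grain radius (cm), density (g/cm^3)
nu = np.logspace(13, 15, 2000)            # infrared frequencies (Hz)

def L_grain(T):
    """Per-grain luminosity: integral of 4 pi^2 a^2 Q_nu B_nu(T) dnu."""
    B = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    Q = 1.4e-24 * nu**1.6
    return np.trapz(4 * np.pi**2 * a**2 * Q * B, nu)      # erg/s

L_hd = 1e43                               # assumed hot-dust luminosity, erg/s
L_gr = np.mean([L_grain(T) for T in range(700, 1500, 100)])  # crude T average
N_hd = L_hd / L_gr                        # number of grains
M_hd = 4 * np.pi / 3 * a**3 * N_hd * rho  # grams
print("M_HD ~ %.1e Msun" % (M_hd / 1.989e33))
```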
The Paβ and Brγ emission-line flux distributions were obtained by fitting the line profiles at each spaxel with Gaussian curves; as discussed in <cit.>, these lines show flux distributions similar to that of the [S iii]λ0.95 μm line. The resulting E(B-V) map for the gas is shown in Fig. <ref>. The median value of the reddening for the gas over the whole field of view is E(B-V)∼2.5 mag, considering only spaxels where both Brγ and Paβ emission lines were detected. For the stellar population we obtain a smaller value of E(B-V)∼0.25 mag by averaging the map presented in Fig. <ref>, showing that the gas extinction in the NLR is larger than that of the stellar population. <cit.> presented the nuclear spectrum of Mrk 573 for an aperture of 0.8''×1.6'', obtained with the spectrograph SpeX and covering the spectral range from 0.8 to 2.4 μm. Using their flux values for the Brγ and Paβ emission lines, we obtain an E(B-V) value for the gas very similar to ours, while <cit.> obtained E(B-V)≈0.3 mag for the stellar population using the same dataset. A higher extinction for the gas is also observed at optical wavelengths, for which the optical depth of the continuum underlying the Balmer lines is about half of that for the emission lines <cit.>. This difference may be due to the fact that most of the gas is located closer to the galaxy disk, where large amounts of dust are expected, while the near-IR continuum has an important contribution from stars of the bulge of the galaxy.

Comparing the gas and stellar E(B-V) maps, we note that they show a similar distribution near the nucleus, with higher values observed to the southwest of the nucleus and values close to zero to the northeast of it. The observed extinction for the gas is larger than that of the stellar population, in good agreement with previous studies <cit.>. A higher extinction to the southwest of the nucleus is also supported by the dust lanes seen in the structure map presented in Fig. 7 of <cit.>. The rightmost panel of Fig. <ref> shows that the distribution of the hot molecular gas presents a good correlation with the gas E(B-V), confirming the known association between molecular gas and dust. The highest apparent concentration of hot molecular gas coincides with the region of highest extinction in the gas and in the stars, at 0.3'' south of the nucleus, being also co-spatial with young stellar populations (Fig. <ref>).

§ CONCLUSIONS

We used near-IR integral field spectra at a spatial resolution of ∼50 pc to map the stellar population distributions in the inner 500 pc of the Seyfert galaxy Mrk 573, as well as featureless continuum contributions at the nucleus, by combining the spectral synthesis technique with the <cit.> SSP models. The main conclusions of this work are:

* Although the old stellar population (x_o: t > 2 Gyr) dominates the K-band continuum in the inner ∼250 pc (0.75''), within ∼70 pc from the nucleus there is up to 30% contribution from a young stellar population (x_y: t < 100 Myr). Beyond the inner ∼250 pc and up to the border of the FOV (500 pc), the young-intermediate age stellar populations (100–700 Myr) are dominant, in a structure resembling a partial ring where their contribution to the continuum reaches up to ≈100% in the K band.

* Unresolved power-law and black-body contributions to the continuum are detected at the nucleus at the level of 20 % in the K band. The first is attributed to the accretion disk emission and the latter to the emission from the dusty torus.
We derive a hot dust mass of ∼0.013 M_⊙, consistent with the values observed for other Seyfert 2 galaxies.

* The distribution of intermediate-age stars shows a weak correlation with the locations where we observe low stellar velocity dispersion values, supporting the interpretation that these low-σ structures originate in stellar populations that still preserve the cold kinematics of the gas from which they were formed.

* By comparing the radial distribution of each stellar population component observed in Mrk 573 with those of the other three galaxies studied using similar data (Mrk 1066, Mrk 1157 and NGC 1068), we found that young stellar populations (contributing ≥20% of the K-band continuum) are observed for all galaxies within the inner 200–300 pc, while intermediate-age stars dominate the near-IR K-band continuum at distances between 100 and 500 pc. Old stellar populations are dominant at distances smaller than 300 pc.

* The stellar population extinction is larger at the nucleus and to the southwest of it, where we also observe higher gas extinction, as derived from the Paβ/Brγ line ratio. The high-extinction region to the southwest also coincides with a region of strong hot molecular gas emission.

§ ACKNOWLEDGEMENTS

This work is based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina). This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work has been partially supported by the Brazilian institution CNPq. M.R.D. thanks financial support from CNPq. R.A.R. acknowledges support from FAPERGS (project No. 12/1209-6) and CNPq (project No. 470090/2013-8). R.R. thanks CNPq and FAPERGS for partially funding this work.

[Afanasiev et al.1996]afanasiev96 Afanasiev, V. L., Burenkov, A. N., Shapovalova, A. I., Vlasyuk, V. V. 1996, in ASP Conf. Ser. 91, Barred Galaxies, ed. R. Buta, D. A. Crocker, B. G. Elmegreen (San Francisco, CA: ASP), 218.
[Antonucci1993]antonucci93 Antonucci, R., ARAA, 31, 473.
[Asari et al.2007]asari07 Asari, N. V., Cid Fernandes, R., Stasińska, G., Torres-Papaqui, J. P., Mateus, A., Sodré, L., Schoenell, W., Gomes, J. M., 2007, MNRAS, 381, 263.
[Barbosa et al.2006]barbosa06 Barbosa, F. K. B., Storchi-Bergmann, T., Cid Fernandes, R., Winge, C., Schmitt, H., 2006, MNRAS, 371, 170.
[Barvainis1987]barva1987 Barvainis R., 1987, ApJ, 320, 537.
[Calzetti et al.1994]calzetti94 Calzetti, D., Kinney, A. L., Storchi-Bergmann, T. 1994, ApJ, 429, 582.
[Cappellari & Emsellem2004]cappellari04 Cappellari, M., Emsellem, E., 2004, PASP, 116, 138.
[Capozzi et al.2016]capozzi16 Capozzi D., Maraston C., Daddi E., Renzini A., Strazzullo V., Gobat R., 2016, MNRAS 456, 790.
[Cardelli et al.1989]cardelli89 Cardelli, J. A., Clayton, G. C., Mathis, J.
S., 1989, ApJ, 345,245.[Cid Fernandes et al.2004]cid04 Cid Fernandes, R., Gu, Q., Melnick, J., Terlevich, E., Terlevich,R., Kunth, D., Rodrigues Lacerda, R., Joguet, B., 2004, MNRAS, 355, 273.[Cid Fernandes et al.2005a]cid05a Cid Fernandes, R., Mateus, A., Sodré, Laerte, Stasińska, G., Gomes, J. M., 2005a, MNRAS, 358, 363.[Cid Fernandes et al.2005b]cid05b Cid Fernandes, R., González Delgado, R. M., Storchi-Bergmann, T., Martins, L. Pires, Schmitt, H., 2005b, MNRAS, 356, 270.[Davies et al.2007]davies07 Davies, R. I., Sánchez, F. M., Genzel, R., Tacconi, L. J., Hicks, E. K. S., Friedrich, S., Sternberg, A., 2007, ApJ, 671, 1388.[Diniz et al.2015]diniz15 Diniz, M. R., Riffel, R. A., Storchi-Bergmann, T., Winge, C., 2015, MNRAS, 453, 1727.[Dors et al.2008]dors08 Dors, O. L., Storchi-Bergmann, T., Riffel, Rogemar A., Schmidt, A., 2008, A&A, 482, 59.[Emsellem2001]emsellem01 Emsellem, E., Greusard, D., Combes, F., Friedli, D., Leon, S., Pécontal, E., Wozniak, H., 2001, A&A, 368, 52.[Fathi et al.2006]fathi2006 Fathi, K., Storchi-Bergmann, T., Riffel, R. A., Winge, C., Axon, D. J., Robinson, A., Capetti, A., Marconi, A., 2006, ApJ, 641, 25.[Fischer et al.2017]fischer17 Fischer T. C., C. Machuca, M. R. Diniz, D. M. Crenshaw, S. B. Kraemer, R. A. Riffel, H. R. Schmitt, F. Baron, T. Storchi-Bergmann, A. Straughn, M. Revalski, C. Pope, ApJ, 834, 30.[Fischer et al.2015]fischer15 Fischer, T. C., Crenshaw, D. M., Kraemer, S. B., Schmitt, H. R., Storchi-Bergmann, T., Riffel, R. A., 2015, ApJ, 799, 234. [Delgado et al.2001]delgado2001 González Delgado, R. M., Heckman, T.,Leitherer, C. 2001, ApJ, 546, 845.[Haniff et al.1988]haniff88 Haniff, C.A., Wilson, A.S., Ward, M.J., 1988, ApJ 334, 104.[Heckman et al.1997]heckman97 Heckman, T. M., González Delgado, R. M., Leitherer, C., Meurer, G. R., Krolik, J., Wilson, A. S., Koratkar, A., Kinney, A. 1997, ApJ, 482, 114.[Imanishi & Dudley2000]imanishi2000 Imanishi, M., Dudley, C. C., 2000, ApJ, 545, 701.[Imanishi2002]imanishi2002 Imanishi, M., 2002, ApJ, 569, 44.[Knapen et al.2000]knapen2000 Knapen, J. H., Shlosman, I., Peletier, R. F., 2000, ApJ, 529, 93.[Maciejewski et al.2002]mw02 Maciejewski, W. Teuben, P. J., Sparke, L. S., Stone, J. M., 2002, MNRAS, 329, 502.[Maciejewski2004a]mw04a Maciejewski, W., 2004a, MNRAS, 654, 883.[Maciejewski2004b]mw04b Maciejewski, W., 2004b, MNRAS, 654, 892.[Maraston2005]maraston05 Maraston, C., 2005, MNRAS, 362, 799.[Márquez et al.2003]marquez03 Márquez, I., Masegosa, J., Durret, F., González Delgado, R. M., Moles, M., Maza, J., Pérez, E., Roth, M., 2003, A&A, 409, 459.[Marigo et al.2008]marigo08 Marigo P., Girardi L., Bressan A., Groenewegen M. A. T., Silva L., Granato G. L., 2008, A&A, 482, 883.[McGregor et al.2003]mcgregor03 McGregor, P. J. et al., 2003, Proceedings of the SPIE, 4841, 1581.[Müller Sánchez et al.2009]ms09 Müller Sánchez, F., Davies, R. I., Genzel, R., Tacconi, L. J., Eisenhauer, F., Hicks, E. K. S., Friedrich, S., Sternberg, A., 2009, ApJ, 691, 749.[Norman & Scoville1988]norman88 Norman, C., & Scoville, N. 1988, ApJ, 332, 124.[Osterbrock & Ferland2006]osterbrock06 Osterbrock, D. E. & Ferland, G. J., 2006, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei, Second Edition, University Science Books, Mill Valley, California. [Perry & Dyson1985]perry-dyson85 Perry, J. J., & Dyson, J. E. 1985 MNRAS, 213, 665.[Raimann et al.2003]raimann03 Raimann, D., Storchi-Bergmann, T., González Delgado, R. M.,Cid Fernandes, R., Heckman, T., Leitherer, C., Schmitt, H. 
R., 2003, MNRAS, 339, 772.[Ramos Almeida et al.2008]ramos2008 Ramos Almeida C., A. M. Pérez García, J. A. Acosta-Pulido, and O. Gónzalez-Martín, 2008, ApJL, 680, 17. [Ramos Almeida,Pérez García & Acosta-Pulido2009]ramos2009Ramos Almeida, C., Pérez García,A. M. , Acosta-Pulido,J. A., 2009, ApJ, 694, 1379. [Riffel et al.2015]riffel2015 Riffel, R. A. et al., 2015, MNRAS, 446, 2823.[Riffel et al.2010]riffel2010 Riffel, R. A., Storchi-Bergmann, T., Riffel, R., Pastoriza, M. G., 2010, ApJ, 713, 469.[Riffel et al.2009a]riffel2009a Riffel, R. A.., Storchi-Bergmann, T., Dors, O. L., Winge, C., 2009a, MNRAS, 393, 783.[Riffel et al.2009b]riffel2009b Riffel, R. A., Storchi-Bergmann, T., McGregor, P. J. 2009b, ApJ, 698, 1767.[Riffel et al.2008]riffel2008 Riffel, R. A., Storchi-Bergmann, T., Winge, C., McGregor, P. J., Beck, T., Schmitt, H., 2008, MNRAS, 385, 1129.[Riffel et al.2011]riffel2011 Riffel, R. A., Storchi-Bergmann, T., 2011, MNRAS, 417, 2752.[Riffel et al.2011b]riffel2011b Riffel, R. A., Storchi-Bergmann, T., 2011, MNRAS, 411, 449.[Riffel et al.2017]riffel2017 Riffel, R. A., Storchi-Bergmann, T., Riffel, R., Dahmer-Hahn, L. G., Diniz, M. R., Schönell, A. J., Dametto, N. Z., 2017, MNRAS, submitted.[Riffel et al.2015]rogerio15 Riffel, R., Mason, R. E., Martins, L. P., Rodríguez-Ardila, A., Ho, L. C., Riffel, R. A., Lira, P., Gonzalez Martin, O., Ruschel-Dutra, D., Alonso-Herrero, A., Flohic, H., McDermid, R. M., Ramos Almeida, C., Thanjavur, K., Winge, C., 2015, MNRAS, 450, 3069.[Riffel et al.2011a]rogerio11a Riffel, R., Bonatto, C. and Cid Fernandes, R. and Pastoriza, M. G., Balbinot, E., 2011a, MNRAS, 411, 1897.[Riffel et al.2011b]rogerio11b Riffel, R., Riffel, R. A., Ferrari, F., Storchi-Bergmann, T., 2011b, MNRAS, 416, 493.[Riffel et al.2009]rogerio09a Riffel, R., Pastoriza, M. G., Rodríguez-Ardila, A., Bonatto, C., 2009, MNRAS, 400, 273.[Riffel et al.2008]rogerio08 Riffel, R., Pastoriza, M. G., Rodríguez-Ardila, A., Maraston, C., 2008, MNRAS, 388, 803.[Riffel et al.2007]rogerio07 Riffel, R., Pastoriza, M. G., Rodríguez-Ardila, A., Maraston, C., 2007, ApJ, 659, 103.[Riffel et al.2006]rogerio06 Riffel, R., Rodríguez-Ardila, A., Pastoriza, M. G., 2006, A&A, 457, 61.[Rodríguez-Ardila & Viegas2003]ardila2003 Rodríguez-Ardila, A., Viegas, S. M., 2003, MNRAS, 340, 33.[Rodríguez-Ardila et al.2005]ardila2005 Rodríguez-Ardila, A., Contini, M., Viegas, S. M., 2005, MNRAS, 357, 220.[Rodrígues-Ardila & Mazzalay2006]ardila2006 Rodríguez-Ardila, A., Mazzalay, X., 2006, MNRAS, 367, 57.[Salaris et al.2014]salaris14 Salaris, M., Weiss, A., Cassarà, L. P., Piovan, L., Chiosi, C., 2014, A&A, 565.[Pérez et al.2013]perez13 Pérez E., Cid Fernandes R., González Delgado R. M., García-Benito R., Sánchez S. F.,Husemann B., Mast D., Rodón J. R., Kupko D., Backsmann N., de Amorim A. L., van de Ven G., Walcher J., ApJ, 2013, vol. 764, 1.[Delgado al.2015]delgado2015 González Delgado, R. M.; García-Benito, R.; Pérez, E.; Cid Fernandes, R.; de Amorim, A. L.; Cortijo-Ferrero, C.; Lacerda, E. A. D.; López Fernández, R.; Vale-Asari, N.; Sánchez, S. F.; Mollá, M.; Ruiz-Lara, T.; Sánchez-Blázquez, P.; Walcher, C. J.; Alves, J.; Aguerri, J. A. L.; Bekeraité, S.; Bland-Hawthorn, J.; Galbany, L.; Gallazzi, A.; Husemann, B.; Iglesias-Páramo, J.; Kalinova, V.; López-Sánchez, A. R.; Marino, R. A.; Márquez, I.; Masegosa, J.; Mast, D.; Méndez-Abreu, J.; Mendoza, A.; del Olmo, A.; Pérez, I.; Quirrenbach, A.; Zibetti, S., 2015, A&A,581, 44.[Delgado al.2016]delgado2016González Delgado, R. 
M.; Cid Fernandes, R.; Pérez, E.; García-Benito, R.; López Fernández, R.; Lacerda, E. A. D.; Cortijo-Ferrero, C.; de Amorim, A. L.; Vale Asari, N.; Sánchez, S. F.; Walcher, C. J.; Wisotzki, L.; Mast, D.; Alves, J.; Ascasibar, Y.; Bland-Hawthorn, J.; Galbany, L.; Kennicutt, R. C.; Márquez, I.; Masegosa, J.; Mollá, M.; Sánchez-Blázquez, P.; Vílchez, J. M., 2016, A&A, 590, 17.[Sarzi et al.2007]sarzi07 Sarzi, M., Allard, E. L., Knapen, J. H., Mazzuca, L. M., 2007, MNRAS, 380, 949.[Schmitt et al.1999]schmitt99 Schmitt H.R., Storchi-Bergmann T., Cid Fernandes R., 1999, MNRAS, 303, 173.[Schlesinger et al.2009]schlesinger09 Schlesinger, K., Pogge, R. W., Martini, P., Shields, J. C., Fields, D., 2009, ApJ, 699, 857.[Schönell et al.2016]astor16 Schönell, A. J., Storchi-Bergmann, T., Riffel, R. A., Riffel, R., 2016, MNRAS, tmp1370.[Springob et al.2005]springob05 Springo, C. M., Haynes, M. P., Giovanelli, R., Kent, B. R. 2005, ApJS, 160, 149.[Storchi-Bergmann et al.2000]thaisa2000 Storchi-Bergmann, T., Raimann, D., Bica, E. L. D., Fraquelli, H., A. 2000, ApJ, 544, 747.[Storchi-Bergmann et al.2012]ngc1068 Storchi-Bergmann, T., Riffel, R. A.. Riffel, R., Diniz, M. R., Borges Vale, T., McGregor, P. J., 2012, ApJ, 755, 87.[Storchi-Bergmann et al.1996]thaisa96 Storchi-Bergmann, T., Wilson, A. S., Mulchaey, J. S., Binette, L. 1996, A&A, 312, 357.[Storchi-Bergmann et al.2001]sb01 Storchi-Bergmann, T., González Delgado, R. M., Schmitt, H. R., Cid Fernandes, R., Heckman, T., 2001, ApJ, 559, 147.[Terlevich & Melnick1985]terlevich-melnick85 Terlevich, R., & Melnick, J. 1985, MNRAS, 213, 841.[Terlevich et al.1990]terlevich90 Terlevich, E., Diaz, A. I.,Terlevich, R. 1990, MNRAS, 242, 271.[Tsvetanov et al.1992]tsvetanov92 Tsvetanov, Z., Walsh, J. R. 1992, ApJ, 386, 485.[Ulvestad & Wilson1984]ulvestad84 Ulvestad, J. S., Wilson, A. S., 1984, ApJ, 278, 544.[Unger et al.1987]unger87 Unger, S. W., Pedlar, A., Axon, D. J., Whittle, M., Meurs, E. J. A., Ward, M. J., 1987, MNRAS, 228, 671.[de Vaucouleurs et al.1991]vaucoleurs91 de Vaucouleurs, G., de Vaucouleurs, A., Corwin, Jr., H. G., Buta, R. J., Paturel, G., Fouqué, P. 1991, Third Reference Catalogue of Bright Galaxies RC3) Springer Verlag, New York.[Winge et al.2009]winge09 Winge, C., Riffel, R., A., Storchi-Bergmann T., 2009, ApJS, 185, 186. | http://arxiv.org/abs/1704.08143v1 | {
"authors": [
"M. R. Diniz",
"R. A. Riffel",
"R. Riffel",
"D. M. Crenshaw",
"T. Storchi-Bergmann",
"T.",
"C. Fischer",
"H. R. Schmitt",
"S. B. Kraemer"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170426144721",
"title": "Disentangling the near infrared continuum spectral components of the inner 500 pc of Mrk 573: two-dimensional maps"
} |
^1Institut für Experimentelle und Angewandte Physik, Christian-Albrechts-Universität zu Kiel, D-24098 Kiel, Germany ^2Departamento de Física, Facultad de Ciencias, 33006, Universidad Oviedo, Spain ^3IBM Research – Zurich, CH-8803 Rüschlikon, Switzerland ^4Institute for Molecules and Materials, Radboud University, 6500 GL Nijmegen, The Netherlands C60 tips are used to probe C60 molecules on Cu(111) with scanning tunneling and atomic force microscopy. Distinct and complex intramolecular contrasts are found. Maximal attractive forces are observed when for both molecules a [6,6] bond faces a hexagon of the other molecule. Density functional theory calculations including parameterized van der Waals interactions corroborate the observations. 68.37.Ef, 68.37.Ps, 31.15.E-, 81.05.ub Interactions between two C60 molecules measured by scanning probe microscopies Nadine Hauptmann^1,4, César González^2, Fabian Mohn^3, Leo Gross^3, Gerhard Meyer^3, Richard Berndt^1 December 30, 2023 =========================================================================================================

§ INTRODUCTION The interactions between two molecules in nanometer proximity via forces and via charge transfer are important in a wide range of research areas. Scanning probe microscopy provides an opportunity to investigate these interactions in some detail. Molecules may be immobilized on a substrate and on the tip of the microscope, and their relative positions may be controlled with picometer precision. The orientation of the molecule on the surface may be determined from conventional imaging. When a suitable "micro tip" is prepared on the substrate, imaging of the molecule at the tip can be achieved as well <cit.>. By combining atomic force and scanning tunneling microscopy (AFM/STM), both force and charge transport may be probed. Combined AFM and STM employing metallic tips has been used to study various molecules, e.g., diatomic <cit.> and planar <cit.> molecules, functional molecular complexes <cit.>, as well as C60 <cit.>. Using a C60 molecule at the tip and C60 molecules on the surface, we previously investigated how the charge transport <cit.> and the force <cit.> vary as the tip is brought closer to the sample. As a result, the short-range force around the point of maximal attraction could be determined as a function of the intermolecular distance. Owing to their rigidity, C60 tips may also be used for lateral scanning <cit.>. This enables measurements in the distance range near mechanical contact that extend the previous one-dimensional measurements to the lateral dimensions. Here we use C60-functionalized tips to laterally probe C60 molecules by combined STM/AFM measurements, together with density functional theory (DFT) calculations taking van der Waals (vdW) interactions into account. Maximal attraction is found when a [6,6] bond of one molecule faces a hexagon of the other molecule and vice versa.

§ EXPERIMENTAL AND THEORETICAL METHODS The experiments were performed with a homebuilt combined STM/AFM in ultrahigh vacuum at a temperature of 5 K. The atomically flat Cu(111) surface was used as substrate; it was prepared by repeated sputtering and annealing cycles. Submonolayer amounts of C60 were then deposited onto the sample by sublimation at room temperature. Subsequent annealing to ≈ 500 K led to a well-ordered 4×4 structure of C60 <cit.>.
Isolated C60 molecules were found on both the C60 islands and the bare Cu substrate after an additional sublimation of small amounts of C60 onto the cooled sample. A PtIr tip was attached to the free prong of a quartz tuning fork oscillating with a constant amplitude at its resonance frequency of ∼ 28 kHz. The tip was further prepared in situ by repeated indenting into the Cu substrate. C60 tips were created by approaching the tip towards the molecule until it jumps to the tip <cit.>. The bias voltage V was applied to the sample. Ab-initio calculations were performed with the DFT-FIREBALL code <cit.>, which uses the local density approximation (LDA) for the exchange and correlation potential, and a basis of numerical atomic orbitals (NAO) vanishing beyond a cutoff radius. For this kind of code, only the valence electrons are usually included in the calculation. In this work, an spd basis for Cu and an sp basis for C are chosen with the corresponding cutoffs (given in Bohr radii): r(Cu-s)=4.5, r(Cu-p)=5.7, r(Cu-d)=3.7, r(C-s)=4.5, and r(C-p)=4.5 <cit.>. A Cu slab of four layers with a 4×4 periodicity in the surface plane is built to simulate the metallic substrate (Figs. <ref>(a),(d)). A single C60 molecule is adsorbed on the slab and the system is relaxed using 16 wavevectors in the first Brillouin zone until the forces are lower than 0.05 eV/Å. The atoms of the lowest two metallic layers are kept fixed. The tip is based on a four-layer Cu pyramid oriented in the (111) direction (cf. Ref. <cit.>), where the last Cu apex is replaced by a C60 molecule. The atoms in the basis of the pyramid are kept fixed during the calculations. The Cu pyramid tip with the C60 attached is placed on the positions of a 17×17 grid (spacing of 0.5 Å) above the C60 adsorbed on the Cu slab. Starting with the lowest atoms of the tip located 6 Å above the topmost atoms of the C60 adsorbed on the surface, the tip was moved down in steps of 0.25 Å. At every point all tip and sample atoms (except the fixed atoms mentioned above) were relaxed using the same conditions as those used for the isolated surface. The vdW interaction was estimated by a semi-empirical correction based on the London expression <cit.>. It is well known that the empirical values obtained from this expression for the C-C interaction are overestimated <cit.>. We use 60% of the initial empirical value, which yields reasonable results as shown in Ref. <cit.>. The total energy with respect to the tip-surface distance is obtained by the sum of the DFT and vdW energies at each point of the grid. Then, the forces are obtained by deriving the energy versus distance curves. The force contribution without the vdW interaction does not exhibit an attractive regime (Fig. S1). This emphasizes the importance of vdW contributions, as also recently shown for the adsorption of fullerenes <cit.>. For this reason, the frequency shift Δf was derived from the total tip–sample force F_TS, including vdW forces, according to first-order perturbation theory <cit.>: Δf(d) = f_0/(2π k A_0) ∫_0^2π F_TS[d + A_0 + A_0 cos φ] cos φ dφ, where the experimental values have been taken into account: f_0 is the resonance frequency (28 kHz), A_0 the amplitude of oscillation (2 Å) and k the stiffness of the AFM sensor (≈ 1800 N/m).
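To make the conversion concrete, the short Python sketch below (our own illustration, not part of the original work) evaluates this integral for a model force curve; the Lennard-Jones-like F_model and its parameters are hypothetical placeholders for the DFT+vdW force curves, while f_0 = 28 kHz, A_0 = 2 Å and k ≈ 1800 N/m are the experimental values quoted above.

```python
import numpy as np

F0 = 28e3        # resonance frequency f_0 in Hz
A0 = 2e-10       # oscillation amplitude A_0 in m (2 Angstrom)
K = 1800.0       # sensor stiffness k in N/m

def delta_f(force, d):
    """First-order perturbation theory: df(d) = f0/(2 pi k A0) *
    Int_0^{2pi} F[d + A0 + A0 cos(phi)] cos(phi) dphi (trapezoidal quadrature)."""
    phi = np.linspace(0.0, 2.0 * np.pi, 2001)
    g = force(d + A0 + A0 * np.cos(phi)) * np.cos(phi)
    integral = np.sum(0.5 * (g[:-1] + g[1:])) * (phi[1] - phi[0])
    return F0 / (2.0 * np.pi * K * A0) * integral

def f_model(z):
    """Hypothetical Lennard-Jones-like tip-sample force with its minimum of
    -0.5 nN placed near z = 3.4 Angstrom, mimicking the calculated curves."""
    z0, f_min = 3.4e-10, -0.5e-9
    return f_min * (2.0 * (z0 / z) ** 6 - (z0 / z) ** 12)

for d in np.arange(3.0e-10, 6.1e-10, 0.5e-10):
    print(f"d = {d * 1e10:4.1f} A  ->  delta_f = {delta_f(f_model, d):8.2f} Hz")
```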
The theoretical Δf maps were generated from values calculated on the 17×17 grid. The tip radius used, which is important for the vdW interaction, was varied from 1 nm to 5 nm; this influences the total value of Δf, while the general structure of the maps discussed below remains unchanged. Calculated STM images were obtained using the same tip, following the methodology explained in Ref. <cit.>. Based on the non-equilibrium Green-Keldysh formalism, both systems, tip and sample, can be treated separately in the tunneling regime, to be finally coupled by an interaction given by the hopping matrices T_ST and T_TS, which denote the hopping of the electrons between sample and tip and vice versa. The expression obtained for the electronic current in the tunneling regime at 0 K is written as: I = (4π e/ℏ) ∫_E_F^E_F+eV Tr[T_TS ρ_SS(E) T_ST ρ_TT(E)] dE, where ρ_SS(TT) are the corresponding densities of states (DOS) of the sample (tip) of the isolated system.
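The structure of this trace integral can be sketched numerically as follows. This is a toy illustration only, with random positive-semidefinite matrices standing in for the DOS of tip and sample and an unscaled placeholder hopping matrix, since the real inputs come from the FIREBALL calculation.

```python
import numpy as np

HBAR = 1.054571817e-34       # J s
E_CHARGE = 1.602176634e-19   # C

rng = np.random.default_rng(0)
N_SAMPLE, N_TIP = 4, 3       # toy numbers of sample / tip orbitals

def toy_dos(n, n_energies):
    """Random positive-semidefinite Hermitian matrices, one per energy point,
    standing in for the projected densities of states rho_SS(E), rho_TT(E)."""
    a = rng.normal(size=(n_energies, n, n)) + 1j * rng.normal(size=(n_energies, n, n))
    return np.einsum('eij,ekj->eik', a, a.conj())   # A A^dagger >= 0

bias = 0.1                                          # toy bias voltage in V
energies = np.linspace(0.0, bias * E_CHARGE, 101)   # grid from E_F to E_F + eV
rho_ss = toy_dos(N_SAMPLE, energies.size)
rho_tt = toy_dos(N_TIP, energies.size)
t_ts = rng.normal(size=(N_TIP, N_SAMPLE))           # placeholder hopping matrix
t_st = t_ts.T                                       # its reverse-direction partner

# Tr[T_TS rho_SS(E) T_ST rho_TT(E)] at each energy, then a trapezoidal integral.
integrand = np.array([np.trace(t_ts @ rs @ t_st @ rt).real
                      for rs, rt in zip(rho_ss, rho_tt)])
de = energies[1] - energies[0]
integral = np.sum(0.5 * (integrand[:-1] + integrand[1:])) * de
print(f"toy current I = {4.0 * np.pi * E_CHARGE / HBAR * integral:.3e} (arbitrary units)")
```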
§ RESULTS AND DISCUSSION First, small Cu clusters were deposited on the bare surface by approaching a metallic tip to the surface by a few Angstroms. The lateral shapes and apparent heights of these Cu clusters imaged with a metallic tip (Fig. S2) suggest that they consist of a few (presumably three) atoms. Next, a C60 molecule from the surface was transferred to the tip by approaching the tip sufficiently close (a few Angstroms from tunneling conditions). The orientation of the molecule at the tip was determined by "reverse imaging" on small Cu clusters and showed that a hexagon of C60 was facing the surface. A constant-current image of a Cu cluster acquired with a C60 tip is shown in the inset of Fig. <ref>(a) (V = -2 V). At this voltage, electrons tunnel from the cluster into the second lowest unoccupied orbital (LUMO+1) of the molecule <cit.>. The shape of the cluster reflects the spatial distribution of the LUMO+1, from which the orientation of the C60 at the tip can be deduced. Compared to typical STM images of the LUMO+1 using a metallic tip, the image of a C60 molecule at the tip obtained with a metal cluster shows a mirror image of the molecule <cit.>. The sketch in the inset of Fig. <ref>(a) depicts the orientation of the surface-nearest hexagon of the C60 molecule, with the red bars indicating the bonds between two hexagons ([6,6] bonds), which are of higher bond order compared to bonds between a hexagon and a pentagon ([6,5] bonds) <cit.>. An STM topograph of a C60 island comprising two domains, as imaged with the C60 tip, is shown in Fig. <ref>(a). The domains comprise molecules that face either with a hexagon or with a pentagon towards the tip <cit.>. The different orientations of the C60 molecules are shown in Fig. <ref>(b) with the [6,6] bonds sketched by red bars. Different azimuthal rotations of the C60 molecules by 60° are present in the domains. The topmost hexagon of the C60 (Fig. <ref>(a), inset) is rotated by either (12±6)° (referred to as h12) or (72±6)° (referred to as h72) with respect to the lowest hexagon of the C60 tip. Depending on this relative orientation, the C60 appears either with a pronounced center (h12) or as a more homogeneous structure (h72) in the STM image <cit.>. The threefold symmetry of the pentagon-hexagon ([5,6]) bonds of the C60 tip is clearly discernible on the imaged molecules. As shown recently <cit.> as well as in Fig. <ref>(b), the symmetries of these patterns can be understood from a convolution of the local densities of electronic states (LDOS) of the tip and the sample. Simultaneously acquired maps of the frequency shift Δf and current I of two C60 molecules (black box in Fig. <ref>(a)) are shown in Figs. <ref>(d) and (e). The feedback loop was switched off (at V = 1.7 V and I = 0.55 nA) and a voltage of V = 0.1 V was applied. The tip height was decreased until the frequency shift Δf passes its minimum and increases again, as indicated by the dashed line in the Δf versus z curve on a C60 (center position) in Fig. <ref>(c). In both the I and Δf map the molecules appear as almost three-fold symmetric patterns. Given their broad spatial extension, the lobes in both maps occur at approximately the same spatial positions. While the three lobes in the current map are subject to an increased tunneling current, the dark lobes in the Δf map correspond to spatial positions with a minimal frequency shift. These depressions indicate locations of strong attractive interaction. To obtain a deeper understanding of the data, we analysed the relative orientation of the C60 tip and the molecules on the surface. We concentrated on the positions of minimal frequency shift of the h12 and h72 molecules, as these lobes are more pronounced than the lobes in the current map (Fig. <ref>(d)). For simplicity, we only discuss the orientations of the uppermost hexagon of the C60 on the sample surface and the lowest hexagon of the C60 at the tip <cit.>. In Figs. <ref>(f) and (g) the relative positions of these hexagons are shown for one of the three positions of minimal frequency shift for both the h12 and h72 orientations. The red bars indicate the [6,6] bonds. It turns out that the positions of minimal frequency shift occur when [6,6] bonds of the molecule at the tip and on the surface are facing a hexagon of the other C60. Compared with recent measurements using a metallic tip <cit.> to image C60 molecules in the constant-height AFM mode, our observation of three clearly pronounced attractive lobes is fairly different. Interestingly, a rather different contrast was found in maps of Δf acquired with a CO-functionalized tip <cit.>: the [6,6] bonds showed an increased Δf and appeared shorter compared to the [6,5] bonds. This contrast originates from strong Pauli repulsion of the [6,6] bonds, which have a higher bond order than the [5,6] bonds. Here, we observe a different contrast mechanism giving rise to three well-pronounced attractive maxima, which are due to strong attraction of a [6,6] bond with a C60 molecule. The [6,6] bonds also play an important role for charge injection. A recent work using a metallic tip to contact C60 molecules showed that charge injection into C60 molecules is more efficient at [6,6] bonds due to the formation of a bonding state of the tip apex atom and the C60 molecule on the surface <cit.>. Next, we corroborate our results by DFT+vdW calculations. The geometries used for the h12 and h72 orientations <cit.>, as well as the orientation of the lowest hexagon of the C60 tip, are shown in Fig. <ref>(a). Calculated STM images for both orientations (Fig. <ref>(b)) compare well with the experimental data (Fig. <ref>(a)). Force (F, including vdW interaction) versus distance d curves at different orientations and different lateral positions of the C60 tip above the C60 on the surface are used to determine the distance range of interest (Fig. <ref>(c)). As depicted in Fig. <ref>(d), d is defined as the distance between the topmost carbon atoms of the C60 on the surface and the lowest atoms of the tip for the unrelaxed system. The relative orientations of the uppermost hexagon of the C60 at the surface and the lowest hexagon of the C60 tip are defined in Fig. <ref>(e).
The force curves in Fig. <ref>(c) were calculated for different relative lateral positions of the two molecules: (i) a [6,6] bond of each molecule is facing a hexagon of the other molecule (h72-c2), (ii) a [6,6] bond of one molecule is facing a hexagon and a [6,5] bond of the other molecule is facing a hexagon (h12-c1T and h12-c1S), (iii) a [6,5] bond of each molecule is facing a hexagon (h72-c0), and (iv) the hexagons of both molecules are facing each other (aligned C60). The minima in the force curves (maximal attraction) occur at d = 3.4 Å, which is comparable to the minimum of calculated force curves between a CO tip and a C60 molecule <cit.>. Clear differences in the forces at different spatial positions of the C60 tip are observed for d < 3.5 Å. For molecules with facing hexagons (aligned C60, Fig. <ref>(d)) the force is less attractive than for the other orientations sketched in Fig. <ref>(e). We note that the forces for the h12-c1T and h12-c1S orientations are virtually the same. The calculated force is maximal for the h72-c2 orientation, where the hexagons of both C60 molecules are above a [6,6] bond of the other C60. This is in agreement with the experimental findings. Figures <ref>(a) to (h) show calculated maps of Δf for the h12 and h72 orientations at four different distances. For d = 4 Å the maps are similar for both orientations h12 and h72 (Figs. <ref>(a) and (e)), showing a decrease of Δf over the C60 molecule. In this distance regime the contrast is governed by the long-range vdW interaction. For a further decrease of d, submolecular patterns arise. In the case of the h72 orientation, the Δf map shows a threefold pattern of spatial maxima of attraction that occur at positions where a [6,6] bond of one molecule faces a hexagon of the other molecule and vice versa. In the center of the map, where surface and tip molecules are aligned, the calculation shows a reduced attractive interaction. These results match very well with the experimental findings (Fig. <ref>(g)). For the h12 orientation, the calculation results in a rather sixfold symmetry at d = 3.3 Å instead of the three lobes observed in the experimental data. At d = 3.1 Å a faint three-lobe structure can be discerned. However, it is less pronounced than in the experimental data. This may be due to parameters that our calculation does not account for, e.g., the lack of rotational relaxations or incomplete charge transfer from the C60 to the metal atoms, and may be related to the almost negligible attraction between both C60 molecules in the calculation (Fig. S1). A rotation of the C60 molecule at the tip has been previously suggested to explain the different STM features observed when equally oriented C60 molecules are imaged with a C60-functionalized tip <cit.>. For d < 3 Å the repulsive contribution increases, as seen from Figs. <ref>(d) and (h).

§ SUMMARY Molecules can be used to define the structure of the tip. CO-functionalized tips are often used and enable remarkable contrast in AFM and STM images. Compared to those, C60 tips are less flexible, but the different types of molecular bonds strongly influence the conductance and interaction, resulting in images with a complex intramolecular contrast. We performed combined STM/AFM measurements on C60 molecules using C60-functionalized tips. The attractive interaction is maximal when a [6,6] bond of one molecule faces a hexagon of the other molecule and vice versa. The experimental findings are corroborated by DFT+vdW calculations, which show that the [6,6] bonds and hexagons exhibit a strong attractive interaction.
§ ACKNOWLEDGMENTS We thank the Deutsche Forschungsgemeinschaft for financial support via the SFB 677, and acknowledge funding from the EU project PAMS (agreement no. 610446) and the ERC Advanced Grant CEMAS (291194). Schull2009 Schull G, Frederiksen T, Brandbyge M and Berndt R 2009 Phys. Rev. Lett. 103 206803 Chiutu2012 Chiutu C, Sweetman A M, Lakin A J, Stannard A, Jarvis S, Kantorovich L, Dunn J L and Moriarty P 2012 Phys. Rev. Lett. 108 268302 Sun2011 Sun Z, Boneschanscher M P, Swart I, Vanmaekelbergh D and Liljeroth P 2011 Phys. Rev. Lett. 106 046104 Gross2009 Gross L, Mohn F, Moll N, Liljeroth P and Meyer G 2009 Science 325 1110–1114 Gross2010 Gross L, Mohn F, Moll N, Meyer G, Ebel R, Abdel-Mageed W M and Jaspars M 2010 Nature Chem. 2 821–825 Fournier2011 Fournier N, Wagner C, Weiss C, Temirov R and Tautz F S 2011 Phys. Rev. B 84 035435 Mohn2011 Mohn F, Gross L and Meyer G 2011 Appl. Phys. Lett. 99 053106 Pavlicek2012 Pavliček N, Fleury B, Neu M, Niedenführ J, Herranz-Lancho C, Ruben M and Repp J 2012 Phys. Rev. Lett. 108 086101 Hamalainenn2014 Hämäläinen S K, van der Heijden N, van der Lit J, den Hartog S, Liljeroth P and Swart I 2014 Phys. Rev. Lett. 113 186102 Oteyza2013 Oteyza D G d, Gorman P, Chen Y C, Wickenburg S, Riss A, Mowbray D J, Etkin G, Pedramrazi Z, Tsai H Z, Rubio A, Crommie M F and Fischer F R 2013 Science 340 1434–1437 Albrecht2013 Albrecht F, Neu M, Quest C, Swart I and Repp J 2013 J. Am. Chem. Soc. 135 9200–9203 Yang2014 Yang Z, Corso M, Robles R, Lotze C, Fitzner R, Mena-Osteritz E, Bäuerle P, Franke K J and Pascual J I 2014 ACS Nano 8 10715–10722 Hauptmann2015 Hauptmann N, Groß L, Buchmann K, Scheil K, Schütt C, Otte F L, Herges R, Herrmann C and Berndt R 2015 New J. Phys. 17 013012 Gross2012 Gross L, Mohn F, Moll N, Schuler B, Criado A, Guitian E, Pena D, Gourdon A and Meyer G 2012 Science 337 1326–1329 Pawlak2011 Pawlak R, Kawai S, Fremy S, Glatzel T and Meyer E 2011 ACS Nano 5 6349–6354 Hauptmann2012 Hauptmann N, Mohn F, Gross L, Meyer G, Frederiksen T and Berndt R 2012 New J. Phys. 14 073032 Nishino2005 Nishino T, Ito T and Umezawa Y 2005 Proc. Natl. Acad. Sci. U. S. A. 102 5659–5662 Chiutu2011 Chiutu C, Stannard A, Sweetman A M and Moriarty P 2011 Chem. Commun. 47 10575 Hashizume1993 Hashizume T, Motai K, Wang X D, Shinohara H, Saito Y, Maruyama Y, Ohno K, Kawazoe Y, Nishina Y, Pickering H W, Kuk Y and Sakurai T 1993 Phys. Rev. Lett. 71 2959 Pai2004 Pai W W, Hsu C, Lin M C, Lin K C and Tang T B 2004 Phys. Rev. B 69 125405 Wang2004 Wang L and Cheng H 2004 Phys. Rev. B 69 045404 Pai2010 Pai W W, Jeng H T, Cheng C, Lin C, Xiao X, Zhao A, Zhang X, Xu G, Shi X Q, Hove M A V, Hsue C and Tsuei K 2010 Phys. Rev. Lett. 104 036103 Lewis2011 Lewis J P, Jelínek P, Ortega J, Demkov A A, Trabada D G, Haycock B, Wang H, Adams G, Tomfohr J K, Abad E, Wang H and Drabold D A 2011 Phys. Status Solidi B 248 1989–2007 Schull2011 Schull G, Dappe Y J, González C, Bulou H and Berndt R 2011 Nano Lett. 11 3142 svec_van_2012 Švec M, Merino P, Dappe Y J, González C, Abad E, Jelínek P and Martín-Gago J A 2012 Phys. Rev. B 86 121407 Ortmann2006 Ortmann F, Bechstedt F and Schmidt W G 2006 Phys. Rev. B 73 205101 Dappe2009 Dappe Y J, Ortega J and Flores F 2009 Phys. Rev. B 79 165409 Gonzalez2009 González C, Ortega J, Flores F, Martínez-Martín D and Gómez-Herrero J 2009 Phys. Rev. Lett. 102 106801 Giessibl1997 Giessibl F J 1997 Phys. Rev. B 56 16010 Blanco2004 Blanco J M, González C, Jelínek P, Ortega J, Flores F and Pérez R 2004 Phys. Rev. B 70 085405
Silien2004 Silien C, Pradhan N A, Ho W and Thiry P A 2004 Phys. Rev. B 69 115434 Larsson2008 Larsson J A, Elliott S D, Greer J C, Repp J, Meyer G and Allenspach R 2008 Phys. Rev. B 77 115434 Lakin2013 Lakin A J, Chiutu C, Sweetman A M, Moriarty P and Dunn J L 2013 Phys. Rev. B 88 035447 note2 The uppermost hexagon of the C_60 molecule at the surface and the lowermost hexagon of the C_60 molecule at the tip are the hexagons that are closest to each other, according to Fig. 2(d). Note1 The C_60 molecule at the tip is rotated by -12° and -72° compared to the C_60 on the surface, while angles of 12° and 72° were extracted from the experiment. Owing to the symmetry of the molecules, the same pattern is obtained independent of the sign of the rotation. | http://arxiv.org/abs/1704.08466v1 | {
"authors": [
"Nadine Hauptmann",
"César González",
"Fabian Mohn",
"Leo Gross",
"Gerhard Meyer",
"Richard Berndt"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170427080605",
"title": "Interactions between two C60 molecules measured by scanning probe microscopies"
} |
Proper motion separation of Be star candidates in the Magellanic Clouds and the Milky Way Katherine Vieira, Alejandro García-Varela, Beatriz Sabogal ============================================================================================================================================================================================================================================================================================================================================= We present a proper motion investigation of a sample of Be star candidates towards the Magellanic Clouds, which has resulted in the identification of separate populations, in the Galactic foreground and in the Magellanic background. Be stars are, broadly speaking, B-type stars that have shown emission lines in their spectra. In this work, we studied a sample of 2446 and 1019 Be star candidates towards the LMC and SMC respectively, taken from the literature and proposed as possible Be stars due to their variability behaviour in the OGLE-II I band. JHKs magnitudes from the IRSF catalog and proper motions from the SPM4 catalog were obtained for 1188 and 619 LMC and SMC Be star candidates, respectively. Color-color and vector-point diagrams were used to identify different populations among the Be star candidates. In the LMC sample, two populations with distinctive infrared colours and kinematics were found; the bluer sample is consistent with being in the LMC and the redder one with belonging to the Milky Way disk. This settles the nature of the redder sample, which had been described in previous publications as a possible unknown subclass of stars among the Be candidates in the LMC. In the SMC sample, a similar but less evident result was obtained, since this apparent unknown subclass was not seen in this galaxy. We confirm that in the selection of Be stars by their variability, although generally successful, there is a higher risk of contamination by Milky Way objects towards redder B-V and V-I colors. proper motion – Be stars – Magellanic Clouds

§ INTRODUCTION Be stars are broadly defined as non-supergiant (luminosity class III to V) B-type stars that have or have had Balmer emission lines <cit.>. The presence of a flattened circumstellar gaseous disk formed of material ejected from the star, a dust-free Keplerian decretion disc, is currently the accepted explanation for some of the observed features in Be stars: the UV stellar light is reprocessed in it and produces the emission lines, and the observed IR excess and polarization result from the scattering of the stellar light by the disk (see <cit.> for details). Several mechanisms have been proposed for the mass-ejection process that forms the disk, which are well constrained but not totally understood. In the so-called Classical Be stars it clearly comes from the rapid rotation of the star, probably along with other processes including non-radial pulsations and small-scale magnetic fields; and in binary stars the material is being accreted by the companion of the Be star, generally a white dwarf. <cit.> and <cit.> have identified a large number of Be star candidates towards the Magellanic Clouds based on their photometric variability in the OGLE-II I band.
<cit.> has proposed a classification of these variability-selected stars into four types according to their light-curve morphology, and suggests that Type-1 and Type-2 stars could be Be stars with accreting white dwarfs in a Be + WD binary or blue pre-main-sequence stars showing accretion disc thermal instabilities, Type-4 stars could be single Be stars, and Type-3 stars should not be linked to the Be star phenomenon at all. They also found a sample of transition stars between Type-1 and Type-2 stars, and suggested that these stars could be experiencing the same phenomenon, which could be outbursts of plasma, accretion from a white dwarf companion, or accretion disc thermal instabilities like those found in blue pre-main-sequence stars. This last hypothesis was discarded in the study by <cit.>. <cit.> showed that the photometric method used in the aforementioned works is effective in the selection of Be star candidates. Their spectroscopic analysis found that most of the stars studied, from a sample of such candidates in both the LMC and SMC, belong to early-type stars with emission supporting circumstellar material. However, an enigmatic result was obtained in their work among Type-4 LMC Be candidate stars: a subgroup of the brightest and most massive stars was found in the sample, with no NIR excess and large reddening, although they were not located in regions with high reddening. Stars with similar characteristics were not found in the SMC or in our Galaxy, and it was proposed that they formed a possible subclass of stars that needed further analysis. In this work we used the Southern Proper Motion 4 catalogue (SPM4) <cit.> to study the kinematics of Be star candidates of the LMC and SMC galaxies, to probe the presence of contaminants or various populations in the studied samples, and with such information gain a better insight into the nature of these stars. This also allowed us to independently evaluate how successful the photometric variability techniques used by <cit.> and <cit.> are to select Be star candidates, and to confirm that some biases could be present in such a method.

§ CROSS-MATCHING BE STARS WITH IRSF AND SPM4 Be star candidates for the LMC and SMC were obtained from <cit.> and <cit.>, respectively. A total of 2446 and 1019 candidates were listed, but we found four and two repeated entries, respectively, in each catalog. In the LMC, the four sources that appeared twice each had the same sky coordinates, and very similar magnitudes and colors, only differing in their Type. In each case, one entry was a Type-4, and for the lack of better information, we kept that entry in the catalog and deleted the other. In the SMC, a similar situation was faced, and since in the two cases one entry was a Type-1 star, this one was selected in each case. The final lists of unique entries of Be star candidates to match contained 2442 and 1017 stars in the LMC and SMC, respectively. A first match by sky coordinates was done between the Be star candidates and the SPM4 catalog, with 3" as a matching radius. A few multiple matches (two Be stars matched to the same SPM4 star and vice versa) were found. Since SPM4 measured a V magnitude, a comparison with OGLE-II's V could provide additional information to clarify these problematic matches and also to discard false matches, but the SPM4 photometry in the LMC and SMC areas comes mostly from photographic plates and has a poor quality compared to CCD photometry.
We also noticed that the J-H vs. H-K color-color diagram done with the JHKs 2MASS magnitudes listed by SPM4 suffered from a noticeably higher dispersion, as compared to those published by <cit.> using the InfraRed Survey Facility (IRSF) magnitudes by <cit.>. Therefore, we decided to make a cross-match between SPM4 and IRSF in the areas that cover the location of the Be star candidates in the LMC and SMC, respectively, to obtain a SPM4×IRSF catalog with proper motions and good infrared magnitudes. Then a second match between the Be star candidates and this SPM4×IRSF catalog was performed to make the final cross-identifications. While the IRSF photometry is of better quality, the catalog contains a non-negligible portion of false or poor-quality entries (extended/faint/saturated/etc. sources); therefore, in an effort to consider only good-quality point sources, we selected only those with the quality and periphery flags both equal to 111 and the adjacency flag equal to 000. By applying such restrictions, the IRSF catalog was reduced to about 15% of the original total number of listed entries. To our surprise, we found a few tens of these point sources with repeated IDs, which had the same sky coordinates and similar photometry (within 0.1-0.2 magnitudes), and for the lack of better information, we chose the first one to appear for each case. As IRSF has a fainter limiting magnitude than SPM4, the risk of mismatches between the catalogs by using only the position in the sky cannot be ignored. But the 2MASS-measured JHKs magnitudes in SPM4 help us to overcome this trouble, by looking also for a similarity in the J-band photometry. We first transformed the 2MASS J magnitude listed in SPM4 to the IRSF system by applying the corresponding transformation, listed in Table 10 of <cit.>, as J_IRSF = J_2MASS - 0.043(J_2MASS - H_2MASS) + 0.018. Then, for all IRSF sources within 3" in the sky and within 0.5 magnitudes in J magnitude from a SPM4 one, a match is done with the IRSF star with the smallest d = √((Δθ/3")^2 + (ΔJ/0.5 mag)^2), where Δθ is the sky angular separation in arcseconds between the (ra,dec) coordinates listed in SPM4 and IRSF, and ΔJ is the difference between the IRSF J magnitude and the 2MASS-transformed-to-IRSF one[Within the SPM4 LMC catalog, a few hundred stars did not have 2MASS JHK but instead had a value of 0 magnitudes in each band. For these SPM4 stars, we took ΔJ = 0 by default, but chose a more restricted Δθ < 2". We then checked for repeated matches between this subset of SPM4 stars and the subset with 2MASS JHKs measured, and found eleven cases where two different SPM4 stars were matched to the same IRSF source; in each case one match had 2MASS JHKs and the other had 0 values, and in these few cases, we always chose the one with measured 2MASS JHKs.]. The metric defined by the above equation finds the IRSF star that is closest in both position and magnitude to the SPM4 one. This procedure should minimise false matches.
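A compact implementation of this matching metric might look as follows (a sketch with made-up example records, not the actual catalog data; the angular separation uses a flat-sky approximation, which is adequate at arcsecond scales):

```python
import numpy as np

def j_2mass_to_irsf(j2m, h2m):
    """Table 10 transformation: J_IRSF = J_2MASS - 0.043 (J_2MASS - H_2MASS) + 0.018."""
    return j2m - 0.043 * (j2m - h2m) + 0.018

def best_match(spm4_star, irsf, max_sep=3.0, max_dj=0.5):
    """Index of the IRSF source minimizing d = sqrt((sep/3")^2 + (dJ/0.5 mag)^2),
    or None if no source lies within both tolerances."""
    sep = 3600.0 * np.hypot(
        (irsf['ra'] - spm4_star['ra']) * np.cos(np.radians(spm4_star['dec'])),
        irsf['dec'] - spm4_star['dec'])                    # arcsec, flat-sky
    dj = np.abs(irsf['j'] - j_2mass_to_irsf(spm4_star['j2mass'], spm4_star['h2mass']))
    ok = (sep < max_sep) & (dj < max_dj)
    if not ok.any():
        return None
    d = np.where(ok, np.sqrt((sep / max_sep) ** 2 + (dj / max_dj) ** 2), np.inf)
    return int(np.argmin(d))

# Made-up example: one SPM4 star and three nearby IRSF point sources.
spm4_star = {'ra': 80.0000, 'dec': -69.0000, 'j2mass': 15.20, 'h2mass': 14.90}
irsf = {'ra':  np.array([80.0002, 80.0004, 80.0100]),
        'dec': np.array([-69.0001, -69.0002, -69.0050]),
        'j':   np.array([15.18, 16.40, 15.20])}
print("matched IRSF index:", best_match(spm4_star, irsf))   # -> 0
```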
Having constructed this SPM4×IRSF catalog, one per Magellanic Cloud, we proceeded to cross-match the corresponding Be star candidates with them, this time using only the sky coordinates to find their match: for each Be star candidate, we found the closest SPM4×IRSF source within 3". During the matching process in the LMC, we found twenty-six cases where two Be star candidates were matched within 3" with one SPM4×IRSF star. Among these multiple matches, the largest difference in colors B-V and V-I was about 0.2 magnitudes, and generally smaller than 0.05 mags. Not having other information to decide, we chose the closest in angular separation as the most probable true match. We finally obtained 1188 Be star candidates with SPM4 proper motions and IRSF JHKs magnitudes. During the matching process in the SMC, we found nine cases where two Be star candidates were matched within 3" with one SPM4×IRSF star, and in one case one Be star candidate was matched within 3" with two SPM4×IRSF stars. Among these multiple matches, the largest difference in colors (B-V and V-I for the first nine cases, and magnitude J for the last one) was as observed in the LMC, and not having other information to decide, we again chose the closest in angular separation as the most probable true match. We finally obtained 619 Be star candidates with SPM4 proper motions and IRSF JHKs magnitudes. Figure <ref> shows the histograms of Δθ and ΔJ for each of these final match catalogs, labeled as SPM4-IRSF. In the first two panels, we also include a histogram of the sky angular separation in arcseconds between the SPM4 sky coordinate and the one listed by <cit.> and <cit.> (taken from OGLE-II), for the LMC and SMC, respectively, labeled as SPM4-Be. Table <ref> (see Appendix) shows a few lines of the final data table for the LMC sample; the SMC table has the same structure. Sky equatorial coordinates and proper motions with their corresponding errors come from SPM4, BVI photometry comes originally from OGLE-II, and JHKs photometry comes from IRSF; the OGLE-II ID as well as the Type of each Be star candidate comes from <cit.> and <cit.>, for the LMC and SMC, respectively. Additional information regarding the quality of the proper motion (columns m, n and i from SPM4), as well as the SPM4 ID, is included, to identify a few stars in SPM4 whose proper motions are of lower quality, due to the fact that the 2nd-epoch position was measured neither in SPM plates (n=0) nor CCDs (i=0) but was taken from an input catalog. Such few stars are labeled as false SPM4 stars, as their proper motions are not measured exclusively with SPM4 data, although their first-epoch material is from SPM4 (m indicates the number of 1st-epoch SPM4 plates used for each star, and for all SPM4 stars, always m>0).

§ SAMPLE SELECTION FROM COLOR-COLOR DIAGRAMS After close examination of various color-color and color-magnitude diagrams that could be done with the collected BVIJHKs photometry, we were able to identify three mutually exclusive samples, labeled A, B and C within each Magellanic Cloud, that had a distinctive clustering in all colours. Figures <ref> (LMC) and <ref> (SMC) show the results obtained in this regard. For both Clouds, these samples were defined with the same cuts in photometry, as described in Table <ref>. The number of false SPM4 stars within each sample is shown in parentheses. Such selection was motivated by the results obtained by <cit.>, where a sample of enigmatic candidates was found among the LMC Be star candidates, although not in the SMC, described as Type-4 stars with different optical and NIR properties. These stars “do not have NIR excess, show large reddening, but are not located in regions with high reddening. The reddening corrected magnitudes make them the brightest and most massive stars in the sample. Detailed spectroscopic studies are needed to understand these enigmatic candidates” <cit.>. Within our study, such a subgroup corresponds to sample B within each Cloud, and as expected, in the SMC such sample is much less populated.
Our figures <ref> and <ref> replicate figures 3 and 4 from <cit.>, although no extinction correction was included, as it was deemed too small to have any significant effect on our investigation. The histograms of Δθ shown in the lower right panel of figures <ref> and <ref> reveal that samples C within each Magellanic Cloud are in fact mismatches between the Be star candidates and the SPM4×IRSF catalog. Stars in samples C have quite faint V magnitudes and J-H and H-Ks colors typical of GKM giants, as seen in the upper right panels of figures <ref> and <ref>. Therefore, samples C within each Magellanic Cloud are discarded at this point from any further consideration, and only samples A and B are analysed in the following sections.

§ PROPER MOTION RESULTS Figure <ref> shows the vector-point diagrams for samples A and B within each of the Clouds. In both cases, although especially for the LMC, it is evident that sample B is dominated by stars with large enough proper motions that place many of its stars within the Milky Way. This settles the nature of what was originally thought to be a new subclass of Be star candidates in the LMC, suggested by <cit.>, as simply stars located within the Milky Way foreground, having noticeably larger values of μ_δ than those expected for stars in the LMC. In the SMC, we also found that many stars in sample B show systematically larger values, this time easily visible in μ_αcos(δ), when compared to the bluer sample A. The SMC's sample B is of a much smaller size than the LMC's sample B. This result is further confirmed when comparing the histograms of proper motions for all these samples with ones obtained from the SPM4×IRSF catalog, using the photometric selection proposed by <cit.> based on JHKs magnitudes. Figure <ref> shows the histograms of μ_δ and μ_αcos(δ) for the LMC and SMC, respectively, for their corresponding samples A and B, and for two samples chosen within each Magellanic Cloud, named NW-A and NW-B respectively, defined as described in Table <ref>. Samples NW-A are dominated by stars in the Magellanic Clouds of spectral type and luminosity class O3-O5 V and B-A I-II, while samples NW-B are dominated by Milky Way disk stars of spectral type and luminosity class F-K V. It is important to notice that to select these NW samples, we used IRSF JHKs magnitudes. It is visible that samples A and NW-A follow similar kinematics, and so do samples B and NW-B, within each Magellanic Cloud. This similarity refers mainly to central-tendency values, that is, the mean values of samples A and NW-A are close to each other, and so are those of samples B and NW-B; regarding dispersion, each pair of samples also bears resemblance. Table <ref> contains the number of data points, mean and unbiased standard deviation of μ_δ for the LMC, and μ_αcos(δ) for the SMC, for their corresponding samples. In any case, the main point of this investigation is to prove that samples B do not follow the kinematics expected for the Magellanic Clouds, that is, that the proper motion distribution of samples B is noticeably different from that of their corresponding samples NW-A. Two-sample Welch's t-tests were performed on each possible pair of samples to test the null hypothesis that the means of the compared samples are the same. The obtained p-values are in Table <ref> and indicate that the null hypothesis should be clearly rejected when comparing samples B vs. NW-A, and A vs. NW-B. These tests also show that the null hypothesis should not be indisputably rejected in the other cases, that is, A vs. NW-A and B vs. NW-B.
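Such a Welch comparison can be scripted directly; the sketch below uses synthetic Gaussian proper-motion samples whose means, dispersions and sizes are illustrative placeholders rather than the measured values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy mu_delta samples in mas/yr; the means, dispersions and sizes are
# illustrative placeholders, not the measured values of the tables.
samples = {
    'A':    rng.normal(2.0, 6.0, 400),    # Be candidates, bluer sample
    'B':    rng.normal(9.0, 8.0, 120),    # Be candidates, redder sample
    'NW-A': rng.normal(2.2, 6.0, 2000),   # NIR-selected Magellanic stars
    'NW-B': rng.normal(8.5, 8.0, 2000),   # NIR-selected Galactic disk stars
}

for be in ('A', 'B'):
    for nw in ('NW-A', 'NW-B'):
        # equal_var=False selects Welch's variant of the two-sample t-test,
        # which does not assume equal population variances.
        t, p = stats.ttest_ind(samples[be], samples[nw], equal_var=False)
        print(f"{be:>2} vs {nw:<4}: t = {t:7.2f}, p = {p:.3g}")
```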
An exception is the LMC A vs. NW-A case, where it is visible that some difference exists in the means. Still, in this case we notice that their distributions overlap well enough to substantiate the idea that A and NW-A share a similar kinematics, i.e., that many of the Be star candidates in the LMC sample A are indeed located in the LMC. It should be taken into account that SPM4 proper motion errors do affect the performance of these tests, and contamination from other populations or the presence of systematic effects in the SPM4 proper motions should be considered (see section <ref> for an analysis of this last point).

§.§ Cross-examination with some measured radial velocities In <cit.>, radial velocities and spectral types were determined for 102 Be star candidates drawn from <cit.> and <cit.>: 39 and 63 stars in the LMC and SMC, respectively. In the SMC list, the star with OGLE-II ID 005224.40-724038.6 appeared twice, with the same sky coordinates and photometry but with Types 1 and 2, and different radial velocities, though still consistent with each other within errors (V_r = 149±30 km/s and V_r = 115±24 km/s, respectively). In fact, this star was identified in our work and the Type-1 entry was taken for the match with SPM4, so in this case, for consistency, we do the same again. From the LMC sample, 8 stars (out of 39) were identified in our work with proper motions in SPM4, and their data are displayed in Table <ref>. From the SMC sample, 33 stars (out of 63) were identified in our work with proper motions in SPM4, and the data for a few stars are shown in Table <ref>. Figure <ref> shows the radial velocity RV for these stars versus the V-I color[Be star candidate with OGLE-II ID 004526.51-733014.2 in the SMC has V-I = 1.092, out of the plot range, and therefore is not shown in this figure. The same applies for Figure <ref>.], with a symbol sized by the total proper motion μ = √((μ_αcos(δ))^2 + (μ_δ)^2). Despite the fact that proper motion errors at the distance of the Magellanic Clouds represent a much larger error in km s^-1 compared to the RV errors, it can be noticed that many of the bluer SMC stars have RV > 150 km s^-1 and also the smaller proper motions; therefore all the kinematical information is consistent with their membership in the SMC. For the LMC stars, with so few stars this plot is not as useful. In any case, this plot confirms that the bluer a variability-selected Be star candidate is towards the Magellanic Clouds, the more probable it is to belong to these galaxies.

§ DISCUSSION When finding trends of proper motions among colour-selected samples, it is important to check that such behaviour is not the result of systematic errors within the catalog used, and therefore of non-cosmic origin, so to speak. In the Magellanic Clouds area, the SPM4 catalog has measured proper motions using observational materials of different kinds for their 2nd-epoch positions: photographic plates or CCDs (in a very few cases, these positions were directly taken from an input catalog, which can be Hipparcos, Tycho-2, UCAC2 or 2MASS; then n=i=0). Proper motions computed from 2nd-epoch CCDs have errors about 2-3 times smaller than those from 2nd-epoch plates, and such errors are correlated with position in the sky, depending on the location of the plates. It can also happen that from one area in the sky with plates to another one with CCDs, an offset in the reference system takes place, artificially introducing trends in proper motions.
Many astrometric catalogs suffer from these problems, but a thorough examination of the results generally reveals if such issues are present. A quick examination of the proper motions in the Magellanic Clouds area within the SPM4 catalog reveals some trends with n that were inspected carefully to check if those could be behind the results described in the previous section. No trends in proper motion with m or i were observed, which, in a catalogue like SPM4, is not unexpected. Within the LMC SPM4×IRSF catalog, it is visible that stars with n=0 (no 2nd-epoch plates, therefore CCD positions if i>0) have an overall shift in their μ_δ values towards positive values, as compared to the stars with n>0, whose global distribution looks more centered towards zero. Within the LMC Be sample, about 10%, 20% and 70% of the stars have n=0, n=1 and n=2, respectively. The proper motion trend in μ_δ is the same for n=1,2 and as described in the previous section, and for n=0 it is a bit offset, but still sample B shows positive and larger values in μ_δ as compared to sample A, as shown in Fig. <ref>, upper panel. Within samples A and B, n=1,2 stars represent 73% and 83% of the stars, respectively. The majority of the n=0 stars are located towards the west end of the LMC Be stars in the sky. Within the SMC SPM4×IRSF catalog, no trend in the proper motions with n was visible. Within the SMC Be sample, about 80% of the stars have n=0, and the same proportion is seen within sample A and sample B. Figure <ref>, lower panel, shows μ_αcos(δ) vs. V-I color-coded by n, which in this case ranges from 0 to 4. We conclude that the observed trends in proper motions are a real feature of the samples chosen and not a bias caused by systematics in the SPM4 catalogue.

§ CONCLUSIONS * A proper motion investigation for a sample of Be star candidates towards the Magellanic Clouds, using data from the SPM4 catalog, has been done, which has allowed us to separate a Galactic foreground population from the Magellanic background one. * The Galactic foreground population has redder colors in B-V and V-I and large proper motions, visibly different from those expected for the Magellanic Clouds. * In the LMC, this population is readily visible as a cluster in the J-H vs. J-Ks color-color diagram and had been described in the literature as an intriguing subgroup of stars within the LMC that needed further spectroscopic analysis to clarify their nature. * Our result proves that these stars are simply not in the Magellanic Clouds and confirms that the photometric variability method used by <cit.> and <cit.> to select Be star candidates suffers from significant contamination towards redder B-V and V-I values.

[Allen2000]allen Allen C. W., 2000, in Cox A., ed., Allen's Astrophysical Quantities, 4th edition, Springer-Verlag, New York [Collins1987]collins Collins G. W. II, 1987, in IAU Colloq. 92: Physics of Be stars, p. 3 [Dougherty et al.1994]doug Dougherty S. M., Waters L. B. F. M., Burki G., Cote J., Cramer N., van Kerkwijk M. H., Taylor A. R., 1994, A&A, 290, 609 [Girard et al.2010]spm4 Girard T. M. et al., 2011, AJ, 142, 15 [Hernández et al.2005]jesush Hernández J., Calvet N., Hartmann L., Briceño C., Sicilia-Aguilar A., Berlind P., 2005, AJ, 129, 856 [Kato et al.2007]kato07 Kato D. et al., 2007, PASJ, 59, 615 [Mennickent et al.2002]mennickent02 Mennickent R. E., Pietrzynski G., Gieren W., Szewczyk O., 2002, A&A, 393, 887 [Mennickent et al.2009]mennickent09 Mennickent R. E., Sabogal B., Granada A., Cidale L., 2009, PASP, 121, 125
[Nikolaev & Weinberg2000]niko00 Nikolaev S., Weinberg M. D., 2000, ApJ, 542, 804 [Sabogal et al.2005]sabogal05 Sabogal B. E., Mennickent R. E., Pietrzynski G., Gieren W., 2005, MNRAS, 361, 1055 [Paul et al.2012]paul12 Paul K. T., Subramaniam A., Mathew B., Mennickent R. E., Sabogal B., 2012, MNRAS, 421, 3622 [Rivinius et al.2013]review13 Rivinius T., Carciofi A. C., Martayan C., 2013, A&ARv, 21, 69

§ TABLES A sample of Table <ref>, Table <ref> and a sample of Table <ref> are shown on the following page. | http://arxiv.org/abs/1704.08709v1 | {
"authors": [
"Katherine Vieira",
"Alejandro García-Varela",
"Beatriz Sabogal"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170427183415",
"title": "Proper motion separation of Be star candidates in the Magellanic Clouds and the Milky Way"
} |
Dirac δ-function potential in quasiposition representation of a minimal-length scenario M. F. Gusson, A. Oakes O. Gonçalves, R. O. Francisco, R. G. Furtado, J. C. Fabris[[email protected]] and J. A. Nogueira[[email protected]] Departamento de Física, Centro de Ciências Exatas, Universidade Federal do Espírito Santo – UFES, 29075-910 – Vitória – ES – Brasil December 30, 2023 ============================================================================================================================================================================================================================================================================================================= A minimal-length scenario can be considered as an effective description of quantum gravity effects. In quantum mechanics the introduction of a minimal length can be accomplished through a generalization of Heisenberg's uncertainty principle. In this scenario, state eigenvectors of the position operator are no longer physical states and the representation in momentum space, or a representation in a quasiposition space, must be used. In this work, we solve the Schroedinger equation with a Dirac δ-function potential in quasiposition space. We calculate the bound state energy and the coefficients of reflection and transmission for scattering states. We show that the leading corrections are of the order of the minimal length (O(√β)) and that the coefficients of reflection and transmission are no longer the same for the Dirac delta well and barrier, as they are in ordinary quantum mechanics. Furthermore, assuming that the equivalence of the 1s state energy of the hydrogen atom and the bound state energy of the Dirac δ-function potential in 1-dim is kept in a minimal-length scenario, we also find that the leading correction term for the ground state energy of the hydrogen atom is of the order of the minimal length and Δx_min ≤ 10^-25 m. Keywords: minimal length; generalized uncertainty principle; Dirac delta-function potential PACS: 12.60.-i, 03.65.Sq, 04.20.Cv, 03.65.Ca

§ INTRODUCTION Gravity quantization has become a huge challenge to theoretical physicists. Despite the enormous efforts employed, so far it has not been possible to obtain a theory which can be considered suitable, nor even a consensus approach. Nevertheless, most of the candidate theories for gravity quantization seem to have one common point: the prediction of the existence of a minimal length, that is, a limit to the precision of a length measurement. Although the first proposals for the existence of a minimal length were made at the beginning of the 1930s <cit.>, they were not connected with quantum gravity, but rather with a natural cut-off that would remedy cumbersome divergences arising from the quantization of systems with an infinite number of degrees of freedom. The relevant role that gravity plays in trying to probe a smaller and smaller region of space-time was recognized by M. Bronstein <cit.> already in 1936; however, his works did not attract much attention. It was only in 1964 that C. A. Mead <cit.> once again proposed a possible connection between gravitation and minimal length. Hence, we can assume that gravity may lead to an effective cut-off in the ultraviolet.
Furthermore, if we are convinced that gravitational effects are taken into account when a minimal length is introduced, then a minimal-length scenario can be thought of as an effective description of quantum gravity effects <cit.>. As far as we know, the introduction of a minimal-length scenario can be carried out in three different ways <cit.>: a generalization of Heisenberg's uncertainty principle (GUP), a deformation of special relativity[It is named doubly special relativity because of the existence of two universal constants: the light speed and the minimal length.] (DSR) and a modification of the dispersion relation (MDR). Various problems connected with the minimal length have been studied in the context of non-relativistic quantum mechanics. Among them are the harmonic oscillator <cit.>, the hydrogen atom <cit.>, step and barrier potentials <cit.>, finite and infinite square wells <cit.>, as well as others. In the relativistic context, the Dirac equation has been studied in <cit.>. The Casimir effect has also been studied in a minimal-length scenario in <cit.>. An interesting problem in quantum mechanics is the Dirac δ-function potential. In general, the Dirac δ-function potential is used as a pedagogical exercise. Nevertheless, it has also been used to model physical quantum systems <cit.>. Maybe because the attractive Dirac δ-function potential is one of the simplest quantum systems which display both a bound state and a set of continuous states, it has been used to model atomic and molecular systems <cit.>. In addition, short-range interactions in condensed matter with a large scattering length can actually be modeled as a Dirac δ-function potential <cit.>. In quantum field theory, in order to treat the Casimir effect more realistically, the boundary conditions are replaced by the interaction potential (1/2)σ(x)ϕ^2(x), where σ(x) represents the field of the material of the borders (background field). Hence, for sharply localized borders the background field can be modeled by a Dirac δ-function <cit.>. The Dirac δ-function potential is by its very nature a challenging problem in a minimal-length scenario. N. Ferkous <cit.> and M. I. Samar & V. M. Tkachuk <cit.> have independently calculated the bound state energy in “momentum space”. In both papers, the authors have found corrections to the expression of the energy in √β and β (√β ∼ Δx_min), but with different coefficients, and therefore disagreeing outcomes. M. I. Samar and V. M. Tkachuk claim that this is because, whereas they consider p to belong to (-π/2√(β), π/2√(β)), Ferkous considers p to belong to (-∞, ∞). In this paper we consider the most general case: we propose to solve the problem of a non-relativistic particle of mass m in the presence of a Dirac δ-function potential in quasiposition space. Since the quasiposition space representation is used, we can consider the cases of bound states and scattering states as well. We find the same expression for the energy of the bound state as that obtained by Ferkous. In addition, assuming that the equality between the 1s state energy of the hydrogen atom and the bound state energy of the Dirac δ-function potential in 1-dim, when the coefficient of the δ-potential is replaced by the fine structure constant <cit.>, is kept in a minimal-length scenario, we find that the leading correction to the ground state energy of the hydrogen atom is of the order of the minimal length (O(√β)), differently from what is commonly found in the literature using perturbative methods <cit.>, but in accordance with the results obtained by T. V. Fityo, I. O. Vakarchuk and V. M. Tkachuk <cit.> and D. Bouaziz & N. Ferkous <cit.> using a non-perturbative approach.
The rest of this paper is organized as follows. In section <ref> we show how to introduce a minimal-length scenario and find the time-independent Schroedinger equation in the quasiposition space representation. In section <ref> we solve the modified Schroedinger equation and find the bound state energy and the coefficients of reflection and transmission for the scattering states. We present our conclusions in section <ref>.

§ MINIMAL-LENGTH SCENARIO In quantum theory, a minimal-length scenario can be accomplished by imposing a non-zero minimal uncertainty in the measurement of position, which leads to a generalized uncertainty principle (GUP). Since Δx Δp ≥ |⟨[x̂,p̂]⟩|/2, a generalization of the uncertainty principle corresponds to a modification in the algebra of the operators. There are different suggestions for modifying the commutation relation between the position and momentum operators which implement a minimal-length scenario. We are concerned with the most usual of them, proposed by Kempf <cit.>, which in a 1-dimensional space is given by [x̂,p̂] := iħ(1 + βp̂^2), where β is a parameter related to the minimal length. The commutation relation (<ref>) corresponds to the GUP Δx Δp ≥ (ħ/2)[1 + β(Δp)^2 + β⟨p̂⟩^2], which implies the existence of a non-zero minimal uncertainty in the position, Δx_min = ħ√(β).
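The value of Δx_min follows from minimizing the right-hand side of the GUP over Δp; this short derivation (our own filling-in of a step not spelled out here, for the case ⟨p̂⟩ = 0) reads:

```latex
% Minimize the GUP bound over \Delta p (taking <\hat{p}> = 0):
\Delta x \;\ge\; \frac{\hbar}{2}\left[\frac{1}{\Delta p} + \beta\,\Delta p\right],
\qquad
\frac{d}{d(\Delta p)}\left[\frac{1}{\Delta p} + \beta\,\Delta p\right]
  = -\frac{1}{(\Delta p)^{2}} + \beta = 0
\;\Longrightarrow\; \Delta p = \frac{1}{\sqrt{\beta}},
% so that
\Delta x_{\min} \;=\; \frac{\hbar}{2}\left[\sqrt{\beta} + \sqrt{\beta}\right]
               \;=\; \hbar\sqrt{\beta}.
```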
J = - i ħ/2m( ψ^*∂ψ/∂ x - ψ∂ψ^*/∂ x) + i βħ^3/m[ ( ψ^*∂^3ψ/∂ x^3 - ψ∂^3ψ^*/∂ x^3) + ( ∂^2ψ^*/∂ x^2∂ψ/∂ x - ∂^2ψ/∂ x^2∂ψ^*/∂ x) ], but it does not modify the probability density[That is because the authors assume that there are no changes in the time-dependent part of the Schroedinger equation.], ρ = | ψ |^2. § DIRAC δ-FUNCTION POTENTIAL In this section, we consider a non-relativistic particle of mass m in the presence of a Dirac delta-function potential in a minimal-length scenario. According to Eq. (<ref>) we have -ħ^2/2md^2φ(x)/dx^2 + βħ^4/3md^4φ(x)/dx^4 - V_0δ(x)φ(x) = E φ(x), where V_0 > 0 is a constant. Integrating Eq. (<ref>) between - ϵ and ϵ (with ϵ arbitrarily small and positive), and then taking the limit ϵ→ 0, we obtain [ d φ_II(0)/d x - d φ_I(0)/d x] - 2/3βħ^2[ d^3φ_II(0)/d x^3 - d^3φ_I(0)/d x^3] + 2mV_0/ħ^2φ(0) = 0, where φ_I(x) and φ_II(x) are the solutions of Eq. (<ref>) for x < 0 and x > 0, respectively. Since the third derivative of φ(x) at x = 0 has a finite discontinuity (that is to say, a jump by a finite amount), we require that the first and second derivatives be continuous at x = 0. Consequently, Eq. (<ref>) becomes <cit.> β/3[ d^3φ_II(0)/d x^3 - d^3φ_I(0)/d x^3] = mV_0/ħ^4φ(0). As is well known, taking into account the sign of the energy, two cases can then arise: (i) bound states when E < 0 and (ii) scattering states when E > 0. §.§ Bound states In this case, the general solution of Eq. (<ref>) is given by φ_I,II(x) = A_I,II e^kx + B_I,II e^-kx + C_I,II e^k_βx + D_I,II e^-k_βx, where, to first order in β, k := k_0( 1 + 1/3βħ^2 k_0^2), k_β := √(3/2 ħ^2β)( 1 - 1/3βħ^2 k_0^2) and k_0 := √(2 m |E|/ħ^2). The coefficients can be determined, up to one normalization constant, by requiring that the solutions remain finite as x →±∞ and that the solution and its first and second derivatives be continuous at x = 0. We arrive at the result {[φ_I(x) = A e^kx - k/k_β A e^k_βx,x < 0; [13pt] φ_II(x) = A e^-kx - k/k_β A e^-k_βx, x > 0,;]. where A is the normalization constant. From Eq. (<ref>) we can find the bound state energy up to order β as E = - m V_0^2/2 ħ^2 + √(2 β/3)m^2 V_0^3/ħ^3 - 2 β m^3 V_0^4/ħ^4, which is in agreement with N. Ferkous's result <cit.>. It is interesting to note that the first correction brought about by the introduction of a minimal-length scenario is O(√(β)). For an electron, the relative difference between the bound state energy arising from the introduction of a minimal length and the absolute value of the ordinary bound state energy is shown as a function of the minimal length, for an energy of about 1 eV, in Fig. <ref>, and as a function of E_0 (1 eV ≤ E_0≤ 1 keV) for L_min = 10^-20 m in Fig. <ref>. In Fig. <ref> we choose the upper value of 10^-17 m for the minimal length because it is in accordance with that commonly found in the literature <cit.> and is consistent with the one at the electroweak scale <cit.>. For the Planck length, Δ E/E_0≈ 8.4 × 10^-26, unfortunately a virtually unmeasurable quantum-gravity effect using current technology. §.§ Scattering states In this case, the general solution of Eq. (<ref>) is given by φ_I,II(x) = A_I,II e^ikx + B_I,II e^-ikx + C_I,II e^k^'_βx + D_I,II e^-k^'_βx, where k^'_β := √(3/2 ħ^2β)( 1 + 1/3βħ^2 k_0^2). Now we demand that there be no reflected wave for x > 0; consequently B_II = 0. From the requirement that the solutions remain finite as x →±∞ we have D_I = 0 and C_II = 0. In this case, the continuity of the solution and of its first and second derivatives at x = 0 is not enough to determine the coefficients.
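As a quick numerical illustration of the bound-state result above, the following minimal sketch (our illustration, not part of the original analysis) evaluates E up to order β for an electron whose ordinary bound state is placed at 1 eV; the choice of V_0 and the trial minimal-length values are illustrative assumptions. For a Planck-scale minimal length the leading relative correction comes out at the order of 10^-25, consistent with the order of magnitude quoted above.

```python
import math

hbar = 1.054571817e-34   # J s
me   = 9.1093837015e-31  # kg (electron)
eV   = 1.602176634e-19   # J

def bound_state_energy(V0, m, beta):
    """E = -m V0^2/(2 hbar^2) + sqrt(2 beta/3) m^2 V0^3/hbar^3 - 2 beta m^3 V0^4/hbar^4."""
    return (-m * V0**2 / (2 * hbar**2)
            + math.sqrt(2 * beta / 3) * m**2 * V0**3 / hbar**3
            - 2 * beta * m**3 * V0**4 / hbar**4)

# illustrative setup: pick V0 so the ordinary (beta = 0) bound state sits at -1 eV
V0 = hbar * math.sqrt(2 * eV / me)
for L_min in (1.6e-35, 1e-20, 1e-17):           # Planck length and two trial values, in m
    beta = (L_min / hbar)**2                    # since Delta x_min = hbar*sqrt(beta)
    dE = bound_state_energy(V0, me, beta) + eV  # shift relative to the ordinary energy
    print(f"L_min = {L_min:.1e} m: dE/E0 = {dE/eV:.2e}")
```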
To determine the remaining coefficients it is also necessary to use the discontinuity of the third derivative at x = 0, Eq. (<ref>). After some algebra, we have {[ φ_I(x) = A e^ikx + ik^'_β/kA/be^-ikx - A/be^k^'_βx, x < 0; [13pt] φ_II(x) = aA/b e^ikx - A/be^-k^'_βx,x > 0,; ]. where a := 1 +2 βħ^4 k^'_β/3 m V_0(k^' 2_β + k^2), b := a - i k^'_β/k, and A is a normalization constant. Consequently, the reflection and transmission coefficients are given by R = ( k^'_β/k)^21/[1 +2 βħ^4 k^'_β/3 m V_0(k^' 2_β + k^2) ]^2 + ( k^'_β/k)^2 and T = [1 +2 βħ^4 k^'_β/3 m V_0( k^' 2_β + k^2) ]^2/[1 +2 βħ^4 k^'_β/3 m V_0(k^' 2_β + k^2) ]^2 + ( k^'_β/k)^2. Note that R + T = 1, as it must be. It is instructive to write the reflection and transmission coefficients up to first corrections. Then, R = ( 1 + 2ħ^2 |E|/m V_0^2)^-1[ 1 - √(2 β/3)2mV_0/ħ( 1 + m V_0^2/2ħ^2 |E|)^-1] and T = ( 1 + m V_0^2/2ħ^2 |E|)^-1[ 1 + √(2 β/3)m^2V_0^3/ħ^3 |E|( 1 + m V_0^2/2ħ^2 |E|)^-1]. The above results show that the reflection and transmission coefficients are no longer the same in the cases of a delta-function well (V_0 > 0) and a delta-function barrier (V_0 < 0). Moreover, the presence of a minimal length decreases the probability of tunneling. It is also interesting to note that the first correction brought about by the introduction of a minimal-length scenario is O(√(β)), in the same way as in the bound state energy. Fig. <ref> and Fig. <ref> show the relative difference between the transmission coefficient arising from the introduction of a minimal length and T_0 (the ordinary transmission coefficient) for the cases of a Dirac delta well (dashed line) and of a Dirac delta barrier (continuous line). Fig. <ref> is for the scattering of electrons with energy of about 1 eV and V_0 = 2 eVÅ. For the Planck length, Δ T/T_0≈ 8.9 × 10^-27, again a virtually unmeasurable effect. Fig. <ref> is for the scattering of protons with energy of about 1 MeV and V_0 = 3 × 10^-2 MeVÅ. For the Planck length, Δ T/T_0≈ 4.7 × 10^-17. Note that L_min∼ 10^-17 m results in significant effects, which may be an indication that L_min is far from the electroweak scale. §.§ Remarks * It is easy to see that in the limit β→ 0 we recover the known results for the Dirac δ-function potential in ordinary quantum mechanics. * A more detailed analysis shows that k_β and k^'_β do not vanish even if m = 0. Therefore, the e^-k_β|x| and e^-k^'_β|x| solutions still persist, since e^-k_β|x|, e^-k^'_β|x|→ e^-√(3/2 ħ^2β)|x| when m = 0. Consequently, this leads us to presume that such solutions are “background solutions” caused by the introduction of an effective description of the effects of quantum gravity. However, since their coefficients in Eqs. (<ref>) and (<ref>) vanish when m = 0, they are not present in the bound-state and scattering-state solutions. * It is important to point out that now the first derivative at x = 0 is no longer discontinuous. However, in the limit β→ 0 the discontinuity at x = 0 is recovered. Moreover, if the term of O(β^2) is considered in the Schroedinger equation, the third derivative will become continuous at x = 0, and so on. * The e^-k_β|x| and e^-k^'_β|x| solutions are only significant for very small values of x, that is, at high energy. Thus we could assume that they lie far outside the validity range in which the Schroedinger equation can consistently work and discard them. However, that would be a naive assumption, because they lead to the emergence of traces of quantum gravity in low-energy physics, as the previous results show. Note that they provide the continuity of the first and second derivatives at x = 0.
* It is known, at least since Frost's work of 1954 <cit.>, that the ground state energy of the hydrogen atom (1s state) is identical to the bound state energy of a Dirac δ-function potential in one dimension when V_0 is replaced by the fine structure constant, α. Thus, assuming that this identity is kept in a minimal-length scenario[Since the symmetry of the 1s state must remain the same in both cases.], the result (<ref>) predicts a leading correction to the ground state energy of the hydrogen atom of O( √(β)), whereas the result commonly found in the literature using perturbative methods is of O(β) <cit.>. It is important to add that, using a non-perturbative approach, T. V. Fityo, I. O. Vakarchuk and V. M. Tkachuk <cit.> and D. Bouaziz & N. Ferkous <cit.> have also found a first correction of O( √(β)). Now, we can make a rough estimate of an upper bound for the minimal-length value by comparing our result with experimental data <cit.>. Using data obtained in reference <cit.>, in which an accuracy of about 4.2 × 10^-14 eV has been achieved, we find that Δ x_min≤ 10^-25 m. Hence, in the case of the proton scattering from the previous subsection, we find Δ T/T_0∼ 10^-17 for L_min∼ 10^-25 m, which is a more representative result. § CONCLUSION In this work, we solve, in quasiposition space, the Schroedinger equation for a Dirac δ-function potential. Our result for the bound state energy is in agreement with that calculated by Ferkous in momentum space. Moreover, we find that the leading corrections to the reflection and transmission coefficients of the scattering states, to the bound state energy and to the ground state energy of the hydrogen atom are of the order of the minimal length, O(√(β)). We also show that in the presence of a minimal length the coefficients of reflection and transmission for the Dirac delta-function well and the Dirac delta-function barrier are no longer the same. There is a decrease in the probability of tunneling. Although different physical systems can be modeled by a Dirac δ-function potential, we have to ask ourselves about the validity of the results, since the Dirac δ-function potential is already an approximation to an actual physical system. That is, are the minimal-length effects smaller than the ones due to the modeling by the Dirac δ-function potential? Probably the answer is yes, though it is difficult to be sure. What we can claim is that the estimates of an upper bound for the minimal-length value are acceptable, in the sense that even though the corrections for a more realistic potential can be greater than the ones due to the minimal-length effects, that only leads to even smaller upper-bound values. However, that is not very different from other systems we have studied in a minimal-length scenario. § ACKNOWLEDGEMENTS We would like to thank FAPES, CAPES and CNPq (Brazil) for financial support. krag1 H. Kragh, “Arthur March, Werner Heisenberg and the search for a smallest length”, Revue d'Histoire des Sciences 8(4), 401 (2012). krag2 H. Kragh, “Heisenberg's lattice world: the 1930 theory sketch”, Am. J. Phys. 63, 595 (1995). Heisenberg W. Heisenberg, “Über die in der Theorie der Elementarteilchen auftretende universelle Länge”, Annalen der Physik 424, 20 (1938). Bronstein M. Bronstein, “Quantum theory of weak gravitational fields”, (republication), Gen. Rel. Grav. 44, 267 (2012). Mead1 C. A. Mead, “Possible connection between gravitation and fundamental length”, Phys. Rev. 135, B849 (1964). Mead2 C. A. Mead and F. Wilczek, “Walking the Planck Length through History”, Physics Today 54, 15 (2001). Hossenfelder1 S.
Hossenfelder, “A note on theories with a minimal length”, Class. Quantum Grav. 23, 1815 (2006). tawfik1 A. Tawfik and A. Diab, “Generalized uncertainty principle: Approaches and applications”, Int. J. Mod. Phys. D 23(12), 1430025 (2014). tawfik2 A. Tawfik and A. Diab, “Review on Generalized Uncertainty Principle”, Rept. Prog. Phys. 78, 126001 (2015). Kempf:1994su A. Kempf, G. Mangano and R. B. Mann, “Hilbert Space Representation Of The Minimal Length Uncertainty Relation”, Phys. Rev. D 52, 1108 (1995). Kempf2:1997 A. Kempf, “Non-pointlike particles in harmonic oscillators”, J. Phys. A 30, 2093 (1997). chang1 L. N. Chang, D. Minic, N. Okamura and T. Takeuchi, “Exact solution of the harmonic oscillator in arbitrary dimensions with minimal length uncertainty relations”, Phys. Rev. D 65, 125027 (2002). dadic I. Dadić, L. Jonke and S. Meljanac, “Harmonic oscillator with minimal length uncertainty relations and ladder operators”, Phys. Rev. D 67, 087701 (2003). Hassanabadi H. Hassanabadi, E. Maghsoodi, Akpan N. Ikot and S. Zarrinkamar, “Minimal Length Schrödinger Equation with Harmonic Potential in the Presence of a Magnetic Field”, Advances in High Energy Physics 2013, 923686 (2013). brau F. Brau, “Minimal length uncertainty relation and hydrogen atom”, J. Phys. A 32, 7691 (1999). yao R. Akhoury and Y. P. Yao, “Minimal length uncertainty relation and the hydrogen spectrum”, Phys. Lett. B 572, 37 (2003). benczik S. Benczik, L. N. Chang, D. Minic, and T. Takeuchi, “Hydrogen-atom spectrum under a minimal-length hypothesis”, Phys. Rev. A 72, 012104 (2005). Stetsko M. M. Stetsko and V. M. Tkachuk, “Perturbation hydrogen-atom spectrum in deformed space with minimal length”, Phys. Rev. A 74, 012101 (2006). nouicer K. Nouicer, “Coulomb potential in one dimension with minimal length: A path integral approach”, J. Math. Phys. 48, 112104 (2007). fityo T. V. Fityo, I. O. Vakarchuk and V. M. Tkachuk, “One-dimensional Coulomb-like problem in deformed space with minimal length”, J. Phys. A: Math. Gen. 39, 2143 (2006). bouaziz D. Bouaziz and N. Ferkous, “Hydrogen atom in momentum space with a minimal length”, Phys. Rev. A 82, 022105 (2010). das1 S. Das and E. C. Vagenas, “Universality of Quantum Gravity Correction”, Phys. Rev. Lett. 101, 221301 (2008). das2 S. Das and E. C. Vagenas, “Phenomenological implications of the generalized uncertainty principle”, Can. J. Phys. 87, 233 (2009). sprenger M. Sprenger, P. Nicolini and M. Bleicher, “Physics on the smallest scales: an introduction to minimal length phenomenology”, Eur. J. Phys. 33, 853 (2012). nozari K. Nozari and T. Azizi, “Some aspects of gravitational quantum mechanics”, Gen. Relativ. Gravit. 38(5), 735 (2006). blado G. Blado, C. Owens and V. Meyers, “Quantum wells and the generalized uncertainty principle”, Eur. J. Phys. 35, 065011 (2014). nozari1 K. Nozari and M. Karami, “Minimal Length and Generalized Dirac Equation”, Mod. Phys. Lett. A 20, 3095 (2005). tkachuk C. Quesne and V. M. Tkachuk, “Dirac oscillator with nonzero minimal uncertainty in position”, J. Phys. A: Math. Gen. 38, 1747 (2005). nouicer1 Kh. Nouicer, “An exact solution of the one-dimensional Dirac oscillator in the presence of minimal lengths”, J. Phys. A: Math. Gen. 39, 5125 (2006). chargui Y. Chargui, A. Trabelsi and L. Chetouani, “Exact solution of the (1+1)-dimensional Dirac equation with vector and scalar linear potentials in the presence of a minimal length”, Phys. Lett. A 374, 531 (2010). hassanabadi1 H. Hassanabadi, S. Zarrinkamar and E.
Maghsoodi, “Minimal length Dirac equation revisited”, Eur. Phys. J. Plus 128, 25 (2013). panella L. Menculini, O. Panella and P. Roy, “Exact solutions of the (2+1) dimensional Dirac equation in a constant magnetic field in the presence of a minimal length”, Phys. Rev. D 87, 065017 (2013). panella1 L. Menculini, O. Panella and P. Roy, “Quantum phase transitions of the Dirac oscillator in a minimal length scenario”, Phys. Rev. D 91, 045032 (2015). nouicer2 Kh. Nouicer, “The Casimir effect in the presence of minimal length”, J. Phys. A: Math. Gen. 38, 10027 (2005). harbach U. Harbach and S. Hossenfelder, “Modification of the Casimir Effect Due to a Minimal Length Scale”, Phys. Lett. B 632, 379 (2006). panella2 O. Panella, “Casimir-Polder intermolecular forces in minimal length theories”, Phys. Rev. D 76, 045012 (2007). panella3 A. M. Frassino and O. Panella, “Casimir effect in minimal length theories based on a generalized uncertainty principle”, Phys. Rev. D 85, 045030 (2012). dorsch G. C. Dorsch and J. A. Nogueira, “Maximally Localized States in Modified Commutation Relation to All Orders”, Int. J. Mod. Phys. A 27(21), 1250113 (2012). robinett M. Belloni and R. W. Robinett, “The infinite well and Dirac delta function potentials as pedagogical, mathematical and physical models in quantum mechanics”, Phys. Rep. 540, 25 (2014). kronig R. L. Kronig and W. G. Penney, “Quantum mechanics of electrons in crystal lattices”, Proc. R. Soc. Lond. Ser. A 130, 499 (1931). frost A. A. Frost, “Delta potential function model for electronic energies in molecules”, J. Chem. Phys. 22, 1613 (1954). frost1 A. A. Frost, “Delta-function model. I. Electronic energies of hydrogen-like atoms and diatomic molecules”, J. Chem. Phys. 25, 1150 (1956). frost2 A. A. Frost and F. E. Leland, “Delta-potential model. II. Aromatic hydrocarbons”, J. Chem. Phys. 25, 1155 (1956). kuhn H. Kuhn, “Free electron model for absorption spectra of organic dyes”, J. Chem. Phys. 16, 840 (1948). kuhn1 H. Kuhn, “A quantum-mechanical theory of light absorption of organic dyes and similar compounds”, J. Chem. Phys. 17, 1198 (1949). tan S. Tan, “Energetics of a strongly correlated Fermi gas”, Ann. Phys. 323, 2952 (2008). tan1 S. Tan, “Large momentum part of a strongly correlated Fermi gas”, Ann. Phys. 323, 2971 (2008). tan2 S. Tan, “Generalized virial theorem and pressure relation for a strongly correlated Fermi gas”, Ann. Phys. 323, 2987 (2008). braaten E. Braaten and L. Platter, “Exact relations for a strongly interacting Fermi gas from the operator product expansion”, Phys. Rev. Lett. 100, 205301 (2008). zhang S. Zhang and A. J. Leggett, “Universal properties of the ultracold Fermi gas”, Phys. Rev. A 79, 023601 (2009). jaffe N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, M. Scandurra and H. Weigel, “Calculating vacuum energies in renormalizable quantum field theories: A new approach to the Casimir problem”, Nucl. Phys. B 645, 49 (2002). kimball K. A. Milton, “The Casimir effect: recent controversies and progress”, J. Phys. A: Math. Gen. 37, R209 (2004). Ferkous N. Ferkous, “Regularization of the Dirac δ potential with minimal length”, Phys. Rev. A 88, 064101 (2013). samar M. I. Samar and V. M. Tkachuk, “Exactly solvable problems in the momentum space with a minimum uncertainty in position”, J. Math. Phys. 57, 042102 (2016). antonacci F. L. Antonacci Oakes, R. O. Francisco, J. C. Fabris and J. A. Nogueira, “Ground state of the hydrogen atom via Dirac equation in a minimal-length scenario”, Eur. Phys. J. C 73, 2495 (2013). pedram2 P.
Pedram, “New approach to nonperturbative quantum mechanics with minimal length uncertainty”, Phys. Rev. D 85, 024016 (2012). vagenas S. Das and E. C. Vagenas, “Phenomenological implications of the generalized uncertainty principle”, Can. J. Phys. 87, 233 (2009). quesne C. Quesne and V. M. Tkachuk, “Composite system in deformed space with minimal length”, Phys. Rev. A 81, 012106 (2010). robinett1 M. Belloni and R. W. Robinett, “Less than perfect quantum wavefunctions in momentum-space: How ϕ(p) senses disturbances in the force”, Amer. J. Phys. 79, 94 (2011). parthey C. G. Parthey et al., “Improved Measurement of the Hydrogen 1S-2S Transition Frequency”, Phys. Rev. Lett. 107, 203001 (2011). | http://arxiv.org/abs/1704.08236v2 | {
"authors": [
"M. F. Gusson",
"A. Oakes O. Gonçalves",
"R. O. Francisco",
"R. G. Furtado",
"J. C. Fabris",
"J. A. Nogueira"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20170426174435",
"title": "Dirac $δ$-function potential in quasiposition representation of a minimal-length scenario"
} |
That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox Anders Sandberg[Future of Humanity Institute, University of Oxford. Littlegate House, Suite 1, 16/17 St. Ebbe's Street. Oxford OX1 1PT. United Kingdom. ], Stuart Armstrong[Future of Humanity Institute. ], Milan Ćirković[Future of Humanity Institute and Astronomical Observatory of Belgrade, Volgina 7, 11000 Belgrade, Serbia. ] December 30, 2023 =========================================================================================================================================================================================================================================================================================================================================== If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the “aestivation hypothesis”: the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis. Keywords: Fermi paradox, information physics, physical eschatology, extraterrestrial intelligence § INTRODUCTION Answers to the Fermi question ("where are they?") typically fall into the groups “we are alone” (because intelligence is very rare, short-lived, etc.), “aliens exist but not here” (due to limitations of our technology, temporal synchronization etc.) or “they're here” (but not interacting in any obvious manner). This paper will examine a particular version of the third category, the aestivation hypothesis. In previous work <cit.>, we showed that a civilization with a certain threshold technology could use the resources of a solar system to launch colonization devices to effectively every solar system in a sizeable part of the visible universe. While the process as a whole would last eons, the investment in each bridgehead system would be small and brief. Also, the earlier the colonization was launched the larger the colonized volume. These results imply that early civilizations have a far greater chance to colonize and pre-empt later civilizations if they wish to do so. If these early civilizations are around, why are they not visible? The aestivation hypothesis states that they are aestivating[“Hibernation” might be a more familiar term but aestivation is the correct one for sleeping through too warm summer months.] until a later cosmological era. The argument is that the thermodynamics of computation make the cost of a certain amount of computation proportional to the temperature. Our astrophysical and cosmological knowledge indicates that the universe is cooling down with cosmic time. Not only does star formation within galaxies wind down and die out on timescales of 10^9 - 10^10 yr, but even the cosmic background radiation temperature becomes exponentially colder. As the universe cools down, one joule of energy is worth proportionally more. This can be a substantial (10^30) gain. Hence a civilization desiring to maximize the amount of computation will want to use its energy endowment as late as possible: using it now means far less total computation can be done.
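To make the multiplier concrete, here is a minimal back-of-envelope sketch (our illustration, not part of the original argument): by Landauer's bound, the number of bit erasures a joule can buy scales as 1/(kT ln 2), so the gain from waiting is simply the ratio of the present background temperature to the far-future de Sitter horizon temperature derived later in the paper.

```python
import math

k = 1.380649e-23                  # Boltzmann constant, J/K
T_now = 2.725                     # present CMB temperature, K
T_dS  = 2.67e-30                  # de Sitter horizon temperature, K (see below)

erasures_per_joule = lambda T: 1.0 / (k * T * math.log(2))
print(f"gain from waiting: {erasures_per_joule(T_dS) / erasures_per_joule(T_now):.1e}")
# -> ~1e30, the multiplier quoted above
```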
Hence an early civilization, after expanding to gain access to enough raw materials, will settle down and wait until it becomes rational to use the resources. We are not observing any aliens since the initial expansion phase is brief and intermittent and the aestivating civilization and its infrastructure are also largely passive and compact. The aestivation hypothesis hinges on a number of assumptions that will be examined in this paper. The main goal is to find what physical and motivational constraints make it rational to aestivate and see if these can be met. If they cannot, then the aestivation hypothesis is not a likely answer for the Fermi question. This paper is very much based on the physical eschatology research direction started by Freeman Dyson in <cit.> combined with the SETI approach dubbed “Dysonian SETI” <cit.>. Physical eschatology attempts to map the long-term future of the physical universe given current knowledge of physics <cit.>. This includes the constraints for life and information processing as the universe ages. Dysonian SETI takes the approach of widening the search to look for detectable signatures of highly advanced civilizations, in particular megascale engineering. Such signatures would not be deliberate messages, could potentially outlast their creators, and are less dependent on biological assumptions about the originating species (rather, they would stand out because of unusual physics compared to normal phenomena). It should be noted that in the physical eschatology context aestivation/hibernation has been an important issue for other reasons: Dyson suggested it as a viable strategy to stretch available energy and heat dissipation resources <cit.>, and Krauss and Starkman critiqued this approach <cit.>. § AESTIVATION The Old Ones were, the Old Ones are, and the Old Ones shall be. Not in the spaces we know, but between them. They walk serene and primal, undimensioned and to us unseen. H.P. Lovecraft, The Dunwich Horror and Others An explanation for the Fermi question needs to account for a lack of observable aliens given the existence of human observers in the present (not taking humans into account can lead to anthropic biases: we are not observing a randomly selected sample from all possible universes but rather a sample from universes compatible with our existence <cit.>). The aestivation hypothesis makes the following assumptions: * There are civilizations that mature much earlier than humanity. * These civilizations can expand over sizeable volumes, gaining power over their contents. * These civilizations have solved their coordination problems. * A civilization can retain control over its volume against other civilizations. * The fraction of mature civilizations that aestivate is non-zero. * Aestivation is largely invisible. Assumption 1 is plausible, given planetary formation estimates <cit.>. Even a billion years of lead time is plenty to dominate the local supercluster, let alone the galaxy. If this assumption is not true (for example due to global synchronization of the emergence of intelligence <cit.>) we have an alternative answer to the Fermi question. We will assume that the relevant civilizations are mature enough to have mastered the basic laws of nature and their technological implications to whatever limit this implies. More than a million years of technological development ought to be enough. Assumption 2 is supported by our paper <cit.>.
In particular, if a civilization can construct self-replicating technological systems there is essentially no limit to the amount of condensed matter in solar systems that can be manipulated. These technological systems are still potentially active in the current era. Non-expanding aestivators are entirely possible but would not provide an answer to the Fermi question. If assumption 2 is wrong, then the Fermi question is answered by a low technology ceiling and by intelligences tending to be isolated from each other by distance. Assumption 3: This paper will largely treat entire civilizations as the actors rather than their constituent members. In this regard they are “singletons” in the sense of Nick Bostrom: having a highest level of agency that can solve internal coordination problems <cit.>. Singletons may be plausible because they provide a solution to self-induced technological risks in young technological civilizations (preventing subgroups from accidentally or deliberately wiping out the civilization). Anecdotally, humanity has shown an increasing ability to coordinate on ever larger scales thanks to technological improvements (pace the imperfections of current coordination). In colonization scenarios such as <cit.> coordinating decisions made before launch could stick (especially if they have self-enforcing aspects) even when colonies are no longer in contact with each other. However, the paper does not assume all advanced civilizations will have perfect coordination, merely very good coordination. Assumption 4 needs to handle both civilizations emerging inside the larger civilization and encounters with other mature civilizations at the edges of colonization. For the first case the assumption to some degree follows from assumption 2: if desired the civilization could prevent misbehavior of interior primitive civilizations (the most trivial way is to prevent their emergence, but this makes the hypothesis incompatible with our existence <cit.>). Variants of the zoo hypothesis work but the mature civilization must put some upper limit on technological development or introduce the young civilizations into their system eventually. The second case deals with civilizations on the same level of high technological maturity: both can be assumed to have roughly equal abilities and resources. If it can be shown that colonized volumes cannot be reliably protected against invasion, then the hypothesis falls. Conversely, if it could be proven that invasion over interstellar distances is logistically unfeasible or unprofitable, this could be regarded as a (weak) probabilistic support for our hypothesis. Assumption 5 is a soft requirement. In principle it might be enough to argue that aestivation is possible and that we just happen to be inside an aestivation region. However, arguing that we are not in a typical galaxy weakens the overall argument. More importantly, the assumption explains the absence of non-aestivating civilizations. Below we will show that there are strong reasons for civilizations to aestivate: we are not arguing that all civilizations must do it but that it is a likely strategy. Below we will discuss the issues of civilizational strategies and discounting that strengthen this assumption. Assumption 6 is the key assumption for making the hypothesis viable as a Fermi question answer: aestivation has to leave the universe largely unchanged in order to be compatible with evidence.
The assumption does not imply deliberate hiding (except, perhaps, of the core part of the civilization) but rather that advanced civilizations do not dissipate much energy in the present era, making them effectively stealthy. The main empirical prediction of the hypothesis is that we should observe a strong dampening of processes that reduce the utility for far future computation. § CIVILIZATIONAL GOALS While it may be futile to speculate on the values of advanced civilizations, we would argue that there likely exist convergent instrumental goals. What has value? Value theories assign value to either states of the world or actions. State value models require resources to produce high-value states. If happiness is the goal, using the resources to produce the maximum number of maximally happy minds (with a tradeoff between number and state depending on how utilities aggregate) would maximize value. If the goal is knowledge, the resources would be spent on processing generating knowledge and storage, and so on. For these cases the total amount of produced value increases monotonically with the amount of resources, possibly superlinearly. Actions also require resources and for actions with discernible effects the start and end states must be distinguishable: there is an information change. Even if value resides in the doing, it can be viewed as information processing. The assumption used in this paper is that nearly all goals benefit from more resources and that information processing and storage are good metrics for measuring the potential of value creation/storage. They are not necessarily final goals but instrumentally worth getting for nearly all goals for nearly all rational agents <cit.>. As we will see below, maintaining communication between different parts of an intergalactic civilization becomes impossible over time. If utility requires contact (e.g. true value is achieved by knowledge shared across a civilization) then these limits will strongly reduce the utility of expanding far outside a galactic supercluster. If utility does not require contact (e.g. the happiness or beauty of local entities is what is valuable in itself) then expanding further is still desirable. There may also be normative uncertainty: even if a civilization has a convincing reason to have certain ultimate values, it might still regard this conclusion as probabilistic and seek to hedge its bets. Especially if the overall plan for colonization needs to be formulated at an early stage in its history and it cannot be re-negotiated once underway it may be rational to ensure that alternative value theories (especially if easily accommodated) can be implemented. This may include encountered alien civilizations, which might possibly have different but valid ultimate goals. § DISCOUNTING Another relevant property of a civilization is the rate of its temporal discounting. How much is the far future worth relative to the present? There are several reasons to suspect advanced civilizations have very long time horizons. In a dangerous or uncertain environment it is rational to rapidly discount the value of a future good since survival to that point is not guaranteed. However, mature expanding civilizations have likely reduced their existential risks to a minimum level and would have little reason to discount strongly (individual members, if short-lived, may of course have high discount rates). More generally, the uncertainty of the future will be lower and this also implies lower discount rates.
Specific policies that take a long time to implement, such as megascale engineering or interstellar travel, also favor low discount rates. There could be a selection effect here: high discount rates may prevent such long-range projects and hence only low-discount-rate civilizations will be actors on the largest scales. It also appears likely that a sufficiently advanced civilization could regulate its “mental speed”, either by existing as software running on hardware with a variable clock speed, or by simply hibernating in a stable state for a period. If this is true, then the value of something after a period of pause/hibernation would be determined not by the chronological external time but by how much time the civilization would subjectively experience in waiting for it. Changes in mental speed can hence make temporally remote goods more valuable if the observer can pause until they become available and there is no alternative cost for other goods. This is linked to a reduction of opportunity costs: advanced civilizations have mainly “seen it all” in the present universe and do not gain much more information utility from hanging around in the early era[Some exploration, automated or crewed, might of course still occur during the aestivation period, to be reported to the main part of the civilization at its end.]. There are also arguments that future goods should not be discounted in cases like this. What really counts is fundamental goods (such as well-being or value) rather than commodities; while discounting prices of commodities makes economic sense it may not make sense to discount value itself <cit.>. This is why even a civilization with some temporal discounting can find it rational to pause in order to gain a huge reward in the far future. If the subjective experience is an instant astronomical multiplication of goods (with little risk) it is rational to make the jump. However, it requires a certain degree of internal coordination to overcome variation in individual time preferences (hence assumption 3). § THERMODYNAMICS OF EXTREME FUTURE COMPUTATION Computation is a fundamentally physical process, tied into thermodynamic considerations. Foremost for our current purposes is Landauer's limit: at least E ≥ kT ln(2) J needs to be dissipated for an irreversible change of one bit of information <cit.>. Logically irreversible manipulation of information must be accompanied by an entropy increase somewhere. It should be noted that the thermodynamic cost can be paid by using other things than energy. An ordered reservoir of spin <cit.> or indeed any other conserved quantity <cit.> can function as payment. The cost is ln(2)/λ where λ is related to the average value of the conserved quantity. However, making such a negentropy reservoir presumably requires physical work unless it can be found naturally untapped. §.§ Reversible and quantum computation There exists an important workaround for the Landauer limit: logically reversible computations do not increase entropy and can hence be done in principle without any need of energy dissipation. It has been shown that any logically irreversible computation can be expressed as a reversible computation by storing all intermediate results, outputting the result, and then retracing the steps leading up to the final state in reverse order, leaving the computer in the original state. The only thermodynamic costs would be setting the input registers and writing the output <cit.>.
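The following toy sketch (our illustration of the uncomputing trick just described, under the simplifying assumption that each step's pre-image is simply saved) shows the pattern: run forward while recording history, copy out the answer, then retrace the steps so the machine returns to its initial state.

```python
def reversible_run(x0, steps):
    """Emulate an irreversible iteration reversibly by keeping a history."""
    x, history = x0, []
    for _ in range(steps):
        history.append(x)              # record the information the update destroys
        x = (x * x + 1) % 2**32        # an arbitrary lossy update rule
    output = x                         # writing the output is the only
                                       # remaining logically irreversible step
    while history:                     # retrace the steps in reverse order
        x = history.pop()
    assert x == x0                     # the machine is back in its original state
    return output

print(reversible_run(12345, 1000))
```

The price is paid in memory: the history grows with the number of steps, which is the time/space tradeoff behind the gate-count scalings discussed next.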
More effective reversible implementation methods are also known, although the number of elementary gates needed to implement an n-bit irreversible function scales somewhere between n2^n/log n and n2^n <cit.>. Quantum computation is also logically reversible since quantum circuits are based on unitary mappings. While the following analysis will be speaking of bits, the actual computations might occur among qubits and the total reversible computational power would be significantly higher. If advanced civilizations do all their computations as reversible computations, then it would seem unnecessary to gather energy resources (material resources may still be needed to process and store the information). However, irreversible operations must occur when new memory is created and in order to do error correction. In order to create N zeroed bits of memory at least kTln(2)N J have to be expended, besides the work embodied in making the bits themselves. Error correction can be done with arbitrary fidelity thanks to error correcting codes but the actual correction is an irreversible operation. The cost can be deferred by transferring the tainted bit to an ancilla bit but in order to re-use ancillas they have to be reset. Error rates are suppressed by lower temperature and larger/heavier storage. Errors in bit storage occur due to classical thermal noise (with a probability proportional to e^-E_b/kT where E_b is the barrier height) and quantum tunneling (probability approximately e^-2r√(2mE_b)/ħ where m is the mass of the movable entity storing the bit and r is the size of the bit). The minimum potential height compatible with any computation is E_b^min≈ kTln(2) + ħ^2 (ln(2))^2/8mr^2 <cit.>. If a system of size R has mass M divided into N bits, each bit will have size ≈ R/N^1/3. If a fraction of the mass is used to build potential barriers, we can approximate the height of the barrier E_b in each bit as the energy in all the covalent bonds in the barrier. If we assume diamond as a building material, the total bond energy is 1.9 kJ/mol = 1.6 · 10^5 J/kg. Using a supercluster r=50 Mpc, M=10^43 kg, it can be subdivided into 10^61 bits without reaching the limit given the current 3 K background temperature. For the future T_dS temperature (see below) the limit is instead near 10^75 bits (each up to 13 cm across); here the limiting factor is tunneling (the shift occurs around T=10^-8 K). However, this would require bits far lighter than individual atoms or even electrons[In theory neutrinos or very light WIMPs such as axions could fulfill the role but there is no known mechanism for confining them in such a way that they could store information reliably.]. In practice, the ultimate limiting capacity due to matter chunkiness would be on the order of 10^69 bits. However, these “heavy” bits would have 15 orders of magnitude of safety margin relative to the potential height and hence have minuscule tunneling probabilities. So while error correction and repair will have to be done (as noted by Dyson, over sufficiently long timescales (10^65 years) matter is a liquid), the rate can be very low. Quantum computing is also affected by environmental temperature and quantum error correction can only partially compensate for this <cit.>.
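As a small numerical aid (our sketch; the bit mass and size below are illustrative choices, not the paper's exact mass budget), the minimum barrier height above splits into a thermal and a quantum term, and equating the two gives the temperature below which tunneling rather than thermal noise sets the limit:

```python
import math

k, hbar, ln2 = 1.380649e-23, 1.054571817e-34, math.log(2)

def Eb_min(T, m, r):
    """Minimum barrier height: k*T*ln2 (thermal) + hbar^2*ln2^2/(8 m r^2) (tunneling)."""
    return k * T * ln2 + hbar**2 * ln2**2 / (8 * m * r**2)

def T_crossover(m, r):
    """Temperature at which the tunneling term overtakes the thermal term."""
    return hbar**2 * ln2 / (8 * m * r**2 * k)

# e.g. an electron-mass bit about 0.1 mm across crosses over near 1e-8 K
print(f"{T_crossover(9.109e-31, 1e-4):.1e} K")
```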
In theory there might exist topological quantum codes that are stable against finite temperature disturbances <cit.> but again there might be thermal or tunneling events undermining the computing system hardware itself. Hence, even civilizations at the boundary of physical feasibility will have to perform some dissipative computational operations. They cannot wait indefinitely (since slow cosmological processes will erode their infrastructure and data) and will hence eventually run out of energy. §.§ Cooling While it is possible for a civilization to cool down parts of itself to any low temperature, the act of cooling is itself dissipative since it requires doing work against a hot environment. The most efficient cooling merely consists of linking the computation to the coldest heat-bath naturally available. In the future this will be the cosmological background radiation[The alternative might be supermassive black hole horizons at T=ħ c^3/8 π G M k, which in the present era can be far colder than the background radiation <cit.>. Supermassive 10^9 M_ black holes will become warmer than the background radiation in 520 Gyr (assuming constant mass).], which is also conveniently of maximal spatial extent. The mean temperature of the background radiation is redshifted and declines as T(t)=T_0/a(t) where a(t) is the cosmological scale factor <cit.>. Using the long term de Sitter behavior a(t) = e^Ht produces T(t) = T_0 e^-Ht. This means that one unit of energy will be worth an exponentially growing amount of computations if one waits long enough. However, the background radiation is eventually redshifted below the constant de Sitter horizon radiation temperature T_dS = √(Λ/12π^2) = H/2π≈ 2.67· 10^-30 K <cit.>. This occurs at time t=H^-1ln(T_0/T_dS), in about 1.4· 10^12 years. There will be some other radiation fields (at this point there are still active stars) but the main conclusion is that there is a final temperature of the universal heat bath. This also resolves a problem pointed out in <cit.>: in open universes the total number of photons eventually received from the background radiation is finite and all systems decouple thermally from it. In this case this never happens. One consequence is that there will always be a finite cost to irreversible computations: without infinite resources only a finite number of such computations can be done in the future of any civilization. This factor also avoids the “paradox of the indefinitely postponed splurge” that would arise if it were always beneficial to postpone exploitation. Hence, it is rational at some point for aestivating civilizations to start consuming resources. The limiting temperature of T=10^-8 K where error correction stops being temperature-limited and instead becomes quantization-limited might be another point where it is rational to start exploitation (this will occur in around 270 billion years). Had the temperature decline been slower, the limit might have been set by galactic evaporation (10^16 years) or proton decay (at least 10^34 years). §.§ Value of waiting A comparison of current computational resources to late era computational resources hence suggests a potential multiplier of 10^30! Even if only the resources available in a galactic supercluster are exploited, later-era exploitation produces a payoff far greater than any attempt to colonize the rest of the accessible universe and use the resources early.
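A minimal sketch of this cooling timeline (assuming a constant late-time Hubble rate of roughly 68 km/s/Mpc; the exact crossover time depends on this choice) reproduces both the horizon temperature and the order of the waiting time quoted above:

```python
import math

k, hbar = 1.380649e-23, 1.054571817e-34
H  = 2.2e-18                           # s^-1, assumed late-time Hubble rate
T0 = 2.725                             # K, present background temperature

T_dS = hbar * H / (2 * math.pi * k)    # de Sitter horizon temperature
t_cross = math.log(T0 / T_dS) / H      # when T_0 e^{-Ht} drops to T_dS
yr = 3.156e7
print(f"T_dS = {T_dS:.2e} K, crossover after ~{t_cross/yr:.1e} yr")
# -> ~2.7e-30 K and ~1e12 yr, in line with the figures in the text
```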
In fact, the mass-energy of just the Earth itself (5.9· 10^24 kg) would be more than enough to power more computations than could currently be done by burning the present observable universe (6· 10^52 kg)![This might suggest that “stay at home” civilizations abound, content to wait out the future in their local environment since their bounded utility functions can be satisfied eventually. Such civilizations might be common but they are not a solid explanation for the Fermi question since their existence does not preclude impatient (and hence detectable) civilizations. However, see <cit.>. In addition, this strategy is risky since expansive civilizations may be around.] In practice the efficiency will be lower but the multiplier tends to remain astronomical. A spherical blackbody civilization of radius r using energy at a rate P surrounded by a T_dS background will have an equilibrium temperature (neglecting external and internal sources) T = [P/(4 πσ r^2) + T_dS^4]^1/4. For a r=50 Mpc super-cluster sized civilization this means that maintaining a total power of P=1 W would keep it near 2.8· 10^-11 K. At this temperature the civilization could do 3.8· 10^33 irreversible computations per second. The number of bit erasures per second, P/kTln(2), increases as P^3/4. At first this might suggest that it is better to ignore heat and use all resources quickly. However, a civilization with finite energy E will run out of it after time E/P and the total amount of erasures will then be E/kTln(2): this declines as P^-1/4. Slow energy use produces a vastly larger computational output if one is patient. As noted by Gershenfeld, optimal computation needs to make sure all internal states are close to the most probable state of the system, since otherwise there will be extra dissipation <cit.>. Hence there is a good reason to perform operations slowly. Fortunately, time is an abundant resource in the far future. In addition, a civilization whose subjective time is proportional to the computation rate will not internally experience the slowdown. The Margolus-Levitin limit shows that it takes at least time πħ / 2 E to move to a new orthogonal state <cit.>. For E=kT_dS this creates a natural “clock speed” of 3.8· 10^11 years. However, some of the energy will be embodied in computational hardware; even if the hardware is just single electrons the clock speed would be 2.0· 10^-21 s: this particular limit is not a major problem for this style of future computation. A stronger constraint is the information transmission lag across the civilization. For this example the time to send a message across the civilization is 326 million years. Whether this requires a very slow clock time or not depends on the parallelizability of the computations done by the civilization, which in turn depends on its values and internal structure. Civilizations with more parallelizable goals such as local hedonic generation would be able to have more clock cycles per external second than more “serial” civilizations where a global state needs to be fully updated before the next step can be taken. However, external time is of little importance given the paucity of external events. Even if proton decay in 10^34 years puts a final deadline to the computation, this would still correspond to 10^25 clock cycles. Heat emission at very low temperature is also a cause of slowdown. The time to radiate away the entropy of a single bit erasure scales as t_rad=kln(2)/4πσ r^2 T^3.
For a 50 Mpc radius system this is 5.6· 10^-66 T^-3 s: for 10^-8 K the time is on the order of 10^-42 s but at 10^-30 K it takes 10^17 years. If the civilization does have a time limit t_max, then it is rational to use P=E/t_max and the total number of operations will be proportional to t_max^1/4. Time-limited civilizations do have a reason to burn the candle at both ends. Time-limited civilizations still gain overall computational success by waiting until the universe cools down enough so their long-term working temperature T is efficient. At least for long time limits like proton decay a trillion years is a short wait. The reader might wonder whether starting these computations now is rational since the universe is quickly cooling and will soon (compared to the overall lifespan of the civilization) reach convenient temperatures. The computational gain of doing computations at time t is ∝exp(Ht): it increases exponentially until the temperature is dominated by the internal heating rather than the outside temperature. Since most of the integrated value accrues within the last e-folding and the energy used early was used exponentially inefficiently, it is not worth starting early even if the wait is a minuscule fraction of the overall lifespan. A civilization burning through the baryonic mass of a supercluster before proton decay in 10^33 years has a power of 5.7· 10^21 W (similar to a dim red dwarf star) and a temperature of 7.6· 10^-6 K, achieving 10^80 erasures. The most mass-limited version instead runs close to 10^-30 K, has a power of somewhere around 10^-75 W and achieves 10^115 erasures – but each bit erasure, when it happens, causes a 100 quadrillion year hiatus. A more realistic system (given quantization constraints) runs at 10^-8 K and would hence have power 10^10 W (similar to a large present-day power plant), running for 5.7· 10^44 years and achieving 10^93 erasures. § RESOURCES The amount of resources available to advanced civilizations depends on what can ultimately be used. Conservatively, condensed molecular matter such as planets and asteroids is known to be useful for energy, computation, and information storage. Stars represent another form of high density matter that could plausibly be exploited, both as energy sources and material. Degenerate stars (white and black dwarfs, neutron stars) are also potential high density resources. Less conservatively, there are black holes, from which mass-energy can be extracted <cit.> (and which, under some conditions, can act as heat sinks as mentioned above). Beyond this, there are interstellar gas, intergalactic gas, and dark matter halos. For the purposes of this paper we will separate the resources into energy resources that can power computations and matter resources that can be used to store information, process it or (in a pinch) be converted into energy. Mass-energy diffused as starlight or neutrinos, and stars lost from galaxies, are assumed to have become too dilute to be useful. Dark energy appears to be unusable in principle. Dark matter may be useful as an energy resource by annihilation even if it cannot sustain information processing structures. Not all resources in the universe can be reached and exploited. The lightspeed limit forces civilizations to remain within a light-cone and the accelerating expansion further limits how far probes can be sent. Based on the assumptions in <cit.> the amounts of resources that can be reached within a 100 Mpc supercluster or by traveling at 50%, 80% or 99% c are listed in table <ref>.
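As a cross-check of the equilibrium-temperature figures in the previous section, a minimal sketch (standard constants only; the 1 W, 50 Mpc scenario is the one used in the text):

```python
import math

k, sigma = 1.380649e-23, 5.670374419e-8   # Boltzmann, Stefan-Boltzmann
T_dS, Mpc = 2.67e-30, 3.086e22

def T_eq(P, r):
    """Equilibrium temperature of a spherical blackbody civilization."""
    return (P / (4 * math.pi * sigma * r**2) + T_dS**4) ** 0.25

def erasure_rate(P, r):
    """Landauer-limited bit erasures per second at that temperature."""
    return P / (k * T_eq(P, r) * math.log(2))

P, r = 1.0, 50 * Mpc
print(f"T = {T_eq(P, r):.1e} K, rate = {erasure_rate(P, r):.1e} erasures/s")
# -> ~2.8e-11 K and ~3.8e33 erasures per second, matching the text
```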
§.§ Changes over time §.§.§ Stellar fusion Stellar lifetime energy emissions are proportional to mass (with high luminosity stars releasing it faster), E_life = LT = L_ (M/M_) 3.16 · 10^17 J, leading to a lifetime mass loss through energy emission of M_loss= 1.35· 10^27 (M/M_) kg = 6.79 · 10^-4 M_ (M/M_). Lighter stars lose less mass but since the mean stellar mass is 0.7 M_ this is on the order of a typical value. Hence stellar fusion is not a major energy waste if mass can be converted into energy. This is important for the aestivation hypothesis: had stellar fusion been a major source of mass loss it would have been rational to stop it during the aestivation period, or at least to gather up the lost energy using Dyson shells: both very visible activities that can be observationally ruled out (at least as large-scale activities). Fusion processes also produce nuclei that might be more useful for computation without any need for intervention. Long term elemental mass fractions approach 20% hydrogen, 60% helium and 20% other elements over 10^12 year timescales <cit.>. §.§.§ Black hole formation Stellar black holes permanently reduce the amount of baryonic matter available even if their mass-energy is exploitable. At present a mass fraction ≈ 2.5% of the star formation budget is lost this way <cit.>. To prevent this, star formation of masses above 25 M_ needs to be blocked. This could occur by interventions that cause extra fragmentation of clouds condensing into heavy stars. Adding extra dust to the protostellar cloud could induce rapid cooling and hence fragmentation. This would require dust densities on the order of 10^-5ρ_gas and for a stellar formation rate ≈ 1 M_ per year would require seeding galactic clouds with ≈ 10^-5 M_ per year. Average nucleosynthetic yields are ≈ 0.0025, so there would be enough metals produced per year to seed the clouds by about two orders of magnitude. Other ways of inducing premature fragmentation might be to produce radiation bursts (from antimatter charges or directed illumination from Dyson-shell surrounded stars) or pressure waves, while magnetic fields might slow core collapse. The energies involved would be of the order of the cloud potential energy; for a Bok globule this might require (3G/5)(50 M_)^2/(1 ly) = 4.2· 10^36 J, about 10^3 sun-years of luminosity. Given that cloud collapse is a turbulent process it might be possible to prevent massive star formation through relatively energy-efficient chaos control. While these methods might be invisible over long distances their effects ought to be noticeable due to a suppression of starbursts and heavy blue-white stars. §.§.§ Galactic winds While stars can lose significant amounts of mass by outflows, this gas is recycled through the interstellar medium into new stars. However, some of this medium may be lost due to galactic winds and may hence become long-term inaccessible. Galactic winds are likely proportional to the stellar formation rate (M'/SFR around 0.01-10) plus contributions from active galactic nuclei, and may have significantly depleted early galaxies. Starbursts can lose 10^5-10^6 M_ in dwarf galaxies and 10^8-10^10 in ULIRGs, partially by entraining neutral matter <cit.>. However, for large galaxies the actual escape fractions may be low due to dark matter halos keeping the gas bound and dampening the outflow through drag, keeping it below 4% <cit.>.
There can also be ongoing infall due to the halo that more than compensates for the loss. Due to the uncertainty about the wind budget it is not clear whether a civilization prioritizing baryonic matter might want to prevent galactic winds. A few approaches may be possible. One way, as described above, is to prevent too vigorous star formation. Such a civilization would also be interested in keeping the galactic nucleus quiescent, perhaps using the “stellar billiards” method suggested in the next section to manipulate orbits near the central black hole. These methods would likely be hard to detect, except for the reduction in hot stars[Another, somewhat extreme, approach would be to bag the galaxy: at a surface density of 0.77· 10^-6 kg/m^2 a graphene balloon encompassing a Milky Way-sized galaxy (r=20 kpc) would weigh 1.8· 10^6 M_. Given a current carbon mass fraction of 0.0046 this would require mining 0.4· 10^9 M_ stars or gas clouds, or fusing several million solar masses. Since the saved gas mass would be just one order of magnitude larger than the balloon weight, it is likely that this approach is not effective.]. §.§.§ Galaxy collisions Galactic “splashes” and mergers may lose significant amounts of gas, stars and halos to intergalactic space. Can collisions be prevented? Typical galactic peculiar velocities are on the order of 200-300 km/s: in order to avoid a collision the velocity of one or both galaxies needs to be changed by this amount by expelling some mass-energy. The rocket equation hence implies a final mass m_1 = m_0 e^-Δ v/v_e. If v_e ≈ 1000 km/s, such as in the case of deliberate ejection of hypervelocity stars[This can be achieved in a step-wise process. First, Shkadov thrusters <cit.> are built around many stars. While these thrusters are unable to change stellar velocities significantly over a Gyr, they are enough to engineer close encounters with passing stars within a few million years. This allows a bootstrap process where gravity assists can bring velocity changes of stars up to the average stellar velocities of 100 km/s as well as allow course corrections. Stars are then sent on slingshot orbits past the central black hole, ejecting them and accelerating the galaxy. This scheme mainly converts stellar kinetic and potential energy into thrust.], the mass loss will be 10%. For an optimal exhaust velocity of c (using some form of mass-energy conversion) the loss is around 3.5%. While this is a small fraction of the entire galactic mass, most of it (>95%) is in the form of a dark matter halo: if only baryonic matter can be used to power the evasive maneuver practically all of it would be lost in the process. Hence, if a civilization seeks to retain significant amounts of baryonic matter it is not rational in general to prevent galactic collisions. Civilizations that instead care about overall mass-energy may be more motivated to prevent splashes of halo material by sacrificing some mass-energy in a very visible maneuver. If the maneuver takes one Gyr and uses mass-energy conversion, the average luminosity will be 3.5%· 10^12 M_ c^2 /( 10^9 yr) = 2· 10^41 W = 5· 10^14 L_. Streams of 10^10 hypervelocity stars would also likely be very noticeable. §.§.§ Expansion Given the current ΛCDM model of the universe, the expansion rate is increasing and approaching a de Sitter expansion. This leads to the eventual separation of all gravitationally bound systems from each other by insurmountable distances, leaving each an “island universe” within their cosmological horizon.
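Before quantifying the expansion's reach, a quick parametric sketch of the rocket bookkeeping used in the collision-avoidance estimate above (the Δv values below are illustrative choices; the percentages quoted in the text correspond to its own parameter assumptions):

```python
import math
c = 3.0e8  # m/s

def remaining_fraction(dv, v_e=None):
    """m1/m0 after a velocity change dv; v_e=None means photon exhaust (I_sp = c)."""
    if v_e is None:
        return math.exp(-math.atanh(dv / c))   # relativistic form used below
    return math.exp(-dv / v_e)                 # classical rocket equation

for dv in (1.0e5, 2.5e5):                      # 100 and 250 km/s trial values
    print(f"dv = {dv/1e3:.0f} km/s: "
          f"loss(v_e = 1000 km/s) = {1 - remaining_fraction(dv, 1e6):.1%}, "
          f"loss(photon) = {1 - remaining_fraction(dv):.2e}")
```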
The local group of galaxies will likely be separated from the Virgo supercluster and within 100 Gyr be entirely separate <cit.>. Busha et al. find a criterion for structures remaining bound, M_obj / 10^12 M_ > 3 h_70^2 ( r_0 / 1 Mpc)^3 where h_70=H_0/70 km/s/Mpc. The paper also gives an estimate of the isolation time, about 120 Gyr for typical clusters <cit.>. This expansion dynamics is a key constraint on the ambitions of far-future civilizations. Most matter within not just the observable but the colonizable (at least given assumptions as in <cit.>) universe will be lost. While no doubt advanced civilizations might wish to affect the expansion rate it seems unlikely that such universal parameters are changeable[A civilization will by necessity be spatially local, so any change in Λ or dark energy parameters would have to be local; besides the inherent problem of how it could be effected, any change would also likely merely propagate at light-speed and hence will not reach indefinitely far. Worse, even a minor change or gradient in the 68.3% of the total mass-energy represented by dark energy would correspond to massive amounts of normal energy: the destructive effects may be as dramatic as vacuum decay scenarios.]. Depending on the utility function of the civilization, this either implies that there is little extra utility after colonizing the largest achievable bound cluster (utilities dependent on causal connectedness), or that future parts of the civilization will be disconnected from each other (utilities not valuing total connectedness). Civilizations desiring guaranteed separation – for example, to prevent competitors from invading – would also value the exponentially growing moats. Is it possible to gather more mass into gravitationally bound systems? In order to move mass, whether a rocket or a galaxy, energy or reaction mass needs to be ejected at high velocity to induce motion[In principle gravitational wave propulsion or spacetime swimming <cit.> are possible alternatives but appear unlikely to be useful in this case.]. This is governed by the relativistic rocket equation Δ v = c tanh( (I_sp/c) ln(m_0/m_1)) (we ignore the need for slowing down at arrival). The best possible specific impulse is I_sp=c. The remaining mass arriving at the destination will then be m_1 = m_0 exp(-tanh^-1(Δ v/c)). In order to overcome the Hubble flow Δ v > H_0 r (we here ignore that the acceleration of the expansion will require higher velocities). Putting it together, we get the bound m(r)< 4 πρ r^2 exp(-tanh^-1(H_0 r/c)) for a concentric shell of radius r. Setting k=H_0/c, this can be integrated from r_0 (the border of the supercluster) to 1/k (the point where it is just barely possible to send back matter at lightspeed): M_collect=(2 πρ/3k^3)[ √(1-k^2x^2)(2k^2x^2-3kx+4)+3sin^-1(kx)]_r_0^1/k If we use r_0=50 Mpc (typical supercluster size) we get M_collect=3.8· 10^78ρ kg, where ρ is the collectable mass density. For ρ=2.3· 10^-27 kg/m^3 this is 8.9· 10^51 kg. The ratio to mass inside the supercluster (here densities are assumed to be 20 times larger inside) is M_collect/M_cluster = 12,427. The collected mass is 35% of the entire mass inside the reachable volume; the rest is used up. Another approach would be to convert mass into radiant energy locally, beaming half of it (due to momentum conservation) inwards to a receiver such as a black hole from which it could be extracted later. The main losses would be due to redshift during transmission.
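The closed-form result above is easy to verify numerically; the following sketch (assuming H_0 ≈ 68 km/s/Mpc) reproduces both the coefficient of ρ and the ratio to the cluster mass:

```python
import math
from scipy.integrate import quad

H0, c = 2.2e-18, 3.0e8            # s^-1, m/s
k  = H0 / c                       # m^-1
r0 = 50 * 3.086e22                # 50 Mpc in m

# shell mass per unit density: 4 pi r^2 exp(-atanh(k r)),
# using exp(-atanh(x)) = sqrt((1 - x)/(1 + x))
f = lambda r: 4 * math.pi * r**2 * math.sqrt(max(0.0, 1 - k*r) / (1 + k*r))
M_per_rho, _ = quad(f, r0, 1 / k, limit=200)
print(f"M_collect ~ {M_per_rho:.2e} * rho kg")          # ~3.8e78
M_cluster_per_rho = (4/3) * math.pi * r0**3 * 20        # 20x denser interior
print(f"ratio ~ {M_per_rho / M_cluster_per_rho:.0f}")   # ~1.2e4
```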
However, this ignores the problem that, in order to reach remote locations and send matter back home, the civilization needs to travel there. If colonization occurs at lightspeed the colonies will have to deal with an expansion factor e^{H_0 r/c} larger, producing the far tighter bound m(r) < 4πρ r^2 exp(-tanh^-1((H_0 r/c) e^{H_0 r/c})). Integrating numerically from r_0 to the outer limit W(1)/k ≈ 2.52 Gpc (where W is Lambert's W function) produces a more modest M_collect=8.32· 10^77 ρ and M_collect/M_cluster = 2,705. Of the total reachable mass, only 7.7% remains. This calculation assumes all mass can be used; if diffuse gas and dark matter cannot be used to power the move, not only does the total mass yield go down by two orders of magnitude but there will also be significant energy losses in climbing out of cluster potential wells. Nevertheless, it still looks possible to increase the long-term available mass of superclusters by a few orders of magnitude. If it is being done it would be very visible, since at least the acceleration phases would convert a fraction of entire galaxies' mass-energy into radiation. A radial process would also send this radiation in all outward directions, and backscatter would likely be very noticeable even from the interior.§.§.§ Galactic evaporation Over long periods stars in galaxies scatter off each other during close encounters, causing a large fraction to be ejected and the rest to be swallowed by the central supermassive black hole. This occurs on a timescale of 10^19 years <cit.>. However, due to the thermodynamic considerations above, it becomes rational to exploit the universe long before galactic evaporation becomes a problem. The same applies to the other forms of long-term deterioration of the universe, such as proton decay, black hole decay, quantum liquefaction of matter, etc. <cit.> § INTERACTIONS WITH OTHER CIVILIZATIONS The aestivation hypothesis at first appears to suffer from the same cultural convergence assumption as many other answers to the Fermi question: they assume all sufficiently advanced civilizations – and members of these civilizations – will behave in the same way. While some convergence on instrumental goals is likely, convergence strong enough to ensure an answer to the Fermi question appears implausible, since it only takes one unusual civilization (or group within it) anywhere to break the explanation. Even if it is rational for every intelligent being to do something, this does not guarantee that all intelligent beings are rational. However, cultural convergence can be enforced. Civilizations could coordinate as a whole to prevent certain behaviors of their own constituents, or of younger civilizations within their sphere of influence. While current humanity shows the tremendous problems inherent in achieving global coordination, coordination may be important for surviving the emergence of powerful technologies (acting as a filter, leaving mainly coordinated civilizations on the technologically mature side). Even if coordination leading to enforcement of some Fermi-question-explaining behavior is not guaranteed, we could happen to live inside the domain of a large and old civilization that happens to enforce it (even if there are defectors elsewhere). If such civilizations are very large (as suggested by our intergalactic colonization argument <cit.>) this would look to us like global convergence. The aestivation hypothesis strengthens this argument by implying a need for protecting resources during the aestivation period.
If a civilization merely aestivates it may find its grip on its domain supplanted by latecomers. Leaving autonomous systems to monitor the domain and prevent activities that decrease its value would be the rational choice. They would ensure that parts of the originating civilization do not start early, but also that invaders or young civilizations are stopped from value-decreasing activities. One plausible example would be banning the launch of self-replicating probes to do large-scale colonization. In this scenario cultural convergence is enforced along some dimensions. It might be objected that devices left behind cannot survive the eons required for restarting the main civilization. While the depths of space are a stable environment that might be benign for devices constructed to be functional there, there are always some micrometeors, cosmic rays or other mishaps. However, having redundant copies greatly reduces the chance of all being destroyed simultaneously. It is possible to show that by slowly adding backup capacity (at a logarithmic rate) a system can ensure a finite probability of enduring for infinite time[In practice physics places a number of limitations on the durability, such as proton decay or quantum tunneling, but these limitations are largely outside of the timescales considered in this paper.] <cit.>. The infrastructure left behind could hence be both extremely long-lasting and require a minuscule footprint, even if it is imperfect. One can make the argument that defenders are likely to win, since the amount of materiel they have at home can easily dwarf what can be moved into place, long-range transport being expensive. However, an interior point in an aestivator domain can be targeted with resources rising quadratically with time as the message goes out. Which effect wins out depends on the relative scaling of the costs and resources[For example, using the Lanchester square law model of warfare <cit.> for a spherical defender domain (of value ∝ r^3) and quadratically arriving attackers, it will resist for time t ∝α^{1/2} r^{2/3}η^{-1/2}, where α is the defender firepower and η is the resource efficiency of transporting attack materiel. The cost to the attacker will scale as t^3 ∝α^{3/2} r^2 η^{-3/2}. For sufficiently large α or low η it may hence be rational to overlook small emergent civilizations – or ensure that they do not appear in the first place.]. One interesting observation is that if we are inside an aestivating civilization, then other aestivators are also likely: the probability of intelligence arising per spacetime hypervolume is high enough that the large civilization will likely encounter external mature civilizations and hence needs to have a strategy against them. Two mature large-scale civilizations encountering each other will be essentially spherical, expanding at the same rate (set by convergence towards the limits set by physics). At the point of contact the amount of resources available will be the intersection of an expanding communications sphere and the overall colonization sphere: for large civilizations this means that they will have nearly identical resources. Given the maturity assumption they would hence be evenly matched[The smaller civilization would have a slight disadvantage due to the greater curvature of its surface, but if interaction is settled early this might not come into play.].
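As a toy illustration of the footnoted Lanchester scaling (the proportionality constants and inputs below are arbitrary placeholders, not values from the text):

```python
def defence_time(alpha, r, eta, c1=1.0):
    # Footnote scaling: t ~ alpha^(1/2) * r^(2/3) * eta^(-1/2)
    return c1 * alpha**0.5 * r**(2 / 3) * eta**-0.5

def attacker_cost(alpha, r, eta, c2=1.0):
    # Cost ~ t^3 ~ alpha^(3/2) * r^2 * eta^(-3/2)
    return c2 * alpha**1.5 * r**2 * eta**-1.5

# Doubling defender firepower, or halving transport efficiency, each multiply
# the attacker's cost by 2^(3/2) ~ 2.8 -- small defender advantages compound.
print(attacker_cost(2, 1, 1) / attacker_cost(1, 1, 1))    # ~2.83
print(attacker_cost(1, 1, 0.5) / attacker_cost(1, 1, 1))  # ~2.83
```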
They have several choices: battle each other for resources, maintain a tangential hyperboloid boundary, or, if their utilities are compatible, join. Given the intention of using resources later, expending them in the present is only rational if it prevents larger losses in the long run. If both civilizations are only interested in resources per se, it may be possible to make any invasion irrational by scorched-earth tactics: the enemy will not gain anything from invading and will merely expend its resources. This is no guarantee that all possible civilizations will be peaceful vis-à-vis each other, since there might be other value considerations (e.g. a negative utilitarian civilization encountering one planning to maintain a large amount of suffering: the first civilization would derive positive utility from reducing the resources of the second as much as possible, even at the expense of future computation). However, given the accelerating expansion, maintaining borders could in principle be left to physics.§ DISCUSSION This paper has shown that very big civilizations can have a small footprint by relocating most of their activity to the future. Of the 6 assumptions of the aestivation hypothesis, 1 (early civilizations) and 2 (broad expansion) are already likely (and if not true, provide alternative answers to the Fermi question). The fifth assumption, that many civilizations wish to aestivate, is supported by the vast increases in computational ability that waiting makes possible. The third assumption, that coordination problems can be resolved, remains hard to judge. It should be noted that planning for aestivation is something that is only rational for a civilization once it has solved urgent survival issues. A species that has not reduced its self-generated existential risk enough has good reason to discount the far future due to uncertainty. Conversely, the enormous potential value of a post-aestivation future makes the astronomical waste argument <cit.> stronger: given the higher stakes, the importance of early risk mitigation – and hence coordination – increases. The fourth assumption is at present also hard to judge. It seems likely that the technology allowing long-range expansion (automation, interplanetary manufacturing, self-replication and long-lived autonomous devices) would enable maintaining arbitrarily large local stockpiles of equipment to fend off incursions. Assumption six, the invisibility of aestivation, may at first appear hard to test – nearly any number of advanced civilizations with nearly no energy usage could easily hide somewhere in the galactic halo <cit.>. However, in order for aestivation to make sense, the amount of resources lost during the wait must be small enough to be dwarfed by the resource costs of efforts to prevent the losses. Are there reasons to perform visible megascale engineering to preserve resources? This depends primarily on whether baryonic matter (“construction material”) or mass-energy is regarded as the limiting factor for future computation. If baryonic matter is the key factor, considerations of stellar activity, galactic wind or galactic collision mass loss do not seem to imply much utility in megascale engineering to preserve matter. However, if dark matter halos represent significant value (the mass-energy case), reduction of collision loss would be rational, likely doable and very visible.
The lack of such galactic engineering hence puts a limit on the existence of such energy-concerned aestivating civilizations in the surveyed universe. Engineering large-scale galactic movement to prevent separation of superclusters would also be highly visible, and the lack of such activity implies that there are no civilizations seeking to optimize long-term causally connected mass-energy concentrations. Together, these considerations suggest that if there are aestivating civilizations in our vicinity, they are either local (no interest outside their own galaxies or supercluster, with utility functions that place little or no value on extra matter or energy) or they have utility functions that may drive universal expansion but hold little interest in causal connectedness. From a Dysonian SETI perspective, the aestivation hypothesis makes a very clear prediction: look for inhibition of processes that permanently lose matter to inter-cluster or inter-galactic space, or look for gravitationally bound structures more massive than what standard ΛCDM cosmology predicts for a given lengthscale.§.§ Cosmology/physics assumptions What if we are wrong about the model of the universe used in this paper? A few “nearby” models have clear implications. In Big Rip scenarios, where the scale factor becomes infinite at some point due to phantom energy, it is rational to use up available energy before this point. For plausible values of w this is likely far in the future, and hence aestivation still makes sense. If there is no horizon radiation, then it is rational to delay until proton or black hole decay, or until the heating due to the civilization becomes on par with that of the universe. Again aestivation makes sense. More fundamentally there is the uncertainty inherent in analysing extreme future scenarios or future technology: even when we base the arguments on well-understood and well-tested physics, there might exist unexpected ways of circumventing them. Unfortunately there is little that can be done about this uncertainty. The aestivation hypothesis assumes that the main driver of advanced civilizations is computation whose costs are temperature dependent. More philosophically, what if there are other forms of value that can be generated? Turning energy straight into value without computation would break the temperature dependency, and hence the scenario. This suggests an interesting line of investigation: what is the physics of value? Until recently the idea that information was physical (or indeed, a measurable thing) was exotic, but currently we are seeing a renaissance of investigations into the connections between computation and physics. The idea that there are bounds set by physics on how much information can be stored and processed by one kilogram of matter is no longer strange. Could there exist similar bounds on how much value one kilogram of matter could embody?§.§ Anthropics Does the aestivation hypothesis have any anthropic implications? The main consequence of the physical eschatology considerations in this paper is that future computation could vastly outweigh current computation, and we should hence expect most observers to exist in the far future rather than in the early stelliferous era. The Self-Indication Assumption (SIA) states that we should reason as if we were randomly selected from the set of all possible observers <cit.>. This is normally assumed to support the view that we are in a world with many observers.
We should expect aliens to exist (since a universe with humans and aliens has more observers, especially if the aliens become a very large post-aestivation civilization). The aestivation hypothesis suggests that initially sparse worlds may have far more observers than worlds that have much activity going on early (and then run out of resources), so the SIA suggests we should believe in the hypothesis. The competing Self-Sampling Assumption (SSA) states that we should reason as if we were randomly selected from the actually existent observers (past, present, future). This gives a more pessimistic outcome, where the doomsday argument suggests that we may not survive. However, both the SIA and SSA may support the view that we are more likely to be a history simulation <cit.> running in the post-aestivation era (if that era is possible) than the sole early ancestor population.§.§ Final words The aestivation hypothesis came about as a result of physical eschatology considerations of what the best possible outcome for a civilization would be, not directly from an urge to solve the Fermi question. However, it does seem to provide one new possible answer to the question: That is not dead which can eternal lie. / And with strange aeons even death may die. – H.P. Lovecraft §.§ Acknowledgments We wish to acknowledge Nick Beckstead, Daniel Dewey, Eric Drexler, Carl Frey, Vincent Müller, Toby Ord, Andrew Snyder-Beattie, Cecilia Tilli, Owen Cotton-Barratt, Robin Hanson and Carl Shulman for helpful and stimulating discussion. | http://arxiv.org/abs/1705.03394v1 | {
"authors": [
"Anders Sandberg",
"Stuart Armstrong",
"Milan M. Cirkovic"
],
"categories": [
"physics.pop-ph"
],
"primary_category": "physics.pop-ph",
"published": "20170427154100",
"title": "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox"
} |
Three-dimensional structure of the magnetic field in the disk of the Milky Way A. Ordog^1, J.C. Brown^1, R. Kothes^2, T.L. Landecker^2 =============================================================================================== In this paper, we consider pattern avoidance in a subset of words on {1,1,2,2,…,n,n} called reverse double lists. In particular, a reverse double list is a word formed by concatenating a permutation with its reversal. We enumerate reverse double lists avoiding any permutation pattern of length at most 4 and completely determine the corresponding Wilf classes. For permutation patterns ρ of length 5 or more, we characterize when the number of ρ-avoiding reverse double lists on n letters has polynomial growth. We also determine the number of 1⋯ k-avoiders of maximum length for any positive integer k.§ INTRODUCTION Let 𝒮_n be the set of all permutations on [n]={1,2,…, n}. Given π∈𝒮_n and ρ∈𝒮_k we say that π contains ρ as a pattern if there exists 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ n such that π_i_a≤π_i_b if and only if ρ_a ≤ρ_b. In this case we say that π_i_1⋯π_i_k is order-isomorphic to ρ, and that π_i_1⋯π_i_k is an occurrence of ρ in π. If π does not contain ρ, then we say that π avoids ρ. Of particular interest are the sets 𝒮_n(ρ)={π∈𝒮_n |π avoids ρ}. Let s_n(ρ)=|𝒮_n(ρ)|. It is well known that s_n(ρ)=\binom{2n}{n}/(n+1) for ρ∈𝒮_3; see <cit.>. For ρ∈𝒮_4, 3 different sequences are possible for {s_n(ρ)}_n ≥ 1. Two of these sequences are well-understood, but an exact formula for s_n(1324) remains open; see <cit.>. Pattern avoidance has been studied for a number of combinatorial objects other than permutations. The definition above extends naturally for patterns in words (i.e. permutations of multisets) and there have been several algorithmic approaches to determining the number of words avoiding various patterns; see <cit.>. In another direction, a permutation may be viewed as a bijection on [n]. When we graph the points (i,π_i) in the Cartesian plane, all points lie in the square [0,n+1] × [0,n+1], and thus we may apply various symmetries of the square to obtain involutions on the set 𝒮_n. For π∈𝒮_n, let π^r = π_n ⋯π_1 and let π^c = (n+1-π_1)⋯ (n+1-π_n), the reverse and complement of π respectively. For example, the graphs of π=1342, π^r=2431, and π^c = 4213 are shown in Figure <ref>. Pattern-avoidance in centrosymmetric permutations, i.e. permutations π such that π^rc=π, has been studied by <cit.>, by <cit.>, and by <cit.>. <cit.> generalized this idea to pattern avoidance in centrosymmetric words. More recently, <cit.> defined the set of double lists on n letters to be 𝒟_n = {ππ|π∈𝒮_n}. In other words, a double list is a permutation of [n] concatenated with itself. We see immediately that |𝒟_n|=n!. Cratty et al. completely characterized the members of 𝒟_n that avoid a given permutation pattern of length at most 4. In all of these cases, knowing the first half of a permutation or word determines the second half. In this paper we consider a type of word that exhibits a different symmetry. In particular, let ℛ_n = {ππ^r |π∈𝒮_n}.
For example, ℛ_3 = {123321, 132231, 213312, 231132, 312213, 321123}. We call ℛ_n the set of reverse double lists on n letters. Consider ℛ_n(ρ)= {σ∈ℛ_n |σ avoids ρ}, and let r_n(ρ)=|ℛ_n(ρ)|. We obtain a number of interesting enumeration sequences for {r_n(ρ)}_n ≥ 1 with connections to other combinatorial objects. In Section <ref> we consider r_n(12⋯ k) for any positive integer k. We give an analogue of the Erdős–Szekeres Theorem for reverse double lists; in other words, we show that r_n(12⋯ k)=0 for sufficiently large n. We also enumerate the number of 12⋯ k-avoiders of maximum length. In Section <ref> we completely determine r_n(ρ) for ρ∈𝒮_3 ∪𝒮_4. In Section <ref>, we give data that describes all Wilf classes for avoiding a pattern ρ∈𝒮_5; we also classify the enumeration generating functions for many of these Wilf classes. More generally, we characterize when r_n(ρ) has polynomial growth for a pattern ρ of arbitrary length.§ AVOIDING MONOTONE PATTERNS In this section, we show that r_n(12⋯ k)=0 for sufficiently large n. Theorem <ref> gives a sharp bound on when r_n(12⋯ k)=0, while Theorem <ref> enumerates the number of 12⋯ k-avoiders of maximum length. r_n(12⋯ k)=0 for n ≥\binom{k}{2}+1. Consider σ =ππ^r ∈ℛ_n. Following Seidenberg's proof of the Erdős–Szekeres Theorem in <cit.>, for 1 ≤ i ≤ n, let a_i be the length of the longest increasing subsequence of π ending in π_i and let b_i be the length of the longest decreasing subsequence of π ending in π_i. By definition, 1 ≤ a_i, b_i ≤ n. Further, if i ≠ j, then (a_i, b_i) ≠ (a_j, b_j), since if π_i<π_j then a_i < a_j and if π_i > π_j then b_i < b_j. Finally, for all i, the increasing subsequence of length a_i in π ending at π_i, followed by the reversal in π^r of the decreasing subsequence of length b_i in π ending at π_i (omitting the digit π_i itself), forms an increasing subsequence of length a_i+b_i-1 in σ. If σ∈ℛ_n(12⋯ k), it must be the case that a_i+b_i-1 < k for all i. There are \binom{k}{2} distinct pairs of positive integers where a_i+b_i-1 < k, so we have that r_n(12⋯ k) = 0 for n ≥\binom{k}{2}+1. In fact, this bound is sharp. Let J_ℓ be the decreasing permutation of length ℓ. Also, let α⊕β denote the direct sum of permutations α=α_1 ⋯α_n and β=β_1⋯β_m, i.e. (α⊕β)_i = α_i for 1 ≤ i ≤ n, and (α⊕β)_i = n+β_{i-n} for n+1 ≤ i ≤ n+m. Then π=J_k-1⊕ J_k-2⊕⋯⊕ J_2 ⊕ J_1 is a permutation of length \binom{k}{2} such that ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k). Now that we have a sharp bound on when r_n(12⋯ k)=0, a natural question is: how many maximal 12⋯ k-avoiders are there? This question is most easily answered using the Robinson–Schensted correspondence between permutations and pairs of standard Young tableaux of the same shape. Recall that the Robinson–Schensted correspondence maps a ↦ (a, 1), i.e., a one-letter permutation a maps to the pair of single-cell tableaux containing a and 1 respectively.
Now, given the pair of tableaux (P,Q) for π_1⋯π_n-1, we insert π_n in the following way: (a) If π_n is larger than all entries of row 1 of P, then append π_n to the end of row 1 of P and append n to the end of row 1 of Q. (b) Otherwise, find the first entry i of row 1 of P that is larger than π_n, replace this entry with π_n, and repeat steps (a) and (b) by trying to insert i into row 2, bumping if necessary. When we finally have an entry that is added to the end of a row, insert a box in the corresponding place in Q with entry n. In general, we write (P(π),Q(π)) for the pair of tableaux corresponding to π. For example, the steps of this bumping algorithm for the permutation 452316 are shown in Figure <ref>. In this correspondence, the number of rows of P(π) gives the length of the longest decreasing subsequence of π, and the number of columns of P(π) gives the length of the longest increasing subsequence of π. If σ=ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k), we expect both the longest increasing subsequence and the longest decreasing subsequence of π to have length less than k. But more can be said. Consider the following result of Greene: Let π be a permutation, let P(π) have ℓ rows, and let λ_i denote the length of the ith row of P(π). Then for all 1 ≤ i ≤ℓ, the maximum size of the union of i increasing subsequences in π is equal to λ_1+λ_2+⋯ + λ_i. We are ready to state a characterization of P(π) if ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k). r_{\binom{k}{2}}(12⋯ k) is equal to the number of pairs of standard Young tableaux (P,Q) of shape (k-1,k-2,…,1) where P has increasing diagonals (i.e. the entry in row r, column c is less than the entry in row r-1, column c+1 for 2 ≤ r ≤ k-1 and 1 ≤ c ≤ k-2). First, we show that the (k-1,k-2,…,1) shape of P(π) is necessary for ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k). Suppose π∈𝒮_{\binom{k}{2}}. If P(π) does not have the shape (k-1,k-2,…,1), then ππ^r contains an increasing subsequence of length k. We know that P(π) has at most k-1 rows; otherwise, π contains a decreasing subsequence of length k, and π^r contains an increasing subsequence of length k. Now, if λ_j is the length of row j of P(π), let r be the first row such that λ_r≥ k-r+1. We know from Theorem <ref> that the maximum size of the union of r increasing subsequences in π is equal to λ_1+λ_2+⋯ + λ_r. In fact, we can find disjoint increasing subsequences of lengths λ_1, λ_2, …, λ_r in π. This means that there are at least r distinct elements of π that are the last element in an increasing subsequence of length k-r+1. However, from the proof of Theorem <ref>, we know that there are at most r-1 such elements if ππ^r avoids 12⋯ k. Therefore, P(π) and Q(π) have shape (k-1,k-2,…,1) whenever ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k). Notice that while P(π) having the shape (k-1,k-2,…,1) is necessary for ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k), it is not sufficient. For example, when π=246513, P(π) has the shape (3,2,1), but ππ^r contains a 1234 pattern, realized by the digits 1 and 3 in π together with the digits 5 and 6 in π^r. However, by Lemma <ref>, we may restrict our attention to 𝒮^*_{\binom{k}{2}} = {π∈𝒮_{\binom{k}{2}}| P(π) has shape (k-1,k-2,…, 1)}.
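The bumping procedure just described is short to implement. The following sketch (our own illustrative code, not from the paper) returns both tableaux as lists of rows and reproduces the worked example for 452316:

```python
def rsk(perm):
    """Robinson-Schensted correspondence via row insertion: return (P, Q)."""
    P, Q = [], []
    for step, x in enumerate(perm, start=1):
        row = 0
        while True:
            if row == len(P):                  # fell off the bottom: start a new row
                P.append([x]); Q.append([step]); break
            j = next((i for i, y in enumerate(P[row]) if y > x), None)
            if j is None:                      # x is largest: append to this row
                P[row].append(x); Q[row].append(step); break
            P[row][j], x = x, P[row][j]        # bump: x displaces the first larger entry
            row += 1
    return P, Q

P, Q = rsk([4, 5, 2, 3, 1, 6])
print(P)   # [[1, 3, 6], [2, 5], [4]] -- matches the worked example for 452316
print(Q)   # [[1, 2, 6], [3, 4], [5]]
```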
To prove Theorem <ref>, we must show that for π∈𝒮^*_{\binom{k}{2}}, ππ^r avoids 12⋯ k if and only if P(π) has increasing diagonals. Before we finish the proof of Theorem <ref>, we recall a construction of <cit.> that relates the entries of P(π) to the graph of π. Consider the points (i,π_i) for 1 ≤ i ≤ n. The shadow of point (x_i, y_i), denoted S((x_i,y_i)), is S((x_i,y_i)):={(u,v) ∈ℝ^2 | u ≥ x_i and v ≥ y_i}. In other words, the shadow is the collection of all points above and to the right of the original point. Consider the points of the graph of π that are not in the shadow of any other point. The boundary of the union of their shadows is the first shadow line L_1 of π. To form the second shadow line (and subsequent shadow lines), remove the points on the first shadow line from the graph of π, and repeat. An example, showing the shadow lines of π=452316, is given in Figure <ref>. Viennot showed that the y-coordinates of the rightmost point on each shadow line are the entries in the first row of P(π). Indeed, using the Robinson–Schensted correspondence, we saw in Figure <ref> that P(452316) has first row 1 3 6, second row 2 5, and third row 4, and the y-coordinates of the rightmost points on the shadow lines in Figure <ref> are 1, 3, and 6. The second row of P(π) can be found in a similar way: mark the corners of the shadow lines where there is no point of the original permutation, as shown by the squares in the left side of Figure <ref>. Then, using these corners as the new permutation graph, draw shadow lines again, as shown in the right side of Figure <ref>. The y-coordinates of the rightmost points on each of the new shadow lines are the second row of P(π); in this case the entries are 2 and 5. We can iterate this shadow line process to obtain all rows of P(π). Shadow lines are the main tool in our proof of Theorem <ref> that follows. We proceed with a series of lemmas. Next, we characterize the permutations π for which ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k) in terms of shadow lines. Suppose π∈𝒮_{\binom{k}{2}}. Then ππ^r avoids 12⋯ k if and only if (1) shadow line i of π contains k-i points for all 1 ≤ i ≤ k-1, and (2) the jth point on the ith shadow line of π is the unique point such that the longest increasing subsequence ending in that point has length i and the longest decreasing subsequence ending in that point has length j. Suppose π∈𝒮_{\binom{k}{2}} and ππ^r avoids 12⋯ k. Using the labeling from Theorem <ref> there is a unique point in π with each label (a,b) for 2 ≤ a+b ≤ k. For any permutation π, by construction, each point on shadow line i has a=i. Moreover, the jth point on shadow line i has b ≥ j. Since we know there is exactly one point with each label, it must be the case that the jth point on shadow line i has label (i,j). This labeling also results in exactly k-i points on the ith shadow line, as desired. Suppose, on the other hand, that ππ^r contains 12⋯ k. This means that the labeling from Theorem <ref> results in a point π_ℓ with the label (a,b) where a+b ≥ k+1. By the shadow line construction, this point must be on shadow line a. If π_ℓ is indeed the bth point on shadow line a, then we have violated condition (1) since shadow line a has at least b ≥ k+1-a points. If π_ℓ is not the bth point on shadow line a, then we have violated condition (2). As in Viennot's general case, these shadow lines are a tool to better understand the structure of P(π) in the context of ℛ_{\binom{k}{2}}(12⋯ k). If ππ^r avoids 12⋯ k, then column i of P(π) consists of the entries from the ith shadow line of π.
By Lemma <ref> we know that the jth point on the ith shadow line of π is the unique point such that the longest increasing subsequence ending in that point has length i and the longest decreasing subsequence ending in that point has length j, and that the ith shadow line of π has k-i points for all 1 ≤ i ≤ k-1. We claim that if two points are adjacent points on the same shadow line at one iteration of the shadow line construction, then they must be on the same shadow line at the next iteration. Suppose to the contrary that π_a>π_b are adjacent points on the same shadow line in one iteration, but they are on different shadow lines in the next iteration. Then one of the two cases in Figure <ref> must occur. That is, when we consider the square points at height π_a and π_b, either there is a square on a different shadow line that forms the 1 in a 312 pattern (in which case π_a moves to an earlier shadow line than π_b), or there is a square on a different shadow line that forms the 2 in a 231 pattern (in which case π_b moves to an earlier shadow line than π_a), but not both (in which case π_a and π_b would stay on the same line). In either case, since both π_a and π_b are involved in the next iteration, there must be at least one more point π_c further to the right on their shadow line. In the case where there is a square on a different shadow line that forms the 1 in a 312 pattern (on the left of Figure <ref>), suppose that π_d is the height of the square that plays the role of this 1. Call the next point on π_d's shadow line π_e. Since π_d's square forms a 312 pattern with π_a's and π_b's squares, π_e appears horizontally between π_b and π_c. If π_d is the first point on the shadow line, we have a contradiction since the longest decreasing subsequence ending at π_e should have length 2; however, π_a, π_b, and π_e form a longer decreasing subsequence ending in π_e. If π_d is not the first point on its shadow line, then call the previous point π_f. Since there is no 231 pattern using the squares from π_a and π_b, π_f>π_a. The longest decreasing subsequence ending in π_e should be the one that follows the shadow line containing π_e, ending in π_f π_d π_e. However, taking the same decreasing subsequence and replacing π_d π_e with π_a π_b π_e forms an even longer decreasing subsequence, which is a contradiction. So, this case is impossible if ππ^r avoids 12⋯ k. The case where there is a square on a different shadow line that forms the 2 in a 231 pattern (on the right of Figure <ref>) is similar. Again, suppose π_d is the height of the square that plays the role of this 2, and call the next point on π_d's shadow line π_e. Since π_d's square forms a 231 pattern with π_a's and π_b's squares, π_e appears before π_b. If π_e is the last element on its shadow line we have a contradiction because π_e's shadow line should have one more element than π_a's shadow line, and the longest decreasing subsequence ending in π_e (following its shadow line) should be longer than the longest decreasing subsequence ending in π_c. However, taking π_e's shadow line and replacing π_d π_e with π_d π_b π_c produces a longer decreasing subsequence ending in π_c. If π_e is not the last element on its shadow line, then call the next point on the shadow line π_f. Since there is no 312 pattern using the squares from π_a and π_b, we know that π_f appears to the right of π_c. Again, the longest decreasing subsequence ending in π_f should be the decreasing subsequence formed by following π_f's shadow line. However, following this shadow line and replacing
π_d π_e π_f with π_d π_b π_c π_f forms a longer decreasing subsequence ending in π_f, which is a contradiction. So, this case is also impossible if ππ^r avoids 12⋯ k. In summary, if ππ^r avoids 12⋯ k, then two adjacent elements on the same shadow line in one iteration of the shadow line construction will be adjacent elements on the same shadow line at the next iteration. This means that each row of P(π) takes one element from each of the original shadow lines and the k-i elements of the ith shadow line appear in the ith column of P(π). Note that the converse of Lemma <ref> is false. For example, when π = 645123, P(π) has shape (3,2,1) where the columns of P(π) correspond to the original shadow lines of π. However, ππ^r contains 1234 using the digits 1, 2, and 3 from π along with the digit 4 from π^r. While Lemma <ref> completely characterizes π for which ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k) by giving conditions on shadow lines, we have only given partial conditions on the corresponding tableau P(π). By Lemma <ref>, we know that it is necessary for P(π) to have shape (k-1, k-2,…, 1), and by Lemma <ref> it is necessary that the ith column of P(π) consist of the entries of the ith shadow line of π. As we have seen, neither of these conditions is sufficient for ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k). The missing condition is that given in Theorem <ref>; i.e. the diagonals of P(π) must be increasing. By Lemma <ref> and Lemma <ref> we restrict our attention to π∈𝒮^*_{\binom{k}{2}} such that the ith column of P(π) consists of the entries of the ith shadow line of π. We show that the diagonals of P(π) are increasing if and only if ππ^r avoids 12⋯ k. Suppose that d is the smallest integer for which there is a decrease in a diagonal between some element x in column d and element y in column d+1. Since x>y and x is on an earlier shadow line than y, we know that x appears to the left of y. The entry z in row 1, column d+1 corresponds to the last element on the (d+1)st shadow line, which means the longest decreasing subsequence in π ending in z should have length k-d-1. However, the elements x through the end of column d together with the elements y through z of column d+1 form a decreasing subsequence of length k-d in π that ends in z, which contradicts ππ^r avoiding 12⋯ k. Suppose on the other hand that all diagonals in P(π) are increasing. As in the proof of Theorem <ref>, given a (partial) permutation π^*, let a_k be the length of the longest increasing subsequence of π^* ending in π^*_k and let b_k be the length of the longest decreasing subsequence of π^* ending in π^*_k. The (partial) permutation under consideration will be made clear from context. We claim that for any element π_k, if π_k is the jth element of the ith shadow line of π then (a_k,b_k)=(i, j). Since π_k is on the ith shadow line, a_k=i automatically, and we need only check that the maximal decreasing subsequence ending in this entry has length j. Let π^(f) be the partial permutation formed by the first f shadow lines of π. We claim that π^(f) uses labels (a,b) where 1 ≤ a ≤ f and 2 ≤ a+b ≤ k each exactly once, and proceed by induction on f. Notice that π^(1) consists of only the first shadow line. Its jth digit is at the end of a maximal increasing sequence of length 1 and a maximal decreasing sequence of length j, and π^(1) has length k-1, so it satisfies our claim. Now suppose that the partial permutation π^(f-1) uses labels (a,b) where 1 ≤ a ≤ f-1 and 2 ≤ a+b ≤ k each exactly once. We will show that π^(f) uses labels (a,b) where 1 ≤ a ≤ f and 2 ≤ a+b ≤ k each exactly once. Since the increasing diagonals
property implies that the jth point on shadow line f is larger than the jth point on any previous shadow line, that point cannot be at the end of a longer decreasing sequence in π^(f) than the one it already ends by following shadow line f. However, can including the digits from shadow line f affect the longest decreasing subsequence ending at some point in π^(f-1)? We claim not. Suppose, to the contrary, that there is some point π_k in π^(f) that has the expected label (a_k,b_k)=(i,j) as the jth element of the ith shadow line when we restrict to π^(f-1), but its second coordinate is larger when we restrict to π^(f). This implies that there is some j^* for which the j^*th element of shadow line f appears before the j^*th element of an earlier shadow line e in π^(f), in order to make a longer decreasing subsequence ending in π_k. However, we assumed that column i of P(π) corresponds to shadow line i of π for all i, which means each entry stays in the same column of P(π) as it was originally inserted throughout the Robinson–Schensted bumping algorithm. When there are j^* elements of shadow line f before the j^*th element of shadow line e, one of the elements from column f will necessarily be bumped to an earlier column during the Robinson–Schensted bumping algorithm. So this is impossible. That is, given that π^(f-1) uses labels (a,b) where 1 ≤ a ≤ f-1 and 2 ≤ a+b ≤ k each exactly once, if P(π) satisfies the increasing diagonals property then π^(f) uses labels (a,b) where 1 ≤ a ≤ f and 2 ≤ a+b ≤ k each exactly once. Since this is true for all 1 ≤ f ≤ k-1, when we consider the labels of π = π^(k-1) we see that ππ^r avoids 12⋯ k by Lemma <ref>. We are now in a position to compute r_{\binom{k}{2}}(12⋯ k) since ℛ_{\binom{k}{2}}(12⋯ k) is in bijection with pairs of standard Young tableaux (P,Q) where P and Q both have shape (k-1, k-2, …, 1) and P has increasing diagonals. r_{\binom{k}{2}}(12⋯ k) = (\binom{k}{2}! ∏_{i=1}^{k-1} (i-1)!/(2i-1)!) · (\binom{k}{2}!/∏_{i=1}^{k-1} (2i-1)^{k-i}). Here, the first factor, \binom{k}{2}! ∏_{i=1}^{k-1} (i-1)!/(2i-1)!, is OEIS sequence A003121 in <cit.>, which is the number of standard Young tableaux of shape (k-1, …, 1) with increasing diagonals, while the second factor, \binom{k}{2}!/∏_{i=1}^{k-1} (2i-1)^{k-i}, is the total number of standard Young tableaux of shape (k-1, …, 1), as computed by the hook length formula and described in OEIS sequence A005118.§ AVOIDING PATTERNS OF SMALL LENGTH We have already described reverse double lists avoiding a monotone pattern of arbitrary length. Although r_n(12⋯ k)=0 for sufficiently large n, there are other patterns ρ for which r_n(ρ) exhibits other behavior. In the rest of the paper, we consider reverse double lists avoiding a variety of non-monotone patterns.§.§ Avoiding patterns of length 3 In this subsection, we consider patterns of length 3. First, notice that the graph of a reverse double list σ∈ℛ_n is a set of points on the rectangle [0,2n+1] × [0,n+1]. Using the reverse and complement involutions described in Section <ref>, σ∈ℛ_n(ρ) ⟺σ^r ∈ℛ_n(ρ^r) ⟺σ^c ∈ℛ_n(ρ^c).
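These symmetries, and the small counts used throughout this section, are easy to confirm by brute force. A minimal sketch (assuming the ≤-based containment definition from the introduction; the function names are ours):

```python
from itertools import combinations, permutations

def contains(word, pat):
    """True if word contains pat as a classical pattern."""
    k = len(pat)
    return any(all((word[i[a]] < word[i[b]]) == (pat[a] < pat[b])
                   for a in range(k) for b in range(k))
               for i in combinations(range(len(word)), k))

def r(n, pat):
    """r_n(pat): the number of reverse double lists on n letters avoiding pat."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not contains(p + p[::-1], pat))

print([r(4, pat) for pat in [(1,3,2), (2,1,3), (2,3,1), (3,1,2)]])  # [2, 2, 2, 2]
print([r(n, (1,2,3)) for n in range(1, 6)])                          # [1, 2, 2, 0, 0]
```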
In fact, if σ∈ℛ_n, then σ=σ^r, so ℛ_n(ρ)=ℛ_n(ρ^r). Partition the set of permutation patterns of length k into equivalence classes where ρ∼τ means that r_n(ρ) = r_n(τ) for n ≥ 1. When ρ∼τ, ρ and τ are said to be Wilf equivalent. When this equivalence holds because of one of the symmetries of the rectangle, we say that ρ and τ are trivially Wilf equivalent. Using trivial Wilf equivalence we have that 123 ∼ 321 and 132 ∼ 213 ∼ 231 ∼ 312, so we need only consider 2 patterns in this section: 123 and 132. With pattern-avoiding permutations, avoiding a pattern of length 3 is the first non-trivial enumeration, and for any pattern ρ of length 3, we have that s_n(ρ) is the nth Catalan number. Reverse double lists are more restrictive, so we obtain simpler sequences for r_n(ρ). More strikingly, although s_n(123)=s_n(132) for n ≥ 1, we obtain two distinct sequences in this new context. By Theorem <ref>, r_n(123)=0 for n ≥ 4. On the other hand, there are 132-avoiders of arbitrary length. r_n(132)=r_n(213)=r_n(231)=r_n(312)=2 for n ≥ 2. The n=2 case is straightforward to check. Now, suppose n ≥ 3. We claim ℛ_n(132)={n(n-1)⋯ 312213⋯ (n-1)n, n(n-1)⋯ 321123⋯ (n-1)n}. Assume for contradiction that σ=ππ^r ∈ℛ_n(132) and π_1≠ n. This implies that either π_1=1 or 2≤π_1≤ n-1. If π_1=1, then the digits 1 and 3 in π and 2 in π^r form a 132 pattern. If 2≤π_1≤ n-1, then the digit 1 in π together with n and π_1 in π^r form a 132 pattern. Thus, π_1=n. Further, if σ∈ℛ_n(132), then let σ^' be the reverse double list formed by deleting both copies of n. It must be the case that σ^'∈ℛ_n-1(132), since a copy of 132 in σ^' would also be a copy of 132 in σ. Thus, ℛ_n(132)={nσ^'n | σ^'∈ℛ_n-1(132)}. Since ℛ_2(132)={1221,2112}, the claim follows by induction, and so |ℛ_n(132)|=2 when n≥ 2. At this point, we have completely characterized reverse double lists avoiding a single pattern of length 3. Although we obtained only trivial sequences, the fact that we obtained two distinct Wilf classes when avoiding a pattern of length 3 mirrors results for pattern-avoiding double lists in <cit.>.§.§ Avoiding patterns of length 4 Next, we analyze reverse double lists avoiding a single pattern of length 4. Using the symmetries of the rectangle, we can partition the 24 patterns of length 4 into 7 trivial Wilf classes, as shown in Table <ref>. There is one non-trivial Wilf equivalence for patterns of length 4; namely 1324 ∼ 2143. To contrast: for double lists, there are no non-trivial Wilf equivalences for patterns of length 4. For permutations, we have an additional trivial Wilf equivalence since s_n(ρ) = s_n(ρ^-1) for n ≥ 1, so s_n(1342)=s_n(1423). There are a number of additional non-trivial Wilf equivalences for pattern-avoiding permutations, so that every length 4 pattern is equivalent to one of 1342, 1234, or 1324. For large n, we have that s_n(1342^∙) < s_n(1234^†) < s_n(1324^∘).
In Table <ref> each pattern is marked according to its Wilf equivalence class for permutations; patterns equivalent to 1342 are marked with ∙, those equivalent to 1234 are marked with †, and those equivalent to 1324 are marked with ∘. For permutations, the monotone pattern 1234 is neither the hardest nor the easiest pattern to avoid; for double lists, it is the easiest pattern to avoid, and for reverse double lists it is the hardest pattern to avoid. Other than the trivial equivalences of reverse and complement, Wilf equivalence in the context of reverse double lists appears to be a very different phenomenon than equivalence in the contexts of permutations or double lists. We now consider each of these patterns in turn.§.§.§ The pattern 1234 By Theorem <ref>, r_n(1234)=0 for n ≥ 7.§.§.§ The pattern 1243 r_n(1243)=n^3/3-7n/3+4 for n ≥ 2. We claim that r_n(1243) = n! for n ≤ 2, and r_n(1243) = r_n-1(1243) + (n+1)(n-2) for n ≥ 3. The cases where n ≤ 2 are easy to check by brute force, so we focus on the case where n ≥ 3. Let σ=ππ^r ∈ℛ_n(1243), and let σ^'∈ℛ_n-1(1243) be the reverse double list formed by deleting both copies of n in σ. We consider 4 cases based on the value of π_1. Suppose π_1=n. Then σ = n σ^'n. Since n can only play the role of a 4 in a 1243 pattern, nσ^'n ∈ℛ_n(1243) for any σ^'∈ℛ_n-1(1243). There are r_n-1(1243) reverse double lists on n letters in this case. Suppose π_1=n-1. We claim that π_2=n-2. Assume for contradiction that π_2≠ n-2. This implies π_2=n or 1 ≤π_2 ≤ n-3. If π_2=n, then the digit 1 in π along with the digits n-2, n, and n-1 in π^r form a 1243 pattern. If 1 ≤π_2 ≤ n-3, then the digits π_2 and n-2 in π along with the digits n and n-1 in π^r form a 1243 pattern. Thus, if π_1=n-1, then π_2=n-2. By a similar argument, π_3=n-3, and in general, π_i=n-i for 1 ≤ i ≤ n-2. Finally, π_n-1=1 and π_n=n, or π_n-1=n and π_n=1. Thus there are exactly two 1243-avoiding reverse double lists on n letters in this case. Suppose π_1=1. We claim that π_2=n. Assume for contradiction that π_2≠ n. This implies 2 ≤π_2 ≤ n-2 or π_2=n-1. If 2 ≤π_2 ≤ n-2, then the digits 1, π_2 and n in π along with the digit n-1 in π^r form a 1243 pattern. If π_2=n-1, then the digits 1 and 2 in π along with the digits n and n-1 in π^r form a 1243 pattern. Hence, if π_1=1, then π_2=n. By a similar argument, π_3=n-1, and in general, π_i=n+2-i for 2 ≤ i ≤ n-2. Finally, either π_n-1=2 and π_n=3 or π_n-1=3 and π_n=2. Thus there are exactly two 1243-avoiding reverse double lists on n letters in this case. Finally, suppose 2 ≤π_1 ≤ n-2. Let π_1=a. First, we claim that π_1⋯π_a-1=a(a-1)⋯ 2. If π_1=2, this claim is already true, so suppose π_1>2 but π_2 ≠ a-1. If π_2<a-1, then the digits π_2 and a-1 in π along with the digits n and a in π^r form a 1243 pattern. If π_2>a, then the digit 1 in π along with the digits 2, π_2, and a in π^r form a 1243 pattern. Therefore, π_2=a-1. A similar argument shows that π_j=a-j+1 for 1 ≤ j ≤ a-1. We have shown that π_a-1=2. Now, consider π_a. We claim that π_a ∈{1,n}. Suppose to the contrary that a+1 ≤π_a ≤ n-1. If a+1 ≤π_a ≤ n-2, then the digits 2, π_a, and n in π along with n-1 in π^r form a 1243 pattern. If π_a=n-1, then the digits 2 and a+1 in π along with n and n-1 in π^r form a 1243 pattern. Therefore, either π_a=1 or π_a=n. By a similar argument, there are at most 2 possible values for π_j where a+1≤ j≤ n-2. Either π_j=1 or π_j = max({1,…,n}∖{π_1,…, π_j-1}). Finally, a+2 and a+1 may appear in either order in π.
Hence, the number 1 can take position h where a ≤ h ≤ n, and the remaining positions of π_a ⋯π_n must be filled with either n(n-1)⋯ (a+3)(a+2)(a+1) or n(n-1)⋯ (a+3)(a+1)(a+2). To summarize, if 2 ≤π_1 ≤ n-2, where π_1=a, then we have π_1⋯π_a-1=a⋯ 2. The digit 1 may appear in any of the n-a+1 remaining positions. After the initial a-1 digits and the location of 1 are chosen, there are 2 ways to fill in the rest of π. Either the remaining digits are n(n-1)⋯ (a+3)(a+2)(a+1) or n(n-1)⋯ (a+3)(a+1)(a+2). There are ∑_a=2^n-2 2(n-a+1)=n^2-n-6 1243-avoiding reverse double lists on n letters in this case. Combining all four cases, we see that r_n(1243)=r_n-1(1243) + 2 + 2 + n^2-n-6 = r_n-1(1243)+(n+1)(n-2). The theorem then follows from the facts that (i) the n=2 terms of the theorem statement and the claim agree and (ii) (n^3/3-7n/3+4) - ((n-1)^3/3-7(n-1)/3+4) = (n+1)(n-2).§.§.§ The patterns 1324 and 2143 We now come to two patterns that are Wilf-equivalent for non-trivial reasons. Although there is not a simple symmetry of graphs that demonstrates r_n(1324)=r_n(2143), we show that r_n(1324) and r_n(2143) each satisfy the same recurrence. r_n(1324)=5· 2^n-2-4 for n ≥ 3. The case where n = 3 is easy to check by brute force, so we focus on when n ≥ 4. Let σ=ππ^r ∈ℛ_n(1324). We consider 2 cases. Suppose σ^'∈ℛ_n-1(1324) with σ^'_1=a. We can build σ =ππ^r ∈ℛ_n(1324) by either using σ_1=a or σ_1=a+1, and incrementing all digits of σ^' that are at least σ_1 by 1. It is impossible for σ_1 to participate in a 1324 pattern with σ_2 since they are consecutive integers. It is impossible for σ_1 to participate in a 1324 pattern without σ_2 since any 1324 pattern involving σ_1 would imply the existence of a 1324 pattern using σ_2 in place of σ_1. Therefore, there are 2r_n-1(1324) reverse double lists where σ_2=σ_1 ± 1. Now, suppose that |π_2-π_1|>1. Then if π_1<π_2<n, the digits π_1, π_2, and π_1+1 in π along with the digit n in π^r form a 1324 pattern. If π_1>π_2>1, then the digit 1 in π along with the digits π_1-1, π_2, and π_1 in π^r form a 1324 pattern. So, if |π_2-π_1|>1 it must be the case that either π_1<π_2=n or π_1>π_2=1. If π_1<π_2=n, then π_1=n-2, otherwise π_1 and π_1+2 in π and π_1+1 and n in π^r form a 1324 pattern. Also, the word π_3⋯π_nπ_n ⋯π_3 must avoid 132, or nπ_3⋯π_nπ_n ⋯π_3n will contain a 1324 pattern. We know that there are exactly two 132-avoiding reverse double lists for n ≥ 2, so there are two 1324-avoiding reverse double lists on n letters where π_1<π_2=n. Similarly, by taking the complements of all reverse double lists where π_1<π_2=n we see that there are two reverse double lists on n letters where π_1>π_2=1. In summary, given σ^'∈ℛ_n-1(1324), we can produce exactly two members σ=ππ^r of ℛ_n(1324) where |π_1-π_2|=1. There are 4 additional reverse double lists where |π_1-π_2|=2, so for n ≥ 4, we have r_n(1324)=2r_n-1(1324)+4. From this recurrence it follows that r_n(1324) = 5· 2^n-2-4 for n ≥ 3. Reverse double lists avoiding 2143 follow the same recurrence but for different structural reasons. r_n(2143)=5· 2^n-2-4 for n≥ 3. The case where n = 3 is easy to check by brute force, so we focus on when n ≥ 4. Let σ=ππ^r ∈ℛ_n(2143).
Suppose σ^'∈ℛ_n-1(2143). We can build σ =ππ^r ∈ℛ_n(2143) by either using σ_1=1 and incrementing all digits of σ^', or by using σ_1=n. In both cases it is impossible for σ_1 to participate in a 2143 pattern since σ_1 is either the largest or the smallest digit in σ. Therefore, there are r_n-1(2143) reverse double lists where σ_1=1 and r_n-1(2143) reverse double lists where σ_1=n. Now, suppose 1<π_1<n. Then, every digit larger than π_1 must be in increasing order in π^r, otherwise π_1 and 1 in π together with the decreasing pair of larger digits in π^r form a 2143 pattern. Similarly, every digit smaller than π_1 must appear in increasing order in π, otherwise the decreasing pair of smaller digits in π together with n and π_1 in π^r form a 2143 pattern. If 3 ≤π_1 ≤ n-2, we already have a problem since either 1 appears before n in π or 1 appears before n in π^r. In the first case, π_1 1 n(n-1) is a copy of 2143 in σ, and in the second case, 21nπ_1 is a copy of 2143 in σ. So it must be the case that π_1=2 or π_1=n-1. In the first case, either π_n-1=1 or π_n=1, and all other digits of π are in decreasing order. In the second case, we take the complements of the words where π_1=2 to get the words where π_1=n-1. In summary, given σ^'∈ℛ_n-1(2143), we can produce one member σ=ππ^r of ℛ_n(2143) where π_1=1 and one where π_1=n. There are 4 additional reverse double lists where π_1=2 or π_1=n-1, so for n ≥ 4, we have r_n(2143)=2r_n-1(2143)+4. From this recurrence it follows that r_n(2143) = 5· 2^n-2-4 for n ≥ 3.§.§.§ The pattern 1423 r_n(1423)=2r_n-1(1423)+r_n-3(1423)+2 for n≥ 5. Suppose n ≥ 5, let σ=ππ^r ∈ℛ_n(1423), and let σ^'∈ℛ_n-1(1423) be the reverse double list formed by deleting both copies of n in σ. We consider 5 cases based on the value of π_1. Suppose π_1=n. Since n can only play the role of a 4 in a 1423 pattern, nσ^'n ∈ℛ_n(1423) for any σ^'∈ℛ_n-1(1423). There are r_n-1(1423) reverse double lists on n letters in this case. Suppose π_1=n-1. We claim that π_2=n. Assume for contradiction π_2≠ n. This implies π_2=1 or 2≤π_2≤ n-2. If π_2=1, then the digits 1 and n in π along with the digits 2 and n-1 in π^r form a 1423 pattern. If 2≤π_2 ≤ n-2, then the digit 1 in π along with the digits n, π_2, and n-1 in π^r form a 1423 pattern. Thus, when π_1=n-1, π_2=n. Notice that π_2=n can only play the role of 4 in a 1423 pattern. However, since π_1=n-1, it cannot be part of a 1423 pattern in σ. On the other hand, n-1 can play the role of a 3 or a 4. If the copy of n-1 in π^r serves as a 3 in an occurrence of 1423, then the copy of n in π must serve as the 4; however, there is no number to play the role of the 1 that precedes n. Therefore whenever σ^''∈ℛ_n-2(1423), we have (n-1)nσ^''n(n-1) ∈ℛ_n(1423). There are r_n-2(1423) reverse double lists on n letters in this case. Suppose 3 ≤π_1 ≤ n-2. We claim that π_2=π_1+1. Let π_1=a. Assume for contradiction π_2≠ a+1. This implies π_2=1, 2 ≤π_2≤ a-1, a+2 ≤π_2 ≤ n-1, or π_2=n. If π_2=1, then the digits 1 and n in π along with the digits 2 and a in π^r form a 1423 pattern. If 2 ≤π_2 ≤ a-1, then the digit 1 in π along with the digits n, π_2, and a in π^r form a 1423 pattern. If a+2 ≤π_2 ≤ n-1, then the digits a and n in π along with the digits a+1 and π_2 in π^r form a 1423 pattern.
If π_2=n, then the digits a, n, and a+1 in π along with the digit n-1 in π^r form a 1423 pattern. Thus, when π_1=a, then π_2=a+1. By a similar argument, π_j=a+j-1 for 1≤ j ≤ n-a-1. We have that π_n-a-1=n-2. We claim that {π_n-a,π_n-a+1}={n-1,n}. Suppose to the contrary that π_n-a∉{n-1,n}. If π_n-a<a-1, then the digits π_n-a and n in π along with the digits a-1 and a in π^r form a 1423 pattern. If π_n-a=a-1, then the digit 1 in π along with the digits n, a-1, and a in π^r form a 1423 pattern. So it must be the case that π_n-a∈{n-1,n}. Similarly, π_n-a+1={n-1,n}∖{π_n-a}. Ultimately, either π_n-a=n-1 and π_n-a+1=n, or π_n-a=n and π_n-a+1=n-1. Now, the remaining a-1 positions of π can be filled with σ^* ∈ℛ_a-1(1423). Because n and n-1 can be interchanged and because there exist 2r_a-1(1423) reverse double lists on n letters where π_1=a for 3≤ a≤ n-2, there are ∑_i=2^n-3 2r_i(1423) reverse double lists in this case. Suppose that π_1=2. We claim that either π_2=1 or π_2=3. Assume for contradiction that π_2 ∉{1,3}. This implies π_2=n or 4 ≤π_2≤ n-1. If π_2=n, then the digits 2, n, and 3 in π along with the digit n-1 in π^r form a 1423 pattern. If 4 ≤π_2 ≤ n-1, then the digits 2 and n in π along with the digits 3 and π_2 in π^r form a 1423 pattern. Therefore, if π_1=2, then π_2=1 or π_2=3. By a similar argument, π_j ∈{1, min({2,…, n}∖{π_1, …, π_j-1})} for 2 ≤ j ≤ n-2. Finally, n-1 may either precede or follow n. We have n-1 choices for the location of 1 and 2 choices for the order of n-1 and n in π, so there are 2(n-1) 1423-avoiding reverse double lists on n letters where π_1=2. Suppose that π_1=1. We claim that π_2=2. Assume for contradiction π_2 ≠ 2. Then the digits 1 and n in π along with the digits 2 and π_2 in π^r form a 1423 pattern. By a similar argument, π_i=i for 1 ≤ i ≤ n-2. Finally, π_n-1=n-1 and π_n=n, or π_n-1=n and π_n=n-1, giving two 1423-avoiding reverse double lists on n letters when π_1=1. Combining our 5 cases, we have shown that for n ≥ 5, r_n(1423)=r_n-1(1423)+r_n-2(1423)+∑_i=2^n-32r_i(1423)+2n. We are ready to prove the n ≥ 5 case of the theorem by induction. For the base case, when n=5, r_5(1423)=36, and 2r_4(1423)+r_2(1423)+2=2· 16+2+2=36, as desired. Now, suppose r_k(1423)=2r_k-1(1423)+r_k-3(1423)+2 for some k≥ 5. From Equation <ref>, we have: r_k+1(1423) = r_k(1423)+r_k-1(1423)+2(k+1)+∑_i=2^k-22r_i(1423) = r_k(1423)+r_k-1(1423)+2k+2+∑_i=2^k-32r_i(1423)+2r_k-2(1423) = r_k(1423)+2r_k-2(1423)+r_k-1(1423)+2k+2+∑_i=2^k-32r_i(1423). On the other hand, when n=k, Equation <ref> gives: ∑_i=2^k-32r_i(1423)=r_k(1423)-r_k-1(1423)-r_k-2(1423)-2k. After substituting Equation <ref> into Equation <ref>, we have: r_k+1(1423)=r_k(1423)+2r_k-2(1423)+r_k-1(1423)+2k+2+r_k(1423)-r_k-1(1423)-r_k-2(1423)-2k, which simplifies to r_k+1(1423) = 2r_k(1423)+r_k-2(1423)+2 = 2r_(k+1)-1(1423)+r_(k+1)-3(1423)+2.§.§.§ The pattern 1432 r_n(1432)=2r_n-1(1432)+r_n-2(1432) for n≥ 5. Suppose n ≥ 5, and let σ=ππ^r ∈ℛ_n(1432). We consider 5 cases based on the value of π_1. Suppose π_1=1. Then 2 ≤π_2 ≤ n-2 or π_2∈{n-1,n}. If 2 ≤π_2 ≤ n-2, then the digits 1 and n in π along with the digits n-1 and π_2 in π^r form a 1432 pattern. If π_2=n-1 or π_2=n, then the digits 1, π_2, and n-2 in π along with the digit n-3 in π^r form a 1432 pattern. Thus, π_1≠ 1. Suppose 2 ≤π_1 ≤ n-3. Then π_2=n or 1≤π_2≤ n-1. If π_2=n, then the digits π_1, n, and n-1 in π along with the digit n-2 in π^r form a 1432 pattern. If 1≤π_2≤ n-1, then two cases must be considered. If π_1<π_2, then the digit 1 in π along with the digits n, π_2, and π_1 in π^r form a 1432 pattern.
If π_1>π_2, then the digits π_2 and n in π along with the digits n-1 and π_1 in π^r form a 1432 pattern. Thus, π_1 ≥ n-2. Suppose π_1=n-2. We claim that π_2=n. Assume for contradiction that π_2 ≠ n. This implies π_2=n-1 or 1 ≤π_2 ≤ n-3. If π_2=n-1, then the digit 1 in π along with the digits n, n-1, and n-2 in π^r form a 1432 pattern. If 1 ≤π_2 ≤ n-3, then the digits π_2 and n in π along with the digits n-1 and n-2 in π^r form a 1432 pattern. Thus, if π_1=n-2, then π_2=n. Now, n can only play the role of a 4 in a 1432 pattern in σ, but π_1=n-2 prevents the first copy of n from being in such a pattern, and the fact that it is the penultimate digit of π^r prevents the second copy of n from being in such a pattern. Also, n-2 can only play the role of 2, 3, or 4, so it will not play the role of 1 in the beginning of a 1432 pattern. If the second copy of n-2 serves as a 2, then n-1 in π^r can serve as the 3 and n in π must serve as the 4. However, there is no number to play the role of 1 that precedes n in π. Thus, the remaining positions can be filled in with any member of ℛ_n-2(1432). There are r_n-2(1432) 1432-avoiding reverse double lists on n letters where π_1=n-2. Suppose π_1=n-1. Since n-1 can only play the role of a 3 or a 4 in a 1432 pattern, it cannot play the role of 1 at the beginning or 2 at the end of such a pattern. There are r_n-1(1432) ways to fill in the remaining digits of σ, so there are r_n-1(1432) 1432-avoiding reverse double lists on n letters where π_1=n-1. Suppose π_1=n. The only role n can play in a 1432 pattern is a 4, however n only appears as the first and the last member of σ, so it cannot be involved in a 1432 pattern. There are r_n-1(1432) ways to fill in the remaining digits of σ, so there are r_n-1(1432) 1432-avoiding reverse double lists on n letters where π_1=n. Combining our 5 cases, we have that for n ≥ 5, r_n(1432)=2r_n-1(1432)+r_n-2(1432).§.§.§ The pattern 1342 r_n(1342)= 2r_n-1(1342)+r_n-2(1342)+2 for n≥ 4. Suppose n ≥ 4, and let σ=ππ^r ∈ℛ_n(1342). We consider 5 cases based on the value of π_1. Suppose π_1=1. Then π_2=n. Assume for contradiction that π_2≠ n. This implies that π_2=2 or 3≤π_2≤ n-1. If π_2=2, then the digits 1 and 3 in π along with the digits 4 and 2 in π^r form a 1342 pattern. If 3≤π_2≤ n-1, then the digits 1, π_2, and n in π along with the digit 2 in π^r form a 1342 pattern. Hence, when π_1=1, π_2=n. By a similar argument, π_i=n-i+2 for 2 ≤ i ≤ n-2. Finally, either π_n-1=2 and π_n=3 or π_n-1=3 and π_n=2. Thus, there are two ways to avoid 1342 when π_1=1. Suppose 2≤π_1≤ n-3. Then π_2=1, 2≤π_2≤ n-2, or π_2∈{n-1,n}. If π_2=1, then the digits 1 and n-1 in π along with the digits n and π_1 in π^r form a 1342 pattern. If 2≤π_2≤ n-2, then two cases must be considered. If π_1<π_2, then the digits π_1 and n-1 in π along with the digits n and π_2 in π^r form a 1342 pattern. If π_1>π_2, then the digits π_2 and n-1 in π along with the digits n and π_1 in π^r form a 1342 pattern. If π_2∈{n-1,n}, then the digit 1 in π along with the digits n-2, π_2, and π_1 in π^r form a 1342 pattern. Hence, there are no 1342-avoiding reverse double lists where 2≤π_1≤ n-3. Suppose π_1=n-2. Then π_2=n-1. Assume for contradiction that π_2≠ n-1. This implies that π_2=n or 1≤π_2≤ n-3. If π_2=n, then the digit 1 in π along with the digits n-1, n, and n-2 in π^r form a 1342 pattern. If 1≤π_2≤ n-3, then the digits π_2 and n-1 in π along with the digits n and n-2 in π^r form a 1342 pattern. Thus, if π_1=n-2, then π_2=n-1. Notice that n-1 can only play the role of a 3 or 4 in a 1342 pattern.
Also, n-2 cannot play the role of a 1. If n-2 plays the role of 2, then n in π^r must play the role of 4. Now, the only number to play the role of 3 is n-1 in π. However, there is no digit that can play the role of a 1 that precedes n-1 in π. Thus, the remaining positions can be filled in r_n-2(1342) ways.

Suppose π_1=n-1. Since n-1 can only play the role of a 3 or a 4 in a 1342 pattern, it cannot play the role of 1 at the beginning or 2 at the end of such a pattern. There are r_n-1(1342) ways to fill in the remaining digits of σ, so there are r_n-1(1342) 1342-avoiding reverse double lists on n letters where π_1=n-1.

Suppose π_1=n. The only role n can play in a 1342 pattern is a 4; however, n only appears as the first and the last member of σ, so it cannot be involved in a 1342 pattern. There are r_n-1(1342) ways to fill in the remaining digits of σ, so there are r_n-1(1342) 1342-avoiding reverse double lists on n letters where π_1=n.

Combining all 5 cases, for n ≥ 4, we have shown: r_n(1342)= 2r_n-1(1342) + r_n-2(1342) + 2.

§.§.§ The pattern 2413

r_n(2413)= 2r_n-1(2413)+2r_n-2(2413) for n≥ 3.

The cases where n ≤ 4 are easy to check by brute force, so we focus on n ≥ 5. Let σ=ππ^r ∈ℛ_n(2413), and let σ^'∈ℛ_n-1(2413) be the reverse double list formed by deleting both copies of n in σ. We consider 5 cases based on the value of π_1.

Suppose π_1=1. Since 1 can only play the role of a 1 in a 2413 pattern, adding 1 to the beginning and end of σ^'∈ℛ_n-1(2413) will not create a 2413 pattern. Thus, there are r_n-1(2413) ways to create a 2413-avoiding reverse double list on n letters where π_1=1.

Suppose π_1=2. We claim that π_2=1. Assume for contradiction that π_2≠ 1. This implies that π_2=n or 3≤π_2≤ n-1. If π_2=n, then the digits 2, n, and 1 in π along with the digit 3 in π^r form a 2413 pattern. If 3≤π_2≤ n-1, then the digits 2 and n in π along with the digits 1 and π_2 in π^r form a 2413 pattern. Hence, if π_1=2, then π_2=1. Notice that 2 can only play the role of 1 or 2 in a 2413 pattern, but its location forces 2 to either be the first digit or last digit in a 2413 pattern. If 2 is involved in a 2413 pattern, it plays the role of 2. If it plays the role of 2 then the 1 in π^r must play the role of 1, but there is no digit after 1 that can play the role of 3. Also, 1 can only play the role of 1, but the location of 1 prevents either copy from participating in a 2413 pattern. Now, the remaining positions can be filled in r_n-2(2413) ways to avoid a 2413 pattern.

Suppose 3 ≤π_1 ≤ n-2. Now, π_2=n, 2≤π_2 ≤ n-1, or π_2=1. If π_2=n, the digits π_1, n, and 1 in π along with the digit n-1 in π^r form a 2413 pattern. If 2≤π_2≤ n-1, two cases need to be considered. If π_1>π_2, then the digits π_2 and n in π along with the digits 1 and π_1 in π^r form a 2413 pattern. If π_1<π_2, then the digits π_1 and n in π along with the digits 1 and π_2 in π^r form a 2413 pattern. If π_2=1, the digit 2 in π along with the digits n, 1, and π_1 in π^r form a 2413 pattern. In every case, there are no 2413-avoiding reverse double lists on n letters where 3 ≤π_1 ≤ n-2.

Suppose π_1=n-1. We claim that π_2=n. Assume for contradiction that π_2≠ n. This implies π_2=1 or 2≤π_2≤ n-2. If π_2=1, then the digit 2 in π along with the digits n, 1, and n-1 in π^r form a 2413 pattern. If 2≤π_2≤ n-2, then the digits π_2 and n in π along with the digits 1 and n-1 in π^r form a 2413 pattern. Hence, if π_1=n-1, then π_2=n. Notice that n-1 can only play the role of a 3 or 4 in a 2413 pattern, so it cannot play the role of the 2 at the beginning of a 2413 pattern.
If n-1 plays the role of 3, the number n in π can play the role of 4. However, there does not exist a number before n in π to play the role of 2. Also, n can only play the role of a 4, but the location of n prevents either copy from participating in a 2413 pattern. Now, the remaining positions can be filled in r_n-2(2413) ways to avoid a 2413 pattern.

Suppose π_1=n. The only role n can play in a 2413 pattern is a 4; however, n only appears as the first and the last member of σ, so it cannot be involved in a 2413 pattern. There are r_n-1(2413) ways to fill in the remaining digits of σ, so there are r_n-1(2413) 2413-avoiding reverse double lists on n letters where π_1=n.

Combining our cases, we have shown that for n ≥ 3, r_n(2413)=2r_n-1(2413)+2r_n-2(2413).

§.§.§ Summary of length 4 patterns

We have now completely characterized r_n(ρ) where ρ is a permutation pattern of length at most 4. By exploiting the symmetry inherent in reverse double lists, we found recurrences for r_n(ρ) for each pattern of length 4. The corresponding results are given in Table <ref>. These results provide an interesting contrast to pattern-avoiding permutations and double lists. First, there is exactly one non-trivial Wilf equivalence. Second, the monotone pattern is the hardest pattern to avoid in the context of reverse double lists. Finally, we obtained a variety of behaviors (constant, cubic, and exponential), as compared to permutation-pattern sequences, which only grow exponentially.

Each formula in Table <ref> is straightforward to convert to a generating function via standard techniques. The corresponding generating functions ∑_n=0^∞ r_n(ρ) x^n are given in Table <ref>. Because each generating function is rational, we can find the linear recurrence satisfied by each sequence {r_n(ρ)}_n ≥ 0 and the largest root of the corresponding characteristic equation, which gives the exponential growth rate of the sequence. The table includes the exact growth rate for every sequence except {r_n(1423)}_n ≥ 0. In that case, the largest root of the characteristic equation is 2/3 + 1/3√(1/2(43-3√(177))) + 1/3√(1/2(43+3√(177))) ≈ 2.21.

§ AVOIDING A PATTERN OF LENGTH 5 OR MORE

There are 32 trivial Wilf classes for patterns of length 5. Table <ref> shows the brute force data for {r_n(ρ)}_n=5^7 for one pattern ρ from each trivial Wilf class. From this data it is clear that there are no non-trivial Wilf equivalences for patterns of length 5. There are some interesting observations that arise from the data. Notice that r_7(15243)=r_7(15324) while r_6(15243)≠ r_6(15324). From brute force data, it appears that while r_6(15243)<r_6(15324), we have r_n(15243)>r_n(15324) for n>7. Similarly, r_6(23514)< r_6(13425) and r_7(23514)=r_7(13425), while it appears that r_n(23514)> r_n(13425) for n>7. This behavior is in contrast to enumeration sequences for pattern-avoiding permutations and double lists, where there is no known example where s_N(ρ)<s_N(τ) (resp. d_N(ρ)<d_N(τ)) for some integer N but s_n(ρ)≥ s_n(τ) (resp.
d_n(ρ)≥ d_n(τ)) for n > N.

We can say more about the growth rates of these sequences by relating r_n(ρ) to pattern-avoiding permutations, as described in Theorem <ref>. First, recall that a shuffle of words α_1⋯α_i and β_1⋯β_j is a word w of length i+j where there is a subsequence of w equal to α, and a disjoint subsequence of w equal to β. Now, given a permutation ρ∈𝒮_k, define ρ^↔ to be the set of permutations that are shuffles of ρ_1⋯ρ_i and ρ_k⋯ρ_i+1 for any 1 ≤ i ≤ k. For example, 1234^↔={1234, 1243, 1423, 4123, 1432, 4132, 4312, 4321}. In general, there are \binom{k-1}{i-1} ways to shuffle ρ_1⋯ρ_i with ρ_k⋯ρ_i+1 where ρ_1⋯ρ_i+1 is not a subsequence of the resulting word. Summing over all possible values of i, if ρ∈𝒮_k, then |ρ^↔| = ∑_i=1^k \binom{k-1}{i-1} = 2^{k-1}.

Given ρ∈𝒮_k and n ≥ 0, r_n(ρ) = s_n(ρ^↔).

Suppose σ=ππ^r ∈ℛ_n contains ρ. Then, for some 1 ≤ i ≤ k, ρ_1⋯ρ_i is contained in π while ρ_i+1⋯ρ_k is contained in π^r. If ρ_i+1⋯ρ_k is contained in π^r, then ρ_k⋯ρ_i+1 is contained in π. Further, ρ_1⋯ρ_i and ρ_k⋯ρ_i+1 use disjoint digits of σ, so π contains a shuffle of ρ_1⋯ρ_i and ρ_k⋯ρ_i+1. It follows that σ avoids ρ if and only if π avoids all shuffles of ρ_1⋯ρ_i and ρ_k⋯ρ_i+1 for all 1 ≤ i ≤ k.

As a consequence of Theorem <ref>, when ρ∈𝒮_k, ℛ_n(ρ) is isomorphic to a classical permutation class 𝒮_n(B), where B is a set of 2^{k-1} classical permutations of length k. For example, r_n(123) = s_n(123, 132, 312, 321). Because every set we have enumerated in this paper is a finitely-based classical permutation class, we can use known machinery to determine the asymptotic growth of r_n(ρ) for arbitrary ρ. For example, Vatter showed the following:

Let B be a finite set of patterns. The pattern-avoidance tree T(B) is isomorphic to a finitely labeled generating tree if and only if B contains both a child of an increasing permutation and a child of a decreasing permutation.

Theorem <ref> implies that s_n(B) has a rational generating function if B contains both a permutation that is achieved by inserting one digit into an increasing permutation and another permutation that is achieved by inserting one digit into a decreasing permutation. The patterns ρ for which ρ^↔ fits these criteria are marked with † in Table <ref>.

Rational generating functions can indicate either polynomial or exponential growth, however. In <cit.>, Albert, Atkinson, and Brignall give necessary and sufficient conditions on when s_n(B) exhibits polynomial growth. The direct sum α⊕β of permutations α and β is the permutation formed by concatenating α and β and incrementing all digits of β by |α|. The skew sum α⊖β of permutations α and β is the permutation formed by concatenating α and β and incrementing all digits of α by |β|. Further, let ϵ=(e_1,e_2) be an ordered pair where {e_1,e_2}⊆{-1,1}. The pattern class W(ϵ) is the set of all permutations π=π_1⋯π_n where there exists a value 1 ≤ j ≤ n such that π^(i) is increasing if e_i=1 and π^(i) is decreasing if e_i=-1, where π^(1):=π_1⋯π_j and π^(2):=π_j+1⋯π_n. We have the following characterization from Albert, Atkinson, and Brignall:

s_n(B) has polynomial growth if and only if B contains a member of each of the following 10 sets of permutations: W(1,1), W(1,-1), W(-1,1), W(-1,-1), W(1,1)^-1, W(1,-1)^-1, W(-1,1)^-1, W(-1,-1)^-1, L_2 = {⊕_i=1^j α_i | α_i ∈{1, 21} for all i} and L_2^r.

Applying Theorem <ref> to the case of reverse double lists yields the following:

r_n(ρ) has polynomial growth if and only if ρ is trivially Wilf equivalent to 12⋯ k or to 1⋯ (k-2)k(k-1).

We can check that the theorem holds for ρ of length
at most 9 by brute force methods, so without loss of generality, we assume that k ≥ 10. First, notice that s_n(ρ^↔) meets the criteria in Theorem <ref> when ρ=12⋯ k or ρ=1⋯ (k-2)k(k-1). In the case of ρ=1⋯ k, ρ is a member of the classes W(1,1), W(1,-1), W(-1,1), W(1,1)^-1, W(1,-1)^-1, W(-1,1)^-1, and L_2, while ρ^r is a member of the other 3 classes, and {ρ, ρ^r}⊂ρ^↔. In the case of ρ=1⋯ (k-2)k(k-1), 1⋯ k is still a member of 7 of the 10 classes, ρ^r is a member of the other 3 classes, and {1⋯ k,ρ^r}⊂ρ^↔.

Now, suppose that ρ^↔ has a member of each of the 10 necessary permutation classes. This means there is some integer i such that a shuffle of ρ_1⋯ρ_i together with ρ_k⋯ρ_i+1 is in L_2. By definition of shuffle, the first digit of the shuffle is either ρ_1 or ρ_k. By definition of L_2, the first digit of the shuffle is either 1 or 2. Similarly, there is some integer j such that a shuffle of ρ_1⋯ρ_j together with ρ_k⋯ρ_j+1 is in L_2^r. By definition of shuffle, the first digit of the shuffle is either ρ_1 or ρ_k. By definition of L_2^r, the first digit of the shuffle is either k-1 or k. Combining these observations, either ρ_1 ∈{1,2} and ρ_k ∈{k-1,k} or ρ_1 ∈{k-1,k} and ρ_k ∈{1,2}. We will assume that ρ_1 ∈{1,2} because if ρ_1 ∈{k-1,k}, then ρ_1^c ∈{1,2} and ρ^c is Wilf-equivalent to ρ. Now, since ρ_k ∈{k-1,k} and there is a shuffle of ρ_1⋯ρ_i together with ρ_k⋯ρ_i+1 in L_2, it must be the case that ρ_k is among the last 3 digits of the shuffle. In other words, i ≥ k-3. This implies that ρ_1⋯ρ_k-3 ∈ L_2. Similarly, since ρ_1 ∈{1,2} and there is a shuffle of ρ_1⋯ρ_j together with ρ_k⋯ρ_j+1 in L_2^r, it must be the case that ρ_1 is among the last 3 digits of the shuffle. In other words, j ≤ 3. This implies that ρ_k⋯ρ_4 ∈ L_2^r, and by taking reversal, ρ_4⋯ρ_k ∈ L_2. We assume that k ≥ 10, so ρ_1⋯ρ_k-3 and ρ_4⋯ρ_k overlap by at least 4 digits. It must then be the case that ρ∈ L_2 if ρ^↔ has nontrivial intersection with both L_2 and L_2^r.

Now, we know that ρ∈{⊕_i=1^j α_i | α_i ∈{1, 21}}. Assume α_i=21 for some 1<i<j. In other words, ρ has a layer of size 2 that is not at the beginning or the end of ρ. We claim that there is no member of ρ^↔ in W(-1,1)^-1. Suppose to the contrary that there is such a member of ρ^↔. Write ρ=ρ_1⋯ρ_ℓα_1α_2 ρ_ℓ+3⋯ρ_k, where α_1>α_2 are the members of α_i. Clearly ρ∉ W(-1,1)^-1, so there must be an integer m such that a shuffle of ρ_1⋯ρ_m and ρ_k⋯ρ_m+1 is in W(-1,1)^-1. If m≤ℓ then in the shuffle ρ_ℓ+3 precedes α_2 which precedes α_1, and these three digits form a 312 pattern. But all members of W(-1,1)^-1 avoid 312. If m ≥ℓ+2 then ρ_ℓ precedes α_1 which precedes α_2, and these three digits form a 132 pattern. But all members of W(-1,1)^-1 avoid 132. So it must be the case that m=ℓ+1. Thus ρ_ℓ precedes α_1 and ρ_ℓ+3 precedes α_2. Since ρ_ℓ<α_1, it must be the case that all digits larger than α_1 appear after α_1 in increasing order. In other words, ρ_ℓ+3 must appear after α_1. But then ρ_ℓ, α_1, ρ_ℓ+3 and α_2 form a 1342 pattern, and all members of W(-1,1)^-1 avoid 1342. We have reached a contradiction in every possible scenario, so α_i=21 is only possible if i=1 or i=j.

Now, suppose that ρ = 21 ⊕(⊕_i=1^k-4 1 )⊕ 21. We claim that there is no member of ρ^↔ in W(1,-1). Suppose there is an integer j such that a shuffle of ρ_1⋯ρ_j and ρ_k⋯ρ_j+1 is in W(1,-1).
Since ρ_1>ρ_2, it must be the case that j=1, and the shuffle in question is ρ^r = 12 ⊖(⊖_i=1^k-4 1 )⊖ 12. However, the longest increasing sequence at the beginning of ρ^r is ρ_kρ_k-1, and then ρ_k-2⋯ρ_1 is not in decreasing order because ρ_2<ρ_1. So, there is no member of ρ^↔ in W(1,-1).

We have now shown that if ρ^↔ has a nontrivial intersection with L_2, L_2^r, W(-1,1)^-1, and W(1,-1), then ρ∈ L_2, there is at most one layer of size 2 in ρ, and that layer must either be the first layer or the last layer. In other words, ρ is trivially Wilf-equivalent to either 12⋯ k or 1⋯ (k-2)k(k-1), which is what we wanted to show. Therefore, there are exactly 2 classes ℛ_n(ρ) that have polynomial growth for ρ∈𝒮_k.

§ SUMMARY

In this paper, we have completely determined r_n(ρ) for any permutation pattern ρ of length at most 4. We have also determined the Wilf classes for patterns of length 5. By realizing that r_n(ρ) = s_n(ρ^↔), we took advantage of earlier results in the permutation patterns literature to completely characterize when r_n(ρ) has polynomial growth. We also modified a classic proof of the Erdős–Szekeres Theorem to show that r_n(12⋯ k)=0 for n ≥\binom{k}{2}+1, and used the Robinson–Schensted correspondence to determine r_{\binom{k}{2}}(12⋯ k) for all k. There are still several open questions of interest:

* All of the sequences in Table <ref> have rational generating functions. Do there exist patterns ρ where the sequence {r_n(ρ)} does not have a rational generating function?

* We know that the majority of the sequences in Table <ref> have rational generating functions because of Theorem <ref>; however, actually computing the appropriate finitely-labeled generating trees was prohibitive both in terms of time and computer memory. What other techniques can be used to enumerate the members of the corresponding permutation classes?

* We determined r_{\binom{k}{2}}(12⋯ k) by characterizing the pairs of standard Young tableaux that correspond to π where ππ^r ∈ℛ_{\binom{k}{2}}(12⋯ k). While r_{\binom{k}{2}-1}(12⋯ k)=r_{\binom{k}{2}}(12⋯ k) and r_{\binom{k}{2}-2}(12⋯ k)=\binom{r_{\binom{k}{2}-1}(12⋯ k)}{2}, for smaller length permutations π, ππ^r may avoid 12⋯ k without P(π) having increasing diagonals. What can be said about P(π) where ππ^r ∈ℛ_n(12⋯ k) for n ≤\binom{k}{2}-3?

The authors are grateful to two anonymous referees for their feedback, which improved the organization and clarity of this paper.
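As a closing note on reproducibility: the counts and recurrences reported above are easy to check by direct enumeration. The following Python sketch (illustrative, not part of the original paper; the brute force is far too slow beyond n ≈ 8) generates each σ = ππ^r, tests classical pattern containment, and verifies the base values r_4(1423)=16 and r_5(1423)=36 quoted above together with the closed recurrence r_n(1423)=2r_{n-1}(1423)+r_{n-3}(1423)+2.

```python
from itertools import combinations, permutations

def contains(word, pattern):
    """True if `word` contains `pattern` as a classical pattern
    (repeated letters in `word` never match a pattern of distinct digits)."""
    k = len(pattern)
    for idx in combinations(range(len(word)), k):
        sub = [word[i] for i in idx]
        if all((sub[a] < sub[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def r(n, pattern):
    """Count reverse double lists sigma = pi pi^r on n letters avoiding `pattern`."""
    return sum(1 for pi in permutations(range(1, n + 1))
               if not contains(pi + pi[::-1], pattern))

# slow brute force; n = 7 already takes a noticeable amount of time
vals = {n: r(n, (1, 4, 2, 3)) for n in range(2, 8)}
assert vals[4] == 16 and vals[5] == 36          # base values quoted above
assert all(vals[n] == 2 * vals[n - 1] + vals[n - 3] + 2 for n in range(5, 8))
```

The same two functions can be pointed at any pattern of length 4 or 5 to reproduce the recurrence checks and the brute-force tables referenced in this section.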
"authors": [
"Monica Anderson",
"Marika Diepenbroek",
"Lara Pudwell",
"Alex Stoll"
],
"categories": [
"math.CO",
"05A05"
],
"primary_category": "math.CO",
"published": "20170427161112",
"title": "Pattern Avoidance in Reverse Double Lists"
} |
2016 Vol. 16 No. 3, 39 (16pp) doi: 10.1088/1674-4527/16/3/039
National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China; [email protected]
Laboratory of Radio Astronomy, Chinese Academy of Sciences, Nanjing 210008, China
Received 2015 April 3; accepted 2015 September 11

We estimated the ortho-H_2O abundances of G267.9–1.1, G268.4–0.9, G333.1–0.4 and G336.5–1.5, four of the brightest ortho-H_2O sources in the southern sky observed by the Submillimeter Wave Astronomy Satellite (ortho-H_2O 1_10 – 1_01 line, 556.936 GHz). The typical molecular clumps in our sample have H_2 column densities of 10^22 to 10^23 cm^-2 and ortho-H_2O abundances of 10^-10. Compared with previous studies, the ortho-H_2O abundances are at a low level, which can be caused by the low temperatures of these clumps. To estimate the ortho-H_2O abundances, we used the CS J = 2 →1 line (97.98095 GHz) and CS J = 5 →4 line (244.93556 GHz) observed by the Swedish-ESO 15 m Submillimeter Telescope (SEST) to calculate the temperatures of the clumps and the 350 μm dust continuum observed by the Caltech Submillimeter Observatory (CSO) telescope to estimate the H_2 column densities. The observations of N_2H^+ (J = 1 → 0) for these clumps were also acquired by SEST and the corresponding abundances were estimated. The N_2H^+ abundance in each clump shows a common decreasing trend toward the center and a typical abundance range from 10^-11 to 10^-9.

Water abundance in four of the brightest water sources in the southern sky
Bing-Ru Wang^1, Lei Qian^1, Di Li^1,2, Zhi-Chen Pan^1
===========================================================================

§ INTRODUCTION

Water was first detected in the interstellar medium (ISM) over 40 years ago (). It is an essential coolant in star-forming regions and plays an important role in the energy balance of prestellar objects (). Thus, the abundance of water is a crucial parameter, especially for massive star formation (). Since the physical conditions of star-forming regions affect the water abundance (with respect to H_2), water acts as an excellent diagnostic for energetic phenomena (). As an abundant oxygen-bearing molecule formed in molecular clouds, its abundance also gives constraints on the abundance of atomic oxygen, and it therefore affects the abundances of other chemically related oxygen-bearing species.

Accessible water lines and feasible methods are necessary for estimating the abundance of water in star-forming regions. Water lines originating from different levels probe gas under different conditions. Most rotational water lines, including the ground-state transitions of ortho- and para-H_2O, cannot be observed from the ground due to the existence of telluric water (). Although there are indeed some transitions that have been detected from the ground, their upper states are over 200 K above the ground state (). The high energies above the ground state indicate high gas temperatures when collision with H_2 is considered as the excitation mechanism. Thus, these transitions are unlikely to be from cold gases. To date, space observations (e.g., the Submillimeter Wave Astronomy Satellite (SWAS) (); the Odin satellite; the Infrared Space Observatory (ISO); the Spitzer Space Telescope and the Herschel Space Observatory) have detected water lines, including the 556.936 GHz ortho-H_2O 1_10 – 1_01 line. This ground-state transition was first observed by SWAS.
With the upper state lying only 27 K above the ortho-H_2O ground state, it makes it possible to estimate the water abundance in cold molecular gas, in which massive stars form in cold dense clumps and young stellar objects are deeply buried.

Water can form through several different routes, both in the gas phase and on dust grains. Once they form, the H_2O molecules can be desorbed from the ice mantles of dust grains, remain frozen on the dust surface or freeze onto the dust grains from the gas phase. Water ice on the dust surface can desorb thermally when the dust temperature rises above about 100 K (). In another way, photodesorption occurs when the ice absorbs ultraviolet (UV) photons (). When the temperature is as low as about 10 K and the density is high enough, freeze-out will dominate () and consequently lead to low H_2O abundances. Thus, temperature and UV radiation are essential factors that affect the water abundance.

To compare observations with predicted results, the abundances of para- or ortho-H_2O are estimated based on the spectra obtained from telescopes. For the ortho-H_2O line (1_10 – 1_01, 556.936 GHz), an effectively optically thin approximation () was adopted, which makes it convenient to estimate the ortho-H_2O abundance. In the Herschel key programme “water in star-forming regions with Herschel" (WISH), the non-local thermodynamic equilibrium (LTE) radiative transfer code RADEX () was used to reduce the ortho-H_2O line (1_10 – 1_01, 556.936 GHz) data to estimate the H_2O abundance in a low-mass protostar ().

In this paper, we estimate the ortho-H_2O abundances of four of the brightest ortho-H_2O sources (G267.9–1.1, G268.4–0.9, G333.1–0.4 and G336.5–1.5) in the southern sky observed by SWAS. The paper is organized as follows: in Section 2, we briefly describe the observations of these clumps and the data reduction procedures. In Section 3, we present the calculations and estimates of clump temperatures, clump masses, H_2 column densities and finally the estimate of ortho-H_2O abundances based on observations. In Section 4 and Section 5, we present the discussion and conclusions, respectively. The appendix contains some supplementary material.

§ OBSERVATION AND DATA REDUCTION

§.§ Source Selection

We checked the co-added spectra of the ortho-H_2O 1_10 – 1_01 line of all the 386 sources in the five and a half years of the SWAS nominal mission from Lambda[http://lambda.gsfc.nasa.gov/product/swas/s_sw.cfm]. We selected four of the sources with T_A^* higher than 0.1 K (excluding the sources in the Galactic Center region) in the southern sky. These four sources are G267.9–1.1 (T_A^* = 0.10 K), G268.4–0.9 (T_A^* = 0.16 K), G333.1–0.4 (T_A^* = 0.20 K) and G336.5–1.5 (T_A^* = 0.45 K). Among these four sources, G336.5–1.5 has the highest T_A^*. These four sources are located in the star forming regions RCW 38 (G267.9–1.1 and G268.4–0.9), RCW 106 (G333.1–0.4) and RCW 108 (G336.5–1.5), respectively. Besides being bright at 8 μm, they are all associated with the 22 GHz 6_16 – 5_23 water masers (; ; and ), which are believed to be good indicators of the location of massive star formation (). The properties of these four sources are summarized briefly as follows.

(1) G267.9–1.1. It is the third brightest source in the investigation of Galactic radio sources at 5000 MHz (), with a brightness temperature of 124.0 K ().
The associated 22 GHz 6_16 – 5_23 water maser (without OH main-line emission) was first reported by <cit.>.

(2) G268.4–0.9. It was identified in an 11 cm survey of Vela (), near the source G267.9–1.1 (denoted as G268.0–1.0 in the same survey) with a lower brightness temperature. However, it was not identified as an isolated radio source in the Galactic radio source survey (). The associated 22 GHz water maser was identified by <cit.>.

(3) G333.1–0.4. It is one of the clumps in the giant molecular cloud (GMC) G333 (). It was first identified as an extensive H II region (), with a brightness temperature of 17.8 K (). The associated 22 GHz water maser was identified by <cit.>. The ortho-H_2O 556.936 GHz line obtained by SWAS exhibits a pronounced inverse P Cygni profile, and a related study () suggests that it is a rare case of direct observational evidence for large scale infall in a star forming region.

(4) G336.5–1.5. It is identified as an isolated compact H II region in both the survey of H109α recombination line emission in Galactic H II regions of the southern sky () and the investigation of Galactic radio sources at 5000 MHz (). Its brightness temperature is 7.2 K (). G336.5–1.5 is associated with bright-rimmed cloud (BRC) 79, one of the 89 clouds in a catalog of BRCs with IRAS point sources (). It has the largest H_2 column density among the 43 southern hemisphere BRCs (BRC 77 and BRC 78 excluded) and the region RCW 62, according to ^13CO observations (). Its ortho-H_2O 556.936 GHz line obtained by SWAS exhibits the highest antenna temperature among all observed sources (other than solar system objects and the Galactic Center), which makes it an interesting object to study. Compared with the other three sources, G336.5–1.5 has a higher Galactic latitude. Its associated 22 GHz water maser was detected in a survey of 45 southern BRCs () for H_2O maser emission (), with a total integrated H_2O flux density of merely 5.4 Jy km s^-1.

All these features mentioned above imply that these four sources are likely to be clumps with active massive star formation. We use “clumps" to refer to these four sources in this paper.

§.§ Observation and Data Reduction

The observations were carried out with three telescopes. The ortho-H_2O 1_10 – 1_01 line (556.936 GHz) was observed with SWAS. The CS J = 2 → 1 line (97.98095 GHz), CS J = 5 → 4 line (244.93556 GHz) and N_2H^+ J = 1 → 0 line (93.17340 GHz) data were from the Swedish-ESO 15 m Submillimeter Telescope (SEST[http://www.eso.org/public/images/esopia00049teles/]). The 350 μm dust continuum data were obtained with the Submillimeter High Angular Resolution Camera II (SHARC II; see ) of the Caltech Submillimeter Observatory (CSO) telescope. The observational parameters of the molecular lines are listed in Table <ref>.

§.§.§ SWAS observation

The observations of the ortho-H_2O 1_10 – 1_01 line (556.936 GHz) were performed with SWAS from 1999 January 20 to 2001 May 3 (G267.9–1.1), 1998 December 20 to 2003 June 5 (G268.4–0.9), 1999 September 15 to 2002 February 25 (G333.1–0.4) and 2001 September 22 to 2004 July 21 (G336.5–1.5).
The data were obtained from the SWAS spectrum service in the NASA/IPAC infrared science archive[http://irsa.ipac.caltech.edu/applications/SWAS/SWAS/list.html]. The ortho-H_2O 557 GHz 1_10 – 1_01 line data acquired by SWAS were converted into FITS format with a uniform 190 × 190 arcsec^2 pixel size after the spectra in every single beam (which are also in the same sampling cell) were averaged and the baselines were subtracted. The Gildas software package[http://www.iram.fr/IRAMFR/GILDAS/] was used for averaging and baseline subtraction. The baselines of the spectra were acceptable and a 1st- or 2nd-order polynomial was used for baseline fitting. The typical root mean squares (RMSs) of the ortho-H_2O 557 GHz 1_10 – 1_01 spectra are 0.017 K for G267.9–1.1, 0.013 K for G268.4–0.9, 0.014 K for G333.1–0.4, and 0.03 K for G336.5–1.5. The different RMSs are mainly due to different integration times. When we calculated the integrated intensities of the ortho-H_2O 557 GHz 1_10 – 1_01 line, the antenna temperatures were corrected with a main beam efficiency of 0.9. For G268.4–0.9 and G333.1–0.4, the antenna temperatures below zero are due to the high noise and the subtraction of the baseline. The double-peaked spectra of the ortho-H_2O 1_10 – 1_01 lines of G268.4–0.9 indicated strong self-absorption (). We performed Gaussian fitting for both the non-absorbed emission peaks and the absorption peaks and obtained the integrated intensity of the emission of the averaged and baseline-subtracted spectrum. The spectrum of G333.1–0.4 shows a pronounced inverse P Cygni profile. We took into account both the water components corresponding to the emission and absorption features.

§.§.§ SEST observation

The observations of the CS J = 2 → 1 line (97.98095 GHz), CS J = 5 → 4 line (244.93556 GHz) and N_2H^+ J = 1 → 0 line (93.17632 GHz) were carried out with SEST. These four clumps were mapped with CS J = 2 → 1 (except for G333.1–0.4), CS J = 5 → 4 and N_2H^+ J = 1 → 0 in 2002 March 24–28. The main-beam efficiencies were 0.73 (CS J = 2 → 1), 0.56 (CS J = 5 → 4) () and 0.74 (N_2H^+ J = 1 → 0) (), respectively. In the mappings, the spacing of the square scanning grids is 40”, but in the CS (5–4) mappings for G268.4–0.9, G333.1–0.4 and G336.5–1.5, additional sampling made the square scanning grids quincunxes. In each map, every pixel was observed with the position switch mode separately. The reference positions were selected approximately 1800” away from the centers of the maps (the coordinates in Columns (2) and (3) of Table <ref>). The Gildas software package was also used. For several spectra, the baselines seem to follow a sine function but with changing periods and amplitudes. In addition, for several spectra the line widths of the real emission lines are similar to the periods of their sine baselines, so we left them with their sinusoidal baselines. We checked all the spectra one by one for baseline fitting. These spectra are near the edge of the mapping area, so our calculation results are little affected. After the baseline subtraction, the obtained spectra with RMSs less than 1 K (for CS J = 5 → 4 in G333.1–0.4 and G336.5–1.5, the sigma limits are 0.5 K and 0.84 K, respectively) were selected and written in FITS format with a uniform 40 × 40 arcsec^2 pixel size. The RMSs of the CS (2–1) spectra in the center of the images are 0.16 K for G267.9–1.1 (RA = 08:59:12.0, Dec = -47:29:04), 0.16 K for G268.4–0.9 (RA = 09:01:54.3, Dec = -47:43:59) and 0.16 K for G336.5–1.5 (RA = 16:39:58.9, Dec = -48:51:00).
The RMSs of the CS (5–4) spectra in the center of the images are 0.42 K for G267.9–1.1 (RA = 08:59:12.0, Dec = -47:29:04), 0.16 K for G268.4–0.9 (RA = 09:01:54.3, Dec = -47:43:59), 0.13 K for G333.1–0.4 (RA = 16:21:02.1, Dec = -50:35:15), and 0.09 K for G336.5–1.5 (RA = 16:39:58.9, Dec = -48:51:00). We did not have CS (2–1) data for G333.1–0.4. The RMSs of the N_2H^+ (1–0) spectra in the center of the images are 0.20 K for G267.9–1.1 (RA = 08:59:12.0, Dec = -47:29:04), 0.18 K for G268.4–0.9 (RA = 09:01:54.3, Dec = -47:43:59), 0.19 K for G333.1–0.4 (RA = 16:21:00.8, Dec = -50:34:55), and 0.17 K for G336.5–1.5 (RA = 16:39:58.9, Dec = -48:51:00).

§.§.§ CSO observation

The 350 μm dust continuum observations were performed with SHARC II on the CSO telescope on 2014 April 4 and 5. The data were taken when these four regions were close to their maximum elevation (approximately 20 degrees at the CSO site) and τ_225 GHz was lower than 0.06. The box scan mode was used for the SHARC II observations. The beam size of SHARC II is 8” and the grid spacing for the sampling is 1.5 × 1.5 arcsec^2. For each scan, the total integration time is 14.71 minutes and the corresponding RMS is 212 mJy beam^-1. Pointing and focusing calibration was done every 2 hours during the observations. The data reduction tool CRUSH[http://www.submm.caltech.edu/~sharc/crush/] was used for further data reduction. The flux calibration was done by observing Mars. The RMSs of the final data are 0.49 Jy beam^-1 for G267.9–1.1, 0.61 Jy beam^-1 for G268.4–0.9, 0.79 Jy beam^-1 for G333.1–0.4 and 0.53 Jy beam^-1 for G336.5–1.5. The weather is the main reason for the variation of the noise levels in different maps.

§ RESULTS AND ANALYSIS

§.§ Spectra Map and Dust Map

Figures <ref>, <ref>, <ref> and <ref> are line profile maps of G267.9–1.1, G268.4–0.9, G333.1–0.4 and G336.5–1.5. In these line profile maps, all offsets are relative to the corresponding coordinates (see Table <ref>) and the units are arcsec. Empty boxes are the positions without sampling.

There is a “hole” with little CS (2–1 and 5–4) emission in the center of the emission region of G267.9–1.1. Moreover, the centroid velocities of the CS spectra to the east of the hole are different from those of the CS spectra to the west of the hole. To the south of the hole, the CS spectra all have two obvious peaks, and to the north of the hole, the spectra all have two peaks as well. We overlapped the CS (2–1) integrated intensity map on the 350 μm dust continuum in Figure <ref>. We can see that the intensity peaks are associated with the 350 μm emission and that in the “hole” the dust emission is much weaker than in the surrounding areas.

The RMSs of the CS (5–4) spectra of G333.1–0.4 vary a lot, which is caused by different integration times. The integration time (on-source time) changes from less than 0.8 minutes to more than 3 minutes. The CS (5–4) spectra of G336.5–1.5 show a similar situation.

The 350 μm dust continuum maps of these four clumps are shown in Figure <ref>. In the following sections, we estimate the temperatures, masses, H_2 column densities and ortho-water abundances of these four clumps.
The areas used for the mass and ortho-water abundance estimates are shown in the corresponding figures as white boxes with solid lines and dashed lines, respectively.

§.§ CS Excitation Temperatures

§.§.§ The estimate of CS excitation temperatures

For G267.9–1.1, G268.4–0.9 and G336.5–1.5 (we did not have CS (2–1) line data for G333.1–0.4), the excitation temperatures of the CS molecule were estimated based on the CS (2–1) line and CS (5–4) line. The estimated CS excitation temperatures were subsequently adopted in the estimates of the clump masses (and then the H_2 column densities), ortho-water abundances and N_2H^+ abundances.

The estimate is based on the following assumptions, i.e., (1) The CS molecules are in LTE. (2) The cosmic microwave background radiation (CMB) can be ignored. (3) The CS (2–1) and CS (5–4) lines are optically thin. Since G267.9–1.1, G268.4–0.9 and G336.5–1.5 all fill the main beam, the filling factors in the estimate equal 1. According to the population diagram method (in LTE) (), for the upper levels J = 2 and J = 5 we have ln(N_J=2^thin/g_J=2) + ln(τ_2-1/(1 - e^-τ_2-1)) = ln N_tot - ln Z - E_J=2/(kT_ex), and ln(N_J=5^thin/g_J=5) + ln(τ_5-4/(1 - e^-τ_5-4)) = ln N_tot - ln Z - E_J=5/(kT_ex), respectively. N_J=2^thin and N_J=5^thin are the column densities at J = 2 and J = 5 in the optically thin situation, respectively. g_J=2 and g_J=5 are the statistical weights of level J = 2 and level J = 5, respectively, and τ_5-4 and τ_2-1 are the corresponding optical depths. N_tot is the total column density of the CS molecule and Z is the partition function. T_ex is the excitation temperature of these two transitions. Since we assumed that the CS (2–1) line and CS (5–4) line are optically thin, we considered ln(τ_2-1/(1 - e^-τ_2-1)) = 0 and ln(τ_5-4/(1 - e^-τ_5-4)) = 0.

From Equations (<ref>), (<ref>), (<ref>) and (<ref>), we obtained T_ex = (E_J=5 - E_J=2)/(k ln(g_J=5 N_J=2^thin/(g_J=2 N_J=5^thin))), where E_J = hB_e J(J+1). J is the rotational quantum number and B_e is the rotational constant of the CS molecule at vibrational energy level v = 0 in Hz (2.458437 × 10^10 Hz, ). The statistical weights of the J = 2 and J = 5 levels, g_J=2 and g_J=5, equal 5 and 11, respectively. Thus, we can write Equation (<ref>) as T_ex = 24hB_e/(k ln(11N_J=2^thin/(5N_J=5^thin))).

Now we focus on the CS column densities of the upper levels (J = 2 and J = 5). Based on <cit.>, when the molecular line is optically thin, the column density of the upper level (J = 2 or J = 5) is N_u^thin = (8πν^3/(c^3 A_ul (e^hν/kT_ex - 1))) ∫τ_ν dv, where ν is the frequency of the CS (2–1) or CS (5–4) line, T_ex is the excitation temperature, A_ul is the corresponding Einstein A-coefficient and τ_ν is the optical depth.
In an isothermal medium, the relationship between the brightness temperature T_B, the excitation temperature T_ex and the cosmic background temperature T_background can be described as T_B = T_background e^-τ_ν + T_ex(1 - e^-τ_ν). When the CS lines are optically thin, 1 - e^-τ_ν ≈ τ_ν, and if the background radiation can be ignored, then T_B ≈ T_ex τ_ν. Substituting Equation (<ref>) into Equation (<ref>), we obtained N_u^thin = (8πν^3/(c^3 A_ul T_ex (e^hν/kT_ex - 1))) ∫ T_ex τ_ν dv = (8πν^3/(c^3 A_ul T_ex (e^hν/kT_ex - 1))) ∫ T_B dv. Since the four clumps in our study are all extended sources, the antenna temperature T_a ≃ T_B, and the column densities of the upper levels in the optically thin case are N_u^thin = (8πν^3/(c^3 A_ul T_ex (e^hν/kT_ex - 1))) ∫ T_a dv.

In Equation (<ref>), N_u^thin is dependent on T_ex. If we adopt the Rayleigh-Jeans approximation hν ≪ kT_ex in Equation (<ref>), then this equation will be reduced to N_u^thin∗ = (8πkν^2/(hc^3 A_ul)) ∫ T_a dv. This expression is the same as that from <cit.>, and it does not depend on the excitation temperature T_ex. However, we notice that if we estimate the column densities of the upper levels through Equation (<ref>), then a significant deviation will arise due to the high frequency of the CS (5–4) line. The deviation in the upper level column densities will lead to a significant deviation of the subsequently derived T_ex. The excitation-temperature-dependent N_u^thin was derived by correcting the N_u^thin∗ with the line frequency ν and excitation temperature T_ex: N_u^thin = (hν/(kT_ex (e^hν/kT_ex - 1))) · N_u^thin∗.

For each sampling position, we need N_u^thin at the J = 2 and J = 5 levels to calculate the excitation temperature of CS, while the excitation temperature of the CS molecule is required when we derive N_u^thin (Eq. (<ref>)). As a start, we calculated N_u^thin∗ for the J = 2 and J = 5 levels (Eq. (<ref>)) and then derived an excitation temperature (Eq. (<ref>)) from N_u^thin∗ (J = 2 and J = 5 levels). In the next step, we obtained the first N_u^thin through Equation (<ref>), with which we subsequently calculated an excitation temperature again (Eq. (<ref>)). Since N_u^thin is dependent on T_ex, and T_ex is derived from the N_u^thins (J = 2 and J = 5 levels), we performed iterative calculations to correct T_ex gradually, i.e., we ran the iteration cycles of “N_u^thin – T_ex”. We kept comparing the latest T_ex calculated from the latest N_u^thins at the J = 2 and J = 5 levels with the previously calculated T_ex. The iterations would not end until the absolute value of the difference between these two T_ex values was less than 0.05 K. The calculation of CS excitation temperatures was only applied to the positions where both the CS (2–1) and CS (5–4) line intensities were higher than 3σ. At these positions, gas along the line of sight would be considered CS (5–4)-traced dense gas, and the areas corresponding to all these positions are treated as “CS (5–4)-traced areas,” in which the CS molecules can be well excited through collisions (at least for the CS J = 1 → 2 excitation, since the critical density of the CS (5–4) line is far greater than that of the CS (2–1) line) and the calculated CS excitation temperatures approximately equal the kinetic temperatures of the gas; the kinetic temperature of the gas is an essential parameter in subsequent estimates.
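As an illustration of the iteration described above, the following Python sketch implements Equations (<ref>)–(<ref>) for a single sampling position. It is illustrative only: the Einstein A-coefficients are representative values from standard line catalogs rather than numbers quoted in this paper, and the input integrated intensities are placeholders. Units cancel in the J = 2/J = 5 ratio, so only the intensity ratio matters for T_ex.

```python
import math

# physical constants (SI)
h, k, c = 6.62607e-34, 1.380649e-23, 2.99792458e8
B_e = 2.458437e10                       # CS rotational constant [Hz]

# (frequency [Hz], Einstein A [s^-1]); A-values are catalog-like estimates
lines = {2: (97.98095e9, 1.68e-5), 5: (244.93556e9, 2.98e-4)}

def n_upper_rj(freq, A_ul, W):
    """Rayleigh-Jeans upper-level column density N_u^thin*; W = int. intensity."""
    return 8 * math.pi * k * freq**2 / (h * c**3 * A_ul) * W

def correct(N_star, freq, T_ex):
    """Correct N_u^thin* for the Rayleigh-Jeans approximation."""
    x = h * freq / (k * T_ex)
    return N_star * x / math.expm1(x)

def t_ex(N2, N5):
    """T_ex from the J=2 and J=5 populations; requires 11*N2 > 5*N5."""
    return 24 * h * B_e / (k * math.log(11 * N2 / (5 * N5)))

def iterate(W2, W5, tol=0.05):
    """Iterate N_u^thin <-> T_ex until T_ex changes by less than tol [K]."""
    N2s = n_upper_rj(*lines[2], W2)
    N5s = n_upper_rj(*lines[5], W5)
    T = t_ex(N2s, N5s)
    while True:
        T_new = t_ex(correct(N2s, lines[2][0], T),
                     correct(N5s, lines[5][0], T))
        if abs(T_new - T) < tol:
            return T_new
        T = T_new

# placeholder integrated intensities (arbitrary consistent units):
print(iterate(W2=10.0, W5=3.0))
```

The loop converges quickly because the Rayleigh-Jeans correction factor varies slowly with T_ex; the same scheme is applied independently at every sampling position that passes the 3σ criterion.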
Considering the sampling spacing (compared with the beam size), no interpolation was applied to positions where the signal quality was poor (low signal-to-noise ratio), and the corresponding CS column densities at J = 2 and J = 5 as well as the T_exs were set to zero at these positions. The average CS excitation temperatures of these four clumps (corresponding to the areas within the white boxes (solid lines) in Fig. <ref>) are listed in Column (5) of Table <ref>. In those areas all calculated CS excitation temperatures are non-zero. We subsequently estimate the masses of the clumps within these white boxes (solid lines) in Section 3.3.

We also derived a “characteristic” CS excitation temperature, T_ex-clump, for the CS (5–4)-traced area in each clump (G267.9–1.1, G268.4–0.9 and G336.5–1.5). For each clump, we averaged the integrated intensity of the CS (2–1) line (and the CS (5–4) line as well) at every sample position in the CS (5–4)-traced area (the white boxes (solid lines) in Fig. <ref>); then with these average integrated intensities we derived the “characteristic” excitation temperature T_ex-clump through exactly the same assumptions and processes as previously described. Figure <ref> shows the population diagrams calculated for the T_ex-clumps of G267.9–1.1, G268.4–0.9 and G336.5–1.5. The population diagrams were deduced from the corresponding N_u^thin∗s (at J = 2 and J = 5, in dashed lines) and the last N_u^thins (at J = 2 and J = 5, in solid lines) retrieved in the iteration. We can see that the correction of the upper level column densities (J = 2 and J = 5) for the Rayleigh-Jeans approximation does make a difference in the slopes of these plots. The final values of the T_ex-clumps, i.e., those deduced from the slopes of the solid-line plots in Figure <ref>, are listed in Table <ref>, together with N_CS-clump, the average CS column density in the CS (5–4)-traced area of each clump. The average CS column density in the CS (5–4)-traced area was deduced from the final results of the iteration for T_ex-clump.

§.§.§ The non-LTE analysis for CS (2–1) and CS (5–4) lines

Since it is possible for the CS lines to be optically thick (especially for the CS (2–1) line) in the actual situation, we also performed a non-LTE analysis for the CS (2–1) and CS (5–4) lines with RADEX () in the CS (5–4)-traced areas (the white boxes with solid lines in Fig. <ref>). To perform this analysis, for each clump we added up the integrated intensities of the CS (2–1) line and CS (5–4) line at every sample position in the CS (5–4)-traced area (the white boxes (solid lines) in Fig. <ref>), respectively, and found the corresponding average values. Based on the average CS (2–1) and CS (5–4) integrated intensities, we derived the line intensity ratio R_(2-1)/(5-4) (CS (2–1)/CS (5–4)) in the CS (5–4)-traced area for each clump. We then performed the non-LTE analysis with RADEX in a kinetic temperature range from 10 K to 200 K and an H_2 density range from 10^3 to 10^8 cm^-3. According to the N_CS-clump we had estimated for each clump (see Section 3.2.1), we performed the analysis with CS column densities of 10^13 cm^-2 and 10^14 cm^-2 and obtained the line intensity ratio maps on the kinetic temperature (T_kin)-density plane. To estimate the probable H_2 density within the CS (5–4)-traced area for each clump, we need the corresponding kinetic temperature in addition to the R_(2-1)/(5-4).
When we took the T_ex-clumps as the corresponding kinetic temperatures in the CS (5–4)-traced areas (in accordance with the LTE assumption which we adopt in the following estimates), the clumps could be marked on the maps, as Figures <ref> and <ref> show. The corresponding average H_2 densities of the clumps within the CS (5–4)-traced areas, n_avs, are listed in Table <ref> together with other parameters. The non-LTE analysis suggests that all these clumps have quite high n_avs in the CS (5–4)-traced areas. The non-LTE analysis results offer references for the densities assumed in the subsequent estimates. We did not perform the non-LTE analysis for G333.1–0.4 since we only obtained CS (5–4) data.

§.§ Clump Masses and H_2 Column Densities

To estimate the H_2 column densities of these four clumps, we adopted the assumptions made in <cit.>, namely, (i) The medium along a certain line of sight has a single temperature (); (ii) The absorption coefficient at 350 μm, Q(350), is 2×10^-4; (iii) The characteristic grain radius is 0.1 μm; (iv) The grain density is 3 g cm^-3; (v) The gas to dust ratio (GDR) is 100. The clump mass can be expressed as M_clump = 0.10 M_⊙ [2×10^-4/Q(350)] [λ/350 μm]^3 [D/1 kpc]^2 [GDR/100] [S(ν)/Jy] P_f(T_d), where S(ν) is the flux density of the cloud at 350 μm at distance D, in Jy, T_d is the dust temperature, and P_f(T_d) is the Planck factor, P_f(T_d) = e^hν/kT_d - 1.

In this formula (), the dust temperature is an essential parameter. Although high density gases are present in these clumps, there can be a significant difference between the dust and gas temperatures at the same position (). However, when n(H_2) = 10^6 cm^-3, the dust temperature approximately equals the gas temperature (). Thus we can assume that, along a certain line of sight, the dust temperature equals the local gas temperature under this volume density condition. Since the CS (5–4) transition has a critical density of about 10^6 cm^-3, we assume that within the boundaries of the CS (5–4)-traced areas n(H_2) = 10^6 cm^-3. According to our non-LTE analysis with RADEX (in Section 3.2.2), this assumption is reasonable and we therefore adopt the approximation of the dust temperature above in the following estimate. Here we assume the gas temperature equals the kinetic temperature. Based on the relation between the CS excitation temperatures and the kinetic temperatures mentioned before, we actually adopted the calculated CS excitation temperatures as the local gas temperatures and the dust temperatures along the same lines of sight in the CS (5–4)-traced areas.

In the 350 μm emission data, the grid spacing for the sampling is 1.5 × 1.5 arcsec^2. By summing up the calculated mass of every cell in the sampling grid, we calculated the total mass of each clump[“The total mass” is the sum of the calculated mass of every single cell in each white box with solid lines in Figure <ref>. The boundaries of each box were determined based on the CS (5–4)-traced area, the area with available calculated CS excitation temperatures and the profile of the 350 μm image (for G267.9–1.1).]. Subsequently we estimated the H_2 column densities for each of the clumps cell by cell as N_H_2(cell) = M_clump(cell) f/(m_H_2 (Dθ)^2), where M_clump(cell) is the calculated mass of every single cell, f is the mass fraction of H in gas with a ^4He to H ratio of 0.08459 (), m_H_2 is the mass of an H_2 molecule, D is the distance of the clump, and θ is the sampling grid spacing, which is 1.5”.
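To make the two relations above concrete, here is a minimal Python sketch of the mass and column density estimates. All numerical inputs (flux density, distance, dust temperature) are made-up placeholders, not the measured values for these clumps; f = 1/(1 + 4×0.08459) ≈ 0.747 follows from the assumed ^4He to H ratio, and all hydrogen is taken to be in the form of H_2, as discussed in the next paragraph.

```python
import math

h, k = 6.62607e-34, 1.380649e-23                 # SI constants
nu = 2.99792458e8 / 350e-6                       # frequency at 350 micron [Hz]

def clump_mass(S_jy, D_kpc, T_d, Q=2e-4, GDR=100.0):
    """Clump mass [M_sun] from the 350 micron flux density (the M_clump
    relation above; lambda = 350 micron, so the wavelength factor is 1)."""
    P_f = math.expm1(h * nu / (k * T_d))         # Planck factor e^{h nu/kT_d}-1
    return 0.10 * (2e-4 / Q) * D_kpc ** 2 * (GDR / 100.0) * S_jy * P_f

def n_h2_cell(M_cell_msun, D_kpc, theta_arcsec=1.5, f=0.747):
    """Per-cell H2 column density [cm^-2], assuming all H is in H2."""
    M_sun, m_H2 = 1.989e33, 2 * 1.6735e-24       # grams
    cell_cm = D_kpc * 3.0857e21 * theta_arcsec / 206265.0
    return M_cell_msun * M_sun * f / (m_H2 * cell_cm ** 2)

# placeholder inputs: a ~50 Jy cell-sum at 1.7 kpc with T_d = 30 K
M = clump_mass(S_jy=50.0, D_kpc=1.7, T_d=30.0)
print(M, n_h2_cell(M / 1000.0, D_kpc=1.7))       # ~40 M_sun; ~1e22 cm^-2 per cell
```

With inputs of this order, the per-cell column densities come out around 10^22 cm^-2, consistent with the range reported below.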
Since the CS (5–4) line has a high critical density of about 10^6 cm^-3, and with reference to our non-LTE analysis with RADEX (Section 3.2.2), we assume that all of the hydrogen is in the form of H_2 in the dense gas within the CS (5–4)-traced areas when we calculate the molecular hydrogen column densities. The calculated clump masses (of the CS (5–4)-traced areas, within the white boxes (solid lines) in Fig. <ref>) are listed in Column (6) of Table <ref>. The average H_2 column densities in the areas used to estimate the ortho-H_2O abundance (white boxes (dashed lines) in Fig. <ref>) are listed in Column (9) of Table <ref>. The H_2 column densities in these four clumps are of the order of 10^22 to 10^23 cm^-2.

§.§ Ortho-H_2O Abundance

§.§.§ Method

The observation of the 557 GHz ortho-H_2O 1_10 – 1_01 line was performed with SWAS with a pixel size of about 190 × 190 arcsec^2 and a main beam efficiency of about 0.90 (). This ground-state transition has a large spontaneous emission rate, A. As the stimulated absorption coefficient B is proportional to the spontaneous emission rate, this leads to large opacities and makes excitation by photon trapping important (). Since the collisional de-excitation rate Cn_H_2 is far less than A, this line has a high critical density and the excitation is subthermal (in other words, the de-excitation of upper level molecules is dominated by emitted photons rather than collisional de-excitation). Thus, although the line is expected to be optically thick at the line center even for a relatively low water abundance, every collisionally excited upper level molecule can always produce a photon which finally escapes the cloud (). Thus, the optically thick gas can be effectively thin (). Therefore, the integrated antenna temperature is proportional to the column density of ortho-H_2O under known temperature and H_2 volume density; according to <cit.>, ∫ T_b dv = C n_H_2 (c^3/(2ν^3 k)) N(o-H_2O) (hν/4π) exp(-hν/kT_k). ∫ T_b dv is the integrated intensity, in K km s^-1. C is the collisional de-excitation rate coefficient from level 1_10 to level 1_01.

§.§.§ Kinetic temperature and other details

Equation (<ref>) can be written as X(o-H_2O) = a·∫ T_b dv/(N(H_2) n_H_2), with a = [C·(c^3/(2ν^3 k))·(hν/4π)·exp(-hν/kT_k)]^-1, where a is a constant at a given temperature. We adopted an H_2 volume density of 10^6 cm^-3 in the estimates, since the areas where we estimated the ortho-H_2O abundances (the white boxes with dashed lines in Fig. <ref>) are in the CS (5–4)-traced areas. The average H_2 column densities in the white boxes with dashed lines are listed in Column (9) of Table <ref>. The value of the coefficient a depends on the kinetic temperature and the corresponding C. Assuming that the CS lines and the ortho-H_2O 1_10 – 1_01 line originate from the same gas, we take the calculated CS excitation temperatures as the kinetic temperature T_k in the corresponding areas and adopt them in the estimate of the ortho-water abundance. Since the pixel size of the ortho-H_2O data is much larger than that of the CS lines (the sampling spacing), there is an averaging effect for the kinetic temperature. We calculated the average T_k and the corresponding standard deviation (Table <ref>, Columns (7) and (8)) to restrict the temperature range for the estimate. The collisional de-excitation rate coefficients were calculated according to the effective collisional excitation rate of ortho-H_2O from level 1_10 to level 1_01 (hereafter the effective excitation rate) by para- or ortho-H_2 and the ortho to para ratio of H_2.
The effective excitation rates were adopted from <cit.> and <cit.>[i.e., the “effective rate coefficient” in these two papers] from 5 K to 80 K. Assuming that the H_2 molecules are in LTE, the ortho to para ratios of H_2 were derived according to the H_2 rotational energy levels from <cit.> and the fractional population in the H_2 rotational levels ().

§.§.§ Ortho-H_2O abundances

The values of a (see Eq. (<ref>)) and the estimated ortho-H_2O abundances at the kinetic temperatures for every clump are listed in Table <ref>. The ortho-H_2O abundances in the most probable temperature ranges of these four clumps (i.e., G267.9–1.1, 30–40 K; G268.4–0.9, 10–15 K; G333.1–0.4, 30–40 K; G336.5–1.5, 15–20 K; the corresponding ortho-H_2O abundances are called “the typical ortho-H_2O abundances” in Section 4.1) are presented in Figure <ref> together with some other ortho-H_2O abundances of giant molecular cloud (GMC) cores () and molecular outflows (), which are based on the same ortho-H_2O transition observed by SWAS. The ortho-H_2O abundances of these four clumps are at a low level compared with the other results.

§.§ N_2H^+ Abundances

Since the critical density of the N_2H^+ (1–0) line is far lower than that of the CS (5–4) line, we can assume that the CS lines ((5–4) and (2–1)) and the N_2H^+ (1–0) line are all thermally populated in the CS (5–4)-traced areas; thus their excitation temperatures are all approximately equal to the kinetic temperatures. In this situation, we can adopt the excitation temperatures of CS as the excitation temperatures of N_2H^+ at the same positions. Then, with the same assumptions and approximations we used in the calculation of the column density of upper level CS molecules, we calculate the corrected N_2H^+ column density at J = 1. The N_2H^+ column density N_total is estimated as N_total = N_J=1 (Z/(2J+1)) exp[hB_e J(J+1)/(kT)], according to <cit.>. N_J=1 is the N_2H^+ column density without the Rayleigh-Jeans approximation at J = 1. B_e is the rotational constant of the N_2H^+ molecule at vibrational energy level v = 0 [http://www.cv.nrao.edu/php/splat/species_metadata_displayer.php?species_id=148]. J is the rotational quantum number of the upper level and J = 1. k is the Boltzmann constant and h is the Planck constant. We adopt the excitation temperatures of CS as the temperature T. Z is the rotational partition function of N_2H^+. Since N_2H^+ is a linear molecule (), when the contribution of the vibrationally excited states is not taken into account, Z ≃ ∑_J=0^∞ (2J+1) exp(-hB_0 J(J+1)/(kT)). B_0 is the rigid rotor rotational constant of the N_2H^+ molecule at the ground vibrational state v = 0, and B_0 = 46586.88 MHz (). J is the rotational quantum number, J = 0, 1, 2, …. h and T are the same as in Equation (<ref>).

According to <cit.>, if we use one or several observable hyperfine transition(s) to derive the column density of N_2H^+, we must take the relative line strengths of the hyperfine transition(s) into consideration. However, in our observations the spectra cover all seven hyperfine transitions which can be observed in the J = 1 → 0 transition. Thus, we just calculate the rotational partition functions at the corresponding temperatures and then estimate the N_2H^+ column densities. We average the estimated H_2 column densities at every N_2H^+ pixel and then estimate the N_2H^+ abundances. The results are shown in Table <ref> and Figure <ref>, and all offsets are relative to the corresponding coordinates (J2000) in Table <ref>; the unit is arcsec.
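For concreteness, the following Python sketch (CGS units) evaluates the coefficient a and the ortho-H_2O abundance of Equation (<ref>), together with the truncated rotational partition function of Equation (<ref>) using the 0.1% criterion described in the appendix. All numerical inputs are illustrative assumptions: in particular, C ≈ 2×10^-11 cm^3 s^-1 is only an order-of-magnitude guess for cold gas, whereas the actual calculation uses the effective excitation rates and the H_2 ortho to para ratio discussed above.

```python
import math

h, k, c = 6.62607e-27, 1.380649e-16, 2.99792458e10   # CGS constants
nu = 556.936e9                                       # o-H2O 1_10 - 1_01 [Hz]

def coeff_a(T_k, C):
    """Coefficient a of the abundance relation above; C is the collisional
    de-excitation rate 1_10 -> 1_01 [cm^3 s^-1] at kinetic temperature T_k."""
    return 1.0 / (C * (c ** 3 / (2 * nu ** 3 * k))
                  * (h * nu / (4 * math.pi)) * math.exp(-h * nu / (k * T_k)))

def x_ortho_h2o(W_K_kms, N_H2, n_H2, T_k, C):
    """Ortho-H2O abundance; W in K km/s, N_H2 in cm^-2, n_H2 in cm^-3."""
    return coeff_a(T_k, C) * (W_K_kms * 1e5) / (N_H2 * n_H2)

def z_rot(B0_Hz, T):
    """Truncated rotational partition function: the first term that falls
    below 0.1% of the sum of the lower-level terms is the last one added."""
    Z, J = 0.0, 0
    while True:
        term = (2 * J + 1) * math.exp(-h * B0_Hz * J * (J + 1) / (k * T))
        Z += term
        if J > 0 and term < 1e-3 * (Z - term):
            return Z
        J += 1

# illustrative inputs only (guessed C; placeholder intensity and densities):
print(x_ortho_h2o(W_K_kms=1.0, N_H2=5e22, n_H2=1e6, T_k=20.0, C=2e-11))
print(z_rot(4.658688e10, T=20.0))                    # B_0 = 46586.88 MHz
```

With inputs of this order, the abundance comes out in the 10^-10–10^-9 range, which is the scale of the values reported in the discussion below.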
§ DISCUSSION

§.§ Ortho-H_2O Abundances

The typical ortho-H_2O abundances of these four clumps are in the range 3.7× 10^-10 – 5.8× 10^-10 for G267.9–1.1, 4.1× 10^-10 – 4.2× 10^-10 for G268.4–0.9, 3.8× 10^-10 – 5.9× 10^-10 for G333.1–0.4 and around 5.7× 10^-10 for G336.5–1.5. The typical ortho-H_2O abundances are at a low level compared with those of cold (T < 50 K) GMC cores estimated with the same principle by <cit.>. The upper limits of the ortho-H_2O abundances of these four clumps are on the order of 10^-10, lower than the abundances of most of the GMC cores in <cit.>. In our estimate, the effective excitation rates we adopted for para-H_2 (j = 0) are larger than those from <cit.> by a factor of ∼1–3 at temperatures from 20 K to 80 K. This fact should be noticed when comparing our results with the results in <cit.> or <cit.> (see Fig. <ref>).

The low abundances may be caused by the low temperatures of these clumps if we consider the water vapor to originate from the interior of the clumps. Even at temperatures as low as 10 K, water can form in the ISM (). However, at such low temperatures and high densities, the freeze-out procedure dominates (), and only when the temperature is above about 100 K () can water molecules be desorbed through thermal sublimation. Also, in the interior of a dense clump, the desorption of frozen water molecules is unlikely to be caused by photodesorption. Although there is an averaging effect for the temperature in the large SWAS beam (and likewise in the ortho-H_2O pixels), we can see that the areas along the line of sight from which the ortho-H_2O emission originates are quite cold, or in other words, are not warm enough to produce much water vapor. Consequently the ortho-H_2O abundances are low. On the other hand, since these clumps are all located in star forming regions, the ortho-H_2O emission can therefore originate primarily at intermediate depths of these clumps, where neither photodissociation nor the freezing out of the H_2O molecules but the photodesorption process dominates, according to a model for the temperature and chemical structure in molecular clouds (). Thus, since we took the H_2 column densities along the line of sight to estimate the abundances, this results in an apparently low ortho-H_2O abundance for the whole clump, while in the photodesorption layer the water vapor is actually more abundant.

In addition, there are other possible factors which can cause the apparently low ortho-H_2O abundances of these clumps but are masked by the averaging effect of the large SWAS beam. For example, small structures in the clump such as a hot outflow may contribute the majority of the observed gaseous water. The water abundance may vary greatly within the same clump. A similar phenomenon has been confirmed in the outflow powered by L1157-mm (a low-mass Class 0 protostar), in which the water abundance of the hot component is about two orders of magnitude higher than that of the nearby colder component (). In the areas where we estimated the ortho-water abundance, the maximum CS excitation temperatures of G267.9–1.1, G268.4–0.9 and G336.5–1.5 are 165.2 K (with the second largest value being 86.6 K), 15.8 K and 42.3 K, respectively (and for G333.1–0.4, the average kinetic temperature is 31.9 K, according to <cit.>). For G267.9–1.1, there is a possibility that the warm component makes a large contribution to the production of gaseous water.
However, we cannot infer more information on structures smaller than the SWAS beam size, which could further reveal the origin of the gaseous water. If the ortho-H_2O 1_10 – 1_01 line originates from the same gas as the CS lines, as we assumed when we estimated the ortho-water abundance, then the CS lines may help to find traces of outflows. The CS spectra of G267.9–1.1 have broad wings. We found that G267.9–1.1 as well as G268.4–0.9 does show velocity variation over the clump on its channel map, but the spatial resolution of the CS data is not high enough to identify the outflows. <cit.> have mapped G268.4–0.9 (G268.42–0.85) in the CS J = 5 - 4 (SEST, 20” sampling spacing) and J = 7 - 6 (the CSO telescope, 10” sampling spacing) lines. They used the Maximum Entropy Method (MEM) deconvolution technique to achieve higher angular resolution (). In their study, the CS (5–4) map shows two peaks with an LSR velocity difference of about 0.7 km s^-1, and on the CS (7–6) map a bipolar structure was identified, but no further temperature information was offered for this bipolar structure ().

Admittedly, it is somewhat arbitrary to assume such a high density over the whole area in which we estimated the ortho-H_2O abundance in each clump. There are very likely to be H_2 density gradients in these areas. If we adopt 10^4 cm^-3 rather than 10^6 cm^-3 as the H_2 density in the ortho-H_2O abundance estimation, then the typical ortho-H_2O abundances will be of the order of 10^-8, the same as those of most of the GMC cores in <cit.>.

§.§ N_2H^+ Abundances

The N_2H^+ abundances of these four clumps are in the range of 1.0× 10^-10 - 1.5× 10^-8 for G267.9–1.1, 6.1× 10^-11 - 4.3× 10^-9 for G268.4–0.9, 2.6× 10^-10 - 4.2× 10^-9 for G333.1–0.4 and 5.6× 10^-11 - 1.4× 10^-9 for G336.5–1.5. The distribution of the N_2H^+ abundance in each clump has a common decreasing trend toward the center. Although the abundance distributions we derived are only projected results in a plane perpendicular to the line of sight, we notice that <cit.> suggested that N_2H^+ is likely to be distributed primarily in the clump rather than in the surface layers. When it comes to the depletion of N_2H^+, CO and electrons are the major destroyers of N_2H^+ in the gas phase, and their reaction with N_2H^+ generates N_2 (, ). According to <cit.>, in the dense cores the neutrals (including CO) will rapidly freeze onto the grains. Consequently, the abundance of N_2H^+ will increase as a result of the disappearance of CO (). However, when we focus on temperature, we notice that when the dust temperature rises from 10 K to about 30 K, CO begins to sublimate (). Thus, based on the gas temperatures we estimated (the dust temperatures are approximately equal to the local gas temperatures at such a high density as we had assumed), we can infer that in the high density center of the clump, the gas and dust are warm enough and the gaseous CO is abundant. N_2H^+ is therefore depleted by CO, and that leads to a drop in the N_2H^+ abundance toward the center.

§ CONCLUSIONS

We studied G267.9–1.1, G268.4–0.9, G333.1–0.4 and G336.5–1.5, four of the brightest ortho-H_2O sources in the southern sky observed by SWAS. We estimated their CS excitation temperatures in the CS (5–4)-traced areas and estimated their masses. Based on the temperatures and the masses, we then derived their average H_2 column densities and estimated the ortho-H_2O and N_2H^+ abundances.
The typical molecular clumps in our study have H_2 column densities of ∼ 10^22 to 10^23 cm^-2 and ortho-H_2O abundances of 10^-10. The low ortho-H_2O abundances can be caused by the freeze-out of H_2O in the interior of the clumps due to the low temperatures, if the ortho-H_2O originates from the interior of the clumps. The typical N_2H^+ abundances of the four clumps in this paper range from 10^-11 to 10^-9. Since the dust in the dense central areas of the clumps is at temperatures such that CO can be released into the gas phase, the common trend of abundance decreasing toward the center of each clump can be a result of the depletion of N_2H^+ caused by CO.

We sincerely thank the anonymous referee and the scientific editor for their wholehearted and patient help and the valuable advice they provided to help us improve this paper. This research is supported by the National Basic Research Program of China (973 program, Nos. 2012CB821800 and 2015CB857100), the National Natural Science Foundation of China (No. 11373038) and the Strategic Priority Research Program “The Emergence of Cosmological Structures" of the Chinese Academy of Sciences (Grant No. XDB09000000).

§ SUPPLEMENTARY MATERIAL

In the temperature range considered (20 K to 80 K), para- and ortho-H_2 are barely populated at energy levels other than j_2 = 0 and j_2 = 1, respectively (j_2 is the rotational level of H_2). We adopted the effective rate coefficients at j_2 = 0 and j_2 = 1 from <cit.> and <cit.>.

For G267.9–1.1 and G336.5–1.5, we estimated the ortho-H_2O abundance at the pixel of maximum ortho-H_2O integrated intensity in the FITS format data. Although there is more than one sampling cell with emission in the CLASS format data, when written in FITS format only the pixel with maximum integrated intensity (190×190 arcsec^2, the boxes with white dashed lines in Fig. <ref>) is located within the areas with calculated H_2 column densities. Nevertheless, the ortho-H_2O abundance estimated at the pixel of maximum integrated intensity still characterizes the ortho-H_2O abundance of these clumps reasonably well. For G268.4–0.9 and G333.1–0.4, the CLASS format ortho-H_2O data have only one sampling cell, and the center of the cell coincides with the corresponding coordinates in Table 3 (with a deviation of 0.01 s in RA). We therefore estimated the ortho-H_2O abundance in the only sampling cell (the box with white dashed lines in Fig. <ref>, 190×190 arcsec^2, the same as the pixel size of the FITS format data). We calculate the ortho-H_2O abundance of G336.5–1.5 in the overlapping area of the white box (dashed lines) and the white box (solid lines) with the corresponding average T_k, average H_2 column density and proportionally corrected ortho-H_2O integrated intensity.

When calculating the partition functions of CS and N_2H^+, we summed the series term by term from the J = 0 level, in order of increasing rotational quantum number. When the value of a term falls below 0.1% of the sum of all the terms of the lower levels, that term is the last one added to the summation.

[Aikawa et al.(2001)]Aikawa+etal+2001 Aikawa, Y., Ohashi, N., Inutsuka, S.-i., Herbst, E., & Takakuwa, S. 2001, , 552, 639
[Aikawa et al.(2005)]Aikawa+etal+2005 Aikawa, Y., Herbst, E., Roberts, H., & Caselli, P. 2005, , 620, 330
[Ashby et al.(2000)]Ashby+etal+2000 Ashby, M. L. N., Bergin, E. A., Plume, R., et al. 2000, , 539, L115
[Balser(2006)]Balser+2006 Balser, D. S. 2006, , 132, 2326
[Beard(1966)]Beard+1966 Beard, M.
1966, Australian Journal of Physics, 19, 141
[Bergin & Tafalla(2007)]Bergin+Tafalla+2007 Bergin, E. A., & Tafalla, M. 2007, , 45, 339
[Bergin & van Dishoeck(2012)]Bergin+vanDishoeck+2012 Bergin, E. A., & van Dishoeck, E. F. 2012, Philosophical Transactions of the Royal Society of London Series A, 370, 2778
[Braz et al.(1989)]Braz+etal+1989 Braz, M. A., Gregorio Hetem, J. C., Scalise, Jr., E., Monteiro Do Vale, J. L., & Gaylard, M. 1989, , 77, 465
[Busquet et al.(2014)]Busquet+etal+2014 Busquet, G., Lefloch, B., Benedettini, M., et al. 2014, , 561, A120
[Caswell et al.(1974)]Caswell+etal+1974 Caswell, J. L., Batchelor, R. A., Haynes, R. F., & Huchtmeier, W. K. 1974, Australian Journal of Physics, 27, 417
[Cheung et al.(1969)]Cheung+etal+1969 Cheung, A. C., Rank, D. M., & Townes, C. H. 1969, , 221, 917
[Dabrowski(1984)]Dabrowski+1984 Dabrowski, I. 1984, Canadian Journal of Physics, 62, 1639, in Ro-vibrational Collisional Excitation Database: BASECOL http://basecol.obspm.fr/
[Daniel et al.(2011)]Daniel+etal+2011 Daniel, F., Dubernet, M.-L., & Grosjean, A. 2011, , 536, A76, in Ro-vibrational Collisional Excitation Database: BASECOL http://basecol.obspm.fr/
[Doty & Neufeld(1997)]Doty+Neufeld+1997 Doty, S. D., & Neufeld, D. A. 1997, , 489, 122
[Dowell et al.(2003)]Dowell+etal+2003 Dowell, C. D., Allen, C. A., Babu, R. S., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 4855, Millimeter and Submillimeter Detectors for Astronomy, eds. T. G. Phillips, & J. Zmuidzinas, 73
[Dubernet et al.(2009)]Dubernet+etal+2009 Dubernet, M.-L., Daniel, F., Grosjean, A., & Lin, C. Y. 2009, , 497, 911, in Ro-vibrational Collisional Excitation Database: BASECOL http://basecol.obspm.fr/
[Emprechtinger et al.(2010)]Emprechtinger+etal+2010 Emprechtinger, M., Lis, D. C., Bell, T., et al. 2010, , 521, L28
[Franklin et al.(2008)]Franklin+etal+2008 Franklin, J., Snell, R. L., Kaufman, M. J., et al. 2008, , 674, 1015
[Frogel & Persson(1974)]Frogel+Persson+1974 Frogel, J. A., & Persson, S. E. 1974, , 192, 351
[Goldsmith et al.(1997)]Goldsmith+etal+1997 Goldsmith, P. F., Bergin, E. A., & Lis, D. C. 1997, , 491, 615
[Goldsmith & Langer(1999)]Goldsmith+Langer+1999 Goldsmith, P. F., & Langer, W. D. 1999, , 517, 209
[Goss & Shaver(1970)]Goss+Shaver+1970 Goss, W. M., & Shaver, P. A. 1970, Australian Journal of Physics Astrophysical Supplement, 14, 1
[Hollenbach et al.(2009)]Hollenbach+etal+2009 Hollenbach, D., Kaufman, M. J., Bergin, E. A., & Melnick, G. J. 2009, , 690, 1497
[Juvela(1996)]Juvela+1996 Juvela, M. 1996, , 118, 191
[Kaufmann et al.(1976)]Kaufmann+etal+1976 Kaufmann, P., Gammon, R. H., Ibanez, A. L., et al. 1976, , 260, 306
[Kewley et al.(1963)]Kewley+etal+1963 Kewley, R., Sastry, K. V. L. N., Winnewisser, M., & Gordy, W. 1963, , 39, 2856
[Kristensen & van Dishoeck(2011)]Kristensen+vanDishoeck+2011 Kristensen, L. E., & van Dishoeck, E. F. 2011, Astronomische Nachrichten, 332, 475
[Kristensen et al.(2012)]Kristensen+etal+2012 Kristensen, L. E., van Dishoeck, E. F., Bergin, E. A., et al. 2012, , 542, A8
[Lapinov et al.(1998)]Lapinov+etal+1998 Lapinov, A. V., Schilke, P., Juvela, M., & Zinchenko, I. I. 1998, , 336, 1007
[Li et al.(2004)]Li+etal+2004 Li, D., González-Alfonso, E., & Melnick, G. 2004, in Bulletin of the American Astronomical Society, 36, American Astronomical Society Meeting Abstracts #204, 985
[Li et al.(2007)]Li+etal+2007 Li, D., Velusamy, T., Goldsmith, P. F., & Langer, W. D. 2007, , 655, 351
[Lockman(1979)]Lockman+1979 Lockman, F. J.
1979, , 232, 761
[Lowe et al.(2014)]Lowe+etal+2014 Lowe, V., Cunningham, M. R., Urquhart, J. S., et al. 2014, , 441, 256
[Manchester & Goss(1969)]Manchester+Goss+1969 Manchester, B. A., & Goss, W. M. 1969, Australian Journal of Physics Astrophysical Supplement, 11, 35
[Mangum & Shirley(2015)]Mangum+Shirley+2015 Mangum, J. G., & Shirley, Y. L. 2015, , 127, 266
[Mardones et al.(1997)]Mardones+etal+1997 Mardones, D., Myers, P. C., Tafalla, M., et al. 1997, , 489, 719
[Melnick(1995)]Melnick+1995 Melnick, G. J. 1995, in Astronomical Society of the Pacific Conference Series, Vol. 73, From Gas to Stars to Dust, eds. M. R. Haas, J. A. Davidson, & E. F. Erickson, 673
[Melnick et al.(2000)]Melnick+etal+2000 Melnick, G. J., Stauffer, J. R., Ashby, M. L. N., et al. 2000, , 539, L77
[Melnick et al.(2011)]Melnick+etal+2011 Melnick, G. J., Tolls, V., Snell, R. L., et al. 2011, , 727, 13
[Phillips et al.(1996)]Phillips+etal+1996 Phillips, T. R., Maluendes, S., & Green, S. 1996, , 107, 467
[Rohlfs & Wilson(1996)]Rohlfs+Wilson+1996 Rohlfs, K., & Wilson, T. L. 1996, Tools of Radio Astronomy (Springer-Verlag Berlin Heidelberg New York)
[Shaver & Goss(1970)]Shaver+Goss+1970 Shaver, P. A., & Goss, W. M. 1970, Australian Journal of Physics Astrophysical Supplement, 14, 133
[Snell et al.(2000a)]Snell+etal+2000a Snell, R. L., Howe, J. E., Ashby, M. L. N., et al. 2000a, , 539, L93
[Snell et al.(2000b)]Snell+etal+2000b Snell, R. L., Howe, J. E., Ashby, M. L. N., et al. 2000b, , 539, L101
[Sugitani & Ogura(1994)]Sugitani+Ogura+1994 Sugitani, K., & Ogura, K. 1994, , 92, 163
[Thompson et al.(2004)]Thompson+etal+2004 Thompson, M. A., Urquhart, J. S., & White, G. J. 2004, , 415, 627
[Valdettaro et al.(2007)]Valdettaro+etal+2007 Valdettaro, R., Chapman, J. M., Lovell, J. E. J., & Palla, F. 2007, , 466, 247
[van der Tak et al.(2007)]vanderTak+etal+2007 van der Tak, F. F. S., Black, J. H., Schöier, F. L., Jansen, D. J., & van Dishoeck, E. F. 2007, , 468, 627
[van Dishoeck et al.(2013)]vanDishoeck+etal+2013 van Dishoeck, E. F., Herbst, E., & Neufeld, D. A. 2013, Chemical Reviews, 113, 9043
[Wannier et al.(1991)]Wannier+etal+1991 Wannier, P. G., Pagani, L., Kuiper, T. B. H., et al. 1991, , 377, 171
[Wilson et al.(1970)]Wilson+etal+1970 Wilson, T. L., Mezger, P. G., Gardner, F. F., & Milne, D. K. 1970, , 6, 364
[Yamaguchi et al.(1999)]Yamaguchi+etal+1999 Yamaguchi, R., Saito, H., Mizuno, N., et al. 1999, , 51, 791
[Zinchenko et al.(1995)]Zinchenko+etal+1995 Zinchenko, I., Mattila, K., & Toriseva, M. 1995, , 111, 95
| http://arxiv.org/abs/1704.08431v1 | {
"authors": [
"Bing-Ru Wang",
"Lei Qian",
"Di Li",
"Zhi-Chen Pan"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170427043521",
"title": "Water abundance in four of the brightest water sources in the southern sky"
} |
Graduate School of Culture Technology and BK21 Plus Postgraduate Programme for Content Science, Korea Advanced Institute of Science & Technology, Daejeon, Republic of Korea 34141
Graduate School of Culture Technology and BK21 Plus Postgraduate Programme for Content Science, Korea Advanced Institute of Science & Technology, Daejeon, Republic of Korea 34141

In the modern era where highly-commodified cultural products compete heavily for mass consumption, finding the principles behind the complex process of how successful, “hit” products emerge remains a vital scientific goal that requires an interdisciplinary approach. Here we present a framework for tracing the cycle of prosperity-and-decline of a product to find insights into the influential and potent factors that determine its success. As a rapid, high-throughput indicator of the preference of the public, popularity charts have emerged as a useful information source for finding the market performance patterns of products over time, which we call the on-chart life trajectories, showing how the products enter the chart, fare inside it, and eventually exit from it. We propose quantitative parameters to characterise a life trajectory, and analyse a large-scale data set of nearly 7 000 songs from Gaon Chart, a major weekly Korean Pop (K-Pop) chart, covering a span of six years. We find that a significant role is played in the on-chart success of songs by non-musical extrinsic factors such as the established fan base of the artist and the might of production companies, strongly indicative of the commodified nature of modern cultural products. We also review a possible mathematical model of this phenomenon, and discuss several nontrivial yet intriguing classes of trajectories, which we call the “Late Bloomers” and the “Re-entrants”, that appear to be strongly driven by serendipitous exposure on mass media and the changes of seasons.

On-Chart Success Dynamics of Popular Songs
Juyong Park
December 30, 2023
==========================================

§ INTRODUCTION

Competition for survival and success is a crucial mechanism underlying the evolution of species or actors in a complex system, be it biological, social, or technological <cit.>. The increasing availability of large-scale data, along with remarkable progress in the theory and modeling of complex systems, has led to the emergence of the “science of success” that aims to reveal common patterns in the success of people or products in such diverse subjects as viral spreading of content, performance of athletes in sports, popularity of emergent technologies, and impact of scientific works <cit.>. The scientific study of success is also deeply related to the tradition of developing robust and effective ranking methods for identifying the most successful and superior actors in a competition system <cit.>.

Rankings of products and commodities serve a useful purpose for customers looking to purchase, or firms planning to advertise and market products. Cultural products such as popular songs are no exception, especially in this day and age where digital communication technology has enabled massive and efficient dissemination and consumption. Popularity charts have accordingly become more instantaneously updated, emerging as an essential reference for customers trying to make decisions in the face of a flood of new content and information, and a coveted platform that affords products more exposure and prolonged success <cit.>.
This brings up the hope that we may now be able to search for answers to many interesting questions into the underlying mechanisms of successful products, including “What are the features of `hit' products that can be learnt from their chart dynamics?”, “What are the factors, intrinsic or extrinsic, behind the success of products?”, and so forth.

In this paper, as an attempt to answer these questions using high-quality contemporary data and scientific methodology, we study the chart dynamics of K-Pop (Korean pop) songs, i.e. how the songs fare on popularity charts and what factors are behind it. K-Pop, a relatively recent international cultural sensation from South Korea, is characterised by catchy tunes and a heavy use of audiovisual elements. One of the early pinnacles of its global success happened in July of 2012 with PSY's Gangnam Style, which reached number two on the U.S. Billboard Hot 100. Its online success can be said to be even more phenomenal, scoring nearly three billion views on YouTube as of this writing (April 2017) to become the most-watched online video. Another prominent characteristic of K-Pop is the prominence of “idols”, young, highly-trained dancer-singers who are designed to be the hub around which an entire industry of production and merchandise/service providers is organised <cit.>. This type of intensive commercialisation places K-Pop at the forefront of the contemporary music industry, where the intrinsic, musical properties of a song such as its melody or lyrics are relegated to being merely one factor that affects its success <cit.>. Prompted by this sweeping trend, in this paper we study data from a prominent K-Pop chart to characterise the chart dynamics of successful songs, and then determine the extent to which such extrinsic factors are correlated with their success.

§ POPULARITY CHARTS AND THE LIFE TRAJECTORY OF A PRODUCT

§.§ Data, methodology, and life trajectory patterns

We analyze the data from Gaon Music Charts, a collection of weekly music charts serviced by the association of Korea's music industry. We focus on the Gaon Digital Chart, the signature chart, similar in spirit to the U.S. Billboard Hot 100. It ranks the top-100 songs (both domestic K-Pop and foreign songs in a single chart) according to their digital sales figures, including downloads, online streaming counts, and offline album sales. (The exact weights given to each factor have not been made public, although the rankings themselves are open to anyone.) Our data cover all weeks between the second week of 2010 (the actual beginning of Gaon) and the 53rd week of 2015, for a total of 313 weeks. During this period there have been in total 7 560 songs that appeared at least once on the chart. The actual numbers of weeks spent on the chart, however, vary widely as shown in Fig. <ref>: 36.4% of the songs (2 750) appeared for one week only, whereas 9.1% of the songs (689) stayed on the chart for more than ten weeks. The longest life on the chart was enjoyed by IU's Neoui Uimi (The Meaning of You), for a record 73 weeks. The same artist's Joeun Nal (Good Day) and PSY's Gangnam Style topped the chart at number one for the most weeks (5).
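As a side illustration of how the weeks-on-chart statistics above can be tallied, here is a minimal sketch (ours; the week-indexed data layout is an assumption for illustration, not Gaon's actual distribution format):

```python
# Minimal sketch: tally each song's total weeks on the chart from weekly
# top-100 snapshots. `chart` maps a week index to (song_id, rank) pairs;
# this layout is an assumption for illustration only.
from collections import Counter

def weeks_on_chart(chart):
    counts = Counter()
    for entries in chart.values():
        for song_id, rank in entries:
            counts[song_id] += 1
    return counts

# toy data: two weeks of a tiny "chart"
chart = {1: [("a", 1), ("b", 2)], 2: [("a", 1), ("c", 2)]}
counts = weeks_on_chart(chart)
print(counts)                                                   # a: 2, b: 1, c: 1
print(sum(1 for c in counts.values() if c == 1) / len(counts))  # one-week fraction
```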
For most songs, the weekly rankings follow roughly a similar pattern (notable exceptions are discussed later): a song appears in the chart, reaches its peak rank at some point, declines, and eventually leaves the chart. Upon inspection of numerous curves made by the songs' rankings, we find it useful to characterise the life trajectory of a song via the following parameters:

- the song's inaugural rank on the chart [Since the Gaon Digital Chart compiles weekly data starting on Sundays, the first visible rank of a song can be significantly affected by the point in the week at which it is released: a song released on a Monday, for instance, would have more time to accumulate stats (downloads and streaming counts) than one released closer to a Sunday. To alleviate this problem, we actually use a song's second-week rank as its inaugural rank, reducing the number of songs analyzed to 4 810.];
- the peak rank;
- the time taken to reach the peak since the inaugural appearance on the chart;
- the time taken to exit from the chart since the peak rank.

An example is shown in Fig. <ref>(a) for girlband EXID's Wiarae (Up and Down). It first entered the chart at rank 7, reaching its peak (rank 1) 6 weeks later, then fell gradually (save for some brief rallies) for 30 weeks until it left the chart. In Fig. <ref>(b) we show the probability density functions of the four parameters using the Kernel Density Estimation (KDE) method in the R statistical package <cit.>. The two rank parameters are slightly skewed towards high values, whereas the two time parameters are more heavily skewed towards lower values, meaning that an overwhelming majority of songs enjoy only brief stints on the chart. We can also define the overall success of a song to be the area under its trajectory curve, so that the longer-surviving, higher-ranking songs can naturally be called more successful.

The relationships between the parameters reveal more interesting patterns about the trajectories of the songs, presented in Fig. <ref> and <ref>. In Fig. <ref>(a), the inaugural ranks are plotted along the x-axis, and the maximum ranks are plotted along the y-axis. The data points appear to be roughly evenly spread out here, with the exception of a high density of points along the diagonal where the inaugural and peak ranks are equal, meaning that for many songs the inaugural rank was indeed the peak. In Fig. <ref>(b), the times until the peak are plotted along the x-axis, and the times until exiting the chart since the peak are plotted along the y-axis. Since most songs stay on the chart for only a short period of time as shown in Fig. <ref>, they are largely concentrated in the bottom left area. But overall the post-peak time exceeds the time to peak, meaning that for most songs the decay is longer than the ascension.

In order to understand how these parameters are related to the extrinsic properties of the songs, we utilize additional metadata of the artists and the genres. The artist metadata consists of three properties: Gender (Male, Female, or Mixed), Type (Solo, Group, or Collaboration), and Nationality (Domestic or Foreign). The genre metadata consists of five major classes, Ballad, Dance, Rap and Hiphop, R&B and Soul, and Rock, each of which accounts for at least five percent of the songs in the entire data set. We use the analysis of variance (ANOVA) method to determine whether there exist statistically significant differences between groups of songs with those extrinsic properties. We compare the variance within groups and the variance between groups; if the latter is significantly larger than the former, ANOVA lets us conclude that a difference does exist between the groups <cit.>.
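Before turning to the group comparisons, the following sketch (our illustration, not the authors' code) shows how the four trajectory parameters and the overall-success area could be extracted from a song's weekly rank series; the inverted-rank (101 - r) convention for the area is an assumption borrowed from the chart success index defined later in the text:

```python
# Minimal sketch: trajectory parameters from a song's weekly rank series.
# `ranks` holds the song's weekly positions (1 = top) for its consecutive
# weeks on the chart; per the convention above, the series would start from
# the song's second week on the chart.

def trajectory_parameters(ranks):
    inaugural_rank = ranks[0]
    peak_rank = min(ranks)                  # smaller number = higher rank
    weeks_to_peak = ranks.index(peak_rank)  # time from entry to the peak
    weeks_to_exit = len(ranks) - 1 - weeks_to_peak  # time from peak to exit
    success = sum(101 - r for r in ranks)   # area under the inverted trajectory
    return inaugural_rank, peak_rank, weeks_to_peak, weeks_to_exit, success

print(trajectory_parameters([7, 3, 2, 1, 1, 4, 10, 35, 80]))
# -> (7, 1, 3, 5, 766)
```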
The original ANOVA requires the population to be close to a normal distribution and the population variances to be equal. For our kind of data, which do not satisfy those conditions, we use the Kruskal-Wallis test, a non-parametric version of ANOVA <cit.>. If the p-value from the test is less than or equal to the significance level (0.05), we reject the null hypothesis and conclude that not all populations are equal. In Table <ref>, we specify the average trajectory parameters of all groups based on the metadata and show the p-values from the Kruskal-Wallis test.

The p-values shown in Table <ref> tell us that in most cases there exist statistically significant differences (p<0.05) depending on the properties of the songs. It is noteworthy that songs exhibit differences based on their extrinsic properties. However, the significances are generally weaker for the Gender classes; in particular, the peak rank in regard to Gender (p=0.31) and the exit time in regard to Nationality (p=0.12) show insignificant differences. This means that the artist's Gender is not a strong factor in a song's peak rank, and that domestic (Korean) and foreign songs do not differ in the time they take to exit the chart.

To further characterize the differences between groups, we conducted follow-up tests to see whether certain extrinsic properties led to different chart dynamics. We see that "Male" and "Solo" artists show significantly lower values on all parameters in Table <ref>. Their songs therefore enter the chart lower, reach lower peak ranks, and exit the chart quickly. Amongst the genres, Ballad, R&B and Soul, and Rock do not show significant differences from one another, while the Dance and Hiphop genres are similar to each other. Ballads are less successful than Dance and Hiphop music, reflecting the current state of the dance-heavy K-Pop.

When we study the more successful, "hit" songs, distinct patterns emerge. In Fig. <ref>(a) and (b) we show analogous plots for the 689 songs that stayed on the chart for longer than ten weeks, accounting for 9.1% of the songs in the full data set. In Fig. <ref>(a) we see that now the songs occupy the top right side, meaning that successful songs are likely to be already ranked high in the early phase. This implies that a song's potential for on-chart success is realised at the very beginning, and that merely appearing on the chart does not necessarily translate into an opportunity for attaining a higher rank. The message is similar in Fig. <ref>(b), where the time to peak (x-axis) ranges merely between one and ten weeks (with the exception of Party Rock Anthem by LMFAO, which is not a K-Pop song but a foreign one; we will discuss it later in more detail) while the post-peak time on the y-axis assumes much wider values, with a maximum of 71 weeks. Consider PSY's Gangnam Style and IU's Neoui Uimi in Fig. <ref>(c) A and B, which do show the typical behavior of K-Pop: both take no time, or a very short time, on the chart to reach their peaks.

We also find that foreign songs tend to show a noticeably different general behavior from that of K-Pop. Take New Thang by Redfoo and Party Rock Anthem by LMFAO, for instance, which are the outliers in Fig. <ref>(a) and (b). New Thang entered the chart at rank 83, rising in the chart quite slowly (see Fig. <ref>(c) C), peaking at rank 65 and staying on the chart for a total of 11 weeks. Party Rock Anthem, on the other hand, took 24 weeks to peak, but exited the chart in 8 weeks. In fact, the other outliers in the plots of Fig.
<ref>(a) and (b) tend to be foreign songs as well, taking a longer time to reach their peaks than K-Pop songs: six out of the eight songs below the diagonal x+y=100 in Fig. <ref>(a) are foreign songs, and below y=x in Fig. <ref>(b) (meaning the time to reach the peak exceeds the post-peak time), four out of the seven songs were foreign, although only 5.41% (260) of the songs analyzed (4 810) are foreign. The Kruskal-Wallis test also reveals significant differences between domestic and foreign songs within the group of 689 songs that stayed on the chart for more than ten weeks. The average trajectory parameters of these groups and the p-values from the Kruskal-Wallis tests are given in Table <ref>. Here, unlike in the entire group of songs, artist features and genres do not show significant differences (p>0.05). However, the domestic and foreign artist groups exhibit significant differences regarding the inaugural rank, the peak rank, and the time to peak (p<0.0001). Although the overall successes are not significantly different, foreign songs have lower inaugural and peak ranks, and take longer times to reach their peaks. These observations regarding hit songs and their artist properties imply that while non-musical extrinsic properties of popular songs do in general matter for songs' on-chart behaviors, their effects become weaker when successful songs are considered, and the intrinsic, musical qualities of the songs become more important.

§.§ Chart dynamics of K-Pop vs foreign songs

We now take a deeper look at the differences between K-Pop and foreign songs. To do so we analyze two separate subcharts of Gaon, primarily due to the small sample size of foreign songs in the original data: of the N=7 560 songs that ever appeared in the main Gaon digital chart, only three percent (260) are foreign. The subcharts we use here are the digital top-100 charts exclusively for K-Pop and foreign songs. This yields a much larger set of foreign songs for us to analyze: 7 602 songs for K-Pop (not much of a difference, since the main chart was dominated by domestic songs to begin with), and 3 855 for foreign songs. Again focusing on those that survived for more than ten weeks on each chart (702 K-Pop and 327 foreign songs), we find that the total life on the chart already shows significant differences: K-Pop songs survive on average 15.7±0.4 weeks with a maximum of 73 weeks by IU's Neoui Uimi, while foreign songs survive 37.3±1.0 weeks with a maximum of 235 weeks by Maroon 5's Moves Like Jagger (Fig. <ref>(e) B). Their life trajectories are plotted separately in Figs. <ref>(a) to (d). They again confirm that K-Pop songs reach their peaks early, with only 2.14% entering the chart under the 40th place, and the longest time to peak equal to ten weeks, for Geu namjan marya (Because of You) by M.C the Max. Foreign songs, on the other hand, while fewer than half the number of K-Pop songs in total, show a wider distribution of the parameters in Fig. <ref>(c) and (d): 18.7% entered the chart ranked 40th or lower, and 46 songs took more than ten weeks to reach their peaks, with a maximum of 220 weeks for Adele's Someone Like You, which entered the chart at rank 98.

The question is, then, which factors are behind such visible differences between K-Pop and foreign songs. One could argue that it is the music itself, if there really are intrinsic differences between K-Pop and foreign pop. Even if such differences from a musical standpoint did exist at all, however, they would be quite subtle, given that nearly all modern popular genres in foreign pop (rock and roll, ballads, synth pop, electronica, etc.) are also well represented in K-Pop.
We therefore find it unconvincing that musical differences would be the sole determinant of the differences in the chart dynamics of songs. If not the intrinsic properties, then some external factors may be at play here. While there could be many, here we study two that are widely believed to be defining characteristics of contemporary K-Pop: the artists and the machinery of the production companies.

§ ARTISTS, PRODUCTION COMPANIES OF K-POP

The observed fact that most songs reach their peaks in the very early phases of their lives does seem unnatural and counterintuitive: it means that the songs have already realised their potential for success during the one short week before entering the chart or, even more bizarre, before their release. We believe that the clues to the origin of such behavior come from a dominant recent trend in how new K-Pop songs are produced and marketed: production companies are increasingly focusing on pre-release marketing to rally the fandom that props up online streaming counts and downloads in the very early stages of the song's release <cit.>. This happens in tandem with the actions of the fan base of the artist, who are not passive consumers in the music industry but active disseminators of the songs to the general public outside the fan base. This means that the production companies in K-Pop (influential entities that recruit, train, finance, manage, and market artists) and the huge fan bases that reflect the artists' prior success and reputation may hold significant sway over the chart success of new songs <cit.>, possibly even more than their qualities may. Artists and producers of foreign pop songs, on the other hand, generally do not maintain such a strong presence in Korea, which explains their more natural dynamics on the charts (Fig. <ref>(e) and (f)).

We now investigate the influence of artists and production companies on a song's chart dynamics. We start by defining a quantity that represents the influence of an artist or a production company. Possible candidates could include the size of the fan base, the company's sales volume, its stock price, or the number of affiliated artists. But here we focus on chart performance as the measure of an artist's or a company's influence. In other words, we posit that a history of chart success is an indicator of the entity's influence and future success. This is inspired by the so-called "Matthew effect", the rich-get-richer phenomenon often observed in social systems <cit.>. We define the Chart Success Index S_i of an artist or a company i as the sum total over the weekly ranks of all songs produced by the subject in the history of the Gaon Digital Chart, i.e.

S_i ≡ ∑_{w∈All Weeks} ∑_{s∈Ω^w_i} (101 - r^w_s),

where Ω^w_i = {s} refers to all songs from artist or company i ranked on the chart in week w, and r^w_s refers to the weekly rank of the song.

First, Fig. <ref>(a) shows the S values of the ten most successful artists by the last week in our data set. Most artists display a pattern of alternating rises and plateaus in the growth of S, the plateaus corresponding to the time between successive songs. While it is bands that generally occupy the top positions, IU is a notable exception. Second, Fig. <ref>(b) shows the S values of the top ten production companies. Most companies display linear growth patterns, unlike their artists, with the YG, SM, and Loen Entertainment corporations being the major three as of the last week in our data.
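The index defined above is straightforward to accumulate; a minimal sketch (ours; the tuple-based input format is an assumption):

```python
# Minimal sketch: chart success index S_i = sum over all weeks and all of
# entity i's charting songs of (101 - rank), so a week at number one adds 100.
from collections import defaultdict

def chart_success_index(weekly_entries):
    """weekly_entries: iterable of (week, entity, rank) tuples (assumed format),
    where `entity` is an artist or a production company."""
    S = defaultdict(int)
    for week, entity, rank in weekly_entries:
        S[entity] += 101 - rank
    return S

entries = [(1, "IU", 1), (1, "PSY", 2), (2, "IU", 3)]
print(dict(chart_success_index(entries)))   # {'IU': 198, 'PSY': 99}
```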
To view the relationship between an entity's influence and the life trajectories of the songs it releases, we compute the correlation between the final value of S and the four curve parameters plus the overall success of each song (the area under its trajectory curve). The errors are estimated using the jackknife method <cit.>. Here we note that we consider the life trajectories of songs from debuting artists, since such artists are unknowns in the market, and their production companies' abilities and clout would thus be the only major external, non-musical factor behind their chart performance. Of the 934 production companies that produced at least one new artist on the chart, 514 produced only one. To eliminate errors originating from such small players, we consider the 193 artists (out of 1 680) that put more than ten songs on the chart, and the 55 companies that put nine or more debuting artists on the chart. The results are shown in Fig. <ref>(c) and (d). All parameters but the pre-peak time show strong positive correlations with S. This means that being a renowned artist or having a powerful production company behind one's back is highly correlated with general chart success (i.e. a long duration on the chart and a high peak rank), but regardless of the prior reputation or the company's influence the artist hits their peak early.

To a skeptical eye, this phenomenon may be a reason for doubting the role of the "quality" of a K-Pop song in its chart performance; since early peaking is universal while the peak rank is positively correlated with S, a high peak rank could be due simply to the marketing and the established fan base of a company (because it suggests that a K-Pop song's success was somewhat determined even before its release, before it had a chance to be introduced to the larger consumer base), and the long post-peak time would also be a straightforward consequence of that (the higher the rank, the more time it takes to leave the chart). It should be noted, however, that this is not a complete picture, and there is still evidence that quality and musical properties do matter in a song's chart success. Ironically, one could make the point from the dynamics of foreign songs in the Korean market, which lack such a level of support and marketing from production companies: foreign songs are slow to gain popularity, but they tend to have more staying power. It may be an unsatisfactory state of affairs for the business that the effect of marketing may eclipse the importance of musical quality for K-Pop, and it remains to be seen how much of a role quality plays in the long run <cit.>.

§ ENDOGENOUS VERSUS EXOGENOUS PEAKS

Our analysis of the impact of extrinsic properties, separate from the intrinsic ones, of popular songs is in line with earlier studies on the two types of dynamical features in complex networks believed to cause critical peaks: exogeneity and endogeneity <cit.>. Exogenous peaks occur in response to external kicks, while endogenous peaks are spontaneous internal fluctuations formalized by the theory of self-organized criticality. It is therefore important to distinguish between exogeneity and endogeneity in order to measure their respective effects and ultimately understand the fundamental dynamics underlying a system. In particular, Sornette <cit.> introduced a model of epidemic propagation which specifies that exogenous peaks occur abruptly and are followed by a power-law relaxation, while endogenous peaks are characterized by approximately symmetrical power-law growth and relaxation.
According to the model, endogenous peaks ascend and descend slowly, proportionally to 1/|t-t_c|^(1-2θ), where t=0 corresponds to the peak maximum and t_c is an additional positive parameter. On the other hand, the relaxation of exogenous peaks behaves like 1/(t-t_c)^(1-θ), which decays much faster than that of endogenous peaks. Sornette also found an average value of the exponent θ ∼ 0.3 ± 0.1. However, Lambiotte <cit.> observed that the relaxation process can just as well be seen as an exponential one that saturates toward an asymptotic state when focusing on the short time scale after a sales maximum (∼ 1 month). He suggested a thermodynamic model representing the decays with a relaxation coefficient λ, inversely proportional to the relaxation time, and found that the initial decay of exogenous and endogenous peaks occurs on different time ranges: <λ>_exo ∼ 2<λ>_endo = 0.14.

We apply these models to our data, limited to the 689 songs that stayed on the chart more than ten weeks so as to have sufficient data points. First, we transform the time series of the rank R of a song into a time series of instantaneous sales flux through the relation S=R^-γ, γ=0.5, following Sornette's approach. We fit the relaxation of the life trajectories with power-law and exponential models, and measure the root-mean-square deviation in order to quantify the goodness of fit; lower values of the deviation imply a better fit. Fig. <ref>(a) shows the power-law and exponential fits of the song Neoui Uimi, the longest-ranked on the chart, and Fig. <ref>(b) shows the changes in the root-mean-square deviation as a function of the time since the peak. At shorter times (<5 weeks), the exponential fit (red) showed similar or better results than the power-law fit (blue), but the power law seems to have lower residual errors at longer times, implying that the relaxation process behaves exponentially on the shorter time scale.

In addition, we extract endogenous and exogenous peaks inferred from their initial acceleration, defined here as the change in the sales flux S between chart entry and the peak divided by the time to the peak, based on the assumption that exogenous peaks occur abruptly while endogenous peaks form relatively slowly. The 30 peaks with the highest and the 30 with the lowest initial acceleration are classified as exogenous and endogenous, respectively. Meanwhile, as most songs (607 out of the 689) reach their peaks immediately upon release, their initial acceleration is not defined (both the change in S and the time to the peak are zero), so we classify them as a separate "Early Peaker" group to observe whether they show behaviors similar to those of endogenous or exogenous peaks. Fig. <ref>(a) shows the relaxation of sales after the peaks, obtained by averaging over the songs in each class, and Fig. <ref>(b) presents the cumulative probability of the relaxation coefficient λ in the three classes. Both figures clearly indicate that endogenous and exogenous peaks are significantly separated. The power-law exponents in Fig. <ref>(a) also approximately obey Sornette's predictions, 1-θ and 1-2θ. In addition, the average relaxation coefficient of exogenous peaks (0.178) is over twice that of endogenous peaks (0.076), implying that the relaxation time is significantly shorter for exogenous shocks.
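The model comparison described above can be sketched as follows (our illustration with toy data; scipy's curve_fit is used, the functional forms and the S = R^-0.5 transformation follow the text, and the fit bounds are our own choice for numerical stability):

```python
# Minimal sketch: compare power-law and exponential fits to the post-peak
# sales flux S(t) = R(t)^-0.5 of a song via the root-mean-square deviation.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, A, t_c, theta):
    return A / (t - t_c) ** (1.0 - theta)      # exogenous-type relaxation

def exponential(t, A, lam, S_inf):
    return S_inf + A * np.exp(-lam * t)        # relaxation coefficient lambda

def rmsd(y, y_fit):
    return np.sqrt(np.mean((y - y_fit) ** 2))

t = np.arange(1.0, 16.0)                       # weeks after the peak
ranks = np.array([1, 2, 3, 5, 8, 12, 17, 24, 32, 41, 52, 63, 75, 88, 99], float)
S = ranks ** -0.5                              # toy post-peak sales flux

p_pow, _ = curve_fit(power_law, t, S, p0=(1.0, 0.0, 0.3),
                     bounds=([0.0, -5.0, 0.0], [10.0, 0.99, 0.9]))
p_exp, _ = curve_fit(exponential, t, S, p0=(1.0, 0.1, 0.1))
print("power-law RMSD:  ", rmsd(S, power_law(t, *p_pow)))
print("exponential RMSD:", rmsd(S, exponential(t, *p_exp)))
```

The lower of the two deviations then decides which relaxation model describes a given song better, as in the figure discussed above.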
Interestingly, the early-peaker songs, which show no precursory pattern, also behave similarly to exogenous peaks in both the power-law exponent and the relaxation coefficients, leading us to infer that their early peaks can indeed be attributed to external forces rather than to spontaneous fluctuations.

§ DISCUSSIONS

In this paper we have studied the life trajectories of K-Pop and foreign songs on Korea's Gaon Charts. We have shown that a large majority of life trajectories can be represented with four parameters describing their peak rankings and durations on the charts. Many songs, especially K-Pop, were found to attain their peaks in the early phases, which indicates the influence of non-musical external factors such as the power of artists and production companies. We have demonstrated the possibility of utilizing popularity data to describe how the state of an industry affects the success of products, and we believe that the application is not necessarily limited to popular songs. We have explored only a small range of possibilities, and we hope to see our methods applied to more systems in a wider gamut of products and markets.

We now conclude this paper by introducing two classes of songs that do not follow the general trends of Fig. <ref> but are nevertheless interesting. The first class is the "Late Bloomers", which take a relatively long time since debut to climb up the chart. This is a very rare feat indeed: only 1.5% of the songs (112 in total) manage to climb in the rankings for three straight weeks. A good example is Wi-arae (Up and Down) by EXID: having failed to make it to Gaon upon release, it went viral on other social networking sites before taking the 1st place several weeks later, as shown in Fig. <ref>(a). Another example is Reset by Korean-American rapper Tiger JK. The song was intended as a soundtrack to a TV drama whose popularity pushed the song up onto the charts, and its peak in the middle of its trajectory matches the ratings peak of the TV show, see Fig. <ref>(b).

The second class of extraordinary patterns is the "Re-entrants", referring to the songs that return to the chart after falling from it the first time (we consider a song's life trajectory on the chart to have come to an end when it exits the chart and does not return within five weeks). A very small percentage of the songs (1.5%, or 116 songs) are such cases, and an even smaller number (39) stay on the chart for longer than five weeks after re-entry. A detailed review of the forces behind such behavior tells us that they are broadly of two kinds. The first is renewed media exposure and broadcasting. In particular, getting featured on audition programs such as K-Pop Star (the Korean version of American Idol) or on game shows centered on songs is a common way by which an old song experiences a surge of interest. Gaseum Sirin Iyagi (Words That Freeze My Heart) by Wheesung (Fig. <ref>(c)) and Yanghwa daegyo (Yanghwa Bridge) by Zion.T in Fig. <ref>(d) are famous examples helped by the mass media, sometimes gaining even more popularity after re-entry than upon release. The second is a seasonal effect, which is very straightforward: Christmas carols such as All I Want For Christmas Is You by Mariah Carey (Fig. <ref>(e)) are a good example. The K-Pop song Beotkkoch Ending (Cherry Blossoms Ending) by Busker Busker, popular upon its release, is an example that became a spring carol of sorts, re-entering the chart every spring in Korea, as seen in Fig. <ref>(f).
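The re-entry bookkeeping described above reduces to splitting each song's charting weeks into separate "lives"; a minimal sketch (ours; we read "does not return within five weeks" as an absence of more than five weeks ending a life, which is one possible interpretation of the boundary):

```python
# Minimal sketch: split a song's charting weeks into separate chart "lives",
# a new life starting when the song returns after being absent for more than
# five weeks (our reading of the paper's five-week rule).

def chart_lives(weeks, max_gap=5):
    """weeks: sorted week indices in which the song appeared on the chart."""
    lives, current = [], [weeks[0]]
    for w in weeks[1:]:
        if w - current[-1] <= max_gap:   # short gap: the same chart life
            current.append(w)
        else:                            # long absence: a re-entry
            lives.append(current)
            current = [w]
    lives.append(current)
    return lives

# charting in weeks 1-4, gone for almost a year, back in weeks 52-53:
print(chart_lives([1, 2, 3, 4, 52, 53]))   # [[1, 2, 3, 4], [52, 53]] -> Re-entrant
```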
To conclude, there is evidence that, absent powerful production companies, quality and other factors such as media exposure can lead to chart success, resulting in extraordinary life trajectories such as the late bloomers and the re-entrants.

§ ACKNOWLEDGMENTS

This work was supported by the BK21 Plus Postgraduate Organisation for Content Science, an IITP grant funded by the Korean government (MSIP-R0115-16-1006), and the National Research Foundation of Korea (NRF-20100004910, NRF-2016S1A2A2911945, and NRF-2016S1A3A2925033). | http://arxiv.org/abs/1704.08437v2 | {
"authors": [
"Seungkyu Shin",
"Juyong Park"
],
"categories": [
"physics.soc-ph",
"cs.SI"
],
"primary_category": "physics.soc-ph",
"published": "20170427052347",
"title": "On-Chart Success Dynamics of Popular Songs"
} |
Relativistic Jets in Core Collapse Supernovae

Tsvi Piran,^1∗ Ehud Nakar,^2 Paolo Mazzali,^3,4 Elena Pian^5,6
^1Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
^2Raymond and Beverly Sackler School of Physics & Astronomy, Tel Aviv University, Tel Aviv 69978, Israel
^3Astrophysics Research Institute, Liverpool John Moores University, IC2, Liverpool L3 5RF, UK
^4Max-Planck-Institut fur Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching, Germany
^5INAF, Istituto di Astrofisica Spaziale e Fisica Cosmica di Bologna, I-40129 Bologna, Italy
^6Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy
^∗E-mail: [email protected]
==========================================================================================

After several decades of extensive research, the mechanism driving core-collapse supernovae (CCSNe) is still unclear. A common candidate mechanism is a neutrino-driven outflow, but others have been proposed. Among those, a long-standing idea is that jets play an important role in SN explosions. Gamma-ray bursts (GRBs) that accompany rare and powerful CCSNe, sometimes called “hypernovae”, provide clear evidence for jet activity. The relativistic GRB jet punches a hole in the stellar envelope and produces the observed gamma-rays far outside the progenitor star. While SNe and jets coexist in long GRBs, the relation between the mechanisms driving the hypernova and the jet is unknown. Also unclear is the relation between the rare hypernovae and the more common CCSNe. Here we present observational evidence indicating that choked jets are active in CCSN types that are not associated with GRBs. A choked jet deposits all its energy in a cocoon. The cocoon eventually breaks out from the star, releasing energetic material at very high, yet sub-relativistic, velocities. This fast-moving material has a unique signature that can be detected in early-time SN spectra. We find clear evidence for this signature in several CCSNe, all involving progenitors that have lost all, or most, of their hydrogen envelope prior to the explosion. These include CCSNe that don't harbor GRBs or any other relativistic outflows. Our findings suggest a continuum of central engine activities in different types of CCSNe and call for a rethinking of the explosion mechanism of regular CCSNe.

Massive stars end their lives in supernova (SN) explosions, releasing typically ∼ 10^51 ergs (sometimes called FOE or Bethe) in kinetic energy and a fraction of that in visible light. As the star consumes its energy reservoir its core collapses (hence Core Collapse Supernova, CCSN) and becomes a compact object. A shock wave that propagates outwards ejects the envelope and synthesizes radioactive ^56Ni that powers part of the visible SN light.
So far, in addition to the explosions themselves, we have seen the massive stellar progenitors, the neutrinos produced by the newborn neutron star, the compact objects left behind (typically a neutron star) and the expanding matter (the supernova remnant). All these observations confirm the general picture outlined by Baade and Zwicky already in the 1930's <cit.>. However, while the basic picture is well understood, in spite of several decades of research the mechanism(s) powering the shocks that drive the SNe is not clear. Suggested models (see e.g. <cit.> and references therein) include neutrino heating, magnetohydrodynamic, thermonuclear, bounce-shock, acoustic and phase-transition mechanisms. The neutrino-driven explosion, possibly in combination with hydrodynamic non-spherical instabilities and non-radial flows, is the current favorite (at least for the most common core collapses, type II SNe), while others (e.g., the bounce-shock) seem highly unlikely. In spite of the importance of 3D effects, the neutrino-driven explosion is supposed to produce roughly spherical explosions. Among the other mechanisms, a long-standing idea, proposed already in the early 1970's <cit.>, is that jets (particularly magnetically driven ones) play an important role in SN explosions. Here we explore observational evidence for this idea.

Rare and powerful (typically 10^52 ergs) CCSNe, sometimes called hypernovae, accompany long Gamma-Ray Bursts (GRBs) (see e.g. <cit.>). These explosions involve two distinct components: a narrowly collimated relativistic jet that produces the GRB (see e.g. <cit.> and references therein) and a more isotropic (yet not necessarily spherically symmetric) massive SN explosion. The SN ejecta typically carries ∼10-100 times more energy than the GRB jet (see e.g. <cit.> and references therein). Thus, while the jet itself cannot drive the SN explosion, it is reasonable to expect that the rapidly rotating compact object that must be present at the center of the collapsing star to drive the GRB jet is related to the energy source that drives the SN explosion. In GRBs the jet successfully penetrates the massive stellar envelope and we observe its emission directly. The association of SNe with GRBs brings up several important questions. First, are there hypernovae where the GRB jet fails to break out, namely is choked within the stellar envelope? Second, do hidden jets exist in other types of CCSNe as well, and if they do, can we detect them? Finally, what is the relation, if any, between the explosion mechanism of GRB-associated SNe and that of other types of SNe? We address these questions here.

We first establish a clear observational signature of hidden jets. This signature can be detected in the early (first few days) spectra of CCSNe, provided that those arise in stars that have lost all (or almost all) of their hydrogen envelopes prior to the SN explosion, namely in type Ib/c, and possibly IIb, SNe. We then proceed to demonstrate that this signature has already been observed in several SNe and that it enables us to estimate the jet parameters (the total energy and opening angle). As a spherical shock wave generated at the center of the collapsing star propagates outwards it encounters a sharp density drop near the edge of the star. The shock then accelerates until it breaks out from the star.
As the shock accelerates it loses causal contact with the energy reservoir behind it, depositing less and less energy, E, into progressively faster material with velocity v. Regardless of the exact density profile near the stellar edge, the acceleration of the shock results in a rapidly decreasing profile of E(>v). For a typical envelope structure the fastest-moving material satisfies dE/dlog(v) ∝ v^-k, where 5 ≤ k ≤ 8 <cit.> (see Fig. <ref>).

As a relativistic jet carves its way through the stellar envelope, a double shock (forward-reverse) structure forms at its head <cit.>. The head propagates with a velocity much slower than the jet itself. For typical jet-star parameters seen in GRBs this velocity is mildly relativistic <cit.>. The hot head material spills sideways, forming a cocoon that engulfs the jet and collimates it. As long as the jet propagates in the stellar envelope it dissipates its energy at the head. This energy flows into the cocoon. The jet continues to propagate as long as the engine driving it operates. If it operates long enough the jet breaks out and powers a GRB. Otherwise the jet stalls and all its energy, E_j, is deposited into the cocoon. At that time the cocoon contains the stellar mass within a cone with a half opening angle θ_j <cit.>. The cocoon, which is much hotter than the surrounding matter, expands and breaks out from the star. If the jet has propagated a significant fraction of the stellar radius before it stalled, the cocoon half opening angle at the time of breakout, θ_c, is comparable to that of the original jet, θ_j. Otherwise it can be much wider.

As the cocoon's hot material breaks out from the star its optical depth is τ > c/v, hence it expands rapidly sideways and engulfs the star (see Fig. <ref>), reaching a velocity of order

v_c ≈ 0.1 c √(E_j,51.5 / (M_10 θ_c,10^o^2)),

where E_j is the jet's total energy (which has been deposited in the cocoon) and M is the stellar mass <cit.>. Here and elsewhere Q_x denotes Q/10^x in cgs units, while M_x is in units of solar masses. The radiation escapes from the expanding cocoon material when it reaches τ ≈ c/v, at

t_obs ≈ 1.5 κ_-1.3^1/2 M_10^3/4 θ_c,10^o^3/2 / E_51.5^1/4 day,

where κ is the opacity per unit mass. The luminosity at this time is ≈ 1.5 × 10^42 E_51.5 R_11 θ_c,10^o^4/3 / (M_10 κ_-1.3) erg s^-1 and the temperature is ≈ 12,000 E_51.5^1/8 R_11^1/4 / (θ_c,10^o^7/12 M_10^3/8 κ_-1.3) K, where R is the progenitor radius. This rather UV/blue signal might be observed if the SN is caught sufficiently early (though it might already be hidden by the rising ^56Ni-decay-driven emission).

While the direct emission signal is short-lived, the cocoon's signature on the velocity structure of the ejecta can be observed for a longer period via absorption. During the first few days the absorption lines of the fast-moving cocoon material are optically thick, thereby leaving their mark on the optical spectra. As a result, the density profile required to fit the early spectra of a SN with significant jet activity is expected to show an additional very fast component (with v ≈ 0.1c), namely a flattening or a `bump' of the E(v) (or equivalently ρ(v)) profile around this velocity, instead of the rapidly decreasing profile of a regular spherical explosion. This signature can be seen only during the first few days, since once the cocoon lines in the optical become optically thin the spectral signature disappears.
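The scaling relations above can be packaged into a small calculator (our sketch; the prefactors and exponents are transcribed directly from the text, including the κ dependence of the temperature as written):

```python
# Minimal sketch: cocoon observables from the scaling relations in the text.
# Inputs are normalised as in the paper: E in units of 10^51.5 erg, M in
# units of 10 M_sun, theta in units of 10 degrees, R in units of 10^11 cm,
# kappa in units of 10^-1.3 cm^2/g; all prefactors are taken from the text.

C = 2.998e10  # speed of light in cm/s

def cocoon_observables(E=1.0, M=1.0, theta=1.0, R=1.0, kappa=1.0):
    v_c = 0.1 * C * (E / (M * theta ** 2)) ** 0.5                      # cm/s
    t_obs = 1.5 * kappa ** 0.5 * M ** 0.75 * theta ** 1.5 / E ** 0.25  # days
    L = 1.5e42 * E * R * theta ** (4.0 / 3.0) / (M * kappa)            # erg/s
    T = 12000.0 * E ** 0.125 * R ** 0.25 / (theta ** (7.0 / 12.0) * M ** 0.375 * kappa)  # K
    return v_c, t_obs, L, T

v_c, t_obs, L, T = cocoon_observables()    # fiducial parameters
print(f"v_c ~ {v_c / C:.2f} c, t_obs ~ {t_obs:.1f} d, L ~ {L:.1e} erg/s, T ~ {T:.0f} K")
```

For the fiducial values this reproduces the v_c ≈ 0.1c, t_obs ≈ 1.5 day, L ≈ 1.5 × 10^42 erg s^-1 and T ≈ 12,000 K quoted above.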
At high velocities the observed E(v) profile is the sum of the rapidly decreasing regular SN energy profile and the high-velocity cocoon component. The latter reflects the jet properties, depending on the jet energy and opening angle as well as on the depth at which the jet is choked. Clearly, a less energetic, wider, or more deeply choked jet will give rise to a less energetic and slower cocoon, whose contribution will be weaker and at lower velocities, and thus more difficult to separate from the bulk of the SN ejecta.

An excess of high-velocity material (≳ 0.1c), compared to the expectation from a spherical model, is observed in all the hydrogen-stripped SNe with available early spectra that we examined (see Fig. <ref> and Table 1). The prominence of the excess differs from one SN to another, as does the confidence that the observations cannot be explained by a spherical explosion. In all the SNe the excess of fast ejecta is most naturally explained as the cocoon material. The strongest jet signature is observed in SN 1997ef (see Figs. 1 and 2 of ref. <cit.> for the early spectra and Fig. 8 for the density profile). At low velocities the energy profile fits the theoretical model of a spherical explosion of a typical progenitor very well. However, an unexpected, well-separated component dominates the energy profile at v > 25,000 km/s. The energy and mass of the fast component, which in this SN can be estimated relatively well, measure the jet energy and put an upper limit on the jet opening angle. A less pronounced, yet clear, flattening of the energy profile is seen in SN 2002ap (at v ≈ 30,000 km/s) and SN 2008D (at v ≈ 17,000 km/s). The flat energy profile enables a rather robust estimate of the energy of the fast component, but its mass, which is dominated by material with velocities near the flattening point, cannot be well separated from that of the bulk of the ejecta. SN 2003bg exhibits a `bump' in the energy profile at 15,000 < v < 30,000 km/s. The bump is seen near the peak of the energy distribution and not as a separate component as in SN 1997ef, and therefore its identification with jet activity is less secure. Yet, such an energy profile is not expected in a spherical explosion of a conventional progenitor, and jet activity provides a good explanation. Finally, SN 1998bw and SN 2016jca do not show a flattening of the energy profile at high velocities, but they also do not show the expected steepening. At v > 30,000 km/s their energy profiles fall much more slowly than what is expected in a regular spherical SN. Thus, while not clearly demonstrating a powerful jet signature, these profiles show an excess of fast-moving material indicating likely jet activity. At least in the case of SN 2016jca we know that a jet exists, as it is associated with a regular long GRB.

Most interesting is the variety of SN types in which the jet signature is detected. These cover almost all types of CCSNe from progenitors that lost all, or most, of their hydrogen envelope. SNe 1997ef and 2002ap are broad-line Ic SNe which are not associated with any type of high-energy emission. In particular, SN 2002ap, which took place at a distance of about 8 Mpc, was observed extensively, showing no signs of a relativistic outflow. Radio and X-ray emission observed several days after the explosion indicates that the velocity of the fastest-moving material in this SN is ∼ 70,000 km/s <cit.>.
In addition, it shows broad lines only in its early spectra, while the lines in the later spectra (near and after the peak) are relatively narrow, similar to those observed in regular type Ic SNe. A jet signature is seen also in the relatively regular type Ib SN 2008D. It shows broad lines at early times (produced by cocoon material according to our interpretation) which disappear at later times. SN 2008D also shows an early optical component with a luminosity of ∼10^42 erg/s and a temperature of ∼ 10,000 K <cit.>, which fits the expected cooling cocoon emission discussed above. If the excess of material at high velocities in SN 2003bg is also interpreted as cocoon material, then jets are active also in SNe that lost most, but not all, of their H envelope (type IIb). Finally, we detect a less pronounced, yet possible, jet signature also in broad-line Ic SNe that are associated with GRBs. SN 2016jca is associated with a regular long GRB, and therefore we know that a jet must be active in this SN. SN 1998bw is associated with the low-luminosity GRB 980425, where the gamma-rays are fainter by 3-4 orders of magnitude than in regular long GRBs. Here the observed gamma-rays are most likely the result of a mildly relativistic shock breakout through an extended envelope <cit.>, and if a jet is active then it is most likely choked. Interestingly, the jet that we infer from the optical spectra carried ≳ 2 × 10^51 erg, while the gamma-ray emission that preceded SN 1998bw carried ∼ 10^48 erg, and its radio emission indicates mildly relativistic ejecta with Γ ∼ 3 that carried ∼ 10^49 erg <cit.>.

To conclude, the most natural interpretation of the energetic fast-moving component observed in the early spectra of the SNe in our sample is that this is the cocoon's matter that broke out from the envelope. In one case (SN 2008D) we might have even seen the direct thermal emission of this hot cocoon material. This interpretation implies the existence of powerful jets within these SNe. These jets carry a significant fraction of the explosion energy, similar to the energies observed in typical long GRB jets.

The sample presented here is small. However, the absorption lines of the cocoon's fast-moving material rapidly become optically thin and can be detected only if good spectra are taken very early on. There are not many observations of this kind. Our sample comprises most SNe with stripped (or nearly stripped) H envelopes whose early spectra were analyzed to constrain the profile of the fast-moving ejecta. This suggests that a significant fraction of core-collapse SNe, and possibly all those that have lost most or all of their H envelope, harbor choked jets.

It is interesting to note that our interpretation of the existence of jets in SNe is supported by other circumstantial, though less conclusive, evidence. Remarkably, spectropolarimetry of the optical light of SN 2002ap suggests that its ejecta contain a non-isotropic fast component with energy and velocity similar to those we find here <cit.>. More generally, double-peaked oxygen nebular lines are observed in a large fraction of type Ib/c SNe. This feature implies a significant asphericity in the oxygen distribution of most, and possibly all, type Ib/c SNe <cit.>. Additional indirect supporting evidence is the appearance of Ni in the outer (i.e., high-velocity) regions of the SNe in our sample, as a jet that emerges from the inner parts of the core would bring freshly synthesized Ni to the outer regions <cit.>.
Finally, the structure of CCSNe remnants also support jet activity during the SN explosion <cit.>.While the jets that we infer from the early optical spectra don't contain enough energy to drive the SNexplosion, they may be the smoking gun of what actually drives the explosion. First, and most important, is that a fast rotating core is almost certainly required and magnetic fields are also likely to play a major role in the explosion. Second, the jet activity which seems to be present in most type I SNe suggests a relation between the explosion mechanism of regular type I CCSNeand the extremely energetic SNe associated with GRBs. This puts into question the ability of the popular neutrino driven explosion to be the mechanism that drives these SNe, and suggests that any model of CCSNe explosion mechanism (at least of type Ib,c) should be ale to produce an extremely energetic quasi-spherical explosion accompanied by a narrow and energetic relativistic jet. The fact that our sample does not include regular type IIp SNe does not imply that we can show that these do not harbor choked jets, since the massive H envelope in this type of SNe is expected to choke not only the jet but also the cocoon, thereby washing out any jet signature from the early spectra. The observation that jets are ubiquitous in SN explosions suggests that low-metallicity that is implied from the location of long GRBs is not an essential ingredient for the activity of a central engine.Finally, we note that while the current sample is small upcomingtransient searches (ZTF, GAIA, LSST and others)will enable us to detect regularly early SNe spectra. Thosewill reveal in the near future the fraction of SNe that harbor jets and will shed new light on SNe engines.Before concluding we note that the shocks involved in these hidden jets may be the source ofhigh energy neutrinos observed by IceCube <cit.>. Unlike low-luminosity GRB, another proposed source of hidden jets<cit.>, where some of the shocks in the jets are expected to be collisionless, in regular SNe hidden jets all shocks are expected to be radiation dominated. Therefore, they are less likely to be efficient particle acceleration sites and thus strong neutrino sources. Nevertheless,it is possible that a small fraction of the energy dissipated in radiation mediated shocks is channeled into high energy neutrinos. If, as we find here, relativistic jets are common in SNethen their high abundance reduces significantly the required energy output in high energy neutrinos per event and enable much less efficient sources.Furthermore, as these sources are optically thick the Waxman-Bahcall bound does not apply to them.Interestingly the hidden jets can also bedetectable sources of gravitational radiation. The acceleration of a relativistic jet produces gravitational radiation <cit.> that peaks at sub Hz frequencies.Gravitational waves from a long GRB jet at 500 Mpc are below the detection limit of advanced LIGO but are detectable by the proposed sub Hz detector DECIGO <cit.>. However, depending on the parameters of the jet and in particular on its initial Lorentz factor and the duration of the acceleration phase, a hidden jet in a nearby SN taking place at 10Mpc might be detectable by advanced LIGO. We thank O. Gottlieb for figure <ref>. This research was supported by the I-Core center of excellence of the CHE-ISF. TP was partially supported by an advanced ERC grant TReX and by a grant from the Templeton foundation. 
EN was was partially supported by an ERC starting grant (GRB/SN) and an ISF grant (1277/13).PM and EP acknowledge kind hospitality in Israel by the Weizmann Institute for Science in Rehovot and the Hebrew University of Jerusalem. 10BaadeZwicky W. Baade, F. Zwicky, Physical Review 46, 76 (1934).Janka12 H.-T. Janka, Annual Review of Nuclear and Particle Science 62, 407 (2012).LeBlancWIlson J. M. LeBlanc, J. R. Wilson,161, 541 (1970).Bisnovatyi G. S. Bisnovatyi-Kogan,47, 813 (1970).OstrikerGunn J. P. Ostriker, J. E. Gunn,164, L95 (1971).Woosley06 S. E. Woosley, J. S. Bloom,44, 507 (2006).P04 T. Piran, Reviews of Modern Physics 76, 1143 (2004).mazzali2014 P. A. Mazzali, A. I. McFadyen, S. E. Woosley, E. Pian, M. Tanaka,443, 67 (2014).matzner1999 C. D. Matzner, C. F. McKee,510, 379 (1999).mignone2007 A. Mignone, et al.,170, 228 (2007).Matzner03 C. D. Matzner,345, 575 (2003).Lazzati05 D. Lazzati, M. C. Begelman,629, 903 (2005).B11 O. Bromberg, E. Nakar, T. Piran, R. Sari,740, 100 (2011).ZhangHegerWossly W. Zhang, S. E. Woosley, A. Heger,608, 365 (2004).NP17 E. Nakar, T. Piran,834, 28 (2017).mazzali2000 P. A. Mazzali, K. Iwamoto, K. Nomoto,545, 407 (2000).iwamoto1998 K. Iwamoto, et al.,395, 672 (1998).mazzali2002 P. A. Mazzali, et al.,572, L61 (2002).mazzali2009 P. A. Mazzali, J. Deng, M. Hamuy, K. Nomoto,703, 1624 (2009).mazzali2008 P. A. Mazzali, et al., Science 321, 1185 (2008).ashall2017 C. Ashall, et al., ArXiv e-prints(2017).bjornsson2004 C.-I. Björnsson, C. Fransson,605, 823 (2004).modjaz2009 M. Modjaz, et al.,702, 226 (2009).nakar2012 E. Nakar, R. Sari,747, 88 (2012).nakar2015 E. Nakar,807, 172 (2015).kulkarni1998 S. R. Kulkarni, et al.,395, 663 (1998).totani2003 T. Totani,598, 1151 (2003).Mazzali2005 P. A. Mazzali, et al., Science 308, 1284 (2005).Maeda2008 K. Maeda, et al., Science 319, 1220 (2008).Taubenberger2009 S. Taubenberger, et al.,397, 677 (2009).grichener2017 A. Grichener, N. Soker,468, 1226 (2017).Icecube2017 M. G. Aartsen, et al.,835, 45 (2017).Senno2016 N. Senno, K. Murase, P. Mészáros,93, 083003 (2016).Birnholtz2013 O. Birnholtz, T. Piran,87, 123007 (2013).Decigo S. Kawamura, et al., Classical and Quantum Gravity 28, 094011 (2011). Science | http://arxiv.org/abs/1704.08298v2 | {
"authors": [
"Tsvi Piran",
"Ehud Nakar",
"Paolo Mazzali",
"Elena Pian"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20170426190250",
"title": "Relativistic Jets in Core Collapse Supernovae"
} |
Bo Li^1*, Yuchao Dai^2, Huahui Chen^1,Mingyi He^1* ^* [email protected] ^1School of Electronics and Information, Northwestern Polytechnical University, China^2Research School of Engineering, Australian National University, Australia Single image depth estimation by dilated deep residual convolutional neural network and soft-weight-sum inference V. Galitski December 30, 2023 =================================================================================================================This paper proposes a new residual convolutional neural network (CNN) architecture for single image depth estimation. Compared with existing deep CNN based methods, our method achieves much better results with fewer training examples and model parameters. The advantages of our method come from the usage of dilated convolution, skip connection architecture and soft-weight-sum inference. Experimental evaluation on the NYU Depth V2 dataset shows that our method outperforms other state-of-the-art methods by a margin.§ INTRODUCTIONSingle image depth estimation aims at predicting pixel-wise depth for a single color image, which has drawn increasing attentions in computer vision, virtual reality and robotic. It is a very challenging problem due to its ill-posedness nature. Recently, there have been considerable efforts in applying deep convolutional neural network (CNN) to this problem and excellent performances have been achieved<cit.>.Li <cit.> predicted the depth and surface normals from a color image by regression on deep CNN features in a patch-based framework. Liu <cit.> proposed a CRF-CNN combined learning framework. Wang <cit.> proposed a CNN architecture for joint semantic labeling and depth prediction. Recent works have shown that the depth estimation problem could benefit from a better CNN architecture design. Eigen <cit.> proposed a multi-scale architecture that first predicts a coarse global output and then refines it using finer-scale local networks. Very recently, Cao <cit.> demonstrated that formulating depth estimation as a classification task is better than direct regression. These works demonstrate that, network architecture design plays a central role in improving the performance. To this end, a simple yet effective dilated deep residual CNN architecture is proposed, which could converge with much fewer training examples and model parametres. Furthermore, we analyze the statistic property of our network output, and propose soft-weight-sum inference instead of the traditional hard-threshold method. At last, we conduct evaluations on widly used NYU Depth V2 dataset <cit.>, and outperforms other state-of-the-art methods by a margin.§ NETWORK ARCHITECTUREOur CNN architecture is illustrated in Fig. <ref>, in which the weights are initialized from a pre-trained 152 layers residual CNN<cit.>. The original network <cit.> was specially designed for image classification problem. In this work, we transform it to be suitable to our depth estimation task. Firstly, we remove all the fully connect layers. In this way, we greatly reduce the number of model parameters as more than 80% of the parameters are in the fully connect layers<cit.>. Although, both <cit.> and <cit.> preserved the fully connect layers for long range context information, our experiments show that it is unnecessary in our network for the usage of dilated convolution. Secondly, we take advantage of the dilated convolution, which could expand the receptive field of the neuron without increasing the parameters. 
Thirdly, with dilated convolution, we could keep the spatial resolution of feature maps. Then, we concatenate intermediate feature maps with the final feature map directly. This skip connection design benefits the multi-scale feature fusion and boundary preserving.(1) Dilated Convolution: Recently, dilated convolution is successfully utilized in the CNN design by Yu <cit.>. Specially:Let F : 𝒵^2 →ℛ be a discrete function. Let Ω_r = [ r,r] ^2 ∩𝒵 ^2 and let k: Ω_r →ℛ be a discrete filter of size ( 2r+1 )^2. The discrete convolution filter * can be expressed as( F * k ) (𝐩) = ∑_𝐬 + 𝐭 = 𝐩F(𝐬)k(𝐭).We now generalize this operator. Let l be a dilation factor and let *_l be defined as( F *_l k ) (𝐩) = ∑_𝐬 + l𝐭 = 𝐩F(𝐬)k(𝐭). We refer to *_l as a dilated convolution or an l-dilated convolution. The conventional discrete convolution * is simply the 1-dilated convolution.(2) Skip Connection: As the CNN is of hierarchical structure, which means high level neurons have larger receptive field and more abstract features, while the low level neurons have smaller receptive field and more boundary information. We propose to concatenate the high-level feature map and the inter-mediate feature map. The skip connection structure benefits both the multi-scale fusion and boundary preserving.§ SOFT-WEIGHT-SUM INFERENCEWe reformulate depth estimation as classification task by equally discretizing the depth value in log space as <cit.>. Specially, we train our network with the multinomial logistic loss E = -1/N∑_n=1^N log(p_k^n) followed by softmax p_i = expx_i/∑_i^'=1^mexpx_i^'. Here, N is the number of training samples, k the correspondent label of sample n, m is the number of bins.A typical predicted score distribution is given in Fig. <ref>a, where the non-zero score is centralized. Fig. <ref>b is the confusion matrix on the test set, which present a kind of diagonal dominant structure. In Table <ref>, we give the pixel-wise accuracy and Relative error (Rel) with respect to different number of discretization bins.These statistic results show that: Even though the model can't distinguish the detailed depth well, it still learns the correctconcept of depth as the non-zero predicted score is centralized and around the right label. These statistic results also explain why increasing the number of bins could not improve the performance further, mainly due to the decrease of the pixel-wise accuracy. Inspired by these statistic results, we propose the soft-weight-sum inference. It is worth noting that, this method transforms the predicted score to the continuous depth value in a nature way. Specially:d_i = exp{𝐰^T𝐩_i },where 𝐰 is the weight vector of depth bins. 𝐩_i is the output score for sample i. In our experiments, we set the number of bins to 200. § EXPERIMENTSWe test our method on the widely used NYU V2 dataset <cit.>. The raw dataset consists of 464 scenes, captured with a Microsoft Kinect, The official split consists of 249 training and 215 test scenes.We equally sample frames out of each training sequence, resulting in approximately 12k unique images. After off-line augmentations, our dataset comprises approximately 48k RGB-D image pairs.Implementation details: Our implementation is based on the CNN toolbox: caffe <cit.> with an NVIDIA Titian X GPU. The proposed network is trained by using stochastic gradient decent with batch size of 3 (This size is too small, thus we average the gradient of 5 iterations for one back-propagation), momentum of 0.9, and weight decay of 0.0004. 
Weights are initialized by the pre-trained model from <cit.>. The network is trained with iterations of 50k by a fixed learning rate 0.001 in the first 30k iterations, then divided by 10 every 10k iterations.For quantitative evaluation, we report errors obtained with the following error metrics, which have been extensively used. Denote d as the ground truth depth, d̂ as the estimated depth, and T denotes the set of all points in the images.* Threshold: % of d_i s.t. max(d̂_i/d_i,d_i/d̂_i)=δ<thr;* Mean relative error (rel): 1/| T |∑_d ∈ T|d̂-d|/d;* Mean log_10 error (_10):1/| T |∑_d ∈ T|log_10d̂-log_10d|;* Root mean squared error (rms):√(1/| T |∑_d ∈ T‖d̂-d‖^2).Here d is the ground truth depth, d̂ is the estimated depth, and T denotes the set of all points in the images.Quantitative and qualitative evaluation of our method is presented in Table <ref> and Fig. <ref> respectively. Our method outperforms other state-of-the-art depth estimation methods by a large margin with much fewer training examples and model scale. It is worth noting that, without any post processing, our result is of high visual quality. To further evaluate our method, we conduct experiments to analyze the contribution of each component and the results are illustrated in Table <ref>. From Table <ref> we can see that dilated convolution, skip connection, and soft-weight-sum inference all contribute to final depth estimation. From Fig. <ref>, we could also observe that the soft-weight-sum inference is beneficial to smooth the depth map while keeping the boundary sharp.§ CONCLUSIONAn adapted deep convolutional neural network architecture is proposed for single image depth estimation. We also propose the soft-weight-sum inference instead of the hard-threshold method. Experimental results demonstrate that our proposed method achieves better performance than other state-of-the-art methods on NYU Depth V2 dataset.3ptIEEEbib | http://arxiv.org/abs/1705.00534v1 | {
"authors": [
"Bo Li",
"Yuchao Dai",
"Huahui Chen",
"Mingyi He"
],
"categories": [
"cs.CV",
"cs.LG"
],
"primary_category": "cs.CV",
"published": "20170427060705",
"title": "Single image depth estimation by dilated deep residual convolutional neural network and soft-weight-sum inference"
} |
Hypothesis Testing under Mutual Information Privacy Constraints in the High Privacy Regime Tomasz Gubiec December 30, 2023 ==========================================================================================Hypothesis testingis a statistical inference framework for determining the true distribution among a set of possible distributions for a given dataset. Privacy restrictions may require the curator of the data or the respondents themselves to share data with the test only after applying a randomizing privacy mechanism. This work considers mutual information (MI) as the privacy metric for measuring leakage. In addition, motivated by the Chernoff-Stein lemma, the relative entropy between pairs of distributions of the output (generated by the privacy mechanism) is chosen as the utility metric. For these metrics, the goal is to find the optimal privacy-utility trade-off (PUT) and the corresponding optimal privacy mechanism for both binary and m-ary hypothesis testing. Focusing on the high privacy regime, Euclidean information-theoretic approximations of the binary and m-ary PUT problems are developed. The solutions for the approximation problems clarify that an MI-based privacy metric preserves the privacy of the source symbols in inverse proportion to their likelihoods.Hypothesis testing, privacy-guaranteed data publishing, privacy mechanism, Euclidean information theory, relative entropy, Rényi divergence, mutual information.§ INTRODUCTIONThere is tremendous value to publishing datasets for a variety of statistical inference applications; however, it is crucial to ensure that the published dataset, while providing utility, does not reveal potentially privacy-threatening information. Specifically, the published dataset should allow the intended inference to be made while limiting other unwanted inferences. This requires using a randomizing mechanism (i.e., a noisy channel) that guarantees a certain measure of privacy. Any such privacy mechanism may, in turn, reduce the fidelity of the intended inference leading to a trade-off between utility of the published data and the privacy of the respondents in the dataset. We consider the problem of privacy-guaranteed data publishing hypothesis testing. The use of large datasets to test two or more hypotheses (e.g., the 99%-1% theory of income distribution in the United States <cit.>) relies on the classical statistical inference framework of binary or multiple hypothesis testing. The optimal test for hypothesis testing under various scenarios (non-Bayesian,Bayesian, minimax) involves the so-called Neyman-Pearson (or likelihood ratio) test <cit.> in which the likelihood ratio of the hypotheses is compared to a given threshold. We focus exclusively on the non-Bayesian setting. In particular, for m-ary (m≥ 2) hypothesis testing problem, we consider the setting in which the probability of missed detection is minimized for one specific hypothesis (e.g., presence of cancer) while requiring the probabilities of false alarm for the same hypothesis to be bounded (relative to the remaining hypotheses). 
In this context, we can apply the Chernoff-Stein Lemma <cit.> which states that for a pair of hypotheses the largest error exponent of the missed detection probability, under the constraint that the false alarm probability is bounded above by a constant, is the relative entropy between the probability distributions for the two hypotheses.Inspired by the Chernoff-Stein lemma, for the m-ary hypothesis setting described above, we use relative entropy as a measure of the utility of the published dataset (for hypothesis testing), and henceforth, refer to this as the relative entropy setting. Furthermore, for binary hypothesis testing (m=2), we also consider the setting in which the probabilities of both missed detection and false alarm decrease exponentially. For this setting, using known results of hypothesis testing <cit.>, we take the Rényi divergence as the utility metric and refer to this as the Rényi divergence setting. For the privacy metric, we use mutual information between the original and published datasets as a measure of the additional knowledge (privacy leakage) gained on average from the published dataset. By bounding the MI leakages, our goal is to develop privacy mechanisms that restrict the relative entropy between the prior and posterior (after publishing) distributions of the dataset, averaged over the published dataset. By restricting the distance between prior and posterior beliefs, we capture a large class of computationally unbounded adversaries that can use different inference methods. Specifically, bounding MI leakage allows us to exploit information-theoretic relationships between MI and the probabilities of detection/estimation to bound the ability of an adversary to “learn” the original dataset <cit.>.§.§ Our ContributionsWe study the privacy-preserving data publishing problem by considering a local privacy model in which the same (memoryless) mechanism is applied independently to each entry of the dataset. This allows the respondents of a dataset to apply the privacy mechanism before sharing data. Our main contributions are as follows:* We introduce the privacy-utility trade-off (PUT) problem for hypothesis testing (Section <ref>). The resulting PUT involves maximizing the minimum of a set of relative entropies, subject to constraints on the MI- based leakages for all source classes.* The PUT problem involves maximizing the minimum of a set of convex functions over a convex set which is, in general, NP-hard. In Section <ref>, we approximate the trade-off in the high privacy regime (near zero leakage) using techniques from Euclidean information theory (E-IT); these techniques have found use in deriving capacity results in <cit.>. * For binary hypothesis testing, we first consider the relative entropy setting (Section <ref>), in which we determine the optimal mechanism in closed form for the E-IT approximation by exploring the problem structure. Our results suggest that the solution to the E-IT approximation is independent of the alphabet size and, more importantly, that a MI-based privacy metric preserves the privacy of the source symbols inversely proportional to their likelihoods, thereby, providing more distortion to the (informative) outliers in the dataset which, in general, are more vulnerable to detection. * We extend our analysis to the Rényi divergence setting (Section <ref>), the optimal mechanism for its E-IT approximation problem in high privacy regime is similar in form to the relative entropy setting. 
* We study the m-ary hypothesis testing problem (Section <ref>) and show that optimal solutions of the E-IT approximation can be obtained via semidefinite programs (SDPs) <cit.>. Specially, for binary sources the optimal mechanism is derived in closed form. The dependence on the source distribution is highlighted here as well.* In Section <ref>, via numerical simulations, we uncover regimes of distributiontuples and leakage values for which the E-IT approximation is accurate. §.§ Related WorkPrivacy-guaranteed hypothesis testing in the high privacy regime using MI as the privacy metric was first studied by the authors in <cit.>. Specifically, the focus of <cit.> is on the relative entropy setting of binary hypothesis test in the high privacy regime. We significantly extend that work with three key contributions: (a) we derive optimal mechanisms in the high privacy regime for binary hypothesis test in the Rényi divergence setting; (b) we derive optimal mechanisms in the high privacy regime for the m-ary problem in relative entropy setting; (c) we provide detailed illustrations of results for binaryand m-ary hypothesis testing. Recently, the problem of designing privacy mechanisms for hypothesis testing has gained interest. Kairouz et al. <cit.> show that the optimal locally differential privacy (L-DP) mechanism has a staircase form and can be obtained as a solution of a linear program. Gaboardi et al. <cit.> deal with a privacy-guaranteed hypothesis testing by using chi-square goodness of fit as the utility measure and adding Gaussian or Laplace noise to dataset to guarantee DP-based privacy protection.Our problem differs from these efforts in using MIas the privacy metrics. In <cit.>, the L-DP formulation, focused on the high privacy regime, requires the mechanism to limit distinction between any two letters of the source alphabet for a given output. The requirement also gathers all privacy mechanisms satisfying a desired privacy protection measured by L-DP within a hypercube. Therefore, the authors simplify the trade-off problem to a linear program by exploring the sub-linearity of the relative entropy function. In contrast, all privacy mechanisms giving a desired MI-based privacy form aconvex set which is not a polytope. However, taking advantage of E-IT, we propose good approximations for the MI-based privacy utility trade-offs in high privacy regime. In fact, we present closed-form privacy mechanisms for both binary hypothesis testing with arbitrary alphabets as well as m-ary hypothesis testing with binary alphabets. Furthermore, for m-ary hypothesis testing with arbitrary sources, the privacy mechanism can be attained effectively by solving an SDP.The connection between hypothesis testing and privacy has been studied in the context of location anonymization and smart meter privacy. In location privacy, the problem of determining if a sequence of anonymized data points (e.g. location positions without an accompanying user ID) belongs to a target user can be formulated as a hypothesis test. More specifically, if the distribution of the user's data is known and unique among other users, any observed sequence can be tested against the hypothesis that it was drawn from this distribution, thus revealing if it belongs to the target user. Within this context, Montazeri et al.<cit.>studied the problem of anonymizing sequences of location data, and characterized the probability of correctly guessing a target user's data within a larger dataset. 
In related work on smart meter privacy, Li and Oechtering <cit.>considered the problem of private information leakage in a smart grid. Here, an adversary challenges a consumer's privacy by performing an unauthorized binary hypothesis test on the consumer's behavior based on smart meter readings. Li and Oechtering <cit.> propose a solution for mitigating the incurred privacy risk with the assist of an alternative energy source. The theoretical analysis done by Montazeri etal.<cit.> and Li and Oechtering<cit.>are related to the one presented here in that they also make use of large deviation (information-theoretic) results in hypothesis testing. However, we apply these powerful theoretical tools to a different setting, in which data is purposefully randomized before disclosure in order to provide privacy, while guaranteeing utility in terms of a successful hypothesis test. Whereas they consider a hypothesis testing adversary, here we consider a precise hypothesis test as part of the utility metric.MI has been amply used as a measure for quantifying information leakage within the information-theoretic privacy literature (cf. <cit.> and the references therein). The connection between MI-based metrics and other privacy metrics has been studied, for example, by Makhdoumi and Fawaz <cit.>. In the present paper, we approximate MI by the chi-squared divergence which, in turn, also posses interesting estimation-theoretic properties <cit.>. An exploration of the role of chi-squared related metrics in privacy has appeared in the work of Asoodeh et al. <cit.>.§.§ NotationWe use bold capital letters to represent matrices, e.g., 𝐗 is a matrix with the i^th row (or column)being 𝐗_i and the (i,j)^th entry X_ij. Weuse bold lower case letters to represent vectors, e.g. 𝐱 is a vector with the i^th entry x_i. Sets are denoted by capital calligraphic letters. For vectors 𝐚 and 𝐛, andfunctions f and g,[f(𝐚)/g(𝐛)] is a diagonal matrix with the i^th diagonal entry being f(a_i)/g(b_i), e.g., the diagonal matrix [𝐚^2/√(𝐛)] has diagonal entries a_i^2/√(b_i). We denote the l_2-norm of a vector 𝐱 by 𝐱, the logarithm of x to the base 2 as log x.Probability mass functions are denoted as row vectors, e.g., 𝐩. In addition, D(··) denotes the relative entropy and I(· ;·) denotes the MI.We can write the MI between two random variables or between a probability distribution and the corresponding conditional probability matrix. Indeed, for two random variables X,X̅ with X∼𝐩 and X̅|{X=x}∼𝐖_X̅|X=x, the MI is denoted as I(X;X̅) or I(𝐩,𝐖_X̅|X).§ PROBLEM FORMULATION §.§ General Hypothesis TestingWe consider an m-ary hypothesis testing problem that distinguishes between m≥ 2 explanations for an observed dataset. Let X^n=(X_1,…,X_n) denote a sequence of n random variables, where the entries X_i are drawn independently according to a probability distribution 𝐩. The observed random variables are assumed to be discrete with alphabet 𝒳 and size |𝒳|=M. The m hypotheses are denoted as H_k: =_k for k∈{1,…,m}. Our utility goal is to make a decision about the underlying distribution of the data X^n. Let the disjoint decision regions be 𝒜_k^(n). This means that if X^n belongs to 𝒜_k^(n), we decide in favor of H_k. §.§ Binary Hypothesis TestingIn binary hypothesis testing, there are only two hypotheses: H_1: 𝐩=𝐩_1 and H_2: 𝐩=𝐩_2. The optimum test is the Neyman-Pearson test in which the decision region for hypothesis H_1 is𝒜_1^(n) ={𝐱^n: 𝐩_1(𝐱^n)/𝐩_2(𝐱^n)>T} for some threshold T∈ℝ. 
Let β_1^(n) and β_2^(n) be the probabilities of false alarm and missed detection for H_1, respectively. Use β_2^(n)(δ) to indicate the smallest probability of the missed detection subject to the condition that β_1^(n)≤δ. The Chernoff-Stein lemma <cit.> states thatlim_n→∞-1/nlogβ_2^(n)(δ)=D(_2_1),∀ δ∈ (0,1). Hence, we use D(_2_1) as our utility function. §.§ m-ary Hypothesis TestingIn m-ary hypothesis testing, there arem(m-1) different errors resulting from mistaking hypothesis H_i for H_j, i j. To keep our analysis simple, we consider a scenario somewhat analogous to the “red alert” <cit.> problem in “unequal error protection” <cit.>. There is one distinguished hypothesis H_1 whose inference takes precedence. For example, in practice _1 could be the underlying distribution of measurements of amalignant tumor; the other distributions _2,…, _k couldbe the underlying distributions ofmeasurements ofvarious benign tumors. We would like to minimize the miss-detection rate of H_1. In this scenario, we would design the decision regions{𝒜_k^(n)}_k=1^m to maximize the minimum of E_1,k over k ∈{2,…, m}, where E_ 1,k is the error exponent(exponential rate of decay of the error probability analogous to (<ref>)) of mistaking H_k when H_1 is true.§.§ Privacy ConsiderationsIn most data collection and classification applications, there may be an additional requirement to ensure that the dataset, while providing utility, does not leak information about the respondents of the data. This in turn implies that the data provided to the hypothesis test is not the same as the original data, but instead a randomized version that guarantees precise measures of privacy (information leakage) and utility. Specifically, we use MIas a measure of the average information leakage between the input dataset and its randomized output dataset that is used by the test. The goal is to find the randomizing mapping, henceforth referred to a privacy mechanism, such that a measure of utility of the data is maximized while ensuring that the MI-based leakages for all possible source classes are bounded.We assume that the entries of the dataset are generated in an i.i.d. fashion.Focusing on the local privacy model, the randomizing privacy mechanism for the hypothesis testing problem is memoryless. Let 𝐖, an M× N conditional probability matrix, denote this memoryless privacy mechanism which maps the M letters of the input alphabet 𝒳 to N letters of the output alphabet 𝒳̂, where N≥ 2 is an arbitrary finite integer.Thus, the i.i.d. sequence X^n ∼𝐩_k, k∈{1,…,m}, is mapped to an output sequence X̂^n whose entries X̂_j ∈𝒳̂ for all j∈{1,…,N} are i.i.d. with the distribution 𝐩_k𝐖. Thus, the hypothesis test is now performed on a sequence X̂^n that belongs to one of m source classes with distributions[We remind that the distribution 𝐩𝐖 is the output distribution induced by the input (row vector) 𝐩 and the privacy mechanism (transition matrix) 𝐖.] 𝐩_k𝐖. For the m-ary setting, the error exponent, corresponding to the missed detection of H_1 as H_k, is D(_k_1). §.§ The Privacy-Utility Trade-offTo design an appropriate privacy mechanism, we wish to maximize the minimum of the m-1 error exponentsD(_k_1) subject to the following leakage constraints: I(𝐩_k,𝐖)≤ϵ_k for k∈{1,…,m}. 
Formally, the privacy-utility trade-off (PUT) problem is that finding the optimal privacy mechanism 𝐖^* of the following optimization:max_∈𝒲 min_k=2,…, m D(_k_1) s.t.I(_k,)≤ϵ_kk=1,2,…,mwhere𝒲 is the set of M× N row stochastic matrices, and ϵ_k ∈ [0, H(𝐩_k)], k∈{1,…,m}, are permissible upper bounds on I(_k,𝐖). The optimization in (<ref>) maximizes the minimum of m-1 convex functions over a convex set. Since the maximum of each of the m-1 convex functions are attained on the boundary of the feasible region, the optimal solutionof the optimization is also on the boundary. Because of the MI constraints, the feasible region is, in general, not a polytope, and thus, has infinitely many extremal points. While there exist computationally tractable methods to obtain a solution by approximating the feasible region by an intersection of polytopes <cit.>, our focus is on developing a principled approximation for (<ref>) in a specific privacy regime to obtain a closed-from and easily-interpretable privacy mechanism. Specifically, we will work in the high privacy regime in which ϵ_k is small. In this regime, one can use Taylor series expansions to approximate both the objective function and the constraints. Such approximationswere considered in <cit.>. More recently, analyses based on such approximations, referred to as E-IT, have been found to be useful in a variety settings fromgraphical model learning <cit.> to network information theory problems <cit.><cit.>. § APPROXIMATIONS IN THE HIGH PRIVACY REGIMEIn this section, we develop E-IT approximations of the relative entropy D(_k_1) and the MI I(_k,) functions, based on which we propose an approximation of PUT in (<ref>) in the high privacy regime. To develop an approximation, we select an operating point which will be perturbed to provide an approximately-optimal privacy mechanism. We let ϵ_k∈ [0,ϵ^*]for all k whereϵ^* ≪min{H(𝐩_k),k∈{1,…,m}}.Since our focus is on the high privacy regime, we present the approximation around a perfect privacy operation point, i.e., a privacy mechanism 𝐖_0 that achieves ϵ_k=0 for all k. For perfect privacy, i.e., ϵ_k=0 for all k∈{1,…,m}, the privacy mechanism 𝐖_0 is a rank-1 row stochastic matrix with every row being equal to a row vector 𝐰_0 where 𝐰_0 belongs to probability simplex, such that the entries w_0j,j∈{1,…,N} of the vector 𝐰_0 satisfy ∑_j=1^Nw_0j=1,andw_0j≥ 0 ∀ j∈{1, …, N} . For any probability distribution 𝐩 with entries p_i,i∈{1,…,M}, and a privacy mechanism 𝐖, I(𝐩,𝐖) = ∑_i=1^M∑_j=1^Np_iW_ijlogW_ij/∑_i=1^Mp_iW_ij≥∑_i=1^Mp_i(∑_j=1^NW_ij)log∑_j=1^NW_ij/∑_j=1^N∑_i=1^Mp_iW_ij= ∑_i=1^Mp_i∑_j=1^NW_ijlog1/1=0 where (<ref>) results from the log-sum inequality. Equality in (<ref>) holds if and only if <cit.> W_ij=∑_k=1^Mp_kW_kj, i∈{1, …, M} ,j∈{1, …, N}. In other words, perfect privacy, i.e., zero leakage, is achieved when every row of the optimal mechanism 𝐖_0 is the same and is equal to the probability distribution 𝐰_0=𝐩𝐖_0. Thus, for the perfect privacy setting, the optimal mechanism satisfying (<ref>) does not rely on the input distribution. Note that, for any 𝐖_0 satisfying (<ref>) that achieves perfect privacy, the utility is D(𝐩_k𝐖_0 𝐩_1𝐖_0)=0 for all k∈{2,…,m}. Furthermore, the rows of 𝐖_0, i.e., 𝐰_0, can take any value in an N-dimensional probability simplex. The following proposition presents a E-IT approximation for the objective and constraint functions of the optimization in (<ref>), i.e., the relative entropy D(_k_1) and MI I(_k,). 
This approximation is only applicable to the high privacy regime in which the privacy mechanism 𝐖 is modeled as a perturbation of a𝐖_0 per Lemma <ref>. In the high privacy regime with 0≤ϵ_k ≪min{H(𝐩_k),k∈{1,…,m}}, the privacy mechanism 𝐖 is chosen as a perturbation of a perfect privacy (ϵ_k=0for all k) achieving mechanism 𝐖_0, i.e., 𝐖=𝐖_0+Θ. The mechanism 𝐖_0 is a rank-1 row stochastic matrix with every row being equal to a row vector 𝐰_0 whose entries w_0j satisfy ∑_j=1^Nw_0j=1 and w_0j>0, for all j∈{1,…,N}. The perturbation matrix Θ is an M× N matrix with entries Θ_ij satisfying ∑_j=1^NΘ_ij=0 and |Θ_ij| ≤ρ w_0j, for all i∈{1, …, M},j∈{1, …, N}. For this perturbation model, the relative entropy D(_k_1) for all k∈{2, …, m} in the objective function, and the MI I(_k,) for all k∈{1, …, m} in the constraints of (<ref>) can be approximated as D(_k_1) ≈1/2(𝐩_k-𝐩_1)Θ[(𝐰_0)^-1/2]^2 I(_k,) ≈1/2∑_i=1^Mp_2iΘ_i[(𝐰_0)^-1/2]^2 where p_ki, for k∈{1,…,m}, is the i^th entry of 𝐩_k, Θ_i is the i^th row of Θ, and [(𝐰_0)^-1/2] is a diagonal matrix with i^th diagonal entry, for all i, being (w_0i)^-1/2. For ease of analysis, setting 𝐀=Θ[(𝐰_0)^-1/2], (<ref>) and (<ref>) can be rewritten as D(_k_1)≈1/2(𝐩_k-𝐩_1)𝐀^2 I(_k,) ≈1/2∑_i=1^Mp_ki𝐀_i^2 In (<ref>) and (<ref>), the notation ≈ means that the difference between the left and right sides is o( 𝐩_k-𝐩_1 _∞^2). Similarly,in (<ref>) amd (<ref>), ≈ means that the two sides differ by o( Θ_∞^2 ). Note that in Proposition <ref>, _0 is in the interior of the probability simplex, i.e., _0>0. The approximation results from the observation that all rows of a privacy mechanismin the high privacy (low leakage) regime are very close to each other and both the relative entropy and MI can be approximated by the χ^2 divergence. The detailed proof is in Appendix <ref>.§ BINARY HYPOTHESIS TESTING IN THE HIGH PRIVACY REGIMEFor binary hypothesis testing, thereare only two hypotheses H_1:=_1 and H_2:=_2, and therefore, only two types of errors. In this section, we consider the simplest hypothesis testing scenario under two regimes. First, we regard one of the two hypotheses (e.g., H_1) as being more important than the other. In this case, the goal is to maximize the exponent of the missed detection for H_1 subject to an upper bound on its false alarm probability. Second, both hypotheses are important and the goal is to maximize a weighted sum of the two exponents of the false alarm and missed detection. For both cases, we derive the PUTs in the high privacy regime and providemethods to attain explicit privacy mechanisms.§.§ Binary Hypothesis Testing (Relative Entropy Setting)We consider the case in which the false alarm of H_1 isbounded by a fixed positive constant and we examine the fastest rate of decay of its missed detection. This is exactly the problem formulated in Section <ref>, and the PUT in (<ref>) becomesmax_𝐖∈𝒲D(𝐩_2𝐖𝐩_1𝐖) s.t.I(𝐩_k,𝐖)≤ϵ_kk=1,2where 𝒲 is the set of all M× N row stochastic matrices, and ϵ_k ∈ [0, H(𝐩_k)], are the permissible upper bounds of the privacy leakages for the two distributions _1 and _2, respectively. Using the approximations in Proposition <ref>, the PUT for the E-IT approximation problem in the high privacy regime with 0≤ϵ_k ≪min(H(𝐩_1),H(𝐩_2)), for all k ∈{1,2}, is max_𝐀 1/2(𝐩_2-𝐩_1)𝐀𝐀^T(𝐩_2-𝐩_1)^T s.t. 1/2∑_i=1^Mp_ki𝐀_i^2≤ϵ_k k=1,2 𝐀(√(𝐰_0))^T=0. 
where 𝐀_i is the i-th row of the M× N matrix 𝐀, _0 is an interior point of the N-dimensional probability simplex, and √(𝐰_0) is a row vector with the i^th entry being the squared root of the i^th entry of _0, i.e., √(w_0,i). The functions in (<ref>) and (<ref>) are the E-IT approximations as presented in Proposition <ref>, and the constraint (<ref>) results from the requirement thatis row stochastic. This constraint is the only one in (<ref>) that explicitly involves the size of the output alphabet, i.e., the length of _0. The optimization problem in (<ref>) reduces to one with a vector variable 𝐚∈ℝ^M as max_𝐚 1/2𝐚(𝐩_2-𝐩_1)^T^2 s.t. 1/2𝐚[𝐩_k]𝐚^T≤ϵ_k k=1,2 where the absolute value of the i^th entry a_i of 𝐚, for all i ∈{1,..,M}, is the Euclidean norm of the i^th row 𝐀_i of 𝐀. The M× N matrix 𝐀^* optimizing (<ref>) is obtained from the optimal solution 𝐚^* of (<ref>) as a rank-1 matrix whose i^th row, for all i, is given by a_i^*𝐯 where a_i^* is the i^th entry of𝐚^*, and 𝐯 is a unit-norm N-dimensional vector that is orthogonal to the non-zero-entry vector √(𝐰_0), such that 𝐀^*=(𝐚^*)^T𝐯𝐯(√(𝐰_0))^T=0𝐯=1. Finally, it suffices to restrict the output to a binary alphabet, i.e., N=2. The proof of Theorem <ref> is in Appendix <ref>. We briefly summarize the approach. The simplification of (<ref>) to a vector optimization in (<ref>) results from the observation that the privacy constraint (<ref>) only restricts the row-norms of the matrix variable , whereasaffects the objective (<ref>) through all inner products of rows in . By exploiting this special structure, we simplify (<ref>) to a quadratically constrained quadratic program (QCQP) with a vector variablewhich governs the Euclidean norms of the rows in . The optimal ^* is then given by (<ref>) such that the row vectoris chosen to satisfy (<ref>). Since (<ref>) can be satisfied by a 2-dimensional , we concludethat a binary output alphabet suffices.Note that the objective function and constraints of the QCQP in (<ref>) are “even” functions,i.e., ifis feasible, so is itsnegation - and both of them yield the same objective value. Using this observation,we derive a convex program by removing the square in the objective function. The following theorem provides a closed-form privacy mechanism for the PUT (<ref>) in high privacy regime by using the Karush-Kuhn-Tucker (KKT) conditions for convex programs. An optimal privacy mechanism 𝐖' for the approximation problem in (<ref>) is 𝐖'=𝐖_0+(𝐚^*)^T𝐯·[√(𝐰_0)] where 𝐖_0 is given by Proposition <ref>, 𝐯 is chosen to satisfy (<ref>) and (<ref>), and for λ_p=𝐩_2-𝐩_1^2 and 𝐯_p=𝐩_2-𝐩_1/𝐩_2-𝐩_1 being the eigenvalue and eigenvectorof (𝐩_2-𝐩_1)^T(𝐩_2-𝐩_1), the optimal solution of (<ref>), namely 𝐚^*, is given as: * if only the first constraint in (<ref>) is active, 𝐯_p[𝐩_2/(𝐩_1)^2](𝐯_p)^T/𝐯_p[(𝐩_1)^-1](𝐯_p)^T < ϵ_2/ϵ_1, and the optimal solution 𝐚^* is 𝐚^*=±√(2ϵ_1/𝐯_p[(𝐩_1)^-1](𝐯_p)^T)𝐯_p[(𝐩_1)^-1]; * if only the second constraint in (<ref>) is active, 𝐯_p[𝐩_1/(𝐩_2)^2](𝐯_p)^T/𝐯_p[(𝐩_2)^-1](𝐯_p)^T < ϵ_1/ϵ_2, and the optimal solution 𝐚^* is 𝐚^*=±√(2ϵ_2/𝐯_p[(𝐩_2)^-1](𝐯_p)^T)𝐯_p[(𝐩_2)^-1]; * when both constraints in (<ref>) are active, the optimal solution 𝐚^* is 𝐚^*=±λ_p/2𝐯_p(η_1^*[𝐩_1]+η_2^*[𝐩_2])^-1 where η_1^*>0 and η_2^*>0 satisfy 𝐯_p[𝐩_1](η_1^*[𝐩_1]+η_2^*[𝐩_2])^-2(𝐯_p)^T =8ϵ_1/(λ_p)^2 𝐯_p[𝐩_2](η_1^*[𝐩_1]+η_2^*[𝐩_2])^-2(𝐯_p)^T =8ϵ_2/(λ_p)^2. The proof of Theorem <ref> involves proving two lemmas and is developed in Appendix <ref>. 
The optimal mechanism 𝐖'=𝐖_0+(𝐚^*)^T𝐯[√(𝐰_0)]captures the fact that a statistical privacy metric, such as MI, takes into consideration the source distribution in designing the perturbation mechanism Θ^*. In fact, the solutions for 𝐚^* in (<ref>), (<ref>) and (<ref>) quantify this through the term (η_1^*[𝐩_1]+η_2^*[𝐩_2])^-1. The vector 𝐯_p indicates the direction along which the objective function, i.e., the relative entropy, grows the fastest. In Fig. <ref> we illustrate the results of Theorem <ref>. Thus, fora uniformly distributed source, all entries of 𝐯_p have the same scaling such that 𝐚^* is in the direction of 𝐯_p. However, for a non-uniform source, the samples with low probabilities affect the direction of 𝐯_p(η_1^*[𝐩_1]+η_2^*[𝐩_2])^-1 the most. This is a consequence of the statistical leakage metric(the MI) which causes the optimal mechanism to minimize information leakage by perturbing the low probability (more informative) symbols proportionately more relative to the higher probability ones.§.§ Binary Hypothesis Testing (Rényi Divergence Setting)We now consider the scenario in which both the false alarm and missed detection probabilities for H_1 are exponentially decreasing. For this case, the trade-off between the two error probabilities is captured by the Rényi divergence as shown in <cit.>. We use this as our utility metric and briefly review the results in <cit.> as a starting point.Assume that the false alarm probability decays as exp(-nE_2,1), for some exponent E_2,1>0. Then, the largest error exponent of the missed detection E_1,2 for a fixed E_2,1 is a function of E_2,1 given by <cit.> E_1,2(E_2,1) ≜min_: D(_2) ≤ E_2,1D( _1).Since (<ref>) is a convex program, it can be equivalently characterized by the Lagrangian minimizationL(β)≜min_{ D(_1)+ β D(_2)},leading to the dual problem <cit.>E_1,2(E_2,1) = sup_β≥ 0{ L(β) - β E_2,1}.The ^* optimizing (<ref>) can be computed using the KKT conditions of (<ref>) (cf. <cit.>) to further obtain L(β)=-(1+β)log(∑_xp_1(x)^1/1+βp_2(x)^β/1+β).For α≜β/1+β∈ (0,1), (<ref>) simplifies as <cit.> 1/α-1log(∑_xp_2(x)^αp_1(x)^1-α)=(_2_1)where (_2_1) is the order-αRényi divergence. From (<ref>) and (<ref>), we see that (_2_1) is the weighted sum of the two error exponents, i.e., E_1,2(E_2,1)+α/1-α E_2,1, and as such a good candidate for a utility metric in this setting. For this metric, one can write the PUT problem asmax_𝐖∈𝒲 (_2𝐖_1𝐖) s.t.I(𝐩_k,𝐖)≤ϵ_k,k=1,2.Analogous to the PUT in (<ref>) with relative entropy as the utility metric, the optimization in (<ref>) is non-convex and NP-hard. Thus, we focus on the high privacy regime and approximate the order-α Rényi divergence in that regime. To this end, we use the following lemma to explicitly present the relationship of the order-α Rényi divergence D_α(_2_1) and the relative entropy D(_2_1) when _2 and _1 are “close”. For α∈(0,1), the following continuity statement holds: If [We say that a vector _2 convergesto another vector _1, denoted as _2→_1, if _2-_1_∞→ 0. ] _2→_1, then(1-α) D(_2_1)/2^(1-α ) D_α(_2_1) -1→log e/α. The proof is detailed in Appendix <ref>.According to Proposition <ref>, any privacy mechanismin the high privacy regime is a perturbation of a perfect privacy mechanism _0. When ϵ_k in (<ref>) is close to zero, ρ is also close to 0 and both output distributions _1 and _2 approach _0. We now use Lemma <ref> in the following corollary to show that the ratio of (1-α) D(_2𝐖_1𝐖) and 2^(1-α ) D_α(_2𝐖_1𝐖) -1 converges to the constant α^-1log e. 
Letα∈(0,1).In (<ref>),if ϵ_1,ϵ_2→ 0,converges to a perfect privacy mechanism _0 (cf.Lemma <ref>). Consequently, _2𝐖→_1𝐖 and the following convergence statement also holds. (1-α) D(_2𝐖_1𝐖)/2^(1-α ) D_α(_2𝐖_1𝐖) -1→log e/α. From (<ref>), we observe that as ϵ_1,ϵ_2→ 0,D_α(_2W_1W) is monotonically increasing in D(_2W_1W).Thus, in this high privacy regime,the optimizer of D_α(_2W_1W) is the same as D(_2W_1W). As a result, in the high privacy regime we revert to the relative entropy setting, for which we provide a closed-form solution in Theorem <ref>.§ M-ARY HYPOTHESIS TESTING IN THE HIGH PRIVACY REGIMEWe now consider the m-ary hypothesis testing problem with m distinct hypotheses H_k, k∈{1,…,m}, each corresponding to a distribution _k. This in turn results in m(m-1) error probabilities of incorrectly inferring hypothesis H_i as hypothesis H_j. As stated in Section <ref>, to simplify our analysis, we consider a scenario somewhat analogous to the “red alert” <cit.> problem in “unequal error protection” <cit.>, i.e., there is one distinct hypothesis H_1, the inference of which is more crucial than that of others (e.g., presence of cancer). We focus on maximizing the minimum of the m-1 error exponents corresponding to the m-1 ways of incorrectly deciding H_1 as H_j , j1. For this problem of unequal m-ary hypothesis testing, we introduce the PUT in (<ref>). We can further simplify the trade-off in the high privacy regime using Proposition <ref> to obtain the following PUT: max_ min{1/2(_k-_1)^2, k=2,…,m} s.t. 1/2∑_i=1^Mp_ki_i^2≤ϵ_kk=1,2,…,m(√(_0))^T=0. Recall that ∈ℝ^M× N is a perturbation matrix such that the privacy mechanismis related to _0 as =_0+[√(_0)], and _i is the i^th row of . For ease of analysis, we start from a simplified version of (<ref>) without the constraint (<ref>), which can be transformed to a semi-definite program (SDP) as summarized in the following lemma. Based on an optimal solution of the SDP, a scheme is proposed for constructing an optimal solution ^* of (<ref>) satisfying (<ref>). The optimization in (<ref>) with constraint (<ref>) is equivalent to an SDPwith (M× M matrix) variable =^T given as max_,t ts.t. 1/2((_k-_1)^T(_k-_1)) ≥ t k=2,…,m1/2([_k]) ≤ϵ_k k=1,2,…,m≽ 0 where [_k] is a diagonal matrix with i^th diagonal entry equal to p_ki, and [[_k]] is the traceof the matrix [_k].Lemma <ref> stems from the observation that both the objective function (<ref>) and constraints (<ref>) are linear functions of the entries of the positive semidefinite matrix ^T. The proof for Lemma <ref> is provided in Appendix <ref>. The following theorem shows that the solution of the SDP in (<ref>) yields an optimal privacy mechanism for the approximated PUT in (<ref>). An optimal privacy mechanism ' for the optimization problem in (<ref>) is '=_0+^*[√(_0)] where _0 is the perfect privacy mechanism with rows _0, ^* is an optimal solution of (<ref>) obtained from an optimal solution ^*=^*[](^*)^T with l≜rank(^*)≤ M of the SDP in (<ref>). It suffices to restrict the output alphabet size N to l+1, such that ^*=^*^*^T, where ^* is an M× (l+1) rectangular diagonal matrix whose diagonal entries are the square roots of the l non-zero eigenvaluesof ^*, ^* is a unitary matrix consisting of the eigenvectors of ^*, andis an (l+1)× (l+1)unitary matrix whose the first l columns are orthogonal to √(_0). Let ^* be the optimal solution of the SDP in (<ref>) and l≜rank(^*). We decompose ^* via aneigenvalue decomposition as follows: ^*=^*[](^*)^T. 
Here, [] is an l× l diagonal matrix consisting of entries in the eigenvalue vectorand the columns of the M× l matrix ^* are the l corresponding eigenvectors. Construct an l× (l+1) rectangular diagonal matrix ^* by adding one all-zero column to [√()]. Let N=l+1. By choosing a (l+1)× (l+1) unitary matrix , whose last column parallel to the (l+1)-dimensional row vector √(_0), we design a matrix ^* as ^*^*^T such that ^*(^*)^T=^*^*(^*)^T(^*)^T=^*. From Lemma <ref>, the SDP in (<ref>) is equivalent to the simplified (<ref>) without (<ref>). Therefore, ^* optimizes the simplified (<ref>). In addition, ^*(√(_0))^T=^*^*^T(√(_0))^T =^*[ √(λ_1) 0; ⋱ ⋮;√(λ_l)0 ]_l× (l+1)[ 0; ⋮; 0; √(_0); ]_(l+1)=^*0=0 where (<ref>) follows from the fact that the last column of the (l+1)× (l+1) unitary matrixis parallel to √(_0), such that the first l columns ofare orthogonal to √(_0), and the inner product of its last column and √(_0) is the Euclidean norm of √(_0). Therefore, the ^* constructed above is feasible and attains the optimal value of (<ref>). Note that the size of output alphabet is at most M+1. For the special case of binary hypothesis testing, we have shown in Theorem <ref> that the rank of B^* is 1 and therefore, N=2. In the absence of any constraints in (<ref>), analogous to the binary hypothesis test, one would choose min{m-1,M-1} columns ofto span the space contained by the vectors _k-_1 for all k≠ 1. However, the constraints in (<ref>) depend explicitly on the vectors _k, and in fact, in (<ref>) at least one constraint will be tight at the optimal solution ^*. Thus, analogous to the binary hypothesis result, we expect the optimal mechanism to depend inversely on one or more _k. We show that this is indeed the case for binary sources in the following subsection.§.§ m-ary Hypotheses Testing with Binary SourcesIf all the m distributions _k are Bernoulli, the m-1 difference vectors _k-_1 in(<ref>) are collinear. Thus, the minimizing element in the objective is the one in which _k-_1 has the minimal Euclidean norm. Without loss of generality, assume _2-_1=min{_k-_1,k=2,…,m}. Therefore, (_2-_1)^2=min{(_k-_1)^2,k=2,…,m}. In this case, the E-IT approximation in (<ref>) reduces to max_ 1/2(_2-_1)^2 s.t. 1/2∑_i=1^2p_ki_i^2≤ϵ_kk=1,…,m(√(_0))^T=0. We notice that (<ref>) has the same form as (<ref>) (the E-IT approximation for binary hypothesis testing for the relative entropy setting), where the number of constraints in (<ref>) is m≥ 2. Specifically, the objective and constraints have the same structure as in (<ref>), and thus, the results in Theorem <ref> holds here. Therefore, from Theorem <ref>, the corresponding optimal privacy mechanism can be expressed as (<ref>) but with^*=_2-_1/2([∑_k=1^mη^*_k_k])^-1,where η^*_k≥ 0, k=1,…,m, are the dual variables for the m constraints in (<ref>).Note that for those η_k^* that are non-zero,the corresponding constraints in (<ref>) are tight, i.e., if η^*_k>0, 1/2∑_i=1^2p_ki_i^2=ϵ_k. Let 𝒦={k: η_k^*>0, k=1,…,m}. Thus, the ^* in (<ref>) depends inversely on a linear combination of the distributions indexed by 𝒦. Consequently, the optimal mechanism for the approximated PUT depends inversely on these distributions.§ NUMERICAL RESULTSIn this section, we numerically evaluate the utilities achieved by optimal privacy mechanisms for E-IT approximations intwo scenarios: m=2 (binary) and m=3 (ternary) hypothesis testing. 
Furthermore, for the binary hypothesis testing scenario, we consider both the relative entropy and Rényi divergence settings, while for the m=3 scenario, we only focus on the relative entropy setting. Our goal is to compare the maximal utility for the E-IT approximation with that achieved for the original PUT. To this end, we start by choosing a privacy leakage level ϵ_k=ϵ̃≪min_kH(_k), for all k∈{1,…,m}, for the E-IT approximation.Recall that for the relative entropy setting, (<ref>) in Theorem <ref> provides an optimal privacy mechanism 𝐖'(ϵ̃) for the E-IT approximation problem in (<ref>) with leakage bounds ϵ_k=ϵ̃ for all k. Specifically, for m=2, 𝐖'(ϵ̃) can also be expressed as (<ref>) in Theorem <ref>, where ^* andare the first columns of ^*^* andin (<ref>), respectively. From Corollary <ref>, in the high privacy regime 𝐖'(ϵ̃) in (<ref>) is also the optimal mechanism (for the approximated PUT) for binary hypothesis testing in Rényi divergence setting. To evaluate the performance of 𝐖', we compare its utility to that achieved by an optimal mechanism ^* of the original PUT problem (e.g., (<ref>) for the relative entropy setting or (<ref>) for the Rényi divergence setting). For a fair comparison of the utilities resulting from the E-IT and original PUTs, we choose the MI leakages to be the same for both cases. Thus, for the relative entropy setting (resp. Rényi divergence setting), we compare the values of D(𝐩_2𝐖'(ϵ̃) 𝐩_1𝐖'(ϵ̃)) (resp. D_α(𝐩_2𝐖' (ϵ̃)𝐩_1𝐖'(ϵ̃))) and D(𝐩_2𝐖^*(ϵ) 𝐩_1𝐖^*(ϵ)) (resp. D_α(𝐩_2𝐖^*(ϵ) 𝐩_1𝐖^*(ϵ))), where ϵ=max_k I(_k,𝐖'(ϵ̃)).For the original PUT problems in (<ref>) and (<ref>), the number of independent variables in 𝐖 is M(M-1) for M=N. Even for M=3, finding the optimal privacy mechanism 𝐖^*(ϵ) using exhaustive search techniques is computationally prohibitive. Therefore, we restrict our numerical analysis to binary sources, i.e., M=2; furthermore, for numerical tractability in computing ^*(ϵ), we assume that M=N=2, i.e., the output alphabet is binary.For the E-IT approximated PUTs, since the choice of _0 does not affect the optimal ^*, we choose 𝐰_0=(0.5, 0.5), for which from (<ref>) and (<ref>), we have 𝐯=± (√(0.5) ,-√(0.5)). To capture the high privacy regime, we restrict ϵ̃≤ 0.2min_kH(_k). For these parameters, the following two subsections illustrate and discuss the regimes in which the E-IT approximation is accurate. §.§ Binary Hypothesis Testing We consider four pairs of Bernoulli distributions as shown in Table <ref> for the two source classes (hypotheses) to evaluate the accuracy of optimal mechanisms for the E-IT approximation in the relative entropy and Rényi divergence settings. Figures <ref>-<ref> illustrate the normalized utilities for Pairs 1- 4 in Table <ref>, respectively, as a function of the normalized MI leakages, i.e., ϵ/min{H(𝐩_1),H(𝐩_2)}. In the four figures, the left andright y-axes are for normalized utilities in the relative entropy and Rényi divergence settings, respectively.Figures <ref> and <ref> show that ' and ^* have the same utilities in the regions highlighted by the black-dotted ellipses, in which ϵ is smaller than 0.5% and 0.1% of min{H(𝐩_1),H(𝐩_2)}, respectively. In contrast, for Figs. <ref> and <ref>, the utilities of ' and ^* are almost the same in the entire plotted range. From Figs. 
<ref>–<ref>, we deduce that for any two given distributions, there is high privacy regime in which the performance of the privacy mechanism ' for the E-IT approximation is almost optimal; however, the range of the regime is specific to the distribution pairs. In particular, when both distributions are close to the uniform or when both are far apart from the uniform as well as each other, the set of leakage values for which the privacy mechanism ' works well is larger. For the former, it can be seen that the E-IT approximations of the relative entropy and the MI are more accurate (cf. <cit.>); for the latter, the individualapproximation errors“cancel out” so the overall approximation is accurate. §.§ Ternary Hypothesis TestingWe numerically evaluate our results for ternary hypothesis testing using three Bernoulli distributions, one for each of the three hypotheses. As shown in Table <ref>, we consider two such triples. Figures <ref> and <ref> illustrate the normalized utilities for Triples 1 and 2, respectively, as a function of the normalized MI leakages, i.e., ϵ/min_k{H(𝐩_k),k∈{1,2,3}}. Fig. <ref> shows that the normalized utilities for ' and ^* are almost the same in the entire plotted range for Triple 1. As shown in Fig. <ref>, for Triple 2, the normalized utilities for 𝐖' and 𝐖^* are close only in the region where the MI-based leakages ϵ are less than 0.2% ofmin_k H(𝐩_k. As for the binary case, here too, our plots show that the leakage range for which the approximation is tight depends on the distributions. For Triple 1, the good performance of the optimal mechanism ' exists for a larger set of MI leakages than for Triple 2, because the three distributions in Triple 1 are close to uniform distribution such that the E-IT approximation is accurate. § CONCLUDING REMARKSWe have systematically studied the problem of publishing large datasets for binary and m-ary hypothesis testing under privacy constraints. Our goal, broadly, is to characterize the guarantees that can be made on the error exponents when a MI-based constraint on the leakage of data from any source classis bounded. Our model seeks to understand if one can find the true probability distribution of a given dataset among a set of possible distributions without revealing the respondents of the data. We have shown that the optimalPUTis achieved through a randomizing privacy mechanism which maximizes the minimum of a set of the relative entropiesbetween pairs of distributions (one for each hypothesis), while ensuring that the MI-based leakages for all source classes are bounded. Focusing on the high privacy regime, we have developed an E-IT approximation of the PUT problem. For this problem, we have shown that the optimal mechanism can be viewed as a perturbation of a perfect privacy mechanism where the perturbation is computed as a solution to a convex optimization problem. As is expected of statistical metrics such as relative entropy and MI, our results reveal that the randomizing mechanism perturbs the statistical outliers the most for each source class. Such a mechanism ensures both utility (predominantly provided by the non-outliers) while preserving privacy of those most vulnerable to inference attacks.Future work includes developing optimal mechanisms and PUTs for all possible leakage levels. Comparison with other privacy metrics is also of interest. § PROOF OF PROPOSITION <REF> Consider the high privacy regime in which 0≤ϵ_k ≪min{H(𝐩_k),k∈{1,..,m}},k∈{1,..,m}. 
In this regime, 𝐖 can be written as a perturbation of 𝐖_0 via 𝐖=𝐖_0+Θ, where 𝐖_0 is a mechanism achieving perfect privacy with all rows equal to 𝐰_0, where 𝐰_0 is chosen such that its entries satisfy w_0j≠ 0, ∀ j∈{1,…,N}, and Θ is a matrix with ∑_j=1^N Θ_ij = 0, ∀ i∈{1,…,M}, and |Θ_ij| ≤ ρ w_0j, ∀ i∈{1,…,M}, j∈{1,…,N}, where the radius of the neighborhood around 𝐰_0 is ρ∈[0,1). Note that (<ref>) is derived from the row stochasticity of 𝐖 and 𝐖_0. The constraint in (<ref>) captures the fact that approximating about a perfect-privacy-achieving mechanism requires restricting the entries of the perturbation matrix Θ to be within a fraction ρ of 𝐰_0. The perturbation modeled in (<ref>)-(<ref>) implies that every row of 𝐖 and the output distributions 𝐩_1𝐖 and 𝐩_k𝐖 for all k∈{2,…,m} are in a neighborhood about 𝐰_0 given by |W_ij-w_0j| ≤ ρ w_0j, |(𝐩_k𝐖)_j-w_0j| ≤ ρ w_0j, for all k∈{1,…,m}, for all i∈{1,…,M} and j∈{1,…,N}. In this neighborhood, we can approximate the relative entropy D(𝐩_k𝐖 ‖ 𝐩_1𝐖) using a Taylor series around 𝐖_0 as D(𝐩_k𝐖 ‖ 𝐩_1𝐖) = 1/2∑_j=1^N (∑_i=1^M p_kiΘ_ij - ∑_i=1^M p_1iΘ_ij)^2/∑_i=1^M p_kiW_ij + o(‖(𝐩_k-𝐩_1)Θ[(𝐩_k𝐖)^-1/2]‖^2_∞) ≈ 1/2‖(𝐩_k-𝐩_1)Θ[(𝐩_k𝐖)^-1/2]‖^2 ≈ 1/2‖(𝐩_k-𝐩_1)Θ[(𝐰_0)^-1/2]‖^2, where (<ref>) results from approximating the relative entropy by the χ^2-divergence <cit.>, and (<ref>) results from applying the neighborhood condition in (<ref>). Similarly, one can approximate the MI between the source class k∈{1,…,m} and its output as I(𝐩_k,𝐖) = 1/2∑_i=1^M p_ki(∑_j=1^N (W_ij-∑_i=1^M p_kiW_ij)^2/W_ij + o(‖(𝐖_i-𝐩_k𝐖)[(𝐖_i)^-1/2]‖^2_∞)) ≈ 1/2∑_i=1^M p_ki‖(𝐖_i-𝐩_k𝐖)[(𝐖_i)^-1/2]‖^2 ≈ 1/2∑_i=1^M p_ki‖(𝐰_0+Θ_i-𝐩_k𝐖)[(𝐰_0)^-1/2]‖^2 ≈ 1/2∑_i=1^M p_ki‖Θ_i[(𝐰_0)^-1/2]‖^2, where 𝐖_i and Θ_i are the i^th rows of 𝐖 and Θ, respectively. Let 𝐀 be a matrix with entries A_ij=Θ_ij/√(w_0j), i∈{1,…,M}, j∈{1,…,N}. From (<ref>), 𝐀(√(𝐰_0))^T=0, where √(𝐰_0) is the vector whose entries are the square roots of the entries of 𝐰_0. Thus, the approximations in (<ref>) and (<ref>) lead to (<ref>) and (<ref>), respectively.
§ PROOF OF THEOREM <REF>
Consider the following optimization problem obtained from (<ref>) without the constraint 𝐀(√(𝐰_0))^T=0: max_𝐀 ∑_i,j=1^M 1/2(𝐩_2-𝐩_1)_i(𝐩_2-𝐩_1)_j𝐀_i𝐀_j^T s.t. 1/2∑_i=1^M p_ki𝐀_i𝐀_i^T ≤ ϵ_k, k=1,2. Let Δ_i ≜ (𝐩_1-𝐩_2)_i, i∈{1,…,M}. Furthermore, let 𝐚 be a row vector with entries a_i for all i, such that |a_i| is the Euclidean norm of the i^th row 𝐀_i of 𝐀. Let Ω denote the symmetric matrix of the cosines of the angles between the rows of 𝐀, such that its entries are Ω_ij ≜ cos∠(𝐀_i,𝐀_j), i,j∈{1,…,M}, with |Ω_ij| ≤ 1 for i≠ j and Ω_ij=1 for i=j. Rewriting (<ref>) with these variables, we have max_{𝐚,Ω} ∑_i=1^M∑_j=1^M 1/2Δ_iΔ_j|a_ia_j|Ω_ij s.t. Ω_ii = 1, |Ω_ij| ≤ 1 for i≠ j ∈{1,…,M}, 1/2∑_i=1^M p_ki a_i^2 ≤ ϵ_k, k=1,2. Consider first the optimization over Ω. Since the objective is linear in Ω and the feasible region of Ω is a hypercube, (<ref>) is a linear program whose optimal solution is at one of the extreme points of the hypercube <cit.>, i.e., the optimal solution Ω^* has entries |Ω_ij^*|=1 for all i,j. Thus, all the rows of an 𝐀 maximizing (<ref>) are parallel, and therefore the optimal solution 𝐀^* of (<ref>) is a rank-1 matrix. In addition, from the objective function in (<ref>), if the signs of Δ_1Δ_i and Δ_1Δ_j are known for any i,j∈{2,…,M}, the sign of Δ_iΔ_j can be determined. Furthermore, maximizing the objective requires that Ω_ij have the same sign as its coefficient Δ_iΔ_j. Therefore, Ω^*_ij=Ω^*_1iΩ^*_1j for all i,j∈{2,…,M}, i.e., Ω^* has only M-1 independent entries Ω^*_1j, j∈{2,…,M}, with Ω_ii^*=1 for all i.
Thus, we see that the optimization in (<ref>) depends on only M values of |a_i|, i∈{1,…,M}, and M-1 signs Ω^*_1i, i∈{2,…,M}. Let 𝐯 denote a unit norm vector with no zero entry, and let θ_i^* for all i represent the direction of the i^th row of 𝐀 with respect to 𝐯, such that Ω^*_ij=θ_i^*θ_j^*, and the i^th row of 𝐀 can be written as 𝐀_i=θ_i^*|a_i|𝐯=a_i𝐯. The optimization in (<ref>) can now be written as a function of the vector 𝐚 as max_𝐚 ∑_i=1^M∑_j=1^M 1/2(𝐩_2-𝐩_1)_i(𝐩_2-𝐩_1)_j a_i a_j s.t. 1/2∑_i=1^M p_ki a_i^2 ≤ ϵ_k, k = 1,2. The optimal solution 𝐀^* of (<ref>) is related to the 𝐚^* optimizing (<ref>) as follows: 𝐀^*=(𝐚^*)^T𝐯. The optimal solution in (<ref>) yields both the magnitude and sign of a_i for all i. Also, 𝐯 in (<ref>) can be chosen to satisfy 𝐀^*(√(𝐰_0))^T=(𝐚^*)^T𝐯(√(𝐰_0))^T=0. Thus, by using (<ref>) and (<ref>) and solving for 𝐚^* in (<ref>), we obtain 𝐀^* in (<ref>) as desired. For any 𝐰_0∈ℝ^N, the condition N=2 is sufficient to obtain a 𝐯 satisfying 𝐯(√(𝐰_0))^T=0, i.e., 𝐀^*(√(𝐰_0))^T=0. Therefore, binary output alphabets suffice.
§ PROOF OF THEOREM <REF>
To prove Theorem <ref>, we use Theorem <ref> and two lemmas. The problem in (<ref>) maximizes a convex function over a convex set, and thus it is not a convex program. However, we show how the problem can be reduced to a convex program, and we also obtain a closed-form solution. The following convex program completely determines the solutions of (<ref>): max_𝐚 1/2λ_p𝐚(𝐯_p)^T s.t. 1/2𝐚[𝐩_k]𝐚^T ≤ ϵ_k, k=1,2, such that the optimal solutions of (<ref>) are ±𝐚^*, where 𝐚^* is the optimal solution of (<ref>). For the optimization problem in (<ref>), the matrix (𝐩_2-𝐩_1)^T(𝐩_2-𝐩_1) is rank-1 with eigenvalue λ_p=‖𝐩_2-𝐩_1‖^2 and eigenvector 𝐯_p=(𝐩_2-𝐩_1)/‖𝐩_2-𝐩_1‖. Thus, we have 1/2𝐚(𝐩_2-𝐩_1)^T(𝐩_2-𝐩_1)𝐚^T=1/2λ_p(𝐚(𝐯_p)^T)^2, leading to the following optimization problem max_𝐚 1/2λ_p(𝐚(𝐯_p)^T)^2 s.t. 1/2𝐚[𝐩_k]𝐚^T ≤ ϵ_k, k = 1,2. In (<ref>) and (<ref>), the two objectives depend on 𝐚 in the same manner and their constraint functions are the same. Hence the optimal solution 𝐚^* of (<ref>) optimizes (<ref>). Since the objective and constraint functions of (<ref>) are even, -𝐚^* is feasible and yields the optimal value, i.e., -𝐚^* is also optimal for (<ref>). The optimal solution 𝐚^* of (<ref>) can be evaluated by observing that at 𝐚^*, either one or both constraints are active. The following lemma summarizes the optimal solution of (<ref>). From Lemma <ref>, one can then obtain the optimal solution (<ref>). The optimal solutions of (<ref>) are given by: * if only the first constraint is active, i.e., ϵ_1 and ϵ_2 satisfy (<ref>), the optimal solution 𝐚^* is (<ref>); * if only the second constraint is active, i.e., ϵ_1 and ϵ_2 satisfy (<ref>), the optimal solution 𝐚^* is (<ref>); * when both constraints are active, the optimal solution 𝐚^* is (<ref>) with η_1^*,η_2^*>0 satisfying (<ref>) and (<ref>). From Lemma <ref>, to find the optimal solutions of (<ref>), it suffices to find the optimal solution to (<ref>). In (<ref>), the objective function is linear in 𝐚. Since 𝐩_1 and 𝐩_2 are interior points of the probability simplex, both [𝐩_1] and [𝐩_2] are positive definite, i.e., the two constraint functions are convex in 𝐚. Thus, this is a convex program. In addition, 𝐚=0 strictly satisfies the two constraints for positive ϵ_1 and ϵ_2, which means that (<ref>) satisfies Slater's condition <cit.>.
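Since Slater's condition holds, the program can be sanity-checked numerically before deriving its KKT system. A minimal sketch using SciPy (our code; the toy distributions and leakage budgets are assumptions, not values from the paper):

import numpy as np
from scipy.optimize import minimize

p1 = np.array([0.6, 0.4])
p2 = np.array([0.3, 0.7])
eps = np.array([0.01, 0.015])                     # toy leakage budgets

diff = p2 - p1
lam_p = float(diff @ diff)                        # eigenvalue ||p2 - p1||^2
v_p = diff / np.linalg.norm(diff)                 # unit eigenvector

objective = lambda a: -0.5 * lam_p * (a @ v_p)    # negate: SciPy minimizes
cons = [{"type": "ineq",
         "fun": lambda a, p=p, e=e: e - 0.5 * float(a @ (p * a))}
        for p, e in zip((p1, p2), eps)]           # (1/2) a diag(p_k) a^T <= eps_k

res = minimize(objective, x0=np.zeros(2), constraints=cons, method="SLSQP")
a_star = res.x
print("a* =", a_star, " objective =", 0.5 * lam_p * (a_star @ v_p))
print("constraint slacks:",
      [e - 0.5 * float(a_star @ (p * a_star)) for p, e in zip((p1, p2), eps)])

The strictly feasible starting point a = 0 is exactly the Slater point used in the argument above; the printed slacks indicate which of the two leakage constraints is active at the optimum, matching the case analysis of the lemma below.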
Therefore, the convex program has zero duality gap, and the optimal solutions are given by the following Karush-Kuhn-Tucker (KKT) conditions <cit.>: ∇{f_0(𝐚^*)+η_1^*(ϵ_1-f_1(𝐚^*))+η_2^*(ϵ_2-f_2(𝐚^*))}=0, η_1^*(ϵ_1-f_1(𝐚^*))=0, η_2^*(ϵ_2-f_2(𝐚^*))=0, f_1(𝐚^*) ≤ ϵ_1, f_2(𝐚^*) ≤ ϵ_2, η_1^* ≥ 0, η_2^* ≥ 0, where f_0, f_1, and f_2 represent the objective and the two constraint functions of (<ref>), respectively, 𝐚^* is the optimal solution of (<ref>), and η^*_1 and η^*_2 are the optimal solutions of the dual problem of (<ref>). From (<ref>), we have 𝐚^*=(λ_p/2)𝐯_p(η_1^*[𝐩_1]+η_2^*[𝐩_2])^-1. When η^*_1>0 and η^*_2=0, i.e., only the first constraint is active, the optimal solution 𝐚^* of (<ref>) is 𝐚^*=(λ_p/2)𝐯_p[(η_1^*𝐩_1)^-1], such that from (<ref>), η_1^*=√((λ_p)^2𝐯_p[(𝐩_1)^-1](𝐯_p)^T/(8ϵ_1)). Substituting η_1^* from (<ref>) in (<ref>), we obtain 𝐚^*=√(2ϵ_1/(𝐯_p[(𝐩_1)^-1](𝐯_p)^T))𝐯_p[(𝐩_1)^-1]. In addition, for η^*_2=0, (<ref>) is a strict inequality if and only if ϵ_1 and ϵ_2 satisfy (<ref>). When η^*_1=0 and η^*_2>0, by the same deduction based on (<ref>) and (<ref>), the optimal solution is 𝐚^*=√(2ϵ_2/(𝐯_p[(𝐩_2)^-1](𝐯_p)^T))𝐯_p[(𝐩_2)^-1], and (<ref>) is a strict inequality if and only if ϵ_1 and ϵ_2 satisfy (<ref>). When η^*_1>0 and η^*_2>0, the optimal solution 𝐚^* is given by (<ref>). From (<ref>) and (<ref>), we have f_1(𝐚^*)=ϵ_1 and f_2(𝐚^*)=ϵ_2, such that η_1^* and η_2^* satisfy (<ref>) and (<ref>). Finally, since (<ref>), (<ref>) and (<ref>) yield 𝐚^* for (<ref>), the optimal solutions for (<ref>) are obtained by considering both solutions ±𝐚^*, as proved in Lemma <ref>. The proof of Theorem <ref> follows directly from Theorem <ref>, Lemma <ref> and Lemma <ref> as follows. The optimal privacy mechanism 𝐖' is a perturbation of 𝐖_0 given by 𝐖'=𝐖_0+𝐀^*[√(𝐰_0)], where 𝐀^* optimizes (<ref>). From Theorem <ref>, 𝐀^* is given by 𝐀^*=(𝐚^*)^T𝐯, where 𝐯 is a unit-norm N-dimensional vector that is orthogonal to √(𝐰_0), and 𝐚^* is the optimal solution of (<ref>) as presented in Lemma <ref>. Therefore, the optimal privacy mechanism 𝐖' is (<ref>).
§ PROOF OF LEMMA <REF>
From <cit.>, for α∈(0,1) the order-α Hellinger divergence[The order-α Hellinger divergence is ℋ_α(𝐩‖𝐪) = D_{f_α}(𝐩‖𝐪), where D_f(𝐩‖𝐪) = ∑_i q_i f(p_i/q_i) is the f-divergence and f_α(t) = (t^α-1)/(α-1).] ℋ_α and the relative entropy satisfy κ_α(β_2) ≤ D(𝐩_2‖𝐩_1)/ℋ_α(𝐩_2‖𝐩_1) ≤ κ_α(β_1^-1). Here, κ_α is a continuous function defined as κ_α(t) = (1-α)t log t/(1-t^α+α t-α) for t∈(0,1)∪(1,∞), κ_α(1) = α^-1 log e, and κ_α(0) = log e, and β_1 and β_2, which depend on 𝐩_1 and 𝐩_2, are defined as β_1 = exp(-log max_i p_2i/p_1i) = (max_i p_2i/p_1i)^{log(1/e)} and β_2 = exp(-log max_i p_1i/p_2i) = (max_i p_1i/p_2i)^{log(1/e)}. Assuming 𝐩_1 ≠ 𝐩_2, we know that max_i p_1i/p_2i, max_i p_2i/p_1i > 1. Since log(1/e)<0, β_1,β_2<1. However, it is also clear that 𝐩_2→𝐩_1 ⟹ β_j→1, ∀ j = 1,2. Since κ_α is continuous and κ_α(1) = α^-1 log e, from (<ref>), 𝐩_2→𝐩_1 ⟹ κ_α(β_2)→(log e)/α and κ_α(β_1^-1)→(log e)/α. Consequently, from (<ref>), 𝐩_2→𝐩_1 ⟹ D(𝐩_2‖𝐩_1)/ℋ_α(𝐩_2‖𝐩_1) → (log e)/α. In addition, from <cit.>, we know that the order-α Rényi divergence and the order-α Hellinger divergence admit the following relationship for α∈(0,1)∪(1,∞): D_α(𝐩_2‖𝐩_1) = (1/(α-1))log(1+(α-1)ℋ_α(𝐩_2‖𝐩_1)). The proof of (<ref>) is completed by combining (<ref>) and (<ref>).
§ PROOF OF LEMMA <REF>
We first replace the matrix variable 𝐀 of (<ref>) by a positive semi-definite symmetric matrix 𝐐=𝐀𝐀^T. The objective function can thus be rewritten as 1/2‖(𝐩_k-𝐩_1)𝐀‖^2 = 1/2(𝐩_k-𝐩_1)𝐐(𝐩_k-𝐩_1)^T = 1/2tr(𝐐(𝐩_k-𝐩_1)^T(𝐩_k-𝐩_1)) for k∈{2,…,m}.
For k∈{1,…,m} the constraint functions are 1/2∑_i=1^M p_ki‖𝐀_i‖^2 = 1/2∑_i=1^M p_ki Q_ii = 1/2tr(𝐐[𝐩_k]). Using an auxiliary scalar variable t to represent a common lower bound of the terms 1/2‖(𝐩_k-𝐩_1)𝐀‖^2 in the objective function, the problem (<ref>) without the constraint (<ref>) can be seen to be equivalent to (<ref>). Define the inner product between matrices 𝐗 and 𝐘 as 𝐗∙𝐘 ≜ ∑_i=1^M∑_j=1^M X_ij Y_ij = tr(𝐗^T𝐘). Then, the optimization problem in (<ref>) can be rewritten as -min_{𝐐,t} -t s.t. [ -t 0; 0^T 1/2𝐏_k ] ∙ [ 1 0; 0^T 𝐐 ] ≥ 0, ∀ k∈{2,…,m}, [ ϵ_k 0; 0^T -[𝐩_k] ] ∙ [ 1 0; 0^T 𝐐 ] ≥ 0, ∀ k∈{1,…,m}, [ 1 0; 0^T 𝐐 ] ≽ 0, where the M×M matrix 𝐏_k ≜ (𝐩_k-𝐩_1)^T(𝐩_k-𝐩_1). Since 𝐏_k (for k∈{2,…,m}) and [𝐩_k] (for k∈{1,…,m}) are symmetric positive semi-definite matrices, by referring to <cit.>, we know that (<ref>) is an SDP. | http://arxiv.org/abs/1704.08347v1 | {
"authors": [
"Jiachun Liao",
"Lalitha Sankar",
"Vincent Y. F. Tan",
"Flavio P. Calmon"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170426204858",
"title": "Hypothesis Testing under Mutual Information Privacy Constraints in the High Privacy Regime"
} |
1] Jason M. Klusowski ([email protected]) 2] Dana Yang ([email protected]) 2] W. D. Brinda ([email protected]) [1] Department of Statistics and Biostatistics, Rutgers University – New Brunswick [2] Department of Statistics and Data Science, Yale University
Estimating the Coefficients of a Mixture of Two Linear Regressions by Expectation Maximization
December 30, 2023
==============================================================================================
We give convergence guarantees for estimating the coefficients of a symmetric mixture of two linear regressions by expectation maximization (EM). In particular, we show that the empirical EM iterates converge to the target parameter vector at the parametric rate, provided the algorithm is initialized in an unbounded cone. Specifically, if the initial guess has a sufficiently large cosine angle with the target parameter vector, a sample-splitting version of the EM algorithm converges to the true coefficient vector with high probability. Interestingly, our analysis borrows from tools used in the problem of estimating the centers of a symmetric mixture of two Gaussians by EM. We also show that the population EM operator for mixtures of two regressions is anti-contractive from the target parameter vector if the cosine angle between the input vector and the target parameter vector is too small, thereby establishing the necessity of our conic condition. Finally, we give empirical evidence supporting this theoretical observation, which suggests that the sample-based EM algorithm performs poorly when initial guesses are drawn accordingly. Our simulation study also suggests that the EM algorithm performs well even under model misspecification (i.e., when the covariate and error distributions violate the model assumptions).
§ INTRODUCTION
Mixtures of linear regressions are useful for modeling different linear relationships between input and response variables across several unobserved heterogeneous groups in a population. First proposed by <cit.> as a generalization of “switching regressions”, this model has found broad applications in areas such as plant science <cit.>, musical perception theory <cit.>, and educational policy <cit.>. In this paper, we consider estimating the model parameters in a symmetric two component mixture of linear regressions. Towards a theoretical understanding of this model, suppose we observe data 𝒟_n = {(X_i, Y_i)}_i=1^n, where Y_i = R_i⟨θ^*,X_i⟩ + ε_i, X_i i.i.d.∼ N(0, I_d), ε_i i.i.d.∼ N(0, σ^2), R_i i.i.d.∼ Rademacher(1/2), and {X_i}, {ε_i}, and {R_i} are independent of each other. In other words, each predictor variable is Gaussian, and the response is centered at either the θ^* or -θ^* linear combination of the predictor. The two classes are equally probable, and the label of each observation is unknown. We seek to estimate θ^* (or -θ^*, which produces the same model distribution). The likelihood function of the model, ℒ(𝒟_n; θ) = ∏_i=1^n[ 1/2ψ(X_i)ψ_σ(Y_i - ⟨θ,X_i⟩) + 1/2ψ(X_i)ψ_σ(Y_i + ⟨θ,X_i⟩)], where ψ(x) = (2π)^{-d/2}e^{-‖x‖^2/2} and ψ_σ(y) = (1/(√(2π)σ))e^{-y^2/(2σ^2)}, is a multi-dimensional, multi-modal (it has many spurious local maxima), and nonconvex objective function, and hence direct maximization (e.g., grid search) is intractable. Even the population likelihood (in the infinite data setting) has global maxima at -θ^* and θ^*, and a local minimum at the zero vector. Given these computational concerns, other less expensive methods have been used to estimate the model coefficients.
For example, mixtures of linear regressions can be interpreted as a particular instance of subspace clustering, since each regressor / regressand pair (X, Y) ∈ℝ^{d+1} lies in the d-dimensional subspace determined by their model parameter vectors (θ^* and -θ^*). When the covariates and errors are Gaussian, algebro-geometric and probabilistic interpretations of PCA <cit.> motivate related clustering schemes, since there is an inherent geometric aspect to such mixture models. Another competitor is the Expectation-Maximization (EM) algorithm, which has been shown to have desirable empirical performance in various simulation studies <cit.>, <cit.>, <cit.>. Introduced in a seminal paper of Dempster, Laird, and Rubin <cit.>, the EM algorithm is a widely used technique for parameter estimation, with common applications in latent variable models (e.g., mixture models) and incomplete-data problems (e.g., corrupted or missing data) <cit.>. It is an iterative procedure that monotonically increases the likelihood <cit.>. When the likelihood is not concave, it is well known that EM can converge to a non-global optimum <cit.>. However, recent work has side-stepped the question of whether EM reaches the likelihood maximizer, instead directly working out statistical guarantees on its loss. For certain well-specified models, these explorations have identified regions of local contractivity of the EM operator near the true parameter so that, when initialized properly, the EM iterates approach the true parameter with high probability. This line of research was spurred by <cit.>, which established general conditions under which a ball centered at the true parameter would be a basin of attraction for the population version of the EM operator. For a large enough sample size, the difference (in that ball) between the sample EM operator and the population EM operator can be bounded such that the EM estimate approaches the true parameter with high probability. That bound is the sum of two terms with distinct interpretations. There is an algorithmic convergence term γ^t‖θ^0 - θ^*‖ for initial guess θ^0, truth θ^*, and some modulus of contraction γ∈(0, 1); this comes from the analysis of the population EM operator. The second term captures statistical convergence and is proportional to sup_θ‖M(θ) - M_n(θ)‖, the supremum norm of the difference between the population and sample EM operators, M and M_n, respectively. This result is also shown for a “sample-splitting" version of EM, where the sample is partitioned into batches and each batch governs a single step of the algorithm. Our purpose here is to follow up on the analysis of <cit.> by proving a larger basin of attraction for the mixture of two linear models and by establishing an exact probabilistic bound on the error of the sample-splitting EM estimate when the initial guess falls in the specified region. In particular, we show that * The EM algorithm converges to the target parameter vector when it is initialized in a cone (defined in terms of the cosine similarity between the initial guess θ^0 and the target model parameter θ^*). * The EM algorithm can fail to converge to θ^* if the cosine similarity is too small.
In related works, typically some variant of the mean value theorem is employed to establish contractivity toward the true parameter, and the rate of geometric decay is then determined by relying heavily on the fact that the initial guess belongs to a bounded set and is not too far from the target parameter vector (i.e., a ball centered at the target parameter vector). Our technique relies on Stein's Lemma, which allows us to reduce the problem to the two-dimensional case and exploit certain monotonicity properties of the population EM operator. Such methods allow one to be very careful and explicit in the analysis and more cleanly reveal the role of the initial conditions. These results cannot be deduced from preexisting works (such as <cit.>), even by sharpening their analysis. Our improvements are not solely in terms of constants. Indeed, we will show that as long as the cosine angle between the initial guess and the target parameter vector (i.e., their degree of alignment) is sufficiently large, the EM algorithm converges to the target parameter vector θ^*. In particular, the norm of the initial guess can be arbitrarily large, provided the cosine angle condition is met. In the machine learning community, mixtures of linear regressions are known as Hierarchical Mixtures of Experts (HME) and, there, the EM algorithm has also been employed <cit.>. The mixtures of linear regressions problem has also drawn recent attention from other scholars (e.g., <cit.>), although none of them have attempted to sharpen the EM algorithm, in the sense that many works still require initialization in a small ball around the target parameter vector. For example, the general case with multiple components was considered in <cit.>, but initialization is still required to be in a ball around each of the true component coefficient vectors. This paper is organized as follows. In Section <ref>, we explain the model and explain how the population EM operator is contractive toward the true parameter on a cone in ℝ^d. We also show that the operator is not contractive toward the true parameter on certain regions of ℝ^d. We connect our problem to phase retrieval in sec:phase and borrow preexisting techniques to find a good initial guess in sec:initial. Section <ref> looks at the behavior of the sample-splitting EM operator in this cone and states our main result in the form of a high-probability bound. sec:proof and sec:proofthm are devoted to proving the contractivity of the population EM operator toward the target vector over a cone and proving our main result, respectively. A discussion of our findings, including evidence of the failure of the EM algorithm for poor initial guesses from a simulated experiment, is provided in sec:discussion. A simulation study of the EM algorithm under model misspecification is also given therein. Finally, more technical proofs are relegated to app:appendix.
§ THE EMPIRICAL AND POPULATION EM OPERATOR
The EM operator for estimating θ^* (see <cit.> for a derivation) is M_n(θ) = (1/n∑_i=1^n X_iX_i^⊤)^{-1}[1/n∑_i=1^n(2ϕ(Y_i⟨θ, X_i⟩/σ^2)-1)X_iY_i], where ϕ(z) = 1/(1+e^{-2z}) is a horizontally stretched logistic sigmoid. Here (1/n∑_i=1^n X_iX_i^⊤)^{-1} is the inverse of the Gram matrix 1/n∑_i=1^n X_iX_i^⊤.
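A direct implementation of this operator takes only a few lines of NumPy. The following sketch is our code, not the authors'; the toy usage reuses σ=1, n=1000, d=2, and θ^* = (-7/25, 24/25)^⊤ from the simulation study in the Discussion, while the initial guess is an arbitrary choice of ours.

import numpy as np
from scipy.special import expit   # expit(2z) = 1/(1+exp(-2z)) = phi(z)

def em_operator(theta, X, Y, sigma=1.0):
    """One step of the empirical EM operator M_n for the symmetric
    two-component mixture of linear regressions."""
    n = X.shape[0]
    phi = expit(2.0 * Y * (X @ theta) / sigma**2)   # phi(Y_i <theta, X_i> / sigma^2)
    gram = X.T @ X / n                              # (1/n) sum_i X_i X_i^T
    rhs = ((2.0 * phi - 1.0) * Y) @ X / n           # (1/n) sum_i (2 phi_i - 1) X_i Y_i
    return np.linalg.solve(gram, rhs)

# Toy usage: simulate from the model and iterate theta^t <- M_n(theta^{t-1}).
rng = np.random.default_rng(0)
n, d, sigma = 1000, 2, 1.0
theta_star = np.array([-7/25, 24/25])
X = rng.standard_normal((n, d))
R = rng.choice([-1.0, 1.0], size=n)                 # Rademacher labels
Y = R * (X @ theta_star) + sigma * rng.standard_normal(n)

theta = np.array([1.0, 1.0]) / np.sqrt(2)           # initial guess (ours)
for _ in range(25):
    theta = em_operator(theta, X, Y, sigma)
err = min(np.linalg.norm(theta - theta_star), np.linalg.norm(theta + theta_star))
print("distance to {+theta*, -theta*}:", err)

Because θ^* and -θ^* parametrize the same distribution, the distance is measured to the closer of the two sign choices.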
In the limit with infinite data, the population EM operator replaces sample averages with expectations, and thus M(θ) = 𝔼[2ϕ(Y⟨θ, X⟩/σ^2)XY]. As we mentioned in the introduction, <cit.> showed that if the EM operator eq:sample-em is initialized in a ball around θ^* with radius proportional to ‖θ^*‖, then the EM algorithm converges to θ^* with high probability. It is natural to ask whether this good region of initialization can be expanded, possibly allowing for initial guesses with unbounded norm. The purpose of this paper is to relax the aforementioned ball condition of <cit.> and show that if the cosine angle between θ^* and the initial guess is not too small, the EM algorithm also converges. We also simplify the analysis considerably and use only elementary facts about multivariate Gaussian distributions. Our improvement is manifested in the set containment {θ: ‖θ-θ^*‖ ≤ √(1-ρ^2)‖θ^*‖} ⊆ {θ : ⟨θ, θ^*⟩ ≥ ρ‖θ‖‖θ^*‖}, ρ∈[-1,1], since for all θ in the set on the left side, ⟨θ,θ^*⟩ = 1/2(‖θ‖^2+‖θ^*‖^2-‖θ-θ^*‖^2) ≥ 1/2(‖θ‖^2+ρ^2‖θ^*‖^2) = ρ‖θ‖‖θ^⋆‖ + 1/2(ρ‖θ^*‖-‖θ‖)^2 ≥ ρ‖θ‖‖θ^⋆‖. The conditions in <cit.> require the initial guess θ^0 to be at most ‖θ^⋆‖/32 away from θ^⋆, which corresponds to ‖θ‖ ≤ (1+√(1-ρ^2))‖θ^*‖ and ρ = √(1-(1/32)^2) ≈ 0.999, whereas our condition allows for the norm of θ to be unbounded and ρ > 0.85. Let θ_0 be the unit vector in the direction of θ and let θ^⊥_0 be the unit vector that belongs to the plane spanned by {θ^*, θ} and is orthogonal to θ (i.e., θ^⊥_0 ∈ span{θ, θ^*} and ⟨θ, θ_0^⊥⟩ = 0). Let θ^⊥ = ‖θ‖θ^⊥_0. We will later show in sec:proof that M(θ) belongs to span{θ,θ^⋆}, as illustrated in fig:thetas. Denote the angle between θ^* and θ_0 as α, with ‖θ^*‖cosα = ⟨θ_0, θ^*⟩ and ρ = cosα. As we will see from the following results, as long as cosα is not too small, M(θ) is a contracting operation: M(θ) is always closer to the truth θ^* than θ is. The next lemma allows us to derive a region of ℝ^d on which M is contractive toward θ^*. We defer its proof until sec:proof. For any θ in ℝ^d with ⟨θ, θ^*⟩ > 0, ‖M(θ)-θ^*‖ ≤ γ‖θ-θ^*‖, where γ = √(κ)√(1+4((|⟨θ^⊥, θ^*⟩|+σ^2)/⟨θ, θ^*⟩)^2), and κ^2 = max{1-|⟨θ_0,θ^⋆⟩|^2/(σ^2+‖θ^*‖^2), 1-⟨θ,θ^*⟩/(σ^2+⟨θ,θ^*⟩)} < 1. If we define the input signal-to-noise ratio as η'=‖θ‖/σ and the model signal-to-noise ratio (SNR) as η=‖θ^⋆‖/σ and use the fact that ‖θ^⋆‖cosα = ⟨θ_0,θ^*⟩, then the contractivity constant eq:gamma can be rewritten as max{(1-η^2cos^2α/(1+η^2))^{1/4}, (1-η'ηcosα/(1+η'ηcosα))^{1/4}}√(1+4(tanα+1/(η'ηcosα))^2). If η' ≥ 20, η ≥ 40, and cosα ≥ 0.85, then κ is bounded by a universal constant less than 1/2 and γ is bounded by a universal constant less than 1, implying the population EM iteration θ^t← M(θ^{t-1}) converges to the truth θ^* exponentially fast.
§ RELATIONSHIP TO PHASE RETRIEVAL
The problem of estimating the true parameter vector in a mixture of two linear regressions is related to the phase retrieval problem, where one has access to magnitude-only data according to the model Y = |⟨θ^*, X⟩|^2 + ε. In the no noise case, i.e., ε≡ 0, one can obtain the phase retrieval model from the symmetric two component mixture of linear regressions by squaring each response variable Y_i from eq:mixreg, and vice versa by setting Y_i = R_i√(Y'_i), where R_i i.i.d.∼ Rademacher(1/2) is independent of the data {(X_i, Y'_i)}_i=1^n. Here the sample subsets giving rise to the model parameters θ^* and -θ^* are {i : R_i sgn(⟨θ^*, X_i⟩) = 1} and {i : R_i sgn(⟨θ^*, X_i⟩) = -1}, respectively.
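Before treating the noisy case, the noiseless correspondence is easy to verify in simulation. A short sketch (our illustration; all variable names are ours):

import numpy as np

# Squaring mixture responses gives phase-retrieval responses, and attaching
# fresh Rademacher signs to the square roots gives data distributed as the
# mixture model again.
rng = np.random.default_rng(1)
n, d = 5, 3
theta_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
R = rng.choice([-1.0, 1.0], size=n)

Y = R * (X @ theta_star)           # mixture-of-regressions responses (no noise)
Y_phase = Y**2                     # phase-retrieval responses |<theta*, X_i>|^2

R_new = rng.choice([-1.0, 1.0], size=n)
Y_mix = R_new * np.sqrt(Y_phase)   # distributed as the mixture model
print(np.allclose(Y_mix**2, Y_phase))   # True: both encode the same magnitudes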
Even in the case of noise, squaring each response variable and subtracting the variance σ^2 of the error distribution yields Y'_i = Y^2_i - σ^2 = |⟨θ^*, X_i⟩|^2 + 2R_iε_i⟨θ^*, X_i⟩ + (ε^2_i-σ^2) = |⟨θ^*, X_i⟩|^2 + ξ(X_i, R_i, ε_i), where ξ(X_i, R_i, ε_i) is a mean zero random variable with variance 4σ^2‖θ^*‖^2 + 2σ^4. This is essentially the phase retrieval model eq:phase with heteroskedastic errors. See also <cit.> for a similar reduction to the “Noisy Phase Model”, where the measurement error is pre-added to the inner product and then squared, viz., |⟨θ^*, X⟩ + ε|^2. Recent algorithms used to recover θ^* from (X, Y) include PhaseLift <cit.>, PhaseMax <cit.>, PhaseLamp <cit.> and Wirtinger flow <cit.>, to name a few. PhaseLift operates by solving a semi-definite relaxation of the nonconvex formulation of the phase retrieval problem. PhaseMax and PhaseLamp solve a linear program over a polytope via convex programming. Finally, Wirtinger flow is an iterative gradient-based method that requires proper initialization. Parallel to our work, <cit.> reveal that exact recovery (when n, d → +∞) in PhaseMax is governed by a critical threshold <cit.>, which is measured in terms of the cosine angle between the initial guess and the target parameter vector. Analogous to our lmm:negative (which is asymptotic in the sense that n → +∞), they prove that recovery can fail if this cosine angle is too small. PhaseLamp is an iterative variant of PhaseMax that allows for a smaller cosine angle criterion than the critical threshold from PhaseMax. Our setting is slightly more general than <cit.> in that we allow for measurement error and our bounds are non-asymptotic in n and d.
§ INITIALIZATION
thm:main_splitting_empirical below requires the initial guess to have a good inner product with θ^*. But how should one initialize in practice? There is considerable literature showing the efficacy of initialization based on spectral <cit.>, <cit.>, <cit.> or Bayesian <cit.> methods. For example, inspired by the link eq:connection between phase retrieval and our problem, we can use the same spectral initialization method of <cit.> for the Wirtinger flow iterates (c.f., <cit.> for a similar strategy). That is, set λ^2 = d∑_i=1^n Y'_i/∑_i=1^n ‖X_i‖^2, and take θ^0 to be the eigenvector corresponding to the largest eigenvalue of 1/n∑_i=1^n Y'_i X_i X_i^⊤, scaled so that ‖θ^0‖ = λ. According to <cit.>, we are guaranteed that with high probability ‖θ^0-θ^*‖ ≤ (1/8)‖θ^*‖, and hence by eq:innerprodbound, ⟨θ^0, θ^*⟩ ≥ √(1-(1/8)^2)‖θ^0‖‖θ^*‖ ≈ 0.992‖θ^0‖‖θ^*‖ and ‖θ^0‖ ≥ (7/8)‖θ^*‖. Provided that ‖θ^*‖ ≥ (8/7)·20σ, we will see in thm:main_splitting_empirical that this θ^0 satisfies our criteria for a good initial guess. Although the joint distributions of (X, Y) and (X, Y') are not exactly the same, for large n, 1/n∑_i=1^n ξ(X_i, R_i, ε_i) ≈ 0, and hence eq:lam and eq:eigen are approximately equal to the same quantity with Y'_i replaced by Y_i. The next lemma, proved in app:appendix, shows that the initialization conditions in rmk:contract are essentially necessary in the sense that contractivity of M toward θ^* can fail for certain initial guesses that do not meet our cosine angle criterion. In contrast, it is known <cit.> that the population EM operator for a symmetric mixture of two Gaussians Y ∼ 1/2N(θ^*, σ^2 I_d) + 1/2N(-θ^*, σ^2 I_d) is contractive toward θ^* on the entire half-space defined by ⟨θ, θ^*⟩ > 0.[Note that this is the best one can hope for: if ⟨θ, θ^*⟩ < 0 (resp. ⟨θ, θ^*⟩ = 0), then the population EM operator is contractive toward -θ^* (resp.
the zero vector). Thus, unless ⟨θ, θ^*⟩ = 0 (i.e., θ belongs to the hyperplane perpendicular to θ^*), the population EM is contractive towards either model parameter -θ^* or θ^*.] The disparity between the EM operators for the two models is revealed in the proof of the contractivity of M toward θ^* (see sec:proof). Indeed, we will see in rmk:smoothed that the population EM operator for mixtures of regressions is essentially a “stretched” version of the population EM operator for Gaussian mixtures. There is a subset of ℝ^d with positive Lebesgue measure, each of whose members θ satisfies ⟨θ, θ^*⟩ > 0 and ‖M(θ) - θ^*‖ > ‖θ-θ^*‖. While this result does not generally imply that the empirical iterates θ^t← M_n(θ^{t-1}) will fail to converge to θ^* for ⟨θ^0, θ^*⟩ > 0, it does suggest that difficulties may arise in this regime. Indeed, the discussion in sec:discussion gives empirical evidence for this theoretical observation.
§ MAIN THEOREM
As in <cit.>, we analyze a sample-splitting version of the EM algorithm, where for an allocation of n samples and T iterations, we divide the data into T subsets of size ⌊n/T⌋. We then perform the updates θ^t← M_n/T(θ^{t-1}), using a new subset of samples to compute M_n/T(θ) at each iteration. The advantage of sample-splitting is purely for ease of analysis. In particular, conditional on the portion of data used to construct M_n/T at iteration t, the distribution of θ^t depends on the other portion of the data only through θ^{t-1}. For the next theorem, let η^0 = ‖θ^0‖/σ denote the initial SNR and η = ‖θ^*‖/σ denote the model SNR. Let ⟨θ^0, θ^*⟩ > ρ‖θ^0‖‖θ^*‖ for ρ > 0.85, η^0 ≥ 20, and η ≥ 40. Fix δ∈(0, 1). Suppose furthermore that n ≥ max{cd log(T/δ), c'} for some positive universal constant c and positive constant c' = c'(ρ, σ, θ^*, θ^0). Then there exists a universal modulus of contraction γ∈(0,1) and a positive universal constant C such that the sample-splitting empirical EM iterates (θ^t)_{t=1}^T based on n/T samples per step satisfy ‖θ^t-θ^*‖ ≤ γ^t‖θ^0 - θ^*‖ + (C√(σ^2+‖θ^*‖^2)/(1-γ))√(dT log(T/δ)/n), with probability at least 1-δ. Note that T governs the number of iterations of the EM operator; if it is too small, the term γ^t‖θ^0 - θ^*‖ from thm:main_splitting_empirical may fail to reach the parametric rate. Hence, T must scale like log(n/d)/log(1/γ). We will prove thm:main_splitting_empirical in sec:proofthm. The main aspect of the analysis lies in showing that M_n satisfies an invariance property, i.e., M_n(𝒜) ⊆ 𝒜, where 𝒜 is a set on which M is contractive toward θ^*. The algorithmic error γ^t‖θ^0 - θ^*‖ is a result of repeated evaluation of the population EM operator θ^t ← M(θ^{t-1}) and the contractivity of M towards θ^* from lmm:main. The stochastic error (C√(σ^2+‖θ^*‖^2)/(1-γ))√(dT log(T/δ)/n) is obtained from a high-probability bound on max_{t∈[T]}‖M_n/T(θ^t)-M(θ^t)‖, which is contained in the proof of <cit.>.
§ PROOF OF LMM:MAIN
If W = ⟨θ^*, X⟩ + ε, a few applications of Stein's Lemma <cit.> yield M(θ) = 𝔼[(2ϕ(W⟨θ, X⟩/σ^2)-1)XW] = θ^*𝔼[2ϕ(W⟨θ, X⟩/σ^2)+2(W⟨θ, X⟩/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)-1] + θ𝔼[2(W^2/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)]. Letting A = 𝔼[2ϕ(W⟨θ, X⟩/σ^2)+2(W⟨θ, X⟩/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)-1], and B = 𝔼[2(W^2/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)], we see that M(θ) = θ^*A + θB belongs to span{θ, θ^*} = {λ_1θ + λ_2θ^* : λ_1, λ_2 ∈ℝ}.
This is a crucial fact that we will exploit multiple times. Observe that for any a in span{θ, θ^*}, a = ⟨θ_0, a⟩θ_0 + ⟨θ_0^⊥, a⟩θ_0^⊥, and ‖a‖^2 = |⟨θ_0, a⟩|^2 + |⟨θ_0^⊥, a⟩|^2. Specializing this to a = M(θ)-θ^* yields ‖M(θ)-θ^*‖^2 = |⟨θ_0, M(θ)-θ^*⟩|^2 + |⟨θ_0^⊥, M(θ)-θ^*⟩|^2. The strategy for establishing contractivity of M(θ) toward θ^* will be to show that the sum of |⟨θ_0, M(θ)-θ^*⟩|^2 and |⟨θ_0^⊥, M(θ)-θ^*⟩|^2 is less than γ^2‖θ-θ^*‖^2. This idea was used in <cit.> to obtain contractivity of the population EM operator for a mixture of two Gaussians. Due to the similarity of the two problems, it turns out that many of the same ideas transfer to our (more complicated) setting. To reduce this (d+1)-dimensional problem (X, Y) ∈ℝ^{d+1} to a two-dimensional problem (Z_1, Z_2) ∈ℝ^2, we first show that W⟨θ, X⟩/σ^2 𝒟= ΛZ_1|Z_2| + ΓZ^2_2, where Z_1, Z_2 i.i.d.∼ N(0,1). The coefficients Γ and Λ are Γ = ⟨θ, θ^*⟩/σ^2 and Λ^2 = (‖θ‖^2/σ^4)(σ^2+‖θ^*‖^2) - Γ^2 = (‖θ‖^2/σ^4)(σ^2+|⟨θ^⊥_0, θ^*⟩|^2). This is because of the distributional equality (W, ⟨θ, X⟩/σ^2) 𝒟= (√(σ^2+‖θ^*‖^2)Z_2, (Λ/√(σ^2+‖θ^*‖^2))Z_1+(Γ/√(σ^2+‖θ^*‖^2))Z_2). Note further that ΛZ_1Z_2 + ΓZ^2_2 𝒟= ΛZ_1|Z_2| + ΓZ^2_2 because they have the same moment generating function. Using this, we deduce that W⟨θ, X⟩/σ^2 𝒟= ΛZ_1|Z_2| + ΓZ^2_2. lmm:ABbounds implies that (1-κ)⟨θ^⊥_0, θ^*⟩ ≤ ⟨θ^⊥_0, M(θ)⟩ ≤ (1+√(κ))⟨θ^⊥_0, θ^*⟩, and consequently, |⟨θ^⊥_0, M(θ) - θ^*⟩| ≤ √(κ)|⟨θ^⊥_0, θ-θ^*⟩| ≤ √(κ)‖θ-θ^*‖. Next, we note that σ^4|Λ^2-Γ| = |‖θ‖^2(σ^2+|⟨θ^⊥_0, θ^*⟩|^2) - σ^2⟨θ, θ^*⟩| ≤ ‖θ‖^2|⟨θ^⊥_0, θ^*⟩|^2 + σ^2|⟨θ, θ-θ^*⟩| ≤ ‖θ‖(|⟨θ^⊥, θ^*⟩|+σ^2)‖θ-θ^*‖. Finally, define h(α, β) = 𝔼[(2ϕ(α|Z_2|(Z_1 + β|Z_2|))-1)(|Z_2|(Z_1 + β|Z_2|))]. Note that by definition of h, h(Λ, Γ/Λ) = ⟨θ, M(θ)⟩/(σ^2Λ). In fact, h is the one-dimensional population EM operator for this model when θ^* = β and σ^2 = 1. By the self-consistency property of EM <cit.>, h(β, β) = β. Translating this to our problem, we have that h(Γ/Λ, Γ/Λ) = Γ/Λ = ⟨θ, θ^*⟩/(σ^2Λ). Since h(Λ, Γ/Λ) - h(Γ/Λ, Γ/Λ) = ∫_{Γ/Λ}^{Λ} (∂h/∂α)(α, Γ/Λ)dα, we have from lmm:derivative, |⟨θ_0, M(θ)-θ^*⟩| ≤ (σ^2Λ/‖θ‖)|∫_{Γ/Λ}^{Λ}(∂/∂α)h(α, Γ/Λ)dα| ≤ (2√(κ)σ^2Λ/‖θ‖)|∫_{Γ/Λ}^{Λ}dα/α^2| = 2σ^2√(κ)|Λ^2-Γ|/(Γ‖θ‖) ≤ 2√(κ)((|⟨θ^⊥, θ^*⟩|+σ^2)/⟨θ, θ^*⟩)‖θ - θ^*‖. Combining this with inequality eq:perp_inequality yields eq:main_bound. This completes the proof of lmm:main. The function h is related to the EM operator for the one-dimensional symmetric mixture of two Gaussians model Y ∼ 1/2N(-β, 1) + 1/2N(β, 1). One can derive (see <cit.>) that the population EM operator is g(α, β) = 𝔼[(2ϕ(α(Z_1+β))-1)(Z_1+β)]. Then h(α, β) is a “stretched” version of g(α, β), as seen through the identity h(α, β) = 𝔼_{Z_2}[|Z_2|g(α|Z_2|, β|Z_2|)]. In light of this relationship, it is perhaps not surprising that the EM operator for the mixture of linear regressions problem also enjoys a large basin of attraction. On the other hand, from <cit.>, the population EM operator M̃ for the symmetric two component mixture of Gaussians Y ∼ 1/2N(θ^*, σ^2 I_d) + 1/2N(-θ^*, σ^2 I_d) is equal to M̃(θ) = 𝔼[2Yϕ(⟨Y, θ⟩/σ^2)] = θ^*Ã + θB̃, where Ã = 𝔼[2ϕ(⟨θ, θ^*⟩/σ^2 + ‖θ‖Z_1/σ)-1] and B̃ = 𝔼[2ϕ'(⟨θ, θ^*⟩/σ^2 + ‖θ‖Z_1/σ)]. Compare the values of Ã and B̃ with A and B from eq:A and eq:B. We see that M is essentially a “stretched” and “scaled” version of M̃ by the random dilation factors |Z_2|√(1+|⟨θ^⊥_0, θ^*⟩|^2/σ^2) and |Z_2|√(1+‖θ^*‖^2/σ^2).
As will be seen in the proof lmm:negative in app:appendix, this additional source of variability causes the repellant behavior of M in lmm:negative.Recently in <cit.>, the authors analyzed gradient descent for a single-hidden layer convolutional neural network structure with no overlap and Gaussian input. In this setup, we observe i.i.d. data {(X_i, Y_i)}_i=1^n, where Y_i = f(X_i,w) + ε_i and X_i ∼ N(0, I_d) and ε_i ∼ N(0, σ^2) are independent of each other. The neural network has the form f(x,w) = 1/k∑_j=1^kmax{0, ⟨ w_j, x ⟩} and the only nonzero coordinates of w_j are in the j^ successive block of d/k coordinates and are equal to a fixed d/k dimensional filter vector w. One desires to minimize the risk ℓ(w) = (f(X,w)-f(X,w^⋆))^2. Interestingly, the gradient of ℓ(w) belongs to the linear span of ω and ω^⋆, akin to our M(θ) ∈span{θ, θ^* } (and also in the Gaussian mixture problem <cit.>). This property also plays a critical role in their analysis. § PROOF OF THM:MAIN_SPLITTING_EMPIRICAL The first step of the proof is to show that the empirical EM operator satisfies M_n() ⊂, whereis a set on which M is contractive toward θ^*. In other words, the empirical EM iterates remain in a set where M(θ) is closer to θ^* than its input θ. To this end, define the set = {θ: ⟨θ, θ^* ⟩ > ρθθ^*, θ≥ 20σ}. By rmk:contract, the stated conditions on ρ, θ, and θ^* ensure that M is contractive toward θ^* onand that κ < 1/2.Next, we uselmm:Mbounds which shows that M() ⊆ := {θ: ⟨θ, θ^* ⟩ > (1+Δ)ρθθ^*,θ^*(1-κ) ≤θ≤√(σ^2 + 3θ^*^2)}.The fact that ⊂ allows us to claim that when n is large enough, M_n() ⊂ M(), and hence M_n() ⊆ M() ⊆.To show this, assume sup_θ∈M_n(θ) - M(θ)< ϵ. That impliessup_θ∈M_n(θ)/M_n(θ) - M(θ)/M(θ)≤ 2sup_θ∈M_n(θ)-M(θ)/M(θ) < 2ϵ/(1-κ)θ^*.For the last inequality, we used the fact that M(θ)≥θ^*A ≥θ^*(1-κ) for all θ in , which follows from eq:span and lmm:ABbounds. By eq:close and lmm:Mbounds eq:cosinea, we have thatsup_θ∈⟨θ^*, M_n(θ)/M_n(θ)⟩ ≥sup_θ∈⟨θ^*, M(θ)/M(θ)⟩ - 2ϵ/(1-κ)≥θ^*(1+Δ)ρ - 2ϵ/(1-κ)≥θ^*ρ,provided ϵ < (1-κ/2)Δρθ^* and, by eq:span and lmm:ABbounds,sup_θ∈M_n(θ) ≥sup_θ∈ M(θ)- ϵ≥θ^*(1-κ) - ϵ≥ 40σ(1-κ) - ϵ≥ 20σ,provided ϵ < 20σ(1-2κ), which is positive since κ < 1/2.For δ∈ (0, 1), let ϵ_M(n,δ) be the smallest number such that for any fixed θ in , we haveM_n(θ) - M(θ) ≤ϵ_M(n,δ),with probability at least 1-δ. Moreover, suppose c' = c'(ρ, σ, θ^*, θ^0) is a constant so that if n ≥ c', thenϵ_M(n,δ) ≤min{ 20σ(1-2κ), (1-κ/2)Δρθ^*}.This guarantees that M_n() ⊆. For any iteration t ∈ [T], we haveM_n/T(θ^t) - M(θ^t) ≤ϵ_M(n/T,δ/T),with probability at least 1-δ/T. Thus by a union bound and M_n() ⊆,max_t∈[T] M_n/T(θ^t) - M(θ^t) ≤ϵ_M(n/T,δ/T),with probability at least 1 - δ.Hence if θ^0 belongs to , then by lmm:main,θ^t - θ^* = M_n/T(θ^t-1) - θ^* ≤M(θ^t-1) - θ^*+ M_n/T(θ^t) - M(θ^t) ≤γθ^t-1 - θ^* + max_t∈ [T]M_n/T(θ) - M(θ) ≤γθ^t-1 - θ^* + ϵ_M(n/T,δ/T),with probability at least 1 - δ. Solving this recursive inequality yields,θ^t - θ^*≤γ^tθ^0-θ^* + ϵ_M(n/T,δ/T)∑_j=0^t-1γ^j ≤γ^tθ^0-θ^* + ϵ_M(n/T,δ/T)/1-γ,with probability at least 1 - δ.Finally, by a slight modification to the proof of <cit.> that uses M(θ) ≤√(σ^2+3θ^*^2) from eq:cosinea1, it follows that if n ≥ cdlog(T/δ), then there exists a universal constant C > 0 such thatϵ_M(n/T,δ/T) ≤ C√(σ^2+θ^*^2)√(dTlog(T/δ)/n)with probability at least 1-δ/T. 
This completes the proof of thm:main_splitting_empirical.
§ DISCUSSION
In this paper, we showed that the empirical EM iterates converge to the true coefficients of a mixture of two linear regressions as long as the initial guess lies within a cone (see the condition in thm:main_splitting_empirical: ⟨θ^0,θ^⋆⟩ > ρ‖θ^0‖‖θ^⋆‖). In fig:SimEM, we perform a simulation study of θ^t← M_n(θ^{t-1}) with σ = 1, n = 1000, d = 2, and θ^* = (-7/25, 24/25)^⊤. All entries of the covariate vector X and the noise ε are generated i.i.d. from a standard Gaussian distribution. We consider the error ‖θ^t - θ^*‖ plotted as a function of cosα = ⟨θ^0, θ^*⟩/(‖θ^0‖‖θ^*‖) at iterations t = 5, 10, 15, 20, 25 (darker lines correspond to larger values of t). For each t, we choose a unit vector θ^0 so that cosα ranges between -1 and +1. In accordance with the theory we have developed, increasing the iteration count and increasing the cosine angle decrease the overall error. According to lmm:negative, the algorithm should suffer when cosα is small. Indeed, we observe a sharp transition at cosα ≈ 0.2. The algorithm converges to the other model parameter -θ^* = (7/25, -24/25)^⊤ for initial guesses with cosine angle (approximately) smaller than 0.2. The plot in fig:SimEM2 is a zoomed-in version of fig:SimEM near this transition point. One of the shortcomings of the EM algorithm is that it is model dependent; that is, the form of the EM operator is derived from the assumption of Gaussian input X, Gaussian error ε, and the two component assumption. It is natural to ask how changing either distribution, while still using the original EM operator designed for Gaussian data, performs on simulated data. As a simple illustration, the simulation results in fig:SimEMmis use X ∼ Unif([-√(3), √(3)]^d) and ε ∼ Unif([-σ√(3), σ√(3)]) (fig:uniform) and X ∼ N(0, I_d) and ε ∼ Laplace(0, σ/√(2)) (fig:laplace) for σ^2 = 1. The performance is similar to fig:SimEM and fig:SimEM2, although note that in fig:uniform, a larger cosine angle is required for convergence (i.e., cosine angles at least cosα ≈ 0.4). More generally, future work would rigorously study the effect of EM under model misspecification. In this direction, the recent work of <cit.> has analyzed the EM algorithm for over-fitted mixtures.
§ APPENDIX
In this appendix, we prove lmm:negative and all other supporting lemmas used in the body of the paper. Recall that in general, M(θ) = θ^*A + θB, where A = 𝔼[2ϕ(W⟨θ, X⟩/σ^2) + 2(W⟨θ, X⟩/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)-1], B = 𝔼[2(W^2/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)]. Suppose ⟨θ, θ^*⟩ = 0. This implies that A = 0. To see this, note that 𝔼ϕ(W⟨θ, X⟩/σ^2) = 𝔼ϕ(ΛZ_1|Z_2|) = ϕ(0) = 1/2, and 𝔼[W⟨θ, X⟩ϕ^'(W⟨θ, X⟩/σ^2)] = σ^2𝔼[ΛZ_1|Z_2|ϕ^'(ΛZ_1|Z_2|)] = 0. The first equality eq:half follows from the fact that if Z ∼ N(0, 1), then 𝔼ϕ(zZ) = 1/2 for all z in ℝ. This fact is easily established by noting that the derivative with respect to z is zero everywhere. The expectation in eq:zero vanishes since we are averaging an odd function with respect to a symmetric distribution. Next, observe that B = 2(1+‖θ^*‖^2/σ^2)𝔼[Z^2_2ϕ^'(Z_1|Z_2|(‖θ‖/σ^2)√(σ^2+‖θ^*‖^2))] → 1+‖θ^*‖^2/σ^2 > 1 as ‖θ‖→ 0. By continuity, there exists r > 0 such that if ‖θ‖ = r, then B > 1, and hence ‖M(θ) - θ^*‖^2 = ‖θ-θ^*‖^2 + (B^2-1)‖θ‖^2 > ‖θ-θ^*‖^2. This shows that liminf_{⟨θ, θ^*⟩↓ 0, ‖θ‖ = r}[‖M(θ) - θ^*‖^2-‖θ-θ^*‖^2] > 0. By continuity, it follows that there exists r' > 0 such that if 0 < ⟨θ, θ^*⟩ < r' then ‖M(θ) - θ^*‖^2 > ‖θ-θ^*‖^2. It is easy to see that the set of all points satisfying 0 < ⟨θ, θ^*⟩ < r' and 0 < ‖θ‖ < r has positive Lebesgue measure and satisfies the stated conditions in the lemma.
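The quantities A and B driving this repelling behavior have no simple closed form, but they are straightforward to estimate by Monte Carlo. A sketch (our code; the example θ and θ^* are illustrative choices, with θ orthogonal to θ^* and of small norm, so A ≈ 0 and B > 1 as in the argument above):

import numpy as np
from scipy.special import expit    # expit(2z) = phi(z); phi'(z) = 2 phi(z)(1 - phi(z))

def estimate_AB(theta, theta_star, sigma=1.0, n=200_000, seed=0):
    """Monte Carlo estimates of A and B, with W = <theta*, X> + eps."""
    rng = np.random.default_rng(seed)
    d = len(theta_star)
    X = rng.standard_normal((n, d))
    W = X @ theta_star + sigma * rng.standard_normal(n)
    z = W * (X @ theta) / sigma**2
    phi = expit(2.0 * z)
    dphi = 2.0 * phi * (1.0 - phi)
    A = np.mean(2.0 * phi + 2.0 * z * dphi - 1.0)
    B = np.mean(2.0 * (W**2 / sigma**2) * dphi)
    return A, B

theta_star = np.array([-7/25, 24/25])
theta_orth = 0.1 * np.array([24/25, 7/25])   # <theta, theta*> = 0, small norm
print(estimate_AB(theta_orth, theta_star))   # expect A near 0 and B near 2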
For the following lemmas, recall the definitions A = 𝔼[2ϕ(W⟨θ, X⟩/σ^2) + 2(W⟨θ, X⟩/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)-1], B = 𝔼[2(W^2/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)], and κ^2 = 1/((Γ/Λ)min{Λ,Γ/Λ}+1) = max{1-|⟨θ_0,θ^⋆⟩|^2/(σ^2+‖θ^*‖^2), 1-⟨θ,θ^*⟩/(σ^2+⟨θ,θ^*⟩)}. The cosine angle between θ^* and M(θ) is equal to (‖θ^*‖^2A + ⟨θ, θ^*⟩B)/√((‖θ^*‖^2A + ⟨θ, θ^*⟩B)^2 + B^2(‖θ‖^2‖θ^*‖^2 - |⟨θ, θ^*⟩|^2)). If ⟨θ, θ^*⟩ ≥ ρ‖θ‖‖θ^*‖, then there exists a positive Δ = Δ(ρ, σ, θ^*, θ) such that the cosine angle eq:cosinea is at least (1+Δ)ρ. Moreover, if ⟨θ^*, θ⟩ ≥ 0, then ‖θ^*‖^2(1-κ)^2 ≤ ‖M(θ)‖^2 = ‖θ^*‖^2A^2 + ‖θ‖^2B^2 + 2⟨θ, θ^*⟩AB ≤ σ^2 + 3‖θ^*‖^2, and ⟨θ^*, M(θ)⟩ = ‖θ^*‖^2A + ⟨θ, θ^*⟩B ≥ ‖θ^*‖^2(1-κ). The stated expression eq:cosinea for the cosine angle between θ^* and M(θ) comes from the expression ⟨u, v⟩/(‖u‖‖v‖) = ⟨θ^*, M(θ)⟩/(‖θ^*‖‖M(θ)‖) for the cosine angle between two vectors u and v, and the fact that M(θ) = Aθ^* + Bθ (see eq:span). Next, we prove the second statement, about the lower bound on eq:cosinea. Let τ = (‖θ^*‖/‖θ‖)(A/B). Observe that (‖θ^*‖^2A + ⟨θ, θ^*⟩B)/√((‖θ^*‖^2A + ⟨θ, θ^*⟩B)^2 + B^2(‖θ‖^2‖θ^*‖^2 - |⟨θ, θ^*⟩|^2)) = 1/√(1 + (‖θ‖^2‖θ^*‖^2 - |⟨θ, θ^*⟩|^2)/(‖θ^*‖^2A/B + ⟨θ, θ^*⟩)^2) ≥ 1/√(1 + (1-ρ^2)/(τ+ρ)^2) = ρ/√(1 - (1-ρ^2)τ(τ+2ρ)/(τ+ρ)^2) ≥ ρ/√(1 - (1-ρ^2)τ/(τ+ρ)) ≥ ρ(1+(1/2)(1-ρ^2)τ/(τ+ρ)), where the last line eq:finalcosinea follows from the inequality 1/√(1-z) ≥ 1+z/2 for all z ∈ (0, 1). Next, note that from lmm:ABbounds, A/B ≥ σ^2(1-κ)/(2(σ^2+‖θ^*‖^2)κ^3). Thus, τ ≥ τ' := σ^2‖θ^*‖(1-κ)/(2‖θ‖(σ^2+‖θ^*‖^2)κ^3) and so we can set Δ = (1/2)(1-ρ^2)(τ'/(τ'+ρ)) > 0. For the statement in eq:cosinea1, the identity ‖M(θ)‖^2 = ‖θ^*‖^2A^2 + ‖θ‖^2B^2 + 2⟨θ, θ^*⟩AB is an immediate consequence of M(θ) = Aθ^* + Bθ. By lmm:ABbounds, A ≥ 1-κ and hence, since ⟨θ, θ^*⟩ ≥ 0, we have ‖M(θ)‖^2 ≥ ‖θ^*‖^2A^2 ≥ ‖θ^*‖^2(1-κ)^2. Next, we will show that ‖M(θ)‖^2 ≤ σ^2 + 3‖θ^*‖^2 for all θ in ℝ^d. To see this, note that by Jensen's inequality, ⟨θ, M(θ)⟩ = 𝔼[(2ϕ(W⟨θ, X⟩/σ^2)-1)W⟨θ, X⟩] ≤ 𝔼|W⟨θ, X⟩| ≤ √(𝔼|W⟨θ, X⟩|^2) = σ^2√(Λ^2 + 3Γ^2) = ‖θ‖√(σ^2+‖θ^*‖^2 + 2|⟨θ_0, θ^*⟩|^2). Next, it can be shown that |2ϕ(z)+2zϕ^'(z)-1| ≤ √(2) and hence A ≤ √(2). Using this, we have ⟨θ^⊥_0, M(θ)⟩ = A⟨θ^⊥_0, θ^*⟩ ≤ √(2)⟨θ^⊥_0, θ^*⟩. Putting these two facts together, we have ‖M(θ)‖^2 = |⟨θ^⊥_0, M(θ)⟩|^2 + |⟨θ_0, M(θ)⟩|^2 ≤ σ^2 + ‖θ^*‖^2 + 2|⟨θ^⊥_0, θ^*⟩|^2 + 2|⟨θ_0, θ^*⟩|^2 = σ^2 + 3‖θ^*‖^2. The final statement eq:cosinea2 follows from similar arguments and so we omit them here. If ⟨θ, θ^*⟩ ≥ 0, then 𝔼[W⟨θ, X⟩ϕ^'(W⟨θ, X⟩/σ^2)] ≥ 0. Writing W⟨θ, X⟩ according to the distributional equivalent eq:dist2, note that the statement is true if 𝔼[(αZ + β)ϕ^'(αZ + β)] ≥ 0, where Z ∼ N(0, 1) and α ≥ 0 and β ≥ 0. This fact is proved in <cit.> or <cit.>. The following inequalities hold for all z ∈ℝ: |2ϕ(z)+2zϕ^'(z)-1| ≤ 1+√(2(1-ϕ(z))), and z^2ϕ^'(z) ≤ √(2(1-ϕ(z))). Their validity can easily be established using mathematical software. Let α, β > 0 and Z ∼ N(0,1). Then 𝔼[2(1-ϕ(α(Z + β)))] ≤ exp{-(β/2)min{α, β}}. Moreover, 𝔼[2(1-ϕ(α|Z_2|(Z_1 + β|Z_2|)))] ≤ 1/√(βmin{α,β}+1). The second conclusion follows immediately from the first, since 𝔼[2(1-ϕ(α|Z_2|(Z_1 + β|Z_2|)))] = 2𝔼_{Z_2}[𝔼_{Z_1}[1-ϕ(α|Z_2|(Z_1 + β|Z_2|))]] ≤ 𝔼_{Z_2}[exp{-(Z^2_2/2)βmin{α,β}}] = 1/√(βmin{α,β}+1). The last equality follows from the moment generating function of χ^2_1. For the first conclusion, we first observe that the mapping α ↦ 𝔼ϕ(α(Z + β)) is increasing (see <cit.> or <cit.>). Next, note the inequality 2(1-ϕ(z)) ≤ e^{-z}, which is equivalent to (e^z-1)^2 ≥ 0.
If α ≥ β, then 𝔼[2(1-ϕ(α(Z + β)))] ≤ 𝔼[2(1-ϕ(β(Z + β)))] ≤ 𝔼[e^{-β(Z + β)}] = e^{-β^2/2}. If α ≤ β, then 𝔼[2(1-ϕ(α(Z + β)))] ≤ 𝔼[e^{-α(Z + β)}] = e^{α^2/2-αβ} ≤ e^{-αβ/2}. In each case, we used the moment generating function of a Gaussian distribution to evaluate the expectations. The following inequalities hold: 1-κ ≤ A ≤ 1+√(κ), and B ≤ 2(1+‖θ^*‖^2/σ^2)κ^3. By lmm:positive and lmm:main_inequality, A = 𝔼[2ϕ(W⟨θ, X⟩/σ^2) + 2(W⟨θ, X⟩/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)-1] ≥ 𝔼[2ϕ(W⟨θ, X⟩/σ^2)-1] ≥ 1-κ. By lmm:inequality, Jensen's inequality, and lmm:main_inequality, A = 𝔼[2ϕ(W⟨θ, X⟩/σ^2) + 2(W⟨θ, X⟩/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)-1] ≤ 1+𝔼√(2(1-ϕ(W⟨θ, X⟩/σ^2))) ≤ 1+√(𝔼[2(1-ϕ(W⟨θ, X⟩/σ^2))]) ≤ 1+√(κ). By the inequality ϕ^'(z) ≤ 2(1-ϕ(z)) for all z ∈ℝ and lmm:main_inequality, B = 𝔼[2(W^2/σ^2)ϕ^'(W⟨θ, X⟩/σ^2)] ≤ 2𝔼[2(W^2/σ^2)(1-ϕ(W⟨θ, X⟩/σ^2))] = 2(1+‖θ^*‖^2/σ^2)𝔼_{Z_2}[Z^2_2𝔼_{Z_1}[2(1-ϕ(Λ|Z_2|(Z_1+(Γ/Λ)|Z_2|)))]] ≤ 2(1+‖θ^*‖^2/σ^2)𝔼_{Z_2}[Z^2_2 exp{-(Z^2_2/2)(Γ/Λ)min{Γ/Λ,Λ}}] = 2(1+‖θ^*‖^2/σ^2)(1/((Γ/Λ)min{Λ,Γ/Λ}+1))^{3/2} = 2(1+‖θ^*‖^2/σ^2)κ^3. Define h(α, β) = 𝔼[(2ϕ(α|Z_2|(Z_1 + β|Z_2|))-1)(|Z_2|(Z_1 + β|Z_2|))]. Let α, β > 0. Then (∂/∂α)h(α, β) ≤ (2/α^2)(1/(βmin{α,β}+1))^{1/4}. First, observe that (∂/∂α)h(α, β) = 𝔼[2ϕ^'(α|Z_2|(Z_1 + β|Z_2|))(|Z_2|(Z_1 + β|Z_2|))^2]. By lmm:inequality, Jensen's inequality, and lmm:main_inequality, 𝔼[2ϕ^'(α|Z_2|(Z_1 + β|Z_2|))(|Z_2|(Z_1 + β|Z_2|))^2] = (1/α^2)𝔼[2ϕ^'(α|Z_2|(Z_1 + β|Z_2|))(α|Z_2|(Z_1 + β|Z_2|))^2] ≤ (2/α^2)𝔼√(2(1-ϕ(α|Z_2|(Z_1 + β|Z_2|)))) ≤ (2/α^2)√(𝔼[2(1-ϕ(α|Z_2|(Z_1 + β|Z_2|)))]) ≤ (2/α^2)(1/(βmin{α,β}+1))^{1/4}. | http://arxiv.org/abs/1704.08231v3 | {
"authors": [
"Jason M. Klusowski",
"Dana Yang",
"W. D. Brinda"
],
"categories": [
"stat.ML",
"62F10, 68W40"
],
"primary_category": "stat.ML",
"published": "20170426173740",
"title": "Estimating the Coefficients of a Mixture of Two Linear Regressions by Expectation Maximization"
} |
What Can Gamma-rays from Space tell us About the Madala Hypothesis?
===================================================================
Speech is the most used communication method between humans and it involves the perception of both auditory and visual channels. Automatic speech recognition focuses on interpreting the audio signals, although the video can provide information that is complementary to the audio. Exploiting the visual information, however, has proven challenging. On one hand, researchers have reported that the mapping between phonemes and visemes (visual units) is one-to-many, because there are phonemes which are visually similar and indistinguishable from each other. On the other hand, it is known that some people are very good lip-readers (e.g., deaf people). We study the limit of visual only speech recognition in controlled conditions. With this goal, we designed a new database in which the speakers are aware of being read and aim to facilitate lip-reading. In the literature, there are discrepancies on whether hearing-impaired people are better lip-readers than normal-hearing people. Hence, we analyze whether there are differences between the lip-reading abilities of 9 hearing-impaired and 15 normal-hearing people. Finally, human abilities are compared with the performance of a visual automatic speech recognition system. In our tests, hearing-impaired participants outperformed the normal-hearing participants, but without reaching statistical significance. Human observers were able to decode 44% of the spoken message. In contrast, the visual only automatic system achieved a 20% word recognition rate. However, if we repeat the comparison in terms of phonemes, both obtained very similar recognition rates, just above 50%. This suggests that the gap between human lip-reading and automatic speech-reading might be more related to the use of context than to the ability to interpret mouth appearance.
§ INTRODUCTION
Speech is the most used communication method between humans, and it has been considered a multi-sensory process that involves perception of both acoustic and visual cues ever since McGurk demonstrated the influence of vision in speech perception. Many authors have subsequently demonstrated that the incorporation of visual information into speech recognition systems improves their robustness <cit.>. Visual information usually involves position and movement of the visible articulators (the lips, the teeth and the tongue), speaker localization, articulation place and other signals not directly related to the speech (facial expression, head pose and body gestures) <cit.>. Even though the audio is in general much more informative than the video signal, speech perception relies on visual information to help decode spoken words as auditory conditions are degraded <cit.>. Furthermore, for people with hearing impairments, the visual channel is the only source of information to understand spoken words if there is no sign language interpreter <cit.>. Therefore, visual speech recognition is implicated in our speech perception process: it is not only influenced by lip position and movement but also depends on the speaker's face, which has been shown to transmit relevant information about the spoken message <cit.>.
Much of the research in Automatic Speech Recognition (ASR) systems has focused on audio speech recognition, or on the combination of both modalities using Audio-Visual Automatic Speech Recognition (AV-ASR) systems to improve the recognition rates, but Visual Automatic Speech Recognition (VASR) systems have been less frequently analyzed alone <cit.>. The performance of audio only ASR systems is very high if there is not much noise to degrade the signal. However, in noisy environments AV-ASR systems improve the recognition performance when compared to their audio-only equivalents <cit.>. In contrast, in visual only ASR systems the recognition rates are rather low <cit.>. This can be partially explained by the higher difficulty associated with decoding speech through the visual channel, when compared to the audio channel. One of the key limitations of VASR systems resides in the ambiguities that arise when trying to map visual information into the basic phonetic unit (phonemes), i.e., not all the phonemes that are heard can be distinguished by observing the lips. There are two types of ambiguities: i) there are phonemes that are easily confused because they look visually similar to each other (e.g., /p/, /b/ and /m/). For example, the phones /p/ and /b/ are visually indistinguishable because voicing occurs at the glottis, which is not visible; ii) there are phonemes whose visual appearance can change (or even disappear) depending on the context. This is the case of the velars, consonants articulated with the back part of the tongue against the soft palate (e.g., /k/ or /g/), because they change their position in the palate depending on the previous or following phoneme. Specifically, velar consonants tolerate palatalization (the phoneme changes to palatal) when the previous or following phoneme is a vowel or a palatal <cit.>. Other drawbacks associated with lip-reading have also been reported in the literature, such as the distance between the speakers, illumination conditions or visibility of the mouth <cit.>. However, the latter can be easily controlled, while the ambiguities explained above are limitations intrinsic to lip-reading and constitute an open problem. On the other hand, it is known that some people are very good lip-readers. In general, visual information is the only source of reception and comprehension of oral speech for people with hearing impairments, which leads to the common misconception that they must be good lip-readers. Indeed, while many authors have found evidence that people with hearing impairments outperform normal-hearing people in comprehending visual speech <cit.>, there are also several studies where no differences were found in speech-reading performance between normal-hearing and hearing-impaired people <cit.>. Such conflicting conclusions might be partially explained by the influence of other factors beyond hearing impairment. For example, it is well known that human lip-readers use the context of the conversation to decode the spoken information <cit.>, and thus it has been argued that people who are good lip-readers might be more intelligent, have more knowledge of the language, and produce oral speech that is more comprehensible to others <cit.>. While the above complexities may provide some explanation for the rather low recognition rates of VASR systems, there seems to be a significant gap between these and human lip-reading abilities.
More importantly, it is not clear what would be the upper bound of visual-speech recognition, especially for systems not using context information (it has been argued that humans can read only around 30% of the information from the lips, and the rest is filled in from the context <cit.>). Thus, it is not clear if the poor recognition rates of VASR systems are due to inappropriate or incomplete design, or because there is an intrinsic limitation in the visual information that makes perfect decoding of the spoken message impossible.
Contributions: In this work we explore the feasibility of visual speech reading with the aim of estimating the recognition rates achievable by human observers under favorable conditions and comparing them with those achieved by an automatic system. To this end, we focus on the design and acquisition of an appropriate database in which recorded speakers actively aim to facilitate lip-reading but conversation context is minimized. Specifically, we present a new database recorded with the explicit goal of being visually informative of the spoken message. Thus, data acquisition is especially designed with the aim that a human observer (or a system) can decode the message without the help of the audio signal. Concretely, lip-reading is applied to people who are aware of being read and have been instructed to make every effort so that they can be understood based exclusively on visual information. Hence, the database deals with sentences that are uttered slowly, with repetitions, well pronounced and viewed under optimal conditions ensuring good illumination and mouth visibility (without occlusions and distractions). In this database we divided the participants into two groups: 9 hearing-impaired subjects and 15 normal-hearing subjects. In our tests, hearing-impaired participants outperformed the normal-hearing participants, but without reaching statistical significance. Human observers markedly outperform the VASR system in terms of word recognition rates, but in terms of phonemes, the automatic system achieves accuracy very similar to that of human observers.
§ AUDIO-VISUAL SPEECH DATABASES
Visual only speech recognition spans more than thirty years, but even today it is still an open problem in science. One of the limitations for the analysis of VASR systems is the accessible data corpora. Despite the abundance of audio speech databases, there exists only a limited number of databases for audio-visual or visual only ASR research. The literature explains this by the relative youth of the field, and also by the fact that audio-visual databases add challenges, such as database collection, storage and distribution, that are not a problem for audio corpora. Acquisition of visual data at high resolution, frame rate and image quality, under optimal conditions and synchronized with the audio signal, requires expensive equipment. In addition, visual storage is at least one or two orders of magnitude larger than for the audio signal, making its distribution more difficult <cit.>, <cit.>. Most databases used in audio-visual ASR systems suffer from one or more weaknesses. For example, they contain a low number of subjects (<cit.>) or have short duration (<cit.>), and are addressed to specific and simple recognition tasks. For instance, most corpora are centered on simple tasks such as isolated or connected letters (<cit.>), digits (<cit.>), short sentences (<cit.>) and only recently continuous speech (<cit.>).
These restrictions hinder the generalization of methods and the construction of robust models because of the scarcity of training samples. An additional difficulty is that some databases are not freely available.

As explained in Section <ref>, the aim of this project is to apply continuous lip-reading to people who are conscious of being lip-read and are trying to be understood based exclusively on visual information. Thus, among the most common databases, only VIDTIMIT <cit.>, AVICAR <cit.>, Grid <cit.>, MOBIO <cit.>, OuluVS <cit.>, OuluVS2 <cit.>, AV@CAR <cit.>, AV-TIMIT <cit.>, LILiR <cit.> contain short sentences or continuous speech and could be useful to us. However, we rejected them because their participants speak under normal conditions without prior knowledge of being lip-read. In addition, most of these databases have limited technical quality and a limited number of subjects, with restricted vocabularies centered on repetitions of short utterances. Consequently, we decided to develop a new database designed specifically for recognizing continuous speech in controlled conditions.

§ VISUAL LIP-READING FEASIBILITY DATABASE
The Visual Lip-Reading Feasibility (VLRF) database is designed with the aim of contributing to research in visual-only speech recognition. A key difference of the VLRF database with respect to existing corpora is that it has been designed from a novel point of view: instead of trying to lip-read from people who are speaking naturally (normal speed, normal intonation, ...), we propose to lip-read from people who strive to be understood.

Therefore, the design objective was to create a public database visually informative of the spoken message in which it is possible to directly compare human and automatic lip-reading performance. For this purpose, in each recording session there were two participants: one speaker and one lip-reader. The speaker was recorded by a camera while pronouncing a series of sentences that were provided to him/her; the lip-reader was located in a separate room, acoustically isolated from the room where the speaker was located. To make the human decoding as close as possible to the automatic decoding, the input to the lip-reader was exclusively the video stream recorded by the camera, which was displayed in real time on a 23" TV screen.

After each uttered sentence, the lip-reader gave feedback to the speaker (this was possible because audio feedback could be enabled from the lip-reading room to the recording room, but not conversely). Each sentence could be repeated up to 3 times, unless the lip-reader decoded it correctly in fewer repetitions. Both the speaker utterances and the lip-reader answers (at each repetition) were annotated.

Participants were informed about the objective of the project and the database. They were also instructed to make their best effort to be easily understood, but using their own criteria (e.g., speak naturally or slowly, emphasize separation between words, exaggerate vocalization, ...).

Each recording session was divided into 4 levels of increasing difficulty: 3 levels with 6 sentences and 1 level with 7 sentences. We decided to divide the session into different levels to make it easier for participants to get accustomed to the lip-reading task (and perhaps also to the speaker). Specifically, in the first level the sentences are short, with only a few words, and as the level increases the difficulty increases in terms of the number of words.
The sentences are unrelated to one another, so only the context within each sentence is present. Thus, for the first sentences participants had to read fewer words but with very little context, while for the last sentences the context was considerably more important and would certainly help in decoding the sentence. To motivate participants and to ensure their concentration throughout the session, at the end of each level both participants exchanged their roles.

Finally, because our objective was to determine the visual speech recognition rates that could be achievable, we also recruited volunteers who were hearing-impaired and accustomed to using lip-reading in their daily routine. This also allows us to compare the lip-reading ability of normal-hearing and hearing-impaired people.

§.§ Participants
We recruited 24 adult volunteers (3 male and 21 female). Thirteen are university students, one is a teacher of sign language at UPF, and the other 10 participants are members of the Catalan Federation of Associations of Parents and Deaf People (ACCAPS) <cit.>. The 24 participants were divided into two groups: normal-hearing people and hearing-impaired people.

– Normal-hearing participants. Fifteen of the volunteers are normal-hearing participants (14 female and 1 male), who were selected from a similar educational range (e.g., the same degree) because, as explained in Section <ref>, lip-reading abilities have been related to intelligence and language knowledge. Two of the participants were more than 50 years old and have a different education level, while the other 13 subjects of this group shared educational level and age range.

– Hearing-impaired participants. There were nine hearing-impaired participants, all above 30 years old (7 female and 2 male). Eight of them have post-lingual deafness (the person loses hearing after acquiring spoken language) and one has pre-lingual deafness (the person loses hearing before the acquisition of spoken language). There were 4 participants with cochlear implants or hearing aids.

§.§ Utterances
Each participant was asked to read 25 different sentences, from a total pool of 500 sentences, proceeding similarly to <cit.>. The sentences were unrelated to each other so that lip-readers could not benefit from conversation context. Sentences had different levels of difficulty in terms of their number of words. There were 4 different levels: 3-4 words, 5-6 words, 7-8 words and 8-12 words. We decided to divide the sentences into different levels for two reasons. Firstly, to allow lip-readers to get some training with the short sentences of the first level (i.e., to get acquainted and gain confidence with the setup, the task and the speaker). Secondly, to compare the effect of context on the performance of human lip-readers. The utterances with fewer words have very little context, while the longer sentences contain considerable context that should help the lip-reader when decoding the message.

Overall, there were 10200 words in total (1374 unique), with an average duration of 7 seconds per sentence and a total database duration of 180 minutes (540,162 frames). The sentences contained a balanced phonological distribution of the Spanish language, based on the balanced utterances used in the AV@CAR database <cit.>.

§.§ Technical aspects
The database was recorded in two contiguous soundproof rooms (Fig. <ref>). The distribution of the recording equipment in the rooms is shown in Fig. <ref>.
A Panasonic HPX 171 camera was placed on a PRO6-HDV tripod in front of the speaker's chair, to ensure an approximately frontal face shot, with a supplementary directional microphone mounted on the camera to ensure directional coverage towards the speaker. The camera recorded a close-up shot (Fig. <ref>) at 50 fps with a resolution of 1280 × 720 pixels and audio at 48 kHz mono with 16-bit resolution. Two Lumatek Ultralight 1000W Model 53-11 lights were used together with reflecting panels to obtain uniform illumination and minimize shadows or other artifacts on the speaker's face. When performing the lip-reading task, the lip-reader was located in the control room, sitting directly in front of a 23" LG Flatron M2362D PZ TV. This screen was connected to the camera so that it reproduced in real time what the camera was recording. Only the visual channel of the camera was fed into the control room, although both audio and video channels were recorded for post-processing of the database. The rooms were acoustically isolated from each other except for the feedback channel, composed of a microphone in the control room and a loudspeaker in the recording room. This channel was used after each utterance to let the speaker know what message was decoded by the lip-reader.

§.§ Data labeling
The ground truth of the VLRF database consists of a phoneme label per frame. We used the EasyAlign plug-in for Praat <cit.>, which makes it possible to locate the phoneme at each time instant based on the audio stream. Specifically, the program locates the phonemes semi-automatically, and manual intervention is usually needed to adjust the boundaries of each phoneme to more precise positions. The phonemes used are based on the phonetic alphabet SAMPA <cit.>. For the Spanish language, the SAMPA vocabulary is composed of the following 31 phonemes: /p/, /b/, /t/, /d/, /k/, /g/, /tS/, /jj/, /f/, /B/, /T/, /D/, /s/, /z/, /x/, /G/, /m/, /n/, /N/, /J/, /l/, /L/, /r/, /4/, /j/, /w/, /a/, /e/, /i/, /o/, /u/.

§ RESULTS
In this section we show the word- and phoneme-recognition rates obtained in our experiments. We start by analyzing human lip-reading abilities and comparing the performance of hearing-impaired and normal-hearing participants. Then, we analyze the influence of training and context on human performance. Finally, we compare the performance of our automatic system to the results obtained by human observers.

The use of two separate measures (word and phoneme rates) is necessary to analyze different aspects of our results. On the one hand, phonemes are the minimum distinguishable units of speech and directly constitute the output of our automatic system. However, the ultimate goal of lip-reading is to understand the spoken language, hence the need to focus (at least) on words. It is important to notice that acceptable phoneme recognition rates do not necessarily imply good word recognition rates, as will be shown later.

The word recognition rate was computed as the fraction of words correctly understood in a given sentence. The phoneme recognition rate was computed as the fraction of video frames in which the correct phoneme was assigned. Consequently, 25 accuracy measures were computed for each participant and each repetition.
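In code, the two accuracy measures just described might be computed as follows. This is a minimal illustrative sketch (not the authors' implementation); the data formats are assumptions of ours, and word matching is simplified to bag-of-words membership, which ignores word order.

def word_recognition_rate(reference_words, decoded_words):
    # Fraction of reference words correctly understood in one sentence.
    decoded = set(w.lower() for w in decoded_words)
    hits = sum(1 for w in reference_words if w.lower() in decoded)
    return hits / len(reference_words)

def phoneme_recognition_rate(reference_frames, predicted_frames):
    # Fraction of video frames assigned the correct phoneme label.
    assert len(reference_frames) == len(predicted_frames)
    correct = sum(r == p for r, p in zip(reference_frames, predicted_frames))
    return correct / len(reference_frames)

# Example: one toy sentence and its per-frame labels.
print(word_recognition_rate(["hola", "buenos", "dias"], ["hola", "dias"]))   # ~0.67
print(phoneme_recognition_rate(["o", "o", "l", "a"], ["o", "o", "l", "e"]))  # 0.75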
Recognition rates for the automatic system were computed in the same manner, except that there were no multiple repetitions.

§.§ Experimental setup
Our VASR system starts by detecting the face and performing an automatic location of the facial geometry (landmark location) using the Supervised Descent Method (SDM) <cit.>. Once the face is located, the estimated landmarks are used to fix a bounding box around the region of interest (ROI), which is then normalized to a fixed size. Later on, local appearance features are extracted from the ROI based on early fusion of DCT and SIFT descriptors in both the spatial and temporal domains. As explained in Section <ref>, there are phonemes that share the same visual appearance and should belong to the same class (visemes). Thus, we constructed a phoneme-to-viseme mapping that groups 32 phonemes into 20 visemes based on an iterative process that computes the confusion matrix and merges at each step the phonemes that show the highest ambiguity until the desired vocabulary size is achieved. Then, the classification of the extracted features into phonemes is done in two steps. Firstly, multiple LDA classifiers are trained to convert the extracted features into visemes; secondly, at the final step, one-state-per-class HMMs are used to model the dynamic relations of the estimated visemes and produce the final phoneme sequences. This system was shown to produce near state-of-the-art performance for continuous visual speech-reading tasks (more details in <cit.>).

§.§ Human lip-reading
As explained in Section <ref>, it is not clear if hearing-impaired people are better lip-readers than normal-hearing people. Fig. <ref> (Top) shows the word recognition rates for both groups at each repetition and Fig. <ref> (Bottom) shows the word recognition rates for each participant and repetition. Analyzing each participant individually, it is difficult to observe any group differences between hearing-impaired and normal-hearing participants. However, we do observe large performance variations within each of the groups, i.e., there are very good and quite poor lip-readers regardless of their hearing condition.

On the other hand, looking at the results globally, split only by group (Fig. <ref> (Top)), they suggest that hearing-impaired participants outperform normal-hearing participants in the lip-reading task for all three repetitions. However, the results differ by about 20% in terms of word recognition rate, and thus we need to study whether this difference is statistically significant.

To do so, we estimated the word accuracy of each participant as the average accuracy across the 25 sentences that he/she had to lip-read. Then, we performed statistical tests to determine if there were significant differences between the 9 hearing-impaired samples and the 15 normal-hearing samples. Because we only wanted to test whether the hearing-impaired participants were better than the normal-hearing participants, we performed one-tailed tests where the null hypothesis was that the mean or median (depending on the test) performance of hearing-impaired participants was not higher than the performance of normal-hearing participants. We ran two tests (summarized in Table <ref>) for each of the 3 repetitions: the Wilcoxon signed rank test and the unpaired two-sample t-test. Taking the conventional significance threshold of p < 0.05, it could be argued that at the third repetition the performance of hearing-impaired participants was significantly better than that of normal-hearing participants.
However, this was not observed in the first two repetitions. Moreover, the 9 hearing-impaired subjects did better than the 15 normal-hearing subjects, but given that the sample size is relatively small, current trends in statistical analysis suggest that the obtained p-values are not small enough to claim that this would extrapolate to the general population. On the other hand, looking at the p-values, with the current number of subjects we are not far from reaching significance <cit.>.

In Fig. <ref> we also show the influence of repetitions on the final performance: as the number of repetitions increases, the recognition rate increases too. This effect can be seen both when splitting by group and when analyzing each participant separately.

§.§ Training and context influence on lip-reading
Context is one of the resources most used by humans in lip-reading to complete the spoken message. To analyze the influence of context, the participants were asked to read four different types of sentences, in terms of number of words (explained in Section <ref>). Thus, as the level increases, sentences are longer and the context increases too.

In Fig. <ref> we can observe that the first level has the lowest word recognition rates for all repetitions, while the last level has the highest rates. There are two factors that could contribute to this effect: 1) Context: humans use the relation between words to try to decode a meaningful message, and 2) Training: as the level increases the participants become more acquainted with the speaker and with the lip-reading task.

The results of Fig. <ref> are not enough to determine whether the effect is due to context, training or both. Thus, in Fig. <ref> we analyze the variation of performance per sentence (with a cumulative average) instead of per level, which should make the effect of training clearer. This is because training occurs continuously from one sentence to another, while context only increases when we change from one level to the next. Thus, the effect of training can be seen as the steady increase in performance in each of the curves (up to 20%). As users lip-read more sentences, they tend to become better lip-readers. On the other hand, the influence of context is better observed by comparing the different repetitions. In the first attempt, the sentence was completely unknown to the participants, but in the second and third repetitions there was usually some context available because the message had already been partially decoded, hence constraining the possible words to complete the sentence.

§.§ Human observers and automatic system comparison
The results of the automatic system are only computed for the first attempt, since it was not designed to benefit from repetitions. The resulting word-recognition rates are shown in Fig. <ref> (Top). Notice that now the participant number indicates the person that was pronouncing the sentences, as the recognition is always performed by the system. Thus, this figure provides information about how well the system was able to lip-read each of the participants. The system produced the highest recognition rates for participants 1, 8, 17 and 21. Interestingly, these participants had good pronunciation and good visibility of the tongue and teeth.

We are interested in comparing the performance of human lip-readers and a VASR system. Focusing on Fig. <ref> (Top) we can observe that the word recognition rates are lower for the system in most of the cases.
However, we have to take into account that the system does not use the context within the sentence. Indeed, the system is not even targeting words but phonemes, which are later merged to form words. In contrast, people directly search for words that correlate with the lip movements of the speaker. Thus, it is reasonable to expect a considerable gap between human and automatic performance, a gap which, as will be shown, reduces considerably when the comparison is done in terms of phonemes.

In the same figure (Fig. <ref>) we can observe a direct comparison of the mean recognition rates of each participant as identified by humans and by the automatic system. The system gives an unbiased measure of how easy each participant is to lip-read because it evaluates all of them in the same manner. In contrast, human lip-reading was performed in couples (couples are organized in successive order, e.g., participants 1 and 2, 3 and 4, etc.), hence each participant was only lip-read by his/her corresponding partner. Analyzing Fig. <ref> we can identify which users were good lip-readers and also good speakers. For example, participant 7 was lip-read by participant 8 with a high word recognition rate. Then, in the curve corresponding to human performance, we observe a high value for participant 8, meaning that he/she was very successful at lip-reading. When we look at the system's performance, however, the value assigned to participant 8 corresponds to the rate obtained by the system and is therefore a measure of how participant 8 spoke rather than how he/she lip-read. For this specific participant, the figure shows that system performance was also high, hence he/she is a candidate to be both a good lip-reader and a good speaker.

The word recognition rates reported by our system are rather low compared to those obtained by human observers. However, as stated earlier, our system is trying to recognize phonemes and convert them to words, so it is also interesting to analyze its performance in terms of phoneme recognition. The phoneme recognition rates obtained by the system are between 40% and 60%, as shown in Fig. <ref> (Bottom) and Fig. <ref> (Bottom). It is interesting to note that system performance was much more stable across participants than human performance. In addition, in terms of phoneme units, the global mean of the automatic system was 51.25%, very close to the global mean of 52.20% obtained by humans.

There are several factors that help to understand why the system achieves significantly higher rates in terms of phonemes than in terms of words: 1) Phoneme accuracy is computed at the frame level because that is the output rate of the system. Thus, the temporal resolution used for phonemes is much higher than that of words, and correctly recognizing a word implies the correct match of a rather long sequence of contiguous phonemes. Any phoneme mismatch, even in a single frame, results in the whole word being wrong. 2) The automatic system finds it easier to recognize certain phonemes (e.g., vowels) with high occurrence rates in terms of frames (vowels are usually longer than consonants). This implies that a high phoneme recognition rate does not necessarily mean that the message is correctly decoded. To analyze this, system performance is displayed in Fig. <ref>. Specifically, in Fig. <ref> (Top) we can observe the number of phonemes that were wrongly detected, distinguishing false negatives (in red) and false positives (in green), while Fig. <ref> (Bottom) shows the corresponding values of precision and recall.
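Such per-phoneme precision and recall values can be derived from the frame-level labels as in the following sketch (our illustration; the function and the toy labels are hypothetical, not part of the VLRF tooling).

from collections import Counter

def per_phoneme_precision_recall(reference_frames, predicted_frames):
    tp, fp, fn = Counter(), Counter(), Counter()
    for ref, pred in zip(reference_frames, predicted_frames):
        if ref == pred:
            tp[ref] += 1
        else:
            fp[pred] += 1   # predicted phoneme counted where it did not occur
            fn[ref] += 1    # reference phoneme missed by the system
    stats = {}
    for ph in set(tp) | set(fp) | set(fn):
        precision = tp[ph] / (tp[ph] + fp[ph]) if tp[ph] + fp[ph] else 0.0
        recall = tp[ph] / (tp[ph] + fn[ph]) if tp[ph] + fn[ph] else 0.0
        stats[ph] = (precision, recall)
    return stats

# A vowel absorbing neighbouring consonant frames lowers its precision
# while driving the consonants' recall to zero:
print(per_phoneme_precision_recall(list("aaastaaa"), list("aaaaaaaa")))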
Most of the consonants have very high precision, but many samples are not detected, resulting in low recall. In contrast, vowels have intermediate precision and recall because they are predicted more often than they actually occur. Close inspection of our data suggests that this effect is partially explained by the difficulty in correctly identifying the temporal limits of phonemes.

§ DISCUSSION AND CONCLUSIONS
In this work we explore visual speech reading with the aim of estimating the recognition rates achievable by human observers and by an automatic system under optimal and directly comparable conditions. To this end, we recorded the VLRF database, appropriately designed to be visually informative of the spoken message. For this purpose we recruited 9 hearing-impaired and 15 normal-hearing subjects. Overall, the word recognition rate achieved by the 24 human observers ranged from 44% (when the sentence was pronounced only once) to 73% (when allowing up to 3 repetitions). These results are compatible with those from Duchnowski et al. <cit.>, who stated that even under the most favorable conditions (including repetitions) "speech-readers typically miss more than one third of the words spoken".

We also tested the performance of participants grouped by their hearing condition to compare their lip-reading abilities and verify whether these are superior for hearing-impaired subjects, as suggested in some studies. Concretely, we found that hearing-impaired participants outperformed normal-hearing participants on the lip-reading task, but without statistical significance. The performance difference, which averaged 20%, was not sufficient to conclude significance with the current number of subjects. Hence, future work will address the extension of the VLRF database so that it includes sufficient subjects to reach a clearer conclusion.

The participation of hearing-impaired people was very important given their daily experience in lip-reading. During the recording sessions they explained that lip-reading in our database was a challenge because they did not know the context of the sentence beforehand. For them, it is easier to lip-read when they know the context of the conversation, since the conversation topic constrains the vocabulary that can appear in the talk. Furthermore, we mentioned before that lip-reading is related to intelligence and language knowledge. During the recording sessions we noticed that sentences directly related to daily life were easier to understand than sentences with words not used in colloquial language.

Another important aspect to consider is how easy or difficult it is to lip-read different speakers. As explained in Section <ref>, participants were instructed to use their own criteria to facilitate lip-reading. It is difficult to objectively judge the effectiveness of the techniques that were used, but we observed some interesting tendencies during the recordings. Firstly, facial expressions help in decoding the spoken message by adding context to the sentence (e.g., a sad expression if one is speaking about something unfortunate); hearing-impaired participants used this technique more often than normal-hearing subjects. Secondly, it is more useful to separate clearly between words than to exaggerate pronunciation. That is because the human system searches for words that fit the lip movements.
We noticed that when pronunciation was exaggerated, the separation between words was unclear or even lost, considerably increasing the difficulty of lip-reading.

The above is important when interpreting the results of human observers, as they are conditioned both by the lip-reading abilities of the lip-reader and by the pronunciation abilities of the speaker. Recall that, in our experiments, each participant only lip-read his/her corresponding partner. It would be interesting to separate these factors, which could be done by randomizing the combinations of speakers and lip-readers on a per-sentence basis. In particular, the most interesting aspect would be to estimate the level of difficulty of lip-reading each of the speakers, which could be done by having several subjects lip-read the same speaker. There would be several advantages in doing so: 1) it would allow a more direct comparison to the performance of the system, as speaker performance would not be conditioned on a single human reader; 2) speakers that are too difficult could be excluded from the analysis, at least when seeking the theoretical limit of lip-reading in optimal conditions; 3) it would help to understand which speaking techniques best facilitate lip-reading.

As just explained, in our experiments human observers reached a word accuracy of 44% in the first attempt while our visual-only automatic system achieved a 20% word recognition rate. However, if we repeat the comparison in terms of phonemes, the automatic system achieves recognition rates quite similar to human observers, just above 50%. These results are comparable with those reported by Lan et al. <cit.>, who tested on the RM corpus using 12 speakers and 6 expert lip-readers. Concretely, their human lip-readers reached 52.63% viseme accuracy (in our case 52.20% phoneme accuracy) and their system obtained 46% viseme accuracy (our system 51.25% phoneme accuracy). Therefore, in terms of viseme/phoneme accuracy, both Lan's and our system reach near-human performance. But this does not happen in terms of word accuracy: Lan et al. reported a human word accuracy of 21% (ours 44%) and a system word accuracy of 14% (ours 20%).

When trying to explain the above, we found that the low word recognition rates were related to: 1) the fact that it is quite easy to make mistakes at the frame level, and a mistake in a single frame results in the whole word being wrong; 2) the imbalance in the occurrence frequencies of phonemes. The latter is especially important because it highlights that the system, while achieving phoneme rates similar to those of humans, does not actually perform equally well. In other words, the phoneme sequences returned by humans always make some sense, which is not generally true for the system as it does not include higher-level constraints (e.g., at the word or phrase level). Hence, future directions should focus on introducing constraints related to bigger speech structures such as connected phonemes, syllables or words.

§ ACKNOWLEDGEMENTS
This work is partly supported by the Spanish Ministry of Economy and Competitiveness under the Ramon y Cajal fellowships and the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), and the Kristina project funded by the European Union Horizon 2020 research and innovation programme under grant agreement No 645012.

mcgurk1976hearing H. McGurk and J. MacDonald, “Hearing lips and seeing voices,” Nature, vol. 264, pp. 746–748, 1976.potamianos2003recent G. Potamianos, C.
Neti et al., “Recent advances in the automatic recognition of audiovisual speech,” P IEEE, vol. 91, pp. 1306–1326, 2003.hilder2009comparison S. Hilder, R. Harvey, and B.-J. Theobald, “Comparison of human and machine-based lip-reading.” in AVSP, 2009, pp. 86–89.williams1998frame J. J. Williams, J. C. Rutledge et al., “Frame rate and viseme analysis for multimedia applications to assist speechreading,” J VLSI Signal Process Syst Signal Image Video Technol, vol. 20, pp. 7–23, 1998.chictu2012automatic A. Chiţu and L. J. Rothkrantz, “Automatic visual speech recognition,” Speech enhancement, modeling and recognition, p. 95, 2012.erber1975auditory N. P. Erber, “Auditory-visual perception of speech,” J Speech Hear Disord, vol. 40, pp. 481–492, 1975.sumby1954visual W. H. Sumby and I. Pollack, “Visual contribution to speech intelligibility in noise,” J Acoust Soc Am, vol. 26, pp. 212–215, 1954.ronquest2010language R. E. Ronquest, S. V. Levi, and D. B. Pisoni, “Language identification from visual-only speech signals,” Atten Percept Psychophys, vol. 72, pp. 1601–1613, 2010.antonakos2015survey E. Antonakos, A. Roussos, and S. Zafeiriou, “A survey on mouth modeling and analysis for sign language recognition,” in FG, 2015.seymour2008comparison R. Seymour, D. Stewart, and J. Ming, “Comparison of image transform-based features for visual speech recognition in clean and corrupted videos,” Eurasip J Image Vide, p. 14, 2008.dupont2000audio S. Dupont and J. Luettin, “Audio-visual speech modeling for continuous speech recognition,” IEEE T Multimedia, vol. 2, pp. 141–151, 2000.nefian2002coupled A. V. Nefian, L. Liang et al., “A coupled hmm for audio-visual speech recognition,” in ICASSP, 2002, pp. II–2013.zhou2014review Z. Zhou, G. Zhao et al., “A review of recent advances in visual speech decoding,” Image Vis Comput, vol. 32, pp. 590–605, 2014.yau2007visual W. C. Yau, D. K. Kumar, and H. Weghorn, “Visual speech recognition using motion features and hidden markov models,” in CAIP, 2007, pp. 832–839.sui2015listening C. Sui, M. Bennamoun, and R. Togneri, “Listening with your eyes: Towards a practical visual speech recognition system using deep boltzmann machines,” in ICCV, 2015, pp. 154–162.chung2016lip J. S. Chung, A. Senior et al., “Lip reading sentences in the wild,” arXiv preprint arXiv:1611.05358, 2016.petridis2016deep S. Petridis and M. Pantic, “Deep complementary bottleneck features for visual speech recognition,” in ICASSP, 2016, pp. 2304–2308.almajai2016improved I. Almajai, S. Cox et al., “Improved speaker independent lip reading using speaker adaptive training and deep neural networks,” in ICASSP, 2016, pp. 2722–2726.zhou2014compact Z. Zhou, X. Hong et al., “A compact representation of visual speech data using latent variables,” IEEE Trans Pattern Anal Mach Intell, vol. 36, 2014.moll1971investigation K. L. Moll and R. G. Daniloff, “Investigation of the timing of velar movements during speech,” J Acoust Soc Am, vol. 50, pp. 678–684, 1971.buchan2007spatial J. N. Buchan, M. Paré, and K. G. Munhall, “Spatial statistics of gaze fixations during dynamic face processing,” Soc Neurosci, vol. 2, pp. 1–13, 2007.ortiz2008lipreading I. d. l. R. R. Ortiz, “Lipreading in the prelingually deaf: what makes a skilled speechreader?” Span J Psychol, vol. 11, pp. 488–502, 2008.potamianos2001automatic G. Potamianos and C. Neti, “Automatic speechreading of impaired speech,” in AVSP, 2001.bernstein1998makes L. E. Bernstein, M. E. Demorest, and P. E. Tucker, What makes a good speechreader? 
First you have to find one. Hove, United Kingdom: Psychology Press Ltd. Publishers, 1998.capek2008cortical C. M. Capek, M. MacSweeney et al., “Cortical circuits for silent speechreading in deaf and hearing people,” Neuropsychologia, vol. 46, pp. 1233–1241, 2008.ellis2001tas T. Ellis, M. MacSweeney et al., “Tas: A new test of adult speechreading-deaf people really can be better speechreaders,” in AVSP, 2001.lyxell2000visual B. Lyxell and I. Holmberg, “Visual speechreading and cognitive performance in hearing-impaired and normal hearing children (11-14 years),” Br J Educ Psychol, vol. 70, pp. 505–518, 2000.rodriguez2015speechreading I. R. Rodríguez-Ortiz, D. Saldaña, and F. J. Moreno-Perez, “How speechreading contributes to reading in a transparent ortography: the case of spanish deaf people,” J Res Read, 2015.kyle2013speechreading F. E. Kyle, R. Campbell et al., “Speechreading development in deaf and hearing children: introducing the test of child speechreading,” J Speech Lang Hear Res, vol. 56, pp. 416–426, 2013.mohammed2006speechreading T. Mohammed, R. Campbell et al., “Speechreading and its association with reading among deaf, hearing and dyslexic individuals,” Clin Linguist Phon, vol. 20, pp. 621–630, 2006.kyle2006concurrent F. E. Kyle and M. Harris, “Concurrent correlates and predictors of reading and spelling achievement in deaf and hearing school children,” J Deaf Stud Deaf Educ, vol. 11, pp. 273–288, 2006.duchnowski2000development P. Duchnowski, D. S. Lum et al., “Development of speechreading supplements based on automatic speech recognition,” IEEE T Bio-Med Eng, vol. 47, pp. 487–496, 2000.potamianos2004audio G. Potamianos, C. Neti et al., “Audio-visual automatic speech recognition: An overview,” AVSP, vol. 22, p. 23, 2004.matthews2002extraction I. Matthews, T. F. Cootes et al., “Extraction of visual features for lipreading,” IEEE Trans Pattern Anal Mach Intell, vol. 24, pp. 198–213, 2002.cox2008challenge S. J. Cox, R. Harvey et al., “The challenge of multispeaker lip-reading.” in AVSP, 2008, pp. 179–184.lee2004avicar B. Lee, M. Hasegawa-Johnson et al., “Avicar: audio-visual speech corpus in a car environment.” in Interspeech, 2004.messer1999xm2vtsdb K. Messer, J. Matas et al., “Xm2vtsdb: The extended m2vts database,” in AVBPA, 1999, pp. 965–966.patterson2002cuave E. K. Patterson, S. Gurbuz et al., “Cuave: A new audio-visual database for multimodal human-computer interface research,” in ICASSP, 2002, pp. II–2017.huang2004audio J. Huang, G. Potamianos et al., “Audio-visual speech recognition using an infrared headset,” Speech Commun, vol. 44, pp. 83–96, 2004.lucey2008patch P. J. Lucey, G. Potamianos, and S. Sridharan, “Patch-based analysis of visual speech from multiple views,” AVSP, 2008.sanderson2002vidtimit C. Sanderson, “The vidtimit database,” IDIAP, Tech. Rep., 2002.cooke2006audio M. Cooke, J. Barker et al., “An audio-visual corpus for speech perception and automatic speech recognition,” J Acoust Soc Am, vol. 120, pp. 2421–2424, 2006.mccool2012bi C. McCool, S. Marcel et al., “Bi-modal person recognition on a mobile phone: using mobile phone data,” in ICMEW, 2012, pp. 635–640.zhao2009lipreading G. Zhao, M. Barnard, and M. Pietikainen, “Lipreading with local spatiotemporal descriptors,” IEEE T Multimedia, vol. 11, pp. 1254–1265, 2009.anina2015ouluvs2 I. Anina, Z. Zhou et al., “Ouluvs2: A multi-view audiovisual database for non-rigid mouth motion analysis,” in FG, 2015, pp. 1–5.ortega2004av A. Ortega, F.
Sukno et al., “Av@car: A spanish multichannel multimodal corpus for in-vehicle automatic audio-visual speech recognition.” in LREC, 2004.hazen2004segment T. J. Hazen, K. Saenko et al., “A segment-based audio-visual speech recognizer: Data collection, development, and initial experiments,” in ICMI. ACM, 2004, pp. 235–242.LiILiR R. Bowden, “LILiR language independent lip reading,” <http://www.ee.surrey.ac.uk/Projects/LILiR/datasets.html>, 2010, accessed: 2016-08-16.webACCAPS “ACCAPS federació d'associacions catalanes de pares i persones sordes,” <http://www.acapps.org/web/>, accessed: 2016-08-16.boersma2002praat P. Boersma et al., “Praat, a system for doing phonetics by computer,” Glot international, vol. 5, pp. 341–345, 2002.wells1997sampa J. C. Wells et al., “Sampa computer readable phonetic alphabet,” Handbook of standards and resources for spoken language systems, vol. 4, 1997.xiong2013supervised X. Xiong and F. De la Torre, “Supervised descent method and its applications to face alignment,” in CVPR, 2013, pp. 532–539.fernandez2016Automatic A. Fernandez-Lopez and F. M. Sukno, “Automatic viseme vocabulary construction to enhance continuous lip-reading,” VISAPP, 2017.colquhoun2014investigation D. Colquhoun, “An investigation of the false discovery rate and the misinterpretation of p-values,” Open Science, vol. 1, p. 140216, 2014.lan2012insights Y. Lan, R. Harvey, and B.-J. Theobald, “Insights into machine lip reading,” in ICASSP, 2012, pp. 4825–4828. | http://arxiv.org/abs/1704.08028v1 | {
"authors": [
"Adriana Fernandez-Lopez",
"Oriol Martinez",
"Federico M. Sukno"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170426091926",
"title": "Towards Estimating the Upper Bound of Visual-Speech Recognition: The Visual Lip-Reading Feasibility Database"
} |
SIT: A Lightweight Encryption Algorithm for Secure Internet of Things Muhammad Usman Faculty of Engineering Science and Technology, Iqra University, Defence View, Shaheed-e-Millat Road (Ext.), Karachi 75500, Pakistan. Email: [email protected] Irfan Ahmed and M. Imran Aslam Department of Electronic Engineering, NED University of Engineering and Technology, University Road, Karachi 75270, Pakistan. Email: [email protected], [email protected] Shujaat Khan Faculty of Engineering Science and Technology, Iqra University, Defence View, Shaheed-e-Millat Road (Ext.), Karachi 75500, Pakistan. Email: [email protected] S.M. Usman Ali Department of Electronic Engineering, NED University of Engineering and Technology, University Road, Karachi 75270, Pakistan. Email: [email protected] December 30, 2023 ===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================

De, Trevisan and Tulsiani [CRYPTO 2010] show that every distribution over n-bit strings which has constant statistical distance to uniform (e.g., the output of a pseudorandom generator mapping n-1 to n bit strings) can be distinguished from the uniform distribution with advantage ϵ by a circuit of size O(2^n ϵ^2). We generalize this result, showing that a distribution which has less than k bits of min-entropy can be distinguished from any distribution with k bits of δ-smooth min-entropy with advantage ϵ by a circuit of size O(2^k ϵ^2/δ^2). As a special case, this implies that any distribution with support at most 2^k (e.g., the output of a pseudoentropy generator mapping k to n bit strings) can be distinguished from any given distribution with min-entropy k+1 with advantage ϵ by a circuit of size O(2^k ϵ^2). Our result thus shows that pseudoentropy distributions face basically the same non-uniform attacks as pseudorandom distributions.

§ INTRODUCTION
De, Trevisan and Tulsiani <cit.> show a non-uniform attack against any pseudorandom generator (PRG) which maps {0,1}^{n-1} → {0,1}^n. For any ϵ ≥ 2^{-n/2}, their attack achieves distinguishing advantage ϵ and can be realized by a circuit of size O(2^n ϵ^2). Their attack doesn't even need the PRG to be efficiently computable. In this work we consider a more general question, where we ask for attacks distinguishing a distribution from any distribution with slightly higher min-entropy. We generalize <cit.>, showing a non-uniform attack which, for any ϵ,δ>0, distinguishes any distribution with <k bits of min-entropy from any distribution with k bits of δ-smooth min-entropy with advantage ϵ, and where the distinguisher is of size O(2^k ϵ^2/δ^2).
As a corollary we recover the <cit.> result, showing that the output of any pseudoentropy generator {0,1}^k → {0,1}^n can be distinguished from any variable with min-entropy k+1 with advantage ϵ by circuits of size O(2^k ϵ^2).
* From a theoretical perspective, we prove where the separation between pseudoentropy and smooth min-entropy lies, by classifying how powerful computationally bounded adversaries can be while still being fooled into "seeing" more entropy than there really is. * From a more practical perspective, our result shows that using pseudoentropy instead of pseudorandomness (which for many applications is sufficient and allows for savings in entropy quantity <cit.>) will not give improvements in terms of quality (i.e., the size and advantage of the distinguishers considered), at least not against generic non-uniform attacks.

§.§ Notation and Basic Definitions
Two variables X and Y are (s,ϵ)-indistinguishable, denoted X ∼_{s,ϵ} Y, if for all boolean circuits D of size |D| ≤ s we have |Pr[D(X)=1] - Pr[D(Y)=1]| ≤ ϵ. The statistical distance of X and Y is d_1(X;Y) = ∑_x |P_X(x)-P_Y(x)| (where P_X(x) = Pr[X=x]), and the Euclidean distance of X and Y is d_2(P_X;P_Y) = √(∑_x (P_X(x)-P_Y(x))^2). A variable X has min-entropy k if it doesn't take any particular outcome with probability greater than 2^{-k}; it has δ-smooth min-entropy k <cit.> if it is δ-close to some distribution with min-entropy k. X has k bits of HILL pseudoentropy of quality (s,ϵ) if there exists a Y with min-entropy k that is (s,ϵ)-indistinguishable from X. We use the following standard notation for these notions:
* min-entropy: H_∞(X) = -log max_x Pr[X=x].
* smooth min-entropy: H_∞^δ(X) = max_{Y: d_1(X;Y) ≤ δ} H_∞(Y).
* HILL pseudoentropy: H^HILL_{s,ϵ}(X) = max_{Y: Y ∼_{(s,ϵ)} X} H_∞(Y).

§.§ Our Contribution
In this work we give generic non-uniform attacks on pseudoentropy distributions. A seemingly natural goal is to consider a distribution X with H_∞(X) ≤ k bits of min-entropy and strictly larger H^HILL_{s,ϵ}(X) ≥ k+1 bits of HILL entropy, and then give an upper bound on s in terms of ϵ. This does not work, as there are X where H_∞(X) ≪ H_∞^δ(X),[Consider an X which is basically uniform over {0,1}^n, but has mass δ on one particular point; then log δ^{-1} = H_∞(X) ≪ H_∞^δ(X) = n.] and as by definition H^HILL_{∞,δ}(X) = H_∞^δ(X), we can have a large entropy gap H^HILL_{∞,δ}(X) - H_∞(X) even when considering unbounded adversaries against HILL entropy. For this reason, in our main technical result <Ref> below, we must consider distributions with bounded smooth min-entropy. This makes the statement of the lemma somewhat technical.

Suppose that X ∈ {0,1}^n does not have k bits of δ-smooth min-entropy, i.e., H_∞^δ(X) < k. Then for any ϵ we have H^HILL_{s,ϵ}(X) < k for s = Õ(2^k ϵ^2 δ^{-2}), where Õ(·) hides a factor linear in n.

Let f: {0,1}^k → {0,1}^n be a deterministic (not necessarily efficient) function. Then we have H^HILL_{s,ϵ}(f(U_k)) ≤ k+1 for s = Õ(2^k ϵ^2); more generally, for any X over {0,1}^n with support of size ≤ 2^k, H^HILL_{s,ϵ}(X) ≤ k+1 for s = Õ(2^k ϵ^2).

For the special case n=k+1 we recover the bound for pseudorandom generators from <cit.>.
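Before turning to the proofs, a small numeric illustration of these entropy notions may help. The sketch below is ours (not from the paper): it computes H_∞ from an explicit probability vector and tests the condition H_∞^δ(X) ≥ k via the excess-mass characterization ∑_x max(P_X(x)-2^{-k}, 0) ≤ δ, which is proved in the omitted-proofs section below.

import math

def min_entropy(p):
    # H_inf(X) = -log2 max_x Pr[X=x]
    return -math.log2(max(p))

def has_smooth_min_entropy(p, k, delta):
    # H_inf^delta(X) >= k  iff  sum_x max(p(x) - 2^{-k}, 0) <= delta
    excess = sum(max(px - 2.0 ** -k, 0.0) for px in p)
    return excess <= delta

# X uniform on 2 of 8 points: H_inf = 1, but moving delta = 1/2 of the
# mass already flattens X to min-entropy 2.
p = [0.5, 0.5, 0, 0, 0, 0, 0, 0]
print(min_entropy(p))                     # 1.0
print(has_smooth_min_entropy(p, 2, 0.5))  # True  (excess mass = 0.5)
print(has_smooth_min_entropy(p, 3, 0.5))  # False (excess mass = 0.75)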
The theorem follows from <Ref> when δ=1/2; consider any X with support of size ≤ 2^k. Then H_∞^δ(X) ≤ k+1, as no matter how we cut probability mass of 1-δ=1/2 over 2^k elements, one element will have weight at least 2^{-k-1}.

Let us shortly discuss why we need the δ-smoothness in Theorem <ref>. For some δ where 2^{-n} ≪ δ ≪ 1, consider an X ∈ {0,1}^n which is basically uniform, but on one point, say 0^n, has more mass: P_X(0^n)=δ (so for x ≠ 0^n, P_X(x)=(1-δ)/(2^n-1)). This X has min-entropy H_∞(X) = -log δ ≪ n, but it has maximal δ-smooth min-entropy H_∞^δ(X) = n. This also means H^HILL_{∞,δ}(X) = n, so when considering distinguishing advantage ≥ δ, we don't have any non-trivial upper bound on the HILL entropy, as cutting off a δ fraction of the probability mass already makes X uniform. The theorem must thus consider δ-smooth min-entropy to exclude such pathological distributions. Fortunately, for the important special case where X is generated by a deterministic process from a uniform k-bit seed as in Corollary <ref>, such pathological behaviour can no longer happen.

§.§ Proof Outline
§.§.§ A Weaker Result as a Ball-Bins Problem
We outline the proof of a somewhat weakened version of <Ref> in the language of balls and bins. For every Y of min-entropy k'=k+Ω(1) we want to distinguish Y from X = f(U_k). Suppose for simplicity that Y is flat and f is injective, so that X is also flat. Our strategy will be to hash the points randomly into two bins and take advantage of the fact that the average maximum load is closer to 1/2 when we sample from Y than when drawing from X. The reason is that Y has more balls, so by the law of large numbers, we expect the load to be "more concentrated" around the mean. Think of throwing balls (inputs x) into two bins (labeled by -1 and 1). If the balls come from the support of X, the expected maximum load (over the two bins) is ≈ 2^{k-1} + √(2/π)·2^{k/2}. Similarly, if the balls come from the support of Y, then the maximum load is 2^{k'-1} + √(2/π)·2^{k'/2}. In terms of the average load (the load normalized by the total number of balls):
AverageMaxLoad(X) ≈ 0.5 + √(2/π)·2^{-k/2} w.h.p. when drawing from X,
AverageMaxLoad(Y) ≈ 0.5 + √(2/π)·2^{-k'/2} w.h.p. when drawing from Y.
As k'=k+Ω(1) we obtain (with good probability)
AverageMaxLoad(X) - AverageMaxLoad(Y) = Ω(2^{-k/2}).
Letting D be one of these bin assignments we obtain a distinguisher with advantage ϵ = Ω(2^{-k/2}). To generate the assignments efficiently we relax the assumption about choosing bins and assume only that the choices of bins are independent for any group of ℓ=4 balls. The fourth moment method allows us to keep sufficiently good probabilistic guarantees on the maximum load.

§.§.§ The General Case by Random Walk Techniques
A high-level outline and comparison to <cit.>. Below in <Ref> we sketch the flow of our argument. Our starting point is the proof from <cit.>. They use the fact that a random mapping D: {0,1}^n → {-1,1} likely distinguishes any two distributions X and Y over {0,1}^n with advantage equal to the Euclidean distance d_2(X;Y) = √(∑_x (P_X(x)-P_Y(x))^2). For any X and Y with constant statistical distance ∑_x |P_X(x)-P_Y(x)| = Θ(1) (which is the case for the PRG setting where Y=U_n and X=PRG(U_{n-1})) this yields a bound Ω(2^{-n/2}). This bound can then be amplified, at the cost of extra advice, by partitioning the domain {0,1}^n and combining the corresponding advantages (the advice basically encodes whether there is a need for flipping the output). Finally, one can show that 4-wise independence provides enough randomness for this argument, which makes sampling D efficient.
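As an aside, the ball-bins intuition from the previous subsection is easy to check numerically. The following simulation sketch (with illustrative parameters, and fully independent bin choices rather than 4-wise independent ones) estimates the average maximum load for supports of size 2^k and 2^{k'} and exhibits the shrinking deviation from 1/2.

import random

def avg_max_load(num_balls, trials=500):
    # Throw num_balls balls into two bins; return the average maximum
    # load normalized by the number of balls.
    acc = 0.0
    for _ in range(trials):
        ones = sum(random.getrandbits(1) for _ in range(num_balls))
        acc += max(ones, num_balls - ones) / num_balls
    return acc / trials

k, kp = 10, 12
print(avg_max_load(2 ** k))    # ~0.5 + Theta(2^{-k/2})
print(avg_max_load(2 ** kp))   # measurably closer to 0.5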
Our argument deviates from this approach in two important aspects. The first difference is that in the pseudoentropy case we can improve the advantage from Ω(2^{-n/2}), where n is the logarithm of the support size of the variables considered, to Ω(2^{-k/2}), where k is the min-entropy of the variable we want to distinguish from. The reason is that being statistically far from every k-bit min-entropy distribution implies a large bias on already 2^k elements. This fact (see <Ref>, and also <Ref>) is a new characterization of smooth min-entropy of independent interest.

The second subtlety arises when it comes to amplifying the advantage over the partition slices. For the pseudorandomness case it is enough to split the domain in a deterministic way, for example by fixing prefixes of n-bit strings; in our case this is not sufficient. For us a "good" partition must shatter the 2^k-element high-bias set, which can be arbitrary. Our solution is to use random partitions; in fact, we show that using 4-universal hashing is sufficient. Generating base distinguishers and partitions at the same time makes the probability calculations more involved. The technical calculations are based on the fourth moment method, similarly as in <cit.>. The basic idea is that for settings where the second and fourth moments are easy to compute (e.g., sums of independent symmetric random variables) we can obtain good upper and lower bounds on the first moment. In the context of algorithmic applications these techniques are usually credited to <cit.>. Interestingly, exploiting natural relations to random walks, we show that the calculations immediately follow by adopting classical (almost one century old) tools and results <cit.>. Our technical novelty is an application of moment inequalities due to Marcinkiewicz-Zygmund and Paley-Zygmund, which allow us to prove slightly more than just the existence of an attack: namely, we generate it with constant success probability.

Advantage Ω(2^{-k/2}). Consider any X with δ-smooth min-entropy smaller than k. This requirement can be seen as a statement about the "shape" of the distribution. Namely, the mass of X that is above the threshold 2^{-k} is at least δ, that is,
∑_x max(P_X(x) - 2^{-k}, 0) ⩾ δ.
For an illustration see <Ref>. We construct our attack based on this observation. Define the advantage of a function D for distributions X and Y as
Adv^D(X;Y) = |∑_x D(x)(P_X(x)-P_Y(x))|
(writing also Adv^D_S when the summation is restricted to a subset S). Consider a random distinguisher D: {0,1}^n → {-1,1}. The random variables D(x) for different x are independent, have zero mean and second moment equal to 1. Therefore the expected square of the advantage, over the choice of D, equals
E_D[(Adv^D(X;Y))^2] = E_D|∑_x D(x)(P_X(x)-P_Y(x))|^2 = ∑_x (P_X(x)-P_Y(x))^2.
Let S be the set of x such that P_X(x) > 2^{-k}. For any Y of min-entropy at least k we obtain
∑_{x∈S}(P_X(x)-P_Y(x))^2 ⩾ ∑_{x∈S}(P_X(x)-2^{-k})^2 ⩾ |S|^{-1}(∑_{x∈S}(P_X(x)-2^{-k}))^2 ⩾ 2^{-k}δ^2,
where the first inequality follows because P_Y(x) ⩽ 2^{-k} < P_X(x) for x∈S, the second inequality is the standard inequality between the first and second norms, and the third inequality follows because we showed that Pr[X∈S] ⩾ |S|·2^{-k}+δ (illustrated in <Ref>), which also implies |S|^{-1} ⩾ 2^{-k}. By the previous formula for the expected squared advantage this means that (Adv^D(X;Y))^2 ⩾ 2^{-k}δ^2 for at least one choice of D. This implies Adv^D(X;Y) ⩾ 2^{-k/2}δ.
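The second-moment identity above is also easy to verify numerically. The sketch below (with toy distributions of our choosing) estimates E_D[(Adv^D(X;Y))^2] for uniformly random signs D(x) and compares it against ∑_x (P_X(x)-P_Y(x))^2.

import random

def adv(D, pX, pY):
    return abs(sum(d * (px - py) for d, px, py in zip(D, pX, pY)))

n = 8
pX = [0.30, 0.30, 0.10, 0.10, 0.05, 0.05, 0.05, 0.05]
pY = [1.0 / n] * n                       # uniform, min-entropy 3

l2_sq = sum((px - py) ** 2 for px, py in zip(pX, pY))
est = sum(adv([random.choice([-1, 1]) for _ in range(n)], pX, pY) ** 2
          for _ in range(20000)) / 20000
print(l2_sq, est)   # the two values agree up to sampling error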
A random D as defined would be of size exponential in n, but since we used only the second moment in the calculations, it suffices to generate the values D(x) as pairwise independent random variables. By assuming 4-wise independence – which can be computed by circuits of size O(n^2) – we can prove slightly more, namely that a constant fraction of the generated D's are good distinguishers. This property will be important for the next step, where we amplify the advantage assuming larger distinguishers.

Leveraging the advantage by slicing the domain. Consider a random and equitable partition {S_i}_{i=1}^T of the set {0,1}^n. From the previous analysis we know that a random distinguisher achieves advantage ϵ = d_2(P_X;P_Y) over the whole domain. Note that (for any, not necessarily random, partition {S_i}_i) we have
(d_2(P_X;P_Y))^2 = ∑_{i=1}^T (d_2(P_X;P_Y|S_i))^2,
where d_2(P_X;P_Y|S_i) is the restriction of the distance to the set S_i (obtained by restricting the summation to S_i). From a random partition we expect the mass difference between P_X and P_Y to be distributed evenly among the partition slices (see <Ref>). Based on the last equation, we expect
d_2(P_X;P_Y|S_i) ≈ d_2(P_X;P_Y)/√T
to hold with high probability over {S_i}_i. In fact, if the mass difference is not well balanced among the slices (in the extreme case, concentrated on one slice) our argument will not offer any gain over the previous construction (see <Ref>).

By applying the previous argument to the individual slices, for every i we can obtain an advantage Adv^D_{S_i}(X;Y) = Ω(T^{-1/2}·2^{-k/2}·δ) when restricted to the set S_i (with high probability over the choice of D and {S_i}_i). Now if the sets S_i are efficiently recognizable, we can combine them into a better distinguisher. Namely, for every i we choose a value β_i ∈ {-1,1} such that D's advantage (before taking the absolute value) restricted to S_i has sign β_i, and set D'(x) = β_i·D(x), where i is such that x∈S_i. Then the advantage equals (with high probability over D and the S_i's)
Adv^{D'}(X;Y) = ∑_{i=1}^T Adv^D_{S_i}(X;Y) = Ω(T^{1/2}·2^{-k/2}·δ).
We need to specify a 4-wise independent hash for D, another 4-wise independent hash for deciding in which of the T slices an element lies, and T bits to encode the β_i's. Thus for a given T the size of D' will be T+Õ(n). Using the above equation, we then get a smooth tradeoff s = O(2^k ϵ^2 δ^{-2}) between the advantage ϵ and the circuit size s. This discussion shows that to complete the argument we need the following two properties of the partition: (a) the mass difference between P_X and P_Y is (roughly) equidistributed among the slices, and (b) membership in the partition slices can be efficiently decided.

Slicing using 4-wise independence. To complete the argument, we assume that T is a power of 2, and generate the slicing by using a 4-universal hash function h: {0,1}^n → {0,1}^{log T}. The i-th slice S_i is defined as {x∈{0,1}^n: h(x)=i}. These assumptions are enough to prove that
E[Adv^D_{S_i}(X;Y)] = Ω(T^{-1/2}·d_2(P_X;P_Y)) = Ω(T^{-1/2}·2^{-k/2}·δ).
Interestingly, the expected advantage (left-hand side) cannot be computed directly. The trick here is to bound it in terms of the second and fourth moments. The above inequality, coupled with bounds on the second moments of the advantages Adv^D_{S_i} (obtained directly), allows us to prove that
Pr[∑_{i=1}^T Adv^D_{S_i} ⩾ Ω(1)·T^{1/2}·2^{-k/2}·δ] > Ω(1).
This shows that there exists the claimed distinguisher D'.
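The whole construction can be summarized in a few lines of code. The sketch below is our illustration, not the paper's implementation: it realizes D and the slicing hash as random degree-3 polynomials over a prime field (one standard way to obtain 4-wise independence; the text does not fix a particular construction, and the parity/modulo reductions used here are only near-uniform), computes the advice bits β_i, and evaluates the combined distinguisher D' on a toy pair (X, Y).

import random

P = (1 << 61) - 1                 # prime modulus, far larger than the domain

def poly_eval(coeffs, x):
    r = 0
    for c in coeffs:
        r = (r * x + c) % P
    return r

d_coeffs = [random.randrange(P) for _ in range(4)]   # 4-wise independent signs
h_coeffs = [random.randrange(P) for _ in range(4)]   # 4-universal slice hash
T = 8

def D(x):
    return 1 if poly_eval(d_coeffs, x) & 1 else -1

def slice_of(x):
    return poly_eval(h_coeffs, x) % T

dom = range(64)
pX = [1 / 16 if x < 16 else 0.0 for x in dom]        # support 2^4
pY = [1 / 64] * 64                                   # min-entropy 6

# Advice: beta_i is the sign of D's advantage on slice S_i.
beta = []
for i in range(T):
    s = sum((pX[x] - pY[x]) * D(x) for x in dom if slice_of(x) == i)
    beta.append(1 if s >= 0 else -1)

def D_prime(x):                                      # combined distinguisher
    return beta[slice_of(x)] * D(x)

advantage = sum((pX[x] - pY[x]) * D_prime(x) for x in dom)
print(advantage)   # equals the sum of the per-slice advantages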
In fact, a constant fraction of the generated distinguishers (over the choice of D and {S_i}_i) work.

Random walks. From a technical point of view, our method involves computing higher moments of the advantages to obtain concentration and anti-concentration results. The key observation is that the advantage, written as
Adv^D_{S_i}(X;Y) = |∑_x (P_X(x)-P_Y(x))·1_{S_i}(x)·D(x)|,
can be studied as a random walk
Adv^D_{S_i}(X;Y) = |∑_x ξ_{i,x}|
with zero-mean increments ξ_{i,x} = (P_X(x)-P_Y(x))·1_{S_i}(x)·D(x). The difference with respect to the classical model is that the increments are only ℓ-wise independent (for ℓ=4). However, the classical moment bounds still apply (see <Ref> for more details).

§ PRELIMINARIES

§.§ Interpolation Inequalities
Interpolation inequalities show how to bound the p-th moment of a random variable if we know bounds on one smaller and one higher moment. The following result, also known as the log-convexity of L_p norms, can be proved by the Hölder Inequality. For any p_1 < p < p_2 and any bounded random variable Z we have
‖Z‖_p ⩽ ‖Z‖_{p_1}^θ · ‖Z‖_{p_2}^{1-θ},
where θ is such that θ/p_1 + (1-θ)/p_2 = 1/p, and where for any r we define ‖Z‖_r = (E|Z|^r)^{1/r}. Alternatively, we can lower bound a moment given two higher moments. This is very useful when the higher moments are easier to compute. In this work we bound first moments from below when we know the second and the fourth moments (which are easier to compute, being even-order moments). For any bounded Z we have
E|Z| ⩾ (E|Z|^2)^{3/2} / (E|Z|^4)^{1/2}.

§.§ Moments of random walks
For a random walk ∑_x ξ(x), where the ξ(x) are independent with zero mean, we have good control over the moments, namely E|∑_x ξ(x)|^p = Θ(1)·(∑_x Var(ξ(x)))^{p/2}, where the constants depend on p. This result is due to Marcinkiewicz and Zygmund <cit.>, who extended the earlier result of Khintchine <cit.>. Below we note that for small moments p it suffices to assume only p-wise independence (most commonly used versions assume full independence). Suppose that {ξ(x)}_{x∈𝒳} are 4-wise independent with zero mean. Then we have
(1/√3)·(∑_{x∈𝒳} Var(ξ(x)))^{1/2} ⩽ E|∑_{x∈𝒳} ξ(x)| ⩽ (∑_{x∈𝒳} Var(ξ(x)))^{1/2},
E(∑_{x∈𝒳} ξ(x))^2 = ∑_{x∈𝒳} Var(ξ(x)),
(∑_{x∈𝒳} Var(ξ(x)))^2 ⩽ E(∑_{x∈𝒳} ξ(x))^4 ⩽ 3·(∑_{x∈𝒳} Var(ξ(x)))^2.
The proof appears in <Ref>.

§.§ Anticoncentration bounds
For any positive random variable Z and a parameter θ∈(0,1) we have
Pr[Z > θ·E Z] ⩾ (1-θ)^2 · (E Z)^2 / E(Z^2).
By applying <Ref> to the setting of <Ref>, and choosing θ = 1/√3, we obtain: Suppose that {ξ(x)}_{x∈𝒳} are 4-wise independent with zero mean; then we have
Pr[|∑ ξ(x)| > (1/3)·(∑ Var(ξ(x)))^{1/2}] > 1/17,
where the summation is over x∈𝒳.

§ PROOF OF <REF>
For any random variable X with values in a finite set 𝒳, and any δ and k, we have the following equivalence:
H_∞^δ(X) ⩾ k ⟺ ∑_{x∈𝒳} max(P_X(x)-2^{-k}, 0) ⩽ δ.
The proof appears in <Ref>. We will work with the following equivalent statement: We have H_∞^δ(X) < k if and only if there exists a set S of at most 2^k elements such that
∑_{x∈S} |P_X(x)-P_Y(x)| > δ
for all Y of min-entropy at least k.

The direction ⟸ follows trivially from the definition of smooth min-entropy. Now assume H_∞^δ(X) < k. Let S be the set of all x such that P_X(x) > 2^{-k}; then |S| < 2^k, and moreover by <Ref> we have ∑_{x∈S}(P_X(x)-2^{-k}) > δ. In particular, for any Y of min-entropy k (i.e., P_Y(x) ⩽ 2^{-k} for all x),
∑_{x∈S}(P_X(x)-P_Y(x)) > δ.

For any distributions P_X, P_Y on 𝒳 and any subset S of 𝒳 we have
(∑_{x∈S}(P_X(x)-P_Y(x))^2)^{1/2} ⩾ |S|^{-1/2} · ∑_{x∈S}|P_X(x)-P_Y(x)|.
By the Jensen Inequality we have
|S|^{-1}·∑_{x∈S}(P_X(x)-P_Y(x))^2 ⩾ (|S|^{-1}·∑_{x∈S}|P_X(x)-P_Y(x)|)^2,
which is equivalent to the statement.

Suppose that H_∞^δ(X) < k.
Then for any Y of min-entropy at least k we have
(∑_x |P_X(x)-P_Y(x)|^2)^{1/2} > 2^{-k/2}·δ.
It suffices to combine <Ref> and <Ref>. By <Ref> we conclude that the advantage of a random distinguisher for any two measures (in our case P_X and P_Y) equals the Euclidean distance.

Let {D(x)}_{x∈{0,1}^n} be 4-wise independent as indexed by x, with each D(x) a uniformly random element of {-1,1}. Then for any set S we have
|∑_{x∈S} D(x)·(P_X(x)-P_Y(x))| > (1/3)·d_2(P_X;P_Y)
with probability 1/17 over the choice of D (the result actually holds for arbitrary measures in place of P_X, P_Y). For our case, that is, the setting in <Ref>, we obtain: For X, Y as in <Ref>, and D as in <Ref>, we have Adv^D(X;Y) ≥ (1/3)·2^{-k/2}·δ with probability 1/17 over D.

§.§ Partitioning the domain into T slices
Let h: {0,1}^n → [1 … 2^t], where t = ⌈log T⌉, be a 4-universal hash function. Define S_i = {x: h(x) = i} and Δ(x) = P_X(x)-P_Y(x), and consider the advantages on the slices S_i:
Adv^D_{S_i}(X;Y) = |∑_x Δ(x)·D(x)·1_{S_i}(x)|.
The following corollary shows that on each of our T slices we get advantage T^{-1/2}·2^{-k/2}·δ. The proof appears in <Ref>. For D, {S_u}_u as above and every i, j:
E_{D,{S_u}_u}[Adv^D_{S_i}(X;Y)] ⩾ 3^{-1/2}·T^{-1/2}·d_2(P_X;P_Y),
E_{D,{S_u}_u}[Adv^D_{S_i}(X;Y)·Adv^D_{S_j}(X;Y)] ⩽ T^{-1}·d_2(P_X;P_Y)^2
(the statement is valid for arbitrary measures in place of P_X, P_Y).

Denote Z = ∑_i Adv^D_{S_i}(X;Y). Using <Ref> with θ = 1/√3, where we compute E(Z^2) and E Z according to <Ref>, we obtain Pr[Z > (1/√3)·E Z] ⩾ 1/17. Bounding E Z once again as in <Ref> we get: For X, Y as in <Ref>, and D and the S_i defined above, we have
Pr_{D,{S_u}_u}[∑_{i=1}^T Adv^D_{S_i}(X;Y) ⩾ (1/3)·T^{1/2}·2^{-k/2}·δ] ⩾ 1/17
(for general X, Y the lower bound is Ω(1)·T^{1/2}·d_2(P_X;P_Y)).

The corollary shows that the total absolute advantage over all partition slices is as expected. Since {S_i}_i is a partition we have
∑_{i=1}^T Adv^D_{S_i}(X;Y) = ∑_{i=1}^T |∑_{x∈S_i}(P_X(x)-P_Y(x))·D(x)| = ∑_x (P_X(x)-P_Y(x))·D(x)·β(x),
where for β_i := sgn(∑_{x∈S_i}(P_X(x)-P_Y(x))·D(x)) (the sign of the advantage on the i-th slice) we define β(x) = β_i, with i such that S_i contains x. This shows that by "flipping" the distinguisher output on the slices we achieve the sum of the individual advantages. Since the bit β(x) can be computed with O(T) + Õ(n) advice (the complexity of the function i → β_i plus the complexity of finding i for a given x) we obtain: For X, Y as in <Ref>, and D and {S_i}_i defined above, there exists a modification D' of D which in time Õ(n) and with advice O(T) achieves advantage (1/3)·T^{1/2}·2^{-k/2}·δ with probability 1/17. Finally, by setting ϵ = T^{1/2}·2^{-k/2}·δ and manipulating T we arrive at: For any ϵ there exists T such that the distinguisher in <Ref> has advantage ϵ and circuit complexity s = O(2^k·ϵ^2·δ^{-2}).

§ OMITTED PROOFS
§.§ Proof of <Ref> (Strengthening of Marcinkiewicz-Zygmund's Inequality for p=4)
Let Z = ∑_x ξ(x). Since the ξ(x) are (in particular) 2-wise independent with zero mean, we get
E(∑_x ξ(x))^2 = ∑_{x,y} E(ξ(x)·ξ(y)) = ∑_{x=y} E(ξ(x)·ξ(y)) = ∑_x Var(ξ(x))
(the summation taken over x, y ∈ 𝒳). The fourth moment is somewhat more complicated:
E(∑_x ξ(x))^4 = ∑_{x_1,x_2,x_3,x_4} E(ξ(x_1)·ξ(x_2)·ξ(x_3)·ξ(x_4))
= ∑_{x_1=x_2=x_3=x_4} E(ξ(x_1)·ξ(x_2)·ξ(x_3)·ξ(x_4)) + 3·∑_{x_1=x_2 ≠ x_3=x_4} E(ξ(x_1)·ξ(x_2)·ξ(x_3)·ξ(x_4))
= ∑_x E(ξ(x)^4) + 3·∑_{x≠y} E(ξ(x)^2)·E(ξ(y)^2)
= 3·(∑_x E(ξ(x)^2))^2 - 2·∑_x E(ξ(x)^4).
The second equality follows because whenever some ξ(x) occurs in an odd power, for example when x=x_1 ≠ x_2=x_3=x_4, the expectation is zero (this way one can simplify and bound also higher moments, see <cit.>). It remains to estimate the first moment.
§.§ Proof of <Ref> (Characterizing smooth min-entropy) Suppose that H_∞^δ(X) ⩾ k. Then, by definition, there is Y such that H_∞(Y) ⩾ k and ∑_x: P_X(x)>P_Y(x)(P_X(x)-P_Y(x)) ⩽ δ. Since all the summands are positive and P_Y(x) ⩽ 2^-k, restricting the sum to those x with P_X(x) > 2^-k (a subset of the summation range, since P_Y(x) ⩽ 2^-k) yields ∑_x: P_X(x)>2^-k(P_X(x)-P_Y(x)) ⩽ δ. Again, since P_Y(x) ⩽ 2^-k, we obtain ∑_x: P_X(x)>2^-k(P_X(x)-2^-k) ⩽ δ, which finishes the proof of the "⟹" part. Assume now that δ' = ∑_x∈𝒳max(P_X(x)-2^-k,0) ⩽ δ. Note that ∑_x∈𝒳max(P_X(x)-2^-k,0) + ∑_x∈𝒳max(2^-k-P_X(x),0) = ∑_x∈𝒳|P_X(x)-2^-k| ⩾ 2∑_x∈𝒳max(P_X(x)-2^-k,0), where the inequality uses ∑_x∈𝒳(P_X(x)-2^-k) = 1-|𝒳|·2^-k ⩽ 0 (valid for k ⩽ log|𝒳|), and therefore ∑_x∈𝒳max(2^-k-P_X(x),0) ⩾ δ'. By this observation we can construct a distribution Y by shifting δ' of the mass of P_X from the set S^-={x:P_X(x)>2^-k} to the set {x:2^-k⩾ P_X(x)} in such a way that P_Y(x) ⩽ 2^-k for all x. Thus H_∞(Y) ⩾ k, and since a δ' fraction of the mass is shifted and redistributed we have d_1(X;Y) ⩽ δ'. This finishes the proof of the "⟸" part.

§.§ Proof of <Ref> ((Mixed) moments of slice advantages) For brevity denote Δ(x) = P_X(x)-P_Y(x) and 𝖠𝖽𝗏^σ_S_i=𝖠𝖽𝗏^σ_S_i(X;Y). Note that by <Ref>, applied to the family f_x = Δ(x)σ(x)1_S_i(x) (which is 4-wise independent), we have 𝔼𝖠𝖽𝗏^σ_S_i ⩾ 3^-1/2(∑_xVar(f_x))^1/2 = 3^-1/2(T^-1∑_xΔ(x)^2)^1/2, where Var(f_x) = Δ(x)^2·ℙ[h(x)=i] = T^-1Δ(x)^2 (taking T a power of two, so that 2^t = T); this is where the factor T^-1/2 in the first inequality claimed in the corollary comes from. In turn, again by <Ref>, we have 𝔼(𝖠𝖽𝗏^σ_S_i)^2 = T^-1·∑_xΔ(x)^2. Since this holds for any i, by Cauchy–Schwarz we get for any i,j: 𝔼(𝖠𝖽𝗏^σ_S_i·𝖠𝖽𝗏^σ_S_j) ⩽ √(𝔼(𝖠𝖽𝗏^σ_S_i)^2·𝔼(𝖠𝖽𝗏^σ_S_j)^2) ⩽ T^-1·∑_xΔ(x)^2, which proves the second inequality in the corollary. | http://arxiv.org/abs/1704.08678v2 | {
"authors": [
"Krzysztof Pietrzak",
"Maciej Skorski"
],
"categories": [
"cs.CR",
"cs.IT",
"math.IT"
],
"primary_category": "cs.CR",
"published": "20170427174829",
"title": "Non-Uniform Attacks Against Pseudoentropy"
} |
Indiana University Bloomington, Astronomy Department, Swain West 319, 727 East Third Street, Bloomington, IN 47405-7105,USA [email protected] Department, Indiana University Bloomington, Swain West 319, 727 East Third Street, Bloomington, IN 47405-7105,USA [email protected]. Osservatorio Astronomico di Trieste, via G.B. Tiepolo 11, 34131, Trieste, Italy [email protected] We measured phosphorus abundances in 22 FGK dwarfs and giants that span –0.55 < [Fe/H] < 0.2 using spectra obtained with the Phoenix high resolution infrared spectrometer on the Kitt Peak National Observatory Mayall 4m telescope, the Gemini South Telescope, and the Arcturus spectral atlas. We fit synthetic spectra to the P1 feature at 10581 Å to determine abundances for our sample. Our results are consistent with previously measured phosphorus abundances; the average [P/Fe] ratio measured in [Fe/H] bins of 0.2 dex for our stars are within ∼ 1 σcompared to averages from other IR phosphorus studies. Our study provides more evidence that models of chemical evolution using the results of theoretical yields are under producing phosphorus compared to the observed abundances. Our data better fit a chemical evolution model with phosphorus yields increased by a factor of 2.75 compared to models with unadjusted yields. We also found average [P/Si] = 0.02 ± 0.07 and [P/S] = 0.15 ± 0.15 for our sample, showing no significant deviations from the solar ratios for [P/Si] and [P/S] ratios. § INTRODUCTION Abundance measurements of the light elements, from carbon to argon, are important in the study of nucleosynthesis and galactic chemical evolution. The light elements have been used to study differences in Galactic structure, multiple populations in stellar clusters, and used as proxies for metallicity in extragalactic studies. The even elements in particular, including O, Mg, Si, S, and Ca, and their nucleosynthesis in massive stars has been studied in detail <cit.>. These even elements are well understood in the context of hydrostatic burning and their yields, coupled with chemical evolution models, successfully predict observed abundance trends (e.g. ). However, the odd light elements are thought to be produced via other processes and few abundance measurements exist for elements such as P and Cl in stars <cit.>. This work focuses on the odd element phosphorus, an important element for life <cit.> with still uncertain nucleosynthesis production mechanisms.Phosphorus has only one stable isotope, ^31P, and is thought to be produced mostly in massive stars through neutron capture on Si isotopes, specifically ^30Si, in hydrostatic carbon and neon burning shells <cit.>. Phosphorus abundances have been observed using atomic infrared lines at approximately 10500-10820 Å and atomic lines in the near-UV at 2135-2136 Å. Phosphorus abundances have been derived in planetary nebulae using P3 lines located at ∼ 7875 Å <cit.> and in damped Lyman alpha systems using ionized phosphorus lines, such as P2 line at 1152 Å <cit.>. Anomalously high phosphorus abundances have been measured using optical phosphorus features in blue horizontal branch stars <cit.>, which may be due to diffusion of heavy elements, including P, in the photosphere <cit.>. Molecular forms of phosphorus, such as PO, PN, and CP, have been detected and used to understand phosphorus chemistry in the interstellar medium using features at millimeter and radio wavelengths. 
For example, phosphorus molecules have been detected in the interstellar medium <cit.> and in star forming regions <cit.>. Phosphorus molecules have also been found in the circumstellar envelopes of evolved stars <cit.>. Finally, the diffuse interstellar medium has been measured using P2 lines at 1125 Å and 1533 Å <cit.>. Solar phosphorus measurements of the infrared lines using 3-D solar models found an abundance of A(P)=5.46 ± 0.04 <cit.>. The infrared lines have been used to determine phosphorus abundances in F and G spectral type stars between –1.0 < [Fe/H] < 0.2 <cit.>. The P abundance differences in solar twins were examined by <cit.> and the P abundance of Procyon was determined by <cit.>. These lines weaken for stars with low abundance and are typically only observed in stars with [Fe/H] ≳ –1.0. The near-UV phosphorus lines, however, can be detected and measured in metal poor stars <cit.>. These features have been used to measure P abundances in FGK type stars with metallicities between –3.3 < [Fe/H] < –0.2 <cit.>. Phosphorus abundances have also been compared to alpha elements; <cit.> found a constant phosphorus to sulfur ratio of [P/S]= 0.10 ± 0.10 over their metallicity range. The odd light elements, such as Na, Al, and P, are thought to be produced through neutron capture, and their abundances should be sensitive to the neutron flux in massive stars during hydrostatic shell burning <cit.>. The neutron flux depends on metallicity, so the ratio of phosphorus to elements that do not depend on the neutron flux, such as the alpha elements, is expected to decrease towards low metallicity. The constant [P/S] ratio found by <cit.> suggests P production is insensitive to the neutron excess and that other processes may be important in the nucleosynthesis of phosphorus. This effect is more pronounced at lower metallicities of [Fe/H] < –1.0, and more measurements would confirm the constancy of the [P/S] ratio with metallicity <cit.>. Current models of phosphorus production do not match the observed abundances. Chemical evolution models currently predict subsolar phosphorus abundances at [Fe/H] = 0 for the solar neighborhood <cit.>. A possible exception is the result obtained by <cit.>. Phosphorus yields would have to be increased by factors of 1.5-3 to match the observed [P/Fe] ratios measured in field stars <cit.>. Additional P production mechanisms have been proposed, such as α-particle capture on ^27Al or proton capture on ^30Si, to resolve the abundance discrepancy <cit.>. Additional abundance measurements are therefore necessary to help resolve the issue of the nucleosynthesis of phosphorus. It is uncertain by how much the yields must be increased for the models to match observations, and additional measurements are needed to understand whether the [P/Fe] versus metallicity slope of the chemical evolution model accurately fits the data. We measured phosphorus in 22 FGK field dwarfs and giants with metallicities in the range –0.5 ≲ [Fe/H] ≲ 0.2 to address these questions. Section <ref> describes the data reduction. The methodology used to determine phosphorus abundances is discussed in section <ref>. The results are compared to chemical evolution models in section <ref> and the final conclusions are summarized in section <ref>. § OBSERVATIONS AND DATA REDUCTION Our sample consists of 19 stars observed using the high resolution infrared spectrometer Phoenix on the Kitt Peak National Observatory 4m Mayall telescope and two stars from proposal ID GS-2016B-Q-76 on the Gemini South Telescope, also observed using Phoenix.
We also measured the phosphorus abundance of Arcturus using the infrared atlas <cit.>. The full list of stars observed is found in Table <ref>. Target stars with known atmospheric parameters were chosen because the available wavelength range is narrow (our spectral range spanned only ∼ 50 Å) and contained too few spectral features to determine atmospheric parameters independently. Also, the targeted P1 features are strongest in stars with effective temperatures between ∼4500 K and ∼7000 K. Our dwarf star sample was selected from <cit.>, <cit.>, and <cit.>. Atmospheric parameters for the sample of dwarf stars were adopted from <cit.>, and additional abundances for Si and S are available from <cit.> for a portion of our sample. Additional giant stars with appropriate atmospheric parameters were chosen from the PASTEL catalogue <cit.>. For all stars, the atmospheric parameters were determined spectroscopically in each literature source (given in Table <ref>). Finally, only targets with bright J-magnitudes (J ≲ 7 mags), obtained from 2MASS <cit.>, were chosen for observations to ensure a sufficient signal-to-noise ratio to measure weak phosphorus features.

Table: Summary of Phoenix Observations

HD Number | UT Date      | Telescope | Spectral Type(a) | J (Mag)(b) | S/N
20794     | 2016 Dec 15  | 1         | G8III            | 3.032      | 280
46114     | 2016 Dec 15  | 1         | G8V              | 6.266      | 250
107950    | 2015 June 3  | 2         | G6III            | 3.476      | 110
120136    | 2015 June 6  | 2         | F6IV             | 3.620      | 170
121560    | 2015 June 3  | 2         | F6V              | 5.137      | 160
124819    | 2015 June 6  | 2         | F5               | 6.610      | 160
126053    | 2015 June 3  | 2         | G1.5V            | 5.053      | 140
136925    | 2015 June 4  | 2         | G0               | 6.751      | 140
140324    | 2015 June 3  | 2         | G0IV/V           | 6.295      | 80
148049    | 2015 June 4  | 2         | F8               | 6.355      | 170
151101    | 2015 June 6  | 2         | K0III            | 2.913      | 170
152449    | 2015 June 3  | 2         | F6V              | 6.823      | 60
160507    | 2015 June 6  | 2         | G5III            | 5.169      | 170
163363    | 2015 June 3  | 2         | F8               | 6.788      | 70
167588    | 2015 June 4  | 2         | F8V              | 5.363      | 210
174160    | 2015 June 4  | 2         | F7V              | 5.244      | 270
186379    | 2015 June 4  | 2         | F8V              | 5.741      | 190
186408    | 2015 June 3  | 2         | G1.5Vb           | 5.090      | 190
191649    | 2015 June 3  | 2         | G0               | 6.323      | 150
193664    | 2015 June 6  | 2         | G3V              | 4.879      | 210
194497    | 2015 June 4  | 2         | F6V              | 6.504      | 120

(a) spectral types from the SIMBAD database. (b) J magnitudes from 2MASS <cit.>. Telescopes: (1) Gemini South Telescope; (2) KPNO Mayall 4m Telescope.

The program stars were observed with the Phoenix infrared spectrometer <cit.> at the f/16 focus of the KPNO Mayall 4 meter telescope in 2015 June and with Phoenix on Gemini South in December 2016. The 0.7 arcsecond, four-pixel slit was used, resulting in a spectral resolution of ∼50,000 with both telescopes. Echelle order 53 was selected with a narrow band order sorting filter to observe the wavelength range 1.0570 μm - 1.0620 μm. Standard observing procedures for infrared observations were followed <cit.>; each object was nodded between two slit positions in an 'ABBA' pattern. Dark and flat field images were observed at the beginning of each night. The data reduction was accomplished using the IRAF software suite [IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.]. The dark images were combined using a median filter with a sigclip rejection algorithm and the flat images were median combined with an avsigclip rejection algorithm. The combined dark image was subtracted from the combined flat image. Sky contribution, bias, and detector blemishes were removed by subtracting objects in one nod position from the next position (the 'A' slit position image subtracted from the 'B' slit position image and vice versa).
After sky subtraction, each object was flatfielded using a dark corrected, normalized flatfield image. Spectral type A and B standard stars were observed for telluric line removal, however we found no telluric lines in our spectral region. An atlas of Arcturus also shows no telluric contamination in this spectral range, therefore no telluric correction was needed <cit.>. Wavelength calibrations were done using the stellar spectral lines, as few comparison lamp emission lines are available in our narrow wavelength range. § ABUNDANCE ANALYSIS§.§ Abundance AnalysisAbundances were obtained using MOOG spectral synthesis software <cit.> (Version 2014) with MARCS model atmospheres <cit.>. Atmospheric parameters were taken from the literature; a full list of atmospheric parameters and their sources is found in Table <ref>. Additionally, the synthetic spectrum of HD 120136 was broadened to reflect the rotational velocity of vsini = 15.4 km/s <cit.>.l c c c c c l l l l 0ptAtmospheric Parameters and AbundancesStar T_efflog g [Fe/H] ξRef. A(Si) A(P) A(S)a [P/Fe]HD Number (K) (km s^-1) 20794 5372 4.50 –0.46 0.79 2 7.30 ± 0.08 5.36 ± 0.17 0.36 46114 5129 3.50 –0.44 0.95 2 7.17 ± 0.06 5.04 ± 0.09 0.02 107950 5100 2.50 –0.13 1.7 3 7.50 ± 0.07 5.39 ± 0.08 0.11 120136 6339 4.19 0.23 1.7 4 7.75 ± 0.07 5.68 ± 0.09 –0.01 121560 6139 4.29 –0.39 1.36 5 7.09 ± 0.06 5.14 ± 0.04 7.000.07 124819 6174 4.21 –0.27 1.55 5 7.32 ± 0.06 5.29 ± 0.06 7.170.10124897b 4286 1.66 –0.52 1.75 1 7.32 ± 0.04c 5.11 ± 0.150.22 1260535691 4.44–0.361.07 57.15 ± 0.06 5.11 ± 0.06 0.06136925 5732 4.32 –0.29 0.98 5 7.33 ± 0.08 5.37 ± 0.07 7.240.11 140324 5913 4.05 –0.3 1.27 5 7.24 ± 0.07 5.31 ± 0.08 7.140.03 148049 6071 4.16 –0.33 1.37 5 7.28 ± 0.06 5.26 ± 0.06 7.140.13 151101 4535 2.1 –0.14 2.28 3 7.45 ± 0.11 5.36 ± 0.12 0.04 152449 6154 4.15 –0.03 1.48 5 7.55 ± 0.07 5.46 ± 0.08 7.410.03160507 4790 2.648 –0.23 1.57 6 7.47 ± 0.10 5.49 ± 0.13 0.26163363 5970 3.89 –0.04 1.48 5 7.47 ± 0.07 5.55 ± 0.08 7.380.13 167588 5845 3.92 –0.39 1.39 5 7.22 ± 0.05 5.22 ± 0.06 7.080.15 174160 6417 4.36 –0.01 1.5 5 7.4 ± 0.05 5.39 ± 0.04 7.28–0.06 186379 5899 4.02 –0.37 1.4 5 7.21 ± 0.05 5.19 ± 0.04 7.080.10 186408 5787 4.26 0.05 1.21 5 7.65 ± 0.06 5.54 ± 0.05 7.550.03191649 6081 4.26 –0.2 1.46 5 7.29 ± 0.06 5.26 ± 0.06 7.280 193664 5915 4.43 –0.13 1.16 5 7.4 ± 0.06 5.24 ± 0.05 7.33–0.09 194497 6198 3.78 –0.43 1.89 5 7.18 ±0.06 5.14 ± 0.05 7.050.11 aS abundances from <cit.> bα Boo cSi abundance from <cit.> References: (1) ; (2) ; (3) ; (4) ; (5) ; (6)Atomic line data for the phosphorus transitions shown in Table <ref>, was obtained from <cit.> and is identical to that used in <cit.>. Atomic line data for a feature Si1 was taken from the Kurucz database[http://kurucz.harvard.edu]. The gf value for the Si1 line at 10582 Å was refined by fitting the solar spectrum obtained from the <cit.> atlas and the infrared Arcturus atlas <cit.>. We adopted solar atmosphericparameters of T= 5870 K, log g= 4.44, and a microturbulence of 0.75 kms^-1. Atmospheric parameters for Arcturus were adopted from <cit.>. Silicon abundances for the solar spectrum of A(Si) = 7.51 and A(Fe) = 7.50 are adopted from <cit.>, the silicon abundance for Arcturus is from <cit.>, the solar phosphorus abundance of A(P) = 5.46 is adopted from <cit.>, and the solar sulfur abundance of A(S) = 7.16 is from <cit.>. The solar spectrum synthesis was performed using the atomic data from Table <ref> and overplotted on the solar spectrum in Figure <ref>. 
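With the solar reference abundances just quoted, the bracket notation used throughout reduces to simple arithmetic: [P/Fe] = (A(P) − A(P)_⊙) − [Fe/H], with A(P)_⊙ = 5.46. A tiny helper makes this explicit (the function is ours, not from the paper; the sample values are the HD 20794 entries from the atmospheric-parameters table above):

```python
# [P/Fe] from an absolute phosphorus abundance A(P) and the star's [Fe/H].
A_P_SUN = 5.46  # adopted solar phosphorus abundance, quoted above

def p_over_fe(a_p: float, fe_h: float) -> float:
    """Convert A(P) to the iron-relative ratio [P/Fe]."""
    return (a_p - A_P_SUN) - fe_h

print(round(p_over_fe(5.36, -0.46), 2))   # -> 0.36, matching the table entry
```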
The best fit was determined by eye, and the log gf value for the Si line was increased by 0.07 to fit both the solar and Arcturus spectra. The updated log gf value for the Si1 line is listed in Table <ref>.

Table: Line List

Element | λ_air (Å)  | χ (eV) | log gf | Transition
P I     | 10581.569  | 6.99   | 0.45   | 4s^4P_5/2 – 4p^4D^0_7/2
Si I    | 10582.16   | 6.222  | -1.099 | 4p^1D_2 – 6s^1P_1
P I     | 10596.900  | 6.940  | -0.21  | 4s^4P_1/2 – 4p^4D^0_1/2

Phosphorus abundances were determined by comparing synthetic spectra to the observed spectra and determining the best fit by eye. Examples of final fits to the spectra are shown in Figure <ref>. To test the reliability of abundances determined by eye, abundances were re-determined by minimizing the χ^2 between synthetic spectra and the corresponding observation. We found the abundances determined from the χ^2 minimization technique were consistent with those found by eye; the average difference was 0.01 ± 0.05 dex. The largest outlier is the star HD 163363, with a difference of –0.12 dex between the χ^2 minimization abundance and the by-eye abundance. This star has a low signal-to-noise ratio of 70, which may be the cause of the discrepancy. The phosphorus abundance determined from the 10581 Å line was chosen as the final phosphorus abundance because the other feature, at 10596 Å, was often too weak to yield reliable abundances. For dwarf stars with high S/N measurements, such as HD 174160 (in Fig <ref>), HD 193664, HD 186379, and HD 167588, the P abundance from the 10596 Å line agreed with that from the P1 feature at 10581 Å. However, the fit to the P1 feature at 10596 Å does not match the observed Arcturus spectrum in Figure <ref>. This is likely due to a blend with a weak, unidentified line in cool giants. Final phosphorus abundances are plotted in Figure <ref>. We find no significant dependence of the phosphorus abundance on temperature for our stars, also shown in Figure <ref>. §.§ Abundance Uncertainty Uncertainties in the phosphorus abundances were determined in two ways: first from uncertainties in the atmospheric parameters, and second from the fit to the phosphorus features. The uncertainty on the abundance was computed by determining the range of acceptable models by eye; the uncertainty determined this way was typically 0.05 dex. This is consistent with the uncertainty in the χ^2 minimization determined from the abundance difference at the p = 0.05 significance level, typically 0.07 dex. To calculate the uncertainties due to the atmospheric parameters, we re-computed the abundances with the model atmosphere varied within these uncertainties, fit synthetic spectra to the observed spectra to re-derive abundances, and added in quadrature all the differences between the abundances in Table <ref> and those computed with the varied atmospheric models, one term for each atmospheric parameter. The average uncertainties in the model parameters for the dwarf stars from <cit.> were δT= 49 ± 14 K, δlog g= 0.04 ± 0.01, δ[Fe/H]= 0.05 ± 0.01, and δξ= 0.12 kms^-1. Due to the uniformity of these uncertainties, the corresponding abundance uncertainties were calculated for three stars from <cit.>; the final uncertainties found from fitting synthetic spectra were 0.03 dex from temperature, 0.01 dex from [Fe/H], and 0.01 dex from the microturbulence. The abundance uncertainty from varying the log g was consistent with zero. Added in quadrature, the total uncertainty due to the atmospheric parameters in the phosphorus abundance for stars from <cit.> was 0.03 dex.
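The quadrature combination just described amounts to a one-liner; the sketch below uses the representative per-parameter values quoted above for the Bensby et al. dwarfs together with a typical 0.05 dex fitting uncertainty (our illustrative script, not code from the paper):

```python
import math

# Combine the independent error terms in quadrature:
# sigma_total = sqrt(sum_i sigma_i^2), in dex.
atm_terms = {"Teff": 0.03, "[Fe/H]": 0.01, "xi": 0.01, "logg": 0.0}
sigma_atm = math.sqrt(sum(v**2 for v in atm_terms.values()))
print(round(sigma_atm, 2))                   # -> 0.03 dex, as quoted above

sigma_total = math.sqrt(sigma_atm**2 + 0.05**2)  # add the fitting term
print(round(sigma_total, 2))                 # -> 0.06 dex for a typical dwarf
```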
Stars with atmospheric parameters not from <cit.> had larger uncertainties in their atmospheric parameter with averages of δT= 85 ± 11, δlog g= 0.17 ± 0.06, δ[Fe/H]= 0.09 ± 0.01, and δξ= 0.20 kms^-1 <cit.>. The uncertainties from the atmospheric parameters were calculated for each of these stars individually and average uncertainties were found to be 0.08 dex for temperature, 0.04 dex for log g, 0.02 for [Fe/H] changes, and the microturbulence caused an uncertainty 0.01. For all stars, the uncertainty were added together in quadrature with uncertainties in the fit and the final total uncertainties are found in Table <ref>. The uncertainties on [P/Fe] include both the uncertainty on the phosphorus abundance and on [Fe/H]; ± 0.05 dex uncertainty for stars from <cit.> and ± 0.09 dex uncertainty from other sources <cit.>, as listed in Table <ref>.Our abundance determinations were also performed with the assumptions of LTE and 1-D MARCS atmospheric models, plane parallel models for dwarf stars and spherical models for giants. No study of NLTE effects on the phosphorus lines is available, however <cit.> suggests that NLTE effects should be minimal because the phosphorus lines are expected to behave similarly to S1 lines with similar transitions. Specifically, LTE is approximately valid for the 8693 Å (4p^5P_3–4d^5D^0_3) and 8694 Å lines S1 lines (4p^5P_3–4d^5D^0_4) <cit.>. The effects of 3-D stellar models compared to 1-D were ∼ 0.03 when calculating the phosphorus abundance for the sun <cit.>.§.§ Literature ComparisonsOur silicon abundances can be compared to Si measurements previously derived in 15 stars in common with <cit.>. <cit.> had derived different metallicities than <cit.> and therefore reported [Si/Fe] values reflected both differences in Si and Fe abundances between each sample. The differences in metallicity were removed between the two studies and the A(Si) values were compared directly. The difference between our measurements and those from <cit.> is –0.03 ± 0.11 dex indicating agreement with scatter. Finally, the star HD 120136 was also studied by <cit.> and our result of [P/Fe] = –0.01 ± 0.09 is consistent with their measurement, [P/Fe]= 0.03 ± 0.08. The two stars, HD 20794 and HD 46114 are consistent with silicon abundances derived by <cit.>. Our abundance for HD 20794 is [Si/Fe] = 0.25 ± 0.12 and the abundance for HD 46114 is 0.1 ± 0.08. The abundance from <cit.> for HD 20794 is 0.22 ± 0.22 and the abundance for HD 46114 is 0.09 ± 0.06. c c c c 0ptAbundance ComparisionBin This Work Caffau+ 2011 Jacobson+ 2014 [Fe/H] <[P/Fe]> <[P/Fe]> <[P/Fe]>0.2 - 0.4 –0.01 –0.02 ± 0.07 0.0 - 0.2 0.03 –0.02 ± 0.05 –0.2 - 0.0 0.03 ± 0.08 0.04 ± 0.09 –0.14 –0.4 - –0.2 0.14 ± 0.06 0.12 ± 0.10 –0.17 ± 0.03 –0.6 - –0.4 0.18 ± 0.13 0.32 ± 0.02 0.16 ± 0.09Uncertainties are standard deviation from abundances in bin and not due to systematic errors. Numbers without reported standard deviations had a bin size of one star Our measured phosphorus abundances are consistent with the two data sets from <cit.> and <cit.> in the high metallicity range (see Figure <ref>). Our average [P/Fe] abundance is 0.10 ± 0.10 dex and the average abundance from <cit.> is [P/Fe] = 0.08 ± 0.15 in the same metallicity range. We also found the averages for different metallicity bins are also consistent, as shown in Table <ref>. The abundances of <cit.> in the high metallicity range are offset from our abundances near the metallicity [Fe/H] ∼–0.2, where their average [P/Fe] measurement is –0.14. 
Our sample contains six stars between –0.20 < [Fe/H] < 0.25, the average abundance of this sample is 0.04 ± 0.07 with the lowest abundance for HD 193664 at [P/Fe] = –0.09 ± 0.08. The UV P1 lines at the high metallicity range saturate and the uncertainty of <cit.> measurements is ∼ 0.2 dex. The offset between the UV and IR sample is within measurement errors in both samples. Our abundance measurements in the higher metallicity bin of –0.60 < [Fe/H] < –0.40 are in close agreement, as shown in Table <ref>. § DISCUSSION §.§ Phosphorus Galactic Chemical Evolution We compare our phosphorus abundances, combined with those of <cit.> and <cit.>, to two chemical evolution models from <cit.>, shown in Figure <ref>. The first chemical evolution model uses yields from <cit.> (model 6 from ) and the second model uses the same yields increased by a factor of 2.75 (model 8 from ). Additionally, both models include contributions from hypernovae which are necessary to fit measurements at low metallicities but do not significantly contribute to P production at the high metallicity end <cit.>. Our results are fit by the enhanced yield model better than the original yield model. To test this, the chemical evolution model's predicted [P/Fe] values were compared to values derived in our sample and abundances from <cit.>. We found that χ ^2 was –17.4 for the model with the original yields and 4.4 for the model with yields increased by 2.75. We therefore find our results are consistent with previous measurements of phosphorus and find chemical evolution models with the predicted yields underproduces P compared to observations.§.§ Phosphorus and the Alpha Elements Additionally, we examined the ratios of [P/Si] and [P/S], the two elements nearest to phosphorus in atomic number.Available sulfur abundances for our stars have also been taken from <cit.>. The [S/Fe] ratios from <cit.> have been adjusted for two effects; first we adjusted their [S/Fe] ratios to account for differences in derived [Fe/H] abundances between our atmospheric parameters from <cit.> and those of <cit.>. Second, the adopted solar abundance from <cit.> (A(S)=7.34 dex) is different from the adopted solar abundance from <cit.> of A(S) = 7.16 dex. We placed the <cit.> results on the same scale. Finally, we note abundances from the S1 lines measured from multiple 8 at ∼ 6750 Å and at ∼ 6050 Å from multiple 10 used in <cit.> were calculated using LTE analysis and these lines likely do not suffer from NLTE effects <cit.>.We find that the phosphorus to silicon ratio is consistent with the solar ratio over our metallicity range as shown in Figure <ref>. The [P/Si] ratio for our abundances is plotted in the upper panel of Figure <ref> and we find an average ratio of [P/Si]=0.02 ± 0.07. [P/S] ratios found in <cit.> are compared to our results plotted in the lower panel of Figure <ref>. We find an average [P/S] ratio of 0.15 ± 0.15, in agreement with the result from <cit.>, who found a ratio of [P/S]=0.10 ± 0.10 for their sample. Combining our two samples leads to a total ratio of [P/S]=0.13 ± 0.12. Our [P/Si] and [P/S] results are also compared to chemical evolution models in Figure <ref>. The chemical evolution model for phosphorus was adopted from <cit.> and the yields for S was taken from <cit.>. The Si yields were adopted from <cit.> which are the same yields of <cit.> for solar metallicity. 
Additionally, the nearly constant [P/Si] and [P/S] ratios over the metallicity range of –0.45 < [Fe/H] < 0.2 may be due insensitivity of the neutron excess that is thought to form the odd elements. The neutron excess increases with increasing metallicity, as more neutron rich material is present in the star. An increase with increasing [Fe/H] is expected because the formation of odd elements like Na and Al is partially due to neutron capture unlike the alpha elements <cit.>. Our constant [P/S] ratio is in agreement with <cit.>. However, ratios of [Na,Al/Mg] do not sharply decrease until [Fe/H] ∼ –1 <cit.>. The strong decrease with decreasing metallicity in [Na,Al/Fe] ratio is most prominent at [Fe/H] ∼ –1 in models as well (e.g. Fig 13 from ). Measurements of phosphorus in stars with low metallicities would determine if the neutron excess is important at lower metallicities and if the dependence of [P/Fe] over metallicity is similar to the other odd elements. The ^30Si isotope, which is thought to be the seed for P production, has also been measured to be higher than predictions from chemical evolution models. The infrared study of SiO in M giant stars found isotope ratios of ^28Si/^30Si to be between ∼20-30 for their sample of six stars <cit.>, while chemical evolution models give the ^28Si/^30Si ∼ 34 at a [Fe/H]= 0 in the solar neighborhood <cit.>. While the predicted ratios are lower than measured by a factor of 1.1 - 1.3, this difference is not extreme. For an A(^28Si)= 7.51, a ratio of 25 would give an A(^30Si)= 6.11 while a ratio of 35 gives an A(^30Si)= 5.97.§.§ Possibly Anomalous Stars HD 20794: This star has a slightly high [P/Fe] ratio of 0.36 ± 0.18 for a metallicity of [Fe/H] = –0.46 compared to the average [P/Fe] of 0.18 in the metallicity bin –0.6 < [Fe/H] < –0.4 and the average of 0.14 in the metallicity bin –0.4 < [Fe/H] < –0.2 (shown in Table <ref>). The high [P/Fe] ratio may be due to uncertain atmospheric parameters; <cit.> cites the uncertainty in the temperature for this star at ±108K and the uncertainty in the log g of 0.21 <cit.>. Adding the abundance uncertainties in quadrature for this star gave an uncertainty of 0.17 dex. The lower end of the 1 σ uncertainty range would place the phosphorus abundances near other [P/Fe] ratios. The alpha elements for this star are [Mg/Fe] =0.33 ± 0.27, [Si/Fe] = 0.22 ± 0.23, and [Ca/Fe] = 0.13 ± 0.48 from <cit.> and this work found a [Si/Fe] = 0.25 ± 0.12. These high abundances and high velocity for this star make it a probable member of the thick disk <cit.>. The phosphorus abundance is correlated with the alpha elements in the high metallicity range, as shown in Figure <ref>, and therefore the [P/Fe] ratio this star may also be high. HD 163363: This star has [P/S] of 0.35 ± 0.12, which is high when compared to other stars near [Fe/H] ∼ 0. The high abundance is due to a low [S/Fe] abundance from <cit.> rather than to a high phosphorus abundance, as this star has a [P/Fe] = 0.13 ± 0.09. The low [S/Fe] abundance contrasts with other alpha element abundances in HD 163363: [Mg/Fe] = 0.15 ± 0.03 and [Si/Fe] = 0.01 ± 0.05 from <cit.> and our [Si/Fe] ratio of 0.01 ± 0.09. The high sulfur abundance may be measurement error, as it is only ∼ 3 σ from a [P/S]=0, and less than 2 σ away from the average value of [P/S] = 0.15 ± 0.15 found within our sample. HD 193664: The star has a slightly low phosphorus abundance at [P/Fe] = –0.09 ± 0.07. 
The other alpha elements are [Mg/Fe] = 0 ± 0.03, [Si/Fe] = 0.02 ± 0.05 from <cit.>, and our [Si/Fe] = 0.02 ± 0.08. The [P/Fe] abundance is low considering the other alpha elements but it is consistent with the quoted uncertainties; it is 1.7 σ away from the average value of [P/Fe] = 0.03 from its metallicity bin (–0.2 < [Fe/H] < 0) in Table <ref>.HD 194497: This star has [P/S] = 0.47 ± 0.11. This is due to a low sulfur abundance, since we find a normal [P/Fe] abundance. The [P/S] is 2.9 σ from the average value of [P/S] = 0.15 ± 0.15. Other alpha abundances are low in this star; [Mg/Fe] = –0.01 ± 0.03 and [Si/Fe] = 0.03 ± 0.05 from <cit.> and our [Si/Fe] = 0.10 ± 0.08. § CONCLUSION* We have derived phosphorus and silicon abundances for 22 stars using infrared spectra observed with Phoenix on the KPNO 4m Mayall telescope, Gemini South Telescope, and for Arcturus using spectra from the infrared Arcturus spectral atlas <cit.>.* We found no systematic difference between [P/Fe] abundances in dwarfs and giants.* Our phosphorus abundances results are consistent with the other studies, such as <cit.> and <cit.>. We find an average [P/Fe] ratio of 0.10 ± 0.10 in the metallicity reange –0.55 < [Fe/H] < 0.2. Our results are in agreement with <cit.>, who found an average abundance of [P/Fe] = 0.08 ± 0.15 for their sample withmetallicities between –1.0 < [Fe/H] < 0.2. We also compared [P/Fe] averages to <cit.> in metallicity bins with sizes of 0.2 dex. Our average values in the metallicity bins to <cit.> agree in all metallicity bins. *Our [P/Fe] measurements do not match the results of chemical evolution models with the most recent nucleosynthesis yields. Instead our results were more consistent with a chemical evolution model of phosphorus with yields increased by a factor of 2.75. * We measured Si abundances from a nearby Si1 feature. Our silicon abundances matched literature sources; the average difference for 15 stars with corresponding measurements in <cit.> is –0.03 ± 0.11 dex. We find an average [P/Si] ratio of 0.02 ± 0.07 for our sample of stars* We found a [P/S] ratio of 0.15 ± 0.15 in our sample of stars over the metallicity range –0.55 < [P/Fe] < 0.2, using S abundances from <cit.>. This result is consistent with results from <cit.>, who found [P/S]=0.10 ± 0.10 over their range from –1 < [P/Fe] < 0.2.* Other odd light elements, such as Na and Al, are sensitive to the neutron flux and their abundances with respect to the alpha elements decreases. Our constant [P/Si] and [P/S] ratios may imply phosphorus is produced by other processes than neutron capture, however more abundance measurements at lower metallicty ranges, such as [Fe/H] < –1.0 where the decrease in Na and Al with decreasing metallicity are most pronounced, are needed to explore this hypothesis.§ ACKNOWLEDGEMENTSThis paper is based on observations obtained at the Kitt Peak National Observatory, a division of the National Optical Astronomy Observatories (NOAO). NOAO is operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation. 
This work is also based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil). The Gemini observations were done under proposal ID GS-2016B-Q-76. We are grateful to the Kitt Peak National Observatory and particularly to Colette Salyk for her assistance at the start of the observing run and to Anthony Paat, Krissy Reetz, and Doug Williams during the run. We thank German Gimeno for his assistance with the Gemini South Telescope observing run. We thank the referee E. Caffau for suggested improvements to the manuscript. This research has made use of the NASA Astrophysics Data System Bibliographic Services, the Kurucz atomic line database operated by the Center for Astrophysics, and the Vienna Atomic Line Database operated at the Institute for Astronomy of the University of Vienna. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We thank Eric Ost for implementing the model atmosphere interpolation code. C. A. P. acknowledges the generosity of the Kirkwood Research Fund at Indiana University. G.C. acknowledges financial support from the European Union Horizon 2020 research and innovation programme under the Marie Sk lodowska-Curie grant agreement No 664931. IRAF, MOOG (v2014; ) [Agúndez et al.(2014)]agundez14Agúndez, M., Cernicharo, J., Decin, L., Encrenaz, P., & Teyssier, D. 2014, , 790, L27 [Asplund et al.(2009)]asplund09Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, , 47, 481 [Behr et al.(1999)]behr99Behr, B. B., Cohen, J. G., McCarthy, J. K., & Djorgovski, S. G. 1999, , 517, L135 [Bensby et al.(2014)]bensby14Bensby, T., Feltzing, S., & Oey, M. S. 2014, , 562, A71 [Berzinsh et al.(1997)]berzinsh Berzinsh, U., Svanberg, S., & Biemont, E. 1997, , 326, 412 [Caffau et al.(2007)]caffau07Caffau, E., Steffen, M., Sbordone, L., Ludwig, H.-G., & Bonifacio, P. 2007, , 473, L9 [Caffau et al.(2011)]caffau11sulfurCaffau, E., Ludwig, H.-G., Steffen, M., Freytag, B., & Bonifacio, P. 2011, , 268, 255 [Caffau et al.(2011)]caffau11Caffau, E., Bonifacio, P., Faraggiana, R., & Steffen, M. 2011, , 532, A98 [Caffau et al.(2016)]caffau16Caffau, E., Andrievsky, S., Korotin, S., et al. 2016, , 585, A16[Cescutti et al.(2012)]cescutti12Cescutti, G., Matteucci, F., Caffau, E., & François, P. 2012, , 540, A33 [Chen et al.(2002)]chen02Chen, Y. Q., Nissen, P. E., Zhao, G., & Asplund, M. 2002, , 390, 225[François et al.(2004)]francois04François, P., Matteucci, F., Cayrel, R., et al. 2004, , 421, 613[Fontani et al.(2016)]fontani16Fontani, F., Rivilla, V. M., Caselli, P., Vasyunin, A., & Palau, A. 2016, , 822, L30[Gehren et al.(2006)]gehrenGehren, T., Shi, J. R., Zhang, H. W., Zhao, G., & Korn, A. J. 2006, , 451, 1065 [Gibson et al.(2005)]gibsonGibson, B. K., Fenner, Y., & Kiessling, A. 
2005, Nuclear Physics A, 758, 259 [Gustafsson et al.(2008)]gustafssonGustafsson, B., Edvardsson, B., Eriksson, K., et al. 2008, , 486, 951[Hekker & Meléndez(2007)]hekker Hekker, S., & Meléndez, J. 2007, , 475, 1003 [Hinkle et al.(1995)]hinkle95Hinkle, K., Wallace, L., & Livingston, W. 1995, , 107, 1042 [Hinkle et al.(1998)]hinkle_et_al_1998 Hinkle, K. H., Cuberly, R., Gaughan, N., et al. 1998, Proc. SPIE, 3354, 810[Jacobson et al.(2014)]jacobson14 Jacobson, H. R., Thanathibodee, T., Frebel, A., et al. 2014, , 796, L24[Joyce(1992)]joyce1992 Joyce, R. R. 1992, in ASP Conf. Ser. 23, Astronomical CCD Observing and Reduction Techniques, ed. S. Howell (San Francisco: ASP), 258 [Kato et al.(1996)]kato96Kato, K.-I., Watanabe, Y., & Sadakane, K. 1996, , 48, 601 [Kobayashi et al.(2006)]kobayashi06 Kobayashi, C., Umeda, H., Nomoto, K., Tominaga, N., & Ohkubo, T. 2006, , 653, 1145[Kobayashi et al.(2011)]kobayashi11 Kobayashi, C., Karakas, A. I., & Umeda, H. 2011, , 414, 3231 [Lebouteiller et al.(2005)]lebouteiller05Lebouteiller, V., Kuassivi, & Ferlet, R. 2005, , 443, 509 [Lefloch et al.(2016)]lefloch16Lefloch, B., Vastel, C., Viti, S., et al. 2016, , 462, 3937 [Levshakov et al.(2002)]levshakov02Levshakov, S. A., Dessauges-Zavadsky, M., D'Odorico, S., & Molaro, P. 2002, , 565, 696 [Macía(2005)]macia Macía, E. 2005, Chem. Soc. Rev. 34, 691[Mallik et al.(2003)]mallik03Mallik, S. V., Parthasarathy, M., & Pati, A. K. 2003, , 409, 251 [Meléndez et al.(2009)]melendez09 Meléndez, J., Asplund, M., Gustafsson, B., & Yong, D. 2009, , 704, L66 [Michaud et al.(2008)]michaud08Michaud, G., Richer, J., & Richard, O. 2008, , 675, 1223-1232 [Milam et al.(2008)]milam Milam, S. N., Halfen, D. T., Tenenbaum, E. D., et al. 2008, , 684, 618-625 [Molaro et al.(2001)]molaro01Molaro, P., Levshakov, S. A., D'Odorico, S., Bonifacio, P., & Centurión, M. 2001, , 549, 90 [Nomoto et al.(2013)]nomoto13 Nomoto, K., Kobayashi, C., & Tominaga, N. 2013, , 51, 457 [Otsuka et al.(2011)]otsuka11Otsuka, M., Meixner, M., Riebel, D., et al. 2011, , 729, 39[Outram et al.(1999)]outram99Outram, P. J., Chaffee, F. H., & Carswell, R. F. 1999, , 310, 289 [Pottasch & Bernard-Salas(2008)]pottasch08Pottasch, S. R., & Bernard-Salas, J. 2008, , 490, 715 [Ramírez & Allende Prieto(2011)]ramirez11 Ramírez, I., & Allende Prieto, C. 2011, , 743, 135 [Ramírez et al.(2013)]ramirez13Ramírez, I., Allende Prieto, C., & Lambert, D. L. 2013, , 764, 78 [Reddy et al.(2003)]reddy03Reddy, B. E., Tomkin, J., Lambert, D. L., & Allende Prieto, C. 2003, , 340, 304 [Roederer et al.(2014)]roederer14 Roederer, I. U., Jacobson, H. R., Thanathibodee, T., Frebel, A., & Toller, E. 2014, , 797, 69 [Santos et al.(2013)]santos Santos, N. C., Sousa, S. G., Mortier, A., et al. 2013, , 556, A150 [Sneden(1973)]snedenSneden, C. 1973, , 184, 839 [Soubiran et al.(2010)]soubiran Soubiran, C., Le Campion, J.-F., Cayrel de Strobel, G., & Caillo, A. 2010, , 515, A111 [Spite et al.(2017)]spite17 Spite, M., Peterson, R. C., Gallagher, A. J., Barbuy, B., & Spite, F. 2017, , 600, A26[Skrutskie et al.(2006)]skrutskie06Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, , 131, 1163 [Takeda et al.(2005)]takeda05 Takeda, Y., Hashimoto, O., Taguchi, H., et al. 2005, , 57, 751 [Tsuji et al.(1994)]tsuji94 Tsuji, T., Ohnaka, K., Hinkle, K. H., & Ridgway, S. T. 1994, , 289, 469[Turner & Bally(1987)]turner87Turner, B. E., & Bally, J. 1987, , 321, L75 [Wallace et al.(1993)]wallace93 Wallace, L., Livingston, W. C., & Hinkle, K. 
1993, An atlas of the solar photospheric spectrum in the region from 8900 to 13600 cm^-1(7350 to 11230 Å), NSO technical report ; 93-001. . National Solar Observatory (U.S.), [Wang et al.(2011)]wang Wang, L., Liu, Y., Zhao, G., & Sato, B. 2011, , 63, 1035 [Woosley & Weaver(1995)]woosley95 Woosley, S. E., & Weaver, T. A. 1995, , 101, 181 | http://arxiv.org/abs/1704.08282v1 | {
"authors": [
"Z. G. Maas",
"C. A. Pilachowski",
"G. Cescutti"
],
"categories": [
"astro-ph.SR"
],
"primary_category": "astro-ph.SR",
"published": "20170426183152",
"title": "Phosphorus Abundances in FGK Stars"
} |
Particle physics department, University of Geneva, Switzerland

Right-handed neutrinos: the hunt is on!

Philippe Mermod, on behalf of the SHiP Collaboration

The possibility of the existence of right-handed neutrinos remains one of the most important open questions in particle physics, as they can help elucidate the problems of neutrino masses, matter-antimatter asymmetry, and dark matter. Interest in this topic has been increasing in recent years with the proposal of new experimental avenues by which right-handed neutrinos with masses below the electroweak scale could be detected directly using displaced-vertex signatures. At the forefront of such endeavours, the proposed SHiP proton beam-dump experiment is designed for a large acceptance to new weakly-coupled particles and low backgrounds. It is capable of probing right-handed neutrinos with masses below 5 GeV and mixings several orders of magnitude smaller than current constraints, in regions favoured by cosmology. To probe higher masses (up to 30 GeV), a promising novel approach is to identify displaced vertices from right-handed neutrinos produced in W decays at LHC experiments.

NuPhys2016, Prospects in Neutrino Physics, Barbican Centre, London, UK, December 12–14, 2016

§ INTRODUCTION The phenomenon of neutrino oscillations, showing that neutrinos have masses, provides the first unambiguous microscopic evidence of physics beyond the Standard Model. The electroweak theory was confirmed with the discovery of the Higgs boson in 2012 at the Large Hadron Collider (LHC) and by further measurements of its properties. No signs of new physics have been found so far at the LHC even at its highest collision energies, indicating that, barring neutrino masses, the Standard Model provides a valid description of nature at least up to the TeV scale. It can thus be argued that the greatest challenge faced by experimental particle physics today is to answer the fundamental questions posed by neutrinos: the nature, hierarchy and absolute scale of neutrino masses; the possibility of CP violation in the neutrino sector; and the possible existence of right-handed neutrinos. Remarkably, the hypothesis of three right-handed neutrinos with Majorana masses below the electroweak scale (hereafter termed Heavy Neutral Leptons, or HNLs) in combination with CP violation in the neutrino sector can address at once the three fundamental questions of the origins of neutrino masses, dark matter, and the excess of matter over antimatter in the Universe <cit.> (see Fig. <ref>). Being neutral with respect to the electromagnetic, weak, and strong interactions, HNLs are extremely elusive particles which could manifest themselves only through gravitational interactions and by mixing with neutrinos. This mixing is required to obtain small neutrino masses through the seesaw mechanism but needs to be tiny to generate matter-antimatter asymmetry and to evade existing experimental constraints. This means that HNL production in a laboratory would only be possible at the highest beam intensities <cit.>, in addition to the high beam energies required to access high HNL masses. The small mixing also leads to a long lifetime and a typical signature of a displaced decay. § THE SHIP EXPERIMENT A promising strategy to search for HNLs with masses of the order of the GeV is production through hadron decays at high-intensity proton beam-dump facilities.
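To see why the small mixing implies macroscopically displaced decays, a crude order-of-magnitude toy is enough. We assume a muon-decay-like scaling for the total width, Γ ≈ N_eff · G_F² m⁵ |U|² / (192π³), where the effective channel count N_eff is our own stand-in for the summed decay modes and is not a number taken from the text:

```python
import math

G_F = 1.166e-5      # Fermi constant, GeV^-2
HBAR_C = 1.973e-16  # hbar*c, GeV*m

def decay_length_m(m_gev, u2, p_gev, n_eff=10.0):
    """Lab-frame decay length ~ (p/m)*c*tau of an HNL, in metres (toy)."""
    width = n_eff * G_F**2 * m_gev**5 * u2 / (192 * math.pi**3)  # GeV
    ctau = HBAR_C / width                                        # metres
    return (p_gev / m_gev) * ctau

# A 1 GeV HNL from charm with |U|^2 = 1e-8 and ~20 GeV of momentum:
print(f"{decay_length_m(1.0, 1e-8, 20.0):.1e} m")
# ~1e6 m, vastly longer than a 50 m vessel, so the per-particle decay
# probability is ~ L_vessel / L_decay -- hence the need for ~1e16 charm decays.
```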
Searches for displaced decays of HNLs with masses up to 0.4 GeV were made using neutrino production through pion and kaon decays, the most sensitive to date with the PS191 experiment at CERN <cit.>. Searches were also made using charmed meson decays to access masses up to 2 GeV, the most sensitive to date with the CHARM experiment at CERN <cit.> and the NuTeV experiment at Fermilab <cit.>. For higher HNL masses, up to 75 GeV, the best constraints come from an analysis of LEP1 data with the DELPHI experiment, where an HNL would be produced in a Z decay and detected as either a prompt or a displaced vertex <cit.>. SHiP is a general-purpose fixed-target facility proposed at the CERN SPS to search for particles with very low couplings to the Standard Model <cit.>. The 400 GeV proton beam extracted from the SPS will be dumped on a high density target with the aim of accumulating 2× 10^20 protons on target during 5 years of operation. It will produce a large number of neutrinos through hadron decays, following the same principle as that of the CHARM and NuTeV experiments. In particular, neutrinos from decays of hadrons containing c or b quarks can potentially mix with HNLs with masses up to 5 GeV. The charm production at SHiP, with an expected total of ∼ 5· 10^16 neutrinos produced in charm decays, largely surpasses that of any other existing or planned facility, allowing to probe very small coupling strengths and resulting in the HNLs, if produced, to travel very large distances until they decay. In its current design, the SHiP experiment comprises a target followed by a hadron absorber, a muon shield, a 50 m long, 5-10 m wide decay volume and a spectrometer similar to the LHCb detector, as shown in Fig. <ref>. The active muon shield is a set of magnets designed to minimise the flux of muons entering the vessel while allowing to have the vessel as close as possible to the target <cit.>. The experiment as a whole is optimised to reconstruct and identify decays from new long-lived neutral particles and reject backgrounds which could mimic such decays <cit.>. An HNL signal in SHiP is characterised by opposite-sign tracks which cross inside the decay volume, containing at least one particle identified as an electron or a muon, and whose reconstructed parent particle trajectory has its origin at the production target. Backgrounds can possibly arise from neutrino interactions in the vessel, decays of long-lived neutral hadrons, and random crossings of charged particles entering the vessel. Neutrino production in the forward direction is reduced by stopping hadrons in a dense absorber before they decay, and neutrino interactions are minimised by evacuating the air in the vessel. The other types of backgrounds can be vetoed by surrounding the decay volume with tagging detectors. One additional handle to reject random crossings is to measure and match the arrival times of the particles forming the vertex with a high precision (100 ps resolution or better) using a dedicated timing detector. Simulations show that the combined use of the active muon shield, veto taggers surrounding the vessel, the timing detector, track momentum and pointing measurements, and muon-pion separation, can reduce the backgrounds to 0.1 events in a sample of 2× 10^20 protons on target <cit.>. The experimental facility is also ideally suited for studying interactions of tau neutrinos. 
For this purpose, it will host an emulsion cloud chamber followed by a muon spectrometer upstream of the hidden-particle decay volume.§ HEAVY NEUTRINO SEARCHES AT THE LHC It is generally assumed that new particles accessible at the LHC with masses below 100 GeV would already have been discovered at the LEP, HERA and Tevatron colliders <cit.>. The HNL is an interesting exception. Neutrinos from W and Z decays provide the most efficient way to probe HNLs in the mass range 5-80 GeV. At previous colliders, they amounted only to a few millions and it was not possible to probe coupling strengths below 10^-5, corresponding to prompt decays for HNL masses above 2.5 GeV. Thus, in previous searches, the vertex displacement could generally not be used as a discriminant against backgrounds, which are important at hadron colliders due to large QCD cross sections. As a consequence, the best current constraints in the HNL mass range 2-75 GeV come from an analysis with the Delphi experiment at LEP1 using ∼ 10^6 neutrinos from Z decays <cit.>. By contrast, a total of ∼ 10^9 neutrinos from W boson decays were produced at the LHC (ATLAS and CMS) in run-1, and an additional ∼ 10^9 for every 15 fb^-1 is being produced in run-2 in 13 TeV collisions since 2016. The process of HNL production through on-shell Ws and its subsequent decay is illustrated in Fig. <ref>. It offers two important advantages: the possibility to trigger on the prompt charged lepton from the W decay, and the possibility to efficiently reduce backgrounds by requiring a displaced (>few mm) vertex. Searches using displaced-vertex signatures performed so far at ATLAS <cit.> and CMS <cit.> considered the new neutral particles to be decay products of other particles more massive than a W, leading to high transverse momentum (p_T) displaced decay products. None of these provided any relevant sensitivity to HNLs due to the high-p_T requirements on the particles from the displaced vertex or to the requirement that two displaced vertices should be reconstructed in the same event. However, these searches demonstrate that displaced vertices can be reconstructed with a reasonable efficiency in ATLAS and CMS, and that backgrounds that give rise to such displaced vertices can be kept under control. A well-designed analysis at ATLAS or CMS would be able to probe for the existence of HNLs with masses in the range 3-30 GeV with a sensitivity which largely surpasses existing LEP constraints and is relevant for BAU generation <cit.>. For masses higher than 30 GeV, HNLs produced through on-shell or off-shell W decays would decay promptly. For background reduction one has then to rely on their Majorana nature and require a signature of no opposite-sign same-flavour prompt lepton pairs <cit.>. Searches of this type were made in 8 TeV collisions at ATLAS <cit.> and CMS <cit.>, setting limits on the HNL mixing to muon neutrinos in the mass range 50-500 GeV. It should be noted though that the regions of masses and mixings probed using this strategy are not favoured by models of leptogenesis <cit.>. § EXPECTED SENSITIVITIES Assuming the layout of the SHiP experiment which is described in the technical proposal <cit.>, with 2× 10^20 protons on target and acceptance and backgrounds with a basic event selection estimated from simulations, one obtains the sensitivity curve shown in blue in Fig. <ref>. 
This is only a preliminary estimate, as the experiment is still being optimised for a trade-off between cost and performance, but it indicates clearly that SHiP will be able to dig deeply into the most favoured regions of the parameter space. The curves shown in red and purple in Fig. <ref> show rough expected sensitivities of dedicated ATLAS and CMS analyses using the full LHC run-3 dataset, with a displaced-vertex signature or a prompt signature with no opposite-sign same-flavour lepton pairs, respectively. For the displaced-vertex signature, both leptonic and hadronic HNL decays are considered, with a reconstructed vertex mass cut set to >4.5 GeV for the hadronic channel to reduce hadronic backgrounds. Other assumptions made are: zero background, 60% trigger efficiency, 20% vertex selection efficiency, and 50% efficiency for the final selection cuts. For the prompt analysis, the same assumptions as in Ref. <cit.> are made. While the efficiency and background assumptions still need to be confirmed with detailed studies, it is clear that dedicated HNL searches at the LHC will largely surpass the existing LEP constraints in the mass range 3-50 GeV. § SUMMARY The proposed SHiP experiment plans to use the CERN SPS high-intensity proton beam to probe the smallest possible HNL mixings to neutrinos – several orders of magnitude smaller than current limits in the 0.4-3 GeV HNL mass range. SHiP is widely recognised as an important complement to existing CERN programmes after LHC run-2. In a complementary manner, displaced-vertex signatures at ATLAS and CMS allow access to higher HNL masses thanks to the high rate of W boson production. These two approaches offer unique opportunities to probe the existence of right-handed neutrinos below the electroweak scale by direct production and detection at the CERN laboratory within a 10-year time scale. The reward is potentially very high, as such particles can shed light on the mechanisms behind the observed neutrino masses and the generation of baryon asymmetry in the Universe. | http://arxiv.org/abs/1704.08635v1 | {
"authors": [
"P. Mermod"
],
"categories": [
"hep-ex",
"hep-ph"
],
"primary_category": "hep-ex",
"published": "20170427155822",
"title": "Right-handed neutrinos: the hunt is on!"
} |
AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, al. Mickiewicza 30, 30-059 Kraków, Poland AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, al. Mickiewicza 30, 30-059 Kraków, PolandWe study graphene quantum point contacts (QPC) and imaging of thebackscattering of the Fermi level wave function by potential introduced bya scanning probe. We consider both etched single-layer QPCs as well as the ones formed by bilayer patches deposited at the sides of the monolayer conducting channel using an atomistic tight binding approach. A computational method is developed to effectively simulate an infinite graphene plane outside the QPC using a computational box of a finite size.We demonstrate that in spite of the Klein phenomenon interference due to the backscattering at a circular n-p junction induced by the probe potential is visiblein spatial maps of conductance as functions of the probe position.Imagingbackscattering in graphene quantum point contacts B. Szafran==========================================================§ INTRODUCTIONQuantum point contacts <cit.> (QPC) areelementary building blocks of quantum transport devices for carrier injection and read-out withcontrol over the quantized conductance.Transport phenomena for the current injected through QPCs are studied with the spatial resolution by the scanning gate microscopy (SGM) <cit.> –a technique in which a charged tip of the atomic force microscopeperturbs the potential within the system with the 2DEG,induces the backscattering and alters the conductance. SGM was used forgraphene-based systems, the QPCs <cit.>states localized within the QPC<cit.>,quantum Hall conditions <cit.>, andmagnetic focusedtrajectories <cit.>. Theoretical studies for the magnetic focusing <cit.> and imaging snake states <cit.> wereperformed.SGM for QPCs defined within the two-dimensional electron gas for heterostructures based on III-V semiconductors resolves interference of the incident and backscattered<cit.> wave functions. In graphene, a strong tip potential induces formation of a local n-p junction <cit.> instead of depletion of the electron gas as in III-V semiconductors. The n-p junctions in graphene are transparent for Fermi level electrons incident normally due to the Klein tunneling <cit.>. Nevertheless, as we show below, the backscattering induced by the n-p junction formed by the tipinduces a clear interference in SGM maps with a period of half the Fermi wavelength. In semiconductor heterostructures with two-dimensional electron gas (2DEG), QPCs can be definedby lateral gates, which deplete the 2DEG, and thus change the constriction width andnarrow the conduction channel for Fermi level electrons<cit.>.In graphene the channel constriction byexternal gates is ineffective due to Klein tunneling <cit.>. Etched QPCs were studied instead by both experiment <cit.> and theory <cit.>. In bilayer graphene <cit.> it is possible to induce a bandgap by applying a bias between the layers <cit.>. QPCs on graphene with bilayer inclusions have been produced <cit.>, but conductance quantization in these systems has not been theoretically investigated so far. For demonstration that the present results are independent of the QPC typewe consider both etched [Fig.<ref>(a)] and bilayer patched QPCs [Fig.<ref>(b)].The latterturn out less susceptible to perturbation by defects within the QPC. 
In order to discuss the effects of sample imperfections, we consider both defects at the edge and within the bulk of the sample. For the edge defects we consider singly-connected carbon atoms <cit.> protruding from the zigzag segments of the constriction edge, which produce resonant scattering that destabilizes the conductance plateaux. For the bulk imperfections we consider the local potential perturbation introduced by fluorine adatoms deposited on the surface <cit.>. § THEORY We use the atomistic tight-binding Hamiltonian spanned by p_z orbitals, H=∑_⟨ i,j⟩(t_ij c_i^† c_j+h.c.)+∑_i V( r_i) c_i^† c_i, where V( r_i) is the external potential at the i-th site at position 𝐫_i, and in the first term we sum over the nearest neighbors. We use the tight-binding parametrization of Ref. Partoens, with t_ij=-3.12 eV for the nearest neighbors within the same layer. For the bilayer, we take t_ij=-0.377 eV for the A-B dimers, t_ij=-0.29 eV for skew interlayer hoppings <cit.> between atoms of the same sublattice (A-A or B-B type), and t_ij=0.12 eV for skew interlayer hopping between atoms of different sublattices. The potential energy in the lower layer is taken as the reference level V_b'=0, and the value in the upper layer V_b is tuned by the electric field perpendicular to the layer. The interlayer distance is 3.32 Å. In order to account for the effects of lattice imperfections far from the edges of the sample, we consider separate fluorine adatoms with the tight-binding parametrization of the hopping parameters taken from Ref. Irmer2015 in the dilute fluorination limit. Accordingly <cit.>, for the hopping between the fluorine atom and the carbon ion we take T=5.5 eV, and the on-site energy on the fluorine ion is ε_F=-2.2 eV. For simulation of the SGM, we assume an effective potential of the tip with a Lorentzian form, V(x,y)=V_t/[1+((x-x_t)^2+(y-y_t)^2)/d^2], where x_t,y_t are the tip coordinates, d is the effective width of the tip potential, and V_t is its maximal value (V_t=1.25 eV unless stated otherwise). We consider the energy range near the Dirac point. For evaluation of the transmission probability, we use the wave function matching (WFM) technique as described in Ref. Kolacha. The transmission probability from the input lead to mode m in the output lead is T^m = ∑_n|t^mn|^2, where t^mn is the probability amplitude for the transmission from mode n in the input lead to mode m in the output lead. We evaluate the conductance as G=G_0∑_m T^m = G_0∑_m,n|t^mn|^2, with G_0=2e^2/h. We consider an armchair nanoribbon of width W=62 nm, 509 atoms wide. The QPC is either formed by etched-out semicircles with radii R=28 nm producing a QPC [Fig. <ref>(a)] or by bilayer patches of the same form [Fig. <ref>(b)]. The QPC is D=6 nm wide at the narrowest point. We consider QPC edges with a number of singly connected atoms – similar to the ones present in the Klein edge <cit.> [Fig. <ref>(c)] – as well as "clean" edges with the singly connected atoms removed [Fig. <ref>(d)].
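As a minimal illustration of these ingredients – the nearest-neighbour hopping term and the Lorentzian tip potential on the diagonal – the sketch below assembles both for a small monolayer patch. It is a bare-bones toy: the lattice generator, the patch size, and the tip width d = 10 nm are our illustrative choices, not the production WFM setup used in the paper.

```python
import numpy as np

T_NN = -3.12    # eV, in-plane nearest-neighbour hopping (as quoted above)
A_CC = 0.142    # nm, carbon-carbon distance

def honeycomb(nx, ny):
    """Atom positions of a small rectangular graphene patch, in nm."""
    a1 = np.array([3 * A_CC, 0.0])
    a2 = np.array([0.0, np.sqrt(3) * A_CC])
    basis = np.array([[0.0, 0.0], [A_CC, 0.0],
                      [1.5 * A_CC, np.sqrt(3) / 2 * A_CC],
                      [2.5 * A_CC, np.sqrt(3) / 2 * A_CC]])
    cells = np.array([i * a1 + j * a2 for i in range(nx) for j in range(ny)])
    return (cells[:, None, :] + basis[None, :, :]).reshape(-1, 2)

def tip_potential(r, r_tip, v_t=1.25, d=10.0):
    """Lorentzian tip potential; v_t in eV, lengths in nm."""
    return v_t / (1.0 + ((r - r_tip) ** 2).sum(axis=-1) / d**2)

pos = honeycomb(6, 8)
n = len(pos)
h = np.zeros((n, n))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
h[np.abs(dist - A_CC) < 1e-3] = T_NN          # nearest neighbours only
h[np.diag_indices(n)] = tip_potential(pos, np.array([1.0, 1.0]))
evals = np.linalg.eigvalsh(h)                 # spectrum of the perturbed patch
print(n, evals[:4])
```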
Upon attachment of the leads, the corners of the computational box, between the right lead and the top or bottom leads [Fig. <ref>(c)], still act as scattering centers and produce an artificial interference. To eliminate the scattering by the corners, which influences the SGM maps, we added in the upper-right and lower-right corners two leads that are semi-infinite in the z direction [Fig. <ref>] and absorb the current that has not entered the in-plane leads. The additional vertical leads are attached to the corners of the computational box as sinks for the current. Since the present approach is based on the atomistic tight-binding procedure, we had to choose a specific atomic structure for the leads.

§ RESULTS

§.§ Conductance of the QPC constrictions

In Fig. <ref>(a,b) the transmission probability as a function of the Fermi energy is presented. A transport energy gap due to the constriction is present near the Dirac point. For the QPC with singly connected atoms at the etched edge [blue line in Fig. <ref>(a)], the conductance exhibits a number of sharp peaks. No well-developed plateaus are observed, and the conductance is much lower than that of a uniform ribbon with the width of the narrowest part of the QPC (dashed line). This is caused by strong backscattering by the atomic-scale roughness of the etched QPC induced by the singly connected atoms. Upon their removal [cf. Fig. <ref>(c) and <ref>(d)], the conductance [orange line in Fig. <ref>(a)] becomes a smooth function of the energy and approaches the maximal conductance for the QPC width.

For the bilayer patches we assume that the potential in the lower graphene layer is V = 0 and V_b in the upper layer (V_b = 0.64 eV unless stated otherwise). For this bias a band gap is formed within the finite-size bilayer patches in the range (0.19, 0.64) eV. For a Fermi energy E_F of the leads within the gap opened by the interlayer bias in the constriction, the current does not penetrate the patches [Fig. <ref>(a)]. For E_F beyond the forbidden range the current flows across the patches [Fig. <ref>(b)]. Similarly to the etched QPCs, the geometry of the QPC is fixed once the sample is produced; however, the bilayer-patched systems can be controlled via the electric field, which allows us to switch the quantizing properties on and off, or to alter the number of conducting modes in the QPC.

The conductance of the patched QPC is presented in Fig. <ref>(b) as a function of E_F. The dashed line shows the conductance of a uniform nanoribbon with two rectangular bilayer patches along the entire ribbon, with a width matching the conditions at the narrowest point of the QPC constriction. The conductance of the QPC with patches containing a Klein edge and of those without it is in both cases smooth; however, the ubiquitous backscattering makes the conductance lower than that of a uniform ribbon with the same structure as the narrowest part of the QPC. With the singly connected atoms present, the G(E_F) dependence is smoother for the patched QPC [Fig. <ref>(b)] than for the etched one [Fig. <ref>(a)], since even for atoms of the upper layer that have only one in-plane neighbor there is a non-zero hopping to the atoms in the lower layer.

§.§ Simulation of the scanning gate microscopy

For E_F < V_t = 1.25 eV, the tip introduces an n-p junction.
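The radius at which this junction forms follows directly from the Lorentzian tip profile of the Theory section: the junction sits on the circle where V(x,y) = E_F, which gives the d_np formula used later in the Discussion. A short hedged sketch (the value of d is an assumed example, since the text does not fix it here):

```python
import numpy as np

def np_junction_radius(vt, ef, d):
    """Radius at which the Lorentzian tip potential crosses E_F.

    Solving V_t / (1 + r^2/d^2) = E_F gives r = d * sqrt(V_t/E_F - 1);
    an n-p junction exists only when V_t > E_F.
    """
    if vt <= ef:
        return None  # tip too weak: no n-p junction forms
    return d * np.sqrt(vt / ef - 1.0)

# With V_t = 1.25 eV, E_F = 0.312 eV and an assumed width d = 20 nm,
# the junction radius is about 34.7 nm.
print(np_junction_radius(1.25, 0.312, 20.0))
```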
For QPCs without the singly-connected atoms we choose the work points for the scanning maps at the conductance step (G ≈ G_0) and at the plateau (G ≈ 2G_0). For the etched QPC, the step and the plateau are taken at E_F = 0.312 eV (G = 1.01 G_0, orange point in Fig. <ref>(a)) and E_F = 0.37 eV (G = 1.73 G_0, green point in Fig. <ref>(a)), respectively. For the patched QPC we take E_F = 0.37 eV for the plateau (G = 1.8 G_0, green point in Fig. <ref>(b)) and E_F = 0.327 eV for the step (G = 1.19 G_0, orange point in Fig. <ref>(b)).

For the QPC conductance in the absence of the tip, the open BCs on the output side of the QPC play no significant role: the conductance is nearly the same with rigid and open BCs at the horizontal edges of the ribbon. This results from the negligible scattering by the horizontal edges that could reverse the current back through the QPC to the input lead. However, the open conditions are crucial for the conductance mapping.

Let us first consider conductance maps for closed BCs at the upper and lower edges of the ribbon, which then correspond to actual ends of the sample. Fig. <ref>(a) shows the conductance map for the etched QPC with the clean edge. The contour of the bilayer patch is marked by black solid lines. The two halos centered in the middle of the QPC correspond to the tip-induced activation of the two resonances marked by orange arrows in Fig. <ref>(a). Away from the constriction in Fig. <ref>(a), the conductance fluctuates in a non-regular way due to the large number of transverse modes with different k_F. The nanoribbon of the considered width has 19 modes at E_F = 0.312 eV and 22 modes at E_F = 0.37 eV. The image contains the signal of a superposition of waves with many different Fermi wavelengths, with intersubband scattering. The maps become simpler once open BCs are applied on the right (output) side of the QPC to simulate an infinite graphene half-plane. In the conductance maps for the etched QPC with open BCs [Fig. <ref>(b)], the QPC-centered halos remain the same as for the closed BCs [Fig. <ref>(a)]. The difference appears to the right of the QPC, where the simulated flake is infinite. Far from the QPC, periodic oscillations of the conductance are present. Figure <ref> shows a zoom of the region [dashed line in Fig. <ref>(b)] for the etched [Fig. <ref>(a)] and the patched [Fig. <ref>(b)] QPC. The oscillations in the two scans differ by an offset and not by the oscillation period.

§.§ QPC work point vs the conductance maps

The contrast of the conductance maps for fixed parameters of the tip potential depends on the Fermi energy. The contrast grows with the absolute value of the derivative ∂G/∂E_F. Figs. <ref> and <ref> present the conductance maps for open boundary conditions, with the work points marked by the orange and light-green dots in the conductance vs Fermi energy plots of Fig. <ref>(a) and Fig. <ref>(b) of the main text, respectively. The maximal conductance values on the maps of Fig. <ref> and Fig. <ref> are given by the G values marked by the points in Fig. <ref>(a) and Fig. <ref>(b). The variation of the map increases with the slope of G(E_F). The oscillation period in panels (a) and (b) of Fig. <ref> and Fig. <ref> changes consistently with the value of the Fermi wavelength.

§.§ Conductance maps and edge scatterers within the constriction

The singly-connected atoms within the constriction [Fig. <ref>(c-d)] are a strong source of scattering for electron waves crossing the QPC. The conductance is decreased when they are present within the constriction [cf. the blue and red lines in Fig. <ref>].
The conductance maps for the etched QPC are given in Figs. <ref>(b) and <ref>. The map changes within the constriction, but the oscillation period due to backscattering at a distance from the QPC, which is the subject of this paper, remains unchanged.

§.§ Conductance maps with bulk scatterers off the constriction

We consider the influence of scattering by the strong local perturbation due to fluorine adatoms bound to the carbon lattice at a distance from the constriction. Two adatoms are considered, with positions marked by dots in Fig. <ref>. We perform a modeling of scanning gate microscopy of fluorinated graphene with V_t = 0.5 eV. In the conductance map we observe elliptical features near the adatoms, superimposed on the conductance oscillation pattern with the period of half the Fermi wavelength characteristic of the clean sample. To improve the visibility of the signal, in Fig. <ref>(b) we plot the derivative of the conductance of Fig. <ref>(a). The ellipses plotted in Fig. <ref> are drawn for the conditions of interference of the wave functions incident from the QPC and backscattered by the tip and the impurity, as described in the following section.

§ DISCUSSION

The current distribution for the etched QPC is displayed in Fig. <ref>, with the interference fringe pattern between the QPC and the tip that results from the tip-induced backscattering. The white circle in Fig. <ref> indicates the position where the effective tip potential equals E_F, i.e. the n-p junction. In Fig. <ref>(b) a zoom of the rectangle marked in Fig. <ref>(a) is displayed, with the current orientation given by the vector map. Note that the current at a cross section of the computational box is not necessarily conserved, as it can flow to the upper and lower contacts; the current in the entire, infinite system is, however, always conserved. The current is focused by the circular n-p junction and disperses to the right of it in Fig. <ref>(b).

In the Klein tunneling effect, a Fermi electron incident on a potential barrier higher than E_F is perfectly transmitted at normal incidence, and the transmission probability is less than 1 for other incidence angles <cit.>. For non-normal incidence, the current is partially reflected, and partially transmitted and refracted by the n-p-n junction <cit.>. In Fig. <ref>, a normal current along the axis of the system indeed passes across the junction. The tip potential deflects the currents inside the central p-conductivity region, and only the precisely normal component of the current passes through undeflected. Other incidence angles contribute to backscattering. The angular dependence of the scattering by a circular potential in graphene has been described for an incident plane wave in Ref. <cit.>. In our case the wave function incoming from the QPC opening is not a plane wave but is closer to a circular wave, which contributes to a deviation of the incidence angles from normal. Moreover, the tip potential, being of electrostatic origin, is bound to have a smooth profile. According to Ref. <cit.>, for a smooth potential profile the transmission probability drops well below 100% already at a small deviation of the incidence angle from normal.

Let us consider a simple model for the conductance oscillations far from the QPC. The QPC is a source of a circular wave function, and the SGM tip induces backscattering, as argued above and shown in Fig. <ref>(a). The wave function incident from the QPC is partially reflected back to the opening (Fig. <ref>(a)).
The incident wave Ψ_in(𝐫_tip) = exp(i 𝐤_F · (𝐫_qpc - 𝐫_tip)) and the wave backscattered by the tip Ψ_sc(𝐫_tip) = exp(-i 𝐤_F · (𝐫_qpc - 𝐫_tip)) superpose and create a standing wave between the tip and the QPC. The electron density modulation can be described by

|Ψ(𝐫_tip)|^2 ∝ cos(2 𝐤_F · (𝐫_tip - 𝐫_qpc)).

This form of the scattering density gives rise to a conductance map that oscillates with the tip position with a period of λ_F/2, where λ_F = 2π/|𝐤_F|. At low energy, the Fermi vector can be calculated from the linear dispersion relation of graphene <cit.>:

k_F = 2E_F/(3 t a_CC).

In Fig. <ref>, the cross sections along the axis of the system of Figs. <ref>(a) and <ref>(b) are shown together with a cosine, shifted in phase and offset, adjusted to the conductance calculated from the quantum scattering problem. Far from the QPC, the modeled conductance is close to a cosine with a k_F that agrees with the wave vector obtained from the dispersion relation of graphene. As seen in Fig. <ref>(b) and Figs. <ref>-<ref>, far from the QPC the oscillations can be described by this simple model.

In Fig. <ref>, the purple line marks the results obtained for V_t = 0.125 eV, which is below E_F. In this case no backscattered interference pattern is observed. We find that the formation of the n-p junction by the tip is a necessary condition for the observation of the interference fringes.

The signal observed in the presence of scattering near the fluorine adatoms, with the elliptical features in the conductance map of Fig. <ref>, is similar to the one identified recently in III-V semiconductors <cit.> as the interference signal induced by scattering by the tip and a fixed defect. The interference paths are schematically shown in Fig. <ref>(b). Figure <ref>(a) illustrates the backscattering by the n-p junction induced by the tip, resulting in the interference pattern with the period of half the Fermi wavelength discussed above. In Fig. <ref>(b) the electron wave incident from the QPC on the fluorine adatom interferes with the wave scattered by the tip-induced n-p junction. The resulting conductance pattern can be approximately described by

G ∝ cos(k_F (r_qpc-tip-f - r_qpc-f)).

In Fig. <ref>, the dashed lines show the isolines of r_qpc-tip-f - r_qpc-f = λ_F/2. The dashed ellipse corresponds to a point-like tip, while the solid black line in Fig. <ref> accounts for the finite radius of the n-p junction, d_np = d √(V_t/E_F - 1); the black solid line was obtained for the condition r_qpc-tip-f - r_qpc-f - d_np = λ_F/2. A still closer approximation is obtained when one accounts for the dependence of the penetration depth on the electron incidence angle <cit.>. For the blue solid line in Fig. <ref> we considered the condition r_qpc-tip-f - r_qpc-f - d √(V_t/(E_F cos(α)) - 1) = λ_F/2.

Concluding, in the presence of the defects the conductance maps resolve the interference involving the tip, the QPC and the defect, similarly to what was previously described for III-V semiconductors. The incomplete transmission in the Klein effect for electron incidence deviating from normal was used for the construction of n-p-n Fabry-Pérot interferometers in graphene <cit.>. At n-p-n junctions <cit.>, interference of the refracted waves in ballistic graphene appears in the scattering electron density, which is referred to as the local density of states <cit.>. The present work deals with the SGM spatial resolution of the standing waves in conductance maps, and not in the local density of states only. A numerical sketch of the standing-wave model above is given below.
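As a hedged illustration of the standing-wave model, the sketch below evaluates k_F from the linear dispersion (using the hopping magnitude |t|), the cosine modulation of the conductance with the tip-QPC distance, and the elliptical tip-defect interference condition. The positions, the isoline tolerance and the function names are assumptions made for the example.

```python
import numpy as np

T_ABS = 3.12   # magnitude of the nearest-neighbor hopping |t| (eV)
A_CC = 0.142   # carbon-carbon bond length (nm)

def k_fermi(ef):
    """Fermi wave vector from the linear dispersion, k_F = 2 E_F / (3 |t| a_CC)."""
    return 2.0 * ef / (3.0 * T_ABS * A_CC)   # in 1/nm for E_F in eV

def standing_wave(ef, r_tip, r_qpc):
    """Conductance modulation of the simple model: cos(2 k_F |r_tip - r_qpc|)."""
    dist = np.linalg.norm(np.asarray(r_tip) - np.asarray(r_qpc))
    return np.cos(2.0 * k_fermi(ef) * dist)

def on_tip_defect_ellipse(r_tip, r_qpc, r_f, ef, tol=0.1):
    """True when the QPC-tip-defect path exceeds the direct QPC-defect
    path by half the Fermi wavelength (the dashed ellipse of the text)."""
    r_tip, r_qpc, r_f = map(np.asarray, (r_tip, r_qpc, r_f))
    path_diff = (np.linalg.norm(r_tip - r_qpc)
                 + np.linalg.norm(r_f - r_tip)
                 - np.linalg.norm(r_f - r_qpc))
    lam_f = 2.0 * np.pi / k_fermi(ef)
    return abs(path_diff - lam_f / 2.0) < tol

# Fermi wavelength at E_F = 0.312 eV: about 13.4 nm, so the expected
# oscillation period in the conductance maps is lambda_F/2, about 6.7 nm.
print(2.0 * np.pi / k_fermi(0.312))
```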
The present idea does not require sharp n-p junctions or a point-like injection and detection of the current as in Veselago lensing <cit.>. Fig. <ref> shows the probability density without the SGM tip for electrons incoming from the left terminal, for pristine graphene (a) and with fluorine adatoms (b). For the dilutely fluorinated graphene, the backscattering by the fluorine adatom is resolved in the density plot of Fig. <ref>(b). There is no correlation between these densities and the conductance maps: the SGM maps resolve the quantum transport properties, or in-plane conductance, of the sample when the tip becomes the source of an additional scattering.

In contrast to the 2DEG in III-V quantum wells, the surface electron gas in graphene can alternatively be studied with scanning tunneling microscopy (STM) <cit.>. In this configuration the tunneling microscope acts as a contact and not as a gate electrode; instead of the scattering effects involving the tip, the STM resolves the local density of states. Ref. Neubeck provided an SGM map of a graphene QPC for a nominal tip potential set to V_t = -0.5 eV. The resistance map of that work <cit.> resolved only the QPC itself and not the interference fringes described here. The nominal V_t value given in Ref. Neubeck is an unscreened parameter, and it is not guaranteed that the screened tip potential was strong enough to induce the formation of an n-p junction, since no control of E_F was demonstrated <cit.>. Nevertheless, the present work indicates that the observation of spatial maps of the backscattering interference pattern in graphene is not excluded by the Klein tunneling effect.

§ SUMMARY AND CONCLUSIONS

We have studied the current constriction by graphene QPCs formed by a gap between two biased bilayer patches and by a narrowing of a graphene ribbon, using an atomistic tight-binding method and the Landauer approach. We considered conductance mapping as a function of a floating probe position. For this purpose, open boundary conditions on the output side of the QPC were introduced in order to produce an image clean from backscattering by the edges and from the consequences of the multiple Fermi wavelengths resulting from subband quantization. With the open boundary conditions simulating an infinite graphene plane at the output side of the QPC, we found a clear interference pattern in the conductance map, with the period of half the Fermi wavelength characteristic of the backscattering by the tip. The interference is only observed provided that the tip induces an n-p junction in graphene. The backscattering of the electron wave function by the circular tip-induced n-p junction, for electron incidence at angles deviating from normal, is sufficient for the interference to be observed in the calculated conductance images. The finding that the Klein effect does not prevent the observation of the standing waves induced by the tip in graphene opens perspectives for the experimental determination of the current distribution, the current branching by scattering defects, the coherence length, etc.

Acknowledgments. This work was supported by the National Science Centre (NCN) according to decision DEC-2015/17/B/ST3/01161. The calculations were performed on the PL-Grid Infrastructure.
"authors": [
"Alina Mreńca-Kolasińska",
"Bartłomiej Szafran"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170427073854",
"title": "Imaging backscattering in graphene quantum point contacts"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.