Supplement: Liquidity based modeling of asset price bubbles via random matching

January 14, 2024

This is a supplement to the paper <cit.>. The supplement is organized as follows. First, we prove Theorem 3.13 in <cit.>, which provides the existence of the dynamical system 𝔻 introduced in Definition 3.6 in <cit.>. Second, we show some properties of 𝔻, which are summarized in Theorem 3.14 in <cit.>. In the following, we only state the basic setting and refer to <cit.> for definitions.

§ SETTING

Let (Ω̃, ℱ̃, P̃) be a probability space and (Ω̂, ℱ̂) another measurable space. We define the product space (Ω,ℱ):=(Ω̃×Ω̂, ℱ̃⊗ℱ̂). Let P̂ be a Markov kernel (or stochastic kernel) from Ω̃ to Ω̂. Given ω̃∈Ω̃, we set P̂^ω̃:=P̂(ω̃) with a slight notational abuse. We then introduce a probability measure P on (Ω,ℱ) as the semidirect product of P̃ and P̂, that is,

P(Ã×Â) := (P̃⋉P̂)(Ã×Â) = ∫_Ã P̂^ω̃(Â) dP̃(ω̃).

We fix an atomless probability space (I,I,λ) representing the space of agents and let (I ×Ω, I⊠ℱ, λ⊠ P) be a rich Fubini extension of (I ×Ω, I⊗ℱ, λ⊗ P). All agents in I can be classified according to their type. In particular, we let S={ 1,2,...,K } be a finite space of types and say that an agent has partner type J if it is not matched. We denote by Ŝ:=S × (S ∪{ J }) the extended type space. Moreover, we call Δ̂ the space of extended type distributions, that is, the set of probability distributions p on Ŝ satisfying p(k,l)=p(l,k) for any k and l in S. This space is endowed with the topology 𝒯^Δ induced by the topology of the space of matrices with |S| rows and |S|+1 columns. We consider time periods n ≥ 1 and denote by (η^n, θ^n, ξ^n, σ^n, ς^n)=(η^n_kl, θ^n_kl, ξ^n_kl, σ^n_kl[r,s], ς^n_kl[r])_k,l,r,s ∈ S the matrix-valued input processes on (Ω, ℱ, P). For a detailed introduction of these processes we refer to Section 3 in <cit.>. Moreover, let p̂=(p̂^n)_n ≥ 1 be a stochastic process on (Ω, ℱ, P) with values in Δ̂, representing the evolution of the underlying extended type distribution. We assume that p̂^0 is deterministic. Given the input processes (η, θ, ξ, σ, ς), we denote by 𝔻 a dynamical system on (I ×Ω, I⊠ℱ, λ⊠ P) and by Π=(α, π, g)=(α^n, π^n, g^n)_n ∈ℕ∖{ 0 } the agent-type function, the random matching and the partner-type function, respectively, as introduced in Definition 3.6 in <cit.>, which we recall in the following.

A dynamical system 𝔻 defined on (I ×Ω, I⊠ℱ, λ⊠ P) is a triple Π=(α, π, g)=(α^n, π^n, g^n)_n ∈ℕ∖{ 0 } such that for each integer period n ≥ 1 we have:

* α^n: I ×Ω→ S is the I⊠ℱ-measurable agent type function. The corresponding end-of-period type of agent i under the realization ω∈Ω is given by α^n(i,ω) ∈ S.

* A random matching π^n: I ×Ω→ I, describing the end-of-period agent π^n(i) to whom agent i is matched, if agent i is currently matched. If agent i is not matched, then π^n(i)=i. The associated I⊠ℱ-measurable partner-type function g^n: I ×Ω→ S ∪{ J } is given by

g^n(i,ω) = α^n(π^n(i,ω),ω) if π^n(i,ω) ≠ i, and g^n(i,ω) = J if π^n(i,ω) = i,

providing the type of the agent to whom agent i is matched, or J if agent i is not matched.

Let the initial condition Π^0=(α^0,π^0) of 𝔻 be given. We now construct a dynamical system 𝔻 defined on (I ×Ω, I⊠ℱ, λ⊠ P) with input processes (η^n,θ^n,ξ^n, σ^n, ς^n)_n≥ 1.
We assume that Π^n-1=(α^n-1,π^n-1,g^n-1) is given for some n ≥ 1, and define Π^n=(α^n,π^n, g^n) by characterizing the three sub-steps of random change of types of agents, random matchings, break-ups and possible type changes after matchings and break-ups as follows. Mutation: For n ≥ 1 consider an I⊠ℱ-measurable post mutation functionα̅^n: I ×Ω→ S.In particular, α̅_i^n(ω):=α̅^n(i,ω) is the type of agent i after the random mutation under the scenario ω∈Ω. The type of the agent to whom an agent is matched is identified by a I⊠ℱ-measurable functiong̅^n:I ×Ω→ S ∪{ J },given by g̅^n(i,ω)=α̅^n(π^n-1(i,ω),ω) for any ω∈Ω. In particular, g̅_i^n(ω):=g̅^n(i,ω) is the type of the agent to whom an agent is matchedunder the scenario ω∈Ω. Given p̂^n-1 and ω̃∈Ω̃, for any k_1,k_2,l_1 and l_2 in S, for any r ∈ S ∪{ J }, for λ-almost every agent i, we set P̂^ω̃(α̅_i^n(ω̃,·)=k_2, g̅_i^n(ω̃,·)=l_2 |α_i^n-1(ω̃,·)=k_1, g_i^n-1(ω̃,·)=l_1, p̂^n-1(ω̃,·))(ω̂)=η_k_1,k_2( ω̃,n,p̂^n-1(ω̃,ω̂))η_l_1,l_2( ω̃,n,p̂^n-1(ω̃,ω̂)),P̂^ω̃(α̅_i^n(ω̃,·)=k_2, g̅_i^n(ω̃,·)=r |α_i^n-1(ω̃,·)=k_1, g_i^n-1(ω̃,·)=J, p̂^n-1(ω̃,·))(ω̂) =η_k_1,k_2(ω̃,n,p̂^n-1(ω̃,ω̂))δ_J(r),We then set β̅^n(ω)=(α̅^n(ω), g̅^n(ω)),n ≥ 1.The post-mutation extended type distribution realized in the state of the world ω∈Ω is denoted by p̌(ω)=(p̌^n(ω)[k,l])_k ∈ S,l ∈ S ∪ J, where p̌^n(ω)[k,l]:=λ({ i ∈ I: α̅^n(i,ω)=k,g̅^n(i,ω)=l}). Matching: We introduce a random matching π̅^n: I ×Ω→ I and the associated post-matching partner type function g̅̅̅^n given byg̅̅̅^n(i,ω)=α̅^n(π̅^n(i,ω),ω) if π̅^n(i,ω) ≠ iJif π̅^n(i,ω)=i,satisfying the following properties: * g̅̅̅^n is I⊠ℱ-measurable. * For any ω̃∈Ω̃, any k,l ∈ S and any r ∈ S ∪{ J }, it holdsP̂^ω̃(g̅̅̅^n(ω̃,·)=r |α̅^n_i(ω̃,·)=k, g̅_i^n(ω̃,·)=l)(ω̂)=δ_l(r).This means that π̅^n_ω(i)=π_ω^n-1(i)for anyi ∈{ i: π^n-1(i,ω) ≠ i }. * Given ω̃∈Ω̃ and the post-mutation extended type distribution p̌^n in (<ref>), an unmatched agent of type k is matched to a unmatched agent of type l with conditional probability θ_kl(ω̃,n,p̌^n), that is for λ-almost every agent i and P̂^ω̃-almost every ω̂, we defineP̂^ω̃(g̅̅̅^n(ω̃,·)=l |α̅^n_i(ω̃,·)=k, g̅_i^n(ω̃,·)=J, p̌^n(ω̃,·))(ω̂)=θ_kl^n(ω̃, p̌^n(ω̃,ω̂)).This also implies thatP̂^ω̃(g̅̅̅^n(ω̃,·)=J |α̅^n_i(ω̃,·)=k, g̅_i^n(ω̃,·)=J, p̌^n(ω̃,·))(ω̂)=1-∑_l ∈ Sθ_kl^n(ω̃,p̌^n(ω̃,ω̂))=b^k(ω̃,p̌^n(ω̃,ω̂)). The extended type of agent i after the random matching step is β̅̅̅^n_i(ω)=(α̅_i^n(ω),g̅̅̅_i^n(ω)),n ≥ 1. We denote the post-matching extended type distribution realized in ω∈Ω by p̌̌̌^n(ω)=(p̌̌̌^n(ω)[k,l])_k ∈ S,l ∈ S ∪ J, where p̌̌̌^n(ω)[k,l]:=λ({ i ∈ I: α̅̅̅^n(i,ω)=k,g̅^n(i,ω)=l}). Type changes of matched agents with break-up: We now define a random matching π^n byπ^n(i)=π̅^n(i) if π̅^n(i) ≠ i iif π̅^n(i) = i.We then introduce an (I⊠ℱ)-measurable agent type function α^n and an (I⊠ℱ)-measurable partner function g^n with g^n(i,ω)=α^n(π^n(i,ω), ω),n ≥ 1,for all (i,ω) ∈ I ×Ω. Given ω̃∈Ω̃, p̌̌̌^n ∈Δ̂, for any k_1,k_2,l_1,l_2 ∈ S and r ∈ S ∪{ J }, for λ-almost every agent i, and for P̂^ω̃-almost every ω̂, we set P̂^ω̃(α_i^n(ω̃,·)=l_1, g_i^n(ω̃,·)=r |α̅_i^n(ω̃,·)=k_1, g̅̅̅^n_i(ω̃,·)=J)(ω̂)=δ_k_1(l_1) δ_J(r), P̂^ω̃(α_i^n(ω̃,·)=l_1, g_i^n(ω̃,·)=l_2 |α̅_i^n(ω̃,·)=k_1, g̅̅̅^n_i(ω̃,·)=k_2, p̌̌̌^n(ω̃,·))(ω̂) =(1-ξ_k_1k_2( ω̃, n, p̌̌̌^n(ω̃, ω̂))) σ_k_1 k_2[l_1,l_2](ω̃, n, p̌̌̌^n(ω̃,ω̂)), P̂^ω̃(α_i^n(ω̃,·)=l_1, g_i^n(ω̃,·)=J |α̅_i^n(ω̃,·)=k_1, g̅̅̅^n_i(ω̃,·)=k_2, p̌̌̌^n (ω̃,·))(ω̂)=ξ_k_1k_2(ω̃, n,p̌̌̌^n(ω̃,ω̂)) ς_k_1 k_2^n[l_1](ω̃, n, p̌̌̌^n(ω̃,ω̂)). 
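To make the three sub-steps concrete, here is a minimal finite-population sketch of one period in Python. It is an illustration only: the paper works on a hyperfinite agent space with distribution-dependent intensities, whereas below the agent space is I = {0, ..., N-1}, the intensities η, θ, ξ, σ, ς are passed as plain arrays already evaluated at (ω̃, n, p̂^n-1), and the matching sub-step is a naive sequential pairing rather than the directed random matching of Proposition 3.12 in <cit.>; all function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_period(types, partner, eta, theta, xi, sigma, varsigma):
    """One period of the dynamical system D: mutation, matching, break-up.

    types   : (N,) int array, current types in {0, ..., K-1}
    partner : (N,) int array, partner[i] == i encodes the unmatched type J
    eta     : (K, K) mutation law, rows sum to 1
    theta   : (K, K) matching probabilities, sum_l theta[k, l] <= 1
    xi      : (K, K) break-up probabilities of a (k, l) pair
    sigma   : (K, K, K, K) joint law of the new pair types after no break-up
    varsigma: (K, K, K) law of the new type after a break-up
    """
    K, N = eta.shape[0], len(types)
    # (i) mutation: every agent independently redraws its type from eta[old]
    types = np.array([rng.choice(K, p=eta[k]) for k in types])
    # (ii) matching: each unmatched agent draws a partner type (or stays unmatched)
    unmatched = [i for i in range(N) if partner[i] == i]
    rng.shuffle(unmatched)
    pool = {k: [i for i in unmatched if types[i] == k] for k in range(K)}
    for i in unmatched:
        if partner[i] != i:
            continue                      # already matched by an earlier agent
        k = types[i]
        stay = max(0.0, 1.0 - theta[k].sum())
        l = rng.choice(K + 1, p=np.append(theta[k], stay))
        if l == K:
            continue                      # stays unmatched (partner type J)
        cands = [j for j in pool[l] if j != i and partner[j] == j]
        if cands:
            partner[i], partner[cands[0]] = cands[0], i
    # (iii) break-up and type change of matched pairs
    for i in range(N):
        j = partner[i]
        if j <= i:                        # unmatched, or pair handled at j < i
            continue
        k, l = types[i], types[j]
        if rng.random() < xi[k, l]:       # the pair breaks up
            partner[i], partner[j] = i, j
            types[i] = rng.choice(K, p=varsigma[k, l])
            types[j] = rng.choice(K, p=varsigma[l, k])
        else:                             # the pair survives, types redrawn jointly
            flat = rng.choice(K * K, p=sigma[k, l].ravel())
            types[i], types[j] = divmod(flat, K)
    return types, partner
```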
The extended-type function at the end of the period isβ^n(ω)=(α^n(ω),g^n(ω)),n ≥ 1. We denote the extended type distribution at the end of period n realized in ω∈Ω by p̂^n(ω)=(p̂^n(ω)[k,l])_k ∈ S,l ∈ S ∪ J, where p̂^n(ω)[k,l]:=λ({ i ∈ I: α^n(i,ω)=k,g^n(i,ω)=l}). Furthermore, the definition ofMarkov conditionally independent (MCI) dynamical system is provided in Definition 3.8 in <cit.>. We work under the following assumption, which is Assumption 3.9 in <cit.>. Let (Ω̃, ℱ̃, P̃) be the probability space introduced. We assume that there exists its corresponding hyperfinite internal probability space, which we denote from now on also by (Ω̃, ℱ̃, P̃) by a slight notational abuse. As already pointed out in <cit.>, the proofs of the results below follow by analogous arguments as in <cit.> which is possible due to the product structure of the space Ω in (<ref>) and the Markov kernel P in (<ref>). As in <cit.> we use some concepts and notations from nonstandard analysis. Note here that an object with an upper left star means the transfer of a standard object to the nonstandard universe. For a detailed overview of the necessary tools of nonstandard analysis, we refer to Appendix D.2. in <cit.>.§ PROOF OF THEOREM 3.13 IN <CIT.> From now on, we fix the hyperfinite internal space (Ω̃, ℱ̃, P̃), along with the input functions (η_kl,θ_kl, ξ_kl, σ_kl[r,s], ς_kl[r])_k,l,r,s ∈ S × S × S × Sfrom Ω̃×ℕ×Δ to [0,1] introduced above. Given this framework we prove the existence of a rich Fubini extension (I ×Ω, I⊠ℱ, λ⊠ P), on which a dynamical system 𝔻 described in Definition <ref> for such input probabilities is defined. More specifically, we are going to construct the space Ω̂ and the probability measure P̂ such thatΩ = Ω̃×Ω̂ and P = P̃⋉P̂ is a Markov kernel from Ω̃ to Ω̂. We now present and prove Theorem 3.13 in <cit.>. The proof is based on Proposition 3.12 in <cit.>, which focuses on the random matching step and shows the existence of a suitable hyperfinite probability space and partial matching, generalizing Lemma 7 in <cit.>.Let Assumption 3.9 in <cit.> hold and (η_kl,θ_kl, ξ_kl, σ_kl[r,s], ς_kl[r])_k,l,r,s ∈ S × S × S × S be the input functions from Ω̃×ℕ×Δ̂ defined in Section 3 in <cit.>. Then for any extended type distribution p̈∈Δ̂ and any deterministic initial condition Π^0=(α^0, π^0) there exists a rich Fubini extension (I ×Ω, I⊠ℱ, λ⊠ P) on which a discrete dynamical system 𝔻=(Π^n)_n=0^∞ as in Definition 3.6 in <cit.> can be constructed with discrete time input processes (η^n,θ^n,ξ^n, σ^n, ς^n)_n≥ 1 coming from (η_kl,θ_kl, ξ_kl, σ_kl[r,s], ς_kl[r])_k,l,r,s ∈ S × S × S × S as stated in Section 2 in <cit.>. In particular, Ω = Ω̃×Ω̂, ℱ = ℱ̃⊗ℱ̂,P = P̃⋉P̂,where (Ω̂, ℱ̂) is a measurable space and P̂ a Markov kernel from Ω̃ to Ω̂. The dynamical system 𝔻 is also MCI according to Definition 3.8 in <cit.> and with initial cross-sectional extended type distribution p̂^0 equal to p̈^0 with probability one. At each time period we construct three internal measurable spaces with internal transition probabilities taking into account the following steps: * random mutation * random matching * random type changing with break-up. Let M be a limited hyperfinite number in ^*ℕ_∞. Let { n }_n=0^M be the hyperfinite discrete time line and (I, I_0, λ_0) the agent space, where I={ 1,..., M̂}, I_0 is the internal power set on I, λ_0 is the internal counting probability measure on I_0, and M̂ is an unlimited hyperfinite number in ^*ℕ_∞. 
We startby transferring the deterministic functions[Note that at initial time, the functions are supposed to be deterministic and in particular independent of Ω̃.] η(0,·),θ(0,·), ξ(0,·), σ(0,·), ς(0,·): Δ̂→ [0,1] to the nonstandard universe. In particular, we denote by ^*θ^0_kl for any k,l ∈ S and by ^*f^0 for f=η,ξ, σ, ς the internal functions from ^*Δ̂ to [0,1].We also let θ̂^0_kl(ρ̂)=^*θ̂^0_kl(ρ̂) and b̂_k^0=1-∑_l ∈ Sθ̂_kl^0(ρ̂) for any k,l ∈ S and ρ̂∈^*Δ̂, with 1 ∈^*ℕ. We start at n=0. To do so, we introduce the trivial probability space over the single set { 0 } denoted by (Ω̅_0,ℱ̅_0, Q̅_0). Let { A_kl}_(k,l)∈Ŝ be an internal partition of I such that | A_kl|/M̂≃p̈_kl for any k ∈ S and l ∈ S ∪{ J }, such that | A_kk| is even for any k,l ∈ S and | A_kl| = | A_lk| for any k,l ∈ S. Let α^0 be an internal function from (I, I_0, λ_0) to S such that α^0(i)=k if i ∈⋃_l ∈ S ∪{ J } A_kl. Let π^0 be an internal partial matching from I to I such that π^0(i)=i on ⋃_k ∈ S A_kJ, and the restriction π^0 |_A_kl is an internal bijection from A_kl to A_lk for any k,l ∈ S. Let g^0(i)=α^0(π^0(i)) if π^0(i) ≠ iJif π^0(i) = i.It is clear that λ_0({ i: α^0(i)=k, g^0(i)=l }) ≃p̈^0_kl for any k ∈ S and l ∈ S ∪{ J }. Let (Ω̃, ℱ̃, P̃) be the hyperfinite internal space. Since the intensities are supposed to be deterministic at initial time, the Markov kernel from Ω̃ is trivial and we define the initial internal product probability space as (Ω_0, ℱ_0, Q_0):=(Ω̃×Ω̅_0, ℱ̃⊗ℱ̅_̅0̅, P̃⊗Q̅_0).Suppose now that the dynamical system 𝔻 has been constructed up to time n-1 ∈^*ℕ for n ≥ 1, i.e., that the sequences { (Ω_m, ℱ_m, Q_m) }_m=0^3n-3 and {α^l, π^l }_l=0^n-1 have been constructed. In particular, we assume to have introduced the spaces (Ω̂_m, ℱ̂_m) and the Markov kernel P̂_m from Ω̃ to Ω̂_m for any m=1, …, n-3, so that we can define Ω_m:=Ω̃×Ω̂_m as a hyperfinite internal set with internal power set ℱ_m:=ℱ̃⊗ℱ̂_m and Q_m:= P̃⋉P̂ _m as an internal transition probability from Ω^m-1 to (Ω_m, ℱ_m), whereΩ^m := Ω̃×Ω̂^m, Ω̂^m:=Ω̅_0 ×∏_j=1^mΩ̂_j,ℱ̂^m:=ℱ̅_0 ⊗( ⊗_j=1^mℱ̂_j )and ℱ^m = ℱ̃⊗ℱ̂^m. In this setting, α^l is an internal type function from I ×Ω^3l-1 to the space S, and π^l an internal random matching from I ×Ω^3l to I, such that α^l(i,(ω̃, ω̂^3l-1))= α^l(i,ω̂^3l-1), for any(ω̃, ω̂^3l-1) ∈Ω^3l-1and π^l(i,(ω̃, ω̂^3l))=π^l(i, ω̂^3l), for any(ω̃, ω̂^3l) ∈Ω^3l. Given ω^3l∈Ω^3l we denote by π^l_ω̂^3l:I → I the function given by π^l_ω̂^3l(i):=π^l(i,(ω̃, ω̂^3l))=π^l(i, ω̂^3l).A similar notation will be used for α^l_ω̂^3l:I → S. We now have the following. (i) Random mutation step:We let Ω̂_3n-2:=S^I, which is the space of all internal functions from I to S, and denote its internal power set by ℱ̂_3n-2. For each i ∈ I and ω^3n-3=(ω̃, ω̂^3n-3) ∈Ω^3n-3, if α^n-1(i, ω^3n-3)=α^n-1(i, ω̂^3n-3)=k, define a probability measure γ_i^ω̃, ω̂^3n-3 on S by letting γ_i^ω̃, ω̂^3n-3(l):=θ_kl(ω̃,n,ρ̂^n-1_ω̂^3n-3) for each l ∈ S with ρ̂^n-1_ω̂^3n-3[k,r]:=λ_0({ i ∈ I:α^n_ω̂^3n-3(i)=k,α^n_ω̂^3n-3(π^n_ω̂^3n-3(i))=r}),k,r ∈ Sandρ̂^n-1_ω̂^3n-3[k,J]:=λ_0({ i ∈ I:α^n_ω̂^3n-3(i)=k, π^n_ω̂^3n-3(i)=i}),k ∈ S.Define a Markov kernel P̂^ω̂^3n-3_3n-2 from Ω̃ to Ω̂_3n-2 by letting P̂^ω̂^3n-3_3n-2(ω̃)be the internal product measure ∏_i ∈ Iγ_i^ω̃, ω̂^3n-3. Define α̅^n:( I ×Ω^3n-2) → S by α̅^n(i, (ω̃,ω̂^3n-2)):=α̅^n(i, ω̂^3n-2)=:ω̂_3n-2(i) and g̅^n: ( I ×Ω^3n-2) → S ∪{ J } by g̅^n(i, (ω̃,ω̂^3n-2)):=g̅^n(i, ω̂^3n-2):=α̅^n(π^n-1(i,ω̂^3n-3),ω̂^3n-2) if π^n-1(i,ω̂^3n-3) ≠ iJif π^n-1(i, ω̂^3n-3)) = i. 
Moreover, we introduce the notationα̅_ω̂^3n-2^n(·):I → S, α̅_ω̂^3n-2^n(i):=α̅^n(i, (ω̃,ω̂^3n-2)):=α̅^n(i, ω̂^3n-2) for the type function. We then define π_ω̂^3n-3^n-1(·): I → I and g^n_ω̂^3n-2:I → S ∪{ J } analogously.Finally, we define the cross-internal extended type distribution after random mutation ρ̌^n_ω̂^3n-2 byρ̌^n_ω̂^3n-2[k, l] := λ_0({∈ I: α̅^n_ω̂^3n-2(i) = k, g̅^n_ω̂^3n-2(i)=l }),k, l ∈ S.(ii) Directed random matching:Let (Ω̂_3n-1, ℱ̂_3n-1) and P̂^ω̂^3n-2_3n-1 be the measurable space and the Markov kernel, respectively, provided by Proposition 3.12 in <cit.>, with type function α̅_ω̂^3n-2^n(·) and partial matching function π_ω̂^3n-3^n-1(·), for fixed matching probability function θ(·, n, ρ̌^n_ω̂^3n-2).Proposition 3.12 in <cit.> also provides the directed random matchingπ_θ^n(·, ρ̌^n_ω̂^3n-2),α̅^n_ω̂^3n-2, π^n-1_ω̂^3n-3,which is a function defined on (Ω_3n-1,ℱ_3n-1) byπ_θ^n(·, ρ̌^n_ω̂^3n-2),α̅^n_ω̂^3n-2, π^n-1_ω̂^3n-3(i,(ω̃,ω̂_3n-1)):=π_θ^n(·, ρ̌^n_ω̂^3n-2),α̅^n_ω̂^3n-2, π^n-1_ω̂^3n-3(i,ω̂_3n-1).We then define π̅^n: ( I ×Ω^3n-1) → I by π̅^n(i,(ω̃,ω̂_3n-1)):=π̅^n(i,ω̂^3n-1):=π_θ^n(·,ρ̌^n_ω̂^3n-2),α̅^n_ω̂^3n-2, π^n-1_ω̂^3n-3(i, ω̂_3n-1)andg̅̅̅^n(i, (ω̃,ω̂^3n-1))=g̅̅̅^n(i, ω̂^3n-1):=α̅^n(π̅^n(i, ω̂^3n-1), ω̂^3n-2) if π̅^n(i, ω̂^3n-1)≠ iJif π̅^n(i, ω̂^3n-1)= i.Define nowthe cross-internal extended type distribution after the random matching ρ̌̌̌^n_ω̂^3n-1 byρ̌̌̌^n_ω̂^3n-1[k,l]:=λ_0({∈ I: α̅^n_ω̂^3n-1(i)=k, g̅̅̅^n_ω̂^3n-1(i)=l }).(iii) Random type changing with break-up for matched agents:Introduce Ω̂_3n:=(S ×{0,1 })^I with internal power set ℱ̂_3n, where 0 represents “unmatched” and 1 represents “paired”; each point ω̂_3n=(ω̂_3n^1, ω̂_3n^2) ∈Ω̂_3n represents an internal function from I to S ×{ 0,1 }. Define a new type function α^n:(I ×Ω^3n) → S by letting α^n(i,(ω̃,ω̂^3n)):=α^n(i,ω̂^3n)=ω̂_3n^1(i). Fix (ω̃,ω̂^3n-1) ∈Ω^3n-1. For each i ∈ I, we proceed in the following way. * If π̅^n(i, ω̂^3n-1)=i (i is not paired after the matching step at time n), let τ_i^ω̃, ω̂^3n-1 be theprobability measure on the type space S ×{ 0,1 } that gives probability one to the type(α̅^n(i,(ω̃,ω̂^3n-2)),0)=(α̅^n(i,ω̂^3n-2),0) and zero to the rest* If π̅^n(i, (ω̃,ω̂^3n-1))=π̅^n(i, ω̂^3n-1)=j≠ i (i is paired after the matching step at time n), α̅^n(i,(ω̃,ω̂^3n-2))=α̅^n(i,ω̂^3n-2)=k,π̅^n(i, (ω̃,ω̂^3n-1))=π̅^n(i, ω̂^3n-1)=j and α̅^n(j,(ω̃,ω̂^3n-1))=α̅^n(j,ω̂^3n-1)=l, define a probability measure τ_ij^ω̃, ω̂^3n-1 on (S ×{ 0,1 }) ×(S ×{ 0,1 }) as τ_ij^ω̃, ω̂^3n-1((k',0), (l',0)) :=(1-ξ_kl(ω̃,n,ρ̌̌̌^n_ω̂^3n-1))ς_kl[k'](ω̃,n,ρ̌̌̌^n_ω̂^3n-1)ς_lk[l'](ω̃,n,ρ̌̌̌^n_ω̂^3n-1)and τ_ij^ω̃, ω̂^3n-1((k',1), (l',1)) :=ξ_kl(ω̃,n,ρ̌̌̌^n_ω̂^3n-1)σ_kl[k',l'](ω̃,n,ρ̌̌̌^n_ω̂^3n-1)for k',l' ∈ S, and zero for the rest. Let A^n_ω̂^3n-1={ (i,j) ∈ I × I: i < j,π̅^n(i,(ω̃,ω̂^3n-1))=π̅^n(i,ω̂^3n-1)=j } and B^n_ω̂^3n-1={ i ∈ I:π̅^n(i,(ω̃,ω̂^3n-1))= π̅^n(i,ω̂^3n-1)=i }. Define a Markov kernel P̂_3n^ω̂^3n-1 from Ω̃ to Ω̂^3n by P̂_3n^ω̂^3n-1(ω̃):=∏_i ∈ B^n_ω̃, ω̂^3n-1τ_i^ω̃, ω̂^3n-1⊗∏_(i,j) ∈ A^n_ω̂^3n-1τ_ij^ω̃, ω̂^3n-1.Let π^n(i,(ω̃,ω̂^3n)) =π^n(i, ω̂^3n) := J if π̅^n(i, ω̂^3n-1)=Jor ω̂_3n^2(i)=0or ω̂_3n^2(π̅^n(i, ω̂^3n-1))=0π̅^n(i, ω̂^3n-1)otherwise,andg^n(i,(ω̃,ω̂^3n))=g^n(i, ω̂^3n):=α^n(π^n(i, ω̂^3n), ω̂^3n) if π^n(i, ω̂^3n)≠ i Jif π^n(i, ω̂^3n)= i.Define ρ̂_ω̂^3n^n=λ_0(α^n_ω̂^3n, π^n_ω̂^3n )^-1. By repeating this procedure, we construct a hyperfinite sequence of internal transition probability spaces {(Ω_m, ℱ_m,Q_m)}_m=0^3M and a hyperfinite sequence of internal type functions and internal random matchings { (α^n, π^n) }_n=0^M. 
Moreover, define (Ω^m,ℱ^m) as in (<ref>), andP̂^m := ∏_i=1^m P̂_i ,Q^m:=P̃⋉P̂^m,where the product of the Markov kernels is ω̃-wise. Let (I ×Ω^3M, I_0 ⊗ℱ^3M, λ_0 ⊗ Q^3M) be the internal product probability space of (I, I_0, λ_0) and (Ω^3M, ℱ^3M, Q^3M). Denote the Loeb spaces of (Ω^3M, ℱ^3M, Q^3M) and the internal product (I ×Ω^3M, I_0 ⊗ℱ^3M, λ_0 ⊗ Q^3M) by (Ω^3M, ℱ, P) and (I ×Ω^3M, I⊠ℱ, λ⊠ P), respectively. For simplicity, let Ω^3M be denoted by Ω and Ω̂^3M by Ω̂. Denote now Q^3M by P and the Markov kernel P̂^3M by P̂.The properties of a dynamical system as well as the independence conditions follow now by applying similar arguments as in the proof of Theorem 5 in <cit.> for any fixed ω̃∈Ω̃. The only difference is that in our setting the input processes for the random mutation step and the break-up step also depend on the extended type distribution. Furthermore, these arguments are similar to the ones in the proof of Lemma <ref> and can be found there with all details. § PROOF OF THEOREM 3.14 IN <CIT.>We now prove Theorem 3.14 in <cit.> which is a generalization of the results in Appendix C in <cit.>. For n ≥ 1 we define the mapping Γ^n from Ω̃×Δ̂ to Δ̂ byΓ^n_kl(ω̃,p̂)=∑_k_1, l_1 ∈ S (1-ξ_k_1 l_1(ω̃,n,p̃̃̃^n)) σ_k_1 l_1[k,l](ω̃,n,p̃̃̃^n) p̃^n_k_1l_1+∑_k_1, l_1 ∈ S (1-ξ_k_1 l_1(ω̃,n,p̃̃̃^n)) σ_k_1 l_1[k,l](ω̃,n,p̃̃̃^n) θ_k_1l_1(ω̃,n,p̃^n)p̃^n_k_1J,andΓ^n_kJ(ω̃,p̂) =b_k(ω̃,n,p̃^n)p̃_kJ^n + ∑_k_1,l_1 ∈ Sξ_k_1l_1(ω̃,n,p̃̃̃^n) ς_k_1l_1[k](ω̃, p̃̃̃^n)p̃^n_k_1l_1+∑_k_1, l_1 ∈ Sξ_k_1 l_1(ω̃,n, p̃̃̃^n) ς_k_1l_1[k](ω̃,n,p̃̃̃^n) θ_k_1 l_1(ω̃,n,p̃^n)p̃_k_1J^nwithp̃_kl^n = ∑_k_1,l_1 ∈ S η_k_1 k(ω̃,n,p̂) η_l_1 l(ω̃,n,p̂)p̂_k_1l_1 p̃_kJ^n =∑_l ∈ Sp̂_lJη_lk(ω̃,n,p̂)and p̃̃̃_kl^n =p̃^n_kl+θ_kl(ω̃,n,p̃^n)p̃^n_kJ p̃̃̃_kJ^n =b_k(ω̃,n,p̃^n) p̃^n_kJ.Theorem 3.14 in <cit.> is proven with the help of the following lemmas. Assume that the discrete dynamical system 𝔻 defined in Definition 3.6 in <cit.> is Markov conditionally independent given ω̃ as defined in Definition 3.8 in <cit.>. Then given ω̃∈Ω̃, the discrete time processes {β_i^n }_n=0^∞, i ∈ I, are essentially pairwise independent on (I ×Ω̂, I⊠ℱ̂, λ⊠P̂^ω̃). Moreover, for fixed n=1,...,M also (β̅_i^n)_n=0^∞ and (β̅̅̅_i^n)_n=0^∞, i ∈ I, are essentially pairwise independent on (I ×Ω̂, I⊠ℱ̂, λ⊠P̂^ω̃). This can be proven by the same arguments used in the proof of Lemma 3 in <cit.>.We now derive a result which shows how to compute for a fixed ω̃∈Ω̃ the expected cross-sectional distributions 𝔼^P̂^ω̃[p̌^n ], 𝔼^P̂^ω̃[p̌̌̌^n ] and 𝔼^P̂^ω̃[p̂^n]. The following holds for any fixed ω̃∈Ω̃. * For each n ≥ 1, 𝔼^P̂^ω̃[p̂^n]= Γ^n (ω̃,𝔼^P̂^ω̃[p̂^n-1]), with Γ defined in (<ref>). * For each n≥ 1, we have 𝔼^P̂^ω̃[p̌^n_kl ]=∑_k_1, l_1 ∈ Sη_k_1,k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])η_l_1,l(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) 𝔼^P̂^ω̃[p̂^n-1_k_1,l_1] and 𝔼^P̂^ω̃[p̌_kJ^n]=∑_k_1 ∈ Sη_k_1,k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])𝔼^P̂^ω̃[p̂^n-1_k_1,J]. * For each n ≥ 1, we have 𝔼^P̂^ω̃[p̌̌̌^n_kl] =𝔼^P̂^ω̃[p̌^n_kl]+ θ_kl(ω̃,n,𝔼^P̂^ω̃[p̌^n ])𝔼^P̂^ω̃[p̌^n_kJ] and 𝔼^P̂^ω̃[p̌̌̌^n_kJ ]=b_k(ω̃,n,𝔼^P̂^ω̃[p̌^n ]) 𝔼^P̂^ω̃[p̌^n_kJ]. Fix ω̃∈Ω̃ and k,l ∈ S. By Lemma <ref> we know that the processes (β_i^n)_n=0^∞, i ∈ I, are essentially pairwise independent. Then the exact law of large numbers in Lemma 1 in <cit.> implies that p̂^n-1(ω̂)=𝔼^P̂^ω̃[λ(β^n-1)^-1] for P̂-almost all ω̂∈Ω̂. 
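Setting the proof aside for a moment, note that the map Γ^n defined above is a concrete deterministic recursion on Δ̂ and can be evaluated directly. Below is a minimal sketch for a single fixed (ω̃, n), so that all intensities enter as constant arrays; in the general case η, θ, ξ, σ, ς are functions of the current distributions exactly as in the displayed formulas, and the names are ours.

```python
import numpy as np

def Gamma_step(p_hat, eta, theta, xi, sigma, varsigma):
    """One application p-hat^n = Gamma^n(omega-tilde, p-hat^{n-1}).

    p_hat : (K, K+1) array; column K holds the unmatched entries p-hat_{kJ}.
    eta, theta, xi : (K, K); sigma : (K, K, K, K); varsigma : (K, K, K).
    """
    K = eta.shape[0]
    J = K                                   # index of the unmatched column
    # post-mutation distribution p-tilde
    pt = np.zeros((K, K + 1))
    pt[:, :K] = eta.T @ p_hat[:, :K] @ eta  # p~_kl = sum eta_{k1 k} eta_{l1 l} p_{k1 l1}
    pt[:, J] = eta.T @ p_hat[:, J]          # p~_kJ = sum_l p_{lJ} eta_{lk}
    # post-matching distribution p-tilde-tilde, with b_k = 1 - sum_l theta_kl
    b = 1.0 - theta.sum(axis=1)
    ptt = np.zeros((K, K + 1))
    ptt[:, :K] = pt[:, :K] + theta * pt[:, [J]]
    ptt[:, J] = b * pt[:, J]
    # break-up / type change: Gamma_kl and Gamma_kJ
    out = np.zeros((K, K + 1))
    for k1 in range(K):
        for l1 in range(K):
            w = ptt[k1, l1]                 # mass of matched (k1, l1) agents
            out[:, :K] += (1.0 - xi[k1, l1]) * sigma[k1, l1] * w
            out[:, J] += xi[k1, l1] * varsigma[k1, l1] * w
    out[:, J] += ptt[:, J]                  # unmatched agents stay unmatched
    return out
```

We now return to the proof of the lemma.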
Thus equations (<ref>) and (<ref>) are equivalent to P̂^ω̃(α̅_i^n=k_2, g̅_i^n=l_2 |α_i^n-1=k_1, g_i^n-1=l_1)=η_k_1,k_2(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])η_l_1,l_2(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) and P̂^ω̃(α̅_i^n=k_2, g̅_i^n=r |α_i^n-1=k_1, g_i^n-1=J)=η_k_1,k_2(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])δ_J(r). Therefore, for any k_1,l_1 ∈ S we have P̂^ω̃(β̅_i^n=(k,J) |β_i^n-1=(k_1,l_1))=0 and P̂^ω̃(β̅_i^n=(k,l) |β_i^n-1=(k_1,J))=0. Then with the same calculations as in the proof of Lemma 4 in <cit.> we get that 𝔼^P̂^ω̃[p̌^n_kl]= 𝔼^P̂^ω̃[λ(i ∈ I: β̅_ω^n(i)=(k,l))]=∫_I P̂^ω̃(β̅^n_i=(k,l)) dλ(i) =∑_k_1,l_1 ∈ S η_k_1,k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])η_l_1,l(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) 𝔼^P̂^ω̃[p̂^n-1_k_1 l_1] and 𝔼^P̂^ω̃[p̌_kJ^n]=∑_k_1 ∈ S η_k_1,k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])𝔼^P̂^ω̃[p̂^n-1_k_1J]. By Lemma <ref> we know that β̅^n is essentially pairwise independent. Again it follows by the exact law of large numbers that p̌^n(ω̂)=𝔼^P̂^ω̃[p̌^n] for P̂^ω̃-almost all ω̂∈Ω̂. Then (<ref>) and (<ref>) are equivalent to P̂^ω̃(g̅̅̅_i^n=l |α̅_i^n=k, g̅_i^n=J)=θ_kl(ω̃,n,𝔼^P̂^ω̃[p̌^n]) and P̂^ω̃(g̅̅̅_i^n=J |α̅_i^n=k, g̅_i^n=J)=b_k(ω̃,n,𝔼^P̂^ω̃[p̌^n]). By the same calculations as in the proof of Lemma 4 in <cit.> we have 𝔼^P̂^ω̃[p̌̌̌^n_kl]=𝔼^P̂^ω̃[p̌^n_kl]+ θ_kl(ω̃,n,𝔼^P̂^ω̃[p̌^n])𝔼^P̂^ω̃[p̌^n_kJ] and 𝔼^P̂^ω̃[p̌̌̌^n_kJ]=b_k(ω̃,n,𝔼^P̂^ω̃[p̌^n]) 𝔼^P̂^ω̃[p̌^n_kJ]. By Lemma <ref>, β̅̅̅^n is essentially pairwise independent and thus p̌̌̌^n(ω̂)=𝔼^P̂^ω̃[p̌̌̌^n] for P̂^ω̃-almost all ω̂∈Ω̂. Then (<ref>) and (<ref>) are equivalent to P̂^ω̃(α_i^n=l_1,g_i^n=l_2 |α̅_i^n=k_1, g̅̅̅_i^n=k_2)=(1-ξ_k_1k_2(ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n])) σ_k_1k_2[l_1,l_2](ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) and P̂^ω̃(α_i^n=l_1,g_i^n=J |α̅_i^n=k_1, g̅̅̅_i^n=k_2)=ξ_k_1k_2(ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) ς_k_1k_2[l_1](ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]), respectively. Thus 𝔼^P̂^ω̃[p̂_kl^n]=∑_k_1, l_1 ∈ S (1-ξ_k_1 l_1(ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n])) σ_k_1 l_1[k,l](ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) 𝔼^P̂^ω̃[p̌̌̌^n_k_1 l_1] and 𝔼^P̂^ω̃[p̂_kJ^n]=𝔼^P̂^ω̃[p̌̌̌^n_kJ]+ ∑_k_1,l_1 ∈ S ξ_k_1 l_1(ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) ς_k_1 l_1[k](ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) 𝔼^P̂^ω̃[p̌̌̌^n_k_1l_1]. By plugging (<ref>) in (<ref>) we get 𝔼^P̂^ω̃[p̂_kl^n] =∑_k_1, l_1 ∈ S (1-ξ_k_1 l_1(ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n])) σ_k_1 l_1[k,l](ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) 𝔼^P̂^ω̃[p̌^n_k_1 l_1] +∑_k_1, l_1 ∈ S (1-ξ_k_1 l_1(ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n])) σ_k_1 l_1[k,l](ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) θ_k_1l_1(ω̃,n,𝔼^P̂^ω̃[p̌^n])𝔼^P̂^ω̃[p̌^n_k_1J]. By using (<ref>) and (<ref>), it follows that 𝔼^P̂^ω̃[p̂_kJ^n]=b_k(ω̃,n,𝔼^P̂^ω̃[p̌^n]) 𝔼^P̂^ω̃[p̌^n_kJ]+ ∑_k_1,l_1 ∈ S ξ_k_1 l_1(ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) ς_k_1 l_1[k](ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) 𝔼^P̂^ω̃[p̌^n_k_1 l_1]+ ∑_k_1,l_1 ∈ S ξ_k_1 l_1(ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n]) ς_k_1 l_1[k](ω̃,n,𝔼^P̂^ω̃[p̌̌̌^n])θ_k_1l_1(ω̃,n,𝔼^P̂^ω̃[p̌^n])𝔼^P̂^ω̃[p̌^n_k_1J]. Assume that the discrete dynamical system 𝔻 defined in Definition 3.6 in <cit.> is Markov conditionally independent given ω̃∈Ω̃ according to Definition 3.8 in <cit.>. Then for fixed ω̃∈Ω̃ the following holds: * For λ-almost all i ∈ I, the extended type process {β^n_i }_n=0^∞ for agent i is a Markov chain on (I ×Ω̂, I⊠ℱ̂, λ⊠P̂^ω̃) with transition matrix z^n at time n-1. * {β^n }_n=0^∞ is also a Markov chain with transition matrix z^n at time n-1. Fix ω̃∈Ω̃. 1. The Markov property of {β^n_i }_n=0^∞ on (I ×Ω̂, I⊠ℱ̂, λ⊠P̂^ω̃) follows by using the same arguments as in the proof of Lemma 5 in <cit.>, for λ-almost all i ∈ I. We now derive the transition matrix with similar arguments as in <cit.>.
By putting together (<ref>), (<ref>) and (<ref>), we get𝔼^P̂^ω̃[p̂_kl^n] =∑_k_1, l_1,k',l' ∈ S (1-ξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n)) σ_k_1 l_1[k,l](ω̃,n,p̃̃̃^ω̃,n)η_k' k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) ·η_l' l_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) 𝔼^P̂^ω̃[p̂^n-1_k'l'] +∑_k_1, l_1,k' ∈ S (1-ξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n)) σ_k_1 l_1[k,l](ω̃,n,p̃̃̃^ω̃,n) θ_k_1l_1(ω̃,n,p̃^ω̃,n) ·η_k' k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) 𝔼^P̂^ω̃[p̂^n-1_k'J]. Thus we havez^n_(k'J)(kl)(ω̃) = ∑_k_1, l_1 ∈ S(1-ξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n)) σ_k_1 l_1[k,l](ω̃,n,p̃̃̃^ω̃,n) θ_k_1l_1(ω̃,n,p̃^ω̃,n) ·η_k' k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])andz^n_(k'l')(kl)(ω̃) =∑_k_1, l_1 ∈ S (1-ξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n)) σ_k_1 l_1[k,l](ω̃,n,p̃̃̃^ω̃,n)η_k' k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) ·η_l' l_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]).Similarly, equations (<ref>), (<ref>) and (<ref>) yield to𝔼^P̂^ω̃[p̂_kJ^n ]=∑_k' ∈ Sb_k(ω̃,n,p̃^ω̃,n) η_k'k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) 𝔼^P̂^ω̃[p̂^n-1_k'J] + ∑_k_1,l_1,k',l' ∈ Sξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n) ς_k_1 l_1[k][ω̃,n,p̃̃̃^ω̃,n] ·η_k'k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])η_l'l_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) 𝔼^P̂^ω̃[p̂^n-1_k'l'] + ∑_k_1,l_1,k' ∈ Sξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n) ς_k_1 l_1[k](ω̃,n,p̃̃̃^ω̃,n)θ_k_1l_1(ω̃,n,p̃^ω̃,n)·η_k'k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) 𝔼^P̂^ω̃[p̂^n-1_k'J].Therefore, the transition probabilities from time n-1 to time n can be written asz^n_(k'l')(kJ)(ω̃) =∑_k_1,l_1 ∈ Sξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n) ς_k_1 l_1[k](ω̃,n,p̃̃̃^ω̃,n) ·η_k'k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])η_l'l_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])andz_(k'J)(kJ)^n(ω̃) =b_k(ω̃,n,p̃^ω̃,n) η_k'k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])+∑_k_1,l_1∈ Sξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n) ς_k_1 l_1[k](ω̃,n,p̃̃̃^ω̃,n)θ_k_1l_1(ω̃,n,p̃^ω̃,n) η_k'k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]).2. The transition matrix of {β^n }_n=0^∞ at time n-1 can be derived by using (<ref>)-(<ref>) and the Fubini property applied to λ⊠P̂^ω̃ for every fixed ω̃∈Ω̃ as in the proof of Lemma 6 in <cit.>. We are now able to prove Theorem 3.14 in <cit.>, which we present here. Assume that the discrete dynamical system 𝔻 introduced in Definition 3.6 in <cit.> is Markov conditionally independent given ω̃∈Ω̃ according to Definition 3.8 in <cit.>. Given ω̃∈Ω̃, the following holds: * For each n ≥ 1, 𝔼^P̂^ω̃[p̂^n]= Γ^n (ω̃,𝔼^P̂^ω̃[p̂^n-1]). * For each n ≥ 1,we have 𝔼^P̂^ω̃[p̌^n_kl ]=∑_k_1, l_1 ∈ Sη_k_1,k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])η_l_1,l(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) 𝔼^P̂^ω̃[p̂^n-1_k_1,l_1] and 𝔼^P̂^ω̃[p̌_kJ^n]=∑_k_1 ∈ Sη_k_1,k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])𝔼^P̂^ω̃[p̂^n-1_k_1,J]. * For each n ≥ 1,we have 𝔼^P̂^ω̃[p̌̌̌^n_kl] =𝔼^P̂^ω̃[p̌^n_kl]+ θ_kl(ω̃,n,𝔼^P̂^ω̃[p̌^n ])𝔼^P̂^ω̃[p̌^n_kJ] and 𝔼^P̂^ω̃[p̌̌̌^n_kJ ] =b_k(ω̃,n,𝔼^P̂^ω̃[p̌^n ]) 𝔼^P̂^ω̃[p̌^n_kJ]. * For λ-almost every agent i, the extended-type process {β_i^n }_n=0^∞ is a Markov chain in Ŝ on (I ×Ω̂, I⊠ℱ̂, λ⊠P̂^ω̃), whose transition matrix z^n at time n-1 is given byz^n_(k'J)(kl)(ω̃) = ∑_k_1, l_1,k' ∈ S(1-ξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n)) σ_k_1 l_1[k,l](ω̃,n,p̃̃̃^ω̃,n) θ_k_1l_1(ω̃,n,p̃^ω̃,n) ·η_k' k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) z^n_(k'l')(kl)(ω̃) =∑_k_1, l_1,k',l' ∈ S (1-ξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n)) σ_k_1 l_1[k,l](ω̃,n,p̃̃̃^ω̃,n)η_k' k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) ·η_l' l_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) z^n_(k'l')(kJ)(ω̃) =∑_k_1,l_1 ∈ Sξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n) ς_k_1 l_1[k](ω̃,n,p̃̃̃^ω̃,n) ·η_k'k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1])η_l'l_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) z_(k'J)(kJ)^n(ω̃) =b_k(ω̃,n,p̃^ω̃,n) η_k'k(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]) +∑_k_1,l_1∈ Sξ_k_1 l_1(ω̃,n,p̃̃̃^ω̃,n) ς_k_1 l_1[k](ω̃,n,p̃̃̃^ω̃,n)θ_k_1l_1(ω̃,n,p̃^ω̃,n)·η_k'k_1(ω̃,n,𝔼^P̂^ω̃[p̂^n-1]). 
* For λ-almost every i and λ-almost every j, the Markov chains {β_i^n }_n=0^∞ and {β_j^n }_n=0^∞ are independent on (Ω̂, ℱ̂, P̂^ω̃).

* For P̂^ω̃-almost every ω̂∈Ω̂, the cross-sectional extended type process {β^n_ω̂}_n=0^∞ is a Markov chain on (I, I, λ) with transition matrix z^n at time n-1, as defined in (<ref>)-(<ref>).

* We have P̂^ω̃-a.s. that 𝔼^P̂^ω̃[p̌^n_kl]=p̌^n_kl, 𝔼^P̂^ω̃[p̌̌̌^n_kl]=p̌̌̌^n_kl and 𝔼^P̂^ω̃[p̂^n_kl]=p̂^n_kl.

Fix ω̃∈Ω̃. Points 1. to 5. of Theorem 3.14 in <cit.> follow directly from Lemmas <ref>, <ref> and <ref>. Moreover, Points 6. and 7. can be proven by using the same arguments as in the proof of Theorem 4 in <cit.>. | http://arxiv.org/abs/2311.15793v1 | {
"authors": [
"Francesca Biagini",
"Andrea Mazzon",
"Thilo Meyer-Brandis",
"Katharina Oberpriller"
],
"categories": [
"q-fin.MF"
],
"primary_category": "q-fin.MF",
"published": "20231127131441",
"title": "Supplement Liquidity based modeling of asset price bubbles via random matching"
} |
Towards complete characterization of topological insulators and superconductors: A systematic construction of topological invariants based on Atiyah-Hirzebruch spectral sequence

Seishiro Ono (Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, Wako 351-0198, Japan) and Ken Shiozaki (Center for Gravitational Physics and Quantum Information, Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan)

These authors contributed equally. Seishiro Ono: [email protected] (riken.jp); Ken Shiozaki: [email protected] (yukawa.kyoto-u.ac.jp)

YITP-23-93, RIKEN-iTHEMS-Report-23

January 14, 2024

The past decade has witnessed significant progress in the investigation of topological materials. Symmetry-indicator theory and topological quantum chemistry provide an efficient scheme to diagnose topological phases from only partial information of wave functions, without full knowledge of topological invariants, which has resulted in a recent comprehensive materials search. However, not all topological phases can be captured by this framework, and topological invariants are needed for a more refined diagnosis of topological phases. In this study, we present a systematic framework to construct topological invariants for a large part of symmetry classes, which should be contrasted with the existing invariants discovered through one-by-one approaches. Our method is based on the recently developed Atiyah-Hirzebruch spectral sequence in momentum space. As a demonstration, we construct topological invariants for time-reversal symmetric spinful superconductors with conventional pairing symmetries in all space groups, for which symmetry indicators are silent. We also validate that the obtained quantities work as topological invariants by computing them for randomly generated symmetric Hamiltonians. Remarkably, the constructed topological invariants completely characterize K-groups in 159 space groups. Our topological invariants for normal conducting phases are defined under some gauge conditions. To facilitate efficient numerical simulations, we discuss how to derive gauge-independent topological invariants from the gauge-fixed topological invariants through some examples. Combined with first-principles calculations, our results will help us discover topological materials that could be used in next-generation devices and pave the way for a more comprehensive topological materials database.

§ INTRODUCTION

Over the past decades, topological phases of matter have attracted much attention. In particular, topological materials, exemplified by topological insulators <cit.> and superconductors <cit.>, have been intensively studied because nontrivial bulk topology gives rise to exotic surface states and fascinating response phenomena <cit.>. These properties are expected to be utilized for applications in next-generation devices such as fault-tolerant quantum computers, ultrafast memories, and low-power devices <cit.>. Given the diverse nature of bulk band topology <cit.>, the resulting consequences also exhibit a large variety. This naturally raises two questions: (I) How many topological phases exist? (II) What quantities can distinguish materials with different topology?
In fact, these two questions have been central issues in condensed matter physics for the last ten years.During this period, there have been numerous developments addressing these questions, as described below. Symmetry is a fundamental concept in physics, which also plays a crucial role in studies of topological phases of matter. The existence of symmetries enriches the variety of topological phases.Such topological phases protected by symmetries can be trivialized (continuously deformed to a trivial product state without closing the gap) once the protecting symmetries are broken.In recent years, there has been considerable research effort to establish the full classification of topological phases with various symmetries. A milestone is a complete classification and characterization of stable topological phases of noninteracting systems in the presence or absence of internal symmetries, such as time-reversal and particle-hole symmetries <cit.>.In addition to internal symmetries, crystalline symmetries, such as rotation and spatial inversion, are also present in solids.Recently, it has been shown that crystalline symmetries also protect topological phases <cit.>, as exemplified by mirror Chern insulators <cit.>, topological insulators and superconductors protected by nonsymmorphic symmetries <cit.>, and higher-order topological insulators and superconductors <cit.>.More recently, a sophisticated method to classify topological crystalline phases has been developed <cit.>, which is based on the real-space picture.The basic assumption behind the classification scheme is that any topological crystalline phase can be continuously deformed to a patchwork of lower-dimensional topological phases protected by onsite symmetries. The classification scheme has been widely applied to topological insulators <cit.>, superconductors <cit.>, and bosonic systems <cit.>.This approach provides a clear insight into question (I) posed earlier.While classifications of noninteracting fermionic systems in real space have been achieved for a large part of symmetry settings, a diagnosis of topological materials has been performed in momentum space. An efficient diagnostic scheme has been recently developed based on irreducible representations (irreps) of wave functions at high-symmetry points (called 0-cells in later discussions) <cit.>. In this scheme, we can partially diagnose the topological phase of a target system by comparing irreps of wave functions at high-symmetry momenta with those of atomic insulators <cit.>.Such a scheme is known as symmetry-based indicator <cit.> or topological quantum chemistry <cit.>. The theory can be easily combined with density functional theory calculations, since the latter can give irreps at high-symmetry momenta.In fact, comprehensive searches for topological materials have been conducted using symmetry indicators or topological quantum chemistry <cit.>.Moreover, the theory of symmetry indicators provides various topological invariants consisting of the number of irreps appearing in occupied bands and Pfaffian invariants at high-symmetry momenta <cit.>. Although the aforementioned topological invariants are easy to compute, it is known that the representation-based diagnosis is incomplete; namely, various topological phases are undetectable only by such topological invariants consisting only of quantities defined at high-symmetry points <cit.>.This implies that the complete diagnosis of topological phases requires full knowledge of topological invariants. 
However, despite considerable advancements in classification problems, the range of available topological invariants remains limited. The previous constructions have typically been done on a case-by-case basis. To our best knowledge, a systematic method for identifying topological invariants has yet to be established.In this work, we present a systematic scheme for constructing topological invariants of normal and superconducting phases by leveraging the Atiyah-Hirzebruch spectral sequence (AHSS) in momentum space <cit.>. The construction of topological invariants involves two pivotal steps: (i) First, we need to identify irreps on the subregions of momentum space that are used in the expression of topological invariants. For instance, mirror winding and mirror Chern numbers are defined for each mirror-eigenvalue sector on mirror lines and planes, respectively <cit.>; (ii) Second, we need to discover the quantities that characterize topological nature of the system. For example, Mirror winding and Chern numbers are computed from the winding of q-matrix (which is formally defined later) and the integral of Berry curvature in each mirror-eigenvalue sector.Remarkably, the momentum-space AHSS provides valuable insights into both steps. As the first step, we discuss a general framework for extracting information about the p-dimensional subregions and irreps defined on them from the first differentials of AHSS. Then, building upon the physical meaning of E_1-pages of AHSS—classifications of p-dimensional topological phases on the p-dimensional subregions, we define quantities for each irrep identified in the first step, which characterize topological nature of p-dimensional subregions in momentum space. AHSS can also inform us how to combine the defined quantities to obtain topological invariants. While we formulate the first step for arbitrary p, for the second step we focus on the case of p=1 and discuss how to systematically construct topological invariants defined on zero- and one-dimensional subregions. To demonstrate our approach, we construct topological invariants for time-reversal symmetric spinful superconductors with conventional pairing symmetries in 230 space groups, for which symmetry indicators are always trivial, and numerically confirm that they are actually quantized for randomly generated symmetric Hamiltonians.Surprisingly, K-groups in 159 out of the 230 space groups are completely characterized by our topological invariants. Our construction gives topological invariants for normal conducting phases under certain gauge conditions. For efficient numerical calculations, we discuss how to derive gauge-independent topological invariants from the gauge-dependent ones through illustrative examples where symmetry indicators do not work.It should be emphasized that AHSS in momentum space is originally introduced and used as a tool for classifying topological phases based on the momentum-space picture <cit.>. On the other hand, in this work, we explore the utility of AHSS in momentum space as a versatile tool for band topology and show that the AHSS can be utilized for constructing topological invariants.Furthermore, we stress that many invariants based on this work can capture topological phases not diagnosed by symmetry indicators.Thus, our work would help build a comprehensive database of topological materials beyond Refs. <cit.>. The rest of this paper is organized as follows. In Sec. 
<ref>, after reviewing AHSS briefly, we present our framework to construct topological invariants in a systematic way.In Sec. <ref>, we demonstrate the power of our systematic construction through some examples. As shown in this section, our approach can provide complicated expressions of topological invariants automatically. In Sec. <ref>, we discuss our construction of topological invariants for normal conducting phases. As mentioned above, the obtained topological invariants for normal conducting phases are defined under some gauge conditions. Since they are usually not suitable for numerical simulations, we illustrate how to find gauge-independent topological invariants for normal conducting phases from the gauge-dependent ones.In Sec. <ref>, we conclude the paper with outlooks for future works.Some technical details are discussed in appendices to avoid straying from the main subjects. § GENERAL FRAMEWORK As mentioned in Sec. <ref>, a refined diagnosis requires knowledge of topological invariants. In this section, we discuss a systematic framework to construct topological invariants in momentum space, for which we employ AHSS in momentum space <cit.>. There are two crucial steps: (i) identifying irreps on p-dimensional subregions responsible for topological nontriviality; (ii) finding quantities to detect topological nature of systems. These two steps are discussed in Secs. <ref> and <ref>, respectively.We numerically verify that our topological invariants actually work for time-reversal symmetric superconductors with conventional pairing symmetries by computing the constructed topological invariants in this way for randomly generated symmetric Hamiltonians.In Sec. <ref>, we explain how to generate random symmetric Hamiltonians and to confirm that our invariants work for these classes. §.§ Review on Atiyah-Hirzebruch spectral sequence in momentum spaceIn this subsection, to be self-contained, we provide a brief review of AHSS in momentum space <cit.>.In particular, here we introduce basic notions of AHSS and focus on their physical meaning rather than showing how to actually compute them.All technical details of the computation are discussed in our companion work in Ref. <cit.>.Readers familiar with AHSS in momentum space can skip this subsection.Before moving on to the review of AHSS, let us briefly explain the background of AHSS.Let G be a magnetic space groupfor normal conducting phases and a magnetic space group with particle-hole symmetryfor superconducting phases, i.e., G = for normal conducting phases and G =+.An element g = {p_g|a_g}∈ G transforms a point x∈ℝ^3 into p_gx+a_g, where p_g is an orthogonal matrix and a_g is a vector.When Π denotes the translation subgroup of G, G/Π is isomorphic to a magnetic point group with or without particle-hole symmetry.As a result, free fermionic topological phases on a d-dimensional torus T^d are classified by the twisted equivariant K-group ^ϕK_G/Π^(z,c)-n(T^d) <cit.> (see Appendix <ref> for a brief review of K-theory).The twisted equivariant K-group contains symmetry data denoted by ϕ, c, z, and n. First, ϕ and c are defined as maps ϕ, c: G/Π→{± 1}. 
A symmetry g ∈ G/Π is unitary (antiunitary) whenϕ_g = +1 (ϕ_g = -1), and its symmetry representation 𝒰_(g) commutes (anticommutes) with Hamiltonian H_ when c_g = +1 (c_g = -1).In other words, 𝒰_(g) and H_ satisfy the following relations 𝒰_(g)H^ϕ_g_ = c_gH_g𝒰_(g),where we introducethe following notation of matrices: A^ϕ_g = A for ϕ_g = +1 and A^ϕ_g = A^* for ϕ_g = -1.Next, the symbol z represents the set of U(1) projective factors {z_(g,g')∈U(1)}_g,g' ∈ G/Π, which are defined by 𝒰_g'(g)𝒰^ϕ_g_(g')= z_(g,g')𝒰_(gg'). Last, an integer n denotes the grading in K-theory, on which the physical meaning of K-groups depends. For n=-1, 0, and 1, the K-groups correspond to the classifications of gauge transformations, stable gapped Hamiltonians, and gapless Hamiltonians.Unfortunately, it is not known how to directly compute twisted equivariant K-groups for a large part of symmetry classes, except for some simple cases such as order-two spatial symmetries and point-group symmetries <cit.>.Recently, it turns out that AHSS in momentum space provides us with fruitful information about topology in momentum space. If we can compute AHSS completely for a given symmetry setting, it generally gives us an “approximate K-group" in the sense that the obtained Abelian group is the same as the exact K-group as a set but not as an Abelian group <cit.>. On the other hand, as discussed in Ref. <cit.>, AHSS can completely determine ^ϕK_G/Π^(z,c)-n(T^d) (d ≤ 3) for various symmetry settings in which we are interested.§.§.§ Cell decompositionTo perform AHSS calculations, we introduce a sequence of spaces X_0 ⊂ X_1 ⊂⋯⊂ T^d,where X_p is a p-dimensional subspaces of T^d, the so-called p-skeleton. Such a decomposition is known as cell decomposition <cit.>. In the following, we explain a way to obtain X_p for three dimensions. First, we find a fundamental domain of the first Brillouin zone (BZ) that spans the entire BZ by symmetry operations without any overlap. Note that, by definition, any two points strictly inside the fundamental domain are not related by symmetries. The interior of the fundamental domain is an open polyhedron, which is denoted by D^3.Then, we decompose boundary objects of the fundamental domain into p-dimensional subregions.The open polygons on faces of the fundamental domain are the case of p=2, the open edge line segments of the polygons are p=1, and the boundary points of the line segments are p=0.They are denoted by {D_i^p}_i. We also assign an orientation to each of the p-dimensional subregions. For later convenience, let us define a set of all inequivalent p-cells in the fundamental domain by I_orb^p.By acting symmetry operations on these p-cells, we haveC_p := ⋃_i ∈ I_orb^p⋃_g ∈ G/Π g(D_i^p).Elements in C_p are called p-cells.In this construction, each p-cell satisfies the following conditions: (i) The intersection of any two p-cells is an empty set. (ii) A symmetry g ∈ G/Π keeps every point in D_i^p invariant or transforms a point _i^p in D_i^p into a point in another p-cell g(D_i^p). More formally, for _i^p∈ D_i^p, g _i^p = _i^p +^∃G (G is a reciprocal lattice vector) or g_i^p∈ g(D_i^p). (iii) There are no isolated p-cells (p ≤ 2). Every (p-1)-cell is in a boundary of a p-cell. (iv) The orientation of p-cells respects all symmetries in G.In Supplementary Materials, we show our cell decomposition of a fundamental domain in each magnetic space group with and without particle-hole symmetry. Using the set of p-cells C_p, we define the p-skeleton byX_0 = C_0, X_p = X_p-1⋃ C_p (p ≥ 1). 
It should be noted that, in our construction, we distinguish among p-cells even when they are symmetry-related. Let us illustrate how to obtain the cell decomposition for layer group p1̅ whose generators are lattice translations and spatial inversion. As discussed above, the first step is to find a fundamental domain of BZ. Here, we consider -π≤ k_x ≤π and 0 ≤ k_y ≤π as our fundamental domain.Then, we decompose the fundamental domain into one 2-cell (pink plane), five 1-cells (solid red lines), and four 0-cells (orange circles) in the left panel of Fig. <ref> (a). Also, {I_orb^p}_p=0^2 are given by I_orb^0 = {Γ, X, Y, M}; I_orb^1 = {a, b, c}; I_orb^2 = {α}. By acting inversion symmetry on this fundamental domain, we obtain C_p as C_0= {Γ, X, Y, M}; C_1= {a, b, c, a_1, b_1, c_1}; C_2= {α, α_1}.As mentioned above, here we assign symmetry-equivalent p-cells to different labels.For example, a = {(k_x, 0) | k_x ∈ (0, π)} is related to a_1 = {(k_x, 0) | k_x ∈ (-π, 0)} by inversion. Similarly, we can construct a cell decomposition for layer group p2/m11 generated by translations, inversion, and twofold rotation along x-axis. As a result, we haveC_0= {Γ, X, Y, M}; C_1= {a, b, c, d, a_1, b_1, c_1, d_1}; C_2= {α, α_1, α_2, α_3}.§.§.§ E_1-pages As a preliminary step, we find a finite subgroup of G whose elements do not change .To do so, we first define a subgroup of G by G_ = {g ∈ G | ϕ_g p_g=+ ^∃G}, which is referred to as little group. It should be noted that the number of elements in G_ is infinite since the translation group Π is a subgroup of G_ by definition.Furthermore, Π is a normal subgroup of G_, i.e., tht^-1∈ G_ for ^∀ h ∈ G_ and ^∀ t ∈Π.As a result, we can always obtain the desired finite group by G_/Π.Let us comment on projective factors of G_/Π. Suppose that we have symmetry representations U_(g) (g ∈ G) such that U_g'(g)U_^ϕ_g(g') = z^int(g,g')U_(gg'). We can always find representations of G_/Π from those of G_ as𝒰_(g)= U_(g)e^i ·a_g,where a_g is a fractional translation or zeros <cit.>.Correspondingly, the projective factors of G_/Π are given by 𝒰_(h)𝒰^ϕ_h_(h') = z_(h,h')𝒰_(hh')(h,h' ∈ G_/Π),where z_ (h,h')= z^int(h,h')e^-i ·(p_h a_h'- ϕ_h a_h'). Let ϕ|_,c|_, and z|_ be symmetry data ϕ, c, and z restricted to elements in G_/Π, respectively.Also, since G_ is common for every point ∈ D^p, G_D^p denotes the common little group. As discussed in Refs. <cit.> and <cit.>, for a given filtration {X_p}_p=0^3, a finitely generated Abelian group E_1^p,-n is defined byE_1^p,-n := ^ϕ K^(z,c)+p-n_G/Π(X_p,X_p-1) ≅⊕_i ∈ I_orb^p^ϕ|_D^p_i K^(z|_D^p_i,c|_D^p_i)+p-n_G_D^p_i/Π(D^p_i, ∂ D^p_i) ≅⊕_i∈ I_orb^p^ϕ|_D^p_iK̃^(z|_D^p_i,c|_D^p_i)+p-n_G_D^p_i/Π(D^p_i/∂ D^p_i) ≅⊕_i ∈ I_orb^p^ϕ|_^p_iK̃^(z|_^p_i,c|_^p_i)_G_^p_i/Π({^p_i}×S̃^n) ,where _i^p is a representative point of D_i^p and the definitions of K-groups are briefly discussed in Appendix <ref>. Here, instead of defining the K-groups formally, we mention physical meaning of E_1^p, -n, which can be interpreted in various ways since E_1^p, -n is defined in terms of K-groups. (i) According to Eq. (<ref>), E_1^p, -n represents p-dimensional gapped topological phases on p-cells but trivial on (p-1)-cells when (p-n) = 0. (ii) According to Eq. (<ref>), E_1^p, -n corresponds to gapped topological phases on the p-dimensional sphere when (p-n) = 0 and gapless points on the p-dimensional sphere when (p-n) = 1. (iii) According to Eq. 
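The orbit construction in Eq. (<ref>) is mechanical once representative cells and the point-group action are given. The following sketch (our own helper; cells are identified by a generic interior point, chosen by hand for illustration) reproduces the orbit lengths of the p1̅ example above, e.g. the pairs {a, a_1} and {α, α_1}:

```python
import numpy as np

def orbit_cells(rep_points, point_group, tol=1e-8):
    """Orbits C_p generated from representative cells, for a 2D layer group.

    rep_points  : dict label -> generic interior point of the cell D_i^p
    point_group : list of 2x2 integer matrices acting as k -> p_g k
    Points are folded back into the fundamental BZ (-pi, pi]^2, so cells
    related by a reciprocal lattice vector are identified.
    """
    def fold(k):
        return (k + np.pi) % (2.0 * np.pi) - np.pi

    orbits = {}
    for label, k in rep_points.items():
        orbit = []
        for g in point_group:
            kg = fold(g @ np.asarray(k, dtype=float))
            if not any(np.allclose(kg, q, atol=tol) for q in orbit):
                orbit.append(kg)
        orbits[label] = orbit
    return orbits

# layer group p-1: the point group is {identity, inversion}
pg = [np.eye(2, dtype=int), -np.eye(2, dtype=int)]
reps = {"Gamma": (0.0, 0.0), "X": (np.pi, 0.0), "Y": (0.0, np.pi),
        "M": (np.pi, np.pi),        # 0-cells: orbits of length 1
        "a": (0.5 * np.pi, 0.0),    # 1-cell a: orbit {a, a_1} of length 2
        "alpha": (1.0, 0.5)}        # 2-cell alpha: orbit {alpha, alpha_1}
for label, orbit in orbit_cells(reps, pg).items():
    print(label, len(orbit))
```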
(<ref>), E_1^p, -n can be understood as the direct sum of classification of mass terms in n-dimensional massive Dirac Hamiltonians given by H_^p_i = ∑_μ = 1^nk̃_μγ_μ + m γ_0(k̃_μ∈S̃^n),γ_μγ_ν + γ_νγ_μ = 2δ_μν, with symmetry group G__i^p/Π. Here, symmetries in G__i^p/Π trivially act on the additional sphere S̃^n, and thus they serve as internal symmetries on the p-cell. As a result, the classification of H_^p_i is given by ⊕_απ_n(𝒞_s(α)) ≃⊕_απ_0(𝒞_s(α)+n ) or ⊕_απ_n(ℛ_s(α)) ≃⊕_απ_0(ℛ_s(α)+n) <cit.>, where 𝒞_s(α) and ℛ_s(α) are classifying spaces of the Dirac Hamiltonians (see also Table <ref>).To construct Abelian group E_1^p, -n, it is convenient to use the third interpretation. Then, the remaining task is to obtain ^ϕ|_^p_i K^(z|_^p_i,c|_^p_i)_G_^p_i/Π({^p_i}×S̃^n), i.e., classifications of the massive Dirac Hamiltonian in the presence of onsite symmetries.In fact, this task is easily achieved by identifying an effective internal symmetry class of each irrep of the unitary part of G__i^p/Π <cit.>.Such symmetry classes are known as emergent Altland-Zirnbauer symmetry class (EAZ class).For every , we can always decompose G_/Π into the following four parts:G_/Π = _ + _ + _ + _,where_ = {g ∈ G_/Π | (ϕ_g, c_g) = (+1,+1)}; _ = {g ∈ G_/Π | (ϕ_g, c_g) = (-1,+1)};_ = {g ∈ G_/Π | (ϕ_g, c_g) = (-1,-1)};_ = {g ∈ G_/Π | (ϕ_g, c_g) = (+1,-1)}. Once we identify _, we can compute irreps of _, whose labels are denoted by α in the following.For each irrep of __i^p, we can compute W^α__i^p(𝒫) ∈{0, ±1}, W^α__i^p() ∈{0, ± 1}, and W^α__i^p(𝒥)∈{0, 1} defined byW^α__i^p(𝒫) = 0 (_ = ∅) 1/|𝒫__i^p|∑_c ∈__i^pz__i^p(c, c)χ__i^p^α(c^2)∈{0, ± 1}, W^α__i^p()= 0 (_ = ∅) 1/|__i^p|∑_a ∈__i^pz__i^p(a, a)χ__i^p^α(a^2)∈{0, ± 1}, W^α__i^p(𝒥) = 0 (_ = ∅) 1/|__i^p|∑_g ∈__i^pz__i^p(γ, γ^-1gγ)/z__i^p(g, γ) [χ^α__i^p(γ^-1 g γ)]^*χ^α__i^p(g),where χ^α_(g) is the character of an irrep offor g∈ and γ is a representative element of _. These quantities inform us about how symmetries in _, _, and _ affect irreps.When W__i^p^α(𝒱) = 0 (𝒱=, , ) and 𝒱__i^p≠∅, symmetries in 𝒱__i^p transform irrep α into another one.On the other hand, when W__i^p^α(𝒱) = ± 1, irrep α is invariant under the symmetries. As a result, the EAZ symmetry class for α is identified byW__i^p[α] := (W^α__i^p() , W^α__i^p(𝒫), W^α__i^p(𝒥)), which is called Wigner criteria <cit.>.The classification of the n-dimensional massive Dirac Hamiltonians is shown in Table <ref>.As a result, E_1^p,-n is written byE^p, -n_1 =⊕_i(⊕_α_2[b^(p)_^p_i,α] ⊕⊕_β[b^(p)_^p_i,β]),where b^(p)_^p_i,α denotes a basis of E^p, -n_1 that is defined for irrep α.Also, _2[b^(p)_^p_i,α] or [b^(p)_^p_i,α] represents an Abelian group generated by b^(p)_^p_i,α.§.§.§ first differential d_1^p,-n and E_2-pagesIt should be emphasized that, although an element of E_1^p,-n corresponds to a massive Dirac Hamiltonian defined on p-cells, it is not necessary to be gapped on (p+1)-cells. In other words, elements of E_1^p,-n are sometimes incompatible with those of E_1^p+1,-n. We implement the relation between E_1^p,-n and E_1^p+1,-n in a homomorphismd_1^p,-n: E_1^p,-n→ E_1^p+1,-n,which is called first differential <cit.>. For our purpose, it is sufficient to consider the cases where n = (p-1), p, and (p+1). 
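As a side note before constructing the differentials: given multiplication and inverse tables, the factor system z_k, and the characters of the unitary part, the Wigner criteria above are finite sums and straightforward to evaluate. Here is a minimal sketch (the tables are assumed given; building them from a space-group database is a separate task, and the sum in W^α(𝒥) is taken over the unitary part 𝒢):

```python
import numpy as np

def wigner_criteria(mul, inv, z, chi, Gu, A, P, J):
    """Wigner triple (W(A), W(P), W(J)) of one irrep alpha of G_k/Pi.

    mul[g, h], inv[g] : multiplication and inverse tables (indices 0..N-1)
    z[g, h]           : U(1) factor system z_k(g, h)
    chi[g]            : character of alpha on the unitary part
    Gu, A, P, J       : index lists of the unitary, antiunitary symmetry,
                        antiunitary antisymmetry, and unitary antisymmetry parts
    Returns values in {0, +1, -1}, which fix the EAZ class (Table <ref>).
    """
    def avg_square(elems):
        # (1/|V|) sum_{a in V} z(a, a) chi(a^2)
        return round(np.real(sum(z[a, a] * chi[mul[a, a]] for a in elems)) / len(elems))

    WA = avg_square(A) if A else 0
    WP = avg_square(P) if P else 0
    if J:
        gam = J[0]                    # representative gamma of the chiral-like part
        s = sum(z[gam, mul[inv[gam], mul[g, gam]]] / z[g, gam]
                * np.conj(chi[mul[inv[gam], mul[g, gam]]]) * chi[g] for g in Gu)
        WJ = round(np.real(s) / len(Gu))
    else:
        WJ = 0
    return WA, WP, WJ

# sanity check: spinless time reversal, T^2 = +1, trivial irrep -> class AI
mul = np.array([[0, 1], [1, 0]])      # elements: 0 = e, 1 = T
inv = np.array([0, 1])
z = np.ones((2, 2), dtype=complex)    # trivial factor system
chi = {0: 1.0}                        # character on the unitary part {e}
print(wigner_criteria(mul, inv, z, chi, Gu=[0], A=[1], P=[], J=[]))  # (1, 0, 0)
```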
Here the detailed calculation to construct such a homomorphism is discussed in our companion work <cit.>.Based on the second interpretation of E_1^p,-n, we present another physical meaning of d_1^p,-p.Since E_1^p,-p and E_1^p+1,-p correspond to gapped and gapless phases, d_1^p,-p can be understood as the process of generating gapless points on (p+1)-cells from band inversions on p-cells.Then, the d_1^p,-p⊆ E_1^p,-p represents the gapped Hamiltonians on p- and (p+1)-cells.Also, d_1^p-1,-p can be interpreted as the generation of trivial gapped Hamiltonians on p- and (p+1)-cells.As a result, we define E_2-pages byE_2^0,0 :=d_1^0,0; E_2^p,-p :=d_1^p,-p/d_1^p-1,-p (for 1≤ p ≤ d-1); E_2^d,-d := E_1^d,-d/d_1^d-1,-d,which represents topologically nontrivial phases gapped on (p-1)-, p-, and (p+1)-cells. Furthermore, similar to E_1-pages, some elements of E_2^p,-p might be incompatible with gapped phases on (p+r)-cells and contain trivial phases generated from (p-r)-cells.Then, to obtain completely gapped phases, we must consider higher differential and E_r-pages for r ≥ 2. However, the higher differentials are out of the scope of this work. Importantly, to the best of our knowledge, we do not have a systematic way to construct d_r^p,-n. Instead, we focus on E_2-pages and construct topological invariants for E_2^1,-1 in Sec. <ref>, although phases corresponding to E_2^1,-1 are sometimes gapless on 3-cells. It should be emphasized that this is still useful because it is generally difficult to detect gapless points at generic momenta.§.§ Step (i): identification of irreducible representations on p-cells for topological invariants As mentioned in Sec. <ref>, we need to identify irreps on p-cells responsible for topological nontriviality, which is Step (i) mentioned in Introduction. In fact, E^p,-p_2 contains the information we need. In this subsection, we explain how to systematically extract the information from E^p,-p_2.In the following, we assume that {E^q, -p_1}_q=p-1^p+1 are free Abelian groups for simplicity.This is always true for time-reversal symmetric superconductors with conventional pairing symmetries.It is straightforward to generalize the scheme presented here (the derivation of the matrix X^(p) below) to the case where {E^q, -p_1}_q=p-1^p+1 contain torsion elements, as discussed in Appendix <ref>.Let {b_i^(p)}_i=1^N_p be a serially numbered basis set of E^q,-p_1 = ^N_q, i.e., there exists i such that b_i^(q) = b__j^q, α^(q).Then, E^q,-p_1 is expressed byE^q,-p_1 = ⊕_i=1^N_q[b_i^(q)], where [b_i^(q)] represents a free Abelian group generated by b_i^(q). When we use ℬ^(q) = (b_1^(q)b_2^(q)⋯b_N_q^(q)), the first differential d_1^p,-p: E_1^p,-p→ E_1^p+1,-p is represented byd_1^p,-p(ℬ^(p)):= (d_1^p,-p(b_1^(p)) d_1^p,-p(b_2^(p)) ⋯ d_1^p,-p(b_N_p^(p))) = ℬ^(p+1)M_d_1^p,-p, where M_d_1^p,-p is a (N_p+1× N_p)-dimensional integer-valued matrix. For an integer-valued matrix, we can always find two unimodular matrices U^(p) and V^(p) such that Σ^(p)=U^(p)M_d_1^q,-pV^(p),where Σ^(p) is an integer-valued diagonal matrix. The left-hand side is known as Smith normal form.When let r_p be the matrix rank of Σ^(p), each diagonal element of Σ^(p) satisfies the following things:[l] * [Σ^(p)]_ii∈_>0 for 1≤ i ≤ r_p; * [Σ^(p)]_ii can divide [Σ^(p)]_(i+1)(i+1) for 1 ≤ i ≤r_p-1; * [Σ^(p)]_ii = 0 for r_p<i if r_p ≠ N_P. 
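The only nontrivial computation in the two steps below is the Smith normal form. Library routines exist (e.g. smith_form in SageMath); since the construction also needs the unimodular transformation matrices U^(p) and V^(p), here is a self-contained integer implementation (a sketch, not optimized):

```python
import numpy as np

def smith_normal_form(M):
    """Return (U, S, V) with S = U @ M @ V in Smith normal form,
    U and V unimodular, following the convention Sigma = U M V used above."""
    S = np.array(M, dtype=np.int64)
    m, n = S.shape
    U, V = np.eye(m, dtype=np.int64), np.eye(n, dtype=np.int64)
    t = 0
    while t < min(m, n):
        sub = np.abs(S[t:, t:])
        if not sub.any():
            break
        # pivot: the nonzero entry of smallest magnitude in S[t:, t:]
        i, j = np.argwhere(sub == sub[sub > 0].min())[0]
        i, j = t + i, t + j
        S[[t, i], :] = S[[i, t], :]; U[[t, i], :] = U[[i, t], :]
        S[:, [t, j]] = S[:, [j, t]]; V[:, [t, j]] = V[:, [j, t]]
        done = True
        for r in range(t + 1, m):       # clear column t by row operations
            q = S[r, t] // S[t, t]
            S[r, :] -= q * S[t, :]; U[r, :] -= q * U[t, :]
            done &= S[r, t] == 0
        for c in range(t + 1, n):       # clear row t by column operations
            q = S[t, c] // S[t, t]
            S[:, c] -= q * S[:, t]; V[:, c] -= q * V[:, t]
            done &= S[t, c] == 0
        if not done:
            continue                    # nonzero remainders: repeat with new pivot
        if S[t, t] < 0:
            S[t, :] *= -1; U[t, :] *= -1
        # enforce divisibility: S[t, t] must divide all later entries
        bad = np.argwhere(S[t + 1:, t + 1:] % S[t, t] != 0)
        if bad.size:
            r = t + 1 + bad[0][0]
            S[t, :] += S[r, :]; U[t, :] += U[r, :]
            continue
        t += 1
    return U, S, V

M = np.array([[2, 4], [6, 8]])
U, S, V = smith_normal_form(M)
assert (U @ M @ V == S).all()
print(np.diag(S))   # [2 4]
```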
Then, we haved_1^p,-p(ℬ'^(p))=d_1^p,-p(ℬ^(p) V^(p)) = ℬ^(p+1)[U^(p)]^-1U^(p)M_d_1^p,-pV^(p)= ℬ^(p+1)[U^(p)]^-1Σ^(p),where we introduce ℬ'^(p)=(b'^(p)_1b'^(p)_2⋯b'^(p)_N_p)=ℬ^(p)V^(p).Equation (<ref>) implies that we can obtain (d_1^p,-p) = span(b'^(p)_r_p+1⋯b'^(p)_N_p).Similarly,d_1^p-1,-p(ℬ^(p-1)) = ℬ^(p)M_d_1^p-1,-p= ℬ'^(p)[V^(p)]^-1M_d_1^p-1,-p=ℬ'^(p)([ O_r_p× N_p-1;1cY ]). The reason why the first r_p rows are zeros is that d^p-1,-p_1 should be expanded by d^p,-p_1.Applying the same decomposition of Eq. (<ref>) to Y, we rewrite the above equation asd_1^p-1,-p(ℬ^(p-1)V^(p-1))=ℬ'^(p)(1_r_p⊕[U^(p-1)]^-1) ([ O_r_p× N_p-1;Λ^(p-1) ])=ℬ”^(p)([ O_r_p× N_p-1;Λ^(p-1) ]),where Λ^(p-1) = diag(λ_1^(p-1), λ_2^(p-1), ⋯, λ_N_p - r_p^(p-1)) is the Smith normal form of Y with the same properties in (<ref>).Here, we also introduce ℬ”^(p) = (b”^(p)_1b”^(p)_2⋯b”^(p)_N_p)=ℬ^(p)X^(p)=ℬ^(p)V^(p)(1_r_p⊕[U^(p-1)]^-1).As a result, we find (1) b”^(p)_i (i=1,⋯ r_p) is not an element in d^p,-p_1; (2) b”^(p)_i is a basis of E_2^p,-p when i > r_p and λ_i-r_p^(p-1)≠ 1.Equations (<ref>) and (<ref>) inform us of topological invariants defined on p-cells.Since any element of E^p, -p_1 can be expressed by ℬ^(p)(n^(p)_1 n^(p)_2⋯ n^(p)_N_p)^⊤, n^(p) = ℬ^(p)(n^(p)_1 n^(p)_2⋯ n^(p)_N_p)^⊤= ℬ”^(p)[X^(p)]^-1(n^(p)_1 n^(p)_2⋯ n^(p)_N_p)^⊤.Combining the facts (1) and (2) with Eq. (<ref>), the row lists {x_i^T}_i tell us which combinations of {n^(p)_j}_j detect topologically nontrivial nature. This is indeed Step (i), which we need to achieve.Thus the remaining problem is to identify quantities that function as {n^(p)_j}_j properly, which is discussed in the next subsection. §.§ Step (ii): Construction of topological invariants In preceding subsection, we find that each row list in [X^(p)]^-1 tells us which irreps on p-cells are used in topological invariants to detect nontrivial elements of E_2^p,-p. In this subsection, we discuss which quantities are assigned to them and how to take combinations of these quantities.In the following, by focusing on the case of p = 1, we present topological invariants defined on the 1-skeleton (a domain gluing all 1-cells together with all 0-cells).For later convenience, [X^(1)]^-1 and [V^(0)]^-1 ares denoted by[X^(1)]^-1 := ([ x_1 x_2 ⋯ x_N_1 ])^⊤,[V^(0)]^-1 := ([ v_1 v_2 ⋯ v_N_0 ])^⊤.As a preparation for the following discussions, here we define q-matrix for each irrep when an irrep is invariant under chiral-like symmetries.In this case, we consider irreps of + _ characterized by chirality, which are denoted by α±. Each irrep of + _ satisfiesχ_^α±(g)= χ_^α(g) (g ∈),χ_^α-(g)= -χ_^α+(g)(g ∈_). When _≠∅ and W^α_()= 0, two irreps, denoted by α± and β± here, are related by symmetries in _. In such a case, we always choose β- such that χ^β-_(g) = z_g, a^/z_a, a^-1ga^[χ^α+_(g)]^* for g ∈ + _. Then, we define a projection matrix by P^α± = 𝒟_α/| +_|∑_g ∈ +_[χ_^α±(g)]^*𝒰_(g),where 𝒟_α is dimension of irrep α. Using the projection matrices, we also define a chiral matrix byΓ^α = P^α+ - P^α-,whose eigenvalues are 0 and ± 1. When 𝒰^α_± denotes a matrix composing of eigenvectors with eigenvalues ± 1, we have q^α_ = [𝒰^α_-]^† H_𝒰^α_+∈GL(rank P^α+),which is what we refer to as q-matrix. It should be noted that q^α_ depends on the choice of 𝒰^α_± and can be changed by gauge transformations. In particular, we can always choose U^α_± such that the eigenvalues of q^α_ are _α-fold degenerate for EAZ class AIII/CI and (2_α)-fold degenerate for EAZ class DIII [See Appendix <ref> for how to find such a basis set]. 
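To illustrate the objects just introduced, the following sketch builds the projectors P^α±, the chiral matrix Γ^α, and q^α_k from representation matrices at one k-point. It assumes the representation matrices are unitary (so that Γ^α is Hermitian) and uses our own names; the toy check at the end is a two-band chiral block:

```python
import numpy as np

def q_matrix(H, U_reps, chi_plus, chi_minus, dim_alpha):
    """q^alpha_k = U_-^dagger H_k U_+ built from the chirality-resolved
    projectors P^alpha(+/-) of the group G + J at one momentum.

    U_reps    : list of representation matrices of G + J (assumed unitary)
    chi_plus  : dict g -> character of the irrep alpha+ of G + J
    chi_minus : dict g -> character of alpha-
    """
    n_g = len(U_reps)
    P = {}
    for s, chi in (("+", chi_plus), ("-", chi_minus)):
        # isotypic projector (D_alpha / |G + J|) sum_g chi(g)^* U(g)
        P[s] = dim_alpha / n_g * sum(np.conj(chi[g]) * U_reps[g] for g in range(n_g))
    Gamma = P["+"] - P["-"]                    # chiral matrix, eigenvalues 0, +/-1
    evals, evecs = np.linalg.eigh(Gamma)
    U_plus = evecs[:, np.isclose(evals, 1.0)]
    U_minus = evecs[:, np.isclose(evals, -1.0)]
    return U_minus.conj().T @ H @ U_plus

# toy check: a 2x2 chiral block with U(S) = sigma_z
sz = np.diag([1.0, -1.0])
H = np.array([[0, 0.3 - 0.4j], [0.3 + 0.4j, 0]])
q = q_matrix(H, [np.eye(2), sz], chi_plus={0: 1, 1: 1},
             chi_minus={0: 1, 1: -1}, dim_alpha=1)
print(q)   # [[0.3+0.4j]], the lower-left block of H (up to eigenvector phases)
```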
Then, we define “( q^α_)^1/_α” for AIII and CI and “( q^α_)^1/2_α” for DIII by the product of the duplicated eigenvalues of q^α_.As a result, we introduce 𝒵[q^α_] :=∏_jπ_j,where π_j is the j-th eigenvalue of q^α_, andq^α_ = (𝒵[q^α_])^_α for EAZ class AIII/CI (𝒵[q^α_])^2_α for EAZ class DIII holds. For EAZ class DIII with _α=1, when _ contains an order-two antiunitary symmetry denoted by a, one can replace 𝒵[q^α_] by [q^α_] by choosing 𝒰^α_- = U_(a)[𝒰^α_+]^*.We make a brief remark on Γ^α.The chiral matrix Γ_α is generally not the same as the representation of chiral symmetry.Even when we discuss symmetry groups containing chiral symmetry with a fractional translation, the eigenvalues of Γ^α are 0 and ± 1.On the other hand, U^α_± depend on momenta and are not periodic. Correspondingly, 𝒰^α_+ and 𝒰^α_- are interchanged up to unitary matrices of gauge transformations.As a result, q^α_+G is not the same as q^α_ but rather related to (q^α_)^† by the unitary matrices.Nonetheless, our construction discussed below works for such cases.See Appendix <ref> for an example. §.§.§ Revisiting physical meaning of E_1^1,-1Since we restrict ourselves to free Abelian groups E_1^1,-1, EAZ classes are always one of AIII, CI, or DIII.As defined in Eq. (<ref>), E_1^1,-1 corresponds to the direct sum of gapped topological phases on 1-cells with trivial gapped states on 0-cells. Such topological phases are characterized by the winding of q-matrix, where the winding is quantized.The simplest example is a one-dimensional class AIII system only with translation symmetry.In this case, C_0={k = ±π} and C_1 ={k |k ∈ (-π, π)}. Since the relation H_k=-π=H_k=π always holds, E_1^1,-1 = ^ϕ K^(z,c)+0_G/Π(X_1 = T^1,X_0 = C_0) = corresponds just ordinary one-dimensional topological phases of class AIII, which are characterized by one-dimensional winding number w:= 1/2πi∫_-π^πdk ∂_k log q_k ∈.In the following, we define various topological invariants based on the relation between the windings of q-matrices and topological nontriviality, although the windings are not quantized for general Hamiltonians.§.§.§topological invariants for gapless points on 2-cellsFirst, we consider topological invariants for gapless points on 2-cells, which corresponds to d_1^1,-1.To define topological invariants, for each EAZ class of each irrep on a 1-cell, we define the following quantities: * class AIII and CI w_α := 1/2πi×1/𝒟_α∫_s d(log q_s^α-log (q^α_s)^vac); * class DIII w_α:=1/2πi×1/2𝒟_α∫_s d(log q_s^α-log (q^α_s)^vac), where the integral is from one boundary 0-cell of the 1-cell to another boundary 0-cell along the orientation of the 1-cell.Here, we introduce (q^α_)^vac as the q-matrix of the vacuum Hamiltonian H^vac_ defined by the infinite chemical potential limit of H_ using the same 𝒰^α_±.The reason why we need the winding of (q^α_)^vac is that the winding of q^α_ is not invariant under global gauge transformations <cit.> and basis transformation of 𝒰^α_±.When we subtract the winding of a reference q-matrix from that of q^α_, the gauge dependence is resolved. From the fact (I) and Eq. (<ref>),𝒲_i^gapless[H_] := x_i^⊤(w_1, w_2, ⋯, w_N_1)^⊤ (1 ≤ i ≤ r_1)is a -valued topological invariant to detect gapless points on 2-cells. §.§.§topological invariants for gapped phases on 2-skeletonNext, we construct -valued topological invariants for gapped phases on 1-skeleton, i.e., fully gapped phases or gapless only on 3-cells. Indeed, we can use the same quantities as the invariants for gapless points on 2-cells. From the fact (II) and Eq. 
(<ref>), we find a invariant defined by𝒲_i^gapped[H_]:= x_i^⊤(w_1, w_2, ⋯, w_N_1)^⊤ for i s.t. λ_i-r_1^(0) = 0∈. §.§.§ _k topological invariants for gapped phases on 2-skeleton For i such that λ_i-r_1^(0)∉{0, 1}, we construct a _λ_i-r_1^(0)-valued topological invariant from x_i^⊤ and v_i-r_1^⊤ in Eqs. (<ref>) and (<ref>).Similar to the case of -valued invariants, it is natural to characterize topological nature on 1-cells by w_α in Eqs. (<ref>) and (<ref>).We define the following quantities for irreps on 1-cells: * class AIII and CI ν_α := exp[-2πi/λ_i-r_1^(0)w_α]=exp[-1/λ_i-r_1^(0)𝒟_α∫_s d(log q_s^α-log (q^α_s)^vac) ]; * class DIII ν_α := exp[-2πi/λ_i-r_1^(0)w_α]=exp[-1/2λ_i-r_1^(0)𝒟_α∫_s d(log q_s^α-log (q^α_s)^vac) ], where the integral is from one boundary 0-cell of the 1-cell to another boundary 0-cell along the orientation of the 1-cell.Here, _α is dimension of an irrep α on the 1-cell.The necessity of log (q^α_s)^vac arises from the same rationale as in Eqs. (<ref>) and (<ref>).Since -valued topological invariants are defined only by {w_α}_α as seen above, one might expect that ∏_αν_α^[x_i^T]_α = exp[-2πi∑_α [x_i]_αw_α/λ_i-r_1^(0)] to be a _λ_i-r_1^(0)-valued topological invariant.However, this is untrue. This is because(∏_αν_α^[x_i]_α)^λ_i-r_1^(0) = exp[-2πi∑_α [x_i]_αw_α] is generally not unity. This implies that ∏_αν_α^[x_i^T]_α does not work for a _λ_i-r_1^(0)-valued topological invariant.After integrating and summing the exponent of right-hand side of Eq. (<ref>), we have the product of {𝒵[q_^β]/𝒵[(q^β_)^vac]}_β over 0-cells. Then, we construct a _λ_i-r_1^(0)-valued topological invariant by combining ∏_αν_α^[x_i^T]_α with the correction terms {𝒵[q_^β]/𝒵[(q^β_)^vac]}_β. The remaining task is to identify such correction terms. In fact, this is accomplished by AHSS. To see this, we rewrite Eq. (<ref>) as (d_1^0,-1(b'^(0)_1) d_1^0,-1(b'^(0)_2) ⋯ d_1^0,-1(b'^(0)_N_0) ) =(b”^(1)_1b”^(1)_2⋯b”^(1)_N_1)([ O_r_1× N_0;Λ^(0) ]), where Λ^(0) = diag(λ_1^(0), λ_2^(0), ⋯) is the Smith normal form in Eq. (<ref>) for p=1.For later convenience, let us suppose that the first R_0 diagonal elements are unity, i.e., λ_j^(0) = 1 for 1≤ j ≤ R_0.Note that R_0 depends on conventions, for example, choices of the cell decomposition.When we consider Eq. (<ref>) in terms of coordinates {n_j^(0)}_j=1^N_0 and {n_j^(1)}_j=1^N_1 of E_1^0,-1 and E_1^1,-1, this equation impliesλ_i-r_1^(0)v_i-r_1^⊤(n_1^(0), n_2^(0), ⋯ n_N_0^(0))^⊤= x_i^⊤(n_1^(1), n_2^(1), ⋯ n_N_1^(1))^⊤. Equation (<ref>) hints at a relationship between quantities defined on 0- and 1-cells, as discussed below.Inspired by Eq. (<ref>), we define the following quantityexp[2πi/λ_i-r_1^(0)𝒳_i-r_1[H_]] := ∏_αν_α^[x_i]_α/∏_β(𝒵[q_^β]/𝒵[(q^β_)^vac])^[v_i-r_1]_β. The _λ^(0)_i-r_1-quantization of 𝒳_i-r_1[H_] is also confirmed by the following discussion. Let α and β be an irrep on a 1-cell and an irrep at its adjacent 0-cell, respectively.Compatibility relations imply that the ℂ-valued quantity Z[q_^α] defined in Eq. (<ref>) obeys the relation Z[q_^α] = ∏_β ( Z[q_^β])^|[M_d_1^0,-1]_αβ|,where the product ∏_β runs over the irreps on 0-cells whose EAZ class is either AIII, DIII or CI.The exponent |[M_d_1^0,-1]_αβ| is the absolute value of a matrix element of M_d_1^0,-1 defined in Eq. (<ref>).See Appendix <ref> for a derivation.Let s denote the 1-cell going from an adjacent 0-cell _0 to another adjacent 0-cell _1.By using Eq. (<ref>), ν_α^λ^(0)_i-r_1 = Z[q__0^α]/ Z[(q__0^α)^ vac]/ Z[q__1^α]/ Z[(q__1^α)^ vac]=∏_β( Z[q_^β]/ Z[(q_^β)^ vac] )^[M_d_1^0,-1]_αβ. 
Noticing that, from Eq. (<ref>), [X^(1)]^-1 M_d_1^0,-1 = ([ O_r_1× N_0;Λ^(0) ]) [V^(0)]^-1,we have (∏_αν_α^[x_i]_α)^λ_i-r_1^(0)=∏_β( Z[q_^β]/ Z[(q_^β)^ vac] )^∑_α [x_i]_α[M_d_1^0,-1]_αβ= ∏_β( Z[q_^β]/ Z[(q_^β)^ vac] )^λ^(0)_i-r_1 [v_i-r_1]_β=( ∏_β( Z[q_^β]/ Z[(q_^β)^ vac] )^[v_i-r_1]_β)^λ^(0)_i-r_1.This proves exp[2π iX_i-r_1[H_]]=1.We note that the above proof is only applicable to symmetry settings where the EAZ class at 0-cells does not include BDI.When there exists an EAZ class BDI at some 0-cell, the relation (<ref>) has a correction from irreps at 0-cells whose EAZ class is BDI, resulting in the symmetry indicator to detect gapless point on 2-cells or nontrivial extension of E_2^1,-1 by E_2^0,0.We also numerically verify that 𝒳_i-r_1[H_]∈{0,1,…,λ_i-r_1^(0)-1} is actually _λ_i-r_1^(0)-valued for time-reversal symmetric spinful superconductors, as discussed in Sec. <ref>.§.§.§ Demonstration As a demonstration, here we discuss the topological invariant of one-dimensional time-reversal symmetric topological superconductors only with translation symmetries.It is well known that the classification of topological superconducting phases is ^ϕK_G/Π^(z,c)(T^1) = _2 in this symmetry setting. In addition, the following topological invariant is also known:(-1)^ν = Pf[q_k=π]/Pf[q_k=0]exp[-1/2∫_0^π dk ∂_k log q_k], where we choose the basis such that q_k=0,π^T = - q_k=0,π. Here, we rederive the above invariant by using our method. Our cell decomposition of the fundamental domain is shown in Fig. <ref>.There are two inequivalent 0-cells denoted by Γ (k = 0) and X (k = π).These 0-cells are invariant under time-reversal and particle-hole symmetries, and thus E_1^0, -1 = [b^(0)_Γ]⊕[b^(0)_X].On the other hand, there exists an inequivalent 1-cell a, which is invariant only under chiral symmetry.Therefore, E_1^1, -1 = [b^(1)_a].Since d_1^1,-1 is not defined in one dimension, we start from Eq. (<ref>) with r_1 = 0 and V^(1) = (1).d_1^0,-1(b^(0)_Γb^(0)_X) = (b^(1)_a)[2 -2 ]=(b^(1)_a)U^(0)[ 2 0 ][V^(0)]^-1,where X^(1) = U^(0) = (1) and [V^(0)]^-1 = [1 -1;01 ].Following discussions in Sec. <ref>, we construct the topological invariant defined by(-1)^𝒳[H_] = 𝒵[q_X]/𝒵[q_X^vac]/𝒵[q_Γ]/𝒵[q_Γ^vac]ν_a[H_]=Pf[q_X]/Pf[q_Γ]exp[-1/2∫_Γ^X∂_k log q_k dk]×(Pf[q^vac_X]/Pf[q^vac_Γ]exp[-1/2∫_Γ^X∂_k log q^vac_k dk])^-1. It should be noted that Γ and X are invariant under time-reversal symmetry (TRS) and that their EAZ classes are class DIII with _α = 1. Thus, 𝒵[q_k=0, π] is replaced by Pf[q_k=0, π] with q^T_k=0, π = -q^T_k=0, π. As a result, our invariant is consistent with the well-known formula. In Supplementary Materials <cit.>, we present all matrices [X^(1)]^-1, [V^(0)]^-1, Λ^(0), [V^(1)]^-1, and Σ^(1) for insulators and superconductors in all magnetic space groups.Our scheme presented in this subsection is applicable to not only time-reversal symmetric superconductors with conventional pairing symmetries but also superconductors whose E_1^1,-1's are free Abelian groups. Also, [X^(1)]^-1 informs us of topological invariants for insulators and superconductors, where E_1^1,-1 = (_2)^l (l∈), under some gauge conditions, as discussed in Sec. <ref>. However, although the matrices [X^(1)]^-1, [V^(0)]^-1, Λ^(0), [V^(1)]^-1, and Σ^(1) are presented, we do not have a scheme to find explicit expressions of topological invariants for superconductors such that E_1^1,-1's contain both - and _2-parts.§.§ Numerical verification In the preceding section, we construct - and _k-valued topological invariants. 
However, we do not provide any proof that they are actually quantized, although it is possible to confirm it one by one.In this work, we alternatively check if they are quantized by computing invariants for randomly generated symmetric Hamiltonians defined on 1-skeletons repeatedly. Here, we discuss how to numerically confirm that quantities defined in Eqs. (<ref>), (<ref>), and (<ref>) are quantized. For simplicity, we always consider spinful systems in nonmagnetic layer groups or nonmagnetic space groups with TRS.In such a case, =+, whereis a nonmagnetic space group or layer group. §.§.§ Preparation of symmetry representations and symmetric HamiltoniansTo generate symmetric Hamiltonians, we need symmetry representations U_(g). We can always construct U_(g) from real space as follows. First, we specify a generic point x_1 in real space, where a generic point is symmetric only under nonspatial symmetries.Once we have a generic point, we find {x_l}_l=1^|/Π| and {g_l}_l=1^|/Π| such that x_l = g_l(x_1). Then, we can construct u_(g) (g∈) by[u_(g)]_σσ';ll' = z^int_g, g_l/z^int_g_l', g e^-i g() · (g(x_l) - x_l')δ'_g(x_l), x_l [u_x_1(h)]^ϕ_g_σσ',where u_x_1(h) is defined in Table <ref>, h = g_l'T_x_l' - g(x_l)gg_l (T_x_l' - g(x_l)∈Π is a translation by x_l' - g(x_l)), and δ'_g(x_l), x_l' is 1 only if g(x_l)-x_l' is a lattice vector and 0 otherwise.It should be noted that u_+G(g) = u_(g). While U_(g) = u_(g) for normal conducting phases, U_(g) = [u_(g)O;O χ_g u^*_-(g) ] for g ∈ℳ [ Oξ u_-(g); χ_g u^*_(g) O ] for g ∈ℳ for superconducting conducting phases and ξ = +1 (-1) corresponds to the presence (absence) of SU(2) symmetry.Here, χ_g ∈U(1) represents the pairing symmetry of superconducting gap function Δ_, i.e. it is defined by u_(g)Δ_^ϕ_gu^T_-(g) = χ_g Δ_g (g ∈ℳ).For later convenience, let us introduce the different convention of symmetry representations.By performing a basis transformation, we can always findŨ_(g) := V^†_gU_(g)V_ = U(g)e^i ·a_g,where U(g) is a unitary matrix independent of , and a_g is a vector that represents translation part of g ∈ G. The basis transformation matrix V^†_ is given by [v_]_l l' = δ_ll'e^-i ·x_l for normal conducting phases and[ v_O;O v_ ]for superconducting phases.Once we have symmetry representations, we can generate a symmetric random Hamiltonian defined at a -point _0.Let us suppose that we have a hermitian random matrix h. Then, we obtain a symmetric random Hamiltonian at _0 fromH__0 = 1/| G__0|∑_g ∈ G__0 c_g U__0(g) h^ϕ_g U^†__0(g).§.§.§ Adiabatic connection between two symmetric HamiltoniansFor 0- and 1-cells, we can construct symmetric random Hamiltonians using Eq. (<ref>). To compute our topological invariants obtained in Sec. <ref>, we need a symmetric Hamiltonian defined on a 1-skeleton. Let _0 be a representative point of a 1-cell, and _1 and _2 are its boundary 0-cells. Here, employing the technique developed in Ref. <cit.>, we connect H__0 to H__1 and H__2. First, we perform the basis transformation and obtain H̃__i = V^†__iH__iV__i. Then, we flatten the energy spectrum of H̃__0, H̃__1, and H̃__2.In other words, we introduceQ̃__i = Ṽ[ -1O;O1 ]Ṽ^-1,where Ṽ is a unitary matrix composed of eigenvectors of H̃__i. Next, we define parameterized Hamiltonians by Q̃_1(t_1) = (1-t_1)Q̃__1 + t_1 Q̃__0,Q̃_2(t_2) = (1-t_2)Q̃__0 + t_2 Q̃__2,where t_1 and t_2 are parameters defined by _1 + t_1 (_0 - _1) and _0 + t_2 (_2 - _0). 
Finally, by performing the inverse basis transformation, we have a symmetric Hamiltonians Q_1(t_1) = V__1 + t_1 (_0 - _1)Q̃_1(t_1)V^†__1 + t_1 (_0 - _1), Q_2(t_2) = V__0 + t_2 (_2 - _0)Q̃_2(t_2)V^†__0 + t_2 (_2 - _0),which are defined on the 1-cell and the two boundary 0-cells.As a result, we have a Hamiltonian given by H_ = Q_1(t_1)for= _1 + t_1 (_0 - _1) Q_2(t_2)for= _0 + t_2 (_2 - _0). Applying this scheme to all 1-cells, we obtain a symmetric Hamiltonian defined on the 1-skeleton.We make the following two remarks. First, we should interpolate Q̃__i (i = 0, 1, 2) to make interpolated Hamiltonians symmetric.If we start from the flattened H__i (i = 0, 1, 2), the interpolated Hamiltonian does not always possess all symmetries on 1-cells. On the other hand, the symmetry representation Ũ_(g) does not depend on momentum, as shown in Eq. (<ref>). In such a case, it is guaranteed that the interpolated Hamiltonians in the same basis of Ũ_(g) respect all symmetries on the 1-cell.Second, gapless points sometimes appear on 1-cells when compatibility conditions between 0- and 1-cells are violated.However, our invariants are well-defined only when the Hamiltonian is gapped on the 1-skeleton.To eliminate gapless points on 1-cells, we must stack some Hamiltonians so that all compatibility conditions between 0- and 1-cells are satisfied. §.§.§ Completeness checkTo check if our invariants work, we compute topological invariants of Hamiltonians obtained in the above way. In our numerical calculations, we repeat this process 20 times for all layer groups and all space groups with conventional pairing symmetries (χ_g = +1 for all g ∈) in the presence of TRS.For later convenience, the Hamiltonian of the m-th calculation is denoted by H^(m)_.Let us suppose that we have the following set of topological invariants: W_l[H_]= (𝒲^gapless_1[H_], 𝒲^gapless_2[H_], ⋯, 𝒲^gapless_r_1[H_])^⊤, W_g[H_]= (𝒲^gapped_1[H_], 𝒲^gapped_2[H_], ⋯, 𝒲^gapped_N_f[H_])^⊤, C[H_]= (𝒳_R_0+1[H_], 𝒳_R_0+2[H_], ⋯, 𝒳_R_0+N_t[H_])^⊤,where N_f and N_t are the numbers of - and _k-valued topological invariants. Recall that R_0 is the number of unity in the Smith normal form Λ^(0), i.e., λ_j^(0) = 1 for 1≤ j ≤ R_0.After computing all the topological invariants of the 20 Hamiltonians, we have three sets {W_l[H^(m)_]}_m=1^20, {W_g[H^(m)_]}_m=1^20, and {C[H^(m)_]}_m=1^20.Then, we check if our topological invariants can fully characterize E_2^1,-1. More precisely, we confirm that {W_l[H^(m)_]}_m=1^20, {W_g[H^(m)_]}_m=1^20, and {C[H^(m)_]}_m=1^20 can span ^r_1, ^N_f, and ⊕_i-r_1=R_0+1^N_t+R_0_λ_i-r_1, respectively.See Appendix <ref> for more technical details. §.§ Fermi surface formulas for topological superconductors Although obtaining q-matrices for realistic materials is usually challenging, it is well-known that the expressions of Eq. (<ref>) and the winding number w could be simplified when the scale of pair potentials is small enough compared with that of normal conducting phases <cit.>.In this weak-pairing limit, if normal conducting phases are gapped at k=0, π and Fermi surfaces are not degenerate, we have the following formulas(-1)^ν = ∏_ϵ_nk = 0sgn(δ_nk), w= 1/2∑_ϵ_nk = 0sgn(∂_k ϵ_nk)sgn(δ_nk), where ϵ_nk is the n-th eigenenergy of the normal conducting phase and δ_nk is a diagonal element of the superconducting gap function in the band basis [ More precisely, δ_nk is defined as follows. Let ψ_nk, u(), and Δ_k be an eigenvector with the eigenvalue ϵ_nk, the unitary representation of TRS, and the superconducting gap function, respectively. 
Then, δ_nk = ψ_nk^†(u()Δ_k^†)ψ_nk.See Refs. <cit.> for the derivations. ]. Given that these two invariants are also defined in terms of winding of q-matrices and exponential function of the winding, such as Eqs. (<ref>), and (<ref>), it is natural to expect that similar formulas for our invariants also exist. However, we leave the enumeration of these kinds of formulas as future work. § EXAMPLES In this section, we derive topological invariants of topological superconductors in several symmetry settings with TRS. We also compute the obtained topological invariants for representative models of all possible topological phases in these symmetry settings. §.§ Layer group p2_1/m11 with A_g pairing Our first example is layer group p2_1/m11 whose generators are screw symmetry S_x ={2_100| (1/2, 0, 0)^⊤}, mirror symmetry M_x ={m_100| (1/2, 0, 0)^⊤}, and a translation along y-direction. According to Ref. <cit.>, ^ϕK_G/Π^(z,c)(T^2) = in layer group p2_1/m11 with TRSand A_g pairing.However, topological invariants are not known yet. Here, we construct the topological invariants based on our framework.Our cell decomposition of the fundamental domain is shown in Fig. <ref>(b), and irreps are tabulated in Table <ref>. Then, E_1^1,-1=^5 is given byE_1^1,-1 = [b_a,1]⊕[b_b,1]⊕[b_c,1]⊕[b_c,2]⊕[b_d,1],where we omit “(1)” from b_D_i^p, α^(1) for short. After following procedures in Sec. <ref>, we find r_1 = 1 and[X^(1)]^-1 = ( [ a_1 b_1 c_1 c_2 d_1; 1-1 1 1-1; 0 1 0 0 1; 0 0 1 0 0; 0 0 0 0 1; 0 0-1 1 0; ]);Λ^(0) = ( [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0; ]). As discussed in Sec. <ref>, the first row of [X^(1)]^-1 informs us about a topological invariant for gapless points on 2-cells, which is given by𝒲^gapless = w_a_1 - w_b_1 + w_c_1 + w_c_2 - w_d_1= 1/2 πi[∫_a d log q^a_1_/ (q^a_1_)^vac- ∫_b d log q^b_1_/ (q^b_1_)^vac..+ 1/2∑_i=1^2∫_c d log q^c_i_/ (q^c_i_)^vac - ∫_d d log q^d_1_/ (q^d_1_)^vac]= 1/2×1/2 πi∮_C (d log q_ - d log q_^vac),where the loop C is defined by Γ-X-M-Y-Γ. In the last line, we use the relation q_^K_2=q_^K_1 for K = a, b, d (See Appendix <ref>). As shown in Eq. (<ref>), since only λ_4 = (5-1)≠ 1 and λ_4=0, the fifth row of [X^(1)]^-1 gives us a -valued topological invariant for gapped phases 𝒲^gapped = -w_c_1 + w_c_2= -1/4πi∫_0^π d log q_(π, k_y)^c_1/ (q_(π, k_y)^c_1)^vac( q_(π, k_y)^c_2/ (q_(π, k_y)^c_2)^vac)^-1. In fact, this is the mirror winding number.A generator of ^ϕK_G/Π^(z,c)(T^2) = is constructed from one-dimensional topological superconducting phases protected by mirror and chiral symmetries, as shown in Fig. <ref>(a).After computing 𝒲^gapped for a representative model of this generator with a random symmetric perturbation, we find that 𝒲^gapped =-1.§.§ Layer group p2_1/b11 with A_g pairing Our next example is layer group p2_1/b11, which is generated by screw symmetry S_x ={2_100| (1/2, 1/2, 0)^⊤} and glide symmetry G_x ={m_100| (1/2, 1/2, 0)^⊤}. Reference <cit.> shows that ^ϕK_G/Π^(z,c)(T^2) = _2 in layer group p2_1/b11 with TRSand A_g pairing.The cell decomposition is the same as in Fig. <ref>(b). Irreducible representations and their EAZ classes are tabulated in Table <ref>.We have E_1^1,-1 = ^6 spanned by E_1^1,-1 = ⊕_K = a,b[b_K,1]⊕⊕_K=c,d(⊕_α=1,2[b_K,α]). From the analyses in Sec. 
<ref>, we find r_1 = 1 and[X^(1)]^-1 =( [ a_1 b_1 c_1 c_2 d_1 d_2; 1-1 1 1-1-1; 0 1 0 0 0 2; 0 0 0 1-1 1; 0 0 0 0 0 1; 0 0 0 0-1 1; 0 0-1 1-1 1; ]); Λ^(0)=( [ 1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 2; ]);[V^(0)]^-1 =( [ Γ_1 Γ_3 X_1 Y_1 M_1 M_2 M_3 M_4; 1 1 0 0 0 0-2-2; 0 0 1 0 0 1-1-2; 0 0 0 1 0 0-1-1; 0 0 0 0 1 1-1-1; 0 0 0 0 0 1 0-1; 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 1 0; 0 0 0 0 0 0 0 1; ]). From the first row of [X^(1)]^-1, we have the following -valued topological invariant for gapless points on 2-cells:𝒲^gapless = w_a_1 - w_b_1 + w_c_1 + w_c_2 - w_d_1 - w_d_2= 1/2 πi[∫_a d log q^a_1_/ (q^a_1_)^vac- ∫_b d log q^b_1_/ (q^b_1_)^vac..+ 1/2∑_i=1^2(∫_c d log q^c_i_/ (q^c_i_)^vac - ∫_d d log q^d_i_/ (q^d_i_)^vac) ]= 1/2×1/2 πi∮_C (d log q_ - d log q_^vac),where the loop C is defined by Γ-X-M-Y-Γ. We see that only λ_5=2 is not unity in Eq. (<ref>), which indicates that the last row of [X^(1)]^-1 and the fifth row of [V^(0)]^-1 provide a _2-valued topological invariant for gapped phases.The _2-valued topological invariant 𝒳 is given by (-1)^𝒳 = 𝒵[q_M^M_4]/𝒵[(q_M^M_4)^vac]/𝒵[q_M^M_2]/𝒵[(q_M^M_2)^vac]ν_c_2[H_]ν_d_2[H_]/ν_c_1[H_]ν_d_1[H_] = Pf[q_M^M_4]/Pf[(q_M^M_4)^vac]/Pf[q_M^M_2]/Pf[(q_M^M_2)^vac] ×exp[1/4∑_K=c,d∫_K dlog q_^K_1/ (q_^K_1)^vac( q_^K_2/ (q_^K_2)^vac)^-1], wherewe replace 𝒵[q_^α] by [q_^α] in the second line, as mentioned in Sec. <ref>. A generator of ^ϕK_G/Π^(z,c)(T^2) = _2 is constructed from one-dimensional topological superconducting phases, as shown in Fig. <ref>(b).After computing 𝒳 for a representative model of this generator with a random symmetric perturbation, we find that 𝒳 =1 mod 2. §.§ Layer group p2_111 with A pairing The third example is layer group p2_111, whose generators are screw S_x ={2_100| (1/2, 0, 0)^⊤} and a translation along y-direction. In Ref. <cit.>, the authors show that ^ϕK_G/Π^(z,c)(T^2) = _2 ×_4 in layer group p2_111 with TRS and with A pairing.However, topological invariants are not known yet. Here, we construct the topological invariants based on our framework.Again, the cell decomposition is the same as layer groups p2_1/m11 and p2_1/b11. Irreducible representations and their EAZ classes are tabulated in Table <ref>.We have E_1^1,-1 = ^6 spanned by E_1^1,-1 = ⊕_K = b,c[b_K,1]⊕⊕_K=a,d(⊕_α=1,2[b_K,α]).We perform the analyses presented in Sec. <ref>, and then we find r_1 = 1 and [X^(1)]^-1 = ( [ a_1 a_2 b_1 c_1 d_1 d_2; 1 1-1 2-1-1; 0-1 1 0 0 2; 0 0 0 1-1 1; 0 0 0 0 0 1; 0 2-1 0-1 1; 0 2-1 0-2 0; ]);Λ^(0) = ( [ 1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 2 0; 0 0 0 0 4; ]);[V^(0)]^-1 = ( [ Γ_1 X_1 X_2 Y_1 M_1 M_2; 1 0 2 0 0-4; 0 1 1 0 1-3; 0 0 0 1 0-2; 0 0-2 1 1-1; 0 0-1 0 1 0; 0 0 0 0 0 1; ]). In the same way as the above examples, we have a -valued topological invariant for gapless points on 2-cells:𝒲^gapless = w_a_1 + w_a_2 - w_b_1 + 2w_c_1 - w_d_1 - w_d_2= 1/2 πi∮_C (d log q_ - d log q_^vac). We find λ_4 = 2 and λ_5 = 4. 
As a result, we construct a _2-valued topological invariant from the fifth row of [X^(1)]^-1 and the fourth row of [V^(0)]^-1.Also, a _4-valued topological invariant is obtained from the sixth rows of [X^(1)]^-1 and the fifth row of [V^(0)]^-1.The _2- and _4-valued invariants, denoted by 𝒳_1 and 𝒳_2, are defined by (-1)^𝒳_1 =𝒵^2[q_X^X_2]𝒵[q_M^M_2]/𝒵[q_Y^Y_1]𝒵[q_M^M_1](𝒵^2[(q_X^X_2)^vac]𝒵[(q_M^M_2)^vac]/𝒵[(q_Y^Y_1)^vac]𝒵[(q_M^M_1)^vac])^-1ν_a_2^2[H_]ν_d_2[H_]/ν_b_1[H_]ν_d_1[H_] =q_X^X_2[q_M^M_2]/ q_Y^Y_1[q_M^M_1]( (q_X^X_2)^vac[(q_M^M_2)^vac]/ (q_Y^Y_1)^vac[(q_M^M_1)^vac])^-1 ×exp[-1/2∫ d(2log q_(k_x, 0)^a_2/ (q_(k_x, 0)^a_2)^vac - log q_(0, k_y)^b_1/ (q_(0, k_y)^b_1)^vac- log q_(k_x, π)^d_1/ (q_(k_x, π)^d_1)^vac + log q_(k_x, π)^d_2/ (q_(k_x, π)^d_2)^vac) ], i^𝒳_2 = 𝒵[q_X^X_2]/𝒵[(q_X^X_2)^vac]/𝒵[q_M^M_1]/𝒵[(q_M^M_1)^vac]ν_a_2^2[H_]/ν_b_1[H_]ν_d_1^2[H_] = [q_X^X_2]/[q_M^M_1][(q_M^M_1)^vac]/[(q_X^X_2)^vac]exp[-1/4∫ d(2log q_(k_x, 0)^a_2/ (q_(k_x, 0)^a_2)^vac - log q_(0, k_y)^b_1/ (q_(0, k_y)^b_1)^vac- 2log q_(k_x, π)^d_1/ (q_(k_x, π)^d_1)^vac) ].Real-space pictures of topological phases are shown in Fig. <ref>(c), as discussed in Ref. <cit.>. From (𝒳_1, 𝒳_2) for a representative model of each phase, we find that (𝒳_1, 𝒳_2) can fully characterize ^ϕK_G/Π^(z,c)(T^2) = _2 ×_4, as shown in Table <ref>. § TOPOLOGICAL INVARIANTS FOR NORMAL CONDUCTING PHASESSo far, we have discussed topological invariants when E_1^p,-1 (p = 0, 1, 2) are free Abelian groups. Typically, this happens to time-reversal symmetric superconductors with conventional pairing symmetries.On the other hand, this is not always the case for superconductors with unconventional pairing symmetries and normal conducting systems.In particular, when we are interested in normal conducting systems, E_1^p,-1 always takes the form of (_2)^l (l ∈_≥ 0). This is because an EAZ class of each irrep on a p-cell is always any one of A, AI, and AII, and only AI gives _2 as shown in Table <ref>.Here, let us discuss how to construct topological invariants for insulators and semimetals.The construction of topological invariants for normal conducting phases also requires the similar two steps discussed in Secs. <ref> and <ref>.The only difference in the first step is the existence of _2-parts in E_1^p, -1. The discussions in Sec. <ref> are easily generalized, as shown in Sec. <ref>. On the other hand, due to the absence of any chiral symmetry, we need to develop a different characterization of E_1^1, -1 for normal conducting phases.In Sec. <ref>, we introduce the transition function as _2-valued quantity under some gauge conditions. As a result, we first construct topological invariants under some gauge conditions from AHSS.Since the formulas with fixed gauges are generally not useful for actual computations, we then rewrite the formulas in terms of gauge-independent quantities. In the following, we explain our strategy through some examples. We leave a systematic implementation of gauge-invariant formulas as a future work.§.§ Some generalities Before moving on to the construction of invariants on the 1-skeleton, we summarize in this section general remarks not limited to the 1-skeleton.§.§.§ Types of topological invariantsIn insulating systems, E_2^0,0 is a free abelian group, and E_2^1,-1 consists solely of _2. Since free abelian groups do not undergo nontrivial extensions, the topological invariants defined on the 1-skeleton take values in the group _2. 
On the other hand, the group E_3^2,-2 may be nontrivially extended by invariants on the 1-skeleton, so there can be, for example, _4 invariants. §.§.§ NotationsWe summarize the notations used in this section, which are different from those in Sec. <ref>.Let G be a magnetic point group, and ϕ: G →{± 1} be a homomorphism specifying whether an element g ∈ G is unitary or antiunitary.Introduce the notation for a matrix X, X^ϕ_g = X when ϕ_g=1 and X^ϕ_g=X^* when ϕ_g=-1. Let p_g ∈O(d) be the point group action of g, and a_g be a (fractional) translation.The action on real space is g:x↦ p_gx + a_g. Denote the group action on the momentum space ∈ T^d as g:k↦ gk = ϕ_g p_g k. Write z^ int(g,h) as the factor system for internal degrees of freedom, and the factor system in the momentum space asz_(g,h) = z^ int(g,h) e^-ik· (p_g a_h+a_h-a_gh)for g,h ∈ G, where z_(g,h) satisfies the following cocycle condition z_g^-1(h,l)^ϕ_g z_(gh,l)^-1 z_(g,hl) z_(g,h)^-1 = 1for g,h,l ∈ G <cit.>. Let {U_(g)}_g ∈ G be symmetry operators globally defined in the momentum space T^d, satisfying the following -dependent cocycle condition:U_h(g)U_(h)^ϕ_g=z_gh(g,h)U_(gh), g,h ∈ G.A periodic Hamiltonian H_ in the momentum space T^d satisfies the following symmetry U_(g)H_^ϕ_gU_(g)^† = H_g, g ∈ G. §.§.§ Classification of equivariant vector bundle In general, the classification of isomorphism classes of vector bundles is to classify transition functions. Let {U_i}_i ∈ I be a covering of the momentum space T^d compatible with the symmetry, i.e., T^d = ⋃_i U_i,where U_i is called patch in the following.Each patch U_i corresponds to the Poincaré dual of a 0-cell _i in the cell decomposition of the AHSS.By design, the action of g on each patch g(U_i) = {g∈ T^d | ∈ U_i} is some patch, and the group action on patch labels is denoted as i ↦ g(i). Namely, g(U_i) = U_g(i). Each patch U_i is contractible, and the Bloch states Φ_i, on each patch U_i can be chosen continuously. Let N be the number of occupied states. On a two-patch intersection U_ij = U_i ∩ U_j, the transition function is defined as t_ij, = Φ_i,^†Φ_j,∈U(N),∈ U_ij.The transition function t_ij, satisfies the following cocycle condition over a three-patch intersection U_ijl = U_i∩ U_j ∩ U_l:t_ij,t_jl,t_li, = 1_N_ occ, ∈ U_ijl.The intersection U_ij is generally not contractible and may have multiple connected components. Under certain symmetry constraints (gauge fixing conditions) on the Bloch states of each patch, the desired topological classification is obtained from the homotopy equivalence class of transition functions t_ij, invariant against residual gauge transformations.In each patch, due to the symmetry relation in Eq. (<ref>), the Bloch state Φ_i, defines a unitary matrix, called sewing matrix, w_i,(g ∈ G) ∈U(N) by U_(g) Φ_i,^ϕ_g = Φ_g(i),g w_i,(g),∈ U_i, g ∈ G. Here, note that the matrix w_i,, unlike U_(g), is defined only on the patch ∈ U_i and satisfies the same product structure (<ref>) as U_(g), namely, w_h(i),h(g) w_i,(h)^ϕ_g = z_gh(g,h) w_i,(gh),g,h ∈ G.The following fact is the starting point of construction: There exists a continuous unitary matrices V_i,∈ U_i on patches U_i such that w_i,(g) V_i,^ϕ_g = V_g(i),g e^-i g (-_i)·a_g w_i,_i(g), ∈ U_i,g ∈ G, hold true. 
See Appendix <ref> for a proof.This implies that through a gauge transformation Φ_i,↦Φ_i, V_i,, one can always fix the gauge in such a way that the symmetry operator in each patch can be the symmetry operator w_i,_i(g) at the 0-cell _i up to the factor e^-i g (-_i)·a_g determined solely by the magnetic space group data.It is noted that the unitary equivalence class of the representation matrix w_i,_i(g) at the 0-cell _i is determined by the numbers of irreps of the little group G__i = {g ∈ G|g_i=_i} at _i. Consequently, the gauge fixing condition for each patch depends only on the representation at 0-cell _i.Let us choose a gauge fixing condition {w_i,(g)}_i ∈ I, g∈ G that satisfies Eq. (<ref>).Under the gauge fixing condition (<ref>), the transition functions satisfy the following symmetry:w_i,(g) t_ij,^ϕ_g w_j,(g)^† = t_g(i)g(j),g, ∈ U_ij,g ∈ G.The residual gauge transformations, which leave the gauge fixing condition (<ref>) invariant, are given by Φ_i,↦Φ_i, W_i,,W_i,∈U(N),with the matrices W_i, on U_i satisfying the condition w_i,(g) W_i,^ϕ_g w_i,(g)^† =W_g(i),g,∈ U_i,g ∈ G.Under the residual gauge transformation in (<ref>), the transition functions change as t_ij,↦ W_i,t_ij,W_j,^†. The set of homotopy equivalence classes of the transition functions t_ij,, when divided by the equivalence relation generated by the residual gauge transformation in Eq. (<ref>), yields the data for topological invariants. The classification of topological invariants defined by the transition functions t_ij, is generally difficult. However, when limiting the Bloch states to the 1-skeleton of momentum space, the topological invariants are classified by E_2^1,-1 and constructed explicitly, as discussed in the next section.§.§ The construction of _2 topological invariants over 1-skeletonIn this section, we discuss the construction of _2 topological invariants defined from the occupied states over the 1-skeleton in momentum space. In the following discussions, we assume that any set of representations of occupied states at 0-cells satisfies the compatibility relations to ensure the existence of the gap over the 1-skeleton, resulting in that the set of representations at 0-cells is an element of E_2^0,0. §.§.§ Introducing a gauge constraint over 1-cellsLet s(a) and t(a) be the start and terminal point of a 1-cell a, respectively [see Fig. <ref>].We choose a representative point _a ∈ a and examine the symmetry constraint of the transition function at the point _a:t__a = Φ_s(a),_a^†Φ_t(a),_a∈U(N). Let us define the subgroup G_a = {g ∈ G | g =for^∀∈ a}, whose elements keep points on 1-cell a invariant. The transition function then satisfies the symmetry constraint in Eq. (<ref>):w_s(a),_a(g) t__a^ϕ_g w_t(a),_a(g)^† = t__a,g ∈ G_a. Although w_s(a),_a(g) and w_t(a),_a(g) are unitary equivalent as representations of G_a with the common factor system z__a(g,h), w_s(a),_a(g) and w_t(a),_a(g) are generally different.As a result, no apparent constraint arises in eigenvalues of the transition function t__a To make restrictions on the eigenvalues explicit, we fix a representation matrix w_a,(g) (g∈ G) for 1-cell a, which obeys the globally defined factor system z_(g,h): w_h(a),h(g) w_a,(h)^ϕ_g = z_gh(g,h) w_a,(gh),g,h ∈ G.Here, g(a) denotes a 1-cell mapped from 1-cell a by g ∈ G.The matrix w_a,(g) can be considered as a gauge fixing condition on 1-cell: U_(g) Φ_a,^ϕ_g = Φ_g(a),g w_a,(g),g ∈ G. 
Let V_i→ a, denote the unitary matrix for a basis transformation from the 0-cell i to an adjacent 1-cell a:w_i,(g) V_i→ a,^ϕ_g = V_i→ a, w_a,(g), g ∈ G_a.(As will be detailed later, V_i→ a, is not unique.)Using this basis transformation, we redefine t̃__a:= V_s(a) → a,_a^† t__a V_t(a)→ a,_a. The transition function t̃__a then satisfies the symmetry w_a,_a(g) t̃__a^ϕ_g w_a,_a(g)^† = t̃__a, g ∈ G_a,imposing a constraint on the eigenvalues of t̃__a.§.§.§ Preliminary discussion: For block diagonalized w_a,𝐤_a(g) To examine the eigenvalue structure of t̃__a, we first consider the case where the representation matrices w_a,_a(g) are block diagonalized for each irrep. Let G^0_a = {g ∈ G_a | ϕ_g = 1} be the subgroup of G_a consisting of unitary elements. Denote the irreducible character of α-irrep of G_a^0 as χ^α_(g ∈ G_a^0). When G_a \ G^0_a ≠∅, choose a representative t∈ G_a \ G^0_a, and tα denotes the irrep related to α by t. Note that t^2 ≠ e in general.The character of tα is given by χ_^ tα(g) = z__a(g, t)/z__a( t, t^-1g t)χ__a^α( t^-1g t)^* for ^∀ g ∈ G_a^0.Choose a set of representation matrices for each irrep and denote them as ρ^α_(g). Let D_α be the matrix dimension (representation dimension) of ρ^α_(g). We consider the following block diagonalized form for the representation matrixw_a,_a(g) = ⊕_αρ^α__a(g) ⊗1_n_α (g∈ G_a^0),where n_α is the number of α-irrep contained in w__a(g).With this choice, t̃__a is block diagonalized ast̃__a = ⊕_α1_ D_α⊗t̃^α__a, t̃^α__a∈U(n_α). Furthermore, due to t-symmetry, the unitary matrix t̃^α__a of each sector is subjected to further constraint depending on the EAZ class of α-irrep, as described below. EAZ class A, i.e., no antiunitary symmetry present. In this case, there are no further restrictions on t̃^α__a.Since π_0[U(n_α)]=0, no topologically nontrivial classification arises.In particular, t̃^α__a can take any U(1) value. EAZ class A_T, i.e., antiunitary symmetry t exists but α and tα are different. In this case, there are no further restrictions on t̃^α__a.Since π_0[U(n_α)]=0, no topologically nontrivial classification arises.For ρ^α_(g), the representation matrix of the irrep tα can be fixed as ρ^ tα_(g) = z__a(g, t)/z__a( t, t^-1g t)ρ__a^α( t^-1g t)^* for g∈ G_a^0. In doing so, since t( tα) ∼α, there exists a unitary matrix U∈U(_α) such that ρ^α_(g)= z_(g, t)/z_( t, t^-1g t) U ρ^ tα_(g)^* U^† and U is given by z_( t, t)ρ^α_( t^2). As a result, the representation matrix of t in the α⊕ tα sector becomesw_a,( t)|_(α⊕ tα)^⊕ n_α = [ O z_( t, t)ρ^α_( t^2);1__α O; ]⊗1_n_α.Note that _α=_ tα and n_α=n_ tα.Under this representation, due to t-symmetry, t̃^ tα__a = (t̃^α__a)^* holds.In particular,t̃^α__at̃^ tα__a = 1.EAZ class AI. There exists a unitary matrix ρ^α_( t)∈U( D_α) such that ρ^α_( t) ρ^α_( t)^* = z_( t, t) ρ^α_( t^2) <cit.>.The representation matrix of t of α-irrep is given by w_a,|_α^⊕ n_α = ρ^α_( t) ⊗1_n_α. Consequently, (t̃^α__a)^*=t̃^α__a due to t-symmetry, meaning that t̃^α__a∈O(n_α). Since π_0[O(n_α)]=_2, a _2 classification arises from the transition functions on 1-cell, which is determined by the sign t̃^α__a∈{± 1}. It turns out that the sign t̃^α__a depends on the basis transformation V_i→ a,. For a triple of representation matrices w_i,(g), w_a,(g) and the basis transformation V_i→ a,, let us consider a substitution of V_i→ a, as V_i→ a,↦ V_i→ a,δ V_i → a,, w_a,(g) (δ V_i→ a,)^ϕ_g w_a,(g)^† = δ V_i→ a,.This substitution does not alter the relation (<ref>).Considering the form of w_a,(g) in Eq. (<ref>) and the symmetry relation in Eq. 
(<ref>), we haveδ V_i→ a, = ⊕_α1_ D_α⊗δ V_i→ a,^α. In particular, for the block of α-irrep whose EAZ class is AI, δ V_i→ a,^α∈O(n_α). This substitution leads to a change in the unitary matrix t̃^α__a defined in Eq. (<ref>):t̃^α__a↦ (δ V^α_s(a)→ a,_a)^†t̃^α__aδ V^α_t(a)→ a,_a. Consequently, the sign t̃^α__a changes as well:t̃^α__a↦ (δ V^α_s(a)→ a,_a)^†t̃^α__aδ V^α_t(a)→ a,_a.EAZ class AII. Due to Kramers' degeneracy, n_α is an even number. There exists a unitary matrix ρ^α_( t)∈U( D_α) such that ρ^α_( t) ρ^α_( t)^* = -z_( t, t) ρ^α__a( t^2). Using this matrix ρ^α__a( t), the representation matrix of t in the α-block is given by w_a,|_α^⊕ n_α = ρ^α_( t) ⊗ (σ_y) ⊗1_n_α/2. Consequently, due to t-symmetry, (σ_y)(t̃^α__a)^* (σ_y)^†=t̃^α__a, meaning that the t̃^α__a matrix belongs to the symplectic group Sp(n_α/2) = {X ∈U(n_α) | X^⊤ (σ_y) X = σ_y}. Since π_0[Sp(n_α/2)]=0, no nontrivial classification arises. Furthermore, as the eigenvalues of the matrix t̃^α__a appear in complex conjugate pairs (λ,λ^*), t̃^α_=1. §.§.§ Gauge invariant expression of t̃^α_𝐤_a The U(1)-valued quantity t̃^α__a is definable even if the representation matrix w_a,(g) is not block diagonalized as in the expression (<ref>).By introducing the orthogonal projection onto the α-irrep defined by P^α_a,= D_α/|G_a^0|∑_g ∈ G_a^0χ_^α(g)^* w_a,(g),the projection of the transition function t̃__a onto the α-sector is given byP^α_a,_at̃__a P^α_a,_a = P^α_a,_at̃__a = t̃__a P^α_a,_a.The non-zero eigenvalues of the projection P^α_a,_at̃__a P^α_a,_a consist of n_α eigenvalues λ_1,…λ_n_α, and each of them are D_α-fold degenerated.We defineξ^α__a(t̃__a):= ∏_i=1^n_αλ_i.The discussion in the previous section can be summarized for each EAZ class as:[ A: ξ__a^α(t̃__a) ∈U(1),; A_T: ξ__a^α(t̃__a)=ξ__a^ tα(t̃__a)^* ∈U(1),;AI:ξ__a^α(t̃__a) ∈{± 1},; AII: ξ__a^α(t̃__a)=1.;] Some remarks are in order. — The U(1)-valued quantity ξ__a^α(t̃__a) behaves as a product with respect to direct sums: ξ__a^α(t̃__a⊕t̃'__a) =ξ__a^α(t̃__a) ξ__a^α(t̃'__a). — As can be seen from the expression ξ__a^α(t̃__a) = ξ__a^α(V_s(a)→ a,_a^†Φ_s(a),_a^†Φ_t(a),_a V_t(a)→ a,_a), it is emphasized again that the U(1)-valued quantity ξ__a^α(t̃__a) depends on the basis transformation matrix V_i→ a,. — For irreps with EAZ class AI, a set of signs ξ__a^α(t̃__a) represents an entry of E_1^1,-1 in the AHSS. §.§.§ Residual patch gauge transformationWe see how the residual patch gauge transformation (<ref>) changes the U(1)-valued quantity ξ^α__a(t̃_𝐤_a). The transition function t̃__a defined in Eq. (<ref>) changes under a gauge transformation as t̃__a↦W̃_s(a),_at̃__aW̃_t(a),_a^†, W̃_i,:= V_i→ a,^† W_i, V_i→ a,, where the matrices W̃_s(a), and W̃_t(a), satisfy the same symmetry (<ref>) as δ V_i→ a, with the representation matrix w_a,(g); namely, w_a,(g) W̃_i,^ϕ_gw_a,(g)^† = W̃_i,for g∈ G_a. Thus, all of W̃_s(a),_a, t̃__a, and W̃_t(a),_a^† are block diagonalized in α-sector.Consequently,ξ^α__a(W̃_s(a),_at̃__aW̃_t(a),_a^†)=ξ^α__a(W̃_s(a),_a) ξ^α__a(t̃__a) ξ^α__a(W̃_t(a),_a^†)holds, meaning that it becomes the product of each U(1) value. Moreover, the U(1) value ξ^α__a(t̃__a) in Eq. (<ref>) is defined by eigenvalues, and thus it does not depend on the choice of the representation matrix w_a,(g ∈ G_a).Then, it follows thatξ^α__a(W̃_s(a),_at̃__aW̃_t(a),_a^†)=ξ^α__a(W_s(a),_a) ξ^α__a(t̃__a) ξ^α__a(W_t(a),_a^†). In particular, when the EAZ class of α-irrep is AI, the value ξ^α__a(W_s(a),_a) is quantized to _2, thus it does not depend on the momentum ∈ a on the 1-cell a. 
Therefore, it coincides with the sign at the 0-cell s(a):ξ^α__a(W_s(a),_a)= .ξ^α__a(W_s(a),_a) |__a →_s(a). Recall that momentum at 0-cell i is denoted as _i.The same applies to ξ^α__a(W^†_t(a),_a).It should be noted that, although the above . ξ^α__a(W_i,_a)|__a →_i is defined at 0-cell _i, it is computed for irreps on the adjacent 1-cell.On the other hand, with the stabilizer group G__i = {g ∈ G| g _i = _i}, a U(1)-valued quantity ξ^β(_i)__i(W_i, _i) is similarly defined for each irrep β(_i) of the unitary subgroup G__i^0 = ϕ∩ G__i.The irreducible decomposition of irrep β(_i) of G__i^0 into the irreps {α}_α of G^0_a is given byβ(_i)|_G_a^0 = ⊕_αα^⊕ n^β(_i)_α.Then, the following relation holds:. ξ^α__a(W_i,_a)|__a →_i = ∏_β(_i)[ξ^β(_i)__i(W_i, _i)]^n^β(_i)_α.For proof, see Appendix <ref>.Note that if G_a includes an antiunitary element t, then G__i also includes t∈ G__i.For an α-irrep of G_a with EAZ class AI, it follows that the EAZ of the irrep β(_i) on the right-hand-side of Eq. (<ref>) is either AI, AII, or A_T. Furthermore, considering Eq. (<ref>), the contribution from AII is ξ^β(_i)__i(W_i, _i)=1, and from A_T is ξ^β(_i)__i(W_i, _i) ξ^ tβ(_i)__i(W_i, _i)=1, leaving only contributions from AI.Therefore, we obtain.ξ^α∈ AI__a(W_i, _a)|__a →_i =∏_β(_i) ∈ AI[ξ^β(_i)__i(W_i, _i)]^n^β(_i)_α.Here, ∏_β(_i) ∈ AI denotes the product over irreps of G^0__i whose EAZ is given by AI.Eventually, the change in the sign ξ^α∈ AI__a(t̃_𝐤_a) ∈{± 1} under the residual patch gauge transformation (<ref>) is given byξ^α∈ AI__a(t̃_𝐤_a) ↦ξ^α∈ AI__a(t̃_𝐤_a) ×∏_β(_s(a)) ∈ AI[ξ^β(_s(a))__s(a)(W_s(a),_s(a))]^n^β(_s(a))_α×∏_β(_t(a)) ∈ AI[ξ^β(_t(a))__t(a)(W_t(a),_t(a))]^n^β(_t(a))_α.This relation (<ref>) is nothing but the differential d_1^0,-1:E_1^0,-1→ E_1^1,-1 in the AHSS, and therefore, the _2 invariants over the 1-skeleton are classified by the group E_1^1,-1/ d_1^0,-1. The group E_1^1,-1/ d_1^0,-1 includes _2 invariants that detect gapless points at points inside 2-cells, which are called “π-Berry phase” in literature.Such _2 invariants are given by the coimage Coim d_1^1,-1 of the differential d_1^1,-1:E_1^1,-1→ E_1^2,-1 and depend on a choice of cell decomposition.In contrast, E_2^1,-1= d_1^1,-1/ d_1^0,-1 is independent of cell decomposition.Therefore, E_2^1,-1 gives the classification of _2 invariants that characterize gapped phases over 2-skeleton, i.e., insulators and gapless points at a general point inside the 3-cell, independent of the cell decomposition.§.§.§ Gauge-invariant product for _2 invariant The gauge invariant combination of the signs ξ^α∈ AI__a(t̃_𝐤_a) constitutes a _2 invariant, which can be calculated from the first differential in AHSS.The construction is parallel to that discussed in Section <ref>, except that it is _2-valued rather than -valued. Hence, only the results are briefly stated here.Let E_1^p,-1 = ⊕_i=1^N_p_2[b_i^(p)]. We denote the matrix composed of basis vectors as B^(p) = (b_1^(p),…,b_N_p^(p)), and write the first differential asd_1^p,-1(B^(p)) = B^(p+1) M_d_1^p,-1,where M_d_1^p,-1 is a _2-valued N_p+1× N_p matrix.Let the Smith normal form of M_d_1^1,-1 beU^(1) M_d_1^1,-1 V^(1)=[ 1_r_1 O; O O; ],and define the matrix Y by[V^(1)]^-1 M_d_1^0,-1 = [ O_r_1 × N_0; Y ].The Smith normal form of Y is denoted byU^(0)YV^(0)=[ 1_R_0 O; O O; ],and the corresponding basis transformation matrix is defined byX^(1) = V^(1)(1_r_1⊕ [U^(0)]^-1). 
We introduce the vectors x_1,…,x_N_1 by[X^(1)]^-1 := ([ x_1 x_2 ⋯ x_N_1 ])^⊤.For a gapped band structure E on the 1-skeleton, let us define a _2-valued ν_i(E)∈{0,1} as(-1)^ν_i(E) := ∏_a,α [ξ^α__a(t̃__a)]^[x_i]_(a,α), i=1,…,N_1,where the components of the vector x_i are indexed by the pairs (a,α) representing the basis of E_1^1,-1, consisting of a 1-cell a and an α-irrep on a whose EAZ class is AI. Thus, ν_1(E),…,ν_r_1(E) correspond to _2 invariants that detect gapless points in the 2-cell protected by the π-Berry phase within the given cell decomposition, while ν_r_1+R_0+1(E),…,ν_N_1(E) correspond to _2 invariants characterizing the gapped Bloch wave functions on the 1-skeleton.On the other hand, ν_r_1+1(E),…,ν_r_1+R_0(E) are not invariants as they change under residual patch gauge transformations. §.§.§ Incompatibility of _2 invariants with band sum As noted in Secs. <ref> and <ref>, the sign ξ^α∈ AI__a(t̃_𝐤_a) ∈{± 1} depends on the basis transformation V_i → a,.Therefore, it is not immediately clear whether the _2 invariant ν(E) maintains an additive structure for the direct sum of bands E ⊕ F, i.e., whether ν(E ⊕ F) ≡ν(E) + ν(F) holds, where “≡” indicates that the values of the right-hand and left-hand sides are equal modulo two in the following.As discussed below, we discover that the relation does not hold and that a correction term is involved.As we are interested in the direct sum E ⊕ F of bands, we introduce an index B ∈{E,F} specifying the bands E and F.Now, we order the irreps at each 0-cell i, which is labeled by β(_i) = 1(_i),…,m_i(_i).Similarly, for each 1-cell a, order the irreps as α(_a)=1(_a),…,m_a(_a) and denote the representation matrix of irrep α(_a) as u^α(_a)__a(g).The irreducible decompositions of band B at 0-cell i and 1-cell a can be written as ⊕_β=1^m_i n^B_β(_i)β(_i) and ⊕_α=1^m_a n^B_α(_a)α(_a) with n^B_β(_i)∈_≥ 0 and n^B_α(_a)∈_≥ 0, respectively. After some calculations, we find that ν_i(E ⊕ F) ≡ν_i(E)+ν_i(F)+δν_i(E|_E_2^0,0,F|_E_2^0,0), where (-1)^δν_i(E|_E_2^0,0,F|_E_2^0,0) := ∏_(a,α)[δξ^α(_a)__a,s(a) → a(E|_E_2^0,0,F|_E_2^0,0) δξ^α(_a)__a,t(a) → a(E|_E_2^0,0,F|_E_2^0,0)]^[x_i]_(a,α);δξ^α(_a)__a,i → a(E|_E_2^0,0,F|_E_2^0,0) =(-1)^∑_1≤β_F<m_i∑_β_F<β_E≤ m_i n^F_β_F(_i)n^β_F(_i)_α(_a) n^E_β_E(_i)n^β_E(_i)_α(_a)∈{± 1},where the notation B|_E_2^0,0 represents the restriction to the 0-cell, and the correction term indeed depends only on elements of E_2^0,0 in the 0-cell.See Appendix <ref> for a derivation of Eqs. (<ref>)-(<ref>).It is important to note that ν_i(E ⊕ F) ≡ν_i(F ⊕ E) may not always hold true.§.§.§ Quadratic refinement and redefinition of _2 invariants As seen in Eq. (<ref>), the introduced _2 invariants do not follow the sum rule. Then, it is natural to ask if it is possible to redefine _2 invariants that satisfy linearity for the direct sum of bands. Here we introduce a technique, known as quadratic refinement, to construct the desired _2 invariants from ν. From Eq. 
(<ref>), we haveδν_i(E|_E_2^0,0,F|_E_2^0,0)≡∑_(a,α) [x_i]_(a,α)[ ∑_β_E(_s(a)),β_F(_s(a)) 1≤β_F(_s(a))<β_E(_s(a))≤ m_s(a) n^F_β_F(_s(a))n^β_F(_s(a))_α(_a) n^E_β_E(_s(a))n^β_E(_s(a))_α(_a) + ∑_β_E(_t(a)),β_F(_t(a)) 1≤β_F(_t(a))<β_E(_t(a))≤ m_t(a) n^F_β_F(_t(a))n^β_F(_t(a))_α(_a) n^E_β_E(_t(a))n^β_E(_t(a))_α(_a)]2.As the number of irreps behaves linearly with respect to the direct sum of bands, n^E⊕ F_β(_i) = n^E_β(_i)+n^F_β(_i), the correction δν_i(E|_E_2^0,0,F|_E_2^0,0) is a bilinear formδν_i: E_2^0,0× E_2^0,0→_2.We aim to redefine the invariant ν_i(E) using δν_i(E|_E_2^0,0,F|_E_2^0,0) so that the redefined _2 invariant satisfies linearity for the direct sum of bands. As will be discussed later, such a redefinition is possible if δν_i is symmetric.Based on an empirical rule, we conjecture the following: The bilinear form δν_i is symmetric for the invariants of gapless points i ∈{1,…,r_1} and for the invariants of gapped insulators i ∈{r_1+R_0+1,…,N_1}. In other words, δν_i(n,n') ≡δν_i(n',n), n,n' ∈ E_2^0,0i ∈{ 1,…,r_1, r_1+R_0+1,…,N_1} holds true. We numerically verified conjecture (<ref>) to be correct for the 528 types of 2D magnetic layer and 1651 types of 3D magnetic space groups in spinless and spinful electronic systems.We leave an analytical proof applicable to arbitrary spatial dimensions and factor systems for a future problem.Note that for indices i ∈{r_1,…,r_1+R_0} for gauge-dependent ones, and for representations n,n' ∈ E_1^0,0 that do not satisfy the compatibility relations, Eq. (<ref>) does not hold. For a given symmetric bilinear form δν_i, there exists a function q_i: E_2^0,0→_2 (note that it is not linear) satisfying:δν_i(n,n') ≡ q_i(n+n') + q_i(n) +q_i(n'), i ∈{ 1,…,r_1, r_1+R_0+1,…,N_1}. The function q_i is referred to as the quadratic refinement of δν_i.This quadratic refinement q_i is not unique and has the redundancy of Hom(E_2^0,0,_2).For the existence proof, see Appendix <ref>.We redefine ν_i(E) using this q_i as follows:ν̃_i(E) :≡ν_i(E) + q_i(E|_E_2^0,0), i ∈{ 1,…,r_1, r_1+R_0+1,…,N_1}.Then, ν̃_i(E ⊕ F) ≡ν̃_i(E) + ν̃_i(F)2, i ∈{ 1,…,r_1, r_1+R_0+1,…,N_1},holds true, constituting a _2 invariant that behaves linearly with respect to the direct sum of bands.A generic form of the quadratic refinement q_i is given below.Choose a basis set of E_2^0,0 such that E_2^0,0 = ⊕_ρ=1^d [b_ρ] and write [δν_i]_ρσ = δν_i(b_ρ,b_σ).Using the floor function ⌊ x ⌋ = max{n ∈ | n ≤ x}, a generic form of q_i from (<ref>) isq_i(∑_ρ=1^d n_ρb_ρ)≡∑_ρ=1^d ⌊n_ρ/2⌋ [δν_i]_ρρ+ ∑_1≤ρ<σ≤ d [δν_i]_ρσ n_ρ n_σ + ∑_ρ=1^d a_ρ n_ρwhere (a_1,…,a_d) ∈{0,1}^× d represent the redundancy in quadratic refinement and can be freely chosen.§.§.§ Comments on the approach from symmetry operatorsWithout imposing symmetry, the transition functions on 1-cells is an element of the unitary group U(N) which is path connected, allowing for the existence of global and continuous Bloch states Φ_ on the 1-skeleton of the momentum space.Then, by defining w_(g) = Φ_g^† U_(g) Φ_^ϕ_g for g∈ G, a continuous matrix of symmetry transformations w_(g) on the 1-skeleton is obtained.It is expected that the set of the homotopy equivalence classes of symmetry matrices {w_(g)}_g ∈ G up to gauge transformations of the Bloch state Φ_ may provide _2 invariants.This paper does not discuss this approach further and leaves it as a future direction. 
§.§ Symmetry-enriched Berry phase In the previous section <ref>, we constructed a _2 invariant (<ref>) defined on the 1-skeleton of the momentum space under the gauge fixing condition (<ref>).However, requirements of continuous Bloch states and symmetry constraints are not practical for numerical calculations.In this section, we show how to construct an invariant from Bloch states which are independently given at each point in the mesh-approximated momentum space.Note that this paper only outlines the approach and some specific examples, leaving the numerical implementation for arbitrary symmetry classes as a future problem. For a 1-cell _0_1 = {(1-t)_0 + t _1| t ∈ [0,1]}, let Φ^α_ = (ϕ^α_1,,…, ϕ^α_N_ occ^α,) denote the orthonormal set of occupied states of the Hamiltonian H_ belonging to α-irrep.Here, N_ occ^α∈ D_α×_≥ 0 is the number of occupied states with α-irrep.A U(1)-valued Wilson line is defined as e^γ^α__0 _1 := lim_𝒩→∞∏_j = 0^𝒩-1 (Φ^α__0 + (j+1)δ)^†Φ^α__0 + jδ.Here, δ = (_1 - _0)/𝒩.With a smooth gauge of Φ_, it can be rewritten as e^γ^α__0 _1 = exp[ - ∫__0^_1 A^α_]with A^α_ = (Φ^α_)^† d Φ^α_ the Berry connection.The Wilson line e^γ^α__0 _1 is not gauge invariant as it changes under gauge transformations at the endpoints Φ^α_↦Φ^α_ W^α_, W^α_∈U(N_ occ^α), as seen in e^γ^α__0 _1↦e^γ^α__0 _1 W^α__0/ W^α__1.When the start and endpoints coincide _0=_1, i.e., for a loop, the Wilson line becomes a gauge invariant and is referred to as the Berry phase. However, in constructing invariants from the AHSS, unlike the usual Berry phase, the start and end points do not always coincide. Thus, it is unclear whether the group E_2^1,-1 can be translated into a gauge-invariant Berry phase expression. Nevertheless, we find several cases for which the gauge dependence of the Wilson line at the endpoints (<ref>) can be canceled using TRS and band degeneracy at high-symmetry points. Before moving to specific examples in Secs. <ref> and <ref>, we summarize some patterns for acquiring gauge invariance below. It is currently unclear whether the scenarios described here are exhaustive. §.§.§ Class AII TRS and Pfaffian In class AII, we have TRS with U() U()^* = -1.At time-reversal invariant momentum (TRIM) = -+^∃G (G a reciprocal lattice vector), the Pfaffian can be defined:[ Φ_^† U( T) Φ_^* ] ∈U(1). The Pfaffian changes under a gauge transformation Φ_↦Φ_ W_ as follows:[ Φ_^† U( T) Φ_^* ] ↦[ Φ_^† U( T) Φ_^* ]W_^*. Using this property, the Wilson line along a line segment connecting two TRIMs _0 and _1 can be corrected to be gauge invariant:e^γ^ T__0_1=e^γ__0_1×[ Φ__0^† U( T) Φ__0^* ]/[ Φ__1^† U( T) Φ__1^* ]. Here, γ^ T__0_1/(2π)1 corresponds to the time-reversal polarization <cit.>. In this way, a gauge invariant quantity is defined on a line segment _0_1, not on a loop. The time-reversal polarization e^γ^ T__0_1 may give various _2 invariant in spinful systems in 1-skeleton if it is quantized and also constitute invariants defined on 2-skeleton like the Kane-Mele invariant <cit.>.§.§.§ Class AI TRS and source/sink of Wilson lines In class AI, we have TRS with U() U()^* = 1. The determinant of the sewing matrix of the TRS operator for the occupied states at TRIM = -+^∃G (G a reciprocal lattice vector), [ Φ_^† U( T) Φ_^* ], takes a value in U(1). Under gauge transformations Φ_↦Φ_ W_, it transforms as follows:[ Φ_^† U( T) Φ_^* ] ↦[ Φ_^† U( T) Φ_^* ] ( W_^*)^2. 
Using this gauge transformation property, the product of Wilson lines along two effective line segments _0_1, _0_2 emanating from a TRIM _0 can be corrected to acquire gauge invariance at _0:e^γ__0_1× e^γ__0_2×[ Φ__0^† U( T) Φ__0^* ].This expression is not gauge invariant at the points _1, _2, but it may be made gauge invariant through contributions from other line segments. For detailed examples, see Sec. <ref>.§.§.§ Band degeneracy at high-symmetry point There is a method to connect Wilson lines in a gauge invariant way by utilizing band degeneracy at high-symmetry points.Let us illustrate this through the following simple example.We consider a spinful electronic system with the space group P222. Let us write the elements of the point group as D_2 = {e,C_2^x,C_2^y,C_2^x C_2^y}. At the point Γ = (0,0,0), due to the non-commutativity of the representation matrices U(C_2^x)U(C_2^y)=-U(C_2^y)U(C_2^x), the irrep is given by a single two-dimensional representation.On the other hand, along the symmetric line Σ = ΓX (X=(π,0,0)), there exist two one-dimensional irreps Σ_3, Σ_4, specified by the irreducible characters χ_^Σ_3(C_2^x)=-i and χ_^Σ_4(C_2^x)=i. Similarly, along the symmetric line Δ = ΓY (Y=(0,π,0)), there are two one-dimensional irreps Δ_3, Δ_4 with irreducible characters χ_^Δ_3(C_2^y)=-i and χ_^Δ_4(C_2^y)=i. At the Gamma point, the two irreps along the Σ and Δ lines become degenerate and form a single two-dimensional irrep. The Σ_4-irrep has a finite overlap with Δ_4-irrep at the Gamma point, allowing the Wilson lines to be connected in a gauge invariant way. In fact, if the orthonormal set of occupied states with Σ_4-irrep at the Gamma point is denoted by Φ^Σ_4_Γ, then the orthonormal set with Δ_4-irrep can be given, apart from a U(N_ occ) gauge degree of freedom (where N_ occ is the number of occupied states for Σ_4-irrep), as Φ^Δ_4_Γ = 1/√(2)(U(e)+ χ^Δ_4_Γ(C_2^y)^* U(C_2^y)) Φ^Δ_4_Γ. Then, (Φ^Σ_4_Γ)^†Φ^Δ_4_Γ = 1/√(2)1_N_ occ. Therefore, when the orthonormal sets Φ^Σ_4_ and Φ^Δ_4_ are given on the Σ and Δ lines independently, the following product takes values in U(1) and does not have gauge ambiguity at the Gamma point:e^γ^Σ_3_XΓ× e^γ^Δ_3_ΓY×[ (Φ^Δ_3_Γ)^†Φ^Σ_3_Γ] × 2^N_ occ/2. Again, the above quantity is not invariant under gauge transformations at X and Y.However, we could obtain gauge-independent invariants by combining other techniques.See Sec. <ref> for an example of this scenario. §.§.§ EAZ class AI and _2-quantization of Berry phaseWhen EAZ class on a 1-cell is AI, the Berry phase on the 1-cell is essentially _2-quantized, as described below.For simplicity, let us consider an antiunitary symmetry constraint U_(a)H_^* U_(a)^†=H_ with U_(a) U_(a)^*=1 atinside of a 1-cell PQ. The orthonormal set of occupied states Φ_ defines unitary matrix w_(a) = Φ_^† U_(a) Φ_^* ∈U(N_ occ).Note that w_(a)^⊤=w_(a). Then, the U(1)-valued Wilson line e^γ_PQ has the following symmetry constraint:e^2γ_PQ =e^∫_P→Q tr [Φ_^† U_(a)^T dU_(a)^* Φ_]× w_Q(a)/ w_P(a).If the sign ambiguity of the square root is denoted as (-1)^ν, then e^γ_PQ =e^1/2∫_P→Q tr [Φ_^† U_(a)^T dU_(a)^* Φ_]× (-1)^ν×√( w_Q(a)/ w_P(a)).Note that the first factor on the right-hand side is independent of the U(N_ occ) gauge of Φ_. 
Equation (<ref>) implies that the U(1) Wilson line, when fixing the gauge of the occupied states Φ_P and Φ_Q at the endpoints, is _2-quantized apart from the first factor on the right-hand side. In particular, when the start and end points are the same, P=Q (namely, for a loop), the Berry phase becomes e^γ_P→P =(-1)^ν× e^1/2∫_P→P tr [Φ_^† U_(a)^T dU_(a)^* Φ_], where the factor (-1)^ν provides a _2 invariant. This point is elaborated on in Sec. <ref>.

§.§.§ EAZ class AII and triviality of Berry phase

On the other hand, when the EAZ class on a 1-cell is AII, the U(1) Wilson line always takes a trivial value in the following sense. For simplicity, consider an antiunitary symmetry with U_(a)U_(a)^*=-1. In this case, since w_(a)^⊤=-w_(a), the Pfaffian of w_(a) is defined, and the U(1) Wilson line is determined by the occupied states at the endpoints, except for a gauge-invariant factor: e^γ_P→Q =e^1/2∫_P→Q tr [Φ_^† U_(a)^T dU_(a)^* Φ_]×[Φ_Q^† U_Q(a) Φ_Q^*]/[Φ_P^† U_P(a) Φ_P^*]. Therefore, in the case of EAZ class AII, there is no nontrivial classification on a 1-cell.

§.§ Example: one-dimensional P-symmetric systems

We begin by discussing a well-studied example: P-symmetric one-dimensional systems with U_k(P)[U_k(P)]^* = +1 <cit.>. Our cell decomposition is shown in Fig. <ref>, where C_0={A, A'} = {k = -π, k = π} and C_1 = {a} = {k |k ∈ (-π, π)}. Since the relation H_k=-π=H_k=π always holds, E_1^1,-1 = ^ϕ K^(z,c)+0_G/Π(X_1 = T^1,X_0 = C_0) ≃π_0(O(N)) = _2. Furthermore, since d_1^0,-1 is trivial, E_2^1,-1 = E_1^1,-1=_2. As discussed in Sec. <ref>, when we fix the gauge degrees of freedom, the determinant of the transition function is a _2 invariant, which is given by t̃_AA' := Φ^†_A, k=0Φ_A', k=0∈O(N_occ), where Φ_A, k and Φ_A', k are wave functions around A and A' that satisfy U_k(P)Φ_A, k^*= Φ_A, k and U_k(P)Φ_A', k^* = Φ_A', k. Also, N_occ is the number of occupied bands. It should be noted that, although k = ±π are the same aside from a reciprocal lattice vector, it is not always the case that the gauge choices around A and A' are the same. Although we have a concrete expression of the _2 topological invariant, it is not practical since we need to fix the gauge degrees of freedom. For numerical computations, it is essential to obtain expressions of topological invariants that do not depend on gauge choices. To achieve this, we first define the following gauge-independent quantity e^iγ := lim_𝒩→∞∏_j = 0^𝒩-1Φ^†_- π + (j+1) δΦ_-π + jδ, where δ = 2π/𝒩 and Φ_k = - π = Φ_k = π. Next, we relate e^iγ to t_AA' by considering the two patches shown in Fig. <ref>. Then, e^iγ = lim_𝒩→∞[∏_j = 0^𝒩-1Φ^†_A', (j+1) δ/2Φ_A', jδ/2] × t_AA'^†[∏_j = 0^𝒩-1Φ^†_A, - π + (j+1) δ/2Φ_A, -π + jδ/2] = exp[-∫trΦ^†_A', kdΦ_A', k-∫trΦ^†_A, kdΦ_A, k] t_AA'^†. Furthermore, we impose the gauge condition U_k(P)Φ^*_k = Φ_k on the right-hand side. For this gauge condition, since ∫ dk trΦ^†_k∂_kΦ_k= - ∫ dk (trΦ^†_k∂_kΦ_k)^*, we have ∫trΦ^†_kdΦ_k = -1/2∫trΦ^†_k(U_k(P) dU^†_k(P))Φ_k, where the right-hand side is gauge-invariant. Finally, combining Eqs. (<ref>) and (<ref>), we arrive at the gauge-independent _2-valued topological invariant (-1)^ν = e^-iγexp[1/2∫_-π^πtrΦ_k^†(U_k(P)d U^†_k(P))Φ_k]. Let us comment on what this topological invariant indicates. In this symmetry setting, there are two inequivalent atomic insulators: one has electrons at x = R (R∈) and the other has electrons at x = R+1/2. This topological invariant is trivial for the former case and nontrivial for the latter case.
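To make the formula concrete, consider a minimal two-band model of our own (it is not taken from the cited references): H(k) = sin k σ_x + (m - cos k) σ_z with U_k(P) = 1, so that H_k^* = H_k and the gauge condition is trivially compatible. Since U_k(P) is independent of k, the correction integral in the formula above vanishes, and (-1)^ν reduces to the bare discretized Berry phase. The sketch returns -1 (nontrivial) for |m| < 1, where the vector (sin k, m - cos k) winds around the origin, and +1 (trivial) otherwise.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def occupied_state(k, m):
    """Lower eigenvector of H(k) = sin(k) sx + (m - cos(k)) sz, a real
    Hamiltonian, so the antiunitary symmetry holds with U_k(P) = 1."""
    H = np.sin(k) * sx + (m - np.cos(k)) * sz
    _, vecs = np.linalg.eigh(H)
    return vecs[:, [0]]                  # occupied (lower) band

def z2_invariant(m, N=400):
    """(-1)^nu: with U_k(P) = 1 the correction term vanishes, leaving the
    discretized Berry phase over the closed BZ loop."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    frames = [occupied_state(k, m) for k in ks]
    frames.append(frames[0])             # identify k = pi with k = -pi
    w = 1.0 + 0.0j
    for cur, nxt in zip(frames[:-1], frames[1:]):
        w *= np.linalg.det(nxt.conj().T @ cur)
    return round((w / abs(w)).real)

print(z2_invariant(m=0.5))   # -1 (nontrivial)
print(z2_invariant(m=2.0))   # +1 (trivial)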
§.§ Example: spinless systems in space group P222 with TRS Consider a spinless electronic system with space group symmetry P222 and TRS. The K-group is given by ^13+_2, and its generators are represented by atomic insulators. Moreover, comparing the E_2 pages of the momentum-space and real-space AHSS, it is found that the _2 part can be detected by the _2 invariant on the 1-skeleton <cit.>.This section illustrates constructing the _2 invariant based on the gauge fixing condition in Sec. <ref> and its Berry phase expression without a gauge fixing condition using the method described in Sec. <ref>. Let H_ be a Hamiltonian periodic in the BZ. Symmetries and factor systems are summarized in U_() H_^* U_()^† = H_-, U_(C_2^μ) H_ U_(C_2^μ)^† = H_C_2^μ,U_-()U_()^* = 1, U_C_2^μ(C_2^μ)U_(C_2^μ) = 1,U_C_2^μ()U_(C_2^μ)^*=U_-(C_2^μ)U_(),U_C_2^ν(C_2^μ)U_(C_2^ν)=U_C_2^μ(C_2^ν)U_(C_2^μ).Here, μ,ν∈{x,y,z}, and C_2^μ denotes a twofold rotation about the μ-axis. The cell decomposition is shown in Fig. <ref> (d). At TRIMs P∈{Γ,X,Y,S,Z,U,T,R}, four one-dimensional irreps β∈{A,B_1,B_2,B_3} exist, and their irreducible characters are given below, independent of TRIMs:[ χ^β(g)eC_2^xC_2^yC_2^z;A1111;B_11 -1 -11;B_21 -11 -1;B_311 -1 -1;] On the other hand, two one-dimensional irreps α∈{A,B} exist along the twofold rotation axes (1-cells), with irreducible characters given in the following table:[ χ^α(g)eC_2^μ;A11;B1 -1;] The EAZ classes for them are all class AI.The compatibility relations along each rotation axis are given as:A = A ⊕ B_3, B = B_1 ⊕ B_2,A = A ⊕ B_2, B = B_1 ⊕ B_3,A = A ⊕ B_1, B = B_2 ⊕ B_3. As we are interested in invariants of insulators gapped on 1-skeleton, we impose the following compatibility conditions on every 1-cell. Let n_β(P) the number of the β-irreps at TRIM P.Depending on which twofold rotation axis the 1-cell PQ lies on, the compatibility conditions are represented as follows:: {[ n_A(P)+n_B_3(P)=n_A(Q)+n_B_3(Q),; n_B_1(P)+n_B_2(P)=n_B_1(Q)+n_B_2(Q),;].: {[ n_A(P)+n_B_2(P)=n_A(Q)+n_B_2(Q),; n_B_1(P)+n_B_3(P)=n_B_1(Q)+n_B_3(Q),;].: {[ n_A(P)+n_B_1(P)=n_A(Q)+n_B_1(Q),; n_B_2(P)+n_B_3(P)=n_B_2(Q)+n_B_3(Q).;]. The sublattice {{n_β(P)}_i,β∈ E_1^0,0≅^32| (<ref>), (<ref>), (<ref>)} that satisfies the above compatibility conditions is E_2^0,0. §.§.§ Constructing the _2 invariant using gauge fixing conditionIn the neighborhood U_P of TRIM P, the set of occupied states Φ_P,∈ U_P of H_ can be chosen as follows, based on section <ref>. First, it is assumed that Φ_P, is block-diagonalized by irreps at 0-cell P as in Φ_P, = Φ^A_P,⊕Φ^B_1_P,⊕Φ^B_2_P,⊕Φ^B_3_P,. Furthermore, for each block β∈{A,B_1,B_2,B_3}, the occupied states Φ_P,^β satisfy the following symmetries: U_(C_2^μ) Φ^β_P, = χ^β(C_2^μ) Φ^β_P, C_2^μ(-P)+P,U_() [Φ_P,^β]^* = Φ_P,-(-P)+P^β. In particular, at pointson the C_2^μ-axis where C_2^μ (-P)=-P, it holds that U_(C_2^μ) Φ^β_P, = χ^β(C_2^μ) Φ^β_P,.For each of the 12 independent 1-cells a shown in Fig. <ref> (d), we define transition functions t_a^α for each irrep α = A,B. Let us write the 1-cell going from TRIM P to Q by PQ and its midpoint as _PQ = (P+Q)/2. Here we focus on the 1-cell X. The Bloch states around 0-cellsand X are arranged in the order A ⊕ B_3 and B_1 ⊕ B_2 for A and B irreps over 1-cell, respectively, and then we define the transition function at the midpoint _X as t^A_X :=(Φ^A_,_X⊕Φ^B_3_,_X)^†(Φ^A_X,_X⊕Φ^B_3_X,_X), t^B_X :=(Φ^B_1_,_X⊕Φ^B_2_,_X)^†(Φ^B_1_X,_X⊕Φ^B_2_X,_X). 
Due to C_2^z symmetry, the following holds:(t^A_X)^* = (1_n_A()⊕-1_n_B_3()) t^A_X (1_n_A(X)⊕ -1_n_B_3(X)), (t^B_X)^* =(1_n_B_1()⊕ -1_n_B_2()) t^B_X (1_n_B_1(X)⊕ -1_n_B_2(X)).Under the gauge fixing conditions (<ref>) and (<ref>), the determinants of the transition functions of each sector are quantized as in ( t^A_X)^* = (-1)^n_B_3()+n_B_3(X) t^A_X, ( t^B_X)^* = (-1)^n_B_2()+n_B_2(X) t^B_X. Since the transition functions are unitary matrices, we can also write them as( t^A_X)^2 = (-1)^n_B_3()+n_B_3(X), ( t^B_X)^2 = (-1)^n_B_2()+n_B_2(X).For other 1-cells on the C_2^x-axis, define the transition function in the same way as Eqs. (<ref>) and (<ref>), and for the C_2^y,C_2^z axes,: t^A_PQ :=(Φ^A_P,_PQ⊕Φ^B_2_P,_PQ)^†(Φ^A_Q,_PQ⊕Φ^B_2_Q,_PQ), t^B_PQ :=(Φ^B_1_P,_PQ⊕Φ^B_3_P,_PQ)^†(Φ^B_1_Q,_PQ⊕Φ^B_3_Q,_PQ),: t^A_PQ :=(Φ^A_P,_PQ⊕Φ^B_1_P,_PQ)^†(Φ^A_Q,_PQ⊕Φ^B_1_Q,_PQ), t^B_PQ :=(Φ^B_2_P,_PQ⊕Φ^B_3_P,_PQ)^†(Φ^B_2_Q,_PQ⊕Φ^B_3_Q,_PQ). Note that by reversing the orientation of a 1-cell, we have t^α_QP=(t^α_PQ)^†. The constraints are summarized as:{[ ( t^A_PQ)^2 = (-1)^n_B_3(P)+n_B_3(Q),; ( t^B_PQ)^2 = (-1)^n_B_2(P)+n_B_2(Q),; ]. :{[ ( t^A_PQ)^2 = (-1)^n_B_2(P)+n_B_2(Q),; ( t^B_PQ)^2 = (-1)^n_B_3(P)+n_B_3(Q),; ].:{[ ( t^A_PQ)^2 = (-1)^n_B_1(P)+n_B_1(Q),; ( t^B_PQ)^2 = (-1)^n_B_3(P)+n_B_3(Q).; ].This defines 24 _2-valued quantitiesζ^α_PQ:=t^α_PQ,PQ∈{}, α∈{A,B}. Note that values of ζ^α_PQ depend on representations at 0-cells. In particular, if ζ^α_PQ∈{± i}, then ζ^α_QP = - ζ^α_PQ [ The definition of the _2 value ζ^α_PQ here is made without introducing the basis transformation matrix V_i→ a, as in Sec. <ref>, so it differs slightly from the _2 value ξ^α__a in Sec. <ref>. Therefore, ζ^α_PQ can take quantized values other than ± 1.].The combinations of ζ^α_PQ that are invariant under residual gauge transformations, which do not change the gauge fixing conditions (<ref>) and (<ref>), become the desired topological invariants.Consider a residual gauge transformation in the neighborhood of the 0-cell P, Φ_P,↦Φ_P, W_P,, W_P,∈U(N),where N is the number of occupied states. The gauge transformation W_P, should satisfy the symmetry (<ref>).That is, for the symmetry transformation of the occupied states w_P,(C_2^μ)= (χ^A(C_2^μ) 1_n_A(P))⊕(χ^B_1(C_2^μ) 1_n_B_1(P)) ⊕( χ^B_2(C_2^μ) 1_n_B_2(P))⊕(χ^B_3(C_2^μ) 1_n_B_3(P)),W_P, must satisfyw_P,(C_2^μ) W_P, w_P,(C_2^μ)^† = W_P,C_2^μ(-P)+Pfor μ∈{x,y,z} andW_P,^* = W_P,-(-P)+P. For example, over the C_2^x-axis satisfying C_2^x (-P) =-P, due to C_2^x symmetry, the gauge transformation takes a form as W_P, = [ a 0 0 b; 0 c d 0; 0 e f 0; g 0 0 h; ],. 
Let us denote W^A⊕ B_3_P, = [ a b; g h; ], W^B_1⊕ B_2_P, = [ c d; e f; ]. Due to C_2^z T symmetry, they satisfy (1_n_A(P)⊕ -1_n_B_3(P)) [W^A ⊕ B_3_P,]^*= W^A⊕ B_3_P, (1_n_A(P)⊕ -1_n_B_3(P)), (1_n_B_1(P)⊕ -1_n_B_2(P)) [W^B_1⊕ B_2_P,]^*= W^B_1⊕ B_2_P, (1_n_B_1(P)⊕ -1_n_B_2(P)). In particular, the determinant of each sector is quantized as W^A⊕ B_3_P,, W^B_1⊕ B_2_P,∈{± 1}, and they are constant along the C_2^x-axis. Moreover, at the TRIM = P, due to symmetry (<ref>), we have the block-diagonalized form W_P,P = W^A_P,P⊕ W^B_1_P,P⊕ W^B_2_P,P⊕ W^B_3_P,P, and due to (<ref>), the determinant of each sector is also quantized: η^β∈{A,B_1,B_2,B_3}_P:=W^β_P,P∈{± 1}. Therefore, W^A⊕ B_3_P, =W^A⊕ B_3_P,P=(W^A_P,P⊕ W^B_3_P,P ) =η^A_Pη^B_3_P. In this way, for a 1-cell PQ parallel to the C_2^x-axis, the transition function t^α_PQ transforms under a gauge transformation as t^A_PQ↦ (W^A ⊕ B_3_P,_PQ)^† t^A_PQ W^A⊕ B_3_Q,_PQ, t^B_PQ↦ (W^B_1 ⊕ B_2_P,_PQ)^† t^B_PQ W^B_1⊕ B_2_Q,_PQ, so the change of the _2 values is given by ζ^A_PQ↦ζ^A_PQη^A_Pη^B_3_Pη^A_Qη^B_3_Q, ζ^B_PQ↦ζ^B_PQη^B_1_Pη^B_2_Pη^B_1_Qη^B_2_Q. Similarly, for the C_2^y,C_2^z-axes, the gauge transformations of the _2 values ζ^α_PQ can be obtained: ζ^A_PQ↦ζ^A_PQη^A_Pη^B_2_Pη^A_Qη^B_2_Q, ζ^B_PQ↦ζ^B_PQη^B_1_Pη^B_3_Pη^B_1_Qη^B_3_Q, ζ^A_PQ↦ζ^A_PQη^A_Pη^B_1_Pη^A_Qη^B_1_Q, ζ^B_PQ↦ζ^B_PQη^B_2_Pη^B_3_Pη^B_2_Qη^B_3_Q. The gauge transformations (<ref>)-(<ref>) correspond to the differential d_1^0,-1: _2^32→_2^24 in the AHSS. We find that coker d_1^0,-1≅_2^6, hence there exist six gauge-invariant and independent combinations. Since E_2^1,-1≅_2, five of them are _2 invariants detecting gapless points in 2-cells. Indeed, the following six “π Berry phases” along the boundaries of the six 2-cells shown in Fig. <ref> (d) are gauge-invariant and detect gapless points in 2-cells: τ_XSY := ζ^A_Xζ^B_Xζ^A_XSζ^B_XSζ^A_SYζ^B_SYζ^A_Yζ^B_Y, τ_ZURT = ζ^A_ZUζ^B_ZUζ^A_URζ^B_URζ^A_RTζ^B_RTζ^A_TZζ^B_TZ, τ_XUZ = ζ^A_Xζ^B_Xζ^A_XUζ^B_XUζ^A_UZζ^B_UZζ^A_Zζ^B_Z, τ_YSRT = ζ^A_YSζ^B_YSζ^A_SRζ^B_SRζ^A_RTζ^B_RTζ^A_TYζ^B_TY, τ_YTZ = ζ^A_Yζ^B_Yζ^A_YTζ^B_YTζ^A_TZζ^B_TZζ^A_Zζ^B_Z, τ_XSRU = ζ^A_XSζ^B_XSζ^A_SRζ^B_SRζ^A_RUζ^B_RUζ^A_UXζ^B_UX. These are not independent; using Eqs. (<ref>)-(<ref>), the following relation holds: τ_XSYτ_ZURTτ_XUZτ_YSRTτ_YTZτ_XSRU= ∏_PQ∈{ 1-cells} (ζ^A_PQζ^B_PQ)^2 =1. The remaining _2 number is given by the product of the transition functions for the B representation over all 1-cells: (-1)^ν :=∏_PQ∈{ 1-cells}ζ^B_PQ= ζ^B_Xζ^B_XSζ^B_SYζ^B_Yζ^B_ZTζ^B_TRζ^B_RUζ^B_UZζ^B_Zζ^B_XUζ^B_YTζ^B_RS. The choice of orientations of the 1-cells is arbitrary; the one made here is natural when the gauge fixing condition is removed later. Using Eqs. (<ref>)-(<ref>), we find that ∏_PQ∈{ 1-cells} (ζ^B_PQ)^2 = ∏_P∈{ 0-cells}(-1)^n_B_2(P), and further using the compatibility conditions (<ref>)-(<ref>), ∑_P∈{0-cells} n_B_2(P)≡ 0 (mod 2). Therefore, ν is quantized as (-1)^ν∈{± 1}. To claim that ν is meaningful, one should show that band structures with ν = 0 and ν = 1 indeed exist. We postpone this to Sec. <ref>.

§.§.§ The Berry phase formula

The _2 invariant formula (<ref>) for ν requires the gauge fixing conditions (<ref>) and (<ref>), and is therefore impractical. In this section, we seek an expression that does not require any gauge fixing conditions. Looking at the expression (<ref>), it is reasonable to consider the product of Berry phases for the B-irrep over all 1-cells. We introduce a mesh of the 1-skeleton X_1 and, for each 0-cell, compute the Bloch states of each irrep: (Φ^A_P,Φ^B_1_P,Φ^B_2_P,Φ^B_3_P), P∈{}.
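In practice, these irrep-resolved frames can be generated from any orthonormal set of occupied states via the character projector P^β = (_β/|D_2|)∑_g χ^β(g)^* U(g). The following sketch is our own (the TRS gauge conditions (<ref>) and (<ref>) still have to be imposed on the resulting frames afterwards); the toy check uses the vector representation of D_2, for which the B_1 projector singles out the z-component.

import numpy as np

def irrep_frame(occ, Us, chars):
    """Apply the character projector of a one-dimensional irrep (Us: unitary
    matrices U(g), chars: characters chi^beta(g)) to an orthonormal occupied
    frame `occ`, and return an orthonormal basis of the projected block."""
    P = sum(np.conj(chars[g]) * Us[g] for g in Us) / len(Us)
    u, s, _ = np.linalg.svd(P @ occ, full_matrices=False)
    return u[:, s > 1e-8]

# Toy check with the vector representation of D_2 = {e, C2x, C2y, C2z}:
Us = {"e": np.eye(3), "C2x": np.diag([1., -1., -1.]),
      "C2y": np.diag([-1., 1., -1.]), "C2z": np.diag([-1., -1., 1.])}
chars_B1 = {"e": 1, "C2x": -1, "C2y": -1, "C2z": 1}
print(irrep_frame(np.eye(3), Us, chars_B1))   # one column along the z-axis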
For 1-cells, we compute the Bloch states for A and B irreps:(Φ^A_,Φ^B_),∈{}. For each 1-cell PQ, depending on the twofold rotation axis, we compute the U(1) Wilson line associated with the B representation as defined below: e^^B_PQ := lim_ N→∞[(Φ^B_1_Q,Φ^B_2_Q)^†( ∏_j=1^ N-1Φ^B_P+jδ (Φ^B_P+j δ)^†) (Φ^B_1_P,Φ^B_2_P) ], e^^B_PQ := lim_ N→∞[(Φ^B_1_Q,Φ^B_3_Q)^†( ∏_j=1^ N-1Φ^B_P+jδ (Φ^B_P+j δ)^†) (Φ^B_1_P,Φ^B_3_P) ], e^^B_PQ := lim_ N→∞[(Φ^B_2_Q,Φ^B_3_Q)^†( ∏_j=1^ N-1Φ^B_P+jδ (Φ^B_P+j δ)^†) (Φ^B_2_P,Φ^B_3_P) ]. Here, δ = (Q-P)/ N.The Wilson line is not gauge invariant as it depends on the gauge at endpoints. Under the gauge transformation at 0-cell by Φ^β_P ↦Φ^β_P W^β_P,W^β_P ∈U(n_β(P)),the Wilson line transforms as e^^B_PQ↦ ( W^B_1_Q)^* ( W^B_2_Q)^* e^^B_PQ W^B_1_P W^B_2_P, e^^B_PQ↦ ( W^B_1_Q)^* ( W^B_3_Q)^* e^^B_PQ W^B_1_P W^B_3_P, e^^B_PQ↦ ( W^B_2_Q)^* ( W^B_3_Q)^* e^^B_PQ W^B_2_P W^B_3_P. Now consider the product of the following Wilson lines z:= e^^B_X e^^B_XS e^^B_SY e^^B_Y×e^^B_ZT e^^B_TR e^^B_RU e^^B_UZ× e^^B_Z e^^B_XU e^^B_YT e^^B_RS. The U(1) value z is still not gauge invariant and transforms intoz ↦ z ×∏_P∈{,S,U,T} ( W^B_3_P)^-2×∏_P∈{X,Y,Z,R} ( W^B_3_P)^2.However, using the prescription described in <ref>, z can be corrected to be gauge invariant usingsymmetry.In the present case,e^ := z ×∏_P∈{,S,U,T}[(Φ^B_3_P)^† U_P() (Φ^B_3_P)^*]^-1×∏_P∈{X,Y,Z,R}[(Φ^B_3_P)^† U_P() (Φ^B_3_P)^*]is gauge-invariant.To explore the relationship between the _2 invariant ν and e^i, we deform e^i by taking a smooth gauge that satisfies the gauge fixing conditions (<ref>) and (<ref>) near 0-cells.First, from the TRS gauge (<ref>), it follows that [(Φ^B_3_P)^† U_P() (Φ^B_3_P)^*]=1.For Wilson lines, let us consider e^_X concretely. Taking the basis (<ref>) that satisfies gauge fixing conditions (<ref>) and (<ref>) near theand X points, the Wilson line becomes e^^B_X = exp[-∫__X^X[ (Φ^B_1_,, Φ^B_2_,)^† d (Φ^B_1_,, Φ^B_2_,) ] ] ×[(Φ^B_1_X,_X,Φ^B_2_X,_X)^† (Φ^B_1_,_X, Φ^B_2_,_X)] ×exp[ -∫_^_X[ (Φ^B_1_,, Φ^B_2_,)^† d (Φ^B_1_,, Φ^B_2_,) ]] = exp[-∫__X^X[ (Φ^B_1_,, Φ^B_2_,)^† d (Φ^B_1_,, Φ^B_2_,) ] ] ×ζ^B_X×exp[-∫_^_X[ (Φ^B_1_,, Φ^B_2_,)^† d (Φ^B_1_,, Φ^B_2_,) ] ]. For the line integrals of the exponents, using C_2^z symmetry, ∫_^_X[ (Φ^B_1_,, Φ^B_2_,)^† d (Φ^B_1_,, Φ^B_2_,) ]^*= ∫_^_X[ [1_n_B_1();-1_n_B_2() ] (Φ^B_1_,, Φ^B_2_,)^† U_(C_2^z)d { U_(C_2^z)^† (Φ^B_1_,, Φ^B_2_,)[1_n_B_1();-1_n_B_2() ]}]= ∫_^_X[ (Φ^B_1_,, Φ^B_2_,)^† d (Φ^B_1_,, Φ^B_2_,) ]+ ∫_^_X[ (Φ^B_1_,, Φ^B_2_,)^† U_(C_2^z) dU_(C_2^z)^† (Φ^B_1_,, Φ^B_2_,) ].Since A^* = -A, we obtain ∫_^_X[ (Φ^B_1_,, Φ^B_2_,)^† d (Φ^B_1_,, Φ^B_2_,) ] =-1/2∫_^_X[ (Φ^B_1_,, Φ^B_2_,)^† U_(C_2^z) dU_(C_2^z)^† (Φ^B_1_,, Φ^B_2_,) ].This expression is gauge invariant, independent of gauge transformations Φ^B_ = (Φ^B_1_,, Φ^B_2_,) ↦Φ^B_ W^B_ with W^B_∈U(n_B_1()+n_B_2()). Similarly, for the Wilson line e^^B_X we find the relation:e^^B_X =ζ^B_X×exp1/2∫_→X[ (Φ^B_)^† U_(C_2^z) d U_(C_2^z)^†Φ^B_]. The same result can be obtained by replacing C_2^z with C_2^y. In fact, the ratio of contributions from the correction terms of C_2^z and C_2^y is exp1/2∫_→X [(Φ^B_)^† U_(C_2^x) d U_(C_2^x)^†Φ^B_]. 
However, considering U_(C_2^x)^† =U_(C_2^x) and U_(C_2^x) Φ^B_ = - Φ^B_ on the 1-cell ∈X, we can show that [Φ_^B† U_(C_2^x) d U_(C_2^x)^†Φ^B_] = 1/2 [Φ^B†_ d U_(C_2^x)^2 Φ^B_] = 0. Applying the same argument to the other Wilson lines, we derive the following relation between the _2 invariant ν and the gauge-invariant Berry phase e^: (-1)^ν = e^-×exp[1/2(∮_→X→S→Y→+∮_Z→T→R→U→Z)[(Φ^B_)^† U_(C_2^z) d U_(C_2^z)^†Φ^B_]]×exp[ 1/2(∫_Z→+∫_X→U+∫_Y→T+∫_R→S) [(Φ^B_)^† U_(C_2^x) d U_(C_2^x)^†Φ^B_] ]. Here, we used ζ^B_QP = (ζ^B_PQ)^-1. This provides us with a gauge-invariant expression of the _2 invariant ν. However, there is a subtle ambiguity in ν from the ordering of irreps, as described below.

§.§.§ Band sum and quadratic refinement

It is important to note that the _2 invariant (<ref>) depends on the ordering of irreps in the Bloch states at the 0-cells. For instance, in the expression of the transition function t^B_X (<ref>), if we swap the Bloch states near the Γ point, redefining t^B_X→(Φ^B_2_,_X⊕Φ^B_1_,_X)^†(Φ^B_1_X,_X⊕Φ^B_2_X,_X), a sign (-1)^n_B_1()n_B_2() arises due to the swapping of irreps. This ordering dependency of irreps also occurs in the Berry phase formula (<ref>) of the _2 invariant. For example, in the expression of the Wilson line (<ref>) parallel to the C_2^x axis, if we swap the order of Bloch states at the starting point P from (Φ^B_1_P,Φ^B_2_P) to (Φ^B_2_P,Φ^B_1_P), a sign (-1)^n_B_1(P) n_B_2(P) appears. This ordering dependency leads to the non-additive nature of the _2 invariant ν for the direct sum of bands. However, as discussed in Sec. <ref>, it is possible to redefine ν to preserve the additive structure. Recall that the order of irreps in the expression of the _2 invariant ν for a 0-cell P was C_2^x-axis: (Φ^B_1_P,Φ^B_2_P), C_2^y-axis: (Φ^B_1_P,Φ^B_3_P), C_2^z-axis: (Φ^B_2_P,Φ^B_3_P). Let Φ^B,β_P and n^B_β(P) be the Bloch states of the β-irrep at 0-cell P for a band B ∈{E,F} and the number of β-irreps, respectively. Here we focus on the C_2^x-axis. Considering the direct sum E ⊕ F as a single band structure, the above ordering rule implies the ordering (Φ^E,B_1_P⊕Φ^F,B_1_P) ⊕ (Φ^E,B_2_P⊕Φ^F,B_2_P). On the other hand, if we first order each band E and F as above and then take the direct sum, we have (Φ^E,B_1_P⊕Φ^E,B_2_P) ⊕ (Φ^F,B_1_P⊕Φ^F,B_2_P). The two orderings are related by a permutation matrix S acting from the right, whose determinant is det S = (-1)^n^E_B_2(P) n^F_B_1(P). Similar contributions arise from the Wilson lines on the C_2^y and C_2^z axes. The non-additive nature of the invariant ν is summarized as ν(E ⊕ F) ≡ν(E)+ν(F)+δν(E|_E_2^0,0,F|_E_2^0,0) with δν(E|_E_2^0,0,F|_E_2^0,0) ≡∑_P∈{ 0-cells}( n^E_B_2(P) n^F_B_1(P)+ n^E_B_3(P) n^F_B_1(P)+n^E_B_3(P) n^F_B_2(P)). Note that δν depends only on elements of E_2^0,0. Since the number of irreps behaves additively with respect to the sum of bands, δν is a bilinear form. Moreover, although δν(x,y) is not symmetric for elements of E_1^0,0, we find that it is symmetric when restricted to E_2^0,0: δν (x,y) ≡δν(y,x), x,y ∈ E_2^0,0. Furthermore, we find that the diagonal terms vanish, δν(x,x) ≡ 0 for x ∈ E_2^0,0. (This is not necessary for the existence of the quadratic refinement below.) Therefore, according to Sec. <ref>, there exists a quadratic refinement q: E_2^0,0→{0,1} of δν, which satisfies δν(x,y)≡ q(x+y)+q(x)+q(y), x,y ∈ E_2^0,0.
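Such a quadratic refinement can be written down explicitly once δν is represented by a mod-2 matrix B on a basis of E_2^0,0 (symmetric with vanishing diagonal, as found above): q(x) = ∑_i<j B_ij x_i x_j plus an optional linear term, which is the freedom used below to enforce q(a__0^β) ≡ 0. A minimal sketch of our own, verifying the defining relation:

import numpy as np

def quadratic_refinement(B, linear=None):
    """Return q(x) = sum_{i<j} B_ij x_i x_j + linear . x (mod 2), which
    satisfies q(x+y) + q(x) + q(y) = x^T B y (mod 2) for a symmetric
    0/1 matrix B with zero diagonal."""
    Q = np.triu(np.asarray(B) % 2, k=1)
    c = np.zeros(Q.shape[0], dtype=int) if linear is None else np.asarray(linear)
    def q(x):
        x = np.asarray(x) % 2
        return int(x @ Q @ x + c @ x) % 2
    return q

# Sanity check of the defining relation on random mod-2 vectors:
rng = np.random.default_rng(0)
n = 6
B = rng.integers(0, 2, (n, n)); B = (B + B.T) % 2; np.fill_diagonal(B, 0)
q = quadratic_refinement(B)
x, y = rng.integers(0, 2, n), rng.integers(0, 2, n)
assert (q((x + y) % 2) + q(x) + q(y)) % 2 == (x @ B @ y) % 2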
By explicitly constructing the basis of E_2^0,0 and calculating expression (<ref>), we obtain the following formula: q({n_β(P)}_P,β)≡ n_B_3(R)(n_B_3(S)+n_B_2(T)+n_B_3(T)+n_B_2(U)+n_B_3(U)+n_B_3(Z)) +n_B_2(R)(n_B_3(T)+n_B_3(U)+n_B_3(X)+n_B_3(Y)) +n_B_2(T)(n_B_3(S)+n_B_3(U)+n_B_3(X))+n_B_3(Y)(n_B_3(S)+n_B_2(U)+n_B_3(X)+n_B_3(Z)) +n_B_3(S) n_B_2(U)+n_B_3(X)(n_B_3(S)+n_B_3(Z))+n_B_3(S) n_B_3(Z)+n_B_3(T)(n_B_2(U)+n_B_3(U)+n_B_3(Y)+n_B_3(Z))+n_B_3(U)(n_B_3(X)+n_B_3(Z))+n_B_3(S)+n_B_3(T)+n_B_3(U). See Appendix <ref> for a derivation. Using this quadratic refinement q, we redefine the _2 invariant as ν̃(E) := ν(E) + q(E|_E_2^0,0) to recover linearity: ν̃(E ⊕ F) ≡ν̃(E) + ν̃(F). In the formula (<ref>), the linear term n_B_3(S)+n_B_3(T)+n_B_3(U) was chosen so that for any atomic insulator with a single band, i.e., an atomic insulator at Wyckoff position x_0 with a β-irrep, denoted as a__0^β, the following holds: ν(a__0^β) ≡ν̃(a__0^β). Namely, we impose the constraint q(a__0^β) ≡ 0 on the quadratic refinement q. See the next section.

§.§.§ _2 invariant for real-space models

According to the real-space AHSS <cit.>, the K-group is given by ^13+_2, and all of its generators are represented by atomic insulators (or formal differences of their direct sums). First, we calculate the _2 invariant ν for atomic insulators with a single band. We denote by a^β_x_0 the atomic insulator obtained by placing an irrep β∈{A,B_1,B_2,B_3} of the magnetic point group 2221' at a position shifted by the displacement vector x_0 from the unit cell center. The symmetry matrices are given by U_()=1 and U_(C_2^μ) =χ^β (C_2^μ) e^-i· (x_0-C_2^μx_0), μ∈{x,y,z}. The Bloch wave function can be chosen as Φ_≡ 1, independent of momentum, and no contribution arises from the Wilson line, making e^=1. Calculating the factor arising from the symmetry transformation in Eq. (<ref>), we get ν̃(a^β_x_0) ≡ν(a^β_x_0) ≡ 1 for x_0 = (1/2,1/2,1/2) and 0 otherwise. Here, the redefined ν̃ is chosen to satisfy Eq. (<ref>). From the real-space AHSS, the _2 part of the K-group is given by the formal difference [E]-[F] of the following two insulating states E and F: E= a^A_(0,0,0)⊕ a^A_(1/2,1/2,0)⊕ a^A_(0,1/2,1/2)⊕ a^A_(1/2,0,1/2), F= a^A_(1/2,0,0)⊕ a^A_(0,1/2,0)⊕ a^A_(0,0,1/2)⊕ a^A_(1/2,1/2,1/2). Note that the insulators E and F show identical representations at TRIMs and thus cannot be distinguished by representations at 0-cells. Due to the linearity of the _2 invariant ν̃, we have ν̃(E) = 0, ν̃(F) = 1. The _2 invariant ν̃ thus successfully distinguishes between the two insulators E and F. Let us comment on an interesting physical consequence of the nontrivial _2 nature. The space group P222 is a subgroup of many other space groups, which allows us to define the _2 invariant for the supergroups of P222. Recently, it has been shown that, although atomic insulators do not exhibit any protected gapless surface states, certain types of atomic insulators still possess mid-gap states characterized by fractional corner charges <cit.>. Furthermore, there exist fractionally quantized corner charges that cannot be captured only by irreps at 0-cells <cit.>. In Ref. <cit.>, the authors have discovered that the _2 invariant combined with irreps at 0-cells can detect fractional corner charges in various tetrahedral and cubic space groups.

§.§ Other examples

In this section, we present illustrative examples demonstrating the definition of _2 invariants using the scenarios listed in Sec. <ref>, a comparison with the real-space model (Sec. <ref>), and a _2 invariant detecting a Weyl point at a generic momentum (Sec. <ref>).
§.§.§ Example: spinful systems in space group P2 with TRS

Here, we generalize the above discussion to spinful systems in space group P2 with TRS . This space group is generated by the twofold rotation along the y-axis, denoted by C_2^y, and translations. According to Ref. <cit.>, the K-group is ^ϕK_G/Π^(z,c)+0(T^3) = × (_2)^8 [Here, we use the fact that _4 does not appear in the K-group <cit.>.], whose × (_2)^3-part is the classification of atomic insulators and the remaining (_2)^5-part corresponds to the strong topological insulator and topological crystalline insulators, as shown in Fig. <ref>(a–e). Our cell decomposition of the fundamental domain is shown in Fig. <ref>(f). For each of the 0- and 1-cells, the irreducible representations and their EAZ classes are shown in Table <ref>. Although there is no unitary symmetry other than the identity on a,b,c,d,e, and f, they are symmetric under the product of the twofold rotation C_2^y and TRS , which results in EAZ class AI. As a result, E_1^1,-1 = (_2)^6, given explicitly by E_1^1,-1 = ⊕_K = a,b,c,d,e,f_2[b_K_1^(1)]. From the AHSS, we find that the differentials d_1^0,-1 and d_1^1,-1 are trivial, i.e., E_2^1,-1 = E_1^1,-1 = (_2)^6. Therefore, the topological invariants are determinants of the gauge-fixed transition functions [c.f. (<ref>)]. The transition function on a 1-cell connecting _0 to _1 is defined by t__0_1 = Φ^†__0, (_0+_1)/2Φ__1, (_0+_1)/2, where Φ__i, is a set of occupied states under an independent gauge choice around _i [c.f. Eq. (<ref>)]. Similar to the above P-symmetric example, we aim to represent the transition function using the Berry phase, i.e., e^iγ__0 _1 = lim_𝒩→∞∏_j = 0^𝒩-1Φ__0 + (j+1)δ^†Φ__0 + jδ for δ = (_1 - _0)/𝒩. However, since the 1-cell is not a loop in this case, the Berry phase is not gauge invariant under gauge transformations at the boundary 0-cells. We then use Pfaffians to compensate for the gauge dependence of the Berry phase, as discussed in Sec. <ref>. Finally, we arrive at the gauge-independent formulas (-1)^ν_1 := t̃_ΓY = e^-iγ_ΓYPf[Φ_Y^†U()Φ_Y^*]/Pf[Φ_Γ^†U()Φ_Γ^*]exp[1/2∫_a trΦ_^†(U_(C_2^y)dU^†_(C_2^y))Φ_], (-1)^ν_2 := t̃_CZ= e^-iγ_CZPf[Φ_Z^†U()Φ_Z^*]/Pf[Φ_C^†U()Φ_C^*]exp[1/2∫_b trΦ_^†(U_(C_2^y)dU^†_(C_2^y))Φ_], (-1)^ν_3 := t̃_BA= e^-iγ_BAPf[Φ_A^†U()Φ_A^*]/Pf[Φ_B^†U()Φ_B^*]exp[1/2∫_c trΦ_^†(U_(C_2^y)dU^†_(C_2^y))Φ_], (-1)^ν_4 := t̃_ED=e^-iγ_EDPf[Φ_D^†U()Φ_D^*]/Pf[Φ_E^†U()Φ_E^*]exp[1/2∫_d trΦ_^†(U_(C_2^y)dU^†_(C_2^y))Φ_], (-1)^ν_5 := t̃_YA=e^-iγ_YAPf[Φ_A^†U()Φ_A^*]/Pf[Φ_Y^†U()Φ_Y^*]exp[1/2∫_e trΦ_^†(U_(C_2^y)dU^†_(C_2^y))Φ_], (-1)^ν_6 := t̃_CE=e^-iγ_CEPf[Φ_E^†U()Φ_E^*]/Pf[Φ_C^†U()Φ_C^*]exp[1/2∫_f trΦ_^†(U_(C_2^y)dU^†_(C_2^y))Φ_]. It should be emphasized that the right-hand sides are gauge-independent quantities. In Table <ref>, we show the values of these topological invariants for representative models of the torsion subgroup of ^ϕK_G/Π^(z,c)+0(T^3). Importantly, these invariants cannot fully characterize the topological phases in this space group. For a complete identification, two _2 invariants on the 2-skeleton corresponding to E_3^2,-2=_2^2 are required.

§.§.§ Example: spinful systems in space group P2_12_12_1 with TRS

The third example is space group P2_12_12_1 with TRS . This space group is generated by S_x = {C_2^x| (1/2, 1/2, 0)^⊤} and S_y = {C_2^y| (0, 1/2, 0)^⊤}. According to Ref. <cit.>, the K-group is ^ϕK_G/Π^(z,c)+0(T^3) = × (_2)^3, whose -part is the classification of atomic insulators and (_2)^3 corresponds to the strong topological insulator and topological crystalline insulators, as shown in Fig. <ref>(a–c).
Our cell decomposition of the fundamental domain is shown in Fig. <ref>(d).Only EAZ classes of irreps on Σ, Δ, and Λ are class AI, and thus E_1^1,-1 = (_2)^6 given by E_1^1,-1 = ⊕_K = Σ, Δ, Λ_2[b_K_3^(1)] ⊕_2[b_K_4^(1)], where the labels of irreps are shown in Table <ref>.As discussed above, E_1^1,-1 is characterized by transition functions.In this case, the transition functions are defined for each irrep on Σ, Δ, and Λ.For example, the transition function for irrep Σ_3 is defined by t_ΓX^Σ_3 = (Φ^Σ_3_Γ, (π/2,0,0))^†Φ^Σ_3_X, (π/2,0,0), where Φ^Σ_3_Γ, and Φ^Σ_3_X, are occupied states with irrep Σ_3 under independent gauge choices around Γ and X [c.f. Eq. (<ref>)]. For other irreps, the transition functions are defined in the same way.After computing AHSS and performing the procedures in Appendix <ref>, we find that E_2^1,-1 = (_2)^3 and[X^(1)]^-1 = ( [ Σ_3 Σ_4 Δ_3 Δ_4 Λ_3 Λ_4; 1 1 0 0 0 0; 0 0 1 1 1 1; 0 0 0 0 0 1; 0 0 0 1-1 0; 0 1 0 0-1 0; 0 0 0 0-1 1; ]),where the last three rows correspond to three _2-valued topological invariants.Then, we construct the following three _2-valued topological invariants under certain gauge conditions:(-1)^ν_7 := (t̃_ΓZ^Λ_3)^-1t̃_ΓY^Δ_4; (-1)^ν_8:= (t̃_ΓZ^Λ_3)^-1t̃_ΓX^Σ_4; (-1)^ν_9:= (t̃_ΓZ^Λ_3)^-1t̃_ΓZ^Λ_4,where t̃^Λ_3_ΓZ is the gauge fixed transition function [c.f. Eq. (<ref>)] and the same applies to the other transition functions.To use the technique introduced in Sec. <ref>, we consider(-1)^ν'_9 := (-1)^ν_9-ν_7 = (t̃_ΓY^Δ_4)^-1t̃_ΓZ^Λ_4. Similar to the above example, we can derive gauge-independent expressions of these invariants.Again, we use the symmetry enriched Berry phase introduced in Eq. (<ref>), e^iγ__0 _1^α := lim_𝒩→∞∏_j = 0^𝒩-1 (Φ^α__0 + (j+1)δ)^†Φ^α__0 + jδ for irrep α and δ = (_1 - _0)/𝒩.Again, the Berry phase is not invariant under gauge transformations at the boundary 0-cells. For 0-cells X, Y, and Z, the Pfaffian can be used to cancel U(1)-valued quantities originating from the gauge transformations.On the other hand, at 0-cell Γ, there is only a two-dimensional irrep Γ_1 whose EAZ class is AI. According to Eqs. (<ref>), (<ref>), and (<ref>), Γ is an intersection of two 1-cells in each quantity.Thus, we can use the technique in Sec. <ref> to derive gauge invariant quantities.As a result, we have(-1)^ν_7 :=2^N_occ/2e^iγ_ΓZ^Λ_3e^-iγ_ΓY^Δ_4Pf[(Φ^Δ_4_Y)^†U()(Φ^Δ_4_Y)^*]/Pf[(Φ^Λ_3_Z)^†U()(Φ^Λ_3_Z)^*] (Φ^Λ_3_Γ)^†Φ^Δ_4_Γ×exp[-1/2∫_Λtr (Φ_^Λ_3)^†(U_(S_y)d U^†_(S_y))Φ_^Λ_3]exp[1/2∫_Δtr (Φ_^Δ_4)^†(U_(S_z)d U^†_(S_z))Φ_^Δ_4] (-1)^ν_8:=2^N_occ/2e^iγ_ΓZ^Λ_3e^-iγ_ΓX^Σ_4Pf[(Φ^Σ_4_X)^†U()(Φ^Σ_4_X)^*]/Pf[(Φ^Λ_3_Z)^†U()(Φ^Λ_3_Z)^*] (Φ^Λ_3_Γ)^†Φ^Σ_4_Γ×exp[-1/2∫_Λtr (Φ_^Λ_3)^†(U_(S_y)d U^†_(S_y))Φ_^Λ_3]exp[1/2∫_Σtr (Φ_^Σ_4)^†(U_(S_z)d U^†_(S_z))Φ_^Σ_4] (-1)^ν'_9:= 2^N_occ/2e^iγ_ΓY^Δ_4e^-iγ_ΓZ^Λ_3Pf[(Φ^Λ_3_Z)^†U()(Φ^Λ_3_Z)^*]/Pf[(Φ^Δ_4_Y)^†U()(Φ^Δ_4_Y)^*] (Φ^Δ_4_Γ)^†Φ^Λ_3_Γ×exp[-1/2∫_Δtr (Φ_^Δ_4)^†(U_(S_x)d U^†_(S_x))Φ_^Δ_4] exp[1/2∫_Λtr (Φ_^Λ_3)^†(U_(S_y)d U^†_(S_y))Φ_^Λ_3],By explicitly computing these invariants for representative models of a strong topological insulator and topological crystalline insulators, we see that (ν_7, ν_8, ν'_9) can diagnose these phases, as shown in Table <ref>. 
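The endpoint factor (Φ^Λ_3_Γ)^†Φ^Δ_4_Γ multiplied by 2^N_occ/2 in the expressions above relies on the degeneracy-overlap construction of Sec. <ref>. Below is a two-state toy check of that mechanism (entirely our own numerics), with U(C_2^x) = iσ_x and U(C_2^y) = iσ_y so that the two rotation matrices anticommute, as they do at Γ.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# State with U(C2x) eigenvalue chi(C2x) = i inside the 2d doublet at Gamma:
phi_a = np.array([[1.0], [1.0]], dtype=complex) / np.sqrt(2)

# Its partner with U(C2y) eigenvalue chi(C2y) = i, built by projection:
chi = 1j
phi_b = (np.eye(2) + np.conj(chi) * (1j * sy)) @ phi_a / np.sqrt(2)

print(np.allclose((1j * sy) @ phi_b, 1j * phi_b))   # True: correct irrep
print(phi_a.conj().T @ phi_b)                       # [[0.70710678+0.j]] = 1/sqrt(2)

The overlap 1/√2 per occupied state is exactly what the prefactor 2^N_occ/2 compensates.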
§.§.§ Example: spinless systems with space group P4̅ with TRS

In this symmetry setting, E_2^1,-1 = _2 and a _2 invariant over the 1-skeleton exists. On the other hand, the K-group K ≅^8 does not include _2 <cit.>. This means that the second differential d_2^1,-1: E_2^1,-1→ E_2^3,-2 is nontrivial and the _2 invariant over the 1-skeleton detects a gapless point (Weyl point) in a 3-cell. We construct the _2 invariant over the 1-skeleton and relate it to the Chern number of the Weyl point. The symmetries are summarized as follows: U_(S_4) H_ U_(S_4)^†=H_, U_() H_^* U_()^† =H_-, U_S_4^3(S_4)U_C_2^z(S_4)U_S_4(S_4)U_(S_4)=1, U_-()U_()^*=1, U_-(S_4)U_()=U_S_4()U_(S_4)^*. Here, S_4, acting as ↦ (k_y,-k_x,-k_z), is the fourfold rotoinversion. Figure <ref> shows the cell decomposition we use. On the 1-cells Z and SR, the stabilizer group is {e,S_4 , C_2^z, S_4^3 } and there are two irreps A and B whose characters are χ^A(C_2^z) = 1 and χ^B(C_2^z)=-1. The relation U_(S_4)U_(S_4)^* = U_(C_2^z) over these 1-cells implies that the EAZ class is AI for the irrep A and AII for the irrep B, respectively. On the 1-cells S and ZR, the little group is {e,C_2^z } and the EAZ class is AI. From the AHSS, we find that the _2 invariant is composed of the transition functions of the A-irrep of the 1-cells Z and SR and of the 1-cells S and ZR, meaning that the Berry phase formula involves these irreps on their 1-cells. Since the Berry phase for an irrep whose EAZ class is AII is trivial (see Sec. <ref>), multiplying by the Berry phase of the B-irrep over the 1-cells Z and ZR does not change the _2 invariant. Consequently, the _2 invariant ν is written as the standard Berry phase e^γ_ S R Z over the loop → S → R → Z →, without irreducible decomposition, as in (-1)^ν =e^-γ_ S R Z×exp[1/2(∫_S → R+∫_Z →) [Φ_^† U_(S_4) d U_(S_4)^†Φ_]] ×exp[1/2(∫_→ S+∫_R → Z) [Φ_^† U_(C_2^z) d U_(C_2^z)^†Φ_]]. Now we show that the _2 invariant ν is the parity of the Weyl charge inside the fundamental domain of the momentum space. Using Stokes's theorem and the symmetry constraints, it should be possible to show directly that ν is the parity of the Chern number on the boundary of the fundamental domain. Instead, here, based on the interpretation of the second differential d_2^1,-1 <cit.>, we show that the topological transition of ν results in the emission or absorption of a quartet of Weyl points. In a tubular neighborhood of the 1-cell Z, the topological transition of ν is modeled by the Hamiltonian H_ = k_z σ_x + (m-^2) σ_z + k_x k_y σ_y with U_(S_4) = 1. When the mass m continuously varies from a positive to a negative value, the _2 invariant ν changes by 1, and a quartet of Weyl points is created in the fundamental domain, meaning a change of the Chern number by 1.

§.§ Topological invariants defined on 2-skeleton

In this work, we have discussed topological invariants defined on 1-skeletons, which are constructed from E_2^1,-1. As shown in the above example of space group P2, the introduced topological invariants cannot fully characterize all possible phases. In fact, our framework discussed in Sec. <ref> and Appendix <ref> is applicable to E_2^p,-n for arbitrary p and n. In particular, when the higher differential d_2^0,-1 is trivial, E_3^2,-2 = E_2^2,-2 can inform us of topological invariants defined on 2-skeletons. For example, E_3^2,-2 = E_2^2,-2 = (_2)^2 in space group P2 with TRS. Thus, there should be two _2-valued topological invariants defined on 2-skeletons.
It is natural to expect that the topological invariants obtained from E_3^2,-2, together with {ν_i}_i=1^6, enable us to diagnose all possible topological phases. Although topological invariants defined on 2-skeletons are essential for the full characterization of topological phases, we leave their systematic construction as future work.

§ CONCLUSIONS AND OUTLOOK

In this work, we have established a systematic framework to construct topological invariants based on the AHSS in momentum space. We have applied our scheme to insulators and superconductors in all magnetic space groups and presented the necessary information about topological invariants, such as [X^(1)]^-1 and [V^(0)]^-1. Through some illustrative examples, we have shown that the constructed invariants can detect nontrivial topology for which symmetry indicators do not work at all. Our work opens new avenues for future research. First, our scheme is applicable to various kinds of systems. In recent years, topological phases in non-Hermitian systems have attracted significant attention <cit.>. Since a non-Hermitian Bloch Hamiltonian can always be mapped to a Hermitian Hamiltonian with an additional chiral symmetry, our scheme could be used to construct topological invariants of non-Hermitian systems for a given symmetry setting. More recently, spin space groups have been comprehensively classified <cit.>. It turns out that there exist many new symmetry classes. Our general scheme could also be applied to spin space groups. Second, our invariants can capture nontrivial topology in gapped phases on 2-skeletons or gapless points on 2-cells. This means that our invariants of gapped phases on 2-skeletons sometimes detect gapless points on 3-cells. Since the discovery of gapless points at generic momenta is usually challenging, it would be interesting to differentiate between the invariants for fully gapped phases and those for gapless points on 3-cells. This distinction is achieved by computing the invariants for representative models of real-space classifications and examining which ones are for gapless points on 3-cells. Last, we expect that first-principles calculations can be used to compute the topological invariants constructed in this work. For insulators, our gauge-invariant formulas are computable from wave functions obtained by first-principles calculations. For superconductors, Fermi surfaces available from first-principles calculations enable us to calculate our invariants under the weak-pairing assumption, as briefly mentioned in Sec. <ref>. Furthermore, since they are defined on 1-skeletons composed of high-symmetry points and line segments connecting them, the computational cost is relatively low. We believe that our work will help to build a more comprehensive database of topological materials than the existing ones based on symmetry indicators and topological quantum chemistry.

We thank Yohei Fuji, Shuichi Murakami, and Kenji Shimomura for fruitful discussions. We also thank the YITP workshop YITP-T22-02 on “Novel Quantum States in Condensed Matter 2022”, where a part of this work was carried out. SO was supported by KAKENHI Grant No. JP20J21692 and No. 23K19043 from the Japan Society for the Promotion of Science (JSPS) and the RIKEN Special Postdoctoral Researchers Program. KS was supported by JST CREST Grant No. JPMJCR19T2, and JSPS KAKENHI Grants No. 22H05118 and No. 23H01097.
§ COMPUTATIONS FOR ABELIAN GROUPS WITH TORSION ELEMENTS In this appendix, we show computational procedures for the case where {E_1^q,-p}_q = p-1^p+1 are not a free Abelian group.§.§ General frameworkThe following commutative diagram shows the mathematical structure behind our problem:[row sep=small, column sep=scriptsize] 0 [r] P^p-1,-p_1[r,"i"][d,"d̃^p-1,-p_1"] Ẽ^p-1,-p_1[d,"d̃^p-1,-p_1"] [r,"τ"] E^p-1,-p_1[r] [d, "d^p-1,-p_1"]0 0 [r] P^p,-p_1[r,"i"][d,"d̃^p,-p_1 "] Ẽ^p,-p_1[d,"d̃^p,-p_1"] [r,"τ"] E^p,-p_1[r] [d, "d^p,-p_1 "]0 0 [r] P^p+1,-p_1[r,"i"] Ẽ^p+1,-p_1[r,"τ"] E^p+1,-p_1[r] 0 ,whereE^p,-p_1 := ⊕_i=1^D_p_2[b_i^(p)] ⊕⊕_i=D_p+1^N_p[b_i^(p)]; Ẽ^p,-p_1 := ⊕_i=1^D_p[b̃_i^(p)] ⊕⊕_i=D_p+1^N_p[b̃_i^(p)];P^p,-p_1 := ⊕_i=1^D_p2[2b̃_i^(p)]. In the following, “∼” denotes that we forget about the _2-nature of an Abelian group.By definition, E^p,-p_1 = Ẽ^p,-p_1/P^p,-p_1. For simplicity, we use ℬ̃^(p) = (b̃_1^(p)b̃_2^(p)⋯b̃_N_p^(p)) and 𝒫̃^(p) = ( 2b̃_1^(p) 2b̃_2^(p) ⋯2b̃_D_p^(p)). To generalize the discussions in Sec. <ref>, we introduce a map f_1^p,-p = d̃_1^p, -n⊕id: Ẽ^p,-p_1⊕ P^p+1,-p_1→Ẽ^p+1,-p_1. This map can be represented by a matrix as f_1^p,-p(ℬ̃^(p)𝒫̃^(p+1)) = ℬ̃^(p+1)[ M̃_d̃_1^p,-pQ^(p) ],where Q^(p) = ([ 2 ×1_D_p+1;1cO ]).One can always find the Smith normal formU^(p)[ M̃_d̃_1^p,-pQ^(p) ]V^(p) = Σ^(p),where U^(p) and V^(p) are unimodular matrices. As mentioned in Sec. <ref>, the Smith normal form can be written as Σ^(p)= diag(s_1^(p), s_2^(p), ⋯, s_N_p^(p)) (s_i^(p)∈_≥0) and s_i≠ 0 for i ≤ r_p ≤ N_p.Then, we havef_1^p,-p(ℬ̃^(p)𝒫̃^(p+1))V^(p)= ℬ̃^(p+1)[U^(p)]^-1U^(p)[ M̃_d̃_1^p,-pQ^(p) ]V^(p)= ℬ̃^(p+1)[U^(p)]^-1Σ^(p),where V^(p) = (v_1^(p), v_2^(p), ⋯, v_N_p+D_p+1^(p)).Here, we obtain d̃_1^p,-p from Eq. (<ref>).First, we restrict v_i^(p) to the first N_p elements, which is denoted by v'^(p)_i. Then, Eq. (<ref>) informs us that {v'^(p)_i}_i = r_p+1^N_p+D_p+1 are elements of d̃_1^p,-p.Collecting all independent lists of {v'^(p)_i}_i = r_p+1^N_p+D_p+1, we have a basis set of d̃_1^p,-p.For later convenience, we construct a N_p-dimensional invertible matrix V'^(p) from {v'^(p)_i}_i=r_p+1^N_p+D_p+1. We obtain a N_p-dimensional unimodular matrix U_v from the following Smith decomposition(v'^(p)_r_p+1⋯v'^(p)_N_p+D_p+1) = U_v^-1Σ_vV_v^-1,U_v^-1 := ([ u_1 u_2 ⋯ u_N_p ])^⊤.where Σ_v is the Smith normal form of (v'^(p)_r_p+1⋯v'^(p)_N_p+D_p+1), i.e., a N_p-dimensional diagonal matrix whose matrix rank is N_p - r'_p (0≤ r'_p ≤ N_p).V_v is a (N_p + D_p+1-r_p)-dimensional unimodular matrix.Let us elaborate on the explanation of Σ_v. Each diagonal element of Σ_v has the following features * [Σ_v]_ii∈_>0 for 1≤ i ≤ (N_p-r'_p); * [Σ_v]_ii can always divide [Σ_v]_(i+1)(i+1) for 1 ≤ i ≤(N_p-r'_p)-1; * [Σ_v]_ii = 0 for (N_p-r'_p)<i if r'_p ≠ 0.From U_v^-1 and Σ_v, we have an integer-valued invertible matrix V'^(p) as [V'^(p)]_ij = [Σ_v]_jj[u_j]_i for 1≤ i ≤ (N_p-r'_p) [u_j]_i for (N_p-r'_p)<j if r'_p ≠ 0. Note V'^(p) is not a unimodular matrix when there exists j such that [Σ_v]_jj > 1.Since f_1^p-1, -p⊆d̃_1^p, -p,f_1^p-1,-p(ℬ̃^(p-1)𝒫̃^(p)) = ℬ̃^(p)[ M̃_d̃_1^p-1,-pQ^(p-1) ]= ℬ̃^(p) V'^(p) [V'^(p)]^-1[ M̃_d̃^1_p-1,-pQ^(p-1) ]= ℬ̃'^(p)([ 2cY; O_r'_p× N_p-1 O_r'_p× D_p; ]),where ℬ̃'^(p) = (b'^(p)_r_p +1, ⋯b'^(p)_N_p+D_p+1, b'^(p)_1, ⋯, b'^(p)_r'_p) = ℬ̃^(p) V'^(p).The reason why the last r'_p rows are zeros is b_j ∉d̃_1^p, -p. 
These should not be included in f_1^p-1, -p.Again, using the Smith normal form, we rewrite the above equation asf_1^p-1,-p(ℬ̃^(p-1)𝒫̃^(p))V^(p-1)=ℬ̃'^(p)([U^(p-1)]^-1⊕1_r'_p) ([ 2cΛ^(p-1); O_r'_p× N_p-1 O_r'_p× D_p ])=ℬ̃”^(p)([ 2cΛ^(p-1); O_r'_p× N_p-1 O_r'_p× D_p ]),where Λ^(p-1) is the Smith normal form of Y, and ℬ̃”^(p) =ℬ̃^(p) X^(p) =ℬ̃^(p)V'^(p)([U^(p-1)]^-1⊕1_r'_p).As with the case where E_1^p, -p is a free Abelian group, the inverse of X^(p) and V^(p-1) tell us which p- and (p-1)-cells are involved in the expression of topological invariants. Again [X^(1)]^-1 is denoted by[X^(1)]^-1 := ([ x_1 x_2 ⋯ x_N_1 ])^⊤; [V^(0)]^-1 := ([ v_1 v_2 ⋯ v_N_0 ])^⊤ §.§ Example: spinless superconductors in P3_1211'As an example, we discuss time-reversal symmetric spinless superconductors in space group P3_121 with the conventional pairing symmetry A_1.This space is generated by threefold screw S_z = {C_3^z| (0, 0, 1/3)^⊤}, twofold rotation C_2^[100]={C_2^x| (0, 0, 2/3)^⊤} along x-direction, and translations.According to Ref. <cit.>, the K-group is ^ϕK_G/Π^(z,c)+0(ℝ^3) = (_2)^3 ×_3 ×^2, whose (_2)^3 classification corresponds to atomic limits and _3 ×^2 is the classification of topological crystalline superconductors, as shown in Fig. <ref> (a–c). In addition, we find that E_2^0,0 = (_2)^5 and E_2^1,-1 = _3 ×^2, which implies thatfive _2-, a _3-, and two -valued topological invariants are defined.Our cell decomposition of the fundamental domain is shown in Fig. <ref>(d).For this cell decomposition, E_1^0,0 and E_1^1,-1 are spanned byE_1^0,0 = ⊕_K = Γ,A⊕_j=1^3_2[b_K_j^(0)] ⊕⊕_K = M, L⊕_j=1^2_2[b_K_j^(0)] , E_1^1,-1 = ⊕_j=1^3_2[b_Δ_j^(1)] ⊕⊕_j=1^3[b_P_j^(1)] ⊕⊕_K =Λ, Q, T, S⊕_j=1^2[b_K_j^(1)] ⊕⊕_K=R, N_2[b_K_1^(1)] where the labels of irreps are defined in Table <ref>.After following the procedures discussed above, we obtain[X^(1)]^-1 = ( [ Δ_1 Δ_2 Δ_3 P_1 P_2 P_3 N_1 Λ_1 Λ_2 Q_1 Q_2 R_1 T_1 T_2 S_1 S_2; 1-1 1 0 0 0 0 0 0 0 0-1 0 0 0 0; 1 0 0 0 0 0 0 0 0 0 0-1 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0; 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0; 0 0 0 0 1 0 0 0-1 0 0 0 0 0 0 0; 0 0 0 0 1 1 0 0-1 0 0 0 1 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0;-1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0; 0 0 0 0 0 0 0 0 0-1-1 0 0 0-1 0; 0 0 0 0 2 1 0 0-1 0 0 0 1 0 0 0; 0 0 0 0 0 0 0-1 0 0 0 0-1 0 0 0; 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0; 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0; 0 0 0 1 1 1 0-1-1 1 1 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1; ]);Σ^(0) = diag(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 0, 0);[V^(0)]^-1 = ( [ Γ_1 Γ_2 Γ_3 M_1 M_2 K_1 K_2 K_3 A_1 A_2 A_3 L_1 L_2 H_1 H_2 H_3 Δ_1 Δ_2 Δ_3 R_1 N_1; 1 1 0 0 0 0 0 0 0 0 0-1-1 0 0 0 2-2 2 0-2; 0 0 1 0 0 0 0 0 0 0 0-1-1 0 0 0 2 0 0 0-2; 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 2 0; 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 2 0 0 0 0 0; 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 2; 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0-2 2 0 0 2; 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1; ]).As discussed in Sec. 
<ref>, we can find topological invariants using these data. Let us begin by constructing topological invariants corresponding to E_2^1,-1 = _3 ×^2. We define the following two quantities:𝒲^gapped_1= -1/2πi[∫_Λ d log q_^Λ_1/( q_^Λ_1)^vac +∫_T d log q_^T_1/( q_^T_1)^vac]; 𝒲^gapped_2= 1/2πi[∫_Q d log q_^Q_2/( q_^Q_2)^vac +∫_S d log q_^S_1/( q_^S_1)^vac]. Then, we can find the following topological invariant 𝒳 = 3/2πImlog[𝒵[(q_H^H_3)^vac]/𝒵[q_H^H_3]exp[1/3{-∫_P(2dlog q_^P_2/( q_^P_2)^vac +dlog q_^P_3/( q_^P_3)^vac) +∫_Λdlog q_^Λ_2/( q_^Λ_2)^vac -∫_Tdlog q_^T_1/( q_^T_1)^vac}]]. Next, we construct topological invariants that diagnose the entry in E_2^0,0 = (_2)^5.As shown in Eq. (<ref>), we have E_1^0,0 = (_2)^10.It is well-known that topological phases at 0-cells are characterized by zero-dimensional topological invariants, as discussed in the context of symmetry indicators <cit.>. In this case, the zero-dimensional topological invariants are Pfaffians for each irrep at Γ, M, A, and L; that is, E_1^0,0 is completely characterized by(p_Γ_1, p_Γ_2, p_Γ_3, p_M_1, p_M_2, p_A_1, p_A_2, p_A_3, p_L_1, p_L_2),where p_ρ denotes the Pfaffian invariant for irrep ρ. Since E_2^0,0 =d_1^0,0, Eq. (<ref>) informs us about invariants defined on 0-cells, we find that the following quantities (p_Γ_2, p_M_2, p_A_2, p_L_1, p_L_2).Although the above set of invariants fully characterize E_2^0,0, it is convenient to change the basis of E_2^0,0 and to have topological invariants to distinguish between atomic limits and other phases. This is achieved by symmetry indicators, and we obtain the following five _2-valued invariants: E_2^0,0 = span{[ 𝔟_1 = (1, 0, 1, 0, 1, 0, 1, 1, 1, 0); 𝔟_2 = (0, 1, 1, 1, 0, 1, 0, 1, 0, 1); 𝔟_3 = (1, 0, 1, 0, 1, 1, 0, 1, 0, 1); 𝔟_4 = (0, 0, 0, 0, 0, 1, 1, 0, 0, 0); 𝔟_5 = (1, 1, 0, 0, 0, 1, 1, 0, 0, 0);]}andν_1= p_L_12, ν_2= p_M_2+ p_L_1 + p_L_22,ν_3= p_M_2+ p_L_12,ν_4= p_Γ_2 + p_M_2 + p_A_2 + p_L_22,ν_5= p_Γ_2 + p_M_2 + p_A_32. For 𝔟_i, ν_i = 1 and ν_j ≠ i = 0.In fact, ν_4 and ν_5 are symmetry indicators.As a result, we have a set of topological invariants𝒱 = (ν_1, ν_2, ν_3, ν_4, ν_5, 𝒳, 𝒲^gapped_1, 𝒲^gapped_2).We compute all these topological invariants for generators of ^ϕK_G/Π^(z,c)+0(ℝ^3) = (_2)^3 ×_3 ×^2, and the computed results are tabulated in Table <ref>.We make the following two comments. First, we find that ν_4 = 0 for all gapped phases.This implies that (_2)^4-part of E_2^0,0 corresponds to gapped phases.In other words, the entry of remaining _2-part is gapless, and the second differential d_2^0,0:E_2^0,0→ E_2^2,-1 = _2 is nontrivial. To show this, let us consider the following effective low-energy model around ΓH_(k_1, k_2, k_z) = (^2 - μ) τ_z + k_z τ_y σ_x + k_1 k_2 (k_1 + k_2) τ_y σ_x,U() = τ_0 σ_0,U() = τ_x σ_0,U(C_3^z) = τ_0 σ_0,U(C_2^[100]) = τ_0 σ_z,where τ_i=0,x,y,z and σ_i=0,x,y,z are Pauli matrices acting on Nambu-spinor space and the eigenvalue space, respectively.This model corresponds to b_4 + b_5 and exhibits four gapless points on (√(3μ)/2, 0, 0), (0, √(3μ)/2, 0), and (±√(3μ)/2, ∓√(3μ)/2, 0), which are the 2-cell A-A'-L'-L and its symmetry-related 2-cells.This is exactly the generator of E_2^2,-1=_2, which implies that d_2^0,0 is nontrivial. 
Second, 𝒲^gapped_1 and 𝒲^gapped_2 are fractional for (0,0,0,0,1,0), (0,0,0,0,0,1) ∈ ^ϕK_G/Π^(z,c)+0(ℝ^3).This indicates that there is a nontrivial relation among _2 invariants, 𝒲^gapped_1, and 𝒲^gapped_2.In fact, ν_5 is nontrivial for (0,0,0,0,1,0), (0,0,0,0,0,1) ∈ ^ϕK_G/Π^(z,c)+0(ℝ^3).This can generally occur when there is a group extension in the process of deriving ^ϕK_G/Π^(z,c)+0(T^3) by AHSS.However, we leave the development of a general theory for the case where nontrivial higher differential and the group extension are involved as future work. §.§ Example: P_c3 The next example is spinless superconductors in magnetic space group P_c3 with the conventional pairing symmetry A.Generators of magnetic space are threefold rotation C_3^z, time-reversal with half translation along z-direction, and primitive translations.Although the Abelian group structure of K-group is unknown, we have information about real-space descriptions of topological phases. According to Ref. <cit.>, there are atomic limits and topological crystalline phases classified into (_2)^4 and (_2)^2, respectively.On the other hand, from AHSS in momentum space, we find that E_2^0,0 = (_2)^2 and E_2^1,-1 = (_2)^5, which implies that seven _2-valued topological invariants can be constructed in our method.Our cell decomposition of the fundamental domain is shown in Fig. <ref>.For this cell decomposition, E_1^0,0 and E_1^1,-1 are spanned by E_1^0,0 = _2[b_Γ_3^(0)] ⊕_2[b_M_1^(0)], E_1^1,-1 = ⊕_j=1^3⊕_K=Δ, P[b_K_j^(1)] ⊕⊕_K=U, E, G[b_K_1^(1)],where a threefold rotation symmetric cell K (K = Γ, A, H, Δ, P) has three irreps K_1, K_2, K_3 corresponding to threefold rotation eigenvalues e^4π/3, e^2π/3, 1; the other cells only have the trivial irrep. In fact, E_2^0,0=E_1^0,0 since d_1^0,0 is trivial. Therefore, two Pfaffian invariants (p_Γ_3, p_M_1) are topological invariants defined on 0-cells. As fortopological invariants defined on 1-cells, after performing the same analysis, we have[X^(1)]^-1=( [ Δ_1 Δ_2 Δ_3 P_1 P_2 P_3 U_1 E_1 G_1; 0-1 0 0 0 0 0 0 0;-1-1-1 0 0 0 0-1 0; 1 1-1 0 0 0 0 0 0; 1 1 0 0 0 0-1 0 0;-1-1-2 0 0 0 0-2 0; 1 1 0 0 1 0 0 0 0; 1 1 0 0 0 1 0 0 0; 2 2 2 1 1 1 0 2 0; 1 1 1 0 0 0-1 1 1; ]);Σ^(0) = diag(1, 1, 2, 2, 2, 2, 2); [V^(0)]^-1=( [ Γ_1 Γ_3 M_1 A_1 A_3 L_1 H_1 H_2 H_3; 1 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 1 1; 0 0 0-1 1 0 0 0 0; 0 0 0-1 0 1 0 0 0; 0 0 0-1 0 0 1 1 1; 0 0 0-1 0 0 0 1 0; 0 0 0-1 0 0 0 0 1; 0 1 0 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0; ]). Based on Sec. <ref>, we find the following five _2-valued topological invariants: 𝒳_1 = 1/πlog[q_^A_1/ (q_^A_1)^vac( [q_^A_3]/[(q_^A_3)^vac])^-1exp[-1/2∫_Δ(dlog q_^Δ_1/( q_^Δ_1)^vac+dlog q_^Δ_2/( q_^Δ_2)^vac- dlog q_^Δ_3/( q_^Δ_3)^vac)]]; 𝒳_2 = 1/πlog[ q_^A_1/ (q_^A_1)^vac([q_^L_1]/[(q_^L_1)^vac])^-1exp[-1/2(∫_Δdlog q_^Δ_1/( q_^Δ_1)^vac+∫_Δdlog q_^Δ_2/( q_^Δ_2)^vac-∫_Udlog q_^U_1/( q_^U_1)^vac)]]; 𝒳_3 = 1/πlog[ q_^A_1/ (q_^A_1)^vac( q_^H_1/ (q_^H_1)^vac q_^H_2/ (q_^H_2)^vac q_^H_3/ (q_^H_3)^vac)^-1. . ×exp[1/2∫_Δ(dlog q_^Δ_1/( q_^Δ_1)^vac+dlog q_^Δ_2/( q_^Δ_2)^vac+2dlog q_^Δ_3/( q_^Δ_3)^vac) +∫_Edlog q_^E_1/( q_^E_1)^vac]]; 𝒳_4 =1/πlog[ q_^A_1/ (q_^A_1)^vac( q_^H_2/ (q_^H_2)^vac)^-1exp[-1/2(∫_Δdlog q_^Δ_1/( q_^Δ_1)^vac+∫_Δdlog q_^Δ_2/( q_^Δ_2)^vac+∫_Pdlog q_^P_2/( q_^P_2)^vac)]]; 𝒳_5 =1/πlog[ q_^A_1/ (q_^A_1)^vac( q_^H_3/ (q_^H_3)^vac)^-1exp[-1/2(∫_Δdlog q_^Δ_1/( q_^Δ_1)^vac+∫_Δdlog q_^Δ_2/( q_^Δ_2)^vac+∫_Pdlog q_^P_3/( q_^P_3)^vac)]].As a result, we have the seven _2-valued topological invariants𝒱 = (p_Γ_3, p_M_1, 𝒳_1, 𝒳_2, 𝒳_3, 𝒳_4, 𝒳_5). 
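Numerically, each building block of 𝒲^gapped_1,2 and 𝒳 is a phase winding of det q^α_ relative to the vacuum reference along a 1-cell. A minimal discretization sketch of our own follows (q_of_k stands for a user-supplied function returning the irrep-resolved q-matrix; in an actual computation the vacuum reference is divided out first):

import numpy as np

def winding(q_of_k, ks):
    """(1/2 pi i) * integral of d log det q_k along the discretized path
    `ks`, accumulated as branch-safe phase increments (assumes det q != 0)."""
    dets = np.array([np.linalg.det(q_of_k(k)) for k in ks])
    phases = np.angle(dets[1:] / dets[:-1])
    return phases.sum() / (2 * np.pi)

# Synthetic check: det q = e^{ik} winds once around a closed 1-cell.
ks = np.linspace(0, 2 * np.pi, 201)
print(round(winding(lambda k: np.diag([np.exp(1j * k), 1.0]), ks)))   # 1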
As shown in Table <ref>, we compute all these topological invariants for representative models constructed from the real-space description.§ DETAILED INFORMATION OF DISCUSSIONS PRESENTED IN SEC. <REF>In this appendix, we provide information about technical details, which are not included to avoid digressing from the main topic of this work. §.§ Brief review of definitions of K-groups This subsection reviews definitions of K-groups in Refs. <cit.>. Before defining K-groups, let us consider the triple (E, H_, H^ref_), where H_ and H^ref_ are Hamiltonians, and E is a vector bundle on which H_ and H^ref_ act.In the presence of symmetries, symmetry representations U_(g) are the same for H_ and H^ref_, i.e., U_(g)H_^ϕ_g = c_g H_U_(g); U_(g)[H_^ref]^ϕ_g = c_g H_^refU_(g).We define relations between two triples (E, H_, H^ref_) and (E', H'_, H'^ref_).First, we define the sum structure among triples by (E, H_, H^ref_) + (E', H'_, H'^ref_) := (E⊕ E', H_⊕ H'_, H^ref_⊕ H'^ref_). Next, let us suppose that H_ and H'_, as well as H^ref_ and H'^ref_, are homotopy equivalent, i.e., H_∼ H'_ and H^ref_∼ H'^ref_.Then, we say that these two triples are isomorphic (E, H_, H^ref_) ≃ (E', H'_, H'^ref_).As a result, isomorphism classes of (E, H_, H^ref_) form a commutative monoid (X). We define a submonoid 𝒵(X) of (X) by 𝒵(X) := {(E, H_, H^ref_)∈(X) | H_∼H^ref_}. Then, we also define the equivalence relation “≈” between x, x' ∈(X). We say that x ≈ x' if there exist z, z' ∈𝒵(X) such that x + z ≃ x'+z'. Then, K-group K(X) is defined by the equivalence classes [V, H_, H^ref_], i.e.,K(X) := (X)/≈. It should be noted that z ≈ z' for z, z' ∈𝒵(X). This immediately follows from z + z_2 ≃ z_1 + z' since there exist z_1 and z_2 such that z_1 ≃ z and z_2 ≃ z' by definition.The K-group has the following properties * [V, H_, H^ref_] = 0 when H_∼ H^ref_, * -[V, H_, H^ref_] = [V, H^ref_, H_]. From the above two properties, one can see that the K-group is invariant under adding trivial degrees of freedom [V, H_, H^ref_] + [V, H'_, H'_] ≈ [V, H_, H^ref_]. Similarly, a relative K-group K(X, Y) for Y ⊂ X is defined by triples [V, H_, H^ref_] such that H_ = H^ref_ for ^∀∈ Y. Also the reduced K-group K̃(X) is defined byK̃(X) := K(X, pt). §.§ Choice of a basis for the computation of q-matrixIn this subsection, we explain how to choose 𝒰_α± in Eq. (<ref>) so that q_^α = 𝒵[q_^α]^_α for EAZ class AIII/CI and q_^α = 𝒵[q_^α]^2_α for EAZ class DIII. Let us recall that 𝒰^α_± satisfiesP^α±𝒰^α_± = 𝒰^α_±,where P^α± is a projection matrix defined in Eq. (<ref>).Furthermore, 𝒰^α_± is transformed as 𝒰_(g)𝒰^α_± = 𝒰^α_±𝒲^±_(g) (g∈_)for EAZ class AIII and𝒰_(g)𝒰^α_± = 𝒰^α_±𝒲^±_(g) (g∈_)𝒰_(a)[𝒰^α_±]^*= 𝒰^α_∓𝒲^∓_(a) (a∈𝒜_)for EAZ class CI/DIII.It should be noted that 𝒲^+_(g) and 𝒲^-_(g) are unitary equivalent since they both are composed of irrep α. Thus, they can be the same matrix by a basis transformation 𝒰^α_-→𝒰^α_-𝒮, i.e., there exists a unitary matrix 𝒮 such that𝒲_(g):= 𝒲^+_(g) = 𝒮^†𝒲^-_(g)𝒮 (g ∈),𝒲_(a):=𝒲^+_(a)𝒮^* = 𝒮^†𝒲^-_(a) (a∈𝒜_). When we define the q-matrix byq'^α_ = [𝒰^α_-𝒮]^†H_𝒰^α_+, the q-matrix itself is symmetric under g ∈ for EAZ class AIII and g ∈ (+𝒜_) for EAZ class CI/DIII.In other words, 𝒲_(g)q'^α_ = q'^α_𝒲_(g)(g ∈),for EAZ class AIII. For EAZ class CI/DIII, in addition to Eq. (<ref>), q'^α_ satisfies𝒲_(a)[q'^α_]^T = q'^α_𝒲_(a) (a∈𝒜_). As a result, the eigenvalues of q'^α_ are _α-fold degenerate for EAZ class AIII/CI and 2_α-fold degenerate for EAZ class DIII. The problem is how to obtain 𝒮 numerically. 
In the following, we discuss our implementation to get 𝒮.Equations (<ref>) and (<ref>) can be rewritten as 𝒲^-_(g)𝒮[𝒲^+_(g)]^† - 𝒮 = O(g ∈), 𝒲^-_(a)𝒮^⊤[𝒲^+_(a)]^† - 𝒮 = O (a∈𝒜_). Importantly, we can numerically solve these linear equations.To achieve this, let us introduce the following tensors[A_g]_lm;no = [𝒲^-_(g)]_ln[𝒲^-_(g)]^*_mo - δ_lnδ_mo, [B_g]_lm;no = [𝒲^-_(g)]_ln[𝒲^-_(g)]^*_mo - δ_lnδ_mo. By reshaping these tensors, we have matrices {A_g}_g ∈ and {B_g}_g ∈_. Then, the solution can be obtained from⋂_g ∈ A_g for EAZ class AIII/CI and (⋂_g ∈_ A_g) ∩(⋂_g ∈_ B_g)for class DIII, as discussed below.Let (a_1, a_2, ⋯a_J) be a basis set of the intersection of the kernels. Then, we consider linear combinations of {a_i}_i=1^J with randomly chosen coefficients {c_i}_i=1^J, i.e., s = ∑_i=1^J c_i a_i.Finally, we have the unitary matrix 𝒮 by rearranging the vector s and unitarizing the reshaped matrix by singular-value decomposition. It is noted that the intersection of the kernels is not one-dimensional in general.This happens when W_(g) is reducible and composed of n_α degenerate α-irreps.If this is the case, S takes a form S = ⊕_α S_α⊗δ S_α, where S_α is a unitary matrix determined by W_^+(g) and W_^-(g), and δ S_α∈ GL(n_α) is unfixed.A vector s gives a set of matrices δ S_α, and it is invertible if the coefficients c_i are randomly chosen.Unitarizing S by singular-value decomposition leaves S_α invariant, making that S is a desired basis transformation. §.§ Check of completeness of topological invariantsHere, we explain how to confirm that our invariants span E_2^1,-1.After computing W_l[H_], W_g[H_], C[H_] in Eqs. (<ref>), (<ref>), and (<ref>) for the 20 Hamiltonians, we haveL = (W_l[H^(1)_], W_l[H^(2)_], ⋯, W_l[H^(20)_]),F = (W_g[H^(1)_], W_g[H^(2)_], ⋯, W_g[H^(20)_]), T = (C[H^(1)_], C[H^(2)_], ⋯, C[H^(20)_]). First, we check if gapless topological invariants can span ^r_1. To see this, we compute the Smith normal form of L as U_L L V_L = [ Σ_L O ]. If rankΣ_L = r_1 and [Σ_L]_j = 1 for (j = 1, ⋯ r_1), our gapless topological invariants span ^r_1. Furthermore, Eq. (<ref>) implies that the (r_1+1)-th through 20th columns of L' = LV_L are zeros.Thus, V_L informs us about combinations of Hamiltonians whose gapless topological invariants are trivial. Next, we confirm that the -valued invariants for gapped phases on 2-skeletons are sufficient to span ^N_f. From Eq. (<ref>), (r_1+1)-th through 20th columns of F' = FV_L = (𝔣'_1, ⋯,𝔣'_r_1, 𝔣'_r_1+1, ⋯, 𝔣'_20) give us the computed results of -valued invariants without gapless points on 2-cells.Then, we again consider the Smith normal form of (𝔣'_r_1+1, ⋯, 𝔣'_20)U_F (𝔣'_r_1+1, ⋯, 𝔣'_20) V_F = [ Σ_F O ]. If rankΣ_F = N_f and [Σ_F]_j = 1 for (j = 1, ⋯ N_f), our -valued invariants can fully characterize ^N_f.Last, we discuss _k-valued invariants for gapped phases on 2-skeletons.In the following, we consider _k-valued topological invariants as -valued quantities, i.e., we forget about the _k-nature of C[H_]. Similar to the case of -valued invariants, for TV_L = (𝔱'_1,⋯, 𝔱'_r_1,𝔱'_r_1+1, ⋯, 𝔱'_20), (𝔱'_r_1+1, ⋯, 𝔱'_20) is a set of results of _k-valued invariants without gapless points on 2-cells, which is denoted by 𝔗.Let us suppose that we have λ_i-r_1 such that λ_i-r_1∉{0, 1} for R_0+1 ≤ i-r_1 ≤ R_0+N_t (R_0∈).Then, we define a diagonal matrix by 𝔄 = diag(λ_R_0+1, ⋯, λ_R_0+N_t). We construct a basis set of [ 𝔗 𝔄 ] from its Smith normal formU_𝔗[ 𝔗 𝔄 ]V_𝔗= [ Σ_𝔗 O; O O ],where Σ_𝔗 is a diagonal matrix and its elements are all positive integers. 
When we denote U_𝔗^-1 = [ U_⊥ U_∥ ], 𝔅=U_⊥Σ_𝔗 is a basis set of [ 𝔗 𝔄 ]. By definition, 𝔄 can always be expanded by 𝔅.In other words, when 𝔅^+ denotes the pseudo-inverse matrix of 𝔅, 𝔅^+𝔄 is always an integer-valued matrix.Then, we compute the Smith normal form of 𝔅^+𝔄U_𝔅 (𝔅^+𝔄) V_𝔅 = [ Σ_𝔅 O ]. If Σ_𝔅=𝔄, our invariants can correctly capture torsion elements of E_2^1,-1.§ COMPATIBILITY RELATION OF NON-HERMITIAN MATRIX Let G be a finite group and ϕ: G →{± 1} be a homomorphism indicating g∈ G is unitary or antiunitary.The factor system of G is denoted by z_g,h.Consider the subgroup G_0 = {g ∈ G | ϕ_g = 1}.Let the set of irreps of G_0 be {α}_α, with the character of the α-irrep being χ^α_g. The EAZ class of α-irrep is defined by the Wigner criterionW^α = 1/|G_0|∑_g ∈ G\ G_0 z_g,gχ^α_g^2∈{0,± 1},where a representative element is denoted by a ∈ G\ G_0.The the irrep obtained by applying a to the α-irrep is written as aα, where χ^aα_g = z_g,a/z_a,a^-1ga (χ_a^-1ga)^*. The EAZ class is classified depending on the presence of anti-unitary elements and the value of W_α as:EAZ of α = {[Aif(G\ G_0 = ∅),; AIif(G\ G_0 ≠∅, W_α=1),;AII if(G\ G_0 ≠∅, W_α=-1),;A_Tif(G\ G_0 ≠∅, W_α=0).;].Furthermore, for the irrep α, the representation including anitunitary symmetries is given as:α̃= {[α (A, AI),;α⊕α (AII),;α⊕ aα (A_T).;]. Consider a subgroup H ⊂ G, and its unitary subgroup H_0 = { g ∈ H | ϕ_g = 1 }. An irrep of H_0 is denoted as β, and its representation including antiunitary group elements is denoted similarly to (<ref>) as β̃.The decomposition of α̃ by β̃ is written as α̃= ⊕_β̃β̃^⊕ n^α̃_β̃,n^α̃_β̃∈ℤ_≥ 0,where n^α̃_β̃ is calculated using the irreducible character, equivalent to the first differential d_1^p,0 in insulators <cit.>.Consider a z-projective representation ρ of G with dimension 𝒟_ρ.Let us denote a representation matrix by u^ρ_g∈ G, which satisfies .[ u^ρ_g u^ρ_h (ϕ_g=1); u^ρ_g (u^ρ_h)^*(ϕ_g=-1); ]}= z_g,h u^ρ_gh, ∀ g,h ∈ G.The decomposition of ρ by α̃ and β̃ is written as ρ = ⊕_α̃α̃^⊕ n^ρ_α̃ = ⊕_β̃β̃^⊕ n^ρ_β̃.Let us set a representation matrix u^α_g of the α-irrep.Then, the representation matrix for the α̃ representation reads asu^α̃_g ∈ G_0 = {[ u^α_g(A, AI),;u^α_g ⊗1_2(AII),; [u^α_g;u^aα_g;](A_T), ].andu^α̃_a = {[ u^α_a (AI),;u^α_a ⊗ (iσ_y)(AII),; [z_a,au^α_a^2; 1__α;](A_T). ].(Other matrices for elements g ∈ G \ G_0 are given by (<ref>).) Here, u^α_a ∈ U(𝒟_α) is a unitary matrix that satisfies u^α_a (u^α_a)^* = W_α× z_a,au^α_a^2. The representation matrix for β̃ is similarly defined. Using this notation, the representation matrix for ρ can be chosen as:u^ρ_g ∈ G = ⊕_α̃ u^α̃_g ⊗1_n^ρ_α̃. In the following, we consider symmetries that contain only one of the two types, either transpose or complex conjugate. More generally, there exist symmetries that contain both transposed and complex conjugate types simultaneously, but this is unnecessary for the construction of invariants in this paper.§.§ TransposeGiven a homomorphism ϕ_g, we consider an invertible matrix M ∈GL_n(ℂ) with the following transpose-type G-symmetry:M = {[ u_g M u_g^†, ϕ_g=1,; u_g M^T u_g^†,ϕ_g=-1, ].g ∈ G.Here, M^T is the transpose of matrix M. 
In the basis where u^ρ_g is given by (<ref>), the matrix M is block-diagonalized asM = ⊕_α̃1__α⊗ m^α̃, m^α̃∈{[GL_n^ρ_α̃(ℂ),A,AI,; GL_2n^ρ_α̃(ℂ), AII,A_T, ].Moreover, a symmetry implies that AI: (m^α̃)^T = m^α̃,AII: iσ_y (m^α̃)^T (iσ_y)^† = m^α̃,A_T: m^α̃= [ m^α; (m^α)^T; ],m^α∈GL_n^ρ_α̃(ℂ).Thus, it was observed that when the EAZ is AII or A_T, the eigenvalues of the matrix m^α̃ are doubly degenerate.We define the quantity Z^α̃_G(M), which takes values in ℂ^× = {z ∈ℂ|z ≠ 0}, as the product of independent eigenvalues in the α̃-sector. Introducing the orthogonal projection to the α̃ representationP^α̃ ={[P^α ( A,AI,AII),;P^α+P^a α( A_T), ].P^α=_α/|G_0|∑_g∈ G_0 (χ^α_g)^* u^ρ_g,the projection of M to the α̃ representation is given by M^α̃ = P^α̃ M (= M P^α̃). The matrix rank of M^α̃ is _α̃ n^ρ_α̃, and it has _α̃ degenerate eigenvalues λ^α̃_1,…,λ^α̃_n^ρ_α̃. Writing the non-zero eigenvalues of the matrix M as the set Spec^×(M), we haveSpec^×(M^α̃) = ⋃_μ=1^_α̃{λ^α̃_i}_i=1^n^ρ_α̃.In other words,(λ - M^α̃) =λ^_ρ-_α̃∏_i=1^n^ρ_α̃ (λ - λ^α̃_i)^_α̃.We define Z_G^α̃(M) := ∏_i=1^n^ρ_α̃λ^α̃_i.Note that this definition of Z_G^α̃(M) does not depend on the choice of representation matrix u^ρ_g.Similarly, for the β-irrep of the subgroup H_0 ⊂ G_0, the value Z_H^β̃(M) in ℂ^× is defined. The following holds:Z^β̃_H(M) = ∏_α̃[ Z^α̃_G(M)]^n^α̃_β̃.(Proof)In the basis that diagonalizes the matrix m^α̃, the matrix M can be expressed as:M = ⊕_α̃1__α̃⊗[λ^α̃_1; ⋱; λ^α̃_n^ρ_α̃ ]= ⊕_α̃⊕_β̃1__β̃⊗1_n^α̃_β̃⊗[λ^α̃_1; ⋱; λ^α̃_n^ρ_α̃ ].If we decompose the α̃ representation into the β̃ representation as u^α̃_g = ⊕_β̃ u^β̃_g ⊗1_n^α̃_β̃, the representation matrix u^ρ_g is expressed as:u^ρ_g = ⊕_α̃⊕_β̃ u^β̃_g ⊗1_n^α̃_β̃⊗1_n^ρ_α̃.If we define the projection of M^α̃ into the β̃ sector as M^α̃,β̃ = P^β̃ P^α̃ M, from the representations (<ref>) and (<ref>), the eigenvalues of the matrix M^α̃,β̃ are given by n^ρ_α̃ eigenvalues λ^α̃_1, …, λ^α̃_n^ρ_α̃, which are degenerated (_β̃× n^α̃_β̃) times.Spec^×(M^α̃,β̃)= ⋃_μ=1^_β̃⋃_ν=1^n^α̃_β̃{λ^α̃_i}_i=1^n^ρ_α̃.Therefore, the eigenvalues of the matrix M^β̃ = P^β̃ M = ∑_α̃ M^α̃,β̃ are obtained by taking the union over α̃:Spec^×(M^β̃)= ⋃_α̃⋃_μ=1^_β̃⋃_ν=1^n^α̃_β̃{λ^α̃_i}_i=1^n^ρ_α̃.From which we obtain:Z^β̃_H(M)= ∏_α̃∏_ν=1^n^α̃_β̃∏_i=1^n^ρ_α̃λ^α̃_i = ∏_α̃∏_ν=1^n^α̃_β̃ Z^α̃_G(M).(Note that n^ρ_β̃ = ∑_α̃ n^ρ_α̃ n^α̃_β̃.)§.§ Complex conjugationDepending on the homomorphism ϕ_g, we consider an invertible matrix M ∈GL_n(ℂ) that possesses G-symmetry of complex conjugation type:M = {[ u_g M u_g^†, ϕ_g=1,; u_g M^* u_g^†,ϕ_g=-1, ].g ∈ G.Here, M^* denotes the complex conjugate of the matrix M. In the basis where u^ρ_g is given by (<ref>), the matrix M is block-diagonalized as (<ref>) and (<ref>). Furthermore, depending on the EAZ class, the matrix m^α̃ has the following symmetry constraint by a:AI: (m^α̃)^* = m^α̃,AII: iσ_y (m^α̃)^* (iσ_y)^† = m^α̃,A_T: m^α̃= [ m^α; (m^α)^*; ],m^α∈GL_n^ρ_α̃(ℂ).For any EAZ class, no eigenvalue degeneracy occurs. 
However, when a-symmetry is present, the eigenvalues exhibit the following structures depending on the EAZ class: * For AI: the eigenvalues of m^α̃=m^α are either real numbers λ^α∈ℝ or they appear as complex conjugate pairs λ^α,(λ^α)^*.* For AII: the eigenvalues of m^α̃=m^α appear as complex conjugate pairs λ^α,(λ^α)^*.* For A_T: the eigenvalue λ^α of m^α and the eigenvalue λ^aα of m^aα are related as λ^α = (λ^aα)^*.Thus, if we define Z^α_G_0(M) in the same manner as (<ref>) for the α-irrep of the unitary subgroup G_0, the following hold:A:Z^α_G_0(M) ∈ℂ^×,AI:Z^α_G_0(M) ∈ℝ^×,AII:Z^α_G_0(M) ∈ℝ^×, Z^α_G_0(M) >0,A_T:Z^α_G_0(M) = [ Z^aα_G_0(M)]^* ∈ℂ^×.(Here, ℝ^× = {x ∈ℝ|x≠ 0}.)Similarly, Z_H_0^β(M) is defined in the same manner as (<ref>) for the β-irrep of the subgroup H_0 ⊂ G_0. If the irreducible decomposition of the α-irrep of G_0 by the irreps of H_0 is given as α = ⊕_ββ^⊕ n^α_β, then as a special case of (<ref>), the following holds:Z^β_H_0(M) = ∏_α[ Z^α_G_0(M)]^n^α_β.§ EXISTENCE OF A CANONICAL GAUGE FIXING CONDITION OVER A LOCAL PATCH Let G be a finite group, ϕ: G →{± 1} a homomorphism specifying unitary/antiunitary, and z_g,h a factor system.The group G acts on the space X ≅^d through a group homomorphism G → O(d).The action of G is expressed ask ↦ gk,k ∈ X,g ∈ G,where k_0 = 0 ∈^d is a point.Note that there exists a homotopy to a point k_0 which is compatible with the G-action. Consider a “representation on X” satisfying the following:U_g(hk)U_h(k)^ϕ_g = z_g,hU_gh(k),k ∈ X,g, h ∈ G.Here, U_g(k) is a unitary matrix dependent on k ∈ X.We prove the following: There exists a continuous unitary matrix V(k) on X such that it satisfies:U_g(k)V(k)^ϕ_g = V(gk) U_g(k_0),k ∈ X,g ∈ G.(Proof) First, note that the representation matrices of the group can be considered as taking values in a sort of flag manifold.For simplicity, assume that G is unitary, meaning that ϕ_g=1 for all g ∈ G.The case with antiunitary elements will be commented on later.Let ρ be a representation of G, and u^ρ_g be its representation matrix.The irreducible decomposition of ρ can be written as ρ = ⊕_α n_αα.For each α, fix a set of representation matrices {u^α_g}_g.Then, there exists a unitary matrix V such that u^ρ_g V = V ⊕_α u^α_g ⊗1_n_α.The unitary V is not unique, and there is the following ambiguity:V ↦ V ⊕_α1__α⊗ W^α, W^α∈ U(n_α).(_α is the representation dimension of the α-irrep.)The quotient space divided by this ambiguity is written as [V] ∈ M(G,z,{n_α}_α) := U(∑_α_α n_α)/∏_α U(n_α).There is a one-to-one correspondence between the equivalence class [V] and the representation matrices {u^ρ_g}_g, meaning that the representation matrices {u^ρ_g}_g take values in the space M(G,z,{n_α}_α).Moreover, the “gauge fixing” of V corresponds to a choice of W^α. We now proceed to the proof of the claim. Divide X=^d into cells symmetrically with respect to G. Namely,(i) the action of G either keeps points in a p-cell D^p_i invariant, g k = k, k ∈ D^p_i, or maps them to another p-cell, g(D^p_i) = D^p_g(i),(ii) every p-cell D^p_i is a boundary of some (p+1)-cell D^p+1_j, D^p_i ⊂∂ D^p+1_j. The stabilizer group of the p-cell D^p_i in G is denoted as G_D^p_i = {g ∈ G | gk = k, k ∈ D^p_i}. Every p-cell includes the origin k_0 in its boundary. Denote the set of p-cells as C_p and define X_0=C_0= {k_0}, and X_p = X_p-1∪ C_p. X_p is referred to as the p-skeleton.We prove the statement by induction. For the 0-skeleton X_0={k_0}, set V(k_0) = 1, which satisfies (<ref>). 
Assume the existence of V(k ∈ X_p-1) satisfying (<ref>) on the (p-1)-skeleton X_p-1.We then show that V(k) can be extended to X_p. Consider one of the orbits of the p-cell set C_p:g(D^p_a), g ∈ G,where D^p_a is a representative p-cell. The subgroup keeping D^p_a invariant is written as G_D^p_a = {g ∈ G | g (D^p_a) = D^p_a}. On D^p_a, restricted to G_D^p_a, U_g(k) is a group representation:U_g(k) U_h(k) = U_gh(k),k ∈ D^p_a,g ∈ G_D^p_a.Consider the irreducible decomposition of representation U in the group G_D^p_a:U|_G_D^p_a = ⊕_β n_ββ.Then, the representation matrices U_g ∈ G_D^p_a(k ∈ D^p_a) define a map to the quotient space:D^p_a → M(G_D^p_a,z,{n_β}_β) = U(∑_β_β n_β)/∏_β U(n_β).The unitary matrix V(k ∈∂ D^p_a) at the boundary of D^p_a is fixed by V(k) on the (p-1)-skeleton X_p-1. It remains to be shown whether this can be extended inside D^p_a, but this is possible since D^p_a is contractible. See Fig.<ref>. For h ∈ G\ G_D^p_a, V(k) in the p-cell h (D^p_a) should be defined as V(hk) = U_h(k)V(k)U_h(k_0)^†, k ∈ D^p_a.This definition is independent of the choice of h, as can be confirmed by considering for g ∈ G_D^p_a V(hgk) = U_hg(k)V(k)U_hg(k_0)= U_h(k)U_g(k)V(k)U_g(k_0)^† U_h(k_0^†= U_h(k) V(k)U_h(k_0)^†.This ensures that the definition of V(k) can be consistently extended over the entire p-cell h (D^p_a).When G includes antiunitary symmetries, the ambiguity (<ref>) is changed as follows.Depending on the Wigner criterion (<ref>) for the α-irrep of the unitary subgroup G_0 = {g ∈ G|ϕ_g=1}, the representation matrix for α̃ can be chosen as in (<ref>), (<ref>), and (<ref>).In this case, the ambiguity matrix W: V ↦ V W satisfies the same symmetry as (<ref>) and can thus be written as in (<ref> - <ref>).Consequently, the quotient space is obtained asM(G,z,{n_α}) = U(∑_α_α n_α)/∏_α̃, W_α=1O(n_α) ×∏_α̃, W_α=-1Sp(n_α/2) ×∏_α̃, W_α=0U(n_α).This only slightly changes the structure of the quotient space. The proof above remains unaffected. § PROOF OF (<REF>) On each patch U_i, redefinew̃_i,(g) := e^ig(-_i)·a_g w_i,(g), ∈ U_i,g ∈ G,so that the factor system of w̃_i,(g) becomes independent of :w̃_h(i),h(g) w̃_i,(h)^ϕ_g =z_gh_i(g,h) w̃_i,(gh), ∈ U_i,g, h ∈ G.Then, based on the results of the previous section <ref>, for the stabilizer group G_i = {g ∈ G|g _i = _i}, there exists a gauge such thatw̃_i,(g) = w̃_i,_i(g),∈ U_i,g ∈ G_i,For g ∈ G\ G_i, set the symmetry operator at the 0-cell asw̃_i,(g) := w̃_i,_i(g),∈ U_i,g ∈ G \ G_i,Returning the obtained w̃_i,(g) to w_i,(g), it turns out that there exists a gaugew_i,(g) = e^-ig(-_i)·a_g w_i,_i(g),∈ U_i,g in G. § DERIVATION OF EQS. (<REF>) AND (<REF>)As stated in Sec. <ref>, the _2 invariants ν(E) and ν(F) for bands E and F do not maintain an additive structure for the direct sum of bands E ⊕ F, i.e.,ν(E ⊕ F) ≠ν(E) + ν(F). To examine this issue in detail, we first describe a specific construction for the gauge fixing condition w_i,(g) and the basis transformation matrix V_i→ a, based on the given representation data {n_β(_i)}_i,β on each 0-cell.Here, we order the irreps at each 0-cell i, which is labeled by β(_i) = 1(_i),…,m_i(_i).Let us fix one representation matrix for the irrep β(_i) and denote it as u^β(_i)__i(g).The irreducible decomposition of band B∈{E,F} at 0-cell i can be written as ⊕_β=1^m_i n^B_β(_i)β(_i), n^B_β(_i)∈_≥ 0, and thus the gauge fixing condition on patch U_i, according to Eq. 
(<ref>), becomesw^B_i,(g) = e^-ig(-_i)·a_g⊕_β=1^m_i1_n^B_β(_i)⊗ u^β(_i)__i(g).Similarly, for each 1-cell a, order the irreps as α(_a)=1(_a),…,m_a(_a) and denote the representation matrix of irrep α(_a) as u^α(_a)__a(g).For the irreducible decomposition ⊕_α=1^m_a n^B_α(_a)α(_a), n^B_α(_a)∈_≥ 0, of band B on 1-cell a, the gauge fixing condition is set asw^B_a,_a(g) = ⊕_α=1^m_a1_n^B_α(_a)⊗ u^α(_a)__a(g).Under the settings described above, we determine the basis transformation matrix V^B_i→ a,.For an irrep β(_i) of 0-cell i, we consider its irreducible decomposition in terms of the irreps {α(_a)}_q=1,…,m_a of the 1-cell a connected to i:β(_i) = ⊕_α=1^m_a n^β(_i)_α(_a)α(_a), n^β(_i)_α(_a)∈_≥ 0.Note that∑_β=1^m_i n^B_β(_i) n^β(_i)_α(_a) = n^B_α(_a).Then, a basis transformation matrix v^β(_i)_i→ a,_a from a single irrep β(_i) to a representation on the 1-cell a is determined by e^-ig(_a-_i)·a_g u^β(_i)__i(g) v^β(_i)_i→ a,_a=v^β(_i)_i→ a,_a⊕_α=1^m_a1_n^β(_i)_α(_a)⊗ u^α(_a)__a(g),g ∈ G_a^0.The transformation matrix v^β(_i)_i→ a,_a does not depend on the band B ∈{E,F}.Using v^β(_i)_i→ a,_a, the representation matrix w^B_i,(g) for g ∈ G_a^0 is transformed as follows w^B_i,_a(g) (⊕_β=1^m_i1_n^B_β(_i)⊗ v^β(_i)_i→ a,_a) = (⊕_β=1^m_i1_n^B_β(_i)⊗ v^β(_i)_i→ a,_a) ⊕_β=1^m_i1_n^B_β(_i)⊗( ⊕_α=1^m_a1_n^β(_i)_α(_a)⊗ u^α(_a)__a(g) ). Introduce the permutation matrix P^B that rearranges the sum order from (β,μ,α) → (α,β,μ): (⊕_β=1^m_i⊕_μ=1^n^B_β(_i)⊕_α=1^m_a1_n^β(_i)_α(_a)⊗ u^α(_a)__a(g)) P^B = ⊕_α=1^m_a⊕_β=1^m_i⊕_μ=1^n^B_β(_i)1_n^β(_i)_α(_a)⊗ u^α(_a)__a(g) = ⊕_α=1^m_a1_∑_β=1^m_i n^B_ β(_i) n^β(_i)_α(_a)⊗ u^α(_a)__a(g) =w_a,_a(g). Therefore, the basis transformation matrix is given by V^B_i→ a,_a = (⊕_β=1^m_i1_n^B_β(_i)⊗ v^β(_i)_i→ a,_a) P^B. Note that it depends on the representation data {n^B_β(_i)}_β. For later discussion, we introduce representation bases to explicitly show the order changes in basis. Let {ϕ^Bβμ x_i,}_β,μ,x denote the set of Bloch states in band B on patch U_i, and write Φ^B_i, =⊕_β=1^m_i⊕_μ=1^n^B_β(_i)⊕_x=1^_β(_i)ϕ_i,^Bβμ x. The decomposition of the representation basis ⊕_x=1^_β(_i)ϕ_i,_a^Bβμ x into the representation basis of G_a^0 is written as (⊕_x=1^_β(_i)ϕ_i,_a^Bβμ x) v^β(_i)_i→ a,_a = ⊕_α=1^m_a⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y. Summarizing the changes in basis, we have Φ^B_i,_a = ⊕_β=1^m_i⊕_μ=1^n^B_β(_i)⊕_x=1^_β(_i)ϕ_i,_a^Bβμ x ⊕_β=1^m_i⊕_μ=1^n^B_β(_i)⊕_α=1^m_a⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y⊕_α=1^m_a⊕_β=1^m_i⊕_μ=1^n^B_β(_i)⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y. Now, consider ξ^α(_a)__a( (V^E_s(a)→ a,_a)^† (Φ^E_s(a),_a)^†Φ^E_t(a),_a V^E_t(a)→ a,_a) ×ξ^α(_a)__a( (V^F_s(a)→ a,_a)^† (Φ^F_s(a),_a)^†Φ^F_t(a),_a V^F_t(a)→ a,_a) = ξ^α(_a)__a( (⊕_B V^B_s(a)→ a,_a)^†(⊕_B Φ^B_s(a),_a)^†(⊕_B Φ^B_t(a),_a) (⊕_B V^B_t(a)→ a,_a) ) and ξ^α(_a)__a( (V^E ⊕ F_s(a)→ a,_a)^†(Φ^E⊕ F_s(a),_a)^†Φ^E ⊕ F_t(a),_a V^E ⊕ F_t(a)→ a,_a) for comparison. Recalling that the U(1)-valued quantity ξ^α(_a)__a(t̃__a) is independent of the choice of representation matrices at _a, we define another permutation matrix Q for swapping indices α,B as (⊕_B ⊕_α=1^m_a⊕_β=1^m_i⊕_μ=1^n^B_β(_i)⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y)Q = ⊕_α=1^m_a⊕_B ⊕_β=1^m_i⊕_μ=1^n^B_β(_i)⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y. Then we have ξ^α(_a)__a( (⊕_B V^B_s(a)→ a,_a)^†(⊕_B Φ^B_s(a),_a)^†(⊕_B Φ^B_t(a),_a) (⊕_B V^B_t(a)→ a,_a) ) =ξ^α(_a)__a( Q^T (⊕_B V^B_s(a)→ a,_a)^†(⊕_B Φ^B_s(a),_a)^†(⊕_B Φ^B_t(a),_a) (⊕_B V^B_t(a)→ a,_a)Q ). 
Note that since the index β is not common between the start point s(a) and terminal point t(a), the transformation of basis in _a by matrix Q cannot represent the swapping of direct sum order of β. (Furthermore, it is implicitly used that the compatibility in each α(_a) sector, i.e., ∑_β=1^m_s(a) n^B_β(s(a)) n^β(_s(a))_α(_a) = ∑_β=1^m_t(a) n^B_β(t(a)) n^β(_t(a))_α(_a) = n^B_α(_a).) In Eq. (<ref>), Φ^E ⊕ F_i, and V^E ⊕ F_i→ a,_a are given by Φ^E ⊕ F_i, = ⊕_β=1^m_i⊕_B⊕_μ=1^n^B_β(_i)⊕_x=1^_β(_i)ϕ_i,^Bβμ x, V^E ⊕ F_i→ a,_a = (⊕_β=1^m_i⊕_B 1_n^B_β(_i)⊗ v^β(_i)_i→ a,_a) P^E ⊕ F, and we define the change of basis by P^E⊕ F as Φ^E ⊕ F_i,_a = ⊕_β=1^m_i⊕_B ⊕_μ=1^n^B_β(_i)⊕_x=1^_β(_i)ϕ_i,_a^Bβμ x ⊕_β=1^m_i⊕_B ⊕_μ=1^n^B_β(_i)⊕_α=1^m_a⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y⊕_α=1^m_a⊕_β=1^m_i⊕_B ⊕_μ=1^n^B_β(_i)⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y. Comparing equations (<ref>) and (<ref>), we observe that only the order of the direct sums over indices B,β differs. Let δ V^(E,F)_i→ a,_a denote the permutation matrix for swapping the order of the direct sums over B,β: (⊕_α=1^m_a⊕_B ⊕_β=1^m_i⊕_μ=1^n^B_β(_i)⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y) δ V^(E,F)_i→ a,_a = ⊕_α=1^m_a⊕_β=1^m_i⊕_B ⊕_μ=1^n^B_β(_i)⊕_ν=1^n^β(_i)_α(_a)⊕_y=1^_α(_a)ψ_i,_a^Bβμαν y. Summarizing the transformation properties, we obtain (⊕_B Φ^B_i,_a) (⊕_B V^B_i→ a,_a) Q δ V^(E,F)_i→ a,_a = Φ^E ⊕ F_i,_a V^E ⊕ F_i→ a,_a. The permutation matrix δ V^(E,F)_i→ a,_a is block-diagonal in the α-irrep sectors and independent of the index y. Thus, it can be represented as δ V^(E,F)_i→ a,_a =⊕_α=1^m_aδ V^(E,F),α(_a)_i→ a,_a⊗1__α(_a) and possesses the symmetry under the representation w^E⊕ F_a,_a(g)=⊕_α=1^m_a1_(n^E_α(_a)+n^F_α(_a))⊗ u^α(_a)__a(g), as noted in Eq. (<ref>). The sign of the permutation matrix δ V^(E,F),α(_a)_i→ a,_a, corresponding to the permutation (B,β,μ,ν) → (β,B,μ,ν), is given by δξ^α(_a)__a,i → a(E|_E_2^0,0,F|_E_2^0,0) :=[ δ V^(E,F),α(_a)_i→ a,_a]=(-1)^∑_1≤β_F<m_i∑_β_F<β_E≤ m_i n^F_β_F(_i)n^β_F(_i)_α(_a) n^E_β_E(_i)n^β_E(_i)_α(_a)∈{± 1}. Here, the notation E|_E_2^0,0 represents the restriction to the 0-cell, and the correction term indeed depends only on elements of E_2^0,0 in the 0-cell. Consequently, the following relation is obtained: ξ^α(_a)__a( (V^E ⊕ F_s(a)→ a,_a)^†(Φ^E⊕ F_s(a),_a)^†Φ^E ⊕ F_t(a),_a V^E ⊕ F_t(a)→ a,_a) = ξ^α(_a)__a( (⊕_B V^B_s(a)→ a,_a)^†(⊕_B Φ^B_s(a),_a)^†(⊕_B Φ^B_t(a),_a) (⊕_B V^B_t(a)→ a,_a) ) ×δξ^α(_a)__a,s(a) → a(E|_E_2^0,0,F|_E_2^0,0) ×δξ^α(_a)__a,t(a) → a(E|_E_2^0,0,F|_E_2^0,0). Combining the results, we obtain ν_i(E ⊕ F) ≡ν_i(E)+ν_i(F)+δν_i(E|_E_2^0,0,F|_E_2^0,0), (-1)^δν_i(E|_E_2^0,0,F|_E_2^0,0) := ∏_(a,α)[δξ^α(_a)__a,s(a) → a(E|_E_2^0,0,F|_E_2^0,0) δξ^α(_a)__a,t(a) → a(E|_E_2^0,0,F|_E_2^0,0)]^[x_i]_(a,α). § SYMMETRIC BILINEAR FORMS AND QUADRATIC REFINEMENT Let L=^N be a lattice, and b: L × L →_2 = {0,1}be a symmetric bilinear form, meaning for x,y,z ∈ L and n,m ∈, it satisfiesb(x+y, z) = b(x,z) + b(y,z), b(x, y+z) = b(x,y) + b(x,z), b(nx,my) = nm b(x,y), b(x,y) = b(y,x).A function q: L →_2that satisfies that q(x+y) = q(x) + q(y) + b(x,y)is called a quadratic refinement of b. We show the following:(i) A quadratic refinement q exists.(ii) q is not unique; the ambiguity of q is given by Hom(L,_2) ≅_2^N.The proof of the latter is trivial. If q_1, q_2 are quadratic refinements, thenδ q(x):= q_1(x)-q_2(x)is linear. 
We now prove the former.Let the basis of L be e_1,…,e_N, and writeb_ij = b(e_i,e_j), b_ij=b_ji.Introduce the floor function for a real number x ∈, which returns the largest integer not exceeding x:⌊ x ⌋ = max{ n ∈| n ≤ x}.We show that a solution of (<ref>) is given byq(∑_i=1x_i e_i) = ∑_i=1^N ⌊x_i/2⌋ b_ii + ∑_1≤ i<j ≤ N x_i b_ij x_j.In fact, q(x+y)-q(x)-q(y) =∑_i=1^N ⌊x_i+y_i/2⌋ b_ii + ∑_1≤ i<j ≤ N (x_i+y_i) b_ij (x_j+y_j)-(∑_i=1^N ⌊x_i/2⌋ b_ii + ∑_1≤ i<j ≤ N x_i b_ij x_j)-(∑_i=1^N ⌊y_i/2⌋ b_ii + ∑_1≤ i<j ≤ N y_i b_ij y_j)=∑_i=1^N ( ⌊x_i+y_i/2⌋-⌊x_i/2⌋-⌊y_i/2)⌋ b_ii + ∑_i≠ j x_i b_ij y_j.Noticing that when x_i, y_i are integers ⌊x_i+y_i/2⌋=⌊x_i/2⌋ +⌊y_i/2⌋ + x_i y_i2.Thus,q(x+y)-q(x)-q(y)=∑_i,j=1^N x_i b_ij y_j =b(x,y). The ambiguity Hom(L,_2) ≅_2^N can be represented by N bits a_1,…,a_N ∈_2^N, and a general form of quadratic refinement is given byq(∑_i=1x_i e_i) = ∑_i=1^N ⌊x_i/2⌋ b_ii + ∑_1≤ i<j ≤ N x_i b_ij x_j +∑_i=1^N a_i x_i. As a comment, quadratic refinement can also be defined as a function q':L →/2 that satisfiesb(x,y) = q'(x+y) - q'(x) - q'(y) + q'(0)for all x,y ∈ L.The definition (<ref>) implies q(0)=0.If defined as (<ref>), q'(0) ∈{0,1} remains indeterminate, and there is a relation q(x) = q'(x)-q'(0) with the quadratic refinement defined by (<ref>).§ DERIVATION OF (<REF>) Computing the kernel of the differential d_1^0,0 ( B^(0)) =B^(1) M_d_1^0,0 we have the basis transformation B'^(0) =B^(0) V^(0) = (b'^(0)_1,…,b'^(0)_32) withV^(0)= [ [100 -1000000000000001 -101 -10 -1 -1011110;010 -1000000000000001 -10 -10111 -100010;001 -100000000000000111 -1100 -11000 -10;000100000000000000 -11 -110 -1010 -10001;000110000000000000 -20 -11 -1 -1 -10001111;000101000000000000 -201 -10010000000;000100100000000000 -200 -11100000000;00000000000000000000010000000000;000100010000000000 -201000 -1 -1001100;000100001000000000 -20 -100011 -1 -10011;000100000100000000 -2000000 -1110000;00000000000000000000000001000000;000100000010000000 -200000 -10001100;00000000000000000000000010000000;000100000001000000 -20 -100000000011;00000000000000000000100000000000;000100000000100000 -2 -100 -1100001010;000100000000010000 -2 -1000000 -110110;000100000000001000 -21001 -1001 -100 -11;00000000000000000001000000000000;000100000000000100 -2000 -1000001010;000100000000000010 -20000 -100000101;00000000000000000000001000000000;00000000000000000000000100000000;000100000000000001 -200000000 -11001;00010000000000000000000000 -100110;00000000000000000000000000100000;00000000000000000000000000010000;00000000000000000000000000001000;00000000000000000000000000000100;00000000000000000000000000000010;00000000000000000000000000000001;]].Here, the blocks are sequenced as ,X,Y,S,Z,U,T,R, with irreps within each block arranged in the order A,B_1,B_2,B_3.The basis vectors b'^(0)_20, …, b'^(0)_32 span E_2^0,0≅^13. 
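Before evaluating the bilinear form on this basis, we note that the quadratic-refinement identity q(x+y) = q(x) + q(y) + b(x,y) established above is easy to verify numerically. A minimal sketch (assuming NumPy; b below is a random symmetric bilinear form on L = ℤ^6 rather than one arising from a concrete model):

```python
import numpy as np

N = 6
rng = np.random.default_rng(0)
B = rng.integers(0, 2, size=(N, N))
B = (B + B.T) % 2                          # symmetric matrix b(e_i, e_j) mod 2

def b(x, y):
    return int(x @ B @ y) % 2

def q(x):                                  # floor-function construction above
    diag = sum((x[i] // 2) * B[i, i] for i in range(N))
    cross = sum(x[i] * B[i, j] * x[j] for i in range(N) for j in range(i + 1, N))
    return int(diag + cross) % 2

for _ in range(1000):                      # check q(x+y) = q(x) + q(y) + b(x,y)
    x = rng.integers(-5, 6, size=N)
    y = rng.integers(-5, 6, size=N)
    assert q(x + y) == (q(x) + q(y) + b(x, y)) % 2
print("quadratic refinement identity verified")
```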
The bilinear form δν: E_2^0,0× E_2^0,0→_2 defined in (<ref>) is[δν(b'^(0)_i,b'^(0)_j)]_20≤ i,j ≤ 32 =[ [ 0 0 0 0 1 1 1 1 1 1 0 0 0; 0 0 0 0 1 0 1 0 0 0 0 1 1; 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0; 1 1 0 0 0 0 1 1 1 0 0 1 0; 1 0 0 0 0 0 1 0 0 1 0 0 1; 1 1 0 0 1 1 0 0 1 0 0 0 1; 1 0 0 0 1 0 0 0 0 1 0 1 0; 1 0 0 0 1 0 1 0 0 1 0 1 1; 1 0 0 0 0 1 0 1 1 0 0 1 1; 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 1 0 0 1 0 0 1 1 1 0 0 1; 0 1 0 0 0 1 1 0 1 1 0 1 0; ]].We confirmed that δν is symmetric.A quadratic refinement in the basis of E_2^0,0 is given by q(n') = ∑_k,l=20^32 n'_i δν(b'^(0)_i,b'^(0)_j) n'_j + ∑_i=20^32 c'_i n'_i n' = ∑_i=20^32 n'_i b'^(0)_i ∈ E_2^0,0.The coefficients c'_20,… c'_32 are fixed to satisfy (<ref>).There are 8 Wyckoff positions and 4 irreps β∈{A, B_1, B_2, B_3} for each Wyckoff position, leading to 32 column vectors n'(a__0^β) = ∑_i=20^32 n'_i(a__0^β) b'^(0)_i ∈ E_2^0,0, ( n'(a__0^β) )_x_0,β= [ [ 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0; 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0; 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0; 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0; 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0; 0 1 0 0 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 0 0 0; 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0; 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0; 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0; 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 0 0 0; 0 1 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0; 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0; 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1; ]]We find that the equations q(n'(a^β_x_0)) ≡ 0 for all Wyckoff positions x_0 and all irreps β have a solution (c'_i)_i=20,…,32=(0,1,0,0,1,0,0,0,1,0,0,0,0).This gives the quadratic refinement on the basis of E_2^0,0.The basis transformation n'_i = ∑_j=1^32 [(V^(0))^-1]_ij n_j gives the expression (<ref>). | http://arxiv.org/abs/2311.15814v1 | {
"authors": [
"Seishiro Ono",
"Ken Shiozaki"
],
"categories": [
"cond-mat.mes-hall",
"cond-mat.mtrl-sci",
"cond-mat.supr-con"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231127134001",
"title": "Towards complete characterization of topological insulators and superconductors: A systematic construction of topological invariants based on Atiyah-Hirzebruch spectral sequence"
} |
© 2023 ELSEVIER. Personal use of this material is permitted. Permission from ELSEVIER must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, collecting newly collected works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This work has been submitted to ELSEVIER for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

A systematic study comparing hyperparameter optimization engines on tabular data

Balázs Kégl

Noah's Ark Lab, Huawei Paris ([email protected])

January 14, 2024

================================================================================

This article introduces a modular robust subsystem-based adaptive (RSBA) control design with a new stability analysis for a class of uncertain interconnected systems with unknown modeling errors and interactions. First, we propose a nonlinear state observer for this class of systems that ensures uniformly exponential convergence of the estimation error by utilizing a new adaptive term, extending the conventional continuous Luenberger concept. Second, we introduce a novel adaptive subsystem-based control strategy for trajectory tracking, which incorporates a term called the 'stability connector,' designed to capture dynamic interactions among subsystems during stability analysis while preventing excessive complexity as the system order increases. This represents the first instance of this term enabling modular control with uniformly exponential convergence rates while effectively handling unknown non-triangular nonlinearities. In addition to rigorous theoretical proofs based on Lyapunov theory, two complex systems are explored in simulation to confirm the merits of the suggested control schemes.

Keywords: Subsystem-based control, adaptive control, uniformly exponential stability, modular control, nonlinear control.

§ INTRODUCTION

Observers that rely exclusively on a system's output, input, or existing knowledge of its structure, or on a combination thereof, can be a viable alternative for reducing the costs and noise associated with sensor usage <cit.>. At the same time, systems in practice are susceptible to undesired influences arising from various sources, such as external loads, fluctuations in temperature, humidity, noise, vibrations, the power supply, imprecise measurements, or incomplete understanding of the behavior of specific components, which highlights the crucial importance of designing robust control structures to avoid serious consequences <cit.>. Hence, Jiang et al. <cit.>, for example, present an extended-state observer incorporating an adaptive bandwidth to enhance dynamic performance. Likewise, in a study conducted by Guo et al. <cit.>, an extended-state observer with backstepping was introduced for an electro-hydraulic system. Further, Din et al. <cit.> proposed a novel approach to designing nonlinear observers for estimating unknown disturbances. While all the mentioned papers achieve asymptotic convergence for the observer, which has become increasingly common recently <cit.>, they do not provide information about the speed of convergence. Furthermore, recent studies have not adequately addressed the nonlinear exponential state observer.
Hence, by providing reliable state information, the exponential observer serves as a fundamental basis for a robust controller, whose task is to regulate and manipulate the behavior of a dynamic system to achieve desired outcomes in the presence of uncertainties and disturbances <cit.>. In addition, employing methods that decompose a complex system into subsystems can aid in developing a local control approach and in evaluating stability at the subsystem level, which facilitates control design and analysis <cit.>. However, in most research findings, the functions bounding modeling errors and interactions are required to meet a triangular condition <cit.>. For instance, all control strategies presented in <cit.> were designed for systems with triangular uncertainty structures. This form of the system does not hold for all practical applications, and real-world interconnected systems may not satisfy triangular structural uncertainties, given that the bounding functions can depend on all state variables <cit.>. Likewise, Koivumaki et al. <cit.> employed a generic term (the stability connector) to oversee interactions among subsystems with triangular structural uncertainties. This generic term facilitates modular control and prevents excessive complexity as the system order increases, and it plays a vital role in defining the virtual stability of subsystems, which in turn leads to the stability of the entire system <cit.>. Conversely, Cai et al. <cit.> proposed a decentralized adaptive control scheme based on backstepping techniques for a class of interconnected systems. Their approach accommodated non-triangular uncertainties to achieve global boundedness, though it did not account for time-varying disturbances. Likewise, <cit.> proposed an adaptive neural control for non-triangular nonlinear systems with input delays, demonstrating that, under bounded initial conditions, all signals in the closed-loop system remain bounded. In this paper, we propose a novel robust subsystem-based adaptive (RSBA) control for a class of uncertain interconnected systems with unknown modeling errors and interactions. Our first objective is to extend the conventional asymptotic continuous Luenberger observer <cit.> by incorporating an additional adaptive term to achieve uniformly exponential error reduction. While the observer strategies in <cit.> are designed for triangular systems, the exponential observer developed in this paper extends this concept to a non-triangular system structure. Next, we introduce a novel robust subsystem-based control by developing a framework that builds on <cit.>, which introduced an asymptotic 'stability connector' specifically designed for triangular systems without unknown modeling errors. Our framework addresses the more demanding setting of adjacent subsystems with unknown modeling errors and interactions that are not inherently triangular. Interestingly, this stability analysis yields uniformly exponential stability for the entire system. The key findings of this research are as follows: (1) Expanding upon the work of <cit.>, a nonlinear state observer is proposed that achieves uniformly exponential convergence for non-triangular systems. (2) Inspired by the work of Koivumaki et al. <cit.>, the novel RSBA control brings a new stability analysis by utilizing a "stability connector" that effectively cancels out the instability among subsystems, leading to an exponentially stable analysis of the entire system.
(3) To the best of our current understanding, no results based on a subsystem-based approach have been reported for such interconnected systems. After this section, we delve into the necessary details of the proposed control design in Section 2. Section 3 covers the design of the nonlinear state observer and the suggested controller structure, analyzes the exponential stability of the robust observer and of the whole system using the new generic term, and closes with a step-by-step summary of the RSBA control algorithm. Subsequently, we conduct simulations on two distinct systems with non-triangular uncertainties in Section 4: an uncertain seventh-order system and an electro-mechanical linear actuator (EMLA) system decomposed into four subsystems, illustrating the effectiveness of our proposed approach. Finally, the paper is concluded in Section 5.

§ PROBLEM AND PRELIMINARIES

To present the control methodology, consider the nonlinear n-order system below:

ẋ_1(t) = A_1 x_2(t) + g_1(x) + F_1(x) + d_1(t)
ẋ_i(t) = A_i x_i+1(t) + g_i(x) + F_i(x) + d_i(t)
ẋ_n(t) = A_n u(t) + g_n(x) + F_n(x) + d_n(t)

where i ∈ [1,2,3,...,n], and n ∈ ℕ represents the number of states in the system denoted by x=[x_1, x_2,..., x_n]^T. The control input is represented by u(t): ℝ→ℝ, A_i ∈ ℝ is any non-zero coefficient, g_i(x)=ρ^⊤_i g̃_i(x) is a known non-triangular functional term originating from a model of the system, F_i(x)=ρ̅^⊤_i F̃_i(x) represents unknown non-triangular uncertainties relying on all state variables and resulting from incomplete knowledge of system parameters or modeling inaccuracy, and d_i: ℝ→ℝ is a time-variant disturbance with uncertain magnitude and timing. For any k ∈ ℕ, ρ_i ∈ ℝ^k and ρ̅_i ∈ ℝ^k are known and unknown constant parameter vectors, respectively, and g̃_i: ℝ^n → ℝ^k and F̃_i: ℝ^n → ℝ^k are known and unknown state-dependent functions, respectively. Therefore, by considering the output of the system, we can also write the system as follows:

ẋ(t)=Ax(t)+Bu(t)+g(x)+K(x(t),t)
y(t)=Cx(t)

where A ∈ ℝ^n × n, B ∈ ℝ^n, and C ∈ ℝ^1 × n, with the pair (A, C) observable, and g(x)=[g_1,...,g_n]^⊤ comprises the known modeling nonlinearities. K(.): ℝ^n ×ℝ→ℝ^n represents unknown uncertainties or nonlinearities; it is sufficiently smooth and continuous and comprises the systematic uncertainties F_i(.) and the external disturbances d_i. The model system, when subjected to external disturbances and internal uncertainties, generates the output signal y(t): ℝ→ℝ. Thus, it is essential to consider the following information before presenting the design of the suggested robust state observer and control mechanism.

Assumption 1 <cit.> Assume that the pair (A, C), as provided in (<ref>), is observable. Then, a feedback gain matrix α ∈ ℝ^n can be found such that A̅ = A - α C is a Hurwitz matrix.

Assumption 2 <cit.> Following Assumption 1, and because K(x(t), t) is bounded, we can choose any positive definite q ∈ ℝ^n × n in the following equation:

-q = p A̅ + A̅^⊤ p

to obtain a positive definite p ∈ ℝ^n × n such that for all (y(t), t) ∈ ℝ×ℝ, the subsequent condition is met:

||pK(x(t), t)|| ≤ ||C^⊤|| η^* H(y(t), t)

where ||·|| denotes the Euclidean norm, η^* ∈ ℝ^+ is an unknown positive constant, and H(·): ℝ×ℝ→ℝ^+ is a known continuous positive function.

Definition 1 <cit.> For t ≥ t_0, the system tracking error x_e is uniformly exponentially stable if the following condition is satisfied:

||x_e|| = ||x - x_d|| ≤ c̅ e^-α (t-t_0) ||x(t_0)|| + μ̃

where c̅, μ̃, and α ∈ ℝ^+ are positive constants, x(t_0) is any initial state vector, and x_d is the reference trajectory.
More precisely, x_e is uniformly exponentially stabilized within a defined region g(τ), as follows:g(τ):={x_e |x_e ≤τ := μ̃}§ ROBUST SUBSYSTEM-BASED ADAPTIVE CONTROL §.§ Robust exponential state observerAfter defining the error of state observation x_eo=x-x̂, we can put forward an adaptive term in the following manner for the conventional Luenberger observer: ẋ̂̇= A x̂+B u+g(x,t)+α(y-ŷ)+p^-1 C^⊤ fŷ=Cx̂, y̅=y-ŷ, y̅=Cx_eoẋ_eo= (A-α C) x_eo+K(x(t),t)-p^-1 C^⊤ fwhere x̂: ℝ→ℝ^n represents the estimated states. Furthermore, a finite and continuous function f(·):ℝ×ℝ→ℝ can be proposed as follows:f(y̅, η̂(t), t)=η̂^2.H(y, t)^2 .y̅/η̂H(y, t)y̅+m(t)where m(t):ℝ→ℝ^+ is a positive and continuous function constrained by the following condition:lim _t →∞∫_t_0^t m(T) d T ≤m̅<∞H(y,t) follows (<ref>). The function η̂:ℝ→ℝ represents the adaptation law, as follows:η̇̂̇=-m ℓη̂+ℓ H(y, t)y̅where ℓ∈ℝ^+. If we define an unknown positive parameter, denoted by η^* in (<ref>), and define he observer adaptation error system η̅=η̂-η^*, we obtain:η̇̅̇= -m ℓη̅+ℓ H(y, t) y̅ -m ℓη^*Remark 1As η̂(t_0) > 0, according to (<ref>), we can say η̂(t)>0. In addition, as with (<ref>), we can say that f(y̅, η̂(t), t) can be: f(y̅, η̂(t), t)≤η̂ H(y(t), t) §.§ Modular control designOur next step is to propose a controller structure that is compatible with the aforementioned observer. We can define the tracking error x_e_i∈ℝ as follows:x_e_i= x̂_i-x_id,i=1,...,nNow, we transform the system into the form, as demonstrated in <cit.>.P_i =x_e_i - a_i-1,P_0,a_0=0Let a_0=0. a_i-1:ℝ→ℝ serves as a differentiable virtual control input and we define it for i=1,...,n-1, as shown:a_i= -1/2 A_i(β_i+ζ_iθ̂_i)P_i-A_i-1/A_iP_i-1-x_(i+1)d-1/A_i g_iwhere A_0=0, and the adaptation parameter θ̂_i:ℝ→ℝ is defined, as follows:θ̇̂̇_i = -δ_iσ_iθ̂_i+1/2ζ_iδ_i|P_i|^2 ,i=1,...,nwhere δ_i and σ_i ∈ℝ^+, and let θ̂_i(0) ≥ 0 be an initial condition for the adaptive system. Following Remark 1, we can have θ̂_i(t) > 0. Then, the subsequent function is introduced:F_i= F_i(x_1,...,x_n)-(∑_k=1^n∂ a_i-1/∂ x_kd x_k/ d t+∂ a_i-1/∂θ̂_i-1dθ̂_i-1/ d t+∂ a_i-1/∂x_iddx_id/ d t)The smooth x_id∈ℝ is the reference trajectory for the subsystem i. Assumption 3 Let i=1,...,n. Assume F̅_i, d_i and ẋ_id are bounded. Then, we can assume a positive smooth function r_i: ℝ→ℝ^+, along with positive constants Λ_i, d_max(i), and Ω_i ∈ℝ^+ exist, which may all be unknown, such that:|F_i |≤Λ_i r_i , | d_i |≤d_max(i), |ẋ_id|≤Ω_iNow, by differentiating (<ref>), inserting (<ref>) and (<ref>) into it, and considering (<ref>), we will have: Ṗ_̇i̇=A_i P_i+1+A_i x_(i+1)d+g_i+F_i+A_i a_i+d_i-ẋ_id,i=1,...,n-1Ṗ_̇ṅ=A_n u(t)+g_n+F_n - .x_nd+d_nSimilar to Eq. (<ref>), we can propose the actual control input for (<ref>), as follows: u(t)= -1/2 A_n(β_n+ζ_nθ̂_n)P_n-A_n-1/A_nP_n-1-1/A_n g_nwhere ζ_i, and β_i are positive constants. We define the adaptation error θ̃_i=θ̂_i-θ^*_i, and we can obtain:θ̇̃̇_i=-δ_iσ_iθ̃_i+1/2ζ_iδ_i|P_i|^2-δ_iσ_iθ^*_i where θ^*_i ∈ℝ^+ is an unknown positive constant to tune the adaptation law and define, as follows:ζ_iθ_i^* =μ_i Λ_i^2+ ν_i d_max(i)^2 +ψ_i(Ω_i)^2where μ_i, ν_i, ψ_i and Ω_i∈ℝ^+ are unknown constants. Figure <ref> demonstrates how the proposed controller operates.§.§ Stability analysisTheorem 1Consider the uncertain interconnected systems outlined in (<ref>) and (<ref>), employing the modular RSBA control specified in (<ref>) and (<ref>) in conjunction with adaptive laws in (<ref>) and (<ref>). 
For any positive control gains satisfying (<ref>), (<ref>), (<ref>), and (<ref>), the trajectory tracking errors (<ref>), and the state estimation error (<ref>) converge to zero and the origin of the closed-loop error system is uniformly exponential stable.ProofWe introduce a Lyapunov function for the proposed observer in the following manner:V_o = x_eo^⊤ p x_eo + 1/ℓη̅^2 After taking the derivative of the Lyapunov function and inserting (<ref>), we have:V̇_o = x_eo^⊤[A̅^⊤ p+p A̅] x_eo+2 ℓ^-1η̅η̇̅̇ +2 x_eo^⊤ p[K-p^-1 C^⊤ f] Furthermore, (<ref>) implies that for all t ≥ t_0:V̇_o≤ -x_eo^⊤ q x_eo+2 η^* Hy̅ -2 x_eo^⊤ C^⊤ f+2 ℓ^-1η̅η̇̅̇ Substituting Eqs. (<ref>) and (<ref>) into (<ref>), we obtain:V̇_o ≤ -x̅^⊤ q x̅+2 η^* Hy̅ - 2 η̂^2 H^2y̅^2/η̂ Hy̅+m+2 η̅ Hy̅-2 mη̅^2 -2 m η̅η^* =-x̅^⊤ q x̅+ 2 η̂ Hy̅· m/η̂ Hy̅+m -2 m η̅^2-2 m η̅η^*=-x̅^⊤ q x̅+ 2m (η̂ Hy̅ + m) - 2m^2/η̂ Hy̅+m-2 m η̅^2-2 m η̅η^* By analyzing Eq. (<ref>) and eliminating the negative part from the right-hand side of (<ref>), we have the option to deduce that:V̇_o≤ -x̅^⊤ q x̅+2 m-2 m η̅^2-2 m η̅η^* Then, according to Young's inequality, and considering -2η̅η^* ≤η̅^2+ η^*^2, we obtain:V̇_o≤ -x̅^⊤ q x̅ +2 m-2 m η̅^2+ m (η̅^2+ η^*^2) =-x̅^⊤ q x̅ - m η̅^2+ m (2+ η^*^2) Hence, by knowing that p and q are positive definite matrices, we can assume there is a constant L∈ℝ in which:-x̅^⊤ q x̅≤ -x̅^⊤ p x̅ + LThen:V̇_o≤-x̅^⊤ p x̅ - m η̅^2+ m (2+ η^*^2)+LBy considering μ̃_o=(2+ η^*^2) and according to Young's inequality, we can reach:V̇_o ≤-x̅^⊤ p x̅ - m η̅^2+ L +μ_00μ̃_o ^2+ μ_00^-1 m^2where μ_00 may be any positive constant. By defining the function μ̃:ℝ→ℝ, as follows:μ̃ ={[ μ_00μ̃_o ^2+L ifL>0; μ_00μ̃_o ^2 ifL ≤ 0; ].Therefore, μ̃ is always positive. From (<ref>) and (<ref>), we have (<ref>), as follows:V̇_o ≤-ϕ_o V_o + μ_00^-1 m^2 + μ̃where ϕ_o=min[1, mℓ] and m=inf(m). Now, a Lyapunov function for the first subsystem is proposed, as shown:V_1 =1/2 [P_1^2+δ^-1_1θ̃_1^2] After differentiating V_1 and inserting (<ref>), we obtain:V̇_̇1̇ =A_1 P_1 P_2+A_1 P_1 x_2d+P_1 g_1+ P_1F_1+ P_1 d_1+ A_1 P_1a_1-P_1 ẋ_1d+δ_1^-1θ̃_1θ̇̃̇_1Inspired by a fundamental principle in the virtual stability analysis (VDC), known as virtual power flow (VPF) <cit.>, we employ an associated concept referred to as a stability connector between subsystems, as proposed by <cit.>. The stability connector, for uncertain interconnected systems with unknown modeling errors and interactions provided in (<ref>), can be defined, as follows:S_i=A_i P_iP_i+1,i=1,...,n-1This term is a destabilizing dynamic interaction among subsystems in the process of stability. However, we will demonstrate that by utilizing this concept, we can introduce a new stability analysis that effectively offsets the instability of this term, leading to an exponentially stable analysis. 
By considering (<ref>), (<ref>), and using (<ref>), we have:V̇_̇1̇≤S_1+A_1 P_1 x_2d+P_1 g_1+| P_1|Λ_1 r_1+| P_1| d_max(1)+ | P_1 |Ω_1+A_1 P_1a_1+δ_1^-1θ̃_1θ̇̃̇_1 By considering positive constants ψ_1, ν_1 and μ_1, and following Young's inequality, we have: V̇_̇1̇≤S_1+A_1 P_1 x_2d+P_1 g_1+ 1/2| P_1 |^2 μ_1 Λ_1 ^2 + 1/2μ_1^-1 r_1^2 +A_1 P_1a_1+ 1/2| P_1 | ^2 ν_1 d_max(1)^2 + 1/2ν_1^-1+1/2ψ_1 Ω^2_1 P^2_1+1/2ψ^-1_1+δ^-1θ̃_1θ̇̃̇_1 By considering the descriptionof θ^*_1 in (<ref>), as well as the descriptions in (<ref>) and (<ref>), we obtain:V̇_̇1̇≤S_1 + 1/2ζ_1 θ_1^* | P_1 |^2 + 1/2μ_1^-1 r_1^2+ 1/2ν_1^-1-1/2β_1P_1^2+1/2ψ^-1_1-1/2ζ_1θ̂_1P_1^2-σ_1θ̃_1^2+1/2ζ_1|P_1|^2θ̃_1-σ_1θ^*_1θ̃_1 Because θ̃_1=θ̂_1-θ_1^*:V̇_̇1̇≤S_1 + 1/2μ_1^-1 r_1^2 + 1/2ν_1^-1-1/2β_1P_1^2-σ_1θ̃_1^2+1/2ψ^-1_1-σ_1θ^*_1θ̃_1 After dividing σ_1θ̃_1^2 into 1/2σ_1θ̃_1^2+1/2σ_1θ̃_1^2 and considering (<ref>), we can arrive at:V̇_̇1̇≤-ϕ_1 V_1 + S_1 + 1/2μ_1^-1 r_1^2 + 1/2ν_1^-1+1/2σ_1θ_1^*^2+1/2ψ^-1_1 where:ϕ_1 = min [β_1,δ_1σ_1]Just as in (<ref>), we present the same scenario for the j-th subsystem for j=2,...,n-1 subsystems by defining the Lyapunov function as follows:V_j = 1/2[P_j^2+δ_j^-1θ̃_j^2] By differentiating V_j and inserting (<ref>), we have:V̇_̇j̇=P_j[A_j P_j+1+A_j x_(j+1)d+g_j+F_j+ A_ja_j-ẋ_jd+d_j]+δ_j^-1θ̃_jθ̇̃̇_jLikewise, we continue by considering (<ref>) and the stability connectors S_j, S_j-1 from (<ref>), inserted into (<ref>), as follows:V̇_̇j̇=S_j+P_j F_j -1/2 P_j (β_j+ζ_jθ̂_j)P_j-S_j-1+P_j d_j-P_jẋ_jd+δ_j^-1θ̃_jθ̇̃̇_jSimilar to (<ref>) to (<ref>):V̇_̇j̇≤ -ϕ_j V_j + S_j- S_j-1+ 1/2μ_j^-1 r_j^2 + 1/2ν_j^-1+1/2σ_jθ_j^*^2+1/2ψ^-1_jwhere:ϕ_j = min [β_j,δ_jσ_j]Likewise, we can establish an analogous Lyapunov function for the final subsystem, as shown:V_n = 1/2[P_n^2+δ^-1θ̃_n^2] By differentiating V_n and inserting (<ref>), we have:V̇_̇ṅ= P_n [A_n u(t)+g_n+F_n - .x_nd+d_n]+δ_n^-1θ̃_nθ̇̃̇_nLikewise, we continue by considering the control input provided in (<ref>) and the stability connectors S_n from (<ref>), inserted into (<ref>), as follows:V̇_̇ṅ≤-ϕ_n V_n -S_n-1 + 1/2μ_n^-1 r_n^2 + 1/2ν_n^-1+1/2ψ_n^-1+1/2σ_nθ_n^*^2where:ϕ_n = min [β_n,δ_nσ_n]Now, we introduce the Lyapunov function for the entire RSBA control system, as follows:V =V_o+V_1+...+V_j+...+V_nAfter the derivative:V̇ =V̇_o+V̇_1+...+V̇_j+...+V̇_nand after inserting (<ref>), (<ref>), (<ref>), and (<ref>) into (<ref>), we obtain:V̇≤-∑_i=o^n ϕ_i V_i + ∑_i=1^n-1 [ 0-S_i + S_i ] + 1/2∑_i=1^n μ_i^-1 r_i^2+ 1/2∑_i=1^nν_i^-1+ 1/2∑_i=1^nψ_i^-1+1/2∑_i=1^nσ_iθ_i^*^2+ μ_00^-1 m^2 + μ̃As we can observe in (<ref>), based on the concept defined in (<ref>), the non-triangular unstable term associated with dynamic interactions among subsystems has been effectively offset in this step. Now, we can transform the equations provided in (<ref>) and (<ref>), as follows:V= 1/2𝐏^⊤λ𝐏 + 1/2θ̃^⊤Δ^-1θ̃where:𝐏 = [ x_eo;P_1;⋮;P_n;], λ=[ 2p00…0;010…0;⋮⋮⋮⋮⋮;0…001;],θ̃ = [ η̅; θ̃_1;⋮; θ̃_n;], Δ^-1 = [ 2 ℓ_1^-100…0;0 δ_1^-10…0;⋮⋮⋮⋮⋮;0…00 δ_n^-1;]Where 𝐏:ℝ^n×ℝ^n →ℝ^2n, λ:ℝ^n× n×ℝ^n × n→ℝ^2n × 2n, θ̃:ℝ×ℝ^n →ℝ^n+1, and Δ:ℝ×ℝ^n × n→ℝ^(n+1) × (n+1). 
From (<ref>), we can define:μ̅^-1=max(2/nμ^-1_00 ,μ̃^-1_1,..., μ̃^-1_n) M_i^2=m^2 + r_i^2 μ̅_total=μ+ 1/2∑_i=1^nν_i^-1+ 1/2∑_i=1^nψ_i^-1+1/2∑_i=1^nσ_iθ_i^*^2ϕ_total=min[ϕ_o, ϕ_1,..., ϕ_n]Then:V̇≤-ϕ_total V + 1/2∑_i=1^n μ̅^-1 M_i^2+μ̅_totalThus, we can solve (<ref>) as follows:V ≤ V(t_0) e^-{ϕ_total(t-t_0)}+ 1/2μ̅^-1∑_i=1^n∫_t_0^t e^{-ϕ_total(t-T)} M_i^2(T)dT +μ̅_total∫_t_0^t e^{-ϕ_total(t-T)} dTBecause e^-ϕ_total(t-t_0) is always decreasing, we can interpret (<ref>) as follows:V ≤ V(t_0) e^-{ϕ_total(t-t_0)}+ 1/2μ̅^-1∑_i=1^n∫_t_0^t e^{-ϕ_total(t-T)} M_i^2(T)dT + μ̅_totalϕ_total^-1Based on (<ref>) and according to (<ref>), and defining λ_min∈ℝ as the minimum eigenvalue of matrix λ, we can say: 𝐏^2 ≤ 2/λ_min V(t_0) e^-{ϕ_total(t-t_0)}+1/λ_minμ̅^-1∑_i=1^n∫_t_0^t e^{-ϕ_total(t-T)} M_i^2dT + 2/λ_minμ̅_totalϕ_total^-1Because μ̅ may be any positive constant, we can express that:∑_i=1^n1/λ_minμ̅ϕ_total<1Therefore, we can presume a continuous function, as follows:Z(ι)=∑_i=1^nλ_min^-1μ̅^-1/ϕ_total-ι>0, ι∈ [0,ϕ_total)Note that the initial quantity in (<ref>) is equivalent to (<ref>). It is now possible to assert that there exists a positive quantity ι̅∈ι, such that:0 ≤Z̅=Z(ι̅) <1By multiplying e^ι̅(t-t_0) to (<ref>), we reach:𝐏^2 e^ι̅(t-t_0)≤2/λ_min V(t_0) e^-(ϕ_total-ι̅)(t-t_0)+λ_min^-1μ̅^-1∑_i=1^n∫_t_0^t e^-ϕ_o(t-T)+ι̅(t-t_0) M_i^2dT+2/λ_minμ̅_totalϕ_total^-1 e^ι̅(t-t_0)Because 0 ≤ι̅ <ϕ_total, we can eliminate the decreasing element e^-(ϕ_total-ι̅)(t-t_0) from the right-hand side of (<ref>):𝐏^2 e^ι̅(t-t_0)≤2/λ_minV(t_0)+λ_min^-1μ̅^-1∑_i=1^n∫_t_0^t e^-(ϕ_total-ι̅)(t-T)M_i^2 e^ι̅(T-t_0) dT+2/λ_minμ̅_totalϕ_total^-1 e^ι̅(t-t_0)Using this approach, we can describe functions E_0 and E_i, which are non-declining and continuous:E_0 =sup _e ∈(t-t_0) [𝐏^2 e^ι̅(e-t_0))] E_i=sup _e ∈(t-t_0) [(M_i^2)e^ι̅(e-t_0)]Next, by considering (<ref>) and (<ref>), and conducting some straightforward mathematical manipulations while removing the decreasing term, we obtain:𝐏^2e^ι̅(t-t_0)≤ 2/λ_minV(t_0)+∑_i=1^nλ_min^-1μ̅^-1/ϕ_total-ι̅ E_i +2/λ_minμ̅_totalϕ_total^-1 e^ι̅(t-t_0)As E_1 does not exhibit a declining pattern, the left side of (<ref>) will not exhibit a reduction. Therefore, with respect to the definition of E_0 in (<ref>), we can state that:E_0 ≤ 2/λ_min V(t_0)+ ∑_i=1^nλ_min^-1μ̅^-1/ϕ_total-ι̅ E_i+2/λ_minμ̅_totalϕ_total^-1 e^ι̅(t-t_0)Defining:E=max _i (E_i),i=0,...,nwe can have:E_0 ≤ 2/λ_min V(t_0)+Z̅ E+2/λ_minμ̅_totalϕ_total^-1 e^ι̅(t-t_0)Because 0<E_0≤ E and both E_0 and E are non-declining, let us introduce *Z:*Z>Z̅, 0<*Z<1 Z̅ E ≤*Z E_0Eq. (<ref>) is justified, as we can select μ̅ such that Z̅ becomes sufficiently small in (<ref>). 
When we incorporate (<ref>) into (<ref>), we arrive at:E_0 ≤ 2/λ_min V(t_0)+*Z E_0(t)+2/λ_minμ̅_totalϕ_total^-1 e^ι̅(t-t_0)Afterward, we obtain:E_0 ≤2/λ_minV(t_0)+2/λ_minμ̅_totalϕ_total^-1 e^ι̅(t-t_0)/1-*ZConcerning (<ref>), we obtain:𝐏^2 ≤2/λ_minV(t_0) e^-ι̅(t-t_0)+2/λ_minμ̅_totalϕ_total^-1/1-*ZIt is significant that: sup _t ∈[t_0, ∞](2/λ_minV(t_0) e^-ι̅(t-t_0)/1-*Z)≤2/λ_minV(t_0)/1-*ZThus, based on Definition 1, it is obvious from (<ref>) that, along with the adaptive algorithms provided in Eqs.(<ref>), and (<ref>) and the control input (<ref>), 𝐏 including the state estimation error in (<ref>), and the tracking error in (<ref>) reach a defined region g_total(τ̅_0) in uniformly exponential convergence, such that:g_total(τ̅_0):={𝐏≤τ̅_0 := √(2/λ_minμ̅_totalϕ_total^-1/1-*Z)}Therefore, we can conclude the demonstration of Theorem 1.Remark 2As μ̅_total, consisting of μ_00, μ_1,…, and μ_n, and λ_min, resulted from p, can be chosen, it is possible to decrease the radius of the ball (<ref>) as much as will satisfy (<ref>), (<ref>), and (<ref>). The deployment procedures for the RSBA control are described in Algorithm, which provides a summary and a step-by-step guide for implementing the RSBA control algorithm into n-order systems. § NUMERICAL SIMULATIONIn this section, we explore the effectiveness of our proposed algorithm through two distinct cases. Initially, we will analyze a complex seven-order system to illustrate the numerical reliability of the RSBA control approach confronted with diverse unknown modeling errors and interactions. As discussed in Section 1, many practical applications are characterized by non-triangular structural uncertainties. Therefore, in the second case, we explore an EMLA system equipped with a permanent magnet synchronous motor (PMSM), a practical example that includes non-triangular uncertainties and load disturbances affecting the system.§.§ Seventh-order systemIn the first case study, we build upon the study example presented by Yip et al. <cit.> and Koivunmaki et al. <cit.>, who explored the potential of their proposed control approach in three-order and five-order systems with triangular forms of uncertainties. Based on the modeled dynamics, they assume all design uncertainties are known, except for some coefficient constants. In addition, they ignored the presence of external disturbances, which are only time-dependent. In this paper, we investigate a more complicated system by increasing the order to seven (n=7), introducing additional unknown time-variant external disturbances d_i, and considering non-triangular uncertainties. Now, we investigate the potential of our proposed control method for the mentioned system, as described below:ẋ_1=5x^3_1 x_2 + cos(0.2t)+x_2 ẋ_2=5(x^2_5+x^2_1+x^2_4) + sin(1.5π t)+x_3 ẋ_3=0.5 x_1 x_2 x_3-x_4 ẋ_4=10(x_1^2 x_4-x_2^2 x_3)e^-0.1t-2 x_5 + 2 e^-0.1tẋ_5=5 sin ^3(x_2) sin(x_7)+x_6-5x_5 + 20 ẋ_6= arctan(x_6)e^-3t+ 3 x^2_1 - x_7 ẋ_7=0.5 u + 20x_3 arctan(x_4)e^-2t + rand(20)-x_2where rand(20)∈ℝ is a function producing a random number between 0 and 20. Following step 2 of the algorithm, we assume:α= [1.32, 0.57, 1.13, -1.01, 0.75, 0.68, -0.93]^TThen, after calculating the corresponding matrix A̅, we can readily choose positive definite matrices by the MATLAB code lyap to identify matrices q and p, as step 11 of the algorithm. The function H(y(t), t) is defined as follows:H(y(t), t)=20 cos(y) ^4 +20sin(y) ^4Then, we choose: ℓ=1,m = 200 e^-0.001t. 
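For reference, the Lyapunov step carried out here with the MATLAB command lyap has a direct Python counterpart. A minimal sketch (assuming SciPy; the placeholder A̅ below is a random Hurwitz matrix standing in for the one obtained from α above):

```python
# Step 11: given a Hurwitz Abar = A - alpha @ C and a chosen positive definite
# q, solve  Abar.T @ p + p @ Abar = -q  for the positive definite matrix p.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 7
Abar = -np.eye(n) + 0.1 * np.random.randn(n, n)   # placeholder Hurwitz matrix
q = np.eye(n)                                     # any positive definite choice
p = solve_continuous_lyapunov(Abar.T, -q)
print(np.allclose(Abar.T @ p + p @ Abar, -q))     # True
print(np.all(np.linalg.eigvalsh(p) > 0))          # p is positive definite
```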
For the simulation, we will assign the starting conditions as follows:

x̂_0=[-1.2, -1.1, -0.8, -0.5, -0.3, -0.2, -1]^⊤, η̂(0)=1

Regarding the tracking error in step 19, we consider the reference trajectory x_1d as follows:

x_1d = tanh(t^3) sin (2 π t) if t ≤ 5, and x_1d = tanh(t^3) sin (2 π t)[1-tanh(t-5)] otherwise.

Furthermore, the control design parameters for this case study are outlined in Table <ref>. An important point to note is that the sixth and seventh desired trajectories exhibit significant amplitudes at t = 5 s, as shown in Fig. <ref>. As Fig. <ref> indicates, the RSBA control can effectively perform trajectory tracking, even when confronted with unidentified nonlinearities. In summary, Table <ref> presents a structural comparison of the approach introduced in this paper and the approach outlined in <cit.>. According to this comparison, even though both studies proposed subsystem-based control methods and shared the same concept of dynamic interaction defined in Eq. (<ref>) to eliminate unstable terms, the RSBA control achieves exponential stability, a result stronger than that presented in <cit.>. Furthermore, in this paper, for the first case study, we considered a more complex system with a higher-order structure and a more intricate type of uncertainty (non-triangular), along with the presence of time-varying disturbances.

§.§ One-degree-of-freedom electro-mechanical linear actuator

In this section, we present an EMLA equipped with a PMSM as our practical system example with non-triangular nonlinearities, aiming to transform rotational motion into linear motion. The fundamental elements of an EMLA include a cylinder housing, attachments, a thrust tube, a ball screw or roller screw, a gearbox, and a motor (Fig. <ref>). The power transmission sequence of the EMLA commences with the motor, which generates torque and rotational speed from electrical power. The gearbox is responsible for decreasing the rotational speed while elevating the torque to the desired magnitude, whereas the screw shaft and nut assembly convert the rotational motion into linear motion. The velocity and force of this linear motion are proportional to the screw lead, and the control procedure is executed in a rotating d-q reference frame, obtained by applying the Park transformation to the three-phase (3-Ph) signals, as described in (<ref>) and (<ref>):

T = [[ cos(ω t) cos(ω t - 2 π/3) cos(ω t + 2 π/3); -sin(ω t) -sin(ω t - 2 π/3) -sin(ω t + 2 π/3); 1/2 1/2 1/2 ]]

[[ u_d; u_q; u_0 ]] = 2/3 T [[ u_a; u_b; u_c ]]

The electrical characterization of the PMSM in the d-q rotating reference frame is outlined in Eq. (<ref>), following Krishnan <cit.>:

d i_q/d t=1/L_q[u_q-R_s i_q- N_p ω_m i_d L_d - N_p ω_m Φ_PM]
d i_d/d t=1/L_d[u_d-R_s i_d+ N_p ω_m i_q L_q]
τ_m=3/2 N_p [i_q(i_d L_d+Φ_PM)-i_d i_q L_q]

where L_d and L_q are the inductances and u_d and u_q are the voltages on the d- and q-axes. In addition, ω_m, R_s, Φ_PM, and N_p are the mechanical rotational speed, phase resistance, permanent magnet flux, and number of rotor pole pairs, respectively. Regarding the mechanical components depicted in Fig.
<ref>, the torque generated by the PMSM is transmitted to the gearbox and screw mechanism, respectively, and to simplify the rotary to linear motion conversion, α_RL can be defined as (<ref>):α_RL = 2π/ρ l Applying the principles of Newton's law of motion, we can deduce the torque balance equations for the PMSM (τ_m) and thereby achieve a comprehensive performance of an EMLA system, as illustrated in (<ref>): {6 A_eq= α_RL(J_m+J_c+J_G B)+1/α_RLη_G B(M_BS) B_eq= α_RL(B_m)+1/α_RLη_GB(B_BS)C'= (1/k_τ 1 + 1/k_τ 2 + 1/ρ^2 k_τ 3 + α_RL^2/k_L)^-1C_eq= α_RL^2 C'D_eq= 1/α_RLη_G Bτ_m = A_eqẍ_L + B_eqẋ_L + C_eqx_L + D_eq F_L. where ρ and l are the gear ratio and lead of the screw. In addition, B_m and B_BS are the viscous friction of the electric motor and ball screw, accordingly. Further, J_m, J_c, and J_GB are the motor, coupling, and gearbox inertia, and m_BS and η_GB are the ball screw mass and gearbox efficiency, respectively. Furthermore, the parameters k_τ 1, k_τ 2, and k_τ 3 correspond to the stiffness attributes of the motor's shaft and coupling, gearbox, and screw shaft, respectively. Meanwhile, the parameter k_L denotes the collective linear stiffness encompassing the thrust bearing, ball screw, ball nut, and thrust tube components, and it can be calculated using (<ref>): k_L=(1/k_bearing +1/k_screw +1/k_nut +1/k_tube )^-1 In the context of the studied EMLA, the state vector x(t)∈ℝ^4 and input vector u(t)∈ℝ^3 for the state-space model of the studied EMLA can be defined as in (<ref>):{2x(t) = [x_L (t)ẋ_L (t)i_q(t)i_d(t)]^T u(t) = [i^*_q (t)u_q (t)u_d (t)]^T.Ultimately, we can define the state space vector for the given case, as expressed in Eq. (<ref>):{5ẋ_1= x_2 ẋ_2=1/A_eq[3/2 N_p(x_3 x_4 L_d+i^*_q Φ_PM-x_3 x_4 L_q) . -.B_eq x_2 - C_eq x_1 - D_eq F_L] ẋ_3=1/L_q[u_q - R_s x_3- N_pα_RL x_2 ( x_4 L_d + Φ_PM) ]ẋ_4=1/L_d[u_d - R_s x_4+ N_pα_RL x_2 x_3 L_q] . Finally, the pulse width modulation (PWM) block generated the corresponding rectangular waveforms used to control the power switch S1–S6 of the inverter. Table <ref> provides an overview of the essential features of the components within the EMLA system, including the motor, gearbox, and ball screw <cit.>:As such, the order of the system is four, and by considering the RSBA control algorithm from step 1 to step 17, we can assume α =[0.53, 1.83, -2.25, 0.86]^⊤ and C =[1, 0, 1, 1]. Note that it is common to consider reference trajectories of the EMLA as x_2d=ẋ_1d, x_3d=i^*_q, and x_4d=i^*_d=0, following <cit.>. As shown, the control signal of the second subsystem i^*_q is the desired value for the third state x_3=i_q. Therefore, after receiving the state values from the observer algorithm, we should adopt the transformation of the EMLA system and control algorithm from step 20. The control parameters for this case study are selected as β_i=600, ζ_i=0.001, δ_i=0.001, and σ_i=2000 for i=1,...,4. As illustrated in Figs. <ref> and <ref>, tracking errors of the linear position (x_1, in meters), velocity (x_2, in meters per second), and current states (x_3 and x_4), measured in ampere (A), have been attained in the face of non-triangular uncertainties and the same influence of external loads as the previous case study. For this case study, we are conducting a comparison between this work and similar control studies on PMSMs, as listed in Table <ref>. In contrast, references <cit.> and <cit.> focused solely on the angular motion of a motor, whereas our case dealt with a linear actuator equipped with a PMSM motor. 
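Before completing this comparison, we note that the d-q EMLA model (<ref>) used in these simulations can be reproduced in a few lines. A minimal sketch (assuming SciPy; all parameter values below are illustrative placeholders rather than the values of Table <ref>, and the inputs are held constant):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder lumped mechanical and PMSM parameters (not the Table values).
A_eq, B_eq, C_eq, D_eq = 0.1, 5.0, 100.0, 0.05
L_d = L_q = 2.1e-3; R_s = 0.72; Phi_PM = 0.18
N_p = 4; alpha_RL = 100.0; F_L = 50.0

def emla(t, x, u_q=5.0, u_d=0.0, i_q_ref=2.0):
    x_L, v_L, i_q, i_d = x
    dx1 = v_L
    dx2 = (1.5 * N_p * (i_q * i_d * L_d + i_q_ref * Phi_PM - i_q * i_d * L_q)
           - B_eq * v_L - C_eq * x_L - D_eq * F_L) / A_eq
    dx3 = (u_q - R_s * i_q - N_p * alpha_RL * v_L * (i_d * L_d + Phi_PM)) / L_q
    dx4 = (u_d - R_s * i_d + N_p * alpha_RL * v_L * i_q * L_q) / L_d
    return [dx1, dx2, dx3, dx4]

sol = solve_ivp(emla, (0.0, 0.1), [0.0, 0.0, 0.0, 0.0], method="LSODA")
print(sol.y[:, -1])   # final state [x_L, v_L, i_q, i_d]
```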
Furthermore, the stability achieved by the RSBA control method in our study is stronger, demonstrating exponential rather than asymptotic stability. In addition, while those works relied on previously developed, asymptotically stable estimation algorithms from <cit.> and <cit.> to determine the state values, our paper introduces a novel exponentially convergent observer. § CONCLUSION This study proposed a new modular RSBA control for a class of uncertain interconnected systems with unknown modeling errors and interactions. First, we obtained a nonlinear state observer with uniform exponential convergence for this class of systems by extending the conventional continuous Luenberger design with an additional adaptive term. This observer can streamline future control efforts, particularly in applications requiring exponential convergence. Then, based on a robust subsystem-based adaptive approach, the proposed control method decomposed an n-order system and employed a new stability analysis. This strategy incorporates a term called the ‘stability connector,’ designed to capture the non-triangular dynamic interactions among subsystems and effectively offset the unstable terms, leading to exponential stability of the entire system. This stability analysis sets the stage for future research centered on subsystem-based methodologies. IEEEtran | http://arxiv.org/abs/2311.15843v1 | {
"authors": [
"Mehdi Heydarishahna",
"Mohammad Bahari",
"Jouni Mattila"
],
"categories": [
"eess.SY",
"cs.SY"
],
"primary_category": "eess.SY",
"published": "20231127140839",
"title": "Robust observer-based control with modularity and exponential stability for interconnected systems"
} |
RIKEN Center for Quantum Computing, Wako, Saitama 351-0198, Japan Department of Physics, Boston University, Boston, Massachusetts 02215, USAPhysics and Informatics Laboratory, NTT Research, Inc., Sunnyvale, California, 94085, USA Laboratory for Nuclear Science, Massachusetts Institute of Technology, Cambridge, 02139, MA, USADepartment of Physics, Boston University, Boston, Massachusetts 02215, USA An external periodic (Floquet) drive is believed to bring any initial state to the featureless infinite temperature state in generic nonintegrable isolated quantum many-body systems. However, numerical or analytical evidence either proving or disproving this hypothesis is very limited and the issue has remained unsettled. Here, we study the initial state dependence of Floquet heating in a nonintegrable kicked Ising chain of length up to L=30 with an efficient quantum circuit simulator, showing a possible counterexample: The ground state of the effective Floquet Hamiltonian is exceptionally robust against heating, and could stay at finite energy density even after infinitely many Floquet cycles, if the driving period is shorter than a threshold value. This sharp energy localization transition/crossover does not happen for generic excited states. Our finding paves the way for engineering Floquet protocols with finite driving periods realizing long-lived, or possibly even perpetual, Floquet phases by initial state design.Robust effective ground state in a nonintegrable Floquet quantum circuit Anatoli Polkovnikov January 14, 2024 ======================================================================== Introduction Periodically driven, or Floquet, quantum systems have recently attracted renewed attention from the viewpoint of Floquet engineering, i.e., creating intriguing functionalities of matter with external periodic drives <cit.>, together with rapid developments of experimental techniques, such as strong light-matter interactions and driven artificial quantum matter <cit.>. In isolated systems Floquet-engineered states are believed to break down eventually due to heating <cit.>, i.e., the energy injection accompanied by the drive, and stability of Floquet engineering has been a central issue. For general local Hamiltonians with bounded local energy spectrum, rigorous upper bounds on heating are known and guarantee that the heating is suppressed exponentially in the driving frequency ω irrespective of the initial states <cit.>. Many experimental <cit.> and numerical <cit.> studies observe actual heating rates obeying the exponential scaling consistent with these bounds. At the same time it is known that these bounds cannot be tight. For example, exponential heating was also observed in classical systems, where these bounds diverge due to the infinite local Hilbert-space size <cit.>. At the same time, some numerical studies report indications of very sharp phase transition-like behavior of heating when the driving frequency is varied <cit.>. Namely, below (above) a threshold frequency, the system remains at a finite (is brought to the infinite) temperature after many driving cycles. This sharp transition has also been translated to Trotterization on digital quantum computers, and the long-time Trotter error, a counterpart of heating, has been discussed <cit.>. Yet those results cannot be conclusively extrapolated to the thermodynamic limit because of potentially large finite-size effects. 
In a one-body chaotic model <cit.> and a special integrable model <cit.>, such a Trotter transition has been analytically obtained. On the other hand, some studies report smooth crossovers rather than a transition in generic nonintegrable models <cit.>. Those studies differ in many ways, including the model, initial states, physical observables, etc., and it has yet to be understood whether and in what sense a phase transition exists in generic nonintegrable many-body models. In this Letter, we show numerical evidence for the Trotter (or heating) transition in a nonintegrable kicked Ising model by reaching as large as L=30 spins with an efficient quantum circuit simulator (see Fig. <ref>). The sharp transition is absent for most initial states but is most conspicuously seen when the initial state is the effective ground state, i.e., the ground state of an effective Hamiltonian in the high-frequency expansion. It becomes sharper and sharper if we increase the order of the expansion, but the transition point is insensitive to this order. The initial state dependence of the transition sheds new light on the seemingly contradicting previous reports about the presence/absence of transitions. Besides, the stability of states above a critical drive frequency (i.e., below a critical Trotter step) encourages Floquet engineering (Trotter simulations) for a long time even in the thermodynamic limit without the need of scaling the Trotter step down to zero with increasing simulation time <cit.>. Formulation of the problem We consider a quantum spin-1/2 chain of length L under the following time-periodic Hamiltonian: H(t) = H_1 for k ≤ t/τ < k+1/2, H_2 for k+1/2 ≤ t/τ < k+3/2, and H_1 for k+3/2 ≤ t/τ < k+2, where k∈ℤ and 2τ is the driving period, and H_1 = -∑_j=1^L (J/4 Z_j Z_j+1 + h/2 Z_j), H_2 = -g/2∑_j=1^L X_j. Here X_j and Z_j are the Pauli matrices acting on site j, and periodic boundary conditions are imposed, while Ref. <cit.> used open ones. Throughout this paper, we set J=h=g=1 since we have confirmed that the results are not sensitive to their choice as long as they are far away from integrable points. An initial state |ψ_ini⟩ unitarily evolves in time under H(t), and the state at t=nτ (n∈ℤ) is given by |ψ(nτ)⟩=T(τ)^n |ψ_ini⟩ with T(τ) = e^-iH_1τ/2 e^-iH_2τ e^-iH_1τ/2. We ask now how stable |ψ(nτ)⟩ is under the periodic drive. To quantify the stability, we introduce the following fidelity: F(n,τ) = |⟨ψ_ini|T(τ)^n|ψ_ini⟩|^2. This definition is motivated by the following reasoning. The Magnus expansion (or the symmetric BCH formula) gives us a power series expansion for H_F defined through T(τ)=e^-iH_Fτ as H_F=∑_l=0^∞ h^(2l)τ^2l, where odd-order terms all vanish due to the symmetry T(-τ)=T^†(τ). Then a truncated effective Hamiltonian H_F^(2k)=∑_l=0^k h^(2l)τ^2l gives an approximation U_2k(τ)=e^-iH_F^(2k)τ=T(τ)+O(τ^2k+3). The approximate unitary U_2k(τ) is generated by the time-independent Hamiltonian H_F^(2k) and is thus energy conserving, and the time evolution |ψ_2k(nτ)⟩=U_2k(nτ)|ψ_ini⟩ is free from Floquet heating. If |ψ_ini⟩ is an eigenstate of H_F^(2k) (as we will assume below), the fidelity between the exact and approximate states, |⟨ψ_2k(nτ)|ψ(nτ)⟩|^2 <cit.>, reduces to Eq. (<ref>). While Ref. <cit.> showed that eigenstates are more stable than superpositions of them, we address which of the eigenstates are more stable. We note that the above argument also translates to Trotterization; T(τ)^n is a (2k+2)-th order Trotter approximation of U_2k(nτ) generated by the target Hamiltonian H_F^(2k).
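As a concrete illustration of this setup, the following Python sketch (ours, included for the reader's convenience) builds T(τ) for the kicked Ising chain with J=h=g=1 and evaluates F(n,τ) with |ψ_ini⟩ taken to be the ground state of H_F^(0)=H_1+H_2. Note that this dense-matrix implementation is only practical for small L; reaching L=30 as in this Letter requires the quantum circuit simulator mentioned above.

import numpy as np
from functools import reduce
from scipy.linalg import expm

L, J, h, g, tau = 6, 1.0, 1.0, 1.0, 0.5           # small chain; dense matrices only
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def site_op(op, j):
    # Embed a single-site operator at site j of the periodic chain of length L
    return reduce(np.kron, [op if k == j else I2 for k in range(L)])

H1 = -sum(J / 4 * site_op(Z, j) @ site_op(Z, (j + 1) % L) + h / 2 * site_op(Z, j)
          for j in range(L))
H2 = -g / 2 * sum(site_op(X, j) for j in range(L))

U1 = expm(-1j * H1 * tau / 2)
T = U1 @ expm(-1j * H2 * tau) @ U1                # one Floquet cycle, T(tau)

evals, evecs = np.linalg.eigh(H1 + H2)            # effective Hamiltonian H_F^(0)
psi0 = evecs[:, 0]                                # its ground state as the initial state

phi = psi0.astype(complex)
for n in range(1, 201):
    phi = T @ phi
    F = abs(np.vdot(psi0, phi)) ** 2              # fidelity F(n, tau)
print(F)

Since all terms of H_1 are diagonal in the Z basis and mutually commute, e^-iH_1τ/2 factors exactly into 1- and 2-qubit gates, which is what makes a circuit-simulator implementation possible.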
To focus on the long-time stability, we introduce the long- but finite-time average of the fidelity: F_σ,τ = 1/𝒩_σ∑_n=0^∞ F(n,τ)e^-(n/σ)^2, where 𝒩_σ:=∑_n=0^∞ e^-(n/σ)^2 and σ (>0) denotes a Gaussian cutoff. The time-averaged fidelity is numerically obtained by calculating T(τ)^n|ψ_ini⟩ for n=0,1,…,n_max so that n_max≫σ. Since T(τ) can be represented by 1- and 2-qubit quantum gates, unlike other Floquet models <cit.>, T(τ)^n|ψ_ini⟩ is more efficiently calculated using a circuit simulator <cit.>. Possible Trotter transition for the Floquet ground state Figure <ref> shows the τ-dependence of the time-averaged fidelity when the initial state is what we call here the effective Floquet ground state, i.e., the ground state of H_F^(0)=H_1+H_2 and H_F^(2). We remark that the Floquet ground state is not an eigenstate of T(τ) and hence F(n,τ) evolves in n. For convenience, we plot the rate function s_σ,τ = -L^-1 ln(F_σ,τ), which is expected to have a well-defined thermodynamic limit assuming the typical system-size scaling F_σ,τ=e^-s_σ,τL, with a larger (smaller) s_σ,τ corresponding to a smaller (larger) fidelity. If the unitary T(τ) can be represented as T(τ)=e^-iH_Fτ with H_F being a local gapped Hamiltonian, then s_σ,τ is expected to be a small number well defined in the joint limit σ→∞, L→∞. As Fig. <ref>(a) shows, for τ<1.2, s_σ,τ smoothly depends on τ and is almost independent of σ, whereas it increases abruptly for τ≥1.2 and simultaneously acquires strong σ-dependence. Thus the GS of H_F^(0) is, at least, strongly robust against heating for τ<1.2. This transition-like behavior at τ≈1.2 becomes more conspicuous when the initial state is the ground state of H_F^(2), for which the fidelity remains almost constant all the way to the transition point τ=1.2. It is noteworthy that the threshold value τ≈1.2 does not correspond to the known convergence radius of the Magnus expansion. Although some radii are known, each dictates that the expansion is convergent if ‖H_1+H_2‖τ < r, where ‖⋯‖ denotes the 2-norm and r is a constant, e.g., r=π <cit.>. Considering that both ‖H_1‖ and ‖H_2‖ are O(L), we learn that those convergence radii are O(L^-1) and shrink as L increases. On the contrary, the transition-like behavior of the fidelity is robust for larger systems. Figure <ref> compares the fidelities at different system sizes at a fixed time cutoff σ=5×10^3. In the small-τ regime τ<1.2, we observe little dependence on L, suggesting the stability of the Floquet ground state in the thermodynamic limit. We note that s_σ,τ shows a small bump at τ=1.1, visible on the log scale. This bump likely reflects an accidental many-body resonance (see Supplemental Material). We observe that it weakens with increasing system size, such that our numerical results are consistent with the scenario that it vanishes in the thermodynamic limit L→∞. Before looking into the interplay of the system size and the time cutoff, we examine other eigenstates of H_F^(2k) chosen for the initial state. For computational convenience, we calculate the infinite-time average F_∞,τ for each eigenstate |E_j⟩ of H_F^(0), i.e., H_F^(0)|E_j⟩ = E_j|E_j⟩ for j=0,…,d-1. Considering the translation and inversion symmetries shared by T(τ) and H_F^(0), we restrict ourselves to the symmetry sector of zero momentum and even parity that hosts the ground state, and d denotes the dimension of this symmetry sector.
Using the eigenstates of T(τ), T(τ)|θ_α⟩ = e^-iθ_α|θ_α⟩, and assuming there is no degeneracy in the eigenvalues, we obtain F_∞,τ = ∑_α|⟨θ_α|E_j⟩|^4; s_∞,τ = -L^-1 ln(F_∞,τ). Figure <ref> shows s_∞,τ for all |ψ_ini⟩=|E_j⟩ (j=0,…,d-1). Panel (a) is for τ=0.5, below the crossover, where we observe that the GS (j=0), as well as the highest-excited state (HES, j=d-1), are consistent with the behavior s_∞,τ = const(L) ≪ 1. Note that this stability holds after infinitely many Floquet cycles, without a finite cutoff σ, at least up to L≤18. In contrast, s_∞,τ tends to increase in the middle of the spectrum as L increases, meaning a faster-than-exponential decay of the fidelity with the system size. For τ=1.2, in the crossover, on the other hand, s_∞,τ increases with L for all states, including the GS and HES. These results highlight the uniqueness of the GS and HES, and potentially a few more nearby states, as compared to other eigenstates. Namely, the sharp crossover in fidelity (and other heating measures) seen for the GS in Fig. <ref> is not present for generic initial eigenstates of H_F^(2k). This finding seems rather unexpected as, naively, the notion of the ground state is not well defined for the Floquet unitary. Such differences between the GS and most other states are also seen when we deform our model, and the Fermi's golden rule description <cit.> seems to fail to capture the fidelity dynamics if the initial state is the GS (see Supplemental Material). Approximate quantum many-body scars Now we return to the robustness of the Floquet ground state and uncover its underlying mechanism. Equation (<ref>) dictates that the time-averaged fidelity is governed by the overlap between the initial state and the Floquet eigenstates. Thus, the robustness of the Floquet ground state suggests the presence of a special eigenstate of the unitary T(τ) which is similar to the ground state of a local static Hamiltonian. We visualize the Floquet eigenstates |θ_α⟩ in Fig. <ref>(a), where we plot all the eigenvalues θ_α for various τ. These eigenvalues are color-coded according to their average magnetization ⟨θ_α|m_z|θ_α⟩ with m_z = L^-1∑_i=1^L Z_i. | http://arxiv.org/abs/2311.16217v1 | {
"authors": [
"Tatsuhiko N. Ikeda",
"Sho Sugiura",
"Anatoli Polkovnikov"
],
"categories": [
"quant-ph",
"cond-mat.mes-hall",
"cond-mat.quant-gas",
"cond-mat.stat-mech",
"cond-mat.str-el"
],
"primary_category": "quant-ph",
"published": "20231127190000",
"title": "Robust effective ground state in a nonintegrable Floquet quantum circuit"
} |
University of Iowa, Iowa City, IA 52240 [email protected] https://sites.google.com/view/joseph-breen/home The author was partially supported by NSF Grant DMS-2038103. We establish the relationship between folded symplectic forms and convex hypersurface theory in contact topology. As an application, we use convex hypersurface theory to reprove and strengthen the existence result for folded symplectic forms due to Cannas da Silva, and we generalize to all even dimensions Baykur's 4-dimensional existence result of folded Weinstein structures and folded Lefschetz fibrations.Folded symplectic forms in contact topology Joseph Breen January 14, 2024 ===========================================§ INTRODUCTION Let Σ be a closed and oriented manifold of dimension 2n≥ 2. We are interested in two structures associated to such a manifold, the first being that of a folded symplectic form, and the second being the structure induced by viewing Σ as a convex hypersurface embedded in a contact manifold.A folded symplectic form is a closed 2-form ω∈Ω^2(Σ) such that * ω^n ∈⋀^2n T^*Σ intersects the 0-section transversally along the fold Γ := {ω^n = 0}, and * ι^*ω has maximal rank, where ι:Γ↪Σ is the inclusion. The tuple (Σ, ω) is a folded symplectic manifold. We call R_±:= {±ω^n >0} the positive region and negative region, respectively. Informally, a folded symplectic form is a 2-form that degenerates as mildly as possible along a codimension-1 hypersurface, and is otherwise (anti)symplectic. Indeed, (R_+, ω|_R_+) and (-R_-, ω|_R_-) are open symplectic manifolds, where -R_- denotes R_- equipped with the opposite orientation that it inherits from Σ.A co-orientable hypersurfaceι: Σ↪ (M^2n+1, ξ = α) in a contact manifold is convex if there is a contact vector field X, i.e. a vector field satisfying ℒ_Xξ = ξ, everywhere transverse to Σ. We call R_± := {±α(X) > 0}⊂Σ the positive region and negative region, respectively, and we call the codimension-1 contact-type submanifold Γ := {α(X) = 0}⊂Σ the dividing set. Convex hypersurfaces form a generic class of hypersurfaces in contact manifolds. Since (M, ξ=α) is co-oriented, the co-orientation given by X induces an orientation on Σ. Importantly, R_+ is an exact symplectic manifold with a primitive given by an appropriate positive rescaling of ι^*α|_R_+, and (-1)^n+1R_- is likewise exact symplectic via an appropriate negative rescaling of ι^*α|_R_-. Both R_+ and (-1)^n+1R_- are symplectic fillings of the contact dividing set Γ.§.§ Context The goal of this note is to establish the connection between folded symplectic forms and convex hypersurfaces. Before stating our main results, <ref>, <ref>, and <ref>, we include a brief discussion of each theory for context. §.§.§ Folded symplectic forms Folded symplectic forms were first introduced by Martinet in his thesis <cit.>, where he proved a Darboux theorem; namely, that around any point p∈Γ there are local coordinates such that p=0, the fold is given by Γ = {y_1=0}, and[Here we are locally orienting Σ via the top wedge power of ∑_j=1^n dx_j ∧ dy_j, so that R_+ = {y_1 > 0}.]ω = y_1dx_1∧ dy_1 + ∑_j=2^n dx_j ∧ dy_j.Later, folded symplectic forms were thoroughly investigated in a series of papers by Cannas da Silva, Guillemin, Pires, and Woodward <cit.>. Cannas da Silva <cit.> established existence in all even dimensions, in particular showing that folded symplectic forms satisfy an existence h-principle. 
The sufficient formal data on Σ necessary for existence is that of a stable almost complex structure, i.e., a complex vector bundle structure on TΣ⊕^2, where ^2 →Σ is the trivial rank-2 vector bundle. Stable almost complex structures, and hence folded symplectic forms, exist on all closed and oriented 4-manifolds. In <cit.>, Baykur upgraded the existence of folded symplectic forms on 4-manifolds to that of, what we will call in this paper, folded Weinstein structures.[Baykur worked with Stein structures. We will work directly with the symplectic counterpart; see <cit.>.] Briefly, a folded Weinstein manifold is an exact folded symplectic manifold where the positive and negative regions are each Weinstein manifolds inducing the same contact structure on the fold. A precise definition is given in <ref>; see also <ref> for other uses of folded Weinstein language in the literature. We summarize the status (prior to this paper) of existence of folded symplectic structures as follows.Let Σ be a closed and oriented manifold of dimension 2n≥ 2.* <cit.> If Σ admits a stable almost complex structure, then Σ admits a folded symplectic structure. In particular, if 2n=4, then Σ admits a folded symplectic structure.* <cit.> If 2n=4, then Σ admits a folded Weinstein structure. Folded symplectic manifolds have been studied with toric actions <cit.>, have been studied dynamically <cit.>, and have even been probed by pseudoholomorphic curves in dimension 4 <cit.>. They have also been studied in terms of their relationship with other singular symplectic manifolds, such as b-symplectic manifolds <cit.>, and, recently, odd-dimensional counterparts (folded contact forms and b-contact forms) have been introduced in the literature <cit.>.§.§.§ Convex hypersurface theory Convex hypersurface theory was introduced by Giroux <cit.>, who established a C^∞-genericity statement in dimension 3, among other foundational results. In the following decades, convex surface theory was used to great effect in 3-dimensional contact topology, for instance in the classification of tight contact structures <cit.>. In higher dimensions the theory poses more technical difficulties, and it was not until recently that a C^0-genericity result was first established by Honda and Huang. In fact, they proved that convex hypersurfaces with Weinstein positive and negative regions are C^0-generic.Let Σ↪ (M^2n+1, ξ) be a closed and co-oriented hypersurface in a co-oriented contact manifold. Then there is a C^0-small perturbation of Σ so that it is convex and such that R_+ and (-1)^n+1R_- are exact symplectic manifolds that naturally inherit Weinstein structures.Honda and Huang's further work <cit.> and work together with the author <cit.>, alongside work of others like Eliashberg and Pancholi <cit.> and Sackel <cit.>, has since laid the groundwork for the use of convex hypersurface theory in the comparatively unexplored frontier of higher-dimensional contact topology. §.§ Main results <ref> and <ref> suggest a connection between folded symplectic forms and convex hypersurface theory. However, to the best of the author's knowledge, the direct relationship has not been suggested. For instance, there is no mention of convex hypersurface theory in any of the folded symplectic literature cited in <ref>, nor are folded symplectic forms named in any of the classical or modern literature on convex hypersurface theory cited in <ref>. 
Convex hypersurface theory is used in the recent papers on singular contact forms <cit.>, but only for the purpose of the singular contact structures; there is no indication that convexity induces a folded symplectic structure on the hypersurface.[For more commentary on relevant literature, see <ref> below for a discussion on Honda and Huang's use of folded terminology in <cit.>, and <ref> for reference to an intrinsic form of convex hypersurface theory described by Salamon based on a series of lectures given by Eliashberg <cit.>.]Our first result establishes this relationship explicitly. The precise definition of positive contact-type Liouville form used in the statement of the theorem is <ref>.Let Σ be a closed and oriented manifold of dimension 2n≥ 2, and Γ⊂Σ a separating hypersurface. The following statements are equivalent:* The manifold Σ admits an exact folded symplectic form ω with positive contact-type Liouville form λ and contact fold (Γ, ξ_Γ).* There is a contact structure on Σ× such that Σ×{0} is convex with contact dividing set (Γ, ξ_Γ). As an application, we leverage convex hypersurface theory to upgrade the existence statements of folded symplectic structures to that of folded Weinstein structures in all dimensions. Our second main result is the following strengthening of <ref> and is a consequence of <ref> together with <ref>. Let Σ be a closed and oriented manifold of dimension 2n≥ 2. If Σ admits a stable almost complex structure, then Σ admits a folded Weinstein structure.Our proof of <ref> recovers both (1) and (2) of <ref> with a different technique. The difference, particularly with Cannas da Silva's method in <cit.>, will be further elaborated in <ref>. In <cit.>, Baykur also considered other folded structures on 4-manifolds such as folded Kähler structures and folded Lefschetz fibrations that naturally come with the folded Weinstein territory. We concern ourselves with the latter. Roughly, a folded Lefschetz fibration is obtained by gluing two Lefschetz fibrations with (symplectically) isotopic monodromy along their boundaries to obtain a closed manifold; a precise definition is <ref>. Our final main result is a generalization of Baykur's folded Lefschetz fibration existence result to all dimensions. Let 2n≥ 4. Every folded Weinstein manifold (Σ^2n, λ, ϕ) admits a compatible folded Weinstein Lefschetz fibration. In particular, every closed and oriented manifold of dimension 2n with a stable almost complex structure admits a folded Weinstein Lefschetz fibration. Baykur remarks (see his discussion after <cit.>) that, assuming future advances in Weinstein Lefschetz fibration technology in higher dimensions, many of his definitions and results generalize naturally. This advancement is now possible with work of Giroux and Pardon <cit.> and the work of the author with Honda and Huang <cit.>.We close the introduction with two more literature remarks. Honda and Huang <cit.> introduce a structure they call folded Weinstein for hypersurfaces in contact manifolds, though this notion is a priori slightly different to how we use folded Weinstein in this paper (i.e. <ref>). Namely, the folded Weinstein hypersurfaces of <cit.> are specifically hypersurfaces of contact manifolds, rather than manifolds with an intrinsic structure as in <ref> or <ref>. Moreover, a folded Weinstein hypersurface of <cit.> is allowed to have many different folds that alternate between positive and negative cobordism regions, rather than one fold between two domains. 
However, there is no contradiction or confusion — every folded Weinstein hypersurface in a contact manifold can be perturbed to a Weinstein convex hypersurface, and by our proof of <ref>, this induces on Σ a folded Weinstein structure in the sense of <ref>. For this reason, our language is appropriate a posteriori.The notes of Salamon <cit.>, based on a series of lectures given by Eliashberg in 2022 on the work of Eliashberg and Pancholi <cit.>, discuss an intrinsic notion of convex hypersurface theory via pairs of differential forms on even-dimensional manifolds. It is likely that our work can be formulated in this framework.§.§ OrganizationWe review the basics of Weinstein topology, folded symplectic forms, convex hypersurface theory, and Lefschetz fibrations in <ref>. In <ref>, we record the various definitions needed to make each of our main results precise, and provide some constructions and examples. Finally, in <ref> we prove <ref>, <ref>, and <ref>. §.§ AcknowledgmentsWe thank Gabe Islambouli for an enlightening conversation that sparked this project, and in particular for introducing the author to the literature of folded symplectic forms. We also thank Ko Honda and Austin Christian for helpful comments.§ PRELIMINARIES Here we review the necessary background material. None of the content in this section is original. Let M be a manifold, let ι:M ↪ M × be the inclusion into the 0-level, let π: M ×→ M be the projection, and let η be a differential form on M ×. We are often interested in the form π^*ι^*η on M×. To clean up the notation, we will write η_M := ι^*η and then will omit the π^* when viewing the form on M ×. There should be no risk of confusion. §.§ Weinstein topology Here we review the basic definitions of Weinstein structures, mostly to establish notation conventions used in this paper. We also include a review of ideal Liouville structures as described by Giroux <cit.>. There is much left unsaid about Weinstein manifolds; for more details, the standard reference is <cit.>, and a nice survey article is <cit.>. Let V be a compact manifold of dimension 2n with nonempty boundary. A Liouville (domain) structure on V is a 1-form λ∈Ω^1(V) such that dλ is symplectic and such that ι^*λ is a positive contact form, where ι:∂ V ↪ V is the inclusion and ∂ V is oriented as the boundary of V, oriented by its symplectic structure. The unique vector field X_λ satisfying X_λ⌟ dλ = λ is called the Liouville vector field. Equivalently, a Liouville (domain) structure is given by a pair (ω, X) where ω is a symplectic form, X is a vector field satisfying ℒ_Xω = ω, and X is outwardly transverse to ∂ X. The data (V, λ) or (V, ω, X) is called a Liouville domain. More generally, we may speak of a compact Liouville cobordism wherein the Liouville vector field X_λ is allowed to point inward along a negative boundary component ∂_- V. Given (ω, X) the equivalence in <ref> comes from setting λ := X ⌟ω. Indeed, if X_λ is the Liouville vector field for λ, then ℒ_X_λ dλ = dλ, and the positive contact condition is equivalent to outward transversality at the boundary.Let W be a compact manifold of dimension 2n with nonempty boundary. A Weinstein structure is a tuple (λ, ϕ) where * (W,λ) is a Liouville domain, * ϕ: W → is a Morse function such that ∂ W = ϕ^-1(c) is a regular level set, and * the Liouville vector field X_λ is gradient-like for ϕ. Equivalently, a Weinstein structure on W may be given by a triple (ω, X, ϕ), where (ω, X) is a Liouville structure. 
The data (W, λ, ϕ) or (W, ω, X, ϕ) is a Weinstein domain. A Weinstein homotopy is simply a 1-parameter family (W, λ_t, ϕ_t) of Weinstein structures on a fixed domain W. We often abuse language and say that two Weinstein structures on different but diffeomorphic manifolds are Weinstein homotopic if they are Weinstein homotopic under the pullback via a diffeomorphism. The Liouville and Weinstein structures discussed so far by definition reside on compact domains. It is also natural to consider the structures on open manifolds with cylindrical ends. For our purposes, we simply record the following definition. Let (V, λ) be a Liouville domain. The completion of (V,λ) is the data (V̂, λ̂) where V̂ := V ∪ (∂V × [0,∞)_s) and λ̂ is the extension of λ to V̂ via e^s α, where α is the induced contact form on ∂V. We call (V̂, λ̂) a (completed, finite-type) Liouville manifold. There is one last type of Liouville structure to discuss which has the benefit of living on a compact domain, but simultaneously behaving like a completion. Let V be a compact manifold of dimension 2n with nonempty boundary, and let int(V) := V∖∂V denote the interior of V. An ideal Liouville structure on V is an exact symplectic form ω on int(V) admitting a primitive λ such that there is a smooth function u: V → [0,∞), with ∂V = u^-1(0) a regular level set, for which the 1-form uλ on int(V) extends to a 1-form on all of V, inducing a contact form on ∂V. Such a 1-form λ is called an ideal Liouville form and the pair (V, ω) is an ideal Liouville domain. Ideal Liouville domains exhibit a number of nice properties over normal Liouville domains or even completed finite-type Liouville manifolds. For example, the ideal Liouville structure ω — independent of the choice of primitive — induces a positive contact structure on ∂V <cit.>. [Ideal completion] Let (V, λ_0) be a compact Liouville domain of dimension 2n. Via the flow of the Liouville vector field we may identify a collar neighborhood of the boundary N(∂V) ≅ [-ε, 0]_s ×∂V with λ_0 = e^s α_0, where α_0 is the contact form induced on ∂V by λ_0. Let u:N(∂V) → [0,1] be a smooth function u=u(s) such that u(0) = 0, u'(s) < 0 for -ε < s ≤ 0, u(-ε) = 1, and u'(-ε) = 0. Extend u to a smooth function on V by the constant function u≡ 1 outside N(∂V). Finally, define a 1-form on int(V) by λ := 1/u λ_0. We claim that ω := d(1/u λ_0) is an ideal Liouville structure on V. Indeed, on N(∂V)∩int(V) = [-ε,0)_s×∂V we have ω = e^s (u(s) - u'(s))/u(s)^2 ds∧α_0 + e^s/u(s) dα_0 and so ω^n = n e^ns (u(s) - u'(s))/u(s)^n+1 ds∧α_0∧(dα_0)^n-1 > 0. Moreover, the Liouville vector field on this region is X_λ = u(s)/(u(s) - u'(s)) ∂_s, which is positively parallel to ∂_s and is complete (in forward time), hence (int(V), λ) is symplectomorphic to the completion of the Liouville domain (V, λ_0). We call the ideal Liouville domain (V, ω) the ideal completion of (V, λ_0). §.§ Folded symplectic forms We only need one nontrivial fact about folded symplectic forms, which is a local normal form near the fold (<ref>). For more details on folded symplectic forms, we refer the reader to <cit.>. We begin by restating the main definition from the introduction for convenience. Let Σ be a closed and oriented manifold of dimension 2n. A folded symplectic form is a closed 2-form ω∈Ω^2(Σ) such that * ω^n ∈⋀^2n T^*Σ intersects the 0-section transversally along the fold Γ := {ω^n = 0}, and * ι^*ω has maximal rank, where ι:Γ↪Σ is the inclusion. The tuple (Σ, ω) is a folded symplectic manifold. We call R_± := {±ω^n >0} the positive region and negative region, respectively.
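As a quick sanity check of this definition (the following computation is ours, included for convenience), consider the Martinet normal form from the introduction, ω = y_1 dx_1∧dy_1 + ∑_j=2^n dx_j∧dy_j on ℝ^2n. Since (y_1 dx_1∧dy_1)^2 = 0, expanding the power gives ω^n = n(y_1 dx_1∧dy_1)∧(∑_j=2^n dx_j∧dy_j)^n-1 = n! y_1 dx_1∧dy_1∧⋯∧dx_n∧dy_n, which vanishes transversally exactly along Γ = {y_1 = 0}; and on Γ, with coordinates (x_1, x_2, y_2, …, x_n, y_n), we have ι^*ω = ∑_j=2^n dx_j∧dy_j, which has maximal rank 2n-2 and 1-dimensional kernel spanned by ∂_x_1.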
Since ω_Γ := ι^*ω has maximal rank, ker ω_Γ is a 1-dimensional subbundle of TΓ→Γ called the null-foliation of the folded symplectic form. Moreover, ω_Γ^n-1 naturally orients the quotient bundle TΓ/ker ω_Γ. Since Γ is oriented, there is an induced orientation on the line bundle ker ω_Γ→Γ. In particular, it admits nonvanishing sections. With this in mind, the following lemma gives a local normal form for a folded symplectic form near the fold. Let (Σ, ω) be a folded symplectic manifold with fold Γ. Let ι:Γ↪Σ be the inclusion, let ω_Γ := ι^*ω, let v be a non-vanishing section that orients the line bundle ker ω_Γ→Γ, and let α∈Ω^1(Γ) be a 1-form with α(v) = 1. Then near Γ there are local coordinates on a neighborhood N(Γ) ≅ Γ× (-ε, ε)_τ of Γ in Σ such that, on N(Γ), we have ω = ω_Γ - d(τ^2 α). The statement of <ref> differs from that of <cit.> by a negative sign because of a difference in orientation convention. In our paper, N(Γ) ≅ Γ× (-ε, ε)_τ is oriented via Ω_Γ∧dτ, where Ω_Γ is an orienting volume form on Γ, so that R_+ ∩ N(Γ) = {τ > 0}. Indeed, from (<ref>) we have ω = ω_Γ - 2τ dτ∧α - τ^2 dα and so ω^n = 2nτ α∧(ω_Γ - τ^2 dα)^n-1∧dτ. By construction, α∧ω_Γ^n-1 orients Γ and so α∧ω_Γ^n-1∧dτ orients N(Γ). It follows that, after possibly shrinking ε, ω^n > 0 for τ > 0. §.§ Convex hypersurface theory References for a more thorough background on convex hypersurface theory are <cit.>; see also Salamon's notes <cit.> on the work of Eliashberg and Pancholi <cit.>. Again we repeat the main definition for convenience. Let (M, ξ = ker α) be a co-oriented contact manifold. A smoothly embedded closed and co-orientable hypersurface Σ⊂ M is convex if there is a contact vector field X everywhere transverse to Σ. We define the dividing set to be Γ^X := {α(X) = 0}, and the positive (resp. negative) region to be R_±^X := {±α(X) > 0}. Note that Γ^X and R_±^X depend on the choice of contact vector field. However, the (contact) isotopy class of Γ^X in Σ is independent of X and so we will typically omit the notational dependence on the choice of contact vector field. Integrating the transverse contact vector field X (and decaying it with a contact Hamiltonian) gives an arbitrarily large vertically-invariant standard neighborhood of a convex hypersurface. Let Σ⊂ (M, ξ) be a convex hypersurface. Then there is a neighborhood N(Σ) ≅ Σ×ℝ_t of Σ in M where Σ = Σ×{0} and ξ = ker(fdt + β) for a smooth t-independent function f: Σ→ℝ and a t-independent 1-form β∈Ω^1(Σ). Moreover, the contact condition is dt∧(f(dβ)^n - n df∧β∧(dβ)^n-1) > 0. In particular, f(dβ)^n - n df∧β∧(dβ)^n-1 is an orienting volume form on Σ. Let Σ⊂ (M, ξ) be a closed and oriented convex hypersurface with dividing set Γ. Let ι:Γ→ M be the inclusion. After a contact isotopy rel Σ, there are coordinates on a neighborhood N(Γ) ≅ Γ× (-ε, ε)_τ and a contact form α on N(Γ)×ℝ_t such that α = τ dt + α_Γ, where α_Γ := ι^*α is the induced contact form on (Γ, ξ_Γ). In the introduction it was stated that the positive and negative regions of a convex hypersurface are naturally exact symplectic fillings of the contact-type dividing set. The precise statement is given in the language of ideal Liouville domains. Let (Σ×ℝ_t, ker(fdt + β)) be a vertically-invariant neighborhood of a convex hypersurface Σ = Σ×{0}. Let λ_± := 1/f β|_R_±. Then ((±1)^n+1 R_±, dλ_±) is an ideal Liouville domain.
Moreover, the characteristic foliation on R_+ is directed by the (ideal) Liouville vector field X_λ_+, and the characteristic foliation on R_- is directed by the negative (ideal) Liouville vector field -X_λ_-. Let Σ⊂ (M, ξ = ker α) be a co-oriented (not necessarily convex) hypersurface in a co-oriented contact manifold. The characteristic foliation is the oriented singular foliation given by Σ_ξ := (TΣ∩ξ)^⊥, where ⊥ denotes the symplectic orthogonal complement given by the conformal symplectic structure on ξ. The orientation of the foliation is induced by the orientation of Σ and the co-orientation of ξ. Singular points p of the characteristic foliation Σ_ξ, i.e., points where T_pΣ = ξ_p, are, by definition, positive (resp. negative) according to whether the co-orientation of the contact plane ξ_p agrees (resp. disagrees) with the co-orientation of T_pΣ. Let (M, ξ = ker α) be a co-oriented contact manifold. A Weinstein convex hypersurface is a tuple (Σ, ϕ) where Σ⊂ M is a convex hypersurface and ϕ:Σ→ℝ is a Morse function for which the characteristic foliation Σ_ξ is gradient-like (by which we mean there is some vector field Y directing Σ_ξ which is gradient-like for ϕ). The following fact then follows from <ref>, which justifies the language of the previous definition. Let (Σ×ℝ_t, ker(fdt + β)) be a vertically-invariant neighborhood of a Weinstein convex hypersurface (Σ, ϕ). Let λ_± := 1/f β|_R_± and let ϕ_± := ±ϕ|_R_±. Then ((±1)^n+1 R_±, λ_±, ϕ_±) is a completed finite-type Weinstein manifold and ((±1)^n+1 R_±, dλ_±) is its ideal completion. [The standard convex sphere] Let (M, ξ) = (ℝ^2n+1, ker α), where α = dz + 1/2∑_j=1^n (x_j dy_j - y_j dx_j) is the standard radial contact form, and let S^2n = {𝐱^2 + 𝐲^2 + z^2 = 1} be the unit sphere with outward co-orientation. Let X = z∂_z + 1/2∑_j=1^n (x_j ∂_x_j + y_j ∂_y_j) be the radial vector field. Note that ℒ_z∂_z(dz) = dz and ℒ_1/2(x_j ∂_x_j + y_j ∂_y_j) 1/2(x_j dy_j - y_j dx_j) = 1/2(x_j ∂_x_j + y_j ∂_y_j) ⌟ (dx_j∧dy_j) = 1/2(x_j dy_j - y_j dx_j), and so ℒ_Xα = α. Thus, X is a (strict) contact vector field and S^2n is a convex hypersurface with respect to X. Since α(X) = z, the dividing set is Γ = {z=0}∩ S^2n and the positive (resp. negative) region is the hemisphere R_± = {±z > 0}∩ S^2n. With ϕ = -z, (S^2n, ϕ) is a Weinstein convex hypersurface. [Rounded contact handlebody] More generally, if (V,λ) is a Liouville domain, we may consider a compact contactization (V× [-1,1]_t, α = dt + λ). One may round the edges of the resulting contact manifold to obtain a smooth boundary Σ which is naturally convex and has a dividing set contactomorphic to ∂V. See <cit.> for a careful description of the rounding. §.§ Lefschetz fibrations Here we review the theory of Lefschetz fibrations. This is necessary for the statement and proof of <ref>. A Weinstein Lefschetz fibration is a pair (p:W^2n→ D^2, λ) satisfying the following properties: * The manifold W^2n is a compact domain with corners which admits a smoothing W^sm and a Morse function ϕ:W^sm→ℝ such that the total space (W^sm, λ, ϕ) is a Weinstein domain. * The map p:W → D^2 is a smooth fibration except at finitely many critical points in the interior of W with distinct critical values, around which there are local holomorphic coordinates such that p(z) = p(z_0) + ∑_j=1^n z_j^2, λ = i∑_j=1^n (z_j dz̄_j - z̄_j dz_j).
* The boundary ∂W decomposes as [∂_vert W := p^-1(∂D^2)] ∪ [∂_hor W := ⋃_x∈D^2 ∂(p^-1(x))], where the two pieces meet at a codimension-2 corner, and p|_∂_vert W and p|_∂_hor W are fibrations. * On each regular fiber, λ induces a Weinstein structure such that the contact form λ|_∂(p^-1(x)) is independent of x∈D^2, and finally dλ is nondegenerate on ker dp(z) for all z∈ W (not just regular points). Choose a regular basepoint ∙∈D^2 and let W_0 := p^-1(∙) be a distinguished regular fiber. After isotoping the critical values x_1, …, x_k to be approximately radially distributed around ∙, we get a cyclically ordered tuple of Lagrangian spheres in W_0 by identifying Lefschetz thimbles over paths γ_j that are approximately straight lines from x_j to ∙. This yields: An abstract Weinstein Lefschetz fibration is the data ((W_0^2n-2, λ_0, ϕ_0); ℒ = (L_1, …, L_k)), abbreviated (W_0; ℒ), where (W_0^2n-2, λ_0, ϕ_0) is a Weinstein domain and ℒ is an ordered k-tuple of exact parameterized Lagrangian (n-1)-spheres (possibly duplicated), called the vanishing cycles, embedded in W_0. The total space of (W_0; ℒ), denoted |W_0; ℒ|, is the 2n-dimensional Weinstein domain obtained by attaching critical handles to (W_0×D^2, λ_0 + λ_st, ϕ_0 + ϕ_st) along Legendrian lifts Λ_j⊆ W_0×∂D^2, j=1, …, k, of L_j positioned near the angle 2πj/k on ∂D^2. The total space |W_0; ℒ| of an abstract Weinstein Lefschetz fibration naturally admits a Weinstein Lefschetz fibration p:W→D^2 with W^sm = |W_0; ℒ| in the sense of <ref>. We refer to Giroux and Pardon <cit.> for more details on translating between the two notions. Note that (3) of <ref> gives a supporting (strongly Weinstein) open book decomposition of the contact manifold (M, ξ) := ∂W^sm. A neighborhood of the binding is given by ∂_hor W while the partial mapping torus of the page is ∂_vert W. The abstract description of the open book is easily obtained from the data of an abstract Weinstein Lefschetz fibration. Namely, given (W_0; ℒ), the induced abstract open book on the boundary has page W_0 and monodromy τ_ℒ := τ_L_k∘⋯∘τ_L_1, where τ_L_j:W_0 → W_0 is the positive symplectic Dehn twist around the exact Lagrangian sphere L_j⊂ W_0. Every Weinstein domain, up to homotopy, admits a description as the total space of a Weinstein Lefschetz fibration. Let (W, λ, ϕ) be a Weinstein domain. Then there is an abstract Weinstein Lefschetz fibration (W_0; ℒ) whose total space |W_0; ℒ| is Weinstein homotopic to (W, λ, ϕ). There are various symmetries of Weinstein Lefschetz fibrations that leave the total space invariant. Most relevant for our purposes is a stabilization operation. Let (W_0; ℒ) be a Weinstein Lefschetz fibration. Let D⊂ W_0 be a properly embedded regular Lagrangian disk. The stabilization of (W_0; ℒ) along D is (W_0 ∪ h^2n-2_n-1; L ∪ℒ), where h_n-1^2n-2 is a critical Weinstein handle attached to W_0 along ∂D, and L is the exact Lagrangian sphere which is the union of D with the core of h_n-1^2n-2. If (W_0'; ℒ') is a stabilization of (W_0; ℒ), then the total space |W_0'; ℒ'| is Weinstein homotopic to |W_0; ℒ|. Moreover, a Lefschetz fibration stabilization induces a positive stabilization of the boundary open book decomposition, and conversely, any positive stabilization of the boundary open book decomposition induces a stabilization of the Lefschetz fibration. § EXACT FOLDED SYMPLECTIC FORMS The role of this section is to provide the definitions needed to make each of our main results precise.
We first establish some language, constructions, and observations about exact folded forms, and then extend the discussion to folded Lefschetz fibrations. §.§ Exact folded forms Many definitions from the setting of exact symplectic forms port directly over to the folded setting. Let ω = dλ be an exact folded symplectic form on Σ with fold Γ. The Liouville vector field X_λ is the unique vector field defined on Σ∖Γ by the relation X_λ⌟ω = λ. Let ω be an exact folded symplectic form on Σ with fold Γ, and let ι:Γ↪Σ be the inclusion. Recall that Γ is naturally oriented as ∂R_+. We say that λ∈Ω^1(Σ) is a positive contact-type Liouville form for ω if ω = dλ and ι^*λ is a positive contact form on Γ. Consider the local Darboux model ℝ^2n_(x_1, y_1, …, x_n, y_n) with fold Γ = {y_1=0} and ω = y_1 dx_1∧dy_1 + ∑_j=2^n dx_j∧dy_j. Define two 1-forms λ̃ and λ as given to the left below: λ̃ := -y_1^2/2 dx_1 - ∑_j=2^n y_j dx_j, X_λ̃ = 1/2 y_1 ∂_y_1 + ∑_j=2^n y_j ∂_y_j; λ := (1 - y_1^2/2) dx_1 - ∑_j=2^n y_j dx_j, X_λ = (y_1^2 - 2)/(2y_1) ∂_y_1 + ∑_j=2^n y_j ∂_y_j. Note that ω = dλ̃ = dλ and hence λ̃ and λ are both Liouville forms for ω. The Liouville vector fields are computed to the right above. However, λ̃ is not a positive contact-type Liouville form for ω, while λ is a positive contact-type Liouville form. Indeed, ι^*λ̃ = -∑_j=2^n y_j dx_j, ι^*λ = dx_1 - ∑_j=2^n y_j dx_j. The latter is a positive contact form on Γ = ∂{y_1≥ 0}, while the former is not a contact form at all. Observe that for the positive contact-type Liouville form λ, the Liouville vector field is dominated by -1/y_1 ∂_y_1 near Γ; see <ref>. Consider the standard folded symplectic structure on S^2n given as follows (see, for instance, <cit.>). We view S^2n⊂ℝ^2n+1_(x_1, y_1, …, x_n, y_n, z) as the unit sphere. Let π: S^2n→ℝ^2n_(x_1, y_1, …, x_n, y_n) be the projection, and define ω∈Ω^2(S^2n) by ω := π^*(∑_j=1^n dx_j∧dy_j). Then ω is a folded symplectic form with fold Γ = S^2n∩{z=0}. Let λ := π^*(1/2∑_j=1^n (x_j dy_j - y_j dx_j)). Then ω = dλ, and moreover λ is a positive contact-type Liouville form for ω inducing the standard tight contact structure on Γ≅ S^2n-1. See for comparison the standard convex sphere from <ref>. More generally, one can double a Liouville cobordism to obtain an exact folded symplectic manifold. [Double of a Liouville cobordism] Let (W^2n, dλ) be a Liouville cobordism with negative (concave) end ∂_-W and positive (convex) end ∂_+W. Let α_± be the induced contact form on ∂_±W. There are collar neighborhoods N(∂_+W) and N(∂_-W) such that N(∂_+W) ≅ (-ε, 0]_s×∂_+W with λ = e^s α_+, and N(∂_-W) ≅ [0, ε)_s×∂_-W with λ = e^s α_-. Let Σ^2n⊂ℝ× W be the smooth submanifold defined as follows: ♢ Away from ℝ× N(∂_±W), set Σ := ({1}× W)∪({-1}× -W). ♢ In ℝ_z × N(∂_+W), let Σ := {s = f_+(z)} where f_+:(-1,1)_z → (-ε, 0]_s is a smooth function satisfying f_+(0) = f_+'(0) = 0, f_+''(z) < 0, and f_+(z) → -ε and f'_+(z) →±∞ as z →∓1. ♢ Likewise, in ℝ_z × N(∂_-W) let Σ := {s = f_-(z)} where f_-: (-1,1)_z → [0, ε)_s is a smooth function satisfying f_-(0) = f'_-(0) = 0, f_-''(z) > 0, and f_-(z) →ε and f'_-(z) →±∞ as z →±1. See the left side of <ref>. Let π:Σ⊂ℝ× W → W be the projection and define ω := π^*(dλ). Then ω is an exact folded symplectic form on Σ with fold Γ = Σ∩{z=0}. Each of R_+ and R_- is essentially a copy of the interior of the original Liouville cobordism. This example indicates the importance of the adjective positive when talking about contact-type Liouville forms.
While π^*λ is a contact form on the fold Γ, if the original Liouville cobordism has a nonempty negative end ∂_-W, then π^*λ will not be a positive contact-type Liouville form. Indeed, the fold Γ is oriented as the boundary of R_+, but the contact form on the negative boundary is induced as the negative end of a Liouville cobordism, and hence on this component of the fold the contact form will not be positive. See the right side of <ref>. A folded Weinstein structure on Σ is the data (λ, ϕ) where * ω := dλ is an exact folded symplectic form with fold Γ and positive (resp. negative) region R_±, * λ is a positive contact-type Liouville form for ω, and * ϕ:Σ→ℝ is a (generalized) Morse function such that Γ = ϕ^-1(0) is a regular level set and such that X_λ|_R_± is gradient-like for ±ϕ|_R_±. The triple (Σ, λ, ϕ) is a folded Weinstein manifold. A folded Weinstein homotopy is simply a 1-parameter family (Σ, λ_t, ϕ_t), t∈[0,1], of folded Weinstein structures on Σ. Let ε > 0 be sufficiently small and let R_+^ε := ϕ^-1(-∞,-ε], R_-^ε := ϕ^-1[ε,∞) be slight compact retracts of R_+ and R_-. Then (R_+^ε, λ, ϕ) and (-R_-^ε, λ, -ϕ) are both Weinstein domains. It is not correct to call (±R_±, λ, ±ϕ) a Weinstein domain, as ω = dλ does not extend through the fold to a symplectic form. One could also ask if the open manifold (±R_±, λ, ±ϕ) is a completed Weinstein manifold, or if its closure is an ideal Liouville domain, but this is also not the case — note that (refer back to the local model of <ref>) the Liouville vector field is not complete. §.§.§ Asymmetric double of two Weinstein domains The double of a Weinstein domain as in <ref> is naturally a folded Weinstein manifold. More generally, we can form an asymmetric double of two Weinstein domains with contactomorphic boundaries. For the sake of precision, we give a careful description of the construction. See also <cit.>. Let (W_+, λ_+, ϕ_+) and (W_-, λ_-, ϕ_-) be two compact Weinstein domains of the same dimension. Assume moreover that the contact manifolds (∂W_+, α_+) ≅ (∂W_-, α_-) are contactomorphic, where α_± = ι_±^*λ_± is the induced contact form under the inclusion ι_±: ∂W_±↪ W_±. Let ψ: (∂W_-, α_-) → (∂W_+, α_+) be a contactomorphism, so that ψ^*α_+ = μα_- for some μ: ∂W_- →ℝ_>0. We first wish to extend this contactomorphism to a Liouville isomorphism of collar neighborhoods of the boundaries. Using the Liouville flow on W_-, we may identify a compact collar neighborhood N(∂W_-) ≅ [-ε, 0]_s ×∂W_- on which λ_- = e^s α_-. On W_+, we first take the completion Ŵ_+, let C > sup(ε + |ln μ|) be a constant, and then identify a collar neighborhood N(∂W_+) ≅ [-C, C]_s ×∂W_+ of the original boundary ∂W_+ ⊂Ŵ_+ on which λ_+ = e^s α_+. Then define ψ̅:N(∂W_-) → N(∂W_+) by ψ̅(s,p) := (s - ln μ(p), ψ(p)). We have ψ̅^*λ_+ = ψ̅^*(e^s α_+) = e^s-ln μ ψ^*α_+ = e^s·1/μ·μ·α_- = e^s α_- = λ_-, and so ψ̅ is a Liouville isomorphism onto its image. In particular, ψ̅ identifies a collar neighborhood of ∂W_-⊂ W_- with a collar neighborhood of the Weinstein subdomain W^♭_+⊂Ŵ_+ of the completion of W_+ given by W^♭_+ := Ŵ_+ ∖{s > -ln μ}. Here we endow W_+^♭ with a Morse function ϕ_+^♭: W_+^♭→ℝ that agrees with ϕ_+ on W_+ ∖{s > -C}, satisfies dϕ_+^♭(∂_s) > 0, and has ∂W_+^♭ as a regular level set. Note that the domain W^♭_+ is Weinstein homotopic to W_+. Next we mimic <ref> with a few modifications. Let f:(-1,1)_z → [0,1]_s be a smooth function satisfying f(0) = 1, f'(0) = 0, f''(z) < 0, and f(z) → 0 and f'(z) →±∞ as z →∓1.
Consider the manifold M_0 := ℝ_z × [-ε, 1]_s ×∂W_- and define a submanifold Σ_0⊂ M_0 by Σ_0 := {s = f(z)}∪({1}_z × [-ε, 0]_s ×∂W_-)∪({-1}_z × [-ε, 0]_s ×∂W_-). Define a closed manifold Σ by Σ := Σ_0 ∪_id W_- ∪_ψ̅ W_+^♭, where we glue Σ_0 and W_- via the obvious map id: {-1}_z × [-ε, 0]_s ×∂W_- → N(∂W_-) ≅ [-ε, 0]_s ×∂W_-, and we glue Σ_0 and W_+^♭ instead by the corresponding map ψ̅: {1}_z × [-ε, 0]_s ×∂W_- →ψ̅(N(∂W_-)) ⊂ W_+^♭ induced by the original contactomorphism. See <ref>. Note that Σ is naturally an oriented manifold such that W_+^♭ and -W_- are naturally oriented submanifolds. That is, the orientation of Σ agrees with the orientation of W_+^♭ induced by dλ_+ and disagrees with the orientation of W_- induced by dλ_-. Next we define the folded symplectic structure. Consider the 1-form e^s α_- on [-ε, 1]_s ×∂W_-. Let π: M_0 → [-ε, 1]_s ×∂W_- be the projection and define λ_0 on Σ_0 by λ_0 := π^*(e^s α_-). By construction, we may extend λ_0 to a 1-form λ on all of Σ via λ_+ on W^♭_+ and λ_- on W_-, both of which agree with λ_0 on their respective gluing regions. By the orientation remark above, it follows that dλ is a folded symplectic form on Σ with fold Γ = {z=0}∩Σ_0. Moreover, λ is a positive contact-type Liouville form inducing on the fold the same contact structure as the boundary of both initial Weinstein domains. Indeed, the induced 1-form on the fold is e^1 α_-. Finally, we upgrade this exact folded symplectic structure to a folded Weinstein structure in the natural way. By shifting the Morse functions ϕ_+^♭:W_+^♭→ℝ and ϕ_-:W_- →ℝ by constants, we may assume that sup ϕ_+^♭ = -1 and inf(-ϕ_-) = 1. Define ϕ: Σ→ℝ by ϕ := ϕ_+^♭ on W_+^♭, ϕ := -ϕ_- on W_-, and by ϕ := -z on Σ_0 ∩{s≥ 0}. Then by construction, (Σ, λ, ϕ) is a folded Weinstein manifold. Let (W_+, λ_+, ϕ_+) and (W_-, λ_-, ϕ_-) be two compact Weinstein domains of the same dimension, and let ψ: (∂W_-, ξ_-) → (∂W_+, ξ_+) be a contactomorphism of the boundaries. Define the asymmetric double of W_+ and W_- along ψ, denoted D(W_+, W_-, ψ), to be the folded Weinstein manifold (Σ, λ, ϕ) obtained as the result of the above construction. §.§ Folded Lefschetz fibrations The following definition is the folded analogue of <ref>. An (abstract) folded Weinstein Lefschetz fibration is the data ((W_0^2n-2, λ_0, ϕ_0); ℒ^+ = (L_1^+, …, L_k_+^+), ℒ^- = (L_1^-, …, L_k_-^-)), abbreviated (W_0; ℒ^+, ℒ^-), where (W_0^2n-2, λ_0, ϕ_0) is a Weinstein domain and both (W_0; ℒ^+) and (W_0; ℒ^-) are abstract Weinstein Lefschetz fibrations satisfying τ_L_k_+^+∘⋯∘τ_L_1^+ = τ_L_k_-^-∘⋯∘τ_L_1^-, where τ_L_j^±: W_0 → W_0 is the positive symplectic Dehn twist around L_j^± and equality is in the sense of symplectic isotopy. The total space of (W_0; ℒ^+, ℒ^-), denoted |W_0; ℒ^+, ℒ^-|, is the folded Weinstein manifold Σ obtained by taking the asymmetric double of |W_0; ℒ^+| and |W_0; ℒ^-|, i.e., in the notation of <ref>, Σ := D(|W_0; ℒ^+|, |W_0; ℒ^-|, id). Let (Σ, λ, ϕ) be a folded Weinstein manifold. We say that (Σ, λ, ϕ) is supported by a folded Weinstein Lefschetz fibration (W_0; ℒ^+, ℒ^-) if |W_0; ℒ^+, ℒ^-| is folded Weinstein homotopic to (Σ, λ, ϕ). § PROOFS OF MAIN RESULTS There are three subsections. <ref> contains the proof of <ref>, i.e., the correspondence between contact germs and exact folded symplectic forms with a positive contact-type Liouville form. <ref> contains the proof of <ref>, i.e., the upgraded folded Weinstein existence result. We also include a discussion on how our proof compares with the method of Cannas da Silva in <cit.>.
Finally, <ref> contains the proof of <ref>, i.e., the existence of supporting folded Weinstein Lefschetz fibrations. §.§ Proof of <ref> We prove the equivalence of (1) and (2) via two lemmas. The first lemma, <ref>, describes how an exact folded symplectic structure on Σ determines the germ of a contact structure along Σ, assuming a positive contact-type Liouville form. This proves that (1) implies (2). The second lemma, <ref>, gives the converse direction, namely, that a convex hypersurface naturally acquires an exact folded symplectic form with positive contact-type Liouville form. This completes the proof of <ref>. Let Σ be a closed and oriented manifold of dimension 2n≥ 2, and let ω = dλ be an exact folded symplectic form with positive contact-type Liouville form λ and contact fold (Γ, ξ_Γ). Then there exists a smooth function f:Σ→ℝ such that {f=0} = Γ and such that α := fdt + λ is a vertically-invariant contact form on Σ×ℝ_t. In particular, Σ = Σ×{0} is a convex hypersurface with contact dividing set (Γ, ξ_Γ). As usual, let ι:Γ↪Σ denote the inclusion of the fold into Σ. Let λ_Γ := ι^*λ and likewise let ω_Γ := ι^*ω = dλ_Γ. For any function f:Σ→ℝ and α = fdt + λ, we have α∧(dα)^n = dt∧(fω^n - n df∧λ∧ω^n-1). Thus, we need to show that Ω_f := fω^n - n df∧λ∧ω^n-1 is an orienting volume form on Σ for some choice of f. We first localize near the fold. Since λ is a contact-type Liouville form, the null-foliation ker ω_Γ is the Reeb direction of λ_Γ. Therefore, by <ref>, we may identify a neighborhood N(Γ) ≅ Γ× (-ε, ε)_τ of the fold in Σ such that ω = ω_Γ - d(τ^2 λ_Γ) = d[(1 - τ^2)λ_Γ] = -2τ dτ∧λ_Γ + (1-τ^2)ω_Γ. In particular, on N(Γ) we have λ = (1 - τ^2)λ_Γ. Without loss of generality we may assume ε < 1. Now we compute, with the end goal of Ω_f in mind. We have ω^n-1 = 2(n-1)τ(1-τ^2)^n-2 λ_Γ∧ω_Γ^n-2∧dτ + (1-τ^2)^n-1 ω_Γ^n-1, ω^n = 2nτ(1-τ^2)^n-1 λ_Γ∧ω_Γ^n-1∧dτ, λ∧ω^n-1 = (1-τ^2)^n λ_Γ∧ω_Γ^n-1, and hence, assuming f = f(τ) on N(Γ), Ω_f = fω^n - n df∧λ∧ω^n-1 = n(1-τ^2)^n-1[2τ f(τ) + f'(τ)(1-τ^2)] λ_Γ∧ω_Γ^n-1∧dτ. Since λ is a positive contact-type Liouville form, λ_Γ∧ω_Γ^n-1∧dτ > 0 on N(Γ). Hence it suffices to choose a function f(τ) such that 2τ f(τ) + f'(τ)(1-τ^2) > 0. Let f:[-ε, ε]_τ→ℝ be a smooth function satisfying f(±ε) = ±1, f'(±ε) = 0, f'(τ) > 0 for -ε < τ < ε, and f(0) = 0; see <ref>. Then 2τ f(τ) > 0 for τ≠ 0 and f'(τ)(1-τ^2) > 0 for -ε < τ < ε, hence 2τ f(τ) + f'(τ)(1-τ^2) > 0 for all -ε≤τ≤ε. With f defined appropriately on (the closure of) N(Γ), it remains to extend f across Σ∖N(Γ). Thanks to the condition f'(±ε) = 0, we can do so smoothly by defining f ≡±1 on R_±∖N(Γ). Then on Σ∖N(Γ) we have df = 0 and hence on this region Ω_f = sgn(f)ω^n. Since ω is folded symplectic, Ω_f > 0. This completes the proof. Let Σ be a closed and orientable manifold of dimension 2n≥ 2, and let α = fdt + β be a contact form on Σ×ℝ_t, where f: Σ→ℝ and β∈Ω^1(Σ) are t-independent. Let (Γ = {f=0}, ξ_Γ) be the contact dividing set of Σ = Σ×{0}. Then there is a smooth function g:Σ→ℝ_>0 such that ω := d(gβ) is an exact folded symplectic form with positive contact-type Liouville form and contact fold (Γ, ξ_Γ). The proof is similar to the proof of <ref>, appealing instead to the normal form near the dividing set of a convex hypersurface.
Step 1: Normalize f. First, we rescale α = fdt + β by a positive function Σ→ℝ_>0 to produce a different t-invariant contact form for the same contact structure. To describe the normalization we localize near the dividing set. By <ref>, near Γ there are coordinates on a neighborhood N(Γ) ≅ Γ× (-ε, ε)_τ of Γ in Σ such that f = τ and β = β_Γ, where β_Γ = ι^*β is a contact form on Γ. Fix 0 < δ < ε and define a smaller collar neighborhood N^δ(Γ) := Γ× (-δ, δ)_τ. Next we construct a smooth function μ: Σ→ℝ_>0 as follows: ♢ On Σ∖N(Γ), set μ = |f|. ♢ On N(Γ), let μ = μ(τ) be a smooth function that agrees with |τ| near τ = ±ε, is a constant value μ≡ε' for some δ≪ε' < ε on N^δ(Γ), satisfies 0≤μ'(τ) ≤ 1 for δ < τ < ε, and satisfies -1≤μ'(τ) ≤ 0 for -ε < τ < -δ; see <ref>. Set α̃ := f̃dt + β̃ where f̃ := 1/μ f and β̃ := 1/μ β. Then f̃:Σ→ℝ and β̃ satisfy the following properties: * On R_±∖N(Γ), we have df̃ = 0 and ±(dβ̃)^n > 0. * On N^δ(Γ) = Γ× (-δ, δ)_τ, we have f̃ = Cτ for some constant C > 0 and β̃ = β̃_Γ := ι^*β̃, where β̃_Γ is a contact form on Γ. * On N(Γ) we have f̃'(τ) ≥ 0. Property (1) follows from <ref>. Indeed, since df̃ = 0, f̃(dβ̃)^n is an orienting volume form on R_±∖N(Γ). Property (2) is immediate by construction. To see (3), note that on N(Γ) we have f̃'(τ) = μ(τ)^-2(μ(τ) - τμ'(τ)). By the construction of μ, we have μ(τ) ≥ |τ| and |τμ'(τ)| ≤ |τ|, so μ(τ) - τμ'(τ) ≥μ(τ) - |τ| ≥ 0, which implies f̃'(τ) ≥ 0. We now rename f = f̃ and β = β̃. Step 2: Define g. With our contact form normalized as in Step 1, we define g := e^-f^2 and ω := d(gβ). We need to prove that ω is an exact folded symplectic form with fold Γ and positive contact-type Liouville form. We have ω = dg∧β + g dβ and so ω^n = n g^n-1 dg∧β∧(dβ)^n-1 + g^n(dβ)^n. Away from N(Γ), f = ±1 on R_± and so dg = 0. Thus, in this region ω^n = e^-n(dβ)^n. By property (1), ±ω^n > 0 on R_±∖N(Γ). On N^δ(Γ), where f = Cτ, we have dg = -2Cf(τ)e^-f(τ)^2 dτ and so ω^n = -2nCf(τ)e^-nf(τ)^2 dτ∧β_Γ∧(dβ_Γ)^n-1 + e^-nf(τ)^2(dβ_Γ)^n = 2nCf(τ)e^-nf(τ)^2 β_Γ∧(dβ_Γ)^n-1∧dτ, where the second equality follows from the fact that (dβ_Γ)^n = 0 on Γ^2n-1 by a dimension count. Since β_Γ is a contact form, β_Γ∧(dβ_Γ)^n-1∧dτ is an orienting volume form on Γ× (-δ, δ)_τ. On the transition region N(Γ)∖N^δ(Γ), we may write gβ = h(τ)β_Γ for a positive function h = h(τ), and the same computation gives ω^n = -n h(τ)^n-1 h'(τ) β_Γ∧(dβ_Γ)^n-1∧dτ; from the properties of f and μ above, one checks that ∓h'(τ) > 0 for ±τ > 0, so ±ω^n > 0 on this region as well. By our normalization of f, it follows that ω^n ⋔ 0 exactly along Γ, as desired. The fact that gβ is a positive contact-type Liouville form is immediate, since ι^*(gβ) = β_Γ, the latter of which is a positive contact form by assumption. This completes the proof. §.§ Proof of <ref> Recall that an almost contact structure on an odd-dimensional manifold M^2n+1 is a hyperplane distribution H⊂ TM together with a complex bundle structure J:H → H. Equivalently, we may view an almost contact structure as a pair (α, ω) where α is a non-vanishing 1-form on M and ω is a non-degenerate 2-form on H := ker α. The following fact is standard (see, for example, <cit.>), but we include a proof for completeness. Let Σ be a closed and oriented manifold of dimension 2n≥ 2 with a stable almost complex structure. Then Σ×ℝ has an almost contact structure. Let J_0 be a complex (bundle) structure on TΣ⊕ℝ^2 →Σ, where ℝ^2 →Σ is the trivial rank-2 bundle. Let π: Σ×ℝ→Σ be the projection. Then J := π^*J_0 is a complex structure on the pullback bundle π^*(TΣ⊕ℝ^2) →Σ×ℝ. Note that π^*(TΣ⊕ℝ^2) ≅ T(Σ×ℝ) ⊕ℝ^1, where, on the right, ℝ^1 →Σ×ℝ is the trivial rank-1 bundle. Let H be the hyperplane distribution of complex tangencies in T(Σ×ℝ) induced by J, i.e., H := J[T(Σ×ℝ) ⊕ 0] ∩ [T(Σ×ℝ) ⊕ 0], viewed as a subbundle of T(Σ×ℝ). Then (H, J|_H) is an almost contact structure on Σ×ℝ.
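One point implicit in the proof above deserves spelling out; the following elaboration is ours. To see that H is indeed a hyperplane field, let pr: T(Σ×ℝ)⊕ℝ^1 →ℝ^1 denote the projection onto the trivial summand, so that H = ker(pr∘J|_T(Σ×ℝ)⊕0). It suffices to check that pr∘J is fiberwise surjective on T(Σ×ℝ)⊕0. If it vanished on a fiber, then T(Σ×ℝ)⊕0 would be J-invariant there; but a complex vector space has even real dimension, while T(Σ×ℝ) has odd rank 2n+1, a contradiction. Hence H has corank 1. Moreover, H is J-invariant: if v∈ H then v, Jv∈ T(Σ×ℝ)⊕0, and hence Jv and J^2v = -v both lie in T(Σ×ℝ)⊕0, i.e., Jv∈ H.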
Let Σ be a closed and oriented manifold of dimension 2n ≥ 2 with a stable almost complex structure. By <ref>, there is an almost contact structure on Σ×ℝ. By Gromov's h-principle for the existence of contact structures on open manifolds <cit.>, the almost contact structure may be homotoped to a genuine contact structure ξ on Σ×ℝ. By <ref>, there is an embedding ι:Σ↪Σ×ℝ which is C^0-close to the 0-inclusion ι_0(Σ) = Σ×{0} such that ι(Σ) is a Weinstein convex hypersurface in (Σ×ℝ, ξ). This means that there is a smooth Morse function ϕ: ι(Σ) →ℝ with dividing set ϕ^-1(0) for which the characteristic foliation of ι(Σ) is gradient-like. By <ref>, there is an exact folded symplectic structure dλ on ι(Σ) with positive contact-type Liouville form whose fold coincides with the dividing set of ι(Σ). Moreover, the Liouville vector field X_λ of the folded form directs the characteristic foliation of ι(Σ) on R_+, and directs the oppositely oriented characteristic foliation on R_-. This means that X_λ is gradient-like for ±ϕ|_R_± and hence (λ, ϕ) is a folded Weinstein structure on ι(Σ). This pulls back to a folded Weinstein structure on Σ, as desired.

We close this subsection with a discussion of how our proof of <ref> compares and contrasts with the technique of <cit.> in proving existence of folded symplectic forms. The situation is summarized by <ref>. Briefly and informally, Cannas da Silva's existence proof proceeds as follows:

* Assume that Σ is equipped with a stable almost complex structure. From this one can induce an almost complex structure J on Σ×ℝ^2.
* By Gromov's h-principle for existence of symplectic structures on open manifolds <cit.>, after homotoping J there is an honest symplectic structure ω compatible with J. Moreover, as in <ref>, J induces a co-orientable almost contact structure on the codimension-1 submanifold Σ×ℝ.
* Let ι:Σ×ℝ↪Σ×ℝ^2 be the inclusion. Since Σ×ℝ^2 is symplectic, the leaf space (Σ×ℝ)/L of the rank-1 foliation L determined by ι^*ω is locally symplectic. One then applies Eliashberg's h-principle for folded immersions relative to a foliation <cit.> to Σ→Σ×ℝ relative to L to pull back the locally symplectic structure on the leaf space to a folded symplectic form on Σ.

§.§ Proof of <ref>

Finally, we establish existence of supporting folded Weinstein Lefschetz fibrations on all folded Weinstein manifolds. Let (Σ, λ, ϕ) be a folded Weinstein manifold with fold (Γ, ξ_Γ) and positive (resp. negative) region R_±. Let (±R_±^ε, λ, ±ϕ) be the Weinstein domain given by a slight compact retract of R_± as in <ref>. By <ref> of Giroux and Pardon <cit.>, there is a Weinstein Lefschetz fibration (W_±; ℒ̃^±) whose total space |W_±; ℒ̃^±| is Weinstein homotopic to (±R_±^ε, λ, ±ϕ). Note that the Weinstein domains R_+^ε and -R_-^ε each have boundaries contactomorphic to (Γ, ξ_Γ). Thus, by the Giroux correspondence <cit.>, the abstract open book decompositions (W_+, τ_ℒ̃^+) and (W_-, τ_ℒ̃^-) may be stabilized to a common abstract open book (W_0, ψ). Each of the open book stabilizations is induced by a Lefschetz fibration stabilization, and hence we have
ψ = τ_ℒ_0^+∘τ_ℒ̃^+ = τ_ℒ_0^-∘τ_ℒ̃^-,
where ℒ_0^± is the tuple of additional vanishing cycles in W_0 involved in stabilizing (W_±, τ_ℒ̃^±) to (W_0, ψ). This means that the total space
|W_0; ℒ^± := ℒ_0^±∪ℒ̃^±|
is also Weinstein homotopic to (±R_±^ε, λ, ±ϕ).
It follows that the folded Weinstein Lefschetz fibration (W_0; ℒ^+, ℒ^-) has total space folded Weinstein homotopic to the double of (R_+^ε, λ, ϕ) and (-R_-^ε, λ, -ϕ), which is folded Weinstein homotopic to (Σ, λ, ϕ). This completes the proof. | http://arxiv.org/abs/2311.16058v1 | {
"authors": [
"Joseph Breen"
],
"categories": [
"math.SG",
"53D10 (Primary)"
],
"primary_category": "math.SG",
"published": "20231127182229",
"title": "Folded symplectic forms in contact topology"
} |
Optimal Transport Aggregation for Visual Place Recognition
Sergio Izquierdo, Javier Civera
27 November 2023
==============================================

The task of Visual Place Recognition (VPR) aims to match a query image against references from an extensive database of images from different places, relying solely on visual cues. State-of-the-art pipelines focus on the aggregation of features extracted from a deep backbone, in order to form a global descriptor for each image. In this context, we introduce SALAD (Sinkhorn Algorithm for Locally Aggregated Descriptors), which reformulates NetVLAD's soft-assignment of local features to clusters as an optimal transport problem. In SALAD, we consider both feature-to-cluster and cluster-to-feature relations and we also introduce a `dustbin' cluster, designed to selectively discard features deemed non-informative, enhancing the overall descriptor quality. Additionally, we leverage and fine-tune DINOv2 as a backbone, which provides enhanced description power for the local features, and dramatically reduces the required training time. As a result, our single-stage method not only surpasses single-stage baselines in public VPR datasets, but also surpasses two-stage methods that add a re-ranking with significantly higher cost. Code and models are available at https://github.com/serizba/salad.

§ INTRODUCTION

Recognizing a place solely from images becomes a challenging task when scenes undergo substantial changes in their structure or appearance. Such capability is referred to in the scientific and technical literature as visual place recognition (and by its acronym VPR), and is essential for agents to navigate and understand their surroundings autonomously in a wide array of applications, such as robotics <cit.> or augmented reality <cit.>. Specifically, it is present in simultaneous localization and mapping <cit.> and absolute pose estimation <cit.> pipelines. In practice, VPR is framed as an image retrieval problem, wherein typically a query image serves as the input and the goal is to obtain an ordered list of top-k matches against a pre-existing database of geo-localized reference images. Images are represented as an aggregation of appearance pattern descriptors, which are subsequently compared via nearest neighbour. The effectiveness of this matching relies on generating discriminative per-image descriptors that exhibit robust performance even for challenging variations such as fluctuating illumination, structural transformations, temporal changes, and weather and seasonal shifts.

Most recent research on VPR has thus focused on the two key components of this general pipeline, namely the deep neural backbones for feature extraction and methods for aggregating such features. For years, ResNet-based neural networks have been the predominant backbones for feature extraction <cit.>. Recently, given the success of the Vision Transformer (ViT) for different computer vision tasks <cit.>, some methods have introduced ViT in the field of VPR <cit.>. AnyLoc <cit.> proposed to leverage foundation models, using DINOv2 <cit.> as a feature extractor for VPR. However, AnyLoc uses DINOv2 `as is', while we show in this paper that fine-tuning the model for VPR brings a significant increase in performance.

Regarding aggregation, the handcrafted VLAD <cit.> and its learned counterpart NetVLAD <cit.> are among the most popular choices. Basically, they aggregate local descriptors by quantizing them into a set of clusters and storing the sum of residuals per cluster; a minimal sketch of this scheme is given below.
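The following NumPy sketch of classic hard-assignment VLAD is ours and purely illustrative; the function name and sizes are assumptions, not from the original.

```python
import numpy as np

# Illustrative sketch of classic VLAD: hard-assign each local descriptor to its
# nearest centroid (visual word) and store the sum of residuals per cluster.
def vlad(descriptors, centroids):
    m, d = centroids.shape
    dists = ((descriptors[:, None, :] - centroids[None, :, :])**2).sum(-1)
    assign = dists.argmin(axis=1)                            # hard assignment
    V = np.zeros((m, d))
    for j in range(m):
        V[j] = (descriptors[assign == j] - centroids[j]).sum(axis=0)
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12    # intra-normalization
    V = V.ravel()
    return V / (np.linalg.norm(V) + 1e-12)                   # global L2 normalization

rng = np.random.default_rng(0)
print(vlad(rng.normal(size=(500, 64)), rng.normal(size=(16, 64))).shape)  # (1024,)
```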
Alternative methods include pooling layers like GeM <cit.> or learned global aggregation, like the recent MixVPR <cit.>. In this paper, we propose optimal transport aggregation, setting a new state of the art in VPR.

As a summary, in this work, we present a single-stage approach to VPR that obtains state-of-the-art results in the most common benchmarks. To achieve this, we present two key contributions:

* First, we propose SALAD (Sinkhorn Algorithm for Locally Aggregated Descriptors), a reformulation of the feature-to-cluster assignment problem through the lens of optimal transport, allowing more effective distribution of local features into the global descriptor bins. To further improve the discriminative power of the aggregated descriptor, we let the network discard uninformative features by introducing a `dustbin' mechanism.
* Secondly, we integrate the representational power of foundation models into VPR, using the DINOv2 model as the backbone for feature extraction. Unlike previous approaches that utilized DINOv2 in its pre-trained form, our method involves fine-tuning the model specifically for the task. This fine-tuning process converges extremely fast, in just four epochs, and allows DINOv2 to capture more relevant and distinctive features pertinent to place recognition tasks.

The fusion of these two novel components results in DINOv2 SALAD, which can be efficiently trained in less than one hour and sets unprecedented recalls in VPR benchmarks, with 75.0% Recall@1 in MSLS Challenge and 76.0% in Nordland. All of this with a single-stage pipeline, without requiring expensive post-processing steps and with an inference speed of less than 3 ms per image.

§ RELATED WORK

The significant research efforts on VPR have been exhaustively compiled in a number of surveys and tutorials over the years <cit.>. Current research addresses a wide variety of topics, such as novel loss functions <cit.>, image sequences <cit.>, extreme viewpoint changes <cit.> or text features <cit.>. In this section, we focus on work related to feature extraction and aggregation, as there lie our contributions.

Early approaches to VPR used either aggregations of handcrafted local features <cit.> or global descriptors <cit.>. In both cases, geometric <cit.> and temporal <cit.> consistency was sometimes enforced for enhanced performance. With the emergence of deep neural networks, features pre-trained for recognition tasks, without fine-tuning, showed a significant performance boost over handcrafted ones <cit.>. However, training or fine-tuning specifically for VPR tasks using contrastive or triplet losses <cit.> offers an additional improvement and is standard nowadays. NetVLAD <cit.> is the most popular architecture explicitly designed for VPR, mimicking the VLAD aggregation <cit.> but jointly learning from data both convolutional features and cluster centroids. <cit.> proposed the Generalized Mean Pooling (GeM) to aggregate feature activations, also a popular baseline due to its simplicity and competitive performance. In addition to these, several other alternatives have been proposed in the literature. For example, <cit.> aggregates regions instead of local features. Recently, MixVPR <cit.> has presented the best results in the literature by combining deep features with an MLP layer.

A notable trend in VPR has been the adoption of a two-stage approach to enhance retrieval accuracy <cit.>.
After a first stage with any of the methods presented in the previous paragraph, the top retrieved candidates are re-ranked attending to the un-aggregated local features, either assessing their geometric consistency with the query image or predicting their similarity. This re-ranking stage adds a considerable overhead, which is why it is only applied to a few candidates, but generally improves the performance. Re-ranking is out of the scope of our research but, notably, we outperform all baselines that employ re-ranking even though our model does not include such a stage (and hence it is substantially faster).

Optimal transport has found a significant number of applications in graphics and computer vision <cit.>. Specifically, related to our research, it has been used for image retrieval <cit.>, image matching <cit.> and feature matching <cit.>. Recently, <cit.> used optimal transport at the re-ranking stage in a retrieval pipeline. However, ours is the first work that proposes the formulation of local feature aggregation from an optimal transport perspective.

§ METHOD

DINOv2 SALAD is based on NetVLAD, but we propose to use and fine-tune the DINOv2 backbone (<ref>) and propose a novel module (SALAD) for the assignment (<ref>) and aggregation (<ref>) of features.

§.§ Local Feature Extraction

Effective local feature extraction lies in striking a balance: features must be robust enough to withstand substantial changes in appearance, such as those between seasons or from day to night, yet they should retain sufficient information on local structure to enable accurate matching. Inspired by the success of ViT architectures in many computer vision tasks and by AnyLoc <cit.>, which leverages the exceptional representational capabilities of foundation models <cit.>, we adopt DINOv2 <cit.> as our backbone. However, differently from AnyLoc, we use a supervised pipeline and include the backbone in the end-to-end training for the specific task, yielding improved performance.

DINOv2 adopts a ViT architecture that initially divides an input image 𝐈∈ℝ^h×w×c into p×p patches, with p=14. These patches are sequentially projected with transformer blocks, resulting in the output tokens {𝐭_1, …, 𝐭_n, 𝐭_n+1}, 𝐭_i∈ℝ^d, where n = hw/p^2 is the number of input patches and there is an added global token 𝐭_n+1 that aggregates class information. Although DINOv2's authors reported that fine-tuning the model only brings dim improvements, we found that, at least for VPR, there are substantial gains in selectively unfreezing and training the last blocks of the encoder.

§.§ Assignment

In NetVLAD, a global descriptor is formed by assigning a set of features to a set of clusters, {C_1, …, C_j, …, C_m}, and then aggregating all features that belong to each cluster. For the assignment, NetVLAD computes a score matrix 𝐒∈ℝ_>0^n×m, where the element in its i^th row and j^th column, s_i,j∈ℝ_>0, represents the score of assigning the i^th feature to cluster C_j. In other words, 𝐒 quantifies the affinity of each feature to each cluster. While SALAD draws inspiration from NetVLAD, we identify several crucial aspects in their assignment and propose alternatives to address these.

Reduce assignment priors. When building the score matrix S, NetVLAD introduces certain priors. Specifically, it initializes the linear layer that computes S with centroids derived from k-means. While this may accelerate the training, it introduces inductive bias and potentially makes the model more susceptible to local minima.
In contrast, we propose to learn each row s_i of the score matrix from scratch with two fully connected layers initialized randomly:

s_i = 𝐖_s_2(σ(𝐖_s_1(t_i) + 𝐛_s_1)) + 𝐛_s_2,

where 𝐖_s_1, 𝐖_s_2 and 𝐛_s_1, 𝐛_s_2 are the weights and biases of the layers, and σ is a non-linear activation function.

Discard uninformative features. Some features, such as those representing the sky, might contain negligible information for VPR. NetVLAD does not account for this, and the contribution of all features is preserved in the final descriptor. In contrast, we follow recent works on keypoint matching and introduce a `dustbin' to which non-informative features are assigned. For that, we augment the score matrix, from S to S̅ = [S, s̅_i,m+1] ∈ℝ_>0^n×(m+1), by appending the column s̅_i,m+1 representing the feature-to-dustbin relation. As in SuperGlue <cit.>, this score is modeled with a single learnable parameter z∈ℝ:

s̅_i,m+1 = z1_n,

with 1_n = [1, …, 1]^⊤∈ℝ^n an n-dimensional vector of ones.

Optimal assignment. The original NetVLAD assignment computes a per-row softmax over 𝐒 to obtain the distribution of each feature's mass across the clusters. However, this approach only considers the feature-to-cluster relationship and overlooks the reverse—the cluster-to-feature relation. For this reason, we reformulate the assignment as an optimal transport problem where the features' mass, μ = 1_n, must be effectively distributed among the clusters or the `dustbin', κ = [1^⊤_m, n-m]^⊤. We follow SuperGlue <cit.> and use the Sinkhorn Algorithm <cit.> to obtain the assignment P̅∈ℝ^n×(m+1) such that

P̅1_m+1 = μ and P̅^⊤1_n = κ.

This algorithm finds the optimal transport assignment between the distributions μ and κ by iteratively normalizing the rows and columns of exp(S̅). Finally, we drop the dustbin column to obtain the assignment P = [p̅_∗,1, …, p̅_∗,m], where p̅_∗,j stands for the j^th column of P̅.

§.§ Aggregation

Once the feature assignment in our SALAD framework is computed as detailed in <ref>, we focus on the aggregation of these assigned features to form the final global descriptor. The aggregation process in NetVLAD involves combining all features assigned to each cluster C_j. However, we introduce three variations:

Dimensionality reduction. To efficiently manage the final descriptor size, we first reduce the dimensionality of the tokens from ℝ^d to ℝ^l. This is achieved by processing the features through two fully connected layers, precisely adjusting the size of the feature vectors while retaining the information essential for the task:

f_i = 𝐖_f_2(σ(𝐖_f_1(t_i) + 𝐛_f_1)) + 𝐛_f_2.

Aggregation. Based on the assignment matrix derived using the Sinkhorn Algorithm, each feature is aggregated into its assigned cluster. Differently from NetVLAD, we do not subtract the centroids to get the residuals. We directly aggregate these features with a summation, reducing the incorporated priors about the aggregation. Viewing the resulting VLAD vector as a matrix V∈ℝ^m×l, each element V_j,k∈ℝ is computed as follows:

V_j,k = ∑_i=1^n P_i,j · f_i,k,

where f_i,k corresponds to the k^th dimension of f_i, with k∈{1, …, l}.

Global token. To include global information about the scene not easily incorporated into local features, we also incorporate a scene descriptor g computed as:

g = 𝐖_g_2(σ(𝐖_g_1(t_n+1) + 𝐛_g_1)) + 𝐛_g_2,

where t_n+1 is the global token from DINOv2. We then concatenate g with V flattened; a minimal sketch of this assignment-and-aggregation pipeline is given below.
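The following PyTorch sketch is ours and purely illustrative: the random score matrix, projected features, and dustbin parameter are stand-ins for the learned modules, and the tensor sizes assume the configuration reported in the experiments. The log-domain normalization follows the SuperGlue formulation of the Sinkhorn Algorithm.

```python
import torch

def sinkhorn(scores, iters=3):
    # scores: (n, m+1) log-affinities including the dustbin column. Each feature
    # carries unit mass; each cluster absorbs mass 1 and the dustbin absorbs n - m.
    n, m1 = scores.shape
    log_mu = torch.zeros(n)                                   # log of mu = 1_n
    log_kappa = torch.cat([torch.zeros(m1 - 1),
                           torch.tensor([float(n - (m1 - 1))]).log()])
    u, v = torch.zeros(n), torch.zeros(m1)
    for _ in range(iters):                                    # alternate row/col scaling
        u = log_mu - torch.logsumexp(scores + v[None, :], dim=1)
        v = log_kappa - torch.logsumexp(scores + u[:, None], dim=0)
    return (scores + u[:, None] + v[None, :]).exp()           # transport plan

n, m, l = 529, 64, 128            # 23x23 patches for a 322x322 image, m clusters
S = torch.randn(n, m)             # feature-to-cluster scores (from the score MLP)
z = torch.zeros(1)                # learnable dustbin score (learned elsewhere)
S_bar = torch.cat([S, z.expand(n, 1)], dim=1)
P = sinkhorn(S_bar)[:, :m]                      # drop the dustbin column
f = torch.randn(n, l)                           # reduced local features f_i
V = torch.einsum('nm,nl->ml', P, f)             # V[j,k] = sum_i P[i,j] * f[i,k]
```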
Following NetVLAD, we apply an L2 intra-normalization and then an L2 normalization of the entire vector, yielding the final global descriptor.

§ EXPERIMENTS

To rigorously evaluate the effectiveness of our proposed contributions, we conducted exhaustive experiments following standard evaluation protocols. In <ref>, we present implementation details regarding architecture, training, and validation. Then, in <ref>, we compare our method with recent VPR methods, and, in <ref>, we show ablation studies that assess the importance of the different proposed contributions. <Ref> shows qualitative results that help to interpret SALAD's components.

§.§ Implementation Details

We ground our training and evaluation setups on the publicly provided framework by MixVPR[<https://github.com/amaralibey/MixVPR>]. For the architecture, we opt for a pretrained DINOv2 as our feature extraction backbone, targeting a balance between computational efficiency and representational capacity. We keep most of the backbone frozen, with the exception of the final 4 layers of the encoder. This approach enhances performance significantly without markedly increasing training time. For the fully connected layers, the weights of the hidden layers 𝐖_s_1, 𝐖_f_1 and 𝐖_g_1 have 512 neurons and use ReLU for the activation function σ. To optimize feature handling, we employ a dimensionality reduction, compressing feature and global token dimensions from d=768 to l=128. We use m=64 clusters, resulting in a global descriptor of size 128×64 + 256.

We train on GSV-Cities <cit.>, a large dataset of urban locations collected from Google Street View. Given the impressive representation power of DINOv2, our pipeline achieves training convergence within just 4 complete epochs. Using a batch size of 60 places, each represented by 4 images, the training is completed in 30 minutes on a single NVIDIA 3090GTX. We use the Multi-Similarity loss <cit.> and AdamW <cit.> for the optimization, with the learning rate set to 6e-5. To ensure an effective learning rate, we reduce the initial rate linearly at every step down to 20% of its initial value. We use a dropout of 0.3 on the score projection and dimensionality reduction neurons. As our model is agnostic to the image input size (as long as it can be divided into 14×14 patches), we evaluate on images of size 322×322 but train on 224×224 to speed up training time.

To validate our experiments and obtain the best set of hyperparameters, we monitored the recall on the Pittsburgh30k-test <cit.>. We observed that in the long run most configurations perform similarly, but rapid convergence within a few epochs is more sensitive to the hyperparameters.

§.§ Results

We benchmarked our model against several single-stage baselines, namely NetVLAD <cit.> and GeM <cit.> as two representative traditional baselines, and CosPlace <cit.>, MixVPR <cit.> and EigenPlaces <cit.> as the three most recent and best performing baselines in the literature. The evaluation spanned a diverse array of well-established datasets: MSLS Validation and Challenge <cit.>, which are comprised of dashcam images; Pittsburgh30k-test and Pittsburgh250k-test <cit.>, featuring urban scenarios; SPED <cit.>, a collection from surveillance cameras; and NordLand, notable for its seasonal variations, with images captured from the front of a train traversing Norway. We use Recall@k (R@k) as the metric for all our experiments, as it is standard in related work. We use evaluation data and code from MixVPR <cit.>, which considers a retrieval as correct if an image at less than 25 meters from the query is among the top-k predicted candidates.
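For concreteness, here is a hypothetical sketch of this protocol (ours, not the MixVPR evaluation code); it assumes L2-normalized descriptors, so the dot product is the cosine similarity, and planar UTM coordinates for the 25 m positive radius.

```python
import numpy as np

# A query counts as correct at rank k if any of its top-k references lies within 25 m.
def recall_at_k(q_desc, r_desc, q_utm, r_utm, ks=(1, 5, 10), radius=25.0):
    sims = q_desc @ r_desc.T                                  # cosine similarities
    order = np.argsort(-sims, axis=1)                         # references, best first
    dists = np.linalg.norm(q_utm[:, None, :] - r_utm[None, :, :], axis=-1)
    hits = dists < radius                                     # ground-truth positives
    return {k: np.mean([hits[i, order[i, :k]].any()
                        for i in range(len(q_desc))]) for k in ks}

rng = np.random.default_rng(0)                                # tiny synthetic demo
q, r = rng.normal(size=(10, 8)), rng.normal(size=(100, 8))
q /= np.linalg.norm(q, axis=1, keepdims=True)
r /= np.linalg.norm(r, axis=1, keepdims=True)
print(recall_at_k(q, r, rng.uniform(0, 200, (10, 2)), rng.uniform(0, 200, (100, 2))))
```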
As shown in Table <ref>, our model outperforms all previous methods on all datasets and all metrics. It is worth highlighting the metrics saturation observed in MSLS Val, Pitts250k-test and SPED, and, on the other hand, the challenging nature of MSLS Challenge and NordLand, for which all baselines show lower R@k. The MSLS Challenge dataset, with its diversity, extensive size and closed labels, and NordLand, with its extreme sample similarity and seasonal shifts, emerge then as key benchmarks for assessing VPR performance. Although our DINOv2 SALAD shows a significant improvement on all benchmarks, it is precisely in MSLS Challenge and NordLand where we obtain the most substantial recall increases, with +7.6%, +11.7%, +9.6% and +17.6%, +14.6%, +12.0% for R@1, R@5, R@10, respectively, over the second best.

In Table <ref>, we compare our DINOv2 SALAD method, which operates solely on a single retrieval stage, against the leading two-stage Visual Place Recognition (VPR) techniques. In this comparison, we include the best performing models in the literature, namely R2Former <cit.>, TransVPR <cit.>, and Patch-NetVLAD <cit.>, which incorporate a re-ranking refinement. Note in this table how our DINOv2 SALAD, despite being orders of magnitude faster and smaller in memory, significantly outperforms all these two-stage methods on all benchmarks. This finding not only highlights the efficiency of our model but also demonstrates the effectiveness of global retrieval using our novel SALAD aggregation. Additionally, considering our method's reliance on local features, we believe that a re-ranking stage could also be applied on top of these, potentially increasing our recall metrics even further but at the price of a higher computational footprint.

§.§ Ablation Studies

In this section, we present different studies that evaluate the efficacy of the components and configurations in our proposed methods.

Effect of DINOv2. We assess the impact of the DINOv2 backbone and our optimal transport aggregation SALAD separately. For this, we compare with the existing baselines of ResNet NetVLAD and AnyLoc, the latter applying VLAD on top of a pretrained DINOv2 encoder. We integrate the DINOv2 backbone with various aggregation modules, obtaining a handful of performant techniques that improve their respective previous results. As shown in <Ref>, all of these configurations outperform the baselines, even though AnyLoc already uses DINOv2. This validates DINOv2's integration in end-to-end fine-tuning to refine its feature extraction capabilities.

Effect of SALAD. Our experiments in <Ref> show that aggregation also matters. Even the recent MixVPR aggregation coupled with DINOv2 does not match the performance of DINOv2 NetVLAD and DINOv2 SALAD. We believe that the DINOv2 backbone is especially suitable for local feature aggregation, as its features work remarkably well in dense visual perception tasks <cit.>. Although DINOv2 NetVLAD achieves comparable performance to SALAD, it employs a descriptor almost three times as big. Besides, the generalization performance of DINOv2 NetVLAD is limited, as observed in the NordLand results. We attribute this to the initialization of NetVLAD's priors with urban scenarios, which constrains the convergence of the system.
In our experiments we also trained a slimmer DINOv2 NetVLAD version, whose features are dimensionally reduced as described in <Ref>, targeting a final descriptor of roughly the same size as SALAD's. In this fairer setup, DINOv2 SALAD clearly outperforms DINOv2 NetVLAD.

Effect of hyperparameters. DINOv2 comes in different sizes (<Ref>) that affect the number of parameters, inference speed, and representation capabilities. As shown in <Ref>, more parameters do not always result in better performance. Excessively big models might be harder to train or prone to overfitting the training set. From these results, we chose the DINOv2-B backbone. Regarding the dimension reduction, we observed that the Recall@1 is quite stable for different descriptor sizes, with a slight peak at 128 dimensions and worse performance beyond that. A similar trade-off arises in <Ref> for the number of blocks to train. We observed that fine-tuning two or four blocks yields the best results without significant computation overhead.

Effect of SALAD components. In <Ref>, we show how different components of our SALAD pipeline affect the final performance. Both the global token, which appends global information not captured in local features, and the dustbin, which helps distill the aggregated features, contribute to the performance of SALAD. We also trained a model using a dual-softmax <cit.> to solve the optimal transport assignment, following LoFTR and GlueStick <cit.>. Although it achieves only slightly worse performance, the Sinkhorn Algorithm is theoretically sound and provides a better acronym for our method.
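For reference, the dual-softmax alternative amounts to replacing the Sinkhorn normalization with the product of a row-wise and a column-wise softmax, as in LoFTR. This short sketch is ours; the random scores are illustrative.

```python
import torch

# Dual-softmax assignment: product of per-row and per-column softmaxes.
S_bar = torch.randn(529, 65)                 # scores including the dustbin column
P_dual = S_bar.softmax(dim=1) * S_bar.softmax(dim=0)
P = P_dual[:, :-1]                           # drop the dustbin column, as in SALAD
```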
§.§ Introspective Results

We provide an introspection of our model's performance through a series of illustrative figures. <Ref> visualizes the weights that are not assigned to the `dustbin'. This visualization offers insight into the parts of the input image that the network considers informative and validates the effective use of the `dustbin' by our SALAD aggregation. In <Ref>, we display the assignment distribution of patches from two different images depicting the same place. It demonstrates the model's ability to consistently distribute most of the weights into the same bins for patches representing similar regions. Such repeatable and consistent assignment across different images of the same place is crucial for the reliability and performance of the system. Finally, in <Ref>, we showcase various query images alongside their respective top-3 retrievals made by our system. DINOv2 SALAD is able to retrieve correct predictions even under challenging conditions, such as severe changes in illumination or viewpoint.

§ CONCLUSIONS AND LIMITATIONS

In this paper, we have proposed DINOv2 SALAD, a novel model for VPR that outperforms previous baselines by a substantial margin. This achievement is the result of combining two key contributions: a fine-tuned DINOv2 backbone for enhanced feature extraction and our novel SALAD (Sinkhorn Algorithm for Locally Aggregated Descriptors) module for feature aggregation. Our extensive experiments demonstrate the effectiveness of these modules, highlighting the model's single-stage nature and exceptionally fast training and inference speed.

Our research primarily focused on standard benchmarks, predominantly outdoor environments. The use of the DINOv2 backbone, while effective in these contexts, might encounter limitations in scenarios vastly different from DINOv2's training distribution, such as medical images. Additionally, in SALAD we use an optimal transport assignment in its simplest form. More sophisticated constraints could improve the resulting assignment, a very relevant aspect for our future work. | http://arxiv.org/abs/2311.15937v1 | {
"authors": [
"Sergio Izquierdo",
"Javier Civera"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127154619",
"title": "Optimal Transport Aggregation for Visual Place Recognition"
} |
Bifurcation delay and front propagation in the real Ginzburg-Landau equation on a time-dependent domain
Edgar Knobloch^1
^1Department of Physics, University of California at Berkeley, Berkeley, CA 94720, USA
^2School of Mechanical, Aerospace, and Manufacturing Engineering, University of Connecticut, Storrs, CT 06269, USA
November 27, 2023
=======================================================================================================

This work analyzes bifurcation delay and front propagation in the one-dimensional real Ginzburg-Landau equation (RGLE) with periodic boundary conditions on monotonically growing or shrinking domains. First, we obtain closed-form expressions for the delay of primary bifurcations on a growing domain and show that the additional domain growth before the appearance of a pattern is independent of the growth time scale. We also quantify primary bifurcation delay on a shrinking domain; in contrast with a growing domain, the time scale of domain compression is reflected in the additional compression before the pattern decays. For secondary bifurcations such as the Eckhaus instability, we obtain a lower bound on the delay of phase slips due to a time-dependent domain. We also construct a heuristic model to classify regimes with arrested phase slips, i.e. phase slips that fail to develop. Then, we study how propagating fronts are influenced by a time-dependent domain. We identify three types of pulled fronts: homogeneous, pattern-spreading, and Eckhaus fronts. By following the linear dynamics, we derive expressions for the velocity and profile of homogeneous fronts on a time-dependent domain. We also derive the natural “asymptotic” velocity and front profile and show that these deviate from predictions based on the marginal stability criterion familiar from fixed domain theory. This difference arises because the time-dependence of the domain lifts the degeneracy of the spatial eigenvalues associated with speed selection and represents a fundamental distinction from the fixed domain theory that we verify using direct numerical simulations. The effect of a growing domain on pattern-spreading and Eckhaus front velocities is inspected qualitatively and found to be similar to that of homogeneous fronts. These more complex fronts can also experience delayed onset. Lastly, we show that dilution—an effect present when the order parameter is conserved—increases bifurcation delay and amplifies changes in the homogeneous front velocity on time-dependent domains. The study provides general insight into the effects of domain growth on pattern onset, pattern transitions, and front propagation in systems across different scientific fields.

§ INTRODUCTION

Pattern formation on a time-dependent domain arises in many systems across biology and physics <cit.>. Key examples include the expanding crown in the drop-splash problem <cit.>, structure formation in an expanding universe <cit.>, and a variety of reaction-diffusion problems on growing domains <cit.>. A variety of other problems, ranging from quantum mechanics <cit.> to control theory <cit.>, have also been studied on time-dependent domains. For a review of such problems, see <cit.>.

When the domain size becomes time-dependent, new phenomena emerge in the dynamic pattern formation process.
For example, the stability of pattern states can change, but on a time-dependent domain these changes occur at stronger forcing than predicted by quasi-static changes in the domain size—a phenomenon referred to as bifurcation delay <cit.>. A better understanding of bifurcation delay can provide additional physical insights into climate tipping points <cit.>, oscillator networks <cit.>, neuron firing <cit.>, and many other applications <cit.>. Bifurcation delay in systems described by nonautonomous ordinary differential equations has been extensively studied <cit.>, but similar phenomena in spatially extended systems described by partial differential equations are much less understood <cit.> and no explicit formulae are available outside of the adiabatic regime.

The phenomenon of front propagation, i.e., the motion of an interface between stable and unstable states or between two different stable states, is also of interest. The simplest example of a front occurs when a stable, spatially homogeneous state propagates into an unstable homogeneous state. Fronts of this type generally travel with a velocity that can be computed from linearized dynamics alone, although nonlinearities can amplify the propagation speed <cit.>. Fronts arise in many systems in fluid, chemical, and biological environments, such as vortex fronts in Taylor-Couette flow <cit.>, pearling instabilities of lipid bilayers <cit.>, and healing of epidermal wounds <cit.>. For a review of the theory and applications of fronts on a fixed domain, see <cit.>. Front propagation gains additional complexity on time-varying domains or in systems with evolving parameters <cit.>. It is currently unclear how a time-dependent domain affects front propagation.

In conserved systems bifurcation delay and front propagation are affected by the presence of a conserved quantity. As the domain expands, the concentration of this quantity is reduced, yielding a local growth-dependent effect known as dilution. A shrinking domain has the opposite effect of increasing the concentration. For one-dimensional isotropic domain growth, the effect of dilution can be transformed into a stabilizing time-dependent bifurcation parameter <cit.>.

To understand the properties of these phenomena on a time-dependent domain, we study the one-dimensional (1D) real Ginzburg-Landau equation (RGLE) model. The RGLE captures many of the phenomenological properties of pattern formation with minimal complexity. Furthermore, the RGLE provides a quantitative description of the amplitude modulation of more complex patterns near the onset of instability, such as those found in Rayleigh-Bénard convection and Taylor-Couette flow <cit.>. The dynamics of the RGLE on a fixed domain are well understood <cit.>. However, existing theory is often not generalizable to a time-dependent domain, which incorporates nonautonomous effects on an arbitrary time scale.

The RGLE on a time-dependent domain was previously studied in <cit.>, focusing on domains that change slowly relative to the intrinsic time scale. In that work, an expression for the bifurcation delay of a phase slip under slow domain changes was derived based on a time-dependent diffusion equation for the spatial phase <cit.>. A local theory for phase slips was also developed based on the regularity properties of parabolic partial differential equations (PDEs) to show how a growing (shrinking) domain increases (decreases) the time to phase slip <cit.>.
That work also showed that the nonlinear evolution of the Eckhaus instability can be described with a nonlinear porous-medium-type equation <cit.>.

The present work analyzes bifurcation delay and front propagation in the RGLE on a time-dependent domain and relaxes the slowly-time-varying assumption through both theoretical analysis and extensive direct numerical simulations. First, we obtain explicit formulae for the bifurcation delay of primary bifurcations in both growing and shrinking domains with varying time scales. For secondary bifurcation delay, we construct an energy bound on perturbations on a shrinking domain and classify regimes containing arrested phase slips, or phase slips that fail to develop due to re-stabilization on a growing domain. Next, we identify several types of fronts in the RGLE and explain how their dynamics change on a time-dependent domain. For homogeneous fronts, we derive a linear spreading velocity and a natural “asymptotic” velocity which compares well with direct numerical simulations. We also analyze the time evolution of the nonlinear front profile and its connection to the front velocity. Lastly, we show how dilution increases bifurcation delay and amplifies changes in the homogeneous front velocity.

In Section <ref>, we formulate the RGLE on a time-dependent domain. In Sections <ref> and <ref>, we analyze the effect of a time-dependent domain on bifurcation delay and front propagation, respectively. In Section <ref>, we explain the role of dilution. In Section <ref>, we summarize our results and suggest directions for future study.

§ REAL GINZBURG-LANDAU EQUATION ON A TIME-DEPENDENT DOMAIN

We consider a conserved complex amplitude A described by the 1D RGLE on an isotropically growing domain x∈[0, ΛL(t)] with periodic boundary conditions:

A_t + (L̇(t)/L(t)) x A_x + (L̇(t)/L(t)) A = μ A + A_xx - |A|^2 A,

where the second term on the left-hand side is the advection term and the third is the dilution term. Here, subscripts denote partial derivatives and overdots denote total time derivatives, while L(t) is the dimensionless growth parameter; we prescribe L(0)=1 so that Λ is the initial domain size. The advection and dilution terms arise from imposing isotropic growth with a conservation law <cit.>. The RGLE is the lowest-order model that retains the phase invariance A↦ Ae^iϕ for arbitrary phase ϕ and the separate parity symmetries A↦A̅ and x↦-x present in many physical systems <cit.>.

Existing analytical techniques and numerical methods for PDEs require a fixed domain. Therefore, we use the change of variable ξ = x/L to obtain

A_t + (L̇(t)/L(t)) A = μ A + (1/L(t)^2) A_ξξ - |A|^2 A,

where ξ∈[0,Λ] is the (Lagrangian) fixed-domain spatial coordinate. Note that the advection term is eliminated in this frame; see Appendix <ref>. To isolate the effect of dilution, we initially neglect the dilution term in (<ref>) to obtain

A_t = μ A + (1/L(t)^2) A_ξξ - |A|^2 A.

The dilution term is reintroduced in Section <ref> once this simpler system is analyzed.

The RGLE possesses several classes of stationary states, summarized in the bifurcation diagram in Fig. <ref> for L=1 and Λ = 2π. These states are defined up to a constant phase shift arising from translation invariance and the use of periodic boundary conditions. The supercritical bifurcations along the trivial branch A=0 are referred to as primary bifurcations and these generate pattern states. The re-stabilizing subcritical bifurcations along the pattern branches are referred to as secondary bifurcations, and these are responsible for the presence of unstable mixed mode solutions.
To relate the properties of Eq. (<ref>) to the time-independent problem, it is often useful to freeze the time dependence of L(t) and treat it as a bifurcation parameter of the system:

A_t = μ A + (1/L^2) A_ξξ - |A|^2 A.

This allows us to obtain the bifurcation diagram of the RGLE with respect to L, as shown in Fig. <ref> for fixed μ=3. As a system parameter, the domain size L behaves similarly to μ: pattern states become more stable as L increases. However, this is a quasi-static view and does not capture any time-dependent effects of L(t).

§ BIFURCATION DELAY

§.§ Primary Bifurcation Delay

Under quasi-static variation of L, supercritical primary bifurcations from the trivial state to pattern states of wavenumber Q and amplitude √(μ-Q^2/L^2) occur at L=Q/√(μ), as shown in Fig. <ref>. On a growing domain, the onset of these pattern states can be delayed beyond the primary bifurcation. Conversely, on a shrinking domain, pattern states can exhibit delayed decay. We aim to explicitly measure these delay effects for a given L(t) of arbitrary time scale.

§.§.§ Growing Domain

Starting from the pattern state A(ξ,t) = a(t)e^iQξ, with a∈ℝ and Q∈ℝ^+, we can use (<ref>) to obtain an evolution equation for the amplitude:

ȧ = μ(t)a - a^3, where μ(t) ≡μ - Q^2/L(t)^2.

We restrict our attention to monotonically increasing L(t). If μ > Q^2, then the pattern state exists at t=0 and remains in existence for all t. For domain growth in which L(t)→∞ as t→∞, we see a(t)→√(μ) regardless of the value of Q. If μ < Q^2, then the trivial state A=0 is initially stable with respect to wavenumber Q perturbations, but an increasing L(t) can destabilize the trivial state at later times. To see this, we suppose a(0)≪1 and linearize (<ref>) about the trivial state. Thus

ȧ = μ(t)a, and so a(t) = a(0)exp(∫_0^t μ(t') dt').

According to (<ref>), μ(0) < 0 and μ̇(t) > 0 for all t. For a domain that grows sufficiently large, we define the turnaround time t_* such that μ(t_*) = 0. This denotes the time at which the system crosses the primary bifurcation. However, the system does not realize this bifurcation immediately. We define the exit time t_exit > t_* such that a(t_exit) = a(0). This denotes the time when the perturbation exits its initial neighborhood. We can find t_exit by solving

f(t_exit) ≡∫_0^t_exitμ(t') dt' = 0,

where f(t) is the entrance-exit function. We can also find the domain sizes at the corresponding times, denoted by L_* and L_exit. Finally, define

t_delay≡ t_exit - t_*

to be the delay time, or the extra time it takes for the perturbation to leave a neighborhood around the trivial state. We also define L_delay≡ L_exit - L_* to be the extra domain growth during the delay period. We refer to Fig. <ref> (inset) for a graphical depiction of this delay.

For an exponentially growing domain L(t) = e^σt, σ > 0, we obtain:

t_* = (1/σ)ln(Q/√(μ)),
L_* = Q/√(μ),
t_exit = (1/2σ)[W_0(-(Q^2/μ)e^-Q^2/μ) + Q^2/μ],
L_exit = exp[(1/2)(W_0(-(Q^2/μ)e^-Q^2/μ) + Q^2/μ)],
t_delay = (1/2σ)[W_0(-(Q^2/μ)e^-Q^2/μ) + Q^2/μ - ln(Q^2/μ)],
L_delay = exp[(1/2)(W_0(-(Q^2/μ)e^-Q^2/μ) + Q^2/μ)] - Q/√(μ),

where W_0(z) is the principal branch of the Lambert-W function with integral representation

W_0(z) ≡ (1/π)∫_0^πln(1 + z (sin t/t) e^t cot t) dt.

A larger σ results in a smaller t_delay, as shown in (<ref>). Additionally, L_delay in (<ref>) is independent of the growth rate σ.

The property that L_delay is independent of the growth time scale holds for all types of monotonic domain growth; a short numerical check is given below, and the general argument follows.
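As a concrete check of this statement, the following short script (ours, not from the paper; the parameter values are arbitrary) evaluates the closed-form expressions above with scipy's Lambert-W function and confirms that t_delay shrinks with σ while L_delay does not change.

```python
import numpy as np
from scipy.special import lambertw

# Evaluate t_*, t_exit, and L_delay for an exponentially growing domain L = e^{sigma t}.
mu, Q = 1.0, 2.0                         # example values with mu < Q^2 (assumed)
r = Q**2 / mu
for sigma in (0.05, 0.5, 5.0):
    t_star = np.log(Q / np.sqrt(mu)) / sigma
    t_exit = (np.real(lambertw(-r * np.exp(-r))) + r) / (2 * sigma)
    L_delay = np.exp(sigma * t_exit) - np.exp(sigma * t_star)
    print(f"sigma={sigma:4.2f}  t_delay={t_exit - t_star:8.4f}  L_delay={L_delay:.6f}")
# t_delay decreases with sigma; L_delay is identical for every sigma.
```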
Applying (<ref>), we see that L_exit solves

∫_1^L_exit (μ - Q^2/L^2)(1/L̇) dL = 0,

where the lower limit is L(0)=1. For L(t) = L̃(σt), σ > 0, with a fixed monotonic profile L̃, we have L̇(t) = σL̃'(σt), and (<ref>) yields

∫_1^L_exit (μ - Q^2/L^2)(1/L̇) dL = (1/σ)∫_1^L_exit (μ - Q^2/L^2)(1/L̃') dL = 0.

Thus, L_exit (and therefore L_delay) is independent of the growth time scale. Note that this is linear theory, which is suitable because the amplitude |A| is small.

§.§.§ Shrinking Domain

On a shrinking domain, we consider the problem in which the system begins in a pattern state with μ > Q^2 but decays to the trivial state as L(t) monotonically decreases. We are then interested in the entrance time into a neighborhood around the trivial state; see the red line trajectory in Fig. <ref>. Specifically, we pick ϵ > 0 and define t_enter such that

a(t_enter) = ϵ.

Additionally, we redefine the delay time and the corresponding domain delay as

t_delay ≡ t_enter - t_*, L_delay ≡ L_* - L_enter.

We interpret L_delay as additional domain compression due to bifurcation delay. Unlike the case of the growing domain, we cannot linearize the problem since the initial amplitude need not be small. Additionally, the amplitude, described by (<ref>), departs from the time-independent pattern branch as the system approaches the primary bifurcation. Instead, as outlined in Appendix <ref>, we obtain a general solution to (<ref>). For the exponentially shrinking domain L(t) = e^σt with σ < 0, the closed form is

a(t) = [exp(2μt + (Q^2/σ)(e^-2σt - 1)) / (-(1/σ)exp(-Q^2/σ)(-σ/Q^2)^-μ/σΓ(-μ/σ, -Q^2/σ, -(Q^2/σ)e^-2σt) + a_0^-2)]^1/2,

where Γ(s, t_0, t_1) is the generalized incomplete gamma function and a_0 = a(0).

Using this expression, we can understand how the rate of domain compression σ affects the delay. We fix values for μ, Q, and ϵ and compute the delay time t_delay and the associated domain compression L_delay over a range of σ by computing implicit solutions to (<ref>). This is shown in Fig. <ref>. We see that as the domain shrinks faster, the delay time decreases while the domain compression increases.

§.§ Secondary Bifurcation Delay: Eckhaus Instability and Phase Slip Generation

When a pattern state is perturbed, the system may transition into a more stable state via a phase slip, which occurs when the pattern amplitude reaches zero at a point in the domain and the system deletes (or injects) a wavelength at that point. An example of this transition is depicted in Fig. <ref>. For a given value of μ, each wavenumber Q pattern state is only unstable to modes Q±k for select values of k∈ℝ^+. On a fixed domain, once a pattern state is perturbed by one of these modes, a phase slip necessarily develops <cit.>. The dominant unstable modes of the perturbation dictate the location of the phase slip(s) and the resulting state. The secondary bifurcations along the pattern branch stabilize the pattern state with respect to different values of k. On a time-dependent domain, the onset of phase slips can be delayed. Phase slips can also be prevented altogether—we call these arrested phase slips. We aim to measure phase slip delay and classify the regimes in which phase slips are arrested.

§.§.§ Fixed Domain

To determine the stability properties of the pattern states, we let

A(ξ,t) = A_Q(ξ) + A'(ξ,t), where A_Q(ξ) = √(μ - Q^2/L^2)e^iQξ

is the stationary wavenumber Q pattern state and A'(ξ,t) is a small perturbation. We write

A'(ξ,t) = a_k+(t)e^i(Q+k)ξ + a_k-(t)e^i(Q-k)ξ,

where k∈ℝ^+ and a_k+(t), a_k-(t)∈ℝ.
Linearizing (<ref>) about the base state A_Q and inserting (<ref>), the evolution of this two-mode perturbation is given by

[ȧ_k+; ȧ_k-] = M [a_k+; a_k-], where M = [μ - (Q+k)^2/L^2 - 2a^2, -a^2; -a^2, μ - (Q-k)^2/L^2 - 2a^2]

and a = √(μ - Q^2/L^2) is the amplitude of the base pattern state. The eigenvalues are

λ_k± = -(μ - Q^2/L^2) - k^2/L^2 ±√((μ - Q^2/L^2)^2 + (2Qk/L^2)^2).

The fast eigenvalue λ_k- is always negative and governs the initial transient towards the eigenspace of the slow eigenvalue λ_k+. The secondary bifurcations occur at μ = 3Q^2 - k^2/2 when λ_k+ = 0. These bifurcations are subcritical and generate unstable mixed mode states. The Eckhaus instability occurs at k=1, when the pattern state becomes stable with respect to all small perturbations of wavenumber Q±k.

§.§.§ Shrinking Domain

Suppose the system starts in a wavenumber Q state that is stable with respect to k, i.e., μ > 3Q^2 - k^2/2. The equation that governs the evolution of the two-mode perturbation is now nonautonomous:

[ȧ_k+; ȧ_k-] = M(t) [a_k+; a_k-], where M(t) = [μ - (Q+k)^2/L(t)^2 - 2a(t)^2, -a(t)^2; -a(t)^2, μ - (Q-k)^2/L(t)^2 - 2a(t)^2]

and a(t) is the time-dependent amplitude of the base pattern state, which evolves by (<ref>). The time-frozen eigenvalues of M(t) are given by

λ_±(t) = μ - Q^2/L(t)^2 - 2a(t)^2 - k^2/L(t)^2 ±√(a(t)^4 + (2Qk/L(t)^2)^2).

However, these eigenvalues do not necessarily reflect the correct linear stability of the time-varying system, since exponentially growing solutions may exist even when λ_±(t) < 0 for all t > 0 <cit.>.

To treat (<ref>) as the nonautonomous problem it is, we apply an energy bound on the growth of a perturbation given by the maximum eigenvalue of M(t) + M^*(t) <cit.>. Since M(t) is real and symmetric, we need only consider the least-stable time-frozen eigenvalue λ_+(t). If we take the squared norm

r(t) ≡ a_k+^2 + a_k-^2,

then we can bound r(t) by

r(t) ≤ r(0)exp(2∫_0^t λ_+(τ) dτ).

This is similar to (<ref>), with the equality replaced by an inequality. Thus, we can obtain a minimum delay time using the same techniques as outlined in the previous subsection. For a numerical example, see Fig. <ref>. As expected, the calculated bound underestimates the true delay time. However, we can see that the true r(t) and the upper bound have a similar shape—including the same turnaround time t_* associated with the minimal value in Fig. <ref>—except for an initial transient. This transient is characterized by λ_-(t). Accounting for this initial transient would improve the precision of the upper bound.

§.§.§ Growing Domain

In the RGLE on a growing domain, the passage across an Eckhaus bifurcation from the unstable to the stable side presents a challenge, since we need to consider the basin of attraction of the pattern states. Figure <ref> demonstrates this in a dramatic way. For some L(t), phase slips will develop even if the system crosses the Eckhaus instability. For other L(t), phase slips are arrested—the system falls into the basin of attraction of the original pattern state.
Thus, for a growing domain, we are not only concerned with the delay of a phase slip but also with whether a phase slip develops at all.

Here, we outline a model for characterizing arrested phase slips on a time-dependent domain. For simplicity, we restrict our attention to the transition between the Q=1 pattern state and the Q=0 homogeneous state with Λ=2π. Without loss of generality, we can also force the (developing) phase slip to occur at ξ = Λ/2 = π, i.e., in the domain center.

To physically motivate this model, we note that the Eckhaus instability is a phase instability, and the onset of a phase slip is characterized by a wavenumber diffusion equation with a negative diffusion coefficient <cit.>. The amplitude is slaved to the phase; once the wavenumber compression is large enough, the amplitude reaches zero and a phase slip occurs. In the Eckhaus-stable regime, the mixed modes are the unstable states that resemble a developing phase slip (see the mixed mode profile in Fig. <ref>). As L(t) increases, the system becomes more Eckhaus-stable, and the mixed modes become more compressed. This suggests that we can use the wavenumber compression of the mixed modes to approximate the basin of attraction of an Eckhaus-stable pattern state. States which are more compressed than the relevant mixed mode fall outside the basin of attraction.

To measure the wavenumber compression at the location of the phase slip, we use the phase slip core width Δξ. To define this quantity, we write the profile in its amplitude-phase representation

A(ξ,t) = a(ξ,t)e^iϕ(ξ,t).

For the phase of the Q=1 pattern, ϕ(ξ=0)=0 and ϕ(ξ=Λ)=2π. The phase slip occurs where ϕ=π. Following the approach of <cit.>, the core width Δξ is defined to be the spatial distance between ϕ=π/2 and ϕ=3π/2, i.e., Δϕ=π around the developing phase slip (see Fig. <ref>). As Δξ decreases, the profile becomes more compressed. Once Δξ=0, a phase slip occurs. On a fixed domain, Δξ scales as (t_slip-t)^1/2, where t_slip is the time of the phase slip <cit.>.

To parameterize the basin of attraction, we compute the core width of each steady mixed mode at each L for a fixed μ, denoted the critical core width Δξ_cr(L). These are computed using pde2path, a Matlab package for numerical continuation and bifurcation analysis in systems of PDEs <cit.>. The spatial direction is discretized using the Fourier collocation method <cit.> following the implementation in <cit.>, and we use N=2048 for the number of grid points. Figure <ref> shows the interpolated pde2path results used to construct Δξ_cr(L) over L for μ=2. As expected, as L increases, Δξ_cr decreases. Thus, on a growing domain, the number of states that fall within the basin of attraction grows over time.

We construct the model as follows: given a perturbed Q=1 pattern state for some growing L(t), we track the core width Δξ(t) over time. If a state falls within the basin of attraction at one time, then the state remains within the basin of attraction thereafter—this is an arrested phase slip. Specifically,

* if Δξ(t) > Δξ_cr(L(t)) at some time t > t_*, then the phase slip is arrested;
* if Δξ(t) < Δξ_cr(L(t)) for all time, then the phase slip develops.

We test this model with direct numerical simulations (DNS) of (<ref>) using Dedalus, a Python library that uses spectral methods to solve differential equations <cit.>. We use an RK222 time-stepping scheme with a Fourier basis with N=1024. The linear terms are treated implicitly, but the time-dependent domain requires that the diffusion term be treated explicitly, and a smaller time step is required for numerical stability. A simplified, fully explicit sketch of such a simulation is given below.
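The following self-contained NumPy sketch is ours; it is not the Dedalus implementation, and it uses a fully explicit RK4 step with illustrative parameters and perturbation. It integrates Eq. (<ref>) for a perturbed Q=1 pattern on a growing domain and monitors min|A| to detect whether a phase slip develops.

```python
import numpy as np

# Explicit pseudospectral integration of A_t = mu A + A_xixi / L(t)^2 - |A|^2 A
# on xi in [0, 2*pi) with periodic boundary conditions.
N, Lam = 64, 2 * np.pi
mu, sigma = 2.0, 0.1                                  # slow exponential growth
xi = np.arange(N) * Lam / N
k = 2 * np.pi * np.fft.fftfreq(N, d=Lam / N)          # integer wavenumbers
L = lambda t: np.exp(sigma * t)

def rhs(A, t):
    A_xixi = np.fft.ifft(-(k**2) * np.fft.fft(A))
    return mu * A + A_xixi / L(t)**2 - np.abs(A)**2 * A

A = np.sqrt(mu - 1) * np.exp(1j * xi) + 0.05 * (1 + np.exp(2j * xi))  # perturbed Q=1
t, dt = 0.0, 1e-3
for _ in range(20000):                                # integrate to t = 20 with RK4
    k1 = rhs(A, t); k2 = rhs(A + 0.5*dt*k1, t + 0.5*dt)
    k3 = rhs(A + 0.5*dt*k2, t + 0.5*dt); k4 = rhs(A + dt*k3, t + dt)
    A += (dt / 6) * (k1 + 2*k2 + 2*k3 + k4); t += dt
    if np.abs(A).min() < 1e-2:                        # amplitude reaches (near) zero
        print(f"phase slip develops near t = {t:.2f}"); break
else:
    print("no phase slip by t = 20: arrested")
```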
As shown in Fig. <ref>, we perform DNS with slow exponential growth L(t) = e^0.1t and fast sigmoidal growth L(t) = 1.93 + 0.93tanh(5(t-2.59)). For exponential growth, the model exhibits remarkable accuracy for many randomly chosen perturbations. Evidently, the mixed mode core width acts as a useful criterion for determining whether phase slips are arrested. However, examining Fig. <ref>(d), we find that this model is not perfect for fast sigmoidal domain growth, as a phase slip occurs even though the evolution (d) crosses Δξ_cr. We find that the model is too lenient in classifying arrested phase slips because it does not account for transient effects due to changes in the domain size. These transient effects cause additional compression beyond what is anticipated by the model. In practice, the range of perturbations for which this model fails is quite small, as demonstrated by the near-identical initial conditions shown in the bottom left panel of Fig. <ref>.

§ FRONTS

Front propagation into unstable states arises from localized perturbations in spatially extended systems. Fronts generally have a well-defined asymptotic velocity and profile. Pulled fronts approach a spreading velocity determined by the linear dynamics ahead of the front, in contrast with pushed fronts, which move faster than the linear spreading velocity due to nonlinear effects <cit.>. In Fig. <ref>, we identify three types of pulled fronts in the RGLE that correspond to the three types of bifurcations:

* Homogeneous fronts arise when the stable homogeneous state propagates into the unstable trivial state due to the supercritical bifurcation at μ=0.
* Pattern-spreading fronts arise when a pattern state propagates into the unstable trivial state due to the primary bifurcations at μ=Q^2.
* Eckhaus fronts arise when a stable pattern state propagates into an unstable pattern state due to the Eckhaus instability at μ=3Q^2-1/2.

Previous studies of pulled fronts in one dimension have mostly focused on their universal properties: the selected velocity and leading edge profile of pulled fronts are independent of the tracking height, the specific nonlinearities, and the initial conditions, provided the initial conditions are sufficiently steep <cit.>. The asymptotic velocity is approached from below at a slow algebraic rate O(t^-1), while the asymptotic profile is approached at a rate O(t^-2) <cit.>. The selected asymptotic front is the marginally stable uniformly propagating solution—this is known as the marginal stability criterion <cit.>.

However, on a time-dependent domain, it is unclear whether the asymptotic velocity and profile are meaningful at all—long-time asymptotic behavior cannot be described for general time-dependent domains. We cannot immediately determine the velocity and profile using the machinery of <cit.>, which relies on long-time asymptotics. Therefore, we develop the theory from first principles to see which aspects of the fixed-domain theory must be adjusted. In Fig. <ref>, we give an overview of the homogeneous, pattern-spreading, and Eckhaus fronts in the RGLE on a time-dependent domain. In the subsections that follow, we analyze each front type in more detail.
We primarily focus on homogeneous fronts because they are the most analytically tractable and the most thoroughly analyzed in the literature. We make brief remarks about the effect of a growing domain on pattern-spreading and Eckhaus fronts at the end of this section.

§.§ Homogeneous Fronts

On a fixed domain, homogeneous or uniform fronts can be shifted into a comoving frame with time-independent behavior far from the front. If we take A to be real, the RGLE is equivalent to the 1D Allen-Cahn equation or the Fisher-Kolmogorov-Petrovsky-Piskunov (F-KPP) equation with a cubic nonlinearity. Homogeneous fronts in these equations are the prototypical examples of pulled fronts and have been extensively studied on a fixed domain <cit.>. However, on a time-dependent domain, we do not expect a stationary profile for any choice of comoving frame, and the profile will evolve over time.

§.§.§ Linear Analysis

Because the asymptotic velocity of a pulled front is entirely determined by the linear dynamics, we first analyze the linearization about the trivial state,

A_t = μ A + (1/L(t)^2) A_ξξ.

Without loss of generality, we take A to be real. For simplicity, we perform this analysis on an infinite domain with a delta function initial condition and track the position of the point A=C. That is, we solve

C = A(ξ_C(t), t)

for ξ_C(t). There are two such points (see Fig. <ref>); we focus on the right flank without loss of generality. From here, we can obtain the velocity ξ̇_C(t) and the profile A(ξ_C(t), t). We characterize the profile by the steepness, or spatial decay rate, of its leading exponential tail.

We first ignore time-dependent effects and obtain the asymptotic velocity and profile at each time from fixed-domain analysis. As shown in Appendix <ref>, the time-frozen asymptotic velocity and steepness of the front are

v^*(t) = 2√(μ)/L(t), λ^*(t) = √(μ)L(t).

These values may also be obtained from the marginal stability criterion. It is not immediately clear whether these expressions remain meaningful in the presence of time dependence.

We may also solve (<ref>) directly using the dispersion relation ω(k,t) obtained by inserting A = exp(ikξ - i∫_0^t ω(k,t') dt') into (<ref>):

ω(k,t) = i(μ - k^2/L(t)^2).

Thus, the time-evolved profile of an initial condition with Fourier transform Ã(k,0) is given by

A(ξ,t) = (1/2π)∫_-∞^∞Ã(k,0)exp(ikξ - i∫_0^t ω(k,t') dt') dk.

For A(ξ,0) = δ(ξ) the exact solution is

A(ξ,t) = (1/√(4πh(t)))exp(μt - ξ^2/4h(t)), where h(t) ≡∫_0^t 1/L(t')^2 dt'.

On a fixed domain, it is natural to move into a comoving frame with the asymptotic velocity (<ref>), because this clears the growth term μt in (<ref>) so that the amplitude neither grows nor decays exponentially.
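A quick finite-difference check (ours, with arbitrary sample values) confirms that this expression solves the linearized equation for an exponentially growing domain, for which h(t) has the closed form (1 - e^-2σt)/(2σ).

```python
import numpy as np

# Residual check of A = exp(mu t - xi^2/(4 h)) / sqrt(4 pi h) in A_t = mu A + A_xixi / L^2.
mu, sigma = 1.0, 0.06
L2 = lambda t: np.exp(2 * sigma * t)                       # L(t)^2 for L = e^{sigma t}
h  = lambda t: (1 - np.exp(-2 * sigma * t)) / (2 * sigma)  # h(t) = int_0^t L^{-2} dt'
A  = lambda xi, t: np.exp(mu * t - xi**2 / (4 * h(t))) / np.sqrt(4 * np.pi * h(t))

xi, t, d = 1.3, 2.0, 1e-4
A_t    = (A(xi, t + d) - A(xi, t - d)) / (2 * d)
A_xixi = (A(xi + d, t) - 2 * A(xi, t) + A(xi - d, t)) / d**2
print(A_t - mu * A(xi, t) - A_xixi / L2(t))    # residual ~ 0 up to discretization error
```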
On a time-dependent domain an immediate generalization is the time-frozen asymptotic frame ζ = ξ - ∫_0^t v^*(t') dt' = ξ - 2√(μ)g(t), where g(t) ≡ ∫_0^t 1/L(t') dt'. However, the profile (<ref>) in this frame becomes A(ζ, t) = (1/√(4π h(t)))exp[-ζ√(μ)g(t)/h(t) - ζ^2/(4h(t)) + μ(t - g(t)^2/h(t))], with a growth term that does not vanish because g(t)^2/h(t) ≠ t except when L is constant. Instead, to clear the exponential growth in the comoving frame, we shift into the frame z = ξ - 2√(μ t h(t)) to obtain the profile A(z,t) = (1/√(4π h(t)))exp[-z√(μ t/h(t)) - z^2/(4h(t))]. This is a much more natural frame—the exponential growth at z=0 vanishes, and the profile depends only on h(t), the integrated diffusion coefficient, and not g(t), the integrated square root of the diffusion coefficient. Using (<ref>) and (<ref>), we define the natural asymptotic velocity v^**(t) = d/dt[2√(μ t h(t))] = √(μ/(t h(t)))(h(t) + t/L(t)^2), and the natural asymptotic steepness λ^**(t) = √(μ t/h(t)). These are different from the time-frozen asymptotic values (<ref>) and (<ref>). The natural asymptotic steepness λ^**(t) is physically meaningful: it represents a balance between the growth μ t and diffusion h(t). Note that v^**=v^* and λ^**=λ^* on a fixed domain. We solve (<ref>) using (<ref>) to obtain the linear position and velocity ξ_C(t) = 2[h(t)(μ t - ln(C√(4π h(t))))]^1/2 and ξ̇_C(t) = [(μ t - ln(C√(4π h(t))))/L(t)^2 + μ h(t) - 1/(2L(t)^2)] / [h(t)(μ t - ln(C√(4π h(t))))]^1/2. Equations (<ref>) and (<ref>) are exact and valid for all times given a delta function initial condition. As expected, the velocity depends on the tracking point C, even at long times, because the changing steepness of the profile adds an amplitude-dependent “rotation” effect. On a constant domain, we recover the expected O(t^-1) approach to (<ref>) from (<ref>) as t→∞ for all C. Finally, we compare these linear analysis results with the nonlinear results obtained from DNS. These DNS are run using a second-order central finite-difference scheme and RK4 time-marching scheme with μ=1, Λ=100π, N=65536, and dt=10^-5, where N is the number of grid points and dt is the time step. This fine spatial discretization is necessary to precisely measure the nonlinear front velocity. To explain the analysis shown in Figs. <ref>, <ref>, and <ref>, we first summarize the definitions we use:

* The nonlinear front (solid black) is obtained from DNS. The nonlinear velocity and nonlinear profile are the true, physical quantities—we use the other named velocities and profiles defined below for comparison.
* The linear front (solid blue) is obtained by solving the linearized equation (<ref>) with a delta function initial condition. The linear velocity (<ref>) and linear profile (<ref>) are exact results.
* The time-frozen asymptotic front (dashed gray) is obtained from the general fixed-domain theory of fronts as described in Appendix <ref>. It is characterized by the time-frozen asymptotic velocity v^*(t) (<ref>) and time-frozen asymptotic steepness λ^*(t) (<ref>). The steepness describes the leading edge of the full time-frozen asymptotic profile which we compute below.
* The natural asymptotic front (dashed red) is obtained by clearing the exponential growth term. It is characterized by the natural asymptotic velocity v^**(t) (<ref>) and natural asymptotic steepness λ^**(t) (<ref>). We also compute the natural asymptotic profile below.
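For concreteness, the velocities and level-set trajectory defined above can be evaluated for an arbitrary L(t); the following Python sketch (with illustrative parameters, independent of the DNS code described above) implements v^*(t), v^**(t), ξ_C(t), and ξ̇_C(t):

import numpy as np
from scipy.integrate import quad

mu, C = 1.0, 0.5                          # illustrative parameters
L = lambda t: np.exp(0.06 * t)
h = lambda t: quad(lambda s: L(s) ** -2, 0.0, t)[0]

def v_star(t):                            # time-frozen asymptotic velocity
    return 2.0 * np.sqrt(mu) / L(t)

def v_star_star(t):                       # natural asymptotic velocity
    ht = h(t)
    return np.sqrt(mu / (t * ht)) * (ht + t / L(t) ** 2)

def xi_C(t):                              # exact tracked position
    ht = h(t)
    F = mu * t - np.log(C * np.sqrt(4.0 * np.pi * ht))
    return 2.0 * np.sqrt(ht * F)

def xi_C_dot(t):                          # exact tracked velocity
    ht = h(t)
    F = mu * t - np.log(C * np.sqrt(4.0 * np.pi * ht))
    return (F / L(t) ** 2 + mu * ht - 0.5 / L(t) ** 2) / np.sqrt(ht * F)

for t in (1.0, 5.0, 10.0):
    print(t, v_star(t), v_star_star(t), xi_C_dot(t))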
On a fixed domain, in which the time-frozen and natural fronts are the same, we use the general terms asymptotic front, asymptotic velocity, asymptotic steepness, and asymptotic profile. In Fig. <ref>, we plot the propagating linear and nonlinear fronts in space for a fixed domain with L(t)=1, an exponentially growing domain with L(t)=e^0.06t, and an exponentially shrinking domain with L(t)=e^-0.02t. We also plot the time-frozen and natural asymptotic fronts; we compute these asymptotic profiles below. On the fixed domain, the linear front recedes from the asymptotic front. This is the expected logarithmic shift due to the O(t^-1) approach to the asymptotic velocity <cit.>. The nonlinear front recedes more than the linear front; this can be derived via asymptotic matching <cit.>. For the shrinking domain, we observe a similar recession of the nonlinear front. However, in this case, the nonlinear front recedes faster as time goes on. We observe the opposite behavior on the growing domain. In Fig. <ref>, we plot the linear, nonlinear, time-frozen asymptotic, and natural asymptotic front positions and velocities for these same regimes with tracking height C=0.5. On the constant domain, both the linear and nonlinear velocities approach the asymptotic velocity v^**=2. On the shrinking domain, the front velocities increase. At a basic level, this result is evident in the Eulerian (lab) frame: the front speed stays roughly constant while the distance between points decreases. Thus, it takes less time for the front to travel between two points. Likewise, on a growing domain, the distance between points increases in the Eulerian frame, so the front velocities decrease in the Lagrangian frame. However, the detailed behavior of the nonlinear front velocity is rather complex. On the shrinking domain, the nonlinear front velocity departs from both the linear velocity (<ref>) and the natural asymptotic velocity (<ref>) at large times. On the growing domain, the nonlinear front moves faster than the natural asymptotic front (<ref>) after around t=5. We would not expect any overshoot of the asymptotic velocity starting from steep initial conditions on a fixed domain; this is a new phenomenon. In addition, the linear and nonlinear velocities track the natural asymptotic velocity instead of the time-frozen asymptotic velocity. This confirms that the natural asymptotic frame is the appropriate comoving frame, suggesting that the marginal stability criterion breaks down on a time-dependent domain.

§.§.§ Nonlinear Analysis

The asymptotic velocity of a homogeneous front in the nonlinear equation (<ref>) is the same as the asymptotic velocity in the linear equation (<ref>) because homogeneous fronts are pulled fronts. However, this does not mean that the nonlinear velocity matches the linear velocity (<ref>), since, on a time-dependent domain, the asymptotic velocities need not describe the actual velocities. In fact, we saw above that even in the fixed-domain case, the nonlinear front has a slower relaxation to the asymptotic velocity compared to the linear front. To explain the nonlinear behavior, we consider the relationship between front velocity and profile. The key idea is the following: A time-dependent domain changes the front profile. Changes in the profile steepness at the leading edge cause changes in the front velocity. On a fixed domain, if the initial conditions are steeper than the asymptotic steepness (<ref>), then the asymptotic velocity will be approached from below.
The velocity will never exceed v^**, and the steepness will never drop below λ^**. However, if the initial conditions are less steep, then the asymptotic velocity and steepness are not useful values. Instead, the front will propagate with a velocity v > v^** and maintain its shallow exponential tail <cit.>. On a shrinking domain, λ^**(t) is a decreasing function. Thus, because we start with a delta function initial condition, the front will always remain steeper than the asymptotic steepness, so the asymptotic values remain valid. On a growing domain, λ^**(t) is an increasing function. Thus, the profile may be steep enough at t=0, but at a later time, the profile may be shallower than the asymptotic steepness. In this case, the asymptotic values are no longer valid, and the front can move faster than v^**(t). The above argument only holds for the linear velocity, and the linear and nonlinear profiles do not necessarily match. To confirm that v^**(t) is also associated with the nonlinear profile, we move into the natural asymptotic frame given by (<ref>) to obtain A_t = μ A + (1/L(t)^2)A_zz - A^3 + v^**(t)A_z. For a fixed time t, we seek steady solutions to (<ref>). This amounts to solving a boundary-value problem with boundary conditions A(z→ -∞)=√(μ) and A(z→∞)=0 and phase condition A(z=0)=C. We split (<ref>) into a first-order system by writing u=A and w=A_z: u_z = w, w_z = L(t)^2[-v^**(t)w - (μ u - u^3)]. There are two fixed points, at (0,0) and (√(μ),0), which are stable and unstable, respectively. The front solution is the heteroclinic orbit connecting the two fixed points, since this is the only solution that satisfies the boundary conditions. These heteroclinic profiles propagating with the natural asymptotic speed are shown in Fig. <ref>. We can also obtain the time-frozen asymptotic profiles by moving into the frame associated with v^*(t). The heteroclinic profiles in this frame are also shown in Fig. <ref>. In Fig. <ref>, we overlap the linear, nonlinear, time-frozen asymptotic, and natural asymptotic profiles at C=0.5 at various times for the different L(t). We also plot the phase plane for (<ref>) at each time with the two fixed points and their linearized eigenspaces. On top of this, we plot the trajectories of the nonlinear profile, time-frozen asymptotic profile, and natural asymptotic profile. From here, we can explain the nonlinear velocities shown in Fig. <ref>. On the shrinking domain, the nonlinear profile remains steeper than the natural asymptotic profile at all times, which explains why the nonlinear velocity remains below the natural asymptotic velocity. On the growing domain, the nonlinear profile is less steep than the natural asymptotic profile at long times, so the nonlinear velocity is larger than the natural asymptotic velocity. This analysis also demonstrates how the marginal stability criterion breaks down on a time-dependent domain. Note how, for the fixed domain, the linear profile does not line up closely with the nonlinear and natural asymptotic profiles. This is because, in this regime, the linearization at (0,0) gives a double root with a degenerate eigenspace, so the asymptotic profile behaves like A(z) ∼ ze^-λ^**z (z→∞). This differs from the pure e^-λ^**z exponential tail from the linear analysis but is precisely what is prescribed by the marginal stability criterion. However, this degeneracy holds only in the fixed domain case. On a time-dependent domain, (0,0) is a generic stable node in the natural asymptotic frame.
It does not have a repeated spatial eigenvalue, so the natural asymptotic profile is not the marginally stable profile.

§.§ Pattern-spreading Fronts

Pattern-forming fronts propagating into a trivial state are well-studied in a variety of systems <cit.>. For the RGLE, velocity and stability analyses were conducted in <cit.> for a fixed domain. In fact, these fronts are somewhat artificial in nature. Because wavelength injection cannot occur in the absence of phase slips, the initial condition prescribes the maximum number of wavelengths <cit.>. If we restrict our attention to localized perturbations (specifically, perturbations with compact support), then these can only prescribe a finite number of initial wavelengths, and thus a pattern-spreading front can only propagate for a finite time and distance. The homogeneous front eventually takes hold and the analysis of the previous subsection then applies. With this in mind, we consider examples of pattern-spreading fronts on a fixed and exponentially growing domain as shown in subfigures (c) and (d) in Fig. <ref>. In both regimes, we observe a decreasing local wavenumber—pattern spreading—at the leading edge of the front. On a growing domain, the front dynamics are primarily characterized by the curved envelope in the space-time plot depicting a decreasing velocity over time. This is very similar to the homogeneous front. Owing to primary bifurcation delay, the propagation of the pattern-spreading front does not begin until the domain grows large enough for the local wavenumber to fall within the existence band. Thus, even though the rather simple delay analysis of (<ref>) does not appear to highlight any spatiotemporal features, it plays an important role in the timing of front propagation.

§.§ Eckhaus Fronts

As described in Section <ref>, an Eckhaus-unstable pattern state can undergo one or more phase slips to evolve into a stable pattern state with fewer wavelengths. If the initial perturbation is sufficiently localized, a propagating front of repeated phase slips can form (Fig. <ref>). A rigorous proof of the existence of these fronts was obtained in <cit.>. Unlike the front types discussed so far, here the front invades an unstable state that is nontrivial and nonhomogeneous, although homogeneity can be recovered by transforming (<ref>) into an amplitude-wavenumber representation. Remarkably, Eckhaus fronts travel at the linear spreading velocity even though phase slips are a nonlinear phenomenon <cit.>. In general, a moving front deposits a nonzero wavenumber. This wavenumber is selected dynamically, and the resulting state may or may not be stable. In the present problem we saw that the homogeneous front deposits a zero wavenumber state, but this is no longer the case for Eckhaus fronts (Fig. <ref>). Close to the Eckhaus boundary, wavenumber selection is described by the Cahn-Hilliard equation and no phase slips take place <cit.>, but farther into the unstable regime phase slips start to occur and determine the deposited wavenumber <cit.>. These phase slips occur behind the moving front, which continues to move with the speed predicted by the marginal stability prescription, and do so either irregularly or periodically, depending on the front velocity. Since the dynamics of phase slips are modified on a time-dependent domain, as described in the first part of this paper, we may expect that time-dependence will likewise affect both the Cahn-Hilliard regime and the transition to and subsequent behavior of the phase slip regime.
On a time-dependent domain the former is described by the phase equation (equivalently an equation for the wavenumber k≡ϕ_x) derived in <cit.>, regularized by a fourth order linear term k_xxxx. While a study of this equation is beyond the scope of this paper, we focus here on the initial wavenumber selection process by an Eckhaus front on a time-dependent domain. For this purpose we first look at the quasi-static analysis. In Fig. <ref>(a), we plot the linear spreading velocity as a function of μ for L=1 with initial wavenumbers Q=1 and Q=2. The full derivation can be found in Appendix <ref>. We also add DNS data confirming the theoretical result. We see that for larger μ, the Eckhaus front velocity is smaller. In Fig. <ref>(b), we plot the velocity as a function of L for μ=3. As expected, L plays a similar stabilizing role as μ: a larger domain size also results in a decrease in the front velocity. Using DNS, we are able to extract some time-dependent behavior beyond the quasistatic regime. On an exponentially growing domain, we find that the Eckhaus fronts slow down as expected; compare (e) and (f) in Fig. <ref>. Once the domain grows to an Eckhaus-stable size, phase slips no longer occur and the front halts. The system then enters a phase-melting state in which two different stable wavenumbers temporarily coexist. The subsequent melting into a state with a uniform wavenumber is described in <cit.>, but we do not yet know how these states evolve on a time-dependent domain. We also do not yet understand how the time-dependent Eckhaus front velocity may deviate from that shown in Fig. <ref>(b). Additionally, we expect delayed front propagation when crossing the Eckhaus instability, but this topic is also beyond the scope of this paper.

§ DILUTION

Dilution plays an important role in the changing stability of solutions on a time-dependent domain. Examining Figs. <ref> and <ref>, we see:

* For a fixed L, increasing μ makes the trivial state less stable and the pattern states more stable.
* For a fixed μ, increasing L makes the trivial state less stable and the pattern states more stable.

Thus, μ and L play similar roles in the stability of pattern solutions. If L(t) is growing, then in the undiluted regime (<ref>), we expect the pattern states to become more stable. However, including dilution as in (<ref>) changes the growth rate coefficient: μ ↦ μ - L̇(t)/L(t). Thus dilution acts to decrease μ when the domain is growing, making the pattern states less stable. The reverse applies in the case of a shrinking domain. Thus dilution resists the changing stability of solutions due to time-dependence of the domain. Consequently, we expect that dilution increases the delay time. We show this for a growing domain across a primary bifurcation. Suppose μ < Q^2 and L̇ > 0. From the definition of the original turnaround time t_*, we have μ - Q^2/L(t_*)^2 = 0. With dilution present, μ - L̇(t_*)/L(t_*) - Q^2/L(t_*)^2 < 0 since L̇ > 0. Thus, the turnaround time with dilution occurs later than that without dilution. Additionally, the exit time t_exit is a root of the original entrance-exit function: f(t_exit) ≡ ∫_0^t_exit(μ - Q^2/L(t')^2)dt' = 0. The new entrance-exit function gives f_dilut(t_exit) = f(t_exit) - ∫_0^t_exit L̇(t')/L(t') dt' < 0. Thus, as expected, the exit time in the dilution regime also occurs later than that without dilution. Figure <ref> compares the delay with dilution to that without dilution for the primary bifurcation, confirming this result.
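The delayed exit time under dilution can be confirmed numerically by root-finding on the two entrance-exit functions; a short Python sketch with an illustrative growth law (not the parameters of the figure) is:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

mu, Q = 1.0, 2.0                          # illustrative: mu < Q^2/L(0)^2 initially
L = lambda t: np.exp(0.1 * t)             # growing domain

f = lambda t: quad(lambda s: mu - Q ** 2 / L(s) ** 2, 0.0, t)[0]
# int_0^t Ldot/L dt' = ln(L(t)/L(0)) for any smooth, positive L(t)
f_dilut = lambda t: f(t) - np.log(L(t) / L(0.0))

print(brentq(f, 1e-6, 100.0))             # exit time without dilution
print(brentq(f_dilut, 1e-6, 100.0))       # later exit time with dilution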
Furthermore, for any given μ, if we take L(t) = e^μ t, then the bifurcation never takes place since the source term is eliminated by the dilution effect. Dilution plays a different role for homogeneous fronts. If we modify the linear analysis in Eq. (<ref>) to include the dilution term, we obtain the natural asymptotic velocity v_dilut^**(t) = d/dt[2√(M(t) h(t))], where M(t) ≡ μ t - ∫_0^t L̇(t')/L(t') dt'. Recall that, in the undiluted regime, the natural asymptotic velocity v^**(t) decreases on a growing domain and increases on a shrinking domain. With dilution, M(t) is smaller for growing domains and larger for shrinking domains compared to its undiluted counterpart μ t. Thus, dilution amplifies the effect of a time-dependent domain on the asymptotic speed. As demonstrated in Fig. <ref>, this result holds for exponential domains (constant L̇/L) in the fully nonlinear regime. Except for exponentially growing domains, extension of the nonlinear analysis to include dilution is not straightforward because the amplitude of the homogeneous solution becomes time-dependent. Thus, we can no longer seek stationary solutions in a comoving frame. Although we generally expect the dynamics behind a pulled front to play a minimal role in its velocity <cit.>, a more comprehensive analysis is necessary to verify this claim for time-dependent domains.

§ DISCUSSION

We extended previous work on bifurcation delay in the RGLE <cit.> with analyses of the primary bifurcations for both growing and shrinking domains of length L(t), with exact solutions and closed-form expressions. We also analyzed secondary bifurcation delay in detail. For the shrinking domain, in which linearized dynamics apply, we constructed an upper bound on the perturbation amplitude using an energy-based method to find a minimum delay time. For the growing domain, in which the nonlinear, infinite-dimensional dynamics must be retained, we outlined a heuristic model based on the core width of a phase slip to characterize the time-dependent basin of attraction of pattern states and validated this model with DNS. We used this model to classify arrested phase slips, which occur when the time scale of the domain growth competes with the Eckhaus instability. We also gave a detailed linear analysis of homogeneous front propagation into an unstable trivial state on an arbitrary time-dependent domain starting from a delta-function initial condition. Our approach led to a new insight into what determines the front velocity. We defined the natural asymptotic velocity and showed that in a time-dependent domain the velocity is no longer determined by the classical marginal stability criterion for pulled fronts. When the cubic nonlinearity is reintroduced, the degeneracy of the spatial eigenvalues of the trivial state breaks in the natural asymptotic frame. Our results were corroborated by DNS of the RGLE. We also briefly examined DNS of pattern-spreading and Eckhaus fronts on an exponentially growing domain. Lastly, we saw how dilution resists stability changes in a time-dependent domain, causing longer bifurcation delay compared to the undiluted regime. On the other hand, dilution amplifies the effect of a time-dependent domain on the velocity of homogeneous fronts. Many aspects of bifurcation delay remain to be explored. A more complete study and explanation of transient behavior would improve the accuracy of phase slip delays and arrests predicted in this paper.
This is not well-explored because transients are unimportant in the fixed-domain RGLE: as long as perturbations are sufficiently small, all trajectories through a phase slip are determined by a one-dimensional eigenspace. Additionally, more complex domain time-dependence remains to be studied. For example, oscillatory domain growth was not considered in this paper, although variants of the Ginzburg-Landau equation have been analyzed with a time-periodic parameter <cit.>. Amplitude- and frequency-dependent shifts of the Eckhaus instability and mixed mode branches are expected, but these have not been considered in this paper. The effects of noise and imperfection terms are also of great interest <cit.>. Additional work remains to fully understand front dynamics in nonautonomous systems of this kind. A larger body of numerical results is required for a complete catalog of the possible dynamics. Front velocities are notoriously difficult to measure in numerical simulations <cit.>. More precise and accurate numerical measurements of front velocities in a wider range of examples would verify our current understanding and perhaps identify new, unexpected behaviors. Theoretical progress is also needed to verify and explain the nonlinear front speeds and profiles obtained from DNS. Generalizing beyond a delta function initial condition on an infinite domain would enable better comparisons with the universal properties described in <cit.>. Of course, we desire a reliable theory for the pattern-spreading and Eckhaus fronts as well. Developing a general theory for front propagation in nonautonomous partial differential equations would make a wide class of systems accessible to theory <cit.>. We are also interested in the effect of a time-dependent domain on the local dynamics of phase slips since these occur generically in growing patterns <cit.>. As described in Section <ref>, the phase slip core width has an algebraic scaling law as the phase slip is approached <cit.>. It is unclear how this scaling behavior changes with domain growth. This study illuminates countless possibilities for studying bifurcation delay and front propagation in more sophisticated models such as the complex Swift-Hohenberg equation <cit.> and the Ginzburg-Landau equation with complex coefficients <cit.> on a time-dependent domain. These models include additional phenomena which have not been explored in detail on time-dependent domains, such as coarsening, spatially localized structures, traveling waves, and naturally occurring pattern-forming fronts. We also look towards bridging the gap between these models and observed phenomena in physical and biological realizations of patterns on time-dependent domains <cit.>. Quantitative predictions of bifurcation delay times and front propagation speeds in experiments would demonstrate the efficacy of this theory.

This work was supported by the 2022 UC Berkeley Physics Innovators Initiative (Pi^2) Scholars Program (TT), the Berkeley Physics-and-Astronomy Undergraduate Research Scholars (BPURS) Program (TT) and by the National Science Foundation under grants DMS-1908891 (BF & EK) and OCE-2023541 (CL & EK). CL acknowledges a discussion of pde2path with Tobias Frohoff-Hülsmann and support from the Connecticut Sea Grant PD-23-07 during the completion of this work.

§ RGLE WITH A CONSERVATION LAW

The 1D real Ginzburg-Landau equation on a fixed domain is A_t = μ A + A_xx - |A|^2A, where A is a complex variable and x ∈ [0, Λ].
If A is a conserved quantity, we cannot use a regular time derivative when switching to a time-dependent domain Ω_t. Instead, by the Reynolds transport theorem in one spatial dimension, we have d/dt∫_Ω_t A dV = ∫_Ω_t (A_t + uA_x + u_xA)dV, where u is some velocity determined by the growing domain. This takes into account the fact that material elements change size. We now restrict to isotropic growth, in which u = (L̇/L)x. Using the modified time derivative found in (<ref>), the full time-dependent RGLE becomes A_t + (L̇(t)/L(t))xA_x + (L̇(t)/L(t))A = μ A + A_xx - |A|^2A, where x ∈ [0, Λ L(t)]. The second term represents advection and the third term represents dilution. Note that (<ref>) describes the RGLE in the Eulerian frame (lab frame), where the first two terms together equal the material derivative of A. Thus, when we change to the Lagrangian frame, we expect the material derivative to become a normal time derivative because the effect of advection is built into the Lagrangian frame. To show this, let A(x,t) = Ã(ξ(x,t), t), where Ã is the amplitude in the Lagrangian frame and ξ ∈ [0,Λ] is the Lagrangian coordinate, i.e. ξ(x,t) = x/L(t). Then, A_t = Ã_t + Ã_ξ dξ/dt = Ã_t + Ã_ξ d/dt(x/L(t)) = Ã_t + Ã_ξ(-(L̇(t)/L(t)^2)x) = Ã_t - (L̇(t)/L(t))ξÃ_ξ. Next, the advection term becomes (L̇(t)/L(t))xA_x = (L̇(t)/L(t))(x/L(t))(L(t)A_x) = (L̇(t)/L(t))ξÃ_ξ. Thus, as expected, the advection term drops out, and the Lagrangian description of the RGLE becomes Ã_t + (L̇(t)/L(t))Ã = μÃ + (1/L(t)^2)Ã_ξξ - |Ã|^2Ã. We drop the tildes to obtain (<ref>).

§ PATTERN AMPLITUDE ON A SHRINKING DOMAIN

Here, we find the general solution to ȧ = μ(t)a - a^3, where a ≥ 0. This is a Bernoulli-type equation which can be solved using the substitution v=a^-2 to obtain the explicit solution a(t) = [exp(2∫_0^t μ(t')dt') / (2∫_0^t exp(2∫_0^t' μ(t”)dt”)dt' + a_0^-2)]^1/2, where a(0)=a_0. For an exponentially shrinking domain with L(t)=e^σ t, σ < 0, μ(t) = μ - Q^2/L(t)^2 = μ - Q^2e^-2σ t. To find the integral in the denominator, we observe ∫_0^t exp(2∫_0^t' μ(t”) dt”)dt' = e^-Q^2/σ∫_0^t e^2μ t' exp[(Q^2/σ)e^-2σ t'] dt'. Now let a=2μ, b=-Q^2/σ, c=-2σ. With the substitution v=be^ct', we obtain ∫_0^t e^at' exp[-be^ct'] dt' = (1/c)(1/b)^a/c Γ(a/c, b, be^ct), where Γ(s,t_0,t_1) = ∫_t_0^t_1 t^s-1 e^-t dt is the incomplete generalized gamma function. Substituting for a, b, and c, we obtain ∫_0^t exp(2∫_0^t' μ(t”)dt”)dt' = -(1/(2σ))(-σ/Q^2)^-μ/σ Γ(-μ/σ, -Q^2/σ, -(Q^2/σ)e^-2σ t). With this result Eq. (<ref>) gives (<ref>).

§ SUBCRITICAL PITCHFORK BIFURCATION WITH TIME-DEPENDENT PARAMETER

We consider the prototypical form of a one-dimensional subcritical bifurcation with a time-dependent parameter: u̇ = β(t)u + u^3. Three different trajectories are plotted in Fig. <ref>. For growing β(t), the perturbation experiences bifurcation delay analogous to the case of the growing domain across the primary bifurcation of the RGLE: the perturbation shrinks until β=0, then grows. For shrinking β(t), the basin of attraction is important to consider. Even if β(t) < 0 after some finite time, the initial perturbation can still grow to infinity (see the red lines in Fig. <ref>). This behavior is nonlinear; a generic linear stability analysis fails.

§ HOMOGENEOUS FRONT VELOCITY

We derive here the asymptotic velocity and steepness of a homogeneous front on a domain with fixed length L. There are many ways to do this; here, we use the general technique outlined in <cit.>. First, we obtain the dispersion relation ω(k) = i(μ - k^2/L^2). Now suppose we move into a comoving frame with velocity v^*.
Our goal is to determine which value for v^* allows the amplitude to neither grow nor decay. We determine this velocity by considering a complex wavenumber k^* such that ∂(ω(k) - v^*k)/∂k |_k=k^* = 0. This k^* is a saddle point in the complex plane that, in the long-time limit, provides the dominant contribution to the inverse Fourier transform needed to determine the physical amplitude. We also require that in this frame the amplitude neither grows nor decays: Im[ω(k^*) - v^*k^*] = 0. Putting (<ref>) and (<ref>) together, we get v^* = ∂ω/∂k |_k=k^* = Im ω(k^*)/Im k^*. Let k^* = k_r^* + ik_i^*, where k_r^*, k_i^* ∈ ℝ. Then, substituting (<ref>) into (<ref>), we obtain v^* = 2k_i^*/L^2 - (2k_r^*/L^2)i = (1/k_i^*)(μ - ((k_r^*)^2 - (k_i^*)^2)/L^2). Separating into real and imaginary parts, we find that k^* = i√(μ)L, and hence that v^* = 2√(μ)/L. The front steepness is given by λ^* = Im k^* = √(μ)L.

§ ECKHAUS FRONT VELOCITY

We now derive the linear spreading velocity for Eckhaus fronts on a domain with fixed length L <cit.>. Since these fronts propagate into an Eckhaus-unstable pattern, it is natural to write the amplitude-phase representation of A as A(ξ,t) = a(ξ,t)e^iϕ(ξ,t). Then, defining the wavenumber q(ξ,t) ≡ ϕ_ξ(ξ,t), we can rewrite the RGLE as a_t = (μ - q^2/L^2)a + (1/L^2)a_ξξ - a^3 and q_t = (1/L^2)∂_ξ(q_ξ + 2q ∂_ξ(ln a)). The initial pattern state with uniform wavenumber q_0=Q and amplitude a_0=√(μ - Q^2/L^2) corresponds to a fixed point of the ξ-independent equations. The derivation of the Eckhaus front linear spreading velocity now follows similarly to the homogeneous case. Consider a=a_0+a' and q=q_0+q'. After linearizing, we use the relations a'=a_1(k)e^ikξ-iω t and q'=q_1(k)e^ikξ-iω t to find the dispersion relation: [ -2a_0^2-k^2/L^2+iω, -2Qa_0/L^2; -2Qk^2/(a_0L^2), -k^2/L^2+iω ][ a_1(k); q_1(k) ] = 0. There are two branches of this relation, ω_±(k) = i[-a_0^2 - k^2/L^2 ± √(a_0^4 + 4Q^2k^2/L^4)]. Taking the positive root ω_+(k), we seek a velocity where v^* = ∂ω_+/∂k |_k=k^* = Im ω_+(k^*)/Im k^*. We can solve this equation implicitly for a fixed Q while varying either μ or L to obtain Fig. <ref>.
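Numerically, the implicit saddle-point conditions amount to three real equations for (Re k^*, Im k^*, v^*), which can be solved with a standard root finder. The Python sketch below is illustrative: the parameter values are arbitrary Eckhaus-unstable choices, the principal branch of the complex square root is assumed, and the initial guess may need tuning to land on the physical root:

import numpy as np
from scipy.optimize import fsolve

mu, Q, Lc = 2.0, 1.0, 1.0                 # illustrative Eckhaus-unstable pattern
a0sq = mu - Q ** 2 / Lc ** 2              # squared base amplitude

def omega_plus(k):                        # unstable branch, complex k allowed
    return 1j * (-a0sq - k ** 2 / Lc ** 2
                 + np.sqrt(a0sq ** 2 + 4 * Q ** 2 * k ** 2 / Lc ** 4 + 0j))

def domega(k):                            # d omega_+ / dk
    root = np.sqrt(a0sq ** 2 + 4 * Q ** 2 * k ** 2 / Lc ** 4 + 0j)
    return 1j * (-2 * k / Lc ** 2 + 4 * Q ** 2 * k / (Lc ** 4 * root))

def conditions(x):
    kr, ki, v = x
    k = kr + 1j * ki
    g = domega(k) - v                     # group-velocity condition (complex)
    env = omega_plus(k).imag - v * ki     # no growth in the comoving frame
    return [g.real, g.imag, env]

kr, ki, v = fsolve(conditions, [0.5, 1.0, 1.0])
print("linear spreading velocity v* =", v)
# Replacing omega_plus by i*(mu - k**2/Lc**2) recovers v* = 2*sqrt(mu)/Lc,
# the homogeneous-front result of the previous appendix.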
"authors": [
"Troy Tsubota",
"Chang Liu",
"Benjamin Foster",
"Edgar Knobloch"
],
"categories": [
"nlin.PS"
],
"primary_category": "nlin.PS",
"published": "20231127230338",
"title": "Bifurcation delay and front propagation in the real Ginzburg-Landau equation on a time-dependent domain"
} |
Value-Based Reinforcement Learning for Digital Twins in Cloud Computing

Van-Phuc Bui, IEEE Student Member, Shashi Raj Pandey, IEEE Member, Pedro M. de Sant Ana, Petar Popovski, IEEE Fellow

V.-P. Bui, S.R. Pandey, and P. Popovski (emails: {vpb, srp, fchi, petarp}@es.aau.dk) are all with the Department of Electronic Systems, Aalborg University, Denmark. P. M. de Sant Ana is with the Corporate Research, Robert Bosch GmbH, 71272 Renningen, Germany (email: [email protected]). This work was supported by the Villum Investigator Grant “WATER” from the Velux Foundation, Denmark.

====================================================================

The setup considered in the paper consists of sensors in a Networked Control System that are used to build a digital twin (DT) model of the system dynamics. The focus is on control, scheduling, and resource allocation for sensory observation to ensure timely delivery to the DT model deployed in the cloud. Low latency and communication timeliness are instrumental in ensuring that the DT model can accurately estimate and predict system states. However, acquiring data for efficient state estimation and control computing poses a non-trivial problem given the limited network resources, partial state vector information, and measurement errors encountered at distributed sensors. We propose the REinforcement learning and Variational Extended Kalman filter with Robust Belief (REVERB) framework, which leverages a reinforcement learning solution combined with a Value of Information-based algorithm for performing optimal control and selecting the most informative sensors to satisfy the prediction accuracy of the DT. Numerical results demonstrate that the DT platform can offer satisfactory performance while reducing the communication overhead up to five times.

Digital twin, Reinforcement Learning, Internet of Things, Dynamic Systems, Sensor Networks.

§ INTRODUCTION

The Industry 4.0 smart manufacturing paradigm necessitates the acquisition of substantial volumes of real-time data, which emanate from a diverse array of wireless sensors <cit.>. In contrast to conventional simulation tools or optimization methodologies, digital twin (DT) models transform these extensive datasets into predictive models. These models allow the emulation of potential control strategies, which in turn supports real-time interactions and decision-making for system operators <cit.>. A networked control system (NCS) represents the physical world, while a DT is situated either in the cloud, catering to extensive physical systems, or at the edge, tailored to local physical systems. The network comprises sensor devices and/or central/distributed units integral to a 5G system or beyond <cit.>. The acquired knowledge from the DT model serves twofold purposes: controlling the physical world, and providing monitoring and forecasting of future states.
Given the intricate interplay between communication and computation systems, devising a joint design strategy poses challenges in preserving the predictive efficacy of the DT model and computing accurate control signals while simultaneously extending the operational lifespan of the network <cit.>. There has been substantial research on the use of Reinforcement Learning (RL) at the DT to perform various tasks <cit.>. In <cit.>, the issue of scheduling IoT sensors is examined through the use of Value of Information (VoI) while taking into account the limitations of communication and reliability. The ultimate goal is to reduce the Mean Squared Error (MSE) of state estimation, which is often hindered by imprecise measurements. Additionally, the authors in <cit.> offer a possible resolution for scheduling sensing agents through the utilization of VoI, ultimately leading to enhanced accuracy in a variety of summary statistics for state estimation. These aforementioned works <cit.> and the references therein, however, do not consider the influence of control performance in the physical world and the strategy of selecting effective sensing agents based on the reliability of estimates and latency requirements. This paper addresses an optimization problem that involves managing the optimal control, the accuracy of state estimation, and the power consumption in an NCS, illustrated in Fig. <ref>. Alongside this, a scheduling algorithm is introduced for the sensing agents (SAs), which takes into account the quality of their observations as well as the communication constraints of the DT model. Our contributions are listed as follows: (i) we introduce a DT architecture tracking dynamic changes of the system parameters and controlling the system dynamics; (ii) we propose an uncertainty control reinforcement learning framework that learns to perform actions while controlling the uncertainty of the state estimates; (iii) we formulate a novel optimization problem to efficiently schedule sensing agents for maintaining the confidence of the DT's system estimate while minimizing the energy cost under latency requirements, and we then propose a VoI-based algorithm resulting in a practical and efficient solution in polynomial time; (iv) numerical simulations were conducted to evaluate the algorithm's performance, confirming that it surpasses other benchmarks in both control performance and power consumption while improving the DT estimation error.

§ DIGITAL TWIN ARCHITECTURE

§.§ System Model

We adopt a DT architecture, as illustrated in Fig. <ref>, including a single primary agent (PA) and a set of sensing agents (SAs) denoted by ℳ = {1,2,…,M}. These SAs are responsible for observing the environment and communicating with the access point (AP) through a wireless channel, which facilitates the construction of the DT model for the PA and operates in Frequency Division Duplexing (FDD) mode. We assume the communication link between the AP and the cloud is perfect. The communication diagram is illustrated in Fig. <ref>, where T_config accounts for the configuration and computing time of the DT. At the beginning of each query interval (QI), the DT model is required to update the active state of the SAs, after which it estimates the full system state, updates the policy, computes the optimal control signal, schedules at most C SAs, and applies a fusion algorithm at the AP. Once a control command is generated, the controller promptly transmits it through a downlink channel to the physical world.
The application output for actuator control, such as motor drives, retrieves the most recently stored command values from memory and applies them to drive the system dynamics. Each QI takes place at time instances t ∈ 𝒯 = {1, 2, …, T}. To maintain synchronization and consistency, the SAs periodically synchronize their DTs, including locations and power budget status, with the cloud platform, ensuring a high degree of reliability and managing the overhead of synchronization. The PA engages in interactions with the environment, operating within a K-dimensional process 𝒦 = {1,2,…,K}. The state at the t-th QI is 𝐬_t = [s_t^1, s_t^2, …, s_t^K]^T, and its evolution is described as 𝐬_t = f(𝐬_t-1) + 𝐁𝐚_t-1 + 𝐮_t, ∀ t ∈ 𝒯. Here, f:ℝ^K→ℝ^K denotes the state update function, 𝐚_t-1 is the control signal, and the matrix 𝐁 describes how the control impacts the dynamics. 𝐮_t ∼ 𝒩(0,𝐂_𝐮) is the process noise. At QI t, a D-dimensional observation is received at SA_m (m ∈ ℳ) as 𝐨_t,m = g(𝐬_t) + 𝐰_t,m ∈ ℝ^D (D ≤ K), corresponding to the PA's state. To simplify the analysis, the observation 𝐨_t,m is assumed to be linearly dependent on the system state, that is, 𝐨_t,m = 𝐇_m𝐬_t + 𝐰_t,m, ∀ m ∈ ℳ, with the observation matrix 𝐇_m ∈ ℝ^D×K and the measurement noise 𝐰_t,m ∼ 𝒩(0, 𝐂_𝐰_m). The covariance matrices 𝐂_𝐮 and 𝐂_𝐰_m are generally not diagonal. Let τ^max be the maximum tolerable latency for transmitting SA_m's information; the system fulfills the application reliability at a QI t if ℙ[τ_t,m > τ^max] ≤ ε, ∀ m ∈ ℳ, t ∈ 𝒯, with ε an outage probability parameter depending on the system characteristics <cit.>.

§.§ Problem Formulation

The DT model's objective is to uphold a precise estimate of the PA's state and offer the optimal sequence of actions to be executed in the physical realm based on its beliefs about the states. Herein, the predicted estimator ŝ_t of 𝐬_t is modeled with p(𝐬_t) ∼ 𝒩(ŝ_t, Σ_t), t ∈ 𝒯. The MSE of the estimator is MSE_t = 𝔼[||ŝ_t - 𝐬_t||^2_2], t ∈ 𝒯. We define the maximum acceptable standard deviation for feature k ∈ 𝒦 as ξ_k. This corresponds to the following condition: √([Σ_t]_k) ≤ ξ_k, ∀ k ∈ 𝒦, where [Σ_t]_k is the k-th element of the diagonal of Σ_t. Defining V^π(𝐬_0) as the value function of controlling the PA in (<ref>) under control policy π and 𝐩^tx_t as the transmit power consumed to forward the observations 𝐨_t ≜ {𝐨_t,m} from the SAs to the AP at QI t, we are interested in jointly minimizing the sum power consumption and delivering optimal control signals. We introduce the ultimate goal h({𝐩^tx_t}, {𝐚_t}) = [V^π(𝐬_0), -∑_t=0^∞ 𝐩^tx_t]^T, then formulate the optimization problem as: maximize over {𝐩^tx_t}, {𝐚_t} the objective h({𝐩^tx_t}, {𝐚_t}), subject to (<ref>), (<ref>), (<ref>). It is noted that we consider a scenario where the belief vector can be enhanced through estimation techniques facilitated by the DT prior to being utilized by the RL agent to suggest the optimal action as an output. By employing this approach, the accuracy of noisy observations can be enhanced, enabling the agent to make more precise decisions. In the following, we propose the REVERB (REinforcement learning and Variational Extended Kalman filter with Robust Belief) framework, a two-step approach to address problem (<ref>): (i) we employ an uncertainty control RL algorithm to devise control actions for the physical world while effectively managing the state estimation errors; (ii) the VoI-based SA scheduling with optimal power control algorithm is utilized to identify the most significant SAs for observing the sensing signals, guided by the requirements from the RL model and the DT.
To further enhance the accuracy of the estimated states and forecast the forthcoming system state, the Extended Kalman Filter (EKF) technique is revised and integrated.

§ UNCERTAINTY CONTROL POMDP

The control problem in (<ref>) is considered as a Partially Observable Markov Decision Process (POMDP), which expands upon the MDP by incorporating the sets of observations and observation probabilities relating observations to actual states, because the provided observations only offer partial and potentially inaccurate information. In particular, a POMDP is presented by a 7-tuple ⟨𝒮, 𝒜, 𝒪, 𝙿, 𝙾, r, γ⟩, where 𝒮 is the finite set of possible states, 𝒜 is a set of control primitives, and 𝒪 denotes a set of possible observations. At a time instant t, the agent makes an action 𝐚_t to move from state 𝐬_t to 𝐬_t+1 with the transition probability 𝙿 = ℙ[𝐬_t+1|𝐬_t,𝐚_t]. An observation 𝐨_t+1 received from the SAs tracking the system's state occurs with probability 𝙾 = ℙ[𝐨_t+1|𝐬_t+1,𝐚_t]. Upon the transition, the agent receives a numerical reward r(𝐬_t, 𝐚_t, 𝐬_t+1) verifying r(𝐬_t,𝐚_t,𝐬_t+1) ≤ r^max. An agent does not know its state at QI t exactly; instead, it maintains an estimate vector ŝ_t describing the probability of being in a particular state 𝐬_t ∈ 𝒮. We define π as a policy of the agent that specifies an action 𝐚_t based on its policy π(ŝ, 𝐚). With initial belief ŝ_0, the expected future discounted reward for policy π(ŝ,𝐚) is given as V^π(ŝ_0) = 𝔼[∑_t=0^∞ γ_t r(𝐬_t,𝐚_t,𝐬_t+1) | ŝ_0, π], where 0 < γ_t < 1 is the discount factor. At QI t, the estimated state vector is ŝ_t = [ŝ_t^1, ŝ_t^2, …, ŝ_t^K]^T ∈ ℝ^K, which is governed by ŝ_t ∼ p(ŝ_t|𝐬_t; η_t), the conditional probability distribution function (pdf) of ŝ_t given 𝐬_t. Here η_t = [η_t,1, η_t,2, …, η_t,K]^T ∈ ℝ^K is the accuracy vector, whose element η_t,k indicates the accuracy of the k-th process of the estimated state ŝ_t. In this work, we define η_t,k = 1/[Σ_t]_k, ∀ k ∈ 𝒦. As η_t,k increases, the confidence level of ŝ_t,k also increases, facilitating the RL agent in making accurate decisions. Nevertheless, achieving high reliability of ŝ_t,k necessitates low measurement error or multiple observations, consequently amplifying both communication and processing expenses. Given 𝐬_t and η_t, the ŝ_t,k are assumed to exhibit statistical independence, meaning that we can express p(ŝ_t|𝐬_t; η_t) = ∏_k∈𝒦 p(ŝ_t,k|𝐬_t; η_t) in terms of their factorization.

§.§.§ Actor-critic DRL Algorithm

For training the policy π(ŝ,𝐚), we employ Proximal Policy Optimization (PPO) with the actor-critic structure at the RL agents, which involves dividing the model into two distinct components, thus harnessing the strengths of both value-based and policy-based methods <cit.>. Specifically, the actor is primarily responsible for estimating the policy, which dictates the agent's actions in a given state, and the critic is dedicated to estimating the value function predicting the expected future reward for a particular state or state-action pair. The actor's policy undergoes refinement based on the feedback provided by the critic. To address the joint design problem with the objective of optimizing actions while minimizing communication energy, we implement the Deep RL (DRL) approach within the DT cloud environment, where the action and reward need to be redefined.

§.§.§ Action Space Reformulation

We focus on the issue of optimizing the accuracy of the estimated state by the RL agent, enabling it to select η_t,k on a continuous scale.
The underlying motivation for the proposed framework is to unveil the inherent characteristics of the observation space in terms of the informational value that the observations offer for the given task. The formulation of the action vector structure within the RL agent implemented in the DT is represented as 𝐚_t = [a_t,1, …, a_t,Z, η_t,1, …, η_t,K] ∈ ℝ^Z+K, where 𝒵 (|𝒵| = Z) is the action space and {a_t,z}_z∈𝒵 correspond to the control signals that exert an influence on the physical environment, enabling the agent to advance towards its objective. Additionally, η_t,k ∈ [0,∞) denotes the accuracy selection pertaining to the estimated state.

§.§.§ Reward Function Reformulation

It is imperative for the RL agent to not only navigate towards the primary objective defined for the problem but also acquire the ability to regulate the acceptable level of accuracy {η_t,k}_k∈𝒦. Consequently, the goal-based reward r_t is transformed into an uncertainty-based reward r̃_t = f(r_t, η_t), wherein f(·) is a monotonically non-decreasing function of r_t and η_t. In scenarios where a direct cost function, denoted as c_k(·), exhibits an upward trend with the accuracy of the observation o_t,k, a suitable additive formulation can be employed. Specifically, the modified reward, r̃_t, can be expressed by r̃_t = r_t + κ∑_k=1^K c_k(η_t,k). Here, c_k(η_t,k) represents a non-increasing function of η_t,k, and κ ≥ 0 serves as a weighting parameter. Therefore, the primary objective of the agent is two-fold: to maximize the original reward while simultaneously minimizing the cost associated with the observations.

§ VOI-BASED SA SCHEDULING AND POWER CONTROL

In this section, we schedule the SAs based on three factors: (i) the acceptable level of accuracy η_t for the estimated state, as determined by the RL agent; (ii) the requirement pertaining to the accuracy of the DT model as in (<ref>); and (iii) the communication resources, determined by the system capacity and latency requirements (<ref>).

§.§ Sensing Agent Scheduling Problem

We formulate a combined SA scheduling and power control problem, where our objective is to determine the SAs that should engage in transmission. We introduce auxiliary variables ξ̅_t,k^2 indicating the desired DT error level of the system's state at QI t. Given the reliability requirement for the DT in (<ref>) and the requisite level of accuracy η_t to uphold the precision of the RL model, the DT should meet the error constraints at QI t as [Σ_t]_k ≤ ξ̅_t,k^2 ≜ min{ξ_k^2, 1/η_t,k}, ∀ t ∈ 𝒯, k ∈ 𝒦. Initially, we establish the available set 𝒫_t and the scheduling set ℳ^*_t at QI t as 𝒫_t = ℳ, ℳ^*_t = ∅. Denoting the power allocation vector 𝐩^tx_t = {p^tx_t,m}_m∈ℳ as the variable, we formulate the optimization problem as: minimize over 𝐩^tx_t the objective (1-α)∑_k∈𝒦 max{[Σ_t]_k/ξ̅^2_t,k - 1, 0} + α∑_m∈ℳ^*_t p^tx_t,m, subject to |ℳ^*_t| ≤ C and ℙ[τ_t,m > τ^max] ≤ ε, ∀ m ∈ ℳ^*_t, wherein the non-negative parameter α ∈ [0,1] represents the relative weight given to accuracy and energy efficiency within the underlying objective function. It is observed that the objective (<ref>) represents a relaxation of constraint (<ref>) dictated by practical conditions, i.e., in situations where the error surpasses a certain threshold, even querying all sensor data fails to guarantee the desired level of reliability ξ̅^2_t,k. We note that obtaining observations from additional SAs leads to an enhancement in estimation accuracy; however, this improvement comes at the cost of compromising energy efficiency.
For those particular SAs that exhibit considerable errors in their measurements or possess features that do not significantly contribute to meeting the confidence requirements of the PA (i.e., those with a low VoI), measuring and transmitting observations leads to an unnecessary expenditure of energy. The constraints given by (<ref>) and (<ref>) are required to ensure the communication capacity and the reliable satisfaction of the latency requirement within an FDMA uplink slot. It is important to note that the optimization problem presented as (<ref>) is inherently non-convex due to the non-convex nature of the objective function (<ref>), as well as the constraints (<ref>), (<ref>). Furthermore, the node selection aspect renders the problem analogous to the classic NP-hard knapsack problem. To derive an efficient suboptimal solution, a heuristic algorithm based on the EKF is employed.

§.§ Sensing Agent Selection with Extended Kalman Filter

Due to the complexity of sensor selection based on VoI, we adopt a heuristic approach. The primary concept guiding the resolution of (<ref>) is to ensure that, during each QI t, the minimum necessary number of SAs is selected for transmission. This selection aims to sustain the desired level of certainty in estimating the state 𝐬_t while saving communication resources for other purposes. The expression for the Minimum Mean Square Error (MMSE) estimator applied to a KF is provided in <cit.>. In this work, aiming to minimize the (weighted) variance of the state components, we employ the Extended Kalman estimator, which is common in the IoT literature <cit.>. We also assume that the virtual environment possesses complete awareness of the process statistics, encompassing the update function f(𝐬) as well as the noise covariance matrices. The primary strategy to solve (<ref>) involves selecting the minimum number of SAs to transmit at each QI t in order to maintain the required level of estimation certainty for state 𝐬_t as specified by {ξ̅_t,k^2}. Our proposed algorithm, summarized in Algorithm <ref>, effectively addresses problem (<ref>). In the dynamic setup, the initial state 𝐬_0 is considered a random vector characterized by a specific mean 𝔼[𝐬_0] = μ_𝐬_0 and covariance matrix Cov[𝐬_0] = 𝐂_𝐬_0. ℳ^*_t is initialized as an empty set due to the absence of any prior information. The EKF then calculates the estimation errors for the belief ŝ_t ∼ 𝒩(μ_ŝ_t, Σ^pr_t) at the PA based on the prior update from ŝ_t-1, as described by Σ^pr_t = 𝐅Σ_t-1𝐅^T + 𝐂_𝐮, where the Jacobian matrix 𝐅 = 𝒥{f(ŝ_t-1)} linearizes the nonlinear model f(ŝ_t-1). Subsequently, for a given set of error variance targets {ξ̅_t,k^2}_t∈𝒯, k∈𝒦, the conditions specified in (<ref>) lead to two potential cases: (i) if (<ref>) is satisfied for all k ∈ 𝒦, the DT model achieves the bounds without requiring the SAs' observations; the prior update suffices to establish the necessary confidence in the estimate, resulting in an empty scheduling set ℳ^*_t = ∅; (ii) if any of the conditions is violated, at least one process feature lacks sufficient accuracy, and the acquisition of the corresponding observations becomes essential to enhance the estimation process, as dictated by the scheduling approach implemented in our proposed heuristic. In the first scenario, the belief ŝ_t is obtained through the EKF blind update as ŝ_t = ŝ^pr_t = f(ŝ_t-1) + 𝐁𝐚_t-1, ∀ t ∈ 𝒯. In the second case, Algorithm <ref> is utilized to identify the SAs with the highest VoI for querying their observations.
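For illustration, the prior update and the two-case test above can be sketched in a few lines of Python; the model f, its Jacobian, and all numerical values are hypothetical stand-ins rather than the system used in our experiments:

import numpy as np

def ekf_prior(s_prev, Sigma_prev, a_prev, f, F_jac, B, C_u):
    # blind/prior update: s_pr = f(s) + B a,  Sigma_pr = F Sigma F^T + C_u
    F = F_jac(s_prev)
    return f(s_prev) + B @ a_prev, F @ Sigma_prev @ F.T + C_u

# hypothetical 2-D nonlinear model, for illustration only
f = lambda s: np.array([s[0] + s[1], 0.9 * s[1]])
F_jac = lambda s: np.array([[1.0, 1.0], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C_u = 1e-4 * np.eye(2)

s_pr, Sigma_pr = ekf_prior(np.zeros(2), 0.01 * np.eye(2),
                           np.array([0.5]), f, F_jac, B, C_u)
xi_bar_sq = np.array([0.02, 0.02])   # per-feature bounds min{xi_k^2, 1/eta_k}
print(np.any(np.diag(Sigma_pr) > xi_bar_sq))  # True -> run the greedy selection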
In order to identify the most suitable candidate feature s_t,k^*, where k ∈ 𝒦, an optimization problem is formulated at the i-th iteration as s_t,k^* = argmax over s_t,k of [Σ^(i)_t]_k/ξ̅^2_t,k, subject to the existence of an available SA_m ∈ 𝒫_t with SA_m → s_t,k, where the notation SA_m → s_t,k signifies that SA_m measures feature s_t,k. We note that at the i-th iteration, if any constraint in (<ref>) is still unmet and |ℳ^*_t| < C, |𝒫_t| > 0, there is room for scheduling new SAs to join ℳ^*_t. According to the constraint in (<ref>), feature s_t,k^* is selected only if at least one SA_m ∈ 𝒫_t can provide corresponding observations. Then, the SA_m^* ∈ 𝒫_t measuring feature s_t,k^* with the minimum error covariance is chosen to send its measurement. The scheduled and available sets ℳ^*_t and 𝒫_t are updated as ℳ^*_t ← ℳ^*_t ∪ {SA_m^*} and 𝒫_t ← 𝒫_t ∖ {SA_m^*}. 𝐇_t and 𝐂_𝐰_t are the combined observation and covariance matrices, which are respectively formulated as 𝐇_t = [𝐇_1; 𝐇_2; …; 𝐇_|ℳ^*_t|] and 𝐂_𝐰_t = diag[𝐂_𝐰_1, 𝐂_𝐰_2, …, 𝐂_𝐰_|ℳ^*_t|], where 𝐇_m is the observation matrix of SA_m (m ∈ ℳ^*_t). From here, we can compute the EKF gain by 𝐊_t = Σ^pr_t𝐇_t^T(𝐂_𝐰_t + 𝐇_tΣ^pr_t𝐇_t^T)^-1. The posterior error covariance matrix is derived by Σ^pos_t = (𝐈 - 𝐊_t𝐇_t)Σ^pr_t. The iterative loop continues while all three conditions are true: (i) |ℳ^*_t| < C; (ii) there exists k with [Σ^pos_t]_k ≥ ξ̅^2_t,k; and (iii) there exists a feasible s_t,k^* satisfying (<ref>). Intuitively, the loop repeats for at most C iterations before terminating. The posterior update is then ŝ^pos_t = ŝ^pr_t + 𝐊_t(𝐨_t - 𝐇_tŝ^pr_t), where 𝐨_t represents the combination of received observations. Accordingly, we update ŝ_t = ŝ^pos_t. Our approach ensures a long-term balance between state certainty and communication cost with respect to the PA's requirements, despite the scheduling solution being computed locally at each QI.

§.§ Optimal Power Scheduling

Both DL and UL transmissions are subject to potential latency during data delivery, which is influenced by the channel quality and the scheduled transmission resource. The channel between SA_m and the AP is modeled as a Rician channel with a strong LoS link and small-scale fading with rich scattering, where the instantaneous Signal-to-Noise Ratio (SNR) is modeled as γ_t,m = (Γ p_t,m^tx/(d^α_m W N_0))𝒢_m, ∀ m ∈ ℳ, where Γ is a constant depending on the system parameters (operating frequency and antenna gain), p_t,m^tx is the transmitted power, and d_m represents the distance between device m and the AP. α is the path loss exponent, W is the allocated bandwidth, and N_0 stands for the noise power spectral density. 𝒢_m represents the fading power with expected value 𝒢̅ ≜ 𝔼[|𝒢_m|^2] = 1. Herein, we adopt 𝒢_m as Rician distributed with a non-central chi-square probability distribution function (PDF), which is expressed as <cit.> f_𝒢(w) = ((G+1)e^-G/𝒢̅)exp(-(G+1)w/𝒢̅)I_0(2√(G(G+1)w/𝒢̅)), where w ≥ 0 and I_0(·) denotes the zero-order modified Bessel function of the first kind, while G represents the Rician factor, signifying the ratio between the power within the Line-of-Sight (LoS) component and the power distributed among the non-LoS multipath scatterers. We use Shannon's bound for the channel capacity R_t,m of every link, which is R_t,m = Wlog_2(1+γ_t,m).
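Before turning to power control, we note that the selection loop just described admits a compact sketch; it assumes scalar single-feature sensors (D=1) for simplicity, and all matrices below are illustrative:

import numpy as np

def greedy_voi_schedule(Sigma_pr, xi_bar_sq, sensors, C_max):
    # sensors: list of (H_m, var_m), H_m a 1 x K row, var_m its noise variance
    chosen, avail = [], list(range(len(sensors)))
    Sigma = Sigma_pr.copy()
    K_dim = Sigma_pr.shape[0]
    while len(chosen) < C_max and avail:
        ratio = np.diag(Sigma) / xi_bar_sq
        if np.all(ratio < 1.0):
            break                              # all error bounds already met
        k_star = int(np.argmax(ratio))         # worst-covered feature
        cands = [m for m in avail if sensors[m][0][0, k_star] != 0.0]
        if not cands:
            break                              # no available sensor sees k_star
        m_star = min(cands, key=lambda m: sensors[m][1])   # min error variance
        chosen.append(m_star); avail.remove(m_star)
        H = np.vstack([sensors[m][0] for m in chosen])
        Cw = np.diag([sensors[m][1] for m in chosen])
        gain = Sigma_pr @ H.T @ np.linalg.inv(Cw + H @ Sigma_pr @ H.T)
        Sigma = (np.eye(K_dim) - gain @ H) @ Sigma_pr
    return chosen, Sigma

sensors = [(np.array([[1.0, 0.0]]), 0.01), (np.array([[0.0, 1.0]]), 0.04)]
print(greedy_voi_schedule(0.05 * np.eye(2), np.array([0.02, 0.02]), sensors, 2)[0])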
The optimal transmission power for each scheduled SA is computed through the following lemma. Given the Rician factor G, the transmission bandwidth W, and the distance d_m, the optimal allocated power p^*_t,m of SA_m to satisfy (<ref>) is lower bounded by p^*_t,m = 2WN_0(1+G)(2^D/(τ^max W) - 1)/(y_Q^2 d_m^-α Γ), where y_Q = √(2G) + (1/2)Q^-1(ε)log(√(2G)/(√(2G) - Q^-1(ε))) - Q^-1(ε), D is the length of the data packet, and Q^-1(·) is the inverse Q-function. The outage probability (<ref>) is formulated as ℙ[τ_t,m > τ^max] = ℙ[Wlog_2(1+γ_t,m) < D/τ^max] = ℙ[γ_t,m < 2^D/(Wτ^max) - 1] ≜ 1 - Q(x_τ, y_τ) ≤ ε, where x_τ = √(2G) and y_τ = √(2(G+1)(2^D/(Wτ^max) - 1)𝒢/γ_t,m), with Q(x_τ, y_τ) as the first-order Marcum Q-function. At the maximum tolerable value of ε, y_Q = Q^-1(x_τ, 1-ε) is the inverse Marcum Q-function with respect to the second argument. In this study, we consider a Rician channel with a strong LoS component, i.e., G > G_0^2/2 and Q^-1(ε) ≠ 0, which yields the approximated form of y_τ as <cit.> y_τ = √(2G) + (1/2)Q^-1(ε)log(√(2G)/(√(2G) - Q^-1(ε))) - Q^-1(ε), where G_0 is the intersection of the sub-functions at x_τ > max[0, Q^-1(ε)]. Plugging this into (<ref>), we obtain (<ref>) and complete the proof.

§ NUMERICAL RESULTS

We examine our proposed REVERB algorithm with the MountainCarContinuous-v0 environment from the OpenAI Gym <cit.>. The state vector 𝐬_t = [x_t, ẋ_t]^T includes the position x_t and velocity ẋ_t. The observation matrices are given by 𝐇_pos = [1,0] and 𝐇_vel = [0,1], respectively, for the position and velocity. Other important parameters are included in Table <ref>. In the DT system, the agent makes decisions concerning the applied force a_t ∈ [-1,1] on the car and selects an accuracy level denoted by η_t = [η_t,1, η_t,2]^T. The original reward for each QI in the environment is r_t = -0.1 × a_t^2. In our system, we adopt a modified reward indicated in (<ref>) as r̂_t = r_t + κ×(1/2∑_i=1^2 η_t,i), with the positive weighting parameter κ = 5×10^-6. Additionally, the original environment includes a termination reward when the car successfully reaches the target position at 0.45. The evolution of the state in (<ref>) is defined with f(𝐬_t-1) = [ẋ_t-1; -φcos(3x_t-1)] and 𝐁 = [0, ϑ]^T, where the constants are φ = 0.0025 and ϑ = 0.0015. The M SAs are assumed to be placed randomly within an area with maximum distance d^max from the SAs to the AP. We compare our proposed REVERB with four other benchmarks: Perfect allows the DT to get the observation of the next state without any error; Cost-based greedy indicates that the AP queries the nearest SAs in ascending order of their distance to the AP at each QI; Error-based greedy <cit.> is similar to Cost-based, but it relies on the decreasing confidence levels of the SAs. In the greedy benchmarks, |ℳ^*_t| = C. We also consider the Traditional scheme <cit.>, where the RL agent gets a noisy observation from one SA every QI without Algorithm <ref>. All schemes are conducted under 1000 Monte Carlo simulations. Figure <ref>(a), (b) and (c) compare the trajectory evolution and coordinated force of the REVERB, Perfect, and Traditional schemes. It is observed that REVERB's trajectory is close to the Perfect one, using only one step more to reach the goal. On the contrary, the network trained with the Traditional scheme must use up to 2× the number of steps to reach the goal. These results confirm REVERB's reliability regarding force adjustment under a noisy environment. Fig. <ref>(d)-(f) depict the noise levels and number of selected SAs over the whole observation space. When comparing Fig. <ref>(d)-(e) to Fig.
§ NUMERICAL RESULTS

We examine our proposed REVERB algorithm with the MountainCarContinuous-v0 environment from the OpenAI Gym <cit.>. The state vector 𝐱_t = [x_t, ẋ_t]^T includes position x_t and velocity ẋ_t. The observation matrices are given by H_pos = [1, 0] and H_vel = [0, 1] for the position and velocity, respectively. Other important parameters are included in Table <ref>. In the DT system, the agent makes decisions concerning the applied force a_t ∈ [−1, 1] on the car and selects an accuracy level denoted by η_t = [η_t,1, η_t,2]^T. The original reward for each QI in the environment is r_t = −0.1 × a_t^2. In our system, we adopt a modified reward indicated in (<ref>) as

r̂_t = r_t + κ × (1/2 ∑_{i=1}^{2} η_t,i),

where the positive weighting parameter κ = 5 × 10^{−6}. Additionally, the original environment includes a termination reward when the car successfully reaches the target position at 0.45. The evolution of the state in (<ref>) is defined with

f(𝐱_{t−1}) = [ẋ_{t−1}; −φ cos(3x_{t−1})], 𝐁 = [0, ϑ]^T,

where the constants φ = 0.0025 and ϑ = 0.0015. M devices are assumed to be placed randomly within an area with a maximum distance d^max between device and AP.

We compare our proposed REVERB against four benchmarks: Perfect allows the DT to get the observation of the next state without any error; Cost-based greedy indicates that the AP queries the nearest devices in ascending order of distance from the AP at each QI; Error-based greedy <cit.> is similar to Cost-based but relies on the decreasing confidence levels of the devices. In the greedy benchmarks, |𝒮_t^*| = C. We also consider the Traditional scheme <cit.> where the RL agent gets a noisy observation from one device every QI, without Algorithm 1. All schemes are evaluated over 1000 Monte Carlo simulations.

Figure <ref>(a), (b) and (c) compare the trajectory evolution with coordinated force of the REVERB, Perfect, and Traditional schemes. It is observed that REVERB's trajectory is close to that of Perfect, using only one extra step to reach the goal. In contrast, the network trained with the Traditional scheme needs up to twice the number of steps to reach the goal. These results confirm REVERB's reliability regarding force adjustment under a noisy environment.

Fig. <ref>(d)-(f) depict the noise levels and the number of selected devices over the whole observation space. When comparing Fig. <ref>(d)-(e) to Fig. <ref>(f), we notice that the DT gathers more data as uncertainty increases and the agent approaches the target (position ≥ 0.45). In scenarios where the noise level is high but the agent is still far from the target (position < −0.5), requesting additional observations to improve control efficiency is considered ineffective. It is worth noting that this ineffectiveness does not have an adverse impact on the original task's performance.

Figure 4 illustrates the different schemes' power consumption and mean root-mean-square error (MRMSE). REVERB proves to be the most efficient, consuming the least power and achieving the lowest MRMSE compared to the other baselines. On average, REVERB consumes 1.09 [W] to reach the goal, compared to 5.98 [W] for the Error-based algorithm. Even though the Traditional scheme only queries 2 devices per QI, its power consumption is up to 5.4 [W] because it takes more QIs to reach the goal, as in Fig. <ref>(c). At the same time, REVERB's MRMSE is always maintained at approximately the same low level as the Cost-based and Error-based methods. The results achieved in Fig. 4 are explained by a snapshot of the uncertainty evolution and the strategic selection of devices in Fig. 5, based on their contributions to the DT's performance. It is noteworthy that the management of REVERB's uncertainty is subject to the control of both the DT and RL requirements, as follows from (<ref>). The requirements imposed on the RL agent's behavior are primarily enforced through the reward function in (<ref>). Hence, the DT only requests more observations when the DT threshold is surpassed or when the RL agent requires high precision, typically as the agent nears its goal and precise force control is imperative.

§ CONCLUSIONS

This work introduced a DT framework for reliable monitoring, prediction, and control of a communication system. Under the latency constraint, the DT platform was shown to obtain more reliable control and trajectory tracking than conventional methods while significantly saving communication cost. This result is achieved thanks to the proposed uncertainty-control POMDP scheme combined with an efficient algorithm that selects subsets of devices to meet the confidence requirements of the state estimation. Future study could explore the long-term impacts of scheduling decisions in complex systems and incorporate deep learning-based estimators. | http://arxiv.org/abs/2311.15985v1 | {
"authors": [
"Van-Phuc Bui",
"Shashi Raj Pandey",
"Pedro M. de Sant Ana",
"Petar Popovski"
],
"categories": [
"eess.SP"
],
"primary_category": "eess.SP",
"published": "20231127162934",
"title": "Value-Based Reinforcement Learning for Digital Twins in Cloud Computing"
} |
Acronyms: AWS: Amazon Web Services; BPU: branch prediction unit; CaaS: container-as-a-service; CSP: cloud service provider; FaaS: function-as-a-service; KVM: kernel-based virtual machine; LFB: line-fill buffer; MDS: microarchitectural data sampling; OS: operating system; PoC: proof-of-concept; RIDL: Rogue In-flight Data Load; SMT: simultaneous multi-threading; TAA: TSX Asynchronous Abort; VM: virtual machine; VMM: virtual machine manager.

Firecracker is a virtual machine manager (VMM) purpose-built by Amazon Web Services (AWS) for serverless cloud platforms—services that run code for end users on a per-task basis, automatically managing server infrastructure. Firecracker provides fast and lightweight VMs and promises a combination of the speed of containers, typically used to isolate small tasks, and the security of VMs, which tend to provide greater isolation at the cost of performance. This combination of security and efficiency, AWS claims, makes it not only possible but safe to run thousands of user tasks from different users on the same hardware, with the host system rapidly and frequently switching between active tasks. Though AWS states that microarchitectural attacks are included in their threat model, this class of attacks directly relies on shared hardware, just as the scalability of serverless computing relies on sharing hardware between unprecedented numbers of users.

In this work, we investigate just how secure Firecracker is against microarchitectural attacks. First, we review Firecracker's stated isolation model and recommended best practices for deployment, identify potential threat models for serverless platforms, and analyze potential weak points. Then, we use microarchitectural attack proof-of-concepts to test the isolation provided by Firecracker and find that it offers little protection against Spectre or MDS attacks. We discover two particularly concerning cases: 1) a Medusa variant that threatens Firecracker VMs but not processes running outside them, and is not mitigated by defenses recommended by AWS, and 2) a Spectre-PHT variant that remains exploitable even if recommended countermeasures are in place and SMT is disabled in the system. In summary, we show that AWS overstates the security inherent to the Firecracker VMM and provides incomplete guidance for properly securing cloud systems that use Firecracker.

CCS Concepts: Security and privacy → Virtualization and security; Security and privacy → Side-channel analysis and countermeasures.
§ INTRODUCTION

Serverless computing is an emerging trend in cloud computing where CSP serve runtime environments to their customers. This way, customers can focus on maintaining their function code while leaving the administrative work related to hardware, OS, and sometimes runtime to the CSP. Common serverless platform models include FaaS and CaaS. Since individual functions are typically small, but customers' applications can each be running anywhere from one to thousands of functions, CSP aim for fitting as many functions on a single server as possible to minimize idle times and, in turn, maximize profit. A rather light-weight approach to serving runtime environments is to run containers, which encapsulate a process with its dependencies so that only the necessary files for each process are loaded in virtual filesystems on top of a shared kernel. This reduces a switch between containers to little more than a context switch between processes. On the other hand, full virtualization provides good isolation between VM and therefore security between tenants, while being rather heavy-weight as each VM comes with its own kernel.

Neither of these approaches, container or VM, is ideal for use in serverless environments, where ideally many short-lived functions owned by many users will run simultaneously and switch often, so new mechanisms of isolation have been developed for this use case. For example, mechanisms for in-process isolation <cit.> set out to improve the security of containers by reducing the attack surface of the runtime and underlying kernel. Protecting the kernel is important, as a compromised kernel directly leads to a fully compromised system in the container case. However, certain powerful protections, like limiting syscalls, also limit the functionality that is available to the container and even break compatibility with some applications. In VM research, developers created ever smaller and more efficient VM, eventually leading to so-called microVMs. MicroVMs provide the same isolation guarantees as usual virtual machines, but are very limited in their capabilities when it comes to device or OS support, which makes them more light-weight compared to usual VM and therefore better suited for serverless computing.

Firecracker <cit.> is a VMM designed to run microVMs while providing memory overhead and start times comparable to those of common container systems. Firecracker is actively developed by AWS and has been used in production for the AWS Lambda <cit.> and AWS Fargate <cit.> serverless compute services since 2018 <cit.>. AWS's design paper <cit.> describes the features of Firecracker, how it diverges from more traditional virtual machines, and the intended isolation model that it provides: safety for “multiple functions run[ning] on the same hardware, protected against privilege escalation, information disclosure, covert channels, and other risks” <cit.>. Furthermore, AWS provides production host setup recommendations <cit.> for securing parts of the CPU and kernel that a Firecracker VM interacts with.

In this paper, we challenge the claim that Firecracker protects functions from covert channels and side-channels across microVMs.
We show that Firecracker itself does not add to the microarchitectural attack countermeasures but fully relies on the host and guest Linux kernels and CPU firmware/microcode updates.

Microarchitectural attacks like the various Spectre <cit.> and MDS <cit.> variants pose a threat to multi-tenant systems as they are often able to bypass both software and architectural isolation boundaries, including those of VM. Spectre and MDS threaten tenants that share CPU core resources like the BPU or the LFB. CSP providing more traditional services can mitigate the problem of shared hardware resources by pinning the long-lived VM tenants to separate CPU cores, which effectively partitions the resources between the tenants and ensures that the microarchitectural state is only affected by a single tenant at a time. In serverless environments, however, the threat of microarchitectural attacks is greater. The reason for this is the short-livedness of the functions that are run by the different tenants. Server resources in serverless environments are expected to be over-committed, which leads to tenant functions competing for compute resources on the same hardware. Disabling SMT, which would disable the concurrent use of CPU resources by two sibling threads, reduces the compute power of a CPU by up to 30% <cit.>. If customers rent specific CPU cores, this performance penalty may be acceptable, or both threads on a CPU core might be rented together. But for serverless services, the performance penalty directly translates to 30% fewer customers that can be served in a given amount of time. This is why it has to be assumed that most serverless CSP keep SMT enabled in their systems unless they state otherwise. The microarchitectural attack surface is largest if SMT is enabled and the malicious thread has concurrent access to a shared core. But there are also attack variants that perform just as well if the attacker thread prepares the microarchitecture before it yields the CPU core to the victim thread, or executes right after the victim thread has paused execution. And even if SMT is disabled by the CSP (as is the case for AWS Lambda), tenants still share CPUs with multiple others in this time-sliced fashion.

AWS claims that Firecracker running on a system with up-to-date microarchitectural defenses will provide sufficient hardening against microarchitectural attacks <cit.>. The Firecracker documentation also contains specific recommendations for microarchitectural security measures that should be enabled. In this work, we examine Firecracker's security claims and recommendations and reveal oversights in its guidance as well as wholly unmitigated threats.

In summary, our main contributions are:

* We provide a comprehensive security analysis of the cross-tenant and tenant-hypervisor isolation of serverless compute based on Firecracker VM.
* We test Firecracker's defense capabilities against microarchitectural attack PoC, employing protections available through microcode updates and the Linux kernel. We show that the virtual machine itself provides negligible protection against major classes of microarchitectural attacks.
* We identify a variant of the Medusa MDS attack that becomes exploitable from within Firecracker VM even though it is not present on the host. The kernel mitigation that protects against this exploit, and most known Medusa variants, is not mentioned by AWS's Firecracker host setup recommendations.
Additionally, we show that disabling SMT provides insufficient protection against the identified Medusa variant, which underscores the need for the kernel mitigation.
* We identify Spectre-PHT and Spectre-BTB variants which leak data even with recommended countermeasures in place. The Spectre-PHT variants even remain a problem when SMT is disabled if the attacker and victim share a CPU core in a time-sliced fashion.

§.§ Responsible Disclosure

We informed the AWS security team about our findings and discussed technical details. The AWS security team claims that the AWS services are not affected by our findings due to additional security measures. AWS agreed that Firecracker does not provide microarchitectural security on its own but only in combination with microcode updates and secure host and guest operating systems, and plans to update its host setup recommendations for Firecracker installations.

§ BACKGROUND

§.§ KVM

The Linux KVM <cit.> provides an abstraction of the hardware-assisted virtualization features like Intel VT-x or AMD-V that are available in modern CPUs. To support near-native execution, a guest mode is added to the Linux kernel in addition to the existing kernel mode and user mode. When in Linux guest mode, KVM causes the hardware to enter the hardware virtualization mode, which replicates ring 0 and ring 3 privileges.[The virtualized ring 0 and ring 3 are one of the core reasons why near-native code execution is achieved.] With KVM, I/O virtualization is performed mostly in user space by the process that created the VM, referred to as the VMM or hypervisor, in contrast to earlier hypervisors which typically required a separate hypervisor process <cit.>. A KVM hypervisor provides each VM guest with its own memory region that is separate from the memory region of the process that created the guest. This is true for guests created from kernel space as well as from user space. Each VM maps to a process on the Linux host and each virtual CPU assigned to the guest is a thread in that host process. The VM's userspace hypervisor process makes system calls to KVM only when privileged execution is required, minimizing context switching and reducing the VM-to-kernel attack surface. Besides driving performance improvements across all sorts of applications, this design has allowed for the development of lightweight hypervisors that are especially useful for sandboxing individual programs and supporting cloud environments where many VM are running at the same time.
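To illustrate the VM-as-process model, the following minimal Python sketch, assuming a Linux host, lists the threads of a running VMM process through procfs; each guest vCPU appears as one such thread. The function name and output format are ours, and thread names are implementation-specific.

import os, sys

def vm_threads(pid):
    """List thread ids and names of a VMM process (e.g., Firecracker).

    On Linux, each guest vCPU is a thread of the VMM process;
    /proc/<pid>/task/<tid>/comm gives the thread name.
    """
    task_dir = f"/proc/{pid}/task"
    threads = []
    for tid in sorted(os.listdir(task_dir), key=int):
        with open(f"{task_dir}/{tid}/comm") as f:
            threads.append((int(tid), f.read().strip()))
    return threads

if __name__ == "__main__":
    pid = int(sys.argv[1])       # PID of a running VMM process
    for tid, name in vm_threads(pid):
        # vCPU threads show up alongside the VMM's API and
        # device-emulation threads; names vary by implementation.
        print(f"{tid}\t{name}")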
§.§ Serverless Cloud Computing

An increasingly popular model for cloud computing is serverless computing, in which the CSP manages scalability and availability of the servers that run the user's code. One implementation of serverless computing is called FaaS. In this model, a cloud user defines functions that are called as necessary through the service provider's application programming interface (API) (hence the name “FaaS”) and the CSP manages resource allocation on the server that executes the user's function (hence the name “serverless computing”—the user does no server management). Similarly, CaaS computing runs containers, portable runtime packages, on demand. The centralized server management of FaaS and CaaS is economically attractive to both CSP and users. The CSP can manage its users' workloads however it pleases, optimize for minimal operating cost, and implement flexible pricing where users pay for the execution time that they use. The user does not need to worry about server infrastructure design or management, and so reduces development costs and outsources maintenance cost to the CSP at a relatively small and predictable rate.

§.§ MicroVMs and AWS Firecracker

FaaS and CaaS providers use a variety of systems to manage running functions and containers. Container systems like Docker, Podman, and LXD provide a convenient and lightweight way to package and run sandboxed applications in any environment. However, compared to the virtual machines used for many more traditional forms of cloud computing, containers offer less isolation and therefore less security. In recent years, major CSP have introduced microVMs that back traditional containers with lightweight virtualization for extra security <cit.>. The efficiency of hardware virtualization with KVM and the lightweight design of microVMs mean that code in virtualized, containerized or container-like systems can run nearly as fast as unvirtualized code and with comparable overhead to a traditional container.

Firecracker <cit.> is a microVM manager developed by AWS to isolate each of the AWS Lambda FaaS and AWS Fargate CaaS workloads in a separate VM. It only supports Linux guests on x86 or ARM Linux-KVM hosts and provides a limited number of devices that are available to guest systems. These limitations allow Firecracker to be very light-weight in the size of its code base and in memory overhead for a running VM, as well as very quick to boot or shut down. Additionally, the use of KVM lightens the requirements of Firecracker, since some virtualization functions are handled by kernel system calls and the host OS manages VM as standard processes. Because of its small code base written in Rust, Firecracker is assumed to be very secure, even though security flaws have been identified in earlier versions (see CVE-2019-18960, <https://www.cve.org/CVERecord?id=CVE-2019-18960>). Interestingly, the Firecracker white paper declares microarchitectural attacks to be in-scope of its attacker model <cit.> but lacks a detailed security analysis or special countermeasures against microarchitectural attacks beyond common secure system configuration recommendations for the guest and host kernel. The Firecracker documentation does provide system security recommendations <cit.> that include a specific list of countermeasures, which we cover in <ref>.

§.§ Meltdown and MDS

In 2018, the Meltdown <cit.> attack showed that speculatively accessed data could be exfiltrated across security boundaries by encoding it into a cache side-channel. This soon led to a whole class of similar attacks, known as MDS, including Fallout <cit.>, RIDL <cit.>, TAA <cit.>, and Zombieload <cit.>. These attacks all follow the same general pattern to exploit speculative execution:

* The victim runs a program that handles secret data, and the secret data passes through a cache or CPU buffer.
* The attacker runs a specifically chosen instruction that will cause the CPU to mistakenly predict that the secret data will be needed. The CPU forwards the secret data to the attacker's instruction.
* The forwarded secret data is used as the index for a memory read to an array that the attacker is authorized to access, causing a particular line of that array to be cached.
* The CPU finishes checking the data and decides that the secret data was forwarded incorrectly, and reverts the execution state to before it was forwarded, but the state of the cache is not reverted.
* The attacker probes all of the array to see which line was cached; the index of that line is the value of the secret data.

The original Meltdown vulnerability targeted cache forwarding and allowed data extraction in this manner from any memory address that was present in the cache. MDS attacks target smaller and more specific buffers in the on-core microarchitecture, and so make up a related but distinct class of attacks that are mitigated in a significantly different way. While Meltdown targets main memory, which is updated relatively infrequently and shared across all cores, threads, and processes, MDS attacks tend to target buffers that are local to cores (though sometimes shared across threads) and updated more frequently during execution.

§.§.§ Basic MDS Variants

<ref> charts the major known MDS attack pathways on Intel CPUs and the names given to different variants by Intel and by the researchers who reported them. Most broadly, Intel categorizes MDS vulnerabilities in their CPUs by the specific buffer from which data is speculatively forwarded, since these buffers tend to be used for a number of different operations. RIDL MDS vulnerabilities can be categorized as Microarchitectural Load Port Data Sampling (MLPDS), for variants that leak from the CPU's load port, and Microarchitectural Fill Buffer Data Sampling (MFBDS), for variants that leak from the CPU's LFB. Along the same lines, Intel calls the Fallout vulnerability Microarchitectural Store Buffer Data Sampling (MSBDS), as it involves a leakage from the store buffer. Vector Register Sampling (VRS) is a variant of MSBDS that targets data that is handled by vector operations as it passes through the store buffer. VERW bypass exploits a bug in the microcode fixes for MFBDS that loads stale and potentially secret data into the LFB. The basic mechanism of leakage is the same, and VERW bypass can be considered a special case of MFBDS. L1 Data Eviction Sampling (L1DES) is another special case of MFBDS, where data that is evicted from the L1 data cache passes through the LFB and becomes vulnerable to an MDS attack. Notably, L1DES is a case where the attacker can actually trigger the secret data's presence in the CPU (by evicting it), whereas other MDS attacks rely directly on the victim process accessing the secret data to bring it into the right CPU buffer.

§.§.§ Medusa

Medusa <cit.> is a category of MDS attacks classified by Intel as MLPDS variants <cit.>. The Medusa vulnerabilities exploit the imperfect pattern-matching algorithms used to speculatively combine stores in the write-combine (WC) buffer of Intel processors. Intel considers the WC buffer to be part of the load port, so Intel categorizes this vulnerability as a case of MLPDS.
There are three known Medusa variants, which each exploit a different feature of the write-combine buffer to cause a speculative leakage:

Cache Indexing: a faulting load is speculatively combined with an earlier load with a matching cache line offset.

Unaligned Store-to-Load Forwarding: a valid store followed by a dependent load that triggers a misaligned memory fault causes random data from the WC buffer to be forwarded.

Shadow REP MOV: a faulting REP MOV instruction followed by a dependent load leaks the data of a different REP MOV.

§.§.§ TAA

The hardware vulnerability TAA <cit.> provides a different speculation mechanism for carrying out an MDS attack. While standard MDS attacks access restricted data with standard speculative execution, TAA uses an atomic memory transaction, as implemented by TSX. When an atomic memory transaction encounters an asynchronous abort, for example because another process reads a cache line marked for use by the transaction or because the transaction encounters a fault, all operations in the transaction are rolled back to the architectural state before the transaction started. However, during this rollback, instructions inside the transaction that have already started execution can continue speculative execution, as in steps (2) and (3) of other MDS attacks. TAA impacts all Intel processors that support TSX, and in the case of certain newer processors that are not affected by other MDS attacks, MDS mitigations or TAA-specific mitigations (such as disabling TSX) must be implemented in software to protect against TAA <cit.>.

§.§.§ Mitigations

Though Meltdown and MDS-class vulnerabilities exploit low-level microarchitectural operations, they can be mitigated with microcode firmware patches on most vulnerable CPUs.

Page table isolation Historically, kernel page tables have been included in user-level process page tables so that a user-level process can make a system call to the kernel with minimal overhead. Page table isolation (first proposed by Gruss et al. as KAISER <cit.>) maps only the bare minimum necessary kernel memory into the user page table and introduces a second page table only accessible by the kernel. With the user process unable to access the kernel page table, accesses to all but a small and specifically chosen fraction of kernel memory are stopped before they reach the lower level caches where a Meltdown attack begins.

Buffer overwrite MDS attacks that target on-core CPU buffers require a lower-level and more targeted defense. Intel introduced a microcode update that overwrites vulnerable buffers when the first-level data (L1d) cache (a common target of cache timing side-channel attacks) is flushed or the verw instruction is run <cit.>. The kernel can then protect against MDS attacks by triggering a buffer overwrite when switching to an untrusted process. The buffer overwrite mitigation targets MDS attacks at their source, but is imperfect to say the least. Processes remain vulnerable to attacks from concurrently running threads on the same core when SMT is enabled (since both threads share vulnerable buffers without the active process actually changing on either thread). Furthermore, shortly after the original buffer overwrite microcode update, the RIDL team found that on some Skylake CPUs, buffers were overwritten with stale and potentially sensitive data <cit.>, and remained vulnerable even with mitigations enabled and SMT disabled.
Still other processors are vulnerable to TAA but not non-TAA MDS attacks, and did not receive a buffer overwrite microcode update; these require that TSX be disabled completely to prevent MDS attacks <cit.>.

§.§ Spectre

In 2018, Jan Horn and Paul Kocher <cit.> independently reported the first Spectre variants. Since then, many different Spectre variants <cit.> and sub-variants <cit.> have been discovered. Spectre attacks make the CPU speculatively access memory that is architecturally inaccessible and leak the data into the architectural state. Therefore, all Spectre variants consist of three components <cit.>:

The first component is the Spectre gadget that is speculatively executed. Spectre variants are usually separated by the source of the misprediction they exploit. The outcome of a conditional direct branch, for example, is predicted by the Pattern History Table (PHT). Mispredictions of the PHT can lead to a speculative bounds check bypass for load and store instructions <cit.>. The branch target of an indirect jump is predicted by the Branch Target Buffer (BTB). If an attacker can influence the result of a misprediction of the BTB, then speculative return-oriented programming attacks are possible <cit.>. The same is true for predictions served by the Return Stack Buffer (RSB), which predicts return addresses during the execution of return instructions <cit.>. Recent results showed that some modern CPUs use the BTB for their return address predictions if the RSB underflows <cit.>. Another source of Spectre attacks is the prediction of store-to-load dependencies. If a load is mispredicted to not depend on a previous store, it speculatively executes on stale data, which may lead to a speculative store bypass <cit.>. None of these gadgets is exploitable by default; exploitability depends on the other two components, discussed next.

The second component is how an attacker controls inputs to the aforementioned gadgets. Attackers may be able to define gadget input values directly through user input, file contents, network packets or other architectural mechanisms. On the other hand, attackers may be able to inject data into the gadget transiently through load value injection <cit.> or floating point value injection <cit.>. Attackers are able to successfully control gadget inputs if they can influence which data or instructions are accessed or executed during the speculation window.

The third component is the covert channel that is used to transfer the speculative microarchitectural state into an architectural state and therefore exfiltrate the speculatively accessed data into a persistent environment. Cache covert channels <cit.> are applicable if the victim code performs a transient memory access depending on speculatively accessed secret data <cit.>. If a secret is accessed speculatively and loaded into an on-core buffer, an attacker can rely on an MDS-based channel <cit.> to transiently transfer the exfiltrated data to the attacker thread, where the data is transferred to the architectural state through, e.g., a cache covert channel. Last but not least, if the victim executes code depending on secret data, the attacker can learn the secret by observing port contention <cit.>.

§.§.§ Mitigations

Many countermeasures were developed to mitigate the various Spectre variants. A specific Spectre variant is effectively disabled if one of the three required components is removed. An attacker without control over inputs to Spectre gadgets is unlikely to successfully launch an attack.
The same is true if a covert channel for transforming the speculative state into an architectural state is unavailable. But since this is usually hard to guarantee, Spectre countermeasures mainly focus on stopping mispredictions. Inserting speculation barriers (e.g., lfence instructions) before critical code sections disables speculative execution beyond that point and can therefore be used as a generic countermeasure. But because of their high performance overhead, more specific countermeasures were developed. Spectre-BTB countermeasures include Retpoline <cit.> and microcode updates like IBRS, STIBP, or IBPB <cit.>. Spectre-RSB and Spectre-BTB-via-RSB can be mitigated by filling the RSB with values to overwrite malicious entries and prevent the RSB from underflowing, or by installing IBRS microcode updates. Spectre-STL can be mitigated by the SSBD microcode update <cit.>. Another drastic option to stop an attacker from tampering with shared branch prediction buffers is to disable SMT. Disabling SMT effectively partitions branch prediction hardware resources between concurrent tenants at the cost of a significant performance loss.

§.§ AWS's isolation model

Firecracker is specifically built for serverless and container applications <cit.> and is currently used by AWS' Fargate CaaS and Lambda FaaS. In both of these service models, Firecracker is the primary isolation system that supports every individual Fargate task or Lambda event. Both of these service models are also designed for running very high numbers of relatively small and short-lived tasks. AWS itemizes the design requirements for the isolation system that eventually became Firecracker as follows:

Isolation: It must be safe for multiple functions to run on the same hardware, protected against privilege escalation, information disclosure, covert channels, and other risks.
Overhead and Density: It must be possible to run thousands of functions on a single machine, with minimal waste.
Performance: Functions must perform similarly to running natively. Performance must also be consistent, and isolated from the behavior of neighbors on the same hardware.
Compatibility: Lambda allows functions to contain arbitrary Linux binaries and libraries. These must be supported without code changes or recompilation.
Fast Switching: It must be possible to start new functions and clean up old functions quickly.
Soft Allocation: It must be possible to over commit CPU, memory and other resources, with each function consuming only the resources it needs, not the resources it is entitled to. <cit.>

We are particularly interested in the isolation requirement and stress that microarchitectural attacks are declared in-scope for the Firecracker threat model. The “design” page in AWS's public Firecracker Git repository elaborates on the isolation model and provides a useful diagram which we reproduce in <ref>. This diagram pertains mostly to protection against privilege escalation. The outermost layer of protection is the jailer, which uses container isolation techniques to limit Firecracker's access to the host kernel while running the VMM and other management components of Firecracker as threads of a single process in the host userspace. Within the Firecracker process, the user's workload is run on other threads. The workload threads execute the guest operating system of the virtual machine and any programs running in the guest.
Running the user's code in the virtual machine guest restricts its direct interaction with the host to prearranged interactions with KVM and certain portions of the Firecracker management threads. So from the perspective of the host kernel, the VMM and the VM including the user's code are run in the same process. This is the reason why AWS states that each VM resides in a single process. But since the VM is isolated via hardware virtualization techniques, the user's code, the guest kernel, and the VMM operate in separate address spaces. Therefore, the guest's code cannot architecturally or transiently access VMM or guest kernel memory addresses, as they are not mapped in the guest's address space. The remaining microarchitectural attack surface is limited to MDS attacks that leak information from CPU-internal buffers ignoring address space boundaries, and Spectre attacks where an attacker manipulates the branch prediction of other processes to self-leak information.

Not shown in <ref>, but equally important to AWS's threat model, is the isolation of functions from each other when hardware is shared, especially in light of the soft allocation requirement. Besides the fact that compromising the host kernel could compromise the security of any guests, microarchitectural attacks that target the host hardware can also threaten user code directly. Since a single Firecracker process contains all the necessary threads to run a virtual machine with a user's function, soft allocation can simply be performed by the host operating system <cit.>. This means that standard Linux process isolation systems are in place on top of virtual machine isolation.

§.§.§ Firecracker security recommendations

The Firecracker documentation also recommends the following precautions for protecting against microarchitectural side-channels <cit.> (a short sketch for auditing several of these settings on a host follows the list):

* Disable SMT
* Enable kernel page-table isolation
* Disable kernel same-page merging
* Use a kernel compiled with Spectre-BTB mitigation (e.g., IBRS and IBPB on x86)
* Verify Spectre-PHT mitigation
* Enable L1TF mitigation
* Enable Spectre-STL mitigation
* Use memory with Rowhammer mitigation
* Disable swap or use secure swap
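The following minimal Python sketch, assuming a Linux host, audits several of these recommendations through the kernel's standard reporting interfaces. The selection of files checked is our own; it is not an official Firecracker tool.

from pathlib import Path

VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

def read(path):
    try:
        return path.read_text().strip()
    except OSError:
        return "unknown"

# SMT state ("on", "off", "forceoff", or "notsupported").
print("SMT control:", read(Path("/sys/devices/system/cpu/smt/control")))

# Kernel same-page merging (0 means disabled).
print("KSM run:", read(Path("/sys/kernel/mm/ksm/run")))

# Kernel-reported mitigation status for the vulnerabilities discussed here.
for name in ("meltdown", "mds", "tsx_async_abort",
             "spectre_v1", "spectre_v2", "spec_store_bypass", "l1tf"):
    print(f"{name}: {read(VULNS / name)}")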
§ THREAT MODELS

We propose two threat models applicable to Firecracker-based serverless cloud systems:

* The user-to-user model (<ref>): a malicious user runs arbitrary code sandboxed within a Firecracker VM and attempts to leak data, inject data, or otherwise gain information about or control over another user's sandboxed application. In this model, we consider
* the time-sliced sharing of hardware, where the instances of the two users execute in turns on the CPU core, and
* physical co-location, where the two users' code runs concurrently on hardware that is shared in one way or another (for example, two cores on the same CPU or two threads in the same core if SMT is enabled).
* The user-to-host model (<ref>): a malicious user targets some component of the host system: the Firecracker VMM, KVM, or another part of the host system kernel. For this scenario, we only consider time-sliced sharing of hardware resources. This is because the host only executes code if the guest user's VM exits, e.g., due to a page fault that has to be handled by the host kernel or VMM.

For both models, we assume that a malicious user is able to control the runtime environment of its application. In our models, malicious users do not possess guest kernel privileges. Therefore, both models grant the attacker slightly fewer privileges than the model assumed by <cit.>, where the guest kernel is chosen and configured by the VMM but assumed to be compromised at runtime. Rather, the attacker's capabilities in our models match the capabilities granted to users in deployments of Firecracker in AWS Lambda and Fargate.

§ ANALYSIS OF FIRECRACKER'S CONTAINMENT SYSTEMS

<ref> shows the containment offered by Firecracker, as presented by AWS. In this section, we analyze each depicted component and their defenses against and vulnerabilities to microarchitectural attacks.

KVM Linux KVM is the hypervisor implemented in modern Linux kernels and therefore part of the Linux code base. It virtualizes the supervisor and user modes of the underlying hardware, manages context switches between VM, and handles most VM-exit reasons unless they are related to I/O operations. Besides these architectural isolation mechanisms, KVM also implements mitigations against Spectre attacks on a VM-exit to protect the host OS or hypervisor from malicious guests. Firecracker heavily relies on KVM as its hypervisor. However, since KVM is part of the Linux source code and developed by the Linux community, we define KVM to not be a part of Firecracker. Therefore, countermeasures against microarchitectural attacks that are implemented in KVM cannot be attributed to Firecracker's containment system.

Metadata, device, and I/O services The metadata, device, and I/O services are the parts of the Firecracker VMM and API that interact directly with a VM, collecting and managing metrics and providing connectivity. AWS touts the simplicity of these interfaces (for a reduced attack surface) and that they are written from scratch for Firecracker in Rust, a programming language known for its security features <cit.>. However, Rust most notably provides in-process protection against invalid and out-of-bounds memory accesses, but microarchitectural attacks like cache attacks, Spectre, and MDS can leak information between processes rather than directly hijacking a victim's process. Another notable difference between Firecracker and many other VMM is that all of these services are run within the same host process as the VM itself, albeit in another thread. While the virtualization of memory addresses within the VM provides some obfuscation between the guest's code and the I/O services, some Spectre attacks work specifically within a single process. Intra-process attacks may pose less of a threat to real-world systems, however, since two guests running on the same hardware each have their own copy of these essential services.

Jailer barrier In the event that the API or VMM are compromised, the jailer provides one last barrier of defense around a Firecracker instance. It protects the host system's files and resources with namespaces and control groups (cgroups), respectively <cit.>. Microarchitectural attacks do not directly threaten files, which are by definition outside of the microarchitectural state. Cgroups allow a system administrator to assign processes to groups and then allocate, constrain, and monitor system resource usage on a per-group basis <cit.>. It is plausible that limitations applied with cgroups could impede an attacker's ability to carry out certain microarchitectural attacks.
For example, memory limitations might make it difficult to carry out eviction-based cache attacks, or CPU time limitations could prevent an attacker from making effective use of a CPU denial-of-service tool like HyperDegrade <cit.>, which can slow down a victim process, simplifying the timing of a microarchitectural side-channel exfiltration or injection. In practice, Firecracker is not distributed with any particular cgroup rules <cit.>; in fact, it is specifically designed for the efficient operation of many Firecracker VM under the default Linux resource allocation <cit.>. A sketch of how an operator could nevertheless apply such resource limits to a Firecracker process is shown below.
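Here is a minimal sketch, assuming a Linux host with cgroup v2 mounted at /sys/fs/cgroup and root privileges, that places a process under memory and CPU-bandwidth caps. The group name and limit values are arbitrary examples, not recommendations from the Firecracker documentation.

from pathlib import Path

def limit_process(pid, mem_bytes=256 * 2**20,
                  cpu_quota_us=20000, cpu_period_us=100000):
    """Confine an existing process with cgroup v2 memory and CPU limits.

    Assumes the cpu and memory controllers are enabled for child groups
    (via cgroup.subtree_control on the parent).
    """
    cg = Path("/sys/fs/cgroup/fc-sandbox")
    cg.mkdir(exist_ok=True)
    # Hard memory cap: allocations beyond this trigger reclaim/OOM in-group.
    (cg / "memory.max").write_text(str(mem_bytes))
    # CPU bandwidth: at most quota microseconds per period (here, 20%).
    (cg / "cpu.max").write_text(f"{cpu_quota_us} {cpu_period_us}")
    # Move the target process into the group.
    (cg / "cgroup.procs").write_text(str(pid))

# Example: limit_process(1234)  # PID of a running Firecracker VMM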
None of the isolation and containment systems in Firecracker seem to directly protect against user-to-user or user-to-host attacks. Therefore, we proceeded to test various microarchitectural attack proof-of-concepts inside and outside of Firecracker VMs.

§.§ MDS Threat Analysis

MDS is a wide class of vulnerabilities that affect a great number of recent Intel CPUs. MDS attacks use timing side-channels to exploit prediction mechanisms in the speculative and out-of-order execution systems present in these CPUs (see <ref>). The nature of this leakage, which typically occurs in small, on-core microarchitectural buffers, gives MDS attacks a very flexible threat model.
Any two processes sharing a vulnerable buffer are potentially vulnerable, even without shared memory and often with very little information about the target memory address. This means that even virtual machines with fully isolated memory can be vulnerable to MDS attacks. Furthermore, Firecracker is intended to support serverless platforms that run over-committed, short-lived functions, and benefit greatly from parallelism and frequent user task switching, meaning that microarchitectural buffers are more likely to be shared (either in time or in physical co-location). Every switch from one user's task to another is a possible site of a user-to-user MDS attack. This makes MDS protections crucial for secure serverless computing with Firecracker.

Vulnerable CPUs typically require a degree of physical isolation of a potential attacker from its potential victims for complete protection. For most MDS attacks, this means disabling SMT <cit.>, which can have significant performance implications. SMT can only be controlled by the VM host; that is, in the case of a cloud service, the CSP must disable SMT or ensure that only one user can control concurrent threads on a single core, and the end user must trust that the provider has considered this. A user inside of a VM cannot enable or disable SMT on the host, even with guest kernel privileges, nor can a user check the host hardware's SMT state through the standard kernel interfaces. Other useful mitigations, like those controlled by the mds= kernel flag, can be enabled and checked from within the VM guest kernel, although a malicious host could likely cause the guest kernel to report an incorrect state by manipulating emulated CPU flags. However, under the serverless computing model used by AWS Lambda, which Firecracker was designed for, the end user does not have kernel privileges within the virtual machines that their code runs on and must completely trust the provider's settings or else assume that no protections are enabled.

§.§ Spectre Threat Analysis

“Spectre is here to stay” <cit.> summarizes the case of Spectre attacks, at least the same-process attacks, as they are inherent to out-of-order-execution architectures, which require branch prediction to realize their full potential. Cross-process attacks can be mitigated by partitioning branch prediction units between tenants, which can be achieved by disabling SMT or installing microcode updates like STIBP on the host. Such mitigations must be deployed by the CSP. A serverless tenant has no reliable way to test whether such countermeasures are in place but has to trust the CSP that appropriate protections are enabled. In-process Spectre attacks can be mitigated by tenants by including speculation barriers before critical code sections.

§ ANALYSIS OF MICROARCHITECTURAL ATTACKS AND DEFENSES IN FIRECRACKER MICROVMS

In this section we present our analysis of a number of microarchitectural side-channel and speculative attack PoC on Firecracker microVMs. We test these PoC on bare metal and in Firecracker instances, and test relevant microcode defenses in the various scenarios. We run our tests on a server with an Intel Skylake 4114 CPU which has virtualization hardware extensions and SMT enabled. The CPU runs on microcode version 0x2006b06[Updating the microcode to a newer version would disable TSX on our system, which would make tests with TSX-based MDS variants impossible.]. The host OS is Ubuntu 20.04 with a Linux 5.10 kernel.
We used Firecracker v1.0.0 and v1.4.0, the latter being the latest version as of July 2023, to run an Ubuntu 18.04 guest with Linux kernel 5.4, which is provided by Amazon when following the quick-start guide[<https://github.com/firecracker-microvm/firecracker/blob/dbd9a84b11a63b5e5bf201e244fe83f0bc76792a/docs/getting-started.md>].

In summary, the recommended production host setup provided with AWS Firecracker is insufficient when it comes to protecting tenants from malicious neighbors. Firecracker therefore fails in providing its claimed isolation guarantees. This is because

* we identify a Medusa variant that only becomes exploitable when it is run across microVMs. In addition, the recommended countermeasures do not contain the necessary steps to mitigate the side-channel, or most other Medusa variants.
* we show that tenants are not properly protected from information leaks induced through Spectre-PHT or Spectre-BTB when applying the recommended countermeasures. The Spectre-PHT variants remain a problem even when disabling SMT.
* we observed no differences in PoC performance between Firecracker v1.0.0 and v1.4.0. We conclude that the virtualization layer provided by Firecracker has little effect on microarchitectural attacks, and Firecracker's system security recommendations are insufficient.

§.§ Medusa

We evaluated Moghimi's PoC <cit.> for the Medusa <cit.> side-channels (classified by Intel as MLPDS variants of MDS <cit.>) on the bare metal of our test system and in Firecracker VMs. There is one leaking PoC for each of the three known variants described in <ref>. We used two victim programs from the PoC library:

* The “Block Write” program writes a large amount of consecutive data in a loop (so that the processor will identify repeated stores and combine them).
* The “REP MOV” program performs a similar operation, but with a single REP MOV string-copy instruction instead of many instructions moving smaller blocks of data in a loop.

§.§.§ Results

<ref> shows the cases in which data is successfully leaked with all microarchitectural protections in the kernel disabled. The left two columns show the possible combinations of the three Medusa PoC and the two included victim programs. The right columns indicate which configurations work on bare metal and with the secret and leaking program running in parallel Firecracker instances. Most notably, with the Cache Indexing variant, the Block Write victim only leaks when running in Firecracker. This is likely because of the memory address virtualization that the virtual machine provides: the guest only sees virtual memory regions mapped by KVM, and KVM traps memory access instructions and handles the transactions on behalf of the guest.

We tested the effectiveness of the mds buffer-clearing and nosmt defenses against each combination of attacker and victim PoC on bare metal and in Firecracker VM. <ref> lists the protections necessary to prevent Medusa attacks in bare metal and Firecracker scenarios. Across the four vulnerabilities in Firecracker, only one is mitigated by disabling SMT alone, and AWS does not explicitly recommend enabling the mds buffer-clearing protection, though most Linux distributions ship with it enabled by default. That is to say, a multi-tenant cloud platform could be using Firecracker in a way that is compliant with AWS's recommended security measures and still be vulnerable to the majority of Medusa variants, including one where the Firecracker VMM itself leaks the user's data that would not otherwise be leaked.

§.§ RIDL and More

In this section, we present an evaluation of the RIDL PoC programs <cit.> provided alongside van Schaik et al.'s 2019 paper <cit.>.
RIDL is a class of MDS attacks that exploits speculative loads from buffers inside the CPU (not from cache or memory). The RIDL PoC repository also includes examples of attacks released in later addenda to the paper as well as one variant of the Fallout MDS attack.

§.§.§ Results

<ref> shows some basic information about the RIDL PoC that we tested and the efficacy of relevant countermeasures at preventing the attacks. We compared attacks on bare metal and in Firecracker to evaluate Amazon's claims of the heightened hardware security of the Firecracker microVM system. For tests on the Firecracker system, we distinguish between countermeasure flags enabled on the host system (H) and the Firecracker guest kernel (VM). Besides the mds and nosmt kernel flags, we tested other relevant flags (cf. <ref>, <cit.>), but did not find that they had an effect on any of these programs. We excluded the TAA-specific mitigation since, on CPUs affected by MDS, the mds mitigation also covers TAA, which makes the tsx_async_abort kernel flag redundant <cit.>.

In general, we found that the mds buffer-overwrite protection does not adequately protect against the majority of RIDL attacks. However, disabling SMT does mitigate the majority of these exploits. This is consistent with Intel's <cit.> and the Linux developers' <cit.> statements that SMT must be disabled to prevent MDS attacks across hyperthreads. The two outliers among these PoC are the alignment-fault-based leak, which requires both mds and nosmt on the host, and the kernel page-table leak, which is mitigated only by mds countermeasures. The former uses an alignment fault rather than a page fault to trigger speculation <cit.>. The RIDL paper <cit.> and Intel's documentation of the related VRS exploit <cit.> are unclear about what exactly differentiates this attack from the page-fault-based MFBDS attacks found in other PoC, but our experimental findings indicate that the microarchitectural mechanism of the leakage is distinct. There is a simple and reasonable explanation for the behavior of the page-table leak, which is unique among these PoCs for one key reason: it is the only exploit that crosses security boundaries (leaking page table values from the kernel) within a single thread rather than leaking from another thread. It is self-evident that disabling multi-threading would have little effect on this single-threaded exploit. However, the mds countermeasure flushes microarchitectural buffers before switching from kernel-privilege execution to user-privilege execution within the same thread, wiping the page table data accessed by kernel code from the LFB before the attacking user code can leak it.

In contrast to Medusa, most of these PoCs are mitigated by AWS's recommendation of disabling SMT. However, as with Medusa, the Firecracker VMM itself provides no microarchitectural protection against these attacks.

§.§ Spectre

Next we focus on Spectre vulnerabilities. While many countermeasures have been developed since Spectre attacks were first discovered, many of them either come with a (significant) performance penalty or only partially mitigate the attack. Therefore, system operators often have to make a performance vs. security trade-off. In this section we evaluate the Spectre attack surface available to Firecracker tenants in both threat models described earlier. To evaluate the wide range of Spectre attacks, we rely on the PoC provided in <cit.>. For Spectre-PHT, Spectre-BTB, and Spectre-RSB, the repository contains four PoCs each. They differ in the way the attacker mistrains the BPU.
§.§ Spectre Next we focus on Spectre vulnerabilities. While there have been many countermeasures developed since Spectre attacks were first discovered, many of them either come with a (significant) performance penalty or only partially mitigate the attack. Therefore, system operators often have to decide on a performance vs. security trade-off. In this section we evaluate the Spectre attack surface available to Firecracker tenants in both threat models described earlier. To evaluate the wide range of Spectre attacks, we rely on the PoC provided in <cit.>. For Spectre-PHT, Spectre-BTB, and Spectre-RSB, the repository contains four PoCs each. They differ in the way the attacker mistrains the BPU. The four possibilities are: * same-process: the attacker has control over the victim process or its inputs to mistrain the BPU, * cross-process: the attacker runs its own code in a separate process to influence the branch predictions of the victim process, * in-place: the attacker mistrains the BPU with branch instructions that reside at the same virtual address as the target branch that the attacker wants to be mispredicted in the victim process, * out-of-place: the attacker mistrains the BPU with branch instructions that reside at addresses that are congruent to the target branches in the victim process. The first two and the latter two situations are orthogonal, so each PoC combines two of them. For Spectre-STL, only same-process variants are known, which is why the repository only provides two PoCs in this case. For cross-VM experiments, we disabled address space layout randomization for the host and guest kernels as well as for the host and guest user level to ease finding congruent addresses to be used for mistraining. §.§.§ Results With the AWS-recommended countermeasures <cit.> (the default for the Linux kernels in use, cf. <ref>) enabled on the host system and inside the Firecracker VM, we see that Spectre-RSB is successfully mitigated both on the host and inside and across VM (cf. <ref>). On the other hand, Spectre-STL, Spectre-BTB, and Spectre-PHT allowed information leakage in particular situations. The PoC for Spectre-STL show leakage. However, the leakage only occurs within the same process and the same privilege level. Since no cross-process variants are known, we did not test the cross-VM scenario for Spectre-STL. In our user-to-user threat model, Spectre-STL is not a possible attack vector, as no cross-process variants are known. If two tenant workloads were isolated by in-process isolation within the same VM, Spectre-STL could still be a viable attack vector. In the user-to-host model, Spectre-STL is mitigated by countermeasures that are included in current Linux kernels and enabled by default. For Spectre-PHT, the kernel countermeasures include the sanitization of user pointers and the utilization of speculation barriers on privilege-level switches. We therefore conclude that Spectre-PHT poses little to no threat to the host system. However, these mitigations do not protect two hyperthreads from each other if they execute on the same physical core in parallel. This is why all four Spectre-PHT mistraining variants are fully functional on the host system as well as inside the Firecracker VM. As can be seen in <ref>, this remains true even if SMT is disabled[This is simulated by pinning the attacker and victim process to the same core (1PT)]. In fact, pinning both VM to the same physical thread enables the cross-process out-of-place version of Spectre-PHT, whereas we did not observe leakage in the SMT case.
This makes Spectre-PHT a significant threat in the user-to-user model. The Spectre-BTB PoC are partially functional when the AWS-recommended countermeasures are enabled. The original variant that mistrains the BTB in the same process and at the same address is fully functional, while same-process out-of-place mistraining is successfully mitigated. Also, all attempts to leak information across process boundaries via out-of-place mistraining did not show any leakage. With cross-process in-place mistraining, however, we observed leakage. On the host system, the leakage occurred independently of SMT. Inside a VM, the leakage only occurred if all virtual CPU cores were assigned to a separate physical thread. Across VM, disabling SMT removed the leakage. Besides the countermeasures listed in <ref>, the host kernel has Spectre countermeasures compiled into the VM entry and exit points[<https://elixir.bootlin.com/linux/v5.10/source/arch/x86/kvm/vmx/vmenter.S#L191>] to prevent malicious guests from attacking the host kernel while the kernel handles a VM exit. In summary, we can say that the Linux default countermeasures–which are recommended by AWS Firecracker–only partially mitigate Spectre. Precisely, we show: * Spectre-PHT and Spectre-BTB can still leak information between tenants in the guest-to-guest scenario with the AWS-recommended countermeasures–which include disabling SMT–in place. * The host kernel is likely sufficiently protected by the additional precautions that are compiled into the Linux kernel to shield VM entries and exits. This, however, is orthogonal to the security measures provided by Firecracker. All leakage observed was independent of the Firecracker version in use. §.§.§ Evaluation We find that Firecracker does not add to the mitigations against Spectre but solely relies on general protection recommendations, which include mitigations provided by the host and guest kernels and optional microcode updates. Even worse, the recommended countermeasures insufficiently protect serverless applications from leaking information to other tenants. We therefore claim that Firecracker does not achieve its isolation goal on a microarchitectural level, even though microarchitectural attacks are considered in scope for the Firecracker threat model. The alert reader might wonder why Spectre-BTB remains an issue with the STIBP countermeasure in place (cf. <ref>), as this microcode patch was designed to stop the branch predictor from using prediction information that originates from another thread. This puzzled us as well, until Google recently published a security advisory[<https://github.com/google/security-research/security/advisories/GHSA-mj4w-6495-6crx>] that identifies a flaw in Linux 6.2 that kept disabling the STIBP mitigation when IBRS is enabled. We verified that the code section identified as responsible for the issue is also present in the Linux 5.10 source code. Our assumption therefore is that the same problem identified by Google also occurs on our system. § CONCLUSIONS Cloud technologies constantly shift to meet the needs of their customers. At the same time, CSP aim to maximize efficiency and profit, which incentivizes serverless CSP to over-commit the available compute resources. While this is reasonable from an economic perspective, the resulting system behavior can be disastrous in the context of microarchitectural attacks that exploit shared hardware resources. In the past few years, the microarchitectural threat landscape has changed frequently and rapidly.
There are mitigations that work reasonably well to prevent many attacks, but they often come with significant performance costs, which forces CSP to find a trade-off between economic value and security. Furthermore, some microarchitectural attacks simply are not hindered by existing mitigations. CSP customers have little control over the microarchitectural defenses deployed and must trust their providers to keep up with the pace of microarchitectural attack and mitigation development. Defense-in-depth requires security at every level, from the microcode to the VMM to the container. Each system must be considered as a whole, as protections at one system level may open vulnerabilities at another. We showed that the default countermeasures recommended for the Firecracker VMM are insufficient to meet its isolation goals. In fact, many of the tested attack vectors showed leakage while countermeasures were in place. We identified the Medusa cache indexing/block write variant as an attack vector that only works across VM, i.e., with additional isolation mechanisms in place. Additionally, we showed that disabling SMT–an expensive mitigation technique recommended and performed by AWS–does not provide full protection against Medusa variants. The aforementioned Medusa variant and Spectre-PHT are still capable of leaking information between cloud tenants even if SMT is disabled, as long as the attacker and target threads keep competing for hardware resources of the same physical CPU core. Unfortunately, this is inevitably the case in high-density serverless environments. At present, serverless CSP must remain vigilant in keeping firmware up to date and employing all possible defenses against microarchitectural attacks. Users must not only trust their CSP of choice to keep their systems up to date and properly configured, but also be aware that some microarchitectural vulnerabilities, particularly certain Spectre variants, are still able to cross containment boundaries. Furthermore, processor designs continue to evolve, and speculative and out-of-order execution remain important factors in improving performance from generation to generation. So, it is unlikely that we have seen the last of the new microarchitectural vulnerabilities, as the recent wave of newly discovered attacks <cit.> shows. This work was supported by the German Research Foundation (DFG, <https://www.dfg.de/>) under Grants No. 439797619 and 456967092, by the German Federal Ministry for Education and Research (BMBF, <https://www.bmbf.de>) under Grants SASVI and SILGENTAS, by the National Science Foundation (NSF, <https://www.nsf.gov/>) under Grant CNS-2026913, and in part by a grant from the Qatar National Research Fund. | http://arxiv.org/abs/2311.15999v1 | {
"authors": [
"Zane Weissman",
"Thore Tiemann",
"Thomas Eisenbarth",
"Berk Sunar"
],
"categories": [
"cs.CR"
],
"primary_category": "cs.CR",
"published": "20231127164603",
"title": "Microarchitectural Security of AWS Firecracker VMM for Serverless Cloud Platforms"
} |
Target-Free Compound Activity Prediction via Few-Shot Learning
Peter Eckmann (Department of Computer Science and Engineering, UC San Diego, La Jolla, California, United States), Jake Anderson (Department of Chemistry and Biochemistry, UC San Diego, La Jolla, California, United States), Michael K. Gilson (Department of Chemistry and Biochemistry and Skaggs School of Pharmacy and Pharmaceutical Sciences, UC San Diego, La Jolla, California, United States), Rose Yu (Department of Computer Science and Engineering, UC San Diego, La Jolla, California, United States). Correspondence to: Michael K. Gilson and Rose Yu. Predicting the activities of compounds against protein-based or phenotypic assays using only a few known compounds and their activities is a common task in target-free drug discovery. Existing few-shot learning approaches are limited to predicting binary labels (active/inactive). However, in real-world drug discovery, degrees of compound activity are highly relevant. We study Few-Shot Compound Activity Prediction (FS-CAP) and design a novel neural architecture to meta-learn continuous compound activities across large bioactivity datasets. Our model aggregates encodings generated from the known compounds and their activities to capture assay information. We also introduce a separate encoder for the unknown compound. We show that FS-CAP surpasses traditional similarity-based techniques as well as other state-of-the-art few-shot learning methods on a variety of target-free drug discovery settings and datasets. § INTRODUCTION A key task in machine learning for drug discovery is to predict the activity of compounds against a target-based or phenotypic assay, reducing the need for expensive lab-based experimental tests <cit.>. Most existing methods <cit.> require information about the target protein, such as its amino acid sequence or 3D structure. However, such information is not always available due to experimental difficulties or a lack of mechanistic disease understanding. Indeed, there is increasing interest in target-free drug discovery <cit.>, where only a few compounds with weak activity in an experimental assay are known <cit.>. These hit compounds, while not drug candidates themselves, offer a starting point for the discovery of more promising compounds. Traditional methods use chemical similarity, such as the Tanimoto similarity between structural compound fingerprints <cit.>, to find new compounds most similar to the hit compounds. However, the compounds found in this way are often similarly undesirable as drug candidates, based on the principle that structurally similar compounds have similar properties <cit.>. We cast the problem of target-free compound activity prediction as few-shot learning <cit.>, a framework that enables a trained model to generalize to new domains (in this case, assays). Few-shot learning is usually investigated for multi-class classification problems. For drug discovery, these techniques have been applied to binary compound activity prediction <cit.>. However, since experimental activity readouts are often continuous <cit.>, formulation as a binary classification problem requires ad-hoc activity thresholding and is overly simplistic. The few-shot regression problem studied here is more relevant for drug discovery applications <cit.>, although it is significantly more challenging <cit.>. In this paper, we propose Few-Shot Compound Activity Prediction (FS-CAP), a model-based few-shot learning approach for target-free compound activity regression.
Our model bears some similarity to neural processes (NPs, <cit.>) but with several important differences that are relevant for compound activity prediction. Specifically, we use a deterministic neural encoder to represent context compounds and their activities via a new multiplication-based featurization. We also introduce a separate encoder for the unknown compound to represent its assay-independent binding characteristics. We concatenate these two encodings and feed them to a predictor network to produce a final prediction for the activity of the unknown compound, and train the entire model using the mean squared error (MSE). Despite the rich literature on few-shot classification, few-shot regression remains largely under-explored in drug discovery. To the best of our knowledge, only <cit.> have explored few-shot regression in drug discovery, by applying an Attentive Neural Process (ANP, <cit.>), a variant of neural processes, to the task. However, many design choices in ANPs are not tailored to compound activity prediction, including their probabilistic framing and lack of unknown compound encoding, which we will show leads to poor performance. <cit.> also perform only a limited comparison with other few-shot learning methods, and only measure model performance on a single dataset. In summary, our contributions include * designing a novel few-shot learning model, FS-CAP, that builds on existing neural process designs but with several important architectural and training changes that are specific to drug prediction considerations, * introducing several new datasets and settings to the problem of few-shot compound activity regression that mimic the drug discovery challenges of hit and lead optimization, high-throughput screening, and anti-cancer drug activity prediction, and * showing that FS-CAP outperforms both traditional chemical similarity techniques and modern deep learning-based few-shot learning techniques on this robust set of datasets. § RELATED WORK We discuss the related work in compound activity prediction and then summarize few-shot learning and its applications to target-free compound activity prediction. Compound activity prediction. Much work focuses on the prediction of compound activities using knowledge of a protein target (e.g. <cit.>), but such information is not always available in practice <cit.>. In the target-free, or “ligand-based” setting, our aim is to use existing compounds (the “context set”) to predict the activity of unknown compounds (the “query set”) against new assays. A common computational chemistry technique for this task is to measure chemical similarity between the context compound(s) and each compound in the query set. This is often performed with binary fingerprints (e.g. <cit.>), although such structure-based similarity can miss compounds with similar activity but different chemical scaffolds. Therefore, more complex chemical descriptors may also be used, such as polarity, molecular topology, and 3D shape <cit.>. Machine learning techniques derive molecular representations in a data-driven fashion and thus promise to improve the quality of similarity measurements that use these representations. Much work focuses on the unsupervised learning of molecular representations that can later be used for downstream tasks such as the assessment of compound similarity <cit.>. Due to their unsupervised nature, however, similarity measurements between the learned embeddings are not necessarily useful for activity prediction.
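For concreteness, the fingerprint-based similarity measurement discussed above can be computed in a few lines with RDKit. A minimal sketch (the 2048-bit, radius-3 settings match those used later in this paper; the molecules are illustrative placeholders):

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_fp(smiles, radius=3, n_bits=2048):
    # Binary substructure fingerprint of a compound given as a SMILES string.
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)

# Tanimoto similarity between a query compound and one context compound.
query = morgan_fp("CCO")
context = morgan_fp("CCN")
print(DataStructs.TanimotoSimilarity(query, context))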
Few-Shot Learning. Few-shot learning is a framework that enables a trained model to generalize to new domains <cit.>. Common techniques include metric-based, optimization-based, and model-based approaches. Metric-based methods use a learned metric space that is trained specifically to reflect activity differences, as opposed to unsupervised similarity-based methods. <cit.> propose an LSTM-based method to iteratively update context compound embeddings, which are used to compute a similarity metric. <cit.> learn a Siamese network-like embedding for compounds in a metric space. The well-known prototypical network <cit.> and matching network <cit.> techniques have also been proposed for use on molecular graphs <cit.>. However, these techniques only measure similarity between discrete classes (active/inactive) and cannot use continuous labels. This is problematic when the difference between weakly and highly active compounds is critical, which reduces the real-world applicability of such techniques <cit.>. Indeed, one of the main challenges of drug discovery is to optimize weakly active compounds into highly active ones <cit.>, yet binary methods like the ones above can make no such distinction. Optimization-based techniques use gradients computed on the context set to adapt the weights of a “base” model, and then apply this adapted model to the query set. Techniques in this area include the LSTM meta-learner <cit.>, which uses a separate “learner” network to adapt the weights of the main network. <cit.> proposed the use of model-agnostic meta-learning (MAML, <cit.>) for few-shot binary compound activity prediction, which finds a set of model parameters that can most quickly be fine-tuned to new tasks. Instead of updating network weights during test time, model-based approaches take both the query and context set as inputs to a single model. For example, MetaNets <cit.> use a memory module coupled with both a base learner and a meta-learner to generate network weights adapted to a new task. Another method, Non-Gaussian Gaussian Processes (NGGPs, <cit.>), expands on previous approaches <cit.> that use GPs for few-shot learning by parameterizing the Gaussian posterior with a normalizing flow. However, neither the optimization-based nor the model-based techniques have been applied to few-shot compound activity regression. Neural processes (NPs, <cit.>), as well as their variants like attentive neural processes (ANPs, <cit.>), combine GPs and neural networks for few-shot learning. To the best of our knowledge, the prediction of continuous compound activity values in the few-shot setting has been explored only once in the literature, using the ANP-based MetaDTA <cit.>. However, they include a limited number of experimental settings and baseline comparisons to other few-shot learning models. We propose a novel architecture with some similarity to neural processes but with several important modifications tailored to the prediction of compound activities, and perform a more rigorous comparison across multiple datasets. § METHODOLOGY We cast the problem of compound activity prediction in new assays given known compounds as a few-shot regression task. To address this problem, we introduce FS-CAP, which is summarized in Figure <ref>. Problem statement. We seek to predict the activity of a “query” compound in a new assay, given only a small set of “context” compounds and their activities in the same assay. Mathematically, suppose our training dataset consists of K different assays.
Each assay k consists of N different compounds that are measured against it, M_k := {m_1, ⋯, m_N}. The experimentally measured activity of a molecule m against an assay k is defined as π_k(m) ∈ℝ. In training, we take a query molecule m_q that is an element of some M_k and aim to predict its activity π_k(m_q). To aid in prediction, we randomly sample n context examples from the same assay, C_k = {(m_i, π_k(m_i))}_i=1^n, where each m_i is randomly sampled from M_k. n must be ≤ N, and typically it is a small number, hence few-shot. Then, we train the model f to predict the activity of the query molecule given the context set, i.e. f(m_q, C_k) = π̂_k (m_q) ≈π_k(m_q). In testing, our model has a similar task, which is to predict the activity of a query molecule given some context set. However, the query and context set come from an assay not seen in training, meaning we measure the ability of the model to adapt its predictions to an unseen assay. Architecture. We employ two separate encoders, a query encoder f_q and a context encoder f_c. Consider a single assay k. We encode the query molecule m_q ∈ M_k and elements of the context set (m_i, π_k(m_i)) ∈ C_k as follows: f_q: m_q ↦ x_q, f_c: (m_i, π_k(m_i)) ↦ r_i, where r_i is a representation of the i-th context example. The query encoder learns to encode the query molecule into a representation x_q that is useful for predicting its activity. The context encoder learns to capture some information about assay k from each example in the context set. To aggregate each individual context encoding r_i into a single real-valued vector x_c that represents the context set as a whole, we take the average across each r_i: x_c = 1/n∑_i=1^n r_i. This maintains permutation invariance, as desired, since the order of the contexts should not affect their encoding. More complex aggregation techniques, such as self-attention, did not lead to improved performance (Table <ref>). The predictor network g combines both encodings to generate an activity prediction for the query molecule: g: x_c ⊕ x_q ↦π̂_k(m_q), where ⊕ denotes vector concatenation. We represent molecules using their 2048-bit Morgan fingerprints <cit.>. f_c, f_q, and g are all multilayer perceptrons with ReLU activations. To pass both the context compound and its measured activity value to f_c, we multiply the measured activity scalar with the Morgan fingerprint. Specifically, f_c receives the following vector: Morgan(m_i) ·π_k(m_i). Since Morgan fingerprints are substructure-based, i.e. each element in the vector has a 1 bit if a certain substructure is present and 0 otherwise, and substructures are known to contribute directly to binding characteristics, this featurization may make it easier for the model to learn which substructures contribute how much to activity. We later confirm this intuition by comparing our proposed multiplication approach with the more traditional concatenation of fingerprint and activity values (Table <ref>). Differences to neural processes. Although our architecture builds on neural processes (NPs, <cit.>) and attentive neural processes (ANPs, <cit.>) such as MetaDTA <cit.>, it differs in several important aspects that are specific to the few-shot compound activity regression task. First, both NPs and ANPs are based upon a probabilistic framework, which would theoretically allow for the prediction of a distribution of possible activities for a given compound.
However, such distributions are not very relevant in drug discovery, where one almost always works with point estimates of compound activity, except perhaps in the special case of compound toxicity <cit.>. Avoiding a probabilistic framework stabilizes training, and allows us to simply minimize the mean squared error loss. Second, NPs and ANPs do not perform query encoding, meaning the query features are fed directly, along with the context embedding, to the predictor network. However, in drug discovery, there are useful query features that may be extracted entirely independently of any assay, such as compound shape and electrostatics. Allowing the model to encode the query compound in a distinct query encoder, prior to receiving any assay information, is a novel step that appears to improve prediction performance over baselines that use no such encoding (Table <ref>). Third, instead of concatenating the features of the context compound with its activity value, as in NPs or ANPs, we multiply the two before feeding them into the context encoder, as described above. This novel featurization, which is only possible due to the binary nature of molecular fingerprints and the scalar nature of the activity value, appears to be more effective than concatenation (Table <ref>). Training. We use a large assay dataset for training, but set aside some of these assays for testing. We train the model in an end-to-end fashion with the Mean Squared Error (MSE), with the loss for each epoch defined as ℒ = 1/K∑_k=1^K ( 1/N∑_i=1^N (π_k(m_i) - π̂_k(m_i))^2 ), where each m_i ∈ M_k is a query molecule.
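To make the architecture concrete, the following is a minimal PyTorch sketch of FS-CAP as described above. The layer widths are illustrative placeholders rather than the tuned hyperparameters, and batching over assays is omitted for brevity:

import torch
import torch.nn as nn

def mlp(d_in, d_hidden, d_out):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                         nn.Linear(d_hidden, d_out))

class FSCAP(nn.Module):
    """Minimal FS-CAP: context encoder f_c, query encoder f_q, predictor g."""
    def __init__(self, fp_bits=2048, d_enc=256, d_hidden=512):
        super().__init__()
        self.f_c = mlp(fp_bits, d_hidden, d_enc)   # context encoder
        self.f_q = mlp(fp_bits, d_hidden, d_enc)   # query encoder
        self.g = mlp(2 * d_enc, d_hidden, 1)       # predictor network

    def forward(self, ctx_fps, ctx_acts, query_fp):
        # ctx_fps: (n, fp_bits) binary fingerprints; ctx_acts: (n,) activities.
        # Multiply each context fingerprint by its scalar activity value.
        r = self.f_c(ctx_fps * ctx_acts.unsqueeze(-1))  # (n, d_enc)
        x_c = r.mean(dim=0)                             # permutation-invariant mean
        x_q = self.f_q(query_fp)                        # (d_enc,)
        return self.g(torch.cat([x_c, x_q], dim=-1)).squeeze(-1)

# Training minimizes the MSE between predicted and measured activities:
# loss = torch.nn.functional.mse_loss(model(ctx_fps, ctx_acts, q_fp), target)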
§ EXPERIMENTS §.§ Tasks We use four different datasets (Table <ref>) to test FS-CAP and baseline methods on three tasks related to drug discovery. * Hit and lead optimization: In this scenario, one wishes to use knowledge of a few compounds with modest experimentally determined activities in a binding or phenotypic assay to predict the activities of additional candidate compounds. For this task, we train and test all methods on the BindingDB and PubChem BioAssay (PubChemBA) datasets, which contain continuous activity values across many different assays. * High-throughput screening: In high-throughput screening (HTS), large numbers of compounds are assayed to provide a binary active/inactive label. In this scenario, one again has knowledge of a few compounds with modest activities in an assay of interest, but now the goal is to classify a large number of candidate compounds as active or inactive in the assay. Success in this task would provide the ability to use a small amount of data to guide the selection of a compound library for HTS that will have a higher fraction of novel actives than a library of randomly chosen compounds. To model this task, we train all methods on the PubChemBA dataset, but treat their outputs as unnormalized probabilities to compute binary classification metrics. We test all methods on the PubChem High-Throughput Screening (PubChemHTS) dataset, which contains binary activity classifications for compounds in PubChem assays marked as “Screening.” This dataset contains an entirely separate set of assays from the continuous ones of PubChemBA. * Anti-cancer drug activity prediction: We explore whether a model trained on the PubChemBA dataset generalizes to the prediction of compound activity against cancer cell lines. For this task, we use the Cancer Cell Line Encyclopedia (CCLE), which contains IC50 measurements for 24 drugs against 275 patient-derived cancer cell lines. We further probe the biological understanding of the trained models with additional challenges on this dataset involving the generated context encodings. We defer further dataset and preprocessing details to Appendix <ref>. We also include additional experimental results on the FS-Mol dataset <cit.> in Appendix <ref>. For all datasets, assay data were expressed as the log_10 of the activity in nanomolar (nM) units. §.§ Baselines As baselines for comparison, we include Tanimoto fingerprint similarity (a widely used traditional technique from computational chemistry) and several state-of-the-art approaches in few-shot learning. We applied both optimization-based (MAML, <cit.>) and model-based (MetaNet, <cit.>; ANP, <cit.>) methods to the regression of compound activities. We omit similarity-based methods (e.g. <cit.>) as they require binarizing the activity data of the context compounds, making for an unfair comparison. Details on the training and implementation of FS-CAP and baselines are reported in Appendix <ref>. * Tanimoto similarity. A traditional molecular structure-based similarity measure based on binary Morgan fingerprints <cit.>. When given multiple context compounds, we use the highest similarity score between each of the contexts and the query. * MolBERT + attentive neural process (ANP). Combines MolBERT, a state-of-the-art sequence-based molecular featurizer for property prediction tasks <cit.>, with an attentive neural process model <cit.> for the few-shot prediction of activity values. * Non-Gaussian Gaussian process (NGGP) <cit.>. Expands on basic Gaussian process techniques for few-shot learning by modeling the posterior distribution with an ODE-based normalizing flow. * MetaNet <cit.>. Uses two separate learners, the base learner and the meta-learner, which utilizes a memory mechanism, to quickly adapt to new tasks in the few-shot setting via fast parameterization. * Model-agnostic meta-learning (MAML) <cit.>. Learns a model that can quickly adapt to a new task by training on a small set of context examples. For this paper, we use a simple multilayer perceptron that takes a Morgan fingerprint as input for the base model. * MetaDTA <cit.>. Applies attentive neural processes to the few-shot regression of continuous activity values. We use the MetaDTA(I) variant because its performance is superior to that of the other reported variants. §.§ Hit and lead optimization To explore the applicability of few-shot learning methods to the hit and lead optimization settings, we compare FS-CAP with baseline methods on the few-shot prediction of compound activity values against assays in PubChemBA and BindingDB. Compounds with high activity are often not known at the hit stage, so we only sampled context compounds (in both training and testing) that have activity values (i.e. effective concentrations) >10 μ M, which is typical of hit compounds <cit.>. Note that a higher effective concentration means lower activity. Following training, we test each method against the assays in the held-out test set. Thus, each test set compound was treated as a query compound, with each query being used with a context set of 1-8 compounds randomly sampled from all compounds against the same assay as the query. Table <ref> reports the mean correlation of the predicted and ground truth activity values across all test-set assays for each method. Pearson's correlation coefficient measures the ability of each method to differentiate between compound activities against the same assay and is a standard metric in the literature <cit.>.
Other metrics, such as MSE, may appear favorable even if a method makes the same prediction for all compounds against a given assay. As shown, FS-CAP consistently outperforms Tanimoto similarity, the de facto standard in medicinal chemistry, as well as the deep learning-based few-shot learning baselines, across datasets and for different numbers of context compounds. This suggests that FS-CAP may be useful for hit and lead optimization, as it is the most successful in predicting the activities of unknown compounds using only weakly active context compounds. Similar results were obtained when we performed the same study without any limit on the context compound activities, except that the correlation coefficients were higher by about 0.10 (Appendix <ref>). We used the version of the PubChemBA model trained without any context activity limits for Sections <ref> and <ref>. §.§ High-throughput screening We evaluated the performance of FS-CAP and baseline methods on the few-shot prediction of compound activities in high-throughput screening (HTS) assays from PubChemHTS (Table <ref>). While the activity data for a given HTS assay are binary compound labels, more detailed confirmatory (dose-response) studies are often available for selected hit compounds, which can provide context compounds with continuous activity values. In this task, we obtained context compounds via separate dose-response assays not in PubChemHTS, but with the same targets as the PubChemHTS assays (see Appendix <ref> for details). For this task, we train all models on PubChemBA. While the models predict a continuous activity value for each query compound, we treat their outputs as unnormalized probabilities (that were inverted, because a low effective concentration corresponds to a high activity), so that classification metrics may be computed from the model output. In other words, we assumed that a high continuous compound activity prediction from the models corresponded to an “Active” classification, and vice versa. Specifically, we measured performance through the ROC-AUC using the ground-truth binary activity labels, a standard metric in the HTS literature <cit.>. We also measured performance with the k% enrichment, which is the percent increase of actives over the base rate in the top k% of scored compounds, also a standard metric in the HTS literature <cit.>. We find that FS-CAP outperforms baselines both in ROC-AUC and in most enrichment measurements (Table <ref>). This suggests that FS-CAP is more capable of predicting compound activities in screening libraries than the baseline methods, and may be the most effective at raising the hit rate of a library selected from a much larger set of compounds to perform more targeted and cost-effective testing.
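One way to compute the k% enrichment defined above is sketched below; this follows the definition given in the text, while tie-handling and any paper-specific conventions are assumptions:

import numpy as np

def enrichment(scores, labels, k_pct):
    """Percent increase of actives over the base rate in the top k% of scored compounds."""
    n_top = max(1, int(len(scores) * k_pct / 100))
    top = np.argsort(scores)[::-1][:n_top]   # highest predicted activity first
    base_rate = labels.mean()                # fraction of actives overall
    top_rate = labels[top].mean()            # fraction of actives in the top k%
    return 100.0 * (top_rate - base_rate) / base_rate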
§.§ Anti-cancer drug activity prediction In this task, we train all models on PubChemBA and test them on the prediction of anti-cancer drug activities against patient-derived cancer cell lines in the Cancer Cell Line Encyclopedia (CCLE, <cit.>). Context compounds were randomly sampled from all compounds with activity data against a given cell line, and were used to predict the activities against the same cell line of query compounds not in the context set. We report the mean correlation between predicted and experimentally determined IC50 values for drugs across all cell lines. As shown in Table <ref>, FS-CAP is better than the baseline methods at predicting the phenotypic activities of anti-cancer drugs. Although the number of compounds tested in the CCLE is relatively small, the success of FS-CAP in predicting activity values in this dataset, despite being trained only on PubChemBA, suggests that it may learn fundamental relationships between compounds and assays that generalize across datasets. We further explored the properties of FS-CAP's trained context encoder using a new classification task. Here, the input was a set of context compounds and their activities against a given cell line, and the output was a prediction of which cell line these activities correspond to. For this task, we applied a simple logistic regression classifier on top of the context encoding generated by FS-CAP (i.e. x_c). For comparison, we applied a similar approach to the latent path prior of MetaDTA (z in <cit.>), our most competitive baseline (Table <ref>). We randomly selected 20 cell lines in the CCLE. For each of the 20 cell lines and for each number of context compounds, we conducted 15 trials, where each trial consisted of randomly sampling context compounds and their activities against the cell line. As not all compounds have measured activities against all cell lines in the CCLE, we only sampled contexts from the 15 compounds that have experimental activities measured against all 20 cell lines. This prevents the logistic regression classifier from simply learning which compounds were tested against which cell lines. For training the classifier, we used a random 80/20 train/test split, where 80% of the context encodings and their associated cell lines were used to train the model and the remaining 20% were used to judge its accuracy. As shown in Table <ref>, the classifier trained on top of FS-CAP significantly outperforms that of MetaDTA on the test set, suggesting that the context encodings generated by FS-CAP are more meaningful. In addition, Figure <ref> shows the t-SNE <cit.> projections of the context encodings generated by FS-CAP (left panel) and MetaDTA (right panel) using 8 context compounds. The encodings of FS-CAP appear to cluster by cell line (indicated by colors), while the corresponding projections of the MetaDTA encodings appear more scattered, helping to explain the high accuracy of the logistic regression classifier trained on the FS-CAP encodings. Such clustering signifies that FS-CAP is able to produce similar encodings of context compounds when their associated activities are derived from the same assay, even if the identity of the context compounds themselves varies. Particularly interesting is that such clustering is observed on the cell-line dataset despite the model having been trained on the nonoverlapping PubChemBA dataset. This suggests that training on large assay datasets allows for the extraction of biologically relevant information on how functional drug responses relate to the unique aspects of various cancer cell lines, e.g. the type of cancer or the mutations present. Along with Table <ref>, these results help explain the observed superior performance of FS-CAP for compound activity prediction, as a meaningful encoding of assay information is a necessary first step towards predicting the activity of unknown compounds against that assay.
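A sketch of this probing experiment using scikit-learn is given below; the encoding matrix X and the label vector y are assumed to have been collected from the trained context encoder as described above:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X: (n_trials, d_enc) context encodings x_c produced by the trained context
# encoder; y: (n_trials,) integer cell-line labels (20 classes, 15 trials each).
def cell_line_probe(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)   # test-set classification accuracy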
§.§ Model ablations We report performance metrics of ablations of the FS-CAP architecture in Table <ref>. For each ablation, we trained the model and then measured the mean correlation of the predicted and ground-truth activity values across all test-set assays in PubChemBA and BindingDB. This experiment is similar to that presented in Section <ref>, except that context compounds are selected at random and not constrained by their activity. 8 context compounds were used for all tests. We test the significance of using a separate query encoder network (“Base model”) versus feeding the query features directly to the predictor network (“No query encoding”), similar to a typical neural process model. The greater performance of the variation with the query encoder suggests that encoding the query independently of assay information is beneficial for prediction. “Concatenated context” means that we feed the context encoder a binary compound fingerprint concatenated with its associated activity value, instead of multiplying the two. This is similar to a neural process model. This variation shows inferior performance, suggesting that combining the context compound fingerprint and the activity value scalar via multiplication is a useful featurization for the activity prediction task. “No context” denotes that no context was fed to the model at all, and it made activity predictions based solely on the query compound. “Attentive aggregation” means that we applied 4-layer self-attention to the individual context encodings before taking the mean. § DISCUSSION AND CONCLUSIONS The proposed few-shot learning model FS-CAP surpasses both a standard chemical similarity metric and prior few-shot learning baselines in multiple tasks of interest in early-stage drug discovery. These tasks include the prediction of compound activities based on a set of weak-binding context compounds, the prediction of screening library compounds as active or inactive, and the prediction of antitumor activity in cell-based assays, all performed with models trained on large activity datasets. Together, these results suggest that FS-CAP may be broadly useful for target-free, or ligand-based, drug discovery, which has become more common in recent years in comparison to target-based drug discovery that uses protein information <cit.>. FS-CAP may already be useful in its present form as a tool to leverage the limited compound activity data that is typically available in the earliest stages of drug discovery, focusing attention on candidate compounds that are much more likely than randomly chosen compounds to be active in an assay of interest. It thus offers a novel approach to speeding drug discovery and reducing its costs. Exploring the use of FS-CAP for other compound properties might open further applications. For example, it may find applications in predicting pharmacokinetic parameters of candidate compounds, such as bioavailability and half-life; metabolic susceptibility; and toxicity. Limitations of the present implementation of FS-CAP include its use of a relatively simple molecular representation (Morgan fingerprints) and a context aggregation technique with limited expressiveness. Additionally, the inherent limitations of training on experimental assay data, such as the limited tested dose range <cit.> or systematic biases in which compounds are tested against which targets, may limit the applicability of few-shot methods like FS-CAP trained on these datasets to real-world drug discovery projects. Future developments could include the exploration of more complex molecular representations (e.g. sequence- or graph-based) and the application of more complex context aggregation methods beyond the mean. Finally, research into incorporating target information, when available, with few-shot methods may allow for increased prediction accuracy beyond using target information or context compounds alone. § ACKNOWLEDGEMENTS This work was supported in part by the U.S. Department of Energy, Office of Science, the U.S.
Army Research Office under Grant W911NF-20-1-0334, and NSF Grants #2134274 and #2146343. RY has an equity interest in and is a scientific advisor of Salient Predictions. MKG acknowledges funding from the National Institute of General Medical Sciences (GM061300). These findings are solely those of the authors and do not necessarily represent the views of the NIH. MKG has an equity interest in and is a cofounder and scientific advisor of VeraChem LLC. § DATASET DETAILS §.§ PubChemBA and BindingDB We trained on PubChemBA <cit.> and BindingDB <cit.> for their size, high quality, and broad coverage across many targets and assays. For both datasets, we excluded very small or very large molecules, defined as having fewer than 10 atoms or more than 70. From BindingDB, we recorded activities in nanomolar units from either the K_D, K_i, IC50, or EC50 columns, if available. Similarly, we used the PubChem “activity value”, which can be any dose-response activity value (either target-based or phenotypic), normalized to nanomolar units. We used such a broad range of different activity types because all values are similarly determined by an underlying binding mechanism, it increased the amount of data we could train on, and it allowed the trained models to generalize to both target-based and phenotypic data types. If no continuous activity value was available for a given molecule, we discarded it. When activity was expressed as an upper or lower bound, we took the bound itself as the known activity. To reduce outlier activity values, we also clipped activity values with log10 nM values of <-2.5 or >6.5, as values surpassing those limits were rare. Then, we excluded all assays that include fewer than 10 measured compounds. Assays were defined via protein sequence in BindingDB (although some protein targets may contain data aggregated from multiple experimental assays), and by bioassay (i.e. AssayID) in PubChemBA. We transformed all activity values using the base-10 logarithm, as activity often spans several orders of magnitude. BindingDB data was taken directly from the file at <https://www.bindingdb.org/rwd/bind/chemsearch/marvin/SDFdownload.jsp?download_file=/bind/downloads/BindingDB_All_2D_2023m0.sdf.zip>. PubChemBA data was downloaded via the FTP interface (<https://ftp.ncbi.nlm.nih.gov/pubchem/Bioassay/Concise/JSON/>). For each row in the downloaded files, the activity value was taken from the corresponding activity column, and the SubstanceIDs were converted into the corresponding SMILES strings via the files available at <https://ftp.ncbi.nlm.nih.gov/pubchem/Substance/CURRENT-Full/SDF/>. §.§ Cancer Cell Line Encyclopedia The Cancer Cell Line Encyclopedia <cit.> consists of interaction data of 24 drugs against a wide array of 479 patient-derived cancer cell lines. For this paper, we used the dataset reported in Table S11 of <cit.>, and extracted IC50 measurements for each drug measured against each cell line. We excluded compounds with fewer than 10 or more than 70 atoms, and cell lines with fewer than 10 drugs with measured activity. We also excluded all compound-activity pairs for which no continuous activity value was reported.
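The filtering steps above can be summarized in a short pandas/RDKit sketch; the column names are hypothetical, and the atom count here is RDKit's heavy-atom count, which may differ from the authors' exact convention:

import numpy as np
import pandas as pd
from rdkit import Chem

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """df columns (hypothetical): 'assay_id', 'smiles', 'activity_nM'."""
    df = df.dropna(subset=["activity_nM"]).copy()
    # Size filter: keep molecules with 10-70 atoms.
    def n_atoms(s):
        mol = Chem.MolFromSmiles(s)
        return mol.GetNumAtoms() if mol is not None else 0
    counts = df["smiles"].map(n_atoms)
    df = df[(counts >= 10) & (counts <= 70)]
    # Base-10 log transform, clipping outliers to [-2.5, 6.5] log10(nM).
    df["log_act"] = np.log10(df["activity_nM"]).clip(-2.5, 6.5)
    # Drop assays with fewer than 10 measured compounds.
    per_assay = df.groupby("assay_id")["smiles"].transform("count")
    return df[per_assay >= 10]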
§.§ PubChemHTS Starting with a list of Assay IDs (AIDs) obtained from the search function at <https://pubchem.ncbi.nlm.nih.gov/>, we downloaded the top 100 AIDs with the highest number of tested substances, with “BioAssay Type” equal to “Screening” and a linked “Protein Target” section in the “BioAssay Record.” For each linked protein in a given Screening assay, we obtained continuous activity values, to be used as context compounds, via the protein's “Chemicals and Bioactivities” section in PubChem. As we used these compounds and their activities as context compounds in our experiments, we excluded all proteins, and therefore assays, with fewer than 10 tested compounds with continuous activity values. After obtaining the context compounds, we downloaded the datatable for each assay, which contained Compound IDs (which were linked to SMILES strings, to be used as query compounds in our tests, via the PubChem API) and binary compound activity classifications (“Active” or “Inactive” in the datatable file, to be used for computing ROC-AUC and enrichment scores). § IMPLEMENTATION DETAILS Unless specifically stated, all baselines were trained with the same molecular representation (2048-bit Morgan fingerprints with a radius of 3). For FS-CAP and all baseline methods, we tuned hyperparameters once for each model in the PubChemBA task discussed in Section <ref> using 8 context compounds, except without limiting the context activity range, and used the same hyperparameters for all other datasets and subsequent tasks. For all methods, the reported model performance in each experiment is measured after 2^27 query molecules had been seen in training, or until the average Pearson's r across all test assays stopped improving on PubChemBA with 8 context compounds. A grid search was performed over all sets of hyperparameters for each model. All models were trained on a server with 8 NVIDIA GTX 3080 GPUs. §.§ FS-CAP FS-CAP was implemented in PyTorch. We used an Adam optimizer with a base learning rate of 10^-5 and 128 steps for learning rate warmup, and then cosine annealed the learning rate to 0 over all training steps. We used dropout (p=0.1) and batch normalization following each layer in the predictor network (except in the last 2 layers), while the encoder networks used neither. Hyperparameters: we used the same number of layers and the same layer width in the context encoder, query encoder, and predictor networks. §.§ Tanimoto similarity We used 2048-bit Morgan fingerprints with a radius of 3 for the calculation of Tanimoto similarity. When using multiple context compounds, we calculate the Tanimoto similarity between each context compound and all query compounds, but only use the highest-similarity context compound for each query compound. This is because if a query compound is similar to one of, but not all, the known actives (the context set), it is still presumed to be active. §.§ MolBERT + ANP We used the pretrained MolBERT model available from <https://github.com/BenevolentAI/MolBERT> to encode SMILES strings into a 768-dimensional vector. We then used this featurizer (which was not made trainable) in an attentive neural process architecture to represent the context and query features, x_i and x_*, respectively <cit.>.
We re-implemented the attentive neural process architecture in PyTorch, following the original paper <cit.> and their published code (<https://github.com/deepmind/neural-processes/blob/master/attentive_neural_process.ipynb>) as closely as possible. We trained the model using an Adam optimizer with a base learning rate of 10^-5 and 128 steps for learning rate warmup, and then cosine annealed the learning rate to 0 over all training steps. Hyperparameters: we used the same hidden dimension for both the deterministic and latent paths. §.§ NGGP We used the official PyTorch implementation of NGGP available at <https://github.com/gmum/non-gaussian-gaussian-processes>. Using the existing code available for one of the repository's datasets, we modified the dataloaders for our task to output 2048-bit Morgan fingerprints. We trained only two separate models, one for BindingDB and one for PubChemBA, because the size of the context set is only relevant at test time. We also expanded the network used in the code to more layers, so that the number of parameters was approximately equivalent to the other baselines. Hyperparameters: we used the defaults provided in the code for the corresponding dataset for all other hyperparameters. §.§ MetaNet We adapted the Chainer code provided in the official MetaNet implementation (<https://bitbucket.org/tsendeemts/metanet/src/master/>) to PyTorch. Most of the architectural choices were kept the same as in the original code, although we changed each Block network to include two 2048-wide linear layers with ReLU nonlinearities so that the entire model used about the same number of parameters as the other baselines. We trained the model using an Adam optimizer with a base learning rate of 10^-5 and 128 steps for learning rate warmup, and then cosine annealed the learning rate to 0 over all training steps. §.§ MAML We used the MAML implementation in the learn2learn library <cit.>. The base model was a simple multilayer perceptron that takes a 2048-bit Morgan fingerprint as input and produces a single scalar output, which is the activity value prediction. As in the original MAML paper <cit.>, we used an SGD optimizer with a constant learning rate, as well as dropout with p=0.1 after all layers of the network during training. §.§ MetaDTA Since there was no available implementation of MetaDTA, we re-implemented it in PyTorch. For information on the specifics of the MetaDTA architecture, see Section 3.2 of <cit.>. The context and query inputs, 𝐱_i and 𝐱_q as described in the paper, were represented with 2048-bit Morgan fingerprints, and the context target y_i used the same scalar activity representation as FS-CAP. As it is not specified in the original paper, we used, similarly to FS-CAP, an Adam optimizer with a base learning rate of 10^-5 and 128 steps for learning rate warmup, and then cosine annealed the learning rate to 0 over all training steps. Hyperparameters: we used the same number of layers for the query and context set embedding networks and the decoder network, and the same number of attention heads for the multi-head cross- and self-attention components of the model. § ADDITIONAL RESULTS §.§ FS-Mol Table <ref> reports results on the few-shot regression of activity values from the FS-Mol dataset <cit.>. While the original FS-Mol paper does not evaluate methods on the regression task, we use the same experimental setting as <cit.>, which is to measure the average task-level out-of-sample coefficient of determination (R^2_os) across 10 random support/query sets.
See <cit.> for a comparison with other methods (they provide performance values in a bar chart, so we could not obtain the numeric values for this table). §.§ PubChemBA with no activity constraint Table <ref> reports the same experimental setting as reported in Section <ref>, except without any constraints placed on the activity of the context compounds. Here, we simply drew context compounds at random, regardless of their activity value. The models trained on this task using the PubChemBA dataset were applied to the tasks presented in Sections <ref> and <ref>, as the tasks presented in those sections similarly do not have activity constraints on the context compounds. | http://arxiv.org/abs/2311.16328v1 | {
"authors": [
"Peter Eckmann",
"Jake Anderson",
"Michael K. Gilson",
"Rose Yu"
],
"categories": [
"cs.LG",
"q-bio.QM"
],
"primary_category": "cs.LG",
"published": "20231127212341",
"title": "Target-Free Compound Activity Prediction via Few-Shot Learning"
} |
Selective active resonance tuning for multi-mode nonlinear photonic cavities
Alan D. Logan1,†,*, Nicholas S. Yama2,†, and Kai-Mei C. Fu1,2,3
The DMD (Dynamic Mode Decomposition) method has attracted widespread attention as a representative modal-decomposition method and can build a predictive model. However, DMD may give predicted results that deviate from physical reality in some scenarios, such as when dealing with translation problems or noisy data. Therefore, this paper proposes a physics-fusion dynamic mode decomposition (PFDMD) method to address this issue. The proposed PFDMD method first obtains a data-driven model using DMD, then calculates the residual of the physical equations, and finally corrects the predicted results using Kalman filtering and gain coefficients. In this way, the PFDMD method can integrate the physics-informed equations with the data-driven model generated by DMD. Numerical experiments are conducted using the PFDMD, including the Allen-Cahn, advection-diffusion, and Burgers' equations. The results demonstrate that the proposed PFDMD method can significantly reduce the reconstruction and prediction errors by incorporating physics-informed equations, making it usable for translation and shock problems where the standard DMD method fails. § INTRODUCTION In recent years, modal-decomposition methods have developed rapidly and achieved success in fluid dynamics <cit.>. At first, modal decomposition was used as a mathematical tool to analyze data in fluid dynamics. However, many researchers have realized that it can also be used for system identification - a process by which a model is constructed for a system from measurement data - and some variants can make accurate predictions. It enables fast and efficient data reduction <cit.>, data analysis <cit.>, and reduced-order modeling (ROM) <cit.> of complex datasets. Besides effectively analyzing experimental and simulation data, modal-decomposition methods can also predict fluid dynamics by extracting the underlying physical patterns. Among them, Proper Orthogonal Decomposition (POD) <cit.> and Dynamic Mode Decomposition (DMD) <cit.> are widely used to construct data-driven models in scientific and engineering fields. POD was first introduced to fluid dynamics by Lumley <cit.>. It decomposes data from a dynamical system into a hierarchical set of orthogonal modes ranked by their energy content. However, POD is an energy-based modal-decomposition method and cannot separate spatial and temporal signals, so additional processing is required for data-driven prediction <cit.>. DMD is an alternative method based on the Koopman operator <cit.>. The infinite-dimensional Koopman mode can be truncated and approximated using the DMD algorithm proposed by Schmid <cit.>. The main advantage of DMD is that it can decompose high-dimensional spatiotemporal signals into a triplet of purely spatial modes, scalar amplitudes, and purely temporal signals in a single process. This decomposition allows us to reconstruct and predict the full data sequence. Therefore, DMD can extract low-dimensional non-orthogonal modes from time-series snapshots of simulation and experiment data. These extracted modes can be used for model construction and for the analysis of continuous unsteady nonlinear flow fields. So far, POD and DMD methods have been applied in various areas. Freund <cit.> used different norms to reveal the flow structures associated with the sound field.
Raiola <cit.> proposed a filtering method based on POD to improve the physical insight into the investigation of turbulent flows. Han <cit.> captured the coherent structures of the tip leakage vortex in a mixed-flow pump using DMD and reconstructed the corresponding flow field. Kanbur <cit.> successfully predicted the surface temperature and heat discharge rate of lithium polymer batteries in electrochemistry using DMD. Zhao <cit.> extracted the energy and dynamic characteristics of the velocity field in a high-shear mixer, a fluid mechanics problem, using POD and DMD. Amor <cit.> utilized a higher-order DMD to identify the main patterns describing the flow motion in complex flows, and enhanced it by implementing a sliding window to speed up the calculation. As a data-driven prediction tool, the DMD method still has some limitations in practical applications. Firstly, DMD fundamentally relies on the variable separation of the Singular Value Decomposition (SVD) <cit.>, which requires that the observed data have a certain degree of alignment <cit.>. This means that measurement points in an experiment or simulation should remain aligned over time, without rotating or translating. Otherwise, DMD cannot capture appropriate low-dimensional modes that satisfy physical constraints. This limitation makes it difficult to apply the DMD method to predict translation systems such as advection-dominated diffusion <cit.> and bubble dynamics <cit.>. Secondly, DMD is sensitive to noise in the data <cit.>. Errors caused by noise accumulate during the prediction, eventually making the constructed model lose its predictive ability. Lastly, it is worth noting that DMD, as a purely data-driven method, lacks any a priori assumptions or knowledge of the underlying dynamics. This can result in poor generalization of the obtained model, as it lacks physical support. In fluid systems with strong nonlinearity and coupling, this limitation often leads to the failure of the DMD method. In order to address these limitations, researchers have made some improvements to the DMD method. To overcome the alignment requirement of DMD and its failure on translation problems, Lu <cit.> integrated the Lagrangian framework into DMD and supplemented the dataset with velocity data, successfully predicting translation problems. Using Lagrangian DMD, Yin <cit.> accurately reconstructed and predicted a two-dimensional bubble-rising process. Regarding the error caused by noise, Dawson <cit.> and Hemati <cit.> argued that DMD has difficulties extracting accurate dynamic features from noise-corrupted data and proposed various modification strategies. Lastly, to introduce prior physical model features, Baddoo <cit.> utilized the matrix manifold corresponding to physical laws as prior knowledge, purposefully constraining the linear coefficient matrix to such manifolds to enhance the applicability of DMD. Gao <cit.> proposed a DMD variant based on k-nearest-neighbors (KNN) to extend the applicability of DMD to parameterized problems. To cope with inhomogeneous problems, Lu <cit.> introduced a bias term and an identity mapping into DMD. However, these methods are only effective for specific forms of physical information.
Therefore, there is a need for a more generalized method to integrate prior knowledge, such as physical equations, into the DMD method, in order to address violations of physical constraints and to improve the model's generalization and robustness.
In this paper, we propose a Physics-Fusion Dynamic Mode Decomposition (PFDMD) method that can conveniently integrate data-driven models and physical equations. Building on data-assimilation techniques, the proposed PFDMD method introduces general physical equations through Kalman filtering to correct the linear models generated solely from data-driven approaches. The proposed PFDMD method is elaborated in Section 2. In Section 3, the PFDMD method is tested on well-known equations such as the advection-diffusion, Allen-Cahn, and Burgers' equations, some of which are typically hard to predict with the standard DMD method. The predictive accuracy of the PFDMD method is evaluated, and a balance between the physics and the data model is demonstrated. We also assess the robustness of PFDMD when dealing with noisy data, inaccurate physical information, and limited data. In Section 4, conclusions and prospects are provided.
§ METHODS
§.§ Dynamic Mode Decomposition
This section provides a brief introduction to the DMD method for dynamical systems. In a dynamical system, the observed data (measurement points in experiments or grid points in numerical simulations) at a specific time t_k are referred to as a snapshot and denoted x_k. By reshaping the snapshot data into a column vector, the current snapshot x_k and the next snapshot x_k+1 are paired to form a time series: {( x_k,x_k+1) }_k=1^n-1, t_k+1=t_k+Δ t. In the DMD method, these snapshots are arranged according to the time series and form two matrices X_1 and X_2:
X_1=[ x_1 x_2 ⋯ x_n-1]∈ℝ^m×( n-1 ), X_2=[ x_2 x_3 ⋯ x_n]∈ℝ^m×( n-1 ),
where m is the spatial dimension of a snapshot and n is the number of snapshots. The DMD method is based on the linear assumption and aims to find the best-fit linear operator between the two matrices:
X_2=AX_1, A=min_A‖ X_2-AX_1‖_F=X_2X_1^†,
where ‖·‖_F represents the Frobenius norm, and † denotes the pseudo-inverse. In computations, due to the high dimensionality of the data, computing the pseudo-inverse directly using the least-squares method requires significant computational resources. Therefore, we obtain the pseudo-inverse through the SVD:
X_1≈ UΣV^*,
where ∗ denotes the conjugate transpose. In addition to computing the pseudo-inverse, the SVD can also approximate the rank of the data matrix X_1, thereby reducing the dimensionality and the computational complexity. After performing the SVD, a reduced approximation Ã of the matrix A can be obtained, which is the result of the data dimensionality reduction:
A=X_2VΣ^-1U^*, Ã=U^*AU=U^*X_2VΣ^-1.
Afterwards, a spectral decomposition is performed on the Ã matrix, yielding its eigenvalues and eigenvectors:
ÃW=WΛ,
where the column vectors of W are the eigenvectors of Ã, and the diagonal matrix Λ consists of the corresponding eigenvalues λ. The DMD modes Φ are then reconstructed by combining the time-shifted matrix X_2 with the eigenvector matrix W, and they constitute the modal decomposition of the A matrix:
Φ =X_2VΣ^-1W, AΦ =( X_2VΣ^-1U^*)( X_2VΣ^-1W )=ΦΛ.
Once Φ and Λ are obtained, they allow for the reconstruction and prediction of any snapshot at any given time:
x_k=ΦΛ^k-1Φ^†x_1.
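To make this pipeline concrete, the following minimal NumPy sketch implements the steps above (SVD truncation, reduced operator Ã, modes Φ, and prediction); the function and variable names are our own illustrative choices and not part of the paper's implementation:

```python
import numpy as np

def dmd(X1, X2, rank):
    """Exact DMD: fit X2 = A X1 and return the modes Phi and eigenvalues lam.

    X1, X2 : (m, n-1) snapshot matrices, X2 shifted one step ahead of X1.
    rank   : SVD truncation rank r.
    """
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
    A_tilde = U.conj().T @ X2 @ V / s        # reduced operator U* X2 V Sigma^-1
    lam, W = np.linalg.eig(A_tilde)          # spectral decomposition of A_tilde
    Phi = X2 @ V / s @ W                     # DMD modes Phi = X2 V Sigma^-1 W
    return Phi, lam

def dmd_predict(Phi, lam, x1, k):
    """Reconstruct snapshot x_k = Phi Lam^(k-1) Phi^dagger x_1."""
    b = np.linalg.pinv(Phi) @ x1             # mode amplitudes
    return ((Phi * lam**(k - 1)) @ b).real   # eigenvalues may be complex
```

Because the eigenvalues and modes are generally complex, the real part of the prediction is taken for real-valued snapshot data.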
§.§ Physics-Fusion DMD method
The DMD method can construct a linear model from data, with the key step being the derivation of the linear mapping matrix A and its modal decomposition. This decomposition yields a preliminary linear approximation model of the system. However, as mentioned in the introduction, this purely data-driven model often fails to capture the dynamic mapping relationships due to the lack of prior knowledge and physical information, thus producing a model that departs from physical reality. Therefore, to address this deficiency, this paper proposes a Physics-Fusion Dynamic Mode Decomposition (PFDMD) method, which integrates physical information with a data-driven model. The algorithm flowchart of PFDMD is shown in Fig. <ref>.
As shown in Fig. <ref>, after obtaining the A matrix and decomposing it into Φ and Λ through DMD, we can introduce physical information into the model by using the Kalman filter framework to correct the predicted results with the residual of the physical equations. First, we preliminarily predict the state at time k+1:
û_k+1=Au_k=ΦΛΦ^†u_k,
where u_k represents the snapshot at the current time, and û_k+1 represents the preliminarily predicted snapshot for the next time step obtained through DMD. After obtaining the preliminary prediction, we substitute it into the physical equations and correct it using the equation residuals. The predicted residual of the physical information is therefore expressed as follows:
ẑ_k+1=h(û_k+1),
where h( ·) represents the physical-information operator, i.e. the physical equations that the snapshot should satisfy. Afterwards, as shown in Fig. <ref>, the residual of the physical information ( z_k+1-ẑ_k+1) is weighted by a Kalman gain coefficient K and then applied to the predicted results:
u_k+1=û_k+1+K_k+1( z_k+1-ẑ_k+1)=û_k+1-K_k+1ẑ_k+1,
where u_k+1 is the predicted result after the correction by the physical information, K is the Kalman gain coefficient, z_k+1 is the theoretical residual of the physical equation, whose value is typically zero, and ẑ_k+1 is the predicted residual of the physical equations. Since z_k+1 is zero, we obtain the simplified expression on the right-hand side of Eq. (<ref>).
The Kalman gain can be obtained through the covariance matrix P. The preliminary prediction of the covariance matrix P is expressed as follows:
P̂_k+1=ΦΛΦ^†P_k( ΦΛΦ^†)^T+Q,
where P_k is the snapshot covariance matrix at the current time step, P̂_k+1 is the preliminarily predicted covariance matrix at the next step, and Q represents the process-error matrix in the Kalman filtering algorithm, which in this article denotes the error of the data-driven model. After predicting the covariance matrix, the Kalman gain can be calculated:
K_k+1=P̂_k+1H_k+1^T[ H_k+1P̂_k+1H_k+1^T+R ]^-1,
H_k+1=∂ h/∂ u=[ ∂h_1/∂u_1 ∂h_1/∂u_2 ⋯ ∂h_1/∂u_n; ∂h_2/∂u_1 ∂h_2/∂u_2 ⋯ ∂h_2/∂u_n; ⋮ ⋮ ⋱ ⋮; ∂h_n/∂u_1 ∂h_n/∂u_2 ⋯ ∂h_n/∂u_n ],
where K represents the Kalman gain coefficient, and R represents the measurement-error matrix in the Kalman filter, which in this article represents the error of the physical information. H is the Jacobian matrix of the nonlinear function h( ·). Using Eq. (<ref>), the predicted result can be corrected by combining the physical information. At this point, PFDMD completes the correction of the predicted result for the next time step and then updates the P matrix for the next iteration:
P_k+1=[ I-K_k+1H_k+1]P̂_k+1.
By iterating, all the predicted results can be obtained after the correction by the physical information. It is worth noting that the prediction process does not introduce any new measurement data.
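As an illustration, one PFDMD prediction-correction step can be sketched in an extended-Kalman style as follows; here F stands for the real part of the DMD propagator ΦΛΦ^†, and h and jac_h are user-supplied callables for the physics residual and its Jacobian (these interfaces are our assumptions, introduced only for illustration):

```python
import numpy as np

def pfdmd_step(u, P, F, h, jac_h, Q, R):
    """One PFDMD prediction-correction step (extended-Kalman style).

    u : current snapshot (n,);  P : state covariance (n, n)
    F : real part of the DMD propagator Phi diag(lam) Phi^dagger
    h : physics residual operator, h(u) ~ 0 for a physical state
    jac_h : callable returning the Jacobian H = dh/du at a state
    Q, R : data-model and physics-information error covariances
    """
    u_hat = F @ u                            # data-driven prediction
    P_hat = F @ P @ F.T + Q                  # covariance prediction
    z_hat = h(u_hat)                         # physics residual at u_hat
    H = jac_h(u_hat)
    K = P_hat @ H.T @ np.linalg.inv(H @ P_hat @ H.T + R)   # Kalman gain
    u_new = u_hat - K @ z_hat                # correction toward z = 0
    P_new = (np.eye(len(u)) - K @ H) @ P_hat
    return u_new, P_new
```

Iterating pfdmd_step over all time steps, with no new measurement data, reproduces the loop described above; the roles of the parameters Q and R are discussed next.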
Finally, in Eqs. (<ref>) and (<ref>), there are two parameters, Q and R, representing the errors of the data and of the physical information, respectively. This article combines Q and R into a weight coefficient w, which measures whether the PFDMD algorithm relies more on the data-driven model or on the physical information. Its value ranges from 0 to 1:
w=R/(R+Q).
When the weight coefficient is small, the error of the physical information R has a smaller proportion, and the predicted results of PFDMD are closer to the physical information. Conversely, a larger weight coefficient indicates that the data-driven model is more significant, and the predicted results are closer to those of the traditional DMD. In this way, by adjusting the value of the weight coefficient, we can control whether the results of PFDMD are closer to the physical information or to the data.
§ RESULTS
In this section, the proposed PFDMD method is applied to solve partial differential equations (PDEs) to demonstrate its effectiveness and characteristics. In Section 3.1, the PFDMD method is applied to a one-dimensional diffusion equation and to an Allen-Cahn equation to show its reconstruction and prediction performance. The advection-diffusion equation and the Burgers' equation are used as examples in Section 3.2 to illustrate the performance of PFDMD on translation and shock-wave problems, where standard DMD fails. In Section 3.3, the PFDMD method is applied to several special application scenarios: noisy data, inaccurate physical information and the absence of a data model. Section 3.4 discusses updating the Kalman gain for only a limited number of steps, as well as the computational cost of the algorithm. All examples are compared with the numerical solutions of the PDEs, and the accuracy is measured by the L2 relative error:
Error=√(∑_i=1^N| u̅( x_i,t_i)-u( x_i,t_i) |^2)/√(∑_i=1^N| u( x_i,t_i) |^2).
§.§ Simple diffusion problem
First, the effectiveness of the PFDMD method is demonstrated on the following simple one-dimensional diffusion equation:
∂ u/∂ t=D∂^2u/∂x^2, x∈ [0,2], t∈ [0,2],
b.c. u( 0,t )=u( 2,t )=0,
i.c. u( x,0 )=0.5exp [-( x-1 )^2/0.05^2].
In this equation, the diffusion coefficient D is set to 0.01. The spatial domain is discretized into m = 200 intervals, while the time domain is discretized into n = 500 steps. In this paper, the PDEs are solved with a first-order finite-difference upwind scheme for the advection part and central finite differences for the diffusion part. Both the DMD and PFDMD algorithms select the first n = 200 snapshots as the dataset to obtain the matrix A of the linear model using Eq. (<ref>). Therefore, in this study, the results from 0 to 200 snapshots are referred to as "reconstruction", while the results from 200 to 500 snapshots are referred to as "prediction". To ensure a fair comparison, both the DMD and PFDMD methods use the same SVD truncation. In this case, the SVD truncation is 5 (rank = 5), which already provides a good reconstruction and prediction ability.
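For this problem, the physical-information operator h(·) and its Jacobian can be discretised directly; the sketch below assumes a backward-Euler time derivative and central differences for the diffusion term, one plausible choice since the paper does not spell out this discretisation:

```python
import numpy as np

def make_diffusion_residual(u_prev, dt, dx, D):
    """Residual h(u) = (u - u_prev)/dt - D d2u/dx2 for the 1-D diffusion
    equation, with central differences and homogeneous Dirichlet
    boundaries; h(u) ~ 0 when u is consistent with the PDE."""
    n = len(u_prev)
    L2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / dx**2   # second-difference matrix

    def h(u):
        return (u - u_prev) / dt - D * (L2 @ u)

    def jac_h(u):
        return np.eye(n) / dt - D * L2             # constant: the PDE is linear

    return h, jac_h
```

With u_prev set to the last corrected snapshot, the returned h and jac_h plug directly into the Kalman correction step sketched in Section 2.2.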
The results of the one-dimensional diffusion equation are shown in Fig. <ref>(a) and are consistent with the results in Lu's research <cit.>. The DMD method demonstrates excellent predictive capability. However, the zoomed-in detail shows that incorporating the physical information in PFDMD yields predictions closer to the true values than DMD. Fig. <ref>(b) shows the reconstruction and prediction errors of the two methods, revealing that the relative reconstruction error of PFDMD is significantly lower than that of DMD as long as physical information is included. This demonstrates that, after integrating the physical information, a better reconstruction result can be obtained by adjusting the coefficients of the DMD-generated model through Kalman filtering. The locally zoomed-in Fig. <ref>(c) shows that both methods achieve satisfactory prediction results after 200 snapshots. Still, the prediction errors increase over time due to the error accumulation caused by the linear model. Furthermore, from the results in Fig. <ref>(c), it can be observed that the smaller the weight coefficient w, the greater the impact of the physical information and the better the predictive capability of the model.
As a second case, we take the Allen-Cahn equation from reaction-diffusion as an example to investigate the application of the proposed PFDMD method to more complex nonlinear partial differential equations. Here, we consider the equation with Neumann boundary conditions, given as follows:
∂ u/∂ t=μ∂^2u/∂x^2+5( u-u^3), x∈ [-1,1], t∈ [0,2],
b.c. ∂ u/∂ x|_x=-1=∂ u/∂ x|_x=1=0,
i.c. u( x,0 )=0.53x+0.47sin( -3/2π x ).
In this case, the spatial domain [-1,1] is discretized into m = 200 intervals, and the time domain [0,2] is discretized into n = 500 steps. Similarly, n = 200 snapshots are selected as the training set, and the prediction is made up to the 500^th step. The SVD truncation is 10 (rank = 10). In this example, we set the diffusion coefficient to μ =1×10^-4. The computed results and errors are shown in Fig. <ref>. PFDMD exhibits a performance similar to that on the one-dimensional diffusion equation, demonstrating that the method can also be applied to more complex semi-linear PDEs.
§.§ Translation and shock wave problems
As stated in the introduction, DMD struggles to accurately generate reduced-order models (ROMs) for advection-dominated problems due to its SVD-based approach. The main reason is that advection-dominated motion involves translation relationships between the snapshots, while the SVD method requires spatial alignment, making it challenging to capture the underlying dynamics throughout the high-dimensional space <cit.>. However, in PFDMD, introducing the covariance allows incorporating the relationships between spatial positions. Consequently, PFDMD holds promise in addressing the issues of DMD on advection-dominated processes. In this section, we demonstrate the reconstruction and prediction capability of PFDMD for the advection-diffusion equation and for the Burgers' equation, including its two-dimensional scenario.
By adding an advection term to the diffusion equation, we obtain an advection-dominated problem in which the distribution of u translates over time. It can be described by the following equation:
∂ u/∂ t+a∂ u/∂ x=D∂^2u/∂x^2, x∈ [0,2], t∈ [0,2],
b.c. u( 0,t )=u( 2,t )=0,
i.c. u( x,0 )=0.5exp [-( x-1 )^2/0.05^2].
In this case, the spatial domain [0,2] is discretized into m = 200 intervals, and the time domain [0,2] is discretized into n = 500 steps. Similarly, n = 200 snapshots are selected as the training set, and the prediction is made up to the 500^th step. In this example, we set the advection and diffusion coefficients to a = 0.6 and D = 0.01, respectively, to increase the proportion of the advection term in the advection-diffusion equation and highlight its travelling-wave character. Fig. <ref> demonstrates that DMD fails to capture the system dynamics, resulting in non-physical (oscillatory and negative) predictions. This failure cannot be compensated by increasing the SVD rank truncation.
Therefore, DMD inevitably cannot predict the translation problem beyond 200 steps, and the error increases rapidly after 200 steps. As shown in Fig. <ref>(a), DMD still produces acceptable results during the reconstruction but completely fails during the prediction, with errors reaching up to 100%. After incorporating the physical information, both the reconstruction and prediction errors of PFDMD are much lower than those of DMD, as shown in Fig. <ref>(b) and (c). The reconstruction and prediction results demonstrate that PFDMD effectively overcomes the limitations of DMD in predicting translation problems by incorporating physical information.
In the next example, we aim to emphasize the capability of the proposed method on shock-wave problems. Let us consider the Burgers' equation, a nonlinear partial differential equation that plays a significant role in various fields of applied mathematics. It is commonly used to simulate the propagation of shock waves, for example in fluid mechanics, nonlinear acoustics, and gas dynamics. Here, we consider the Burgers' equation with Dirichlet boundary conditions, specified as follows:
∂ u/∂ t+u∂ u/∂ x=μ∂^2u/∂x^2, x∈ [-1,1], t∈ [0,1],
b.c. u( -1,t )=u( 1,t )=0,
i.c. u( x,0 )=-sin( π x ).
In this case, the spatial domain [-1,1] is discretized into m = 200 intervals, and the time domain is discretized into n = 500 steps. We select n = 150 snapshots as the training set and predict up to the 500^th step. The viscosity coefficient is set to μ =0.01/π. The SVD truncation is 10 (rank = 10). Fig. <ref>(a) and (b) compare the reconstruction and prediction results of DMD and PFDMD. PFDMD successfully simulates the shock-wave generation without non-physical oscillations at the turning points. It is worth noting that the prediction results of DMD exhibit severe oscillations after 300 steps, while PFDMD with a higher weight on the physical information still produces smooth curves at the turning points. Fig. <ref>(c) and (d) display the detailed prediction results at the 350^th step and the error curves. The predicted results with the chosen weights in PFDMD are more consistent with the physical situation than those of the DMD method, indicating that PFDMD effectively integrates the physical information, keeping the prediction errors consistently lower than those of DMD.
Extending the one-dimensional Burgers' equation to two dimensions significantly increases the amount of data needed for PFDMD. The following conditions are considered to investigate the applicability of the method in high-dimensional situations:
∂ u/∂ t+u∂ u/∂ x+v∂ u/∂ y=μ( ∂^2u/∂x^2+∂^2u/∂y^2), x∈ [0,2], y∈ [0,2],
∂ v/∂ t+u∂ v/∂ x+v∂ v/∂ y=μ( ∂^2v/∂x^2+∂^2v/∂y^2), t∈ [0,1],
b.c. u( 0,y,t )=u( 2,y,t )=0, u( x,0,t )=u( x,2,t )=0, v( 0,y,t )=v( 2,y,t )=0, v( x,0,t )=v( x,2,t )=0,
i.c. u( x,y,0 )=v( x,y,0 )=1.2( sin( 0.02π x )-sin( 0.02π y ) ).
As shown in Fig. <ref>(a), (b), and (c), the flow-field plots of the x-direction velocity u in the Burgers' equation are presented. The dataset includes the first 800 steps, and the prediction is made up to the 1500^th step. From Fig. <ref>(b), it can be observed that the prediction of DMD still produces non-physical results. Not only does the predicted wave-peak value end up being 0.2 higher than the actual value, but oscillations are also generated within the region around the wave peak. However, for PFDMD with an appropriate weight, the oscillations do not appear in the prediction results, and the peak value is closer to the actual value.
Fig. <ref>(d) shows the model error curves for different weight coefficients. It can be observed that the predictions of both methods have low errors up to 1000 steps. However, beyond 1000 steps, the error accumulates rapidly, indicating the failure of the data-driven model at that point. Therefore, it is necessary to correct the predicted results using the physical equations. Hence, lowering the weight coefficient can reduce the dependence on the data-driven model, allowing predictions over more steps within a reasonable error range.
§.§ Impact of inaccurate data models and physical information
In the above examples, where the physical information is completely accurate, it is evident that a higher weight on the physical information leads to a smaller prediction error. However, in practical applications, the data and the physical information may deviate from the accurate solution. These deviations typically involve noisy data, inaccurate physical information and the absence of a data model. In such cases, a smaller weight is not always better. Therefore, we first examine the impact of a noisy dataset on PFDMD using the Allen-Cahn equation. In this scenario, to demonstrate the effectiveness of PFDMD, the A matrix is generated using noisy data (original data + 0.01 × Gaussian noise). As shown in Fig. <ref>(a), PFDMD produces better prediction results and avoids the deviated and sunken state observed in DMD. This suggests that fusing the physics equations can, to some extent, prevent the generation of predictions that are inconsistent with physical principles. Fig. <ref>(b) illustrates that, due to the influence of the data noise, the prediction error of DMD increases continuously and eventually exceeds 10%. In contrast, PFDMD effectively suppresses the accumulation of prediction errors.
Next, let us consider the scenario in which the data and the physical equations are both inaccurate. First, for the one-dimensional diffusion equation, when generating the data used for DMD training, the diffusion coefficient is increased from 0.01 (the value in the accurate solution) to 0.0105, representing the deviation between the data and the accurate solution. Simultaneously, the diffusion coefficient in the physical equation is changed to 0.0095 to introduce biased physical information into PFDMD. The computational results are shown in Fig. <ref>(a). The results at the peak in Fig. <ref>(a) indicate that the predicted results obtained by DMD are smaller than the accurate solution due to the larger diffusion coefficient. At the same time, PFDMD demonstrates a better prediction capability after incorporating the physical information. However, due to the deviation of the diffusion coefficient in the physical information, the smaller the weight coefficient (i.e. the closer it is to the physical information), the larger the predicted values. The prediction-error plot in Fig. <ref>(b) shows that the DMD model consistently maintains a relatively high error due to the influence of the inaccurate data. However, because of the inaccuracy in the physical information, when the weight is set to 0.8, the obtained results have a smaller error than in the case of a 0.01 weight. Therefore, a smaller weight does not lead to more accurate results when the physical information is inaccurate. Determining the weight value requires striking a balance between the data-driven and physics-driven components.
Instead of using existing data to generate the A matrix through DMD, we also consider a scenario where only physical information is available, without any data, by setting A to the identity matrix.
In this situation, the algorithm framework reduces to one driven solely by the physical equations. Fig. <ref>(a) shows the predicted results at the 300^th step of the Allen-Cahn equation. When the A matrix is the identity matrix, the result is depicted by the red curve, which represents a distribution of u that does not change over time. However, the results are corrected after incorporating the physical information, regardless of the weight value w. The smaller the weight, the smaller the influence of the identity matrix A, and the more accurate the results, as indicated by the error curve in Fig. <ref>(b). When w is set to 0.001, results very close to the true solution can be obtained. In this case, PFDMD effectively reduces to a corrected, non-iterative calculation of the PDE. This demonstrates that PFDMD can be applied to special cases without a data-driven model.
§.§ Analysis of Kalman gain updating and computational cost
One characteristic of the Kalman filtering method is that it can update the Kalman gain (K) for only a few steps and then cease updating the K values, thereby accelerating the computation. PFDMD can perform such a calculation as well. In the following scenario, the Kalman gain coefficient is updated using Eq. (<ref>) only in the initial 10 timesteps and is kept unchanged in the subsequent prediction calculations. In other words, the gain coefficients are only applied at each time step, as shown in Eq. (<ref>), without the updating steps of Eqs. (<ref>) to (<ref>), thereby reducing the computational complexity of PFDMD. To demonstrate the predictive capability of PFDMD after updating the Kalman gain (K) for a limited number of steps, a dataset with significant prediction errors is required. Therefore, we continue with the noisy Allen-Cahn example presented in Section 3.3. As shown in Fig. <ref>(a), even with only 10 correction steps, PFDMD still produces better prediction results without updating the Kalman gain coefficients and avoids the non-physical oscillations observed in DMD. Fig. <ref>(b) displays the average reconstruction and prediction errors obtained by updating the Kalman gain (K) over various numbers of time steps. It can be observed that continuously updating K leads to improved prediction accuracy; however, the trade-off is an increase in computational cost.
The improved prediction results obtained through the correction are satisfactory; however, the computational cost of the algorithm should also be discussed. For the case in this section, the time taken to generate the DMD modes and the time taken by the different methods to predict one step are measured. As shown in Table <ref>, DMD still exhibits a very low computational cost, while the updating and correction of P and K in PFDMD greatly increase the time of the prediction step. The prediction time of PFDMD remains within an acceptable range, which is related to the small dataset size of this case. Without updating K, the computation time of PFDMD is close to that of DMD. Therefore, to reduce the computational cost, we can stop updating K after a certain step to accelerate the prediction.
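A sketch of this accelerated variant is given below: the covariance P and the gain K are updated only during the first n_update steps, and the last computed gain is reused afterwards; as before, the interfaces are illustrative assumptions, and h is treated as a fixed residual operator for simplicity:

```python
import numpy as np

def pfdmd_rollout(u0, P0, F, h, jac_h, Q, R, steps, n_update=10):
    """Roll out PFDMD, updating P and the Kalman gain K only during the
    first n_update steps (n_update >= 1) and reusing the last K afterwards."""
    u, P = u0, P0
    K = H = None
    traj = [u0]
    for k in range(steps):
        u_hat = F @ u
        if k < n_update:                     # full Kalman update
            P = F @ P @ F.T + Q
            H = jac_h(u_hat)
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            P = (np.eye(len(u)) - K @ H) @ P
        u = u_hat - K @ h(u_hat)             # correction with the (frozen) gain
        traj.append(u)
    return np.stack(traj)
```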
§ CONCLUSION
This paper proposes a Physics-Fusion Dynamic Mode Decomposition (PFDMD) method, which introduces physics information through the residuals of partial differential equations. By comparing the predictive results of DMD and PFDMD on various benchmark tests, we demonstrate that PFDMD noticeably outperforms DMD under different physics-information weights. The PFDMD method is capable of solving problems involving translation and shock waves. In particular, PFDMD can adjust the degree of fusion of the physics information to improve the versatility of the DMD model. It is worth noting that, even with the combination of imperfect physics information and an inaccurate DMD model, PFDMD can still improve the prediction accuracy. Moreover, the physics-fusion framework proposed in this study is not confined to DMD alone. The adaptability of this framework extends to a broader range of data-driven models, including neural networks. Fusing this framework with other data-driven models offers a promising avenue to enhance model performance.
§ ACKNOWLEDGMENTS
Financial support from the National Natural Science Foundation of China (Grant nos. 22308251, 22178247 and 22378304) is gratefully acknowledged.
§ DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
"authors": [
"Yuhui Yin",
"Chenhui Kou",
"Shengkun Jia",
"Lu Lu",
"Xigang Yuan",
"Yiqing Luo"
],
"categories": [
"physics.comp-ph"
],
"primary_category": "physics.comp-ph",
"published": "20231127075457",
"title": "PF-DMD: Physics-fusion dynamic mode decomposition for accurate and robust forecasting of dynamical systems with imperfect data and physics"
} |
A Social-aware Gaussian Pre-trained Model for Effective Cold-start Recommendation
==================================================================================
Siwei Liu (Mohamed bin Zayed University of Artificial Intelligence, Department of Machine Learning, Masdar, Abu Dhabi, 44737, United Arab Emirates), Xi Wang (University College London, School of Computer Science, Gower Street, London, WC1E 6BT, United Kingdom), Craig Macdonald and Iadh Ounis (University of Glasgow, School of Computing Science, Lilybank Gardens, Glasgow, G12 8QQ, United Kingdom)
The use of pre-training is an emerging technique to enhance a neural model's performance, which has been shown to be effective for many neural language models such as BERT. This technique has also been used to enhance the performance of recommender systems. In such recommender systems, pre-training models are used to learn a better initialisation for both users and items. However, recent existing pre-trained recommender systems tend to only incorporate the user interaction data at the pre-training stage, making it difficult to deliver good recommendations, especially when the interaction data is sparse. To alleviate this common data sparsity issue, we propose to pre-train the recommendation model not only with the interaction data but also with other available information, such as the social relations among users, thereby providing the recommender system with a better initialisation compared with solely relying on the user interaction data. We propose a novel recommendation model, the Social-aware Gaussian Pre-trained model (SGP), which encodes the user social relations and interaction data at the pre-training stage in a Graph Neural Network (GNN). Afterwards, in the subsequent fine-tuning stage, our SGP model adopts a Gaussian Mixture Model (GMM) to factorise these pre-trained embeddings for further training, thereby allowing the cold-start users to benefit from these pre-built social relations. Our extensive experiments on three public datasets show that, in comparison to 16 competitive baselines, our SGP model significantly outperforms the best baseline by up to 7.7% in terms of NDCG@10. In addition, we show that SGP permits to effectively alleviate the cold-start problem, especially when users newly register to the system through their friends' suggestions.
* Social graph pre-training does improve recommendation performance
* A Gaussian Mixture Model can effectively extract meaningful relations from the pre-trained embeddings
* Experimental results show significant improvements using the proposed Social-aware Gaussian Pre-trained model, especially for cold-start users
Keywords: Gaussian Mixture Model, Social Network, Graph Neural Networks, Recommender Systems
§ INTRODUCTION
Deep learning-based models have achieved remarkable success in different domains <cit.>. However, although these deep models have a strong expressiveness power, they cannot easily reach a fully optimised solution during the training stage without an effective initialisation <cit.>.
Therefore, the pre-training technique has been commonly used to optimise deep models by providing them with an effective initialisation <cit.>. Such a pre-training technique has been shown to lead to state-of-the-art performances when the pre-trained model is further fine-tuned to address downstream Natural Language Processing (NLP) <cit.> or information retrieval tasks <cit.>. However, this effective technique has been less studied in recommender systems, possibly due to the limitations of the existing datasets. For example, in the NLP tasks, an unsupervised deep language model can be pre-trained from unlabelled texts (e.g. Wikipedia) and fine-tuned for a supervised downstream task <cit.>. In contrast, in the recommendation scenario, each dataset contains its specific information about the corresponding users and items, but no other ground-truth knowledge such as Wikipedia can be leveraged from outside the dataset to help estimate the users' preferences and the items' attributes.
An existing Neural Collaborative Filtering (NCF) recommendation approach <cit.> has proposed to pre-train the recommendation model with a Multi-Layer Perceptron (MLP) <cit.>. Although effective, the MLP module does not consider other available auxiliary side information, such as the social relations among users <cit.> or the items' timestamps <cit.>; therefore, the applied pre-training technique of NCF is limited in providing the cold-start users with a better initialisation. Since the social relations among users have been shown to be essential in enhancing the recommendation performance and alleviating the cold-start problem <cit.>, we propose to incorporate the social relations and the interaction data at the pre-training stage so that a better initialisation can be obtained for those users who have fewer interactions.
Graph Neural Networks (GNNs), a class of deep learning models <cit.>, have been used to aggregate the nodes' information from their neighbourhoods so as to learn an overall structure from a given type of graph data. Indeed, while GNNs have been previously exploited to enhance general recommender systems <cit.>, they have only recently been studied as pre-training schemes <cit.>. In this work, we devise a novel Social-aware Gaussian Pre-trained model (SGP), which incorporates the users' social relations in the pre-training stage and attempts to search for a relatively optimised solution based on the learned social-aware initialisation during the fine-tuning stage. In the first stage, we pre-train a light GNN model with additional social information to give the users/items meaningful initialised embeddings. Given the neighbourhood aggregation property of the GNN model, incorporating the social relations enables socially-connected users to become closer in the latent space through the aggregation process. In the fine-tuning stage, we load the obtained pre-trained embeddings and re-train the model for further recommendations. The most straightforward approach for leveraging these pre-trained embeddings and decoding the social information is to directly reload them. However, it is essential to note that the interaction data, which will be used in the second stage, has already been exploited at the pre-training stage. Therefore, the direct reuse of the interaction data might cause an overfitting problem. To tackle the problem of data reuse, the relational knowledge distillation technique <cit.> has been proposed to distil relations from a pre-trained model.
The underlying intuition of the relational knowledge distillation technique is that the distillation model is encouraged to extract the essential relations from the pre-trained model, thereby avoiding a deficient performance as well as overfitting <cit.>. Motivated by this technique of relational knowledge distillation, we propose to distil the information from the pre-trained GNN model so that we can later reconstruct meaningful embeddings. Since all embeddings can be viewed as probability distributions, an intuitive solution for distilling information from the pre-trained embeddings is to follow existing works <cit.> and use a normal distribution to model the embeddings. However, those pre-trained embeddings contain prior knowledge and complex latent relations between users and items, which can hardly be modelled with a single normal distribution without information loss. Therefore, during the initialisation of the fine-tuning stage, we propose to apply the Gaussian Mixture Model (GMM) <cit.>, which assumes that all the data points are sampled from a mixture of a finite number of Gaussian distributions. By leveraging this well-developed GMM, our proposed method is devised to factorise the pre-trained embeddings into a finite number of Gaussian distributions, where this number is pre-defined and each distribution can be viewed as a specific interest of a group of users or a particular characteristic of a set of items. To summarise, in this work, we make the following contributions:
∙ We devise a two-stage end-to-end social pre-trained recommendation model, SGP, which uses the GNN model to leverage social information. We show that SGP can achieve state-of-the-art performance on three real-world datasets of user-item interactions and social relations.
∙ We leverage the Gaussian Mixture Model to effectively distil information from the pre-trained embeddings for the downstream recommendation task.
∙ Our proposed SGP model is shown to significantly outperform 16 strong baselines from the literature, while being particularly useful for cold-start and extreme cold-start users (newly registered users).
The remainder of this paper is organised as follows. In Section <ref>, we position our work in the literature. Section <ref> introduces all the relevant notions used in this paper and formally defines the detailed architecture of our SGP model. The experimental setup and the results of our empirical experiments are presented in Sections <ref> and <ref>, respectively, followed by some concluding remarks in Section <ref>.
§ RELATED WORK
In the following, we overview the related work from three perspectives: pre-trained models that learn general representations for various downstream tasks (Section <ref>), graph-based recommendation models that leverage the graph structure of user-item interactions (Section <ref>), and social-aware recommender systems, which incorporate social information into the recommendation process (Section <ref>).
§.§ Pre-trained Models
The pre-training technique has become an emerging research topic, especially in the field of NLP. Pre-trained language models such as BERT <cit.> and the more recent GPT-3 <cit.> have demonstrated robust performance on different downstream NLP tasks. Through pre-training, a language model can learn contextualised embeddings for tokens from a large corpus of texts, so that these tokens can be reused for subsequent tasks with enhanced performances.
Such models can then be fine-tuned for a new downstream task, thereby enhancing the overall performance of the corresponding model and outperforming other handcrafted models. The pre-training technique has also been adopted in recommendation models. For example, <cit.> proposed the Neural Collaborative Filtering (NCF) model to introduce a novel deep learning-based method to the recommender systems community, which has attracted substantial attention from researchers since then. The most remarkable contribution of the NCF model is that it successfully incorporates the multi-layer perceptron (MLP) module, which can, in theory, effectively approximate various types of prediction functions. However, it is noticeable that the NCF model also uses a generalised matrix factorisation (GMF) module to generate pre-trained embeddings, which prevents the NCF model from incorporating auxiliary information at the pre-training stage. To this end, we propose to use instead the GNN technique to replace the MLP pre-trained module, due to the former's ability to support the incorporation of heterogeneous relations, such as the relations among users as well as the users' interaction data. Moreover, the GNN technique, which was initially devised for the node classification and link prediction tasks, naturally performs better than the GMF module at aggregating similar users and items <cit.>. Hence, compared with the GMF module, when the GNN model is used at the pre-training stage, the embeddings of socially related users can be better aggregated in closer proximity in the latent space. Apart from using the GMF module to pre-train on the interaction data, <cit.> introduced a linear pre-trained recommender using a network embedding method. However, their proposed model fails to leverage multi-hop social relations (i.e. a friend's friends), which can be seamlessly addressed by GNN methods. The study by <cit.> is the most closely related recent work to ours; it tries to tackle the cold-start problem by pre-training the recommendation model in a meta-learning setting. However, the contribution of their work is the use of the underlying structure of the user-item interaction graph, which differs from our research goal of using social information to obtain better initialised representations of users and items. Besides, existing works have leveraged other GNN pre-training and contrastive pre-training techniques for sequential <cit.> and conversational <cit.> recommendations, respectively, which are distinct from our proposed general recommender SGP.
§.§ Graph-based Recommendation
Various graph-based recommenders <cit.> have been shown to achieve state-of-the-art performances through the development of the GNN technique and its variants <cit.>. Since the user-item interaction data can be intrinsically depicted as an interaction graph, the GNN technique and its variants have been seamlessly applied in various recommender systems and have achieved good performances. For example, the graph-based recommender model NGCF <cit.> has been shown to outperform many competitive baselines by incorporating a Graph Convolutional Network (GCN) to encode the collaborative signals and to model the users' and items' embeddings. Building on NGCF, the LightGCN model <cit.> further enhanced the recommendation performance by eliminating redundant neural components from NGCF. Recently, GF-CF <cit.> was proposed to further enhance the performance by incorporating a graph filtering method.
However, GF-CF is not an embedding-based model; hence it cannot be adapted to the usual pre-training methods. Other variants of LightGCN, including SGL <cit.> and UltraGCN <cit.>, have also achieved competitive performances. However, they incorporate memory-consuming data augmentation methods, which become even more challenging if side information is also considered. Inspired by the generalisability of LightGCN and its good trade-off between effectiveness and efficiency, we also adopt the simplified GCN <cit.>. Moreover, we incorporate the social information into the embedding generation and updating process, which enables our proposed SGP model to encode the social relations into the users' embeddings. We will show how this auxiliary social information benefits our model by allowing it to obtain a better model initialisation, thereby alleviating the cold-start problem.
§.§ Social-aware Recommendation
Since the early work on SoReg <cit.>, social relations have been shown to be a valuable source of side information for recommendation. Indeed, SoReg uses social relations as a regularisation in order to enhance the recommendation performance. As deep learning-based recommender systems have developed, researchers have also focused on how to use social relations more effectively in advanced recommendation scenarios. This has led the usage of social relations to evolve from traditional regularisation methods to more sophisticated relation-encoding methods <cit.>. In particular, with the emergence of graph-based recommendation, social relations have become a natural choice of side information since, within a user-item interaction graph, relations between users can be encoded as additional edges between user nodes instead of being treated as attributes of entities. For example, Diffnet++ <cit.> incorporated the additional social relations by adding the user-user edges into the original user-item bipartite graph. In addition, S^2-MHCN <cit.> was proposed to capture the high-order information among the users' social relations using a hypergraph neural network augmented by the self-supervised learning technique. Similarly, SCDSR <cit.> is another self-supervised graph recommendation model, which builds a heterogeneous graph using the social and information domains. By doing so, SCDSR aims to leverage the high-order correlations between non-bridge users in the social domain and items in the information domain. Although overall effective, these existing models typically neglect the situation where new users register to the system through their friends' suggestions. Compared with the existing models, our proposed SGP model not only benefits normal cold-start users with fewer than 5 interactions, but also provides effective recommendations to those extreme cold-start users who have no interactions at all.
§ MODEL ARCHITECTURE
Our proposed SGP model consists of two main stages: 1) a social-aware pre-training stage, where a multi-layer GNN is employed to generate the pre-trained embeddings, and 2) an information distillation stage, where we incorporate the Gaussian Mixture Model (GMM) to distil information from the pre-trained embeddings for the subsequent model training and generation of recommendations. In the following, we first define the tasks and some preliminaries in Section <ref>. Next, Section <ref> describes how to incorporate the social relations and a light GNN model to propagate the social information into the users' and items' embeddings.
Finally, in Section <ref>, we demonstrate how to employ the GMM to distil the social information from the pre-trained embeddings for the subsequent training and the production of the final recommendations. To clearly illustrate our model, Figure <ref> depicts the overall structure of SGP, where the upper and lower regions describe the pre-training and fine-tuning stages, respectively. We conclude the presentation of the SGP model with a discussion of its benefits for extreme cold-start users (Section <ref>) as well as an analysis of its time complexity (Section <ref>).
§.§ Preliminaries
In this section, we introduce the notation used across the whole article and formally define our research task. Throughout this paper, we use calligraphy typeface letters to denote sets (e.g. 𝒰 is the set of users). Besides, matrices and vectors are denoted by bold letters, with uppercase letters representing matrices and lowercase letters representing vectors. In Table <ref>, we summarise the main notation used in this article for quick reference. Our task is to highly rank relevant items for each user, given their historical interactions and their available social relations. We consider a recommender system with a user set 𝒰 (|𝒰 | = M) and an item set ℐ (|ℐ | = N). Let R∈ℝ^M× N be the user-item interaction matrix, where the content of R corresponds to either explicit user ratings <cit.> or implicit feedback <cit.>. We consider implicit feedback here because it is more abundant; therefore, R_ui = 1 if the user u has interacted with the item i, otherwise R_ui = 0. As mentioned before, the social network information is also important for improving the recommendation performance, especially when a user does not have enough interactions with items. Let S∈{0,1}^M× M be the user-user social network matrix, where the content of S represents the social connections between each pair of users.
§.§ The Pre-training Stage
A GNN model can leverage the nodes' information and their corresponding relational information in a graph by effectively aggregating information from each node's neighbours. In the recommendation scenario, each node represents either a user or an item. Therefore, if only the interaction information is given, then each user node's neighbours correspond to the items that this user has interacted with (and vice-versa for an item node). On the other hand, when the social network information is also available, a user's neighbours can be his/her interacted items or friends. This friendship information is also important for the recommender system, because users are more likely to interact with those items that have previously been interacted with by their social neighbours <cit.>. Hence, the users' available social relations provide useful insights for inferring their interests and predicting the items that they will interact with. To effectively propagate the friends' information into each socially connected user, we first initialise each user with a randomised embedding vector e_u to represent his/her interests. Similarly, we assign each item a randomised embedding vector e_i. We set E_u to be the embedding matrix containing all the latent vectors of users and E_i to be the embedding matrix for all items. A graph neural network can be employed here to aggregate the users' social information for each user node. By stacking multi-layer GNNs, we can propagate high-order connectivities of social relations from multi-hop neighbours.
In our case, we use the commonly adopted 3-layer GNN to capture a reasonable depth of social connectivity while avoiding the possible over-smoothing effect of the GNN. To use multi-layer GNNs <cit.>, we rely on a well-defined Laplacian matrix ℒ for the specific GNN, so that the information propagation and convolution functions can be executed effectively in matrix multiplication form. Different from the NGCF <cit.> model, which only tries to encode the interaction signal into both the users' and items' embeddings, our model focuses on the social information propagation in the pre-training stage. Furthermore, to improve upon the Diffnet++ model <cit.>, which only incorporates the plain social relation links, our model pre-computes the cosine similarity between each pair of users' social relation vectors <cit.>[We use the cosine similarity because it can be efficiently computed for our sparse user-user matrix. In addition, the similarities between social relation vectors do not require estimating the magnitude; hence other similarity measures, e.g. the dot product or the Euclidean similarities, may not be appropriate.]. These vectors constitute one-dimensional binarised vectors indicating the social links between users. Therefore, building on this advanced user-user social similarity graph S_sim, the GNN function can better group similar users. Given the user-user social graph S∈{0,1}^M× M, we can pre-compute the social similarity graph S_sim as follows:
S_sim = (S·S^T/√(diag(S·S^T)))^T·1/√(diag(S·S^T)),
where T represents the transpose of a matrix and diag(·) computes the diagonal matrix of the corresponding matrix. Entries of S_sim are set to 0 if the corresponding user has no social relationships in S. Given the similarity graph S_sim, we can derive its corresponding Laplacian matrix ℒ_sim as follows:
ℒ_sim = diag(S_sim)^-1/2·S_sim·diag(S_sim)^1/2.
Next, using the Laplacian matrix ℒ_sim, we can present the embedding updating function of our proposed SGP model as follows:
E_u^(l)= ℒ_sim·E_u^(l-1).
Starting from a randomly initialised E^0_u, we stack 3 layers of the GNN given in Equation (<ref>) and update the embeddings of each user correspondingly. Following the LightGCN model <cit.>, we discard the redundant neural components from the variant of GCN used in NGCF <cit.>. Indeed, the self-connection setup, i.e. adding the dot product of an embedding with itself into Equation (<ref>), was initially proposed in <cit.> to keep each node's original information and to avoid it being possibly overloaded with information from the node's neighbours. However, this self-connection was later demonstrated in <cit.> to bring no benefit to the recommendation performance; instead, it reduces the training efficiency. Hence, we choose to also remove this redundant part from our embedding updating function, following the existing work <cit.>. We follow the GNN technique proposed in the aforementioned LightGCN model to update and aggregate the users' embeddings. However, different from LightGCN, which uses the GNN to incorporate the interaction data, we only incorporate the social information propagation. As in other graph-based recommendation models <cit.>, we keep the interaction data as the ground truth for supervising the pre-training of our model.
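For illustration, the pre-training propagation can be sketched as follows with dense NumPy arrays (in practice, sparse matrices would be used for the M×M social graph); note that we adopt the standard symmetric degree normalisation D^-1/2 S_sim D^-1/2 as our reading of the Laplacian in Equation (<ref>):

```python
import numpy as np

def social_propagate(S, E_u, n_layers=3, eps=1e-12):
    """Sketch of the pre-training propagation. S is the (M, M) binary
    user-user social matrix and E_u the (M, d) initial embeddings E_u^(0);
    returns E_u^(L) after L layers of social aggregation."""
    G = S @ S.T                                   # co-neighbour counts
    norms = np.sqrt(np.diag(G)) + eps
    S_sim = G / np.outer(norms, norms)            # cosine similarity of the rows of S
    S_sim[S.sum(axis=1) == 0, :] = 0.0            # users without social relations
    deg = S_sim.sum(axis=1) + eps
    L_sim = S_sim / np.sqrt(np.outer(deg, deg))   # symmetric normalisation (assumed)
    for _ in range(n_layers):                     # E^(l) = L_sim E^(l-1)
        E_u = L_sim @ E_u                         # no self-connection term
    return E_u
```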
At each training epoch, Equation (<ref>) is invoked to perform the social aggregation, after which we use the binary cross-entropy (BCE) loss <cit.> as the objective function:
ζ_BCE=- ∑_(u,i)∈𝐑[ y_ui·log(ŷ_ui)+(1-y_ui) ·log(1-ŷ_ui)]+λ‖Θ‖^2,
where y_ui is the observed interaction, ŷ_ui is the predicted interaction, which is the dot product of the item embedding and the user embedding obtained from Equation (<ref>), while Θ = {{E_u^l,E_i^l,W^l}_l=1^3} denotes all the trainable model parameters and λ controls the L_2 regularisation strength to prevent overfitting. As a result, our pre-trained embeddings E_pre can be obtained by minimising the objective function in Equation (<ref>). For a better understanding, the training framework of our pre-training stage is summarised in Algorithm <ref>.
§.§ The Fine-tuning Stage
Having detailed the pre-training stage, we first present the information distillation stage, where we describe how to use the Gaussian Mixture Model to distil hierarchical relations from the pre-trained embeddings E_pre. Afterwards, we demonstrate how to use the reconstructed embeddings E_recon for the final recommendations. As in the previous section, we summarise the training framework of the fine-tuning stage in Algorithm <ref>.
§.§.§ Information Distillation Stage
By using Equation (<ref>) for the pre-training and Equation (<ref>) for the social aggregation, we aim to encode the social information into our pre-trained embeddings E_pre. The latter constitutes the embedding matrix obtained from optimising Equation (<ref>). However, it is not obvious how these pre-trained embeddings should be reloaded. Since we have already used the interaction data as the ground truth during the pre-training stage, directly reloading these embeddings is likely to cause either overfitting or a marginal improvement. Therefore, in the information distillation stage, we propose to distil information from these pre-trained embeddings. Next, we concatenate this distilled information at the tail of a randomly initialised embedding to add more generalisation to the final embeddings <cit.>. Before extracting useful information from the pre-trained embeddings, we propose to model each user's or item's latent vector as a multi-Gaussian distribution. This is consistent with the implementation details of existing works <cit.>. Indeed, the matrix factorisation technique can be interpreted as the search for the best-fitting distribution of the users and items in a latent space. This is why, in most cases <cit.>, the embeddings are initialised with a Gaussian distribution with a given mean (μ) and standard deviation (σ), e.g. μ = 0 and σ^2 = 0.1. As discussed in Section <ref>, we expect the pre-training stage to capture hierarchical relations between users and items <cit.>. However, a standard Gaussian distribution cannot represent these complex relations learned from the pre-trained embeddings E_pre, because its low representational power <cit.> limits its ability to convey the users' different preferences and their complex social relations, thereby potentially leading to an information loss. Hence, a mixture model is needed to leverage the possible multivariate Gaussian distributions learned from the pre-training stage and to avoid any possible information loss. With the aforementioned proposal, we employ a well-developed statistical analysis tool, the Gaussian Mixture Model (GMM) <cit.>, which can effectively decompose a multivariate Gaussian distribution into multiple (i.e. k) Gaussian distributions, where k is pre-defined.
Specifically, the GMM assumes that an observed data point x can be represented as a weighted sum of k Gaussian densities <cit.>, calculated as follows:
p(x|λ)=∑_i=1^kw_i · g(𝐱|μ_i, Σ_i),
where x is a continuous-valued feature vector, λ represents all the learnable parameters of the GMM, i.e. λ = { w_i, μ_i, Σ_i}, w is a k-dimensional vector containing the weight of each Gaussian density, with the weights summing to one, i.e. ∑_i=1^k w_i = 1, while μ_i and Σ_i denote the mean vector and the covariance matrix, respectively. Given the feature vector x conditioned on the mean vector μ_i and the covariance matrix Σ_i, we can calculate the corresponding Gaussian density as follows:
g(𝐱|μ_i, Σ_i)=exp{-1/2(𝐱-μ_i)^'Σ_i^-1(𝐱-μ_i)}/(2 π)^D / 2|Σ_i|^1 / 2,
where D is the dimension of the vector x. In particular, the numerator is related to the Mahalanobis distance <cit.> (i.e. √((𝐱-μ_i)^'Σ_i^-1(𝐱-μ_i))), which represents the distance between x and μ_i. Since we assume that all the users' and items' embeddings correspond to combinations of Gaussian densities, we can extract meaningful information from these embeddings by analysing each pair (μ_i, Σ_i), as each pair represents one of a user's most important preferences or one of an item's most important characteristics. In order to extract such preferences and characteristics, we decompose the embeddings obtained from the pre-training stage in Section <ref> as follows:
p(e_u|λ) = ∑_i=1^kw _i· g(𝐞_𝐮|μ_i, Σ_i),
where e_u is the pre-trained embedding of the user u from E_pre, and k is the number of Gaussian distributions, as defined above in Equation (<ref>), which determines how many Gaussian densities should be obtained by decomposing the pre-trained embedding. A similar equation can also be applied to an item's embedding vector. Therefore, given the overall pre-trained embeddings E_pre, we calculate all the pairs of μ_i and Σ_i as follows:
μ, Σ = GMM(E_pre, k),
where μ and Σ are two matrices consisting of all the (μ_i, Σ_i) pairs for each user and item, and the GMM function is optimised by the EM algorithm. After applying the GMM to the pre-trained embeddings, we obtain k pairs of μ and Σ for each user and item, from which we have enough statistical information to reconstruct meaningful embeddings containing the social information encoded at the pre-training stage. For each user or item, we use the obtained (μ, Σ) pairs to generate k Gaussian distributions, from each of which we randomly sample the same number of elements to reconstruct the socially-aware embeddings. After that, we obtain the reconstructed embeddings E_recon as follows:
E_recon = 𝒩(μ, Σ),
where μ and Σ are both obtained from Equation (<ref>).
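A compact sketch of this distillation step, using scikit-learn's EM-based GaussianMixture as the GMM implementation, is shown below; sampling a full vector from each fitted component and keeping an equal number of dimensions per component is our reading of the reconstruction step, and the function name is our own:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def distil_embeddings(E_pre, k, dims_per_component, seed=0):
    """Fit a k-component GMM to the pre-trained embeddings with EM, then
    rebuild socially-aware embeddings by sampling the same number of
    elements from each fitted Gaussian (one reading of E_recon = N(mu, Sigma))."""
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=seed).fit(E_pre)
    rng = np.random.default_rng(seed)
    parts = []
    for mu, cov in zip(gmm.means_, gmm.covariances_):
        draw = rng.multivariate_normal(mu, cov, size=E_pre.shape[0])
        parts.append(draw[:, :dims_per_component])   # equal share per component
    return np.concatenate(parts, axis=1)              # E_recon
```

The returned E_recon would then be concatenated with randomly initialised embeddings for the fine-tuning described next.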
§.§.§ Model Training and Recommendation
After obtaining the reconstructed embeddings E_recon, we again use the BCE loss function <cit.> to train the model; this time, however, the model is initialised with E_recon instead of a random matrix. We concatenate the reconstructed embeddings E_recon with the randomly initialised embeddings to represent the users' preferences and the items' characteristics. Therefore, the model is less likely to fall into the same relative optimised solution as in the pre-training stage. To recommend items of interest to a user, we compute the dot product of the concatenated trained embeddings of this user with the trained embeddings of all the items in the corpus. Hence, our proposed Social-aware Gaussian Pre-trained model (SGP) is devised to predict the interaction ŷ_ui between user u and item i as follows:
ŷ_ui = (e_u-recon∥e_u-rand) ⊙(e_i-recon∥e_i-rand),
where ∥ denotes the concatenation and ⊙ is the dot product. The obtained list of scores is then used to identify the items that a given user will be interested in interacting with.
§.§ Discussion
Pre-training embeddings for the recommendation task has already been investigated in the literature. For example, to avoid poorly performing local minima, both NCF <cit.> and CMN <cit.> apply the Generalised Matrix Factorisation (GMF) as a pre-training model to initialise the embeddings of users and items. Specifically, to obtain the representations of users and items, a GMF model is trained using the weighted output of the embedding dot product:
R̂_u,v=Wσ(U_u⊙V_v),
where ⊙ denotes the element-wise product of vectors, σ is an activation function and W is the trainable parameter. However, the existing models cannot leverage social relations during the pre-training stage. Hence, they may not improve the satisfaction of cold-start users. In comparison, our SGP model can leverage social relations using a graph-based pre-training method, such that cold-start users can benefit from friends with more interactions. In addition, SGP can particularly tackle the extreme cold-start problem by inferring the preferences of those extreme cold-start users as a combination of the preferences of their socially related friends.
§.§ Time Complexity Analysis
For the pre-training stage, we need to pre-compute the social similarity graph S_sim (see Equation (<ref>)) and the Laplacian matrix (see Equation (<ref>)), whose complexities are O(M^3) and O(M^2), respectively. In addition, the complexity of the embedding updating function (see Equation (<ref>)) is O(M^2 × |e|), where |e| is the embedding dimension. For the fine-tuning stage, we need to factorise E_pre using the GMM, which has a complexity of O(t × k × L × |e|), where t is the number of iterations for optimising the GMM function, k is the number of Gaussian distributions and L is the number of training samples. Finally, the recommendation step has a complexity of O(M × N), where M and N are the numbers of users and items, respectively. Although the pre-training stage is more time-consuming than the fine-tuning stage, it produces reusable pre-trained embeddings. For a new user who joins the system through a friend's recommendation, the initial embedding can be generated based on the social relations. Since the pre-training stage does not need to be repeated as often as the fine-tuning stage, its time complexity is not a major concern for a practical deployment.
§ DATASETS AND EXPERIMENTAL SETUP
To evaluate our proposed SGP model, we perform experiments on three public datasets: Librarything[http://cseweb.ucsd.edu/∼jmcauley/datasets.html], Epinions^3 and Yelp[https://www.yelp.com/dataset]. These datasets are widely used in the recommender systems community.
Librarything is a book review dataset, Epinions is a general customer review dataset, while Yelp is a venue check-in dataset. Table <ref> provides the statistics of the three used datasets. In the following, we aim to address the following research questions:

RQ1. Can we use the GNN model to leverage the social information and generate pre-trained embeddings for both users and items, thereby improving the overall recommendation performance?

RQ2. Can we employ the Gaussian Mixture Model to distil information from the pre-trained embeddings and further enhance the recommendation performance?

RQ3. Does our SGP model help in alleviating the cold-start problem, especially for those extreme cold-start users?

RQ4. What is the impact of using the social relations on the pre-training stage of our SGP model?

RQ5. How do the embedding dimension and different ranking cut-offs affect the recommendation performance of the pre-trained recommenders?

Below, we describe the 16 baselines used to evaluate the performance of SGP, the evaluation methodology, and the corresponding experimental setup.

§.§ Baselines

We compare the performance of our SGP model to classical strong non-neural baselines, as suggested by <cit.>, as well as existing state-of-the-art neural models:

* MF <cit.>. This is the conventional matrix factorisation model, which can be optimised by the Bayesian personalised ranking (BPR <cit.>) or the BCE losses. The regularisation includes the user bias, the item bias and the global bias.
* SBPR <cit.>. SBPR is a classic model, which adds a social regularisation to the matrix factorisation method.
* UserKNN and ItemKNN <cit.>. Two neighbourhood-based models using collaborative user-user or item-item similarities.
* SLIM <cit.>. This is an effective and efficient linear model with a sparse aggregation method.
* NCF <cit.>. The method is a CF model, which uses a generalised matrix factorisation method to generate pre-trained embeddings. An MLP module is also used in NCF to capture the nonlinear features from the interactions.
* NGCF <cit.>. NGCF is devised to employ a multi-layer GCN on top of the user-item interaction graph to propagate the collaborative signal across multi-hop user-item neighbourhoods.
* LightGCN <cit.>. Building on NGCF, LightGCN has fewer redundant neural components compared with the original NGCF model, which makes it more efficient and effective.
* UltraGCN <cit.>. UltraGCN is a more efficient GNN-based recommender. It gains higher efficiency than LightGCN by skipping the message passing via a constraint loss.
* SGL <cit.>. SGL leverages the self-supervised learning method to generate augmented views for nodes to enhance the model's robustness and accuracy.
* VAE-CF <cit.>. A state-of-the-art variational autoencoder-based collaborative filtering recommender system.
* GraphRec <cit.>. This is the first GNN-based social recommendation method, which models both user-item and user-user interactions.
* Diffnet++ <cit.>. This method is a graph-based deep learning recommender system, which uses the additional social links to enrich the user-item bipartite graph and improve the recommendation performance.
* MPSR <cit.>. This is a recent model that uses GNN to construct hierarchical user preferences and assign friends' influences with different levels of trust from different perspectives.
* S^2-MHCN <cit.>. This is a self-supervised recommender system, which uses a hypergraph neural network to leverage the social relations between users.
* SCDSR <cit.>.
SCDSR is another self-supervised graph recommendation model that builds a heterogeneous graph using the social and interaction domains.

§.§ Evaluation Methodology

Following a common setup <cit.>, we use a leave-one-out evaluation strategy to split the interactions of each dataset into training, validation and testing sets. To speed up the evaluation, we adopt the sampled metrics <cit.>, which randomly sample a small set of the non-interacted items as negative items (rather than taking all the non-interacted items as negatives) for the validation and testing sets, and evaluate the metric performance on this smaller set. Here, we sample 100 negative items for each user in the testing sets for evaluation <cit.>. However, different from prior works <cit.> that only use one oracle testing set per dataset with the sampled negative items, we construct 10 different testing sets with different sampled negative items for each dataset using different random seeds, in order to reduce the evaluation bias on some specific testing negatives <cit.>. Hence, the reported performance of each run is based on the average of the 10 testing sets[We have also employed an evaluation methodology where any potential bias is avoided. In this evaluation, we make use of the full set of negative items. We observed similar experimental results and conclusions to those shown in Table <ref>. Hence, our use of 10 different testing sets not only enhances evaluation efficiency but also sufficiently mitigates against any potential evaluation bias stemming from the negative item sampling.]. In order to answer RQ1, we compare our SGP model with all baselines in terms of Normalised Discounted Cumulative Gain@10 (NDCG), Recall@10 and Mean Average Precision@10 (MAP). We also compare the SGP model with both its pre-training and fine-tuning stages to a variant where only the pre-training stage is used (called SGP (Pre-training)), so as to address RQ2. All models are implemented with PyTorch using the Beta-RecSys open source framework <cit.>. We use the Adam <cit.> optimiser for the optimisation of all the neural network models. To tune all hyper-parameters, we apply a grid search on the validation set, where the learning rate is tuned in { 10^-2,10^-3,10^-4}, the latent dimension in { 32,64,128 } and the L_2 normalisation in { 10^-2,...,10^-5}. The node dropout technique is used in the NGCF, LightGCN, UltraGCN, MPSR and GraphRec models, as well as in our proposed SGP model. The dropout ratios vary amongst { 0.3,0.4,...,0.8 }, as suggested in <cit.>. To control how many Gaussian distributions are extracted from the pre-trained embeddings, we vary the number of pre-defined multivariate Gaussian distributions k in Equation (<ref>) in { 2,4,6,8,10 }. Note that, due to the limit of the latent dimension, further increases in the k value might result in less data extracted from each pre-trained embedding. For each k value, we run our SGP model 50 times with different random seeds and we plot the results on the three datasets as a box plot, where we illustrate not only the mean values but also the variance across different random seeds. For a fair comparison with <cit.>, we set the number of neural network layers of the models including NCF, NGCF, Diffnet++, LightGCN, UltraGCN, SGL, GraphRec and SGP to three.
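To make the evaluation protocol described above concrete, the following Python sketch builds 10 testing sets with different random seeds, each sampling 100 non-interacted items per user as negatives, and averages NDCG@10 over them. It is a schematic of the protocol, not the Beta-RecSys implementation; the data structures (scores, test_items, interacted) are hypothetical, and each user's candidate pool is assumed to contain at least 100 non-interacted items.

import numpy as np

def sampled_ndcg_at_10(scores_u, test_item, negatives):
    """NDCG@10 for one user: rank the single held-out test item
    against 100 sampled negatives (leave-one-out protocol)."""
    candidates = np.concatenate(([test_item], negatives))
    ranked = candidates[np.argsort(-scores_u[candidates])]
    pos = np.where(ranked[:10] == test_item)[0]
    return 1.0 / np.log2(pos[0] + 2) if pos.size else 0.0

def evaluate_over_seeds(scores, test_items, all_items, interacted, n_seeds=10):
    """Average NDCG@10 over n_seeds testing sets, each built by sampling
    100 non-interacted items per user with a different random seed."""
    per_seed = []
    for seed in range(n_seeds):
        rng = np.random.default_rng(seed)
        ndcgs = []
        for u, test_item in test_items.items():
            pool = np.setdiff1d(all_items, list(interacted[u]) + [test_item])
            negatives = rng.choice(pool, size=100, replace=False)
            ndcgs.append(sampled_ndcg_at_10(scores[u], test_item, negatives))
        per_seed.append(np.mean(ndcgs))
    return float(np.mean(per_seed))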
For the non-neural models, namely SBPR, MF, UserKNN, ItemKNN and SLIM, we tune them within the same range of learning rates and L_2 normalisations used for the neural baselines, while for the rest of the parameters we follow the same implementation details as suggested in <cit.>. To answer RQ3, we further examine the ability of our proposed SGP model to alleviate the cold-start problem, especially for those users who have newly registered on the sites. In particular, we first compare the performances of our SGP model to the best-performing baseline across different groups of users who have less than {5, 10, 15, 20} interactions, respectively. Second, to simulate the extreme cold-start situation when a user starts using an app that was suggested by his/her friends, we select those users who have social relations but less than five interactions. We define these users as the extreme cold-start[We sampled users with less than 5 interactions for the simulation because at least 3 interactions are needed for the train/valid/test set, and in order to keep enough users in the evaluation pool.] users and we remove all their interactions, so that the situation of newly registered users (no historical interaction) is simulated. Hence, through this defined extreme cold-start setup, we aim to recommend relevant items to those newly registered users solely based on their social relations. In order to tackle RQ4, we conduct an ablation study to determine the effect of the social relations in our proposed SGP model and the Diffnet++ model. In this ablation study, we randomly drop {20%,40%,60%,80%} of social relations from both models and measure the resulting recommendation performance across the three used datasets, in order to determine if the performance improvements are indeed gained from the social-aware pre-training. To answer RQ5, we provide a detailed analysis on the largest dataset (i.e. Yelp) to evaluate the performances of SGP and LightGCN with different embedding dimensions and different cut-offs for the recommended items. Additionally, in order to directly observe the effect of social relations in the latent space, we use the t-distributed stochastic neighbour embedding (t-SNE) technique <cit.> to visualise the final embeddings obtained by our SGP model, in comparison to the embeddings obtained by a classic MF model.

§ RESULTS ANALYSIS

In this section, we report the experimental results and answer the five research questions in turn.

§.§ RQ1: Pre-trained Recommendation Performances

In order to answer RQ1, we use Table <ref> to report the overall performance of our SGP model in comparison to 13 other baselines and the pre-training stage (the first stage only of the SGP model) in terms of 3 different metrics, namely NDCG, Recall and MAP. Comparing the performance of SGP (Pre-training) with the other baselines, we can conclude that the pre-training stage itself cannot outperform all baselines. However, through the information distillation stage, where we use randomly initialised embeddings concatenated with embeddings reconstructed from the multivariate Gaussian distributions extracted from the pre-trained embeddings, our SGP model achieves the best performance, constantly and significantly outperforming all other baselines in terms of all metrics on the three used datasets. These results demonstrate that solely employing the GNN model with the available social relations is not sufficient to enhance the recommendation performance.
This is likely because the social information should not be considered equally to the interaction information, since the interaction information constitutes the actual ground truth for inferring the users' main preferences and next items of interest. By reusing these pre-trained embeddings concatenated with randomly initialised embeddings, our SGP model can markedly and significantly enhance the recommendation performance. It is of note that the performances of all the evaluated models on the Epinions dataset are lower than on the Librarything and Yelp datasets. However, these performances are in line with those reported in the literature (e.g. NDCG@10 ≈ 0.3 on the Librarything dataset <cit.>; NDCG@10 < 0.1 on the Epinions dataset <cit.>). These differences may be explained by the differing densities of user-item interactions in the used datasets (see Table <ref>). Additionally, by comparing the graph-based models, we observe that the more recent models such as SGL and UltraGCN do sometimes outperform the LightGCN model. However, LightGCN can still outperform SGL and UltraGCN most of the time, as shown in Table <ref>. This observation is likely related to our used data split, where we use 10 different testing sets to avoid the oracle testing set. In other words, our results suggest that the recent graph-based models have not achieved consistent and robust improvements over the LightGCN model. In addition to those graph-based baseline models, our SGP model also significantly outperforms other social-aware recommender systems, including S^2-MHCN and SCDSR. This indicates that a social graph pre-training technique is more effective than the self-supervised learning technique on the recommendation task. Overall, in answer to RQ1, we can conclude that using the GNN model to leverage the social relations and generate pre-trained embeddings can improve the recommendation performance compared with SGP (Pre-training) and 16 competitive (neural and non-neural) baselines.

§.§ RQ2: GMM Information Distillation

To address RQ2, we show a box plot of our SGP model on the 3 used datasets across different numbers of pre-defined multivariate Gaussian distributions, k, in terms of NDCG@10, where for each k value the model is trained and evaluated 50 times with different random seeds. In Figure <ref>, the max and min values for each set of experiments are shown as two bars at the top and bottom of each box, respectively. The mean value of each set of experiments is shown as an orange line lying in the middle of each box. We also report the best mean for each dataset in Table <ref> (i.e. k=6 for the Epinions dataset and k=8 for the Librarything and Yelp datasets). Figure <ref> shows that our SGP model only achieves better performances when k is larger than 4, whereas for all datasets when k=2 or 4 the SGP model has a lower performance than several baselines. This can be explained by the fact that the users' preferences are hard to estimate with simple distributions. Indeed, the users' preferences are usually formed by combinations of distributions, which cannot be easily factorised with 2 to 4 factors. Therefore, a small number of Gaussian distributions is not sufficient to represent the users' preferences. However, we also observe a performance degradation when k is too large. This is likely because the latent dimension has a limited size (usually up to a few hundred), while each reconstructed embedding is a sample from the multiple extracted Gaussian distributions.
Therefore, when k becomes larger, the elements sampled from each distribution become fewer, thereby leading to a loss in the accuracy of the representation of their intended original factors. For example, when the latent dimension is 100, if k=10 is applied, only 10 elements are sampled from each Gaussian distribution. Moreover, when k is larger, the performance of our SGP model is relatively stable. This demonstrates that our SGP model is effective in distilling information from the pre-trained embeddings given that enough Gaussian distributions are employed, i.e. when k is sufficiently large, the model stabilises and shows less variance. In addition, we observe that the performance of SGP in terms of NDCG@10 varies across different datasets. For example, when k=4, SGP is relatively less effective than the other configurations on the Librarything dataset, while SGP is dramatically improved when k ≥ 6 on the Epinions dataset. As shown in Table <ref>, different datasets have different social densities, leading to different structures of social networks. Therefore, these structures of the social networks can affect the usefulness of the social relations, thereby also affecting the performance of social-aware recommenders such as SGP. Overall, in answer to RQ2, we can conclude that the GMM can be used to effectively distil information from the pre-trained embeddings. We also suggest preferable k values, which can be used to enhance the recommendation performance.

§.§ RQ3: Cold-start Performances

To address RQ3, in Table <ref>, we examine the performance of our SGP model for different groups of users who have less than {5, 10, 15, 20} interactions, respectively, in comparison to the best baseline, LightGCN, in terms of NDCG@10. From the table, we note that our SGP model overall significantly outperforms LightGCN, while users with less than 10 interactions particularly benefit from our model compared with the other groups of users. Overall, it is reasonable to observe that cold-start users benefit more from our model because, when their interaction information is too sparse, incorporating more social information will likely enable the SGP model to predict their possible unknown preferences. However, users with sufficient interactions tend to have their preferences accurately captured by the recommender systems; therefore, adding more social relations may not be beneficial for them. Indeed, from Table <ref>, we observe that there is a clear decrease in the reported percentage improvement when we consider the group of users who have less than 10 interactions in comparison to those users who have more than 15 interactions. Table <ref> shows a comparison of our SGP model with a random recommender and a popularity-based recommender for the extreme cold-start case. The random and popularity-based recommenders are two commonly used baselines when no interaction data is available. Here, we aim to simulate the situation when users register with an app or a web service following the suggestions of their friends. In this case, the model only knows about the users' friends, while it does not have access to the historical interactions. Instead of making random recommendations or only recommending popular items, our SGP model generates embeddings by constructing multivariate Gaussian distributions through evenly sampling elements from their friends' embeddings, which are also produced by the pre-training stage of our SGP model.
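A minimal sketch of this extreme cold-start construction is given below. It assumes that a new user's embedding is assembled by evenly sampling elements from the pre-trained embeddings of his/her friends; the function name and array layout are our own illustration rather than the exact SGP procedure, and the user is assumed to have at least one friend with a pre-trained embedding.

import numpy as np

def cold_start_embedding(friend_embeddings: np.ndarray, d: int, seed: int = 0) -> np.ndarray:
    """Build an embedding for a newly registered user with no interactions,
    by evenly sampling elements from the friends' pre-trained embeddings.
    friend_embeddings has shape (n_friends, d)."""
    rng = np.random.default_rng(seed)
    n_friends = friend_embeddings.shape[0]
    per_friend = d // n_friends
    parts = [rng.choice(friend_embeddings[f], size=per_friend, replace=False)
             for f in range(n_friends)]
    vec = np.concatenate(parts) if parts else np.empty(0)
    if vec.size < d:   # pad from a randomly chosen friend if d is not divisible
        extra = rng.choice(friend_embeddings[rng.integers(n_friends)],
                           size=d - vec.size, replace=False)
        vec = np.concatenate([vec, extra])
    return vec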
By comparing SGP (Pre-training), i.e. our proposed SGP model with only its first pre-training stage, against both baselines and the full SGP model for the extreme cold-start users, we find that SGP (Pre-training) significantly outperforms both the random and the popularity-based recommenders on the Librarything and Yelp datasets, while on the Epinions dataset it is even comparable to the results of the full SGP model. On the other hand, it is reasonable and natural that, when no interactions are observed and no training is conducted, SGP (Pre-training) is generally far worse than the full SGP model. Overall, in answer to RQ3, we can conclude that our proposed method SGP is effective at tackling the cold-start problem and is particularly useful in alleviating the practical extreme cold-start issue.

§.§ RQ4: Impact of Social Relations

In order to determine the effect of social relations and to answer RQ4, we conduct an ablation study where we randomly drop out different proportions of social relations. Figure <ref> shows how the performances of our SGP model and the Diffnet++ model are affected when different proportions of social relations are randomly masked out. From this figure, we can clearly observe a trend: the more social relations are masked during the pre-training, the more the recommendation performances of SGP and Diffnet++ are degraded across the three datasets. This trend reveals that the social relations do indeed help the SGP model to achieve a better pre-training, thereby enhancing the final recommendation performance. However, we also observe some variance in the performance on the Epinions dataset, compared with the consistent decline of performance on the Librarything and Yelp datasets. This is because the raw data of the Epinions dataset provides bidirectional social relations, i.e. both the `trust' and `trustedby' relations are given. Since our current SGP model cannot distinguish between these bidirectional relations, for the sake of simplicity, we unify these two types of relations as one unidirectional social network to fit our implementation. Although unifying the bidirectional relations does bring an overall performance improvement to SGP over other baselines, this unifying method itself is not optimal and can possibly induce noise, because the social influences are not bidirectionally equal. By comparing the Diffnet++ model with SGP across the three datasets over different dropout ratios, we observe consistent performance improvements from our proposed model, which further justifies our previous results. Therefore, from our conducted ablation study, in answer to RQ4, we can conclude that using the social relations in the pre-training stage can help enhance the recommendation performance of our SGP model. Furthermore, we postulate that the performance can be further enhanced by enabling our current SGP model to distinguish among bidirectional social relations. We leave the adequate integration of bidirectional social relations into our SGP model to future work.

§.§ RQ5: Hyperparameter Analysis

In this section, we aim to answer RQ5 by examining how the performance of SGP and that of the second-best baseline, LightGCN, are affected by different recommendation cut-offs and embedding dimensions. First, in Figure <ref>, we plot the performance when the recommendation lists are generated with different cut-offs. From this figure, we can observe that our SGP consistently outperforms LightGCN across different cut-offs.
Specifically, SGP mainly surpasses LightGCN for larger cut-offs (i.e. when cut-offs ≥ 10). This is due to the fact that we consider social relations as side information, and they are only leveraged during the pre-training stage. As a result, those items that are easy to predict will be preserved as top-ranked items, while social relations play an important role for our SGP model in obtaining a higher effectiveness at deeper rank cut-offs. For example, in the venue recommendation scenario, users may visit venues suggested by their friends when travelling to different countries. When these venues are in the test set, a general recommender relying only on the interaction information is unlikely to rank these venues at the top of the ranking list for such users. Our proposed SGP model is likely to help in this case if the users' friends have visited/liked these venues before. Specifically, SGP gives higher scores to those venues visited/recommended by each user's friends. As a result, those venues lowly ranked by other recommenders will have a higher chance to appear in the top-10/top-20/top-50 ranking lists, as shown in Figure <ref>. Here, we must emphasise the difference between the results shown in Figure <ref> and Table <ref> to avoid possible confusion. In Table <ref>, we have reported performances across different user groups defined by the number of historical interactions of each user. This is different from what we plot in this section, which is based on different cut-offs. Figure <ref> shows how the NDCG@10 measure is affected when different embedding dimensions are applied to SGP and LightGCN. This figure demonstrates that our SGP can bring consistent improvements over the baseline for different embedding dimensions. To conclude on RQ5, our SGP model can constantly outperform the strong LightGCN baseline when different hyperparameters are applied.

§.§ The Embedding Visualisation

In this section, we aim to analyse how our SGP model affects the users' embeddings in the latent space, compared to the embeddings obtained from a classic MF model that does not encapsulate social relations. We visualise all users' embeddings of the Librarything dataset[The Librarything dataset is a less sparse dataset with a higher or equal social density compared to the Epinions and Yelp datasets; therefore, for illustration purposes, we have more users to choose from. Note that we nevertheless observe similar trends on the Epinions and Yelp datasets.], trained by the SGP model, in comparison with embeddings trained by the MF model[The MF model is chosen because it is also an embedding-based method and is not socially aware, and therefore can offer us a clear comparison between a social-aware model and a non-social model.], using the t-distributed stochastic neighbour embedding (t-SNE) approach. Figure <ref>(a) shows the t-SNE for MF, while Figure <ref>(b) provides the t-SNE for SGP. In both plots, we highlight three anchor users (represented as yellow/green/red dots), along with their corresponding friends (triangles) and their target items (stars). Both the green and yellow anchor users are fortunate to have their friends close to their target items; hence, these two anchor users are pulled closer to their target items, as shown in Figure <ref>(b). In contrast, in Figure <ref>(a), these two anchor users are clustered far apart from their target items by MF, due to the fact that social relations are not considered by MF.
For the red user's case, he/she has a dissimilar friend, who is located relatively far away from the target item and his/her other friends. Our SGP model can still handle this case by relocating the red user to the space between this dissimilar friend and two other similar friends, thereby bringing this user closer to the target item. Through these three example users, we have illustrated different situations where users might possibly benefit from our SGP model, thereby improving the recommendation performance, as observed in the reported results across the three datasets.

§ CONCLUSIONS

In this paper, we explored how to leverage a GNN model to generate pre-trained embeddings using the existing social relations among users. Next, we used the Gaussian Mixture Model to carefully extract prior knowledge contained in those pre-trained embeddings for the subsequent fine-tuning and recommendations. Our proposed Social-aware Gaussian Pre-trained (SGP) model can significantly outperform competitive baselines, as demonstrated by the extensive experiments conducted on three public datasets. Furthermore, a detailed user analysis showed that, by incorporating the social relations, users who have less than 10 interactions particularly benefit from our SGP model. Moreover, we showed that our SGP model can practically serve extreme cold-start users with reasonable recommendations when it only knows the preferences of these newly registered users' friends. Finally, we used an ablation study to examine the effect of social relations on our proposed model and a hyperparameter analysis to study the effects of different cut-offs and embedding dimensions, followed by the visualisation of the generated embeddings to further illustrate how our proposed model could benefit recommendations. As future work, we aim to investigate how to leverage the bidirectional nature of social relations so that we can alleviate the issue of `trust'/`trustedby' relations, as mentioned in Section <ref>. In addition, we aim to leverage more effective fine-tuning methods for SGP, such as the contrastive graph learning method <cit.>, instead of the plain training method.

| http://arxiv.org/abs/2311.15790v1 | {
"authors": [
"Siwei Liu",
"Xi Wang",
"Craig Macdonald",
"Iadh Ounis"
],
"categories": [
"cs.IR",
"cs.AI",
"68P20",
"H.3.3"
],
"primary_category": "cs.IR",
"published": "20231127130433",
"title": "A Social-aware Gaussian Pre-trained Model for Effective Cold-start Recommendation"
} |
On the nature of the ultraluminous X-ray source Holmberg II X-1

F. Barra^1 ([email protected]), C. Pinto^2, M. Middleton^3, T. Di Salvo^1, D. J. Walton^4, A. Gúrpide^3 and T. P. Roberts^5

^1 Università degli Studi di Palermo, Dipartimento di Fisica e Chimica, via Archirafi 36, I-90123 Palermo, Italy
^2 INAF/IASF Palermo, via Ugo La Malfa 153, I-90146 Palermo, Italy
^3 Department of Physics & Astronomy, University of Southampton, Southampton SO17 1BJ, UK
^4 Centre for Astrophysics Research, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK
^5 Centre for Extragalactic Astronomy & Dept of Physics, Durham University, South Road, Durham DH1 3LE, UK

Received XXX; accepted XXX

We present a comprehensive spectral analysis of the ultraluminous X-ray source Holmberg II X-1 using broadband and high-resolution X-ray spectra taken with the XMM-Newton satellite over a period of 19 years, benefiting from a recent campaign. We tested several models for the broadband spectra, among which a double thermal component provided a reasonable description for the continuum between 0.3-10 keV and enabled us to constrain the properties of the accretion disc. The luminosity-temperature trends of the inner and outer disc components broadly agree with the expectations for a thin disc, although the exact values of the slopes are slightly sensitive to the adopted model. However, all tested models show L-T trends which deviate from a power law above a bolometric luminosity of about 5 × 10^39 erg/s, particularly for the hot thermal component associated with the inner accretion flow. Assuming that such deviations are due to the accretion rate exceeding its Eddington limit or, most likely, the super-critical rate, a compact object with a mass of 16-36 M_⊙, i.e. a stellar-mass black hole, is inferred. The time-averaged (2021) high-resolution spectra present narrow emission lines at 1 keV, primarily from Ne IX-X, and a very strong one at 0.5 keV from N VII, which indicate Ne-N-rich gas with non-Solar abundances. This favours a nitrogen-rich donor star, such as a blue/red supergiant, which has escaped from its native stellar cluster characterised by a low-metallicity environment.

§ INTRODUCTION

Ultraluminous X-ray sources (ULXs) are off-nucleus, point-like, extragalactic sources with X-ray luminosities above L_X > 10^39 erg/s (for recent reviews, see ), brighter than the Eddington limit for a black hole of 10 M_⊙, resulting from accretion onto a compact object. The luminosity of a ULX can reach 10^41 erg/s in the X-ray band (0.3-10 keV) alone, leading to many conjectures on the nature of the compact object. ULXs, in fact, were theorised as BHs with a mass greater than 10 and up to 10^5 M_⊙ (intermediate-mass black holes, IMBHs, ), with ESO 243-49 HLX-1 as the best candidate (). However, the discovery of X-ray pulsations in a fraction of the population of ULXs revealed that at least some ULXs are powered by neutron stars (NSs). Notable examples are M82 X-2 (), NGC 5907 ULX-1 () and NGC 7793 P13 (). The number of persistent ULXs which exhibit pulsations is currently 6, with another 6 transient pulsars whose X-ray luminosity exceeded 10^39 erg/s for a short period (see Table 2 in ).
Given that about 30 systems have sufficiently good statistics to detect pulsations, the fraction of NS-powered ULXs might be around 30% or above (, see also ). ULX spectra are marked out by a curvature in the 2-10 keV range and a soft excess below 2 keV (the `ultraluminous state', ) and can typically be classified into three different regimes, according to their spectral slope Γ in the 0.3-10 keV energy range: the soft ultraluminous (SUL) regime for Γ > 2, the hard ultraluminous (HUL) regime for Γ < 2, and the broadened disc (BD) regime, the latter where the X-ray spectrum presents a single peak and is dominated by a blackbody-like component in the 2-5 keV band (). Several ULXs sometimes show spectra switching between these regimes (, ). If the ULX spectrum is dominated by a cool blackbody-like component with kT ≲ 0.1 keV, with a bolometric luminosity L_BOL > 10^39 erg/s, and little detected emission above 1 keV, the source can also be defined as an ultraluminous supersoft source (ULS or SSUL). The presence of a weak hard tail in the X-ray band in ULSs suggests they are accreting in a super-Eddington regime but seen at high inclinations through the wind cone obscuring the innermost part of the disc (, ). This is confirmed by the presence of sharp drops in flux or dips in light-curves followed by spectral softening (e.g. ). These winds were suggested in several ULXs due to the presence of unresolved and intense features in CCD spectra, typically at lower energies (< 2 keV; , though recently detected in two cases above 6 keV: ). Critically, they have since been unambiguously resolved in several ULXs with the XMM-Newton Reflection Grating Spectrometers (RGS, hereafter, e.g. ). The Doppler blueshift of the absorption lines unveiled the long-sought relativistic (0.1-0.2c) winds predicted by theoretical simulations of super-Eddington accretion discs (); for a recent review on ULX winds see <cit.>.

§.§ HOLMBERG II X-1

Holmberg II X-1 (hereafter Ho II X-1), located at a distance of 3.05 Mpc[https://ned.ipac.caltech.edu] in the Holmberg II dwarf galaxy, is characterised by an X-ray band (0.3-10 keV) luminosity L_X ≥ 10^39 erg/s and up to 10^40 erg/s (). It is possible to see from Fig. <ref> that the X-ray spectrum of Ho II X-1 tends to fit in between the spectra of bright SUL sources and the hardest ULXs. Importantly, observations of Ho II X-1 in the radio reveal collimated jets, which are at least partially responsible for inflating the surrounding nebula, carrying a kinetic luminosity of > 10^39 erg/s (). In this sense, Ho II X-1 is expected to be analogous to the extreme Galactic source SS433 (), the latter considered to be an edge-on ULX () and proposed to harbour a fairly massive black hole of ≥ 25 M_⊙ (), in agreement with previous X-ray spectral analysis (). In fact, by comparing the spectral modelling of the microquasar GRS 1915+105 in the very high state (e.g. the χ class) with that of Ho II X-1, a compact object of a few tens of Solar masses radiating at or above its Eddington limit was forecast for the latter, though no more massive than 100 M_⊙ (). A relatively uncrowded field around Ho II X-1 permitted the identification of an optical counterpart. In fact, the presence of a He II λ4686 nebula, with a luminosity L ∼ 10^36 erg/s, and a point-like young companion star consistent with a spectral type O4V or B3 Ib were reported (, , ; although see ). The unknown demographics of ULXs, and the difficulty of separating them spectrally (see , although several alternatives have been suggested, e.g.
, , ), make Ho II X-1 an important target given its probable black hole nature. Specifically, we can learn about the nature of the accretion flow by comparing the properties of the wind and the evolution of the spectrum to other ULXs and sub-Eddington sources. This paper is structured as follows: in Sect. <ref> we report on the observations of the source, and on the spectral modelling in Sect. <ref>. In Sect. <ref> we present results on the lines seen by RGS. In Sect. <ref> we discuss our results and give our conclusions in Sect. <ref>. All uncertainties are at 1σ (68% level).

§ OBSERVATIONS AND DATA REDUCTION

Ho II X-1 was observed by XMM-Newton 20 times over a period of 19 years (see Table <ref>), benefiting from a recent 250 ks campaign (PI: Middleton). This high-quality data-set, spread over time, allows us to probe both short-term (hours-days) and long-term (months-years) variability. The raw data were obtained from the XMM-Newton Science Archive (XSA)[https://www.cosmos.esa.int/web/XMM-Newton/xsa] and were reduced with the Science Analysis System (SAS) version 18.0.0[https://www.cosmos.esa.int/web/XMM-Newton]. In the observations with ID 0112522201 and 0112522301 the satellite's filters were closed, and observation ID 0843840201 is in timing mode and badly calibrated, which leaves 17 well-exposed observations. We used recent calibration files (October 2022). The event files of the EPIC camera were generated through the epproc and emproc tasks and thereafter filtered for the flaring particle background at the recommended threshold above 10 keV (count rate < 0.5 ct/s for EPIC-PN and < 0.35 ct/s for EPIC-MOS 1 and 2). We selected the counts in the source and background regions using the evselect task and corrected the lightcurves with the epiclccorr task. We chose a circular region for the source of 30 arcsec radius, centred on the Chandra X-ray position (RA: 08h 19m 28.99s, Dec: +70d 42m 19.37s), whilst for the background we selected a larger circular region a few arcminutes away from the source, but still on the same chip, avoiding contamination from the copper ring and away from chip gaps. The rmfgen and arfgen tasks were used to generate response matrices and effective area files. For illustration, the concatenated 0.3-10 keV EPIC-PN lightcurves of the individual observations are shown in Fig. <ref>. The XMM-Newton lightcurves confirm that the source brightness changed by a factor of 2-3 within observations and up to a factor of 6-7 between observations. Defining the hardness ratio (HR) as the ratio between the counts in the 1.5-10 keV and the 0.3-10 keV energy bands, we see that the HR is generally higher (i.e. the spectrum is harder) when the source is brighter, and lower in proximity to the dips (Fig. <ref>, lower-left and right panels), which is typical in ULXs (e.g. ). The EPIC-PN spectra of all the observations are shown in Fig. <ref>, divided by the instrument response but without any spectral model. The archival RGS spectra were already thoroughly studied in a previous work <cit.>; we therefore focused on the new data from the 2021 campaign. The RGS data of all observations were reduced according to the standard procedure with the rgsproc pipeline in SAS. We filtered out periods of high background by selecting intervals in the lightcurves of the RGS 1,2 CCD 9 with a count rate below 0.2 ct/s. We extracted the 1^st-order RGS spectra in a cross-dispersion region of 0.8 arcmin width, centred on the same source coordinates used for extracting the EPIC spectra.
The background regions were chosen by selecting photons outside of 98% of the source point spread function. ULX RGS spectra typically require at least 100 ks exposure time to achieve the number of counts necessary to detect narrow lines (ideally ∼ 10,000 counts, e.g. ). We stacked the 1^st-order RGS 1 and 2 spectra from the 2021 observations with rgscombine for a total exposure time of 210 ks and about 25,900 net source counts. This is amongst the 10 deepest RGS spectra available for a ULX (for a comparison see Fig. 5 in ). RGS operates in the 0.33-2.5 keV band but the source is above the background (and the instrumental foreground) between 0.4-2.0 keV, and we use this band for the analysis. The search for spectral features requires accurate knowledge of the continuum shape; therefore, we also stacked the 2021 MOS 1, MOS 2, and pn spectra to cover the 2-10 keV band. In the end, we produced 4 time-averaged 2021 spectra, one for each camera (RGS, MOS 1, MOS 2 and pn).

§ BROADBAND X-RAY SPECTROSCOPY

For the purpose of studying the behaviour of the source, we initially proceed to analyse the data collected by the CCDs. The spectra were modelled with the SPEX fitting package v3.07.03 (, 2023). EPIC PN and MOS 1,2 spectra were rebinned to at least 1/3 of the spectral resolution and with at least 25 counts per bin using the task specgroup. All the models take into account the absorption from the circumstellar and interstellar medium by using the hot model (with a gas temperature of 10^-4 keV, which yields a quasi-neutral gas phase in SPEX). All spectral models also account for the source redshift (z=0.00052[https://ned.ipac.caltech.edu]). The EPIC MOS and PN spectra of each observation were fitted simultaneously, as they overlap in the 0.3-10 keV band, including a multiplicative constant to account for the well-known ≲ 5% cross-calibration uncertainties.

§.§ Testing different spectral models for Obs. 0200470101

In order to describe the Ho II X-1 spectrum, we proceed to test several models on the longest observation (Obs. ID 0200470101), which provides the largest number of counts, with double thermal models, as is common procedure in ULX spectral modelling (, , , ). In particular, we want to compare our results with those obtained by <cit.> and <cit.>. The spectral models that we used to describe the data were different combinations of the following components: blackbody emission (bb) and blackbody modified by coherent Compton scattering (mbb), often used for super-Eddington discs (), multi-temperature disc blackbody (dbb) and inverse comptonisation of soft photons in a hot plasma (comt), typically used for Eddington-limited Galactic XRBs but sometimes also for ULXs (). In order to fit emission and absorption lines, we used gaussian lines. A summary of the models tested is below. All models are corrected for the redshift and neutral interstellar absorption.

* RHB(B): one or more blackbody emission components.
* RHBD: cool blackbody emission and a warmer disc blackbody.
* RHDD: a double disc-blackbody model.
* RHBM: blackbody emission plus a modified blackbody.
* RHMM: a double modified blackbody model.
* RHBCom: blackbody plus comptonisation.
* RHDCom: disc blackbody plus comptonisation.
* RHBDP: a powerlaw with a cutoff (etau) added to the RHBD model.
* RHMMG(G): one or more gaussian lines added to the blackbody fit to account for emission or absorption lines.

Fits to the data with continuum-only models (i.e. without gaussians) show residuals around 1 keV, 0.75 keV and 0.5 keV (as previously reported in several ULXs including Ho II X-1: ).
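Before discussing these residuals further, we note that the thermal building blocks listed above can be sketched numerically. The following Python snippet evaluates a blackbody and a multi-temperature disc blackbody (dbb) photon spectrum on a 0.3-10 keV grid; the normalisations are arbitrary, the mbb modification by coherent Compton scattering is omitted for brevity, and the temperatures are purely illustrative (taken from the best-fit values quoted below).

import numpy as np

def blackbody(E_keV, kT, norm=1.0):
    """Blackbody photon spectrum (arbitrary normalisation):
    N(E) ~ E^2 / (exp(E/kT) - 1)."""
    return norm * E_keV**2 / np.expm1(E_keV / kT)

def disc_blackbody(E_keV, kT_in, norm=1.0, n_rings=200):
    """Schematic multi-temperature disc: superposition of blackbodies with
    T(r) = T_in (r/r_in)^(-3/4), summed over annuli of area 2 pi r dr."""
    r = np.logspace(0, 3, n_rings)           # radii in units of r_in
    kT_r = kT_in * r**(-0.75)
    dA = 2.0 * np.pi * r * np.gradient(r)    # annulus areas
    spec = sum(a * blackbody(E_keV, t) for a, t in zip(dA, kT_r))
    return norm * spec

E = np.logspace(np.log10(0.3), 1.0, 300)     # 0.3-10 keV energy grid
model = blackbody(E, kT=0.37) + disc_blackbody(E, kT_in=1.6)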
These residual lines are resolved with the high-resolution RGS spectrometers, as explained below. The spectral fits of Obs. ID 0200470101 are reported in Fig. <ref> and <ref>. In particular, the double thermal component bb+mbb (RHBM model, top panel) does not give a good description of the data above 6 keV. For the power law in the bb+dbb+pow model (RHBDP), we adopted a cutoff (etau) at 7.9 keV and a spectral index of 0.59 (). Although the RHDCom and RHBDP models provide a slightly better fit, we prefer to continue with the mbb+mbb model (RHMM model, Fig. <ref>, top panel), with temperatures of ∼ 0.37 keV and ∼ 1.6 keV for the cool and hot components respectively, in order to study the behaviour of the thermal components with the bolometric luminosity (Fig. <ref>) and the radii. The latter model also provides a lower χ^2 with respect to the dbb+dbb model (RHDD model, Fig. <ref>, bottom panel). The cool (hot) thermal component accounts for the mainly soft (hard) X-ray emission, coming from the wind and the outer (inner) part of the disc. The results from our fits of Obs. ID 0200470101 are shown in Table <ref>. We also tested the other models mentioned in Table <ref>, but we were unable to constrain the parameters of the hard tail of the comptonisation (see Sect. <ref>). This is likely due to the lack of high-energy coverage (E > 10 keV) during this epoch. Indeed, when adding NuSTAR data, an additional component (comt or pow) is needed to describe the spectrum at high energies (see Sect. <ref>). In several models, and particularly for the RHBDP model (see <ref>), the total column density assumes values smaller than the Galactic one (see Table <ref>), perhaps due to the emission lines at 0.5 keV. We therefore decided to restrict the column density to a minimum value of 5 × 10^20 cm^-2.

§.§ Spectral modelling of individual observations

We show in Fig. <ref> the EPIC PN spectra of the 17 observations analysed. As with the longest observation, we fitted the EPIC PN and MOS 1,2 spectra of all observations with all the models described before. The primary goal is to understand the trends of the spectral components with time and, then, to check for systematic effects of the model choice on the results. In particular, testing several models, we obtained N_H values ranging between (0.5-0.8) × 10^21 cm^-2. This is likely due to some dependence on the model applied (i.e. the curvature adopted in the soft band). For clarity and for comparison with other sources, we focus on the best-fit RHMM model in the main text. The first fits with N_H free yielded values consistent within 2-3 σ. We therefore reran them by fixing the column density at the average value of 7 × 10^20 cm^-2. The results are shown in Table <ref> for all the observations. The best-fit RHMM model provided a good representation of the data, with the exception of the well-known residuals at 0.5 keV, 0.75 keV and 1 keV. However, we stress that these residuals do not change dramatically when using the models with comptonisation (see e.g. Fig. <ref>) and are later fitted by adding three gaussian lines to the RHMM model (see Section <ref>).

§.§ Variability of the wind features

The strength of the spectral features and their variability in Holmberg II X-1 were studied by adding three gaussian lines to the mbb+mbb model (RHMMGGG model). We fixed the line width to 1 eV for all three gaussian lines, since the EPIC spectra lack the necessary spectral resolution around 1 keV.
The energy centroids were left free to vary and, among the various observations, they agreed within the error bars at 0.75 keV for the absorption line, and 0.5 and 1.0 keV for the emission lines. The normalisations of the gaussian lines for the seventeen spectra are shown in Fig. <ref>. They do not show significant variability within the uncertainties; the 0.5 keV emission line weakly brightens, whilst the 0.75 keV feature is undetected at the highest luminosities. We preferred to use the simplified RHMM(GGG) model, rather than complicating the continuum model by adding the powerlaw component for the hard X-ray tail, because for such a model, as well as for the one with comptonisation, several parameters remain unconstrained. However, we have tested the addition of three gaussian lines to the RHBDP model over the 17 spectra and obtained results consistent with those of the RHMMGGG.

§ HIGH-RESOLUTION X-RAY SPECTROSCOPY

In order to search for and identify narrow lines, we regrouped the 2021 time-averaged EPIC PN, MOS 1, MOS 2 and RGS spectra according to the optimal binning discussed in <cit.>, which adopts energy bins of at least 1/3 of the spectral resolution and a minimal positive number of counts per bin. In this analysis we therefore fit the data by minimising the Cash statistic (C-stat; ). Our binning choice smooths the background spectra in the energy range with low statistics and removes narrow spurious features. We notice, however, that both the RGS and EPIC stacked spectra have high statistics and therefore the regrouping turns out to be similar to the standard 1/3 of the spectral resolution in the region of interest for the narrow lines (0.4-2 keV). Moreover, further tests performed by rebinning to at least 25 counts per bin, using the SAS task specgroup, showed consistent results. The RGS and EPIC time-averaged spectra are shown in Fig. <ref>; they exhibit a very strong, narrow emission feature around 24 Å, close to the dominant 1s-2p transition of N VII. There are additional features between 12-13 Å compatible with Ne IX-X transitions, and a drop below 12 Å similar to that detected in the ULXs NGC 1313 X-1 and NGC 5408 X-1 <cit.>. In previous work, the EPIC data in the 0.4-1.77 keV band were ignored to avoid smearing the narrow features and unnecessarily boosting model degeneracy, owing to the worse energy resolution of EPIC (see e.g. ). We adopted the same approach and fitted the EPIC and RGS stacked spectra with the RHMM model. In all fits, the parameters of the spectral components and the ISM absorber were coupled among the EPIC and RGS models. However, we left the overall normalisations of the MOS 1,2 and RGS models free to vary with respect to the PN, to account for the typical ≲ 5% cross-calibration uncertainties. The RGS + EPIC continuum model provides an overall C-stat of 844 (and a comparable χ^2) for 725 degrees of freedom (the detailed RGS spectrum is shown in Fig. <ref>).

§.§ Gaussian line scan

In order to search for and identify the strongest lines, we performed a standard scan of the spectra with a moving Gaussian line, following the approach used in <cit.>. We adopted a logarithmic grid of 1000 points with energies between 0.4 (30 Å) and 10 keV (1.24 Å). This provided an energy step that is comparable to the RGS and EPIC resolving power in their energy range. We tested line widths ranging from 100 to 10,000 km/s. At each energy we recorded the Δ C improvement with respect to the best-fit continuum model and measured the single-trial significance as the square root of Δ C.
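The scan just described can be sketched as follows. This is a schematic in Python, not the actual fitting code: fit_with_line stands for a user-supplied routine that refits the spectrum with a Gaussian added at a trial energy and returns the new C-stat and line normalisation, and the signed-significance convention is the one introduced in the text just below.

import numpy as np

# Logarithmic energy grid for the moving-Gaussian line scan:
# 1000 points between 0.4 and 10 keV, with a step comparable to
# the RGS/EPIC resolving power across the band.
energies = np.logspace(np.log10(0.4), np.log10(10.0), 1000)
widths_kms = [100, 1000, 10000]          # tested line widths in km/s

def scan_point(fit_with_line, c_stat_continuum, E0, v_sigma):
    """Refit with a Gaussian at E0 (width v_sigma) and return the signed
    single-trial significance: sign(normalisation) * sqrt(Delta C)."""
    c_stat_line, norm = fit_with_line(E0, v_sigma)   # hypothetical callback
    delta_c = max(c_stat_continuum - c_stat_line, 0.0)
    return np.sign(norm) * np.sqrt(delta_c)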
To easily distinguish between emission and absorption lines, we multiplied the √(Δ C) by the sign of the Gaussian normalisation. The results of the line scan obtained for the 2021 time-averaged stacked RGS + EPIC spectra are shown in Fig. <ref>. We highlight that in the 0.4-2 keV energy band (7-30 Å) only the RGS data are considered. The line scan picked out the strong emission-like feature near 0.5 keV (N VII) and a weaker one at 0.9 keV (Ne IX), as well as some faint absorption-like features between 0.6-0.8 keV and above 1 keV. This shows a very good agreement with the CCD-quality EPIC spectra and supports our former approach of using three gaussian lines (see Fig. <ref> and Table <ref>). The results of the Gaussian line scan are very similar to those obtained for NGC 1313 X-1 <cit.>, but the striking, close to rest-frame, N VII emission line would indicate nitrogen-rich material, which we discuss later. The single-trial line significance of the strongest feature from N VII is around 5 σ, which is very high. Accounting for the look-elsewhere effect and multiplying by the number of free bins within the whole 6-30 Å band (which would however not apply here, since the line is close to rest), we can exclude at 3.5 σ that the feature is due to noise (). The absorption features, if real, are offset from the dominant 1s-2p transitions and, therefore, have weaker significance, as previously reported for the archival data ().

§.§ Grids of photoionisation models

Previous work performed multi-dimensional scans of plasma models to fit multiple lines simultaneously, combining their individual improvements with respect to the continuum model (see, e.g., , ). This technique has the advantage of a) preventing the fits from getting stuck in local minima and b) searching for multiple components or phases. Here we tried the same approach with the aim of locating velocity shifts in the emission and absorption lines. In order to avoid strong degeneracy in the plasma models, we adopted Solar abundances; this will have an impact on the overall spectral improvement due to the obvious nitrogen over-abundance, however, we address this when directly fitting the RGS spectra (see Sect. <ref>). The broadband spectral energy distribution (SED) is estimated by extrapolating the best-fit continuum model between 0.001-100 keV. We then computed the (photo-)ionisation balance with the SPEX component pion, parameterised by the standard ionisation parameter, ξ = L_ion / (n_H R^2). Here, L_ion is the ionising luminosity (measured between 13.6 eV and 13.6 keV), n_H is the hydrogen number density and R is the distance from the ionising source. We also computed the stability (or S) curve, i.e. the relationship between the temperature and the ratio between the radiation and the thermal pressure, Ξ = F / (n_H c kT) = 19222 ξ / T. The SED, the ionisation balance T-ξ and the S curves are shown in Fig. <ref>. The branches of the S curve with a positive (negative) gradient are characterised by thermally stable (unstable) gas. Further technical details are provided in Fig. <ref>. Following <cit.>, we created grids of emission and absorption models using pion and xabs components in SPEX, respectively. For the line-emitting gas, we performed a multi-dimensional scan by adopting a logarithmic grid of ionisation parameters (log ξ between 0 and 5, with steps of 0.1) and line-of-sight velocities, v_LOS, between -0.1c (blueshift) and +0.1c (redshift), with steps of 500 km/s.
We adopted a velocity dispersion, v_σ, of 1000 km/s, which is the typical width of the features found in the Gaussian line scan (see Fig. <ref>). As mentioned above, abundances were chosen to be Solar, which saves a great deal of computing time. For the grids of absorption line models we chose a v_LOS range up to -0.3c, as done in previous work. The only free parameter for the pion and xabs components is the column density, N_H. We applied the automated routine scanning through the emission and absorption model grids, fitting the Ho II X-1 2021 time-averaged EPIC and RGS spectra, again using the RGS only in the 0.4-1.77 keV band. The results are shown in Fig. <ref> in the 2D parameter space of line-of-sight velocity and ionisation parameter. The colour is coded according to the improvement relative to the continuum-only fit. Negative velocities refer to blueshifts or motion towards the observer. The emission model grids hint at a multi-phase gas. The solution with log ξ = 0 and v_LOS = -0.018c is likely an artefact, as it would require very low ionisation species moving at high velocities. The solutions that yield quasi-rest-frame emission lines are more feasible and would indicate the presence of cool (O VII - N VII) and hot (O VIII - Ne IX) gas describing the emission-like features shown in Fig. <ref>. The absorption model grids favour a solution of plasma outflowing at 0.03c with a high ionisation state (log ξ = 3.2), which is similar to the hotter solution for the emission lines. An alternative solution for the absorption lines yields parameters very similar to NGC 1313 X-1 (log ξ = 2.8 and v_LOS = -0.19c) and to a former tentative detection shown in <cit.> for Holmberg II X-1 archival data (log ξ = 3.3 and v_LOS = -0.20c). As expected, the changes in the C-stat are not very large, most likely because all the emission grids underpredict the N VII emission line. Instead, for the absorption features, the small improvement is likely due to both the adopted abundances and the spectral stacking between different observations, which smears out the wind features in the LOS. The latter would smooth the lines because they are expected to be variable over timescales of days (). All observations were taken 3-5 days apart from each other, which prevents us from building 2-3 ad-hoc large stacks, and the signal-to-noise ratio of each spectrum is not sufficient to search for lines.

§.§ Photoionised and hybrid plasma modelling

In order to obtain tighter constraints on the plasma models, we proceeded to model the RGS + EPIC spectra by adding absorption and emission components of plasma in photoionisation equilibrium, with spectral parameters starting from the solutions found in the scans (Fig. <ref>), at first keeping Solar abundances. The addition of a single photoionised absorbing xabs component onto the continuum model confirmed the results from the scan, with v_LOS = -0.033±0.003c, log ξ [erg cm/s] = 3.3±0.1, v_σ = 1500±1000 km/s and N_H = (1.1 ± 0.4) × 10^22 cm^-2. The fit improvement (Δ C = 16) is however too small for this to be a significant detection. The addition of a photoionised emitting pion component also provides results consistent with the model scan. In particular, v_LOS = +0.006±0.002c and log ξ [erg cm/s] = 1.15±0.15. However, the fit with Solar abundances was poor (Δ C = 13), since it over-predicted the O VIII 19 Å line and under-predicted the N VII 24.8 Å line. A single pion component is also not able to fit all features, particularly those from higher ionic species such as Ne IX-X around 12-13 Å.
A better description of these features is achieved by adding another pion component with log ξ [erg cm/s] = 2.9±0.2 at the same velocities. This fit, with two emitting and one absorbing components with Solar abundances, is shown in Fig. <ref> and corresponds to an overall Δ C = 34 improvement with respect to the continuum model for 10 degrees of freedom, which is a rather small improvement (C/d.o.f. = 811/715).

§.§.§ Non-Solar abundances and alternative plasma models

Finally, we attempted to fit the RGS and EPIC spectra again by releasing some of the abundances of the photoionised gas, keeping them coupled among the emission and absorption components. Moreover, due to the weakness of most spectral features, we also coupled some of the elemental abundances with each other. In the Gaussian line scan and, visually, in the RGS spectrum, the N VII, Ne IX-X and Mg XII lines appear significantly stronger than those from O VII-VIII. We therefore fitted the spectra by coupling the N, Ne, and Mg abundances and releasing the iron (which follows different evolutionary scenarios than the other three). Oxygen is kept fixed to Solar to anchor the abundance scale, since hydrogen does not produce any lines in the 0.3-10 keV energy range. This provided a best fit with a final C/d.o.f. = 784/713 (see Fig. <ref>, top and bottom panels). This corresponds to a large improvement with respect to the continuum, Δ C = 60 (with Δ C = 22, 11, and 27 from the xabs, the hot and the cool pion components, respectively). The contribution of each component over the continuum is therefore increased when releasing the abundances, despite them being coupled with each other. There is substantial co-dependence between the xabs (stronger) and the pion (weaker) components. We obtained the following abundance ratios: O/H=1 (fixed), Fe/H=3, C-N-Ne-Mg/H=14, with uncertainties of ∼50%. In principle, the emission lines could be produced by plasmas in different conditions, such as shocks within the ULX wind (especially for the higher ionisation component, which may be supersonic) or photoionisation balance. The features can indeed be described with nearly indistinguishable fits and comparable C-stat values by substituting the two pion components with two collisionally-ionised emission components (cie in SPEX, with temperatures kT_cool ∼ 0.17 and kT_hot ∼ 1.2 keV, and abundance ratios consistent with those obtained with the two pion components).

§ DISCUSSION

This paper focuses on the physical processes that might cause spectral transitions in Ho II X-1 and, in general, in ULXs. We first discuss the broadband properties of the XMM-Newton and NuSTAR spectra, and the possible presence of a hard tail above 10 keV. Then we search for a correlation between the total luminosity, the temperature and the radius of the thermal components. Finally, we compare our results with those obtained for different sources and discuss the properties of the wind.

§.§ Broadband spectral properties

Ho II X-1 shows a flux variability of a factor of 2-3 within a given observation and up to a factor of 6-7 between different observations, as shown in Fig. <ref> (top-left panel). In addition, there are a few flux drops (the phenomenon is more evident in long-term monitoring with Swift/XRT, see e.g. ), of the order of a few ks, probably due to optically thick wind clumps that briefly obscure the innermost part of the disc, with a reduction of the spectral hardness (Fig. <ref>, lower-left panel).
This can be caused either by an increase in the accretion rate or by the precession of the disc, although the lack of periodicity in the phenomenon would favour the former. Our results showed that the spectrum is well described by a double thermal model (two mbb components), in agreement with the general properties of intermediate-soft ULX spectra (). Alternative models using comptonisation for the hotter component provide comparably good fits, but the parameters are unconstrained for low-statistics short observations; we therefore focussed primarily on the double thermal model. We also tested a model with a third component (a cutoff powerlaw) describing the hard tail in the fashion of a NS magnetosphere and obtained similar results. However, we note that it mainly affects the band beyond 10 keV (see, e.g., ) and, therefore, the powerlaw parameters could not be well constrained. In order to determine the role of such a third component, we also fit the XMM-Newton data (obsid: 0724810101-301) together with the simultaneous NuSTAR data (obsid: 30001031002/3, 30001031005) (see Fig. <ref>), which were retrieved from <cit.>. These data were fitted with the RHMM model plus a cutoff powerlaw (E_cutoff= 7.9 keV and Γ=0.59, see ). The cool and hot components are characterized by temperatures of ∼ 0.3 keV and ∼ 1.8 keV, respectively. As expected, the addition of a third component (pow+etau) does not dramatically affect the constraints on the two mbb components. The data collected by NuSTAR above 7 keV are overwhelmed by the background when the source is at low flux, so constraints on the hard energy tail above 10 keV are very difficult to set. A constraint on the properties of such a hard X-ray component in the low-flux regime is necessary to understand its nature and to help distinguish between a magnetosphere typical of a NS and a small corona around a BH. To study this, we performed low-flux simulations of Ho II X-1 spectra with the HEX-P satellite (see Fig. <ref>; response matrices from July 2023, ). HEX-P will be able to detect the source at 10 keV and above, which is necessary to place constraints on the tail and on the inner part of the disc. This may unveil the presence of an accretion column and the nature of the compact object. The thermal components appear hotter at higher luminosity (see Table <ref>), possibly due to an increase of the accretion rate Ṁ or to a better view of the inner accretion flow (a wind photosphere less dense due to a precessing disc, although this should not be the case for the cooler, outer, component). §.§ Luminosity - Temperature relations In order to unveil any disc structural changes with time, we examined the luminosity vs temperature (L-T) behaviour for both thermal components of the best-fit RHMM model. In particular, it is useful to compare the L-T trends with the theoretical expectations for a thin disc model with a constant emitting area (L ∝ T^4, ) and an advection-dominated model (L ∝ T^2, ). For each thermal component we estimate the bolometric luminosity by extrapolating down to 0.001 and up to 1000 keV from the best-fit models. There is a small uncertainty in the total luminosity owing to our ignorance of the UV flux, but we expect this to make only a minor contribution for such a bright intermediate-hardness ULX (see the discussion on the spectral energy distribution in ). The luminosity-temperature results are reported in Figure <ref> and Table <ref>, which clearly show positive correlations between luminosity and temperature for both components.
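A minimal sketch of the correlation analysis quantified in the next paragraph is given below. The luminosity and temperature arrays are hypothetical stand-ins for the best-fit values of one thermal component; only the scipy/numpy routines named in the text are assumed.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

L = np.array([2.1e39, 3.0e39, 4.2e39, 5.5e39, 7.0e39])  # erg/s (hypothetical)
T = np.array([0.28, 0.31, 0.34, 0.36, 0.39])            # keV (hypothetical)

r_p, p_p = pearsonr(L, T)    # Pearson correlation coefficient and p-value
r_s, p_s = spearmanr(L, T)   # Spearman rank correlation and p-value

# Least-squares slope in log-log space: for L proportional to T^alpha,
# alpha = 4 for a thin disc and alpha = 2 for an advection-dominated disc.
alpha, intercept = np.polyfit(np.log10(T), np.log10(L), 1)
print(f"Pearson r = {r_p:.2f}, Spearman rho = {r_s:.2f}, slope = {alpha:.1f}")
```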
In addition to the RHMM model, we also explore the results for the RHBDP model in Appendix <ref>. We find general agreement, although there are some deviations with respect to the RHMM model for the hot thermal component at high luminosities. The L-T plot confirms that the source becomes spectrally harder when brighter, as seen in Fig. <ref> (lower-left panel). In both plots and for each thermal component we compute the regression lines of the L-T points. We also quantify the L-T trends by computing the Pearson and Spearman correlation coefficients using the scipy.stats.pearsonr and scipy.stats.spearmanr routines in Python. These are reported in Table <ref>; for both thermal components, a strong correlation is suggested. At low luminosities, the L-T trends of all models broadly agree with an L ∝ T^4 relationship, as predicted for Eddington-limited thin discs (). There are significant deviations at high luminosities, especially for the hot component, which exhibits a large scatter about the regression line (see Fig. <ref>). The exact trend depends on the model adopted. For the RHBDP model (Fig. <ref>, middle panel) the temperatures are lower than predicted for a thin disc with constant emitting area, suggesting a possible expansion of the disc and/or a contribution to the emission from the wind. For the RHMM model, instead, the scatter is less clear, although the trend seems to saturate (see Fig. <ref>). These deviations appear at a total bolometric luminosity greater than 5 ×10^39 erg/s. As seen for NGC 55 ULX-1 (), deviations from sub-Eddington behaviour could potentially be used to place rough constraints on the mass of the compact object. If we assume the deviations are due to the source exceeding the Eddington limit L_Edd = 1.4 × 10^38 (M/M_⊙) erg/s or the critical luminosity L_critical= 9/4 L_Edd (), we estimate a mass of the compact object ranging between 16 and 36 M_⊙. This would suggest a stellar-mass black hole, in agreement with estimates inferred from the powerful radio jets of this source (see ). However, deviations were found in Galactic XRBs when the source exceeded 30% of the Eddington luminosity (e.g. ), which might indicate a slightly more massive BH. Given that Holmberg II X-1 always shows the typical characteristics of a ULX (soft spectra, turnover, high luminosity and lines), it is more likely that the accretion rate is around Eddington and, during these deviations, exceeds the critical value, altering the structure of the accretion disc. §.§ Radius - Temperature relation We show the radius-temperature (R-T) plots for the RHMM model in Fig. <ref>; we do not present the results for the other models because they are consistent within the uncertainties. The radii are estimated from the relation between the luminosity and the temperature of a blackbody. In all cases, there is no clear trend between the radii of the two thermal components and their temperatures; the radii appear constant at the 90-95 % confidence level. The radius of the hot component ranges between 50 and 90 km, while the radius of the cold component lies in the range 1500-2200 km. The tests performed with the alternative models yield a similar order of magnitude for these radii. Assuming an average compact object mass of 25 M_⊙ (from the range obtained through the luminosity-temperature relation), these correspond to, respectively, 2 R_G and 80 R_G, and might refer to the inner and the outer part of the disc (a back-of-the-envelope check of these estimates is sketched below).
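As a quick check of the mass range and radii quoted above: the deviation luminosity and the component temperatures are taken from the text, while the per-component luminosities passed to the radius function are indicative values only.

```python
import numpy as np

# Eddington-based mass estimate from the luminosity at which
# the L-T deviations appear.
L_dev = 5e39                          # erg/s
M_edd = L_dev / 1.4e38                # ~36 M_sun if L_dev = L_Edd
M_crit = L_dev / (2.25 * 1.4e38)      # ~16 M_sun if L_dev = (9/4) L_Edd
print(f"Compact object mass: {M_crit:.0f}-{M_edd:.0f} M_sun")

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KEV_TO_K = 1.1605e7    # Kelvin per keV

def blackbody_radius_km(L, kT_keV):
    """Radius of a sphere radiating L (erg/s) as a blackbody at kT (keV)."""
    T = kT_keV * KEV_TO_K
    return np.sqrt(L / (4.0 * np.pi * SIGMA_SB * T**4)) / 1e5

# Indicative component luminosities and temperatures:
print(f"hot:  ~{blackbody_radius_km(1e39, 1.2):.0f} km")   # tens of km
print(f"cool: ~{blackbody_radius_km(3e39, 0.3):.0f} km")   # thousands of km
```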
Indeed, for Ṁ∼Ṁ_Critical = 9/4 Ṁ_Edd we would obtain a spherisation radius R_sph = (27/4)(Ṁ/Ṁ_Edd) R_G ∼ 15 R_G. We note that in the case of a NS or a very small BH accretor (up to a few M_⊙) the accretion rate would be highly super-Eddington, R_sph would be larger by a factor of a few, and the models might fail to reproduce the exact values of the disc radii and temperatures, although our L-T trends would disfavour this scenario. §.§ Comparison with other ULXs A similar study was performed for the soft ultraluminous X-ray source NGC 55 ULX-1 (). Also in this case, the regression line in the L-T plot agreed with the thin-disc prediction, and the deviations implied a 6-14 M_⊙ stellar-mass black hole and a smaller Ṁ. In the case of NGC 1313 X-1, deviations from a single L-T track are present for the hot component, linked to an obscuring wind or a change of the disc scale height, while there is no L-T correlation for the cool component, maybe due to the modelling of the hotter part of the accretion disc (see, e.g., ; although see for caveats related to the modelling and the associated trends). Ho II X-1 is similar to NGC 5204 X-1 regarding repeated transitions between the three classical ULX regimes (HUL, SUL and SSUL/ULS, ). However, according to their long-term Swift/XRT light curves, Ho II X-1 remains around a 0.3-10 keV count rate of 0.15-0.20 c/s for most of the time, with an isolated long episode in which the count rate switches between 0.05 and 0.30 c/s, corresponding to the SUL and SSUL regimes, respectively. These transitions may be driven by geometrical effects (the funnel structure) and by the wind, due to a varying accretion rate and/or precession, with a line of sight grazing the wind allowing for a rapid transition between the SUL-SSUL regimes. Such transitions were also found during the epochs of highest flux of the ULXs NGC 55 ULX-1 and NGC 247 ULX-1 (see e.g. ), although for these two ULXs, and especially the latter, the flux drops are more prominent. This would therefore suggest a lower inclination angle in Ho II X-1 (for a comparable Ṁ), or a lower accretion rate and, therefore, a thinner wind along the line of sight. §.§ Wind properties X-ray spectra from CCD / imaging detectors such as EPIC have lower spectral resolution than gratings but provide a far greater number of counts, and may therefore yield important results provided that there are some dominant and reasonably well-isolated lines. This is the case for the feature at 0.5 keV from N VII (which is well resolved in the RGS stacked spectrum, see Fig. <ref>). Its flux weakly correlates with the source bolometric luminosity (see Fig. <ref>). This line is produced, along with others, by a cool plasma (see Fig. <ref>), perhaps associated with the cool thermal component. Should such a trend be confirmed with deep flux-resolved RGS spectra, it would suggest an increase of the accretion rate (as the L-T trends indicate), because a changing viewing angle or ionisation would produce the opposite trend. A parallel paper will also make use of NICER and Swift X-ray observations, and of data from other energy bands, to investigate the variability of the dominant spectral features on timescales from a few days to several weeks. The physical modelling of the spectral lines present in the RGS data indicates evidence for a multiphase plasma. The phase responsible for the strong N VII emission line (log ξ∼1) appears distinct from the two hotter absorbing and emitting components (log ξ∼3).
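As a rough illustration of what these ionisation parameters imply for the location of the line-emitting gas, using the standard definition ξ = L_ion/(n r²): the ionising luminosity below is indicative, and the density is the most probable value quoted later in this section.

```python
import numpy as np

def distance_cm(L_ion, n, xi):
    """Distance of photoionised gas from the X-ray source,
    inverted from the ionisation parameter xi = L_ion / (n * r^2)."""
    return np.sqrt(L_ion / (n * xi))

L_ion = 1e40   # erg/s, indicative ionising luminosity
n = 1e10       # cm^-3, most probable density from the O VII triplet
for log_xi in (1.0, 3.0):
    r = distance_cm(L_ion, n, 10**log_xi)
    print(f"log xi = {log_xi:.0f}: r ~ {r:.1e} cm")
```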
Although the T-Ξ curve appears generally stable to thermal perturbations, there is evidence for some possible points of instability, which would indeed split the plasma into multiple phases (see Fig. <ref>). The two hot components are not independent of each other, due to the low velocity of the absorbing gas; in other words, the inclusion of one weakens the significance of the other, although the absorption is clearly stronger. This may also be due to the detailed shapes of the features, which resemble those of P-Cygni profiles, especially those of Mg XII and Ne X at 8.4 and 12.1 Å, respectively (see Fig. <ref>). However, the statistics are not good enough to make actual claims of P-Cygni profiles, although the physical modelling would support them through weakly-redshifted emission and blueshifted absorption (see Fig. <ref>). Owing to this mutual dependence, together with the fitting of the abundances (and the strength of the N VII line), performing Monte Carlo simulations for the physical models (as done in previous work, e.g. ) is challenging, because the degeneracy prevents a meaningful exploration of the parameter space. The fact that the spectral improvement of each component is boosted when releasing the abundances, despite them being coupled with each other, suggests that all the components are indeed present. Indeed, <cit.> first reported the presence of a possible emission excess from the O VII triplet around 22 Å in the deepest individual RGS spectrum of Ho II X-1, from the archival observation taken in 2004. Moreover, <cit.> and, later on, <cit.> studied these data as well as the 2002-2013 time-averaged RGS spectrum (137 ks) and found features consistent with those reported in this work. The 2021 time-averaged RGS spectrum seems to show variability with respect to the archival data <cit.>, in which the dominant feature was the O VII line, probably due to a variation in the ionisation parameter and, hence, in the luminosity. The line-emitting plasma components are consistent with being at rest within 2-3 σ, each with a velocity dispersion of about 1000-3000 km/s, which is very similar (albeit cooler, or with lower ionisation parameters) to that observed in the ubiquitous winds of Eddington-limited Galactic X-ray binaries (for a review, see ). This might suggest a common mechanism, such as thermal driving <cit.>. From simultaneous fits of the He-like triplets of Mg XI, Ne IX and especially O VII we obtain a 2 σ upper limit on the gas density of about 1.5×10^11 cm^-3 (with a most probable value of 10^10 cm^-3), in agreement with results by <cit.> for the ULX NGC 1313 X-1 and for the line-emitting plasma of Galactic XRBs <cit.>, which reinforces our hypothesis of a common mechanism. This was expected given the very weak (undetected) O VII forbidden line at 22.1 Å with respect to the unresolved resonance and intercombination line complex at 21.6-21.8 Å, seen both in the spectrum and in the line scan (Fig. <ref> and <ref>). Of course, the wind in Ho II X-1 is more extreme than in XRBs, given that each line-emitting component has an X-ray luminosity, L_[0.3-10keV], of about 10^38 erg/s, i.e. orders of magnitude higher than in Galactic XRB winds. The super-Solar abundance of nitrogen in the wind of Holmberg II X-1 is most likely a signature of the abundance pattern of the donor companion star.
This could provide an alternative means of constraining its nature, because previous work has shown that the optical and infrared fluxes are strongly affected by the irradiated outer disc and the surrounding nebula, making it difficult to retrieve information on the companion star <cit.>. N-rich stars with a super-Solar N/Fe ratio are found in low-metallicity environments. Some examples are B-type giant/supergiant stars, as reported for Ho II X-1, where the optical counterpart is consistent with an O/B-type companion star (), or Wolf–Rayet stars in young star clusters of the Magellanic Clouds and similar dwarf galaxies <cit.>. In some cases they might be luminous evolved stars on the asymptotic giant branch (AGB), or more regular red giant stars in the LMC. The latter have been interpreted as observational evidence of tidally shredded stellar clusters (see ). This is supported in the case of Holmberg II X-1 by the discovery of a bow shock, which indicates that the ULX binary system was likely ejected from the nearby young star cluster <cit.>. These results suggest that the Holmberg II X-1 system was born in a low-metallicity environment, rich in young/massive stars, and escaped from its native stellar cluster. § CONCLUSIONS In this paper we have performed an X-ray spectral analysis to understand the nature of the ultraluminous X-ray source Holmberg II X-1. Two thermal components provide a good description of the 0.3-10 keV spectra, with the hotter component broadened perhaps by Compton scattering. Both thermal components become hotter at higher luminosities, indicating an increase in the accretion rate (most likely) or a clearer view of the inner part of the accretion flow due to disc precession. The trends between the bolometric luminosity and the temperature of each component broadly agree, at low luminosities, with the L ∝ T^4 relationship expected from an Eddington-limited thin disc. Deviations are seen above 5 × 10^39 erg/s, which are likely due to the accretion rate exceeding the supercritical rate; this would imply a black hole with a mass between 16 and 36 M_⊙ as the compact object. Our results suggest that the two thermal components unveil the portions of the disc inside (hotter) and outside (cooler) the spherisation radius where the wind is launched, in agreement with the super-Eddington scenario. The wind abundance pattern strongly favours a nitrogen-rich donor star, likely a supergiant, which escaped from its native stellar cluster, characterised by a low-metallicity environment. § ACKNOWLEDGEMENTS This work is based on observations obtained with XMM-Newton, an ESA science mission funded by ESA Member States and the USA (NASA). CP acknowledges support for PRIN MUR 2022 SEAWIND 2022Y2T94C and INAF LG 2023 BLOSSOM. TDS acknowledges support from PRIN-INAF 2019 with the project "Probing the geometry of accretion: from theory to observations" (PI: Belloni). TPR acknowledges funding from STFC as part of the consolidated grants ST/T000244/1 and ST/X001075/1. § DATA AVAILABILITY All the data and software used in this work are publicly available from ESA's XMM-Newton Science Archive (XSA[https://www.cosmos.esa.int/web/XMM-Newton/xsa]) and NASA's HEASARC archive[https://heasarc.gsfc.nasa.gov/]. Our spectral codes and automated scanning routines are publicly available on GitHub[https://github.com/ciropinto1982]. § ADDITIONAL FIGURES AND TABLES Fig.
<ref> shows the spectral energy distribution (SED, top panel), the photo-ionisation balance (middle panel), and the log T-Ξ stability curves (bottom panel). Fig. <ref> shows the photoionised emission (top panel) and absorption (bottom panel) model scans. Fig. <ref> shows the spectral modelling of the simultaneous XMM-Newton & NuSTAR data with the RHMM model plus a cutoff powerlaw. Fig. <ref> shows the luminosity-temperature trends for the following models: RHBM (top panel), RHDD (middle panel) and RHBDP (bottom panel). Table <ref> reports the spectral fit results with the RHBDP model for all observations. Table <ref> reports the least-squares regression slopes and the Pearson/Spearman coefficients for the L-T trends with the following models: RHBM, RHBDP and RHDD. | http://arxiv.org/abs/2311.16243v1 | {
"authors": [
"F. Barra",
"C. Pinto",
"M. Middleton",
"T. Di Salvo",
"D. J. Walton",
"A. Gúrpide",
"T. P. Roberts"
],
"categories": [
"astro-ph.HE"
],
"primary_category": "astro-ph.HE",
"published": "20231127190010",
"title": "On the nature of the ultraluminous X-ray source Holmberg II X-1"
} |
http://arxiv.org/abs/2311.15817v1 | {
"authors": [
"Nora Weickgenannt",
"Jean-Paul Blaizot"
],
"categories": [
"hep-ph",
"nucl-th"
],
"primary_category": "hep-ph",
"published": "20231127134239",
"title": "Chiral hydrodynamics of expanding systems"
} |
|
Towards Responsible Governance of Biological Design Tools ============================================== Recent advancements in generative machine learning have enabled rapid progress in biological design tools (BDTs) such as protein structure and sequence prediction models. The unprecedented predictive accuracy and novel design capabilities of BDTs present new and significant dual-use risks. For example, their predictive accuracy allows biological agents, whether vaccines or pathogens, to be developed more quickly, while the design capabilities could be used to discover drugs or to evade DNA screening techniques. Similar to other dual-use AI systems, BDTs present a wicked problem: how can regulators uphold public safety without stifling innovation? We highlight how current regulatory proposals that are primarily tailored toward large language models may be less effective for BDTs, which require fewer computational resources to train and are often developed in an open-source manner. We propose a range of measures to mitigate the risk that BDTs are misused, across the areas of responsible development, risk assessment, transparency, access management, cybersecurity, and investing in resilience. Implementing such measures will require close coordination between developers and governments. *equal contribution; order randomized § INTRODUCTION This paper outlines the distinct regulatory challenges posed by biological design tools (BDTs) and provides a range of governance measures to address the risks of their misuse. We describe how the unprecedented predictive accuracy and novel design capabilities of BDTs have the potential to be misused by malicious actors. We then identify four key challenges in regulating BDTs: disagreement on permissible capabilities, ineffectiveness of compute restrictions on such small models, difficulty regulating decentralized and non-commercial development, and the potential for safeguards to be circumvented by open-sourcing model weights. Preventing the misuse of biological design tools, while preserving their beneficial scientific uses, will require action at many phases of their lifecycle. We present a menu of 25 measures for mitigating risks of BDT misuse (including those that may be enacted by governments, those enacted by developers, and those which rely upon action by both) that we believe merit further exploration. We categorize these measures into six thematic areas: responsible development, risk assessment, transparency, access management, cybersecurity, and investing in resilience. Discussions among the authors led us to highlight a subset of seven measures we believe to be particularly promising for reducing the risks posed by BDT misuse. These are: model evaluations for dangerous capabilities, vulnerability reporting, structured access, know your customer, nucleic acid synthesis screening, securing lab equipment, and model-sharing infrastructure. § BIOLOGICAL DESIGN TOOLS PRESENT DUAL-USE RISKS Biological design tools (BDTs) are defined in <cit.> as “systems that are trained on biological data and can help design new proteins or other biological agents”.
These include protein folding tools, such as AlphaFold2 <cit.>, protein design tools, such as RFdiffusion <cit.>, genetic modification tools, such as AlphaMissense <cit.>, and many others (see <cit.> for a proposed subcategorization). Advances in BDTs are rapidly reducing the gap between in silico prediction and in vivo performance, contributing to progress in many areas of biomedicine, including vaccine development, cancer therapy, and precision medicine <cit.>. There are already several AI-designed molecules in early-stage clinical trials <cit.>, and AI tools may be able to speed up drug development, reduce costs, and increase the novelty of both new therapeutics and their molecular targets <cit.>. Unfortunately, many tools that enable high-precision design or prediction of biological molecules for scientific discovery or therapeutic development can equally be used for harm. Just as a protein-folding model facilitates the design of new drugs, it can also help malicious actors bypass software meant to screen DNA orders for potentially pathogenic sequences. We highlight two significant dual-use characteristics of BDTs: – Unprecedented predictive accuracy reduces the amount of time, resources, and expertise required for experimental testing, iteration, and validation. This accelerates biological engineering, and biosecurity researchers have highlighted how this is likely to effectively shorten the risk chain for biological weapon development <cit.>. – Novel design capabilities enable the creation of pathogens that are more transmissible or virulent than known agents, that can evade current screening techniques, or that have the ability to target only specific species or sub-populations. BDTs may therefore raise the ceiling of harm posed by a particular biological agent. In the chemical weapons context, the combination of a generative chemical model and a toxicity predictor tool was able to rediscover known nerve agents and suggest novel compounds that were predicted to be equally toxic <cit.>. There have been no publicly recorded attempts to use BDTs to produce biological weapons or otherwise cause harm. However, malicious actors may plausibly seek to misuse these tools. The historical record shows that both state <cit.> and non-state <cit.> actors have pursued the capability to cause large-scale harm through biological agents. At present, there are significant technical difficulties involved in manufacturing and delivering harmful biological agents, and the authors expect the risk of misuse of BDTs to arise first and foremost from well-resourced and technically-competent actors, such as those associated with state-sponsored bioweapons programs (see <cit.> for an outline of such programs) or in existing research facilities. As AI and bioengineering technologies mature, the pool of actors capable of misusing BDTs could broaden. In particular, we expect that, as with many types of scientific technology, newer versions of tools will be more user-friendly, and, as tools are integrated with one another, it will be easier for less technically-skilled actors to carry out complex biological workflows. Moreover, advances in LLMs may further reduce technical barriers.
<cit.> speculates that Aum Shinrikyo may have been able to successfully weaponize anthrax if their scientists had LLM-powered lab assistants to troubleshoot the conversion of a vaccine strain into its pathogenic form, and a recent US Executive Order on Safe AI <cit.> warned about dual-use foundation models "lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors". § CHALLENGES IN REDUCING MISUSE RISK OF BIOLOGICAL DESIGN TOOLS Regulators have increasingly expressed interest in mitigating the risks at the intersection of AI and biology. The Executive Order on Safe AI expressed concern that AI will "substantially [lower] the barrier of entry for non-experts to design… biological, radiological, or nuclear (CBRN) weapons" <cit.>. The U.S. Congress has also considered an 'Artificial Intelligence and Biosecurity Risk Assessment Act' designed specifically to begin to address the concerns outlined above <cit.>. Policy conversations related to AI-enabled biological misuse are moving quickly, and it is an urgent matter to design proportionate policies that address misuse risks without unduly stifling innovation. Many governments are presently considering how to regulate AI systems, with a particular focus on foundation models <cit.> and 'high-risk' systems, such as those used for biometric identification or for the operation and management of critical infrastructure <cit.>. Throughout these deliberations, a number of promising regulatory approaches have emerged, such as monitoring and restricting access to specific computational resources ("compute") or introducing specific liability regimes for AI <cit.>. We find, however, that these regulatory approaches, which might be suitable for foundation models, will likely not translate well to BDTs. §.§ Differentiating between harmful and permissible BDTs As noted in Section 2, BDTs are dual-use technologies, with already-realized biomedical benefits and a simultaneous potential for harm. It is in our collective interest to avoid slowing down beneficial science by imposing disproportionate regulatory burdens on tools that pose little misuse risk. A principal challenge in reducing the misuse risk of BDTs will be understanding which properties and functions of BDTs correlate with risk. To that end, in a report on AI-facilitated biological weapon development, <cit.> propose a useful taxonomy of different BDT capabilities, including an assessment of the relative maturity of each capability, but do not suggest which tools are of particular concern. The Biden Administration's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence includes an initiative to "evaluate the potential for AI to be misused to enable the development or production of CBRN threat" <cit.>. U.S. policy on dual-use research of concern articulates a set of "experiments of concern", such as any which "alters the host range or tropism of the agent or toxin" <cit.>, and provides a model for a technology-agnostic definition of high-risk activities; however, these policies are contested and under revision <cit.>. Precise proposals for definitions of a "biological design tool of concern" will help developers and governments reach a consensus on which BDTs may possess misuse potential.
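The next subsection quantifies this compute argument. As a back-of-the-envelope companion to the training-compute figure quoted there (8 V100 GPUs for 28 days at 62% utilization, per the cited footnote), the following sketch reproduces the estimate; the FP16 peak throughput of roughly 31 TFLOP/s for a V100 is our assumption, not a figure from the paper.

```python
# Rough reproduction of the ~3.7e20 FLOP training-compute estimate
# for RoseTTAFold, discussed in the following subsection.
n_gpus = 8
days = 28
peak_flops = 31.3e12   # assumed V100 FP16 (non-tensor-core) peak, FLOP/s
utilization = 0.62     # utilization rate from the Epoch calculator footnote

total_flop = n_gpus * days * 24 * 3600 * peak_flops * utilization
print(f"~{total_flop:.1e} FLOP")                             # ~3.8e+20
print(f"{1e23 / total_flop:.0f}x below the 1e23 threshold")  # ~270x
```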
§.§ Controlling access to compute is unlikely to be effective for many BDTs Controlling access to computing infrastructure has attracted interest in the context of dual-use foundation models, which require large amounts of very specialized compute (i.e. the most advanced AI accelerator chips) to train. Training compute utilization has been shown to correlate with model performance and the "emergence" of new and more powerful capabilities <cit.>. Thus, introducing controls on the most advanced AI accelerator chips and monitoring large training runs on compute clusters could curb the development of models harboring capabilities that pose extreme risks <cit.>. This careful delineation also helps avoid introducing burdensome regulation on less well-resourced actors. As a result, regulatory frameworks being developed in the EU, the UK, and the US reflect the critical role of computational resources in the development of frontier language models.[The European Parliament's negotiating position on the proposed EU AI Act authorizes a European AI Office to "issue and periodically update guidelines on the thresholds that qualify training a foundation model as a large training run" and to "record and monitor known instances of large training runs" (see Amendment 529 of <cit.>). See also Case Study 3.9 in the UK white paper 'A pro-innovation approach to AI regulation' <cit.> and the US export controls on specific AI chips <cit.>.] Unfortunately, while such approaches may address risks from frontier foundation models, they seem unlikely to translate successfully to most BDTs. Training runs of frontier language models require hundreds or even thousands of advanced computer chips. GPT-4—currently the most capable LLM—is reported to have about 1.8 trillion parameters and to have cost about US$63mn to train <cit.>. Even state-of-the-art BDTs have far fewer parameters and require orders of magnitude less compute to train. For example, RoseTTAFold, a popular open-source protein-folding model, has 130 million parameters and requires four weeks of training time on eight V100 GPUs <cit.>, using at most 3.7×10^20 FLOP.[We use https://epochai.org/blog/estimating-training-computeEpoch's training compute calculator with conservative estimates (28 days, 8 GPUs, NVIDIA Tesla V100 SMX2, FP16 precision and 62% utilization rate).] At the time of writing, this would cost about US$10,000[On 19 September 2023, Google Cloud pricing quoted a month's usage of a V100 GPU at $1267.28. By using more advanced chips, actors may be able to reduce costs further.] and is three orders of magnitude below the 10^23-operation compute threshold for biological models specified in the recent Executive Order on Safe AI that triggers additional reporting requirements. These far lower compute and cost requirements make applying compute governance to BDTs an impractical choice for mitigating misuse risks. §.§ The decentralized and non-commercial nature of BDT development makes it harder to target and enforce regulation Unlike dual-use foundation models, a wide variety of BDTs are developed by a range of actors across the world, often in academic or start-up labs. This decentralization makes it much more difficult to implement regulatory proposals that may be appropriate when dealing with only a few large technology companies. In particular, – It is more difficult to track compliance: with more actors involved, a regulator would have to spend more time and money ensuring that model developers are following the rules.
In contrast, with only a few companies capable of developing a foundation model at the level of GPT-4, it is straightforward for regulators to identify those companies whose models might pose risks. – There is a greater risk of regulatory arbitrage: with BDT development spread across the world, many different countries may need to harmonize their legislation to effectively mitigate risks. This is also true for non-biology AI, but the overwhelming concentration of foundation model development in the US has meant that it is less of an immediate concern. – It is harder to consult all model developers to develop best practices: in contrast with the White House Voluntary Commitments, which initially applied to only seven advanced AI companies <cit.>, developing best practices for BDT development is likely to be slower and more difficult. Academic conferences and institutional collaborations like RosettaCommons will be key fora for developing such best practices. Moreover, the fact that many BDT developers lack the resources of large technology companies increases the likelihood that regulation will have the unintended effect of stifling innovation. For example, BDT developers cannot as easily implement potentially crucial cybersecurity features, such as those recommended in Section <ref>, owing to a lack of both funding and expertise. §.§ Open-source models are well-suited for science, but challenging targets for regulation The strong norms around open-source[By 'open-source', we mean model weights and accompanying code that have been freely published on the internet, typically under an open-source license.] development of BDTs complicate the challenge of minimizing the risks of BDT misuse. Free access to code, data, and methodologies is the engine of modern science. By allowing other scientists to reproduce and verify experiments, open-source models facilitate scientific progress. Furthermore, open-source software is used in the vast majority of all software and has been estimated to generate upwards of US$60 billion worldwide <cit.>. However, there are certain cases when open-sourcing software undermines other important social values, such as public safety <cit.>. Open-sourcing renders software highly vulnerable to modification, which may be great for development, but also allows safety features to be circumvented. For instance, the safety filter used in Stable Diffusion—a state-of-the-art image generation system—can be nullified by deleting one line of code.[We do not provide further technical details, as part of responsible disclosure. This claim is informed by discussion with several technical researchers. See <cit.> for a description of other attacks.] This enables anyone to produce prohibited images, including those featuring nudity and 'deepfakes' of real-life people. In the case of BDTs, open-source weights undermine the implementation of potential safeguards, which we discuss in Section <ref>. Export controls are a common approach for mitigating the potential misuse of dual-use technologies, but often do not apply to 'fundamental research'. Existing U.S. export controls apply to many dual-use items, including firearms parts, biological materials, and certain forms of information. Through a combination of regulations from the U.S. Department of Commerce, the Department of State, and the Office of Foreign Assets Control, US-based publishers and distributors of dual-use items are generally required to obtain a license prior to dissemination <cit.>.
However, information and materials that are "regularly published" or are the result of "fundamental research" are exempted from export controls <cit.>, but the definitions of "published" and "fundamental research" are unclear. Department of Commerce regulations explicitly exempt software that is posted publicly on the internet (with the exception of specified encryption and firearm production software) <cit.>. The Department of State, by contrast, does not provide an exemption for information published on the internet <cit.>. In the absence of specific guidance from regulatory agencies, it is unclear how this web of regulations around export controls applies to BDTs. Another possible mechanism for reducing the risks of BDT misuse is imposing legal liability on developers of certain BDTs. However, this approach is stymied by the predominant licensing norms in open-source software. Virtually all open-source licenses come with clauses that absolve the developers of liability for misuse. Any liability regime would have to reconcile this existing norm. § MEASURES FOR RESPONSIBLE GOVERNANCE OF BIOLOGICAL DESIGN TOOLS Mitigating risks arising from the misuse of biological design tools, while preserving their beneficial scientific uses, will require action across various stages of their lifecycle. In Figure 1, we outline 25 measures that span the following phases of BDT development and use: 1. Dataset Collection and Processing: BDTs cannot be trained without specialized biological datasets. The development of datasets suitable for training BDTs has seen recent investment from both philanthropists and industry[For example, the Open Datasets Initiative funded by Schmidt Futures, the partnership of A-Alpha Bio with Lawrence Livermore National Laboratory, and the partnership of Ginkgo Bioworks and Google Cloud.] and has been discussed as a target for biosecurity interventions <cit.>. The recent Executive Order on Safe AI <cit.> calls for a study that "considers the national security implications of the use of data and datasets, especially those associated with pathogens and omics studies." 2. Model Development and Training: Developing BDTs requires training the neural network(s) core to their functionality, optimizing and compressing trained models, and developing APIs, visualizations, and other interfaces to facilitate interaction with the model. 3. Model Deployment: There are many ways to make a BDT available for widespread use, such as open-sourcing model weights, using a hosting platform like Hugging Face or ColabFold, creating an API by which users can query the model, or releasing a database of predictions such as the ESM Metagenomic Atlas <cit.> and the AlphaFold Protein Structure Database <cit.>. 4. Operation and Monitoring: For BDTs to be misused, their designs must be synthesized into physical biological agents, then deliberately or accidentally released. Though a full discussion of laboratory biosafety and biosecurity is beyond the scope of this paper, we highlight several promising interventions at the "digital-to-physical frontier" <cit.> at which digital BDT designs are converted into physical biological materials. We selected BDT-relevant measures from two reports on mitigating risks at the convergence of AI and biotechnology <cit.>, as well as from recent reports and recommendations on best practices for dual-use foundation models <cit.>. Although we believe every measure listed in this paper merits investigation, we do not believe they are all equally promising.
A range of actors will play a role in preventing the misuse of BDTs. Some measures, such as export controls, expanded liability, and mandating nucleic acid synthesis screening, require government regulation. Other measures, such as model evaluations and structured access, are currently being implemented as part of voluntary governance in developer communities. We do not provide a comprehensive solution to BDT governance: best practices in this area are still being developed, not all measures we recommend are compatible with open-source model weights (Supplementary Table 1), and the efficacy of existing measures is heavily contested (see Section 3). Instead, we present a menu of 25 measures for mitigating risks of BDT misuse that we believe merit further exploration. We divide these measures into six thematic areas, and believe seven measures are particularly promising based upon their perceived feasibility of implementation and potential impact on risk mitigation: model evaluations for dangerous capabilities, vulnerability reporting, structured access, know your customer, nucleic acid synthesis screening, securing lab equipment, and model-sharing infrastructure. §.§ Responsible Development One way to prevent BDT misuse is to limit the development of BDTs that pose the greatest risks. As discussed in Section <ref>, it is difficult to distinguish between harmful and permissible BDTs, so overall we argue that other parts of the development lifecycle are better targets for governance. That being said, we see six key opportunities for regulators to act to control the development of BDTs: 1. Expanded Dual-Use Review: Many countries have policies that regulate dual-use biological research, requiring institutional review, increased laboratory safety, and other precautions for research that could facilitate harm. Dual-use review could be extended to laboratory experiments that generate datasets which could be used to train BDTs with high misuse risk (e.g. measurements of cell toxicity, immune activation by viral vectors, or viral transmissibility). 2. Model Licensing for Training or Release: As is being considered for general-purpose AI systems, governments could require that BDT developers demonstrate compliance with certain institutional risk-mitigation processes or cybersecurity standards to obtain licenses to train or release certain high-misuse-risk BDT models. 3. Expanded Developer Liability: Expanded liability could incentivize developers to implement precautionary measures. The European Commission proposed an AI Liability Directive, which would extend consumer protections against damages caused with the involvement of AI systems <cit.>. There is ambiguity about whether open-source software will fall under this directive, especially when used as a component within commercial products <cit.>. One potential approach is to expand liability to developers who release model weights, but only if catastrophic damages occur <cit.>. 4. Voluntary Commitments: In a similar manner to the Voluntary Commitments by AI companies secured by the Biden Administration <cit.>, leading BDT developers could voluntarily commit to undertake practices intended to reduce misuse risks. This could include, for instance, employing model evaluations prior to release, refraining from publishing preprints or releasing model weights until such evaluations are complete, post-deployment monitoring, or other measures listed in this paper.
The Institute for Protein Design's announcement of an effort to develop new voluntary guidelines suggests such commitments may be forthcoming <cit.>. 5. Export Controls: As is done for types of information deemed dual-use, such as specifications for certain nuclear technologies, governments could impose export controls on certain BDTs deemed sufficiently dual-use, which would in effect restrict open-source release. However, at present, most software that is publicly available without restrictions upon further dissemination is exempt from the US and EU regimes <cit.>. 6. Publication Norms: Developers could adopt as a norm the practices of disclosing Impact Statements (Measure 10), withholding certain types of information from publications (or preprints) due to their potential for misuse, or refraining from releasing weights at the time of publication until risk assessments or model evaluations have deemed them sufficiently safe to release. Likewise, journals and conferences could facilitate the adoption of such norms by making publication or admission contingent on adherence to these practices. §.§ Risk Assessment Continual risk assessment across the development, deployment, and use of BDTs is necessary because their capabilities and potential for misuse are rapidly evolving. We separate risk assessment into three different measures, all of which are currently being tested by developers of general-purpose models: 7. Model Evaluations for Dangerous Capabilities (particularly promising): Benchmarks for dangerous capabilities could be designed by developers, independent third parties, or government organizations <cit.>. If dangerous capabilities are identified, developers can respond by pausing ongoing research and development, implementing Structured Access (Measure 16), or prioritizing cybersecurity (Measures 20-22). 8. Red Teaming: Structured tests could be conducted to identify flaws, limitations, and vulnerabilities in BDT software—in particular in the security features, if any, that it possesses—with the goal of remediation. This includes attempting to identify biological design tasks or inputs that elicit apparently harmful output, and efforts to circumvent restrictions on model weights, input/output screening techniques, and access controls. 9. Monitoring for Misuse: Developers could set up systems to automatically monitor user inputs and model outputs for potential misuse, with concerning results flagged for human review. This will help developers become aware of and respond to unanticipated potential for misuse, and should be paired with infrastructure to support Vulnerability Reporting (Measure 12). Monitoring is more easily done with API-based deployment, but local models could be bundled with oversight models that detect misuse. This is one of the emerging processes for frontier AI safety identified by the UK government <cit.>. §.§ Transparency Transparency is needed for the development of safe, accountable, and trustworthy AI systems. The measures available to encourage transparency for BDTs seem much the same as those for other dual-use AI systems, including: 10. Impact Statements: Developers could establish as a norm that, upon releasing a model (or its constitutive components), they disclose why they believe doing so does not present a significant risk (or surpass some risk threshold).
Standardized approaches for documenting the limitations and ethical considerations of models, such as model cards <cit.>, could promote this norm. 11. Information-Sharing with Regulators: Model capabilities could be reported to regulators before (or shortly after) deployment. This requirement has been proposed for foundation models in the EU AI Act and may also apply to certain generative algorithms in China <cit.>. 12. Vulnerability Reporting (particularly promising): Developers could create (and regulators could require) mechanisms for software users to report concerning model behavior or other security flaws to the developers. This can help developers quickly fix vulnerabilities or, in extreme cases, rapidly roll back or withdraw a model. As an incentive to report potential risks, "bug bounties" could provide financial rewards to the reporter. See <cit.>. 13. Watermarking: Leaving a recognizable signature in AI-generated biological designs could help with attribution and with establishing liability for misuse. The White House Voluntary Commitments <cit.> and Chinese law <cit.> both include watermarking for generated content. Watermarking efforts could build upon existing research focused on genetic barcodes that track engineered microbes <cit.>. §.§ Access Management There is a spectrum of methods for releasing AI models, ranging from open-source releases to private, proprietary software <cit.>. Different release methods give users different levels of access to AI system components. Developers will need to select access management strategies that appropriately trade off between promoting innovation and preventing misuse. Some strategies that can be employed to limit access to dangerous capabilities of open-source BDTs are: 14. Data Curation: Specialized datasets used to train BDTs could be curated to limit models' ability to provide high-risk outputs. Curation may involve omitting a limited subset of research (e.g. on enhancement of potential pandemic pathogens) from public datasets <cit.> or selectively adding noise to datasets <cit.> (e.g. adding noise to data related to nerve agents). 15. Data Use Agreements: Users could be required to sign agreements outlining acceptable use prior to obtaining access to datasets that could be used to train BDTs. Some biological databases, such as those containing patient data or infectious disease sequences, already require such agreements <cit.>. Data use agreements outline circumstances in which access can be revoked and provide legal recourse for dataset developers if they find their data has been misused. 16. Structured Access (particularly promising): Structured access involves implementing role-based access control to software—generally in adherence to the principle of least privilege <cit.>. The archetypical method for implementing structured access is to deploy a model to the cloud and use an API to limit access to authenticated users. This kind of deployment limits the ability to circumvent Input/Output Filtering (Measure 19), facilitates the implementation of Know Your Customer (Measure 17) processes, provides opportunities for Monitoring for Misuse (Measure 9), supports Securing Weights (Measure 21), and allows access to be revoked for users exhibiting concerning (mis)use. Structured access is not compatible with open-source model weights, and cloud hosting expenses can be prohibitive for smaller non-commercial developers.
Developers will need financial and technical support to implement structured access at scale, which could include Model-Sharing Infrastructure (Measure 24) and Public Compute (Measure 23). 17. Know Your Customer (particularly promising): Know Your Customer (KYC) processes involve collecting data to verify the legitimacy of a customer (or user) before providing them with a service. For example, KYC measures are required in the financial sector as part of anti-money laundering efforts. KYC could allow instances of misuse to be attributed to specific users and, when Structured Access (Measure 16) is in place, access could be restricted (to some extent) for users with potentially concerning attributes. For datasets and models that are not fully open-sourced, KYC due diligence could be conducted before approving a data use agreement or granting model access. 18. Nucleic Acid Synthesis Screening (particularly promising): Preventing ordinary people from ordering smallpox DNA, as well as defending against more complex threats related to commercial nucleic acid synthesis, has been a longstanding priority for the biosecurity community <cit.> and has been proposed as a way to secure the "digital-to-physical frontier" <cit.> and protect against AI being used to engineer dangerous biological materials <cit.>. Nucleic acid synthesis screening could be universalized via regulation requiring synthesis companies to demonstrate compliance with screening standards, or via commitments from scientists not to purchase services unless companies demonstrate adherence to best practices in screening. 19. Input/Output Filtering: Developers could implement filters that refuse potentially high-risk inputs or outputs (e.g. outputs exhibiting similarity to more toxic compounds), potentially informed by data acquired through Monitoring for Misuse (Measure 9). This is likely to be more difficult for de novo designs. These filters may also be useful to prevent model extraction attacks, which reconstruct a local copy of a model based only on its API responses to user queries and which have been demonstrated for both image classification and natural language models <cit.>. In the LLM context, Anthropic responded to bioweapons-focused red teaming by adding classifier-based filters on model outputs <cit.>. §.§ Cybersecurity Even BDTs with excellent access management could be misused due to cybersecurity vulnerabilities. Cybersecurity requirements are proposed in the EU AI Act <cit.> and included in the White House Voluntary Commitments <cit.>, and we consider investment in cybersecurity across the BDT development lifecycle a high priority: 20. Database Security: Public biological databases are used to build nucleic acid synthesis screening tools and track clinical events. Recent tabletop exercises have explored how database poisoning attacks could be used to undermine medical countermeasures <cit.>; cybersecurity best practices could allow database maintainers to detect tampering and restore integrity. 21. Securing Weights: Weights for closed-source models could be stolen by malicious external actors or leaked by insiders within developer communities. Investing in cybersecurity and insider threat safeguards to protect unreleased model weights is included in the White House Voluntary Commitments <cit.>. 22.
Securing Lab Equipment (particularly promising): Securing lab equipment could include practices like implementing screening on DNA benchtop synthesizers to prevent the production of specific pathogens <cit.>, ensuring that contract research organizations sequence customer-provided samples before running them on their equipment <cit.>, and mandating minimum network security standards for biofoundry facilities. §.§ Investing in Resilience Governments should consider how their resources can be used to support developers in adopting governance measures. Successful implementation of many of the measures above will rely on cloud hosting of BDTs combined with effective cybersecurity practices. Without supportive investment, these measures may be infeasible or unsustainable for smaller non-commercial developers. Strategic investments could support responsible governance across the BDT lifecycle and decrease potential harms, should a BDT be misused. 23. Public Compute: Governments could procure and redistribute computational resources to researchers using BDTs for research on pandemic prevention and response. The establishment of this infrastructure could even expand access to these research tools—in particular, to underserved communities—through fast-tracked grants. This infrastructure could also provide a means for governments to promote best practices: access to computational resources should be conditional on adherence to responsible development and risk assessment measures, such as Publication Norms (Measure 6), Vulnerability Reporting (Measure 12), and other practices covered in Voluntary Commitments (Measure 4). 24. Model-Sharing Infrastructure (particularly promising): Governments could bear the costs of establishing and maintaining infrastructure to support secure and equitable sharing of BDTs. This could include secure hosting infrastructure that complies with relevant cybersecurity standards and implements fair policies for granting Structured Access (Measure 16) through an API.
Several regulatory proposals intended to reduce the risks of frontier language models, such as restricting compute access and export controls, are unlikely to be as effective for BDTs due to the significantly smaller amounts of compute needed to train BDTs and the generally decentralized and non-commercial nature of their development.We present a range of risk mitigation measures and highlight seven that we believe to be particularly promising: model evaluations for dangerous capabilities, vulnerability reporting, structured access. know your customer, nucleic acid synthesis screening, securing lab equipment, and model-sharing infrastructure. Fully open-sourcing model weights is incompatible with structured access and know your customer efforts, and would undermine many of the other risk mitigation measures identified.The scope of our analysis is constrained by the quickly evolving nature of both BDTs and the regulatory landscape. We stress that our proposals are theoretical and could benefit from testing and validation (i.e. regulatory sandboxing) to ascertain their efficacy and feasibility. We recognize the need for subsequent, more granular studies to refine the proposed measures. As BDTs and their potential misuse risks continue to evolve, empirical studies and insight from other efforts to regulate AI systems will be critical to refine and adapt policies. It is our hope that initiating these conversations now equips stakeholders to implement prudent, proportionate safeguards as these tools evolve.§ ACKNOWLEDGEMENTSWe gratefully acknowledge Sophie Rose, Cassidy Nelson, and James Smith, as well as both anonymous reviewers from the NeurIPS 2023 Workshop on Regulatable ML, for helpful commentary on this manuscript. Any mistakes remain our own. | http://arxiv.org/abs/2311.15936v3 | {
"authors": [
"Richard Moulange",
"Max Langenkamp",
"Tessa Alexanian",
"Samuel Curtis",
"Morgan Livingston"
],
"categories": [
"cs.CY",
"cs.LG"
],
"primary_category": "cs.CY",
"published": "20231127154502",
"title": "Towards Responsible Governance of Biological Design Tools"
} |
§ ABSTRACT

Industrial simulations of turbulent flows often rely on Reynolds-averaged Navier-Stokes (RANS) turbulence models, which contain numerous closure coefficients that need to be calibrated. Although tuning these coefficients can produce significantly improved predictive accuracy, their default values are often used. We believe users do not calibrate RANS models for several reasons: there is no clearly recommended framework to optimize these coefficients; the average user does not have the expertise to implement such a framework; and, the optimization of the values of these coefficients can be a computationally expensive process. In this work, we address these issues by proposing a semi-automated calibration of these coefficients using a new framework (referred to as turbo-RANS) based on Bayesian optimization. Firstly, we introduce the generalized error and default coefficient preference (GEDCP) objective function, which can be used with integral, sparse, or dense reference data for the purpose of calibrating RANS turbulence closure model coefficients. Secondly, we describe a Bayesian optimization-based algorithm for conducting the calibration of these model coefficients. We demonstrate the computationally efficient performance of turbo-RANS for three example cases: predicting the lift coefficient of an airfoil; predicting the velocity and turbulent kinetic energy fields for a separated flow; and, predicting the wall pressure coefficient distribution for flow through a converging-diverging channel. In the first two examples, we calibrate the k-ω shear stress transport (SST) turbulence model and, in the last example, we calibrate user-specified coefficients for the Generalized k-ω (GEKO) model in Ansys Fluent. An in-depth hyperparameter tuning study is conducted to recommend efficient settings for the turbo-RANS optimization procedure. Towards the goal of facilitating RANS turbulence closure model calibration, we provide an open-source implementation of the turbo-RANS framework that includes OpenFOAM, Ansys Fluent, and solver-agnostic templates for user application.

Keywords: Turbulence modelling, Bayesian optimization, calibration, data-driven methods, Reynolds-averaged Navier-Stokes, GEKO model.

Abbreviations: GEDCP: generalized error and default coefficient preference; GEKO: generalized k-ω; UCB: upper confidence bound; POI: probability of improvement; EI: expected improvement; MAPE: mean absolute percentage error; MSE: mean squared error; JSON: JavaScript object notation.

§ INTRODUCTION

Turbulent flows are found in many engineering applications such as automotive, aerospace, chemical, nuclear, and wind energy. Engineering activities such as design, optimization, safety assessment, maintenance, and failure investigation in these industries frequently require predicting the behaviour of turbulent flows. Predicting turbulent flow is a necessary but complex problem that is often addressed using experimental or computational approaches. While experimental approaches often produce the most physically accurate results, they are expensive and limited in the quantities of interest (information) they can provide.
For these reasons, predictions obtained using computational fluid dynamics (CFD) are attractive due to their low cost, capability to rapidly explore a (frequently large) design space, and ability to access detailed information about practically all aspects of the flow.

Many industrial CFD simulations rely on solving the Reynolds-averaged Navier-Stokes (RANS) equations, which are more computationally affordable than conducting large-eddy simulation (LES) <cit.>. RANS models have been considered an industrial workhorse for the past several decades. However, there are several well-known deficiencies with the application of these methods. A major source of error in RANS is the turbulence model, which often relies on a single-point, linear-eddy-viscosity closure model (e.g., turbulence closure models such as k-ε <cit.>, k-ω <cit.>, Spalart-Allmaras <cit.>). The past several decades of experience with RANS have resulted in the development of sophisticated turbulence closure models such as the k-ω shear stress transport (SST) model <cit.>, the k-ε-ϕ-f model <cit.>, and the recently-developed generalized k-ω (GEKO) model <cit.>. As a result of the assumptions and approximations underpinning these closure models, even these state-of-the-art models often fail to accurately predict important flow phenomena for various engineering and industrial applications <cit.>.

Recently, there has been widespread interest in using data-driven approaches to improve and calibrate RANS models <cit.>. Because of the ability to automate traditional physics-based and heuristic turbulence modeling, more advanced models can potentially be developed. Machine learning has been used to develop complex non-linear eddy viscosity models, which can provide good approximations for highly-resolved LES and direct numerical simulation (DNS) closure terms for use with the RANS modelling framework <cit.>. Additionally, machine learning can be used to generate zonal eddy viscosity models <cit.> and LES subgrid closure models <cit.>, and to accelerate DNS through the application of super-resolution models <cit.>. While some approaches aim to replace traditional turbulence closure models used within the RANS framework, another promising approach is using reference data to calibrate closure coefficients in the turbulence models. Most RANS turbulence closure models contain a number of tuneable coefficients, which can be calibrated to improve the RANS predictive performance for certain classes of flows. The problem of calibrating the turbulence model coefficients so that a prediction is in better conformance with the reference data can be formulated as an optimization problem. To this purpose, a number of previous studies have proposed the use of genetic algorithms <cit.>, the Nelder-Mead simplex algorithm <cit.>, the optimal Latin hypercube method <cit.>, and the global search method <cit.> for addressing this problem. Bayesian optimization was demonstrated to be an effective method for calibrating model coefficients in multiscale dynamical systems by Nadiga et al. <cit.>. As discussed by Diessner et al. <cit.>, the Bayesian optimization methodology is particularly promising for addressing many fluid mechanics problems involving model coefficient calibration because it is efficient at optimizing expensive functions. For example, Morita et al. <cit.> demonstrated the efficient performance of Bayesian optimization for applications to shape optimization and to optimal icing control problems.
Naturally, different optimization problems will require different optimization techniques. Some techniques are more suited for high-dimensional problems, noisy problems, or problems where the gradient of the objective function is analytically available. In our view, the primary consideration for the turbulence model calibration problem is the computational cost associated with evaluating the objective function. If we formulate the objective function as an error measure between our reference data and the associated predictions of this data for a given set of model coefficients, then every evaluation of the objective function will require a new CFD calculation using the new turbulence model coefficients. The cost of a CFD calculation varies with the specific problem, but for an industrially relevant flow, a single three-dimensional (3D) RANS calculation can easily require on the order of hundreds to thousands of central processing unit (CPU) hours. Therefore, this optimization problem demands a technique that can carefully and efficiently select new query points (viz., new values for the tuneable model coefficients at which the evaluation of the predictive performance will occur next). Bayesian optimization is a popular choice for optimizing computationally expensive objective functions. For example, Bayesian optimization is commonly used in neural network hyperparameter tuning, where each evaluation of the objective function typically involves an expensive training run <cit.>.

Bayesian approaches have been applied for calibrating turbulence models in a number of previous studies. In particular, Bayesian inference has received attention because it can provide information on uncertainty in a RANS model prediction. Maruyama et al. <cit.> applied Bayesian inference to infer model coefficients of the Spalart-Allmaras (SA) model for a transonic flow over a wing. SA model coefficients were also inferred for several transonic airfoil problems in an investigation conducted by Da Ronch et al. <cit.>, who found that a properly calibrated set of model coefficients improved the predictive accuracy on transonic test cases outside of the reference data used for the calibration. Ray et al. <cit.> developed an analytical model for the optimal coefficients of the k-ε turbulence model for a jet in a cross flow and also performed Bayesian calibration of the turbulence model coefficients <cit.>. In particular, these investigators hypothesized that for many flows, model-form errors in RANS may be significantly smaller than coefficient calibration errors. A similar investigation, involving the calibration of the SA model for a jet flow with a concomitant determination of the uncertainty in the tuned model coefficients, was undertaken by Li et al. <cit.>. Edeling et al. <cit.> analyzed the error in the Launder-Sharma k-ε turbulence model for wall-bounded flows and applied Bayesian inference for calibrating the model coefficients. An earlier investigation conducted by Oliver and Moser <cit.> applied Bayesian uncertainty quantification to four eddy-viscosity models. Recently, Fischer et al. <cit.> coupled a neural network with Bayesian optimization to predict optimal values of coefficients of the generalized k-ω (GEKO) model <cit.> for a trench-shaped film-cooling flow. Another popular approach is the field inversion machine learning approach proposed by Duraisamy et al.
<cit.>, which is often applied to infer local values of the calibration coefficients.

In this work, we introduce the tuning parameters using Bayesian optimization for RANS (turbo-RANS) platform. More specifically, turbo-RANS is a framework for applying Bayesian optimization to the RANS coefficient calibration problem. Bayesian optimization is particularly suited to problems where an evaluation of the objective function is computationally expensive—this is certainly the case for the turbulence model calibration problem in RANS. We formulate an objective function for use with turbo-RANS that is versatile in terms of the types of reference data that may be used. For three different model coefficient calibration problems, we demonstrate that turbo-RANS can be applied to tune turbulence model coefficients in a computationally efficient manner. Another contribution of the present work is the provision of an open-source implementation of our proposed calibration framework. We illustrate the application of this framework on several examples; the codes and templates have been made available in a GitHub repository <cit.>. We also investigate how the hyperparameters associated with the Bayesian optimization can be chosen and, in the course of this investigation, we provide general recommendations for the settings of these various hyperparameters that lead to a computationally efficient and stable calibration of the RANS model coefficients.

A number of previous studies have applied a Bayesian framework to the turbulence model coefficient calibration problem. However, most of these previous studies have focused on the problem as an inference problem. For example, given a set of RANS calculations and some reference data, Bayesian inference can provide a best estimate plus uncertainty (BEPU) of the turbulence closure coefficients. However, this methodology is computationally prohibitive for use with very expensive RANS computations (forward model predictions). Indeed, the computational demands of the RANS model predictions make it practically impossible as a tool for sampling from the posterior distribution using Markov chain Monte Carlo (MCMC) to produce a sequence of dependent draws from the posterior. In the case where one is under the constraint of a very limited computational budget (typically less than about 100 RANS model calculations), we advocate the application of Bayesian optimization to the problem: given previous knowledge of the problem, what is the next most efficient point to query? The use of the Bayesian optimization approach here results in a naturally computationally efficient optimization procedure. The Bayesian optimization approach allows the exploration and exploitation tradeoff to be tightly controlled during optimization. Moreover, the objective function proposed here is flexible and pragmatic and, as a result, is well-suited for use with Bayesian optimization. Whereas many previous studies used simple objective functions (e.g., root-mean-squared error), the objective function proposed here produces a well-posed optimization problem and can simultaneously be used with different types of reference data. We demonstrate that using this objective function for calibration leads generally to an accurate prediction of various quantities of interest derived from the turbulent flow. Finally, we believe that the complexity of the implementation and the ease of use (or lack thereof) are often overlooked factors that limit the widespread use of Bayesian calibration.
To address these factors, we present a straightforward methodology and provide an open-source implementation that can be immediately used in a variety of CFD contexts.

This paper is organized as follows. In Section <ref>, we highlight relevant concepts of Bayesian optimization. We then discuss the formulation of the generalized objective function used in the turbo-RANS framework, providing a rationale for this formulation. The turbo-RANS framework is then described in detail along with how this framework is implemented. In Section <ref>, we apply turbo-RANS to calibrate coefficients in the k-ω SST and GEKO turbulence closure models for three different types of reference data: namely, integral parameter data, dense DNS data, and sparse wall measurement data. In Section <ref>, we perform an extensive study of how to choose the hyperparameters associated with the Bayesian optimization used in the RANS model calibration. Based on this hyperparameter investigation, we provide recommended settings for the hyperparameters that appear in the Bayesian optimization schema, including those used in the Gaussian process regression. Finally, Section <ref> provides a conclusion—here, we contextualize the results obtained in this study and provide suggestions for future work.

§ BACKGROUND AND METHODOLOGY

This section is organized as follows. Section <ref> describes Bayesian optimization, including Gaussian process regression using the Matern kernel covariance function (Section <ref>) and various utility functions (Sections <ref>–<ref>). Section <ref> explains the proposed generalized error and default coefficient preference (GEDCP) objective function, which we recommend for optimizing turbulence model coefficients. Section <ref> provides details and a discussion of the turbo-RANS framework. Implementation of this framework is discussed in Section <ref>.

§.§ Bayesian optimization

We summarize the main ideas underpinning Bayesian optimization and Gaussian process regression as they are applied in the turbo-RANS framework. Bayesian optimization is described in detail in <cit.>, and Gaussian process regression is explained in detail in <cit.>. The goal of Bayesian optimization is to maximize an objective function f(x) with respect to a vector of parameters x. Since this objective function can be computationally expensive to compute, we create a surrogate model of the objective function that is significantly less expensive to compute. It is useful for the surrogate model to return both the best estimate μ(x) of the objective function and the uncertainty σ(x) in this best estimate. We introduce a utility function α(x) ≡ α(μ(x),σ(x)) that combines the information embodied in the predicted objective function value and the uncertainty in this value into a single scalar-valued function. The next x value queried is the point that maximizes the utility function α(x). This utility function allows us to control the tradeoff between exploration and exploitation. Exploration is the evaluation of the objective function in new regions far away from the currently available data (i.e., points with high uncertainty σ(x)). Exploitation is the evaluation of the objective function in regions where it is expected to be favourable based on past data (i.e., points with a high expected value of μ(x)).
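The overall procedure can be summarized in a short, self-contained sketch. The toy one-dimensional objective, bounds, and hyperparameter values below are arbitrary illustrations, and the Gaussian process surrogate and the UCB utility used here are described in the following subsections:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy stand-in for an expensive RANS-based objective (maximum at x = 0.31).
    return -(x - 0.31) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.25, 0.37, size=(3, 1))          # initial sample points
y = objective(X).ravel()
candidates = np.linspace(0.25, 0.37, 500).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                                   # surrogate model of f(x)
    mu, sigma = gp.predict(candidates, return_std=True)
    utility = mu + 2.0 * sigma                     # UCB utility with kappa = 2
    x_next = candidates[np.argmax(utility)].reshape(1, 1)
    X = np.vstack([X, x_next])                     # probe the suggested point
    y = np.append(y, objective(x_next).ravel())

print(X[np.argmax(y)], y.max())                    # best point found

In a turbulence model calibration setting, each call to the objective function would be replaced by a full RANS calculation, which is why the number of probed points must be kept small.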
The primary advantage of Bayesian optimization is the ability to control this exploration/exploitation tradeoff, which produces an optimization procedure that is efficient in terms of the number of objective function evaluations required to find the optimal value of x.

§.§.§ Gaussian process regression

In Bayesian optimization, Gaussian process regression is typically used to construct a surrogate model of the objective function. Given a dataset 𝒟 consisting of a set of values of the objective function f(x) at points x ({(x_1,f(x_1)), (x_2, f(x_2)), …, (x_n,f(x_n))}), we can fit a Gaussian process regression model 𝒢𝒫(x) and use it as a surrogate model for f(x). A Gaussian process regression model fits a Gaussian process to f(x') for all x' in the domain of definition of the function. The Gaussian process is conditioned on the given data 𝒟 consisting of f(x) measured at points x, so f(x') ∼ N(μ(x'), k(x,x';θ)), where N(μ(x'), k(x,x';θ)) is the Gaussian process with mean function μ(x') and k(x,x';θ) is the covariance function between the pair of points x and x'. The covariance function between the pair of points depends on the hyperparameters θ. A complete description of Gaussian process regression is provided in the book by Rasmussen and Williams <cit.>.

In this work, the covariance function k(x,x') (suppressing the vector θ of hyperparameters from the notation) used for Gaussian process regression is the Matern kernel, which has the following form:

k(x_i,x_j) = 1/(Γ(ν) 2^ν-1) (√(2ν)/l_k d(x_i,x_j))^ν K_ν(√(2ν)/l_k d(x_i,x_j)),

where k(x_i,x_j) is the covariance of the points x_i and x_j, ν is a kernel parameter (hyperparameter), Γ(ν) is the gamma function evaluated at ν, l_k (hyperparameter) is the kernel length scale (or characteristic length scale of the Gaussian process), d(x_i,x_j) is the Euclidean distance between x_i and x_j, and K_ν( · ) is the modified Bessel function of order ν. More details are available in the Matern kernel documentation <cit.>. The critical hyperparameters of the Matern covariance function (kernel) are ν and l_k, which are discussed below.

The behavior of the Matern kernel depends on the value of the hyperparameter ν. When ν=1/2, this kernel reduces to the absolute exponential kernel. When ν→∞, the Matern kernel reduces to the squared exponential kernel. For ν=3/2 and 5/2, the associated Gaussian process is once- and twice-differentiable, respectively.

The length scale l_k of the Matern kernel is a hyperparameter that needs to be optimized for a given dataset 𝒟. In this work, we use scikit-learn for the Gaussian process regression, which automatically fits this hyperparameter to the available data. However, an initial guess for l_k needs to be specified, and tuning this initial guess can lead to a more efficient fitting of the Gaussian process which, in turn, improves the subsequent Bayesian optimization performance <cit.>. We reparameterize this initial guess in terms of a relative length scale l. The l_k,c_i length scale parameter associated with a model coefficient c_i is determined as follows:

l_k,c_i = l (c_i,upper - c_i,lower),

where l_k,c_i is an initial guess for the kernel length scale associated with coefficient c_i. Moreover, c_i,upper and c_i,lower are the upper and lower bounds of the domain of definition for the coefficient c_i, respectively. Also, l_k is the length scale vector with components (l_k,c_0,l_k,c_1,...,l_k,c_N). This vector defines an anisotropic Matern covariance function (kernel) in the sense that the length scale associated with each model coefficient c_i can have a different value. The motivation behind parameterizing l_k as a vector is to specify a scale-appropriate initial guess for l_k (viz., an appropriate scale for each model coefficient). For example, two model coefficients c_1 ∈ [0.5,1.5] and c_2 ∈ [0.001,0.005] will require two different length scales. Eq. (<ref>) will apply the same relative length scale to both dimensions, resulting in scale-appropriate length scales for each dimension. Finally, we note that the hyperparameter vector θ for the Matern covariance function (kernel) has the explicit form θ = (ν,l_k^T)^T, where the superscript “T” denotes vector transposition.
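As a concrete illustration of this parameterization, the following minimal sketch constructs an anisotropic Matern kernel with the initial length scale guess of Eq. (<ref>), assuming the scikit-learn implementation noted above; the coefficient bounds, probed points, and objective values shown are hypothetical:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical bounds for two model coefficients (e.g., a_1 and beta*).
lower = np.array([0.24, 0.045])
upper = np.array([0.60, 0.14])

l_rel = 0.1                                # relative length scale l
l_k = l_rel * (upper - lower)              # Eq. above: one initial scale per coefficient

# Anisotropic Matern kernel (nu = 5/2 gives a twice-differentiable process).
kernel = Matern(length_scale=l_k, nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# X: previously probed coefficient sets; y: corresponding objective values.
X = np.array([[0.31, 0.090],
              [0.40, 0.070],
              [0.28, 0.110]])
y = np.array([-0.20, -0.15, -0.25])
gp.fit(X, y)                               # kernel length scales are re-fit here

mu, sigma = gp.predict(np.array([[0.34, 0.08]]), return_std=True)

Note that the values in l_k serve only as a starting point: fitting the Gaussian process re-optimizes the length scales, so the role of the relative length scale l is to make that internal optimization start from a scale-appropriate guess.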
§.§.§ Upper confidence bound (UCB) utility function

The upper confidence bound (UCB) utility function features a clear exploration/exploitation tradeoff that is controlled by the κ hyperparameter. The UCB utility function is given by

α_UCB(x) = μ(x) + κ σ(x),

where α_UCB(x) is the utility function value at point x, μ(x) is the expected value of the objective function, and σ(x) is the standard deviation at point x (viz., σ(x) = k(x,x)^1/2). Both μ(x) and σ(x) are readily available from the Gaussian process regression of the objective function. Larger values of κ lead to more exploration of points in the parameter space with large uncertainty, and smaller values of κ lead to more exploitation of regions in the parameter space where the expected value μ(x) is known to be large.

§.§.§ Probability of improvement (POI) utility function

The probability of improvement (POI) utility function returns the probability that a given point x will result in an improvement of the objective function value. The POI hyperparameter ξ specifies a minimum probability of improvement. The POI utility function is given by

α_POI(x) = CDF((μ(x) - μ^⋆ - ξ)/σ(x)),

where α_POI(x) is the utility function value at point x, CDF(z) is the cumulative distribution function of the standard Gaussian distribution (viz., a Gaussian distribution with zero mean and unit variance), μ(x) is the expected value of the objective function, μ^⋆ is the best objective function value observed so far, and σ(x) is the standard deviation at x. Again, μ(x) and σ(x) are immediately available from the Gaussian process regression of the objective function. Larger values of ξ lead to a more explorative behavior, whereas smaller values of ξ lead to a more exploitative behavior.

§.§.§ Expected improvement (EI) utility function

The expected improvement (EI) utility function improves on the POI utility function. Whereas the POI utility function finds the point with the highest probability of providing any improvement, the EI utility function finds the point with the highest expected improvement. The hyperparameter ξ is also used to control the exploration/exploitation tradeoff for the EI utility function. The EI utility function has the following form:

α_EI(x) = CDF((μ(x) - μ^⋆ - ξ)/σ(x)) (μ(x) - μ^⋆ - ξ) + PDF((μ(x) - μ^⋆ - ξ)/σ(x)) σ(x),

where all variable definitions are the same as those of the POI function (Section <ref>), and PDF(z) is the probability density function for a standard Gaussian distribution. As in the case of the POI utility function, smaller ξ values lead to more exploitation, whereas larger ξ values lead to more exploration, in the parameter space.
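For reference, the three utility functions can be written compactly as follows. This is an illustrative sketch only (the variable names mu, sigma, and mu_star are ours, and the default κ value shown is simply a common choice), not the internal implementation of any particular library:

import numpy as np
from scipy.stats import norm

# mu, sigma: surrogate mean and standard deviation at candidate points;
# mu_star: best objective function value observed so far.

def ucb(mu, sigma, kappa=2.576):
    # Upper confidence bound utility.
    return mu + kappa * sigma

def poi(mu, sigma, mu_star, xi=0.0):
    # Probability of improvement utility.
    return norm.cdf((mu - mu_star - xi) / sigma)

def ei(mu, sigma, mu_star, xi=0.0):
    # Expected improvement utility.
    z = (mu - mu_star - xi) / sigma
    return (mu - mu_star - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# The next point to probe maximizes the chosen utility over a candidate set:
# x_next = candidates[np.argmax(ucb(mu, sigma))]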
§.§ Generalized Error and Default Coefficient Preference (GEDCP) objective function

Objective function design is important for optimization. In general, the purpose of turbo-RANS is to reduce the error in a RANS prediction by adjusting the turbulence model coefficients. Therefore, the objective function should represent errors in the RANS predictions while ensuring a well-posed optimization problem. In view of this, we formulate an objective function for use with turbo-RANS and demonstrate its utility.

We formulate a functional form for an objective function, which will be referred to henceforth as the generalized error and default coefficient preference (GEDCP) function. This function aims to be both highly versatile in terms of the types of reference data that can be used in the optimization and well-posed in terms of the process of the optimization itself. This function computes an error measure that can be used to assimilate dense reference data, sparse reference data, or integral reference data (or any combination of these types of data), making it adaptable to essentially any calibration problem. For example, RANS coefficients can be calibrated so that the predicted velocity field matches sparse wind-tunnel velocity data or so that the predicted force coefficients match those obtained from experimental data. For unsteady simulations, the coefficients can be calibrated based on an experimental Strouhal number or on a time-averaged force coefficient. The GEDCP function can also combine different types of reference data into a single objective function. For example, coefficients can be calibrated to simultaneously reduce errors in sparse reference data and in reference data for an integral parameter (e.g., lift or drag coefficient).

An optional term is included in the GEDCP function that embeds a preference for the “default" values of the turbulence model coefficients. In practice, including this term serves two goals: namely, it embeds prior knowledge of the problem into the objective function, and it often results in the optimization problem having a single global optimum. For an example that motivates the first goal, consider the prior knowledge that C_μ = 0.09 is commonly used in linear eddy viscosity models. This prior knowledge may be an important preference that is relevant for some applications. This default value is based on the correct ratio of ν_t ε/k^2 in a turbulent channel flow away from the wall <cit.>. Here, ν_t is the eddy viscosity, ε is the dissipation rate, and k is the turbulent kinetic energy. If the optimized C_μ deviates from this value, there must be a corresponding significant reduction in the misfit error for this state of affairs to be acceptable. This preference penalty must also increase with increasing deviation from the default value of the coefficient, so if the nominally optimal value for C_μ is very far from 0.09, a large reduction in the fitting error to the reference data must occur for this to be acceptable. The second goal of the preference term is to ensure that the optimization problem has a single (unique) solution. In our experience, an error-based objective function based solely on the misfit between the model predictions and the reference data for calibrating RANS model coefficients often exhibits a saturation in the misfit. In other words, if the coefficient exceeds a certain value, the misfit error remains constant and, as a result, the optimization problem does not have a unique solution.
Saturation or asymptotic behavior in the misfit can arise from the nature of the physics in the problem or from the presence of minimum, maximum, hyperbolic tangent, and other functions that are used in various combinations in many turbulence models. With a preference for the default value of a model coefficient, a small slope is introduced into the objective function, which drives coefficients towards their default values if this circumstance arises.

The GEDCP objective function, which depends on the model coefficients {c_0,c_1,c_2,…,c_N} that are to be calibrated against the available reference data, is formulated as follows:

f_GEDCP(c_0, c_1,…,c_N) = -(λ_F E_F + λ_I E_I)(1+λ_p p)|_c_0, c_1,…,c_N ,

E_F(c_0, c_1,…,c_N) = ∑_ϕ∈Φ MAPE(ϕ)|_c_0, c_1,…,c_N = ∑_ϕ∈Φ (1/N_ϕ) ∑_i=1^N_ϕ |(ϕ_i - ϕ̃_i(c_0, c_1,…,c_N))/ϕ_i| ,

E_I(c_0, c_1,…,c_N) = ∑_ψ∈Ψ |(ψ - ψ̃(c_0, c_1,…,c_N))/ψ| ,

p(c_0, c_1,…,c_N) = (1/N) ∑_n=1^N |(c_n,default - c_n)/c_n,default| .

The variables and terms in Eqs. (<ref>)–(<ref>) are defined below.

In Eq. (<ref>), f_GEDCP(c_0, c_1,…,c_N) is the objective function, which depends on the tuneable coefficients c_0, c_1, …, c_N; E_F is the field error; E_I is the integral parameter error; p is the default coefficient preference term; and, λ_F, λ_I, and λ_p are regularization parameters that control the amount of preference (relative importance) ascribed to each term. The negative sign in Eq. (<ref>) converts the problem of minimizing E_F, E_I, and p to a maximization problem, which is the convention used for Bayesian optimization.

The reference data used for calibration of the model coefficients consist of a set of field values (e.g., velocity, pressure) and integral parameters (e.g., lift coefficient). The set of calibration fields is specified by Φ, and the individual calibration fields are denoted by ϕ ∈ Φ. Individual reference values for a given calibration field are denoted by ϕ_i. In a similar manner, the set of reference integral parameters is specified by Ψ, and the individual integral parameters are denoted by ψ ∈ Ψ. As an example, given calibration data consisting of 15 locations with pressure measurements P, 50 locations with measured velocity components U and V, a reference lift coefficient C_l, and a reference drag coefficient C_d, we have the following sets: Φ = { P, U, V }, N_P=15, N_U=N_V=50, and Ψ = { C_l, C_d }.

The field error E_F [Eq. (<ref>)] is a summation of the mean absolute percentage errors (MAPE) in each field ϕ over the set of fields Φ for which reference data are available for calibration. Here, i=1,2,…,N_ϕ indexes the points where reference data corresponding to the field ϕ are available. Furthermore, ϕ_i is the reference datum associated with the field ϕ at a given location and ϕ̃_i(c_0,c_1,…,c_N) is the predicted value of the field at this location obtained using the set of model coefficients {c_0,c_1,…,c_N}. This term can accommodate arbitrarily sparse and/or dense reference data and involves the determination of MAPE over all available reference data for a given field. Moreover, the use of MAPE implies that the influence of the reference data is not affected by the scale of ϕ_i. For example, a 25% error at a point with low velocity in the boundary layer is penalized equally to that of a 25% error at a point with high velocity. The use of MAPE here is also motivated so that varying scales of ϕ ∈ Φ are treated equally; for example, the velocity specified in units of m s^-1 can have a different scale than that associated with the pressure in units of Pa.
This equal treatment would not hold true if a mean-squared error (MSE) or mean-absolute error (MAE) were to be used in the formulation of the GEDCP objective function.

The integral parameter error E_I [Eq. (<ref>)] is a summation of the percentage error in each integral parameter for which reference data are available. Here, Ψ is the set of integral parameters. Again, the percentage error is used so that varying scales associated with different quantities of interest within Ψ are treated equally. For example, consider the calibration of an aerodynamic simulation to match reference data for the lift c_l and drag c_d coefficients. A given prediction using model coefficients c_0,c_1,…,c_N will provide an estimate for the lift c̃_l(c_0,c_1,…,c_N) and drag c̃_d(c_0,c_1,…,c_N) coefficients. The use of a relative error implies that even though c_l and c_d can have different orders of magnitude, the percentage errors will be penalized equally.

The default coefficient preference term p is given by Eq. (<ref>). This term can be interpreted as the mean relative deviation of all model coefficients from their corresponding default values. The use of a relative (rather than an absolute) deviation is again motivated by possible scale differences in the values of the model coefficients.

In Eq. (<ref>), λ_F, λ_I, and λ_p can be adjusted to determine the preference for reference field data, reference integral data, and default values of the model coefficients. We recommend setting λ_F=λ_I=1 and λ_p=1/2. These settings retain some preference for the default values of the model coefficients while ensuring that the objective function is primarily influenced by the error-based terms E_F and E_I. If no reference field data is available (viz., Φ=∅), then λ_F=0. Similarly, the absence of reference integral parameter data (viz., Ψ=∅) implies λ_I=0.

The multiplicative design incorporated in Eq. (<ref>) is to allow deviations from the default values of the model coefficients, as long as a corresponding significant decrease in misfit (prediction) error is realized. If the optimizer begins to “stray" too far from the default values of the model coefficients with only a marginal or negligible reduction in the prediction error, then the default coefficient preference term will penalize this behavior. However, if this departure from the default values of the model coefficients results in a significant prediction error reduction, then the objective function will still reward the proposed departure in the optimization process. For example, in the event that the proposed model coefficients result in perfect predictions of the quantities of interest (viz., E_F, E_I → 0), then the default coefficient preference term has no effect. As long as the calibration of the model coefficients results in some error in the fitting process (which is expected for the majority of use cases), then the default coefficient preference term will influence the objective function.
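A minimal sketch of the GEDCP objective function [Eqs. (<ref>)–(<ref>)] may help clarify the bookkeeping; the function signature and dictionary-based data layout here are our own illustrative choices, not the turbo-RANS implementation:

import numpy as np

def gedcp(ref_fields, pred_fields, ref_ints, pred_ints,
          coeffs, coeffs_default, lam_F=1.0, lam_I=1.0, lam_p=0.5):
    # Field error E_F: sum of MAPEs over all reference fields in Phi.
    E_F = sum(np.mean(np.abs((ref_fields[name] - pred_fields[name])
                             / ref_fields[name]))
              for name in ref_fields)
    # Integral parameter error E_I: sum of relative errors over Psi.
    E_I = sum(abs((ref_ints[name] - pred_ints[name]) / ref_ints[name])
              for name in ref_ints)
    # Default coefficient preference term p: mean relative deviation.
    p = np.mean(np.abs((coeffs_default - coeffs) / coeffs_default))
    return -(lam_F * E_F + lam_I * E_I) * (1.0 + lam_p * p)

For a case with no integral reference data (Ψ=∅), ref_ints and pred_ints are simply passed as empty dictionaries, in which case E_I=0; the same holds for the field term when Φ=∅.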
As part of the hyperparameter optimization in this work (see Section <ref>), the GEDCP loss functions for two example problems (see Sections <ref>–<ref>) were computed on a dense grid of RANS predictions associated with a grid of values of the model coefficients. This computation allows for the visualization of the GEDCP loss functions for two typical model coefficient calibration problems.

Fig. <ref> shows the GEDCP loss function associated with the calibration of the a_1 coefficient of the k-ω SST turbulence closure model. The calibration of this model coefficient was conducted in order to improve the predictive performance of the turbulence model for the lift coefficient of an airfoil. More details of this example problem are provided in Section <ref>. Over a wide range of values of a_1, the GEDCP objective function for this case exhibits a single global optimum point and is a relatively straightforward function to optimize. For this example, turbo-RANS is able to find the optimum value of a_1=0.28 in approximately 9 evaluations of the objective function (see Section <ref> for more detailed results). In the range of 0.34<a_1<0.37, a small negative slope can be seen in f_GEDCP(a_1). This small negative slope is due to the presence of the default coefficient preference term in the GEDCP objective function. Without the inclusion of this default coefficient preference term, an objective function that involves only a misfit term would have a zero slope here and, as a result, the optimizer would not be “guided away" from the upper bound for a_1 when exploring this plateau in the objective function.

A more involved example that justifies the inclusion of the default coefficient preference term in the objective function is exhibited in Fig. <ref>. The GEDCP objective function shown here corresponds to optimizing the two model coefficients a_1 and β^* of the k-ω SST turbulence closure model in order to improve the predictive performance of the model for a flow over periodic hills. The misfit error used in this example is a summation over the entire flow field because dense DNS data is available for this case. More details on this case are provided in Section <ref>. Again, a single local maximum over a wide range of values of the model coefficients exists in this case. Without the inclusion of the default coefficient preference term, the objective function would asymptotically saturate (viz., achieve a maximum value) along a line defined by β^*≈ 0.08 in the range 0.36<a_1<0.60. With a pure misfit error-based objective function, in spite of the fact that there is virtually no improvement in the predictive accuracy for increasing values of a_1, the optimizer in this case nevertheless recommends a best estimate of a_1=0.60 (i.e., this value for a_1 maximizes the objective function). This large value for a_1 is not justified (or even reasonable) because there is only a marginal improvement in the predictive performance for values of a_1 greater than about 0.34. The GEDCP objective function addresses this problem through the inclusion of a small preference for the default values of the model coefficients. In so doing, a single (unique) solution is obtained for the optimization problem, resulting in reasonable values for the model coefficients: namely, β^*=0.076 and a_1=0.34. This optimum is found in approximately 20 evaluations of the objective function (see Section <ref> for more results).

§.§ turbo-RANS framework

Fig. <ref> shows the turbo-RANS framework, along with details of the probe-and-suggest sub-routines. Various steps of the algorithms incorporated in this framework will be described below.

To initialize the optimization process, information must be provided by the user as to which model coefficients are to be optimized. This information includes the default values for the model coefficients and their lower and upper bounds. Hyperparameters (see Section <ref>) used in the optimization, as well as other settings, must also be specified by the user as part of the initialization.
It is noted that the hyperparameters can be modified during each iteration of the optimization process. The first point probed is the point corresponding to the default values for the model coefficients. If the chosen optimization problem does not include default values for the model coefficients, then this first default coefficient probe is skipped in the initialization.

After probing the default values for the model coefficients, the sampling loop is executed. In the framework, n_s is the number of sample points that are probed before the Bayesian optimization loop is entered. If n_s is too small, errant behavior of the Gaussian process regression can occur (see Section <ref>). At this stage, values of the model coefficients are sampled using Sobol sampling to produce a Sobol sequence of points, which promotes better uniformity of the sampled values compared to a purely random sampling <cit.>. After a new point is obtained using Sobol sampling, this point is probed. The sampling loop continues until a number of points n_s (prespecified by the user) has been probed.

The Bayesian optimization loop begins after a default point and a sufficient number of additional sample points have been probed. In the Bayesian optimization loop, the optimizer considers the past history and suggests a new value of the model coefficients (viz., a point) to be probed. Following this, the suggested new point is probed. The probe-suggest loop continues until a convergence criterion is satisfied. This convergence criterion is specified by the user. We envision that turbo-RANS loops that involve relatively inexpensive CFD calculations can be stopped when the best objective function value and/or the best set of model coefficients do not provide any further improvements (changes) in the optimization process after a certain number of iterations (early stopping). For problems corresponding to more expensive CFD calculations, an upper bound on the computational budget can be prescribed a priori by the user (e.g., repeat the loop until a pre-specified number of CPU hours has been expended in the computation). With either the convergence-based or budget-based stopping criteria, the Bayesian optimization algorithm will result in an efficient optimization in terms of the number of objective function evaluations (viz., CFD calculations). After the stopping criteria are satisfied, the final recommended model coefficient set is the set that achieves the maximum value of the objective function.

Probing a set of values of the model coefficients consists of running a RANS calculation with these coefficients, computing the objective function, and registering the objective function value so that it can be used in future iterations. The probe sub-routine in Fig. <ref> details these steps. The convergence criteria for the RANS calculations are also specified (prescribed) by the user. We recommend the standard practice of evaluating both the residual values of various flow quantities and monitoring an integral parameter to assess convergence. In terms of actual implementation, this process is typically automated using scripts that modify the turbulence model coefficients, execute a RANS calculation, and monitor the convergence of this calculation. To reduce computational demands, we recommend using the converged flow fields obtained from the default coefficient probe (viz., the default values of the model coefficients) as initial conditions for future probe RANS calculations.
This re-use of the converged flow field from the default coefficient probe greatly reduces the computational costs of the turbo-RANS algorithm, since the flow fields start near convergence. The post-processing of RANS calculations into fields that are compatible with the GEDCP objective function (Section <ref>) is also typically automated. For example, this post-processing can include computing force coefficients or sampling the solution field at the locations where the reference data is available. Details of the objective function registration are provided in Section <ref> and in the turbo-RANS documentation <cit.>.

The suggest sub-routine exhibited in Fig. <ref> addresses two sub-optimization problems: namely, (1) fitting a Gaussian process using all previously probed points obtained in modelling of the objective function; and, (2) maximizing the utility function. These optimization problems are already implemented in the dependencies for turbo-RANS. The suggest sub-routine executes almost instantaneously, and therefore the main computational expense in the turbo-RANS algorithm is conducting a RANS calculation with a given set of turbulence model coefficients. The computational time of the suggest sub-routine increases with the number of points previously sampled, as the cost of the Gaussian process regression increases with an increasing number of points. However, in our experience, the computational cost of the suggest sub-routine is negligible compared to the cost of conducting the RANS calculations required for each point probed (undertaken in the probe sub-routine). For large parameter spaces and a large number of previously evaluated points defining the objective function hypersurface (e.g., greater than 100), the Gaussian process regression may perform slowly. However, even in the worst case, we have observed that the cost of the suggest sub-routine remains below 1% of the total cost associated with the RANS calculations required for each probed point.

§.§ Implementation

The core Bayesian optimization package used in turbo-RANS is the Python library bayesian-optimization <cit.>. This library is widely used in many Bayesian optimization applications and includes implementations of the UCB, POI, and EI utility functions. The underlying Gaussian process regression library used in bayesian-optimization is scikit-learn <cit.>. A series of convenient Python interfaces to these two libraries has been provided in the turbo-RANS GitHub repository <cit.>. The repository includes examples and templates for a wide variety of use cases, including the following:

* two examples based on OpenFOAM: flow over an airfoil (Section <ref>), and flow over periodic hills (Section <ref>);
* one example based on Ansys Fluent: flow through a converging-diverging channel (Section <ref>);
* templates for optimizing OpenFOAM simulations;
* solver-agnostic templates for optimizing CFD simulation Python scripts; and,
* solver-agnostic templates for optimizing CFD simulation shell scripts.

Full documentation for the code is provided in the GitHub repository <cit.>. However, the main usage details are summarized in the following discussion. The turbo-RANS framework includes two modes: Python mode and JSON mode. Python mode should be used when all optimization tasks can be completed within a single run of a Python script, and no saving (to disk) of the progress in the computations is required. This mode is generally limited to toy optimization problems and the “simulated optimization" runs described in Section <ref>.
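To make the probe-and-suggest cycle in Python mode concrete, the following minimal sketch outlines a calibration loop. It assumes the register/suggest interface of the bayesian-optimization package (exact names may differ between package versions); run_rans() and gedcp_score() are placeholder names for the user's solver driver and objective function evaluation, the coefficient bounds are hypothetical, and the initial Sobol sampling stage is omitted for brevity:

from bayes_opt import BayesianOptimization, UtilityFunction

pbounds = {"a1": (0.25, 0.37)}                    # hypothetical coefficient bounds
optimizer = BayesianOptimization(f=None, pbounds=pbounds, random_state=1)
utility = UtilityFunction(kind="ucb", kappa=2.576, xi=0.0)

params = {"a1": 0.31}                             # probe the default value first
for _ in range(10):
    score = gedcp_score(run_rans(params))         # expensive CFD calculation
    optimizer.register(params=params, target=score)
    params = optimizer.suggest(utility)           # next point to probe

print(optimizer.max)                              # best score and coefficients

This register/suggest pattern keeps the expensive RANS evaluation fully under the user's control, which mirrors the design of the turbo-RANS interfaces described above.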
In Python mode, no external files are required by the program, although the code can optionally read and write external files. JSON mode is recommended for the majority of users. In JSON mode, the program will read and write external files, which can be used to save and restart the optimization. In JSON mode, the code is simply designed to provide the next point to be probed. The code will take into account whether or not the default values of the model coefficients (default point) have been probed, as well as whether or not a sufficient number of sample points have been probed. If there are no points registered, running the code will start a new optimization and suggest probing the default point if one has been provided. Subsequent code runs will suggest Sobol-sampled points until a total of n_s points have been probed. Here, n_s is the total number of points probed, including the default point. After this sampling, the code will automatically switch to Bayesian optimization for subsequent runs. If the user wishes to implement an alternative initial sampling loop (e.g., providing a set of initial points on a grid rather than through Sobol sampling), then the user only needs to provide the objective function history from this alternative loop. turbo-RANS will automatically begin Bayesian optimization when a total of n_s entries are available in the objective function history, regardless of their origin.

To begin the optimization, the user needs to provide a single JSON file that defines the coefficients to be optimized. This file is the only mandatory file required to execute turbo-RANS. More specifically, this file contains the names of the model coefficients and the lower and upper bounds for each model coefficient; optionally, the file can also include default values for the model coefficients. In either Python or JSON mode, the user has the flexibility to modify hyperparameters of the optimization process at each iteration, allowing fine control over whether exploration or exploitation should be preferred when turbo-RANS suggests the next point. For example, a decay in the value of the hyperparameter κ in the UCB utility function can be implemented in order to gradually shift the preference in the acquisition of a new point from exploration to exploitation <cit.>. At each iteration, the code will automatically save all required settings and objective function histories. Indeed, all the necessary information that is needed to restart the optimization from the last computed iteration of a previous run is saved. Additionally, all input and output files for the program are saved in the human-readable JSON format, and can easily be read by other programs for further analysis.

It should be noted that the parallelization of Bayesian optimization is non-trivial. While a RANS calculation with a given set of model coefficients is typically parallelized, the primary turbo-RANS algorithm is currently restricted to running in a sequential manner. This restriction means that only a single RANS calculation is conducted at a time, rather than probing multiple sets of model coefficients to enable multiple RANS calculations to be undertaken simultaneously. It is assumed that the RANS calculation step is by far the most computationally expensive component of the turbo-RANS algorithm, since the acquisition of a new point in the Bayesian optimization occurs nearly instantaneously. If desired, the user can opt to implement a parallelized Bayesian optimization by modifying the turbo-RANS example scripts. The code's verbose file input and output structure facilitates implementing parallelized Bayesian optimization but, for now, this functionality is left for a future effort.
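To make the input format concrete, the following sketch writes a coefficient-definition file of the kind described above. The filename, field names, and schema shown here are hypothetical illustrations only; the exact schema is documented in the turbo-RANS repository:

import json

# Hypothetical contents of the coefficient-definition file; the exact
# filename and schema are documented in the turbo-RANS repository.
coefficients = {
    "a1":        {"default": 0.31, "lower_bound": 0.25,  "upper_bound": 0.37},
    "beta_star": {"default": 0.09, "lower_bound": 0.045, "upper_bound": 0.14},
}
with open("turborans_coefficients.json", "w") as f:
    json.dump(coefficients, f, indent=4)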
§ DEMONSTRATIONS

§.§ Example 1: Flow over an airfoil

§.§.§ Description

RANS calculations are often used in external aerodynamics, such as in aerospace, wind engineering, and automotive engineering. Accurate prediction of integral quantities such as force coefficients is typically desired in these applications. The purpose of this example is to demonstrate how turbo-RANS can be used to calibrate turbulence model coefficients using integral parameter reference data.

§.§.§ Computational setup

We calculate the steady-state, incompressible, turbulent flow over a NACA 0012 airfoil at a 10.12^∘ angle of attack and a chord-based Reynolds number of Re_c=6 × 10^6. The case details are taken from the NASA turbulence modelling resource “2D NACA 0012 Airfoil Validation" 2DN00 case <cit.>. OpenFOAM v2206 was used—the RANS calculations were conducted using the semi-implicit method for pressure-linked equations (SIMPLE) solver (simpleFoam) in OpenFOAM. A second-order upwind scheme was used for the discretization of the convective terms in the momentum equation, and the central difference scheme was used for the discretization of the diffusion terms. A first-order upwind scheme was used for the discretization of the convective terms in the turbulence transport equations. The geometric agglomerated algebraic multigrid (GAMG) solver was used to obtain the pressure, and the preconditioned bi-conjugate gradient (PBiCGStab) solver was used to solve all the other equations. The full OpenFOAM case setup is available in the turbo-RANS GitHub repository <cit.>.

The turbulence model used for this case is the k-ω shear stress transport (SST) model. The OpenFOAM v2206 variant of this model is the updated 2003 Menter version <cit.>, with production terms from the 2001 paper <cit.>. The following model equations are provided in Menter <cit.> (k is the turbulent kinetic energy and ω is the specific dissipation rate):

∂(ρ k)/∂ t + ∂(ρ U_i k)/∂ x_i = P̃_k - β^* ρ k ω + ∂/∂ x_i[ (μ + σ_k μ_t) ∂ k/∂ x_i ] ,

∂(ρω)/∂ t + ∂(ρ U_i ω)/∂ x_i = α P̃_k/ν_t - βρω^2 + ∂/∂ x_i[ (μ + σ_ω μ_t) ∂ω/∂ x_i ] + 2(1-F_1)ρσ_ω 2 (1/ω) (∂ k/∂ x_i)(∂ω/∂ x_i) ,

ν_t = a_1 k/max(a_1 ω, S F_2) ,

where the definitions of the blending functions F_1 and F_2 and the limited production term P̃_k have been omitted, as they are not relevant to the present discussion. Given that the coefficients are a blend of the coefficients of the k-ε and k-ω turbulence closure models, the k-ω SST model has 10 model coefficients: a_1, β^*, σ_k1, σ_k2, α_1, α_2, β_1, β_2, σ_ω1, and σ_ω2. Default values for these coefficients are given as: a_1 = 0.31, β^* = 0.09, σ_k1 = 0.85, σ_k2 = 1, α_1 = 5/9, α_2 = 0.44, β_1 = 3/40, β_2 = 0.0828, σ_ω1 = 0.5, and σ_ω2 = 0.856. Here, the subscript 1 denotes the k-ω model, and the subscript 2 denotes the k-ε model. The default values of these coefficients arise from the application of various heuristics, such as tuning the behavior of certain terms to be in conformance with the characteristics of specific types of (usually canonical) flows. For many industrially relevant flows, additional tuning of these model coefficients can increase predictive accuracy. For the airfoil case, we employ turbo-RANS to calibrate the a_1 coefficient of the k-ω SST turbulence model for this particular flow.
Fig. <ref> shows the computational domain and mesh, which were taken from the NASA turbulence modelling resource <cit.>. The computational domain is relatively large. More specifically, the domain extends 500c (c is the chord length) in any direction from the airfoil so as to negate blockage effects. Pressure at the inlet is zero gradient, and all other flow quantities at the inlet are prescribed as follows:

U_inlet = (50.68, 0, 9.05) m s^-1 ,
k_inlet = 0.001075 m^2 s^-2 ,
ω_inlet = 0.3279 s^-1 .

In particular, turbulence quantities at the inlet are based on a 0.052% turbulence intensity and a turbulent length scale of 10% of the chord length. At the outlet plane of the computational domain, all quantities except pressure are zero-normal-gradient, and the gauge pressure is fixed at zero Pa.

The mesh for this case consists of 57,344 structured hexahedral cells, with approximately 129 points along the surface of the airfoil. With the default values for the k-ω SST model coefficients, the y^+ wall-normal distance from the airfoil surface has values lying between 0.017 and 0.88, with a mean value of 0.30. These values imply that the RANS computation for this case is wall-resolved.

§.§.§ Optimization problem

The purpose of the airfoil example case is to optimize the a_1 model coefficient in the k-ω SST turbulence model in order to improve the accuracy of the model for prediction of the lift coefficient c_l. The objective function for the airfoil case is given by

f_GEDCP(a_1) = -(E_I)(1 + 1/2 p)|_a_1 ,
E_I(a_1) = |(c_l - c̃_l(a_1))/c_l| .

Here, f_GEDCP(a_1) is the GEDCP objective function [Eq. (<ref>)], with λ_I = 1 and λ_p = 1/2. This objective function primarily aims to minimize the relative error in the prediction of the integral parameter c_l while retaining a minor preference for the default value of the model coefficient: namely, a_1 = 0.31. In the present work, we have computed this objective function over a grid of values of a_1 so that it can be visualized (cf. Fig. <ref>). No field quantities are used in the objective function for the airfoil case. The parameter space for the model coefficient for the airfoil case is a_1 ∈ [0.25,0.37]. The coefficient a_1 was calibrated so that the predictions matched the reference lift coefficient of c_l = 1.0707, the latter of which comes from the fully-turbulent experimental measurement for this quantity obtained by Ladson <cit.> at a Mach number of 0.15, where the suction and pressure side transition points were fixed at a distance of 0.05c from the leading edge, using No. 80-W grit carborundum strips.

§.§.§ Results

The turbo-RANS framework was applied to the airfoil example case, allowing a maximum of ten iterations in the optimization process. The first iteration was the default coefficient run (viz., a RANS calculation using the default value for a_1). Subsequently, four additional Sobol-sampled iterations were completed. Therefore, five of these ten iterations correspond to iterations of the Bayesian optimization procedure. Each iteration corresponds to a RANS calculation with a given value for the a_1 model coefficient. The computational time was greatly reduced by using the converged fields from the default coefficient run as initial fields for the remaining RANS calculations involving the new proposals for the value of a_1. Fig. <ref> displays the convergence history over the ten iterations.
In nine iterations, the optimization process attained a maximum value of the objective function, and the value of a_1 corresponding to this maximum (optimum) objective function value yielded a predicted value for the lift coefficient of c̃_l = 1.0706. This predicted value for the lift coefficient is in excellent conformance with the experimental value for the lift coefficient of c_l=1.0707. The airfoil example calibration results are summarized in Table <ref>. While the default coefficient performs reasonably well at predicting the experimental value for the lift coefficient, this example demonstrated that turbo-RANS can efficiently calibrate the model coefficient so that the predicted value of the lift coefficient provides an essentially perfect agreement with the corresponding experimental measurement of this coefficient.

§.§ Example 2: Periodic hills

§.§.§ Description

Predicting the evolution of separated, turbulent flow is a major challenge for RANS turbulence models <cit.>. Non-equilibrium effects, non-local effects, and history effects are frequently present for this type of flow. Unfortunately, commonly used RANS turbulence models such as the k-ω SST model often do not accurately predict these flow phenomena. Nevertheless, these models are frequently applied to separated turbulent flows since separation occurs in many industrial flows. A canonical example of separation is turbulent flow over periodic hills. In this flow, a periodic domain is used so that the flow repeatedly separates from the smooth surface of one hill, reattaches, accelerates up a slope of the hill immediately downstream of it, and then separates again from the surface of this second hill. When we apply single-point RANS turbulence closure models such as the k-ω SST model to this type of flow, we can calibrate the model coefficients so that the effects of the poorly captured physics are at least mimicked. Otherwise, the turbulence model will be unable to accurately predict the general characteristics of the flow. In this example, we demonstrate how turbo-RANS can be used to calibrate turbulence model coefficients so that the predicted field quantities are in better conformance with the associated reference data.

§.§.§ Computational setup

We calculate the steady-state, incompressible, turbulent flow over a series of periodic hills at a height-based Reynolds number of Re_H=5,600. Direct numerical simulation data and OpenFOAM meshes are provided for this flow by Xiao et al. <cit.>. The software, solver, and schemes are identical to the airfoil example case (cf. Section <ref>). The k-ω SST turbulence model is also calibrated in this example [Eqs. (<ref>)–(<ref>)].

The computational domain and mesh for this case are taken from Xiao et al. <cit.>. The computational domain is two-dimensional (2D)—the three-dimensional (3D) DNS data has been averaged along the spanwise direction. The mesh consists of 14,751 hexahedral cells. Boundary conditions and mesh convergence for the RANS calculation in this case are discussed in <cit.>. In short, periodic boundary conditions are imposed at the inlet and outlet planes of the computational domain. A no-slip wall boundary condition is imposed on the top and bottom walls of the domain. Along these no-slip walls, 0.01 ≤ y^+ ≤ 0.80, with an average value for y^+ along the top and bottom walls of 0.71 and 0.20, respectively. This range of values for y^+ along the top and bottom walls implies that the RANS calculations conducted here are wall-resolved.
§.§.§ Optimization problem
For the case of the periodic hills, the GEDCP objective function is used, but flow field quantities are used in the misfit error term of the function. The purpose is to calibrate the a_1 and β^* model coefficients of the k-ω SST turbulence model so that the predictions of the magnitude of the velocity U⃗ and k fields are in better conformance with the available DNS reference data. No integral parameters are included in the objective function for this example. In consequence, the objective function assumes the following form: f_GEDCP(a_1,β^*) = -(E_F)(1 + (1/2)p)|_a_1,β^*, with E_F(a_1,β^*) = ∑_ϕ∈{|U⃗|,k} MAPE(ϕ) = ∑_ϕ∈{|U⃗|,k} (1/N_ϕ)∑_i=1^N_ϕ |(ϕ_i - ϕ̃_i(a_1,β^*))/ϕ_i|. This objective function encapsulates a tradeoff between reducing the MAPE in the prediction of |U⃗| and k while retaining a minor preference for the default value of the model coefficients: namely, a_1 = 0.31 and β^* = 0.09. The parameter space corresponding to the model coefficients for the case of the periodic hills is a_1 ∈ [0.24,0.60] and β^* ∈ [0.045,0.14]. The DNS reference data has been interpolated onto the RANS mesh so that N_ϕ = 14,751. Given that every point in the RANS calculation has corresponding reference data, this example demonstrates how turbo-RANS can be used in an application involving highly dense reference data.
§.§.§ Results
A fixed computational budget of 30 iterations was prescribed for the optimization. The default values of the model coefficients were used for the first iteration. For the next nine iterations, Sobol sampling was used to generate the nine additional points. Subsequently, Bayesian optimization was used for the remaining 20 iterations. Fig. <ref> illustrates the convergence of the optimization process for the periodic hills example case. After approximately 20 iterations, the Bayesian optimization has found the maximum (optimum) value (dashed line corresponding to the maximum value obtained using a 30× 30 grid search in the parameter space). At the end of the optimization process, Bayesian optimization found model coefficients whose associated objective function value was slightly larger (better score) than the maximum (best score) obtained in the grid search; naturally, this optimal value occurs between the grid points used in the exhaustive grid search. The predicted velocity and turbulent kinetic energy fields obtained using RANS calculations based on the default and optimized model coefficients were compared to the DNS reference data. Figs. <ref> and <ref> display the streamwise U and vertical V components of the velocity along several vertical sampling lines in the computational domain, respectively. The RANS predictions obtained using the turbo-RANS-optimized model coefficients agree very well with the DNS data, demonstrating a significant improvement in comparison to the corresponding RANS predictions obtained using the default values for the model coefficients. The default values of the model coefficients lead to predictions that overestimate the flow separation and, as a result, do not accurately predict the velocity after the occurrence of the separation. In the bulk flow over the separated flow region, the velocity predicted by the RANS models using the default values of the model coefficients is also high compared to the DNS reference data. After calibrating the a_1 and β^* model coefficients, the velocity here agrees well with the DNS reference data.
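The field-error term driving these comparisons is straightforward to compute once the RANS fields are interpolated onto common points. Below is a minimal sketch under illustrative names (not the turbo-RANS implementation); it assumes the reference values are non-zero wherever the MAPE is evaluated.

```python
import numpy as np

def field_mape(phi_ref, phi_pred):
    """Mean absolute percentage error for one field (e.g., |U| or k)."""
    phi_ref = np.asarray(phi_ref, dtype=float)
    phi_pred = np.asarray(phi_pred, dtype=float)
    return np.mean(np.abs((phi_ref - phi_pred) / phi_ref))

def e_f_periodic_hills(u_mag_ref, u_mag_pred, k_ref, k_pred):
    """Misfit term E_F(a_1, beta*): MAPE summed over the two calibrated fields."""
    return field_mape(u_mag_ref, u_mag_pred) + field_mape(k_ref, k_pred)
```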
Both the default and optimized RANS predictions exhibit discrepancies in the near-wall quantities directly along the separated bump. Here, the breakdown of assumptions made in modelling the RANS boundary layers is likely responsible for this discrepancy. Overall, the inclusion of the DNS U and V reference data through the GEDCP objective function [Eq. (<ref>)] leads to a greatly improved conformance between the RANS predictions and the DNS reference data for this separated flow, as summarized in Table <ref>. Fig. <ref> compares vertical profiles of the turbulent kinetic energy k along various sampling lines in the computational domain. The k field was used in conjunction with the |U⃗| field in the calibration of the model coefficients. While there is a clear improvement in the k estimation in many regions of the flow, there is still a discrepancy between the DNS reference k and the RANS predicted k in the shear zone at the top of the separation region. Further accuracy improvements may be possible through the inclusion of additional model coefficients in the calibration procedure, such as the various coefficients that appear in the k-transport equation [Eq. (<ref>)] and the ω-transport equation [Eq. (<ref>)]. Nevertheless, the calibration procedure has generally resulted in an improved prediction of k for this separated flow, as summarized in Table <ref>. While the reference data used in the GEDCP objective function was |U⃗| and k, it was of interest to determine whether other flow quantities were also predicted more accurately. Even though the wall shear stress was not included as an explicit calibration target, Fig. <ref> shows that the wall shear stress along the bottom wall is in much better conformance with the corresponding DNS data (not used in the calibration) after calibration of the model coefficients. In fact, the baseline k-ω SST turbulence model does not predict any reattachment of the flow along the bottom wall before the flow reaches the downstream hill. After calibration, the k-ω SST turbulence model now predicts reattachment, with the reattachment point in very good agreement with the DNS data. Fig. <ref> demonstrates that calibrating turbulence model coefficients based on a specific set of flow quantities can lead to improvements in the predictive accuracy of other flow quantities not used in the calibration.
§.§ Example 3: Converging-diverging channel
§.§.§ GEKO turbulence model
The latest turbulence model that has been developed by Ansys, Inc. is the “generalized k-ω” (GEKO) two-equation model. The aim of this model is to provide flexibility for the user to fine-tune different coefficients for a RANS computation. GEKO has six parameters that can be fine-tuned by the user to increase predictive accuracy. In this paper, the focus is on fine-tuning the model coefficients C_SEP and C_NW in order to produce more accurate predictions for flow in a converging-diverging channel. The exact equations by which these GEKO coefficients modify the underlying two-equation turbulence closure model are not provided by Ansys, Inc.; however, general guidance for their use is provided in the Ansys User Manual. The parameter C_SEP modifies the characteristics of a separation in the flow. According to the Ansys User Manual, “increasing C_SEP reduces eddy viscosity, which leads to more sensitivity to adverse pressure gradients for boundary layers and lower spreading rates for free shear flow” <cit.>.
A change in the value of C_SEP has an influence on all classes of flows. The parameter C_NW modifies the near-wall characteristics of the flow that affect the nature of the wall boundary layer. The parameter C_NW generally has little or no effect on free shear flows. An increase in the value of C_NW leads to a larger wall shear stress. Here, we focus on calibrating global values for the GEKO model coefficients, although local values can also be accommodated through the application of user-defined functions (UDFs). These coefficients have a specified (recommended) range of values which is documented in the Ansys User Manual <cit.>. In accordance with this documentation, the model coefficients C_SEP and C_NW should take values in the range summarized in Table <ref>.
§.§.§ Computational setup
We calculate the steady-state, incompressible, turbulent flow through a converging-diverging channel using the GEKO turbulence closure model in a RANS framework. The DNS Reynolds number for this flow, based on U_max and h, is Re_U_max = 12,800. Here, U_max is the maximum velocity in the fully-developed channel cross-section. The computational domain used for the 2D RANS calculation is shown in Fig. <ref>. The DNS reference data for this flow was obtained from Marquillie et al. <cit.>. The mesh is adapted from McConkey et al. <cit.>, and contains 610,720 grid cells as displayed in Fig. <ref>. Mesh independence with the selected mesh density near the converging section was demonstrated in <cit.>. Boundary conditions and fluid properties for the RANS calculation are prescribed to match those of the DNS reference simulation. The inlet plane of the computational domain is placed at a distance of 20h upstream of the converging section, where h is the half-channel height. A Neumann pressure boundary condition is imposed at the inlet plane (viz., the gradient of the pressure in the streamwise direction is zero). The uniform streamwise velocity is prescribed at the inlet with a value of 0.845 m s^-1. This value was chosen to match the mass flow rate from the original RANS computation undertaken by McConkey et al. <cit.>. The inlet plane turbulent kinetic energy value is set to k = 4.28421 × 10^-4 m^2 s^-2, and specific dissipation rate at the inlet plane is ω = 0.26993 s^-1, in conformance with the computations conducted by McConkey et al. <cit.>. A fixed gauge pressure of zero Pa is imposed at the outlet plane of the computational domain, where all other flow quantities are zero-gradient. The outlet plane is placed at a distance of 20h downstream of the bump. Lastly, a no-slip boundary condition was applied on all walls in the computational domain. The fluid properties of the flow were specified as follows: density ρ = 1 kg m^-3 and kinematic viscosity ν = 7.9365 × 10^-5 m^2 s^-1. The SIMPLE-consistent (SIMPLEC) algorithm was used for the pressure-velocity coupling. Convective terms in the momentum equation were discretized with a second-order upwind scheme. The convective terms in the k- and ω-transport equations were discretized with a first-order upwind scheme. The diffusion terms in the momentum and turbulence transport equations were discretized with a second-order central difference scheme.
§.§.§ Optimization problem
For the converging-diverging channel case, we use the GEDCP objective function to calibrate the model coefficients based on sparse reference data. Specifically, we calibrate C_SEP and C_NW based on a sparse set of measurements of the pressure coefficient along the bottom wall of the channel.
The pressure coefficient is determined as follows: C_p = (P - P_0)/((1/2)ρ U_max^2). Here, P is the pressure and P_0 is the pressure taken at a point downstream of the converging-diverging channel (approximately at a downstream distance of 12h from the start of the converging section). No integral parameters are included in the objective function. The objective function used here is given by f_GEDCP(C_SEP, C_NW) = -(E_F)(1 + (1/2)p)|_C_SEP,C_NW, with E_F(C_SEP, C_NW) = MAPE(C_p) = (1/N_C_p)∑_i=1^N_C_p |(C_p,i - C̃_p,i(C_SEP, C_NW))/C_p,i|. Ten points along the bottom wall have been selected as sparse wall measurements (N_C_p = 10) used as the reference data in the calibration of the model coefficients. These points are indicated by the blue scatter points in Fig. <ref>. The misfit error term in the GEDCP objective function is the MAPE in C_p over the ten reference data points. For the upper and lower bounds of C_SEP and C_NW, we use the Ansys recommended limits of 0.75 ≤ C_SEP ≤ 2.5 and -2 ≤ C_NW ≤ 2, as discussed previously in Section <ref>.
§.§.§ Results
A Python script was created to execute an Ansys Workbench journal file. This journal file sets up the entire Fluent problem, beginning with the mesh import and ending with the RANS predictions of the pressure coefficient along the lower wall being exported. The use of a script to modify the journal file promotes full automation of the process used for the determination of the GEKO coefficients. With each iteration, the script sets the values for the C_SEP and C_NW (GEKO model coefficients) in the journal file to those suggested by the turbo-RANS algorithm. The first iteration corresponds to the default values of the model coefficients. Subsequently, nine additional Sobol-sampled iterations are executed. The remaining 20 iterations use the Bayesian optimization schema. After 30 iterations, the recommended values for C_SEP and C_NW are those that maximize the objective function. Fig. <ref> compares the DNS data to the predictions obtained using the default and turbo-RANS-optimized GEKO models. After optimization, the streamwise profile of the pressure coefficient is in better conformance with the DNS data. In particular, in the diverging section of the channel, the turbo-RANS-optimized GEKO model better predicts the pressure recovery, in contrast to the default GEKO model which is seen to significantly under-predict the recovery. The optimized model exhibits a slightly greater error in its prediction of the minimum pressure at the top of the converging section (viz., at x/h ≈ 5), but achieves greater predictive accuracy in every other region of the flow.
§.§ Discussion
The preceding three examples demonstrate the flexibility of the GEDCP objective function to work with various reference data types for the calibration of turbulence closure models. A mixture of integral, dense, and sparse reference data was used in these examples. The GEDCP objective function also permits the use of a mixture of reference data types for a given problem. For example, the airfoil case can be modified to include surface pressure coefficient reference data for use in calibration. As demonstrated by the case of the periodic hills, improving predictions of specific quantities of interest often results in improved predictions of other flow quantities (not used as reference data for the calibration). Moreover, when the GEDCP objective function is used as a calibration target in turbo-RANS, a computationally efficient optimization of turbulence model coefficients is possible.
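To make the mixed-data point concrete, a combined GEDCP evaluation can be sketched as follows. This is a minimal sketch under stated assumptions: the weighted-sum combination of the integral and field misfit terms is inferred from the λ_I, λ_F, and λ_p weights quoted for the individual examples, and all names are illustrative rather than part of the turbo-RANS implementation.

```python
def gedcp(integral_errors, field_errors, p, lambda_i=1.0, lambda_f=1.0, lambda_p=0.5):
    """Sketch of a mixed-data GEDCP objective.

    integral_errors: relative errors in integral parameters (e.g., c_l)
    field_errors:    MAPE values for field/sparse data (e.g., C_p at 10 wall points)
    p:               default-coefficient preference term (assumed precomputed)
    """
    e_i = sum(integral_errors)  # E_I term
    e_f = sum(field_errors)     # E_F term
    return -(lambda_i * e_i + lambda_f * e_f) * (1.0 + lambda_p * p)
```

A single-data-type objective, such as the airfoil example, is recovered by passing an empty list for the unused term.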
Sobol sampling, which occurred until iteration 5 for the airfoil case and iteration 10 for the periodic hills and converging-diverging channel cases, proved to be a highly effective sampling methodology. During the Sobol sampling period, a point close to the optimal value was almost always identified, and therefore the Bayesian optimization algorithm was well-positioned to determine whether to exploit this region or to explore the more sparsely sampled (albeit more uncertain) regions in the parameter space. Section <ref> will investigate and provide recommendations for the number of points to be sampled in the initialization phase of Bayesian optimization.
§ HYPERPARAMETER OPTIMIZATION
The objective of hyperparameter optimization was to determine the optimal settings for Bayesian optimization in the turbo-RANS framework. In order to recommend suitable hyperparameters for typical applications involving the calibration of model coefficients of RANS turbulence closure models, two cases were investigated in this context: namely, flow over an airfoil (Section <ref>) and flow over periodic hills (Section <ref>). The objective functions for the airfoil and periodic hills cases are given by Eqs. (<ref>) and (<ref>), respectively. Several optimization runs were completed for these cases with various hyperparameter settings to determine which settings resulted in a solution using the smallest computational effort (i.e., the strategy that arrived at the solution using the smallest number of evaluations of the objective function). Ultimately, the selected hyperparameters allow the optimal RANS model coefficients to be obtained with the lowest computational effort compared to other settings. The following hyperparameters were optimized in this study.
* Utility function. Three utility functions are included in the <cit.> Python library: upper confidence bound (UCB), expected improvement (EI), and probability of improvement (POI). See Sections <ref>–<ref> for further explanation of these utility functions.
* Exploration/exploitation tradeoff parameter. The UCB function is parameterized by a tradeoff parameter κ, and both the EI and POI functions are parameterized by a tradeoff parameter ξ. The default value of the tradeoff parameter in this library for the UCB utility function is κ = 2.576. For both the EI and POI utility functions, the default value of the tradeoff parameter in this library is ξ = 0.
* Gaussian process kernel parameters. The Matern kernel is used in the underlying Gaussian process regression in turbo-RANS. Important parameters for this kernel are the characteristic length scale l and the parameter ν (see Section <ref>).
* Number of initially sampled points. Before calling the Bayesian optimization, it is important to condition the underlying Gaussian process regression surrogate model of the objective function on some initial points. We use Sobol sampling to randomly generate these points, which provide a uniform sampling of the parameter space <cit.>.
The following procedure was used to select optimal hyperparameters:
* Fix l = 0.1, ν = 5/2, n_s = 5 for the airfoil case, and n_s = 10 for the periodic hills case.
* Perform a grid search over both the utility function type and corresponding exploration/exploitation tradeoff parameter.
* Select and fix the utility function and exploration/exploitation tradeoff parameter.
* Perform a grid search over l and ν.
* Select and fix values for l and ν.
* Perform a grid search over n_s.
The grid search space for each parameter is summarized in Table <ref>.
In total, 237 hyperparameter combinations were tested. Given the random nature of the Gaussian process regression that underpins Bayesian optimization, the core turbo-RANS algorithm is not deterministic. However, the algorithm can be made deterministic in practice by setting the random seed in the code. To better select optimal hyperparameters, 30 random turbo-RANS optimization runs were completed for each hyperparameter setting, for a total of 7,110 optimization runs. Following this, the mean and standard deviation of the number of iterations to reach convergence were determined. The mean number of iterations, n̄, is determined according to n̄ = (1/N)∑_i=1^N n_i, where N is the total number of turbo-RANS runs, i indexes each turbo-RANS run, and n_i is the number of iterations to reach convergence in the i-th run. The standard deviation, s, of the number of iterations required to achieve convergence is given by s = ((1/N)∑_i=1^N (n_i - n̄)^2)^1/2. The purpose of applying turbo-RANS is to tune turbulence model coefficients based on a case-specific objective function. To accelerate the hyperparameter tuning, the objective functions for the airfoil case [Eq. (<ref>)] and periodic hills case [Eq. (<ref>)] were evaluated a priori over a series of a_1 values for the airfoil case, and a grid of values of (a_1, β^*) for the periodic hills case. These a priori simulations permitted faster hyperparameter optimization since the value of f_GEDCP given a model coefficient suggestion can be determined by interpolation. This a priori evaluation of f_GEDCP on a grid of values of the model coefficients would not be used in practice with turbo-RANS. In a typical application, a new RANS prediction would need to be computed with each model coefficient suggestion. In other words, since the hyperparameter optimization procedure required thousands of evaluations of f_GEDCP, it was more computationally efficient to “simulate a simulation” by interpolating between a set of previously computed RANS calculations. These a priori RANS calculations also allow visualization of the objective function. The objective function for the airfoil case is shown in Fig. <ref> for a set of 50 RANS calculations for values of a_1 that span the parameter space. Similarly, the objective function for the periodic hills case is shown in Fig. <ref> for a set of 900 RANS calculations for combinations of values of the two model coefficients (a_1,β^*) that span the parameter space. Convergence criteria in turbo-RANS are determined by the user. For the hyperparameter optimization, the optimal coefficients and values of f_GEDCP were known because a large number of a priori RANS calculations were completed on a grid. For the airfoil case, the optimization procedure was considered convergent when the suggested value of the a_1 coefficient was within 0.5% of the optimal value of this coefficient. For the case of the periodic hills, the optimization procedure was considered convergent when the suggested values for the coefficients (a_1,β^*) resulted in an objective function f_GEDCP value that was within 5% of the optimum objective function value. The choice of convergence criteria for hyperparameter optimization is a special case since it relies on a priori knowledge of the objective function optimum. In practice, turbo-RANS convergence criteria can be set based on the change in the best objective function or best coefficients over a given window of iterations.
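The convergence statistics above reduce to a few lines of Python. The sketch below uses illustrative names and the population form of the standard deviation, matching the 1/N normalization in the expressions given above.

```python
import numpy as np

def convergence_statistics(iters_to_converge):
    """Mean and standard deviation of the iterations needed to converge,
    computed over the N repeated runs of one hyperparameter setting
    (N = 30 random seeds per setting in this study)."""
    n_i = np.asarray(iters_to_converge, dtype=float)
    n_bar = n_i.mean()
    s = np.sqrt(np.mean((n_i - n_bar) ** 2))
    return n_bar, s
```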
Given the expensive nature of RANS calculations, we envision that many turbo-RANS optimization procedures will be run with a prescribed computational budget instead of a numerical convergence criterion (e.g., a 40-simulation optimization run can be prescribed a priori). The Bayesian optimization algorithm produces a computationally efficient result for either of these criteria.
§.§ Utility function
Fig. <ref> shows the results of the utility function search for the airfoil case. In terms of purely minimizing the number of evaluations of the objective function, the UCB, EI, and POI utility functions all give roughly the same minimum mean number of evaluations of n̄ = 7. For the UCB and EI utility functions, the performance is stable over a wide range of values of κ and ξ. However, past a critical value of ξ, the EI utility function becomes too explorative, and the number of objective function evaluations for convergence increases sharply. The UCB utility function achieves stable performance for all values of κ, with a minor trend of an increasing n̄ with an increasing value of κ. The POI utility function has an optimum ξ value. Below this optimum value, the performance of the POI utility function is worse than that of the UCB or EI utility functions. Above this optimum value, the POI function becomes highly explorative, performing similarly to the EI utility function. Furthermore, the standard deviation of the 30 random runs is consistent for the UCB utility function, indicating that all the runs achieve convergence in a similar number of iterations. When the EI and POI utility functions become overly explorative, the standard deviation is seen to increase sharply. Fig. <ref> shows the hyperparameter optimization results for the periodic hills case. The general trends are similar to the airfoil case: the UCB utility function performs consistently, and the EI and POI functions have a critical ξ value beyond which the Bayesian optimization process becomes overly explorative. Similar trends to the airfoil case are also seen for the standard deviation. Given the two-coefficient optimization, a larger coefficient search space, and a more complex objective function, it is not surprising that the periodic hills case requires more iterations to find the optimal point than the airfoil case, with the minimum mean number of objective function evaluations being 20. Comparing Figs. <ref> and <ref>, we see that, when optimized, all three utility functions perform equally well. However, a major disadvantage of the POI utility function is the need to determine the optimum ξ, which is problem-specific: it is ξ ≈ 10^-3 for the airfoil case and ξ ≈ 10^-2 for the periodic hills case. Given their similar performance and low standard deviation, the EI and UCB utility functions are both acceptable choices for use in the acquisition of a new point in the Bayesian optimization process. However, the EI utility function will rapidly become overly explorative past a problem-dependent threshold. Therefore, for a general turbo-RANS use case, we recommend the application of the UCB utility function. The optimal exploration/exploitation tradeoff parameter for the GEDCP objective function generally favours exploitation. However, when the κ or ξ parameters were too small (overly promoting exploitation), we observed that the Bayesian optimization process became “stuck” on a perceived local maximum. For the UCB utility function, the Bayesian optimization process can occasionally become stuck for values of κ ⪅ 1.
Indeed, these runs would not converge after hundreds of iterations, and therefore they were removed from the mean and standard deviation calculations. It is noted that these “stuck” runs are indicative of overly exploitative behavior. Therefore, we recommend a value of κ = 2, so that the optimization is slightly more (but not overly) exploitative than the default value for κ recommended in the library. A strategy that can be used to ensure that the Bayesian optimization process does not become “stuck” at a particular query point selected by the utility function is to simply add some “noise” (jitter) to a selected query point. This will guarantee that the selected query point in the current iteration of the Bayesian optimization is different from a previously selected query point. We have not investigated this option in the current study, but it is a possible extension that will be considered in some future work on turbo-RANS.
§.§ Matern kernel parameters
After selecting the UCB utility function with κ = 2.0, the Matern kernel parameters were optimized. A grid search was completed for values of ν and l in the following ranges: ν ∈ [0,∞) and l ∈ (0,1]. Fig. <ref> shows the results of this grid search. Neither the airfoil nor the periodic hills cases show sensitivity to the length scale value. Performance in the airfoil case is not very sensitive to the ν value, with all values of ν achieving convergence in between 7 and 8 objective function evaluations on average. For the periodic hills case, it becomes clear that ν = 0 does not perform well, whereas ν = 3/2 and 5/2 achieve similar performance. We recommend setting ν = 5/2, which is the default value recommended for the Matern kernel <cit.>. This kernel returns a twice-differentiable function. For the length scale, we recommend a value of l = 0.1. This length scale value specifies the initial guess for conditioning the Gaussian process, and the final length scale inside the kernel is optimized internally in the library. Therefore, only a reasonable initial guess for this hyperparameter needs to be specified. For optimizing the Gaussian process parameters, using a sufficient number of sample points initially before the Bayesian optimization was much more important than the choice of the initial value for the length scale.
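The recommended surrogate and acquisition settings can be sketched as follows. This is a minimal illustration using scikit-learn rather than the specific Bayesian optimization library used in this work, so the interface shown is an assumption of how the pieces fit together, not the turbo-RANS code.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Matern kernel with nu = 5/2 and an initial length scale of 0.1; the
# length scale itself is re-optimized internally whenever the GP is fit.
gp = GaussianProcessRegressor(kernel=Matern(length_scale=0.1, nu=2.5),
                              normalize_y=True)

def ucb(candidates, gp, kappa=2.0):
    """UCB acquisition with the recommended kappa = 2 (slightly more
    exploitative than the library default of 2.576)."""
    mu, sigma = gp.predict(candidates, return_std=True)
    return mu + kappa * sigma

# Usage sketch: fit on the n_s Sobol-sampled (coefficients, f_GEDCP) pairs,
# then propose the candidate that maximizes ucb(candidate_grid, gp).
```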
These results show that the number of initial samples should be increased as the number of optimized parameters increases. Given our experience concerning the errant behavior of the Gaussian process regression with too few initial samples, a conservative default has been set in the turbo-RANS algorithm of n_s = 10. If the end-user wishes to increase performance further, we recommend first reducing the value of n_s, but generally keeping n_s > 5 for maintaining the stability of the Gaussian process regression.
§.§ Discussion
In summary, the purpose of the hyperparameter optimization procedure was to determine optimal settings for Bayesian optimization for typical turbo-RANS objective functions. The hyperparameter search spaces are summarized in Table <ref>, with each hyperparameter combination being evaluated using 30 different random seeds. Each evaluation consisted of a “simulated optimization run”, which used a set of pre-determined RANS calculations, rather than conducting a new RANS calculation for each coefficient suggestion. A run was considered convergent based on proximity to the optimal solution from the pre-determined set of RANS calculations. In total, 7,110 turbo-RANS optimization loops were simulated. The recommended hyperparameters for turbo-RANS are summarized in Table <ref>. The optimal hyperparameter settings likely change on a case-by-case basis, since the highly customizable GEDCP objective function varies depending on the type of reference data used in the calibration and on the default coefficient preference. Nevertheless, the two relatively different objective functions [Eqs. (<ref>) and (<ref>)] have similar optimal hyperparameters. Moreover, the UCB utility function is relatively insensitive to κ for these typical objective functions. As the objective function increases in complexity and the number of optimized coefficients increases, the optimal value of κ changes slightly but remains close to the recommended default value of κ = 2.576 in the library. This value of κ represents a reasonable exploration and exploitation tradeoff for a general case, and the optimal turbo-RANS setting of κ = 2 indicates that a slight preference for exploitation is favourable for turbo-RANS. For different flows, settings in the GEDCP function can depend on the number of coefficients in the problem, so the core optimization problem can vary greatly. While we demonstrated here that the recommended setting (UCB, κ = 2) performs well for different problems, it may not perform well for all cases. The Bayesian optimization hyperparameters should be re-tuned if the convergence speed is unsatisfactory for a different objective function.
§ CONCLUSION
This work focused on an important and widespread optimization problem in engineering flow simulation: calibrating turbulence model coefficients. The major contributions of this work are to propose a framework for Bayesian optimization of turbulence model coefficients (turbo-RANS), propose a general objective function, demonstrate the predictive accuracy improvements that can be achieved, and recommend hyperparameters for future users of turbo-RANS. We presented the turbo-RANS framework, demonstrated how RANS models can be calibrated using various types of reference data to produce more accurate predictive results, and performed a systematic hyperparameter tuning for the Bayesian optimization process. Even with perfectly calibrated model coefficients, there is an upper limit to the predictive accuracy that can be achieved with RANS.
Errors associated with unavoidable assumptions and approximations (model errors) in RANS can severely degrade the accuracy of these predictions. For example, the assumptions of locality can be violated for many real-world flows <cit.>. Duraisamy et al. <cit.> list four main sources of error in RANS: errors arising from ensemble averaging (L1), errors in the functional representation of the Reynolds stress (L2), errors in the model functional form (L3), and errors in the model coefficients (L4). It is possible that for certain classes of flows, the calibration of model coefficients (which reduces the L4 error) can help offset some L2 and L3 errors. However, this effect is difficult to quantify. Nevertheless, even with perfectly calibrated model coefficients, turbo-RANS is only as good as the “best RANS” solution for a particular problem, which is often limited by unrecoverable L1 and L2 errors. Although turbo-RANS can improve the accuracy of a RANS prediction up to some upper bound for a given flow, this upper bound will always exist due to implicit modelling assumptions and approximations used in RANS, which may not hold true for many industrially relevant (complex) flows. Within the wider context of modern machine learning techniques for RANS, we believe that coefficient calibration competes with highly sophisticated data-driven algebraic Reynolds stress closure models. Although the process of calibrating model coefficients is comparatively simpler than training a non-linear eddy viscosity model, the resulting performance competes with these machine learning models. Previous results for the periodic hills case by Wu et al. <cit.>, McConkey et al. <cit.>, and Kaandorp and Dwight <cit.> show similar predictive accuracy to the presently calibrated linear eddy viscosity models. We demonstrated here that simply by tuning model coefficients, there is significant performance to be leveraged from the physics represented in the RANS turbulence closure models. Even further performance can be leveraged from the Bayesian optimization of zonal coefficient values. Currently, Bayesian inference has been applied to infer local turbulence model coefficients, but the forward problem applied to local or zonal model coefficients has not been explored to the authors' knowledge. Despite years of optimization methods being applied to turbulence model calibration, and despite the significant accuracy improvements that are possible, these methods are not yet widely used in industrial settings. We believe this may be due to the complicated implementation and expert background required to perform Bayesian inference. Therefore, by designing an easy-to-use software package, we hope to help spur industrial interest in this technique. All data and code from the present work can be found in the turbo-RANS GitHub repository <cit.>. We also include the 900 periodic hills RANS calculations used for the present hyperparameter tuning investigation, as broader investigations concerned with uncertainty quantification may find this data set useful. Future work will include model coefficient calibration for more challenging flows. The presently proposed GEDCP objective function can be immediately applied to quantify error in an unsteady quantity of interest, such as a Strouhal number, time-averaged force coefficient, or root-mean-square (RMS) force coefficient.
We also aim to explore optimal turbo-RANS hyperparameter tuning for more complex 3D flows and investigate calibrating a RANS turbulence model on a mixture of experimental and numerical reference data. turbo-RANS can also be extended to calibrate coefficients within a Reynolds-stress transport model, or other turbulence closure relations. Ultimately, the algorithm and GEDCP objective function proposed here can be applied to calibrate any coefficient of interest in a RANS model. Currently, CFD best practices such as a mesh independence study, careful numerical scheme selection, designing meshes appropriate for wall treatment, verification, and validation are missing an important step: coefficient calibration. Previously, this would have been a largely manual activity. However, turbo-RANS automates this activity in a highly computationally efficient manner. We envision that, along with validation, calibration can become a more frequently used practice in RANS. With the more widespread use of closure coefficient calibration, future RANS practitioners can potentially generate a coefficient database, and make an educated selection of coefficients to use in their specific application based on the nature of the physics present in their flow. To some extent, this is the idea behind the GEKO turbulence closure model <cit.>. However, the GEKO model still requires user tuning of the model coefficients. With sufficient reporting and the use of calibrated RANS models, a recommendation engine (e.g., a large language model) can be embedded into CFD software of the future. This model can be trained to recommend turbulence model coefficients given a description of the flow physics present and quantities of interest. Such a model relies on more reporting of calibrated closure coefficients, which we aim to make easier with turbo-RANS.
§ ACKNOWLEDGEMENTS
This work was funded by the Tyler Lewis Clean Energy Research Foundation (TLCERF) and the National Sciences and Engineering Research Council of Canada (NSERC). We thank S. Rezaeiravesh for useful discussion regarding the presently proposed techniques. We thank W. Melek for his ongoing feedback on this work.
§ COMPETING INTERESTS
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | http://arxiv.org/abs/2311.15840v1 | {
"authors": [
"Ryley McConkey",
"Nikhila Kalia",
"Eugene Yee",
"Fue-Sang Lien"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231127140529",
"title": "turbo-RANS: Straightforward and Efficient Bayesian Optimization of Turbulence Model Coefficients"
} |
Analysis of spin-squeezing generation in cavity-coupled atomic ensembles with continuous measurements G. Bertaina January 14, 2024 ===================================================================================================== As black-box machine learning models grow in complexity and find applications in high-stakes scenarios, it is imperative to provide explanations for their predictions. Although Local Interpretable Model-agnostic Explanations (LIME) <cit.> is a widely adopted method for understanding model behaviors, it is unstable with respect to random seeds <cit.> and exhibits low local fidelity (i.e., how well the explanation approximates the model's local behaviors) <cit.>. Our study shows that this instability problem stems from small sample weights, leading to the dominance of regularization and slow convergence. Additionally, LIME's sampling neighborhood is non-local and biased towards the reference, resulting in poor local fidelity and sensitivity to reference choice. To tackle these challenges, we introduce Glime, an enhanced framework extending LIME and unifying several prior methods. Within the Glime framework, we derive an equivalent formulation of LIME that achieves significantly faster convergence and improved stability. By employing a local and unbiased sampling distribution, Glime generates explanations with higher local fidelity compared to LIME. Glime explanations are independent of reference choice. Moreover, Glime offers users the flexibility to choose a sampling distribution based on their specific scenarios.
§ INTRODUCTION
Why is a patient predicted to have a brain tumor <cit.>? Why is a credit application rejected <cit.>? Why is a picture identified as an electric guitar <cit.>? As black-box machine learning models continue to evolve in complexity and are employed in critical applications, it is imperative to provide explanations for their predictions, making interpretability a central concern <cit.>. In response to this imperative, various explanation methods have been proposed <cit.>, aiming to provide insights into the internal mechanisms of deep learning models. Among the various explanation methods, Local Interpretable Model-agnostic Explanations (LIME) <cit.> has attracted significant attention, particularly in image classification tasks. LIME explains predictions by assigning each region within an image a weight indicating the influence of this region on the output. This methodology entails segmenting the image into super-pixels, as illustrated in the lower-left portion of <ref>, introducing perturbations, and subsequently approximating the local model prediction using a linear model. The approximation is achieved by solving a weighted Ridge regression problem, which estimates the impact (i.e., weight) of each super-pixel on the classifier's output. Nevertheless, LIME has encountered significant instability due to its random sampling procedure <cit.>. In LIME, a set of samples perturbing the original image is taken. As illustrated in <ref>, LIME explanations generated with two different random seeds display notable disparities, despite using a large sample size (16384). The Jaccard index, measuring similarity between two explanations on a scale from 0 to 1 (with higher values indicating better similarity), is below 0.4. While many prior studies aim to enhance LIME's stability, some sacrifice computational time for stability <cit.>, and others may entail the risk of overfitting <cit.>.
The evident drawback of unstable explanations lies in their potential to mislead end-users and hinder the identification of model bugs and biases, given that LIME explanations lack consistency across different random seeds. In addition to its inherent instability, LIME has been found to have poor local fidelity <cit.>. As depicted in <ref>, the R^2 value for LIME on the sample image approaches zero (refer also to <ref>). This problem arises from the non-local and skewed sampling space of LIME, which is biased towards the reference. More precisely, the sampling space of LIME consists of the corner points of the hypercube defined by the explained instance and the selected reference. For instance, in the left section of <ref>, only four red points fall within LIME's sampling space, yet these points are distant from 𝐱. As illustrated in <ref>, the L_2 distance between the LIME samples of an input 𝐱 and 𝐱 itself is approximately 0.7‖𝐱‖_2 on ImageNet. Although LIME incorporates a weighting function to enforce locality, an explanation cannot be considered local if the samples themselves are non-local, leading to a lack of local fidelity in the explanation. Moreover, the hypercube exhibits bias towards the reference, resulting in explanations designed to explain only a portion of the local neighborhood. This bias causes LIME to generate different explanations for different references, as illustrated in Figure <ref> (refer to <ref> for more analysis and results). To tackle these challenges, we present Glime, a local explanation framework that generalizes LIME and five other methods: KernelSHAP <cit.>, SmoothGrad <cit.>, Gradient <cit.>, DLIME <cit.>, and ALIME <cit.>. Through a flexible sample distribution design, Glime produces explanations that are more stable and faithful. Addressing LIME's instability issue, within Glime, we derive an equivalent form of LIME, denoted as Glime-Binomial, by integrating the weighting function into the sampling distribution. Glime-Binomial ensures exponential convergence acceleration compared to LIME when the regularization term is present. Consequently, Glime-Binomial demonstrates improved stability compared to LIME while preserving superior local fidelity (see <ref>). Furthermore, Glime enhances both local fidelity and stability by sampling from a local distribution independent of any specific reference point. In summary, our contributions can be outlined as follows:
* We conduct an in-depth analysis to find the source of LIME's instability, revealing the interplay between the weighting function and the regularization term as the primary cause. Additionally, we attribute LIME's suboptimal local fidelity to its non-local and biased sampling space.
* We introduce Glime as a more general local explanation framework, offering a flexible design for the sampling distribution. With varying sampling distributions and weights, Glime serves as a generalization of LIME and five other preceding local explanation methods.
* By integrating weights into the sampling distribution, we present a specialized instance of Glime with a binomial sampling distribution, denoted as Glime-Binomial. We demonstrate that Glime-Binomial, while maintaining equivalence to LIME, achieves faster convergence with significantly fewer samples. This indicates that enforcing locality in the sampling distribution is better than using a weighting function.
* With regard to local fidelity, Glime empowers users to devise explanation methods that exhibit greater local fidelity.
This is achieved by selecting a local and unbiased sampling distribution tailored to the specific scenario in which Glime is applied.
§ PRELIMINARY
§.§ Notations
Let 𝒳 and 𝒴 denote the input and output spaces, respectively, where 𝒳⊂ℝ^D and 𝒴⊂ℝ. We specifically consider the scenario in which 𝒳 represents the space of images, and f:𝒳→𝒴 serves as a machine learning model accepting an input 𝐱∈𝒳. This study focuses on the classification problem, wherein f produces the probability that the image belongs to a certain class, resulting in 𝒴 = [0,1]. Before proceeding with explanation computations, a set of features {s_i}_i=1^d is derived by applying a transformation to 𝐱. For instance, {s_i}_i=1^d could represent image segments (also referred to as super-pixels in LIME) or feature maps obtained from a convolutional neural network. Alternatively, {s_i}_i=1^d may correspond to raw features, i.e., 𝐱 itself. In this context, ‖·‖_0, ‖·‖_1, and ‖·‖_2 denote the ℓ_0, ℓ_1, and ℓ_2 norms, respectively, with ⊙ representing the element-wise product. Boldface letters are employed to denote vectors and matrices, while non-boldface letters represent scalars or features. B_𝐱(ϵ) denotes the ball centered at 𝐱 with radius ϵ.
§.§ A brief introduction to LIME
In this section, we present the original definition and implementation of LIME <cit.> in the context of image classification. LIME, as a local explanation method, constructs a linear model when provided with an input 𝐱 that requires an explanation. The coefficients of this linear model serve as the feature importance explanation for 𝐱. Features. For an input 𝐱, LIME computes a feature importance vector for the set of features. In the image classification setting, for an image 𝐱, LIME initially segments 𝐱 into super-pixels s_1, …, s_d using a segmentation algorithm such as Quickshift <cit.>. Each super-pixel is regarded as a feature for the input 𝐱. Sample generation. Subsequently, LIME generates samples within the local vicinity of 𝐱 as follows. First, random samples are generated uniformly from {0,1}^d. The j-th coordinate z_j^' for each sample 𝐳^' is either 1 or 0, indicating the presence or absence of the super-pixel s_j. When s_j is absent, it is replaced by a reference value r_j. Common choices for the reference value include a black image, a blurred version of the super-pixel, or the average value of the super-pixel <cit.>. Then, these 𝐳^' samples are transformed into samples in the original input space ℝ^D by combining them with 𝐱 = (s_1, …, s_d) using the element-wise product as follows: 𝐳 = 𝐱⊙𝐳^' + 𝐫⊙(1 - 𝐳^'), where 𝐫 is the vector of reference values for each super-pixel, and ⊙ represents the element-wise product. In other words, 𝐳∈𝒳 is an image that is the same as 𝐱, except that those super-pixels s_j with z_j^' = 0 are replaced by reference values. Feature attributions. For each sample 𝐳^' and the corresponding image 𝐳, we compute the prediction f(𝐳). Finally, LIME solves the following regression problem to obtain a feature importance vector (also known as feature attributions) for the super-pixels: 𝐰^LIME = min_𝐯 𝔼_𝐳^'∼Uni({0,1}^d) [π(𝐳^')(f(𝐳) - 𝐯^⊤𝐳^')^2] + λ‖𝐯‖_2^2, where 𝐳 = 𝐱⊙𝐳^' + 𝐫⊙(1 - 𝐳^'), π(𝐳^') = exp{-‖1 - 𝐳^'‖_2^2/σ^2}, and σ is the kernel width parameter. In practice, we draw samples {𝐳_i^'}_i=1^n from the uniform distribution Uni({0,1}^d) to estimate the expectation in <ref>. In the original LIME implementation <cit.>, λ = α/n for a constant α > 0. This choice has been widely adopted in prior studies <cit.>.
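The empirical version of this weighted ridge problem has a closed-form solution. The sketch below is a minimal, self-contained illustration (not the LIME library code): each super-pixel is treated as a single scalar feature so that f can be called directly on the combined vector, whereas the real implementation upsamples the binary mask to image resolution. With λ = α/n, the normal equations reduce to (Z^⊤WZ + αI)^-1 Z^⊤Wy.

```python
import numpy as np

def lime_explain(f, x, r, n_samples=1000, sigma=0.25, alpha=1.0, seed=0):
    """Minimal sketch of the empirical LIME estimate."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    Z = rng.integers(0, 2, size=(n_samples, d)).astype(float)   # z' ~ Uni({0,1}^d)
    y = np.array([f(x * z + r * (1 - z)) for z in Z])           # f on perturbed inputs
    w = np.exp(-np.sum((1 - Z) ** 2, axis=1) / sigma**2)        # weights pi(z')
    A = (Z * w[:, None]).T @ Z + alpha * np.eye(d)              # Z^T W Z + alpha I
    b = (Z * w[:, None]).T @ y                                  # Z^T W y
    return np.linalg.solve(A, b)
```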
We use 𝐰̂^LIME to represent the empirical estimation of 𝐰^LIME.
§.§ LIME is unstable and has poor local fidelity
Instability. To capture the local characteristics of the neighborhood around the input 𝐱, LIME utilizes the sample weighting function π(·) to assign low weights to samples that exclude numerous super-pixels and, consequently, are located far from 𝐱. The parameter σ controls the level of locality, with a small σ assigning high weights exclusively to samples very close to 𝐱 and a large σ permitting notable weights for samples farther from 𝐱 as well. The default value for σ in LIME is 0.25 for image data. However, as depicted in <ref>, LIME demonstrates instability, a phenomenon also noted in prior studies <cit.>. As shown in <ref>, this instability arises from small σ values, leading to very small sample weights and, consequently, slow convergence. Poor local fidelity. LIME also suffers from poor local fidelity <cit.>. The sampling space of LIME is depicted in <ref>. Generally, the samples in LIME exhibit considerable distance from the instance being explained, as illustrated in <ref>, rendering them non-local. Despite LIME's incorporation of weights to promote locality, it fails to provide accurate explanations for local behaviors when the samples themselves lack local proximity. Moreover, the sampling space of LIME is influenced by the reference, resulting in a biased sampling space and a consequent degradation of local fidelity.
§ A GENERAL LOCAL EXPLANATION FRAMEWORK: GLIME
§.§ The definition of Glime
We first present the definition of Glime and show how it computes the explanation vector 𝐰^Glime. Analogous to LIME, Glime functions by constructing a model within the neighborhood of the input 𝐱, utilizing sampled data from this neighborhood. The coefficients obtained from this model are subsequently employed as the feature importance explanation for 𝐱. Feature space. For the provided input 𝐱∈𝒳⊂ℝ^D, the feature importance explanation is computed for a set of features 𝐬 = (s_1, …, s_d) derived from applying a transformation to 𝐱. These features 𝐬 can represent image segments (referred to as super-pixels in LIME) or feature maps obtained from a convolutional neural network. Alternatively, the features 𝐬 can correspond to raw features, i.e., the individual pixels of 𝐱. In the context of LIME, the method specifically operates on super-pixels. Sample generation. Given features 𝐬, a sample 𝐳^' can be generated from the distribution 𝒫 defined on the feature space (e.g., 𝐬 are super-pixels segmented by a segmentation algorithm such as Quickshift <cit.> and 𝒫 = Uni({0,1}^d) in LIME). It is important to note that 𝐳^' may not belong to 𝒳 and cannot be directly input into the model f. Consequently, we reconstruct 𝐳∈ℝ^D in the original input space for each 𝐳^' and obtain f(𝐳) (in LIME, a reference 𝐫 is first chosen and then 𝐳 = 𝐱⊙𝐳^' + 𝐫⊙(1 - 𝐳^')). Both 𝐳 and 𝐳^' are then utilized to compute feature attributions. Feature attributions. For each sample 𝐳^' and its corresponding 𝐳, we compute the prediction f(𝐳). Our aim is to approximate the local behaviors of f around 𝐱 using a function g that operates on the feature space. g can take various forms such as a linear model, a decision tree, or any Boolean function operating on Fourier bases <cit.>. The loss function ℓ(f(𝐳), g(𝐳^')) quantifies the approximation gap for the given sample 𝐳^'. In the case of LIME, g(𝐳^') = 𝐯^⊤𝐳^', and ℓ(f(𝐳), g(𝐳^')) = (f(𝐳) - g(𝐳^'))^2.
To derive feature attributions, the following optimization problem is solved: 𝐰^Glime = min_𝐯 𝔼_𝐳^'∼𝒫 [π(𝐳^') ℓ(f(𝐳), g(𝐳^'))] + λ R(𝐯), where π(·) is a weighting function and R(·) serves as a regularization function, e.g., ‖·‖_1 or ‖·‖_2^2 (which is used by LIME). We use 𝐰̂^Glime to represent the empirical estimation of 𝐰^Glime. Connection with Existing Frameworks. Our formulation exhibits similarities with previous frameworks <cit.>. The generality of Glime stems from two key aspects: (1) Glime operates within a broader feature space ℝ^d, in contrast to <cit.>, which is constrained to {0,1}^d, and <cit.>, which is confined to raw features in ℝ^D. (2) Glime can accommodate a more extensive range of distribution choices tailored to specific use cases.
§.§ An alternative formulation of Glime without the weighting function
Indeed, we can readily transform <ref> into an equivalent formulation without the weighting function. While this adjustment simplifies the formulation, it also accelerates convergence by sampling from the transformed distribution (see <ref> and <ref>). Specifically, we define the transformed sampling distribution as 𝒫̃(𝐳^') = π(𝐳^')𝒫(𝐳^')/∫π(𝐭)𝒫(𝐭)d𝐭. Utilizing 𝒫̃ as the sampling distribution, <ref> can be equivalently expressed as 𝐰^Glime = min_𝐯 𝔼_𝐳^'∼𝒫̃ [ℓ(f(𝐳), g(𝐳^'))] + (λ/Z) R(𝐯), where Z = 𝔼_𝐭∼𝒫[π(𝐭)] is the normalization constant of 𝒫̃. It is noteworthy that the feature attributions obtained by solving <ref> are equivalent to those obtained by solving <ref> (see <ref> for a formal proof). Therefore, the use of π(·) in the formulation is not necessary and can be omitted. Hence, unless otherwise specified, Glime refers to the framework without the weighting function.
§.§ Glime unifies several previous explanation methods
This section shows how Glime unifies previous methods. For a comprehensive understanding of the background regarding these methods, kindly refer to <ref>. LIME <cit.> and Glime-Binomial. In the case of LIME, it initiates the explanation process by segmenting pixels x_1, ⋯, x_D into super-pixels s_1, ⋯, s_d. The binary vector 𝐳^'∼𝒫 = Uni({0,1}^d) signifies the absence or presence of the corresponding super-pixels. Subsequently, 𝐳 = 𝐱⊙𝐳^' + 𝐫⊙(1-𝐳^'). The linear model g(𝐳^') = 𝐯^⊤𝐳^' is defined on {0,1}^d. For image explanations, ℓ(f(𝐳), g(𝐳^')) = (f(𝐳) - g(𝐳^'))^2, and the default setting is π(𝐳^') = exp(-‖1 - 𝐳^'‖_0/σ^2), R(𝐯) = ‖𝐯‖_2^2 <cit.>. Remarkably, LIME is equivalent to the special case Glime-Binomial without the weighting function (see <ref> for the formal proof). The sampling distribution of Glime-Binomial is defined as 𝒫(𝐳^', ‖𝐳^'‖_0 = k) = e^{k/σ^2}/(1+e^{1/σ^2})^d, where k = 0,1,…,d. This distribution is essentially a Binomial distribution. To generate a sample 𝐳^'∈{0,1}^d, one can independently draw z_i^'∈{0,1} with ℙ(z_i^' = 1) = 1/(1+e^{-1/σ^2}) for i = 1,…,d. The feature importance vector obtained by solving <ref> under Glime-Binomial is denoted as 𝐰^Binomial. KernelSHAP <cit.>. In our framework, the formulation of KernelSHAP aligns with that of LIME, with the only difference being R(𝐯) = 0 and π(𝐳^') = (d-1)/(C(d, ‖𝐳^'‖_0) · ‖𝐳^'‖_0 · (d - ‖𝐳^'‖_0)), where C(d,k) denotes the binomial coefficient. SmoothGrad <cit.>. SmoothGrad functions on raw features, specifically pixels in the case of an image. Here, 𝐳 = 𝐳^' + 𝐱, where 𝐳^'∼𝒩(0, σ^2 𝐈). The loss function ℓ(f(𝐳), g(𝐳^')) is represented by the squared loss, while π(𝐳^') = 1 and R(𝐯) = 0, as established in <ref>. Gradient <cit.>. The Gradient explanation is essentially the limit of SmoothGrad as σ approaches 0. DLIME <cit.>.
DLIME functions on raw features, where 𝒫 is defined over the training data that have the same label as the nearest neighbor of 𝐱. The linear model g(𝐳^') = 𝐯^⊤𝐳^' is employed with the square loss function ℓ and the regularization term R(𝐯) = 0. ALIME <cit.>. ALIME employs an auto-encoder trained on the training data, with its feature space defined as the output space of the auto-encoder. The sample generation process involves introducing Gaussian noise to 𝐱. The weighting function in ALIME is denoted as π(𝐳^') = exp(-‖𝒜ℰ(𝐱) - 𝒜ℰ(𝐳^')‖_1), where 𝒜ℰ(·) represents the auto-encoder. The squared loss function is chosen as the loss function and no regularization function is applied.
§ STABLE AND LOCALLY FAITHFUL EXPLANATIONS IN GLIME
§.§ Glime-Binomial converges exponentially faster than LIME
To understand the instability of LIME, we demonstrate that the sample weights in LIME are very small, resulting in the domination of the regularization term. Consequently, LIME tends to produce explanations that are close to zero. Additionally, the small weights in LIME lead to a considerably slower convergence compared to Glime-Binomial, despite both methods converging to the same limit. Small sample weights in LIME. The distribution of the ratio of non-zero elements to the total number of super-pixels, along with the corresponding weights for LIME and Glime-Binomial, is depicted in <ref>. Notably, most samples exhibit approximately d/2 non-zero elements. However, when σ takes values such as 0.25 or 0.5, a significant portion of samples attains weights that are nearly zero. For instance, when σ = 0.25 and ‖𝐳^'‖_0 = d/2, π(𝐳^') reduces to exp(-8d), which is approximately 10^-70 for d = 20. Even with ‖𝐳^'‖_0 = d-1, π(𝐳^') equals e^-16, approximating 10^-7. Since LIME samples from Uni({0,1}^d), the probability that a sample 𝐳^' has ‖𝐳^'‖_0 = d-1 or d is approximately 2× 10^-5 when d = 20. Therefore, most samples have very small weights. Consequently, the sample estimation of the expectation in <ref> tends to be much smaller than the true expectation with high probability and is thus inaccurate (see <ref> for more details). Given the default regularization strength λ = 1, this imbalance implies the domination of the regularization term in the objective function of <ref>. As a result, LIME tends to yield explanations close to zero in such cases, diminishing their meaningfulness and leading to instability. Glime converges exponentially faster than LIME in the presence of regularization. Through the integration of the weighting function into the sampling process, every sample uniformly carries a weight of 1, contributing equally to <ref>. Our analysis reveals that Glime requires substantially fewer samples than LIME to transition beyond the regime where the regularization term dominates. Consequently, Glime-Binomial converges exponentially faster than LIME. Recall that 𝐰̂^LIME and 𝐰̂^Glime represent the empirical solutions of <ref> and <ref>, respectively, obtained by replacing the expectations with the sample average. 𝐰̂^Binomial is the empirical solution of <ref> with the transformed sampling distribution 𝒫(𝐳^', ‖𝐳^'‖_0 = k) = e^{k/σ^2}/(1+e^{1/σ^2})^d, where k = 0,1,⋯,d.
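Because the transformed distribution factorizes over coordinates, Glime-Binomial is as easy to sample as LIME. The sketch below mirrors the hypothetical LIME sketch given earlier (same simplified masking convention); note that every sample now enters the ridge problem with weight 1.

```python
import numpy as np

def glime_binomial_explain(f, x, r, n_samples=1000, sigma=0.25, alpha=1.0, seed=0):
    """Minimal sketch of the empirical Glime-Binomial estimate."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    p = 1.0 / (1.0 + np.exp(-1.0 / sigma**2))        # P(z_i' = 1)
    Z = (rng.random((n_samples, d)) < p).astype(float)
    y = np.array([f(x * z + r * (1 - z)) for z in Z])
    A = Z.T @ Z + alpha * np.eye(d)                  # unweighted ridge
    b = Z.T @ y
    return np.linalg.solve(A, b)
```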
In the subsequent theorem, we present the sample complexity bound for LIME (refer to <ref> for the proof). theorem]thm:over_regularization_lime Suppose samples {𝐳_i^'}_{i=1}^n ∼ Uni({0,1}^d) are used to compute the LIME explanation. For any ϵ>0, δ∈(0,1), if n = Ω(ϵ^{-2} d^9 2^{8d} e^{8/σ^2} log(4d/δ)) and λ ≤ n, we have ℙ(‖𝐰̂^LIME - 𝐰^LIME‖_2 < ϵ) ≥ 1 - δ, where 𝐰^LIME = lim_{n→∞} 𝐰̂^LIME.

Next, we present the sample complexity bound for Glime (refer to <ref> for the proof). theorem]thm:over_regularization_generallime Suppose 𝐳^'∼𝒫 is such that the largest eigenvalue of 𝐳^'(𝐳^')^⊤ is bounded by R, 𝔼[𝐳^'(𝐳^')^⊤] = (α_1 - α_2)𝐈 + α_2 𝟏𝟏^⊤, ‖Var(𝐳^'(𝐳^')^⊤)‖_2 ≤ ν^2, and |(𝐳^' f(𝐳))_i| ≤ M for some M>0. Let {𝐳_i^'}_{i=1}^n be i.i.d. samples from 𝒫, used to compute the Glime explanation 𝐰̂^Glime. For any ϵ>0, δ∈(0,1), if n = Ω(ϵ^{-2} M^2 ν^2 d^3 γ^4 log(4d/δ)), where γ is a function of λ, d, α_1, α_2, we have ℙ(‖𝐰̂^Glime - 𝐰^Glime‖_2 < ϵ) ≥ 1 - δ, where 𝐰^Glime = lim_{n→∞} 𝐰̂^Glime.

Since Glime-Binomial samples from a binomial distribution, which is sub-Gaussian with parameters M = √d, ν = 2, α_1 = 1/(1+e^{-1/σ^2}), α_2 = 1/(1+e^{-1/σ^2})^2, and γ(α_1, α_2, d) = d e^{2/σ^2} (refer to <ref> for the proof), we derive the following corollary: corollary]coro:concentrate_GLIME_Binomial Suppose {𝐳_i^'}_{i=1}^n are i.i.d. samples from 𝒫̃(𝐳^' : ‖𝐳^'‖_0=k) = e^{k/σ^2}/(1+e^{1/σ^2})^d, k=0,1,…,d, used to compute the Glime-Binomial explanation. For any ϵ>0, δ∈(0,1), if n = Ω(ϵ^{-2} d^5 e^{4/σ^2} log(4d/δ)), we have ℙ(‖𝐰̂^Binomial - 𝐰^Binomial‖_2 < ϵ) ≥ 1 - δ, where 𝐰^Binomial = lim_{n→∞} 𝐰̂^Binomial.

Comparing the sample complexities outlined in <ref> and <ref>, it becomes evident that LIME requires a number of samples that is larger than that of Glime-Binomial by a factor exponential in d and σ^{-2}. Although both LIME and Glime-Binomial samples are defined on the binary set {0,1}^d, the weight π(𝐳^') associated with a sample 𝐳^' in LIME is notably small. Consequently, the squared loss term in LIME is significantly diminished compared to that in Glime-Binomial. This results in the domination of the regularization term over the squared loss term, leading to solutions that are close to zero. For stable solutions, it is crucial that the squared loss term be comparable to the regularization term. Consequently, Glime-Binomial requires significantly fewer samples than LIME to achieve stability.

§.§ Designing locally faithful explanation methods within Glime section]sec:local_unbiased

Non-local and biased sampling in LIME. LIME employs uniform sampling from {0,1}^d and subsequently maps the samples to the original input space 𝒳 with the inclusion of a reference. Despite the integration of a weighting function to enhance locality, the samples {𝐳_i}_{i=1}^n generated by LIME often exhibit non-local characteristics, limiting their efficacy in capturing the local behaviors of the model f (as depicted in <ref>). This observation aligns with findings in <cit.>, which demonstrate that LIME frequently approximates the global behaviors instead of the local behaviors of f. As illustrated earlier, the weighting function contributes to LIME's instability, emphasizing the need for explicit enforcement of locality in the sampling process.

Local and unbiased sampling in Glime. In response to these challenges, Glime introduces a sampling procedure that explicitly enforces locality without reliance on a reference point. One approach involves sampling 𝐳^'∼𝒫 = 𝒩(0, σ^2 𝐈) and subsequently setting 𝐳 = 𝐱 + 𝐳^'. This method, referred to as Glime-Gauss, uses the weighting function π(·) ≡ 1, with the other components chosen to mirror those of LIME (a minimal sketch of the resulting estimator is given below). The feature attributions derived from this approach successfully mitigate the aforementioned issues.
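As a concrete reference for the Glime-Gauss procedure just described, here is a minimal sketch (an illustration under our own assumptions, not the paper's released implementation). It draws 𝐳' ∼ 𝒩(0, σ²𝐈), evaluates the black box at 𝐳 = 𝐱 + 𝐳', and solves the ridge problem in the closed form used in the proofs, 𝐰̂ = (Σ_n + (λ/n)𝐈)^{-1} Γ_n with Σ_n = (1/n)Σᵢ 𝐳ᵢ'(𝐳ᵢ')^⊤ and Γ_n = (1/n)Σᵢ 𝐳ᵢ' f(𝐳ᵢ); the toy black box is a stand-in.

```python
import numpy as np

def glime_gauss(f, x, sigma=0.5, n=5000, lam=1.0, seed=0):
    """Empirical Glime-Gauss explanation: pi ≡ 1, squared loss, ridge penalty."""
    rng = np.random.default_rng(seed)
    zp = rng.normal(scale=sigma, size=(n, x.size))     # z' ~ N(0, sigma^2 I)
    y = np.array([f(x + z) for z in zp])               # black-box outputs at z = x + z'
    Sigma_n = zp.T @ zp / n + (lam / n) * np.eye(x.size)
    Gamma_n = zp.T @ y / n
    return np.linalg.solve(Sigma_n, Gamma_n)

f = lambda v: np.sin(v[0]) + 2.0 * v[1]                # toy black box
x = np.array([0.3, -1.2])
print(glime_gauss(f, x))   # approximately the (smoothed) local gradient of f at x
```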
Similarly, alternative distributions, such as 𝒫 = Laplace(0, σ) or 𝒫 = Uni([-σ, σ]^d), can be employed, resulting in explanation methods known as Glime-Laplace and Glime-Uniform, respectively. §.§ Sampling distribution selection for user-specific objectives Users may possess specific objectives they wish the explanation method to fulfill. For instance, if a user seeks to enhance local fidelity within a neighborhood of radius ϵ, they can choose a distribution and corresponding parameters aligned with this objective (as depicted in <ref>). The flexible design of the sample distribution in Glime empowers users to opt for a distribution that aligns with their particular use cases. Furthermore, within the Glime framework, it is feasible to integrate feature correlation into the sampling distribution, providing enhanced flexibility. In summary, Glime affords users the capability to make more tailored choices based on their individual needs and objectives. § EXPERIMENTS section]sec:experiments Dataset and models. Our experiments are conducted on the ImageNet dataset[Code is available at <https://github.com/thutzr/GLIME-General-Stable-and-Local-LIME-Explanation>]. Specifically, we randomly choose 100 classes and select an image at random from each class. The models chosen for explanation are ResNet18 <cit.> and the tiny Swin-Transformer <cit.> (refer to <ref> for results). Our implementation is derived from the official implementation of LIME[<https://github.com/marcotcr/lime>]. The default segmentation algorithm in LIME, Quickshift <cit.>, is employed. Implementation details of our experiments are provided in <ref>. For experiment results on text data, please refer to <ref>. For experiment results on ALIME, please refer to <ref>. Metrics. (1) Stability: To gauge the stability of an explanation method, we calculate the average top-K Jaccard Index (JI) for explanations generated by 10 different random seeds. Let w_1, …, w_10 denote the explanations obtained from 10 random seeds. The indices corresponding to the top-K largest values in w_i are denoted as R_i,:K. The average Jaccard Index between pairs of R_i,:K and R_j,:K is then computed, where JI(A,B) = |A∩ B|/|A∪ B|.(2) Local Fidelity: To evaluate the local fidelity of explanations, reflecting how well they capture the local behaviors of the model, we employ two approaches. For LIME, which uses a non-local sampling neighborhood, we use the R^2 score returned by the LIME implementation for local fidelity assessment <cit.>. Within Glime, we generate samples {z_i}_i=1^m and {z_i^'}_i=1^m from the neighborhood B_x(ϵ). The squared difference between the model's output and the explanation's output on these samples is computed. Specifically, for a sample z, we calculate (f(z) - ŵ^⊤z^')^2 for the explanation ŵ. The local fidelity of an explanation ŵ at the input x is defined as 1/(1 + 1/m∑_i (f(z_i) - ŵ^⊤z_i^')^2), following the definition in <cit.>. To ensure a fair comparison between different distributions in Glime, we set the variance parameter of each distribution to match that of the Gaussian distribution. For instance, when sampling from the Laplace distribution, we use Laplace(0, σ/√(2)), and when sampling from the uniform distribution, we use Uni([-√(3)σ, √(3)σ]^d). §.§ Stability of LIME and GlimeLIME's instability and the influence of regularization/weighting. 
In <ref>, it is evident that LIME without the weighting function (LIME+π=1) demonstrates greater stability compared to its weighted counterpart, especially when σ is small (e.g., σ=0.25, 0.5). This implies that the weighting function contributes to instability in LIME. Additionally, we observe that LIME without regularization (LIME+λ=0) exhibits higher stability than the regularized LIME, although the improvement is not substantial. This is because, when σ is small, the sample weights approach zero, causing the Ridge regression problem to become low-rank, leading to unstable solutions. Conversely, when σ is large, significant weights are assigned to all samples, reducing the effectiveness of regularization. For instance, when σ=5 and d=40, most samples carry weights around 0.45, and even samples with only one non-zero element left possess weights of approximately 0.2. In such scenarios, the regularization term does not dominate, even with limited samples. This observation is substantiated by the comparable performance of LIME, LIME+π=1, and LIME+λ=0 when σ=1 and 5. Further results are presented in <ref>.Enhancing stability in LIME with Glime. In <ref>, it is evident that LIME achieves a Jaccard Index of approximately 0.4 even with over 2000 samples when using the default σ=0.25. In contrast, both Glime-Binomial and Glime-Gauss provide stable explanations with only 200-400 samples. Moreover, with an increase in the value of σ, the convergence speed of LIME also improves. However, Glime-Binomial consistently outperforms LIME, requiring fewer samples for comparable stability. The logarithmic scale of the horizontal axis in <ref> highlights the exponential faster convergence of Glime compared to LIME.Convergence of LIME and Glime-Binomial to a common limit. In <ref> of <ref>, we explore the difference and correlation between explanations generated by LIME and Glime-Binomial. Mean Squared Error (MSE) and Mean Absolute Error (MAE) are employed as metrics to quantify the dissimilarity between the explanations, while Pearson correlation and Spearman rank correlation assess their degree of correlation. As the sample size increases, both LIME and Glime-Binomial exhibit greater similarity and higher correlation. The dissimilarity in their explanations diminishes rapidly, approaching zero when σ is significantly large (e.g., σ=5). §.§ Local fidelity of LIME and GlimeEnhancing local fidelity with Glime. A comparison of the local fidelity between LIME and the explanation methods generated by Glime is presented in <ref>. Utilizing 2048 samples for each image to compute the R^2 score, Glime consistently demonstrates superior local fidelity compared to LIME. Particularly, when σ=0.25 and 0.5, LIME exhibits local fidelity that is close to zero, signifying that the linear approximation model (ŵ^LIME)^⊤z^' is nearly constant. Through the explicit integration of locality into the sampling process, Glime significantly improves the local fidelity of the explanations.Local fidelity analysis of Glime under various sampling distributions. In <ref>, we assess the local fidelity of Glime employing diverse sampling distributions: 𝒩(0, σ^2I), Laplace(0, σ/√(2)), and Uni([-√(3)σ, √(3)σ]^d). The title of each sub-figure corresponds to the standard deviation of these distributions. Notably, we observe that the value of σ does not precisely align with the radius ϵ of the intended local neighborhood for explanation. Instead, local fidelity tends to peak at larger ϵ values than the corresponding σ. 
Moreover, different sampling distributions achieve optimal local fidelity for distinct ϵ values. This underscores the significance of selecting an appropriate distribution and parameter values based on the specific radius ϵ of the local neighborhood requiring an explanation. Unlike LIME, Glime provides the flexibility to accommodate such choices. For additional results and analysis, please refer to <ref>. §.§ Human experimentsIn addition to numerical experiments, we conducted human-interpretability experiments to evaluate whether Glime provides more meaningful explanations to end-users. The experiments consist of two parts, with 10 participants involved in each. The details of the procedures employed in conducting the experiments is presented in the following. * Can Glime improve the comprehension of the model's predictions? To assess this, we choose images for which the model's predictions are accurate. Participants are presented with the original images, accompanied by explanations generated by both LIME and Glime. They are then asked to evaluate the degree of alignment between the explanations from these algorithms and their intuitive understanding. Using a 1-5 scale, where 1 indicates a significant mismatch and 5 signifies a strong correspondence, participants rate the level of agreement.* Can Glime assist in identifying the model's errors? To explore this, we select images for which the model's predictions are incorrect. Participants receive the original images along with explanations generated by both LIME and Glime. They are then asked to assess the degree to which these explanations aid in understanding the model's behaviors and uncovering the reasons behind the inaccurate predictions. Using a 1-5 scale, where 1 indicates no assistance and 5 signifies substantial aid, participants rate the level of support provided by the explanations.<ref> presents the experimental results. When participants examined images with accurate model predictions, along with explanations from LIME and Glime, they assigned an average score of 2.96 to LIME and 3.37 to Glime. On average, Glime received a score 0.41 higher than LIME. Notably, in seven out of the ten instances, Glime achieved a higher average score than LIME.In contrast, when participants examined images with incorrect model predictions, accompanied by explanations from LIME and Glime, they assigned an average score of 2.33 to LIME and 3.42 to Glime. Notably, Glime outperformed LIME with an average score 1.09 higher across all ten images. These results strongly indicate that Glime excels in explaining the model's behaviors. § RELATED WORKPost-hoc local explanation methods. In contrast to inherently interpretable models, black-box models can be explained through post-hoc explanation methods, which are broadly categorized as model-agnostic or model-specific. Model-specific approaches, such as Gradient <cit.>, SmoothGrad <cit.>, and Integrated Gradient <cit.>, assume that the explained model is differentiable and that gradient access is available. For instance, SmoothGrad generates samples from a Gaussian distribution centered at the given input and computes their average gradient to mitigate noise. On the other hand, model-agnostic methods, including LIME <cit.> and Anchor <cit.>, aim to approximate the local model behaviors using interpretable models, such as linear models or rule lists. 
Another widely-used model-agnostic method, SHAP <cit.>, provides a unified framework that computes feature attributions based on the Shapley value and adheres to several axioms. Instability of LIME. Despite being widely employed, LIME is known to be unstable, evidenced by divergent explanations under different random seeds <cit.>. Many efforts have been devoted to stabilize LIME explanations. Zafar et al. <cit.> introduced a deterministic algorithm that utilizes hierarchical clustering for grouping training data and k-nearest neighbors for selecting relevant data samples. However, the resulting explanations may not be a good local approximation. Addressing this concern, Shankaranarayana et al. <cit.> trained an auto-encoder to function as a more suitable weighting function in LIME. Shi et al. <cit.> incorporated feature correlation into the sampling step and considered a more restricted sampling distribution, thereby enhancing stability. Zhou et al. <cit.> employed a hypothesis testing framework to determine the necessary number of samples for ensuring stable explanations. However, this improvement came at the expense of a substantial increase in computation time.Impact of references. LIME, along with various other explanation methods, relies on references (also known as baseline inputs) to generate samples. References serve as uninformative inputs meant to represent the absence of features <cit.>. Choosing an inappropriate reference can lead to misleading explanations. For instance, if a black image is selected as the reference, important black pixels may not be highlighted <cit.>. The challenge lies in determining the appropriate reference, as different types of references may yield different explanations <cit.>. In <cit.>, both black and white references are utilized, while <cit.> employs constant, noisy, and Gaussian blur references simultaneously. To address the reference specification issue, <cit.> proposes Expected Gradient, considering each instance in the data distribution as a reference and averaging explanations computed across all references.§ CONCLUSION In this paper, we introduce Glime, a novel framework that extends the LIME method for local feature importance explanations. By explicitly incorporating locality into the sampling procedure and enabling more flexible distribution choices, Glime mitigates the limitations of LIME, such as instability and low local fidelity. Experimental results on ImageNet data demonstrate that Glime significantly enhances stability and local fidelity compared to LIME. While our experiments primarily focus on image data, the applicability of our approach readily extends to text and tabular data. § ACKNOWLEDGEMENTThe authors would like to thank the anonymous reviewers for their constructive comments. Zeren Tan and Jian Li are supported by the National Natural Science Foundation of China Grant (62161146004). Yang Tian is supported by the Artificial and General Intelligence Research Program of Guo Qiang Research Institute at Tsinghua University (2020GQG1017).plain§ MORE DISCUSSIONS appendix]app:discuss§.§ Implementation details appendix]app:details Dataset selection. The experiments use images from the validation set of the ImageNet-1k dataset. To ensure consistency, a fixed random seed (2022) is employed. Specifically, 100 classes are uniformly chosen at random, and for each class, an image is randomly selected.Models. The pretrained models used are sourced from , with theparameter set to .Feature transformation. 
The initial step involves cropping each image to fixed dimensions. The Quickshift method is then employed to segment images into super-pixels, with its segmentation parameters (kernel size, maximum distance and ratio) held fixed, together with a fixed random seed. This approach aligns with the default setting in LIME, except for the modified fixed random seed. Consistency in the random seed ensures that identical images result in the same super-pixels, thereby isolating the source of instability to the calculation of explanations. However, different images are still segmented in different ways.

Computing explanations. The implemented procedure follows the original setup in LIME. The reference parameter is configured so that the average value of each super-pixel acts as its reference when the super-pixel is removed. The kernel width is explicitly set to the value recommended for image data in LIME <cit.>. The default value of the regularization strength in Ridge regression is 1, unless otherwise specified. For each image, the model f infers the most probable label, and the explanation pertaining to that label is computed. Ten different random seeds are utilized to compute explanations for each image; the random-seed parameter of both the sampling and the explanation functions is set to these specific random seeds.

§.§ Stability of LIME and Glime appendix]app:stability_lime

<ref> illustrates the top-1, top-5, top-10, and average Jaccard indices. Importantly, the results presented in <ref> closely align with those in <ref>. In summary, it is evident that Glime consistently provides more stable explanations than LIME.

§.§ LIME and Glime-Binomial converge to the same limit appendix]app:common_limit

In Figure <ref>, the difference and correlation between explanations generated by LIME and Glime-Binomial are presented. With an increasing sample size, the explanations from LIME and Glime-Binomial become more similar and more correlated. The difference between their explanations rapidly converges to zero, particularly when σ is large, such as in the case of σ=5. While LIME exhibits slower convergence, especially with small σ, it is impractical to continue sampling until their difference fully converges. Nevertheless, the correlation between LIME and Glime-Binomial strengthens with an increasing number of samples, indicating their convergence to the same limit as the sample size grows.

§.§ LIME explanations are different for different references appendix]sec:diff_baseline

The earlier work by Jain et al. <cit.> has underscored the instability of LIME with respect to references. As shown in <ref>, this instability originates from LIME's sampling distribution, which relies on the chosen reference 𝐫. Additional empirical evidence is presented in <ref>. Six distinct references are selected: black, white, red, blue, yellow image, and the average value of the removed super-pixel (the default setting for LIME). The average Jaccard indices for explanations computed using these various references are detailed in <ref>. The results underscore the sensitivity of LIME to different references. Different references result in LIME identifying distinct features as the most influential, even with a sample size surpassing 2000. Particularly noteworthy is that, with a sample size exceeding 2000, the top-1 Jaccard index consistently remains below 0.7, underscoring LIME's sensitivity to reference variations.

§.§ The local fidelity of Glime appendix]app:local_fidelity_generallime

<ref> presents the local fidelity of Glime, with samples drawn from the ℓ_2 neighborhood {𝐳 : ‖𝐳 - 𝐱‖_2 ≤ ϵ} around 𝐱.
Additionally, <ref> and <ref> illustrate the local fidelity of Glime within the ℓ_1 neighborhood {z| z - x_1 ≤ϵ} and the ℓ_∞ neighborhood {z| z - x_∞≤ϵ}, respectively.A comparison between <ref> and <ref> reveals that, for the same σ, Glime can explain the local behaviors of f within a larger radius in the ℓ_1 neighborhood compared to the ℓ_2 neighborhood. This difference arises from the fact that {z| z - x_2 ≤ϵ} defines a larger neighborhood compared to {z| z - x_1 ≤ϵ} with the same radius ϵ.Likewise, the set {z| z - x_∞≤ϵ} denotes a larger neighborhood than {z| z - x_2 ≤ϵ}, causing the local fidelity to peak at a smaller radius ϵ for the ℓ_∞ neighborhood compared to the ℓ_2 neighborhood under the same σ.Remarkably, Glime-Laplace consistently demonstrates superior local fidelity compared to Glime-Gauss and Glime-Uniform. Nevertheless, in cases with larger ϵ, Glime-Gauss sometimes surpasses the others. This observation implies that the choice of sampling distribution should be contingent on the particular radius of the local neighborhood intended for explanation by the user. §.§ Glime unifies several previous methods appendix]app:previous_methods KernelSHAP <cit.>. KernelSHAP integrates the principles of LIME and Shapley values. While LIME employs a linear explanation model to locally approximate f, KernelSHAP seeks a linear explanation model that adheres to the axioms of Shapley values, including local accuracy, missingness, and consistency <cit.>. Achieving this involves careful selection of the loss function ℓ(·, ·), the weighting function π(·), and the regularization term R. The choices for these parameters in LIME often violate local accuracy and/or consistency, unlike the selections made in KernelSHAP, which are proven to adhere to these axioms (refer to Theorem 2 in <cit.>). Gradient <cit.>. This method computes the gradient ∇ f to assess the impact of each feature under infinitesimal perturbation <cit.>.SmoothGrad <cit.>. Acknowledging that standard gradient explanations may contain noise, SmoothGrad introduces a method to alleviate noise by averaging gradients within the local neighborhood of the explained input <cit.>. Consequently, the feature importance scores are computed as 𝔼_ϵ∼𝒩(0,σ^2I)[∇ f(x+ϵ)].DLIME <cit.>. Diverging from random sampling, DLIME seeks a deterministic approach to sample acquisition. In its process, DLIME employs agglomerative Hierarchical Clustering to group the training data, and subsequently utilizes k-Nearest Neighbour to select the cluster corresponding to the explained instance. The DLIME explanation is then derived by constructing a linear model based on the data points within the identified cluster.ALIME <cit.>: ALIME leverages an auto-encoder to assign weights to samples. Initially, an auto-encoder, denoted as 𝒜ℰ(·), is trained on the training data. Subsequently, the method involves sampling n nearest points to x from the training dataset. The distances between these samples and the explained instance x are assessed using the ℓ_1 distance between their embeddings, obtained through the application of the auto-encoder 𝒜ℰ(·). For a sample z, its distance from x is measured as 𝒜ℰ(z) - 𝒜ℰ(x)_1, and its weight is computed as exp(-𝒜ℰ(z) - 𝒜ℰ(x)_1). The final explanation is derived by solving a weighted Ridge regression problem. 
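To accompany the ALIME description above, the following sketch shows the weighting-plus-weighted-ridge step with a stand-in encoder; the real method uses an auto-encoder trained on the training data, and we generate samples by adding Gaussian noise to 𝐱 as in the main-text description, so everything here is an illustrative assumption rather than the released implementation.

```python
import numpy as np

def alime_explain(f, x, encode, sigma=0.3, n=500, lam=1.0, seed=0):
    """ALIME-style explanation: samples around x, weights
    w(z) = exp(-||AE(x) - AE(z)||_1), then weighted ridge regression."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(n, x.size))
    w = np.exp(-np.abs(encode(x) - encode(Z)).sum(axis=1))   # ALIME weights
    y = np.array([f(z) for z in Z])
    A = (Z * w[:, None]).T @ Z + lam * np.eye(x.size)        # weighted normal equations
    b = (Z * w[:, None]).T @ y
    return np.linalg.solve(A, b)

encode = lambda v: 0.5 * np.atleast_2d(v)   # stand-in for a trained auto-encoder
f = lambda v: v[0] - 0.5 * v[1]             # toy black box
print(alime_explain(f, np.zeros(2), encode))
```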
§.§ Results on tiny Swin-Transformer <cit.> appendix]app:swin_transformer

The findings on the tiny Swin-Transformer align with those on ResNet18, providing additional confirmation that Glime enhances stability and local fidelity compared to LIME. Please refer to <ref>, <ref> and <ref> for results.

§.§ Comparing Glime with ALIME appendix]app:alime

While ALIME <cit.> improves upon the stability and local fidelity of LIME, Glime consistently surpasses ALIME. A key difference between ALIME and LIME lies in their methodologies: ALIME employs an encoder to transform samples into an embedding space, calculating their distance from the input to be explained as ‖𝒜ℰ(𝐳) - 𝒜ℰ(𝐱)‖_1, whereas LIME utilizes a binary vector 𝐳∈{0,1}^d to represent a sample, measuring the distance from the explained input as ‖1 - 𝐳‖_2. Because ALIME relies on distance in the embedding space to assign weights to samples, there is a risk of generating very small sample weights if the produced samples are far from 𝐱, potentially resulting in instability issues. In our ImageNet experiments comparing Glime and ALIME, we utilize the VGG16 model from the repository [<https://github.com/Horizon2333/imagenet-autoencoder>] as the encoder in ALIME. The outcomes of these experiments are detailed in <ref>. The findings demonstrate that, although ALIME exhibits enhanced stability compared to LIME, this improvement is not as substantial as that achieved by Glime, particularly under conditions of small σ or small sample size.

§.§ Experiment results on IMDb appendix]app:text_data

The DistilBERT model is employed in experimental evaluations on the IMDb dataset, where 100 data points are selected for explanation. The comparison between Glime-Binomial and LIME is depicted in <ref> using the Jaccard Index. Our findings indicate that Glime-Binomial consistently exhibits higher stability than LIME across a range of σ values and sample sizes. Notably, at smaller σ values, Glime-Binomial demonstrates a substantial improvement in stability compared to LIME.

§ PROOFS

§.§ Equivalent Glime formulation without π(·) appendix]app:equivalent_formulation_generallime

By integrating the weighting function into the sampling distribution, the problem to be solved is

𝐰^Glime = min_𝐯 𝔼_{𝐳^'∼𝒫}[π(𝐳^') ℓ(f(𝐳), g(𝐳^'))] + λ R(𝐯)
= min_𝐯 ∫_{ℝ^d} π(𝐳^') ℓ(f(𝐳), g(𝐳^')) 𝒫(𝐳^') d𝐳^' + λ R(𝐯)
= min_𝐯 [∫_{ℝ^d} ℓ(f(𝐳), g(𝐳^')) π(𝐳^') 𝒫(𝐳^') d𝐳^' + λ R(𝐯)] / ∫_{ℝ^d} π(𝐮^') 𝒫(𝐮^') d𝐮^'   (rescaling the objective by the positive constant 1/Z does not change the minimizer)
= min_𝐯 ∫_{ℝ^d} ℓ(f(𝐳), g(𝐳^')) 𝒫̃(𝐳^') d𝐳^' + (λ/Z) R(𝐯),   with 𝒫̃(𝐳^') = π(𝐳^')𝒫(𝐳^')/Z and Z = ∫_{ℝ^d} π(𝐮^') 𝒫(𝐮^') d𝐮^'
= min_𝐯 𝔼_{𝐳^'∼𝒫̃}[ℓ(f(𝐳), g(𝐳^'))] + (λ/Z) R(𝐯).

§.§ Equivalence between LIME and Glime-Binomial appendix]app:lime_binomial_equivalence

For LIME, 𝒫 = Uni({0,1}^d), and thus 𝒫(𝐳^') = 1/2^d for every 𝐳^' (so that ℙ(‖𝐳^'‖_0=k) = C(d,k)/2^d for k=0,1,⋯,d), giving

Z = ∫_{ℝ^d} π(𝐮^') 𝒫(𝐮^') d𝐮^' = Σ_{k=0}^d e^{(k-d)/σ^2} C(d,k)/2^d = (e^{-d/σ^2}/2^d)(1 + e^{1/σ^2})^d.

Thus, for a binary vector 𝐳^' with ‖𝐳^'‖_0 = k, we have

𝒫̃(𝐳^') = π(𝐳^')𝒫(𝐳^')/Z = e^{(k-d)/σ^2} 2^{-d}/Z = e^{k/σ^2}/(1+e^{1/σ^2})^d.

Therefore, Glime-Binomial is equivalent to LIME.

§.§ LIME requires many samples to accurately estimate the expectation term in <ref>. appendix]app:small_weights

In <ref>, it is evident that many samples generated by LIME possess considerably small weights. Consequently, the sample estimate of the expectation in <ref> tends to be much smaller than the true expectation with high probability.
In such instances, the regularization term would have a dominant influence on the overall objective.Consider specific parameters, such as σ=0.25, n=1000, d=20 (where σ and n are the default values in the original implementation of LIME). The probability of obtaining a sample z^' with z^'_0 = d-1 or d is approximately d/2^d + 1/2^d = d+1/2^d≈ 2× 10^-5. Let's consider a typical scenario where |f(z)|∈ [0,1], and (f(z)-v^⊤z^')^2 is approximately 0.1 for most z, z^'. In this case, 𝔼_z^'∼Uni({0,1}^d) [π(z^') (f(z) - v^⊤z^')^2] ≈ 0.1 ·∑_k=0^de^(k-d)/σ^2/2^d≈ 10^-7. However, if we lack samples z^' with z^'_0 = d-1 or d, then all samples z^'_i with z^'_i_0 ≤ d-2 have weights π(z^'_i) ≤exp(-2/σ^2)≈ 1.26× 10^-14. This leads to the sample average 1/n∑_i=1^n π(z_i^') (f(z_i) - v^⊤z_i^')^2 ≤ 1.26 × 10^-15≪ 10^-7. The huge difference between the magnitude of the expectation term in <ref> and the sample average of this expectation indicates that the sample average is not an accurate estimation of 𝔼_z^'∼Uni({0,1}^d) [π(z^') (f(z) - v^⊤z^')^2] (if we do not get enough samples).Additionally, under these circumstances, the regularization term is likely to dominate the sample average term, leading to an underestimation of the intended value of v. In conclusion, the original sampling method for LIME, even with extensively used default parameters, is not anticipated to yield meaningful explanations.§.§ Proof of <ref> appendix]app:theorem_41Suppose samples {z_i^'}_i=1^n ∼Uni({0,1}^d) are used to compute LIME explanation. For any ϵ >0, δ∈ (0,1), if n = Ω(ϵ^-2 d^5 2^4de^4/σ^2log(4d/δ)), λ≤ n,we have (ŵ^LIME - w^LIME_2 < ϵ ) ≥ 1 - δ. w^LIME = lim_n→∞ŵ^LIME. To compute the LIME explanation with n samples, the following optimization problem is solved:ŵ^LIME = min_v1/n∑_i=1^n π(z_i^') (f(z_i) - v^⊤z_i^')^2 + λ/nv_2^2.Let L =1/n∑_i=1^n π(z_i^') (f(z_i) - v^⊤z_i^')^2 + λ/nv_2^2. Setting the gradient of L with respect to v to zero, we obtain:-21/nπ(z_i^')(f(z_i) - v^⊤z_i^')z_i^' + 2/nλv = 0,which leads to:ŵ^LIME = (1/n∑_i=1^n π(z_i^')z_i^'(z_i^')^⊤ + λ/n)^-1(1/n∑_i=1^n π(z_i^') z_i^' f(z_i)).Denote Σ_n = 1/n∑_i=1^n π(z_i^')z_i^'(z_i^')^⊤ + λ/n, Γ_n = 1/n∑_i=1^n π(z_i^') z_i^' f(z_i), Σ = lim_n→∞Σ_n, and Γ = lim_n→∞Γ_n. Then, we have:ŵ^LIME = Σ_n^-1Γ_n, w^LIME = Σ^-1Γ.To prove the concentration of ŵ^LIME, we follow the proofs in <cit.>:(1) First, we prove the concentration of Σ_n;(2) Then, we bound Σ^-1_F^2;(3) Next, we prove the concentration of Γ_n;(4) Finally, we use the following inequality:Σ_n^-1Γ_n - Σ^-1Γ≤ 2 Σ^-1_FΓ_n - Γ_2 + 2Σ^-1_F^2 ΓΣ_n - Σ,when Σ^-1(Σ_n - Σ)≤ 0.32 <cit.>.Before establishing concentration results, we first derive the expression for Σ.Expression of Σ. 
Σ_n = [ 1/n∑_i π(z_i^') (z_i1^')^2+ λ/n 1/n∑_i π(z_i^')z_i1^'z_i2^' ⋯1/n∑_i π(z_i^')z_i,1^'z_id^'; 1/n∑_i π(z_i^')z_i1^'z_i2^'1/n∑_iπ(z_i^') (z_i2^')^2+ λ/n ⋯ 1/n∑_i π(z_i^')z_i2^'z_id^'; ⋮ ⋮ ⋱ ⋮; 1/n∑_i π(z_i^')z_i1^'z_id^' 1/n∑_iπ(z_i^') z_i2^'z_id^' ⋯1/n∑_i π(z_i^')(z_id^')^2+ λ/n; ]By taking n→∞, we have Σ_n →Σ = (α_1 - α_2)I + α_2 11^⊤where α_1 =𝔼_z^'∼Uni({0,1}^d)[π(z^') z^'_i] = 𝔼_z^'∼Uni({0,1}^d)[π(z^') (z^'_i)^2]=∑_k=0^d e^(k-d)/σ^2(z_i^'=1|z^'_0 = k)(z^'_0=k) =∑_k=0^d e^(k-d)/σ^2k/ddk/2^d =∑_k=0^d e^(k-d)/σ^2d-1k-1/2^d =∑_k=0^d e^(k-1)/σ^2e^(1-d)/σ^2d-1k-1/2^d = e^(1-d)/σ^2(1+e^1/σ^2)^d-1/2^d = (1+e^-1/σ^2)^d-1/2^d α_2 =𝔼_z^'∼Uni({0,1}^d)[π(z^') z^'_iz^'_j] =1/Z∑_k=0^d e^(k-d)/σ^2(z_i^'=1, z_j^'=1|z^'_0 = k)(z^'_0=k) =∑_k=0^d e^(k-d)/σ^2k(k-1)/d(d-1)dk/2^d =∑_k=0^d e^(k-d)/σ^2d-2k-2/2^d =∑_k=0^d e^(k-2)/σ^2e^(2-d)/σ^2d-2k-2/2^d =e^(2-d)/σ^2(1+e^1/σ^2)^d-2/2^d = (1+e^-1/σ^2)^d-2/2^dBy Sherman-Morrison formula, we have Σ^-1= ((α_1 - α_2)I + α_2 11^⊤ )^-1 = 1/α_1 - α_2 (I + α_2 /α_1-α_211^⊤ )^-1 = 1/α_1 - α_2(I - α_2 /α_1-α_211^⊤/1 + α_2 /α_1-α_2 d) = (β_1 - β_2)I + β_2 11^⊤where β_1 = α_1 + (d-2)α_2/(α_1 - α_2)(α_1 + (d-1)α_2), β_2 = -α_2/(α_1 - α_2)(α_1 + (d-1)α_2) In the following, we aim to establish the concentration of ŵ^LIME. Concentration of Σ_n. Considering 0 ≤π(·) ≤ 1 and z_i ∈{0,1}^d, each element within Σ_n resides within the interval of [0, 2]. Moreover, as 1/2^d≤α_1 = (1+e^-1/σ^2)^d-1/2^d≤2^d-1/2^d = 1/21/2^d≤α_2 = (1+e^-1/σ^2)^d-2/2^d≤2^d-2/2^d = 1/4e^-1/σ^2/2^d≤α_1 - α_2 = e^-1/σ^2(1+e^-1/σ^2)^d-2/2^d≤1/4 The elements within Σ are within the range of [0, 1/4]. Consequently, the elements in Σ_n - Σ fall within the range of [-1/4, 2].Referring to the matrix Hoeffding's inequality <cit.>, it holds true that for all t > 0, (Σ_n - Σ_2 ≥ t) ≤ 2d exp(-nt^2/32d^2) Bounding Σ^-1_F^2.Σ^-1_F^2 = dβ_1^2 + (d^2-d)β_2^2 Because d/2^d≤α_1 + (d-1) α_2 ≤2 + (d-1)/4 = d+1/4d-1/2^d≤α_1 + (d-2) α_2 ≤2 + (d-2)/4 = d/4 we have |β_1| = |α_1 + (d-2)α_2/(α_1 - α_2)(α_1 + (d-1)α_2)| ≤|1/α_1 - α_2| ≤ 2^de^1/σ^2, β_1^2 ≤ 2^2d e^2/σ^2|β_2| = |-α_2/(α_1 - α_2)(α_1 + (d-1)α_2)| = |e^1/σ^21/(α_1 + (d-1)α_2)| ≤ d^-12^d e^1/σ^2,β_2^2 ≤ d^-2 2^2de^2/σ^2 so that Σ^-1_F^2 = dβ_1^2 + (d^2-d)β_2^2 ≤ d 2^2d e^2/σ^2 + (d^2-d)d^-2 2^2de^2/σ^2≤ 2 d 2^2d e^2/σ^2 Concentration of Γ_n. With |f| ≤ 1, all elements within both Γ_n and Γ exist within the range of [0,1]. According to matrix Hoeffding's inequality <cit.>, for all t > 0, (Γ_n - Γ≥ t ) ≤ 2d exp(-nt^2/8d) Concentration of ŵ^LIME. 
When Σ^-1(Σ_n - Σ)≤ 0.32 <cit.>, we have Σ_n^-1Γ_n - Σ^-1Γ≤ 2 Σ^-1_FΓ_n - Γ_2 + 2Σ^-1_F^2 ΓΣ_n - Σ Given thatΣ^-1(Σ_n - Σ)≤Σ^-1Σ_n - Σ≤ 2^1/2 d^1/2 2^d e^1/σ^2Σ_n - Σ Exploiting the concentration of Σ_n, where n ≥ n_1 = 2^7 d^3 2^2de^2/σ^2log(4d/δ) and t = t_1 = 5^-22^2.5d^-0.52^-de^-1/σ^2, we have(Σ_n - Σ_2 ≥ t) ≤ 2d exp(-nt^2/32d^2) ≤ 2d exp(-nt^2/32d^2) ≤δ/2 Therefore, with a probability of at least 1-δ/2, we have Σ^-1(Σ_n - Σ)≤Σ^-1Σ_n - Σ≤ 2^1/2 d^1/2 2^d e^1/σ^2Σ_n - Σ≤ 0.32 For n ≥ n_2 = 2^8 ϵ^-2d^2 2^2de^2/σ^2log(4d/δ) and t_2 = 2^-2.5d^-0.52^-de^-1/σ^2ϵ, the following concentration inequality holds: (Γ_n - Γ≥ t_2 ) ≤ 2d exp(-n_2t_2^2/8d) ≤δ/2 In this context, with a probability of at least 1-δ/2, we have Σ^-1Γ_n-Γ≤ϵ/4 Considering Γ≤√(d), we select n≥ n_3 = 2^9ϵ^-2d^5 2^4de^4/σ^2log(4d/δ) and t_3 = 2^-3d^-1.52^-2de^-2/σ^2ϵ, leading to (Σ_n - Σ_2 ≥ t_3) ≤ 2d exp(-n_3t_3^2/32d^2) ≤ 2d exp(-n_3t_3^2/32d^2) ≤δ/2 With a probability at least 1-δ/2, we have Σ^-1^2 ΓΣ_n - Σ≤ϵ/4 In summary, we choose n ≥max{n_1, n_2, n_3}, and then for all ϵ >0, δ∈ (0,1) (Σ_n^-1Γ_n - Σ^-1Γ≥ϵ ) ≤δ§.§ Proof of <ref> and <ref> appendix]app:theorem_glimeSuppose z^'∼𝒫 such that the largest eigenvalue of z^'(z^')^⊤is bounded by R and 𝔼[z^'(z^')^⊤] = (α_1 - α_2) I + α_2 11^⊤, Var(z^'(z^')^⊤)_2 ≤ν^2, |(z^' f(z))_i|≤ M for some M>0. {z_i^'}_i=1^n are i.i.d. samples from 𝒫 and are used to compute Glime explanation ŵ^Glime. For any ϵ > 0, δ∈ (0,1), if n = Ω(ϵ^-2M^2 ν^2 d^3 γ^4log(4d/δ)) where γ^2 = d β_1^2 + (d^2-d)β_2^2, β_1 = (α_1 + (d-2) α_2)/β_0, β_2 = -α_2 / β_0, β_0 = (α_1 - α_2)(α_1 + (d-1)α_2)), we have (ŵ^Glime - w^Glime_2 < ϵ) ≥1- δ. w^Glime = lim_n→∞ŵ^Glime.The proof closely resembles that of <ref>. Employing the same derivation, we deduce that: Σ = (α_1 + λ - α_2)I + α_2 11^⊤, Σ^-1 = (β_1 - β_2)I + β_2 11^⊤ where β_1 = α_1 + λ + (d-2)α_2/(α_1 + λ - α_2)(α_1 + λ + (d-1)α_2), β_2 = -α_2/(α_1+ λ - α_2)(α_1 + λ+ (d-1)α_2) Given that λ_max(z^' (z^')^⊤) ≤ R and Var(z^'(z^')^⊤)_2 ≤ν^2, according to the matrix Hoeffding’s inequality <cit.>, for all t > 0: (Σ_n - Σ_2 ≥ t) ≤ 2d exp(-nt^2/8ν^2) Applying Hoeffding's inequality coordinate-wise, we obtain: (Γ_n - Γ≥ t ) ≤ 2d exp(-nt^2/8M^2 d^2 ) Additionally,Σ^-1_F^2 = dβ_1^2 + (d^2-d)β_2^2 = γ^2By selecting n≥ n_1 = 2^5γ^2ν^2 log(4d/δ) and t_1 =2^3 5^-2γ^-1, we obtain (Σ_n - Σ_2 ≥ t_1) ≤ 2d exp(-n_1t_1^2/8ν^2) ≤δ/2 with a probability of at least 1-δ/2. Σ^-1(Σ_n - Σ)≤Σ^-1·Σ_n - Σ≤γ t_1 = 0.32 Letting n≥ n_2 = 2^5ϵ^-2M^2 d^2 γ^2 log(4d/δ) and t_2 = 2^-2ϵγ^-1, we have(Γ_n - Γ≥ t_2 ) ≤ 2d exp(-n_2t_2^2/8 M^2 d^2)≤δ/2 with a probability of at least 1-δ/2. ΣΓ_n-Γ≤γt_2 ≤ϵ/4 As Γ≤ M, by choosing n≥ n_3 = 2^5 ϵ^-2M^2 ν^2 d γ^4log(4d/δ) and t_3 = 2^-2ϵ M^-1d^-0.5γ^-2, we have(Σ_n - Σ_2 ≥ t_3) ≤ 2d exp(-n_3t_3^2/2ν^2) ≤δ/2 and with a probability of at least 1-δ/2,Σ^-1^2 ΓΣ_n - Σ≤γ^2 M d^0.5 t_3 =ϵ/4 Therefore, by choosing n=max{n_1, n_2, n_3}, we have(Σ_n^-1Γ_n - Σ^-1Γ≥ϵ ) ≤δ Suppose {z_i^'}_i=1^n are i.i.d. samples from (z^', z^'_0=k) = e^k/σ^2/(1+e^1/σ^2)^d, k=1,…, d are used to compute Glime-Binomial explanation. For any ϵ >0, δ∈ (0,1), if n = Ω(ϵ^-2 d^5 e^4 /σ^2log(4d/δ)),we have (ŵ^Binomial - w^Binomial_2 < ϵ ) ≥ 1 - δ. w^Binomial = lim_n→∞ŵ^Binomial.For Glime-Binomial, each coordinate of z^'(z^')^⊤ follows a Bernoulli distribution, ensuring the bounded variance of both z^'(z^')^⊤ and (z^' f(z^') )_i. 
Additionally, we haveΓ≤√(d), α_1 =𝔼[(z_i^2)^'] = 𝔼[z_i^'] = e^1/σ^2/1+e^1/σ^2= ∑_k=0^d (z_i^'=1|z^'_0 = k) (z^'_0=k) =∑_k=0^dk/ddke^k/σ^2/(1+e^1/σ^2)^d= ∑_k=0^dd-1k-1e^k/σ^2/(1+e^1/σ^2)^d=(1+e^1/σ^2)^d-1/(1+e^1/σ^2)^d e^1/σ^2 = e^1/σ^2/1+e^1/σ^2 α_2 =𝔼[z_i^' z_j^'] = e^1/σ^2/1+e^1/σ^2 = ∑_k=0^d (z_i^'=1, z_j^'=1|z^'_0 = k) (z^'_0=k) =∑_k=0^dk(k-1)/d(d-1)dke^k/σ^2/(1+e^1/σ^2)^d= ∑_k=0^dd-2k-2e^k/σ^2/(1+e^1/σ^2)^d=(1+e^1/σ^2)^d-2/(1+e^1/σ^2)^d e^2/σ^2 = e^2/σ^2/(1+e^1/σ^2)^2 = α_1^2 |β_1|^2 = |α_1+ λ + (d-2)α_2/(α_1+ λ - α_2)(α_1 + λ+ (d-1)α_2)|^2 ≤ |1/α_1 + λ - α_2|≤1/|α_1 - α_2| = e^-1/σ^2 (1+e^1/σ^2)^2≤ 4 e^1/σ^2 |β_2|^2 = |-α_2/(α_1 + λ - α_2)(α_1+ λ+ (d-1)α_2)|^2 ≤ α_2^2/(α_1 - α_2)(α_1+ λ+ (d-1)α_2)^2=α_1α_2/(1 - α_1)(α_1 + (d-1)α_2)^2 ≤ α_1α_2/(1 - α_1)((d-1)α_2)^2 =e^-1/σ^2(1+e^1/σ^2)^2/(d-1)^2≤2^2 e^1/σ^2/(d-1)^2Therefore, dβ_1^2 + (d^2-d)β_2^2 ≤ d e^1/σ^2 + e^1/σ^2d/d-1≤d e^1/σ^2§.§ Formulation of SmoothGrad appendix]app:smooth-grad SmoothGrad is equivalent to Glime formulation with z = z^' + x where z^'∼𝒩(0, σ^2 I), ℓ(f(z), g(z^')) = (f(z) - g(z^'))^2 and π(z) = 1, Ω(v) = 0. The explanation returned by Glime for f at x with infinitely many samples under the above setting is w^* = 1/σ^2_z^'∼𝒩(0, σ^2 I)[z^' f(z^' + x)] = 𝔼_z^'∼𝒩(0, σ^2 I)[∇ f(x + z^')]which is exactly SmoothGrad explanation. When σ→ 0, w^* →∇ f(x + z)|_z = 0. To establish this proposition, we commence by deriving the expression for the Glime explanation vector w^*.Exact Expression of Σ: For each i=1,⋯, n, let z_i^'∼𝒩(0, σ^2I). In this context,Σ̂_n = [1/n∑_k (z_k1^2)^'⋯ 1/n∑_k z_l1^' z_kd^';⋮⋱⋮; 1/n∑_k z_kd^' z_k1^'⋯1/n∑_k (z_kd^2)^';]This impliesΣ = 𝔼_z^'∼𝒩(0, σ^2I)[z^'(z^')^⊤] = [ σ^2 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ σ^2; ] Σ^-1 = [ 1/σ^2 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ 1/σ^2; ]Consequently, we obtainw^* = Σ^-1Γ=1/σ^2_z^'∼𝒩(0, σ^2 I)[z^' f(x + z^')]= 𝔼_z^'∼𝒩(0, σ^2 I)[∇ f(x + z^')] The final equality is a direct consequence of Stein's lemma <cit.>. | http://arxiv.org/abs/2311.15722v1 | {
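The closed-form Glime-Binomial moments derived above, α₁ = e^{1/σ²}/(1+e^{1/σ²}) and α₂ = α₁², can be checked numerically. The short sketch below (our own, with arbitrary illustrative σ and d) draws independent Bernoulli coordinates with ℙ(z'_i = 1) = 1/(1+e^{−1/σ²}) and compares the empirical moments with the formulas.

```python
import numpy as np

sigma, d, n = 0.5, 10, 200_000
p = 1.0 / (1.0 + np.exp(-1.0 / sigma**2))   # P(z'_i = 1) for Glime-Binomial
rng = np.random.default_rng(0)
z = (rng.random((n, d)) < p).astype(float)

alpha1_mc = z[:, 0].mean()                  # E[(z'_i)^2] = E[z'_i] for binary z'
alpha2_mc = (z[:, 0] * z[:, 1]).mean()      # E[z'_i z'_j], i != j
print(alpha1_mc, "vs", p)                   # alpha_1 = e^{1/sigma^2}/(1+e^{1/sigma^2})
print(alpha2_mc, "vs", p**2)                # alpha_2 = alpha_1^2
```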
"authors": [
"Zeren Tan",
"Yang Tian",
"Jian Li"
],
"categories": [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.HC",
"stat.ML"
],
"primary_category": "cs.LG",
"published": "20231127111720",
"title": "GLIME: General, Stable and Local LIME Explanation"
} |
Vol.0 (20xx) No.0, 000–000

Purple Mountain Observatory, Chinese Academy of Sciences, 10 Yuanhua Road, Nanjing, Jiangsu 210023, China; [email protected]
School of Astronomy and Space Sciences, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, China
Institute of Astrophysics and Space Sciences (IA Porto), Rua das Estrelas, 4150-762, Porto, Portugal

Received 20xx month day; accepted 20xx month day

We present resolved GMRT HI observations of the high gas-phase metallicity dwarf galaxy WISEA J230615.06+143927.9 (z = 0.005) (hereafter J2306) and investigate whether it could be a Tidal Dwarf Galaxy (TDG) candidate. TDGs are observed to have higher metallicities than normal dwarfs. J2306 has an unusual combination of a blue g – r colour of 0.23 mag, irregular optical morphology and high metallicity (12 + log(O/H) = 8.68±0.14), making it an interesting galaxy to study in more detail. We find J2306 to be an HI rich galaxy with a large, extended, unperturbed rotating HI disk. Using our HI data we estimated its dynamical mass and found the galaxy to be dark matter (DM) dominated within its HI radius. The quantity of DM, inferred from its dynamical mass, appears to rule out J2306 as an evolved TDG. A wide area environment search reveals J2306 to be isolated from any larger galaxies which could have been the source of its high gas metallicity. Additionally, the HI morphology and kinematics of the galaxy show no indication of a recent merger to explain the high metallicity. Further detailed optical spectroscopic observations of J2306 might provide an answer to how a seemingly ordinary irregular dwarf galaxy achieved such a high level of metal enrichment.

Guo et al.: HI in metal rich dwarf galaxy
HI in high gas-phase metallicity dwarf galaxy WISEA J230615.06+143927.9
Yan Guo^{1,2}, C. Sengupta^{1}, T. C. Scott^{3}, P. Lagos^{3}, Y. Luo^{1}

§ INTRODUCTION

Extrapolating from the local universe, low-mass dwarf galaxies are understood to be the most ubiquitous galaxies in the universe <cit.>; however, the local dwarfs remain relatively unexplored because of the difficulty of observing such optically faint objects. It is in this regime where cosmological predictions differ most from observations, and the large scatter in the observed values of dwarf properties makes it difficult to discern the underlying trends in their properties. Typically, dwarfs are dark matter (DM) dominated, faint, irregular optical objects <cit.>, with their observed DM potentials being shallower than predicted by cosmological models <cit.>. Dwarf galaxies are also observed to have significantly sub-solar metallicities, 12 + log(O/H) = 7.4 to 7.9, occasionally reaching extremely low values <cit.>. <cit.>, using Sloan Digital Sky Survey (SDSS) DR4 data, demonstrated the strong positive correlation between galaxy stellar mass and metallicity (12 + log(O/H)). The scatter in the relation is much larger in the dwarf mass range, and <cit.>, applying restrictive selection criteria to the <cit.> sample, identified 41 dwarf galaxies having metallicities between 8.6 ≤ 12 + log(O/H) ≤ 9.3, challenging the idea that dwarf galaxies are necessarily low-metallicity galaxies. Since the dwarf galaxy population is dominated by low-metallicity objects, the rarely found high-metallicity dwarfs are particularly interesting. While a handful of such high-metallicity dwarfs have been detected, their formation scenarios and physical properties are yet to be explored in detail.
One possible formation scenario for such high-metallicity dwarfs is formation out of metal-rich gas debris ejected during a tidal interaction between larger galaxies, at least one of which is gas rich <cit.>. These tidal dwarf galaxies (TDGs) are usually observed to have higher metallicities than normal dwarfs of the same stellar mass (see blue points in Figure <ref> and <cit.>). Other possible formation scenarios may include accretion of tiny evolved early-type dwarf galaxies. In this paper, we present Giant Metrewave Radio Telescope (GMRT) HI 21 cm imaging of the high-metallicity dwarf galaxy SDSS J230614.96+143926.7, also known as WISEA J230615.06+143927.9 (23h 06m 15.050s +14d 39m 27.50s), hereafter referred to as J2306, selected from the SDSS. The previous single dish HI detection of the galaxy was with the Arecibo telescope <cit.>. Being nearby and having a prior single dish HI detection made J2306 a suitable candidate for a high-resolution HI study. More information on the galaxy's properties is provided in Section 2.1. Our aim is to increase the understanding of J2306's physical properties and investigate possible formation scenarios. Using the heliocentric optical velocity (1542 km s^{-1}) of J2306 from the NASA Extragalactic Database (NED) and assuming H_0 = 68 km s^{-1} Mpc^{-1} <cit.>, we adopt a distance of 22.7 Mpc to the galaxy. At this distance, the spatial scale is ∼6.4 kpc/arcmin. All α and δ positions referred to throughout this paper are J2000.

§ DATA AND OBSERVATIONS

§.§ The source properties and metallicity estimates

J2306 is a small nearby dwarf galaxy with an SDSS g-band absolute magnitude = −14.83 mag, a g – r colour of ∼0.23 mag, and an r-band D_25 diameter, estimated from NED, of ∼2.9 kpc. The galaxy is not an early-type dwarf, as its optical morphology is non-compact and irregular. Figure <ref> shows the SDSS optical image of J2306. We used the relationship between photometric SDSS model magnitudes for selected colour band filters and the stellar M/L ratio from <cit.> to estimate the galaxy's stellar mass. The SDSS g-band luminosity (L_g) and g – r colour of the source were used to calculate the stellar mass using Equation <ref>: log(M/L_g) = -0.499 + 1.519(g-r). The stellar mass (M_*) of J2306 calculated using Equation <ref> is 2.4 × 10^7 M_⊙, confirming it as a low stellar mass dwarf. Typically, the distance to a galaxy can be estimated from its recessional velocity, which is related to the expansion rate of the universe via the Hubble constant. However, for galaxies with recessional velocities < 1500 km s^{-1}, peculiar motions along the line of sight due to local gravitational forces can add significant additional uncertainty to the distance estimates. For J2306, adding or subtracting a typical group velocity dispersion (250 km s^{-1}) to the measured recessional heliocentric velocity (1542 km s^{-1}) changes the distance to the galaxy between 19.10 Mpc and 26.45 Mpc, which in turn implies M_* = 1.7 × 10^7 M_⊙ and 3.2 × 10^7 M_⊙, respectively (a short numerical sketch of this scaling is given below). This shows that, unless J2306 has an extraordinarily high peculiar velocity, it is a low stellar mass dwarf with M_* of the order of 10^7 M_⊙. J2306 is a faint galaxy that has an `unreliable photometry' flag in the SDSS database. As a result of the uncertainties in both distance and photometry for J2306, we treat our stellar mass calculation as only an order of magnitude estimate.
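For reference, the mass-to-light step of Equation (<ref>) and the distance sensitivity quoted above can be reproduced in a few lines; this is our own illustrative sketch (the conversion of the g-band magnitude to L_g, which requires a solar zero point, is deliberately left out).

```python
def mass_to_light_g(g_minus_r):
    """Equation (1): log10(M*/L_g) = -0.499 + 1.519 (g - r)."""
    return 10 ** (-0.499 + 1.519 * g_minus_r)

print("M*/L_g for g-r = 0.23:", mass_to_light_g(0.23))   # ~0.71 (solar units)

# M* scales with L_g, hence with distance squared; bracketing peculiar motions:
M_star_adopted = 2.4e7       # value at the adopted D = 22.7 Mpc [Msun]
for D in (19.10, 26.45):     # Mpc
    print(D, "Mpc ->", M_star_adopted * (D / 22.7) ** 2)  # ~1.7e7 and ~3.2e7 Msun
```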
From J2306's SDSS spectrum (Figure <ref>), we estimated its gas-phase metallicity. The spectrum's [OIII] λ4363 auroral emission line is weak and [O II] λ3727 is beyond the SDSS spectrometer's wavelength range, so the T_e method could not be used to calculate the gas metallicity. Instead, we used the relations between metallicity and two strong line calibrators (N2 and O3N2), as prescribed in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>, to estimate 12 + log(O/H). We took the average of the 12 + log(O/H) values derived from the N2 and O3N2 calibrations by each of the above sets of authors to calculate the final gas metallicity value of J2306 (a schematic example of this calculation is sketched at the end of this section). The complete list of the 12 + log(O/H) calibrators from the selected authors used to calculate the galaxy's metallicity is given in Table <ref>.

The metallicity sensitive emission line ratios used in this work are the following: N2 = log([NII]λ6584/Hα) and O3N2 = log(([OIII]λ5007/Hβ) × (Hα/[NII]λ6584)). Here Hα, Hβ, [NII]λ6584 and [OIII]λ5007 represent the extinction corrected intrinsic fluxes of the respective spectral lines. These metallicity indicators can be used for both metal-poor and metal-rich galaxies, in the range 7.31 < 12 + log(O/H) < 8.84 for the N2 and 7.82 < 12 + log(O/H) < 8.78 for the O3N2 emission line ratios on average. However, O3N2-based metallicities can reach values of 12 + log(O/H) ≤ 9.2 <cit.>.

The population spectral synthesis (PSS) code FADO <cit.> was applied to the J2306 SDSS spectrum corrected for Galactic extinction based on the <cit.> colour excess E(B-V) map and the <cit.> extinction law (CCM). The observed emission line fluxes of J2306 were obtained from FADO. We note that due to poor SNR (see Figure <ref>) the FADO fit to the [OIII]λ5007 line failed. However, that line was then re-fitted using our own python line fitting code and the flux obtained was utilised to calculate 12 + log(O/H) for J2306. The [OIII]λ4959 emission line is unresolved. We then corrected the observed fluxes F(λ) for intrinsic extinction to determine the intrinsic emission-line fluxes I(λ) (see Table <ref>) relative to Hβ using the formula: I(λ)/I(Hβ) = (F(λ)/F(Hβ)) × 10^{c(Hβ) f(λ)}, where f(λ) is the reddening function given by CCM assuming R_V = 3.1, and c(Hβ) is the logarithmic reddening parameter calculated for Case B, assuming a Balmer decrement ratio Hα/Hβ = 2.86 at 10 000 K, from <cit.>. The 12 + log(O/H) mean of all the Table <ref> calibrators was 8.68, with a standard deviation of 0.14 dex. Compared to other dwarf galaxies in the literature <cit.>, our 12 + log(O/H) value is high and approximately solar <cit.>, within the uncertainties, and is comparable to that of TDGs in the literature (see Figure <ref>). We summarise the properties of J2306 in Table <ref>.

§.§ GMRT HI observations of J2306

J2306 was observed in HI at 21 cm using the GMRT on 30th May 2022, with the pointing centre at the projected position of J2306. A 12.5 MHz bandwidth was used, giving a channel resolution of ∼6 kHz. The observation details are listed in Table <ref>. The data was analysed using standard reduction procedures with the Astronomical Image Processing System (AIPS) software package. The flux density scale used was from <cit.>, with uncertainties of ∼5 percent. After bad data due to RFI and faulty antennas were flagged, the data was calibrated and continuum subtracted in the uv-domain. The AIPS task imagr was then used to convert the uv-domain data to three dimensional image cubes. To study the HI distribution in detail, image cubes with different spatial resolutions were made by varying the uv limits and applying different `tapers' to the data. Finally, the AIPS task momnt was used to create the integrated HI images and the velocity field maps from the image cubes. Properties of the low and medium-resolution maps presented in this paper are given in Table <ref>.
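As the schematic example forward-referenced in Section 2.1, the sketch below evaluates the N2 and O3N2 indices from dereddened fluxes and applies one common pair of calibrations, the Pettini & Pagel (2004) coefficients; the flux values are made up for illustration, and the paper itself averages several calibrators rather than using this single pair.

```python
import numpy as np

def n2_index(NII6584, Ha):
    return np.log10(NII6584 / Ha)                 # N2 = log([NII]6584 / Halpha)

def o3n2_index(OIII5007, Hb, NII6584, Ha):
    return np.log10((OIII5007 / Hb) * (Ha / NII6584))

# Illustrative dereddened fluxes, normalised to H-beta = 1 (Case B: Ha/Hb = 2.86):
Ha, Hb, NII, OIII = 2.86, 1.0, 0.9, 1.2
print("12+log(O/H), N2 (PP04):  ", 8.90 + 0.57 * n2_index(NII, Ha))
print("12+log(O/H), O3N2 (PP04):", 8.73 - 0.32 * o3n2_index(OIII, Hb, NII, Ha))
```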
§ RESULTS

The low-resolution GMRT integrated HI and velocity field images reveal J2306's HI disk to be extended and, to first order, unperturbed. The left panel of Figure <ref> shows the low-resolution (38.9^'' × 36.9^'') velocity integrated HI flux density contours, and the right panel shows the HI velocity field for J2306. Figure <ref> shows the medium-resolution (24.5^'' × 21.5^'') velocity integrated HI flux density contours for the galaxy. At the distance of the galaxy, 22.7 Mpc, the 38.9^'' and 24.5^'' GMRT beams sample 4.3 kpc and 2.7 kpc, respectively. While the low-resolution HI disk morphology appears highly symmetric and undisturbed, the medium-resolution image shows the highest column density HI has a more irregular structure, with an overall alignment along a NW to SE axis. This is not unexpected, given that J2306 is a dwarf galaxy, and dwarfs are well known to have irregular HI density distributions. The medium-resolution HI column density maximum spatially coincides with the optical galaxy.

The low-resolution HI velocity field shows a regularly rotating disk. The velocity field maps are intensity weighted, and we present the low-resolution map here since it has a higher signal-to-noise ratio. The regular rotation of the HI disk is consistent with the single dish Arecibo HI spectrum's double horn profile, which suggests a rotating HI disk. Resolved HI imaging of the galaxy reveals the absence of any neighbour or companion galaxy within the GMRT 24^' primary beam, confirming that the entire HI mass detected with Arecibo belongs to J2306.

A comparison between the Arecibo and GMRT HI spectra of the galaxy confirms most of the HI flux was recovered in the GMRT interferometric imaging. Usually for extended objects, flux loss occurs in interferometric data due to a combination of several factors, namely flagging of crucial short baselines due to RFI, flagging of bad data leading to insufficient uv coverage, and resolving out of the extended emission. Hence, when available, it is preferable to carry out global HI estimates, such as HI mass, using the single dish spectrum data. Of course, to use the single dish HI flux to estimate a galaxy's HI mass it is necessary to ensure that there is no contamination from other sources within the single dish full width half power (FWHP) beam. In J2306's case the GMRT data allows us to confirm that there are no contaminating HI sources within the single dish beam. Using the integrated flux density from the Arecibo spectrum (3.54 Jy km s^{-1}), we estimate the HI mass of J2306 using Equation <ref>: M(HI) = 2.36 × 10^5 × D^2 × ∫ S_ν dv, where D is the distance to J2306 (22.7 Mpc, with D in Mpc) and ∫ S_ν dv is the integrated flux density from the Arecibo spectrum in Jy km s^{-1} (a one-line numerical check is given at the end of this section). The HI mass of J2306 thus calculated is ∼4.3×10^8 M_⊙. This implies the galaxy's M_HI/M_* ∼ 18. Even taking into account the caveat about the galaxy's stellar mass in Section <ref>, these ratios indicate that J2306 is an HI rich dwarf galaxy. Here M_* and D_25 were estimated from the SDSS photometric data and NED respectively, and their values were used in compiling Table <ref>.
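The HI mass estimate of Equation (<ref>) is a one-line calculation; the following minimal sketch reproduces the quoted value, with the numbers taken directly from the text.

```python
D_mpc = 22.7     # adopted distance to J2306 [Mpc]
S_int = 3.54     # Arecibo integrated HI flux density [Jy km/s]

M_HI = 2.36e5 * D_mpc**2 * S_int      # HI mass [Msun], with D in Mpc
print(f"M(HI) ~ {M_HI:.1e} Msun")     # ~4.3e8 Msun, as quoted
```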
§ DISCUSSION

Since the dwarf galaxy population is dominated by low metallicity objects <cit.>, the rarely found high-metallicity dwarfs are of particular interest <cit.>. One pathway to a high-metallicity dwarf is a result of tidal interactions between larger galaxies, at least one of which is gas rich. During tidal interactions between such galaxies, gas rich tidal debris can form TDGs (M_* ∼ 10^7 – 10^8 M_⊙). The collapsing metal rich gas debris is predicted to lead to rotating HI and molecular disks and, in turn, in-situ star formation (SF) in the TDGs. Being born of tidal debris, TDGs are expected to have significantly higher metallicities and little or no DM content compared to normal dwarfs <cit.>. Another possible pathway to a high-metallicity dwarf could be the accretion of nearby high-metallicity companions. Given the small size of J2306, any recently accreted companion would have been an even smaller metal rich dwarf galaxy. Here we discuss the environment of J2306, estimate its dynamical mass and explore possible scenarios under which it could have attained such high metal abundance.

To understand the environment of J2306, we searched for neighbouring galaxies using the NED database. At J2306's distance, 60^' is ∼380 kpc. Thus a 60^' radius search, with an optical velocity (V_opt) constraint of 200 km s^{-1} ≤ V_opt ≤ 3500 km s^{-1}, was carried out using the NED database to search for neighbouring galaxies. This search returned only three neighbours. One of them is HIPASS J2306+14, which is the HIPASS detection of J2306. There is a ∼4^' difference between the HIPASS and WISE detected positions. However, the velocities and the HIPASS spectrum reveal them to be the same galaxy. The other two neighbours are AGES J230511+140404 and SDSS J230511.15+140345.7, at a projected distance of ∼250 kpc from J2306. These two detections are at the same redshift and separated by less than 0.5^', and are almost certainly detections of the same galaxy, with the magnitude of the positional offsets within the large positional uncertainties of the Arecibo Galaxy Environment Survey (AGES) FWHP beam (3.5^' Arecibo beam). It is difficult to identify and confirm isolated galaxies. The standard isolation criteria require no companions within a projected sky diameter of ∼1 Mpc <cit.>. In that respect, J2306 can be considered a fairly isolated galaxy, with only one galaxy within a projected diameter of ∼500 kpc and a velocity range of 200 – 3500 km s^{-1}. The relative isolation and the absence of an obvious parent galaxy pair to generate the tidal debris from which to form a TDG make the recent TDG formation scenario for J2306 highly unlikely. This raises the interesting question of how, as an almost isolated dwarf galaxy, J2306 came to have a higher than average metallicity.

SDSS J230511.15+140345.7 is the only galaxy with a known redshift within a diameter of 500 kpc of J2306. We studied its properties to see if this galaxy could have any effect on J2306's evolution. It is also a small dwarf galaxy, with an r-band D_25 ∼ 0.3 kpc (extracted from NED) and a g – r colour of ∼0.25. This galaxy was detected in AGES, and its HI mass according to AGES survey data is 1.2 × 10^8 M_⊙ <cit.>. The optical size and the HI mass indicate that SDSS J230511.15+140345.7 is also a dwarf galaxy. SDSS J230511.15+140345.7 is separated from J2306 by a projected distance of 250 kpc. We estimated the timescale of a hypothetical past interaction between SDSS J230511.15+140345.7 and J2306. Considering their current positions, if both galaxies moved apart at ∼200 km s^{-1} (an average group dispersion velocity), the minimum distance covered by each of them would be 125 kpc. Since 250 kpc is their current projected separation, this suggests any past interaction would have taken place at least 6.3 × 10^8 years ago. Such an interaction could in principle have increased the star formation and thereby enriched the gas in J2306. However, this timescale is too long for detectable signatures of interaction to remain identifiable in the HI disk of J2306, and thus we cannot draw any conclusion about this possibility.
Even for massive spiral galaxies, signs of past perturbations or mergers are identifiable in their disks only for about 4 – 7 × 10^8 years <cit.>. Interestingly, this timescale is sufficient to enrich the ISM of J2306. Under such an interaction scenario, the newly produced metals would be dispersed and mixed on ∼1 – 2 kpc scales in around 10^8 yr <cit.>. One, however, cannot rule out the possibility of minor interactions or mergers of even lower mass metal-rich, early-type dwarfs surrounding J2306. More detailed spectroscopic data for J2306 might provide an answer about the feasibility of this option.

As discussed before, there are two well accepted criteria to identify a tidal dwarf galaxy, i.e., their higher than average metal abundance and their lack of DM. J2306 is a high-metallicity dwarf with an HI disk showing signs of regular rotation. We did not find any obvious progenitor galaxy pair near the dwarf galaxy. However, that cannot rule out that it is an old detached TDG which may have drifted away from its parent galaxy pair. We therefore estimated its dynamical mass to see if the galaxy is DM dominated or shows signs of DM deficiency within the measured HI radius. The ALFALFA Arecibo database shows the HI line width (W_20) to be ∼106 km s^{-1}. From the low-resolution GMRT HI map we estimated the approximate HI diameter of J2306 to be ∼13.2 kpc and the inclination to be ∼54^∘. Using the inclination and the W_20 value, we estimated the inclination corrected rotation velocity (V_rot) to be ∼63 km s^{-1} and the dynamical mass (M_dyn) of the galaxy to be ∼6.4 × 10^9 M_⊙ (a short numerical sketch of this estimate is given at the end of this section). Comparing M_dyn to the baryonic mass gives M_dyn/(M_gas+M_*) = 9.4, where M_* = 2.4 × 10^7 M_⊙ and M_gas = 65.2 × 10^7 M_⊙, i.e. M_gas (molecular + atomic) = 1.4 × M_HI (43.0 × 10^7 M_⊙). Even allowing for the uncertainty in our estimated stellar mass, this ratio implies that, like most regular dwarf galaxies, the galaxy is strongly DM dominated within its HI radius. For TDGs this ratio would typically be closer to 1 <cit.>. This reinforces our previous conclusion, based on its environment, that J2306 is almost certainly not a tidal dwarf galaxy.

To summarise, we explored some possible reasons that could account for J2306's high gas metallicity. High metallicity is a signature of TDGs, so we checked for signs that J2306 is a TDG. The lack of a nearby progenitor pair and the dominance of DM within the HI radius rule out the galaxy as a TDG. Additionally, we did not find any signs of recent accretion or merger that could be an alternative explanation for the high gas-phase metallicity.
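The dynamical-mass estimate referenced above can be sketched as follows; this is our own illustrative calculation, with V_rot = W_20/(2 sin i) before any line-broadening corrections and M_dyn = V^2 R/G expressed in the usual units (the paper's quoted V_rot ≈ 63 km s^{-1} presumably includes small corrections we omit).

```python
import numpy as np

W20, incl_deg = 106.0, 54.0            # HI line width [km/s], inclination [deg]
R_kpc = 13.2 / 2                       # HI radius from the ~13.2 kpc HI diameter

V_rot = W20 / (2 * np.sin(np.radians(incl_deg)))    # ~65 km/s (uncorrected)
M_dyn = 2.326e5 * V_rot**2 * R_kpc                  # V^2 R / G, in solar masses

print(f"V_rot ~ {V_rot:.0f} km/s, M_dyn ~ {M_dyn:.1e} Msun")  # of order 6e9 Msun
```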
To summarise, we explored some possible reasons that could account for J2306's high gas metallicity. High metallicity is a signature of TDGs, so we checked for signs that J2306 is a TDG. The lack of a nearby progenitor pair and the dominance of DM within the HI radius rule out the galaxy as a TDG. Additionally, we did not find any signs of recent accretion or merger that could be an alternative explanation for the high gas phase metallicity.

§ CONCLUSION

We studied the high–metallicity dwarf galaxy J2306 to investigate whether the galaxy could be a possible TDG candidate. The galaxy is not an early type dwarf: its g – r colour is 0.23 mag and its optical morphology is non–compact and irregular. GMRT HI mapping of the galaxy confirmed it is HI rich, and its unperturbed and rotating HI disk extends ∼ 4 times further than the optical disk. We found no signs of past or ongoing interactions in the HI images. Neither did we find any possible neighbouring galaxy pair which could potentially be the parent system if J2306 were to be a TDG. Using information from the HI images, we estimated the dynamical mass of the galaxy. Contrary to what is expected for TDGs, J2306 was found to be a DM dominated galaxy. We explored other possibilities (e.g., interactions or accretion) for the origin of the high metallicity of J2306. However, we found J2306 to be fairly isolated, with only one neighbouring galaxy within a projected diameter of 500 kpc. We conclude that while J2306 is a high–metallicity galaxy, this property is neither of recent tidal origin, nor does the galaxy show any obvious signs of recent accretion or merger. It is located in a fairly isolated environment, and thus its enrichment could be a secular process, or the result of an interaction in the distant past. Further detailed spectroscopic observations of J2306 could provide an answer to how normal irregular dwarf galaxies can achieve such a level of metal enrichment.

§ ACKNOWLEDGEMENTS
We thank the staff of the GMRT who have made these observations possible. The GMRT is operated by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. YG acknowledges support from the National Key Research and Development Program of China (2022SKA0130100), and the National Natural Science Foundation of China (grant Nos. 12041306). PL (contract DL57/2016/CP1364/CT0010) and TS (contract DL57/2016/CP1364/CT0009) are supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) and the Centro de Astrofísica da Universidade do Porto (CAUP). This research made use of APLpy, an open–source plotting package for Python <cit.>. This research has made use of the Sloan Digital Sky Survey (SDSS). The SDSS Web Site is http://www.sdss.org/. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

| http://arxiv.org/abs/2311.15724v2 | {
"authors": [
"Yan Guo",
"C. Sengupta",
"T. C. Scott",
"P. Lagos",
"Y. Luo"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231127111831",
"title": "HI in high gas-phase metallicity dwarf galaxy WISEA J230615.06+143927.9"
} |
Video-based Visible-Infrared Person Re-Identification with Auxiliary Samples
Yunhao Du, Cheng Lei, Zhicheng Zhao, Yuan Dong, Fei Su
Yunhao Du, Cheng Lei, Zhicheng Zhao, Yuan Dong and Fei Su are with the Beijing Key Laboratory of Network System and Network Culture, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China. (e-mail: {dyh_bupt, mr.leicheng, zhaozc, yuandong, sufei}@bupt.edu.cn) (Corresponding author: Zhicheng Zhao.)
January 14, 2024
========================================

Visible-infrared person re-identification (VI-ReID) aims to match persons captured by visible and infrared cameras, allowing person retrieval and tracking in 24-hour surveillance systems. Previous methods focus on learning from cross-modality person images in different cameras. However, temporal information and single-camera samples tend to be neglected. To crack this nut, in this paper, we first contribute a large-scale VI-ReID dataset named BUPTCampus. Different from most existing VI-ReID datasets, it 1) collects tracklets instead of images to introduce rich temporal information, 2) contains pixel-aligned cross-modality sample pairs for better modality-invariant learning, and 3) provides an auxiliary set to help enhance the optimization, in which each identity only appears in a single camera. Based on our constructed dataset, we present a two-stream framework as baseline and apply a Generative Adversarial Network (GAN) to narrow the gap between the two modalities. To exploit the advantages introduced by the auxiliary set, we propose a curriculum learning based strategy to jointly learn from both primary and auxiliary sets. Moreover, we design a novel temporal k-reciprocal re-ranking method to refine the ranking list with fine-grained temporal correlation cues. Experimental results demonstrate the effectiveness of the proposed methods. We also reproduce 9 state-of-the-art image-based and video-based VI-ReID methods on BUPTCampus, and our methods show substantial superiority to them. The codes and dataset are available at: <https://github.com/dyhBUPT/BUPTCampus>.

§ INTRODUCTION
Person re-identification (ReID) aims to search for the target person among the gallery set across multiple cameras. It can benefit many crucial tasks such as multi-object tracking <cit.>, crowd counting <cit.> and action analysis <cit.>. With the development of deep learning and the efforts of researchers, a large number of ReID models have been proposed in recent years <cit.>, including representation learning <cit.>, metric learning <cit.>, and ranking optimization <cit.>. Most existing ReID models focus on the visible-visible matching problem. However, they are not workable in low illumination. To meet the need for 24-hour surveillance systems, visible-infrared ReID (VI-ReID) has recently received substantial attention. Wu et al. <cit.> first proposed the VI-ReID problem and constructed the dataset SYSU-MM01. RegDB <cit.> is also a commonly used and relatively small dataset with RGB-IR sample pairs collected by a dual camera system.
Recently, Zhang et al. <cit.> collected a new challenging low-light VI-ReID dataset LLCM with significant light changes. Different from these image-based datasets, Lin et al. <cit.> contributed a video-based VI-ReID dataset HITSZ-VCM to help exploit the temporal information. However, all these datasets are limited in data scale, style, or lack of paired samples. In this paper, we concentrate on the video-based VI-ReID problem. We first collect a new dataset, dubbed BUPTCampus, which distinguishes itself from previous datasets in the following aspects: * Instead of images, it collects tracklets as samples, enabling the exploitation of temporal cues. * It contains pixel-aligned RGB/IR sample pairs captured by binocular cameras, which can facilitate modality-invariant learning. * It is much larger than existing VI-ReID datasets, with 3,080 identities, 16,826 tracklets and 1,869,366 images. Different styles of cameras are used to ensure the diversity of samples (see Tab.<ref> for details). Moreover, existing ReID tasks focus on the cross-camera matching problem, and those identities who appear only once are commonly ignored in training. However, these samples are easy to obtain in reality and would help the learning. We take vast single-camera samples into consideration and call them auxiliary samples; accordingly, the main training samples, i.e., multiple-camera samples, are called primary samples. The comparison between BUPTCampus and several common ReID datasets is shown in Fig.<ref> (please refer to Tab.<ref> for details). The main difficult cases in BUPTCampus are visualized in Fig.<ref>, including variations of modality/resolution/camera/pose/illumination/view, occlusion, misalignment, and detection noise. Based on the collected dataset, we construct a two-stream framework as baseline following previous works <cit.>. To alleviate the differences between visible and infrared modalities, a GAN <cit.> module (named PairGAN) is applied to generate fake IR samples from real RGB samples. Furthermore, different from previous methods which ignore single-camera samples, we propose to train primary and auxiliary samples jointly, and design a dynamic factor to balance the weights of these two sets in the spirit of curriculum learning <cit.>. Moreover, the commonly used re-ranking methods <cit.> directly process the coarse instance-level features. Instead, we propose the temporal k-reciprocal re-ranking algorithm, which takes fine-grained temporal correlations into consideration with the cross-temporal operation. The overall method, called AuxNet, improves the performance of the baseline by approximately 10% Rank1 and 10% mAP. We further reproduce 9 state-of-the-art image-based and video-based VI-ReID methods on BUPTCampus, and our methods show substantial superiority to all these solutions. The contributions of our work are summarized as follows: * We construct a large-scale video-based VI-ReID dataset with cross-modality sample pairs and auxiliary samples. To the best of our knowledge, this is the first work collecting paired RGB-IR tracklets, and additionally, it is also the first work to assist cross-camera matching with single-camera samples. * We present a baseline for the video-based VI-ReID task and introduce GAN to alleviate the modality variations. * We design a simple but effective two-branch framework to train primary and auxiliary samples jointly.
Meanwhile, curriculum learning is introduced to learn a dynamic factor to balance these two branches. * A novel temporal k-reciprocal re-ranking algorithm is proposed, which exploits fine-grained temporal cues with the cross-temporal operation. * 9 state-of-the-art VI-ReID methods are reproduced on BUPTCampus, and our method outperforms them by a large margin.

§ RELATED WORK
§.§ Visible-Infrared Person Re-Identification
Generally speaking, there are two main categories of methods in VI-ReID: shared feature learning methods and feature compensation learning methods. The shared feature learning methods aim to embed the features from different modalities into the same feature space, where modality-specific features are abandoned and only modality-shared features are reserved. Partially shared two-stream networks <cit.> were commonly used to learn a modality-shared feature space, in which the network parameters of the shallow layers are specific and the deep layers are shared among different modalities. Chen et al. <cit.> studied the neural feature search paradigm to select features automatically. Fu et al. <cit.> designed a neural architecture search method to find the optimal separation scheme for each Batch Normalization (BN) layer. To handle both cross-modality and intra-modality variations simultaneously, Ye et al. <cit.> proposed the bi-directional dual-constrained top-ranking loss to learn discriminative embeddings. Some other works exploited modality adversarial learning to mitigate the modality differences <cit.>; they typically adopted a modality classifier to identify the modality of output features. The feature compensation learning methods try to make up the missing modality-specific cues from one modality to another. Wang et al. <cit.> employed a Generative Adversarial Network (GAN) <cit.> to jointly perform alignment at pixel-level and feature-level. To realize the discrepancy reduction at image-level, Wang et al. <cit.> exploited two variational auto-encoders (VAEs) <cit.> to generate multi-spectral images. Zhong et al. <cit.> adopted intermediate grayscale images as auxiliary information to transfer infrared images to visible images. The modality synergy module and modality complement module <cit.> were designed to synergize and complement the diverse semantics of the two modalities. Huang et al. <cit.> unveiled the modality bias training problem and applied the GCGAN <cit.> to generate a third modality, which balances the information between RGB and IR. Recently, Lin et al. <cit.> proposed the video-based VI-ReID task and designed a unified framework to learn a modal-invariant and motion-invariant subspace. In this paper, we present a new video-based VI-ReID framework, in which GAN is applied to perform compensation learning, and the two-stream framework is used to learn modality-shared features.

§.§ Auxiliary Learning
Auxiliary learning aims to find or design auxiliary tasks which can improve the performance on the primary task. It only pursues high performance on the primary task, and the role of the auxiliary task is an assistant. Toshniwal et al. proposed to use lower-level tasks as auxiliary tasks to improve the performance of speech recognition <cit.>. Zhai et al. presented the S^4L-Rotation strategy, which assisted semi-supervised image classification with a rotation prediction task <cit.>. For fine-grained classification, Niu et al. exploited the knowledge from auxiliary categories with well-labeled data <cit.>.
In this paper, we regard the auxiliary task as a degraded and simplified version of the primary task, and gradually reduce the amplitude of its gradients during training. In the re-identification task, auxiliary learning was often implemented as an extra classification task. Considering that ReID suffers from viewpoint changes, Feng et al. applied a view classifier to learn view-related information <cit.>. For cross-modality ReID, Ye et al. introduced a modality classifier to discriminate the features from two different modalities <cit.>. Li et al. added a self-supervised learning branch for image rotation classification to help discover geometric features <cit.>. In the occluded ReID task, the occluded/non-occluded binary classification (OBC) loss was proposed to determine whether a sample was from an occluded person distribution or not <cit.>. Instead of using classifiers, He et al. incorporated non-visual clues through learnable embeddings to alleviate the data bias brought by cameras or viewpoints in TransReID <cit.>. Different from them, in this work, we take the large-scale single-camera samples as the auxiliary set, which is easy to collect with low labeling cost but is generally ignored in previous fully-supervised ReID works. We will show that minor modifications to common learning frameworks can bring remarkable improvements if the auxiliary set is used well.

§.§ Rank Optimization
Rank optimization typically acts as a post-processing procedure in the ReID task. Given one or more ranking lists, it revises the ranking order to improve the retrieval performance. Leng et al. <cit.> proposed the bi-directional ranking algorithm, which first performed forward ranking and backward ranking, and then computed the final ranking list in accordance with both content and context similarities. Ye et al. <cit.> used both similarity and dissimilarity cues to optimize the ranking list. K-reciprocal <cit.> has been one of the most commonly used re-ranking methods in recent years; it adopts the Jaccard distances of the k-reciprocal sample sets to complement the initial feature distances. Sarfraz et al. <cit.> proposed the expanded cross neighborhood distances between sample pairs to exploit neighbor cues. Yu et al. <cit.> designed a “divide and fuse” strategy, which first divided the features into multiple parts and then encoded the neighborhood relation in the subspaces. Finally, these sparse feature vectors were fused with a fuzzy aggregation operator, which exploits the diversity from different parts of high-dimensional features. However, these methods are not designed for the video-based ReID task, and the temporal information is not explicitly explored. In this work, we improve the k-reciprocal re-ranking algorithm by exploiting fine-grained temporal correlations, and prove its effectiveness in video-based settings.

§ PROPOSED DATASET
The BUPTCampus dataset is constructed for video-based visible-infrared person re-identification. We adopt six binocular bimodal cameras to capture RGB and IR modalities simultaneously with approximate pixel-alignment. To ensure the diversity of samples, different cameras and engines are used to capture videos with various color styles, resolutions, frame rates and binocular synchronization modes, as shown in Tab.<ref>. The topologies of the cameras are shown in Fig.<ref>. For labeling, all RGB videos are processed by the multi-object tracking algorithm SiamMOT <cit.> to predict tracklets, which are further revised manually. Then cross-camera IDs are annotated manually.
Benefiting from the synchronization mode of the bimodal cameras, the bounding boxes and IDs of IR samples are automatically generated from the RGB labels. The resulting dataset contains 3,080 identities, 1,869,066 images and 16,826 trajectories (111 images per trajectory on average). Each identity appears in 1 to 6 cameras, and the proportion distribution is shown in Fig.<ref>. The multiple-camera samples are randomly split into the training set (primary set, 1,074 IDs) and testing set (1,076 IDs). Note that 29.59% of identities only appear in one camera. These samples are readily available in reality but are generally ignored in existing works. Instead, we regard them as the auxiliary set (930 IDs) to assist the optimization procedure. We compare BUPTCampus with common ReID datasets in Tab.<ref>. Most previous image-based and video-based datasets only contain the visible modality <cit.>, which limits the applications in 24-hour surveillance systems. RegDB <cit.>, SYSU-MM01 <cit.> and LLCM <cit.> contain both RGB and IR modalities, but are flawed in data type (image) and scale (400 to 1,000 IDs). HITSZ-VCM <cit.> is a recently proposed dataset which supports the video-based setting. Compared with it, BUPTCampus has the following advantages: 1) much larger scale; 2) containing auxiliary samples; 3) approximate pixel-alignment between RGB/IR tracklets. In conclusion, we summarize BUPTCampus as the first video-based VI-ReID dataset with paired samples and an auxiliary set. For quantitative evaluation, we adopt the typical Cumulative Matching Characteristic curve (CMC) and mean Average Precision (mAP) as evaluation metrics <cit.>. CMC represents the expectation of the true match being found within the first n ranks, while it only considers the first match and cannot completely reflect the full retrieval results. mAP takes both precision and recall into account, and it is calculated as the area under the Precision-Recall curve for each query. Euclidean distance is used to calculate the matching cost of all query and gallery pairs to compute the evaluation metrics. Both “visible to infrared” and “infrared to visible” retrieval modes are utilized to achieve a more comprehensive evaluation.

§ PROPOSED METHOD
The overview of our proposed method is shown in Fig.<ref>. It is built on a two-stream network as baseline (§<ref>), and adopts a GAN module to help modality-invariant learning (§<ref>). Then an auxiliary learning framework (§<ref>) is designed to train primary and auxiliary samples jointly in the spirit of curriculum learning. Finally, we propose the temporal k-reciprocal re-ranking algorithm (§<ref>) to introduce fine-grained temporal cues for ranking optimization.

§.§ Baseline
The inputs to the network are RGB/IR tracklet pairs of fixed length. The partially shared two-stream framework <cit.> with ResNet <cit.> is utilized as our baseline. Specifically, the first convolutional blocks in the two streams do not share weights, in order to capture modality-specific low-level features. Differently, the parameters of the deeper convolutional blocks are shared to learn modality-invariant high-level embeddings. To aggregate frame-level features from multiple input images, a temporal average pooling layer is adopted to obtain the final embeddings.
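To make the design concrete, a minimal PyTorch-style sketch of this partially shared two-stream baseline follows; the exact split point between modality-specific and shared blocks and the module layout are illustrative assumptions, not the reference implementation (the ResNet-34 backbone matches the empirical settings reported later).

import torch
import torch.nn as nn
from torchvision.models import resnet34

class TwoStreamBaseline(nn.Module):
    # Partially shared two-stream network: modality-specific shallow
    # blocks, shared deep blocks, temporal average pooling.
    def __init__(self):
        super().__init__()
        def specific_stem():
            r = resnet34(weights=None)
            # modality-specific shallow block (split point is an assumption)
            return nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool, r.layer1)
        self.stem_rgb, self.stem_ir = specific_stem(), specific_stem()
        r = resnet34(weights=None)
        # deeper blocks shared by both modalities
        self.shared = nn.Sequential(r.layer2, r.layer3, r.layer4,
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x, modality="rgb"):  # x: (B, T, 3, H, W) tracklets
        B, T = x.shape[:2]
        stem = self.stem_rgb if modality == "rgb" else self.stem_ir
        frame_feats = self.shared(stem(x.flatten(0, 1)))  # (B*T, 512)
        return frame_feats.view(B, T, -1).mean(dim=1)     # temporal avg pooling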
In training, the softmax cross-entropy loss is calculated to learn modality-shared identity embeddings, which is denoted by:
L_ce^1 = -(1/N) ∑_i=1^N log( exp(z_i,y_i) / ∑_k=1^C exp(z_i,k) ),
where z_i,k represents the output classification logit that an input sample x_i is recognized as identity k, y_i is the ground-truth identity of x_i, C is the number of identities, and N is the number of tracklet samples. Meanwhile, the triplet loss with hard mining is also adopted. Mathematically, it is represented by
L_triplet^1 = ∑_i=1^N [ m + max_∀ y_i=y_j D(x_i, x_j) - min_∀ y_i ≠ y_k D(x_i, x_k) ]_+,
where [·]_+ = max(·, 0) and m is the triplet margin. x_i and y_i represent the input sample and the corresponding identity label, and D(·, ·) calculates the Euclidean distance between the extracted features of two input samples. The total learning objective of our baseline is
L^1 = L_ce^1 + L_triplet^1.
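The batch-hard triplet term above can be written compactly; a minimal PyTorch sketch is given below (the batch is assumed to hold tracklet-level embeddings, and the loss is averaged over the batch here rather than summed as in the equation).

import torch

def batch_hard_triplet_loss(feats, labels, margin=0.6):
    # feats: (N, D) tracklet embeddings; labels: (N,) identity labels
    dist = torch.cdist(feats, feats)                      # pairwise Euclidean
    pos = labels.unsqueeze(0) == labels.unsqueeze(1)      # same-identity mask
    hardest_pos = (dist * pos.float()).max(dim=1).values  # farthest positive
    hardest_neg = dist.masked_fill(pos, float("inf")).min(dim=1).values
    return torch.relu(margin + hardest_pos - hardest_neg).mean()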
§.§ PairGAN
GAN is widely used to alleviate the modality differences between RGB and IR samples in previous works <cit.>. Inspired by this, we insert a GAN module, named PairGAN, to translate real RGB images I_rgb into fake IR images I_ir'. It mainly consists of a generator G_rgb → ir, which learns a mapping from RGB to IR, and a discriminator D_ir to distinguish between real and fake IR images. Benefiting from the pixel-aligned RGB/IR sample pairs collected in our dataset, the reconstruction loss can be directly applied to supervise G_rgb → ir as L_recon = ||G_rgb → ir(I_rgb) - I_ir||_1. To further guarantee that the generator does not lose content information, the cycle-consistency loss <cit.> is introduced, which supervises the RGB (IR) images reconstructed from fake IR (RGB) images as follows: L_cycle = ||G_ir → rgb(G_rgb → ir(I_rgb)) - I_rgb||_1 + ||G_rgb → ir(G_ir → rgb(I_ir)) - I_ir||_1, where G_ir → rgb is the generator which generates fake RGB images from IR images. Here the definition of the adversarial loss for the discriminators is omitted for simplicity. The overall loss L_gan for PairGAN is the sum of the reconstruction loss, the cycle-consistency loss and the adversarial loss. After generating the fake IR tracklets, we input them to the network along with the corresponding real IR tracklets, and train them with the same loss as in Eq.<ref>, denoted by L^2 = L_ce^2 + L_triplet^2. During inference, the two groups of cross-modal features trained by the two loss functions L^1 and L^2 are concatenated together to perform re-identification.

§.§ Auxiliary Learning
To exploit the auxiliary set, in which each identity only appears in a single camera, the overall framework is designed as a multi-task learning procedure, i.e., a primary task and an auxiliary task. Specifically, we propose a two-branch framework to train primary samples and auxiliary samples jointly, as shown in Fig.<ref>(b). The primary branch represents the baseline network (§<ref>, §<ref>) as shown in Fig.<ref>(a), and its loss function L_primary is exactly the L^1 (L^2) as in Eq.<ref> (Eq.<ref>). The auxiliary branch shares the same structure and weights with the primary branch, but takes auxiliary samples as input, and its loss function L_auxiliary is the same triplet loss as in Eq.<ref>. Finally, the total loss is calculated as the weighted sum of these two losses: L_total = (1 - α) L_primary + α L_auxiliary, where α is the factor balancing the two tasks. The intuitive approach is to set α to a fixed value. However, each identity in the auxiliary set only contains one pair of cross-modality tracklets, without cross-camera positive samples, which makes it less effective for the primary task. A large α will make the auxiliary task seriously interfere with the learning of the primary task. Instead, a small α will lead to the cues in the auxiliary set not being fully mined. Note that the positive auxiliary samples are almost pixel-aligned, without pose and view variations. Therefore, we regard the auxiliary task as a simplified version of the primary task. Inspired by the success of curriculum learning <cit.>, we set α in Eq.<ref> as a dynamic curriculum factor: α(E) = (cos(π E) + ϕ) / (2(1+ϕ)), where E ∈ [0,1] is the normalized epoch index and ϕ is a predefined hyperparameter. For example, with ϕ=3, the value of α is initially 0.5 and gradually decays along a cosine schedule to 0.25. In this way, the optimization weight of the auxiliary task decreases continuously as training progresses. At the beginning of training, the model learns from both auxiliary samples (simple curriculum) and primary samples (difficult curriculum) equally. As α decreases, the difficulty of the “curriculum” gradually increases, and the optimization emphasis turns to the primary samples. In other words, the auxiliary samples provide an initial optimization direction for fast gradient descent and can accelerate and stabilize the optimization procedure.
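The curriculum factor costs nothing to compute per epoch; a minimal sketch follows (the values for ϕ=3 match the example above).

import math

def curriculum_alpha(epoch, max_epoch, phi=3.0):
    E = epoch / max_epoch  # normalized epoch index in [0, 1]
    return (math.cos(math.pi * E) + phi) / (2.0 * (1.0 + phi))

# with phi = 3: alpha starts at 0.5 and decays to 0.25 at the last epoch;
# total loss per step: (1 - alpha) * loss_primary + alpha * loss_auxiliary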
§.§ Temporal K-reciprocal
As discussed in section <ref>, given the i-th tracklet of length T as input, our network first extracts frame-level features F̃_i = {f̃_i^t}_t=1^T. Then, a temporal average pooling operation is utilized to output the final embedding f_i. In the testing stage, we denote the features of the i-th query sample and the j-th gallery sample as {Q̃_i = {q̃_i^t}_t=1^T, q_i} and {G̃_j = {g̃_j^t}_t=1^T, g_j}, respectively. Then the Euclidean distance matrix D_feat ∈ ℝ^M × N between all query-gallery pairs can be calculated, where M and N are the numbers of query and gallery samples. Its (i,j)-th element is the feature distance between q_i and g_j: d_feat(i,j) = (q_i - g_j)^T (q_i - g_j). Finally, the retrieval list is obtained by ranking all gallery samples with D_feat for each query sample. In the original k-reciprocal re-ranking algorithm <cit.>, the k-reciprocal nearest neighbors for query q_i are defined as R(q_i, k) = {g_j | (g_j ∈ N(q_i, k)) ∧ (q_i ∈ N(g_j, k))}, where N(q_i, k) is the set of k-nearest neighbors of q_i. Then the new distance between q_i and g_j can be calculated by the Jaccard metric of their k-reciprocal sets as d_jacc(i,j) = 1 - |R(q_i, k) ∩ R(g_j, k)| / |R(q_i, k) ∪ R(g_j, k)|, where |·| denotes the cardinality of the set. For simplification, the expanded k-reciprocal nearest neighbors and local query expansion are not shown here. In implementation, to speed up the calculation, the k-reciprocal neighbors are encoded into feature vectors, and the Jaccard distance in Eq.<ref> can be implemented by element-wise minimization and maximization. The final distance is defined as the weighted sum of the original feature distance and the Jaccard distance: d_rerank(i,j) = λ_1 d_feat(i,j) + (1 - λ_1) d_jacc(i,j), where λ_1 ∈ [0, 1] denotes the penalty factor. The success of the k-reciprocal algorithm owes to its automatic gallery-to-gallery similarity mining, in which the rich contextual information latent in sample correlations is fully explored. However, it is designed in an instance-level manner, and the fine-grained frame-level cues are not exploited. To solve this problem, we introduce the cross-temporal operation to calculate the temporal correlations between query and gallery samples. Specifically, for the i-th query, the frame-level feature set Q̃_i = {q̃_i^t}_t=1^T is evenly split into L groups along the temporal dimension, where the l-th group is Q̃_i^l = {q̃_i^t}_t=lT/L+1^(l+1)T/L, l=0,...,L-1. Then temporal average pooling is applied to aggregate the frame-level features in each group respectively, resulting in L embeddings Q_i = {q_i^0,..,q_i^l,..,q_i^L-1}. Similarly, we can get the embedding set for the j-th gallery as G_j = {g_j^0,..,g_j^l,..,g_j^L-1}. Thus, the cross-temporal operation can be defined as d_cross(i,j) = ∑_l=0^L-1 d_jacc(q_i^l, g_j^L-1-l), where d_jacc(·, ·) is the Jaccard distance of the k-reciprocal sets as in Eq.<ref>. One concise illustration is shown in Fig.<ref>(c). Therefore, the final distance in Eq.<ref> can be rewritten as d_rerank = λ_1 d_feat + (1 - λ_1) d_jacc + λ_2 d_cross, where the indices (i,j) are omitted here for clarity. We term the overall re-ranking algorithm temporal k-reciprocal re-ranking.

Discussion. Compared with the initial k-reciprocal re-ranking algorithm, the introduced cross-temporal operation fully explores the temporal correlations between query and gallery samples in a more fine-grained manner. Each sub-feature q_i^l has a smaller temporal receptive field and contains more detailed temporal information than q_i. Thus, the cross-temporal operation can preserve cues which may be smoothed out by the global average pooling operation. One similar design is PCB <cit.>, which horizontally splits the input image into multiple parts to extract local fine-grained cues. PCB operates in the spatial domain, requires changing the network design, and induces more computational cost. Another related method is DaF <cit.>, which divides one instance feature into multiple fragments along the feature dimension and explores the diversity among sub-features. Differently, our method operates along the temporal dimension and optimizes the ranking procedure. Moreover, it does not change the structure or training procedure of the network, and only adds negligible inference time.
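A sketch of the cross-temporal operation on top of an existing k-reciprocal routine is given below; here jaccard_dist stands in for any standard implementation of the Jaccard distance over k-reciprocal sets (assumed given), and the tracklet length T is assumed divisible by L.

import torch

def cross_temporal_distance(q_frames, g_frames, jaccard_dist, L=2):
    # q_frames: (M, T, D), g_frames: (N, T, D) frame-level features (T % L == 0)
    M, T, _ = q_frames.shape
    N = g_frames.shape[0]
    q_parts = q_frames.view(M, L, T // L, -1).mean(dim=2)  # (M, L, D) group pooling
    g_parts = g_frames.view(N, L, T // L, -1).mean(dim=2)  # (N, L, D)
    d_cross = torch.zeros(M, N)
    for l in range(L):
        # cross pairing: l-th query group vs. (L-1-l)-th gallery group
        d_cross += jaccard_dist(q_parts[:, l], g_parts[:, L - 1 - l])
    return d_cross

# final ranking distance: d = l1 * d_feat + (1 - l1) * d_jacc + l2 * d_cross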
§ EXPERIMENTAL RESULTS
§.§ Empirical Settings
We conduct experiments on the constructed BUPTCampus dataset, in which 1,074 IDs are used for primary learning, 930 IDs for auxiliary learning, and 1,076 IDs for testing. Rank1, Rank5, Rank10, Rank20 and mAP are adopted as the evaluation metrics. We implement our model in PyTorch <cit.> and train it on two NVIDIA TESLA T4 GPUs. ResNet-34 <cit.> is utilized as the backbone because it performs better than ResNet-50 in our experiments. Adam <cit.> is utilized as the optimizer with a weight decay of 1e-5. The learning rate is set to 2e-4 initially and updated with a cosine scheduler. We set the maximum number of epochs to 100 and the batch size to 32, with P=8 identities and K=4 tracklets per identity. The tracklet length is 10 and the image resolution is set to 256×128. We use the random sampling strategy for training and the uniform sampling strategy for testing. Random cropping and flipping are used for data augmentation. The margin m in the triplet loss is set to 0.6, and the factor ϕ in Eq.<ref> is 3. For the temporal k-reciprocal re-ranking algorithm, the intrinsic hyperparameters of the original k-reciprocal algorithm are set as k1=5, k2=3 and λ_1=0.8. The newly added factor λ_2 is set to 0.1, and the number of groups is set as L=2.

§.§ Ablation Study
Main Results.
Tab.<ref> summarizes the overall ablation results of the main components of our methods, i.e., PairGAN, auxiliary learning and temporal re-ranking. We can draw the following observations: * The introduced PairGAN module can bring 4% to 6% Rank1 improvement and approximately 4% mAP improvement over the baseline, which validates that it can help modality-invariant learning. * The designed auxiliary learning method improves Rank1 by 6% and mAP by 5% respectively, which indicates the effectiveness of exploiting the single-camera samples in training. * The temporal re-ranking algorithm can improve the mAP by 4%. That means it can effectively revise the ranking order with mined gallery-to-gallery similarities and temporal correlations. * These components complement each other. By integrating them all, our final methods improve the performance of the baseline by approximately 10% Rank1 and 10% mAP.

Sequence Length. Compared with image-based ReID, video-based ReID can exploit rich temporal information, which helps eliminate the effects of noise and distractors. To verify the necessity of video data, we study the influence of sequence length in Tab.<ref>, and we have the following conclusions: * A terrible result is obtained in the case of setting the sequence length to 1, which means the task degrades to image-based ReID. That indicates the image-based methods cannot work well on our benchmark. * As the sequence length varies from 1 to 15, the performance is gradually improved, which demonstrates the effectiveness of temporal information. * When the sequence length is larger than 15, the Rank1 performance degrades under the “visible-to-infrared” mode, which shows that excessive length will bring more redundant information. To achieve a balance between accuracy and efficiency, we set the sequence length to 10 by default.

Auxiliary Learning. We compare different designs of the curriculum factor α in our auxiliary learning method in Tab.<ref>. Three types of design are listed, i.e., fixed value, exponential decline and cosine decline. For a fixed value, α=0.3 achieves a trade-off between the primary task and the auxiliary task. As for the exponential decline design, it achieves a similar performance to the fixed value. Differently, a cosine decline strategy performs much better, with improvements of 2% mAP and 3% Rank1 over them.

Temporal Re-ranking. The comparisons between the original k-reciprocal re-ranking algorithm and our proposed temporal re-ranking algorithm are listed in Tab.<ref>. Four different models are selected as the baseline methods, i.e., DDAG<cit.>, DART<cit.>, MITML<cit.> and our constructed baseline network. It is shown that the original k-reciprocal algorithm can consistently improve the mAP metrics, but sometimes decreases Rank1. Differently, our temporal re-ranking algorithm can further increase both mAP and Rank1 by a remarkable margin. Moreover, it only increases negligible inference time. We further study the effects of parameter selection in Tab.<ref>. As reported, the performance is robust to varying values of k1, k2, λ_1, λ_2 and L.

§.§ Main Results
For comprehensive comparisons, we reproduce 9 other state-of-the-art methods on BUPTCampus, as shown in Tab.<ref>. For the image-based methods, i.e., AlignGAN<cit.>, LbA<cit.>, DDAG<cit.>, CAJ<cit.>, AGW<cit.>, MMN<cit.>, DEEN<cit.>, DART<cit.>, the default setting is a sequence length of 1 during inference.
For a fair comparison, we also report their results with a length of 10, which is implemented by adding a temporal average pooling operation during inference. For the video-based method, MITML<cit.>, we report the results with a length of 6, as it performs better than a length of 10 in our implementation. The results indicate that our solution, dubbed AuxNet, shows substantial superiority to existing state-of-the-art methods. Furthermore, we conduct experiments on the HITSZ-VCM dataset, and the results are shown in Tab.<ref>. Please note that the proposed “PairGAN” and “auxiliary learning” methods cannot be utilized here. Therefore, only the results of the baseline (based on ResNet50) and temporal re-ranking are given. The parameters of temporal re-ranking are set to k1=20, k2=6, λ_1=0.3 and λ_2=0.4.

§.§ Visualization
Feature Distribution.
In order to further analyze the performance difference between the baseline and our AuxNet, we compute the feature distance distribution and the feature space distribution. Fig.<ref>(a) shows the initial distance distributions of the intra-class and inter-class pairs, in which features are extracted by the network without training on BUPTCampus. Fig.<ref>(b) and (c) visualize the distance distributions obtained by the baseline and AuxNet respectively. It is shown that our method can further separate the intra-class and inter-class distances compared with the baseline, with a larger difference Δμ between the means of the two distributions. In Fig.<ref>(d-f), we plot the feature embeddings in the 2D feature space for visualization using UMAP <cit.>. The results show that our AuxNet can better distinguish difficult negative samples (marked by red dotted ellipses), and greatly narrows the gap between the two modality samples with the same identity.

PairGAN. Fig.<ref> visualizes the sampled real RGB images, fake IR images generated by PairGAN, and the corresponding real IR images. The fake IR images preserve the detailed texture cues from the RGB modality and have a color style similar to the IR modality. Therefore, they can help alleviate the differences between the two modalities and learn modality-invariant embeddings. We further visualize the distribution of samples of these three modalities in Fig.<ref>. It can be observed that, compared to the real RGB samples, the fake IR samples have a distribution closer to the real IR samples.

Ranking List. Fig.<ref> compares the ranking lists of the baseline and our AuxNet in both “visible-to-infrared” and “infrared-to-visible” settings. In the first case, given an RGB query sample (ID 1551), the baseline method retrieves two wrong samples in the top-5 ranking list. Instead, our AuxNet successfully revises them. Specifically, the positive sample under camera “G25” has huge modality and view differences from the query sample, and is correctly retrieved. In the second case, with an IR query sample (ID 1953), our AuxNet achieves the right top-1 and top-2 matching results.

§ CONCLUSIONS
In this paper, we contribute a new benchmark for video-based visible-infrared person re-identification, named BUPTCampus, which is the first dataset with RGB/IR tracklet pairs and auxiliary samples. It consists of 3,080 identities, 1,869,066 images and 16,826 tracklets. Furthermore, we construct a two-stream network as baseline and present the PairGAN module to help modality-invariant learning. To exploit the auxiliary samples, we propose to train primary samples and auxiliary samples jointly with a curriculum factor.
Finally, we propose a novel temporal k-reciprocal algorithm to re-rank the retrieval results with fine-grained temporal correlation cues. We demonstrate the effectiveness of our method by comparing it with 9 state-of-the-art works. We hope the contributed dataset and methods can help to narrow the gap between academic works and realistic applications.

§ ACKNOWLEDGMENTS
This work is supported by the Chinese National Natural Science Foundation under Grants (62076033, U1931202) and the BUPT Excellent Ph.D. Students Foundation (CX2022145).

| http://arxiv.org/abs/2311.15571v1 | {
"authors": [
"Yunhao Du",
"Cheng Lei",
"Zhicheng Zhao",
"Yuan Dong",
"Fei Su"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127064522",
"title": "Video-based Visible-Infrared Person Re-Identification with Auxiliary Samples"
} |
Induced current in braneworld model in high-dimensional AdS bulk in the cosmic string spacetime
W. Oliveira dos Santos^1 E-mail: [email protected], E. R. Bezerra de Mello^1 E-mail: [email protected]
^1Departamento de Física, Universidade Federal da Paraíba, 58.059-970, Caixa Postal 5.008, João Pessoa, PB, Brazil
January 14, 2024
========================================

[Figure: Given only the input textual prompt, our system can autonomously detect and rectify the layout inconsistencies across various position requirements (a-d), object quantities (e-g), and resolutions (h-i).]

^†Equal contribution. *Work done during internship at Alibaba Group.

Diffusion models have recently achieved remarkable progress in generating realistic images. However, challenges remain in accurately understanding and synthesizing the layout requirements in the textual prompts. To align the generated image with layout instructions, we present a training-free layout calibration system that intervenes in the generative process on the fly during inference time. Specifically, following a “check-locate-rectify” pipeline, the system first analyses the prompt to generate the target layout and compares it with the intermediate outputs to automatically detect errors. Then, by moving the located activations and making intra- and inter-map adjustments, the rectification process can be performed with negligible computational overhead. To evaluate over a range of layout requirements, we present a benchmark that compensates for the lack of superlative spatial relations in existing datasets. Both quantitative and qualitative results demonstrate the effectiveness of the proposed SimM in calibrating the layout inconsistencies. Our project page is at <https://simm-t2i.github.io/SimM>.

§ INTRODUCTION
Text-to-image generation <cit.> has emerged as a promising application of AI-generated content (AIGC), demonstrating the remarkable ability to generate synthetic images from conditional text descriptions. This technology has attracted considerable attention in recent years due to its potential impact on various domains such as image customization <cit.>, 3D content creation <cit.> and virtual reality <cit.>. Since achieving high-quality and diverse image generation is challenging, recent advancements have witnessed the rise of diffusion models <cit.>. Diffusion models employ a sequential generation process that gradually refines the generated images by iteratively conditioning on noise variables. This iterative refinement mechanism allows for an improvement in fidelity and quality. Despite the effectiveness of diffusion models, a significant challenge remains: most text-to-image generators, typified by Stable Diffusion <cit.>, show limitations in accurately understanding and interpreting textual layout instructions <cit.>. This can be regarded as a kind of “hallucination” <cit.>, which refers to the phenomenon that the generated image is inconsistent with the prompt content. On the one hand, various textual descriptions include the relative relation “a dog to the left of a cat” and the superlative relation “the crown on the bottom”, presenting an inherent difficulty for automated systems to parse and understand layout information.
Besides, inaccuracies in spatial relations may be due to the prior knowledge embedded in pre-trained models, as the large dataset may contain certain biases or assumptions about object placement or orientation. To exemplify this point, consider the following situation: since the “crown” in the training images is predominantly positioned over the head of another organism, it becomes difficult to specify its occurrence below (<ref>-e). These factors not only compromise the quality and fidelity of the generated images but also hinder the overall utility and user experience of text-to-image generation systems. Some efforts <cit.> attempt to address the issue by training auxiliary modules or fine-tuning diffusion models on datasets with layout annotations. Apart from the difficulty of collecting sufficient high-quality data, these resource-intensive methods require retraining for each given checkpoint, making them struggle to keep up with the rapid version iterations of base models. In this paper, we delve into the exploration of layout calibration given a pre-trained text-to-image diffusion model. Consequently, we present a training-free real-time system, SimM, which follows the proposed “check-locate-rectify” pipeline. The checking stage is first applied to mitigate the potential impact on the generation speed, where SimM generates an approximate target layout for each object by parsing the prompt and applying heuristic rules. After comparing the target layout with the intermediate cross-attention maps, layout rectification can be initiated if there are layout inconsistencies, and SimM locates the misplaced objects during the localization stage. Finally, during the rectification stage, SimM transfers the located activations to the target regions and further adjusts them with intra-/inter-map activation enhancement and suppression. The entire workflow only affects the generation process, avoiding any additional training or loss-based updates. We conduct both quantitative and qualitative experiments to evaluate the effectiveness of the proposed SimM. Since the popular DrawBench dataset <cit.> only contains prompts with relative spatial relations, we present a new benchmark that includes superlative descriptions composed of various orientations and objects, compensating for the diversity of textual prompts. Compared to the recent works <cit.>, which rely on precise target layouts provided by the user, SimM achieves satisfactory correction results even when the target layout is not precise enough, leading to a significant improvement in the layout fidelity of the generated images.

§ METHODOLOGY
In this paper, we aim to align the generated images with the layout requirements in the prompts, and present a layout calibration system that requires no additional fine-tuning. In <ref>, we first briefly review the publicly available, state-of-the-art text-to-image generator, Stable Diffusion <cit.>. In <ref>, we introduce how to determine whether a layout correction should be initiated. In <ref>, we detail the localization of activated regions on the merged cross-attention maps. Finally, in <ref>, we present how the system rectifies the cross-attention activations according to the localized patterns and the target locations. An overview of the pipeline is illustrated in <ref>.

§.§ Preliminaries
Stable Diffusion.
Stable Diffusion (SD) <cit.> applies a hierarchical variational autoencoder (VAE) <cit.> to operate the diffusion process <cit.> in a low-dimensional latent space. Specifically, the VAE, consisting of an encoder ℰ and a decoder 𝒟, is trained with a reconstruction objective. The encoder ℰ encodes the given image 𝐱 into latent features 𝐳, and the decoder 𝒟 outputs the reconstructed image 𝐱 from the latent, i.e., 𝐱 = 𝒟(𝐳) = 𝒟(ℰ(𝐱)). To be applied in a text-to-image scenario, a pre-trained CLIP <cit.> text encoder encodes the input textual prompt into N tokens 𝐲, and a U-Net <cit.> consisting of convolution, self-attention, and L cross-attention layers is adopted as the denoiser ϵ_θ. During training, given a noised latent 𝐳^t and text tokens 𝐲 at timestep t, the denoiser ϵ_θ is optimized to remove the noise ϵ added to the latent code 𝐳: ℒ=𝔼_𝐳∼ℰ(𝐱), 𝐲, ϵ∼𝒩(0,1), t[ϵ-ϵ_θ(𝐳^t, t, 𝐲)_2^2]. During inference, a latent 𝐳^T is sampled from the standard normal distribution 𝒩(0,1). At each denoising step t ∈ [T, ⋯, 1], 𝐳^t-1 is obtained by removing noise from 𝐳^t conditioned on the text tokens 𝐲. After the final denoising step, the decoder 𝒟 maps the latent 𝐳^0 to an image 𝐱.

Cross-Modal Attention.
The SD model leverages cross-attention layers to incorporate textual cues for the control of the image generation process. Given the text tokens 𝐲 and intermediate latent features 𝐳^l, the cross-attention maps from the l-th layer 𝐀^l ∈ ℝ^W^l × H^l × N can be derived as 𝐀^l = Softmax(𝐐^l (𝐊^l)^⊤ / √(d)), where 𝐳^l and 𝐲 are projected to the query matrix 𝐐 and key matrix 𝐊, the dimension d is used to normalize the softmax values, and we omit the superscript t for notational clarity and generality. Existing studies <cit.> have proposed that for the object corresponding to the k-th token of the prompt, higher activations on the intermediate cross-attention maps 𝐀_k^l ∈ ℝ^W^l × H^l indicate the approximate position where the object will appear. Therefore, we align the spatial location of generated objects with textual layout requirements by adjusting the activations on the cross-attention maps.
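To make the notation concrete, a minimal sketch of how one layer's cross-attention maps are formed is given below; the projection modules stand in for the U-Net's pre-trained weights and are assumptions of this illustration.

import torch

def cross_attention_maps(z, y, to_q, to_k):
    # z: (W*H, C) flattened latent features of one U-Net layer
    # y: (N, C_txt) CLIP text-token embeddings
    # to_q / to_k: that layer's pre-trained linear projections (placeholders)
    Q, K = to_q(z), to_k(y)                                    # (W*H, d), (N, d)
    A = torch.softmax(Q @ K.T / (K.shape[-1] ** 0.5), dim=-1)
    return A   # (W*H, N); column k is the spatial map of the k-th token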
§.§ Check
A key constraint for the real-time system is to minimize the influence on the generation speed. Therefore, SimM first (1) detects the presence of object layout requirements within the text and (2) assesses any discrepancies between the generated image and the specified layout requirements. Only if both conditions are met does the system take corrective action; otherwise, it continues with normal generation to avoid additional computational overhead. The exact implementation of the two-step inspection is discussed below.

☑ Layout requirements exist in textual prompts.
Existing studies <cit.> have predominantly emphasized relative spatial relations that are more common in written language, such as “a dog to the left of a cat”. However, we argue that superlative spatial relations, which mean that an object shares the same relation to all other objects, have been neglected by previous research and datasets <cit.>. For example, the phrase “a flower on the left” signifies that the flower is positioned to the left of all other objects, making it ideal for the leftmost target location. In practice, it is difficult for users to directly describe their layout requirements using multiple relative expressions at once, so more direct superlative expressions actually account for a larger proportion. To effectively and efficiently capture both forms of expression in a straightforward manner, our system identifies specific positional keywords with a predefined vocabulary (described in the appendix). For relative spatial relations, we define five spatial relations, including left, right, above, below and between, with each relation containing a predefined vocabulary set. And for superlative spatial relations, we include additional vocabulary such as “upper-left” and “lower-right”. The system filters out those prompts that contain words from the vocabulary set to determine the presence of layout requirements. In practice, such a simple check implementation achieves considerable accuracy with negligible additional computational overhead.

☑ Discrepancy exists between the generated image and layout requirements.
To determine whether the generated image is consistent with the layout requirements, the target positions of all objects are necessary. For target layout generation, our system provides an efficient solution by performing dependency parsing on the prompt followed by heuristic rules. The dependency parsing can be implemented using an industrial-strength library such as spaCy <cit.>. After assigning syntactic dependency labels to tokens, SimM can parse the binary pair “(flower, left)” from the superlative “a flower on the left”, and the triple “(dog, left, cat)” from the relative “a dog to the left of a cat”. Following pre-defined rules, the system first assigns target boxes to objects associated with superlative position terms. Then, the remaining relative triples (and quaternions if a “between” relation exists) can be organized as a semantic tree, with nodes as objects and edges as spatial relations. By traversing the tree, the remaining space in the image is successively allocated. A detailed example of the assignment can be found in the appendix.
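A minimal sketch of this keyword-plus-parsing step with spaCy follows; the vocabulary is abbreviated from the appendix, and the tree-climbing rule for attaching position words to objects is a simplified assumption rather than the system's exact heuristics (recovering the second object of a relative triple, and the quaternions for “between”, would extend the same traversal).

import spacy

nlp = spacy.load("en_core_web_sm")
# abbreviated vocabulary; the full lists appear in the appendix
POSITION_WORDS = {"left", "right", "top", "bottom", "above", "below", "middle"}

def parse_positions(prompt):
    # attach each position word to the noun its phrase modifies by
    # climbing the dependency tree (a simplified, illustrative rule)
    doc = nlp(prompt)
    pairs = []
    for tok in doc:
        if tok.lower_ in POSITION_WORDS:
            gov = tok.head
            while gov.pos_ not in ("NOUN", "PROPN") and gov.head.i != gov.i:
                gov = gov.head
            pairs.append((gov.text, tok.lower_))
    return pairs

print(parse_positions("a flower on the left"))  # expected: [('flower', 'left')]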
For the object of the k-th token, 𝐛_k = (x_k, y_k, w_k, h_k) ∈ [0, 1]^4 denotes the assigned bounding box, where (x_k, y_k) are the relative coordinates of the centre, and w_k and h_k are the relative width and height of the box. The absolute boundaries 𝐛^l_k for the l-th layer can be computed with the concrete size of the corresponding attention map. Note that the predicted box may not necessarily fit the size of the object and is commonly larger. However, thanks to the subsequent activation transfer, this does not affect the rectification performance. Once the target boxes are obtained, the system prepares to assess whether each generated object is aligned with its target position. One natural solution, using an object detector on the generated image, requires a restart of the generation after the assessment for rectification and significantly increases the overall latency. Therefore, SimM places the alignment confirmation in the first denoising step (i.e., the T-th step). Specifically, after deriving the cross-attention maps for all layers, a layered attention merging averages them to obtain a merged attention map: 𝐀̅^T = (1/L) ∑_l=1^L 𝐀^T,l, where the maps are first upsampled to a uniform resolution of W^1 × H^1 before averaging. Then, for the object of the k-th token, SimM sums over the activations within 𝐀̅^T_k that correspond to the bounding box 𝐛^1_k. If the sum does not exceed a pre-defined threshold, the system predicts that the object will be generated in the wrong place.

§.§ Locate
After confirming the initiation of the rectification, the system identifies the source activated region for each object during the early T^loc denoising steps.

Temporal Attention Merging.
For each time step t ∈ [T, T-T^loc], the system simply saves the merged attention map 𝐀̅^t without any modification. When the (T-T^loc)-th denoising step is finished, the system performs another temporal merging on all stored maps, obtaining 𝐀̅ ∈ ℝ^W^1 × H^1 × N, which more stably indicates the source positions of generated objects: 𝐀̅ = (1/T^loc) ∑_t=T-T^loc^T 𝐀̅^t.

Activated Region Localization.
Given the temporally merged attention map 𝐀̅, the system locates the current activated region for each object. This is implemented by sweeping 𝐀̅_k with a rectangular sliding window. In practice, we keep the size of the window consistent with the target box assigned by the heuristic rules. And the activated region 𝐛^l_k in the l-th layer can be converted from the most salient window 𝐛^1_k found on 𝐀̅_k.

§.§ Rectify
After the (T-T^loc)-th denoising step, the system starts to modify the generated cross-attention maps for rectification. Note that in the following statements, 𝐀 denotes the cross-attention maps generated before applying Softmax(·). Besides, the maps from the first and last cross-attention layers are not modified, as we have observed that doing so improves the quality of object generation in practice.

Activation Transfer.
Since the size of the localized source activated region 𝐛^l_k and the assigned target box 𝐛^l_k are kept the same, the activation values of the source region can be directly duplicated to the target region, while the original region is filled with minimum values. In this way, SimM easily realizes the movement of the object. Even if the target boxes are obtained by other means (e.g., user-provided) rather than by heuristic rules, this simple transfer remains valid after reshaping the source activated region.

Intra-Map Activation Enhancement and Suppression.
In practice, we have found that some objects fail to appear due to insufficient activations in the cross-attention maps. Also, one object may not be exactly in its target area even after the transfer. Therefore, for the object of the k-th token, the system continues to modify the attention map by enhancing the activations in 𝐛^l_k. Meanwhile, to avoid the object appearing in non-target areas, the signal outside 𝐛^l_k is suppressed. Formally, we have
𝐀^t,l_k(i,j) ← 𝐀^t,l_k(i,j) · α, if (i,j) in 𝐛^l_k; 𝐀^t,l_k(i,j) / α, if (i,j) not in 𝐛^l_k,
where l ∈ [2, L-1], and the hyperparameter α ∈ ℝ^+ denotes the strength of the adjustment.

Inter-Map Activation Enhancement and Suppression.
The intra-map activation adjustment further enhances the control over the position of individual objects. However, due to the lack of interference between attention maps, the overlap of activated areas on different maps can lead to conflict and confusion in the generation of multiple objects. To avoid this issue, given the corresponding attention map 𝐀^t,l_k of each object, our system generates an adjustment mask 𝐌^t,l_k for the other maps: 𝐌^t,l_k = 1 - Softmax(𝐀^t,l_k), where the mask adjusts the attention values of the other maps: 𝐀^t,l_g ← 𝐌^t,l_k ⊙ 𝐀^t,l_g, for g ∈ [1, N] and g ≠ k. In this way, after applying Softmax(·), the activated regions on different maps can be staggered to reduce conflicts.
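The rectification reduces to a few tensor operations per token map; a simplified sketch of the activation transfer and intra-map adjustment is given below (box coordinates are assumed to be precomputed integer bounds on the map resolution, and the inter-map step is indicated schematically in the trailing comment).

import torch

def rectify_token_map(A_k, src_box, tgt_box, alpha=10.0):
    # A_k: (H, W) pre-softmax cross-attention map of the k-th token
    x0, y0, x1, y1 = src_box   # located source region (integer map coords)
    u0, v0, u1, v1 = tgt_box   # assigned target box, same size as src_box
    # activation transfer: duplicate the located activations into the target
    # region, then fill the source region with the map minimum
    patch = A_k[y0:y1, x0:x1].clone()
    A_k[y0:y1, x0:x1] = A_k.min()
    A_k[v0:v1, u0:u1] = patch
    # intra-map adjustment: enhance inside the target box, suppress outside
    inside = torch.zeros_like(A_k, dtype=torch.bool)
    inside[v0:v1, u0:u1] = True
    return torch.where(inside, A_k * alpha, A_k / alpha)

# inter-map suppression for every other token g (schematically):
#   M = 1 - torch.softmax(A_k.flatten(), dim=0).reshape_as(A_k)
#   A_g = M * A_g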
§ EXPERIMENTAL RESULTS
Datasets.
We utilize different datasets to evaluate the effectiveness for both relative and superlative layout requirements. For prompts involving relative spatial relations, we use a subset of 20 prompts from the DrawBench <cit.> dataset, which is a common choice of previous works <cit.>. However, there is a lack of an appropriate dataset that addresses prompts concerning superlative spatial relations. Therefore, we present a benchmark consisting of 203 prompts, where each prompt contains 1 to 4 objects, and each object has superlative layout requirements. Details are provided in the appendix.

Baselines. We select Stable Diffusion <cit.>, Layout-Guidance <cit.>, Attention-Refocusing <cit.> and BoxDiff <cit.> as baselines in the main comparison. We adopt the official implementations and default hyperparameters for all baselines.

Evaluation Metrics. The generation accuracy <cit.> is adopted as the primary evaluation metric. Specifically, a generated image is only considered correct if all objects are correctly generated and their spatial positions or relations, color, and other possible attributes align with the corresponding phrases in the prompt. Following previous studies <cit.>, we also report the CLIP-Score <cit.>, which measures the similarity between the input text features and the generated image features. While this metric has been widely used to explicitly evaluate the fidelity to the text prompt, we note that its reliability is limited, since CLIP struggles to understand spatial relationships and take them into account when scoring image-text pairs <cit.>.

Implementation Details. We adopt the DDIM scheduler <cit.> with 20 denoising steps (i.e., T=20), and the number of localization steps T^loc is set to 1 by default. The ratio of classifier-free guidance is set to 5. The adjustment strength α is set to 10. Four images are randomly generated for each evaluation prompt.

§.§ Main Results
Quantitative results. Tab.<ref> shows the quantitative comparison results between the different baselines and our SimM. On the DrawBench dataset, SimM achieves the highest generation accuracy and CLIP-Score, while outperforming the baselines by a significant margin of 9.5% in terms of accuracy. And on our benchmark, SimM not only surpasses the baselines by 14.45% in terms of accuracy but also achieves a comparable CLIP-Score. The results signify the effectiveness of the system in understanding both relative and superlative relationships, leading to satisfactory rectification of layout inconsistencies.

Qualitative results. In <ref>, we present more multi-resolution images generated by SimM. <ref> shows a visual comparison between the proposed SimM and the competing baselines. Without additional layout guidance, the images generated by the vanilla Stable Diffusion fail to convey the layout requirements specified by the textual prompt while also suffering from missing objects. The three baseline models can enhance the accuracy of the generation in terms of layout. However, they each still suffer from their respective issues. Taking the second row as an example, BoxDiff exhibits limitations in effectively controlling the layout, where the white daisies that should only appear on the right side also appear on the left and in the middle. And the images generated by Layout-Guidance and Attention-Refocusing exhibit noticeable blockiness, tearing artifacts and object deformations, which significantly degrade the quality. In contrast, our system maintains excellent image quality while rectifying the layout.
We attribute this to the activation localization and movement, which allows us to preserve the generative capabilities of the base model to the maximum extent, without relying on rigid constraints imposed by loss functions. §.§ Ablation Study In <ref>, we visualize the generated images after removing the intra- and inter-map activation adjustments from our system. After removing the intra-map adjustment, objects are missing (first two rows) or specified objects appear outside their target positions (the last row). This illustrates that the mechanism significantly contributes to controlling the placement of objects. Meanwhile, removing the inter-map adjustment increases the likelihood of interference from activations of other maps, which can disrupt the generation of objects in their target positions, ultimately resulting in erroneous or incomplete object generation. §.§ Further Analysis Effect of the number of localization steps T^loc. In <ref>, we present the visual results with layout rectification initiated at different denoising steps during the generation. It can be observed that starting the rectification from the first denoising step yields better results, ensuring that each object appears in its designated position. The later the rectification starts, the worse the correction effect, thus compromising the fidelity of the generated images. This observation is consistent with the conclusion from previous studies <cit.>, where diffusion models establish the layout in early stages and refine the appearance details in later stages. Effect of adjustment strength α. We scale α from 0.1 to 50 and illustrate some generated cases in <ref>. Setting α to 0.1 essentially reverses the enhancement and suppression, resulting in objects appearing in non-designated positions. Setting α to 1 essentially removes the intra-map attention adjustment, leading to less effective layout rectification. Further increasing α to 10 facilitates rectification and provides better control over the layout. However, excessively large values of α (e.g., setting it to 50) can degrade the quality of the generated images while imposing stricter constraints on the object positions. § RELATED WORK Text-to-Image Generation. Earlier works studied text-to-image generation in the context of generative adversarial networks (GANs) <cit.>. Despite their dominance, the adversarial nature of their training brings issues including training instability and reduced diversity in generation <cit.>. Text-conditional auto-regressive models <cit.> demonstrated more impressive results while requiring time-consuming iterative processes to achieve high-quality image sampling. Fitting naturally to the inductive biases of image data, the emerging diffusion models <cit.> have recently demonstrated impressive generation results based on open-vocabulary text descriptions. To reduce training overhead and speed up inference, the latent diffusion model <cit.> trims off pixel-level redundancy by applying an autoencoder to project images into a latent space and generating latent-level feature maps with the diffusion process. To align with the provided textual input, Stable Diffusion <cit.> further employs a cross-attention mechanism to inject the textual condition into the diffusion generation process. Layout Control in Diffusion Models. Existing models fail to fully understand the spatial relations of objects in free-form textual descriptions and to reflect them in the synthesized image, especially for complex scenes.
Therefore, jointly conditioning on text and layout has been studied, where layout control signals can be bounding boxes <cit.>, segmentation maps <cit.>, and key points <cit.>. Several methods extend the Stable Diffusion model by incorporating layout tokens into attention layers <cit.> or training layout-aware adapters <cit.>. However, requiring additional training on massive layout-image pairs, these approaches lack flexibility in the base model and may degrade the quality of the generated images. Therefore, recent efforts <cit.> design losses conditioned on layout constraints to update the noised latents during denoising. Layout-Guidance <cit.> computes the loss by applying an energy function on the cross-attention map, Attention-Refocusing <cit.> constrains both cross-attention and self-attention to “refocus” on the correct regions, and BoxDiff <cit.> designs inner-box, outer-box, and corner spatial constraints. However, they introduce extra computational cost for gradient updates, which affects the speed of generation. In contrast, our system directly modifies the activations to conform to the target for rectification, minimizing the computation overhead. Layout Generation. Previous layout-to-image studies <cit.> have largely neglected the discussion on layout generation and heavily relied on users to directly provide accurate layout boxes for objects. However, this necessitates assessing the validity of user input and increases the learning and interaction difficulty for users. Moreover, we have observed a substantial decline in the quality of generated images when the provided boxes are insufficiently accurate. The latest efforts <cit.> have turned to large language models like GPT-4 <cit.> by creating appropriate prompting templates to generate layouts, while each API request adds response time and incurs additional costs. In this paper, our system provides a light-weight solution based on dependency parsing followed by heuristic rules. § CONCLUSION In this paper, we propose a training-free layout calibration system for text-to-image generators, which aligns the synthesized images with layout instructions in a post-remedy manner. Following a “check-locate-rectify” pipeline, the system first decides whether to perform the layout rectification by checking the input prompt and the intermediate cross-attention maps. During the rectification, the system identifies and relocates the activations of mispositioned objects, where the target positions are generated by analysing the prompt with dependency parsing and heuristic rules. To comprehensively evaluate the effectiveness of the system, we present a benchmark which covers both simple and complex layouts described in terms of superlative relations. Through extensive qualitative and quantitative experiments, we demonstrate our superiority in improving generation fidelity and quality. § RELATION VOCABULARY FOR CHECKING Our system determines the existence of layout requirements by checking whether any words from our predefined relation vocabulary are present in the prompt. According to semantic similarity, the vocabulary contains six categories:* left: “left”, “west”* right: “right”, “east”* above: “above”, “over”, “on”, “top”, “north”* below: “below”, “beneath”, “underneath”, “under”, “bottom”, “south”* between: “between”, “among”, “middle”* additional superlative: “upper-left”, “upper-right”, “lower-left”, “lower-right” Note that (1) The “additional superlative” category serves as a supplement for words that have not been covered.
In the given context, words such as “left” and “above” can also represent superlative relations. (2) This vocabulary can easily be extended according to the needs of the dataset. § SUPERLATIVE PREDEFINED POSITIONS For each object associated with a superlative relation, the relative bounding box 𝐛 = (x, y, w, h) is assigned as follows: * left: (0.20, 0.50, 0.33, 1.00)* right: (0.80, 0.50, 0.33, 1.00)* above: (0.50, 0.20, 1.00, 0.33)* below: (0.50, 0.80, 1.00, 0.33)* middle: (0.50, 0.50, 0.50, 0.50)* upper-left: (0.25, 0.25, 0.50, 0.50)* upper-right: (0.75, 0.25, 0.50, 0.50)* lower-left: (0.25, 0.75, 0.50, 0.50)* lower-right: (0.75, 0.75, 0.50, 0.50) § AN EXAMPLE OF TARGET LAYOUT GENERATION To facilitate understanding of how the system parses the prompt and generates the target bounding box for each object with a set of heuristic rules, we show an example in <ref> to illustrate it more clearly. Specifically, the process can be roughly divided into four steps: 1. Semantic parsing. The system parses the superlative tuples and relative triplets from the prompt. The relative triplets can be organized as a semantic tree, with nodes as objects and edges as spatial relations. 2. Assign the superlative boxes. Given each superlative tuple, the system assigns a predefined target box to the object according to its superlative position term. 3. Traverse the semantic tree for a global view. By traversing the tree, the system organizes the global layout of the remaining objects. 4. Assign the relative boxes. The system allocates the remaining space to the objects associated with relative relations. § BENCHMARK DETAILS Overview. Our proposed benchmark focuses on superlative relations. Specifically, to sample an evaluation prompt, we first determine the number of objects in the prompt. Each prompt contains a minimum of one object and a maximum of four objects. Then, we sample the superlative relation for each object that has not yet been determined, where the predefined superlative relation set is the same as shown in <ref>. Finally, we sample the objects present in the current prompt from a predefined set of objects. To better evaluate the impact of layout requirements on image generation, a sampled object set can be shared between prompts with different superlative relations. As a result, the benchmark contains 203 different prompts. The number of prompts containing 1/2/3/4 objects is 36/96/56/15. The number of occurrences of each superlative relation is 55/55/49/49/56/48/48/48/48. The benchmark will be publicly available. Object set. The predefined object set consists of 28 different items as follows:* single-word: “backpack”, “flower”, “crown”, “towel”, “scarf”, “beach”, “clouds”, “tree”, “table”, “book”, “handbag”, “bus”, “bicycle”, “car”, “motorcycle”, “cat”, “dog”, “horse”* phrase: “chocolate cookie”, “strawberry cake”, “vanilla ice cream cone”* with color: “yellow sunflower”, “gray mountain”, “white daisy”, “pink cupcake”, “red tomato”, “golden saxophone”, “green broccoli” § DETAILED ACCURACIES ON THE PROPOSED BENCHMARK In <ref>, we report the accuracies for different numbers of objects in the prompt. It can be observed that our system outperforms the baselines in all cases. Furthermore, despite the simplicity of the case with a single object, the accuracies do not show a clear downward trend as the number of objects increases.
The difficulty of accurately representing the layout is also influenced by the specific layout requirements of the objects and their context. § ADDITIONAL RESULTS §.§ Latency Comparison for Layout Generation Since our system presents a new solution for generating the target layout, we provide a brief discussion of the added latency here. Existing layout-to-image works <cit.> commonly rely on GPT-4 <cit.>; however, each invocation of the API requires a response time of ~3 seconds. In contrast, thanks to an industrial-strength parsing library, our proposed solution requires an average of only 0.006 seconds for each prompt and does not require a GPU. This significantly improves the user experience for real-time text-to-image generators. §.§ Comparison with Training-Based Method LayoutDiffusion <cit.> is a representative approach that trains auxiliary modules to embed the layout information into intermediate features for control. However, it is constrained to fixed categories, thereby rendering it unsuitable for various datasets including DrawBench. To compare our system with LayoutDiffusion, we select prompts from our benchmark that only include objects valid for LayoutDiffusion. As observed in <ref>, this category limitation significantly reduces the generation quality of LayoutDiffusion, resulting in its performance being far inferior to our system. §.§ Additional Qualitative Comparison Results To show the effectiveness of our system, we illustrate additional qualitative results in <ref>. | http://arxiv.org/abs/2311.15773v2 | {
"authors": [
"Biao Gong",
"Siteng Huang",
"Yutong Feng",
"Shiwei Zhang",
"Yuyuan Li",
"Yu Liu"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127124833",
"title": "Check, Locate, Rectify: A Training-Free Layout Calibration System for Text-to-Image Generation"
} |
§ INTRODUCTION Decoherence is a fundamental property of quantum systems subject to external noise. It is important from both a fundamental and a practical point of view. Fundamentally, decoherence is often considered the phenomenon responsible for the emergence of the description of macroscopic physical reality based on classical mechanics and classical (Kolmogorov) probability theory from the quantum description. Quantum coherence is a distinctive property of the quantum world and the decay of coherence is at least one of the factors of the quantum-to-classical transition. From the viewpoint of applications, decoherence is one of the obstacles for the construction of a quantum computer, which requires preservation of coherence during the computation. Mathematically, quantum systems subject to external noise are studied within the framework of the theory of open quantum systems, in which models of systems interacting with a thermal bath or another environment are considered. In this paper, we will study a particular well-known exactly solvable model of pure decoherence <cit.>. Despite the fact that it is well known and attracts interest in a number of recent studies (especially with respect to the question of Markovianity or non-Markovianity of the dynamics) <cit.>, precise asymptotic estimations of the decoherence rate depending on the spectral density (specifying the system-bath interaction) are absent. The purpose of this paper is to fill this gap. Then we discuss some consequences of these results, which can be important for the present active discussions about non-Markovian open quantum dynamics. There is a hierarchy of mathematical definitions related to our intuitive understanding of “Markovianity” or the “absence of memory of the bath” in the context of open quantum systems <cit.>. According to one approach, Markovianity is associated with the semigroup property of the dynamics. A generator of this semigroup has the well-known Gorini-Kossakowski-Sudarshan-Lindblad form (GKSL; we consider only finite-dimensional systems). So, in this case, the dynamics of the reduced density operator of the system is rather simple and can be described by a GKSL quantum master equation, which, in the finite-dimensional case, is simply a system of linear ordinary differential equations with constant coefficients. Definitions of Markovianity related to certain properties of the dynamics of the reduced density operator of the system, like the decrease of the distinguishability of the evolving quantum states or CP-divisibility (“CP” means “completely positive”), cover wider ranges of models and include certain classes of time-dependent GKSL generators <cit.>. The most general (and the strongest) definition of Markovianity, which generalizes the corresponding definition from the classical random process theory, is related to the quantum regression formula <cit.>. One of the popular ways of dealing with non-Markovian quantum dynamics is to embed an open quantum system into a larger system whose dynamics is Markovian in the sense that its dynamics is described by a quantum dynamical semigroup with a (time-independent) GKSL generator. Of course, the original system-bath unitary dynamics is obviously Markovian. But the bath typically contains a continuum of oscillatory modes.
In the described approach, we try to embed our system into a larger system, which is still finite-dimensional or, at least, has a discrete energy spectrum, so that it is possible to cut high energy levels and approximately reduce its (Markovian) dynamics to a finite-dimensional (also Markovian) dynamics. The methods of pseudomodes <cit.> and the reaction (collective) coordinate <cit.> are examples of this approach. The assumption of a possibility of a Markovian embedding is often used in data-driven prediction of the dynamics of open quantum systems <cit.>. The method of Markovian embedding is schematically depicted in Fig. <ref>. However, we show that, for some bath spectral densities, such an embedding is impossible. Namely, in the case of a positive temperature of the bath, the Markovian embedding is possible only in the case of an Ohmic spectral density (i.e., a very specific asymptotic behaviour in a vicinity of zero). But even in this case, there can be factors not described by Markovian embeddings. If the spectral density is either sub- or super-Ohmic (which are common in physics), such an embedding is impossible. Vice versa, for the Ohmic spectral density (again in the case of a positive temperature), we can observe the asymptotic Markovianity in the strongest sense related to the quantum regression formula. That is, even if the dynamics is non-Markovian, it becomes Markovian at large times, which simplifies its description. These results can be considered a development of the results by D. Lonigro and D. Chruściński <cit.>. Namely, they show that the quantum regression formula for the considered pure decoherence model is satisfied exactly if the bath spectral density is flat for the whole real line of positive and negative frequencies of the bath. However, these conditions (especially negative frequencies) are unphysical. The authors, however, note that the flat spectral density can be a reasonable approximation in realistic scenarios. Here we confirm this hypothesis: the asymptotic satisfiability of the quantum regression formula is explained by the fact that, for large times, only the behaviour of the spectral density in a vicinity of zero matters, so it can be considered to be approximately flat on the whole real line. The rest of the text is organized as follows. In Sec. <ref>, we describe the model. In Sec. <ref>, we prove the main theorem about the long-term rates of decoherence depending on the asymptotic behaviour of the spectral density in a vicinity of zero. In Sec. <ref>, we illustrate the theorem by two particular widely used spectral densities. In Sec. <ref>, we compare the obtained results with various rigorous results about the Davies GKSL generator of a quantum dynamical semigroup in the weak-coupling regime <cit.>. Sec. <ref> is devoted to the impossibility of the Markovian embedding in the cases of sub- or super-Ohmic spectral densities. In Sec. <ref>, we prove asymptotic Markovianity (in the sense of the quantum regression formula) for Ohmic spectral densities. In all these sections we consider the realistic case of a positive temperature of the bath. However, the zero-temperature bath is often used as a reasonable approximation when the temperature is low. The results can be easily transferred to this case as well, which is discussed in Sec. <ref>. § PROBLEM STATEMENT Let us consider a two-dimensional quantum system (a qubit) interacting with a finite number N of quantum harmonic oscillators. Mathematically speaking, we work in the Hilbert space ℋ_S⊗ℋ_B=ℂ^2⊗(ℓ^2)^⊗ N.
Consider the following Hamiltonian (self-adjoint operator) acting in this space: H=H_S+H_B+H_I= ω_0/2σ_z+∑_k=1^Nω_ka_k^† a_k+ σ_z/2⊗∑_k=1^N (g_ka_k^†+g_k^*a_k), where σ_z= [ 1 0; 0 -1 ] = |1⟩⟨1|-|0⟩⟨0|, |1⟩=[ 1; 0 ], |0⟩=[ 0; 1 ] are one of the Pauli matrices and the basis elements (the standard basis) of ℂ^2, respectively. Further, a_k^† and a_k are the bosonic creation and annihilation operators, ω_k>0, ω_0 is a real number and g_k are complex numbers. Consider the following initial density operator (quantum state): ρ(0)=ρ_S(0)⊗ρ_B,β≡ρ_S(0)⊗ Z_B^-1e^-β H_B, where ρ_S(0) is the initial density operator of the system and ρ_B,β is the thermal (Gibbs) state of the bath with β being the inverse temperature, Z_B= Tr e^-β H_B. The joint system-bath state at time t is given by ρ(t)=e^-itHρ(0)e^itH. We are interested in the reduced density operator of the system: ρ_S(t)=Tr_Bρ(t)=Tr_B[e^-itHρ(0)e^itH], where Tr_B denotes the partial trace over the space ℋ_B. Consider the interaction representation: ϱ(t)= e^it H_Sρ_S(t)e^-itH_S= [ ϱ_11(t) ϱ_10(t); ϱ_01(t) ϱ_00(t) ], where ϱ_jk are the matrix elements of the operator ϱ in the standard basis {|1⟩,|0⟩}. Since [H,σ_z]=0, ϱ_11(t)=ϱ_11(0) and ϱ_00(t)=ϱ_00(0). Also, ϱ_10(t)= ϱ_10(0) Tr_B[e^-itH_1ρ_B,βe^itH_0], where H_j=∑_k=1^Nω_ka_k^† a_k+ (-1)^j-1/2∑_k=1^N (g_ka_k^†+g_k^*a_k), j=0,1. One can show <cit.> that Tr_B[e^-itH_1ρ_B,βe^itH_0] = e^-Γ(t), where Γ(t)= ∑_k=1^N|g_k|^2 coth(βω_k/2) 1-cosω_k t/ω_k^2=∫_0^∞ J(ω) coth(βω/2) 1-cosω t/ω^2dω= ∫_0^∞ J_eff(ω) 1-cosω t/ω^2dω. Here J(ω)=∑_k=1^N |g_k|^2δ(ω-ω_k) is the spectral density function of the bath and J_eff(ω)= J(ω) coth(βω/2) is sometimes called the effective spectral density function. The diagonal elements of the density matrix ϱ_11(t) and ϱ_00(t) are called the populations and the off-diagonal elements ϱ_10(t) and ϱ_01(t)=ϱ_10(t)^* are called the coherences. The decrease of the off-diagonal elements is called decoherence. It is clear that Γ(t)≥Γ(0)=0 and, hence, |ϱ_10(t)|≤|ϱ_10(0)|. If there are finitely many oscillators N in the bath, then Γ(t) and ϱ(t) are periodic or almost periodic functions of t. In the thermodynamic limit of the bath N→∞, the index k runs over a continuous set and we assume that J(ω) tends to an integrable function on the half-line [0,∞). Note that there is a mathematically rigorous way to start directly from a continuum of oscillators rather than to perform a thermodynamic limit of a finite number of oscillators <cit.>. However, for our purposes, this mathematically simpler approach with the thermodynamic limit will suffice. We will be interested in the behaviour of Γ(t) given by the second (or third) line of Eq. (<ref>), where J(ω) is an integrable function on [0,∞), for large times t→∞. If Γ(t)→∞ and, thus, ϱ_10(t)→0 as t→∞, then we will say that full decoherence occurs. If Γ(t) is a bounded function of t, so that |ϱ_10(t)|≥ϱ_10^*>0, then we will say that partial decoherence occurs. The partial decoherence for certain classes of spectral densities is a known theoretical prediction <cit.>. For completeness, we include these results in Theorem <ref>, but mainly we are interested in the case of full decoherence. Then the question we are interested in is the following: What is the asymptotic rate of convergence of ϱ_10(t) to zero for large t depending on the properties of J(ω)? In book <cit.> and also in Ref. <cit.>, only particular forms of J(ω) are considered.
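Before deriving the general asymptotics, the decoherence exponent Γ(t) can be probed numerically. The following is a minimal sketch, assuming the Ohmic density J(ω)=ω e^-ω/Ω with the illustrative values β=1 and Ω=5 (these values and the integration grid are our assumptions, not taken from the text); for such a density, Γ(t) should grow linearly in t up to a logarithmic correction, in agreement with the theorem below.

```python
# Numerical probe of Gamma(t) = int_0^infty J(w) coth(beta*w/2) (1 - cos(w*t)) / w^2 dw
# for the illustrative Ohmic density J(w) = w * exp(-w / W); all parameter values
# and the integration grid below are our assumptions.
import numpy as np

beta = 1.0   # inverse temperature (assumed value)
W = 5.0      # cutoff frequency Omega (assumed value)

def gamma_exponent(t, w_max=200.0, n=2_000_000):
    """Riemann-sum approximation of Gamma(t); the integrand is regular at w = 0."""
    w = np.linspace(1e-8, w_max, n)
    j_eff = w * np.exp(-w / W) / np.tanh(beta * w / 2.0)  # J_eff = J(w) coth(beta*w/2)
    integrand = j_eff * (1.0 - np.cos(w * t)) / w**2
    return float(np.sum(integrand) * (w[1] - w[0]))

gamma0 = np.pi / beta      # pi * beta^{-1} * lim_{w->0} J(w)/w (here the limit is 1)
alpha = -2.0 / (beta * W)  # 2 * beta^{-1} * (J(w)/w)'(0) = -2/(beta*W)
for t in (5.0, 10.0, 20.0, 40.0):
    g = gamma_exponent(t)
    # Gamma(t) - Gamma_0*t - alpha*ln(t) should approach a constant C
    print(t, g, g - gamma0 * t - alpha * np.log(t))
```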
Here we obtain a general answer. The crucial point is the asymptotic behaviour of J(ω) for small ω: pure decoherence is caused by interaction with the low frequencies of the bath. We consider only the case J(ω)∼ cω^γ as ω→+0 (i.e. lim_ω→+0J(ω)ω^-γ=c), where c,γ>0. If 0<γ<1, γ=1 or γ>1, the spectral density is called sub-Ohmic, Ohmic or super-Ohmic, respectively. § MAIN THEOREM Let J(ω) be an integrable function on [0,∞). * If J(ω)∼ cω^2+δ as ω→+0, where δ>0 and c>0, then lim_t→∞Γ(t)= ∫_0^∞J(ω)/ω^2 coth(βω/2)dω≡Γ_∞, hence, the decoherence is partial: ϱ_10(t)=ϱ_10(0)B(t)e^-Γ_∞, where B(t) is a bounded function converging to 1 as t→∞. * If J(ω)∼ cω as ω→+0, where c>0, and there exists J'_eff(0)=lim_ω→+0J'_eff(ω), then Γ(t)=Γ_0t+αln t+C+o(1), t→∞, where C is a constant, hence, the decoherence is exponential: ϱ_10(t)=ϱ_10(0)B(t) e^-Γ_0 t/t^α. Here Γ_0 =π/2 J_eff(0)= πβ^-1lim_ω→+0J(ω)/ω, α =J'_eff(0)=2β^-1lim_ω→+0(J(ω)/ω)' and B(t) is a bounded function converging to e^-C as t→∞. * If J(ω)∼ cω^2 as ω→+0, where c>0, and there exists J'_eff(0)=lim_ω→+0J'_eff(ω), then the decoherence obeys a power law: Γ(t)=αln t+C+o(1), t→∞, hence, ϱ_10(t)=ϱ_10(0)B(t)/t^α, where, again, B(t) is a bounded function converging to e^-C as t→∞. * If J(ω)=ω^1+δG(ω) as ω→+0, where -1<δ<1, G(0)>0, and there exists a derivative of the function ω^-δJ_eff(ω) at ω=0 (which, as before, can be defined in terms of the limit ω→+0), then Γ(t)=At^1-δ+C+o(1), t→∞, if 0<δ<1, and Γ(t)=At^1-δ+ 2β^-1G'(0) O(t^-δln t), t→∞, if -1<δ≤0, where A= 2β^-1G(0) ∫_0^∞1-cosυ/υ^2-δ dυ and C is a constant. Thus, the decoherence is subexponential for δ>0 and superexponential for δ<0: ϱ_10(t)=ϱ_10(0)B(t)e^-At^1-δ if 0<δ<1, where B(t) is a bounded function converging to e^-C as t→∞, and ϱ_10(t)=ϱ_10(0) e^-At^1-δ+2β^-1G'(0) O(t^-δln t), t→∞, if -1<δ≤0. The results of Theorem <ref> can be summarized as follows. Let J(ω)∼ cω^γ (or, equivalently, J_eff(ω)∼ (2c/β)ω^γ-1) as ω→+0 for some c>0. * If 0<γ<1 (sub-Ohmic spectral density), then the decoherence is full and its rate is superexponential. * If γ=1 (Ohmic spectral density), then the decoherence is full and its rate is exponential. * If 1<γ<2 (super-Ohmic spectral density), then the decoherence is full and its rate is subexponential, but faster than any degree of t. * If γ=2 (super-Ohmic spectral density), then the decoherence is full and obeys a power law. * If γ>2 (super-Ohmic spectral density), then the decoherence is partial. Also it can be noticed that all the decoherence constants in Theorem <ref> are proportional to the temperature β^-1, which corresponds to the physical intuition that an environment with a higher temperature causes faster decoherence. Let us consider all the cases. Case 1. In this case, J_eff(ω)∼ω^1+δ, δ>0, and J_eff(ω)/ω^2∼ω^-(1-δ). That is, J_eff(ω)/ω^2 is an integrable function. Then, by the Riemann-Lebesgue lemma, ∫_0^∞J_eff(ω)/ω^2cosω t dω tends to zero as t→∞, from which Eq. (<ref>) follows. Note that if we impose additional conditions on the degree of smoothness of J_eff(ω) to provide the possibility of a certain number of iterative integrations by parts, then the long-term rate of convergence of Γ(t) to Γ_∞ can be estimated. Cases 2 and 3. In these cases, we can express Γ(t)=∫_0^ω_cJ_eff(ω)-J_eff(0)-J'_eff(0)ω/ω^2 (1-cosω t) dω+J_eff(0)∫_0^ω_c1-cosω t/ω^2 dω+J'_eff(0)∫_0^ω_c1-cosω t/ω dω+∫_ω_c^∞ J_eff(ω)1-cosω t/ω^2 dω, where ω_c>0 (“c” from “cutoff”) is arbitrary. Consider all the terms.
The first factor in the integrand of the first term is a locally integrable function and, hence, by the Riemann-Lebesgue lemma, the first term tends to a constant. The same is true for the last term: J_eff(ω)/ω^2 is integrable on Borel sets that do not include zero. Consider the second term: ∫_0^ω_c1-cosω t/ω^2dω = ∫_0^∞1-cosω t/ω^2dω -∫_ω_c^∞1-cosω t/ω^2dω= t∫_0^∞1-cosυ/υ^2 dυ -1/ω_c +∫_ω_c^∞cosω t/ω^2dω= π/2t -1/ω_c +∫_ω_c^∞cosω t/ω^2dω. Here, in the second equality, we have performed the substitution ω t=υ. The last integral converges to zero as t→∞ by the Riemann-Lebesgue lemma. In the third term of (<ref>), we perform the same substitution ω t=υ and then use a known formula for the integral of (1-cosυ)/υ (see, e.g., Ref. <cit.>): ∫_0^ω_c1-cosω t/ω dω = ∫_0^ω_ct1-cosυ/υ dυ= lnω_ct+γ- Ci(ω_ct), where γ is the Euler-Mascheroni constant and Ci(t)=-∫_t^∞cos x/x dx is the cosine integral, which is a bounded function converging to zero as t→∞. The expressions for the constants Γ_0 and α follow from the previous calculations and the following observations: J_eff(0) = lim_ω→+0J(ω) coth(βω/2) = 2β^-1lim_ω→+0J(ω)/ω, J'_eff(0) = lim_ω→+0(J(ω) coth(βω/2))' = 2β^-1lim_ω→+0( J(ω)/ω)'. We have proved cases 2 and 3 of the theorem. Note that, if J'_eff(0)=0 (which is the case, e.g., for the Drude-Lorentz spectral density, see below), then we do not need to introduce a cutoff frequency ω_c since both the first and the second integrals in Eq. (<ref>) are convergent even for an infinite upper limit of integration in this case. Case 4. In this case, J'_eff(0) is infinite unless δ=0 (i.e., case 2). However, define G̃(ω)=ω coth(βω/2)G(ω), so that J_eff(ω)=ω^δG̃(ω). We can apply an expansion analogous to Eq. (<ref>) if we apply Taylor's formula for G̃: Γ(t)=∫_0^ω_cG̃(ω)-G̃(0)-G̃'(0)ω/ω^2-δ (1-cosω t) dω+G̃(0)∫_0^ω_c1-cosω t/ω^2-δ dω+G̃'(0)∫_0^ω_c1-cosω t/ω^1-δ dω+∫_ω_c^∞G̃(ω)1-cosω t/ω^2-δdω. Applying the same arguments as for cases 2 and 3, we conclude that the first and the last terms tend to constants. The second term is analysed analogously to that of case 2: ∫_0^ω_c1-cosω t/ω^2-δdω = ∫_0^∞1-cosω t/ω^2-δdω -∫_ω_c^∞1-cosω t/ω^2-δdω= t^1-δ∫_0^∞1-cosυ/υ^2-δ dυ -1/(1-δ)ω_c^1-δ +∫_ω_c^∞cosω t/ω^2-δdω. The last integral converges to zero due to the Riemann-Lebesgue lemma. Consider the integral in the third term in Eq. (<ref>). It can be calculated explicitly in terms of the generalized hypergeometric functions, but, for our purposes, the following estimations suffice. If 0<δ<1, then, using the substitution υ=ω t again, we can proceed as follows: ∫_0^ω_c1-cosω t/ω^1-δdω = ω_c^δ/δ- t^-δ∫_0^ω_ctcosυ/υ^1-δdυ. The last integral converges to a constant (due to Dirichlet's test), while t^-δ tends to zero as t→∞. If -1<δ≤0 (actually, the case δ=0 corresponds to case 2, but we include it also here), then we have ∫_0^ω_c1-cosω t/ω^1-δdω = t^-δ∫_0^ω_ct1-cosυ/υ^1-δdυ≤ t^-δ∫_0^1 1-cosυ/υ^1-δdυ + t^-δ∫_1^ω_ct1-cosυ/υdυ. The first integral here is a constant. The second integral was analysed before: its principal term is ln t as t→∞. Again, the expressions for the constants follow from these calculations and from the following: G̃(0) = lim_ω→+0 G(ω)ω coth(βω/2) = 2β^-1G(0), G̃'(0) = lim_ω→+0( G(ω)ω coth(βω/2))' = 2β^-1G'(0). This finishes the proof of the last case 4 of the theorem. § TWO EXAMPLES OF THE OHMIC SPECTRAL DENSITIES Let us consider two popular choices of Ohmic spectral densities. The first choice is the exponential cutoff: J(ω)=ω e^-ω/Ω, where Ω is the characteristic frequency of the bath.
Here Γ_0=πβ^-1 and α=-2(βΩ)^-1, hence, ϱ_10(t)=ϱ_10(0)B(t) t^2(βΩ)^-1 e^-πβ^-1 t. The second choice is the Drude-Lorentz spectral density: J(ω)=ωΩ^2/ω^2+Ω^2. Here Γ_0=πβ^-1 and α=0, hence, ϱ_10(t)=ϱ_10(0)B(t)e^-πβ^-1 t. Thus, in the latter case, the decoherence is slightly faster due to the different values of J'_eff(0). If Ω is large, then the difference is negligible, but if Ω is small, then the difference can be significant. Generally, a large value of |J'_eff(0)| (which corresponds to either a sharp peak or a sharp hollow at zero) can significantly modify the rate of decoherence. Also it is interesting that the long-term dynamics of the coherences is determined by only two values, lim_ω→+0J(ω)/ω and lim_ω→+0[J(ω)/ω]', and does not depend on further details of the function J(ω). § DISCUSSION OF THE MARKOVIAN (WEAK-COUPLING) LIMIT In the theory of the weak system-bath coupling limit, one considers a Hamiltonian where the interaction Hamiltonian is multiplied by a small dimensionless parameter λ: H=H_S+H_B+λ H_I, λ→0. One can derive the Davies quantum dynamical semigroup (or, equivalently, the Davies Markovian quantum master equation) for the reduced density operator of the system <cit.>. It predicts that, for the considered model, only J_eff(0) matters: if this value is positive (Ohmic spectral density), then exponential full decoherence takes place and the rate of decoherence is proportional to λ^2. In the super-Ohmic case, J_eff(0)=0 and neither full nor partial decoherence occurs. Let us examine this from the point of view of the presented analysis. We should replace J(ω) and, thus, Γ(t) by λ^2 J(ω) and λ^2Γ(t), respectively. Then, if J(ω)∼ cω^2+δ as ω→+0 for some c>0, the partial decoherence (<ref>) and (<ref>) is negligible in this limit. Indeed, Γ(t) is a bounded function in this case and λ^2Γ(t) tends to zero uniformly on [0,∞), hence, ϱ_10(t) uniformly tends to ϱ_10(0). If the spectral density is Ohmic, then the Davies quantum dynamical semigroup correctly predicts the exponent in (<ref>). In particular, the decoherence rate is indeed proportional to λ^2. But the Markovian master equation does not capture the power term t^α. From the point of view of the formal limit λ→0, the influence of the power-law terms disappears in the weak-coupling limit. This can be seen as follows: if τ=(λ^2Γ_0)^-1 is the characteristic time of decay of the exponential term, then τ^λ^2α=(λ^2Γ_0)^-λ^2α→1 as λ→0. So, in this limit, the power-law terms significantly differ from 1 only at times where the coherence is already suppressed by the exponential term. This formally justifies the Davies quantum dynamical semigroup. However, if we consider a concrete physical system, then λ can be small, but is a constant. Then, a sharp peak or hollow at zero (a large value of |J'_eff(0)|) can significantly modify the decoherence rate predicted by the Davies master equation. This is a particular case of the known fact that the Markovian approximation does not work in the case of rapid changes of the spectral density near the Bohr frequencies (differences between the energy levels). Here only the zero Bohr frequency is important. In the case of a sub-Ohmic spectral density J(ω)∼ cω^1+δ as ω→+0, where -1<δ<0 and c>0, the decoherence is superexponential and the Davies quantum master equation cannot describe it since J_eff(ω)→∞ as ω→+0 (which reflects a superexponential law of decoherence). The “mild” super-Ohmic case J(ω)∼ cω^1+δ as ω→+0 for 0<δ≤1 and c>0 requires a bit more attention.
In this case, the decoherence is slower than exponential, which also cannot be captured by the Markovian master equation: the full decoherence takes place, but the Markovian master equation predicts no decoherence. Namely, let us denote by ϱ̃_10(t) the coherence predicted by the Davies master equation (in contrast to the exact value ϱ_10(t)). In our case, we have ϱ̃_10(t)=ϱ_10(0) and ϱ_10(t)→0 as t→∞. Nevertheless, the Davies theorem is true, which says (for this particular simple model) <cit.> that lim_λ→0sup_0≤ t≤ T/λ^2 |ϱ_10(t)-ϱ̃_10(t)|=0 for an arbitrary finite T. Indeed, if one of the asymptotic formulas (<ref>) or (<ref>) is valid, then λ^2Γ(t) tends to zero uniformly on [0,T/λ^2] for any T>0. However, increasing T requires making λ smaller and smaller to maintain a constant level of error. In Ref. <cit.>, using the resonance theory, it is proved that, under certain additional conditions (in comparison with the Davies theorem), the norm of the difference between the exact reduced density operator of the system and that given by the Davies quantum dynamical semigroup is bounded by Cλ^2 for some C uniformly on the whole time half-line t∈[0,∞). Obviously, this is not true in our case. This is because the mentioned additional conditions are not satisfied in this model. Namely, one of the conditions (the so-called Fermi Golden Rule condition) says exactly that λ^-2 is the only characteristic time scale of the dissipative dynamics, which is violated in the considered case. So, this comparison can serve as an example showing that the additional conditions in the mentioned theorem, strengthening the Davies theorem, are important (“physical”) and not merely “technical”. The inclusion of a term proportional to σ_x into the system Hamiltonian (i.e., transitions between the energy levels) restores the time scale λ^-2. In this case, the pure decoherence is accompanied by exponential decoherence due to quantum transitions, which is described within the Markovian master equations. If the non-exponential pure decoherence is suppressed by the exponential decoherence due to transitions, then the error of the Markovian master equations is not large. Note also that a quantum master equation for the case where the system-bath interaction is not weak, but an additional term proportional to σ_x in the system Hamiltonian can be treated as a small perturbation, was proposed <cit.>. This is the so-called strong-decoherence regime: a strong pure decoherence is accompanied by slow transitions between the energy levels in the system. § PROBLEM OF THE MARKOVIAN EMBEDDING The case of non-exponential decoherence is interesting for one more problem. A popular way of dealing with non-Markovian (non-semigroup) dynamics of an open quantum system is to embed it into a larger system whose dynamics is Markovian (semigroup); see Fig. <ref>. Since the exact system-bath unitary dynamics is already, obviously, Markovian, an additional condition is usually assumed: the enlarged system should be finite-dimensional or, at least, have a discrete energy spectrum (e.g., a finite-dimensional system plus a finite number of harmonic oscillators), so that it is possible to cut high energy levels (depending on the temperature of the bath) and to consider a finite-dimensional enlarged system.
The usual physical interpretation of the extension of the system is that only a part of the bath interacts strongly with the system, while the residual bath interacts with both the system and the separated part of the bath only weakly. Namely, let S be a system with a corresponding finite-dimensional Hilbert space ℋ_S (in our case, ℋ_S=ℂ^2). Its open dynamics is given by a family of completely positive and trace-preserving maps {Φ_t}_t≥0, so that ρ_S(t)=Φ_tρ_S(0), where ρ_S(t) is the density operator of the system. We will speak about a Markovian embedding if ρ_S(t)=Tr_S'ρ_SS'(t), where ρ_SS'(t) is the density operator of the enlarged system with the (also finite-dimensional) Hilbert space ℋ_S⊗ℋ_S' and ρ_SS'(t) satisfies a master equation in the GKSL form: ρ̇_SS'(t)=ℒρ_SS'(t)= -i[H_SS',ρ_SS'(t)] +∑_k=1^K( L_kρ_SS'(t)L_k^† -1/2{L_k^† L_k,ρ_SS'(t)}). Here H_SS' is a self-adjoint operator (a Hamiltonian) in ℋ_S⊗ℋ_S', L_k are linear operators in ℋ_S⊗ℋ_S', [·,·] and {·,·} are the commutator and anti-commutator, respectively. In other words, ρ_SS'(t)=e^tℒρ_SS'(0), where e^tℒ is the quantum dynamical semigroup with the generator ℒ <cit.>. However, in this case, according to the general theory of systems of ordinary differential equations and also general operator theory, ρ_SS'(t) and, thus, ρ_S(t), is a linear combination of terms of the form t^n_je^-l_jt, where -l_j are the eigenvalues of ℒ and n_j are non-negative integers. Moreover, from the positivity of e^tℒ, we have Re l_j≥0 and n_j=0 whenever Re l_j=0. So, such a Markovian embedding can describe only the exponential relaxation of ρ_S(t) to a stationary state (up to the factors t^n_j). In contrast, as we saw, the model of pure decoherence (<ref>) allows super-exponential, sub-exponential and power-law decoherence, like e^-At√(t), e^-A√(t) and t^-α, respectively. Such dynamics cannot be reproduced by a Markovian embedding. Even in the case of an Ohmic spectral density, the Markovian embedding cannot reproduce the power-law factor t^-α in Eq. (<ref>) [like in example (<ref>)] if α is not a negative integer. Thus, in this particular model, the Markovian embedding is not excluded only for a very specific class of spectral densities. It should be noted that non-exponential decoherence is not exotic. For example, the decoherence law e^-At^2 is observed in superconducting qubits as a consequence of flicker noise (1/f-noise), where the effective spectral density behaves as J_eff(ω)∼ c/ω as ω→+0 <cit.>. With such a spectral density, integral (<ref>) diverges, but we can consider a regularization: J=ω^εG(ω), where ε>0 is small and G(0)>0, so that J_eff∼ 2β^-1G(0)/ω^1-ε as ω→+0. Then, according to Theorem <ref> (namely, Eq. (<ref>)), ϱ_10(t)=ϱ_10(0) e^-At^2-ε+2β^-1G'(0)O(t^1-εln t), t→∞. In Ref. <cit.>, it is shown that the inclusion of classical noise into a Markovian embedding can reproduce the non-exponential decoherence. In other words, the inclusion of classical noise (which can be non-Markovian, e.g., the aforementioned flicker noise) significantly extends the power of the method of Markovian embedding. Two more comments should be made here: * We still can hope to reproduce the dynamics of an open quantum system on a finite time interval by a Markovian embedding. But this is already not about a physically meaningful representation of the model (the aforementioned separation of the strongly interacting part of the bath), but rather about a merely mathematical approximation of the time dependence.
* A Markovian embedding can be used not only for the approximation of the dynamics, but also for the approximation of the equilibrium state <cit.>, which is non-Gibbsian if the system-bath coupling is not negligibly weak <cit.>. Here we do not consider this purpose of Markovian embeddings and write only about the problem of approximation of the dynamics. § ASYMPTOTIC MARKOVIANITY In the previous section, we have shown that the Markovian embedding (without additional means like classical noise) is impossible if the spectral density is not Ohmic and, thus, the decoherence is non-exponential. Let us consider now the case of the Ohmic spectral density and the exponential decoherence, i.e., case 2 of Theorem <ref>. Here, vice versa, the asymptotic Markovianity in the following sense takes place. The exact solution ϱ(t) (<ref>) with the function Γ(t) (<ref>) obviously satisfies the following equation with a time-dependent GKSL generator: ϱ̇= γ(t)/2(σ_zϱσ_z-ϱ), or, if we return to the Schrödinger picture, ρ̇_S=-i[H_S,ρ_S] +γ(t)/2(σ_zρ_Sσ_z-ρ_S), where γ(t)=Γ̇(t). In the case J(ω)∼ cω as ω→+0, proceeding in the same manner as in Eq. (<ref>), we arrive at γ(t) =Γ̇(t) =∫_0^∞J_eff(ω)sinω t/ω dω=∫_0^ω_cJ_eff(ω)-J_eff(0)/ωsinω t dω +J_eff(0)∫_0^ω_csinω t/ω dω +∫_ω_c^∞ J_eff(ω)sinω t/ω dω. Applying the Riemann-Lebesgue lemma to the first and third terms and the substitution ω t=υ together with the Dirichlet integral ∫_0^∞(sinυ/υ)dυ=π/2 to the second one, we obtain γ(t)→Γ_0= const [see Eq. (<ref>)] as t→∞ and, thus, for large times, we obtain a GKSL generator of a quantum dynamical semigroup. Of course, this can be seen also directly from Eqs. (<ref>) and (<ref>) since the linear term in Γ(t) is principal for large times. This asymptotic Markovianity is an interesting phenomenon. Usually, we expect the semigroup dynamics in the weak-coupling limit (or another limit). However, here we have obtained that, for large times, the semigroup property is satisfied for any system-bath coupling strength. A peculiarity of the weak-coupling limit here is that the transient non-Markovian stage of the dynamics is infinitesimal and can be neglected in the principal order approximation, i.e., the second order in λ. We saw it in Sec. <ref>. Note that, in Ref. <cit.>, it is shown that the effects of the transient dynamics should be included in the higher-order approximations. However, as we mentioned in the introduction, the semigroup property is not the most general definition of quantum Markovianity. The definition generalizing the corresponding definition from the classical theory of random processes is based on the quantum regression formula <cit.>. Let H, as before, be a system-bath Hamiltonian and let the initial system-bath state be the product state ρ_S⊗ρ_B,β, where ρ_S is arbitrary and ρ_B,β is the thermal state of the bath [see Eq. (<ref>)]. One says that the open quantum system satisfies the quantum regression formula if, for every ρ_S, every n, all bounded operators X_1,…,X_n,Y_1,…,Y_n in the Hilbert space of the system and all t_1,…, t_n≥0, the following equality holds: Tr_SB[ ℰ̃_n𝒰_t_n…ℰ̃_1𝒰_t_1 (ρ_S⊗ρ_B,β) ] = Tr_S[ ℰ_nΦ_t_n…ℰ_1Φ_t_1 (ρ_S) ], where 𝒰_t=e^-itH(·)e^itH, Φ_t=Tr_B [𝒰_t( · ⊗ρ_B,β)], and ℰ̃_k=(X_k⊗ I_B)(·)(Y_k⊗ I_B), ℰ_k=X_k(·)Y_k with I_B being the identity operator in the bath Hilbert space. The quantum regression formula says that we can use the family of quantum dynamical maps {Φ_t}_t≥0 not only for the description of the dynamics of the reduced density operator of the system and the prediction of the average values of observables, but also for multitime correlation functions.
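The convergence γ(t)→Γ_0 above can also be checked numerically. A minimal sketch, with the same illustrative Ohmic density and parameter values as in the earlier sketch (our assumptions, not the paper's):

```python
# The time-dependent dephasing rate gamma(t) = int_0^infty J_eff(w) sin(w*t)/w dw
# should approach Gamma_0 = pi/beta; same illustrative Ohmic density as before.
import numpy as np

beta, W = 1.0, 5.0  # assumed illustrative values

def dephasing_rate(t, w_max=200.0, n=2_000_000):
    w = np.linspace(1e-8, w_max, n)
    j_eff = w * np.exp(-w / W) / np.tanh(beta * w / 2.0)  # J(w) coth(beta*w/2)
    return float(np.sum(j_eff * np.sin(w * t) / w) * (w[1] - w[0]))

print("Gamma_0 =", np.pi / beta)
for t in (1.0, 5.0, 20.0, 80.0):
    print(t, dephasing_rate(t))
```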
Obviously, the quantum regression formula cannot be satisfied exactly, except in trivial cases (e.g., H_I=0). We can hope for the satisfaction of this formula only in certain limiting cases. In particular, the quantum regression formula is satisfied in the weak-coupling limit <cit.>. In Ref. <cit.>, it is shown that, for the considered model of pure decoherence, the quantum regression formula (<ref>) is satisfied if and only if the following formula holds for all n, j_1,…,j_n,l_1,…,l_n∈{0,1}, t_1,…,t_n≥0: Tr[ e^-it_nH_j_n… e^-it_1H_j_1ρ_B,β e^it_1H_l_1… e^it_nH_l_n] = ∏_k=1^n Tr[ e^-it_kH_j_kρ_B,β e^it_kH_l_k], where H_j for j=0,1 are given in Eq. (<ref>). Consider the thermodynamic limit N→∞ in Eq. (<ref>) such that J(ω) converges to a function [also denoted by J(ω)] that is integrable on [0,∞) and twice differentiable on [0,ω_1) for some ω_1>0 and such that J(ω)∼ cω as ω→+0 for some c>0. Then the quantum regression formula is asymptotically satisfied for large times in the following sense: lim_N→∞Tr[ e^-it_nH_j_n… e^-it_1H_j_1ρ_B,β e^it_1H_l_1… e^it_nH_l_n] =e^-Γ_0 T+O(ln(min t)), min t→∞, and lim_N→∞∏_k=1^n Tr[ e^-it_kH_j_kρ_B,β e^it_kH_l_k] =e^-Γ_0 T+O(ln(min t)), min t→∞. Here T=∑_k: j_k≠ l_k t_k, min t=min_k: j_k≠ l_k t_k, and Γ_0 is given in Eq. (<ref>). Thus, both sides of Eq. (<ref>) are exponential functions whose arguments coincide in the principal terms as the minimal t_k goes to infinity. Note that the first limit in Eqs. (<ref>) and (<ref>) is the thermodynamic one, i.e., N→∞, and the second one is min t→∞. Let us calculate (<ref>) first. If j_k=l_k, then Tr[ e^-it_kH_j_kρ_B,β e^it_kH_l_k] =1. If j_k≠ l_k, then, according to Eqs. (<ref>) and (<ref>), Tr[ e^-it_kH_j_kρ_B,β e^it_kH_l_k] =e^-Γ_0 t_k+O(ln t_k), t_k→∞. Taking the product over all k, we obtain Eq. (<ref>). For the proof of Eq. (<ref>), we will use the results of calculations from Ref. <cit.>. According to them, the left-hand side of Eq. (<ref>) is equal to exp{ -1/2∫_0^∞J_eff(ω)/ω^2| ∑_k=1^n(j_k-l_k)(e^iω T_k-e^iω T_k-1) |^2 dω}, where T_k=∑_k'=1^k t_k'. Now we apply decomposition (<ref>): ∫_0^∞J_eff(ω)/ω^2| ∑_k=1^n(j_k-l_k)(e^iω T_k-e^iω T_k-1) |^2 dω=∫_0^ω_cJ_eff(ω)-J_eff(0)-J'_eff(0)ω/ω^2| ∑_k=1^n(j_k-l_k)(e^iω T_k-e^iω T_k-1) |^2 dω+J_eff(0)∫_0^∞1/ω^2| ∑_k=1^n(j_k-l_k)(e^iω T_k-e^iω T_k-1) |^2 dω-J_eff(0)∫_ω_c^∞1/ω^2| ∑_k=1^n(j_k-l_k)(e^iω T_k-e^iω T_k-1) |^2 dω+J'_eff(0)∫_0^ω_c1/ω| ∑_k=1^n(j_k-l_k)(e^iω T_k-e^iω T_k-1) |^2 dω+∫_ω_c^∞J_eff(ω)/ω^2| ∑_k=1^n(j_k-l_k)(e^iω T_k-e^iω T_k-1) |^2dω for some ω_c>0. Here, for all terms in the right-hand side, except the second one, we can apply the following estimation: | ∑_k=1^n(j_k-l_k)(e^iω T_k-e^iω T_k-1) |^2≤∑_k=1^n |j_k-l_k|· |e^iω T_k-e^iω T_k-1|^2 ≤ 2∑_k=1^n |j_k-l_k|· (1-cosω t_k). Thus, we can repeat the corresponding steps of the proof of case 2 of Theorem <ref> and conclude that all these terms are O(ln(min t)) as min t→∞. The integral in the second term in the right-hand side of Eq. (<ref>) was shown in Ref. <cit.> to be equal to π T, which concludes the proof of Eq. (<ref>) and, thus, the theorem.
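As a simple illustration of the theorem (our own example), take n=2 with j=(1,1) and l=(0,0): the exponent of the left-hand side of Eq. (<ref>) then reduces to Γ(t_1+t_2), while that of the right-hand side is Γ(t_1)+Γ(t_2), and both grow as Γ_0(t_1+t_2) up to logarithmic corrections. A minimal numerical sketch of this, assuming the same illustrative Ohmic density as in the earlier sketches:

```python
# Two-time illustration with j = (1,1), l = (0,0) and t1 = t2 = t:
# the exponents Gamma(2t) (multitime side) and 2*Gamma(t) (product side)
# both grow as Gamma_0 * 2t up to O(log t); same illustrative Ohmic density.
import numpy as np

beta, W = 1.0, 5.0  # assumed illustrative values

def Gamma(t, w_max=200.0, n=2_000_000):
    w = np.linspace(1e-8, w_max, n)
    j_eff = w * np.exp(-w / W) / np.tanh(beta * w / 2.0)
    return float(np.sum(j_eff * (1.0 - np.cos(w * t)) / w**2) * (w[1] - w[0]))

gamma0 = np.pi / beta
for t in (5.0, 20.0, 80.0):
    print(t, Gamma(2.0 * t), 2.0 * Gamma(t), gamma0 * 2.0 * t)
```

In Ref. <cit.>, D. Lonigro and D. Chruściński show that the dynamics is exactly Markovian if we formally replace the lower limit of integration in Eq. (<ref>) for Γ(t) by -∞ and put J_eff(ω) to be constant on the whole real line ω∈(-∞,∞). Of course, negative frequencies are unphysical.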
However, the authors write: “We point out that, while these choices of coupling may be considered unphysical, the corresponding results are indicative of what would be obtained in more realistic scenarios: we can expect an exponential dephasing in the regime in which the spin-boson interaction is `approximately flat' in the energy regime of interest.” Here we actually show that the spectral density can be considered to be `approximately flat' at large times, when only the behaviour of J_eff(ω) in a vicinity of zero matters. § THE CASE OF ZERO TEMPERATURE We have analysed the realistic case of a positive temperature. However, it is worthwhile to briefly mention the case of zero temperature as well since it is often used as an approximation for the case of low temperatures. In this case, we still have formula (<ref>) for decoherence, but Γ(t) is defined without the factor coth(βω/2) in the integral <cit.>: Γ(t)= ∫_0^∞ J(ω) 1-cosω t/ω^2dω.
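As a consistency check (a worked example of our own, not taken from the source), for the Ohmic density with exponential cutoff J(ω)=ω e^-ω/Ω at zero temperature this integral can be evaluated in closed form via the standard integral ∫_0^∞ e^-aω(1-cos bω)/ω dω = (1/2)ln(1+b^2/a^2): Γ(t)= ∫_0^∞ e^-ω/Ω1-cosω t/ω dω = 1/2ln(1+Ω^2t^2) = ln t + lnΩ + o(1), t→∞, which is exactly the logarithmic growth (power-law decoherence with α=1=lim_ω→+0J(ω)/ω) of case 3 of the theorem below.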
Again, if the decoherence is not exponential (γ≠0), then a Markovian embedding is impossible, while, in the exponential case γ=0, the asymptotic Markovianity takes place.Note that exact form of Γ(t) for the particular super-Ohmic spectral densityJ(ω)=2ηω_ c^1-γω^γ e^-ω/ω_ cfor γ>1, where η, ω_ c>0, and the zero temperature was obtained in Ref. <cit.>. From this formula, the partial decoherence follows (which is discussed in the mentioned paper), in agreement with Theorem <ref>.Let us consider now the case of a positive, but a very small temperature. How the predictions of Theorems <ref> and <ref> can agree? Consider, for example, the aforementioned spectral density (<ref>) for 1<γ≤2. Theorem <ref> predicts only a partial decoherence, while Theorem <ref> predicts the full decoherence (though slower than exponential) for this case. Of course, the answer is in different time scales: all decoherence constants in Theorem <ref> are proportional to the temperature β^-1, while those in Theorem <ref> are not. So, at the beginning, the system partially decohere and then, on a larger time scale proportional to β, the full decoherence occurs. Different time scales caused by vacuum and positive-temperature contributions are well-known for this model <cit.>.§ CONCLUSIONS The main result of this paper is Theorem <ref> (for the case of a positive temperature of the bath) and also Theorem <ref> (for the case of the zero temperature), where the long-time rate of decoherence in a known exactly solvable model of decoherence have been related to the asymptotic behaviour of the bath spectral density at low frequencies. Though the considered model of pure decoherence is paradigmatic in the theory of open quantum systems, we are not aware of a such detailed analysis of the long-term behaviour of coherence in this model.We have discussed consequences of these results for the theory of weak-coupling limit, for the possibility of the Markovian embedding, and for the asymptotic Markovianity. In particular, we see that the decoherence is not necessarily exponential, while Markovian embeddings can give only the exponential relaxation to a steady state. Hence, the Markovian embedding, which is widely used to model the non-Markovian dynamics of open quantum systems, is not always possible. From the other side, if the spectral density is Ohmic (and the temperature is positive), the exponential decoherence dominates for large times. As a consequence, we have asymptotic Markovianity in the most general sense: in the sense of the quantum regression formula.Finally, it is worthwhile to mention works about theoretical analysis of decoherence in the considered model with a special control technique called the dynamical decoupling, which is aimed to suppress the decoherence <cit.>. It would be interesting to extend the results of the present paper about the long-term rates of decoherence depending on the asymptotic behaviour of the spectral density at low frequencies to the case of dynamical decoupling.This work was funded by the Russian Federation represented by the Ministry of Science and Higher Education of the Russian Federation (grant number 075-15-2020-788). The author thanks Alexander Teretenkov, Dvira Segal, and Michiel Burgelman for fruitful discussions on the subject of the present paper and its results.The author declares no conflict of interest.Abbreviations The following abbreviation is used in this manuscript:GKSL Gorini-Kossakowski-Sudarshan-Lindblad References999 BP Breuer, H.-P.; Petruccione, F. 
The Theory of Open Quantum Systems; Oxford University Press: New York, NY, USA, 2002.EkertQCompDiss Palma, G.M.; Suominen K.-A.; Ekert, A.K. Quantum computers and dissipation, Proc. Roy. Soc. Lond. A 1996, 452, 567–584.[https://doi.org/10.1098/rspa.1996.0029CrossRef]AlickiDecoh Alicki, R. Pure decoherence in quantum systems, Open Syst. Inf. Dyn. 2004, 11, 53–61.[https://doi.org/10.1023/B:OPSY.0000024755.58888.acCrossRef]Brito Brito, F.; Werlang, T. A knob for Markovianity, New J. Phys. 2015, 17, 072001.[https://doi.org/10.1088/1367-2630/17/7/072001CrossRef]VacchiniQRT Guarnieri, G.; Smirne, A.; Vacchini, B. Quantum regression theorem and non-Markovianity of quantum dynamics, Phys. Rev. A 2014, 90, 022110.[https://doi.org/10.1103/PhysRevA.90.022110CrossRef] MerkliNesterovDimer Merkli, M.; Berman, G.P.; Sayre, R.T.; Gnanakaran, S.;Könenberg, M.; Nesterov, A. I.; Song, H. Dynamics of a chlorophyll dimer in collective and local thermal environments, J. Math. Chem. 2016, 54, 866–917.[https://doi.org/10.1007/s10910-016-0593-zCrossRef] LonigroChrus Lonigro, D.; Chruściński, D.Quantum regression in dephasing phenomena,J. Phys. A 2022, 55, 225308.[https://doi.org/10.1088/1751-8121/ac6a2dCrossRef]SuperOhmic Nacke, Ph.; Otterpohl, F.; Thorwart, M.; Nalbach P. Quantum regression theorem and non-Markovianity of quantum dynamics, Phys. Rev. A 2023, 107, 062218.[https://doi.org/10.1103/PhysRevA.107.062218CrossRef]MarkHier Li, L.; Hall, M.J.W.; Wiseman, H.M.Concepts of quantum non-Markovianity: a hierarchy, Phys. Rep. 2018, 759, 1–51.[https://doi.org/10.1016/j.physrep.2018.07.001CrossRef] ChrusIntroNonMark Chruściński, D.Introduction to non-Markovian evolution of n-level quantum systems. In Open Quantum Systems. A Mathematical Perspective; Bahns, D.; Pohl, A.; Witt, I., Eds.; Springer Nature Switzerland, 2010; pp. 55–76. ChrusBeyondMark Chruściński, D.Dynamical maps beyond Markovian regimeAvailable online: URLhttps://arxiv.org/abs/2209.14902 https://arxiv.org/abs/2209.14902(accessed on 07th November 2023). Lax Lax, M. Formal theory of quantum fluctuations from a driven state, Phys. Rev. 1963, 129, 2342–2348.[https://doi.org/10.1103/PhysRev.129.2342CrossRef]LoGullo Lo Gullo, N.; Sinayskiy, I.; Busch, Th.; Petruccione, F. Non-Markovianity criteria for open system dynamicsAvailable online: URLhttps://arxiv.org/abs/1401.1126 https://arxiv.org/abs/1401.1126(accessed on 07th November 2023).Tamapre Tamascelli, D.; Smirne, A.; Lim, J.; Huelga, S.F.; Plenio M.B. Efficient simulation of finite-temperature open quantum systems, Phys. Rev. Lett. 2019, 123, 090402.[https://doi.org/10.1103/PhysRevLett.123.090402CrossRef]Tama Mascherpa, F.; Smirne, A.; Somoza, A.D.; Fernández-Acebal, P.; Donadi, S.; Tamascelli, D.; Huelga, S.F.; Plenio, M.B. Optimized auxiliary oscillators for the simulation of general open quantum systems, Phys. Rev. A 2020, 101, 052108.[https://doi.org/10.1103/PhysRevA.101.052108CrossRef]GarrawayPetruc Pleasance, G.; Garraway, B.M.; Petruccione, F. Generalized theory of pseudomodes for exact descriptions of non-Markovian quantum processes, Phys. Rev. Res. 2020, 2, 043058.[https://doi.org/10.1103/PhysRevResearch.2.043058CrossRef]TereFinT Teretenkov, A.E. Integral representation of finite temperature non-Markovian evolution of some systems in rotating wave approximation, Lobachevskii J. Math. 2020, 41, 2397–2404.[https://doi.org/10.1134/S1995080220120410CrossRef]TereSeveralBath Teretenkov, A.E. Exact non-Markovian evolution with several reservoirs, Phys. Part. Nucl. 
2020, 51, 479–484.[https://doi.org/10.1134/S1063779620040711CrossRef] Lambert Iles-Smith, J.; Lambert, N.; Nazir, A. Environmental dynamics, correlations, and the emergence of noncanonical equilibrium states in open quantum systems, Phys. Rev. A 2014, 90, 032114.[https://doi.org/10.1103/PhysRevA.90.032114CrossRef] Strasberg Strasberg, P.; Schaller, G.; Lambert, N.; Brandes, T. Nonequilibrium thermodynamics in the strong coupling and non-Markovian regime based on a reaction coordinate mapping, New J. Phys. 2016, 18, 073007.[https://doi.org/10.1088/1367-2630/18/7/073007CrossRef]Segal Anto-Sztrikacs, N.; Nazir, A.; Segal, D. Effective Hamiltonian theory of open quantum systems at strong coupling, PRX Quantum 2023, 4, 020307.[https://doi.org/10.1103/PRXQuantum.4.020307CrossRef]Luchnikov19 Luchnikov, I.A.; Vintskevich, S.V.; Ouerdane, H.; Filippov, S.N. Simulation complexity of open quantum dynamics: Connection with tensor networks, Phys. Rev. Lett. 2019, 122, 160401.[https://doi.org/10.1103/PhysRevLett.122.160401CrossRef]Luchnikov22 Luchnikov, I.A.; Kiktenko, E.O.; Gavreev, M.A.; Ouerdane, H.; Filippov, S.N.; Fedorov A.K. Probing non-Markovian quantum dynamics with data-driven analysis: Beyond “black-box” machine-learning models, Phys. Rev. Research 2022, 4, 043002.[https://doi.org/10.1103/PhysRevResearch.4.043002CrossRef] Davies Davies, E.B. Markovian master equations, Commun. Math. Phys. 1974, 39, 91–110.[https://doi.org/10.1007/BF01608389CrossRef]Davies2 Davies, E.B. Markovian master equations. II, Math. Ann. 1976, 219, 147–158.[https://doi.org/10.1007/BF01351898CrossRef]MerkliRev Merkli, M. Quantum Markovian master equations: Resonance theory shows validity for all time scales, Ann. Phys. 2020, 412, 167996.[https://doi.org/10.1016/j.aop.2019.167996CrossRef] TMCA Trushechkin, A.S.; Merkli, M.; Cresser, J.D.; Anders, J.Open quantum system dynamics and the mean force Gibbs state.AVS Quantum Sci. 2022, 4, 012301.[https://doi.org/10.1116/5.0073853CrossRef]MerkliIdealQGas Merkli, M. The ideal quantum gas. In Open Quantum Systems I. The Hamiltonian Approach; Attal, S.; Joye, A.; Pillet, C.-A., Eds.; Springer: Berlin, Germany, 2006; pp. 183–233.Viola2013 Khodjasteh, K.; Sastrawan, J.; Hayes, D.; Green, T.J.;Biercuk, M.J.; Viola, L. Designing a practical high-fidelity long-time quantum memory, Nature Comm. 2013, 4, 2045.[https://doi.org/10.1038/ncomms3045CrossRef]GR Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series, and Products; Elsevier: Burlinglon, MA, USA, 2007. Fay Fay, T.P.; Lindoy, L.P.; Manolopoulos, D.E.Spin-selective electron transfer reactions of radical pairs: Beyond the Haberkorn master equation, J. Chem. Phys. 2018, 149, 064107.[https://doi.org/10.1063/1.5041520CrossRef]TrushUltra Trushechkin, A.Quantum master equations and steady states for the ultrastrong-coupling limit and the strong-decoherence limit,Phys. Rev. A 2022, 106, 042209.[https://doi.org/10.1103/PhysRevA.106.042209CrossRef] AL Alicki, R.; Lendi, K.Quantum Dynamical Semigroups and Applications;Springer: Berlin, 2007. QEngSupercond Krantz, P.; Kjaergaard, M.; Yan, F.; Orlando, T.P.; Gustavsson S.; Oliver, W.D.A Quantum Engineer's Guide to Superconducting Qubits,Appl. Phys. Rev. 2019, 6, 021318.[https://doi.org/10.1063/1.5089550CrossRef]StochPseudomodes Luo, S.; Lambert, N.; Liang, P.; Cirio M. Quantum-classical decomposition of Gaussian quantum environments: A stochastic pseudomode model, PRX Quantum 2023, 4, 030316.[https://doi.org/10.1103/PRXQuantum.4.030316CrossRef]TereJPA Teretenkov, A.E. 
Non-perturbative effects in corrections to quantum master equation arising in Bogolubov–van Hove limit, J. Phys. A 2021, 54, 265302.[https://doi.org/10.1088/1751-8121/ac0201CrossRef]Dumke1983 Dümke, R.Convergence of multitime correlation functions in the weak and singular coupling limits, J. Math. Phys. 1983, 24, 311–315.[https://doi.org/10.1063/1.525681CrossRef]ViolaLloyd1998 Viola, L.; Lloyd, S.Dynamical suppression of decoherence in two-state quantum systems,Phys. Rev. A 1998, 58, 2733–2744.[https://doi.org/10.1103/PhysRevA.58.2733CrossRef] | http://arxiv.org/abs/2311.16010v1 | {
"authors": [
"Anton Trushechkin"
],
"categories": [
"quant-ph",
"math-ph",
"math.MP"
],
"primary_category": "quant-ph",
"published": "20231127170927",
"title": "Long-term behaviour in an exactly solvable model of pure decoherence and the problem of Markovian embedding"
} |
Large pre-trained vision models achieve impressive success in computer vision. However, fully fine-tuning large models for downstream tasks, particularly in video understanding, can be prohibitively computationally expensive. Recent studies have therefore turned their focus towards efficient image-to-video transfer learning. Nevertheless, existing efficient fine-tuning methods pay little attention to training memory usage and to the exploration of transferring a larger model to the video domain. In this paper, we present a novel Spatial-Temporal Side Network for memory-efficient fine-tuning of large image models to video understanding, named Side4Video. Specifically, we introduce a lightweight spatial-temporal side network attached to the frozen vision model, which avoids backpropagation through the heavy pre-trained model and utilizes multi-level spatial features from the original image model. This extremely memory-efficient architecture enables our method to reduce memory usage by 75% compared with previous adapter-based methods. In this way, we can transfer a huge ViT-E (4.4B) to video understanding tasks, a model 14× larger than ViT-L (304M). Our approach achieves remarkable performance on various video datasets across unimodal and cross-modal tasks (i.e., action recognition and text-video retrieval), especially on Something-Something V1&V2 (67.3% & 74.6%), Kinetics-400 (88.6%), MSR-VTT (52.3%), MSVD (56.1%) and VATEX (68.8%). We release our code at <https://github.com/HJYao00/Side4Video>. § INTRODUCTION The success of large language models (LLMs) <cit.> in understanding and generating nuanced human text has inspired similar scaling endeavors in computer vision <cit.>. However, compared with image models, pre-training of large video models is constrained by the availability of large high-quality video datasets and by computational resources. A prevalent approach is to transfer the CLIP image encoder, a powerful Vision Transformer <cit.> (ViT) pre-trained on a web-scale image-text pair dataset, to video tasks. Nevertheless, as the model size increases, fully fine-tuning video models is computationally expensive. This raises a critical question: how can large pre-trained image models, such as ViT-E <cit.> with its 4.4 billion parameters, be adapted effectively to video understanding? This still remains a challenge. To accommodate the rapid expansion in model size, Parameter-Efficient Fine-Tuning (PEFT) methods <cit.>, which fine-tune only a small subset of parameters, have been proposed in natural language processing (NLP). Among these methods, adapter-based methods <cit.>, lightweight modules inserted into pre-trained models, are widely used for video action recognition and text-video retrieval due to their efficiency and adaptability. Nevertheless, adapter-based methods require backpropagation through the frozen layers of the models, which yields unnecessary memory cost, as shown in <ref> (Left). To further reduce memory usage, LST <cit.> first introduces a side network attached to the frozen pre-trained model for NLP tasks, as shown in <ref> (Right), eliminating the need for backpropagation within the pre-trained models. A similar idea <cit.> has been adopted in computer vision, where a side network is used to predict mask proposals and attention bias for semantic segmentation. However, exploration of side networks in video understanding remains limited.
In this work, we introduce Side4Video, a novel and memory-efficient method for fine-tuning pre-trained image models for video understanding tasks. In addition, we explore the enhancements afforded by transferring a larger model to the video domain. To be specific, we devise a spatial-temporal side network attached to the frozen pre-trained model, which receives multi-level spatial features from the frozen ViT. Our Side4Video utilizes a divided spatial-temporal module to learn video representations, where each block consists of a temporal convolution, spatial self-attention and a feed-forward network. Beyond simply opting for a low-dimensional side network to minimize memory usage, we investigate a variety of strategies to further conserve memory and bolster temporal reasoning capabilities, including removing the [CLS] token in the side network, memory-efficient temporal convolution, and [CLS] token shift spatial attention. Thanks to this structure, our approach enables us to transfer ViT-E to video understanding tasks with modest computational resources. In contrast to previous PEFT methods, which are applied to a single task, we evaluate our model on both unimodal and cross-modal video tasks (i.e., action recognition and text-video retrieval) across six popular benchmarks (i.e., Something-Something (SS) V1&V2 <cit.>, Kinetics-400 <cit.>, MSR-VTT <cit.>, MSVD <cit.>, and VATEX <cit.>). Our contributions are summarized as follows: * We introduce an innovative method for memory-efficient fine-tuning of pre-trained image models on video tasks. * For action recognition, our method can achieve a 75% reduction in memory usage and a 2.2% increase in accuracy on SSV2, surpassing the previous Video Adapter <cit.>. In text-video retrieval, our method achieves a 30% memory reduction while improving the R@1 metric by 1.1 on MSR-VTT, compared to the classic CLIP4Clip <cit.>. * To our knowledge, this is the pioneering work in efficiently transferring a large image backbone, ViT-E/14, to video understanding tasks. By scaling up the model to ViT-E/14, which is 14.5 times larger than ViT-L/14, our model delivers state-of-the-art performance on both unimodal and cross-modal video tasks. § RELATED WORK Large Vision Model. The advent of ViT <cit.> signaled a leap forward in the pre-training of large-scale vision models, distinguished by their transferability and scalability. The CLIP model <cit.>, pre-trained on 400 million image-text pairs, has garnered significant interest due to its remarkable generalization capabilities and its ability to align knowledge across visual and textual domains. Building on CLIP's success, later works <cit.> have expanded the size of both datasets and models, further augmenting CLIP's representational capability. A noteworthy work is EVA-CLIP <cit.>, which leverages LAION-2B <cit.>, consisting of 2.32 billion image-text pairs, to pre-train a 64-layer ViT-E/14 with 4.4B parameters, achieving impressive results. Yet, the efficient adaptation of such huge image models to the video domain is extremely expensive and rarely explored. CLIP for Video Understanding. Due to its impressive generalization ability, CLIP has been extensively extended to action recognition <cit.> and text-video retrieval <cit.>. However, these methods typically require fully fine-tuning the whole model, which is computationally intensive. To mitigate these issues, recent works <cit.> extend the PEFT methods <cit.> from NLP to the video domain.
For action recognition, ST-Adapter <cit.> and AIM <cit.> insert spatial-temporal adapters inside the models to accommodate video data. For text-video retrieval, Cross-Modal Adapter <cit.> employs weight-sharing adapters in both the video and text encoders. However, adapter-based methods lead to unnecessary backpropagation through the frozen parameters, incurring additional memory overhead. Side-Tuning. Side-Tuning <cit.> was first proposed to address forgetting in incremental learning. As model sizes expand, fine-tuning of large models becomes constrained by available computational resources. LST <cit.> first focuses on memory reduction by implementing a lightweight transformer attached to pre-trained models for NLP tasks, and SAN <cit.> applies this technique to image semantic segmentation. These methods focus more on a single modality and implement their side network as a lightweight transformer. Our work also explores the cross-modal capability of side networks. Note that several works <cit.> share similar ideas in the video domain, avoiding backpropagation through the pre-trained models. EVL <cit.> adopts a parallel transformer decoder to extract spatial features from frozen CLIP, while DiST <cit.> uses an integration branch to fuse the features from a spatial encoder and a temporal encoder, where the spatial encoder is a frozen CLIP. In contrast to their approaches, we introduce a spatial-temporal side encoder to learn video representations, which has better continuity and scalability. Furthermore, we successfully transfer a large model to video understanding tasks to explore the advantages brought by an increased model size. § METHODOLOGY §.§ Preliminary ViT splits an image I ∈ ℝ^H× W× C into a sequence of non-overlapping patches and then projects them into the embedding space as x_e = [x_1, x_2, ..., x_N], x_e ∈ ℝ^N× D, where N denotes the number of patches and D is the hidden dimension. Subsequently, ViT prepends a learnable [CLS] token x_0, giving x_e = [x_0,x_1,x_2,...,x_N], and adds a positional embedding E_pos to x_e as Z_0 = x_e + E_pos, where Z_0 is the final input fed to a sequence of transformer blocks. Considering T frames f_t of a video V = [f_1,f_2,...,f_T], our work focuses on fine-tuning a large pre-trained ViT for video understanding in a memory-efficient way. Adding adapters inside the frozen pre-trained model causes additional backpropagation through its large frozen layers. Posterior methods such as meanP <cit.> and SeqTransf <cit.>, which model spatial-temporal features after the frozen ViT, avoid the above situation. However, posterior structures neglect low-level features, which are important for video understanding tasks. Inspired by LST <cit.>, we propose a spatial-temporal side network which utilizes multi-level spatial features to transfer image models to video understanding tasks in a memory-efficient way. §.§ Overview We introduce Side4Video, a method that fully leverages the multi-level features of ViT while avoiding backpropagation through the large pre-trained model. By freezing the pre-trained model and only updating the side network parameters, our approach significantly minimizes the memory footprint. Specifically, Side4Video is constructed as a lightweight spatial-temporal side network attached to the pre-trained model, consisting of l layers of d dimensions. The side network is seamlessly integrated with the pre-trained model, receiving multi-level features from the ViT before each side block.
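To make the coupling concrete, a minimal PyTorch-style sketch of a frozen ViT feeding a side network is given below. All names (`Side4VideoSketch`, the `intermediate_tokens` helper) and shapes are our own illustrative assumptions rather than the released implementation; the internals of each side block are sketched after the block description in the next section.

```python
import torch
import torch.nn as nn

class Side4VideoSketch(nn.Module):
    """Frozen ViT + trainable side network (illustrative sketch only)."""
    def __init__(self, vit, side_blocks, vit_dim=768, side_dim=320):
        super().__init__()
        self.vit = vit
        for p in self.vit.parameters():
            p.requires_grad_(False)            # frozen backbone: no gradient storage
        self.side_blocks = nn.ModuleList(side_blocks)
        self.down = nn.ModuleList(
            nn.Sequential(nn.LayerNorm(vit_dim), nn.Linear(vit_dim, side_dim))
            for _ in side_blocks)
        # 3D patch embedding for the side branch; no [CLS] token is created
        self.embed = nn.Conv3d(3, side_dim, kernel_size=(3, 16, 16),
                               stride=(1, 16, 16), padding=(1, 0, 0))

    def forward(self, video):                  # video: (B, 3, T, H, W)
        with torch.no_grad():                  # backprop never enters the ViT
            # assumed helper returning per-layer patch tokens (B, T, N, vit_dim)
            z_list = self.vit.intermediate_tokens(video)
        s = self.embed(video).flatten(3).permute(0, 2, 3, 1)   # (B, T, N, side_dim)
        for blk, down, z in zip(self.side_blocks, self.down, z_list):
            s = blk(s + down(z))               # element-wise feature fusion
        return s                               # pooled per task downstream
```

Because the frozen ViT runs under `no_grad`, no activations are stored for it, which is the source of the memory savings discussed above.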
Each Side4Video block is composed of a temporal convolution, [CLS] token shift self-attention and an MLP layer, as depicted in <ref>. Finally, the output Z_out∈ℝ^T × D from ViT's [CLS] token maintains the original zero-shot capability, while the output s_out∈ℝ^T× N× D from the side network captures comprehensive video information. We deploy Global Average Pooling (GAP) on s_out to obtain a global video representation for action recognition, while preserving frame-level global representations to support fine-grained matching for text-video retrieval. §.§ Side4Video Our Side4Video block is composed of a temporal convolution, [CLS] token shift spatial self-attention and an MLP layer. Here we describe the Side4Video block in detail. Remove [CLS] token in side network. The [CLS] token of ViT is the global image representation. In the video domain, a common practice is to average the [CLS] tokens of all frames as the final video representation. However, updating a learnable token increases the memory consumption. We find that GAP on patch tokens can achieve competitive performance, while introducing extra [CLS] tokens in the side network increases the memory footprint unnecessarily. Moreover, in order to enhance temporal modelling and harmonize the input paradigm <ref> of each block, we use a 3D convolution to project the video frames to a sequence s_0 without an additional [CLS] token, s_0 ∈ℝ^T× N× d. Feature fusion on patch tokens. Side4Video effectively leverages multi-level spatial features of ViT. To achieve this, we implement a linear projection Down(·) to convert the D-dimensional ViT features Z^l_out to d-dimensional features z^l_out. Note that this projection function is applied to both the [CLS] token and the patch tokens at each layer, and we only fuse the ViT patch-token features z^l_out and the Side4Video features s^l-1_out by element-wise addition. The [CLS] token will be used in the spatial self-attention. The fusion strategy is: z^l_out = Down(Norm(Z^l_out)), s_in^l = s^l-1_out + z^l_out. Temporal module in Side4Video. Convolution <cit.> and self-attention <cit.> are two popular choices for temporal modeling. To minimize the training memory cost, we investigate the impact of 3D convolution and temporal attention on the memory footprint and performance; details are shown in <ref>. Although temporal attention is good at long-range modeling, temporal convolution is more memory-efficient and easier to converge. Following MVFNet <cit.>, we employ depth-wise separable temporal convolutions to further reduce memory. To simplify, the process starts with a 1× 1 × 1 convolution as a point-wise convolution, then the 3× 1 × 1 channel-wise temporal convolution followed by the 1× 1 × 1 point-wise convolution, forming the depth-wise separable convolution. We also find that 3D batch normalization <cit.> effectively enhances spatial-temporal modeling. We adopt batch normalization before the temporal convolution and the MLP layer, and keep layer normalization <cit.> before the self-attention. [CLS] token shift self-attention. Since the frozen pre-trained [CLS] tokens contain global spatial features, we extend these works <cit.> by shifting the pre-trained [CLS] channels back-and-forth across adjacent frames. Then, we concatenate the shifted token to K, V, where K, V are the key and value in the self-attention. In this way, Side4Video learns temporal information in the [CLS] tokens with a negligible memory increase.
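For reference, a standalone sketch of one such block is shown below. The [CLS] tokens are assumed to be the down-projected frozen-ViT tokens; the shifted channel fraction (one half) and the MLP expansion ratio are our assumptions, and the released implementation may differ in detail.

```python
import torch
import torch.nn as nn

class SideBlockSketch(nn.Module):
    """One side block: temporal conv -> [CLS]-shift attention -> MLP (sketch).

    Normalization placement follows the text (BN before the temporal conv and
    the MLP, LN before attention); other details are illustrative assumptions.
    """
    def __init__(self, d, heads=8):
        super().__init__()
        self.bn1 = nn.BatchNorm3d(d)
        self.point1 = nn.Conv3d(d, d, 1)                       # point-wise conv
        self.tconv = nn.Conv3d(d, d, (3, 1, 1), padding=(1, 0, 0), groups=d)
        self.point2 = nn.Conv3d(d, d, 1)                       # point-wise conv
        self.ln = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.bn2 = nn.BatchNorm1d(d)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    @staticmethod
    def shift_cls(cls_tok):                                    # cls_tok: (B, T, d)
        out, d = cls_tok.clone(), cls_tok.shape[-1]
        out[:, 1:, : d // 2] = cls_tok[:, :-1, : d // 2]       # shift forward in time
        out[:, :-1, d // 2:] = cls_tok[:, 1:, d // 2:]         # shift backward in time
        return out

    def forward(self, s, cls_tok):                             # s: (B, T, N, d)
        B, T, N, d = s.shape
        x = s.permute(0, 3, 1, 2).unsqueeze(-1)                # (B, d, T, N, 1)
        x = self.point2(self.tconv(self.point1(self.bn1(x))))  # depth-wise separable
        s = s + x.squeeze(-1).permute(0, 2, 3, 1)              # temporal residual
        q = self.ln(s).reshape(B * T, N, d)
        kv = torch.cat([q, self.shift_cls(cls_tok).reshape(B * T, 1, d)], dim=1)
        s = s + self.attn(q, kv, kv, need_weights=False)[0].reshape(B, T, N, d)
        return s + self.mlp(self.bn2(s.reshape(-1, d))).reshape(B, T, N, d)
```

Note that only the key/value sequence grows (to N+1 tokens), which is why the memory overhead of the [CLS] shift is negligible.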
§.§ Side4Video for video understanding Given a video, the side network generates a video representation s_out∈ℝ^T× N× d, for which we apply Global Average Pooling (GAP) over the patch tokens to obtain the final representation. We design two GAP methods to yield the final video representations for vision-only and cross-modal tasks, respectively. Side4Video for action recognition. Vision-only tasks require models to pay more attention to spatial-temporal modeling to understand dynamic actions. Given that the frozen pre-trained ViT lacks temporal reasoning capabilities and Side4Video models spatial-temporal features, we obtain the final video representation by performing global average pooling on the output of Side4Video: s_final = 1/T× N∑_t,ns_out. Side4Video for text-video retrieval. Unlike vision-only tasks, cross-modal tasks require the video and text models to learn a joint embedding space. CLIP, containing rich vision-text aligned knowledge, is widely used in text-video retrieval. Since the side network is randomly initialized, we leverage the powerful zero-shot capabilities of CLIP to stabilize training. Specifically, we first average over the patch tokens to obtain frame-level representations of the side network. Then, we project the features back to D dimensions and aggregate them with the [CLS] tokens from the ViT. Subsequently, we reuse the pre-trained projection layer Proj(·) to map the features into the joint embedding space, resulting in the final frame-level representations s_final: s_final = Proj(Up(1/N∑_ns_out) + Z_out). Side4Video thus plays the role of enhancing spatial modeling and injecting temporal information. Finally, we employ advanced token-wise fine-grained matching <cit.> instead of simple global matching to generate the similarity matrix for text-video retrieval. § EXPERIMENTS §.§ Experiment Settings Datasets. To demonstrate the effectiveness of our method, we conduct a comprehensive evaluation on two popular video understanding tasks, i.e., action recognition and text-video retrieval. For action recognition, we employ three widely adopted benchmarks to evaluate our model, including Something-Something V1&V2 (SSV1 and SSV2) <cit.> and Kinetics-400 (K400) <cit.>. For text-video retrieval, we adopt three well-known benchmarks, including MSR-VTT <cit.>, MSVD <cit.> and VATEX <cit.>. The statistics of these datasets are provided in the Supplementary Material. Implementation Details. In this paper, we adopt OpenAI-CLIP <cit.> for ViT-B/16 and ViT-L/14, and EVA-CLIP <cit.> for ViT-E/14. Following ViT-E/14, we implement flash attention <cit.> and post-normalization to maintain consistency. <ref> presents the configuration of our model for action recognition. By adjusting the dimensions and the number of layers, our model balances memory usage with performance. For text-video retrieval, we construct 320-dimensional side networks with 12, 24, and 32 layers for ViT-B, ViT-L, and ViT-E, respectively. Constrained by a 40G memory limit, we only train a scaled-down version of ViT-E. Although the lightweight model does not fully exploit ViT-E's capabilities, it still represents a notable advancement over ViT-L. More details are provided in the Supplementary Material. §.§ Memory Comparison <ref> presents the training memory usage and performance comparison with existing efficient fine-tuning methods on SSV2. For a fair comparison, we measure the memory footprint within the same environment (A100, 80G), using 8 frames as model input. All the models are tested with 1 spatial crop and 3 temporal clips here.
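As a reference for how peak-memory figures of this kind can be obtained, a small PyTorch probe is sketched below; it is our own measurement recipe (function name and loss are placeholders), not necessarily the exact protocol behind the reported numbers.

```python
import torch

def peak_train_memory_mb(model, frames, labels, loss_fn):
    """Peak GPU memory for one training step (illustrative probe)."""
    model = model.cuda().train()
    frames, labels = frames.cuda(), labels.cuda()
    torch.cuda.reset_peak_memory_stats()
    loss = loss_fn(model(frames), labels)   # forward pass
    loss.backward()                         # backward: dominates activation memory
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 2**20
```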
Benefiting from the spatial-temporal side network, Our-B/16 yields a remarkable 70% reduction in memory consumption while simultaneously improving top-1 accuracy by 1.5% compared to ST-Adapter-B/16. Additionally, it is worth noting that another side-tuning-like method, DiST <cit.>, tends to use more tunable parameters than adapter-based methods, e.g., 19M for DiST-B vs. 7M for ST-Adapter-B. However, Our-B/16 reduces the tunable parameters to 4M, which is more parameter-efficient than ST-Adapter <cit.> and AIM <cit.>. Compared with DiST, Our-B/16 and Our-L/14 save 30% and 16% memory relative to DiST-B/16 and DiST-L/14 while achieving comparable performance. Furthermore, our method exhibits excellent scalability. Scaling up our model by increasing l and d, Our-B/16 and Our-L/14 achieve the highest accuracy of 70.2% and 71.8%, improving by 1.5% and 1.0% compared to DiST-B/16 and DiST-L/14. In addition, we also provide the memory comparison on text-video retrieval in <ref>. Compared to CLIP4Clip <cit.>, our method achieves a 30% reduction in memory usage while improving Recall@1 by 1.1%. §.§ Comparisons on Action Recognition Results on Something-Something V1&V2. <ref> shows the SOTA comparison on SSV1 and SSV2. Under same-size pre-training models, our method achieves competitive results compared with fully fine-tuned methods on both SSV1 and SSV2. For example, Side4Video with 16 frames achieves results comparable to UniFormerV2 <cit.> with 32 frames (62.4% vs. 62.7% on SSV1, 73.2% vs. 73.0% on SSV2). Moreover, Side4Video surpasses all the frozen-backbone methods, and Our L/14 outperforms ST-Adapter and AIM by 0.9% and 2.6% on SSV2. Scaling up the backbone to ViT-E/14, we reach the highest accuracy of 67.3% on SSV1 and 75.2% on SSV2. We observe an impressive performance improvement as the model size increases. Results on Kinetics-400. <ref> presents the performance comparison on Kinetics-400. We observe results similar to those on SSV1 and SSV2. On ViT-L/14, our model with an input of 16 frames achieves results comparable to frozen-backbone methods. For example, Side4Video achieves performance comparable to ST-Adapter <cit.> (87.0% vs. 87.2%) and AIM <cit.> (87.0% vs. 87.5%) with fewer input frames (16 vs. 32). Scaling up the pre-trained model to ViT-E, our model enhances accuracy by 1.6% over ViT-L/14, attaining an accuracy of 88.6%. §.§ Comparisons on Text-Video Retrieval Beyond action recognition, we also evaluate our model on the cross-modal text-video retrieval task. Unlike other efficient fine-tuning methods <cit.> tailored exclusively for text-video retrieval, our work concentrates on the video component. Consequently, we keep the ViT frozen, opting to update only the side network and the text encoder. As a baseline for comparison, we present results from CLIP4Clip (frozen ViT), which similarly freezes the ViT and updates the text encoder. Performance on MSR-VTT, MSVD, and VATEX. As shown in <ref>, using a ViT-B/32 backbone, our model achieves a 1.1% improvement in Recall@1 while reducing memory consumption by 30% compared to the full fine-tuning approach of CLIP4Clip <cit.>. When considering methods with a frozen backbone, our approach surpasses the baseline, CLIP4Clip (frozen ViT), by a substantial 4.2% in Recall@1. Regarding PEFT methods, our method shows notable performance enhancements. By scaling our model up to ViT-E/14, we set new state-of-the-art results with 52.3% on MSR-VTT, 56.1% on MSVD, and 68.8% on VATEX, exceeding the performance of the prior SOTA Cap4Video <cit.> by margins of 0.9%, 4.3%, and 2.2%, respectively.
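The token-wise fine-grained matching mentioned in Sec. 3.4 is not spelled out above; one common instantiation (frame-word maximum similarity averaged in both directions) is sketched below. This is an illustrative variant, not necessarily the exact cited method.

```python
import torch
import torch.nn.functional as F

def tokenwise_similarity(video_feats, text_feats):
    """Token-wise matching sketch.

    video_feats: (B_v, T, D) frame embeddings; text_feats: (B_t, L, D) word embeddings.
    Returns a (B_v, B_t) similarity matrix.
    """
    v = F.normalize(video_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    sim = torch.einsum("vtd,wld->vwtl", v, t)   # pairwise frame-word cosines
    v2t = sim.max(dim=3).values.mean(dim=2)     # best word per frame, averaged
    t2v = sim.max(dim=2).values.mean(dim=2)     # best frame per word, averaged
    return (v2t + t2v) / 2
```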
§.§ Ablation study As shown in <ref>, we conduct ablation studies on the Something-Something V1 dataset. The impact of fusion layers. Within memory limitations (A100 40GB), we deploy a 32-layer Side4Video for the 64-layer ViT-E/14. Hence, we explore how fusing features at varying depths impacts performance. We evaluate two fusion strategies: top and interval. The top strategy integrates high-level features from the 32nd to 64th layers, while the interval method fuses multi-level features every 2 layers from the beginning, i.e., the 2nd, 4th, ..., 64th layers. The results in <ref> reveal that the interval-based method yields a 0.7% improvement in accuracy over the top-based method. These findings suggest that multi-level fusion is more beneficial for video understanding tasks. The study of different temporal components. Although self-attention specializes in long-range modeling, it is more data-hungry and memory-inefficient compared to convolution. The results in <ref> show that temporal convolution reaches higher accuracy, with an increase of 0.6% on SSV1. [CLS] token shift self-attention. The token shift technique learns features of adjacent frames without an increase in memory footprint. In ViT, the [CLS] tokens summarize the spatial information of each frame. Leveraging this, we employ [CLS] token shift to enhance the temporal reasoning capability of our model. We show the effectiveness of [CLS] token shift in <ref>, which brings a 0.6% accuracy improvement with a negligible increase in memory consumption, since the number of K, V tokens only increases to N+1. Exploration of video representation. We explore multiple methods to obtain the final video representation, including the use of the [CLS] token within ViT, the incorporation of an additional [CLS] token in the side network, and the application of GAP. For the first method, we concatenate the dimensionality-reduced [CLS] token to the beginning of the input sequence for the side network. Note that we cannot update the [CLS] token parameters, as this would lead to backpropagation through the pre-trained model, while an extra learnable [CLS] token obtains poor performance. According to <ref>, GAP achieves both the highest performance and the lowest memory footprint. The reason for the performance gap between GAP and the [CLS] token may be the learning-rate settings, as mentioned in <cit.>. In conclusion, we adopt GAP for the final representation. The impact of layers and dimension. By adjusting the number of layers l and the dimension d, we can control the memory consumption and performance of our model. Increasing l or d enhances the complexity of the model and its memory usage, but their effects on the model are different. Increasing l enables the model to utilize features from more diverse levels of the ViT, while increasing d enhances the modeling capacity of each layer. In the study of layers, we adopt the interval fusion strategy mentioned above for the 4-layer and 6-layer models. As shown in <ref>, increasing the number of layers from 4 to 12, we can see the performance gradually improve. The model fusing features from all layers achieves the highest accuracy of 58.5%. The results in <ref> show that a dimension of 320 achieves the best performance for our model. A comparative evaluation of the significance of d versus l reveals that our model with 4 layers at 320 dimensions outperforms 12 layers with 128 dimensions, under a comparable memory footprint.
In conclusion, given equivalent memory usage constraints, we prefer a higher-dimensional side network to a deeper one. § CONCLUSION In this paper, our motivation is to transfer large pre-trained image models to video understanding tasks. To this end, we introduce Side4Video for memory-efficient image-to-video transfer learning for video understanding. Side4Video receives multi-level features from the frozen ViT, which avoids backpropagation through the pre-trained model. We achieve better performance than previous efficient fine-tuning methods. Scaling up the model size, we transfer a huge pre-trained model (i.e., ViT-E) to video understanding tasks and observe its notable improvement. In the era of large models, we hope our work can inspire researchers who desire to fine-tune larger models with limited resources. § DATA EFFICIENCY <ref> illustrates the impact of varying training dataset sizes on the performance of our Side4Video. Our model showcases remarkable data efficiency compared with other methods. For example, with only 5% of the Something-Something V2 dataset, Our B/16 model attains a Top-1 accuracy of 48.1%, which is approximately 13% higher than DiST B/16. When scaling up the backbone to ViT-E/14, our model achieves an impressive accuracy of 60.2% with 5% of the training data. § PERFORMANCE GAIN OVER CLIP Since our Side4Video receives multi-level spatial features from CLIP, we compare the accuracy of CLIP and Side4Video across different categories on Kinetics-400. Following <cit.>, we utilize CLIP's text encoder to obtain the zero-shot classification results, and <ref> reports the 10 worst classification results of CLIP. § VISUALIZATION In <ref>, we present visualizations of the attention maps generated by CLIP and Side4Video. These illustrations demonstrate that our model more precisely concentrates on dynamically moving target objects. Notably, as observed in the second frame, even when only a portion of the basketball is in view, our model proficiently traces the basketball's trajectory, which showcases the spatial-temporal modeling capability of our model. § MORE RESULTS ON TEXT-VIDEO RETRIEVAL <ref> and <ref> present more results on MSVD and VATEX, respectively. Our method also exhibits excellent performance on video-to-text retrieval. Additionally, we observe that Our L/14 outperforms Our E/14 on video-to-text retrieval, which may be attributed to the pre-training data and the backbones. § ADDITIONAL IMPLEMENTATION DETAILS Dataset. We evaluate our model on two video understanding tasks, i.e., action recognition and text-video retrieval, to demonstrate the effectiveness of our approach. For action recognition, we employ three widely adopted benchmarks to evaluate our model, including Something-Something V1&V2 (SSV1 and SSV2) <cit.> and Kinetics-400 (K400) <cit.>. The temporal-related datasets SSV1 and SSV2 contain 110K and 220K videos, respectively, in 174 classes. The scene-based dataset K400 is a large-scale video dataset comprising 300K video clips in 400 human action classes. For text-video retrieval, we adopt three well-known benchmarks, including MSR-VTT <cit.>, MSVD <cit.> and VATEX <cit.>. MSR-VTT consists of 10K videos with 20 textual descriptions for each video. We split the dataset following <cit.>, with 9K videos for training and 1K videos for testing. MSVD consists of 1,970 video clips with approximately 80K descriptive sentences, where the train, validation, and test sets are split into 1,200, 100, and 670 videos.
VATEX is a relatively large dataset, containing 34,991 videos with multiple annotations. There are 25,991 videos for training, 1,500 videos for validation, and 15,500 videos for testing. Implementation Details. All experiments are implemented in PyTorch. For both action recognition and text-video retrieval, we employ OpenAI-CLIP <cit.> for ViT-B/16 and ViT-L/14, and EVA-CLIP <cit.> for ViT-E/14. For action recognition, <ref> presents the configuration list for Kinetics-400 <cit.> and Something-Something V1&V2 <cit.>. For text-video retrieval, we freeze all the ViT encoders except the final linear projection and update the Side4Video and text encoder parameters. We construct Side4Video with 12, 24, and 32 layers for ViT-B/16, ViT-L/14, and ViT-E/14, respectively. For all models, the dimensionality of the side network is set to 320. Following CLIP4Clip <cit.>, we use a unified training setting for all the datasets (i.e., MSR-VTT <cit.>, MSVD <cit.> and VATEX <cit.>). We set the text length to 32 and the video length to 12. We train our model with a batch size of 128 for 5 epochs with the Adam optimizer (β_1 = 0.9, β_2 = 0.98). The initial learning rate is 1e-7 for the CLIP module and 1e-4 for the new modules. | http://arxiv.org/abs/2311.15769v1 | {
"authors": [
"Huanjin Yao",
"Wenhao Wu",
"Zhiheng Li"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127123942",
"title": "Side4Video: Spatial-Temporal Side Network for Memory-Efficient Image-to-Video Transfer Learning"
} |
Department of Physics and Astronomy, University of the Western Cape, P/B X17, Bellville 7535, South Africa Department of Physics and Astronomy, University of the Western Cape, P/B X17, Bellville 7535, South Africa iThemba LABS, P.O. Box 722, Somerset West 7129, South Africa [email protected] Department of Physics and Astronomy, University of the Western Cape, P/B X17, Bellville 7535, South Africa iThemba LABS, P.O. Box 722, Somerset West 7129, South Africa Cyclotron Institute and Department of Physics & Astronomy, Texas A&M University, College Station, Texas 77843, USA Department of Physics and Astronomy and the Facility for Rare Isotope Beams, Michigan State University, East Lansing, Michigan 48824-1321, USA School of Physics, University of the Witwatersrand, Johannesburg 2050, South Africa Department of Physics and Astronomy, University of the Western Cape, P/B X17, Bellville 7535, South Africa Department of Physics and Astronomy, University of the Western Cape, P/B X17, Bellville 7535, South Africa Instituto de Física, Universidad Nacional Autónoma de México, Av. Universidad 3000, Mexico City 04510, Mexico iThemba LABS, P.O. Box 722, Somerset West 7129, South Africa Department of Physics and Astronomy, University of the Western Cape, P/B X17, Bellville 7535, South Africa Department of Physics, University of Stellenbosch, Private Bag X1, 7602 Matieland, Stellenbosch, South Africa iThemba LABS, P.O. Box 722, Somerset West 7129, South Africa School of Physics, University of the Witwatersrand, Johannesburg 2050, South Africa Department of Physics and Astronomy, University of the Western Cape, P/B X17, Bellville 7535, South Africa Department of Physics and Astronomy, University of the Western Cape, P/B X17, Bellville 7535, South Africa iThemba LABS, P.O. Box 722, Somerset West 7129, South Africa iThemba LABS, P.O. Box 722, Somerset West 7129, South Africa Background: The nucleosynthesis of several proton-rich nuclei is determined by radiative proton-capture reactions on unstable nuclei in nova explosions. One such reaction is ^23Mg(p,γ)^24Al, which links the NeNa and MgAl cycles in oxygen-neon (ONe) novae. Purpose: To extract ^23Mg(p,γ)^24Al resonance strengths from a study of proton-unbound states in ^24Al, produced via the ^24Mg(^3He,t) reaction. Methods: A beam of ^3He^2+ ions at 50.7 MeV was used to produce the states of interest in ^24Al. Proton-triton angular correlations were measured with a K=600 QDD magnetic spectrometer and a silicon detector array, located at iThemba LABS, South Africa. Results: We measured the excitation energies of the four lowest proton-unbound states in ^24Al and place lower limits on Γ_p/Γ values for these four states. Together with USD-C shell-model calculations of partial gamma widths, the experimental data are also used to determine resonance strengths for the three lowest ^23Mg(p,γ)^24Al resonances. Conclusions: The energy of the dominant first ^23Mg(p,γ) resonance is determined to be E_r = 481.4 ± 1.1 keV, with a resonance strength ωγ = 18 ± 6 meV. Study of proton-unbound states in ^24Al relevant for the ^23Mg(p,γ) reaction in novae G. F. Steyn January 14, 2024 ======================================================================================= § INTRODUCTION The ^23Mg(p,γ)^24Al reaction is a crucial link between the NeNa and MgAl cycles in oxygen-neon (ONe) novae.
Consequently, an accurate determination of this stellar reaction rate is critical for an improved understanding of elemental abundances up to Ca <cit.>. At peak nova temperatures (T_9 = 0.2-0.4), the dominant contribution to this process is through the first resonance above the proton threshold in ^24Al (cf. Fig. <ref>), at E_x = 2345.1 ± 1.4 keV <cit.>. Direct measurements of the ^23Mg(p,γ) reaction rate are challenging, primarily because of small cross sections at astrophysically relevant energies and the difficulty in producing intense ^23Mg radioactive ion beams with sufficient purity. As a result, there have been several attempts to determine the ^23Mg(p,γ) reaction rate indirectly <cit.>, using theoretical estimates of the partial proton and gamma widths of relevant states in ^24Al. For isolated narrow resonances, the resonant contribution for each level can be obtained from its resonance strength ωγ = (2J_r+1)/8Γ_p Γ_γ/Γ, where J_r is the spin of the resonance, Γ_p and Γ_γ are the partial proton and gamma widths of the resonant state, and Γ = Γ_p + Γ_γ is its total width. To date, the only direct ^23Mg(p,γ) measurement was performed by Erikson et al. <cit.> at the TRIUMF-ISAC facility, where a radioactive ^23Mg ion beam was incident on a hydrogen gas target at the DRAGON recoil spectrometer <cit.>. Prompt γ rays were detected with an array of bismuth germanate (BGO) scintillators surrounding the target, while the recoils were identified using a combination of ionization chamber and microchannel plate (MCP) detectors located at the focal plane of the spectrometer. However, the experiment was severely affected by a dominant time-varying ^23Na contamination in the beam <cit.>. This made it difficult to position the resonance at the center of the gas target and obtain its ωγ precisely. Nonetheless, the energy of the lowest resonance and its corresponding resonance strength were determined to be E_r = 485.7_-1.8^+1.3 keV and ωγ = 37.8_-15.4^+20.5 meV, respectively <cit.>. Considering the above, in this work we report complementary measurements of proton-unbound states in ^24Al, produced via the ^24Mg(^3He,t)^24Al reaction. § APPARATUS The experiment was performed at the iThemba LABS cyclotron facility, where a 10 pnA, 50.7-MeV dispersion-matched ^3He^2+ beam was incident on an ≈300 μg/cm^2-thick MgF_2 target on a carbon backing. The target was located in the scattering chamber of the K = 600 QDD magnetic spectrometer, which was configured in 0^∘ mode, with the beam stop located inside the first dipole magnet <cit.>. Momentum-analyzed reaction ejectiles were detected in a 1/4^''-thick plastic scintillator at the spectrometer focal plane, after passing through two vertical drift chambers (VDCs). The VDCs determined the horizontal and vertical positions of the ejectiles crossing the focal plane, while the plastic scintillator was used for particle identification (PID) purposes and to generate the data acquisition (DAQ) trigger. The PID plots were generated using the relative time difference between the cyclotron RF and DAQ trigger signals, together with the energy deposited in the scintillator. Fig. <ref> shows the PID plot obtained for this experiment, with the triton group explicitly highlighted. An array of five 400-μm-thick MMM-type Double-sided Silicon Strip Detectors (DSSDs) called CAKE <cit.> was used to detect protons from unbound states in ^24Al, in coincidence with triton focal-plane events.
These detectors were placed upstream of the target location in a backward-facing lampshade configuration and covered ≈ 25% of the total solid angle. Each DSSD comprised 16 ring channels and 8 sector channels. The rings spanned an angular range of θ_lab = 115°-165° for the array. § DATA ANALYSIS §.§ Triton singles The ^24Mg(^3He,t) singles spectrum, obtained with the appropriate triton PID gate, is shown in Fig. <ref>. Because the ^24Mg(^3He,t) reaction Q-value is ∼ 10 MeV lower than for the ^25,26Mg(^3He,t) reactions, one can safely assume negligible contributions from other Mg isotopes in this spectrum. However, other contaminant peaks from carbon and oxygen in the target foil cannot be ruled out. To determine such contributions, we took additional (^3He,t) data with a Li_2CO_3 target, whose spectrum is also shown in Fig. <ref>. Three contaminant peaks are evident in the region of interest (ROI). We associate these peaks with states in ^16F, produced via the ^16O(^3He,t) reaction. All triton peaks were fit using a lineshape function defined by a Gaussian distribution convolved with a low-energy exponential tail <cit.>. Peak centroids (μ_t) and areas (A_t) were recorded for later analysis. We next used known excitation energies in ^24Al <cit.> to perform an in-situ energy calibration of the focal-plane spectrum. This was done using a quadratic fit of the form E_ fit(i) = a_0 + a_1 μ_t(i) + a_2 μ_t(i)^2. For this procedure, we used the Nuclear Data Sheets <cit.> to identify calibration peaks within the range 425 ≤ E_x≤ 3328 keV. The 1.5 and 1.6 MeV states were not included in the analysis, because of possible unresolved doublets in the region <cit.> and the presence of the contaminant ^16F ground-state peak. In the first trial, the above calibration yielded a poor fit, with a reduced chi-squared value of χ^2/ν = 5.1. Further analysis showed that more reasonable agreement (χ^2/ν = 1.2), with a p value of 31%, was only obtained if we excluded two other points (the 500 and 1261 keV states) from the calibration. The residuals from these two fits are shown in Fig. <ref>. We use the results of this regression analysis to independently determine the excitation energies of observed states in ^24Al that are relevant for the ^23Mg(p,γ) reaction rate. These are listed in Table <ref>. For the lowest ^23Mg(p,γ) resonance, our calibration yields E_x = 2345.5 ± 1.1 keV, which is in excellent agreement with a previous γ-ray measurement <cit.>. Our data do not show explicit evidence of the recently reported state at 2605 keV <cit.>. The contaminant 721 keV peak from ^16F does not significantly affect our energy determination for the 2345 keV state. However, it is important that this contamination is corrected for when the area of the 2345 keV peak is used to determine its corresponding proton branching ratio. Such a correction is trivial for this case, thanks to the additional high-statistics triton peak associated with the 424 keV state in ^16F. Since this peak appears in both spectra (cf. Fig. <ref> around channel number 390), the correction factor was determined from the relative areas of both contaminant peaks. §.§ Triton-proton coincidences As mentioned previously, charged-particle-like events from ^24Mg(^3He,t)^24Al^*(p)^23Mg were detected using the DSSD array described in Ref. <cit.>. The energy and timing signals for the array were digitized using CAEN V785 ADCs and V1190A TDCs, respectively.
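Before turning to the coincidence selection, we note for reference that the quadratic focal-plane calibration described in the previous subsection reduces to a standard weighted least-squares fit. A minimal sketch is given below; the centroid, energy and uncertainty arrays are hypothetical placeholders, not the values retained in the analysis.

```python
import numpy as np

# Hypothetical placeholders for retained calibration peaks (channels, keV).
centroids = np.array([350.0, 1210.0, 2050.0, 2870.0])
energies = np.array([425.0, 1090.0, 2810.0, 3328.0])
sigma_E = np.array([1.0, 1.2, 1.4, 1.5])

coeffs = np.polyfit(centroids, energies, deg=2, w=1.0 / sigma_E)  # a2, a1, a0
E_fit = np.polyval(coeffs, centroids)
chi2 = np.sum(((energies - E_fit) / sigma_E) ** 2)
ndf = len(energies) - 3                     # three fitted parameters
print(f"chi2/nu = {chi2 / ndf:.2f}")
```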
Triton-proton (t-p) coincidences were selected by gating on the prompt TDC timing peak, which was around 40 ns wide, and by further imposing an energy condition on the sector and ring channels (so that they are within 300 keV). The DSSDs were energy calibrated using a ^226Ra α source at a later time, when the beam was off. Fig. <ref> shows the coincident t-p energy loci obtained from this experiment. Since each ring of the DSSD array corresponds to a polar angle (θ) with respect to the beam axis, the proton branching ratios B_p(θ) can be determined by gating on the relevant proton groups in Fig. <ref> and making use of the formula B_p(θ) = N_tp(θ)/N_t1/ϵ(θ). Here N_tp(θ) are the registered t-p coincidences for a particular ring, N_t are the corresponding triton singles, and ϵ(θ) is the proton detection efficiency of the ring. For states with definite spin and parity, these yields are expected to follow the simple angular distribution <cit.> B_p(θ_c.m.) = ∑_k=even A_k P_k(cosθ_c.m.), whose integrated yield gives the total proton branching ratio for each state Γ_p/Γ = ∫_-1^1B_p(θ_c.m.) d(cosθ_c.m.) = A_0. To perform the above analysis, we first determined ϵ(θ) for each active ring of the DSSD array using GEANT4 Monte Carlo simulations. These values were used to obtain an initial set of B_p(θ) for the first four ^23Mg(p,γ)^24Al resonances. Next, a similar analysis was also performed on the ^16O(^3He,t)^16F^*(p) data, obtained with the Li_2CO_3 target. As ^16F is unbound and Γ_p/Γ = 1 for its observed states <cit.>, this important procedure determined an effective normalization[A weighted mean of the results from the 424 and 721 keV-state data yielded a normalization factor κ = 1.51 ± 0.04.] to correct the ^24Al results, which had previously relied only on simulated efficiencies. The renormalized t-p distributions for the first four proton-unbound states in ^24Al are shown in Fig. <ref>,[As the coincidence data for the 2345 and 2523 keV states were statistics-limited, for these cases the angular yield in each bin was obtained by combining data from 4 adjacent strips in the DSSDs.] with the converged fit results for A_0 and their associated ± 1σ uncertainties shown in the figure insets. If one assumes Gaussian distributions for these values, it is evident that their coverage probabilities also include disallowed regions of parameter space, with Γ_p/Γ > 1. Consequently, we use the fit results in Fig. <ref> to present lower limits on Γ_p/Γ values for all four states. These are listed in Table <ref>. § RESULTS Although we are unable to quote Γ_p/Γ values with sufficient precision, it is still possible to extract meaningful resonance strengths from our data, particularly for the first three states.[Since Γ_p/Γ≈ 1 for the 1007 keV resonance, its ωγ value is given by its partial gamma width.] For this part of the analysis we relied on shell-model-calculated partial gamma widths (Γ_γ), obtained using a recently developed USD-C isospin non-conserving Hamiltonian <cit.>. Next, we used Monte Carlo simulations to generate Gaussian-distributed random deviates around the measured central A_0 values. Each simulated variable A_0^sim was used to calculate a corresponding χ^2 with respect to the measured data points in Fig. <ref>.
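A schematic version of this sampling procedure, continued below, can be written in a few lines of Python. The inputs (A_0, its uncertainty, J_r, and the shell-model Γ_γ) are hypothetical placeholders, and the actual analysis scores each draw by its χ² against the angular-distribution data rather than simply histogramming the results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs for a single resonance (placeholders only).
A0_hat, sigma_A0 = 0.90, 0.10      # fitted A_0 and its 1-sigma uncertainty
J_r = 3                            # assumed resonance spin
gamma_g = 2.0e-2                   # shell-model Gamma_gamma in eV (placeholder)

A0 = rng.normal(A0_hat, sigma_A0, size=100_000)
A0 = A0[(A0 > 0.0) & (A0 < 1.0)]   # rejects draws with Gamma_p/Gamma_gamma < 0

ratio = A0 / (1.0 - A0)            # (Gamma_p / Gamma_gamma)_sim
g = (2 * J_r + 1) / 8.0            # (2J_r+1)/[(2j_p+1)(2j_t+1)] with j_t = 3/2
wg_meV = g * gamma_g * A0 * 1e3    # omega-gamma in meV, since Gamma_p/Gamma = A_0
print(np.percentile(wg_meV, [16, 50, 84]))
```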
Simultaneously, the simulations also determined associated ωγ values, making use of the relation (Γ_p/Γ_γ)_sim = A_0^sim/(1-A_0^sim). For each state, the simulated ωγ that yielded the minimum χ^2 with respect to our data[Events that yielded Γ_p/Γ_γ < 0 were rejected from the analysis.] was accepted as the true value, with its ± 68% CL statistical uncertainties obtained from the values that yield χ^2 = χ^2_min+1. Clearly, since ωγ is proportional to Γ_γ, a more realistic uncertainty on the former should include a contribution from the shell-model calculation. In order to estimate these uncertainties, we performed similar calculations of level lifetimes in the mirror ^24Na nucleus, which were then compared with experimentally measured values <cit.>. This comparison is shown in Table <ref>, based on which we conservatively assign a 30% relative uncertainty to each calculated Γ_γ value for ^24Al. These systematic uncertainties from theory are the dominant contributions to the final uncertainties in our extracted ωγ values, which are listed in Table <ref>. The final values from Table <ref> were used to evaluate ^23Mg(p,γ) reaction rates with the RatesMC Monte Carlo code described in Refs. <cit.>. The non-resonant (direct capture) contribution was determined from a polynomial expansion of the S-factor S(E) ≈ S(0) + S'(0)E + 1/2S''(0)E^2 keV b, whose coefficients were taken from Ref. <cit.>. The results for individual resonances identified in this work are shown in Fig. <ref>, with the fractional contribution of each component shown in Fig. <ref>. § SUMMARY To summarize, this work reports a study of states in ^24Al, which are important to determine the ^23Mg(p,γ)^24Al nuclear reaction rate in ONe novae. Together with shell-model calculations of partial gamma widths, our experimental data are used to determine resonance strengths for the first three ^23Mg(p,γ) resonances. Our measured excitation energy for the dominant lowest resonance is in excellent agreement with a previous γ-ray measurement <cit.>. Making use of the most recent atomic mass data compilation <cit.>, this translates to a resonance energy E_r = 481.4 ± 1.1 keV. Our extracted ωγ value for this resonance is 18 ± 6 meV. These values may be compared with the lower-precision direct-measurement results of Erikson et al. <cit.>, E_r = 485.7_-1.8^+1.3 keV and ωγ = 37.8_-15.4^+20.5 meV, which relied on extensive Monte Carlo simulations and a joint likelihood analysis to determine their median and ± 68% CL values. The quoted uncertainties in their work were the result of a highly asymmetric joint probability density function (PDF), which showed significant tailing and a double-humped structure for its projected ωγ probability distribution. Although our extracted ωγ value is in reasonable agreement with the result reported by Erikson et al., it appears to be more consistent with the left-peaked structure of their PDF for ωγ (cf. Fig. 16 in Ref. <cit.>), which is centered around ∼ 23 meV instead. We are thankful to Alejandro García for insightful discussions. This work was partially funded by the National Research Foundation (NRF), South Africa under Grant No. 85100. B.A.B. acknowledges funding support from the National Science Foundation, under Grant No. PHY-2110365. E.C.V. thanks the NRF-funded MaNuS/MatSci program at the University of the Western Cape for financial support during the course of his M.Sc. | http://arxiv.org/abs/2311.15935v1 | {
"authors": [
"E. C. Vyfers",
"V. Pesudo",
"S. Triambak",
"P. Adsley",
"B. A. Brown",
"H. Jivan",
"M. Kamil",
"D. J. Marin-Lambarri",
"R. Neveling",
"J. C. Nzobadila Ondze",
"P. Papka",
"L. Pellegri",
"B. M. Rebeiro",
"B. Singh",
"F. D. Smit",
"G. F. Steyn"
],
"categories": [
"nucl-ex"
],
"primary_category": "nucl-ex",
"published": "20231127154240",
"title": "Study of proton-unbound states in $^{24}{\\rm Al}$ relevant for the $^{23}{\\rm Mg}(p,γ)$ reaction in novae"
} |
Cluster of Excellence SimTech, University of Stuttgart, Germany [email protected] Institute of Thermodynamics and Fluid Mechanics, Technische Universität Ilmenau, D-98684 Ilmenau, Germany Turbulent flow over permeable interfaces is omnipresent and features complex flow topology. In this work, a data-driven, end-to-end machine learning model has been developed to model the turbulent flow in porous media. To this end, we have derived a non-linear reduced-order model with a deep convolution autoencoder network. This model can reduce the highly resolved spatial dimensions, which are a prerequisite for direct numerical simulation, by 99%. A downstream recurrent neural network has been trained to capture the temporal trend of the reduced modes, and it is thus able to provide the future evolution of the modes. We further evaluate the trained model's capability on a newer dataset with a different porosity. In such cases, fine-tuning can reduce the training effort (by up to two orders of magnitude) for a model with a limited dataset (10%) and limited knowledge, and still show good agreement on the mean velocity profile. In particular, the fine-tuned model shows better agreement in the porous domain than in the channel and interface areas, indicating that the topological features are less challenging for training than the multi-scale nature of the turbulent flows. Leveraging the current model, we find that even quick fine-tuning, achieving a reduction in training time by approximately 𝒪(10^2), still results in effective flow predictions. This promising discovery encourages the fast development of a substantial number of data-driven models tailored for various types of porous media. The diminished training time substantially lowers the computational cost when dealing with changing porous topologies, making it feasible to systematically explore interface engineering with different types of porous media. Overall, the data-driven model shows good agreement, especially for the porous media, which can aid DNS and reduce the burden of resolving this complex domain during the simulations. Fine-tuning is able to reduce the training cost significantly and maintain an acceptable accuracy when a new flow condition comes into play. Non-intrusive, transferable model for coupled turbulent channel-porous media flow based upon neural networks Sandeep Pandey January 14, 2024 ============================================================================================================ § INTRODUCTION Turbulent flow across permeable interfaces is a phenomenon that is pervasive in both natural environments and engineering contexts, manifesting in diverse applications ranging from sediment transport to transpiration cooling in gas turbines. Porous media are often characterized by a complex spatial topology <cit.>, and numerous properties have been recognized to exert a substantial influence on both mass and momentum transport at the interface between porous and free-flowing media. Despite this understanding, the task of comprehending and optimizing the macroscopic and microscopic characteristics of porous media continues to present a challenge. The underlying reasons for this complexity include the inherent difficulties associated with conducting experimental measurements at the pore scale <cit.> and the substantial computational resources required for numerical simulations that adequately resolve the entire range of relevant scales.
Historical examinations of turbulent flow over permeable substrates have investigated the impact of variable permeability on surface flows. In scenarios where interfacial permeability is low, the turbulent surface flow exhibits characteristics akin to a canonical boundary layer, owing to the limited disruption of near-wall structures. Conversely, when permeability is augmented, large-scale vortical structures begin to manifest within the surface flow. This phenomenon has been ascribed to Kelvin-Helmholtz (KH) type instabilities, originating from the inflection points present in the mean velocity profile. In more recent times, the exploration of porous media as a means of achieving drag reduction has catalyzed a series of comprehensive studies focused on the subject of anisotropic permeability. Direct Numerical Simulation (DNS) provides a distinct advantage for observing and analyzing the physics of turbulence within a constrained small spatial domain, extending beyond traditional applications such as channel <cit.> and pipe flows <cit.>. <cit.> conducted an investigation into the influence of the anisotropic permeability tensor within porous media in a regime characterized by higher permeability. Their findings revealed that both streamwise and spanwise permeabilities contribute to turbulence enhancement, while vertical permeability alone does not exert a similar influence. The effect on turbulence is particularly pronounced in the presence of porous walls exhibiting streamwise permeability, as this configuration promotes the development of large-scale streamwise perturbations induced by the Kelvin–Helmholtz instability. However, despite the insights offered by DNS, its utilization for extensive parametric studies, often required in industrial applications, remains a costly solution. In such contexts, it is still customary to employ either analytical models or advanced turbulence models, which tend to provide satisfactory accuracy within established applications. Nevertheless, these models are not without their challenges, as they can exhibit divergent behavior, and the calibration process can prove to be both intricate and time-consuming. Machine learning (ML) is on the rise, having successfully tackled many complex tasks ranging from self-driving cars <cit.>, drug discovery <cit.>, and weather prediction <cit.> to, more recently, state-of-the-art large language models such as GPT for natural language processing <cit.>. Machine learning has been democratized heavily in the last decade, and some of the reasons include the vast availability of open-source libraries, an active community, a huge volume of incoming data from physical as well as numerical experiments, and ease of access to high-performance computing resources via the cloud. In recent years, ML has shown potential in the modeling of various fluid flow use-cases <cit.>. Typically, big-data ML problems rely upon some kind of feature extraction process, e.g., a conventional feature selection process, the long-standing proper orthogonal decomposition (POD) <cit.>, the relatively new dynamic mode decomposition (DMD) <cit.>, or ML-based methods <cit.>. Feature extractors transform the raw input data into information-preserving feature vectors with the primary goal of reducing the dimensionality of the data. Several attempts have been made in the past few years to combine POD or DMD with machine learning <cit.>. However, these methods have certain limitations, especially when the flow advances into turbulent regimes.
Therefore, an end-to-end machine learning system has been suggested where POD and DMD are replaced by a non-linear DNN <cit.>. A common architecture is a deep convolution autoencoder (DAE) that extracts the features for further processing and decision making <cit.>. Turbulent flow problems often involve tempo-spatial data; therefore, the extracted features can be used to train a downstream network to build a data-driven dynamic model. A common choice is a recurrent neural network (RNN), which preserves the sequential relationships in a time series <cit.>. A combination of such approaches has shown exemplary results in terms of data compression and modeling <cit.>. In this work, we make an attempt to model complex flow behaviors in a porous medium with the help of a data-driven approach. The flow is in a turbulent regime and, due to the presence of porosity, the geometry is resolved to a finer scale. This setup makes the end-to-end ML task challenging and equally interesting. The goal is to combine a deep learning-based feature extraction step with a downstream recurrent neural network that models the temporal evolution. An echo state network was chosen as the RNN due to its proven efficacy in modeling multi-dimensional tempo-spatial data. Trained models are restricted to the domain of training and work poorly beyond the training data domain, which could be defined by a specific parameter such as the Reynolds number in fluid flow problems. A commonly used strategy is transfer learning, where the objective is to transfer the knowledge gained from training on a source task to a related target task with a limited dataset <cit.>. Fine-tuning, which can be considered a part of transfer learning, is one methodology where an already trained and optimized model is tuned with another similar set of data for a similar task. This can decrease the training time significantly, in addition to reducing the neural architecture search effort and the training data requirements <cit.>. Fine-tuning can be performed at the entire-network level, where we are allowed to retrain all the layers while initializing the parameters with the pre-trained model. Alternatively, several layers can be frozen and retraining is allowed on only a few layers. The former method is suitable when the network size is very large and data is limited, although the first approach could lead to overfitting <cit.>. Therefore, we also present results from fine-tuning, where a trained model from a given data domain was transferred to a different data domain. The remainder of the paper is divided into 4 sections. Section <ref> describes the fluid flow problem in porous media along with the underlying governing equations and numerical method. Section <ref> explains the hierarchical data-driven model in detail. Section <ref> describes the results from training and fine-tuning. With section <ref>, we conclude the work and give a brief outlook. § SYNTHETIC DATA GENERATION High-fidelity simulations such as Direct Numerical Simulation (DNS) <cit.> can provide reliable spatially and temporally resolved data for the modeling. In our DNS, the three-dimensional incompressible Navier-Stokes equations are solved in a non-dimensional form as follows: ∂ u_j/∂ x_j=0 ∂ u_i/∂ t+∂ u_i u_j/∂ x_j=-∂ p/∂ x_i+1/Re_D∂^2 u_i/∂ x_j ∂ x_j+Πδ_i1 Here, Π represents a constant pressure gradient in the mean-flow direction.
The governing equations are normalized by employing the half-width of the entire simulation domain, denoted as H (refer to figure <ref>a), and the mean bulk velocity U_b within the channel region specified by y/H=[0,1]. The velocity components in the streamwise (x), wall-normal (y), and spanwise (z) directions are henceforth expressed as u, v, and w, respectively. The domain dimensions (L_x/H × L_y/H × L_z/H) are fixed at 10 × 2 × 0.8π for all scenarios considered. The lower half of the domain (y/H=[-1,0]) encompasses the porous media, while the upper half (y/H=[0,1]) constitutes the free channel flow. The porous layer is constructed with 50 cylindrical elements aligned in the streamwise direction and 5 rows positioned in the wall-normal direction, as delineated in figure <ref>. The distance D between two adjacent cylinders is consistently set at D/H=0.2. No-slip boundary conditions are enforced on the cylinders, the upper wall, and the lower wall, whereas periodic boundary conditions are implemented in both the streamwise and spanwise directions. The spectral/hp element solver Nektar++ is utilized to solve Eqns. <ref>-<ref> <cit.>. The geometric configuration in the x-y plane is discretized utilizing quadrilateral elements, with localized refinement in the vicinity of the cylinders (refer to figure <ref>(a)). Localized element expansions are implemented based on the modified Legendre basis <cit.>. Flexible polynomial orders are employed across the wall-normal range through continuous Galerkin projection. Specifically, the polynomial order within the free-flow region, defined as y/H=[0.2, 1], is set at P=6-7. The near-wall region and the upper two rows of cylinders, where y/H=[-0.4, 0.2], are resolved with a higher order of P=8-9, whereas a lower order of P=5 is used in the deeper part of the cylinder array (y/H=[-1, -0.4]). The spanwise direction is discretized with a Fourier spectral method, and the 2/3 rule is implemented to circumvent aliasing errors. The temporal integration is performed with a second-order mixed implicit-explicit (IMEX) scheme, with the time step fixed at Δ T/(H/U_b)=5×10^-4. DNS cases are performed with varying porosity φ=0.5, 0.6, 0.7, and 0.8, where porosity is defined as the ratio of the void volume to the total volume of the porous structure. The parameters of the simulated cases are listed in TABLE <ref>, where the cases are named after their respective porosity. The superscripts (·)^p and (·)^s represent permeable-wall and smooth-wall side variables, respectively. Variables with superscript ^+ are scaled by the friction velocity u_τ of their respective side and the viscosity ν. It should be noted that the distance between the cylinders remains constant, while the porosity is modified through variation of the cylinder radii. The normalized cylinder radius is found to be within the range r_c^p+=r_c u_τ^p/ν=42-52 for all examined cases (refer to TABLE <ref>), thereby implying that the effects of surface roughness are presumed to be consistent across different scenarios. For all instances, the Reynolds number associated with the top wall boundary layer is configured at Re_τ^s=δ^su_τ^s/ν≈180 (δ represents the distance between the position of maximal streamwise velocity and the wall). This configuration aids in minimizing fluctuations in the top wall boundary layer.
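For orientation, if the cylinders are taken to sit on a square lattice of pitch D (an assumption consistent with the constant spacing D/H=0.2 quoted above, not a statement from the solver setup), porosity and cylinder radius are related by φ = 1 - π r_c^2/D^2, which the short helper below inverts.

import math

def cylinder_radius(phi, D):
    # invert phi = 1 - pi * r_c**2 / D**2, assuming one cylinder per D x D cell
    return D * math.sqrt((1.0 - phi) / math.pi)

# e.g. phi = 0.5 with D/H = 0.2 gives r_c/H ≈ 0.08 under this assumption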
In the region of the upper smooth wall, the streamwise cell size spans 4.1≤Δ x^s+≤6.3, and the spanwise cell size is confined to below Δ z^s+=5.4. On the side corresponding to the porous media, the value of Δ z^p+ is capped at 8.4, whereas Δ x^p+ and Δ y^p+ are refined by polynomial refinement of the local mesh <cit.>. The cumulative number of grid points varies between 88×10^6 (C05) and 110×10^6 (C06), with each cylinder within the porous domain resolved using 80 to 120 grid points along its perimeter. The spatial resolution of the current work aligns closely with that of preceding DNS studies <cit.>. Additionally, the utilization of a high-order scheme in conjunction with a body-fitted mesh confers a notable advantage in resolving fine scales and necessitates fewer grid points compared to both finite volume and immersed boundary methods to attain equivalent accuracy <cit.>. Furthermore, a comparative analysis of our grid resolutions with the Kolmogorov length scale, denoted as η=(ν/ϵ)^1/4, at the interface has been conducted. For all cases, the findings show that (Δ x/η)_y=0⩽2, (Δ y/η)_y=0⩽1, (Δ z/η)_y=0⩽4.5, thereby validating that the current resolution is indeed sufficient. § DATA-DRIVEN MODELLING Data-driven modelling (DDM) is an approach to harness the available data about a system and establish a connection between the system state variables (input, internal and output variables) without explicit knowledge of the physical behaviour <cit.>. In this work, we have synthetic data derived from the DNS experiments, and the end model, built upon these data, should be able to predict the temporal evolution of a state variable. Figure <ref> depicts our end-to-end 2-step ML framework. To have an extendable framework, we chose to first extract the features with the help of a deep encoder and feed the extracted features to a second-level recurrent neural network (RNN). This architecture is inspired by our earlier work <cit.>. §.§ Feature extraction with autoencoder The deep convolution autoencoder (DAE) operates within the framework of unsupervised machine learning, meaning that the model's training does not depend on labeled data. The learning objective is to distill efficient representations or features in a low-dimensional latent space, with the aim of reproducing the original data while minimizing the reconstruction error. In more formal terms, an autoencoder encodes input data I into a compact form c=encode(I), where c∈ℝ^l. It then decodes this compact form back into a reconstructed representation Î=decode(c). The difference between the reconstructed and original data, given by Î-I, constitutes the reconstruction error. In the specific implementation discussed, 2-dimensional data were sourced from DNS, as delineated in <ref>. The architecture of the autoencoder is deep, comprising several convolution layers. Each layer functions as a local feature extractor and shares its weights across the network. Each convolution layer is followed by a pooling layer (specifically, a max pooling layer in this instance) which diminishes the dimensionality of the data and emphasizes non-repetitive key features. This contributes to the network's translation invariance property. The encoder segment of the network concludes with a bottleneck layer, which harbors the latent space or extracted features. These features are subsequently channeled into a decoder section, which attempts to reconstruct the original DNS data.
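A minimal PyTorch sketch of such a converging-diverging architecture is given below; the layer widths are placeholders chosen for illustration and do not reproduce the exact architecture summarized later in TABLE <ref>.

import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Illustrative deep convolutional autoencoder for 2D flow snapshots."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # halve each spatial dimension
            nn.Conv2d(16, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # bottleneck: latent features c
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),   # unpooling
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                         # data are scaled to [0, 1]
        )

    def forward(self, x):                         # x: (batch, 1, N_x, N_y)
        c = self.encoder(x)                       # compressed representation
        return self.decoder(c)                    # reconstructed snapshot

# Training minimizes the reconstruction error, e.g.
# loss = nn.MSELoss()(model(x), x); loss.backward()
# (spatial sizes should be padded/cropped to multiples of 4 for exact shapes)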
The training process initiates with random weight initialization across the network and iteratively refines these weights according to the loss function, employing backpropagation for optimization. Consider three-dimensional input data I∈ℝ^ N_x × N_y × C_in with C_in being the number of input channels. Here, C_in=1 as we load the streamwise velocity field u_x only into the network and thus I∈ℝ^ N_x × N_y. This input is convoluted with a kernel k_m as shown in Eqn. <ref>. Conv(m,I)=ψ( b_m+ ∑_i=1^C_ink_m ∗I_i ) for m ∈ M Within the encoder segment of a DAE, multiple convolutional layers are commonly succeeded by pooling layers. These layers operate by deploying a configurable sliding window across the input, aggregating the information within each window into a single output element via a specified aggregation function. A frequent choice for this function is the retention of the maximum element within each window, as implemented in a max-pooling layer. This approach serves to diminish the dimensionality of the input, represented by I, typically through the strategic selection of the pooling layer's step size. The cumulative effect of the encoding sequence is to map the input data into a latent space ℝ^l, where l ≤ N_x × N_y, and N_x and N_y denote the dimensions of the initial input. Following the encoder, the decoder of the DAE primarily mirrors the encoder's design, with a critical distinction: it substitutes max-pooling layers with upsampling layers to expand the data back to its original domain size. This expansion process, known as unpooling, essentially doubles the data's dimensions. In the context of the work discussed, nearest-neighbor interpolation is employed to facilitate the upsampling. The processing of intricate turbulence data necessitates a sophisticated, deep architecture composed of multiple convolutional layers, augmented by corresponding upsampling layers. Such an arrangement ensures the attainment of a faithful representation, minimizing information loss. In the comprehensive system, subsequent to training all relevant networks, the DAE's encoder is applied to extract a compressed form, denoted by ĉ_t, from a specific turbulence snapshot I_t. This condensed snapshot is then input into an RNN, which predicts the subsequent reduced representation ĉ_t+1. The DAE's decoder translates ĉ_t+1 back into the input domain, recreating the anticipated turbulence snapshot. Through the integrated use of deep convolutional autoencoders and recurrent neural networks, the system attains efficient and insightful compression of complex turbulence data. This approach enhances the capacity for predictive modeling and nuanced analysis within the realm of fluid mechanics. §.§ Echo State Network (ESN) The ESN is closely related to reservoir computing (RC), and it shines with sequential data where a temporal relationship is present. In a typical ESN setup, input data is fed to an input layer which is connected to a big sparse reservoir, which is in turn connected to an output layer.
The ESN differs from a typical RNN in that it does not require back-propagation and is therefore inherently fast to train. More details can be found in our earlier work <cit.>. In this work, the ESN acts as a downstream network which learns to predict the temporal evolution of the features obtained by the encoder section of the DAE in an auto-regressive manner. The predicted values can be fed back to the decoder section of the DAE to reproduce the feature of interest. Therefore, during deployment, we are only interested in the trained ESN and the decoder, while the encoder can be discarded. The reservoir is represented as an N × N adjacency matrix denoted by W_η^(r). Its initialization relies on a vector of hyperparameters η, and it is utilized to encode the Echo State Network's (ESN) input into a hidden representation, known as the reservoir state. This state accumulates information from past inputs. N is also referred to as the reservoir dimension and is typically chosen to be much larger than the dimension of the latent space, such that N ≥ l. Additionally, the reservoir is updated at each time step with new input. More precisely, during each time step t, the ESN's input c_t influences the computation of an updated reservoir state r_t ∈ℝ^N. This computation is given by r_t=(1-α)r_t-1+αtanh(W^(in)c_t +W^(r)_ηr_t-1) The parameter α denotes a leakage rate determining the blending of the previous state and current input. The random matrices W^(in) and W^(r)_η are initialized at the start of the training process and remain constant throughout. The ESN's output at time step t is acquired by ĉ_t+1=W^(out)r_t In the ESN case, only the final output layer is trained; no backpropagation over several epochs is required as for convolutional networks. This implies that the components of the output matrix W^(out) have to be optimized to give a minimal cost function that quantifies the difference between training data and ESN output. The cost function C is given by C[W^(out)]=1/n_tr∑_t=1^n_tr (W^(out)r_t- c_t )^2 + β∑_i=1^l ‖W^(out)_i ‖_2^2 and has to be minimized corresponding to W^(out)_∗ = arg min C(W^(out)) Here, W^(out)_i denotes the ith row of W^(out) and ‖·‖_2^2 the squared L_2 norm. The number of training samples is n_tr. Eqn.<ref> and Eqn.<ref> are known as ridge regression, with the parameter β also known as the Tikhonov regularization parameter. The last term suppresses large values of the rows of the output matrix. This regression problem is solved by W^(out)_∗ = YS^T(SS^T+βId)^-1 where S collects the reservoir states r_t column-wise and Y the corresponding target outputs c_t. The hyperparameters of the Echo State Network (ESN) encompass several key parameters, namely the reservoir size N, node density D (N_active/N) representing the percentage of active nodes, the spectral radius of the reservoir ρ(W^(r)_η), the leakage rate α, and the Tikhonov regularization parameter β used in the cost function C. These hyperparameters are collectively represented as the vector η=(N,D,ρ,α,β). Notably, the training process is inherently rapid when compared to other types of Recurrent Neural Networks (RNNs). Upon successful training and hyperparameter optimization, the ESN functions as an autonomous dynamical system, where a forecasted compressed snapshot ĉ_t+1 can serve as the subsequent ESN input to predict ĉ_t+2 and so forth. This scenario is often referred to as the closed-loop scenario, wherein the ESN operates in a self-contained and predictive manner.
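The following numpy sketch condenses the above into code. It is a minimal illustration under our own naming conventions (reservoir generation, one-step training by ridge regression, closed-loop prediction), not the exact implementation used in this work; the default hyperparameter values are placeholders.

import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(N, density, rho):
    # sparse random reservoir rescaled to the desired spectral radius rho
    W = rng.standard_normal((N, N)) * (rng.random((N, N)) < density)
    return W * (rho / np.max(np.abs(np.linalg.eigvals(W))))

def esn_train(C, N=500, density=0.16, rho=0.93, alpha=0.94, beta=1e-6):
    # C: (l, T) latent time series from the DAE encoder
    l, T = C.shape
    W_in = rng.uniform(-0.5, 0.5, size=(N, l))
    W_r = make_reservoir(N, density, rho)
    r = np.zeros(N)
    S = np.zeros((N, T - 1))
    for t in range(T - 1):                     # drive the reservoir with c_t
        r = (1 - alpha) * r + alpha * np.tanh(W_in @ C[:, t] + W_r @ r)
        S[:, t] = r
    Y = C[:, 1:]                               # targets: the next snapshots
    W_out = Y @ S.T @ np.linalg.inv(S @ S.T + beta * np.eye(N))  # ridge regression
    return W_in, W_r, W_out

# Closed loop: repeat r = (1-alpha)*r + alpha*tanh(W_in @ c + W_r @ r); c = W_out @ r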
§ RESULTS AND DISCUSSION §.§ Data-driven flow model As mentioned earlier, we first trained a DAE network to extract the features in lower dimensions. For this purpose, the input and output layers have the same dataset, i.e., the snapshots from the DNS consisting of the u-velocity component with a dimension of 1501 × 109 × 1. The neural network has multiple convolution layers combined with max pooling layers, while the decoder section has convolution layers and up-sampling layers, giving a converging-diverging shape. The features or latent modes are obtained at the end of the encoder section with a shape of 24 × 2 × 34, thereby giving a dimensionality reduction of 99%. This could be increased further at the expense of reconstruction loss. TABLE <ref> illustrates the various layers and their shapes. After fixing the DAE architecture, various hyperparameters were obtained by using Bayesian optimization. This particular approach is part of the AutoML paradigm <cit.>, and it assists in achieving an optimized set of parameters in a relatively smaller number of iterations when compared to grid or random search <cit.>. It is worth mentioning that the data were scaled between 0 and 1 before the training. For the entire process, we used 2000 snapshots, of which 60% were used for training, 20% for validation, and the remaining 20% for blind testing. The model has around 1.9 million trainable parameters and reaches a loss of O(10^-4) for both training and validation sets in 50 epochs. Once the DAE is trained, we extract the features for the entire dataset using its encoder section. These encoded data, which have a 99% reduced dimension, were further used to train the second-level RNN, i.e., the ESN. Unlike the DAE, training an ESN is relatively easy and computationally cheaper because of straightforward design choices. The optimized ESN has a reservoir of 3726 nodes, a spectral radius of 0.93, a leaking rate of 0.94, and a reservoir density of 0.16. For the best results, the look-back history period is 4 timesteps, i.e., the model requires 4 timesteps of data as an initialization; then the system can predict auto-regressively. As a first step of validation, we visualize the low-dimensional features from the DAE along with the generated data from the ESN on the blind validation set. We call the method generative because the model works auto-regressively once we provide the initial data. Figure <ref> shows such a time series for 2 of the modes out of 1632. It is impressive that the model maintains a value of 0 without any spurious fluctuations for ϕ_1, while the series fluctuates with the right phase and amplitude for ϕ_10. Figure <ref> further illustrates the probability density function (PDF) of these 2 modes for the DAE and ESN. It reaffirms the generative capability of the ESN, which is able to match the overall distribution of the ground truth (GT), i.e., the DAE output in the case of the ESN. As a next step of validation, a comparison in the high-dimensional space where the flow physics needs to be captured is performed. Figure <ref> displays contour plots of the original DNS data in panels (a) and (d), reconstructed data from the feature space of the DAE in panels (b) and (e), and the decoded value from the prediction of the ESN in panels (c) and (f). One can observe a good qualitative agreement in terms of capturing the flow fields. Both regions, i.e., the main channel flow and the porous flow, can be predicted.
Both the DAE and the ESN were trained on the DNS data and are not able to fully capture the strong turbulent field in the channel, while the flow in the porous media section is reproduced much better because it mostly falls within the laminar regime. The obvious reason is the limited capability of the DAE to capture the entire flow field, because it discards 99% of the input data and uses only 1% of the features, similar to conventional reduced-order modeling approaches such as POD. The derived model with the ESN shows an excellent result in matching the field compared to its GT, i.e., the DAE. This suggests that the entire model can be further improved if a state-of-the-art feature extractor, such as a transformer model, is used instead of the DAE. It is worth mentioning that in our experiments the DAE surpassed conventional as well as some advanced feature extractors, e.g., the variational autoencoder, for this use-case. Often, averages and fluctuations are the critical values for the analysis in industrial applications. Therefore, we further validate the results with mean and fluctuation profiles over the cross-section of the channel. Figure <ref> shows these profiles; the mean velocity profile overlaps with the DNS for both the DAE and the ESN. The velocity fluctuations deviate slightly in the main channel, while they remain zero in the porous section due to the dominating laminar flow. The model also captures well the interface where the flow transitions from laminar to turbulent. For a comprehensive analysis, we include findings from Fourier spectra in our comparison. Figure <ref> contrasts the streamwise spectra E_uu near both the smooth wall at y=19.5 and the permeable wall at y=10.2. Both the DAE and ESN models perform well in capturing the energy at larger scales (k_x < 1). However, neither the DAE nor the ESN successfully models the abundant turbulent kinetic energy found in the inertial subrange, particularly at y=19.5. This limitation is anticipated, given that the DAE model achieves a 99% dimensionality reduction. §.§ Fine-tuning to various porosities The trained model has shown a good agreement in terms of quantitative as well as qualitative characteristics of the flow. However, the trained model is limited to the data domain that was used for the training, which means that the model will not predict accurate results under different parametric conditions. This is one of the biggest downsides of such a data-driven model. Considering the current setup, different porous media with various porosities <cit.> and topologies <cit.> are often compared together, whereas the channel remains unchanged. Variations in porous topology, porosity, and permeability can lead to distinct influences on turbulence modulation, which in turn could impact functionalities like drag reduction, noise absorption, and heat transfer enhancement. We evaluate the use of fine-tuning as a method to extend the versatility of these models to broader data sets. Fine-tuning involves adapting a pre-trained model to new, limited data. Given that the DAE is computationally intensive within this workflow, we seek a substantial reduction of the training time by using fine-tuning.
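In PyTorch terms, the fine-tuning variants compared below differ only in which parameters are left trainable. A sketch follows; the file name and the choice of trainable layer are illustrative, not taken from our actual training scripts.

import torch

model = ConvAutoencoder()                        # architecture from the sketch above
model.load_state_dict(torch.load("dae_c05.pt"))  # weights pre-trained on case C05

# Freeze everything, then unfreeze a single layer (case-B-like fine-tuning):
for p in model.parameters():
    p.requires_grad = False
for p in model.decoder[-2].parameters():         # the last convolution layer
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# Retraining all layers from the pre-trained weights on 10% or 100% of the new
# snapshots corresponds to the other fine-tuning variants discussed below.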
The DAE model was tested using a distinct data set for this assessment. Here, we used case C06, which has a higher porosity φ=0.6. The other parameters, such as the turbulent Reynolds numbers on both wall sides Re_τ^p, Re_τ^s, the permeability √(K_αα)^p+ and the Forchheimer coefficients C_αα^p+, are also quite different between C05 and C06, as shown in TABLE <ref>. TABLE <ref> shows the cases used for the fine-tuning study. It compares the training time and the mean square error of these cases. The case numbering is based on an increasing order of the training time. Case A uses the model trained on C05 without any change. Therefore, its training time is 0, while the error is the highest. Case E is trained from scratch, which is the same as the procedure in the last section. It features the longest training time and the lowest mean square error, showing high accuracy at a high training cost. Cases A and E represent two extreme conditions, whereas cases B, C and D fall between the two extremes. Case D is based on the same model as the original model from C05; however, it is retrained with all data points for case C06. Its training cost of 𝒪(10^2) is one order of magnitude lower than that of case E, whereas the mean square error is only slightly higher, which indicates a worthy trade-off. Case C is essentially the same as case D, however retrained with only 10% of the data for case C06. Hence, its training time of 𝒪(10^1) is again one order of magnitude lower than that of case D, but with a significant increase of the mean square error. Case B inherits the model from case C05 with only one trainable layer for case C06. Its training time is approximately half of that of case C, with a significant increase in the error. In general, both case C and case D are valuable approaches with a reasonable trade-off between accuracy and training cost. Case D offers high accuracy and still a significant reduction of the training cost, whereas case C can be valuable when the training resources are extremely limited. Figure <ref> compares the instantaneous velocity and time-averaged velocity obtained from the ground truth (DNS) with all the fine-tuning cases from TABLE <ref>. In the DNS result (figure <ref>(a)), turbulent structures on multiple scales are resolved by the fine resolution. On the other hand, the low-rank data modeled by the DAE can capture the dominant structures such as the blowing and suction events, for instance, the low-velocity region at the interface (x≈20, y≈10). The fine details are missing due to the truncation of the original data. Compared to case A without additional training (figure <ref>(c)), cases B, C, D and E certainly show more details, consistent with their accuracy. From case A to case E, an increasing amount of fine-scale motion is observed. Regarding the averaged results, all the mean velocity contours show similar behavior, with the maxima in the channel center and the minima deep in the porous domain, which makes it difficult to judge the quality of the modeling from the mean fields alone. Moreover, figure <ref> shows the quantification of the time-averaged streamwise velocity profile and its fluctuation. In the channel region, all the models are able to reproduce the DNS results, even case A without any training on the new DNS data. This is because the channel domain is topologically the same even though the porosity in the connected domain is different. A clear difference is seen in the porous domain as well as in the vicinity of the interface. Case A (in red) is not able to follow the DNS data for either the mean velocity or the velocity fluctuation.
On the other hand, cases B, C and D, empowered by the fine-tuning method, show an excellent modeling of the flow inside the porous domain with regard to the mean velocity profile. This is quite impressive considering that case B has only one trainable layer and a training cost more than two orders of magnitude lower. It suggests that a small amount of data is sufficient to fine-tune the model to reproduce and predict the mean profile in the entire domain. The prediction of the velocity fluctuation in figure <ref>(b) appears much more challenging. Case B shows only a slight improvement and remains close to case A, even though it models the mean velocity profile with no clear discrepancy. Case C (10% new training data and an 𝒪(10^2) reduction of the training time) shows a significant improvement in both the channel and the porous region. Its performance is quite close to that of cases D and E with much higher training cost. Therefore, case C is considered a balanced case in terms of efficiency and accuracy. On the other hand, a further cost reduction from case C to case B is not very significant, whereas the accuracy loss is noticeable. For a more in-depth examination of the fine-tuning results in turbulent channel flows, figure <ref> shows the streamwise spectra E_uu for all fine-tuned cases against DNS results at y=19.5 (close to the top wall) and y=10.2 (close to the porous wall). Across all fine-tuned models, the predictions for the large-scale motion at k_x<1 are highly accurate. However, deviations become apparent at higher streamwise wavenumbers. Although differences in time-averaged velocity fluctuations are noticeable (as seen in figure <ref>), they are less discernible in frequency space when plotted on a log-log scale. Among the cases, cases D and E stand out for their superior performance in the inertial subrange at both positions. To further investigate the performance of the fine-tuned models, figure <ref> displays the instantaneous velocity field u_x in the porous media domain. Case A fails to accurately capture the flow pattern within the porous domain due to the missing input of the new training data. Starting from case B, fine-tuning is able to significantly improve the modeling of the flow inside the porous media even though the training data are quite limited. Increasing the training effort from case B to case E progressively enhances the fidelity. Even though the flow in the porous media is significantly influenced by dispersion caused by the porous structure, there are still instances of non-uniform velocity around y≈8. This is indicative of a transitional regime between laminar and turbulent flow, due to the proximity to the interface. This phenomenon is better captured by cases D and E. The interfacial behavior is of great interest since it enables bi-directional interactions between the channel flow and the porous media <cit.>. On the other hand, it features high complexity involving two different types of geometries as well as a mixed state of turbulent, transitional and laminar flows. Figure <ref> depicts the instantaneous velocity fluctuation at the interface from the DNS and the various reduced-order models (ROMs). In the boundary layer region (x>10), the DNS shows a rich range of scales from the large-scale structures down to the fine motions, whereas the ROMs can only reproduce the larger ones. This is reasonable considering that a ROM has to ignore these details to deliver a low-rank model. Among them, case A shows a filtered modeling of the velocity fluctuation field, whereas more details are retained by the other cases.
Below the interface, the strength of the fluctuations is much weaker, but cases C, D and E can still model it to a certain extent. Case A received no training data from this DNS; therefore, the porous topology is unknown to the model and the prediction is far from satisfactory. § CONCLUSIONS AND OUTLOOK Flows characterized by turbulence over permeable interfaces are ubiquitous and present intricate flow patterns. Addressing the challenges posed by the complex geometry and the mixed state of laminar, transitional and turbulent regimes, our study introduces a data-driven, end-to-end machine learning framework specifically designed to model turbulent flows within a channel coupled with porous media. To achieve this, we have established a non-linear reduced-order model powered by a deep convolution autoencoder network. Impressively, this model can reduce the spatial degrees of freedom of the highly resolved data, integral for direct numerical simulation, by 99%. To capture the time-varying characteristics of the reduced modes, we employed a downstream recurrent neural network. This strategic integration allows the model to accurately predict the future trajectories of these modes. Furthermore, to test the adaptability and robustness of our model, we evaluated its performance on a dataset with a porosity different from that of the original training set. Through fine-tuning, our system has demonstrated the potential to significantly cut down on computational resources, reducing the training effort by up to two orders of magnitude when working with a limited dataset (constituting merely 10% of the full training data). Even with this reduced data input, the results achieved were commendable, particularly with respect to the mean velocity profile. The efficient fine-tuning of our current model significantly reduces the computational time, making it much easier to study the effect of different types of porous media for drag reduction, enhanced heat transfer and noise absorption. This efficiency is particularly advantageous for systematic investigations into interface engineering, allowing for more focused and quicker research cycles. Of particular note is the observation that the fine-tuned model excels within the porous domain in comparison to the channel and interface regions. This suggests that the inherent topological features of the porous media are more amenable to training compared to the challenging multi-scale attributes typical of turbulent flows. This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project SFB1313 (project No. 327154368), and under Germany's Excellence Strategy EXC2075-390740016. In addition, all authors gratefully acknowledge access to the high-performance computing facility Hawk at HLRS, Stuttgart. | http://arxiv.org/abs/2311.15600v1 | {
"authors": [
"Xu Chu",
"Sandeep Pandey"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231127074925",
"title": "Non-intrusive, transferable model for coupled turbulent channel-porous media flow based upon neural networks"
} |
MS-TP-23-50 Quantum kinetic approach to the Schwinger production of scalar particles in an expanding universe Anastasia V. Lysenko^1 ([email protected]) and Oleksandr O. Sobol^1,2 ([email protected]) ^1 Physics Faculty, Taras Shevchenko National University of Kyiv, 64/13, Volodymyrska Street, Kyiv, 01601, Ukraine ^2 Institute for Theoretical Physics, University of Münster, Wilhelm-Klemm-Straße 9, Münster, 48149, Germany We study the Schwinger pair creation of charged scalar particles by a homogeneous electric field in an expanding universe in the quantum kinetic approach. We introduce an adiabatic vacuum for the scalar field based on the Wentzel–Kramers–Brillouin solution to the mode equation in conformal time and apply the formalism of Bogolyubov coefficients to derive a system of quantum Vlasov equations for three real kinetic functions. Compared to the analogous system of equations previously reported in the literature, the new one has two advantages. First, its solutions exhibit a faster decrease at large momenta, which makes it more suitable for numerical computations. Second, it predicts no particle creation in the case of a conformally coupled massless scalar field in a vanishing electric field, i.e., it respects the conformal symmetry of the system. We identify the ultraviolet divergences in the electric current and energy-momentum tensor of the produced particles and introduce the corresponding counterterms in order to cancel them. § INTRODUCTION One of the most fascinating phenomena occurring in a strong electromagnetic field is the Schwinger effect or the creation of particle–antiparticle pairs from the physical vacuum <cit.>. It was predicted almost a hundred years ago and until the present day has not been observed in the laboratory because of the extremely high value of the required electric field, E∼ 10^18 V/m <cit.>. Nevertheless, such enormous fields might exist in the universe, e.g., around compact objects like neutron stars or black holes <cit.> or during the early stages of its evolution (see, e.g., Refs. <cit.>). The Schwinger effect in the early universe is often considered within the context of inflationary magnetogenesis models which include the coupling of the electromagnetic field to the inflaton or to the spacetime curvature <cit.>. The simplest scenario for the Schwinger pair production during inflation which can be studied analytically is the case of a constant and homogeneous electric field (with the possible presence of a collinear magnetic field) in de Sitter spacetime, which has been studied in much detail in the literature <cit.>. The results of this simple approach have been used for a lot of phenomenological applications <cit.>; however, in realistic inflationary models both underlying assumptions, the constant in time gauge field and the purely de Sitter expansion of the universe, typically do not hold. In a time-dependent electromagnetic field, the Schwinger pair production has a more complicated dynamics. This was demonstrated by using the kinetic approach both in Minkowski space <cit.> and in the expanding universe <cit.>. In contrast to the case of a constant field, the Schwinger-induced current exhibits a non-Markovian character, depending not only on the field at a given moment of time but also on the prehistory.
Moreover, due to the inertial properties of the charge carriers, the current is retarded with respect to the changes in the electric field, which leads to the oscillatory behavior of the induced current and the electric field. In the kinetic approach, the particle creation is described by the Schwinger source term which can be constructed phenomenologically <cit.> or derived from first principles <cit.>. The latter way, although being much more involved, allows one to capture the effects of particle statistics and the nonlocal in time character of the pair-creation process. There is, however, a fundamental issue which makes the application of such a first-principles approach ambiguous in the case of the expanding universe. This is the well-known problem of defining the vacuum state and particles in nonstationary backgrounds which do not match the Minkowski spacetime in the asymptotic past and future <cit.>. The result for the number of produced particles depends on the definition of the physical vacuum state or, in other words, on the choice of the observer. In this work, we revisit the problem of pair creation of scalar particles by a time-dependent electric field in the expanding Friedmann-Lemaître-Robertson-Walker (FLRW) universe previously considered in Ref. <cit.>. We mostly follow the same strategy in order to derive the system of quantum kinetic equations describing the Schwinger pair production in this system. However, in the present study, we perform all computations in terms of the conformal time rather than the physical time considered in Ref. <cit.>. Despite the seemingly trivial change, we find that the definition of the adiabatic vacuum is different compared to the previous work and, consequently, the final system of equations appears to be different. We analyze the ultraviolet (UV) behavior of its solutions and show that the kinetic functions exhibit a faster decrease at large momenta as compared to the system in Ref. <cit.>, which makes the new system of equations more attractive for numerical analysis. Also, the new setup respects the conformal symmetry of the problem and does not lead to particle production in the case of a conformally coupled massless scalar field in the absence of the electric field. The aim of this work is not to show that the computations in Ref. <cit.> are incorrect, but to demonstrate that there is a different and more convenient way to derive the system of quantum kinetic equations. Although we expect that the numerical result in the case of a strong electric field |eE|≫ H^2, m^2 (where e is the electric charge, H is the Hubble parameter, and m is the particle's mass) would be the same for both approaches, there might be some numerical differences in the case of a weak field where the particle production by the electric field and by the time-varying metric are comparable. The rest of the article has the following structure. In Sec. <ref> we consider the evolution of a quantum scalar field on a classical background of the electric field and the expanding universe. In particular, we define the adiabatic vacuum and use the formalism of the Bogolyubov coefficients in order to derive the system of quantum kinetic equations. Then, in Sec. <ref> we determine the basic observables – electric current and energy-momentum tensor of the produced particles – and express them in terms of the kinetic functions. In Sec.
<ref>, we realize the renormalization program in order to separate the UV divergent contributions to the observables and cancel them by introducing the corresponding counterterms. Section <ref> is devoted to conclusions. In Appendix <ref>, we determine the asymptotical behavior of the kinetic functions in the limit of large momenta. Throughout the work, we use natural units and set ħ=c=1. We assume that the universe is described by a spatially flat FLRW metric. In terms of conformal time η it has the form g_μν=diag(a^2, -a^2, -a^2, -a^2), where a=a(η) is the scale factor. § QUANTUM VLASOV EQUATIONS FOR THE SCALAR FIELD In order to describe the process of the Schwinger production of scalar charged particles by a strong electric field in the expanding Universe we consider the action S=∫ d^4x√(-g)[-M_p^2/2R + ℒ_EM + ℒ_ch], where g= det(g_μν)=-a^8 is the determinant of the spacetime metric, and R=-6a”/a^3 is the Ricci curvature scalar for the FLRW metric. Thus, the first term in the action represents the Einstein–Hilbert action for gravity. The second term in brackets in Eq. (<ref>) represents the Lagrangian density for the electromagnetic field[Throughout this work, by “electromagnetic field” we mean any Abelian gauge field, not necessarily the one corresponding to the U(1)_EM subgroup of the Standard Model.] ℒ_EM=-1/4F_μνF^μν+ℒ_int(A_μ,ϕ), where the term ℒ_int describes the coupling of the electromagnetic field to some generic field ϕ. Although this interaction is necessary to produce the sufficiently strong electromagnetic field needed for the Schwinger pair production, in this work we will not stick to any specific model and will assume that the electromagnetic field already exists in the Universe. Moreover, we consider the following configuration of the gauge field: (i) the magnetic field B is negligibly small compared to the electric one E; (ii) the coherence length of the electric field λ_E is much greater than any other physically relevant length scale in the problem, e.g., the Schwinger pair production scale l_S∼ |eE|^-1/2 or the Hubble scale l_H∼ H^-1. The latter condition allows us to treat the electric field as spatially homogeneous and dependent only on time, E=E(η). Such a field configuration can be described by the vector potential (in the Coulomb gauge) A_μ=(0, A(η)) with A'(η)=-a^2 E. With the factor a^2, the electric field E is the physical field measured by the comoving observer. Finally, we go back to the last term in Eq. (<ref>) which represents the Lagrangian density of the complex scalar charged field χ with mass m and charge e: ℒ_ch=g^μν(𝒟_μχ)^†(𝒟_νχ)-( m^2-ξ R ) |χ|^2. Here |χ|^2≡χ^†χ, 𝒟_μ=∂_μ-ieA_μ is the covariant derivative acting on the scalar field, and ξ is the coupling constant responsible for the nonminimal coupling of the scalar field χ to gravity. Varying the action in Eq. (<ref>) with respect to χ^†, we obtain the equation of motion for the scalar field χ: 1/√(-g)𝒟_μ[√(-g)g^μν𝒟_νχ] + ( m^2-ξ R )χ=0. Using the explicit form of the FLRW metric and electromagnetic field, we rewrite Eq. (<ref>) in the following form: χ”+2a'/aχ' + (m^2a^2+6ξa”/a)χ - (∇-ieA)^2χ=0. Promoting the scalar field χ to the corresponding quantum operator, we expand it over the set of creation and annihilation operators of particles (b̂^†_k, b̂_k) and antiparticles (ĉ^†_k, ĉ_k) with different momenta k: χ̂(η,x)=∫d^3k/(2π)^3/2a(η)[b̂_kχ_k(η) e^ik·x + ĉ^†_kχ^*_-k(η) e^-ik·x] .
The annihilation and creation operators satisfy the canonical commutation relations, among which the only nontrivial ones are [b̂_k, b̂^†_k'] = [ĉ_k, ĉ^†_k']= δ^(3)(k-k') . The factor a(η) in the denominator in Eq. (<ref>) was introduced for further convenience. Inserting the expansion (<ref>) into the equation of motion (<ref>), we get the mode equation governing the evolution of the mode function χ_k(η): χ”_k(η)+Ω_k^2(η)χ_k(η)=0 . It has an oscillator-like form with the time-dependent effective frequency Ω_k^2(η)=(k-eA)^2+m^2a^2+(6ξ-1)a”/a. It is easy to observe from Eq. (<ref>) that in the absence of an electromagnetic field in a non-expanding Universe, the frequency Ω_k does not depend on time η. Then, Eq. (<ref>) has two independent solutions which describe the positive- and negative-frequency modes. If the initial condition imposed at a certain moment of time corresponds, e.g., to a positive-frequency mode, the solution remains positive-frequency at all times. Therefore, we conclude that the creation of particles in Minkowski spacetime does not occur. Moreover, even in the expanding universe, for a massless scalar particle, m=0, with the conformal nonminimal coupling to gravity, ξ=1/6, and in the absence of the electric field, the equation of motion (<ref>) has a very simple oscillator-like form χ”_k+k^2 χ_k=0, whose positive-frequency solution is the well-known Bunch-Davies vacuum <cit.>: χ_k,+^BD(η)=1/√(2k) e^-ikη. Being prepared in this state initially, the scalar field will remain in it forever, meaning that there is no particle production even in the expanding universe. This is a consequence of the conformal invariance of the action of a massless conformally-coupled scalar field <cit.>. Here, we would like to note that this conformal symmetry becomes explicit only if we are working with the FLRW metric written in terms of the conformal time, in which it is explicitly conformally flat. The situation changes in the presence of a nonzero electric field in the expanding universe. Indeed, the value of Ω_k becomes dependent on the conformal time η, and therefore, in general, it is impossible to find an exact general solution to the mode equation (<ref>) and separate the positive- and negative-frequency modes. Nevertheless, one can still construct an approximate solution by employing the Wentzel–Kramers–Brillouin approximation up to a certain adiabatic order. For example, in the zeroth adiabatic order, the positive-frequency solution has the form χ_k,+^(0)(η)=1/√(2Ω_k(η)) e^-i∫^ηΩ_k(η')dη' , where the phase of this function can be fixed by choosing the lower integration limit in the exponent. For modes with very large momenta this expression reduces to Eq. (<ref>) for the Bunch–Davies vacuum. Now, let us require that at a given moment of time η_0 the exact mode function χ_k matches the one in Eq. (<ref>). Then, the annihilation and creation operators appearing in the decomposition (<ref>) with such mode functions define the adiabatic vacuum (of zeroth order). Consequently, at the moment of time η_0 there will be no particles in the system. However, at any later moment of time, the exact mode function does not match Eq. (<ref>) and in general represents a mixture of positive- and negative-frequency adiabatic modes. In such a case, it is convenient to describe it by the Bogolyubov coefficients α_k and β_k in the following way: χ_k(η) = 1/√(2Ω_k(η))[α_k(η) e^-iΘ_k(η) + β_k(η) e^iΘ_k(η)], where Θ_k(η)=∫_η_0^ηΩ_k(η')dη'.
The Bogolyubov coefficients satisfy the normalization condition |α_k(η)|^2 - |β_k(η)|^2 = 1, and at the moment of time η_0 they satisfy the initial conditions α_k(η_0)=1 and β_k(η_0)=0 (by construction of the adiabatic vacuum at the moment η_0). It is straightforward to show that the mode equation (<ref>) is identically satisfied if the evolution of the Bogolyubov coefficients is determined by the following system of equations: α_k'= Ω_k'/(2Ω_k) e^2iΘ_kβ_k, β_k'= Ω_k'/(2Ω_k) e^-2iΘ_kα_k. Taking into account the normalization condition in Eq. (<ref>), we conclude that the two complex functions α_k and β_k have only three independent real degrees of freedom, which can be conveniently parameterized as follows: ℱ_k(η)= |β_k(η)|^2, 𝒢_k(η)= Re(α_k β^*_k e^-2iΘ_k(η)), ℋ_k(η)= Im(α_k β^*_k e^-2iΘ_k(η)). Then, using Eqs. (<ref>), one can derive equations of motion for the quantities (<ref>)–(<ref>): ℱ_k'(η) = Ω_k'/Ω_k 𝒢_k(η), 𝒢_k'(η) = Ω_k'/2Ω_k[1 + 2ℱ_k(η)] +2Ω_kℋ_k(η), ℋ_k'(η) = -2Ω_k 𝒢_k(η). The main advantage of this system of equations compared to (<ref>) is that it is real and does not contain fast oscillating coefficients. In order to rewrite it in the final form, we switch to the conformal momentum of the particle p=k-eA and introduce the conformal energy ϵ_p≡√(p^2+m^2a^2). [The quantities p and ϵ_p do not coincide with the physical momentum and energy of the scalar particle measured by the comoving cosmological observer. They are introduced for convenience in further analysis. The corresponding physical quantities can be expressed as p_phys=p/a and ϵ_p,phys=ϵ_p/a.] Further, we perform the following replacements in the system of equations (<ref>): Ω_k → ω(η,p)=√(ϵ_p^2+(6ξ-1)a”/a), ℱ_k(η) → ℱ(η, p), 𝒢_k(η) → 𝒢(η, p), ℋ_k(η) → ℋ(η, p), Ω_k'/Ω_k → Q(η,p)≡1/ω(η,p)^2[e a^2(p·E ) + m^2aa'+6ξ-1/2(a”'/a-a'a”/a^2)]. Further, we would like to note that the time derivative is changed according to the rule ℱ_k(η) → d/dηℱ(η, p)=[∂/∂η+∂p/∂η·∂/∂p]ℱ(η, p)= [∂/∂η+e a^2E·∂/∂p] ℱ(η, p) and similar relations also hold for the quantities 𝒢(η, p), ℋ(η, p). Finally, we obtain the system of quantum Vlasov equations for the kinetic functions (<ref>): (∂/∂η+e a^2E·∂/∂p) ℱ(η, p)= Q(η,p) 𝒢(η, p), (∂/∂η+e a^2E·∂/∂p) 𝒢(η, p)= 1/2 Q(η,p)[1+2ℱ(η, p) ] + 2ω(η,p)ℋ(η, p), (∂/∂η+e a^2E·∂/∂p) ℋ(η, p)= -2ω(η,p)𝒢(η, p). These equations describe the creation of charged scalar particles in the expanding universe in the presence of a spatially homogeneous electric field. § PHYSICAL OBSERVABLES In this section we determine the main physical observables for the scalar field, namely the electric current and the energy-momentum tensor, and express them in terms of the kinetic functions ℱ, 𝒢, and ℋ introduced in the previous section. §.§ Electric current We start with the electric current, which is the vacuum expectation value of the corresponding quantum operator: j^μ=⟨∂ℒ_ch/∂ A_μ⟩ = (0, 1/a^2j). Using the explicit form of the scalar-field Lagrangian density in Eq. (<ref>), we get j=ie⟨χ̂^† ∇χ̂-χ̂ ∇χ̂^†-2ieA χ̂^†χ̂⟩. Further, we substitute the operator decomposition (<ref>) and express the electric current in terms of the mode function: j=-2e/a^2∫d^3k/(2π)^3(k-eA) |χ_k(η)|^2. Finally, we use Eqs. (<ref>), (<ref>)–(<ref>) and rewrite Eq. (<ref>) in terms of the kinetic functions ℱ, 𝒢, and ℋ as follows: j=-2e/a^2∫d^3k/(2π)^3(k-eA) 1+2ℱ_k+2𝒢_k/2Ω_k =-2e/a^2∫d^3p/(2π)^3 p ℱ(η,p)+𝒢(η,p)/ω(η,p), where the term with the unity in the numerator was omitted since it is odd under the change p→ -p and, therefore, identically vanishes after integration over all momenta.
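As an illustration of how the above system can be handled numerically, the following Python sketch integrates the equivalent fixed-k equations (<ref>) for a single mode in a de Sitter background a(η)=-1/(Hη) with a constant physical electric field; the momentum integral in Eq. (<ref>) for the current can then be assembled from such solutions on a grid of k. All parameter values are illustrative, and the finite-difference evaluation of Ω_k'/Ω_k is a shortcut for brevity, not a prescription from the text.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in units of H = 1: mass m, charge e, physical field E0,
# conformal coupling xi = 1/6, comoving momentum kz along the field direction.
H, m, e, E0, xi, kz = 1.0, 0.5, 1.0, 2.0, 1.0 / 6.0, 5.0

def a(eta):
    return -1.0 / (H * eta)              # de Sitter scale factor, eta < 0

def A(eta):
    return E0 / (H**2 * eta)             # solves A' = -a^2 E0

def Omega(eta):
    p = kz - e * A(eta)                  # conformal kinetic momentum
    dS = 2.0 / eta**2                    # a''/a for de Sitter
    return np.sqrt(p**2 + (m * a(eta))**2 + (6 * xi - 1) * dS)

def rhs(eta, y, d=1e-6):
    F, G, Hf = y
    w = Omega(eta)
    Q = (Omega(eta + d) - Omega(eta - d)) / (2 * d * w)   # Omega'/Omega
    return [Q * G, 0.5 * Q * (1 + 2 * F) + 2 * w * Hf, -2 * w * G]

# Adiabatic vacuum at eta_0: F = G = H = 0; integrate towards eta -> 0^-.
sol = solve_ivp(rhs, (-20.0, -0.1), [0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
print("F = |beta_k|^2 at the final time:", sol.y[0, -1])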
§.§ Energy-momentum tensor The energy-momentum tensor of the scalar charged field χ can be found by varying the action with respect to the spacetime metric: T_μν^χ =2/√(-g)⟨δ S_χ/δ g^μν⟩=⟨(𝒟_μχ̂)^†(𝒟_νχ̂)+(𝒟_νχ̂)^†(𝒟_μχ̂) -g_μν[(𝒟_αχ̂)^†(𝒟^αχ̂)-m^2χ̂^†χ̂]⟩+ 2ξ⟨(R_μν-1/2R g_μν - ∇_μ ∇_ν + g_μν ∇_α ∇^α)χ̂^†χ̂⟩. Then, we immediately have the energy density of the produced particles ρ_χ≡ (T^χ)_0^0: ρ_χ = 1/a^2⟨ (𝒟_0χ̂)^†(𝒟_0χ̂)+(𝒟_iχ̂)^†(𝒟_iχ̂)+ (m^2a^2 + 6ξa^' 2/a^2)χ̂^†χ̂⟩-2ξ/a^2⟨(∂_i^2 - 3a'/a∂_0)χ̂^†χ̂⟩. Instead of the pressure, P_χ=-(1/3)g^ijT_ij^χ, it is more convenient to work with the trace of the energy-momentum tensor T_χ=g^μνT_μν^χ, which has the form: T_χ=⟨ 2(6ξ-1)[(𝒟_αχ̂)^†(𝒟^αχ̂)+ξ R χ̂^†χ̂] + 4(1-3ξ)m^2χ̂^†χ̂⟩. Again, using the decomposition of χ̂ over the annihilation and creation operators in Eq. (<ref>), we express the energy density and the trace in terms of the mode function: ρ_χ=∫d^3k/(2π)^3 a^4{|χ_k'-a'/aχ_k|^2+ [m^2a^2-6ξa^' 2/a^2+(k-eA)^2 + 6ξa'/a∂_0]|χ_k|^2}, T_χ=∫d^3k/(2π)^3 a^4{2(6ξ-1)[|χ_k'-a'/aχ_k|^2-(Ω_k^2+a”/a) |χ_k|^2] + 2m^2a^2|χ_k|^2}. Finally, expressing them in terms of the kinetic functions ℱ, 𝒢, and ℋ, we obtain ρ_χ =∫d^3p/(2π)^3 a^4{ω(η,p)[1+2ℱ(η,p)]+2a'/a(6ξ-1)ℋ(η,p) -6ξ-1/ω(η,p)(a^' 2/a^2+a”/a) [1/2+ℱ(η,p)+𝒢(η,p)]}, T_χ =∫d^3p/(2π)^3 a^4{ - 4(6ξ-1)[ω(η,p)𝒢(η,p)+a'/aℋ(η,p) ] +2/ω(η,p)[m^2a^2-(6ξ-1) (a”/a-a^' 2/a^2)][1/2+ℱ(η,p)+𝒢(η,p)] }. We will see below that all physical observables derived in this section contain UV divergences and, therefore, must be renormalized in order to acquire physical meaning. We perform the renormalization in the following section. § RENORMALIZATION In this section, we study the behavior of the kinetic functions at large values of the momentum and show that the integrals in Eqs. (<ref>), (<ref>), and (<ref>) are divergent in the UV region. Further, we extract the divergent parts using dimensional regularization and introduce counterterms in the action which cancel those divergences. §.§ Asymptotical behavior of the kinetic functions at high energies The coefficients ω(η,p) and Q(η,p) in the quantum Vlasov equations (<ref>) in the limit of large momenta, p≡ |p|→∞, behave as ω(η,p)=p+𝒪(p^-1), Q(η,p)=ea^2 (p̂·E) p^-1+𝒪(p^-2), where p̂=p/p. Then, from Eq. (<ref>), one can see that at high momenta, ℱ∝ p^-4, 𝒢∝ p^-3, and ℋ∝ p^-2, i.e., the kinetic functions demonstrate power-law behavior which may potentially lead (and indeed leads, as we will see below) to UV divergences in the observables. In order to resolve the latter issue, we need the exact expressions for the first few terms of the large-p expansion of the kinetic functions. For the technical details, we refer the reader to Appendix <ref>, while here we just list the results: ℱ(η,p) =e^2a^4(v·E)^2/16 ϵ_p^4 +𝒪(ϵ_p^-5), 𝒢(η,p) =e a^2/8 ϵ_p^3 v·(E'+2a'/aE) + e^2a^4[E^2-3(v·E)^2]/8 ϵ_p^4+ 1/8 ϵ_p^4[m^2(a'^ 2+aa”) + (6ξ-1)(a^IV/2a-a”^ 2/2a^2-a'a”'/a^2 + a'^ 2a”/a^3)]+𝒪(ϵ_p^-5), ℋ(η,p) =-e a^2 (v·E)/4 ϵ_p^2-1/4 ϵ_p^3[m^2aa'+6ξ-1/2(a”'/a-a'a”/a^2)]+𝒪(ϵ_p^-4), where v=p/ϵ_p is the velocity of the particle and we presented only those terms which may lead to a divergent contribution to the observables. Note that the kinetic functions are expressed in terms of inverse powers of ϵ_p rather than p. Among the two expansions, which are completely equivalent in the UV region, the expansion in inverse powers of ϵ_p is more convenient since it does not cause problems in the infrared limit p→ 0. §.§ Electric current Using Eq.
(<ref>) for the electric current and the asymptotical expansions for the kinetic functions in Eqs. (<ref>)–(<ref>), it is easy to see that (i) the integral over the momentum is logarithmically divergent in the UV region and (ii) the divergence comes only from the term in 𝒢(η,p) which behaves as ∝ϵ_p^-3, i.e., from the first term in Eq. (<ref>). Then, the ith component of the electric current reads as: j^i=-e^2/4∫d^3p/(2π)^3 v^i v^j/ϵ_p^3 ( E^j '+2a'/aE^j ) + 𝒪(ϵ_p^-4), where we wrote explicitly only the term which leads to the divergence. In order to extract the divergent contribution, we employ dimensional regularization, switching from the (3+1)-dimensional spacetime to a general d-dimensional one. Then, for the divergent integral in Eq. (<ref>), by switching to the physical momentum q=p/a we get: ∫d^3p/(2π)^3 v^i v^j/ϵ_p^3 = ∫d^3q/(2π)^3 v^i v^j/(q^2+m^2)^3/2→μ_r^4-d∫d^d-1q/(2π)^d-1 v^i v^j/(q^2+m^2)^3/2= δ^ijΩ_(d-1) μ_r^4-d/(d-1)(2π)^d-1∫_0^+∞ dq q^d (q^2+m^2)^-5/2 = δ^ij/12π^2(m^2/4πμ_r^2)^(d-4)/2Γ((4-d)/2)=δ^ij/12π^2[2/4-d-γ_E-ln(m^2/4πμ_r^2)+𝒪(4-d)], where Ω_(d-1)=2π^(d-1)/2/Γ((d-1)/2) is the full solid angle in (d-1)-dimensional space; μ_r is a free parameter with the dimension of mass, which arises in order to compensate the mass dimension of the original integral and has the meaning of the renormalization energy scale (for simplicity, in what follows we set μ_r=m); and γ_E≈ 0.577 is the Euler-Mascheroni constant. Obviously, the first term in brackets in Eq. (<ref>) tends to infinity in the limit d→ 4, which restores the correct dimension of our spacetime; therefore, it must be subtracted. However, together with this term, one can subtract any finite contributions. This ambiguity is resolved only by convention. One of the most popular subtraction schemes, the modified minimal subtraction (MS-bar) scheme, requires one to subtract the combination 1/ε̅≡2/4-d-γ_E+ln 4π. Then, from Eqs. (<ref>) and (<ref>) we conclude that the divergent part of the electric current can be written as j_div=-e^2/48π^21/ε̅(E'+2a'/aE). Subtracting from the full electric current in Eq. (<ref>) the first term in Eq. (<ref>), we obtain the regular (i.e., finite) part of the current: j_reg=-2e∫d^3p/(2π)^3 p {ℱ(η,p)+𝒢(η,p)/a^2 ω(η,p)-e(p·[E'+2(a'/a)E])/8 ϵ_p^5}. The divergent contribution to the electric current can be canceled in the Maxwell equation by introducing a counterterm in the action of the following form: δ S_Z_3=(Z_3-1)∫ d^4x√(-g)(-1/4F_μνF^μν). In the presence of this counterterm, Eqs. (<ref>) and (<ref>) imply the following Maxwell equation for the electric field: (1+ Z_3 - 1)(E'+2a'/aE) = j_div + j_reg. Substituting here Eq. (<ref>) and requiring that after the cancellation this equation takes the renormalized form containing only finite quantities, E'+2a'/aE = j_reg, we obtain the value of the renormalization parameter Z_3 = 1 - e^2/48π^21/ε̅, which reproduces the well-known result for the charge renormalization parameter in scalar quantum electrodynamics in Minkowski spacetime, see, e.g., Ref. <cit.>. §.§ Energy density Again, using the asymptotical expansions in Eqs. (<ref>)–(<ref>), we expand the integral in Eq. (<ref>), keeping the terms leading to UV divergences. We get the following expression for the energy density: ρ_χ =∫d^3p/(2π)^3 a^4{ϵ_p-6ξ-1/2ϵ_pa^' 2/a^2- 6ξ-1/2ϵ_p^2a'/a e(v·E)+e^2a^4(v·E)^2/8 ϵ_p^3- 6ξ-1/2 ϵ_p^3[m^2a^' 2+6ξ-1/2(a'a”'/a^2-2a^' 2a”/a^3-a^'' 2/2a^2) ] + 𝒪(ϵ_p^-4) }. Here, the third term in curly brackets vanishes after integration since it is odd under the change p→ -p.
All other terms are divergent and can be computed in dimensional regularization by switching to a d-dimensional spacetime, as shown in Eq. (<ref>) or in a similar way: ∫d^3p/(2π)^3 ϵ_p → -m^4 a^4/4π^2(m^2/4πμ_r^2)^d-4/2Γ(4-d/2)/d (d-2)=-m^4 a^4/32π^2[1/ε̅+3/2+𝒪(4-d)], ∫d^3p/(2π)^3 1/ϵ_p → -m^2 a^2/4π^2(m^2/4πμ_r^2)^d-4/2Γ(4-d/2)/d-2=-m^2 a^2/8π^2[1/ε̅+1+𝒪(4-d)], ∫d^3p/(2π)^3 1/ϵ_p^3 →1/4π^2(m^2/4πμ_r^2)^d-4/2Γ(4-d/2)=1/4π^2[1/ε̅+𝒪(4-d)]. Collecting all terms proportional to 1/ε̅, we get the divergent part of the energy density: ρ_χ^div=[e^2E^2/96π^2-m^4/32π^2 -6ξ-1/16π^2m^2a'^2/a^4 - (6ξ-1)^2/16π^2(a'a'''/a^6-2a'^2a''/a^7-a''^2/2a^6)] 1/ε̅. This divergent contribution can be eliminated by introducing the corresponding counterterms into the action. First of all, we note that the first term in Eq. (<ref>), the only one which depends on the electric field, is fully canceled by the counterterm (<ref>) considered in the previous subsection. The rest of the divergent terms do not depend on the electric field but only on the particle's mass and time derivatives of the scale factor. Therefore, they represent the radiative corrections to the Einstein–Hilbert action for gravity. Let us search for the counterterms which cancel those divergences using the following Ansatz: δ S_grav=∫ d^4 x √(-g)(a_1 + a_2 R + a_3 R^2). The first term, a_1, renormalizes the cosmological constant, a_2 takes care of the Planck mass, and a_3 renormalizes the coefficient of the Starobinsky R^2 term <cit.>. The effective energy-momentum tensor which follows from such counterterms in the action can be obtained by varying Eq. (<ref>) with respect to the metric: δ T^μν_grav=2/√(-g)δ(δ S_grav)/δ g_μν= -a_1g^μν+a_2(2R^μν-Rg^μν)+4a_3[RR^μν-1/4R^2g^μν-(∇^μ∇^ν-g^μν∇_α∇^α )R ]. The 00 component of this expression gives the corresponding energy density: δρ_grav = -a_1 + 6a_2 a'^2/a^4 - 72 a_3 (a'a'''/a^6-2a'^2a''/a^7-a''^2/2a^6). Now, requiring ρ_χ^div + δρ_Z_3+ δρ_grav = 0, we obtain the coefficients a_1, a_2, and a_3 in the following form: a_1 =-m^4/32π^2 1/ε̅, a_2 =(ξ-1/6) m^2/16π^2 1/ε̅, a_3 =-(ξ-1/6)^21/32π^2 1/ε̅, which fully fix the counterterm in Eq. (<ref>). Finally, subtracting from the integrand in Eq. (<ref>) the terms shown in Eq. (<ref>) and collecting the finite contributions in Eqs. (<ref>)–(<ref>), we obtain the expression for the regular part of the energy density: ρ_χ^reg =-3m^4/64π^2+6ξ-1/16π^2m^2a'^2/a^4+∫d^3p/(2π)^3 a^4{ω(η,p)[1+2ℱ(η,p)]+2a'/a(6ξ-1)ℋ(η,p)-6ξ-1/ω(η,p)(a'^2/a^2+a''/a) [1/2+ℱ(η,p)+𝒢(η,p)]-ϵ_p+6ξ-1/2ϵ_pa'^2/a^2+ 6ξ-1/2ϵ_p^2a'/a e(v·E)-e^2a^4(v·E)^2/8 ϵ_p^3+ 6ξ-1/2 ϵ_p^3[m^2a'^2+6ξ-1/2(a'a'''/a^2-2a'^2a''/a^3-a''^2/2a^2)]}. This expression represents the physically meaningful and finite energy density of the produced scalar particles, which can be used, in particular, in the Friedmann equation describing the expansion rate of the universe. §.§ Trace of the stress-energy tensor Finally, let us consider the UV divergences in the trace of the energy-momentum tensor of the produced scalar particles. Expanding the integrand in Eq. (<ref>) by using Eqs. (<ref>)–(<ref>), we collect the terms which might lead to a divergence: T_χ =∫d^3p/(2π)^3 a^2{- ea'/a(v·E)/4ϵ_p^2 -(6ξ-1)e/2ϵ_p^2v·(E'+2a'/aE)-(6ξ-1)a^2 e^2/2ϵ_p^3[E^2-3(v·E)^2]+m^2/ϵ_p - 6ξ-1/ϵ_p(a''/a^3-a'^2/a^4)-6ξ-1/2ϵ_p^3m^2(2a''/a-a'^2/a^2)+ (6ξ-1)^2/2ϵ_p^3(3a''^2/2a^4 - 3a''a'^2/a^5-a^IV/2a^3+2a'''a'/a^4)}.
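Before analyzing the terms of this expansion, we note that the cancellation imposed above, ρ_χ^div + δρ_grav = 0 (once the Z_3 counterterm has removed the e^2E^2/96π^2 piece), can also be verified symbolically. The following SymPy sketch, an independent check with the common factor 1/ε̅ dropped from all coefficients, confirms it:

```python
import sympy as sp

eta, m, xi = sp.symbols('eta m xi', positive=True)
a = sp.Function('a')(eta)
d = lambda f, n=1: sp.diff(f, eta, n)

# Gravitational part of rho_chi^div (the e^2 E^2 term is canceled by Z_3).
X = d(a)*d(a, 3)/a**6 - 2*d(a)**2*d(a, 2)/a**7 - d(a, 2)**2/(2*a**6)
rho_div = (-m**4/(32*sp.pi**2)
           - (6*xi - 1)*m**2*d(a)**2/(16*sp.pi**2*a**4)
           - (6*xi - 1)**2/(16*sp.pi**2)*X)

# Counterterm coefficients a_1, a_2, a_3 derived above.
a1 = -m**4/(32*sp.pi**2)
a2 = (xi - sp.Rational(1, 6))*m**2/(16*sp.pi**2)
a3 = -(xi - sp.Rational(1, 6))**2/(32*sp.pi**2)

# 00-component of the counterterm energy-momentum tensor, delta rho_grav.
delta_rho = -a1 + 6*a2*d(a)**2/a**4 - 72*a3*X

print(sp.simplify(rho_div + delta_rho))   # -> 0
```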
Obviously, the first two terms identically vanish since they are odd in momentum, while the third term does not give a divergent contribution according to Eqs. (<ref>) and (<ref>). Thus, the divergent part of the trace does not depend on the electric field. Divergences in the rest of the terms can be extracted by applying dimensional regularization and using Eqs. (<ref>)–(<ref>). Then, we obtain the following expression: T_χ^div=[-m^4/8π^2-(6ξ-1)m^2/8π^2a''/a^3+(6ξ-1)^2/8π^2(3a''^2/2a^6 -3a''a'^2/a^7-a^IV/2a^5+2a'''a'/a^6)]1/ε̅. It is straightforward to check that this expression is fully canceled by the trace of the effective energy-momentum tensor (<ref>) if the coefficients a_1, a_2, and a_3 are given by Eqs. (<ref>)–(<ref>). Thus, no new counterterms need to be added in order to cancel the divergence in the trace of the energy-momentum tensor. In order to get the finite part of T_χ, we subtract from the integrand in Eq. (<ref>) the terms of the large-p expansion up to O(ϵ_p^-3) inclusive, shown in Eq. (<ref>), and take into account the finite contribution coming from the unity in Eq. (<ref>). This leads to the following expression: T_χ^reg =-m^4/8π^2-(6ξ-1)m^2/8π^2(a''/a^3-a'^2/a^4) + ∫d^3p/(2π)^3 a^4{ - 4(6ξ-1)[ω(η,p)𝒢(η,p)+a'/aℋ(η,p) ] +2/ω(η,p)[m^2a^2-(6ξ-1) (a''/a-a'^2/a^2)][1/2+ℱ(η,p)+𝒢(η,p)] + ea'/a(v·E)/4ϵ_p^2 +(6ξ-1)e/2ϵ_p^2v·(E'+2a'/aE)+(6ξ-1)a^2 e^2/2ϵ_p^3[E^2-3(v·E)^2]-m^2/ϵ_p + 6ξ-1/ϵ_p(a''/a^3-a'^2/a^4) +6ξ-1/2ϵ_p^3m^2(2a''/a-a'^2/a^2)- (6ξ-1)^2/2ϵ_p^3(3a''^2/2a^4 - 3a''a'^2/a^5-a^IV/2a^3+2a'''a'/a^4)}. This expression together with Eq. (<ref>) can be used to derive the pressure of the produced scalar particles, P_χ= (ρ_χ^reg-T_χ^reg)/3. § CONCLUSIONS In this work, we describe the production of spinless charged particles (and antiparticles) in a homogeneous electric field in the expanding FLRW universe from first principles. To this end, we study the particle excitations of a quantum scalar field which evolves on a classical time-dependent background consisting of a scale factor a(η) and the electromagnetic vector potential A(η). Because of the absence of static asymptotic states of this system in the infinite past and future, we face the typical problem of how to define the vacuum state and particles in an expanding universe. Following the standard procedure <cit.>, we construct a physically reasonable approximation for the vacuum state – the adiabatic vacuum – in which the modes with very large momenta are almost not excited. However, this procedure is not unique, as one can define adiabatic vacua of different adiabatic orders and at different moments of time. In particular, Ref. <cit.> and the present work consider the same physical system; the difference lies in the choice of adiabatic vacuum, which leads to slightly different results. We argue that the approach presented in this work leads to a system of equations which is more suitable for numerical simulations of the problem. Throughout the paper, we work with conformal time, in terms of which the FLRW metric is explicitly conformally flat. In the simple case of a conformally coupled massless scalar field in the absence of an electric field, the mode equation admits an exact solution which corresponds to the Bunch–Davies vacuum. Provided that initially the system was in this state, no particle creation occurs at any moment of time in the future.
This is a manifestation of the conformal symmetry of the physical system under consideration, which makes its evolution in FLRW spacetime identical to that in Minkowski spacetime. This symmetry was not respected in the system of equations in Ref. <cit.>, where the definition of the vacuum state is based on the Wentzel–Kramers–Brillouin solution to the mode equation written in physical time. In the general case, where the conformal symmetry is broken by the particle's mass, a nonminimal coupling ξ≠ 1/6, and/or the external electric field, particle creation is described by the system of three quantum Vlasov equations for the kinetic functions ℱ(η,p), 𝒢(η,p), and ℋ(η,p). The first one is the analog of a classical one-particle distribution function in Boltzmann kinetic theory, while the other two arise as auxiliary dynamical variables which describe the nonlocal-in-time process of particle creation. We study the asymptotic behavior of the kinetic functions at large momenta and show that they decrease much faster than in the case of Ref. <cit.>. Indeed, the leading term in the expansion in inverse powers of momentum is ∝ p^-4 for ℱ(η,p), ∝ p^-3 for 𝒢(η,p), and ∝ p^-2 for ℋ(η,p), compared to ∝ p^-2, p^-2, and p^-1, respectively, in Ref. <cit.>. This makes our new system of equations more suitable for numerical analysis on a lattice in momentum space. The power-law behavior of the kinetic functions at large momenta leads to UV divergences in the main observables, such as the electric current or the energy-momentum tensor of the produced particles. We use dimensional regularization in order to extract the divergent contributions and cancel them by introducing counterterms into the action. The counterterms coincide with those derived in Ref. <cit.> and with the well-known results for scalar quantum electrodynamics obtained by the conventional Green's-function approach <cit.>. The obtained system of quantum Vlasov equations can be further used to describe the dynamics of particle production by a strong electric field in models of inflationary magnetogenesis. Another interesting problem, which has a direct physical application within the Standard Model, is to describe the production of charged fermions in a similar quantum kinetic approach. The presence of an external magnetic field in addition to the electric one makes the dynamics much more interesting and may lead to chirality production via the chiral anomaly <cit.>. The quantum kinetic description of particle production in non-orthogonal electric and magnetic fields is very relevant, e.g., for the axion inflation model; however, it is still missing in the literature. We plan to address these issues elsewhere. Acknowledgements O. O. S. is grateful to Prof. Kai Schmitz and all members of the Particle Cosmology group for their kind hospitality at the University of Münster, where the final part of this work was done. § DECLARATIONS Funding The work was supported by the National Research Foundation of Ukraine (Project No. 2020.02/0062). The work of O. O. S. was sustained by a Philipp-Schwartz fellowship of the University of Münster. Conflicts of interest The authors declare no competing interests. Author Contributions All authors contributed to the writing and research of this paper. A. L. did the computations and wrote the first draft of Secs. 2 and 3 and Appendix A. O. S. performed the computations in Sec. 4 and wrote Secs. 1, 4, and 5. Both authors have checked and approved the manuscript.
Data availability No new data were created or analysed in this study. § EXPANSION IN THE INVERSE POWERS OF MOMENTUM In this Appendix, we derive asymptotic expressions for the kinetic functions ℱ(η,p), 𝒢(η,p), and ℋ(η,p) in the limit of large momenta. For this, we need the corresponding expansions of the quantities ω(η,p) and Q(η,p), which are the coefficients in the system of quantum Vlasov equations (<ref>): ω(η,p)=p+ω^(-1)+ 𝒪(p^-3), ω^(-1)=m^2a^3+(6ξ-1)a''/2ap, Q(η,p)=Q^(-1)+Q^(-2)+ 𝒪(p^-3), Q^(-1)=e a^2(p·E)/p^2, Q^(-2)=1/p^2[m^2aa'+6ξ-1/2(a'''/a-a'a''/a^2)]. Let us represent the total derivative operator with respect to conformal time as the sum of two operators L̂^(0) and L̂^(-1): d/dη=L̂^(0)+L̂^(-1), where L̂^(0)=∂/∂η, L̂^(-1)=e a^2E ∂/∂p. Obviously, the operator L̂^(0) does not change the asymptotic behavior of a given term at large momenta, while L̂^(-1) reduces the power of momentum by one. Further, we represent the kinetic functions as power series in inverse momentum: for ℱ(η,p), 𝒢(η,p), and ℋ(η,p) the series start from the terms ∝ p^-4, ∝ p^-3, and ∝ p^-2, respectively. Then, substituting these expansions together with Eqs. (<ref>)–(<ref>) into the system of quantum Vlasov equations (<ref>), we obtain the following set of equations: L̂^(0)ℱ^(-4) =Q^(-1)𝒢^(-3), 0 =1/2Q^(-1)+2p ℋ^(-2), 0 =1/2Q^(-2)+2p ℋ^(-3), L̂^(0)ℋ^(-2) =-2p 𝒢^(-3), L̂^(0)ℋ^(-3)+L̂^(-1)ℋ^(-2) =-2p 𝒢^(-4). From Eqs. (<ref>) and (<ref>) we immediately find expressions for the terms ℋ^(-2) and ℋ^(-3): ℋ^(-2) =-1/4pQ^(-1)=-e a^2(p̂·E)/4p^2, ℋ^(-3) =-1/4pQ^(-2)=-1/4p^3[m^2aa'+6ξ-1/2(a'''/a-a'a''/a^2)], where p̂=p/p is the unit vector in the direction of p. Then, the terms 𝒢^(-3) and 𝒢^(-4) can be found from Eqs. (<ref>) and (<ref>), respectively: 𝒢^(-3) =-1/2pL̂^(0)ℋ^(-2)=ea^2/8p^3 p̂·(E'+2a'/aE) , 𝒢^(-4) =-1/2p[L̂^(0)ℋ^(-3)+L̂^(-1)ℋ^(-2)] =e^2a^4[E^2-3(p̂·E)^2]/8p^4+ 1/8p^4[m^2(a'^2+aa'') + (6ξ-1)(a^IV/2a-a''^2/2a^2-a'a'''/a^2 + a'^2a''/a^3)]. Finally, the term ℱ^(-4) can be found by solving the differential equation (<ref>). It is easy to see that the following function is a solution to this equation: ℱ^(-4)=e^2a^4(p̂·E)^2/16p^4. Thus, we have derived the first few terms in the Laurent series for the kinetic functions ℱ(η,p), 𝒢(η,p), and ℋ(η,p) at large momenta p→∞. However, it is not convenient to use them in the computations in Sec. <ref> because they lead to spurious infrared divergences in the integrals for physical observables. In order to overcome this problem, it is more convenient to perform the expansion in inverse powers of ϵ_p instead of p. Therefore, in Eqs. (<ref>)–(<ref>) we replace p→ϵ_p in the denominators and get Eqs. (<ref>)–(<ref>) in the main text. [Sauter1931]Sauter:1931zz Sauter, F.: Über das Verhalten eines Elektrons im homogenen elektrischen Feld nach der relativistischen Theorie Diracs. Z. Phys. 69, 742–764 (1931) 10.1007/BF01339461 [Heisenberg and Euler1936]Heisenberg:1936nmg Heisenberg, W., Euler, H.: Consequences of Dirac's theory of positrons. Z. Phys. 98(11-12), 714–732 (1936) 10.1007/BF01343663 [Schwinger1951]Schwinger:1951nm Schwinger, J.S.: On gauge invariance and vacuum polarization. Phys. Rev. 82, 664–679 (1951) 10.1103/PhysRev.82.664 [Cohen and McGady2008]Cohen:2008wz Cohen, T.D., McGady, D.A.: The Schwinger mechanism revisited. Phys. Rev.
D 78, 036008 (2008) 10.1103/PhysRevD.78.036008 https://arxiv.org/abs/0807.11170807.1117 [hep-ph] [Ruffini et al.2010]Ruffini:2009hg Ruffini, R., Vereshchagin, G., Xue, S.-S.: Electron-positron pairs in physics and astrophysics: From heavy nuclei to black holes. Phys. Rept. 487, 1–140 (2010) 10.1016/j.physrep.2009.10.004 https://arxiv.org/abs/0910.09740910.0974 [astro-ph.HE] [Kim and Kim2023]Kim:2023qdj Kim, C.M., Kim, S.P.: Schwinger pair production and vacuum birefringence around high magnetized neutron stars. In: 5th Zeldovich Meeting (2023) [Kobayashi and Afshordi2014]Kobayashi:2014zza Kobayashi, T., Afshordi, N.: Schwinger effect in 4D de Sitter space and constraints on magnetogenesis in the early universe. JHEP 10, 166 (2014) 10.1007/JHEP10(2014)166 https://arxiv.org/abs/1408.41411408.4141 [hep-th] [Sharma et al.2017]Sharma:2017eps Sharma, R., Jagannathan, S., Seshadri, T.R., Subramanian, K.: Challenges in inflationary magnetogenesis: Constraints from strong coupling, backreaction and the Schwinger effect. Phys. Rev. D 96(8), 083511 (2017) 10.1103/PhysRevD.96.083511 https://arxiv.org/abs/1708.081191708.08119 [astro-ph.CO] [Domcke et al.2020]Domcke:2019qmm Domcke, V., Ema, Y., Mukaida, K.: Chiral anomaly, Schwinger effect, Euler-Heisenberg Lagrangian, and application to axion inflation. JHEP 02, 055 (2020) 10.1007/JHEP02(2020)055 https://arxiv.org/abs/1910.012051910.01205 [hep-ph] [Turner and Widrow1988]Turner:1987bw Turner, M.S., Widrow, L.M.: Inflation produced, large scale magnetic fields. Phys. Rev. D 37, 2743 (1988) 10.1103/PhysRevD.37.2743 [Ratra1992]Ratra:1991bn Ratra, B.: Cosmological 'seed' magnetic field from inflation. Astrophys. J. Lett. 391, 1–4 (1992) 10.1086/186384 [Garretson et al.1992]Garretson:1992vt Garretson, W.D., Field, G.B., Carroll, S.M.: Primordial magnetic fields from pseudoGoldstone bosons. Phys. Rev. D 46, 5346–5351 (1992) 10.1103/PhysRevD.46.5346 https://arxiv.org/abs/hep-ph/9209238hep-ph/9209238 [Fröb et al.2014]Frob:2014zka Fröb, M.B., Garriga, J., Kanno, S., Sasaki, M., Soda, J., Tanaka, T., Vilenkin, A.: Schwinger effect in de Sitter space. JCAP 04, 009 (2014) 10.1088/1475-7516/2014/04/009 https://arxiv.org/abs/1401.41371401.4137 [hep-th] [Bavarsad et al.2016]Bavarsad:2016cxh Bavarsad, E., Stahl, C., Xue, S.-S.: Scalar current of created pairs by Schwinger mechanism in de Sitter spacetime. Phys. Rev. D 94(10), 104011 (2016) 10.1103/PhysRevD.94.104011 https://arxiv.org/abs/1602.065561602.06556 [hep-th] [Stahl et al.2016]Stahl:2015gaa Stahl, C., Strobel, E., Xue, S.-S.: Fermionic current and Schwinger effect in de Sitter spacetime. Phys. Rev. D 93(2), 025004 (2016) 10.1103/PhysRevD.93.025004 https://arxiv.org/abs/1507.016861507.01686 [gr-qc] [Hayashinaka and Yokoyama2016]Hayashinaka:2016dnt Hayashinaka, T., Yokoyama, J.: Point splitting renormalization of Schwinger induced current in de Sitter spacetime. JCAP 07, 012 (2016) 10.1088/1475-7516/2016/07/012 https://arxiv.org/abs/1603.061721603.06172 [hep-th] [Hayashinaka et al.2016]Hayashinaka:2016qqn Hayashinaka, T., Fujita, T., Yokoyama, J.: Fermionic Schwinger effect and induced current in de Sitter space. JCAP 07, 010 (2016) 10.1088/1475-7516/2016/07/010 https://arxiv.org/abs/1603.041651603.04165 [hep-th] [Sharma and Singh2017]Sharma:2017ivh Sharma, R., Singh, S.: Multifaceted Schwinger effect in de Sitter space. Phys. Rev. 
D 96(2), 025012 (2017) 10.1103/PhysRevD.96.025012 https://arxiv.org/abs/1704.050761704.05076 [gr-qc] [Bavarsad et al.2018]Bavarsad:2017oyv Bavarsad, E., Kim, S.P., Stahl, C., Xue, S.-S.: Effect of a magnetic field on Schwinger mechanism in de Sitter spacetime. Phys. Rev. D 97(2), 025017 (2018) 10.1103/PhysRevD.97.025017 https://arxiv.org/abs/1707.039751707.03975 [hep-th] [Hayashinaka and Xue2018]Hayashinaka:2018amz Hayashinaka, T., Xue, S.-S.: Physical renormalization condition for de Sitter QED. Phys. Rev. D 97, 105010 (2018) 10.1103/PhysRevD.97.105010 https://arxiv.org/abs/1802.036861802.03686 [gr-qc] [Banyeres et al.2018]Banyeres:2018aax Banyeres, M., Domènech, G., Garriga, J.: Vacuum birefringence and the Schwinger effect in (3+1) de Sitter. JCAP 10, 023 (2018) 10.1088/1475-7516/2018/10/023 https://arxiv.org/abs/1809.089771809.08977 [hep-th] [Domcke and Mukaida2018]Domcke:2018eki Domcke, V., Mukaida, K.: Gauge field and fermion production during axion inflation. JCAP 11, 020 (2018) 10.1088/1475-7516/2018/11/020 https://arxiv.org/abs/1806.087691806.08769 [hep-ph] [Tangarife et al.2018]Tangarife:2017rgl Tangarife, W., Tobioka, K., Ubaldi, L., Volansky, T.: Dynamics of relaxed inflation. JHEP 02, 084 (2018) 10.1007/JHEP02(2018)084 https://arxiv.org/abs/1706.030721706.03072 [hep-ph] [Stahl2019]Stahl:2018idd Stahl, C.: Schwinger effect impacting primordial magnetogenesis. Nucl. Phys. B 939, 95–104 (2019) 10.1016/j.nuclphysb.2018.12.017 https://arxiv.org/abs/1806.066921806.06692 [hep-th] [Geng et al.2018]Geng:2017zad Geng, J.-J., Li, B.-F., Soda, J., Wang, A., Wu, Q., Zhu, T.: Schwinger pair production by electric field coupled to inflaton. JCAP 02, 018 (2018) 10.1088/1475-7516/2018/02/018 https://arxiv.org/abs/1706.028331706.02833 [gr-qc] [Giovannini2018]Giovannini:2018qbq Giovannini, M.: Spectator electric fields, de Sitter spacetime, and the Schwinger effect. Phys. Rev. D 97, 061301 (2018) 10.1103/PhysRevD.97.061301 https://arxiv.org/abs/1801.099951801.09995 [hep-th] [Kitamoto2018]Kitamoto:2018htg Kitamoto, H.: Schwinger effect in inflaton-driven electric field. Phys. Rev. D 98, 103512 (2018) 10.1103/PhysRevD.98.103512 https://arxiv.org/abs/1807.037531807.03753 [hep-th] [Chua et al.2019]Chua:2018dqh Chua, W.Z., Ding, Q., Wang, Y., Zhou, S.: Imprints of Schwinger effect on primordial spectra. JHEP 04, 066 (2019) 10.1007/JHEP04(2019)066 https://arxiv.org/abs/1810.098151810.09815 [hep-th] [Shakeri et al.2019]Shakeri:2019mnt Shakeri, S., Gorji, M.A., Firouzjahi, H.: Schwinger mechanism during inflation. Phys. Rev. D 99(10), 103525 (2019) 10.1103/PhysRevD.99.103525 https://arxiv.org/abs/1903.053101903.05310 [hep-th] [Sobol et al.2018]Sobol:2018djj Sobol, O.O., Gorbar, E.V., Kamarpour, M., Vilchinskii, S.I.: Influence of backreaction of electric fields and Schwinger effect on inflationary magnetogenesis. Phys. Rev. D 98(6), 063534 (2018) 10.1103/PhysRevD.98.063534 https://arxiv.org/abs/1807.098511807.09851 [hep-ph] [Sobol et al.2019]Sobol:2019xls Sobol, O.O., Gorbar, E.V., Vilchinskii, S.I.: Backreaction of electromagnetic fields and the Schwinger effect in pseudoscalar inflation magnetogenesis. Phys. Rev. D 100(6), 063523 (2019) 10.1103/PhysRevD.100.063523 https://arxiv.org/abs/1907.104431907.10443 [astro-ph.CO] [Gorbar et al.2021]Gorbar:2021rlt Gorbar, E.V., Schmitz, K., Sobol, O.O., Vilchinskii, S.I.: Gauge-field production during axion inflation in the gradient expansion formalism. Phys. Rev. 
D 104(12), 123504 (2021) 10.1103/PhysRevD.104.123504 https://arxiv.org/abs/2109.016512109.01651 [hep-ph] [Gorbar et al.2022]Gorbar:2021zlr Gorbar, E.V., Schmitz, K., Sobol, O.O., Vilchinskii, S.I.: Hypermagnetogenesis from axion inflation: Model-independent estimates. Phys. Rev. D 105(4), 043530 (2022) 10.1103/PhysRevD.105.043530 https://arxiv.org/abs/2111.047122111.04712 [hep-ph] [Kluger et al.1991]Kluger:1991ib Kluger, Y., Eisenberg, J.M., Svetitsky, B., Cooper, F., Mottola, E.: Pair production in a strong electric field. Phys. Rev. Lett. 67, 2427–2430 (1991) 10.1103/PhysRevLett.67.2427 [Kluger et al.1992]Kluger:1992gb Kluger, Y., Eisenberg, J.M., Svetitsky, B., Cooper, F., Mottola, E.: Fermion pair production in a strong electric field. Phys. Rev. D 45, 4659–4671 (1992) 10.1103/PhysRevD.45.4659 [Schmidt et al.1998]Schmidt:1998vi Schmidt, S.M., Blaschke, D., Röpke, G., Smolyansky, S.A., Prozorkevich, A.V., Toneev, V.D.: A quantum kinetic equation for particle production in the Schwinger mechanism. Int. J. Mod. Phys. E 7, 709–722 (1998) 10.1142/S0218301398000403 https://arxiv.org/abs/hep-ph/9809227hep-ph/9809227 [Kluger et al.1998]Kluger:1998bm Kluger, Y., Mottola, E., Eisenberg, J.M.: The quantum Vlasov equation and its Markov limit. Phys. Rev. D 58, 125015 (1998) 10.1103/PhysRevD.58.125015 https://arxiv.org/abs/hep-ph/9803372hep-ph/9803372 [Schmidt et al.1999]Schmidt:1998zh Schmidt, S.M., Blaschke, D., Röpke, G., Prozorkevich, A.V., Smolyansky, S.A., Toneev, V.D.: NonMarkovian effects in strong field pair creation. Phys. Rev. D 59, 094005 (1999) 10.1103/PhysRevD.59.094005 https://arxiv.org/abs/hep-ph/9810452hep-ph/9810452 [Bloch et al.1999]Bloch:1999eu Bloch, J.C.R., Mizerny, V.A., Prozorkevich, A.V., Roberts, C.D., Schmidt, S.M., Smolyansky, S.A., Vinnik, D.V.: Pair creation: Back reactions and damping. Phys. Rev. D 60, 116011 (1999) 10.1103/PhysRevD.60.116011 https://arxiv.org/abs/nucl-th/9907027nucl-th/9907027 [Alkofer et al.2001]Alkofer:2001ik Alkofer, R., Hecht, M.B., Roberts, C.D., Schmidt, S.M., Vinnik, D.V.: Pair creation and an X-ray free electron laser. Phys. Rev. Lett. 87, 193902 (2001) 10.1103/PhysRevLett.87.193902 https://arxiv.org/abs/nucl-th/0108046nucl-th/0108046 [Blaschke et al.2019]Blaschke:2019pnj Blaschke, D.B., Juchnowski, L., Otto, A.: Kinetic approach to pair production in strong fields—two lessons for applications to heavy-ion collisions. Particles 2(2), 166–179 (2019) 10.3390/particles2020012 [Gorbar et al.2019]Gorbar:2019fpj Gorbar, E.V., Momot, A.I., Sobol, O.O., Vilchinskii, S.I.: Kinetic approach to the Schwinger effect during inflation. Phys. Rev. D 100(12), 123502 (2019) 10.1103/PhysRevD.100.123502 https://arxiv.org/abs/1909.103321909.10332 [gr-qc] [Sobol et al.2020]Sobol:2020frh Sobol, O.O., Gorbar, E.V., Momot, A.I., Vilchinskii, S.I.: Schwinger production of scalar particles during and after inflation from the first principles. Phys. Rev. D 102(2), 023506 (2020) 10.1103/PhysRevD.102.023506 https://arxiv.org/abs/2004.126642004.12664 [gr-qc] [Birrell and Davies1984]Birrell-book Birrell, N.D., Davies, P.C.W.: Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge University Press, Cambridge (1984). 10.1017/CBO9780511622632 [Parker and Toms2009]Parker-book Parker, L., Toms, D.: Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity. Cambridge Monographs on Mathematical Physics. Cambridge University Press, Cambridge (2009). 
10.1017/CBO9780511813924 [Parker1968]Parker:1968mv Parker, L.: Particle creation in expanding universes. Phys. Rev. Lett. 21, 562–564 (1968) 10.1103/PhysRevLett.21.562 [Parker1969]Parker:1969au Parker, L.: Quantized Fields and Particle Creation in Expanding Universes. I. Phys. Rev. 183(5), 1057–1068 (1969) 10.1103/PhysRev.183.1057 [Bunch and Davies1978]Bunch:1978yq Bunch, T.S., Davies, P.C.W.: Quantum field theory in de Sitter space: Renormalization by point splitting. Proc. Roy. Soc. Lond. A 360, 117–134 (1978) 10.1098/rspa.1978.0060 [Srednicki2007]Srednicki-book Srednicki, M.: Quantum Field Theory, 1st edition edn. Cambridge University Press, Cambridge (2007) [Starobinsky1980]Starobinsky:1980te Starobinsky, A.A.: A new type of isotropic cosmological models without singularity. Phys. Lett. B 91, 99–102 (1980) 10.1016/0370-2693(80)90670-X [Christensen1976]Christensen:1976vb Christensen, S.M.: Vacuum expectation value of the stress tensor in an arbitrary curved background: The covariant point separation method. Phys. Rev. D 14, 2490–2501 (1976) 10.1103/PhysRevD.14.2490 [Adler1969]Adler:1969gk Adler, S.L.: Axial vector vertex in spinor electrodynamics. Phys. Rev. 177, 2426–2438 (1969) 10.1103/PhysRev.177.2426 [Bell and Jackiw1969]Bell:1969ts Bell, J.S., Jackiw, R.: A PCAC puzzle: π^0 →γγ in the σ model. Nuovo Cim. A 60, 47–61 (1969) 10.1007/BF02823296 | http://arxiv.org/abs/2311.15981v2 | {
"authors": [
"Anastasia V. Lysenko",
"Oleksandr O. Sobol"
],
"categories": [
"hep-ph",
"astro-ph.CO"
],
"primary_category": "hep-ph",
"published": "20231127162750",
"title": "Quantum kinetic approach to the Schwinger production of scalar particles in an expanding universe"
} |
The Common Workflow Scheduler Interface: Status Quo and Future Plans Fabian Lehmann ([email protected], ORCID 0000-0003-0520-0792), Humboldt-Universität zu Berlin, Unter den Linden 6, Berlin, Germany, 10099; Jonathan Bader ([email protected], ORCID 0000-0003-0391-728X), Technische Universität Berlin, Straße des 17. Juni 135, Berlin, Germany, 10623; Lauritz Thamsen ([email protected], ORCID 0000-0003-3755-1503), University of Glasgow, University Avenue, Glasgow, United Kingdom, G12 8QQ; Ulf Leser ([email protected], ORCID 0000-0003-2166-9582), Humboldt-Universität zu Berlin, Unter den Linden 6, Berlin, Germany, 10099. Nowadays, many scientific workflows from different domains, such as Remote Sensing, Astronomy, and Bioinformatics, are executed on large computing infrastructures managed by resource managers. Scientific workflow management systems (SWMS) support the workflow execution and communicate with the infrastructures' resource managers. However, the communication between SWMS and resource managers is complicated by a) inconsistent interfaces between SWMS and resource managers and b) the lack of support for workflow dependencies and workflow-specific properties. To tackle these issues, we developed the Common Workflow Scheduler Interface (CWSI), a simple yet powerful interface to exchange workflow-related information between a SWMS and a resource manager, making the resource manager workflow-aware. The first prototype implementations show that the CWSI can reduce the makespan by up to 25% already with simple but workflow-aware strategies. In this paper, we show how existing workflow resource management research can be integrated into the CWSI. This work was presented at the 18th Workshop on Workflows in Support of Large-Scale Science (WORKS 2023) and was published as part of the workshop paper “Novel Approaches Toward Scalable Composable Workflows in Hyper-Heterogeneous Computing Environments” in the Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (SC-W '23) <https://doi.org/10.1145/3624062.3626283> § INTRODUCTION Analyzing large datasets is the daily business of many scientists <cit.>. The data analysis often involves multiple dependent steps, which can be organized as a workflow <cit.>. As these workflows are becoming increasingly complex and datasets easily exceed hundreds of gigabytes or even terabytes <cit.>, scientists use scientific workflow management systems (SWMS), such as Nextflow, Airflow, or Argo, and computer clusters. One essential feature of SWMSs is communication with a resource manager, such as SLURM, Kubernetes, or OpenPBS. Therefore, the SWMS submits ready-to-run tasks to the resource manager, and the resource manager takes over the responsibility for assigning these tasks to a node that executes them. This simplifies the workflow execution on large-scale computing infrastructures and hides the complexity from the scientist. However, as with SWMSs, there is also a variety of resource managers available, and different clusters may use a different one. In a worst-case scenario, the SWMS preferred by the scientist does not support the cluster's resource manager at all. Even if the SWMS supports a given resource manager, features beyond submitting tasks and awaiting their completion are frequently not supported. In this paper, we first give an overview of the Common Workflow Scheduler (CWS) and the Common Workflow Scheduler Interface (CWSI), both of which we first presented in <cit.>.
The CWSI is used to exchange workflow-related information between SWMSs and resource managers. Second, we present prior results, showing promising outcomes when using the CWSI with workflow-aware resource management methods. Moving on, we outline the SWMSs for which we have started to implement CWSI support and demonstrate how the CWS can serve as a central point for provenance. Last, we illustrate how the CWS can be extended with new scheduling, resource allocation, and runtime prediction methods. § COMMON WORKFLOW SCHEDULER In the present landscape, each resource manager has its own unique way of handling task submissions. For example, a task's definition significantly differs between SLURM and Kubernetes. While SLURM supports task dependencies, Kubernetes lacks this feature. To address the challenge that resource managers schedule workflow tasks without workflow awareness, we developed the Common Workflow Scheduler (CWS) <cit.>. The CWS allows for the transfer of essential information, such as input files and CPU and memory requests, along with task-specific parameters, using the Common Workflow Scheduler Interface (CWSI). Task-specific parameters vary for each task invocation and are passed on to the utilized tools. For further details on the interface, we refer to our previous paper <cit.>. In Figure <ref>, we provide an architectural overview for a single resource manager, in this case, for Kubernetes. The CWS runs as a component in the resource manager and exposes the CWSI. A resource manager has to implement the CWS with its interface once. Conversely, a workflow engine needs to implement support for the CWSI to work with all resource managers already offering the CWSI. SWMSs such as Airflow, Nextflow, or Argo send their requests, which are then kept in the CWS's in-memory storage. From this storage, the CWS can fetch the workflow graph and task dependencies and use this information for scheduling. This storage can further be used for provenance to trace the workflow execution; we elaborate on this in Section <ref>. The CWS can be extended with task runtime and resource predictors that read task information from the storage and learn task characteristics. Such learned characteristics can then be used to predict the demands of upcoming tasks, which is helpful for better scheduling. We provide examples of such prediction strategies in Section <ref>. Notably, workflow engines with CWSI support do not need their own scheduler component. Instead, all ready-to-run tasks are submitted to the resource manager, and the scheduling happens there. We have implemented a plugin[<https://github.com/CommonWorkflowScheduler/nf-cws>] for the SWMS Nextflow to communicate via the CWSI, and the CWS for the resource manager Kubernetes[<https://github.com/CommonWorkflowScheduler/KubernetesScheduler>]. Figure <ref> shows the results from running nf-core workflows with the original Nextflow-Kubernetes interaction (Original strategy) and the Rank (Min) Round Robin scheduling algorithm. nf-core is a collection of best-practice Nextflow workflows, which all come with small test sets. The Rank (Min) Round Robin strategy, on average, outperformed the other strategies tested, with a median runtime improvement of up to 24.8% and an average reduction of 10.8% compared to the original strategy <cit.>. § SWMS SUPPORT We started by implementing the CWSI for a resource manager, Kubernetes, and a SWMS, Nextflow. We are now actively working to extend our project to support other popular SWMSs, namely Airflow and Argo, to further explore and demonstrate the benefits of the CWSI.
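To give a flavor of the information flow sketched above (the concrete wire format is defined by the CWSI itself and is not reproduced here), the following Python sketch shows the kind of task registration a SWMS plugin could send to the CWS. All field and endpoint names are illustrative assumptions for exposition, not the actual CWSI schema:

```python
import json

# Illustrative only: field names and the endpoint are assumptions, not the
# actual CWSI schema. They mirror the information the CWSI transfers:
# input files, CPU and memory requests, dependencies, and task parameters.
task = {
    "workflowId": "wf-42",
    "taskId": "align-sample-7",
    "inputFiles": ["/data/sample7.fastq"],
    "cpus": 4,
    "memoryMiB": 8192,
    "dependsOn": ["trim-sample-7"],   # workflow edges make the scheduler workflow-aware
    "parameters": {"reference": "hg38"},
}

# A SWMS plugin would send such a request to the CWS exposed by the resource
# manager, e.g., POST http://<cws-host>/v1/workflow/wf-42/task.
print(json.dumps(task, indent=2))
```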
Below, we describe these three SWMSs and discuss the integration of the CWSI. Nextflow is a workflow engine initially designed for bioinformatics but gaining uptake in other domains as well; it is used by more than 1,000 organizations <cit.>. One of the main advantages of Nextflow is its support for, at the time of writing, 20 different resource managers. This broad support makes it easy to port Nextflow workflows between environments. The support is achieved by abstracting the resource manager away from the scientist, but also from the internal Nextflow logic. Accordingly, Nextflow only supports the basic features of resource managers. For example, on SLURM, the task dependency feature is not used. Thus, Nextflow can profit from providing additional workflow context to the resource manager. Airflow is an Apache project designed for workflow management. Similar to Nextflow, Airflow supports Kubernetes as a resource manager and is also not exclusively tied to it. Airflow supports workflow-aware scheduling for Kubernetes through a tailor-made strategy exclusively implemented for the Airflow-Kubernetes interplay. To this end, Airflow starts a big worker on every node for the whole workflow execution and assigns tasks to these worker pods, bypassing Kubernetes' task assignment logic. However, this strategy has a significant drawback: the big containers will request resources for the entire workflow execution time regardless of the actual load. As many workflows have a merge point somewhere, where the entire execution is waiting for one particular task, this strategy leads to substantial resource wastage. By integrating the CWSI into Airflow, we aim to retain its workflow-aware scheduling capabilities while preventing unnecessary resource requests throughout the runtime. This optimization ensures more efficient utilization of resources and minimizes wastage on a large scale. One big difference to our already existing Nextflow interaction is the knowledge of the physical DAG in Airflow. While this was foreseen in the development of the CWSI, we now have to make use of it in our CWS implementation. Argo is a SWMS designed exclusively for Kubernetes. However, since Kubernetes lacks support for task dependencies, Argo also submits each task individually, and Kubernetes then schedules them in a FIFO manner. This is comparable to the strategy of Nextflow and thus makes Argo an ideal candidate to support our CWSI. Just like Nextflow, Argo is expected to benefit in a similar way. We are currently working on developing an Argo extension to achieve this. § PROVENANCE WITH THE CWSI Workflow provenance is one aspect that needs to be addressed in all SWMSs <cit.>. Since the CWSI takes a central role in workflow execution, possessing comprehensive knowledge of the resource manager and the SWMS, it emerges as the most suitable entity for the management of provenance data. All SWMSs represent provenance differently, so provenance data are very heterogeneous <cit.>. Further, resource managers and SWMSs are only designed to gather a portion of the available data, each focusing on collecting data in its own scope <cit.>. Accordingly, the resource manager traces the node states while the SWMS collects task-related metrics. The CWSI is implemented individually for each resource manager and can thus support a resource manager's specific APIs to collect traces while also having knowledge about the workflow.
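To illustrate what such centralized provenance could look like, the sketch below shows one plausible per-task record combining resource-manager traces with SWMS-side task metadata; the field names are illustrative, not a fixed CWS schema:

```python
# Illustrative provenance record a CWS could keep per task attempt, merging
# resource-manager traces (node, timestamps) with SWMS-side task metadata
# (inputs, requests) and monitored values (peak memory) in one place.
record = {
    "workflowId": "wf-42",
    "taskId": "align-sample-7",
    "node": "worker-03",
    "submitted": "2023-11-27T15:38:00Z",
    "started": None,                  # filled in from resource-manager events
    "finished": None,
    "cpusRequested": 4,
    "peakMemoryMiB": 6144,            # monitored, not requested, value
    "inputSizeBytes": 1_234_567_890,
    "dependsOn": ["trim-sample-7"],
}
```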
By gathering and storing all metrics and task dependencies in a centralized manner, provenance becomes more streamlined and manageable. Another significant advantage of using the CWSI for provenance is that the data will be available across different SWMSs, even if a particular SWMS does not yet provide built-in provenance data. This interoperability ensures that provenance information can be maintained consistently and comprehensively, enhancing workflow traceability and reproducibility. In turn, researchers and scientists can have greater confidence in the reliability and trustworthiness of their results. § ADVANCED RESOURCE MANAGEMENT WITH THE CWSI As we saw in the previous section, the CWS provides information about task executions and performance metrics. This information allows possible interface extensions to derive task characteristics from it. Such task characteristics can be the predicted runtime or the CPU and memory usage, which can be used for scheduling and fed back to the SWMS. Many scheduling strategies, such as HEFT <cit.>, require knowledge of this. In the following, we will show how the CWSI can be used to implement approaches for task resource prediction, task runtime prediction, and scheduling with real workflow systems. Task resource prediction: Predicting the resources a task instance will utilize enables workflow optimization by reducing resource wastage and increasing performance <cit.>. Several research approaches tackle this challenge with analytic methods, regression models, or reinforcement learning, and achieve a significant reduction in resource wastage <cit.>. A key challenge is to avoid underprovisioning of resources, as this leads to task failures, while overprovisioning leads to high resource wastage <cit.>. These approaches frequently assume a relationship between input data size and a task's resource usage to predict peak memory consumption, i.e., a task's memory usage increases with bigger inputs. Further, many of these approaches conduct a form of online learning, incorporating monitoring data from task executions as feedback. The CWSI provides information to train such models, e.g., the number of file inputs, input sizes, or peak memory, which are retrieved from monitoring and stored. As these metrics are constantly gathered and updated, online learning approaches are also applicable. Therefore, we plan to integrate existing task resource prediction methods into our CWSI prototype to a) increase workflow performance and b) evaluate them under real-world conditions. Task Runtime Prediction: Predicting task runtimes is essential, as many resource management techniques, such as scheduling, rely on accurate runtime estimates in advance. To this end, many existing research approaches rely on historical data to build prediction models <cit.>. Many of them build on machine-learning models like neural networks, clustering methods, or regression methods <cit.>. While complex models in particular have been shown to achieve low prediction errors, they also require a lot of training data. As an alternative, we recently presented Lotaru <cit.>, an online approach that can cope with cold-start problems and is able to predict task runtimes without historical traces. To do this, Lotaru executes microbenchmarks and quickly runs the workflow with reduced input data locally.
Next, it predicts a task's execution time using a Bayesian linear regression based on the data points collected from the local workflow profiling and the microbenchmarks. Since Lotaru and other research approaches that support heterogeneous infrastructures require machine characteristics, we are extending our CWSI to store such information and extending the prototype to gather these metrics with Kubestone[<kubestone.io>]. We are currently incorporating Lotaru into the CWSI prototype to handle unknown workflows or workflows with a lack of historical data. Further, we plan to implement other research methods that perform better with more training data, as provided by the CWSI's provenance store. Workflow Task Scheduling: Applying sophisticated scheduling algorithms helps to achieve optimization objectives such as makespan, cost, or energy reduction. Although extensive research in this field exists, many approaches lack uptake in real-world scenarios. For instance, YARN schedules tasks in a fair manner <cit.>, while Kubernetes applies a round-robin-like strategy <cit.>. Due to the dynamic nature of workflows and infrastructures, in practice, only dynamic scheduling approaches should be considered, i.e., approaches that can adjust their execution plans or react to failures in the infrastructure. Some of these dynamic approaches <cit.> are based on the static heuristic HEFT <cit.> and require knowledge about task runtimes and communication times between the nodes. Our own prior work, Tarema <cit.>, does not require such metrics but dynamically classifies incoming tasks according to their resource usage to select the best-fitting node. The CWSI, together with task runtime and resource prediction, provides additional information to apply more sophisticated scheduling techniques. We are currently implementing the Tarema strategy in our CWSI prototype and plan to add further approaches enabled by the additional data provided by the CWSI and its plugins. § CONCLUSION In this paper, we presented the status quo of the Common Workflow Scheduler Interface and described the available plugin for Nextflow and the integration into Kubernetes. Further, we have demonstrated that by implementing the CWSI alongside basic scheduling approaches based on rank and file size, we achieve an average runtime reduction of 10.8%. Next, we outlined upcoming support for the workflow engines Airflow and Argo and how to extend the storage to become the central place for workflow provenance. Additionally, we presented our next steps to implement resource allocation, runtime prediction, and new scheduling methods. We expect that the planned workflow algorithms, which consider cluster heterogeneity and task runtimes as outlined in this paper, will further improve resource efficiency. This work was funded by the German Research Foundation (DFG), CRC 1404: "FONDA: Foundations of Workflows for Large-Scale Scientific Data Analysis." | http://arxiv.org/abs/2311.15929v1 | {
"authors": [
"Fabian Lehmann",
"Jonathan Bader",
"Lauritz Thamsen",
"Ulf Leser"
],
"categories": [
"cs.DC"
],
"primary_category": "cs.DC",
"published": "20231127153807",
"title": "The Common Workflow Scheduler Interface: Status Quo and Future Plans"
} |
The weighted ancestor problem on a rooted node-weighted tree T is a generalization of the classic predecessor problem: construct a data structure for a set of integers that supports fast predecessor queries. Both problems are known to require Ω(loglog n) time for queries provided O(n polylog n) space is available, where n is the input size. The weighted ancestor problem has attracted a lot of attention from the combinatorial pattern matching community due to its direct application to suffix trees. In this formulation of the problem, the nodes are weighted by string depth. This attention has culminated in a data structure for weighted ancestors in suffix trees with O(1) query time and an O(n)-time construction algorithm [Belazzougui et al., CPM 2021]. In this paper, we consider a different version of the weighted ancestor problem, where the nodes are weighted by any function ω that maps the nodes of T to positive integers, such that ω(u)≤ size(u) for any node u and ω(u_1)≤ω(u_2) if node u_1 is a descendant of node u_2, where size(u) is the number of nodes in the subtree rooted at u. In the size-constrained weighted ancestor (SCWA) problem, for any node u of T and any integer k, we are asked to return the lowest ancestor w of u with weight at least k. We show that for any rooted tree with n nodes, we can locate node w in O(1) time after O(n)-time preprocessing. In particular, this implies a data structure for the SCWA problem in suffix trees with O(1) query time and O(n)-time preprocessing, when the nodes are weighted by size. We also show several string-processing applications of this result. § INTRODUCTION In the classic predecessor problem <cit.>, we are given a set S of keys from a universe U with a total order. The goal is to preprocess set S into a compact data structure supporting the following on-line queries: for any element q∈ U, return the maximum p∈ S such that p≤ q; p is called the predecessor of q. The weighted ancestor problem, introduced by Farach and Muthukrishnan in <cit.>, is a natural generalization of the predecessor problem on rooted node-weighted trees. In particular, given a rooted tree T, whose nodes are weighted by positive integers and such that these weights decrease when ascending from any node to the root, the goal is to preprocess tree T into a compact data structure supporting the following on-line queries: for any given node u and any integer k, return the farthest ancestor of u whose weight is at least k. Both the predecessor and the weighted ancestor problems are known to require Ω(loglog n) time for queries provided O(n polylog n) space is available, where n is the input size of the problem <cit.>. The weighted ancestor problem has attracted a lot of attention from the combinatorial pattern matching community <cit.> due to its direct application to suffix trees <cit.>. The suffix tree of a string X is the compacted trie of the set of suffixes of X. In this formulation of the problem, a node u is weighted by string depth: the length of the string spelled from the root of the suffix tree to u; and a weighted ancestor query for two integers i and ℓ returns the locus of substring X[i..i+ℓ-1] in the suffix tree of X.
This attention has culminated in a data structure for weighted ancestors in suffix trees, given by Belazzougui, Kosolobov, Puglisi, and Raman <cit.>, supporting O(1)-time queries after an O(n)-time preprocessing. In this paper, we consider a different version of the weighted ancestor problem. Let T be a rooted tree on a set V of n nodes. By size(u), we denote the number of nodes in the subtree rooted at a node u∈ V. Let ω:V→ℕ denote any function that maps the nodes of T to positive integers, such that ω(u)≤ size(u) for any node u∈ V and ω(u_1)≤ω(u_2) if node u_1 ∈ V is a descendant of node u_2 ∈ V. The latter is also known as the max-heap property: the weight of each node is less than or equal to the weight of its parent, with the maximum-weight element at the root. For any node u ∈ V and any positive integer k, a size-constrained weighted ancestor query, denoted by SCWA(u,k)=w, asks for the lowest ancestor w∈ V of u with weight at least k. The size-constrained weighted ancestor problem is to preprocess T into a compact data structure that supports fast SCWA queries. We formalize the problem as follows: Size-Constrained Weighted Ancestor Query (SCWA) Input: A rooted tree T on a set V of n nodes weighted by a function ω:V→ℕ. Query: Given a node u∈ V and an integer k>0, return the lowest ancestor w of u with ω(w)≥ k. We assume throughout the standard word RAM model of computation with word size Θ(log n); basic arithmetic and bit-wise operations on O(log n)-bit integers take O(1) time. We show the following result. Theorem. For any rooted tree with n nodes weighted by a function ω, we can construct a data structure answering SCWA queries in O(1) time after O(n)-time and O(n)-space preprocessing. In particular, Theorem <ref> implies a data structure for the problem in suffix trees with O(1) query time and O(n)-time preprocessing, when the nodes are weighted by size. We also show several string-processing applications of this result because ω(u), with ω(u)≤ size(u), can be defined as the number of leaf nodes in the subtree rooted at u. The intuition here is that the number of leaf nodes in the subtree rooted at u in a suffix tree corresponds to the number of occurrences of the string represented by the root-to-u path. § PRELIMINARIES For any bit string B of length n and any α∈{0, 1}, the classic rank and select queries are defined as follows: * rank_α: for any given i ∈ [1,n], it returns the number of ones (or zeros) in B[1..i]; more formally, rank_α(B,i) = |{j ∈ [1, i] : B[j] = α}|. * select_α: for any given rank i, it returns the leftmost position where the bit vector contains a one (or zero) with rank i; more formally, select_α(B,i) = min{j ∈ [1, n] : rank_α(B,j) = i }. The following result is known. Let B be a bit string of length n stored in O(n/log n) words. We can preprocess B in O(n/log n) time into a data structure of n + o(n) bits supporting rank and select queries in O(1) time. Bit strings can also be used as a representation of monotonic integer sequences supporting predecessor queries. Assume we have a set S of n keys from a universe U with a total order. In the predecessor problem, we are given a query element q∈ U, and we are to find the maximum p∈ S such that p≤ q (the predecessor of q). The following result is known for a special case of the predecessor problem. We can preprocess a set of log^O(1) n integers in linear time and space to support predecessor queries in O(1) time. § CONSTANT-TIME QUERIES USING O(N LOG N) SPACE We first show how to solve the problem in O(1) time using O(n log n) space.
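Before presenting it, we fix the query semantics with a deliberately naive reference implementation (a linear walk towards the root); the data structures of this and the next section answer the same queries in O(1) time. This is only a correctness oracle, not the method of this paper:

```python
def scwa_naive(parent, weight, u, k):
    """Lowest ancestor w of u (u itself included) with weight[w] >= k.

    parent[root] == root; weight is non-decreasing towards the root
    (the max-heap property), so the walk is well-defined.
    Returns None if even the root has weight < k.
    """
    w = u
    while weight[w] < k:
        if parent[w] == w:            # reached the root without success
            return None
        w = parent[w]
    return w

# Toy tree: node 0 is the root; weights satisfy omega(u) <= size(u)
# and the max-heap property.
parent = [0, 0, 1, 1, 2]
weight = [5, 4, 2, 1, 1]
assert scwa_naive(parent, weight, 4, 4) == 1
assert scwa_naive(parent, weight, 3, 5) == 0
assert scwa_naive(parent, weight, 4, 1) == 4
```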
The O(n log n)-space solution forms the basis for our linear-time and linear-space solution in the next section. §.§ Heavy-Path Decomposition Let T be a rooted tree with n nodes. We compute the heavy-light decomposition of T in O(n) time <cit.>. Recall that, for any node u in T, we define size(u) to be the number of nodes in the subtree of T rooted at u. We call an edge (u,v) of T heavy if size(v) is maximal among all edges originating from u (breaking ties arbitrarily). All other edges are called light. We call a node that is reached from its parent through a heavy edge heavy; otherwise, the node is called light. The heavy path of T is the path that starts at the root of T and at each node on the path descends to the heavy child as defined above. The heavy-path decomposition is then defined recursively: it is a union of the heavy path of T and the heavy-path decompositions of the off-path subtrees of the heavy path. A well-known property of this decomposition is that every root-to-node path in T passes through O(log n) light nodes. In particular, the following lemma is implied. Let T be a rooted tree with n nodes. Any root-to-leaf path in T consists of at most log n + O(1) heavy paths. §.§ Data Structure We construct a heavy-path decomposition of T. Consider a heavy path H = v_1, …, v_ℓ. We construct a bit string B(H) that represents the differences between node weights using unary coding: Suppose that nodes v_1, …, v_ℓ of H are listed in decreasing order of their depth and let δ(v_i)=ω(v_i)-ω(v_i-1), for all i>1. B(H)=un(ω(v_1))· un(δ(v_2))·…· un(δ(v_i))·…· un(δ(v_ℓ)), where un(i) denotes the unary code of i, i.e., un(i) consists of i 1's followed by a 0. The important property of our encoding is that the total number of 0-bits in B(H) is ℓ and the total number of 1-bits is ω(v_ℓ). Let H=u_1u_2… u_6 be the heavy path of T from Figure <ref>. We have ℓ=6 and ω(u_1)=1, ω(u_2)=2, ω(u_3)=5, ω(u_4)=6, ω(u_5)=9, ω(u_6)=16. We have B(H)=1010111010111011111110. For instance, the second 1 denotes δ(u_2)=ω(u_2)-ω(u_1)=1. The leftmost occurrence of 111 denotes δ(u_3)=ω(u_3)-ω(u_2)=3 1's. For any heavy path H, we can construct B(H) in O(1 + ℓ/log n) time using standard word RAM bit manipulations. By Lemma <ref>, every leaf node of T has O(log n) ancestors v_t, such that v_t is the topmost node of some heavy path H. Hence, the total weight of all topmost nodes, summed over all heavy paths H, is O(n log n). Thus, the total length of all bit strings B(H) is O(n log n), and we can construct them in O(n + n log n/log n) = O(n) time. We store each such bit string according to Lemma <ref> to support O(1)-time rank and select queries using O(n) preprocessing time and O(n) words of space. Furthermore, for each leaf node ℓ in T we store the weights of the top nodes of each heavy path on the path from the root to ℓ. By Lemma <ref>, there are O(log n) such top nodes for each leaf. For every leaf node we store the weights of its top node ancestors in a fusion tree data structure according to Lemma <ref>. The total space used by all such fusion trees is O(n log n) words and the preprocessing time is O(n log n). §.§ Queries Suppose we are given a node u and an integer k as an SCWA(u,k) query. We are looking for the lowest ancestor w of u with weight at least k. If the weight of u is at least k, we return u. Otherwise we proceed as follows. First, we identify the heavy path H_w that contains node w: we find an arbitrary leaf descendant u_ℓ of u; then, using the fusion tree of u_ℓ, we find the lowest ancestor u' of u_ℓ with weight at least k, such that u' is a top node. H_w is the heavy path such that u' is its top node.
When we find H_w, we answer a query f=rank_0(B(H_w),j) for j=select_1(B(H_w),k) using Lemma <ref> in O(1) time. Let w_1 denote the lowest ancestor of u on the heavy path H_w (see Figure <ref>). Let w_2 denote the (f+1)-th node on H_w. The node w is the highest node among w_1 and w_2. The query time is O(1) by Lemma <ref> for finding H_w and by Lemma <ref> for finding f. We next show an example of how we use the bit strings to find f and thus the (f+1)-th node. Let B(H)=1010111010111011111110 from Example <ref>, u=u_2 from Figure <ref>, and k=7. Then j=select_1(B(H),7)=11 and f=rank_0(B(H),11)=4. The output node is u_5, the (f+1)-th node on H. In summary, we have shown the following result, which we will improve in the next section. For any rooted tree with n nodes weighted by a function ω, we can construct a data structure answering SCWA queries in O(1) time after O(n log n)-time and O(n log n)-space preprocessing. § CONSTANT-TIME QUERIES USING O(N) SPACE We now improve the above solution to the SCWA problem to a linear-time and linear-space preprocessing. We will reuse the previous section's linear-time heavy-path decomposition and the corresponding bit string encoding. The key challenge is identifying the top nodes of heavy paths in O(1) time using linear space. §.§ ART Decomposition The ART decomposition, proposed by Alstrup, Husfeldt, and Rauhe <cit.>, partitions a tree into a top tree and several bottom trees with respect to a parameter χ. Each node v of minimal depth, with no more than χ leaves below it, is the root of a bottom tree consisting of v and all its descendants. The top tree consists of all nodes that are not in any bottom tree. The ART decomposition satisfies the following important property: Let T be a tree with ℓ leaf nodes. The ART decomposition of T with parameter χ produces a top tree with at most O(ℓ/χ) leaves. Such a decomposition of T can be computed in linear time. §.§ Data Structure Recall that T consists of n nodes. As discussed in Section <ref>, we compute the heavy-path decomposition of T, construct bit strings for each heavy path, and preprocess the strings to support rank and select queries in O(1) time. This takes O(n) preprocessing time and space, allowing us to answer queries on a heavy path in O(1) time. Thus, it remains to find the top nodes of heavy paths in O(1) time using linear space. First, we construct the contracted tree C_T of T obtained by contracting all edges of heavy paths in T. In particular, this leaves all the light edges from T in C_T and removes all the heavy edges from T (see Figure <ref>). We then apply the ART decomposition on C_T (see Figure <ref>) with parameter χ^2, where χ = ϵlog n/loglog n and ϵ is a positive constant. We apply the ART decomposition again with parameter χ (see Figure <ref>) on each resulting bottom tree. The resulting partition of C_T contains three levels of trees that we call the top tree, the middle trees, and the bottom trees. Contracting T takes O(n) time. By Lemma <ref>, the ART decompositions cost O(n) total time. Let us first consider the top tree. As in Section <ref>, we store a fusion tree data structure for each leaf node in the top tree. By Lemma <ref>, the top tree has O(|C_T|/χ^2) leaves and hence, by Lemmas <ref> and <ref>, this uses O(|C_T|/χ^2·log n) = O(n (loglog n)^2/log n) = o(n) space and preprocessing time. For the middle or bottom trees, we tabulate the answers to all possible queries in a global table. The index in the table is given by a tree encoding together with the node u and the integer k of the SCWA query. The corresponding value in the table is the resulting return node of the query.
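As a quick aside, the heavy-path mechanism of the previous section, as instantiated in Example <ref>, is easy to verify mechanically. The following sketch builds B(H) from the weights of Example <ref> and replays the select/rank computation with plain Python stand-ins for the O(1)-time operations of Lemma <ref>:

```python
def unary(i):              # un(i): i ones followed by a zero
    return "1" * i + "0"

def select1(B, i):         # 1-based position of the i-th '1' in B
    count = 0
    for pos, bit in enumerate(B, start=1):
        count += bit == "1"
        if count == i:
            return pos

def rank0(B, i):           # number of '0's in B[1..i]
    return B[:i].count("0")

w = [1, 2, 5, 6, 9, 16]    # omega(u_1), ..., omega(u_6)
deltas = [w[0]] + [b - a for a, b in zip(w, w[1:])]
B = "".join(unary(d) for d in deltas)
assert B == "1010111010111011111110"

k = 7
j = select1(B, k)          # j = 11
f = rank0(B, j)            # f = 4, so the answer is the (f+1)-th node, u_5
assert (j, f) == (11, 4)
```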
We encode the input to a query as follows. We represent each middle and bottom tree compactly as a bit string encoding the tree structure and the weights of all nodes. Since each internal node in C_T is branching, the number of nodes in a middle or bottom tree is bounded by O(χ). Thus, we can encode the tree structure using O(χ) bits. The weight of a node in a middle or bottom tree is bounded by O(χ^2) or O(χ), respectively, and can thus be encoded in O(logχ) bits. Hence, we can encode the tree structure and all weights using O(χlogχ) bits. We encode the query node u using O(logχ) bits. Since the maximum weight is O(χ^2), we can also encode the query integer k using O(logχ) bits. Hence, the full encoding uses O(χlogχ) + O(logχ) + O(logχ) = O(χlogχ) bits. To encode the return node stored in the global table we use O(logχ) bits. Thus, the table uses 2^O(χlogχ)·O(logχ) = 2^O(ϵlog n) = o(n) bits for a sufficiently small constant ϵ > 0. Constructing the global table takes o(n) time. §.§ Queries Suppose we are given a node u and an integer k as an SCWA(u,k) query. We are looking for the lowest ancestor w of u with weight at least k. Let u_t denote the top node on the heavy path of u in T and let u_H denote the corresponding node in the contracted tree C_T. We find the lowest ancestor w_H of u_H with weight at least k in C_T. If u_H is in the top tree, we find w_H as described in Section <ref>. If u_H is in a middle or bottom tree, we use the global table to find w_H. If the result is not in the middle or bottom tree, we move up a level and query the middle or top tree. Each of these at most three queries takes O(1) time. Thus w_H is found in O(1) time. Suppose that w_H corresponds to a node w' in the initial tree and let H' denote the heavy path such that w' is its top node. As explained in Section <ref>, we can find the lowest ancestor of u with weight at least k on H' in O(1) time using rank and select queries on B(H'). In total, a query takes O(1) time. In summary, we have obtained the following result (Theorem <ref>, restated): For any rooted tree with n nodes weighted by a function ω, we can construct a data structure answering SCWA queries in O(1) time after O(n)-time and O(n)-space preprocessing. § STRING-PROCESSING APPLICATIONS In this section, we show several applications of Theorem <ref> on suffix trees. Recall that the number of leaf nodes in the subtree rooted at u in a suffix tree corresponds to the number of occurrences of the string represented by the root-to-u path. §.§ Internal Longest Frequent Prefix Internal pattern matching is an active topic <cit.> in the combinatorial pattern matching community. The internal longest frequent prefix problem is the following: preprocess a string X of length n over an integer alphabet Σ=[1,n^O(1)] into a compact data structure supporting the following on-line queries: * 𝖨𝖫𝖥𝖯_X(i, j, f): return the longest prefix of X[i..j] occurring at least f times in X. We first construct the suffix tree T of X in O(n) time <cit.>, and preprocess it in O(n) time for classic weighted ancestor queries <cit.> as well as for SCWA queries using Theorem <ref>. For SCWA queries, as ω(u), we use the number of leaf nodes in the subtree rooted at node u in T. Such an assignment satisfies the requested properties of ω(·), and can be done in linear time. Any 𝖨𝖫𝖥𝖯_X(i, j, f) query can be answered by first finding the locus u of X[i..j] in T in O(1) time using a classic weighted ancestor query on T, and, then, answering SCWA(u,f) in T in O(1) time using Theorem <ref>. We obtain the following result. For any string X of length n over alphabet Σ=[1,n^O(1)], we can construct a data structure that answers 𝖨𝖫𝖥𝖯_X queries in O(1) time after O(n)-time and O(n)-space preprocessing.
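For concreteness, the query semantics (as opposed to the O(1)-time mechanism above) can be phrased as a naive quadratic-time check:

```python
def ilfp_naive(X, i, j, f):
    """Longest prefix of X[i..j] (1-based, inclusive) occurring >= f times in X.

    A naive reference implementation of the query semantics only; the data
    structure above answers the same queries in O(1) time after O(n)-time
    preprocessing.
    """
    best = ""
    for end in range(i, j + 1):
        P = X[i - 1:end]     # the prefix X[i..end]
        occ = sum(X.startswith(P, s) for s in range(len(X)))
        if occ >= f:
            best = P
        else:
            break            # occurrence counts only drop as the prefix grows
    return best

assert ilfp_naive("abaababa", 2, 4, 3) == "ba"
```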
§.§ Longest Frequent Substring

The longest frequent substring problem is the following: preprocess a dictionary 𝒟 of d strings (documents) of total length n over an integer alphabet Σ=[1,n^{O(1)}] into a compact data structure supporting the following on-line queries:

* 𝖫𝖥𝖲_𝒟(P, f): return the longest substring of P that occurs in at least f documents of 𝒟.

We start by constructing the generalized suffix tree T of 𝒟 in O(n) time <cit.> and preprocess it in O(n) time for size-constrained weighted ancestor queries using Theorem <ref>. For these queries, as the weight of a node u we use the number of dictionary strings having at least one leaf node in the subtree rooted at u in T. This assignment satisfies the required properties of the weight function, and can be computed in linear time <cit.>. We find the locus v_i in T of the longest prefix of P[i..|P|] that occurs in some string of 𝒟. We can do this, for all i∈[1,|P|], using the matching statistics algorithm on T <cit.>. For each locus v_i, we trigger a query (v_i,f) using Theorem <ref>. We obtain the following result.

For any dictionary 𝒟 of total length n over alphabet Σ=[1,n^{O(1)}], we can construct a data structure that answers 𝖫𝖥𝖲_𝒟(P, f) queries in O(|P|) time after O(n)-time and O(n)-space preprocessing.

An analogous result can be achieved for the following version of the longest frequent substring problem: preprocess a string X of length n over an integer alphabet Σ=[1,n^{O(1)}] into a compact data structure supporting the following on-line queries:

* 𝖫𝖥𝖲_X(P, f): return the longest substring of P that occurs at least f times in X.

In particular, instead of a generalized suffix tree, we now construct the suffix tree T of X and follow the same querying algorithm as above. Also, for the size-constrained weighted ancestor queries, as the weight of a node u we use the number of leaf nodes in the subtree rooted at u in T. Such an assignment satisfies the required properties of the weight function, and can be computed in linear time using a standard DFS traversal of T. We obtain the following result.

For any string X of length n over alphabet Σ=[1,n^{O(1)}], we can construct a data structure that answers 𝖫𝖥𝖲_X(P, f) queries in O(|P|) time after O(n)-time and O(n)-space preprocessing.

§.§ Frequency-constrained Substring Complexity

For a string X, a dictionary 𝒟 of d strings (documents), and a partition of [d] into τ intervals ℐ=I_1,…,I_τ, the function f_{X,𝒟,ℐ}(i,j) maps (i,j) to the number of distinct substrings of length i of X occurring in at least α_j and at most β_j documents of 𝒟, where I_j=[α_j,β_j]. The function f is known as the frequency-constrained substring complexity of X <cit.>. For example, for a dictionary 𝒟 of d=6 documents, a string X, and the partition I_1=[1,2], I_2=[3,4], I_3=[5,6], we have f_{X,𝒟,ℐ}(2,2)=3 when exactly three distinct length-2 substrings of X occur in a number of documents lying in I_2 (say, in 3, 4, and 3 documents, respectively). Let S be a 2D array such that S[i,j]=f_{X,𝒟,ℐ}(i,j). Pissis et al. <cit.> showed that after an O(n)-time preprocessing of a dictionary 𝒟 of total size n over an integer alphabet Σ=[1,n^{O(1)}], for any X and any partition ℐ of [d] into τ intervals given on-line, S can be computed in near-optimal O(|X| τ loglog d) time. Since S is of size |X|·τ, these bounds are nearly optimal with respect to the preprocessing and query times.

Their solution can be summarized as follows. In the preprocessing step, we construct the generalized suffix tree T of 𝒟. In querying, the first step is to construct the suffix tree of X and compute the document frequency of its nodes in O(|X|) time. In the second step, we enhance the suffix tree of X with O(|X|τ) nodes with document frequencies by answering queries on T in O(loglog d) time per query <cit.>.
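Before completing the walkthrough, the following brute-force Python sketch pins down the definition of f_{X,𝒟,ℐ} and of the array S; the dictionary and query string used here are hypothetical, since the worked example above did not survive extraction.

```python
def doc_frequency(D, s):
    """Number of documents of D containing s as a substring."""
    return sum(1 for doc in D if s in doc)

def freq_constrained_complexity(X, D, intervals, i, j):
    """f_{X,D,I}(i,j): distinct length-i substrings of X whose document
    frequency lies in intervals[j-1] = [alpha_j, beta_j]."""
    alpha, beta = intervals[j - 1]
    subs = {X[p:p + i] for p in range(len(X) - i + 1)}
    return sum(1 for s in subs if alpha <= doc_frequency(D, s) <= beta)

# Hypothetical instance with d = 6 documents and tau = 3 intervals.
D = ["abab", "abba", "baba", "aabb", "bbaa", "abab"]
I = [(1, 2), (3, 4), (5, 6)]
X = "abab"
S = [[freq_constrained_complexity(X, D, I, i, j) for j in (1, 2, 3)]
     for i in (1, 2, 3)]
print(S)  # one row per substring length i, one column per interval I_j
```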
In total, this second step takes O(|X| τ loglog d) time. In the third step, we infer a collection of length intervals, one per node of the enhanced suffix tree, and sort them in O(|X|τ) time using radix sort. In the last step, we sweep through the intervals from left to right to compute the array S in O(|X|τ) total time. We plug in Theorem <ref> for the second step (the size-constrained weighted ancestor queries). For these queries, as the weight of a node u we use the number of dictionary strings having at least one leaf node in the subtree rooted at u in T. Such an assignment satisfies the required properties of the weight function, and can be computed in linear time <cit.>. We obtain the following result.

For any dictionary 𝒟 of d strings of total length n over alphabet Σ=[1,n^{O(1)}], we can construct a data structure that answers S=f_{X,𝒟,ℐ} queries in O(|X|τ) time after O(n)-time and O(n)-space preprocessing.

§ ACKNOWLEDGMENTS

PB is supported by the Danish National Research Council (DFF 3105-00302B and DFF 9131-00069B). SPP is supported by the PANGAIA and ALPACA projects that have received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreements No 872539 and 956229, respectively. | http://arxiv.org/abs/2311.15777v1 | {
"authors": [
"Philip Bille",
"Yakov Nekrich",
"Solon P. Pissis"
],
"categories": [
"cs.DS"
],
"primary_category": "cs.DS",
"published": "20231127125159",
"title": "Size-constrained Weighted Ancestors with Applications"
} |
Tidal dissipation in rotating and evolving giant planets with application to exoplanet systems

Yaroslav A. Lazovik, Adrian J. Barker, Nils B. de Vries, Aurélie Astoul

January 14, 2024
==============================================================================================================================================

We study tidal dissipation in models of rotating giant planets with masses in the range 0.1 - 10 M_J throughout their evolution. Our models incorporate a frequency-dependent turbulent effective viscosity acting on equilibrium tides (including its modification by rapid rotation consistent with hydrodynamical simulations) and inertial waves in convection zones, and internal gravity waves in the thin radiative atmospheres. We consider a range of planetary evolutionary models for various masses and strengths of stellar instellation. Dissipation of inertial waves is computed using a frequency-averaged formalism fully accounting for planetary structures. Dissipation of gravity waves in the radiation zone is computed assuming these waves are launched adiabatically and are subsequently fully damped (by wave breaking/radiative damping). We compute modified tidal quality factors Q' and evolutionary timescales for these planets as a function of their ages. We find inertial waves to be the dominant mechanism of tidal dissipation in giant planets whenever they are excited. Their excitation requires the tidal period (P_tide) to be longer than half the planetary rotation (P_rot/2), and we predict inertial waves to provide a typical Q'∼ 10^3 (P_rot/1 d)^2, with values between 10^5 and 10^6 for a 10-day period. We show correlations of observed exoplanet eccentricities with tidal circularisation timescale predictions, highlighting the key role of planetary tides. A major uncertainty in planetary models is the role of stably-stratified layers resulting from compositional gradients, which we do not account for here, but which could modify predictions for tidal dissipation rates.

planet-star interactions – planetary systems – planets and satellites: interiors – planets and satellites: physical evolution

§ INTRODUCTION

Tidal interactions play a major role in the dynamics of star-planet and stellar binary systems, leading to planetary orbital migration <cit.>, orbital circularization <cit.>, spin-orbit re-alignment <cit.>, and rotational evolution <cit.>. Indeed, tidal interactions alter the architectures of exoplanetary systems, and the rotations of stars and planets. However, theoretical predictions for tidal evolution are currently uncertain, and they depend strongly on the specific prescriptions employed in the aforementioned papers. This motivates us to work towards developing more realistic treatments of tidal flows in stars and planets.

Some early steps towards a theory of tides were made long before the discovery of the first exoplanet from studying the Earth-Moon system <cit.>. More than a hundred years of research has substantially refined our knowledge of tidal interactions in fluid bodies such as stars and giant planets. Motivated by binary stars, some crucial steps were made by <cit.>, who separated the contributions of equilibrium (non-wavelike) and dynamical (wavelike) tides in linear theory.
Dynamical tides can be further divided into inertial waves (hereafter IWs, or magneto-inertial) and internal gravity waves (hereafter GWs, or magneto-gravito-inertial), propagating in the convective and radiative regions, respectively <cit.>. Recently, <cit.> applied the latest tidal theory to compute modified tidal quality factors Q' (essentially the ratio of the maximum tidal energy stored to the energy dissipated in one tidal period – a quantity essential for tidal modelling) and tidal evolutionary timescales in a range of stellar models, developing prescriptions for the dissipation of tides of various types (i.e. equilibrium tides, IWs and GWs). These were implemented in <cit.> to explore the secular evolution of hot Jupiter systems over a wide parameter space, and employed to provide an explanation for close solar-type binary circularization in <cit.>.

Even after the first exoplanet detection, attention has primarily focused on computing tidal dissipation inside stars, while dissipation in planetary interiors has typically been restricted to Solar System objects. A comprehensive application of tidal theory to rotating planetary models was performed in <cit.> (see also <cit.>), which opened the doors to a new direction of research. Giant planets share many similarities in structure with stars. They are both primarily multi-layer fluid bodies containing both convective and radiative regions, albeit planets may also have solid cores. But there are important differences: planets tend to rotate faster (compared with both their dynamical and convective frequencies), such that IWs are almost always likely to be important, and hot Jupiters also have larger tidal amplitudes than planet-hosting stars, thereby requiring consideration of nonlinear tidal mechanisms like the elliptical instability <cit.> or other nonlinear IW interactions <cit.>. The interaction between equilibrium tides and turbulent convection is also expected to be far into the regime of fast tides (where tidal frequencies exceed convective turnover frequencies) because convection is typically much slower in planets than in stars <cit.>. Hence, the reduction in the turbulent viscosity for fast tides must be accounted for <cit.>. Furthermore, convection is likely to be influenced by rapid rotation, which can modify this interaction <cit.>.

The role of IWs for planetary tidal dissipation has been explored in prior work <cit.>. However, a detailed study of the evolution of tidal dissipation rates and tidal quality factors Q' from equilibrium and dynamical tides following planetary evolution has never been performed previously to our knowledge, though <cit.> performed computations for just the equilibrium tide, assuming a different mechanism for dissipating the tidal flow than the one we consider. We use new models of giant planet interiors with masses in the range from 0.1 to 10 M_J computed with the MESA code <cit.>, with various strengths of stellar instellation so as to model both hot and cold planets, to theoretically calculate tidal dissipation rates <cit.>. In Sec. <ref>, we describe our model. Our results are presented in Sec. <ref> and applied to exoplanet eccentricities in Sec. <ref>.

§ METHODS

§.§ Tidal dissipation mechanisms

We consider a giant planet of mass M_pl and radius R_pl and employ spherical coordinates centred on the body with radial coordinate r.
The intensity of tidal dissipation is often quantified by the (modified) tidal quality factor Q', which is proportional to the ratio of the maximum energy stored in the tide to the amount dissipated in one period <cit.>. More effective dissipation corresponds to lower Q'. Here we consider three mechanisms of tidal dissipation: equilibrium (non-wavelike) tides damped by their interaction with turbulent (rotating) convection, IWs, and GWs. Each mechanism is characterized by its corresponding tidal quality factor (Q'_eq, Q'_iw, and Q'_gw for equilibrium tides, IWs, and GWs, respectively), and these are calculated within the formalism of B20 (building upon many prior works) with a few modifications that we will describe below. We focus on tides with spherical harmonic degree l = 2 and azimuthal wavenumber m = 2, which is usually the dominant component in systems with low obliquities. This is likely to be the dominant component of tidal forcing in asynchronously rotating bodies, as well as for eccentricity tides in weakly eccentric but spin-synchronised bodies – see e.g. Eqs. 4 and 5 in <cit.> or Table 1 in <cit.>.

The equilibrium tide is a quasi-static fluid response of a perturbed body that is thought to be dissipated through the action of “turbulent viscosity" in convective zones. The corresponding tidal quality factor is obtained via the expression:

1/Q'_eq = 16π G/(3(2l+1) R_pl^{2l+1} |A|^2) · D_v/|ω_t|,

where G is the gravitational constant, D_v is the rate of viscous dissipation of the equilibrium tide, and A is the amplitude of the tidal potential component (this will not be specified further as it cancels because D_v ∝ |A|^2 in linear theory). The tidal forcing frequency is ω_t = 2π/P_tide (where P_tide is the tidal period), which is ω_t = 2(n - Ω_pl) for a circular, aligned orbit, where n and Ω_pl are the orbital mean motion and planetary spin, respectively. For an eccentric orbit with synchronised[Perhaps the spin should instead be pseudo-synchronised for an eccentric orbit, but it is not clear that the classical formula of <cit.> is valid since it is derived by assuming equilibrium tide damping with a constant time-lag <cit.>.] (and aligned) spin we instead have |ω_t|=n. The viscous dissipation rate D_v is computed using Eqs. (20) to (22) in B20. This quantity depends on the equilibrium tidal displacement vector (defined in section 2 of B20) and the turbulent effective viscosity ν_E at each radius. The latter is assumed to be a function of radius and to act like an isotropic shear viscosity, linked to the crude mixing-length theory expectation ν_MLT ∝ u_c l_c, with u_c the convective velocity and l_c = α H_p the mixing length (α is the mixing-length parameter and H_p is the pressure scale height). As demonstrated in hydrodynamical simulations <cit.> and as previously hypothesised using phenomenological arguments <cit.>, ν_E is reduced for fast tides (where ω_t > ω_c, with the convective frequency ω_c = u_c/l_c) in a frequency-dependent manner. This can be accounted for using a piece-wise continuous correction factor depending on the ratio ω_t/ω_c (for which we employ Eq. (27) of B20, inferred from detailed numerical simulations) at each radius in the planet. Moreover, the rapid rotation expected for giant planets stabilizes convection on large length scales, and a steeper temperature gradient is required to sustain a given heat flux <cit.>. <cit.> and <cit.> have shown that such rapid rotation reduces ν_E even further (though interestingly the regime with ω_t ≫ ω_c is not modified by rotation).
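For concreteness, a minimal Python sketch of this prescription at a single radius is given below. It applies the piecewise frequency-dependent suppression factor (the fit of Eq. (27) of B20, whose branches join continuously at |ω_t|/ω_c = 10^{-2} and 5) to a mixing-length estimate, with the rotational rescaling of l_c and u_c (by Ro^{3/5} and Ro^{1/5}, as adopted in the next paragraph) applied in the rotationally constrained regime. The overall normalisation of ν_MLT and the Ro < 1 switch are assumptions of this sketch; the text only fixes the proportionalities.

```python
from math import sqrt

def effective_viscosity(u_c, l_c, omega_t, Omega_pl):
    """Turbulent effective viscosity nu_E acting on the equilibrium tide.

    u_c, l_c : non-rotating convective velocity and mixing length,
    omega_t  : tidal frequency, Omega_pl : spin rate (all at one radius).
    """
    Ro = (u_c / l_c) / Omega_pl              # convective Rossby number
    if Ro < 1.0:                             # rotationally constrained (assumed switch)
        u_c, l_c = u_c * Ro**0.2, l_c * Ro**0.6   # RMLT rescalings
    nu_mlt = u_c * l_c                       # normalisation conventions vary
    x = abs(omega_t) * l_c / u_c             # |omega_t| / omega_c (rotationally modified)
    if x < 1e-2:                             # slow tides
        return 5.0 * nu_mlt
    elif x < 5.0:                            # intermediate regime
        return 0.5 * nu_mlt / sqrt(x)
    else:                                    # fast tides, nu_E ~ (omega_c/omega_t)^2
        return (25.0 / sqrt(20.0)) * nu_mlt / x**2
```

Note that in the fast-tides branch the RMLT rescalings cancel out of ν_E, which is the insensitivity to rotation remarked upon above.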
Following these works, and as confirmed by their simulations, we take into account rapid rotation according to rotating mixing-length theory (RMLT), by multiplying l_c and u_c at each radius by Ro^{3/5} and Ro^{1/5}, respectively, where Ro is the convective Rossby number (Ro = ω_c/Ω_pl, based on the non-rotating convective frequency). We demonstrate in Sec. <ref> that the dissipation of equilibrium tides is insufficient to provide significant orbital or spin evolution compared with wavelike tides.

Motivated by results from the (albeit very idealised) numerical simulations described above, we assume equilibrium tides are damped by their interaction with convection in a way that can be modelled as a local (frequency-dependent) effective viscosity that is positive at each radial location in the planet (and is isotropic for simplicity). Negative values for ν_E – corresponding to tidal anti-dissipation – have been found to occur, particularly at very high tidal frequencies <cit.>, but these are typically negligibly small in magnitude, so we neglect their contribution here. On the other hand, we ignore possible contributions to the tidal energy transfer from Reynolds stresses involving tide-tide correlations and gradients of the convective flow <cit.>. This is because we believe that it is not currently possible to estimate contributions from this term without detailed numerical simulations <cit.>.

Turning to wavelike tides, the tidal quality factor representing inertial wave dissipation is computed following the (low-frequency) frequency-averaged formalism of <cit.>, and is calculated according to:

1/Q'_iw = 32π^2 G/(3(2l+1) R_pl^{2l+1} |A|^2) · (E_l + E_{l-1} + E_{l+1}),

where the parameters E_l, E_{l-1}, and E_{l+1} are specified by Eqs. (31)–(33) in B20, fully accounting for the planetary structure. These coefficients are proportional to the squared spin rate, implying that inertial wave dissipation is more efficient in rapidly rotating bodies. Note that E_l and E_{l±1} involve radial integrals that depend to some extent on the assumed core size (inner boundary of the fluid envelope). This dependence on core size is found to be weak in our realistic models though <cit.>, unlike results obtained for incompressible models. We adopt impenetrable boundary conditions (with vanishing radial velocity) for inertial waves (not the total tide) at the core-envelope boundary and planetary surface, which is appropriate at low frequencies, and follows <cit.>. We neglect the possibility of very large stably-stratified cores in this work due to the uncertainties in such planetary models.

This is a simple measure to represent the typical level of dissipation due to inertial waves over the full range of propagation of these waves, which is straightforward to compute in a given planetary or stellar model <cit.>. Note that this quantity is independent of the specific damping mechanism, and is computed in a model assuming an impulsive encounter to excite all inertial waves, which are then assumed to be subsequently fully dissipated. Modelling tidal evolution of nearly circular or aligned orbits using this quantity involves making assumptions, since this is not rigorously valid <cit.>, but it is believed to be a representative value for inertial wave dissipation.
We choose to adopt this approach <cit.> because this measure is both simpler and much faster to compute (hence amenable to evolutionary studies), and it is also much more robust to the incorporation of additional (or variation in) model physics than the direct linear (or nonlinear) response at a particular frequency. It should be remembered, however, that the actual dissipation due to inertial waves at a given ω_t could differ substantially from this value <cit.>. In particular, predictions for inertial wave dissipation find substantial deviations (potentially by orders of magnitude, either larger or smaller) at a particular frequency – and between the frequency-averaged measure and the response at a particular frequency – depending on the degree of density stratification <cit.>, the presence of magnetic fields <cit.>, differential rotation <cit.>, nonlinearity <cit.>, convection, varying the microscopic diffusivities <cit.>, and the presence of stably-stratified (or different density) inner fluid layers (as opposed to a rigid core) <cit.>. However, the frequency-averaged measure has been found to be much more robust regarding the incorporation of magnetic fields <cit.>, nonlinearity, and to a limited extent differential rotation <cit.>. It is an open question how reliable this approach will be at modelling a population of individual systems that are each forced at a particular tidal frequency (or range of these) at a given epoch.

We assume that upward propagating gravity waves are excited (adiabatically) at the base of the radiative envelope and are fully damped (e.g. by radiative diffusion or wave breaking) before propagating back to their launching sites <cit.>. The corresponding tidal quality factor is then given by (B20):

1/Q'_gw = 2[Γ(1/3)]^2/(3^{1/3}(2l+1)(l(l+1))^{4/3}) · (R_pl/(G M_pl^2)) 𝒢 |ω_t|^{8/3}.

The quantity 𝒢 depends on the planetary conditions at the radiative/convective interface:

𝒢 = σ_b^2 ρ_b r_b^5 |d𝒩^2/d ln r|_{r=r_b}^{-1/3}.

The subscript b denotes the base of the radiative envelope, so r_b and ρ_b are the corresponding radius and density, respectively, 𝒩 is the Brunt-Väisälä frequency, and the parameter σ_b is determined numerically from the derivative of the dynamical tide radial displacement (see Eq. (43) in B20). This is the simplest measure of gravity wave dissipation, applying if the waves are fully damped – regardless of the specific damping mechanism. Whether or not this is valid is an open question. This estimate is the simplest way to estimate the effects of gravity waves and was the one adopted in many prior works <cit.>, but future work should explore in detail the validity of this assumption in thin envelopes. We omit the influence of Coriolis forces here, partly for simplicity, and partly because the typical level of dissipation due to gravity (or gravito-inertial) waves is unlikely to differ substantially in this fully damped regime <cit.>. When a planet possesses multiple radiative and convective envelopes, the total dissipation rates (not the tidal quality factors) are derived by summing up the contributions from each layer where dissipation takes place.

§.§ Planetary model

We compute planetary models using the MESA code <cit.>. Most parameters in our MESA input files are adopted from the relevant planet test suite. We set the initial helium and metal mass fractions, Y and Z, to 0.2804 and 0.02131, respectively, to reproduce the high average metallicity of hot Jupiter hosts (<[Fe/H]> = +0.19 dex, see <cit.>).
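As an aside before continuing with the model parameters: Eqs. (3)–(4) amount to a direct evaluation once the interface quantities are known. A minimal Python transcription for l=2 is sketched below, in SI units, with σ_b supplied as an input since it must be obtained numerically from the dynamical tide solution.

```python
from math import gamma

G = 6.674e-11  # gravitational constant (SI)

def Q_gw(sigma_b, rho_b, r_b, dN2_dlnr_b, M_pl, R_pl, omega_t, l=2):
    """Q' for fully damped gravity waves, Eqs. (3)-(4); all inputs in SI."""
    curly_G = sigma_b**2 * rho_b * r_b**5 * abs(dN2_dlnr_b) ** (-1.0 / 3.0)
    prefactor = (2.0 * gamma(1.0 / 3.0) ** 2
                 / (3.0 ** (1.0 / 3.0) * (2 * l + 1) * (l * (l + 1)) ** (4.0 / 3.0)))
    inv_Q = prefactor * (R_pl / (G * M_pl**2)) * curly_G * abs(omega_t) ** (8.0 / 3.0)
    return 1.0 / inv_Q
```

The strong mass dependence discussed below enters both through the explicit M_pl^2 factor and through the interface quantities themselves.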
Regarding composition, according to our exploration of parameter space, the tidal quality factors obtained with metallicities in the range between -0.5 and +0.5 dex are similar within an order of magnitude for any given age <cit.>. Given that abundances do not seem to play a major role in any of our results, the effects of chemical composition will not be reported further in this paper.

The planetary mass is varied between 0.1 and 10 M_J, where M_J is the mass of Jupiter, and we fix the initial radius to 2 R_J (increasing this to 4 R_J for “hot-start" models with higher initial entropy does not produce substantial differences in Q'). The core mass is 10 M_⊕, where the subscript ⊕ refers to Earth units, and its density is 5 g cm^-3, giving a fixed core radius of approximately 0.2 R_J. Note that the core radius, when normalised by R_pl, varies because the planetary radius (rather than the core size) evolves in time. The fiducial value of the incident flux is 1000 F_⊕ and we use the canonical mixing-length value α=2, though we have explored smaller α values and found minimal differences in our results. The column depth for irradiation is fixed at 330 g cm^-2 to reproduce the mean opacity from <cit.>. The relevant MESA control is adjusted to avoid convergence issues at late ages. We found that, for lower planetary masses, the default spatial resolution is too low to provide accurate solutions for the tidal response. Therefore, we choose a higher resolution by adjusting the mesh-resolution control if there are no convergence issues at the beginning of the MESA run. Otherwise, this parameter is gradually increased until convergence issues are avoided.

Our planetary models generate a neutrally (adiabatically) stratified interior, as we might expect in convective regions, with only the surface layers being stably stratified (radiative). However, observational inferences from the Solar System's gas giants suggest this assumption may not be valid due to interior compositional gradients, and Jupiter or Saturn could possess extended dilute stably stratified fluid cores <cit.>. The consequences of interior stably stratified layers are outside the scope of the present paper, and are currently a major uncertainty in planetary models <cit.>.

§ RESULTS

§.§ Evolution of tidal quality factors

We now turn to present our results for Q' computed in planetary models. We begin by showing the dependence of the tidal quality factors for each mechanism on the tidal forcing period P_tide in Fig. <ref> for a range of planetary masses, ages, and instellations. Here, the top, middle, and bottom rows show planets with M_pl = 0.3, 1.0, and 10 M_J, respectively. The left column corresponds to models of young Jupiters (t = 10 Myr), and the right column represents models of old Jupiters (t = 3 Gyr). The fiducial hot planets (F = 1000 F_⊕) are shown with solid lines and cold planets (F = F_⊕) are shown with dashed lines. These panels show a wide range of tidal frequencies, and hence our results can be applied to model spin-orbit synchronisation, the dominant component driving orbital circularisation, and aspects of tidal obliquity evolution.

As reported previously, equilibrium tide dissipation is very weak in all cases displayed according to our assumptions outlined in <ref>, with the minimum Q'_eq ∼ 10^9. At low tidal periods, the rotationally-modified tidal quality factor Q'_eq,RMLT,FIT, shown in blue, is the same as the one obtained based on non-rotating convection, Q'_eq,FIT (black, where the subscript FIT refers to a fit from numerical simulations).
This is because, in the high-frequency regime, ν_E ∝ ν_MLT(ω_c/ω_t)^2 ∝ (u_c^3/l_c)(1/ω_t^2), and the adopted rotationally-induced scalings for the convective velocity and length scale counteract each other <cit.>. In the fast tides regime (with or without rotational inhibition of convection) – which is typically the most relevant one in giant planets <cit.> – we thus have Q'_eq ∝ P_tide^{-1} because |ω_t|/Q'_eq ∝ D_v ∝ ω_t^2 ν_E ∝ ω_t^0. Thus, the slowness of convective flows relative to tides leads to substantial reductions in equilibrium tide damping.

With our chosen rotation period of 10 hr adopted for illustration (results for different P_rot can be obtained simply by re-scaling, since Q'_iw ∝ P_rot^2; if P_rot=1 day, Q'_iw should be a factor of 5.76 larger), inertial wave dissipation is the dominant tidal mechanism (Q'_iw is smallest) over the full range of tidal periods considered, except for the `old' model of the 0.3 M_J planet, for which gravity waves begin to prevail at low P_tide (i.e., Q'_gw < Q'_iw). Note that Q'_iw only strictly operates if the tidal period satisfies P_tide > P_rot/2; otherwise inertial waves are not (linearly) excited, and we should not employ Q'_iw to model tidal evolution. This is independent of frequency because we have adopted the frequency-averaged measure here – in reality, inertial wave dissipation is expected to be strongly frequency-dependent, though this value is thought to be a representative one for tidal modelling of planetary populations, as discussed in <ref>. In this short tidal period regime (the region to the left of the dotted line), Q'_gw should be used instead according to Fig. <ref>.

The prediction for Q'_iw is similar in all models since they have the same rotation period and a similar internal structure. Indeed, each model has a structure for all ages that is very similar to a polytrope with a polytropic index ranging from n=1 (commonly thought to be appropriate for Jupiter) to 1.5 (thought to apply to fully convective low-mass stars). As shown in Fig. <ref>, where we plot density normalised by the mean density and radius normalised by the planetary radius, our models are well described by Lane-Emden polytropes with such a range of n, where n=1 and n=1.5 are represented by black dotted and dashed lines, respectively. The only exception is the models of young low-mass planets displayed in red in the top panel. Nevertheless, as these planets cool down with age, they approach the profile of the n = 1 polytrope. Accordingly, the models at 3 Gyr, depicted in green, yield flatter density profiles. In contrast to the `cold' models, shown with solid lines, highly-irradiated planets, represented by dash-dotted lines, are characterized by a steeper density gradient. At the same time, one can see that the internal structure of more massive planets, displayed in the bottom panel, is closer to the n = 1 polytrope and less sensitive to age and instellation. We, therefore, conclude that most of our models can be approximated by polytropic solutions with n=1 or n=1.5 with sufficient accuracy. Adopting a polytropic model with index n=1 (1.5), we find Q'_iw = 230.22 ω_dyn^2/Ω_pl^2 (or 130.83 ω_dyn^2/Ω_pl^2), where ω_dyn^2 = G M_pl/R_pl^3 is the squared dynamical frequency, implying a value of approximately 2558 (1454) for a Jupiter-like model/rotation, similar to the values shown in Fig. <ref>.
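These polytropic scalings reduce to a one-line estimate, sketched in Python below; the printed value for a Jupiter-like mass, radius, and 10-hr rotation comes out near the number quoted above, with the residual difference set by the adopted radius and rotation period.

```python
from math import pi

G = 6.674e-11
M_J, R_J = 1.898e27, 7.149e7  # Jupiter mass (kg) and equatorial radius (m)

def Q_iw_polytrope(M_pl, R_pl, P_rot_hours, C=230.22):
    """Frequency-averaged inertial-wave Q' for an n = 1 polytrope (C = 230.22);
    use C = 130.83 for n = 1.5."""
    omega_dyn2 = G * M_pl / R_pl**3
    Omega = 2.0 * pi / (P_rot_hours * 3600.0)
    return C * omega_dyn2 / Omega**2

print(round(Q_iw_polytrope(M_J, R_J, 10.0)))  # ~2.6e3 for these adopted values
```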
Hence Q'_iw (for a fixed P_rot) varies only modestly with planetary mass, age, and instellation within the ranges we consider, ultimately because planetary structures always remain very similar (and similar to polytropes) in our models.

Internal gravity waves become more dissipative (smaller Q'_gw) in planets with thicker radiative envelopes, typically corresponding to higher stellar instellations (solid purple lines), lower planetary masses, and older ages (where planets have had time to develop thicker envelopes). In all cases Q'_gw ∝ P_tide^{8/3} under the assumptions of our model, and thus shorter tidal periods imply more efficient dissipation.

In Fig. <ref>, we show the evolution with planetary age of the tidal quality factors corresponding to equilibrium tides (first panel), inertial waves (second panel), and gravity waves (third panel) for a set of `hot' planetary models with different masses. We now fix both the tidal and rotation periods at P_tide = 1 day and P_rot = 10 hr, respectively, to focus on the age and mass dependence here. The choice of P_rot = 10 hr is made for comparison with Jupiter, but we note that for inertial waves we predict Q'_iw ∝ P_rot^2 in general, so our results can be easily scaled for different rotation rates.

According to the prescriptions we have adopted, the equilibrium tide is characterized by negligible damping inside the convective envelope (using Q'_eq,RMLT,FIT), insufficient to cause a significant change in orbital or spin parameters. The relevant tidal quality factor Q'_eq > 10^{10} throughout the evolution of each of these planets. This is similar to the conclusions in B20 regarding the inefficiency of equilibrium tide dissipation in stellar interiors. Note that Q'_eq increases with age because the planet cools, thus convection slows down as the planet evolves, and because the planet shrinks.

In contrast, dissipation of inertial waves appears to be the most important mechanism in almost all cases when they are excited, with Q'_iw (for this P_rot) ranging between 10^2 and 2 × 10^4, with higher values corresponding to higher-mass objects. As shown in the second panel of Fig. <ref>, planets gradually become less dissipative with age. We have explored the reason for this, and found that it is primarily not related to structural changes (consistent with Fig. 4 of B20 for low-mass fully convective objects), but is instead explained by the shrinking radius as the planet cools for a fixed P_rot, because Q'_iw ∝ ω_dyn^2/Ω_pl^2 ∝ R_pl^{-3}. The evolution of the planetary radius is displayed in the bottom panel. Furthermore, given that planetary spin-down is typically the natural outcome of long-term tidal and planetary evolution (unless, e.g., the planet is spiralling into its star while remaining tidally locked), we expect inertial wave damping to become less efficient at later epochs.

On the other hand, the tidal quality factor due to gravity waves Q'_gw does not exhibit substantial evolution during the planetary lifetime, and its variation for each planetary model is within approximately an order of magnitude for a fixed P_tide and planetary mass. In addition to its strong dependence on P_tide (Q'_gw ∝ P_tide^{8/3}), gravity wave damping also strongly depends on the planetary mass, spanning over six orders of magnitude for our mass range characteristic of gas giants. This is illustrated in the third panel of Fig. <ref>.
Similar to inertial waves, gravity waves dissipate more efficiently (smaller Q'_gw) in lower-mass objects, with values as small as Q'_gw ≈ 10^4, whereas the most massive objects we consider are much less dissipative. These values are sensitive to the radius r_b and density ρ_b at the launching region, which is manifested through the factor 𝒢 being proportional to r_b^5 (Eq. <ref>).

In Fig. <ref> we explore in more detail the reasons for the substantial variation of Q'_gw with planetary mass in our models. Here, we illustrate the evolution of the quantities involved in the expression for Q'_gw, given by Eqs. (<ref>) and (<ref>), for the models with M_pl = 0.1, 1, and 10 M_J, depicted in red, green, and blue, respectively. The top left and bottom left panels display |d𝒩^2/d ln r|_{r=r_b} and σ_b, while the top right and bottom right panels display r_b and ρ_b as a function of age, respectively. The value of Q'_gw obtained for planets with M_pl = 10 M_J is, on average, 3.5 orders of magnitude higher than for Jupiter-mass planets. These two planets are characterized by similar values of σ_b and r_b, but there is an order of magnitude difference attributable to |d𝒩^2/d ln r|_{r=r_b} (note that this quantity is raised to the power -1/3 in Eq. (<ref>)), and half an order of magnitude from the variation in ρ_b. The remaining factor of 10^2 results from the presence of M_pl^2 in Eq. (<ref>). At the same time, decreasing the planetary mass from 1 to 0.1 M_J reduces Q'_gw by a factor of ∼ 50-10^2. A factor of ∼ 4 arises due to differences in |d𝒩^2/d ln r|_{r=r_b}. One can see that σ_b is substantially smaller in the case of a lower-mass planet, leading to a factor of ∼ 3 in Q'_gw. An additional factor of ∼ 4 (at t ∼ 1 Gyr) comes from differences in ρ_b, and one of ∼ 1.5 arises from the combination r_b^5 R_pl/M_pl^2. Combining these (crude) factors results in a total reduction of ∼ 50-10^2 from 1 to 0.1 M_J, in agreement with the overall differences outlined above. Therefore, we conclude that differences in several parameters come into play to produce the strong dependence on planetary mass exhibited by Q'_gw in our models, rather than any single one of these parameters.

§.§ Impact of the incident flux

The strength of external irradiation affects the locations of the interfaces between convective and radiative regions near the surface, thereby altering the conditions inside the layers where gravity waves are excited and propagate. This is demonstrated with the example of a Jupiter-mass planet in Fig. <ref>. The top and middle panels display the evolution of the tidal quality factors of inertial and gravity waves, respectively. In the following plots, blue circles correspond to the low incident flux, characteristic of a `cold' Jupiter (F = F_⊕), and the red triangles represent a typical `hot' Jupiter irradiation (F = 1000 F_⊕). One can see that the enhancement of the stellar flux received by a planet slightly increases the inertial wave dissipation rate, most noticeably at early ages, primarily because these planets have slightly inflated radii (see the discussion above). Nonetheless, differences between the two models are always within an order of magnitude. On the contrary, the incident flux plays a crucial role in modifying gravity wave damping for most of the planetary lifetime. According to our `cold' model, Q'_gw rises by almost two orders of magnitude up to 10^9 after 30 Myrs. Eventually, the dissipation rates recover the values obtained for hot Jupiters after 6 Gyrs.
As we show in the bottom panel of Fig. <ref>, these dramatic changes in Q'_gw are directly linked to the location and number of radiative zones near the surface that arise in these models. Here, we display the separation of the base of each radiative zone from the planetary surface. Note that the lower the point, the closer the corresponding interface is to the planetary surface. The prominent feature in the cold planetary model is the emergence of a second radiative region after 3 Myr. Thus, the bottom base is depicted by the blue circles, while the top base, when present, is illustrated by the black circles. As a result of the above, both envelopes could contribute to the dissipation of gravity waves. The overall tidal quality factor due to gravity waves is computed (crudely) by summing up the dissipation rates associated with each radiative region. As we mentioned previously, the efficiency of gravity wave damping is determined by the conditions at their launching sites, including the local density, which is characterized by a steep gradient in the vicinity of the planetary surface. Thereby, the occurrence of the outer radiative layer does not significantly alter the evolution of the tidal quality factor as long as the bottom layer exists. However, at the planetary age of 30 Myrs, the two convective envelopes merge into one, leaving the top radiative region as the only contributor to gravity wave damping, which immediately manifests in a sharp increase in Q'_gw, as shown in the middle panel. Finally, after 6 Gyrs, the radiative envelope becomes divided into two separate regions again, allowing the dissipation rates obtained for cold and hot Jupiters to converge. We have found that the appearance of multiple radiative layers is insensitive to Y and Z in the ranges [0.25,0.28] and [0.004,0.03], respectively, and the resulting values of Q'_gw do not differ substantially. It is possible that the emergence of these layers would differ with equations of state different from those used in our version of MESA.

The comparison between models with different incident fluxes was also provided in Fig. <ref>, where solid and dashed lines represent hot and cold Jupiters, respectively. Contrary to the earlier epoch, at late ages, planets with M_pl = 1 and 10 M_J reveal substantial variation in gravity wave dissipation rate with the amount of irradiation, with hot Jupiters being more dissipative. For 0.3 M_J planets, however, the changes in Q'_gw between cold and hot models are manifested at early times. In addition, for lower-mass planets, the tidal quality factors due to equilibrium tides are also sensitive to the incident flux. In contrast to gravity waves, the dissipation of equilibrium tides is somewhat more effective in such low-mass cold planets, which have smaller Q'_eq.

We have assumed gravity waves are launched in each layer and are then fully damped before returning to their launching sites, even when there are multiple radiative regions. It is uncertain whether this is justified, as is the emergence of a second radiative region, which may either be the natural outcome of planetary evolution for low incident fluxes, or it could be an artifact caused by uncertainties in the equation of state – or various neglected physics – in current versions of MESA. We have discovered that this feature can be eliminated with the introduction of additional interior heating, which may arise due to tidal heating or Ohmic dissipation.
The fully damped assumption can potentially be justified by linear radiative damping, though this might only be effective in the outer zone, but there are alternative possibilities (including differential rotation, non-linearity, and magnetic fields). These should be explored further in future work, but for now we caution that there are potentially large uncertainties in Q'_gw, particularly in our cold models. To summarise, we find gravity wave damping can be effective for highly-irradiated planets with extended stable layers near their surfaces that are deeper than one percent of their radii. Otherwise, inertial waves are predicted to be the most important tidal mechanism when they are excited (i.e. for P_tide > P_rot/2) in almost all models, except perhaps at the latest ages for low masses, when Q'_iw > Q'_gw.

§ APPLICATION TO STAR-PLANET AND PLANET-MOON SYSTEMS

The dissipation of planetary tides can potentially explain an important aspect of the eccentricity distribution in star-planet systems, which is that hot Jupiters tend to have smaller eccentricities (and a stronger preference for circularity) than warm and cold Jupiters. To explore this scenario, we collect data from the NASA Exoplanet Archive (<https://exoplanetarchive.ipac.caltech.edu/>) representing massive close-in planets (0.1 M_J < M_pl < 10 M_J, P_orb < 20 days; P_orb is the orbital period). In addition, we filter out systems containing another planet with P_orb < 100 days to avoid scenarios where eccentricity excitation due to planet-planet interactions may be competing with tidal eccentricity damping. Our main sample consists of 162 systems with a known eccentricity, stellar effective temperature, stellar and planetary mass, and planetary radius. This sample has been further extended by eight systems, namely HAT-P-2, HAT-P-4, HD 118203, HD 149026, HD 189733, HD 209458, Kepler-91, and WASP-8, with no accessible data on the planetary radii. The radii of the corresponding planets have been derived using the mass-radius-flux parametrization from <cit.> given by his Eqs. (16)–(18), which encompasses the relations from <cit.> and <cit.>. We also consider 136 additional planets with an upper bound on the eccentricity below 0.1. We assume that these planets are on circular orbits (e=0) when calculating the relative number of eccentric planets (i.e., the planets with e>0.1).

For every system in our sample, we calculate a corresponding circularization timescale due to planetary tides, τ_e,pl, following the equation <cit.>:

τ_e,pl = (4/63)(Q'_pl/n)(M_pl/M_*)(a/R_pl)^5.

Here, the tidal quality factor Q'_pl is set equal to Q'_iw since, as shown in Sec. <ref>, inertial waves provide the main contribution to the overall tidal dissipation inside the planets in almost all cases in our models. Q'_iw may be represented as the product of two components, namely the structural tidal quality factor Q'_iw,s and the parameter ϵ_Ω^{-2} ≡ (Ω_pl/√(G M_pl/R_pl^3))^{-2}. The first component, Q'_iw,s, is computed via linear interpolation between our models of hot planets with the adjacent masses and ages plotted in Fig. <ref> (note that the large observational uncertainties in ages do not lead to large differences according to this figure), while ϵ_Ω is inferred using observational data, assuming spin-orbit synchronization (Ω_pl = n), since this is expected to occur more rapidly than circularization. Our sample is illustrated in the top panel of Fig. <ref>. One can see that eccentricities tend to increase with the predicted circularization timescale.
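A sketch of this timescale evaluation is given below in Python. It assumes spin-orbit synchronization, so that Q'_pl = Q'_iw,s (ω_dyn/n)^2, and uses a single representative structural factor (the n=1 polytrope value) in place of the model interpolation performed for the actual sample; the example system is illustrative.

```python
from math import sqrt

G = 6.674e-11  # SI

def tau_circ_years(M_star, M_pl, R_pl, a, Q_struct=230.0):
    """Eccentricity damping timescale of Eq. (5) from planetary tides."""
    n = sqrt(G * (M_star + M_pl) / a**3)          # orbital mean motion
    Q_pl = Q_struct * (G * M_pl / R_pl**3) / n**2 # synchronized: Omega_pl = n
    tau = (4.0 / 63.0) * (Q_pl / n) * (M_pl / M_star) * (a / R_pl) ** 5
    return tau / (365.25 * 24 * 3600)

# Illustrative hot Jupiter: 1 M_J, 1.2 R_J, a = 0.05 au, solar-mass host.
print(f"{tau_circ_years(1.989e30, 1.898e27, 1.2 * 7.149e7, 0.05 * 1.496e11):.2e} yr")
```

For these numbers the timescale comes out of order 10^8 yr, shorter than typical system ages, which is consistent with the scarcity of eccentric planets at small predicted τ_e,pl discussed below.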
This trend is especially pronounced for planets orbiting stars above the Kraft break (T_eff > 6250 K; <cit.>), shown in black. Hot stars have thin convective envelopes, leading to weaker tidal dissipation in stellar interiors (see B20), suggesting planetary tides may be even more important for eccentricity evolution in these systems. This weaker stellar tidal dissipation may have several observational manifestations. In particular, star-planet systems with a hot star typically sustain higher obliquities, as shown in <cit.>. Here, we draw similar conclusions concerning the eccentricity distribution, which reveals the same trend, albeit with caution due to the low numbers involved. Indeed, among the systems with e > 0.1, planets orbiting stars above (below) the Kraft break have an average eccentricity of 0.33 (0.24). Apart from this trend, a correlation between eccentricity and eccentricity damping timescale appears to be more pronounced in hot stars, which may imply that stellar tides also contribute to the orbital circularization of hot Jupiters, or it could be related to the shorter main-sequence ages of hotter stars.

As reviewed in <cit.>, hot Jupiters might have formed via two channels, namely disc migration and high-eccentricity migration (e.g. triggered by planet-planet scattering or secular/Kozai migration). In contrast to the former channel, the latter allows the formation of highly eccentric hot planets. It is still unknown which channel (if any) dominates within the overall hot Jupiter population. The presence of almost circular systems with τ_e,pl a few orders of magnitude higher than the age of the universe, seen in Fig. <ref>, suggests that disc migration/low-eccentricity formation is likely to be the favourable scenario for some fraction of our sample. To avoid the planets which might have formed with low initial eccentricities, we separate the planets with e > 0.1 (“eccentric planets") and e < 0.1 (“non-eccentric planets"). We select bin sizes along the x-axis to have roughly equal numbers of eccentric planets in each one. For every bin, we calculate the average eccentricity of the eccentric planets and plot it in the middle panel of Fig. <ref>. In addition, we derive the fraction of planets per bin with e > 0.1, and we display this in the bottom panel. Both quantities are found to increase with our predicted tidal circularization timescale. There is only a handful of eccentric planets with τ_e,pl < 10^8 yrs. This might be expected if planetary tides have acted here, given that the average age of the observed systems is of the order of a few Gyr. Another prominent detail is that the mean eccentricity of the eccentric sub-sample increases when τ_e,pl ∼ 1 Gyr, i.e., when τ_e,pl becomes comparable with the systems' mean age. The above features strongly suggest that tidal dissipation due to inertial waves can play an important role in shaping the orbital architectures of star-planet systems containing giant planets. On the other hand, under the assumptions of our models, neither equilibrium tides nor gravity waves can explain tidal circularisation timescales consistent with observations (that are shorter than or comparable to the ages of the systems). Gravity waves could circularise only the very closest planets under the assumptions we have made to model them <cit.>.

From the detailed analysis of astrometric observations of Jupiter's and Saturn's satellites, <cit.> constrained tidal dissipation rates in our Solar System's giants.
They inferred k_2/Q = (1.102 ± 0.203) × 10^-5 for Jupiter and k_2/Q = (1.59 ± 0.54) × 10^-4 for Saturn, giving approximately Q' = (1.59 ± 0.25) × 10^5 and Q' = (9.43 ± 4.39) × 10^3 for these planets, respectively. According to our models for evolved planets at the Solar System's age (Figs. <ref> and <ref>), we find 10^3 ≲ Q'_iw≲ 10^4 for P_rot≈ 10 hr. Hence, inertial waves are sufficiently dissipative – according to the frequency-averaged measure we have computed – to explain observations. Given that the actual dissipation rate, and hence Q' value, due to inertial waves at a given tidal frequency can vary by orders of magnitude from this “typical value" represented by the frequency-average <cit.>, this suggests that the orbital evolution of Jupiter's and Saturn's moons can be explained by inertial waves <cit.>. However further work is required to explore this scenario in more detail, and to determine the validity of the frequency-averaged formalism in modelling tidal evolution.According to Fig. <ref>, the above constraints on Jupiter and Saturn can also be obtained via gravity wave damping in the envelope. For the rotation period of Saturn (P_rot = 0.44 days) and the orbital period of Enceladus (P_orb = 1.37 days), our Saturn model predicts Q'_gw∼ 3 × 10^3 (not strongly depending on instellation for the relevant mass and age). In turn, the present-epoch gravity wave dissipation rate calculated for the Jupiter model strongly depends on the incident flux. Adopting P_rot = 0.41 and P_orb = 1.77 days (Jupiter's rotation period and Io's orbital period, respectively) yields Q'_gw∼ 4 × 10^4 for a hot Jupiter model and Q'_gw∼ 2 × 10^6 for a cold model. Thus, our crude gravity wave dissipation estimates are also (surprisingly) in reasonable agreement with the observations. It would be interesting to explore the role of interior stably-stratified layers in future work, which we have neglected in our models due to the large uncertainties involved <cit.>.§ CONCLUSIONSWe have studied theoretically the evolution of tidal dissipation rates and modified quality factors Q' in rotating giant planets following their evolution using MESA interior models with masses ranging from 0.1 to 10M_J, for various incident stellar fluxes. We compute the dissipation of equilibrium tides by rotating turbulent convection (assuming an effective viscosity consistent with hydrodynamical simulations), dissipation of gravity waves in the thin radiative envelope, and inertial waves in the convective interior. Our models indicate that inertial waves are almost always likely to be the dominant mechanism of tidal dissipation in giant planets whenever they are excited[These waves can also be excited “nonlinearly" by the elliptical instability for the hottest (very shortest period) planets, which we do not study here <cit.>.] – i.e., when the tidal period P_tide>P_rot/2 – and are capable of providing Q'_iw∼ 10^3 (P_rot/10 hr)^2. This implies Q'_iw∼ 10^5-10^6 for orbital periods of order 10 days (assuming spin-orbit synchronism). Note that the frequency-averaged measure we have adopted can differ by orders of magnitude from the predictions at a specific tidal frequency according to linear and nonlinear calculations <cit.>, but is likely to represent a rather robust “typical value" of dissipation due to these waves.In hot low-mass planets (approx 0.1M_J), our models also predict efficient dissipation of gravity waves in the radiative envelope with Q'_gw∼ 10^4 (P_tide/1d)^8/3 <cit.>. 
This indicates that efficient dissipation via this mechanism is also possible, though Q'_gw values ranging up to six orders of magnitude larger are found in the more massive planets we modelled. We have shown that our predicted circularization timescales from the dissipation of inertial waves correlate well with observed planetary eccentricities. This provides evidence that inertial wave dissipation may have played an important role in planetary tidal evolution.

The values of Q' we have obtained can be compared with the latest statistical inferences from modelling exoplanetary eccentricity damping in <cit.>. They found Q'=10^{5±0.5} for hot Jupiters with P_tide ∈ [0.8,7] days, with no strong evidence of any tidal period dependence. For eccentricity tides, assuming spin-orbit synchronism (hence P_tide=P_orb=P_rot), we predict Q'_iw ≈ 10^{3.5} for P_tide=0.8 days, and Q'_iw ≈ 10^{5.5} for P_tide=7 days, with a value of Q'_iw ≈ 10^{4.5} for P_tide=2.4 days. Our results are therefore consistent with the range they obtained for tidal periods longer than about 2.4 days, and thus we argue that inertial waves are likely to be able to explain their results in these cases. For the shortest tidal periods, we find more effective dissipation than they do, though this may be mitigated when considering the frequency-dependent tidal dissipation. It is also unclear whether their assumption of a constant planetary radius affects their results.

Convective damping of equilibrium tides is estimated to be negligible in giant planets compared with wavelike tides because of the strong frequency-reduction of the effective turbulent viscosity due to the slow convection (relative to the tide) in these bodies <cit.> – though see <cit.> for an alternative viewpoint. Rapid rotation minimally affects the resulting convective turbulent viscosities in the fast tides regime <cit.>, though it reduces the effective viscosity even further for slow tides. Further work should explore the interaction between tidal flows and convection in more realistic numerical models.

It is essential in future work to study whether the frequency-averaged formalism for inertial wave dissipation is appropriate to model tidal interactions when global inertial modes are excited in realistic density-stratified models of planets, and whether it faithfully reproduces overall trends resulting from the dynamical evolution in a population of planetary systems. The role of interior stably-stratified layers such as inferred for the dilute cores of Jupiter & Saturn should also be explored further, as should the effects of differential rotation and magnetic fields.

§ ACKNOWLEDGEMENTS

We would like to thank the initial referee, Caroline Terquem, for reading two versions of our manuscript and for providing critical comments each time, even if we disagree with many of them, and our second referee, Jim Fuller, for a constructive report that helped us to improve the paper. AJB was supported by STFC grants ST/S000275/1 and ST/W000873/1. NBV was supported by EPSRC studentship 2528559. AA was supported by a Leverhulme Trust Early Career Fellowship (ECF-2022-362).

§ DATA AVAILABILITY

The data underlying this article will be shared on reasonable request to the corresponding author.
"authors": [
"Yaroslav A. Lazovik",
"Adrian J. Barker",
"Nils B. de Vries",
"Aurélie Astoul"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20231127134038",
"title": "Tidal dissipation in rotating and evolving giant planets with application to exoplanet systems"
} |
Probing spin fractionalization with ESR-STM absolute magnetometry

Y. del Castillo^{1,2}, J. Fernández-Rossier^{1}[On permanent leave from Departamento de Física Aplicada, Universidad de Alicante, 03690 San Vicente del Raspeig, Spain]^{,}[[email protected]]

^1 International Iberian Nanotechnology Laboratory (INL), Av. Mestre José Veiga, 4715-330 Braga, Portugal
^2 Centro de Física das Universidades do Minho e do Porto, Universidade do Minho, Campus de Gualtar, 4710-057 Braga, Portugal

January 14, 2024
==============================================================================================================================================

The emergence of effective S=1/2 spins at the edges of S=1 Haldane spin chains is one of the simplest examples of fractionalization. Whereas there is indirect evidence of this phenomenon, a direct measurement of the magnetic moment of an individual edge spin remains to be done. Here we show how scanning tunnel microscopy electron-spin resonance (ESR-STM) can be used to map the stray field created by the fractional S=1/2 edge spin, and we propose efficient methods to invert the Biot-Savart equation, obtaining the edge magnetization map. This permits one to determine unambiguously the two outstanding emergent properties of fractional degrees of freedom, namely their fractional magnetic moment and their localization length ξ.

Fractionalization is one of the most dramatic examples of emergence in many-body systems <cit.>. It shows how new quantized degrees of freedom, such as quasiparticles with charge e/3, can govern the low-energy properties of a system of interacting electrons with charge e. In the case discussed here, a chain of interacting S=1 spins behaves as if two S=1/2 spin degrees of freedom were localized at the edges. These examples illustrate that we cannot rule out that the quantum numbers of the so-called fundamental particles actually emerge out of an interacting system made of degrees of freedom with different quantum numbers <cit.>.

Haldane spin chains <cit.> provide one of the simplest examples of fractionalization and emergence. Out of a model of interacting S=1 spins without intrinsic energy and length scales, a Haldane gap Δ_H and two S=1/2 degrees of freedom localized at the edges with localization length ξ emerge. Given that the building blocks of the model have S=1, the S=1/2 edge states are fractional. Their emergence can be rationalized in terms of the AKLT <cit.> valence bond solid state, which in turn has a number of outstanding properties, including being a resource state for measurement-based quantum computing <cit.>. The fractional charge of quasiparticles in the fractional quantum Hall effect was determined by an outstanding experiment <cit.> that leveraged the relation between shot noise and charge <cit.>. In the case of spin fractionalization, a direct measurement of the spin of the fractional edge states and their localization length remains to be done. Until recently, experimental probes of Haldane spin chains relied on bulk probes, such as neutron scattering <cit.> and electron spin resonance <cit.>, and provided indirect evidence of the presence of S=1/2 degrees of freedom and a Haldane gap.
Advances in on-surface synthesis combined with atomic-scale resolution inelastic electron tunnel spectroscopy (IETS) based on scanning tunnel microscopy (STM) have made it possible to probe individual Haldane spin chains made with covalently coupled S=1 nanographene triangulenes <cit.>. IETS of triangulene spin chains showed the presence of a Haldane gap in the center of the chains as well as in-gap edge excitations for short chains and zero-bias Kondo peaks for longer chains <cit.>, consistent with the existence of emergent S=1/2 edge spins <cit.>.

Here we propose a direct measurement of the edge magnetic moment M = gμ_B S_edge associated to the fractional S_edge=1/2 degrees of freedom using STM-based electron spin resonance (ESR-STM) <cit.>. We assume that a Haldane spin chain, not necessarily made with triangulenes, is deposited on a surface, sufficiently decoupled from the substrate so that the Kondo effect is suppressed, the magnetic moment of the edge states is preserved, and it can be probed with ESR-STM magnetometry <cit.>. The feasibility of this weak-coupling scenario has been demonstrated in several ESR-STM experiments where S=1/2 species, such as Ti <cit.>, Cu <cit.> and alkali atoms <cit.>, are deposited on a bilayer of MgO on top of an Ag surface.

Our proposal (see Fig. <ref>) relies on the demonstrated capability of ESR-STM to act as an absolute magnetometer <cit.>. To do so, an ESR-STM active spin acts as a sensor that can be placed at several distances from a second spin or group of spins denoted as the target. At finite temperature, the target spins can occupy different quantum states, each of which generates its own stray field <cit.>. As a result, the ESR-STM spectrum of the sensor spin features several peaks, whose frequency and intensity relate to the stray field and occupation probability of the quantum state of the target. As we show below, this can be used to obtain a direct measurement of the magnetic moment, and thereby the spin, of the edge states in Haldane spin chains.

We assume that a Haldane spin chain with N spins S=1 is placed on a surface and can be described with the Hamiltonian:

H = ∑_{n=1}^{N-1} J[S⃗_n·S⃗_{n+1} + β(S⃗_n·S⃗_{n+1})^2] + ∑_{n=1}^{N} gμ_B S⃗_n·B⃗.

We take values of β in the range 0<β<1/3. The low-energy manifold comprises a singlet with S=0 and an S=1 triplet. The singlet-triplet splitting is given by the sum of the effective inter-edge coupling and the Zeeman energy:

E(S,S^z) = S Δ_ST(N) + gμ_B S^z B^z,

where μ_B is the Bohr magneton, g=2, and S=0,1 and S^z=±1,0. We choose the quantization axis of the spin to be parallel to the external magnetic field. The singlet-triplet splitting shows an exponential decay given by Δ_ST ∝ e^{-N/ξ}, where N is the system size and ξ represents the localization length. This quantity remains significantly smaller than the Haldane gap, the energy splitting between the low-energy manifold and the bulk states. The exponential dependence of Δ_ST closely resembles what would be expected if two S=1/2 spins were localized at the edges of the chain, with a localization length of the order of ξ. This characteristic is illustrated in Fig. <ref>b, where we calculate the expectation value of the S^z_n operators for the low-energy states with S=1 and S^z=+1. Clearly, these states form a magnetic texture localized at each edge, with a combined magnetic moment of S^z = ∑_{n=1}^{N/2} ⟨±1|S^z_n|±1⟩ = ±1/2. We now consider that an STM-ESR-active spin picks up the stray field generated by a nearby Haldane spin chain.
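Before turning to the sensing protocol, a minimal exact-diagonalization check of this edge texture is sketched below in Python (dense numpy, so only small N). It builds the bilinear-biquadratic Hamiltonian of Eq. (1) at B=0 with β=0.09 as in the text, takes the lowest eigenstate of the total S^z=+1 sector, and prints the (staggered) site magnetization; by symmetry the half-chain sum comes out at +1/2. The choices N=6 and J=1 simply set the problem size and energy scale for this sketch.

```python
import numpy as np
from functools import reduce

# Spin-1 operators in the basis {|+1>, |0>, |-1>}
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], 1)   # raising operator S^+
Sm = Sp.T                                     # lowering operator S^-

def site_op(op, n, N):
    """Embed a single-site operator at site n of an N-site chain."""
    ops = [np.eye(3)] * N
    ops[n] = op
    return reduce(np.kron, ops)

def hamiltonian(N, J=1.0, beta=0.09):
    """Bilinear-biquadratic chain of Eq. (1) with B = 0."""
    H = np.zeros((3**N, 3**N))
    for n in range(N - 1):
        bond = (site_op(Sz, n, N) @ site_op(Sz, n + 1, N)
                + 0.5 * (site_op(Sp, n, N) @ site_op(Sm, n + 1, N)
                         + site_op(Sm, n, N) @ site_op(Sp, n + 1, N)))
        H += J * (bond + beta * bond @ bond)
    return H

N = 6  # kept small: the dense Hilbert space has 3^N states
H = hamiltonian(N)

# Restrict to the total S^z = +1 sector (site 0 is the leftmost kron factor).
states = np.arange(3**N)
sz_tot = sum(1 - (states // 3**(N - 1 - n)) % 3 for n in range(N))
sec = np.where(sz_tot == 1)[0]
E, V = np.linalg.eigh(H[np.ix_(sec, sec)])
psi = np.zeros(3**N)
psi[sec] = V[:, 0]  # lowest state with S^z = +1

profile = np.array([psi @ site_op(Sz, n, N) @ psi for n in range(N)])
print(np.round(profile, 3), "half-chain sum:", round(profile[:N // 2].sum(), 3))
```

The paper's Fig. 1b profiles are obtained with DMRG on longer chains; this dense sketch only illustrates the same edge-localized structure at small N.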
We now consider that an ESR-STM-active spin picks up the stray field generated by a nearby Haldane spin chain. In ESR-STM experiments the DC current across the STM-surface junction is measured as a function of the frequency of the driving voltage. The ESR-STM spectrum for this lateral-sensing setup can be described by the following equation <cit.>:
I_DC(f)= ∑_ℓ p_ℓ L(f-f(ℓ)),
where the sum runs over the eigenstates of the Hamiltonian (eq. <ref>) of the spin chain, H|ℓ⟩=E_ℓ|ℓ⟩, p_ℓ=(1/Z)e^-E_ℓ/(k_B T) are the thermal occupations of each eigenstate, and L(f-f(ℓ)) is a Lorentzian-type resonance curve centered around the frequency f(ℓ). We assume that the external magnetic field is perpendicular to the sample, so that the stray field created by the chain at the sensor location is also perpendicular to the substrate. [Since both the external field and the exchange interactions are much larger, it is safe to neglect the backaction effect of the stray field of the sensor on the Hamiltonian of the spin chain.] As a result, the resonant frequency of the sensor shifts linearly with the stray field:
f(ℓ)=μ_B g_s/h (B+b(ℓ)),
where h=2πħ, g_s is the gyromagnetic factor of the sensor, and B and b(ℓ) denote the external field and the stray field generated by the target spins in the state ℓ, respectively, both along the off-plane direction. In turn, the stray field is given by:
b^z(ℓ)= -μ_0/4π∑_n=1^N m_n^z(ℓ)/(d^n)^3,
where d^n is the distance between the sensor and the spin n, and m_n^z(ℓ) is the magnetic moment generated by the spin n in the state ℓ, given by:
m_n^z(ℓ)= -gμ_B ⟨ℓ|S^z_n|ℓ⟩.
Equations (<ref>), (<ref>), (<ref>), and (<ref>) relate the expectation values of the magnetic moments in a given Haldane-chain state |ℓ⟩ to the ESR-STM spectrum of a nearby sensor. We now discuss how to determine the magnetic moment of these states and verify fractionalization. Importantly, we assume that the temperature is much smaller than the Haldane gap [We note that, for nanographene Haldane spin chains, ESR-STM has been implemented at temperatures T<4 K, much smaller than Δ_H, which was found to be in the range of 10 meV.], so that only the four states of the ground-state manifold of the chain contribute to the sum in eq. (<ref>). Only two of these states, with S=1 and S^z=± 1, have a non-vanishing expectation value of the spins. The corresponding magnetic profile of the S^z=+1 state, calculated using DMRG <cit.>, is shown in Fig. <ref>b for β=0.09, relevant for triangulene spin chains <cit.>. It corresponds to two physically separated objects with S^z=1/2 localized at the edges. The S^z=-1 state has analogous properties. In contrast, the expectation value of the spin operators is identically zero when calculated with the S^z=0 states. Consequently, the four low-energy states of the Haldane spin chain correspond to three distinct magnetic states, with a vanishing stray field for the S=0 and S=1, S^z=0 states, and a finite stray field of opposite sign for the S^z=± 1 states (see Fig. <ref>a). As a result, the ESR-STM spectrum of the spin sensor has three distinct peaks corresponding to three different stray fields (see Fig. <ref>b). As expected, the stray fields of the S^z=± 1 states have the same magnitude and opposite sign. From the splitting of these peaks, it is possible to extract the value of the stray field at the location of the sensor:
b^z(±1)= h/g_sμ_B (f_0-f_±1),
where f_±1 are the resonant frequencies measured at the sensor when the Haldane spin chain occupies the states with S^z=±1 within the ground-state manifold, and f_0 is the frequency of the peak associated with the states with vanishing stray field.
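A short Python sketch of the two expressions above — the Biot-Savart sum for b^z and the resulting sensor frequencies — follows. The chain geometry, sensor position, external field, and the exponentially localized edge texture are illustrative assumptions, not values from the paper.

```python
import numpy as np

mu0 = 4e-7 * np.pi      # vacuum permeability (T m / A)
muB = 9.274e-24         # Bohr magneton (J / T)
h = 6.626e-34           # Planck constant (J s)
g = g_s = 2.0

def stray_field_z(sz_expect, spin_pos, sensor_pos):
    """b^z = -(mu0/4pi) * sum_n m_n^z / d_n^3, with m_n^z = -g muB <S^z_n>."""
    d = np.linalg.norm(spin_pos - sensor_pos, axis=1)
    return -mu0 / (4.0 * np.pi) * np.sum(-g * muB * sz_expect / d**3)

# Assumed edge texture of the S^z = +1 state (left edge only, far edge neglected)
N, a, xi = 30, 5e-10, 3.0
n = np.arange(N)
sz_expect = (-1.0) ** n * 0.35 * np.exp(-n / xi)

spin_pos = np.stack([n * a, np.zeros(N)], axis=1)
sensor_pos = np.array([0.0, 5e-10])           # sensor 0.5 nm from the first spin

B_ext = 0.5                                   # external field (T), illustrative
b = stray_field_z(sz_expect, spin_pos, sensor_pos)
f0 = muB * g_s / h * B_ext                    # S^z = 0 states: no stray field
f_plus = muB * g_s / h * (B_ext + b)          # S^z = +1 state shifts the peak
print(f"b^z = {b * 1e3:.2f} mT, peak splitting = {(f_plus - f0) / 1e6:.1f} MHz")
```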
In order to determine the magnetic moments of the N/2 spins of one half of the chain, which define a vector M≡(m_1(±),...,m_N/2(±)), we need to measure the stray field at a set of N_M different locations, which yields a vector in readout-location space, B≡(b_1(±),...,b_N_M(±)). We have considered two methods to extract the vector M from B. The first method is the full inversion of the Biot-Savart equation (FIBS), which can be written as B=-(μ_0/4π)DM, where the elements of the matrix D are |d^n_n_m|^-3. In this case, it is apparent that the number of necessary readouts equals half of the chain length, N_M=N/2. The second method involves the use of artificial neural networks and requires a dramatically smaller number of measurements to determine the magnetization map (Fig. 3c,d).
The finite magnetic sensitivity of the readouts, denoted by δB, imposes an uncertainty on the determination of the edge magnetic moment. The sensor spectral resolution is ultimately limited by the shot noise <cit.>:
δB_min= (4/3) (h δf/g_s μ_B) √(e/(I_0 Δt)),
where δf is the linewidth, I_0 is the maximal current in the resonance peak, and Δt is the measurement time, which may be limited by factors such as thermal drift of the tip. The associated minimal shift in the sensor frequency is given by Δf_min= gμ_B δB_min/h; frequency shifts in the range of 1 MHz have been reported in ESR-STM magnetometry <cit.>. In the case of the FIBS, the uncertainty of the edge magnetic moments is given by:
δ⟨ S^z ⟩ = (4π/μ_0μ_B g) ( ∑_n,n_m |(D^-1)^n_n_m| ) δB_min.
In Fig. <ref>a, we show that, assuming the reported resolution Δf_min=1 MHz <cit.>, the uncertainty in the determination of the edge spin, δ⟨ S^z ⟩, is an order of magnitude lower than ⟨ S^z ⟩. In principle, longer readout times would make it possible to decrease δf and, thereby, δ⟨ S^z⟩.
We now discuss a second method that makes use of artificial neural networks (NNs) to invert the Biot-Savart equation. This method comes with two advantages. First, it reduces the number of ESR-STM measurements. Second, it yields a dramatically smaller uncertainty in the determination of the fractional spin. Our approach involves two different NNs. The first NN classifies magnetic profiles derived from ESR readouts, confirming the presence of the characteristic Haldane-spin-chain edge magnetization (see Fig. <ref>b). The second NN is trained to convert Haldane-type magnetic profiles, with a specific number of measurements, into spin expectation values. The training set incorporates several thousand spin distributions obtained by varying β between 0 and 1/3, each with its characteristic magnetic profile. In addition, we added random noise with amplitude bounded by the magnetic sensitivity range (see Supplemental Material <cit.>). The synergistic application of these two neural networks reduces the training time to a few minutes on a conventional laptop. This is crucial since a distinct NN needs to be trained for each experimental layout, considering factors such as spin arrangement, sensor positions, sensor type, and spectral resolution. In Fig. <ref>b, we used N_M=7 strategically positioned measurements <cit.>, as shown in Fig. <ref>c. Our calculations reveal that, with a number of ESR readouts much smaller than N/2, it is possible to obtain the magnetization maps and, therefore, the total magnetic moment of half of the chain with an uncertainty as low as δ⟨ M_z ⟩≈ 10^-4μ_B (see Supplemental Material <cit.>), as shown in Fig. <ref>b. Consequently, this methodology proves sufficient for determining the presence of an S=1/2 object and its localization length using state-of-the-art ESR-STM instrumentation.
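To make the FIBS route concrete, here is a self-contained numerical sketch of the inversion B=-(μ_0/4π)DM and of the uncertainty formula above, applied to synthetic data; the sensor layout (one readout 0.5 nm above each spin, so N_M=N/2) and the synthetic edge texture are assumptions. It also computes the moment ⟨n⟩ used below to estimate the localization length.

```python
import numpy as np

mu0, muB, g = 4e-7 * np.pi, 9.274e-24, 2.0
h = 6.626e-34

def geometry(sensor_pos, spin_pos):
    """Matrix D with elements 1/d^3 (one row per readout location)."""
    d = np.linalg.norm(sensor_pos[:, None, :] - spin_pos[None, :, :], axis=2)
    return d ** -3

half, a = 8, 5e-10                                   # N/2 spins, 0.5 nm spacing
n = np.arange(half)
spin_pos = np.stack([n * a, np.zeros(half)], axis=1)
sensor_pos = spin_pos + np.array([0.0, 5e-10])       # N_M = N/2 readout points
D = geometry(sensor_pos, spin_pos)

# Synthetic "measurement": forward Biot-Savart on an assumed edge texture
sz_true = (-1.0) ** n * 0.35 * np.exp(-n / 3.0)
B = -mu0 / (4.0 * np.pi) * D @ (-g * muB * sz_true)

# FIBS: solve B = -(mu0/4pi) D M, then convert moments back to <S^z_n>
M = np.linalg.solve(-mu0 / (4.0 * np.pi) * D, B)
sz = -M / (g * muB)
print("recovered <S^z_n>:", np.round(sz, 3))
print("xi estimate from <n>:", np.sum(np.abs(sz) * n) / np.sum(np.abs(sz)))

# Uncertainty of the edge spin for a 1 MHz frequency resolution
dB_min = h * 1e6 / (g * muB)
d_sz = 4.0 * np.pi / (mu0 * muB * g) * np.abs(np.linalg.inv(D)).sum() * dB_min
print(f"delta <S^z> ~ {d_sz:.3f}")
```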
We now discuss how to infer another important property of the edge spins, namely, their localization length ξ. This is based on two facts. First, the inversion of the Biot-Savart equation yields the value of the magnetic moment at many sites close to the edge. Second, our numerical work shows that we can parametrize the spins with the following equation:
⟨ S_n^z⟩_±= ±(-1)^n 𝒜 e^-n/ξ,
where 𝒜 represents the maximum value of ⟨ S^z_n ⟩ at the edge spin, ensuring that ∑_n=1^N/2⟨ S^z_n ⟩ = 1/2. Our numerical calculation (see Supplemental Material <cit.>) shows that the normalized moment of the magnetization field,
⟨ n ⟩ = ∑_n |⟨ S^z_n ⟩| · n/∑_n |⟨ S^z_n ⟩|,
is proportional to ξ in an almost one-to-one relation (⟨ n ⟩≈ 1.02ξ), which permits one to determine this quantity with an uncertainty associated with δ⟨ S_n^z ⟩. For the reported spectral resolution of 1 MHz and d=0.5 nm, the relative errors would be δξ/ξ≈ 10^-1 for FIBS and δξ/ξ≈ 10^-5 for NNs <cit.>.
In conclusion, we propose a method to measure the two outstanding properties of the S=1/2 fractional degrees of freedom that emerge at the edges of Haldane S=1 chains: their fractional magnetic moment M=gμ_B S and their localization length, or spatial extension, ξ. Our theoretical analysis shows that our method can be implemented with state-of-the-art ESR-STM magnetometry. Our proposal permits one to go beyond previous work, where the presence of fractional degrees of freedom is inferred indirectly but the actual fractionalization of the magnetic moment is not measured directly. This approach could be used to probe the fractional edge spins expected to occur in two-dimensional AKLT models and could also inspire similar experiments using related atomic-scale magnetometers, such as NV centers <cit.>.
We acknowledge Arzhang Ardavan for fruitful discussions and Jose Lado for technical assistance on the implementation of DMRG. J.F.R. acknowledges financial support from FCT (Grant No. PTDC/FIS-MAC/2045/2021), SNF Sinergia (Grant Pimag), Generalitat Valenciana funding Prometeo2021/017 and MFA/2022/045, and funding from MICIIN-Spain (Grant No. PID2019-109539GB-C41). YDC acknowledges funding from FCT, QPI (Grant No. SFRH/BD/151311/2021) and thanks the hospitality of the Departamento de Física Aplicada at the Universidad de Alicante.
| http://arxiv.org/abs/2311.15720v1 | {
"authors": [
"Y. del Castillo",
"J. Fernández-Rossier"
],
"categories": [
"cond-mat.mes-hall",
"quant-ph"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231127111319",
"title": "Probing spin fractionalization with ESR-STM absolute magnetometry"
} |
IPv6 Bitcoin-Certified Addresses
Mathieu Ducroux
nChain AG
Zug, Switzerland
[email protected]
========================================================================
A pivotal feature of IPv6 is its plug-and-play capability that enables hosts to integrate seamlessly into networks. In the absence of a trusted authority or security infrastructure, the challenge for hosts is generating their own address and verifying ownership of others. Cryptographically Generated Addresses (CGA) solves this problem by binding IPv6 addresses to hosts' public keys to prove address ownership. CGA generation involves solving a cryptographic puzzle similar to Bitcoin's Proof-of-Work (PoW) to deter address spoofing. Unfortunately, solving the puzzle often causes undesirable address generation delays, which has hindered the adoption of CGA. In this paper, we present Bitcoin-Certified Addresses (BCA), a new technique to bind IPv6 addresses to hosts' public keys. BCA reduces the computational cost of generating addresses by using the PoW computed by Bitcoin nodes to secure the binding. Compared to CGA, BCA provides better protection against spoofing attacks and improves the privacy of hosts. Due to the decentralized nature of the Bitcoin network, BCA avoids reliance on a trusted authority, similar to CGA. BCA shows how the PoW computed by Bitcoin nodes can be reused, which saves costs for hosts and makes Bitcoin mining more efficient.
Cryptographically Generated Addresses, IPv6 security, Bitcoin, Proof of Work
§ INTRODUCTION
As IPv6 adoption is gaining significant momentum, it has become critical to mitigate attacks such as those targeting the Neighbor Discovery Protocol (NDP), which provides link-layer address resolution <cit.>. NDP operates under the premise that the network consists of trusted hosts. However, this assumption does not always hold, especially in public wireless networks where only a minimal authentication mechanism is required to join the link. This gives an attacker the opportunity to spoof the addresses of legitimate hosts and launch Denial-of-Service (DoS), man-in-the-middle, and other network-related attacks <cit.>. To overcome these issues, the SEcure Neighbor Discovery (SEND) protocol has been introduced <cit.>. SEND employs Cryptographically Generated Addresses (CGA) to provide authentication of IPv6 addresses without relying on a trusted authority or additional security infrastructure <cit.>. CGA has proven to be useful in other environments, and its usage has been proposed to secure the Shim6 multihoming protocol <cit.> and the Mobile IPv6 protocol <cit.>.
A CGA is an IPv6 address whose interface identifier (its rightmost 64 bits) is generated by hashing a public key and auxiliary parameters. Any host can verify the ownership of an address by recomputing the address from the public key and requesting a signature that verifies under the corresponding public key. To reinforce the binding between the public key and the address, CGA introduced the hash extension technique to the address generation process <cit.>. This technique requires hosts to solve a partial hash inversion puzzle, similar to Proof-of-Work (PoW)-based systems <cit.>. The puzzle solution is hashed together with the public key to generate the address. The difficulty of the puzzle is chosen by hosts depending on their computational power. Increasing the difficulty of the puzzle increases the resistance of an address against spoofing attacks.
On the other hand, it also increases the address generation time. The issue with CGA is that it trades security for performance without being able to offer a good balance between the two. As noted in the original CGA RFC, the hash extension technique is effective if the computational power of attackers and hosts grows at the same rate <cit.>. In reality, attackers benefit from a linear increase in attack speed by investing in parallel hardware. This leaves hosts with limited parallel hardware highly susceptible to spoofing attacks. For standard devices, it has been shown that the cost of generating an address with high security can be prohibitive <cit.>. The issue becomes particularly problematic on mobile networks, in which devices have limited computational power and operations such as handovers must be completed within a few milliseconds <cit.>. Additionally, the high computational cost of CGA generation disincentivizes hosts from changing their address frequently, which exposes them to privacy-related attacks <cit.>. Using the same interface identifier for a long period of time makes it possible for an attacker to monitor and correlate the activity of a host, even as it changes subnet. The correlation can be based on the characteristics of the intercepted packets, such as size or timing <cit.>.
Several approaches have been proposed to solve the performance issue of CGA. First, a time-based termination condition can be added to the address generation process <cit.>. This ensures that the address generation time does not exceed a predefined value. However, the trade-off between security and performance remains the same, and hosts with limited computational power remain highly susceptible to spoofing attacks. Another suggestion consists of performing the work in advance or offline, rather than in real time when a new address is needed <cit.>. Even though this might reduce the delay needed to obtain a new address, it does not reduce the computational burden of generating addresses. To guarantee a good level of security while maintaining a reasonable computational cost, a local key server can be employed <cit.>. The server is responsible for performing the work in advance and serving keys to hosts that join the network. However, this model introduces a single point of failure and involves setting up additional security infrastructure, which is against the original CGA design <cit.>.
The work performed by hosts in CGA exhibits similarities to that performed by Bitcoin nodes to secure the Bitcoin blockchain. Launched in 2009, Bitcoin is the first and most successful implementation of a blockchain <cit.>. The Bitcoin network consists of a distributed and decentralized set of nodes that agree on the next block of transactions to be appended to the blockchain through PoW <cit.>. In PoW, Bitcoin nodes compete to solve a partial hash inversion puzzle tied to a block. The first node to solve the puzzle obtains the right to add the block to the blockchain and claim the associated reward. The competitive nature of PoW, coupled with the development of highly performant hardware and the increasing popularity of Bitcoin, has led to a steady increase in the total hash rate of the Bitcoin network. In February 2023, it peaked at about 300 × 10^18 hashes per second <cit.>, making Bitcoin the most powerful distributed hashing system in the world.
In this paper, we introduce Bitcoin-Certified Addresses (BCA), a new technique to generate IPv6 addresses from a host's public key registered on the Bitcoin blockchain.
The PoW computed by Bitcoin nodes is used to secure the binding between the public key and the address, thereby making the computational cost of address generation minimal for hosts. Compared to CGA, the security of the binding is improved since it is guaranteed by Bitcoin nodes instead of hosts' devices, which typically have limited computational power. The reward mechanism in Bitcoin incentivizes nodes to invest in the latest hardware technology, as evidenced by the network's increasing hash rate <cit.>. As a result, the growth in computational power of Bitcoin nodes is likely to be aligned with that of a powerful attacker. In CGA, this assumption does not hold because standard devices tend to have limited hardware available. This renders them vulnerable to attackers that are able to invest in large amounts of parallel hardware.
BCA improves on previous proposals which suggested delegating the expensive work performed during address generation to external computers <cit.>. First, BCA is more efficient as it does not require setting up extra infrastructure. Instead, it relies on the existing Bitcoin network and the already expended work of the Bitcoin nodes. Secondly, the highly distributed nature of the Bitcoin network makes it resistant to DoS attacks and manipulation by an attacker. Finally, the financial incentives for performing the work are clearer. BCA provides atomic payments in the form of transaction fees to remunerate Bitcoin nodes for their work.
BCA demonstrates how the PoW computed by Bitcoin nodes can be reused, which makes Bitcoin mining more efficient. A common criticism of Bitcoin mining is that it wastes computational power and energy without having any intrinsic value <cit.>. Several projects have proposed replacing PoW with computational problems that have direct real-world applications <cit.>. Some blockchains extract additional value from the PoW computed in Bitcoin by reusing it in their consensus protocol, a technique referred to as merge-mining <cit.>. BCA shows that, besides securing blockchains, the PoW computed by Bitcoin nodes can be used to secure other protocols that would otherwise require users to expend a lot of computational resources themselves.
Our contributions in this paper are the following:
* We introduce BCA, a new technique to generate IPv6 addresses from public keys. The technique is efficient and provides a good level of security for all hosts, regardless of their computational power.
* We present a detailed analysis of BCA and CGA. In particular, we analyse the cost of spoofing attacks in both techniques. We show that in practice, BCA provides better security against spoofing attacks than CGA.
* We provide an implementation of BCA to demonstrate its usability. We also provide an implementation of CGA and evaluate the two techniques. Our evaluation shows that to obtain the same level of security as BCA, generating an address in CGA takes an average of 696.15 s, while it takes only 0.00096 ms in BCA on a standard laptop.
The rest of the paper is organized as follows. Section <ref> presents CGA and analyzes its resistance against spoofing attacks and its costs. Section <ref> describes the Bitcoin protocol. Section <ref> introduces the proposed BCA technique and gives a detailed analysis of it. Section <ref> describes our BCA implementation and compares its performance to that of CGA.
Finally, the conclusions are presented in Section <ref>.
§ CRYPTOGRAPHICALLY GENERATED ADDRESSES (CGA)
In this section, we describe the CGA technique and analyze its resistance against spoofing attacks and its costs.
§.§ CGA Specification
The objective of CGA is to prevent spoofing of IPv6 addresses by binding the generated address to a public key to prove address ownership. IPv6 addresses are 128-bit IP addresses where the leftmost 64 bits form the subnet prefix and the rightmost 64 bits form the interface identifier <cit.>. The subnet prefix is used to determine the host's location in the Internet topology, and the interface identifier is used as an identity of the host.
A CGA is an IPv6 address whose interface identifier is obtained by hashing the host's public key and auxiliary parameters, together known as the CGA Parameters data structure. Any host can verify that a message comes from the claimed address by recomputing the address from the CGA Parameters data structure and verifying the attached signature, which must be valid for the public key.
The CGA Parameters data structure comprises a 128-bit randomly generated modifier value, the 64-bit subnet prefix of the address, an 8-bit collision count value, the DER-encoded public key of the host, and a variable-length extension field value. The modifier is used to strengthen the binding between the public key and the address and to enhance privacy by adding randomness to the address. The collision count can only take the value 0, 1, or 2, and is used during address generation to recover from an address collision detected by Duplicate Address Detection (DAD) <cit.>. The extension field can be used for additional data items. By default, it has length 0.
Each CGA also has a 3-bit security parameter sec encoded in the three leftmost bits of its interface identifier. The sec parameter determines the strength of the binding between the public key and the address. It can have values from 0 (lowest security) to 7 (highest security). Hosts should select this value depending on their computational power. The higher the sec value, the longer it takes to generate an address.
Fig. <ref> illustrates the CGA generation algorithm. It starts with the hash extension technique, in which hosts solve a partial hash inversion puzzle tied to their public key. Specifically, hosts iterate the modifier until the 16 ×sec leftmost bits of Hash2 are equal to zero. Hash2 contains the 112 leftmost bits of the hash digest computed over the CGA Parameters data structure with the subnet prefix and collision count set to zero. Then, Hash1 is computed by hashing the CGA Parameters data structure and taking the 64 leftmost bits of the hash digest. The interface identifier of the address is derived from Hash1 by setting the u and g bits <cit.> (respectively the 6th and 7th leftmost bits, starting from 0) to zero and encoding the sec parameter in the three leftmost bits. Finally, the address is obtained by concatenating the subnet prefix with the interface identifier. If an address collision is detected by DAD, the collision count is incremented by one and the Hash1 value is recomputed. After three collisions, the algorithm stops and an error is reported.
The original specification of CGA suggests the use of the SHA-1 algorithm to compute Hash1 and Hash2 <cit.>. Due to the vulnerabilities found in SHA-1 <cit.>, propositions have been made to transition to more secure hash algorithms <cit.>. For example, the SHA-256 hash algorithm is employed in several alternative CGA designs <cit.>.
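For illustration, the following Python sketch implements the hash-extension loop and interface-identifier assembly described above, using SHA-256 as in the alternative designs just mentioned. The byte layout is a deliberate simplification: the real RFC 3972 encoding uses a DER-encoded public key and a fixed CGA Parameters format, and DAD is not modelled here.

```python
import hashlib
import os

def cga_generate(public_key: bytes, subnet_prefix: bytes, sec: int):
    """Simplified CGA generation (hash extension + Hash1), SHA-256 variant."""
    assert 0 <= sec <= 7 and len(subnet_prefix) == 8
    # Hash extension: iterate the modifier until the 16*sec leftmost
    # bits of Hash2 are zero (prefix and collision count zeroed).
    while True:
        modifier = os.urandom(16)
        h2 = hashlib.sha256(modifier + bytes(9) + public_key).digest()
        hash2 = int.from_bytes(h2[:14], "big")          # leftmost 112 bits
        if hash2 >> (112 - 16 * sec) == 0:
            break
    collision_count = 0  # would be incremented on DAD collisions
    params = modifier + subnet_prefix + bytes([collision_count]) + public_key
    iid = bytearray(hashlib.sha256(params).digest()[:8])  # Hash1, leftmost 64 bits
    iid[0] = (iid[0] & 0b00011100) | (sec << 5)  # sec in top 3 bits, u/g bits zeroed
    return subnet_prefix + bytes(iid), params

# sec=1 already requires ~2^16 iterations of the loop on average
addr, params = cga_generate(os.urandom(33), bytes(8), sec=1)
print(addr.hex())
```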
The CGA verification algorithm takes as input the CGA and its associated CGA Parameters data structure. The verification starts by checking that the collision count is less than or equal to 2. Next, it checks that the subnet prefix in the CGA Parameters data structure is the same as the one in the address. The verification then recomputes Hash1 and checks that it matches the interface identifier of the address. Finally, it recomputes Hash2 and verifies that the 16 ×sec leftmost bits of the recomputed value are zero.
§.§ CGA Analysis
Spoofing a CGA implies finding a CGA Parameters data structure that comprises the attacker's public key and successfully binds to the target CGA. The attacker can then use her private key to sign messages and pretend to be the legitimate owner of the address. In the following, we analyze the cost of this attack. We adapt the analysis made by Bos, Özen, and Hubaux <cit.> to the case where all three values of the collision count can be iterated to spoof the CGA.
Given a network, assume hosts generate IPv6 addresses using CGA with security parameter 0 ≤sec≤ 7. Then, the expected number of hash function evaluations required to spoof a specific address is:
T_CGA = 2^59 if sec = 0, 2^59 + (2^16 ×sec+59 / 3) if sec > 0.
Spoofing an address generated with CGA requires finding a valid CGA Parameters data structure that comprises the attacker's public key and fulfills these conditions: (1) the leftmost 16 ×sec bits of Hash2 are zero, and (2) the Hash1 value yields the target address. An attacker can proceed either by first satisfying condition (1) and then (2), or vice versa.
When starting with condition (1), the attacker is expected to perform 2^16 ×sec hash function evaluations to find a suitable modifier. For a fixed collision count, the probability that condition (2) is satisfied is 2^-59. Because the attacker can try the three different values of the collision count to satisfy condition (2), Hash2 is expected to be computed 2^16 ×sec+59 / 3 times in total. Condition (2) is satisfied after computing Hash1 2^59 times on average. The total cost for spoofing, when starting with condition (1), thus becomes 2^59 + (2^16 ×sec+59 / 3) hash function evaluations.
When starting with condition (2), the attacker iterates the inputs to Hash1 until condition (2) is satisfied. This is expected to require 2^59 hash function evaluations. Next, the attacker computes Hash2 and verifies that condition (1) is satisfied, which happens with a probability 2^-16 ×sec. So, the attacker is expected to compute Hash1 2^16 ×sec+59 times. Moreover, the attacker is expected to compute Hash2 2^16 ×sec times before condition (1) is satisfied. Therefore, the total cost for spoofing, when starting with condition (2), becomes 2^16 ×sec + 2^16 ×sec+59 hash function evaluations.
For all sec values with 1 ≤sec≤ 7, the cost of the attack is smaller when starting with condition (1) than with condition (2). Therefore, this attack has a cost of 2^59 + (2^16 ×sec+59 / 3) hash function evaluations. If sec = 0, then the computation of Hash2 is skipped and condition (2) is satisfied after performing an average of 2^59 hash function evaluations.
The security of the binding between the public key and the address depends on the sec parameter. Incrementing the sec value by one increases the cost of spoofing attacks by a factor of 2^16. At the same time, it increases the cost of address generation by the same factor, thereby introducing a trade-off between security and performance.
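The closed-form cost above translates into a one-line calculator; this small helper (illustrative only) reproduces the ~73-bit and ~89-bit security levels quoted below and later in the BCA analysis:

```python
import math

def t_cga(sec: int) -> float:
    """Expected hash evaluations to spoof a CGA (cost formula above)."""
    return 2.0**59 if sec == 0 else 2.0**59 + 2.0**(16 * sec + 59) / 3.0

for sec in range(4):
    print(f"sec={sec}: ~2^{math.log2(t_cga(sec)):.1f} hash evaluations")
# sec=1 gives ~2^73.4 and sec=2 gives ~2^89.4 evaluations.
```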
The cost of address verification is independent of the sec parameter and is always two hash function evaluations.
Several studies found that generating addresses with a sec value of 2 takes several minutes on standard devices, which is not an acceptable delay for generating addresses <cit.>. This implies that the sec value cannot be larger than 1, which yields approximately 73 bits of security. For low-end devices, only a sec value of zero is practical <cit.>. This yields 59 bits of security, which is far from being considered secure. These results suggest that generating secure CGAs on the fly is not practical for most devices.
§ BITCOIN
The Bitcoin blockchain is a distributed ledger that acts as an immutable record of data <cit.>. As Bitcoin is public, anyone can publish data on the blockchain embedded in Bitcoin transactions. The Bitcoin network consists of nodes that are responsible for validating transactions and aggregating them into blocks. These blocks of transactions are appended to the blockchain via a stochastic process called mining. This process requires nodes to solve a partial hash inversion puzzle known as Proof of Work (PoW) <cit.>.
A Bitcoin block contains an ordered list of transactions and a header with a fixed size of 80 bytes. The block header contains multiple fields, among which are the difficulty of the PoW of the block, a nonce value that can be iterated to solve the PoW, and a Merkle root obtained by hashing the ordered list of transactions into a Merkle tree <cit.>.
The PoW in Bitcoin consists of iterating the values in the block header until its double SHA-256 hash (SHA-256 applied twice) is below some target value. The target is derived from the difficulty value encoded in the block header. Nodes use a predefined algorithm to adjust the difficulty of the PoW to the computational power of the Bitcoin network. The algorithm ensures that the time required by the network to solve the PoW is 10 minutes on average.
A Bitcoin transaction contains virtually any number of inputs and outputs. Transaction outputs lock a certain number of bitcoins, which can be unlocked by providing the correct data (e.g., a set of signatures) in the input of a new transaction. Outputs can also be used to inscribe data, such as hash values or text, onto the blockchain. Transactions include a fee that remunerates nodes for validating transactions, solving the PoW, and publishing new blocks.
Bitcoin relies on Merkle trees to commit to the transactions appearing in a block. The leaves of the Merkle tree consist of the transactions, and each non-leaf node is labeled with the hash of the concatenation of its child nodes. The advantage of using Merkle trees is that they allow anyone to produce concise, unique, and easy-to-verify inclusion proofs of transactions. Given a transaction and its Merkle proof, a verifier can recompute the Merkle root and compare it with the Merkle root in the block header. If they match, the verifier can be assured that the transaction has been included in the block.
As of February 2023, the three main implementations of Bitcoin are Bitcoin Core (BTC) <cit.>, Bitcoin Cash (BCH) <cit.>, and Bitcoin SV (BSV) <cit.>. Table I shows a comparative analysis of these three implementations. The transaction throughput refers to the number of transactions that can be added to the blockchain per second.
Among these implementations, BSV offers the highest transaction throughput and the cheapest transaction fees.
§ BITCOIN-CERTIFIED ADDRESSES (BCA)
In this section, we present Bitcoin-Certified Addresses (BCA), an enhanced version of CGA whereby hosts delegate the work that secures the binding between the public key and the IPv6 address to Bitcoin nodes.
§.§ Design Rationale
BCAs are generated from a Bitcoin transaction registering the public key of a host and the header of a Bitcoin block containing the transaction. The search for a valid block header performed by Bitcoin nodes during the PoW process is used to secure the binding between the public key and the address, similar to the search for a valid modifier in CGA. Therefore, no computationally intensive operations have to be performed by hosts during address generation. Because the amount of work performed by Bitcoin nodes is typically much higher than what can be performed by the conventional machine of a host in CGA, the security of the binding is stronger in BCA than what can be achieved in practice in CGA.
BCA ensures that multiple addresses can be generated from a single transaction and public key, thus avoiding the need to create and broadcast a new transaction every time a new address is needed. This is achieved by creating a Merkle tree of modifier values and including the root of the tree in the transaction where the public key is registered. The modifier values are then used as input to the BCA generation algorithm. They should be randomly generated to ensure that addresses generated from the same public key are unlinkable, thus protecting the privacy of hosts.
Each BCA is associated with a BCA Parameters data structure, whose format is depicted in Table <ref>. The BCA Parameters data structure is communicated to other hosts during address verification. To prevent an attacker from spamming hosts by sending very large (and potentially incorrect) proofs to be verified, we impose a limit N_max on the number of modifier values that can be committed in the transaction. We also assume that the number of transactions appearing in a block does not exceed M_max, which implies that the Merkle proof of the transaction should not contain more than log_2(M_max) hash values. We recommend using the values N_max = 32 and M_max = 2^28, which allows for more than 250 million transactions to be included in a block.
Like CGA, each BCA has a security parameter sec encoded in the three leftmost bits of its interface identifier. This parameter is derived from the difficulty of the PoW computed by Bitcoin nodes and determines the strength of the BCA against spoofing attacks.
§.§ BCA Specification
Public key registration. In order to generate BCAs, hosts must register their public key on the Bitcoin blockchain. For this, they create and broadcast a Bitcoin transaction whose data payload includes the hash of their public key. They also generate a list of N random 128-bit modifier values, with N ≤ N_max, and include the root of the associated Merkle tree in the transaction. Hosts must store these modifier values and the associated Merkle tree in memory. Once the transaction is included in a block, hosts fetch and verify the Merkle proof of inclusion of the transaction in the block and store it in memory. They also fetch the header of the block where the transaction is included and store it in memory.
The sec parameter is derived from the difficulty of the PoW encoded in the block header as sec = ⌊log_2(difficulty) / 16⌋. This implies that for a given difficulty, the double SHA-256 hash of the block header found by Bitcoin nodes has its 16 × (sec + 2) leftmost bits equal to zero. The minimum sec value is 0 and is obtained when the difficulty of the PoW is 1, its minimum value. This corresponds to a PoW that requires finding a hash digest whose leftmost 32 bits are zero. The maximum sec value is 7, which requires finding a hash digest whose leftmost 144 bits are zero.
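The mapping from PoW difficulty to sec can be written directly; a minimal helper follows (the clamp to the 3-bit range and the example difficulty, taken from the implementation section of this paper, simply restate the text):

```python
import math

def sec_from_difficulty(difficulty: float) -> int:
    """sec = floor(log2(difficulty) / 16), clamped to the 3-bit range [0, 7]."""
    return min(7, max(0, int(math.log2(difficulty) // 16)))

print(sec_from_difficulty(1))          # 0: minimum difficulty
print(sec_from_difficulty(68.271e9))   # 2: the block difficulty reported later
```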
BCA generation. We define modifier_i as the i-th modifier in the list of generated modifier values. The index i is initialized to zero and is incremented by one every time a new address in the subnet is generated. The BCA generation algorithm takes as input modifier_i, the block header, the subnet prefix, and the transaction, and works as follows (a code sketch is given after the verification algorithm below).
* Set the collision count to zero.
* Hash the concatenation of modifier_i, the block header, the subnet prefix, the collision count, and the transaction, and take the leftmost 64 bits of the resulting hash value. The result is Hash1.
* Construct the interface identifier from Hash1 by writing the sec value into the three leftmost bits and setting the u and g bits to zero.
* Concatenate the subnet prefix and the interface identifier to form a 128-bit IPv6 address.
* Perform DAD. If an address collision is detected, increment the collision count by one and go back to step 2. After three collisions, stop and report an error.
The output is a new BCA and its associated BCA Parameters data structure. Fig. <ref> summarizes the public key registration process and the BCA generation algorithm.
BCA verification. The BCA verification algorithm takes as input the BCA and its associated BCA Parameters data structure, and checks that the following conditions hold:
* The collision count is equal to 0, 1, or 2.
* The subnet prefix in the BCA Parameters data structure is equal to the subnet prefix of the address.
* The hash of the public key is equal to the hashed public key included in the data payload of the transaction.
* The format of the block header is valid.
* The Merkle proof of inclusion of the transaction in the block is valid, and the height of the corresponding Merkle tree is less than or equal to log_2(M_max) (there are at most M_max transactions in the block).
* The Merkle proof of inclusion of modifier_i in the transaction is valid, and the height of the corresponding Merkle tree is less than or equal to log_2(N_max) (there are at most N_max modifier values committed in the transaction).
* The sec parameter extracted from the three leftmost bits of the interface identifier of the address is such that the leftmost 16 × (sec + 2) bits of the double SHA-256 hash of the block header are zero.
* The Hash1 value computed from the BCA Parameters data structure is equal to the interface identifier of the address. Differences in the u, g, and sec bits are ignored.
If each condition is met, then the binding between the public key in the BCA Parameters data structure and the address is valid.
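The sketch below covers steps 2-4 of the generation algorithm (the single SHA-256 evaluation that yields the address). The concatenation order follows the text, while the exact byte serialization is an assumption and DAD is omitted:

```python
import hashlib

def bca_generate(modifier: bytes, block_header: bytes, subnet_prefix: bytes,
                 transaction: bytes, sec: int, collision_count: int = 0) -> bytes:
    """Steps 2-4 of BCA generation: one hash evaluation yields the address."""
    data = (modifier + block_header + subnet_prefix
            + bytes([collision_count]) + transaction)
    iid = bytearray(hashlib.sha256(data).digest()[:8])   # Hash1, leftmost 64 bits
    iid[0] = (iid[0] & 0b00011100) | (sec << 5)  # sec in top 3 bits, u/g zeroed
    return subnet_prefix + bytes(iid)

# Hypothetical inputs, for illustration only
addr = bca_generate(b"\x00" * 16, b"\x00" * 80, bytes(8), b"dummy-tx", sec=2)
print(addr.hex())
```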
§.§ BCA Analysis
In the following, we analyze the resistance of BCA against spoofing attacks and compare BCA to CGA.
Given a network, assume hosts generate IPv6 addresses using BCA with security parameter 0 ≤sec≤ 7. Assume there is a limit N_max on the number of modifier values committed in the transaction and a limit M_max on the number of transactions contained in a Bitcoin block. Then, the expected number of hash function evaluations required to spoof a specific address is:
T_BCA = 2^59 + 2^16 ×sec+92/(3 × N_max × M_max).
Spoofing an address generated with BCA requires constructing a dummy Bitcoin block that contains a transaction registering the attacker's public key and that fulfills the following two conditions: (1) the leftmost 16 × (sec + 2) bits of the double SHA-256 hash of the block header are zero, and (2) the Hash1 value yields the target address. The attack can be conducted in two ways: by first satisfying condition (1) and then (2), or vice versa. Note that the block does not need to be published to the blockchain for the attack to be successful. The attacker can include any number of transactions M registering her public key, such that M ≤ M_max. Additionally, the attacker can commit to any number of modifier values N ≤ N_max in the transactions.
When starting with condition (1), the attacker is expected to perform 2^16 × (sec+2)+1 hash function evaluations to find a suitable block header (the extra factor of two accounts for the double SHA-256). Because the probability that one of the N modifier values, one of the M transactions, and one of the three collision count values satisfies condition (2) is 2^-59 × 3 × N × M, this process needs to be repeated 2^59 / (3 × N × M) times on average. This suggests that the attack is most efficient when N and M are maximal, that is, the block created by the attacker contains M_max transactions and each transaction commits to N_max modifier values. Condition (2) is satisfied after being tested 2^59 times on average. The total cost for spoofing, when starting with condition (1), thus becomes 2^59 + 2^16 ×sec+92/(3 × N_max × M_max) hash function evaluations.
When starting with condition (2), the attacker iterates the parameters of the block header until condition (2) is satisfied. This is expected to require 2^59 hash function evaluations. Next, the attacker computes the double SHA-256 hash of the block header to verify that condition (1) is satisfied, which happens with a probability 2^-16 × (sec+2). On average, the double SHA-256 hash of 2^16 × (sec+2) different block headers has to be computed before condition (1) is satisfied. Therefore, the total cost for spoofing, when starting with condition (2), becomes 2^16 × (sec+2)+1 + 2^16 × (sec+2)+59 hash function evaluations.
For all sec values with 0 ≤sec≤ 7, the cost of the attack is smaller when starting with condition (1) than with condition (2). Therefore, creating a BCA Parameters data structure that binds the attacker's public key to some target address requires on average 2^59 + 2^16 ×sec+92/(3 × N_max × M_max) hash function evaluations.
The resistance of BCA against spoofing attacks is higher than what can be achieved in practice with CGA. By using the values N_max = 32 and M_max = 2^28 in (<ref>), the cost of spoofing attacks becomes T_BCA = 2^59 + (2^16 ×sec+59 / 3) hash function evaluations. As of February 2023, the difficulty of the PoW in the three main implementations of Bitcoin (BTC, BCH, and BSV) is such that the sec value is 2 (cf. Table <ref>), yielding a security of ∼89 bits. As a comparison, the maximum security that can be attained in CGA on a standard device with a reasonable address generation time is obtained with sec equal to 1 <cit.>, which yields ∼73 bits of security.
Once the public key is registered on the blockchain, generating BCAs is fast and requires a single hash function evaluation. Like CGA, mobile hosts changing subnet can quickly obtain a new address by recomputing Hash1 with the new subnet prefix. Verifying an address generated with BCA implies verifying two Merkle proofs. This is a slightly higher requirement than CGA, which demands two hash function evaluations, but it remains lightweight. Note that the verification does not require any connection to the Bitcoin network.
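The two Merkle-proof checks reduce to a standard root recomputation with the height bounds enforced; a sketch follows. Double SHA-256 is used here as in Bitcoin's transaction tree, and the encoding of proofs as (sibling, is_left) pairs is an assumption of this sketch:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used for Bitcoin's transaction Merkle tree."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(leaf_hash: bytes, proof, root: bytes,
                        max_height: int) -> bool:
    """Recompute the root from a leaf hash and its (sibling, is_left) path."""
    if len(proof) > max_height:    # enforce the log2(M_max) / log2(N_max) bounds
        return False
    node = leaf_hash
    for sibling, is_left in proof:
        node = sha256d(sibling + node) if is_left else sha256d(node + sibling)
    return node == root

# BCA verification runs this twice: once for the transaction in the block
# (max_height = 28, since M_max = 2^28) and once for the modifier in the
# transaction (max_height = 5, since N_max = 32).
```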
BCA offers better privacy than CGA. In CGA, changing an address implies redoing all the expensive work required during CGA generation, which disincentivizes hosts from changing their address frequently. In BCA, changing an address can efficiently be done by selecting a different modifier that was committed in the transaction and recomputing Hash1 with the new value. By committing to N_max = 32 modifier values in the transaction, hosts can change their address every day for a month in the same subnet without having to broadcast a new transaction to the Bitcoin network. When hosts change subnet, they may reuse the old modifier values to generate new addresses. This further minimizes the interaction needed with the Bitcoin network but may make it easier for an observer to link addresses with each other.
In comparison to CGA, BCA incurs extra storage overheads, but they do not exceed a few kilobytes. First, hosts have to store a maximum of N_max modifier values in memory and the associated Merkle tree. Assuming that 32 128-bit modifier values are generated and that the nodes of the Merkle tree are of size 256 bits, around 2.5 extra kilobytes of data have to be stored. Moreover, hosts have to store the Merkle proof of the transaction. Assuming that Bitcoin blocks do not contain more than 2^28 transactions, the size of the Merkle proof of the transaction to be stored does not exceed 1 kilobyte.
§.§ Deployment Considerations
It is possible to combine BCA and CGA with one another to improve the usability of both techniques. The prerequisite for generating BCAs is to register a public key on the blockchain. Because a transaction is included in the blockchain after 10 minutes on average, hosts must perform the public key registration process in advance, rather than when a new address is needed. For hosts who have not yet registered their public key and would like to quickly obtain a new address, it might be desirable to use CGA with a low sec value. By using a low sec value, CGA generation is fast, but the resistance of the address against spoofing attacks is low. This approach is acceptable if the address is a temporary one. Whenever a connection with the Bitcoin network is established, hosts should register their public key on the blockchain and generate addresses using BCA that are more resistant to spoofing attacks.
BCA requires the Bitcoin network to be able to process a large number of transactions. The transaction fees should also be as low as possible to minimize the costs for hosts to register their public key on the blockchain. In 2023, it is estimated that the number of networked devices is roughly 29.3 billion <cit.>. Assuming all devices generate IPv6 addresses with BCA and register their public key on the blockchain once a month, an average of 10,000 transactions per second would be produced on the network. According to Table <ref>, Bitcoin SV provides the highest transaction throughput and lowest transaction fees, making it the most suitable Bitcoin implementation to integrate into BCA.
§ IMPLEMENTATION AND EVALUATION
In this section, we describe our BCA implementation, evaluate its security and cost, and compare its performance to that of CGA.
For the purpose of our implementation, we employ Elliptic Curve Cryptography with the secp256k1 curve to generate public keys.
The hash algorithm used is SHA-256. The parameters for the maximum number of modifier values N_max and the maximum number of transactions in a block M_max are taken to be N_max = 32 and M_max = 2^28.
The public key registration process is performed on the Bitcoin SV blockchain. We start by randomly generating a public key and 32 modifier values. The hash of the public key and the Merkle root of the modifier values are included in a Bitcoin transaction <cit.>. The size of the resulting transaction is 341 bytes. At the time of writing, this requires paying only 0.00004 cents of USD in transaction fees, which is negligible for any user. The difficulty value of the block in which the transaction is included is 68.271 × 10^9. The resulting sec value is 2, which according to (<ref>) yields ∼89 bits of security.
We implemented the address generation algorithms of BCA and CGA in Python. The hashing part is implemented in C++ using the OpenSSL library <cit.>. We set the sec value in CGA to 2 to evaluate both techniques at the same level of security. Table <ref> shows the results of our evaluation. The tests were performed on an Intel Core i7-1165G7 CPU running at 2.80 GHz on a single thread. As the results show, CGA generation took on average 696.15 s of CPU time to complete, with 32 tests performed. The best case required around 167 s, while the worst case took more than 20 minutes. This shows the high computational cost and variance of CGA generation, which is undesirable for users. On the other hand, BCA generation took only 0.00096 ms on average.
§ CONCLUSION
In this paper, we presented Bitcoin-Certified Addresses (BCA), a new technique to bind IPv6 addresses to public keys to prove address ownership. BCA solves the inefficiency issues of CGA by delegating the expensive work needed to secure the binding to the highly resilient network of Bitcoin nodes. As opposed to CGA, BCA does not trade security for performance; it offers both. Once the public key of a host is registered on the blockchain, it is easy to generate and change addresses, which improves the privacy of hosts compared to CGA. To demonstrate the usability and efficiency of BCA, we offered an implementation relying on the Bitcoin SV network.
Because of its low computational cost, good security guarantees, and lack of reliance on a trusted authority or security infrastructure, we believe that BCA is a promising technique for protecting IPv6 networks against address spoofing, especially in environments where devices have limited computational power, such as Internet of Things (IoT) networks. In the future, we should ensure that devices can seamlessly interact with the Bitcoin network to register their public key and use BCA to generate IPv6 addresses. This implies developing the tools and infrastructure that make funding and broadcasting transactions as easy as possible. Finally, this requires continuing to improve the capacity of the Bitcoin network so that it can scale with the increasing number of transactions.
§ ACKNOWLEDGEMENTS
The author thanks Michaella Pettit, Wei Zhang, Luigi Lunardon, Alessio Pagani, and Enrique Larraia for their useful comments on the paper. The author also thanks John Murphy for implementing the BCA and CGA techniques.
SimNDP W. A. Simpson, T. Narten, E. Nordmark, and H. Soliman, “Neighbor Discovery for IP version 6 (IPv6),” IETF, RFC 4861, September 2007. [Online]. Available: https://www.rfc-editor.org/rfc/rfc4861.
NikNDPAttacks P. Nikander, J. Kempf, and E. Nordmark, “IPv6 Neighbor Discovery (ND) Trust Models and Threats,” IETF, RFC 3756, May 2004. [Online]. Available: https://www.rfc-editor.org/rfc/rfc3756.
KemSEND J. Kempf, J. Arkko, B. Zill, and P. Nikander, “SEcure Neighbor Discovery (SEND),” IETF, RFC 3971, March 2005. [Online]. Available: https://www.rfc-editor.org/rfc/rfc3971.
AurCGA T. Aura, “Cryptographically Generated Addresses (CGA),” IETF, RFC 3972, March 2005. [Online]. Available: https://www.rfc-editor.org/rfc/rfc3972.
NoeShim6 E. Nordmark and M. Bagnulo, “Shim6: Level 3 Multihoming Shim Protocol for IPv6,” IETF, RFC 5533, June 2009. [Online]. Available: https://www.rfc-editor.org/rfc/rfc5533.
ArkMoIPv6 J. Arkko, C. Vogt, and W. Haddad, “Enhanced Route Optimization for Mobile IPv6,” IETF, RFC 4866, May 2007. [Online]. Available: https://www.rfc-editor.org/rfc/rfc4866.
Aur2CGA T. Aura, “Cryptographically Generated Addresses (CGA),” in Lecture Notes in Computer Science, Berlin, Heidelberg: Springer Berlin Heidelberg, 2003, pp. 29–43.
JakPoW M. Jakobsson and A. Juels, “Proofs of work and bread pudding protocols (extended abstract),” in Secure Information Networks, Boston, MA: Springer US, 1999, pp. 258–272.
AlSTBCGA A. Alsa'deh, H. Rafiee, and C. Meinel, “Stopping time condition for practical IPv6 Cryptographically Generated Addresses,” in The International Conference on Information Network 2012, 2012.
BosCGA++ J. W. Bos, O. Özen, and J.-P. Hubaux, “Analysis and optimization of cryptographically generated addresses,” in Lecture Notes in Computer Science, Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 17–32.
AlsSEND A. AlSa'deh and C. Meinel, “Secure neighbor discovery: Review, challenges, perspectives, and recommendations,” IEEE Secur. Priv., vol. 10, no. 4, pp. 26–34, 2012.
QadCGAMobile S. Qadir and M. U. Siddiqi, “Cryptographically Generated Addresses (CGAs): A survey and an analysis of performance for use in mobile environment,” International Journal of Computer Science and Network Security (IJCSNS), vol. 11, no. 2, pp. 24–31, 2011.
GonSLAAC F. Gont, S. Krishnan, T. Narten, and R. Draves, “Temporary Address Extensions for Stateless Address Autoconfiguration in IPv6,” IETF, RFC 8981, February 2021. [Online]. Available: https://www.rfc-editor.org/rfc/rfc8981.
GuaQuickCGA S. Guangxue, W. Wendong, G. Xiangyang, Q. Xirong, J. Sheng, and G. Xuesong, “A quick CGA generation method,” in 2010 2nd International Conference on Future Computer and Communication, 2010.
NakBit S. Nakamoto, “Bitcoin: a peer-to-peer electronic cash system,” 2009.
BitHashrate “Bitcoin, Bitcoin Cash, Bitcoin SV, BSV hashrate chart,” BitInfoCharts. [Online]. Available: https://www.bitinfocharts.com/comparison/hashrate-btc-bch-bsv-sma30.html. [Accessed: 23-May-2023].
SalPoS F. Saleh, “Blockchain without waste: Proof-of-Stake,” Rev. Financ. Stud., vol. 34, no. 3, pp. 1156–1190, 2021.
BecPoW J. Becker, D. Breuker, T. Heide, J. Holler, H. P. Rauer, and R. Böhme, “Can we afford integrity by proof-of-work? Scenarios inspired by the bitcoin currency,” in The Economics of Information Security and Privacy, Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 135–156.
MilPermacoin A. Miller, A. Juels, E. Shi, B. Parno, and J. Katz, “Permacoin: Repurposing Bitcoin Work for Data Preservation,” in 2014 IEEE Symposium on Security and Privacy, Berkeley, CA, USA, 2014, pp. 475–490.
ChaHybridMining K. Chatterjee, A. K. Goharshady, and A. Pourdamghani, “Hybrid mining: Exploiting blockchain's computational power for distributed problem solving,” in Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, 2019.
KinPrimecoin S. King, “Primecoin: Cryptocurrency with prime number proof-of-work,” Primecoin.io. [Online]. Available: https://www.primecoin.io/primecoin-paper.pdf. [Accessed: 23-May-2023].
JudMergedMining A. Judmayer, A. Zamyatin, N. Stifter, A. G. Voyiatzis, and E. Weippl, “Merged mining: Curse or cure?,” in Lecture Notes in Computer Science, Cham: Springer International Publishing, 2017, pp. 316–333.
TasBabylon E. N. Tas, D. Tse, F. Yu, and S. Kannan, “Babylon: Reusing Bitcoin mining to enhance Proof-of-Stake security,” arXiv [cs.CR], 2022.
Namecoin “Namecoin,” Namecoin.org. [Online]. Available: https://www.namecoin.org/. [Accessed: 23-May-2023].
HinIPv6 R. Hinden and S. Deering, “IP Version 6 Addressing Architecture,” IETF, RFC 4291, February 2006. [Online]. Available: https://www.rfc-editor.org/rfc/rfc4291.
ThoSLAAC S. Thomson, T. Narten, and T. Jinmei, “IPv6 Stateless Address Autoconfiguration,” IETF, RFC 4862, September 2007. [Online]. Available: https://www.rfc-editor.org/rfc/rfc4862.
WangSHA1 X. Wang, Y. L. Yin, and H. Yu, “Finding Collisions in the Full SHA-1,” in Advances in Cryptology – CRYPTO 2005, Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 17–36.
BagMultiHash M. Bagnulo and J. Arkko, “Support for Multiple Hash Algorithms in Cryptographically Generated Addresses (CGAs),” IETF, RFC 4982, July 2007. [Online]. Available: https://www.rfc-editor.org/rfc/rfc4982.
AlSCSCGA A. AlSa'deh, F. Cheng, and C. Meinel, “CS-CGA: Compact and more Secure CGA,” in 2011 17th IEEE International Conference on Networks, 2011.
CheECC T. Cheneau, A. Boudguiga, and M. Laurent, “Significantly improved performances of the cryptographically generated addresses thanks to ECC and GPGPU,” Comput. Secur., vol. 29, no. 4, pp. 419–431, 2010.
ShaCGAAnalysis J. L. Shah and J. Parvez, “IPv6 cryptographically generated address: Analysis and optimization,” in Proceedings of the International Conference on Advances in Information Communication Technology & Computing - AICTC '16, 2016.
Merkle R. C. Merkle, “A digital signature based on a conventional encryption function,” in Advances in Cryptology — CRYPTO '87, Berlin, Heidelberg: Springer Berlin Heidelberg, 1988, pp. 369–378.
BTC “Bitcoin core,” Bitcoin.org. [Online]. Available: https://www.bitcoin.org/en/bitcoin-core/. [Accessed: 23-May-2023].
BCH “Peer-to-peer electronic Cash,” Bitcoincash.org. [Online]. Available: https://www.bitcoincash.org/. [Accessed: 23-May-2023].
BSV “The Original Bitcoin Blockchain: Bitcoin SV (BSV),” BitcoinSV. [Online]. Available: https://bitcoinsv.io. [Accessed: 23-May-2023].
BitDiff “Bitcoin, Bitcoin Cash, Bitcoin SV, BSV difficulty chart,” BitInfoCharts. [Online]. Available: https://www.bitinfocharts.com/comparison/difficulty-btc-bch-bsv-sma30.html. [Accessed: 23-May-2023].
TxFee “Bitcoin, bitcoin cash avg. Transaction fee chart,” BitInfoCharts. [Online]. Available: https://www.bitinfocharts.com/comparison/transactionfees-btc-bch.html. [Accessed: 23-May-2023].
TxFeeBSV “Stats – whatsonchain.com – BSV explorer,” Whatsonchain.com. [Online]. Available: https://www.whatsonchain.com/block-stat/avg_fee?days=365. [Accessed: 23-May-2023].
TxRateBTC K. Croman et al., “On scaling decentralized blockchains: (A position paper),” in Financial Cryptography and Data Security, Berlin, Heidelberg: Springer Berlin Heidelberg, 2016, pp. 106–125.
TxRateBCH Evan, “How many transactions per second can Bitcoin Cash handle?,” Coinanalysis, 22-Mar-2019. [Online]. Available: https://www.coinanalysis.io/how-many-transactions-per-second-bitcoin-cash. [Accessed: 23-May-2023].
TxRateBSV “Bitcoin scaling,” Bitcoinscaling.io. [Online]. Available: https://www.bitcoinscaling.io/tps-watch. [Accessed: 23-May-2023].
Cisco Executive summary, “Cisco Annual Internet Report (2018–2023),” Cisco.com. [Online]. Available: https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.pdf. [Accessed: 23-May-2023].
WhatsOnChain “WhatsOnChain,” Whatsonchain.com. [Online]. Available: https://whatsonchain.com/tx/9ec9fb680fc6ef412bd5d88c6724817f8dee3710ec386f0d08e29389c219e9f7. [Accessed: 07-Sep-2023].
OpenSSL “OpenSSL,” Openssl.org. [Online]. Available: https://www.openssl.org/. [Accessed: 07-Sep-2023].
| http://arxiv.org/abs/2311.15842v1 | {
"authors": [
"Mathieu Ducroux"
],
"categories": [
"cs.CR",
"cs.NI"
],
"primary_category": "cs.CR",
"published": "20231127140831",
"title": "IPv6 Bitcoin-Certified Addresses"
} |
Non-intrusive, transferable model for coupled turbulent channel-porous media flow based upon neural networks
Sandeep Pandey
January 14, 2024
============================================================================================================
Vision-language models (VLMs) have recently shown promising results in traditional downstream tasks. Evaluation studies have emerged to assess their abilities, with the majority focusing on the third-person perspective, and only a few addressing specific tasks from the first-person perspective. However, the capability of VLMs to “think” from a first-person perspective, a crucial attribute for advancing autonomous agents and robotics, remains largely unexplored. To bridge this research gap, we introduce EgoThink, a novel visual question-answering benchmark that encompasses six core capabilities with twelve detailed dimensions. The benchmark is constructed using selected clips from egocentric videos, with manually annotated question-answer pairs containing first-person information. To comprehensively assess VLMs, we evaluate eighteen popular VLMs on EgoThink. Moreover, given the open-ended format of the answers, we use GPT-4 as the automatic judge to compute single-answer grading. Experimental results indicate that although GPT-4V leads in numerous dimensions, all evaluated VLMs still possess considerable potential for improvement in first-person perspective tasks. Meanwhile, enlarging the number of trainable parameters has the most significant impact on model performance on EgoThink. In conclusion, EgoThink serves as a valuable addition to existing evaluation benchmarks for VLMs, providing an indispensable resource for future research in the realm of embodied artificial intelligence and robotics.
§ INTRODUCTION
Benefiting from the rapid development of large language models (LLMs) <cit.>, vision-language models (VLMs) <cit.> have shown remarkable progress in both conventional vision-language downstream tasks <cit.> and following diverse human instructions <cit.>. Their application has expanded into broader domains such as robotics <cit.> and embodied artificial intelligence (EAI) <cit.>. As a result, the thorough evaluation of VLMs has become increasingly important and challenging.
Observing and understanding the world from a first-person perspective is a natural approach for both humans and artificial intelligence agents. We propose that the ability to “think” from a first-person perspective, especially when interpreting egocentric images, is crucial for VLMs. However, as shown in Table <ref>, the ability to think from a first-person perspective is not adequately addressed by current evaluation benchmarks for VLMs. On one hand, most of these benchmarks (six out of nine, as listed in Table <ref>) focus solely on the third-person perspective. On the other hand, those benchmarks that do consider the first-person perspective only encompass a limited range of capabilities. For instance, EgoTaskQA <cit.> examines spatial, temporal, and causal aspects, whereas EgoVQA <cit.> is limited to object, action, and person aspects. Therefore, there is a clear need to develop a comprehensive benchmark to evaluate the first-person capabilities of VLMs more effectively.
In this work, we introduce a new benchmark for VLMs from a first-person perspective, named EgoThink. The initial step in developing this benchmark involves determining the necessary capabilities to assess.
Humans, when interacting with the real world, consider a series of questions centered on themselves, ranging from "What is around me?", "What am I doing?", "Where am I?", "What about the situation around me?", and "What will happen to me?" to "How will I do?". Drawing inspiration from this, we evaluate six core capabilities of VLMs, namely object, activity, localization, reasoning, forecasting, and planning. Each capability corresponds to one of the aforementioned questions, as illustrated in Figure <ref>. The next step is constructing the benchmark. We first categorize the six core capabilities into twelve detailed dimensions. We then select a minimum of 50 distinct and clear clips from egocentric videos for each dimension and manually annotate them with relevant first-person question-answer pairs. This approach ensures the quality and variety of the benchmark. The final step is evaluating VLM performance on this benchmark. Building on recent studies <cit.>, we use GPT-4 <cit.> as an automatic evaluator. The Pearson correlation coefficient, when compared with human evaluation, shows a value of 0.68, indicating that the evaluation results are dependable. Based on our proposed EgoThink benchmark, we conduct comprehensive experiments to evaluate the first-person capabilities of eighteen popular VLMs with varying model and data compositions. The findings indicate that GPT-4V stands out as the most effective model in various aspects. However, it shows less impressive results in specific capabilities such as activity and counting. Additionally, we observed that no single VLM consistently surpasses others in every aspect. For instance, GPT-4V is less effective than BLIP-2-11B for localization. Increasing the language model portion of the VLMs generally leads to better performance, but this improvement is not uniform across all models. Finally, our results highlight a significant potential for further enhancing the first-person capabilities of VLMs.

§ RELATED WORK

Vision-Language Models. Inspired by the impressive success of LLMs <cit.>, recent popular VLMs tend to use powerful LLMs as their core backbone. VLMs are usually first pre-trained on large-scale image-text pair datasets <cit.> or arbitrarily interleaved visual and textual data <cit.>. Furthermore, thanks to the availability of large image-text instruction datasets <cit.>, recent studies <cit.> further apply instruction tuning to help VLMs generate satisfactory answers. Benefiting from this two-stage training process, recent VLMs can achieve stunning performance on downstream vision-language tasks <cit.>. Evaluations of VLMs. To evaluate the abilities of VLMs, there are diverse types of vision-language downstream tasks. Conventional benchmarks, such as image captioning tasks <cit.> and visual question reasoning tasks <cit.>, mainly probe specific abilities of VLMs from the third-person perspective. Meanwhile, specialized analytical studies comprehensively evaluate the performance of VLMs from the third-person perspective, where Vlue <cit.> consists of five fundamental tasks and Lvlm-ehub <cit.> evaluates six categories of capabilities on 47 standard vision-language benchmarks. As for the first-person perspective, there are some egocentric evaluation benchmarks in the computer vision field that assess specific visual capabilities <cit.>.
In terms of multi-modality, there are a few benchmarks, such as EgoVQA <cit.> and EgoTaskQA <cit.>, which mainly target specific tasks without assessing an overall first-person understanding. In this paper, we mainly focus on exploring the comprehensive capabilities of VLMs to think from a first-person perspective, as a supplement to previous evaluation benchmarks.

§ EGOTHINK BENCHMARK

In this section, we first elaborate on the core capabilities of thinking from a first-person perspective. Then, we introduce the process used to manually construct our proposed benchmark EgoThink, which asks VLMs to generate open-ended answers given first-person images and questions.

§.§ Core Capabilities

As shown in Figure <ref>, we specifically design six categories with twelve fine-grained dimensions from the first-person perspective for quantitative evaluation.
* Object: What is around me? Recognizing objects in the real world is a preliminary ability of the human visual system <cit.>. Images from a first-person or egocentric perspective <cit.> pay more attention to the objects surrounding the subject or in the hands. Moreover, we further divide the object category into three fine-grained dimensions: (1) Existence, predicting whether there is an object as described in the images; (2) Attribute <cit.>, detecting properties or characteristics (e.g., color) of an object; (3) Affordance <cit.>, predicting potential actions that a human can apply to an object.
* Activity: What am I doing? Activity recognition aims to automatically recognize specific human activities in video frames or still images <cit.>. From the egocentric perspective, we mainly focus on actions or activities based on object-hand interaction <cit.>.
* Localization: Where am I? In reality, localization is a critical capability for navigation and scene understanding in the real world <cit.>. Here we investigate the localization capability from two aspects, Location and Spatial Relationship. Location indicates detecting the scene surrounding the subject <cit.>. Spatial reasoning comprises allocentric and egocentric perspectives <cit.>. We focus on the egocentric perspective, i.e., the position of the object with respect to the subject.
* Reasoning: What about the situation around me? Reasoning is involved in nearly every complex decision-making process in our lives. Here we mainly focus on Counting, Comparison, and Situated Reasoning. Due to the first-person perspective, we generally count or compare objects in our hands or surrounding ourselves. As for situated reasoning, we employ cases that cannot be answered directly from the information in the images and require further reasoning processes.
* Forecasting: What will happen to me? Forecasting <cit.> is a critical skill in the real world. From an egocentric view, forecasting typically involves predicting future object-state transformations or hand-object interactions.
* Planning: How will I do? In reality, planning <cit.> is an important capability for dealing with complex problems, typically applied in Navigation <cit.> and Assistance <cit.>. Navigation involves reaching a goal location from a start position, while assistance involves offering instructions to solve daily problems.

§.§ Data Collection

In this section, we introduce the detailed process used to construct our EgoThink benchmark. Collecting first-person visual data. First, we leverage Ego4D <cit.>, a popular and large egocentric video dataset designed to advance the field of first-person perception in computer vision.
To obtain a diverse representation of different scenarios, Ego4D encompasses 3,670 hours of video from 931 unique camera wearers spanning 74 global locations across 9 countries. To collect first-person visual data, we begin by extracting every frame from a subset of the Ego4D video dataset, yielding a diverse raw image dataset. Considering the heavy human labor and the diversity of scenarios, we sample images every few dozen frames. To ensure high quality, we apply strict criteria for selecting the extracted frames. We first exclude images that lack clarity or fail to exhibit egocentric characteristics. Then, to maintain high diversity within the dataset, we conduct a further screening to ensure that at most two images per video are included in the filtered image set. Finally, we obtain a large set of high-quality images that exhibit egocentric characteristics as first-person image candidates. Annotating question-answer pairs. Upon receiving a substantial collection of first-person image candidates, we engage six annotators to manually label question-answer pairs. Given that the EgoThink benchmark is composed of twelve dimensions, each annotator is responsible for two specific dimensions. The annotators can access all the image candidates and are asked to select appropriate images and annotate the corresponding question-answer pairs for the relevant categories. Once an image is selected, it is removed from the candidate pool to avoid repetition. Moreover, to ensure the correctness of our annotations, we have three additional annotators review the question-answer pairs after the first annotation process. An annotation is retained only if all three reviewers agree that the first-person visual data and the assigned question-answer pairs meet the definition of the specific dimension. Statistics. The EgoThink benchmark comprises a collection of 700 images across six categories with twelve fine-grained dimensions. These images are extracted from 595 videos, ensuring a broad representation of scenarios. To guarantee diversity, a wide range of scenes and concepts has been deliberately selected. As depicted in Figure <ref>, the dataset encompasses a diverse range of scenes, covering key scenarios relevant to EAI. Furthermore, we have meticulously crafted questions and answers for each image in the EgoThink benchmark, aiming to closely replicate real-life conversations. This involves employing different question types and varying questions in length and complexity, paired with well-reasoned and accurate answers. Detailed statistics of the EgoThink benchmark are presented in Appendix <ref>.

§ EXPERIMENTS

§.§ Experimental Setups

Vision-Language Models. We collect eighteen of the most popular representative VLMs to assess, as shown in Table <ref>. Due to the possible effects of model parameters, we divide models into ∼7B and ∼13B for a fair comparison. Detailed information about the VLMs can be found in Appendix <ref>. We use zero-shot setups for all VLMs across our EgoThink benchmark. The prompts used for each VLM are shown in Appendix <ref>. Single-answer grading. Considering that evaluating open-ended model generations is not a trivial problem <cit.>, we propose to use GPT-4 <cit.> as the automatic evaluator <cit.> to better measure generated answers. In this protocol, we want to measure how close one model output is to the reference. Different from traditional similarity-based methods, GPT-4 pays more attention to semantics. In the detailed implementation, we format the question, the model output, and the reference in a prompt as shown in Appendix <ref> and feed it into the GPT-4 evaluator. The GPT-4 evaluator is asked to assign a score of 0 (wrong), 0.5 (partially correct), or 1 (correct) to the model output.
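To make the grading protocol concrete, the following is a minimal sketch of a single grading call. The prompt wording is an abbreviated stand-in (the exact template is given in the appendix), and the model name and score parsing are illustrative assumptions rather than the precise implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical abbreviated template; the full template is in the appendix.
JUDGE_TEMPLATE = (
    "You are grading an answer to a first-person visual question.\n"
    "Question: {question}\nReference answer: {reference}\nModel answer: {prediction}\n"
    "Reply with a single number: 1 (correct), 0.5 (partially correct), or 0 (wrong)."
)

def grade(question: str, reference: str, prediction: str) -> float:
    """Single-answer grading with GPT-4 as the judge; returns 0, 0.5, or 1."""
    prompt = JUDGE_TEMPLATE.format(
        question=question, reference=reference, prediction=prediction
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic grading
    )
    # Naive parse; a production version would tolerate extra text in the reply.
    return float(resp.choices[0].message.content.strip())
```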
Additionally, we further discuss the use of GPT-3.5-Turbo, Claude-2, and humans as evaluators in Section <ref>.

§.§ Results

We first present the overall results of the evaluated models on our EgoThink benchmark. As shown in Figure <ref>, despite having improved over the years, VLMs still find it difficult to think from a first-person perspective, even GPT-4V. Among the six categories, only the scores on planning and localization are relatively high; the performance on the other capabilities reaches only around 60 points at best. Among the better models, GPT-4V generally performs much better than the others, falling short only in the localization dimension compared to BLIP-2-11B. We introduce the detailed scores across different dimensions below, as presented in Table <ref>. More case studies can be found in Appendix <ref>. Results on object. In detail, we evaluate three dimensions: existence, attribute, and affordance. For existence, InstructBLIP-11B and LLaVA-1.5-13B achieve the top-2 performance, indicating that they can predict the object precisely from the first-person perspective. As for GPT-4V, as illustrated in Figure <ref>, we observe that its performance in in-hand object detection leaves room for improvement. For both attribute and affordance, GPT-4V has demonstrated superior performance, especially in the attribute dimension. In both dimensions, some open-source models, as shown at the top of Figure <ref>, attend to the wrong place or only answer the type of the object rather than its attribute or affordance. Results on activity. GPT-4V outperforms all open-source VLMs in the activity dimension. Among the ∼7B models, mPLUG-owl-7B significantly outperforms other VLMs and even achieves superior or comparable performance to ∼13B models. Overall, ∼13B models tend to perform better than ∼7B models in the activity dimension, but their scores are just below the passing line. The most likely reason is that detecting the specific action is difficult for VLMs, as shown at the bottom of Figure <ref>. Results on localization. In general, BLIP-2-11B shows clear advantages among all VLMs, even surpassing GPT-4V in both the location and spatial relationship dimensions. In the location dimension, BLIP-2-11B, GPT-4V, and InstructBLIP-11B demonstrate superior ability, achieving around 90 points. However, perceiving the spatial relationship of an object relative to oneself is much more difficult. This phenomenon is also reflected at the top of Figure <ref>, where VLMs struggle to distinguish the left hand from the right. Results on reasoning. Counting is the most difficult ability <cit.> among all evaluated dimensions. The best-performing model, GPT-4V, scores only 42.0, far from satisfactory. Under the first-person perspective setup, VLMs need to not only count but also understand positions relative to oneself, as shown in the top case of Figure <ref>. Meanwhile, the comparison dimension also reflects high difficulty, where the best score of 56.0 is obtained by LLaVA-1.5-13B. As for situated reasoning, GPT-4V demonstrates its strong commonsense reasoning ability on complex questions, as shown at the bottom of Figure <ref>. Results on forecasting.
Achieving high performance seems to be challenging, as the best score, achieved by GPT-4V, is only 55.0. InstructBLIP-11B achieves a relatively high score of 53.0, close to that of GPT-4V. We observe that the VLMs mainly suffer from two problems: recognizing objects incorrectly or forecasting too far ahead, as shown at the top of Figure <ref>. Results on planning. In both the navigation and assistance dimensions, the highest scores are achieved by GPT-4V, with 60.0 and 84.0, respectively. LLaVA-13B-Llama2 performs well in both dimensions with the second-best performance, but its scores are still about 10 points lower than those of GPT-4V. The most likely reason is that answers provided by most open-source VLMs lack crucial details or overlook important information given in the images, as illustrated at the bottom of Figure <ref>.

§ ANALYSIS

§.§ Effects of Components

As shown in <Ref>, VLMs consist of multiple key components. In this section, we probe the influence of different components on our EgoThink benchmark. The total parameters of LLMs. Here we compare the performance of ∼7B and ∼13B variants of four VLMs. Note that the increase in the number of parameters mainly falls in the LLMs. Firstly, as shown in the top part of <Ref>, scaling does not lead to significant improvement for PandaGPT and InstructBLIP, while LLaVA (LLaVA-7B and LLaVA-13B-Llama2) and LLaVA-1.5 benefit substantially from scaling. We hypothesize that this is because the LLaVA series models do not freeze their language models during instruction tuning, indicating that enlarging the number of trainable parameters can help improve both performance and generalization. In other words, simply scaling up language models without better alignment may not help. Instruction tuning. We directly compare the performance of BLIP-2-11B and InstructBLIP-11B, because these two models differ only in instruction tuning and additional instruction-aware tokens. As presented in the bottom part of <Ref>, InstructBLIP-11B outperforms BLIP-2-11B after instruction tuning, though by an unexpectedly small margin. This may be because much of the instruction tuning data employed by InstructBLIP is collected from specific downstream tasks, whose data distributions are very different from our first-person perspective data. The information of the image encoder. Considering that there is no ablation version of these VLMs for the image encoder, following Set-of-Mark <cit.>, we probe the effect of visual grounding information (i.e., a set of marks) in our setup. As presented in Figure <ref>, GPT-4V with additional segmentation information can correctly detect the mentioned location and objects, indicating that supplemented image information can be helpful in some situations. More discussion of quantitative experiments can be found in Appendix <ref>.

§.§ Agreements between Human and Evaluators

In this section, we further assess the model performance on the object and planning dimensions using GPT-3.5-Turbo, Claude-2, and human annotators. Due to the heavy human labor involved, we ask three annotators to evaluate the performance of GPT-4V, which is the overall best model. Human annotators consider the following aspects: accuracy, completeness, logical soundness, and grammatical correctness. Our annotation system and detailed guidelines can be found in Appendix <ref>. We further run GPT-3.5-Turbo and Claude-2 with the same evaluation prompt as GPT-4. The Pearson correlation coefficients between the automatic evaluators (i.e., GPT-4, GPT-3.5-Turbo, Claude-2) and humans are 0.68, 0.43, and 0.68, respectively. The Cohen's Kappa coefficient among the three annotators is 0.81.
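For reference, the agreement statistics above can be computed with standard library routines. The scores below are hypothetical placeholders, not values from our study; note also that Cohen's kappa is defined pairwise, so agreement among three annotators can be summarized, for example, by averaging the pairwise values.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-sample 0/0.5/1 grades assigned to the same model outputs.
human = np.array([1.0, 0.5, 0.0, 1.0, 1.0, 0.5])
gpt4  = np.array([1.0, 0.5, 0.5, 1.0, 1.0, 0.0])

r, _ = pearsonr(gpt4, human)  # evaluator-human agreement
print(f"Pearson r = {r:.2f}")

# Pairwise inter-annotator agreement (kappa operates on categorical labels).
ann1 = ["1", "0.5", "0", "1", "1", "0.5"]
ann2 = ["1", "0.5", "0", "1", "0.5", "0.5"]
print("Cohen's kappa =", cohen_kappa_score(ann1, ann2))
```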
This shows that evaluations made by GPT-4 and Claude-2 have a high correlation with humans. We hypothesize that recent well-performing LLMs can produce evaluations highly aligned with humans, given that most answers in our benchmark are relatively short and precise. Detailed scores of all evaluators and their correlations are discussed in Appendix <ref>.

§ CONCLUSION

To pave the way for the development of VLMs in the field of EAI and robotics, we introduce a comprehensive benchmark, EgoThink. Designed to evaluate the capacity of VLMs to "think" from a first-person perspective, EgoThink encompasses six core capabilities across twelve detailed dimensions. We assess eighteen popular VLMs and find that even the top-performing VLMs in most dimensions achieve only around a score of 60. GPT-4V achieves the best overall performance, but cannot consistently surpass open-source VLMs across all dimensions. In the analysis, we further probe the impact of various components on model performance and find that the total number of trainable parameters in LLMs has the most significant effect. Although human agreement with the automatic evaluators is high, evaluating planning remains difficult due to the detailed information in the answers. In future research, we aim to improve the evaluation method and further explore the essential capabilities of VLMs in the EAI and robotics fields.

§ ACKNOWLEDGMENTS

Thanks to Xiaolong Wang, Yangyang Yu, Zixin Sun, and Zhaoyang Li for their contributions to data collection and construction. We appreciate Zeyuan Yang, Szymon Tworkowski, Guan Wang, and Zonghan Yang for their support of API resources; Xinghang Li for his valuable discussion; Siyu Wang for her code base for the annotation system.

§ STATISTICS

To demonstrate the quality and diversity of our proposed EgoThink benchmark, here we present statistics on the following aspects, as shown in Table <ref>.
* Number of instances (#Instance). The total count of instances across various capability dimensions. To guarantee the dependability of the results, each dimension (e.g., existence) should encompass a minimum of 50 items, and each capability (e.g., object) should consist of at least 100 items in total.
* Number of concepts (#Concept). The total count of unique concepts, encompassing objects and activities, primarily featured and referenced in the images and question-answering pairs. For instance, within the forecasting capability, the unique concept within the question-answer pair "What will I do? Open the cabinet" is identified as "open the cabinet".
* Number of scenes (#Scene). The total count of unique scenes depicted in the images, such as a kitchen. The variety of these real-world scenarios contributes to the evaluation of the VLMs' generalization capabilities.
* Number of videos (#Video). The total count of unique videos from which we derive images. Given that scenes and concepts within the same video tend to be similar, we make a concerted effort to select images from a diverse range of videos. This strategy ensures the richness of our dataset and enhances the precision of our evaluation.
* Question length (LenQ). The average question length across various capability dimensions.
* Answer length (LenA).
The average answer length across various capability dimensions.
* Question types (TypeQ). The total count of various types of questions. Questions are classified based on basic interrogative words such as: what, which, where, when, why, and how.

§ MODEL HUB

In this section, we briefly introduce the various types of VLMs as follows:
* GPT-4V(ision) <cit.> is the product of OpenAI that empowers users to command GPT-4 to interpret and analyze image inputs;
* Flamingo <cit.> is the first vision-language model to apply few-shot learning to solve tasks, inserting new cross-attention layers between frozen LLM layers. For implementation, we use the open-source library OpenFlamingo <cit.>;
* BLIP-2 <cit.> proposes a lightweight Querying Transformer to bridge the gap between frozen image encoders and frozen language models;
* InstructBLIP <cit.> introduces an instruction-aware Query Transformer, which receives the instruction as additional input alongside visual features. InstructBLIP is a finetuned model based on BLIP-2;
* MiniGPT-4 <cit.> uses one projection layer to align a frozen visual encoder with a frozen language model;
* LLaVA <cit.> trains both the projection matrix and the pre-trained language model for improved adaptation;
* LLaVA-1.5 <cit.> changes the linear vision-language connector to a two-layer MLP connector and additionally adopts academic task data;
* mPLUG-owl <cit.> designs a visual abstractor module to summarize visual information within learnable tokens;
* Otter-I <cit.> adopts in-context instruction tuning on a dataset containing 2.8 million multi-modal instruction-response pairs, named MIMIC-IT;
* PandaGPT <cit.> is designed to be a general-purpose multi-modal model that can accept images, text, videos, and audio. It connects image and text with a linear projection layer, keeps the LLM trainable with LoRA, and is trained with instruction following.
* LLaMA-Adapter (V2) <cit.> is a fast, lightweight method that proposes an early fusion strategy to efficiently adapt LLaMA into a visual instruction model.

§ MODEL INFERENCE PROMPTS

For most capabilities, our annotated answers are as precise as possible to ensure the assessment is accurate. Therefore, we design specific prompts to ask VLMs to generate short answers with no redundant information. The designed prompts for the various VLMs are listed in Table <ref>. However, because planning tasks are complex, we select a series of special prompts for VLMs in the planning dimension, as listed in Table <ref>.

§ EVALUATION PROMPTS

We use similar prompts <cit.> to evaluate model predictions for GPT-4, GPT-3.5-turbo, and Claude-2. The designed prompts are shown in Table <ref>.

§ ADDITIONAL CASES

Cases on object. In the existence dimension, GPT-4V and open-source VLMs alike still have a hard time dealing with unusual cases, as shown in Figure <ref>. In the top case, GPT-4V cannot detect the exact location of the mentioned object. In the other two cases, VLMs cannot even identify the specific objects. As shown in <Ref>, in the first case, the VLMs also attend to the wrong place, inferring the glove rather than "the cap of the bottle". In the second and third cases, VLMs only answer the name of the objects rather than the specific attribute or affordance. Cases on activity. In the activity dimension, GPT-4V and other models also have the problem of not being able to correctly detect objects, as shown in <Ref>, which leads to the models answering activities that are almost completely unrelated to the reference answer. Cases on localization.
In the location dimension, as in the first case in <Ref>, GPT-4V and other VLMs cannot correctly detect the scene due to unexpected items. Even after changing the question format, GPT-4V still misunderstands the environment because of the unexpected items. For the spatial relationship dimension, the second case in <Ref> shows that GPT-4V is not able to recognize the egocentric view and cannot distinguish between left and right, while other VLMs can. Cases on reasoning. In the counting dimension, as shown in <ref>, we find that some VLMs cannot resolve the specific location reference, such as "I holding". Moreover, when the number is large, they cannot state the exact amount. Cases on forecasting. The first case in <Ref> demonstrates that the models cannot identify the objects accurately. In the second case, the models are not able to recognize the egocentric view. Cases on planning. In the navigation and assistance dimensions, models can neglect important information in the images, and the answer might be too brief without details or too lengthy without getting to the point, as shown in the two cases in <Ref>.

§ THE INFORMATION OF IMAGE ENCODER

In the main paper, we showed that additional image information can assist the model in detecting objects in certain images, as shown in Figure <ref>. However, to quantitatively analyze the function of this module, we conduct experiments on both the existence and attribute dimensions. The results are shown in Table <ref>, where the correct answer rate of samples with SoM has decreased. We consider that the additional image marks and masks obscure information in the original image (such as colors, object borders, etc.), resulting in incorrect model judgments. How to provide additional information without losing the original image information may be a future research direction worth considering.

§ HUMAN ANNOTATION

§.§ Annotation System

To save human labor, we construct an annotation system based on Streamlit. Our annotation system is designed as a multi-user image and text annotation system, which displays images and provides an interactive interface for users to annotate efficiently, as shown in Figure <ref>.

§.§ Annotation Guideline

Here we present the detailed annotation guidelines for annotators: 1) Accuracy. The model output should be factually correct, without violating commonsense or the knowledge provided in the data. 2) Completeness. It is acceptable for the format of the answer given by the model to differ from the reference answer, but the model output should provide the key information of the reference answer or a reasonable answer beyond it. 3) Logic. The answer should be logical and follow a reasonable logical sequence. 4) Language and grammar. The output should use correct spelling, vocabulary, punctuation, and grammar.

§ AGREEMENT

We select the object and planning dimensions to compare the differences between evaluation models. Scores of different models evaluated by GPT-3.5-Turbo, Claude-2, GPT-4, and human annotators are shown in <Ref>, and the Pearson correlation coefficients between them are shown in Table <ref>. As the main evaluated model, GPT-4V is scored by the different evaluators (including humans), and the average scores are shown in Figure <ref>. In general, the consistency among GPT-4, Claude-2, and humans is high. | http://arxiv.org/abs/2311.15596v1 | {
"authors": [
"Sijie Cheng",
"Zhicheng Guo",
"Jingwen Wu",
"Kechen Fang",
"Peng Li",
"Huaping Liu",
"Yang Liu"
],
"categories": [
"cs.CV",
"cs.CL"
],
"primary_category": "cs.CV",
"published": "20231127074425",
"title": "Can Vision-Language Models Think from a First-Person Perspective?"
} |
EucliDreamer: Fast and High-Quality Texturing for 3D Models with Stable Diffusion Depth
=======================================================================================

[Figure 1: A showcase of objects textured by EucliDreamer.]

This paper presents a novel method to generate textures for 3D models given text prompts and 3D meshes. Additional depth information is taken into account to perform the Score Distillation Sampling (SDS) process <cit.> with depth-conditional Stable Diffusion <cit.>. We ran our model over the open-source dataset Objaverse <cit.> and conducted a user study to compare the results with those of various 3D texturing methods. We have shown that our model can generate more satisfactory results and produce various art styles for the same object. In addition, we achieved faster generation times for textures of comparable quality. We also conduct thorough ablation studies of how different factors may affect generation quality, including sampling steps, guidance scale, negative prompts, data augmentation, elevation range, and alternatives to SDS.

§ INTRODUCTION

3D modeling is widely used in the media, gaming, and education industries to create characters and environments, realistic product visualizations and animations, and interactive simulations and learning materials. According to Mordor Intelligence, the market size of global 3D modeling and animation has reached $6 billion <cit.>. Meanwhile, 3D modeling is challenging due to its high cost and long production time. A 3D model can be encoded as a 3D mesh, a 2D texture, and a surface-to-surface mapping. A simple object (e.g., a house or a tree) can take a senior artist several days to make. As part of the modeling process, the 3D texturing problem is, given a meshed shape in 3D representing a meaningful object, to color all points on the object surface as defined by a 2D image and the 2D-to-3D mapping. In reality, the texturing step takes the majority of the cost and time in 3D modeling. Over the last decade, researchers have been looking for methods to assist 3D creation. Before the era of large language models, most methods for 3D generation adopted Generative Adversarial Networks (GAN) <cit.> and Contrastive Language-Image Pre-Training (CLIP) <cit.>. In recent years, with the accessibility of pre-trained large language models, many researchers have adopted diffusion-based methods for better-quality 3D generation <cit.>. Today, based on diffusion models, 3D texturing still faces many problems, including poor quality, low diversity, and poor view consistency. Below are several common problems generated textures may have, illustrated by Figure <ref>.
* Wrong semantics. If the 3D mesh represents a car, the corresponding texture should give the tires a dark rubber color and not cover the window with colorful patterns.
* View inconsistency. Because popular texturing methods derive multi-views from a single-view image, the resulting 3D object may show different content or color themes across views. This happens often when texturing buildings, cars, and other objects that have symmetric structures.
* Excessive light reflections and shadows. Despite the industry having various standards and practices for 3D texturing, it is generally preferable that textures do not contain shadows before rendering <cit.>. Thus, a texture with heavy shadows becomes unusable.
* Bad color tones.
Sometimes a 3D texture does not fall into any of the prior categories, but the appearance is bad in a subjective way. This usually happens when the texture violates color principles: it may combine two colors that do not match, or use a color with too high saturation. 3D modeling data is scarce by nature, and much of it is owned by private entities, which puts pressure on algorithms to use existing datasets efficiently. This paper presents a novel method that adopts Stable Diffusion depth <cit.> to generate 3D textures of better quality. To the best of our knowledge, our method is the first to use Stable Diffusion depth in the SDS process for 3D texturing tasks. Through a series of experiments and a user study, we demonstrate the effectiveness of our method and the improvement in inference speed.

§ RELATED WORK

§.§ Early attempts at 3D generation

Inspired by work on text-guided generation in the 2D space <cit.>, many early attempts at 3D generation adopted GAN <cit.>. Jiajun Wu et al. <cit.> proposed a 3D-GAN to generate 3D objects from a probabilistic space. GAN and RL have also been used to generate physics-aware 3D meshes <cit.>. Similarly, when Contrastive Language-Image Pre-Training (CLIP) <cit.> was proposed for joint embedding in 2021, CLIP-based works on 2D generation <cit.> naturally inspired explorations in 3D <cit.>. Dreamfields <cit.> proposed text-guided 3D generation using Neural Radiance Fields (NeRF) <cit.>, an inverse rendering approach that creates 3D models from a set of 2D images.

§.§ 3D modeling with SDS and diffusion models

In 2022, DreamFusion <cit.> first proposed using a pre-trained 2D text-to-image model for text-guided 3D generation through differentiable rendering. Their core method, Score Distillation Sampling (SDS), samples uniformly in the parameter space from pre-trained diffusion models to obtain gradients that match given text prompts. DreamFusion <cit.> inspired many SDS-based improvements for 3D generation. Magic3D <cit.> further improved the quality and speed of 3D modeling with a two-step approach, which obtains a coarse model using low-resolution diffusion, accelerated by a sparse 3D hash grid structure, and then refines it by interacting with a high-resolution latent diffusion model. Fantasia3d <cit.> further disentangles geometry and appearance generation, supervised by the SDS loss. Our method adopts SDS as well. We will also explore alternatives to SDS in the Ablation Studies <ref>.

§.§ 3D texturing and depth information

While much research focuses on generating a whole 3D model, such generated output can hardly be used in practice due to complicated industry-specific requirements. For example, 3D models in gaming are rendered in real time, which limits the number of polygons and requires clear topological structures. In contrast, AI-powered 3D texturing is a more realistic goal. One notable work on texturing is TEXTure <cit.>, which can generate a new texture and its surface-to-surface mapping given a new 3D mesh. It applies Stable Diffusion depth <cit.>, but only as a simple mesh projection. Text2Tex <cit.> proposes a powerful texturing method involving two stages, generation and refinement. It also incorporates depth information, but does so through ControlNet <cit.>. As a 3D texturing method, our EucliDreamer directly uses Stable Diffusion depth. We will compare it with ControlNet depth in the Ablation Studies <ref>.
§.§ View consistency for 3D modeling

The NeRF <cit.> and SDS processes require multi-view images of a single object, which may not be accessible due to a lack of data. As a result, generating multi-views from a single image becomes a crucial step in high-quality texture generation. Such generations sometimes suffer from poor consistency across different views, for example, a house with four walls all in different color patterns, or a car whose color is asymmetric. Zero-1-to-3 <cit.> proposes a method to synthesize images of an object from new angles through diffusion models. Other research works <cit.> have also attempted to address the view-consistency problem. Different from Text2Tex <cit.>, SDS enforces 3D consistency by iteration. We further improve consistency using a large batch size.

§ METHOD

We present our novel method, in which depth information is taken into account in addition to color information in the training loop with the SDS loss.

§.§ Depth conditioning

Our method, inspired by DreamFusion <cit.>, is composed of a rendering stage and an SDS training stage. Different from previous methods, we add the depth layer of Stable Diffusion alongside the RGB color layer. Our model takes a 3D mesh that represents the model's shape as input and generates a texture for it iteratively. The texture is represented as a hash grid <cit.>. Initially, the mesh is textured randomly. On each iteration, the textured mesh is fed into a differentiable renderer <cit.> that renders an RGB image and depth from multiple different angles. Subsequently, we use score distillation sampling (SDS) to guide the iteration of the texture. In SDS, a depth-conditional diffusion model [https://huggingface.co/stabilityai/stable-diffusion-2-depth] takes the RGB image as input, conditioned on the text prompt and depth. The model keeps iterating over the hash grid and updating it through the same rendering and computation process guided by the SDS loss. The hash grid is converted at the end of all iterations into popular 3D formats. Input UV maps are optional. They are surface-to-surface mappings that guide the coloring of the meshed 3D model. In practice, it is usually in the interest of users to input the UV map to convert the NeRF <cit.> view so that the resulting texture output is easy to edit further. Theoretically, adding depth conditioning will improve the quality and speed of texture generation. We will show our results and explain why in the later sections.
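The following is a minimal sketch of this optimization loop. The renderer, SDS-gradient, and camera-sampling interfaces are placeholder callables rather than any specific library's API, and the default step count, learning rate, and batch size mirror values reported in the experiments and ablations below.

```python
import torch

def texture_sds_loop(hash_grid, render_rgbd, sds_grad, sample_cameras,
                     prompt, n_steps=4300, lr=0.01, batch=8):
    """Sketch of the texture-optimization loop described above.

    hash_grid      torch.nn.Module holding the texture (surface point -> RGB)
    render_rgbd    (hash_grid, cameras) -> (rgb, depth); differentiable renderer
    sds_grad       (rgb, depth, prompt) -> SDS gradient w.r.t. rgb from the
                   depth-conditioned Stable Diffusion model (no U-Net backprop)
    sample_cameras (n) -> n randomized camera poses (elevation/azimuth/distance)
    """
    opt = torch.optim.Adam(hash_grid.parameters(), lr=lr)
    for _ in range(n_steps):
        cameras = sample_cameras(batch)
        rgb, depth = render_rgbd(hash_grid, cameras)
        grad = sds_grad(rgb, depth, prompt)  # score distillation: treated as constant
        opt.zero_grad()
        rgb.backward(gradient=grad)          # inject the SDS gradient into the texture
        opt.step()
    return hash_grid
```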
§.§ Datasets

Our rendering step is based on the open-sourced Stable Diffusion 2 <cit.>, which can generate and modify images based on text prompts. We run our model over objects from Objaverse <cit.>, a large open-sourced dataset of 800K+ textured models with descriptive captions and tags.

§.§ Experiments

We conducted two sets of experiments to show the power of our EucliDreamer. For the first set of experiments, we run our model over several 3D meshes from the open-sourced Objaverse to observe the generation quality. We use the best parameters from our Ablation Studies <ref>. In addition to the text input given to other models, we add a group of the same keywords to all the prompts in our model, which can be viewed as part of our algorithm. For the second set of experiments, we run our model over fixed 3D meshes with custom text prompts and observe the style variations. We select 3D models from Objaverse <cit.> that are typical representatives of a category, such as cars, furniture, and buildings. For each model, we try different combinations of prompts to generate textures in various art styles.

§.§ Evaluation

We compare our results with those of Text2Tex <cit.>, CLIPMesh <cit.>, and Latent-Paint <cit.>. We conducted a user study of 28 participants including CS researchers, artists, and practitioners from the gaming industry. They are asked to rank the results in terms of generation quality.

§ ABLATION STUDIES

For model understanding and parameter selection, we conduct a series of experiments for EucliDreamer with different settings.

§.§ SDS vs. VSD

SDS <cit.> is notoriously known for its lack of diversity and overly smoothed textures. This is because SDS is mean-seeking. Intuitively, a single prompt has multiple plausible generation results, and SDS tends to create an average of them (mean seeking). Variational Score Distillation (VSD) was proposed in ProlificDreamer <cit.> to mitigate this problem by introducing a variational score that encourages diversity. However, in depth-conditioned texturing, there are fewer plausible generation results given a single prompt and depth. This leads to the diversity and details in our approach (mode seeking), even though VSD is not used. Through experiments shown in Figure <ref>, our approach produces better results without VSD. Thus, the VSD of ProlificDreamer <cit.> is not required when Stable Diffusion depth is used.

§.§ Stable Diffusion depth vs. ControlNet depth

The most straightforward way to implement depth-conditioned SDS is to use ControlNet <cit.>. However, empirically we found that the SDS gradient from ControlNet <cit.> is much noisier than that from SD-depth. SD-depth concatenates depth with the latents, so the SDS gradients flow to depth and latents separately, which is less noisy. As shown in Figure <ref>, we demonstrate better generation results with Stable Diffusion depth than with ControlNet depth.

§.§ Elevation range

The elevation range defines camera positions and angles at the view generation step. When it is set between 0-90 degrees, cameras face down and have an overhead view of the object. We find that fixed-angle cameras may have limitations and may miss certain angles, resulting in blurry color chunks on the surface of the object. One way to leverage this parameter is to randomize camera positions so that all angles are covered to a fair level.

§.§ Sampling min and max timesteps

The sampling min and max timesteps refer to the percentage range (min and max values) of the random timesteps used to add noise and denoise during the SDS process. The minimum timestep sets the minimum noise scale and affects the level of detail of the generated texture. The maximum timestep affects how drastically the texture changes on each iteration, i.e., how fast it converges. While their values may affect quality and convergence time, no apparent differences were observed between different value pairs of sampling steps. In general, 0 and 1 should be avoided for these parameters.

§.§ Learning rate

In the context of 3D texture generation, a large learning rate usually leads to faster convergence, while a small learning rate creates fine-grained details. We find that a learning rate of 0.01 with the Adam optimizer works well for 3D texture generation.

§.§ Batch size

We set batch sizes to 1, 2, 4, and 8, respectively. Results show that a larger batch size leads to more visual details and reduces excessive light reflections and shadows. More importantly, a large batch size increases view consistency. This is because gradients from multiple views are averaged together and the texture is updated once, as sketched below. A batch size of 8 is enough from our observation; a batch size of more than 8 does not lead to a significant reduction in the number of iteration steps.
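As a sketch of this averaging mechanism, reusing the placeholder interfaces from the Method section's sketch, the per-view SDS gradients accumulate before a single optimizer step:

```python
# With batch size B, per-view SDS gradients accumulate in .grad across the
# backward calls; dividing by B averages them before one texture update.
opt.zero_grad()
for cam in sample_cameras(B):
    rgb, depth = render_rgbd(hash_grid, [cam])
    rgb.backward(gradient=sds_grad(rgb, depth, prompt) / B)
opt.step()  # one update driven by B independent viewpoints
```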
§.§ With vs. without gradient clipping

Intuitively, gradient clipping can avoid some abnormal updates and mitigate the Janus problem, as shown in Debiased Score Distillation Sampling (D-SDS) <cit.>. In the context of texture generation, we do not observe the Janus problem, due to the fact that the mesh is fixed. In our experiments, there is no noticeable benefit from gradient clipping.

§.§ Guidance scale

The guidance scale specifies how closely the texture should follow the input text prompts. We find that the guidance scale affects diversity and saturation. Even with the depth-conditional diffusion model, a high guidance scale like 100 is still required.

§.§ Negative prompts

Negative prompts in Stable Diffusion can fix some artifacts caused by SDS. Adding negative prompts like "shadow, green shadow, blue shadow, purple shadow, yellow shadow" helps with excessive shadows.

§.§ Data augmentation

Camera distance at the view generation step can also affect resolution and texture quality. We select three camera distance ranges, with results as shown in Figure <ref>. [1.5, 2.0] generates the best result in terms of texture and coloring quality. This is likely because this setup yields the most "normal" viewpoints - not too far, not too close, similar to typical images of an ambulance.

§.§ Image-to-texture using Dreambooth3D

Dreambooth3D <cit.> was initially proposed as an extension of DreamFusion <cit.> to enable image-to-3D shape generation. In this section, we demonstrate that our approach can naturally be combined with Dreambooth3D <cit.> without modification. We follow the same procedure, described below.
* Partially finetuning Stable Diffusion depth with the user-provided image(s);
* 3D texture generation with the finetuned stable-diffusion-depth from step 1;
* Finetuning Stable Diffusion depth again with the outputs from step 2;
* Final generation with the finetuned stable-diffusion-depth from step 3.
As shown in Figure <ref>, combined with Dreambooth3D <cit.>, our approach generates satisfactory textures given a 3D mesh and a single image as inputs.
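The settings favored by these ablations can be summarized as follows. The key names are illustrative rather than a specific framework's configuration schema, and the timestep values are assumed examples consistent with the guideline of avoiding exactly 0 and 1.

```python
# Illustrative summary of the ablation-chosen settings (key names are ours).
BEST_SETTINGS = {
    "optimizer": "Adam",
    "learning_rate": 0.01,
    "batch_size": 8,                      # views averaged per texture update
    "gradient_clipping": None,            # no noticeable benefit observed
    "guidance_scale": 100,                # high CFG still required with SD-depth
    "negative_prompt": "shadow, green shadow, blue shadow, purple shadow, yellow shadow",
    "camera_distance_range": (1.5, 2.0),  # best of the three ranges tried
    "camera_elevation": "randomized",     # avoids blurry chunks from fixed angles
    "sampling_timestep_range": (0.02, 0.98),  # assumed example; avoid 0 and 1
}
```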
§ RESULTS

We adopt the best setup from the Ablation Studies <ref> and conduct the main experiments. The results are exciting: the generated textures are of good quality, achieve faster inference time with better aesthetics, and cover various colors and styles through different text prompts.

§.§ Benchmark objects from Objaverse

The benchmark objects generated in our first set of experiments look realistic, detailed, and of high quality. The subtle bright areas on the front surface of the dumpster and the compass increase the level of detail, a trait seen in objects that apply a special technique named texture baking. The colored patterns of the compass are symmetric with straight outlines, which indicates high generation quality.

§.§ User study on generation quality

We selected 28 participants for the user study who are engineers, AI researchers, artists, and other game practitioners, mostly based in the U.S. About half of them have prior experience using a 3D modeling tool like Unity or Blender. In the study, we present four sets of objects: an ambulance, a dumpster, a compass, and a backpack, via online questionnaires. Each set contains 4 textures generated by A, B, and C, as reported by Text2Tex <cit.>, and by our method. The participants are asked to rank the results in terms of generation quality as they define it, not their color preferences. The results are shown in Figure <ref>. Most participants selected their top 2 choices from either the SOTA Text2Tex <cit.> or ours. For the ambulance and dumpster objects, the votes are unanimously in favor of our generated textures. For the compass, about two-thirds voted for ours. For the backpack, the votes for the second-best-quality texture are diverse. Overall, the study indicates a high acceptance of the textures generated by our model.

§.§ Faster inference time

Figure <ref> shows the SDS loss throughout the training steps. In general, it takes EucliDreamer around 4300 steps for the loss to converge. Note that for every object, the loss suddenly drops after 2500 steps, and the quality improvement is visually confirmable. In contrast, DreamFusion <cit.> can take more than 10,000 steps to converge, which indicates that our method converges faster.

§.§ Better aesthetics

Section <ref> mentioned some common issues with texturing quality: semantics, view consistency, light reflections and shadows, and color tones. Our method can mitigate these issues by providing more realistic textures. With Stable Diffusion depth, our EucliDreamer never generates textures with extra patterns as shown in Figure <ref>, because the flower pattern would have inconsistent depth at the surface.

§.§ Various styles

Our second set of experiments focuses on the diversity of the generated textures. By providing input text prompts for the model content and styles, one can use EucliDreamer to generate 3D textures for a given 3D mesh in certain styles. For example, a prompt of "a building, damaged, dirty, 3D rendering, high-quality, realistic" will give the modeled building a rusty feel and walls in pale white, as shown in Figure <ref>. From an artistic perspective, art styles are subjective feelings resulting from combinations of color tones, brush strokes, lighting and shadows, and more. We will leave readers to judge the quality and creativity of the AI-generated textures.

§ DISCUSSIONS

Through the experiments, we have shown that our method with Stable Diffusion depth conditioning can generate higher-quality textures in a shorter time. This is as expected, for the following reasons. In terms of quality, depth information further restricts the probability density, eliminating unnatural or impossible cases. In some cases, without the depth layer, the model may generate a texture for a truck that has human characters on the surface, or a tree pattern on the leaves of a tree model. In terms of generation time, the loss convergence is much faster, for two reasons. First, depth conditioning serves as a constraint that eliminates many possibilities. Second, only the surface of the object is computed. Both lead to a smaller probability space that takes less time to search.

§.§ Significance

Through the experiments and the user study, we have shown that adding Stable Diffusion depth to the SDS training can greatly improve the quality and speed of 3D texturing. Also, VSD <cit.> as an alternative to SDS is not a must when Stable Diffusion depth is enforced.

§.§ Limitations

Our method, like many other SDS-based ones, only applies to convex objects.

§.§ Future directions

We as a team still have much to do around the current 3D generation pipeline. Due to the 2-dimensional nature of Stable Diffusion, light reflections and shadows are not well handled during the SDS step.
We will address this problem from both data and algorithm perspectives. In 3D modeling, there are still many problems yet to be solved. Besides AI-powered texture generation, we also hope to research generating 3D meshes, scenes, animation, and more. We hope our work can inspire further research in 3D texturing and eventually remove this bottleneck in real-world production.

§ IMPLEMENTATION DETAILS

§.§ Parameter selection

For the main experiment, we selected the parameters that performed the best in <ref>. The Adam optimizer has a learning rate of 0.01. The prompt keywords below are added in addition to the original phrase of the object (e.g., "a compass"). The texture resolution is set to 512*512.

§.§ Infrastructure

ThreeStudio (https://github.com/threestudio-project/threestudio) is a unified framework for 3D modeling and texturing from various input formats, including text prompts, images, and 3D meshes. It provides a Gradio backend with an easy-to-use frontend for input configuration and output demonstration. We forked the framework and added our approach with depth conditioning. A depth mask is added to encode the depth information.

§.§ Hardware

We used an Nvidia RTX 4090 GPU with 24GB memory. This allows us to run experiments with a batch size up to 8 and dimension 512*512. In theory, a GPU with more memory would allow us to generate textures with higher definition and better quality.

§ MORE RESULTS FROM ABLATION STUDIES

Due to the page limit, we show experimental results for elevation range (section <ref>), batch size (section <ref>), guidance scale (section <ref>), negative prompts (section <ref>), and data augmentation (section <ref>) here. The results support our conclusions in the Ablation Studies section <ref>. | http://arxiv.org/abs/2311.15573v1 | {
"authors": [
"Cindy Le",
"Congrui Hetang",
"Ang Cao",
"Yihui He"
],
"categories": [
"cs.CV",
"cs.GR"
],
"primary_category": "cs.CV",
"published": "20231127065553",
"title": "EucliDreamer: Fast and High-Quality Texturing for 3D Models with Stable Diffusion Depth"
} |
1 Department of Physics, University of Washington, Seattle WA 98195, USA
2 Department of Electrical and Computer Engineering, University of Washington, Seattle WA 98195, USA
3 Physical Sciences Division, Pacific Northwest National Laboratory, Richland, Washington 99352, USA
† These authors contributed equally to this work.
*[email protected]

Resonant enhancement of nonlinear photonic processes is critical for the scalability of applications such as long-distance entanglement generation. To implement nonlinear resonant enhancement, multiple resonator modes must be individually tuned onto a precise set of process wavelengths, which requires multiple linearly-independent tuning methods. Using coupled auxiliary resonators to indirectly tune modes in a multi-resonant nonlinear cavity is particularly attractive because it allows the extension of a single physical tuning mechanism, such as thermal tuning, to provide the required independent controls. Here we model and simulate the performance and tradeoffs of a coupled-resonator tuning scheme which uses auxiliary resonators to tune specific modes of a multi-resonant nonlinear process. Our analysis determines that the tuning bandwidth for the steady-state mode field intensity can significantly exceed the inter-cavity coupling rate g if the total quality factor of the auxiliary resonator is higher than that of the multi-mode main resonator. Consequently, over-coupling a nonlinear resonator mode to improve the maximum efficiency of a frequency conversion process will simultaneously expand the auxiliary resonator tuning bandwidth for that mode, indicating a natural compatibility with this tuning scheme. We apply the model to an existing small-diameter triply-resonant ring resonator design and find that a tuning bandwidth of 136 GHz ≈ 1.1 nm can be attained for a mode in the telecom band while limiting excess scattering losses to a quality factor of 10^6. Such a range would span the distribution of inhomogeneously broadened quantum emitter ensembles as well as resonator fabrication variations, indicating the potential for the auxiliary resonators to enable not only low-loss telecom conversion but also the generation of indistinguishable photons in a quantum network.

§ INTRODUCTION

Integrated photonic resonators can greatly enhance the efficiency of nonlinear processes, with applications in frequency conversion <cit.> and photon-pair production <cit.>. In such devices, optimal performance occurs when the resonator has a set of high-quality-factor modes (collectively satisfying phase matching conditions) which are each centered at one of the participant frequencies of the targeted process. However, achieving this multiple-resonance condition for any set of frequencies, let alone for a specific set, requires precision beyond what is attainable by current fabrication processes. Even if such precision were attainable, transient fluctuations in temperature or humidity, or gradual material relaxation, will inevitably disrupt the multi-resonance structure. Consequently, practical implementations require a means of dynamically tuning the resonance frequencies in order to compensate for both fabrication and dynamic variations. Photonic resonators can be tuned by a variety of mechanisms, including static methods such as post-fabrication modification of the resonator <cit.> or cladding <cit.>, or active methods via temperature, electro-optic effect, or mechanical strain <cit.>.
Such methods have been shown to enable double or even triple resonance provided the tuning mechanism has a different effect on each mode <cit.>. However, these techniques employ blanket alterations of the multi-mode structure which necessarily affect all resonances simultaneously <cit.>. Consequently, the particular wavelengths for which the multi-resonance condition can be achieved are still determined by the static device dimensions, including fabrication imperfections. Alternative, mode-targeted wavelength trimming methods <cit.> have also been demonstrated and could be used to achieve multi-resonance at specific frequencies, though these corrections are static and cannot be used to compensate for dynamic variations. An ideal tuning mechanism should combine the features of these paradigms, enabling dynamic tuning of each mode individually. Such control would not only increase tolerance to environmental and fabrication variances, but would also find use in quantum networks to correct the spectral inhomogeneity of optically active defects <cit.>.In this paper, we develop a complete description of an active tuning scheme for multi-resonant nonlinear resonators which uses coupled auxiliary resonators to independently tune the participant modes (Fig. <ref>). Through selective coupling in the frequency domain, a targeted mode can be isolated within the auxiliary resonator and tuned independently. While similar schemes have been explored experimentally <cit.>, we derive, in full generality, an analytical model for the auxiliary-resonator-based tuning mechanism based on a temporal coupled-mode theory description.It is shown that an individual mode in an overcoupled nonlinear resonator can be tuned on a scale exceeding the inter-cavity coupling rate |g| — which may already be significantly larger than the resonance linewidth — without significant degradation of the conversion efficiency. The formalism is directly applied to triply-resonant difference frequency conversion, in which we show there is negligible degradation of the conversion efficiency over several linewidths of detuning. Practical considerations, such as the range of physically attainable inter-cavity coupling rates and quality factors are studied in the case of small-diameter gallium phosphide ring resonators. For a main resonator with telecom-band intrinsic and coupling quality factors of Q_i = 10^6 and Q_c = 10^5 respectively, simulations find inter-cavity coupling rates as high as |g|/2π=136 GHz are attainable without inducing significant excess losses, yielding a tuning bandwidth of approximately 64 linewidths. These calculations indicate that auxiliary resonator tuning may be the most promising tuning technology to realize high-yield multi-resonant devices at targeted operating frequencies.§ ORTHOGONAL TUNING BY SELECTIVE MODE COUPLING Auxiliary resonator tuning uses modifications of an auxiliary resonator structure to indirectly influence specific coupled modes in a main resonator while leaving other main resonator modes unperturbed. In principle, the tuning scheme could be implemented using any resonator tuning mechanism to control the auxiliary resonator, including thermal or electro-optic tuning. Many performance metrics, such as tuning speed and reversibility, are inherited from the chosen physical tuning mechanism. In this section, a model of the auxiliary resonator tuning system is developed in order to describe behaviors that are inherent to the tuning scheme. A single-mode auxiliary resonator system is shown in Fig. 
<ref>a. The main resonator supports a mode with complex field amplitude a at an unperturbed frequency ω_a, which is coupled to a mode b at frequency ω_b in the auxiliary resonator with an inter-cavity coupling rate g, as well as to an input/output waveguide. The field evolution in this system is described by the coupled-mode theory (CMT) model

da/dt = -i( ω_a - iΓ_a/2) a - i (g^*/2) b + √(κ) s_+,
db/dt = -i( ω_b - iΓ_b/2) b - i (g/2) a,
s_- = -s_+ + √(κ) a,

where s_± are the incoming/outgoing waveguide field amplitudes (normalized so that |s_±|^2 is the input/output power) and κ is the main-resonator-waveguide coupling rate. The intrinsic (γ_a, γ_b) and coupling (κ) loss rates for each mode combine to form the total energy loss rates Γ_a = γ_a + κ and Γ_b = γ_b. The steady-state response to monochromatic driving at frequency ω is obtained by a Fourier transform: a(t) → A(ω), b(t) → B(ω), and s_±(t) → S_±(ω). The auxiliary resonator field B can be eliminated from the resulting algebraic system of equations to yield a single-resonator equation

0 = - (iδ_eff + Γ_eff/2) A + √(κ) S_+,

where

δ_eff = δ - (|g|^2/4) (Δ_b + δ)/[(Δ_b + δ)^2 + Γ_b^2/4],
Γ_eff = Γ_a + (|g|^2/4) Γ_b/[(Δ_b + δ)^2 + Γ_b^2/4],

are the effective detuning and loss of the combined resonator system. The quantities δ = ω_a - ω and Δ_b = ω_b - ω_a are the unperturbed drive detuning and resonator detuning, respectively. The resulting steady-state field in the main cavity is given by

A/S_+ = √(κ)/(iδ_eff + Γ_eff/2) = √(κ)[ i(Δ_b + δ) + Γ_b/2 ] / { (iδ + Γ_a/2)[ i(Δ_b + δ) + Γ_b/2 ] + |g|^2/4 }.

This can be compared to the maximum steady-state field in the absence of the auxiliary resonator, A_0/S_+ = 2√(κ)/Γ_a, obtained by setting |g| = 0 and ω = ω_a. The resulting normalized steady-state field |A|/|A_0| is shown in Fig. <ref>a as a function of the drive and resonator detunings (δ, Δ_b). We observe characteristic anti-crossing behavior with field maxima closely tracking the eigenvalues of an equivalent lossless system for |g| ≳ Γ_a, Γ_b. For large resonator detunings |Δ_b| ≫ |g|, the normalized steady-state field at δ = 0 approaches unity and is largely insensitive to changes in Δ_b, indicating decoupling of the auxiliary and main resonator modes. This demonstrates that the tuning mechanism can affect a targeted mode in the main resonator without significantly perturbing other modes, provided significant detuning is maintained.

This effective-resonator picture shows explicitly that the auxiliary frequency ω_b (and consequently the resonator detuning Δ_b) can be tuned to reduce the effective detuning δ_eff at the expense of introducing additional effective loss Γ_eff ≥ Γ_a. These competing effects must be balanced to optimize the particular performance metric under consideration. In many cases, maximization of the steady-state resonator field Eq. (<ref>) is desired, which amounts to minimizing |iδ_eff + Γ_eff/2|. We determine the optimal resonator detuning in this case to be given by

Δ_b^opt = -δ + (|g|^2 + 2Γ_aΓ_b)/(8δ) + (1/2)√( (|g|^4 + 4Γ_aΓ_b|g|^2 + 4Γ_a^2Γ_b^2)/(16δ^2) + Γ_b^2 ).

This relation can be used to find the required tuning range for the auxiliary resonator in order to implement a desired tuning range for the main resonator. In the limit of |g| ≫ Γ_a, Γ_b, the optimal detuning simplifies to Δ_b^opt ≈ -δ + |g|^2/(4δ). The normalized cavity field |A/A_0| along the Δ_b^opt contour is plotted as a function of the drive detuning δ in Fig. <ref>b. The field roughly follows a Lorentzian lineshape with a full-width-at-half-maximum that scales as |g|√(Γ_a/Γ_b).
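To make these expressions concrete, the following minimal numerical sketch (our illustration, not code from the paper) evaluates δ_eff and Γ_eff, follows the optimal-detuning contour, and numerically locates the 3-dB point of the normalized cavity energy. The parameter values (Q_i = 10^6, Q_c = 10^5, |g|/2π = 64 GHz at 1550 nm) are the assumed design values quoted later in the practical-tuning section.

import numpy as np

c = 2.998e8                                  # speed of light [m/s]
omega = 2 * np.pi * c / 1550e-9              # telecom-band mode frequency [rad/s]

Q_i, Q_c, Q_b = 1e6, 1e5, 1e6                # intrinsic, coupling, and auxiliary Q
Gamma_a = omega / Q_i + omega / Q_c          # total main-mode loss rate, gamma_a + kappa
Gamma_b = omega / Q_b                        # auxiliary-mode loss rate (no waveguide)
g = 2 * np.pi * 64e9                         # inter-cavity coupling rate |g| [rad/s]

def effective(delta, Delta_b):
    """Effective detuning and loss of the hybridized main mode."""
    den = (Delta_b + delta) ** 2 + Gamma_b ** 2 / 4
    return (delta - (g ** 2 / 4) * (Delta_b + delta) / den,
            Gamma_a + (g ** 2 / 4) * Gamma_b / den)

def Delta_b_opt(delta):
    """Optimal auxiliary detuning maximizing the steady-state field."""
    g2 = g ** 2
    return (-delta + (g2 + 2 * Gamma_a * Gamma_b) / (8 * delta)
            + 0.5 * np.sqrt((g2 ** 2 + 4 * Gamma_a * Gamma_b * g2
                             + 4 * (Gamma_a * Gamma_b) ** 2) / (16 * delta ** 2)
                            + Gamma_b ** 2))

def energy_ratio(delta):
    """|A/A_0|^2 along the optimal-detuning contour."""
    d_eff, G_eff = effective(delta, Delta_b_opt(delta))
    return (Gamma_a / 2) ** 2 / (d_eff ** 2 + (G_eff / 2) ** 2)

deltas = 2 * np.pi * np.linspace(1e9, 150e9, 20000)
vals = np.array([energy_ratio(d) for d in deltas])
half = deltas[np.argmin(np.abs(vals - 0.5))]      # 3-dB point in drive detuning
print(f"3-dB tuning bandwidth ~ {2 * half / (2 * np.pi * 1e9):.0f} GHz "
      f"~ {2 * half / Gamma_a:.0f} linewidths")   # ~136-137 GHz, i.e. ~64 linewidths

Running this reproduces the approximately 136 GHz (roughly 64 linewidth) bandwidth quoted for the telecom mode, consistent with the closed-form expression derived next.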
Explicit calculation of the normalized steady-state energy |A/A_0|^2 reveals a 3-dB tuning (δ) bandwidth of

B_δ = √( (√(2)-1)|g|^2 (Γ_a/Γ_b) + Γ_a^2 ),

which is plotted against values extracted from the field directly in Fig. <ref>c. We observe that the tuning bandwidth can be made arbitrarily large provided Γ_a ≫ Γ_b — even if the inter-cavity coupling is relatively weak (|g| ≈ Γ_a). This could be achieved, for example, in a system where the inter-cavity coupling is large compared to the intrinsic losses (|g| ≫ γ_a, γ_b) but the main resonator quality factor is intentionally spoiled by significant over-coupling to the waveguide (i.e. by designing the coupling κ ≫ γ_a, γ_b so that κ ≈ Γ_a ≫ Γ_b). Beyond the potential to tune the resonance over several linewidths Γ_a, the proposed operation in the over-coupled regime κ ≈ Γ_a is also desirable in many nonlinear processes, as will be discussed in Section <ref>. The CMT model and its predicted effects are verified using finite-difference time-domain simulation in Appendix <ref>.

§ NONLINEAR PROCESSES IN AUXILIARY-RESONATOR-TUNED SYSTEMS

To generalize the auxiliary resonator model for applications involving nonlinear mode interactions, a resonator configuration similar to Fig. <ref>b is considered. A single main resonator supports multiple modes a_j at frequencies ω_a_j, each of which may be coupled to an associated auxiliary resonator mode b_j and an input/output mode s^±_j. The main resonator modes can interact through nonlinear functions N_j(a) corresponding to the phase-matched resonant process specifically targeted by the design. In contrast, the auxiliary resonator design(s) will generally not support phase-matched multi-resonant nonlinear processes. Thus, the system can be described with a series of equation pairs:

da_j/dt = -i( ω_a_j - iΓ_a_j/2) a_j - i (g_j^*/2) b_j + √(κ_j) s_j^+ + N_j(a),
db_j/dt = -i( ω_b_j - iΓ_b_j/2) b_j - i (g_j/2) a_j.

For a given nonlinear process the various modes will be driven, either externally or by the nonlinear interaction, at a single frequency ω_d,j. The steady-state response is taken to be a_j(t) = A_j exp(-iω_d,j t) and b_j(t) = B_j exp(-iω_d,j t). This enables an effective-resonator description for each pair of modes j,

0 = -( iδ_j,eff + Γ_j,eff/2) A_j + √(κ_j) S_j^+ + N_j(A),

where S_j^+ is the waveguide field amplitude at ω_d,j and (δ_j,eff, Γ_j,eff) are given by Eqs. (<ref>) and (<ref>) as in the single-frequency case.

§.§ Difference-frequency generation

This section illustrates the effect of auxiliary resonator tuning on a nonlinear process using the specific case of resonantly-enhanced χ^(2) difference frequency generation (DFG), which is of particular interest for quantum information applications such as long-distance entanglement distribution over silica optical fiber. In the multi-resonant DFG process (ω_i ↔ ω_p + ω_o) shown in Fig. <ref>b, the main resonator supports input a_1, pump a_2, and output a_3 modes at frequencies ω_1 ≈ ω_i, ω_2 ≈ ω_p, and ω_3 ≈ ω_o. The input and pump resonant modes are driven by incoming waveguide modes s^+_1 and s^+_2, while all three resonator modes are coupled to outgoing waveguide modes s^-_j. Two separate auxiliary rings b_2 and c_3 have resonances ω_b ≈ ω_2 and ω_c ≈ ω_3 and are coupled to a_2 and a_3 with rates g_b and g_c respectively.
The system dynamics are described by the CMT equations

da_1/dt = - i (ω_1 - i Γ_1/2) a_1 - i ω_1 β a_2 a_3 + √(κ_1) s^+_1,
da_2/dt = - i (ω_2 - i Γ_2/2) a_2 - i ω_2 β^* a_1 a_3^* - i (g_b^*/2) b_2 + √(κ_2) s^+_2,
da_3/dt = - i (ω_3 - i Γ_3/2) a_3 - i ω_3 β^* a_1 a_2^* - i (g_c^*/2) c_3,
db_2/dt = - i (ω_b - i Γ_b/2) b_2 - i (g_b/2) a_2,
dc_3/dt = - i (ω_c - i Γ_c/2) c_3 - i (g_c/2) a_3,
s^-_j = - s^+_j + √(κ_j) a_j,

where β is a nonlinear mode overlap coefficient which encompasses effects from phase matching and spatial field overlap <cit.>. For this analysis, β is treated as constant, which is valid for detunings much less than a free spectral range. The effects of detuning and auxiliary resonators on phase matching in a ring resonator geometry are detailed in Appendix <ref>. Auxiliary resonators are not used to tune the highest-frequency mode ω_1 because reaching similar coupling rates with the shorter wavelength would require smaller distances between the main and auxiliary resonators compared to the auxiliary resonators for ω_2 or ω_3. The smaller inter-cavity distance could induce excessive scattering losses in the longer-wavelength modes. Instead, all three modes would be simultaneously shifted (e.g. by temperature tuning of the main resonator) to achieve resonance ω_1 = ω_i, and then the auxiliary resonators b_2 and c_3 are tuned to shift ω_2 and ω_3 onto the desired pump ω_p = ω_i - ω_o and output ω_o frequencies, respectively.

The steady-state fields in the effective-resonator picture satisfy

0 = -(iδ_1 + Γ_1/2) A_1 - i ω_1 β A_2 A_3 + √(κ_1) S^+_1,
0 = -(iδ_2,eff + Γ_2,eff/2) A_2 - i ω_2 β^* A_1 A_3^* + √(κ_2) S^+_2,
0 = -(iδ_3,eff + Γ_3,eff/2) A_3 - i ω_3 β^* A_1 A_2^*,

with S^+_1 and S^+_2 as the input and pump driving amplitudes. Throughout this section, the drive detunings are labeled δ_1 = ω_1 - ω_i, δ_2 = ω_2 - ω_p, and δ_3 = ω_3 - ω_o, and the resonator detunings are Δ_b = ω_b - ω_2 and Δ_c = ω_c - ω_3. In the small-signal limit of the DFG process, the converted light in the output-frequency resonator mode is negligible compared to either of the externally-driven modes. In this case, the small-signal conversion efficiency η_ss can be derived by solving the steady-state equations while neglecting the nonlinear (β) terms in Eqs. (<ref>) and (<ref>):

η_ss ≡ | S^-_3/(S^+_1 S^+_2) |^2 = ω_3^2 |β|^2 κ_1 κ_2 κ_3 / ( |iδ_1 + Γ_1/2|^2 |iδ_2,eff + Γ_2,eff/2|^2 |iδ_3,eff + Γ_3,eff/2|^2 ).

The small-signal efficiency is maximized when the steady-state field of each participant mode is independently maximized, so the same choice of resonator detuning as in Eq. (<ref>) remains optimal. Consequently, the modification of the small-signal efficiency and the corresponding bandwidth match those of the single-resonator model Eq. (<ref>). Similar to a triply-resonant process in a single resonator, η_ss is maximized when all three main resonator modes are effectively critically coupled to the waveguide: κ_j = Γ_j,eff/2. However, the optimal waveguide coupling for auxiliary-tuned modes can vary depending on Γ_eff, which increases with the magnitude of the corrected detuning. The quantum-limited maximum conversion efficiency can be investigated using the undepleted-pump approximation, in which |S^+_1| ≪ |S^+_2|. In this regime, only the nonlinear term in the pump mode Eq.
(<ref>) can be neglected, resulting in an effective coupling between the input and output modes

g_NL = ω_1 β A_2 = ω_1 β √(κ_2) S^+_2/(iδ_2,eff + Γ_2,eff/2).

Under the assumption that δ_1 = 0 by global tuning of the main resonator, the conversion efficiency is given by

η = r^2 |g_NL|^2 κ_1 κ_3 / { [ (Γ_1Γ_3,eff/4) + r|g_NL|^2 ]^2 + δ_3,eff^2 Γ_1^2/4 },

where r = ω_3/ω_1. Optimal conversion efficiency occurs for pump powers

|S^+_2,opt|^2 = [ Γ_1/(2|β|^2 ω_1 ω_3 κ_2) ] |δ_2,eff + i Γ_2,eff/2|^2 |δ_3,eff + i Γ_3,eff/2|,

which is similarly minimized when the resonator detuning is given by Eq. (<ref>). The corresponding optimal conversion efficiency is then

η_opt = 2ω_3 κ_1 κ_3 / [ ω_1 Γ_1 ( 2|δ_3,eff + iΓ_3,eff/2| + Γ_3,eff) ].

We find that η_opt is approximately maximized for resonator detunings which maximize the cavity fields, Eq. (<ref>). The resulting η_opt can be compared to the ideal-system quantum limit <cit.>

η_QL = ω_3 κ_1 κ_3 / (ω_1 Γ_1 Γ_3),

which corresponds to δ_3,eff = 0 and Γ_3,eff = Γ_3.

§ PRACTICAL TUNING RANGE AND BANDWIDTH

The above analysis demonstrates the potential of the auxiliary resonator tuning mechanism in enabling efficient and targeted multi-resonant nonlinear processes. The bandwidth over which such processes can be achieved, however, is largely determined by the inter-cavity coupling rate |g| (Eq. (<ref>)). Consequently, the practicality of the tuning scheme depends on the range of experimentally obtainable coupling rates in realistic device designs. In this section, we simulate both the coupling rates and the added scattering losses resulting from an auxiliary ring resonator coupled to a triply-resonant ring resonator design <cit.>. Based on these simulations, we then examine a resonant DFG process at targeted frequencies in the combined system in order to characterize the tuning bandwidth and performance.

§.§ Coupling rates and losses

To determine |g| for a ring or disk resonator, the propagation of a broadband pulse through the coupling region, depicted in Fig. <ref>a, is simulated with a variational finite-difference time-domain method (varFDTD, Lumerical) to determine the power scattering matrix. varFDTD performs an approximately equivalent two-dimensional simulation using effective indices derived from a three-dimensional structure. The normalized single-pass power coupling spectrum |k|^2 can then be related to the inter-cavity coupling rate |g| as

|g| = 2|k|/√(τ_a τ_b),

where τ_x is the round-trip time of the cavity <cit.>. For this particular configuration τ_x = 2π R_x / v_g,x, with v_g,x and R_x as the group velocity and radius of the resonators, respectively. However, the presence of the auxiliary ring simultaneously introduces scattering losses at the coupling region due to the perturbation of the evanescent field. This excess scattering loss at the coupling region |s|^2 is similarly calculated from the single-pass varFDTD simulations by comparing the combined transmitted |t|^2 and coupled |k|^2 powers to the transmission in the absence of the auxiliary ring. The corresponding coupling-region loss rate is given by Γ_cr,x = |s|^2/τ_x. It is expected that both the coupling rate and the scattering loss depend on the evanescent field overlap and thus decay exponentially with increasing separation d_gap.

The main resonator geometry is based on the design from Ref. <cit.>, which utilizes a hybrid-integrated gallium phosphide (GaP)-on-oxide (SiO_2) ridge-waveguide ring with an inner radius of 5.3 µm, a width of 690 nm, and a height of 430 nm to target phase-matched resonances at 637 nm, 1080 nm, and 1550 nm for DFG.
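As an illustrative back-of-envelope check of this relation (a sketch of our own; the group index, ring radii, and single-pass coupling below are assumed placeholder values, not outputs of the simulations described here):

import numpy as np

c = 2.998e8
n_g = 3.8                                  # assumed GaP group index near 1550 nm
R_main, R_aux = 5.65e-6, 3.2e-6            # assumed effective ring radii [m]

tau_a = 2 * np.pi * R_main * n_g / c       # main-ring round-trip time [s]
tau_b = 2 * np.pi * R_aux * n_g / c        # auxiliary-ring round-trip time [s]

k2 = 0.005                                 # assumed single-pass power coupling |k|^2
g = 2 * np.sqrt(k2) / np.sqrt(tau_a * tau_b)
print(f"|g|/2pi = {g / (2 * np.pi * 1e9):.0f} GHz")   # ~67 GHz for these inputs

With these assumed inputs, a sub-percent single-pass power coupling already yields inter-cavity rates of tens of GHz, the same order as the simulated telecom-band rates reported below.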
A range of auxiliary resonator radii R_aux, widths w_aux, and separations d_gap were simulated, spanning 3–8 µm, 200–500 nm, and 0–300 nm, respectively. Frequency-domain simulations (Lumerical MODE) are utilized to determine the group indices and bending losses for propagating modes near 1080 nm and 1550 nm. This enables the determination of the inter-cavity coupling rates via Eq. (<ref>). The coupling rate |g| most strongly depends on the inter-cavity separation distance d_gap, as shown in Fig. <ref>b. The dependence of |g| on the auxiliary ring geometry is comparatively weak (Appendix <ref>). In general, the coupling rate increases for smaller ring diameters with shorter round-trip propagation times and smaller ring widths with stronger evanescent fields, so long as the auxiliary resonator mode is not approaching cutoff. As expected, we observe a roughly exponential decay of the coupling rate |g| with increasing inter-cavity separation. For small separations (d_gap < 50 nm), the auxiliary resonator begins to strongly perturb the main resonator modes, causing increased coupling rates but also increased scattering losses Γ_cr (Fig. <ref>c). Consequently, the optimal coupling distance corresponds to the smallest distance for which the scattering losses do not significantly contribute to the total quality factor. For a design targeting loaded quality factors of Q = 10^5, a coupling distance of d_gap = 125 nm would provide a sufficiently high telecom coupling rate (|g| ≈ 90 GHz) while introducing relatively minor losses (Q_cr ≈ 10^6).

§.§ Difference-frequency generation

We now model the performance of a triply resonant DFG process (based on <cit.>) with the addition of auxiliary ring resonators. We consider an auxiliary ring design with nominal dimensions R_aux = 3 µm, w_aux = 400 nm, and d_gap = 150 nm. The resulting pump (1080 nm) and output (1550 nm) inter-cavity coupling rates are |g_b|/2π ≈ 15 GHz and |g_c|/2π ≈ 64 GHz respectively. We assume all three modes have fabrication-limited intrinsic quality factors of Q_i = 10^6 (quality factors in GaP photonics at visible/telecom wavelengths have been observed to be well above 10^5 <cit.>). Coupling-region loss at the chosen d_gap = 150 nm corresponds to scattering quality factors far exceeding 10^6 (Fig. <ref>c) and may consequently be neglected. Coupling to the waveguide introduces an additional loss channel within the main resonator but not the auxiliary resonators. We assume that the main resonator is significantly over-coupled, corresponding to a coupling quality factor of Q_c = 10^5 for all three modes. The resulting total quality factors for the three main resonator modes are then Q_j = 9.1×10^4 (j = 1, 2, 3). The auxiliary rings do not couple to the waveguide and so Q_b = Q_c = 10^6. We then compute the 3-dB tuning bandwidth, Eq. (<ref>), for the pump and output modes to be

B_δ,2/2π ≈ 32 GHz ≈ 11 (Γ_2/2π),
B_δ,3/2π ≈ 136 GHz ≈ 64 (Γ_3/2π).

The assumed resonator parameters and the corresponding bandwidths are summarized in Table <ref>. As described in Eq. (<ref>), the accessible portion of this tuning curve is limited by the tuning range of the auxiliary resonator. Also, if a resonance is tuned by more than approximately 20% of a free spectral range, the conversion efficiency may be reduced due to phase matching effects (Appendix <ref>), though this is not likely to become significant in high-finesse systems. Finally, we compute the projected DFG performance metrics as shown in Fig. <ref> assuming the input mode has been tuned globally onto resonance (i.e.
δ_1 = 0). We observe that the pump and output modes could be tuned over a range of 4 and 23 main-resonator linewidths, respectively, while maintaining over 90% of the maximum small-signal conversion efficiency. The small-signal efficiency can be increased at the expense of the pump mode tuning bandwidth by reducing the coupling rate κ_2 to the near-critically coupled regime (κ_2 ≈ γ_2). Although the tuning bandwidth of the pump mode is diminished compared to the output mode, the critical power is relatively insensitive to the applied tuning, remaining near 10 mW over the full range (Fig. <ref>d). Operation at the quantum efficiency limit can be achieved over the much larger telecom tuning bandwidth (Fig. <ref>c) despite the comparatively smaller pump bandwidth. These results indicate that the auxiliary resonator tuning method could enable high-efficiency, frequency-targeted DFG over large bandwidths with minimal impact on performance.

§ CONCLUSION AND OUTLOOK

Coupled auxiliary resonators offer a straightforward method to actively and independently tune specific resonances in a multi-resonant device without altering the structure of the primary resonator. The tuning scheme can be implemented and used in conjunction with any resonator-specific tuning mechanism, allowing for compatibility with most material platforms while also limiting the required fabrication complexity. This technique enables the tuning of individual resonances over tens of linewidths and can be accurately modeled by a small number of fast and computationally inexpensive simulations. The flexibility and control uniquely afforded by the auxiliary-resonator tuning scheme are particularly relevant to quantum information applications, which impose strict restrictions on process wavelengths. For example, difference-frequency conversion of photons emitted from solid-state qubits—such as the nitrogen-vacancy <cit.> and silicon-vacancy <cit.> centers in diamond—to a specific telecom wavelength could be used to not only minimize losses on a fiber-based quantum network <cit.>, but also correct for spectral inhomogeneity of the emitter nodes. In this way, the high conversion efficiency afforded by the multi-resonant nonlinear process, combined with the flexibility of the auxiliary resonator tuning mechanism, may serve an indispensable role in the development of large-scale quantum networks.

§ ACKNOWLEDGEMENT

This material is based upon work supported by Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704 and National Science Foundation Grant No. ECCS-1807566. N.S.Y. was supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-2140004.

§ DISCLOSURES

The authors declare no conflicts of interest.

§ DATA AVAILABILITY

The simulation data and calculations in this work can be obtained by contacting the authors.

§ NUMERICAL VALIDATION

To verify the predictions from the CMT model, we simulate a GaP-on-oxide coupled-auxiliary-resonator system using varFDTD, which performs an approximately equivalent two-dimensional simulation using effective indices derived from the three-dimensional structure. For the highly-planar devices under consideration, varFDTD enables reasonably accurate simulation of device performance with much higher throughput. The system, shown schematically in the inset of Fig. <ref>a, consists of two GaP disk resonators on a silicon-oxide substrate.
One resonator is evanescently coupled to a waveguide, which is used to excite the resonators. By varying the auxiliary resonator radius while maintaining the edge-to-edge separation of the resonators, the auxiliary resonator detuning Δ_b can be modulated without significantly affecting the coupling rate g directly. Accurate simulation of steady-state device transmission and cavity fields requires the transient response to completely decay over the course of the simulation. To achieve this while minimizing computational resources, we use a comparatively small geometry with nominal radii of 1.54 µm and a height of 500 nm. Furthermore, intrinsic material losses within the resonators (Im(n_GaP) = 10^-4) are imposed, which has an effect similar to scattering losses from surface roughness in fabricated devices. Simulations of the main resonator transmission in the absence of the auxiliary resonator reveal an intrinsic loss rate γ/2π = 20.4 GHz and a waveguide coupling loss rate κ/2π = 14.6 GHz. By varying the radius of the auxiliary resonator by ± 5.5 nm in 0.5 nm steps, the frequency of the auxiliary resonance is tuned by approximately ± 750 GHz. Single-pass coupling simulations (discussed in main text Section 4) predict coupling rates of g/2π ≈ 170 GHz. Simulated transmission spectra are shown in Fig. <ref>a. The CMT model is fit using known values of the loss and coupling rates (γ, κ, and g) obtained from single-pass simulations (main text, Section 4), with the only free parameters being the absolute and relative frequencies of the resonators. In Fig. <ref>b, the normalized steady-state main cavity energy obtained from one of these simulations is compared to the corresponding CMT prediction. Fig. <ref>c shows the highest attainable cavity energy as a function of drive detuning δ compared to the unperturbed resonance, under optimal auxiliary resonator detuning Δ_b. The simulation results closely follow the Lorentzian shape of the CMT model curve, which is identical to that in Fig. 2b except using the known parameters. In each of these plots, reasonably good agreement is observed between the varFDTD simulation results and the CMT model predictions, indicating that the model can provide useful predictions of system behavior using only a few parameters which can be extracted from a computationally inexpensive set of simulations.

§ SINGLE-PASS COUPLING SIMULATIONS

Simulations of the single-pass coupling and loss were performed for the full, multi-dimensional parameter sweep. Representative behavior is shown in Fig. <ref> for variation with respect to the auxiliary ring resonator radius and width. In general these parameters do not strongly affect the coupling rate provided the width and radius are sufficiently large to allow for low-loss propagation of the auxiliary resonator mode. As expected, the strongest dependence is on the separation between the two rings.

§ PHASE MATCHING IN RING RESONATORS

For nonlinear processes in a ring or whispering gallery resonator, the angular propagation constants q_j describe the spatial phase evolution of each propagating mode j over the resonator circumference. At the resonant frequencies ω_j, the propagation constants q_j are integers equal to the azimuthal mode numbers m_j. Conservation of momentum for a perfectly phase-matched nonlinear process requires the sums of propagation constants for the input and output modes to be equal. This section uses the specific case of triple-resonant difference frequency generation ω_i ↔ ω_p + ω_o (Fig.
<ref>a) in a ring resonator with resonant frequencies ω_1 ≈ ω_i, ω_2 ≈ ω_p, and ω_3 ≈ ω_o to illustrate phase-matching effects related to auxiliary resonator tuning. For DFG, the conversion efficiency is maximized when the phase mismatch M ≡ q_2 + q_3 - q_1 is equal to zero. In the absence of an auxiliary resonator, the nonlinear overlap integral β = β_0 ∫_0^2π dθ exp(iMθ) appearing in the CMT equations exhibits a sinc-like dependence on M:

|β| = 2|β_0| sin(π M)/M.

Consequently, |β| decreases for M ≠ 0 and vanishes completely for nonzero integer values of M. Quasi-phase-matched processes exhibit a similar dependence, but offset to a nonzero optimal M. A multi-resonant nonlinear cavity with modes detuned from the designed process frequencies ω_j will have angular propagation constants similarly displaced from the intended integer azimuthal mode numbers, q_j = m_j + Δq_j, resulting in a round-trip phase shift of 2πΔq_j. A coupled auxiliary resonator can tune a main resonator mode onto the design frequency by introducing a compensating phase shift of -2πΔq_j over the length of the inter-cavity coupling region, allowing constructive self-interference. The angular propagation constant of the mode is not affected outside of the coupling region. For a ring or disk resonator with sufficiently large diameter, the effect of the coupling region can be approximated as a phase discontinuity at the angular location of the auxiliary resonator θ_j, so that the resonant field has spatial phase

ϕ_j(θ) = (m_j + Δq_j) θ for 0 ≤ θ < θ_j,
ϕ_j(θ) = (m_j + Δq_j) θ - 2πΔq_j for θ_j ≤ θ < 2π.

The effect of this phase discontinuity is illustrated in Fig. <ref>b. An example tuning scheme is shown in Fig. <ref>a, consisting of a triple-resonant DFG ring resonator coupled to two auxiliary resonators. The highest-frequency mode in the main ring is tuned to the input frequency (ω_1 = ω_i) by a global tuning mechanism, while any residual detuning for the other two modes is corrected by the auxiliary resonators at angular positions θ_2 and θ_3. Provided the main resonator modes satisfy phase matching (m_1 = m_2 + m_3), the phase mismatch of the unperturbed process is M = Δq_2 - Δq_3. This yields a nonlinear overlap of

β = β_0 ( ∫_0^θ_2 dθ e^iMθ + e^-2πiΔq_2 ∫_θ_2^θ_3 dθ e^iMθ + e^-2πi(Δq_2 - Δq_3) ∫_θ_3^2π dθ e^iMθ ),

which can be evaluated as

|β| = [ 2|β_0|/|Δq_2 - Δq_3| ] | sin(πΔq_2) - e^i(Δθ - π)(Δq_2 - Δq_3) sin(πΔq_3) |,

where Δθ = θ_3 - θ_2. The nonlinear overlap is plotted in Fig. <ref>c for Δθ = 0^∘, 60^∘, 120^∘, and 180^∘. For most practical applications, the frequency shifts from auxiliary resonator tuning will be much less than the free spectral range of each mode (Δq_j ≪ 1), resulting in negligible phase-matching effects on |β| regardless of resonator positioning. If only one auxiliary resonator is used for tuning, or if the angular separation between two auxiliary resonators is near zero, the phase matching of the tuned process remains unchanged from the unperturbed process. Increasing the angular separation between two auxiliary resonators can allow the phase discontinuities from tuning to improve the effective phase matching of the frequency conversion process, similar to a quasi-phase-matching scheme. However, depending on the relative magnitudes and directions of the tuning applied to each mode, the effective phase matching may instead be degraded compared to the unperturbed process.
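As a consistency check of the closed-form overlap above, the short sketch below (our own illustration, not code from the paper; the Δq_j values and coupler positions are arbitrary) evaluates the segmented phase integral by direct quadrature and compares it with the closed-form expression, taking β_0 = 1:

import numpy as np
from scipy.integrate import quad

dq2, dq3 = 0.15, -0.08                           # illustrative residual shifts Delta q_j
th2, th3 = np.deg2rad(40.0), np.deg2rad(160.0)   # auxiliary coupler positions
M = dq2 - dq3                                    # unperturbed phase mismatch

def integrand(theta):
    """Spatial phase factor including the two tuning-induced discontinuities."""
    phase = M * theta
    if theta >= th2:
        phase -= 2 * np.pi * dq2
    if theta >= th3:
        phase += 2 * np.pi * dq3
    return np.exp(1j * phase)

re = quad(lambda t: integrand(t).real, 0, 2 * np.pi, points=[th2, th3])[0]
im = quad(lambda t: integrand(t).imag, 0, 2 * np.pi, points=[th2, th3])[0]
beta_numeric = abs(re + 1j * im)                 # |beta|/|beta_0| by quadrature

dth = th3 - th2
beta_closed = (2 / abs(M)) * abs(np.sin(np.pi * dq2)
               - np.exp(1j * (dth - np.pi) * M) * np.sin(np.pi * dq3))
print(beta_numeric, beta_closed)                 # the two numbers should agree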
"authors": [
"Alan D. Logan",
"Nicholas S. Yama",
"Kai-Mei C. Fu"
],
"categories": [
"physics.optics"
],
"primary_category": "physics.optics",
"published": "20231127080001",
"title": "Selective active resonance tuning for multi-mode nonlinear photonic cavities"
} |
Jon Butterworth^a, Cesare Cazzaniga^b, Aran Garcia-Bellido^c, Deepak Kar^d, Suchita Kulkarni^e, Pedro Schwaller^f, Sukanya Sinha^g, Danielle Wilson-Edwards^g, Jose Zurita^h

[a] Department of Physics & Astronomy, University College London, London, United Kingdom
[b] ETH Zürich, Institute for Particle Physics and Astrophysics, 8093 Zurich, Switzerland
[c] Department of Physics and Astronomy, University of Rochester, Rochester NY, USA
[d] School of Physics, University of Witwatersrand, Johannesburg, South Africa
[e] Institute of Physics, NAWI Graz, University of Graz, Graz, Austria
[f] PRISMA+ Cluster of Excellence and Mainz Institute for Theoretical Physics, Johannes Gutenberg-Universität Mainz, Mainz, Germany
[g] School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom
[h] Instituto de Física Corpuscular, CSIC-Universitat de València, Paterna, Spain
[*] Corresponding author(s): Sukanya Sinha [[email protected]]

MITP Colours in Darkness workshop summary report

This report summarises the talks and discussions that took place over the course of the MITP Youngst@rs Colours in Darkness workshop 2023. All talks can be found at this URL: <https://indico.mitp.uni-mainz.de/event/377/>.

§ INTRODUCTION

In recent years, there has been an increase in the number of search programmes exploring the possibility of a “dark sector" beyond the Standard Model (BSM) using LHC data. To date, dark matter (DM) searches at the Large Hadron Collider (LHC) have usually focused on WIMPs (Weakly Interacting Massive Particles), but since the standard signatures have found no compelling evidence, several recent phenomenology papers have explored the possibility of accessing the dark sector with unique collider topologies. If dark mesons exist, their evolution and hadronization procedure are currently little constrained. They could decay promptly and result in a very Standard Model (SM) QCD-like jet structure (darkjets), even though the original decaying particles are dark sector ones; they could behave as semi-visible jets (SVJs); or they could behave as completely detector-stable hadrons, in which case the final state is just the missing transverse momentum. Furthermore, depending on whether the dark hadrons decay promptly or not, emerging jet (EJ) signatures can also arise. Owing to the associated experimental challenges, these classes of models are still underdeveloped and only mildly explored. Recent developments in reconstruction and identification techniques have made it possible to probe such models at the LHC, and the first limits on some of these signatures are public from both ATLAS and CMS. However, there is still a lot of ground left to cover, in terms of the shower and hadronisation approach, and in benchmarking the models for future iterations of the searches as we step into an era of unprecedented data, with LHC Run-3 well underway. This workshop [https://indico.mitp.uni-mainz.de/e/darkshowers] aimed to foster collaboration between the experimental and theory community dedicated towards developing and understanding the strongly interacting dark sector.
The workshop featured talks from the leading experts in the field, which were followed by extensive discussion sessions, to understand the current status of the dark showering modules within Monte Carlo generators like Pythia and Herwig, as well as to identify potential studies pertaining to signal generation and to design theoretically motivated models that will drive future search strategies for strongly interacting dark sectors.

§ DAY 1

The day focused on the theoretical perspective, and on the status of event generation. All theory speakers reminded us of the vast landscape of phenomenological scenarios and their signatures, and in all cases the discussions focused on how to improve the search strategies to capture a broader set of scenarios than currently aimed for. The landscape of Hidden Valley (HV) scenarios <cit.> is vast [Presentation by Matthew Strassler]. The signatures can be classified into the “easy” (parton-level MC suffices), “feasible” (a dark shower Monte Carlo is needed) and “guesswork” (no reliable simulation) categories. For the “easy” case, the reach can be broadened by making searches as inclusive as possible, for instance by relaxing event selection criteria. This point was illustrated using the Z → A' S, S → A' A' (where A' is a dark photon, S a new scalar, and A' decays to leptons) search done in <cit.>, proposing to replace the lepton isolation criteria by a displacement cut, as non-isolated leptons would often arise in the decay of dark jets. On the “feasible” signatures, the current QCD knowledge can be exploited to explore this region better. Concretely, Pythia can simulate the simplest case of i) perturbative (QCD-like) theories, ii) with mass-degenerate dark quarks and iii) with all dark quark masses lying below the dark confinement scale Λ. Deviating from any of these three assumptions can render Pythia unreliable and/or not applicable, and theory/simulation progress is required to improve the situation. In order to simulate confining HV theories and analyse their phenomenology, development of event generators is necessary [Presentation by Suchita Kulkarni]. The main developments to the Pythia HV module during and since the Snowmass process <cit.> were highlighted. These include the possibility to abandon the mass-degeneracy hypothesis, the implementation of the running of the dark sector coupling constant up to three loops, and allowing the SM Higgs to decay into dark gluons. These new features must be analyzed carefully, as Pythia raises only a few sanity warnings. Possible improvements to the HV module are the expansion to include other Lie groups beyond SU(N), and considering that the dark “quarks” must not necessarily be Dirac fermions. The importance of estimating hadronization uncertainties away from the SM QCD point was emphasised. Among the experimental signatures of confining HV scenarios, emerging jets <cit.> are an interesting phenomenon [Presentation by Pedro Schwaller]. While only the simplest case has been studied, considering more general assumptions, like i) a mixture of different dark pion lifetimes or ii) more than one dark meson, opens up new avenues (a minimal generator-level sketch of the relevant Hidden Valley settings is given below).
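The following is such a sketch, using the Pythia 8 python interface. It loosely follows the structure of publicly documented Hidden Valley / semi-visible-jet generation recipes, and every numerical value (masses, Λ, branching fractions, lifetime) is an illustrative placeholder rather than a workshop benchmark:

import pythia8  # requires the Pythia 8 python bindings

pythia = pythia8.Pythia()
for setting in [
    "Beams:eCM = 13000.",                 # LHC-like pp collisions
    "HiddenValley:ffbar2Zv = on",         # s-channel Z' -> dark quark pair
    "4900023:m0 = 2500.",                 # Z' mass [GeV] (placeholder)
    "HiddenValley:Ngauge = 2",            # number of dark colours
    "HiddenValley:nFlav = 2",             # number of dark flavours
    "HiddenValley:alphaOrder = 1",        # running dark coupling
    "HiddenValley:Lambda = 10.",          # dark confinement scale [GeV]
    "HiddenValley:pTminFSR = 11.",        # shower cutoff, ~1.1*Lambda
    "HiddenValley:FSR = on",              # dark-sector radiation
    "HiddenValley:fragment = on",         # dark-sector hadronisation
    "4900101:m0 = 10.",                   # dark quark mass [GeV]
    "4900111:m0 = 20.",                   # diagonal dark pion mass [GeV]
    "4900113:m0 = 20.",                   # diagonal dark rho mass [GeV]
    "4900211:m0 = 9.9",                   # stable dark hadron (invisible)
    "4900111:tau0 = 50.",                 # dark pion proper lifetime [mm]: emerging jets
    # crude stand-in for r_inv ~ 0.3: diagonal pions decay invisibly 30%
    # of the time, otherwise back to SM down quarks
    "4900111:oneChannel = 1 0.3 0 4900211 -4900211",
    "4900111:addChannel = 1 0.7 91 1 -1",
]:
    pythia.readString(setting)
pythia.init()
pythia.next()  # generate one event

Scenarios with several dark-meson lifetimes, as discussed above, would amount to assigning different tau0 values (and decay tables) to the different dark hadron species.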
Building on point i) above, the reinterpretation of the current CMS EJ search for the case of two different dark hadron lifetimes was demonstrated <cit.>, while on ii) interesting phenomenology can be obtained from the “down-portal” and “up-portal” (mostly focused on the “top portal”) scenarios, depending on the flavour structure of the dark sector-SM interactions <cit.>, which can lead to exotic top decays, t → u X, where X can either be a (long-lived) dark pion or initiate a full dark shower <cit.>. Moreover, scenarios like ii) can also lead to consistent models of dark matter, where the relic density is calculable. Finally, it was emphasised that by working with concrete benchmark models, the reach of collider searches can be compared with e.g. cosmological constraints (CMB, BBN) and with the reach of fixed-target or flavour probes of dark sectors. Although HV scenarios offer a theoretically and experimentally interesting landscape, the number of free parameters may introduce model dependence. Thus an alternative framework for simulating confining HV scenarios was discussed, where both emerging and semi-visible jets can be accommodated [Presentation by Nishita Desai]. A brute-force approach would require choosing too many parameters in Pythia, and hence she investigated their impact on the observable energy distributions. The proposal involves a simplified setup using 6 parameters: R_max (jet radius parameter), a (jet energy shape), N_H (average number of dark hadrons), the fraction of invisible hadrons r_inv, the mass of the dark hadron m_H, and the decay table of dark hadrons. This has been provisionally validated for s-channel-mediated dark showers. Finally, the implementation of dark showers in the Herwig framework was also discussed [Presentation by Dominic Stafford]. While many aspects are still undergoing development (in particular the hadronization modules), dark splittings in the shower module have been successfully implemented, and preliminary results show production rates similar to those obtained with Pythia.

§.§ Discussion summary: Day 1

In summary, the main challenge on the theory frontier is to capture the vast range of possible signatures while taking into account the limits of theoretical knowledge, the availability of simulation tools, and the finite number of phenomenological studies and experimental searches that can be done in practice. The progress on simulation tools, simplified phenomenological models, and simple but complete benchmark models reported above looks promising and makes us confident that the challenge can be overcome in the future.

§ DAY 2

Day 2 of the workshop was mostly focused on a critical review of experimental results and possible future improvements, prefaced by an introductory theory talk on SVJ. The initial model was designed to give rise to the desired topology, with a limited set of parameters in the HV module. However, it is clear that these parameter choices result in a certain amount of model dependence <cit.>. Semi-visible jet production has been considered so far mainly in the context of Simplified Models (s-channel and t-channel), as well as via a portal contact interaction coupling SM quarks to dark quarks, which subsequently undergo showering, hadronization, and decay back to the SM sector. More recently, an alternative production mechanism of SVJ via glueballs was suggested in <cit.> [Presentation by Tim Cohen]. Furthermore, in the direction of better understanding the jet substructure and having control over the model dependence, recently the Lund Jet Plane (LJP) approach <cit.> has been applied to dark showers <cit.>.
The main idea of the proposed method relies on the fact that the non-perturbative effects are isolated in the lower region of the LJP (ln k_t < ln Λ_d). Thus, one can control the impact of the non-perturbative effects by removing emissions in the LJP below a certain k_t threshold. CMS presented their early Run 2 emerging jets results <cit.> [Presentation by Jannicke Pearkes], which used the impact parameters of tracks in jets to define signal regions. The b-jet contamination in the background acceptance was carefully studied. The full Run 2 result is coming soon. The Run 3 analysis is expected to use dedicated triggers and an ML approach to tag the jets, potentially combining with SVJ. CMS then reviewed the full Run 2 s-channel SVJ search <cit.> [Presentation by Aran Garcia-Bellido], motivating the specific HV parameter choices made. Cut-based and BDT jet-tagger strategies were used, and new filters were developed to discard events from problematic regions of the detector that artificially enhance the data sample with events characterised by missing transverse momentum aligned with the second-leading jet, which is precisely the data of interest for the search. The next searches will focus on t-channel SVJ production, the boosted topology, and uncovered phase spaces, such as lower mediator masses and SVJs with leptons. Then ATLAS presented their t-channel SVJ <cit.> [Presentation by Deepak Kar] and s-channel darkjets <cit.> results [Presentation by Dilia Maria Portillo Quintero]. The former used a missing transverse momentum trigger, as opposed to the un-prescaled jet p_T and H_T triggers used in the CMS search. This could have allowed probing intermediate values of r_inv, but the analysis did not, because of the difficulty in modelling the fake multijet background, which will be a useful aspect to study going forward. The darkjets search mainly used the number of tracks in ungroomed jets to identify the signal, but the usual issue is de-correlating this variable from the large multijet background. While track multiplicity is experimentally a useful variable, more robust observables, such as subjets inside jets <cit.>, their correlations and masses, can be useful as probes of meta-stable hidden hadrons. The coupling was chosen such that the models have not already been excluded by existing dijet searches. The ECS talks highlighted the HEPData <cit.> preparation for this result [Presentation by Danielle Wilson-Edwards], which is an important input to make this result usable by the larger community, as well as potential improvements in these searches by using partial event building [Presentation by Angelica Aira Araw Ayalin], allowing us to expand the search coverage. An attempt to simulate dark showers for near-conformal confining Hidden Valleys, using the Pythia 8 <cit.> HV module, was also presented [Presentation by Joshua Lockyer].

§.§ Discussion summary: Day 2

The discussion session focused on possible benchmark signal choices. Model dependence can arise at different stages when considering the production of the different unconventional jets: from dark quark production through dark showering, dark hadronization, and decays. However, fixing the portal, assuming a QCD-like hidden sector, and choosing a certain number of dark colours and dark flavours allows some control over the perturbative steps of the process. The modelling of the non-perturbative regime of these QCD-like hidden sectors (hadronisation and the dark hadron spectrum) remains one of the main unknowns that can strongly impact our predictions for a scenario where the presence of a new confining dark sector leaves its imprint on the substructure of QCD-like jets <cit.>.
This aspect is extremely relevant for searches exploiting this powerful information. The choice ATLAS made to go with N_flav = 1 (as opposed to N_flav = 2 in CMS) has certain conceptual difficulties, but it still generates the topology of interest. It was mentioned that choosing N_flav > 1 resolves the issues pertaining to dark meson production, and N_flav > 2 can help to avoid problems associated with hadronization and baryon production; however, then it becomes challenging to implement the invisible fraction r_inv consistently. The suggestion was to force all pions to be either invisible or visible, and to set the decay width by hand, but even then, simulating mass-split theories, where cascade decays occur within the dark sector, can lead to more ambiguities. There are alternate approaches, like using a Gaussian smearing, discussed on the previous day of the workshop, to avoid the complicated HV parameterisation; however, further studies are required on that front. It was also concluded that the Pythia 8 HV module is currently insufficient to describe the expected behaviour for near-conformal confining Hidden Valleys, due to the presence of infra-red fixed points (IRFPs). However, the studies show significant progress towards designing a framework which can adequately describe the behaviour in this near-conformal regime. Independent tests are needed to gain more confidence in the different classes of theories that can be studied using this approach. It is very much an open issue, and a compromise has to be struck between theoretically sound models and experimentally searchable models focusing on uncovered phase space.

§ DAY 3

The day began with a discussion of complementarities between the dark sector and DM, and how simple DM limits may be interpreted in more complex scenarios [Presentation by Mark Goodsell]. There are indications from simulations of galaxy formation that models in which DM is a single species of particle with negligible self-interaction (apart from annihilation in the very early universe) struggle to describe the density curves for galaxies, producing a “cusp” in density rather than the observed broad core <cit.>.
Sometimes this is due to unavailability of detailed experimental information, especially as complex machine learning tools are employed. Agreeing on how to share such information between experiments and with the theory community is a critical issue.For the more standard final states at colliders, one approach is to consider particle-level measurements[Presentation by Jon Butterworth], using the Contur package <cit.> which uses the analyses stored in Rivet <cit.> to perform a signal-injection of events predicted by a specific BSM scenario on a library of hundreds of fiducial cross section measurements. If the signal would have been seen, the model point can be excluded. If the expected and actual limits begin to diverge, with the expected limit stronger than the actual, this may be a sign of an anomaly. To be ideally usable in Contur, a measurement needs to be unfolded to a “particle level” fiducial cross section, that is, corrected for detector effects such as resolution and efficiency within a kinematic region of good acceptance, and not extrapolated beyond that region, since such extrapolations (including correcting for vetoes on reconstructed objects) inevitably introduce model dependence. It is also better, though not essential, that the measurement is defined in terms of the true final state, not in terms of production processes. Hundreds of LHC measurements already meet enough of these criteria to be usable, and are present in the Rivet and HEPData. Several DM models have been tested by Contur, including a strongly interacting Dark Matter model <cit.>, and a combined study using an interface to GAMBIT <cit.>, but these are all limited to models where the final state consists of (potentially novel) combinations of SM objects. Extending this to objects such as non-isolated leptons and photons, emerging jets or long-lived particles is a challenge, but in general if an object definition can be made in terms of final state particles, and implemented in Rivet, Contur can make use of it.This discussion led naturally on to a discussion of analysis preservation for consumption both inside and outside collaborations, particularly as applied to strongly interacting DM models[Presentation by Louie Corpe]. This is to some extent a solved problem for SM-like measurements, where many are already available in Rivet. Once this information is made available, theorists can reinterpret the searches. Similar tools are available for searches, and there are community guidelines and discussion documents available <cit.>. Object or event selections using complex machine learning methods currently pose a challenge, especially when they mingle particle- and detector-level concepts. Detector level simulations for theorists are difficult to implement, due to the unavailability of collaboration-backed simulation configurations, and the limitations of fast detector simulations and parameterisations.Many discussions are underway as to how to meet this challenge, and there is an urgent necessity for feedback on the usefulness of the available experimental information for reinterpretation. To ensure the usefulness of data well beyond the lifetime of the experiments, object definitions have to be made clear from the experimental side at some level which is interpretable in terms of theoretically-accessible objects, ideally final-state particles. 
These challenges for non-standard objects are not, however, unique to unfolded measurements, and have for example to be met (and have been met) even for the calibration of objects used in detector-level searches. The final step to making them reinterpretable is hopefully therefore not insurmountable.Then followed a series of presentations by early career researchers and discussions of specific analyses.A reinterpretation of CMS emerging jet search in the context of exotic Higgs boson decays [Presentation by Juliana Carrasco], following <cit.>, demonstrated a closure test using acceptances provided for CMS signal benchmark points, showing that the search was reproducible using information available on HEPData. Different exotic higgs decay portals were then checked, and bounds were set which are better than the current best bounds (from ATLAS). Such studies can also show where are limits are less strong, and thus lead to proposals for improvement.A study of semi-visible jets dominated by b-quarks was presented [Presentation by Wandile Nzuza], focusing on a search strategy that utilizes variable radius jets to better encompass the semi-visible jet behaviour on an event-by-event basis <cit.>. There are some constraints on the final state from current searches that probe b-jets, however, there is sufficient phase space left to be explored. The current CMS semi-visible jet search exploits the SVJ b-enriched content via a BDT-based jet tagger. Constraints from this search should be compared with this studies, together with constraints from the current ATLAS SVJ jet search. Semi-visible jets can also be produced with non-isolated lepton pairs being present inside the jet, and there were several presentations focusing on the case of non-tau leptons within SVJ [Presentation by Cesare Cazzaniga], as well as having Non-isolated taus in SVJ [Presentation by Tobias Fitschen] based on <cit.>. No strong bounds from existing searches are present for either scenarios. Potential search strategies can include lepton counting or mass correlation between the hardest lepton pairs. Topological triggers tend to be more promising for such a final state compared to standard triggers, and alternative triggering strategies such as Trigger-Level Analysis, or Scouting, accompanied by Partial Event Building (PEB) can be explored for both final states. The signal generation for SVJ with leptons can be approached via different methods, as discussed in the last presentation [Presentation by Clarisse Prat.].§.§ Discussion summary: Day 3The different presentations throughout the day triggered some discussion about how to best preserve existing analyses, and make them accessible and/or reproducible for the wider community beyond the respective experiments. The theory community came up with a wishlist of information that would help them in reinterpreting relevant analyses in the context of new models. Full likelihoods seem to be a mandatory requirement, with additional information coming from covariance matrices, explicitly spelling out intermediate steps in the analysis chain and descriptive object definitions. It was mentioned that the efficiency maps provided tend to be model dependent and hence difficult to reinterpret. Existing packages can be utilised better, i.e. providing validated SimpleAnalysis <cit.> routines, REANA <cit.> implementations of the full analysis workflow, or using the detector smearing available within Rivet <cit.>. 
The cutflow tables provided by experiments are a good starting point, and attention to detail is required when adding information to HEPData. There was an overarching demand for analyses that use sophisticated ML techniques to include in HEPData the features obtained from the classifiers. Finally, to gain a semi-quantitative estimate of phase space/parameter gaps, simpler analyses that are easily recastable are useful. For the early career session, particularly for the SVJ-l, it was noted that the use of single-lepton triggers is limited by the isolation requirements, and thresholds, mainly from the Level1 side. PEB is preferred rather than lower di-muon triggers, because of high rates of fake muons specifically for ATLAS. An alternate trigger approach would be three lepton triggers, if the trigger bandwidth permits.There were some follow up discussions on the definition of for SVJ-l, that is summarised in the list below: * If is forced as an input parameter, it can be taken at face value, and is easier to understand for theorists* If is made an output parameter, then accurate branching ratios should be taken into account. However, the downside to this approach is that the becomes much more model-dependent and difficult to reinterpret.* If can potentially be used as a measure of how to characterise angular distribution of jets in the t-channel with respect to , or shape estimation* Invisible fraction of total energy for a given model is preferred, especially when multiple flavours of dark quarks is involved Finally, there was a discussion on generation of the SVJ-l signatures. This has been studied for the first time in <cit.>, however an other possibilities mentioned in the last talk of the session can be further explored. Additionally, it was mentioned that if is set to 3 in the hidden valley module, then K_d →π_d π_d due to weak interactions is triggered, and if there are flavour changing neutral currents, then K_d →π_d γ_d are also a possible generation mode for the signature. This leads to less leptons in final state, but potentially interesting physics.§ ACKNOWLEDGEMENTSC. Cazzaniga is supported by the Swiss National Science Fundation (SNFS) under the SNSF Eccellenza program.A. Garcia-Bellido's research is supported by the US Department of Energy award DE-SC0008475. D. Kar is supported by South Africa CERN research consortium. S. Kulkarni is supported by Austrian Science Fund research group funding FG1. P. Schwaller is supported by the Cluster of Excellence “Precision Physics, Fundamental Interactions, and Structure of Matter” (PRISMA+ EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy (Project No. 39083149). Research by S. Sinha is part of a project that has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (grant agreement 101002463).D. Wilson-Edwards' research is supported by European Research Council grant REALDARK (grant agreement no. 101002463) and the Science and Technology Facilities Council, part of the UK Research and Innovation.J. Zurita is supported by the Generalitat Valenciana (Spain) through the plan GenT program (CIDEGENT/2019/068), by the Spanish Government (Agencia Estatal de Investigación) and ERDF funds from European Commission (MCIN/AEI/10.13039/501100011033, Grant No. PID2020-114473GB-I00). 
§ DETAILED SUMMARY OF EXPERIMENTAL TALKS

§.§ Search for s-channel semi-visible jets with CMS in Run 2

Speaker: Aran Garcia-Bellido

The CMS experiment published in 2022 the first search for resonant production of “semi-visible jets” (SVJ) using 138 fb^-1 of data from Run 2 <cit.>. The theoretical model used for the search is based on the proposal in <cit.>, where a heavy leptophobic Z^' boson from a broken U(1) symmetry couples the Standard Model quarks (with coupling strength g_q) to dark quarks (with coupling strength g_χ) belonging to a QCD-like hidden sector. As mentioned in Section <ref>, the dark quarks shower and hadronise in the hidden sector, leading to dark bound states with degenerate masses M_d (pseudo-scalar mesons π_d and vector mesons ρ_d) that can be stable or can decay promptly back to Standard Model quarks. Unstable ρ_d can decay to a pair of SM quarks of any flavour with equal probability, while unstable π_d must decay through a mass insertion, thus decays to the heaviest SM quarks kinematically accessible are preferred (usually b-quarks). The final signature is mainly characterized by at least two large jets with heavy-flavour content and E_T aligned with one of them. The SVJ signal is simulated at leading order with the Hidden Valley model implemented with PYTHIA 8.226 for 2016 and PYTHIA 8.230 for 2017 and 2018 <cit.>. Both the number of dark colours and the number of dark flavours have been set to 2. The main parameters of the signal model have been listed in Section <ref>, where the mediator mass in this case is the mass of the Z^' boson, M_Z'. For the reference parameters, different values have been scanned: 19 values for M_Z' ∈ [1.5, 5.1] TeV, 12 values for r_inv ∈ [0,1], 12 values for M_d ∈ [1,100] GeV, and 3 values for α_d (changing the resulting multiplicities of dark hadrons from the shower and hadronization). Namely, α_peak is denoted as the value of the dark gauge coupling constant at 1 TeV for which the multiplicity of dark hadrons is maximised, while α_low and α_high are respectively 1/2 α_peak and 3/2 α_peak. Three 2D scans changing M_Z' as a function of the other parameters have been performed, with a total of 575 signal points. The Z^' boson couplings have been chosen such that g_q = 0.25 and g_χ = 0.5. With these choices, the production cross section, branching fraction, and mediator width are compatible with the benchmark model recommended by the LHC DM Working Group <cit.>. The search strategy consists of looking for the heavy mediator Z^' producing a bump in the transverse mass spectrum M_T of the di-jet system. Two main approaches have been followed: 1. an inclusive search using only event-level variables, which should give “model-independent” results; 2. a BDT-based search: train an SVJ tagger using the signal model, and explore the full sensitivity leveraging the expected differences in jet substructure between semi-visible jets and Standard Model ones. The data are recorded with jet p_T and H_T triggers. At least two anti-k_T jets with radius R = 0.8, p_T > 200 GeV, and |η| < 2.4 are required. A selection on the transverse ratio R_T = E_T/M_T is applied instead of a selection on E_T, in order to avoid sculpting the transverse mass and to identify events with invisible particles. The remaining t-channel QCD events are rejected by requiring the pseudo-rapidity separation between the two highest-p_T jets to be Δη(J1, J2) < 1.5. Events with M_T > 1.5 TeV are selected to be in the fully efficient region for the triggers.
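A short sketch (our illustration; the four-vectors below are arbitrary numbers, not CMS data) of these event-level quantities, using the standard dijet-plus-missing-momentum transverse mass employed in semi-visible-jet searches:

import numpy as np

def mt_rt(j1, j2, met):
    """j1, j2: jet four-vectors (px, py, pz, E); met: (mex, mey)."""
    jj = j1 + j2
    m_jj2 = max(jj[3] ** 2 - np.dot(jj[:3], jj[:3]), 0.0)   # dijet invariant mass^2
    pt_jj2 = jj[0] ** 2 + jj[1] ** 2                        # dijet transverse momentum^2
    et_miss = np.hypot(met[0], met[1])
    mt2 = m_jj2 + 2 * (np.sqrt(m_jj2 + pt_jj2) * et_miss
                       - (jj[0] * met[0] + jj[1] * met[1]))
    mt = np.sqrt(max(mt2, 0.0))
    rt = et_miss / mt if mt > 0 else 0.0
    return mt, rt

j1 = np.array([800.0, 50.0, 300.0, 900.0])     # leading jet [GeV]
j2 = np.array([-700.0, -40.0, -500.0, 950.0])  # subleading jet [GeV]
met = np.array([-100.0, -10.0])                # missing transverse momentum [GeV]
mt, rt = mt_rt(j1, j2, met)
print(f"M_T = {mt:.0f} GeV, R_T = {rt:.2f}")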
In order to reduce the tt̅ and W(ℓν)+jets backgrounds, events containing mini-isolated electrons or muons are vetoed. Events with anomalously high E_T values can occur due to a variety of reconstruction failures, detector malfunctions, or other non-collision backgrounds. These events are rejected by custom filters tested in a dedicated control region at high Δη (J1, J2). The final selection requires the minimum azimuthal separation between the two highest-p_T jets and the missing momentum E_T to be Δϕ_min < 0.8. This requirement further rejects the electroweak backgrounds from W/Z+jets and selects a region of phase space complementary to WIMP searches. On top of the previous selections, in the BDT-based search a jet tagger is trained and applied to discriminate between SVJ and SM background jets using jet substructure variables. This tagger mainly employs 3 types of jet substructure variables, related to: heavy-object tagging (soft-drop mass, N-subjettiness ratios, energy correlation functions), quark-gluon discrimination (axes, generalized angularities) and flavour (energy fractions). The BDT has been trained with an equal mix of QCD and tt̅, and with a mixture of many signal models. To avoid mass sculpting, the background jets' p_T spectrum has been reweighted to match the signal. The tagger achieves overall very good performance, with AUC ∼ 0.93-0.95 with respect to all backgrounds.

The background estimation for both the inclusive and BDT-based strategies is performed via an analytic fit to the M_T data distribution. For the inclusive search strategy, a further categorization in R_T is employed: the data are divided into a low-R_T region (0.15 < R_T < 0.25) and a high-R_T region (R_T > 0.25). Including the low-R_T region improves the expected limit by ∼ 60% and helps cover the r_inv∼ 0 scenario. When the BDT is employed, subsets of the low-R_T and high-R_T inclusive signal regions are selected by requiring that both jets in each event are tagged as semi-visible. Events in which only one jet is tagged as semi-visible are not found to provide significant additional sensitivity. The results from the inclusive signal regions exclude observed (expected) values of up to 1.5 < M_Z' < 4.0 TeV (1.5 < M_Z' < 4.3 TeV), with the widest exclusion range for models with α_dark = α_low. Depending on the Z^' mass, 0.07 < r_inv < 0.53 (0.06 < r_inv < 0.57) and all M_d and α_dark variations considered are also observed (expected) to be excluded. The results from the BDT-based signal regions increase the observed (expected) excluded mediator mass range to 1.5 < M_Z' < 5.1 TeV (1.5 < M_Z' < 5.1 TeV) for wide ranges of the other signal parameters. The range of observed (expected) excluded r_inv values also increases, to 0.01 < r_inv < 0.77 (0.01 < r_inv < 0.78), and the M_d and α_dark variations are excluded for a wider range of Z^' masses. Within the CMS experiment, new analyses of SVJs are ongoing, trying to cover new event topologies, such as t-channel production, as well as to access lower Z^' masses in the boosted topology using data scouting. Moreover, anomaly detection techniques relying on autoencoders are under investigation for an unsupervised jet tagger for semi-visible jets, allowing a more model-independent search <cit.>.
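The reweighting step mentioned above can be illustrated with a short sketch (ours, not the analysis code): each background jet is weighted by the ratio of the normalised signal and background p_T histograms, so the classifier cannot learn the p_T difference between the samples and thereby sculpt the mass.

import numpy as np

def pt_reweight(sig_pt, bkg_pt, bins):
    # Per-jet training weights for the background sample so that its weighted
    # pT histogram matches the signal one.
    sig_h, _ = np.histogram(sig_pt, bins=bins, density=True)
    bkg_h, _ = np.histogram(bkg_pt, bins=bins, density=True)
    ratio = np.divide(sig_h, bkg_h, out=np.zeros_like(sig_h), where=bkg_h > 0)
    idx = np.clip(np.digitize(bkg_pt, bins) - 1, 0, len(ratio) - 1)
    return ratio[idx]

bins = np.linspace(200, 2000, 37)  # 50 GeV bins, illustrative
rng = np.random.default_rng(0)
weights = pt_reweight(rng.exponential(300, 10_000) + 200,
                      rng.exponential(200, 10_000) + 200, bins)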
Finally, new signatures for lepton-enriched semi-visible jets have been proposed in <cit.>, and a further analysis on these two signatures is in preparation.

§.§ Search for emerging jets with CMS in Run 2

Speaker: Jannicke Pearkes

The CMS experiment published in 2019 the first search for “emerging jets” (EJ) using 16.1 fb^-1 of data from 2016 <cit.>. The theoretical model is based on the proposal in <cit.>, which includes a complex scalar mediator X_DK, an SU(3) color triplet in SM QCD, that can be pair-produced via gluon fusion or quark-antiquark annihilation. Each mediator then decays to a dark quark Q_DK and an SM quark. The dark quarks form dark pions that typically have long lifetimes before they decay back to SM particles, so the final signature is four jets: two SM jets and two emerging jets. The EJ signal is simulated at leading order with the Hidden Valley model implemented in a modified PYTHIA 8.212 <cit.>. The number of dark colours is three, the number of dark flavours is set to 7, Λ = m_Q,DK, and Γ_X,DK is 10 GeV. All the Q_DK are mass-degenerate, and the meson masses are set such that m_π,DK = 0.5 m_Q,DK and m_ρ,DK = 2 m_Q,DK. These assumptions reduce the number of free parameters to three: the mediator mass m_X,DK, and the mass and decay length of the dark pion, m_π,DK and cτ_π,DK. The data are recorded with H_T > 900 GeV triggers, and four anti-k_T jets (R = 0.4) are required in the reconstruction, with at least one track in each jet. The analysis uses four kinematic variables based on the track impact parameters to identify EJs with varying significance into six “groups”, and then seven optimized selection “sets” to define signal- and background-enriched regions based on other event variables like H_T, the p_T of the four jets, E_T, and the number of identified EJs corresponding to a given group. For a given m_π,DK, the analysis is most sensitive (has the highest acceptance) for intermediate decay lengths (25 < cτ_π,DK < 100 mm) and high mediator masses (m_X,DK > 1200 GeV). For lower mediator masses, the trigger requirement lowers the acceptance; for small decay lengths the dark pions decay very fast, making them indistinguishable from SM QCD jets, while for large decay lengths the displaced tracks fall outside the tracking volume. Limits are set at 95% confidence level, excluding dark pion decay lengths between 5 and 225 mm for dark mediators with masses between 400 and 1250 GeV. Decay lengths smaller than 5 mm and greater than 225 mm are also excluded in the lower part of this mass range. A paper is in preparation with the full Run-2 dataset, which corresponds to almost 10 times more data. The CMS EJ group is currently exploring more dark QCD models in this signature, is implementing more targeted searches using machine learning to tag the jets, and is working to combine these results with the semi-visible jets signature <cit.>. For Run 3, currently taking place, new triggers have become accessible that include displaced jets and anomaly detection triggers.

§.§ Search for t-channel semi-visible jets with ATLAS in Run 2

Speaker: Deepak Kar

The search for t-channel semi-visible jets <cit.> was performed with the full Run 2 dataset. For the theory model, the Pythia8 HV dark coupling was set to run at one loop, the number of dark flavours (N_flav) was set to 1, and the dark confinement scale (Λ_D) was set to 6.5 TeV.
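The quoted generator configuration corresponds to a handful of Pythia 8 Hidden Valley settings. The list below is a sketch assembled from the parameters stated in the talk; the setting names follow the Pythia 8 Hidden Valley documentation, but the production card itself is not reproduced here, so every line (including the GeV units assumed for Lambda) should be treated as an assumption to verify against the Pythia 8 manual.

# Illustrative Pythia 8 Hidden Valley settings for the t-channel SVJ benchmark
# described above (a sketch, not the analysis production card).
hv_settings = [
    "HiddenValley:alphaOrder = 1",  # dark coupling running at one loop (as in the text)
    "HiddenValley:nFlav = 1",       # N_flav = 1 (the choice debated at the workshop)
    "HiddenValley:Lambda = 6500.",  # Lambda_D = 6.5 TeV, assuming GeV units
    "HiddenValley:fragment = on",   # enable dark-sector hadronisation
    "HiddenValley:FSR = on",        # enable dark-sector shower
]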
It was raised in discussion during the workshop that the choice of N_flav = 1 could be theoretically problematic. At leading order, the two SVJs are back-to-back, with the direction of the E_T aligned with one of the jets, which is a signature dominated by dijet background processes. The addition of extra jets results in a boost in the cross section, and a boost by additional jets leads to signatures where the E_T does not necessarily point in the direction of one of the two SVJs. Conversely, for multijet processes, the E_T is typically aligned with one of the jets, as it usually arises from mis-measured jets; thus, such events are typically discarded in searches involving jets and E_T. Events in this analysis were selected based on the presence of two central jets, an E_T trigger, leading jet p_T > 250 GeV, H_T > 600 GeV, E_T > 600 GeV, and the jet closest to the E_T having Δϕ < 2. Three dedicated control regions were constructed for background estimation. Challenges arose from difficulties in modelling signals with lower E_T, since they are buried under QCD signatures with fake E_T. To further address this, MET smearing techniques can be explored at the particle level (HEPData <cit.>, as well as Rivet <cit.> and MadAnalysis 5 <cit.> routines, will soon be made available). While it is necessary to include benchmark models, it was also noted that if the model demonstrates sufficient versatility, a signature-driven search approach may be warranted. Direct connections do not always exist between our motivation and the specific signature, as exemplified by cases such as R-parity-violating SUSY, where MET is notably absent. As a community, we should decide whether we opt for reasonably sensible models and more signature-driven searches, as opposed to a fully bottom-up, theoretically driven choice of models. Namely, two options were outlined:

* Option 1: Encourage the exploration of parameter choices like N_c = 3 and N_flav = 1, and manipulate r_inv to generate a wide range of values, with the caveat that we must explicitly define which observables and techniques should be avoided.
* Option 2: Contemplate experimental strategies first and then design models tailored to those strategies. For example, if all quarks are degenerate and dark pions decay to heavy flavour (HF), the triggering mechanism becomes a significant part of the question. Alternatively, if cascade decays lead to b-quarks decaying into leptons, this demands a different approach to triggering. Therefore, identifying the desired signatures, and then creating models in alignment with these strategies, represents a distinct school of thought.

§.§ Search for dark jet resonances with ATLAS in Run 2

Speaker: Dilia Maria Portillo Quintero

The search for dark jet resonances <cit.> was performed with the full Run 2 dataset recorded by ATLAS. Four benchmark models, introduced in <cit.>, were considered in the search. Large-radius jets were used as they better encapsulate the double hadronisation procedure and the resonance structure. To reduce the background and increase the analysis' sensitivity to the signal, dark jets were tagged using jet substructure information, namely the number of ungroomed tracks associated to a jet, n_track. A resonance was then searched for over the smoothly falling dijet invariant mass (m_jj) distribution. Challenges encountered during the analysis included defining a new observable to decorrelate n_track from m_jj, deciding whether n_track should be defined in data or MC, and estimating the background.
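One generic way to approach the first of these challenges, sketched below under the assumption that per-jet (n_track, m_jj) pairs are available, is to fit the average n_track as a smooth function of m_jj in a background-dominated sample and cut on the residual, which is flat in m_jj by construction. This illustrates the idea only; it is not the observable the analysis adopted.

import numpy as np

def decorrelated_ntrack(n_track, m_jj, deg=2):
    # Fit a smooth profile <n_track>(m_jj) and return the residual, which
    # carries the discriminating power without the m_jj dependence.
    coeffs = np.polyfit(m_jj, n_track, deg)
    profile = np.polyval(coeffs, m_jj)
    return n_track - profile

rng = np.random.default_rng(1)
m = rng.uniform(1500, 5000, 5000)
n = rng.poisson(20 + 0.002 * m)          # toy: n_track drifts with m_jj
x = decorrelated_ntrack(n, m)            # residual, roughly flat versus m_jj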
In discussion, it was noted that as a first iteration, this search had limitations, for example in terms of the HV parameter choices and the discriminating variable, n_track. The variable n_track is IRC-unsafe and may not be suitable for the next iterations; suggestions for other variables with discriminating power are welcomed. Further, it was mentioned that if the dark hadrons had masses greater than 15-20 GeV, new features could potentially show up in the tails of the m_jj distributions. Probing subjets within large-radius jets (a mostly model-independent approach) could give insight into the nature of the jet, since these subjets are metastable hidden hadrons <cit.>. One could check the correlations between the hardest subjets to verify whether there is an actual signal, and as a community, investigating the masses of the subjets should remain on the agenda. This approach can be considered for discriminating between jets originating from quarks/gluons and unconventional jets.

| http://arxiv.org/abs/2311.16330v1 | {"authors": ["Jonathan Butterworth", "Cesare Cazzaniga", "Aran Garcia-Bellido", "Deepak Kar", "Suchita Kulkarni", "Pedro Schwaller", "Sukanya Sinha", "Danielle Wilson-Edwards", "Jose Zurita"], "categories": ["hep-ph", "hep-ex"], "primary_category": "hep-ph", "published": "20231127213529", "title": "MITP Colours in Darkness workshop summary report"} |
A new fuzzy multi-attribute group decision-making method based on TOPSIS and optimization models

Qixiao Hu, Shiquan Zhang, Chaolang Hu, Yuetong Liu
School of Mathematics, Sichuan University, Chengdu 610065, Sichuan, China
Emails: [email protected] (S. Zhang), [email protected] (C. Hu)
Q. Hu: Conceptualization of this study, Methodology, Software

27 November 2024

Abstract: In this paper, a new method based on TOPSIS and optimization models is proposed for multi-attribute group decision-making in the environment of interval-valued intuitionistic fuzzy sets. Firstly, by minimizing the sum of differences between individual evaluations and the overall consistent evaluations of all experts, a new optimization model is established for determining expert weights. Secondly, based on the TOPSIS method, an improved closeness index for evaluating each alternative is obtained. Finally, the attribute weights are determined by establishing an optimization model with the goal of maximizing the closeness of each alternative, and they are substituted into the closeness index so that the alternatives can be ranked. Combining all of these, the complete fuzzy multi-attribute group decision-making algorithm is formulated, which exploits the advantages of both subjective and objective weighting methods. In the end, the feasibility and effectiveness of the proposed method are verified by a real case study.

Highlights:
* Unknown weights.
* Optimization models that combine the advantages of objective and subjective methods.
* Complete multi-attribute group decision-making method.

Keywords: interval intuitionistic fuzzy sets; expert weight; TOPSIS method; multi-attribute group decision-making; optimization models

§ INTRODUCTION

Multi-attribute group decision-making is a process in which many decision-makers determine the decision-making range around the decision-making goal and then propose decision-making methods to evaluate, rank and select alternatives <cit.>. This process mainly addresses problems of evaluation and selection, and its theory and methods are widely used in engineering, technology, economics, management and other fields. In this paper, "experts" generally refers to decision-makers.

The fuzzy multi-attribute group decision-making process can use many different methods, such as ELECTRE <cit.>, PROMETHEE <cit.>, TOPSIS <cit.> and so on. However, whichever method we use, we must consider how to handle the data fuzzily and how to determine the expert weights and attribute weights. Firstly, with the development of economy and society, the decision-making problems that people need to solve are becoming more and more complicated. On the one hand, it is difficult to quantify some attributes because of their fuzziness. In this situation, decision-makers cannot obtain accurate information and, accordingly, cannot make accurate evaluations. On the other hand, even if an attribute can be quantified, the evaluation value is easily rendered inaccurate by subjective and objective factors such as the limited energy of decision-makers and an incomplete understanding of the problem. It is not difficult to conclude that almost all decision-making processes involve fuzziness, which has made the problem of fuzzy multi-attribute decision-making attract widespread attention.
Fuzzy multi-attribute decision-making methods introduce fuzzy theory into multi-attribute decision-making to improve the scientificity and practicability of decision-making, because they can not only better describe the attributes of alternatives, but also overcome the difficulty of inaccurate evaluations by decision-makers caused by subjective and objective factors.

Secondly, the determination of both expert weights and attribute weights is very important in multi-attribute group decision-making and has attracted the attention of a large number of scholars, because different weights may lead to different decision-making results. The methods for determining weights fall into the following three categories: subjective weighting methods, objective weighting methods, and combined subjective-objective weighting methods. The subjective weighting method compares the importance of decision-makers or attributes and assigns weights accordingly. The advantage of this method is that the weights can reflect the importance of decision-makers or attributes, but it is subjective and requires considerable manpower and material resources. Common methods of this type are AHP <cit.> and Delphi. The objective weighting method uses objective data to obtain weights, which has the advantages of strong objectivity and a solid mathematical foundation. However, it does not consider the subjective intentions of decision-makers, and its results may be inconsistent with the actual situation. Common methods of this type are the entropy weighting method <cit.> and the deviation maximization method. The combined subjective-objective weighting method addresses the advantages and disadvantages of the other two, considering both the subjective intentions of decision-makers and the internal laws of the objective data <cit.>, thus making the results more realistic and reliable. Common methods include the goal programming method. In practical applications, we tend to use the combined subjective-objective weighting method to determine the expert weights and attribute weights, which makes the final decision-making result more credible.

In order to solve increasingly complex decision-making problems more reasonably, many scholars have studied fuzzy multi-attribute group decision-making methods in depth. In the environment of intuitionistic fuzzy sets, Sina et al. directly give the expert weights by a subjective weighting method, determine the attribute weights by combining CRITIC and Ideal Point, which are objective weighting methods, and then obtain the alternative ranking by combining ARAS and EDAS to solve the decision-making problem of entrepreneurial construction projects <cit.>. In the environment of interval-valued intuitionistic fuzzy sets, Ting-Yu directly gives the expert weights by a subjective weighting method, determines the attribute weights by a weight optimization model, and finally obtains the alternative ranking by the TOPSIS method to solve a treatment plan decision-making problem <cit.>. In the environment of intuitionistic fuzzy sets, Shi-fang et al. obtain the expert weights through the IFWA operator, aggregate the decision matrices corresponding to multiple experts into a single decision matrix, determine the attribute weights through intuitionistic fuzzy entropy, and finally solve a personnel decision-making problem through GRA <cit.>. In the environment of intuitionistic fuzzy sets, Behnam et al.
obtain the expert weights through the IFWA operator, then determine the attribute weights through a subjective weighting method combined with the IFWA operator, and solve the decision-making problem of a company updating its manufacturing system by combining the ELECTRE method <cit.>. In the environment of interval-valued intuitionistic fuzzy sets, Feifei et al. directly give the expert weights by a subjective weighting method, determine the attribute weights by continuous weighted entropy, and finally solve the evaluation problem of community emergency risk management by TOPSIS <cit.>. In the environment of interval hesitant fuzzy sets, Gitinavard et al. determine the attribute weights by combining expert weighting with an extended maximum deviation method, extend the IVHF-TOPSIS method to determine the expert weights, and then use the proposed IVHF-MCWR model to solve location and supplier decision-making problems <cit.>.

When fuzzy multi-attribute group decision-making is used to solve the above problems, the expert weights or attribute weights are determined by either a subjective or an objective weighting method, which cannot take into account the internal laws of the data itself and the expert opinions at the same time. Liu et al. put forward a new model of expert weight optimization <cit.>: when an expert's evaluation results are consistent with those of the whole expert group, we should give that expert a higher weight. At the same time, we can let decision-makers impose constraints on the expert weights, which fully develops the advantages of subjective and objective weighting methods. However, Ting-Yu assigns the expert weights directly <cit.>; thus this paper extends the optimization model for determining expert weights proposed by <cit.> to the environment of interval-valued intuitionistic fuzzy sets and combines it with <cit.>, so that the expert weights and attribute weights are each determined by an optimization model built from objective data while also allowing constraints from expert opinions. Finally, a complete fuzzy multi-attribute group decision-making process is formed by combining these with TOPSIS.

This paper is arranged as follows. In the second part, the related theories of interval intuitionistic fuzzy sets are expounded. The third part explains the extended expert weight determination method <cit.> and how to determine the attribute weights, as well as the extended TOPSIS <cit.>, and finally develops a complete fuzzy multi-attribute group decision-making method. The fourth part illustrates the effectiveness of this method through a decision-making case, and the fifth part is the summary of this paper.

§ PRELIMINARIES

Fuzzy sets are a common tool for handling fuzziness. In 1965, Zadeh put forward the concept of the fuzzy set <cit.>, which provided a way for people to deal with fuzzy information in decision-making problems. In 1986, Atanassov et al. extended the fuzzy set and put forward the intuitionistic fuzzy set <cit.>. This theory can simultaneously express the support, opposition and neutrality of the decision-maker in terms of membership, non-membership and hesitation, which can effectively deal with uncertain decision information. In 1996, Gehrke et al. advanced interval fuzzy sets to address the problem that it is too strict to use a single numerical value as the membership degree <cit.>; here the membership degree takes the form of a closed subinterval of an interval. In 2009, Torra et al.
proposed the concept of the hesitant fuzzy set in order to describe hesitation in the decision-making process <cit.>, which allows the existence of multiple membership values. In 2012, Zhu et al. combined hesitant fuzzy sets with intuitionistic fuzzy sets and proposed dual hesitant fuzzy sets <cit.>: they added a non-membership degree to the hesitant fuzzy set and allowed multiple values to appear in the non-membership degree. In 2013, Yager proposed Pythagorean fuzzy sets by adjusting the constraints on membership and non-membership in intuitionistic fuzzy sets <cit.>. Other types of fuzzy sets, such as interval intuitionistic fuzzy sets <cit.>, are all extensions of the above fuzzy sets.

The multi-attribute group decision-making algorithm proposed in this paper is discussed in the environment of interval intuitionistic fuzzy sets, so we list the related theory of interval intuitionistic fuzzy sets below.

Let X be a non-empty set. An interval intuitionistic fuzzy set on X is defined as follows:
A={<x,(μ_A(x),v_A(x))>|x∈ X },
where μ_A(x) and v_A(x) represent the membership interval and non-membership interval of x∈ X, respectively. They can be expressed as
μ_A(x)=[μ_A^-(x),μ_A^+(x)], v_A(x)=[v_A^-(x),v_A^+(x)],
and they satisfy μ_A(x)⊆ [0,1], v_A(x)⊆ [0,1] and 0≤μ_A^+(x)+v_A^+(x)≤ 1. When μ_A^-(x)=μ_A^+(x) and v_A^-(x)=v_A^+(x), interval intuitionistic fuzzy sets degenerate into intuitionistic fuzzy sets. At the same time, for ∀ x∈ X, the hesitation interval can be expressed as
π_A(x)=[π_A^-(x),π_A^+(x)]=[1-μ_A^+(x)-v_A^+(x),1-μ_A^-(x)-v_A^-(x)].

If
A_x=<μ_A(x),v_A(x)>=<[μ_A^-(x),μ_A^+(x)], [v_A^-(x),v_A^+(x)]>,
B_x=<μ_B(x),v_B(x)>=<[μ_B^-(x),μ_B^+(x)], [v_B^-(x),v_B^+(x)]>
are any two interval intuitionistic fuzzy sets and λ is any real number greater than 0, then the following operation rules hold:
* A_x⊕ B_x=<[μ_A^-(x)+μ_B^-(x)-μ_A^-(x)·μ_B^-(x), μ_A^+(x)+μ_B^+(x)-μ_A^+(x)·μ_B^+(x)], [v_A^-(x)· v_B^-(x), v_A^+(x)· v_B^+(x)]>;
* A_x⊗ B_x=<[μ_A^-(x)·μ_B^-(x), μ_A^+(x)·μ_B^+(x)], [v_A^-(x)+v_B^-(x)-v_A^-(x)· v_B^-(x), v_A^+(x)+v_B^+(x)-v_A^+(x)· v_B^+(x)]>;
* λ· A_x=<[1-(1-μ_A^-(x))^λ, 1-(1-μ_A^+(x))^λ], [(v_A^-(x))^λ, (v_A^+(x))^λ]>;
* (A_x)^λ=<[(μ_A^-(x))^λ, (μ_A^+(x))^λ], [1-(1-v_A^-(x))^λ, 1-(1-v_A^+(x))^λ]>.

If A_x and B_x as above are any two interval intuitionistic fuzzy sets, then the lower bound p^-(A_x⊇ B_x) of the inclusion comparison possibility of A_x and B_x is defined as <cit.>:
p^-(A_x⊇ B_x)=max{1-max{(1-v_B^-(x))-μ_A^-(x)/(1-μ_A^-(x)-v_A^+(x))+(1-μ_B^+(x)-v_B^-(x)), 0}, 0},
and the upper bound p^+(A_x⊇ B_x) of the inclusion comparison possibility of A_x and B_x is defined as:
p^+(A_x⊇ B_x)=max{1-max{(1-v_B^+(x))-μ_A^+(x)/(1-μ_A^+(x)-v_A^-(x))+(1-μ_B^-(x)-v_B^+(x)), 0}, 0}.
Then the inclusion comparison possibility p(A_x⊇ B_x) of A_x and B_x is defined as
p(A_x⊇ B_x)=1/2(p^-(A_x⊇ B_x)+p^+(A_x⊇ B_x)),
that is, the possibility that A_x is not smaller than B_x is p(A_x⊇ B_x). Then p(A_x⊇ B_x) has the following properties:
* 0≤ p(A_x⊇ B_x)≤ 1;
* p(A_x⊇ B_x)+p(A_x⊆ B_x)=1.

§ IMPROVED FUZZY MULTI-ATTRIBUTE GROUP DECISION-MAKING METHOD

§.§ Fuzzy multi-attribute group decision-making problem

With the increasing complexity of decision-making environments and problems, fuzzy multi-attribute group decision-making methods for solving evaluation and decision-making problems have received wide attention. Although there are many methods for fuzzy multi-attribute group decision-making,
Although there are many methods for fuzzy multi-attribute group decision-making, we all have to go through three steps: fuzzification, expert weight and attribute weight determination. When solving the multi-attribute group decision-making problem, we might as well make the following assumptions: * l decision makers;* m alternatives;* n indicators of each alternative.Experts need to evaluate each index of each alternative semantically, among which the k-th decision maker is marked as D_k, the i-th alternative is marked as A_i, and the j-th indicator is marked as x_j. The following Figure <ref> shows the process of solving the fuzzy multi-attribute group decision-making problem. Specifically, based on the improved TOPSIS method proposed in <cit.>, this paper first fuzzifies the semantic evaluation of the j-th index of the i-th alternative by the k-th decision-maker through the interval intuitionistic fuzzy set, which can be recorded as: A_ij^k=<[μ_ij^k-, μ_ij^k+], [v_ij^k-, v_ij^k+]>. Secondly, optimization models are established to determine the expert weight and attribute weight, and corresponding constraints can be added according to actual needs. In this way, we can not only give full play to the advantages of objective weighting method that make full use of objective data, but also avoid the disadvantages of not considering the subjective intention of decision makers. Finally, the alternatives can be sorted. Thus, a complete fuzzy multi-attribute group decision-making method is formed, which can be used to help people solve complex multi-attribute group decision-making problems.§.§ Determination of expert weight based on optimization modelLiu et al. points out that different experts have different degrees of experience and knowledge of related fields, therefore the importance of different experts should be different, and we should give higher weight to experts with rich experience and full understanding of decision-making projects <cit.>. In other words, if the evaluation results of an expert are more consistent with the evaluation results of all experts, the evaluation results of the expert will be more valuable for reference, so we give such experts greater weight. Then we will establish an optimization model based on this and combine the subjective intention of decision makers to restrict it, so that we can get the weight of experts through the combination of subjective and objective methods. Firstly, we assume that the decision matrix corresponding to the i-th alternative is A_(i):A_(i)=([ A_(i)^1 A_(i)^2 ⋯ A_(i)^l ]),where A_(i)^k=([ A_i1^k A_i2^k⋯ A_in^k ])^T represents the evaluation of the i-th alternative by the k-th expert, and we assume that the corresponding weight of experts is w=([ w_1 w_2 ⋯ w_l ])^T. Secondly, each alternative corresponds to a consistent score point, which is obtained by linear combination of l experts' evaluations. The interval intuitionistic fuzzy set corresponding to the evaluation of the j-th indicator of the i-th alternative by the k-th decision-maker is A_ij^k, which corresponds to four numbers. 
In order to use the weight determination model proposed by <cit.>, we split the interval intuitionistic fuzzy sets, so that the evaluation column vector of the k-th decision-maker for the i-th alternative becomes four times as long as the original one:
A_(i)^k=([ μ_i1^k- ⋯ μ_in^k- μ_i1^k+ ⋯ μ_in^k+ v_i1^k- ⋯ v_in^k- v_i1^k+ ⋯ v_in^k+ ])^T.
Then the consistent score point corresponding to the i-th alternative can be expressed as
b_(i)=∑_k=1^l w_k· A_(i)^k =([ ∑_k=1^l w_k·μ_i1^k-; ⋮; ∑_k=1^l w_k·μ_in^k-; ∑_k=1^l w_k·μ_i1^k+; ⋮; ∑_k=1^l w_k·μ_in^k+; ∑_k=1^l w_k· v_i1^k-; ⋮; ∑_k=1^l w_k· v_in^k-; ∑_k=1^l w_k· v_i1^k+; ⋮; ∑_k=1^l w_k· v_in^k+ ]),
which reflects the overall evaluation results of the experts on the i-th alternative. All alternatives are treated equally, so the decision matrices A_(i) corresponding to the individual alternatives can be assembled into an overall decision matrix A=([ A_(1) A_(2) ⋯ A_(m) ])^T when determining the expert weights, where the k-th column A^(k) of A represents the evaluation results of all alternatives by the k-th expert. The overall consistent score point is then b=([ b_(1) b_(2) ⋯ b_(m) ])^T, and the distance from the evaluation results of all alternatives by the k-th expert to the overall consistent score point b is
d_(k)= ‖ A^(k)-b ‖_2.
Finally, we can establish the following optimization model:
min_w Q(w)=∑_k=1^l d_(k)
s.t. ∑_k=1^l w_k=1, 0≤ w_k≤ 1, 1≤ k ≤ l.
This optimization model expresses the idea that the closer the evaluation results of an expert are to the overall consistent score point, the greater the weight given to that expert. For example, as shown in Figure <ref>, after processing with the above optimization model, the obtained expert weights are ranked as w_4>w_2>w_1>w_5>w_3. Liu et al. also prove that the optimization model has a unique solution <cit.>, and we can attach additional constraints to the optimization model, such as giving the highest weight to authoritative experts, while the optimization model still has a unique solution. Numerical experiments show that the closer an expert's evaluation results are to the consistent score point, the higher the weight that expert receives.

§.§ Improved TOPSIS method

An improved TOPSIS method for solving the multi-attribute group decision-making problem is proposed in <cit.>. In that paper, the expert weights are given directly, and their acquisition method is not explained. With the method of the previous subsection, we can calculate the expert weights by extending the method proposed by <cit.>. Therefore, after obtaining the expert weights, we can completely solve the multi-attribute group decision-making problem by combining them with the improved TOPSIS of <cit.>. The specific process is as follows. Firstly, after obtaining the expert weight vector w from the optimization model, we weight the evaluation A_ij^k of the j-th indicator of the i-th alternative by the k-th decision-maker to get A_·ij^k:
A_·ij^k=l· w_k· A_ij^k ≜ <[μ_·ij^k-, μ_·ij^k+], [v_·ij^k-, v_·ij^k+]>.
Secondly, according to <cit.>, the optimal membership degree p(A_·ij^k) of A_·ij^k can be calculated by the following formula:
p(A_·ij^k)=1/(l(l-1))·(∑_k^'=1^l p(A_·ij^k⊇ A_·ij^k^')+l/2-1).
Based on the above calculation, the interval intuitionistic fuzzy ordered weighted average (IIOWA) operator <cit.> can be extended to obtain the comprehensive decision matrix D.
The specific steps are as follows:
* Reorder ([ 1 2 ⋯ l ]) to ([ σ(1) σ(2) ⋯ σ(l) ]) such that p(A_·ij^σ(k-1)) ≥ p(A_·ij^σ(k));
* Calculate the weight vector τ= ([ τ_1 τ_2 ⋯ τ_l ])^T of the IIOWA operator:
τ_k=e^-((k-u_l)^2/(2· t_l^2))/∑_k^'=1^l e^-((k^'-u_l)^2/(2· t_l^2)),
where u_l is the average value of 1, 2, ⋯, l and t_l is the corresponding standard deviation.
* Use the IIOWA operator to calculate the element A_ij in row i and column j of the comprehensive decision matrix D:
A_ij = <[1-∏_k=1^l(1-μ_·ij^σ(k)-)^τ_k , 1-∏_k=1^l(1-μ_·ij^σ(k)+)^τ_k], [∏_k=1^l(v_·ij^σ(k)-)^τ_k, ∏_k=1^l(v_·ij^σ(k)+)^τ_k]>,
which represents the comprehensive evaluation of the j-th indicator of the i-th alternative by all experts, and is abbreviated as A_ij=<[μ_ij^-, μ_ij^+], [v_ij^-, v_ij^+]>.

The comprehensive decision matrix D obtained by the above steps can not only reflect the importance of different experts, but also the consistency of all experts' evaluations. Then, according to the comprehensive decision matrix D, the interval intuitionistic fuzzy positive and negative ideal solutions are found:
A_+={ <x_j, ([μ_+j^-, μ_+j^+], [v_+j^-, v_+j^+])> | x_j∈ X, j=1,2,⋯, n},
A_-={ <x_j, ([μ_-j^-, μ_-j^+], [v_-j^-, v_-j^+])> | x_j∈ X, j=1,2,⋯, n},
where
[μ_+j^-, μ_+j^+]=[((max_iμ_ij^-|x_j∈ X_b), (min_iμ_ij^-|x_j∈ X_c)), ((max_iμ_ij^+|x_j∈ X_b), (min_iμ_ij^+|x_j∈ X_c))],
[v_+j^-, v_+j^+]=[((min_iv_ij^-|x_j∈ X_b), (max_iv_ij^-|x_j∈ X_c)), ((min_iv_ij^+|x_j∈ X_b), (max_iv_ij^+|x_j∈ X_c))],
[μ_-j^-, μ_-j^+]=[((min_iμ_ij^-|x_j∈ X_b), (max_iμ_ij^-|x_j∈ X_c)), ((min_iμ_ij^+|x_j∈ X_b), (max_iμ_ij^+|x_j∈ X_c))],
[v_-j^-, v_-j^+]=[((max_iv_ij^-|x_j∈ X_b), (min_iv_ij^-|x_j∈ X_c)), ((max_iv_ij^+|x_j∈ X_b), (min_iv_ij^+|x_j∈ X_c))].
Here X_b denotes the set of benefit indicators and X_c the set of cost indicators. Finally, assuming that the attribute weight vector is w=([ w_1 w_2 ⋯ w_n ])^T, the improved closeness index of the i-th alternative is CC(A_i):
CC(A_i)= ∑_j=1^n p((A_ij⊇ A_-j|x_j∈ X_b), (A_-j⊇ A_ij|x_j∈ X_c))w_j·{∑_j=1^n[(p(A_+j⊇ A_ij)+p(A_ij⊇ A_-j)|x_j∈ X_b), (p(A_ij⊇ A_+j)+p(A_-j⊇ A_ij)|x_j∈ X_c)]w_j}^-1,
where 0≤ CC(A_i)≤ 1 (i=1,2,⋯, m). For the j-th indicator of the i-th alternative, if it is a benefit indicator, the inclusion comparison possibility p(A_ij⊇ A_-j) that A_ij is not smaller than A_-j and the inclusion comparison possibility p(A_+j⊇ A_ij) that A_ij is not greater than A_+j are calculated. If there is a higher possibility that A_ij is better than A_-j and a lower possibility that A_ij is worse than A_+j, then the j-th indicator of the i-th alternative performs well. The same holds for the cost indicators. Therefore, we can sort by the closeness CC(A_i) and choose the alternative with the largest index value as the optimal alternative.

§.§ Determination method of attribute weight

In the first two parts, the multi-attribute group decision-making problem can be solved by combining the method of determining expert weights proposed by <cit.> with the improved TOPSIS method in <cit.>, but how to calculate the attribute weights has not been explained. When the attribute weights are unknown, Ting-Yu suggests that an optimization model can be established by combining subjective and objective weighting methods to determine them <cit.>. The specific method is as follows. In the previous part, we ultimately choose the alternative through the improved closeness CC(A_i).
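As a concrete illustration, the two ingredients of CC(A_i), the inclusion comparison possibility of Definition 2.3 and the closeness ratio above, can be computed as follows. This is a minimal sketch in which an interval intuitionistic fuzzy value is stored as a tuple (μ^-, μ^+, v^-, v^+); the denominators are assumed non-degenerate, and all names are ours.

def incl_possibility(a, b):
    """p(a ⊇ b) for interval intuitionistic fuzzy values a, b = (mu-, mu+, v-, v+)."""
    am, aM, av, aV = a
    bm, bM, bv, bV = b
    # Lower and upper bounds of the inclusion comparison possibility (Def. 2.3).
    p_lo = max(1 - max(((1 - bv) - am) / ((1 - am - aV) + (1 - bM - bv)), 0), 0)
    p_hi = max(1 - max(((1 - bV) - aM) / ((1 - aM - av) + (1 - bm - bV)), 0), 0)
    return 0.5 * (p_lo + p_hi)

def closeness(row, a_plus, a_minus, w, is_benefit):
    """Improved closeness CC(A_i) for one alternative; lists indexed by attribute j."""
    num = den = 0.0
    for a, ap, am, wj, ben in zip(row, a_plus, a_minus, w, is_benefit):
        n = incl_possibility(a, am) if ben else incl_possibility(am, a)
        d = n + (incl_possibility(ap, a) if ben else incl_possibility(a, ap))
        num += wj * n
        den += wj * d
    return num / den

With the attribute weights w fixed, closeness(...) yields exactly the CC(A_i) values that the optimization model below maximizes.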
When the attribute weights are unknown, Ting-Yu established the following optimization model <cit.>:
max{ CC(A_1), CC(A_2), ⋯, CC(A_m)}
s.t. ∑_j=1^n w_j=1, w_j≥ 0, j=1,2,⋯, n.
This multi-objective optimization model is then transformed into the following single-objective optimization model by using the max-min operator proposed in <cit.>:
max ϑ
s.t. CC(A_i)≥ϑ, i=1,2,⋯, m,
([ w_1 w_2 ⋯ w_n ])∈Γ_0,
where
Γ_0={([ w_1 w_2 ⋯ w_n ])|∑_j=1^n w_j=1, w_j≥ 0, j=1,2,⋯, n},
and the attribute weights can be obtained by solving the above optimization model.

In practical applications, experts can constrain the attribute weights according to their own experience and professional knowledge; such constraints can be divided into five forms: weak ranking, strict ranking, ranking of differences, interval boundary and proportional boundary. However, it is almost impossible for the experts' opinions to be completely unified, so the following non-negative deviation variables
e_(1)j_1j_2^-, e_(2)j_1j_2^-, e_(3)j_1j_2j_3^-, e_(4)j_1^-, e_(4)j_1^+, e_(5)j_1j_2^- (j_1≠ j_2≠ j_3)
can be added to the five types of constraints to give the relaxed versions below.
* Relaxed weak ranking:
Γ_1={([ w_1 w_2 ⋯ w_n ])∈Γ_0 | w_j_1+e_(1)j_1j_2^-≥ w_j_2, j_1∈Υ_1, j_2∈Λ_1},
where Υ_1 and Λ_1 are two disjoint subsets of the index set N={1,2,⋯,n}.
* Relaxed strict ranking:
Γ_2={([ w_1 w_2 ⋯ w_n ])∈Γ_0 | w_j_1-w_j_2+e_(2)j_1j_2^-≥δ_j_1j_2^', j_1∈Υ_2, j_2∈Λ_2},
where δ_j_1j_2^' is a constant with δ_j_1j_2^'≥ 0, and Υ_2 and Λ_2 are two disjoint subsets of the index set N.
* Relaxed ranking of differences:
Γ_3={([ w_1 w_2 ⋯ w_n ])∈Γ_0 | w_j_1-2w_j_2+w_j_3+e_(3)j_1j_2j_3^-≥ 0, j_1∈Υ_3, j_2∈Λ_3, j_3∈Ω_3},
where Υ_3, Λ_3 and Ω_3 are three disjoint subsets of the index set N.
* Relaxed interval boundary:
Γ_4={([ w_1 w_2 ⋯ w_n ])∈Γ_0 | w_j_1+e_(4)j_1^-≥δ_j_1, w_j_1-e_(4)j_1^+≤δ_j_1+ε_j_1, j_1∈Υ_4},
where δ_j_1 and ε_j_1 are constants satisfying δ_j_1≥ 0, ε_j_1≥ 0, 0≤δ_j_1≤δ_j_1+ε_j_1≤ 1, and Υ_4 is a subset of the index set N.
* Relaxed proportional boundary:
Γ_5={([ w_1 w_2 ⋯ w_n ])∈Γ_0 | w_j_1/w_j_2+e_(5)j_1j_2^-≥δ_j_1j_2^'', j_1∈Υ_5, j_2∈Λ_5},
where δ_j_1j_2^'' is a constant with 0≤δ_j_1j_2^''≤ 1, and Υ_5 and Λ_5 are two disjoint subsets of the index set N.
Let Γ be the union of the five kinds of relaxed constraint sets: Γ=Γ_1∪Γ_2∪Γ_3∪Γ_4∪Γ_5.

Obviously, in the process of restricting the attribute weights by experts, the fewer conflicts there are among the experts' opinions, the easier it is to determine the final attribute weights. In other words, we hope that these non-negative deviation variables are as small as possible. Combined with the above optimization model, a new optimization model can be established:
max{ CC(A_1), CC(A_2), ⋯, CC(A_m)}
min{∑_j_1,j_2,j_3∈ N(e_(1)j_1j_2^-+e_(2)j_1j_2^-+ e_(3)j_1j_2j_3^-+ e_(4)j_1^-+ e_(4)j_1^++ e_(5)j_1j_2^-)}
s.t.
([ w_1 w_2 ⋯ w_n ])∈Γ,
e_(1)j_1j_2^-≥0, j_1∈Υ_1, j_2∈Λ_1,
e_(2)j_1j_2^-≥0, j_1∈Υ_2, j_2∈Λ_2,
e_(3)j_1j_2j_3^-≥0, j_1∈Υ_3, j_2∈Λ_3, j_3∈Ω_3,
e_(4)j_1^-≥0, e_(4)j_1^+≥0, j_1∈Υ_4,
e_(5)j_1j_2^-≥0, j_1∈Υ_5, j_2∈Λ_5.
In order to facilitate the solution, the above model can be transformed into a single-objective optimization model <cit.>:
max ϑ
s.t. CC(A_i)≥ϑ, i=1, 2, ⋯, m,
-∑_j_1,j_2,j_3∈ N(e_(1)j_1j_2^-+e_(2)j_1j_2^-+ e_(3)j_1j_2j_3^-+ e_(4)j_1^-+ e_(4)j_1^++ e_(5)j_1j_2^-)≥ϑ,
([ w_1 w_2 ⋯ w_n ])∈Γ,
e_(1)j_1j_2^-≥0, j_1∈Υ_1, j_2∈Λ_1,
e_(2)j_1j_2^-≥0, j_1∈Υ_2, j_2∈Λ_2,
e_(3)j_1j_2j_3^-≥0, j_1∈Υ_3, j_2∈Λ_3, j_3∈Ω_3,
e_(4)j_1^-≥0, e_(4)j_1^+≥0, j_1∈Υ_4,
e_(5)j_1j_2^-≥0, j_1∈Υ_5, j_2∈Λ_5.
By establishing this optimization model through a combination of subjective and objective methods, we consider not only the objective information contained in each indicator but also the subjective opinions of experts. Therefore, attribute weights with practical significance can be obtained to solve the multi-attribute group decision-making problem. At the same time, we can also impose other constraints on the attribute weights, such as requirements of the decision-making project itself.

§.§ The complete algorithm

In this paper, interval intuitionistic fuzzy sets, the optimization model for determining expert weights, the improved TOPSIS decision-making method and the optimization model for determining attribute weights have been introduced. This paper combines them for the first time, which completely solves the multi-attribute group decision-making problem and helps us make the final decision. The framework of this paper is shown in Figure <ref>. As can be seen from the figure: firstly, we use interval intuitionistic fuzzy sets to fuzzify the collected expert semantic evaluations; secondly, we establish an optimization model to determine the expert weights; then, the improved closeness index CC(A_i) is obtained by the TOPSIS method; next, an optimization model is established by maximizing the closeness index CC(A_i) to determine the attribute weights; finally, the attribute weights are substituted into the closeness index CC(A_i) so that the alternatives can be ranked. The complete algorithm is shown in Algorithm <ref>.

§ CASE STUDY

This section solves a multi-attribute group decision-making problem using the method proposed above. We use the same case as <cit.> and compare the final results. This is a decision-making problem about the treatment of basilar artery occlusion in an 82-year-old solitary patient with hypertension. Her two sons and one daughter {D_1, D_2, D_3} will make linguistic evaluations of four alternatives {A_1, A_2, A_3, A_4}: intravenous thrombolysis, intra-arterial thrombolysis, antiplatelet therapy and heparinization, according to five indicators {x_1, x_2, x_3, x_4, x_5}: survival rate, severity of complications, possibility of cure, cost and possibility of recurrence, where {x_1, x_3} are benefit indicators and {x_2, x_4, x_5} are cost indicators. The final decision is obtained through the above algorithm. Table <ref> shows the interval intuitionistic fuzzy sets corresponding to the different linguistic evaluations.
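Before stepping through the case, we sketch how the two optimization models can be solved numerically. The sketch uses SciPy's SLSQP solver on random stand-in data and, for brevity, omits the relaxed constraint sets Γ and the deviation variables; these simplifications are ours, not part of the proposed method. The attribute-weight part mirrors step 7 of the case study below, where each CC(A_i) is a ratio of linear forms in w.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Expert-weight model: min Q(w) = sum_k ||A^(k) - b(w)||_2 on the simplex.
E = rng.random((4 * 5 * 4, 3))                 # stand-in: m*n*4 rows, l=3 experts

def Q(w):
    b = E @ w                                   # overall consistent score point
    return sum(np.linalg.norm(E[:, k] - b) for k in range(E.shape[1]))

res_w = minimize(Q, np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
                 constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
expert_weights = res_w.x

# Attribute-weight max-min model: max theta s.t. CC_i(w) >= theta, w in simplex.
c = rng.uniform(0.5, 1.0, (4, 5))               # numerators of CC_i (stand-ins)
d = c + rng.uniform(0.3, 0.9, (4, 5))           # denominators dominate numerators

cons = [{"type": "eq", "fun": lambda z: z[:-1].sum() - 1}]
for ci, di in zip(c, d):
    # Default arguments freeze ci, di for each constraint closure.
    cons.append({"type": "ineq",
                 "fun": lambda z, ci=ci, di=di: ci @ z[:-1] / (di @ z[:-1]) - z[-1]})

z0 = np.append(np.full(5, 0.2), 0.0)            # z = (w_1..w_5, theta)
res_z = minimize(lambda z: -z[-1], z0,
                 bounds=[(0, 1)] * 5 + [(None, None)], constraints=cons)
attr_weights, theta = res_z.x[:-1], res_z.x[-1]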
The algorithm proposed in this paper can be used to make the decision:
* Collect the linguistic evaluations of the four alternatives by the three decision-makers for the five indicators, as shown in Table <ref>, and convert them into interval intuitionistic fuzzy sets according to Table <ref>.
* By splitting the interval intuitionistic fuzzy sets, the evaluation column vector of the k-th decision-maker for the i-th alternative becomes four times as long as the original one. Calculate the consistent score point b_(i) corresponding to the i-th alternative as the linear combination of the l experts' evaluations. Then assemble the evaluation results for all alternatives and calculate the distance from the evaluation results A^(k) of the k-th expert to the overall consistent score point b.
* Establish the expert weight optimization model:
min_w Q(w)=∑_k=1^3 d_(k)
s.t. ∑_k=1^3 w_k=1, 0≤ w_k≤ 1, 1≤ k ≤ 3.
The resulting weights of the three decision-makers are w=([ 0.45456 0.26647 0.27897 ])^T, which differs from the weights ([ 0.40 0.35 0.25 ])^T given directly in <cit.>. Table <ref> shows the weights of the three decision-makers and their distances to the overall consistent score point. By observation, the greater the weight of a decision-maker, the closer his or her evaluation is to the overall consistent score point, which is in line with our goal in establishing the optimization model (<ref>).
* Using the expert weights obtained above, weight the evaluations A_ij^k by (<ref>) to get A_·ij^k, and calculate the optimal membership p(A_·ij^k) of A_·ij^k by (<ref>). For example, A_·11^1=<[0.8490, 0.9832], [0, 0.0168]>, p(A_·11^1⊇ A_·11^2)=0.6650, p(A_·11^1⊇ A_·11^3)=0.9168 and p(A_·11^1)=0.4303.
* A comprehensive decision matrix D (whose weight vector is τ=([ 0.2429 0.5142 0.2429 ])^T) is obtained by using the extended IIOWA operator:
D=[<[0.6896, 0.9153], [0, 0.0816]>⋯ <[0.2454, 0.4422], [0.2565, 0.4644]>;⋮⋱⋮; <[0.4088, 0.6189], [0.0983, 0.3668]>⋯<[0.6959, 0.9192], [0, 0.0780]> ].
* Use (<ref>) and (<ref>) to find the interval intuitionistic fuzzy positive and negative ideal solutions corresponding to the comprehensive decision matrix D, and calculate the closeness of each alternative through (<ref>).
* Establish the attribute weight optimization model:
max ϑ
s.t. (0.9492w_1+0.6887w_2+w_3+0.9553w_4+0.9549w_5)/(1.4492w_1+1.4718w_2+1.8308w_3+1.8609w_4+1.7735w_5)≥ϑ,
(0.8645w_1+0.5w_2+w_3+0.5w_4+w_5)/(1.4894w_1+1.4165w_2+1.5w_3+1.5w_4+1.5w_5)≥ϑ,
(0.5w_1+0.7906w_2+0.5w_3+w_4+0.5429w_5)/(1.4492w_1+1.4450w_2+1.5w_3+1.5w_4+1.5429w_5)≥ϑ,
(0.6817w_1+0.8817w_2+0.7866w_3+0.8495w_4+0.5w_5)/(1.4745w_1+1.4442w_2+1.7866w_3+1.8495w_4+1.5w_5)≥ϑ,
-(e_(i)14^-+e_(ii)52^-+ e_(iii)324^-+ e_(iv)4^-+ e_(iv)4^++ e_(v)23^-)≥ϑ,
w_1+e_(i)14^-≥ w_4,
w_5-w_2+e_(ii)52^-≥ 0.04,
w_3-2w_2+w_4+e_(iii)324^-≥ 0,
w_4+e_(iv)4^-≥ 0.08, w_4-e_(iv)4^+≤ 0.15,
w_2/w_3+e_(v)23^-≥ 0.4,
e_(i)14^-≥0, e_(ii)52^-≥0, e_(iii)324^-≥0, e_(iv)4^-≥0, e_(iv)4^+≥0, e_(v)23^-≥0,
w_1+w_2+w_3+w_4+w_5=1, w_j≥0, j=1,2, ⋯, 5.
Then the weights of the five attributes are w=([ 0.2234 0.1659 0.2245 0.1074 0.2787 ])^T.
* Substitute the attribute weights obtained in step <ref> into the closeness index to obtain CC(A_1)=0.5575228, CC(A_2)=0.5686608, CC(A_3)=0.4058395, CC(A_4)=0.4583039. According to the closeness, the ranking of the alternatives is A_2>A_1>A_4>A_3, so the optimal alternative is A_2.

Compared with <cit.>, the improvement of this paper lies in the combination with <cit.>.
The expert weights obtained by establishing the optimization model through a combined subjective-objective weighting method are more convincing than expert weights given directly through a subjective weighting method. For the same case, the ranking obtained in <cit.> is also A_2>A_1>A_4>A_3. Although the closeness values are slightly different from those calculated in this paper, the final ranking of the alternatives is consistent, and the optimal alternative is A_2.

§ CONCLUSION

In the context of interval-valued intuitionistic fuzzy sets, this paper extends the optimization model for determining expert weights proposed in <cit.> and combines it with the method proposed in <cit.>. In this way, the determination of the expert weights and attribute weights exploits the advantages of both subjective and objective weighting methods. By combining these with TOPSIS, a complete fuzzy multi-attribute group decision-making method is formed. The feasibility of the proposed method is verified by solving the treatment scheme decision-making problem in <cit.>. Comparing the results of this paper with those of <cit.>, we find that the final decision-making results are consistent, which verifies the effectiveness of the proposed method. Our next research direction is to continue improving the optimization model for determining expert weights proposed in <cit.>, so that it can be used in other multi-attribute group decision-making methods to improve the decision-making effect.

§ ACKNOWLEDGEMENTS

This work is supported by the “Key Research Program” (Grant No. 2022YFC3801300) of the Ministry of Science and Technology, PRC. We would like to express our thanks to Professor Han Huilei and Professor Huang Li for their assistance.

| http://arxiv.org/abs/2311.15933v1 | {"authors": ["Qixiao Hu", "Shiquan Zhang", "Chaolang Hu", "Yuetong Liu"], "categories": ["cs.AI"], "primary_category": "cs.AI", "published": "20231127154130", "title": "A new fuzzy multi-attribute group decision-making method based on TOPSIS and optimization models"} |
An Unexpected Class of 5+gon-free Line Patterns

Milena Harned (Girls' Angle, Email: [email protected])
Iris Liebman (Girls' Angle, Email: [email protected])

August 1, 2023

Abstract: Let S be a finite subset of ℝ^2 ∖{(0,0)}. Generally, one would expect the pattern of lines Ax + By = 1, where (A, B) ∈ S, to contain polygons of all shapes and sizes. We show, however, that when S is a rectangular subset of the integer lattice or a closely related set, no polygons with more than 4 sides occur. In the process, we develop a general theorem that explains how to find the next side as one travels around the boundary of a cell.

§ INTRODUCTION

Consider an arbitrary, finite collection of lines of the form Ax+By=1, where A and B are integers, such as the 25 lines shown in Figure <ref>. These lines dissect the plane into regions, the bounded ones of which are convex polygons of various shapes and sizes. For example, the central polygon is a decagon, and there are several pentagons, some of which have been shaded to better illustrate their presence.

Now consider the pattern formed by the 80 lines of the form Ax + By = 1, where A and B are integers between -4 and 4, inclusive. Here every polygon is either a triangle or a quadrilateral! Figure <ref> shows the first quadrant of this pattern. We generalize this observation as follows: Let S be a rectangular lattice (see Definition <ref>) and consider the pattern of lines of the form Ax + By = 1, where (A, B) ∈ S. The bounded regions formed will all be triangles or quadrilaterals. For a precise statement, see Theorem <ref>.

The key observation is that if one walks around the boundary of one of these polygons that does not contain the origin, no three consecutive sides can have the origin on the same side (i.e., to the left or right). Since the origin switches to the opposite side of the sides of a convex polygon exactly twice per lap, no such polygon can have more than 4 sides.

Theorem <ref> is tight in the sense that it requires regularity of the rectangular lattice (see Remark <ref>). We also show that subsets of the lattice obtained by intersecting with a triangle do not necessarily enjoy this property by providing a counterexample in Section <ref>.

To prove Theorem <ref>, we developed a general method for determining the next side of a polygon, Theorem <ref>, which is one of the main results of this paper. In more detail, suppose that we are travelling clockwise around the boundary of some polygon and A_1x + B_1y=1 and A_2x+B_2y=1 are lines that contain two of its consecutive sides. This theorem enables us to find the line Ax+By=1 which contains the next side of the polygon that we encounter. The proof is somewhat delicate because two intersecting lines can be consecutive sides of 4 different polygons, so the lines themselves do not supply sufficient information to answer the question.

Line arrangements such as the ones we consider in this paper have been studied for decades. For example, Grünbaum <cit.> provides an early survey and, more recently, Leaños et al. <cit.> studied 5+gon-free simple arrangements. In <cit.>, line arrangements live in the projective plane and are treated combinatorially, with a focus on arrangements where no three lines are concurrent (simple) or where all regions are triangles (simplicial). Here we consider sets of lines that are, in general, neither simple nor simplicial, and we introduce a way to coordinatize these lines.
Furthermore, we focus our study on ℝ^2 rather than the projective plane because we use notions of orientation, such as left, right, clockwise, and counterclockwise.

§ NOTATION

In this section we state the notation conventions we will use throughout the paper.

We wish to draw a careful distinction between an ordered pair thought of as a pair of coefficients versus as coordinates of a point in the Euclidean plane. For this reason, we define 𝒞 to be the plane of coefficients and ℝ^2 to be the Euclidean plane. Thus, if (A, B) ∈𝒞, we are thinking of A and B as coefficients in the linear equation Ax + By = 1. We think of the line that corresponds to the graph of Ax + By = 1 as living in ℝ^2. Specifically, we define 𝒞 to be ℝ^2 ∖{(0, 0)}. Then for any P = (A, B) ∈𝒞, define L_P to be the line in ℝ^2 defined by the equation Ax + By = 1. For any S ⊂𝒞, define ℒ_S = {L_P | P ∈ S }.

We think of 𝒞 as parameterizing the set of lines that do not pass through the origin of ℝ^2. Please note that while 𝒞 is closely related to the dual space of ℝ^2, it is not the dual space, since we think of points in 𝒞 as specific lines, not linear functionals.

We denote by O the origin. Whenever we speak of “left”, “right”, “clockwise”, or “counterclockwise”, we mean it from the vantage of one standing on the plane with “up” oriented in accordance with the right-hand rule.

Let P, Q ∈𝒞. We say that “Q is to the left of P” if it is in the half-plane to the left of the line OP. Similarly, we say that “Q is to the right of P” if it is in the half-plane to the right of OP.

Finally, when we refer to traveling “to the right” on a line L that does not contain O, we mean to travel in the direction of the points that are to the right of a given point on L. Similarly, traveling “to the left” on L means to move in the direction of points to the left of a given point on L.

§ GENERAL OBSERVATIONS

§.§ Lines of the Form Ax+By=1

We first collect some basic facts regarding lines of the form Ax+By=1 in the form we need. These facts are well-known and we omit their proofs. For example, Proposition <ref>(iv) follows immediately from Theorem 3.3.4(a) in <cit.>.

Let P = (A, B) ∈𝒞. Then
* The line through O and (A, B) in ℝ^2 is perpendicular to L_P. [We state it this way to emphasize that we are thinking of 𝒞 and ℝ^2 as different planes.]
* The x-intercept of L_P is 1/A (when A ≠ 0).
* The y-intercept of L_P is 1/B (when B ≠ 0).
* The distance between (A, B) in ℝ^2 and O is the reciprocal of the distance between L_P and O.
* If A^2 + B^2 = 1, then (A, B) in ℝ^2 is on L_P.
* If A^2 + B^2 > 1, then the line L_P separates (A, B) in ℝ^2 from O.
* If A^2 + B^2 < 1, then (A, B) in ℝ^2 and O are on the same side of L_P.

§.§ Linear Transformations

Consider a non-degenerate linear transformation M : ℝ^2 →ℝ^2. We denote the transpose of M by M^t. We compute
1 = Ax + By = ([ A; B ])^t ([ x; y ]) = ([ A; B ])^t M^t (M^t)^-1([ x; y ]) = (M ([ A; B ]))^t (M^t)^-1([ x; y ]).
Thus, if (x, y) is a point on the line Ax + By = 1, then (M^t)^-1([ x; y ]) is a point on the line A^' x + B^' y = 1, where
([ A^'; B^' ]) = M ([ A; B ]).
In other words, we have:

Let M : ℝ^2 →ℝ^2 be a non-degenerate linear transformation and let S ⊂𝒞. Then
ℒ_MS = {(M^t)^-1L_P | P ∈ S},
where MS = { MC | C ∈ S }.

§.§ Multiple Lines of the Form Ax+By=1

A line in 𝒞 that passes through the origin corresponds to a family of parallel lines in ℝ^2.
This follows from Proposition <ref>(i).

Let m, n ∈ℝ be constants, not both zero. Consider the line l of points in the coefficient plane given by mA + nB = 1. Then ℒ_l consists of all lines through (m, n) ∈ℝ^2 except for the line through (m, n) that passes through O.

For any (A, B) ∈ l, we have mA + nB = 1, so (m, n) is on every line L_P where P ∈ l. Conversely, if L is a line that contains (m, n) but not O, then we may write the equation of the line L as Cx+Dy=1 for some constants C and D such that Cm + Dn = 1. Therefore (C, D) ∈ l.

In the setup of the above proposition, note that the lines L_P rotate around (m,n) in the clockwise direction when P traverses l from left to right.

Let S ⊂𝒞. The regions created by the lines in ℒ_S are convex. Furthermore, if S is finite, the bounded regions created by ℒ_S are convex polygons.

All the regions formed by the lines in ℒ_S are intersections of half-planes. Since half-planes are convex, it follows that all regions created by the lines in ℒ_S are convex. (For general properties of convex sets, see <cit.>.) Additionally, when S is finite, a bounded region will have a finite number of linear sides, and hence be a polygon.

§.§ Polygons and Points

Let S ⊂𝒞 be finite. From Proposition <ref>, we know that ℝ^2 is partitioned into convex regions, of which the bounded ones are polygons. Let Δ be one of these polygonal regions, and let n be the number of its sides.

Define L_k, k ∈ℤ/nℤ, to be the sequence of lines that contain the sides of Δ as encountered when traveling around Δ in the clockwise direction (up to translation in the index k). Similarly, define the sequence of points in 𝒞 that correspond to said L_k to be P_k. Define the sequence D_k, k ∈ℤ/nℤ, to be a sequence of l's and r's (for “left” and “right”) by declaring D_k to be the side of L_k on which the origin sits when traveling around the boundary of Δ in the clockwise sense. See Figure <ref> for an illustration of these definitions.

If D_k = D_k+1, then the polygon Δ and the origin will be contained in a pair of opposite angles formed by L_k and L_k+1. If D_k ≠ D_k+1, then Δ and the origin will be in adjacent angles formed by L_k and L_k+1.

Note that as we travel around the boundary of Δ in the clockwise direction, the region Δ is always on our right side. Suppose D_k = D_k+1=r. Then the origin is also on the right of both sides when walking from side k to side k+1. Therefore, both Δ and the origin are in the same angle formed by L_k and L_k+1. When D_k = D_k+1=l, the origin is on the left of both sides when walking from side k to side k+1. Therefore, Δ and the origin are in opposite angles formed by L_k and L_k+1. If D_k ≠ D_k+1, Δ is on the same side of one of the lines as the origin and on the opposite side of the other line from the origin. Therefore, Δ and the origin are in adjacent angles formed by L_k and L_k+1.

If D_k = D_k+1, then the line P_kP_k+1 and the segment P_kP_k+1 meet S in the same set of points. If D_k ≠ D_k+1, then the segment P_kP_k+1 meets S exactly in { P_k, P_k+1}.

Recall from Proposition <ref> that ℒ of the line P_kP_k+1 comprises the set of all lines through L_k∩ L_k+1 other than the one which passes through O. The line that contains O is the limit of the lines L_Z as Z moves on the line P_kP_k+1 toward infinity (in either direction).
Therefore, the union of the lines L_Z for Z in the segment P_kP_k+1 is the pair of opposite angles created by L_k and L_k+1 which does not contain O, and the union of the lines L_Z for Z on the line P_kP_k+1 but outside the segment, together with the line that passes through O and L_k∩L_k+1, is the pair of opposite angles created by L_k and L_k+1 which does contain O. If D_k = D_k+1 then, by Lemma <ref>, Δ is in the pair of opposite angles that contains the origin. Therefore, there must not be any point of S on the line P_kP_k+1 outside the segment P_kP_k+1, for lines associated to such points would split Δ. If D_k ≠ D_k+1, then, by Lemma <ref>, Δ is in the pair of opposite angles that does not contain the origin. Therefore, there must not be any point of S on the segment P_kP_k+1, other than P_k and P_k+1, for lines associated to such points would split Δ. Let x ∈ℝ^2 be any point not in Δ or on the extension of any of the sides of Δ. Then there exist exactly two vertices of Δ where Δ and x are in adjacent angles formed by the extensions of the sides intersecting at that vertex. Let us first draw the lines connecting x with each vertex of Δ. Consider the vertices R_i and R_j of Δ such that the angle formed by the rays xR_i and xR_j is maximized. Since this angle is maximized, xR_i and xR_j do not intersect the interior of Δ, so no pair of opposite angles formed by sides meeting at either R_i or R_j will contain both Δ and x. The lines connecting x with the remaining vertices of the polygon will be contained within the angle formed by xR_i and xR_j; thus, these lines must go through the interior of Δ. For any vertex R_k where xR_k goes through the interior of Δ, the pair of opposite angles formed by the sides of Δ with a vertex at R_k will contain both Δ and all points on xR_k, including x. If we arrange the values of the sequence D_1, …, D_n in order around a circle, there will be two places where the adjacent values differ, unless Δ contains the origin, in which case all the values will be r. Assume Δ does not contain the origin. Imagine rotating L_k clockwise to L_k+1 around L_k∩L_k+1. By Lemma <ref>, note that as we rotate the line, it will cross over the origin if and only if D_k ≠ D_k+1. This occurs if and only if L_k∩L_k+1 is equal to R_i or R_j, where R_i and R_j are as defined in the proof of Proposition <ref>. Thus, there will be only two places where the values of D_k change. When Δ contains the origin, there is no place where the values can change, since all rays from the origin through a vertex of Δ go through the interior of the polygon; and, indeed, when Δ contains the origin, D_k = r for all k. If P_k+1 is to the left of P_k then D_k+1≠ D_k. If P_k+1 is to the right of P_k then D_k+1 = D_k. Consider the line P_kP_k+1. As a point P ∈ 𝒞 travels from P_k to the right along the line P_kP_k+1, the line L_P rotates clockwise. If P_k+1 is hit before P wraps around to the opposite side, then D_k+1 will stay equal to D_k. As P wraps around to the opposite side, L_P in ℝ^2 passes over the origin. Therefore, if P_k+1 is to the left of P_k, then D_k+1≠ D_k. We now explain how to find P_k+2, given Δ, P_k, and P_k+1. For this discussion, please refer to Figure <ref>. Our strategy for finding P_k+2 is to travel along L_k+1 to the next vertex of Δ (i.e., from L_k∩L_k+1 to L_k+1∩L_k+2). As we travel, we take note of the set of points in 𝒞 that corresponds to the lines through each position on L_k+1 that we pass through. We stop when this set of points contains at least one point in S other than P_k+1.
We then ascertain which of these points of S corresponds to P_k+2. By Proposition <ref>, we know that for any line K ⊂ 𝒞 through P_k+1, ℒ_K corresponds to a set of lines in ℝ^2 that concur at a point on L_k+1. If K is the line P_kP_k+1, then the point of concurrency of the lines in ℒ_K is L_k∩L_k+1. As K rotates around P_k+1 away from the line P_kP_k+1, the point of concurrency moves along L_k+1 away from L_k∩L_k+1. As K rotates toward the line OP_k+1 from either side, the point of concurrency moves to infinity. To be more precise, let M be the line through P_k that is parallel to the line OP_k+1. Note that M does not contain O because L_k and L_k+1 intersect and so cannot be parallel, which means the line P_kP_k+1 does not contain the origin (see Proposition <ref>). Therefore, we can define a map m : M →{lines in 𝒞 that contain P_k+1} by sending each P ∈ M to m(P) = PP_k+1. Note that m(P) parametrizes all the lines in 𝒞 that pass through P_k+1. As P moves right along M, m(P) rotates clockwise about P_k+1, and as P tends to infinity in either direction, m(P) tends to the line OP_k+1. Thus, as P moves away from P_k, the point of concurrency of the lines in ℒ_m(P) moves away from L_k∩L_k+1, and as P tends to infinity in either direction, the point of concurrency tends to infinity. Starting at P_k, which way should P move along M so that the corresponding point of concurrency of the lines in ℒ_m(P) moves toward the next vertex of Δ? The answer, as we shall show, is that P should move along M in the direction given by D_k+1. We first note that by Definition <ref>, L_k+2 intersects L_k+1 on the origin side or the non-origin side of L_k depending on whether D_k is r or l, respectively. Therefore, P must be moved so that the point of concurrency of the lines in ℒ_m(P) is on the origin or the non-origin side of L_k, depending on whether D_k is r or l, respectively. We can determine which side of L_k the point of concurrency is on by looking at which line, parallel to L_k, passes through the point of concurrency; that is, by looking at the point m(P) ∩ OP_k, where OP_k here denotes the full line through O and P_k (unless m(P) is parallel to the line OP_k, in which case the point of concurrency is where the line parallel to L_k and through O intersects L_k+1). Therefore, when P ≠ P_k, the point of concurrency is on the non-origin side of L_k if m(P) ∩ OP_k lies in the segment OP_k, and on the origin side of L_k otherwise. If P_k+1 is to the right of P_k, then m(P) ∩ OP_k lies in the segment OP_k if and only if P is to the left of P_k. If P_k+1 is to the left of P_k, then m(P) ∩ OP_k lies in the segment OP_k if and only if P is to the right of P_k. Since m(P) never crosses over the origin, rightward motion along m(P) corresponds to the direction from P to P_k+1 if P_k+1 is to the right of P_k, and the direction from P_k+1 to P if P_k+1 is to the left of P_k. With these observations in mind, suppose (D_k, D_k+1) = (r, r). Then we must move P so that the point of concurrency moves into the origin side of L_k. This means that we do not want m(P) ∩ OP_k to lie in the segment OP_k. Now P_k+1 cannot be to the left of P_k because, with D_k+1=D_k=r, we would have to be able to reach P_k+1 from P_k by moving to the right. So P_k+1 is to the right of P_k, which means P must move to the right from P_k, which corresponds to D_k+1. A similar argument shows that P moves from P_k in the direction given by D_k+1 in all cases (see Figure <ref>). We move P until m(P) first touches points of S other than those on the line P_kP_k+1. Let P^* be the value of P when this occurs. Since S is a finite set, there are finitely many points in m(P^*) ∩ S.
Let Q_0, Q_1, Q_2, Q_3, …, Q_q be the points of m(P^*) ∩ S, with Q_0 = P_k+1 and the Q_i indexed in the order that the points appear on m(P^*), starting at Q_0 and moving away from the direction of P^*, wrapping around to the opposite end of m(P^*) if necessary. Which of the points Q_i corresponds to P_k+2? Since we are progressing around the boundary of Δ in the clockwise direction, when we reach L_k+1∩L_k+2, the next side will be on the line L_Q_i in which Q_i is the last of the Q_i's, i>0, that we encounter as we rotate away from L_k+1 in the clockwise direction. Suppose D_k=D_k+1. In this case P_k+1 is to the right of P_k, so rightward motion along m(P^*) corresponds to walking along m(P^*) from P_k+1 away from P^*. Therefore, clockwise rotation about L_k+1∩L_k+2 corresponds to walking along m(P^*) away from P^*. Therefore, the next side will be L_Q_q. Suppose D_k ≠ D_k+1. In this case P_k+1 is to the left of P_k, so rightward motion along m(P^*) corresponds to walking along m(P^*) from P_k+1 toward P^*. Therefore, clockwise rotation about L_k+1∩L_k+2 corresponds to walking along m(P^*) toward P^*. Therefore the next side will be L_Q_1. To summarize, in the notation set up in the preceding discussion, P_k+2 = Q_q if D_k=D_k+1 and P_k+2 = Q_1 if D_k ≠ D_k+1.

§.§ The Region Containing O

Let S ⊂ 𝒞 be a finite set. In this section, we identify the points in S that correspond to the sides of the region containing O. Let O_k ∈ 𝒞 be the points in S that correspond to the sides of the region containing O as encountered in the clockwise order. If the region containing O is bounded, O_k is defined up to translation of the indices. Otherwise, O_k is indexed so that the unbounded sides correspond to the first and last terms of the sequence. The exact determination of O_k depends on whether the region containing O is bounded or not and on the dimension of the convex hull of S. We shall now go through the various cases and explain precisely what O_k is, and then prove that our prescription is correct in Proposition <ref>. Let C be the convex hull of S. (Note that the vertices of C are among the points of S.) We consider six cases:

(i) C is not a polygon.
(a) S = {P}. Set {O_k} to be the one-term sequence with O_1 = P.
(b) C is a line segment of nonzero length and the line containing C also contains the origin. If the origin is not interior to C, set {O_k} to be the one-term sequence with O_1 being the point of S farthest from the origin. If the origin is interior to C, set {O_k} to be the two-term sequence with O_1 and O_2 being the endpoints of C in either order.
(c) C is a line segment of nonzero length and the line containing C does not contain the origin. Set {O_k} to be the two-term sequence consisting of the endpoints of C labeled so that O_2 is to the right of O_1.
(ii) C is a polygon.
(a) The interior of C contains the origin (see Figure <ref>). Set O_1, O_2, O_3, …, O_N to be the vertices of C in clockwise order, starting at an arbitrary vertex of C.
(b) C does not contain the origin. Then there are exactly two rays emanating from the origin that pass through the boundary of C but not its interior. Label these rays r_1 and r_2 so that the points of r_2 ∖{(0,0)} are to the right of the points of r_1 ∖{(0,0)}. Between r_1 and r_2, the boundary of C contains two paths that both have one point on r_1 and one point on r_2. (Note that the union of these two paths may not comprise the entire boundary of C if r_1 or r_2 intersects the boundary of C in a line segment.)
Each ray from the origin that intersects the interior of C intersects the boundary of C in two points (because C is convex), one farther from the origin than the other. Call the one of the two paths that contains the points on these rays farther from the origin the “outer” path. Set O_1 to be the point of S which is the point on this outer path on r_1. As we travel from O_1 along the outer path, by construction, we will be going clockwise around C. Let O_2, O_3, …, O_N be the vertices of C as we encounter them, with O_N being the point of the outer path on r_2. (Note that C may have more than N vertices.)
(c) The origin is on the boundary of C. Set O_1, O_2, O_3, …, O_N to be the vertices of C as they are encountered if one travels once around the boundary of C in the clockwise direction starting at the origin.

Let S ⊂ 𝒞 be a finite set. Let N be the length of the finite sequence O_k, as defined in Definition <ref>. Let Δ be the region that contains O. The boundary of Δ is precisely given by the lines L_O_k, 1 ≤ k ≤ N. We use the notation and case numbering from the discussion just after Definition <ref>. The cases i(a-c) may be directly verified. So assume that the convex hull of S is a nondegenerate convex polygon. Case ii(a). Let us regard the indices of the sequence {O_k} modulo N. Note that the line O_kO_k+1 and the segment O_kO_k+1 meet S in the same set. Also, O_k+1 is to the right of O_k, by construction. Therefore, if one travels from L_O_k onto L_O_k+1 in the clockwise direction, one will be walking around the boundary of some region Δ. Furthermore, O will be on the right as one walks along this part of the boundary of Δ. By Theorem <ref>, we find that the next side of Δ encountered will indeed be L_O_k+2. Thus, the {O_k} are the sides of a bounded region for which O is always to the right as one travels around its boundary in the clockwise direction. Since clockwise travel around the boundary of any bounded convex region that does not contain O must involve O switching from right to left or left to right exactly twice, we conclude that Δ must be the region that contains O. Case ii(b-c). When C does not contain the origin, the sequence {O_k} must still form the boundary of some region Δ for which O is always on the right as one travels around Δ in the clockwise direction. However, by construction, when one reaches L_O_N from L_O_N-1, one will walk off to infinity without encountering any further intersection with lines of ℒ_S (because, as in the discussion preceding Theorem <ref>, we would rotate the line O_N-1O_N about O_N and pass over the origin before meeting another point in S). Similarly, by reversing the construction in the discussion preceding Theorem <ref>, we see that L_O_1 also has no further intersections with lines in ℒ_S when one travels counterclockwise along it from the intersection of L_O_2 and L_O_1. Thus, the {O_k} must correspond to the lines that form the boundary of the unbounded region which contains O (because if O were not in the region, O would still have to switch to one's left at some point as one travels clockwise about the boundary of the region, even though the region is unbounded).

§ RECTANGULAR LATTICES

We define a “rectangular lattice” as follows. Let (a, b) ∈ 𝒞. Let Δ_x, Δ_y ∈ℝ_>0. Let N, M ∈ℤ_≥ 0. Assume in addition that the three corners (a, b + MΔ_y), (a + NΔ_x, b), and (a + NΔ_x, b + MΔ_y) are all in 𝒞. We define a rectangular lattice to be a subset S of 𝒞 of the form:
S ≡{(a + kΔ_x, b + jΔ_y) | 0 ≤ k ≤ N, 0 ≤ j ≤ M, k, j ∈ℤ_≥ 0}∩ 𝒞.
(The purpose of intersecting with 𝒞 is merely to eliminate the possibility of including the origin.) We use this definition of S throughout this section. We call a point of the form (a + kΔ_x, b + jΔ_y) where k = 0, k=N, j=0, or j=M a boundary point of S and refer to the union of these points as the boundary of S. Let S ⊂ 𝒞 be a rectangular lattice and let Δ be a bounded polygonal region created by the lines of ℒ_S which does not contain O. Then no three consecutive D_k's are equal. Assume D_k = D_k+1. We shall prove that D_k+1≠ D_k+2. We split into two cases, depending on whether D_k=D_k+1 is r or l. We begin with the case D_k=D_k+1=r. By Proposition <ref>, P_k+1 is to the right of P_k. By Lemma <ref>, the line P_kP_k+1 and the segment P_kP_k+1 meet S in the same set. Also, because L_k and L_k+1 are consecutive sides of Δ, they cannot be parallel; therefore the origin of the coefficient plane is not on the line P_kP_k+1. We use the notation as set up in the discussion of how to find the next side preceding Theorem <ref>. Thus, m(P) = PP_k+1, where P is on the line M which passes through P_k and is parallel to the line OP_k+1. According to our discussion there, since D_k+1 = r, the next side around Δ is found by moving P to the right along M from P_k. Let m be the first line m(P) that hits another point in S as P moves right from P_k. (In the notation preceding Theorem <ref>, we have m = m(P^*).) Because Δ is bounded, m(P) will not pass through the origin as P travels from P_k to P^*. Let m_r be the portion of m to the right of P_k+1 and let m_l be the portion of m to the left of P_k+1. We will show that D_k+2 = l. Let P_k = (x_k, y_k) and P_k+1 = (x_k+1, y_k+1). Let P_k' = (x_k', y_k') be the reflection of P_k over P_k+1 (i.e., x_k' = 2x_k+1-x_k and y_k' = 2y_k+1-y_k). Note that because the line P_kP_k+1 and the segment P_kP_k+1 meet S in the same set, P_k' is not in S. Also, P_k' ≠ O, since L_k and L_k+1 are not parallel. We now show by contradiction that there exist points of S on m_l. Suppose there are no such points. Then there must exist a point (x, y) in m_r ∩ S. Let us first consider the case where y_k+1 < y_k and x_k+1 > x_k. A representation of this is shown in Figure <ref>. We claim that (x_k+1, y_k) ∈ S. By the definition of a rectangular lattice, the only way this would not be the case is if (x_k+1, y_k) = O, but this is not the case since P_k+1 is to the right of P_k. Therefore, as P moves from P_k to P^*, m(P) will not rotate beyond the vertical. If m is vertical, then (x_k+1, y_k) ∈ m_l ∩ S. So assume m is not vertical and let (x, y) be a point on m_r. Then, x > x_k+1. Note that as P moves from P_k to P^*, m(P) rotates clockwise because P^* is to the right of P_k. Since m was obtained from the line P_kP_k+1 by a clockwise rotation, we cannot have both x ≥ x_k' and y ≥ y_k'. If (x, y) satisfies x ≥ x_k' and y ≤ y_k', it is not in S, for otherwise, P_k' ∈ S, a contradiction. If x < x_k' and y < y_k', then (x, y_k') is in S. When m(P) rotates clockwise, (x, y_k') would be hit before (x, y); therefore (x, y) cannot be on m_r. Therefore any point (x, y) in S on m_r other than P_k+1 must satisfy y ≥ y_k' and x ≤ x_k' (with not both being equalities). But the reflections of these points over P_k+1 are points on m_l that are also in S, a contradiction. The cases where x_k+1 < x_k and y_k+1 < y_k, or where x_k+1 < x_k and y_k+1 > y_k, or where x_k+1 > x_k and y_k+1 > y_k, follow by entirely analogous proofs. Now, we consider the cases where x_k = x_k+1 or y_k = y_k+1. Because the line P_kP_k+1 and the segment P_kP_k+1 meet S in the same set, we know that P_k and P_k+1 must be on opposite sides of the boundary of S.
Therefore there exist no points in S on m_r other than P_k+1, unless P_k and P_k+1 are corners of the boundary of S. So suppose P_k and P_k+1 are corners of the boundary of S. Note that S is not a subset of a line, for if it were, all the regions formed by ℒ_S would be unbounded (since the lines of ℒ_S would then be all concurrent or all parallel). Also, P_k and P_k+1 do not correspond to opposite corners of the boundary of S, since they share a horizontal or vertical component. As we rotate the line P_kP_k+1, the line will either rotate through 90^∘ or will rotate through less than 90^∘ before hitting another point of S. In the latter case, m_r ∩ S = ∅, so m_l ∩ S must be nonempty. In the former case, m_l ∩ S = ∅, and m_r ∩ S consists of the points of the boundary of S that form the side that contains P_k+1, other than those on the line P_kP_k+1. In the former case, according to our procedure for finding the next side, P_k+2 will be the next corner of the boundary of S and P_k+3 will be the final corner of the boundary of S (assuming that Δ is bounded). Then the P_i are among the O_j, and Δ contains O. (Note that for Δ to be a bounded region containing O, the origin of the coefficient plane must be contained within the boundary of S.) Thus, if Δ is bounded but does not contain O, there will be a point in S other than P_k+1 on m_l; by Theorem <ref>, P_k+2 will be on m_l. By Proposition <ref>, since m_l is to the left of OP_k+1, we know that D_k+2 = l. Now, consider the case where D_k=D_k+1=l. The proof that D_k+2=r is similar to the case where D_k=D_k+1=r (up to the point where P_k and P_k+1 are corners of the boundary of S), with a few key differences which we point out here. First, to find m, we would move P from P_k to the left along M, which corresponds to counterclockwise rotation of m(P) instead of clockwise rotation. Then, where we considered y_k+1 < y_k and x_k+1 > x_k, we replace the case where both x < x_k' and y < y_k' with the condition that both x > x_k' and y > y_k'. If x > x_k' and y > y_k', then (x_k', y) is in S. When m(P) rotates counterclockwise, (x_k', y) would be hit before (x, y); therefore (x, y) cannot be on m_r. We also replace the case where x ≥ x_k' and y ≥ y_k' with x ≤ x_k' and y ≤ y_k'. We now pick up the proof of the case where D_k=D_k+1=l, supposing that P_k and P_k+1 are consecutive corners of the boundary of S. As we move P from P_k to P^*, m(P) will rotate counterclockwise either through 90^∘ or through less than 90^∘ before hitting another point of S. In the former case, m_l ∩ S = ∅, and m_r ∩ S consists of the points of the boundary of S that form the side that contains P_k+1, other than those on the line P_kP_k+1. The situation here differs from the case where D_k=D_k+1=r in that the bounded case does not actually exist: if Δ is bounded, the origin of 𝒞 must be contained within the boundary of S. However, if this is true, only a clockwise rotation would cause the line P_kP_k+1 to rotate through 90^∘, so D_k=D_k+1≠ l. In the latter case, if Δ is bounded, then m_r ∩ S = ∅. The set of lines ℒ_S forms no polygons with more than 4 sides. Let Δ be a polygon in the pattern of lines formed by ℒ_S which does not contain the origin. By Lemma <ref>, the values D_k will change only twice for any given polygon in ℝ^2. By Lemma <ref>, if Δ is bounded and does not contain O, no more than two consecutive D_k's can be equal, so there can be a maximum of only four D_k's and hence a maximum of four sides to Δ. Now consider the polygon formed by the lines of ℒ_S that does contain the origin.
By Proposition <ref>, in a rectangular lattice, the points in 𝒞 that correspond to the boundary lines of the region that contains O are a subset of the corners of the boundary of the lattice, so the region containing O has a maximum of four sides. We note that this result does not hold for general arrays in 𝒞, as the set
S = {(-2,-2), (-2, 2), (-2, 3), (2,-2), (2,2), (2,3), (3,-2), (3,2), (3,3)}
is a 3-by-3 array where ℒ_S contains a pentagon. The regularity of rectangular lattices matters. Let S^' be the image of S under a non-degenerate linear transformation. Then the set of lines ℒ_S^' forms no polygons with more than 4 sides. This proposition follows from Proposition <ref>.

§ FURTHER DISCUSSION

There seem to be interesting connections to number theory which may be worth further exploration. For example, if S = {(a,b) ∈ℤ^2 | -n ≤ a,b ≤ n}∩ 𝒞, for some positive integer n, then the number of two-sided unbounded regions in ℒ_S is equal to #{(a, b) ∈ S | a and b are relatively prime}. Unlike the case with a rectangular lattice, there do exist S ⊂ 𝒞 such that D_k=D_k+1 does not imply D_k+1≠ D_k+2 but such that the polygons formed by the lines in ℒ_S are all triangles and quadrilaterals. Figure <ref> shows an example. Still, one might ask: if a lattice is intersected with any convex polygonal region, does the corresponding pattern of lines bound polygons that are all triangles and quadrilaterals? The answer is no, as shown by the counterexample in Figure <ref>. In fact, there is an (n+2)-sided polygon formed by the lines corresponding to the lattice points inside or on the boundary of the triangle whose vertices are given by (-3, -1), (-2, -1), and (-3+F_2n+1, -1+F_2n), where F_n is the Fibonacci sequence with F_1=F_2=1. This example is based on geometrical properties of convergents of the continued fraction expansion of the golden mean. Figures <ref> and <ref> correspond to the cases n=2 and n=3, respectively.

§ ACKNOWLEDGEMENTS

The authors would like to thank C. Kenneth Fan for his assistance in obtaining the results of the paper as well as helping to create and edit this paper. | http://arxiv.org/abs/2312.12447v1 | {
"authors": [
"Milena Harned",
"Iris Liebman"
],
"categories": [
"math.CO",
"52C30"
],
"primary_category": "math.CO",
"published": "20231127200800",
"title": "An Unexpected Class of 5+gon-free Line Patterns"
} |
Spatio-temporal Lie–Poisson discretization for incompressible magnetohydrodynamics on the sphere

Klas Modin: Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, 412 96 Gothenburg, Sweden, [email protected]
Michael Roop: Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, 412 96 Gothenburg, Sweden, [email protected]
2020 Mathematics Subject Classification: 37M15; 65P10; 53D20; 76W05

We give a structure preserving spatio-temporal discretization for incompressible magnetohydrodynamics (MHD) on the sphere. Discretization in space is based on the theory of geometric quantization, which yields a spatially discretized analogue of the MHD equations as a finite-dimensional Lie–Poisson system on the dual of the magnetic extension Lie algebra 𝔣=𝔰𝔲(N)⋉𝔰𝔲(N)^*. We also give accompanying structure preserving time discretizations for Lie–Poisson systems on the dual of semidirect product Lie algebras of the form 𝔣=𝔤⋉𝔤^*, where 𝔤 is a J-quadratic Lie algebra. Critically, the time integration method is free of computationally costly matrix exponentials. We prove that the full method preserves the underlying geometry, namely the Lie–Poisson structure and all the Casimirs. To showcase the method, we apply it to two models for magnetic fluids: incompressible magnetohydrodynamics and Hazeltine's model.

Acknowledgement. This work was supported by the Knut and Alice Wallenberg Foundation, grant number WAF2019.0201, and by the Swedish Research Council, grant number 2022-03453. The authors would like to thank Darryl Holm and Philip Morrison for pointing us to Hazeltine's model for magnetic fluids.

§ INTRODUCTION

The equations of incompressible magnetohydrodynamics (MHD) describe the evolution of the velocity v(t,x) of an ideal charged fluid and its magnetic field B(t,x) on a two- or three-dimensional manifold M:
{
v̇ + ∇_v v = -∇ p + curl B × B,
Ḃ = -L_v B,
div B = 0,  div v = 0.
}
Here, p(t,x) is a pressure function, L_v denotes the Lie derivative along the vector field v(t,x), and ∇_v v is the covariant derivative of the vector field v along itself. The MHD system (<ref>) admits a Hamiltonian formulation in terms of a Lie–Poisson structure <cit.> on the dual of the semidirect product Lie algebra
𝔦𝔪𝔥 = 𝔛_μ(M) ⋉ 𝔛_μ^*(M).
The Hamiltonian on 𝔦𝔪𝔥^* is given by
H(v,B) = 1/2 ∫_M (|v|^2 + |B|^2) μ.
Here, 𝔛_μ(M) is the Lie algebra of divergence-free vector fields, whereas 𝔛_μ^*(M) and 𝔦𝔪𝔥^* denote the (smooth) dual spaces. Physically, the Hamiltonian represents the energy of the magnetic fluid. Geometrically, system (<ref>) is the Poisson reduction of a canonical flow, with a right-invariant Hamiltonian, on the cotangent bundle
T^*(Diff_μ(M) ⋉ 𝔛_μ^*(M)),
where the subscript μ stands for a reference Riemannian volume form, and Diff_μ(M) is the group of volume-preserving diffeomorphisms of M, i.e., diffeomorphisms of M that leave the differential form μ invariant:
Diff_μ(M) = { φ ∈ Diff(M) | φ^*μ = μ }.
The Hamiltonian nature of the flow (<ref>) implies a multitude of conservation laws. In 3-D, they are magnetic helicity and cross-helicity; Khesin, Peralta-Salas, and Yang <cit.> showed that these are the only independent Casimirs. In 2-D, there are an infinite number of Casimirs (detailed below). These integrals, and the underlying Lie–Poisson structure, significantly restrict which states are possible to reach from a given initial state. They thereby influence the long time qualitative behavior in phase space.
Indeed, to capture the qualitative behavior in long time numerical simulations, one should strive for discretizations that preserve the rich geometric structure in phase space (for a detailed motivation of structure preserving schemes in the case of plasma physics, see the review paper by Morrison <cit.>). But infinite-dimensional Lie–Poisson structures, such as 𝔦𝔪𝔥^* for MHD, are strikingly rigid; all traditional spatial discretizations, including all finite element methods based on discrete exterior calculus, fail to admit a finite-dimensional Lie–Poisson formulation. In addition, Lie–Poisson preserving time discretizations (integrators) are hard to come by. Nevertheless, to find such structure preserving discretizations is critical for qualitatively reliable long time simulations, motivated by the intensified study of stellarators, where 2-D MHD provides a simple model for low beta tokamak dynamics <cit.>. The goal of this paper is to develop, for the sphere S^2, a spatio-temporal discretization of the MHD system (<ref>) that preserves the underlying Lie–Poisson structure, including the Casimir conservation laws. To this end, we draw on two bodies of previous work. First, that of Zeitlin <cit.>, who used quantization theory to derive a Lie–Poisson preserving spatial discretization for the incompressible Euler equations on the flat torus and later extended it to MHD. Second, that of Modin and Viviani <cit.>, who for the spherical domain S^2 took Zeitlin's approach further, developed a tailored Lie–Poisson preserving temporal discretization, and together with Cifani <cit.> addressed computational efficiency. Let us give a brief overview of the approach. For spatial discretization, the main tool is the theory of Berezin–Toeplitz quantization <cit.>. The basic idea is to replace the infinite-dimensional Poisson algebra of smooth functions by the finite-dimensional Lie algebra 𝔰𝔲(N) of skew-Hermitian, trace-free matrices. Together with a quantized Laplacian on 𝔰𝔲(N), one then obtains a finite-dimensional approximation of Euler's equations as a matrix flow: Zeitlin's model. The present paper extends this approach to models describing the motion of incompressible magnetized fluids on S^2. Indeed, the spatially discretized analogue of the MHD equations constitutes a Lie–Poisson flow on the dual of the Lie algebra 𝔣=𝔰𝔲(N)⋉𝔰𝔲(N)^*, usually referred to as the magnetic extension of 𝔰𝔲(N). Next, for temporal discretization, it is natural to consider isospectral symplectic Runge-Kutta integrators (IsoSRK) <cit.>. These schemes yield Lie–Poisson integrators for any reductive and J-quadratic Lie algebra 𝔤 (see details below), a class that includes all the classical Lie algebras. However, the magnetic extension 𝔤⋉𝔤^* is not reductive and not defined by a J-quadratic condition; we need an extension of IsoSRK. Such an extension is developed in this paper. Although MHD on S^2 is our main concern, the semidirect product approach covers a large variety of dynamical systems arising in mathematical physics <cit.>. Among them are:

* the Kirchhoff equations <cit.>, describing a rigid body moving in an ideal fluid, as a Lie–Poisson system on the dual of 𝔢(3)=𝔰𝔬(3)⋉ℝ^3;
* the barotropic Euler equations describing the motion of a compressible fluid <cit.>, as a Lie–Poisson system on the dual of 𝔰=𝔛(M)⋉ C^∞(M);
* Hazeltine's equations describing magnetized plasma <cit.>.
2-D MHD, together with these and other examples, underlines the need for structure preserving numerical methods for Lie–Poisson systems on semidirect product Lie algebras.

§ VORTICITY FORMULATION FOR MHD EQUATIONS

In this section, we work on two-dimensional Riemannian manifolds (M,g) without boundary and with trivial first cohomology (i.e., no “holes”). First, since the vector fields B(t,x) and v(t,x) are divergence-free and the cohomology is trivial, one can introduce two smooth functions θ∈ C^∞(M) and ψ∈ C^∞(M) corresponding to the Hamiltonians for the vector fields v and B:
v = X_ψ,  B = X_θ.
The function ψ is called the stream function. Similarly, we refer to θ as the magnetic stream function. Next, we define the vorticity function ω∈ C^∞(M) and the magnetic vorticity β∈ C^∞(M) by
β = Δθ,  ω = Δψ.
The vorticity formulation of the 2-D MHD equations (<ref>) is
{
ω̇ = {ω,ψ} + {θ,β},  ω = Δψ,
θ̇ = {θ,ψ},  β = Δθ,
}
where {·,·} is the Poisson bracket on M. System (<ref>) admits a Hamiltonian formulation in terms of a non-canonical Poisson bracket, see <cit.>. The corresponding Hamiltonian is
H = 1/2 ∫_M (ωΔ^-1ω + θΔθ) μ.
The Casimir invariants for system (<ref>) are
C_f = ∫_M f(θ) μ,  I_g = ∫_M ω g(θ) μ,
for any choice of smooth functions f: ℝ→ℝ and g: ℝ→ℝ. The function I_g is the two-dimensional analogue of the cross-helicity Casimir. The Casimir I_g in (<ref>) is a more general invariant compared to the conventional definition of cross-helicity. Indeed, the Casimir I_g corresponds to conventional cross-helicity for g(x)=x. We shall, however, refer to I_g as cross-helicity even for a general function g. Due to Stokes' theorem, the vorticity functions ω and β have zero mean
∫_M ω μ = ∫_M β μ = 0,
which reflects zero circulation. Since Hamiltonian functions are defined up to a constant, it is no restriction to assume that also
∫_M ψ μ = ∫_M θ μ = 0,
and therefore system (<ref>) evolves on the space of pairs of zero-mean functions C_0^∞(M).

§ SPATIAL DISCRETIZATION OF MHD EQUATIONS

In this section, we present a spatial discretization of the incompressible MHD equations on the sphere S^2 based on the theory of quantization <cit.>. In contrast to standard discretization schemes for systems of PDEs, such as finite element methods, we focus on conservation of the underlying geometric structure in phase space and the corresponding Casimirs (<ref>). Namely, we replace the infinite-dimensional Poisson algebra (C_0^∞,{·,·}) with a finite-dimensional analogue: skew-Hermitian matrices with zero trace (𝔰𝔲(N),[·,·]). The sequence of Lie algebras (𝔰𝔲(N),[·,·]) converges (in a weak sense) to the Lie algebra (C_0^∞,{·,·}) as N→∞, as we shall briefly review next. Thereafter, the spatially discretized analogue of (<ref>) is a Lie–Poisson flow on the dual 𝔣^* of the semidirect product Lie algebra 𝔣=𝔰𝔲(N)⋉𝔰𝔲(N)^*.

§.§ Quantization on the sphere

We start with the definition of an 𝔏_α-quasilimit <cit.>, which is a weak limit. Let (𝔏,[·,·]) be a complex (real) Lie algebra and let (𝔏_α,[·,·]_α) be an indexed sequence of complex (real) Lie algebras with α∈ℕ (or ℝ), equipped with metrics d_α and a family of linear maps p_α: 𝔏→𝔏_α. The Lie algebra (𝔏,[·,·]) is said to be an 𝔏_α-quasilimit if

* all p_α are surjective for α≫0,
* if for all x,y∈𝔏 we have d_α(p_α(x),p_α(y))→0, as α→∞, then x=y,
* for all x,y∈𝔏 we have d_α(p_α([x,y]),[p_α(x),p_α(y)]_α)→0, as α→∞.

Now we explicitly specify M to be the two-dimensional sphere S^2, which is a symplectic manifold with symplectic form Ω given by the area form.
The associated Poisson bracket on S^2 is given by
{f,g} Ω = df ∧ dg,  f,g ∈ C_0^∞(S^2).
Equipped with the bracket (<ref>), the set C_0^∞(S^2) becomes an infinite-dimensional Poisson algebra (C_0^∞,{·,·}) with an orthogonal basis (with respect to L^2) given by the spherical harmonics Y_lm(ϑ,ϕ):
Y_lm(ϑ,ϕ) = √( (2l+1)/(4π) · (l-m)!/(l+m)! ) 𝒫_l^m(cosϑ) e^imϕ,  l ≥ 1,  m = -l, -l+1, …, l,
where 𝒫_l^m are the associated Legendre functions. Then, elements of the Poisson algebra (C_0^∞,{·,·}) are approximated by matrices in the following way <cit.>. An approximating sequence is given by the matrix Lie algebras (𝔰𝔲(N),[·,·]_N), where [·,·]_N = (1/ħ)[·,·] for ħ = 2/√(N^2-1) is a rescaling of the matrix commutator [·,·]. The family of projections p_N: C_0^∞→𝔰𝔲(N), p_N: Y_lm ↦ iT^N_lm, is defined as follows for the basis element Y_lm:
(T^N_lm)_m_1m_2 = (-1)^{(N-1)/2 - m_1} √(2l+1) [ (N-1)/2  l  (N-1)/2; -m_1  m  m_2 ],
where (:::) stands for the Wigner 3j-symbol. Then, the following result of 𝔏_N-convergence holds: For any choice of matrix norms d_N, the sequence of finite-dimensional Lie algebras (𝔰𝔲(N),[·,·]_N), N∈ℕ, with projections defined by (<ref>)-(<ref>), is an 𝔏_N-approximation of the infinite-dimensional Poisson algebra (C_0^∞,{·,·}) with the Poisson bracket (<ref>). Let us introduce the matrix operator norm (also called the spectral norm):
‖A‖_L_N^∞ = sup_{‖x‖=1} ‖Ax‖,  A ∈ 𝔰𝔲(N),
where ‖·‖ is the Euclidean norm. The following results give the convergence rate for the 𝔏_N approximation in Theorem <ref>. For every f,g ∈ C^∞(S^2) there exists c>0 such that
‖f‖_L^∞ - cħ ≤ ‖p_N(f)‖_L_N^∞ ≤ ‖f‖_L^∞,
‖(1/ħ)[p_N(f),p_N(g)] - p_N({f,g})‖_L_N^∞ = O(ħ).
Later, Charles and Polterovich established a sharper estimate <cit.>:
‖(1/ħ)[p_N(f),p_N(g)] - p_N({f,g})‖_L_N^∞ ≤ ħ c_0 (‖f‖_C^1 ‖g‖_C^3 + ‖f‖_C^2 ‖g‖_C^2 + ‖f‖_C^3 ‖g‖_C^1),
where c_0>0 is a constant and ‖f‖_C^k = max_{i≤k} sup|∇^i f|.

§.§ Quantized MHD system

As we can see from (<ref>), the stream functions ψ,θ and the vorticities ω,β are related to one another through the Laplace-Beltrami operator Δ. Therefore, to complete the spatial discretization, we also need to discretize the Laplacian. Indeed, the quantized Laplacian on 𝔰𝔲(N) is given by the Hoppe–Yau Laplacian <cit.>:
Δ_N(·) = (N^2-1)/2 ([X_3^N,[X_3^N,·]] - (1/2)[X_+^N,[X_-^N,·]] - (1/2)[X_-^N,[X_+^N,·]]),
where X_±^N = X_1 ± iX_2, and X_a, a=1,2,3, are generators of a unitary irreducible “spin (N-1)/2” representation of 𝔰𝔬(3), i.e.,
[X_a,X_b] = (2iε_abc/√(N^2-1)) X_c,  X_1^2+X_2^2+X_3^2 = 𝕀,
where ε_abc is the Levi–Civita symbol. The Hoppe–Yau Laplacian (<ref>) corresponds to the continuous Laplace-Beltrami operator Δ in the sense that the matrices T_lm^N are eigenvectors of Δ_N with eigenvalues -l(l+1):
Δ_N T_lm^N = -l(l+1) T_lm^N,
while the spherical harmonics Y_lm are eigenvectors of Δ with the same eigenvalues:
Δ Y_lm = -l(l+1) Y_lm.
Let us now give an explicit correspondence between the continuous function ω ∈ C_0^∞(S^2) and its quantized counterpart W ∈ 𝔰𝔲(N). The function ω can be decomposed in the spherical harmonics basis, ω = ∑_l=1^∞ ∑_m=-l^l ω^lm Y_lm, and therefore
W = p_N(ω) = ∑_l=1^N-1 ∑_m=-l^l i ω^lm T_lm^N.
If the function ω ∈ C_0^∞(S^2) is real-valued, then ω^l(-m) = (-1)^m ω̅^lm (the bar denoting complex conjugation), which implies that the matrix W is skew-Hermitian:
W + W^† = 0,  i.e.,  W ∈ 𝔲(N).
Furthermore, since ω has vanishing circulation, we have ω^0,0 = 0, which implies that tr(W) = 0, i.e., W ∈ 𝔰𝔲(N). Also, the Hoppe–Yau Laplacian Δ_N restricts to a bijective operator on 𝔰𝔲(N):
Δ_N: 𝔰𝔲(N) → 𝔰𝔲(N).
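To make the construction concrete, the following Python sketch (our illustration; all names are ad hoc) realizes the quantized Laplacian in the equivalent form Δ_N = -∑_a (ad_{J_a})^2, where J_a are the standard spin-(N-1)/2 angular momentum matrices; we do not attempt to reproduce the normalization conventions of the X_a^N literally, but this operator has exactly the eigenvalue relation stated above, with spectrum {-l(l+1) : l=0,…,N-1} on 𝔤𝔩(N) (the value l=0 belongs to multiples of the identity, which are absent from 𝔰𝔲(N)).

```python
import numpy as np

def spin_matrices(N):
    """Spin-s angular momentum matrices, s = (N-1)/2, with [Ja, Jb] = i eps_abc Jc."""
    s = (N - 1) / 2
    m = np.arange(s, -s - 1, -1)                      # s, s-1, ..., -s
    Jz = np.diag(m).astype(complex)
    cp = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))   # <m+1| J+ |m>
    Jp = np.diag(cp, k=1).astype(complex)
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / 2j, Jz

def lap_N(W, J):
    """Quantized Laplacian -sum_a [Ja, [Ja, W]]; eigenvalues -l(l+1)."""
    return -sum(Ja @ (Ja @ W - W @ Ja) - (Ja @ W - W @ Ja) @ Ja for Ja in J)

N = 8
J = spin_matrices(N)
# assemble lap_N as an N^2 x N^2 matrix acting on vec(W) and check its spectrum
E = np.eye(N)
L = np.zeros((N * N, N * N), dtype=complex)
for i in range(N):
    for j in range(N):
        L[:, i * N + j] = lap_N(np.outer(E[:, i], E[j, :]), J).reshape(-1)
evals = np.sort(np.linalg.eigvals(L).real)
expected = np.sort([-l * (l + 1) for l in range(N) for _ in range(2 * l + 1)])
assert np.allclose(evals, expected, atol=1e-8)
```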
We now have all the ingredients to write down the spatially discretized analogue of the incompressible MHD equations (<ref>) on the sphere, similarly to how it is done for the incompressible Euler equations in <cit.>. Namely, we replace the continuous flow (<ref>) with its quantized counterpart:
{
Ẇ = [W, Δ^-1_N W] + [Θ, Δ_N Θ],
Θ̇ = [Θ, Δ^-1_N W],
}
where W,Θ ∈ 𝔰𝔲(N). In the case of a trivial magnetic field, Θ=0, equation (<ref>) coincides with Zeitlin's model for the incompressible Euler equations on the sphere. Let us give a more detailed description of the matrices T_lm^N. Although we have the explicit formula (<ref>) for them, its use is inefficient in practice due to the high computational complexity of algorithms for evaluating Wigner 3j-symbols. Instead, let us note that, due to the construction (<ref>) of the Hoppe–Yau Laplacian, it preserves the space of matrices with zero entries except on the ±m off-diagonals. This allows us to identify the corresponding eigenmatrix T_lm^N with a sparse skew-Hermitian matrix that has non-trivial entries only on the m-th off-diagonal, thus reducing the eigenvalue problem (<ref>) to an (N-|m|)-dimensional eigenvalue problem. Therefore the complexity of computing the entire basis (T_lm^N) for fixed N is O(N^2), instead of the O(N^4) it would be if Δ_N were a full matrix. Further, finding the commutator requires O(N^3) operations per time step, which gives the entire complexity of the algorithm as O(N^3) per time step. For details, see <cit.>.

§.§ Lie–Poisson nature of the quantized flow

One essential property of Zeitlin's approach via quantization is that it preserves the Lie–Poisson nature of the flow. In other words, the quantized flow is a Lie–Poisson system, exactly as the continuous one, but on a finite-dimensional counterpart of 𝔦𝔪𝔥^*. Introducing M_1 = Δ^-1_N W, M_2 = Δ_N Θ, we rewrite system (<ref>) as
{
Ẇ = [W, M_1] + [Θ, M_2],
Θ̇ = [Θ, M_1].
}
Our goal is now to show that (<ref>) is a Lie–Poisson system on the dual of the Lie algebra 𝔣 = 𝔰𝔲(N)⋉𝔰𝔲(N)^*. First, we introduce the magnetic extension F = SU(N)⋉𝔰𝔲(N)^* of the group SU(N). The group operation in F is
(φ,a)·(ψ,b) = (φψ, Ad^*_ψ a + b),  φ,ψ ∈ SU(N),  a,b ∈ 𝔰𝔲(N)^*.
The adjoint operator on the Lie algebra 𝔣 = 𝔰𝔲(N)⋉𝔰𝔲(N)^* is
ad_(v,a): 𝔣→𝔣,  ad_(v,a)(w,b) = ([v,w], ad^*_w a - ad^*_v b)
for v,w ∈ 𝔰𝔲(N), a,b ∈ 𝔰𝔲(N)^*. From now on, we will identify the Lie algebra 𝔰𝔲(N) with its dual 𝔰𝔲(N)^* via the Frobenius inner product
⟨A,B⟩ = tr(A^†B),  A ∈ 𝔰𝔲(N)^*,  B ∈ 𝔰𝔲(N).
Then, the dual 𝔣^* = (𝔰𝔲(N)⋉𝔰𝔲(N)^*)^* ≃ 𝔰𝔲(N)⋉𝔰𝔲(N)^* can be identified with 𝔣 via the pairing
⟨(ξ,a),(w,b)⟩ = ⟨b,ξ⟩ + ⟨a,w⟩,
where ⟨·,·⟩ is defined by (<ref>) for ξ,w ∈ 𝔰𝔲(N), a,b ∈ 𝔰𝔲(N)^*, and the coadjoint action of 𝔣 on 𝔣^* is
ad_(v,a)^*: 𝔣^*→𝔣^*,  ad^*_(v,a)(w,b) = ([w,v], ad^*_v b - ad^*_w a),
where (v,a) ∈ 𝔣, (w,b) ∈ 𝔣^*. Using (<ref>), one can get an explicit formula for the ad^* operator:
ad^*_(v,a)(w,b) = ([w,v], [v^†,b] + [a,w^†]).
Summarizing the above discussion, we arrive at the following result. System (<ref>) is a Lie–Poisson flow on the dual 𝔣^* of the Lie algebra 𝔣 = 𝔰𝔲(N)⋉𝔰𝔲(N)^*:
J̇ = ad^*_M J,
where J = (Θ, W^†) ∈ 𝔣^*, M = (M_1, M_2^†) ∈ 𝔣, with the Hamiltonian
H(W,Θ) = 1/2 (tr(W^†M_1) + tr(Θ^†M_2)).
The Hamiltonian nature of the quantized flow (<ref>) suggests that there are quantized analogues of the Casimirs (<ref>). Indeed, they are (up to a normalization constant depending on N)
C_f = tr(f(Θ)),  I_g = tr(W g(Θ)),
for arbitrary smooth functions f: ℝ→ℝ and g: ℝ→ℝ. As N→∞, the Casimirs (<ref>) converge to the corresponding continuous Casimirs (<ref>), which follows from results in <cit.>. Note that preservation of the Casimir C_f is equivalent to preservation of the spectrum of Θ.
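The Lie–Poisson structure can be probed numerically at the level of the vector field. The sketch below (ours; it builds on the spin_matrices, lap_N, and assembled matrix L from the sketch in the previous subsection) realizes Δ_N^{-1} on 𝔰𝔲(N) via the pseudo-inverse of L, which is legitimate here because the kernel of Δ_N on 𝔲(N) is spanned by the identity, and the identity is Frobenius-orthogonal to the trace-free matrices. It then checks that the time derivatives of the quantized Casimirs and of the Hamiltonian vanish along the right-hand side of the system.

```python
import numpy as np
# assumes N, J, lap_N and the N^2 x N^2 matrix L from the previous sketch

Linv = np.linalg.pinv(L)                      # acts as Delta_N^{-1} on su(N)

def lap_inv(W):
    return (Linv @ W.reshape(-1)).reshape(N, N)

def comm(A, B):
    return A @ B - B @ A

def mhd_rhs(W, Theta):
    """Quantized MHD vector field: returns (dW/dt, dTheta/dt)."""
    M1, M2 = lap_inv(W), lap_N(Theta, J)
    return comm(W, M1) + comm(Theta, M2), comm(Theta, M1)

def random_su(n, rng):
    Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Z = (Z - Z.conj().T) / 2
    return Z - (np.trace(Z) / n) * np.eye(n)

rng = np.random.default_rng(1)
W, Th = random_su(N, rng), random_su(N, rng)
dW, dTh = mhd_rhs(W, Th)
# infinitesimal invariance: d/dt tr(Theta^3), d/dt tr(W Theta^2) (Casimirs)
# and d/dt (tr(W M1) + tr(Theta M2)) (energy) all vanish to round-off
print(abs(np.trace(3 * Th @ Th @ dTh)))
print(abs(np.trace(dW @ Th @ Th) + np.trace(W @ (dTh @ Th + Th @ dTh))))
print(abs(np.trace(dW @ lap_inv(W)) + np.trace(dTh @ lap_N(Th, J))))
```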
§ LIE–POISSON PRESERVING TIME INTEGRATOR

To get the fully discretized incompressible MHD equations, one also needs to discretize system (<ref>) in time. There are generic time integration methods for Lie–Poisson systems (see, e.g., <cit.>), but these make heavy use of the matrix exponential. Such methods are computationally too expensive when the dimension of the Lie algebra is large (as in the case here). Our goal is instead to develop a “matrix exponential free” integrator that preserves the underlying Lie–Poisson geometry of the flow (<ref>), meaning it should preserve the Casimirs (<ref>) exactly, be a symplectic map on the coadjoint orbits of 𝔣^*, and thereby nearly preserve the Hamiltonian (<ref>) in the sense of backward error analysis <cit.>. There are several ways to construct symplectic integrators for Hamiltonian systems on T^*ℝ^n; among them are symplectic Runge-Kutta methods <cit.>. Given a Butcher tableau
c_1 | a_11  a_12  ⋯  a_1s
c_2 | a_21  a_22  ⋯  a_2s
⋮  |  ⋮    ⋮    ⋱   ⋮
c_s | a_s1  a_s2  ⋯  a_ss
    | b_1   b_2   ⋯  b_s
with b_i a_ij + b_j a_ji = b_i b_j for all i,j = 1,…,s, the corresponding method, applied to a Hamiltonian system on a symplectic vector space (ℝ^2n,Ω), is symplectic. An example is the implicit midpoint method. However, when directly applied to a Lie–Poisson system, a symplectic Runge-Kutta scheme does not yield a Poisson integrator. There exist a few approaches to obtain Poisson integrators for Lie–Poisson systems (P,{·,·},H); see also the sketch after this list.

* If P=𝔤^*, and the Hamiltonian H can be split into a sum of integrable Hamiltonians, H=∑ H_i, one can use splitting methods <cit.>.
* If P=𝔤^*, the Lie–Poisson system is a Poisson reduction of a Hamiltonian system on T^*G. In this case, the discrete Lie–Poisson flow is constructed from a discrete G-invariant Lagrangian on TG <cit.>. Another approach is to embed G in a linear space and use the constrained symplectic integrator RATTLE <cit.>. This, however, results in a very complicated scheme on high-dimensional vector spaces. For example, in the case of the 2-dimensional sphere S^2, which is a coadjoint orbit in 𝔰𝔬(3)^*, one would lift the equations to T^*SO(3), embedded in T^*ℝ^{3×3} of dimension 18.
* For domains originating from the generalised Hopf fibration, one can use collective symplectic integrators. See <cit.> for details.

Yet another approach is to make use of Poisson reduction (more precisely, Poisson reconstruction) to reduce a discrete symplectic flow on T^*G (for example, a symplectic Runge-Kutta method) to a discrete Lie–Poisson flow on 𝔤^* <cit.>. This is how isospectral Runge-Kutta methods were developed for a large class of isospectral flows on J-quadratic Lie algebras, including the Euler-Zeitlin equations on a sphere. The main advantages of the method are that it is formulated directly on the algebra, does not involve expensive group-to-algebra maps, and can be applied to any isospectral flow. Therefore, we might expect that using the strategy from <cit.> to construct a Lie–Poisson integrator for (<ref>) will give the same benefits, since the isospectral flows considered in <cit.> have a geometry similar to that of equations (<ref>).
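Returning briefly to the symplecticity condition b_i a_ij + b_j a_ji = b_i b_j on the Butcher tableau stated above: it can be verified mechanically. The snippet below (ours; names ad hoc) checks it for the implicit midpoint method (s=1) and for the 2-stage Gauss-Legendre method.

```python
import numpy as np

def is_symplectic_tableau(A, b, tol=1e-14):
    """Check b_i a_ij + b_j a_ji - b_i b_j = 0 for all i, j."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    M = b[:, None] * A + (b[:, None] * A).T - np.outer(b, b)
    return np.max(np.abs(M)) < tol

print(is_symplectic_tableau([[0.5]], [1.0]))              # implicit midpoint
g = np.sqrt(3) / 6                                        # 2-stage Gauss (order 4)
print(is_symplectic_tableau([[0.25, 0.25 - g],
                             [0.25 + g, 0.25]], [0.5, 0.5]))
```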
We mention also the work of Kraus, Tassi, and Grasso <cit.>, where an integrator for 2-D MHD on the plane is developed. The integrator preserves the linear and quadratic Casimirs and the energy of the MHD equations on the plane. However, the method preserves neither the higher order Casimirs nor the Lie–Poisson structure. As we shall see below, the strategy of using Poisson reduction results in a numerical scheme for incompressible MHD on the sphere that completely preserves the underlying geometry of the equations.

§.§ Matrix representation of 𝔰𝔲(N)⋉𝔰𝔲(N)^*

A first natural attempt to derive a structure preserving integrator for (<ref>) is to represent it as an isospectral flow on a space of 2N×2N matrices; in other words, to convert the system of matrix equations (<ref>) into a single matrix flow. That could potentially make it possible to apply the isospectral integrators developed in <cit.>. Let us introduce the two lower triangular block matrices
V = [ Θ  0;  W  Θ ],  M = [ M_1  0;  M_2  M_1 ].
This embeds 𝔣 = 𝔰𝔲(N)⋉𝔰𝔲(N)^* as a subalgebra of 𝔤𝔩(2N,ℂ) such that the equations (<ref>) constitute an isospectral flow of matrices of the form (<ref>):
V̇ = [V, M(V)].
Let us check whether 𝔣 ⊂ 𝔤𝔩(2N,ℂ) fits the conditions stated in <cit.> for isospectral symplectic Runge-Kutta integrators to work, i.e., that it is J-quadratic and reductive. Let J be a matrix such that J^2 = cI, where I is the identity matrix. The corresponding J-quadratic Lie algebra 𝔤 ⊂ 𝔤𝔩(N,ℂ) is given by
A ∈ 𝔤 ⟺ A^†J + JA = 0.
Let 𝔤 ⊂ 𝔤𝔩(N,ℂ) be J-quadratic. Then the Lie algebra 𝔤⋉𝔤^* ⊂ 𝔤𝔩(2N,ℂ) is a subalgebra of the J̃-quadratic Lie algebra 𝔤̃ for
J̃ = [ 0  J;  J  0 ].
Clearly, J̃^2 = cI. Let A ∈ 𝔤⋉𝔤^* ⊂ 𝔤𝔩(2N,ℂ), i.e.,
A = [ Θ  0;  W  Θ ],
where W,Θ ∈ 𝔤. Since 𝔤 is J-quadratic, we have
Θ^†J + JΘ = 0 ⟹ Θ^† = -(1/c)JΘJ,
W^†J + JW = 0 ⟹ W^† = -(1/c)JWJ.
We aim to prove that A^†J̃ + J̃A = 0. First, we get
A^† = -(1/c)[ JΘJ  JWJ;  0  JΘJ ],
and therefore
A^†J̃ + J̃A = -(1/c)[ JΘJ  JWJ;  0  JΘJ ][ 0  J;  J  0 ] + [ 0  J;  J  0 ][ Θ  0;  W  Θ ] = 0.
This concludes the proof. At first glance, this result indicates that the isospectral flow (<ref>) is suitable for isospectral integrators, in particular the midpoint isospectral integrator <cit.>
V_n = (I + (h/2)M(Ṽ)) Ṽ (I - (h/2)M(Ṽ)),
V_n+1 = V_n + h[Ṽ, M(Ṽ)],
because 𝔤 = 𝔰𝔲(N) is J-quadratic with J = I. More precisely: The scheme (<ref>) constitutes an isospectral integrator for the isospectral flow (<ref>). It preserves the Casimirs tr(V_n^k) = tr(V_n+1^k). This is a direct consequence of Lemma <ref> and Theorem 1 in <cit.>. However, this result is not enough, since 𝔤⋉𝔤^* is a proper subalgebra of the larger J̃-quadratic algebra 𝔤̃. Indeed, there is no guarantee that V remains of the lower triangular block form (<ref>), as the general form of an element in 𝔤̃ is
V = [ Θ  B;  W  Θ ],
where W,Θ,B ∈ 𝔤. Assuming that V remains of the lower triangular block form (<ref>), Theorem <ref> explains preservation of the spectrum of Θ, as it follows directly from the formula (<ref>) since tr(V^k) = 2 tr(Θ^k) in this case. However, we cannot obtain preservation of the cross-helicity Casimir from (<ref>), again because 𝔤̃ is a larger Lie algebra than 𝔤⋉𝔤^*.
Moreover, the Lie algebra 𝔣 = 𝔤⋉𝔤^* is not reductive, which means that the condition
[𝔣^†, 𝔣] ⊆ 𝔣
does not hold. Consequently, whether the flow preserves the Lie–Poisson structure cannot be addressed with the method developed in <cit.>, since that method requires a reductive Lie algebra. But, as we shall see below, the method still preserves all the geometric properties:

* the scheme (<ref>) preserves the cross-helicity Casimir;
* the scheme (<ref>) is a Lie–Poisson integrator on the dual 𝔣^* of the Lie algebra 𝔣 = 𝔰𝔲(N)⋉𝔰𝔲(N)^*.

Thus, the condition that the Lie algebra be reductive is sufficient, but not necessary, for the isospectral Runge-Kutta integrators developed in <cit.> to yield a Lie–Poisson integrator. The numerical scheme (<ref>) results in an integrator written for the matrices (W,Θ) in (<ref>) as follows:
Θ_n = Θ̃ - (h/2)[Θ̃, M̃_1] - (h^2/4) M̃_1 Θ̃ M̃_1,
Θ_n+1 = Θ_n + h[Θ̃, M̃_1],
W_n = W̃ - (h/2)[W̃, M̃_1] - (h/2)[Θ̃, M̃_2] - (h^2/4)(M̃_1 W̃ M̃_1 + M̃_2 Θ̃ M̃_1 + M̃_1 Θ̃ M̃_2),
W_n+1 = W_n + h[W̃, M̃_1] + h[Θ̃, M̃_2],
where M̃_1 = Δ_N^-1(W̃) and M̃_2 = Δ_N(Θ̃). In the forthcoming sections we present an alternative derivation of the scheme (<ref>), directly using reduction theory for semidirect products. This approach explains the properties of the method (<ref>) listed in Remark <ref>.

§.§ Reduction theory for semidirect products

The strategy for deriving the structure preserving numerical scheme for (<ref>) is based on the following observation. The flow (<ref>) on the dual of 𝔣 = 𝔰𝔲(N)⋉𝔰𝔲(N)^* can be seen as a Poisson reduction of a Hamiltonian system on T^*F with a right-invariant Hamiltonian. The reduction emerges from the momentum map μ: T^*F→𝔣^*. The situation reflects the fact that the continuous equations (<ref>) are a Poisson reduced Hamiltonian flow on the continuous counterpart of the cotangent bundle T^*F, as was discussed above. The momentum map has the property that it is a Poisson map between T^*F and 𝔣^*. Therefore, having a discrete symplectic flow Φ_h: T^*F→T^*F that is equivariant with respect to the lifted right action of F on T^*F, one gets a Poisson integrator ϕ_h: 𝔣^*→𝔣^* by applying the momentum map. We therefore need to reconstruct the canonical system on T^*F from the system (<ref>), apply a symplectic integrator that keeps the flow on T^*F, check that it is also equivariant, and finally reduce the method back to 𝔣^*. To do so, one needs the momentum map μ. First, the cotangent bundle T^*F of the magnetic extension F = SU(N)⋉𝔰𝔲(N)^* is
T^*F = {(Q,m,P,α) | Q ∈ SU(N), P ∈ T^*_Q(SU(N)), m ∈ 𝔰𝔲(N)^*, α ∈ 𝔰𝔲(N)}.
The lifted left action of the group F = SU(N)⋉𝔰𝔲(N)^* on its cotangent bundle T^*F is
(G,u)·(Q,m,P,α) = (GQ, Ad^*_Q u + m, (G^-1)^†P, α)
for (G,u) ∈ F. Now, the momentum map associated to the lifted left action (<ref>) is given by <cit.>
μ(Q,m,P,α) = ((PQ^† - QP^†)/2, QαQ^†) = (W^†, Θ).
The canonical equations on T^*F,
{
Q̇ = -M_1 Q,
Ṗ = M_1^† P + 2M_2^† Q α^†,
α̇ = 0,
}
for the right-invariant Hamiltonian H̃ = H∘μ defined by
M_1 = Δ^-1_N W,  M_2 = Δ_N Θ,  H(W,Θ) = 1/2 (tr(W^†M_1) + tr(Θ^†M_2)),
are reduced to the Lie–Poisson system on 𝔣^*,
Ẇ = [W, M_1] + [Θ, M_2],  Θ̇ = [Θ, M_1],
by means of the momentum map (<ref>). First, we observe that since W ∈ 𝔰𝔲(N), Θ ∈ 𝔰𝔲(N), and Δ_N: 𝔰𝔲(N)→𝔰𝔲(N) by construction (<ref>), we have that M_1, M_2 ∈ 𝔰𝔲(N). Using (<ref>) we get
Ẇ = 1/2 (Q̇P^† + QṖ^† - ṖQ^† - PQ̇^†).
Using (<ref>) and rearranging the terms, we get
Ẇ = 1/2 ([QP^†, M_1] + [PQ^†, M_1^†]) + [QαQ^†, M_2].
Since M_1, M_2 ∈ 𝔰𝔲(N), we have M_1^† = -M_1, M_2^† = -M_2, and get the first equation in (<ref>).
For the Θ component, we have
d/dt (QαQ^†) = Q̇αQ^† + QαQ̇^†.
Again, using (<ref>), we get
d/dt (QαQ^†) = [QαQ^†, M_1] ⟹ Θ̇ = [Θ, M_1].
This concludes the proof. We have not written the equation for the variable m in (<ref>), as it is not needed to recover the algebra variables (W,Θ); see (<ref>). Since, according to (<ref>), the matrix α is constant, the equations (<ref>) can be seen as a Hamiltonian flow on T^*SU(N) with the matrix α as a parameter defining an initial condition for the matrix Θ ∈ 𝔰𝔲(N). Thus, although the incompressible MHD equations have twice as many unknowns as the incompressible Euler equations, the Hamiltonian left-reconstructed flow still takes place on the cotangent bundle T^*SU(N), which reflects the more general observation that any Lie–Poisson system on a semidirect product can be viewed as a Newton system with a smaller symmetry group (see <cit.> for details). Recall that the space 𝔣^* can be seen as a quotient of T^*F with respect to the lifted right action of F, i.e., 𝔣^* = T^*F/F. In other words, points in 𝔣^* are F-orbits of points in T^*F. Then, if points a ∈ T^*F and b ∈ T^*F belong to the same orbit, i.e., b = f·a for some f ∈ F, they correspond to the same point μ(a) = μ(f·a) ∈ 𝔣^*. Let Φ_h: T^*F→T^*F be a symplectic method on T^*F. Then it descends to an integrator on 𝔣^* if the points (Φ_h∘f)(a) and (f∘Φ_h)(a) belong to the same orbit; in particular, it suffices that (Φ_h∘f)(a) = (f∘Φ_h)(a). This must hold for every point a ∈ T^*F, meaning that the method Φ_h: T^*F→T^*F must be equivariant. The setup is illustrated in a diagram, Fig. <ref>. In summary, we arrive at the following result: Consider the Lie–Poisson system (<ref>) evolving on the dual 𝔣^* of the semidirect product Lie algebra 𝔣 = 𝔰𝔲(N)⋉𝔰𝔲(N)^*. Let Φ_h: T^*F→T^*F be a symplectic numerical method applied to the Hamiltonian system (<ref>). If it is also equivariant with respect to the right SU(N)⋉𝔰𝔲(N)^* action
(Q,P,α,m)·(G,u) = (QG, P(G^-1)^†, Ad_G^-1 α, Ad^*_G m + u),
then it descends to a Lie–Poisson integrator ϕ_h on 𝔣^*.

§.§ Casimir preserving scheme

According to Theorem <ref>, we need a symplectic integrator for the Hamiltonian system (<ref>). We choose the simplest one among the symplectic Runge-Kutta methods, which is the implicit midpoint method. If we denote the right hand sides of (<ref>) by
f(Q,P) = -M_1 Q,  g(Q,P) = M_1^† P + 2M_2^† Q α^†,
so that
Q̇ = f(Q,P),  Ṗ = g(Q,P),
then the method Φ_h: (Q_n,P_n) ↦ (Q_n+1,P_n+1) is
Q_n = Q̃ - (h/2) f(Q̃,P̃),  P_n = P̃ - (h/2) g(Q̃,P̃),
Q_n+1 = Q̃ + (h/2) f(Q̃,P̃),  P_n+1 = P̃ + (h/2) g(Q̃,P̃).
The implicit midpoint method is known to be symplectic; the only thing we need to prove is that it is also equivariant with respect to the action (<ref>).
The implicit midpoint method Φ_h: (Q_n,P_n)↦(Q_n+1,P_n+1) defined by (<ref>) is equivariant with respect to the action (<ref>), i.e.,
Φ_h∘(G,u) = (G,u)∘Φ_h.
First, we can write Φ_h = Φ_h^(2)∘Φ_h^(1), where
Φ_h^(1): (Q_n,P_n)↦(Q̃,P̃),  Φ_h^(2): (Q̃,P̃)↦(Q_n+1,P_n+1).
Since equivariance of both Φ_h^(1) and Φ_h^(2) implies that their composition Φ_h is also equivariant, it is enough to prove equivariance of Φ_h^(1) and Φ_h^(2) individually. Further, as we have an explicit formula for Ψ = (Φ_h^(1))^-1, and since equivariance of Ψ implies equivariance of Φ_h^(1), we will prove equivariance of Ψ with respect to the F = SU(N)⋉𝔰𝔲(N)^* action: Ψ∘F = F∘Ψ. We have
(Ψ∘F)(Q̃,P̃) = (Q_n,P_n),
where
Q_n = Q̃G - (h/2) f(Q̃G, P̃(G^†)^-1, G^-1αG) = Q̃G + (h/2) M_1(W(Q̃G, P̃(G^-1)^†)) Q̃G.
Further, using (<ref>) we get
W(Q̃G, P̃(G^-1)^†) = 1/2 (Q̃GG^-1P̃^† - P̃(G^-1)^†G^†Q̃^†) = 1/2 (Q̃P̃^† - P̃Q̃^†) = W(Q̃,P̃).
Therefore f(Q̃G, P̃(G^†)^-1, G^-1αG) = -M_1(W(Q̃,P̃)) Q̃G, and finally
Q_n = (Q̃ - (h/2) f(Q̃,P̃)) G.
For P_n, we get
P_n = P̃(G^†)^-1 - (h/2) g(Q̃G, P̃(G^†)^-1, G^-1αG),
with
g(Q̃G, P̃(G^†)^-1, G^-1αG) = M_1^† P̃(G^†)^-1 + 2M_2^† Q̃G (G^-1αG)^†,
where M_1 = M_1(W(Q̃G, P̃(G^-1)^†)) and M_2 = M_2(Θ(Q̃G, G^-1αG)). We have already shown that W(Q̃G, P̃(G^-1)^†) = W(Q̃,P̃). Now,
Θ(Q̃G, G^-1αG) = Q̃G (G^-1αG) G^-1Q̃^† = Q̃αQ̃^† = Θ(Q̃,α).
Therefore, we also have that M_2(Θ(Q̃G, G^-1αG)) = M_2(Θ(Q̃,α)). Further,
g(Q̃G, P̃(G^†)^-1, G^-1αG) = (M_1^†P̃ + 2M_2^†Q̃α^†)(G^-1)^† = g(Q̃,P̃,α)(G^-1)^†.
Thus, for P_n we get
P_n = (P̃ - (h/2) g(Q̃,P̃,α))(G^-1)^†,
and finally
(Ψ∘F)(Q̃,P̃) = ((Q̃ - (h/2) f(Q̃,P̃))G, (P̃ - (h/2) g(Q̃,P̃,α))(G^-1)^†).
On the other hand,
(F∘Ψ)(Q̃,P̃) = F(Q̃ - (h/2) f(Q̃,P̃), P̃ - (h/2) g(Q̃,P̃)) = ((Q̃ - (h/2) f(Q̃,P̃))G, (P̃ - (h/2) g(Q̃,P̃))(G^-1)^†).
Comparing (<ref>) and (<ref>), we conclude that Ψ = (Φ_h^(1))^-1 is equivariant, and therefore Φ_h^(1) is so as well. Since the maps (Φ_h^(1))^-1 and Φ_h^(2) differ by a sign in front of h/2 (see (<ref>)), the proof of equivariance for Φ_h^(2) is exactly the same as for (Φ_h^(1))^-1, and Φ_h = Φ_h^(2)∘Φ_h^(1) is equivariant. This concludes the proof. We can see in equation (<ref>) that the implicit midpoint method is not formulated intrinsically on T^*SU(N). Namely, the matrix Q̃ does not necessarily belong to SU(N). However, the flow of the matrices (W,Θ) still remains on 𝔰𝔲(N)⋉𝔰𝔲(N)^* due to (<ref>). Moreover, the proof of equivariance does not use that Q̃ ∈ SU(N). Finally, we arrive at the following result. The implicit midpoint method (<ref>) for the Hamiltonian system (<ref>) descends to a Lie–Poisson integrator
ϕ_h: 𝔣^*→𝔣^*,  (W_n,Θ_n)↦(W_n+1,Θ_n+1)
for the Lie–Poisson flow (<ref>). The method is defined by the following equations:
Θ_n = Θ̃ - (h/2)[Θ̃, M̃_1] - (h^2/4) M̃_1 Θ̃ M̃_1,
Θ_n+1 = Θ_n + h[Θ̃, M̃_1],
W_n = W̃ - (h/2)[W̃, M̃_1] - (h/2)[Θ̃, M̃_2] - (h^2/4)(M̃_1 W̃ M̃_1 + M̃_2 Θ̃ M̃_1 + M̃_1 Θ̃ M̃_2),
W_n+1 = W_n + h[W̃, M̃_1] + h[Θ̃, M̃_2],
where M̃_1 = Δ_N^-1(W̃), M̃_2 = Δ_N(Θ̃). Furthermore, this integrator preserves the Casimirs (<ref>):
tr(f(Θ_n)) = tr(f(Θ_n+1)),
tr(W_n g(Θ_n)) = tr(W_n+1 g(Θ_n+1)).
The formulae (<ref>) are obtained straightforwardly by means of
W_n = 1/2 (Q_nP_n^† - P_nQ_n^†),  Θ_n = Q_nαQ_n^†,
W_n+1 = 1/2 (Q_n+1P_n+1^† - P_n+1Q_n+1^†),  Θ_n+1 = Q_n+1αQ_n+1^†.
Preservation of the Casimirs is a direct consequence of Theorem <ref> and Lemma <ref>.
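In code, one step of the scheme above reduces to a handful of matrix multiplications once the implicit stage equations are solved. The sketch below (ours; all names are ad hoc) solves them by fixed-point iteration, which converges for sufficiently small h. In place of Δ_N^{-1} and Δ_N it accepts arbitrary linear maps preserving 𝔰𝔲(N) (for MHD one would plug in the quantized Laplacian from the earlier sketches); the demo uses simple stand-in operators, which is enough to exhibit the exact preservation of the spectrum of Θ and of the cross-helicity tr(WΘ), since these come from the underlying group structure rather than from the specific choice of operators.

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

def magnetic_midpoint_step(W, Th, h, M1_of, M2_of, tol=1e-13, maxit=500):
    """One step (W_n, Theta_n) -> (W_{n+1}, Theta_{n+1}); the stage values
    (Wt, Tt) solve the implicit relations of the theorem, rearranged as a
    fixed-point problem."""
    Wt, Tt = W.copy(), Th.copy()
    for _ in range(maxit):
        M1, M2 = M1_of(Wt), M2_of(Tt)
        Tt_new = Th + 0.5 * h * comm(Tt, M1) + 0.25 * h * h * (M1 @ Tt @ M1)
        Wt_new = (W + 0.5 * h * comm(Wt, M1) + 0.5 * h * comm(Tt, M2)
                  + 0.25 * h * h * (M1 @ Wt @ M1 + M2 @ Tt @ M1 + M1 @ Tt @ M2))
        err = max(np.abs(Tt_new - Tt).max(), np.abs(Wt_new - Wt).max())
        Wt, Tt = Wt_new, Tt_new
        if err < tol:
            break
    M1, M2 = M1_of(Wt), M2_of(Tt)
    return W + h * comm(Wt, M1) + h * comm(Tt, M2), Th + h * comm(Tt, M1)

# demo with stand-in skew-preserving operators in place of Delta_N^{+-1}
rng = np.random.default_rng(2)
n = 6
A = np.diag(np.linspace(0.2, 1.0, n))        # Hermitian: X -> A X A preserves su(n)
Bs = rng.normal(size=(n, n)); Bs = (Bs + Bs.T) / 20
M1_of = lambda X: A @ X @ A
M2_of = lambda X: Bs @ X + X @ Bs

def random_su(k):
    Z = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    Z = (Z - Z.conj().T) / 2
    return Z - (np.trace(Z) / k) * np.eye(k)

W, Th = random_su(n), random_su(n)
W1, Th1 = magnetic_midpoint_step(W, Th, 0.05, M1_of, M2_of)
ev = lambda X: np.sort(np.linalg.eigvals(X).imag)
print(np.linalg.norm(ev(Th1) - ev(Th)))               # spectrum of Theta preserved
print(abs(np.trace(W1 @ Th1) - np.trace(W @ Th)))     # cross-helicity tr(W Theta)
```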
The scheme (<ref>) has order of consistency O(h^2), the same as the underlying symplectic Runge-Kutta method (<ref>). It is straightforward to generalize to an integrator of arbitrary order, by applying a symplectic s-stage Runge-Kutta scheme on T^*F.

§.§ Algebras other than 𝔰𝔲(N)

Here, we show that the above formalism also allows us to develop structure preserving integrators for other Lie algebras defined by different constraints. We start with J-quadratic Lie algebras.

§.§.§ J-quadratic Lie algebras

Let 𝔤 be J-quadratic, that is,
A^†J + JA = 0 for any A ∈ 𝔤,
where J satisfies J^2 = cI, with c ∈ ℝ∖{0}, and J = ±J^†. This setting covers most of the classical Lie algebras, such as 𝔰𝔲(N), 𝔲(N), 𝔰𝔬(N), 𝔰𝔭(N,ℂ), 𝔰𝔭(N,ℝ). The relation (<ref>) implies the following quadratic constraint on the group GL(N,ℂ) defining the corresponding matrix Lie group G:
Q ∈ G ⟺ Q^†JQ = J.
Therefore, the momentum map μ: T^*F→𝔣^* is
μ(Q,m,P,α) = (Π(QP^†), QαQ^-1) = (1/2 (QP^† - (1/c)JPQ^†J), (1/c)QαJQ^†J) = (W,Θ),
where α ∈ 𝔤, and
Π: 𝔤𝔩(N,ℂ)→𝔤,  Π(W) = 1/2 (W - (1/c)JW^†J),  W ∈ 𝔤𝔩(N,ℂ),
is a projector onto the Lie algebra 𝔤. Hamilton's equations on T^*F are
{
Q̇ = -M_1 Q,
Ṗ = M_1^† P + (2/c) J M_2 Q α J,
α̇ = 0.
}
Applying the implicit midpoint method to (<ref>) and using (<ref>), we get
Θ_n = (1/c) Q_n α J Q_n^† J = (1/c) (Q̃ + (h/2)M̃_1Q̃) α J (Q̃ + (h/2)M̃_1Q̃)^† J = (I + (h/2)M̃_1) Θ̃ (I - (h/2)M̃_1),
Θ_n+1 = Θ_n + h[Θ̃, M̃_1],
W_n = W̃ - (h/2)[W̃, M̃_1] - (h/2)[Θ̃, M̃_2] - (h^2/4)(M̃_1W̃M̃_1 + M̃_2Θ̃M̃_1 + M̃_1Θ̃M̃_2),
W_n+1 = W_n + h[W̃, M̃_1] + h[Θ̃, M̃_2],
which coincides with (<ref>). Since the method just derived has the same form for all J-quadratic Lie algebras, we have thus extended the previous setting from 𝔣 = 𝔰𝔲(N)⋉𝔰𝔲(N)^* to 𝔣 = 𝔤⋉𝔤^* for an arbitrary J-quadratic Lie algebra 𝔤. In this way, the method (<ref>) is the natural extension of the isospectral symplectic Runge-Kutta methods <cit.> for Lie–Poisson systems on J-quadratic Lie algebras 𝔤 to those on the magnetic extension 𝔣 = 𝔤⋉𝔤^* of 𝔤. We therefore call the integrators (<ref>) magnetic symplectic Runge-Kutta methods.

§.§.§ General type Lie algebras

Here, we do not assume that 𝔤 is a J-quadratic Lie algebra; in other words, the Lie group G does not allow for the constraint Q^†JQ = J. In this case one has to use the general formula for the momentum map μ: T^*F→𝔣^*:
μ(Q,m,P,α) = (Π(P^†Q), QαQ^-1) = (W^†,Θ).
The Hamiltonian equations are the same as (<ref>), and the integration scheme, in particular for the Θ variable, is
Θ_n = (I + (h/2)M̃_1) Θ̃ (I + (h/2)M̃_1)^-1,
Θ_n+1 = (I - (h/2)M̃_1) Θ̃ (I - (h/2)M̃_1)^-1.
One can see that in this case there is no way to get rid of inverse matrix operations, and therefore discrete semidirect product reduction theory provides us with a Lie–Poisson integrator different from (<ref>).
However, the question if there are mathematically reasonable and practically important examples of such Lie–Poisson flows remains open.§ HAZELTINE'S EQUATIONS FOR MAGNETIZED PLASMAIn this section, we consider another important example of a Lie–Poisson system on the dual of a semidirect product Lie algebra, which is Hazeltine's equations, describing 2D turbulence in magnetized plasma <cit.>.This system is a generalization of the MHD equations (<ref>) considered previously, namely{ ω̇={ω,Δ^-1ω}+{θ,Δθ}, θ̇={θ,Δ^-1ω}-α{θ,χ},χ̇={χ,Δ^-1ω}+{θ,Δθ}, .where ω and θ have the same meaning as before, χ is the normalized deviation of the particle density from a constant equilibrium value, and α is a constant parameter.If α=0, the system (<ref>) decouples into the dynamics of the two fields ω and θ that constitutes the MHD dynamics, and the dynamics of the χ field.The system (<ref>) is known to be a Lie–Poisson system, as first described in <cit.>, for the HamiltonianH=1/2∫_S^2(ωΔ^-1ω+θΔθ-αχ^2)μ,and with the Casimirs𝒞=∫_S^2(f(θ)+χ g(θ)+k(ω-χ))μfor arbitrary smooth functions f,g,k.Using the geometric quantization approach as described above, we get a spatially discretized analogue of the system (<ref>) given by{ Ẇ=[W,M_1]+[Θ,M_2], Θ̇=[Θ,M_1]-α[Θ,χ],χ̇=[χ,M_1]+[Θ,M_2], .where W,Θ,χ∈𝔰𝔲(N), and M_1=Δ_N^-1W, M_2=Δ_NΘ.By introducing a new variable Ψ=W-χ, the system (<ref>) becomes{ Ψ̇=[Ψ,M_1], Θ̇=[Θ,M_3],χ̇=[χ,M_3]+[Θ,M_2], .where M_3=M_1-αχ.The system (<ref>) is a Lie–Poisson flow on the dual 𝔣^* of the Lie algebra𝔣=𝔰𝔲(N)⊕(𝔰𝔲(N)⋉𝔰𝔲(N)^*).The quantized analogues of the Casimirs for (<ref>) are: * the spectrum of Ψ=W-χ, or equivalentlyℰ_k=tr(k(W-χ))for any smooth function k;* the spectrum of Θ, or equivalently𝒞_f=tr(f(Θ))for any smooth function f;* the cross-helicityJ=tr(χ g(Θ))for any smooth function g.We also have the Hamiltonian as a conserved quantity:H=1/2tr(WM_1+Θ M_2-αχ^2). As we see from (<ref>), we can apply the isospectral (midpoint) integrator <cit.> to the first equation for Ψ, and the magnetic (midpoint) integrator (<ref>) to the pair of equations for Θ and χ. 
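Before the combined scheme is written out below, it is convenient to have numerical monitors for these quantized invariants. A minimal sketch, assuming W, Θ, χ are stored as numpy arrays in 𝔰𝔲(N) and taking the illustrative choices k(x) = x², f(x) = x², g(x) = x; the helper names and the stand-ins for M₁, M₂ are ours, not from the paper's code.

```python
import numpy as np

def random_su(N, rng):
    """Random traceless skew-Hermitian matrix, i.e. an element of su(N)."""
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    A = 0.5 * (A - A.conj().T)
    return A - (np.trace(A) / N) * np.eye(N)

def hazeltine_invariants(W, Theta, chi, M1, M2, alpha):
    """Quantized Casimirs and Hamiltonian of the discretized Hazeltine flow."""
    Psi = W - chi
    E_k = np.trace(Psi @ Psi).real        # tr(k(W - chi)) with k(x) = x^2
    C_f = np.trace(Theta @ Theta).real    # tr(f(Theta))   with f(x) = x^2
    J   = np.trace(chi @ Theta).real      # cross-helicity tr(chi g(Theta)), g(x) = x
    H   = 0.5 * np.trace(W @ M1 + Theta @ M2 - alpha * chi @ chi).real
    return E_k, C_f, J, H

rng = np.random.default_rng(0)
N, alpha = 5, 2.0
W, Theta, chi = (random_su(N, rng) for _ in range(3))
M1, M2 = W, Theta   # illustration stand-ins for Delta_N^{-1} W and Delta_N Theta
print(hazeltine_invariants(W, Theta, chi, M1, M2, alpha))
```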
This results in the following scheme for the variables W, Θ, and χ:Θ_n=Θ̃-h/2[Θ̃,M̃_̃3̃]-h^2/4M̃_̃3̃Θ̃M̃_̃3̃, Θ_n+1=Θ_n+h[Θ̃,M̃_̃3̃],W_n=W̃-h/2[W̃,M̃_̃1̃]-h/2[Θ̃,M̃_2]-h^2/4(M̃_̃1̃W̃M̃_̃1̃+M̃_2Θ̃M̃_3+M̃_3Θ̃M̃_2-αM̃_̃1̃χ̃^2-αχ̃^2M̃_̃1̃+α^2χ̃^3), W_n+1=W_n+h[W̃,M̃_̃1̃]+h[Θ̃,M̃_̃2̃],χ_n=χ̃-h/2[χ̃,M̃_̃3̃]-h/2[Θ̃,M̃_2]-h^2/4(M̃_̃3̃χ̃M̃_̃3̃+M̃_2Θ̃M̃_3+M̃_3Θ̃M̃_2),χ_n+1=χ_n+h[χ̃,M̃_̃3̃]+h[Θ̃,M̃_̃2̃],where M̃_̃1̃=Δ_N^-1(W̃), M̃_̃2̃=Δ_N(Θ̃), M̃_̃3̃=M̃_̃1̃-αχ̃.The numerical scheme (<ref>) is a Lie–Poisson integrator for (<ref>).It preserves the Casimirs exactly,tr(k(W_n-χ_n))=tr(k(W_n+1-χ_n+1)),tr(f(Θ_n))=tr(f(Θ_n+1)),tr(χ_n g(Θ_n))=tr(χ_n+1g(Θ_n+1)),and nearly preserves the Hamiltonian (<ref>) in the sense of backward error analysis.§ KIRCHHOFF EQUATIONSAnother example of a Lie–Poisson system on the dual of a semidirect product Lie algebra is the Kirchhoff equations, describing the motion of a rigid body in an ideal fluid.This is a Lie–Poisson flow on the dual of 𝔣=𝔰𝔬(3)⋉ℝ^3≃𝔰𝔬(3)⋉𝔰𝔬(3)^*, and is thus a magnetic extension of the rigid body dynamics:{ ṁ=m×ω+p× u,ṗ=p×ω, .where p,m,ω,u∈ℝ^3, andu_i=∂ H/∂ p_i,ω_i=∂ H/∂ m_i, i=1,2,3,for the HamiltonianH(m,p)=1/2(∑_k=1^3a_km_k^2+∑_k,j=1^3b_kj(p_km_j+m_kp_j)+∑_k,j=1^3c_kjp_kp_j),where a_k,b_kj,c_kj are real numbers.Using the standard isomorphism between ℝ^3 and 𝔰𝔬(3), we construct skew-symmetric matricesW=[0 -m_3m_2;m_30 -m_1; -m_2m_10 ],Θ=[0 -p_3p_2;p_30 -p_1; -p_2p_10 ],M_1=[0 -ω_3ω_2;ω_30 -ω_1; -ω_2ω_10 ], M_2=[0 -u_3u_2;u_30 -u_1; -u_2u_10 ].Then, system (<ref>) takes the form of a Lie–Poisson flow on the dual of 𝔣=𝔰𝔬(3)⋉𝔰𝔬(3)^*:Ẇ=[W,M_1]+[Θ,M_2], Θ̇=[Θ,M_1]. The Casimirs (<ref>) for the case of Kirchhoff equations are a generalization of the well-known CasimirsI_1=p_1^2+p_2^2+p_3^2, I_2=m_1p_1+m_2p_2+m_3p_3,which are obtained from (<ref>) by taking f(Θ)=Θ^2, and g(Θ)=Θ.The following classical cases are known to be integrable for the Kirchhoff equations (for all of them b_kj=c_kj=0 for j k): * Kirchhoff case <cit.>a_1=a_2, b_11=b_22, c_11=c_22. * Clebsch case <cit.>b_11=b_22=b_33,c_11-c_22/a_3+c_33-c_11/a_2+c_22-c_33/a_1=0. * Lyapunov–Steklov–Kolosov case <cit.>b_11-b_22/a_3+b_33-b_11/a_2+b_22-b_33/a_1=0,c_11-(b_22-b_33)^2/a_1=c_22-(b_33-b_11)^2/a_2=c_33-(b_11-b_22)^2/a_3. In the forthcoming section, we will illustrate the method (<ref>) by both verifying the preservation of Casimirs, Hamiltonian, and capturing integrable behaviour. § NUMERICAL SIMULATIONSIn this section, we provide numerical tests of the schemes (<ref>) and (<ref>), verifying the exact preservation of Casimirs and near preservation of the Hamiltonian. §.§ Kirchhoff equationsWe begin verifying the properties of the method on low-dimensional integrable cases of the Kirchhoff equations.For all the simulations, we used the time step size h=0.1, and the final time of simulation is T=1000.Initial conditions are randomly generated 𝔰𝔬(3) matrices.[Numerical simulations for this section are implemented in a Python code available at <https://github.com/michaelroop96/kirchhoff.git>]First, we consider the Kirchhoff integrable case. Fig. <ref> shows the exact preservation of Casimir functions for the Kirchhoff integrable case, and Fig. <ref> shows nearly preservation of the Hamiltonian function.The phase portrait is shown in Fig. <ref>.One can clearly observe a quasi-periodic dynamics with a regular pattern typical for integrable systems.Second, we consider the Clebsch integrable case, for which we also observe exact preservation of Casimirs in Fig. 
<ref>, near-preservation of the Hamiltonian in Fig. <ref>, and a phase portrait in Fig. <ref>. Finally, we consider the Lyapunov–Steklov–Kolosov integrable case, with exact preservation of the Casimirs shown in Fig. <ref>, near-preservation of the Hamiltonian shown in Fig. <ref>, and a phase portrait in Fig. <ref>.§.§ Incompressible 2-D MHD equations Here, we demonstrate on low-dimensional matrices with N=5 that the integrator (<ref>) preserves the underlying geometry, namely the Casimirs and the Hamiltonian[Numerical simulations for this section are implemented in a Python code available at <https://github.com/michaelroop96/qflowMHD.git>]. The final time of the simulation is T=7500. Variations of the spectrum of Θ and of tr(WΘ) are presented in Fig. <ref>. One can see from Fig. <ref> that the variation of the Casimirs has magnitude 10^-16, which is the tolerance of the fixed-point iteration used to find (W̃,Θ̃). This indicates that the Casimirs are preserved exactly. In Fig. <ref>, one can see near-preservation of the Hamiltonian function; the magnitude of the variation is related to the error constant of the method.§.§ Hazeltine's equations Here[Numerical simulations for this section are implemented in a Python code available at <https://github.com/michaelroop96/qflowAlfven.git>], we demonstrate the properties stated in Theorem <ref> on low-dimensional matrices with N=5 and α=2. The final time of simulation is T=7500. The variations in the spectrum of Θ and of W-χ are presented in Fig. <ref>. Variations of the cross-helicity tr(χΘ) and of the Hamiltonian are presented in Fig. <ref>. | http://arxiv.org/abs/2311.16045v2 | {
"authors": [
"Klas Modin",
"Michael Roop"
],
"categories": [
"math.NA",
"cs.NA",
"math-ph",
"math.DG",
"math.MP",
"37M15, 65P10, 53D20, 76W05"
],
"primary_category": "math.NA",
"published": "20231127180916",
"title": "Spatio-temporal Lie-Poisson discretization for incompressible magnetohydrodynamics on the sphere"
} |
http://arxiv.org/abs/2311.15844v1 | {
"authors": [
"Slava Rychkov",
"Ning Su"
],
"categories": [
"hep-th",
"cond-mat.stat-mech"
],
"primary_category": "hep-th",
"published": "20231127141013",
"title": "New Developments in the Numerical Conformal Bootstrap"
} |
Energy Dissipation of Fast Electrons in Polymethylmetacrylate (PMMA): Towards a Universal Curve for Electron Beam Attenuation in Solidsfor Energies between ∼0 eV and 100 keV Olga Ridzel January 14, 2024 =============================================================================================================================================================================== In this paper we construct an uncountable union of line segments T which has full intersection with the sets ({0}× [0,1]) ∪({1}× [0,1]) ⊂ℝ^2 but has null two-dimensional measure. Further results are proved on the decay rate of μ (T) if the line segments comprising T are replaced with increasingly fine approximations by parallelograms.§ INTRODUCTION In <cit.>, Kaczynski proves a variety of results placing restrictions on possible boundary functions of a half-plane function with some property. The last result is different; he constructs a measurable half plane function f with a nonmeasurable boundary function. Crucial to this construction is a measure theoretic generalization of the trapezoid formula which we document here. For the remainder of the paper, let X_0={ (x,y):y=0} and X_1={ (x,y):y=1}, let μ refer to Lebesgue measure in the contextually obvious dimension (we will distinguish μ _1 and μ _2 to refer to one and two-dimensional Lebesgue measure respectively when necessary), and let μ^* refer to outer measure.A trapezoid T=∪ℓ is a disjoint union of line segments with each segment having its two endpoints on X_0 and X_1 (i.e. ℓ ={((1-t)x_0+tx_1,t):t∈ [0,1]}).If T is a trapezoid, let T_0=T∩ X_0 and T_1=T∩ X_1. In <cit.>, Kaczynski proved the following result:If T is a trapezoid, then μ ^*_2(T)=(μ ^*_1(T_0)+μ ^*_1(T_1))/2.This theorem was used to construct a collection of non-intersecting paths (not lines) γ _x corresponding to each point x∈ [0,1] such that for each path lim _t→ 0γ _x(t)=(x,0) but the union satisfies μ(∪ _x∈ [0,1]γ_x) =0. This construction immediately leads to the creation of a measurable half plane function with non-measurable boundary function <cit.>. We depict the situation in Figure 1.One may wonder the extent to which this result holds if we omit the restriction that the lines of a trapezoid are not disjoint. To this end, we amend the trapezoid definition:An unrestricted trapezoid T=∪ℓ is a union (not necessarily disjoint) of line segments with each segment having its two endpoints on X_0 and X_1 (i.e. ℓ ={((1-t)x_0+tx_1,t):t∈ [0,1]}).In <cit.>, Kaczynski proves that there is no sensible upper bound in terms of μ _1(T_0) and μ _1 (T_1) on the two dimensional measure of an unrestricted trapezoid.There exists an unrestricted trapezoid T with μ _1(T_0)=μ _1(T_1)=0 and μ _2(T)=∞.Proof: Let M be a residual set of measure zero (e.x. let x_k be an enumeration of rationals and M=⋂ _n=1^∞⋃_k=1^∞( x_k-1/2^k+1n,x_k+1/2^k+1n)). For any point (x,y) (y∈ (0,1)) and line ℓ passing through (x,y) with angle θ with the x-axis, let F_0(θ )=( x-y(θ ),0) and F_1(θ )=( x+(1-y) (θ ),1) denote the intersections ℓ∩ X_0 and ℓ∩ X_1 respectively. F_0 and F_1 are homeomorphisms from (0,π ) onto X_0 and X_1 respectively, so F_0^-1(M) and F_1^-1(M) are both residual sets of measure zero. Since the intersection of residual sets is nonempty, for every (x,y) with y∈ (0,1) we can choose an angle α =F_0^-1(M)∩ F_1^-1(M) such that the line ℓ intersecting (x,y) with angle α intersects X_0∩ M and X_1∩ M. 
Let the unrestricted trapezoid T be the union of all such lines so that μ _1(T_0)=μ _1(T_1)=μ _1(M)=0 and μ _2(T)=μ _2({(x,y),y∈ (0,1)} )=∞. .§ NO LOWER BOUND In <cit.>, Kaczynski poses the opposite question as an open problem, asking if there is a lower bound on the two-dimensional measure of an unrestricted trapezoid in terms of μ (T_0) and μ (T_1). We will prove the following theorem indicating no lower bound exists:There exists an unrestricted trapezoid T with μ _1(T_0)=μ _1(T_1)=∞ and μ _2(T)=0.Since it suffices to find an unrestricted trapezoid with measure zero and sets μ (T_0)=μ (T_1)=μ ([0,1])=1, we can formulate the question a different way:There exists a bijection f:[0,1]→ [0,1] such that, for the setS=⋃ _t∈ [0,1]⋃ _x∈ [0,1]( (1-t)x+tf(x),t),we have μ (S)=0.It is clear that Theorem 3 is a corollary of Proposition 1.It is perhaps not immediate that such a function should exist. Any monotone f induces a standard trapezoid which abides by Theorem 1. The choice f(x)=1-x gives measure μ (S)=1/2, and other decreasing functions cannot be massaged to induce a smaller measure. We will instead draw inspiration from fractal geometry <cit.>; the particular iterative scheme we choose will create one-dimensional slices in x similar to the Cantor middle third set. In Figure 2, the basic pattern is represented by the piecewise functionf_1(x)={[x x∈ [0,1/3);x+1/3 x∈ [1/3,2/3);x-1/3 x∈ [2/3,1] ].otherwise we divide [0,1] into three subintervals and connect them via parallelograms where f_1 essentially swaps the second and the third. In this case, μ (S)=5/6. In each subsequent iteration we embed an affine transformed version of the base pattern in each parallelogram, thus reducing the total measure of the unrestricted trapezoid (we will prove this limits to zero as in Proposition 1).Since this procedure iteratively divides subintervals of [0,1] in 3, it is perhaps not surprising that this this sequence of functions { f_k} has a natural representation on base 3 fractions, namelyf_k(0.x_1x_2x_3,,,)=0.y_1y_2...y_kx_k+1x_k+2...where y_j=2x_j (mod 3). Let f=lim _k→∞f_k such that for input x∈ [0,1], f replaces every 1 in its base 3 expansion with a 2 and vice-versa. We will prove that this function satisfies proposition 1.Before we prove proposition 1, we will state a result from <cit.> on modified Cantor sets:Let C_t be a modified Cantor set with representationC_t={∑ _k=1^∞x_k/3^k:x∈{ 0,1,t}} .Then the following holds:μ _1(C_t)={[ 1/q t=p/q in lowest terms, p+q≡ 0 (mod 3); 0 otherwise ].We are now ready to prove Proposition 1.Proof of Proposition 1: Let us define f acting on base 3 fractions asf_k(0.x_1x_2x_3,,,)=0.y_1y_2y_3...where y_j=2x_j (mod 3). Then the linear combination (1-t)x+tf(x) satisfiesS_t=⋃ _x∈ [0,1]((1-t)x+tf(x))={∑ _k=1^∞x_k/3^k:x∈{0,1+t,2-t}} .Since S_t is just a scaled copy of C_(2-t)/(1+t), by Proposition 2 we have the measureμ _1(S_t)={[ (1+t)/q (2-t)/(1+t)=p/q in lowest terms, p+q≡ 0 (mod 3); 0 otherwise ].Since μ _1(S_t) takes on nonzero value at only countably many t, the result follows from Fubini's theorem.§ PARALLELOGRAM FORMULATION Over 30 yeara after <cit.>, Kaczynski would outline a reformulation of the problem, offering three conjectures on the matter <cit.>. Rather than conceiving the unrestricted trapezoid as a union of lines, Kaczynski considered unions of increasingly thinner parallelograms.Let P_j,k^n={(x,y)∈ℝ^2:nx+(k-j)y≥ k-1,nx+(k-j)y≤ k,y∈[0,1]} be a parallelogram with parallel sides [(j-1)/n,j/n]×{ 0} and [(k-1)/n,k/n]×{ 1}. 
For σ∈Sym(n), we say that the unrestricted trapezoid T^n_σ is given by the union T^n_σ =⋃ _j=1^n P^n_j,σ(j).In this way, the unrestricted trapezoid associated with f_1 above can be expressed as T^3_(2 3).The following conjectures center around properties of the smallest unrestricted trapezoid of n parallelograms of width 1/n, namely α (n)=min _σ∈Sym(n)μ _2(T^n_σ).lim _n→∞α (n)=0.Kaczynski claimed to have proved this. We will show that this follows from Propoition 1 in the following section.There exists constants c_± >0 such that c_-/logn<α (n)<c_+/logn.Kaczynski claimed to be able to prove that c_- exists and also claimed a weaker upper bound in the form of α (n)<c_+(log (log n))^2/logn. We will prove a different upper bound.The sequence α (n) decreases monotonically.This was left as an open problem in <cit.>.§ PROVING THE CONJECTURES To prove conjectures from the previous section, we first connect projections of the Sierpinski gasket <cit.> to our modified Cantor sets.We define the Sierpinski gasket 𝒢⊂ℝ^2 as𝒢={∑ _k=1^∞x_k/3^k:x∈{ (0,0),(2,0),(0,2)}} .We define the n^th partial Sierpinski gasket 𝒢_n⊃𝒢 similarly. Let G_n={∑ _k=1^nx_k/3^k:x∈{ (0,0),(2,0),(0,2)}} and let Δ (a,b,c) be the area bounded by the triangle with vertices a,b,c∈ℝ^2.𝒢_n=⋃_x∈ G_nΔ(x,x+(1/3^n,0) ,x+( 0,1/3^n))) Let the projection operator proj_θ :ℝ^2→ℝ map points (x,y)∈ℝ^2 onto a line through the origin with angle θ to the x-axis such that proj_θ :(x,y)↦ xcosθ +ysinθ. We are interested in the sets proj_θ (𝒢_n); more generally there is an active field of research surrounding linear projections of fractals in ℝ^2 <cit.>. One of the major subjects of interest is the following:The Favard distance of a compact set E⊂ℝ^2 is given byFav(E)=1/π∫ _0^πμ(proj_θ (E))dθ While it is perhaps obvious that Fav(𝒢)=0 from proposition 2, a result by Bond and Volberg <cit.> gives a decay bound on the Favard distances of partial Sierpinski gaskets.There exist constants C,p>0 such that Fav(𝒢_n)≤ Cn^-p.A later result by Bond and Volberg proves that p>1/14 <cit.>.The rest of this section centers around utilizing proposition 3 to prove the conjectures from section 3. To do this, we must compare proj_θ (𝒢_n) to the modified Cantor sets from Proposition 2 and the horizontal slices of the unrestricted trapezoid. Let us define partial variants of these sets:Let D_n(a,b,c)={∑ _k=1^nx_k/3^k:x_k∈{ a,b,c}}. If T^3^n_σ is the unrestricted trapezoid associated with the n^th iteration induced by swapping 1↔ 2 in a base 3 expansion (i.e. if x-1=x_nx_n-1...x_1 is an n-digit base 3 number, then σ (x-1)+1=y_ny_n-1...y_1 where y_k=2x_k (mod 3)), then we write S^(n)_t=(ℝ×{ t} )∩ T^3^n_σ, the slice of T^3^n_σ at y=t, asS^(n)_t=⋃ _x∈ D_n(0,1+t,2-t)[ x,x+1/3^n]We similarly define the n^th partial modified Cantor set asC^(n)_t=⋃ _x∈ D_n(0,1,t)[ x,x+1/3^n] This allows us to make the following comparison:For t∈ [0,1] and φ (t)=tan^-1(2-t/1+t), we haveμ (S^(n)_t)≤ (1+t)μ(proj_φ (t)(𝒢_n)) Proof: We can show that μ (S^(n)_t)≤ (1+t)μ( C^(n)_(2-t)/(1+t)) for t≥ 0 sinceS^(n)_t= ⋃ _x∈ D_n(0,1+t,2-t)[ x,x+1/3^n]⊂⋃ _x∈ D_n(0,1+t,2-t)[ x,x+1+t/3^n] = ⋃ _x∈ D_n(0,1,2-t/1+t)(1+t)[ x,x+1/3^n] =(1+t)C^(n)_(2-t)/(1+t)The result then follows from noticing that for θ≤π/4, we have proj_θ (𝒢_n)=C^(n)_tanθ. Combining these results leads us to a similar polynomial bound on the decay of the unrestricted trapezoids induced by the iterative scheme described in Definition 6.Let σ be the permutation on base 3 numbers as defined in Definition 6. 
Then there exist constants C,p>0 such that μ( T^3^n_σ)≤ Cn^-p.Proof: By Fubini's theorem, The measure of T^3^n_σ is given by integrating over the measures of the horizontal slicesμ _2( T^3^n_σ) =∫ _0^1μ( S^(n)_t) dtFrom Lemma 1, this integral can be bound by an integral over projections of the Sierpinski gasket:∫ _0^1μ( S^(n)_t) dt≤∫ _0^1(1+t)μ(proj_φ (t)(𝒢_n)) dtBy the change of variables θ =φ (t), we can write:∫ _0^1(1+t)μ(proj_φ (t)(𝒢_n)) dt =∫ _tan ^-1(1/2)^tan ^-1(2)9(tan ^2θ +1)/(tanθ +1) ^3μ(proj_θ(𝒢_n)) dθPlacing an estimate on the trigonometric portion of the integral we can write∫ _tan ^-1(1/2)^tan ^-1(2)9(tan ^2θ +1)/(tanθ +1) ^3μ(proj_θ(𝒢_n)) dθ≤10/3∫ _tan ^-1(1/2)^tan ^-1(2)μ(proj_θ(𝒢_n)) dθSince Lebesgue measure is nonnegative, we can extend the limits of integration to bound the measure of T^3^n_σ by a factor of the associated Favard distanceμ( T^3^n_σ)≤10π/3Fav(𝒢_n)Proposition 3 completes the proof. This result translates to a logarithmic bound on α (n) only for n=3^m where m∈ℕ. If n has the base 3 representation n=x_kx_k-1...x_0, we can define σ _n as in Figure 3 such that the first x_k3^k intervals map to x_k copies of T^3^k_σ, the next x_k-13^k-1 intervals map to x_k-1 copies of T^3^k-1_σ, etc. Using this scheme, we can prove a weaker upper bound on Conjecture 2 (consequently proving Conjecture 1 as well):There exist constants C,p>0 such that α (n)<C/(logn )^p.Before proving this theorem, we supply a technical lemma:For p∈ (0,1), there exists a constant C such that∫ _0^ne^xx^-pdx≤ Ce^nn^-p Proof: Let us make the change of variables t=1-x/n such that this integral can be transformed into∫ _0^ne^xx^-pdx=e^n/n^p-1∫ _0^1e^-nt/(1-t)^pdtFor any constant C>1, we can find t_0 dependent only on C,p such that pt+1>(1-t)^-p for t∈ (0,t_0). This allows us to split the integral∫ _0^1e^-nt/(1-t)^pdt≤∫ _0^t_0C(pt+1)e^-ntdt+∫ _t_0^1e^-nt_0/(1-t)^pdtBoth of these integrals have closed form expressions, the leading asymptotic term 1/n arising from the left integral, thus completing the proof. Proof of Theorem 4: It is clear that α (n)≤μ( T^n_σ _n). Let n=x_kx_k-1...x_0 be a base 3 integer representation. Then the following holds:μ( T^n_σ _n) =1/n∑ _j=0^k x_j3^j μ( T^3^j_σ)Each digit must satisfy x_j≤ 2, and by Proposition 4 there exist constants C,p>0 such that μ( T^3^j_σ) ≤ Cj^-p, so we may write1/n∑ _j=0^k x_j3^j μ( T^3^j_σ) <C/n∑ _j=1^⌊log _3n⌋ 3^jj^-pSince the function f(x)=3^xx^-p is increasing for x>p/log _3(x), we can bound this sum by the integralC/n∑ _j=1^⌊log _3n⌋ 3^jj^-p≤C/n∫ _0^log _3 n+13^xx^-pdxBy making the change of variables x=ulog _3e, we can writeC/n∫ _0^log _3 n+13^xx^-pdx=C(log _3e)^1-p/n∫ _0^log 3ne^uu^-pduThe result follows from applying Lemma 2 and collecting constants. § APPLICATION TO CLUSTER SETS While Theorem 1 assisted in constructing a measurable half-plane function with nonmeasurable boundary function in <cit.>, the open questions from <cit.> and <cit.> answered by Propositions 1 and 4 were also initially conceived to address problems related to boundary functions and cluster sets. We state the relevant material here for completeness.Given a function defined on the open unit disk f:𝔻→ℂ and a Jordan arc γ with an endpoint on the boundary of the disk, we say that C(f,γ ) is the cluster set of f on γ whereC(f,γ )={ w∈ℂ:∃{z_n} s.t. z_n∈γ, |lim z_n|=1, lim f(z_n)=w}Let γ _1,γ_2,γ _3⊂𝔻 be Jordan arcs with a similar endpoint z on the unit circle. 
If, for a disk function f, the intersection satisfies C(f,γ _1)∩ C(f,γ _2)∩ C(f,γ _3)=∅, then f has the three-arc property at z. If each γ _1,γ _2,γ _3 is a union of rectilinear segments, then we say f has the three-segment property at z. In <cit.>, a version of conjecture 1 was suggested to assist in answering the following open question from <cit.> (presented as conjecture):There exists a continuous function in 𝔻 having the three-segment property at each point of a set of positive measure or second category on |z|=1.Weaker versions of this conjecture have been proved. Jarník <cit.> gave an example of a function having the three-segment property at uncountably many points. Piranian was cited in <cit.> as having proved the existence of a continuous disk function with the three-arc (not segment) property. Bagemihl later proved the existence of a normal meromorphic (not continuous) disk function with the three-segment property <cit.>.unsrt | http://arxiv.org/abs/2311.16210v1 | {
"authors": [
"Parker Kuklinski"
],
"categories": [
"math.CA"
],
"primary_category": "math.CA",
"published": "20231127173451",
"title": "An uncountable union of line segments with null two-dimensional measure"
} |
http://arxiv.org/abs/2311.15726v1 | {
"authors": [
"Nandana Bhattacharya",
"Arpita Sen",
"Jianwei Zhang",
"Ranjan Kumar Patel",
"Siddharth Kumar",
"Prithwijit Mandal",
"Shashank Kumar Ojha",
"Jyotirmay Maity",
"Zhan Zhang",
"Hua Zhou",
"Fanny Rodolakis",
"Padraic Shafer",
"Christoph Klewe",
"John William Freeland",
"Zhenzhong Yang",
"Umesh Waghmare",
"Srimanta Middey"
],
"categories": [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall",
"cond-mat.str-el"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231127112016",
"title": "Electron correlation mediated site-selective charge compensation in polar/non-polar heterointerface"
} |
|
A Hardy-Littlewood triple is a 3-tuple of integers with the form (n, n+2, n+6). In this paper, we study Hardy-Littlewood triples of the form (p, P_a, P_b) and improve the upper and lower bound orders of it, where p is a prime and P_r has at most r prime factors. Our new results generalize and improve the previous results. Probing spin fractionalization with ESR-STM absolute magnetometry Y. del Castillo^1,2, J. Fernández-Rossier^1[On permanent leave from Departamento de Física Aplicada, Universidad de Alicante, 03690 San Vicente del Raspeig, Spain]^,[[email protected]]January 14, 2024 =============================================================================================================================================================================================================§ INTRODUCTIONLet x be a sufficiently large integer, N be a sufficiently large even integer, p be a prime, and let P_r denote an integer with at most r prime factors counted with multiplicity. For each N ⩾ 4 and r ⩾ 2, we define π_1,r(x):= |{p : p ⩽ x, p+2=P_r}|and D_1,r(N):= |{p : p ⩽ N, N-p=P_r}|. In 1966 Jingrun Chen <cit.> proved his remarkable Chen's theorem: let x be a sufficiently large integer and N be a sufficiently large even integer, thenπ_1,2(x)≫C_2 x/(log x)^2and D_1,2(N) ≫C(N) N/(log N)^2,whereC_2:=2∏_p>2(1-1/(p-1)^2) and C(N):=∏_p | N p>2p-1/p-2∏_p>2(1-1/(p-1)^2)and the detail was published in <cit.>. In 1990, Wu <cit.> generalized Chen's theorem and proved thatD_1,3(N) ≫C(N) N/(log N)^2loglog NandD_1,r(N) ≫C(N) N/(log N)^2 (loglog N)^r-2.and Kan <cit.> proved the similar result in 1991. Kan <cit.> also proved the more generalized theorem in 1992:D_s,r(N) ≫C(N) N/(log N)^2 (loglog N)^s+r-3,where s ⩾ 1,D_s,r(N):= |{P_s : P_s ⩽ N, N-P_s=P_r}|.Clearly their methods can be modified to get a similar lower bound order on the twin prime version. For this, we refer the interested readers to <cit.>.Now we focus on the Hardy-Littlewood triples (n, n+2, n+6) with almost-prime values. In fact, if we defineπ_1,a,b(x):= |{p : p ⩽ x, p+2=P_a, p+6=P_b}|andD_1,a,b(N):= |{p : p ⩽ N, N-p=P_a, p+6=P_b}|,then a special case of Hardy-Littlewood conjecture states that π_1,1,1(x) should be asymptotic to C_3 x/(log x)^3, whereC_3=9/2∏_p>3(1-3 p-1/(p-1)^3) ≈ 2.86259.In 2015, Heath-Brown and Li <cit.> proved that π_1,2,76(x) ≫C_3 x/(log x)^3 and Cai <cit.> improved this result to π_1,2,14(x) ≫C_3 x/(log x)^3 by using a delicate sieve process later. Their results refined Chen's theorem. Like Wu's generalization of Chen's theorem, we may conjecture that π_1,3,r(x) should be asymptotic to C_3 xloglog x/(log x)^3 for some large r. Very recently, Li and Liu <cit.> proved π_1,3,6(x) ≫C_3 x/(log x)^3. They also got π_1,3,3(x) ≫C_3 x/(log x)^3 by assuming GEH(0.99). In this paper, we improve their asymptotic estimates of π_1,3,r(x) on the orders by fixing some small prime factors q and prove that:For every integer a ⩾ 2 and b ⩾ 14, we haveπ_1,a,b(x) ≫C_3 x/(log x)^3(loglog x)^a-2 and D_1,a,b(N) ≫N/(log N)^3(loglog N)^a-2,where π_1,a,b(x) and D_1,a,b(N) are defined above.By similar arguments, we also obtain those theorems:For every integer a ⩾ 3 and b ⩾ 6, we haveπ_1,a,b(x) ≫C_3 x/(log x)^3(loglog x)^a-3 and D_1,a,b(N) ≫N/(log N)^3(loglog N)^a-3.For every integer a ⩾ 4 and b ⩾ 5, we haveπ_1,a,b(x) ≫C_3 x/(log x)^3(loglog x)^a-4 and D_1,a,b(N) ≫N/(log N)^3(loglog N)^a-4. 
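As a numerical cross-check of the singular-series constant C_3 defined above, one can truncate the Euler product at a finite cutoff. A minimal sketch; the cutoff 10^6 is an arbitrary illustration choice, and the tail beyond it contributes only O(1/(cutoff·log cutoff)) to the logarithm of the product.

```python
from sympy import primerange

# C_3 = (9/2) * prod_{p > 3} (1 - (3p - 1)/(p - 1)^3), truncated at 10**6.
C3 = 9.0 / 2.0
for p in primerange(5, 10**6):
    C3 *= 1.0 - (3.0 * p - 1.0) / (p - 1.0) ** 3
print(C3)   # compare with the value ~2.86259 quoted above
```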
We also prove some conditional results:For every integer a ⩾ 2 and b ⩾ 4, assuming GEH(0.99), we haveπ_1,a,b(x) ≫C_3 x/(log x)^3(loglog x)^a-2 and D_1,a,b(N) ≫N/(log N)^3(loglog N)^a-2.For every integer a ⩾ 3 and b ⩾ 3, assuming GEH(0.99), we haveπ_1,a,b(x) ≫C_3 x/(log x)^3(loglog x)^a-3 and D_1,a,b(N) ≫N/(log N)^3(loglog N)^a-3. While calculating, we find that it seems hard to improve our Theorems <ref>–<ref>. (For example, you need to get an improvement about 45 percent on the sieve process to replace the condition b ⩾ 14 by b ⩾ 13 in our Theorem 1.)In this paper, we only provide a detailed proof of π_1,3,14(x) ≫C_3 x loglog x/(log x)^3, which is a simple version of Theorem <ref>. The readers can modify our proof to get Theorems <ref>–<ref>.§ PRELIMINARY LEMMASLet 𝒜 denote a finite set of positive integers, 𝒫 denote an infinite set of primes, q denote a prime number satisfies q< x^ε and put𝒜={p+2/q: 7<p ⩽ x, p ≡ -2 ( q), (p+6, P(z))=1}, 𝒫={p : (p,q)=1}, 𝒫(r)={p : p ∈𝒫,(p, r)=1}, P(z)=∏_p∈𝒫 p<z p, 𝒜_d={a : a ∈𝒜, a ≡ 0( d)}, S(𝒜; 𝒫,z)=∑_a ∈𝒜(a, P(z))=1 1, ℳ_k={n: n=p_1 p_2 ⋯ p_k,13<n ⩽ x+6, x^0.005⩽ p_1 < ⋯ < p_k, n ≡ 4 ( q) }, 𝒜^(k)={p+2/q : 7<p ⩽ x, p ≡ -2 ( q), p+6 ∈ℳ_k}, ℰ={qmp_1p_2p_3p_4: qmp_1p_2p_3p_4 ⩽ x+2, (x/q)^1/13⩽ p_1<p_2<p_3<p_4<(x/q)^1/8.4, (m, qp_1^-1 P(p_2))=1}, ℬ={n-2 : n ∈ℰ}, 𝒲={{p+2/q, p+6}: 7<p ⩽ x, p ≡ -2 ( q)}, 𝒲^(1)={{n-2,n+4}: n ∈ℰ}, 𝒲^(2)={{n-6, n-4/q}: n ∈ℳ_k }.([<cit.>, Lemma 1], deduced from <cit.>, Proposition 1). Let 𝒲 be a finite subset of ℕ^2. Suppose that z_1, z_2⩾ 2 with log z_1≍log z_2 and write 𝐳={z_1, z_2}. For 𝐝={d_1, d_2} and 𝐧={n_1, n_2}, we write 𝐝|𝐧 to mean that d_1| n_1 and d_2| n_2. Set𝒲_𝐝={𝐧∈𝒲: 𝐝|𝐧},S(𝒲, 𝐳)=∑_{n_1, n_2}∈𝒲p|n_1⇒ p ⩾ z_1p| n_2⇒ p ⩾ z_2 1 .Suppose that|𝒲_𝐝|=h(𝐝) X+R(𝐝)for some X>0 independent of 𝐝 and some multiplicative function h(𝐝) ∈ [0,1) such that h(p, 1)+h(1, p)-1<h(p, p) ⩽ h(p, 1)+h(1, p) for all primes p, andh(p, 1), h(1, p) ⩽ c p^-1,h(p, p) ⩽ c p^-2for some constant c ⩾ 2.Let h_1(d)=h(d, 1) and h_2(d)=h(1, d). Suppose that∏_w ⩽ p<z(1-h_j(p))^-1⩽log z/log w(1+L/log w) (j=1,2)for z ⩾ w ⩾ 2 and some positive constant L. Then: S(𝒲, 𝐳) ⩽ X V(z_0, h^*) V_1 V_2{(F(s_1) F(s_2))(1+O((log D_1 D_2)^-1 / 6))} +O_ε(∑_d_1 d_2⩽(D_1 D_2)^1+ετ^4(d_1 d_2)|R({d_1, d_2})|), S(𝒲, 𝐳) ⩾ X V(z_0, h^*) V_1 V_2{(f(s_1) F(s_2)+F(s_1) f(s_2)-F(s_1) F(s_2))(1+O((log D_1 D_2)^-1 / 6))} +O_ε(∑_d_1 d_2⩽(D_1 D_2)^1+ετ^4(d_1 d_2)|R({d_1, d_2})|)for any ε>0, wherez_0=exp(√(log z_1 z_2)),s_j=log D_j/log z_j(j=1,2), V(z_0, h^*)=∏_p<z_0(1-h^*(p)),h^*(p)=h(p, 1)+h(1, p)-h(p, p), V_j=∏_z_0⩽ p<z_j(1-h_j(p)) (j=1,2) .and γ denotes the Euler's constant, f(s) and F(s) are determined by the following differential-difference equationF(s)=2 e^γ/s,f(s)=0,0<s ⩽ 2,(s F(s))^'=f(s-1), (s f(s))^'=F(s-1),s ⩾ 2 . ([<cit.>, Lemma 2], deduced from <cit.>).F(s)= 2 e^γ/s,0<s ⩽ 3;F(s)= 2 e^γ/s(1+∫_2^s-1log (t-1)/t d t),3 ⩽ s ⩽ 5 ;F(s)= 2 e^γ/s(1+∫_2^s-1log (t-1)/t d t+∫_2^s-3log (t-1)/t d t ∫_t+2^s-11/ulogu-1/t+1 d u),5 ⩽ s ⩽ 7;f(s)= 2 e^γlog (s-1)/s,2 ⩽ s ⩽ 4 ;f(s)= 2 e^γ/s(log (s-1)+∫_3^s-1d t/t∫_2^t-1log (u-1)/u d u),4 ⩽ s ⩽ 6 ;f(s)= 2 e^γ/s(log (s-1)+∫_3^s-1d t/t∫_2^t-1log (u-1)/u d u..+∫_2^s-4log (t-1)/t d t ∫_t+2^s-21/ulogu-1/t+1logs/u+2 d u),6 ⩽ s ⩽ 8. ([<cit.>, Lemma 4], deduced from <cit.>, <cit.>). 
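The pair (F, f) can be tabulated directly from this delay system by stepping forward on a grid and reading the history terms off the already-computed values. A minimal sketch (the step size is an illustration choice); the printed values can be compared against the closed forms recorded in the next lemma.

```python
import numpy as np

GAMMA = 0.5772156649015329
m = 2000                     # grid points per unit of s (illustration choice)
h = 1.0 / m
n = 8 * m + 1
s = np.linspace(0.0, 8.0, n)
uF = np.full(n, 2.0 * np.exp(GAMMA))   # uF(s) = s F(s), constant on (0, 2]
uf = np.zeros(n)                        # uf(s) = s f(s), zero on (0, 2]

for k in range(2 * m, n - 1):
    # (sF)'(s) = f(s-1) = uf(s-1)/(s-1); (sf)'(s) = F(s-1) = uF(s-1)/(s-1).
    uF[k + 1] = uF[k] + 0.5 * h * (uf[k - m] / s[k - m]
                                   + uf[k + 1 - m] / s[k + 1 - m])
    uf[k + 1] = uf[k] + 0.5 * h * (uF[k - m] / s[k - m]
                                   + uF[k + 1 - m] / s[k + 1 - m])

F0, f0 = uF / (2 * np.exp(GAMMA)), uf / (2 * np.exp(GAMMA))  # normalized sF, sf
print(f0[4 * m], np.log(3.0))   # f_0(4) = log 3 from the closed form
print(F0[5 * m])                 # F_0(5) = 1 + int_2^4 log(t-1)/t dt ~ 1.4055
```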
Letx>1,z=x^1/u,Q(z)=∏_p<z p.Then for u ⩾ 1, we have∑_n ⩽ x(n, Q(z))=1 1=w(u) x/log z+O(x/log ^2 z),where w(u) is determined by the following differential-difference equationw(u)=1/u, 1 ⩽ u ⩽ 2, (u w(u))^'=w(u-1), u ⩾ 2 .Moreover, we havew(u) ⩽1/1.763, u ⩾ 2, w(u)<0.5644, u ⩾ 3, w(u)<0.5617, u ⩾ 4. (GEH(θ), deduced from [<cit.>, Conjecture 1.2]). Let θ∈ (0,1) be a constant and putP(y_1, …, y_k ; z_1, …, z_k) ={m=p_1⋯ p_k : y_1⩽ p_1⩽ z_1, …, y_k⩽ p_k⩽ z_k},π_k(x ; q, a) =∑_m ∈ P(y_1, …, y_k ; z_1, …, z_k) m ⩽ x, m ≡ a( q) 1,π_k(x ; q) =∑_m ∈ P(y_1, …, y_k ; z_1, …, z_k) m ⩽ x,(m, q)=1 1 .Then for any A>0 and l ⩾ 1 there exists B=B(A, l)>0 such that∑_q ⩽ x^θlog ^-B xτ^l(q) max _(a, q)=1|π_k(x ; q, a)-π_k(x ; q)/φ(q)| ≪x/log ^A x,where the implied constant depends only on k, l and A. ([<cit.>, Lemma 3], deduced from <cit.>, <cit.>). GEH(θ) is true for θ⩽1/2. § WEIGHTED SIEVE METHODWe have4π_1,3,14(x) ⩾ 3S(𝒜;𝒫, (x/q)^1/13)+S(𝒜;𝒫, (x/q)^1/8.4)+∑_(x/q)^1/13⩽ p_1<p_2<(x/q)^1/8.4S(𝒜_p_1 p_2;𝒫,(x/q)^1/13)+∑_(x/q)^1/13⩽ p_1<(x/q)^1/8.4⩽ p_2<(x/q)^0.475-2/13-εp^-1_1S(𝒜_p_1 p_2;𝒫,(x/q)^1/13)-∑_(x/q)^1/13⩽ p<(x/q)^1/3.145S(𝒜_p;𝒫,(x/q)^1/13)-∑_(x/q)^1/13⩽ p<(x/q)^1/3.81S(𝒜_p;𝒫,(x/q)^1/13)-∑_(x/q)^1/13⩽ p_1<(x/q)^1/3.145⩽ p_2 <(x/qp_1)^1/2S(𝒜_p_1 p_2;𝒫(p_1),p_2)-∑_(x/q)^1/8.4⩽ p_1<(x/q)^1/3.81⩽ p_2 <(x/qp_1)^1/2S(𝒜_p_1 p_2;𝒫(p_1),(x/qp_1 p_2)^1/2)-∑_(x/q)^1/13⩽ p_1 < p_2 < p_3< p_4<(x/q)^1/8.4S(𝒜_p_1 p_2 p_3 p_4;𝒫(p_1),p_2) -∑_(x/q)^1/13⩽ p_1 < p_2 < p_3<(x/q)^1/8.4⩽ p_4< (x/q)^0.475-2/13-εp^-1_3S(𝒜_p_1 p_2 p_3 p_4;𝒫(p_1),p_2) -2∑_(x/q)^1/3.145⩽ p_1 < p_2 <(x/qp_1)^1/2S(𝒜_p_1 p_2;𝒫(p_1),p_2)-2∑_(x/q)^1/3.81⩽ p_1 < p_2 <(x/qp_1)^1/2S(𝒜_p_1 p_2;𝒫(p_1),p_2)-∑_k=15^199 S(𝒜^(k) ; 𝒫, (x/q)^1/13)-∑_k=15^199 S(𝒜^(k) ; 𝒫, (x/q)^1/8.4)-∑_k=15^199 S(𝒜^(k) ; 𝒫, (x/q)^1/3.81)-∑_k=15^199 S(𝒜^(k) ; 𝒫, (x/q)^1/3.145)+O(x^12/13) = (3 S_11+S_12)+(S_21+S_22)-(S_31+S_32)-(S_41+S_42)-(S_51+S_52)-2(S_61+S_62)-(S_71+S_72+S_73+S_74)+O(x^12/13)= S_1+S_2-S_3-S_4-S_5-2S_6-S_7+O(x^12/13). It is similar to that of [<cit.>, Lemma 6] so we omit it here.§ PROOF OF THEOREM 1.1In this section, sets 𝒜, ℰ, ℬ, ℳ_k, 𝒜^(k), 𝒲, 𝒲^(1) and 𝒲^(2) are defined respectively. §.§ Evaluation of S_1, S_2, S_3We are going to use Lemma <ref> to the set 𝒲 and obtain upper and lower bounds of S_1, S_2 and S_3. For a prime p>7, we note that d_1 |(p+2/q) and d_2 | (p+6) imply that (d_1,d_2)=(d_1,2)=(d_2,6)=1. Therefore we can take|𝒲_𝐝|=h(𝐝) X+R(𝐝),X=π(x)/φ(q),h(𝐝)= 1/φ(d_1 d_2),(d_1, d_2)=(d_1, 2)=(d_2, 6)=1, 0,otherwise. It is easy to show thath_1(p)= 0, p=2,1/p-1, p ⩾ 3 ;h_2(p)= 0, p=2,3,1/p-1, p ⩾ 5 ;h^*(p)= 0, p=2,1/2, p=3,2/p-1, p ⩾ 5.andV(z_0, h^*) V_1 V_2=1/2∏_3<p ⩽ z_0(1-2/p-1) ∏_z_0<p ⩽ (x/q)^1/13(1-1/p-1) ∏_z_0<p ⩽ x^0.005(1-1/p-1)=(1+O(z_0^-1)) C_3 V((x/q)^1/13) V(x^0.005)withV(z)=∏_p<z(1-1/p)=e^-γ/log z(1+O(1/log z)). To deal with the error term, by the Chinese remainder theorem, we have|R(𝐝)| ⩽|r(d_1 d_2)|,where|r(d)|=max _(a, d)=1|∑_p ⩽ x p ≡ a( d) 1-π(x)/φ(d)|+O(1) .By Bombieri's theorem we have∑_d_1 d_2⩽ x^1/2-ετ^4(d_1 d_2)|R(𝐝)| ≪ x(log x)^-5 . Then by Lemma <ref> we haveS_11= S(𝒲,{(x/q)^1/13, x^0.005})⩾(1+o(1)) π(x)/φ(q) C_3 V((x/q)^1/13) V(x^0.005){f(6.175) F(5)+F(6.175) f(5)-F(6.175) F(5)} +O(x(log x)^-5) = (1+o(1)) 4 C_3 π(x)/φ(q)(log x^0.475-ε)(log x^0.025){f_0(6.175) F_0(5)+F_0(6.175) f_0(5)-F_0(6.175) F_0(5)} +O(x(log x)^-5)⩾818.10189 C_3 x loglog x/ (log x)^3,wheref_0(s)=s/2 e^γ f(s),F_0(s)=s/2 e^γ F(s) . 
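The normalized functions f_0 and F_0 just defined can be evaluated from the explicit formulas in Lemma 2 above by nested quadrature, which gives a way to cross-check the displayed constant: the bound appears to be assembled as 4/(0.475·0.025)·{f_0(6.175)F_0(5)+F_0(6.175)f_0(5)-F_0(6.175)F_0(5)}, with π(x) ∼ x/log x and the sum of 1/φ(q) over primes q < x^ε ∼ loglog x supplying the remaining factors. A sketch under that reading, using scipy:

```python
from math import log
from scipy.integrate import quad

def F0(s):
    """Normalized sF(s)/(2e^gamma) via Lemma 2 (valid here for 3 <= s <= 7)."""
    val = 1.0 + quad(lambda t: log(t - 1.0) / t, 2.0, s - 1.0)[0]
    if s > 5.0:
        inner = lambda t: quad(lambda u: log((u - 1.0) / (t + 1.0)) / u,
                               t + 2.0, s - 1.0)[0]
        val += quad(lambda t: log(t - 1.0) / t * inner(t), 2.0, s - 3.0)[0]
    return val

def f0(s):
    """Normalized sf(s)/(2e^gamma) via Lemma 2 (valid here for 2 <= s <= 8)."""
    val = log(s - 1.0)
    if s > 4.0:
        g = lambda t: quad(lambda u: log(u - 1.0) / u, 2.0, t - 1.0)[0]
        val += quad(lambda t: g(t) / t, 3.0, s - 1.0)[0]
    if s > 6.0:
        inner = lambda t: quad(lambda u: log((u - 1.0) / (t + 1.0))
                               * log(s / (u + 2.0)) / u, t + 2.0, s - 2.0)[0]
        val += quad(lambda t: log(t - 1.0) / t * inner(t), 2.0, s - 4.0)[0]
    return val

brace = f0(6.175) * F0(5.0) + F0(6.175) * f0(5.0) - F0(6.175) * F0(5.0)
print(4.0 / (0.475 * 0.025) * brace)   # compare with the constant 818.10189
```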
Similarly, we haveS_12= S(𝒲,{(x/q)^1/8.4, x^0.005})⩾(1+o(1)) π(x)/φ(q) C_3 V((x/q)^1/8.4) V(x^0.005){f(3.99) F(5)+F(3.99) f(5)-F(3.99) F(5)} +O(x(log x)^-5) = (1+o(1)) 4 C_3 π(x)/φ(q)(log x^0.475-ε)(log x^0.025){f_0(3.99) F_0(5)+F_0(3.99) f_0(5)-F_0(3.99) F_0(5)} +O(x(log x)^-5)⩾516.86063 C_3 x loglog x/ (log x)^3,S_21⩾(1+o(1)) 0.475 × 4 C_3 π(x)/φ(q)(log x^0.475-ε)(log x^0.025){f_0(5) G+F_0(5) g-F_0(5) G}+O(x(log x)^-5)⩾73.9301 C_3 x loglog x/ (log x)^3,S_22⩾(1+o(1)) 0.475 × 4 C_3 π(x)/φ(q)(log x^0.475-ε)(log x^0.025){f_0(5) H+F_0(5) h-F_0(5) H}+O(x(log x)^-5)⩾149.13684 C_3 x loglog x/ (log x)^3,S_31⩽(1+o(1)) 0.475 × 4 C_3 π(x)/φ(q)(log x^0.475-ε)(log x^0.025){F_0(5) J}+O(x(log x)^-5)⩽1282.38485 C_3 x loglog x/ (log x)^3,S_32⩽(1+o(1)) 0.475 × 4 C_3 π(x)/φ(q)(log x^0.475-ε)(log x^0.025){F_0(5) K}+O(x(log x)^-5)⩽1048.20211 C_3 x loglog x/ (log x)^3,whereG=∫_1 / 13^1 / 8.4d t_1/t_1∫_t_1^1 / 8.4d t_2/t_2(0.475-t_1-t_2) +∫_1 / 13^1 / 8.4d t_1/t_1∫_t_1^1 / 8.4d t_2/t_2(0.475-t_1-t_2)∫_2^5.175-13(t_1+t_2)log(t_3-1)/t_3 d t_3, g=∫_1 / 13^1 / 8.4d t_1/t_1∫_t_1^1 / 8.4log(5.175-13(t_1+t_2))/t_2(0.475-t_1-t_2) d t_2 +∫_1 / 13^2.175 / 26d t_1/t_1∫_t_1^2.175 / 13-t_1d t_2/t_2(0.475-t_1-t_2)∫_3^5.175-13(t_1+t_2)d t_3/t_3∫_2^t_3-1log(t_4-1)/t_4 d t_4 , H=∫_1 / 13^1 / 8.4d t_1/t_1∫_1 / 8.4^0.475-2 / 13-t_1d t_2/t_2(0.475-t_1-t_2) +∫_1 / 13^1 / 8.4d t_1/t_1∫_1 / 8.4^0.475-2 / 13-t_1d t_2/t_2(0.475-t_1-t_2)∫_2^5.175-13(t_1+t_2)log(t_3-1)/t_3 d t_3, h=∫_1 / 13^1 / 8.4d t_1/t_1∫_1 / 8.4^0.475-2 / 13-t_1log(5.175-13(t_1+t_2))/t_2(0.475-t_1-t_2) d t_2 ,J=∫_1 / 13^1 / 3.145d t/t(0.475-t)+∫_1 / 13^0.475-3 / 13d t_1/t_1(0.475-t_1)∫_2^5.175-13 t_1log(t_2-1)/t_2 d t_2 +∫_1 / 13^0.475-5 / 13d t_1/t_1(0.475-t_1)∫_2^3.175-13 tlog(t_2-1)/t_2 d t_2∫_t_2+2^5.1751/t_3logt_3-1/t_2+1 d t_3 , K=∫_1 / 13^1 / 3.81d t/t(0.475-t)+∫_1 / 13^0.475-3 / 13d t_1/t_1(0.475-t_1)∫_2^5.175-13 t_1log(t_2-1)/t_2 d t_2 +∫_1 / 13^0.475-5 / 13d t_1/t_1(0.475-t_1)∫_2^3.175-13 tlog(t_2-1)/t_2 d t_2∫_t_2+2^5.1751/t_3logt_3-1/t_2+1 d t_3 .§.§ Evaluation of S_4, S_5, S_6By Chen's role-reversal trick we know thatS_51= ∑_p ∈ℬ (p+6, P(x^0.005))=1 1⩽ ∑_m ∈ℬ (m, P(x^0.475-ε/2))=1(m+6, P(x^0.005))=1 1+O(x^1/2) =S(𝒲^(1),{x^0.475-ε/2, x^0.005})+O(x^1/2).we may write|𝒲_𝐝^(1)|=h(𝐝)|ℰ|+R^(1)(𝐝),whereh(𝐝)= 1/φ(d_1 d_2),(d_1, d_2)=(d_1, 2)=(d_2, 6)=1, 0,otherwiseand|R^(1)(𝐝)| ⩽ max _(a, d_1 d_2)=1|∑_n ∈ℰn ≡ a( d_1 d_2) 1-1/φ(d_1 d_2)∑_n ∈ℰ (n, d_1 d_2)=1 1|+1/φ(d_1 d_2)∑_n ∈ℰ (n, d_1 d_2)>1 1 =R_1^(1)(𝐝)+R_2^(1)(𝐝) . To deal with the error term, by the arguments used in <cit.>, we have∑_d_1 d_2⩽ x^1/2-ετ^4(d_1 d_2)R^(1)(𝐝) ≪ x(log x)^-5 . By Lemma <ref> we have|ℰ| =∑_(x/q)^1/13⩽ p_1 < p_2 < p_3< p_4<(x/q)^1/8.4∑_1⩽ m⩽x/qp_1 p_2 p_3 p_4 (m, qp_1^-1P(p_2))=1 1⩽ (1+o(1))0.5617x/qlog x∫_1/13^1/8.4d t_1/t_1∫_t_1^1/8.41/t_2(1/t_1-1/t_2) log1/8.4 t_2 d t_2 .⩽0.00934x/qlog x. Then by Lemma <ref> we haveS_51⩽S(𝒲^(1),{x^0.475-ε/2, x^0.005})⩽(1+o(1))4 C_3 |ℰ|/(log x^0.475-ε)(log x^0.025){F_0(2) F_0(5)}+O(x(log x)^-5)⩽4.41937 C_3 x loglog x/ (log x)^3. 
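The integrals J and K defined above, which drive the S_31 and S_32 bounds, can be checked by the same kind of nested quadrature; in those displays the extra factor 0.475 cancels one denominator of 4/(0.475·0.025), leaving a prefactor 4/0.025 = 160 in front of F_0(5)·J and F_0(5)·K. A sketch under that reading (helper names are ours):

```python
from math import log
from scipy.integrate import quad

F0_5 = 1.0 + quad(lambda t: log(t - 1.0) / t, 2.0, 4.0)[0]   # F_0(5)

def inner_t3(t2):
    return quad(lambda t3: log((t3 - 1.0) / (t2 + 1.0)) / t3, t2 + 2.0, 5.175)[0]

def shared_tail():
    """Second and third terms, identical in the displayed J and K."""
    inner_t2 = lambda y: quad(lambda t2: log(t2 - 1.0) / t2, 2.0, y)[0]
    term2 = quad(lambda t: inner_t2(5.175 - 13.0 * t) / (t * (0.475 - t)),
                 1.0 / 13.0, 0.475 - 3.0 / 13.0)[0]
    term3 = quad(lambda t: quad(lambda t2: log(t2 - 1.0) / t2 * inner_t3(t2),
                                2.0, 3.175 - 13.0 * t)[0] / (t * (0.475 - t)),
                 1.0 / 13.0, 0.475 - 5.0 / 13.0)[0]
    return term2 + term3

tail = shared_tail()
J = quad(lambda t: 1.0 / (t * (0.475 - t)), 1.0 / 13.0, 1.0 / 3.145)[0] + tail
K = quad(lambda t: 1.0 / (t * (0.475 - t)), 1.0 / 13.0, 1.0 / 3.81)[0] + tail
print(160.0 * F0_5 * J)   # compare with 1282.38485
print(160.0 * F0_5 * K)   # compare with 1048.20211
```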
Similarly, we haveS_52⩽(1+o(1))4 C_3 x L/q log x (log x^0.475-ε)(log x^0.025){F_0(2) F_0(5)}+O(x(log x)^-5)⩽22.91504 C_3 x loglog x/ (log x)^3,S_41⩽(1+o(1))4 C_3 x( ∫_2.145^12log(2.145-3.145/t+1)/t d t )/q log x (log x^0.475-ε)(log x^0.025){F_0(2) F_0(5)}+O(x(log x)^-5)⩽371.11243 C_3 x loglog x/ (log x)^3,S_42⩽(1+o(1))4 C_3 x ( ∫_2.81^7.4log(2.81-3.81/t+1)/t d t )/q log x (log x^0.475-ε)(log x^0.025){F_0(2) F_0(5)}+O(x(log x)^-5)⩽341.31874 C_3 x loglog x/ (log x)^3,S_61⩽(1+o(1))4 C_3 x ( ∫_2^2.145log(t-1)/t d t )/q log x (log x^0.475-ε)(log x^0.025){F_0(2) F_0(5)}+O(x(log x)^-5)⩽2.27032 C_3 x loglog x/ (log x)^3,S_62⩽(1+o(1))4 C_3 x ( ∫_2^2.81log(t-1)/t d t )/q log x (log x^0.475-ε)(log x^0.025){F_0(2) F_0(5)}+O(x(log x)^-5)⩽49.78864 C_3 x loglog x/ (log x)^3,whereL=∫_1 / 13^1 / 8.4d t_1/t_1∫_t_1^1 / 8.4d t_2/t_2^2∫_t_2^1 / 8.4d t_3/t_3∫_1 / 8.4^0.475-2 / 13-t_3w(1-t_1-t_2-t_3-t_4/t_2)/t_4 d t_4⩽ 0.5644 ∫_1 / 13^1 / 8.4d t_1/t_1∫_t_1^1 / 8.41/t_2(1/t_1-1/t_2) log 8.4(0.475-2/13-t_3) d t_2⩽ 0.04839. §.§ Evaluation of S_7By Chen's role-reversal trick we know thatS_71= ∑_p+6 ∈ℳ_k(p+2/q, P((x/q)^1/13))=1 1⩽ ∑_m ∈ℳ_k(m-6, P(x^0.025/2))=1(m-4/q, P((x/q)^1/13))=1 1+O(x^1/2) =S(𝒲^(2),{x^0.025/2, (x/q)^1/13})+O(x^1/2).we may write|𝒲_𝐝^(2)|=h(𝐝)|ℳ_k|+R^(2)(𝐝),whereh(𝐝)= 1/φ(d_1 d_2),(d_1, d_2)=(d_1, 2)=(d_2, 6)=1, 0,otherwiseand|R^(2)(𝐝)| ⩽ max _(a, d_1 d_2)=1|∑_n ∈ℳ_k n ≡ a( d_1 d_2) 1-1/φ(d_1 d_2)∑_n ∈ℳ_k(n, d_1 d_2)=1 1|+1/φ(d_1 d_2)∑_n ∈ℳ_k(n, d_1 d_2)>1 1 =R_1^(2)(𝐝)+R_2^(2)(𝐝) . To deal with the error term, by the arguments similar to those for S_51, we have∑_d_1 d_2⩽ x^1/2-ετ^4(d_1 d_2)R^(2)(𝐝) ≪ x(log x)^-5 . By the prime number theorem and summation by parts we have|ℳ_k|=1/φ(q)∑_z ⩽ p_1⩽⋯⩽ p_k-1⩽(x+6/p_1⋯ p_k-2)^1 / 2x/p_1⋯ p_k-1logx/p_1⋯ p_k-1 =(1+O(1/log x)) c_kπ(x)/φ(q),wherec_k=∫_k-1^199d t_1/t_1∫_k-2^t_1-1d t_2/t_2⋯∫_3^t_k-4-1d t_k-3/t_k-3∫_2^t_k-3-1log(t_k-2-1) d t_k-2/t_k-2 .By similar numerical integration used in <cit.>, we haveC_0=∑_k=15^199 c_k<0.00408. Then from (29)–(31) we haveS_71⩽(1+o(1))4 C_0 C_3 π(x)/φ(q) (log x^0.475-ε)(log x^0.025){F_0(2) F_0(6.175)}+O(x(log x)^-5)⩽2.38485 C_3 x loglog x/ (log x)^3. Similarly, we haveS_72⩽(1+o(1))4 C_0 C_3 π(x)/φ(q) (log x^0.475-ε)(log x^0.025){F_0(2) F_0(3.99)}+O(x(log x)^-5)⩽1.57643 C_3 x loglog x/ (log x)^3,S_73⩽(1+o(1))4 C_0 C_3 π(x)/φ(q) (log x^0.475-ε)(log x^0.025){F_0(2) F_0(2)}+O(x(log x)^-5)⩽1.37432 C_3 x loglog x/ (log x)^3,S_74⩽(1+o(1))4 C_0 C_3 π(x)/φ(q) (log x^0.475-ε)(log x^0.025){F_0(2) F_0(2)}+O(x(log x)^-5)⩽1.37432 C_3 x loglog x/ (log x)^3.§.§ Proof of theorem 1.1By (14)–(19), (23)–(28) and (32)–(35) we getS_1+S_2⩾ 3194.23324 C_3 x loglog x/(log x)^3, S_3+S_4+S_5+2S_6+S_7⩽ 3181.18071 C_3 x loglog x/(log x)^3, 4π_1,3,14(x) ⩾ (S_1+S_2)-(S_3+S_4+S_5+2S_6+S_7) ⩾ 13.05253 C_3 x loglog x/(log x)^3, π_1,3,14(x) ⩾ 3.26313 C_3 x loglog x/(log x)^3.Now the proof of π_1,3,14(x) ≫C_3 x loglog x/(log x)^3 is completed. Then we can prove Theorem <ref> by replacing q by products of small primes q_1 q_2 ⋯ q_a-1 where q_i denote a prime number satisfiesa ⩾ 2, q_i < x^ε for every 1 ⩽ i ⩽ a-1. § AN UPPER BOUND RESULTNow we finish this paper with a look at the upper bound estimate. Let𝒲^'={{p+2, p+6}: 7<p ⩽ x},then we haveπ_1,1,1(x) ⩽ S(𝒲^',{x^1/10, x^1/10})+O(x^1/10). By Lemma <ref> and some routine arguments we haveS(𝒲^',{x^1/10, x^1/10})⩽(1+o(1))C_3 π(x) V(x^1/10)V(x^1/10){F(2)F(2)} +O(x(log x)^-5)⩽100 C_3 x/ (log x)^3. Finally by (36)–(37) we get the following theorem of the upper bound orders of Hardy-Littlewood prime triples. 
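The bound C_0 < 0.00408 can be reproduced by unrolling the nested integrals for c_k into a one-dimensional recursion: with h_2(y) = ∫_2^y log(t-1)/t dt and h_j(y) = ∫_j^y h_{j-1}(t-1)/t dt for j ⩾ 3, one has c_k = h_{k-1}(199). A minimal sketch tabulating this recursion on a grid (the grid resolution is an illustration choice):

```python
import numpy as np

m = 50                                  # grid points per unit (illustration)
h = 1.0 / m
t = np.arange(0.0, 199.0 + h / 2, h)
n = t.size

def cumint(f, lower):
    """Cumulative trapezoid int_lower^y f(t) dt on the grid (0 below lower)."""
    f = np.where(t >= lower, f, 0.0)
    out = np.concatenate([[0.0], np.cumsum(0.5 * h * (f[1:] + f[:-1]))])
    out -= out[int(round(lower * m))]
    out[t < lower] = 0.0
    return out

safe_t = np.maximum(t, h)
base = np.where(t > 2.0, np.log(np.maximum(t - 1.0, 1.0)) / safe_t, 0.0)
H = cumint(base, 2.0)                   # H = h_2 tabulated on the grid

C0 = 0.0
for j in range(3, 199):                 # build h_j; c_{j+1} = h_j(199)
    shifted = np.concatenate([np.zeros(m), H[:-m]])   # h_{j-1}(t - 1)
    H = cumint(shifted / safe_t, float(j))
    if j + 1 >= 15:
        C0 += H[-1]                     # accumulate c_k for k = 15..199
print(C0)                               # compare with the bound C_0 < 0.00408
```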
π_1,1,1(x) ≪C_3 x/ (log x)^3 and D_1,1,1(N) ≪N/(log N)^3. plain | http://arxiv.org/abs/2401.01348v1 | {
"authors": [
"Runbo Li"
],
"categories": [
"math.NT"
],
"primary_category": "math.NT",
"published": "20231127111212",
"title": "On the upper and lower bound orders of almost prime triples"
} |
Benjamin Vigneron [email protected]]Benjamin Vigneron Département de Physique, Université de Montréal, Succ. Centre-Ville, Montréal, Québec, H3C 3J7, Canada0000-0001-7271-7340]Julie Hlavacek-Larrondo Département de Physique, Université de Montréal, Succ. Centre-Ville, Montréal, Québec, H3C 3J7, Canada Centre de recherche en astrophysique du Québec (CRAQ)0000-0003-2001-1076]Carter Lee Rhea Département de Physique, Université de Montréal, Succ. Centre-Ville, Montréal, Québec, H3C 3J7, Canada Centre de recherche en astrophysique du Québec (CRAQ)0000-0002-2478-5119]Marie-Lou Gendron-Marsolais Instituto de Astrofísica de Andalucía, IAA-CSIC, Apartado 3004, 18080 Granada, España0000-0003-4220-2404]Jeremy Lim Department of Physics, The University of Hong Kong, Pokfulam Road, Hong Kong Department of Physics, University of North Texas, Denton, TX 76203, USA0000-0001-5262-6150]Yuan Li Department of Physics, University of North Texas, Denton, TX 76203, USA0000-0003-1278-2591]Laurent Drissen Département de physique, de génie physique et d'optique, Université Laval, Québec (QC), G1V 0A6, Canada Centre de recherche en astrophysique du Québec (CRAQ) Department of Physics and Astronomy, University of Hawai'i at Hilo, Hilo, HI 96720, USA Canada-France-Hawaii Telescope, 65-1238 Mamalahoa Hwy, Kamuela, Hawaii 96743, USA 0000-0003-2630-9228]Greg L. Bryan Department of Astronomy, Columbia University, 550 West 120th Street, New York, NY 10027, USA Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY 10010, USA0000-0002-2808-0853]Megan Donahue Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA0000-0002-3398-6916]Alastair Edge Department of Physics, Durham University, South Road, Durham DH1 3LE, UK0000-0002-9378-4072]Andrew Fabian Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK0000-0003-1932-0162]Stephen Hamer Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK0000-0002-3074-9608]Thomas Martin Département de physique, de génie physique et d'optique, Université Laval, Québec (QC), G1V 0A6, Canada Centre de recherche en astrophysique du Québec (CRAQ)0000-0001-5226-8349]Michael McDonald Kavli Institute for Astrophysics and Space Research, MIT, Cambridge, MA 02139, USA Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada0000-0001-7597-270X]Annabelle Richard-Lafferrière Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB1 0HA, UK0000-0002-5136-6673]Laurie Rousseau-Nepton Canada-France-Hawaii Telescope, 65-1238 Mamalahoa Hwy, Kamuela, Hawaii 96743, USA0000-0002-3514-0383]G. Mark Voit Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA0000-0002-0104-9653]Tracy Webb Department of Physics, McGill Space Institute, McGill University, 3600 rue University, Montreal H3A 2T8, Canada0000-0003-0392-0120]Norbert Werner Department of Theoretical Physics and Astrophysics, Faculty of Science, Masaryk University, Brno, Czech RepublicWe present new high-spectral resolution observations (R = λ/Δλ = 7000) of the filamentary nebula surrounding NGC 1275, the central galaxy of the Perseus cluster. 
These observations have been obtained with SITELLE, an imaging Fourier transform spectrometer installed on the Canada-France-Hawai Telescope (CFHT) with a field of view of 11 arcmin × 11arcmin encapsulating the entire filamentary structure of ionised gas despite its large size of 80kpc×50kpc. Here, we present renewed flux, velocity and velocity dispersion maps that show in great detail the kinematics of the optical nebula at λ6716, λ6731, λ6584, Hα(6563Å), and λ6548. These maps reveal the existence of a bright flattened disk-shaped structure in the core extending to r ∼ 10 kpc and dominated by a chaotic velocity field. This structure is located in the wake of X-ray cavities and characterised by a high mean velocity dispersion of 134 km/s. The disk-shaped structure is surrounded by an extended array of filaments spread out to r∼ 50 kpc that are 10 times fainter in flux, remarkably quiescent and has a uniform mean velocity dispersion of 44 km/s. This stability is puzzling given that the cluster core exhibits several energetic phenomena. Based on these results, we argue that there are two mechanisms to form multiphase gas in clusters of galaxies: a first triggered in the wake of X-ray cavities leading to more turbulent multiphase gas and a second, distinct mechanism, that is gentle and leads to large-scale multiphase gas spread throughout the core. § INTRODUCTIONClusters of galaxies are extended structures hosting several hundred to thousands of gravitationally bound galaxies (e.g. , ). They are mostly composed of dark matter while galaxies only represent a very small fraction of the cluster's mass (e.g. , ). There is also a third component made of hot X-ray emitting gas at temperatures of ∼ 10^7-10^8 K that fills the space between the galaxies and is known as the intra-cluster medium or ICM (e.g. , , ).This ICM accounts for a substantial fraction of the mass of a cluster (∼13%) and often leads clusters to being classified into two distinct categories depending on their X-ray emission profiles: cool-core clusters with a strongly peaked X-ray emission profile and non-cool core clusters with a more diffuse and uniform X-ray emission profile (, ). The central dominant galaxy located in the core, known as the brightest cluster galaxy (BCG), often exhibits extended filamentary nebulae of optical ionised gas in the case of cool-core clusters (, ). These filaments are composed of multiphase gas that have high Hα luminosities (e.g. ) and are often co-spatial with cold gas (e.g. ), as well as soft X-ray gas. The most extended filaments can be mostly devoid of star formation, which excludes photoionization as a primary ionization mechanism (e.g. ).A promising explanation for the formation of the filamentary nebulae surrounding BCGs resides in the precipitation limit hypothesis (e.g. , , , ). Here, the constant cooling by emission of the ICM in cool-core clusters should imply high star formation rates in the cores of clusters. However, since there is limited evidence of correspondingly extremely high star formation inside these galaxies, a mechanism is necessary to stop the gas from cooling and falling in the potential well (e.g. ). In clusters of galaxies, the heating mechanism is thought to be orchestrated by the supermassive black hole residing in the BCG. This mechanism can be seen in action in radio bubbles generated by the central black hole which carve out large X-ray cavities in the ICM, entailing shocks, turbulence, and mixing (e.g. , , ). 
This phenomenon will in turn bring enough energy to limit the cooling of the ambient gas and therefore star formation in the galaxy (, , ). However, the mechanisms explaining how the AGN energy reheats the ICM remains a matter of debate.The precipitation limit hypothesis proposes that the heating process associated with the activity of the AGN sets in motion flows, which in turn encourage the hot surrounding medium to condense at higher altitudes. Hence, the adiabatic uplift of material through radio bubbles promotes condensation by reducing the ratio of cooling time to free-fall time in some locations. This cycle is analogous to the precipitations occurring on Earth, where raindrops are formed by the uplifting of gas higher in the atmosphere. Moreover, simulations show that the rain of cold gas towards the center of the galaxy will initially feed the central black hole and therefore the power of the flows that it produces, enabling a self-regulated feedback loop (e.g. ). This energy from the black hole will, in turn, heat the ambient medium and increase the ratio of the cooling time to the free fall time so that the precipitation will cease. The filamentary nebulae surrounding BCGs have been interpreted as a tell-tale sign of this model and could therefore offer the possibility to study this hypothesis.Here, we target the filamentary nebula that surrounds NGC 1275, the BCG located at the center of the Perseus cluster of galaxies. NGC 1275 has been studied extensively at all wavelengths (see , , , , ). The cluster hosts several X-ray cavities that originate from multiple generations of radio jets emitted from the active galactic nuclei (AGN) of NGC 1275 (, , , ). These jets carve the neighbouring ICM, creating buoyantly rising radio-emitting bubbles.NGC 1275 also displays one of the largest filamentary nebulae known with a size of about 80 kpc × 50 kpc (e.g. , , ). Combined with the proximity of the Perseus cluster, this makes NGC 1275 a target of choice for our understanding of the formation and ionization mechanisms of filamentary nebulae in clusters.The first observations of the filamentary nebula surrounding NGC 1275 by <cit.>, <cit.>, and <cit.> unveiled a high-velocity (HV) feature (∼ 8200 km/s) associated with a forefront spiral galaxy falling onto NGC 1275 (), and a low-velocity (LV) structure (∼ 5200 km/s) linked to NGC 1275. Hubble Space Telescope observations of the low-velocity structure then revealed the filamentary appearance of the ionised gas (), whereas soft X-ray counterparts were discovered for certain bright filaments with theChandra X-Ray Observatory (). Moreover, cold molecular gas has been associated with the emission of the filamentary nebula of NGC 1275 (, , , ), and also linked to a disk of emission near the galaxy (). Nevertheless, the first detailed observations of the nebula was performed by <cit.> with high-resolution imaging, integral field, and long-slit spectroscopy (WIYN & KPNO). These observations brought forth the first velocity map of the central ∼ 45arcsec (∼ 16.5 kpc) of the nebula. Observations from the Gemini Multi-Object Spectrograph along six slits carefully positioned along certain filaments showed evidence of outflowing gas and flow patterns (). 
In 2018, the filamentary nebula was then imaged for the first time with SITELLE (Spectromètre Imageur à Transformée de Fourier pour l’Etude en Long et en Large de raies d’Emission), an imaging Fourier transform spectrometer () installed at the Canada-France-Hawaï Telescope (CFHT) that has an extremely large field of view (11 arcmin × 11 arcmin) capable of imaging the nebula in its entirety (see Figure <ref>). <cit.> showed that the velocity structure of the filaments appears to be generally devoid of specific trends or rotation.In this paper, we present new high-spectral resolution observations of the filamentary nebula surrounding NGC 1275 obtained with SITELLE. The first SITELLE observations presented in <cit.> revealed the kinematics of the ionized nebula, but suffered from poor spectral resolution (R = 1800) that could not spectroscopically resolve the filaments beyond a projected radii of r≳ 10 kpc centered on the AGN.Here, the high-spectral resolution nature of the data at R = 7000 allows us to deepen the scope of the study by performing a detailed analysis of the kinematics of the nebula, in particular the velocity dispersion of the gas. The analysis of the data revealed a central structure displaying higher mean flux and velocity dispersion than the rest of the outer filaments. This disk-shaped structure seems spatially correlated with a similar structure of cold molecular gas, which will be detailed in later sections. This result appears relevant when considering the clear kinematic correlations between the ionized and molecular gas in the central region,as well as possibly far infrared (IR) [CII] emission lines of the gas surrounding NGC 1275 as observed with Herschel by <cit.>.In Section 2, we present the new SITELLE observations and the various procedures used during the data analysis. In Section 3 we present and discuss our results. Finally, a summary of our conclusions will be presented in Section 4.To directly compare our results to those of <cit.> and the Hitomi <cit.>, we also adopt for NGC 1275 a redshift of z=0.017284, which implies an angular scale of 21.2kpc arcmin^-1. This z also corresponds to a luminosity distance of 75.5 Mpc, assuming H_0 = 69.6km s^-1Mpc^-1, Ω_M = 0.286 and Ω_vac = 0.714 § DATA ANALYSIS §.§ Observations with SITELLE The filamentary nebula surrounding NGC 1275 was observed in February 2020 with SITELLE during Queued Service Observations 20AD99 by PIs Hlavacek-Larrondo and Rhea. SITELLE is a Fourier transform imaging spectrometer with an incredibly large field of view of 11 arcmins by 11 arcmins and equipped with two E2V detectors of 2048 × 2064 pixels, resulting in a spatial resolution of 0.321 × 0.321 arcsecs. SITELLE was used with the SN3 filter for 4 hours (1710 exposures of 8.42s) including overheads, necessary to obtain a high-spectral resolution of R=7000. This filter covers wavelengths from 648 nm to 685 nm. These observations were centered on NGC 1275 with RA : 03:19:48.16 and DEC : +41:30:42.1. Five emission lines are covered with the SN3 filter, namely : λ6716, λ6731, λ6584, Hα(6563Å), and λ6548. The oxygen emission lines [O I]λ6300 and [O I]λ6363 cannot be observed in such a configuration since they fall just outside of the spectral range of the SN3 filter (648-685 nm). The data reduction of SITELLE is performed at CFHT through a dedicated pipeline: ORBS. We summarize the reduction process here. The electronic bias and the flat-field curvature of the interferometric images are first corrected. 
Images are then aligned to solve potential guiding errors. Cosmic rays are detected through an algorithm comparing successive images to determine if any abnormal flux increase can be observed for a given pixel. These detections are correctedby an estimation of the given gaussian flux of nearby pixels. Atmospheric variations are alsotaken into account and corrected through a transmission function taken during the data acquisition. The reduction pipeline then produces the Fourier transform of all the interferograms contained within the data cube, which are then phase corrected. Finally, wavelength and flux calibrations are performed to adequately record the high-spectral resolution observations (). §.§ Background Subtraction After a careful evaluation of the high-spectral resolution data obtained from SITELLE, it became clear that an important variability of the background emission across the field-of-view is present in the spectra (see Appendix A). This background emission is mainly produced by the galaxies present near the filamentary nebula as well as the diffuse emission of other structures in the field-of-view. Background subtraction is therefore needed to properly disentangle this unwanted emission from the filament emission lines, however, this procedure is not taken into account in the reduction pipeline.This step must therefore be performed during the fitting procedure. The background variability affects mainly the analysis of the λ6717 and λ6731 emission lines, which are much fainter than the other optical lines. To tackle this issue, we decided to divide the entire filamentary nebula into nine regions as illustrated in Figure <ref>. For each of these nine regions, we attributed a specific sky zone devoid of targeted emission lines and located within the boundaries of that region. The red circles shown in Figure <ref> illustrate these 9 sky zones. The sky zone is then used during the fitting procedure to perform the background subtraction for each dedicated region. Once all the regions have been fitted properly (see following sections), we can then create a mosaic of the flux, velocity, and dispersion maps for the complete filamentary nebula. This procedure allows us to counter the background variability by considering several background regions instead of one to increase the reliability of our results. This approach allowed us to validate results obtained while considering this background variability. The development of a dedicated methodology to tackle background variability in detail for SITELLE data will be part of a future paper. §.§ Weighted Voronoi Tessellation - WVT From a preliminary analysis of the high-spectral resolution data of NGC 1275, we noticed that the signal-to-noise ratio (SNR) of the filamentary nebula was relatively low compared to what was expected from the SITELLE Exposure Time Calculator. Poor weather conditions during the data acquisition are considered to be the main cause of a lower SNR (see Appendix B for an example of the SNR map).Thus, to tackle this issue, we decided to implement a weighted Voronoi tessellation algorithm as a feature of thesoftware using the following tool https://github.com/XtraAstronomy/AstronomyToolshttps://github.com/XtraAstronomy/AstronomyTools (). This procedure creates bins of pixels whose SNR is defined by a threshold chosen by the user. Therefore, regions with strong emission will be contained within smaller bins, while noisier regions will be grouped into larger bins compared to regions of interest such as filaments. 
§.§ Weighted Voronoi Tessellation - WVT From a preliminary analysis of the high-spectral resolution data of NGC 1275, we noticed that the signal-to-noise ratio (SNR) of the filamentary nebula was relatively low compared to what was expected from the SITELLE Exposure Time Calculator. Poor weather conditions during the data acquisition are considered to be the main cause of the lower SNR (see Appendix B for an example of the SNR map). Thus, to tackle this issue, we decided to implement a weighted Voronoi tessellation algorithm as a feature of the LUCI software, using the following tool: https://github.com/XtraAstronomy/AstronomyToolshttps://github.com/XtraAstronomy/AstronomyTools (). This procedure creates bins of pixels whose SNR is defined by a threshold chosen by the user. Therefore, regions with strong emission will be contained within smaller bins, while noisier regions will be grouped into larger bins compared to regions of interest such as filaments. This also leads to the presence of bins containing a single pixel where the signal-to-noise ratio is already high. More specifically, we started by making a SNR map of the filamentary nebula by only considering the Hα emission line and the [N II] emission line doublet. The SNR map is produced by taking the ratio of the maximum value of the emission lines to the standard deviation of the spectra in a waveband devoid of emission lines. The pixels are then aggregated to create the bins and reach the established SNR threshold. In our case, we decided to fix the SNR level to 30, which is close to the estimated value of SNR = 35 that would have been obtained for most of the filaments in ideal observing conditions with SITELLE. From there, we established the number of bins defined by the algorithm and created numpy files of pixel coordinates for each bin, to be used later in the spectral fitting procedure. A dedicated function of LUCI can then be called to fit a specific region of the image, which is first run through the weighted Voronoi algorithm. One example of the WVT procedure can be seen in Figure <ref>, where the algorithm has been applied to the horseshoe filament shown in Figure <ref>. It illustrates that the WVT code manages to improve the flux detection of the base of the horseshoe compared to the background emission. However, the flux of the filament's upper region remains too low to be properly detected and accounted for in the resulting flux map. We therefore applied a flux threshold of 2×10^-17 erg s^-1 cm^-2 after the fitting to our final maps, to eliminate unwanted bins of noisy data displaying spuriously high flux. A similar flux threshold was also applied in the analysis of the low-spectral resolution observations with SITELLE by <cit.>.
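A minimal sketch of this SNR-map definition is given below, assuming a cube of shape (ny, nx, nchan) and hypothetical channel selections line_band and continuum_band; the actual implementation in the binning tool may differ:

import numpy as np

def snr_map(cube, line_band, continuum_band):
    # Maximum value of the emission lines over the channels covering
    # H-alpha and the [N II] doublet.
    peak = cube[:, :, line_band].max(axis=2)
    # Standard deviation of the spectra in a waveband devoid of lines.
    noise = cube[:, :, continuum_band].std(axis=2)
    return peak / noise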
§.§ Masking Procedure To help the fitting process, we decided to implement a masking procedure designed to keep as much information about the filamentary nebula as necessary. To implement this mask, we used the observational data presented by <cit.> and produced by the WIYN 3.5 m telescope. These deep observations showed the filamentary nebula surrounding NGC 1275 with a higher signal-to-noise ratio; therefore, by using these observations as a basis for our masking procedure, we can retain the necessary information for our fitting results. The procedure used to develop this mask was implemented as follows: first, a threshold signal-to-noise value of 10 in the <cit.> WIYN data was chosen, in order to retain as much information regarding the filaments as possible without keeping too many background pixels in the resulting image. Secondly, for the mask to be used in conjunction with the LUCI software, some groundwork was needed to perfectly match the size and position of the mask with our SITELLE data according to the WCS coordinates of the latter. Indeed, the WIYN observations did not have the same astrometry as our data, meaning that they were not properly aligned. Moreover, the number of pixels in the two sets of observations is not the same, which required specific care to handle their interpolation. To solve this issue, we determined the number of pixels needed to cover the entirety of the SITELLE field of view by considering the pixel dimensions of both the SITELLE and WIYN instruments. Then, by knowing the position of the central pixel of the SITELLE data cube through its declination and right ascension, we were able to determine the corresponding pixel on the mask. Thus, knowing the number of pixels needed as well as the central pixel, we used a dedicated padding function to extend our mask to fit the SITELLE field of view. We then interpolated the mask onto the specific dimensions of the SITELLE observations using a dedicated interpolation function, therefore creating a fitted mask that keeps its original nature despite initially having different dimensions. Finally, to properly align the mask with the SITELLE data, we used the WCS coordinates given within the SITELLE data cube to determine the position of 11 stars, which would then act as anchor points for the image-alignment functions used. By determining the position of these stars within the newly produced mask, we matched their positions to those of the SITELLE data, allowing the overlapping of both images.
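A possible sketch of the padding and interpolation steps, using numpy and scipy rather than the unnamed functions actually employed, is the following; the star-based alignment step is omitted:

import numpy as np
from scipy.ndimage import zoom

def regrid_mask(mask, pad_y, pad_x, target_shape, threshold=0.5):
    # Pad the WIYN mask so that it covers the full SITELLE field of view.
    padded = np.pad(mask.astype(float), ((pad_y, pad_y), (pad_x, pad_x)))
    # Interpolate onto the SITELLE pixel grid.
    zy = target_shape[0] / padded.shape[0]
    zx = target_shape[1] / padded.shape[1]
    resampled = zoom(padded, (zy, zx), order=1)
    # Re-binarize after interpolation so the mask keeps its original nature.
    return resampled > threshold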
§.§ Emission Line Fitting Procedure After applying our mask to the SITELLE data cube, we were able to proceed to the emission line fitting part of the analysis. The recent LUCI software was used to perform the fit () according to our mosaic structure detailed previously. Since LUCI is a novel analysis software, a summary of its capabilities is described here, but we invite the reader to see https://crhea93.github.io/LUCI/index.htmlhttps://crhea93.github.io/LUCI/index.html for more details (). First, the spectra found in the data cube are normalized according to the highest amplitude, and a shift in wavelength is applied to properly center the velocities between -500 and 500 km/s. This constraint is necessary for the following procedure, which uses machine learning, since it allows better prior estimates for the fitting algorithm. This velocity range was chosen after a visual inspection of the data and considering the previous work by <cit.>. These boundaries were given as training parameters for the neural network to predict the velocity shift of the emission lines. To efficiently fit a spectrum, the velocity and broadening prior information need to be precise to facilitate the minimization algorithm at the heart of the emission line fitting, and thus to accelerate the whole procedure (, ). Therefore, a machine learning technique based on a convolutional neural network (CNN) is used to properly determine these priors. From there, the values obtained through the CNN are passed as the line position and broadening. The amplitude, however, is obtained from the height of the shifted emission line. With these three values, the line can be effectively fitted by LUCI with the minimize function from the scipy package. This function uses the Sequential Least Squares Programming (SLSQP) optimization algorithm. Since SITELLE's Instrumental Spectral Function (ISF; see ) is a sinc convolved with a Gaussian resulting from the velocity dispersion along the line of sight, we used the sincgauss fitting model presented by . The results from this procedure are the amplitude, velocity, and broadening of the five emission lines present in the spectrum: [S II]λ6716, [S II]λ6731, [N II]λ6584, Hα(6563Å), and [N II]λ6548, where we separated the fitting procedure by first considering the SNR of the Hα and [N II] emission lines to obtain their respective parameters. Then, we specifically considered the [S II] emission lines to derive their own parameters, as will be detailed in Section 2.7. Finally, to account for the location of Earth on its orbit at the time of observations, a correction is applied to the velocity map. This correction is determined with a dedicated barycentric-correction function, and gives a velocity correction of -27.5 km/s. Throughout this analysis, we adopted a redshift of z = 0.017284 for NGC 1275, which LUCI uses during the fitting procedure so that all calculations are done in the rest frame of the object, to properly determine the velocity, velocity dispersion, and flux parameters of the emission lines. An example of a fitted spectrum using LUCI is displayed in Fig. <ref>. The resulting Hα flux, velocity, and velocity dispersion maps are shown in Fig. <ref>. Finally, through these steps, we explored the fitting of averaged spectra over the bins produced by the WVT procedure, to better understand the complex nature of the ionised gas in these filaments and determine their most prominent features as if no binning had been applied.
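The sketch below illustrates the idea of a sincgauss fit with SLSQP; for simplicity it builds the profile by numerical convolution of a sinc with a Gaussian, whereas LUCI relies on an analytic form of the sincgauss model, and the parameter names here are hypothetical:

import numpy as np
from scipy.optimize import minimize

def sincgauss(x, amp, center, sigma, width):
    # Numerical convolution of a sinc (FTS instrumental line shape, of
    # characteristic width `width`) with a Gaussian broadening of dispersion
    # `sigma`; np.sinc(t) = sin(pi t)/(pi t).
    dx = x[1] - x[0]
    u = np.arange(-50, 51) * dx
    sinc = np.sinc((x[:, None] - center - u[None, :]) / width)
    gauss = np.exp(-u**2 / (2 * sigma**2))
    gauss /= gauss.sum()
    return amp * sinc @ gauss

def fit_line(x, spec, p0):
    # Chi-square minimization with the SLSQP algorithm, as in the text;
    # p0 = (amplitude, position, broadening, sinc width) priors.
    chi2 = lambda p: np.sum((spec - sincgauss(x, *p))**2)
    return minimize(chi2, p0, method="SLSQP")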
§.§ Multiple Emission Components Due to the high spectral resolution of our observations, we have to take into account the possibility of resolving multiple velocity components within a single bin, due to the potential overlapping of filaments (see Appendix C). Indeed, since the three-dimensional structure of the filaments is currently unknown, it is impossible to decipher the possible overlap of several filaments. For the sake of clarity in our resulting maps, we decided to mask the main sources of multiple emission components. Firstly, the high-velocity system, associated with a foreground galaxy, has a velocity of ∼ 8200 km/s. It can thus be identified through its systemic velocity and ignored during the fitting procedure, by only considering the emission lines associated with the filamentary nebula. Moreover, broad, multiple-component emission lines from the AGN are coupled with the filament emission in the central ∼ 2.6 kpc; thus, the resulting fits produced by LUCI are poor in these specific regions. An example of a spectrum extracted from the central region close to the AGN is displayed in Fig. <ref>. The large broadening of the AGN's emission overlapping with the filaments' narrow emission impedes LUCI from properly fitting the central regions of the filamentary nebula. Indeed, the fitting algorithm proves to be limited by the presence of largely broadened and blended emission lines, on which the machine learning algorithm has not been trained, therefore preventing it from performing an accurate fit. Therefore, to facilitate the analysis, we decided to mask the central region of emission coming from the AGN displaying broad emission lines. To do so, we studied the emission line profiles with LUCI and determined that the AGN spectral presence is confined within a radius of ∼ 2.6 kpc from the center of the galaxy. There was also evidence of a small number (∼ 10) of localized clumps, detected over 1 to 5 pixels within the central regions (r < 10 kpc), that showed evidence of multiple components in the emission lines. Similar clumps had already been detected in a previous study by <cit.>, where the authors mentioned the presence of several regions displaying double-peaked emission lines. Some of these regions appear as small knots or plumes displaying an extremely low separation in velocity components, leading to slightly double-peaked emission lines. However, we can also detect slightly larger regions displaying several velocity components, located near the eastern part of the filamentary nebula, below the high-velocity system. It is not yet clear if these regions with multiple velocity components are due to the superposition of filaments with Doppler-shifted velocities along the line of sight. Spectra of specific regions displaying several emission line components are presented in Appendix C. Here we focus only on the results from the fitting of a single component to all bins. A dedicated multiple-component analysis of the few localized regions with overlapping filaments, as well as of the core, will be performed in the future. §.§ [SII] Emission Lines Fitting Regarding the detection and fitting of the [S II]λ6716 and [S II]λ6731 emission lines, one of the main difficulties in their analysis resides in the presence of strong sky lines at the same wavelengths (see Appendix A). However, after applying our methodology to tackle the background variability, we were able to detect these emission lines. To help the fitting procedure, we made new WVT bins based on the SNR of the [S II]λ6716 and λ6731 doublet, which is fainter than the [N II]λ6583, Hα(6563Å), and [N II]λ6548 lines. The new, larger bins thus created improved the detection of these fainter emission lines. The resulting [S II]λ6716 and λ6731 flux, velocity, and velocity dispersion maps are shown in Fig. <ref>. The bottom left and right plots clearly show velocity and velocity dispersion maps similar to those of Hα seen in Fig. <ref>. A deeper analysis of the [S II] emission line ratio, which is often used as an estimate of the gas density compared to the density of the hot X-ray gas, will be presented in a future article (Vigneron et al., in preparation). § RESULTS AND DISCUSSION §.§ Comparison with Previous SITELLE Observations Previous SITELLE R = 1800 observations of the filamentary nebula surrounding NGC 1275 were already analyzed by <cit.> and revealed several key features of the velocity structure. Our new high-spectral resolution observations, however, presented new challenges during the analysis step. Indeed, the spectra produced by SITELLE result from a Fourier transformation and have a sinc line profile, which shows characteristic lobes around the central peak. Since SITELLE's instrumental function is a sinc convolved with a Gaussian resulting from the velocity dispersion along the line of sight, a sincgauss model was specifically chosen during the spectral fitting procedure with LUCI, to properly take into account the spectral information contained within the lobes. Moreover, the spectral resolution associated with the SITELLE observations sets a minimum value of velocity dispersion that can be clearly resolved (see Figure 3 in ). Thus, when considering a small spectral resolution of R = 1800 with the SITELLE instrument, as was the case for the observations studied in <cit.>, the lower velocity dispersion bound that could be obtained was ∼ 80 km/s. However, with a high spectral resolution of R = 7000, the lower velocity dispersion value that can be determined is ∼ 15 km/s, allowing us to properly resolve the emission lines (see Figure 3 of ). This result is visible in Figure <ref> when comparing the spectra of the same region at both low and high spectral resolution. We can indeed observe that the width of the emission lines is smaller than what was previously found by <cit.> in regions A, B, and D. Region C is contained within the central region and, given its intrinsically large velocity dispersion, the low-spectral resolution observations of SITELLE were capable of resolving this structure.
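As a rough order-of-magnitude check, if the minimum resolvable dispersion is taken to be of the order of the Gaussian-equivalent instrumental line width, σ_min ≈ c/(2.355 R), one recovers values close to the limits quoted above (the exact limits come from the cited Figure 3, not from this estimate):

c = 299792.458  # speed of light in km/s
for R in (1800, 7000):
    fwhm = c / R              # instrumental line width in velocity units
    sigma_min = fwhm / 2.355  # Gaussian-equivalent dispersion
    print(R, round(sigma_min, 1))  # ~70.7 km/s and ~18.2 km/s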
For illustrative purposes, we also overplotted the contours of the HVS on the left-hand map of Figure <ref>. These contours were created by forcing LUCI to specifically fit the emission line from the HVS, thus allowing us to isolate it in a dedicated small region of the SITELLE data. §.§ Emission Line Intensity Ratios and Flux Maps First, the Hα averaged flux map visible in the upper plot of Figure <ref> shows a disk-shaped structure in the central region, with a size of ∼ 22 kpc by 5 kpc, displaying a flux an order of magnitude higher than the rest of the filaments. The flux map from <cit.> hinted at this structure, but the new data reveal it in much greater detail. The high averaged flux of this disk-shaped structure is also associated with a large velocity dispersion, which will be discussed in Section 3.4. Beyond the disk-shaped structure, localised parts of specific filaments display slightly higher Hα fluxes, but most of them have a homogeneous lower flux. Emission line ratios can be of great interest when studying the ionization properties of an emitting gas (see ). In this regard, the [N II]λ6583/Hα ratio is of importance, since it informs us of the relative contribution of soft versus hard sources of ionization. In the case of a low ratio (meaning Hα flux ≫ [N II]λ6583 flux), photons from young stars can be the ionizing source, while for a higher ratio (meaning Hα flux ≤ [N II]λ6583 flux), a harder ionizing spectrum (from an AGN, for example) or energetic ICM particles are needed to provide additional heat (see ). Similarly, the ratio of [S II]λ6716 to Hα(6563Å) can be used as a confirmation value for various ionization mechanisms. Here, to produce the ratio map of [S II]λ6716 to Hα(6563Å), we specifically applied the fitting procedure for the Hα emission line by considering the bins obtained through the WVT algorithm, which uses an SNR of 35 for the [S II] doublet (between wavelengths of 671.1 nm and 675.7 nm), thus creating bins of the exact same size and position as those of the [S II] doublet flux maps, but fitting the Hα emission line at λ = 656 nm. Afterwards, we can obtain the ratio map by dividing these two flux maps, therefore producing a flux ratio map for both emission lines with the exact same number of bins (see the bottom-left plot of Figure <ref>). The upper-left map of Figure <ref> shows that the central region of the structure displays a [N II]λ6583/Hα ratio close to or slightly above 1.0, while the extended filaments show a variety of smaller ratio values, between 0.5 and 1.0. This trend between central and extended filaments can also be seen in the upper right plot of Figure <ref>, where we produced the mean (i.e., taking the mean of the ratio values across the annuli) and ensemble fit results (i.e., extracting one combined spectrum for the entirety of the pixels inside the annulus and fitting it to obtain an ensemble ratio value) for the [N II]λ6583/Hα ratio across annuli containing 1500 pixels each, reproducing the methodology of <cit.>. However, when emulating this methodology for high-spectral resolution SITELLE data, we noticed that ensemble spectra at high spectral resolution are severely affected by the spatial distance between annulus pixels, leading to blended emission lines and to large offsets between the mean and ensemble fits. We decided to keep those results in the figure, but we analyze only the mean fit results. The center of each annulus is determined based on the WCS coordinates of the central galaxy NGC 1275, given as RA: 03:19:48.16 and DEC: +41:30:42.1 in the literature (see ) and observed in our SITELLE data. The width of the annuli is determined according to the fixed number of 1500 detected pixels they contain. Therefore, they vary between widths of 7 to 10 pixels, and up to 57 pixels for the outer filaments.
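A minimal numpy sketch of the construction of these fixed-population annuli and of the mean profile could read as follows (ratio_map and center are hypothetical inputs; the ensemble fits require the full spectra and are not sketched here):

import numpy as np

def radial_profile(ratio_map, center, npix=1500):
    # Sort the detected pixels by distance to the nucleus, then group them
    # into consecutive annuli containing `npix` pixels each.
    yy, xx = np.indices(ratio_map.shape)
    valid = np.isfinite(ratio_map)
    r = np.hypot(yy - center[0], xx - center[1])[valid]
    vals = ratio_map[valid]
    order = np.argsort(r)
    r, vals = r[order], vals[order]
    radii, means, stds = [], [], []
    for i in range(0, len(vals) - npix + 1, npix):
        radii.append(r[i:i + npix].mean())
        means.append(vals[i:i + npix].mean())
        stds.append(vals[i:i + npix].std())  # error bars of the mean profile
    return np.array(radii), np.array(means), np.array(stds)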
Error bars are also plotted for both methods, obtained through statistical calculations for the mean, or through LUCI's fitting procedure for the ensemble fit; the error bars of the ensemble fit are too small to see compared to the data scale. This profile shows a decrease in the mean values from the center (0 - 10 kpc) to the outer filaments (10 - 30 kpc), from 1.0 ± 0.5 to 0.6 ± 0.5, thus suggesting a gradual change or variation of the ionisation mechanism. However, the ratio values appear more chaotic than structured in the extended filaments, which is reflected by a fairly large standard deviation in the error bars of the mean profile (see the upper right plot of Figure <ref>). Such a radial trend is similar to what was previously found by <cit.> with the low-spectral resolution SITELLE data (R = 1800), but also through slit spectroscopy (). The presence of star formation has already been detected in specific southern and north-western regions of the filamentary nebula (see ). However, photoionization models predict a ratio that is highly dependent on metallicity, with an upper limit of around 0.5 (); thus, such a process could not explain the higher ratios found throughout the southern filaments. In addition, we can also conclude that the central region, near the AGN, is more likely to be ionised by the AGN activity as well as by energetic particles, since it displays higher ratios, while the outer filaments could predominantly be ionized by energetic particles, showing intermediate [N II]λ6548/Hα emission line ratios. Another possibility that is often presented as a mechanism to explain the energetics and ionisation of the filaments is collisional excitation by energetic particles coming from the hot ICM (see ). In such circumstances, the [N II]λ6548/Hα emission line ratio would be close to ∼ 0.3. However, if we consider the [S II]λ6716/Hα ratio, displayed in the bottom left plot of Figure <ref>, we can see that the outer filaments show higher ratios, above ∼ 1; nevertheless, various behaviors are observed for different filaments. In the case of ionization by cosmic rays, the [S II]λ6716/Hα ratio is predicted to be 1.4 according to the cosmic ray heating model (see Table 5 of ), which could be an argument in favor of the collisional ionisation model for the outer filaments. However, the northern filaments, as well as parts of some other filaments, show even higher [S II]λ6716/Hα ratio values, above ∼ 2.0. Nevertheless, in the bottom right plot of Figure <ref>, where we produced the mean and ensemble fit results of the [S II]λ6716/Hα ratio across annuli containing 1500 pixels each, we can see that most mean values are constrained between ∼ 0.5 and 1.5, which could argue in favor of an ionization mechanism through cosmic rays, though it is not possible to offer conclusive results with the emission lines at hand. Using subsequent SITELLE observations of NGC 1275 with the SN1 (365 - 385 nm) and SN2 (480 - 520 nm) filters (PI: G.-Marsolais), which provide other emission lines, we are currently pursuing a deeper analysis of the ionization mechanisms at play in the filamentary nebula (Rhea et al., in preparation).
Similarly, the analysis of the [S II] emission lines and their ratios will be explored in a future article (Vigneron et al., in preparation). §.§ Central Disk-shaped Structure As mentioned above, the filamentary nebula seen in the optical is part of a larger multiphase structure that correlates spatially with filament-like structures seen in X-rays (, ) and molecular gas structures (, ). Such multi-phase filaments are at the heart of many models explaining the formation of such structures. Indeed, it is argued that the black hole jets create expanding bubbles, rising through and carving the ICM, while inducing turbulence, sound waves, and shocks (e.g. , ). This effect can be seen through the deformation of optical filaments taking the shape of convection cells, and through their spatial correlation with the trailing radio bubbles (; see also the horseshoe filament in the left plot of Fig. <ref>). One model argues that the depletion of metals in the galaxy and the deformed morphology of these filaments are explained by the rise of bubbles fueled by the activity of the central supermassive black hole (e.g. ). A second model demonstrates that the rise of radio bubbles through the ICM can trigger local thermal instabilities in the X-ray emitting gas, which would lead to local cooling flows where the density of material would increase as a consequence of cooling. Due to the deep gravitational potential of the BCG, these filaments of cooler gas would fall onto the BCG, fueling the black hole activity through this feedback cycle (e.g. , ). This precipitation-based model would explain the observed multiphase structure by the cooling of gas from X-ray temperatures at ∼ 4 × 10^7 K down to cold molecular gas at < 10^3 K visible in radio (, ). Thus, the detection of a cold CO(2-1) molecular gas structure close to the central region of the filaments in NGC 1275, as well as similar detections of correlated cold molecular gas in the outer filaments (), offers a key argument regarding this formation model. The strongly emitting central structure was detected and studied by <cit.> and is divided into three main filaments and smaller clumps, all located near the central plane of the galaxy. They also showed that this central disk spatially correlates with the optical filaments seen in Hα, as well as with the X-ray emission between 0.5 and 1 keV. Our new observations with SITELLE, as seen in Figure <ref>, also show that the Hα flux contours spatially correlate with the CO(2-1) molecular gas as seen by <cit.>, especially in the case of the left-side filaments of the central disk-shaped structure (see the right panel of Figure <ref>), which are also located along the lower borders of an expanding radio-filled bubble as observed by <cit.>. This region seems to be directly linked with the central disk that has higher averaged flux and velocity dispersion, as seen in the Hα flux map shown in Figure <ref>. Such a contrast between central and extended flux is similar to what was previously obtained by <cit.>, where the detection of CO across the filaments demonstrated that the central region has a flux around 10 times higher than the emission found in the filaments. However, <cit.> clearly showed that this higher central flux emission is not only linked to the AGN, but also to this bright and uniform disk-shaped feature. This result proves insightful when considering the observations of BCGs by ALMA (), revealing mostly filamentary and disk-shaped structures around these galaxies. Some of these features display rotation (e.g.
), however, this does not appear to be the case for the central disk-shaped structure of NGC 1275 (see ), even though closer observations of the inner regions near the central galaxy indicate rotating motions of the gas (see , ). Decades of optical observations have revealed that NGC 1275 is surrounded by an extended array of filaments, spreading throughout the inner 100 kpc, which are the largest seen in any cluster (e.g. ). However, we also know that the Perseus cluster is one of the closest to us; therefore, in more distant clusters, only the brightest filaments could potentially be seen. Indeed, when considering our new observations of NGC 1275 with SITELLE, we can observe a significantly brighter central disk-shaped structure (∼ 1×10^-17 erg s^-1 cm^-2 Å^-1 flux) as opposed to the outer filaments (∼ 1×10^-18 erg s^-1 cm^-2 Å^-1 flux). Thus, more distant clusters could also harbour extended arrays of filaments that our current generation of telescopes is not able to detect. Simulations exploring this idea have been developed for NGC 1275 by <cit.> and revealed that, by a redshift of z ∼ 0.06, most of the filaments would be undetectable, leaving only the central bright region surrounded by unresolved features (see Fig. 16 of ). Similarly, recent observations of the BCG of the Centaurus cluster by <cit.> revealed a faint and diffuse Hα nebula surrounding it. Recent ALMA observations of NGC 1275 by <cit.> provided a detection of a smaller portion of the central disk in CO(2-1) close to the AGN, as well as of HCN(3-2) and HCO^+(3-2) emission within a radius of 1.8 arcsec (0.658 kpc) around the AGN, revealing a rotating motion of the emitting gas at this scale. Such a detection is within the central region that we have masked in our SITELLE fits due to the broad component being present, as mentioned in Section 2.6. §.§ Velocity Dispersion Structure Previous observations of the filamentary nebula surrounding NGC 1275 with slit spectroscopy or IFU instruments demonstrated that the velocity dispersion of the ionised optical gas is higher in the central region near the AGN (∼ 150 km/s), diminishing gradually to lower values (∼ 50 km/s) in the outer filaments (, ). These observations support the idea that the AGN activity is the source of a higher level of agitation in the central region, while other mechanisms, such as the turbulence and shocks induced by the trailing of ascending radio-filled bubbles, would induce a lower but non-negligible velocity dispersion in the outskirts of the structure (). The new SITELLE observations at high spectral resolution confirm that the higher velocity dispersion of the central disk-shaped structure is not due to multiple velocity components, based on a visual inspection of the spectra. Indeed, the regions displaying multiple velocity components are extremely localized compared to the extended disk-shaped central structure visible in Fig. <ref>. Beyond this disk-shaped structure, the velocity dispersion decreases sharply and remains uniformly low throughout the rest of the filaments, out to r ∼ 50 kpc. The velocity dispersion in the central disk-shaped structure is almost two times higher than what previous low-spectral resolution observations showed (, see the left plot of Fig. <ref>). When considering only the central bright region, by applying a more stringent flux cut-off of 1×10^-17 erg s^-1 cm^-2 Å^-1, the mean velocity dispersion of the central disk-shaped structure is ∼ 134 km/s.
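This amounts to a simple masked average; a sketch, with hypothetical flux and dispersion maps from the fits:

import numpy as np

# flux in erg/s/cm^2/A and velocity dispersion in km/s, same shape.
def mean_dispersion(flux, dispersion, cutoff=1e-17):
    sel = flux > cutoff  # keep only the bright central structure
    return np.nanmean(dispersion[sel])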
The disk-shaped structure seems specifically located at the position of the central CO(2-1) disk of molecular gas and at the border of the inflating radio bubble (see the right side of Figure <ref>). The most striking feature displayed in Fig. <ref> is the lack of a smooth gradient between the high dispersion of the central region (∼ 134 km/s) and the lower dispersion of the outer filaments (∼ 44 km/s; see also the right plot of Figure <ref>). For the gas in the central region, we could expect to see a higher velocity dispersion, since more gas is likely found there, whereas for the outer filaments we are viewing individual structures, and are therefore probing more of a pencil-beam view of their kinematics. However, the sharp contrast in velocity dispersion from the central disk-shaped structure to the filaments beyond it is puzzling. We also note that, despite this expected overlap of gas in the central region, few emission lines with highly separated velocity components are found in this region, which will be discussed in Section 3.7. If we consider the entire system of low-dispersion filaments, it has a global velocity dispersion much greater than ∼ 44 km/s, as indicated by the Hα velocity dispersion map, even if the central region displaying higher values is excluded. The uniformity in the (pencil-beam) velocity dispersion of the outer filaments might therefore reflect a different physical process from the one responsible for the global velocity dispersion of the filament system. The global velocity dispersion is likely to be connected to motions of the hot medium, while the pencil-beam velocity dispersion of the ionized gas is more likely to be connected to shearing motions at the interfaces between the filaments and the hot medium, which we will investigate in Section 3.5. Thus, this result could also imply that two completely different mechanisms might be at play to introduce such a clear differentiation in velocity dispersion. We also compare our results with simulations from <cit.> (see Figure <ref>). The brown line shows the average line-of-sight velocity dispersion as a function of radius in simulated filaments from <cit.> that reproduce the ones seen in NGC 1275. We first take the simulation outputs generated every 10 Myr over a 300 Myr period during which the simulated cluster most resembles Perseus in the morphology and spatial distribution of the Hα filaments. For each output, we make a line-of-sight velocity dispersion map of the Hα gas (with temperatures of ∼ 10^4 K) weighted by emissivity. Then, the data are split into radial distance bins and averaged for each bin. We then compute the time-averaged velocity dispersion profile and its 1σ scatter (orange shaded area). We can thus see that the simulation shows lower values of velocity dispersion further away from the central region, which is in accordance with our observational results. Overplotted in purple and green are the mean and ensemble fits extracted from the annuli seen in the left side of Figure <ref>. These datapoints clearly show a drop in velocity dispersion at a short radial distance away from the central region, matching the simulated data; the outer filaments display a stable low velocity dispersion, in agreement with the simulations.
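A sketch of the emissivity-weighted line-of-sight dispersion map described above, for a hypothetical simulation cube with velocities v and Hα emissivities w (the actual pipeline of the cited work may differ in its details):

import numpy as np

def los_dispersion_map(v, w):
    # v: line-of-sight velocity of each cell; w: H-alpha emissivity weights;
    # both of shape (nz, ny, nx), with the z-axis along the line of sight.
    wsum = w.sum(axis=0)
    vbar = (w * v).sum(axis=0) / wsum             # emissivity-weighted mean
    var = (w * (v - vbar)**2).sum(axis=0) / wsum  # weighted second moment
    return np.sqrt(var)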
An interesting similarity to our results can be observed in the study of Abell 2597 by <cit.>, as observed with MUSE and ALMA. This study also demonstrated the presence of a spatial correlation between a central v-shaped molecular gas structure and its optical emission counterpart in Hα. Both of these structures were also found to be comoving and closely tracing the wake of the X-ray cavities formed by the activity of the BCG's AGN. Moreover, when considering the velocity dispersion of the optical gas in Abell 2597 (see Figure 13 of ), a similar result can be observed, where the correlated emission region displays an extremely high velocity dispersion (up to ∼ 350 km/s) which quickly falls to lower values in the outer filaments (∼ 50 km/s). Regarding the filament formation models that have been developed and detailed previously, <cit.> explore a unifying model involving chaotic cold accretion, precipitation, and stimulated feedback through the description of a galaxy-scale "fountain". In this model, the central cold molecular gas and optically emitting filaments would supply the accretion reservoir of the AGN, thus fueling its activity. The jets it produces would then inflate radio bubbles, buoyantly rising and carving the surrounding ICM, therefore creating turbulence and thermal instabilities, as well as uplifting ionised gas in their trail (e.g. and simulation works by ). The raised material would then precipitate back into the central potential well, thus sustaining the "fountain". Moreover, further thermal instabilities generated by the turbulent AGN feedback would reinforce the precipitation of colder gas in slightly denser parts of the ICM, thus facilitating the formation of extended filamentary nebulae (e.g. , see also and ). §.§ Extended Filaments and Turbulent Radiative Mixing Layers We now investigate a possible formation scenario, through turbulent radiative mixing layers, for the extended filaments displaying lower velocity dispersion. This scenario involves the formation of filaments in situ, instead of their being dragged out by the adiabatic rise of the radio bubbles carved by the SMBH jets. Turbulent mixing layers are boundary layers between gaseous materials at different temperatures. Due to their extreme difference in internal energy, the materials will interact and produce a layer of intermediate temperature where turbulent motions are more prominent. This layer will thus produce emission lines associated with the intermediate temperature, and possibly shield the inner colder layer from being completely mixed with the higher temperature material. In the context of filaments in clusters, the hot intracluster gas at ∼ 4 × 10^7 K () acts as the hot phase, while turbulent motions are thought to be due to the AGN feedback processes, setting a boundary layer between this medium and the colder, optically emitting filaments at 10^4 K. Previous works by <cit.> have highlighted the idea of extended turbulent mixing layers in the intracluster medium. They discussed that filamentary structures visible in the optical could form out of the cooling flows of AGN through absorption of the far-UV emission of turbulent mixing layers, which would then induce UV and optical emission by photoionization of the colder gas. However, detections of embedded cold molecular gas all throughout the filamentary nebula were made later by <cit.>, implying that turbulent radiative mixing layers could be created at the boundaries between the cold molecular gas and the surrounding hot ICM.
Indeed, the ICM of the Perseus cluster is known to have a high temperature (∼ 4×10^7 K, see ), while the cold molecular gas found all throughout the filaments has a much lower temperature (∼ 10-10^3 K, e.g. ). Thus, we might wonder if the optical emission observed with the SITELLE data could be part of a radiative turbulent mixing layer between the two media. The presence of a turbulent mixing layer could then explain the intermediate temperature of the optical gas. We do know that the filaments are extremely thin and display a thread-like structure (), which would be reminiscent of the shape taken by turbulent mixing layers (see ). With such a thin structure, physically close filaments must be connected, leading to single-component emission lines (see ). A spectral resolution higher than R = 7000 would be needed to explore in detail the possible multiple components of their emission lines, since none are detected within the outer filaments after a spectral examination of the SITELLE data. The velocity dispersion in mixing layers is increased by in situ turbulence caused by the interaction between material at different temperatures. Recent simulations (e.g. , , ) show, however, that the turbulence of material inside the layer remains low (∼ 30 km/s). These simulation results could potentially prove insightful with regard to the observed mean velocity dispersion value of ∼ 44 km/s, prevalent within the extended filamentary nebula surrounding NGC 1275. Hence, when considering the structure of a turbulent mixing layer in conjunction with our optically emitting extended filaments, it could mean that the outer layer of turbulence between the cold molecular gas and the hot ICM could be shielding the inner filaments and preventing them from having a higher velocity dispersion, leading to a clear dichotomy between inner filaments, disturbed by the formation and growth of radio bubbles, and outer quiescent filaments. Therefore, the velocity dispersion in the outer extended filaments could be a proxy for turbulent radiative mixing layers. Finally, recent works reported the discovery of hidden cooling flows in a sample of nearby clusters of galaxies, including Perseus (see , , ). Here, the authors argue that AGN feedback may not be as efficient in heating the intracluster medium as initially thought; instead, the intracluster medium would be allowed to cool, explaining the substantial molecular gas reservoirs found in these systems. They predict that these cooling flows are not seen at X-ray wavelengths because they are being obscured by photoelectrically-absorbing cold clouds and dust (e.g. , , ); if this is the case, then the absorbed emission will re-emerge in the far infrared. Indeed, extreme cooling of the gaseous material surrounding BCGs could prevent it from being detected, due to extremely low temperature and emissivity (see ). According to these results, cooling flows of ∼ 30 - 100 M_⊙ yr^-1 could therefore be present within clusters such as the Perseus cluster of galaxies (). Interestingly, it is suggested that these hidden cooling flows could have internal structures resembling those of turbulent radiative mixing layers, similar to the ones proposed here (see , ).
Nevertheless, future observations of the ICM in clusters of galaxies with upcoming X-ray telescopes such as XRISM will probe a variety of transitions expected from such hidden cooling flows, providing further insight into these flows and their connection to multiphase gas in clusters. §.§ Velocity Structure Now focusing on the analysis of the velocity structure obtained with the new SITELLE observations, we find an Hα velocity map very similar to what was previously obtained by <cit.> at lower spectral resolution (see Figure 5 of and the left plot of Figure <ref>). This result strengthens previous conclusions regarding the dynamics of the filamentary nebula. Moreover, when looking at the mean and ensemble velocity fits across annuli containing 1500 pixels (see the right plot of Figure <ref>), there does not appear to be a specific radial velocity trend throughout the filaments, which reinforces the idea of a chaotic velocity structure. Considering the Herschel infrared emission kinematics, as studied by <cit.> and displayed in the upper left panel of Figure <ref>, we can see that similar velocity trends could also hint at a comoving nature of the gas at various wavelengths. Indeed, when considering similar velocity intervals as in Figure 5 of <cit.>, we can clearly see that the kinematics of the south-western filaments (see the top-right plot of Figure <ref>) display a similar trend of negative velocities, between -350 and -50 km/s, to the spatially correlated [CII] emission line kinematics (see the top-left plot of Figure <ref>). This trend is also visible for both intermediate velocity values, between -50 and 50 km/s, and positive velocities, between 50 and 350 km/s, although less strikingly (see the bottom plots of Figure <ref>). <cit.> also obtained a velocity map of the central molecular gas disk, which showed mostly blue-shifted velocities and no sign of global rotation. To directly compare the velocity maps obtained for the optical emission to the molecular map, we shifted our map considering the redshift value of z = 0.01756 used by <cit.>, instead of the adopted z = 0.017284. After this correction, the Hα emission shows a similar velocity structure for the central disk-shaped structure (see Figure <ref> and Figure 2 of ).
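To first order, the offset implied by this change of reference redshift is Δv ≈ cΔz/(1+z), i.e., a shift of roughly 80 km/s (our estimate, not a value quoted by the cited work):

c = 299792.458  # km/s
z_ours, z_alt = 0.017284, 0.01756
dv = c * (z_alt - z_ours) / (1 + z_ours)
print(round(dv, 1))  # ~81.3 km/s shift applied to the velocity map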
The detection of similar disk-shaped structures can also be found in simulation works such as those of <cit.>, where they exhibit higher levels of flux and originate from the cooling of gas into the black hole's potential well. However, the main difference with these simulations is the fact that the central disk always displayed a rotating motion along the plane perpendicular to the jets, which is not the case for our observations, where the motion seems more chaotic (see Figures <ref> & <ref>). This difference shows that these simulations seem not to fully capture the physical processes at play in the formation and evolution of the filaments we are observing in NGC 1275. Moreover, the presence of radio bubbles fueled by the SMBH relativistic jets, which seem spatially linked to the presence of this disk, was not considered in previous simulations until recently (see ). However, these simulations incorporating the effect of expanding radio bubbles do not yet study their impact on the velocity dispersion and kinematics of the medium surrounding the bubbles. This aspect of 3D hydrodynamical simulations of AGN feedback still needs to be explored to properly understand the prominent role of radio bubbles in their surrounding environment. In that sense, recent simulation works by <cit.> clearly show the turbulence and eddy formation, as well as the uplift of material, in the wake of rising radio bubbles. Though the simulation models only involve detached bubbles approaching their terminal velocities, it is still of interest to see that material close to the central region gets lifted upwards and acts as the upper layer of the deformation close to the bubble's lower boundary (see the top panels of Figure 4 of ). Therefore, an argument could be made that the central region or disk visible in our observations, displaying a higher level of velocity dispersion as well as a clear spatial correlation with cold molecular gas, could be lifted through the detachment and adiabatic rise of the bubbles, thereafter forming future filamentary structures around NGC 1275. Thanks to the high spatial and spectral resolution of SITELLE, we can also study the velocity profiles of targeted filaments in their entirety. To do this, we extracted the velocity measurements and means within 10 bins over some specific filaments, as in <cit.>. Figure <ref> shows that the velocities of the outer filaments are significantly more heterogeneous and chaotic, while the velocities of the central disk display less variation. Its velocity profile could also hint at a potential rotation pattern within the central 2.5 kpc around the AGN; however, the error bars prevent us from drawing a clear conclusion. The northern filament displays a velocity profile ranging from slightly positive to slightly negative values with increasing distance from the base of the filament, which would indicate stretching. Similar results were obtained both with SITELLE () and with the Gemini Multi-Object Spectrograph (). The southern filament mostly shows negative velocities, slightly decreasing and then increasing along the filament, while the central filament seems to display an inverse trend, with mostly positive velocity values. If we now turn our attention to the averaged velocity profile of the Horseshoe filament, we can see that both its bases (first and last datapoints of the lower left panel of Figure <ref>) tend toward negative average velocities of ∼ -25 km/s, while the tip of the Horseshoe (represented by the fifth and sixth datapoints of the lower left panel of Figure <ref>) also displays a drop in averaged velocity, toward zero. On the other hand, the branches of the Horseshoe clearly display averaged velocities that increase and then decrease while remaining positive (as shown in the lower left panel of Figure <ref>). This averaged velocity profile could indicate that the branches of the Horseshoe filament might be stretching horizontally, as opposed to the bases and tip, which seem to follow a trend of displacement through the traction of a radio-emitting bubble (see ).
However, the limited detection of the tip of the Horseshoe, as can be seen on the right-hand side of Figure 16, prevents us from making affirmative statements regarding these central bins of the Horseshoe averaged velocity profile. § CONCLUSION We performed the analysis of new high-spectral resolution observations with SITELLE and obtained new flux, velocity, and velocity dispersion maps of the [S II]λ6716, [S II]λ6731, [N II]λ6584, Hα(6563Å), and [N II]λ6548 emission lines for the filamentary nebula surrounding NGC 1275, the brightest cluster galaxy of the Perseus cluster. * We detected a central disk-shaped structure displaying a higher averaged flux (∼ 1 × 10^-17 - 2 × 10^-16 erg s^-1 cm^-2 Å^-1) and velocity dispersion (∼ 134 km/s) than the rest of the filaments, which is also spatially correlated with a disk-shaped structure seen in CO(2-1), associated with molecular gas. Both of these structures seem to spatially correlate with the wake of the radio bubbles that have been inflated through the relativistic jets of the supermassive black hole at the center of the galaxy. However, this disk-shaped feature does not display a clear rotation pattern, which marks a definitive difference from similar structures obtained through simulations. * The rest of the filamentary structure displays fainter flux measurements (∼ 1 × 10^-18 erg s^-1 cm^-2 Å^-1), as well as a much lower velocity dispersion (∼ 44 km/s) across the outer filaments, thus implying a potentially more quiescent formation mechanism. * Thanks to our very high spectral resolution, we also managed to detect regions and knots displaying multiple emission line components. However, they are extremely localized and only found in the central part of the filamentary nebula, within r < 10 kpc. Nevertheless, a future study of these structures could potentially help us gain more insight into the three-dimensional structure of the central filaments. * Regarding the formation model explored for this filamentary nebula, it seems that a unifying model, as explored by <cit.>, could potentially explain the observed characteristics of the emitting gas, namely the flux and velocity dispersion measurements obtained through the analysis of the SITELLE observations of NGC 1275. Indeed, regarding the galaxy-scale fountain model explored by <cit.>, we observe similar trends, both in terms of kinematics and velocity dispersion, between NGC 1275 and Abell 2597. These similarities are also seen in the comoving nature of the filamentary nebulae with both the cold molecular structures and the surrounding gas. A unifying model of formation, through the inflow of cold molecular clouds toward the AGN, could thus feed its accretion and drive the formation of radio bubbles. This would in turn uplift multiphase material away from the BCG, which could then precipitate back toward the AGN and supply the fountain, while turbulent radiative mixing layers could form between the hot ICM and the cold molecular gas. However, contentious points still remain to be explored to properly comprehend the mechanisms leading to the formation of such an extended filamentary nebula. Through our analysis of new high-spectral resolution observations of the filamentary nebula surrounding NGC 1275, we reinforced the previous results established by <cit.> and discovered new structures in the optical emission of the filaments.
However, these are the first results obtained with this dataset, and we expect to obtain more through an improved analysis of the emission close to the AGN, a study of the SN1 and SN2 filters of SITELLE, as well as the calculation of the velocity structure function with these new maps. Finally, new X-ray observations with the XRISM space telescope (XRISM Science ), the successor of Hitomi, will enable a breakthrough in the study of AGN feedback. This study will offer a touchstone for the analysis of renewed X-ray observations of NGC 1275. The authors would like to thank the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. The observations at the CFHT were performed with care and respect from the summit of Maunakea, which is a significant cultural and historic site. B.V. acknowledges financial support from the physics department of the Université de Montréal. J.H.-L. acknowledges support from NSERC via the Discovery grant program, as well as the Canada Research Chair program. M.G.-M. acknowledges financial support from the grant CEX2021-001131-S funded by MCIN/AEI/10.13039/501100011033, and from the coordination of the participation in SKA-SPAIN, funded by the Ministry of Science and Innovation (MCIN). J.L. acknowledges support from the Research Grants Council of Hong Kong through grant 17300620. N.W. is supported by the GACR grant 21-13491X. Y.L. acknowledges financial support from NSF grants AST-2107735 and AST-2219686, NASA grant 80NSSC22K0668, and Chandra X-ray Observatory grant TM3-24005X. L.R.-N. is grateful to the National Science Foundation NSF - 2109124 and the Natural Sciences and Engineering Research Council of Canada NSERC - RGPIN-2023-03487 for their support. B.V. also personally acknowledges Pr. Christopher Conselice for sharing observational data of NGC 1275 in the optical, as well as Dr. Rupal Mittal for sharing Herschel kinematic data of NGC 1275 in the infrared. python (), astropy (, ), numpy (), scipy (), matplotlib (), () § BACKGROUND VARIABILITY In this section, we display two spectra extracted from background regions (see Figure <ref>), which illustrate the clear variability in background sky emission at high spectral resolution. § SIGNAL-TO-NOISE RATIO In this section, we present the signal-to-noise ratio map obtained with the dedicated function of LUCI before applying the weighted Voronoi tessellation binning. As can be seen in Figure <ref>, the overall signal-to-noise ratio is low across the entire filamentary nebula, and binning is thus required. § MULTIPLE EMISSION LINE COMPONENTS SPECTRA In this section, we show a small number of spectra belonging to localized regions of the filamentary nebula and displaying multiple emission line components. Their respective central coordinates are the following: RA: 3:19:48.165, DEC: +41:30:54.275; RA: 3:19:46.92, DEC: +41:30:51.703; RA: 3:19:46.623, DEC: +41:30:40.277 | http://arxiv.org/abs/2311.16247v1 | {
"authors": [
"Benjamin Vigneron",
"Julie Hlavacek-Larrondo",
"Carter Lee Rhea",
"Marie-Lou Gendron-Marsolais",
"Jeremy Lim",
"Jake Reinheimer",
"Yuan Li",
"Laurent Drissen",
"Greg L. Bryan",
"Megan Donahue",
"Alastair Edge",
"Andrew Fabian",
"Stephen Hamer",
"Thomas Martin",
"Michael McDonald",
"Brian McNamara",
"Annabelle Richard-Lafferriere",
"Laurie Rousseau-Nepton",
"G. Mark Voit",
"Tracy Webb",
"Norbert Werner"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231127190101",
"title": "High-Spectral Resolution Observations of the Optical Filamentary Nebula in NGC 1275"
} |
[email protected] Programa de Pós-Graduação em Física, Universidade Federal do Pará, 66075-110, Belém, Pará, Brazil. [email protected] Campus Altamira, Instituto Federal do Pará, 68377-630, Altamira, Pará, Brazil. [email protected] Programa de Pós-Graduação em Física, Universidade Federal do Pará, 66075-110, Belém, Pará, Brazil. Departamento de Matemática da Universidade de Aveiro and Centre for Research and Development in Mathematics and Applications (CIDMA), Campus de Santiago, 3810-183 Aveiro, Portugal. We investigate the propagation of massless particles and scalar fields in the background of Hayward regular black holes. We compute the absorption and scattering cross sections and compare our numerical results with some analytical approximations, showing that they are in excellent agreement. We show that some of the absorption and scattering results of Reissner-Nordström black holes can be mimicked by Hayward regular black holes, for appropriate choices of the charge of the two different black holes. Geodesic analysis, absorption and scattering in the static Hayward spacetime Luís C. B. Crispino January 14, 2024 § INTRODUCTION In recent years, experiments testing the strong-field regime have consolidated general relativity (GR) as a robust theory to describe gravity <cit.>. Despite its achievements, GR also predicts the existence of curvature singularities at the core of standard black hole (BH) solutions. One may argue that the limitations of Einstein's theory at the BH center stem from its classical formalism. Therefore, a fully quantum theory of gravity, successfully combining GR and quantum field theory, could avoid the formation of curvature singularities. Yet, although there have been efforts in this direction (see, e.g., Ref. <cit.> for a review), a fully successful quantum gravity theory has not been obtained so far. As an alternative to the standard BH solutions of GR, there are the so-called regular BH (RBH) spacetimes, i.e., (curvature) singularity-free BH geometries. Nonsingular static spacetimes can be obtained by requiring an effective cutoff in the energy density at the BH center, preventing the spacetime metric from diverging at r = 0. This can be accomplished by demanding that the spacetime behaves as a de Sitter <cit.> or Minkowski <cit.> geometry at the BH core. (For reviews on RBHs and possible physical sources, see Refs. <cit.>.) The Hayward geometry <cit.> is an example of a spacetime which can have no curvature singularities, avoid the mass inflation phenomenon <cit.> (at least from the classical point of view), and can be interpreted as a BH solution sourced by a nonlinear magnetic monopole <cit.> [The Reissner-Nordström solution can also be regarded as a BH solution sourced by a magnetic monopole <cit.>.]. Due to these features, the Hayward geometry has gained a lot of attention over the past few years. We can improve our understanding of the Hayward spacetime by investigating how it interacts with surrounding fields. In this context, we can compute the absorption and scattering cross sections for Hayward BHs [a subject which has been extensively studied since the 1960s in several BH scenarios (see, e.g., Refs. <cit.> and references therein)]. We investigate the absorption and scattering properties of neutral massless test scalar fields in the background of Hayward RBHs. The remainder of this paper is organized as follows. In Sec.
<ref>, we introduce the Hayward spacetime as a solution of GR minimally coupled to nonlinear electrodynamics (NED). In Sec. <ref>, we investigate the trajectories of massless particles and also consider the semiclassical glory approximation. The partial-wave analysis is applied in Sec. <ref> to obtain the absorption and scattering cross sections of the massless scalar field. In Sec. <ref>, we present our main results concerning the absorption and scattering of neutral massless test scalar fields in the Hayward spacetime. Our concluding remarks are stated in Sec. <ref>. Throughout this work, we consider natural units, for which G = c = ħ = 1, and signature +2. § HAYWARD SPACETIME The action associated with the minimal coupling between GR and NED can be written as 𝒮 = (1/16π)∫ d^4x (R - ℒ(F))√(-g), where R is the Ricci scalar, ℒ(F) is a gauge-invariant Lagrangian density, and g is the determinant of the metric tensor g_μν. The function F is the Maxwell scalar, namely F = F_μνF^μν, with F_μν being the standard electromagnetic field tensor. By varying the action (<ref>) with respect to g_μν, we get G_μ^ ν = T_μ^ ν = 2(ℒ_F F_μσF^νσ - (1/4)δ_μ^ νℒ(F)), in which ℒ_F ≡ ∂ℒ/∂F. The dynamical field equations of the electromagnetic field are given by ∇_μ(ℒ_F F^μν) = 0 and ∇_μ⋆F^μν = 0, where ⋆F^μν is the dual electromagnetic field tensor. We consider a static and spherically symmetric line element given by ds^2 = -f(r)dt^2 + f(r)^-1dr^2 + r^2dΩ^2, where f(r) is the metric function to be determined by the field equations (<ref>) and dΩ^2 = dθ^2 + sin^2θ dφ^2 is the line element of a 2-dimensional unit sphere. In this context, the only nonvanishing components of the electromagnetic field tensor are given by F_23 = -F_32 = Q sinθ, so that the Maxwell scalar is F = 2Q^2/r^4. The NED model associated with the Hayward spacetime can be written as <cit.> ℒ(F) = (12M/(|Q|Q^2)) (Q^2F/2)^(3/2)/(1 + (Q^2F/2)^(3/4))^2, where Q and M are the magnetic charge and mass of the central object, respectively. By using the equation G_0^ 0 = T_0^ 0, we obtain the Hayward metric function f(r) = 1 - 2Mr^2/(r^3 + Q^3). In the chargeless limit (Q → 0), the Hayward metric function reduces to the Schwarzschild one. In Fig. <ref>, we display the Kretschmann scalar invariant for the Hayward spacetime, given by K = R_μνσρR^μνσρ, where R_μνσρ is the Riemann tensor. Throughout this paper, we consider Q > 0, which is sufficient to guarantee the absence of curvature singularities for r ≥ 0 <cit.>. The line element (<ref>), with the metric function (<ref>), describes RBHs when the condition Q ≤ Q_ext ≈ 1.0582M is satisfied, where Q_ext is the extreme charge value. We can obtain Q_ext by solving f(r) = 0 and f'(r) = 0 simultaneously, where ' denotes differentiation with respect to the radial coordinate r. For Q < Q_ext, we have two horizons, given by the real roots of f(r) = 0. We denote the Cauchy horizon and the event horizon by r_- and r_+, respectively. For Q = Q_ext, the two horizons coincide. In turn, Q > Q_ext leads to horizonless solutions, which are beyond the scope of this work. Moreover, we exhibit our results in terms of the normalized charge, defined as α ≡ Q/Q_ext, which facilitates comparisons between different spacetimes.
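The horizon structure quoted above can be checked numerically. The extremality conditions f = f' = 0 give r_ext = 2^(1/3)Q and hence Q_ext = 2^(5/3)M/3 ≈ 1.0583M, consistent with the value quoted above; a sketch using scipy root finding:

from scipy.optimize import brentq

M = 1.0
f = lambda r, Q: 1 - 2 * M * r**2 / (r**3 + Q**3)

# Extremality: f = f' = 0 gives r_ext = 2**(1/3) Q and Q_ext = 2**(5/3) M / 3.
Q_ext = 2**(5 / 3) * M / 3
print(Q_ext)  # ~1.0583 M, matching Q_ext ~ 1.0582M quoted in the text

# Cauchy and event horizons for a sub-extremal charge, bracketed around r_ext,
# where f reaches its (negative) minimum.
Q = 0.9 * Q_ext
r_ext = 2**(1 / 3) * Q
r_minus = brentq(f, 1e-6, r_ext, args=(Q,))
r_plus = brentq(f, r_ext, 10 * M, args=(Q,))
print(r_minus, r_plus)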
In the Reissner-Nordström (RN) case, Q can represent an electric or magnetic charge, while in the Hayward geometry Q can be identified exclusively as a magnetic charge <cit.>. For large r, the Hayward metric behaves as f(r) = 1 - 2M/r + 2MQ^3/r^4 + 𝒪(1/r^5), whereas at the core we have f(r) = 1 - (2M/Q^3)r^2 + 𝒪(r^5). Therefore, the spacetime is asymptotically flat as r → ∞ and has a de Sitter behavior at the center. As occurs in the Bardeen geometry <cit.>, the Hayward spacetime does not satisfy a correspondence with the Maxwell theory for large r, since the corresponding NED model (<ref>) does not behave as ℒ(F) → F for small F. We also point out that the behavior of the Hayward geometry at its center is a common feature of NED-based RBHs that satisfy the weak energy condition <cit.>. In this context, the energy density of the NED source is maximal and finite at the solution core, preventing it from diverging as r → 0, in contrast with linear electrodynamics.

§ GEODESIC ANALYSIS In this section, we investigate the propagation of massless particles in Hayward RBH spacetimes. Due to the spherical symmetry of the geometry, we treat the equations of motion in the equatorial plane, i.e., θ = π/2, without loss of generality. We recall that in NED models, photons follow null geodesics of an effective metric tensor <cit.>. Therefore, the classical results discussed in this section apply only to massless particles with nature other than electromagnetic.

§.§ Trajectory of massless particles The classical Lagrangian L that provides the equations of motion of particles in the spacetime (<ref>) is given by L = (1/2)g_μν ẋ^μ ẋ^ν, where the overdot corresponds to a differentiation with respect to an affine parameter. For massless particles, we have L = 0. The constants of motion associated with L are given by E = f(r)ṫ and L = r^2φ̇, where E and L are the energy and angular momentum of the particle, respectively. Using Eqs. (<ref>)-(<ref>), and the condition L = 0, we may find a radial equation for massless particles given by ṙ^2/L^2 = V(r) = 1/b^2 - f(r)/r^2, where b ≡ L/E is the impact parameter. From ṙ|_r = r_c = 0 and r̈|_r = r_c = 0, we may find the critical radius r_c of the unstable circular orbit and the critical impact parameter b_c, namely 2f(r_c) - r_c f'(r_c) = 0, b_c = L_c/E_c = r_c/√(f(r_c)), respectively. In Fig. <ref>, we exhibit some geodesics of massless particles in the background of the Hayward spacetime. We can obtain these trajectories by numerically integrating the radial equation (<ref>) and its first derivative. As we can observe, for b < b_c, the geodesics are absorbed, while for b > b_c they are scattered. In turn, the situation b = b_c is related to a geodesic going round the BH in an unstable circular orbit (with radius r_c). The classical capture cross section of geodesics, also known as geometric cross section (GCS), is given by <cit.> σ_gcs ≡ π b_c^2. Notice that in the spherically symmetric scenario, the critical impact parameter corresponds to the shadow radius as seen by a distant observer <cit.>. Therefore, the shape of the shadow can be obtained by the parametric plot of Eq. (<ref>). This is illustrated in Fig. <ref>, where we exhibit the shadows of the Hayward spacetime, considering different values of α (and massless particles with nature other than electromagnetic).
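A minimal numerical sketch of these critical-orbit quantities (illustrative only; the root bracket and the finite-difference derivative are ad hoc choices, not taken from the original analysis) reads:

# Sketch: light ring r_c, critical impact parameter b_c (= shadow radius),
# and geometric cross section sigma_gcs = pi b_c^2 for the Hayward metric.
import numpy as np
from scipy.optimize import brentq

M = 1.0
Q_ext = 2.0**(5.0 / 3.0) / 3.0 * M

def f_hayward(r, Q):
    return 1.0 - 2.0 * M * r**2 / (r**3 + Q**3)

def critical_orbit(Q, h=1e-6):
    fp = lambda r: (f_hayward(r + h, Q) - f_hayward(r - h, Q)) / (2.0 * h)
    g = lambda r: 2.0 * f_hayward(r, Q) - r * fp(r)   # condition for r_c
    r_c = brentq(g, 1.4 * M, 6.0 * M)                 # bracket of the light ring
    b_c = r_c / np.sqrt(f_hayward(r_c, Q))
    return r_c, b_c

for alpha in (0.0, 0.5, 1.0):
    r_c, b_c = critical_orbit(alpha * Q_ext)
    print(f"alpha = {alpha}: r_c = {r_c:.4f} M, b_c = {b_c:.4f} M, "
          f"sigma_gcs = {np.pi * b_c**2:.4f} M^2")

# alpha = 0 recovers Schwarzschild: r_c = 3 M and b_c = 3 sqrt(3) M ~ 5.196 M.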
At high energies, the absorption cross section (ACS) can be described by a formula known as the sinc approximation, which involves the GCS and features of the unstable null geodesics, and is given by <cit.> σ_hf ≈ σ_gcs[1 - 8π b_c Λ e^(-π b_c Λ) sinc(2π b_c ω)], where sinc(x) ≡ sin(x)/x and Λ is the Lyapunov exponent related to circular null geodesics <cit.>, namely Λ = √((L^2_c/2ṫ^2)(d^2V(r)/dr^2)|_r = r_c).

§.§ Deflection angle in the weak-field limit By using the geodesic method, we can obtain an expression for the deflection angle and the classical differential SCS in the weak-field limit. The turning point r_0, defined as the radius of closest approach of the (massless) particle for a given value of b, satisfies the condition 𝒰(r)|_r = r_0 = 0, where 𝒰(r) ≡ (dr/dφ)^2 = r^4(1/b^2 - f(r)/r^2). Thus, the deflection angle of the scattered massless particle can be written as <cit.> Θ(b) = 2∫_r_0^∞ dr/√(𝒰(r)) - π. We can obtain an analytic expression for the deflection angle in the weak-field limit by expanding the integrand of Eq. (<ref>) in powers of 1/r. The radius r_0 as a function of b is obtained by solving Eq. (<ref>) and expanding the results in powers of 2M/b. Following these steps, we find the weak deflection angles of massless particles in the background of Hayward and RN spacetimes, which are given by Θ(b)_H = 4M/b + 15πM^2/(4b^2) + 𝒪(1/b^3), Θ(b)_RN = 4M/b + 3π(5M^2-Q^2)/(4b^2) + 𝒪(1/b^3), respectively [From now on, we will abbreviate Hayward to H in equations and figures, whenever convenient.]. We see that the charge contributions do not modify the dominant term. Moreover, it can be shown that the charge contributions in the Hayward case will appear only at orders higher than 1/b^3, due to the asymptotic behavior of the Hayward geometry [cf. Eq. (<ref>)]. The classical differential SCS is given by <cit.> dσ_cl/dΩ = (b/sinθ)|db/dΘ|, where θ is the scattering angle, which is related to the deflection angle by Θ = θ - 2nπ, with n ∈ ℤ^+ being the number of times that the massless particle orbits the BH before being scattered to infinity. The classical SCS may be obtained by inverting Eq. (<ref>) and inserting b(Θ) into Eq. (<ref>). In the weak-field limit, we can use Eqs. (<ref>)-(<ref>) to obtain the classical differential SCS for small scattering angles, which can be expressed as dσ_cl^H/dΩ = 16M^2/Θ^4 + 15πM^2/(4Θ^3) + 𝒪(1/Θ^2), dσ_cl^RN/dΩ = 16M^2/Θ^4 + 3π(5M^2-Q^2)/(4Θ^3) + 𝒪(1/Θ^2). Similarly to the weak deflection angle, the BH charge does not affect the leading term of the classical differential SCS.

§.§ Semiclassical glory The semiclassical glory approximation can be used to unveil some wave scattering properties near the backward direction, i.e., θ = π. In the background of a static and spherically symmetric BH geometry, the glory approximation for scalar waves can be written as <cit.> dσ_g/dΩ = 2πω b_g^2 |db/dθ|_(θ = π) J_0^2(ω b_g sinθ), where ω is the frequency of the scalar wave, b_g is the impact parameter of backscattered rays, and J_0 is the Bessel function of the first kind. Note that there exist numerous values of b_g, corresponding to multiple values of θ = Θ + 2πn, that result in backscattered null rays. The contributions to the glory scattering come from all the rays scattered near θ ≈ 180^∘. We know that the main contributions to the glory scattering are provided by the mode n = 0 <cit.>. Therefore, we consider only n = 0 in the computation of the glory approximation.

§ PARTIAL-WAVE ANALYSIS In this section, we present the equation that governs the propagation of neutral massless test scalar fields in the background of the setup introduced in Sec.
<ref>. We also exhibit the differential SCS and total ACS of massless scalar waves in the background of spherically symmetric BHs.

§.§ Massless scalar field The Klein-Gordon equation that governs the propagation of the neutral massless test scalar field Φ in the background of curved spacetimes reads (1/√(-g))∂_μ(√(-g)g^μν∂_νΦ) = 0. Within spherical symmetry, we can decompose Φ as Φ = (1/r)∑_(l=0)^∞ C_ωl Ψ_ωl(r)P_l(cosθ)e^(-iωt), where C_ωl are coefficients, with ω and l being the frequency and angular momentum of the scalar field, respectively. The function P_l is the Legendre polynomial and Ψ_ωl is the radial function. Using the tortoise coordinate r_⋆, defined as f(r)dr_⋆ = dr, we can show that Ψ_ωl satisfies d^2Ψ_ωl/dr_⋆^2 + (ω^2 - V_eff(r))Ψ_ωl = 0, where the effective potential V_eff(r) is V_eff(r) = f(r)((1/r)df(r)/dr + l(l+1)/r^2). In Fig. <ref>, we display the effective potential in the Hayward spacetime for distinct values of α and l. Notice that for l = 0, the peak of the effective potential decreases as we increase the α values. However, for l ≥ 1, the peak presents the opposite behavior. The behavior of the effective potential for l = 0 is remarkably different from that of the well-known (regular) BH solutions. In Fig. <ref>, we compare the effective potentials in Hayward and RN spacetimes for fixed values of α and l, showing that, outside the event horizon, they typically satisfy V_eff^RN > V_eff^H. The solutions of the Klein-Gordon equation consistent with the absorption/scattering problem are given by Ψ_ωl ∼ T_ωl e^(-iωr_⋆) as r_⋆ → -∞ (r → r_+), and Ψ_ωl ∼ e^(-iωr_⋆) + R_ωl e^(iωr_⋆) as r_⋆ → ∞ (r → ∞), where T_ωl and R_ωl are complex coefficients, which satisfy |R_ωl|^2 + |T_ωl|^2 = 1.

§.§ Absorption and scattering cross sections It is usual to obtain an expression for the total ACS σ as a sum of partial-wave contributions σ_l. For that purpose, we expand the scalar field as a sum of asymptotic plane waves and fix C_ωl with appropriate boundary conditions <cit.>, resulting in σ = ∑_(l=0)^∞ σ_l, where σ_l is given by σ_l = (π/ω^2)(2l+1)(1 - |R_ωl|^2). In turn, the differential SCS for static and spherically symmetric spacetimes can be written as <cit.> dσ/dΩ = |h(θ)|^2, where h(θ) is the scattering amplitude given by h(θ) = (1/2iω)∑_(l=0)^∞ (2l+1)[e^(2iδ_l(ω)) - 1]P_l(cosθ), with the phase shifts e^(2iδ_l(ω)) being defined as e^(2iδ_l(ω)) ≡ (-1)^(l+1)R_ωl.

§ RESULTS In this section, we present a selection of our results concerning the absorption and scattering cross sections of massless test scalar waves in the background of Hayward spacetimes. We also compare our numerical results for the Hayward geometry with those obtained in the RN case.

§.§ Numerical method We numerically solve Eq. (<ref>) from very close to the event horizon, i.e., r_initial = 1.001r_+, to a region very far from the BH, typically chosen as r_∞ = 10^3M. We then match the numerical solutions with the appropriate boundary conditions given by Eq. (<ref>) and compute the reflection coefficient. Moreover, to calculate the absorption and scattering cross sections, we need to perform sums over the angular momentum of the scalar wave. For the absorption case, we typically truncate the sum at l = 6, while for the scattering case, we consider l = 20. Furthermore, the differential SCS has poor convergence for small values of the scattering angle. We improve the series convergence in this limit using the numerical method developed in Refs. <cit.>. In Fig. <ref>, we compare our numerical results for the total ACS of massless scalar waves in the Hayward spacetime with some approximations.
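A simplified sketch of this matching procedure (illustrative only, not the code behind the figures; the tolerances and finite-difference derivative are ad hoc) integrates the radial equation outwards with a purely ingoing wave at the horizon and reads off |R_ωl|^2 from the asymptotic form of Ψ_ωl:

# Sketch: reflection coefficient |R_{omega l}|^2 for the Hayward potential,
# integrating psi'' (in r*) + (omega^2 - V_eff) psi = 0 with d/dr* = f d/dr.
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0

def reflection(omega, ell, Q, r_plus, r_inf=1.0e3):
    f  = lambda r: 1.0 - 2.0 * M * r**2 / (r**3 + Q**3)
    h  = 1e-6
    fp = lambda r: (f(r + h) - f(r - h)) / (2.0 * h)
    V  = lambda r: f(r) * (fp(r) / r + ell * (ell + 1) / r**2)

    def rhs(r, y):                        # y = (Re psi, Im psi, Re v, Im v)
        ur, ui, vr, vi = y                # with v = d psi / d r* = f d psi / d r
        fr = f(r)
        w = V(r) - omega**2
        return [vr / fr, vi / fr, w * ur / fr, w * ui / fr]

    y0 = [1.0, 0.0, 0.0, -omega]          # psi = e^{-i omega r*}, r*(r0) := 0
    sol = solve_ivp(rhs, (1.001 * r_plus, r_inf), y0, rtol=1e-10, atol=1e-10)
    u = sol.y[0, -1] + 1j * sol.y[1, -1]
    v = sol.y[2, -1] + 1j * sol.y[3, -1]
    a_in  = 0.5 * (u + 1j * v / omega)    # amplitude of e^{-i omega r*}
    a_out = 0.5 * (u - 1j * v / omega)    # amplitude of e^{+i omega r*}
    return abs(a_out / a_in)**2           # |R_{omega l}|^2

The total ACS then follows by summing σ_l = (π/ω^2)(2l+1)(1 - |R_ωl|^2) over l, truncated as described above.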
We can see that in the low-frequency regime, the total ACS tends to the event horizon area, namely σ_lf = 4πr_+^2, as expected <cit.>. Moreover, in the high-frequency regime, our numerical results oscillate around the GCS [cf. Eq. (<ref>)] and the oscillatory pattern is well described by the sinc approximation [cf. Eq. (<ref>)], even for moderate frequency values. Analogously, in Fig. <ref>, we compare our numerical results for the differential SCS of massless scalar waves in the Hayward spacetime with some approximations. We can observe that the differential SCS oscillates around the classical differential SCS [cf. Eq. (<ref>)] and the oscillatory pattern is well described by the glory approximation [cf. Eq. (<ref>)] near the backward direction. We have, therefore, obtained excellent agreement between our numerical results and the approximate analytical ones.

§.§ Massless scalar absorption In Fig. <ref>, we show the partial and total ACSs of scalar waves in Hayward spacetimes. As we can see, the total ACS typically decreases as we increase the BH charge. However, for α^H ≳ 0.9658, the first peak of the total ACS can be larger than in the Schwarzschild case (α = 0). This feature can be related to the effective potential. As discussed in Sec. <ref>, the height of the potential barrier decreases as we increase the values of α for l = 0. Therefore, massless scalar waves with l = 0 are more absorbed in the background of highly charged Hayward spacetimes, in contrast to what happens for l ≥ 1. In Fig. <ref>, we compare the total ACSs of massless scalar waves in Hayward and RN spacetimes. As we can see, for the same value of the normalized charge, the total ACS in the Hayward RBH is typically larger than the corresponding one in the RN case. This result is consistent with the analysis presented in Sec. <ref>, since the effective potential of the RN BH is always greater than that of the Hayward RBH for the same values of α.

§.§ Massless scalar scattering In Fig. <ref>, we show a selection of our results for the scalar differential SCS in Hayward spacetimes. We can observe that the interference fringe widths get wider as we increase the BH charge, in agreement with the glory approximation <cit.>. We also notice that, for small scattering angles, the contributions of the BH charge are negligible, as established by Eq. (<ref>). A comparison between the scattering spectra of Hayward and RN BHs is presented in Fig. <ref>. For the same α values, the interference fringe widths for the RN spacetime are typically larger than those in the corresponding Hayward case.

§.§ Mimicking standard BHs We have also searched for situations in which the absorption and scattering spectra of Hayward and RN BHs can be very similar. Regarding configurations for which the ACSs are similar, we seek the values of α that satisfy the following condition: b_c^H = b_c^RN. On the other hand, in the scattering case, we consider the values of the normalized charges for which the impact parameter of backscattered light rays matches, i.e., b_g^H = b_g^RN. A situation for which the ACSs of Hayward and RN BHs basically coincide is exhibited in the top panel of Fig. <ref>. Indeed, we can find values of the charge for which the total ACSs can be very similar in the whole frequency range, as long as we consider low-to-moderate values of the normalized charges. We exhibit, in the top panel of Fig. <ref>, a situation for which the SCSs of Hayward and RN BHs basically coincide. In Fig.
<ref>, we show the comparison between the differential SCSs in Hayward and RN spacetimes. For low-to-moderate values of the normalized charges, we can find configurations for which the SCSs are very similar. As we increase the BH charge, keeping b_g^H = b_g^RN, the oscillatory profile remains similar, but the differences become more evident.

§ FINAL REMARKS We have investigated the absorption and scattering spectra of massless scalar fields in the background of the Hayward RBH solution. We have compared our numerical results, obtained for arbitrary values of the frequency and scattering angle of the scalar wave, with some analytical approximations, showing that they are in excellent agreement. We have noticed that the interference fringe widths get wider as we increase the RBH charge or decrease the frequency. Concerning the absorption properties, we have obtained that, although the total ACS typically decreases as we enhance the RBH charge, the first peak (local maximum related to the monopole mode, l = 0) of the ACS in the Hayward spacetime can be larger than in the Schwarzschild case, for α^H ≳ 0.9658, while the local maxima related to l > 0 are smaller than those of Schwarzschild, for any (positive) value of the magnetic charge. This distinctive feature may be related to the behavior of the effective potential. We have also noticed that it is possible to find configurations for which the absorption and scattering of massless scalar waves in Hayward and RN geometries are very similar. These similarities can be found for general values of the frequency and scattering angle of the scalar wave, but are constrained to low-to-moderate values of the BH charges. Our results reinforce that regular and singular BHs can have very similar absorption and scattering properties under certain circumstances, but we might be able to distinguish between them in other scenarios <cit.>. It is worth mentioning that recently the absorption and scattering properties of massless test scalar fields in Hayward RBH spacetimes were addressed in Ref. <cit.>, but the results obtained there were not sound <cit.>. Here we have provided the correct results. We are grateful to Fundação Amazônia de Amparo a Estudos e Pesquisas (FAPESPA), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) – Finance Code 001, from Brazil, for partial financial support. MP and LC thank the University of Sheffield, in England, and the University of Aveiro, in Portugal, respectively, for the kind hospitality. LL would like to acknowledge IFPA – Campus Altamira for the support. This work has further been supported by the European Union's Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017 Grant No. FunFiCO-777740 and by the European Horizon Europe staff exchange (SE) programme HORIZON-MSCA-2021-SE-01 Grant No. NewFunFiCO-101086251.

AAA2016 B. P. Abbott et al. [LIGO Scientific Collaboration and Virgo Collaboration], Observation of Gravitational Waves from a Binary Black Hole Merger, https://doi.org/10.1103/PhysRevLett.116.061102 Phys. Rev. Lett. 116, 061102 (2016).
AAA2019L1 K. Akiyama et al. [The Event Horizon Telescope Collaboration], First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole, https://doi.org/10.3847/2041-8213/ab0ec7 ApJL 875, L1 (2019).
AAA2022L12 K. Akiyama et al. [The Event Horizon Telescope Collaboration], First Sagittarius A* Event Horizon Telescope Results. I.
The Shadow of the Supermassive Black Hole in the Center of the Milky Way, https://iopscience.iop.org/article/10.3847/2041-8213/ac6674/meta ApJL 930, L12 (2022).
GA2023 G. Agazie et al. [NANOGrav Collaboration], The NANOGrav 15 yr Data Set: Evidence for a Gravitational-wave Background, https://doi.org/10.3847/2041-8213/acdac6 ApJL 951, L8 (2023).
CPB2004 C. P. Burgess, Quantum Gravity in Everyday Life: General Relativity as an Effective Field Theory, https://doi.org/10.12942/lrr-2004-5 Living Rev. Relativ. 7, 5 (2004).
B1968 J. Bardeen, Non-singular General Relativistic Gravitational Collapse, in Proceedings of the International Conference GR5 (Tbilisi, Georgia, U.S.S.R., 1968), p. 174.
D1992-2 I. Dymnikova, Vacuum nonsingular black hole, https://doi.org/10.1007/BF00760226 Gen. Relativ. Gravit. 24, 235 (1992).
B1994 A. Borde, Open and closed universes, initial singularities, and inflation, https://link.aps.org/doi/10.1103/PhysRevD.50.3692 Phys. Rev. D 50, 3692 (1994).
ABG1998 E. Ayón-Beato and A. García, Regular Black Hole in General Relativity Coupled to Nonlinear Electrodynamics, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.80.5056 Phys. Rev. Lett. 80, 5056 (1998).
D2004 I. Dymnikova, Regular electrically charged vacuum structures with de Sitter centre in nonlinear electrodynamics coupled to general relativity, https://doi.org/10.1088/0264-9381/21/18/009 Class. Quantum Grav. 21, 4417 (2004).
H2006 S. A. Hayward, Formation and Evaporation of Nonsingular Black Holes, https://link.aps.org/doi/10.1103/PhysRevLett.96.031103 Phys. Rev. Lett. 96, 031103 (2006).
BV2014 L. Balart and E. C. Vagenas, Regular black holes with a nonlinear electrodynamics source, https://link.aps.org/doi/10.1103/PhysRevD.90.124045 Phys. Rev. D 90, 124045 (2014).
C2015 H. Culetu, On a Regular Charged Black Hole with a Nonlinear Electric Source, https://doi.org/10.1007/s10773-015-2521-6 Int. J. Theor. Phys. 54, 2855 (2015).
SV2019 A. Simpson and M. Visser, Regular black holes with asymptotically Minkowski cores, https://doi.org/10.3390/universe6010008 Universe 6, 8 (2019).
A2008 S. Ansoldi, Spherical black holes with regular center: a review of existing models including a recent realization with Gaussian sources, https://arxiv.org/abs/0802.0330v1 arXiv:0802.0330 [gr-qc].
DPS2021 D. P. Sorokin, Introductory Notes on Non-linear Electrodynamics and its Applications, https://arxiv.org/abs/2112.12118 Fortsch. Phys. 70, 2200092 (2022).
SZ2022 L. Sebastiani and S. Zerbini, Some Remarks on Non-Singular Spherically Symmetric Space-Times, https://doi.org/10.3390/astronomy1020010 Astronomy 1, 99 (2022).
CL2023 C. Lan, H. Yang, Y. Guo, and Y.-G. Miao, Regular black holes: A short topic review, https://doi.org/10.1007/s10773-023-05454-1 Int. J. Theor. Phys. 62, 202 (2023).
BKS2021 A. Bonanno, A.-P. Khosravi, and F. Saueressig, Regular black holes with stable cores, https://doi.org/10.1103/PhysRevD.103.124027 Phys. Rev. D 103, 124027 (2021).
FW2016 Z.-Y. Fan and X. Wang, Construction of regular black holes in general relativity, https://link.aps.org/doi/10.1103/PhysRevD.94.124027 Phys. Rev. D 94, 124027 (2016).
F2017 Z.-Y. Fan, Critical phenomena of regular black holes in anti-de Sitter space-time, https://doi.org/10.1140/epjc/s10052-017-4830-9 Eur. Phys. J. C 77, 266 (2017).
TSA2018 B. Toshmatov, Z. Stuchlík, and B. Ahmedov, Comment on “Construction of regular black holes in general relativity”, https://doi.org/10.1103/PhysRevD.98.028501 Phys. Rev. D 98, 028501 (2018).
MA2018 S. H. Mehdipour and M. H.
Ahmadi, Black hole remnants in Hayward solutions and noncommutative effects, https://doi.org/10.1016/j.nuclphysb.2017.09.021 Nucl. Phys. B 926, 49 (2018).
C2019 S. M. Carroll, Spacetime and Geometry: An Introduction to General Relativity (Cambridge University Press, Cambridge, 2019).
FHM1988 J. A. Futterman, F. A. Handler, and R. A. Matzner, Scattering from Black Holes (Cambridge University Press, Cambridge, England, 1988).
DDL2006 S. Dolan, C. Doran, and A. Lasenby, Fermion scattering by a Schwarzschild black hole, https://link.aps.org/doi/10.1103/PhysRevD.74.064005 Phys. Rev. D 74, 064005 (2006).
COM2007 L. C. B. Crispino, E. S. Oliveira, and G. E. A. Matsas, Absorption cross section of canonical acoustic holes, http://dx.doi.org/10.1103/PhysRevD.76.107502 Phys. Rev. D 76, 107502 (2007).
CDE2009 L. C. B. Crispino, S. R. Dolan, and E. S. Oliveira, Scattering of massless scalar waves by Reissner-Nordström black holes, https://link.aps.org/doi/10.1103/PhysRevD.79.064022 Phys. Rev. D 79, 064022 (2009).
OCH2011 E. S. Oliveira, L. C. B. Crispino, and A. Higuchi, Equality between gravitational and electromagnetic absorption cross sections of extreme Reissner-Nordström black holes, https://doi.org/10.1103/PhysRevD.84.084048 Phys. Rev. D 84, 084048 (2011).
CB2014 C. L. Benone, E. S. de Oliveira, S. R. Dolan, and L. C. B. Crispino, Absorption of a massive scalar field by a charged black hole, https://link.aps.org/doi/10.1103/PhysRevD.89.104053 Phys. Rev. D 89, 104053 (2014).
MC2014 C. F. B. Macedo and L. C. B. Crispino, Absorption of planar massless scalar waves by Bardeen regular black holes, https://link.aps.org/doi/10.1103/PhysRevD.90.064001 Phys. Rev. D 90, 064001 (2014).
CDHO2014 L. C. B. Crispino, S. R. Dolan, A. Higuchi, and E. S. de Oliveira, Inferring black hole charge from backscattered electromagnetic radiation, https://doi.org/10.1103/PhysRevD.90.064027 Phys. Rev. D 90, 064027 (2014).
MOC2015 C. F. B. Macedo, E. S. de Oliveira, and L. C. B. Crispino, Scattering by regular black holes: Planar massless scalar waves impinging upon a Bardeen black hole, https://link.aps.org/doi/10.1103/PhysRevD.92.024012 Phys. Rev. D 92, 024012 (2015).
BC2016 C. L. Benone and L. C. B. Crispino, Superradiance in static black hole spacetimes, https://doi.org/10.1103/PhysRevD.93.024028 Phys. Rev. D 93, 024028 (2016).
S2017 S. Fernando, Bardeen–de Sitter black holes, https://doi.org/10.1142/S0218271817500717 Int. J. Mod. Phys. D 26, 1750071 (2017).
SBP2018 P. A. Sanchez, N. Bretón, and S. E. P. Bergliaffa, Scattering and absorption of massless scalar waves by Born-Infeld black holes, https://doi.org/10.1016/j.aop.2018.04.011 Ann. Phys. 393, 107 (2018).
AD2019 A. Delhom, C. F. B. Macedo, G. J. Olmo, and L. C. B. Crispino, Absorption by black hole remnants in metric-affine gravity, https://link.aps.org/doi/10.1103/PhysRevD.100.024016 Phys. Rev. D 100, 024016 (2019).
JBC2020 H. C. D. L. Junior, C. L. Benone, and L. C. B. Crispino, Scalar absorption: Black holes versus wormholes, https://doi.org/10.1103/PhysRevD.101.124009 Phys. Rev. D 101, 124009 (2020).
MLC2020-2 R. B. Magalhães, L. C. S. Leite, and L. C. B. Crispino, Schwarzschild-like black holes: Light-like trajectories and massless scalar absorption, https://doi.org/10.1140/epjc/s10052-020-7909-7 Eur. Phys. J. C 80, 386 (2020).
PLC2020 M. A. A. Paula, L. C. S. Leite, and L. C. B. Crispino, Electrically charged black holes in linear and non-linear electrodynamics: Geodesic analysis and scalar absorption, https://link.aps.org/doi/10.1103/PhysRevD.102.104033 Phys. Rev.
D 102, 104033 (2020).
PLC2022 M. A. A. de Paula, L. C. S. Leite, and L. C. B. Crispino, Scattering properties of charged black holes in nonlinear and Maxwell's electrodynamics, https://doi.org/10.1140/epjp/s13360-022-02916-z Eur. Phys. J. Plus 137, 785 (2022).
JBC2022 H. C. D. L. Junior, C. L. Benone, and L. C. B. Crispino, Scalar scattering by black holes and wormholes, https://doi.org/10.1140/epjc/s10052-022-10576-7 Eur. Phys. J. C 82, 638 (2022).
PLC2023c M. A. A. de Paula, L. C. S. Leite, and L. C. B. Crispino, Massless scalar scattering by a charged regular black hole, https://onlinelibrary.wiley.com/doi/full/10.1002/asna.20220115 Astron. Nachr. 344, e220115 (2023).
MLC2020-3 R. B. Magalhães, L. C. S. Leite, and L. C. B. Crispino, Parametrized black holes: scattering investigation, https://doi.org/10.1140/epjc/s10052-022-10612-6 Eur. Phys. J. C 82, 698 (2022).
SX2023 S. V. M. C. B. Xavier, C. L. Benone, L. C. S. Leite, and L. C. B. Crispino, Scattering by stringy black holes, https://link.aps.org/doi/10.1103/PhysRevD.108.084060 Phys. Rev. D 108, 084060 (2023).
BR2013 K. A. Bronnikov and S. G. Rubin, Black Holes, Cosmology and Extra Dimensions (World Scientific, Singapore, 2013).
ZM2023 T. Zhou and L. Modesto, Geodesic incompleteness of some popular regular black holes, https://link.aps.org/doi/10.1103/PhysRevD.107.044016 Phys. Rev. D 107, 044016 (2023).
ABG2000 E. Ayón-Beato and A. García, The Bardeen model as a nonlinear magnetic monopole, https://doi.org/10.1016/S0370-2693(00)01125-4 Phys. Lett. B 493, 149 (2000).
P1970 J. F. Plebański, Lectures on Non-linear Electrodynamics (NORDITA, Copenhagen, Denmark, 1970).
GDP1981 S. A. Gutiérrez, A. L. Dudley, and J. F. Plebański, Signals and discontinuities in general relativistic nonlinear electrodynamics, http://dx.doi.org/10.1063/1.524874 J. Math. Phys. 22, 2835 (1981).
MN2000 M. Novello, V. A. De Lorenci, J. M. Salim, and R. Klippert, Geometrical aspects of light propagation in nonlinear electrodynamics, https://link.aps.org/doi/10.1103/PhysRevD.61.045001 Phys. Rev. D 61, 045001 (2000).
W1984 R. Wald, General Relativity (University of Chicago Press, Chicago, United States, 1984).
CH2018 P. V. P. Cunha and C. A. R. Herdeiro, Shadows and strong gravitational lensing: a brief review, http://dx.doi.org/10.1007/s10714-018-2361-9 Gen. Relativ. Gravit. 50, 42 (2018).
MP2023 M. A. A. de Paula, H. C. D. Lima Junior, P. V. P. Cunha, and L. C. B. Crispino, Electrically charged regular black holes in nonlinear electrodynamics: light rings, shadows and gravitational lensing, https://link.aps.org/doi/10.1103/PhysRevD.108.084029 Phys. Rev. D 108, 084029 (2023).
S1978 N. Sanchez, Absorption and emission spectra of a Schwarzschild black hole, https://link.aps.org/doi/10.1103/PhysRevD.18.1030 Phys. Rev. D 18, 1030 (1978).
DEF2011 Y. Décanini, G. Esposito-Farèse, and A. Folacci, Universality of high-energy absorption cross sections for black holes, https://link.aps.org/doi/10.1103/PhysRevD.83.044032 Phys. Rev. D 83, 044032 (2011).
VC2009 V. Cardoso, A. S. Miranda, E. Berti, H. Witek, and V. T. Zanchin, Geodesic Stability, Lyapunov Exponents and Quasinormal Modes, https://link.aps.org/doi/10.1103/PhysRevD.79.064016 Phys. Rev. D 79, 064016 (2009).
N2013 R. G. Newton, Scattering Theory of Waves and Particles (Dover Publications, New York, United States, 2013).
RAM1985 R. A. Matzner, C. DeWitt-Morette, B. Nelson, and T.-R. Zhang, Glory scattering by black holes, https://link.aps.org/doi/10.1103/PhysRevD.31.1869 Phys. Rev. D 31, 1869 (1985).
U1976 W.
Unruh, Absorption Cross Section of Small Black Holes, https://link.aps.org/doi/10.1103/PhysRevD.14.3251 Phys. Rev. D 14, 3251 (1976).
YRW1954 D. R. Yennie, D. G. Ravenhall, and R. N. Wilson, Phase-Shift Calculation of High-Energy Electron Scattering, https://link.aps.org/doi/10.1103/PhysRev.95.500 Phys. Rev. 95, 500 (1954).
DGM1997 S. R. Das, G. Gibbons, and S. D. Mathur, Universality of Low Energy Absorption Cross Sections for Black Holes, https://link.aps.org/doi/10.1103/PhysRevLett.78.417 Phys. Rev. Lett. 78, 417 (1997).
H2001 A. Higuchi, Low-frequency Scalar Absorption Cross Sections for Stationary Black Holes, https://doi.org/10.1088/0264-9381/18/20/102 Class. Quantum Grav. 18, L139-L144 (2001); Addendum, https://iopscience.iop.org/article/10.1088/0264-9381/19/3/401 Class. Quantum Grav. 19, 599(A) (2002).
JRV2023 E. L. B. Junior, M. E. Rodrigues, and H. A. Vieira, Is it possible to distinguish between different black hole solutions using the Shapiro time delay?, https://doi.org/10.1140/epjc/s10052-023-11520-z Eur. Phys. J. C 81, 409 (2023).
WW2022 M.-Y. Wan and C. Wu, Absorption and scattering of massless scalar wave from Regular Black Holes, https://doi.org/10.1007/s10714-022-03034-y Gen. Relativ. Gravit. 54, 148 (2022).
PLC2023b M. A. A. de Paula, L. C. dos Santos Leite, and L. C. B. Crispino, Comment on: “Absorption and scattering of massless scalar wave from Regular Black Holes”, https://doi.org/10.1007/s10714-023-03122-7 Gen. Relativ. Gravit. 55, 73 (2023). | http://arxiv.org/abs/2311.15771v1 | {
"authors": [
"Marco A. A. de Paula",
"Luiz C. S. Leite",
"Luís C. B. Crispino"
],
"categories": [
"gr-qc"
],
"primary_category": "gr-qc",
"published": "20231127124357",
"title": "Geodesic analysis, absorption and scattering in the static Hayward spacetime"
} |
[email protected] of Computing and Information Technology, Arab Academy for Science, Technology, and Maritime Transport (AASTMT) Aswan Egypt81516 [email protected] 0000-0003-1047-2143atlanTTic - I & C Lab - Universidade de Vigo Vigo Spain [email protected] 0000-0002-5088-0881atlanTTic - I & C Lab - Universidade de Vigo Vigo Spain [email protected] 0000-0003-0140-1618 College of Computing and Information Technology, Arab Academy for Science, Technology, and Maritime Transport (AASTMT)AlexandriaEgypt [email protected] 0000-0001-6553-4159College of Computing and Information Technology, Arab Academy for Science, Technology, and Maritime Transport (AASTMT) Aswan Egypt81516Nowadays, the ubiquitous usage of mobile devices and networks have raised concerns about the loss of control over personal data and research advance towards the trade-off between privacy and utility in scenarios that combine exchange communications, big databases and distributed and collaborative (P2P) Machine Learning techniques. On the other hand, although Federated Learning (FL) provides some level of privacy by retaining the data at the local node, which executes a local training to enrich a global model, this scenario is still susceptible to privacy breaches as membership inference attacks. To provide a stronger level of privacy, this research deploys an experimental environment for FL with Differential Privacy (DP) using benchmark datasets. The obtained results show that the election of parameters and techniques of DP is central in the aforementioned trade-off between privacy and utility by means of a classification example. Using Decentralized Aggregation for Federated Learning with Differential Privacy Nashwa El-Bendary January 14, 2024 ================================================================================§ INTRODUCTIONFederated Learning (FL) offers a useful paradigm for training a MachineLearning (ML)model from data distributed across multiple datasilos,eliminating the need for rawdata sharing as it has the ambition toprotect data privacy through distributed learning methods thatkeep the data local. In simple terms, with FL, itis not the data that movesto a model, but it is a model that moves todata, which means that training is happeningfrom user interaction with enddevices. Federated Learning's key motivation is to provide privacy protection as well as there has recently been someresearch into combining theformal privacy notion ofDifferentialPrivacy (DP) with FL. Instead of researching on the traditional FL model, we consider working on a peer-to-peerapproach to FL or decentralized FL framework, where any node can play the role of modelprovider and model aggregator at different times. That is being applied depending on variousaspects related to the communication infrastructure and the distribution of the data. For example, considering a traditional FL model where the aggregator is unavailable for some period.In a peer-to-peer approach, theaggregator role can be played by a different node. Also, it canbe decided from time to time that the aggregator node is the one withmore connections in the network, or whatever other criteria thatallows the learning process to be improved. So, everynode in thenetwork has the capability of being a model provider and an aggregator node, and itplays different roles in different rounds ofthe learning process according to the state of the infrastructure. 
However, under this scenario, where the communication infrastructure cannot be considered reliable and secure, additional privacy measures are required. In a decentralized FL system, while the data is kept local and not exchanged with peers, the model parameters could be observed by malicious or curious agents and be used to infer knowledge about the datasets or the aggregated model. To tackle this problem, a DP mechanism <cit.> can be introduced into the distributed learning environment, so that inference attacks are harder. However, as DP adds controlled noise to the information exchanged by the nodes, it might affect the learning rate and accuracy of the algorithm. This trade-off has been considered in some works for classical (centralized, single-server) FL systems <cit.>. In this paper, we study experimentally the interplay between DP and the learning performance in a distributed FL system. The remainder of this paper is organized as follows. In Section <ref>, a review of the literature related to our work is presented. Section <ref> starts by introducing a background on DP and distributed FL methods; then the framework of the proposed system is described. The implementation details and experimental setup are illustrated in Section <ref>. The obtained numerical results are presented in Section <ref>. Section <ref> discusses the attained observations and provides several insights.

§ RELATED WORK In the scientific literature, a number of studies have discussed distributed ML, FL, and privacy-preserving machine learning. Reviewing the applications, benefits, and drawbacks of the existing literature, we found that FL is a relatively new topic with limited published literature and articles, but much research has already been conducted in the areas of privacy-preserving learning and distributed machine learning algorithms, i.e., traditional centralized Federated Learning, in which the server role is fixed until convergence. In <cit.>, the authors compared two different variants of clipping for FedAvg: clipping the client model vs. clipping the client model difference (CE-FedAvg), considering that not all the clients participate in each round of communication. They conducted experiments for FedAvg, CE-FedAvg and DP-FedAvg with different models, namely MLP, AlexNet, ResNet and MobileNetV2, on two different benchmark datasets, EMNIST and CIFAR-10, for classification, and with local data distributions that fall into two settings: 1) the IID data setting, where the samples are uniformly distributed to each client; 2) the non-IID data setting, where the clients have unbalanced samples. The results showed that the performance depends on the structure of the neural network being used, and that the heterogeneous data distribution among clients is one of the main causes of the different behavior between the clipped and unclipped variants. The authors in <cit.> proposed algorithms based on differentially private SGD (DP-SGD) that add Gaussian noise to each computed gradient and then clip the noised gradient (NC), which differs from the conventional method in the order of clipping the gradient and adding noise (CA). The experimental settings to verify the performance consider different factors, such as the training of two popular deep learning models (CNN and LSTM) on three different datasets, namely MNIST, CIFAR-10 and SVHN, respectively, and the adoption of two gradient descent optimization methods (SGD and Adam) for evaluation, with the inclusion of Gaussian noise and clipping.
This work also proposes a new privacy protection metric, called "Total Parameters Value Difference" (TPVD), to measure the privacy protection capability and to examine how adding noise in the training process affects the model itself. The results validated the effectiveness of their proposed method (AC), which remarkably improves the accuracy of the model when the other parameter settings are the same; however, the TPVD values of AC are lower than those of CA. The results showed that the proposed modification (AC), i.e., changing the sequence of adding noise and clipping, can achieve higher accuracy and faster convergence, outperforming the conventional method even under different parameter settings, and that the TPVD metric proposed in that paper, as a privacy protection metric for DL models, can better reflect the perturbation effects. In <cit.>, the authors introduced a FL framework whose proposed DP method adds noise to the objective function of the optimization at each site, to produce a minimizer of the perturbed objective. The proposed model is tested on two different datasets for two major tasks: (1) prediction of adverse drug reaction (ADR), using the Limited MarketScan Explorys Claims-EMR Data (LCED); (2) prediction of mortality rate, using the Medical Information Mart for Intensive Care (MIMIC III) data. The results show that although DP offers a strong level of privacy, it deteriorates the predictive capability of the produced global models, due to the excessive amount of noise added during the distributed FL training process. In turn, in <cit.> the authors proposed a novel framework based on the concept of DP, in which artificial noises are added to the parameters at the client side before aggregating, namely, noising before model aggregation FL (NbAFL). The proposed NbAFL was evaluated by using a multi-layer perceptron (MLP) on the MNIST dataset, and the authors found that there is an optimal number of clients K that achieves the best convergence performance at a fixed privacy level. In <cit.>, the authors introduced a FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller, to facilitate the deployment in IoT applications. In that framework, some devices are directly connected to the base station, while others are associated with a certain number of neighboring devices. The main objective was to overcome the challenge of energy limitations or a potentially high transmission delay. In order to overcome the shortcomings of the reviewed literature, the main objective of the approach proposed in this paper is to provide an experimental framework to investigate the interaction between DP and learning performance in a distributed FL system.

§ METHODS AND METHODOLOGY This section introduces the notion of Federated Learning, which guarantees a level of privacy protection through the characteristic of keeping the data in the local node. Although FL keeps the data in the local node, so that some privacy is guaranteed, additional mechanisms are needed in order to protect the model. We are therefore going to apply DP mechanisms to the models which are exchanged among the nodes in the FL scenario, so we also introduce the concept of DP.

§.§ Federated Learning: definition and role of the aggregator node The notion of Federated Learning was first introduced in <cit.>, which demonstrates a new learning context in which a shared model is learned by aggregating locally computed gradient changes, without centralizing data from the different devices.
Federated averaging (FedAvg) is a communication-efficient algorithm for distributed training with a number of clients. As mentioned in <cit.>, its set-up is a system in which multiple clients collaborate to solve machine learning problems, with a central aggregator overseeing the process. This setting decentralizes the training data, ensuring that each device's data is secure. Federated learning is based on two key principles: local computing and model transmission, which mitigates some of the privacy risks and costs associated with standard centralized machine learning methods. The client's original data is kept on site and cannot be transferred or traded. Each device uses local data for local training, then uploads the model to the server for aggregation, and finally, the server transmits the model update to the participants to achieve the learning goal. Formally <cit.>, the server aggregates the weights sent from the N clients (FedAvg) as 𝐰 = ∑_i = 1^N p_i 𝐰_i, where 𝐰_i is the parameter vector trained at the i-th client, 𝐰 is the parameter vector after aggregating at the server, N is the number of clients, and p_i = |𝒟_i| / |𝒟|, with 𝒟_i the dataset of node i and 𝒟 = ∪_i 𝒟_i the whole distributed dataset. The server solves the optimization problem 𝐰^∗ = arg min_𝐰 ∑_i = 1^N p_i F_i(𝐰, 𝒟_i), where F_i is the local loss function of the i-th client. Generally, the local loss function is given by local empirical risks. The training process of such a FL system usually contains the following four steps: 1) Local training: all active clients locally compute training gradients or parameters and send locally trained ML parameters to the server; 2) Model aggregating: the server performs secure aggregation over the uploaded parameters from the N clients without learning local information; 3) Parameters broadcasting: the server broadcasts the aggregated parameters to the N clients; 4) Model updating: all clients update their respective models with the aggregated parameters. In order to prevent information leakage from the local model parameters circulated over the network, which are vulnerable to inference attacks, a natural approach to defining privacy for those models will be used, namely Differential Privacy, and this is discussed in the next Section.

§.§ Differential Privacy A Differential Privacy mechanism M satisfies (ϵ, δ)-DP for two non-negative numbers ϵ and δ, if the following inequality holds: ℙ( M(𝒟) ∈ S ) ≤ e^ϵ ℙ( M(𝒟^') ∈ S ) + δ, where 𝒟 and 𝒟^' are neighboring datasets under the Hamming distance, and S is an arbitrary subset of outputs of M. Intuitively speaking, the number δ represents the probability that a mechanism's output varies by more than a multiplicative factor e^ϵ when applied to a dataset and any one of its close neighbors. A lower value of δ signifies greater confidence, and a smaller value of ϵ tightens the standard for privacy protection <cit.>. Typical mechanisms for M include the perturbation of the dataset values with Laplacian, Exponential, or Gaussian noise. In order for the perturbation mechanisms to have formal privacy guarantees, the noise added to the local model updates of each provider must be calibrated to a bounded update norm; moreover, unbounded updates may lead to the exploding gradients problem, which refers to large increases in the norm of the gradient during training. This requires a clipping operation, which is discussed in the next Section.

§.§ Clipping and Bounded Norm Operation As discussed in <cit.>, clipping is a crucial step in ensuring the DP of FL algorithms.
So, each provider's/client's model update needs to have a bounded norm, which is ensured by applying an operation that shrinks individual model updates when their norm exceeds a given threshold. To create FL algorithms that protect DP, it is important to know how clipping impacts a FL algorithm's convergence performance. There are two major clipping strategies used for FL algorithms: one is local model clipping, which consists in the clients directly clipping the models sent to the server; the other is difference clipping, where the local update difference between the initial model and the output model is clipped.

§.§ Proposed Framework of P2P FL with DP In this Section we define the peer-to-peer framework that we work on, how DP is introduced in some of the nodes, and finally how we assess the results by measuring some metrics. We consider a FL problem in a decentralized setting as shown in Fig. <ref>, in which a set V = {1, …, K} of nodes can only communicate with their respective neighbors. Each node i ∈ V stores a subset 𝒟_i of samples of a common unknown distribution 𝒟 from which we are interested in learning. The connectivity is characterized by an undirected graph G = (V, E), with V denoting the set of nodes and E ⊆ {(i, j) ∈ V × V: i ≠ j} the set of edges. The set of neighbors of node i is denoted as N_i = {j ∈ V: (i, j) ∈ E}. Each node has available a local data set 𝒟_i, and all devices collaboratively train a machine learning model by exchanging model-related information without directly disclosing data samples to one another. Each node can play the role of aggregator of models or provider of a model. * When a node N is an aggregator, it receives the model from all its neighbors and obtains an aggregated model. * When a node N is a provider, it sends its model parameters to the aggregator/aggregators. The peer-to-peer FL scenario works in rounds, which are repeated until the learning process converges. A round is defined in the graph by identifying the node which is the aggregator. The aggregator broadcasts the global model parameters to its neighbor providers. Then the sampled clients perform local training (e.g., with an SGD optimizer) and compute their updates. To introduce Differential Privacy in the previous P2P FL model, a new graph is considered as follows: an undirected differentially private graph DPG(V, E, 𝖣𝖯), with 𝖣𝖯 denoting the application of DP at a node i, assuming that 𝖣𝖯(i) ∈ {0, 1} represents whether the node applies Differential Privacy when obtaining/updating the local model. Also, the round process is updated, so that the perturbation mechanisms (DP) are added to the local updates for a subset of nodes. In order for the perturbation mechanism to have formal privacy guarantees, each local update needs to have a bounded norm, which is ensured by applying a clipping operation that reduces the clients' individual model updates when their norm exceeds a given threshold.
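A minimal sketch of one such DP round in plain PyTorch (illustrative only; the clipping threshold and noise scale below are placeholder values, not the tuned settings of our experiments) combines the clipping step with noising before aggregation, as in NbAFL, and the weighted average introduced earlier:

# Sketch: one DP round. Each provider clips its flattened update to norm C,
# perturbs it with Gaussian noise before sending, and the current aggregator
# forms the weighted FedAvg average w = sum_i p_i w_i.
import torch

def clip_and_noise(update, C=1.0, noise_multiplier=0.5):
    """Clip a flattened model update to norm C and add Gaussian noise
    with standard deviation noise_multiplier * C."""
    scale = min(1.0, C / (float(update.norm()) + 1e-12))
    clipped = update * scale
    return clipped + noise_multiplier * C * torch.randn_like(update)

def aggregate(updates, weights):
    """Weighted FedAvg aggregation with p_i = weights[i] / sum(weights)."""
    total = sum(weights)
    return sum((w / total) * u for w, u in zip(weights, updates))

# toy usage: 5 providers with equal-sized local datasets
updates = [torch.randn(10) for _ in range(5)]
noisy = [clip_and_noise(u) for u in updates]
global_update = aggregate(noisy, weights=[1.0] * 5)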
The network and communication model are depicted in Figures <ref> and <ref>, respectively. Measuring the quality of DP P2P FL: we mainly focus on the relationship between the privacy budget and the utility of the model, as well as the performance of the attacker. So, we are interested in studying the trade-off utility (accuracy, loss) vs. privacy (epsilon, strength against model attacks) in terms of: (1) Privacy: evaluation of how much information is leaked by the DP mechanisms; and (2) Utility: evaluation of the difference between results obtained from the original and differentially private data (loss, accuracy).

§ IMPLEMENTATION DETAILS AND EXPERIMENTAL SETUP For system implementation, framing the experiments in the previously described way makes them manageable with Anaconda, which is a Python-based data processing platform that runs on different operating systems, such as Windows, macOS and Linux. It can easily create, save, load, and switch between environments on the local computer, and it comes with some default implementations of Integrated Development Environments (IDEs); the one used is the Spyder IDE, which is an editor with syntax highlighting, introspection, and code completion. Also, we used a set of frameworks, libraries, and dependencies to meet all the requirements in a smooth environment for our peer-to-peer framework. PyTorch is an open-source deep learning framework for developing deep learning models, and the Opacus library enables training PyTorch models with differential privacy. It supports training with minimal code changes required on the client and allows tracking the privacy budget expended at any given moment. Also, one of the most important frameworks used is the one for building FL systems, which enables edge devices to collaboratively learn a shared prediction model, while keeping their training data on the device. Algorithm <ref> describes the aggregator execution pseudocode for Federated Averaging, targeting updates from K providers per round in the peer-to-peer setting. The main goal of the experiments is to determine whether DP affects the learning process, as mentioned before in the problem statement, as well as to address the privacy concerns and provide a privacy-preserving guarantee. We consider working on two benchmark datasets, MNIST and CIFAR-10. MNIST has been used in many research experiments. It is a large database of black and white images of handwritten digits, derived from NIST's original database of handwritten uppercase and lowercase letters as well as digits. Each image is normalized to fit into a 28x28 pixel box, which introduces grayscale levels. The dataset includes only images of handwritten digits and contains 60,000 training images and 10,000 testing images, in which half of the training set and half of the test set were taken from NIST's training dataset, while the other half of the training set and the other half of the test set were taken from NIST's testing dataset. The other dataset is CIFAR-10, which consists of 60000 32 × 32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images; the dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
The CIFAR-10 classes are airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. We propose a set of successive experiments confined to a FL problem in a decentralized setting, with DP mechanisms applied to some nodes depending on aspects related to the state of the infrastructure of our peer-to-peer framework. All the experiments include: * a scheduler to regulate (control) the process of the peer-to-peer network in each round for all the nodes, so as to set a role for each node in the network, namely aggregator of the neighbor nodes or provider; * the scheduler also controls whether a node has DP or not, depending on two aspects: * the number of samples in the dataset (size of the dataset); * the number of connections with other nodes. Here, the experiments run on 5 clients with equal-sized data splits assigned randomly. For the peer-to-peer setup, the role of the aggregator goes to the node which has the highest number of connections with the other nodes, while its neighbors play the role of providers; all of them apply DP, and so on in each successive round.

§ EXPERIMENTAL RESULTS AND DISCUSSION This Section presents the extensive evaluations that were conducted to show the performance of the proposed methods, with the results of four different approaches, namely centralized (traditional) Federated Learning, DP applied to centralized Federated Learning, peer-to-peer Federated Learning, and DP applied to peer-to-peer FL, on two different datasets, MNIST and CIFAR-10, over five rounds. In the first part of the experiments, the system runs on five clients with data split equally among all the clients and a single aggregator (server) until convergence is achieved; this corresponds to traditional FL. On the other hand, in the second part of the experiments, the system runs peer-to-peer FL: the role of the aggregator does not go to a specific node for all the rounds, but depends on the node with the highest number of connections among the nodes and differs from round to round, so that every node can play the role of either aggregator or provider per round, until the learning process over all the rounds finishes. Tables <ref> and <ref> list the baseline results for the two datasets in a classical FL setting, namely with a fixed node acting as a server for 5 clients, wherein the dataset is uniformly split among the clients. The final accuracy and loss attained for standard FL without DP and with DP at the server, respectively, are listed in Tables <ref> and <ref>, also for both datasets. In turn, Tables <ref> and <ref> list the performance (loss and accuracy) for the case of decentralized FL, where the aggregator is changing in each round (the aggregator node is marked in boldface numbers). The nodes that do not participate in a given round are due to the graph connectivity; recall that our graph is not complete, so in a particular round there might be no physical communication between a client and the aggregator node. Even in the absence of DP, this could slow down the rate of learning for the whole system in comparison to a full interchange in a round, as in classical FL. According to the results, when comparing centralized FL and DP centralized FL: accuracy on MNIST is consistently high for all the rounds, so it is little sensitive to the introduction of privacy; in contrast, accuracy on CIFAR is initially low and gradually increases up to 70% in round 5.
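For reference, the per-client privacy budget in such runs can be tracked with the Opacus accountant mentioned in the implementation section. The following sketch assumes Opacus >= 1.0 (where RDPAccountant exposes step and get_epsilon); the number of steps and the sampling rate below are hypothetical, so the printed ϵ matches the reported per-client value only for the actual training configuration:

# Sketch: tracking the (eps, delta)-DP budget with the Opacus RDP accountant.
from opacus.accountants import RDPAccountant

accountant = RDPAccountant()
for _ in range(100):                      # hypothetical number of local steps
    accountant.step(noise_multiplier=0.5, sample_rate=0.01)
eps = accountant.get_epsilon(delta=1e-5)
print(f"privacy spent: ({eps:.2f}, 1e-5)-DP")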
As expected, Table <ref> shows that, in standard FL, with and without DP at the clients, performance is homogeneous across all the clients, which attain similar performance. This is simply an indication that the data have been split uniformly, without bias among the clients. Note also that, when DP is introduced (Table <ref>), performance decreases drastically for CIFAR, while it is stable for MNIST. Thus, we clearly see that DP is more effective (for a constant ϵ) in the latter case, and that the tuning of DP needs to be carefully set depending on the statistical distribution of the data. We also confirm experimentally that DP has a substantial impact on accuracy, and can therefore make convergence quite slow in FL.

§ CONCLUSIONS This paper has analyzed a decentralized approach to Federated Learning, namely peer-to-peer FL with Differential Privacy, to enable privacy among the network nodes and significantly improve upon established ways for protecting privacy. Peer-to-peer FL with DP (with noise multiplier 0.5, ϵ = 2.59 for every client) is able to protect personal data much better than the traditional methods. We have also compared different approaches to ML according to privacy: 1) centralized FL (a reduction of the P2P model where the aggregator is the same in all rounds); 2) P2P FL (decentralized); 3) DP centralized FL; 4) DP P2P FL. Strong privacy requirements in FL among a fully decentralized set of agents can be achieved by adapting the natural solution of introducing DP at the clients in each communication and computing round. However, in contrast to the well-studied case of a FL approach with DP in the scenario of a single, central aggregator, the peer-to-peer interactions during the whole learning cycle (including local learning, noise injection and distribution of the models) are more complex and still not understood. In this respect, the results in this paper are a first experimental attempt at getting some insight into this interplay of FL vs. DP vs. communications. This work was supported by the Spanish Government under research project "Enhancing Communication Protocols with Machine Learning while Protecting Sensitive Data (COMPROMISE)" PID2020-113795RB-C33, funded by MCIN/AEI/10.13039/501100011033. | http://arxiv.org/abs/2311.16008v1 | {
"authors": [
"Hadeel Abd El-Kareem",
"Abd El-Moaty Saleh",
"Ana Fernández-Vilas",
"Manuel Fernández-Veiga",
"asser El-Sonbaty"
],
"categories": [
"cs.LG",
"cs.CR"
],
"primary_category": "cs.LG",
"published": "20231127170256",
"title": "Using Decentralized Aggregation for Federated Learning with Differential Privacy"
} |
January 14, 2024 ====================

We consider the homogenisation problem for the ϕ^4_2 equation on the torus 𝕋^2, namely the behaviour as ε → 0 of the solutions to the equation suggestively written as ∂_t u_ε - ∇·A(x/ε, t/ε^2)∇u_ε = -u_ε^3 + ξ, where ξ denotes space-time white noise and A: 𝕋^2×ℝ → ℝ^2× 2 is uniformly elliptic, periodic and Hölder continuous. When the noise is regularised at scale δ ≪ 1, we show that any joint limit ε, δ → 0 recovers the classical dynamical ϕ^4_2 model. In certain regimes, or if the regularisation is chosen in a specific way adapted to the problem, we show that the counterterms can be chosen as explicit local functions of A.

§ INTRODUCTION For A̅ ∈ ℝ^2× 2 a strictly positive definite matrix, the constant coefficient ϕ^4_2 equation ∂_t u - ∇·A̅∇u = -u^3 + ξ (up to a change of variables) was considered in <cit.>. It was observed that, just as in the construction of the ϕ^4_2 measure <cit.>, in order to obtain a meaningful notion of solution one should instead consider the renormalised equation formally given by ∂_t u - ∇·A̅∇u = -u^3 + ∞·u + ξ, see also <cit.>. In <cit.> it was noted that in the non-translation invariant setting, i.e. when A ∈ C^α(ℝ×𝕋^2, ℝ^2× 2) for some α>0, a meaningful notion of solution corresponds to considering ∂_t u - ∇·A(x,t)∇u = -u^3 + ∞·D(x,t)u + ξ, where D = det(A^s)^-1/2 for A^s the symmetric part of A, see also <cit.> for a general discussion in this direction. In this article, we consider the periodic homogenisation problem for the ϕ^4_2 equation. Assuming that A(x,t) is space-time periodic, we consider the equations formally given by ∂_t u_ε - ∇·A(x/ε, t/ε^2)∇u_ε = -u_ε^3 + ∞·D(x/ε, t/ε^2)u_ε + ξ for ε^-1 ∈ ℕ. Periodic homogenisation of elliptic and parabolic equations is a well studied subject, see for example <cit.>. The results therein imply that the oscillatory operator ℒ_ε = ∇·A(x/ε, t/ε^2)∇ converges in the resolvent sense to a homogenised operator ∇·A̅∇, see (<ref>) for an expression for A̅ in our setting. In particular, these results suggest that as ε → 0 the solutions u_ε to (<ref>) converge to a solution of (<ref>) for A̅ the homogenised matrix associated to A. The main results of this article state that instead an additional constant renormalisation is required. More precisely, we show in Theorem <ref> below that one can find constants α_ε,δ ∼ |log(δ/ε) ∧ 0|/4π, α̅_ε,δ ∼ (|log(δ)| ∧ |log(ε)|)/4π, and a bounded family of constants c_ε,δ such that the solution u_ε,δ to ∂_t u_ε,δ - ℒ_ε u_ε,δ = -u_ε,δ^3 + 3α_ε,δ D(x/ε, t/ε^2)u_ε,δ + 3α̅_ε,δ det(A̅)^-1/2 u_ε,δ + 3c_ε,δ u_ε,δ + ξ_ε,δ converges to that of (<ref>) as ε, δ → 0. Here, ξ_ε,δ denotes a rather specific regularisation of the white noise ξ, namely it is regularised by the heat kernel associated to ℒ_ε at time δ^2. This particular choice is convenient since it allows us to provide an explicit expression for the renormalisation which covers every possible regime regarding the relative sizes of ε and δ. Our method also applies for usual (translation invariant) mollification of the noise, but one only retains explicit local counterterms in the regime ε ≲ δ, see Theorem <ref> and Proposition <ref>. Similarly, Theorem <ref> uses the general class of mollification schemes in <cit.> and treats the regime δ ≲ ε.
Importantly, in all these results the solutions for δ=0 agree and fall within the canonical class of solutions constructed in <cit.>. As this article was nearing completion, we learned of the forthcoming work <cit.> which considers a very similar situation. While that and the present article are, to the best of our knowledge, the first results on homogenisation of singular SPDEs, homogenisation of stochastic PDEs that are either more regular or linear has been considered for example in <cit.>. For the topic of stochastic homogenisation, which is related but quite distinct, we refer to <cit.>. While the arguments of this article extend directly to the 2-dimensional parabolic Anderson model (swapping the ansatz of <cit.> for the one in <cit.>), the general method is expected to be applicable to more singular subcritical SPDEs within the framework of regularity structures. §.§.§ Acknowledgments HS would like to thank Felix Otto and Kirill Cherednichenko for valuable discussions and gratefully acknowledges financial support from the EPSRC via Ilya Chevyrev’s New Investigator Award EP/X015688/1. MH was partially supported by the Royal Society through the research professorship RP\R1\191065. §.§ Function Spaces We first introduce the function spaces used in this article. We shall always identify 𝕋^d= ℝ^d/ℤ^d. §.§.§ Function spaces on 𝕋^d For γ∈ (0,1) and B⊂ℝ^d, define for f:ℝ^d →ℝ the semi-norm f_C^γ(B)= sup_x,ζ∈B |f(x)- f(ζ)| /|x-ζ|^γ. For ρ: ℝ^d →ℝ we shall write ρ^λ_x= λ^-dρ((· - x)/λ). Then, for a distribution F∈𝒟'(ℝ^d) and -1<γ<0 let F_C^γ(B) = sup_x∈B sup_λ∈(0,1)sup_ρ∈𝔅^1(ℝ^d) |⟨F, ρ^λ_x ⟩|/λ^γ, where for n∈ℕ we set 𝔅^n(ℝ^d)= { ρ∈𝒞_c^∞(B_1): sup_k≤n∇^k ρ_Ł^∞< 1 }. Next, recall the canonical projection π_d: ℝ^d→𝕋^d and the associated pullback π_d^* of functions, resp. distributions. We define C^γ(𝕋^d) as the closure[ We choose this slightly non-standard convention, since it makes the space separable. ] of f∈ C^∞(𝕋^d) under the following norms respectively: f_C^γ(𝕋^d):=π_d^* f_C^γ(ℝ^d)+ π_d^* f_Ł^∞(ℝ^d) if γ∈(0,1), π_d^* f_Ł^∞(ℝ^d) if γ=0, π_d^* f_C^γ(ℝ^d) if γ∈(-1,0), where · _Ł^∞ denotes the usual (essential) supremum norm. We shall often freely identify functions, resp. distributions on 𝕋^d with periodic functions, resp. distributions by pullback under (<ref>). §.§.§ Function spaces on 𝕋^d×ℝ We equip ℝ^d+1 with the parabolic scaling (1,…,1,2) in the sense of <cit.> and write |z|_= |x|+ |t|^1/2 for z=(x,t)∈ℝ^d+1. Similarly to above, we shall often identify functions and distributions on 𝕋^d×ℝ with their counterpart on ℝ^d+1 by pullback under the projection π_d,1: ℝ^d+1→𝕋^d×ℝ, (x,t) ↦(π_d x, t). For γ∈ (0,1) we define the Hölder semi-norm for subsets B⊂ℝ^d×ℝ as f_C^γ_(B)= sup_(x,t), (ζ, τ)∈B |f(x,t)- f(ζ, τ)| /(|x-ζ|+ √(|t-τ|))^γ = sup_z, z'∈B |f(z)- f(z')| /|z-z'|_^γ. Then, define for γ≥ 0 the space C^γ(𝕋^d×[0,T]) as the closure of C^∞(𝕋^d× [0,T]) under the norms f_C_^γ(ℝ^d×[0,T]):=π_d,1^* f_Ł^∞(ℝ^d×[0,T]) if γ=0, and π_d,1^* f_C_^γ(ℝ^d×[0,T])+ π_d,1^*f_Ł^∞(ℝ^d×[0,T]) if γ∈(0,1). In order to define the corresponding distribution spaces, denote for λ>0 by 𝒮_^λ: ℝ^d+1→ℝ^d+1, z=(x,t) ↦𝒮_^λ(z)= (λ^-1x, λ^-2t) the parabolic scaling map. This time, writing for ρ: ℝ^d+1→ℝ, ρ^λ_z= λ^-(d+2)ρ( 𝒮_^λ(· - z) ), define for γ<0 and bounded B⊂ℝ^d+1 the following semi-norms F_C_^γ(B) = sup_z∈B sup_λ∈(0,1)sup_ρ∈𝔅^1(ℝ^d+1) |⟨F, ρ^λ_z ⟩|/λ^γ.
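As a basic example of these negative-regularity semi-norms, one may test space-time white noise ξ itself against the rescaled functions ρ^λ_z. The following standard second-moment computation (recorded here only as a heuristic) explains the exponent (d+2)/2 appearing in Lemma <ref> below:
\begin{aligned}
\mathbb{E}\big[\langle \xi,\rho^\lambda_z\rangle^2\big]
 = \|\rho^\lambda_z\|_{L^2(\mathbb{R}^{d+1})}^{2}
 = \lambda^{-2(d+2)}\int_{\mathbb{R}^{d+1}}\rho\big(\mathcal{S}^{\lambda}_{\mathfrak{s}}(w-z)\big)^{2}\,dw
 = \lambda^{-(d+2)}\,\|\rho\|_{L^2}^{2}\,,
\end{aligned}
since the scaling map has Jacobian λ^-(d+2). Hence |⟨ξ,ρ^λ_z⟩| is typically of order λ^-(d+2)/2, consistent with ξ belonging to C_^-α(ℝ^d+1) for every α>(d+2)/2.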
Define the space C_^γ(ℝ^d+1)⊂𝒟' (ℝ^d+1) as the completion of smooth compactly supported functions C^∞_c(ℝ^d+1) with respect to the Fréchet topology induced by the seminormsf_C_^γ((-N,N+1)^d+1)forN∈ℕ and identify C_^γ(𝕋^d×ℝ) as the pushforward underπ_d,1 of the subspace of C_^γ(ℝ^d+1) consisting of the periodic (in the first d components) distributions. §.§.§ Singular function spacesIn order define function spaces allowing for singular behaviour near time 0,we first introduce the forward parabolic cylinder Q̃_r(x,t)=B_r(x)×(t, t+r^2)⊂ℝ^d+1 ,centered at (x,t)∈ℝ^d×ℝ of radius r>0. For γ∈ (0,1], η≤γ we define the weighted Hölder norm on functions f: 𝕋^d× (0,T] →ℝ^d f_C^γ,η_T =sup_(x,t)∈[-1,1]^d ×(0,T] sup_(ζ, τ)∈Q̃_√(t)(t,x)∩([-1,1]^d ×(0,T])|π_d,1^*f(x,t)- π_d,1^*f(ζ, τ)| /√(t)^η-γ (|x-ζ|+ √(|t-τ|))^γ+ sup_(x,t)∈[-1,1]×(0,T]|π_d,1^* f(x,t)|/|t|^η/2 ∧0 . §.§.§ Functions of two variablesFurthermore for γ∈ (0,1] we introduce the following Hölder semi-norms for subsets B,B'⊂ℝ^d×ℝF_C^0,γ_(B×B') = sup_z∈B sup_z',z̅'∈B'|F(z,z') -F(z, z̅')|/ |z'-z̅'|_^γ ,F_C^γ,0_(B×B') =sup_z,z̅∈B sup_z'∈B'|F(z,z') -F(z̅, z') | /|z-z̅|_^γ,as well as for γ, γ'∈ (0,1],F_C^γ,γ'_(B×B')= sup_z,z̅∈B sup_z',z̅'∈B'|F(z,z') -F(z̅, z') -F(z, z̅')+ F(z̅, z̅')| /|z-z̅|_^γ·|z'-z̅'|_^γ' .§.§ Non-translation invariant heat kernels Throughout this article we make the following assumption.A: ℝ^d+1→ℝ^d× d is uniformly elliptic, ℤ^d+1-periodicand (space-time) θ-Hölder continuous for some θ∈ (0,1), i.e.sup_|z-z'|_≤1|A(z)-A(z')|/|z-z'|_^θ<+∞ .We shall throughout use the notation a≲ b to mean that there exists a constant C>0 such that a≤ Cb and whenever applicable further explain what that constant depends on. Throughout this article, constants will depend on A without further mention.Next we recall some properties of the fundamental solution of the differential operator ∂_t - ∇· A(z)∇ , c.f. <cit.>, <cit.>. We shall write a^i,j for the entries of the inverse matrix A^-1 of A and define [e:deftheta] ϑ^z(ζ)= ∑_i,j a^i,j(z) ζ_i ζ_j, w^z(ζ, τ)= 1_{τ> 0 }/τ^d/2 exp(ϑ^z(ζ)/4τ) .The fundamental solution of the differential operator with coefficients “frozen” at z=(ζ, τ) is given by Z(x,t; ζ,τ): =C(ζ,τ) w^(ζ,τ)(x-ζ,t-τ) ,where C(z)= (4π)^-d/2(A^s(z))^-1/2.We also define Z̅(x,t; ζ,τ): =C(x,t) w^(x,t)(x-ζ,t-τ) ,and note by direct computation that|Z̅ (x,t;ζ, τ-δ)-Z̅(x,t;ζ, τ)| ≲δ^θ/2/ (t-τ)^d/2+θ/2exp( -μ|x-ζ|^2/t-τ )uniformly over t>τ>0, x,ζ∈ℝ^d, δ∈ [0,1].The following is essentially known and summarises some properties of the fundamental solution.Under Assumption <ref> there exists μ>0 such that the (unique) fundamental solution Γ of the differential operator (<ref>) satisfies|Γ(x,t; ζ, τ)-Z(x,t;ζ, τ)|+|Γ(x,t; ζ, τ)-Z̅ (x,t;ζ, τ)| ≲(t-τ)^(θ-d)/2exp( -μ|x-ζ|^2/t-τ ) ,and |Γ(x,t; ζ, τ-δ)-Z̅ (x,t;ζ, τ-δ)-Γ (x,t; ζ, τ)+Z̅(x,t;ζ, τ)| ≲((δ^θ/2/ (t-τ)^d/2+θ/2 +δ^θ/2/ (t-τ)^d/2) ∧ 1/ (t-τ)^d/2-θ/2) exp( -μ |x-ζ|^2/t-τ)uniformly over t>τ>0, x,ζ∈ℝ^d, δ∈ [0,1]. Throughout this article, μ will denote some strictly positive constant (depending on A), which is allowed to change from line to line. Recall that by the construction of the fundamental solution by Levi's parametrix method, c.f. <cit.>, we can write Z_≥ 1:= Γ-Z asZ_≥1 = ∑_ν=1^∞Z_ν,where each summand is explicitly given in <cit.>. 
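Before continuing, we record a plausibility check on the normalising factor C(z); this is a side computation, in which we read the exponent in w^z with its implicit minus sign, w^z(ζ,τ)= τ^-d/2exp(-ϑ^z(ζ)/4τ), read C(z)= (4π)^-d/2 det(A^s(z))^-1/2, and assume for simplicity that A(z) is symmetric, so that ϑ^z is the quadratic form of A(z)^-1. Then
\begin{aligned}
\int_{\mathbb{R}^d} Z(x,t;\zeta,\tau)\,dx
 = \frac{\det\!\big(A(\zeta,\tau)\big)^{-1/2}}{\big(4\pi(t-\tau)\big)^{d/2}}
   \int_{\mathbb{R}^d}\exp\Big(-\frac{\langle A(\zeta,\tau)^{-1}\eta,\eta\rangle}{4(t-\tau)}\Big)\,d\eta
 = 1\,,
\end{aligned}
by the Gaussian integral ∫_ℝ^d e^-⟨Bη,η⟩/4s dη=(4π s)^d/2 det(B)^-1/2 for symmetric positive definite B. Thus Z(·,t;ζ,τ) is a probability density in x, exactly like a constant-coefficient heat kernel with coefficients frozen at (ζ,τ).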
Furthermore, it follows from the discussion around <cit.> that there existH,μ>0 such that|Z_ν(x,t; ζ,τ)| ≲H^ν/Γ(θν) 1/ (t-τ)^d/2-θν/2exp( -μ|x-ζ|^2/t-τ )and| Z_ν(x,t; ζ,τ-δ)-Z_ν(x,t; ζ,τ)| ≲H^ν/Γ(θν) δ^θ/2 / (t-τ)^d/2-θ/2(ν-1)exp( -μ|x-ζ|^2/t-τ ) .We conclude that |Z_≥1(x,t; ζ,τ)| ≲ 1/ (t-τ)^d/2-θ/2exp( -μ|x-ζ|^2/t-τ ) and|Z_≥1(x,t; ζ, τ-δ)-Z_≥1(x,t; ζ, τ)|≲ δ^θ/2 / (t-τ)^d/2exp( -μ|x-ζ|^2/t-τ ) .A direct computation shows that |Z(x,t; ζ,τ)-Z̅ (x,t; ζ,τ)|≲(t-τ)^(θ-d)/2exp( -μ|x-ζ|^2/t-τ ).Thus (<ref>) together with (<ref>) imply the first inequality of the proposition.Next note that (<ref>) together with (<ref>) and a similar bound on Z imply thatẐ = Z̅ - Z satisfies| Ẑ (x,t; ζ,τ-δ) - Ẑ (x,t; ζ,τ) | ≲((δ^θ/2/ (t-τ)^d/2+θ/2 +δ^θ/2/ (t-τ)^d/2) ∧ 1/ (t-τ)^d/2-θ/2) exp( -μ |x-ζ|^2/t-τ) .which combined with (<ref>) implies the last inequality of the proposition. We shall denote by Γ_ the heat kernel of the differential operator∂_t +ℒ_ with ℒ_= - ∇·(A(x/, t/^2)∇) .We observe that for >0 and f∈𝒞_c^∞ (ℝ^d+1), if u is a solution to(∂_t +ℒ_1)u = f∘𝒮_^1/ onℝ^d+1,then u_= ^2 u ∘𝒮^_ satisfies (∂_t +ℒ_) u_= f. We can conclude the following scaling property for the heat kernel. It holds that Γ_ (z,z̅)= 1/^dΓ_1 (𝒮^_ (z), 𝒮^z̅). We define Z^ and Z̅^ exactly as in (<ref>) and (<ref>) but with A(x,t) replaced byA(x/, t/^2). One directly checks that |Z̅^ (x,t;ζ, τ-δ^2)-Z̅^(x,t;ζ, τ)| ≲δ^θ/ (t-τ)^d/2+θ/2 exp( -μ|x-ζ|^2/t-τ ) . There exists μ>0 such that|Γ_(x,t; ζ,τ)- Z̅^ (x,t; ζ,τ) |≲ ^-θ/ (t-τ)^d/2-θ/2exp( -μ |x-ζ|^2/t-τ) ,and |Γ_(x,t; ζ, τ-δ^2)-Z̅^ (x,t;ζ, τ-δ^2)-Γ_ (x,t; ζ, τ)+Z̅^(x,t;ζ, τ)| ≲1_δ≤·((δ^θ^θ/ (t-τ)^d/2+θ/2 +δ^θ/ (t-τ)^d/2) ∧^-θ/ (t-τ)^d/2-θ/2) exp( -μ |x-ζ|^2/t-τ)+ 1_δ>δ^θ/ (t-τ)^d/2exp( -μ |x-ζ|^2/t-τ)uniformly over t>τ>0, x,ζ∈ℝ^d and δ,∈ (0, 1].From Lemma <ref> and the fact that^-dZ̅^1(𝒮_^ z; 𝒮_^z̅ )= Z̅^(z; z̅ ) one sees that |Γ^(z;z̅)- Z̅^(z;z̅)|= ^-d |Γ^1(z;z̅)- Z̅^1(z;z̅)|which, combined with Proposition <ref>, shows (<ref>). For the latter inequality, we treat the regimes δ≤ and δ> separately. The former follows by the same scaling argument as above, while for δ> the bound follows by the triangle inequality from (<ref>) and Proposition <ref> below.§.§ Periodic HomogenisationWe now recall some known results from periodic homogenisation, always under Assumption <ref>. One defines for j=1,…,d the corrector Φ_j: 𝕋^d×ℝ→ℝ as the unique (weak) 1-periodic solution to the cell problem(∂_t+ℒ_1)Φ_j= ∇· (A(x,t) e_j), ∫_𝕋^d× [0,1]Φ_j(x,t)dxdt =0 ,where e_j denotes the jth basis vector of ℝ^d, c.f. <cit.>.Then, the homogenised matrix defined as A̅= (a̅_i,j)_i,j=1^d, a̅_i,j:= ∫_ [0,1]^d+1(a_i,j(x,t) +∑_k=1^d a_i,k(x,t) ∂_k Φ_j(x,t) ) dx dt,is strictly positive definite.Writing Ã:ℝ^d+1→ℝ^d× d for the matrix with entries ã_i,j(y,s)=a_j,i(y,-s), we setℒ̃_= -∇· (Ã(x/, t/^2) ∇). One finds that the fundamental solution Γ̃_(x,t,ζ,τ) of ∂_t +ℒ̃_ satisfies Γ̃_ (x,t;ζ,τ)= Γ_ (ζ, -τ;x, -t) .For j=1,…,d we then denote by Φ̃_j: 𝕋^d×ℝ→ℝthe correctors associated to the homogenisation problem associated to Ã.For j,i=1,…,d, letb_i,j:= a_i,j +∑_k=1^d a_i,k(x,t) ∂_k Φ_j(x,t) - a̅_i,j.as well as b_d+1,j= - Φ_j . The functions {Ψ_k,i,j}_k,i=1; j=1^d+1;d, characterised by the next Lemma <cit.>, are called dual correctors.There exist continuous periodic functions Ψ_k,i,j: ℝ^d+1→ℝ satisfyingb_i,j= ∂_t Ψ_d+1,i,j + ∑_k=1^d ∂_k Ψ_k,i,j, Ψ_k,i,j=-Ψ_k,j,i . We write Q_r(x,t)= B_r(x) ×(t-r^2, t)⊂ℝ^d+1 ,for the parabolic cylinder at (x,t)∈ℝ^d×ℝ of radius r>0. The following is <cit.>.Let p>d+2 and α=1-d+2/p and R>0. 
There exists a constant C=C(R,d,p) such that for all ∈ (0,1] and f=(f_1,…,f_d)∈ L^p(Q_2r(x_0,t_0)) any solutionu_ to (∂_t+ℒ_)u_= ∇· f on Q_2r(x_0,t_0) satsfies the boundu__C_^α(Q_r(x_0,t_0)) ≤Cr^1-α( 1/r(_Q_2r(x_0,t_0) |u_|^2)^1/2+ (_Q_2r(x_0,t_0) |f|^p)^1/p ) uniformly over r<R.The next theorem is well known, see for example <cit.> and <cit.>.There exists μ>0 such that | Γ_(x,t;y,s)|≲1/|t-s|^d/2 exp(-μ|x-y|^2/t-s )uniformly over ∈ (0,1], x,y∈ℝ^d and -∞ <s<t<∞.The following is <cit.>. There exists μ>0 such that |(Γ_-Γ̅)(x,t; y, s)|≲C/(t-s)^d+1/2exp( μ|x-y|/t-τ)uniformly over x,y∈ℝ^d and -∞ <s<t<∞.§.§ Main resultsDenote by ξ space time white noise on 𝕋^d×ℝ.Fix ϕ∈ C^∞_c(B_1(0)) even such that ∫_ℝϕ(t)dt=1. For , δ>0 setξ_,δ (x,t)= ∫_ℝ^d+11/δ^2ϕ((t-s)/δ^2)Γ_(x,t, ζ, t-δ^2) ξ(dζ, ds) ,where we implicitly identified ξ with its periodic counterpart by pullback.The following lemma suggests that this regularisation is particularly convenient for studying homogenisation of singular SPDEs. It is a direct corollary of Proposition <ref> in Section <ref>.For every α>d+2/2 there exists a modification of (<ref>) which extends to a continuous map[0,1]^2→C_^-α(ℝ^d+1), (,δ)↦ξ_,δ .It has the property that for any ∈ [0,1] it holds that ξ_,0= ξ and that for any δ>0ξ_0,δ (x,t)= ∫_ℝ^d+1 ϕ^δ(t-s) Γ̅(x,t, ζ, t-δ^2) ξ(dζ, ds). We define the set□:= { (, δ) ∈(0,1]^2:^-1∈ℕ }, in order to state the main results of this article, see Remark <ref>. We will always view □ as a subspace of [0,1]^2, so its closureequals = { (, δ) ∈ [0,1]^2:∈{0}∪ℕ^-1}.Let ξ denote space-time white noise on 𝕋^2×ℝ and let u_0∈𝒞^α(𝕋^2) for α>-1/10.Consider for δ>0 theregularisation ξ_,δ(x,t) as in (<ref>). There exist constants[ Here we write f∼ g (for functions f, g: (0,1]^2 →ℝ) to mean thatf-g extends continuously to [0,1]^2. ] α_,δ∼|log(δ/)∧0|/4π, α̅_,δ∼ |log(δ)| ∧|log()|/4π ,and a bounded family of constants c_,δ such that if we denote by u_, δ the solution to∂_t u_, δ -ℒ_ u_, δ=-u_,δ^3+ 3 α_, δD(x/,t/^2)u_,δ + 3α̅_,δ(A̅)^-1/2u_,δ + 3c_,δ u_,δ +ξ_, δ, with initial condition u_, δ(0)=u_0, then, for any T>0 the solution map □→Ł^0[C([0,T], 𝒟'(𝕋^2))] , (, δ) ↦u_, δ ,is well defined and has a unique continuous[Recall that the Ł^0-topology is characterised by convergence in probability.] extension to .Furthermore, the constants α_,δ, α̅_,δ, c_,δ (depending on ϕ) can be chosen such that the following hold.*For δ>0 one has lim_→ 0α_,δ=0, lim_→ 0 c_,δ=0 and the limit α̅_0,δ:=lim_→ 0α̅_,δ exists. In particular, u_0, δ agrees with the classical solution to∂_t u_0, δ -∇·A̅∇u_0, δ= -u_,δ^3+ 3α̅_0,δ(A̅)^-1/2 u_0,δ +ξ_0, δ. *For > 0, the limits α̅_,0=lim_δ→ 0α̅_,δ andc_, 0=lim_δ→ 0 c_,δ exist. Furthermore there exists a sequence of constantsα̂_∼1/4π |log()| , such that u_,0 agrees with the[Assuming we choose the same way to affinely parametrise the solution family, e.g. by choosing the same cutoff function κ(t), see Section <ref>. In that case,α̂_=1/8∫_ℝ κ(τ)^2(1- κ(τ/^2)^2) /τ dτ .]solution to ∂_t u_,0-ℒ_u_,0=-u_,0^3+ ∞·D(x/,t/^2)u_,0 + 3(α̅_,0 (A̅)^-1/2 + c_,0)- 3α̂_D(x/,t/^2)u_,0+ ξ constructed in <cit.>. *For all ∈[0,1], u_,0 does not depend on the choice of ϕ. Note that the restriction (,δ)∈□ instead of (,δ)∈ (0,1]^2 is only so that the differential operator∇· A(x/, t/^2)∇ can be pushed forward to the torus. We couldjust as well have formulated the results on the full plane but with periodic noise instead, in which case the statementholds for □ replaced by (0,1]^2. The same remark applies to Theorem <ref>, Proposition <ref> and Theorem <ref> below. 
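To see why the counterterms in the theorem above enter linearly in u_ε,δ, recall the elementary computation behind the Da Prato–Debussche decomposition used in Section <ref> (spelled out here for convenience; the matching of constants is only up to errors that vanish in the limit): if X is a Gaussian field with 𝔼[X(z)^2]=C(z), then writing u=w+X,
\begin{aligned}
-u^{3} + 3\,C\,u
 \;=\; -w^{3} \;-\; 3\,w^{2}X \;-\; 3\,w\,\big(X^{2}-C\big) \;-\; \big(X^{3}-3\,C\,X\big)\,,
\end{aligned}
and X^2-C, X^3-3CX are the Wick powers of X, which stay bounded in C^-α as the regularisation is removed. With X the stationary solution of the linear equation and C(z)= α_ε,δ D(x/ε,t/ε^2)+α̅_ε,δ(A̅)^-1/2+c_ε,δ chosen to match 𝔼[X(z)^2], this produces exactly the linear counterterms of the theorem.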
The constants c_,δ∈ℝ have the property that in generallim_δ↓0 lim_↓0c^(1)_,δ= 0≠lim_↓0lim_δ↓0c^(1)_,δ,see (<ref>) for an expression of the limit on the right hand side. It can be seen that Theorem <ref> as well as its proof extend to arbitrary polynomial non-linearities of odd degree where the highest degree part has a negative coefficient. We focus on the cubic case in order to streamline exposition, the generalisation follows along standard arguments, c.f. <cit.>. §.§.§ Translation invariant regularisationIn this section we consider homogenisation of the ϕ^4_2 equation for usual translation invariant regularisations of the noise. Let us define for C>0 the following set^<_C:={ (,δ)∈(0,1]^2 :<Cδ} . Let ξ denote space-time white noise on 𝕋^2×ℝ, let ϕ∈ C_c^∞(B_1) be even, non-negative with ∫_ℝ^d+1ϕ=1, and let u_0∈𝒞^α(𝕋^2) for α>-1/10.Consider for δ>0 the regularisation ξ^♭_δ(z)=ξ(ϕ_z^δ). There exist constantsα̅^♭_,δ∼|log(δ)| ∧|log()|/4π ,and c^♭_,δ bounded on ^<_C for any C>0 such that ifwe denote by u^♭_, δ the solution to∂_t u^♭_, δ -ℒ_ u^♭_, δ=-(u^♭_,δ)^3+ 3α̅^♭_,δ(A̅)^-1/2u^♭_,δ + 3c^♭_,δ u^♭_,δ +ξ^♭_δ, with initial condition u_, δ^♭ (0)=u_0, then for every C>0 and T>0 the solution map^<_C∩□→Ł^0[C([0,T], 𝒟'(𝕋^2))], (, δ) ↦u_, δis well defined and has a unique continuous extension to ^<_C∩.Furthermore, the constants α̅_,δ^♭, c^♭_,δ (depending on ϕ) can be chosen such that the following hold.*For δ>0 it holds that lim_→ 0 c^♭_,δ=0 and the limit α̅^♭_0,δ:=lim_→ 0α̅^♭_,δ exists. In particular, u^♭_0, δ agrees with the classical solution to∂_t u^♭_0, δ -∇·A̅∇u^♭_0, δ= -(u^♭_,δ)^3+ 3α̅_0,δ^♭(A̅)^-1/2 u^♭_0,δ +ξ^♭_0, δ. *The process u^♭_0,0 does not depend on the choice of ϕ and agrees with u_0,0 in Theorem <ref>.See Item <ref> of Proposition <ref> and Remark <ref> for a formula for the constant c^♭_,δ. The next proposition shows that the above theorem is sharp in the following sense.Let ξ, ϕ, ξ^♭_δ, u_0∈𝒞^α(𝕋^2) and α̅^♭_,δ be as in Theorem <ref>. There exist constants c^♭♭_,δ bounded on (0,1]^2 and ℤ^3 periodic functions D_λ^♭♭ : ℝ^3→ℝsatisfying 1+ |log(λ)| ≲D_λ^♭♭ (z) ≲1+|log(λ)|,uniformly in λ∈ (0,1), such that if we denote by u^♭♭_, δ the solution to∂_t u^♭♭_, δ -ℒ_ u^♭♭_, δ=-(u^♭♭_,δ)^3+ 3D^♭♭_δ/(x/,t/^2)u^♭♭_,δ + 3α̅^♭_,δ(A̅)^-1/2u^♭♭_,δ + 3c^♭♭_,δ u^♭♭_,δ +ξ^♭_, δ, with initial condition u_, δ^♭♭(0)=u_0, then, for any T>0 the solution map□→Ł^0[C([0,T], 𝒟'(𝕋^d))], (, δ) ↦u^♭♭_, δis well defined and has a unique continuous extension to .Furthermore, the constants c^♭♭_,δ and the functions D^♭♭_λ can be chosen (depending on ϕ) such that the following hold.*The constants c^♭_,δ from Theorem <ref> can be written as c^♭_,δ=c^♭♭_,δ + ∫_[0,1]^3 D^♭♭_δ/.*For all δ>0, lim_→ 0 D^♭♭=0 and lim_→ 0c^♭♭_,δ=0.*For all δ∈ [0,1]the process u^♭♭_0,δ agrees with u^♭_0,δ in Theorem <ref>.*For each >0 the limitlim_δ→ 0 c^♭♭_,δ exists and agrees with the limit lim_δ→ 0 c_,δ from Theorem <ref> and, furthermore , u^♭♭_,0 agrees with the solution u_,0 in Theorem <ref>.Equation (<ref>) provides an expression for D^♭♭_λ. One can check thatD̂^ϕ(z)= lim_λ→ 0 D^♭♭_λ(z)-|log(λ)|/4πD(z)defines a continuous (but ϕ-dependent) function. 
This is a rather generic feature of logarithmic divergences which tend to come with a regularisation independent prefactor and aregularisation dependent part of order one.Thus, in the setting of Proposition <ref> it follows by a virtually identical proof thatfor ĉ^♭♭_ϵ,δ=c^♭♭_,δ+ ∫_[0,1]^3 D^♭♭_δ/-|log(δ/)∨ 0|/4π∫_[0,1]^3 D(z)the solution map □∋(, δ) ↦û^♭♭_, δ∈Ł^0[C([0,T], 𝒟'(𝕋^d))] to∂_t û^♭♭_, δ -ℒ_û^♭♭_, δ=-(û^♭♭_,δ)^3+ 3|log(δ/) ∨ 0|/4πD(x/,t/^2) û^♭♭_,δ+ 3(α̅^♭_,δ(A̅)^-1/2 +ĉ^♭♭_,δ)û^♭♭_,δ +ξ^♭_, δ,also extends continuously toand it holds that û^♭♭_0,δ=u^♭♭_0,δ for all δ∈ [0,1]. But for any >0, in general, û^♭♭_,0 does depend on ϕ and does not fall in the class of solutions exhibited in <cit.>. §.§.§ Non-translation invariant regularisationsWe first recall the general class non-translation invariant regularisations of <cit.> and then establish the analogue result to Theorem <ref>. Let ρ∈_c^∞(_+) non-negative with support in [0,1), all odd derivatives vanishing at the origin, and such that ∫_ℝ^dρ (|x|^2) dx=1. Recalling the definition of ϑ^z in (<ref>), letρ^(z,, δ)(x)=1/δ^2 (A(𝒮^z))^1/2 ρ(ϑ^𝒮^z (x)/δ^2) .We set for ϕ∈_c^∞((-1,1)) even such that ∫ϕ(t) dt=1ϱ^(,δ; z) (x,t; ζ, τ)= 1/δ^2ϕ((t-τ)/δ^2) ρ^(z,,δ) (x-ζ) ,and ϱ^,δ (x,t; ζ, τ)=ϱ^(,δ; (x,t)) (x,t; ζ, τ).For later use we also introduce ϱ̅^δ (x,t; ζ, τ) to be defined asϱ^,δ (x,t; ζ, τ) but with A replaced by the identity matrix. Let ξ_,δ^♯(z) = ∫ϱ^,δ (z; z') ξ(z')dz' . Define for C>0 the set ^>_C:={ (,δ)∈ (0,1]^2 :>Cδ}. We then have the following result.Let ξ denote space-time white noise on ℝ×𝕋^2 and let u_0∈𝒞^α(𝕋^d) for α>-2/3.Denote for , δ>0 by ξ_,δ^♯ its regularisation as in (<ref>). Then, there exist sequences of constants α^♯_δ,∼|log(δ/)∧0|/4π,and c^♯_δ, bounded on ^>_C for any C>0such that ifwe denote by u^♯_, δ the solution to∂_t u^♯_, δ -ℒ_ u^♯_, δ= -(u^♯_,δ)^3+ 3 α^♯_δ,D(x/,t/^2)u^♯_,δ + c^♯_,δu_,δ+ξ^♯_, δ ,with initial condition u^♯_, δ (0)=u_0, then for every C>0 and T>0 the solution map^>_C∩□→Ł^0[C([0,T], 𝒟'(𝕋^2))], (, δ) ↦u_, δis well defined and has a unique continuous extension to ^>_C∩.Furthermore, the constants α̅^♯_,δ, c_,δ^♯ can be chosen (depending on ϕ and ρ) such that the following hold: *For >0, the limit lim_δ→ 0 c_,δ^♯ agrees with lim_δ→ 0 c_,δ from Theorem <ref>.*For all ∈ℕ^-1∪{0} the process u^♯_,0 agrees with u_,δ in Theorem <ref> and does not depend on the choice of ϕ and ρ. § HOMOGENISATION ESTIMATES FOR KERNELSIn this section we establish further kernel estimates. Recall the definition of the backwards parabolic cylinder Q_r(x,t) in (<ref>) and the forward parabolic cylinder Q̃_r(x,t) in (<ref>). We have the following uniform Hölder bound on heat kernels. For α∈ (0,1), there exists μ>0 such thatΓ_( · ; y, s )_C_^α(Q_√( |t-s|)/8(x,t))+ Γ_(x,t;·)_C_^α(Q̃_√( |t-s|)/8(y,s)) ≲1/|t-s|^d+α/2 exp(-μ|x-y|^2/t-s ) .uniformly over x,y∈ℝ^d, -∞ <s<t<∞ and ∈ (0,1]. Note that u_( · )=Γ_( ·; y, s ) satisfies the assumptions of Theorem <ref> with r=√( |t-s|)/8 and f=0. By Theorem <ref>sup_z∈ Q_2r(x,t)Γ_ ( · ,y,s ) ≲sup_x'∈ B_2r(x),t' ∈ (t-(2r)^2, t) 1/|t'-s|^d/2exp(-μ|x'-y|^2/t'-s)≲1/|t-s|^d/2exp(-μ |x-y|^2/t-s),where we recall that the exact value of μ is allowed to change from line to line. This proves thatΓ_( · ; y, s )_C_^α(Q_√( |t-s|)/8(x,t)) ≤C/|t-s|^d+α/2 exp(-μ|x-y|^2/t-s ) .The bound on Γ_(x,t;·)_C_^α(Q̃_√( |t-s|)/8(y,s)) follows similarly using (<ref>). 
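In the estimates of this section, the homogenised matrix A̅ from (<ref>) and the correctors Φ_j, Ψ_k,i,j play a central role. For intuition we record two standard facts, neither of which is used in the sequel. First, A̅ is in general not the average of A: in the classical layered-medium example A(x,t)=a(x_1) Id in d=2, with a 1-periodic, time-independent and bounded between two positive constants, the cell problem is solved by Φ_2≡0 and
\begin{aligned}
a(y_1)\,\big(1+\Phi_1'(y_1)\big) \;=\; \Big(\int_0^1 \frac{ds}{a(s)}\Big)^{-1} =: a_{\mathrm{harm}}\,,
\qquad
\bar A \;=\; \begin{pmatrix} a_{\mathrm{harm}} & 0\\[2pt] 0 & \int_0^1 a(s)\,ds\end{pmatrix}\,,
\end{aligned}
the harmonic mean of a in the oscillating direction and the arithmetic mean in the transverse one. Second, the propositions below quantify the classical second-order two-scale expansion, which in the normalisation Φ^ε_j(z)=εΦ_j(x/ε,t/ε^2), Ψ^ε_k,i,j(z)=Ψ_k,i,j(x/ε,t/ε^2) (our reading of the notation, with summation over repeated indices) takes the form
\begin{aligned}
u_\varepsilon \;=\; u_0
 \;+\; \varepsilon\,\Phi_j\!\Big(\frac{x}{\varepsilon},\frac{t}{\varepsilon^{2}}\Big)\partial_j u_0
 \;+\; \varepsilon^{2}\,\Psi_{d+1,i,j}\!\Big(\frac{x}{\varepsilon},\frac{t}{\varepsilon^{2}}\Big)\partial_i\partial_j u_0
 \;+\; w_\varepsilon\,,
\end{aligned}
where the remainder solves a divergence-form equation (∂_t+ℒ_ε)w_ε=∇· F_ε with F_ε small in ε; the dual correctors Ψ of Lemma <ref> enter exactly here, rewriting the non-divergence error b_i,j ∂_i∂_j u_0 in divergence form.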
For α,α'∈ (0,1), there exists μ>0 such thatΓ__C_^α, α'(Q_√( |t-s|)/8(x,t)×Q̃_√( |t-s|)/8(y,s)) ≲1/|t-s|^d+(α+ α')/2 exp(-μ|x-y|^2/t-s ) uniformly over x,y∈ℝ^d, -∞ <s<t<∞ and ∈ (0,1]. We apply Theorem <ref> on Q_√( |t-s|)/8(x,t) with f=0, r= √( |t-s|)/8 to the increment Γ(·, z')- Γ(·, z̅') for z', z̅'∈Q̃_√( |t-s|)/8(y,s). Thus,Γ(·, z')- Γ(·, z̅')_C_^α(Q_√( |t-s|)/8(x,t) ≲ r^-αsup_z∈ Q_√( |t-s|)/4(x,t)) |Γ(z, z')- Γ(z, z̅')| .Therefore,Γ__C_^α, α'(Q_√( |t-s|)/8(x,t)×Q̃_√( |t-s|)/8(y,s)) = sup_z',z̅'∈Q̃_√( |t-s|)/8(y,s)Γ(·, z')- Γ(·, z̅')_C_^α(Q_√( |t-s|)/8(x,t))/|z'-z̅'̅|^α'≲ r^-αsup_z∈ Q_√( |t-s|)/4(x,t)sup_z',z̅'∈Q̃_√( |t-s|)/8(y,s) |Γ(z, z')- Γ(z, z̅')| /|z'-z̅'̅|^α'≲ r^-αsup_(ζ, τ)∈ Q_√( |t-s|)/4(x,t)1/|τ-s|^d+α/2exp(-μ |ζ-y|^2/τ-s) ≲1/|t-s|^d+α+α'/2exp(-μ |x-y|^2/t-s),where we used (<ref>) in the first inequality and Proposition <ref> in the second inequality.We define for I,J∈{0,1}Γ^I,J_(x,t;y,s)= (1+1_{I>0} ∑_i=1^d Φ_i^(x,t) ∂_x_i) (1+ 1_{J>0}∑_j=1^d Φ̃_j^(y,-s) ∂_y)Γ̅ (x,t; y, s),where Φ_j^ and Φ̃_j^ were defined in (<ref>) and (<ref>) respectively. Let R>0. If(∂_t- ℒ_) u_= (∂_t- ℒ̅) u̅ on Q_2r(x_0,t_0), then u_ -u_0- ∑_i=1^d Φ_i^∂_i u_0_C_^α(Q_r(x_0,t_0))≲1/r^α(_Q_2r(x_0,t_0) |u_-u_0|^2)^1/2 +r^1-α^2 sup_Q_2r(x_0,t_0)( |∇^3 u_0| + |∇∂_t u_0|)+/r^αsup_z∈ Q_2r(x_0,t_0) |∇ u_0(z)| +( ^2-α + r + ^2/r^α) sup_z∈ Q_2r(x_0,t_0) |∇^2 u_0(z)| +^2 ∇^2 u_0_C_^α(Q_r(x_0,t_0)) ,uniformly over r<R.Setw_= u_-u_0-∑_i=1^d Φ_i^∂_i u_0 - ^2 ∑_i,j=1^d Ψ^_d+1,i,j ∂_i ∂_j u_0, then one finds that∂_t+ℒ_ w_= ∇· F_ , whereF_, i(x,t)= ( a_i,j^Φ_k^ +Ψ_i,k,j^ )∂_j ∂_k u_0 + Ψ_i,d+1,j^∂_t ∂_j u_0 +a_i,j^ (^-1∂_j Ψ_d+1,l,k^) ∂_l ∂_k u_0 + a_i,j^Ψ_d+1,l,k^∂_j ∂_l ∂_k u_0,see <cit.>. Therefore by Theorem <ref> we find that w__C_^α(Q_r(x_0,t_0)) ≤ Cr^1-α( 1/r(_Q_2r(x_0,t_0) |w_|^2)^1/2+ (_Q_2r(x_0,t_0) |F_|^p)^1/p) ≲r^1-α( 1/r(_Q_2r(x_0,t_0) |w_|^2)^1/2+ (_Q_2r(x_0,t_0) |∇^2 u_0|^p)^1/p +^2(_Q_2r(x_0,t_0) |∇^3 u_0|^p)^1/p +^2(_Q_2r(x_0,t_0) |∇∂_t u_0|^p)^1/p) .Since one easily checks that Ψ^_d+1,i,j ∂_i ∂_j u_0_C_^α(Q_r(x_0,t_0)) ≲ ∂_i ∂_j u_0_C_^α(Q_r(x_0,t_0)) + ^-αsup_z∈Q_r(x_0,t_0) |∂_i ∂_j u_0(z)|we conclude that u_ -u_0- ∑_i=1^d Φ_i^∂_i u_0_C_^α(Q_r(x_0,t_0))≲ r^1-α( 1/r(_Q_2r(x_0,t_0) |w_|^2)^1/2+ (_Q_2r(x_0,t_0) |∇^2 u_0|^p)^1/p +^2(_Q_2r(x_0,t_0) |∇^3 u_0|^p)^1/p +^2(_Q_2r(x_0,t_0) |∇∂_t u_0|^p)^1/p) +^2 ∇^2 u_0_C_^α(Q_r(x_0,t_0)) + ^2-αsup_z∈ Q_r(x_0,t_0) |∇^2 u_0(z)|.Finally, note that(_Q_2r(x_0,t_0) |w_|^2)^1/2≤(_Q_2r(x_0,t_0) |u_-u_0|^2)^1/2 +sup_z∈Q_2r(x_0,t_0) |∇u_0(z)| +^2 sup_z∈Q_2r(x_0,t_0) |∇^2 u_0(z)|and thusu_ -u_0- ∑_i=1^d Φ_i^∂_i u_0_C_^α(Q_r(x_0,t_0))≲ r^1-α( 1/r(_Q_2r(x_0,t_0) |u_-u_0|^2)^1/2+ (_Q_2r(x_0,t_0) |∇^2 u_0|^p)^1/p +^2(_Q_2r(x_0,t_0) |∇^3 u_0|^p)^1/p +^2(_Q_2r(x_0,t_0) |∇∂_t u_0|^p)^1/p) +^2 ∇^2 u_0_C_^α(Q_r(x_0,t_0)) + ^2-αsup_z∈ Q_r(x_0,t_0) |∇^2 u_0(z)|+/r^αsup_z∈ Q_2r(x_0,t_0) |∇ u_0(z)| + ^2/r^αsup_z∈ Q_2r(x_0,t_0) |∇^2 u_0(z)|.The proof is completed after applying the inequality(_Q_2r(x_0,t_0) |g|^p)^1/p≤g_Ł^∞(Q_2r(x_0,t_0))and collecting terms. For α∈ (0,1) there exists μ>0 such thatΓ_( · ; y, s )- Γ^1,0_ ( · ; y, s )_C_^α(Q_√( |t-s|)/8(x,t))+ Γ_(x,t;·)- Γ^0,1_(x,t;·)_C_^α(Q̃_√( |t-s|)/8(y,s))≲/|t-s|^d+1+α/2exp(-μ|x-y|^2/t-s) , uniformly over x,y∈ℝ^d, -∞ <s<t<∞ and ∈ (0,1].We only show the bound for the first term on the left-hand side since the other one follows analogously.First consider the case ≤ r:= √( |t-s|)/8. Let u_(·)= Γ(· ;y,s ) and u_0(·)= Γ̅(· ;y,s ) onQ_√( |t-s|)/4(x,t). 
Then it follows from Proposition <ref> that u_ -u_0- ∑_i=1^d Φ_i^∂_i u_0_C_^α(Q_r(x,t))≲ 1/r^α(_Q_2r(x,t) |u_-u_0|^2)^1/2 +r^2-αsup_Q_2r(x,t)( |∇^3 u_0| + |∇∂_t u_0|)+/r^αsup_z∈ Q_2r(x,t) |∇ u_0(z)| + r^1-αsup_z∈ Q_2r(x,t) |∇^2 u_0(z)| +^2 ∇^2 u_0_C_^α(Q_r(x,t)) ,where we used <r.Thus, we conclude similarly to the proof of Theorem <ref>, but using (<ref>) to bound the first term. For >r=√( |t-s|)/8 note thatΓ_( · ; y, s )- Γ^1,0_ ( · ; y, s )_C_^α(Q_√( |t-s|)/8(x,t))≤ Γ_( · ; y, s )_C_^α(Q_√( |t-s|)/8(x,t)) + Γ̅_C_^α(Q_√( |t-s|)/8(x,t)) +∑_i=1^d Φ_i^ (x,t) ∂_iΓ̅( · ; y, s )_C_^α(Q_√( |t-s|)/8(x,t)) .The claim follows by estimating the first summand using Proposition <ref> and the remaining ones directly.The following is a direct consequence of Proposition <ref>.For α∈ (0,1) there exists μ>0 such that Γ_( · ; y, s )- Γ̅( · ; y, s )_C_^α(Q_√( |t-s|)/8(x,t)) +Γ_(x,t;·)- Γ̅(x,t;·)_C_^α(Q̃_√( |t-s|)/8(y,s))≲(/|t-s|^d+1+α/2∨^1-α/|t-s|^d+1/2)exp(-μ|x-y|^2/t-s)uniformly over x,y∈ℝ^d, -∞ <s<t<∞ and ∈ (0,1]. For the purpose of this article the next proposition is the main input from periodic homogenisation theory.For α, α'∈ (0,1), there exists μ>0 such that Γ_-Γ^1,1__C_^α, α'(Q_√( |t-s|)/8(x,t)×Q̃_√( |t-s|)/8(y,s)) ≲/|t-s|^d+1+α+ α'/2exp(-μ|x-y|^2/t-s)+ ^2/|t-s|^d+2+α+ α'/2exp(-μ|x-y|^2/t-s) ,uniformly over x,y∈ℝ^d, -∞ <s<t<∞ and ∈ (0,1].We proceed similarly to the proof of Proposition <ref>. Consider first the case ≤ r:= √( |t-s|)/8. For z', z̅'∈Q̃_r(y,s), letu_^z', z̅' = Γ_(·, z')- Γ_(·, z̅')andu_0;^z', z̅' = Γ^0,1_(·, z')- Γ^0,1_(·, z̅') .Thus, by Proposition <ref> Γ_-Γ_^1,1_C_^α, α'(Q_r(x,t)×Q̃_r(y,s))= sup_z',z̅'∈Q̃_r(y,s) Γ_(·, z')- Γ_(·, z̅') -(Γ^1,1_(·, z')- Γ^1,1_(·, z̅') )_C_^α(Q_r(x,t)) /|z'-z̅'̅|^α'= sup_z',z̅'∈Q̃_r(y,s) u_^z', z̅' -u_0;^z', z̅'- ∑_i=1^d Φ_i^∂_i u_0;^z', z̅' _C_^α(Q_r(x,t)) /|z'-z̅'̅|^α'≲1/r^αsup_z∈Q_2r(x,t)sup_z',z̅'∈Q̃_r(y,s)|u_^z', z̅'(z)-u_0;^z', z̅'(z)|/|z'-z̅'̅|^α' + r^2-αsup_z∈Q_2r(x,t) sup_z',z̅'∈Q̃_r(y,s)|∇^3 u_0;^z', z̅' (z)| + |∇∂_t u_0;^z', z̅'(z)|/|z'-z̅'̅|^α'+ sup_z∈Q_2r(x,t) sup_z',z̅'∈Q̃_r(y,s)(/r^α |∇u_0;^z', z̅'(z)| /|z'-z̅'̅|^α'+ r^1-α |∇^2 u_0;^z', z̅'(z)| /|z'-z̅'̅|^α' )+^2 sup_z',z̅'∈Q̃_r(y,s) 1/|z'-z̅'̅|^α' ∇^2 u_0;^z', z̅'_C_^α(Q_r(x,t)).Bounding each term separately, we find by Proposition <ref>1/r^αsup_z∈Q_2r(x,t)sup_z',z̅'∈Q̃_r(y,s)|u_^z', z̅'(z)-u_0^z', z̅'(z)|/|z'-z̅'̅|^α' ≲C/|t-s|^d+1+α+α'/2 exp(-μ|x-y|^2/t-s )and using that <r on each of the remaining terms the same upper bound. For > r:= √( |t-s|)/8 and setting A = Q_r(x,t)×Q̃_r(y,s), note that the claim follows from Γ_- Γ^1,1__C_^α, α'(A) ≤Γ__C_^α, α'(A) + Γ̅_C_^α, α'(A)+∑_i=1^d Φ_i^∂_i;1Γ̅_C_^α, α'(A) +∑_j=1^d Φ̃_j^∂_j;2Γ̅_C_^α, α'(A)+^2∑_i,j=1^dΦ_i^Φ̃_j^∂_i;1∂_j;2Γ̅_C_^α, α'(A) ,using Proposition <ref> to bound Γ__C_^α, α'(A).§.§ Post-processing of kernel estimates We fix a cutoff function κ: ℝ→ [0,1] such that* κ(t)=0 for t<0 and for t>2, * κ(t)=1 for t∈ (0,1), * κ|_ℝ_+ is smooth, and writeκ^(t)=κ(t/^2) as well asκ^_c(t)= 1_{t>0}(1-κ^(t) ). We define Γ̅_ (x,t;ζ, τ)= κ^_c(t-τ) Γ̅ (x,t;ζ, τ) and Γ̃_ (x,t; ζ, τ) = (1+Φ_(x,t) ∇_x) (1+Φ̃_(ζ,τ) ∇_ζ) Γ̅_ (x,t; ζ, τ).We also fix χ: ℝ^d→ [0,1] smooth and compactly supported on [-2/3, 2/3]^d⊂ℝ^d such that ∑_k∈ℤ^dχ(x+k)=1 for all x∈ℝ^d. 
Finally, for Γ∈{Γ̅,Γ_, Γ̃_} setK (t,x;s,y)=∑_k∈ℤ^dκ(t-s)χ(x-y) Γ(x,t;y+k,s) and denote the resulting kernels by K̅ ,K_ and K̃_ respectively.Recall from <cit.> the space of kernels ^β_L,R= { K∈ : K_β;L,R<+∞} which for L,R∈ (0,1) is equipped with the normK_β;L,R =inf_{K_n}_n≥0 (sup_n∈ℕ K_n_Ł^∞/2^n(||- β) + sup_n∈ℕ K_n_C^0,R_/2^n(||- β+R)+sup_n∈ℕ K_n_C^L,0_/2^n(||- β+L)+ sup_n∈ℕ K_n_C^L,R_/2^n(||- β+L+R) ),where the infimum is taken over over all kernel decomposition K(z,z')= ∑_n≥ 0 K_n(z,z') such that each K_n for n≥ 1 is supported on{(z,z')∈ (ℝ^d+1)^× 2 :|z-z'|_≤ 2^-n+1} and K_0 is supported on {(z,z')∈ (ℝ^d+1)^× 2 :|z-z'|_≤ C } for some C>0.For every R,L∈ (0,1) there exists C>0 such thatK__β;L,R + K̃__β;L,R + K̅_β;L,R <Cuniformly over ∈ (0,1], β∈ (0,2]. Let φ∈𝒞^∞_c(B_2∖ B_1/2) be such that ∑_n=1^∞φ^2^-n(x)=1 for every x∈ B_1/2∖{0}. For K∈{K̅, K_, K̃_} set for n≥ 1K_n(z;z')=φ^2^-n(z-z') K(z;z') and K_0= K- ∑_n≥ 1K_n.We shall show the bound for this decomposition of the kernels. The bound on K̅ is standard, and the bound on K_ follows directly from Theorem <ref>, Proposition <ref> and Proposition <ref>. Finally, we see from (<ref>) that the bound on K̃_,n for 2^-n< follows directly from the bound on K̅, while for 2^-n≥ we consider the following two cases separately: * t-s<2: here, the bounds follow again from the bound on K̅.* t-s≥2: here, the bounds follow Corollary <ref>, resp.Proposition <ref> and the triangle inequality.The following Proposition is sufficient to treat the equation considered here. For every R,L∈ [0,1) it holds thatK_- K̃__β;L,R≲^2-β∨uniformly over β∈ [0,2]. If furthermore 0<R+L<1, it also holds thatK_- K̅_β;L,R≲^2-β∨^1-R-Luniformly over β∈ [0,2]. By interpolating between the bounds of Theorem <ref> and Proposition <ref> we find that|K_n,(z;z')-K̅_n(z;z')| ≲ (^2-β∨) 2^n(||-β) .By direct computation, we see that|K̃_n,(z;z')-K̅_n(z;z')| ≲(^2-β ∨) 2^n(||-β)and thus|K_n,(z;z')-K̃_n,(z;z')|≤ |K_n,(z;z')-K̅_n(z;z')|+ |K̃_n,(z;z')-K̅_n(z;z')|≲ (^2-β∨) 2^n(||-β). By interpolating between Proposition <ref> and Proposition <ref> one finds that K_n,-K̃_n_C^L,R_(Q_2× Q_2)≲ 2^n(||-2+L+R) if <2^-n, (^2-β∨) 2^n(||-β+L+R) otherwise.The bounds on K_n,-K̃_n_C^L,0_(Q_2× Q_2), K_n,-K̃_n_C^0,R_(Q_2× Q_2) follow similarly but using Proposition <ref> and the triangle inequality, which concludes the proof of (<ref>). Finally,K̅_n-K̃_n,_C^L,R_(Q_2× Q_2)≲^1-L-R 2^||-1 if <2^-n, (^2-β∨)2^||-β+L+R otherwise.The remaining bounds follow from Corollary <ref>, which combined with (<ref>) yields (<ref>).§ FIXED POINT THEOREM For v_0 ∈𝒞^η (𝕋^d) write(K_(t) v_0) (x)= v_0(K(x,t;· ,0)) = v_0(κ(t)Γ_(x,t; · ,0)) It follows that v= K_ (v_0) satisfies the initial value-problem v(0)=v_0,(∂_t - ℒ_) v=0, on 𝕋^d ×(0,1) .For -1<η < γ<1, η≠ 0 it holds that K_ ( ·)v_0_C^γ, η_T≲_T v_0_η uniformly over ∈ℕ^-1 and v_0∈𝒞^η (𝕋^d). Furthermore, for η<0, κ∈ [0,1) it holds thatK_( ·) v_0- K̅ ( ·) v_0_C^γ, η- κ_T≲_T (^κ∨^1+η-γ) v_0_η.For the first part of the Lemma we use Proposition <ref> with β=2. We see that for n_t∈ℕ such that √(t)∈ [2^-n_t-1, 2^n_t)|(K_(t) v_0) (x)|≤∑_n=0^n_t |v_0(K_,n(x,t; · ,0)) |≲∑_n=0^n_t2^-nη ≲|t|^η/2 ∧0 . 
Similarly we observe that for t<τ|(K_(t) v_0) (x)- (K_(τ) v_0)(ζ) |≤∑_n=0^n_t | v_0(K_,n(x,t; · ,0)-K_,n(ζ,τ; · ,0)) |≲(|x-ζ| + √(|t-τ|))^γ∑_n=0^n_t2^-n(η- γ) ≲(|x-ζ| + √(|t-τ|))^γ|t|^(η- γ)/2 ,where in the last line we used that η<γ.In order to conclude the second part of the lemma, we use (<ref>) of Proposition <ref> with β= 2-κ and find for n_t as above,|(K̅(t) v_0- K_(t) v_0) (x)| ≲∑_n=0^n_t | v_0((K̅_n - K_,n)(x,t; · ,0))| ≲∑_n=0^n_t(^2-β∨^1+η )2^n(2-β-η) ≲(^κ∨^1+η) |t|^(η-κ)/2∧0 =(^κ∨^1+η) |t|^(η-κ)/2 ,where we used that η<0. Similarly, we find|(K_(t)-K̅(t)) v_0 (x)- (K_(τ) - K̅ (τ))v_0(ζ) | ≲(^κ∨^1+η-γ) (|x-ζ| + √(|t-τ|))^γ|t|^(-κ+η-γ)/2 ,where we used that -κ +η -γ<0.Next, we shall prove the main fixed point theorem. Let α, η∈ (-1/10,0), κ∈ (0, 1/10∧ (-η))and γ∈ (-α, 1/4). Then, for every family 𝕏^ = (X^_1,X^_2,X^_3)∈C^α(𝕋^2×ℝ)^× 3 and v_0^∈ C^η+κ(𝕋^2×ℝ) indexed by ∈ [0,1], there exists a T^*∈ (0,1) depending onsup_∈ [0,1]𝕏^_C^α(𝕋^d×(-1,2)) and sup_∈ [0,1] v_0^_C^η+κ (𝕋^d),such that there exist unique solutions{w_}_∈ (0,1]∈ C^γ, η_T^* to the equations∂_t-ℒ_w_ = -( w_)^3 +3 w_^2 X^_1 - 3w_ X_2^+ X_3^ , w_ (0)=v^_0 . for ∈ (0,1] as well as a unique solution w̅∈ C^γ, η_T^* to(∂_t-∇·A̅∇) w̅ = -( w̅ )^3 +3w̅^2 X^0_1 - 3w̅ X^0_2 + X^0_3 , w̅ (0)=v_0^0 .Furthermore, it holds thatw_ - w̅_C^γ,η_T^* ≲^κ + 𝕏^-𝕏^0_C^α(𝕋^d×(-1,2))+ v^_0-v^0_0_C^η(𝕋^d),where the implicit constant depends on sup_∈ [0,1]X^_C^α(𝕋^d×(-1,2)) and sup_∈ [0,1] v_0^_C^η+κ (𝕋^d).For convenience, we opt to use the theory of regularity structures in the proof. This is only done in order to shorten exposition as it allows us to avoid reproving standard lemmata.Consider the regularity structure 𝒯= (T,G) where T_α is spanned by three elements {Ξ_i}_i=1^3, T_0 is spanned by 1, and G is the trivial group consisting of only one element. We consider models (Π,Γ) such that Π_x 1= 1 andΠ_x Ξ_i= X_i for all x∈ℝ^d+1.Next, writing v_(x,t):=( K_(t)v^_0 )(x) and v̅(x,t):=( K̅(t) v^0_0 )(x) we note that v_∈ C^γ,η+κ_T for every T>0 by Lemma <ref>. Thus, we can reformulate the fixed point problem in the language of regularity structures W= (I + J^K_(·) + 𝒩^K_) 𝐑^+ F(W,Ξ) + v_·1 ,where for Ξ=(Ξ_1,Ξ_2, Ξ_3) we write F(z,Ξ)=-( z )^3 +3z^2 Ξ_1 - 3 z Ξ_2 + Ξ_3. Here this simplifies toW(t,x)= w(t,x)·1 , w= K_( 𝐑^+ f(w,𝕏^) )+ v_ for f(z,𝕏)=-( z )^3 +3z^2 X_1 - 3 z X_2 + X_3. We consider K_ as an element of ^β_L,R for β=1, L=R=1/4.In particular, we then find by (<ref>), thatK_- K̅ _β;L,R ≲^2-β∨^1-R-L≤^κ.Note that for our choice of β, L,R it holds that L>γ, R>-α, thus we can apply <cit.> (which is a modification of <cit.> which includes continuity of the fixed point with respect to the kernel) for the choice of models determined by 𝕏= 𝕏^ and 𝕏= 𝕏^0 to obtainthat T^*∈ (0,1)and solutions w^, w̅∈ C^γ, η_T^* exist, and furthermorew_ - w̅_C^γ,η_T^* ≲^κ +𝕏^-𝕏^0_C^α(𝕋^d×(-1,2))+ v_- v̅_C^γ, η_T^*.Sincev_(x,t)- v̅(x,t)= ( K_(t)(v^_0 -v^0_0 ))(x) + ( (K_(t)- K̅(t))v^0_0 )(x)and thus by Lemma <ref>v_- v̅ _C^γ,η_T^* ≲v^_0- v^0_0_ C^η + ^κv^0_0_C^ η+κ,this completes the proof.§.§ An a priori boundIn this section we show that the a priori estimate for the ϕ^4_2 model of <cit.> holds uniformly in the homogenisation length-scale.Fix p∈ 2ℕ large enough and α<0 such that |α| is small enough (depending on p, see <cit.> for the precise condition). 
Let T>0 and M>0, then there exists C<0 such that if for some ∈(0,1], γ>0, T'∈ (0,T] and 𝕏∈C^α(𝕋^2×ℝ)^× 3 satisfying 𝕏_C^α(𝕋^d×(-1,T+1))< M, if w∈ C^γ(𝕋^2× [0,T']) is a solution to∂_t-ℒ_w = -w^3 +3 w^2 X_1 - 3w X_2 + X_3 , thensup_0≤t≤T' w(t)_Ł^p(𝕋^2)≤ w(0)_Ł^p(𝕋^2) + C.Importantly the constant C>0 does not depend on . The proof adapts essentially mutatis mutandis from <cit.> using only the qualitative fact that A(x/,t/^2) is Hölder continuous together with uniform ellipticity. We point out the minor modifications that are required:* By Schauder estimates one finds that (0,T'] ∋t↦∇ w(t) ∈ L^∞ (𝕋^2) is continuous.* Using Hölder continuity of w one obtains the analogue of <cit.> for 0<κ <t≤ T'⟨w(t), ϕ⟩-⟨w(κ), ϕ⟩= ∫^t_κ- ⟨A(·/, s/^2) ∇w(s), ∇ϕ⟩+ ⟨F(w(s), 𝕏_s), ϕ⟩ , ds. * Then, arguing exactly as in the proof of <cit.> one finds thatfor 0<κ <t≤ T' 1/p( w(t)_Ł^p(𝕋^2) - w(κ)_Ł^p(𝕋^2))=-∫^t_κ(p-1)⟨A(·/, s/^2) ∇w(s), w(s)^p-2 ∇w(s) ⟩ds+ ∫^t_κ⟨F(w(s), 𝕏_s), w(s)^p-1⟩ , ds. * Using that A is uniformly elliptic and bounded one concludes as in the proof of <cit.> thatsup_κ≤t≤T' w(t)_Ł^p(𝕋^2)≤ w(κ)_Ł^p(𝕋^2) + C. Letting κ→ 0 completes the proof. § CONTINUITY OF THE MODEL FOR THEOREM <REF>Let ξ_, δ be as in (<ref>),we define1>_,δ(z)= ∫_ℝ^d+1 K_(z, z') ξ_,δ(z') dz'.Let d≥ 2. For α>0, 1>α'>0, there exists a modification of (<ref>) such that for all T>0 the map[0,1]^2 →C^α'/2( [-T,T], C^-d/2+1-α- α'(ℝ^d) ), (, δ)↦1>_,δ is a.s. continuous.Using the stochastic estimates of Proposition <ref>, the claim follows by applying a Kolmogorov-type criterion, c.f. <cit.>, to the Banach space valued random variables{1>_, δ (·,t)}_(, δ, t)∈ [0,1]^2× [-T,T].Let α, α' ,κ>0 such that α'+κ<1.Using the shorthand notation _s,t1>_, δ := 1>_, δ (·, t)- 1>_, δ (·, s) it holds that *[⟨1>_, δ( · , t) , ψ^λ_x ⟩^2]^1/2≲λ^-α-d/2+1*[⟨_s,t1>_, δ , ψ^λ_x ⟩^2]^1/2≲ |t-s|^α'/2λ^-α-α'-d/2+1*[⟨_s,t1>_, δ-_s,t1>_0, δ , ψ^λ_x ⟩^2]^1/2≲^κ |t-s|^α'/2λ^-α-α'-κ-d/2+1*[⟨_s,t1>_, δ-_s,t1>_, 0 , ψ^λ_x ⟩^2]^1/2≲δ^κ |t-s|^α'/2λ^-α-α'-κ-d/2+1 ,uniformly over |t-s| ∨λ≤ 1, ,δ∈ (0,1], x ∈^d,and ψ∈𝔅^0(ℝ^d). We note that for z=(x,t) and writing ϕ^δ(t)= 1/δ^2ϕ(t/δ^2) 1>_, δ (x,t) = ∫_ℝ^d+1 K_(x,t; ζ, τ) ξ_,δ(ζ, τ) dζdτ = ∫_ℝ^d+1 Γ_(x,t; ζ, τ) κ(t-τ) ξ_,δ(ζ,τ) dζdτ = ∫_ℝ^d+1 Γ_(x,t; ζ, τ) κ(t-τ) ( ∫_ℝ^d+1 ϕ^δ(τ-s) Γ_(ζ,τ; y, τ-δ^2) ξ(dy, ds) ) dζdτ = ∫_ℝ^d+1 (∫_ℝ ϕ^δ(τ-s) κ(t-τ)Γ_(x,t;y, τ-δ^2)dτ)ξ(dy, ds)= ∫_ℝ^d+1 (∫_ℝ ϕ^δ(τ) κ(t-s-τ)Γ_(x,t;y, s+ τ-δ^2)dτ)ξ(dy, ds)= ∫_ℝ^d+1 H_, δ(x,t;y,s)ξ(dy, ds),where H_, δ is defined in (<ref>). Thus, the inequalities follow from Proposition <ref> and Proposition <ref>.§.§ RenormalisationRecall the functions κ^(t) and κ^_c(t) from (<ref>), we define𝒵_(x,t;ζ, τ):= κ^(t-τ)Z̅^(x,t;ζ,τ) + κ_c^(t-τ)Γ̅ (x,t;ζ, τ) .It follows from Corollary <ref> and Proposition <ref> that |(Γ_-𝒵_)(x,t;ζ, τ)|≲(^-θκ^(t-τ)/(t-τ)^d/2-θ/2 +κ_c^(t-τ) /(t-τ)^(d+1)/2 )exp( -μ|x-ζ|^2/(t-τ) ) .Similarly it follows from Corollary <ref> and Corollary <ref> that |(Γ_-𝒵_)(x,t;ζ, τ-δ^2)-(Γ_-𝒵_)(x,t;ζ, τ)| ≲δ^θκ^(t-τ)((1_δ≤^θ/ (t-τ)^d/2+θ/2 +1_δ≤/ (t-τ)^d/2) ∧1_δ≤δ^-θ^-θ/ (t-τ)^d/2-θ/2+ 1_δ>/ (t-τ)^d/2) exp( -μ |x-ζ|^2/t-τ)+ δ^θ^1-θκ_c^(t-τ)/(t-τ)^(d+1)/2exp( -μ |x-ζ|^2/(t-τ)).Furthermore, it holds that |𝒵_(x,t;ζ, τ)|≲1/(t-τ)^d/2exp( -μ |x-ζ|^2/(t-τ)) .as well as |𝒵_(x,t;ζ, τ-δ)-𝒵_(x,t;ζ, τ)|≲δ^θ(1/(t-τ)^d/2+θ/2∨^-θ/(t-τ)^d/2)exp( -μ |x-ζ|^2/(t-τ)) . In dimension d=2, we claim that 𝒵_ is a sufficiently good, explicit approximation of Γ_. 
In order to quantify this, we definef_,δ(x,t) = ∫_ℝ∫_ℝ(ϕ^δ)^*2 ( τ-τ' ) κ(t-τ)κ(t-τ') ( ∫_ℝ^dΓ_(x,t;η, τ-δ^2) Γ_(x,t;η, τ'-δ^2)-𝒵_ (x,t;η, τ-δ^2) 𝒵_ (x,t;η, τ'-δ^2)dη) dτdτ' . For each ∈ (0,1] the function f_, δ is (ℤ)^2× (^2ℤ)-periodic and satisfiessup_, δ∈(0,1]^2 f_, δ_Ł^∞<+∞ .Furthermore, for δ_0>0, it holds that lim_→ 0f_,δ_0_Ł^∞=0 and for _0>0, the functions f__0,δ converge pointwise as δ→ 0. Lastly, it holds that for every κ∈ [0, θ)sup_, δ∈(0,1]^2 |f_, δ- f_,0|/δ^κ ^-κ<+ ∞ . WritingΓ_(x,t;η, τ-δ^2) Γ_(x,t;η, τ'-δ^2) -𝒵_ (x,t;η, τ-δ^2) 𝒵_ (x,t;η, τ'-δ^2) =(Γ_(x,t;η, τ-δ^2) -𝒵_ (x,t;η, τ-δ^2)) Γ_(x,t;η, τ'-δ^2) -𝒵_ (x,t;η, τ-δ^2)( Γ_(x,t;η, τ'-δ^2)-𝒵_ (x,t;η, τ'-δ^2))we see by (<ref>) and (<ref>) that|Γ_(x,t;η, τ-δ^2) Γ_(x,t;η, τ'-δ^2) -𝒵_ (x,t;η, τ-δ^2) 𝒵_ (x,t;η, τ'-δ^2)| ≲(^-θκ^(t-τ+δ^2)/(t-τ+δ^2)^d/2-θ/2 +κ_c^(t-τ+δ^2)/(t-τ+δ^2)^(d+1)/2)×1/(t-τ'+δ^2)^d/2exp( -μ |x-ζ|^2/(t-τ+δ^2) -μ |x-ζ|^2/(t-τ'+δ^2))+ (^-θκ^(t-τ'+δ^2) /(t-τ'+δ^2)^d/2-θ/2 +κ^_c(t-τ'+δ^2)/(t-τ'+δ^2)^(d+1)/2)×1/(t-τ+δ^2)^d/2exp( -μ |x-ζ|^2/(t-τ+δ^2) -μ |x-ζ|^2/(t-τ'+δ^2)).Therefore, by a direct calculation∫_ℝ^d|Γ_(x,t;η, τ-δ^2) Γ_(x,t;η, τ'-δ^2) -𝒵_ (x,t;η, τ-δ^2) 𝒵_ (x,t;η, τ'-δ^2) |dη≲(^-θκ^(t-τ+δ^2) /(t-τ+δ^2)^-θ/2 + κ^_c(t-τ+δ^2) /(t-τ+δ^2)^1/2 + ^-θκ^(t-τ'+δ^2) /(t-τ'+δ^2)^-θ/2 + κ_c^(t-τ'+δ^2) /(t-τ'+δ^2)^1/2)×1/(2t-τ-τ'+2δ^2)^d/2and thus|f_,δ(x,t)|≲∫_ℝ∫_ℝ(ϕ^δ)^*2 ( τ-τ' )(^-θκ^(t-τ+δ^2) /(t-τ+δ^2)^-θ/2 + κ_c^(t-τ+δ^2) /(t-τ+δ^2)^1/2) κ(t-τ)κ(t-τ')/(2t-τ-τ'+2δ^2)^d/2 dτ dτ'≲∫_ℝ∫_ℝ(ϕ^δ)^*2 ( τ-τ' )(^-θκ^(τ+δ^2) /(τ+δ^2)^-θ/2 + κ_c^(τ+δ^2) /(τ+δ^2)^1/2) κ(τ)κ(τ')/(τ+τ'+2δ^2)^d/2 dτ dτ' .Uniform boundedness in , δ can thus be read off directly.We note that lim_→ 0f_,δ_0_Ł^∞=0, since κ(τ) κ^(τ+δ^2)= 0 for all τ∈ℝ whenever δ>2. Note that f__0,δ converges pointwise as δ→ 0 by dominated convergence. Lastly, similarly to above but additionally to (<ref>) and (<ref>) using (<ref>) and (<ref>)it follows that|f_, δ(x,t)- f_,0(x,t)|≲δ^κ^-κ. §.§.§ CountertermsWe define α_,δ:=1/4π∫_ℝ∫_ℝ (ϕ^δ)^*2 ( τ-τ' ) κ(τ)κ(τ') κ^(τ+δ^2)κ^(τ'+δ^2) /(τ+τ'+2δ^2)^d/2dτdτ' andα̅^(1)_,δ=1/4π ∫_ℝ∫_ℝ (ϕ^δ)^*2 ( τ-τ' ) κ(τ)κ(τ') κ_c^(τ+δ^2)κ_c^(τ'+δ^2)/(τ+τ'+2δ^2)^d/2dτdτ'.These constants are chosen such that the functiong_,δ(x,t): =∫_ℝ∫_ℝ(ϕ^δ)^*2 ( τ-τ' ) κ(t-τ)κ(t-τ')×(∫_ℝ^d𝒵_ (x,t;η, τ-δ^2) 𝒵_ (x,t;η, τ'-δ^2) dη) dτdτ'- α_,δ D(x/, t/^2) - α̅^(1)_,δ (A̅)^-1/2satisfies the following bound.For each >0, the function g_, δ is (ℤ)^2× (^2ℤ)-periodic and satisfiessup_, δ∈(0,1]^2 g_, δ_Ł^∞<+∞ .Furthermore, for every >0, lim_δ→ 0 g_, δ= g_, 0 exists.For δ>0,lim_→ 0 g_, δ_Ł^∞=0 . Lastly, it holds that for κ∈ [0,θ)sup_, δ∈(0,1]^2g_, δ- g_, 0_Ł^∞/^-κδ^κ<+∞ .A strightforward calculation along the lines of <cit.> shows that α_,δ D(x/, t/^2)= ∫_ℝ∫_ℝ(ϕ^δ)^*2 ( τ-τ' ) κ(t-τ)κ(t-τ')κ^(t-τ +δ^2) κ^(t-τ' +δ^2) ×(∫_ℝ^dZ̅_ (x,t;η, τ-δ^2) Z̅_ (x,t;η, τ'-δ^2) dη) dτdτ' andα̅^(1)_,δ=∫_ℝ∫_ℝ(ϕ^δ)^*2 ( τ-τ' ) κ(t-τ)κ(t-τ')κ_c^(t-τ +δ^2) κ_c^(t-τ' +δ^2)×(∫_ℝ^dΓ̅ (x,t;η, τ-δ^2) Γ̅ (x,t;η, τ'-δ^2) dη) dτdτ' .Thus g_, δ(x, t)= 2∫_ℝ∫_ℝ(ϕ^δ)^*2 ( τ-τ' ) κ(t-τ)κ(t-τ')κ^(t-τ +δ^2) κ_c^(t-τ' +δ^2) ×(∫_ℝ^dΓ̅ (x,t;η, τ-δ^2) Z̅^ (x,t;η, τ'-δ^2) dη) dτdτ'and using the following bound which can be directly read off (<ref>)|Γ̅ (x,t;η, τ)| + |Z̅^(x,t;η, τ) |≲(t-τ)^-d/2 exp(-μ|x-η|^2/t-τ) a straightforward computation shows that|g_, δ(x, t)|≲∫_ℝ∫_ℝ (ϕ^δ)^*2 ( τ-τ' ) κ(t-τ)κ(t-τ') κ^(t-τ+δ^2) κ_c^(t-τ' +δ^2)/2(t+δ^2) -τ-τ'dτdτ'≲∫_ℝ∫_ℝ (ϕ^δ)^*2 ( τ-τ' ) κ(τ)κ(τ') κ^(τ+δ^2) κ_c^(τ' +δ^2) /2δ^2 +τ+τ'dτdτ'which by substitution (τ, τ')↦ (δ^2τ, δ^2τ') is seen to be bounded. 
For >0 we see that g_, δ converges pointwise as δ→ 0 from (<ref>) and dominated convergence. For δ>0,sinceκ(τ) κ^(τ+δ^2)=0 for all τ∈ℝ we see that g_, δ=0 whenever δ>2. Lastly, the bound |g_, δ(x, t)-g_, 0(x, t)|≲^-κδ^κ follows similarly to the first estimate of the lemma from (<ref>) using the bound (<ref>) on Z̅^ as well as the same bound on Γ̅.We set h_, δ(x,t):= ∫_ℝ∫_ℝ(ϕ^δ)^*2 ( τ-τ' ) κ(t-τ)κ(t-τ')×( ∑_k∈ℤ^2∖{0}∫_ℝ^dΓ_(x,t;η+k, τ-δ^2) Γ_(x,t;η, τ'-δ^2)dη) dτdτ'.For each >0, the function h_, δ is (ℤ)^2× (^2ℤ)-periodic and for , δ>0, z∈ℝ^d the limits h_, 0(z)= lim_ν→ 0 h_, ν(z),h_0, δ(z)= lim_ν→ 0h_ν, δ (z) exist. It holds thatsup_, δ∈(0,1]^2(h_, δ_Ł^∞+ h_, δ- h_, 0_Ł^∞/δ^θ+h_, δ- h_ 0, δ_Ł^∞/^θ)<+∞ .In particular the map (0,1]^2∋ (, δ) ↦ h_, δ∈ C^0 extends continuously to [0,1]^2. The existence of the limits h_, 0 and h_0, δ follows by (<ref>) and dominated convergence. Uniform bounds on h_, δ_Ł^∞ follow from Theorem <ref>.The bound on h_, δ- h_, 0_Ł^∞ follows from Proposition <ref>. The bound on h_, δ- h_ 0, δ_Ł^∞ follows from Proposition <ref>. The last claim is immediate form (<ref>). Recall the definition ofα̅^(1)_,δ in (<ref>) and setα̅^(2)_,δ:= (A̅)^1/2 ∫_[0,1]^3 h_,δ, α̅_,δ:=α̅^(1)_,δ+α̅^(2)_,δ .We also defineF_,δ(x,t):= [ 1>_,δ^2 (x,t) ]- α_,δ D(x/, t/^2) - α̅_,δ(A̅)^-1/2 , and observe thatF_,δ(x,t) =f_, δ(x,t)+g_,δ(x,t) +h_, δ(x,t) - ∫_[0,1]^3 h_,δ . We further define for (,δ)∈ (0,1]^2c_,δ:= ∫_[0,1]^3 F_,δ =∫_[0,1]^3 f_,δ+g_,δ, as well asF̂_,δ= F_,δ- c_,δ. The next lemma follows directly from Lemmata <ref>, <ref>.For >0, the limit lim_δ→ 0 c_, δ= ∫_[0,1]^3 (f_,0+g_,0) exists.For δ>0, one has lim_→ 0 c_, δ=0. A straightforward computation shows that(f_,0+g_,0)(x,t)=∫_ℝ^3κ(t-τ)(Γ^2_(x,t;η, τ) - (κ^ (t-τ) Z̅^(x,t;η, τ))^2-(κ^_c (t-τ) Γ̅(x,t;η, τ))^2 ) dη dτand therefore by substitution (f_,0+g_,0)( x, ^2 t) =∫_ℝ^3κ(^2(t-τ)) (Γ^2_1(x, t; η, τ) - (κ^1 (t-τ) Z̅^1(x, t;η, τ))^2-(κ^1_c ( t-τ) Γ̅( x,t;η, τ))^2 )dη dτ .Thus, (f_,0+g_,0)( x, ^2 t) converges to∫(Γ^2_1(x, t; η, τ) - (κ^1 (t-τ) Z̅^1(x, t;η, τ))^2-(κ^1_c ( t-τ) Γ̅( x,t;η, τ))^2 )dηdτ as → 0 and thereforelim_→0 c_,0 = ∫_[0,1]^3×ℝ^3 ( Γ^2_1(x, t; η, τ) -(κ^1 (t-τ) Z̅^1(x, t;η, τ))^2-(κ^1_c ( t-τ) Γ̅( x,t;η, τ))^2)dx dt dηdτ .For each >0, the function F_, δ is (ℤ)^2× (^2ℤ)-periodic and satisfies for κ∈ [0,θ)sup_, δ∈(0,1]^2 F_, δ_Ł^∞ +sup_, δ∈(0,1]^2F_, δ- F_, 0_Ł^∞/^-κδ^κ<+∞ Furthermore, the map [ν,1] ∋↦ F_, 0∈ C^0([0,1]^3) is κ-Hölder continuous for each ν >0. The estimate (<ref>) follows directly from Lemmata <ref>, <ref> and <ref>. Continuity of (0,1] ∋↦ f_, 0+g_, 0∈ C^0 can be read off of (<ref>) which together with Lemma <ref> implies the last claim.For every κ∈ (0,1) andκ' ∈ [0,θ), it holds that F̂_,δ_C^-κ ≲^κ ,F̂_,δ -F̂_,0 _C^-κ ≲^κ-κ' δ^κ',uniformly over (,δ) ∈ (0,1]^2. Furthermore, for every α<0 the map(0,1]^2→𝒞^-α ,(,δ) ↦F̂_,δextends continuously to [0,1]^2. Since c_,δ=∫_[0,1]^d+1 F_,δ, the first part of the lemma is a direct consequence of Lemma <ref> in the Appendix. The second part of the lemma follows from the first part together with the continuity of (0,1] ∋↦F̂_, 0∈ C^0 which is a consequence of Lemma <ref>.§.§ Renormalised productsFor ≥ 0,δ>0 we define2>_,δ(z) := 1>_,δ(z)^2- α_,δ D(𝒮^z) - α̂_,δ(A̅)^-1/2 -c_,δ,3>_,δ(z) := 1>_,δ(z)^3 - 3(α_,δ D(𝒮^z) - α̂_, δ(A̅)^-1/2 -c_,δ) .One has the following. For any α>0 there exists a modification of (<ref>) such that the map[0,1]^2→C^-α(ℝ^3)^×2, (, δ)↦(2>_,δ,3>_,δ)is a.s. 
continuous.The proof is standard and follows along the lines of the proof of Proposition <ref> applyingKolmogorov's criterion to the C^-α((-T,T)^3)-valued random variables{ 2>_, δ (·)}_(, δ)∈[0,1]^2 ,{3>_, δ (·)}_(, δ)∈[0,1]^2 ,using equivalence of moments for random variables in a finite Wiener Chaos, together with Proposition <ref>.For every α>0, κ∈ [0, θ) and T∈{2>,3>}, it holds that*[⟨ T_, δ , ψ^λ_⋆⟩^2]^1/2≲_αλ^-α*[⟨ T_, δ-T_0, δ , ψ^λ_⋆⟩^2]^1/2≲_α,κ^κλ^-α-κ and for each ∈ [0,1] there exist random variables⟨ T_, 0 , ψ^λ_⋆⟩ satisfying the above and such that furthermore[⟨T_, δ-T_, 0 , ψ^λ_⋆ ⟩^2]^1/2 ≲_α,κ δ^κλ^-α-κ , uniformly over ⋆∈ℝ^3, λ∈ (0,1] and ψ∈𝔅^1(ℝ^3). We write Q_n: Ł^2(Ω)→Ł^2(Ω) for the projection onto the nth Wiener chaos. Then, for δ>0,2>_, δ = Q_2 2>_, δ + Q_0 2>_, δ = 1>_, δ^♢2 + F̂_,δand3>_, δ = Q_3 3>_, δ + Q_1 3>_, δ = 1>_, δ^♢3 + 3F̂_,δ1>_, δ ,where we used ♢ to denote the Wick product. (For δ=0 we define the left-hand side by the right-hand side.) Thus, settingG_,δ(z,z')= ∑_k∈ℤ^d∫_ℝ^d+1H_,δ (z+k,w) H_, δ(z', w) dw , one finds for k∈{1,2,3}[⟨1>_, δ^♢k, ψ^λ_⋆ ⟩^2] = ∫∫G^k_,δ(z,z') ψ^λ_⋆ (z)ψ^λ_⋆(z') dz dz' .The bound of Item <ref> on 1>_, δ^♢ 2 and 1>_, δ^♢ 3 follows directly using Proposition <ref> and Proposition <ref>. The bounds of Item <ref> and Equation (<ref>) on 1>_, δ^♢ 2 and 1>_, δ^♢ 3 follow similarly.The bounds on Q_0 2>_, δ= F̂_, δ follow directly from Lemma <ref> and Lemma <ref>. In order to establish the bound of Item <ref> onQ_1 3>_, δ= F̂_,δ1>_, δnote that[⟨F̂_,δ1>_, δ, ψ^λ_(t,x) ⟩^2]=[⟨1>_, δ, F̂_,δ ψ^λ_(t,x) ⟩^2]≲[⟨1>_, δ, ψ^λ_(t,x) ⟩^2].In order to establish the remaining bounds, writeF̂_,δ1>_, δ= F̂_,δ( 1>_, δ - 1>_0, δ) +F̂_,δ1>_0, δ ,respectivelyF̂_,δ1>_, δ-F̂_,01>_, 0 =F̂_,δ( 1>_, δ-1>_, 0) +(F̂_,δ- F̂_,0)1>_, 0 .Then, the desired bound on the first summand of (<ref>) and (<ref>) follows as in (<ref>).Next, note that we can write G_,δ= ∑_m G_, δ;m whereG_, δ;m(z,z')= ∑_k∈ℤ^d(H_,δ⋆ H_, δ)_m (z+k, z') as in Proposition <ref>. Furthermore writing 𝐅̂_,δ(z,z') =F̂_,δ(z)F̂_,δ(z') andψ^λ(z, z')= ψ^λ(z)ψ^λ(z'), it follows that[ ⟨F̂_,δ1>_0, δ,ψ^λ⟩^2 ]=∑_n ∫∫ G_, δ; n (z,z') 𝐅̂_,δ(z,z')ψ^λ(z, z')dzdz' Let N_λ∈ℕ be such that λ∈ (2^-(N_λ+1), 2^-N_λ],then by (<ref>)∑_n=0^N_λ∫∫ G_, δ; n(z,z') 𝐅̂_,δ(z,z')ψ^λ(z, z')dzdz' ≲∑_n=0^N_λ^-α 2^nκλ^-α≲^-αλ^-α-κ . On the other hand∑_n=N_λ^∞∫∫ G_, δ; n(z,z') 𝐅̂_,δ(z,z')ψ^λ(z, z') dzdz' ≤𝐅̂_,δ_Ł^∞∑_n=N_λ^∞∫∫ G_, δ; n(z,z') ψ^λ(z, z') dzdz' ≤𝐅̂_,δ_Ł^∞∑_n=N_λ^∞∫ψ^λ(z)(∫ G_, δ; n(z,z') ψ^λ(z') dz ) dz'≲𝐅̂_,δ_Ł^∞∑_n=N_λ^∞λ^-|| 2^-n(-κ +||)≲𝐅̂_,δ_Ł^∞λ^-κFinally, the second term of (<ref>) can be bounded similarly. Finally, observe that by combining Proposition <ref> and Proposition <ref> we in particular obtain for α<0 a modelℤ_,δ:= ( 1>_,δ,2>_,δ ,3>_,δ ) ∈C_^α(ℝ^3)^×3continuous in (,δ)∈ [0,1]^2.§ CONTINUITY OF FURTHER MODELSIn this section we check convergence of the model for the regularisations used in Theorem <ref>, Proposition <ref> and Theorem <ref>.§.§ Translation invariant regularisationIn this section we writeϕ^δ:=δ^-d-2(ϕ∘𝒮^δ_) and considerξ^♭_δ(z):=∫_ℝ^d+1 ϕ^δ(z-z') ξ(dz') and1>_,δ^♭(z)= ∫_ℝ^d+1 K_(z, z') ξ^♭_,δ(z') dz'. The proof of the following proposition is an ad verbatim adaptation of the proof of Proposition <ref>, replacing H by H^♭, see (<ref>).Let d≥ 2. For α>0, 1>α'>0, there exists a modification of 1>_,δ^♭ such that for all T>0 the map(0,1]^2 →C^α'/2( [-T,T], C^-d/2+1-α- α'(ℝ^d) ), (, δ)↦1>^♭_,δ extends continuously to [0,1]^2. 
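The self-convolution (ϕ^δ)^*2 appearing repeatedly below is simply the covariance of the mollified noise: since ϕ is even, a one-line computation gives
\begin{aligned}
\mathbb{E}\big[\xi^{\flat}_{\delta}(z)\,\xi^{\flat}_{\delta}(z')\big]
 \;=\; \int_{\mathbb{R}^{d+1}} \phi^{\delta}(z-w)\,\phi^{\delta}(z'-w)\,dw
 \;=\; (\phi^{\delta})^{*2}(z-z')\,,
\end{aligned}
which is why all counterterms in this subsection are integrals against (ϕ^δ)^*2.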
We return to d=2 and define for (, δ)∈ (0,1]^2D̃^♭♭_,δ(x,t): = ∫_(ℝ^3)^× 2(ϕ^δ)^*2 (ζ-ζ', τ-τ' ) κ (τ) κ (τ') ×( κ^(τ)κ^(τ')Z̅^(x,t;ζ,t-τ)Z̅^(x,t;ζ',t-τ') ) dτ dζ dτ' dζ'and α̅^(1),♭: = (A̅)^1/2∫_(ℝ^3)^× 2(ϕ^δ)^*2 (ζ-ζ', τ-τ' ) κ (τ) κ (τ') ×(κ_c^(τ)κ_c^(τ') Γ̅ (x,t;ζ, t-τ)Γ̅ (x,t;ζ', t-τ') ) dτ dζ dτ' dζ'.Furthermore, writing z=(x,t), w=(ζ, τ) and w'=(ζ', τ') we define analogue functions to the ones in Section <ref>: f^♭_,δ(z): =∫_(ℝ^3)^× 2(ϕ^δ)^*2 (w-w' )κ (τ) κ (τ')( K_(z;w) K_(z;w') -𝒵_ (z;w)𝒵_ (z;w')) dw dw',g^♭_, δ(z): =2 ∫_(ℝ^3)^× 2(ϕ^δ)^*2 (w-w' ) κ (τ) κ (τ') κ^(τ) κ_c^(τ') Z̅^(z;ζ,t-τ)Γ̅ (z;ζ', t-τ') dwdw', h^♭_, δ(z): = ∫_(ℝ^3)^× 2(ϕ^δ)^*2 (w-w' )κ (τ) κ (τ')∑_k∈ℤ^2∖{0}Γ_(z + k, w) Γ_(z; w') dwdw' .Lastly, we set α̅^(2),♭_,δ:= (A̅)^1/2∫_[0,1]^3 h^♭_, δ and α̅^♭_,δ= α̅^(1),♭_,δ+α̅^(2),♭_,δ as well asF^♭♭_,δ(x,t):= [ (1>_,δ^♭)^2 (x,t) ]-D^♭♭_,δ(z)- α̅^♭_,δ (A̅)^-1/2, c^♭♭_,δ= ∫_[0,1]^3 F^♭♭_, δ.Exactly as in the previous section one finds thatF^♭♭_,δ(z):= f^♭_,δ(z) + g^♭_, δ(z) + h^♭_, δ(z)- ∫_[0,1]^3 h^♭_, δ,c^♭♭_,δ= ∫_[0,1]^3 f^♭_, δ+ g^♭_, δ. The functionF̂^♭♭_, δ(z)= F^♭♭_, δ(z)- c^♭♭_,δ is (ℤ)^2× (^2ℤ)-periodic, has mean 0, i.e. ∫_[0,1]^d+1F̂_, δ^♭♭=0, and satisfies for κ∈ [0,θ) the bounds sup_, δ∈(0,1]^2 F̂^♭♭_, δ_Ł^∞ +sup_, δ∈(0,1]^2F̂^♭♭_, δ- F̂^♭♭_, 0_Ł^∞/^-κδ^κ<+∞ .Furthermore, F̂_,0^♭♭= F̂_,0 for every ∈ [0,1].Periodicity and the mean 0 property are immediate. The bound (<ref>) follows by observing that the estimates of Lemma <ref>, Lemma <ref> and Lemma <ref> still hold for thefunctions f^♭_,δ, g^♭_, δ and h^♭_, δ, respectively. The last claim follows by noting that for every ∈ (0,1],lim_δ→ 0 f^♭_,δ= lim_δ→ 0 f_,δ, lim_δ→ 0g^♭_, δ=lim_δ→ 0g_, δ and lim_δ→ 0 h_, δ=lim_δ→ 0 h_, δ.DefineD^♭♭_λ(x,t) = ∫_(ℝ^3)^×2 (ϕ^λ)^*2 (ζ-ζ', τ-τ' ) κ(τ) κ(τ')×( κ^1(τ)κ^1(τ')Z̅^1(x,t;ζ,t-τ)Z̅^1(x,t;ζ',t-τ') ) dζdτdζ' dτ'.The function D^♭♭_λ satisfies (<ref>) and lim_λ→∞D^♭♭_λ_Ł^∞= 0. It holds that D̃^♭♭_,δ= D^♭♭_δ/∘𝒮^_. Furthermore, the constants c^♭♭_, δ satisfylim_→ 0 c^♭♭_, δ =0 for δ>0 and lim_δ→ 0 c^♭♭_, δ =lim_δ→ 0 c_, δ for >0 where c_, δ was defined in (<ref>). All claims follow by a simple direct computation.We set for , δ>02>_,δ^♭♭(z) := (1>_,δ^♭(z))^2- D^♭♭_δ/(𝒮^z) - α̅^♭_,δ(A̅)^-1/2 -c^♭♭_,δ,3>^♭♭_,δ(z) := (1>^♭_,δ(z))^3 - 3(D^♭♭_δ/(𝒮^z) + α̅^♭_,δ(A̅)^-1/2 +c^♭♭_,δ) ,and write ℤ^♭♭_, δ= (1>^♭_,δ, 2>^♭♭_,δ,3>^♭♭_,δ ). For any α>0 there exists a modification such that the map(0,1]^2→C^-α(ℝ^3)^×3, (, δ)↦ℤ^♭♭_, δ extends continuously to [0,1]^2. Furthermore,ℤ^♭♭_, 0= ℤ_, 0 for every ∈ [0,1]. The claim about 1>^♭_,δ follows from Proposition <ref>. It thus suffices to show the analogue of Proposition <ref> for 2>^♭_,δ and 3>^♭_,δ, the proof of which adapts mutatis mutandis with the main change being the use of Lemma <ref> instead of <ref>. Lastly, we observe that F̂_,0^♭♭= F̂_,0 for every ∈ [0,1], which implies the last claim. Next we define for(, δ)∈ (0,1]^2 the new constant c_,δ^♭= c^♭♭_, δ + ∫_[0,1]^3 D^♭♭_δ/ and set2>_,δ^♭(z) := (1>_,δ^♭)(z)^2- α̅^♭_,δ(A̅)^-1/2 -c^♭_,δ,3>^♭_,δ(z) := (1>^♭_,δ(z))^3 - 3(α̅^♭_,δ(A̅)^-1/2 +c^♭_,δ) , and ℤ^♭_, δ= (1>^♭_,δ, 2>^♭_,δ,3>^♭_,δ ). For any α>0 andC>0 there exits a modification such that the map^<_C →C^-α(ℝ^3)^×3, (, δ)↦ℤ^♭_, δ ,extends continuously to the closure of ^<_C and it holds thatℤ^♭♭_0, δ= ℤ^♭_0, δ for every δ∈ [0,1]. Observe that for any C>0 the function D^♭_,δ is bounded on ^<_C. 
The claim for2>_,δ^♭= 2>_,δ^♭♭ + D^♭♭_,δ -∫_[0,1]^3 D^♭♭_,δthus follows by combining Lemma <ref> with Prop <ref>.The claim for 3>^♭_,δ=1>_,δ^♭♢2>_,δ^♭ follows along the line of the last part of the proof of Proposition <ref>.§.§ Generic non-translation invariant regularisationsRecall the non-translation invariant regularisation ξ_,δ^♯ of (<ref>) and let1>_,δ^♯(z)= ∫_ℝ^d+1 K_(z, z') ξ^♯_,δ(z') dz'= ∫_(ℝ^d+1)^×2 K_(z, z') ϱ^,δ (z'; w) ξ(w)dw dz'. The proof of the following proposition is an ad verbatim adaptation of the proof of Proposition <ref>, replacing H by H^♯, see (<ref>).Let d≥ 2. For α>0, 1>α'>0 and C>0, there exists a modification of 1>_,δ^♭ such that for all T>0 the map^<_C →C^α'/2( [-T,T], C^-d/2+1-α- α'(ℝ^d) ), (, δ)↦1>^♯_,δ extends continuously to the closure of ^<_C and 1>^♯_,0 =1>_,0 for all ∈ [0,1]. Returning to d=2, let Z^♭ denote Z̅ but with A replaced by the identity matrix, i.e. Z^♭ is the usual translation invariant heat kernel. Then we set α^♯_,δ = ∫( ∫ϱ̅^δ(w,v) ϱ̅^δ(w',v) dv)κ(τ) κ(τ')κ^(τ)κ^(τ')Z^♭(w') Z^♭(w) dw dw' andF^♯_,δ(x,t):= [ (1>_,δ^♯)^2 (x,t) ]- α^♯_,δD(x/,t/^2) , c^♯_,δ = ∫_[0,1]^3 F^♯_,δ. The functionF̂^♯_, δ(z)= F^♯_, δ(z)- c^♯_,δ is (ℤ)^2× (^2ℤ)-periodic, has mean 0 and satisfies for κ∈ [0,θ) the bounds sup_, δ∈(0,1]^2 F̂^♯_, δ_Ł^∞ +sup_, δ∈(0,1]^2F̂^♯_, δ- F̂^♯_, 0_Ł^∞/^-κδ^κ<+∞ .Furthermore, one has lim_δ→ 0 c_,δ^♯=lim_δ→ 0 c_,δ for ∈ (0,1] and F̂_,0^♯= F̂_,0 for every ∈ [0,1]. Using the notation of the beginning of Section <ref> we setϱ^,δ(w,w')= ∫ϱ^,δ(w,v) ϱ^,δ(w',v) dv ,ϱ^(,δ;z)(w,w') =∫ϱ^(,δ;z)(w,v) ϱ^(,δ;z)(w',v) dvandϱ̅^δ(w,w')= ∫ϱ̅^δ(w,v) ϱ̅^δ(w',v) dv.This time we setD̃^♯_,δ(z):= ∫_(ℝ^d+1)^×2 ( ϱ^,δ(w,w') κ(τ) κ(τ')κ^(τ)κ^(τ') ×Z̅^(x,t;ζ,t-τ)Z̅^(x,t;ζ',t-τ') ) dζdτdζ' dτ' andO^♯:= (A̅)^1/2 ∫_(ℝ^d+1)^×2 ( ϱ^,δ(w,w') κ(τ) κ(τ')κ_c^(τ) κ_c^(τ') ×Γ̅ (x,t;ζ, t-τ) Γ̅ (x,t;ζ', t-τ') ) dζdτdζ' dτ' as well as f^♯_,δ,g^♯_,δ and h^♯_,δ by replacing in the definition of the analogous functions in Section <ref> (ϕ^λ)^*2 (w-w') by ϱ^,δ(w,w'). One observes that analogue estimates of Lemma <ref>, Lemma <ref> and Lemma <ref> still hold for the functions f^♯_,δ, g^♯_, δ and h^♯_, δ if one replaces (0,1] by ^>_C for some C>0. Furthermore, such an estimate also holds for O^♯ on ^>_C. Next we observe that ∫_(ℝ^3)^× 2ϱ^(,δ;(x,t))(w,w') κ (τ) κ (τ') κ^(τ)κ^(τ')Z̅^(x,t;ζ,t-τ)Z̅^(x,t;ζ',t-τ') dζ dτ dζ' dτ' = α^♯_,δ D(x/,t/^2) , and one checks thatD̃^♯_,δ(x,t) - α^♯_,δ D(x/,t/^2) also satisfies the estimate of Lemma <ref> on ^>_C. Thus, we conclude (<ref>). The final part of the lemma follows by direct inspection of the definitions combined with the observation that for >0 one has lim_δ→ 0D̃^♯_,δ - α^♯_,δ D∘𝒮_^_L^∞=0 by the same argument as in <cit.>.Define for (, δ)∈ (0,1]^22>_,δ^♯(z) := (1>_,δ^♯)(z)^2- α^♯_,δ(D∘𝒮^_)(z) -c^♯_,δ,3>^♯_,δ(z) := (1>^♯_,δ(z))^3 - 3(α^♯_,δ(D∘𝒮^_)(z) +c^♯_,δ) ,and ℤ^♯_, δ= (1>^♯_,δ, 2>^♯_,δ,3>^♯_,δ ).For any α>0 andC>0there exits a modification such that the map^>_C →C^-α(ℝ^3)^×3, (, δ)↦ℤ^♯_, δ,extends continuously to the closure of ^>_C and ℤ^♯_, 0= ℤ_,0 for every ∈ [0,1]. The claim for 1>_,δ^♯ follows from Proposition <ref>. 
The claim for 2>_ε,δ^♯ and 3>_ε,δ^♯ follows mutatis mutandis from the proof of Proposition <ref>, but using Lemma <ref> and the bounds on H^♯ in Proposition <ref>. § PROOF OF THE MAIN RESULTS The proofs of Theorem <ref>, Theorem <ref>, Proposition <ref> and Theorem <ref> share a common generic part, which we provide first. Using the classical Da Prato–Debussche trick, <cit.>, we consider the equation for w_ε, δ = u_ε,δ - X_1^ε, δ, for X_1^ε, δ∈{1>_ε, δ, 1>^♭_ε, δ, 1>^♯_ε, δ}, which is exactly of the form (<ref>) with initial condition v_0=u_0 - X_1^ε, δ(·,0) and 𝕏∈{ℤ, ℤ^♭, ℤ^♭♭, ℤ^♯}. Thus, the existence of a solution up to a random time T^*>0, as well as continuity in (ε, δ) as a map into Ł^0[C([0,T^*), 𝒟'(𝕋^d))], follow by combining Proposition <ref> with the appropriate continuity results on models: * For Theorem <ref> this is the content of Proposition <ref> and Proposition <ref>. * For Theorem <ref> this is the content of Proposition <ref> and Proposition <ref>. * For Proposition <ref> this is the content of Proposition <ref> and Proposition <ref>. * For Theorem <ref> this is the content of Proposition <ref> and Proposition <ref>. Note that for δ=0 the solutions do not depend on the choice of regularisation, since 𝕏_ε, 0 does not depend on that choice. Consider w_ε, δ(T^*/2)∈ L^∞(𝕋^2) and let p∈ 2ℕ be large enough; using Theorem <ref>, there exists a (random) C>0 such that the Ł^p norm of any possible solution on the full interval [T^*/2,T] is a priori bounded by w_ε, δ(T^*/2)_L^p(𝕋^2) +C. Using the elementary consequence of Hölder's inequality w_C^-d/p(𝕋^d) ≤ w_Ł^p(𝕋^d), apply Proposition <ref> repeatedly to construct a solution u_ε,δ:[0,T]→𝒟'(𝕋^d), as well as to obtain continuity of the solution map as a function of (ε, δ) into Ł^0[C([0,T], 𝒟'(𝕋^d))]. Thus, it remains to prove the parts which are specific to the corresponding results. One checks (<ref>) directly using the formulas (<ref>) and (<ref>). To check Item <ref>, it suffices to note that lim_ε→ 0 α_ε,δ=0 and that lim_ε→ 0 α̅_ε,δ exists, both of which can be read off the definitions directly, as well as that lim_ε→ 0 c_ε,δ=0, which is part of Lemma <ref>. For Item <ref> one checks that lim_δ→ 0 α̅_ε,δ and lim_δ→ 0 c_ε,δ exist; the former can be read off the definition, the latter is part of Lemma <ref>. In order to compare the solution obtained here to the ones of <cit.>, one compares the counterterms used. Finally, Item <ref> was already explained in the generic part of the proof. One checks (<ref>) directly from the definition of α^♭_ε,δ in Section <ref>. To check Item <ref> it suffices to note that for δ>0 it holds that lim_ε→ 0 c^♭_ε,δ=0 and that the limit α̅^♭_0,δ:=lim_ε→ 0 α̅^♭_ε,δ exists. The former follows directly from Lemma <ref> together with the definition of c^♭_ε,δ, while the latter can be read off the definition. Item <ref> was already observed in the generic part of the proof. The estimate (<ref>), Item <ref> and Item <ref> are contained in Lemma <ref>. Item <ref> follows directly from the fact that 𝕏^♭♭_0,δ= 𝕏^♭_0,δ in Proposition <ref>. The first claim of Item <ref>, namely that lim_δ→ 0 c^♭♭_ε,δ=lim_δ→ 0 c_ε,δ, is the content of Lemma <ref>, while the latter claim was already explained in the generic part of the proof. The estimate (<ref>) can be read off (<ref>), Item <ref> is contained in Lemma <ref> and Item <ref> was already explained in the generic part of the proof. § PERIODIC FUNCTIONS Suppose f∈Ł^∞(ℝ^d+1) is (εℤ)^d× (ε^2ℤ)-periodic. Then, for κ∈ (0,1), f- ∫_[0,1]^d+1 f _C_^-κ ≲ ε^κ f_L^∞. Let F(z)= f(𝒮^1/ε z) and C= ∫_[0,1]^d+1 F = ⨍_𝒮^1/ε([0,1]^d+1) f.
We shall show that

|∫_{ℝ^{d+1}} (f(z) − C) ϕ^λ_⋆(z) dz| ≲ ‖f‖_{L^∞} ε^κ λ^{−κ}

uniformly over ϕ ∈ C_c(B_1) satisfying ‖ϕ‖_{C^κ_𝔰} < 1. By translation, it suffices to consider only the case ⋆ = 0. Note that for ε > λ the bound holds trivially. In the case ε < λ, substituting z = 𝒮^ε w and using that F is ℤ^{d+1}-periodic with ∫_{[0,1]^{d+1}+h} (F − C) = 0 for every h ∈ ℤ^{d+1}, we obtain

∫_{ℝ^{d+1}} (f(z) − C) ϕ^λ(z) dz = ε^{|𝔰|} ∑_{h∈ℤ^{d+1}} ∫_{[0,1]^{d+1}+h} F(w) ( ϕ^λ(𝒮^ε w) − ⨍_{[0,1]^{d+1}+h} ϕ^λ(𝒮^ε w') dw' ) dw.

Since |ϕ^λ(𝒮^ε w) − ϕ^λ(𝒮^ε w')| ≲ λ^{−|𝔰|−κ} ε^κ ‖ϕ‖_{C^κ_𝔰} for w, w' in the same unit tile, it follows that

|∫_{ℝ^{d+1}} (f(z) − C) ϕ^λ(z) dz| ≤ ε^{|𝔰|+κ} λ^{−(|𝔰|+κ)} N_{λ,ε} ‖F‖_{L^∞} ‖ϕ‖_{C^κ_𝔰},

where N_{λ,ε} := |{h ∈ ℤ^{d+1} : supp(ϕ^λ ∘ 𝒮^ε) ∩ ([0,1]^{d+1}+h) ≠ ∅}| ≲ ε^{−|𝔰|} λ^{|𝔰|}.

§ REGULARISED KERNELS

Fix χ: ℝ^d → [0,1] smooth and compactly supported on [−2/3, 2/3]^d ⊂ ℝ^d such that ∑_{k∈ℤ^d} χ(x+k) = 1 for all x ∈ ℝ^d, and let

H_{ε,δ}(x,t;y,s) = ∑_{k∈ℤ^d} χ(x−y) ∫_ℝ δ^{−2} ϕ(τ/δ²) κ(t−s−τ) Γ_ε(x,t; y+k, s+τ−δ²) dτ

and

H^♭_{ε,δ}(x,t;y,s) = ∑_{k∈ℤ^d} χ(x−y) ∫_{ℝ^{d+1}} δ^{−2−d} (ϕ∘𝒮^δ)(ξ,τ) κ(t−s−τ) Γ_ε(x,t; y+ξ+k, s+τ) dξ dτ.

For every R, L ∈ (0,1) there exists C > 0 such that

‖H_{ε,δ}‖_{β;L,R} < C, ‖H^♭_{ε,δ}‖_{β;L,R} < C

uniformly over (ε, δ) ∈ [0,1]², β ∈ (1,2]. For R, L ∈ (0,1), β ∈ (1,2] such that R + 2 − β < 1,

‖H_{ε,δ} − H_{ε,0}‖_{β;L,R} ≲ δ^{2−β}, ‖H^♭_{ε,δ} − H^♭_{ε,0}‖_{β;L,R} ≲ δ^{2−β}

uniformly over (ε, δ) ∈ [0,1]². Finally, for 0 < R + L < 1 it holds that

‖H_{ε,δ} − H_{0,δ}‖_{β;L,R} ≲ ε^{2−β} ∨ ε^{1−R−L}, ‖H^♭_{ε,δ} − H^♭_{0,δ}‖_{β;L,R} ≲ ε^{2−β} ∨ ε^{1−R−L}

uniformly over β ∈ [0,2].

Set, for φ^{2^{−n}} as in the proof of Proposition <ref> and n ≥ 1,

H_{ε,δ;n}(x,t,y,s) = φ^{2^{−n}}(z−z') H_{ε,δ}(z;z'), H^♭_{ε,δ;n}(x,t,y,s) = φ^{2^{−n}}(z−z') H^♭_{ε,δ}(z;z'),

as well as

H_{ε,δ;0}(x,t,y,s) = ∑_{k∈ℤ^d} H_{ε,δ}(x,t;y+k,s) − ∑_{n=1}^∞ H_{ε,δ;n}(x,t,y,s)

and

H^♭_{ε,δ;0}(x,t,y,s) = ∑_{k∈ℤ^d} H^♭_{ε,δ}(x,t;y+k,s) − ∑_{n=1}^∞ H^♭_{ε,δ;n}(x,t,y,s).

In particular, for both H and H^♭ we thus obtain a decomposition as in the definition of the spaces ^β_{L,R}. Inequalities (<ref>) follow from Theorem <ref>, Proposition <ref> and Proposition <ref>. Similarly, (<ref>) follows from Proposition <ref> and Proposition <ref>. Finally, (<ref>) follows as (<ref>) in Proposition <ref>.

Let

H^♯_{ε,δ}(x,t;y,s) = ∫_{ℝ^{d+1}} κ(t−τ) Γ_ε(x,t,ζ,τ) ϱ^{ε,δ}(ζ,τ; y,s) dζ dτ.

For every R, L ∈ (0,1),

‖H^♯_{ε,δ}‖_{β;L,R} ≲ (δ/ε) ∨ 1

uniformly over (ε, δ) ∈ [0,1]², β ∈ (1,2]. For R, L ∈ (0,1), β ∈ (1,2] such that R + 2 − β < 1,

‖H^♯_{ε,δ} − H^♯_{ε,0}‖_{β;L,R} ≲ ((δ/ε) ∨ 1) δ^{2−β}

uniformly over (ε, δ) ∈ [0,1]². Finally, for 0 < R + L < 1 it holds that

‖H^♯_{ε,δ} − H^♯_{0,δ}‖_{β;L,R} ≲ ((δ/ε) ∨ 1) ε^{2−β} ∨ ε^{1−R−L}

uniformly over β ∈ [0,2].

We decompose H^♯_{ε,δ} analogously to (<ref>). Then, (<ref>) and (<ref>) follow by a more tedious but similar computation to (<ref>) and (<ref>). To see (<ref>), we introduce

H̅^♯_{ε,δ}(x,t;y,s) = ∫_{ℝ^{d+1}} κ(t−τ) Γ̅(x,t,ζ,τ) ϱ^{ε,δ}(ζ,τ; y,s) dζ dτ

and estimate separately the two terms ‖H^♯_{ε,δ} − H̅^♯_{ε,δ}‖_{β;L,R} and ‖H̅^♯_{ε,δ} − H^♯_{0,δ}‖_{β;L,R}.

§ CONVOLUTION

Let L, L', R, R' < 1 and β, β' ∈ (0, |𝔰|). Assume that |𝔰| − β − β' > 0, L < β, L' < β'. Then, for H ∈ ^β_{L,R} and H' ∈ ^{β'}_{L',R'}, the kernel

(H ⋆ H')(z,z') := ∫_{ℝ^{d+1}} H(z,w) H'(z',w) dw

belongs to ^{β+β'}_{L,L'}, and furthermore it holds that ‖H ⋆ H'‖_{β+β'; L, L'} ≲ ‖H‖_{β;L,R} ‖H'‖_{β';L',R'}.

We write H = ∑_{n=0}^∞ H_n, H' = ∑_{m=0}^∞ H'_m, thus

H ⋆ H' = ∑_{m≥0} (H ⋆_> H')_m + (H ⋆_< H')_m + H_m ⋆ H'_m,

where

(H ⋆_> H')_m := ∑_{n>m} H_n ⋆ H'_m, (H ⋆_< H')_m := ∑_{n>m} H_m ⋆ H'_n.

Note that (H ⋆ H')_m is supported on {(z,z') ∈ (ℝ^{d+1})^{×2} : |z−z'|_𝔰 < 2^{−m+1}}.
For γ ∈ {0, L} and γ' ∈ {0, L'}, with the understanding that ‖·‖_{C^{0,0}} = ‖·‖_{L^∞}, we find
* ‖H_m ⋆ H'_m‖_{C^{γ,γ'}} ≲ 2^{−m|𝔰|} 2^{m(|𝔰|−β+γ)} 2^{m(|𝔰|−β'+γ')} = 2^{m(|𝔰|−β−β'+γ+γ')},
* ‖(H ⋆_> H')_m‖_{C^{γ,γ'}} ≲ ∑_{n>m} 2^{−n|𝔰|} 2^{n(|𝔰|−β+γ)} 2^{m(|𝔰|−β'+γ')} ≲ 2^{m(|𝔰|−β−β'+γ+γ')},
* similarly, ‖(H ⋆_< H')_m‖_{C^{γ,γ'}} ≲ 2^{m(|𝔰|−β−β'+γ+γ')}.
Thus, writing H ⋆ H' = ∑_{m≥0} (H ⋆ H')_m, where

(H ⋆ H')_0 = ∑_{i=0}^1 (H ⋆_> H')_i + (H ⋆_< H')_i + H_i ⋆ H'_i

and, for m ≥ 1,

(H ⋆ H')_m = (H ⋆_> H')_{m+1} + (H ⋆_< H')_{m+1} + H_{m+1} ⋆ H'_{m+1},

this concludes the proof.
"authors": [
"Martin Hairer",
"Harprit Singh"
],
"categories": [
"math.AP",
"math.PR",
"60H17, 35B27"
],
"primary_category": "math.AP",
"published": "20231127130352",
"title": "Periodic space-time homogenisation of the $φ^4_2$ equation"
} |
[email protected] Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland [email protected] Institute of Theoretical Physics and Mark Kac Center for Complex Systems Research, Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland Stochastic resetting is a protocol of starting anew, which can be used to facilitate the escape kinetics. We demonstrate that restarting can accelerate the escape kinetics from a finite interval restricted by two absorbing boundaries also in the presence of heavy-tailed, Lévy type, α-stable noise. However, the width of the domain where resetting is beneficial depends on the value of the stability index α determining the power-law decay of the jump length distribution. For heavier (smaller α) distributions the domain becomes narrower in comparison to lighter tails. Additionally, we explore connections between Lévy flights and Lévy walks in the presence of stochastic resetting. First of all, we show that for Lévy walks, stochastic resetting can be beneficial also in the domain where the coefficient of variation is smaller than 1. Moreover, we demonstrate that in the domain where LW are characterized by a finite mean jump duration/length, with the increasing width of the interval LW start to share similarities with LF under stochastic resetting. 02.70.Tt, 05.10.Ln, 05.40.Fb, 05.10.Gg, 02.50.-r

Lévy flights and Lévy walks under stochastic resetting Bartosz Żbik and Bartłomiej Dybiec January 14, 2024

§ INTRODUCTION

Since the pioneering works of Smoluchowski <cit.>, Einstein <cit.>, Langevin <cit.>, Perrin <cit.> and Kramers <cit.>, studies of Brownian motion and random phenomena have attracted steadily growing interest. The probabilistic explanation of the properties of Brownian motion boosted the development of the theory of stochastic processes <cit.>, increased our understanding of random phenomena <cit.> and opened studies on noise-driven systems <cit.> and random walks <cit.>. The Wiener process (Brownian motion, BM) is one of the simplest examples of continuous (in time and space) random processes. Its mathematical properties nicely explain the observed properties of Brownian motion <cit.>, e.g., the linear scaling of the mean square displacement <cit.>. It can be extended in multiple ways, e.g., by assuming a more general jump length distribution, introducing memory or assuming a finite propagation velocity. In that context, Lévy flights (LF) <cit.> and Lévy walks (LW) <cit.> are two archetypal types of random walks <cit.>. In the LF it is assumed that displacements are immediate and generated from a heavy-tailed, power-law distribution. At the same time, in the LW a random walker travels with a finite velocity v for random times distributed according to a power-law density. The assumption that individual jump lengths follow a general α-stable density is supported by multiple experimental observations demonstrating the existence of fluctuations more general than Gaussian. Heavy-tailed, power-law fluctuations have been observed in a plenitude of experimental setups including, but not limited to, biological systems <cit.>, dispersal patterns of humans and animals <cit.>, search strategies <cit.>, gaze dynamics <cit.>, balance control <cit.>, rotating flows <cit.>, optical systems and materials <cit.>, laser cooling <cit.>, disordered media <cit.>, and financial time series <cit.>.
Properties of systems displaying heavy-tailed, non-Gaussian fluctuations are studied both experimentally <cit.> and theoretically <cit.>. Lévy flights have attracted considerable attention due to their well-known mathematical properties, e.g., self-similarity, infinite divisibility and the generalized central limit theorem. Therefore, α-stable noises are broadly applied in diverse models displaying anomalous fluctuations or describing anomalous diffusion.

Stochastic resetting <cit.> is a protocol of starting anew, which can be applied (among others) to increase the efficiency of search strategies. In the simplest version, it assumes that the motion is started anew at random times, i.e., restarts are triggered temporally, making the times of starting over independent of the state of the system, e.g., the position. Among multiple options, resets can be performed periodically (sharp resetting) <cit.>, or at random time intervals following an exponential (Poissonian resetting) <cit.> or a power-law <cit.> density. Starting anew can also be spatially induced <cit.>. Escape kinetics under stochastic resetting display universal properties <cit.> regarding relative fluctuations of first passage times as measured by the coefficient of variation (CV). Typically it is assumed that the restarting is immediate and does not generate additional costs; however, options with overheads are also explored <cit.>. Stochastic resetting has attracted considerable attention due to its strong connection with search strategies <cit.>. During a search, an individual/animal is interested in minimizing the time needed to find a target, which in turn is related to the first passage problem <cit.>. In setups where, due to long excursions in the wrong direction, i.e., to points distant from the target <cit.>, the MFPT can diverge, stochastic resetting is capable of turning the MFPT finite. Furthermore, it can optimize an already finite MFPT <cit.>. Stochastic resetting is capable of minimizing the time to find a target when the coefficient of variation (the ratio between the standard deviation of the first passage times and the MFPT in the absence of stochastic resetting) is greater than unity <cit.>.

Not surprisingly, stochastic resetting is capable of minimizing the MFPT from a finite interval restricted by two absorbing boundaries <cit.>. As demonstrated in <cit.>, in the case of escape from finite intervals restricted by two absorbing boundaries, the mean first passage time (MFPT) for Lévy flights and Lévy walks displays similar scaling <cit.> as a function of the interval width. Therefore, one can study the properties of escape from finite intervals under the combined action of Lévy noise and stochastic resetting, with special attention to whether the escape kinetics still bears some similarities to LW under restarts. The model under study is described in the next section (Sec. <ref> Model and Results). Sec. <ref> (Lévy walks under stochastic resetting) analyzes properties of LW on finite intervals under stochastic resetting and compares them with properties of the corresponding LF. The manuscript is closed with Summary and Conclusions (Sec. <ref>).

§ MODEL AND RESULTS

The noise-driven escape (from any domain of motion Ω) is a stochastic process; therefore, individual first passage times are not fixed but random.
For first passage times it is possible to calculate the relative standard deviation, that is, the coefficient of variation (CV) <cit.>

CV = σ(t_fp)/⟨t_fp⟩ = σ(t_fp)/𝒯 = √(𝒯_2 − 𝒯²)/𝒯 = √(𝒯_2/𝒯² − 1),

which is the ratio between the standard deviation σ(t_fp) of the first passage times (FPT) t_fp and the mean first passage time 𝒯 = ⟨t_fp⟩. In addition to statistical applications, the coefficient of variation plays a special role in the theory of stochastic resetting <cit.>. It provides a useful universal tool for assessing the potential effectiveness of stochastic restarting, which can be used to explore various types of setups under very general conditions. Typically, stochastic resetting can facilitate the escape kinetics in the domain where CV > 1. Therefore, examination of CV given by Eq. (<ref>) (constrained by the fact that resets are performed to the same point from which the motion was started) can be a starting point for exploration of the effectiveness of stochastic resetting.

The escape of a free particle from a finite interval (−L,L) under the action of Lévy noise, i.e., the escape of an α-stable, Lévy type process, can be characterized by the mean first passage time (MFPT) 𝒯, which reads <cit.>

𝒯(x_0) = (L² − |x_0|²)^{α/2} / [Γ(1+α) σ^α],

and the second moment 𝒯_2 = ⟨t_fp²⟩ given by <cit.>

𝒯_2(x_0) = α L^α / [Γ(1+α) σ^α]² × ∫_{|x_0|²}^{L²} [t − |x_0|²]^{α/2−1} ₂F₁[−α/2, 1/2, 1+α/2; t/L²] dt,

where x_0 is the initial condition, ₂F₁(a,b;c;z) stands for the hypergeometric function, while Γ(…) is the Euler gamma function. From Eqs. (<ref>) and (<ref>) with α = 2 one gets

CV = √(2/3) √((L² + x_0²)/(L² − x_0²)).

As follows from Eqs. (<ref>) and (<ref>), the coefficient of variation does not depend on the scale parameter σ. The independence of the CV of the scale parameter σ can be intuitively explained by the fact that σ can be canceled by time rescaling. Such a transformation (linearly) rescales individual FPTs and consequently, in exactly the same way, the MFPT and the standard deviation, making their ratio σ-independent. From Eq. (<ref>) it follows that CV > 1 for

x_0 ∈ (−L, −L/√5) ∪ (L/√5, L),

which is in accordance with earlier findings <cit.>.

Equivalently, the setup corresponding to Eqs. (<ref>) and (<ref>) can be described by the Langevin equation

dx/dt = ξ_α(t)

and studied by methods of stochastic dynamics. In Eq. (<ref>), ξ_α is the symmetric α-stable Lévy type noise and x(t) represents the particle position (with the initial condition x(0) = x_0). The α-stable noise is a generalization of the Gaussian white noise to nonequilibrium realms <cit.>, which for α = 2 reduces to the standard Gaussian white noise. The symmetric α-stable noise is related to the symmetric α-stable process L(t), see Refs. <cit.>. Increments ΔL = L(t+Δt) − L(t) of the α-stable process are independent and identically distributed random variables following an α-stable density with the characteristic function <cit.>

φ(k) = ⟨exp(ik ΔL)⟩ = exp[−Δt σ^α |k|^α].

Symmetric α-stable densities are unimodal probability densities defined by the characteristic function (<ref>); they are given by elementary functions only in a limited number of cases (α = 1, Cauchy density; α = 2, Gauss distribution), but in more general cases they can be expressed using special functions <cit.>. The stability index α (0 < α ⩽ 2) determines the tail of the distribution, which for α < 2 is of power-law type p(x) ∝ |x|^{−(α+1)}.
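As a quick numerical sketch of the CV criterion discussed above, the snippet below evaluates the closed-form CV for α = 2 and locates the threshold initial position numerically; it assumes only numpy, and the grid resolution is an arbitrary choice.

```python
import numpy as np

def cv_alpha2(x0, L=1.0):
    """Coefficient of variation for alpha = 2 (Gaussian white noise):
    CV(x0) = sqrt(2/3) * sqrt((L^2 + x0^2) / (L^2 - x0^2))."""
    x0 = np.asarray(x0, dtype=float)
    return np.sqrt(2.0 / 3.0) * np.sqrt((L**2 + x0**2) / (L**2 - x0**2))

L = 1.0
x0 = np.linspace(0.0, 0.99, 1000)
cv = cv_alpha2(x0, L)
# CV crosses unity at |x0| = L / sqrt(5), delimiting the domain where
# stochastic resetting is guaranteed to accelerate the escape kinetics.
print("numerical CV=1 crossing:", x0[np.argmin(np.abs(cv - 1.0))],
      " vs L/sqrt(5) =", L / np.sqrt(5))
```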
The scale parameter σ (σ > 0) controls the width of the distribution, which can be characterized by an interquantile width or by fractional moments ⟨|x|^κ⟩ of order κ (0 < κ < α), because for α < 2 the variance of α-stable variables diverges. Within our studies we set the scale parameter to unity, i.e., σ = 1. The MFPT can be calculated from multiple trajectories generated according to Eq. (<ref>) as the average of the first passage times

𝒯(x_0) = ⟨t_fp⟩ = ⟨inf{t : x(0) = x_0 ∧ |x(t)| ⩾ L}⟩,

while 𝒯_2(x_0) is the second moment of the first passage time. The Langevin equation (<ref>) can be approximated with the (stochastic) Euler–Maruyama method <cit.>

x(t + Δt) = x(t) + ξ_α^t Δt^{1/α},

where ξ_α^t represents a sequence of independent, identically distributed α-stable random variables <cit.>, see Eq. (<ref>). For the model under study, the coefficient of variation CV(x_0) is a symmetric function of the initial condition x_0, i.e., CV(x_0) = CV(−x_0), as the escape problem (due to the noise symmetry) is symmetric with respect to the sign change of the initial condition x_0. The symmetry follows from the system symmetry (symmetric boundaries and symmetric noise) and consequently is visible not only in Eq. (<ref>) but also in Eqs. (<ref>)–(<ref>). The condition CV > 1 is a sufficient, but not necessary, condition for stochastic resetting to facilitate the escape kinetics <cit.>. As follows from Eq. (<ref>), stochastic resetting can accelerate the escape kinetics if the initial condition, which is equivalent to the point from which the motion is restarted, sufficiently breaks the system symmetry, i.e., if restarting the motion anew is more efficient in bringing a particle towards the target (edges of the interval) than waiting for the particle to approach the target (borders).

Fig. <ref> compares results of numerical simulations (points) with theoretical predictions (solid lines) for α = 2 (Gaussian white noise driving), demonstrating perfect agreement. Due to the system symmetry, in Fig. <ref> we show results for x_0 > 0 only. Moreover, we set the interval half-width L to L = 1. Stochastic resetting, i.e., starting anew from the initial condition x_0, can be used to facilitate the escape kinetics. One of the common restarting schemes is the so-called fixed-rate (Poissonian) resetting, for which the distribution of time intervals between two consecutive resets follows the exponential density ϕ(t) = r exp(−rt), where r is the (fixed) reset rate. Thus, the mean time between two consecutive restarts reads ⟨t⟩ = 1/r. The MFPT under Poissonian resetting for a process driven by GWN (α = 2) <cit.> from the (−L,L) interval restricted by two absorbing boundaries reads

𝒯(x_0) = (1/r) [ sinh(2L/√(σ²/r)) / ( sinh((L−x_0)/√(σ²/r)) + sinh((x_0+L)/√(σ²/r)) ) − 1 ].

Fig. <ref> presents the MFPT as a function of the resetting rate r for various initial positions x_0. MFPTs have been estimated using the so-called direct approach <cit.>. Within such a scheme, from the simulation of the system without resets the (unknown for α < 2) first passage time density is estimated. In the next step, instead of simulating the Langevin dynamics under stochastic resetting, pairs of first passage times (in the absence of resetting) and resetting times are generated, until the first passage time is smaller than the resetting time. The first passage time under restart is equal to the sum of (all) generated time intervals between resets increased by the last first passage time, see <cit.>.
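The following minimal sketch implements the Euler–Maruyama scheme of Eq. (<ref>) with the Chambers–Mallows–Stuck representation of symmetric α-stable variates referenced above, and estimates the MFPT and CV in the absence of resetting; the trajectory number and time step are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetric_stable(alpha, size):
    """Chambers-Mallows-Stuck sampling of standard symmetric alpha-stable variates."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(V)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

def first_passage_times(alpha, x0, L=1.0, dt=1e-3, n_traj=2000, n_max=200000):
    """Euler-Maruyama integration of dx/dt = xi_alpha(t) until |x| >= L (sigma = 1)."""
    x = np.full(n_traj, float(x0))
    t = np.zeros(n_traj)
    alive = np.abs(x) < L
    for _ in range(n_max):
        n_alive = int(alive.sum())
        if n_alive == 0:
            break
        x[alive] += symmetric_stable(alpha, n_alive) * dt ** (1.0 / alpha)
        t[alive] += dt
        alive = alive & (np.abs(x) < L)  # escaped trajectories are frozen
    return t[~alive]                     # FPTs of escaped trajectories only

fpt = first_passage_times(alpha=1.5, x0=0.5)
print("MFPT ~", fpt.mean(), " CV ~", fpt.std() / fpt.mean())
```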
In Fig. <ref>, points representing results of computer simulations with α = 2 nicely follow the solid lines demonstrating the theoretical predictions given by Eq. (<ref>). With the help of Eqs. (<ref>)–(<ref>), a more general driving than the Gaussian white noise can be studied. As is already visible from Fig. <ref>(a), which presents the numerically estimated CV(x_0) (points) along with theoretical values (lines), the width of the domain where CV(x_0) > 1 increases with growing α, i.e., for smaller α the domain where resetting facilitates the escape kinetics is narrower. The set of initial conditions resulting in facilitation of the escape kinetics is further studied in Fig. <ref>(b). The bottom panel of Fig. <ref> shows the threshold x^⋆ such that CV(x^⋆) = 1, i.e., x^⋆ divides the set of initial conditions x_0 in such a way that for |x_0| > x^⋆ the coefficient of variation is greater than 1. From Fig. <ref>(b) it clearly follows that with the increasing stability index α, x^⋆ (solid line) moves towards the center of the interval and attains its asymptotic value 1/√5 for α = 2, see Eq. (<ref>), which is marked by a dashed line. Moreover, it indicates that under the action of heavy-tailed noises stochastic resetting can be beneficial in a narrower domain, of width W,

W = 2 × (L − x^⋆),

which for L = 1 is equal to 2(1 − x^⋆), see the dot-dashed line in Fig. <ref>(b). The growth of W can be intuitively explained by the mechanism underlying the escape dynamics. More precisely, with decreasing α the dominating escape scenario is the escape via a single (discontinuous) long jump, which is less sensitive to the initial condition than the escape protocol for α = 2, when the trajectories are continuous.

From Eqs. (<ref>)–(<ref>) one can also calculate the opposite, α → 0, limit of the MFPT: 𝒯 = 1, and of the second moment: 𝒯_2 = 2. For α = 0 the hypergeometric function in Eq. (<ref>) can be replaced by unity, and the remaining integral reads α ∫_{|x_0|²}^{L²} [t − |x_0|²]^{α/2−1} dt = 2[L² − |x_0|²]^{α/2}. Additionally, plugging 𝒯 = 1 and 𝒯_2 = 2 into Eq. (<ref>) one gets CV = 1, regardless of x_0. Indeed, Fig. <ref>(a) demonstrates that with decreasing α, the CV(x_0) curve approaches the CV = 1 line. Consequently, with decreasing α the threshold x^⋆ moves to the right, i.e., towards the absorbing boundary. However, we are unable to reliably calculate lim_{α→0} x^⋆, as numerical evaluation of the analytical formulas leads to not fully controllable errors. At the same time, for α ≈ 0, stochastic simulations are unreliable. Overall, we are not able to provide a definitive answer whether x^⋆ reaches the edges of the interval, i.e., ±L, or stops at a finite distance from the absorbing boundary.

We finish the exploration of LF under restarting with Fig. <ref>, which shows numerically estimated MFPTs for LF as a function of the resetting rate for various values of the stability index α: α ∈ {0.8, 1.2, 1.6, 1.8, 2}. Different panels correspond to various initial conditions: x_0 = 0.5 (top panel (a)) and x_0 = 0.7 (bottom panel (b)). Solid lines show the theoretical dependence for α = 2, see Eq. (<ref>), while dashed lines show the r = 0 asymptotics of the MFPTs, i.e., 𝒯(x_0, r = 0), see Eq. (<ref>). First of all, the comparison of panels (a) and (b) further corroborates that with decreasing α the domain in which resetting can facilitate the escape kinetics becomes narrower. Importantly, Fig. <ref> clearly shows the difference between the escape scenarios for Lévy flights and Brownian motion.
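A minimal sketch of the direct approach described above: assuming an array fpt of first passage times sampled from the reset-free simulation (e.g., from the previous sketch), it resamples (FPT, reset-time) pairs to estimate the MFPT under Poissonian resetting with rate r; scanning over r also gives min_r 𝒯(x_0, r). Sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fpt_under_resetting(fpt_samples, r, n=5000):
    """Direct approach: draw (FPT, reset-time) pairs until FPT < reset time;
    the FPT under restart is the sum of all reset intervals plus the last FPT."""
    out = np.empty(n)
    for i in range(n):
        total = 0.0
        while True:
            t_fp = rng.choice(fpt_samples)      # FPT of the reset-free process
            t_r = rng.exponential(1.0 / r)      # Poissonian reset time, rate r
            if t_fp < t_r:
                out[i] = total + t_fp
                break
            total += t_r                         # restart: accumulate elapsed time
    return out

rates = np.logspace(-2, 2, 20)
curve = [fpt_under_resetting(fpt, r).mean() for r in rates]  # 'fpt' from the sketch above
print("min_r MFPT(r) =", min(curve), " vs MFPT(0) =", fpt.mean())
```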
For LF the dominating strategy, especially for small α, is escape via a single long jump, while for BM the trajectories are continuous and the particle needs to approach the absorbing boundary. This property changes the sensitivity to resetting, especially in domains where restarting hinders the escape kinetics. For small α, moving back to the initial condition practically does not interrupt waiting for a long jump, while for α close to 2 it substantially decreases the chances of escape. Therefore, for α close to 2, the MFPT grows faster with increasing resetting rate. On the other hand, the growth rate is a decaying function of the initial position, cf. Figs. <ref>(a) and <ref>(b) for α = 2. Moreover, in simulations we do not see facilitation of the escape kinetics due to resetting in the domain where CV < 1.

In <cit.>, similarities and differences between LF and LW have been studied. In particular, it has been demonstrated that for LW with the power-law distribution of the jump duration τ,

f(τ) ∝ 1/τ^{1+α},

the MFPT from the (−L,L) interval scales as

𝒯(0) ∝ { L for 0 < α < 1; L^α for 1 < α < 2; L² for α = 2 }

with the half-width of the interval. More precisely, in <cit.> it was assumed that v = 1 and τ = |ξ_α|, where ξ_α are independent, identically distributed random variables following a symmetric α-stable density, see Eq. (<ref>). The observed scaling suggests that in the situation when the average jump duration/length becomes finite (α > 1), Lévy walks display the same scaling with the interval width as Lévy flights, see Eq. (<ref>). In contrast, for α < 1, the FPT for LW is bounded from below. Namely, the first passage time satisfies t_fp ⩾ L/v, which originates from the fact that the process has a finite velocity. From this property it follows that 𝒯(0) ⩾ L/v and thus the scaling of the MFPT must differ from the 𝒯(0) ∝ L^α observed for LF with α < 1. Finally, for α = 2, the underlying process, by means of the central limit theorem, converges to the Wiener process, revealing the same scaling of the MFPT as for a Brownian particle.

§ LÉVY WALKS UNDER STOCHASTIC RESETTING

After studying properties of LF on finite intervals under stochastic resetting, we move to the examination of LW. In the case of LW, numerical simulations were conducted to investigate the regime in which stochastic resetting can be beneficial. Similarly as in <cit.>, it was assumed that v = 1 and τ = |ξ_α|, where ξ_α are independent, identically distributed random variables following a symmetric α-stable density, see Eq. (<ref>). For LW the first passage time density has two peaks corresponding to escape in a single long jump or in a sequence of subsequent jumps towards the left or right boundary. These peaks are located at (L − x_0)/v (escape via the right boundary) or (x_0 + L)/v (escape via the left boundary). The heights of the peaks associated with such escapes increase with the drop in α and decay with the increasing interval half-width L. We suspect that these peaks are one of the reasons for the emergence of differences between LF and LW, see for instance Eq. (<ref>) for α < 1 and the discussion below Eq. (<ref>). The coefficient of variation CV(x_0) <cit.>, see Eq. (<ref>), obtained through numerical simulations of LW on the (−L,L) interval with various initial positions x_0, was compared to analytical results acquired for LF, see Eqs. (<ref>)–(<ref>). We focus mainly on the 1 ⩽ α ⩽ 2 case, as for α from that range the MFPT as a function of the interval half-width L for LW and LF scales in the same manner, see Eqs. (<ref>) and (<ref>).
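A minimal sketch of the LW first passage setup described above (v = 1, τ = |ξ_α|), reusing the symmetric_stable sampler from the earlier sketch; a flight that would cross the boundary is truncated at the exact crossing time, so the lower bound t_fp ⩾ L/v holds by construction. The trajectory number is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_walk_fpt(alpha, x0, L=1.0, v=1.0, n_traj=5000):
    """First passage time of a Levy walk from (-L, L): ballistic flights of
    duration tau = |xi_alpha| in a random direction +/- v."""
    fpt = np.empty(n_traj)
    for i in range(n_traj):
        x, t = float(x0), 0.0
        while True:
            tau = abs(symmetric_stable(alpha, 1)[0])   # flight duration
            s = v if rng.random() < 0.5 else -v        # flight direction
            target = L if s > 0 else -L
            t_hit = (target - x) / s                   # time to reach the boundary
            if t_hit <= tau:                           # escape during this flight
                fpt[i] = t + t_hit
                break
            x += s * tau
            t += tau
    return fpt

sample = levy_walk_fpt(1.5, 0.0)
print("LW MFPT (alpha=1.5, x0=0):", sample.mean(), " min FPT:", sample.min())
```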
The top panel of Fig. <ref> demonstrates that for relatively small interval half-widths (L/vσ ≈ 1), the CV for LF and the CV for LW noticeably differ, i.e., the CV for LW is significantly smaller than the CV for LF. However, as depicted in the bottom panel of Fig. <ref>, with increasing L, for the same set of α (α ∈ {1.5, 2}), the CV for the LW model follows the one for LF. The agreement originates in the fact that for large enough L the peaks corresponding to escape in a single jump (or in sequences of consecutive jumps toward the boundary) in the first passage time distribution are small enough.

In the next step, the region in which stochastic resetting can be beneficial for LW was explored numerically under Poissonian (fixed-rate) resetting. The distribution of time intervals between two consecutive resets follows the exponential density ϕ(t) = r exp(−rt), where r (r > 0) is the reset rate. The efficiency of stochastic resetting can be verified by use of the normalized ratio of the minimal MFPT under stochastic resetting to its value in the absence of resetting

Λ(x_0) = min_r 𝒯(x_0, r) / 𝒯(x_0, 0),

where 𝒯(x_0, r) stands for the MFPT under resetting with reset rate r and the initial position x_0 equivalent to the restarting point. If stochastic resetting does not facilitate the escape kinetics, Λ = 1, because 𝒯(x_0, 0) is then the minimal mean first passage time. A decay of Λ below one indicates that stochastic resetting accelerates the escape kinetics. Fig. <ref> presents the normalized ratio Λ(x_0) (points) along with CV(x_0) (lines). For small |x_0/L| the asymmetry introduced by the initial position is not strong enough to open space for optimization of the MFPT by stochastic resetting, resulting in Λ(x_0) = 1. Therefore, for small |x_0|, Λ(x_0) = 1 not only shows the impossibility of enhancing the escape kinetics, but also introduces a visual reference level clearly demonstrating where CV drops below unity. From examination of the normalized ratio Λ(x_0) it is possible to see easily whether the drop in Λ(x_0) coincides with the increase of the coefficient of variation CV above unity. As expected, for |x_0/L| large enough stochastic resetting facilitates the escape kinetics for LW. For small L (L/vσ ≈ 1) the region where the escape kinetics is accelerated by stochastic resetting differs from the one indicated by the CV > 1 criterion <cit.>, because the MFPT can be shortened even in the domain where CV < 1, see the top panel of Fig. <ref>. This is in accordance with the fact that the condition CV > 1 is sufficient, but not necessary, for observation of the facilitation of the escape kinetics due to stochastic resetting <cit.>. For large enough interval half-widths L, the point where Λ(x_0) drops below unity agrees with the prediction based on the CV criterion for α ∈ {1, 1.5, 2}, see the bottom panel of Fig. <ref>. However, in the case of α = 0.5 the disagreement with the CV > 1 criterion persists.

§ SUMMARY AND CONCLUSIONS

Lévy flights and Lévy walks constitute two paradigmatic random-walk schemes generalizing the Brownian motion. The former assumes long, power-law distributed jumps, while the latter introduces a finite propagation velocity. LF and LW possess inherent similarities, as LW trajectories have the same spatial properties, whereas the differences are in temporal properties: infinite (LF) versus finite (LW) propagation velocity.

We have demonstrated that stochastic resetting can facilitate the escape kinetics, as measured by the mean first passage time, from a finite interval restricted by two absorbing boundaries both for LF and LW.
Stochastic resetting is beneficial when the initial condition, which is equivalent to the point from which the motion is restarted, sufficiently breaks the system symmetry. Under such a condition, restarting the motion anew is more efficient in bringing a particle towards the target than waiting for the particle to approach it. Both for LF and LW, the domain in which resetting is beneficial depends on the exponent α defining the power-law distribution of jump lengths. The width of the domain benefiting from restarting grows with increasing α, i.e., for lighter tails it is wider, as lighter tails change the typical escape scenario.

Lévy flights and Lévy walks display the same scaling of the MFPT with the interval half-width L for 1 ⩽ α ⩽ 2. Analogously, under restarting, for 1 < α ⩽ 2, with the increasing interval half-width L the coefficient of variation for LW tends to the one for LF. Notably, the agreement is recorded already for finite L. Additionally, for LW with small L, it is clearly visible that stochastic resetting facilitates the escape kinetics not only in the region where CV > 1, but that it can also accelerate the escape kinetics in situations when CV < 1.

§ ACKNOWLEDGMENTS

We gratefully acknowledge Poland's high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2023/016175. The research for this publication has been supported by a grant from the Priority Research Area DigiWorld under the Strategic Programme Excellence Initiative at Jagiellonian University.

§ DATA AVAILABILITY

The data (generated randomly using the model presented in the paper) that support the findings of this study are available from the corresponding author (BŻ) upon reasonable request.

§ REFERENCES

[Smoluchowski (1906)] M. Smoluchowski, Ann. Phys. 326, 756 (1906).
[Einstein (1905)] A. Einstein, Ann. Phys. 17, 549 (1905).
[Langevin (1908)] P. Langevin, C. R. Acad. Sci. (Paris) 146, 530 (1908).
[Perrin (1909)] J. Perrin, Ann. Chim. Phys. 18, 5 (1909).
[Kramers (1940)] H. A. Kramers, Physica (Utrecht) 7, 284 (1940).
[van Kampen (1981)] N. G. van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam, 1981).
[Gardiner (2009)] C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and Natural Sciences (Springer Verlag, Berlin, 2009).
[Zwanzig (2001)] R. Zwanzig, ed., Nonequilibrium Statistical Mechanics (Oxford University Press, New York, 2001).
[Horsthemke and Lefever (1984)] W. Horsthemke and R. Lefever, Noise-Induced Transitions: Theory and Applications in Physics, Chemistry, and Biology (Springer Verlag, Berlin, 1984).
[Montroll and Shlesinger (1984)] E. W. Montroll and M. F. Shlesinger, in Lévy Processes: Theory and Applications, edited by J. L. Lebowitz and E. W. Montroll (North Holland, Amsterdam, 1984), pp. 1–121.
[Metzler and Klafter (2000)] R. Metzler and J. Klafter, Phys. Rep. 339, 1 (2000).
[Metzler and Klafter (2004)] R. Metzler and J. Klafter, J. Phys. A: Math. Gen. 37, R161 (2004).
[Brown (1828)] R. Brown, Phil. Mag. 4, 161 (1828).
[Nordlund (1914)] I. Nordlund, Z. Phys. Chem. 87, 40 (1914).
[Dubkov et al. (2008)] A. A. Dubkov, B. Spagnolo, and V. V. Uchaikin, Int. J. Bifurcation Chaos Appl. Sci. Eng. 18, 2649 (2008).
[Chechkin et al. (2008)] A. V. Chechkin, R. Metzler, J. Klafter, and V. Y. Gonchar, in Anomalous Transport: Foundations and Applications, edited by R. Klages, G. Radons, and I. M. Sokolov (Wiley-VCH, Weinheim, 2008), pp. 129–162.
[Shlesinger and Klafter (1986)] M. F. Shlesinger and J. Klafter, in On Growth and Form: Fractal and Non-fractal Patterns in Physics, edited by H. E. Stanley and N. Ostrowsky (Springer Verlag, Berlin, 1986), p. 279.
[Zaburdaev et al. (2015)] V. Zaburdaev, S. Denisov, and J. Klafter, Rev. Mod. Phys. 87, 483 (2015).
[Denisov et al. (2004)] S. Denisov, J. Klafter, and M. Urbakh, Physica D 187, 89 (2004).
[Bouchaud et al. (1991)] J. P. Bouchaud, A. Ott, D. Langevin, and W. Urbach, J. Phys. II France 1, 1465 (1991).
[Brockmann et al. (2006)] D. Brockmann, L. Hufnagel, and T. Geisel, Nature (London) 439, 462 (2006).
[Sims et al. (2008)] D. W. Sims, E. J. Southall, N. E. Humphries, G. C. Hays, C. J. A. Bradshaw, J. W. Pitchford, A. James, M. Z. Ahmed, A. S. Brierley, M. A. Hindell, D. Morritt, M. K. Musyl, D. Righton, E. L. C. Shepard, V. J. Wearmouth, R. P. Wilson, M. J. Witt, and J. D. Metcalfe, Nature (London) 451, 1098 (2008).
[Reynolds and Rhodes (2009)] A. M. Reynolds and C. J. Rhodes, Ecology 90, 877 (2009).
[Amor et al. (2016)] T. A. Amor, S. D. S. Reis, D. Campos, H. J. Herrmann, and J. S. Andrade, Sci. Rep. 6, 20815 (2016).
[Cabrera and Milton (2004)] J. L. Cabrera and J. G. Milton, Chaos 14, 691 (2004).
[Collins and De Luca (1994)] J. J. Collins and C. J. De Luca, Phys. Rev. Lett. 73, 764 (1994).
[Solomon et al. (1993)] T. H. Solomon, E. R. Weeks, and H. L. Swinney, Phys. Rev. Lett. 71, 3975 (1993).
[Barthelemy et al. (2008)] P. Barthelemy, J. Bertolotti, and D. Wiersma, Nature (London) 453, 495 (2008).
[Mercadier et al. (2009)] M. Mercadier, W. Guerin, M. M. Chevrollier, and R. Kaiser, Nat. Phys. 5, 602 (2009).
[Barkai et al. (2014)] E. Barkai, E. Aghion, and D. A. Kessler, Phys. Rev. X 4, 021036 (2014).
[Bouchaud and Georges (1990)] J. P. Bouchaud and A. Georges, Phys. Rep. 195, 127 (1990).
[Laherrère and Sornette (1998)] J. Laherrère and D. Sornette, Eur. Phys. J. B 2, 525 (1998).
[Mantegna and Stanley (2000)] R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, 2000).
[Lera and Sornette (2018)] S. C. Lera and D. Sornette, Phys. Rev. E 97, 012150 (2018).
[Solomon et al. (1994)] T. H. Solomon, E. R. Weeks, and H. L. Swinney, Physica D 76, 70 (1994).
[Barkai (2001)] E. Barkai, Phys. Rev. E 63, 046118 (2001).
[Chechkin et al. (2006)] A. V. Chechkin, V. Y. Gonchar, J. Klafter, and R. Metzler, in Fractals, Diffusion, and Relaxation in Disordered Complex Systems: Advances in Chemical Physics, Part B, Vol. 133, edited by W. T. Coffey and Y. P. Kalmykov (John Wiley & Sons, New York, 2006), pp. 439–496.
[Jespersen et al. (1999)] S. Jespersen, R. Metzler, and H. C. Fogedby, Phys. Rev. E 59, 2736 (1999).
[Klages et al. (2008)] R. Klages, G. Radons, and I. M. Sokolov, Anomalous Transport: Foundations and Applications (Wiley-VCH, Weinheim, 2008).
[Touchette and Cohen (2009)] H. Touchette and E. G. D. Cohen, Phys. Rev. E 80, 011114 (2009).
[Touchette and Cohen (2007)] H. Touchette and E. G. D. Cohen, Phys. Rev. E 76, 020101 (2007).
[Chechkin and Klages (2009)] A. V. Chechkin and R. Klages, J. Stat. Mech., L03002 (2009).
[Dybiec et al. (2012)] B. Dybiec, J. M. R. Parrondo, and E. Gudowska-Nowak, EPL (Europhys. Lett.) 98, 50006 (2012).
[Kuśmierz et al. (2014)] Ł. Kuśmierz, J. M. Rubi, and E. Gudowska-Nowak, J. Stat. Mech. 2014, P09002 (2014).
[Evans and Majumdar (2011)] M. R. Evans and S. N. Majumdar, Phys. Rev. Lett. 106, 160601 (2011).
[Evans et al. (2020)] M. R. Evans, S. N. Majumdar, and G. Schehr, J. Phys. A: Math. Theor. 53, 193001 (2020).
[Gupta and Jayannavar (2022)] S. Gupta and A. M. Jayannavar, Front. Phys. 10, 789097 (2022).
[Pal and Reuveni (2017)] A. Pal and S. Reuveni, Phys. Rev. Lett. 118, 030603 (2017).
[Nagar and Gupta (2016)] A. Nagar and S. Gupta, Phys. Rev. E 93 (2016).
[Dahlenburg et al. (2021)] M. Dahlenburg, A. V. Chechkin, R. Schumer, and R. Metzler, Phys. Rev. E 103, 052123 (2021).
[Reuveni (2016)] S. Reuveni, Phys. Rev. Lett. 116, 170601 (2016).
[Pal et al. (2020)] A. Pal, Ł. Kuśmierz, and S. Reuveni, Phys. Rev. Res. 2, 043174 (2020).
[Bodrova and Sokolov (2020)] A. S. Bodrova and I. M. Sokolov, Phys. Rev. E 101, 052130 (2020).
[Sunil et al. (2023)] J. C. Sunil, R. A. Blythe, M. R. Evans, and S. N. Majumdar, J. Phys. A: Math. Theor. 56, 395001 (2023).
[Viswanathan et al. (2011)] G. M. Viswanathan, M. G. Da Luz, E. P. Raposo, and H. E. Stanley, The Physics of Foraging: An Introduction to Random Searches and Biological Encounters (Cambridge University Press, Cambridge, 2011).
[Palyulin et al. (2014)] V. V. Palyulin, A. V. Chechkin, and R. Metzler, Proc. Natl. Acad. Sci. U.S.A. 111, 2931 (2014).
[Redner (2001)] S. Redner, A Guide to First Passage Time Processes (Cambridge University Press, Cambridge, 2001).
[Kusmierz et al. (2014)] L. Kusmierz, S. N. Majumdar, S. Sabhapandit, and G. Schehr, Phys. Rev. Lett. 113, 220602 (2014).
[Méndez et al. (2021)] V. Méndez, A. Masó-Puigdellosas, T. Sandev, and D. Campos, Phys. Rev. E 103, 022103 (2021).
[Pal and Prasad (2019a)] A. Pal and V. V. Prasad, Phys. Rev. E 99, 032123 (2019).
[Dybiec et al. (2017)] B. Dybiec, E. Gudowska-Nowak, E. Barkai, and A. A. Dubkov, Phys. Rev. E 95, 052102 (2017).
[Getoor (1961)] R. K. Getoor, Trans. Am. Math. Soc. 101, 75 (1961).
[Janicki and Weron (1994)] A. Janicki and A. Weron, Simulation and Chaotic Behavior of α-Stable Stochastic Processes (Marcel Dekker, New York, 1994).
[Samorodnitsky and Taqqu (1994)] G. Samorodnitsky and M. S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance (Chapman and Hall, New York, 1994).
[Górska and Penson (2011)] K. Górska and K. A. Penson, Phys. Rev. E 83, 061125 (2011).
[Higham (2001)] D. J. Higham, SIAM Review 43, 525 (2001).
[Mannella (2002)] R. Mannella, Int. J. Mod. Phys. C 13, 1177 (2002).
[Chambers et al. (1976)] J. M. Chambers, C. L. Mallows, and B. W. Stuck, J. Am. Stat. Assoc. 71, 340 (1976).
[Weron and Weron (1995)] A. Weron and R. Weron, Lect. Not. Phys. 457, 379 (1995).
[Weron (1996)] R. Weron, Statist. Probab. Lett. 28, 165 (1996).
[Rotbart et al. (2015)] T. Rotbart, S. Reuveni, and M. Urbakh, Phys. Rev. E 92, 060101 (2015).
[Pal and Prasad (2019b)] A. Pal and V. Prasad, Phys. Rev. Res. 1, 032001 (2019).
[Pal et al. (2022)] A. Pal, S. Kostinski, and S. Reuveni, J. Phys. A: Math. Theor. 55, 021001 (2022).
"authors": [
"Bartosz Żbik",
"Bartłomiej Dybiec"
],
"categories": [
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.stat-mech",
"published": "20231127171948",
"title": "Lévy flights and Lévy walks under stochastic resetting"
} |
[email protected] Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan SOKENDAI (The Graduate University for Advanced Studies), Okazaki 444-8585, Japan Department of Basic Science, The University of Tokyo, Meguro-ku, Tokyo, 153-8902, Japan Department of Physics, Tokyo University of Science, Shinjuku-ku, Tokyo, 162-8601, Japan Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan SOKENDAI (The Graduate University for Advanced Studies), Okazaki 444-8585, Japan Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan SOKENDAI (The Graduate University for Advanced Studies), Okazaki 444-8585, Japan [email protected] Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan SOKENDAI (The Graduate University for Advanced Studies), Okazaki 444-8585, Japan [email protected] Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585, Japan SOKENDAI (The Graduate University for Advanced Studies), Okazaki 444-8585, Japan Rydberg atoms in optical lattices and tweezers are now a well-established platform for simulating quantum spin systems. However, the role of the atoms' spatial wavefunction has not been examined in detail experimentally. Here, we show a strong spin-motion coupling emerging from the large variation of the interaction potential over the wavefunction spread. We observe its clear signature on the ultrafast, out-of-equilibrium, many-body dynamics of atoms excited to a Rydberg S state from a unity-filling atomic Mott insulator. We also propose a novel approach to tune arbitrarily the strength of the spin-motion coupling relative to the motional energy scale set by trapping potentials. Our work provides a new direction for exploring the dynamics of strongly correlated quantum systems by adding the motional degree of freedom to the Rydberg simulation toolbox.

Strong Spin-Motion Coupling in the Ultrafast Quantum Many-body Dynamics of Rydberg Atoms in a Mott-insulator Lattice K. Ohmori January 14, 2024

Quantum simulation platforms, such as ion crystals <cit.>, polar molecules <cit.>, ultracold neutral atoms <cit.>, and Rydberg atoms <cit.>, offer remarkable opportunities to study various many-body problems, of which one important category is localized spin models, see e.g. <cit.>. To mimic pure spin systems, two energy levels in the internal degrees of freedom (d.o.f.) are identified as an effective spin-1/2, and approximations are then applied to the full Hamiltonian describing a given experimental platform, notably to decouple the external motional d.o.f.
(position and momentum) from the spin dynamics. Recently, new proposals have emerged to purposely use spin-motion coupling (i.e., a state-dependent force) and open new regimes of quantum simulation with Rydberg atoms <cit.>. In this work, based on the ultrafast Rydberg quantum platform <cit.>, we report on the experimental realization of an extreme regime of spin-motion coupling κ which is (i) comparable to the spin-spin interaction strength V, and (ii) overly dominates the natural motional energy scale ω set by a trapping potential. We also propose a novel experimental approach, ultrafast stroboscopic Rydberg excitation, to tune the ratio κ/ω over many orders of magnitude.

Rydberg atoms display interactions ranging up to the GHz scale at micrometer inter-atomic distances r <cit.>. The potential V(r) typically follows a 1/r³ dependence for the resonant dipole-dipole interaction, or a 1/r⁶ potential in the non-resonant van der Waals (vdW) regime. Over the last decade, spin models have been implemented with Rydberg atoms in a gas phase <cit.>, in an optical lattice <cit.>, or in an array of optical tweezers, e.g. <cit.>. In these works, spin-motion coupling (arising when the atom explores the spatially varying potential) is either considered negligible or treated as a small source of decoherence, with the external degrees of freedom acting as a thermal bath. For example, if atoms move randomly during the dynamics, because of a finite thermal energy, the interaction varies and blurs the spin dynamics. By preparing atoms in a pure motional quantum state, the coupling to motion is coherent and creates spin-motion entanglement <cit.>. In this coherent regime, the spin-motion coupling originates from the variation of the potential V(r) over the rms (root mean squared) spread x_rms of the atom position wavefunction, around a distance d <cit.>. The first-order, linear, spin-motion coupling term is parameterized by κ:

κ = −x_rms ∂V/∂x |_{x=d} = (6 x_rms/d) V(d),

where we assumed a repulsive vdW potential.

First, we compare the ratio of spin-motion to spin-spin coupling, κ/V, which depends on the choice of optical traps: lattice or tweezers. In both approaches, the quantum fluctuation of position x_rms = √(ħ/2mω) (with m the mass of the atom) is slightly tunable through the trapping angular frequency ω ∼ 2π × 10–100 kHz, giving a spread of a few tens of nanometers. The distance d between atoms is typically 0.5 μm with the lattice and can range from 2 to 10 μm for tweezers. Consequently, the spin-motion coupling is usually only a small perturbation for tweezers, κ/V ≪ 0.1 <cit.>, while it is comparable to the spin-spin coupling on the lattice platform, κ/V ∼ 0.5. We will see clear signatures of this large perturbation on the spin dynamics in the first part of this work. Secondly, we discuss the relevance of motion through the ratio κ/ω, which can vary over many orders of magnitude depending on the platform. For molecules, interacting through a dipole-dipole potential V on the kHz scale or less <cit.>, the spin-motion coupling is negligible, κ/ω < 0.01, except when working with delocalized, overlapping wavefunctions <cit.>.
For Rydberg atoms excited with cw-lasers, forcing the Rydberg blockade limits the interaction strength V to the MHz scale, which nevertheless allows one to enter the perturbative regime κ/ω ∼ 0.1–0.5 and already opens up exciting prospects <cit.>. By using picosecond pulsed lasers, our ultrafast approach allows us to always overcome the Rydberg blockade <cit.> and to prepare Rydberg atoms with interaction strengths at the GHz scale <cit.>. Here, the spin-motion coupling becomes overly dominant, with κ/ω ∼ 10–1000, such that motional dynamics can be completely neglected on the timescale of spin-spin and spin-motion entanglement. In the final part of this work, we propose the ultrafast stroboscopic method to effectively tune κ/ω.

Experimental platform The schematic of our experimental system is shown in Fig. <ref>(a). We prepare a three-dimensional (3D) unity-filling Mott-insulator state with ∼3 × 10^4 atoms in the |↓⟩ = |5S⟩ ground state of ^87Rb. The 3D optical lattice, with period a_lat = 532 nm, has a depth of 20 E_R for each axis, giving rise to an isotropic trapping frequency ω = 2π × 18 kHz in the harmonic-oscillator approximation (for details, see Ref. <cit.>). The spatial wavefunction |ψ⟩_spatial of each atom has a quantum uncertainty of position x_rms = 57 nm and a momentum uncertainty p_rms = ħ/2x_rms = m × (6.4 mm/s). Following preparation of the ground-state atoms, they are coherently excited to the |↑⟩ = |29S⟩ Rydberg state using a two-photon excitation with broadband laser pulses, as described in Ref. <cit.> and shown in Fig. <ref>(b). This prepares each atom in a coherent electronic superposition |ψ⟩_elec = √(1−p)|↓⟩ + √p|↑⟩, with p the probability to be in the Rydberg state, and where we mapped the ground and Rydberg states to a spin-1/2.

Two atoms in the 29S state experience strong dipole-dipole interaction in the vdW regime. Figure <ref>(c) shows the interaction potential calculated using the pairinteraction software <cit.>. It is very well approximated by an isotropic, repulsive, vdW form V(r) = C_6/r^6, where the calculated coefficient C_6^th is 2π × 16 MHz μm^6. At the shortest inter-atomic distance, the interaction is expected to be 0.7 GHz. The mixing with the dominant interaction channel (the pair state 28P-29P) remains negligible thanks to its large energy separation of 20 GHz.
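A minimal numerical sketch, assuming the ^87Rb mass and the parameters quoted above (ω = 2π × 18 kHz, a_lat = 532 nm, C_6 = 2π × 16 MHz μm^6); the tweezer numbers below are illustrative assumptions used only for comparison of the two platforms.

```python
import numpy as np

hbar = 1.054571817e-34                   # J s
m = 86.909 * 1.66053906660e-27           # 87Rb mass, kg

def motional_scales(omega_2pi_kHz, d_um, C6_2pi_MHz_um6):
    """Ground-state spread, momentum spread, and the coupling ratios kappa/V
    and kappa/omega for two atoms at distance d with V = C6 / r^6."""
    omega = 2 * np.pi * omega_2pi_kHz * 1e3
    x_rms = np.sqrt(hbar / (2 * m * omega))              # m
    p_rms = hbar / (2 * x_rms)                           # kg m/s
    V = 2 * np.pi * C6_2pi_MHz_um6 * 1e6 / d_um**6       # rad/s at distance d
    kappa = 6 * (x_rms * 1e6 / d_um) * V                 # kappa = 6 x_rms/d * V
    return x_rms, p_rms, V, kappa / V, kappa / omega

# lattice (this work) vs. tweezers (illustrative omega and d)
for label, (w, d) in {"lattice": (18.0, 0.532), "tweezers": (50.0, 3.0)}.items():
    x, p, V, kV, kw = motional_scales(w, d, 16.0)
    print(f"{label}: x_rms={x*1e9:.0f} nm, p_rms/m={p/m*1e3:.1f} mm/s, "
          f"V=2pi x {V/2/np.pi/1e6:.0f} MHz, kappa/V={kV:.2f}, kappa/omega={kw:.1e}")
```

For the lattice parameters this reproduces the quoted x_rms = 57 nm, p_rms = m × 6.4 mm/s, the nearest-neighbour interaction of about 0.7 GHz, and κ/V ∼ 0.5.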
⊗n̂_j n̂_k .Here, r̂_jk = r̂_j - r̂_k is the quantum operator of the relative position of atoms j and k, r̅_jk its expectation value, e_jk a unit vector along site j and k, V_jk and κ_jk the couplings evaluated at distance r̅_jk, and n̂_j=|↑⟩_j⟨↑| is the projection operator on the Rydberg state for the j-th atom. Applying this Hamiltonian to the initial product state creates entanglement within the spin sector, but also, and this is the key point of the first part of this work, between the spin and motional sectors of the Hilbert space.In the second line of Eq. (<ref>), we perform an expansion of the potential up to first-order, useful for a later qualitative discussion and to outline the appearance of spin-motion coupling parametrized by κ.Results We now present experimental results obtained by time-domain Ramsey interferometry <cit.>, to probe the many-body entangled state generated by the above Hamiltonian.In short, a first pump pulse initiates the many-body dynamics which is read-out by a second probe pulse after a variable delay τ = 0 - 3 ns.This second pulse gives rise to a Ramsey interference whose contrast is a probe to the single-atom coherence in the spin sector, i.e., between the ground and Rydberg state.Spin-spin and spin-motion coupling generates entanglement entropy <cit.>, which reduces the single-atom coherence and thus the Ramsey contrast. Typical interferograms are shown in Fig. <ref>(a,b). In absence of interaction (blue curve, obtained for a low-density atomic sample), the highly contrasted interference indicates a constant pure state.For atoms prepared as a Mott-insulator (red curve), the decreasing contrast signals a reduced purity in the spin sector, which is shown in Fig. <ref>(c) as a function of the delay τ. Additionally, we also extract a phase shift of the Ramsey oscillations with the reference non-interacting sample. Numerical solution To calculate the Ramsey contrast and phase shift from the action of the Hamiltonian of Eq. (<ref>), we extend previous results <cit.> to include the spatial wavefunction of each atom, which requires to calculate terms such as the two-body spatial overlap:O_jk(t)= ⟨ψ_j; ψ_k | exp(-iV(r̂)t) | ψ_j; ψ_k ⟩ = C ∫ dr exp(- |r - r̅_jk|^2/x_ rms^2 -i C_6/|r|^6t ),where C is a normalization constant. The second line is obtained after reformulating the two-body wavefunctions |ψ_j;ψ_k⟩ into two independent one-body system: a trivial one for the center-of-mass, unaffected by the interaction, and the interesting one for the relative coordinate r_jk with reduced mass m/2. For a two-atom system, the Ramsey contrast and phase are directly related to the amplitude and phase of the complex-valued overlap O. For the many-body dynamics considered here, the analytical expression relating them is given in Ref. <cit.>, which also include details on neglecting three-body (and higher) overlap terms.The calculation results are then fitted to the Ramsey contrast data with a single free parameter: the coefficient C_6. The fitted curve, see Fig. <ref>(c), agrees well with the experimental data for a coefficient C_6^ exp = 2π×5.5 MHz μ m^6. With this value, the positive trend (related to the sign of C_6) and magnitude of the phase shift are also well captured.The fitted C_6^ exp coefficient is 3 times smaller than obtained from ab-initio calculation of the vdW potential, which calls for further investigation of the accuracy of the vdW potential calculation in the short, sub-micron distance regime. 
To emphasize the importance of the spin-motion coupling in this experiment, we also show calculations for a pure spin-spin model where we ignore the spatial extent of the wavefunctions, as done in previous works <cit.>. As shown by the green curve of Fig. <ref>(c), the Ramsey contrast would have displayed an oscillation which is clearly absent in the experimental data. We can thus conclude that capturing spin-motion entanglement is essential to account for the observed many-body dynamics.

Discussion. We now present a hierarchy of approximations to identify the relevant terms in Eqs. (<ref>,<ref>) that create spin-motion entanglement. We consider two atoms at nearest-neighbour (NN) distance a_lat, where the variation of the potential over the wavefunction ψ_12(r) describing their relative distance is largest. We then restrict the problem to 1D, along the inter-atomic axis, by neglecting the wavefunction spread in the other two directions, as it gives only a small 1/2 (x_rms/a_lat)^2 ≃ 0.5% increase in nearest-neighbor distance, 20 times smaller than the effect along the inter-atomic axis. This allows a convenient phase-space representation of the 1D wavefunction ψ_12(x), as shown in Fig. <ref>(a). The 1/r^6 potential then applies a strong force on the wavefunction, which can be decomposed with a series expansion of the potential around the mean interatomic distance a_lat.

The zeroth-order term V = C_6^exp/a_lat^6 gives rise to spin-spin entanglement reaching its maximal value at time τ = π/V = 2.1 ns, corresponding to a minimum in the Ramsey contrast of the green curve of Fig. <ref>(c). For longer times, the two effective spins would de-entangle and the Ramsey visibility would re-increase <cit.>. The first-order linear term, explicitly written in Eq. (<ref>), gives a uniform force on the wavefunction, F = 6ħC_6/a_lat^7 = ħκ/x_rms ≃ m/2 × (2.5 × 10^7 m s^-2). The momentum kick Δp from this acceleration becomes comparable to the relative momentum rms spread after τ = p_rms/(√2 F) = 0.3 ns. As the state-dependent force is applied only on part of the spin sector (|↑↑⟩), it creates spin-motion entanglement that is captured by the reduced overlap |O| between the displaced and initial momentum wavefunctions seen in Fig. <ref>(b). This explains why the Ramsey contrast drops initially faster than expected from a pure spin model, see Fig. <ref>(c), as well as why it does not re-increase beyond τ = 2.1 ns as the pure spin model predicts.

For a good qualitative description of the dynamics, it is necessary to go beyond the first-order term to capture the wide variation of the mechanical force over the wavefunction. As seen in Fig. <ref>(b), a second-order expansion brings the calculated overlap much closer to the exact result from Eq. (<ref>). Qualitatively, these second-order terms r̂_jk^2 = (x̂_j - x̂_k)^2 have two interesting effects on the wavefunction. First, they squeeze each atom's wavefunction through the terms x̂_j^2 and x̂_k^2: each atom feels a stronger force at shorter distance from the other, which compresses the wavefunction. Second, they entangle the two atoms' wavefunctions through the cross term x̂_j x̂_k. The relative wavefunction ψ_12 can then no longer be decomposed into a product state of two single-atom wavefunctions. Such entanglement between the motion of two atoms is not captured at lower order. The third-order terms are required to explain the negative values taken by the Wigner distribution.
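The hierarchy of scales discussed above can be verified directly from the quoted numbers; the following sketch (with standard constants and the parameters of this work hard-coded) reproduces the 2.1 ns zeroth-order entanglement time and the 0.3 ns momentum-kick timescale:

import numpy as np

hbar = 1.054571817e-34            # J s
m = 86.909 * 1.66053907e-27       # kg, 87Rb mass

C6 = 2*np.pi * 5.5e6 * (1e-6)**6  # fitted C6^exp, rad/s x m^6
a = 0.532e-6                      # m, nearest-neighbour distance
x_rms = 57e-9                     # m
p_rms = hbar / (2 * x_rms)        # momentum spread, = m x 6.4 mm/s

V = C6 / a**6                     # zeroth-order term, rad/s
tau_ss = np.pi / V                # maximal spin-spin entanglement time
F = 6 * hbar * C6 / a**7          # first-order state-dependent force, N
tau_kick = p_rms / (np.sqrt(2) * F)

print(f"V/2pi    = {V / (2*np.pi) / 1e6:.0f} MHz")
print(f"tau_ss   = {tau_ss * 1e9:.2f} ns")      # ~2.1 ns
print(f"F        = {F:.2e} N  (= m/2 x {2*F/m:.1e} m/s^2)")
print(f"tau_kick = {tau_kick * 1e9:.2f} ns")    # ~0.3 ns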
Outlook. The strong spin-motion coupling observed here precludes the realization of a pure spin model in our experimental regime. However, instead of performing quantum simulation in the spin sector, we could rather work fully in the motion sector of the Hilbert space. This would be realized by completely transferring ground-state atoms to Rydberg orbits, a step that can be done with high fidelity on the microsecond timescale <cit.> (but only for weakly interacting atoms), and for which progress has been reported by our group for picosecond-scale excitation <cit.>. We could then prepare a unit-filling Mott-insulator state of Rydberg atoms, which would be subjected to strong internal vdW forces <cit.>. Interestingly, the forces from the two opposite directions on a given atom cancel at first order, and the second-order squeezing and entangling terms would dominate the dynamics. This would lead to non-trivial distortions of the spatial wavefunctions observable by time-of-flight imaging, a technique also available on the tweezers platform <cit.>.

In this work, we neglected the effect of kinetic energy due to the large separation of timescales between the Rydberg interaction (nanoseconds) and the motion of atoms (microseconds). We now propose to bring these two scales together to investigate a larger class of Hamiltonians with ultrafast stroboscopic Rydberg excitation. As schematically drawn in Fig. <ref>, ground-state atoms are transferred on a picosecond timescale to Rydberg states to experience, for a brief time T_R, the strong force demonstrated in this work. They are then brought back to the ground state to experience the kinetic energy and the trapping potential on a microsecond timescale T_0. This step is repeated with a high enough frequency to apply Average Hamiltonian Theory (AHT) <cit.>, and with a controlled duty cycle to vary the effective, reduced coupling strength κ_eff = T_R/T_0 × κ relative to the trapping frequency ω. Optionally, a spin-1/2 can be encoded in the ground-state manifold, and a spin-dependent force obtained by spin-selective ultrafast excitation.

This ultrafast Floquet engineering approach can be seen as complementary to Rydberg dressing <cit.>, where a trapped ground-state atom is instead continuously and weakly dressed by a small fraction of Rydberg character. Compared to other proposals for spin-motion coupling using long-lived circular Rydberg states <cit.>, or Rydberg facilitation (anti-blockade) <cit.>, we note that the stroboscopic approach has the practical advantage of not requiring magic trapping of the Rydberg state. Finally, we emphasize that ultrafast Rydberg excitation with pulsed lasers (delivering up to 100 GHz of ground-Rydberg Rabi frequency) unlocks the full GHz strength of the interaction between Rydberg atoms, otherwise curbed by the limited MHz-scale Rabi frequency achievable with cw-lasers.

In conclusion, we have considered the force experienced by Rydberg atoms, mapped it into a spin-motion coupling term, and observed a clear signature: a strong perturbation to the spin dynamics. We proposed a quantum control technique, ultrafast Floquet engineering, to tune the relative strength of this force compared to the trapping potential of optical lattices or tweezers, opening novel regimes of quantum simulation with Rydberg atoms.
Among the new avenues, we envision the creation of exotic motional states such as a Rydberg crystal: an atomic array with each atom stabilized in free space (i.e., in the absence of a confining lattice potential) by the long-range isotropic vdW repulsion between Rydberg atoms, a state reminiscent of electronic Wigner crystals <cit.>.

The authors acknowledge Y. Okano and H. Chiba for technical support. We thank Y. Zhang for helpful discussions regarding the extension of the delay line. This work was supported by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) JPMXS0118069021, JSPS Grant-in-Aid for Specially Promoted Research Grant No. 16H06289, and JST Moonshot R&D Program Grant Number JPMJMS2269. S.S. acknowledges support from JSPS KAKENHI Grant No. JP21H01021. M.K. acknowledges support from JSPS KAKENHI Grants No. JP20K14389 and No. JP22H05268.

§ REFERENCES

[Monroe2021] C. Monroe et al., "Programmable quantum simulations of spin systems with trapped ions," Rev. Mod. Phys. 93, 025001 (2021).
[Langen2023] T. Langen, G. Valtolina, D. Wang, and J. Ye, "Quantum state manipulation and science of ultracold molecules," arXiv:2305.13445 (2023).
[GRB17] C. Gross and I. Bloch, "Quantum simulations with ultracold atoms in optical lattices," Science 357, 995 (2017).
[BRL20] A. Browaeys and T. Lahaye, "Many-body physics with individually controlled Rydberg atoms," Nature Physics 16, 132 (2020).
[Ross2022] M. K. Joshi et al., "Observing emergent hydrodynamics in a long-range quantum magnet," Science 376, 720 (2022).
[YMG13] B. Yan et al., "Observation of dipolar spin-exchange interactions with lattice-confined polar molecules," Nature 501, 521 (2013).
[Ketterle2020] P. N. Jepsen, J. Amato-Grill, I. Dimitrova, W. W. Ho, E. Demler, and W. Ketterle, "Spin transport in a tunable Heisenberg model realized with ultracold atoms," Nature 588, 403 (2020).
[Semeghini2021] G. Semeghini et al., "Probing topological spin liquids on a programmable quantum simulator," Science 374, 1242 (2021).
[Chen2023] C. Chen et al., "Continuous symmetry breaking in a two-dimensional Rydberg array," Nature 616, 691 (2023).
[Gorshkov2019] R. Belyansky et al., "Nondestructive cooling of an atomic quantum register via state-insensitive Rydberg interactions," Phys. Rev. Lett. 123, 213603 (2019).
[Lesanovsky2020] F. M. Gambetta, W. Li, F. Schmidt-Kaler, and I. Lesanovsky, "Engineering nonbinary Rydberg interactions via phonons in an optical lattice," Phys. Rev. Lett. 124, 043402 (2020).
[Lesanovsky2020b] P. P. Mazza, R. Schmidt, and I. Lesanovsky, "Vibrational dressing in kinetically constrained Rydberg spin systems," Phys. Rev. Lett. 125, 033602 (2020).
[Lesanovsky2023] M. Magoni, R. Joshi, and I. Lesanovsky, "Molecular dynamics in Rydberg tweezer arrays: Spin-phonon entanglement and Jahn-Teller effect," Phys. Rev. Lett. 131, 093002 (2023).
[Mehaignerie2023] P. Méhaignerie, C. Sayrin, J.-M. Raimond, M. Brune, and G. Roux, "Spin-motion coupling in a circular-Rydberg-state quantum simulator: Case of two atoms," Phys. Rev. A 107, 063106 (2023).
[CTM21] Y. Chew, T. Tomita, T. P. Mahesh, S. Sugawa, S. de Léséleuc, and K. Ohmori, "Ultrafast energy exchange between two single Rydberg atoms on the nanosecond timescale," Nature Photonics 16, 724 (2022).
[BSM23] V. Bharti et al., "Picosecond-scale ultrafast many-body dynamics in an ultracold Rydberg-excited atomic Mott insulator," Phys. Rev. Lett. 131, 123201 (2023).
[SWM10] M. Saffman, T. G. Walker, and K. Mølmer, "Quantum information with Rydberg atoms," Rev. Mod. Phys. 82, 2313 (2010).
[TSG16] N. Takei et al., "Direct observation of ultrafast many-body electron dynamics in an ultracold Rydberg gas," Nature Communications 7, 13449 (2016).
[SchleierSmith2020] V. Borish, O. Marković, J. A. Hines, S. V. Rajagopal, and M. Schleier-Smith, "Transverse-field Ising dynamics in a Rydberg-dressed atomic gas," Phys. Rev. Lett. 124, 063601 (2020).
[Signoles2021] A. Signoles et al., "Glassy dynamics in a disordered Heisenberg quantum spin system," Phys. Rev. X 11, 011011 (2021).
[ZBS16] J. Zeiher et al., "Many-body interferometry of a Rydberg-dressed spin lattice," Nature Physics 12, 1095 (2016).
[Guardado2018] E. Guardado-Sanchez et al., "Probing the quench dynamics of antiferromagnetic correlations in a 2d quantum Ising spin system," Phys. Rev. X 8, 021069 (2018).
[Ahn2023] K. Kim, F. Yang, K. Mølmer, and J. Ahn, "Realization of an extremely anisotropic Heisenberg magnet in Rydberg atom arrays," arXiv:2307.04342 (2023).
[Steinert2023] L.-M. Steinert et al., "Spatially tunable spin interactions in neutral atom arrays," Phys. Rev. Lett. 130, 243001 (2023).
[HLC22] C. M. Holland, Y. Lu, and L. W. Cheuk, "On-demand entanglement of molecules in a reconfigurable optical tweezer array," arXiv:2210.06309 (2022).
[BYA22] Y. Bao et al., "Dipolar spin-exchange and entanglement between molecules in an optical tweezer array," arXiv:2211.09780 (2022).
[LMM23] J.-R. Li et al., "Tunable itinerant spin dynamics with polar molecules," Nature 614, 70 (2023).
[MZK20] M. Mizoguchi et al., "Ultrafast creation of overlapping Rydberg electrons in an atomic BEC and Mott-insulator lattice," Phys. Rev. Lett. 124, 253201 (2020).
[SM] See Supplemental Material for more details.
[Weber2017] S. Weber et al., "Calculation of Rydberg interaction potentials," Journal of Physics B: Atomic, Molecular and Optical Physics 50, 133001 (2017).
[SPA17] N. Šibalić, J. Pritchard, C. Adams, and K. Weatherill, "ARC: An open-source library for calculating properties of alkali Rydberg atoms," Computer Physics Communications 220, 319 (2017).
[SPT16] C. Sommer et al., "Time-domain Ramsey interferometry with interacting Rydberg atoms," Phys. Rev. A 94, 053607 (2016).
[HBP23] H. Hwang, A. Byun, J. Park, S. de Léséleuc, and J. Ahn, "Optical tweezers throw and catch single atoms," Optica 10, 401 (2023).
[Ahn2020] H. Jo, Y. Song, M. Kim, and J. Ahn, "Rydberg atom entanglements in the weak coupling regime," Phys. Rev. Lett. 124, 033603 (2020).
[Levine2018] H. Levine et al., "High-fidelity control and entanglement of Rydberg-atom qubits," Phys. Rev. Lett. 121, 123603 (2018).
[Morsch2016] R. Faoro et al., "van der Waals explosion of cold Rydberg clusters," Phys. Rev. A 93, 030701 (2016).
[Bergschneider2019] A. Bergschneider et al., "Experimental characterization of two-particle entanglement through position and momentum correlations," Nature Physics 15, 640 (2019).
[Brown2022] M. O. Brown et al., "Time-of-flight quantum tomography of single atom motion," arXiv:2203.03053 (2022).
[AHT1968] U. Haeberlen and J. S. Waugh, "Coherent averaging effects in magnetic resonance," Phys. Rev. 175, 453 (1968).
[Dalibard2014] N. Goldman and J. Dalibard, "Periodically driven quantum systems: Effective Hamiltonians and engineered gauge fields," Phys. Rev. X 4, 031027 (2014).
[Eckardt2015] A. Eckardt and E. Anisimovas, "High-frequency approximation for periodically driven quantum systems from a Floquet-space perspective," New Journal of Physics 17, 093039 (2015).
[Choi2020] J. Choi, H. Zhou, H. S. Knowles, R. Landig, S. Choi, and M. D. Lukin, "Robust dynamic Hamiltonian engineering of many-body spin systems," Phys. Rev. X 10, 031002 (2020).
[JHK16] Y.-Y. Jau, A. Hankin, T. Keating, I. H. Deutsch, and G. Biedermann, "Entangling atomic spins with a Rydberg-dressed spin-flip blockade," Nature Physics 12, 71 (2016).
[GSB21] E. Guardado-Sanchez et al., "Quench dynamics of a Fermi gas with strong nonlocal interactions," Phys. Rev. X 11, 021036 (2021).
[Wigner1934] E. Wigner, "On the interaction of electrons in metals," Phys. Rev. 46, 1002 (1934).

§ SUPPLEMENTAL MATERIAL

§.§ Rydberg excitation

The unity-filling atomic Mott insulator is prepared in the 5S_1/2, |F=2, m_F=-2⟩ hyperfine ground state of ^87Rb <cit.>. We turn off the trapping and optical lattice beams ∼2 μs before the Rydberg excitation to avoid multiphoton ionization. We then use a two-photon excitation, with picosecond infrared (IR) and blue laser pulses, to excite the ground-state atoms to the |29S_1/2, m_F=-2⟩ Rydberg state. The pulsed laser system for the excitation is as described in Ref. <cit.>.

In detail, we prepare the 29S Rydberg state by using σ^- and σ^+ polarized IR and blue pulses, as shown in Fig. <ref>(a). In order to resolve the 29S state from the nearby 27D state, which is only 80 GHz lower in energy, we reduce the excitation bandwidth from previous works by roughly half. At the 29S-state resonance (where we perform experiments), the population in the nearby 27D state is found to be only 4% of the total population, which is negligibly small. The laser pulses have an energy of 50 nJ (IR) and 560 nJ (blue), and a 1/e^2 diameter of 230 μm (IR) and 50 μm (blue). In order to estimate the excitation bandwidth, we excite only the |27D_5/2, m_F=-4⟩ state by using σ^- polarization for both laser pulses (Fig. <ref>(a)). A fit to the purple curve in Fig. <ref>(b) gives a 72(5) GHz (FWHM) bandwidth. In this measurement, the blue pulse energy was reduced to 300 nJ.

§.§ Rydberg state detection

At the end of the experiment, the Rydberg atoms are ionized by a strong electric field, detected by a micro-channel plate (MCP), and counted after going through a pre-amplifier and a time-gated integrator. This detection setup is as described in Ref. <cit.>. For the field ionization, we applied +2.5 kV pulses to six electrodes (red electrodes in Ref. <cit.>) and -3 kV pulses to two electrodes (blue electrodes in Ref. <cit.>).
§.§ Ramsey measurements

The Ramsey measurements are performed by producing a pair of pump and probe pulses with an optical delay-line interferometer <cit.>. The pump-probe delay τ was tuned by a mechanical stage, whereas the fine delay was controlled with attosecond (as) precision by using a piezoelectric transducer to observe the 1 femtosecond-period Ramsey fringe. We scanned the relative delay in steps of ∼60 as over a range of ∼3 femtoseconds, see Fig. <ref>(a).

Experiments are realized both on a Mott-insulator sample, where atoms strongly interact with their neighbours, and on a low-density reference sample where interactions can be neglected. This procedure is described in Ref. <cit.>, and allows us to extract a phase shift of the Ramsey interferograms between the interacting and reference samples. For the Mott-insulator measurement, the Rydberg population is obtained after sending a single pump-probe pulse pair for each experimental realization. For the reference measurement, we repeat the pump-probe sequence 150 times, every 1 millisecond, on each low-density sample. This allows us to increase statistics and reduce the influence of shot-to-shot uncertainties in pulse energy, pulse pointing, and atom number fluctuations. The ion signals are measured after each pair of pump and probe pulses, and we record their decrease caused by the depletion of the sample from the finite Rydberg population. This population is extracted by fitting an exponential decay <cit.>. By implementing this scheme, we obtain a 30% reduction in the statistical uncertainties on the contrast and phase of the reference-sample Ramsey interferograms, as shown in Fig. <ref>.

The Ramsey signals are measured alternately for the reference sample and the Mott-insulator state. Before and after each measurement, we check the number of atoms N and the Rydberg state population p. We exclude data whose average values deviate by more than 15% from the set values N^set ∼ 30000 atoms and p^set ∼ 4.8%, or change by more than 15%, as compared to the average values, between the values recorded before and after the measurement.

§.§ Many-body dynamics for the spin-motion coupled system

The Ramsey signal P_j for an atom j is given by the following expression <cit.>:

P_j(τ) = 2p(1-p) Re[ 1 + C_j(τ) e^{i(E_r τ/ħ + ϕ_0)} ].

Here, p is the population in the Rydberg state, E_r is the energy difference between the ground and Rydberg states, τ is the pump-probe delay, ϕ_0 is a phase arising from the AC-Stark shifts during the pulse excitation, and C_j(τ) is the interaction-induced modulation of the Ramsey fringe that reflects the coherences established in the system during the many-body dynamics. The experimentally observed many-particle signal is given by averaging over the contributions from all the atoms, P̅(τ) = (1/N) ∑_{j=1}^{N} P_j(τ). The term C_j(τ) contains the full signature of the interactions, and the Ramsey contrast and phase are related to its absolute value and angle, respectively. For a pure spin-spin model (SS) and just two atoms j and k, this term reads:

C^SS_{j,k}(τ) = (1-p) + p e^{i V_jk τ},

where V_jk is the van der Waals potential between atoms j and k. This complex-valued term has minimum amplitude for V_jk τ = π, corresponding to maximal spin-spin entanglement. In the special case p = 0.5, the Ramsey contrast would vanish. For a many-body system, the interaction of atom j with all possible other atoms k has to be included <cit.>:

C^SS_j(τ) = ∏_{k ≠ j} [ (1-p) + p e^{i V_jk τ} ].
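For illustration, the many-body product over neighbors can be evaluated in a few lines. The sketch below uses the fitted C_6^exp and p ≃ 4.8% from the main text, and truncates the product at the fourth nearest-neighbor shell, as done in the calculation described at the end of this section; replacing the exponential factor with the Monte-Carlo overlap O_jk(τ) sketched earlier yields the spin-motion version:

import numpy as np

p = 0.048                  # Rydberg population
a_lat = 0.532              # um
C6 = 2 * np.pi * 5.5       # rad/us x um^6

# neighbour shells of a cubic lattice up to distance 2*a_lat (4th NN)
offsets = [np.array([i, j, k]) for i in range(-2, 3)
           for j in range(-2, 3) for k in range(-2, 3)
           if 0 < i*i + j*j + k*k <= 4]

def C_spin(t_ns):
    out = 1.0 + 0j
    for d in offsets:
        V = C6 / (a_lat * np.linalg.norm(d))**6   # pair interaction, rad/us
        out *= (1 - p) + p * np.exp(1j * V * t_ns * 1e-3)
    return out

for t in (0.5, 1.0, 2.0, 3.0):
    C = C_spin(t)
    print(f"t = {t} ns: contrast {abs(C):.3f}, phase {np.angle(C):.3f} rad")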
We now include the external degrees of freedom and the spin-motion coupling (SM). We first focus on the case of only two atoms j and k and obtain:

C^SM_{j,k}(τ) = (1-p) + p ⟨ψ_j; ψ_k | e^{iV(r̂)τ} | ψ_j; ψ_k⟩ = (1-p) + p O_jk(τ),

where |ψ_j⟩ describes the spatial wavefunction of atom j, and O_jk(τ) is the overlap term introduced in the main text. For atoms localized to an infinitesimal region, the overlap reduces to the previous case, O_jk(τ) = e^{iV_jkτ}. Extending the calculation to the many-body spin-motion-coupled system is more subtle than for the spin-spin model. Indeed, a strict derivation requires the calculation of higher-order terms, such as O_jkl = ⟨ψ_j; ψ_k; ψ_l | e^{iV(r̂)τ} | ψ_j; ψ_k; ψ_l⟩, corresponding to 3 atoms j, k, and l in the Rydberg state. Such higher-order terms do not decompose simply into products of two-body terms, as they do for the pure spin model. The physical picture is that the momentum kicks on atom j from two other atoms k and l can compensate each other. However, to simplify the calculations, we make the approximation that higher-order overlaps decompose into products of two-body overlaps and write:

C^SM_j(t) ≃ ∏_{k ≠ j} [ (1-p) + p O_jk(t) ].

We justify this approximation by pointing out that, in our regime of low Rydberg-state population (p ∼ 4.8%), the dominant error term (the 3-body overlaps) contributes only a small fraction, of order p, of the two-body terms. Finally, the calculation over the other atoms k is performed only up to the fourth nearest-neighbor in the 3D lattice (distance 2a_lat), where the interaction has already dropped by a factor 2^6 = 64.

§.§ References

[BSM23_1] V. Bharti et al., "Picosecond-scale ultrafast many-body dynamics in an ultracold Rydberg-excited atomic Mott insulator," Phys. Rev. Lett. 131, 123201 (2023).
[MZK20_1] M. Mizoguchi et al., "Ultrafast creation of overlapping Rydberg electrons in an atomic BEC and Mott-insulator lattice," Phys. Rev. Lett. 124, 253201 (2020).
[SPT16_1] C. Sommer et al., "Time-domain Ramsey interferometry with interacting Rydberg atoms," Phys. Rev. A 94, 053607 (2016).
| http://arxiv.org/abs/2311.15575v1 | {
"authors": [
"Vineet Bharti",
"Seiji Sugawa",
"Masaya Kunimi",
"Vikas Singh Chauhan",
"Tirumalasetty Panduranga Mahesh",
"Michiteru Mizoguchi",
"Takuya Matsubara",
"Takafumi Tomita",
"Sylvain de Léséleuc",
"Kenji Ohmori"
],
"categories": [
"physics.atom-ph",
"cond-mat.quant-gas",
"quant-ph"
],
"primary_category": "physics.atom-ph",
"published": "20231127070402",
"title": "Strong Spin-Motion Coupling in the Ultrafast Quantum Many-body Dynamics of Rydberg Atoms in a Mott-insulator Lattice"
} |
Learning with Errors over Group Rings Constructed by Semi-direct Product^†

Jiaqi Liu, Fang-Wei Fu

Jiaqi Liu and Fang-Wei Fu are with the Chern Institute of Mathematics and LPMC, Nankai University, Tianjin 300071, China. Emails: [email protected], [email protected]. ^†This research is supported by the National Key Research and Development Program of China (Grant Nos. 2022YFA1005000 and 2018YFA0704703), the National Natural Science Foundation of China (Grant Nos. 12141108, 62371259, 12226336), the Fundamental Research Funds for the Central Universities of China (Nankai University), and the Nankai Zhide Foundation.

Manuscript submitted January 14, 2024.

The ever-increasing demand for high-quality and heterogeneous wireless communication services has driven extensive research on dynamic optimization strategies in wireless networks. Among several possible approaches, multi-agent deep reinforcement learning (MADRL) has emerged as a promising method to address a wide range of complex optimization problems like power control. However, the seamless application of MADRL to a variety of network optimization problems faces several challenges related to convergence. In this paper, we present the use of graphs as communication-inducing structures among distributed agents as an effective means to mitigate these challenges. Specifically, we harness graph neural networks (GNNs) as neural architectures for policy parameterization to introduce a relational inductive bias in the collective decision-making process. Most importantly, we focus on modeling the dynamic interactions among sets of neighboring agents through the introduction of innovative methods for defining a graph-induced framework for integrated communication and learning. Finally, the superior generalization capabilities of the proposed methodology to larger networks and to networks with different user categories are verified through simulations.

Index Terms: Graph Neural Networks, Multi-Agent Deep Reinforcement Learning, Wireless Networks, Power Control

§ INTRODUCTION

Wireless communication networks constitute complex systems demanding careful optimization of network procedures to attain predefined performance objectives. MADRL, owing to its inherent advantages, has emerged as a promising strategy for the optimization of a variety of network problems. Nevertheless, the practical implementation of MADRL in real systems is hindered by challenges related to convergence, which continue to constitute an active area of research. These challenges encompass the non-stationarity of the environment, the partial observability of the state, as well as the coordination and cooperation among agents <cit.>. To this end, this paper elucidates the role of leveraging graph structures as an effective means to account for non-stationarity in MADRL systems by introducing a relational inductive bias in the collective decision-making process.
When GNNs are leveraged as neural architectures for policy optimization, the learning process is governed by feature aggregation over neighboring entities, and computations over graphs afford a strong relational inductive bias beyond what convolutional and recurrent layers can provide <cit.>. The underlying intuition behind this approach is that choosing the architecture with the right bias for the problem at hand might dramatically increase cooperation among agents, allowing them to communicate local features to the relevant peers to compensate for partial observability. Moreover, GNN-based optimization is capturing significant interest for cellular networks, as peer-to-peer communication between base stations and AI-related messaging over the control plane are currently being studied for standardization <cit.>.

In this work, graph structures are harnessed to tackle a power control optimization problem in cellular networks, serving as an illustrative example within the realm of network optimization in mobile radio networks. A particular emphasis is placed on effectively accounting for the mutual interplay between sets of neighboring agents through the introduction of innovative strategies for defining a graph-induced framework for integrated communication and learning.

§ RELATED WORK AND CONTRIBUTIONS

The utilization of GNNs for the optimization of wireless networks has gained significant traction in the recent literature. This interest can be attributed to the innate characteristics of GNNs, which enable scalable solutions and exhibit inductive capability and, thanks to the permutation equivariance property, increased generalization. Notably, these properties find practical application in works such as <cit.>, where GNNs are harnessed to capture the dynamic structure of fading channel states for the purpose of learning optimal resource allocation policies in wireless networks. Another domain that has witnessed substantial utilization of GNNs is channel management within wireless local area networks (WLANs), as evidenced by works such as <cit.> and <cit.>. A notable insight derived from the study by Gao et al. <cit.> is the inherent ability of GNNs to provide decentralized inference, rendering them a viable and promising approach for the practical implementation of over-the-air MADRL systems. Similarly, the application of GNNs to power control optimization challenges within wireless networks is explored in works such as <cit.>, <cit.>, and <cit.>.

Throughout the body of pertinent literature, GNNs are utilized as either centralized controllers or decentralized entities to model data-driven policies based on feature convolution over graphs. However, despite the crucial role of the graph structure in defining agents' interactions, the effect of different graph formation strategies on achieving a collective goal is a problem often overlooked. Yet the definition of the graph structure directly shapes the communication induced among distributed agents, and thereby their ability to cooperate effectively. For instance, <cit.> puts forth the idea that uncontrolled information sharing among all agents in a distributed setting could be detrimental to the learning process. To this end, an attentional communication model is proposed to learn when communication should take place.
Another noteworthy work in this field is presented by <cit.>, showcasing how targeted communication, where agents learn both what messages to send and whom to address them to, can compensate for partial observability in a multi-agent setting.

In light of the above background, the presented work introduces the following contributions:

* We consider graphs as communication-inducing structures for distributed optimization problems. Specifically, by parameterizing a joint action policy leveraging GNNs, the relational inductive bias introduced by message transformation (i.e., what to communicate) and message passing (i.e., whom to communicate with) is shown to have a deep impact on the decision-making process.
* We showcase increasingly articulated strategies to integrate domain knowledge into the formulation of graph construction strategies. Our investigation demonstrates the substantial impact of these strategies on the collective capacity to acquire cooperative behavior and enhance the overall performance.
* Finally, a novel approach to learning optimal edge weights in an end-to-end fashion is presented, showcasing impressive inductive capabilities and surpassing conventional strategies grounded in domain expertise.

§ SYSTEM MODEL AND FORMULATION

The reference scenario under consideration includes a group of base stations concurrently providing communication services to a set of user equipments (UEs), categorized based on their distinct service and performance requirements. Specifically, S categories are considered, with increasingly demanding requirements in terms of reliability, quantified by the bit error rate (BER). An overall depiction of the scenario is provided in Fig. <ref>. The wireless network modeling, the UEs' requirements and traffic distributions, and the partially observable Markov decision process (POMDP) formulation are described hereafter.

§.§ Wireless network modeling

We consider a graph representation of the network 𝒢=(𝒱, ℰ), where nodes 𝒱 correspond to base stations/agents[We use the terms base stations and agents interchangeably.] and edges ℰ represent virtual links between them. To determine the graph structure, it is necessary to define an adjacency matrix, denoted by A ∈ ℝ^|𝒱|×|𝒱|. Each element a_u,v ∈ A indicates the connectivity between nodes u, v ∈ 𝒱. A fundamental aspect of the analysis presented here revolves around defining the proper graph-inducing structure for the set of base stations. To this end, four distinct strategies to determine A are presented.

§.§.§ Binary edges

As a first strategy, we consider a binary representation for the edges:

a_u,v = 1 if ||𝐬(u) - 𝐬(v)||_2 < D, and 0 otherwise,

where 𝐬(u) denotes the position of node u in the 2D space, ||·||_2 is the Euclidean distance, and D ∈ ℝ is a threshold. This results in a symmetric matrix, and the corresponding graph 𝒢(𝒱, ℰ) is unweighted and undirected.

§.§.§ Distance-based edges

A more informative approach involves considering edges with continuous values a_u,v ∈ ℝ, based on the physical proximity between the nodes, i.e.,

a_u,v ∝ e^{-||𝐬(u) - 𝐬(v)||_2}.

This results in a weighted and undirected graph, since the physical distance between two nodes is symmetric.
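For concreteness, these first two strategies reduce to a few lines of array manipulation. The sketch below is illustrative: the positions, the threshold D, and the use of kilometres (so that e^{-d} remains informative) are assumptions, not values taken from the paper:

import numpy as np

def binary_adjacency(pos, D):
    # binary rule: unit edge when two base stations are closer than D
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    A = (dist < D).astype(float)
    np.fill_diagonal(A, 0.0)              # no self-loops
    return A

def distance_adjacency(pos):
    # distance rule: edge weight decaying exponentially with distance
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    A = np.exp(-dist)
    np.fill_diagonal(A, 0.0)
    return A

pos = np.random.default_rng(1).uniform(0.0, 5.0, size=(11, 2))  # 11 BSs, km
print(binary_adjacency(pos, D=2.0).sum(axis=1))   # node degrees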
§.§.§ Relation-based edges

This method involves determining edges based on the mutual interaction between the sets of nodes. In other words, nodes are deemed adjacent if their actions have a mutual influence on one another. The evaluation of this mutual interaction poses a non-trivial challenge and necessitates the application of advanced tools such as directed graphical modeling <cit.>, causal inference <cit.>, or, in the specific context of power optimization in wireless networks, the utilization of measurements or empirical models to quantify and assess mutual interference. In a general formulation, this can be expressed as

a_u,v ∝ ℐ(u | v),

where ℐ(u | v) indicates the influence that node v exerts on u, and typically ℐ(u | v) ≠ ℐ(v | u). In a wireless communication network, ℐ(u | ·) could be determined as the level of interfering power at node u from each node v ∈ 𝒱 connected to u. This results in a weighted and directed or undirected graph, depending on the symmetry of ℐ(· | ·).

§.§.§ Learning-based edges

In our final strategy, an end-to-end learning method is introduced for the determination of A. During the collective training phase of the agents, the graph's edge weights are derived simultaneously with the policy parameters. This is achieved by using network topological characteristics as input features for a separate GNN. This second, auxiliary GNN is not tasked with parameterizing the policy, but focuses on the dynamic assignment of edge weights as the training progresses.

The initial stage of the process entails determining the geometric properties and connections among network nodes. To accomplish this, we establish edge features, denoted as ℱ_u,v, for each directed edge a_u,v. The edge feature set is composed as follows:

ℱ_u,v = {d_u,v, sin(θ_u,v), cos(θ_u,v)},

where:

* d_u,v ≜ ||u - v||_2 is the Euclidean distance between nodes u and v.
* θ_u,v ≜ arctan2(v_y - u_y, v_x - u_x) is the angle between nodes u and v.

The training of the auxiliary GNN for edge prediction hinges upon the introduction of an auxiliary graph structure, denoted as 𝒢_f=(𝒱_f, ℰ_f). In this representation, the nodes 𝒱_f correspond to the set of edges ℰ of the original graph 𝒢, and the node features on 𝒢_f are the corresponding edge features ℱ_u,v of 𝒢. The set of edges ℰ_f connecting nodes in 𝒢_f is determined as a binary adjacency matrix. Since nodes 𝒱_f in 𝒢_f correspond to edges ℰ in 𝒢, two nodes in 𝒱_f are deemed adjacent if their associated edges in 𝒢 share a common node. More formally, the edge a_{u_f,v_f} ∈ {0, 1}, for u_f, v_f ∈ 𝒱_f, can be expressed as

a_{u_f,v_f} = 1 if 𝐨(u_f) = 𝐨(v_f), and 0 otherwise,

where 𝐨(u_f) denotes the node o ∈ 𝒱 from which the edge e ∈ ℰ associated with u_f originates. This definition ensures that the auxiliary graph 𝒢_f captures the relationships between edge features that are associated with the same node in the original network.

Finally, the auxiliary GNN is tasked with determining the edge weights through the auxiliary graph as described above. As previously mentioned, this procedure involves an end-to-end learning approach, which accounts for jointly learning the optimal edge weights (i.e., the node embeddings on the auxiliary graph), given the edge features of 𝒢, and the optimal policy parameters. To meet this objective, a multi-layered GNN-based architecture is designed. First, 𝒢_f is processed through the auxiliary GNN to compute the edge weights for 𝒢. Subsequently, the policy-GNN operates on 𝒢 using the edge weights estimated by the auxiliary GNN in the previous step. Upon performing a backpropagation step, this architecture allows the concurrent update of the auxiliary GNN and the policy-GNN parameters.
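A minimal sketch of this construction, assuming the node positions and the directed edge list of 𝒢 as inputs, builds the edge features and the auxiliary line-graph adjacency defined above (two auxiliary nodes are linked when their edges share the originating node, following the formal definition):

import numpy as np

def edge_features(pos, edges):
    # distance and angle features for each directed edge (u, v)
    feats = []
    for (u, v) in edges:
        d = pos[v] - pos[u]
        dist = np.linalg.norm(d)
        theta = np.arctan2(d[1], d[0])
        feats.append([dist, np.sin(theta), np.cos(theta)])
    return np.asarray(feats)

def line_graph_adjacency(edges):
    # nodes of G_f are the edges of G; two nodes are adjacent when the
    # corresponding edges originate from the same node of G
    n = len(edges)
    A_f = np.zeros((n, n))
    for a, (ua, _) in enumerate(edges):
        for b, (ub, _) in enumerate(edges):
            if a != b and ua == ub:
                A_f[a, b] = 1.0
    return A_f

The auxiliary GNN then runs on (edge_features, A_f) and outputs one scalar embedding per auxiliary node, i.e., one weight per directed edge of 𝒢.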
§.§ UEs' requirements and traffic modeling

In the considered system model, the user generation process follows a Poisson cluster process (PCP). A PCP is a stochastic point process defined as the union of points resulting from M independent homogeneous Poisson point processes (PPPs) centered around the base stations on the Euclidean space ℝ^2. More formally, let Φ_C_i be a homogeneous PPP, centered on base station C_i with intensity λ_C_i > 0, which generates a set of random points 𝐬 ∈ ℝ^2, denoted by 𝒞_𝐬,i. Each user belongs to one of S distinct categories based on its BER requirement. Consequently, the i-th base station C_i is characterized by S intensity parameters, denoted by λ_C_i^(k), k ∈ {1, …, S}, accounting for the distinct user categories. The resulting PCP, denoted by 𝒰, is defined as the union of all resulting points:

𝒰 = ⋃_{i ∈ {1, …, M}, k ∈ {1, …, S}} 𝒞_𝐬,i^(k).

To meet the varying BER requirements of the different user categories, an independent adaptive modulation and coding scheme (MCS) is employed for each category. The adaptive MCS tailors the link spectral efficiency η on a per-user basis to ensure that the required average BER is achieved for every category k. As a general expression, the relationship between the signal-to-noise ratio and the average bit error rate P_b for an uncoded M-QAM modulation scheme can be obtained from the union bound on the error probability, which yields a closed-form expression that is a function of the distance between signal constellation points:

P_b ≃ (1/log_2 L) · ((L-1)/L) · erfc(√(|h_0|^2/(2N))),

where L = √M denotes the constellation order, 2|h_0| denotes the minimum distance between signal constellation points, and N denotes the noise power.

§.§ Partially Observable Markov Decision Process

The multi-agent reinforcement learning (MARL) problem is formulated as a POMDP, where agents collect local observations from the global environment. Considering the nature of the optimization problem at hand, which involves distributed agents collectively aiming for an optimal power-tuning configuration, and because there are no temporal dependencies between the actions of the agents, the problem is formulated as a stateless POMDP. In this formulation, the state transition dynamics are independent of past states or actions. As a consequence, the associated POMDP tuple is given by

⟨𝒮, 𝒜, R(s, a)⟩,

where 𝒮 denotes the state/observation space, 𝒜 denotes the action space, and R(s, a) indicates the reward as a function of the state s and the action a.

§.§.§ Observation space

Each agent collects information pertaining to the user traffic distribution across a fixed-size grid G defined in polar coordinates and divided into bins, which are indexed henceforth by distance d and angle ϕ with respect to the position of the base station. This approach results in a state space of constant dimensions that can be represented as a 3D tensor:

𝒮 = [ 𝐮_{d_1,ϕ_1} … 𝐮_{d_1,ϕ_n}; 𝐮_{d_2,ϕ_1} ⋱ 𝐮_{d_2,ϕ_n}; 𝐮_{d_m,ϕ_1} … 𝐮_{d_m,ϕ_n} ],

where 𝐮_{d,ϕ} = (u_{d,ϕ}^(1), …, u_{d,ϕ}^(S)) is a vector denoting the aggregated traffic for all the categories, and each u_{d,ϕ}^(k) is evaluated as

u_{d,ϕ}^(k) = ∑_{l ∈ G_{d,ϕ}} t_l^(k),

where t_l^(k) denotes the traffic demand of the l-th user of category k in the bin of G indexed by (d,ϕ).
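The traffic generation and the resulting local observation can be sketched as follows. This is illustrative only: the Gaussian scatter of cluster points around each base station, the cluster spread σ, the bin edges, and the unit per-user traffic demand t_l^(k) = 1 are assumptions not specified above:

import numpy as np

rng = np.random.default_rng(2)

def sample_users(bs_pos, lam, sigma=0.5):
    # clustered user generation: a Poisson number of users of each
    # category k is scattered around each base station i with rate lam[i, k]
    users = []                                    # (x, y, category)
    for i, c in enumerate(bs_pos):
        for k in range(lam.shape[1]):
            n = rng.poisson(lam[i, k])
            pts = c + sigma * rng.standard_normal((n, 2))
            users += [(x, y, k) for x, y in pts]
    return users

def local_observation(c, users, S, d_edges, n_phi=8):
    # polar-grid traffic tensor centred on base station c
    obs = np.zeros((len(d_edges) - 1, n_phi, S))
    for x, y, k in users:
        dx, dy = x - c[0], y - c[1]
        d, phi = np.hypot(dx, dy), np.arctan2(dy, dx)
        di = np.searchsorted(d_edges, d) - 1
        if 0 <= di < len(d_edges) - 1:
            pi_ = int((phi + np.pi) / (2 * np.pi) * n_phi) % n_phi
            obs[di, pi_, k] += 1.0                # unit demand t_l = 1
    return obs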
§.§.§ Action space

Each agent i tunes its own transmit power p_i, modeled as a discrete set, as a function of its own local observations. The action space is thus given by

𝒜 = {p_0, …, p_M},

where p_0, …, p_M denote the available power levels.

§.§.§ Optimization problem

Here, the objectives that steer the efforts of the distributed agents in achieving an optimal solution are delineated. The goal is to solve the following optimization problem:

max_𝐩 ∑_{l ∈ G} ∑_k η_l^(k)(𝐩) B_l(𝐩),

where η_l^(k)(𝐩) denotes the link spectral efficiency of the l-th user, for all users in the reference area given by G. The link spectral efficiency is evaluated as a function of the perceived signal-to-interference-plus-noise ratio (SINR) and of the service category k. Furthermore, the available bandwidth for the l-th user, denoted by B_l(𝐩), depends on the specific scheduling mechanism in use and on the total number of users connected to the base station serving the l-th user.

The objective function in (<ref>) encompasses two critical considerations: by choosing a proper collective power configuration 𝐩, the base stations should maximize the average link spectral efficiency while at the same time performing mobility load balancing (MLB) to equally balance the number of users among the different base stations. User assignments to base stations occur upon executing action a and follow a best-server criterion: each user is assigned to the base station from which it receives the highest power. It is noteworthy that the optimization of the link spectral efficiency hinges on the distribution of user categories among base stations, which are subject to varying BER requirements, giving rise to intricate mutual interference dynamics.
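A compact sketch of the reward evaluation under the best-server criterion follows; the linear path gains g, the noise power N0, and the SINR-to-spectral-efficiency mapping eta (standing in for the per-category adaptive MCS) are assumed inputs:

import numpy as np

def objective(p, g, cat, B_tot, N0, eta):
    # p: [n_bs] tx powers, g: [n_bs, n_users] linear path gains,
    # cat: [n_users] user categories, eta(sinr, cat) -> bits/s/Hz
    rx = g * p[:, None]                       # rx power from each BS
    users = np.arange(g.shape[1])
    serving = rx.argmax(axis=0)               # best-server assignment
    signal = rx[serving, users]
    sinr = signal / (rx.sum(axis=0) - signal + N0)
    load = np.bincount(serving, minlength=len(p))
    B = B_tot / load[serving]                 # equal-share scheduling
    return np.sum(eta(sinr, cat) * B)

The load term B_tot/load couples the reward to the user distribution across cells, which is what pushes the agents toward the load-balancing behavior described above.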
By parameterizing the policy π using GNN, it becomes feasible to adopt a fully decentralized execution approach, enabling the transformation and aggregation of local features from neighboring agents through the mechanism of message passing. The type of GNN opted for the numerical experiments is the local k-dimensional GNN <cit.>. Its relative update function can be written as𝐡_v^(l+1)=σ(𝐖_1^(l) 𝐡_v^(l) + 𝐖_2^(l) AGG({e_u, v^(l)·𝐡_u^(l),∀ u ∈𝒩_v })) ,where e_u, v^(l) denotes the edge weight from node u to node v, which is evaluated according to one of the strategies presented in Sec. <ref>. As confirmed by the numerical findings presented in the subsequent section, introducing distinct relational biases by modeling edge weights according to different strategies (i.e., inducing a bias on what to communicate (message transformation), and to whom to communicate (message passing)) has a strong influence on the collective ability to learn how to cooperate for increased performance. empty§ SIMULATIONS AND NUMERICAL RESULTSIn order to test the validity of the proposed framework, and assess the role that different graph modeling strategies can have in learning a cooperative behavior, a series of numerical experiments is conducted on a simulated network scenario. The scenario comprises 11 base stations with randomly generated traffic, as described in Sec. <ref>. We considered system bandwidth B = 60 MHz, carrier frequency f_c = 3.7 GHz, and 3GPP UMa path loss channel model <cit.>. The four graph modeling strategies discussed in Sec. <ref> have been evaluated against a baseline strategy employing REINFORCE in conjunction with a policy parameterization using a classical DNN.Specifically, the “relation-based edges" strategy, which has been previously introduced as a general formulation, has been evaluated here according to a mutual interference criterion considering an average-to-worst-case scenario (i.e., serving cell transmitting with average tx power, interfering cell transmitting with maximum power). Assessing the effects of mutual interference relies heavily on the distribution of users across various service categories. Base stations, primarily serving users with more demanding requirements, are particularly susceptible to inter-cell interference compared to those serving other user groups. In particular, in all the training and inference tests the number of service categories S is set to 3. §.§ Training performanceThe plot in Fig. <ref> provides a detailed representation of the performance achieved during training time on the simulated network environment. The figure displays how the average reward evolves during training for the four considered graph models and the baseline, showcasing the performance of each model as a function of the number of training epochs. A total of 30 training instances has been carried out and results are portrayed displaying the mean reward together with the 99% confidence intervals. As evident from the figure, employing a GNN-based policy parameterization allows for much faster convergence and overall increased performance with respect to the solution employing a DNN-based policy. Since both methods employ CTDE with parameter sharing, the results clearly indicate that enabling agents to communicate their local features to their neighbors has a strong impact on enabling cooperation and improved performance. Also, as evident from the figure, the choice of the graph can greatly affect the performance. 
Concerning the individual strategies, the unweighted graph employing binary edges is unable to distinguish between more and less relevant neighbors, and it therefore achieves only mediocre performance. Conversely, graph structures embedding contextual information into their edges, such as physical proximity (orange curve) or the measured level of mutual interference (green curve), yield superior performance. This result shows how integrating domain knowledge into the problem formulation can drive the agents' collective behavior toward an improved solution in a shorter time. Finally, the graph with learnable edge weights (red curve) stands out as the most effective approach. By leveraging geometrical features of the environment, as described in Sec. <ref>, the proposed solution is able to learn in an end-to-end fashion the most effective formulation for edge weighting, while concurrently optimizing the weights in (<ref>) for policy parameterization. §.§ Generalization tests A unique characteristic of GNNs is their remarkable ability to generalize to unseen scenarios, thanks to their inherent permutation equivariance (Sec. <ref>). Here, results regarding the inductive capability of the proposed models are presented and discussed. As in the previous section, results are computed over 30 independent runs and are presented as the mean score, accompanied by 99% confidence intervals. These values are normalized with respect to the rewards achieved by learnable-edges-based GNNs trained for the same number of epochs on the larger networks. Fig. <ref> illustrates the behavior of the learned models when they are deployed in networks of progressively larger sizes. Within this context, it becomes apparent that all GNN models exhibit a stable trend or a marginal increase in their performance as the network size increases. This behavior is attributed to the increased difficulty of the learning task as the network size grows, making it more challenging for the training to converge. Consequently, these results suggest that training on smaller scenarios effectively scales to larger, unseen ones. Remarkably, the GNN with learnable edges performs better than the other models, indicating that it learns not only a better policy but one that generalizes better to larger networks. As a final result, the behavior of the learned models when deployed on networks with traffic patterns increasingly distinct from those encountered during training is depicted in Fig. <ref>. In the simulated scenario, the traffic is modeled as a PCP, and each base station is associated with three λ rates, one per user category. To measure the difference in traffic patterns on the x-axis, the cosine similarity between the vectors of λ rates is calculated; each vector's dimension is three times the number of agents in the system. Similarly to the previous case, the results are normalized with respect to the rewards obtained by learnable-edges-based GNNs trained for the same number of epochs on the increasingly different traffic patterns. As shown in the figure, all GNN models demonstrate remarkable generalization capabilities, despite a marginal yet tolerable reduction in their performance as traffic patterns vary. Once more, the GNN with learnable edges exhibits superior performance. § CONCLUSION In this paper, we investigated power control optimization in wireless networks through MARL and policy parameterization with GNNs.
Different adaptive graph modeling strategies were considered, including binary edges, distance-based edges, relation-based edges, and learnable edges. In particular, the latter approach, where edge weights are learned in an end-to-end fashion through an auxiliary GNN, offers a promising and flexible method that can adapt to the problem at hand while simultaneously optimizing the edge weights for policy parameterization. This method indeed proved effective, leading to faster convergence and improved performance during training. At the same time, it managed the complexity of larger, unseen inference scenarios better than the baselines, exhibiting solid generalization capabilities when addressing scalability and traffic-variability challenges. To conclude, this study highlights the importance of modeling the communication structure among agents, which can significantly influence the overall performance of multi-agent systems in wireless networks. | http://arxiv.org/abs/2311.15858v1 | {
"authors": [
"Lorenzo Mario Amorosa",
"Marco Skocaj",
"Roberto Verdone",
"Deniz Gündüz"
],
"categories": [
"cs.NI",
"cs.LG",
"cs.MA"
],
"primary_category": "cs.NI",
"published": "20231127142540",
"title": "Multi-Agent Reinforcement Learning for Power Control in Wireless Networks via Adaptive Graphs"
} |
Ke Qin (0000-0001-9206-1641), School of Science, Qingdao University of Technology, Qingdao 266525, People's Republic of China; School of Physics, Zhengzhou University, Zhengzhou 450001, People's Republic of China
Kun Xu (0000-0002-9739-8929), School of Science, Qingdao University of Technology, Qingdao 266525, People's Republic of China
Dong-Dong Liu (0000-0002-3007-8197), Key Laboratory for the Structure and Evolution of Celestial Objects, Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, People's Republic of China
Long Jiang (0000-0002-2479-1295), School of Science, Qingdao University of Technology, Qingdao 266525, People's Republic of China; School of Physics and Electrical Information, Shangqiu Normal University, Shangqiu 476000, People's Republic of China
Bo Wang (0000-0002-3231-1167), Key Laboratory for the Structure and Evolution of Celestial Objects, Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, People's Republic of China
Wen-Cong Chen (0000-0002-0785-5349), School of Science, Qingdao University of Technology, Qingdao 266525, People's Republic of China; School of Physics and Electrical Information, Shangqiu Normal University, Shangqiu 476000, People's Republic of China; [email protected]
Black hole (BH) ultracompact X-ray binaries (UCXBs) are potential Galactic low-frequency gravitational wave (GW) sources. As an alternative channel, BH UCXBs can evolve from BH+He star binaries. In this work, we perform detailed stellar evolution models for the formation and evolution of BH UCXBs evolving from the He star channel to diagnose their detectability as low-frequency GW sources. Our calculations find that some nascent BH+He star binaries after the common-envelope (CE) phase can evolve into UCXB-LISA sources with a maximum GW frequency of ∼5 mHz, which can be detected at a distance of 10 kpc (or 100 kpc). Once BH+He star systems become UCXBs through mass transfer, they emit X-ray luminosities of ∼10^38 ergs^-1, making them ideal multimessenger objects. If the initial He-star masses are ≥ 0.7 M_⊙, those systems are likely to experience two Roche lobe overflows, and the X-ray luminosity can reach a maximum of 3.5×10^39 ergs^-1 in the second mass-transfer stage. The initial He-star masses and initial orbital periods of the progenitors of Galactic BH UCXB-LISA sources are in the range of 0.32-2.9 M_⊙ and 0.02-0.19 days, respectively. Nearly all BH+He star binaries in the above parameter space can evolve into GW sources whose chirp masses can be accurately measured. Employing a population synthesis simulation, we predict that the birthrate and detection number of Galactic BH UCXB-LISA sources evolving from the He star channel are R=2.2×10^-6 yr^-1 and 33 for an optimistic CE parameter, respectively. § INTRODUCTION Ultracompact X-ray binaries (UCXBs) are low-mass X-ray binaries (LMXBs) with ultra-short orbital periods (usually less than 60 minutes), consisting of an accreting compact object and a hydrogen-poor donor star <cit.>. UCXBs can help us to understand the angular momentum loss mechanisms <cit.>, the common-envelope (CE) evolution <cit.>, and the accretion process of compact objects <cit.>; thus they are ideal laboratories for testing stellar and binary evolutionary theory. Furthermore, UCXBs can emit continuous low-frequency gravitational wave (GW) signals, which could be detected by space-borne GW detectors such as the Laser Interferometer Space Antenna <cit.>, TianQin <cit.>, and Taiji <cit.>.
Therefore, UCXBs are ideal objects for multi-messenger investigations <cit.>. Thus far, the number of confirmed UCXBs (with an accurately measured orbital period ≤ 80 minutes) and candidates is around 45 <cit.>. Based on the accurate detection of orbital periods, 20 sources have been identified as UCXBs with high confidence, which include 11 persistent sources and 9 transient sources <cit.>. Among the confirmed UCXBs, 19 sources were discovered to include a neutron star (NS). At present, there exist two black hole (BH) UCXB candidates. The first one is the luminous X-ray source X9 in the globular cluster 47 Tucanae, which was thought to be a BH accreting from a white dwarf (WD) in a close orbit <cit.>. Recently, <cit.> found some evidence from simultaneous NICER and NuSTAR observations that the compact object in the UCXB 4U 0614+091 may be a BH. <cit.> estimated that the main-sequence (MS) channel can form 60-80 NS UCXB-LISA sources in the Galaxy. However, LISA can only detect ∼4 BH UCXBs evolving from the MS channel <cit.>. If the number ratio of NS to BH UCXB-LISA sources is similar to the ratio of identified NS to BH UCXBs, the expected number of confirmed BH UCXBs evolving from the MS channel is ∼1, which is comparable to the present observations. Therefore, it is difficult to observe BH UCXBs. Because of the compactness of UCXBs, their donor stars are generally thought to be partially or completely degenerate stars such as WDs or helium (He) stars <cit.>. Optical spectroscopic analysis of UCXBs can help us to identify the properties of the donor stars <cit.>. According to X-ray, ultraviolet, and optical spectroscopy, the chemical elements of the accretion disks in some UCXBs may include He, C, N, O, Ne, and Si <cit.>. Such a diversity of chemical compositions implies that those donor stars have evolved to different nuclear-burning stages and interior degeneracies, which requires different evolutionary models to account for the formation of UCXBs <cit.>. It is generally thought that UCXBs in the Galactic field evolved from the following three channels: the WD channel, the evolved MS star channel, and the He star channel <cit.>. In the first channel, the progenitors of UCXBs are compact binaries consisting of a NS/BH and a low-mass WD, in which GW radiation drives the mass transfer <cit.>. <cit.> performed complete numerical models for the formation of UCXBs evolving through stable mass transfer from a WD to an accreting NS, and found that the WD channel can reproduce the observed properties of some UCXBs with high He abundances. Using an improved mass transfer hydrodynamics model, <cit.> argued that only NS+He WD binaries with a donor-star mass less than 0.2 M_⊙ could form UCXBs through stable mass transfer. <cit.> also found that the donor-star masses have to be less than 0.4 M_⊙ to form UCXBs from NS+CO WD binaries. The second channel originates from a NS/BH accreting mass from a MS donor star that fills its Roche lobe. If the MS star starts mass transfer late enough and the orbital-angular-momentum loss by magnetic braking is efficient, the system would evolve toward a UCXB. In this channel, UCXBs generally evolve from BH/NS+MS binaries whose initial orbital periods are shorter than the bifurcation period <cit.>. In the UCXB stage, the donor star is most likely to evolve into a WD.
It is noteworthy that mass transfer never ceases during the whole evolutionary stage, except for those systems with a NS and a fine-tuned initial orbital period <cit.>. In the He star channel, the direct progenitors of UCXBs are BH/NS+He star binaries. It is generally believed that BH/NS+He star binaries are the evolutionary products of high-mass X-ray binaries, in which the hydrogen envelopes of the He-star progenitors are fully ejected in the CE stage <cit.>. Due to the close orbit, GW radiation dominates the orbital evolution of the nascent BH/NS+He star system and triggers a mass transfer. The BH/NS accretes He-rich material from the He star that fills its Roche lobe, and the system appears as a UCXB <cit.>. Employing detailed stellar evolution models, <cit.> found that NS/BH+He star binaries can evolve into double NS or BH+NS systems through stripped supernova explosions, and these double NS and BH+NS systems would evolve toward high-frequency GW events that can be discovered by aLIGO. Binary population synthesis simulations indicated that binary systems consisting of a NS/BH and a naked He star can account for ∼50%-80% of UCXBs <cit.>. From a population-synthesis simulation, <cit.> proposed that there are ∼200 BH+He star binaries and ∼540 NS+He star binaries in the Milky Way. Compared with dim WDs, He stars are close to MS stars in the Hertzsprung-Russell (H-R) diagram <cit.>. Therefore, BH+He star systems can be observed in the detached stage and provide more evolutionary details, such as the mass-transfer efficiency <cit.> and the stellar wind <cit.>. In general, low-mass He stars are referred to as hot subdwarfs <cit.>, while high-mass He stars are called Wolf-Rayet stars <cit.>. At present, several sources consisting of a compact star and a He star have been reported. For example, LS V+22 25 (LB-1) was proposed to contain a stellar-mass (≈ 8 M_⊙) BH and a low-mass (0.5-1.7 M_⊙) stripped He star <cit.>. PG 1432+159, HE 0532-4503, PG 1232-136, and PG 1743+477 were also thought to be candidates containing a BH and a He star with a very thin hydrogen envelope <cit.>. However, no X-ray emission from these detached binaries has been detected. HD49798 <cit.>, M101 ULX-1 <cit.>, IC 10 X-1, and NGC 300 X-1 are most likely X-ray sources including He stars, in which the X-ray emission originates from a NS/BH accreting from the stellar winds of the He stars. Especially for the latter two sources, <cit.> proposed that the accretion disk and the limited X-ray luminosity of 10^38 ergs^-1 can be interpreted by a stellar-mass BH accreting from a Wolf-Rayet companion through the wind-Roche lobe overflow (RLOF) mechanism <cit.>. However, both the stellar-wind accretion and the RLOF models can explain the observations of the Galactic strong X-ray sources Cygnus X-3 and SS 433 <cit.>, which are also thought to consist of a compact object and a mass-transferring He star. As a consequence, binary systems consisting of a compact object and a He star are potential Galactic strong X-ray sources. Furthermore, they are also promising low-frequency GW sources in the Galaxy <cit.>. Therefore, it is of great significance to study the evolution of BH binaries including He stars. In this paper, we perform detailed stellar evolution models for a large number of BH+He star binaries to investigate whether or not they can evolve toward UCXBs and low-frequency GW sources in the Milky Way. In Section 2, we describe the binary evolution code. Detailed simulation results are shown in Section 3.
The discussion and conclusions are presented in Sections 4 and 5, respectively. § BINARY EVOLUTION MODEL The Modules for Experiments in Stellar Astrophysics <cit.> is a popular code in the field of stellar and binary evolution. In this work, we use an updated binary version (r12115) of MESA to model the formation and evolution of BH UCXBs. We assume that the CE stage produces many detached BH-He star binaries, which are taken to be the starting point of the detailed stellar evolution. For simplicity, the BH is considered a point mass with an initial mass of M_BH,i=8 M_⊙. The code only models the nuclear synthesis of the He star and the orbital evolution of the binary. Therefore, the evolutionary fates of BH-He star binaries depend on the initial He-star mass (M_He,i) and the initial orbital period (P_orb,i) for a given input physics. We build a zero-age main-sequence He star model with solar metallicity <cit.>, which consists of 98% helium and 2% metals (i.e., Y=0.98, Z=0.02). The lowest He-star mass is taken to be 0.32 M_⊙, below which central He burning would be extinguished <cit.>. For each binary system, we run the MESA code until the time step reaches a minimum time-step limit or the stellar age exceeds the Hubble time (14 Gyr). For the wind setting of the He star, the "Dutch" options with a scaling factor of 0.8 are used in the schemes hot_wind_scheme, cool_wind_RGB_scheme, and cool_wind_AGB_scheme <cit.>. We use Type 2 opacities for extra C/O burning during and after He burning. Furthermore, the mesh and time-step controls mesh_delta_coeff=1.0 and varcontrol_target=10^-3 are adopted. Our inlists are available at doi:10.5281/zenodo.10075413. The fast wind of the He star is thought to carry away its specific orbital angular momentum. The wind-accretion efficiency of the BH via Bondi-Hoyle-like accretion is low <cit.>; for example, the wind-accretion efficiency is ∼0.003 for a binary with an 8.8 M_⊙ BH and a 6.0 M_⊙ He star <cit.>. Therefore, we ignore wind accretion in the whole evolutionary process. Once the He star fills its Roche lobe, mass transfer initiates from the donor star to the BH at a rate Ṁ_tr. During the mass transfer, we adopt the accretion efficiency scheme given by <cit.>, i.e., α=0, β=0.5, and δ=0, where α, β, and δ represent the fractions of mass lost from the He star in the form of a fast wind, ejected from the vicinity of the BH, and ejected from a circumbinary coplanar toroid, respectively. Therefore, the accretion rate of the BH is Ṁ_acc=(1-β)Ṁ_tr=0.5Ṁ_tr. The mass-growth rate of the BH is limited by the Eddington accretion rate as Ṁ_Edd = 4π GM_BH/(κ c η), where G is the gravitational constant, c is the speed of light in vacuo, κ = 0.2(1+X) is the Thomson-scattering opacity of electrons (X is the hydrogen abundance of the transferred material, and X = 0 for a He donor star), and η = 1-√(1-(M_BH/3M_BH,0)^2) (for M_BH ≤ √(6)M_BH,0, where M_BH,0 is the initial mass of the BH) is the energy conversion efficiency of the accreting BH <cit.>. Therefore, the mass-growth rate of the accreting BH is Ṁ_BH = min(0.5 Ṁ_tr, Ṁ_Edd).
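For reference, the following sketch evaluates Equations (1) and (2) numerically in cgs units. It is our own illustration under the stated assumptions (He-rich accretion, so X = 0 and κ = 0.2 cm^2 g^-1); the function names are not from the paper.

import numpy as np

G, c, MSUN, YR = 6.674e-8, 2.998e10, 1.989e33, 3.156e7   # cgs

def mdot_edd(M_bh, M_bh0, X=0.0):
    """Eq. (1); the eta formula is valid for M_bh <= sqrt(6) M_bh0."""
    kappa = 0.2 * (1.0 + X)
    eta = 1.0 - np.sqrt(1.0 - (M_bh / (3.0 * M_bh0))**2)
    return 4.0 * np.pi * G * M_bh / (kappa * c * eta)     # g s^-1

def mdot_bh(mdot_tr, M_bh, M_bh0, beta=0.5):
    """Eq. (2): Mdot_BH = min((1 - beta) Mdot_tr, Mdot_Edd)."""
    return min((1.0 - beta) * mdot_tr, mdot_edd(M_bh, M_bh0))

# e.g. an 8 Msun BH fed at Mdot_tr = 1e-6 Msun/yr accretes ~5e-7 Msun/yr
print(mdot_bh(1e-6 * MSUN / YR, 8.0 * MSUN, 8.0 * MSUN) * YR / MSUN)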
The excess material per unit time (Ṁ_tr-Ṁ_BH) is thought to be ejected from the vicinity of the BH in the form of an isotropic wind, carrying away the specific orbital angular momentum of the BH. Thus, the angular-momentum-loss rate due to the isotropic wind can be written as J̇_iso = -2π a^2 M_He^2 (Ṁ_tr-Ṁ_BH)/[(M_BH+M_He)^2 P_orb], where a is the orbital separation of the binary, M_He is the mass of the He star, and P_orb is the orbital period. In our model, orbital angular momentum losses via GW radiation and mass loss (fast wind and isotropic wind) are included. During the inspiral of the two components in BH binaries, the changing mass quadrupole moment produces low-frequency GW signals with a frequency f_gw = 2/P_orb. When the systems evolve into a close orbit, the emitted GW signals may be detected by space-borne GW detectors such as LISA. For a 4 yr LISA mission, the GW characteristic strain of our simulated BH binaries is given by <cit.> h_c ≈ 3.75×10^-20 (f_gw/0.001 Hz)^7/6 (ℳ/1 M_⊙)^5/3 (10 kpc/d), where d is the distance of the GW source. For simplicity, the chirp mass ℳ can be expressed as ℳ = (M_BH M_He)^3/5/(M_BH+M_He)^1/5. In the numerical calculation, the corresponding binaries are regarded as LISA sources if the calculated characteristic strain exceeds the LISA sensitivity curve given by <cit.>. It is worth noting that the chirp mass in Equation (4) should be applied to a detached system whose orbital decay is entirely caused by GW radiation. However, the mass transfer in compact BH binaries is driven by GW radiation and proceeds on a timescale close to that of GW radiation <cit.>. Therefore, the estimate of the chirp mass in Equation (4) remains reliable in semi-detached BH binaries.
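As a quick numerical illustration of Equations (3) and (4) (our own sketch; masses in M_⊙, f_gw in Hz, d in kpc, and the example system is hypothetical):

def chirp_mass(m_bh, m_he):
    """Eq. (4), in Msun."""
    return (m_bh * m_he)**0.6 / (m_bh + m_he)**0.2

def h_c(f_gw, m_chirp, d_kpc):
    """Eq. (3): characteristic strain for a 4 yr LISA mission."""
    return 3.75e-20 * (f_gw / 1e-3)**(7.0/6.0) * m_chirp**(5.0/3.0) * (10.0 / d_kpc)

# e.g. an 8 + 0.6 Msun detached binary at P_orb = 20 min seen from 10 kpc
f = 2.0 / (20.0 * 60.0)                   # f_gw = 2 / P_orb
print(chirp_mass(8.0, 0.6), h_c(f, chirp_mass(8.0, 0.6), 10.0))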
§ SIMULATION RESULTS For a fixed initial BH mass and a given input physics, the evolutionary fates of BH+He star binaries depend on P_orb,i and M_He,i. As evolutionary examples, we model the evolution of 10 BH+He star binaries with different M_He,i and P_orb,i, which are divided into three groups as follows: (1) group 1 with M_He,i = 0.6 M_⊙ and P_orb,i = 0.03, 0.06, 0.09, 0.11 days; (2) group 2 with M_He,i = 0.32, 0.4, 0.6, 0.8 M_⊙ and P_orb,i = 0.06 days; (3) group 3 with M_He,i = 1.2, 1.8, 2.8 M_⊙ and P_orb,i = 0.06 days. Groups 1 and 2 only experience one RLOF, while there exist two or more RLOFs for group 3. The main evolutionary parameters of the three groups are listed in Table 1. §.§ Orbital evolution Figure 1 shows the evolution of the orbital periods with the stellar age for the BH+He star binaries in the three groups. Because of their small donor-star masses, the donor stars in groups 1 and 2 do not fill their Roche lobes until their orbital periods shrink to the range of 13 to 48 minutes; thus these systems appear as ultracompact detached binaries over a long timescale. In group 1, the systems with P_orb,i=0.03, 0.06, 0.09 days can first be detected by LISA as low-frequency GW sources at distances of 10 kpc and 100 kpc (these two distances are typical of sources in the Galaxy and in the Large/Small Magellanic Clouds; the system with P_orb,i=0.03 days is visible to LISA at a distance of 10 kpc from the beginning of the binary evolution), and then experience RLOF and become UCXBs. The system with P_orb,i=0.11 days initiates the mass transfer after it is visible to LISA at a distance of 10 kpc, and then appears as a UCXB that can be detected by LISA at distances of 10 kpc and 100 kpc (i.e., a UCXB-LISA source). In group 2, the system with M_He,i=0.8 M_⊙ is visible to LISA at a distance of 10 kpc at the beginning of the binary evolution; it then fills its Roche lobe and appears as a UCXB-LISA source. Subsequently, continuous orbital shrinkage makes it a strong GW source that can be detected by LISA at a distance of 100 kpc. The other three systems with low donor-star masses first appear as low-frequency GW sources that can be detected by LISA at distances of 10 kpc and 100 kpc, and then evolve into UCXB-LISA sources. Because of the short orbital periods, GW radiation drives the orbital periods of the systems in groups 1 and 2 to decrease continuously to a minimum of 6-12.4 minutes, very similar to the minimum orbital periods (8-10 minutes) obtained by <cit.> for NS+He star systems. Most evolutionary tracks of groups 1 and 2 exhibit "knee" features, which coincide with the positions where RLOF starts. This phenomenon originates from the orbital expansion caused by the mass transfer from the less massive He star to the more massive BH, which partly offsets the orbital decay due to GW radiation. In group 3, the three systems first appear as low-frequency GW sources that can be detected by LISA at a distance of 10 kpc, and then begin case BA mass transfer at the first solid circles. The orbital period of the system with M_He,i=1.2 M_⊙ continuously decreases after the mass transfer, whereas the orbits of the other two systems with M_He,i=1.8 and 2.8 M_⊙ show a widening tendency. In theory, the orbit should widen when mass is transferred from the less massive donor star to the more massive BH; in contrast, GW radiation causes the orbit to shrink. The evolutionary fate of the orbit therefore depends on the competition between the mass transfer and GW radiation. The orbital shrinkage of the system with M_He,i=1.2 M_⊙ originates from a low mass-transfer rate (see also Figure 2), which cannot overcome the orbital decay caused by GW radiation. After the case BA mass transfer ceases, the three systems soon start case BB mass transfer (the solid circles are approximately at the same positions as the open circles, and the timescales between these two circles are 0.346, 0.389, and 0.340 Myr for M_He,i=1.2, 1.8, and 2.8 M_⊙, respectively). Subsequently, the two systems with M_He,i=1.8 and 2.8 M_⊙ can no longer be detected by LISA at a distance of 10 kpc due to a further expansion of their orbits. Once the case BB mass transfer ends, the continuous orbital shrinkage by GW radiation causes the system with M_He,i=1.8 M_⊙ to evolve into a LISA source that can be detected at distances of 10 kpc and 100 kpc. The evolution of the systems with M_He,i=1.2 and 2.8 M_⊙ stops due to a numerical difficulty after the case BB mass transfer ceases. §.§ Evolution of mass-transfer rates Figure 2 plots the evolution of the mass-transfer rates of the BH X-ray binaries in the three groups. At t_rlof ≈ 0.007-54.9 Myr (see also Table 1), the He stars fill their Roche lobes. The mass transfer begins earlier for the more massive He stars: because of the positive correlation between the mass and the radius <cit.> for zero-age MS He stars, a more massive He star fills its Roche lobe more easily. For low-mass He stars with M_He,i ≤ 0.8 M_⊙, GW radiation dominates the orbital evolution in the early mass-transfer stage. Due to the shrinkage of the orbit, the GW-radiation timescale decreases; thus the mass-transfer rate slowly increases.
After the mass transfer comes to dominate the orbital evolution, the orbital period reaches a minimum, at which the mass-transfer rate reaches a maximum <cit.>. Subsequently, the He star becomes more degenerate, the correlation between the mass and the radius turns negative, and the mass-transfer rate decreases <cit.>. For high-mass He stars with M_He,i ≥ 1.2 M_⊙, mass transfer dominates the orbital evolution due to high mass-transfer rates. The orbital periods increase or slowly decrease, producing decreasing mass-transfer rates. The three BH+He star binaries in group 3 experience mass transfer twice. The first mass transfer lasts a relatively long timescale (1.6-5.8 Myr) at a rate of ∼10^-7 M_⊙ yr^-1, which increases with M_He,i. As the He abundance in the core drops below 0.1, the star develops a carbon-oxygen (CO) core. Due to the contraction of the He star, the binary gradually becomes a detached system, and the first mass transfer ceases. After the core He is exhausted, the He star begins He-shell burning. With the continuous growth of the CO core, the He star begins to expand and initiates the second mass transfer. The mass-transfer rate in the second stage is significantly higher than that of the first mass-transfer stage, and its duration is shorter. A high donor-star mass tends to produce a high mass-transfer rate and a short mass-transfer duration. For M_He,i = 1.8 and 2.8 M_⊙, the second mass-transfer rate can exceed 10^-6 M_⊙ yr^-1. The orbital expansion caused by such a high Ṁ_tr overcomes the orbital shrinkage caused by GW radiation, and the binary orbits begin to widen (see also Figure 1); the mass-transfer rate then continuously decreases. Almost the whole burning He shell is stripped in the second mass-transfer stage, and the remaining CO core becomes highly degenerate. Because of the negative correlation between the mass and the radius of a degenerate star, the binary detaches again. In fact, the BH+He star systems in group 3 may undergo a third RLOF, in which the CO WD fills its Roche lobe and triggers a mass transfer. Due to the limitation of the minimum time step, the duration of the third mass transfer is too short (≤ 1 yr) to be shown in Figure 2. It is noteworthy that the BH+He star system with M_He,i = 2.8 M_⊙ exhibits a transient, ultra-high mass-transfer rate (∼10^-5 M_⊙ yr^-1) in the first mass-transfer stage. The mass transfer proceeds on a thermal timescale <cit.>; thus the mass-transfer rate is very high in this stage, much higher than Ṁ_Edd. With a rapid decrease of the convective-core mass, the He star rapidly contracts until it regains thermal equilibrium <cit.>. Subsequently, the mass transfer continues steadily on the nuclear timescale. §.§ Evolution of X-ray luminosities Compared with detached binaries, BH UCXBs are ideal multimessenger sources for detection in both the GW and electromagnetic bands. X-ray observations of these GW sources could reveal more information about the history of the binary evolution <cit.>, and provide constraints on the nature of the companion, the structure of the accretion disk, the source position <cit.>, and so on.
In the mass transfer phase, the X-ray luminosity of the accretion disk surrounding a BH can be calculated by <cit.> L_X = ϵ Ṁ_acc c^2 for Ṁ_crit < Ṁ_acc ≤ Ṁ_Edd, and L_X = ϵ (Ṁ_acc/Ṁ_crit) Ṁ_acc c^2 for Ṁ_acc < Ṁ_crit, where ϵ is the radiation efficiency of the accretion disk and Ṁ_crit is the critical accretion rate corresponding to the transition between the low/hard and high/soft states. In the calculation, we take ϵ = 0.1 and Ṁ_crit = 10^-9 M_⊙ yr^-1 <cit.>. Figure 3 depicts the evolution of the X-ray luminosities of the BH UCXBs. It is noteworthy that BH UCXBs evolving from the He star channel produce relatively high X-ray luminosities of ∼10^38-39 ergs^-1, which are 2-5 orders of magnitude higher than those in the MS channel <cit.>. However, the corresponding timescales in the He star channel are ∼1-10 Myr, much shorter than those (∼50-900 Myr) in the MS channel. All systems in groups 1 and 2 are visible LISA sources at a distance of 10 kpc or 100 kpc during the UCXB stage. The two systems with high donor-star masses (M_He,i = 1.8 and 2.8 M_⊙) in group 3 cannot be detected by LISA in the later stage of the second mass-transfer phase because of a rapid orbital expansion. When the binaries evolve to the minimum orbital period, the mass-transfer rates reach their maxima, and the X-ray luminosities peak (∼10^39 ergs^-1). For a high M_He,i, BH UCXBs tend to become so-called ultraluminous X-ray sources (ULXs, L_X ≥ 10^39 ergs^-1) <cit.>, in which the maximum X-ray luminosity is 3.5×10^39 ergs^-1. Recently, <cit.> found the first smoking-gun evidence for the existence of a He donor star in ULXs from Very Large Telescope/Multi Unit Spectroscopic Explorer observations. Similarly, a population synthesis study also indicated that NS X-ray binaries containing He stars can account for a large part of the ULXs in Milky Way-like galaxies <cit.>.
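For reference, Equation (5) can be transcribed numerically as follows (our own sketch; ϵ = 0.1 and Ṁ_crit = 10^-9 M_⊙ yr^-1 as adopted above, with the accretion rate given in M_⊙ yr^-1):

MSUN, YR, c = 1.989e33, 3.156e7, 2.998e10   # cgs

def l_x(mdot_acc_msun_yr, eps=0.1, mdot_crit_msun_yr=1e-9):
    """Eq. (5): disk X-ray luminosity in erg/s."""
    mdot = mdot_acc_msun_yr * MSUN / YR
    mdot_crit = mdot_crit_msun_yr * MSUN / YR
    if mdot > mdot_crit:                            # high/soft state
        return eps * mdot * c**2
    return eps * (mdot / mdot_crit) * mdot * c**2   # low/hard state

print(l_x(5e-8))   # ~3e38 erg/s for Mdot_acc = 5e-8 Msun/yr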
§.§ Detection of GW signals Since the Laser Interferometer Gravitational-Wave Observatory (LIGO) first detected the high-frequency GW signal from the double BH merger event GW150914 <cit.>, GW detection has opened a new window on the distant universe. GW signals provide us with useful information about stellar and binary evolution. Figure 4 plots the evolution of the BH+He star systems in the characteristic strain versus GW frequency diagram. Because of the identical chirp mass, the evolutionary tracks of the four systems in group 1 overlap before RLOF. Due to their short initial orbital periods, the nascent BH+He star systems with P_orb,i=0.03 and 0.06 days can be instantly detected by LISA and Taiji at a distance of 10 kpc after the CE stage. It is impossible to form detached BH+WD systems via isolated binary evolution <cit.>; therefore, sources with chirp masses similar to those of BH+He star systems should be post-CE systems, providing indirect evidence of CE evolutionary stages. When the mass transfer starts, these low-frequency GW sources also appear as UCXBs, which are ideal multimessenger sources. For the longer detection distance of 100 kpc, those binaries have to spend a longer evolutionary timescale to evolve toward a shorter orbital period, thus appearing as GW sources for only a few Myr. BH X-ray binaries in group 3 experience a rapid orbital expansion; thus the two systems with M_He,i=1.8 and 2.8 M_⊙ are not potential GW sources in the later stage of the second mass transfer. However, GW radiation will drive the orbit of the system with M_He,i=1.8 M_⊙ to shrink continuously after the mass transfer ceases, resulting in a low-frequency GW source that is detectable over a long timescale. Since the chirp masses are constant, the evolutionary tracks of detached systems are lines with a slope of 7/6 (h_c ∝ f_gw^7/6 according to Equation (3)). In groups 1 and 2, the maximum GW frequency of the BH UCXBs is 5.6 mHz, while the maximum frequency can reach about 63.5 mHz for a detached BH binary in group 3 (see also Table 1). To understand the progenitor properties of BH UCXBs, we model the evolution of a large number of BH+He star binaries. Figure 5 summarizes the initial parameter space of the progenitors of BH UCXBs in the P_orb,i-M_He,i diagram. All BH+He star binaries with initial parameters located between the two solid curves can evolve toward UCXBs (with durations of ≥ 0.1 Myr) that can be detected by LISA at a distance of 10 kpc. The initial He-star masses and initial orbital periods of the progenitors of Galactic BH UCXB-LISA sources are in the range of 0.32-2.9 M_⊙ and 0.02-0.19 days, respectively. Such an initial parameter space is slightly wider than that (0.32-1.2 M_⊙ and 0.01-0.1 days) of the progenitors of NS UCXB-LISA sources evolving from the He star channel <cit.>. It is clear that systems with relatively long initial orbital periods (P_orb,i ≥ 0.05 days) and high donor-star masses (M_He,i ≥ 0.7 M_⊙) may experience two RLOF stages; for massive He stars with M_He,i ≥ 1.2 M_⊙, all BH binaries will experience two RLOFs. Some systems marked by crosses can also evolve into low-frequency GW sources; however, they cannot become valid UCXBs with a duration of ≥ 0.1 Myr. Those BH+He star binaries that experienced two RLOF stages could produce a high GW frequency in the final evolutionary stage (see also M_He,i=1.8 M_⊙ in Figure 4), while these sources are not UCXBs because of the absence of mass transfer. For a detection distance of 10 kpc, these binaries are most likely to be multimessenger sources in the first mass-transfer stage. The measurement of the chirp mass is very significant in constraining the masses of the two components. For a detached binary system, the chirp mass can be derived from ℳ = (c^3/G)[(5/96) π^-8/3 f_gw^-11/3 ḟ_gw]^3/5, where ḟ_gw is the GW-frequency derivative. Therefore, the measurement of the chirp mass depends on the accuracy of ḟ_gw. Space GW detectors are only able to detect ḟ_gw for ultracompact binaries with a large signal-to-noise ratio (SNR) and a small orbital period close to the minimum orbital period <cit.>. The minimum ḟ_gw that LISA can measure is given by <cit.> ḟ_gw,min ≈ 2.5×10^-17 (10/(S/N)) (4 yr/T)^2 Hz s^-1, where S/N is the SNR of the GW signals and T is the mission duration of LISA. Figure 6 illustrates the evolution of ḟ_gw with the GW frequency for the three simulated groups. Taking S/N = 10 and T = 4 yr, the detection limit of the GW-frequency derivative is ḟ_gw,min = 2.5×10^-17 Hz s^-1, which is plotted as horizontal dashed lines in Figure 6. In the orbital-shrinkage stages, the angular-momentum-loss rates by GW radiation continuously increase, resulting in a rapid increase of ḟ_gw. The maximum ḟ_gw occurs close to the peak GW frequency; however, ḟ_gw sharply decreases to 0 at the maximum GW frequency and then turns negative due to the orbital expansion. Except for the orbital-expansion stage of the two systems in group 3, the other BH UCXBs could provide a detectable ḟ_gw over a timescale of 1 Myr; a numerical sketch of Equations (6) and (7) is given below.
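The following sketch (ours, not the authors' pipeline) inverts Equation (6) for the chirp mass and evaluates the threshold of Equation (7); the example frequency pair is hypothetical.

import numpy as np

G, c, MSUN = 6.674e-8, 2.998e10, 1.989e33   # cgs

def chirp_mass_from_fdot(f_gw, fdot_gw):
    """Eq. (6): chirp mass (Msun) from f_gw (Hz) and its derivative (Hz/s)."""
    return (c**3 / G) * ((5.0 / 96.0) * np.pi**(-8.0/3.0)
                         * f_gw**(-11.0/3.0) * fdot_gw)**0.6 / MSUN

def fdot_min(snr=10.0, T_yr=4.0):
    """Eq. (7): minimum measurable frequency derivative, in Hz/s."""
    return 2.5e-17 * (10.0 / snr) * (4.0 / T_yr)**2

# e.g. a source at f_gw = 1.7 mHz with fdot ~ 8.7e-17 Hz/s lies above fdot_min
print(chirp_mass_from_fdot(1.7e-3, 8.7e-17), fdot_min())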
All evolutionary curves of ḟ_gw in the climbing stage are approximately lines with a slope of n=11/3, which implies the relation ḟ_gw ∝ f_gw^11/3. This is consistent with GW radiation dominating the orbital evolution of the binaries: according to Equation (6), ḟ_gw ∝ f_gw^11/3 for a constant chirp mass in detached binaries, in which the angular-momentum loss is entirely contributed by GW radiation <cit.>. Therefore, if a braking index is defined through ḟ_gw ∝ f_gw^n, its measurement can diagnose whether a binary emitting low-frequency GW signals is a detached system. Figure 7 presents the parameter space of the BH+He star systems whose chirp masses can be accurately measured. For BH UCXB-LISA sources evolving from the He star channel, only for several systems evolving from progenitors with M_He,i = 0.7-1.5 M_⊙ or 2.4-2.6 M_⊙ and relatively long P_orb,i is ḟ_gw difficult to detect. For the MS channel, only BH UCXBs with initial orbital periods very near the bifurcation period have a detectable ḟ_gw <cit.>. Therefore, BH UCXB-LISA sources evolving from the He star channel are the most likely to have measurable chirp masses. § DISCUSSION §.§ Origin of BH+He star binaries Similar to NS+He star systems, BH+He star systems could also be descendants of high-mass X-ray binaries, in which the companion of the BH loses its hydrogen envelope through a stellar wind or a mass-transfer stage <cit.>. Cyg X-3 was proposed to include a 2-4.5 M_⊙ BH (or NS) and a 7.5-14.2 M_⊙ Wolf-Rayet donor star <cit.>, and it may be the progenitor of a Galactic double BH or BH-NS system <cit.>. In addition, massive He stars might be formed through quasi-chemically homogeneous evolution <cit.>. However, our simulations find that BH binaries with massive He stars (M_He,i>3.0 M_⊙) can hardly evolve into detectable UCXBs with a duration longer than 0.1 Myr. <cit.> performed 1D models of post-CE BH binaries with short orbital periods (≤0.2 days) consisting of a BH and a massive He star (M_He,i≥3.3 M_⊙) that experiences stable mass transfer, and found that their mass-transfer timescales are shorter than 0.1 Myr, which is consistent with our results. NS/BH+He star binaries could evolve from NS/BH+MS binaries through Case B or Case C mass transfer <cit.>. For those systems with a large mass ratio, the mass transfer is dynamically unstable; the binary systems then enter a CE phase <cit.>, and the evolutionary products of the systems that experienced Case B mass transfer before the CE phase are binaries with an unevolved He star and a NS/BH. The NS/BH+He star binaries produced from Case B are so close that they initiate Case BA mass transfer in the core He burning stage. It is clear that our simulated systems in the parameter space of the BH UCXB-LISA sources are formed from Case B mass transfer. Case C mass transfer would produce an evolved He star and a NS/BH after the CE phase; subsequent mass transfer would start after core He is exhausted. §.§ He star Channel and MS channel There exists a bifurcation period for the formation of UCXBs in the MS channel, which is defined as the maximum initial orbital period allowing the formation of UCXBs within a Hubble time <cit.>. In the He star channel, however, there is no critical period similar to that of the MS channel because of the short initial orbital periods. In the He star channel, the initial donor-star masses of the BH+He star systems that can evolve into UCXB-LISA sources are in a wide range, from 0.32 to 2.9 M_⊙.
However, this range is narrowed to 0.4-1.6 M_⊙ in the MS channel because stars with radiative envelopes do not experience magnetic braking <cit.>. Certainly, the initial orbital period range of the He star channel is much narrower than that of the MS channel. In the MS channel, the maximum GW frequency emitted by BH UCXBs is ∼3 mHz <cit.>, which is slightly smaller than that (5.6 mHz) in the He star channel. Because of the high compactness of He stars, BH+He star systems can evolve into LISA sources that can be detected at a distance of 100 kpc, which is impossible for the MS channel. To understand the final evolutionary fates of the He stars, we plot the evolution of five BH-He star binaries in an H-R diagram in Figure 8. The final luminosities and effective temperatures of the five He stars are log(L/L_⊙) ∼ -1 to -3 and 6300-31000 K, respectively. Such luminosity and effective temperature ranges are comparable to those of WDs. It is clear that the BH binaries with a massive donor star (1.2-1.8 M_⊙) had already evolved into detached systems before the donor stars evolved into WDs through a contraction and cooling stage. Therefore, the He star channel can form detached BH-WD systems. It is noteworthy that the MS channel can only form mass-transferring BH-WD systems rather than detached BH-WD systems <cit.>. Similar to the MS channel, BH binaries with a low-mass (0.32-0.6 M_⊙) He star can only evolve into semi-detached BH-WD binaries rather than detached systems. Because the WDs evolving from the He star channel lack a thin H envelope, their effective temperatures are much higher than those of WDs evolving from the MS channel. §.§ He star Channel and Dynamic Process Channel In globular clusters and young dense clusters, compact BH-WD binaries might be assembled through BHs capturing WDs in dynamical processes such as tidal capture. Subsequently, GW radiation causes their orbits to shrink continuously, and the detached BH-WD binaries appear as low-frequency GW sources that can be detected by space-borne GW detectors. Once the WDs enter the tidal radius of the BHs, they will be disrupted, and these systems become UCXBs. For a semi-detached BH-WD binary with M_BH = 8 M_⊙ and M_WD = 0.2 M_⊙, the estimated minimum GW frequency is 6.4 mHz <cit.>. Because the tidal radius of the BH satisfies R_t = (M_BH/M_WD)^1/3 R_WD ∝ M_WD^-2/3, the tidal radius will be smaller if the WD captured by the BH is more massive. According to Kepler's third law, this would yield a relatively high minimum GW frequency. However, the maximum GW frequency is 5.6 mHz for BH UCXBs evolving from the He star channel. Therefore, the GW frequency could be a probe to test the formation channel of BH UCXBs. §.§ Influence of tidal effects In the detailed binary evolution models, we ignore tidal effects. In principle, the tidal coupling between the orbit and the He donor star can influence the orbital evolution of BH X-ray binaries. During the shrinkage stage of the orbit, the donor star spins up to corotate with the orbital rotation due to tidal coupling. This spin-up indirectly consumes the orbital angular momentum, extracting orbital angular momentum from the binary system at a rate of J̇_t = -IΩ̇, where I is the moment of inertia of the He star and Ω̇ is the derivative of the orbital angular velocity.
Ignoring the influence of the mass transfer, the change rate of the orbital angular velocity Ω satisfies Ω̇/Ω ∼ -3J̇_gw/J, where J and J̇_gw are the total angular momentum of the system and its loss rate via gravitational radiation. Therefore, the ratio between J̇_t and J̇_gw is J̇_t/J̇_gw ∼ 3IΩ/J = 3I/(μ a^2), where μ = M_BH M_He/(M_BH+M_He) is the reduced mass of the binary system. Considering a BH binary with M_BH=8 M_⊙, M_He=1 M_⊙, and P_orb=0.1 days, we have μ a^2 = 3.0×10^55 g cm^2. Since the radius of the He star is R_He = 0.212 R_⊙ (M_He/M_⊙)^0.654 <cit.>, the moment of inertia of the He star can be estimated to be I = 0.4 M_He R_He^2 = 1.7×10^53 g cm^2. According to Equation (11), the rate of angular momentum loss via tidal effects is approximately two orders of magnitude smaller than that via gravitational radiation. Therefore, the influence of tidal effects on the orbital evolution is negligible.
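A short numerical check of this estimate (our own sketch, in cgs units) reproduces the numbers quoted above:

import numpy as np

G, MSUN, RSUN, DAY = 6.674e-8, 1.989e33, 6.957e10, 86400.0   # cgs

m_bh, m_he, p_orb = 8.0 * MSUN, 1.0 * MSUN, 0.1 * DAY
a = (G * (m_bh + m_he) * (p_orb / (2.0 * np.pi))**2)**(1.0/3.0)  # Kepler III
mu = m_bh * m_he / (m_bh + m_he)                                 # reduced mass
r_he = 0.212 * RSUN * (m_he / MSUN)**0.654                       # He-star radius
I = 0.4 * m_he * r_he**2                                         # moment of inertia

print(mu * a**2)               # ~3.0e55 g cm^2
print(I)                       # ~1.7e53 g cm^2
print(3.0 * I / (mu * a**2))   # Eq. (11): ~0.02, i.e. tides are subdominant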
§.§ Influence of BH Masses on the Parameter Space Binary population synthesis simulations found that newborn BHs have a mass range of 5-16 M_⊙ in BH binaries with normal-star companions, in which the BH-mass distribution peaks at ∼7-8 M_⊙ in Model A of <cit.>. In BH-He star X-ray binaries, the BH masses range from 5 to 20 M_⊙ and are most likely to cluster at ∼7-8 M_⊙ in Model A <cit.>. Based on these statistical results, we adopt a constant initial BH mass of 8 M_⊙ in the detailed binary evolution models. For the same He star, a high BH mass naturally results in a high chirp mass, a high characteristic strain, and a long detection distance of BH UCXB-LISA sources. For M_He,i=0.6 M_⊙ and P_orb,i=0.06 days, our calculations find that the maximum GW frequencies in the UCXB stage are 2.55, 2.75, and 2.91 mHz for M_BH,i=15, 8, and 5 M_⊙, respectively. During the evolution of BH binaries, a high BH mass produces an efficient angular momentum loss via GW radiation and a high mass-transfer rate. The rapid mass transfer from the less massive He star to the more massive BH drives the orbits of the systems to widen, resulting in a relatively small maximum GW frequency. Therefore, there exists an inverse correlation between the initial BH mass and the maximum GW frequency in BH UCXBs evolving from the He star channel. For M_BH,i=5 M_⊙ and M_He,i=1 M_⊙, our simulations show that the initial orbital periods of BH+He star binaries that can evolve into BH UCXB-LISA sources are in the range of 0.04-0.08 days, which is slightly narrower than that (0.04-0.09 days; see also Figure 5) for M_BH,i=8 M_⊙. If the BH has a high initial mass of 15 M_⊙, the range of initial orbital periods becomes 0.04-0.1 days. Therefore, a high/low initial BH mass tends to widen/reduce the initial orbital period range. This tendency arises because gravitational radiation is the dominant mechanism driving the orbits of BH+He star binaries to shrink: a high-mass BH naturally produces a high rate of angular momentum loss, driving BH+He star binaries with slightly longer periods to evolve into BH UCXB-LISA sources. Furthermore, the initial He-star masses that can evolve into BH UCXB-LISA sources are in the range of 0.32-3.0 M_⊙ and 0.32-2.9 M_⊙ for M_BH,i=5 and 15 M_⊙, respectively. Therefore, the influence of the initial BH mass on the initial parameter space is minor. §.§ Detectability of BH UCXBs as LISA sources We employ the rapid binary evolution code developed by <cit.> to study the birthrate of BH UCXB-LISA sources in the Galaxy. Based on the binary population synthesis (BPS) approach, the primordial binary samples are produced via Monte Carlo simulations. Subsequently, a sample of 1×10^7 primordial binaries is evolved until the formation of BH+He star systems through the rapid binary evolution code. Similar to <cit.>, the initial input parameters and basic assumptions in the BPS simulation are as follows: (1) All primordial stars are assumed to be members of binary systems orbiting in circular orbits. (2) The primordial primary-mass distribution arises from the initial mass function derived by <cit.>, and the secondary-mass distribution is produced by adopting a constant mass-ratio (0<q≤1) distribution n(q)=1. (3) The initial separation distribution is taken to be constant in log a for wide binaries with orbital periods longer than 100 yr, and changes into a uniform distribution for close binaries <cit.>. (4) The standard energy prescription is used to tackle the CE ejection process <cit.>, in which the efficiency α_CE of ejecting the envelope and the parameter λ describing the stellar mass-density distribution are merged into a degenerate parameter α_CEλ. BH UCXB-LISA sources are assumed to be produced if the parameters of the BH+He star systems are consistent with those of the progenitors of BH UCXB-LISA sources in Figure 5. The influence of the initial BH mass on the initial parameter space is minor; thus its effect on the population synthesis simulations can be ignored. As a consequence, the criterion determining compact objects in the BPS simulation is simply that the compact object is a BH, regardless of whether its mass is 8 M_⊙. In Figure 9, we plot the evolution of the birthrates of BH UCXB-LISA sources evolving from the He star channel as a function of time, where we take a constant star formation rate (SFR) of 5 M_⊙ yr^-1 for Population I. The Monte Carlo simulations predict the birthrates of BH UCXB-LISA sources in the Galaxy to be R=2.2×10^-6 and 3.6×10^-7 yr^-1 for the degenerate CE parameter α_CEλ=1.5 and 0.5 <cit.>, respectively. These two birthrates are 1-2 orders of magnitude higher than that (∼3.9×10^-8 yr^-1) estimated for the MS channel <cit.>, but one order of magnitude lower than those (3.1-11.9×10^-6 yr^-1) predicted for NS UCXBs evolving from the He star channel <cit.>. According to Table 1, the mean detection timescale is Δt_LISA,10 ≈ 15 Myr for BH UCXB-LISA sources evolving from the He-star channel at a detection distance of 10 kpc. Therefore, we can estimate the detection number of BH UCXB-LISA sources formed by the He-star channel in the Galaxy to be N = R Δt_LISA,10 ≈ 33 and 5 for α_CEλ=1.5 and 0.5, respectively. For a low degenerate CE parameter α_CEλ=0.5, the detection number of BH UCXB-LISA sources evolving from the He-star channel is similar to that from the MS channel <cit.>. However, for a conventional degenerate CE parameter α_CEλ=1.5, the detection number of BH UCXB-LISA sources evolving from the He star channel is one order of magnitude higher than that from the MS channel. § CONCLUSION BH UCXBs can emit both low-frequency GW signals and X-ray emission, making them intriguing multi-messenger detection sources. In this paper, we employ the MESA code to simulate the formation and evolution of BH UCXBs evolving from the He star channel and diagnose their detectability in both the X-ray and GW bands. Taking an initial BH mass of M_BH,i=8 M_⊙, our main conclusions are summarized as follows: * The mass transfer of BH+He star binaries is sensitive to the masses of the He stars.
When M_He,i ≥ 0.7 M_⊙, those systems may experience two RLOFs. The mass-transfer rates in the second RLOF stages can reach ∼10^-6 M_⊙ yr^-1. Such a high mass-transfer rate causes their orbits to expand rapidly; thus some systems cannot appear as UCXB-LISA sources at a distance of 10 kpc in the later stage of the second mass-transfer phase. * The mass-transfer rates of BH UCXBs evolving from the He star channel are much higher than those from the MS channel, producing X-ray luminosities that are 2-5 orders of magnitude higher. Due to the short initial orbital periods, the BH+He star systems become UCXBs at the beginning of the mass transfer and emit an X-ray luminosity of ∼10^38 ergs^-1. The maximum X-ray luminosity in the first mass-transfer stage can exceed 10^39 ergs^-1. Those BH binaries with massive He stars experience a second RLOF and produce a maximum X-ray luminosity of 3.5×10^39 ergs^-1, which exceeds the threshold luminosity of ULXs. * The shortest orbital period in the BH UCXB stage is 6 minutes, which corresponds to a GW frequency of 5.6 mHz. Because of the short initial orbital periods, our simulated BH+He star binaries are already BH UCXB-LISA sources within a distance of 10 kpc (or 100 kpc) at the onset of the first mass transfer. BH X-ray binaries with massive He stars (M_He,i=1.8 and 2.8 M_⊙) will not be detected by LISA at a distance of 10 kpc in the later stage of the second mass-transfer phase due to a rapid orbital expansion. However, the continuous orbital shrinkage due to GW radiation causes the system with M_He,i=1.8 M_⊙ to evolve toward a LISA source after the second mass transfer ceases, emitting GW signals with a frequency of up to 63.5 mHz. * Compared with NS UCXBs from the He star channel <cit.>, the progenitors of BH UCXB-LISA sources have a slightly wider initial parameter space. The initial He-star masses and initial orbital periods of the progenitors of Galactic BH UCXB-LISA sources are in the range of 0.32-2.9 M_⊙ and 0.02-0.19 days, respectively. Meanwhile, nearly all systems in this parameter space can evolve into BH UCXB-LISA sources whose chirp masses can be accurately measured, whereas only a tiny fraction of BH UCXB-LISA sources possess measurable chirp masses in the MS channel <cit.>. * In the He star channel, those BH binaries with a massive He star (1.2-1.8 M_⊙) can evolve into detached BH-WD binaries, which cannot be achieved in the MS channel. Meanwhile, the effective temperatures of the WDs evolving from the He star channel are much higher than those of WDs evolving from the MS channel because of the absence of a thin H envelope. * The Monte Carlo BPS simulations predict the birthrates of Galactic BH UCXB-LISA sources evolving from the He star channel to be R=2.2×10^-6 and 3.6×10^-7 yr^-1 for α_CEλ=1.5 and 0.5, respectively. In a 4-year LISA mission, the detection numbers of Galactic BH UCXB-LISA sources for the He-star channel are 33 and 5 for α_CEλ=1.5 and 0.5, respectively. * Compared with the MS channel, BH UCXB-LISA sources from the He star channel possess high X-ray luminosities and a high probability of chirp-mass measurement. Therefore, BH UCXB-LISA sources evolving from the He star channel are ideal multimessenger objects that deserve to be pursued by the X-ray and GW community. We cordially thank the anonymous referee for the detailed and insightful comments that improved this manuscript. This work was partly supported by the National Natural Science Foundation of China (under grant Nos.
Statistical Proper Orthogonal Decomposition for model reduction in feedback control

Sergey Dolgov^1, Dante Kalise^2, Luca Saluzzi^3

^1 Department of Mathematical Sciences, University of Bath, United Kingdom.
^2 Department of Mathematics, Imperial College London, United Kingdom.
^3 Department of Mathematics, Scuola Normale Superiore, Pisa, Italy.

January 14, 2024

Feedback control synthesis for nonlinear, parameter-dependent fluid flow control problems is considered. The optimal feedback law requires the solution of the Hamilton-Jacobi-Bellman (HJB) PDE, which suffers from the curse of dimensionality. This is mitigated by Model Order Reduction (MOR) techniques, where the system is projected onto a lower-dimensional subspace, over which the feedback synthesis becomes feasible. However, existing MOR methods assume at least one relaxation of generality: the system should be linear, or stable, or deterministic. We propose a MOR method called Statistical POD (SPOD), which is inspired by the Proper Orthogonal Decomposition (POD) but extends to more general systems. Random samples of the original dynamical system are drawn, treating time and initial condition as random variables similarly to possible parameters in the model, and employing a stabilizing closed-loop control. The reduced subspace is chosen to minimize the empirical risk, which is shown to estimate the expected risk of the MOR solution with respect to the distribution of all possible outcomes of the controlled system. This reduced model is then used to compute a surrogate of the feedback control function in the Tensor Train (TT) format that is computationally fast to evaluate online. Using unstable Burgers' and Navier-Stokes equations, it is shown that the SPOD control is more accurate than the Linear Quadratic Regulator or the optimal control derived from a model reduced onto the standard POD basis, and faster than the direct optimal control of the original system.

§ INTRODUCTION

Nonlinear dynamics arising in fluid flow play a vital role in numerous industrial applications, ranging from aerospace engineering to chemical processing. Efficient management and control of these systems under random state fluctuations or different parameters is crucial to ensure optimal performance, stability, and safety.

One key approach to achieving these objectives is feedback control. This involves continuous monitoring of the system's behavior and applying corrective actions to minimize deviations from a reference state. In the context of fluid flow problems, feedback control enables engineers to actively manipulate various system parameters such as flow rates, pressures, temperatures, and concentrations to stabilize the dynamics. In the case of linear dynamics and quadratic cost functions, a computationally cheap feedback controller is the Linear Quadratic Regulator (LQR), which reduces to the solution of a matrix Algebraic Riccati Equation <cit.>. However, the complexity and nonlinearity inherent in fluid dynamics pose significant challenges to the design and implementation of feedback control strategies. Unlike other fields where linear models and simplified assumptions may be sufficient, fluid systems often exhibit intricate behaviors, including turbulence, flow separation, and time-dependent phenomena. Consequently, developing effective feedback control techniques for fluid problems is a very challenging task.
Potentially, the optimal feedback control can be computed by solving the Hamilton-Jacobi-Bellman (HJB) nonlinear partial differential equation (PDE). However, the HJB equation is usually of hyperbolic nature, and posed on the state space of the dynamical system, which can be very high-dimensional. The latter is typical when the system arises from a discretization of a PDE, such as the Navier-Stokes equations in fluid dynamics. Moderately high-dimensional HJB equations have been tackled with various methods: max-plus algebra <cit.>, sparse grids and polynomials <cit.>, tree-structure algorithms <cit.>, deep neural networks <cit.>, low-rank tensor decompositions <cit.> and kernel interpolation techniques <cit.>. However, the direct solution of the HJB equation in thousands of dimensions (which is typical for Finite Element discretizations) remains out of reach.

The PDE formulation of the HJB equation can be bypassed by a data-driven approach, where one computes the value function and/or control at each of a set of given (usually random) states in a closed-loop fashion, and fits an approximate feedback control in some ansatz by minimizing the empirical risk over the given samples of the control. Such a regression in a sparse polynomial basis was considered e.g. in <cit.>, a tensor format regression was employed in <cit.>, and a neural network was trained in <cit.>. However, the dimension of the sought control function is still that of the state space, requiring a vast number of unknowns in the approximation ansatz and training data.

Fortunately, not all states of the system are important or reachable, especially in the controlled regime. Therefore, a promising approach to tackle the curse of dimensionality is Model Order Reduction (MOR). Here, the dynamical system is projected onto some precomputed basis of lower dimension, and the HJB equation (or a data-driven variant thereof) is solved for the reduced system. The key question is then the algorithm used to compute such a basis. One of the simplest options is the Proper Orthogonal Decomposition (POD). This method collects system states ("snapshots") at certain time points, and finds a basis that minimizes the total projection error of these snapshots onto the basis. In the context of feedback control, the POD method was used to deliver a reduced system (and hence a reduced HJB equation) in <cit.>. However, the snapshots produced from an uncontrolled deterministic system may be inaccurate, and even misleading for the controlled system at a different realisation of random parameters or initial conditions. For instance, it may be simply impossible to collect snapshots from an uncontrolled system that exhibits a finite-time blowup.

A state of the art method to identify a basis that approximates well only controllable and observable states is balanced truncation (BT) <cit.>. Note that the original formulation of BT still uses the uncontrolled system, and thus requires its stability, as well as linearity. This problem was mitigated by the closed-loop balanced truncation <cit.>, which incorporates the LQR, stabilizing the system matrix. The linearity assumption aside, the reduction of the closed-loop system instead of the uncontrolled one will be the first ingredient of our paper. Since BT uses the linear structure of the dynamical system quite explicitly, its generalization to nonlinear systems is difficult, and still limited to relatively simple cases.
For example, a BT for bilinear systems was introduced in <cit.>, lifting of some nonlinear systems to quadratic-bilinear systems of larger dimension was proposed in <cit.>, and an ℋ_∞-balanced truncation for differential-algebraic systems, linearizing the system around one state (the stationary solution), was presented in <cit.>. A more general approach to nonlinear BT has been recently developed in <cit.>; however, the nonlinear reduction also requires the solution of high-dimensional HJB-type equations, which is precisely the computational limitation we try to avoid in this paper. As we will see in the numerical examples, linearization around one particular state may be inaccurate for more general nonlinearities and random parameters, such as the Navier-Stokes equation with random inflow and boundary control.

Random parameters (such as random coefficients or initial conditions) are particularly difficult but relevant scenarios motivating the feedback control, which needs to be robust with respect to this uncertainty. If the random variables enter the model (bi)linearly, they can also be lifted to apply e.g. bilinear BT <cit.>. The existence of a quadratic-bilinear structure suitable for BT is not clear in general. On the other hand, POD methods require only that random realizations of the model can be sampled: in this case, the minimization of the projection error in POD can be straightforwardly extended to the minimization of an empirical risk calculated using training samples of all random variables in the model. A deep-learning/POD approach for parameter-dependent nonlinear PDEs has been developed in <cit.>.

Empirical balanced truncation using snapshots to approximate the Gramians emerged soon after the first BT papers <cit.>, and many more methods followed. Similarly to <cit.>, <cit.> proposes a balanced POD to compute the Gramians from numerical simulations of state responses to unit impulses. Using similar ideas, <cit.> shows that the POD kernel is an approximation to the controllability Gramian. This is an important observation for this paper: a feedback control usually applies to the fully observed state, in other words, the observation matrix is the identity, which removes the main benefit of BT compared to POD: the extra truncation of poorly observable states.

Thus, we propose a MOR method that resembles the POD to a large extent, except that the snapshots are collected at random samples of time, parameters and initial conditions. In contrast to specially designed inputs (such as unit impulses), this statistical approach makes the empirical risk a convergent estimate of the expected risk. This method is motivated by the Likelihood Informed Subspace <cit.> and Active Subspace <cit.> methods in statistics, where the reduced basis is derived from an empirical mean Hessian of the log-likelihood, or Gramian of the forward model. Here we propose a similar approach in the feedback control context. Stabilization is achieved by collecting the snapshots from a closed-loop system similarly to <cit.>, but controlled with any feasible (possibly suboptimal) regulator, such as the Pontryagin Maximum Principle <cit.>, or the State-Dependent Riccati Equation <cit.>. The proposed method is close to <cit.>, which linearizes the system by estimating system matrices from snapshots, followed by balanced truncation, and to <cit.>, where the POD is extended to a low-rank tensor approximation of the full solution as a function of state, parameters and time, followed by recompression of the basis at the given parameter.
However, in our statistical POD method we neither assume nor seek any linearization, and in contrast to <cit.> we do this in the optimal control setting. Moreover, as soon as the statistical POD basis is ready, we pre-compute a low-rank tensor approximation (specifically, the Functional Tensor Train (TT) format <cit.>) of the closed-loop control of the reduced model as a function of the reduced state. This TT approximation provides the desired control in feedback form that is fast to evaluate numerically, since it needs only a modest amount of linear algebra operations with the TT decomposition, in contrast to solving an optimal control problem from scratch. This enables fast synthesis of a nearly-optimal control in the online regime. A schematic of the entire procedure is shown in Figure <ref>.

The rest of the paper is organized as follows. In Section 2 we introduce relevant background material on optimal control and Tensor Train approximation. In Section 3 we introduce our statistical POD approach, and present related error estimates in Section 4. The proposed methodology is first assessed in Section 5 using Burgers' equation, to conclude with a full application to fluid flow control in Navier-Stokes in Section 6.

§ BACKGROUND

§.§ Parameter-dependent optimal control problems

We are interested in a class of parameter-dependent optimal control problems where a parameter μ∈ℝ^M accounts for uncertainties in initial/boundary conditions of the control system propagating along state and control trajectories. To introduce μ we assume a finite dimensional noise. Namely, given a complete probability space (Ω, ℱ, ℙ), with Ω the set of random realisations ω of the system, ℱ the σ-algebra of events and ℙ the probability measure, we assume that any random variable X(ω) can be expressed as a deterministic function X(ω)=x(μ(ω)) of the random vector μ(ω) with a product density function π(μ) = π(μ_1) ⋯π(μ_M), and expectations with respect to ℙ can be computed as integrals weighted with π,

𝔼[X] = ∫_ℝ^M x(μ) π(μ) dμ.

Now the system dynamics can be described as a parameter-dependent controlled evolution equation:

d/dt y(t;μ) = f(y(t;μ),u(t;μ),μ),  t∈(0,T], μ∈ℝ^M,
y(0;μ) = x(μ) ∈ℝ^d,

where we denote by y:[0,T] ×ℝ^M→ℝ^d the state of the system, by u:[0,T] ×ℝ^M →ℝ^m the control signal, and by 𝒰=L^∞([0,T] ×ℝ^M;U_ad), U_ad⊆ℝ^m, the set of admissible controls. Given the control system (<ref>), we are concerned with the synthesis of a feedback control law u(y(t;μ);μ), that is, a control law that primarily depends on the current state of the system y(t;μ). We design such a control law by minimizing a finite horizon cost functional of the form

J_T(u;x,μ) := ∫_0^T L(y(t;μ),u(t;μ)) dt,  μ∈ℝ^M, u ∈𝒰,

where L:ℝ^d ×ℝ^m →ℝ is a suitable running cost. Note that the cost depends on the initial condition x, which generates the trajectory y(t;μ). For a given parameter realization μ and initial condition x(μ), the optimal control signal is given by the solution of

inf_{u(·)∈𝒰} J_T(u;x,μ),  subject to (<ref>).

For a fixed μ, we obtain deterministic dynamics, for which optimality conditions for the dynamic optimization problem (<ref>) are given by Pontryagin's Maximum Principle (PMP):

d/dt y(t;μ) = f(y(t;μ),u(t;μ),μ),  y(0;μ) = x(μ),
-d/dt p(t;μ) = ∇_y f(y(t;μ),u(t;μ),μ)^⊤ p(t;μ) + ∇_y L(y(t;μ),u(t;μ)),  p(T;μ) = 0,
u(t;μ) = argmin_{w ∈ U_ad} { L(y(t;μ),w) + f(y(t;μ),w,μ)^⊤ p(t;μ) },

where p:[0,T] ×ℝ^M→ℝ^d denotes the adjoint variable. The PMP system can be solved using a reduced gradient method (see e.g.
<cit.> for more details), which requires an initial guess for the optimal control signal u(t;μ). In Section <ref>, we will explain how the construction of the reduced basis can improve the choice of the initial guess and accelerate the entire algorithm.

§.§ Sub-optimal control laws using the State-Dependent Riccati Equation

The State-Dependent Riccati Equation (SDRE) is an effective technique for feedback stabilization of nonlinear dynamics <cit.>. The method is based on the sequential solution of linear-quadratic control problems arising from sequential linearization of the dynamics along a trajectory. Given a realization of μ, we consider an unconstrained (U_ad=ℝ^m) infinite-horizon quadratic cost functional

J_∞(u;x,μ) = ∫_0^+∞ y(t;μ)^⊤ Q y(t;μ) + u(t)^⊤ R u(t) dt,

with Q∈ℝ^d× d, Q≽ 0, R∈ℝ^m× m, R≻ 0, and system dynamics expressed in semilinear form

d/dt y(t;μ) = A(y(t;μ);μ) y(t;μ) + B(y(t;μ);μ) u(t),  y(0;μ) = x(μ).

In the case of dynamics linear in the state, A(y(t;μ);μ) = A(μ)∈ℝ^d× d and B(y(t;μ);μ) = B(μ)∈ℝ^d× m, under standard stabilizability assumptions, the optimal control is expressed in feedback form

u(y;μ) = -R^-1 B^⊤(μ) P(μ) y,

where P(μ)∈ℝ^d× d is the unique positive definite solution of the Algebraic Riccati Equation (ARE)

A^⊤(μ) P(μ) + P(μ) A(μ) - P(μ)B(μ)R^-1B^⊤(μ)P(μ) + Q = 0.

In this case, the feedback operator K = R^-1 B^⊤(μ) P(μ) does not depend on the state. Formally, the SDRE method extends this logic to semilinear systems by indexing with respect to the current state of the trajectory, that is,

u(y;μ) = -R^-1 B^⊤(y;μ) P(y;μ) y,

where P(y;μ) is the solution of the ARE

A^⊤(y;μ) P(y;μ) + P(y;μ) A(y;μ) - P(y;μ)B(y;μ)R^-1B^⊤(y;μ)P(y;μ) = -Q,

where A(y;μ) and B(y;μ) are frozen at y. The implementation of the SDRE control requires the sequential solution of eq. (<ref>) as the state y(t;μ) evolves in time. The SDRE feedback law satisfies necessary optimality conditions at a quadratic rate as the state is driven to zero (we refer to <cit.> for more details and a complete statement of the result).
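To make this loop concrete, the following is a minimal Python sketch of one SDRE feedback evaluation and a closed-loop simulation; the routines A_fun and B_fun (returning A(y;μ) and B(y;μ) frozen at the current state) and the explicit Euler integrator are illustrative assumptions, not the implementation used in this paper:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    def sdre_feedback(y, A_fun, B_fun, Q, R):
        # Freeze the semilinear matrices at the current state and solve the ARE
        A, B = A_fun(y), B_fun(y)
        P = solve_continuous_are(A, B, Q, R)
        # Feedback law u = -R^{-1} B(y)^T P(y) y
        return -np.linalg.solve(R, B.T @ (P @ y))

    def simulate_sdre(y0, A_fun, B_fun, Q, R, dt=1e-2, n_steps=500):
        # Sequential resolution of the ARE along the trajectory
        y = y0.copy()
        for _ in range(n_steps):
            u = sdre_feedback(y, A_fun, B_fun, Q, R)
            y = y + dt * (A_fun(y) @ y + B_fun(y) @ u)
        return y

Each step requires a fresh ARE solve, which motivates the reduced-order and TT surrogate strategies developed below.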
§.§ Tensor Train surrogate model for the control

Neither of the two previously presented control laws can be effectively implemented for real-time stabilization of large-scale dynamics. Both the numerical realization of the PMP optimality system and the computational burden of solving AREs along a trajectory in the SDRE method surpass the time scale required for real-time control. Hence, we propose to construct a surrogate feedback law from state-parameter samples in a supervised learning manner. As a surrogate ansatz we use the Tensor Train (TT) decomposition, which can then be evaluated in real time along a controlled trajectory.

A scalar function ũ(x): ℝ^d →ℝ with x=(x_1,…,x_d) ∈ℝ^d is said to be represented in the Functional Tensor Train (FTT) decomposition <cit.> if it can be written as

ũ(x) = ∑_α_0,…,α_d=1^r_0,…,r_d u^(1)_α_0,α_1(x_1) u^(2)_α_1,α_2(x_2) ⋯ u^(d)_α_d-1,α_d(x_d),

with some cores u^(k)(x_k): ℝ→ℝ^r_k-1× r_k, k=1,…,d, and ranks r_0,…,r_d. Without loss of generality, we can let r_0=r_d=1, but the intermediate ranks can be larger than 1 to account for the structure of the function. In practice, the exact decomposition (<ref>) may not be possible, and we seek to approximate a given control function u(x) by ũ(x) in the form (<ref>), minimising the error ‖u - ũ‖ in some (usually Euclidean) norm. We aim at scenarios where the r_k stay bounded, or scale mildly. Quadratic value functions <cit.> and weakly correlated Gaussian functions <cit.>, for example, admit FTT approximations with ranks at most polynomial in d and poly-logarithmic in the approximation error ‖u - ũ‖. This allows one to avoid the curse of dimensionality by replacing u(x) by its approximation (<ref>).

The FTT (<ref>) originally emerged as the TT decomposition <cit.> of tensors (multiindex arrays). Indeed, the two come together. For practical computations with (<ref>) one needs to discretise each core using some basis functions ϕ^(k)_1(x_k),…,ϕ^(k)_n_k(x_k) in the k-th variable. One can now write the k-th core using a 3-dimensional tensor 𝒰^(k)∈ℝ^r_k-1× n_k × r_k of expansion coefficients,

u^(k)_α_k-1,α_k(x_k) = ∑_i=1^n_k 𝒰^(k)(α_k-1,i,α_k) ϕ_i^(k)(x_k).

The multivariate function ũ(x) then becomes discretised in the Cartesian basis, and is defined by a d-dimensional tensor of expansion coefficients,

ũ(x) = ∑_i_1,…,i_d=1^n_1,…,n_d 𝒰(i_1,…,i_d) ϕ^(1)_i_1(x_1) ⋯ϕ^(d)_i_d(x_d),
𝒰(i_1,…,i_d) = ∑_α_0,…,α_d=1^r_0,…,r_d 𝒰^(1)(α_0,i_1,α_1) ⋯𝒰^(d)(α_d-1,i_d,α_d).

The tensor decomposition (<ref>) is the original TT format <cit.>. Note that interpolating ũ(x) at any given x requires first d univariate interpolations of the cores as shown in (<ref>), followed by the multiplication of the matrices u^(1)(x_1) ⋯ u^(d)(x_d). This can be implemented in 2∑_k=1^d n_k r_k-1 r_k = 𝒪(dnr^2) operations, where we define the upper bounds n:=max_k n_k and r:=max_k r_k.

To compute a TT approximation, instead of minimising the error ‖u-ũ‖ directly, one can use faster algorithms based on the so-called cross interpolation <cit.>. These methods consider sampling sets of the Cartesian form X_<k⊕ X_k ⊕ X_>k, for each k=1,…,d, where X_<k = {(x_1,…,x_k-1)} is of cardinality r_k-1, X_k = {x_k} is of cardinality n_k, and X_>k = {(x_k+1,…,x_d)} is of cardinality r_k; X_<k and X_>k are taken empty when k=1 and k=d, respectively. Note that the number of samples in X_<k⊕ X_k ⊕ X_>k is r_k-1 n_k r_k, equal to the number of unknowns in 𝒰^(k). Therefore, one can resolve the linear interpolation equations

u(x) = ũ(x),  ∀ x ∈ X_<k⊕ X_k ⊕ X_>k,

for the elements of 𝒰^(k) exactly, whenever u^(1)(x_1)⋯ u^(k-1)(x_k-1) for (x_1,…,x_k-1) ∈ X_<k, ϕ^(k)(x_k) for x_k ∈ X_k and u^(k+1)(x_k+1) ⋯ u^(d)(x_d) for (x_k+1,…,x_d) ∈ X_>k are linearly independent. Moreover, pivoting can be applied to the thus computed 𝒰^(k) to identify the next sampling sets X_<k+1 and X_>k-1 as subsets of X_<k⊕ X_k and X_k ⊕ X_>k, respectively, making u^(1)(x_1)⋯ u^(k)(x_k) and u^(k)(x_k) ⋯ u^(d)(x_d) in the next step better conditioned. Iterating over k=1,…,d, the TT-Cross algorithm updates all TT cores until convergence. Vector functions are approximated component by component, since the number of control components in the problems we consider is small. For a comprehensive textbook on tensor methods see e.g. <cit.>.
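As an illustration of the 𝒪(dnr^2) evaluation cost discussed above, a minimal sketch of interpolating a functional TT at a point x is given below; the storage convention (cores[k] of shape (r_{k-1}, n_k, r_k)) and the helper basis_eval are our assumptions for the example:

    import numpy as np

    def tt_eval(cores, basis_eval, x):
        # v accumulates the row vector u^(1)(x_1) ... u^(k)(x_k)
        v = np.ones(1)
        for k, U in enumerate(cores):
            phi = basis_eval(k, x[k])            # n_k basis values at x_k
            Uk = np.einsum('anb,n->ab', U, phi)  # univariate interpolation of core k
            v = v @ Uk                           # one matrix product, r_{k-1} x r_k
        return v.item()                          # r_0 = r_d = 1, so the result is scalar

Each call performs only d small contractions, which is what makes the TT surrogate suitable for online feedback evaluation.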
§ A STATISTICAL POD APPROACH FOR OPTIMAL CONTROL PROBLEMS

In this section we describe our main contribution, outlined in Figure <ref>. Our ultimate goal is the fast synthesis of a feedback control that is resilient to stochastic perturbations of the system. We split the entire procedure into two stages. In the first (offline) stage, we construct a reduced state basis that minimizes the average squared projection error for an ensemble of controlled trajectories of random realizations of the dynamical system and/or initial state. Due to this averaging of random samples of the error, we call the span of the thus obtained basis the Statistical POD subspace. It accommodates states that are well controllable on average, and hence its dimension can be much smaller than the original problem dimension. In the second (online) stage, the system is projected onto the Statistical POD subspace, and an optimal control problem is solved for the reduced system in feedback form. Due to its smaller dimension, the reduced optimal control problem is faster to solve numerically, and can enable real-time control synthesis. The computations can be accelerated even further if the control function of the reduced problem is approximated in the TT format. The latter step can be moved into the offline stage, making the online stage a mere TT interpolation.

Offline Stage: The offline phase of the statistical POD method consists of the collection of snapshots and the construction of the reduced basis and, hence, the reduced dynamics. In this phase we rely on the application of the PMP system (<ref>)-(<ref>) to obtain the optimal trajectory for each given sample of the parameter. We fix N different realizations μ = (μ_1,…,μ_N) ∈ℝ^M × N. For each realization μ_i ∈μ, we solve the system (<ref>)-(<ref>) and collect the corresponding optimal trajectory (y^*(t_1;μ_i), …, y^*(t_n_t;μ_i)) at some n_t time instances {t_i}_i=1^n_t. The final snapshot matrix Y_μ contains all the optimal trajectories at the given time instances t_1,…,t_n_t and realizations μ_1,…,μ_N:

Y_μ = [y^*(t_1;μ_1), …, y^*(t_n_t;μ_1), y^*(t_1;μ_2), …, y^*(t_n_t;μ_N)] ∈ℝ^d × n_t N,

where each y^*(t_i;μ_j) ∈ℝ^d is a column vector. At this point we perform a Singular Value Decomposition (SVD) of the snapshot matrix, Y_μ = U_μΣ_μ V_μ^⊤, and denote by U_μ^ℓ the first ℓ columns of the orthogonal matrix U_μ. The number of basis vectors ℓ can be selected by requiring the relative truncation error

ℰ_μ(ℓ) = ∑_i=ℓ+1^min{N n_t, d} σ^2_i,μ / ∑_i=1^min{N n_t, d} σ^2_i,μ

to be below a desired threshold. Here, {σ_i,μ}_i are the singular values of the matrix Y_μ. The quantity (<ref>) is the relative projection error arising from truncating the singular value decomposition to the first ℓ basis vectors. To shorten the notation, in what follows we will denote the reduced basis matrix by U^ℓ := U_μ^ℓ.

The full order dynamics (<ref>) is then reduced via projection onto the subspace spanned by U^ℓ to the following reduced dynamics

d/dt y^ℓ(t;μ) = (U^ℓ)^⊤ f(U^ℓ y^ℓ(t;μ),u(t;μ),μ),  t∈(0,+∞), μ∈ℝ^M,
y^ℓ(0;μ) = (U^ℓ)^⊤ x ∈ℝ^ℓ.

To ease the notation, we define f^ℓ(y^ℓ(t;μ),u(t;μ),μ) = (U^ℓ)^⊤ f(U^ℓ y^ℓ(t;μ),u(t;μ),μ). The procedure for the offline stage is presented in Algorithm <ref>. In general, f(U^ℓ y^ℓ(t;μ),u(t;μ),μ) is still computationally expensive, since it requires the evaluation of the nonlinearity on the lifted variable U^ℓ y^ℓ(t;μ)∈ℝ^d. For these cases one may apply the Empirical Interpolation Method (EIM, <cit.>) or the Discrete Empirical Interpolation Method (DEIM, <cit.>) to solve this issue.
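For reference, the standard greedy DEIM index selection can be sketched as follows, given an orthonormal POD basis V of snapshots of the nonlinearity; this generic sketch is included for illustration and is not necessarily the variant used in our experiments:

    import numpy as np

    def deim_indices(V):
        # V: d x m orthonormal basis of nonlinearity snapshots
        d, m = V.shape
        p = [int(np.argmax(np.abs(V[:, 0])))]
        for j in range(1, m):
            # Interpolate the j-th basis vector on the current points; the
            # largest residual entry gives the next interpolation index
            c = np.linalg.solve(V[np.ix_(p, list(range(j)))], V[p, j])
            r = V[:, j] - V[:, :j] @ c
            p.append(int(np.argmax(np.abs(r))))
        return np.array(p)

The nonlinearity is then approximated from its values at the selected indices only, e.g. via V @ np.linalg.solve(V[p, :], f_lifted[p]), so the full-dimensional evaluation is avoided.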
In some cases, such as problems arising from the semidiscretization of nonlinear PDEs, the dynamical system may present a quadratic-bilinear form:

f(y(t;μ),u(t;μ),μ) = A y + T(y ⊗ y) + ∑_k=1^m θ_k(μ) N_k y u_k + θ_0(μ) B u,

where A, N_k ∈ℝ^d × d, T∈ℝ^d × d^2, B ∈ℝ^d × m, and θ_k: ℝ^M→ℝ. In this case, we can assemble the reduced matrices

A^ℓ = (U^ℓ)^⊤ A U^ℓ,  N_k^ℓ = (U^ℓ)^⊤ N_k U^ℓ,  T^ℓ = (U^ℓ)^⊤ T (U^ℓ⊗ U^ℓ),  B^ℓ = (U^ℓ)^⊤ B

and define the following reduced dynamics

d/dt y^ℓ(t;μ) = A^ℓ y^ℓ + T^ℓ(y^ℓ⊗ y^ℓ) + ∑_k=1^m θ_k(μ) N^ℓ_k y^ℓ u_k + θ_0(μ) B^ℓ u,

avoiding the use of empirical interpolation techniques.

Online Stage: Given the reduced dynamics (<ref>), the reduced PMP for a given μ reads

d/dt y^ℓ(t;μ) = f^ℓ(y^ℓ(t;μ),u(t;μ),μ),  y^ℓ(0;μ) = y^ℓ_0(μ),
-d/dt p^ℓ(t;μ) = ∇_y^ℓ f^ℓ(y^ℓ(t;μ),u(t;μ),μ)^⊤ p^ℓ(t;μ) + ∇_y^ℓ L(U^ℓ y^ℓ(t;μ),u(t;μ)),  p^ℓ(T;μ) = 0,
u(t) = argmin_{w ∈ U_ad} { L(U^ℓ y^ℓ(t;μ),w) + f^ℓ(y^ℓ(t;μ),w,μ)^⊤ p^ℓ(t;μ) }.

The reduced-order state variable is ℓ-dimensional, and it can be exploited to construct a faster approximation of the optimal control. The reduced-order dynamics enables the synthesis of an optimal feedback law, now posed on a lower-dimensional domain, mitigating the curse of dimensionality. Let us consider the feedback map u^ℓ: ℝ^ℓ→ U_ad obtained by solving the reduced PMP (<ref>)-(<ref>). This controller can be used in the full order dynamics by first projecting the state and then evaluating the feedback map:

ẏ(t;μ) = f(y(t;μ),u^ℓ((U^ℓ)^⊤ y(t;μ)),μ),

obtaining a control problem where the computation of the optimal feedback is independent of the original dimension of the dynamical system.

The presented methodology is based on the construction of a snapshot set of optimal trajectories obtained via the resolution of the PMP system (<ref>)-(<ref>). The sampling of the controlled dataset may also be built upon the SDRE framework introduced in Section <ref>, solving the sequential AREs (<ref>) along the optimal trajectory. Since the number of variables in the dynamical system may be arbitrarily large, the SDRE approach requires efficient solvers for the resolution of high-dimensional Riccati equations. We refer to <cit.> for a comprehensive discussion of such techniques.

The singular values in the Statistical POD method may exhibit a slow decay, due to the nature of the problem and the size of the parameter domain. In this case a good approximation requires a large number of POD basis vectors, and the online stage will still be affected by the dimensionality problem. To this end, an efficient surrogate model for the feedback control in the reduced space accelerates the procedure, yielding a fast and reliable method for the computation of the optimal control. We consider the representation of the feedback control u^ℓ(x^ℓ) in the Functional Tensor Train format (<ref>), obtained via the TT-Cross procedure (see Section <ref>). The optimal reduced trajectories obtained by solving the system (<ref>)-(<ref>) are employed as sampling points for the Cross approximation.

In contrast to existing POD techniques for the approximation of optimal control problems, the proposed Statistical POD method introduces stochastic terms in the dynamical system to explore the manifold of controlled solutions of the parameterized problem. This enhances the robustness of the corresponding reduced basis to perturbations, an essential feature for feedback control. In particular, as shown in Section <ref>, the empirical risk of the Statistical POD procedure converges to the expected risk. Thus, increasing the amount of training data for constructing the reduced-order system, we obtain a model that is more representative of the true system dynamics, including unseen scenarios.
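In code, the offline stage reduces to a few lines; the sampler sample_mu and the solver solve_pmp (returning the optimal trajectory as a d × n_t array) are placeholders for the routines described above:

    import numpy as np

    def spod_basis(sample_mu, solve_pmp, N, tol=1e-4):
        # Collect controlled snapshots over N random realizations
        Y_mu = np.hstack([solve_pmp(sample_mu()) for _ in range(N)])  # d x (N n_t)
        U, s, _ = np.linalg.svd(Y_mu, full_matrices=False)
        # Truncate: keep the smallest l with relative truncated energy below tol
        energy = np.cumsum(s**2) / np.sum(s**2)
        l = int(np.searchsorted(energy, 1.0 - tol)) + 1
        return U[:, :l]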
§.§ Efficient computation of the snapshots

The main building block of the offline phase is the computation of different optimal trajectories via the PMP system (<ref>)-(<ref>). The resolution of this system relies strongly on the choice of the initial guess for the optimal control u^*(·). In this section we introduce a procedure which accelerates the high-dimensional computation by exploiting the information about the optimum of the reduced problem. Let us consider a prefixed initial guess u^0(·). In the offline stage presented in Algorithm <ref>, the different optimal trajectory realizations are stored in the snapshot matrix Y_μ, and the basis is constructed only at the end of the loop. An SVD can be computed between Lines <ref>-<ref> of Alg. <ref> to construct a reduced basis containing partial information on the statistical behaviour of the optimal trajectories, and hence to build a reduced dynamics. The constructed reduced system is employed for a fast computation of the approximate optimal control u_red(·), which can be used as the initial guess for the high-dimensional computation. In particular, at the i-th step we consider as initial guess the control which achieves the lower cost functional, i.e.

u^i(·) ∈ argmin_{u(·) ∈ {u^0(·), u_red^i(·)}} J(u).

The optimized procedure is sketched in Algorithm <ref>.

§ EXPECTED RISK OF STATISTICAL POD

We analyse the proposed technique by providing a rigorous error estimate on the expected risk. To this end, we first recall a classical eigenvalue perturbation result, the Bauer-Fike theorem, which will be employed in the proof of Proposition <ref>.

Let A be a d × d diagonalizable matrix such that A = V Λ V^-1, and let E be an arbitrary d × d matrix. Then for every μ∈σ(A+E) there exists λ∈σ(A) such that

|λ - μ| ≤ κ_p(V) ‖E‖_p,

where κ_p(V) is the condition number of the matrix V in the p-norm.

We are now ready to formulate the error estimate on the expected projection error characterizing the statistical POD approach.

Consider t as a uniformly distributed random variable on [0,T], and consider N iid realizations of the optimal trajectories y^(i) := y(t_i;μ_i), i=1,…,N. Assume that √(Var[y_j y_k]) ≤ C < ∞ for any j,k=1,…,d, where y_j is the j-th component of the random vector y(t;μ). Let

G = 1/N ∑_i=1^N y(t_i;μ_i) y(t_i;μ_i)^⊤,  G^* = 𝔼[y y^⊤],

and let U^ℓ∈ℝ^d ×ℓ be an orthonormal matrix of ℓ leading eigenvectors of G. Then

𝔼[‖y(t;μ) - U^ℓ (U^ℓ)^⊤ y(t;μ)‖_2^2] ≤ ∑_j=ℓ+1^d λ^*_j + 2dℓ C/√(N) ≤ ∑_j=ℓ+1^d λ_j + (d+ℓ)d C/√(N),

where λ^*_j are the eigenvalues of G^* sorted in descending order, and λ_j are the eigenvalues of G sorted in descending order.

Let G^* = U_* Λ_* U_*^⊤ and G = U Λ U^⊤ be the eigenvalue decompositions of G^* and G, respectively, where the eigenvalues in Λ and Λ_* are sorted in descending order. Note that the matrix U^ℓ contains the first ℓ columns of U. Moreover, let us introduce P^ℓ = U^ℓ (U^ℓ)^⊤ to shorten the notation. Since y(t;μ) in (<ref>) is independent of the precomputed trajectories y^(i), and hence of G and U^ℓ, we can factorise the total expectation into those over y and G,

𝔼[‖y - U^ℓ (U^ℓ)^⊤ y‖_2^2] = 𝔼_G[𝔼_y[‖y - U^ℓ (U^ℓ)^⊤ y‖_2^2]],

and hence express first

𝔼_y[‖y - P^ℓ y‖_2^2] = 𝔼_y[y^⊤ y - y^⊤ P^ℓ y] = tr(𝔼_y[yy^⊤ - P^ℓ y y^⊤]),

due to orthogonality and the cyclic permutation under the trace of a matrix. Now,

tr(𝔼_y[yy^⊤]) = tr(Λ_*) = ∑_j=1^d λ_j^*,

while

tr(𝔼_y[P^ℓ yy^⊤]) = tr(P^ℓ G^*) = tr(P^ℓ G) + tr(P^ℓ (G^* - G)) = tr((U^ℓ)^⊤ U Λ U^⊤ U^ℓ) + tr(P^ℓ (G^* - G)).

Due to orthogonality, the first trace is just the trace of the leading ℓ×ℓ submatrix of Λ, which is ∑_j=1^ℓ λ_j. Now taking also 𝔼_G, we obtain

𝔼[‖y - P^ℓ y‖_2^2] = ∑_j=1^d λ_j^* - 𝔼_G[∑_j=1^ℓ λ_j] - 𝔼_G[tr((U^ℓ)^⊤ (G^*-G) U^ℓ)]
= ∑_j=ℓ+1^d λ_j^* + 𝔼_G[∑_j=1^ℓ (λ_j^* - λ_j)] - 𝔼_G[tr((U^ℓ)^⊤ (G^*-G) U^ℓ)]
≤ ∑_j=ℓ+1^d λ_j^* + ℓ·𝔼_G[‖G^* - G‖_2] + ℓ·𝔼_G[‖G^* - G‖_2],

due to the Bauer-Fike Theorem <ref> (second term), and

|tr(Q^⊤ A Q)| = |∑_j=1^ℓ q_j^⊤ A q_j| ≤ ℓ max_j ∈{1,…,ℓ} |q_j^⊤ A q_j| ≤ ℓ ‖A‖_2

for any A ∈ℝ^d × d and orthonormal Q ∈ℝ^d ×ℓ (third term). By Jensen's inequality and norm equivalence,

(𝔼_G[‖G^* - G‖_2])^2 ≤ 𝔼_G[‖G^* - G‖_2^2] ≤ 𝔼_G[‖G^* - G‖_F^2] = ∑_j,k=1^d 𝔼_G[(G^*_j,k - G_j,k)^2].

Since 𝔼_G[G] = G^*, we have 𝔼_G[(G^*_j,k - G_j,k)^2] = Var[G_j,k]. In turn, G_j,k is a sum of iid random variables y^(i)_j y^(i)_k/N. For those,

Var[G_j,k] = N · Var[y^(i)_j y^(i)_k/N] = 1/N Var[y_j y_k] ≤ C^2/N,

where in the penultimate step we used that the y^(i) are samples from the same distribution as y. Summing over j,k and taking the square root gives the first claim of the proposition. The second claim comes from rewriting

∑_j=ℓ+1^d λ^*_j = ∑_j=ℓ+1^d λ_j + ∑_j=ℓ+1^d (λ^*_j - λ_j),

and using again Theorem <ref> to get |λ^*_j - λ_j| ≤ ‖G^* - G‖_2.

Note that the expected risk consists of the truncated eigenvalues (bias), as for the classical POD technique, and a variance term decaying as 1/√(N), as is usual for Monte-Carlo methods.
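The estimate can be checked numerically on a toy example where G^* is known in closed form; the Gaussian state with eigenvalues λ^*_j = 2^{-j} used below is our own synthetic choice, not one of the paper's test cases:

    import numpy as np

    rng = np.random.default_rng(0)
    d, l, N = 50, 5, 2000
    lam = 2.0 ** (-np.arange(d))               # true eigenvalues of G* = E[y y^T]

    Y = rng.standard_normal((N, d)) * np.sqrt(lam)
    G = Y.T @ Y / N                            # empirical Gramian from N samples
    U, _, _ = np.linalg.svd(G)
    Ul = U[:, :l]                              # leading l eigenvectors

    Ytest = rng.standard_normal((10**5, d)) * np.sqrt(lam)
    risk = np.mean(np.sum((Ytest - Ytest @ Ul @ Ul.T) ** 2, axis=1))
    print(risk, lam[l:].sum())                 # expected risk vs. bias term of the bound

For moderate N, the empirical risk is dominated by the truncated-eigenvalue term, consistent with the 1/√(N) decay of the variance contribution.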
§ NEUMANN BOUNDARY CONTROL FOR 2D BURGERS' EQUATION

To better illustrate the different blocks of our approach, in this section we study a stabilization problem for the solution of the viscous Burgers' equation in a two-dimensional domain, where the control appears as a Neumann boundary condition.

§.§ Problem formulation

More precisely, we consider the following state equation

∂_t y - νΔ y + y ·∇ y = α y,  (ξ,t,μ) ∈Ξ× [0,T] ×ℝ^M,
ν∂_n y = u(t),  (ξ,t,μ) ∈∂Ξ_1× [0,T] ×ℝ^M,
ν∂_n y = 0,  (ξ,t,μ) ∈∂Ξ_2 × [0,T] ×ℝ^M,
y(ξ,0;μ) = ỹ_0(ξ;μ),  (ξ,μ) ∈Ξ×ℝ^M,

where ν > 0 is the viscosity constant, α≥ 0 regulates the unstable term, ξ=(ξ_1,ξ_2) ∈Ξ = [0,1]^2 is the spatial variable, ∂Ξ_1 = {0}× [0,1] and ∂Ξ_2 = ∂Ξ∖∂Ξ_1, see Figure <ref>. Our aim is to drive the dynamical system to the equilibrium y≡ 0, and the corresponding cost functional we want to minimize reads

J_T(u;μ) = ∫_0^T ∫_Ξ |y(ξ,t;μ)|^2 dξ dt + ∫_0^T |u(t)|^2 dt.

For the application of the statistical POD technique, we assume that the initial condition is a realisation of the following random field,

ỹ_0(ξ;μ) = y_0(ξ) + ∑_i=1^M_1 ∑_j=1^M_2 (i+j)^-γ μ_i+(j-1)M_1 cos(iπξ_1) cos(jπξ_2),

where μ_i+(j-1)M_1∼𝒩(0, σ^2) are normally distributed random variables with zero mean and variance σ^2, M_1 and M_2 are the maximal frequencies in the first and second spatial variables, and y_0(ξ) is the mean initial state. We take σ=σ_0=0.05 and y_0 ≡ 0.05 by default. The parameter γ determines the decay of the Fourier coefficients and is related to the regularity we want to assume. In Figure <ref> we show two initial conditions, fixing γ=3 on the left and γ=4 on the right. Higher values of γ correspond to smoother initial conditions, since the Fourier coefficients decay more rapidly.

First, we discretize the Burgers' equation (<ref>) via P^1 finite elements {ϕ_i}_i=1^d in space, centered at uniform grid points {(ξ^i_1,ξ^i_2)}_i=1^d, obtaining the following semidiscrete form

E ẏ(t) = (C+α E) y(t) + F(y(t) ⊗ y(t)) + B u(t),

where E∈ℝ^d × d is the mass matrix, C∈ℝ^d × d is the stiffness matrix, F ∈ℝ^d × d^2 arises from the nonlinear convective term, and (B)_i = 1 if (ξ^i_1,ξ^i_2)∈∂Ξ_1 and (B)_i = 0 otherwise, for i=1,…,d. We apply an implicit Euler scheme for the time discretization. In particular, we consider d=289 finite elements and n_t = 100 time steps, fixing the final time T=50. The computation of the optimal trajectories is based on the Pontryagin Maximum Principle system (<ref>)-(<ref>), solved via a gradient-descent method for each sample of μ. More precisely, we consider the state-adjoint system and the gradient of the cost functional following <cit.>.
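The gradient-descent loop for the PMP system can be sketched as follows; the sweeps solve_state (implicit Euler forward in time) and solve_adjoint (backward from p(T)=0), the gradient routine and the fixed step size are placeholders for the actual implementation of <cit.>:

    import numpy as np

    def pmp_gradient(u0, solve_state, solve_adjoint, grad_J,
                     step=1e-2, max_iter=200, gtol=1e-6):
        u = u0.copy()
        for _ in range(max_iter):
            y = solve_state(u)            # forward sweep of the dynamics
            p = solve_adjoint(y, u)       # backward sweep of the adjoint
            g = grad_J(y, p, u)           # reduced gradient, e.g. 2*u + B^T p here
            if np.max(np.abs(g)) < gtol:
                break
            u = u - step * g              # descent step on the control signal
        return u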
Once the reduced basis is computed, following the construction of the reduced dynamics (<ref>), the reduced system reads

d/dt y^ℓ(t;μ) = A^ℓ y^ℓ(t;μ) + F^ℓ(y^ℓ⊗ y^ℓ) + B^ℓ u(t),  y^ℓ(0;μ) = x^ℓ = (U^ℓ)^⊤ x,

which can be rewritten in the following semilinear form

ẏ^ℓ(t;μ) = 𝒜^ℓ(y^ℓ) y^ℓ(t;μ) + B^ℓ u(t),

where

(𝒜^ℓ(y))(i,j) = A^ℓ(i,j) + ∑_k=1^ℓ F^ℓ(i,(j-1)ℓ+k) y_k,  i,j ∈{1,…,ℓ}.

Note that the reduced system (<ref>) is free from μ (which appears only in the full initial state ỹ_0); hence (<ref>) can be written in the usual feedback form, where the reduced initial state is a variable x^ℓ. For the remaining part of the section we fix the default values γ=4, M_1=M_2=8, ν=0.02 and T=50.

§.§ Error indicators

Throughout the numerical experiments, we will measure the error of the reduced model using the following error indicators:

ℰ_J = |J(u^*) - J(u_red^*)|,  ℰ_y = ∑_k=1^n_t (t_k - t_k-1) ‖y(u^*,t_k) - y(u_red^*,t_k)‖_2,

where J(u) is the total cost of the full model computed on the control u, u^* is the optimal control of the full model, u_red^* is the optimal control of the reduced model, and y(u,t) is the full system state with control u at time t. In other terms, ℰ_J and ℰ_y test, respectively, the cost and the trajectory in the original model when the control signal is computed using the reduced model. The TT performance will be assessed using

ℰ_TT = |J(u_red,TT) - J(u_red)|,

where u_red is any feasible feedback law for the reduced system (not necessarily optimal), and u_red,TT is its TT approximation. For simplicity, in the numerical tests we use the SDRE controller.
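In code, the indicators can be evaluated as below, assuming full-model helpers cost_J(u) and trajectory(u) (returning J(u) and the states on the time grid t_0,…,t_{n_t}); both helper names are hypothetical, introduced only for this illustration:

    import numpy as np

    def error_indicators(u_star, u_red, cost_J, trajectory, t):
        E_J = abs(cost_J(u_star) - cost_J(u_red))
        Y1, Y2 = trajectory(u_star), trajectory(u_red)   # d x (n_t + 1) each
        dt = np.diff(t)                                  # t_k - t_{k-1}
        E_y = float(np.sum(dt * np.linalg.norm(Y1[:, 1:] - Y2[:, 1:], axis=0)))
        return E_J, E_y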
§.§ SPOD accuracy in the stable case (α = 0)

We begin by considering the stable case α=0, which avoids exponential growth of the solution; the unstable case is analysed in the next subsection. We apply the statistical POD strategy to the reduction of (<ref>)-(<ref>) with α = 0 and compare it with the POD methods based on the information of two specific dynamics: the uncontrolled solution and the optimal trajectory starting from the reference initial condition y_0. The training for the statistical POD is based on N=40 independent optimal trajectories, each started from a sample of the initial condition (<ref>).

In the left panel of Figure <ref> we show the decay of the singular values of Y_μ varying the number of samples N ∈{1,20,40}. As expected, the inclusion of more snapshots from different model regimes produces a slower decay. However, singular vectors of Y_μ with more snapshots produce a more accurate reduced model. In the right panel of Figure <ref> we compare the model reduction errors (<ref>) for three different sets of snapshots used for POD: Y_μ consisting of N=40 controlled trajectories started from random samples of the initial condition ("statistical POD"), one controlled trajectory started from the mean initial condition ("1 POD controlled"), and one uncontrolled trajectory started from the mean initial condition ("1 POD uncontrolled"). In all three cases ℓ=20 dominant singular vectors of the snapshot matrix are selected as the basis for model reduction. We see that the uncontrolled snapshots give a very inaccurate basis for the reduction of the controlled system; using the controlled snapshots gives a more accurate model, especially when multiple random samples of snapshots are used.

In Figure <ref> we compare the cumulative CPU time of the offline phase computed with Algorithm <ref> and with its optimized version, Algorithm <ref>. The non-optimized algorithm shows an almost constant increase in CPU time; at the first iterations it performs slightly better, since the statistical basis constructed from the first snapshots does not yet help in the construction of a good initial guess. On the other hand, as the knowledge of the ensemble of reduced dynamics grows, the CPU time per step decreases for the optimized version, demonstrating its benefit for the high-dimensional resolution of the PMP system.

In Table <ref> we compare the accuracy of the three techniques for different parameters of the problem. In the first case we double the standard deviation of the random variables, such that μ_i∼𝒩(0, 4σ_0^2); in the second case we consider γ = 3, obtaining a set of less smooth initial conditions. The dimension of the reduced basis in each case is ℓ=6. The uncontrolled dynamics is not sufficient to build a good reduced model. The POD technique based on controlled solutions obtains better results for both indicators, especially the statistical POD. In particular, fixing γ = 3, the statistical POD shows an improvement of one order of magnitude.

In Table <ref> we compare the 1 POD technique and the statistical POD on completely different initial data which cannot be represented in the form (<ref>). For these cases we fix the reduced basis size ℓ=30 to obtain accurate approximations. We note that for all the test cases the statistical approach is more accurate than 1 POD, with a difference of almost one order of magnitude. Furthermore, we see that the accuracy decreases for the different choices of y_0. This is due to the fact that the distance between the initial conditions considered in Table <ref> and the mean of the ones represented in the form (<ref>) increases, requiring a larger basis size to obtain the same accuracy. This demonstrates the higher accuracy of the statistical approach even for initial conditions far from the set of training samples.

§.§ Faster control synthesis using a TT approximation

We now assess the pre-computation of the TT approximation of the control in feedback form, u_red,TT(x^ℓ) ≈ u_red(x^ℓ), aiming at a faster synthesis of u_red,TT(x^ℓ) in the online regime by simply interpolating the TT decomposition, in contrast to computing u_red(x^ℓ) via optimisation. Recall that as long as x^ℓ is fixed, the rest of the model is independent of μ; therefore, u_red,TT(x^ℓ;μ) is actually just u_red,TT(x^ℓ). We fix the reduced model order ℓ = 20 for all methods in this subsection. Recall that u_red,TT(x^ℓ) is computed in the offline stage via the TT-Cross algorithm, sampling u_red(x^ℓ) at certain states x^ℓ∈ [-1,1]^20. We fix the stopping tolerance tol = 10^-3 for the TT-Cross, and n=6 Legendre basis functions for discretizing each component of x^ℓ. For faster computations in this offline stage, we compute u_red(x^ℓ) via the State-Dependent Riccati Equation (SDRE) applied to the semilinear system (<ref>) instead of the PMP. This introduces a negligible difference to the total cost: e.g.
choosing y_0 = 0.5cos(πξ_1) cos(πξ_2), the error between SDRE and PMP in the computation of the total cost is of order 10^-3.

In Table <ref> we notice that the SPOD method also gives a more structured reduced model, which lends itself to a more accurate TT approximation of the optimal control. Now we compare the SDRE controller (with and without TT approximation) and the LQR controller. The advantage of the latter is that only one Riccati equation (<ref>) (that for the linearization at the origin) needs to be solved and used in the computation of the feedback law (<ref>), but we show that the absence of nonlinear terms in the control strategy leads to a slower stabilization of the system. In the left panel of Figure <ref> we show the shape of the initial condition y_0 = 0.5cos(πξ_1) cos(πξ_2), and in the right panel we show the running cost of the solution with the three controllers. We see that the LQR controller gives a higher cost, which makes it less attractive despite its computational simplicity. The reduction of the computing time can instead be achieved using the TT interpolation. In the same figure we see that the costs using the direct SDRE computation and the TT interpolation are indistinguishable, indicating a negligible error due to the TT approximation.

CPU times for computing the control at one system state are compared in Table <ref>. The dimension of the reduced space is again fixed equal to 20. First of all, we note that the application of SPOD enables a speed-up of 50 times for Pontryagin's principle, while the use of SDRE achieves a speed-up of two orders of magnitude with respect to the reduced PMP. Finally, the computation of a TT surrogate function gains a further order of magnitude, achieving an overall acceleration of five orders of magnitude of the reduced TT-SDRE over the full PMP.

§.§ SPOD accuracy in the unstable case (α>0)

In this subsection we move to the unstable case, fixing the parameter α = 0.2 in the dynamical system (<ref>). The addition of this term changes drastically the behaviour of the uncontrolled dynamics, now characterized by exponential growth. It is clear that in this case the construction of the POD basis based on the uncontrolled solution is meaningless, since the uncontrolled solution quickly diverges far away from the desired state.

In the left panel of Figure <ref> we show the decay of the singular values of the different POD strategies varying the number of sampled initial conditions. The decay of the singular values is slower than that for the stable dynamics, reflecting the more complex nature of the problem. In the right panel of Figure <ref> we report the mean error in the computation of the cost functional for 1 POD and the statistical approach varying the basis size. The initial conditions are sampled again in the form (<ref>). The POD method based on the uncontrolled dynamics is not reported, since it achieves an error of order 10^5, demonstrating its inefficiency in this context. SPOD presents a better accuracy for all the test cases.

In Table <ref> we report the comparison of the two techniques varying the parameters appearing in the definition of the initial conditions (<ref>), with ℓ=20 reduced basis vectors. Again, in the first columns we consider the random variables μ_i∼𝒩(0, 4σ_0^2), doubling the reference standard deviation considered in (<ref>), while in the last columns we fix the decay exponent γ=3.
The SPOD achieves better results for all the indicators, especially for the case σ = 2σ_0, where ℰ_y performs two orders of magnitude better than the standard POD.

Finally, we pass to the application of the SDRE to the reduced dynamical system (<ref>), and we apply the Tensor Train Cross for the construction of a surrogate model. We fix the basis size ℓ=20. In Table <ref> we compare the total cost obtained using LQR, SDRE and its approximation via TT. We immediately notice that SDRE achieves better results than LQR for all the test cases, and the TT approximation obtains almost the same cost as SDRE to the digits displayed.

The left panel of Figure <ref> shows the configuration of the controlled solution via TT-SDRE at the final time, with infinity norm of order 10^-2. The right panel of Figure <ref> shows the running costs for LQR, SDRE and TT-SDRE. The curves for SDRE and TT-SDRE visually coincide, as reflected also in Table <ref>. The SDRE controller is able to reduce the running cost faster, which is reflected by the lower total cost in Table <ref>.

§ DIRICHLET BOUNDARY CONTROL FOR THE NAVIER-STOKES EQUATIONS

We consider a more challenging problem: the optimal control of the 2D incompressible Navier-Stokes (NS) equations via Dirichlet boundary control.

§.§ Problem formulation

The NS equations read

∂_t y - νΔ y + y ·∇ y + ∇ p = 0,  (ξ,t,μ) ∈Ξ× [0,T] ×ℝ^M,
∇· y = 0,  (ξ,t,μ) ∈Ξ× [0,T] ×ℝ^M,
y = g(ξ;μ),  (ξ,t,μ) ∈Γ_in× [0,T] ×ℝ^M,
y = 0,  (ξ,t,μ) ∈Γ_w× [0,T] ×ℝ^M,
y = u(t),  (ξ,t,μ) ∈Γ_u× [0,T] ×ℝ^M,
ν∂_n y - p n⃗ = 0,  (ξ,t,μ) ∈Γ_out× [0,T] ×ℝ^M,
y(ξ,0;μ) = y_0(ξ;μ),  (ξ,μ) ∈Ξ×ℝ^M,

where ν > 0 is the viscosity parameter and

g(ξ;μ) = 4ξ_2(1-ξ_2) + 1/2 ∑_k=1^M k^-γ sin(2π k ξ_2) μ_k

is the uncertain inflow, with μ = (μ_1,…,μ_M) being independent random variables, each distributed uniformly on [-c,c], where the half-width c>0 will be varied in the numerical tests. Note that the mean inflow is given by g̅(ξ) = 4ξ_2(1-ξ_2). The initial condition is set as y_0(ξ;μ) = g(ξ;μ) for ξ∈Γ_in, and y_0(ξ;μ) = 0 otherwise. The equations are posed on a backward facing step domain, illustrated in Figure <ref>. The control u(t) is taken in the piecewise-constant form

u(t) = ∑_i=0^n_t [u_1(t_i); u_2(t_i)] χ_[t_i,t_i+1)(t),

where {t_i}_i=0^n_t is a uniform discretization of the time interval [0,T]. We introduce the following cost functional

J_T(y,u;μ) = ∫_0^T ∫_Ξ |∇× y(ξ,t,μ)|^2 dξ dt + ∫_0^T δ |u(t)|^2 dt,

with the aim of minimizing the vorticity of the flow over the time interval [0,T], with a penalty cost for the control weighted by the parameter δ > 0.

Since both the control and the random field appear in the boundary conditions, it is convenient to split the solution in the form

y = ŷ + ỹ,

where ŷ takes into account the boundary conditions and ỹ has homogeneous boundary conditions, and to formulate the feedback control problem on ỹ. In turn, ŷ needs to satisfy only the boundary and divergence-free conditions, so we choose it as the solution of the stationary Stokes equation

-νΔŷ + ∇p̂ = 0,  (ξ,μ) ∈Ξ×ℝ^M,
∇·ŷ = 0,  (ξ,μ) ∈Ξ×ℝ^M,
ŷ = g(ξ;μ),  (ξ,μ) ∈Γ_in×ℝ^M,
ŷ = 0,  (ξ,μ) ∈Γ_w×ℝ^M,
ŷ = u,  (ξ,μ) ∈Γ_u×ℝ^M,
ν∂_n ŷ - p̂ n⃗ = 0,  (ξ,μ) ∈Γ_out×ℝ^M.

Note that ŷ is a linear function of both u and g(ξ;μ), which is in turn a linear function of μ due to (<ref>). This allows us to use superposition and write the Stokes solution in the form of a linear map,

ŷ(μ,u) = Ŷ μ̂ + Û u,  Ŷ = [Y_0 … Y_M],  Û = [U_1 U_2],

where Y_0 is the solution of (<ref>) with μ=0 and u=0 (that is, the uncontrolled Stokes solution with the mean inflow), Y_j for j=1,…,M is the solution of (<ref>) with μ_j=1, μ_i=0 for i≠ j (in particular, with the mean inflow switched off, so that Y_j is the response to the j-th perturbation mode alone) and u=0, μ̂ = [1 μ_1 ⋯ μ_M]^⊤ is the random vector augmented by 1 for brevity of the map (<ref>), and U_j is the solution of (<ref>) with g(ξ;μ)=0, u_j=1, u_i=0 for i≠ j. As a by-product, this allows us to precompute (<ref>) without time dependence of u(t), only for those M+3 initial inputs.
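The assembly of this map from the M+3 Stokes solves can be sketched as follows; stokes_solve(g_bc, u), which returns the discrete Stokes velocity for given inflow data and control, stands in for the finite element solver and is an assumption of this example:

    import numpy as np

    def build_stokes_map(stokes_solve, g_mean, g_modes):
        # g_mean: mean inflow data; g_modes: columns are the M perturbation modes
        M = g_modes.shape[1]
        cols = [stokes_solve(g_mean, np.zeros(2))]                            # Y_0
        cols += [stokes_solve(g_modes[:, j], np.zeros(2)) for j in range(M)]  # Y_j
        Yhat = np.column_stack(cols)                                          # d_v x (M+1)
        Uhat = np.column_stack([stokes_solve(np.zeros_like(g_mean), e)
                                for e in np.eye(2)])                          # U_1, U_2
        return Yhat, Uhat

    def yhat(Yhat, Uhat, mu, u):
        # Superposition: yhat(mu, u) = Yhat @ [1, mu] + Uhat @ u
        return Yhat @ np.concatenate(([1.0], mu)) + Uhat @ u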
Plugging (<ref>) into (<ref>), we can write the following Navier-Stokes-type equation for ỹ:

∂_t ỹ - νΔỹ + ỹ·∇ỹ + ŷ·∇ỹ + ỹ·∇ŷ + ∇p̃ = νΔŷ - ŷ·∇ŷ,
∇·ỹ = 0,

with all boundary and initial conditions homogeneous.

§.§ Semi-discretization

We discretize (<ref>) and (<ref>) in space using the stable P_2-P_1 Taylor-Hood finite element pair, involving bilinear elements {φ_i}_i=1^d_p for the pressure and biquadratic elements {ϕ_i}_i=1^d_v for the velocity, duplicated such that i=1,…,d_v/2 indexes the first component of the velocity, and i=d_v/2+1,…,d_v indexes the second component. By convention, ϕ_i and ϕ_j for any i≤ d_v/2 and j>d_v/2 are assumed non-overlapping. The resulting semidiscretization of (<ref>) reads

E ỹ̇(t) + A ỹ(t) + F(ỹ) ỹ(t) + F(ŷ) ỹ(t) + F^*(ŷ) ỹ(t) + D^⊤ p̃(t) = f_0(t),
D ỹ(t) = 0,

where E, A, F(·), F^*(·) and D are mass and stiffness matrices with elements

(E)_i,j = ∫_Ξ ϕ_i ϕ_j dξ,  i,j=1,…,d_v,
(A)_i,j = ∫_Ξ ν∇ϕ_i ·∇ϕ_j dξ,  i,j=1,…,d_v,
(F(v))_i,j = ∫_Ξ ϕ_i (v ·∇) ϕ_j dξ,  i,j=1,…,d_v,
(F^*(v))_i,j = ∫_Ξ ϕ_i ϕ_j ∂_ξ_k v dξ,  i,j=1,…,d_v, with k=1 for j ≤ d_v/2 and k=2 for j > d_v/2,
(D)_i,j = (D_1)_i,j + (D_2)_i,j,
(D_k)_i,j = ∫_Ξ φ_i ∂_ξ_k ϕ_j dξ,  k=1,2, i=1,…,d_p, j=1,…,d_v,
f_0(t) = -A ŷ(μ,u(t)) - F(ŷ) ŷ.

Note that ŷ̇ = 0 almost everywhere, since u(t) is piecewise-constant. To obtain a feedback form, we can further separate μ and u:

E ỹ̇(t) + A ỹ(t) + C(μ) ỹ(t) + F_yu(ỹ⊗ u) + F(ỹ) ỹ(t) + D^⊤ p̃(t) = f(μ) - B(μ)u - F_uu(u ⊗ u),
D ỹ(t) = 0,

where

C(μ) = ∑_j=0^M μ_j (F(Y_j) + F^*(Y_j)),
(F_yu)_i,k+2(j-1) = (F(U_k))_i,j + (F^*(U_k))_i,j,  i,j = 1,…,d_v, k=1,2,
(B(μ))_i,k = (AÛ)_i,k + ∑_j=0^M μ_j (F(Y_j)U_k + F(U_k)Y_j)_i,  i=1,…,d_v, k=1,2,
(F_uu)_i,j+2(k-1) = (F(U_j)U_k)_i,  i=1,…,d_v, j,k=1,2,
f(μ) = -A Ŷ μ̂ - ∑_j=0^M μ_j F(Y_j) Ŷ μ̂,

with the convention μ_0 = 1. Applying the same semidiscretization to the cost functional, we obtain

J̃_T(ỹ,u) = ∫_0^T y(t)^⊤ D_vort y(t) + δ|u(t)|^2 dt
= ∫_0^T ỹ(t)^⊤ D_vort ỹ(t) + u(t)^⊤ (δ I_2 + Û^⊤ D_vort Û) u(t) dt + ∫_0^T 2μ̂^⊤ Ŷ^⊤ D_vort ỹ(t) + 2u(t)^⊤ Û^⊤ D_vort ỹ(t) dt,

where I_2 ∈ℝ^2 × 2 is the identity matrix and

D_vort = (D_2 - D_1)^⊤ (D_2 - D_1).

The system of ODEs (<ref>) is approximated via an implicit Euler scheme with n_t time steps, and the resulting nonlinear algebraic system is solved via Newton's method with stopping tolerance 10^-6. By default, we fix M=8, γ=3, ν=2·10^-3, δ=10^-3, T=20, n_t=80, d_v=10382, d_p=1340, obtaining a problem of dimension d=11722.

§.§ Model Order Reduction

Next, we pass to the construction of the reduced basis and the corresponding reduced dynamical system. Fixing a realization μ, the corresponding snapshot matrix Y_μ is formed just from the divergence-free velocity snapshots {ỹ^0,…, ỹ^n_t} with homogeneous boundary conditions.
Since the solution of the ROM dynamics is a linear combination of the snapshots, the reduced trajectory directly benefits from the divergence-free property and the homogeneous boundary conditions, and it solves the following reduced ODE system
E^ℓ ẏ^ℓ(t) + A^ℓ y^ℓ(t) + C^ℓ(μ) y^ℓ(t) + F^ℓ_yu(y^ℓ(t) ⊗ u) + F^ℓ(y^ℓ(t)) y^ℓ(t) = f^ℓ(μ) - B^ℓ(μ) u - F^ℓ_uu(u ⊗ u),
y^ℓ(0) = x^ℓ,
where
E^ℓ = (U^ℓ)^⊤ E U^ℓ, A^ℓ = (U^ℓ)^⊤ A U^ℓ,
C^ℓ(μ) = ∑_j=0^M μ_j [(U^ℓ)^⊤ F(Y_j) U^ℓ + (U^ℓ)^⊤ F^*(Y_j) U^ℓ],
F^ℓ_yu = (U^ℓ)^⊤ F_yu (U^ℓ ⊗ I_2),
F^ℓ(y^ℓ) = ∑_k=1^ℓ y^ℓ_k [(U^ℓ)^⊤ F(U^ℓ_k) U^ℓ],
(B^ℓ(μ))_k = (U^ℓ)^⊤ A U_k + ∑_j=0^M μ_j [(U^ℓ)^⊤ F(Y_j) U_k + (U^ℓ)^⊤ F(U_k) Y_j], k=1,2,
F_uu^ℓ = (U^ℓ)^⊤ F_uu,
f^ℓ(μ) = -(U^ℓ)^⊤ A Y μ̄ - ∑_j=0^M μ_j (U^ℓ)^⊤ F(Y_j) Y μ̄,
while the reduced cost functional reads
J̃^ℓ_T(y^ℓ,u) = ∫_0^T y^ℓ(t)^⊤ D^ℓ_vort y^ℓ(t) + u(t)^⊤ R u(t) + 2 μ̄^⊤ Y^ℓ_vort y^ℓ(t) + 2 u(t)^⊤ U^ℓ_vort y^ℓ(t) dt,
with
D^ℓ_vort = (U^ℓ)^⊤ D_vort U^ℓ, R = δ I_2 + U^⊤ D_vort U, Y^ℓ_vort = Y^⊤ D_vort U^ℓ, U^ℓ_vort = U^⊤ D_vort U^ℓ.
Note that the coefficients involving U^ℓ can be precomputed, after which the reduced ODE system (<ref>) can be both assembled and solved for each μ and u with a complexity independent of the full dimension d.

§.§ Accuracy of the controlled reduced model

In the left panel of Figure <ref> we show the decay of the singular values for 1 POD and for the SPOD using different numbers of realizations for the collection of the snapshots (N ∈{5,10,15}). We note that the singular values above 10^-3 are almost indistinguishable for SPOD using 10 and 15 realizations. This shows that N=15 is sufficient for the statistical procedure to converge for the most relevant modes. We now study the performance of the SPOD approach and its difference from the 1 POD technique. The statistical approach has been constructed upon N=15 trajectories controlled by PMP, considering the random inflow (<ref>) and fixing the parameter c=1. The right panel of Figure <ref> displays the behaviour of the different error indicators for a system where the inflow is set to its mean value, g = ḡ(ξ). It is interesting to note that although 1 POD is built exactly upon the controlled snapshots of the mean inflow, leading to a projection error of order 10^-5 for ℓ=70, the SPOD performs better for both error indicators at higher reduced dimensions, reflecting the richness of the SPOD basis in accommodating different trajectories. In Figure <ref> we show the analysis of the mean of the errors (<ref>) over 10 random samples of μ, changing the half-width c of the interval of μ_k. In the left panel we consider c=1, employed also for the construction of the statistical basis. The results are immediately evident: all the error indicators for the 1 POD resolutions are stuck between order 10^-2 and 10^-3, while SPOD presents in general a decreasing behaviour, reaching order 10^-5 for the error indicator ℰ_J with 70 reduced basis vectors. In the right panel the numerical experiments are run with parameter c=2, introducing optimal trajectories possibly far from those considered during the basis construction.
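The statistical basis underlying these comparisons can be assembled by stacking the snapshots of all N controlled trajectories and truncating an SVD; a minimal sketch under the simplifying assumption of plain concatenation without statistical weighting:

import numpy as np

def spod_basis(trajectories, ell):
    """Stack snapshots from N controlled trajectories and keep the
    leading ell left singular vectors as the reduced basis U^ell."""
    S = np.hstack(trajectories)                # each trajectory: (d, n_t + 1)
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :ell], s

# e.g. N = 15 synthetic trajectories of a d-dimensional state:
rng = np.random.default_rng(0)
trajs = [rng.normal(size=(200, 81)) for _ in range(15)]
U_ell, svals = spod_basis(trajs, ell=70)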
This larger parameter range is reflected in the order of the different error indicators, which achieve at most order 10^-2, but still yield a better approximation compared to the 1 POD strategy.

§.§ Suboptimal controllers

Lastly, we compare different faster controllers: the PMP applied to the full order model but with the mean inflow, the SDRE applied to the reduced order model using 1 POD and SPOD bases, as well as the LQR applied to reduced models. Firstly, we show the uncontrolled flow at the final time in Figure <ref> with a prefixed random inflow (<ref>) with μ_* = [0.0984; -0.3838; -0.1259; 0.0398; -0.4314; 0.1589; 0.2323; 0.0947]. On the left panel we show the absolute velocity |v|(ξ)=√(y_1(ξ)^2+y_2(ξ)^2), while on the right panel we show the velocity vector field y=(y_1(ξ),y_2(ξ)). It is possible to notice the presence of vortices due to re-circulation. The total cost of the uncontrolled dynamics is equal to 4.6279. Using PMP on the full system with the mean inflow ḡ(ξ) to compute the control signal, but applying this signal to the system with a random realization of the inflow, gives the flow shown in Figure <ref>. We see that this control is unable to reduce the vortices completely. This indicates the need for controllers that are more specific and robust to random inputs to the model. Now consider applying SDRE and LQR to the reduced system (<ref>). Since SDRE does not take into account quadratic terms in the control, we must omit them and write down the reduced dynamics in the semilinear form
ẏ^ℓ(t) = 𝒜^ℓ(y^ℓ(t)) y^ℓ(t) + ℬ^ℓ(y^ℓ(t)) u(t), y^ℓ(0) = x^ℓ,
where
𝒜^ℓ(y^ℓ(t)) = -(E^ℓ)^-1 (A^ℓ + C^ℓ(μ) + F^ℓ(y^ℓ(t))),
ℬ^ℓ(y^ℓ(t)) = -(E^ℓ)^-1 (F^ℓ_yu(y^ℓ(t) ⊗ I_2) + B^ℓ(μ)).
First we compute the solution controlled via an LQR feedback, solving the Riccati equation (<ref>) for x=0 and obtaining the matrix P_0. At this point we consider the linearized feedback map
u(x^ℓ) = -R^-1 (ℬ^ℓ(0)^⊤ P_0 + 2 U^ℓ_vort) x^ℓ,
and the resulting total cost is 3.3228. The final configuration and the velocity vector field are shown in Figure <ref>. We note that the resulting flow is less turbulent than in the uncontrolled case, but the solution is still far from the laminar regime.

§.§ Tensor Train approximation of the reduced SDRE control

Now we consider the TT approximation of the feedback control function computed by SDRE on the 1 POD and SPOD reduced models. Since the TT Cross approximates scalar functions, it is applied twice, once per component of the control (<ref>). Moreover, we fix μ=μ_* as defined in (<ref>). This allows us to approximate again a function u(x^ℓ) depending on the reduced state only. In these state variables, we consider a domain X^ℓ = ×_i=1^ℓ [a_i,b_i], where the interval ranges are chosen such that the domain contains all the reduced snapshots. Furthermore, we fix tol = 10^-3 and we consider n=6 Legendre basis functions per dimension. Table <ref> displays the approximation error ℰ_TT (<ref>). The error in the statistical framework is of the order of 10^-2, while the approximation error with the 1 POD basis is of order 10^-1, reflecting the fact that the projection onto the 1 POD basis is not able to approximate perturbed trajectories. Finally, in Figures <ref>-<ref> we show the final configuration of the controlled solution for TT-SDRE-1POD and TT-SDRE-SPOD, respectively. For the 1 POD approach, the right panel of Figure <ref> shows that a turbulent regime is still active, while for the statistical approach (Figure <ref>) the fluid presents a more laminar behaviour.
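For reference, the linearized LQR baseline used above can be computed with standard tools; a minimal SciPy sketch on dense stand-ins for the reduced matrices, omitting the cross term U^ℓ_vort of the feedback map for simplicity:

import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
ell = 20
A0 = -np.eye(ell) + 0.1 * rng.normal(size=(ell, ell))   # stand-in for A(0), assumed stable
B0 = rng.normal(size=(ell, 2))                          # stand-in for B(0)
Q = np.eye(ell)                                         # stand-in for D^ell_vort
R = 1e-3 * np.eye(2)                                    # stand-in for delta I_2 + U^T D_vort U

P0 = solve_continuous_are(A0, B0, Q, R)                 # Riccati solve at x = 0
K = np.linalg.solve(R, B0.T @ P0)                       # linearized LQR gain
u = lambda x: -K @ x                                    # feedback map u(x)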
§ CONCLUDING REMARKS

We have developed a model order reduction method for the synthesis of feedback control laws for nonlinear, parameter-dependent dynamics, including fluid flow problems. The reduction phase is inspired by POD techniques, requiring sampling of the (sub)optimal control problem solutions for different realizations of the random variables. Snapshots are compressed for the construction of a statistical POD basis which minimizes the empirical risk. The resulting reduced order model facilitates the construction of a data-driven stabilizing feedback law in the tensor train format. The low-rank tensor train structure enables the real-time implementation of a feedback control for high-dimensional problems such as vorticity minimization in 2D Navier-Stokes. Future research directions include the design of robust ℋ_∞ controllers, the development of greedy sampling strategies for both parameters and initial conditions which can alleviate the computational cost of the offline phase, and the training of higher dimensional surrogates using physics-informed neural networks.

§ ACKNOWLEDGEMENTS

This research was supported by the UK Engineering and Physical Sciences Research Council New Horizons Grant EP/V04771X/1, the New Investigator Award EP/T031255/1 and the Standard Grant EP/T024429/1. | http://arxiv.org/abs/2311.16332v1 | {
"authors": [
"Sergey Dolgov",
"Dante Kalise",
"Luca Saluzzi"
],
"categories": [
"math.OC",
"cs.NA",
"math.NA"
],
"primary_category": "math.OC",
"published": "20231127213721",
"title": "Statistical Proper Orthogonal Decomposition for model reduction in feedback control"
} |
Towards Transfer Learning for Large-Scale Image Classification Using Annealing-based Quantum Boltzmann Machines
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The authors acknowledge funding from the German Federal Ministry for Economic Affairs and Climate Action, project PlanQK, 01MK20005I.
Daniëlle Schuman (0009-0000-0069-5517), LMU, [email protected]; Leo Sünkel, LMU, [email protected]; Philipp Altmann (0000-0003-1134-176X), LMU, [email protected]; Jonas Stein (0000-0001-5727-9151), LMU, [email protected]; Christoph Roch (0000-0003-0781-6590), LMU, [email protected]; Thomas Gabor, LMU, [email protected]; Claudia Linnhoff-Popien (0000-0001-6284-9286), LMU, [email protected]
14, 2024
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Quantum Transfer Learning (QTL) recently gained popularity as a hybrid quantum-classical approach for image classification tasks by efficiently combining the feature extraction capabilities of large Convolutional Neural Networks with the potential benefits of Quantum Machine Learning (QML). Existing approaches, however, only utilize gate-based Variational Quantum Circuits for the quantum part of these procedures. In this work we present an approach to employ Quantum Annealing (QA) in QTL-based image classification. Specifically, we propose using annealing-based Quantum Boltzmann Machines as part of a hybrid quantum-classical pipeline to learn the classification of real-world, large-scale data such as medical images through supervised training. We demonstrate our approach by applying it to the three-class COVID-CT-MD dataset, a collection of lung Computed Tomography (CT) scan slices. Using Simulated Annealing as a stand-in for actual QA, we compare our method to classical transfer learning, using a neural network of the same order of magnitude in size, to display its improved classification performance. We find that our approach consistently outperforms its classical baseline in terms of test accuracy and AUC-ROC score and needs fewer training epochs to do this. quantum transfer learning, quantum annealing, simulated annealing, quantum machine learning, quantum boltzmann machine

§ INTRODUCTION

Promising great advantages such as speed-ups, increased space efficiency and the ability to model probability distributions naturally, quantum computing is being heavily investigated for use in machine learning (ML) applications <cit.>.
To combine these potential advantages with the ability to efficiently extract "highly informative features" <cit.> from large-scale image data using pre-trained, state-of-the-art classical Convolutional Neural Networks (CNNs), hybrid Quantum Transfer Learning (QTL) has recently seen a surge in popularity in such applications <cit.>. These approaches combine said classical CNNs with Variational Quantum Circuits (VQCs) trained on the data to be learned in a supervised manner <cit.>. They often achieve desirable results, even though they use only the small qubit count available on the current Noisy Intermediate-Scale Quantum (NISQ) computers of the gate-model type <cit.>. Another type of still noisy <cit.>, but currently available Quantum Computers (QCs), namely Quantum Annealers, can also be used for supervised QML: This happens e.g. in the form of Quantum Boltzmann Machines (QBMs) <cit.>, which might offer a speed-up and a potential improvement of the training process <cit.>. Eager to explore the applicability of Transfer Learning (TL) in Quantum Annealing (QA), we want to evaluate the compatibility of QBMs with QTL approaches, to see if we can combine the advantages of both concepts. To do this, we propose an approach that combines the QTL method SEQUENT <cit.>, a two-step process that connects a CNN to a quantum classifier using a classical compression layer, with an annealing-based QBM that takes the role of said classifier. We test this approach by classifying the COVID-CT-MD dataset <cit.>, a large-scale, real-world, three-class medical image dataset consisting of slices of lung Computed Tomography (CT) scans. To save QPU time, we do this using the Simulated Annealing algorithm (SA) <cit.> instead of QA, and compare this approach to a classical TL one using a similarly-sized Feed-forward Neural Network (FNN). The rest of the paper is structured as follows: In Sec. <ref>, we introduce QBMs as well as QTL, and mention the most relevant work in the area. We then present our architecture in Sec. <ref>, and describe the COVID-CT-MD dataset <cit.> and our experiments performed on it in Sec. <ref>. We conclude by discussing the implications and limitations of our work, as well as the needs for future research derived from it, in Sec. <ref>.

§ BACKGROUND AND RELATED WORK

§.§ (Quantum) Boltzmann Machines

A Boltzmann Machine (BM) is an undirected, arbitrarily connected, stochastic neural network (NN) whose neurons s_i ∈ 𝐬, called visible and hidden units, can take the values 0 and 1 with a certain probability <cit.>. The function modeled by the BM takes the form of a Boltzmann distribution governing said probabilities <cit.>:
P_model(𝐬) = e^-E(𝐬) / ∑_𝐬 e^-E(𝐬) with E(𝐬) = ∑_ij w_ij s_i s_j + ∑_i b_i s_i,
where w_ij and b_i are the network's weights and biases and E is the energy function. The BM can be trained using stochastic gradient descent to minimize the Kullback-Leibler divergence (D_KL) between P_model and the distribution P_data underlying the dataset to be learned, for which, in the case of supervised learning, the conditional distribution of the datapoints' labels given their input vectors is used <cit.>. Training involves sampling from both distributions, as the gradient of D_KL is <cit.>:
∂_w_ij D_KL(P_data || P_model) = ⟨ s_i s_j ⟩_data - ⟨ s_i s_j ⟩_model,
∂_b_i D_KL(P_data || P_model) = ⟨ s_i ⟩_data - ⟨ s_i ⟩_model,
where ⟨·⟩ denotes averaging over samples. Sampling thus means determining the values of all units s_i and s_j that are not clamped, i.e. not
fixed to the values of a datapoint, several times, once for each sample. In supervised learning, one always clamps some of the visible units to the values of an input vector, and sampling from P_data additionally involves clamping the rest of them to the label <cit.>. The difference between classical BMs and QBMs lies in the way the values of the unclamped units are obtained: Classically, one has to calculate the value of each unit based on its probability p(s_i = 1) = (1 + e^∑_j w_ij s_j + b_i)^-1 of becoming 1, which depends on the current values of its neighbors s_j <cit.>. This is done iteratively for all units in a sample until their values do not change anymore <cit.>. As this has to be done for every sample taken for every datapoint in every epoch, the process can become very time consuming <cit.>. This is why general classical BMs are only rarely used nowadays[This can e.g. be seen when searching Google Scholar for recent literature on BMs while excluding results containing "restricted" and "quantum": Most of the few fitting results consider either niche scientific applications or special types of BMs, such as Deep BMs, chaotic BMs or higher-order BMs.]. Instead, one mainly uses Restricted BMs (RBMs), which make calculations more efficient by restricting their connectivity to enable parallel calculation of some units' values and by approximating samples from P_model by calculating them using samples from P_data <cit.>. QBMs on the other hand can sample the values of all unclamped units at once, without any restrictions or approximations <cit.>. This is possible as current Quantum Annealers do in general return samples for entire bit-vectors 𝐬 that are approximately Boltzmann distributed with regard to an energy function that they are given in the form of e.g. a Quadratic Unconstrained Binary Optimization (QUBO) problem <cit.>. Thus, mapping each unclamped unit to a logical qubit of the annealer, one can directly map a BM's weights and biases to a QUBO, provided they are scaled with the appropriate inverse effective temperature β_eff of the hardware, and determine the units' values using one run of the QA process per sample <cit.>. Thus, previous work like <cit.> and <cit.> that used standalone QA-based QBMs trained with supervised learning for image classification has observed benefits in terms of computation time <cit.> or fewer fluctuations in training accuracy <cit.> in comparison to classical equivalents. However, these papers usually classified only rather simple datasets like the "bars-and-stripes" <cit.> or the MNIST dataset <cit.>, where extensive feature extraction as provided by a TL approach using a large CNN is not necessary. The only annealing-based work similar to ours is <cit.>, which uses a pipeline of a classical autoencoder and a Deep Belief Network pre-trained with QA-based restricted QBMs to classify, among others, medical images.
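To make the QUBO mapping concrete, a minimal sketch using D-Wave's open-source dimod and neal packages (the toy weights, the clamped bits, and the omitted β_eff scaling are all illustrative assumptions; on hardware the SA sampler would be swapped for a QPU sampler):

import numpy as np
import dimod
import neal

rng = np.random.default_rng(0)
n = 8                                           # toy number of units
w = np.triu(rng.normal(scale=0.1, size=(n, n)), k=1)
b = rng.normal(scale=0.1, size=n)

# BM energy E(s) = sum_{i<j} w_ij s_i s_j + sum_i b_i s_i mapped to a QUBO;
# scaling by the inverse effective temperature beta_eff is omitted here.
Q = {(i, i): b[i] for i in range(n)}
Q.update({(i, j): w[i, j] for i in range(n) for j in range(i + 1, n) if w[i, j] != 0.0})
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Clamping units corresponds to fixing variables before sampling
# (here two hypothetical input bits).
clamped = bqm.copy()
for var, val in {0: 1, 1: 0}.items():
    clamped.fix_variable(var, val)

# Each read is one (approximately Boltzmann-distributed) sample.
sampleset = neal.SimulatedAnnealingSampler().sample(clamped, num_reads=100)
avg = sampleset.record.sample.mean(axis=0)      # <s_i> over the unclamped units

# The D_KL gradient then uses differences of such averages:
# dw_ij ~ <s_i s_j>_data - <s_i s_j>_model,  db_i ~ <s_i>_data - <s_i>_model.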
Unlike the autoencoder-plus-DBN pipeline of <cit.>, we train our QBM to directly classify the images using supervised learning, instead of only using it to initialize a classical classifier.

§.§ (Quantum) Transfer Learning

Transfer learning (TL) is an ML technique which re-uses previously learned knowledge by adapting an existing ML model M_A (usually a large NN) which has been trained on a dataset D_A to perform a similar learning task on a usually similar dataset D_B <cit.>. This is done by replacing part of M_A, in the case of a large CNN usually the last (few) fully-connected layers used for classification, by a new comparable component such as an untrained classifier M_B (often also consisting of fully connected layers) <cit.>. The resulting new model M_A'B is subsequently trained on the new dataset D_B while either freezing the (usually convolutional) layers M_A' taken from M_A, meaning their weights and biases are not trained, or just training them along with M_B <cit.>. The first approach, which is used in this work, is called feature extraction, as that is what M_A' is used for; the second is called fine-tuning <cit.>. Both have the advantage that fewer datapoints are needed in D_B to achieve a model with good generalization, and that fewer computational resources are needed compared to training a model M_A'B from scratch <cit.>. In Quantum Transfer Learning (QTL), the same methods are applied to hybrid QML models, meaning that either M_A' or M_B, or both, contain or consist entirely of a QML component such as a trainable VQC <cit.>. Using a classical M_A and a hybrid quantum-classical M_B is particularly popular, as employing a classical M_A' as a feature extractor allows one to drastically downsample inputs, making even large datapoints small enough to be processed on a NISQ computer <cit.>. VQC-based QTL approaches that process large datasets, like the COVID-CT scans, on gate-model-type QCs are <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. A gate-based QTL approach especially relevant to our work is SEQUENT <cit.>, a two-step TL approach. Using a large pre-trained CNN like ResNet-18, it first replaces the fully-connected layer(s) at the end of this network with a classical compression layer, for further down-sampling of the data, and a surrogate classical classification layer <cit.>. These layers are both trained on a dataset D_B while freezing the other layers of the CNN <cit.>. Then, in a second step, the surrogate classification layer is replaced by a VQC, which is trained while the rest of the network, including the compression layer, is being frozen <cit.>. This way of training allows the impact of this quantum part on the classification performance to become more visible <cit.>.

§ METHODOLOGY

Our approach for constructing a QA-based QTL pipeline for large-scale image classification, which can be seen in Fig. <ref>, follows a process very similar to SEQUENT <cit.>, only replacing the VQC with an annealing-based QBM. It starts with a pre-processing phase in which we resize and subsequently crop the input images using functionalities of the PyTorch library <cit.> to scale them from their original size, e.g. 512 x 512 pixels for our COVID-CT-MD dataset <cit.>, to a size of 224 x 224 pixels. After pre-processing, the image is fed into a ResNet-18 <cit.> which was pre-trained on the ImageNet dataset <cit.> for feature extraction.
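A minimal PyTorch sketch of this pre-processing and feature-extraction front end (the resize target, crop type, and channel replication are assumptions, as is the 512 -> 64 compression head described in the next paragraph):

import torch
import torch.nn as nn
from torchvision import models, transforms

# Pre-processing: resize, then crop to 224 x 224 (the paper only states
# "resize and subsequently crop"; the exact parameters here are assumed).
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # CT slices are gray-scale
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Frozen ImageNet-pretrained ResNet-18 as the feature extractor; its final
# fully-connected layer is replaced by a 512 -> 64 compression layer,
# which is the only part trained in the first step.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(512, 64)   # new layer, trainable by default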
This network's last layer has been replaced by a compression layer which further down-samples the data from dimensionality 512 to 64, and which has been pre-trained for 10 epochs on our COVID-CT-MD dataset <cit.> using a surrogate classification layer. This layer has subsequently been replaced by a layer that binarizes the data and a (deep) QBM, which classifies the data point into one of our three categories. In our experiments, the number of hidden layers h and the total number of hidden units n of this (deep) QBM were treated as hyperparameters to be optimized, h ranging from 1 to 4 and n ranging from 12 to 500. These ranges were chosen to allow the exploration of many combinations of h with different numbers of units per layer, while ensuring that most of these combinations could potentially be embedded into current QA hardware <cit.>. Regarding the rationale behind using this two-step TL approach, we would like to point out that, being an undirected neural network, a QBM cannot be trained using backpropagation. This makes it impossible to train any upstream classical feed-forward layers of a combined neural network architecture simultaneously. Thus, using a SEQUENT-like approach to train any compression layers is in this case not only advantageous regarding the investigation of the impact of the quantum part on the network, it is outright unavoidable if one wants to employ such a layer.

§ EXPERIMENTS

To show that this architecture can be used for large-scale image classification, we perform experiments on the COVID-CT-MD dataset <cit.>. Like Stein et al. <cit.>, we do this using SA instead of actual QA, as the hyperparameter optimization we perform in our experiments requires an extremely large amount (hundreds of millions[One arrives at this order of magnitude by multiplying 55 runs * 10 seeds * 2715 training data points * 2 sampling phases (from P_data and P_model) * between 1 and 20 epochs * between 5 and 100 samples per sampling phase.]) of annealing runs, which is currently not feasible for us to execute on quantum hardware given the scarce availability of QA machines. We consider it reasonable to use SA as a stand-in for QA in this context, even though it has slightly different working mechanisms due to the absence of quantum tunneling in the algorithm and might take a lot more execution time on certain problem instances <cit.>, as it also returns approximately Boltzmann-distributed results, just like QA <cit.>. This is also stated in the documentation of D-Wave's SA implementation <cit.>, which we use in our experiments.

§.§ Dataset

The COVID-CT-MD dataset <cit.> consists of lung CT scan slices labeled as COVID-19 pneumonia ("Covid"), Community Acquired Pneumonia ("Cap") and healthy ("Normal")[In contrast to both other classes, the "Normal" images are not labeled per slice, but only per patient. Hence we here selected random slices between the indexes 15 and 112, in most of which the lung takes up a decent part of the image.], which we convert from DICOM format to gray-scale PNG images sized 512 x 512 pixels. We split this dataset into a train and a test set which each contain a small, but equal number of patients for each class (20 for the train set, 5 for the test set), the number of which is limited by the available amount of Cap patients (which is 25). We also balance the sets regarding the total amount of images per class by deleting images as necessary, always selecting the patients of whom we had the most slices when doing so.
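For the conversion step, a minimal sketch using the pydicom and Pillow packages (the min-max windowing is an assumption; the paper only states that slices are converted to gray-scale PNGs):

import numpy as np
import pydicom
from PIL import Image

def dicom_to_png(src: str, dst: str) -> None:
    # Min-max scaling to 8 bit is a simplification of proper CT windowing.
    arr = pydicom.dcmread(src).pixel_array.astype(np.float32)
    arr = (arr - arr.min()) / max(float(arr.max() - arr.min()), 1e-8)
    Image.fromarray((255.0 * arr).astype(np.uint8)).save(dst)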
This balancing results in 905 images per class in the training and 275 images per class in the test set. In order to be able to increase the size of the training and test sets regarding the number of patients, as well as to save valuable computation time running the SA or, in the future, QA algorithm, we refrain from using validation.

§.§ Experimental setup and results

The first step of our experiments is to perform a hyperparameter optimization, using the Bayesian search algorithm of the Weights & Biases framework <cit.> to find hyperparameters that maximize the average of training accuracy and training AUC-ROC score. The hyperparameters being optimized can be taken from the top line of Table <ref>. We continue this process for 55 runs, each time averaging over 10 random seeds. To assess the benefits of using an annealing-based QBM in this pipeline, we compare our approach to a classical one using the same pipeline, but replacing the QBM with a simple FNN using sigmoidal units[The sigmoid activation function was chosen due to being almost equivalent to (<ref>), causing it to give the neurons outputs comparable to the average values of units in a BM and thus maximizing the similarity of both approaches.] with a similar number of parameters to optimize (again having the number of its hidden units be subject to optimization). We chose this as a classical baseline instead of a classical (R)BM since it falls into the category of backpropagation-based approaches, which are much more commonly used in state-of-the-art medical image classification techniques <cit.>. Furthermore, FNNs are one of the most common options for this type of component in classical feature-extraction-based transfer learning for medical image classification <cit.>, making them suitable for our setting. This baseline was also subjected to the same hyperparameter optimization procedure. When plotting the training accuracy values and AUC-ROC scores of the three best identified hyperparameter settings for both of these models, listed in Table <ref> and shown in Fig. <ref>, one can see that the annealing-based approach not only reaches significantly higher values, but its training results are also significantly more consistent over different training seeds compared to the classical approach. And while the training with SA does take far more wall clock time compared to the classical baseline, the figure also shows it executes fewer epochs in said time to reach this performance. This might lead to a future speed-up in training if using actual QA were to greatly reduce the time needed per training step in comparison to SA. Subsequently, we apply all of the models trained with the respective three best identified hyperparameter configurations to our test set. For each of these models, of which we have 10 per hyperparameter configuration (as we used 10 training seeds), we run the test 10 times, each time using a different test seed. Subsequently, we determine the test values by averaging over these test seeds. The distribution of the test values over the different models can be seen in Fig. <ref>: While the results are generally not optimal, the annealing models on average outperform the classical models and again show more consistent behavior across training seeds.

§ DISCUSSION AND CONCLUSION

In this work, we have used a hybrid QTL approach including an annealing-based QBM to classify images of a large real-world dataset, namely the COVID-CT-MD dataset.
While the classification performance of the approach, with under 70% test accuracy and a test AUC-ROC score of only around 80%, is not very high, it on average significantly outperforms a similarly-sized sigmoidal FNN. Also, even though it still takes significantly more wall clock time when using SA, it needs fewer training epochs to reach this level of performance and is less variable in its performance when using different training seeds. This indicates that, whilst not optimal yet, the approach seems promising for further research into the possible advantages of QML for large-scale image recognition, be it regarding classification performance or execution speed. Future work should thus include three aspects: Firstly, possibilities to achieve higher classification performance with the current approach should be investigated. Its suboptimal classification performance is likely due to overfitting on the training dataset, which is neither very large nor diverse, considering it only contains images of a small number of patients which, when coming from the same person, are very similar. Thus, methods to circumvent this problem, such as employing validation and early stopping, or using another, more varied dataset, should be explored. Secondly, a strong limitation of this work is that so far, we have only used SA to evaluate our approach. Performing experiments on quantum hardware is however a necessary step to determine the actual capability of the approach to effectively utilize near-term available QCs in large-scale image classification. The reason for this is that Quantum Annealers come with a lot of physical properties that might cause their behavior to differ from that of SA: Features like quantum tunneling might have the potential to improve the performance of the approach, while effects of noise, early freeze-outs of the physical dynamics of the annealing process or the "breaking" of entanglement between visible and hidden units might harm it <cit.>. In this context, it might also be interesting to see how our approach compares to the original SEQUENT approach using VQCs and how it compares to a version using supervised classical RBMs instead of the QBM or FNN, to explore the performance of different comparable approaches. Lastly, while we do suspect that using our pipeline for large-scale image classification is beneficial, due to its employment of feature extraction and further data compression that casts the data into a more information-dense form, we have yet to investigate experimentally how this compares to using a stand-alone QBM on this type of data, without involved pre-processing. Thus, although the approach presented in this work may seem promising, the question as to whether it will lead to a near-term quantum advantage remains open. | http://arxiv.org/abs/2311.15966v1 | {
"authors": [
"Daniëlle Schuman",
"Leo Sünkel",
"Philipp Altmann",
"Jonas Stein",
"Christoph Roch",
"Thomas Gabor",
"Claudia Linnhoff-Popien"
],
"categories": [
"quant-ph",
"cs.ET",
"cs.LG",
"eess.IV"
],
"primary_category": "quant-ph",
"published": "20231127160749",
"title": "Towards Transfer Learning for Large-Scale Image Classification Using Annealing-based Quantum Boltzmann Machines"
} |
Value-Based Reinforcement Learning for Digital Twins in Cloud Computing
Van-Phuc Bui, IEEE Student Member, Shashi Raj Pandey, IEEE Member, Pedro M. de Sant Ana, Petar Popovski, IEEE Fellow
V.-P Bui, S.R. Pandey, and P. Popovski (emails: {vpb, srp, fchi, petarp}@es.aau.dk) are all with the Department of Electronic Systems, Aalborg University, Denmark. P. M. de Sant Ana is with the Corporate Research, Robert Bosch GmbH, 71272 Renningen, Germany (email: [email protected]). This work was supported by the Villum Investigator Grant "WATER" from the Velux Foundation, Denmark.
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
⋆ The authors contribute equally to this paper.

Among the many tasks that Large Language Models (LLMs) have revolutionized is text classification. However, existing approaches for applying pretrained LLMs to text classification predominantly rely on using single-token outputs from only the last layer of hidden states. As a result, they suffer from limitations in efficiency, task-specificity, and interpretability. In our work, we contribute an approach that uses all internal representations by employing multiple pooling strategies on all activation and hidden states. Our novel lightweight strategy, Sparsify-then-Classify (STC), first sparsifies task-specific features layer-by-layer, then aggregates across layers for text classification. STC can be applied as a seamless plug-and-play module on top of existing LLMs. Our experiments on a comprehensive set of models and datasets demonstrate that STC not only consistently improves the classification performance of pretrained and fine-tuned models, but is also more efficient for both training and inference, and is more intrinsically interpretable.

§ INTRODUCTION

Ever since the groundbreaking work of <cit.>, transformer-based models have rapidly become mainstream and firmly established the research forefront. Large Language Models (LLMs) have in recent years rapidly evolved into remarkably advanced and sophisticated tools with emergent capabilities that, in many cases, rival or even surpass human performance across a spectrum of rather complex Natural Language Processing (NLP) tasks, including natural language inference, machine translation, text summarization, and question answering <cit.>. Despite this supreme performance across a wide variety of NLP tasks, which has yet to saturate with increasing parameter counts, researchers currently face several pivotal challenges in applying novel advanced LLMs like GPT-4 to text classification tasks. Contemporary mainstream paradigms leveraging pretrained LLMs as text classifiers include sentence embedding, model fine-tuning and in-context learning, each of which, however, comes with its own set of limitations.
In general,
* The sentence embedding approach directly using pretrained model outputs often results in suboptimal performance, as the entire sentence embedding process is designed to be task-agnostic.
* While the model fine-tuning approach frequently achieves state-of-the-art (SotA) performance in text classification tasks, it demands a substantial investment in computational resources and time, especially as model sizes grow. Additionally, the per-task-per-model nature often limits the model's versatility, making it suitable only for the specific task it was fine-tuned for.
* In-context learning (ICL), as exemplified by few-shot prompting <cit.> that demands no explicit fine-tuning, has its own set of challenges. The effectiveness of ICL hinges critically on the quality of the prompts used, with performance varying dramatically from matching the SotA fine-tuned models to mere random guessing, leading to the necessity of prompt engineering, which can become an intricate and costly trial-and-error process <cit.>.
Additionally, among all existing text classification paradigms, the underlying implicit processes by which the models achieve their prominent capabilities have been inadvertently neglected, as shown in Figure <ref>. Specifically, a common trait among the three conventional approaches is their naïve reliance on a single token output from the final layer's hidden states of the model, such as the first [CLS] token in BERT or the last token in GPT for each input sentence, which potentially loses task-specific information learned by the internal representations of the model. On the other hand, recent studies in LLM interpretability <cit.> have demonstrated that internal representations are remarkably adept at capturing essential features, yet they have barely explored the possibility of establishing better text classifiers by leveraging multi-layer representations. Considering that the conventional methods in Figure <ref> share these limitations due to their reliance on single-token outputs from the final layer, a thorough examination that scrutinizes and leverages the formidable capabilities of these internal representations should intuitively boost performance on text classification tasks. To address the aforementioned issues accompanying traditional text classification approaches, in this work we propose the Sparsify-then-Classify (STC)[Available at <https://github.com/difanj0713/Sparsify-then-Classify>] approach, shifting focus from the final layer outputs to the rich internal representations within LLMs that can be utilized for text classification tasks. As shown in Table <ref>, our method leverages activations of feedforward neural network (FFN) neurons within transformer blocks and hidden states across different layers of pretrained models, with neither the need for retraining or fine-tuning model weights nor prompt engineering. By investigating the massive internal representations hidden beneath model outputs, we sparsify salient features relevant to specific tasks using logistic regression probes, and subsequently develop aggregation strategies emphasizing the multi-layer representation of sparsified features to form task-specific classifiers. Given that model structure and weights are transparent, STC can be seamlessly integrated into existing pretrained and fine-tuned transformer-based models in a plug-and-play fashion.
Current results from our experiments on RoBERTa, DistilBERT, and the GPT2 family, as Table <ref> shows, demonstrate that our proposed approach not only consistently outperforms conventional sequence classification methods on both pretrained and fine-tuned models, but also achieves greatly improved efficiency. Specifically, when applied to a frozen pretrained GPT2-XL model for the IMDb sentiment classification task, our method, by aggregating only the activation results from the lower 60% of layers, yields a 94.72% accuracy that already surpasses the 93.11% of the fully fine-tuned model. Extending STC to all layers requires merely 0.25% of the original trainable parameters and 35.51% of the training cost compared with the fine-tuning baseline, while further boosting the performance to 94.92%, all from a frozen, task-agnostic pretrained model, ushering in a wide spectrum of potential applications, from augmented monitoring of model content generation to refined model architectures that transcend traditional single-token approaches. The primary contributions of our work are as follows:
* We propose a lightweight Sparsify-then-Classify method. After layer-wise linear probing on pooled FFN activation or hidden state patterns across layers of pretrained LLMs, we sparsify salient neurons on each layer and aggregate across layers to train a hierarchical text classifier.
* We show that for different transformer-based LLMs, our approach consistently outperforms existing LLM application paradigms for text classification. Our method applied to frozen models yields results approaching or even surpassing those of fine-tuned models, and, deployed on already fine-tuned models, can further improve their performance.
* Through comparative analyses, our method, when applied to well-established models, demonstrates improved performance compared with the best results independently derived from probing single layers, along with better efficiency in training and inference, and intrinsic interpretability.

§ PRELIMINARIES

This section lays the groundwork for our exploration of transformer-based LLMs by delving into their internal mechanisms and representations. We establish here the essential nomenclature and terminology to facilitate a clear and consistent understanding of the concepts and methodologies discussed throughout this study. The foundational structures of transformer-based models, including encoder-only variants like BERT and decoder-only variants like GPT, always comprise three principal components: (1) an embedding layer at the base focusing on translating tokens into dense vectors within a high-dimensional hidden space, (2) L identical encoder or decoder blocks, each of which contains an H-head (self-)attention mechanism and a feed-forward network (FFN), jointly applying nonlinear transformations on intermediate representations of the D-dimensional hidden space, and (3) a head layer on top of the last-layer hidden states that reshapes and refines representation outputs to generate the desired final output, for instance a language modeling head (LM head) in GPT for next-token output, or a sequence classification head for label output. An illustration is provided in <ref>.

Encoder and Decoder Mechanisms Raw texts tailored for specific tasks serve as inputs to the pretrained tokenizers before entering the language model. After going through model-specific word-token embedding and word-position embedding mechanisms, a tokenized input sequence of length N becomes the first hidden state X_0 ∈ ℝ^N×D that works as the input to the block at layer 0.
Afterward, the inner mechanism of each transformer block at layer l∈{0,⋯,L-1} can be formulated as follows:
Q_l,h = X_l W_l,h^Q, K_l,h = X_l W_l,h^K, V_l,h = X_l W_l,h^V,
Attn_l(X_l) = ⊕_h=0^H-1 Attn_l,h(X_l) = ⊕_h=0^H-1 softmax(Q_l,h K_l,h^⊤) V_l,h,
H_l = X_l + Attn_l(X_l),
FFN_l(H_l) = f_act(H_l W_l,1 + b_l,1) W_l,2 + b_l,2,
X_l+1 = H_l + FFN_l(H_l),
where the matrices with h∈{0,⋯,H-1} represent the different attention heads, Attn_l(X_l) is formed by concatenating the (self-)attention outputs of all heads Attn_l,h(X_l), and H_l is the attention output with residual added, which serves as input for the FFN, whose output with residual added subsequently becomes the next hidden state X_l+1. The scaling factors, layer norms, and dropouts are omitted here for simplicity.

Internal Representations For each layer, we focus on the hidden state and the activation records of FFN_l:
X_l ∈ ℝ^N×D, l=0,⋯,L,
A_l = f_act(H_l W_l,1 + b_l,1) ∈ ℝ^N×A, l=0,⋯,L-1,
in which A=kD is by convention a multiple of D representing the number of FFN neurons, i.e., the dimension of the hidden space within the FFN.

The Head Layer Conventionally, the hidden states of each neuron in the last layer L are condensed token-wise across the entire input sequence with variable sequence length N into a feature space with a static size of 1, by f_single picking the result at the position of a certain single token, i.e. the [CLS] token at the beginning of each sentence for BERT variants and the last token of each sentence for GPT variants, as:
g(X_L, f_single:ℝ^N↦ℝ) : ℝ^N×D ↦ ℝ^D.
This g with condensed sentence information is subsequently passed either to typically a linear layer followed by a softmax activation to output the probability distribution of the predicted next token as the LM head, or to one or more linear layers followed by a softmax or sigmoid activation to output the label of the text as the classification head. In practice, the models always run on token sequence inputs pre-processed as batches X_0 ∈ ℝ^B×N×D along with masks for attention M∈{0,1}^B×N, in which B is the batch size and N the length of the longest token sequence within the batch. In this case, the pooling function should also incorporate the mask to calculate the f_single results at the correct positions within the range of the sentences.

Application Paradigms Though of similar architectural design, the distinct expertise shaped by the discrepancies in training objectives – BERT being tailored for bidirectional language understanding and GPT for auto-regressive language modeling and generation – has long posed limitations in LLM downstream application scenarios. In Figure <ref> we concisely demonstrate the three mainstream application paradigms using pretrained models as solutions for text classification.

§ METHODOLOGY

We hereby propose our Sparsify-then-Classify method that leverages LLM internal neural representations for text classification tasks, as illustrated in Figure <ref>, following a procedure of
* Extracting hidden states and activations using multiple pooling strategies (see the sketch after this list);
* Layer-wise linear probing for attributing task-specific capabilities onto features;
* Layer-wise sparsification capturing and filtering the different importance of features;
* Cross-layer aggregation leveraging sparsified task-specific features for text classification.
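As a concrete illustration of the pooling strategies of the first step, a minimal PyTorch sketch on batched inputs (the 0/1 mask handling and the last-valid-token convention shown for f_single are implementation assumptions; BERT-style models would instead take position 0, i.e. the [CLS] token):

import torch

def pool(states: torch.Tensor, mask: torch.Tensor, method: str) -> torch.Tensor:
    # states: (B, N, D) hidden states or (B, N, A) activations,
    # mask:   (B, N) attention mask with 1 on valid tokens.
    m = mask.unsqueeze(-1).to(states.dtype)
    if method == "avg":                    # f_avg over valid tokens
        return (states * m).sum(1) / m.sum(1).clamp(min=1)
    if method == "max":                    # f_max over valid tokens
        return states.masked_fill(m == 0, float("-inf")).max(1).values
    if method == "single":                 # f_single: last valid token (GPT)
        idx = mask.sum(1).long() - 1
        return states[torch.arange(states.size(0)), idx]
    raise ValueError(method)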
§.§ Internal Representation Extraction

In Section <ref> we described how the last layer of hidden state records is condensed token-wise by the pooling function extracting the result at the position of a certain single token. This choice is based purely on the way the model is pretrained on the last layer of output, and it ignores the large amount of potentially useful information inside the sentence and the model. Previous studies <cit.> have shown that the activations among tokens within the sentence input also hold meaningful position-specific patterns that vary across different layers of the model, intuitively suggesting more expressive representations to be utilized. Hence in this study, we investigate the FFN activations and hidden states for all layers,
g(X_l, f_single:ℝ^N↦ℝ) : ℝ^N×D ↦ ℝ^D, l=0,⋯,L,
g(A_l, f_single:ℝ^N↦ℝ) : ℝ^N×A ↦ ℝ^A, l=0,⋯,L-1,
along with two more pooling methods, average pooling and max pooling, to additionally express more detailed information about how specific neurons are activated across tokens within a sentence. By using f ∈{f_avg, f_max, f_single} we map the hidden states and activations of sentences with variable lengths into a feature space with a static size of 3. The pooled hidden states {g(X, f_avg), g(X, f_max), g(X, f_single)} and activations {g(A, f_avg), g(A, f_max), g(A, f_single)} are calculated in batches and stored.

§.§ Linear Probing

Building on the existing understanding in the LLM interpretability literature, it is important to note the linear representation hypothesis, which posits that features within neural networks are represented linearly <cit.>. By delving deeper into these linear dimensions, linear probing <cit.> can help unravel the intricate functionalities of LLM internal representations. Therefore our investigation focuses on the technique of linear probing on each layer independently within LLM internal representations, which is widely adopted for interpretability and for evaluating the quality of features in LLMs <cit.>. Based on the pooled results of hidden states and FFN activations, we train logistic regressors (LR) with ℒ^1-regularization (also known as Lasso regularization) as the linear classifier probes, to predict the category that the text belongs to. Each layer of the LLMs is examined independently to understand the behavior and contribution of neurons at different depths. For LR probes, the objective is to predict the target Y∈ℝ^B from pooled hidden states g(X_l,f) ∈ℝ^B×D or activations g(A_l,f) ∈ℝ^B×(kD) for all B sentences at a given layer l of the model. The optimization problem for fitting LR probes with ℒ^1-regularization is formulated as:
W_X,l, b_X,l = argmin_W,b ||Y-σ(g(X_l,f)W+b)||_1 + λ||W||_1, l=0,⋯,L,
W_A,l, b_A,l = argmin_W,b ||Y-σ(g(A_l,f)W+b)||_1 + λ||W||_1, l=0,⋯,L-1,
where σ(·) denotes the logistic sigmoid function, and λ≥0 is the regularization parameter that controls the trade-off between the fidelity of the model to the training data and the complexity of the model's weights. The term ||W||_1 represents the ℒ^1-norm of the weight vector, which is the sum of the absolute values of the weights. The regularization term, controlled by the hyper-parameter λ, encourages sparsity in the weight vector, effectively performing feature selection by driving certain weights to zero. The regression yields logistic predictors Y_X=σ(g(X_l,f)W_X,l+b_X,l) and Y_A=σ(g(A_l,f)W_A,l+b_A,l) for each layer, used for inference.
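A per-layer probe can be fit with standard tools; a minimal scikit-learn sketch on synthetic stand-in features (note that scikit-learn minimizes the ℓ1-penalized logistic loss rather than the ℒ^1 data-fit term written above, which serves as a close practical surrogate; C = 1/λ):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pooled = rng.normal(size=(256, 768))   # stand-in for g(X_l, f), shape B x D
y = rng.integers(0, 2, size=256)         # binary labels

probe = LogisticRegression(penalty="l1", solver="liblinear", C=1.0, max_iter=1000)
probe.fit(X_pooled, y)
W = probe.coef_.ravel()                  # sparse weight vector of the layer-l probe
print((W != 0).sum(), "non-zero features")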
The regression yields a logistic predictor Y_X=σ(g(X_l,f)W_X,l+b_X,l) and Y_A=σ(g(A_l,f)W_A,l+b_A,l) of each layer for inference.§.§ Layer-wise Sparsification One aspect of the ℒ^1-regularized LR probes is that the learned weights W can be interpreted as indicators of the importance of neurons at that layer for the specific task <cit.>. Weights with higher magnitudes suggest that the corresponding neurons play a crucial role in making task-specific predictions. When a weight is close to zero, it indicates that the corresponding contribution of the neuron is minute for the task. In essence, LR probes provide a mechanism to identify task-specific salient neurons within the internal representations of LLMs. By the sparsity of ℒ^1-regularized LR weights, where weights have been inherently pushed towards zero with less important features often having weights exactly at zero, we can derive an intuitive criterion for neuron selection. Here we consider the trained LR probes as sparsified salient feature indicators by investigating the magnitude of weights they learned. Given logistic predictor Y_X=σ(g(X_l,f)W_X,l+b_X,l) and Y_A=σ(g(A_l,f)W_A,l+b_A,l) for each layer l=0,⋯, L_temp as the LR probes we trained independently based on each layer's hidden states and activations by running the LLM up to layer L_temp, we focus on the logistic regression weights W_X,l∈ℝ^D and W_A,l∈ℝ^(kD). In this context a measure that emphasizes larger weights is advantageous, as it aligns with the underlying assumption that a higher logistic regression weight correlates with stronger informativeness and a more significant contribution the corresponding input feature makes to the model's predictive capacity. Consequently, ranking neurons according to their absolute magnitude of weights in trained linear probes yields good feature selection results in LLMs <cit.>. At the same time, selecting a large number of features, especially those that are not strongly predictive of the outcome, will dilute the effect of the predictive features and make it harder to identify the signal amidst the noise <cit.>. To encourage better layer-wise sparsification, we introduce the squared ℒ^2-norm from the perspective of signal processing to being employed as a measure of the energy exerted by the LR weights. Mathematically, the squared ℒ^2-norm of a weight vector ||W||_2^2 as the sum of the squares of its components, effectively amplifies the impact of larger weights, thereby providing a unified scale that accentuates the more influential features in terms of their contribution to the model's predictions <cit.>. On this foundation, a threshold η is established on the squared ℒ^2-norm ||W_X,l||_2^2 and ||W_A,l||_2^2, which is calculated by the sum of squared weights corresponding all input features within that layer, serving as a measure of the cumulative importance of different features' contributions to the layer's LR probe. By sorting all squared feature weights in descending order and calculating the cumulative sum across features, we can assess the collective importance of any given set of individual features. when the cumulative sum for a set of neurons S_η(X_l) or S_η(A_l) exactly exceeds η ||W_X,l||_2^2 for hidden states or η ||W_A,l||_2^2 for activation, we designate the given set of neurons as salient. 
This selection process, similar to weight pruning techniques in neural networks <cit.> that adopt backward elimination of variables, can effectively filter out neurons with minimal impact according to the learned logistic regression weights, thereby emphasizing the features most relevant to the probe's decision-making.

§.§ Cross-layer Aggregation

Current probing techniques predominantly focus on layer-specific analysis that falls short in detecting features distributed across multiple layers of the model <cit.>. A straightforward concatenation of all layer inputs is not a viable solution; such an approach suffers from the sparse nature of LLM internal representations, rendering the training process ineffective. A common understanding in the field of interpretability, particularly in studies on generative models, suggests a potential solution. In LLMs it is often observed that the lower layers tend to focus on encoding elementary features of the input text like syntax, grammar, and basic semantics, foundational information which higher layers utilize for more abstract and complex aspects of understanding broader concepts, generating coherent responses, applying more sophisticated operations like deduction and creativity, etc. <cit.> To address this, we introduce a simple yet highly potent way to aggregate the salient neurons dispersed across various layers, using the previously attributed layer-specific importance of hidden state and activation features. The neurons passing the η criterion are now leveraged to construct a refined feature vector record for each instance, aggregated across the collected layers to highlight the features learned at different parts of the LLM. The rationale behind this aggregation is to harness the diverse yet complementary information captured by different layers of the LLM, thereby enhancing the overall predictive capability on specific text classification tasks. This aggregation forms a composite representation X_Agg,L_temp and A_Agg,L_temp, by concatenating the activations and hidden states of the selected salient neurons from the previous layer-wise sparsification process:
X_Agg,L_temp = ⊕_l=0^L_temp ⊕_i∈S_η(X_l) X_l,i, L_temp=0,⋯,L,
A_Agg,L_temp = ⊕_l=0^L_temp ⊕_j∈S_η(A_l) A_l,j, L_temp=0,⋯,L-1,
where ⊕ represents concatenation, S_η(X_l) and S_η(A_l) are the sets of indices of salient neurons in hidden states and activations, respectively, and X_l,i and A_l,j are the hidden states and activations at the i-th and j-th salient feature in layer l of the model. Finally, we utilize these aggregated feature vectors to train higher-order LR classifiers for each L_temp on the pooled representations g(X_Agg,L_temp, f) and g(A_Agg,L_temp, f), which yields logistic predictors Y_Agg,X,L_temp = σ(g(X_Agg,L_temp,f) W_X,L_temp + b_X,L_temp) and Y_Agg,A,L_temp = σ(g(A_Agg,L_temp,f) W_A,L_temp + b_A,L_temp) for each L_temp, as our text classifiers.
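A minimal sketch of the aggregation and the cross-layer classifier on synthetic stand-ins (array shapes and the fixed salient index sets are illustrative assumptions):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
B, L, D = 256, 4, 32                                # toy batch, layers, width
layer_feats = [rng.normal(size=(B, D)) for _ in range(L)]
layer_salient = [np.arange(3) for _ in range(L)]    # stand-in for S_eta(X_l)
y = rng.integers(0, 2, size=B)

# Concatenate the salient columns of every layer (the aggregation step).
X_agg = np.concatenate([F[:, s] for F, s in zip(layer_feats, layer_salient)], axis=1)

# Higher-order LR classifier on the aggregated salient features.
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X_agg, y)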
To summarize, we present the overall algorithm design of STC within <ref>, <ref> and <ref> as an end-to-end solution for leveraging the internal representations of LLMs for text classification tasks, as illustrated in Algorithm <ref>:
W_X,L_temp, b_X,L_temp = argmin_W,b ||Y-σ(g(X_Agg,L_temp,f)W+b)||_1 + λ||W||_1, L_temp=0,⋯,L,
W_A,L_temp, b_A,L_temp = argmin_W,b ||Y-σ(g(A_Agg,L_temp,f)W+b)||_1 + λ||W||_1, L_temp=0,⋯,L-1.

§ EXPERIMENTAL SETUP AND RESULTS

§.§ Data Ingredients Preparation

To evaluate the performance of STC leveraging the internal representations of LLMs as text classifiers, we collect 3 relevant datasets and study the hidden states and FFN activation patterns of multiple well-established transformer-based language models: RoBERTa, DistilBERT, and the GPT2 family, including GPT2, GPT2-M, GPT2-L and GPT2-XL.

Dataset Collection We select the following 3 datasets, with details summarized in Table <ref>:
* IMDb: The IMDb dataset <cit.> is one of the most popular sentiment classification datasets, curated for the binary classification task of positive and negative movie reviews.
* SST-2: The SST-2 dataset for sentiment analysis, part of the General Language Understanding Evaluation (GLUE) benchmark <cit.>, provides a binary classification task based on the Stanford Sentiment Treebank. The true test set labels of datasets within the GLUE benchmark are not publicly accessible, and conventionally the fine-tuned models report performance on the validation set, hence we adopt the same approach by using the original validation set as the test set.
* EDOS (SemEval-2023 Task 10): <cit.> collects a dataset facilitating exploratory experiments on the Explainable Detection of Online Sexism (EDOS). The dataset contributes a hierarchical taxonomy of sexist content, from which we select Task A for our experiments, where systems are expected to predict whether a post is sexist or not.
In practice, if a dataset does not provide a validation set, we randomly split off 20% of the training set as validation and record the random seed to ensure reproducible results.

Inner Representation Acquisition We replicated and validated the forward pass of the DistilBERT[For DistilBERT, L=6, D=768, k=4, H=12, max(N)=512, f_act=GELU.], RoBERTa base[For RoBERTa, L=12, D=768, k=4, H=12, max(N)=512, f_act=GELU.], GPT2 base[For GPT2, L=12, D=768, k=4, H=12, max(N)=1024, f_act=GELU_new.], GPT2-M[For GPT2-M, L=24, D=1024, k=4, H=16, max(N)=1024, f_act=GELU_new.], GPT2-L[For GPT2-L, L=36, D=1280, k=4, H=20, max(N)=1024, f_act=GELU_new.], and GPT2-XL[For GPT2-XL, L=48, D=1600, k=4, H=25, max(N)=1024, f_act=GELU_new.] models to access their internal representations, integrating with both pretrained and fine-tuned weights sourced from HuggingFace[<https://huggingface.co/>]. Within each transformer block of the models, the activations of the Feed-Forward Neural Network, which comprises A=kD neurons per transformer block, along with the block hidden states comprising D neurons in between transformer blocks, are collected, pooled using f ∈{f_avg, f_max, f_single}, and stored.

§.§ Baseline Performance

Sentence Embedding For BERT and its variants, the models take the output of the [CLS] token in the final hidden states for downstream tasks <cit.>. For the GPT2 family, the models take the final hidden state output of the last valid token of each input sequence to perform downstream tasks.
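For reference, one way to collect such internal representations without re-implementing the forward pass is via forward hooks in the transformers library; a minimal sketch for GPT2 (module paths follow the current HuggingFace implementation and may differ across versions):

import torch
from transformers import GPT2Tokenizer, GPT2Model

tok = GPT2Tokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token            # GPT2 has no pad token by default
model = GPT2Model.from_pretrained("gpt2").eval()

acts = {}
def make_hook(layer):
    def hook(module, inp, out):
        acts[layer] = out.detach()       # (B, N, kD) FFN activations A_l
    return hook

# GPT2Model exposes its blocks as model.h[l]; mlp.act is the GELU module
# whose output corresponds to A_l in our notation.
handles = [blk.mlp.act.register_forward_hook(make_hook(l))
           for l, blk in enumerate(model.h)]

batch = tok(["an example sentence"], return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**batch, output_hidden_states=True)
hidden_states = out.hidden_states       # tuple of L+1 tensors X_0, ..., X_L
for h in handles:
    h.remove()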
§.§ Baseline Performance Sentence Embedding For BERT and its variants, the models take the output of the [CLS] token in the final hidden states for downstream tasks <cit.>. For the GPT2 family, the models take the final hidden state of the last valid token of each input sequence to perform downstream tasks. Thus, we collect the output of the first ([CLS]) token in the last-layer hidden states for BERT and its variants, and the output of the last valid token of each sentence in the last-layer hidden states for GPT2 and its variants. A fully connected layer with softmax is then trained atop with Y=softmax(g(X_L,f_single)W). The results are computed in our experiments and presented in the Embedding column of Table <ref> as the frozen-LLM baseline performance. Model Fine-tuning For all models we found that were fine-tuned on the studied datasets and uploaded to HuggingFace, we test their baseline performance using the same classification head structure as the sentence embedding approach, and compare the results with those reported on the corresponding model web pages. Several notable findings comparing our experimental results to the reported ones are discussed in <ref>. The results are reported in the Fine-tuning column of Table <ref> as the baseline performance of fine-tuned LLMs, in which the entries for several models are skipped as no publicly available fine-tuned models exist. In-context Learning The in-context learning (ICL) approach, which has become particularly popular with the recent rise of generative LLMs, involves few-shot learning, i.e., leveraging a limited number of examples within prompts as prefixes and suffixes of input texts to guide the model to generate textual output in the desired way. This technique is included here as an estimate of the effectiveness of context-based learning without extensive training or fine-tuning, showing the adaptability of pretrained LLMs when a few reference examples are supplied within the input. The results are shown in the ICL avg. column of Table <ref>, in which the entries for BERT variants are skipped as this paradigm is not supported by their encoder-only architectures, which lack LM heads. §.§ Sparsify then Classify Layer-wise Probing The classification performance of layer-wise linear probes is regarded as an important indicator of the knowledge acquired in individual layers. To maintain a non-overfitting evaluation of the best per-layer performance, we treat both the pooling function and the layer l as hyper-parameters. Meanwhile, as the best number of iterations for training the probes cannot be determined in advance, we also treat it as a hyper-parameter with a sufficient maximum limit. Consequently, the test performance we report is derived from applying the best combination of hyper-parameters, as determined on the validation set, ensuring that our reported results are not overfitted. Our findings are presented in the Probes column of Table <ref>, demonstrating that the best individual-layer performance of both frozen and fine-tuned models can be close to, and often even well above, the results obtained with their respective classification heads. In addition, to corroborate the linear representation hypothesis on our selected models and datasets, we also examine the more expressive nonlinear single-layer perceptron (SLP) probes as a comparison group to the LR probes. We report the details and results of SLP in <ref>. Feature Sparsification In the first stage, at each layer of the LLM, a linear probe with ℒ^1-regularization is applied to the activations and hidden states. This process is aimed at identifying the most salient neurons in the layer, i.e., those most predictive of the output categories. The LR model is trained to minimize the sum of the ℒ^1-norm of the weights and the error in predicting the output categories, which facilitates the selection of the most relevant features while penalizing less important ones. Once the layer-wise probes are fitted, the weights of the linear models, W, are sorted in descending order, after which the neurons most indicative of the specific task are selected based on their contribution to the model's predictive power. This selection is achieved by accumulating the squared weights of the neurons until a predefined threshold, η, of the total squared weight magnitude is reached. The neurons that meet this criterion are marked as salient for the corresponding layer and prepared for further aggregation, as sketched below.
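A minimal sketch of this η-threshold selection for one layer, assuming a fitted ℒ^1-regularized probe whose weight vector is available (binary case shown):

import numpy as np

def select_salient(weights, eta):
    """Return indices of neurons whose cumulative squared weight reaches eta.

    weights: 1-D array of probe weights for one layer (shape: n_neurons,)
    eta:     fraction of the total squared weight magnitude to retain, e.g. 0.5
    """
    w2 = weights ** 2
    order = np.argsort(w2)[::-1]            # neurons sorted by importance
    cum = np.cumsum(w2[order]) / w2.sum()   # cumulative share of squared weight
    n_keep = int(np.searchsorted(cum, eta)) + 1
    return np.sort(order[:n_keep])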
In practice, we vary η from 0.001 to 0.5, corresponding to a wide range of sparsification levels, with the number of neurons selected per layer ranging from 1 to 5%-10% of the total. Our experimental results confirm the findings of <cit.> and <cit.> that a small number of the most informative neurons can achieve performance comparable to the model classification head; yet within a single layer and without further aggregation, we have not found evidence that using only the sparsified neurons yields better performance than the layer-wise probes.[Intuitively, as η increases and more relevant neurons are selected, the classification performance of the selected neurons tends to become more stable and closer to that of the layer-wise probes, provided the probes are well-trained.] Thus, to visually demonstrate how well feature sparsification actually works, we refer to Section <ref> for further details showcasing the intrinsic interpretability of STC's sparsification process. Multi-layer Aggregation Finally, the salient neurons identified across different layers of the LLM are combined to form an aggregated representation. This representation is then used to train a new LR model, which constitutes the "then-Classify" step and aligns with the aforementioned linear representation hypothesis[Note that linear models need not be optimal in every case. However, we present the linear model as a simple and intuitive approach that already suffices to attain a consistent performance boost across various cases.]. The aggregated LR models, starting from the bottom layer and extending to each subsequent layer, are then evaluated based on their performance in predicting the output categories. Similar to Section <ref>, the processes of evaluating and updating the LR models are all conducted on the validation set, ensuring that no bias is introduced. Throughout this workflow, the predetermined parameters, such as the threshold η, the choice of pooling method, and the number of layers to aggregate, are all regarded as hyper-parameters; in the Agg. column of Table <ref> we report the test performance of the classifier that performs best on the validation set. Noteworthy Results Here we take the GPT2-XL activation records with max-pooling as a representative example to demonstrate the concrete performance gains our proposed aggregation method yields. As shown in Figure <ref>, we report the performance of layer-wise LR probes and cross-layer aggregation with η=0.5 at each layer, for both frozen and fine-tuned models.
In this case, STC manages to consistently outperform the layer-wise results starting from the second layer (for the first layer only the features within the η=0.5 cumulative importance threshold are utilized, leading to a slight decrement in performance). Specifically, over the first half of the model layers the aggregated method yields a smoother rise in the curve above the layer-wise probes, and at some point the lift between the two even approaches 4%. After the layer-wise curve reaches its peak performance and starts to retrace, the aggregation method maintains its performance until the final layer of the model, and even manages to achieve slight further improvements from the latter part of the layers. For a given LLM, we train and evaluate the classifier Y on aggregated features for each combination of pooling function f, number of aggregated layers l and aggregation threshold η. We refer to Figure <ref> and Figure <ref> as what-which-where plots, as they visualize the performance along with what aggregation threshold is used, which pooling choice is selected, and where in depth the layer aggregation takes place in the LLM. Respectively, the pooling choice is indicated with different marker shapes, the magnitude of η is reflected by the marker sizes, and the number of aggregated layers is shown with different colors in the color bar. In addition, we present how the number of features evolves for every combination in the lower subfigures of each plot, further demonstrating the efficacy of STC. For a comprehensive list of what-which-where plot results for every specific case, please refer to <ref>.§ DISCUSSION§.§ Performance Our experimental evaluations comparing the performance of different LLM paradigms along with our approaches yield promising results. Notably, we observe a consistent improvement from integrating our method into well-established approaches, namely comparing the sentence embedding baseline with frozen LLM + STC, and the model fine-tuning baseline with fine-tuned LLM + STC in Table <ref>.§.§.§ Frozen LLMs The prevalent practice of sentence embedding in LLM applications employs frozen models, whose pretrained weights are not modified; a task-specific sequence classification head is appended on top of the model to adapt the pretrained representations to particular tasks. Our experiments suggest that this paradigm exhibits relatively poor performance, as expected, and may not fully exploit the model's capabilities. The limitation primarily lies in the head relying solely on the representation of a single token from the last layer, which in pretrained models is originally designed and trained for general tasks like token prediction rather than specialized ones. The nuanced, layer-specific representations remain largely untapped, yielding suboptimal task performance. In contrast, our methodology adopts a more granular approach by delving into the multi-layered architecture of transformer-based LLMs, taking an anatomical perspective and exploring the internal representations hidden beneath the final-layer outputs. With a detailed examination of salient neurons at each layer and a layer aggregation strategy capitalizing on the diverse and rich internal features encapsulated across the model's depth, we manage to harness these previously neglected intermediate representations.
The outcome is a clear elevation in performance: compared with the frozen models' sentence embedding baseline, we obtain a 2.63% (layer-wise) and 3.37% (aggregation) accuracy improvement on average for the IMDb sentiment classification task, demonstrating the effectiveness of this in-depth utilization of LLMs' inherent capabilities.§.§.§ Fine-tuned LLMs Another standard practice undergoes a phase of retraining or fine-tuning massive numbers of pretrained model weights to adapt the model to specific tasks. This process, albeit effective to a certain extent, often demands a significant investment of computational resources and time. Intriguingly, our experiments reveal that the proposed method can consistently improve performance further when deployed on already fine-tuned models, with 1.41% (layer-wise) and 1.75% (aggregation) accuracy improvements on average for the IMDb sentiment classification task. This verifies, from another perspective, our earlier observation about the confined informativeness of conventional classification heads that use only a single token of the final-layer output, and the robustness of our approach in rediscovering the capability of inner representations to capture essential features. It is also worth mentioning that, in reproducing the reported performance of fine-tuned models, we observed that by freezing the fine-tuned model weights and retraining only the classification head, we could sometimes achieve higher performance than the reported baseline. For example, given reported accuracies of 92.80%, 94.67%, and 94.06% for fine-tuned DistilBERT, RoBERTa, and GPT2 base models, the corresponding performance of our retrained classification heads is 92.68%, 95.64%, and 95.89%. These originally imperfect fine-tuned classification heads can be attributed to an intrinsic weakness of the current fine-tuning strategy, in which the weights of the language model and the classification head on top are jointly evaluated and backward-propagated together. Such an approach may suffer from the difference in scale between the LLM and the classification head, yielding an inconsistency in the effectiveness of fine-tuning on each part. This becomes more pronounced for larger models, where the pre-trained model and the classification head often require different learning rates for optimal training: the extensively pre-trained layers usually need only small updates, while the classification head, being initialized from scratch, may benefit from larger updates. The discrepancy in size and complexity between the large pre-trained layers and the relatively small classification head can lead to inefficiencies in joint training. By decoupling the two, our retrained classification heads deliver better-optimized performance compared with the original ones.§.§.§ Comparison Comparing experiments on frozen pretrained models with those on fine-tuned models, we notice that our approach yields better performance on fine-tuned models than on frozen ones for each model architecture. This not only fits the logic that fine-tuning improves the output of the classification head, as it was originally designed and fine-tuned for, but also suggests that fine-tuning enhances the salience of the model's inner representations in capturing task-specific features, which our method subsequently leverages.
Specifically, a pattern emerges: for larger models, the performance improvement that fine-tuning brings over frozen models becomes smaller. For example, for the smallest model, DistilBERT, the accuracy of model fine-tuning is 5.85% higher than that of the sentence embedding approach, while for GPT2-XL this gap drops to 1.25%. The same can be observed in the results of models integrated with our method: the accuracy of fine-tuned + STC is 6.40% higher than frozen + STC for DistilBERT, but only 0.07% higher for GPT2-XL. This suggests a relationship between the size of LLMs and their amenability to fine-tuning: the larger the LLM, the less improvement fine-tuning brings. This can be attributed to larger pretrained models already holding greater capacity for learning and generalization. Moreover, the increase in size makes models harder to fine-tune effectively, as with their vast number of parameters, larger models can be more prone to overfitting when fine-tuned on small datasets.§.§ Efficiency Efficiency during training and inference, in terms of both computational resources and environmental impact, has become increasingly relevant in discussions about recent large-scale models and is a paramount consideration in adapting LLMs to specialized tasks. The time, storage, money and energy required can pose significant hurdles, particularly in scenarios with enormous data volumes for training and deployment. In this investigation, we conduct a quantitative analysis of trainable parameters, floating-point operation costs and performance results to assess the merits of our proposed approach in mitigating these challenges, comparing our method against the currently fashionable fine-tuning strategy as the baseline. For the number of trainable parameters of the language models, we report the total number of trainable parameters within the LLM (excluding the embedding layers at the bottom), N_param, as provided by HuggingFace. For STC we estimate the number of parameters as

N_param LR ≈ L(1+k)D
N_param STC ≈ (1+ρ_η· N_f ·(L+1)/2) N_param LR

where L is the number of layers within the LLM, D the dimension of the LLM hidden states, k the ratio of the FFN activation dimension to the hidden state dimension, ρ_η the ratio of features selected with parameter η during aggregation, which we take as 0.1 for estimation, and N_f the number of pooling methods, which is 3 in our experiments. For the total floating-point operations needed by the conventional practice of model fine-tuning and by our STC approach, we follow the empirical estimates given by <cit.>:

C_LLM train ≈ 6N_param BS
C_LLM forward ≈ (2N_param+2LND)B

in which B represents the batch size, S the number of training steps, and N the length of the input sequence in tokens. For the cost of the layer-wise LR probes and cross-layer aggregation, we can similarly derive

C_LR train ≈ 2L(1+k)DBS+LBS
C_LR forward ≈ (1+k)DB+B

where k is the ratio of the activation dimension to the hidden state dimension, and

C_Frozen + STC train ≈ C_LLM forward S + ρ_η·(L+1)/2 C_LR train
C_Frozen + STC forward ≈ L_temp/L C_LLM forward + C_LR forward ≈ L_temp/L C_LLM forward

All training costs here are measured in terms of BS, the total number of input sequences processed over all training epochs on the entire training set, and the forward cost is measured per batch of input sequences.
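This cost model can be transcribed directly into a few helper functions, as below; the GPT2-XL non-embedding parameter count in the example is an approximate, illustrative value.

def c_llm_train(n_param, B, S):
    return 6 * n_param * B * S

def c_llm_forward(n_param, L, N, D, B):
    return (2 * n_param + 2 * L * N * D) * B

def c_lr_train(L, k, D, B, S):
    return 2 * L * (1 + k) * D * B * S + L * B * S

def c_frozen_stc_train(n_param, L, k, D, N, B, S, rho=0.1):
    return (c_llm_forward(n_param, L, N, D, B) * S
            + rho * (L + 1) / 2 * c_lr_train(L, k, D, B, S))

# Example with GPT2-XL-like settings (n_param is illustrative); the ratio
# reproduces the ~2.8x single-epoch training speedup quoted below.
L, D, k, N, n_param = 48, 1600, 4, 1024, 1_475_000_000
speedup = c_llm_train(n_param, 1, 1) / c_frozen_stc_train(n_param, L, k, D, N, 1, 1)
print(f"estimated training speedup over one-epoch fine-tuning: {speedup:.1f}x")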
§.§.§ Training Phase Our proposed STC method demonstrates a significant reduction in trainable parameters across all models, as shown in Table <ref>. For instance, for the largest model we examined, GPT2-XL, STC reduces the trainable parameters to just 0.25% of the amount updated in the standard model fine-tuning baseline. This substantial decrease in complexity not only lessens memory demands, particularly when the original pretrained weights are retained, but also accelerates training, especially considering that the fine-tuning approach always updates the whole model uniformly, and that parallelizing the forward pass and backward gradient computation is not always feasible. Our approach makes the training process more accessible and feasible, especially in environments with limited hardware resources. In terms of computational load, as measured by the floating-point operation costs in training shown in Table <ref>, our STC method consistently shows a reduction across all models. Remarkably, the estimated speedups range from approximately 2.8 times that of traditional fine-tuning using all IMDb training data in a single epoch, to an impressive 5.7 times when fine-tuning extends over two epochs, highlighting STC's significant efficiency advantage. Intriguingly, these benefits of STC become more pronounced as model scale grows within the same architecture, which is particularly evident when examining results within the GPT2 family. Such a reduction in computational load directly translates to decreased energy consumption and reduced GPU/CPU usage time. Consequently, this leads to enhanced cost-effectiveness, a critical aspect in promoting sustainable AI practices and facilitating large-scale AI deployments. §.§.§ Inference Phase Efficiency during inference is crucial for real-time applications and deployed services that leverage LLMs. The storage needed for model weights and the speed at which a model can provide accurate predictions directly impact the usability and effectiveness of the model in practical scenarios. Layer Aggregation As illustrated in Figure <ref>, STC performance improves mostly within the lower half of the LLM layers, with only marginal improvements in the remainder of the model. This qualitative observation agrees with our original motivation for layer aggregation in <ref>: the lower layers of LLMs are already sufficiently salient for capturing task-specific features of input texts. Here we further compare the baseline performance of conventional classification heads with our STC method aggregating different proportions of the lower model layers, to quantitatively substantiate the possibility of running only part of the LLM forward pass during inference. From the results in Table <ref>, we observe that in most cases, utilizing the first 60% of the layers is sufficient for STC to outperform the corresponding baselines that employ full model forward passes. Relative to the results with 20% of the layers, the incremental improvements from including each additional 20% of layers are on average 3.69%, 3.35%, 0.94%, and 0.23%, a clear pattern highlighting the efficiency-performance trade-off and underscoring the effectiveness of our layer aggregation approach. After training, and once an optimal L_temp balancing performance and efficiency has been determined, our method can operate using only the initial segment of the LLM, reducing both the storage needed for model weights and the computational time required for inference, as sketched below. This makes STC a highly pragmatic solution, particularly in efficiency-critical real-world scenarios that are more speed- and resource-focused and less sensitive to performance metrics, such as handling large volumes of text or API deployments facing high concurrency demands.
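A sketch of such a truncated forward pass for GPT-2 follows, relying on the model's public wte/wpe/h submodules; this is a simplification (dropout and explicit attention-mask handling are omitted), not the exact deployment code.

import torch

@torch.no_grad()
def truncated_hidden_states(model, input_ids, L_temp):
    """Run only the first L_temp transformer blocks of a GPT2Model."""
    pos = torch.arange(input_ids.size(1), device=input_ids.device)
    h = model.wte(input_ids) + model.wpe(pos)   # token + position embeddings
    states = [h]
    for block in model.h[:L_temp]:              # only the lower L_temp blocks
        h = block(h)[0]                         # GPT2Block returns a tuple
        states.append(h)
    return states                               # hidden states of layers 0..L_temp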
Multi-task Learning Our approach makes no modification to the model weights and is hence inherently suitable for multi-task learning workflows. This stands in stark contrast to the model fine-tuning approach, which necessitates training and independently storing task-specific model weights for each individual task. Once the most compute-intensive step, acquiring the inner representations of the input texts, is complete, multiple salient-feature sparsification and layer aggregation processes can not only run their training phases in parallel, each focusing on the features relevant to its specific task, but can also produce inference results simultaneously while maintaining the same task-specific performance as if they were operating independently. Therefore, when multiple tasks are present, our STC approach shows a significant advantage over model fine-tuning, achieving several-fold savings in model weight storage (trainable parameters, Table <ref>), in training floating-point operation costs (Table <ref>) and in inference costs (Table <ref>).§.§ Intrinsic Interpretability The interpretability of machine learning models, especially LLMs, which have often been criticized as "black box" models, is crucial for researchers to understand how these models make decisions, and is also essential for transparency-demanding real-world applications. In this section, we briefly illustrate and discuss the interpretability of our STC method. Sparsified Neurons vs. Aggregated Classifiers Here we illustrate how STC gains its ability through its layer-wise feature sparsification and cross-layer aggregation processes. In Figure <ref> we plot histograms of the sparsified neuron activations in each layer, in descending order of the importance attributed by layer-wise probing, along with the aggregated STC prediction results, i.e., the sigmoid outputs of the logistic predictor Y_Agg,A,L_temp=σ(g(A_Agg,L_temp,f) W_A,L_temp+b_A,L_temp) for each L_temp. As illustrated in the upper part of Figure <ref>, the sparsified neurons at each layer collectively contribute their ability to distinguish positive from negative samples to the aggregated classifiers at that layer and above, consistently resulting in better performance of the aggregated classifiers at higher levels. This visualization helps in understanding how neurons and layers at different levels of abstraction in the model empower the overall decision-making and performance of the aggregation process for text classification tasks. As mentioned in <ref>, our layer-wise sparsification is a greedy strategy approximating a one-step backward elimination process that efficiently filters out the least promising features; obtaining an exact ranking of features would require progressive, multi-step filtering techniques <cit.>, which is less attractive in our case since another cross-layer aggregation structure is trained on top of the filtered sparsification results anyway.
When examining individual neurons in each layer, the ability of their activations to separate samples is indeed not strictly in the order of the importance attributed by the layer-wise LR probes, as the probes adopt a linear-combination perspective instead of naïvely estimating each neuron's contribution independently. In LLMs, there are often scenarios where individual neuron activations are not significantly indicative of the studied classification labels on their own, but, when combined, reveal a clearer picture; consider, for instance, contextual polarities, intensity modifiers, or sarcastic statements. Features that seem useless by themselves can provide performance improvements when taken together with others <cit.>. This property allows the retrained LR in cross-layer aggregation to further capture the nuances within the selected features, which can be verified by comparing the layer-wise and aggregated performance results in Figure <ref> as well. Generalization to Token-level Classification Extending the trained sentence-wise classifiers to provide token-wise predictions offers a transparent breakdown of how each token contributes to the sentence-level classification, allows a nuanced post-hoc examination of how STC processes and interprets each individual component (words or sub-word patterns) of the input text, and also serves as evidence of the robustness and cross-domain generalization adaptability of our method. It can be implemented as a tool in real-world applications for visualizing complex sentences or documents in which different parts carry different classification results, laying the foundation for more transparent, accountable, and user-trustworthy systems. As Figure <ref> shows, the STC prediction results correspond well with the fine-grained local sentiment expressed by individual tokens at different positions within both positive and negative sample sentences. This remarkable capability comes from incorporating the max and average pooling strategies in our approach, which enable a model initially trained on more generalized features to adeptly transition to recognizing token-specific patterns. In contrast, baseline models typically lack this nuanced capability of transferring to a token-level breakdown, since they are largely constrained by their structural dependence on the first- or last-token representations for classification. § RELATED WORK Conventional LLM Application Paradigms in Text Classification In the realm of text classification, LLMs are applied through distinct paradigms, each with unique methodologies and focal points. Traditional paradigms include the following three: the sentence embedding method, model fine-tuning, and in-context learning. The sentence embedding approach, following <cit.> and primarily utilizing encoder-focused LLMs, leverages the last layer's hidden state of a single token, focusing on generating sentence-level embeddings for text classification without modifying model weights or fine-tuning prompts, as demonstrated in seminal works by <cit.>, <cit.>, <cit.>, etc. In contrast, the model fine-tuning approach involves a comprehensive adaptation of the entire LLM, including model weights and classification heads, for task-specific adjustments, with research including <cit.>, <cit.> and <cit.> demonstrating its effectiveness.
The in-context learning (ICL) approach, exemplified by the wave of generative models following GPT-3 <cit.>, adopts a decoder-focused structure and relies on prompt-based learning without altering model weights or using a separate classification head, offering a flexible and prompt-driven methodology. Interpreting Internal Representations The quest to unravel the interpretability of a model's internal representations has long been a cornerstone of understanding the AI decision-making process. Classic methods like word2vec <cit.> initially illustrated that linearly interpretable features capturing semantic nuances can be extracted from embedding spaces. <cit.> introduced the concept of linear probes for deep convolutional networks. Subsequent research in LLMs has furthered our understanding of how they acquire task-specific knowledge, including studies by <cit.> delving into how FFN layers in transformer-based models operate as key-value memories, <cit.> exploring the concept of knowledge neurons within Transformer models, and <cit.> identifying specific LLM units as experts in certain aspects of tasks based on their activation patterns. <cit.> and <cit.> have explored the encoding of spatial, temporal, and factual data within LLMs. Recent work by OpenAI, including <cit.>, extends our understanding of LLMs by letting GPT-4 interpret the different levels of features captured by individual neurons in GPT2-XL. This body of research provides ample evidence that the internal representations of LLMs capture task-specific features. Leveraging Internal Representations as Text Classifiers Leveraging the activations and hidden states of LLMs for text classification tasks has gained traction. Studies as early as <cit.> showed the potential of utilizing individual neuron activations in LSTMs for sentiment classification. Research by <cit.> and <cit.> provided further insights into the use of hidden states for linguistic and factual representation in models like ELMo, BERT, and the Pythia family. Closely related research also includes <cit.>, who identified specific neurons within RoBERTa that are responsible for particular skills based on their activations, and empirically validated that the top-ranked task-specific neurons can be used for classification with competitive performance. Moreover, <cit.> and <cit.>, working with the LLaMA2 family, demonstrated the effectiveness of leveraging these internal representations for more nuanced and specific text classification tasks. These works highlight the adaptability and robustness of LLMs in text classification, employing innovative techniques like sparse probing to unravel the complexity of internal representations, and herald the potential of neuron-representation-based text classifiers outperforming the LM head or classification head. § CONCLUSION In this work, we introduce a novel approach, Sparsify-then-Classify (STC), for leveraging the internal representations of Large Language Models (LLMs) in text classification tasks. Our methodology diverges from traditional practices, such as sentence embedding and model fine-tuning, by focusing on the rich, yet often neglected, intermediate representations within LLMs.
We demonstrate that by utilizing multiple pooling strategies and layer-wise linear probing to identify salient neurons, and by aggregating these features across layers, we can significantly boost the performance and efficiency of LLMs as text classifiers. The experimental results across various datasets and LLM architectures, including RoBERTa, DistilBERT, and the GPT2 family, empirically highlight the effectiveness of our approach. Notably, the STC method consistently outperforms traditional sentence embedding approaches when applied to frozen models, and even improves the performance of already fine-tuned models. This enhancement is particularly evident in larger models, where our method effectively leverages the complex, multi-layered structure of LLMs. In terms of efficiency, the STC approach shows substantial reductions in the number of trainable parameters and in computational load during the training phase, along with the versatility of efficiently utilizing only the lower layers of LLMs and supporting multi-task learning strategies during inference. This efficiency gain is crucial, considering the growing concerns about the environmental and computational costs of training and deploying large-scale models. Additionally, by visualizing a breakdown of the intrinsic sparsification and aggregation processes and the possibility of token-level classification, we underscore the advanced interpretability and adaptability of STC in handling complex textual nuances. Future Work Looking ahead, there are several promising directions for further refining and extending our methodology. By exploring these avenues, we envision a comprehensive enhancement of the understanding and utilization of LLMs, making them more efficient, interpretable, and adaptable to a wide range of applications.* Advanced Layer Aggregation Techniques The current approach to layer aggregation in STC is relatively simple and intuitive. Future work could explore more advanced methods for aggregating the features sparsified across layers, potentially using learning techniques to automatically determine the most effective combination of layers and features for a given task. * Adaptation to Other NLP Tasks While this study focused on text classification, the STC method has potential applications in other NLP tasks such as regression and token-level classification. Adapting and optimizing STC for these tasks could further demonstrate its versatility. * Incorporation of Unsupervised Learning Leveraging unsupervised or semi-supervised learning techniques in the STC framework could allow it to automatically discover features and attribute them accordingly, enhancing its applicability in scenarios with limited labeled data. This could make the approach more accessible and reduce the reliance on extensively labeled datasets. * Exploration of Transfer Learning Investigating the transferability of our approach across different domains could be valuable. Understanding how well the salient features identified in one domain can be adapted to another could lead to more efficient model training and broader applicability. § ACKNOWLEDGMENTS This work was supported in part by......Appendices§ BASELINE ARCHITECTURE Figure <ref> illustrates the detailed architectures for text classification using LLMs in the three conventional mainstream paradigms, which serve as the baseline methods in our study.
§ LINEAR VS. NONLINEAR PROBES In addition to linear probes, we examine single-layer perceptrons (SLP) as nonlinear probes, with S_X=√(D) neurons in the single hidden layer when probing hidden state representations, and S_A=√(kD) neurons when probing FFN activations. This aims to decode potential non-linearities of LLM internal representations, such as polysemantic properties, by introducing a substantially more expressive model. The optimization of the SLP probes can be formulated as

W_X,l,b_X,l = arg min_W,b ||Y-σ(f_act(g(X_l,f)W_1+b_1)W_2+b_2)||_1, l=0, ⋯, L
W_A,l,b_A,l = arg min_W,b ||Y-σ(f_act(g(A_l,f)W_1+b_1)W_2+b_2)||_1, l=0, ⋯, L-1

For f_act we use ReLU as the activation function in our SLP probes. We report the test accuracy on the IMDb dataset for both the sequence classification head and layer-wise probing in Table <ref>, demonstrating that the more expressive SLP yields only a minute and inconsistent improvement across all models. These results provide decent support for the hypothesis that, for text classification tasks, the features are likewise represented linearly in large language models.
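For reference, a minimal PyTorch version of such an SLP probe is given below; the training-loop details (optimizer, epoch count, learning rate) are illustrative, while the ℒ^1 objective and the √(D) hidden width follow the formulation above.

import math
import torch
import torch.nn as nn

class SLPProbe(nn.Module):
    def __init__(self, d_in):
        super().__init__()
        d_hid = int(math.isqrt(d_in))          # S_X = sqrt(D) hidden neurons
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hid), nn.ReLU(), nn.Linear(d_hid, 1)
        )

    def forward(self, x):                       # x: (batch, d_in) pooled features
        return torch.sigmoid(self.net(x)).squeeze(-1)

def train_probe(probe, X, y, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                       # matches the L1 objective above
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(probe(X), y).backward()
        opt.step()
    return probe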
"authors": [
"Yilun Liu",
"Difan Jiao",
"Ashton Anderson"
],
"categories": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"primary_category": "cs.LG",
"published": "20231127162820",
"title": "Sparsify-then-Classify: From Internal Neurons of Large Language Models To Efficient Text Classifiers"
} |
Avalanche terahertz photon detection in a Rydberg tweezer array
Igor Lesanovsky
January 14, 2024
===============================================================

Institut für Theoretische Physik, Universität Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany; Institute for Applied Physics, University of Bonn, Wegelerstraße 8, 53115 Bonn, Germany; Physikalisches Institut, Universität Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany; School of Physics and Astronomy and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, The University of Nottingham, Nottingham, NG7 2RD, United Kingdom

We propose a protocol for the amplified detection of low-intensity terahertz radiation using Rydberg tweezer arrays. The protocol offers single-photon sensitivity together with a low dark count rate. It is split into two phases: during a sensing phase, it harnesses strong terahertz-range transitions between highly excited Rydberg states to capture individual terahertz photons. During an amplification phase, it exploits the Rydberg facilitation mechanism, which converts a single terahertz photon into a substantial signal of Rydberg excitations. We discuss a concrete realization based on realistic atomic interaction parameters, develop a comprehensive theoretical model that incorporates the motion of trapped atoms, and study the many-body dynamics using tensor network methods.

Introduction — When an atom is excited to a high-lying Rydberg state, the valence electron and the remaining positively charged core form a giant electric dipole <cit.>. Rydberg atoms are thus highly susceptible to electric fields and can find applications in a variety of sensors <cit.>, for example for detecting small dc field variations <cit.>. Another important property of Rydberg atoms is that their spectrum features strong dipole transitions across a wide range of frequencies, including the terahertz (THz) regime <cit.>. This property, together with their large electric dipole moment, permits the realization of THz sensors offering spatial and temporal resolution for the detection of classical fields <cit.>. In this work we propose and theoretically investigate a protocol that allows for the amplified detection of THz radiation at the single-THz-photon level, additionally utilizing the strong state-dependent interatomic interactions between Rydberg atoms. These interactions have already been used for enhanced metrological protocols <cit.>. Our detector is based on a Rydberg tweezer array in which ground-state atoms are laser-excited to a Rydberg state. Absorption of a THz photon triggers the transition of an excited atom to a second Rydberg state. Carefully chosen inter- and intra-state interactions then initiate a facilitation dynamics resulting in an avalanche amplification of an absorbed THz photon. We characterize the detector and discuss limitations and imperfections, e.g., resulting from the coupling of the electronic dynamics to lattice vibrations.
Beyond providing single-photon sensitivity in the THz regime, the proposed detector offers a low dark count rate and may therefore find applications as a sensing device in dark matter searches <cit.>. Atomic model — To illustrate the basic idea behind the detector we consider a one-dimensional open-boundary chain of N Rydberg atoms. Neighboring atoms are positioned at an interatomic distance a_0, as depicted in Fig. <ref>a. Such a setting, and higher-dimensional generalizations of it, can be realized with the help of optical tweezer arrays <cit.>. The Rydberg atoms are modeled as three-level systems consisting of a ground state |g⟩ and two Rydberg states |e⟩ and |r⟩. The transition frequency, ω_THz, between the two Rydberg states can be chosen across a wide frequency range, including the THz domain. Two neighboring atoms in the Rydberg state |r⟩ interact via a density-density interaction V_rr. An off-resonant laser with Rabi frequency Ω_gr and detuning Δ_gr (Ω_gr≪|Δ_gr|) couples the states |g⟩ and |r⟩. The laser detuning is set such that it cancels the interaction between two neighboring atoms in the |r⟩ state, i.e. Δ_gr + V_rr = 0. This is the so-called facilitation condition, which has been explored experimentally and theoretically in various settings <cit.>. It ensures that the excitation of an atom to the |r⟩ state is strongly enhanced when it is located next to an atom already in state |r⟩. This process is at the heart of the conversion of a THz photon into a detectable avalanche of Rydberg atoms. Dynamical THz detection protocol — The THz detection protocol is depicted schematically in Fig. <ref>b,c. It consists of four steps: preparation of the initial state, the sensing mode, the amplification mode and, finally, the measurement. In the first step, the tweezer array is loaded with atoms in their ground state. The state of the many-body system is then |Ψ_g⟩=⊗_j |g⟩_j, where the sites are labeled by the index j. Next, the sensing mode is initialized by a π-pulse to the |e⟩-state, such that the many-body wave function is |Ψ_s⟩=⊗_j |e⟩_j. For a time interval of length T_s the atoms can absorb a THz photon, triggering the transition to the second Rydberg state |r⟩. We assume that the rate of absorption, Γ_THz, is sufficiently small such that at most one photon is absorbed in the sensing time window: Γ_THz T_s≪ 1. This is no fundamental limitation, as amplification is also possible for multiple excitations. Assuming absorption of a photon at site k, the state of the system becomes |Ψ_er⟩=⊗_j≠ k|e⟩_j|r⟩_k. Note that the wavelength of THz radiation is typically much larger than the characteristic interatomic distance <cit.>, so that the absorption actually creates a collective excitation in state |r⟩. This case is considered further below. After the sensing interval, a π-pulse de-excites the atoms in state |e⟩ to the ground state |g⟩, resulting in the state |Ψ_gr⟩=⊗_j≠ k|g⟩_j|r⟩_k. Here we require the interaction between two atoms in state |e⟩, V_ee, and the dipolar exchange interaction, V_er, to be sufficiently small. Below, we discuss how these conditions can indeed be met in a realistic setting. Next is the amplification mode, which lasts for a time T_a, as shown in Fig. <ref>b,c. Here the dynamics is described by the Hamiltonian (ħ=1)

H_a = Ω_gr∑_j (rg_j + h.c.) + Δ_gr∑_j n^(r)_j + V_rr∑_j n^(r)_j n^(r)_j+1,

where the laser detuning Δ_gr is chosen to match the facilitation condition, i.e. Δ_gr+V_rr=0. Moreover, we choose the detuning to be much larger than the Rabi frequency, Ω_gr≪|Δ_gr|.
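As an illustration (not taken from the paper's numerics), H_a can be built by exact construction for a small chain, treating each atom during the amplification mode as a two-level system {|g⟩, |r⟩} and imposing the facilitation condition; the parameter values and units below are purely illustrative.

import numpy as np

N = 8
omega, V = 0.2, 12.5          # Omega_gr and V_rr on an MHz-like scale, illustrative
delta = -V                    # facilitation condition Delta_gr = -V_rr

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # |g><r| + |r><g|
nr = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto |r>
I2 = np.eye(2)

def op_at(op, j):
    """Embed a single-site operator at site j of the N-atom chain."""
    mats = [I2] * N
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = sum(omega * op_at(sx, j) + delta * op_at(nr, j) for j in range(N))
H = H + sum(V * op_at(nr, j) @ op_at(nr, j + 1) for j in range(N - 1))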
The condition Ω_gr≪|Δ_gr| ensures that predominantly facilitated excitations take place. Off-resonant excitations determine the dark count rate, which can be minimized by an optimal choice of |Δ_gr|. The facilitation process leads to a large number of Rydberg atoms conditioned on the presence of one THz-excited atom in state |r⟩ (see Fig. <ref>c). After the amplification time, the number of Rydberg atoms in state |r⟩ is measured, whose average is given by the signal

𝒮=∑_j 𝒮_j=∑_j ⟨Ψ_gr|e^iH_a T_a n^(r)_j e^-iH_a T_a|Ψ_gr⟩.

Here 𝒮_j is the spatially resolved signal, i.e., the probability of having a Rydberg atom in state |r⟩ on site j. Ideally, the signal 𝒮 is proportional to the total number of atoms. Our analysis is strictly valid only for times T_a that are much smaller than the lifetime of the Rydberg atoms. However, in a typical experimental setting each decayed Rydberg atom is lost from the system. Unless the loss takes place at the facilitation front, where it interrupts the avalanche, absent atoms in the bulk can simply be included in the Rydberg count. In the following we discuss the quantum dynamics of this part of the protocol, taking into account interatomic forces and the fact that the initial THz absorption is collective. We provide experimental parameters below. Many-body dynamics during amplification mode — The facilitation excitation dynamics taking place during amplification is illustrated in Fig. <ref>a, where we present the spatially resolved signal 𝒮_j as a function of time t, starting from the initial state |Ψ_gr⟩. For the case shown, the THz photon was absorbed by the central atom, located at site k=0. In Fig. <ref>b we show the time evolution of the total signal 𝒮, see Eq. (<ref>). The dynamics is characterized by three stages, delimited by the vertical red dashed and solid lines in Fig. <ref>b. In the first stage we observe a quadratic increase of the signal, 𝒮∝(Ω_gr t)^2. In the second stage a ballistic expansion is established (see the region between the red dashed and solid lines in Fig. <ref>b). Here, the already facilitated Rydberg atoms excite their neighbors, leading to the creation of clusters of consecutive Rydberg excitations. This cluster grows from its boundaries (the facilitation front), as the de-excitation of Rydberg atoms within the bulk is off-resonant: atoms in state |r⟩ experience an energy shift of 2V_rr, since they interact with both their left and right neighbors. During this ballistic expansion the number of Rydberg atoms grows approximately linearly in time, and so does the signal, 𝒮∝Ω_gr t, leading to ever higher amplification <cit.>. A third stage follows in which the dynamics is governed by finite-size effects, and the signal 𝒮 starts to decrease once the edges of the Rydberg cluster hit the boundaries of the lattice. This implies that there is an optimal value for the amplification time T_a, which is proportional to N/Ω_gr. Note that this is indeed a quantity that can be optimized: the absorption of a THz photon may take place at a random time, but the starting point of the amplification mode is precisely known. Note furthermore that the inclusion of dephasing yields a saturation of the signal at 𝒮≈ N/2 in the long-time limit <cit.>. So far, we have assumed that the absorption of a THz photon takes place at a specific site k of the atom chain. However, terahertz radiation has a much longer wavelength than the typical interatomic distances a_0 in tweezer arrays.
Absorption of the THz photon is then described by the collective jump operator L=√(Γ_THz)∑_j re_j <cit.>. This creates the coherent superposition state |Ψ^c_er⟩=1/√(N)∑_k ⊗_j≠ k|e⟩_j|r⟩_k∝ L|Ψ_s⟩, where the excitation in the Rydberg state |r⟩ is collective, i.e. shared among the entire ensemble (see <cit.> for more details). At the end of the sensing phase this state is de-excited to |Ψ^c_gr⟩=1/√(N)∑_k ⊗_j≠ k|g⟩_j|r⟩_k. Starting the amplification mode leads to the signal shown in Fig. <ref>c,d, which increases notably faster than that of a local excitation. This acceleration is actually a coherent effect owed to the collective nature of the state |Ψ^c_gr⟩. To see this, we show for comparison the signal for a mixed state, corresponding to the incoherent equal-weight average over all possible initial positions of the atom in the |r⟩-state. This signal is lower than the one corresponding to the excitation of the central atom (Fig. <ref>a), which is expected since the latter case produces more facilitated atoms than an initial excitation close to the boundary. Experimental considerations — As indicated previously, the implementation of the detector protocol requires specifically chosen interactions: interactions between Rydberg |r⟩-states shall be strongest, while interactions between atoms in the |e⟩-state and crossed interactions between atoms in the |e⟩- and |r⟩-states shall be small compared to the relevant laser Rabi frequency. Moreover, the detector is spectrally sensitive solely near the frequency ω_THz, which, however, can be tuned over a wide range. We illustrate this in the following by considering two exemplary cases for the element ^39K. We choose the Rydberg state |r⟩ = 70P_1/2 and two different sensing states, i.e. states from which the atom is excited into |r⟩ upon absorption of the photon: (a) |e⟩_(a) = 68S_1/2 and (b) |e⟩_(b) = 45S_1/2. Using state (a), the transition energy is 54 GHz, a convenient frequency for a laboratory microwave source to demonstrate the scheme, while for state (b) the transition is in the THz regime, at about 1 THz. The corresponding interaction potentials are shown in Fig. <ref>. Choosing an interatomic distance a_0 = 6 μm, in scenario (a) the interaction energies are V_rr≈12.5 MHz, V_ee≈9 MHz, and V_er≈1 MHz. With a Rabi frequency of Ω_ge = 2π×30 MHz, the Rydberg blockade at this distance can be broken, effectively allowing to neglect the interaction in |e⟩. Moreover, as the interaction between |r⟩ and |e⟩ is much smaller than V_ee, an atom excited to |e⟩ will not affect the remaining atoms in |r⟩. Thus both excitation and de-excitation can approximately be treated in the limit of non-interacting states, as was assumed previously. The potential V_rr (and thus the laser detuning Δ_gr=-V_rr) is large enough to suppress off-resonant scattering during sensing: choosing a Rabi frequency of Ω_gr = 2π×0.2 MHz and a detuning Δ_gr = 12.5 MHz results in a dark count rate of 0.33. In a cryogenic environment at 1 K this is further reduced to 0.05, due to the lower influence of black-body radiation on the broadening of the Rydberg state's absorption. For an array of 11 atoms, these parameters give an optimal amplification time of about 25 μs, which leaves about 50 μs for the sensing before significant atom loss from the tweezer array. The lifetimes of the Rydberg states are sufficiently long, with τ_e = 193 μs (1.2 ms), τ_r = 129 μs (330 μs), and τ_45S = 46 μs (91 μs) at 330 K (1 K). Assuming that Rydberg atoms are not lost from the system, e.g.
by using state-independent trapping, the detector dead time is thus given by the readout time of about 10 ms, after which the next sensing cycle can start (the cycle rate is thus ∼100 Hz). Case (b) refers to a frequency in the THz range. Here, the potentials V_ee and V_er are even weaker than in case (a), as the relevant dipole transition matrix elements are diminished by a small wave-function overlap. Therefore, all required conditions are naturally met. The weak interaction of the |e⟩ states also allows to reduce the lattice spacing a_0, which increases Δ_gr and suppresses the dark count rate further. Facilitation dynamics and atomic motion — The facilitation mechanism is highly sensitive to the distance between neighboring atoms <cit.>. We therefore investigate in the following the impact of atomic motion within the tweezer traps. For simplicity, we assume the atoms to be trapped in a state-independent potential <cit.>. Traps are modeled as harmonic oscillators with frequency ν and bosonic lowering and raising operators a_j and a_j^† acting at site j. This is valid as long as the atoms are cooled near the motional ground state (ν≫ kT), with thermal energy kT <cit.>. Coupling between the Rydberg facilitation dynamics and the vibrational motion is caused by the dependence of the interaction potential V_rr on the interatomic separation x: V_rr→ V_rr(x). The Hamiltonian H_a [Eq. (<ref>)], valid during the amplification mode, therefore changes to <cit.>

H_a' = H_a + ν∑_j a_j^† a_j + ∂_x V_rr(x)|_x=a_0∑_j n^(r)_j n^(r)_j+1 δ x^(j,j+1).

Note that we consider here only small (first-order) displacements of the atoms from their equilibrium positions. These displacements are represented by the operator δ x^(j,j+1)= 1/√(2mν) (a_j+a_j^†-a_j+1-a_j+1^†), where m is the atomic mass. The coupling strength between the vibrational degrees of freedom and the Rydberg state of the atoms is then given by κ=1/√(2mν) ∂_x V_rr(x)|_x=a_0 <cit.>. To simulate the dynamics of the ensuing spin-boson Hamiltonian we resort to the time-evolving block decimation (TEBD) algorithm <cit.>, truncating the Fock space of the harmonic oscillators at a maximum of 7 phonons. Figure <ref>a,b shows the dynamics during the amplification mode. While Rydberg excitations, triggered by the absorption of a THz photon, still spread, we observe in Fig. <ref>c that the maximally achievable amplification generally decreases when the coupling strength κ between the vibrational and electronic degrees of freedom is increased. Nevertheless, for κ=1.5Ω_gr there is only a minimal change compared to the uncoupled case (κ=0), and even for stronger couplings significant amplification is possible. Thus, robust trapping with κ≪ν is certainly advantageous, but efficient amplification remains possible in the presence of vibrational atomic motion. Conclusions and future directions — We have discussed a protocol for a THz photon avalanche detector that combines the tunable frequency range of transitions among Rydberg states with facilitated Rydberg excitation. The detector offers single-photon sensitivity together with a low dark count rate and a fast operation cycle. One possibility to further enhance the sensitivity of the detector is to use correlated initial states: rather than initializing all the atoms in the |e⟩-state, one may think of preparing them in a Dicke state in the manifold of |e⟩,|r⟩-states. This would allow to collectively (superradiantly) enhance the absorption of THz photons.
However, implementing such a protocol requires switchable interactions between |r⟩-states and a generalization of the avalanche dynamics to the many-body regime, i.e. where multiple excitations are initially present. Beyond the microscopically controlled optical tweezer arrays discussed here, the protocol is expected to also work in disordered gases, as long as Doppler broadening is significantly smaller than Δ_gr. The large number of atoms in such gases enhances the THz absorption probability. At the same time, single-photon sensitivity is maintained due to the large signal produced by the avalanche. Acknowledgements — We acknowledge funding from the Deutsche Forschungsgemeinschaft within SPP 1929 GiRyd (Grant No. 428276754: LE3522/1 and GR4741/5), a Heisenberg professorship to C.G. (GR4741/3) and the research units FOR5413 (Grant No. 465199066) and FOR5522 (Grant No. 499180199). We also acknowledge funding from the Horizon Europe programme HORIZON-CL4-2022-QUANTUM-02-SGA via the project 101113690 (PASQuanS2.1), the Baden-Württemberg Stiftung through Project No. BWST_ISF2019-23, the Alfried Krupp von Bohlen und Halbach foundation and the state of Baden-Württemberg through bwHPC grant no INST 40/575-1 FUGG (JUSTUS 2 cluster).
"authors": [
"Chris Nill",
"Albert Cabot",
"Arno Trautmann",
"Christian Groß",
"Igor Lesanovsky"
],
"categories": [
"quant-ph",
"cond-mat.quant-gas"
],
"primary_category": "quant-ph",
"published": "20231127230732",
"title": "Avalanche terahertz photon detection in a Rydberg tweezer array"
} |
MoDS: Model-oriented Data Selection for Instruction Tuning
==========================================================

Instruction tuning has become the de facto method to equip large language models (LLMs) with the ability to follow user instructions. Usually, hundreds of thousands or millions of instruction-following pairs are employed to fine-tune the foundation LLMs. Recently, some studies have shown that a small number of high-quality instruction data is enough. However, how to select appropriate instruction data for a given LLM is still an open problem. To address this problem, in this paper we present a model-oriented data selection (MoDS) approach, which selects instruction data based on a new criterion considering three aspects: quality, coverage and necessity. First, our approach utilizes a quality evaluation model to filter out the high-quality subset from the original instruction dataset, and then designs an algorithm to further select from the high-quality subset a seed instruction dataset with good coverage. The seed dataset is applied to fine-tune the foundation LLM to obtain an initial instruction-following LLM. Finally, we develop a necessity evaluation model to find the instruction data on which the initial instruction-following LLM performs badly, and consider them necessary instructions for further improving the LLM. In this way, we obtain a small high-quality, broad-coverage and high-necessity subset of the original instruction dataset. Experimental results show that the model fine-tuned with 4,000 instruction pairs selected by our approach performs better than the model fine-tuned with the full original dataset, which includes 214k instruction data. Codes, data, and models are available[https://github.com/CASIA-LM/MoDS]. § INTRODUCTION With the development of artificial intelligence, large language models such as GPT-3 <cit.>, GPT-4 <cit.>, PaLM <cit.>, OPT <cit.>, and other open-source LLMs <cit.> have shown revolutionary potential in general language understanding and generation. As a critical technique of LLMs, instruction tuning <cit.> enables LLMs to correctly follow various kinds of user instructions. Early research on instruction tuning <cit.> mainly focused on how to construct large-scale, diverse, and high-quality instruction data. Recently, <cit.> proposed the LIMA model, demonstrating that only 1,000 carefully crafted high-quality instructions can equip a model with a powerful instruction-following capability. Their results suggest that almost all of the knowledge in LLMs has been learned during pre-training, and only a small amount of instruction tuning data is required to activate models to follow instructions and produce high-quality responses. Subsequently, there has been growing interest among researchers in systematically filtering high-quality and comprehensive subsets from the extensive pool of instruction data <cit.>. However, these data filtration methods rely too much on extra LLMs or mainly focus on the quality of instructions. Different from those methods, this paper proposes a model-oriented approach which selects instruction data based on a new criterion considering three aspects: quality, coverage and necessity. The quality criterion requires the selected instruction data to be good enough in both questions and answers.
The coverage criterion requires the selected instruction data to be diverse enough. The necessity criterion indicates that the selected instruction data should indeed fill the ability gap of the LLM of interest. In order to select high-quality instruction data from a large dataset, this paper first proposes to use a quality evaluation model to assess all the (instruction, input, output) triplets, and then keep the instruction data with high quality scores. After that, we further propose to use a k-center greedy algorithm <cit.> to select instruction data from the high-quality subset. This k-center greedy algorithm selects a subset of data points that are the farthest apart, thereby making the instruction data we collect diverse and broad in coverage. In this way, we can get a seed instruction dataset for target-LLM fine-tuning. Due to differences in pre-training data, model architecture and training processes, different LLMs vary in their abilities, which means that different LLMs require different kinds of instruction data. In order to further find the instruction data a specific LLM needs, we fine-tune the given LLM with the seed instruction dataset, and then assess its inference results on the whole high-quality instruction dataset. In this way, we can filter out the instructions on which the specific LLM performs poorly, making up an augmented dataset for the target LLM. This augmented dataset indicates the instruction-following capabilities that the LLM lacks. Finally, by merging the seed instruction data and the augmented data, we get a high-quality, broad-coverage and high-necessity subset from the original large-scale instruction datasets. We then utilize these selected data to fine-tune the target LLM again. Our contributions can be summarized as follows: (1) We propose new criteria for instruction data selection, including quality, coverage and necessity, and verify that they are valuable for LLM fine-tuning. (2) We propose a model-oriented instruction selection approach which not only considers the quality and coverage of instruction data, but also integrates the necessity of instructions based on the ability of the specific LLM. (3) Experimental results show that the LLM fine-tuned with 4,000 instruction data selected by our approach could achieve a better performance than the LLM fine-tuned with the full original dataset (214k), indicating that our approach is effective in selecting valuable instruction data with high quality, broad coverage and high necessity. § RELATED WORK Recent research shows that instruction tuning could enable LLMs to be tailored to specific domains, tasks, or applications by providing explicit instructions or guidelines <cit.>. In order to enhance the instruction-following abilities of LLMs, previous work mainly focused on increasing the data sizes through various strategies <cit.>. However, the work of <cit.> illustrates that even a small number of carefully constructed high-quality instructions could empower the model with a powerful instruction-following capability. They indicate that most of the knowledge in LLMs has been acquired during the pre-training procedure, and only a limited amount of instruction data is enough to activate LLMs to follow instructions and generate high-quality responses. Their work demonstrates significant improvements compared to LLMs which are fine-tuned with similar-scale unfiltered data.
However, it should be noted that their approach requires manual involvement to select data from extensive datasets, which is both time-consuming and costly. Motivated by the work of <cit.>, <cit.> proposed an instruction-mining approach which adopts a linear quality rule and a bag of indicators to evaluate the quality of instruction-following data. However, they do not conduct comparisons with LLMs trained on the complete dataset, and their approach is very complex. Besides, <cit.> recently proposed the ALPAGASUS model, which directly leverages an external LLM (ChatGPT) to score each instruction and then selects 9k Alpaca data with a threshold. Their model surpasses the performance of the official Alpaca model, which is trained on the complete dataset. However, they rely excessively on a high-performing external LLM. Different from them, <cit.> present a self-guided approach for LLMs to independently identify and choose relevant instruction pairs from extensive open-source datasets. In their approach, they introduce an Instruction-Following Difficulty (IFD) metric as a tool to identify gaps between a model's responses and its autonomous generation capability. It significantly reduces the need for manual curation and the associated costs of instruction tuning. However, when computing the IFD metric they only adopt one answer for each instruction, which neglects the fact that the responses to an instruction can be diverse. Besides, they do not pay much attention to the quality and coverage of instruction data during the selection procedure. § METHODOLOGY §.§ Which instructions are the valuable data for a given LLM? <cit.> show that an LLM's knowledge has mostly been learnt during pre-training. Instruction tuning is mainly used to teach a given LLM how to follow a certain pattern when interacting with humans, and only a small number of carefully crafted high-quality instructions are enough to equip the given LLM with powerful instruction-following capabilities. However, for different LLMs, as the knowledge and abilities they have learnt during the pre-training procedure are different, the instruction-tuning data they require should be different as well. Consequently, how to select the most crucial data for a given LLM has garnered much attention from researchers. After analyzing some LLMs and instructions, we find that the valuable instruction-tuning data for a given LLM are mainly decided by the following three aspects: Quality. "Quality" refers to the data quality of both the instructions and their corresponding responses in the dataset, which directly influences the knowledge the LLM learns. As demonstrated in the work of <cit.>, high-quality instruction data can effectively enhance the LLM's ability to follow instructions. Coverage. "Coverage" refers to the types of instructions the dataset includes. It represents the diversity of an instruction dataset. The more diverse the instructions a dataset covers, the greater its potential to stimulate the capabilities of a large language model. The studies of <cit.> also show that enhancing the diversity of instruction data can effectively enhance the LLM's ability to follow instructions during fine-tuning. Necessity. "Necessity" indicates the importance and uniqueness of an instruction for fine-tuning a specific LLM. As described in the work of <cit.>, LLMs have already acquired a substantial amount of knowledge and capabilities during pre-training.
Instruction tuning primarily focuses on how to use a limited amount of instruction data to stimulate the LLM's capabilities, enabling LLMs to follow a certain pattern when interacting with humans. Because the knowledge and capabilities different LLMs have learned differ, the importance and uniqueness of the same instruction data may vary for different LLMs. For a given instruction, if the LLM can generate a high-quality response, it indicates that the LLM already possesses the ability to follow this type of instruction, and this instruction data is non-essential for fine-tuning. Conversely, if the LLM cannot generate a good response for that instruction, it suggests that the LLM lacks the ability to follow this type of instruction, and that instruction is necessary for optimizing the LLM's capabilities. §.§ Instruction Data Selection As mentioned in the previous section, the process of selecting effective instruction data from a large-scale dataset for a given LLM is primarily determined by three aspects: quality, coverage, and necessity. To efficiently select the most valuable instruction data along these three aspects, this paper proposes a model-oriented approach for instruction data selection, which is shown at the top of Figure <ref>. This approach mainly includes three modules: Quality Evaluation, Diverse Data Selection for Seed Instructions and Augmented Data Selection. The details are presented in the following. §.§.§ Quality Evaluation The quality of instruction data plays a crucial role in the learning of instruction-following capabilities for LLMs. Therefore, to select effective instruction data, we first evaluate the quality of the instruction data and their corresponding responses in the large-scale dataset, and then filter out the higher-quality data from it. When assessing the quality of instruction data, we utilize the reward-model-deberta-v3-large-v2[https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2] model, which is developed by OpenAssistant. This is a reward model designed based on the DeBERTa <cit.> architecture and trained on four different types of human feedback data <cit.>, endowing it with the abilities of QA model evaluation, reward scoring, and detecting potentially toxic responses via ranking. In this paper, we mainly adopt its reward-scoring capability to generate a quality score for each (instruction, input, output) triplet in the large-scale dataset. Figure <ref> displays some examples with quality scores. After generating the quality scores for each (instruction, input, output) triplet, we filter them with a threshold α. By collecting the (instruction, input, output) triplets whose quality scores are larger than α, we can get a high-quality instruction dataset. §.§.§ Diverse Data Selection for Seed Instructions After getting a high-quality instruction dataset, we further select data from it. In order to select diverse instruction data with maximum coverage, we propose to use the K-center greedy algorithm <cit.> for data selection. The K-center greedy algorithm, proposed by <cit.> in 2017, is a simple yet effective approach to the K-center problem. The objective of the K-center problem is to choose a subset of K centers from a given set of data points in a manner that minimizes the maximum distance between any data point and its nearest center.
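To make the selection step concrete, the following is a minimal Python sketch of the greedy K-center procedure described step by step in the next paragraph. The use of Euclidean distance over pre-computed BERT sentence embeddings matches the setup of this subsection, while the function name and the choice of index 0 as the initial center are our own assumptions rather than details fixed by the paper.

import numpy as np

def k_center_greedy(embeddings: np.ndarray, k: int) -> list:
    """Greedily pick k indices whose embeddings are maximally spread out.

    embeddings: (n, d) array of BERT sentence embeddings, one per instruction.
    Returns the indices of the k selected instructions.
    """
    selected = [0]  # assumption: start from an arbitrary point; a random start also works
    # Distance from every point to its nearest selected center so far.
    min_dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
    for _ in range(k - 1):
        # Choose the point farthest from the current set of centers ...
        next_idx = int(np.argmax(min_dist))
        selected.append(next_idx)
        # ... and update every point's distance to its nearest center.
        dist_to_new = np.linalg.norm(embeddings - embeddings[next_idx], axis=1)
        min_dist = np.minimum(min_dist, dist_to_new)
    return selected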
This algorithm commences by selecting an initial center, typically the point farthest from any existing centers, and then proceeds to add new centers iteratively. At each step, it chooses the point farthest from the current set of centers. Algorithm 1 <cit.> presents the details of this algorithm. During the diverse data selection process, we generate the sentence embeddings for all instructions with BERT <cit.>, which are used to compute the distances between different data points. Through this module, we can get a seed instruction dataset which has great diversity and broad coverage. §.§.§ Augmented Data Selection For different LLMs, as the knowledge and capabilities they learned in the pre-training procedure are different, the instruction-tuning data they require will be different as well. For one instruction, if the given LLM can generate a good response, it indicates that the given LLM possesses the ability to handle this type of instruction, and this instruction data is not necessary for the fine-tuning of the LLM. Conversely, if the LLM cannot generate a good response, it suggests that the LLM cannot effectively process that type of instruction data, and the instruction data is very important and unique for the fine-tuning of the target LLM. In Section 3.2.2, we generated a seed instruction dataset with high quality and broad coverage. However, as the valuable instructions vary for different LLMs, the seed instruction dataset may not include all the instructions the target LLM needs. In order to find these missing instructions, we first fine-tune the pre-trained LLM with the seed instruction dataset, generating an initial LLM. Then we generate the responses of the initial LLM to all the instructions in the high-quality dataset. After that, we use a necessity evaluation model to compute a review score for each instruction and its generated response. In this paper, we still adopt the reward model used in Section 3.2.1 as the necessity evaluation model. If the review scores are less than a threshold β, it indicates that the initial LLM could not generate good responses for these instructions and does not possess the capability to handle those types of instructions. After collecting all the instructions with low review scores, we again use the K-center greedy selection algorithm described in Section 3.2.2 to select a subset from them, and then build an augmented dataset. This dataset can effectively compensate for the capability deficiencies of the initial LLM. §.§.§ Fine-tuning with Selected Instruction Data Following the methods outlined in the previous section, we can get a seed instruction dataset and its augmented dataset for a given LLM. After that, we merge these two datasets and then fine-tune the raw pre-trained LLM. This process is shown in the bottom part of Figure <ref>. In this way, we can get the final LLM, which has a good instruction-following capability. The raw pre-trained LLM used in this paper is LLaMA 2 <cit.>. § EXPERIMENTS §.§ Datasets §.§.§ Training set Alpaca. In this paper, we use Alpaca <cit.>, which was built by Stanford University, as one of the original instruction datasets. This dataset comprises 52,002 (instruction, input, output) triplets. It was created using the self-instruct approach <cit.> with ChatGPT, and the LLM trained on this dataset shows a good instruction-following ability. However, relying too heavily on ChatGPT raises concerns among researchers about the quality of the instruction data. Mixture Dataset.
In addition to Alpaca, we also build a much larger mixture instruction dataset as the original training data. In this dataset, we mix the instruction data from HC3 <cit.>, alpaca <cit.>, alpaca-evol-instruct <cit.>, dolly-v2 <cit.>, InstructWild <cit.> and lima <cit.>, and then construct a mixture instruction dataset which includes 214,526 (instruction, input, output) triplets. Compared to Alpaca, this dataset contains more diverse and rich instructions covering open-domain, medical, legal, financial and other domains. §.§.§ Test set In order to evaluate the performance of our proposed approach, we utilize five different test sets as in the work of <cit.>, including Koala <cit.>, WizardLM <cit.>, Self-instruct <cit.>, Vicuna <cit.> and LIMA <cit.>. These test sets contain 180, 218, 252, 80 and 300 human-curated instructions, respectively, covering math, coding, writing, knowledge, computer and other domains. §.§ Details of Training and Testing Training details. In this paper, we adopt LLaMA 2 <cit.> with 7B parameters as the raw LLM for fine-tuning. During the fine-tuning procedure, we utilize the same hyperparameters as the work of <cit.>, which include a learning rate of 2e-5, a warmup ratio of 0.03, a weight decay of 0.0 and a batch size of 128. The number of fine-tuning epochs is set to 3, and we conduct all fine-tuning and evaluation experiments on NVIDIA A100 GPUs. During the quality evaluation and necessity evaluation procedures, both thresholds α and β are set to 0.0 for the Alpaca dataset, while they are set to 1.0 and -1.0, respectively, for the Mixture dataset. Testing details. During the testing process, human evaluation is the most accurate and reliable approach to evaluate the instruction-following capabilities of LLMs. However, this approach is very time-consuming and costly. Moreover, the evaluation results may also be affected by human biases. Consequently, in this paper, we utilize ChatGPT and GPT-4 for the evaluation of LLMs, as in the work of <cit.>. During the evaluation process, all the LLMs are prompted to generate responses for all of the instructions in the test sets. Subsequently, the evaluation LLM is prompted to assign a score to each of these responses based on relevance and accuracy, on a scale from 1 to 10. Besides, in order to eliminate the impact of positional bias on the judgements, following the work of <cit.>, we evaluate the responses of two given LLMs twice, with different orderings in the prompts. Finally, we compare their scores in these two rounds of evaluation, and the criteria for winning are presented in the following: Wins: the model outperforms in both comparisons, or wins in one while tying in the other. Tie: the model ties in both comparisons, or wins in one while losing in the other. Loses: the model loses in both comparisons, or ties in one while losing in the other. §.§ Results and Analysis This section mainly presents the performance of our approach on different test sets. Figure <ref> presents the comparison of our MoDS model with the model trained on the full Alpaca dataset. During the fine-tuning procedure, the sizes of both the seed instruction dataset and the augmented instruction dataset of our MoDS model are 500. From this figure we can see that our MoDS approach, which only adopts 1,000 instruction data, achieves a better performance than the model trained on the full Alpaca dataset, which utilizes 52,000 instructions.
These results indicate that our instruction data selection approach is effective, and that a small number of high-quality, broad-coverage and high-necessity selected instruction data can also give LLMs a powerful instruction-following ability. In order to compare our method with the self-guided instruction data selection approach proposed by <cit.>, Table <ref> shows their comparisons with the corresponding models trained on the full Alpaca dataset. In the work of <cit.>, they introduce an Instruction-Following Difficulty (IFD) metric as a tool to identify gaps between a model's responses and its autonomous generation capability, and then select 5% (about 2,600 instructions) of the full Alpaca data to fine-tune the raw LLM. In Table <ref>, Self-guided represents the model fine-tuned with the 2,600 instruction data[https://github.com/MingLiiii/Cherry_LLM] selected by the self-guided approach. MoDS(1000) represents the model fine-tuned with 500 seed instruction data and 500 augmented instruction data chosen by our approach, while MoDS(2000) represents the model fine-tuned with 1,000 seed instruction data and 1,000 augmented instruction data. For all of them, the pre-trained language model is LLaMA 2. From this table, we can see that MoDS(1000) is comparable to Self-guided on the Vicuna, Koala, WizardLM and LIMA test sets, while it is better than Self-guided on the Sinstruct test set. MoDS(2000) is better than Self-guided on all of the test sets. It should be noted that the numbers of instruction data utilized by MoDS(1000) and MoDS(2000) are smaller than that of the Self-guided model. The results demonstrate that our model-oriented approach can better select the instruction data the target LLM needs, and thus effectively enhance the LLM's instruction-following capabilities. In addition to the Alpaca dataset, Figure <ref> presents the comparison of our MoDS model trained on the selected data with the model trained on the full Mixture Instruction Dataset. When fine-tuning on this dataset, the sizes of the seed instructions and augmented instructions of our MoDS model are 1,000 and 3,000, respectively. From this figure, we can see that MoDS performs significantly better than the model trained on the full Mixture Dataset. However, our MoDS model only adopts 4,000 instructions to fine-tune the pre-trained language model, while the model trained on the full Mixture Dataset utilizes 214K instructions. This result once again demonstrates that our proposed approach can effectively select valuable instruction data from large-scale datasets for a target LLM. §.§ Ablation Study To select instruction data with maximum coverage, this paper proposes to use the K-center greedy algorithm to select data from high-quality datasets. In order to analyze the effect of the K-center greedy algorithm on data selection, Figure <ref> shows the comparison of the K-center greedy and random sampling approaches on different test sets. In this figure, we first select data from the high-quality dataset with the K-center greedy algorithm and the random sampling approach respectively, and then fine-tune the pre-trained language model with the selected subsets. The number of selected instructions is 1,000, and the original instruction dataset is the Mixture Dataset, which includes 214k instructions. From this figure, we can see that the model fine-tuned with the K-center greedy algorithm performs much better than the model fine-tuned with the random sampling approach.
This indicates that the K-center greedy algorithm can select more valuable and diverse instruction data from the high-quality dataset. In Figure <ref>, we compare MoDS with the model fine-tuned only with seed instruction data extracted from the Mixture Dataset. In this way, we can check whether the augmented instruction data can further improve the ability of LLMs. Instead of selecting 1,000 seed instruction data and 3,000 augmented instruction data respectively, in Figure <ref> we directly select 4,000 seed instruction data from the high-quality subset of the Mixture Dataset. After that, we utilize these 4,000 instruction data to fine-tune the pre-trained language model and compare its performance with MoDS. From this figure, we can see that MoDS is much better than the model fine-tuned with 4,000 seed instruction data. This result demonstrates that the augmented instruction data can effectively compensate for the LLM's capability gaps, thus further enhancing its instruction-following capability. To investigate the impact of the instruction number on LLMs in our approach, Figure <ref> presents the winning scores of our models with different numbers of augmented instruction data on the Mixture Dataset. Following the work of <cit.>, the winning score is computed as (Num(win) - Num(lose))/Num(all) + 1. The numbers of "win", "lose" and "all" are computed across all five test sets. Winning-score values higher than 1.0 indicate that our model performs better than the model fine-tuned with the full Mixture Dataset, while values below 1.0 indicate that our model's performance is worse than that of the full Mixture Dataset model. From this figure, we can see that the performance of our models effectively improves as we increase the number of augmented instruction data. This result also illustrates that the augmented data are very valuable for enhancing the instruction-following capabilities of LLMs. Furthermore, when the size of the augmented dataset reaches 3,000, the performance of the model no longer significantly improves. This suggests that using 3,000 augmented instruction data for the Mixture Dataset is already enough to compensate for the model's capability shortcomings. § CONCLUSION In this paper, we propose a model-oriented instruction data selection approach to select valuable instructions for a target foundation LLM. During the selection of instruction data, our approach not only considers the quality and coverage of instruction data, but also integrates the necessity of instructions based on the ability of the target LLM. First, we use a quality evaluation model to evaluate all the (instruction, input, output) triplets in the datasets, and then keep the high-quality instructions. Second, we use the K-center greedy algorithm to select a seed instruction dataset from the high-quality dataset, which makes the selected data as diverse as possible with broad coverage. Third, we use the seed instruction dataset to fine-tune the foundation LLM, and then evaluate the fine-tuned LLM on all high-quality instructions to find the augmented instruction data for the target LLM, which can effectively compensate for the model's capability gaps. Finally, by merging the seed instruction data and the augmented data, we can get a high-quality, broad-coverage and high-necessity dataset from the original large-scale datasets.
The final selected dataset is used to fine-tune the foundation LLM to obtain an optimized LLM with a powerful instruction-following capability. | http://arxiv.org/abs/2311.15653v1 | {
"authors": [
"Qianlong Du",
"Chengqing Zong",
"Jiajun Zhang"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231127093313",
"title": "MoDS: Model-oriented Data Selection for Instruction Tuning"
} |
Enhanced the Fast Fractional Fourier Transform (FRFT) scheme using the closed Newton-Cotes rules
A. H. Nzokem
January 14, 2024
======================
The paper considers the fractional Fourier transform (FRFT)-based numerical inversion of Fourier and Laplace transforms and the closed Newton-Cotes quadrature rules. It is shown that the fast FRFT of a QN-long weighted sequence is the composite of two fast FRFTs: the fast FRFT of a Q-long weighted sequence and the fast FRFT of an N-long sequence. The Newton-Cotes rules, the composite fast FRFT, and non-weighted fast fractional Fourier transform (FRFT) algorithms are applied to the Variance Gamma distribution and the Generalized Tempered Stable (GTS) distribution for illustration. Compared to the non-weighted fast FRFT, the composite fast FRFT provides more accurate results with a small sample size, and the accuracy increases with the number of weights (Q). Keywords: Fractional Fourier Transform (FRFT), Discrete Fourier Transforms (DFT), Newton-Cotes rules, Variance Gamma distribution, Generalized Tempered Stable Distribution § INTRODUCTION The fractional Fourier transform (FRFT) is an important time-frequency analysis tool, often used for the numerical evaluation of continuous Fourier and Laplace transforms <cit.>. The FRFT appears in the mathematical literature as early as 1929 <cit.> and generalizes the traditional Fourier transform (FT) based on the idea of fractionalizing the eigenvalues of the FT <cit.>. An impetus for studying the fractional Fourier transform is the existence of the fast fractional Fourier transform (FRFT) algorithm, which is significantly more efficient than the conventional fast Fourier transform (FFT) algorithm <cit.>. On the other hand, the Newton-Cotes quadrature rules, named after Isaac Newton and Roger Cotes, are the most common numerical integration schemes <cit.>, based on evaluating the integrand at equally spaced points using polynomial interpolation. The idea of combining both schemes comes initially from the fact that the fast FRFT scheme is formulated based on a simple step-function approximation to the integral <cit.>, and the Filon formula <cit.> was derived on the assumption that the integrand may be approximated stepwise by parabolas. These approximations of the Fourier integrals are called the Filon-Simpson rule, the Filon-trapezoidal rule, and, more generally, Filon's method <cit.>. This paper aims to provide a broader development of the approximate evaluation of the fast FRFT from the Newton-Cotes rules and to show that such an approximation can be written as the FRFT of a weighted FRFT. The resulting schemes will be applied to analyze the numerical error of two probability density functions. We organize the paper as follows. Section 2 develops the higher-order composite Newton-Cotes quadrature formula. Section 3 presents the fast fractional Fourier transform algorithm and combines it with the Newton-Cotes rules. Section 4 provides two illustrative examples. § COMPOSITE NEWTON-COTES QUADRATURE FORMULAS The Newton-Cotes rules evaluate the integrand f at equally spaced points x_i over the interval [a,b], where x_i = a + i(b-a)/M = a + ih with h = (b-a)/M, M = QN, and x_Qp+Q = x_Q(p+1), where Q is the number of steps h within the subinterval [x_Qp, x_Qp+Q] of the interval [a,b].
§.§ Composite Rules To have greater accuracy, the idea of the composite rule is to subdivide the interval [a,b] into smaller intervals like [x_Qp, x_Qp+Q], apply the quadrature formula in each of these smaller intervals and add up the results to obtain more accurate approximations: ∫_a^b f(x)dx = ∑_p=0^N-1 ∫_x_Qp^x_Qp+Q f(x)dx. We define the Lagrange basis polynomials over the sub-interval [x_Qp, x_Qp+Q] as l_Qp+j(x) = ∏_i=0, i≠j^Q (x - x_Qp+i)/(x_Qp+j - x_Qp+i), so that l_Qp+j(x_Qp+i) = δ_ij, where δ_ij = 0 if i ≠ j and δ_ij = 1 if i = j. The Lagrange interpolating polynomial and its integral can then be derived: f(x) = ∑_j=0^Q f(x_Qp+j) l_Qp+j(x) and ∫_x_Qp^x_Qp+Q f(x)dx = ∑_j=0^Q f(x_Qp+j) ∫_x_Qp^x_Qp+Q l_Qp+j(x)dx. The integral of each Lagrange basis polynomial is ∫_x_Qp^x_Qp+Q l_Qp+j(x)dx = (b-a)/M · (-1)^(Q-j)/(j!(Q-j)!) ∫_0^Q ∏_i=0, i≠j^Q (y - i)dy. We thus have the Lagrange interpolating integration over [x_Qp, x_Qp+Q]: ∫_x_Qp^x_Qp+Q f(x)dx = (b-a)/M ∑_j=0^Q W_j f(x_Qp+j), with W_j = (-1)^(Q-j)/(j!(Q-j)!) ∫_0^Q ∏_i=0, i≠j^Q (y - i)dy. Proposition 1.1 For Q even, M=QN an integer, and f ∈ 𝒞^Q+2([a,b]), there exists η ∈ ]a,b[ such that ∫_a^b f(x)dx = (b-a)/M ∑_p=0^M/Q-1 ∑_j=0^Q W_j f(x_Qp+j) + h^Q+2 f^(Q+2)(η)/(Q+2)! · (b-a)/Q ∫_0^Q ∫_0^y ∏_i=0^Q (a-i) da dy, with W_j = (-1)^(Q-j)/(j!(Q-j)!) ∫_0^Q ∏_i=0, i≠j^Q (y - i)dy. For the proof of Proposition 1.1, see <cit.>. §.§ Weights Computation Before using the formula in (<ref>), we need to compute the weights {W_j}_0≤j≤Q developed previously. Proposition 1.2 For Q even, M=QN an integer, and j ∈ {0,1,2,...,Q}, W_j = ∑_i=0^Q C^j_i Q^i+1/(i+1) · (-1)^(Q-j)/(j!(Q-j)!), where (C^j_i)_0≤i≤Q, 0≤j≤Q are the coefficients of the polynomial functions. For the proof of Proposition 1.2, see <cit.>. The coefficient values (C^j_i)_0≤i≤Q, 0≤j≤Q of the polynomial functions were obtained by solving the following equations (<ref>) with a Vandermonde matrix <cit.>: ∏_i=0, i≠j^Q (y - i) = ∑_i=0^Q C^j_i y^i. Table <ref> provides the weight values as a function of the degree Q of the Lagrange polynomial. The error analysis <cit.> shows that the global error of the integral approximation in (<ref>) is (Q+2)th-order accurate (O(h^Q+2)). § FAST FRACTIONAL FOURIER TRANSFORM (FRFT) AND COMPOSITE NEWTON-COTES QUADRATURE RULES §.§ Fast Fourier Transform and Fractional Fourier Transform The conventional fast Fourier transform (FFT) algorithm is widely used to compute discrete convolutions, discrete Fourier transforms (DFT) of sparse sequences, and to perform high-resolution trigonometric interpolation <cit.>. The discrete Fourier transform (DFT) is based on the N-th roots of unity e^-2πi/N. The generalization of the DFT is the fractional Fourier transform, which is based on fractional roots of unity e^-2πiα, where α is an arbitrary complex number. The fractional Fourier transform is defined on an M-long sequence (x_0, x_1, …, x_M-1) as follows: G_k+s(x,δ) = ∑_j=0^M-1 x_j e^-2πij(k+s)δ, 0 ≤ k < M, 0 ≤ s ≤ 1. Using the identity 2j(k+s) = j^2 + (k+s)^2 - (k-j+s)^2, equation (<ref>) becomes G_k+s(x,δ) = ∑_j=0^M-1 x_j e^-πi(j^2 + (k+s)^2 - (k-j+s)^2)δ = e^-πi(k+s)^2δ ∑_j=0^M-1 x_j e^-πij^2δ e^πi(k-j+s)^2δ = e^-πi(k+s)^2δ ∑_j=0^M-1 y_j z_k-j, where y_j = x_j e^-πij^2δ and z_j = e^πi(j+s)^2δ. The expression ∑_j=0^M-1 y_j z_k-j is a discrete convolution. Still, we need a circular convolution (i.e., with the indices of z taken modulo the sequence length) to evaluate G_k+s(x,δ).
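Before completing the convolution construction below, the quadrature side of Section 2 can be made concrete. The following sketch is our own illustration: it computes the weights W_j of Proposition 1.2 using numpy's polynomial helpers rather than an explicit Vandermonde solve, and then applies the composite rule of Proposition 1.1.

import numpy as np
from math import factorial

def newton_cotes_weights(Q):
    """Closed Newton-Cotes weights W_j for even Q, as in Proposition 1.2."""
    W = np.zeros(Q + 1)
    for j in range(Q + 1):
        roots = [i for i in range(Q + 1) if i != j]
        coeffs = np.poly(roots)          # coefficients of prod_{i != j} (y - i)
        antider = np.polyint(coeffs)     # exact antiderivative of the polynomial
        integral = np.polyval(antider, Q) - np.polyval(antider, 0)
        W[j] = (-1) ** (Q - j) / (factorial(j) * factorial(Q - j)) * integral
    return W

def composite_newton_cotes(f, a, b, Q, N):
    """Composite rule of Proposition 1.1 with M = QN steps of width h."""
    M = Q * N
    W = newton_cotes_weights(Q)
    x = a + np.arange(M + 1) * (b - a) / M
    total = sum(W @ f(x[Q * p : Q * p + Q + 1]) for p in range(N))
    return (b - a) / M * total

# Sanity checks: Q = 2 reproduces Simpson's weights (1/3, 4/3, 1/3), and the
# 12-point composite rule integrates sin over [0, pi] to ~2.0.
print(newton_cotes_weights(2))
print(composite_newton_cotes(np.sin, 0.0, np.pi, 12, 4))

With the quadrature weights in hand, we now return to the circular-convolution construction.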
The conversion from discrete convolution to discrete circular convolution is possible by extending the sequences y and z to length 2M, defined as follows: y_j = x_j e^-πij^2δ and z_j = e^πi(j+s)^2δ for 0 ≤ j < M; y_j = 0 and z_j = e^πi(j+s-2M)^2δ for M ≤ j < 2M. Taking into account the 2M-long sequences, the previous fractional Fourier transform becomes G_k+s(x,δ) = e^-πi(k+s)^2δ ∑_j=0^2M-1 y_j z_k-j = e^-πi(k+s)^2δ DFT_k^-1[DFT_j(y) DFT_j(z)], where DFT is the discrete Fourier transform and DFT^-1 is the inverse DFT. For an N-long sequence z, we have DFT_k(z) = ∑_j=0^N-1 z_j e^-2πijk/N and DFT_k^-1(z) = (1/N) ∑_j=0^N-1 z_j e^2πijk/N. This procedure is referred to in the literature as the fast fractional Fourier transform algorithm, with a total computational cost of 20M log_2 M + 44M operations <cit.>. We assume that [f](y) is zero outside the interval [-a/2, a/2]; β = a/M is the step size of the M input values of [f](y), defined by y_j = (j - M/2)β for 0 ≤ j < M. Similarly, γ is the step size of the M output values of f(t), defined by x_k = (k - M/2)γ for 0 ≤ k < M. By choosing the step size β on the input side and the step size γ on the output side, we fix the FRFT parameter δ = βγ/2π and yield <cit.> the density function f (<ref>) at x_k: f(x_k) = 1/2π ∫_-∞^+∞ [f](y) e^ix_k y dy ≈ 1/2π ∫_-a/2^a/2 [f](y) e^ix_k y dy = γ/2π ∑_j=0^M-1 [f](y_j) e^2πi(k-M/2)(j-M/2)δ = γ/2π e^-πi(k-M/2)Mδ G_k([f](y_j) e^-πijMδ, -δ). We have f̂(x_k) = γ/2π e^-πi(k-M/2)Mδ G_k([f](y_j) e^-πijMδ, -δ), 0 ≤ s < 1. §.§ FRFT of QN-long weighted sequence The advanced fast fractional Fourier transform (FRFT) algorithm combines the fast FRFT algorithm (<ref>) and the 12-point composite Newton-Cotes quadrature rule (<ref>) to evaluate the inverse Fourier integrals. We assume that [f](x) is zero outside the interval [-a/2, a/2], Q = 12, M = QN, and β = a/M is the step size of the M input values of [f](y), defined by y_j+Qp = (Qp + j - M/2)β for 0 ≤ p < N and 0 ≤ j < Q. Similarly, the output values of f(x) are defined by x_Ql+f+s = (Ql + f + s - M/2)γ for 0 ≤ l < N, 0 ≤ f < Q and 0 ≤ s ≤ 1. f(x_Ql+f+s) = 1/2π ∫_-∞^+∞ e^iy x_Ql+f+s F[f](y)dy = 1/2π ∫_-a/2^a/2 e^iy x_Ql+f+s F[f](y)dy = 1/2π ∑_p=0^N-1 ∫_y_Qp^y_Qp+Q e^iy x_Ql+f+s F[f](y)dy (composite rule). Based on the Lagrange interpolating integration over [y_Qp, y_Qp+Q] <cit.>, we have the following expression: ∫_y_Qp^y_Qp+Q e^iy x_Ql+f+s F[f](y)dy ≈ β ∑_j=0^Q w_j e^i x_Ql+f+s y_j+Qp F[f](y_j+Qp). We denote by f̂(x_Ql+f+s) the approximation of f(x_Ql+f+s); the expression (<ref>) then becomes f̂(x_Ql+f+s) = β/2π ∑_p=0^N-1 ∑_j=0^Q w_j F[f](y_j+Qp) e^i x_Ql+f+s y_j+Qp, 0 ≤ s < 1, = β/2π ∑_j=0^Q ∑_p=0^N-1 w_j F[f](y_j+Qp) e^2πiδ(Ql+f+s-M/2)(Qp+j-M/2) (βγ = 2πδ) = β/2π e^-πiδM(Ql+f+s-M/2) G_Ql+f(w_j F[f](y_j+Qp) e^-πi(j+Qp)Mδ, -δ). We thus have the probability density function as a function of the fractional Fourier transform:
f̂(x_Ql+f+s) = β/2π e^-πiδM(Ql+f+s-M/2) G_Ql+f(w_j F[f](y_j+Qp) e^-πi(j+Qp)Mδ, -δ). The Fourier transform function F[f] is weighted, and we have a fast fractional Fourier transform (FRFT) of a QN-long weighted sequence. §.§ FRFT of Q-long weighted sequence composed with FRFT of N-long sequence We denote by f̂(x_Ql+f+s) the approximation of f(x_Ql+f+s); the expression (<ref>) becomes f̂(x_Ql+f+s) = β/2π ∑_p=0^N-1 ∑_j=0^Q w_j F[f](y_j+Qp) e^i x_Ql+f+s y_j+Qp = β/2π ∑_j=0^Q w_j ∑_p=0^N-1 F[f](y_j+Qp) e^2πiδ(Ql+f+s-M/2)(Qp+j-M/2) (βγ = 2πδ) = β/2π e^-πiδM(Ql+f+s-M/2) ∑_j=0^Q w_j e^2πiδ(Ql+f+s-M/2)j ∑_p=0^N-1 F[f](y_j+Qp) e^2πiδ(Ql+f+s-M/2)Qp = β/2π e^-πiδM(Ql+f+s-M/2) ∑_j=0^Q w_j G_l+(f+s)/Q(ξ_p, δQ^2) e^2πiδ(Ql-M/2)j e^2πiδ(f+s)j. We have the first fractional Fourier transform (FRFT) on the N-long complex sequence {ξ_p}_0≤p<N: G_l+(f+s)/Q(ξ_p, α_1 = -δQ^2) = ∑_p=0^N-1 ξ_p e^-2πi(l+(f+s)/Q)pα_1, with ξ_p = e^-πiMpQδ F[f](y_j+Qp). Then f̂(x_Ql+f+s) becomes f̂(x_Ql+f+s) = β/2π e^-πiδM(Ql+f+s-M/2) ∑_j=0^Q w_j G_l+(f+s)/Q(ξ_p, -α_1) e^2πiδ(Ql-M/2)j e^2πiδ(f+s)j = β/2π e^-πiδM(Ql+f+s-M/2) G_f+s(z_j, δ). We have the second fractional Fourier transform (FRFT) on the Q-long complex sequence {z_j}_0≤j≤Q: G_f+s(z_j, α_2 = -δ) = ∑_j=0^Q z_j e^-2πi(f+s)jα_2, with z_j = w_j G_l+(f+s)/Q(ξ_p, -α_1) e^2πiδ(Ql-M/2)j. The advanced FRFT scheme yields the following approximation: f̂(x_Ql+f+s) = β/2π e^-πiδM(Ql+f+s-M/2) G_f+s(z_j, -α_2), with G_f+s(z_j, α_2 = -δ) = ∑_j=0^Q z_j e^-2πi(f+s)jα_2, z_j = w_j G_l+(f+s)/Q(ξ_p, -α_1) e^2πiδ(Ql-M/2)j, G_l+(f+s)/Q(ξ_p, α_1) = ∑_p=0^N-1 ξ_p e^-2πi(l+(f+s)/Q)pα_1, ξ_p = e^-πiMpQδ F[f](y_j+Qp). (<ref>) shows that the fast FRFT of a QN-long weighted sequence is the composite of two fast FRFTs: the fast FRFT of a Q-long weighted sequence and the fast FRFT of an N-long sequence: G_Ql+f(w_j F[f](y_j+Qp) e^-πi(j+Qp)Mδ, -δ) = G_f+s(z_j, -α_2), z_j = w_j G_l+(f+s)/Q(ξ_p, -α_1) e^2πiδ(Ql-M/2)j. §.§ FRFT of N-long weighted sequence composed with FRFT of Q-long sequence We denote by f̂(x_Ql+f+s) the approximation of f(x_Ql+f+s); the expression (<ref>) becomes f̂(x_Ql+f+s) = β/2π ∑_p=0^N-1 ∑_j=0^Q w_j F[f](y_j+Qp) e^i x_Ql+f+s y_j+Qp = β/2π e^-πiδM(Ql+f+s-M/2) ∑_p=0^N-1 e^2πiδ(Ql+f+s-M/2)Qp ∑_j=0^Q w_j F[f](y_j+Qp) e^2πiδ(Ql-M/2)j e^2πiδ(f+s)j = β/2π e^-πiδM(Ql+f+s-M/2) ∑_p=0^N-1 G_f+s(z_j, δ) e^-πiδMQp e^2πi(l+(f+s)/Q)δQ^2 p. We have the first fractional Fourier transform (FRFT) on the Q-long complex sequence {z_j}_0≤j≤Q: G_f+s(z_j, α_2 = -δ) = ∑_j=0^Q z_j e^-2πi(f+s)jα_2, with z_j = w_j F[f](y_j+Qp) e^2πiδ(Ql-M/2)j. Then f̂(x_Ql+f+s) becomes f̂(x_Ql+f+s) = β/2π e^-πiδM(Ql+f+s-M/2) ∑_p=0^N-1 G_f+s(z_j, δ) e^-πiδMQp e^2πi(l+(f+s)/Q)δQ^2 p = β/2π e^-πiδM(Ql+f+s-M/2) G_l+(f+s)/Q(ξ_p, δQ^2). We have the second fractional Fourier transform (FRFT) on the N-long complex sequence {ξ_p}_0≤p<N: G_l+(f+s)/Q(ξ_p, α_1 = -δQ^2) = ∑_p=0^N-1 ξ_p e^-2πi(l+(f+s)/Q)pα_1, with ξ_p = G_f+s(z_j, δ) e^-πiδMQp. This alternative scheme yields the approximation f̂(x_Ql+f+s) = β/2π e^-πiδM(Ql+f+s-M/2) G_l+(f+s)/Q(ξ_p, -α_1), with ξ_p = G_f+s(z_j, -α_2) e^-πiδMQp and z_j = w_j F[f](y_j+Qp) e^2πiδ(Ql-M/2)j. The results (<ref>) are similar to the results (<ref>). However, the numerical computation shows that the results (<ref>) do not provide the numerical inversion of the Laplace transforms.
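Before examining the figures, the basic (non-weighted) fast FRFT of Section 3.1 can be sketched in a few lines of numpy for the case s = 0; the closing identity check, which uses the fact that G_k(x, 1/M) reduces to an ordinary DFT, is our own sanity test rather than anything from the paper.

import numpy as np

def fast_frft(x, delta):
    """Fast fractional Fourier transform G_k(x, delta), k = 0..M-1 (s = 0),
    computed with three length-2M FFTs as in Section 3.1."""
    M = len(x)
    j = np.arange(M)
    # Extend y and z to 2M-long sequences so the convolution becomes circular.
    y = np.zeros(2 * M, dtype=complex)
    y[:M] = x * np.exp(-1j * np.pi * j**2 * delta)
    z = np.empty(2 * M, dtype=complex)
    z[:M] = np.exp(1j * np.pi * j**2 * delta)
    jj = np.arange(M, 2 * M)
    z[M:] = np.exp(1j * np.pi * (jj - 2 * M) ** 2 * delta)
    # Circular convolution via the convolution theorem; keep the first M terms.
    conv = np.fft.ifft(np.fft.fft(y) * np.fft.fft(z))[:M]
    return np.exp(-1j * np.pi * j**2 * delta) * conv

# Check: with delta = 1/M the FRFT coincides with the ordinary DFT.
x = np.random.randn(64)
assert np.allclose(fast_frft(x, 1.0 / 64), np.fft.fft(x))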
As shown in Fig <ref>, Fig <ref>, and Fig <ref>, the results (<ref>) do not yield a function, in the sense that each input is not related to exactly one output. § ILLUSTRATION EXAMPLES §.§ Variance-Gamma VG(μ, δ, α, θ, σ) Distribution In this study, the VG model has five parameters: the location (μ), symmetry (δ) and volatility (σ) parameters, and the Gamma shape (α) and scale (θ) parameters. The VG model density function is proven to be (<ref>): f(y) = 1/(σΓ(α)θ^α) ∫_0^+∞ 1/√(2πν) e^-(y-μ-δν)^2/(2νσ^2) ν^α-1 e^-ν/θ dν. The VG model density function (<ref>) has an analytical expression involving a modified Bessel function of the second kind. The expression can be obtained by making some transformations and a change of variable in (<ref>): -(y-μ-δν)^2/(2νσ^2) - ν/θ = δ(y-μ)/σ^2 - 1/(2σ^2)(δ^2 + 2σ^2/θ)ν - ((y-μ)^2/σ^2)·1/(2ν), so (<ref>) becomes f(y) = e^δ(y-μ)/σ^2/(√(2π)σΓ(α)θ^α) ∫_0^+∞ e^-1/(2σ^2)(δ^2 + 2σ^2/θ)ν - ((y-μ)^2/σ^2)·1/(2ν) ν^α-3/2 dν. We consider the modified Bessel function of the second kind k_α(z) <cit.>: k_α(z) = 1/2 (1/2 z)^α ∫_0^+∞ e^-t - z^2/(4t) 1/t^α+1 dt, |arg(z)| ≤ π/4; k_α(z) is the second kind of solution of the modified Bessel equation z^2 d^2w/dz^2 + z dw/dz - (z^2 + α^2)w = 0. By the change of variable u = 1/(2σ^2)(δ^2 + 2σ^2/θ)ν, (<ref>) becomes f(y) = 2e^δ(y-μ)/σ^2/(√(2π)σΓ(α)θ^α) (|y-μ|/√(δ^2 + 2σ^2/θ))^α-1/2 k_-α+1/2(√(δ^2 + 2σ^2/θ) |y-μ|/σ^2). Table <ref> presents the estimation results for the five parameters (μ, δ, α, θ, σ) of the Variance-Gamma variable. The data come from the daily S&P 500 historical data (adjusted for splits and dividends) and span from January 4, 2010, to December 30, 2020. See <cit.> for more details. The VG density function in (<ref>) is continuous at μ but not differentiable at that value. Fig <ref> illustrates the continuity and non-differentiability at μ by the peak at μ. The expression for f(μ) can be determined analytically: f(μ) = 1/(√(2π)σΓ(α)θ^α) ∫_0^+∞ e^-1/(2σ^2)(δ^2 + 2σ^2/θ)ν ν^α-3/2 dν = 1/(√(2π)σΓ(α)θ^α) · Γ(α-1/2)/(1/(2σ^2)(δ^2 + 2σ^2/θ))^α-1/2 = 1/(√(2πθ)σ) · 1/(1 + (θ/2)(δ^2/σ^2))^α-1/2 · Γ(α-1/2)/Γ(α). For the VG parameter data μ̂=0.0848, δ̂=-0.0577, σ̂=1.0295, α̂=0.8845, θ̂=0.9378, we have f̂(μ̂) ≈ 0.8552. The Variance Gamma distribution is infinitely divisible, and the Fourier transform function has an explicit closed form (<ref>): [f](x) = e^-iμx/(1 + (1/2)θσ^2x^2 + iδθx)^α, f(y) = 1/2π ∫_-∞^+∞ e^iyx + Ψ(-x) dx. Based on the integral approximation of Proposition 1.1 in (<ref>), the numerical estimation of the density function (<ref>) becomes f̂(x) = β ∑_p=0^N-1 ∑_j=0^Q W_j e^iy_j+Qp x F[f](y_j+Qp) (integral approximation), f(x_k) = γ/2π e^-πi(k-M/2)Mδ G_k([f](y_j)e^-πijMδ, -δ) (fast FRFT), f̂(x_Ql+f) = β/2π e^-πiδM(Ql+f-M/2) G_f+s(z_j, -α_2) (composite fast FRFT). Taking into account the numerical density function in (<ref>), we can numerically compute the absolute error (f - f̂). §.§ Generalized Tempered Stable GTS(β_+, β_-, α_+, α_-, λ_+, λ_-) Distribution We consider a GTS variable Y = μ + X = μ + X_+ - X_- ∼ GTS(μ, β_+, β_-, α_+, α_-, λ_+, λ_-) with X_+ ∼ TS(β_+, α_+, λ_+) and X_- ∼ TS(β_-, α_-, λ_-). The characteristic exponent can be written <cit.> Ψ(ξ) = μξi + α_+Γ(-β_+)((λ_+ - iξ)^β_+ - λ_+^β_+) + α_-Γ(-β_-)((λ_- + iξ)^β_- - λ_-^β_-). Table <ref> presents the estimation results for the seven parameters (μ, β_+, β_-, α_+, α_-, λ_+, λ_-) of the Generalized Tempered Stable (GTS) distribution. The data come from the daily GTS historical data (adjusted for splits and dividends) and span from April 28, 2013, to June 22, 2023.
See <cit.> for more details. The characteristic function of the GTS variable has the following expression: ϑ(ξ) = E[e^iYξ] = e^Ψ(ξ). The GTS density function (f) generated by the characteristic function (<ref>) and the Fourier transform (F[f]) can be written <cit.> as follows: F[f](ξ) = ϑ(-ξ), f(y) = 1/2π ∫_-∞^+∞ e^iyx + Ψ(-x) dx. Based on the conventional FRFT and the FRFT of a Q-long weighted sequence composed with the FRFT of an N-long sequence, (<ref>) becomes f(x_k) = γ/2π e^-πi(k-M/2)Mδ G_k([f](y_j)e^-πijMδ, -δ) (FRFT without weights), f̂(x_Ql+f) = β/2π e^-πiδM(Ql+f-M/2) G_f+s(z_j, -α_2) (composite fast FRFT). The GTS probability density function has neither a closed form nor an analytic expression. We have the absolute error (f - f̂), which is the difference between the estimations of the composite fast FRFT (f̂) and the fast FRFT without weights (f). As shown in Fig <ref>, the absolute error (f - f̂) is almost zero. Analytically, the GTS probability density function does not have a closed form, but Fig <ref> shows that the GTS probability density function is smooth, that is, differentiable at least once. § CONCLUSION The closed composite Newton-Cotes quadrature rules and the fast fractional Fourier transform (FRFT) algorithm are reviewed in the paper. Both schemes are combined to yield the fast fractional Fourier transform (FRFT) of a QN-long weighted sequence. It is shown that the fast fractional Fourier transform (FRFT) of a QN-long weighted sequence is the composite of two fast FRFTs: the fast FRFT of a Q-long weighted sequence and the fast FRFT of an N-long sequence. By changing the order of composition, we have another theoretical alternative, the composite of the fast FRFT of an N-long weighted sequence and the fast FRFT of a Q-long sequence. However, the numerical computation shows that this composition does not provide the numerical inversion of the Fourier and Laplace transforms. The composite scheme of the two fast FRFTs (the fast FRFT of a Q-long weighted sequence and the fast FRFT of an N-long sequence) was applied to estimate the probability density functions of the Variance-Gamma VG(μ, δ, α, θ, σ) distribution and the Generalized Tempered Stable GTS(μ, β_+, β_-, α_+, α_-, λ_+, λ_-) distribution. Compared to the non-weighted fast FRFT, the composite fast FRFT provides more accurate results with a small sample size, and the accuracy increases with the number of weights (Q). The composite fast FRFT performs relatively well when the inversion of Fourier and Laplace transforms has singularities. This was the case for the VG probability density function, which is continuous but non-differentiable at x=μ. Analytically, the GTS probability density function does not have a closed form, but the numerical results show that the GTS probability density function is smooth, that is, differentiable at least once. | http://arxiv.org/abs/2311.16379v1 | {
"authors": [
"A. H. Nzokem"
],
"categories": [
"math.NA",
"cs.NA",
"math.PR"
],
"primary_category": "math.NA",
"published": "20231127235459",
"title": "Enhanced the Fast Fractional Fourier Transform (FRFT) scheme using the closed Newton-Cotes rules"
} |
[ Ulf Leser January 14, 2024 ====================
Even though online social movements can quickly become viral on social media, language can be a barrier to timely monitoring and analysis of the underlying social behaviors. This is especially true for under-resourced languages on social media like dialectal Arabic, the primary language used by Arabs on social media. Therefore, it is crucial to be mindful of this and to provide cost-effective solutions that efficiently exploit resources from high-resourced languages to solve language-dependent online social behavior analysis in under-resourced languages on social media. This paper proposes to localize the content of resources in high-resourced languages into under-resourced Arabic dialects. Content localization goes beyond content translation, which converts text from one language to another; content localization adapts culture, language nuances and regional preferences from one language to a specific target language and dialect. Automating the understanding of the natural and familiar day-to-day expressions in different regions is the key to achieving a wider analysis of online social behaviors, especially for smart cities. In this paper, we utilize content-localization based neural machine translation to develop sentiment and hate speech classifiers for two low-resourced Arabic dialects: Levantine and Gulf. Not only this, but we also leverage the power of unsupervised learning to facilitate the analysis of sentiment and hate speech predictions by inferring hidden insights and topics from the corresponding data and providing coherent interpretations of those insights and topics in their native language and dialects. The experimental evaluations on real data have validated the effectiveness of our proposed system in precisely distinguishing positive and negative sentiments as well as accurately identifying hate content in both the Levantine and Gulf Arabic dialects. Our findings shed light on the importance of considering the unique nature of dialects within the same language; ignoring the dialectal aspect would lead to inaccurate and misleading analysis. In addition to our experimental results, we present a proof-of-concept of our proposed system using real COVID-19 data collected directly from the Lebanon and Saudi Arabia geo-regions. We study the sentiment and hate speech behaviors during the COVID-19 pandemic in Lebanon and Saudi Arabia and provide a comprehensive analysis from three views: temporal, topic-based, and dialect-based. Our proposed unsupervised learning methodology has shown reliability in discovering insightful topics and dynamically providing coherent phrases to interpret the inferred topics in two different Arabic dialects: Levantine and Gulf. [Disclaimer: This work uses terms, sentences, or language that are considered foul or offensive by some readers. Owing to the topic studied in this work, quoting toxic language is academically justified, but I do not endorse the use of the contents of these quotes. Likewise, the quotes do not represent my opinions, and I condemn online toxic language.] § INTRODUCTION The trend in research is that researchers tend to build data resources for every problem in every language and even every dialect <cit.>. This can be seen in the huge discrepancy of resources between languages, where very few have a high-resource status while many others are low-resourced.
English, for instance, is a high-resourced language, while more than 70% of OSN users speak languages other than English, of which dialectal Arabic is among the top languages used on social media; yet it is considered a low-resourced language. This research trend limits the ability of current systems to generalize to other languages. Moreover, the expensive cost (i.e. in terms of HW/SW requirements, time, effort, and human labor) of building a data resource for each language and dialect has definitely contributed to the under-resourced status of languages like dialectal Arabic, the primary language used on social media among Arabs. The language under-resource issue introduces a significant barrier for smart city authorities who need access to large data volumes containing important information about the online social behaviors (OSB) of citizens. Understanding online social behaviors is crucial for making informed decisions about how to improve and manage smart cities. One way to address the language under-resource issue is to develop intelligent tools that allow communication between high- and low-resourced languages, making it possible to exploit existing data resources to solve OSB tasks in low-resourced languages. Machine translation is a common tool to bridge the communication gap between high- and low-resourced languages on social media; however, word-to-word translation does not ensure transferring the context, culture, and tone of messages from one language/dialect to another, and the result might not resonate with the familiar social day-to-day expressions tailored to specific languages/dialects. In other words, the traditional translation approach does not ensure that the translated content is culturally accurate and appropriate for understanding the cognitive, affective, and emotional aspects of online social behaviors residing within shared content on social media. Further, conversations on social media do not follow fixed rules and are dominated by an informal nature, and current machine translation systems are still immature at delivering accurate translations of such informal social conversations. In response, we propose a content localization system that is able to understand informal communications on social media (i.e. in high-resourced languages) and transfer their context, culture, and tone to languages and dialects with a low-resource status. This work proposes to localize data resources (i.e. collected, cleaned, and annotated) of a high-resourced language into a low-resourced dialectal language. The localized data resources are then used to develop OSB models (i.e. sentiment and hate speech in this paper) in the low-resourced language/dialects with minimized time, effort, and human labor requirements. In this paper, we examine the validity of our proposed system on English as a high-resourced language and two Arabic dialects (Levantine and Gulf) as low-resourced language/dialects. Given the heavy information flow being generated daily on social media <cit.>, discovering the knowledge insights embedded within that information in real time is of great importance <cit.>; this is extremely crucial to facilitate the analysis of the corresponding online social behaviors of citizens within smart cities, especially during critical situations like pandemics. Yet, it is nearly impossible to manually monitor huge loads of online data flow <cit.>.
Thanks to their unsupervised learning nature, topic modeling algorithms make it possible to cluster huge volumes of data into meaningful insights quickly and without prior human knowledge involved. Since classical topic modeling methods are sensitive to noise, from which social media data suffers <cit.>, proper NLP preprocessing tools are required to clean the data noise and keep only the informative pieces of information. Such tools do exist in high-resourced languages like English but are insufficient or non-existent in low-resourced languages like dialectal Arabic <cit.>. Fortunately, unsupervised deep learning algorithms have proven robust against data noise. Thanks to the unique nature of the Transformer architecture and the power of transfer learning, it has become possible to identify insightful topics in languages with insufficient NLP preprocessing tools. Although topic modeling algorithms can be used to capture key insights that are present in OSN data <cit.>, they rely on a set of top keywords to explain the inferred insights <cit.>, which cannot provide a comprehensive understanding of those insights <cit.>. Phrases are preferable over single keywords to explain topics <cit.>; single keywords do not offer context to relate the keywords of a topic, while sentences are too specific and usually focus on a single aspect of a topic. Given the free nature of conversation sharing on OSNs, finding the optimal lengths of phrases that best describe the aspects of a topic is challenging. Current phrase-extraction methods rely on a fixed sliding window, which might miss important phrases with lengths other than the predefined ones. Some topics might use longer or shorter phrase expressions than others, and this is difficult to control on open OSN platforms that accommodate unstructured data formats. This work targets the mentioned limitation and proposes to extract dynamic-length phrases to coherently describe inferred topics. This study is intended to contribute to public management by monitoring social media activities in low-resourced languages; this can be used to improve public well-being and to prevent potential social unrest in smart cities. By monitoring the behavior of online social activities, authorities can identify trends, concerns, or potential causes of social unrest. It could also be used to identify those who incite violence, as it has been shown that this type of social-tie information can be inferred from OSN data <cit.>. All this information helps authorities address corresponding issues and instantly make proper decisions that ensure QoL in smart cities. We summarize the contributions of this paper as follows: * Design a content-localization based system for real-time monitoring of online social behaviors in low-resourced dialectal Arabic on social media. * Develop a model for real-time data exploration and dynamic interpretation using an unsupervised learning approach for two low-resourced Arabic dialects: Levantine and Gulf. * Develop a content-localization based BERT sentiment classifier for two low-resourced Arabic dialects: Levantine and Gulf. * Develop a content-localization based BERT hate speech classifier for two low-resourced Arabic dialects: Levantine and Gulf. * Conduct a large-scale analysis of online social behavior during the COVID-19 pandemic in Lebanon and Saudi Arabia (i.e. two under-resourced Arabic dialects: Levantine and Gulf). The rest of the paper is organized as follows. Section <ref> presents the related work.
Our proposed system is presented in Section <ref>. Section <ref> explains the experimental design and evaluation metrics, whereas the results and analysis are discussed in Section <ref>. Finally, in Section <ref> we conclude our proposed work and discuss possible future directions. § RELATED WORK Neural machine translation (NMT) has made significant progress in the past decade on language pairs with abundant resources like English-French and English-Spanish. Unlike English-MSA (Modern Standard Arabic) machine translation, which has demonstrated remarkable performance lately <cit.>, the accuracy of English-dialectal Arabic machine translation <cit.> is significantly lower than that of English-MSA. The high variability of Arabic dialects has definitely contributed to the scarcity of large parallel data resources, which in turn introduces a major challenge in developing accurate NMT systems for low-resourced dialectal languages like Arabic <cit.>. Despite dialectal Arabic being more widely used on social media than Modern Standard Arabic (MSA), dialectal Arabic machine translation from English is still in its early stages <cit.>. Zbib et al. <cit.> proposed a statistical machine translation (SMT) system from Egyptian and Levantine Arabic to English. Another English-Egyptian MT system was studied by Nagoudi et al. <cit.>, where transfer learning was implemented during MT training. The MDC corpus for English-Levantine/North-African/Egyptian was proposed by Bouamor et al. <cit.>; a preliminary analysis of the corpus confirms the differences between dialects, especially between Western and Eastern Arabic dialects. In later work, Bouamor et al. <cit.> proposed the MADAR parallel corpus and lexicon for city-level dialectal Arabic in the travel domain. They found that the average similarity between Arabic dialects in their dataset is 25.8%. A Qatari-English speech corpus consisting of 14.7k sentence pairs was collected by Elmahdy et al. <cit.> from Qatari TV shows. The Bible [https://www.biblesociety.ma], [https://www.bible.com] was translated from English to the Arabic North-African dialect. Sajjad et al. <cit.> combined the abovementioned datasets to develop an NMT system from dialectal Arabic to English. We find that these studies suffer from at least one of seven limitations that contradict the main objective of this study: (1) a domain-dependent translation corpus, (2) inconsideration of social media communication culture like abbreviations and composed words, (3) inconsideration of idiomatic expressions, (4) inconsideration of code switching, (5) non-professional translators who are not bilingual native to near-native in both languages, (6) inconsideration of content localization for both the context and tone of messages, and (7) a small-size corpus. In addition, most previous works have studied and tested a translation direction from dialectal Arabic to English. Yet, the literature suggests that OSB analysis yields better results when training on a native language/dialect. Our work addresses the abovementioned limitations and proposes to utilize content-localization based dialectal Arabic NMT customized for social media conversations. The objective of this study is to provide a system that minimizes the expensive cost associated with the current practice of researchers, where they build data resources for every OSB problem in every language and even dialect <cit.>. The deep learning approach has reliably solved the problem of insufficient language-dependent NLP preprocessing tools for under-resourced languages like dialectal Arabic.
Unlike classical topic modeling algorithms like LDA and NMF, which require effort in data preprocessing and hyperparameter tuning in order to predict meaningful clusters or topics, the BERT-based (i.e., Transformer-based) topic modeling approach <cit.> alleviates this requirement by leveraging pre-trained language models that learn contextual representations of words in the presence of data noise, instead of the classical way of learning on count data while ignoring the order and context of words. However, BERT-based topic models rely on the top single keywords to describe inferred topics. According to the literature <cit.>, people favor the use of phrases over single keywords and sentences to describe a topic; combining keywords creates difficulties in comprehending the main meaning of topics, while long sentences are too specific and might miss other aspects of the topic. Existing phrase extraction methods <cit.> not only rely on language-dependent preprocessing tools (e.g., chunking, POS tagging, and n-grams), but also rely on fixed sliding windows to extract phrases. This contradicts the diversity of OSN conversations (i.e., expressions), which could be in any language/dialect and of any length. In other words, a pre-defined sliding window for phrase extraction makes it difficult to determine the optimal lengths of phrases to describe topics inferred from social media data. Further, human bias can arise from manual interpretation of topics <cit.>. Also, the diversity and huge size of OSN content make the availability of domain experts to label topics a difficult task. Exploiting external knowledge resources to automatically label topics is also not applicable to social media data streams, since emerging social content might not exist in these external resources in a timely manner <cit.>. Existing topic interpretation studies targeting social media data have focused on either single keywords <cit.> or fixed-length phrases <cit.> to describe topics resulting from topic models. The meaning of a sentence varies with the length and order of its constituent words. This paper proposes to use the RAKE algorithm <cit.> for dynamic-length topic interpretation, as it solves the mentioned issues found in current topic labeling methods. To the best of our knowledge, this paper is the first to address these issues and to utilize the RAKE algorithm for automatic interpretation of BERTopic-style topics in two under-resourced Arabic dialects on social media: Levantine and Gulf.

§ METHOD

Figure <ref> presents the proposed content-localization based system for modeling and analyzing online social behaviors (i.e., sentiment and hate speech in this work) in low-resourced Arabic dialects (i.e., Levantine and Gulf in this study). An online social behavior (OSB) data resource in a high-resourced language is localized to a low-resourced language/dialect of interest in the content-localization engine. The localized data resource is fed into the OSB engine to develop the OSB models of interest (i.e., sentiment and hate analyzers in this paper) in the target language/dialect. A supervised learning approach is used to train and test the OSB models. Data to be analyzed (i.e., the COVID-19 case study) is collected in the language/dialect of interest from a social media platform (i.e., Twitter in this paper), and then fed in parallel into the OSB engine and the data exploration and interpretation (DEI) engine. The models in the OSB engine produce the corresponding predictions (i.e., sentiment and hate) from the data.
In the DEI engine, unsupervised learning algorithms are used to develop the DEI models. The data analysis engine uses the outputs of both the OSB and DEI engines and creates an analytic story from three views: language/dialect-based, temporal, and topic-based analysis. Language/dialect-based analysis provides a cultural view of online social behaviors. In temporal analysis, the online social behavior is illustrated in a timeline manner (i.e., over days, months, etc.), whereas topic-based analysis provides analysis based on the themes inferred from the given data.

§.§ Content-Localization based Neural Machine Translation

Content localization goes beyond translation, which converts messages from one language to another. Content localization adapts the content and context of messages to a specific language by taking into consideration local terminology and cultural customs. The same word might convey different meanings in different dialects; for example, the word "صاحبي" in the Arabic-Gulf dialect refers to a friend, while in the Arabic-Lebanese dialect it refers to a boyfriend. In this work, we adopt the content-localization translation approach to localize existing data resources (i.e., cleaned and annotated) in high-resourced languages (i.e., English in this paper) into low-resourced languages (i.e., dialectal Arabic in this paper) as an attempt to expand the analysis of online social behaviors across languages/dialects without the burden of creating new data resources for every language/dialect. It is important to mention that constructing new data resources is expensive in terms of time, effort, cost, and human labor. Not only this, but recruiting and retaining domain experts or workers who are willing to commit to annotating large datasets is also a barrier. Two neural machine translation models <cit.> are utilized to localize annotated sentiment and hate speech datasets in English (i.e., a high-resourced language) into two low-resourced Arabic dialects: Levantine and Gulf. Note that the Transformer architecture has been used to train the NMT models. Further, the development of the NMT models has considered five criteria <cit.>: (1) a content-localization based translation approach; (2) consideration of OSN cultural language and expressions: slang abbreviations and iconic emotions (e.g., emojis, emoticons) should be kept in the localized texts while preserving their order and context; hashtag words (single-word and composed-word hashtags) are also localized into the corresponding Arabic dialects; (3) consideration of informal language: informal language is used in daily conversations and includes slang expressions like "lol" and "OMG, idk but I'm feeling down today"; (4) consideration of idiomatic expressions: an idiomatic expression should not be translated word-for-word; instead, it should be localized to convey the context or its equivalent idiomatic expression in the corresponding dialect; (5) consideration of language code borrowing: code borrowing refers to the use of one primary language while mixing in words from another language to fit the primary language. For instance, the word "lol" is written using the Arabic alphabet as "لول"; similarly, the word "cheese" is written using the Arabic alphabet as "تشيز".

Figure <ref> illustrates the Transformer architecture used for the development of the content-localization based NMT models; it consists of 12 encoder layers and 12 decoder layers with a model dimension of 1024 and 16 attention heads. On top of both the encoder and decoder, there is an additional normalization layer that was found to stabilize training. Weights were initialized from the mBART pre-trained model <cit.>, which was trained on 25 distinct languages. The NMT models were fine-tuned on our English-multidialectal Arabic dataset customized for social media conversations <cit.>.
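To make this fine-tuning setup concrete, the sketch below shows one way an mBART checkpoint could be fine-tuned for English-to-dialectal-Arabic translation with the HuggingFace transformers library. This is a minimal illustration under assumptions, not the authors' training script: the checkpoint name (facebook/mbart-large-cc25), the optimizer settings, and the toy sentence pair are all ours.

```python
# Minimal sketch: fine-tuning mBART for English -> dialectal Arabic translation.
# Assumptions: checkpoint, learning rate, and the toy parallel pair are illustrative.
import torch
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="ar_AR"
)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

pairs = [("lol that movie was awesome", "لول الفلم كان رهيب")]  # toy example

model.train()
for en, ar in pairs:
    batch = tokenizer(en, text_target=ar, return_tensors="pt",
                      truncation=True, max_length=128)
    loss = model(**batch).loss  # cross-entropy over the target tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: force the decoder to start with the Arabic language token.
model.eval()
inputs = tokenizer("idk but I'm feeling down today", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ar_AR"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

In practice, localization-specific behavior (keeping emojis, localizing hashtags, handling code borrowing) comes from the fine-tuning data itself rather than from the training loop.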
§.§ Online Social Behavior (OSB) Modeling

This paper studies two types of online social behaviors: sentiment and hate speech. The general supervised deep learning classification framework is followed to build our sentiment and hate classifiers. Data is prepared and preprocessed before the training process starts. The BERT architecture is utilized in training our OSB classifiers. A BERT pre-trained model is fine-tuned by training the entire BERT architecture on our localized datasets in order to alleviate any possible biases resulting from the pre-training <cit.>. The BERT-base-arabic-camelbert-mix model <cit.>, used in this work, is pre-trained on a mix of Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA). A classification layer is appended to the BERT layer, where logits are produced. A softmax layer is used to normalize the output logits and compute the class probabilities. The models are optimized using the Adam optimizer with a learning rate of 1e-4, a weight decay of 0.01, learning rate warmup for 10,000 steps, and linear decay of the learning rate afterwards. An early-stopping approach is used to avoid over-fitting the neural network on the training data and to improve the generalization of the models. Finally, the models are evaluated and tested using validation and test sets before producing the final predictions. The sentiment analyzer predicts one of two classes, positive or negative sentiment, whereas the hate analyzer predicts one of two classes: hate or non-hate.
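As an illustration of this classification setup, the following sketch fine-tunes the CAMeL-Lab checkpoint for binary classification with the HuggingFace Trainer. Only the checkpoint name and the optimizer hyperparameters (learning rate 1e-4, weight decay 0.01, 10,000 warmup steps, linear decay) come from the text above; the toy dataset wrapper and the remaining Trainer settings are our assumptions.

```python
# Minimal sketch: fine-tuning an Arabic BERT for binary OSB classification.
# The toy dataset and most Trainer settings are illustrative assumptions.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "CAMeL-Lab/bert-base-arabic-camelbert-mix"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

texts = ["الفلم كان رهيب", "اليوم كان سيء"]   # toy localized tweets
labels = [1, 0]                                # 1 = positive, 0 = negative

class TweetDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(
    output_dir="osb-classifier",
    learning_rate=1e-4,            # from the text
    weight_decay=0.01,             # from the text
    warmup_steps=10_000,           # from the text
    lr_scheduler_type="linear",    # linear decay after warmup
    evaluation_strategy="steps",
    eval_steps=100,                # validate every 100 steps (evaluation protocol)
    num_train_epochs=3,            # assumption; early stopping would cut this short
)
trainer = Trainer(model=model, args=args,
                  train_dataset=TweetDataset(texts, labels),
                  eval_dataset=TweetDataset(texts, labels))
trainer.train()
```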
§.§ Data Exploration and Interpretation (DEI)

The DEI component is mainly responsible for exploring social media data by intelligently finding the major themes in the data and then generating explainable interpretations of these themes. Topic modeling is the approach adopted in this paper to explore and discover latent patterns (i.e., themes or topics) within large datasets. The general framework used in unsupervised topic learning is followed in this paper; Figure <ref> illustrates the methodology used to build our topic model. The data preprocessing for topic modeling is implemented with the criteria of increasing topic relevance and minimizing the noisy (i.e., uninformative) parts of the data. To learn the topics, a BERT-based (i.e., Transformer-based) approach <cit.> is used in this study. BERT-based topic modeling leverages pre-trained large language models that learn contextual representations of word features, unlike the traditional topic modeling approach (e.g., LDA and NMF), which learns on count data while ignoring the context and order of words <cit.>. The output of the topic model is k topics. For each topic, we consider the top n keywords and their corresponding subsets of data documents (i.e., tweets in this study). After the topic clusters have been inferred, they are fed into the topic interpretation component to automatically generate the interpretations of the topics, hence giving us a deeper understanding of the topics without manual intervention. Using the top n keywords only is inadequate for interpreting the coherent meaning of topics <cit.>. A good interpretation of a topic should be able to capture its meaning and distinguish it from other topics. Single words are not able to do this, as they lack the context of phrases and sentences <cit.>. To elaborate, single keywords are too broad and may overlook the semantic relationships needed to convey the main idea of the topics. Phrases, on the other hand, provide context to single words, resulting in stronger topic cohesion. Additionally, phrases are general enough to capture the overall meaning of topics <cit.>.

This paper proposes the use of an unsupervised dynamic-length phrase extraction approach: the RAKE (Rapid Automatic Keyword Extraction) algorithm <cit.>, which is utilized to identify topic phrases. Note that the RAKE algorithm does not require specific domain or language dependencies; it is a domain- and language-independent keyword-extraction algorithm that uses word frequency and co-occurrence to identify meaningful phrases. RAKE is a graph-based algorithm designed on the assumption that the multiple words constituting a keyword are rarely split by stop words or punctuation marks. The assumption states that stop words and punctuation are uninformative, unlike the remaining words, which are assumed to be informative and are referred to as content words. RAKE starts extracting phrases by splitting a given text into a set of candidate keywords at the occurrence of word delimiters. Next, the resulting set of candidate keywords is split into sequences of consecutive words at the occurrence of phrase delimiters or stop words. The consecutive words within a sequence together form a new candidate keyword (i.e., a phrase). A word-word co-occurrence graph is created to be later used in computing the scores of the candidate phrases. Three scoring metrics were proposed: (1) deg(word), the word degree, which favors words that occur frequently in a given document as well as in longer candidate phrases; (2) freq(word), the word frequency, which computes frequent words without taking the word-word co-occurrences into consideration; (3) deg(word)/freq(word), the ratio of degree to frequency. In this work, we use the word degree metric deg(word) to calculate phrase scores. The score of a candidate phrase is computed as the sum of its constituent words' scores. The RAKE algorithm has been proven computationally fast and effective in dynamic-length phrase extraction from limited-size social media datasets for automatic and coherent topic interpretation <cit.>.

We follow the methodology depicted in Figure <ref> to extract the phrases of topics. First, the data subsets of the inferred topics are preprocessed, each independently. This process is similar to that of topic modeling; however, in phrase extraction, stopwords are kept. Second, the RAKE algorithm is applied to each topic data subset independently to extract the corresponding keywords and phrases. The weights of the resulting keywords and phrases are calculated using the word degree metric deg(word), which favors words that occur frequently in a document as well as in longer candidate phrases. Third, keywords and phrases are selected based on the top n keywords (i.e., those resulting from our BERT-based topic model). Before selecting the RAKE keywords and phrases, duplicated keywords (i.e., those resulting from our BERT-based topic model) across topics are removed. Keyword duplication might lead to ambiguous interpretations of topics; by removing duplication, we ensure that each topic is interpreted in a distinctive way. Finally, the keywords and phrases are assigned weights based on their importance (i.e., degrees) and then ranked according to those degrees. The output phrases have different lengths, with a minimum of two words. To determine the optimal lengths of phrases, the average phrase length for each topic is calculated. After calculating the average, phrases carrying a more general concept and phrases carrying a more specific concept than the average are considered; this is done by selecting phrases shorter and longer than the calculated average length. A sketch of the extraction step is given below.
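To make the deg(word) scoring concrete, here is a compact, self-contained sketch of RAKE-style phrase extraction. The stopword list and the example text are toy assumptions; a production system would plug in a dialect-appropriate stopword list and the per-topic tweet subsets.

```python
# Minimal RAKE-style phrase extraction with deg(word) scoring.
# Toy English stopwords/text; real use would supply dialectal Arabic stopwords.
import re
from collections import defaultdict

STOPWORDS = {"the", "is", "a", "of", "and", "in", "to", "on"}  # toy list

def candidate_phrases(text):
    """Split on punctuation, then on stopwords, keeping runs of content words."""
    phrases = []
    for fragment in re.split(r"[.,;:!?()\n]", text.lower()):
        run = []
        for word in fragment.split():
            if word in STOPWORDS:
                if run:
                    phrases.append(run)
                run = []
            else:
                run.append(word)
        if run:
            phrases.append(run)
    return phrases

def score_phrases(phrases):
    """deg(word): each occurrence of a word contributes the length of its phrase
    (its co-occurrences plus itself); a phrase scores the sum of member degrees."""
    degree = defaultdict(int)
    for phrase in phrases:
        for word in phrase:
            degree[word] += len(phrase)
    return sorted(
        ((" ".join(p), sum(degree[w] for w in p)) for p in phrases),
        key=lambda kv: kv[1], reverse=True,
    )

text = "private sector ignores the decisions and employees of the private sector complain"
for phrase, score in score_phrases(candidate_phrases(text))[:5]:
    print(f"{score:4d}  {phrase}")
```

Because only stopwords and punctuation are language-specific here, the same routine transfers to Levantine or Gulf tweets by swapping the stopword list, which is what makes the approach language/dialect independent.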
§ EXPERIMENT DESIGN AND EVALUATION METRICS

§.§ Datasets and Preprocessing

We list below the datasets we have used for modeling and evaluating our proposed system:

SemEval-2013/2017 Sentiment Dataset <cit.>: We have combined the two sentiment datasets of SemEval 2013 and 2017 and removed the neutral class. We have balanced the classes to obtain 6,500 positive tweets and 4,523 negative tweets. This dataset is localized into Arabic Levantine and Gulf using our proposed NMT models to be used later for sentiment modeling.

ArSentD-Lev <cit.>: This dataset consists of 1,232 positive tweets and 1,884 negative tweets collected from the Arabic Levant region and manually annotated through a crowd-sourcing approach. This dataset is used for evaluation purposes.

OCLAR dataset <cit.>: OCLAR is an opinion corpus of Arabic Lebanese reviews. The positive class comprises reviews rated from 3 to 5 (3,465 reviews), while the negative class comprises reviews rated from 1 to 2 (451 reviews). This dataset is used for evaluation purposes.

Saudi Banks Dataset <cit.>: This manually annotated dataset contains Arabic-Saudi tweets about four Saudi banks. The dataset contains 8,669 negative tweets and 2,143 positive tweets. This dataset is used for evaluation purposes.

Saudi Vision-2030 Dataset <cit.>: This manually annotated dataset contains tweets discussing several aspects of Saudi Vision 2030. It consists of 2,436 positive tweets and 1,816 negative tweets. This dataset is used for evaluation purposes.

HatEval Dataset <cit.>: This manually annotated and approved English dataset was constructed with women and immigrants as targets of hate speech. The dataset contains 5,470 hate tweets. This dataset is translated into Arabic Levantine and Gulf using our proposed NMT models to be used later for hate speech modeling.

Let-Mi dataset <cit.>: This dataset consists of Levantine tweets annotated for detecting misogynistic behavior on online social media. The dataset, which consists of 2,654 hate tweets and 2,586 non-hate tweets, was annotated manually by Levantine speakers. This dataset is used for evaluation purposes.

COVID-19 dataset - Lebanon <cit.>: Arabic tweets were retrieved using the geo-coordinates of Lebanon during the COVID-19 pandemic in 2020. This dataset is used for evaluation purposes in our COVID-19 case study.

COVID-19 dataset - Saudi Arabia <cit.>: Arabic COVID-19 related tweets were collected using the geo-coordinates of Saudi Arabia during eight consecutive days, March 14-21, 2020. This dataset is used for evaluation purposes in our COVID-19 case study.

A list of preprocessing steps was implemented to prepare the data before the modeling stage: (1) removing extra whitespace, (2) removing encoding symbols, (3) removing URLs, (4) converting text to lower case, (5) removing tashkeel and harakat (tashkeel or harakat refer to all the diacritics placed over or below letters), (6) normalizing Hamza, (7) removing user mentions, (8) removing special characters and numbers, (9) removing stopwords. A sketch of several of these steps follows.
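As a concrete illustration of these cleaning steps, the snippet below implements a subset of them with regular expressions; the exact Unicode character sets for the diacritics and Hamza normalization, and the order of operations, are our assumptions rather than a prescribed implementation.

```python
# Minimal sketch of the tweet-cleaning steps; regex patterns are assumptions.
import re

ARABIC_DIACRITICS = re.compile(r"[\u0610-\u061A\u064B-\u0652\u0670]")  # tashkeel/harakat

def clean_tweet(text: str) -> str:
    text = re.sub(r"https?://\S+", " ", text)          # (3) remove URLs
    text = re.sub(r"@\w+", " ", text)                  # (7) remove user mentions
    text = text.lower()                                # (4) lower-case Latin parts
    text = ARABIC_DIACRITICS.sub("", text)             # (5) remove tashkeel/harakat
    text = re.sub(r"[أإآ]", "ا", text)                 # (6) normalize Hamza forms of alef
    text = re.sub(r"[^\w\s\u0600-\u06FF]", " ", text)  # (8) drop special characters
    text = re.sub(r"\d+", " ", text)                   # (8) drop numbers
    return re.sub(r"\s+", " ", text).strip()           # (1) collapse extra whitespace

print(clean_tweet("@user شاهدوا الفيلمَ الجديدَ!! https://t.co/xyz"))
```

Stopword removal (step 9) is applied for topic modeling but, as noted in the DEI methodology, stopwords are deliberately kept for phrase extraction.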
§.§ Experimental Design and Evaluation Protocol

We design the experiments of this study as follows:

(1) We localize existing annotated sentiment datasets in English into a target language/dialect (i.e., Arabic Levantine and Gulf in this experiment). Note that we preserve the annotations of the source datasets. We then train sentiment classifiers using these localized datasets. Later, we evaluate our trained classifiers on external datasets, under the condition that the external datasets have been created and annotated in the native target language and dialects. In this experiment, we localize the English sentiment datasets <cit.> to Arabic Levantine and Gulf using our proposed NMT models. The localized datasets are used later to train Arabic sentiment classifiers for the Levantine and Gulf dialects. The purpose of this experiment is to examine the validity of our proposed approach, which aims to minimize the language/dialect dependency in modeling online social behavior (i.e., sentiment in this experiment), especially in low-resourced languages/dialects.

(2) We localize an existing annotated English hate speech dataset into two target Arabic dialects, Levantine and Gulf, using our proposed NMT models. Then, we train two Arabic hate classifiers using these localized datasets, one for each dialect. Note that we preserve the annotations of the source dataset. After evaluating our dialectal Arabic hate classifiers on the validation split, we further assess their performance on an external dataset in the Arabic Levantine dialect. This Levantine external dataset has been constructed and annotated in the native Levantine dialect by native Levantine speakers. In this experiment, we localize the English hate dataset <cit.> to the Arabic Levantine and Gulf dialects to be used later for training Arabic-Levantine and Arabic-Gulf hate classifiers. The purpose of this experiment is to examine the impact of different dialects of the very same language on the analysis of online social behaviors on social media (i.e., hate speech in this experiment).

For sentiment and hate speech modeling, the datasets have been randomly split into 80% for training and 20% for validation. We evaluate our models on the validation splits during training (every 100 steps) to track their learning progress. We have used an early-stopping approach to prevent potential overfitting to the training data by regularizing model learning during the training process. Accuracy, precision, recall, and F-score are common evaluation metrics used for supervised classification evaluation. It is worth noting that precision, recall, and F-score give a better view of model performance than accuracy alone does.

(3) We use Transformer-based deep topic learning to train and evaluate our topic models using two dialectal Arabic datasets: the COVID-19 datasets for Lebanon <cit.> (i.e., Levantine dialect) and Saudi Arabia <cit.> (i.e., Gulf dialect). BERTopic <cit.> is used to learn representative topics from the two COVID-19 datasets, with a pre-trained Arabic language model used to produce the embeddings. Note that the number of topics does not need to be defined in advance, as BERTopic <cit.> uses the HDBSCAN clustering algorithm, which does not allow pre-defining the number of clusters. The coherence score <cit.> is used as a metric to evaluate the performance of our topic models. The coherence score of a topic is calculated by measuring the degree of semantic similarity between its highest-scored words.
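The following sketch illustrates this setup: fitting BERTopic with a multilingual sentence-embedding model and scoring the result with gensim's coherence measure. The embedding checkpoint name and the choice of the c_v coherence variant are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: BERTopic over Arabic tweets + topic coherence evaluation.
# Embedding model and coherence variant are assumptions.
from bertopic import BERTopic
from gensim.corpora import Dictionary
from gensim.models.coherencemodel import CoherenceModel

docs = ["...cleaned Arabic tweets..."]  # placeholder for the COVID-19 tweet sets

topic_model = BERTopic(
    embedding_model="paraphrase-multilingual-MiniLM-L12-v2",  # assumed checkpoint
    calculate_probabilities=False,
)
topics, _ = topic_model.fit_transform(docs)  # topic count decided by HDBSCAN

# Collect the top words per topic (skip -1, BERTopic's outlier cluster).
topic_words = [
    [word for word, _ in topic_model.get_topic(t)]
    for t in set(topics) if t != -1
]

tokenized = [doc.split() for doc in docs]
dictionary = Dictionary(tokenized)
coherence = CoherenceModel(
    topics=topic_words, texts=tokenized,
    dictionary=dictionary, coherence="c_v",
).get_coherence()
print(f"mean topic coherence: {coherence:.3f}")
```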
(4) For topic dynamic interpretation (i.e., phrase extraction), we implement the RAKE algorithm and apply it to each tweet subset corresponding to each topic inferred by our BERTopic models. We found that an average phrase length of three has the highest frequency across all topics for the Lebanon and Saudi Arabia datasets. Therefore, we additionally consider phrases of lengths two and four to add a more general meaning and a more specific meaning, respectively, to the phrases of length three.

§ RESULTS AND ANALYSIS

§.§ Topic Modeling and Interpretation

Two BERTopic-based models were trained on the two Arabic datasets: COVID-19 in Lebanon (i.e., Arabic-Levantine dialect) and COVID-19 in Saudi Arabia (i.e., Arabic-Gulf dialect). Both performed at a coherence score of 0.1 with a topic size of 100, as determined by BERTopic during the training process. We group the inferred topics for both Lebanon and Saudi Arabia into categories, as seen in Figure <ref>. Figure <ref> lists a sample of the top 5 keywords inferred by our BERTopic models; the top five keywords provide a general idea about the inferred topics but not coherent interpretations of the topics. In Figure <ref>, we list the results of our RAKE-based phrases extracted from the COVID-19 datasets of Lebanon and Saudi Arabia. The figure shows a sample of topic keywords and phrases used during the COVID-19 pandemic there. Note that larger topic samples with their keyword and phrase interpretations are presented in the appendix in Figure <ref>. By looking at the keywords of topic 1-Saudi Arabia (Figure <ref>), we can see that these keywords are mainly about the private sector: "القطاع الخاص" meaning private sector, "العمل" work, "القطاع" sector, "الموظفين" employees, "الشركات" companies, "الحكومي" governmental, and "موظف" an employee. However, this set of single keywords is not contextualized; therefore, the message is not conveyed. It is our topic interpreter that provides details about the keywords by adding context, which facilitates the understanding of the topic. Phrases of lengths 3 and 4 have added information about the private sector and employees; the phrases of length 3 ("القطاع الخاص ضد" meaning the private sector is against, "موظفين القطاع الخاص" the private sector's male employees, "موظفات القطاع الخاص" the private sector's female employees) indicate that the private sector is against its employees of both genders, male and female.
The idea of the topic is wrapped up in the phrases of length 4; the phrases "القطاع الخاص الصحي مظلوم" meaning the private health sector is oppressed, "القطاع الخاص يتجاهلون القرارات" the private sector is ignoring the decisions, "اغلاق القطاع الخاص مطلب" shutting down the private sector is a demand, and "إلزام القطاع_الخاص بتعليق العمل" forcing the private sector to suspend work, add more context to complete the idea of the topic: the employees of the private sector seem to have been complaining that the private sector was not abiding by the coronavirus measures implemented by the authorities during the early stages of the pandemic back in 2020, and was demanding that employees come to the workplace instead of quarantining at home. Similar to topic 1-Saudi Arabia, the phrases of lengths 2, 3, and 4 ("مسلسلات نتفلكس" meaning Netflix series, "مسلسل جديد" a new TV series, "مسلسل فاست" the Fast TV series, "مسلسل تركي" a Turkish series, "افلام نتفلكس" Netflix movies, "مافيه زي مسلسلات زمان" nothing like old TV series, "جبت مسلسلات بالنتفلكس مظلومه" I found Netflix series that are underrated, "نصائح ل مشاهدات افلام" recommendations of movies to watch, "فلم الليله جمميل فعاليات" the movie of tonight is nice, activities) wrap up the context of topic 2-Saudi Arabia, whose single-keyword set includes "مسلسلات" meaning TV series, "فلم" a movie, "طاش" Tash, an 80's Saudi comedy show, and "مشاهدات" views. Unlike the single keywords, which fail to convey the message, those phrases manifest that some people in Saudi Arabia have been watching globalized movies and series, giving recommendations, and voicing their opinions on movies. Phrases of maximum length are mostly discarded, since long phrases (i.e., sentences) do not provide the overall meaning of the topics; instead, they only show one part of the whole topic. A case example can be seen in topic 2 of Saudi Arabia (Table <ref>). The long sentence "اكملت للتو فلم للمخرج مارتن سكورسيزي والمصور روبيرت ريتشاردسون امسيت ثملا بسحر الفلم منتهى الغموض النهاية غير متوقعة ابدا أيهما الأفضل" meaning "I just finished watching a movie directed by Martin Scorsese and filmed by Robert Richardson. I was enchanted by the thrilling and unpredictable aspects of the movie...", provides a specific sub-detail of the topic, showing only one part of the whole. This finding is supported by the results obtained by Mei et al. <cit.>, namely that sentences might not accurately capture the general meaning of a topic, as they can be too specific. More examples of our extracted phrases can be seen in the Appendix (Figure <ref>).

§.§ Sentiment Analysis

Table <ref> presents the performance of the proposed sentiment and hate classifiers trained on the localized datasets (i.e., translated from English to the Arabic Levantine and Gulf dialects using the proposed NMT models). The results represent the classification performance on the validation split of the same data the classifiers were trained on. The English-to-Arabic-Levantine and English-to-Arabic-Gulf classifiers are shown to have effectively learned to distinguish between positive and negative sentiment classes using the localized datasets (i.e., produced by the proposed NMT models). Both the Levantine and Gulf sentiment classifiers performed the same in terms of accuracy (86%), precision (0.86), and recall (0.86). Figure <ref> illustrates the high-frequency words predicted as positive or negative by our sentiment classifiers.
The words in the figures corresponding to the positive class (<ref>, <ref>) reflect positive sentiment, such as "متحمس" meaning "excited", "طيب" meaning "good", "يحنن" meaning "so good or spectacular", "يضحك" meaning "funny", "مبسوط" meaning "happy", "المفضل" meaning "favourite", "رائعة" meaning "spectacular", and "زين" meaning "nice". The figures related to the negative class (<ref>, <ref>) likewise reflect negative sentiment, such as "قتل" meaning "killing or murder", "غلط" meaning "wrong or mistake", "سيء" meaning "bad", "يضرب" meaning "beat", "أكره" meaning "I hate", "زعلان" meaning "sad or upset", "موت" meaning "death", "مات" meaning "died", and "غبي" meaning "stupid or dumb". From this, we claim that our sentiment classifiers have effectively learned sentiment from our localized data and successfully distinguish between positive and negative classes, which, in turn, demonstrates the effectiveness of using the content-localization based NMT approach to transfer the context of social media texts from a high-resourced language to low-resourced languages and dialects.

Table <ref> summarizes the performance of our localized Levantine and Gulf sentiment classifiers on external datasets. Each dataset corresponding to a specific dialect is used to evaluate the classifier trained on the same dialect (i.e., the Levantine dataset is used to evaluate the Levantine sentiment classifier and the Gulf dataset the Gulf sentiment classifier). The localized Arabic-Gulf sentiment classifier is evaluated on two Saudi (i.e., Gulf dialect) sentiment datasets: the Saudi Bank Reviews dataset <cit.> and the Saudi Vision 2030 dataset <cit.>. The results show that our Gulf classifier is able to distinguish between the two classes (i.e., positive and negative) in both datasets; it performed at a positive f-score between 0.6 and 0.76, a negative f-score between 0.7 and 0.93, and an overall accuracy between 66% and 89%. It can be observed that the Gulf classifier performs better on the negative class than on the positive class. After investigating the data, we found mislabeling or ambiguity in some sentences as to whether they belong to the positive or negative class, as shown in the following examples:

- "للامانه انا مع الاهلي ليا عشر سنين وجيتهم هارب من سامبا واستخدمت جميع المنتجات من قروض شخصيه وعقاريه وبطايق فيزا واشوفهم افضل بنك ممكن تكون تجربتك الشخصيه سيءه ولكن لا تستعجل وتروح وتورط ممكن عشان الدمج فيه شويه لخبطه".

- "يستاهلون جميعا ساهموا في خدمه دينهم ووطنهم وليس من الانصاف اجحاف مجهودات رجال الامن لماذا لا تقدم لهم العروض كباقي منسوبي الصحه والتعليم والطيران".

- "يارب الوظيفة".

Mislabeling and ambiguous labeling are common issues in existing datasets. This in fact affects the learning and evaluation process of online social behavior modeling. Labeling datasets, especially for online social behaviors like sentiment, remains an ongoing challenge that still puzzles researchers in this area <cit.>. Overall, our localized Arabic-Gulf classifier has shown reliable performance in detecting online sentiment behavior. Below are three examples of correctly predicted sentiments:

- "الامير محمد بن سلمان يحقق رؤية المملكة 2030 بدعم الشباب".

- "هذا الرجل وضع للحق ميزان و بعدله استوى الأمير و الفقير أقنع بحربه على الفساد وب رؤية 2030 فيها المستقبل للوطن والعالم العربي والإسلامي و كافة الدول العضمى له منا الدعاء بأن يطيل الله في عمره على طاعته ليحقق ما نتطلع له من مستقبل باهر".

- "رؤية 2030 سوف تجعل السعوديه في مصاف الدول العضمى نثق برؤية سيدي".
A similar performance is achieved by our localized Arabic-Levantine classifier in accurately detecting positive and negative classes on two Levantine sentiment datasets: ArSentD-Lev <cit.> and OCLAR <cit.>. The localized Arabic-Levantine classifier scored a high accuracy of 88%, with the capability of separating the positive and negative classes at positive and negative f-scores of 0.83 and 0.82, respectively.

The overall predictions of our localized sentiment classifiers (i.e., the Arabic Levantine and Gulf classifiers) on the Lebanon and Saudi COVID-19 datasets show that sentiment during the COVID-19 pandemic in 2020 was more negative in Lebanon, with 87% overall negative vibes, than in Saudi Arabia, with 65% negative vibes. Figure <ref> provides a temporal view of the sentiment behavior during the second week of March 2020, when COVID-19 health measures were implemented in Saudi Arabia. Overall, the sentiment is shown to decrease over time, especially after a positive spike on March 15, 2020. The temporal sentiment analysis for Lebanon is not provided in this case study because date and time information is not available in the original Lebanon dataset <cit.>.

To facilitate the understanding of the sentiment behavior in Lebanon and Saudi Arabia during the COVID-19 pandemic, we provide a deeper analysis of the topics that people were discussing on social media platforms during the pandemic (Figure <ref>). Having the topics at hand, we are able to deduce the reasons behind the abstract temporal analysis. In other words, we can understand the reasons and causes of the inferred sentiment behavior, which gives a clearer picture of the story of events. The topics of both Lebanon and Saudi Arabia show an overall negative sentiment behavior, with Lebanon having more negative vibes than Saudi Arabia; while Saudi Arabia shows a positive sentiment in two topics (i.e., quarantine/activities and online shopping), Lebanon shows a positive sentiment in only one topic (i.e., online shopping). Also, the maximum negative score that Saudi sentiment reached is -0.4, compared to the negative peaks of ≈-1 that Lebanese sentiment hit.

A wider exploratory analysis of the topic groups shown in Figure <ref> is illustrated in Figure <ref>. The politics topic (Figure <ref>) in Lebanon clearly shows the highest negative sentiment; the underlying subtopics (shown in Figure <ref>) of the politics topic explain the negative behavior of the residents of Lebanon toward international and regional news. Interestingly, the discussions on the "Saudi Arabia" subtopic, followed by the "Kuwait" and "Jordan" subtopics, show a slightly positive sentiment compared to the rest of the politics subtopics. Various local Saudi topics are discussed through the subtopics in Figure <ref>. Subtopics such as "private sector", "students and distant education", "in-mosque praying hold", "hydro and bills", "Saudi banks", and "rents" show mostly negative sentiments. Subtopics that show positive sentiments are related to religion, like "Ramadan", and support, like "donation", "citizens' support", "thank-you to health workers", and, surprisingly, "quarantine". The positive sentiment associated with the "quarantine" subtopic suggests that Saudi citizens seem to have willingly accepted, and felt encouraged to respect, the stay-at-home measure in order to prevent the spread of the coronavirus.
The rest of the topic-based sentiment analysis for both Lebanon and Saudi Arabia can be found in the Appendix (Figures <ref> and <ref>).

§.§ Hate Speech Analysis

The content-localization based hate classifiers are shown to have efficiently learned representative features from the localized hate datasets to detect hateful content correctly on the validation split, as seen in Table <ref>. According to the results, our localized Levantine and Gulf hate classifiers accurately classify hate and non-hate content at validation f-scores of 0.68 and 0.70 for the Levantine and Gulf hate classifiers, respectively. Further assessment on an external native Levantine hate dataset illustrates the validity of our content-localization approach: the Levantine hate classifier could successfully and reliably recall 67% of the hate content from the native Levantine hate tweets (i.e., from the Let-Mi dataset <cit.>) while maintaining a high precision of 76% for the hate class. On the other hand, the Gulf hate classifier could only recall 38% of the hate content from the same set of Levantine hate tweets, at an 81% hate precision. We list example sentences expressed in the Arabic-Levantine dialect (i.e., taken from the Let-Mi dataset <cit.>) which the localized Gulf hate classifier classified as non-hate, while our localized Levantine classifier was able to classify them as hate content: (1) "اي نحن ما منقبلها صرماية بإجرنا ...مبروك. ع راسكم", (2) "انشالله بيقبر قلبك عن قريب ...يافهيمة عصرك", (3) "ضبي لسانك احسنلك", (4) "انشالله بيقبرك إنتي وعيلتك".

As seen in the examples listed above, the same language has different localized dialects; an expression in a certain dialect means something else in another and is used in a different context. Ignoring such a feature can negatively impact the learning models so much that they end up generating misleading outputs. For instance, the Levantine idiom "صرماية بإجرنا" (which means "a shoe on our foot") is a very local expression that Levantine people use in negative situations (i.e., usually in anger, for swearing); however, Gulf people do not use this idiom with the same structure, and its equivalents would be "شبشب في رجلي" or "نعال في رجلي". The same applies to the toxic expressions "بيقبرك, بيقبر قلبك, ضبي لسانك" (literally translated as "bury you, bury your heart, hold or fold your tongue"), which are used exclusively by Levantine people to express anger or dissatisfaction. This finding highlights the importance of distinguishing the dialects of the very same language and their localized contextual meanings. Overlooking those differences results in an inaccurate understanding of the target dialect, which in turn leads to misleading and imprecise analysis of online social behaviors.

We have utilized our localized Arabic Levantine and Gulf hate classifiers to analyze hate speech behavior during the COVID-19 pandemic in Lebanon and Saudi Arabia. Our results show that the overall hate behavior detected in Lebanon (18%) is more than 2x greater than that detected in Saudi Arabia (7%). This is clear in the hate behavior depicted in Figure <ref>; hate behavior in Lebanon is higher than in Saudi Arabia during COVID-19 in 2020, especially in topics related to politics and local events, where the hate behavior scores highest (Figure <ref>).
Saudi Arabia shows slight hate behavior only in topics related to COVID-19 and politics; even there, the detected hate behavior in Saudi Arabia is still ≈2x lower than that in Lebanon (Figure <ref>). The COVID topic in Saudi Arabia (Figure <ref>) is slightly higher in hate score compared to its counterpart in Lebanon (Figure <ref>); this indicates that people in Saudi Arabia seem to have been more upset about the virus spread and its consequences. The "sport" topic in both Lebanon and Saudi Arabia shows an insignificant level of hate behavior, while the topics "economy", "quarantine activities", and "online shopping" score zero hate behavior in both Lebanon and Saudi Arabia. The politics topic in Lebanon, which has the highest hate score during the COVID-19 pandemic in Lebanon (Figure <ref>), covers eighteen subtopics (Figure <ref>), among which are "Iran", "Egypt", "Uyghur Muslims", and "Trump peace plan"; those subtopics show the highest hate behavior within the politics topic. Among the local topics discussed in the Saudi COVID-19 data (Figure <ref>), the subtopics "private sector", "in-mosque praying hold", and "internet complaints" show a slight amount of hate behavior. The hate behavior detected in "private sector" reflects people's complaints about the private sector being late in implementing restriction measures during the pandemic in Saudi Arabia. Religion is an essential part of life in Saudi Arabia; people go to mosques five times a day to perform the five daily prayers. Therefore, the measure of closing down mosques during the pandemic was a major frustration to many, which might explain the slight presence of hate behavior in the associated subtopic. The hate behavior detected in the "internet complaints" subtopic is expected; during the pandemic, all schools, universities, and work shifted online, and people were instructed to stay home; however, some suffered from internet connection issues, which is most likely the cause of the hate behavior. The rest of the topic-based hate analysis for both Lebanon and Saudi Arabia can be found in the Appendix (Figures <ref> and <ref>).

§ CONCLUSION

This paper addresses the issue of the sub-optimal performance of existing machine translation systems in accurately translating informal messages in low-resourced languages like dialectal Arabic, and proposes a system that utilizes content-localization based neural machine translation (NMT) models customized for informal communications on social media. These NMT models not only localize the context and culture of informal messages into different dialects of a language, but also incorporate the OSN culture into the localization of these messages. We localize the content of sentiment and hate speech datasets from English into two low-resourced Arabic dialects (i.e., Levantine and Gulf) and develop four OSB (i.e., sentiment and hate speech) classifiers for the Levantine and Gulf dialects using the localized data resources. We then evaluate the performance of our proposed OSB classifiers using two approaches: (1) external data resources collected and manually annotated in the native Levantine and Gulf dialects, and (2) a proof-of-concept case study on COVID-19 data collected during the pandemic in 2020 from the Lebanon (Levantine dialect) and Saudi Arabia (Gulf dialect) regions.
Both evaluation approaches have demonstrated the efficacy of our proposed system in learning sentiment and hate speech from localized data resources; this is shown in the high performance in terms of precision, recall, f-score, and accuracy for both the sentiment and hate speech tasks in the Levantine and Gulf dialects. Further, our experimental results shed light on the importance of considering different dialects within the very same language to ensure effective and accurate OSB analysis. This can be seen in our localized Levantine hate model being able to detect hate content in native Levantine messages, whereas our localized Gulf hate model showed low performance in doing so. This proves that by overlooking dialectal aspects within a language, we can miss out on valuable insights and perspectives, which in turn leads to inaccurate and misleading results and hence improper policy and decision making. The causes and reasoning behind the predicted online sentiment and hate behaviors are uncovered through our proposed unsupervised learning methodology for data exploration and interpretation. We adopt a topic modeling and phrase extraction approach for capturing the insightful topics, trends, and concerns formed during the pandemic and for automatically providing coherent interpretations of the inferred topics, regardless of the language/dialect used and without human intervention. We opt for unsupervised learning techniques that omit the requirement of language-dependent preprocessing tools to remove noise and retain the informative pieces of the given data. Our BERTopic models have shown robust performance against data noise and have successfully identified meaningful topics from both the Lebanon and Saudi Arabia COVID-19 datasets, representing two different Arabic dialects: Levantine and Gulf. To coherently interpret the topics inferred by our topic models, the RAKE algorithm has demonstrated a superior language/dialect-independent capability to automatically and dynamically extract the most representative phrases, describing each topic better than the interpretation of single keywords alone. For future directions, we plan to extend our system to include more Arabic dialects, like Yemeni, Iraqi, and Egyptian. We also plan to expand our OSB modeling to various social behaviors, including sarcasm, emotions, and optimism/pessimism. Moreover, we are interested in dynamic topic modeling to automatically track changes in topics over time. This will serve as an assisting tool to understand the trends and concerns formed in an event through their evolution over time, allowing us to improve the analysis of the online social behavior of citizens in smart cities.

§ APPENDIX
"authors": [
"Fatimah Alzamzami",
"Abdulmotaleb El Saddik"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231127153733",
"title": "Content-Localization based System for Analyzing Sentiment and Hate Behaviors in Low-Resource Dialectal Arabic: English to Levantine and Gulf"
} |
^*Both authors contributed equally to the paper.

University of Illinois at Urbana Champaign, USA, [email protected]

University of Illinois at Urbana Champaign, USA, [email protected]

University of Illinois at Urbana Champaign, USA, [email protected]

The growth of e-commerce has seen a surge in popularity of platforms like Amazon, eBay, and Taobao. This has given rise to a unique shopping behavior involving baskets, i.e., sets of items purchased together. As a less studied interaction mode in the community, the question of how shopping baskets should complement personalized recommendation systems remains under-explored. While previous attempts focused on jointly modeling user purchases and baskets, the distinct semantic nature of these elements can introduce noise when directly integrated. This noise negatively impacts the model's performance, further exacerbated by significant noise (e.g., a user is misled to click an item or recognizes it as uninteresting after consuming it) within both user and basket behaviors. In order to cope with the above difficulties, we propose a novel Basket recommendation framework via Noise-tolerated Contrastive Learning, named BNCL, to handle the noise existing in cross-behavior integration and within-behavior modeling. First, we represent the basket-item interactions as a hypergraph to model the complex basket behavior, where all items appearing in the same basket are treated as a single hyperedge. Second, cross-behavior contrastive learning is designed to suppress the noise during the fusion of diverse behaviors. Next, to further inhibit the within-behavior noise of the user and basket interactions, we propose to exploit invariant properties of the recommenders w.r.t. augmentations through within-behavior contrastive learning. A novel consistency-aware augmentation approach is further designed to better identify noisy interactions with the consideration of the above two types of interactions. Our framework offers a generic training paradigm that is applicable to different backbones. Extensive experiments on three shopping transaction datasets verify the effectiveness of our proposed method. Our code is available at <https://github.com/Xinrui17/BNCL>.

Robust Basket Recommendation via Noise-tolerated Graph Contrastive Learning

Jingrui He

November 27, 2023

===========================================================================

§ INTRODUCTION

Recommender systems have become a powerful tool that greatly enhances shopping experiences on online platforms ever since their inception <cit.>. In practice, customers often purchase multiple items at the same time, and these co-occurrence relationships can provide rich information for mining item properties. Basket recommendation <cit.> aims to predict a set of relevant items that a customer will be interested in by analyzing the composition of the historical interactions and, if given, the current shopping basket[A customer may go to a supermarket to buy products many times during a period; such a customer then has multiple shopping baskets], which can be used for product arrangement, procurement, promotion, and marketing <cit.> to improve customer experience and generate business value.
There is an increasing number of works <cit.> trying to explore customers' order histories to capture users' shopping preferences and item semantics, with the aim of improving recommendation quality and boosting online service. However, not all of the order details in transaction data are essential and relevant to determining the user's next action. Some user-item interactions, as well as basket-item interactions, appear as noise due to the diversity of basket contents and users' mismatched behaviors. In practice, noisy interactions occur in shopping transaction data because users' shopping behavior is somewhat random and fragmented, meaning that users sometimes buy items against their shopping habits, or some items in a basket are not related to any others. The existence of such noisy interactions has been verified <cit.> to hinder the understanding of users' behavior patterns, which harms recommender system training and thus hinders practical deployment. Therefore, it is necessary to denoise in basket recommendation (BR) in order to extract effective information and enhance recommendation performance. However, most existing BR methods <cit.> tend to jointly model purchases and baskets, which introduces noise to the learned representations due to the heterogeneity of user purchase and basket behaviors. To suppress the noise in recommender systems, current works can be classified as follows. The first line of work <cit.> focuses on improving model robustness <cit.> against noisy user interactions; another emerging series of papers <cit.> puts its attention on inhibiting the negative effect of noisy basket behaviors. Both types of works only consider the noise within one behavioral pattern, while the joint noise handling of the above two behaviors remains unexplored. Moreover, the semantic mismatch between user purchases and basket behaviors introduces additional noise during the behavioral fusion process. Consequently, it is essential to design a denoising method from a global view that considers comprehensive information from both behaviors. To illustrate the noise, we give an example in Figure <ref>, which presents the consumption history of a user from the view of user purchase behavior (below) and the view of basket behavior (above). We observe that, from the basket behavior view, the bread in basket 3 can be regarded as an outlier with high probability. However, if we take a look at the user purchase behavior, the bread should not be treated as an anomaly, while the computer and mouse are more likely to be noise for the user. Moreover, if we look at the racket, which is unimportant in both views, there is a high probability of it being consistent noise. From this example, we see that it is crucial to design a denoising approach that comprehensively considers information both within and across the two behavior views. In this paper, we propose a comprehensive within-basket recommendation framework via noise-tolerated contrastive learning to handle the noise existing in cross-behavior integration and within-behavior modeling.
To be more specific, first, we adopt the typical user-item bipartite graph to model the user-item interactions from the user purchase behavior, while the basket-item interactions from the basket behavior are represented as a hypergraph to model the complex basket behavior, where all the items appearing in the same basket are treated as a single hyperedge. Secondly, we propose cross-behavior contrastive learning to fuse the representations learned from the basket behavior into the recommender system, which aims at suppressing the noise introduced by the fusion of diverse behaviors. Then, to handle the within-behavior noise of the user and basket interactions respectively, we propose to exploit invariant properties of the recommenders w.r.t. augmentations through within-behavior contrastive learning. During this process, a novel consistency-aware augmentation method is proposed to better identify the noisy interactions using the comprehensive information of both types of behaviors. To optimize the model, we leverage a multi-task training strategy to jointly optimize the classic recommendation task and the self-supervised contrastive denoising task. In summary, the contributions of this paper can be summarized as follows:

* This work formulates the idea of integrating basket behaviors into user-item interaction modeling with a light hypergraph message passing schema as well as a joint self-supervised learning paradigm.

* We systematically illustrate the noise issues in the within-basket recommendation problem and propose a general basket recommendation framework to improve robustness against within-behavior and cross-behavior noise via noise-tolerated contrastive learning.

* Extensive experimental results on three shopping transaction datasets show that our proposed method outperforms state-of-the-art baselines in terms of various ranking metrics.

The rest of the paper is organized as follows. We introduce the preliminary definitions in Section <ref> and the proposed BNCL in Section <ref>. Then we present the experimental results in Section <ref>. Section <ref> briefly discusses the existing work. In the end, we conclude the paper in Section <ref>.

§ PRELIMINARY

§.§ Within-basket Recommendation Setting

As a common practice, we use U={ u_1, u_2, …, u_|U|} to represent all users and I={ i_1, i_2, …, i_|I|} to represent all items, where |U| and |I| denote the number of users and items, respectively. We consider a basket as the set of items that a user ordered in one transaction. Therefore, we obtain an interaction basket sequence according to the transaction record of user u, denoted as B^u=(b_1^u, b_2^u, …, b_|B^u|^u), where |B^u| is the number of baskets that user u has purchased and b_j^u ⊆ I represents the j-th basket purchased by user u. The within-basket recommendation task aims to recommend the most probable list of items to be added to a partially given basket b_p^u associated with a user u.

§.§ User-item Interaction View Learning

In the raw transaction data, each user and item is assigned a unique ID. We use a d-dimensional embedding to represent each user and item, where e_u ∈ ℝ^d is the embedding of the user and e_i ∈ ℝ^d is the embedding of the item. To capture the users' purchase behavior, we adopt the idea of the user-item bipartite graph to model the user-item interactions, in which users and items are treated as nodes. If user u has bought item i, there is an edge connecting these two nodes in the user-item interaction graph. We adopt the graph convolutional network LightGCN <cit.> to perform message passing on the user-item interaction graph. Following LightGCN, the k-th layer information propagation on the user-item bipartite graph can be described as follows:

𝐞_u^(k) = ∑_i ∈ 𝒩_u 1/(√(|𝒩_u|) √(|𝒩_i|)) 𝐞_i^U(k-1),  𝐞_i^U(k) = ∑_u ∈ 𝒩_i 1/(√(|𝒩_i|) √(|𝒩_u|)) 𝐞_u^(k-1)

where e_u^(k), e_i^U(k) ∈ ℝ^d are the embeddings of user u and item i at the k-th layer, respectively, and 𝒩_u, 𝒩_i denote the neighbors of u and i in the interaction graph. We randomly initialize the user and item embeddings e_u^(0), e_i^U(0) at the very beginning. To take the information aggregated at different depths into account, the final representations e_u and e_i^U of the user and the item are obtained as the mean of the embeddings of the different layers:

e_u = 1/K ∑_k=0^K e_u^(k),  e_i^U = 1/K ∑_k=0^K e_i^U(k)

where K is the number of layers and e_u, e_i^U are the final representations of the user and the item.
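For clarity, the following sketch implements this parameter-free propagation rule with dense matrix operations on a toy interaction matrix; the symmetric normalization of the interaction matrix is the standard LightGCN construction, while the sizes and initialization scale are our assumptions.

```python
# Minimal LightGCN-style propagation on a user-item bipartite graph.
# Toy sizes; in practice R comes from the observed user-item interactions.
import torch

n_users, n_items, d, K = 4, 6, 8, 3
R = (torch.rand(n_users, n_items) < 0.3).float()     # binary interaction matrix

# Symmetric normalization: entry (u, i) becomes 1 / (sqrt(|N_u|) * sqrt(|N_i|)).
du = R.sum(1, keepdim=True).clamp(min=1).sqrt()
di = R.sum(0, keepdim=True).clamp(min=1).sqrt()
A = R / (du * di)

e_u = torch.nn.Parameter(torch.randn(n_users, d) * 0.1)   # e_u^(0)
e_i = torch.nn.Parameter(torch.randn(n_items, d) * 0.1)   # e_i^(0)

layers_u, layers_i = [e_u], [e_i]
for _ in range(K):
    # e_u^(k) aggregates item embeddings; e_i^(k) aggregates user embeddings.
    layers_u.append(A @ layers_i[-1])
    layers_i.append(A.t() @ layers_u[-2])

# Final representations: average the layer-wise embeddings, as in the mean above.
final_u = torch.stack(layers_u).mean(0)
final_i = torch.stack(layers_i).mean(0)
scores = final_u @ final_i.t()   # inner-product preference scores
```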
§.§ Contrastive learning

Intuitively, different views of the same sample should be clustered together in the embedding space. Contrastive learning (CL) aims at learning good data representations by reducing the distance between defined positive pairs while pushing apart the representations of negative pairs in the embedding space. The main process of CL <cit.> is to first construct diverse views of the raw data through an augmentation set 𝒜. Given a data point x, the augmentation results of x are denoted as 𝒜(x) and are treated as positive pairs; all other pairs are negative. CL tries to learn an encoder f such that the positive pairs are well aligned and the negative ones are pushed apart. We denote by z(x) the representation of sample x in the embedding space. To achieve this goal, a class of methods employs InfoNCE <cit.> as the contrastive loss function, formulated as:

ℒ_InfoNCE = ∑_x ∈ 𝒳, x' ∈ 𝒜(x) -log exp(sim(z(x), z(x')) / τ) / ∑_x_k ∈ 𝒳 exp(sim(z(x), z(x_k)) / τ)

where 𝒳 is the set of data samples in a mini-batch, x' is an augmented view of a data point x, sim(·, ·) is a similarity function, and τ is the temperature coefficient used to control the uniformity of the representations in the embedding space. In this way, the model is more likely to learn the invariant and essential properties of the raw data by mapping positive pairs into nearby regions of the embedding space. To adapt to the downstream task, it is natural to adopt the contrastive objective as an auxiliary term combined with supervised signals to form a multitask learning objective.
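A compact PyTorch sketch of this InfoNCE objective is shown below. The batch construction (row i of z1 paired with row i of z2) and the cosine similarity choice follow the formulation above; the tensor shapes are illustrative.

```python
# Minimal InfoNCE loss: row i of z1 and row i of z2 are two views of sample i.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    z1 = F.normalize(z1, dim=1)          # cosine similarity via normalized dot product
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau           # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    # Diagonal entries are the positive pairs; off-diagonal entries are negatives.
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(32, 64), torch.randn(32, 64)   # two views of a batch
loss = info_nce(z1, z2)
```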
Compared with user-basket interactions, what matters most is the interactions between users and items, which contain the cross-basket information, as well as the interrelated information among the items within a basket. Thus, we propose to learn the representations from both the user purchase behavior and the basket behavior. For the user purchase behavior, which contains the user-item interactions, we employ the user-item interaction graph and perform LightGCN message passing on it to learn the representations of the user and item, denoted as e_u and e_i. From the user purchase behavior, we encode the user's personalized information and shopping preferences into the user and item embeddings.
Different from the general recommendation task, the basket recommendation scenario provides shopping basket records that contain valuable correlation information about the items. Specifically, the composition of a basket often implies complementarity, similarity, or substitution relationships among the items. The basket behavior is encoded in the basket-item interactions, which serve as a supplementary source for extracting effective item properties for recommendation. We use a basket hypergraph 𝒢_hyper=(V, E) to model the basket behavior for the following reasons. In a hypergraph, a hyperedge can connect two or more vertices <cit.>, which can well model the diverse relationships between a basket and multiple items. The hypergraph also has the ability to aggregate high-order information through the message passing among hypernodes and hyperedges, which is essential to capturing the complex relations between various baskets and items. Here V is the set of vertices and each vertex represents an item in the item set. Each hyperedge ϵ∈ E denotes a basket b, and the vertices it connects represent all the items belonging to this basket. The basket hypergraph contains |I| vertices and M hyperedges. The relationship between vertices and hyperedges can be described by an incidence matrix H ∈ℝ^|I| × M defined as follows:
𝐇(v, ϵ)= 1, if v ∈ϵ;  0, otherwise
Following the spectral hypergraph convolution proposed in <cit.>, we design light message passing on the constructed hypergraph to effectively perform the information aggregation with hyperedges as the mediators. The representation of the vertices E_I^B(k) at the k^th layer is obtained as:
𝐄_I^B(k)=𝐃^-1 / 2𝐇 𝐁^-1𝐇^T𝐃^-1 / 2𝐄_I^B(k-1)
where D ∈ℝ^|I| × |I| and B∈ℝ^M × M are the diagonal degree matrices of the vertices and the hyperedges, whose entries denote the degree values of the corresponding items/hyperedges. E_I^B(k)∈ℝ^|I| × d^k is the item embedding matrix at the k^th layer on the hypergraph and d^k is the hidden size of the k^th layer. We use e_i^B(k) to denote the representation of item i at the k^th layer, which equals the i^th row of E_I^B(k). We use the mean of the representations at different layers as the final representation of the items learned from the basket behavior:
e_i^B=1/K∑_k=0^K e_i^B(k)
Defining 𝒩_b={i|𝐇(i,b)=1} as the set of items incorporated in the basket b, we can obtain the final basket embedding from its items:
e_b=1/|𝒩_b|∑_i∈𝒩_b e_i^B
§.§ Cross-behavior Contrastive Learning
The item embeddings e_i^U and e_i^B, learned from the user purchase behavior and the basket behavior, capture inherent semantic properties of the different views. However, loosely integrating them may lead to suboptimal results due to the inconsistent feature spaces and view aggregation schemata.
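Before turning to the cross-behavior fusion, the following minimal sketch illustrates the light hypergraph message passing E^(k)=D^{-1/2}HB^{-1}H^TD^{-1/2}E^(k-1) and the basket pooling described above; the incidence matrix is kept dense for clarity, and the function and tensor names are illustrative assumptions.

```python
import torch

def hypergraph_propagate(h, e0, num_layers=2):
    """Light hypergraph message passing with baskets (hyperedges) acting
    as mediators. h: |I| x M incidence matrix; e0: |I| x d item embeddings."""
    d_v = h.sum(dim=1)                               # item (vertex) degrees
    d_e = h.sum(dim=0)                               # basket (hyperedge) degrees
    d_inv_sqrt = torch.where(d_v > 0, d_v.pow(-0.5), torch.zeros_like(d_v))
    b_inv = torch.where(d_e > 0, 1.0 / d_e, torch.zeros_like(d_e))
    layer_embs, e = [e0], e0
    for _ in range(num_layers):
        x = d_inv_sqrt.unsqueeze(1) * e              # D^{-1/2} E
        x = b_inv.unsqueeze(1) * (h.t() @ x)         # B^{-1} H^T (...): per-basket mean
        e = d_inv_sqrt.unsqueeze(1) * (h @ x)        # D^{-1/2} H (...): back to items
        layer_embs.append(e)
    e_item = torch.stack(layer_embs).mean(dim=0)     # mean over layers
    # Basket embedding: mean of the final embeddings of its member items.
    e_basket = b_inv.unsqueeze(1) * (h.t() @ e_item)
    return e_item, e_basket

# Toy usage: 4 items, 2 baskets ({i0, i1, i2} and {i2, i3}).
h = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.]])
e_item, e_basket = hypergraph_propagate(h, torch.randn(4, 8))
```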
To accomplish the fusion of diverse behaviors, in this section we propose cross-behavior contrastive learning to suppress the noise arising during the fusion of the user purchase behavior and the basket behavior. Motivated by the core idea of CL, we introduce cross-behavior contrastive learning to align the representations of the same item generated from different behaviors, which unifies the item embeddings into the same feature space and further explores the intrinsic semantic properties of the item.
The representations of the same item should share the same intrinsic semantic properties. Thus, we define the positive pairs as the embeddings of the same item learned from the augmented views of the user purchase behavior and the basket behavior, denoted as ê_i^U and ê_i^B. We then treat the embeddings of different items learned from different behaviors as negative pairs. Employing InfoNCE <cit.>, the objective of cross-behavior contrastive learning can be formulated as:
ℒ_CL^cb=∑_i ∈ℐ-logexp(sim(ê_i^U, ê_i^B) / τ)/∑_j ∈ I, i ≠ jexp(sim(ê_i^U, ê_j^B) / τ)
where sim(·, ·) is the cosine similarity function and τ is the temperature coefficient of the contrastive learning.
§.§ Within-behavior Contrastive Learning
In the sections above, we model the user purchase behavior and the basket behavior through a user-item interaction graph and a basket hypergraph, and learn the representations from these two graphs. However, in real-world shopping transaction records, there are always some items that are irrelevant to the user's preference or to the intents of the basket, which appear as noise. The existence of such noise distorts the learned user shopping preferences and deteriorates the quality of the representations.
To tackle this problem, we propose within-behavior contrastive learning on the user purchase behavior and the basket behavior respectively to denoise the user-item interactions and the basket-item interactions. To be more specific, we generate the augmented views of the user-item interaction graph and the basket hypergraph. Then, message passing is performed both on the original graphs and on the augmented graphs. We denote the representations of the user and item on the user-item interaction graph as e_u and e_i^U, and the representations of the basket and item on the basket hypergraph as e_b and e_i^B. Likewise, we have the representations of the user and item ê_u, ê_i^U on the augmented view of the user-item interaction graph, and the representations of the basket and item ê_b, ê_i^B on the augmented view of the basket hypergraph. The objective of the user representation learning in within-behavior contrastive learning is defined as:
ℒ_CL^U=∑_u ∈𝒰-logexp(sim(e_u, ê_u) / τ)/∑_v ∈ U, u ≠ vexp(sim(e_u, ê_v) / τ)
We adopt the same contrastive objectives for the item embeddings of the original and augmented views on the user-item interaction graph, and for the basket and item embeddings of the original and augmented views on the basket hypergraph. Four contrastive terms are obtained: ℒ_CL^U, ℒ_CL^I_u, ℒ_CL^B, ℒ_CL^I_b. The final contrastive learning loss for within-behavior denoising is the sum of these four terms:
ℒ_CL^wb= ℒ_CL^U+ℒ_CL^I_u+ℒ_CL^B+ ℒ_CL^I_b
§.§ Consistency-aware Augmentation
Up to now, we have investigated denoising techniques applicable to each behavior respectively, as well as to the fusion of heterogeneous behaviors.
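Before describing the augmentation, the following sketch shows how the InfoNCE-style objectives above can be computed in batch form. For simplicity, the positive pair is kept in the denominator, a common equivalent formulation, and all tensors are toy placeholders rather than outputs of the actual model.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.2):
    """In-batch InfoNCE: row i of z1 and row i of z2 are two views of the
    same entity (positive pair); the other rows of z2 serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                # pairwise cosine similarity / tau
    labels = torch.arange(z1.shape[0], device=z1.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random two-view embeddings of 16 items (d = 8).
torch.manual_seed(0)
e_i_u, e_i_b = torch.randn(16, 8), torch.randn(16, 8)
l_cb = info_nce(e_i_u, e_i_b)                 # cross-behavior term (same item)
# The within-behavior loss sums four such terms (users and items on the
# user-item graph, baskets and items on the hypergraph), each computed
# between an original view and its augmented counterpart.
```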
However, developing an effective augmentation strategy for contrastive learning that precisely identifies the noisy interactions across these two types of behaviors remains an unresolved challenge. Commonly used data augmentation methods for graph structures, such as stochastic edge and node removal, introduce substantial randomness when altering the graph's structure and identifying noise. For example, when the removed edges are connected to important nodes, some fundamental relationships may be lost and the underlying structure of the graph may be destroyed as a result. Moreover, previous augmentation techniques that rely on a single view can be unreliable in the context of basket recommendation: an item considered noise from one perspective may carry meaningful information when viewed from another perspective, as depicted in Figure <ref>. To mitigate the risk of losing important relationships, it is crucial to develop an augmentation strategy that integrates crucial multi-view information and removes interactions that consistently exhibit noise.
Aiming to identify the noise, we propose a consistency-aware augmentation approach for cross-behavior and within-behavior contrastive learning. To be more specific, we posit that a user-item interaction or a basket-item interaction tends to be noisy when the item is considered potentially noisy under both the user purchase behavior and the basket behavior. Inspired by the definition of node centrality in graphs <cit.>, we define the interaction importance s_u-i^ϵ and s_b-i^ϵ on the two graphs as:
s_u-i^ϵ=log(δ_d(u)+δ_d(i) + δ_d_hyper(i))
s_b-i^ϵ=log(δ_d_hyper(b)+δ_d_hyper(i) + δ_d(i))
where δ_d(u) and δ_d(i) are the degrees of the user and item nodes on the user-item interaction graph. On the basket hypergraph, we consider both the degree of the hyperedge δ_d_hyper(b), defined as the number of items contained in the basket hyperedge, and the degree of the item vertex δ_d_hyper(i), defined as the number of hyperedges connecting to this item vertex. According to this expression, the importance of an edge is defined from the perspective of the user purchase behavior as well as the basket behavior. We posit that a user-item interaction or a basket-item interaction is more important when it has greater edge importance on the corresponding graph. To generate more meaningful augmentations, we are inclined to drop the less important interactions.
To calculate the probability of dropping an edge ϵ, for example on the user-item interaction graph, we use normalization to transform the edge importance s_u-i^ϵ of edge ϵ into a probability:
p_ϵ=s_max-s_u-i^ϵ/s_max-s_min· p
where p is the overall edge-drop probability, and s_max and s_min are the maximum and minimum values of s_u-i^ϵ. Following the same formula, the probability of dropping a basket-item interaction ϵ on the basket hypergraph can be obtained as well. According to the drop probability p_ϵ of each edge ϵ, we obtain the augmented view 𝒢̂ of the original graph 𝒢, where each edge ϵ is retained in 𝒢̂ with probability 1-p_ϵ.
With consistency-aware augmentation, the positive pairs generated from the two behavior views tend to share more common intrinsic properties for better alignment in cross-behavior contrastive learning.
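A minimal sketch of this consistency-aware edge dropping on the user-item graph follows; the degree tensors and function names are illustrative assumptions, and the same routine would apply to basket-item edges with the corresponding degrees.

```python
import torch

def edge_drop_probs(scores, p=0.3):
    """Map edge-importance scores s_eps to drop probabilities via min-max
    normalization, p_eps = (s_max - s_eps) / (s_max - s_min) * p, so that
    less important edges are dropped more often."""
    s_min, s_max = scores.min(), scores.max()
    return (s_max - scores) / (s_max - s_min + 1e-12) * p

def augment_user_item_edges(users, items, deg_u, deg_i, deg_i_hyper, p=0.3):
    """Consistency-aware augmentation: the importance of edge (u, i) is
    s = log(deg(u) + deg(i) + deg_hyper(i)), mixing information from both
    behaviors; each edge is kept with probability 1 - p_eps."""
    s = torch.log(deg_u[users] + deg_i[items] + deg_i_hyper[items])
    keep = torch.rand_like(s) > edge_drop_probs(s, p)
    return users[keep], items[keep]

# Toy usage: 5 edges between 3 users and 4 items.
users = torch.tensor([0, 0, 1, 2, 2])
items = torch.tensor([0, 2, 1, 2, 3])
deg_u = torch.tensor([2., 1., 2.])               # user degrees (u-i graph)
deg_i = torch.tensor([1., 1., 2., 1.])           # item degrees (u-i graph)
deg_i_hyper = torch.tensor([1., 1., 2., 1.])     # item degrees (hypergraph)
u_aug, i_aug = augment_user_item_edges(users, items, deg_u, deg_i, deg_i_hyper)
```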
It also generates more meaningful augmented views for within-behavior contrastive learning on the user-item interaction graph and the basket hypergraph respectively.
§.§ Prediction and Optimization
To perform recommendation, the ranking score of each user-item pair considers both the user information and the current basket information:
ŷ_(u,b,i)=(1-r)·𝐞_u^⊤𝐞_i+r·𝐞_b^⊤𝐞_i
where 𝐞_i=𝐞_i^U+𝐞_i^B denotes the fused item embedding after contrastive denoising, and r is a hyperparameter balancing the contributions of the user and the basket in the recommendation. We use the BPR loss <cit.> as the main recommendation loss. We sample a positive item i and a negative item j for the user u, where the positive item is selected within the current basket and the negative item is sampled from the items that have not been purchased. The main loss is:
L_main =-∑_(u, i, j)logσ(ŷ_(u,b,i)-ŷ_(u,b,j))+λ‖Θ‖_2^2
where Θ denotes the model parameters and λ is a positive constant. To improve the recommendation with the self-supervised denoising tasks, we leverage a multi-task training strategy to jointly optimize the classic recommendation task, the cross-behavior contrastive task, and the within-behavior contrastive task. In this way, we have the final multi-task loss:
ℒ= ℒ_main+α_1ℒ_CL^cb+α_2ℒ_CL^wb
where α_1 and α_2 are hyperparameters controlling the linear weights.
§ EXPERIMENTS
In this section, we conduct experiments to evaluate the performance of our proposed . Our experiments intend to answer the following research questions:
* RQ1: How does perform in the within-basket recommendation task compared with the baseline models?
* RQ2: How do the different components in contribute to the performance?
* RQ3: How is the generalization ability of our proposed under different circumstances (e.g., varying lengths of the recommended item list and different backbones)?
* RQ4: How does the proposed perform in the presence of noise (i.e., the robustness of to varying ratios of added noise)?
§.§ Dataset
We evaluate the within-basket recommendation performance on real-world datasets: Instacart [https://www.kaggle.com/c/instacart-market-basket-analysis], Tafeng [https://www.kaggle.com/chiranjivdas09/ta-feng-grocery-dataset] and Valuedshoppers [https://www.kaggle.com/c/acquire-valued-shoppers-challenge].
* Instacart is a transaction dataset collected from an online grocery service. It contains the records of over 3 million grocery orders from more than 200,000 users.
* Tafeng contains Chinese grocery store transaction data over four months, released by ACM RecSys. It consists of the shopping orders of over 13,000 users.
* Valuedshoppers provides almost 350 million purchase records from over 300,000 shoppers, which include a large set of users' basket-level shopping behaviors. Considering the large number of records, we sample the transactions for training and prediction.
We treat the set of items purchased by a user during a time session as a shopping basket. In order to make the baskets informative enough to be useful for the algorithm, we remove baskets containing fewer than 30 items for Instacart, and fewer than 10 items for Valuedshoppers and Tafeng, due to the sparsity of the basket-item interactions in these two datasets. The statistics of the final processed datasets are shown in Table <ref>.
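Returning briefly to the prediction and optimization step described above, the following is a minimal sketch of the ranking score and the BPR-based multi-task objective; the L2 regularization term is abbreviated, the contrastive terms are placeholders from the earlier sketches, and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ranking_score(e_u, e_b, e_i, r=0.2):
    """y_hat(u, b, i) = (1 - r) * <e_u, e_i> + r * <e_b, e_i>, where e_i is
    the fused item embedding e_i^U + e_i^B computed beforehand."""
    return (1 - r) * (e_u * e_i).sum(-1) + r * (e_b * e_i).sum(-1)

def bpr_loss(e_u, e_b, e_pos, e_neg, r=0.2):
    """Pairwise BPR loss: rank the in-basket (positive) item above the
    sampled unpurchased (negative) item for each (user, basket) pair."""
    diff = ranking_score(e_u, e_b, e_pos, r) - ranking_score(e_u, e_b, e_neg, r)
    return -F.logsigmoid(diff).mean()

# Toy usage; the parameter regularization lambda * ||Theta||_2^2 is omitted.
torch.manual_seed(0)
e_u, e_b = torch.randn(16, 8), torch.randn(16, 8)
e_pos, e_neg = torch.randn(16, 8), torch.randn(16, 8)
l_main = bpr_loss(e_u, e_b, e_pos, e_neg)
l_cb = l_wb = torch.tensor(0.0)           # contrastive terms (placeholders)
loss = l_main + 0.1 * l_cb + 0.1 * l_wb   # final multi-task objective
```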
We split 80% of the items of each basket as training data and the remaining 20% as test data for all datasets.
§.§ Experimental Settings
§.§.§ Evaluation metrics
We evaluate the performance of the models with top-K recommendation metrics, including Recall@K, Precision@K, HR@K, and NDCG@K <cit.>. We first compute the recommendation scores for the given user u over all items, and then perform full ranking to generate the top-K most likely items.
§.§.§ Baseline
We consider the following baselines for comparison:
Simple method:
* PersonPop-k: It is a basic method that returns the top-k items from the training set in terms of the purchase frequency of a given user.
MF-based Methods:
* BPR-MF <cit.>: It is a method to model user and item interactions. The representations are learned by maximizing the difference between the scores of a user's purchased and unpurchased items.
* Triple2vec <cit.>: It learns the user and item representations via triplets (item, item, user) sampled from a single shopping basket.
NBR Methods: Additionally, we modify the relevant NBR (Next-Basket Recommendation) methods to suit our within-basket recommendation setting. Specifically, we treat the current partially provided basket as the last basket in the shopping history for next-basket prediction.
* Dream <cit.>: It leverages recurrent neural networks to model the dynamics of users' behaviors and the sequential patterns between items.
* TIFUKNN <cit.>: It is a nearest-neighbor-based model that outperforms deep recurrent neural networks in NBR. It relies on the similarity between the target user and other users, as well as the purchase history of the target user.
GNN-based Methods:
* LightGCN <cit.>: It is a simplified model of NGCF <cit.>. LightGCN directly uses the normalized summation of neighbors to perform aggregation on the graph, which greatly improves the recommendation performance.
* BasConv <cit.>: It constructs a UBI graph and then designs heterogeneous aggregators on the graph to realize informative message passing in representation learning.
* MITGNN <cit.>: It is a recent model focused on the within-basket recommendation task, which retrieves multiple intents across the defined basket graph to learn the representations of users and items.
Denoising Methods:
* SGL <cit.>: It combines the collaborative graph neural network filtering model with contrastive learning for recommendation by perturbing the graph structure through simple data augmentation operations.
* CLEA <cit.>: CLEA denoises the basket by automatically splitting it into positive and negative sub-baskets and using anchor-guided contrastive learning. We adopt the idea of CLEA and adapt the model to the within-basket recommendation task.
Note that we omit the comparison with the potential basket recommendation baseline PerNIR <cit.>, as its objective focuses on predicting the next item for the current basket rather than completing the entire basket.
§.§.§ Parameter settings
For fair comparisons, we adopt the following settings for all methods: the batch size is set to 1024; the embedding size is fixed to 128; all embedding parameters are initialized by Xavier initialization <cit.>; the hidden dimension is 64 for all methods. We optimize each baseline method according to the validation set. For LightGCN and SGL, we adopt 3 layers of propagation to achieve their best performance on all datasets. For the BasConv model, 2 layers of aggregation are performed to obtain the best results.
For our model, we fine-tune the hyperparameter r within the range [0, 0.1, 0.2, 0.5] and p within the range [0.1, 0.3, 0.5, 0.7]. The coefficients for both the cross-behavior and the within-behavior contrastive learning are tuned in the range [1× 10^-1, 1× 10^-3, 1× 10^-5]. Our model converges best when the learning rate is 5× 10^-4 and the number of propagation layers is 2 for both the user-item and basket-item graphs. On Instacart, the coefficients for both cross-behavior and within-behavior contrastive learning are 0.1; on Tafeng, the model reaches its best performance when the coefficients for cross-behavior and within-behavior contrastive learning are set to 1× 10^-2 and 1× 10^-3 respectively; on Valuedshoppers, we choose the coefficients as 1× 10^-4 and 1× 10^-5 respectively for cross-behavior and within-behavior contrastive learning.
§.§ Results (RQ1)
In this section, we compare the performance of several state-of-the-art baselines on the within-basket recommendation task on three real-world datasets, and the results for K=40, 60 are shown in Table <ref>. The baselines are arranged according to their model types. We find that consistently outperforms the other methods, which demonstrates its remarkable ability to extract informative representations from the user-item and basket-item interactions while effectively filtering out irrelevant or noisy information. The reported best-performing models are significantly better than the second-best ones with p-value < 0.05. Moreover, the improvement of the baseline methods varies across datasets (e.g., see the underlined results), while our method offers a general denoising approach that achieves stable enhancements.
The superiority of the GNN-based models over the classical models (BPR-MF and Triple2vec) clearly demonstrates the significance of graph structure in modeling interactions and learning representations for the within-basket recommendation task. However, we observe that the GNN-based method BasConv, which directly incorporates the basket behavior, does not perform as well as expected. This observation suggests that poor-quality basket-item interactions can hinder representation learning and negatively impact performance.
The next-basket recommendation method TIFU-KNN, as well as CLEA, exhibits strong performance on several metrics; TIFU-KNN reaches the second-best results in recall and hit rate on Valuedshoppers, confirming the importance of the shopping history in basket recommendation. TIFU-KNN's poor performance on NDCG indicates that non-neural-network-based methods tend to overlook the order of the recommendations to some extent. Additionally, we find that NBR methods heavily rely on the number of purchased baskets, so they achieve better performance on the Valuedshoppers dataset, where each user has more baskets on average. It can be observed that methods such as SGL and CLEA, which take denoising into account, achieve better performance, showing the necessity of eliminating the effect of noisy interactions in basket recommendation. Comparing SGL and CLEA, CLEA achieves better results since it focuses on denoising within the basket while SGL performs contrastive learning only on the user-item view, which confirms the important role of basket denoising.
§.§ Ablation Study (RQ2)
In this section, we investigate the effectiveness of the proposed by evaluating the impact of its different components.
We denote the complementary basket hypergraph as B-I, the within-behavior contrastive learning with consistency-aware augmentation on the user-item interaction graph and the basket hypergraph as CA, and the cross-behavior contrastive learning as CL Fusion. We replace the consistency-aware augmentation with random edge perturbations and denote this model as _random. Moreover, _add represents the variant without the hyperparameter r. Based on the typical user-item interaction graph, a checkmark under the corresponding module in Table <ref> indicates that this module is incorporated into the model. It can be seen that all the components are reasonably designed and essential to the final performance: when any one of them is removed, the performance drops accordingly on all metrics. The findings are summarized as follows:
* Incorporating the basket hypergraph leads to a noticeable enhancement in performance, confirming that the basket behavior offers additional valuable information for representation learning.
* Note that when integrating the user purchase behavior and the basket behavior, the utilization of cross-behavior contrastive learning yields the most significant improvement in recommendation performance compared to directly combining the embeddings learned from both behaviors (e.g., achieving a 10.59% increase in NDCG). This finding shows that cross-behavior contrastive learning reduces the effect of irrelevant information in the views of the user purchase behavior and the basket behavior by keeping the invariant and essential semantics of the items, which verifies the effectiveness of capturing the cooperative association between the different views.
* The experimental results with the random edge perturbation augmentation _random and the consistency-aware augmentation indicate that the consistency-aware augmentation identifies the noise better than a random structural augmentation and benefits both the cross-behavior and within-behavior contrastive learning, which is consistent with our motivation in Section <ref>.
* Compared with , the performance of _add is worse, which indicates that loosely adding the predictions of two separate views introduces noise and ultimately results in suboptimal performance.
§.§ Case Study
§.§.§ Generalization Study (RQ3)
We conduct experiments to verify the robustness and the generalization ability of the proposed . First, we test with different K in the range [5, 10, 20, 40, 60, 80, 100]. The results are presented in Figure <ref>, which shows that our model consistently outperforms the baseline methods on all metrics as the value of K increases.
Second, we test our model with different message-passing backbones on the user-item interaction graph. In typical recommendation algorithms, user-item interaction data is often modeled as a user-item bipartite graph, as in our user-item interaction graph component. Numerous algorithms have focused on representation learning using user-item graphs <cit.>. FISM <cit.> proposed to improve representation learning by training with the similarity matrix between items; for each training sample, it removes the direct link between the current user and its positive items when calculating the objective. We adopt the FISM message-passing method and the MF method in the user-item interaction modeling part of instead of the LightGCN message passing to learn the embeddings.
The results on the Instacart dataset are shown in Table <ref>. We find that our model still performs the best even when the backbone is changed, which demonstrates the robustness and stability of our method.
§.§.§ Denoising Capability (RQ4)
In this section, we further investigate the denoising performance of the proposed method. We introduce varying levels of noise to the user-item interaction graph and the basket hypergraph, specifically adding 20%, 40%, 60%, and 80% noisy interactions, and then observe the performance of . We consistently observe that outperforms both LightGCN and SGL across all noise ratios. As depicted in Figure <ref>, as the ratio of added noise increases from 20% to 80%, our method experiences a mere 6.99% decline in Recall@60, whereas LightGCN's performance drops by 18.12% and SGL's by 9.00%. Similarly, the drop in NDCG@60 for our method is only 2.22%, whereas LightGCN experiences a 10.06% decline and SGL an 8.49% decline. These results serve as further evidence of the robustness of our model in handling noisy interactions. The evaluation under more advanced attack algorithms <cit.> is left as future work.
§ RELATED WORK
In this section, we briefly review the related work on basket recommendation and contrastive learning for recommendation.
§.§ Basket Recommendation
Basket recommendation (BR) <cit.> aims to recommend a set of items that are most likely to be purchased by targeted users based on their shopping records. The basic idea is to capture correlations and perform the prediction through Collaborative Filtering (CF) methods <cit.> or Markov Chain (MC) methods <cit.>. FPMC <cit.> was proposed to capture both sequential effects and long-term user taste, where each user-specific transition is modeled by an underlying MC. Focusing on predicting the items in users' next baskets, DREAM <cit.> used an LSTM network <cit.> to capture the sequential features of the basket sequence. Some studies explore within-basket recommendation, which also considers the content of the current basket. Triple2vec <cit.> improves within-basket recommendation by constructing the training samples as triplets of a user and two items. DBFM <cit.> contributes a basket recommendation solution based on factorization with a deep neural network. PerNIR <cit.> models the short-term interests of users represented by the current basket, as well as their long-term interests, to address the task. Recently, GNNs have shown great potential in capturing the interactions among users and items in representation learning <cit.>. BasConv <cit.> is proposed to capture heterogeneous interaction signals on a UBI graph by designing three different aggregators for the user, basket, and item entities. MITGNN <cit.> combines a translation-based model with a GNN to improve within-basket recommendation via retrieving multi-intent patterns.
In practice, noisy interactions easily occur during shopping behaviors and hinder the capture of users' preferences and item properties, while few works explicitly consider denoising in BR. For NBR, CLEA <cit.> used a denoising generator to denoise the baskets and then extract relevant items to enhance the recommendation performance. However, the identification of noisy interactions and the elimination of their impact on heterogeneous behaviors in basket recommendation have been overlooked in the current literature.
§.§ Contrastive Learning for Recommendation
Contrastive learning aims to learn representations by minimizing the distance between positive instances while pushing negative instances far apart in the representation space <cit.>. It has achieved great success in graph <cit.> and hypergraph <cit.> representation learning, as well as in recommender systems <cit.> and dense retrieval <cit.>. A contrastive multi-view graph representation learning algorithm <cit.> was introduced for learning both node- and graph-level representations by contrasting structural views of graphs. CLRec <cit.> bridged the theoretical gap between the contrastive learning objective and the traditional recommendation objective, showing that directly performing contrastive learning can help reduce exposure bias. Neighborhood-enriched Contrastive Learning (NCL) <cit.> explicitly incorporates potential neighbors into contrastive pairs by introducing the neighbors of a user (or an item) from the graph structure and the semantic space respectively. CMP-PSP <cit.> effectively leverages contrastive multi-view learning and pseudo-siamese networks to mitigate data sparsity and noisy interactions. CCFCRec <cit.> adopts contrastive collaborative filtering for cold-start item recommendation, applying contrastive learning to transfer co-occurrence signals to the content CF module. KACL <cit.> performs contrastive learning across the user-item interaction view and the KG view to incorporate the knowledge graph into recommendation while eliminating the noise it may introduce. Despite these advancements, the potential of contrastive learning for denoising in basket recommendation remains underexplored. Our work aims to harness the power of contrastive learning for behavior denoising and for integrating diverse behaviors.
§ CONCLUSION
In this paper, we formulate basket recommendation by integrating the basket behaviors into user-item interaction modeling with a light hypergraph message passing schema and a joint self-supervised learning paradigm. We systematically illustrate the noise issues in the basket recommendation problem. A general basket recommendation framework via noise-tolerated contrastive learning is proposed accordingly to improve robustness against the noise in BR. Specifically, we suppress the cross-behavior noise by making use of additional supervision signals with cross-behavior contrastive learning. To inhibit the within-behavior noise in the user and basket interactions, we propose to exploit invariant properties of the recommenders w.r.t. augmentations through within-behavior contrastive learning. In addition, a novel consistency-aware augmentation approach is designed to better identify noisy interactions by comprehensively considering the two types of interactions. Extensive experimental results over three datasets on the within-basket recommendation task show that our proposed method outperforms state-of-the-art baselines in terms of various ranking metrics. One direct extension of our work is using as an effective tool to suppress the noise in multi-view learning and help model fusion. Additionally, incorporating temporal dynamics and the order of the baskets into our current framework is a promising avenue for future research. Moreover, we would like to test our method with more advanced backbones and self-supervised learning strategies. We are also interested in evaluating the bias <cit.> and fairness <cit.> of our approach under various settings.
This work is supported by National Science Foundation under Award No. IIS-1947203, IIS-2002540, IIS-2117902, IIS-2137468, Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no. 1024178 from the USDA National Institute of Food and Agriculture, and IBM-Illinois Discovery Accelerator Institute - a new model of an academic-industry partnership designed to increase access to technology education and skill development to spur breakthroughs in emerging areas of technology. The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government.
"authors": [
"Xinrui He",
"Tianxin Wei",
"Jingrui He"
],
"categories": [
"cs.IR"
],
"primary_category": "cs.IR",
"published": "20231127213810",
"title": "Robust Basket Recommendation via Noise-tolerated Graph Contrastive Learning"
} |
Learning with Noisy Low-Cost MOS for Image Quality Assessment via Dual-Bias Calibration
Lei Wang, Qingbo Wu, Member, IEEE, Desen Yuan, King Ngi Ngan, Life Fellow, IEEE, Hongliang Li, Senior Member, IEEE, Fanman Meng, Member, IEEE and Linfeng Xu
The authors are with the School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China.
Learning based image quality assessment (IQA) models have obtained impressive performance with the help of reliable subjective quality labels, where the mean opinion score (MOS) is the most popular choice. However, in view of the subjective bias of individual annotators, the labor-abundant MOS (LA-MOS) typically requires a large collection of opinion scores from multiple annotators for each image, which significantly increases the learning cost. In this paper, we aim to learn robust IQA models from low-cost MOS (LC-MOS), which only requires very few opinion scores or even a single opinion score for each image. More specifically, we consider the LC-MOS as a noisy observation of the LA-MOS and enforce the IQA model learned from LC-MOS to approach the unbiased estimate of LA-MOS. In this way, we represent the subjective bias between LC-MOS and LA-MOS, and the model bias between the IQA predictions learned from LC-MOS and LA-MOS (i.e., the dual-bias), as two latent variables with unknown parameters. By means of expectation-maximization based alternating optimization, we can jointly estimate the parameters of the dual-bias, which suppresses the misleading effect of LC-MOS via a gated dual-bias calibration (GDBC) module. To the best of our knowledge, this is the first exploration of robust IQA model learning from noisy low-cost labels. Theoretical analysis and extensive experiments on four popular IQA datasets show that the proposed method is robust toward different bias rates and annotation numbers and significantly outperforms the other learning based IQA models when only LC-MOS is available. Furthermore, we also achieve comparable performance with respect to the other models learned with LA-MOS.
image quality assessment, low-cost MOS, labor-abundant MOS, subjective bias, noisy label learning.
§ INTRODUCTION
Image quality assessment (IQA) is an active research area in multimedia technology, which is critical for evaluating and developing various perceptually friendly image/video applications <cit.>. With the rapid development of deep neural networks (DNNs), learning based IQA models are receiving more and more attention. Various advanced networks have been developed for IQA and have achieved impressive performance recently. It is important to note that the success of learning based IQA models highly relies on reliable subjective quality labels, which determine the direction of optimizing these advanced networks.
Unfortunately, due to the subjective bias of individual annotators, it is highly challenging to collect reliable subjective quality labels, especially on a large-scale image database. As the most widely used label for IQA under ideal conditions, the typical mean opinion score in existing datasets is labor-abundant (LA-MOS), which requires a large collection of opinion scores from multiple annotators (usually more than 15) for each image. Under a strictly standardized subjective experiment, the mean value of all annotators' opinion scores is finally used to represent their majority decision, which attempts to eliminate the subjective bias of individual annotators. As shown in Fig. <ref> (a), the subjective bias follows a Gaussian distribution, and the variance of the distribution diminishes as the annotation number increases. Meanwhile, different methods for the pre-/post-screening of subjects have also been developed in BT500 <cit.>, P910 <cit.>, and P913 <cit.> to further refine the regular LA-MOS by rejecting or weakening the votes of a subject whose behavior is biased and inconsistent with the others.
Despite efficient subjective bias reduction, the aforementioned methods are only applicable to LA-MOS, where the annotators are sufficiently numerous for each image. On the one hand, this label collection process is time-consuming and expensive, which significantly increases the learning cost. On the other hand, the model bias caused by the subjective bias, and the corresponding calibration strategy, are both rarely investigated in previous works.
In this paper, we aim to learn robust IQA models from low-cost MOS (LC-MOS), which only requires very few opinion scores or even a single opinion score for each image. LC-MOS arises in practical scenarios, and its noise takes the form of a subjective bias following a Gaussian distribution, which differs from the uniform distribution assumed in the classification noisy-label setting. We then explore the negative impact of LC-MOS on learning based IQA models. As shown in Fig. <ref> (b), "IQA_LA-LA" represents the MSE curve between the output of the model trained with LA-MOS and its fitting target LA-MOS; "IQA_LC-LC" represents the MSE curve between the output of the model trained with LC-MOS and its fitting target LC-MOS; "IQA_LC-LA" represents the MSE curve between the output of the model trained with LC-MOS and its potential fitting target LA-MOS. Although "IQA_LC-LC" (the orange dotted line) can reach 0 as well as "IQA_LA-LA" (the blue line), "IQA_LC-LA" deviates from 0 (the orange solid line), which means that the prediction of the IQA model trained with LC-MOS is far from its real potential fitting target LA-MOS.
To address this issue, we consider the LC-MOS as a noisy observation of the LA-MOS and enforce the IQA model learned from LC-MOS to approach the unbiased estimate of LA-MOS. In this way, we represent the dual-bias, including the subjective bias between LC-MOS and LA-MOS and the model bias between the IQA models learned from LC-MOS and LA-MOS, as two latent variables with unknown parameters. By means of expectation-maximization based alternating optimization, we can jointly estimate the parameters of the dual-bias and adaptively suppress the misleading effect of LC-MOS via a gated dual-bias calibration (GDBC) module. Meanwhile, GDBC trained with LC-MOS also achieves performance comparable, in terms of four metrics, to the IQA model learned from LA-MOS. To the best of our knowledge, this is the first exploration of robust IQA model learning from noisy LC-MOS.
For clarity, the main contributions of this paper are summarized in the following:
* We propose an alternating optimization based dual-bias (including subjective bias and model bias) calibration method for robust IQA model learning from noisy LC-MOS, which significantly reduces the learning cost.
* We develop a GDBC module to adaptively update the estimated subjective bias by measuring the stability of IQA model learning in neighboring iterations, which reduces the risk of overadjustment.
* We verify the effectiveness of the proposed GDBC method through both theoretical and experimental analyses, and it achieves state-of-the-art performance when very few opinion scores are available for each image.
§ RELATED WORK
In this section, we first briefly review learning based IQA methods, and then introduce the related works on robust classification models and subjective annotation under practical conditions.
§.§ Learning based Image Quality Assessment
As an important research area in image processing, IQA is essential in many applications such as image compression, image restoration, and medical imaging, where the quality of the visual information is critical for accurate analysis and interpretation. Recently, learning based IQA has received considerable attention due to the powerful ability of DNNs. For example, Ma et al. <cit.> proposed a multi-task DNN for IQA with learning based end-to-end optimization utilizing auxiliary distortion information. Zhang et al. <cit.> designed a deep bilinear convolutional neural network (DBCNN) for IQA with both synthetic and authentic distortions. Talebi et al. <cit.> proposed a convolutional neural network predicting score distributions for neural image assessment (NIMA). Su et al. <cit.> proposed a self-adaptive hyper-network architecture for IQA (HyperIQA) in the wild. Zhang et al. <cit.> proposed a unified IQA model and an approach for training it on both synthetic and realistic distortions. Sun et al. <cit.> developed a distortion graph representation learning framework for IQA. In addition, new paradigms for learning based IQA have emerged in complex scenarios, for example, meta-learning IQA for fast adaptation <cit.>, generative adversarial networks (GANs) for active inference <cit.>, an evolvable predictive head for continual learning <cit.>, disentangled representations based on variational auto-encoders (VAEs) for image generation <cit.>, vision-language correspondence <cit.> for multimodal scenarios, and perceptual attacks for security scenarios <cit.>. Since deep learning relies on end-to-end optimization of quality score regression, the superior performance of the above learning based IQA models heavily depends on reliable subjective quality labels, especially on large-scale image databases. Moreover, there is still a strong demand to construct new datasets for many emerging scenarios and tasks, including distorted images <cit.>, virtual reality (VR) <cit.>, light field <cit.>, hazy images <cit.>, smartphone photography <cit.>, etc. Therefore, it is necessary and urgent to explore feasible IQA models under subjective bias scenarios.
§.§ Robust Model Learning and Subjective Label Screening
Completely clean labels are difficult to obtain under practical conditions. Researchers have found that over-parameterized networks can learn any complex function from corrupted labels <cit.>. Zhang et al.
<cit.> demonstrated that DNNs can easily fit the entire training dataset with any corrupted label ratio, ultimately leading to less generality on the test dataset.To train efficient DNNs in noisy cases, many methods have been proposed including robust loss functions <cit.>, regularization <cit.>,robust network architecture <cit.>,sample selection <cit.>,training strategy <cit.>, and etc. These methods focus on robust classification problems and the noise of perturbed labels is assumed to obey a uniform distribution. The design of most methods is based on noise tolerance and one-hot label properties such as sparsity regularization <cit.>.These methods cannot be directly transferred to LC-MOS for IQA regression tasks due to the inconsistency of data properties.In the practical condition for IQA, the opinion score of each annotator is biased against the ideal objective label, which is different from artificial perturbation in classification problems <cit.>. There are different acquisition and processing methods for IQA datasets <cit.>, such as ensuring the consistency of the subjective evaluation environment <cit.>, adding post-processing to the collected data <cit.>, and discarding outliers <cit.>. These datasets are collected through crowdsourcing and require multiple annotators' opinion scores, which is very time-consuming and expensive.Furthermore, the International Telecommunication Union (ITU) and researchers have proposed a number of standards <cit.> for crowd-sourced data processing to eliminate the subjective bias in MOS, containing the model based on subject rejection in BT500 <cit.>,the model based on subject bias/inconsistency modeling and maximum likelihood estimation in P910 <cit.>, and the model based on subject bias removal in P913 <cit.>.However, these methods rely on sufficient annotation information and cannot be directly transferred to IQA models under LC-MOS scenarios.§ METHODSIn this section, we first introduce some preliminary of subjective bias problem formulation. Then, we describe the misleading effect of LC-MOS, and then propose an expectation-maximization based dual-bias calibration scheme. §.§ PreliminariesLet x ∈𝒳⊂ℝ^d denote an d-dimensional image, and y/y^*∈𝒴 denote its corresponding LC-MOS/LA-MOS, i.e.,y=1/M∑_m=1^M R_my^*=1/S∑_s=1^S R_s ,where R_m/R_s denote the m th/s th manual annotation for x, {R_m}_m=1^M⊂{R_s}_s=1^S, and M≪ S. To simplify the discussion, we represent the LC-MOS and LA-MOS with a normalized label space, i.e., 𝒴⊂[0,1], and a higher y/y^* means better subjective quality in terms of very few/abundant manual annotations. In this context, the subjective bias z is defined as the difference between y and y^*, i.e.,z=y-y^*,which is assumed to follow a Gaussian distribution with unknown parameters, i.e., z∼𝒩(μ_z, σ_z^2).Learning based IQA aims to obtain a parametric model f_θ: 𝒳→𝒴 that maps the image space into the subjective quality based label space, where θ is learned from paired image and label samples. Typically, the LA-MOS serves as the label, and we derive the optimal parameter θ^* by minimizing the risk R defined in the following, θ^*=min_θ∈ΘR(θ),where Θ is the available parameter set for f_θ, and R(θ) is measured with the expectation of the loss between f_θ(x) and y^* on the training set, i.e.,R(θ)=𝔼_x,y^*[ℒ(f_θ(x),y^*)],where ℒ(·,·) is the loss of the IQA model with respect to the label for each training sample. To save the annotation costs, this paper tries to replace parts of LA-MOS with LC-MOS, which may mislead the IQA model learning. 
Let θ^* denote the optimal parameters learned from y^*. We define the model bias b by
b=f_θ(x)-f_θ^*(x).
In the following, a theoretical analysis of LC-MOS's misleading effect is conducted for the popular square error loss function. Then, we put forward our gated dual-bias calibration (GDBC) method to efficiently suppress the aforementioned misleading effect, which enforces f_θ(x) to approach y^* rather than y.
§.§ Misleading Effect of LC-MOS
Following the discussion in <cit.>, we denote the noisy labels with bias rate η by y^η, i.e.,
y^η= y, with probability η;  y^*, with probability 1-η.
Then, given a collection of training samples with the previous noisy labels y^η, IQA model learning usually employs the square error to measure the loss of f_θ(x) with respect to y^η, i.e.,
ℒ(f_θ(x),y^η)=‖f_θ(x)-y^η‖_2^2.
Let R^η(θ)=𝔼_x, y^η[ℒ(f_θ(x),y^η)] denote the risk with bias rate η, and let θ^*,η denote the parameter at the global minimum of the risk R^η(·). To facilitate the analysis, we rewrite the risk R^η(θ) with bias rate η in the following expanded form, i.e.,
R^η(θ)= 𝔼_x,y^η[ℒ(f_θ(x), y^η)]
= 𝔼_x𝔼_y^*|x𝔼_y^η|x, y^*[ℒ(f_θ(x), y^η)]
= 𝔼_x𝔼_y^*|x{(1-η) ℒ(f_θ(x), y^*)+η𝔼_z|x, y^*[ℒ(f_θ(x), y^*+z)]}
= 𝔼_x𝔼_y^*|x{(1-η) ℒ(f_θ(x), y^*)+η𝔼_z|x, y^*[ℒ(f_θ(x), y^*)-2z(f_θ(x)-y^*)+z^2]}
= 𝔼_x𝔼_y^*|x{ℒ(f_θ(x), y^*)+η𝔼_z|x, y^*[z^2-2z(f_θ(x)-y^*)]}
= R(θ)-η𝔼_x𝔼_y^*|x[2μ_z(f_θ(x)-y^*)-μ_z^2-σ_z^2].
Then, given the optimal and random parameters of R(·), i.e., θ^* and θ, we can represent their risk difference D under R^η(·) by
D= R^η(θ^*)-R^η(θ)
= R(θ^*)-R(θ)-η𝔼_x𝔼_y^*|x[2μ_z(f_θ^*(x)-y^*)-μ_z^2-σ_z^2]+η𝔼_x𝔼_y^*|x[2μ_z(f_θ(x)-y^*)-μ_z^2-σ_z^2]
= R(θ^*)-R(θ)+η𝔼_x𝔼_y^*|x[2μ_z b].
Although R(θ^*)-R(θ)≤ 0, we cannot guarantee that R^η(θ^*)-R^η(θ)≤ 0 due to the uncertainty of μ_z and b, which are both probably nonzero and of the same sign. That is, θ^* does not necessarily equal θ^*,η when training with the square error loss. In addition, this misleading effect of LC-MOS becomes more significant when a higher bias rate η or a larger subjective bias μ_z is applied in Eq. (<ref>). Therefore, it is urgent to develop a robust learning framework to train the IQA model from low-cost noisy labels.
§.§ Expectation-maximization based Dual-bias Calibration
We propose an expectation-maximization based dual-bias calibration to alleviate the above-mentioned challenges of biased LC-MOS. The framework of GDBC is shown in Fig. <ref>. The algorithm alternates between a model update and a bias update. During the model update, the neural network parameters are updated through backpropagation, and image features are learned for quality evaluation. During the bias update, we re-estimate the subjective bias, and the updated subjective bias in turn helps the model learn. The whole process is mutually reinforcing, so the robustness of the model is improved. According to Eq. (<ref>), if we knew z and replaced y^η by y^η-z in Eq. (<ref>), both the subjective and model biases would be pushed to zero, which improves the noisy label tolerance of IQA model learning. To this end, we consider Z={(z_i)_i=1^n} as the latent variables of the n training samples and represent the likelihood of the unknown parameter set Ω={θ,(ω_i)_i=1^n} by
L(Ω;Y^η)= ∏_i=1^n p(y_i^η|θ,ω_i),
where Y^η={(y_i^η)_i=1^n} denotes the n independently observed noisy labels and ω_i={μ_z_i,σ_z_i}.
Following <cit.>, we employ the expectation-maximization based iterative method to derive Ω for maximizing L(Ω; Y^η), which helps us achieve the dual-bias calibration. Let Ω^t={θ^t,(ω_i^t)_i=1^n} denote the estimated parameters in the tth iteration. We first conduct the Expectation step (E-step) by computing the conditional expectation of the log-likelihood,Q(Ω|Ω^t)= 𝔼 _Z | Y^η,Ω^t[ log L(Ω;Y^η,Z)]= 𝔼 _Z | Y^η,Ω^t[ log∏_i=1^n p(y_i^η,z_i|Ω)]= ∑_i=1^n∫ p(z_i| y_i^η,Ω^t)[log p(y_i^η| z_i,Ω)+.. log p(z_i|Ω)]dz_i .For dual-bias calibration, we want to use the calibrated label y_i^η-z_i to supervise the IQA model f_θ(x_i), which could be transformed to maximizing the posterior of the observed noisy label y_i^η with f_θ(x_i)+z_i. Based on this requirement, we assume that y_i^η_| z_i,Ω∼𝒩(f_θ(x_i)+z_i,σ_y_i^η| z_i,Ω), which enforces f_θ(x_i) to approach y_i^* according to Eq. (<ref>). In addition, the conditional distribution of z_i_| y_i^η,Ω^t could be derived from the Bayes theoremp(z_i| y_i^η,Ω^t)=p(y_i^η| z_i,Ω^t)p(z_i|Ω^t)/∫ p(y_i^η| z_i,Ω^t)p(z_i|Ω^t)dz_i,where we obtain[Detailed proof is given in the supplementary material.] that z_i_| y_i^η,Ω^t∼𝒩(μ_z_i| y_i^η,Ω^t,σ_z_i| y_i^η,Ω^t) andμ_z_i| y_i^η,Ω^t=σ_y_i^η| z_i,Ω^t^2μ_z_i^t-σ_z_i|Ω^t^2[f_θ^t(x_i)-y_i^η]/σ_y_i^η| z_i,Ω^t^2+σ_z_i|Ω^t^2. By plugging the probability density of z_i_| y_i^η,Ω^t, y_i^η_| z_i,Ω and z_i_|Ω into Eq. (<ref>), we rewrite the conditional expectation of the log-likelihood byQ(Ω|Ω^t)= -1/2∑ _i=1^n[ σ_z_i| y_i^η,Ω^t^2 + (μ_z_i - μ_z_i| y_i^η,Ω^t)^2/σ_z_i|Ω^2.. + σ_z_i| y_i^η,Ω^t^2 + [y_i-f_θ(x)- μ_z_i| y_i^η,Ω^t]^2/σ_y_i^η| z_i,Ω^2..+log (2 πσ_z_i|Ω^2)+log (2 πσ_y_i^η| z_i,Ω^2)]. Then, by setting the derivative of Q(Ω|Ω^t) w.r.t. μ_z_i to zero, we could obtain the updated parameter of subjective biasin the Maximization step (M step), i.e.,μ_z_i^t+1=αμ_z_i^t+(1-α)c_i^t,where α=σ_y_i^η| z_i,Ω^2/σ_z_i|Ω^2+σ_y_i^η| z_i,Ω^2, c_i^t=f_θ^t(x_i)-y_i^η, and μ_z_i^0=0. It is noted that the fitting error c_i^t usually converges to a small value when training with clean labels in several iterations <cit.>. Unbounded updating of Eq. (<ref>) may result in overadjustment. To address this issue, we develop a gated dual-bias calibration (GDBC) module by measuring the stability of IQA model learning in neighboring iterations, i.e.,μ_z_i^t+1= αμ_z_i^t+(1-α) c^t_i, C_1 > t_hϵμ_z_i^t,otherwise,where C=[c_i^t-t_h,⋯,c_i^t]^T represents the fitting errors of the IQA model in the neighboring t_h iterations, and the subjective bias calibration only activates when C's l_1 norm exceeds a threshold ϵ.Since μ_z_i^t+1 maximizes the probability of z_i, we could suppress the model bias by removing μ_z_i^t+1 from the noisy LC-MOS y_i^η, and optimize the IQA model parameters viaθ^t+1=θ^t-λ∇_θ[1/n∑_i=1^nℒ(f_θ^t(x_i),y_i^η-μ_z_i^t+1)],where λ is the learning rate, and ∇_θ denotes the gradient operator. We repeat the alternative subjective bias and model bias calibrations until the maximum iteration steps are reached. Finally, we could output a robust IQA model even when learning with noisy LC-MOS. § EXPERIMENTS§.§ ProtocolWe evaluate the proposed GDBC method on four popular IQA databases, i.e., VCL <cit.>, CSIQ <cit.>, LIVEC <cit.> and KONIQ <cit.>, which only provide the LA-MOS for all images. Let M denote the annotation number to be simulated in the experiments. 
We develop the following three LC-MOS settings according to the annotation resources of the different databases:
* When the raw opinion scores of all subjects are available (such as VCL <cit.>), we randomly sample M scores for each image.
* When the pairwise MOS and standard deviation are available (such as CSIQ <cit.> and LIVEC <cit.>), we use them to simulate a normal distribution of subjective ratings <cit.> and randomly sample M scores from this distribution.
* When the empirical distribution of the raw opinion scores is available (such as KONIQ <cit.>), we randomly sample M scores from this empirical distribution.
For each image, the mean value of the previously sampled M scores is used as the LC-MOS, where M is smaller than the minimum requirement of the ITU recommendations <cit.>. More specifically, we investigate four candidate numbers, i.e., M={1, 2, 4, 8}. To validate the universality of the proposed method, we select four representative deep neural networks for IQA model learning, i.e., ResNet-50 <cit.>, DBCNN <cit.>, HyperIQA <cit.> and NIMA <cit.>. Following the criterion of <cit.>, we randomly split each database into non-overlapping training and testing sets, which cover 80% and 20% of the samples respectively. To eliminate the performance bias for a specific LC-MOS or train-test split, we repeat the random LC-MOS sampling and train-test splitting 10 times for each database and report the median results across all trials for evaluation. Let LA-MOS denote the ground-truth subjective quality of each image. Three widely used metrics are employed to evaluate the performance of the IQA models learned from the LA-MOS, the LC-MOS, and our GDBC method, i.e., Pearson's Linear Correlation Coefficient (PLCC) <cit.>, Spearman's Rank Order Correlation Coefficient (SRCC) <cit.>, and Kendall's Rank Correlation Coefficient (KRCC). In addition, to highlight the performance improvement of the proposed method when training with LC-MOS, we also introduce a relative index Δ(%), i.e.,
Δ(%)=m_w/ GDBC-m_w/o GDBC/m_w/o GDBC× 100
where m_w/ GDBC and m_w/o GDBC denote the evaluation metric of an IQA model trained with and without the GDBC module, respectively. In our experiments, we train the IQA models with the Adam optimizer <cit.> and set α to 0.9, t_h to 1 or 3, ϵ to 0.01 or 0.1, the number of epochs to 50, and the batch size to 16. The optimal learning rates are found by grid search and scheduled by the cosine annealing rule <cit.>. During training and inference, we scale and center-crop 320× 320× 3 sub-images from the original images without changing their aspect ratios. All experiments are performed on a workstation with a single NVIDIA GeForce RTX 3090 GPU.
§.§ Performance evaluation for the GDBC
To demonstrate the effectiveness of the proposed GDBC method, we first investigate the most challenging case with the setting of η=100% and M=1, which means that all training samples are labeled with the noisy LC-MOS and only one annotation is available for each image.
§.§.§ Effectiveness of subjective bias calibration
Let y_i^η-μ_z_i^t+1 in Eq. <ref> denote the calibrated LC-MOS. In Table <ref>, we report the subjective bias calibration results when GDBC is applied to different deep IQA models, which measure the mean square error (MSE) between the calibrated LC-MOS and its corresponding LA-MOS. Due to the differences in the LC-MOS settings, the MSE between the raw LC-MOS and LA-MOS differs across the IQA databases, where LIVEC and CSIQ present the largest and smallest bias respectively.
Meanwhile, it is seen that the MSE values of the calibrated LC-MOS are much smaller than those of the raw LC-MOS when GDBC is applied to any of the deep IQA models, with MSE reductions ranging from 64% to 77%. These results verify the effectiveness and universality of the proposed method in reducing the subjective bias. In the following, we further investigate the benefits of subjective bias calibration for IQA model learning.
§.§.§ Guidance for the IQA model learning
In Fig. <ref>, we show the loss curves of different deep IQA models when they are trained on multiple popular databases with different labels or training strategies. More specifically, GDBCIQA_LC-LA represents the MSE between an IQA model's output and the LA-MOS when LC-MOS and the GDBC strategy are used for training; IQA_LA-LA represents the MSE between an IQA model's output and the LA-MOS when LA-MOS is used for training; IQA_LC-LC represents the MSE between an IQA model's output and the LC-MOS when LC-MOS is used for training; IQA_LC-LA represents the MSE between an IQA model's output and the LA-MOS when LC-MOS is used for training. It is seen that both IQA_LA-LA and IQA_LC-LC converge to very small values, except that the convergence of IQA_LC-LC is slower than that of IQA_LA-LA, which is consistent with the observations in existing noisy label learning tasks <cit.>. When we compare IQA_LC-LC with IQA_LC-LA, it is found that the model trained with LC-MOS presents a clear overfitting toward LC-MOS, whose loss quickly rises with respect to LA-MOS after several epochs. Meanwhile, the overfitting issue of IQA_LC-LC is very obvious for LIVEC but negligible for CSIQ. Referring to Table <ref>, this observation confirms that a larger bias in the raw LC-MOS is more misleading for IQA model learning. By contrast, our GDBCIQA_LC-LA follows a tendency more consistent with IQA_LA-LA, which efficiently suppresses the rise of the loss with respect to LA-MOS across all IQA models and databases. This clearly shows the benefit of LC-MOS calibration for robust IQA model learning.
§.§.§ Effectiveness of model bias calibration
In this section, we evaluate the LA-MOS prediction performance of different deep IQA models trained with different labels and strategies. As shown in Table <ref>, it is not surprising that all IQA models trained with the LA-MOS achieve better performance than their counterparts trained with the LC-MOS, which are highlighted by the gray background. These results further confirm the LC-MOS overfitting issue observed in Fig. <ref>. Meanwhile, we can also find that the proposed GDBC efficiently improves the performance of all IQA models trained with LC-MOS, where all Δ(%) values are positive across the different databases. In addition, as shown in each row of Table <ref>, we find that the performance improvement of GDBC is relatively smaller on CSIQ than on the other databases. There are two possible reasons for this result. First, the CSIQ database contains the smallest LC-MOS bias, as shown in Table <ref>, which limits the room for improvement by GDBC. Second, in comparison with the authentic distortion databases (i.e., KONIQ and LIVEC), the diversity of the CSIQ database is limited, which reduces the discrepancy between the training and testing sets and the potential overfitting risk. Looking at each column of Table <ref>, it is seen that the performance improvement of GDBC changes significantly across different IQA models, where DBCNN and HyperIQA obtain higher Δ(%).
A possible reason lies in bias propagation. Unlike the plain single-branch architectures of ResNet and NIMA, DBCNN and HyperIQA are composed of multiple branches, which are beneficial for capturing more comprehensive quality-aware features. However, when training with LC-MOS, the subjective bias can easily propagate between different branches and amplify the misleading effect of LC-MOS. In such cases, the advantage of GDBC becomes more significant.§.§ Robustness toward different subjective bias settings Besides different IQA models and databases, we also investigate the robustness of the proposed method to different subjective bias settings by changing the bias rate and the annotation number. More specifically, a higher bias rate and a smaller annotation number cause greater subjective bias. In this section, all experiments are conducted on the KONIQ database. §.§.§ Different bias rates Focusing on the challenging cases, we first set the annotation number to the smallest value, i.e., M=1, and investigate the performances of different IQA models learned with LC-MOS under the following bias rates: η={100%, 80%, 60%, 40%}. As shown in Table <ref>, the performances of all IQA models gradually increase as η decreases. By contrast, the performance improvement Δ(%) keeps falling as η decreases. Since the impact of subjective bias diminishes with a decreasing bias rate, the risk of overfitting to LC-MOS also decreases, which limits the room for improvement that GDBC can offer. Even so, the proposed GDBC improves the LA-MOS prediction accuracy for all IQA models across different bias rates, with all Δ(%) values positive. This verifies that the proposed GDBC is robust to variations in the bias rate.§.§.§ Different annotation numbers On the other hand, we set the bias rate to the highest value, i.e., η=100%, and investigate the performances of different IQA models with the following annotation numbers: M={1, 2, 4, 8}. As M increases, the LC-MOS approaches the LA-MOS, which also reduces the impact of subjective bias. Similarly, in Table <ref>, we can find that all IQA models' performances increase as the annotation number goes up. Meanwhile, the advantage of the proposed GDBC keeps shrinking. Nevertheless, our method still achieves positive Δ(%) values across all annotation numbers in this investigation, which verifies the robustness of GDBC to variations in the annotation number. §.§ Parameter analysis In this section, we further investigate the impact of the parameters α and ϵ in Eq. <ref>, which control the updating intensity and frequency of μ_z_i in our GDBC. More specifically, a larger α reduces the updating intensity and tends to keep μ_z_i unchanged in each iteration. Similarly, a larger ϵ reduces the updating frequency of μ_z_i over the whole training process, and vice versa. All experiments are conducted on the KONIQ database under the most challenging setting, i.e., η=100% and M=1. Referring to Eq. <ref>, we can infer that α ranges from 0 to 1. Without loss of generality, we investigate the performance variation of GDBC under seven different α values, i.e., {0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0}. As shown in Fig. <ref>, the performances of all IQA models first rise and then fall as α gradually grows from 0 to 1. An overly large α reduces the updating intensity and results in under-adjustment of μ_z_i; by contrast, an overly small α increases the updating intensity and may result in over-adjustment of μ_z_i.
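Since Eq. <ref> is not reproduced in this excerpt, the roles of α and ϵ can be illustrated with a gated update of an assumed exponential-moving-average form. Both the gating criterion and the update rule below are plausible stand-ins consistent with the description above, not the paper's exact equation.

```python
def gated_bias_update(mu, residual, alpha=0.9, eps=0.01):
    """One gated update of the estimated subjective bias mu_{z_i}.

    residual -- current bias evidence for image i, e.g. y_i - f(x_i)
                (a hypothetical choice; the paper's Eq. may differ).
    alpha    -- larger alpha => lower updating intensity per step.
    eps      -- larger eps => lower updating frequency; a very large eps
                disables updates altogether, so that (with mu initialised
                at zero) GDBC degrades to a plain model learning directly
                from the noisy LC-MOS.
    """
    if abs(residual) <= eps:                         # gate closed
        return mu                                    # keep mu unchanged
    return alpha * mu + (1.0 - alpha) * residual     # gate open: EMA step
```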
Although different IQA models prefer different α settings, their performance changes become very slight when α ranges from 0.5 to 0.9. In view of the average performance of all IQA models, we experimentally set α to 0.9 in the proposed method. In addition, we also investigate the performance variations of GDBC under different ϵ values, i.e., {0.001, 0.01, 0.1, 1.0}. As shown in Fig. <ref>, when ϵ grows from 0.001 to 0.1, the performance of GDBC is insensitive to the threshold setting: the curves are very close to each other across all IQA models, and the SRCC fluctuations are within 0.02. However, when ϵ reaches 1, the performance of GDBC drops significantly across all IQA models. As mentioned above, an overly large ϵ tends to disable the updating of μ_z_i and degrade GDBC to a plain model, which directly learns from the noisy LC-MOS and suffers from its misleading effect. Based on these investigations, we experimentally set ϵ to 0.01 in our GDBC. §.§ Comparison with separate subjective bias calibration and model bias calibration methods Besides the effectiveness and robustness validation, we further compare our alternating-optimization-based dual-bias calibration model with separate subjective bias calibration and model bias calibration methods. On the one hand, many standardized label screening models have been developed to recover high-quality MOS from multiple noisy human annotations, which focus on subjective bias calibration, such as the subject rejection (SR) model in ITU-R BT.500 <cit.>, the maximum likelihood estimation (MLE) model in ITU-T P.910 <cit.>, and the subject bias removal (SBR) model in ITU-T P.913 <cit.>. In view of the label screening models' requirement for multiple annotations, we simplify the most challenging case of LC-MOS learning to the following setting, i.e., η=100% and M=2, and the repetition number of the subjective test is set to 1. On the other hand, regarding the subjective bias as label noise, recent noisy label learning models are also applicable to the model bias calibration in our LC-MOS learning task, such as the generalized cross entropy (GCE) loss <cit.> and symmetric cross entropy (SCE) loss <cit.> based methods. It is noted that these representative noisy label learning methods, such as GCE and SCE, were originally developed for multiple-output classification networks and are hard to apply to the single-output regression networks that dominate IQA (a sketch of the GCE loss is given below). For compatibility, we evaluate all bias calibration models on a multiple-output IQA model, NIMA <cit.>. Meanwhile, in view of the availability of raw human annotations, we conduct all experiments on the VCL database in this section. As shown in Table <ref>, both the label screening and noisy label learning methods deteriorate the performance of NIMA when LC-MOS is used for training. Only the proposed GDBC improves the IQA model learning. Firstly, unlike our iterative optimization strategy, existing label screening methods like SR, MLE, and SBR adopt a one-shot post-processing paradigm, which cannot identify, let alone suppress, the errors introduced when calibrating the human annotations. When the available annotations are very few, the calibration error easily becomes considerable and further interferes with the IQA model learning. Secondly, to avoid overfitting issues, existing noisy label learning methods like GCE and SCE focus on suppressing the gradient backpropagation, which slows down the rate of convergence and easily causes underfitting in turn.
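For reference, the following is a minimal sketch of the GCE loss in its standard form from the noisy label learning literature, (1 - p_y^q)/q; it is a generic illustration of why gradients on low-confidence (likely noisy) samples are damped, not the exact configuration used in this comparison.

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross entropy: mean of (1 - p_y^q) / q.

    probs  -- (N, K) predicted class probabilities;
    labels -- (N,) integer class indices.
    q -> 0 recovers the usual cross entropy, while q = 1 gives an
    MAE-like loss; intermediate q trades convergence speed for
    robustness to label noise.
    """
    p_y = probs[np.arange(len(labels)), labels]
    return float(np.mean((1.0 - p_y ** q) / q))
```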
These reported results further verify the superiority and necessity of jointly calibrating the subjective bias and the model bias.§ CONCLUSION In this paper, we explore the new challenge of learning robust IQA models from noisy low-cost MOS (LC-MOS), which requires very few opinion scores for each image. By jointly inferring the subjective bias and the model bias, we develop a plug-and-play gated dual-bias calibration (GDBC) module, which drives the IQA model learned from LC-MOS toward an unbiased estimation of the labor-abundant MOS (LA-MOS). Extensive experiments on four popular IQA databases and four representative deep IQA models verify the effectiveness of the proposed method, which significantly outperforms the IQA models directly learned from LC-MOS and even achieves performance comparable to that of the IQA models learned from the expensive and time-consuming LA-MOS. Meanwhile, we also verify the superiority of the proposed method over the existing label screening and noisy label learning methods. | http://arxiv.org/abs/2311.15846v1 | {
"authors": [
"Lei Wang",
"Qingbo Wu",
"Desen Yuan",
"King Ngi Ngan",
"Hongliang Li",
"Fanman Meng",
"Linfeng Xu"
],
"categories": [
"cs.CV",
"eess.IV"
],
"primary_category": "cs.CV",
"published": "20231127141154",
"title": "Learning with Noisy Low-Cost MOS for Image Quality Assessment via Dual-Bias Calibration"
} |
It is widely accepted that the width-luminosity relation used to standardize normal Type Ia supernovae (SNe Ia) breaks down in underluminous, 1991bg-like SNe Ia. This breakdown may be due to the choice of parameter used as a stand-in for the width of the SN Ia light curve. Using the colour stretch parameter s_BV instead of older parameters resolves this issue. Here, I assemble a sample of 13 nearby 1991bg-like SNe Ia from the literature, all of which have independent host-galaxy distance moduli and little to no reddening. I use Gaussian process regression to fit the light curves of these SNe in U/u, g, B, V, R/r, and I/i, and measure their peak absolute magnitudes. I find statistically significant (>5σ confidence level) correlations between the peak absolute magnitudes of the underluminous SNe and their s_BV values in the range 0.2<s_BV<0.6. These correlations are broadly consistent with fits to s_BV<0.7 SNe Ia with preliminary B- and V-band peak absolute magnitudes from the Carnegie Supernova Project and significantly inconsistent with similar fits to normal and transitional SNe Ia (with 0.7<s_BV<1.1). The underluminous width-luminosity relation shown here needs to be properly calibrated with a homogeneous sample of 1991bg-like SNe Ia, after which it could be used as a rung on a new cosmological distance ladder. With surface-brightness fluctuations (or another non-Cepheid method) used to calibrate distances to nearby 1991bg-like SNe, such a ladder could produce an independent measurement of the Hubble-Lemaître Constant, H_0. methods: distance scale – supernovae: general § INTRODUCTION Type Ia supernovae (SNe Ia) are famous for their use as standard candles, a use that led to the discovery of the accelerating expansion of the Universe and the existence of dark energy <cit.>. Yet, SNe Ia are not truly standard; they are standardizable. Independently, <cit.> and <cit.> both showed that more luminous SNe Ia had wider light curves, i.e., they took longer to rise to peak and then decline. This correlation was corroborated and modernized by <cit.> and has been in use ever since (for a historical review of the width-luminosity relation, see ; for a general review of the history and physics of SNe, see ). <cit.> found a tight correlation between the peak absolute luminosities of ten SNe Ia and the widths of their light curves, parameterized by Δ m_15(B), the number of magnitudes by which a SN fades 15 days after it peaks in the B band. Variations on this parameter are now routinely used to fit SN Ia light curves and standardize them for use in cosmology. Examples include Δ, used in <cit.>; s, used in <cit.>; and x1, used in <cit.>. SNe are divided into several classes, each with its own subclasses. The same is true for SNe Ia. The majority of SNe Ia fall into the so-called `normal' subclass, but there are also overluminous 1991T-like SNe <cit.>, transitional 1986G-like SNe <cit.>, underluminous 1991bg-like SNe <cit.>, peculiar SNe Iax <cit.>, and more. The width-luminosity relation works exceptionally well for normal SNe Ia and extends to overluminous SNe Ia as well.
However, this relation breaks down in underluminous, fast-evolving 1991bg-like SNe <cit.>. Consequently, these SNe Ia are routinely excised from cosmology samples, and the belief that `1991bg-like SNe are not standard candles' has become dogma. <cit.> introduced a different stretch parameter, s_BV, defined as the time at which the B-V colour curve reaches peak, relative to the time of B-band peak, divided by 30 days. When used as the basis for their own light-curve fitter <cit.>, this parameter made it possible to standardize not just the light curves of normal and overluminous SNe Ia, but also underluminous SNe Ia as well. And yet, underluminous 1991bg-like SNe Ia are still not treated as standardizable candles. Some of this has to do with the continued use of light-curve fitters that are optimized for normal SNe Ia, and some of it has to do with the now deeply-ingrained belief that 1991bg-like SNe fall off the width-luminosity relation. To wit, even <cit.>, who used the s_BV-based fitter to measure the Hubble-Lemaître Constant, H_0, still imposed an s_BV>0.5 cut that excluded most of the 1991bg-like SNe in their sample. There are hints that the bias against 1991bg-like SNe may be starting to fade. <cit.> and <cit.> noted that distances derived from the light curves of the 1991bg-like SNe 2006mr and 2015bo were consistent with independent distance measurements of their host galaxies. <cit.> also included a figure of fitted B-band peak absolute magnitudes vs. s_BV that extended all the way down to 1991bg-like SNe (0.2<s_BV<0.6). The apparent correlation between the peak B-band luminosities and s_BV values of the 1991bg-like SNe is striking but not elaborated on. In this paper, I set out to test whether there truly is a correlation between the luminosities of 1991bg-like SNe and the widths of their light curves, as parameterized by s_BV. In Section <ref>, I assemble a sample of 13 1991bg-like SNe that suffered little to no dust extinction and that exploded in galaxies with independent distance measurements. Since all SN Ia light-curve fitters are optimized towards normal SNe Ia, in Section <ref> I fit the light curves of the SNe myself with Gaussian process regression (GPR). The results, shown in Section <ref>, are clear: there are statistically significant correlations between the peak absolute magnitudes of 1991bg-like SNe and s_BV in multiple filters (g, B, V, R/r, and I/i). This leads to Section <ref>, in which I discuss the potential use of 1991bg-like SNe in a new distance ladder, independent from the one currently dominating SN Ia cosmology. § SAMPLE A review of the literature on 1991bg-like SNe revealed that only ∼ 30 such objects had been methodically observed and published. Of these, only 13 had well-sampled multiwavelength light curves, exploded in galaxies with independent distance measurements, and suffered relatively little host-galaxy dust extinction (E(B-V)<0.2 mag). The latter is important because most estimates of host-galaxy extinction are a byproduct of light-curve fitting, which I avoid in this paper. The SN sample used in this work is shown in Table <ref>. Of the 13 SNe, 11 are used to test for the presence of a width-luminosity relation, while the other two SNe are held back as test cases (Section <ref>). Below, I describe the sources of the light curves used in this work.
Basic parameters of the SNe in the sample, including the names and types of galaxies in which they exploded, are summarized in Table <ref>. The eponymous SN 1991bg was extensively studied by <cit.>, <cit.>, and <cit.>. Here, I use the peak magnitudes reported by <cit.>, based on a combined analysis of their data and those of <cit.> and <cit.>. UBVRI observations of SN 1997cn were published by <cit.> and <cit.>. <cit.> and <cit.> published BVRI light curves of SN 1998de, obtained as part of the Lick Observatory Supernova Search (LOSS; ). UBVRI observations of SN 1999by were published by <cit.>, who presented an in-depth study of this object. Additional BVRI observations, obtained by LOSS, were published by <cit.>. BVI and BVRI observations of SN 1999da were published by <cit.> and <cit.>, respectively. Here, I use the latter as they contain the additional R filter. The Carnegie Supernova Project (CSP; ) has published ugriBV observations of 16 underluminous, 1991bg-like SNe. Of these, only the host galaxies of SNe 2005bl, 2006mr, 2007ax, and 2008R had independent distance measurements and host-galaxy reddening <0.2 mag <cit.>. An independent analysis of SN 2005bl <cit.> estimated a host-galaxy reddening of 0.17±0.08 mag and an R_V value of 3.1, which I use here. Additional observations of SN 2007ax were published by <cit.>, who also estimated a negligible host-galaxy reddening of <0.01 mag. SN 2015bo was discovered by <cit.> and studied by <cit.>. Here, I use the BVugri light curves reported by the latter. BVri observations of SNe 2017ejb and 2019so were obtained with the Las Cumbres Observatory's network of 1-m telescopes <cit.> and reported by <cit.>. SN 2017ejb was discovered by the D<40 Mpc survey <cit.> and spectroscopically classified as a 1991bg-like SN by <cit.> and <cit.>. SN 2019so was discovered by the Asteroid Terrestrial-impact Last Alert System (ATLAS; ) and reported by <cit.>. Although a spectroscopic classification of this SN has not yet been published, its light curves are consistent with those of other 1991bg-like SNe. The discovery of SN 2021qvv by ATLAS was reported by <cit.>. Spectra and UBVgri imaging of SN 2021qvv were obtained as part of the Las Cumbres Observatory's Global Supernova Project and published by <cit.>. The latter classified it as a 1991bg-like SN similar to SN 2006mr. § ANALYSIS Here, I describe the steps taken to measure the peak absolute magnitudes of the SNe in my sample: measuring s_BV (Section <ref>), obtaining the distance moduli of the SN host galaxies (Section <ref>) and the reddening suffered by the SNe (Section <ref>), and measuring peak apparent and absolute magnitudes (Section <ref>).§.§ Colour stretch SNe 1997cn, 2005bl, 2006mr, 2007ax, 2008R, 2015bo, 2017ejb, and 2019so have been fit by other authors <cit.>. For these SNe, I use the published s_BV values. For the rest of the sample (namely, SNe 1991bg, 1998de, 1999by, 1999da, and 2021qvv), I have derived s_BV by fitting their light curves with GPR fits, as done by <cit.>. To ensure that the GPR-derived s_BV values are not systematically offset from the previously published values, I use GPR to fit the light curves of 16 1991bg-like SNe observed by CSP (SNe 2005bl, 2005ke, 2006gt, 2006mr, 2007N, 2007ax, 2007ba, 2008R, 2009F, 2006bd, 2006hb, 2007al, 2007jh, 2008bd, 2008bi, and 2008bt) and compare my values to those published by <cit.>. As Fig. <ref> shows, there is a tight correlation between the two techniques, with s_BV(published)=(0.95±0.14) s_BV(GPR)-(0.06±0.08) and a reduced χ^2 value of 15.2/14=1.1.
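For concreteness, the colour-stretch measurement and the cross-calibration above can be sketched as follows. This is a minimal illustration: scikit-learn is used as a generic stand-in for the GPR implementation, and the default kernel settings are assumptions rather than the actual fitting configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def s_bv_from_gpr(t_B, mag_B, t_V, mag_V):
    """Colour stretch: s_BV = (t[max(B-V)] - t[B max]) / 30 d."""
    gp_B = GaussianProcessRegressor(normalize_y=True).fit(t_B[:, None], mag_B)
    gp_V = GaussianProcessRegressor(normalize_y=True).fit(t_V[:, None], mag_V)
    lo, hi = max(t_B.min(), t_V.min()), min(t_B.max(), t_V.max())
    grid = np.linspace(lo, hi, 2000)
    B = gp_B.predict(grid[:, None])
    V = gp_V.predict(grid[:, None])
    t_bmax = grid[np.argmin(B)]          # magnitudes: minimum = brightest
    t_bv_max = grid[np.argmax(B - V)]    # peak of the B-V colour curve
    return (t_bv_max - t_bmax) / 30.0

def to_published_scale(s_gpr):
    """Linear cross-calibration fitted above (cf. Fig. ref)."""
    return 0.95 * s_gpr - 0.06
```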
The GPR-derived s_BV values of the pre-2000 SNe have been corrected accordingly. The resultant s_BV values, along with Δ m_15(B) measurements from the literature, are presented in Table <ref>. §.§ Distance moduli No single work on nearby distance moduli encompasses all of the host galaxies in my sample. Instead of picking and choosing between the various measurements available for each galaxy, I take a weighted mean of all measurements conducted from the year 2000 onwards. For some host galaxies, which have >10 measurements, this results in an underestimated uncertainty on the distance modulus. I do not include distance modulus measurements derived from fitting the light curves of the SNe in the sample, for two reasons. First, this would be circular reasoning, as the goal of this paper is to ascertain whether 1991bg-like SNe can be used as standardizable candles. Second, as <cit.> noted, because current light-curve fitters are not optimized for 1991bg-like SNe, the distance moduli derived in this manner often span a range of ∼ 1 mag. The sources of the measurements used for each host galaxy are gathered in Appendix A, and the resultant distance moduli and their uncertainties are collected in Table <ref>. §.§ Reddening Galactic line-of-sight reddening values towards the SNe in the sample, E(B-V)_MW, were extracted from the reddening maps produced by <cit.>, while host-galaxy reddening values, E(B-V)_Host, were extracted from the literature (see Section <ref>). Unlike Galactic reddening, host-galaxy reddening is derived from light-curve fits. Since most light-curve fitters are not optimized towards fitting 1991bg-like light curves, these values are suspect. Thus, I chose to include only SNe whose literature E(B-V)_Host values were <0.1 mag. The sole exception is SN 2005bl, with E(B-V)_Host=0.17±0.08 mag <cit.>. As it exploded in an elliptical galaxy, I chose to keep it in the sample. To avoid any systematic uncertainties that have to do with how host-galaxy reddening and R_V values are derived, and since, by design, the host-galaxy reddenings are low, I chose not to take them into account in the following analysis. A test in which these values were included in the derivation of the peak absolute magnitudes showed that including them did not impact the results in a significant manner. §.§ Peak magnitudes Here, I follow <cit.> and fit the light curves of the SNe in my sample using the Matlab 2023a GPR-fitting routine with its default parameters. These fits provide the date and magnitude of maximum light in each of six filters: U/u, g, B, V, R/r, and I/i. To estimate the uncertainties on the derived parameters, I repeat the fit in each filter 100 times, each time varying the photometry of the SN randomly according to the measurement uncertainties. The mean and standard deviation of the results are then taken as the estimated peak date and its 1σ uncertainty. The same method is applied to estimating the peak apparent magnitude in each filter. Magnitude uncertainties smaller than 0.01 mag are rounded up. The resultant peak apparent magnitudes are presented in Table <ref>. In Fig. <ref>, I compare the peak apparent magnitudes derived using this method with published values from the literature, derived using various fitting techniques. In all filters except U/u, the mean difference between the GPR and literature values is <0.1 mag.
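The Monte Carlo peak-fitting scheme just described can be sketched as follows; scikit-learn again stands in for the Matlab 2023a routine, and the kernel and evaluation-grid choices are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def mc_peak(t, mag, mag_err, n_trials=100, seed=1):
    """Monte Carlo estimate of the peak date and apparent magnitude.

    Each trial perturbs the photometry by its 1-sigma errors and refits;
    the means and standard deviations over the 100 trials give the peak
    values and their uncertainties, with magnitude uncertainties rounded
    up to 0.01 mag as in the text.
    """
    rng = np.random.default_rng(seed)
    grid = np.linspace(t.min(), t.max(), 1000)[:, None]
    t_pk, m_pk = [], []
    for _ in range(n_trials):
        noisy = mag + rng.normal(0.0, mag_err)
        gp = GaussianProcessRegressor(normalize_y=True).fit(t[:, None], noisy)
        pred = gp.predict(grid)
        i = int(np.argmin(pred))         # brightest = minimum magnitude
        t_pk.append(grid[i, 0])
        m_pk.append(pred[i])
    m_err = max(float(np.std(m_pk)), 0.01)
    return np.mean(t_pk), np.std(t_pk), np.mean(m_pk), m_err
```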
In U/u, this value rises to 0.2 mag, but this is due to a single outlier, SN 2005bl. With the fitted peak apparent magnitudes, Galactic reddening values, and distance moduli described above, the peak absolute magnitude of the i-th SN in my sample in filter λ is simply: M(λ)_max,i = m(λ)_max,i - A(λ)_i - μ_i, where μ_i is the distance modulus of the i-th SN, and A(λ)_i is derived from the Galactic line-of-sight reddening, E(B-V)_MW,i, the Galaxy's R_V=3.1, and the <cit.> extinction law. Due to the nature of the SN sample, which includes SNe observed by different surveys in different filters (e.g., U/u, R/r, and I/i), I do not attempt to perform K or S corrections. The resultant peak absolute magnitudes are presented in Table <ref> and Fig. <ref>. § RESULTS Here, I fit the peak absolute magnitudes measured in the previous section and show that correlations between them and the colour stretch of the SNe are statistically significant in all filters except U/u (Section <ref>); compare these correlations to fits applied to independent CSP data (Section <ref>); and demonstrate the effect of these correlations on our understanding of SNe 1997cn and 1999da (Section <ref>). §.§ Width-luminosity relations I perform two independent tests to determine whether the peak absolute magnitudes in a specific filter are correlated with colour stretch, s_BV, the results of which are collected in Table <ref>. First, all filters display a strong, negative Pearson correlation coefficient, r, with r≲-0.9 in g, B, V, and R/r, and r∼-0.7 in U/u and I/i. The same test shows that these correlations have statistically significant p-values of <0.05 in all filters except U/u (and a more stringent p<0.01 in all filters except U/u and I/i). Second, a likelihood-ratio test (see, e.g., ) prefers a 1st-order polynomial over a 0th-order polynomial fit to the data in every filter with a statistical significance of >5σ. The same test shows that a 2nd-order polynomial is not preferred over a 1st-order polynomial. Based on the likelihood-ratio test, I fit 1st-order polynomials to the peak absolute magnitudes in each filter, the results of which are presented in Table <ref> and Fig. <ref>. An analysis of the residuals produced by subtracting the best-fitting polynomials from the data reveals a scatter, S, of <0.3 mag in all filters except U/u, and <0.2 mag in B and V. I attribute the high scatter in U/u to the difficulty in estimating the time of peak from photometry that, only in this band, did not always cover the rise and peak of the light curves (SNe 1999by, 2007ax, 2008R, and 2021qvv). Note that, even without K and S corrections, and with distance moduli derived using a variety of techniques, the scatter found here is on par with, or better than, the scatter found in the initial width-luminosity relations measured by <cit.> for normal SNe Ia. §.§ Comparison with CSP In Fig. <ref>, I compare my B- and V-band measurements to similar CSP peak absolute magnitudes from <cit.>. These measurements are preliminary (the finalized measurements will appear in the published version of that work; W. B. Hoogendam, private communication) but provide a good sanity test and the option to compare the correlations found above with the well-established width-luminosity relation of normal SNe Ia. The B-band CSP data were obtained by fitting the light curves using the EBV2 model, while the V-band peak absolute magnitudes were obtained via GPR fits, which explains the larger scatter in this filter <cit.>.
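The absolute-magnitude calculation and the polynomial-order selection described above amount to the following sketch; it assumes Gaussian uncertainties, under which the difference in χ^2 between nested weighted fits is χ^2-distributed with degrees of freedom equal to the difference in free parameters.

```python
import numpy as np
from scipy.stats import chi2

def peak_abs_mag(m_max, a_lam, mu):
    """M(lam)_max = m(lam)_max - A(lam) - mu (the equation above)."""
    return m_max - a_lam - mu

def fit_chi2(s_bv, M, M_err, order):
    """Weighted least-squares polynomial fit and its chi^2."""
    coef = np.polyfit(s_bv, M, order, w=1.0 / M_err)
    resid = (M - np.polyval(coef, s_bv)) / M_err
    return coef, float(np.sum(resid ** 2))

def prefer_higher_order(s_bv, M, M_err, low=0, high=1):
    """p-value against the low-order model for nested polynomial fits."""
    _, chi2_low = fit_chi2(s_bv, M, M_err, low)
    _, chi2_high = fit_chi2(s_bv, M, M_err, high)
    return chi2.sf(chi2_low - chi2_high, df=high - low)
```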
To recreate figure 9 from <cit.>, I have removed all V-band measurements with uncertainties σ[M(V)]>0.3 mag and σ(s_BV)>0.06 (they used a stricter σ(s_BV)>0.05, but my sample has an average ⟨σ(s_BV)⟩∼0.06). No cuts were applied to the B-band data. A likelihood-ratio test applied to the entire range of data in the B band prefers a 2nd-order polynomial over a 1st-order polynomial at a >5σ confidence level, indicating that the underluminous SNe Ia follow a different trend than the normal ones. The scatter in this filter is small enough that this preference can be seen by eye as well. The same likelihood-ratio test conducted on the V-band CSP measurements finds no statistically significant preference for a 2nd-order polynomial over a 1st-order polynomial, or even for a 1st-order polynomial over a 0th-order polynomial. Based on the results of the likelihood-ratio test on the B-band CSP data, I fit the CSP measurements in both filters with 1st-order polynomials in two s_BV ranges: 0.7<s_BV<1.1 to capture all normal SNe Ia (as well as transitional SNe Ia), and s_BV<0.7 to include only underluminous SNe Ia. In the B band, I find that the normal SNe Ia follow M(B)_CSP= (-1.5 ± 0.1)s_BV -(17.9±0.1) (χ^2=5.7/42=0.14) and underluminous SNe Ia follow M(B)_CSP= (-5.3 ± 0.2)s_BV -(15.5±0.1) (χ^2=22/9=2.4). In the V band, the correlations are M(V)_CSP= (-1.8 ± 0.1)s_BV -(17.2±0.1) (χ^2=1010/55=18) and M(V)_CSP= (-4.0 ± 0.2)s_BV -(15.7±0.1) (χ^2=498/15=33), respectively. The higher χ^2 values per degree of freedom in this band are due to the larger scatter of the measurements. Overplotting the B- and V-band peak absolute magnitudes from Fig. <ref> shows that the width-luminosity relations from my sample are in broadly good agreement with the fits to the CSP data. This is somewhat surprising, given the heterogeneity of my sample, the variety of methods used to derive distance moduli to the SN host galaxies, and the lack of K corrections. It is too much to assume that all of these factors would conspire to produce correlations that coincidentally agreed with the homogeneous, well-calibrated CSP data. Instead, we must conclude that there are, indeed, clear correlations between the peak absolute magnitudes of underluminous SNe Ia and the widths of their light curves, as parameterized by s_BV.§.§ Two test cases When constructing the SN sample in Section <ref>, the underluminous SNe 1997cn and 1999da were set aside. Below, I describe the circumstances that led me to isolate each of these SNe and how they help validate the width-luminosity relations from Section <ref>. §.§.§ SN 1997cn SN 1997cn was discovered by <cit.> and observed by both <cit.> and <cit.>, but only after it had already peaked and begun to decline. <cit.> estimated that the SN reached B-band maximum light on JD 2450587.5, while <cit.> fit the <cit.> light curves and derived a peak date of 2450580.7 ± 0.7, 6.8 d earlier than the <cit.> date and ∼ 2 d prior to its discovery. A spectroscopic fit <cit.> yielded a third date: 2450588.3, 0.8 d after the <cit.> estimate. As the light curves of this SN did not cover the date of maximum light, I did not include it in my sample. The fit conducted by <cit.> provided an s_BV value of 0.35±0.06. As this value is consistent with the s_BV value of SN 2007ax, 0.355±0.061 <cit.>, the light curves of the two SNe can be compared to each other directly. In Fig.
<ref>, I fit the V-band measurements of SN 2007ax to those of SN 1997cn taken by both <cit.> and <cit.> by varying the date of maximum light and the offset between the light curves. I find t(B)_max = 2450586.1 ± 0.2 d, with SN 1997cn 0.72±0.01 mag fainter than SN 2007ax, and χ^2_r=2.1. Fig. <ref> includes peak absolute magnitudes of SN 1997cn estimated by adding an offset of 0.72 mag to the peak apparent magnitudes of SN 2007ax. The resultant values are consistent with peak apparent magnitudes estimated by GPR fits to the data from <cit.>, with the exception of the g band, in which there were no previous measurements. In all filters, the resultant peak absolute magnitudes of SN 1997cn are consistent with the width-luminosity relations within the scatter of the calibration sample. This indicates that the width-luminosity relations measured in Section <ref> can be used to estimate the peak absolute magnitudes of individual underluminous SNe Ia.§.§.§ SN 1999da The host galaxy of SN 1999da only had two independent, post-2000 distance measurements (both from ), which averaged to μ = 33.12 ± 0.27 mag. However, distance moduli derived by fitting the light curves of this SN by various groups <cit.>, using different light-curve fitters, produced an average value of μ_SN = 33.95 ± 0.08, ∼ 0.8 mag larger. Fig. <ref> shows SN 1999da twice, using each of the distance moduli above. In all four filters in which this SN was observed, the peak absolute magnitudes derived using the host-based distance modulus are systematically dimmer than the width-luminosity relations. The peak absolute magnitudes derived with the light-curve-based distance modulus, on the other hand, are consistent with the correlations. This indicates that, at least for this SN, existing light-curve fitters already did a good job of deriving its luminosity distance.§ CONCLUSIONS In this work, I have assembled a sample of 13 underluminous, 1991bg-like SNe Ia from the literature to test whether this type of SN Ia can be used as a standardizable candle. To minimize systematic uncertainties, I have chosen SNe that suffered little to no reddening and that have well-sampled light curves. Even so, the sample still suffers from systematic uncertainties that stem from the variety of filters in which the SNe were observed as well as a lack of K and S corrections. This sample still exhibits statistically significant (>5σ) correlations between the peak absolute magnitudes of the underluminous SNe and the widths of their light curves, as parameterized by the colour stretch, s_BV. The underluminous width-luminosity relations measured here are similar in nature to those of normal SNe Ia but with significantly steeper slopes. The B- and V-band width-luminosity relations are consistent with preliminary CSP data, which both strengthens the existence of the correlations and shows that the light-curve fitter used by CSP is able to standardize underluminous as well as normal SNe Ia. The correlations shown here are further strengthened by two test cases, SNe 1997cn and 1999da, which were not included in the calibration sample. The work done here shows that underluminous, 1991bg-like SNe Ia can be standardized and hence used to measure extragalactic distances. However, these width-luminosity relations must first be properly calibrated by compiling a homogeneous sample of 1991bg-like SNe Ia observed by a single survey in a single set of filters. Even then, because 1991bg-like SNe are rarer than normal SNe Ia (15 vs.
70 per cent of all SNe Ia, respectively; ), their inclusion in SN Ia cosmology samples will not make much of a difference. Moreover, their inherent dimness makes them more susceptible to Malmquist bias, which limits their use as probes of dark energy.Instead, I suggest using 1991bg-like SNe Ia as a rung on a new cosmological distance ladder, one that would provide an independent check on the distance ladder based on Cepheids and normal SNe Ia. While extremely successful (e.g., ), the latter ladder is biased towards star-forming galaxies, which host young Cepheid variables. Underluminous SNe Ia, on the other hand, are mostly found in massive, passive galaxies <cit.>. This makes them less prone to host-galaxy reddening and removes the systematic uncertainty produced by the mass step found in Hubble residuals (e.g., ). Bereft of Cepheids, the galaxies that host underluminous SNe Ia will have to be calibrated by other means, such as surface-brightness fluctuations. Several groups have already attempted to use this method to either measure H_0 directly or to calibrate the host galaxies of normal SNe Ia (e.g., ). A new distance ladder, based on surface-brightness fluctuations (or some other non-Cepheid method) and 1991bg-like SNe Ia could provide an independent measurement of H_0, a necessary step towards resolving the current `Hubble tension' (e.g., ).§ ACKNOWLEDGMENTSI thank Chris R. Burns and Saurabh W. Jha for helpful discussions, and Willem B. Hoogendam and the CSP for sharing their data with me. This research has made use of NASA's Astrophysics Data System and the NASA/IPAC Extragalactic Database (NED), which are funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. § DATA AVAILABILITYAll data used in this work, except for the CSP data shown in Fig. <ref>, are public and have been published elsewhere. The measurements and fits produced by this work are included in the tables throughout this paper. Any remaining data or measurements will be shared on request to the corresponding author. [Ajhar, Tonry, Blakeslee, Riess& SchmidtAjhar et al.2001]2001ApJ...559..584A Ajhar E. A.,Tonry J. L.,Blakeslee J. P.,Riess A. G., Schmidt B. P.,2001, @doi [] 10.1086/322342, https://ui.adsabs.harvard.edu/abs/2001ApJ...559..584A 559, 584[Bernardi, Alonso, da Costa, Willmer, Wegner, Pellegrini, Rité& MaiaBernardi et al.2002]2002AJ....123.2159B Bernardi M.,Alonso M. V.,da Costa L. N.,Willmer C. N. A., Wegner G.,Pellegrini P. S.,Rité C., Maia M. A. G.,2002, @doi [] 10.1086/339697, https://ui.adsabs.harvard.edu/abs/2002AJ....123.2159B 123, 2159[Blakeslee, Lucey, Barris, Hudson& TonryBlakeslee et al.2001]2001MNRAS.327.1004B Blakeslee J. P.,Lucey J. R.,Barris B. J.,Hudson M. J., Tonry J. L.,2001, @doi [] 10.1046/j.1365-8711.2001.04800.x, https://ui.adsabs.harvard.edu/abs/2001MNRAS.327.1004B 327, 1004[Blakeslee et al.,Blakeslee et al.2009]2009ApJ...694..556B Blakeslee J.
P.,et al., 2009, @doi [] 10.1088/0004-637X/694/1/556, https://ui.adsabs.harvard.edu/abs/2009ApJ...694..556B 694, 556[Blakeslee et al.,Blakeslee et al.2010]2010ApJ...724..657B Blakeslee J. P.,et al., 2010, @doi [] 10.1088/0004-637X/724/1/657, https://ui.adsabs.harvard.edu/abs/2010ApJ...724..657B 724, 657[Blakeslee, Jensen, Ma, Milne& GreeneBlakeslee et al.2021]2021ApJ...911...65B Blakeslee J. P.,Jensen J. B.,Ma C.-P.,Milne P. A., Greene J. E.,2021, @doi [] 10.3847/1538-4357/abe86a, https://ui.adsabs.harvard.edu/abs/2021ApJ...911...65B 911, 65[Blondin & TonryBlondin & Tonry2007]Blondin2007 Blondin S.,Tonry J. L.,2007, @doi [] 10.1086/520494, http://adsabs.harvard.edu/abs/2007ApJ...666.1024B 666, 1024[Brout & ScolnicBrout & Scolnic2021]2021ApJ...909...26B Brout D.,Scolnic D.,2021, @doi [] 10.3847/1538-4357/abd69b, https://ui.adsabs.harvard.edu/abs/2021ApJ...909...26B 909, 26[Brown et al.,Brown et al.2013]2013PASP..125.1031B Brown T. M.,et al., 2013, @doi [] 10.1086/673168, https://ui.adsabs.harvard.edu/abs/2013PASP..125.1031B 125, 1031[Burns et al.,Burns et al.2011]2011AJ....141...19B Burns C. R.,et al., 2011, @doi [] 10.1088/0004-6256/141/1/19, http://adsabs.harvard.edu/abs/2011AJ....141...19B 141, 19[Burns et al.,Burns et al.2014]2014ApJ...789...32B Burns C. R.,et al., 2014, @doi [] 10.1088/0004-637X/789/1/32, https://ui.adsabs.harvard.edu/abs/2014ApJ...789...32B 789, 32[Burns et al.,Burns et al.2018]2018ApJ...869...56B Burns C. R.,et al., 2018, @doi [] 10.3847/1538-4357/aae51c, https://ui.adsabs.harvard.edu/abs/2018ApJ...869...56B 869, 56[Cantiello, Blakeslee, Raimondo, Mei, Brocato& CapaccioliCantiello et al.2005]2005ApJ...634..239C Cantiello M.,Blakeslee J. P.,Raimondo G.,Mei S.,Brocato E., Capaccioli M.,2005, @doi [] 10.1086/491694, https://ui.adsabs.harvard.edu/abs/2005ApJ...634..239C 634, 239[Cantiello, Blakeslee, Raimondo, Brocato& CapaccioliCantiello et al.2007]2007ApJ...668..130C Cantiello M.,Blakeslee J.,Raimondo G.,Brocato E., Capaccioli M.,2007, @doi [] 10.1086/521295, https://ui.adsabs.harvard.edu/abs/2007ApJ...668..130C 668, 130[Cantiello, Biscardi, Brocato& RaimondoCantiello et al.2011]2011AA...532A.154C Cantiello M.,Biscardi I.,Brocato E., Raimondo G.,2011, @doi [] 10.1051/0004-6361/201116667, https://ui.adsabs.harvard.edu/abs/2011A A...532A.154C 532, A154[Cantiello et al.,Cantiello et al.2013]2013AA...552A.106C Cantiello M.,et al., 2013, @doi [] 10.1051/0004-6361/201220756, https://ui.adsabs.harvard.edu/abs/2013A A...552A.106C 552, A106[Cardelli, Clayton& MathisCardelli et al.1989]1989ApJ...345..245C Cardelli J. A.,Clayton G. C., Mathis J. S.,1989, @doi [] 10.1086/167900, http://adsabs.harvard.edu/abs/1989ApJ...345..245C 345, 245[Chen et al.,Chen et al.2022]2022ApJS..259...53C Chen P.,et al., 2022, @doi [] 10.3847/1538-4365/ac50b7, https://ui.adsabs.harvard.edu/abs/2022ApJS..259...53C 259, 53[Ciardullo, Feldmeier, Jacoby, Kuzio de Naray, Laychak& DurrellCiardullo et al.2002]2002ApJ...577...31C Ciardullo R.,Feldmeier J. J.,Jacoby G. H.,Kuzio de Naray R., Laychak M. B., Durrell P. 
R.,2002, @doi [] 10.1086/342180, https://ui.adsabs.harvard.edu/abs/2002ApJ...577...31C 577, 31[Conley et al.,Conley et al.2008]2008ApJ...681..482C Conley A.,et al., 2008, @doi [] 10.1086/588518, http://adsabs.harvard.edu/abs/2008ApJ...681..482C 681, 482[Contreras et al.,Contreras et al.2010]2010AJ....139..519C Contreras C.,et al., 2010, @doi [] 10.1088/0004-6256/139/2/519, https://ui.adsabs.harvard.edu/abs/2010AJ....139..519C 139, 519[Courtois & TullyCourtois & Tully2012]2012ApJ...749..174C Courtois H. M.,Tully R. B.,2012, @doi [] 10.1088/0004-637X/749/2/174, https://ui.adsabs.harvard.edu/abs/2012ApJ...749..174C 749, 174[Di Valentino et al.,Di Valentino et al.2021]2021CQGra..38o3001D Di Valentino E.,et al., 2021, @doi [Classical and Quantum Gravity] 10.1088/1361-6382/ac086d, https://ui.adsabs.harvard.edu/abs/2021CQGra..38o3001D 38, 153001[Feldmeier, Jacoby& PhillipsFeldmeier et al.2007]2007ApJ...657...76F Feldmeier J. J.,Jacoby G. H., Phillips M. M.,2007, @doi [] 10.1086/510897, https://ui.adsabs.harvard.edu/abs/2007ApJ...657...76F 657, 76[Ferrarese et al.,Ferrarese et al.2000]2000ApJ...529..745F Ferrarese L.,et al., 2000, @doi [] 10.1086/308309, https://ui.adsabs.harvard.edu/abs/2000ApJ...529..745F 529, 745[Filippenko et al.,Filippenko et al.1992a]1992AJ....104.1543F Filippenko A. V.,et al., 1992a, @doi [] 10.1086/116339, http://adsabs.harvard.edu/abs/1992AJ....104.1543F 104, 1543[Filippenko et al.,Filippenko et al.1992b]1992ApJ...384L..15F Filippenko A. V.,et al., 1992b, @doi [] 10.1086/186252, http://adsabs.harvard.edu/abs/1992ApJ...384L..15F 384, L15[Filippenko, Li, Treffers& ModjazFilippenko et al.2001]2001ASPC..246..121F Filippenko A. V.,Li W. D.,Treffers R. R., Modjaz M.,2001, in Paczynski B.,Chen W.-P., Lemme C.,eds,Astronomical Society of the Pacific Conference Series Vol. 246, IAU Colloq. 183: Small Telescope Astronomy on Global Scales. p. 121[Foley et al.,Foley et al.2013]2013ApJ...767...57F Foley R. J.,et al., 2013, @doi [] 10.1088/0004-637X/767/1/57, http://adsabs.harvard.edu/abs/2013ApJ...767...57F 767, 57[Ganeshalingam et al.,Ganeshalingam et al.2010]2010ApJS..190..418G Ganeshalingam M.,et al., 2010, @doi [] 10.1088/0067-0049/190/2/418, http://adsabs.harvard.edu/abs/2010ApJS..190..418G 190, 418[Garnavich et al.,Garnavich et al.2004]2004ApJ...613.1120G Garnavich P. M.,et al., 2004, @doi [] 10.1086/422986, http://adsabs.harvard.edu/abs/2004ApJ...613.1120G 613, 1120[Garnavich et al.,Garnavich et al.2023]2023ApJ...953...35G Garnavich P.,et al., 2023, @doi [] 10.3847/1538-4357/ace04b, https://ui.adsabs.harvard.edu/abs/2023ApJ...953...35G 953, 35[Gómez & RichtlerGómez & Richtler2004]2004AA...415..499G Gómez M.,Richtler T.,2004, @doi [] 10.1051/0004-6361:20034610, https://ui.adsabs.harvard.edu/abs/2004A A...415..499G 415, 499[GrahamGraham2002]2002MNRAS.334..859G Graham A. W.,2002, @doi [] 10.1046/j.1365-8711.2002.05548.x, https://ui.adsabs.harvard.edu/abs/2002MNRAS.334..859G 334, 859[GraurGraur2022]2022supe.book.....G Graur O.,2022, Supernova. The MIT Press, Cambridge, MA[Graur, Bianco, Huang, Modjaz, Shivvers, Filippenko, Li& EldridgeGraur et al.2017a]2017ApJ...837..120G Graur O.,Bianco F. B.,Huang S.,Modjaz M.,Shivvers I., Filippenko A. V.,Li W., Eldridge J. J.,2017a, @doi [] 10.3847/1538-4357/aa5eb8, http://adsabs.harvard.edu/abs/2017ApJ...837..120G 837, 120[Graur, Bianco, Modjaz, Shivvers, Filippenko, Li& SmithGraur et al.2017b]2017ApJ...837..121G Graur O.,Bianco F. B.,Modjaz M.,Shivvers I.,Filippenko A. 
V.,Li W., Smith N.,2017b, @doi [] 10.3847/1538-4357/aa5eb7, http://adsabs.harvard.edu/abs/2017ApJ...837..121G 837, 121[Graur et al.,Graur et al.2023]2023MNRAS.526.2977G Graur O.,et al., 2023, @doi [] 10.1093/mnras/stad2960, https://ui.adsabs.harvard.edu/abs/2023MNRAS.526.2977G 526, 2977[Guy, Astier, Nobili, Regnault& PainGuy et al.2005]Guy2005 Guy J.,Astier P.,Nobili S.,Regnault N., Pain R.,2005, @doi [] 10.1051/0004-6361:20053025, https://ui.adsabs.harvard.edu/abs/2005A A...443..781G 443, 781[Guy et al.,Guy et al.2007]Guy2007 Guy J.,et al., 2007, @doi [] 10.1051/0004-6361:20066930, http://adsabs.harvard.edu/abs/2007A[Hicken, Wood-Vasey, Blondin, Challis, Jha, Kelly, Rest& KirshnerHicken et al.2009]2009ApJ...700.1097H Hicken M.,Wood-Vasey W. M.,Blondin S.,Challis P.,Jha S., Kelly P. L.,Rest A., Kirshner R. P.,2009, @doi [] 10.1088/0004-637X/700/2/1097, http://adsabs.harvard.edu/abs/2009ApJ...700.1097H 700, 1097[Hoogendam et al.,Hoogendam et al.2022]2022ApJ...928..103H Hoogendam W. B.,et al., 2022, @doi [] 10.3847/1538-4357/ac54aa, https://ui.adsabs.harvard.edu/abs/2022ApJ...928..103H 928, 103[HowertonHowerton2016]2016TNSTR.157....1H Howerton S.,2016, Transient Name Server Discovery Report, https://ui.adsabs.harvard.edu/abs/2016TNSTR.157....1H 2016-157, 1[Jensen, Tonry, Barris, Thompson, Liu, Rieke, Ajhar& BlakesleeJensen et al.2003]2003ApJ...583..712J Jensen J. B.,Tonry J. L.,Barris B. J.,Thompson R. I.,Liu M. C.,Rieke M. J.,Ajhar E. A., Blakeslee J. P.,2003, @doi [] 10.1086/345430, https://ui.adsabs.harvard.edu/abs/2003ApJ...583..712J 583, 712[Jensen et al.,Jensen et al.2021]2021ApJS..255...21J Jensen J. B.,et al., 2021, @doi [] 10.3847/1538-4365/ac01e7, https://ui.adsabs.harvard.edu/abs/2021ApJS..255...21J 255, 21[Jha et al.,Jha et al.2006a]2006AJ....131..527J Jha S.,et al., 2006a, @doi [] 10.1086/497989, http://adsabs.harvard.edu/abs/2006AJ....131..527J 131, 527[Jha, Branch, Chornock, Foley, Li, Swift, Casebeer& FilippenkoJha et al.2006b]2006AJ....132..189J Jha S.,Branch D.,Chornock R.,Foley R. J.,Li W.,Swift B. J.,Casebeer D., Filippenko A. V.,2006b, @doi [] 10.1086/504599, http://adsabs.harvard.edu/abs/2006AJ....132..189J 132, 189[Jha, Riess& KirshnerJha et al.2007]2007ApJ...659..122J Jha S.,Riess A. G., Kirshner R. P.,2007, @doi [] 10.1086/512054, https://ui.adsabs.harvard.edu/abs/2007ApJ...659..122J 659, 122[Jordán et al.,Jordán et al.2005]2005ApJ...634.1002J Jordán A.,et al., 2005, @doi [] 10.1086/497092, https://ui.adsabs.harvard.edu/abs/2005ApJ...634.1002J 634, 1002[Jordán et al.,Jordán et al.2007]2007ApJS..171..101J Jordán A.,et al., 2007, @doi [] 10.1086/516840, https://ui.adsabs.harvard.edu/abs/2007ApJS..171..101J 171, 101[Kamionkowski & RiessKamionkowski & Riess2023]2023ARNPS..73..153K Kamionkowski M.,Riess A. G.,2023, @doi [Annual Review of Nuclear and Particle Science] 10.1146/annurev-nucl-111422-024107, https://ui.adsabs.harvard.edu/abs/2023ARNPS..73..153K 73, 153[Kasliwal et al.,Kasliwal et al.2008]2008ApJ...683L..29K Kasliwal M. M.,et al., 2008, @doi [] 10.1086/591521, https://ui.adsabs.harvard.edu/abs/2008ApJ...683L..29K 683, L29[Kelly, Hicken, Burke, Mandel& KirshnerKelly et al.2010]Kelly2010 Kelly P. L.,Hicken M.,Burke D. L.,Mandel K. S., Kirshner R. 
P.,2010, @doi [] 10.1088/0004-637X/715/2/743, http://adsabs.harvard.edu/abs/2010ApJ...715..743K 715, 743[Kelsey et al.,Kelsey et al.2021]2021MNRAS.501.4861K Kelsey L.,et al., 2021, @doi [] 10.1093/mnras/staa3924, https://ui.adsabs.harvard.edu/abs/2021MNRAS.501.4861K 501, 4861[Kelsey et al.,Kelsey et al.2023]2023MNRAS.519.3046K Kelsey L.,et al., 2023, @doi [] 10.1093/mnras/stac3711, https://ui.adsabs.harvard.edu/abs/2023MNRAS.519.3046K 519, 3046[Khetan et al.,Khetan et al.2021]2021A A...647A..72K Khetan N.,et al., 2021, @doi [] 10.1051/0004-6361/202039196, https://ui.adsabs.harvard.edu/abs/2021A A...647A..72K 647, A72[Kowalski et al.,Kowalski et al.2008]2008ApJ...686..749K Kowalski M.,et al., 2008, @doi [] 10.1086/589937, https://ui.adsabs.harvard.edu/abs/2008ApJ...686..749K 686, 749[Krisciunas et al.,Krisciunas et al.2001]2001AJ....122.1616K Krisciunas K.,et al., 2001, @doi [] 10.1086/322120, https://ui.adsabs.harvard.edu/abs/2001AJ....122.1616K 122, 1616[Krisciunas et al.,Krisciunas et al.2017]2017AJ....154..211K Krisciunas K.,et al., 2017, @doi [] 10.3847/1538-3881/aa8df0, https://ui.adsabs.harvard.edu/abs/2017AJ....154..211K 154, 211[Leibundgut et al.,Leibundgut et al.1993]1993AJ....105..301L Leibundgut B.,et al., 1993, @doi [] 10.1086/116427, https://ui.adsabs.harvard.edu/abs/1993AJ....105..301L 105, 301[Li, Qiu, Qiao, Zhang, Zhou& HuLi et al.1997]1997IAUC.6661....1L Li W. D.,Qiu Y. L.,Qiao Q. Y.,Zhang Y.,Zhou W., Hu J. Y.,1997, , https://ui.adsabs.harvard.edu/abs/1997IAUC.6661....1L 6661, 1[Macri, Stetson, Bothun, Freedman, Garnavich, Jha, Madore& RichmondMacri et al.2001]2001ApJ...559..243M Macri L. M.,Stetson P. B.,Bothun G. D.,Freedman W. L., Garnavich P. M.,Jha S.,Madore B. F., Richmond M. W.,2001, @doi [] 10.1086/322395, https://ui.adsabs.harvard.edu/abs/2001ApJ...559..243M 559, 243[Masters et al.,Masters et al.2010]2010ApJ...715.1419M Masters K. L.,et al., 2010, @doi [] 10.1088/0004-637X/715/2/1419, https://ui.adsabs.harvard.edu/abs/2010ApJ...715.1419M 715, 1419[Mei et al.,Mei et al.2007]2007ApJ...655..144M Mei S.,et al., 2007, @doi [] 10.1086/509598, https://ui.adsabs.harvard.edu/abs/2007ApJ...655..144M 655, 144[Meldorf et al.,Meldorf et al.2023]2023MNRAS.518.1985M Meldorf C.,et al., 2023, @doi [] 10.1093/mnras/stac3056, https://ui.adsabs.harvard.edu/abs/2023MNRAS.518.1985M 518, 1985[Mieske & HilkerMieske & Hilker2003]2003AA...410..445M Mieske S.,Hilker M.,2003, @doi [] 10.1051/0004-6361:20031296, https://ui.adsabs.harvard.edu/abs/2003A A...410..445M 410, 445[Mieske, Hilker& InfanteMieske et al.2005]2005AA...438..103M Mieske S.,Hilker M., Infante L.,2005, @doi [] 10.1051/0004-6361:20041583, https://ui.adsabs.harvard.edu/abs/2005A A...438..103M 438, 103[Misgeld & HilkerMisgeld & Hilker2011]2011MNRAS.414.3699M Misgeld I.,Hilker M.,2011, @doi [] 10.1111/j.1365-2966.2011.18669.x, https://ui.adsabs.harvard.edu/abs/2011MNRAS.414.3699M 414, 3699[Modjaz, Li, Filippenko, King, Leonard, Matheson, Treffers& RiessModjaz et al.2001]2001PASP..113..308M Modjaz M.,Li W.,Filippenko A. V.,King J. Y.,Leonard D. C., Matheson T.,Treffers R. R., Riess A. G.,2001, @doi [] 10.1086/319338, https://ui.adsabs.harvard.edu/abs/2001PASP..113..308M 113, 308[Neill, Seibert, Tully, Courtois, Sorce, Jarrett, Scowcroft& MasciNeill et al.2014]2014ApJ...792..129N Neill J. D.,Seibert M.,Tully R. B.,Courtois H.,Sorce J. G., Jarrett T. H.,Scowcroft V., Masci F. 
J.,2014, @doi [] 10.1088/0004-637X/792/2/129, https://ui.adsabs.harvard.edu/abs/2014ApJ...792..129N 792, 129[Pan, Reed, Medallon, Foley, Jha, Rest& ScolnicPan et al.2017]2017ATel10437....1P Pan Y. C.,Reed D. K.,Medallon M. S.,Foley R. J.,Jha S. W., Rest A., Scolnic D.,2017, The Astronomer's Telegram, https://ui.adsabs.harvard.edu/abs/2017ATel10437....1P 10437, 1[Perlmutter et al.,Perlmutter et al.1999]1999ApJ...517..565P Perlmutter S.,et al., 1999, @doi [] 10.1086/307221, http://ads.ari.uni-heidelberg.de/abs/1999ApJ...517..565P 517, 565[PhillipsPhillips1993]1993ApJ...413L.105P Phillips M. M.,1993, @doi [] 10.1086/186970, http://adsabs.harvard.edu/abs/1993ApJ...413L.105P 413, L105[Phillips & BurnsPhillips & Burns2017]2017hsn..book.2543P Phillips M. M.,Burns C. R.,2017, in Alsabti A. W.,Murdin P., eds, , Handbook of Supernovae. p. 2543, @doi10.1007/978-3-319-21846-5_100[Phillips et al.,Phillips et al.1987]1987PASP...99..592P Phillips M. M.,et al., 1987, @doi [] 10.1086/132020, https://ui.adsabs.harvard.edu/abs/1987PASP...99..592P 99, 592[PskovskiiPskovskii1977]1977SvA....21..675P Pskovskii I. P.,1977, , https://ui.adsabs.harvard.edu/abs/1977SvA....21..675P 21, 675[Riess et al.,Riess et al.1998]1998AJ....116.1009R Riess A. G.,et al., 1998, @doi [] 10.1086/300499, http://ads.ari.uni-heidelberg.de/abs/1998AJ....116.1009R 116, 1009[Riess et al.,Riess et al.2004]Riess2004 Riess A. G.,et al., 2004, @doi [] 10.1086/383612, http://ads.ari.uni-heidelberg.de/abs/2004ApJ...607..665R 607, 665[Riess et al.,Riess et al.2022]2022ApJ...934L...7R Riess A. G.,et al., 2022, @doi [] 10.3847/2041-8213/ac5c5b, https://ui.adsabs.harvard.edu/abs/2022ApJ...934L...7R 934, L7[RussellRussell2002]2002ApJ...565..681R Russell D. G.,2002, @doi [] 10.1086/337917, https://ui.adsabs.harvard.edu/abs/2002ApJ...565..681R 565, 681[RustRust1974]1974PhDT.........7R Rust B. W.,1974, PhD thesis, Oak Ridge National Laboratory, Tennessee[Saha, Thim, Tammann, Reindl& SandageSaha et al.2006]2006ApJS..165..108S Saha A.,Thim F.,Tammann G. A.,Reindl B., Sandage A.,2006, @doi [] 10.1086/503800, https://ui.adsabs.harvard.edu/abs/2006ApJS..165..108S 165, 108[Sakai et al.,Sakai et al.2000]2000ApJ...529..698S Sakai S.,et al., 2000, @doi [] 10.1086/308305, https://ui.adsabs.harvard.edu/abs/2000ApJ...529..698S 529, 698[Saulder, van Kampen, Chilingarian, Mieske& ZeilingerSaulder et al.2016]2016AA...596A..14S Saulder C.,van Kampen E.,Chilingarian I. V.,Mieske S., Zeilinger W. W.,2016, @doi [] 10.1051/0004-6361/201526711, https://ui.adsabs.harvard.edu/abs/2016A A...596A..14S 596, A14[Schlafly & FinkbeinerSchlafly & Finkbeiner2011]2011ApJ...737..103S Schlafly E. F.,Finkbeiner D. P.,2011, @doi [] 10.1088/0004-637X/737/2/103, http://adsabs.harvard.edu/abs/2011ApJ...737..103S 737, 103[Smith et al.,Smith et al.2019]2019ATel12389....1S Smith K. W.,et al., 2019, The Astronomer's Telegram, https://ui.adsabs.harvard.edu/abs/2019ATel12389....1S 12389, 1[Sorce, Tully& CourtoisSorce et al.2012]2012ApJ...758L..12S Sorce J. G.,Tully R. B., Courtois H. M.,2012, @doi [] 10.1088/2041-8205/758/1/L12, https://ui.adsabs.harvard.edu/abs/2012ApJ...758L..12S 758, L12[Sorce et al.,Sorce et al.2013]2013ApJ...765...94S Sorce J. G.,et al., 2013, @doi [] 10.1088/0004-637X/765/2/94, https://ui.adsabs.harvard.edu/abs/2013ApJ...765...94S 765, 94[Sorce, Tully, Courtois, Jarrett, Neill& ShayaSorce et al.2014]2014MNRAS.444..527S Sorce J. G.,Tully R. B.,Courtois H. M.,Jarrett T. H.,Neill J. D., Shaya E. 
J.,2014, @doi [] 10.1093/mnras/stu1450, https://ui.adsabs.harvard.edu/abs/2014MNRAS.444..527S 444, 527[Springob, Masters, Haynes, Giovanelli & MarinoniSpringob et al.2009]2009ApJS..182..474S Springob C. M.,Masters K. L.,Haynes M. P.,Giovanelli R., Marinoni C.,2009, @doi [] 10.1088/0067-0049/182/1/474, https://ui.adsabs.harvard.edu/abs/2009ApJS..182..474S 182, 474[Springob et al.,Springob et al.2014]2014MNRAS.445.2677S Springob C. M.,et al., 2014, @doi [] 10.1093/mnras/stu1743, https://ui.adsabs.harvard.edu/abs/2014MNRAS.445.2677S 445, 2677[Stritzinger et al.,Stritzinger et al.2011]2011AJ....142..156S Stritzinger M. D.,et al., 2011, @doi [] 10.1088/0004-6256/142/5/156, https://ui.adsabs.harvard.edu/abs/2011AJ....142..156S 142, 156[Tartaglia, Sand, Wyatt, Valenti, Bostroem, Reichart, Haislip& KouprianovTartaglia et al.2017]2017ATel10439....1T Tartaglia L.,Sand D.,Wyatt S.,Valenti S.,Bostroem K. A., Reichart D. E.,Haislip J. B., Kouprianov V.,2017, The Astronomer's Telegram, https://ui.adsabs.harvard.edu/abs/2017ATel10439....1T 10439, 1[Taubenberger et al.,Taubenberger et al.2008]2008MNRAS.385...75T Taubenberger S.,et al., 2008, @doi [] 10.1111/j.1365-2966.2008.12843.x, http://adsabs.harvard.edu/abs/2008MNRAS.385...75T 385, 75[Theureau, Hanski, Coudreau, Hallet& MartinTheureau et al.2007]2007AA...465...71T Theureau G.,Hanski M. O.,Coudreau N.,Hallet N., Martin J. M.,2007, @doi [] 10.1051/0004-6361:20066187, https://ui.adsabs.harvard.edu/abs/2007A A...465...71T 465, 71[Tonry, Dressler, Blakeslee, Ajhar, Fletcher, Luppino, Metzger& MooreTonry et al.2001]2001ApJ...546..681T Tonry J. L.,Dressler A.,Blakeslee J. P.,Ajhar E. A.,Fletcher A. B.,Luppino G. A.,Metzger M. R., Moore C. B.,2001, @doi [] 10.1086/318301, https://ui.adsabs.harvard.edu/abs/2001ApJ...546..681T 546, 681[Tonry et al.,Tonry et al.2018]2018PASP..130f4505T Tonry J. L.,et al., 2018, @doi [] 10.1088/1538-3873/aabadf, https://ui.adsabs.harvard.edu/abs/2018PASP..130f4505T 130, 064505[Tonry et al.,Tonry et al.2021]2021TNSTR2173....1T Tonry J.,et al., 2021, Transient Name Server Discovery Report, https://ui.adsabs.harvard.edu/abs/2021TNSTR2173....1T 2021-2173, 1[Tully & CourtoisTully & Courtois2012]2012ApJ...749...78T Tully R. B.,Courtois H. M.,2012, @doi [] 10.1088/0004-637X/749/1/78, https://ui.adsabs.harvard.edu/abs/2012ApJ...749...78T 749, 78[Tully & PierceTully & Pierce2000]2000ApJ...533..744T Tully R. B.,Pierce M. J.,2000, @doi [] 10.1086/308700, https://ui.adsabs.harvard.edu/abs/2000ApJ...533..744T 533, 744[Tully, Rizzi, Shaya, Courtois, Makarov& JacobsTully et al.2009]2009AJ....138..323T Tully R. B.,Rizzi L.,Shaya E. J.,Courtois H. M.,Makarov D. I., Jacobs B. A.,2009, @doi [] 10.1088/0004-6256/138/2/323, https://ui.adsabs.harvard.edu/abs/2009AJ....138..323T 138, 323[Tully et al.,Tully et al.2013]2013AJ....146...86T Tully R. B.,et al., 2013, @doi [] 10.1088/0004-6256/146/4/86, https://ui.adsabs.harvard.edu/abs/2013AJ....146...86T 146, 86[Tully, Courtois& SorceTully et al.2016]2016AJ....152...50T Tully R. B.,Courtois H. M., Sorce J. G.,2016, @doi [] 10.3847/0004-6256/152/2/50, https://ui.adsabs.harvard.edu/abs/2016AJ....152...50T 152, 50[Turatto, Benetti, Cappellaro, Danziger, Della Valle, Gouiffes, Mazzali& PatatTuratto et al.1996]1996MNRAS.283....1T Turatto M.,Benetti S.,Cappellaro E.,Danziger I. J.,Della Valle M.,Gouiffes C.,Mazzali P. 
A., Patat F.,1996, @doi [] 10.1093/mnras/283.1.1, https://ui.adsabs.harvard.edu/abs/1996MNRAS.283....1T 283, 1[Turatto, Piemonte, Benetti, Cappellaro, Mazzali, Danziger& PatatTuratto et al.1998]1998AJ....116.2431T Turatto M.,Piemonte A.,Benetti S.,Cappellaro E.,Mazzali P. A.,Danziger I. J., Patat F.,1998, @doi [] 10.1086/300622, https://ui.adsabs.harvard.edu/abs/1998AJ....116.2431T 116, 2431[Uddin et al.,Uddin et al.2023]2023arXiv230801875U Uddin S. A.,et al., 2023, @doi [arXiv e-prints] 10.48550/arXiv.2308.01875, https://ui.adsabs.harvard.edu/abs/2023arXiv230801875U p. arXiv:2308.01875[Valenti, Sand, Tartaglia, Hosseinzadeh, Arcavi, Howell& MccullyValenti et al.2017]2017TNSCR.613....1V Valenti S.,Sand D.,Tartaglia L.,Hosseinzadeh G.,Arcavi I., Howell D. A., Mccully C.,2017, Transient Name Server Classification Report, https://ui.adsabs.harvard.edu/abs/2017TNSCR.613....1V 2017-613, 1[Villegas et al.,Villegas et al.2010]2010ApJ...717..603V Villegas D.,et al., 2010, @doi [] 10.1088/0004-637X/717/2/603, https://ui.adsabs.harvard.edu/abs/2010ApJ...717..603V 717, 603[Wang, Wang, Pain, Zhou& LiWang et al.2006]2006ApJ...645..488W Wang X.,Wang L.,Pain R.,Zhou X., Li Z.,2006, @doi [] 10.1086/504312, https://ui.adsabs.harvard.edu/abs/2006ApJ...645..488W 645, 488[Wood et al.,Wood et al.2023]2023AAS...24142410W Wood C.,et al., 2023, in American Astronomical Society Meeting Abstracts. p. 424.10[de Vaucouleurs, de Vaucouleurs, Corwin, Buta, Paturel& Fouquede Vaucouleurs et al.1991]1991rc3..book.....D de Vaucouleurs G.,de Vaucouleurs A.,Corwin Herold G. J.,Buta R. J.,Paturel G., Fouque P.,1991, Third Reference Catalogue of Bright Galaxies§ DISTANCE MODULUS REFERENCESIn this section, I summarize the sources of the various distance moduli measurements used to calculate the weighted means shown in Table <ref>. The acronyms used in Table <ref>, below, are: Cepheids (Cep), cosmic microwave background (CMB), dwarf galaxy diameter (DGD), fundamental plane (FP), globular cluster luminosity function (GCLF), globular cluster radius (GCR), planetary nebula luminosity function (PNLF), surface brightness fluctuations (SBF), and the Tully-Fisher relation (TF). | http://arxiv.org/abs/2311.16245v1 | {
"authors": [
"Or Graur"
],
"categories": [
"astro-ph.HE",
"astro-ph.CO"
],
"primary_category": "astro-ph.HE",
"published": "20231127190026",
"title": "Underluminous Type Ia supernovae are standardizable candles"
} |
http://arxiv.org/abs/2311.15796v1 | {
"authors": [
"Marios Costa",
"Demetrianos Gavriel",
"Haralambos Panagopoulos",
"Gregoris Spanoudes"
],
"categories": [
"hep-lat"
],
"primary_category": "hep-lat",
"published": "20231127131934",
"title": "Mass effects on the QCD $β$-function"
} |
1Department of Physics, University of Trento, Via Sommarive 14, 38123, Trento, Italy
2Centre for Sensors and Devices, Fondazione Bruno Kessler, via Sommarive 18, 38123, Trento, Italy
*[email protected]

Integrated photonics has emerged as one of the most promising platforms for quantum applications. The performance of quantum photonic integrated circuits (QPICs) requires demanding optimization to achieve enhanced properties and tailored characteristics, with more stringent requirements than for their classical counterparts. In this study, we report on the simulation, fabrication, and characterization of a series of fundamental components for photon manipulation in QPICs based on silicon nitride. These include crossing waveguides, multimode-interferometer-based integrated beam splitters (MMIs), asymmetric integrated Mach-Zehnder interferometers (MZIs) based on MMIs, and micro-ring resonators. Our investigation revolves primarily around the visible to near-infrared spectral region, as these devices are meticulously designed and tailored for optimal operation within this wavelength range. By advancing the development of these elementary building blocks, we aim to pave the way for significant improvements in QPICs in a spectral region little explored so far.

§ INTRODUCTION

Advancements in quantum technologies over the past decade have the potential to revolutionize sensing <cit.>, communication <cit.>, and computation <cit.>. Quantum applications do not have an a priori preferred material platform, since each platform has its own strengths and weaknesses. This is especially true for integrated quantum photonics, whose architecture is based on the manipulation of photon states through integrated optical circuits. For instance, measurement-based quantum computing <cit.> implemented on silicon photonics <cit.> has gained much attention because of the impressive advancements in the integration of the elementary building blocks in a material platform which is compatible with electronic circuits. Such an interest is generated by the idea of a fault-tolerant quantum computer based on linear optical elements <cit.>. Another example is that of Boson Samplers integrated in the direct-laser-written glass platform, which allows demonstrating the ability of photonic structures to perform computational tasks beyond classical capabilities <cit.>.

In the context of QPICs, silicon nitride (SiN) is gaining much traction due to its exceptional optical properties. This material leverages CMOS technology for its fabrication, enabling the production of low-propagation-loss QPICs with inherently compact dimensions. Notably, due to its large optical bandgap of ∼5 eV, SiN does not suffer from two-photon absorption (TPA) in the visible to near-infrared spectral region, a phenomenon that constrains the efficiency of nonlinear applications in silicon waveguides. SiN possesses a nonlinear coefficient n_2 of 2.4 × 10^-19 m^2/W <cit.>, a value one order of magnitude lower than that of silicon, which nevertheless positions SiN as a formidable contender for quantum-oriented applications <cit.>. Another advantage of SiN is its spectral coverage, spanning from 0.35 μm to 7 μm with negligible material absorption (or up to 3.0 μm when embedded in silica) <cit.>. This distinctive feature positions SiN as an optimal choice for QPICs within the visible spectral range, where silicon is not transparent <cit.>, enabling the use of silicon for the direct integration of visible single-photon detectors <cit.>.
Moreover, this broad operation window makes SiN also compatible with single-photon sources based on a wide variety of quantum dot semiconductor compounds <cit.>, for example around 800-815 nm <cit.> or around 735-790 nm <cit.>.

This paper presents the performance of state-of-the-art SiN-based photonic integrated circuits (PICs) and discusses their potential use in both classical and quantum applications. Specifically, it discusses basic photonic components for manipulating classical and quantum states of light, such as crossing waveguides, beam splitters based on multi-mode interferometers (MMIs), photon filters and routers via asymmetric Mach-Zehnder interferometers (aMZIs), and micro-ring resonators, in a spectral region between 650 and 850 nm. To the best of our knowledge, all these components together have not been investigated in this spectral region so far.

The paper is organized as follows. Section 2 provides a comprehensive account of the design as well as the fabrication methods employed for the realization of SiN-based components. In Section 3, both theoretical and experimental aspects of waveguide crossings, MMIs, aMZIs, and micro-ring resonators are presented.

§ WAVEGUIDES

PICs were realized in stoichiometric silicon nitride (SiN) deposited via low-pressure chemical vapor deposition (LPCVD) on a 150 mm diameter silicon wafer in FBK's cleanroom facilities. First, a bottom SiO_2 cladding of 1.7 μm was grown using a tetraethoxysilane (TEOS) gas precursor at a chamber temperature of 710 °C, directly followed by the 140 nm-thick SiN film deposition at 770 °C. Next, the photonic circuits were patterned on standard photoresist using i-line stepper lithography and directly defined in the SiN using inductively coupled plasma reactive-ion etching. Finally, the realized SiN circuits were covered with a top, 1.6 μm thick, borophosphosilicate glass (BPSG) cladding deposited at 640 °C, and the wafer was diced into single chips. The SiN channel waveguides have refractive index n_SiN = 1.991 at 750 nm and a 140 nm × 650 nm cross-section (Fig. <ref>(a)). The dimensions of the SiN waveguides were chosen to achieve single-mode conditions at 750 nm for both the transverse electric (TE) and transverse magnetic (TM) polarizations <cit.>, with a mode field distribution as shown in Fig. <ref>(b).

Linear characterization measurements were conducted using the experimental setup sketched in Fig. <ref>. The setup incorporated a supercontinuum laser (FYLA SCT500) as the light source, which enabled the measurement of a wide-ranging spectral response covering wavelengths from 650 to 850 nm. The input polarization is set by employing achromatic half-wave and quarter-wave plates. The output spectra were acquired using an Optical Spectrum Analyzer (OSA, Yokogawa-AQ6373B), allowing for high-resolution (100 pm) characterization of the optical signals emitted by the supercontinuum laser.

The first relevant figure of merit in the characterization of the fabricated waveguides is given by the insertion losses (ILs), given by the sum of the propagation losses (PLs) and the coupling losses (CLs). Performing the cut-back method <cit.>, it becomes possible to accurately determine the PLs and CLs. Fig. <ref>(a) illustrates the PLs across a range of wavelengths from 650 to 850 nm. PLs were evaluated in TE (blue line) and TM (orange line) polarizations. The highlighted band represents the uncertainty in the measurement, which was calculated by considering multiple measurements performed on three nominally identical PICs. This approach provides a range of values that accounts for potential variations among the PICs fabricated for this study.
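In practice, the cut-back post-processing reduces to a linear regression of the measured insertion loss against the waveguide length: the slope gives the PLs and the intercept collects the CLs of the two facets. The following minimal sketch illustrates this; the sample data are illustrative values constructed to be consistent with the losses reported below, not the actual measurements.

```python
import numpy as np

# Illustrative cut-back data at 750 nm (TE): total insertion loss (dB)
# versus waveguide length (cm); these are NOT the measured values.
lengths_cm = np.array([0.5, 1.0, 2.0, 3.0])
insertion_loss_dB = np.array([18.0, 19.2, 21.6, 24.0])

# IL(L) = PLs * L + 2 * CLs: the slope is the propagation loss and the
# intercept collects the coupling loss of both facets.
slope, intercept = np.polyfit(lengths_cm, insertion_loss_dB, 1)
print(f"PLs = {slope:.2f} dB/cm")              # -> 2.40 dB/cm
print(f"CLs = {intercept / 2:.2f} dB/facet")   # -> 8.40 dB per facet
```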
Within the operational range around 750 nm, we observed PLs of (2.4 ± 0.2) dB/cm in TE polarization and (1.6 ± 0.2) dB/cm in TM polarization. Such an asymmetry in the PLs is caused by the higher scattering losses for TE polarization due to a more significant mode overlap with the waveguide sidewalls, whose roughness is responsible for the loss <cit.>. We note that these losses are lower than the current state of the art <cit.>.

Butt coupling has been used for light injection and collection. The access waveguides of the PIC consist of an adiabatic 150 μm-long tapered section. The waveguide width is reduced from 3.25 μm at the facets to 0.65 μm. These optimum values are the result of an optimization done via 3D FDTD simulations. After the adiabatic tapering, three Euler S-bends with an effective radius of 40 μm are also inserted to radiate spurious light associated with higher-order modes. This ensures that only the fundamental mode of the two polarizations is guided in the PIC, and offsets the input and output ports of the PIC, minimizing the collected stray light. CLs are shown in Fig. <ref>(b): they exhibit a value of (8.4 ± 0.8) dB per facet in TE polarization and (7.2 ± 0.8) dB per facet in TM polarization at 750 nm, values comparable with the existing state of the art <cit.>.

§ INTEGRATED COMPONENTS

This section focuses on the analysis of basic optical components. Achieving optimal performance of these elements is of paramount importance, as it directly impacts the overall functionality and efficiency of PICs and QPICs.

§.§ Waveguide crossings

Waveguide crossings were designed with a multimode interference-based crossing <cit.> and optimized for negligible insertion loss and crosstalk. This design consists of a four-port symmetric MMI with a cross geometry (Fig. <ref>(a)), where the geometrical dimensions are optimized to focus the field profile in the center of the crossing. In this way, the crosstalk between adjacent ports is minimized while the propagation to opposite ports is maximized using the self-imaging mechanism. Fig. <ref>(a) shows the design of the crossing, whose parameters have been optimized by means of 3D FDTD simulations in the 650-850 nm region. Fig. <ref>(b) reports the simulated electric field profile at 730 nm for the optimized crossing, where the field is observed to be focused at the center of the structure. Fig. <ref>(c) shows a scanning electron microscope (SEM) image of a single crossing.

Fig. <ref>(d) shows the insertion losses for TE (blue line) and TM polarizations (orange line) of a single waveguide crossing. To assess the insertion losses accurately, the transmission of a total number of 40 cascaded crossings was measured. The uncertainty in the measurements, represented by the highlighted band, arises from multiple measurements of nominally identical crossings. In the 650-850 nm spectral range, our crossings exhibit ILs < 0.13 dB in TE polarization and ILs < 0.25 dB in TM polarization. The crosstalk could not be measured, as its value lies below the sensitivity of the setup. Remarkably, the component shows consistent and nearly equal losses across the whole analyzed spectral region.
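The per-crossing insertion loss quoted above follows from comparing the path with the cascaded crossings against a crossing-free reference path. A hedged sketch of this post-processing (with made-up transmission values, not the measured data) reads:

```python
# Illustrative extraction of the loss per crossing from N cascaded devices;
# the transmission values below are placeholders, not measured data.
il_reference_dB = -18.6   # reference path without crossings
il_cascade_dB = -23.0     # identical path containing the cascaded crossings
n_crossings = 40

il_per_crossing_dB = (il_reference_dB - il_cascade_dB) / n_crossings
print(f"IL per crossing: {il_per_crossing_dB:.3f} dB")  # ~0.11 dB < 0.13 dB
```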
§.§ MMI Beam Splitters

Beam splitters or beam couplers can be realized by MMI devices. These consist of a certain number of input and output single-mode waveguides connected by a wide multimode waveguide. The working principle is based on the self-imaging phenomenon <cit.>. If the interference mechanism involves all the modes, the structure is called a general interference MMI (G-MMI). On the other hand, if the input waveguides are positioned so as to couple to only a few modes of the multimode waveguide, a different interference pattern is obtained. In particular, here we design the MMI to achieve paired-mode interference, and we name it P-MMI. In this design the second-order, fifth-order, eighth-order, etc. modes are not excited: for every pair of excited modes, there is one which is not excited <cit.>. Generally, G-MMIs are longer than P-MMIs and are more robust with respect to fabrication errors due to their tolerance with respect to the input waveguide position. They have superior performance, both in terms of insertion losses and bandwidth of operation. On the other hand, MMIs with restricted interference mechanisms, like P-MMIs, have a smaller footprint <cit.>.

Figs. <ref>(a1-b1) show the designs of 2x2 beam splitters based on a G-MMI or a P-MMI: the values of the parameters have been obtained from 3D FDTD simulations at 730 nm for a TE0 input. Note that the P-MMI is almost three times shorter, and that the input and output ports of the G-MMI are set on the edge of the multimode waveguide. Figs. <ref>(a2-b2) report the simulated electric field profile at 730 nm in the optimized G-MMI and P-MMI used as a 1x2 beam splitter. A different interference pattern can be observed, which arises from the different number of modes excited in the multimode waveguide. Figs. <ref>(a3-b3) show SEM images of a fabricated G-MMI and P-MMI.

Fig. <ref> shows the insertion losses (red lines) and output unbalance (blue lines) of 2x2 beam splitters based on G-MMI or P-MMI within a wavelength range of 650-850 nm and for different polarizations. The uncertainties in the measurements, represented by the highlighted bands, come from multiple measurements of nominally identical MMIs. The dashed curves are the simulated spectra from 3D FDTD calculations. The insertion losses have a minimum at the design wavelength of 730 nm, with ILs < 0.5 dB for the G-MMI, while the minimum lies at 720 nm for P-MMIs. For the G-MMI, the output unbalance (defined as the ratio between the transmission intensities at the two outputs) is flat over a 650-800 nm wide bandwidth with a maximum value of (0.2 ± 0.2) dB for TE and (0.3 ± 0.2) dB for TM polarization. For P-MMIs, an unbalance of approximately (0.7 ± 0.6) dB is observed at the design wavelength for TE polarization. However, the behavior is suboptimal for TM polarization, displaying unbalance values exceeding (1.2 ± 0.2) dB. Thus, G-MMIs exhibit polarization insensitivity and demonstrate excellent output balance. In contrast, P-MMIs are adversely affected by polarization, leading to noticeable performance degradation in TM with respect to TE, because this component was optimized for a TE0 input mode and its selective modal-excitation requirement results in a wavelength- and polarization-sensitive device. Despite being less robust in terms of unbalance, P-MMIs exhibit a wider bandwidth of minimal insertion losses compared to G-MMIs. Table <ref> provides a comparison of the 2x2 G-MMI beam splitter to the state of the art for SiN in the VIS-NIR region. Note that the different components refer to different spectral regions.
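For reference, the two figures of merit used throughout this section can be computed from the measured port powers as follows; the powers in this example are hypothetical values chosen to reproduce the reported G-MMI numbers.

```python
import numpy as np

def mmi_figures_of_merit(p_in_mW, p_out1_mW, p_out2_mW):
    """Insertion loss (total transmitted power) and output unbalance of a
    2x2 MMI beam splitter, both in dB."""
    il_dB = -10 * np.log10((p_out1_mW + p_out2_mW) / p_in_mW)
    unbalance_dB = abs(10 * np.log10(p_out1_mW / p_out2_mW))
    return il_dB, unbalance_dB

il, ub = mmi_figures_of_merit(p_in_mW=1.000, p_out1_mW=0.456, p_out2_mW=0.436)
print(f"IL = {il:.2f} dB, unbalance = {ub:.2f} dB")  # ~0.5 dB and ~0.2 dB
```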
§.§ Mach-Zehnder Interferometers

From 2x2 MMIs, an integrated MZI can be realized (inset of Fig. <ref>(a)). Tuning of the MZI response is usually achieved by inserting phase shifters on its arms. Therefore, an integrated MZI implements a beam splitter with controllable reflectivity that routes the optical signal in PICs or realizes any single-qubit operation in QPICs. In fact, an m×m MZI network in either the Reck <cit.> or Clements <cit.> scheme can implement any discrete unitary operator on m optical modes. Here, we consider asymmetric Mach-Zehnder interferometers (aMZIs), where the input/output 2x2 MMIs are connected by two arms of significantly different length. Such an aMZI produces an output transmission pattern characterized by a free spectral range FSR = λ^2/(n_g · δL) and resonance wavelengths λ_res = n_eff · δL/q, where n_eff is the effective index and n_g is the group index of the propagating mode in the waveguide, q is an integer number, and δL is the difference between the lengths of the two aMZI arms <cit.>. We choose δL = 30.6 μm in order to have an FSR = 8.9 nm, given n_g = 1.961 at λ = 730 nm. The aMZI's arms are based on four identical Euler 45° bends with effective radius 80 μm connected by straight 1 μm wide waveguides with a total length difference equal to δL: these parameters were chosen to eliminate possible loss channels. The insertion losses of the aMZIs are mainly given by the insertion losses of the MMIs, while the visibility of the interference pattern depends on the unbalance of the MMIs. This implies that the spectral bandwidth of the MZIs is intrinsically linked to the operational spectral bandwidth of the MMIs.

Fig. <ref> presents the spectral responses of aMZIs based on G-MMIs (G-aMZI, panels a-b) or P-MMIs (P-aMZI, panels c-d) for TE and TM polarizations, respectively. The optical signal is input in port 1 (in1) and the transmission out of output ports 1 (out1, blue lines) and 2 (out2, red lines) is measured. For the G-aMZI, insertion losses amount to (-0.8 ± 0.6) dB at 730 nm with a maximum rejection ratio of (-21.1 ± 0.4) dB for TE polarization (Fig. <ref>(a)). The measured FSR is (9.00 ± 0.12) nm, which is consistent with the design value. For TM polarization (Fig. <ref>(b)), the insertion losses in the band centered at 730 nm are (-0.7 ± 0.8) dB, with a rejection ratio of (-20.6 ± 0.6) dB. In contrast, for the P-aMZI, Fig. <ref>(c) shows an insertion-loss band centered at 720 nm with a minimum ILs = (-0.8 ± 0.8) dB and a rejection ratio of (-20.9 ± 0.4) dB for the chosen TE polarization. The measured FSR is (9.0 ± 0.3) nm. Fig. <ref>(d) shows that for TM polarization the insertion losses are (-0.8 ± 0.8) dB in a band consistently centered at 720 nm, with a rejection ratio of (-18.7 ± 1.2) dB. This lower performance is due to the worse behavior of P-MMIs in TM polarization, which are affected by larger insertion losses and whose unbalance impacts the rejection ratio. It is noteworthy that outside the design wavelength region the visibility decreases, resulting in increased insertion losses and reduced rejection. We attribute this degradation of the performance to the MMI bandwidth. These data show that G-aMZIs are preferable for both signal polarizations, while P-aMZIs can be satisfactorily used for TE polarization only.
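The design values quoted above can be cross-checked directly from the FSR relation; a one-line sanity check:

```python
# Sanity check of the aMZI design: FSR = lambda^2 / (n_g * delta_L).
lam, n_g, delta_L = 730e-9, 1.961, 30.6e-6   # values quoted in the text
fsr_nm = lam**2 / (n_g * delta_L) * 1e9
print(f"FSR = {fsr_nm:.2f} nm")  # ~8.88 nm, cf. the measured (9.00 +- 0.12) nm
```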
§.§ High rejection filters

One of the most critical components in QPICs is an integrated wavelength-selective optical interference notch filter with high rejection, which can be used, e.g., to remove high-power pump photons from low-power idler and signal photons generated in a spontaneous four-wave mixing process <cit.>. This can be realized by using a sequence of aMZIs, so that the rejections of the individual elements add up <cit.>. The filter can be designed to operate at a specific wavelength by fixing the FSR and λ_res of its elements.

In Fig. <ref>, we present the results of a sequence of 4 G-aMZIs or P-aMZIs in TE polarization. The G-aMZI and P-aMZI sequences were measured without any trimming. Insertion losses of (-2.7 ± 0.3) dB for the G-aMZIs and (-2.9 ± 0.4) dB for the P-aMZIs are observed. The rejection ratio was measured using a wavelength-tunable titanium-sapphire (Ti:Sa) laser and an optical power meter (Thorlabs-PM100USB), yielding values of (64 ± 2) dB and (61 ± 2) dB for G-aMZIs and P-aMZIs, respectively (insets to Fig. <ref>(a-b)). Fig. <ref>(c) shows the insertion losses at about 735 nm for a single aMZI and for the sequence of 4 aMZIs (points). Simulations are also shown as a continuous line and match the experimental results. However, when the rejection ratio is considered (Fig. <ref>(d)), a difference is observed between the simulated and measured values. This can be attributed to the lack of active tuning of the four cascaded aMZIs, which would facilitate phase synchronization, potentially enhancing the rejection. Despite this, the observed rejection ratio is high, which can be attributed to the robustness of the manufacturing process of the MMIs, rendering them well suited for this specific application. Our results cannot be compared with literature data since there are no similar published studies in this spectral region. Nevertheless, other filter structures have been studied <cit.>, such as cascaded grating-assisted contra-directional couplers (GACDC) <cit.> or directional couplers (DC) interwoven with Bragg gratings (BG) <cit.>. In the first instance, a sequence of 16 GACDC structures yielded a rejection of 68.5 dB and ILs of -5.6 dB. Our aMZI-based structures, while achieving an equivalent rejection level, exhibit significantly reduced ILs. In the second alternative filter structure, simulations show a rejection of 60 dB. Table <ref> summarizes this comparison.

§.§ Micro-ring resonators

Micro-ring resonators (MRs) are important components in PICs given their resonant characteristics and small footprint <cit.>. In QPICs, the narrow bandwidth of high-Q-factor MRs can be employed to implement selective optical switches and filters, as well as for single-photon processing <cit.>. In addition, their resonant field enhancement can be used to enhance nonlinear optical processes in view of the generation of photon pairs through, e.g., the four-wave mixing process <cit.>.

Fig. <ref>(a) shows the design of a racetrack MR in the all-pass configuration, composed of two semicircles of radius R = 30 μm connected by two straight waveguides of length L_C, resulting in a total cavity length of 𝒫 = 2π R + 2 L_C. The bus waveguide runs parallel to one of the two straight arms at a distance of 600 nm, determining the evanescent coupling strength. By using Lumerical's 2.5D FDTD Propagation Method, we simulate this design with different coupling lengths L_C in the spectral range 650-850 nm. Fig. <ref>(b) reports the SEM image of one of the fabricated structures. To find the critical-coupling working point of our MRs, a series of racetrack MRs with different L_C, from 5 μm to 25 μm, was fabricated. Notably, the variation of L_C changes the expected free spectral range of the MR, FSR(λ) = λ^2/(n_g · 𝒫), but not its intrinsic quality factor, which depends only on the propagation loss, Q_i = π n_g/(λ · PLs) <cit.>.
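A quick consistency check of the relation Q_i = π n_g/(λ · PLs): converting the propagation loss from dB/cm to an attenuation coefficient in m^-1 reproduces the intrinsic quality factor reported below. The group index at 810 nm is assumed to stay close to its quoted 730 nm value.

```python
import numpy as np

lam = 810e-9            # probe wavelength (m)
n_g = 1.961             # group index (assumption: ~730 nm value reused)
pls_dB_per_cm = 2.6     # propagation loss inside the MR

alpha = pls_dB_per_cm * 100 * np.log(10) / 10        # dB/cm -> 1/m
q_i = np.pi * n_g / (lam * alpha)
print(f"Q_i ~ {q_i:.2e}")                            # ~1.3e5, cf. (1.20 +- 0.05)e5
print(f"critical-coupling Q_tot ~ {q_i / 2:.2e}")    # ~6e4
```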
Then, by analyzing MRs with differently tailored L_C, it is possible to determine the critical-coupling regime as the point at which the extinction ratio ER shows a minimum and the total quality factor, given by Q_tot^-1 = Q_i^-1 + Q_L_C^-1, becomes half the value of the intrinsic one <cit.>. Since the spectral resolution of the setup presented in Fig. <ref>(a) is not suited to measure high-Q-factor MRs, a different measurement setup was implemented using a tunable ECDL source (Sacher Lion) and an optical wavelength meter (HighFinesse WS6) with 1 pm accuracy. A series of MRs was probed in TE polarization in the 805-815 nm range, obtaining normalized transmission spectra such as the one shown in Fig. <ref>(a). Fig. <ref>(b) details a single resonance and its fit with a single Lorentzian. Fig. <ref>(c) shows the coupling-length dependence of the Q-factor (light blue points) and of the ER (red points) compared with the simulations (dashed lines). A nearly critical-coupling regime is observed for L_C^crit = 15.0 μm with a measured Q_tot^crit = (4.5 ± 0.2) × 10^4 and ER = (-11 ± 3) dB. These values match the simulated intrinsic Q_i = (1.20 ± 0.05) × 10^5, which corresponds to a propagation loss inside the MR of PLs_ring = 2.6 ± 0.1 dB/cm, in good agreement with the PLs measured directly at 810 nm in straight waveguides.

§ CONCLUSION

This work demonstrates that SiN is a suitable material to develop QPICs in the visible-to-near-infrared region. We have designed, fabricated, and tested different basic building blocks integrated in SiN PICs. All the devices we have described exhibit state-of-the-art performance, characterized by low insertion losses and high efficiency. The multimode interferometers (MMIs), exhibiting a spectral unbalance below 0.2 dB across a bandwidth exceeding 100 nm, have demonstrated remarkable versatility. These devices are well suited for building Mach-Zehnder interferometers (MZIs), which have demonstrated their efficacy in the implementation of specific wavelength selectors and integrated filters, enabling rejection levels of approximately 21 dB with a single device and over 60 dB in a cascade of four fully passive devices. Our micro-ring resonators have achieved a high quality factor Q_tot^crit of 4.5 × 10^4 while maintaining compact dimensions. This combination of a high Q factor and a significant nonlinear coefficient n_2 further stresses the potential of SiN for integrated quantum applications. We are currently working to develop quantum photon sources and single-photon detectors on this platform to close the set of building blocks needed to realize a complete QPIC in SiN around 750 nm.

§ ACKNOWLEDGEMENTS

This work was supported by the EC within the Horizon 2020 project EPIQUS (grant 899368) and by Q@TN, the joint lab between the University of Trento, FBK-Fondazione Bruno Kessler, INFN-National Institute for Nuclear Physics, and CNR-National Research Council, financed by PAT. | http://arxiv.org/abs/2311.16016v1 | {
"authors": [
"Matteo Sanna",
"Alessio Baldazzi",
"Gioele Piccoli",
"Stefano Azzini",
"Mher Ghulinyan",
"Lorenzo Pavesi"
],
"categories": [
"physics.optics",
"physics.app-ph"
],
"primary_category": "physics.optics",
"published": "20231127172135",
"title": "SiN integrated photonic components in the Visible to Near-Infrared spectral region"
} |
On the Kepler problem on the Heisenberg group
Sergey Basalaev, Sergei Agapov
The work of the second author was performed according to the Government research assignment for IM SB RAS, project FWNF-2022-0004.

The ultimate goal of any numerical scheme for partial differential equations (PDEs) is to compute an approximation of user-prescribed accuracy at quasi-minimal computational time. To this end, algorithmically, the standard adaptive finite element method (AFEM) integrates an inexact solver and nested iterations with discerning stopping criteria balancing the different error components. The analysis ensuring optimal convergence order of AFEM with respect to the overall computational cost critically hinges on the concept of R-linear convergence of a suitable quasi-error quantity. This work tackles several shortcomings of previous approaches by introducing a new proof strategy. First, the algorithm requires several fine-tuned parameters in order to make the underlying analysis work. A redesign of the standard line of reasoning and the introduction of a summability criterion for R-linear convergence allow us to remove restrictions on those parameters. Second, the usual assumption of a (quasi-)Pythagorean identity is replaced by the generalized notion of quasi-orthogonality from [Feischl, Math. Comp., 91 (2022)]. Importantly, this paves the way towards extending the analysis to general inf-sup stable problems beyond the energy minimization setting. Numerical experiments investigate the choice of the adaptivity parameters.

§ INTRODUCTION

Over the past three decades, the mathematical understanding of adaptive finite element methods (AFEMs) has matured; see, e.g., <cit.> for linear elliptic PDEs, <cit.> for certain nonlinear PDEs, and <cit.> for an axiomatic framework summarizing the earlier references. In most of the cited works, the focus is on (plain) convergence <cit.> and optimal convergence rates with respect to the number of degrees of freedom, i.e., optimal rates <cit.>. The adaptive feedback loop strives to approximate the unknown and possibly singular exact PDE solution u^⋆ on the basis of a posteriori error estimators and adaptive mesh refinement strategies.
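Before turning to the precise algorithms, a minimal code sketch of this feedback loop may help fix ideas. The routines solve, estimate, and refine are hypothetical placeholders for a finite element backend; only the Dörfler marking step with parameter 0 < θ ≤ 1 is spelled out explicitly.

```python
# Minimal sketch of the adaptive loop (solve -> estimate -> mark -> refine);
# solve(), estimate(), refine() are hypothetical backend routines.

def doerfler_mark(eta2, theta):
    """Greedy Doerfler marking: a (quasi-)minimal set M of elements such
    that theta * sum(eta2) <= sum(eta2[T] for T in M)."""
    order = sorted(range(len(eta2)), key=lambda t: -eta2[t])  # O(N log N)
    goal, acc, marked = theta * sum(eta2), 0.0, []
    for t in order:
        marked.append(t)
        acc += eta2[t]
        if acc >= goal:
            break
    return marked

def afem_exact(mesh, theta, tol, solve, estimate, refine):
    while True:
        u = solve(mesh)              # exact Galerkin solve on the current mesh
        eta2 = estimate(mesh, u)     # squared elementwise error indicators
        if sum(eta2) ** 0.5 <= tol:
            return mesh, u
        mesh = refine(mesh, doerfler_mark(eta2, theta))
```

The sorting-based marking shown here costs O(N log N); linear-cost realizations exist and are recalled in the cost discussion below.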
Employing AFEM with exact solver, detailed in Algorithm <ref> below, generates a sequence (𝒯_ℓ)_ℓ∈ℕ_0 of successively refined meshes together with the corresponding finite element solutions u_ℓ^⋆ ≈ u^⋆ and error estimators η_ℓ(u_ℓ^⋆) by iterating

solve ⟶ estimate ⟶ mark ⟶ refine.

A key argument in the analysis of (<ref>) in <cit.> and succeeding works for symmetric PDEs consists in showing linear convergence of the quasi-error

Δ_ℓ^⋆ ≤ q_lin Δ_ℓ-1^⋆ with Δ_ℓ^⋆ := [ ‖u^⋆ - u_ℓ^⋆‖^2 + γ η_ℓ(u_ℓ^⋆)^2 ]^1/2 for all ℓ ∈ ℕ,

where 0 < q_lin, γ < 1 depend only on the problem setting and the marking parameter. Here, ‖·‖ is the PDE-induced energy norm providing a Pythagorean identity of the form

‖u^⋆ - u_ℓ+1^⋆‖^2 + ‖u_ℓ+1^⋆ - u_ℓ^⋆‖^2 = ‖u^⋆ - u_ℓ^⋆‖^2 for all ℓ ∈ ℕ_0.

Extension of the analysis to nonsymmetric linear PDEs can be done by relaxing the Pythagorean identity to a quasi-Pythagorean estimate in <cit.>. The later work <cit.> showed that tail summability of the estimator sequence

∑_ℓ' = ℓ+1^∞ η_ℓ'(u_ℓ'^⋆) ≤ C_sum η_ℓ(u_ℓ^⋆) for all ℓ ∈ ℕ_0,

or, equivalently, R-linear convergence

η_ℓ(u_ℓ^⋆) ≤ C_lin q_lin^ℓ-ℓ' η_ℓ'(u_ℓ'^⋆) for all ℓ ≥ ℓ' ≥ 0,

with 0 < q_lin < 1 and C_lin, C_sum > 0, suffices to prove convergence. Additionally, a sufficiently small marking parameter θ leads to optimal rates in the sense of <cit.>. This can be stated in terms of approximation classes <cit.> by mathematically guaranteeing the largest possible convergence rate s > 0 with

sup_ℓ∈ℕ (#𝒯_ℓ)^s η_ℓ(u_ℓ^⋆) < ∞.

However, due to the incremental nature of adaptivity, the mathematical question on optimal convergence rates should rather refer to the overall computational cost (resp. the cumulative computational time). This, coined as optimal complexity in the context of adaptive wavelet methods <cit.>, was later adopted for AFEM in <cit.>. Therein, optimal complexity is guaranteed for AFEM with inexact solver, provided that the computed final iterates u_ℓ^k̲ are sufficiently close to the (unavailable) exact discrete solutions u_ℓ^⋆. However, this comes at the cost of a small solver-stopping parameter λ in the stopping criterion for the algebraic solver within the modified adaptive algorithm

solve & estimate ⟶ mark ⟶ refine.

Driven by the interest in AFEMs for nonlinear problems <cit.>, recent papers <cit.> aimed to combine linearization and algebraic iterates into a nested adaptive algorithm. Following the latter, the algorithmic decision for either mesh refinement or linearization or algebraic solver step is steered by a-posteriori-based stopping criteria with suitable stopping parameters. This allows one to balance the error components and compute the inexact approximations u_ℓ^k ≈ u_ℓ^⋆ given by a contractive solver with iteration counter k = 1, …, k̲[ℓ] on the mesh 𝒯_ℓ, where |ℓ,k| ∈ ℕ_0 denotes the lexicographic order of the sequential loop (<ref>); see Algorithm <ref> below. Due to an energy identity (coinciding with (<ref>) for symmetric linear PDEs), the works <cit.> prove full R-linear convergence for the quasi-error

Δ_ℓ^k := [ ‖u^⋆ - u_ℓ^k‖^2 + γ η_ℓ(u_ℓ^k)^2 ]^1/2

with respect to the lexicographic ordering |·,·|, i.e.,

Δ_ℓ^k ≤ C_lin q_lin^|ℓ,k| - |ℓ',k'| Δ_ℓ'^k' for all (ℓ',k'), (ℓ,k) ∈ 𝒬 with |ℓ',k'| ≤ |ℓ,k|,

which is guaranteed for arbitrary marking parameter θ and stopping parameter λ (with constants C_lin > 0 and 0 < q_lin < 1 depending on θ and λ). Moreover, <cit.> proves that full R-linear convergence is also the key argument for optimal complexity in the sense that it ensures, for all s > 0,

M(s) := sup_(ℓ,k) ∈ 𝒬 (#𝒯_ℓ)^s Δ_ℓ^k ≤ sup_(ℓ,k) ∈ 𝒬 ( ∑_(ℓ',k') ∈ 𝒬, |ℓ',k'| ≤ |ℓ,k| #𝒯_ℓ' )^s Δ_ℓ^k ≤ C_cost(s) M(s),

where C_cost(s) > 1 depends only on s, C_lin, and q_lin.
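The equivalence just stated is readily observed numerically: for a synthetic quasi-error decaying R-linearly on meshes growing geometrically, the fitted convergence rates over #𝒯_ℓ and over the cumulative cost coincide. The toy computation below only illustrates the geometric-series mechanism behind the estimate; the parameters are arbitrary.

```python
import numpy as np

q, growth, steps = 0.7, 1.6, 40
error = q ** np.arange(steps)          # R-linearly decaying quasi-error
ndof = growth ** np.arange(steps)      # #T_ell growing geometrically
cost = np.cumsum(ndof)                 # overall cost up to each step

rate_dof = -np.polyfit(np.log(ndof), np.log(error), 1)[0]
rate_cost = -np.polyfit(np.log(cost), np.log(error), 1)[0]
print(f"rate over #T: {rate_dof:.3f}, rate over cost: {rate_cost:.3f}")
# both approach log(1/q)/log(growth), since cumsum(ndof) <= C * ndof here
```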
Since all modules of AFEM with inexact solver as displayed in (<ref>) can be implemented at linear cost 𝒪(#𝒯_ℓ), the equivalence (<ref>) means that the quasi-error Δ_ℓ^k decays with rate s over the number of elements #𝒯_ℓ if and only if it decays with rate s over the related overall computational work. In essence, optimal complexity of AFEM with inexact solver thus follows from a perturbation argument (by taking the stopping parameter λ sufficiently small) as soon as full linear convergence (<ref>) of AFEM with inexact solver and optimal rates of AFEM with exact solver (for sufficiently small θ) have been established; see, e.g., <cit.>.

In this paper, we present a novel proof of full linear convergence (<ref>) with contractive solver that, unlike <cit.>, avoids the Pythagorean identity (<ref>) and relies only on the quasi-orthogonality from <cit.> (even in its generalized form from <cit.>). The latter is known to be sufficient and necessary for linear convergence (<ref>) in the presence of exact solvers <cit.>. In particular, this opens the door to proving optimal complexity for AFEM beyond symmetric energy minimization problems. Moreover, problems exhibiting additional difficulties such as nonsymmetric linear elliptic PDEs, see <cit.>, or nonlinear PDEs, see <cit.>, ask for more intricate (nested) solvers that treat iterative symmetrization/linearization together with solving the arising linear SPD systems. This leads to computed approximations u_ℓ^k,j ≈ u_ℓ^⋆ with symmetrization iteration counter k = 1, …, k̲[ℓ] and algebraic solver index j = 1, …, j̲[ℓ,k]. The new proof of full linear convergence allows us to improve the analysis of <cit.> by relaxing the choice of the solver-stopping parameters. Additionally, in the setting of <cit.>, we are able to show that full linear convergence holds from the initial mesh onwards instead of the a priori unknown and possibly large mesh threshold level ℓ_0 > 0. Furthermore, the new analysis not only improves the state-of-the-art theory of full linear convergence leading to optimal complexity, but also allows the choice of larger solver-stopping parameters, which leads to better numerical performance in experiments.

The remainder of this work is structured as follows: As a model problem, Section <ref> formulates a general second-order linear elliptic PDE together with the validity of the so-called axioms of adaptivity from <cit.> and the quasi-orthogonality from <cit.>. In Section <ref>, AFEM with exact solver (<ref>) is presented in Algorithm <ref> and, for completeness, Theorem <ref> summarizes the proof of R-linear convergence (<ref>) from <cit.>. Section <ref> focuses on AFEM with inexact contractive solver (<ref>) detailed in Algorithm <ref>. The main contribution is the new and more general proof of full R-linear convergence of Theorem <ref>. Corollary <ref> proves the important equivalence (<ref>). The case of AFEM with nested contractive solvers, which are useful for nonlinear or nonsymmetric problems, is treated in Section <ref> by presenting Algorithm <ref> from <cit.> and improving its main result in Theorem <ref>. In Section <ref>, we discuss the impact of the new analysis on AFEM for nonlinear PDEs. We show that Theorem <ref> applies also to the setting from <cit.>, namely strongly monotone PDEs with scalar nonlinearity.
Numerical experiments and remarks are discussed in-depth in Section <ref>, where the impact of the adaptivity parameters on the overall cost is investigated empirically.

§ GENERAL SECOND-ORDER LINEAR ELLIPTIC PDES

On a bounded polyhedral Lipschitz domain Ω ⊂ ℝ^d, d ≥ 1, we consider the PDE

-÷(A∇ u^⋆) + b·∇ u^⋆ + c u^⋆ = f - ÷𝒇 in Ω subject to u^⋆ = 0 on ∂Ω,

where A, b, c ∈ L^∞(Ω) and f, 𝒇 ∈ L^2(Ω) with, for almost every x ∈ Ω, positive definite A(x) ∈ ℝ^d×d_sym, b(x), 𝒇(x) ∈ ℝ^d, and c(x), f(x) ∈ ℝ. With ⟨·,·⟩_L^2(Ω) denoting the usual L^2(Ω)-scalar product, we suppose that the PDE fits into the setting of the Lax–Milgram lemma, i.e., the bilinear forms

a(u, v) := ⟨A∇ u, ∇ v⟩_L^2(Ω) and b(u, v) := a(u, v) + ⟨b·∇ u + cu, v⟩_L^2(Ω)

are continuous and elliptic on H^1_0(Ω). Then, indeed, a(·,·) is a scalar product and ‖u‖ := a(u,u)^1/2 defines an equivalent norm on H^1_0(Ω). Moreover, the weak formulation

b(u^⋆, v) = F(v) := ⟨f, v⟩_L^2(Ω) + ⟨𝒇, ∇ v⟩_L^2(Ω) for all v ∈ H^1_0(Ω)

admits a unique solution u^⋆ ∈ H^1_0(Ω).

Let 𝒯_0 be an initial conforming triangulation of Ω ⊂ ℝ^d into compact simplices. The mesh refinement employs newest-vertex bisection (NVB). We refer to <cit.> for NVB with admissible 𝒯_0 and d ≥ 2, to <cit.> for NVB with general 𝒯_0 for d = 2, and to the recent work <cit.> for NVB with general 𝒯_0 in any dimension d ≥ 2. For d = 1, we refer to <cit.>. For each triangulation 𝒯_H and marked elements ℳ_H ⊆ 𝒯_H, let 𝒯_h := refine(𝒯_H, ℳ_H) be the coarsest conforming refinement of 𝒯_H such that at least all elements T ∈ ℳ_H have been refined, i.e., ℳ_H ⊆ 𝒯_H ∖ 𝒯_h. To abbreviate notation, we write 𝒯_h ∈ refine(𝒯_H) if 𝒯_h can be obtained from 𝒯_H by finitely many steps of NVB and, in particular, 𝕋 := refine(𝒯_0).

For each 𝒯_H ∈ 𝕋, we consider conforming finite element spaces

𝒳_H := { v_H ∈ H^1_0(Ω) : v_H|_T is a polynomial of total degree ≤ p for all T ∈ 𝒯_H },

where p ∈ ℕ is a fixed polynomial degree. We note that 𝒯_h ∈ refine(𝒯_H) yields nestedness 𝒳_H ⊆ 𝒳_h of the corresponding discrete spaces. Given 𝒯_H ∈ 𝕋, there exists a unique Galerkin solution u_H^⋆ ∈ 𝒳_H solving

b(u^⋆_H, v_H) = F(v_H) for all v_H ∈ 𝒳_H

that is quasi-optimal in the sense of the Céa lemma

‖u^⋆ - u_H^⋆‖ ≤ C_Céa min_v_H ∈ 𝒳_H ‖u^⋆ - v_H‖ with C_Céa := L/α.

Here, L is the boundedness constant and α is the ellipticity constant of b(·,·) with respect to ‖·‖. We consider the residual error estimator η_H(·) defined, for T ∈ 𝒯_H and v_H ∈ 𝒳_H, by

η_H(T, v_H)^2 := |T|^2/d ‖-÷(A∇ v_H - 𝒇) + b·∇ v_H + c v_H - f‖_L^2(T)^2 + |T|^1/d ‖[(A∇ v_H - 𝒇) · n]‖_L^2(∂T ∩ Ω)^2,

where [·] denotes the jump over (d-1)-dimensional faces. Clearly, the well-posedness of (<ref>) requires more regularity of A and 𝒇 than stated above, e.g., A|_T, 𝒇|_T ∈ W^1,∞(T) for all T ∈ 𝒯_0. To abbreviate notation, we define, for all 𝒰_H ⊆ 𝒯_H and all v_H ∈ 𝒳_H,

η_H(v_H) := η_H(𝒯_H, v_H) with η_H(𝒰_H, v_H) := ( ∑_T ∈ 𝒰_H η_H(T, v_H)^2 )^1/2.

From <cit.>, we recall that the error estimator satisfies the following properties. There exist constants C_stab, C_rel, C_drel, C_mon > 0 and 0 < q_red < 1 such that the following properties are satisfied for any triangulation 𝒯_H ∈ 𝕋 and any conforming refinement 𝒯_h ∈ refine(𝒯_H) with the corresponding Galerkin solutions u_H^⋆ ∈ 𝒳_H, u_h^⋆ ∈ 𝒳_h to (<ref>) and arbitrary v_H ∈ 𝒳_H, v_h ∈ 𝒳_h.

(A1) stability: |η_h(𝒯_h ∩ 𝒯_H, v_h) - η_H(𝒯_h ∩ 𝒯_H, v_H)| ≤ C_stab ‖v_h - v_H‖.
(A2) reduction: η_h(𝒯_h ∖ 𝒯_H, v_H) ≤ q_red η_H(𝒯_H ∖ 𝒯_h, v_H).
(A3) reliability: ‖u^⋆ - u_H^⋆‖ ≤ C_rel η_H(u_H^⋆).
(A3^+) discrete reliability: ‖u_h^⋆ - u_H^⋆‖ ≤ C_drel η_H(𝒯_H ∖ 𝒯_h, u_H^⋆).
(QM) quasi-monotonicity: η_h(u_h^⋆) ≤ C_mon η_H(u_H^⋆).

The constant q_red depends only on uniform shape regularity of all meshes 𝒯_H ∈ 𝕋 and the dimension d, while C_stab and C_rel additionally depend on the polynomial degree p.
The constant q_red reads 2^-1/(2d) for bisection-based refinement rules in ℝ^d, and the constant C_mon can be bounded by C_mon ≤ min{1 + C_stab(1 + C_rel), 1 + C_drel}.

In addition to the estimator properties in Proposition <ref>, we recall the following quasi-orthogonality result from <cit.> as one cornerstone of the improved analysis in this paper. There exist C_orth > 0 and 0 < δ ≤ 1 such that the following holds: For any sequence 𝒳_ℓ ⊆ 𝒳_ℓ+1 ⊂ H^1_0(Ω) of nested finite-dimensional subspaces, the corresponding Galerkin solutions u_ℓ^⋆ ∈ 𝒳_ℓ to (<ref>) satisfy

(A4) quasi-orthogonality: ∑_ℓ' = ℓ^ℓ+N ‖u^⋆_ℓ'+1 - u^⋆_ℓ'‖^2 ≤ C_orth (N+1)^1-δ ‖u^⋆ - u^⋆_ℓ‖^2 for all ℓ, N ∈ ℕ_0.

Here, C_orth and δ depend only on the dimension d, the elliptic bilinear form b(·,·), and the chosen norm ‖·‖, but are independent of the spaces 𝒳_ℓ.

Quasi-orthogonality (<ref>) is a generalization of the Pythagorean identity (<ref>) for symmetric problems. Indeed, if b = 0 in (<ref>) and b(·,·) = a(·,·) is a scalar product, the Galerkin method for nested subspaces 𝒳_ℓ ⊆ 𝒳_ℓ+1 ⊂ H^1_0(Ω) guarantees (<ref>). Thus, the telescopic series proves (<ref>) with C_orth = 1 and δ = 1. We highlight that <cit.> proves (<ref>) even for more general linear problems and Petrov–Galerkin discretizations.

A closer look at the proofs in Sections <ref>–<ref> below reveals that they rely only on the properties (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), but not on (<ref>), the Céa lemma (<ref>), or linearity of the PDE. Hence, Algorithms <ref>, <ref>, and <ref> and the corresponding Theorems <ref>, <ref>, and <ref> apply beyond the linear problem (<ref>); see Section <ref> for a nonlinear PDE.

§ AFEM WITH EXACT SOLUTION

To outline the new proof strategy, we first consider the standard adaptive algorithm (see, e.g., <cit.>), where the arising Galerkin systems (<ref>) are solved exactly. The following theorem asserts convergence of Algorithm <ref> in the spirit of <cit.>. Let 0 < θ ≤ 1 and C_mark ≥ 1 be arbitrary. Then, Algorithm <ref> guarantees R-linear convergence of the estimators η_ℓ(u_ℓ^⋆), i.e., there exist constants 0 < q_lin < 1 and C_lin > 0 with

η_ℓ+n(u^⋆_ℓ+n) ≤ C_lin q_lin^n η_ℓ(u^⋆_ℓ) for all ℓ, n ∈ ℕ_0.

For vanishing convection b = 0 in (<ref>) and b(·,·) = a(·,·), <cit.> proves linear convergence (<ref>) of the quasi-error. Together with reliability (<ref>), this yields R-linear convergence of the estimator sequence

η_ℓ+n(u_ℓ+n^⋆) ≤ (C_rel^2 + γ)^1/2/γ^1/2 q_lin^n η_ℓ(u_ℓ^⋆) for all ℓ, n ∈ ℕ_0.

In this sense, Theorem <ref> is weaker than linear convergence (<ref>) from <cit.>, but provides a direct proof of R-linear convergence even if b(·,·) ≠ a(·,·). Moreover, while the proof of (<ref>) crucially relies on the Pythagorean identity (<ref>), the works <cit.> extend the analysis to the general second-order linear elliptic PDE (<ref>) using

∀ 0 < ε < 1 ∃ ℓ_0 ∈ ℕ_0 ∀ ℓ ≥ ℓ_0: ‖u^⋆ - u^⋆_ℓ+1‖^2 ≤ ‖u^⋆ - u^⋆_ℓ‖^2 - ε ‖u^⋆_ℓ+1 - u^⋆_ℓ‖^2.

From this, contraction (<ref>) follows for all ℓ ≥ ℓ_0 and allows one to extend the AFEM analysis from <cit.> to general second-order linear elliptic PDEs. However, the index ℓ_0 depends on the exact solution u^⋆ and on the sequence of exact discrete solutions (u_ℓ^⋆)_ℓ∈ℕ_0. Moreover, ℓ_0 = 0 requires a sufficiently fine 𝒯_0 in <cit.>. In contrast to that, R-linear convergence (<ref>) from Theorem <ref> holds with ℓ_0 = 0 and any initial mesh 𝒯_0.

The proof of Theorem <ref> relies on the following elementary lemma that extends arguments implicitly found for the estimator sequence in <cit.> but that will be employed for certain quasi-errors in the present work. Its proof is found in Appendix <ref>. Let (a_ℓ)_ℓ∈ℕ_0, (b_ℓ)_ℓ∈ℕ_0 be scalar sequences in ℝ_≥ 0.
With given constants 0 < q < 1, 0 < δ < 1, and C_1, C_2 > 0, suppose that

a_ℓ+1 ≤ q a_ℓ + b_ℓ, b_ℓ+N ≤ C_1 a_ℓ, and ∑_ℓ' = ℓ^ℓ+N b_ℓ'^2 ≤ C_2 (N+1)^1-δ a_ℓ^2 for all ℓ, N ∈ ℕ_0.

Then, (a_ℓ)_ℓ∈ℕ_0 is R-linearly convergent, i.e., there exist constants C_R > 0 and 0 < q_R < 1 with

a_ℓ+n ≤ C_R q_R^n a_ℓ for all ℓ, n ∈ ℕ_0.

We employ Lemma <ref> for the sequences defined by a_ℓ := η_ℓ(u_ℓ^⋆) and b_ℓ := C_stab ‖u_ℓ+1^⋆ - u_ℓ^⋆‖. First, reliability (<ref>) and quasi-monotonicity (<ref>) show

‖u_ℓ''^⋆ - u_ℓ'^⋆‖ ≲ η_ℓ''(u_ℓ''^⋆) + η_ℓ'(u_ℓ'^⋆) ≲ η_ℓ(u_ℓ^⋆) for all ℓ, ℓ', ℓ'' ∈ ℕ_0 with ℓ ≤ ℓ' ≤ ℓ''.

In particular, this proves b_ℓ+N ≲ a_ℓ for all ℓ, N ∈ ℕ_0. Moreover, quasi-orthogonality (<ref>) and reliability (<ref>) show

∑_ℓ' = ℓ^ℓ+N ‖u^⋆_ℓ'+1 - u^⋆_ℓ'‖^2 ≤ C_orth C_rel^2 (N+1)^1-δ η_ℓ(u^⋆_ℓ)^2 for all ℓ, N ∈ ℕ_0.

In order to verify (<ref>), it thus only remains to prove the perturbed contraction of a_ℓ. To this end, let ℓ ∈ ℕ_0. Then, stability (<ref>) and reduction (<ref>) show

η_ℓ+1(u^⋆_ℓ)^2 ≤ η_ℓ(𝒯_ℓ+1 ∩ 𝒯_ℓ, u^⋆_ℓ)^2 + q_red^2 η_ℓ(𝒯_ℓ ∖ 𝒯_ℓ+1, u^⋆_ℓ)^2 = η_ℓ(u_ℓ^⋆)^2 - (1 - q_red^2) η_ℓ(𝒯_ℓ ∖ 𝒯_ℓ+1, u^⋆_ℓ)^2.

Moreover, Dörfler marking (<ref>) and refinement of (at least) all marked elements lead to

θ η_ℓ(u^⋆_ℓ)^2 ≤ η_ℓ(ℳ_ℓ, u^⋆_ℓ)^2 ≤ η_ℓ(𝒯_ℓ ∖ 𝒯_ℓ+1, u^⋆_ℓ)^2.

The combination of the two previously displayed formulas results in

η_ℓ+1(u^⋆_ℓ) ≤ q_θ η_ℓ(u_ℓ^⋆) with 0 < q_θ := [1 - (1 - q_red^2) θ]^1/2 < 1.

Finally, stability (<ref>) thus leads to the desired estimator reduction estimate

η_ℓ+1(u^⋆_ℓ+1) ≤ q_θ η_ℓ(u_ℓ^⋆) + C_stab ‖u_ℓ+1^⋆ - u_ℓ^⋆‖ for all ℓ ∈ ℕ_0.

Altogether, all the assumptions (<ref>) are satisfied and Lemma <ref> concludes the proof.

§ AFEM WITH CONTRACTIVE SOLVER

Let Ψ_H: 𝒳_H → 𝒳_H be the iteration mapping of a uniformly contractive solver, i.e.,

‖u^⋆_H - Ψ_H(v_H)‖ ≤ q_ctr ‖u^⋆_H - v_H‖ for all 𝒯_H ∈ 𝕋 and all v_H ∈ 𝒳_H.

The following algorithm is thoroughly analyzed in <cit.> under the assumption that the problem is symmetric (and hence the Pythagorean identity (<ref>) holds). The sequential nature of Algorithm <ref> gives rise to the countably infinite index set

𝒬 := { (ℓ,k) ∈ ℕ_0^2 : u_ℓ^k ∈ 𝒳_ℓ is defined in Algorithm <ref> }

together with the lexicographic ordering

(ℓ',k') ≤ (ℓ,k) :⟺ u_ℓ'^k' is defined not later than u_ℓ^k in Algorithm <ref>

and the total step counter

|ℓ,k| := #{ (ℓ',k') ∈ 𝒬 : (ℓ',k') ≤ (ℓ,k) } ∈ ℕ_0 for all (ℓ,k) ∈ 𝒬.

Defining the stopping indices

ℓ̲ := sup{ ℓ ∈ ℕ_0 : (ℓ,0) ∈ 𝒬 } ∈ ℕ_0 ∪ {∞}, k̲[ℓ] := sup{ k ∈ ℕ_0 : (ℓ,k) ∈ 𝒬 } ∈ ℕ ∪ {∞}, whenever (ℓ,0) ∈ 𝒬,

we note that these definitions are consistent with those of Algorithm <ref>(ii). We abbreviate k̲ := k̲[ℓ] whenever the index ℓ is clear from the context, e.g., u_ℓ^k̲ := u_ℓ^k̲[ℓ] or (ℓ,k̲) := (ℓ,k̲[ℓ]). As 𝒬 is an infinite set, the typical case is ℓ̲ = ∞ and k̲[ℓ] < ∞ for all ℓ ∈ ℕ_0, whereas ℓ̲ < ∞ implies that k̲[ℓ̲] = ∞, i.e., non-termination of the iterative solver on the mesh 𝒯_ℓ̲.

The following theorem states convergence of Algorithm <ref>. In particular, it shows that ℓ̲ < ∞ implies η_ℓ̲(u_ℓ̲^⋆) = 0 and consequently u^⋆ = u_ℓ̲^⋆ by reliability (<ref>). Let 0 < θ ≤ 1, C_mark ≥ 1, λ > 0, and u_0^0 ∈ 𝒳_0 be arbitrary. Then, Algorithm <ref> guarantees R-linear convergence of the modified quasi-error

Η_ℓ^k := ‖u_ℓ^⋆ - u_ℓ^k‖ + η_ℓ(u_ℓ^k),

i.e., there exist constants 0 < q_lin < 1 and C_lin > 0 such that

Η_ℓ^k ≤ C_lin q_lin^|ℓ,k| - |ℓ',k'| Η_ℓ'^k' for all (ℓ',k'), (ℓ,k) ∈ 𝒬 with |ℓ',k'| ≤ |ℓ,k|.

Unlike <cit.> (and <cit.>), Theorem <ref> and its proof employ the quasi-error Η_ℓ^k from (<ref>) instead of

Δ_ℓ^k := [ ‖u^⋆ - u_ℓ^k‖^2 + γ η_ℓ(u_ℓ^k)^2 ]^1/2

analogous to (<ref>). We note that stability (<ref>) and reliability (<ref>) yield Δ_ℓ^k ≲ Η_ℓ^k, while the converse estimate follows from the Céa lemma (<ref>).
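A hedged sketch of Algorithm <ref> makes the interplay of the modules explicit: the contractive solver Ψ is iterated until the a posteriori stopping test with parameter λ certifies that the algebraic error is dominated by the estimator, and the final iterate is reused on the refined mesh (nested iteration). The routine doerfler_mark is the one from the sketch in the introduction; Psi, estimate, refine, prolongate, and norm are hypothetical backend callables.

```python
# Sketch of AFEM with a contractive solver: iterate the solver step Psi
# until ||u^k - u^{k-1}|| <= lam * eta_l(u^k), then mark and refine.

def afem_contractive(mesh, u0, theta, lam, tol,
                     Psi, estimate, refine, prolongate, norm):
    u = u0
    while True:
        while True:                              # k = 1, 2, ... solver steps
            u_prev, u = u, Psi(mesh, u)
            eta2 = estimate(mesh, u)
            if norm(u - u_prev) <= lam * sum(eta2) ** 0.5:
                break                            # stopping criterion met
        if sum(eta2) ** 0.5 <= tol:
            return mesh, u
        mesh = refine(mesh, doerfler_mark(eta2, theta))
        u = prolongate(u, mesh)                  # nested iteration: u_{l+1}^0 = u_l^k
```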
The work <cit.> extends the ideas of <cit.> (that prove (<ref>) for AFEM with exact solver) and of <cit.> (that extend (<ref>) to the final iterates for AFEM with contractive solver). For the scalar product b(·, ·) = a(·,·) and arbitrary stopping parameters λ > 0, it shows that the quasi-error Δ_ℓ^k from Remark <ref> satisfies contraction Δ_ℓ^k≤Δ_ℓ^k-1 for all(ℓ,k) ∈ with0 < k < [ℓ], Δ_ℓ+1^0≤Δ_ℓ^-1 for all(ℓ,) ∈ with contraction constant 0 << 1, along the approximations u_ℓ^k ∈_ℓ generated by Algorithm <ref>. The proof of (<ref>) can be generalized similarly to Remark <ref>, see <cit.>: With the quasi-Pythagorean estimate (<ref>), the contraction (<ref>) transfers to general second-order linear elliptic PDEs (<ref>) under the restriction that (<ref>) holds only for all ℓ≥ℓ_0, where ℓ_0 ∈_0 exists, but is unknown in practice. While, as noted in Remark <ref>, contraction (<ref>) implies full R-linear convergence (<ref>), theproof of Theorem <ref> works under much weaker assumptions than that of <cit.> and covers the PDE (<ref>) with ℓ_0=0. The proof of Theorem <ref> relies on Lemma <ref> and the following elementary result essentially taken from <cit.>. Its proof is found in Appendix <ref>. Let (a_ℓ)_ℓ∈_0 be a scalar sequence in _≥0 and m > 0. Then, the following statements are equivalent: (i) tail summability: There exists a constant C_m > 0 such that ∑_ℓ' = ℓ+1^∞ a_ℓ'^m ≤ C_m a_ℓ^m for all ℓ∈_0. (ii) R-linear convergence: There holds (<ref>) with certain 0 << 1 and > 0. The proof is split into two steps.Step 1 (tail summability with respect to ℓ). Let ℓ∈ with (ℓ+1,) ∈. Algorithm <ref> guarantees nested iteration u_ℓ+1^0 = u_ℓ^ and [ℓ] ≥ 1. This and contraction of the algebraic solver (<ref>) showu_ℓ+1^⋆ - u_ℓ+1^eq:contractive-solver≤^[ℓ] u_ℓ+1^⋆ - u_ℓ^≤ u_ℓ+1^⋆ - u_ℓ^As in the proof of Theorem <ref>, one obtains the estimator reductionη_ℓ+1(u_ℓ+1^)*eq:estimator-reduction≤q_θ η_ℓ(u_ℓ^) +u_ℓ+1^- u_ℓ^*eq1:single:convergence≤q_θ η_ℓ(u_ℓ^) + ( + 1) u_ℓ+1^⋆- u_ℓ^.Choosing 0 < γ≤ 1 with 0 < max{ + (+1) γ ,q_θ} < 1, the combination of (<ref>)–(<ref>) readsa_ℓ+1u_ℓ+1^⋆ - u_ℓ+1^ + γ η_ℓ+1(u_ℓ+1^)≤ [ u_ℓ+1^⋆ - u_ℓ^ + γ η_ℓ(u_ℓ^) ]≤ [ u_ℓ^⋆ - u_ℓ^ + γ η_ℓ(u_ℓ^) ]+u_ℓ+1^⋆ - u_ℓ^⋆ a_ℓ + b_ℓ.Moreover, estimate (<ref>) from the proof of Theorem <ref> and stability (<ref>) prove thatu_ℓ”^⋆ - u_ℓ'^⋆*eq:quasimonotonicity_b≲η_ℓ(u_ℓ^⋆)*axiom:stability≲u_ℓ^⋆ - u_ℓ^ + η_ℓ(u_ℓ^)≃ a_ℓforℓ≤ℓ' ≤ℓ”≤with (ℓ,) ∈,which yields b_ℓ+N≲ a_ℓ for all 0 ≤ℓ≤ℓ+N ≤ with (ℓ,) ∈. As in (<ref>), we see∑_ℓ' = ℓ^ℓ+N b_ℓ'^2≃∑_ℓ' = ℓ^ℓ+Nu_ℓ'+1^⋆ - u_ℓ'^⋆^2axiom:orthogonality≲(N+1)^1-δ u^⋆ - u_ℓ^⋆^2axiom:reliability≲(N+1)^1-δ η_ℓ(u_ℓ^⋆)^2 axiom:stability≲(N+1)^1-δ [ η_ℓ(u_ℓ^) + u_ℓ^⋆- u_ℓ^]^2≃ (N+1)^1-δa_ℓ^2 for all0 ≤ℓ≤ℓ+N < .Hence, the assumptions (<ref>) are satisfied and Lemma <ref> concludes tail summability (or equivalently R-linear convergence by Lemma <ref>) of Η_ℓ^≃ a_ℓ, i.e.,∑_ℓ' = ℓ+1^ - 1Η_ℓ'^≲Η_ℓ^for all 0 ≤ℓ < . Step 2 (tail summability with respect to ℓ and k). 
First, for 0 ≤ k < k' < [ℓ], the failure of the termination criterion (<ref>) and contraction of the solver (<ref>) prove thatΗ_ℓ^k'eq:single:termination≲u_ℓ^⋆ - u_ℓ^k' + u_ℓ^k' - u_ℓ^k'-1eq:contractive-solver≲u_ℓ^⋆ - u_ℓ^k'-1eq:contractive-solver≲^k'-k u_ℓ^⋆ - u_ℓ^keq2:single:quasi-error≤^k'-k Η_ℓ^k.Second, for (ℓ,) ∈, it holds thatΗ_ℓ^*axiom:stability≲ u_ℓ^⋆ - u_ℓ^ + η_ℓ(u_ℓ^-1) +u_ℓ^ - u_ℓ^-1 *eq:contractive-solver≤ Η_ℓ^-1 + 2u_ℓ^⋆ - u_ℓ^eq:contractive-solver≤ (1 + 2)Η_ℓ^-1for all(ℓ,) ∈.Hence, we may concludeΗ_ℓ^k'≲^k'-k Η_ℓ^kfor all0 ≤ k ≤ k' ≤[ℓ].Withu_ℓ+1^⋆ - u_ℓ^⋆≲a_ℓ≃Η_ℓ^ from (<ref>), stability (<ref>) and reduction (<ref>) showΗ_ℓ+1^0 = u_ℓ+1^⋆ - u_ℓ^ +η_ℓ+1(u_ℓ^)≤Η_ℓ^ + u_ℓ+1^⋆ - u_ℓ^⋆≲Η_ℓ^for all(ℓ,) ∈.Overall, the geometric series proves tail summability (<ref>) via∑_(ℓ',k') ∈|ℓ',k'|> |ℓ,k|Η_ℓ'^k' =∑_k'=k+1^[ℓ]Η_ℓ^k'+ ∑_ℓ' = ℓ+1^∑_k'=0^[ℓ']Η_ℓ'^k'eq:single:contraction-Lambda≲Η_ℓ^k + ∑_ℓ' = ℓ+1^Η_ℓ'^0eq2:single:contraction-Lambda≲Η_ℓ^k + ∑_ℓ' = ℓ^-1Η_ℓ'^eq:single:summability-ell≲Η_ℓ^k + Η_ℓ^eq:single:contraction-Lambda≲Η_ℓ^kfor all (ℓ,k) ∈.Sinceis countable and linearly ordered, Lemma <ref> concludes the proof of (<ref>) The following comments on the computational cost of implementations of standard finite element methods underline the importance of full linear convergence (<ref>). * Solve & Estimate. One solver step of an optimal multigrid method can be performed in (#_ℓ) operations, if smoothing is done according to the grading of the mesh <cit.>. Instead, one step of a multigrid method on _ℓ, where smoothing is done on all levels and all vertex patches needs (∑_ℓ'=0^ℓ#_ℓ') operations. The same remark is valid for the preconditioned CG method with optimal additive Schwarz or BPX preconditioner <cit.>. One solver step can be realized via successive updates in (#_ℓ) operations, while (∑_ℓ' = 0^ℓ#_ℓ') is faced if the preconditioner does not respect the grading of the mesh hierarchy.* Mark. The Dörfler marking strategy (<ref>) can be realized in linear complexity (#_ℓ); see <cit.> for = 2 and <cit.> for = 1.* Refine. Local mesh refinement (including mesh closure) of _ℓ by bisection can be realized in (#_ℓ) operations; see, e.g., <cit.>.Since the adaptive algorithm depends on the full history of algorithmic decisions, the overall computational cost until step (ℓ,k) ∈, i.e., until (and including) the computation of u_ℓ^k, is thus proportionally bounded by∑_(ℓ',k') ∈|ℓ',k'| ≤ |ℓ,k|#_ℓ'≤ cost(ℓ,k)≤∑_(ℓ',k') ∈|ℓ',k'| ≤ |ℓ,k|∑_ℓ” = 0^ℓ'#_ℓ”.Here, the lower bound corresponds to the case that all steps of Algorithm <ref> are done at linear cost (#_ℓ). The upper bound corresponds to the case that solve & estimate, mark, and refine are performed at linear cost (#_ℓ), while a suboptimal solver leads to cost (∑_ℓ” = 0^ℓ#_ℓ”) for each mesh _ℓ). In any case, the following corollary shows that full R-linear convergence guarantees that convergence rates with respect to the number of degrees of freedom _ℓ≃#_ℓ and with respect to the overall computational cost cost(ℓ,k) coincide even for a suboptimal solver. For s > 0, full R-linear convergence (<ref>) yieldsM(s)sup_(ℓ,k) ∈ (#_ℓ)^sΗ_ℓ^k≤sup_(ℓ,k) ∈(∑_(ℓ',k') ∈|ℓ',k'| ≤ |ℓ,k|∑_ℓ” = 0^ℓ'#_ℓ”)^s Η_ℓ^k≤ C_ cost(s) M(s),where the constant C_ cost(s) > 0 depends only on , , and s. 
Moreover, there exists s_0 > 0 such that M(s) < ∞ for all 0 < s ≤ s_0.By definition, it holds that#_ℓ≤ M(s)^1/s(Η_ℓ^k )^-1/sfor all(ℓ,k) ∈.For (ℓ',k') ∈, this, full R-linear convergence (<ref>), and the geometric series prove that∑_ℓ” = 0^ℓ'#_ℓ”≤M(s)^1/s ∑_ℓ” = 0^ℓ' (Η_ℓ”^0)^-1/s *eq:single:convergence≤M(s)^1/s ^1/s (Η_ℓ'^k')^-1/s ∑_ℓ” = 0^ℓ' (^1/s)^|ℓ',k'| - |ℓ”,0|≤M(s)^1/s ^1/s/1-^1/s (Η_ℓ'^k')^-1/s.Moreover, full R-linear convergence (<ref>) and the geometric series prove that∑_(ℓ',k') ∈|ℓ',k'| ≤ |ℓ,k| (Η_ℓ'^k')^-1/seq:single:convergence≤^1/s (Η_ℓ^k)^-1/s ∑_(ℓ',k') ∈|ℓ',k'| ≤ |ℓ,k| (^1/s)^|ℓ,k| - |ℓ',k'|≤^1/s/1-^1/s(Η_ℓ^k)^-1/s.The combination of the two previously displayed formulas results in∑_(ℓ',k') ∈|ℓ',k'| ≤ |ℓ,k|∑_ℓ” = 0^ℓ'#_ℓ”≤(^1/s/1-^1/s)^2M(s)^1/s(Η_ℓ^k)^-1/s.Rearranging this estimate, we conclude the proof of (<ref>). It remains to verify M(s) < ∞ for some s > 0. If < ∞, the supremum becomes a maximum of finitely many non-zero values and hence M(s) < ∞ for all s > 0. If = ∞, note that NVB guarantees that#_ℓ≤#_ℓ-1≤^ℓ#_0for all (ℓ,k) ∈ with ℓ≥ 1,where ≥ 2 depends only on _0 (see <cit.> and note that = 2 for d=1 and = 4 for d = 2, respectively). Moreover, full R-linear convergence (<ref>) yields thatΗ_ℓ^keq:single:convergence≤^|ℓ,k| Η_0^0≤^ℓ Η_0^0for all (ℓ,k) ∈.Multiplying the two previously displayed formulas, we see that(#_ℓ)^sΗ_ℓ^k≤(^s )^ℓ(#_0)^sΗ_0^0for all(ℓ,k) ∈.Note that the right-hand side is uniformly bounded, provided that s > 0 guarantees ^s ≤ 1. This concludes the proof with s_0 log(1/) / log().Considering the nonsymmetric model problem (<ref>), a natural candidate for the solver is the generalized minimal residual method (GMRES) with optimal preconditioner for the symmetric part. However, a posteriori error estimation and contraction in the PDE-related energy norm are still open. Instead, <cit.> follows the constructive proof of the Lax–Milgram lemma to derive a contractive solver. Its convergence analysis, as given in <cit.>, is improved in the following Section <ref>. § AFEM WITH NESTED CONTRACTIVE SOLVERSWhile contractive solvers for SPD systems are well-understood in the literature, the recent work <cit.> presents contractive solvers for the nonsymmetric variational formulation (<ref>) that essentially fit into the framework of Section <ref> and allow for the numerical analysis of AFEM with optimal complexity. To this end, the proof of the Lax–Milgram lemma as proposed by Zarantonello <cit.> is exploited algorithmically (while the original proof <cit.> relies on the Hahn–Banach separation theorem): For δ > 0, we consider the Zarantonello mapping Φ_(δ; ·) _→_ defined bya(Φ_(δ; u_), v_)=a(u_,v_) + δ[ F(v_) - b(u_, v_) ]for allu_, v_∈_.Since a(·, ·) is a scalar product, Φ_(δ; u_) ∈_ is well-defined. Moreover, for any 0 < δ < 2α/L^2 and 0 <[1 - δ(2α-δ L^2)]^1/2 < 1, this mapping is contractive, i.e.,u_^⋆ - Φ_(δ; u_)≤ u_^⋆ - u_for allu_∈_;see also <cit.>.Note that (<ref>) corresponds to a linear SPD system. For this, we employ a uniformly contractive algebraic solver with iteration function Ψ_(u_^♯;·) _→_ to approximate the solution u_^♯Φ_(δ; u_) to the SPD system (<ref>), i.e.,u_^♯ - Ψ_(u_^♯; w_)≤ u_^♯ - w_for allw_∈_ and all _∈,where 0 << 1 depends only on a(·,·), but is independent of _. Clearly, no knowledge of u_H^♯ is needed to compute Ψ_(u_^♯; w_) but only that of the corresponding right-hand side a(u_^♯, ·) _→; see, e.g., <cit.>. 
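In a finite-dimensional setting, the Zarantonello step (<ref>) reads u ↦ u + δ A^-1(F - Bu) with the SPD matrix A stemming from a(·,·) and the nonsymmetric matrix B stemming from b(·,·); by the statement above, it contracts in the A-norm with factor [1 - δ(2α - δL^2)]^1/2 provided 0 < δ < 2α/L^2. The toy below uses illustrative random matrices (not an actual PDE discretization) to visualize this contraction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n)) / np.sqrt(n)
A = np.eye(n) + M @ M.T              # SPD matrix representing a(.,.)
B = A + 0.2 * (M - M.T)              # nonsymmetric part representing b(.,.)
F = rng.standard_normal(n)

u_star = np.linalg.solve(B, F)       # exact discrete solution
a_norm = lambda v: float(np.sqrt(v @ A @ v))

delta, u = 0.3, np.zeros(n)
for k in range(8):
    u = u + delta * np.linalg.solve(A, F - B @ u)   # one Zarantonello step
    print(k + 1, a_norm(u - u_star))                # errors decay geometrically
```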
Extending the index notation from Section <ref>, we define the triple index set(ℓ, k, j) ∈_0^3:u_ℓ^k, j is used in Algorithm <ref>together with the lexicographic ordering(ℓ', k', j') ≤ (ℓ, k, j) :⟺ u_ℓ'^k', j' is defined not later than u_ℓ^k,j in Algorithm <ref>.and the total step counter|ℓ, k, j |#(ℓ', k', j') ∈ : (ℓ', k', j') ≤ (ℓ, k, j)∈_0 for(ℓ, k, j) ∈.Moreover, we define the stopping indices supℓ∈_0:(ℓ,0,0) ∈∈_0 ∪{∞}, [ℓ]supk ∈_0:(ℓ,k,0) ∈∈∪{∞}, whenever(ℓ,0,0) ∈, [ℓ,k]supj ∈_0:(ℓ,k,j) ∈∈∪{∞}, whenever(ℓ,k,0) ∈. First, these definitions are consistent with those of Algorithm <ref>(i.a.II) and Algorithm <ref>(ii). Second, there holds indeed [ℓ, k] < ∞ for all (ℓ, k, 0) ∈; see <cit.>. Third, < ∞ yields [] = ∞ and η_(u_^⋆) = 0 with u_^⋆ = u^⋆; see <cit.>.The following theorem improves <cit.> in the sense that, first, we prove R-linear convergence for all ℓ≥ℓ_0 = 0, while ℓ_0 ∈ is unknown in practice in <cit.>, and, second, <cit.> requires severe restrictions onbeyond (<ref>) below. We note that (<ref>) is indeed satisfied, if the algebraic system is solved exactly, i.e., = 0, so that Theorem <ref> is a consistent generalization of Theorem <ref>. Let 0 < θ≤ 1, ≥ 1, ,> 0, and u_0^0,0∈_0. With q_θ [1 - (1-^2)θ]^1/2, suppose that0<+ 2/1- /1 - 2/1-< 1and < (1-)(1-)(1-q_θ)/8.Then, Algorithm <ref> guarantees R-linear convergence of the quasi-error equation Η_ℓ^k,ju_ℓ^⋆ - u_ℓ^k,j + u_ℓ^k,⋆ - u_ℓ^k,j+ η_ℓ(u_ℓ^k,j),i.e., there exist constants 0 << 1 and > 0 such thatΗ_ℓ^k,j≤^|ℓ,k,j| - |ℓ',k',j'| Η_ℓ'^k',j' for all (ℓ',k',j'),(ℓ,k,j) ∈ with|ℓ',k',j'| ≤ |ℓ,k,j|.As proven for Corollary <ref> in Section <ref>, an immediate consequence of full linear convergence (and the geometric series) is that convergence rates with respect to the number of degrees of freedom and with respect to the overall computational cost coincide. For s > 0, full R-linear convergence (<ref>) yields= -4muM(s)sup_(ℓ,k,j) ∈ (#_ℓ)^sΗ_ℓ^k,j≤sup_(ℓ,k,j) ∈(∑_(ℓ',k',j') ∈ |ℓ',k',j'| ≤ |ℓ,k,j|∑_(ℓ”,k”,j”) ∈|ℓ”,k”,j”| ≤ |ℓ',k',j'|#_ℓ”)^s Η_ℓ^k,j≤ C_ cost(s) M(s),where the constant C_ cost(s) > 0 depends only on , , and s. Moreover, there exists s_0 > 0 such that M(s) < ∞ for all 0 < s ≤ s_0.The proof of Theorem <ref> requires the following lemma (essentially taken from <cit.>). It deduces the contraction of the inexact Zarantonello iteration with computed iterates u_ℓ^k,≈ u_ℓ^k, ⋆ from the exact Zarantonello iteration. For the inexact iteration, the linear SPD system (<ref>) is solved with the contractive algebraic solver (<ref>), i.e., u_ℓ^k,⋆Φ_ℓ(δ; u_ℓ^k-1,) and u_ℓ^k,jΨ_ℓ(u_ℓ^k,⋆, u_ℓ^k,j-1) guaranteeu_ℓ^⋆ - u_ℓ^k,⋆≤ u_ℓ^⋆ - u_ℓ^k-1,for all(ℓ,k,j) ∈ withk ≥ 1.We emphasize that contraction is only guaranteed for 0 < k < [ℓ] in (<ref>) below, while the final iteration k = [ℓ] leads to a perturbed contraction (<ref>) thus requiring additional treatment in the later analysis. The proof of Lemma <ref> is given in Appendix <ref>.Under the assumptions of Theorem <ref>, the inexact Zarantonello iteration used in Algorithm <ref> satisfiesu_ℓ^⋆ - u_ℓ^k,≤ u_ℓ^⋆ - u_ℓ^k-1,for all(ℓ,k,) ∈ with1 ≤ k < [ℓ]as well asu_ℓ^⋆ - u_ℓ^,≤ u_ℓ^⋆ - u_ℓ^-1,+ 2/1-η_ℓ(u_ℓ^,)for all(ℓ,,) ∈. The proof is split into six steps. 
The first four steps follow the proof of Theorem <ref> usingΗ_ℓ^ku_ℓ^⋆ - u_ℓ^k, + η_ℓ(u_ℓ^k,)for all(ℓ,k,) ∈.By contraction of the algebraic solver (<ref>) as well as the stopping criteria for the algebraic solver (<ref>) and for the symmetrization (<ref>), it holds thatu_ℓ^,⋆ - u_ℓ^,eq:double:contractive-solver≲u_ℓ^, - u_ℓ^,-1eq:double:termination:alg≲η_ℓ(u_ℓ^,) + u_ℓ^, - u_ℓ^-1,eq:double:termination:sym≲η_ℓ(u_ℓ^,) ≤Η_ℓ^.In particular, this proves equivalenceΗ_ℓ^≤Η_ℓ^ + u_ℓ^,⋆ - u_ℓ^, =Η_ℓ^,≲Η_ℓ^for all (ℓ,,) ∈. Step 1 (auxiliary estimates & estimator reduction). For (ℓ, , ) ∈, nested iteration u_ℓ^,0 = u_ℓ^-1, and [ℓ,] ≥ 1 yieldu_ℓ^,⋆ - u_ℓ^,eq:double:contractive-solver≤^[ℓ,] u_ℓ^,⋆ - u_ℓ^,0≤ u_ℓ^,⋆ - u_ℓ^-1,.From this, we obtain thatu_ℓ^⋆ - u_ℓ^, ≤u_ℓ^⋆ - u_ℓ^,⋆ + u_ℓ^,⋆ - u_ℓ^,*eq:step1:1*≤(1+)u_ℓ^⋆ - u_ℓ^,⋆ +u_ℓ^⋆ - u_ℓ^-1,*eq:zarantonello≤[ (1+)+ ]u_ℓ^⋆ -u_ℓ^-1,≤3u_ℓ^⋆ - u_ℓ^-1,.For (ℓ+1, , ) ∈, contraction of the inexact Zarantonello iteration (<ref>), nested iteration u_ℓ+1^0, = u_ℓ^,, and [ℓ+1] ≥ 1,show thatu_ℓ+1^⋆ - u_ℓ+1^-1,*eq1:zarantonello:inexact≤^[ℓ+1]-1 u_ℓ+1^⋆ - u_ℓ+1^0,≤u_ℓ+1^⋆ - u_ℓ^,.The combination of the previous two displayed formulas showsu_ℓ+1^⋆ - u_ℓ+1^,*eq:step1:2*≤3u_ℓ+1^⋆ - u_ℓ+1^-1,*eq:step1:3*≤3u_ℓ+1^⋆ - u_ℓ^,.Analogous arguments to (<ref>) in the proof of Theorem <ref> establishη_ℓ+1(u_ℓ+1^,) eq:single:estimator-reduction≤q_θ η_ℓ(u_ℓ^,) +u_ℓ+1^, - u_ℓ^,eq:step1:1≤q_θ η_ℓ(u_ℓ^, ) + 4u_ℓ+1^⋆ -u_ℓ^, . Step 2 (tail summability with respect to ℓ). With λ :=, we defineγq_θ(1-)/4,C(γ,λ) := 1 + 2/1 -λ/γ,andαλ/γeq:double:assumption:lambda< (1-)(1-q_θ)/2 q_θ.By definition, it follows thatC(γ,λ)=1 + 2/1 -α<1 + 1-q_θ/q_θ = 1 / q_θ.This ensures thatq_θ C(γ,λ) < 1as well as + 4 C(γ,λ)γ< + 4/q_θ γ = 1.With contraction of the inexact Zarantonello iteration (<ref>), Step 1 provesu_ℓ+1^⋆ - u_ℓ+1^, + γ η_ℓ+1(u_ℓ+1^,)eq2:zarantonello:inexact≤ u_ℓ+1^⋆ - u_ℓ+1^-1,+ C(γ,λ)γ η_ℓ+1(u_ℓ+1^,)*eq:step1:3*≤ u_ℓ+1^⋆ - u_ℓ^,+C(γ,λ)γ η_ℓ+1(u_ℓ+1^,)*eq:step2:1≤(+ 4C(γ,λ)γ)u_ℓ+1^⋆ - u_ℓ^,+ q_θC(γ,λ)γ η_ℓ(u_ℓ^,)≤ [ u_ℓ+1^⋆ - u_ℓ^, + γ η_ℓ(u_ℓ^,) ]for all(ℓ+1,,) ∈,where (<ref>) ensures the bound0 < max{ + 4C(γ,λ)γ ,q_θC(γ,λ) } < 1.Altogether, we obtaina_ℓ+1u_ℓ+1^⋆ - u_ℓ+1^, + γ η_ℓ+1(u_ℓ+1^,)*eq:step3:1≤[ u_ℓ^⋆ - u_ℓ^, + γ η_ℓ(u_ℓ^,) ]+u_ℓ+1^⋆ - u_ℓ^⋆a_ℓ + b_ℓfor all (ℓ,,) ∈,which corresponds to (<ref>) in the case of a single contractive solver (with u_ℓ^, replacing u_ℓ^ in (<ref>)). Together with (<ref>)–(<ref>) (with u_ℓ^, replacing u_ℓ^), the assumptions (<ref>) of Lemma <ref> are satisfied. Therefore, Lemma <ref> proves tail summability∑_ℓ' = ℓ+1^-1Η_ℓ'^eq1:double:proof≃∑_ℓ' = ℓ+1^-1[ u_ℓ'^⋆ -u_ℓ'^, + γ η_ℓ'(u_ℓ'^,) ]≲ u_ℓ^⋆ - u_ℓ^, + γ η_ℓ(u_ℓ^,)eq1:double:proof≃Η_ℓ^for all (ℓ,,) ∈. Step 3 (auxiliary estimates). 
First, we employ (<ref>) to deduceΗ_ℓ^ axiom:stability≲u_ℓ^⋆ - u_ℓ^,+ u_ℓ^, - u_ℓ^-1,+ η_ℓ(u_ℓ^-1,)eq1:double:proof≤Η_ℓ^-1 + 2u_ℓ^, - u_ℓ^-1,eq:step1:2*≤Η_ℓ^-1 + 8u_ℓ^⋆ - u_ℓ^-1,≤ 9Η_ℓ^-1for all(ℓ,,) ∈.Second, for 0 ≤ k < k' < [ℓ], the failure of the stopping criterion for the inexact Zarantonello symmetrization (<ref>) and contraction (<ref>) prove thatΗ_ℓ^k'eq:double:termination:sym≲u_ℓ^⋆ - u_ℓ^k', + u_ℓ^k', - u_ℓ^k'-1,eq1:zarantonello:inexact≲u_ℓ^⋆ - u_ℓ^k'-1,eq1:zarantonello:inexact≲^k'-k u_ℓ^⋆ - u_ℓ^k,.Moreover, for k < k' = [ℓ], wecombine (<ref>) with (<ref>) to getΗ_ℓ^eq1:step7≲Η_ℓ^[ℓ]-1eq1a:step7≲^([ℓ]-1)-k u_ℓ^⋆ - u_ℓ^k,≃^[ℓ]-k u_ℓ^⋆ - u_ℓ^k,.The combination of (<ref>)–(<ref>) proves thatΗ_ℓ^k'≲^ k' - k u_ℓ^⋆ - u_ℓ^k,≲^ k' - k Η_ℓ^kfor all (ℓ,0, 0) ∈ with0 ≤ k ≤k' ≤[ℓ],where the hidden constant depends only on , , and . Third, we recallu_ℓ^⋆ - u_ℓ-1^⋆eq:quasimonotonicity_b≲η_ℓ-1(u_ℓ-1^⋆)axiom:stability≲η_ℓ-1(u_ℓ-1^,) + u_ℓ-1^⋆ -u_ℓ-1^,=Η_ℓ-1^.Together with nested iteration u_ℓ-1^, = u_ℓ^0,, this yields thatΗ_ℓ^0= u_ℓ^⋆ - u_ℓ-1^, + η_ℓ(u_ℓ-1^,)≤u_ℓ^⋆ - u_ℓ-1^⋆ + Η_ℓ-1^≲Η_ℓ-1^for all(ℓ,0,0) ∈. Step 4 (tail summability with respect to ℓ and k). The auxiliary estimates from Step 3 and the geometric series prove that∑_(ℓ',k',) ∈|ℓ',k',| > |ℓ,k,|Η_ℓ'^k'=∑_k' = k+1^[ℓ]Η_ℓ^k'+ ∑_ℓ' = ℓ+1^∑_k'=0^[ℓ]Η_ℓ'^k'eq2:step7≲Η_ℓ^k+ ∑_ℓ' = ℓ+1^Η_ℓ'^0eq3:step7≲Η_ℓ^k+ ∑_ℓ' = ℓ^-1Η_ℓ'^≲Η_ℓ^k+ Η_ℓ^eq2:step7≲Η_ℓ^kfor all(ℓ,k,) ∈. Step 5 (auxiliary estimates). Recall Η_ℓ^≤Η_ℓ^, from (<ref>). For j=0 and k=0, the definition u_ℓ^0,0 u_ℓ^0, u_ℓ^0,⋆ leads to Η_ℓ^0,0 = Η_ℓ^0. For k ≥ 1, nested iteration u_ℓ^k,0 = u_ℓ^k-1, and contraction of the Zarantonello iteration (<ref>) implyu_ℓ^k,⋆ - u_ℓ^k,0≤u_ℓ^⋆ - u_ℓ^k,⋆ + u_ℓ^⋆ - u_ℓ^k-1,eq:zarantonello≤( + 1)u_ℓ^⋆ - u_ℓ^k-1,≤ 2Η_ℓ^k-1.Therefore, we derive thatΗ_ℓ^k,0≤ 3Η_ℓ^(k-1)_+for all(ℓ,k,0) ∈,where(k-1)_+ max{0, k-1 }.For any 0 ≤ j < j' < [ℓ,k], the contraction of the Zarantonello iteration (<ref>), the contraction of the algebraic solver (<ref>), and the failure of the stopping criterion for the algebraic solver (<ref>) proveΗ_ℓ^k, j' ≤u_ℓ^⋆ - u_ℓ^k,⋆ + 2u_ℓ^k,⋆ -u_ℓ^k, j' + η_ℓ(u_ℓ^k,j')*eq:zarantonello≲u_ℓ^k,j' - u_ℓ^k-1, +u_ℓ^k,⋆ -u_ℓ^k,j' + η_ℓ(u_ℓ^k,j')*eq:double:contractive-solver≲u_ℓ^k,j' - u_ℓ^k-1,+u_ℓ^k,j' - u_ℓ^k, j'-1 +η_ℓ(u_ℓ^k,j')*eq:double:termination:alg≲u_ℓ^k,j' - u_ℓ^k, j'-1*eq:double:contractive-solver≲u_ℓ^k,⋆ - u_ℓ^k, j'-1*eq:double:contractive-solver≲^j'-j u_ℓ^k,⋆ - u_ℓ^k, j≲^j'-j Η_ℓ^k,j.For j' = [ℓ,k], it follows thatΗ_ℓ^k,axiom:stability≲Η_ℓ^k,-1 +u_ℓ^k, - u_ℓ^k,-1*eq:double:contractive-solver≲Η_ℓ^k,-1 +u_ℓ^k,⋆ - u_ℓ^k,-1*eq:double:quasi-error≤2Η_ℓ^k,-1≲^[ℓ, k]-j Η_ℓ^k,j.The combination of the previous two displayed formulas results inΗ_ℓ^k, j'≲^j'- j Η_ℓ^k,jfor all(ℓ, k, 0) ∈with0 ≤j ≤ j' ≤[ℓ,k],where the hidden constant depends only on , , , , and .Step 6 (tail summability with respect to ℓ, k, and j). Finally, we observe that∑_(ℓ',k',j') ∈|ℓ',k',j'| > |ℓ,k,j|Η_ℓ'^k',j'=∑_j'=j+1^[ℓ,k]Η_ℓ^k,j'+ ∑_k'=k+1^[ℓ]∑_j'=0^[ℓ,k']Η_ℓ^k',j'+ ∑_ℓ' = ℓ+1^∑_k'=0^[ℓ']∑_j'=0^[ℓ',k']Η_ℓ'^k',j'eq3:step10≲Η_ℓ^k,j+ ∑_k'=k+1^[ℓ]Η_ℓ^k',0+ ∑_ℓ' = ℓ+1^∑_k'=0^[ℓ]Η_ℓ'^k',0eq1:step9≲Η_ℓ^k,j+ ∑_(ℓ',k',) ∈|ℓ',k',| > |ℓ,k,|Η_ℓ'^k'eq:step8≲Η_ℓ^k,j+ Η_ℓ^keq2:double:proof≲Η_ℓ^k,j+ Η_ℓ^k,eq3:step10≲Η_ℓ^k,jfor all(ℓ,k,j) ∈.Sinceis countable and linearly ordered, Lemma <ref> concludes the proof of (<ref>). § APPLICATION TO STRONGLY MONOTONE NONLINEAR PDESIn the previous sections, the particular focus is on general second-order linear elliptic PDEs (<ref>). 
However, the results also apply to nonlinear PDEs with strongly monotone and Lipschitz-continuous nonlinearity as considered, e.g., in <cit.>, to mention only some recent works. Given a nonlinearity A: ℝ^d → ℝ^d, we consider the nonlinear elliptic PDE

−div(A(∇u^⋆)) = f − div 𝒇 in Ω subject to u^⋆ = 0 on ∂Ω.

We define the nonlinear operator 𝒜: H^1_0(Ω) → H^{−1}(Ω) ≔ H^1_0(Ω)^* via ⟨𝒜u, v⟩ ≔ ⟨A(∇u), ∇v⟩_{L²(Ω)}, where we suppose that the L²(Ω) scalar product on the right-hand side is well-defined. Then, the weak formulation of (<ref>) reads

⟨𝒜u^⋆, v⟩ = F(v) ≔ ⟨f, v⟩_{L²(Ω)} + ⟨𝒇, ∇v⟩_{L²(Ω)} for all v ∈ H^1_0(Ω),

where ⟨·, ·⟩ on the left-hand side denotes the duality brackets on H^{−1}(Ω) × H^1_0(Ω).

Let a(·,·) be an equivalent scalar product on H^1_0(Ω) with induced norm ‖·‖. Suppose that 𝒜 is strongly monotone and Lipschitz continuous, i.e., there exist 0 < α ≤ L such that, for all u, v, w ∈ H^1_0(Ω),

α ‖u − v‖² ≤ ⟨𝒜u − 𝒜v, u − v⟩ and ⟨𝒜u − 𝒜v, w⟩ ≤ L ‖u − v‖ ‖w‖.

Under these assumptions, the Zarantonello theorem <cit.> (or main theorem on strongly monotone operators <cit.>) yields existence and uniqueness of the solution u^⋆ ∈ H^1_0(Ω) to (<ref>). For 𝒯_H ∈ 𝕋 and 𝒳_H ⊆ H^1_0(Ω) from (<ref>), it also applies to the discrete setting and yields existence and uniqueness of the discrete solution u_H^⋆ ∈ 𝒳_H to

⟨𝒜u_H^⋆, v_H⟩ = F(v_H) for all v_H ∈ 𝒳_H,

which is quasi-optimal in the sense of the Céa lemma (<ref>).

As already discussed in Section <ref>, the proof of the Zarantonello theorem relies on the Banach fixed-point theorem: For 0 < δ < 2α/L², define Φ_H(δ; ·): 𝒳_H → 𝒳_H via

a(Φ_H(δ; u_H), v_H) = a(u_H, v_H) + δ [ F(v_H) − ⟨𝒜u_H, v_H⟩ ] for all u_H, v_H ∈ 𝒳_H.

Since a(·,·) is a scalar product, Φ_H(δ; u_H) ∈ 𝒳_H is well-defined. Moreover, for 0 < δ < 2α/L² and 0 < q_sym ≔ [1 − δ(2α − δL²)]^{1/2} < 1, this mapping is a contraction, i.e.,

‖u_H^⋆ − Φ_H(δ; u_H)‖ ≤ q_sym ‖u_H^⋆ − u_H‖ for all u_H ∈ 𝒳_H;

see also <cit.>. Analogously to Section <ref>, the variational formulation (<ref>) leads to a linear SPD system for which we employ a uniformly contractive solver (<ref>). Overall, we note that, for the nonlinear PDE (<ref>), the natural AFEM loop consists of

* discretization via a conforming triangulation 𝒯_ℓ (leading to the non-computable solution u_ℓ^⋆ to the discrete nonlinear system (<ref>)),
* iterative linearization (giving rise to the solution u_ℓ^{k,⋆} = Φ_ℓ(δ; u_ℓ^{k−1,j̲}) of the large-scale discrete SPD system (<ref>) obtained by linearizing (<ref>) in u_ℓ^{k−1,j̲}),
* and an algebraic solver (leading to computable approximations u_ℓ^{k,j} ≈ u_ℓ^{k,⋆}).

Thus, the natural AFEM algorithm takes the form of Algorithm <ref> in Section <ref>.

So far, the only work analyzing convergence of such a full adaptive loop for the numerical solution of (<ref>) is <cit.>, which uses the Zarantonello approach (<ref>) for linearization and a preconditioned CG method with optimal additive Schwarz preconditioner for solving the arising SPD systems. Importantly and contrary to the present work, the adaptivity parameters θ, λ_sym, and λ_alg in <cit.> must be sufficiently small to guarantee full linear convergence and optimal complexity, while even plain convergence for arbitrary θ, λ_sym, and λ_alg is left open. Instead, the present work proves full R-linear convergence at least for arbitrary θ and λ_sym and the milder constraint (<ref>) on λ_alg.

To apply the analysis from Section <ref>, it only remains to check the validity of Proposition <ref> and Proposition <ref>. The following result provides the analogue of Proposition <ref> for scalar nonlinearities.
Note that, first, the same assumptions are made in <cit.> and, second, only the proof of stability (<ref>) (going back to <cit.>) is restricted to scalar nonlinearities and lowest-order discretizations, i.e., p = 1 in (<ref>). Suppose that A(∇ u) = a(|∇ u|^2) ∇ u, where a ∈ C^1(_≥0) satisfiesα (t-s) ≤ a(t^2) t - a(s^2) s ≤L/3(t-s)for allt ≥ s ≥ 0.Then, there holds (<ref>) for v := ∇ v_L^2(Ω) and the standard residual error estimator (<ref>) for lowest-order elements p = 1 (with A∇ v_ understood as A(∇ v_) and b = 0 = c) satisfies stability (<ref>), reduction (<ref>), reliability (<ref>), discrete reliability (<ref>), and quasi-monotonicity (<ref>) from Proposition <ref>. Under the same assumptions as in Proposition <ref>, quasi-orthogonality (<ref>) is satisfied. For the convenience of the reader, we include a sketch of the proof. Under the assumptions of Proposition <ref> and for any sequence of nested finite-dimensional subspaces _ℓ⊆_ℓ+1⊂ H^1_0(Ω), the corresponding Galerkin solutions u_ℓ^⋆∈_ℓ to (<ref>) satisfy quasi-orthogonality (<ref>) with δ = 1 and = L/α, i.e.,∑_ℓ' = ℓ^∞ u_ℓ'+1^⋆ - u_ℓ'^⋆^2≤L/α u^⋆ - u_ℓ^2for all ℓ∈_0. One can prove that the energyE(v) 1/2∫_Ω∫_0^|∇ v(x)|^2 a(t) ṭx̣ - F(v)for all v ∈ H^1_0(Ω)is Gâteaux-differentiable with Ẹ(v) = Å v - F. Then, elementary calculus (see, e.g., <cit.> or <cit.>) yields the equivalenceα/2 u_^⋆- v_^2≤ E(v_) - E(u_^⋆)≤L/2 u_^⋆- v_^2for all _∈ and allv_∈_.In particular, we see that u_^⋆ is the unique minimizer toE(u_^⋆) = min_v_∈_ E(v_),and (<ref>)–(<ref>) also hold for u^⋆ and H^1_0(Ω) replacing u_^⋆ and _, respectively.From this and the telescopic sum, we infer thatα/2∑_ℓ' = ℓ^ℓ+N u_ℓ'+1^⋆ -u_ℓ'^⋆^2eq:nonlinear:equivalence≤∑_ℓ' = ℓ^ℓ+N[E(u_ℓ'^⋆) - E(u_ℓ'+1^⋆) ]= E(u_ℓ^⋆) - E(u_ℓ+N+1^⋆)eq:nonlinear:minimization≤ E(u_ℓ^⋆) - E(u^⋆)eq:nonlinear:equivalence≤L/2 u^⋆ -u_ℓ^⋆^2for all ℓ, N ∈_0.Since the right-hand side is independent of N, we conclude the proof for N →∞. Thus, Theorem <ref> applies also to the nonlinear PDE (<ref>) under the assumptions on the nonlinearity from Proposition <ref>. Unlike <cit.>, we can guarantee full linear convergence (<ref>) for arbitrary θ, arbitrary , and a weaker constraint (<ref>) on . Optimal complexity then follows along the lines of <cit.> if the adaptivity parameters are sufficiently small.§ NUMERICAL EXPERIMENT The following numerical experiment employs the Matlab software package MooAFEM from <cit.>.[The experiments accompanying this paper will be provided under <https://www.tuwien.at/mg/asc/praetorius/software/mooafem>.] On the L-shaped domain Ω = (-1,1)^2 ∖ [0, 1) × [-1, 0), we consider-Δ u^⋆ + b·∇ u^⋆ + u^⋆ = 1 in Ωand u^⋆ = 0on ∂Ωwithb(x) = x;see Figure <ref> for the geometry and some adaptively generated meshes.Optimality of Algorithm <ref> with respect to large solver-stopping parametersand . We choose δ = 0.5, θ = 0.3, and the polynomial degree p=2. Figure <ref> presents the convergence rates for fixed = 0.7 and several symmetrization parameters ∈0.1, 0.3, 0.5, 0.7, 0.9. We observe that Algorithm <ref> obtains the optimal convergence rate -1 with respect to the number of degrees of freedom and the cumulative computational time for any selection of . Moreover, the same holds true for fixed = 0.7 and any choice of the algebraic solver parameter ∈0.1, 0.3, 0.5, 0.7, 0.9 as depicted in Figure <ref>. 
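Since the experiments vary the marking parameter θ, the following minimal Python sketch illustrates one standard greedy realization of Dörfler marking: select a smallest-cardinality set of elements whose squared indicators make up at least the fraction θ of the squared estimator. It is a generic textbook sketch for illustration, not the MooAFEM routine used for the reported results.

import numpy as np

def doerfler_mark(eta_T, theta):
    """eta_T: elementwise refinement indicators; returns marked element indices."""
    order = np.argsort(eta_T**2)[::-1]            # largest indicators first
    csum = np.cumsum(eta_T[order]**2)
    k = int(np.searchsorted(csum, theta * csum[-1])) + 1
    return order[:k]

marked = doerfler_mark(np.array([0.5, 0.1, 0.4, 0.2]), theta=0.3)
print(marked)   # [0]: already 0.5**2 = 0.25 >= 0.3 * 0.46

The marked elements are then refined by newest-vertex bisection, which closes one loop of the adaptive algorithm.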
Table <ref> illustrates the weighted cumulative computational time of Algorithm <ref> and shows that a smaller marking parameter θ = 0.3 in combination with larger solver-stoppingparametersandis favorable. Furthermore, Figure <ref> shows that Algorithm <ref> guarantees optimal convergence rates -p/2 for several polynomial degrees p with fixed δ = 0.5, marking parameter θ = 0.3, and moderate== 0.7.Optimality of Algorithm <ref> with respect to large marking parameter θ. We choose the polynomial degree p = 2, δ = 0.5, and solver-stopping parameters == 0.7. Figure <ref> shows that also for moderate marking parameters θ, Algorithm <ref> guarantees optimal convergence rates with respect to the number of degrees of freedom and the cumulative computational time. Moreover, we observe that a very small as well as a large choice of θ lead to a worse performance. § PROOFS OF LEMMA <REF>, LEMMA <REF>, AND LEMMA <REF> The proof is split into four steps.Step 1. We consider the perturbed contraction of (a_ℓ)_ℓ∈_0 from (<ref>). By induction on n, we see with the empty sum understood (as usual) as zero thata_ℓ+n≤ q^n a_ℓ + ∑_j=1^n q^n-j b_ℓ+j-1for all ℓ, n ∈_0.From this and the geometric series, we infer thata_ℓ+n≤q^n a_ℓ + C_1 ( ∑_j=1^n q^n-j) a_ℓ≤(q^n + C_1/1-q) a_ℓC_3 a_ℓfor all ℓ, n ∈_0. Step 2. Next, we note that the perturbed contraction of (a_ℓ)_ℓ∈_0 from (<ref>) and the Young inequality with sufficiently small ε > 0 ensure0 < κ(1+ε) q^2 < 1and a_ℓ+1^2 eq:summability:criterion≤κa_ℓ^2 + (1+ε^-1) b_ℓ^2 for all ℓ∈_0.This and the summability of (b_ℓ)_ℓ∈_0 from (<ref>) guarantee∑_ℓ' = ℓ+1^ℓ+N a_ℓ'^2 = ∑_ℓ' = ℓ^ℓ+N-1 a_ℓ'+1^2 *eq:summability:criterion≤κ∑_ℓ' = ℓ^ℓ+N-1 a_ℓ'^2 + (1+^-1) C_2 N^1-δ a_ℓ^2.Rearranging the estimate, we arrive at∑_ℓ' = ℓ^ℓ+N a_ℓ'^2 ≤ 1 + κ + (1+^-1) C_2 N^1-δ/1-κa_ℓ^2D_N a_ℓ^2 for all ℓ, N ∈_0,where we note that 1 ≤ D_N ≃ N^1-δ as N →∞. In the following, we prove that this already guarantees that (<ref>) holds with an N-independent constant (instead of the constant D_N growing with N); see also Lemma <ref> below.Step 3. We show by mathematical induction on n that (<ref>) impliesa_ℓ+n^2 ≤(∏_j=1^n (1-D_j^-1))∑_ℓ'=ℓ^ℓ+n a_ℓ'^2 for all ℓ, n ∈_0.Note that (<ref>) holds for all ℓ∈_0 and n = 0 (with the empty product interpreted as 1). Hence, we may suppose that (<ref>) holds for all ℓ∈_0 and up to n ∈_0. Then,a_ℓ+(n+1)^2=a_(ℓ+1)+n^2*eq2:feischl≤(∏_j=1^n (1-D_j^-1))∑_ℓ'=ℓ+1^(ℓ+1)+n a_ℓ'^2= (∏_j=1^n (1-D_j^-1))( ∑_ℓ'=ℓ^ℓ+(n+1) a_ℓ'^2 - a_ℓ^2 )*eq1:feischl≤(∏_j=1^n (1-D_j^-1))( ∑_ℓ'=ℓ^ℓ+(n+1) a_ℓ'^2 - D_n+1^-1∑_ℓ'=ℓ^ℓ+(n+1) a_ℓ'^2)= (∏_j=1^n+1 (1-D_j^-1))∑_ℓ'=ℓ^ℓ+(n+1) a_ℓ'^2.This concludes the proof of (<ref>).Step 4. From (<ref>)–(<ref>), we infer thata_ℓ+n^2≤(∏_j=1^n (1-D_j^-1)) D_n a_ℓ^2for all ℓ, n ∈.Note thatM_n := log[ (∏_j=1^n (1-D_j^-1)) D_n ]= ∑_j=1^n log(1-D_j^-1) + log D_n.With 1 - x ≤exp(-x) for all 0 < x < 1, it follows for x = D_j^-1 thatM_n ≤log D_n - ∑_j=1^n D_j^-1≃ (1-δ)log n - ∑_j=1^n 1/j^1-δ -∞,since log n ≤∑_j=1^n (1/j). Fix n_0 ∈ such that M_n_0 < 0. It follows from (<ref>) thata_ℓ+in_0^2≤ q_0^i a_ℓ^2for all ℓ, i ∈_0,where 0 < q_0 := exp(M_n_0) < 1.Let ℓ∈_0. For general n ∈_0, choose i,j ∈ with j < n_0 such that n = i n_0 + j. With (<ref>) and quasi-monotonicity (<ref>) of a_ℓ, we derivea_ℓ+n^2=a_(ℓ + j) + in_0^2eq4:feischl≤q_0^i a_ℓ + j^2eq:a_ell:quasi-mon≤C_3^2 q_0^i a_ℓ^2=C_3^2 q_0^-j/n_0 q_0^n/n_0 a_ℓ^2≤(C_3^2/q_0) (q_0^1/n_0)^n a_ℓ^2.This completes the proof of (<ref>) with C_3^2/q_0 > 0 and 0 <q_0^1/n_0 < 1. 
First, observe that (a_ℓ)_ℓ∈_0 is R-linearly convergent in the sense of (ii) if and only if(a_ℓ^m)_ℓ∈_0 is R-linearly convergent in the sense of (ii) withreplaced by ^m andreplaced by ^m. Therefore, we may restrict to m = 1. The implication (ii) ⟹ (i) follows from the geometric series, i.e., ∑_ℓ' = ℓ+1^∞ a_ℓ'(ii)≤ C a_ℓ∑_ℓ' = ℓ+1^∞ q^ℓ'-ℓ = Cq/1-qa_ℓfor all ℓ∈_0. Conversely, (i) yields that (C_1^-1 + 1) ∑_ℓ' = ℓ+1^∞ a_ℓ'(i)≤ a_ℓ + ∑_ℓ' = ℓ+1^∞ a_ℓ' = ∑_ℓ' = ℓ^∞ a_ℓ'for all ℓ∈_0. Inductively, this leads to a_ℓ+n≤∑_ℓ' = ℓ+n^∞ a_ℓ'(i)≤1/(C_1^-1+1)^n ∑_ℓ' = ℓ^∞ a_ℓ'(i)≤1 + C_1/ (C_1^-1+1)^na_ℓfor all ℓ, n ∈_0. This proves (ii) with 1 + C_1 and (C_1^-1+1)^-1. Let (ℓ,k,) ∈ with k ≥ 1. Contraction of the Zarantonello iteration (<ref>) proves u_ℓ^⋆ - u_ℓ^k,≤u_ℓ^⋆ - u_ℓ^k,⋆ + u_ℓ^k,⋆ - u_ℓ^k,eq:double:zarantonello≤ u_ℓ^⋆ - u_ℓ^k-1, + u_ℓ^k,⋆ - u_ℓ^k,. From the termination criterion of the algebraic solver (<ref>), we see that u_ℓ^k,⋆ - u_ℓ^k,≤/1- u_ℓ^k, - u_ℓ^k,-1eq:double:termination:alg≤/1- [ η_ℓ(u_ℓ^k,) + u_ℓ^k, - u_ℓ^k-1,]. With the termination criterion of the inexact Zarantonello iteration (<ref>), it follows that u_ℓ^k,⋆ - u_ℓ^k,eq:double:termination:sym≤2/1- η_ℓ(u_ℓ^k,)fork = [ℓ], u_ℓ^k, - u_ℓ^k-1, for1 ≤ k < [ℓ]. For k = [ℓ], the preceding estimates prove (<ref>). For k < [ℓ], it follows that u_ℓ^⋆ - u_ℓ^k,≤ u_ℓ^⋆ - u_ℓ^k-1, + 2/1-[ u_ℓ^⋆ - u_ℓ^k, + u_ℓ^⋆ - u_ℓ^k-1,]. Provided that 2/1-< 1, this proves u_ℓ^⋆ - u_ℓ^k,≤ + 2/1- /1 - 2/1-u_ℓ^⋆ - u_ℓ^k-1,eq:double:assumption:lambda= u_ℓ^⋆ - u_ℓ^k-1,, which is (<ref>). This concludes the proof. | http://arxiv.org/abs/2311.15738v1 | {
"authors": [
"Philipp Bringmann",
"Michael Feischl",
"Ani Miraci",
"Dirk Praetorius",
"Julian Streitberger"
],
"categories": [
"math.NA",
"cs.NA",
"41A25, 65N15, 65N30, 65N50, 65Y20"
],
"primary_category": "math.NA",
"published": "20231127114321",
"title": "On full linear convergence and optimal complexity of adaptive FEM with inexact solver"
} |
Department of Electronics and Telecommunications (DET), Politecnico di Torino, 10129 Turin, Italy
Microwave Department, IMT Atlantique, 29285 Brest, France
CORRESPONDING AUTHOR: F. P. ANDRIULLI (e-mail: [email protected]).
This work was supported in part by the European Research Council (ERC) through the European Union’s Horizon 2020 Research and Innovation Programme under Grant 724846 (Project 321) and in part by the H2020-MSCA-ITN-EID project COMPETE GA No 955476.

In this work, we introduce new integral formulations based on the convolution quadrature method for the time-domain modeling of perfectly electrically conducting scatterers that overcome some of the most critical issues of the standard schemes based on the electric field integral equation (EFIE). The standard time-domain EFIE-based approaches typically yield matrices that become increasingly ill-conditioned as the time step or the mesh discretization density increases, and they suffer from the well-known DC instability. This work presents solutions to these issues that are based both on new Calderón strategies and on quasi-Helmholtz projector regularizations. In addition, to ensure an efficient computation of the marching-on-in-time, the proposed schemes leverage properties of the Z-transform—involved in the convolution quadrature discretization scheme—when computing the stabilized operators. The two resulting formulations compare favorably with standard, well-established schemes. The properties and practical relevance of these new formulations will be showcased through relevant numerical examples that include canonical geometries and more complex structures.

Boundary element method, Calderón preconditioning, computational electromagnetics, convolution quadrature method, EFIE, time-domain integral equations

Calderón Strategies for the Convolution Quadrature Time Domain Electric Field Integral Equation
PIERRICK CORDEL1, Student Member, IEEE, ALEXANDRE DÉLY1, ADRIEN MERLINI2, Senior Member, IEEE, AND FRANCESCO P. ANDRIULLI1, Fellow, IEEE
January 14, 2024

§ INTRODUCTION

Time domain boundary integral equations (TDIEs) are widely used in the simulation of transient electromagnetic fields scattered by perfectly electrically conducting (PEC) objects <cit.>. Like their frequency-domain counterparts, the spatial discretization of these equations is often performed via the boundary element method. The time discretization, however, can be tackled in different ways. A popular approach leverages time basis functions either within a Marching-On-in-Time (MOT) scheme <cit.> or within a Marching-On-in-Order procedure <cit.>. The convolution quadrature (CQ) approach <cit.> is an attractive alternative to these methods in which only space basis functions are explicitly defined. The approach has been applied to several equations in elastodynamics and acoustics <cit.> and then in electromagnetics <cit.>. It provides an efficient time-stepping scheme with matrices derived from the space-discretized Laplace-domain operators. Another advantage of the CQ method is the use of implicit schemes (e.g., Runge-Kutta methods <cit.>), which are generally more stable and typically allow for a better accuracy control of the solution over time <cit.>.
However, the CQ time stepping scheme is solved via a computationally expensive MOT algorithm. Nowadays, fast solvers can reach quasi-linear complexity in time and space <cit.>. Usually, this fast technology uses iterative solvers, resulting in an overall computational cost that is proportional to the number of iterations which is low for well-conditioned systems. Working with well-conditioned matrices is therefore essential to reduce the computational cost of the solution process, in addition to being necessary to obtain accurate results <cit.>.Lamentably, however, the CQ discretized time domain electric field integral equation (EFIE) is plagued by several drawbacks. Indeed, the matrices resulting from the discretization of the EFIE are known to become ill-conditioned for large time steps or at dense mesh discretizations: the condition number of the MOT matrices grows quadratically with the time step and with the inverse of the average mesh edge length. These two phenomena are the CQ counterparts of what for standard MOT schemes are known as the large time step breakdown <cit.> and the dense discretization breakdown (or h-refinement breakdown) <cit.>. Another challenge in handling the CQ EFIE is that it involves operators whose definitions include a time integration. To avoid dealing with this integral, the time-differentiated counterpart of this formulation is often used <cit.>, but this differentiation is subject to a source of instability in the form of spurious linear currents living in the nullspace of the operator that degrades the solution <cit.>; this phenomenon is known as the direct current instability (DC instability). In this work, we propose new Calderón-preconditioned and quasi-Helmholtz regularized formulations free from the limitations mentioned above. The Calderón identities they rely on are already a well-established preconditioning approach in both the frequency domain <cit.> and time domain discretized by the Galerkin method <cit.> that is extended in this work to convolution quadrature discretizations and complemented with quasi-Helmholtz regularization. The contribution of this paper is twofold: (i) we present a first approach to tackle the regularization of the EFIE operator and to address the DC instability resulting in a new operator that presents no nullspace on simply connected geometries, thus stabilizing the solution, and (ii) we build upon this first regularized form of the EFIE to obtain an equation that, at the price of a higher number of matrix-vector products, is stable in the case of multiply connected geometries.This article is structured as follows: the time domain formulations of interest are summarized in Section <ref> along with the convolution quadrature method and the boundary element method for spatial discretization; in Section <ref>, the new Calderón and projectors-based preconditioning strategies are presented; finally, Section <ref> presents the numerical studies that confirm the effectiveness of the different approaches before concluding. Preliminary studies pertaining to this work were presented in the conference contribution <cit.>. § BACKGROUND AND NOTATIONS§.§ Time Domain Integral FormulationsIn this work, we consider the problem of time-domain scattering by a perfectly electrically conducting object that resides in free space. 
The object is illuminated by an electromagnetic field ( e^inc,h^inc) ( r,t) which induces a surface current density j_Γ on its boundary Γ that is the solution of the time-domain EFIEη_0 T ( j_Γ) ( r,t)= - ( r ) × e^inc( r,t).Here,is the outpointing normal to Γ and η_0 is the characteristic impedance of the background. The electric field operator T includes the contributions of the vector and scalar potentials, respectively denoted T_s and T_h <cit.>T( f ) ( r,t )=-1/c_0∂/∂ t T_s(f )( r,t ) + c_0∫_-∞^t T_h( f)( r,t')dt' , T_s( f)( r,t)=( r ) ×∬_ r' ∈Γ( G_ r*_t f)( r', t)dS' , T_h( f)( r,t)=( r) ×∇∬_ r' ∈Γ( G_ r*_t∇'· f)( r',t)dS' ,where c_0 is the speed of light in the background. The temporal convolution product *_t and the temporal Green function G are defined as(f*_t g)(t)=∫_-∞^∞f(τ)g(t-τ)dτ,G_ r( r',t)=δ(t-| r- r'|/c_0)/4π | r- r'| ,with δ the time Dirac delta.§.§ Marching-On-In-Time with Convolution Quadratures Let Θ be a placeholder for any of the integral operators previously presented and let k( r,t) be a causal function (∀ t<0, k( r,t)= 0). With these notations, most time domain integral equation take the formΘ( f_c)( r,t)= k( r,t) ,where f_c is the solution to be solved for. The first step of the Marching-On-in-Time solution scheme with convolution quadratures is to apply the boundary element method <cit.> as spatial discretization. Assuming separability between the space and time variables, the unknown function f_c is expanded as a linear combination of N_s spatial basis functions such that <cit.>f_c( r,t)≈∑_n=1^N_s[f⃗_Γ]_n(t)f^src_n( r) ,where { f^src} are the source spatial basis functions and their associated time coefficients are stored in the vector f⃗_Γ(t). Then, the equation (<ref>) is tested by the spatial basis functions { f^tst} leading to the time-dependent matrix system(θ *f⃗)(t)=k⃗(t),where for n and m in 1,N_s, we have,(θ *f⃗)_m(t) =∑_n=1^N_s( θ_m,n*_t f_n)(t) ,[ θ ]_m,n(t) = ⟨ f^tst_m,Θ(δ f^src_n)⟩_Γ(t) ,[ k⃗_Γ ]_m(t) =⟨ f^tst_m, k⟩_Γ(t) ,with ⟨ f, g⟩_Γ=∬_ r ∈Γ f( r) · g( r) dS .The second step is the discretization in time with the convolution quadrature method <cit.>.First, the system (<ref>) must be transformed in the Laplace domain <cit.> and we denote θ_ℒ, f⃗_ℒ and k⃗_ℒ the Laplace transform of θ, f⃗_Γ and k⃗_Γ. The system (<ref>) is then equivalent toℒ{t ↦(θ*f⃗)(t) }(s)=θ_ℒ(s) f⃗_ℒ(s)=k⃗_ℒ(s) .Then, a representation on the Z-domain discretizes the system (<ref>). The Laplace parameter, in the operator θ_ℒ, is replaced by the matrix-valued parameters_cq(z)=1/Δ t( A+1⃗_pb⃗^T/z-1)^-1 ,diagonalizable for the considered z values, with the following eigenvalue decomposition s_cq(z)= Q(z) Λ(z) Q^-1(z). The elements of the diagonal matrix Λ(z) are the eigenvalues of s_cq(z) and the columns of Q(z) are their associated eigenvectors. The time step size is denoted Δ t and the matrix A and the vectors 1⃗_p, c⃗, b⃗ of size p are determined by the implicit scheme used <cit.>. 
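For concreteness, the following Python sketch evaluates the matrix-valued CQ parameter s_cq(z) and its eigendecomposition, assuming the standard Butcher data (A, b, c) of the 2-stage Radau IIA method, which is the implicit scheme employed in the numerical results below; the chosen values of z and Δt are arbitrary and purely illustrative.

import numpy as np

# 2-stage Radau IIA Butcher tableau (assumed standard values)
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
c = np.array([1/3, 1.0])
ones = np.ones((2, 1))

def s_cq(z, dt):
    """Matrix-valued CQ parameter s_cq(z) = (1/dt) (A + 1_p b^T / (z - 1))^{-1}."""
    return np.linalg.inv(A + (ones @ b[None, :]) / (z - 1)) / dt

z, dt = 0.9 * np.exp(1j * 0.3), 1e-2
S = s_cq(z, dt)
lam, Q = np.linalg.eig(S)     # s_cq(z) = Q diag(lam) Q^{-1}
print(np.round(lam, 2))

The Laplace-domain operator is then evaluated at the eigenvalues collected in lam and mapped back with Q, which is how the Z-domain operator is assembled in the following step.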
The discretized Z-domain operator θ_𝒵 is defined such that for any for k and l in 1,p and for m and n in 1,N_s[ θ_𝒵 ]_Φ_m,k, Φ_n,l(z)=[ Q(z)θ_m,n^Λ(z)Q^-1(z) ]_k,l,where Φ_α,β=p(α-1)+β is an appropriate indexing function and θ_m,n^Λ(z) is a diagonal matrix defined as[ θ_m,n^Λ ]_k,k(z)=[ θ_ℒ(Λ_k,k) ]_m,n(z) .The vectors f⃗_𝒵 and k⃗_𝒵 are the Z-domain representation <cit.> of the respective time-discretized vectors[ k⃗_i ]_Φ_m,k=[ k⃗_Γ ]_m( Δ t (i+[c⃗]_k )),[ f⃗_i ]_Φ_n,k=[ f⃗_Γ ]_n ( Δ t (i+[c⃗]_k )),yielding to the following discretization of the system (<ref>)θ_𝒵(z) f⃗_𝒵(z) =k⃗_𝒵(z).Finally, the equivalent time discretized system of (<ref>) is obtained by applying the inverse Z-transform on (<ref>)𝒵^-1{θ_𝒵(z)f⃗_𝒵(z)}_i=[ Z_θ*_sf]_i= ∑_j=0^i Z_θ,j f_i-j =k⃗_i,where Z_θ,i=𝒵^-1{z →θ_𝒵(z)}_i are the time domain interaction matrices and *_s is the sequence convolution product. The system sequence (<ref>) is rewritten in the following Marching-On-In-Time that can be solved for f⃗_i Z_θ,0f⃗_i =k⃗_i-∑_j=1^i Z_θ,jf⃗_i-j. §.§ Classic Integral Marching-On-In-TimesIn this subsection, the discretization scheme described above will be applied to the specific case of the EFIE. The Rao-Wilton-Glisson (RWG) basis functions { f^rwg_n }_N_s <cit.> are used to expand the current density asj_Γ( r,t) ≈∑_n=1^N_s [ j⃗_Γ]_n (t) f^rwg_n( r),where the current coefficients are gathered in an unknown vector function of time j⃗_Γ (t). The EFIE is then tested with rotated RWG basis functions {× f^rwg_n}_N_s, leading to the following Marching-On-In-TimeZ_ T, 0j⃗_i =-η_0^-1e⃗^inc_i - ∑_j=1^i Z_ T, jj⃗_i- j ,where the vector sequences j⃗_i and e⃗^inc_i, and the time domain interaction matrices Z_ T,i are respectively generated by the convolution quadrature method described in Subsection <ref> of j⃗_Γ (t) and the following space-discretized vector and matrix[ e⃗^inc_Γ ]_m(t) = ⟨× f^rwg_m, × e^inc⟩_Γ( t ), [ T ]_m,n(t)=⟨× f^rwg_m, T (δ f^rwg_n) ⟩_Γ( t ).However, the time integral contribution of this operator T involves an unbounded number of non-vanishing matrices Z_ T,i (<ref>), leading to a prohibitive quadratic complexity with the number of time steps <cit.>. Historically, the time differentiated formulation is preferred because it is not afflicted by this drawback <cit.>, and leads to the following MOT <cit.>Z_Ṫ,0j⃗_i =-η_0^-1ė⃗̇^inc_i - ∑_j=1^N_conv Z_Ṫ,jj⃗_i- j ,where ė⃗̇^inc_n and Ż_n are respectively the time domain vectors and interaction matrices generated by the convolution quadrature method described in Subsection <ref> of [ ė⃗̇^inc_Γ ]_m(t)=⟨× f^rwg_m, ×∂/∂ t e^inc⟩_Γ , [ Ṫ ]_m,n(t)=⟨× f^rwg_m,∂/∂ t T (δ f^rwg_n) ⟩_Γ. §.§ EFIE DC instabilityThe electric field integral operator suffers from the DC instability: since for all constant-in-time solenoidal current j_cs we have ∇·j⃗_cs=0 and ∂/∂ t j_cs=0, we can conclude thatT ( j_cs)=0.Therefore, the EFIE solution is only determined up to a constant solenoidal current <cit.>. Its time differentiated counterpart inherits these drawbacks and amplifies the DC instability by further adding linear in time solenoidal currents to the nullspace. This latter deteriorates the late time simulation in which spurious currents grow exponentially in the operator nullspace <cit.>. This behaviour is predicted by the polynomial eigenvalues analysis of the MOT: a stable MOT has all its eigenvalues inside the unit circle in the complex plane while a MOT that suffers from the DC instability has some eigenvalues that cluster around 1 <cit.>. 
The eigenvalue distribution of the time differentiated EFIE MOT is represented in fig:sphere_eigenvalues in which such a cluster is clearly visible around 1. §.§ Quasi-Helmholtz projectorsPrevious works show that the electric field integral equation discretized in space using RWG basis functions can be stabilized by the quasi-Helmholtz projectors <cit.>. These projectors are formed from the star-to-rwg transformation matrix, denoted Σ and defined in <cit.>, which maps the discretized current into the non-solenoidal contributions <cit.>. The quasi-Helmholtz projectors on the non-solenoidal space and its complementary (the one on solenoidal/quasi-harmonic space) are respectivelyP^Σ=(Σ(Σ^TΣ)^+Σ^T)and P^Λ H= I- P^Σ,where ^+ denotes the Moore–Penrose pseudoinverse <cit.>.§ CALDERÓN PRECONDITIONING OF THE EFIE EFIE formulations based on the quasi-Helmholtz projectors cure the DC instability and the conditioning at large time steps. However, these formulations still suffer from a dense discretization breakdown. One appealing strategy could be to apply standard preconditioning schemes to the Marching-On-In-Time matrices directly to cure the matrix conditioning issues. However, the solution currents would remain unaltered and subject to DC instabilities as the original scheme. This is why the preconditioning has to be performed on the continuous equations to build a new operator without nullspace and then discretize the formulation to obtain a well-conditioned scheme. In this part, Calderón preconditioning strategies are proposed to cure the DC instability and the conditioning breakdowns.§.§ A Convolution Quadrature Calderón time-domain EFIECalderón preconditioners are based on the Calderón identity <cit.> T^2= T∘ T= - I /4+K^2,where ∘ is the composition operator, I is the identity operator and K is defined asK( f)( r,t)=( r) ×∇×∬_ r' ∈Γ( G_ r*_tf) ( r',t)dS' .The operator - I /4+K^2 is a well-behaved operator for increasing discretization densities. As a consequence, with a proper discretization, T^2 is well-conditioned for large time steps and dense meshes for simply connected structures <cit.>. In practice, a discretization of T^2 is used in which the right EFIE operator is discretized with RWG basis functions and the left preconditioner is discretized with Buffa-Christiansen (BC) basis functions ( f^bc_n)_N_s <cit.>[ 𝕋 ]_m,n(t)=⟨× f^bc_m, T (δ f^bc_n)⟩_Γ (t) .The preconditioning leads to the following space-discretized formulation(𝕋 G_m^-1 * ( T * j⃗_Γ)) (t )=-η_0^-1(𝕋 G_m^-1 *e⃗^inc_Γ)(t ),where the matrix G_m is the mixed gram matrix linking the the two discretizations[ G_m ]_m,n=⟨× f^rwg_m, f^bc_n⟩_Γ .Then, the convolution quadrature leads to the MOT scheme[ Z_𝕋 G_m^-1*_sZ_ T ]_0j⃗_i =-η_0^-1[ Z_𝕋 G_m^-1 *_se⃗^inc ]_i-∑_j=1^i[ Z_𝕋 G_m^-1*_sZ_ T ]_jj⃗_i- j ,where Z_𝕋,i are the time domain interaction matrices of the space-discretized operator 𝕋(t) (<ref>) generated by the convolution quadrature method described in Subsection <ref>, the sequence convolution quadrature product *_s is the discretization of the space-discretized temporal convolution product * and G_m= G_m ⊗ I_p .The Kronecker product ⊗ I_p is required to match with the convolution quadrature method where I_p is the identity matrix of size p. Unfortunately, the MOT in (<ref>), involves operators with temporal integrations leading to a time consuming MOT. A more favorable scheme can be obtained by noticing the following commutative properties∂/∂ t T_s/h( f)( r,t)= T_s/h(∂/∂ t f)( r,t),∫_-∞^t T_s/h( f)( r,t') dt'= T_s/h(∫_-∞^. 
f)( r,t),and the cancellation property T_h^2=0 <cit.>, we haveT^2 =(-1/c_0∂/∂ t T_s + c_0∫_-∞^· T_h)∘(-1/c_0∂/∂ t T_s + c_0∫_-∞^· T_h)=c_0^-2∂^2/∂ t^2 T_s^2- T_s∘ T_h- T_h∘ T_s .This is advantageous since, besides not involving any time integration contribution, the operator c_0^-2∂^2/∂ t^2 T_s^2-T_s T_h- T_h T_s has no nullspace for simply connected geometries leading to a DC-stable discretization (“dottrick TDEFIE”) <cit.>. By extending the previous notations on T_s and T_h[ T_s/h ]_m,n(t)=⟨× f^rwg_m, T_s/h(δ f^rwg_n) ⟩_Γ( t ), [ 𝕋_s/h ]_m,n(t)=⟨× f^bc_m, T_s/h(δ f^bc_n)⟩_Γ( t ),the proposed space-discretized operator is denoted T_c=c_0^-2∂^2/∂ t^2𝕋_s G_m^-1 * T_s -𝕋_s G_m^-1*T_h-𝕋_h G_m^-1*T_s ,yielding to the following space-discretized formulation( T_c* j⃗_Γ)(t)=-η_0^-1(𝕋 G_m^-1* e⃗^inc_Γ)(t).The right-hand side operator still involves a temporal integration in (<ref>). However, given the commutative properties (<ref>), the temporal integral on the scalar potential T_h is evaluated with the incident fieldc_0 (∫_-∞^·𝕋_h G_m^-1* e⃗^inc_Γ)(t)= c_0 (𝕋_h G_m^-1* e⃗^prim_Γ)(t) ,where[ e⃗^prim_Γ ]_m(t) = ⟨× f^rwg_m, ∫_-∞^t× e^inc(t') dt'⟩_Γ ,[ e⃗^prim_i ]_Φ_m,k= [ e⃗^prim_Γ ]_m (Δ t (i+[c⃗]_k )) .Therefore, the previous MOT is rewritten as Z_T_c,0j⃗_i=- η_0^-1∑_j=0^N_conv(Z_Ṫ_s,je⃗^inc_i- j+ Z_T_h,je⃗^prim_i- j)-∑_j=1^N_conv Z_T_c,jj⃗_i- j ,where the time domain interaction matrices Z_T_c,i, Z_Ṫ_s,i and Z_T_h,i are respectively generated by the convolution quadrature method described in Subsection <ref> of the space-discretized operators T_c, Ṫ_s=-c_0^-1∂/∂ t𝕋_s G_m^-1 and T_h=c_0𝕋_h G_m^-1.As in (<ref>), the interaction matrix sequence Z_T_c,i involves computationally expensive sequence convolution products *_s, however, the convolution quadrature method allows the substitution of the sequence convolution products *_s by matrix multiplications in the Z-domain, that can be evaluated at a lesser cost. By extending the notations of the convolution quadrature described in Subsection <ref> on the space-discretized operators T_s/h and 𝕋_s/h and by using the Z-domain properties, the matrix sequence Z_T_c,i is equal toZ_T_c,i= c_0^-2𝒵^-1{ s_cq^2𝕋_s,𝒵 G_mT_s,𝒵}-𝒵^-1{𝕋_h,𝒵 G_mT_s,𝒵 +𝕋_s,𝒵 G_mT_h,𝒵},where the matrix s_cq(z)= I_N_s⊗ s_cq(z) is the Z-discretization of the time derivative and I_N_s is the identity matrix of size N_s. The formulation (<ref>) is a good candidate to obtain a stable current solution, however, the proposed operator T^2 has static nullspaces for multiply connected geometries <cit.>. As such, (<ref>) is still subject to DC-instabilities for multiply connected geometries. The polynomial eigenvalue analysis on a sphere and on a torus (respectively fig:cal_eigen_sphere and fig:cal_eigen_torus) illustrate this phenomenon. While all the eigenvalues cluster in 0 in the spherical case, an analysis on a torus highlights four eigenvalues of this MOT clustered around 1, corresponding to the four constant regime solutions <cit.>.§.§ A Convolution Quadrature Calderón time-domain EFIE regularized with quasi-Helmholtz projectorsThe previous Calderón formulation is perfectly adapted to simply connected geometries, ensuring that the new operator has no nullspace. However, on multiply-connected geometries, the harmonic subspace is non-empty, thus enlarging the nullspace of T which is a new source of DC instability in (<ref>)<cit.>. The discretized EFIE operators can be regularized using the quasi-Helmholtz projectors to address this issue. 
Because the regularization is based on projectors, it does not compromise the h-refinement regularizing effect of the original Calderón scheme. The regularized EFIE space-discretized operators are T^reg=(c_0/a∫_-∞^· P^Λ H+ P^Σ) * T*( P^Λ H+a/c_0∂/∂ t P^Σ),𝕋^reg =(c_0/a∫_-∞^·ℙ^Σ H+ℙ^Λ)*𝕋*(ℙ^Σ H+a/c_0∂/∂ tℙ^Λ),where the BC quasi-Helmholtz projectors are defined with the loop-to-RWG transformation matrix Λ <cit.> such thatℙ^Λ=(Λ(Λ^TΛ)^+Λ^T) andℙ^Σ H= I-ℙ^Λ ,and where the scaling c_0/a with a defined as the maximal diameter of the scatterer, ensures consistent dimensionality and helps reduce the conditioning further. This application of the projectors is equivalent to differentiating the non-solenoidal contributions on the left and in time integrating the solenoidal contributions on the right of each EFIE operator <cit.>. The regularized Calderón operator in the space-discretized time domain isT^reg_c= 𝕋^reg G_m^-1 *T^reg.At first sight, (<ref>) and (<ref>) seem to involve unpractical temporal integrals. However, the problematic contributions in the regularized EFIE operator 𝐓^reg will vanish, since P^Λ H T_h= T_h P^Λ H = 0 and P^Σ T_h P^Σ = T_h, and we have T^reg =-a^-1 P^Λ H T_s P^Λ H - c_0^-1 P^Λ H∂/∂ t T_s P^Σ -c_0^-1 P^Σ∂/∂ t T_s P^Λ H -a/c_0^2 P^Σ∂^2/∂ t^2 T_s P^Σ+ a T_h.Similarly, the dual EFIE operator simplifies as𝕋^reg = -a^-1ℙ^Σ H𝕋_sℙ^Σ H - c_0^-1ℙ^Σ H∂/∂ t𝕋_sℙ^Λ -c_0^-1ℙ^Λ∂/∂ t𝕋_sℙ^Σ H -a/c_0^2ℙ^Λ∂^2/∂ t^2𝕋_sℙ^Λ + a𝕋_h.The space-discretized formulation of the regularized Calderón EFIE isT^reg_c* y⃗_Γ =-η_0^-1𝕋^reg G_m^-1* R* e⃗^inc_Γ,whereR =c_0/a∫_-∞^· P^Λ H+ P^Σandj⃗_Γ= ( P^Λ H+a/c_0∂/∂ t P^Σ)y⃗_Γ.The right-hand side of (<ref>) has a temporal integral which is directly evaluated on the incident field to avoid quadratic complexity with the number of time steps. This leads to the following MOT Z_T_c^reg,0 y_i =-η_0^-1∑_j=0^N_conv Z_T^reg,j( P^Σe⃗^inc_i- j +c_0/a P^Λ He⃗^prim_i- j)-∑_j=1^N_conv Z_T^reg_c,jy⃗_i- j ,where P^Λ H= P^Λ H⊗ I_p, P^Σ= P^Σ⊗ I_p, the vector sequence y⃗_n is the time discretization of y⃗_Γ (t), and the time domain interaction matrices Z_T^reg_c,i and Z_T^reg,i are respectively generated by the convolution quadrature method described in Subsection <ref> of the space-discretized operators T^reg_c and T^reg=𝕋^reg G_m^-1. Once the computation of y⃗_n is done, the current j⃗ still has to be evaluated. The convolution quadrature discretization of the time derivative is Z_∂/∂ t,i =𝒵^-1{z→ s_cq(z)}_i= Δ t^-1 A^-1δ_i,0-Δ t^-1 A^-11⃗_pb⃗^T A^-1δ_i,1,where δ_i,0 is the Kronecker delta, A^-1=I_N_s⊗ A^-1 and A^-11⃗_pb⃗^T A^-1 =I_N_s⊗ A^-11⃗_pb⃗^T A^-1 <cit.>. Therefore, the current solution is obtained asj⃗_i =[( P^Λ H+a/c_0 Z_∂/∂ t P^Σ)*_sy⃗]_i= P^Λ Hy⃗_i + a/Δ t c_0 P^Σ[A^-1y⃗_i -A^-11⃗_pb⃗^T A^-1y⃗_i-1].§ RESULTS To test the effectiveness of the proposed schemes, simulations have been realized with different geometries, excited by a Gaussian pulse plane wavee^inc( r,t)=A_0 exp(-(t-k̂· r/c)^2/2σ^2) p̂,h^inc( r,t)=1/η_0k̂× e^inc( r,t) ,where σ=6/(2π f_bw), p̂=x̂, k̂=-ẑ, A_0=1 and f_bw is the frequency bandwidth. Notice that this frequency bandwidth is proportional to the maximal frequencies excited by the pulse Gaussian. In this work, the Runge-Kutta Radau IIA method of stage 2 is used for all simulations <cit.>. 
The time step size Δ t has been chosen equal to (ψ f_max)^-1 where f_max is the upper frequency of the excitation and ψ=3 is an oversampling parameter.

§.§ Canonical geometries

To illustrate the key properties of the newly proposed schemes, namely the Calderón preconditioned formulation (<ref>) and the Calderón preconditioned formulation regularized by the quasi-Helmholtz projectors (<ref>), they are compared in the modeling of canonical scatterers to other formulations present in the literature: the EFIE MOT scheme (MOT EFIE) (<ref>), the time-differentiated one (MOT TD-EFIE) (<ref>), and the formulation regularized by the quasi-Helmholtz projectors (MOT qH-EFIE) <cit.>. In this subsection, the excitation parameters have been chosen not to excite the first resonant mode of the geometries.

The first set of numerical tests was performed on the unit sphere, discretized with 270 RWG functions. The intensity of the resulting currents at one point of the geometry is shown in fig:sphere_current_desity. As expected, the time differentiated and the non-differentiated EFIEs are the only formulations suffering from DC instabilities in this simply connected scenario. In addition, the condition number of the matrices to invert for each MOT is presented in fig:cond_number_dt and fig:cond_number_ns with respect to the time step size Δ t and the mesh density h^-1. The standard EFIE formulation and its time-differentiated counterpart suffer from ill-conditioning at large time steps while the stabilized ones remain well-conditioned. Instead, only the Calderón preconditioned formulations presented in this work remain well-conditioned for dense discretizations (fig:cond_number_ns).

The second set of tests focused on the stability of the different formulations when modeling multiply connected scatterers, here a torus with inner radius of 0.2 and outer radius of 0.5. The current densities at the probe point are shown in fig:torus_current_desity and the conditioning studies are represented in fig:torus_cond_number_ns and fig:torus_cond_number_dt. In line with the polynomial eigenvalue analysis of the non-regularized Calderón EFIE formulation (fig:cal_eigen_torus), the formulation (<ref>) suffers from DC instability. Moreover, the static nullspace of the continuous operator deteriorates the condition number of the matrix to invert for large time steps (fig:torus_cond_number_dt). However, the newly proposed regularized Calderón formulation (<ref>) is stable and remains well-conditioned at large time steps and dense meshes for this geometry.

§.§ Non-canonical geometries

The final set of numerical tests is dedicated to more complex test structures (fig:all_geometry). In addition, instead of a direct solver we rely on the iterative solver GMRES with different relative target tolerances ϵ <cit.>. For practical reasons, the maximum number of iterations has been limited to 200 without restart. All the structures have been illuminated by a Gaussian pulse plane wave with f_bw=1.6. Table <ref> shows the condition number and the number of iterations needed. The Calderón preconditioned formulation (CP-EFIE) (<ref>) and the Calderón preconditioned formulation regularized by the quasi-Helmholtz projectors (qH-CP-EFIE) (<ref>) are the only formulations requiring fewer than 200 iterations to converge at each time step. As expected, the condition number of the CP-EFIE is high on non-simply connected geometries because of the presence of the operator DC nullspace on these structures.
Even if, in this case, the number of iterations remains low, the solution is corrupted by the DC instability arising from this nullspace. This phenomenon is absent for the qH-CP-EFIE formulation, which is free from high conditioning or DC instability and yields stable solutions up to the target precision of the iterative solver.

§ CONCLUSION

In this paper, novel Calderón preconditioned techniques have been presented for the time domain electric field integral equation solved with Marching-On-in-Time with convolution quadratures. These formulations eliminate the DC instability for simply and multiply connected geometries. In addition, they cure the h-refinement and large time step breakdowns and generate well-conditioned Marching-On-in-Time systems. Finally, numerical results on complex geometries showcased the effectiveness of the proposed schemes.

PIERRICK CORDEL (Student Member, IEEE) received the M.Sc. Eng. degree from the École Nationale Supérieure des Mines de Nancy, France, in 2021, as well as an M.Sc. in Industrial Mathematics from the University of Luxembourg, Luxembourg, the same year. Currently, he is pursuing a Ph.D. at the Politecnico di Torino, Italy. His research focuses on time domain integral equations discretized with the convolution quadrature method.

ALEXANDRE DÉLY received the M.Sc. Eng. degree from the École Nationale Supérieure des Télécommunications de Bretagne (Télécom Bretagne), France, in 2015. He received the Ph.D. degree from the École Nationale Supérieure Mines-Télécom Atlantique (IMT Atlantique), France, and from the University of Nottingham, United Kingdom, in 2019. His research focuses on the preconditioned and fast solution of boundary element method frequency domain and time domain integral equations. He is currently working at Thales, Élancourt, France, on electromagnetic modeling and numerical simulations.

ADRIEN MERLINI (S’16–M’19) received the M.Sc. Eng. degree from the École Nationale Supérieure des Télécommunications de Bretagne (Télécom Bretagne), France, in 2015 and received the Ph.D. degree from the École Nationale Supérieure Mines-Télécom Atlantique (IMT Atlantique), France, in 2019. From 2018 to 2019, he was a visiting Ph.D. student at the Politecnico di Torino, Italy, which he then joined as a Research Associate. Since 2019, he has been an Associate Professor with the Microwave Department, IMT Atlantique. His research interests include preconditioning and acceleration of integral equation solvers for electromagnetic simulations and their application in brain imaging. Dr. Merlini received 2 Young Scientist Awards at the URSI GASS 2020 and the EMTS 2023 meetings. In addition, he has co-authored a paper that received the 2022 ICEAA-IEEE APWC Best Paper Award, 5 that received honorable mentions (URSI/IEEE–APS 2021, 2022, and 2023) and 3 best paper finalists (URSI GASS 2020, URSI/IEEE–APS 2021 and 2022). He is a member of IEEE-HKN, the IEEE Antennas and Propagation Society, URSI France, and of the Lab-STICC laboratory. He is currently serving as an Associate Editor for the IEEE Antennas and Propagation Magazine.

FRANCESCO P. ANDRIULLI (S’05–M’09–SM’11–F’23) received the Laurea in electrical engineering from the Politecnico di Torino, Italy, in 2004, the M.Sc. in electrical engineering and computer science from the University of Illinois at Chicago in 2004, and the Ph.D. in electrical engineering from the University of Michigan at Ann Arbor in 2008.
From 2008 to 2010 he was a Research Associate with the Politecnico di Torino. From 2010 to 2017 he was an Associate Professor (2010-2014) and then Full Professor with the École Nationale Supérieure Mines-Télécom Atlantique (IMT Atlantique, previously ENST Bretagne), Brest, France. Since 2017 he has been a Full Professor with the Politecnico di Torino, Turin, Italy. His research interests are in computational electromagnetics with focus on frequency- and time-domain integral equation solvers, well-conditioned formulations, fast solvers, low-frequency electromagnetic analyses, and modeling techniques for antennas, wireless components, microwave circuits, and biomedical applications with a special focus on brain imaging.Prof. Andriulli received several best paper awards at conferences and symposia (URSI NA 2007, IEEE AP-S 2008, ICEAA IEEE-APWC 2015) also in co-authorship with his students and collaborators (ICEAA IEEE-APWC 2021, EMTS 2016, URSI-DE Meeting 2014, ICEAA 2009) with whom received also a second prize conference paper (URSI GASS 2014), a third prize conference paper (IEEE–APS 2018), seven honorable mention conference papers (ICEAA 2011, URSI/IEEE–APS 2013, 4 in URSI/IEEE–APS 2022, URSI/IEEE–APS 2023) and other three finalist conference papers (URSI/IEEE-APS 2012, URSI/IEEE-APS 2007, URSI/IEEE-APS 2006, URSI/IEEE–APS 2022). A Fellow of the IEEE, he is also the recipient of the 2014 IEEE AP-S Donald G. Dudley Jr. Undergraduate Teaching Award, of the triennium 2014-2016 URSI Issac Koga Gold Medal, and of the 2015 L. B. Felsen Award for Excellence in Electrodynamics. Prof. Andriulli is a member of Eta Kappa Nu, Tau Beta Pi, Phi Kappa Phi, and of the International Union of Radio Science (URSI). He is the Editorin-Chief of the IEEE Antennas and Propagation Magazine and he serves as a Track Editor for the IEEE Transactions on Antennas and Propagation, and as an Associate Editor for URSI Radio Science Letters. He served as an Associate Editor for the IEEE Transactions on Antennas and Propagation, IEEE Antennas and Wireless Propagation Letters, IEEE Access and IET-MAP. | http://arxiv.org/abs/2311.15592v1 | {
"authors": [
"Pierrick Cordel",
"Alexandre Dély",
"Adrien Merlini",
"Francesco P. Andriulli"
],
"categories": [
"math.NA",
"cs.NA"
],
"primary_category": "math.NA",
"published": "20231127073446",
"title": "Calderón Strategies for the Convolution Quadrature Time Domain Electric Field Integral Equation"
} |
Low-lying geodesics on the modular surface and necklaces

Ara Basmajian
AB is supported by PSC CUNY Award 65245-00 53
[Ara Basmajian] The Graduate Center, CUNY, 365 Fifth Ave., N.Y., N.Y., 10016, USA, and Hunter College, CUNY, 695 Park Ave., N.Y., N.Y., 10065, USA
[email protected]

Mingkun Liu
[Mingkun Liu] DMATH, FSTM, University of Luxembourg, Esch-sur-Alzette, Luxembourg
[email protected]
ML is supported by the Luxembourg National Research Fund OPEN grant O19/13865598

2020 Mathematics Subject Classification: Primary 20F69, 32G15, 57K20; Secondary 20H10, 53C22

The m-thick part of the modular surface X is the smallest compact subsurface of X with horocycle boundary containing all the closed geodesics which wind around the cusp at most m times. The m-thick parts form a compact exhaustion of X. We are interested in the geodesics that lie in the m-thick part (so-called m low-lying geodesics). We produce a complete asymptotic expansion for the number of m low-lying geodesics of length equal to 2n in the modular surface. In particular, we obtain the asymptotic growth rate of the m low-lying geodesics in terms of their word length using the natural generators of the modular group. After establishing a correspondence between this counting problem and the problem of counting necklaces with n beads, we perform a careful singularity analysis on the associated generating function of the sequence.

January 14, 2024

§ INTRODUCTION

Consider the (2,3,∞) triangle group; that is, the modular surface X ≔ ℍ/PSL(2,ℤ). There are many interesting classes of closed geodesics on X, including so-called reciprocal geodesics, ones that stay in a fixed compact subsurface of X, and ones that exclusively leave a compact subsurface, as well as of course the set of all closed geodesics on X. Our interest in this paper is in the growth rate of the closed geodesics that stay in a fixed compact subsurface of X. To be precise, let 𝒞 ⊂ X be the cusp with its natural horocycle boundary ∂𝒞 of length one in X. For m a positive integer, we define the m-thick part of X, denoted X_m, to be the smallest compact subsurface of X with horocycle boundary which contains all the closed geodesics which wind around the cusp at most m times. The m-thick parts form a compact exhaustion of X. We are interested in the geodesics that lie in the m-thick part (so-called m low-lying geodesics). See Figure <ref>.

Using the fact that ℤ_2 ∗ ℤ_3 is isomorphic to the modular group, we use word length with respect to the natural generators of the factors in ℤ_2 ∗ ℤ_3 to study the growth of the m low-lying geodesics. In <cit.>, it was shown that

|{γ a closed geodesic in X_m : |γ| = 2n}| ≳ 2^{n(1−1/m)−1}/n as n → ∞.

Our main result in this paper, Theorem <ref>, is a complete asymptotic analysis of the number of m low-lying geodesics as well as the primitive ones. Let α_m be the unique positive real solution of the equation x^m − x^{m−1} − ⋯ − x − 1 = 0.
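As a quick numerical sanity check (not part of the paper's argument), α_m can be located with a few lines of Python; the helper below, including the root-filtering tolerance, is an illustrative sketch.

import numpy as np

def alpha(m):
    """Unique positive real root of x^m - x^(m-1) - ... - x - 1."""
    roots = np.roots([1.0] + [-1.0] * m)
    return max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

for m in (2, 3, 4, 5, 10):
    print(m, round(alpha(m), 6))
# m = 2 gives the golden ratio 1.618034; the values increase toward 2 as m grows.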
Let us mention that this equation is the characteristic equation of the generalized Fibonacci sequence, and α_m is a Pisot number, namely, it is a real algebraic integer strictly greater than 1, with all its Galois conjugates having modulus strictly less than 1.

For any k ∈ ℤ_{≥1} and m ∈ ℤ_{≥3}, when n → ∞, we have

|{γ a closed geodesic in X_m : |γ| = 2n}| = (1/n) ∑_{d | n, d ≤ k} φ(d) α_m^{n/d} + O(α_m^{n/(k+1)}/n),

where φ is Euler's totient function; and for primitive geodesics, we have

|{γ a primitive closed geodesic in X_m : |γ| = 2n}| = (1/n) ∑_{d | n, d ≤ k} μ(d) α_m^{n/d} + O(α_m^{n/(k+1)}/n),

where μ stands for the Möbius function. Some corollaries follow.

For any m ∈ ℤ_{≥3}, we have |{γ a closed geodesic in X_m : |γ| = 2n}| ∼ α_m^n/n as n → ∞. The same conclusion holds for primitive closed geodesics.

The asymptotic growth rate of the primitive closed geodesics in X_m converges to the asymptotic growth rate of the primitive closed geodesics on the modular orbifold, as m → ∞.

There is an extensive literature on cusp excursions by a random geodesic on a hyperbolic surface, including the papers <cit.>. In particular, the papers <cit.> investigate the relation between depth, return time, and other invariants in various contexts, including connections to number theory. Papers involving growth of particular families of geodesics include <cit.>. Geometric length growth of low-lying geodesics having the arithmetic condition of being fundamental is studied in <cit.>. For hyperbolic geometry we use <cit.> as a basic reference, and for combinatorial analysis <cit.>.

§.§ Acknowledgements

We would like to thank Naomi Bredon, Christian El Emam, Alex Nolte, and Nathaniel Sagman for useful discussions. This project started during the first author's visit to the University of Luxembourg. It is a pleasure to thank the University of Luxembourg and Hugo Parlier for their support and hospitality during that time.

§ NECKLACES AND LOW-LYING GEODESICS

In this section we establish a correspondence between low-lying geodesics and necklaces. A (binary) necklace is made of a circular pattern of beads, each bead being one of two colors, say red or black, with the constraint that the number of consecutive adjacent beads of the same color is at most m. Two necklaces are considered the same if they differ by a cyclic rotation. It is not difficult to see that the set of all m low-lying geodesics of length 2n is in one-to-one correspondence with the set of necklaces made of n beads with the longest run of the same color being at most m. We denote the number of such necklaces of length n by A_m(n), and by B_m(n) the number of primitive ones.

In this paper, we give a complete asymptotic analysis of the number of m low-lying geodesics. Call the generator of the ℤ_2 factor a, and the generator of the ℤ_3 factor b. Using the generators {a, b, b^{−1}}, we define the length of a closed geodesic γ on X, denoted |γ|, to be the minimal word length in the conjugacy class of a lift in PSL(2,ℤ). Any hyperbolic element can be conjugated into the normal form ab^{ϵ_1}ab^{ϵ_2}⋯ab^{ϵ_n}, and the normal forms realize the minimum word length; hence the word length of a closed geodesic is necessarily even. Noting that the normal form is a product of parabolic elements (which represent going around the cusp), we are able to conclude how deep a geodesic wanders into the cusp by looking at the exponents of these parabolics. Namely, staying in the compact subsurface X_m is equivalent to not having a run of ϵ_i's longer than m.
See for example Lemma 7.1 in <cit.> for a precise statement. Hence there is a one-to-one correspondence between low-lying closed geodesics of length 2n and conjugacy classes of so called low-lying words in the modular group. Namely, a lift of a low-lying geodesic corresponds to a conjugacy class of hyperbolic elements in (2, ). Now such a conjugacy class has a normal form representative ab^ϵ_1 ab^ϵ_2⋯ ab^ϵ_n, where the number of consecutive ϵ_i of the same sign is at most m. Assigning the color black to +1, and the color red to -1, we get a mapping (ϵ_1, …, ϵ_n) ⟼ ab^ϵ_1ab^ϵ_2⋯ ab^ϵ_nwhich respects cyclic equivalence between the domain and range.We have shown, For any m ∈_≥ 1, we haveA_m(n) = |{γ a closed geodesic in X_m ≥ 3 : |γ| = 2n }|,and B_m(n) = |{γ a primitive closed geodesic in X_m ≥ 3 : |γ| = 2n }|.§ GENERATING FUNCTIONSLet m ∈_≥ 2. For technical reasons, instead of working with A_m(n) (resp. B_m(n)), we consider Â_m(n) (resp. B̂_m(n)), the number of m-necklaces (resp. primitive m-necklaces) of size n that are nonconstant. We haveÂ_m(n) =A_m(n) - 2if1 ≤ n ≤ m, A_m(n)ifn > m, andB̂_m(n) =B_m(n) - 2 = 0ifn = 1, B_m(n)ifn > 1.In particular, Â_m(n) (resp. B̂_m(n)) and A_m(n) (resp. B_m(n)) have the same asymptotic behavior.We encode Â_m(n) and B̂_m(n) into two generating functions _m(z) and _m(z), respectively, defined by_m(z) = ∑_n=1^∞Â_m(n) z^n, _m(z) = ∑_n=1^∞B̂_m(n) z^n.The numbers Â_m(n) and B̂_m(n) can be recovered by extracting the coefficient of z^n in the functions _m(z) and _m(z) respectively:Â_m(n) = [z^n]_m(z), B̂_m(n) = [z^n]_m(z)where [z^n] stands for the coefficient extractor.DefineF_m(z) 2 z^2 (1 - z^m) (m z^m+1 -(m+1)z^m + 1)/(z-1) (z^m+1 - 1) (z^m+1 - 2z + 1).We have formulas_m(z) = ∑_i=1^∞μ(i) ∫_0^1 F_m((xz)^i)/xdx.andNec_m(z) = ∑_j=1^∞∑_i=1^∞μ(i) ∫_0^1F_m((x z^j)^i)/xdxwhere y = (xz^j)^i and μ stands for the Möbius function.We proceed following <cit.>. We say a binary sequence is a non-constant m-sequence if it represents a non-constantm-necklace, and we denote by W_m(n) the number of non-constant m-sequences of size n. For example, W_2(4) = 6, and the six non-constant 2-sequences are: (0,0,1,1), (0,1,0,1), (0,1,1,0), (1,0,0,1), (1,0,1,0), and (1,1,0,0). Note that (0,0,1,0), (0,1,0,0), (1,0,1,1), and (1,1,0,1) are not non-constant 2-sequences. We denote by _m(z) the generating function of W_m(n), namely_m(z) ∑_n=1^∞ W_m(n) z^n.We have formula_m(z) = F_m(z)Every non-constant sequence can be decomposed into blocks of the same color such that adjacent blocks have different colors. For example, (0,1,1,0,1,1,1,0,0) has 5 blocks of sizes 1, 2, 1, 3, 2, respectively. For non-constant m-sequences, they have a minimum of 2 blocks, with each block size being bounded by m. If the first and the last block have the same color, the sum of their size is at most m. 
In binary sequences, the color of the first block determines the colors of the following ones, and the first and last block share the same color if and only if the number of blocks is odd.Therefore, the generating function of non-constant m-sequences with even number of blocks is2∑_k=1^∞ (z + z^2 + ⋯ + z^m)^2k = 2∑_k=1^∞( z 1-z^m/1-z)^2k = -2z^2 (1 - z^m)^2/(z^m+1 - 1) (z^m+1 - 2z + 1),and the generating function of non-constant m-sequences with odd number of blocks is2∑_k=0^∞ (z + z^2 + ⋯ + z^m)^2k+1 (z^2 + 2z^3 + ⋯ + (m-1) z^m) = 2z^2 ((m-1)z^m - mz^m-1 + 1)/(1-z)^2∑_k=0^∞( z 1-z^m/1-z)^2k+1= -2z^3 (z^m - 1) ((m-1)z^m - mz^m-1 + 1)/(z-1)(z^m+1-1) (z^m+1 - 2z + 1).Summing the two functions yields the lemma.Let _m(z) denote the generating function of primitive m-sequences. By construction,_m(z) = ∑_k=1^∞_m(z^k).Note that this does not hold if constant sequences are included. Now it follows from the Möbius inversion formula and Lemma <ref> that_m(z) = ∑_k=1^∞μ(k)_m(z^k) = ∑_k=1^∞μ(k) F_m(z^k)where μ is the Möbius function.Let (z) be the generating function of primitive m-necklaces. We introduce an auxiliary variable u, and consider the bivariate generating functions (z,u) (zu) and (z,u) (zu). Observe that the primitive cycles of length k and primitive sequences of length k are in 1-to-k correspondence. Thus, in terms of generating functions, (z, u) can be obtained by applying the transformation u^k ↦ u^k/k to (z, u), and equivalently,_m(z, u) = ∫_0^u _m(z, x)dx/x = ∑_i=1^∞μ(i) ∫_0^u F_m((xz)^i)dx/x.Therefore, we obtain the formula_m(z, u) = ∑_j=1^∞_m(z^j, u^j) = ∑_j=1^∞∑_i=1^∞μ(i) ∫_0^u^j F_m((xz^j)^i)dx/x.Now the result follows by setting u=1. § SINGULARITY ANALYSISIn this section, we perform the singularity analysis to track down the asymptotic behavior of A_m(n) and B_m(n). Roughly speaking, rather than consider _m(z) as a formal power series, we view it as a complex function. Then, the asymptotic behavior of the coefficients Â_m(n) of _m(z) can be understood by analysing the behavior of _m(z) near its singularities. For details about this method, we recommend <cit.>. Our approach is slightly different from the standard one introduced in <cit.> as we take into account not only the principle singularities but also the minor singularities to obtain the complete asymptotic expansion (<ref>) and (<ref>). To prepare for it, let us begin with the following lemma which follows directly from results by Miles <cit.>, Miller <cit.>, and <cit.>.For any m ∈_≥ 2, apart from the trivial solution z=1, the polynomial equation1 - 2z + z^m+1 = 0has exactly one positive real solution r_m which is simple and lies in the interval (1/2, 1). All other solutions have modulus strictly greater than 1. Factoring (z-1) from 1-2z+z^m+1 yields1-2z+z^m+1 = (z-1)(z^m + z^m-1 + ⋯ + z - 1).After the change of variables z = 1/y, the equation z^m + z^m-1 + ⋯ + z - 1 = 0 becomesy^m - y^m-1 - ⋯ - 1 = 0.Now by <cit.>, <cit.>, or <cit.>, equation (<ref>) has a unique positive real solution in the interval (1, 2), and all other solutions have modulus strictly greater than 1. The lemma follows.For m=2,3, we have exact formulasr_2 = √(5)-1/2,r_3 = √(3√(33)+17)/3 - 2/3√(3√(33) + 17) - 1/3.The exact expression of r_4 is already too lengthy to present here. Numerically,r_2 ≈ 0.6180,r_3 ≈ 0.5437,r_4 ≈ 0.5188,r_5 ≈ 0.5087,r_7 ≈ 0.5020,r_10≈ 0.5002.Asymptotically, when m →∞, we have r_m → 1/2. Notation. 
In order to simplify our notation, in the remainder of this section wewrite r for r_m, α for α_m, andfix m ∈_≥ 2.The idea is the following: we write _m(z) (and _m(z)) as the sum of two functions. The first is a standard function that accounts for the main terms in the asymptotic expansion (<ref>). The other function corresponds to the error term in (<ref>), and all its singularities are located farther from the origin than those of the first function.Recall that we denote by μ the Möbius function, and by φEuler's totient function. For any m ∈_≥ 2, we have formulas_m(z) = ∑_k=1^∞φ(k)/klog1/1 - z^k / r + h(z)and_m(z) = ∑_k=1^∞μ(k)/klog1/1 - z^k / r + h_1(z)whereh(z) ∑_j=1^∞ h_j(z),h_j(z) ∑_i=1^∞μ(i) ∫_0^1 x^i-1z^ijf_m((x z^j)^i) dx,andf_m(z) 2z(1-z^m)(mz^m+1 - (m+1)z^m + 1) + (z-1)^2 (z^m+1 - 1)ψ_m(z)/(z - 1) (z^m+1 - 1) (z^m+1 - 2z + 1)where ψ_m is defined to be the unique polynomial such thatt^m+1 - 2t + 1 = (t-1)(t^m + ⋯ + t - 1) = (1-t) · (r-t) ·ψ_m(t).By Proposition <ref>, we have_m(z) = ∑_j=1^∞∑_i=1^∞μ(i) ∫_0^1F_m((x z^j)^i)/xdxwhereF_m(z) 2 z^2 (1 - z^m) (m z^m+1 -(m+1)z^m + 1)/(z-1) (z^m+1 - 1) (z^m+1 - 2z + 1).Write y(x z^j)^i. Using the identitylog1/1 - z^ij/r = ∫_0^1 i x^i-1 z^ij/r - x^i z^ijdx, = ∫_0^1 1/xy/r-y,the integral that appears in the right-hand side of (<ref>) can be written as∫_0^1 F_m(y)dx/x = 1/ilog1/1-z^ij/r + ∫_0^1 ( F_m(y)/x - 1/xy/r - y) dx.This can be further rewritten as∫_0^1 F_m(y)dx/x = 1/ilog1/1-z^ij/r+ ∫_0^1 x^i-1z^ij 2y(1-y^m)(my^m+1 - (m+1)y^m + 1) + (y-1)^2 (y^m+1 - 1)ψ_m(y)/(y - 1) (y^m+1 - 1) (y^m+1 - 2y + 1)dx = 1/ilog1/1-z^ij/r + ∫_0^1 x^i-1z^ijf_m((x z^j)^i) dx.Thus, the generating function _m(z) equals_m(z) = ∑_j=1^∞∑_i=1^∞μ(i)/ilog1/1 - z^ij / r + ∑_j=1^∞ h_j(z)= ∑_k=1^∞φ(k)/klog1/1 - z^k / r + h(z)where we have used, in the second equality, the identity (see <cit.> for a proof)∑_d | kμ(d)/d = φ(k)/kand the fact that summing over the indices i,j is the same as summing over k and its divisors. Similarly, the generating function (<ref>) can be rewritten as_m(z) = ∑_k=1^∞μ(k)/klog1/1 - z^k / r + h_1(z)as claimed. The following lemma shows that the rational function f_m(z) is holomorphic in |z| < 1. Let r = r_m be as in Lemma <ref>. We have2r(1-r^m)(mr^m+1 - (m+1) r^m + 1) + (r-1)^2 (r^m+1 - 1)ψ_m(r) = 0.In other words, the numerator of the rational function f_m has r as a root. Since r^m+1 - 2r + 1 = 0, we haver(1-r^m) = 1-r,r^m+1 - 1 = 2(r-1).Thus, it suffices to show thatm r^m+1 - (m+1) r^m + 1 - (r-1)^2ψ_m(r) = 0.Note that for any polynomial P, if x_0 is a simple root of P and P(x) = (x-x_0) Q(x), then Q(x_0) = P'(x_0). By Lemma <ref>, r is a simple root of 1- 2x + x^m+1, and hence(r-1)ψ_m(r) = (m+1) r^m - 2.On the other hand, again by the fact that r^m+1 - 2r + 1 = 0, we havem r^m+1 - (m+1) r^m + 1 = m r^m+1 - (m+1) r^m + 1 + r^m+1 - 2r + 1 = ((m+1)r^m - 2) (r-1),and the lemma follows. For any k ∈_≥ 1, the functions defined byg_k(z) ∑_i=k^∞φ(i)/ilog1/1-z^i/r, g̃_k(z) ∑_i=k^∞μ(i)/ilog1/1-z^i/r,are holomorphic in the open disk { z ∈ : |z| < r^1/k}. Let a ∈ (0,1). For any z ∈ such that |z| ≤ (ar)^1/k, we have |z|^i/r ≤ a for any i ≥ k, and hence|g_k(z)| ≤∑_i=k^∞log1/1-|z|^i/r≤( 1/arlog1/1-a) ∑_i=k^∞ |z|^i ≤( 1/arlog1/1-a) ∑_i=k^∞ (ar)^i/k < ∞where we have used the fact that (since -log(1-x)/x is increasing on (0,1)) for any 0 < x ≤ x_0 < 1,log1/1-x≤x/x_0log1/1-x_0.Therefore, it follows from the Weierstrass M-test that the partial sum under consideration converges uniformly in the compact disk { z ∈ : |z| ≤ (ar)^1/k}, where a ∈ (0,1) is arbitrary. 
This implies the assertion for g_k. The same argument applies to g̃_k as well. The functions h_1(z) and h(z) defined in (<ref>) are holomorphic in the open unit disk { z ∈ : |z| < 1 }. It follows from Lemma <ref> and <ref> that the rational function f_m(y) defined by (<ref>) is holomorphic in the open disk |y|<1. Thus, for any a ∈ (0,1), there exists M = M(a) such that |f_m(z)| ≤ M for any z ∈ with |z| ≤ a, and therefore, for any j ∈_≥ 1 and |z| ≤ a,|h_j(z)| ≤ M ∑_i=1^∞ |z|^ij∫_0^1 x^i-1dx = M ∑_i=1^∞|z|^ij/i = M log1/1-|z|^j < ∞.In particular, it follows that h_1 is holomorphic in |z| < 1. Now by (<ref>), for any |z| ≤ a, we have|h(z)| ≤∑_j=1^∞ |h_j(z)| ≤ M ∑_j=1^∞log1/1-|z|^j≤M/alog1/1-a∑_j=1^∞ |z|^j < ∞which shows that h(z) is holomorphic in the open disk |z|<1. Now, we are ready to prove our main result on necklace counting.First observe that Proposition <ref> allows us to translate the low-lying geodesic counting problem to counting necklaces. By Lemma <ref>, for any k ∈_≥ 1, the generating function (<ref>) can be written as_m(z) = φ(1) log1/1-z/r + ⋯ + φ(k)/klog1/1-z^k/r+ φ(k+1)/k+1log1/1-z^k+1/r + g_k+2(z) + h(z)where g_k+2 is defined by (<ref>). For any integer 1 ≤ d ≤ k+1, we have[z^n]log1/1-z^d/r = (1/r)^n/d/n/d = dα^n/d/n ifd | n 0otherwisewhere α 1/r. On the other hand, by Lemma <ref> and <ref>, the function g_k+2 + h is holomorphic in { z ∈ : |z| < r^1/(k+2)} which contains the disk { z ∈ : |z| ≤ r^1/(k+1)}. Thus it follows from Cauchy's inequality that[z^n] (g_k+2(z) + h(z)) ≤ρ^-n sup_|z| = ρ |g_k+2(z) + h(z)| = (ρ^-n) = (α^n/(k+1)/n)for any r^1/(k+1) < ρ < r^1/(k+2). This completes the proof of (<ref>).To prove (<ref>), it is sufficient to write the generating function (<ref>) as_m(z) = μ(1) log1/1-z/r + ⋯ + μ(k)/klog1/1-z^k/r+ μ(k+1)/k+1log1/1-z^k+1/r + g̃_k+2(z) + h_1(z).Then the rest of the proof is similar to that of (<ref>). § NUMERICAL COMPUTATIONSNecklaces of small sizes, say n ≤ 25, can be generated and counted using the SageMath package https://doc.sagemath.org/html/en/reference/combinat/sage/combinat/necklace.html. For larger sizes, the CPU time required noticeably increases, as the necklace count grows exponentially. However, we can still efficiently compute A_m(n) and B_m(n) using Proposition <ref>. We have verified that the two methods agree for n ≤ 25.When n is prime, A_m(n) = B_m(n), and both can be well approximated by α_m^n/n. For instance, A_3(83) = 111384745483589787826 and α_3^83/83 ≈ 111384745483589787826.0120. 2 [BaVa22]BasmajianValli1 Ara Basmajian and Robert Suzzi Valli, Combinatorial growth in the modular group. Groups Geom. Dyn. 16 (2022), no. 2, 683–703.[BaVa23]BasmajianValli2 Ara Basmajian and Robert Suzzi Valli, Counting cusp excursions of reciprocal geodesics. Submitted.[BPPZ14]BocaPasolPopaZaharescu Florin Boca, Vicenţiu Paşol, Alexandru Popa, and Alexandru Zaharescu, Pair correlation of angles between reciprocal geodesics on the modular surface.Algebra Number Theory 8 (2014), no. 4, 999–1035.[BoKo17]BourgainKontorovich1 Jean Bourgain and Alex Kontorovich, Beyond expansion II: low-lying fundamental geodesics. J. Eur. Math. Soc. (JEMS) 19 (2017), no. 5, 1331–1359.[BoKo19]BourgainKontorovich2 Bourgain and Alex Kontorovich, Beyond expansion III: Reciprocal geodesics. Duke Math J. 168 (2019), no. 18, 3413–3435.[Bus10]Buser Peter Buser, Geometry and spectra of compact Riemann surfaces. Birkhäuser Boston, Ltd., Boston, MA, 2010, xvi+454 pp.[Erl19]Erlandsson Viveka Erlandsson, A remark on the word length in surface groups. Trans. Amer. Math. 
Soc.372 (2019), no. 1, 441–445.[EPS20]ErlandssonParlierSouto Viveka Erlandsson, Hugo Parlier, and Juan Souto, Counting curves, and the stable length of currents. J. Eur. Math. Soc. (JEMS) 22 (2020), no. 6, 1675–1702.[ErSo16]ErlandssonSouto1 Viveka Erlandsson and Juan Souto, Counting curves in hyperbolic surfaces. Geom. Funct. Anal. 26 (2016), no 3, 729–777.[ErSo]ErlandssonSouto2 Viveka Erlandsson and Juan Souto, Counting and equidistribution of reciprocal geodesics and dihedral groups. eprint .[FlaSe09]FlajoletSedgewick Philippe Flajolet and Robert Sedgewick, Analytic combinatorics, Cambridge University Press,Cambridge, 2009, xiv+810 pp.[Haas05]Haas5 Andrew Haas, The distribution of geodesic excursions out the end of a hyperbolic orbifold and approximation with respect to a Fuchsian group Geom. Dedicata 116 (2005), 129–155.[Haas08]Haas4 Andrew Haas, The distribution of geodesic excursions into the neighborhood of a cone singularity on a hyperbolic 2-orbifold. Comment. Math. Helv. 83 (2008), no. 1, 1–20.[Haas09a]Haas2 Andrew Haas, Geodesic cusp excursions and metric Diophantine approximation. Math. Res. Lett. 16 (2009), no. 1, 67–85.[Haas09b]Haas3 Andrew Haas, Geodesic excursions into an embedded disc on a hyperbolic Riemann surface. Conform. Geom. Dyn. 13 (2009), 1–5.[Haas13]Haas1 Andrew Haas, Excursion and return times of a geodesic to a subset of a hyperbolic Riemann surface. Proc. Amer. Math. Soc. 141 (2013), no. 11, 3957–3967.[HarWri08]HardyWright G.H. Hardy and E.M. Wright. An Introduction to the Theory of Numbers. Oxford University Press, Oxford, 2008, xxii+621 pp.[Kon16]Kontorovich Alex Kontorovich. Applications of thin orbits. Dynamics and analytic number theory, 289–317, London Math. Soc. Lecture Note Ser., 437, Cambridge University Press, Cambridge, 2016.[MePe93]MelianPestana María V. Melián and Domingo Pestana,Geodesic excursions into cusps in finite-volume hyperbolic manifolds. Michigan Math. J. 40 (1993), no. 1, 77–93.[Miles60]Miles E.P. Miles, Jr. Generalized Fibonacci numbers and associated matrices. Am. Math. Mon. 67 (1960), 745–752.[Miller71]Miller M.D. Miller. On Generalized Fibonacci Numbers. Am. Math. Mon. 78 (1971), no.10, 1108–1109.[Mir08]Mirzakhani Maryam Mirzakhani, Growth of the number of simple closed geodesics on hyperbolic surfaces. Ann. of Math. (2) 168 (2008), no. 1, 97–125.[Mor22]Mor Ron Mor, Excursions to the cusps for geometrically finite hyperbolic orbifolds and equidistribution of closed geodesics in regular covers. Ergodic Theory Dyn. Syst. 42 (2022), no. 12, 3745–3791.[Poll09]Pollicott Mark Pollicott, Limiting distributions for geodesics excursions on the modular surfacer. Spectral analysis in geometry and number theory, 177–185, Contemp. Math., 484, Amer. Math. Soc., Providence, RI,2009[RanTio21]RandeckerTiozzo Anja Randecker and Giulio Tiozzo, Cusp excursion in hyperbolic manifolds and singularity of harmonic measure. J. Mod. Dyn. 17 (2021), 183–211.[Sar07]Sarnak Peter Sarnak, Reciprocal Geodesics. Analytic number theory, 217–237, Clay Math. Proc., 7, Amer. Math. Soc., Providence, RI, 2007.[Strat95]Stratmann Bernd Stratmann, A note on counting cuspidal excursions. Ann. Acad. Sci. Fenn. Ser. A I Math. 20 (1995),no. 2, 359–372.[Sull82]Sullivan Dennis Sullivan, Disjoint spheres, approximation by imaginary quadratic numbers, and the logarithm law for geodesics. Acta Math. 149 (1982), no. 
3–4, 215–237.[Trin]Trin Marie Trin, Thurston's compactification via geodesic currents: The case of non-compact finite area surfaces. Eprint.[Wol98]Wolfram Stephen Wolfram. Solving generalized Fibonacci recurrences. Fibonacci Q. 36 (1998), no. 2, 129–145.
"authors": [
"Ara Basmajian",
"Mingkun Liu"
],
"categories": [
"math.GT",
"math.CO"
],
"primary_category": "math.GT",
"published": "20231127180453",
"title": "Low-lying geodesics on the modular surface and necklaces"
} |
Institute for Physics of Microstructures, Russian Academy of Sciences, 603950 Nizhny Novgorod, GSP-105, RussiaMoscow Institute of Physics and Technology (National Research University), Dolgoprudnyi, Moscow region, 141701 RussiaSchool of Physics and Telecommunication Engineering, Shaanxi University of Technology, Hanzhong 723001, ChinaInstitute for Physics of Microstructures, Russian Academy of Sciences, 603950 Nizhny Novgorod, GSP-105, RussiaInstitute for Physics of Microstructures, Russian Academy of Sciences, 603950 Nizhny Novgorod, GSP-105, Russia Moscow Institute of Physics and Technology (National Research University), Dolgoprudnyi, Moscow region, 141701 [email protected] University Bordeaux, LOMA UMR-CNRS 5798, F-33405 Talence Cedex, France World-Class Research Center “Digital Biodesign and Personalized Healthcare”, Sechenov First Moscow State Medical University, Moscow, 19991, Russia We study the distinctive features of the phase pumping effect in Josephson transport through a three-layered ferromagnet F_1/F/F_2 with non-coplanar magnetization. Using Gor'kov and Bogoliubov-de Gennes formalisms we go beyond the quasiclassical approximation and analyze the dependence of the spontaneous Josephson phase ψ on the exchange field h in the F layer and details of magnetization profile. The pumping of the Josephson phase can be generated by the mutual rotation of magnetizations in F_1 and F_2 layers resulting in the nontrivial phase gain at the rotation period (Berry phase). The increase in h is shown to cause changes in the topology of the phase evolution: the gain of the Josephson phase at the pumping period switches from 0 to 2π. We study the scenario of these switchings originating from the interplay between several competing local minima of the free energy of the junction versus the superconducting phase difference. Our analysis provides the basis for the search of experimental setup realizing the phase pumping phenomenon. Adiabatic phase pumping in S/F/S hybrids with non-coplanar magnetization A. I. Buzdin January 14, 2024 ========================================================================§ INTRODUCTION The phenomenon of quantum pumping in different mesoscopic systems attracts the interest of both experimentalists and theoreticians for several decades (see, for example, <cit.> and references therein). One of the first suggestions of adiabatic charge pumping mechanisms was made by D. Thouless in his seminal paper <cit.> where he considered the dynamics of quantum particles in the moving periodic potential. Being rather general, the idea of adiabatic pumping can be naturally applied not only for a charge variable but also for other physical quantities. In particular, the superconducting Josephson-type systems are known to provide an interesting possibility to realize the Thouless pumping scenario for the superconducting phase variable <cit.>, which is dual to the electric charge (see <cit.> and references therein). To create a driving potential for the superconducting phase we need to consider the systems, which allow the continuous tuning of the equilibrium Josephson phase between the superconducting electrodes. This possibility can be realized in so-called φ_0 junctions possessing an unconventional current phase relation I_s(φ)=I_csin(φ-φ_0) and revealing, thus, a spontaneous phase difference φ_0≠{0, π} in the ground state. The appearance of a nonzero spontaneous phase is possible only for Josephson systems with broken time-reversal and inversion symmetries <cit.>. 
Being integrated into the superconducting loop such φ_0 junction should produce spontaneous electric current <cit.>. This anomalous Josephson effect arises in a variety of systems involving unconventional superconductors <cit.>, topological insulators <cit.>, Josephson junctions consisting of conventional superconductors separated by magnetic normal metal <cit.>, quantum dot <cit.> or semiconductor nanowire with strong spin-orbit interaction <cit.>. In most cases the spontaneous phase can be tuned by varying a certain parameter, however the typical range of such tuning appears to be strongly restricted. For instance, in a large class of Josephson systems with the spin-orbit interaction the spontaneous phase can be tuned by changing the direction of the exchange (or Zeeman) field 𝐡 with the characteristic variation range of the anomalous phase restricted by the value ∼α h L which is typically small (here α is the spin-orbit constant and L is the junction length). Restrictions on the possible range of the anomalous phase can also be posed by the system design. Prominent examples include the superconductor/ferromagnet/superconductor (S/F/S) Josephson junctions with varying thickness of the F layer (see Refs. <cit.> and references therein), long Josephson junctions with current injectors acting as an effective source of the phase jumps along the junction (see, e.g., Refs. <cit.>) as well as curved nanowire junctions <cit.>. Despite the possibility of engineering any value of the sponteneous phase in such systems, it is extremely hard to tune this phase after the system is fabricated.The continuous tuning of the spontaneous phase can be realized in the Josephson junction with a weak link consisting of half-metal (HM) <cit.> and surrounded by two conventional ferromagnets F_1 and F_2. To stress this difference between conventional φ_0 junctions and structures where the spontaneous phase can be tuned in the whole range between 0 and 2π we will refer to the latter systems as ψ junctions. The key ingredient for such tuning is the non-coplanar magnetization distribution which provides a phase bias for equal-spin Cooper pairs determined by the mutual orientations of magnetic moments in F_1 and F_2 layers. The pumping of the Josephson phase can be generated by the mutual rotation of magnetizations in side ferromagnets resulting in the nontrivial phase gain at the rotation period <cit.> (see the thick red line in Fig. <ref>). Several important results regarding the behavior of the spontaneous phase in Josephson systems with non-coplanar magnetization were obtained in Ref. <cit.> within the framework of the circuit theory. In particular, it was demonstrated that for the S/F_1/HM/F_2/S structure an equilibrium superconducting phase difference is not small and continuously depends on the magnetic configuration. For S/F_1/F/F_2/S system it was shown that the interference of the contributions from equal-spin Cooper pairs to the Josephson current can cause 0-π transition upon the change in the magnetic configuration. It is important to note that the circuit theory analysis in Ref. <cit.> relies on the quasiclassical approximation (which is applicable only for small exchange field values) and is relevant only for rather long diffusive junctions, so that the length of the junction greatly exceeds the spatial scale of the spin-singlet and short-ranged spin-triplet superconducting correlations. 
One can naturally expect that for rather short junctions the behavior of the spontaneous phase can be qualitatively different from the one offered by a circuit theory. Indeed, in this case one can't exclude the contributions to the Josephson current from the spin-singlet and short-ranged spin-triplet Cooper pairs. Being sensitive to the magnitude of the spin polarization in the central F layer, these additional contributions can, in turn, affect the current-phase relation of the junction and the value of the anomalous phase. There are several important questions that remain open for both clean and diffusive Josephson systems with non-coplanar magnetization. These questions relate to the behavior of the ground-state superconducting phase difference upon the change in a magnetic configuration in S/F_1/F/F_2/S junctions for arbitrary ratio of the Fermi energy and the exchange field h in the central F layer. Keeping in mind that the phase pumping effect is absent for h = 0, one can naturally expect that the increase in h should cause changes in the topology of the phase evolution: the gain of the Josephson phase at the pumping period should switch from 0 to 2π (see Fig. <ref>). The analysis of the above problems is known to require a theoretical approach, which goes beyond the standard quasiclassical approximation in the ferromagnet (for details see Ref. <cit.>).In the present paper we fill these gaps and develop a theory of the anomalous Josephson effect in S/F_1/F/F_2/S junctions beyond the quasiclassical approximation. For such structures we demonstrate the possibility to create ψ junction in which the Josephson phase can be tuned byvarying the direction of magnetization in F_1 or F_2 layer, e.g., under the effect of external magnetic field. To highlight the main qualitative features of the ψ junctions we first consider a models of atomically thin SF/HM/SF Josephson junction, (see Fig. <ref>) where the leads consists of ferromagnetic superconductors (SF) with the built-in exchange field, the electron motion in the plane of the layers is assumed to be ballistic while the electron transfer across the structure is associated with the tunneling processes. Using the combination of the microscopical Gor'kov formalism and tight-binding model we calculate the critical current and spontaneous Josephson phase ψ of the junction. Although such model does not account the interference effects coming from the finite thickness of the layers, it allows the exact analytical solution which does not rely on any sort of quasiclassical approaches or numerical modeling. The results of calculations clearly show that non-coplanarity in the magnetic configuration causes the generation of ψ junction where the spontaneous phase ψ equals to the angle between projections of the exchange fields in the SFs layers to the plane perpendicular to the exchange field of the central half-metallic layer. Specifically, sin(ψ) = n_h·( n_1×n_2)/|sinθ_1sinθ_2| where n_1, n_2 and n_h are the unit vectors along the exchange field in the F_1, F_2 and half-metallic F layer, respectively, while θ_1 and θ_2 are the angles between the vectors n_1, n_2 and the spin quantization axis n_h in the half-metal.We then consider clean three-dimensional S/F_1/F/F_2/S junctions with a finite thickness of the central layer and non-coplanar magnetization distribution. For simplicity, the effects of the finite thickness of the side ferromagnets F_1 and F_2 are neglected. 
Correspondingly, their role in the Josephson transport is reduced to the spin-active boundary conditions for the quasiparticle wave function. Several properties of similar hybrid structures were addressed in Refs. <cit.>. The current-phase relation and the behavior of the supercurrent at zero superconducting phase difference (the so-called anomalous current) as a function of hybrid structure parameters were analyzed in Ref. <cit.> for the one-dimensional S/F/S Josephson junctions with the spin-active interfaces. Extensive analysis of various characteristics of clean three-dimensional S/F_1/F/F_2/S junctions including the current-phase relation, the anomalous current, the spatial profiles of the pair amplitude and the density of states was performed in Ref. <cit.>. The focus of the present work is on the behavior of the spontaneous superconducting phase difference ψ and the distinctive features of the phase pumping effect. The calculations of the Josephson transport are carried out within the framework of the theoretical approach used in Ref <cit.>. For this purpose, we derive the Bogoliubov - de Gennes (BdG) equations for the hybrid structure and then express the supercurrent in terms of the determinant of the matching condition matrix. Such an approach is known to be equivalent to the standard one <cit.> and is suitable for the Josephson junctions of arbitrary length. It is shown that if the exchange field h in F layer exceeds the Fermi energy (F is a half-metal) the spontaneous phase ψ is proportional to the angle between projections of the exchange fields in F_1 and F_2 to the plane perpendicular to the exchange field in F. It is demonstrated that when decreasing the h value below the Fermi energy the free energy of the junction as a function of the superconducting phase has two competing local minima resulting in jump-wise changes of the spontaneous phase ψ upon magnetization rotation accompanied with the hysteresis phenomena. Further decrease in h is shown to induce several changes in the topology of the phase evolution: the gain of the Josephson phase at the rotation period switches between 0 to 2π (compare black and red trajectories in Fig. <ref>). It is demonstrated that the tunability of the spontaneous phase as a function of the relative orientation of the magnetic moments in three ferromagnetic layers can persist up to rather small exchange fields in the central one. Our numerical results also reveal rather complex behavior of the ground-state superconducting phase difference as a function of the structure parameters. In particular, in our simulations performed for short ballistic junctions we observe a prominent role of the size quantization effects in the behavior of the spontaneous phase, which manifest themselves through oscillatory and/or the jump-wise behavior of the anomalous phase upon the change in the exchange field or the junction length. All the results of our numerical simulations are clarified by the analytical expression for the current-phase relation, which has been derived for the case of a large mismatch between the Fermi velocities in the superconducting leads and in the central ferromagnet.The paper has the following structure. In Sec. <ref> using the exact solution of Gor'kov equations we analyze the current-phase relation of the atomically thin SF/HM/SF junction. Sec. 
<ref> is devoted to the analysis of the behavior of the ground-state superconducting phase difference and the phase pumping phenomenon for clean S/F_1/F/F_2/S Josephson junctions with a finite thickness of the central layer. In Sec. <ref> we summarize our results. § SF/HM/SF JOSEPHSON JUNCTION OF ATOMIC THICKNESS In this section we analyze the current-phase relation of the Josephson junction based on theSF_1/HM/SF_2 structure of atomic thickness. Our goal here is to find the exact solutions of the Gor'kov equations beyond the quasiclassical approximation and establish the generic conditions required for the formation of the Josephson ψ junction. The system geometry is schematically shown in Fig. <ref>. The y-axis is chosen perpendicular to the layers' interfaces. The spin quantization axis in the HM layer coincides with the z axis, while the exchange field h_j in the SF_j layer forms the angle θ_j with the z axis and the angle χ_j with the x-axis in the xy-plane: h_j=h(sinθ_j cosχ_j,sinθ_j sinχ_j, cosθ_j), where j=1,2. We assume the electron motion to be characterized by the momentum p in the plane of each layer while the quasiparticles transfer perpendicular to the layers is described by the tight-binding model. The transfer integral t between SF_jand HM layers is assumed to be much smaller than the critical temperature T_c. Also we restrict ourselves to the limit of coherent interlayer tunneling which conserves the in-plane electron momentum. The in-plane quasiparticles motion in the SF layers is described by the energy spectrum ξ( p), while the energy spectrum in the half-metal is spin dependent: ξ_↑=ξ( p) and ξ_↓=+∞. The gap functions are Δ_1=Δ_0 e^ i φ/2 and Δ_2=Δ_0 e^ -i φ/2. We follow the approach of <cit.> and introduce the electron annihilation operators ϕ, ζ and η in the SF_1, HM and SF_2 layers, respectively. Then the system Hamiltonian can be written in the following form: Ĥ=Ĥ_0+Ĥ_BCS+Ĥ_t,where the operator Ĥ_0=∑_ p;α,β={↑,↓}(Â^(1)_αβϕ^†_αϕ_β+P̂_αβζ^†_αζ_β+Â^(2)_αβη^†_αη_β)describes the electron motion in the isolated layers (α and β are the spin indexes), the contributionĤ_BCS=∑_ p(Δ^*_1ϕ_- p,↓ϕ_ p,↑+Δ_1ϕ^†_ p, ↑ϕ^†_- p, ↓ + +Δ^*_2η_- p,↓η_ p,↑+Δ_2η^†_ p, ↑η^†_- p, ↓),stands for the s-wave superconducting coupling inside the junction electrodes, the operatorĤ_t=∑_n,p, α={↑,↓} t (ϕ^†_ p, αζ_ p, α + ζ^†_ p, αϕ_ pα ) + +t (ζ^†_ p, αη_ p, α + η^†_ p, αζ_ pα ),reflects the coherent electron tunneling between the layers, and the matrices Â^(j) and P̂ are defined asÂ^(j)=( [ξ( p)-hcosθ_j -h e^-i χ_j sinθ_j; -he^i χ_j sinθ_jξ( p)+hcosθ_j ]), P̂=( [ ξ( p) 0; 0+∞ ]). The appearance of the spontaneous ground-state Josephson phase ψ and its direct relation to the relative orientation of the exchange fields in SF_1 and SF_2 layers (namely, ψ=χ_2-χ_1) straightly follows from the form of the Hamiltonian (<ref>). Indeed, below we show that it is possible to make the simultaneous phase transformation of all annihilation operators which does not change the form of the Hamiltonian (<ref>) but makes two modifications. First, this transfromation effectively rotates the exchange field vectors in the xy plane in a way that the angle between their projection to this plane becomes zero. Second, it produces an additional phase shift ψ between the superconducting electrodes of the Josephson junction. 
The appearance of this spontaneous phase reflects the anomalous Josephson effect.The generic phase transformation of all annihilation operators reads ϕ_α=ϕ̃_α e^iκ_α, ζ_α=ζ̃_αe^iμ_α, η_α=η̃_αe^iν_α, where κ_α, μ_α and ν_α are certain constants. Since we assume the full spin polarization of electrons inside the half-metal the spin-down component of the corresponding annihilation operator ζ_↓=0 and, thus, the part Ĥ_t of the Hamiltonian does not change, if we take κ_1=μ_1 and μ_1=ν_1. Moreover, if in addition we choose κ_2-κ_1=χ_1 and ν_2-ν_1=χ_2 the part Ĥ_0 remains the same as in Eq. (<ref>), but with the modified form of the matrix Â^j: Â^j→Ã^j=ξ( p)σ_0 -h cosθ_jσ_z -h sinθ_jσ_x. The form of the transformation for the matrix Â^j makes the problems equivalent to the case of the exchange fields lying in the xz-plane. Also the described transformation of Â^j is accompanied by the corresponding modification of the order parameter values Δ_1 and Δ_2 in the superconducting part H_BCS of the Hamiltonian: Δ_1 →Δ̃_1=Δ_1 e^-i(κ_1+κ_2)and Δ_2 →Δ̃_2=Δ_2 e^-i(ν_1+ν_2). Since the exchange field lying in the xz plane does not affect the Josephson phase, we find: Arg(Δ̃_1) - Arg(Δ̃_2)=φ +χ_2-χ_1. From this expression one sees that even for φ=0 the superconducting leads effectively acquire the announced phase differenceψ=χ_2-χ_1,which corresponds to the current-phase relationI=I_csin(φ+ψ).Note that in the case when the exchange field in the central layer is comparable or less than Fermi energy (instead of half metal we have a conventional ferromagnet) one cannot perform the above procedure. Indeed, since in this case ζ_↓ 0, one should also put κ_2=μ_2=ν_2 to leave the operator Ĥ_t unchanged. But in this case it is impossible to choose κ_2-κ_1=χ_1 and ν_2-ν_1=χ_2. Thus, the formation of the ψ junction with the spontaneous ground state phase defined only by the mutual orientation of magnetic moments in ferromagnetic layers requires the material with the full spin polarization. The emergence of the spontaneous phase in S/F_1/F/F_2/S systems where the spin polarization in the central ferromagnetic layer is not full is treated in detail in Sec. <ref>. After establishing the appearance of the spontaneous Josephson phase we now turn to the analysis of the critical current I_c for the SF/HM/SF junction. The value I_c depends on the angle θ between the magnetizations in the SF layers and the spin quantization axis in half-metal (without loss of generality we assume θ_1=θ_2=θ) and is nonzero for θ 0, π. To show this, let us calculate this current. The above analysis shows that I_c for the Josephson junction under consideration is the same as the critical current for SF/HM/SF structure with the exchange fields of the SF layers both lying in the xz plane and forming the angle θ with z-axis. To calculate the Josephson energy of the later junction we consider the infinite multilayered structure consisting of alternating atomically thin SF (the exchange field is h=hcosθẑ+hsinθx̂) and HM (the spin quantization axis coincides with the z axis) layers and the gap varying from one unit cell to another asΔ_n=Δ_0 e^ikn (see Fig. <ref>a). Introducing the effective coupling constant 𝒯 between the superconducting layers (see Fig. 
<ref>b) we can write down the superconducting contribution to the free energy for such SF/HM structure as: F_s=∑_n[τ𝒩(0)|Δ_n|^2+𝒯𝒩(0)(Δ_nΔ^*_n+1+Δ_n+1Δ^*_n) ] =Δ_0^2𝒩(0)(τ +2𝒯cos k)N,where τ=(T-T_c0)/T_c0, T_c0 is the critical temperature in the absence of the proximity effect (t=0) and the exchange field (h=0), 𝒩(0) is the electron density of states at the Fermi level, N is the number of superconducting layers. From Eq.(<ref>) one sees that the Josephson energy of a single junction with the phase difference φ between the superconducting leads readsE_J = Δ_0^2𝒩(0)(τ+𝒯cosφ) .Thus, we drive to the expression for the critical current:I_c=-4πΔ_0^2 𝒩(0) 𝒯/Φ_0. The coupling parameter 𝒯 entering Eq. (<ref>) can be expressed via the critical temperatures T_c^0 and T_c^π for 0-phase (k=0) and π-phase (k=π) of SF/HM superlattice, respectively. Indeed, in the superconducting transition we have F_s=0 and τ_c =-2𝒯cos k. As the result, the coupling parameter 𝒯 reads as: 4𝒯=(T_c^π-T_c^0)/T_c0.Our next step is to find T_c^π and T_c^0 using the microscopical Gor'kov formalism. To that end, we will write the system of matrix Gor'kov equations, find the anomalous Green function solving this system and calculate T_c using the self-consistency equation. The Hamiltonian of the SF/HM superlattice has the formĤ=Ĥ_0+Ĥ_BCS+Ĥ_t,where the three operators describe the electron motion in the plane of the layers, superconducting pairing in the superconducting layers and tunneling between the layers, respectively:Ĥ_0=∑_n; p;β, γ={↑,↓}(Ĉ_βγϕ^†_n,p, βϕ_n,p, γ+P̂_βγζ^†_n,p, βζ_n,p, γ), Ĥ_BCS=∑_ p(Δ^*ϕ_n, - p,↓ϕ_n, p,↑+Δϕ^†_n,p, ↑ϕ^†_n, - p, ↓), Ĥ_t=t∑_n,p, β={1,2}( ζ^†_n,p, βψ_n-1,p, β+ζ^†_n-1,p, βϕ_n,p, β)+ +( ϕ^†_n,p, βζ_n,p, β+ζ^†_n,p, βϕ_n,p, β).Here ϕ and ζ are the electron annihilation operators in the SF and HM layers, respectively, n is the number of a unit cell and the matrix Ĉ is defined asĈ=( [ ξ( p)-hcosθ-hsinθ;-hsinθ ξ( p)+hcosθ ]). Since the multilayered system is periodic in space, we can introduce the Fourier components of the annihilation operatorsϕ_n,p, β=∫^π_-πdq/2πe^iqnϕ_q,p, β, ζ_n,p, β=∫^π_-πdq/2πe^iqnζ_q,p, β,and write down the Hamiltonian (<ref>)-(<ref>) in the Fourier representation:Ĥ_0=∑_q; p;β, γ={↑,↓}(Ĉ_βγϕ^†_q,p, βϕ_q,p, γ+P̂_βγζ^†_q,p, βζ_q,p, γ), Ĥ_BCS=Δ_0∑_q, p( ϕ_k-q, - p,↓ϕ_q, p,↑+Δϕ^†_q,p, ↑ϕ^†_k-q, - p, ↓), Ĥ_t=∑_q,p, β={↑,↓} (T(q)ϕ^†_q,p, βζ_q,p, β+ T^*(q)ζ^†_q,p, βϕ_q,p, β),where T(q)=2te^iq/2cos (q/2).Next we introduce the set of imaginary-time Green functions:F^†_αβ( p,q; τ_1, τ_2)=⟨ T_τϕ^†_-q, - p, α(τ_1) ϕ^†_k+q,p, β(τ_2)⟩, G_αβ( p, q; τ_1, τ_2)=-⟨ T_τϕ_q,p, α(τ_1) ϕ^†_q,p, β(τ_2)⟩, F^ζ†_αβ( p, q; τ_1, τ_2)=⟨ T_τψ^†_-q, - p, α(τ_1) ϕ^†_k+q,p, β(τ_2)⟩, E_αβ(p, q; τ_1, τ_2)=-⟨ T_τζ_q,p, α(τ_1) ζ^†_q,p, β(τ_2)⟩.The resulting system of the matrix Gor'kov equations in the frequency representation takes the following form:(iω_n-Ĉ)Ĝ+Δ_0 I F̂^†+T(k+q)Ê=1̂, (iω_n+Ĉ)F̂^† -Δ_0 I Ĝ-T(q)F̂^ζ†=0, (iω_n-P̂)Ê+T^*(k+q)Ĝ=0, (iω_n+P̂)F̂^ζ† -T^*(q)F̂^†=0,where ω_n=π T(2n+1) are the fermion Matsubara frequencies and Î=i σ_y. Solving the system of equations (<ref>) in the lowest order over the gap potential we find the anomalous Green function:F̂^†=Δ_0[(iω_n +Ĉ)-|T(q)|^2(iω_n +P̂)^-1]^-1Î ×[(iω_n -Ĉ)-|T(k+q)|^2(iω_n -P̂)^-1]^-1. 
The critical temperature T_c can be expressed via F̂^†_↑↓ using the self-consistency equation:ln(T_c0/T_c)=T_c ∑_n=-∞^∞[∫^+∞_-∞ d ξ∫^π_-πdq/2πF̂^†_↑↓/Δ_0 +π/ω_n],where T_c0 is the critical temperature in the absence of proximity effect (t=0) and the exchange field (h=0).To simplify the further calculations, we assume t to be small and perform the power expansion of (<ref>) over this parameter. Next we substitute this expansion into Eq.(<ref>) and obtain rather cumbersome equation for T_c (for details see Appendix <ref>). Solving this equation, we can represent the critical temperature as T_c =α -βcos k [see Eqs.(<ref>)-(<ref>)], whereβ=∑_n>0πT̃_c0^2t^4h^2(h^4+35h^2 ω̃_n^2 +70 ω̃_n^4)sin^2θ/Wω̃_n(ω̃_n^2+h^2)^3 (4ω̃_n^2+h^2)^2.Here ω̃_n=πT̃_c0 (2n+1), T̃_c0 is the critical temperature in the absence of the proximity effect (t=0) and W=1-∑_n>04πT̃_c0 h^2 ω̃_n/(ω̃_n^2+h^2)^2>0.As a result, we can easily calculate the desired difference (T_c^π-T_c^0)=2β and the critical current I_c=2πΔ_0^2N(0) β/(Φ_0 T_c0). As expected, we obtain I_c ∝sin^2θ.§ S/F_1/F/F_2/S JUNCTIONS WITH A FINITE THICKNESS OF THE F LAYERIn this section we analyze the behavior of the spontaneous ground-state phase difference in clean S/F_1/F/F_2/S Josephson junctions with a finite thickness of the central layer L. Using the solutions of the Bogoliubov-de Gennes equations, we describe the crossover from the regime of half-metallic F layer to the case when the exchange field in the F layer is small compared to the Fermi energy. To simplify the calculations we treat the F_1 and F_2 layers as spin-active interfaces between the superconducting leads and the central ferromagnet. Also we assume that these interfaces are characterized by the finite potential barrier which partially damps the electron transfer between the layers.§.§ Geometry and basic equationsThe geometry of the system is schematically shown in Fig. <ref>. The superconducting gap is taken in the form Δ = Δ_0e^iφ/2 in the left electrode and Δ = Δ_0e^-iφ/2 in the right one. We introduce the z axis along the exchange field 𝐡 in the central ferromagnet F which is assumed to be perpendicular to the plane of the layers. The Hamiltonian of the hybrid structure reads <cit.>Ĥ=Ĥ_0+Ĥ_ex+Ĥ_BCS ,where Ĥ_0= ∫ψ_α^†( r)[-∇1/2m(𝐫)∇-μ(𝐫) + U(𝐫)]ψ_α( r)d^3 r ,and the potential U(𝐫) is nonzero only at the interfaces between the F layer and superconducting leads:U(𝐫) = U_1δ(z + L/2) + U_2δ(z - L/2).The spin-dependent part of the single-particle Hamiltonian has the following formĤ_ex=∫ψ_α^†( r)[ h( r)·σ̂]_αβψ_β( r)d^3 rwith[ 𝐡(𝐫) = h_1(𝐧_1σ̂)δ(z + L/2) + h_2(𝐧_2σ̂)δ(z-L/2); + hσ̂_z[Θ(z + L/2) - Θ(z-L/2)] ]and the unit vector𝐧_j = (sinθ_jcosχ_j,sinθ_jsinχ_j,cosθ_j)directed along the exchange field vector in the j-th interface. The termĤ_BCS=∫[ ψ_α^†( r)Δ ( r)(iσ̂_y)_αβψ_β^†( r)+h.c.]d^3 rstands for the superconducting paring inside the superconducting layers. In the above expressions ψ_α^† (ψ_α) is the fermionic creation (annihilation) operator, α,β = ↑,↓ denote spin degrees of freedom (summation over repeated spin indices is implied), σ̂_i (i = x,y,z) are the Pauli matrices acting in the spin space, m(𝐫) is the effective mass profile, and μ(𝐫) denotes the difference between the chemical potential and the bottom of the electron energy band. 
To derive the Bogoliubov-de Gennes equations, we perform the standard Bogoliubov transformation ψ_α( r)=∑_n[u_nα( r) γ_n + v_nα^∗( r)γ_n^†], where γ_n^† (γ_n) is the quasiparticle creation (annihilation) operator, u_nα( r) and v_nα( r) are the electron and hole components of the quasiparticle wave function. The resulting equations take the formȞ_ BdG(𝐫)Ψ(𝐫) = E Ψ(𝐫), Ȟ_ BdG(𝐫) = τ̌_z[-∇1/2m(𝐫)∇ - μ(𝐫) + U(𝐫)]+ 𝐡(𝐫)σ̂ + τ̌_x ReΔ(𝐫) -τ̌_y ImΔ(𝐫), where Ψ(𝐫) = [u_↑(𝐫),u_↓(𝐫),v_↓(𝐫),-v_↑(𝐫)]^ T and τ̌_i (i = x,y,z) are the Pauli matrices acting in the electron-hole space. For the calculations of the Josephson transport we follow the approach used in Ref. <cit.>. This approach is based on the fact that the normal Matsubara Green's function of the system, which defines the supercurrent though the junction, can be expressed in terms of the determinant of the matching condition matrix for Eqs. (<ref>) at E = iω_n. Note that for the considered S/F/S junctions with spin-active interfaces the general expression for the Josephson current represents a trivial generalization of the one in Ref. <cit.>. Before going into details, let us briefly outline the key points of the procedure. As a first step, we derive the solutions of Eqs. (<ref>) at the Matsubara frequencies in the whole system. Matching the resulting solutions at the interfaces, we derive the scattering matrices and the solvability condition matrix. Finally, the determinant of the solvability condition matrix is used to compute the Josephson current. Below we list the appropriate solutions of Eqs. (<ref>) at E = iω_n derived under the assumption of the in-plane translational symmetry for the stepwise m(z) and μ(z) profiles. In what follows we assume the identical electron band structure in both superconducting leads. In the left superconductor (z<-L/2) the quasiparticle wave function readsΨ_I = e^i𝐩_||𝐫e^ϰ(z + L/2)[e^ip_s(z + L/2)[A_+; e^- iφ/2a_+A_+ ]+ e^-ip_s(z + L/2)[ A_-; e^-iφ/2a_-A_- ]].Here 𝐩_|| is the conserved in-plane momentum, a_±(ω_n) = iω_n± i√(ω_n^2 + Δ_0^2)/Δ_0 ,ϰ = m_s√(ω_n^2 + Δ_0^2)/p_s, p_s = √(2m_sμ_s - 𝐩_||^2), m_s is the effective mass for the electrons in superconducting leads, and μ_s is the difference between the chemical potential and the bottom of the electron energy band in the S layers. In the central ferromagnet (-L/2<z<L/2) the solution of BdG equations can be written as follows:Ψ_II = e^i𝐩_||𝐫[Q̂(z)B_+ + Q̂(-z)B_-; Q̂̅̂(z)B̅_++ Q̂̅̂(-z)B̅_- ] ,where Q̂(z) =diag(e^ ip_↑z,e^ i p_↓z), Q̂̅̂(z) =diag(e^ ip̅_↑z,e^ i p̅_↓z),p_↑,↓ = √(2m_f(iω_n ∓ h + μ_f)- 𝐩_||^2) , p̅_↑,↓(iω_n,h) = p_↑,↓(-iω_n,-h), m_f denotes the effective mass for the electrons in the central ferromagnet, and μ_f is the difference between the chemical potential and the bottom of the electron energy band in the central layer at h = 0. For a finite h the bottom of the lower and higher spin-split subband in the F layer is located at -(μ_f + h) and -(μ_f - h), respectively. Finally, in the right superconductor (z>L/2) the quasiparticle wave function has the form:Ψ_III = e^i𝐩_||𝐫e^-ϰ(z - L/2){e^ip_s(z - L/2)[C_+; e^iφ/2a_-C_+ ]+ e^-ip_s(z - L/2)[C_-; e^iφ/2a_+C_- ]} .In the above solutions A_±, B_±, B̅_±, and C_± are 2×1 column vectors in the spin space, which should be determined from the matching conditions for the quasiparticle wave function at the S/F interfaces (z = ± L/2). 
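To see how the half-metallic limit emerges from these solutions, it is instructive to evaluate the spin-split momenta numerically. The Python sketch below is only an illustration: the function name and the parameter values (ħ = 1, m_f = 1, energies measured in units of Δ_0) are our own choices, not fixed by the model.

    import numpy as np

    def spin_split_momenta(omega_n, h, mu_f, m_f=1.0, p_par=0.0):
        """p_(up/down) = sqrt(2 m_f (i w_n -/+ h + mu_f) - p_par^2), with hbar = 1."""
        p_up = np.sqrt(2 * m_f * (1j * omega_n - h + mu_f) - p_par**2)
        p_dn = np.sqrt(2 * m_f * (1j * omega_n + h + mu_f) - p_par**2)
        return p_up, p_dn

    mu_f = 500.0                        # in units of Delta_0
    omega_0 = 0.1 * np.pi               # lowest Matsubara frequency at T = 0.1 Delta_0
    for h in (0.5 * mu_f, 1.5 * mu_f):  # metallic vs half-metallic regime
        p_up, p_dn = spin_split_momenta(omega_0, h, mu_f)
        print(f"h/mu_f = {h / mu_f:.1f}: Im p_up = {p_up.imag:.3f}, Im p_dn = {p_dn.imag:.3f}")
    # For h > mu_f the momentum of the higher spin-split subband becomes almost
    # purely imaginary, so the corresponding waves are evanescent and their
    # contribution to the supercurrent decays exponentially with the length L,
    # leaving only the lower subband.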
It is straightforward to show that the corresponding matching conditions for the considered model (<ref>) [B_+; B̅_+ ] = 𝒦̌_1[B_-; B̅_- ] , [B_-; B̅_- ] = 𝒦̌_2[B_+; B̅_+ ] ,yield the following expressions for the scattering matrices, which couple the electron and hole waves in the central ferromagnet:𝒦̌_j = -Q̌(L/2)(W̌_j + Ǩ)^-1(W̌_j - Ǩ)Q̌(L/2).HereW̌_j = gτ̌_z + iZ_0,j + iZ_jτ̌_z𝐧_jσ̂ -f[τ̌_xsin(φ_j) + τ̌_ycos(φ_j)], g = ω_n/√(ω_n^2 + Δ_0^2) , f = Δ_0/√(ω_n^2 + Δ_0^2) , Q̌ and Ǩ are diagonal matrices with the following structure in the electron-hole space X̌ =diag(X̂,X̂̅̂), K̂ = (m_s/m_f)diag(p_↑/p_s, p_↓/p_s), Z_0,j = 2m_sU_j/p_s, and Z_j = 2m_sh_j/p_s. The current-phase relation I(φ) can be obtained from the determinant of the matching condition matrix I_s(φ) = -2e𝒜T∑_ω_n>0∫d^2𝐩_||/(2π)^2∂/∂φln|𝒫|, 𝒫(iω_n, 𝐩_||,φ) =det|1 - 𝒦̌_1𝒦̌_2|, where 𝒜 is the cross-sectional area of the junction. We calculate the spontaneous superconducting phase difference by minimizing the free energy of the junction, which up to a phase-independent constant is given byF(φ) = -𝒜T∑_ω_n>0∫d^2𝐩_||/(2π)^2ln|𝒫|.The resulting Eqs (<ref>), (<ref>), and (<ref>) form the basis for our analytical analysis and numerical simulations of the Josephson transport.§.§ Analytical results for the current-phase relation. Qualitative considerationHere we provide some analytical results, which clarify the behavior of the current-phase relation and the spontaneous superconducting phase difference. Note that the major challenge for obtaining a closed-form expression for the current-phase relation stems from the fact that the spin-active interfaces couple the quasiparticle states in the central ferromagnet from both spin-split subbands. Some analytical progress can be made when typical Fermi velocities in the central F layer are much smaller than the ones in the normal-metal state of the superconducting leads. In this case, one can consider the velocity ratio matrix Ǩ in Eq. (<ref>) as a perturbation and expand the determinant of the solvability condition matrix 𝒫 up to the second-order terms. Choosing, for simplicity, equal barrier strength parameters Z_0,1 = Z_0,2 = Z_0, Z_1 = Z_2 = Z, we arrive at a sinusoidal current-phase relation of the following form:I_s(φ) = (I_↑↓^+ + I_↑↓^-)sin(φ) + I_↑↑sin(φ - χ) + I_↓↓sin(φ + χ), I_↑↓^+ = 4e𝒜T Re∑_ω_n>0∫d^2𝐩_||/(2π)^2f^2[w^2-4g^2Z^2cos(θ_1)cos(θ_2)]/γ(w^2 + 4g^2Z^2)^2[p_↑p̅_↑/sin(p_↑L)sin(p̅_↑L) + p_↓p̅_↓/sin(p_↓L)sin(p̅_↓L)], I_↑↓^- = 4e𝒜T Re∑_ω_n>0∫d^2𝐩_||/(2π)^2-4iwgZf^2cos(θ_+)cos(θ_-)/γ(w^2 + 4g^2Z^2)^2[p_↑p̅_↑/sin(p_↑L)sin(p̅_↑L) - p_↓p̅_↓/sin(p_↓L)sin(p̅_↓L)], I_↑↑ = -4e𝒜T Re∑_ω_n>0∫d^2𝐩_||/(2π)^24g^2f^2Z^2sin(θ_1)sin(θ_2)/γ(w^2 + 4g^2Z^2)^2p_↑p̅_↓/sin(p_↑L)sin(p̅_↓L) , I_↓↓ = -4e𝒜T Re∑_ω_n>0∫d^2𝐩_||/(2π)^24g^2f^2Z^2sin(θ_1)sin(θ_2)/γ(w^2 + 4g^2Z^2)^2p_↓p̅_↑/sin(p_↓L)sin(p̅_↑L) .Here w = 1 + Z_0^2 - Z^2, θ_± = (θ_1±θ_2)/2, χ = χ_1 - χ_2, and γ = m_f^2p_s^2/m_s^2. The first two terms proportional to sin(φ) in the right-hand side of Eq. (<ref>) describe the contribution to the Josephson transport from the spin-singlet and the spin-triplet Cooper pairs with zero spin projection on the direction of the exchange field 𝐡/|𝐡|. The remaining terms ∝ I_↑↑ and I_↓↓ contain the contributions to the supercurrent from the parallel spin-triplet pairs |↑↑⟩ and |↓↓⟩, respectively.Equations (<ref>) provide several valuable insights into the physics of the anomalous Josephson effect in S/F_1/F/F_2/S hybrid structures. 
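Since all four harmonics in Eq. (<ref>) are simple sinusoids in φ, they add up to a single sinusoid |C| sin(φ + arg C) with C = (I_↑↓^+ + I_↑↓^-) + I_↑↑ e^{-iχ} + I_↓↓ e^{iχ}, so the ground state sits at ψ = -arg C. The sketch below exploits this; the numerical coefficients are arbitrary placeholders rather than the integrals of Eqs. (<ref>), and the helper name is ours.

    import numpy as np

    def anomalous_phase(I0, Iuu, Idd, chi):
        """Phase minimizing F for I_s = I0 sin(phi) + Iuu sin(phi - chi) + Idd sin(phi + chi)."""
        C = I0 + Iuu * np.exp(-1j * chi) + Idd * np.exp(1j * chi)
        return -np.angle(C)

    chi = np.linspace(0.0, 2.0 * np.pi, 9)
    # Half-metallic limit: only the down-down channel survives; taking Idd < 0
    # (which reproduces the numerically observed behavior) gives psi = pi - chi:
    print(np.mod(anomalous_phase(0.0, 0.0, -1.0, chi), 2 * np.pi))
    # h -> 0 limit: Iuu = Idd, so the triplet part behaves as cos(chi) sin(phi)
    # and psi switches between 0 and pi as chi varies (a 0-pi transition):
    print(np.mod(anomalous_phase(0.2, -0.5, -0.5, chi), 2 * np.pi))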
First, one can see that the above expressions capture the crossover from the S/F_1/F/F_2/S to the S/F_1/HM/F_2/S-type junctions upon the increase in the exchange field in the central ferromagnet. Indeed, for h > μ_f the contributions to the Josephson current from the higher spin-split subband in the central ferromagnet ∝ 1/sin(p_↑L), 1/sin(p̅_↓L) exhibit an exponential decay and one is left with a single contribution from the lower spin-split subband I_ S/F/HM/F/S(φ) = I_↓↓sin(φ + χ). Correspondingly, in the half-metallic regime the anomalous phase ψ is a linear function of the misorientation angle χ = χ_1 - χ_2. Second, in the limit h → 0 we get I_↑↑ = I_↓↓, which implies that the total contribution of the parallel spin-triplet Cooper pairs to the Josephson transport is ∝ cos(χ)sin(φ). Within this limit and for rather strong spin-active barriers one can expect that the variations of χ should cause 0-π transitions due to contributions of the parallel spin-triplet Cooper pairs. Note that this result is in qualitative agreement with the results of the circuit theory <cit.>. Third, the functional form of the current-phase relation (<ref>) suggests quite complex behavior of the spontaneous phase difference within the parameter range h < μ_f. It is clear that in the general case the anomalous phase is determined by the competition of all the above contributions. The important point in the subsequent analysis is that these contributions are of different magnitude and exhibit different behavior with respect to the superconducting phase difference φ. In particular, the first two terms proportional to sin(φ) in the right-hand side of Eq. (<ref>) favor the spontaneous phase ψ = 0 or π whereas the remaining terms are responsible for the tunability of ψ as a function of χ. The corresponding critical currents I_↑↓^±, I_↑↑ and I_↓↓ can, in turn, exhibit a singular behavior when p_↑,↓L = π n (n = 1,2,3,...). One can naturally expect that such jump-wise behavior of the critical currents should result in jump-wise changes and/or oscillations of the spontaneous superconducting phase difference ψ upon the change in the band structure parameters μ_f and h of the central ferromagnet as well as the junction length L. It is important to note that even though the validity of Eqs. (<ref>) breaks down near the resonances, our analytical results also provide a qualitative explanation for the ψ(h) curves observed in our numerical simulations.

§.§ Numerical simulations

We proceed with the discussion of the results of numerical simulations. Our focus is on the behavior of the anomalous phase ψ as a function of the exchange field in the central ferromagnet h and the misorientation angle χ. For simplicity, we consider the case of equal barrier strength parameters Z_0,1 = Z_0,2 = Z_0, Z_1 = Z_2 = Z and take θ_1 = θ_2 = π/2. Our numerical results are obtained for the following parameter set: T = 0.1Δ_0, μ_s/Δ_0 = 10^3, μ_f/Δ_0 = 5× 10^2, m_s = m_f, Z_0 = 0, and Z = 0.5. Typical current-phase relations I_s(φ) along with the corresponding F(φ) curves for χ = 0, π/4, π/2, 3π/4, π and several exchange fields h/μ_f = 1, 0.8, and 0.6 are shown in Fig. <ref>. Circles in Fig. <ref> denote the superconducting phase difference at which the free energy of the junction reaches its minimal value (the anomalous phase ψ). In the half-metallic regime h/μ_f = 1 [see Figs. <ref>(a) and <ref>(b)] the corresponding I_s(φ) curves are sinusoidal and the spontaneous phase ψ ≈ π - χ. The decrease in h [see panels (c)-(f) in Fig.
<ref>] leads to deviations of the spontaneous phase from π - χ and the appearance of the superconducting diode effect in the system I_c+≠ I_c-, where I_c+ = max_φ I_s(φ) and I_c- = |min_φI_s(φ)|. Note that a nonreciprocal superconducting transport is a common feature of the considered Josephon junctions with non-coplanar magnetization distribution and h < μ_f <cit.>. The results in Figs. <ref>(c) and <ref>(e) clearly demonstrate that both the magnitude of the diode effect as well as the preferential direction, for which the Josephson junction can carry larger supercurrent, are governed by the misorientation angle χ.Let us now discuss the tunability of the ground-state superconducting phase difference as a function of the hybrid structure parameters. We show several ψ(χ) plots for h/μ_f = 1, 0.8, 0.6, 0.4 and 0.2 in Fig. <ref>(b). One can see that in the half-metallic regime for h/μ_f = 1, the corresponding ψ(χ) dependence is, indeed, linear ψ = π - χ. As it has been explained previously, such behavior originates from the fact that the supercurrent is carried only by the parallel spin-triplet Cooper pairs from the lower spin-split subband in the central ferromagnet. It is interesting to note that the tunability of the anomalous phase can persist up to rather small values of the exchange field h. Moreover, Fig. <ref>(a) also demonstrates that within the parameter range h < μ_f, the spontaneous phase difference can exhibit sudden jumps at certain χ values due to the competition between two local minima of the free energy of the contact versus the superconducting phase difference [see Figs. <ref>(a) and <ref>(c)]. The results demonstrate that for the considered Josephson junctions with non-coplanar magnetic texture, the variations of the misorientation angle in the case h<μ_f can induce the first-order phase transitions between the states with different anomalous phases accompanied with the hysteresis phenomena. Typical behavior of the ground-state superconducting phase difference within the full range of the exchange fields for χ = π/4, π/2, and 3π/4 are shown in Fig. <ref>. The results in Figs. <ref>(a), <ref>(b), and <ref>(c) are obtained for various length of the junction L/ξ = 0.02, 0.05, and 0.1, respectively. Let us, first, discuss the plots in Fig. <ref>(a). One can see that for χ = π/4 the spontaneous phase jumps at h ≈ 0.05μ_f and then exhibits several oscillations upon the increase in the exchange field. At h ≈μ_f the anomalous phase saturates at π - χ. Except the case of rather weak exchange fields, the behavior of the spontaneous phase for χ = 3π/4 is qualitatively similar to the above discussed one. For χ = π/2 we observe that the spontaneous phase exhibits both oscillations and the jumps. It is important to note that the number of oscillations on ψ(h) curves in Fig. <ref>(a) is of order N_f = k_fL/π, where k_f is the Fermi momentum in the central layer at h = 0. Indeed, for our choice of the hybrid structure parameters N_f ∼ 10, which allows us to associate the oscillations of the anomalous phase with the size quantization effects in the central F layer. The results for longer junctions L/ξ = 0.05 and 0.1 [Figs. <ref>(b) and <ref>(c)] reveal the increase in the number of oscillations upon the increase in the junction length. One can see that the number of oscillations of the anomalous phase in Figs. <ref>(b) and <ref>(c) is of order k_fL/π. The resulting exchange-field behavior of the anomalous phase is in a good qualitative agreement with the results of Eq. (<ref>). 
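The jumps of ψ discussed above come from bookkeeping over competing minima of the free energy. A minimal sketch of that bookkeeping is given below; the test free energy with a second harmonic is a toy stand-in, not Eq. (<ref>) itself, and the helper name is ours. All local minima of the sampled 2π-periodic F(φ) are collected, the deepest one is reported as ψ, and the rest as metastable candidates.

    import numpy as np

    def phase_minima(F, n=2048):
        """Return (global minimum position, other local minima) of a 2*pi-periodic F."""
        phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        f = F(phi)
        left, right = np.roll(f, 1), np.roll(f, -1)   # periodic neighbors
        idx = np.where((f < left) & (f < right))[0]   # all local minima
        order = idx[np.argsort(f[idx])]               # sorted from deepest to shallowest
        return phi[order[0]], phi[order[1:]]

    # Toy free energy with a second harmonic, producing two competing minima:
    F = lambda phi: -0.3 * np.cos(phi) - np.cos(2.0 * (phi - 0.4))
    psi, metastable = phase_minima(F)
    print(f"psi = {psi:.3f}, metastable minima at {np.round(metastable, 3)}")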
Our main results regarding the behavior of the ground-state superconducting phase difference are summarized in Fig. <ref>, where we show the surface plot of the anomalous phase ψ versus the exchange field h and the misorientation angle χ. Qualitatively, the observed behavior of the anomalous phase reflects the interplay between several physical phenomena, such as the spin-filtering effect, the size quantization effect in ballistic junctions with a rather thin ferromagnet, and the tunability of the spin-triplet supercurrent by the relative orientation of the magnetic moments in the three ferromagnetic layers.

Finally, we analyze the features of the phase pumping phenomenon for different exchange fields h in the central ferromagnet. For this purpose we calculate the superconducting phase difference corresponding to both global and local minima of the free energy. The results presented in Fig. <ref> show the detailed evolution of the behavior of the system upon the increase in h. In particular, the plots for h = 0.01μ_f (shown by black lines in Fig. <ref>(a)) reveal two states of the system, one of which is stable within the whole range of misorientation angles. The other one is metastable and can be realized only within a certain χ range. So, in this case the mutual rotation of the magnetization in the F_1 and F_2 layers corresponds to a topologically trivial trajectory of the system in the parameter space (ψ,χ) (see, e.g., the black solid line in Fig. <ref>), with the superconducting phase difference trapped near zero. For h = 0.03μ_f (see the red lines in Fig. <ref>(a)) stable states appear both near 0 and near π within certain χ ranges. Thus, the topologically trivial evolution of the superconducting phase upon the magnetization rotation can be accompanied by thermally activated jumps of the spontaneous phase. The probability of the latter process obviously decreases with decreasing temperature. Note that for h/μ_f = 0.06, 0.09, and 0.11 (see Fig. <ref>(b) and the black lines in Fig. <ref>(c)) the system exhibits similar behavior, except that the superconducting phase difference can be trapped near π. The corresponding plots for h/μ_f = 0.12 (see the red lines in Fig. <ref>(c)) reveal a change in the topology of the phase evolution and the appearance of phase pumping: the gain of the Josephson phase over the pumping period switches to 2π. The presence of the jump of the ground-state phase difference and of the metastable states in the vicinity of the jump implies that the nontrivial phase evolution should be accompanied by hysteresis phenomena. The results for h/μ_f = 0.15 (shown by the red lines in Fig. <ref>(c)) illustrate the transition into the topologically trivial state. We observe another transition into the state with a nontrivial phase evolution at h ≈ 0.2μ_f (see the black lines in Fig. <ref>(e)). Figs. <ref>(f)-(h) demonstrate that a nontrivial gain of the superconducting phase over the rotation period persists upon further increase in h, while the χ regions corresponding to metastable states shrink and disappear for rather large h (see, e.g., the results for h/μ_f = 0.45 in Fig. <ref>(f)). Therefore, we find that for rather large exchange fields in the central ferromagnet the magnetization rotation corresponds to a continuous, topologically nontrivial trajectory of the system in the parameter space (ψ,χ) (see, e.g., the red solid line in Fig. <ref>) and continuous phase pumping, whereas for smaller fields the corresponding trajectories are discontinuous.
§ CONCLUSION

To sum up, we have studied the anomalous Josephson effect in S/F_1/F/F_2/S systems with non-coplanar magnetic moments in the ferromagnetic layers, for an arbitrary ratio between the exchange field in the F layer and the Fermi energy. As a first step, considering SF_1/half-metal/SF_2 Josephson junctions of atomic thickness within the Gor'kov formalism, we have demonstrated that the spontaneous ground-state phase difference ψ arising in such systems coincides with the angle between the projections of the magnetic moments in the SF_1 and SF_2 layers onto the plane perpendicular to the spin quantization axis in the half-metal. Interestingly, this feature of ψ junctions gives rise to Berry phase effects. Indeed, the rotation of the magnetic moment, e.g., in the SF_1 layer should produce an accumulation of the ground-state phase of 2π after a full precession period, which may result in the interesting possibility of magnetic flux pumping in a superconducting loop <cit.>. Also, using the exact solution of the Gor'kov equations for the Green functions, we have calculated the critical current in the current-phase relation and established its dependence on the direction of the magnetic moments in the SF_1 and SF_2 layers.

We then analyzed the behavior of the ground-state superconducting phase difference and the features of the phase pumping effect in S/F_1/F/F_2/S junctions upon the decrease in the exchange field in the F layer from values comparable with the Fermi energy (the half-metal limit) down to values of the order of the critical temperature of the superconducting phase transition. To do this we have performed calculations of the Josephson transport for clean S/F_1/F/F_2/S systems with a finite F layer thickness. For a half-metallic central layer it has been shown that the spontaneous phase is a linear function of the misorientation angle, ψ = π - χ. As the exchange field decreases, we have found that the considered Josephson systems are characterized by a nonlinear ψ(χ) dependence and can feature two competing local minima in the free energy of the junction versus the superconducting phase difference. This competition leads to a bistable behavior of the system, which manifests itself through jump-wise changes of the spontaneous phase upon variation of the misorientation angle, accompanied by hysteresis phenomena. We have demonstrated that a further decrease in h can induce several changes in the topology of the phase evolution: the gain of the Josephson phase over the rotation period switches between 0 and 2π. Also, we have found that the spontaneous phase can exhibit oscillating and/or jump-wise behavior as a function of the exchange field. The ψ junctions incorporated into superconducting circuits provide a possibility to switch between superconducting states with different vorticities without the application of an external magnetic field <cit.>. The direct coupling between the superconducting phase difference and the orientation of the magnetic moment opens a way to generate magnetic moment precession by a superconducting current in a ψ junction, similar to that predicted for the φ_0 junction <cit.>. We believe that the ψ junction may be a very interesting building block for superconducting spintronics.

There are now several pieces of experimental evidence of the Josephson current through half-metallic ferromagnets <cit.>.
Most probably, this long-ranged triplet supercurrent could be generated by the non-collinear surface magnetization at the interface with a half-metal. In such a case, our model of Sec. <ref> with the spin-active interfaces seems to be quite adequate for the description of the experiments <cit.>. The non-collinear surface magnetization should be the same at both ends of the junctions, and then we should have ψ=π, i.e., a π-junction realization. This may be directly verified by incorporating a Josephson junction with a half-metal into a closed superconducting loop (similar to the experiments <cit.>). Note that recently a superconducting quantum interference device was used to detect the transitions between 0 and π states in S/F/S junctions with a composite ferromagnetic layer <cit.>. This experimental technique and the fabricated setups provide a perfect playground to verify the effects predicted in the present paper, and we hope that our results will stimulate the corresponding activity.

§ ACKNOWLEDGEMENTS

The authors thank A. A. Bespalov for stimulating discussions. This work was supported by the Ministry of Science and Higher Education of the Russian Federation within the framework of state funding for the creation and development of the World-Class Research Center (WCRC) “Digital biodesign and personalized healthcare,” grant no. 075-15-2022-304, in the part related to the analysis of the system of atomically thin layers, and by the Russian Science Foundation (Grant No. 20-12-00053) in the part related to the analysis of the current-phase relations for the system with a finite thickness of the central ferromagnetic layer. The work of A. I. B. was supported by ANR SUPERFAST, the LIGHT S&T Graduate Program, and EU COST CA21144 Superqumap. The work of H. M. was supported by the National Natural Science Foundation of China (Grant No. 12174238), the Natural Science Basic Research Program of Shaanxi (Program No. 2020JM-597), and the Scientific Research Foundation of Shaanxi University of Technology (Grant No. SLGKY2006).

§ CRITICAL TEMPERATURE OF SF/HM SUPERLATTICE

Assuming t to be small, we perform the power expansion of (<ref>) up to the fourth order:

F̂^†/Δ_0 = X_+Î X_- + |T(k+q)|^2 X_+Î X_- Y_- X_- + |T(q)|^2 X_+Y_+ X_+Î X_- + |T(q)|^4 X_+Y_+ X_+ Y_+ X_+Î X_- + |T(k+q)|^4 X_+Î X_- Y_- X_- Y_- X_- + |T(q)|^2 |T(k+q)|^2 X_+Y_+ X_+Î X_- Y_- X_-,

where X_±=(iω_n ±Ĉ)^-1, Y_±=(iω_n ±P̂)^-1. Then we calculate the following expressions entering Eq. (<ref>):

∑_n=-∞^∞∫^+∞_-∞ d ξ∫^π_-πdq/2π X_+Î X_- = -∑_n>0 2πω_n/(ω_n^2+h^2) Î,

∑_n=-∞^∞∫^+∞_-∞ d ξ∫^π_-πdq/2π [|T(k+q)|^2 X_+Î X_- Y_- X_- + |T(q)|^2 X_+Y_+ X_+Î X_-] = ∑_n>0 4π t^2 ω_n (ω_n^2-2h^2)/(ω_n^2+h^2)^2 (4ω_n^2+h^2) Î,

∑_n=-∞^∞∫^+∞_-∞ d ξ∫^π_-πdq/2π [|T(q)|^4 X_+Y_+ X_+ Y_+ X_+Î X_- + |T(k+q)|^4 X_+Î X_- Y_- X_- Y_- X_-] = -3π t^4/8∑_n>0[ 4(h^4-9h^2 ω_n^2+2ω_n^4)/ω_n(ω_n^2+h^2)^3 (4ω_n^2+h^2) - 2h^2(h^6+7h^4 ω_n^2-46 h^2 ω_n^4 -16 ω_n^6)/ω_n^3(ω_n^2+h^2)^3 (4ω_n^2+h^2)^2 cos^2 θ]Î,

∑_n=-∞^∞∫^+∞_-∞ d ξ∫^π_-πdq/2π |T(q)|^2 |T(k+q)|^2 X_+Y_+ X_+Î X_- Y_- X_- = t^4(1+cos k/2)∑_n>0π(h^4+35h^2 ω_n^2+70ω_n^4)/ω_n(ω_n^2+h^2)^3 (4ω_n^2+h^2)^2 sin^2 θ Î.

We substitute the above expressions into the self-consistency equation (<ref>) and obtain:

ln(T_c/T_c0) = -2π T_c ∑_n>0[ h^2/ω_n(ω_n^2+h^2) + 2t^2ω_n (ω_n^2-2h^2)/(ω_n^2+h^2)^2 (4ω_n^2+h^2) - 3t^4(h^2-2ω_n^2)^2(4ω_n^4-9h^2 ω_n^2-h^4)/8ω_n^3(ω_n^2+h^2)^3 (4ω_n^2+h^2)^2 + t^4h^2(328 ω_n^6+278h^2 ω_n^4-17 h^4 ω_n^2 -3 h^6)/8ω_n^3(ω_n^2+h^2)^3 (4ω_n^2+h^2)^3 sin^2 θ + t^4h^2(h^4+35h^2 ω_n^2 +70 ω_n^4)cos k/4ω_n(ω_n^2+h^2)^3 (4ω_n^2+h^2)^2 sin^2 θ].
Finally, representing the critical temperature in the form T_c=T̃_c0(1+a t^2+b t^4), we calculate a and b using (<ref>) and obtain

a = -1/W∑_n>0 4πT̃_c0 ω̃_n(ω̃_n^2-2h^2)/(ω̃_n^2+h^2)^2 (4ω̃_n^2+h^2),

b = 2πT̃_c0/W∑_n>0[ 3(h^2-2ω̃_n^2)^2(4ω̃_n^4-9h^2 ω̃_n^2-h^4)/8ω̃_n^3(ω̃_n^2+h^2)^3 (4ω̃_n^2+h^2)^2 - h^2(328 ω̃_n^6+278h^2 ω̃_n^4-17 h^4 ω̃_n^2 -3 h^6)/8ω̃_n^3(ω̃_n^2+h^2)^3 (4ω̃_n^2+h^2)^3 sin^2 θ - h^2(h^4+35h^2 ω̃_n^2 +70 ω̃_n^4)cos k/4ω̃_n(ω̃_n^2+h^2)^3 (4ω̃_n^2+h^2)^2 sin^2 θ] + a^2/2W[1-∑_n>0 4πT̃_c0 h^2 ω̃_n(3ω̃_n^2+h^2)/(ω̃_n^2+h^2)^3] + a/W∑_n>0 16πT̃_c0 ω̃_n(2ω̃_n^6 -10 ω̃_n^4 h^2-2ω̃_n^2 h^4+h^6)/(ω̃_n^2+h^2)^3 (4ω̃_n^2+h^2)^2,

where ω̃_n=πT̃_c0(2n+1). Since h ≪ω̃_n, we have W>0, and the critical temperature is higher for the π-phase.

§ DETAILS OF NUMERICAL CALCULATIONS

Here we provide the details of the numerical simulations. Our starting point is the expression for the free energy (<ref>). The determinant of the matching condition matrix 𝒫 is defined by Eq. (<ref>) and the scattering matrices 𝒦_i (i = 1,2) are given in Eq. (<ref>). As a first step, we calculate the matrix product 𝒦̌_1(ω_n,𝐩_||)𝒦̌_2(ω_n,𝐩_||) for a given Matsubara frequency and in-plane momentum:

𝒦̌_1𝒦̌_2 = Q̌(L/2)(W̌_1 + Ǩ)^-1(W̌_1 -Ǩ)Q̌(L)(W̌_2 + Ǩ)^-1(W̌_2 - Ǩ)Q̌(L/2).

For the case θ_1 = θ_2 = π/2 considered in the main text we have 𝐧_jσ̂ = cos(χ_j)σ̂_x + sin(χ_j)σ̂_y, and the matrix products can be reduced to the form

(W̌_j + Ǩ)^-1(W̌_j - Ǩ) = Λ̌_j [(g + iZ_jσ̂_x)τ̌_z + iZ_0,j + Ǩ - fτ̌_y]^-1[(g + iZ_jσ̂_x)τ̌_z + iZ_0,j - Ǩ - fτ̌_y]Λ̌_j^†.

Here we introduced the unitary matrices Λ̌_j = e^i(φ_jτ̌_z - χ_jσ̂_z)/2; the values φ_1 and φ_2 stand for the phase of the superconducting order parameter in the left and right lead, respectively. We find the inverse matrix entering Eq. (<ref>) analytically and then derive the expression for the matrix product

Λ̌_j^†(W̌_j + Ǩ)^-1(W̌_j - Ǩ)Λ̌_j = [ 1 - 2𝒲̂_j^ T(iZ_0,j-g-iZ_jσ̂_x + K̂̅̂)K̂ 2if𝒲̂_j^ TK̂̅̂; -2if𝒲̂_jK̂ 1 - 2𝒲̂_j(iZ_0,j+g + iZ_jσ̂_x + K̂)K̂̅̂ ],

where 𝒲̂_j = -1/η_j[ w_j - iZ_0,js_↓↓ + gδ_↓↓-K_↓K̅_↓ -iZ_j(2g + δ_↑↓); -iZ_j(2g + δ_↓↑) w_j - iZ_0,js_↑↑+gδ_↑↑-K_↑K̅_↑ ], η_j = [w_j - iZ_0,js_↑↑+gδ_↑↑-K_↑K̅_↑][w_j-iZ_0,js_↓↓+gδ_↓↓-K_↓K̅_↓] + Z_j^2(2g + δ_↑↓)(2g + δ_↓↑), w_j = 1 + Z_0^2 - Z^2, s_σσ^' = K_σ + K̅_σ^', δ_σσ^' = K_σ-K̅_σ^', and σ,σ^' = ↑,↓ are the spin indices. The determinant of the solvability condition matrix 𝒫(iω_n,𝐩_||) [see Eq. (<ref>)] is calculated numerically using the above expressions (<ref>), (<ref>), and (<ref>). The next steps are the integration of log|𝒫(iω_n,𝐩_||)| with respect to the parallel momentum for a given Matsubara frequency and then the summation over the Matsubara frequencies. The dominant contribution to the momentum integrals stems from the states with |𝐩_|||<p_Fs, where p_Fs is the Fermi momentum in the normal-metal state of the superconducting lead. Note that some care is needed in the numerical evaluation of the wave numbers p_↑,↓(iω_n), p̅_↑,↓(iω_n) defined by Eq. (<ref>). In Eqs. (<ref>) and (<ref>) the summation is carried out over positive Matsubara frequencies. In our numerical calculations we choose the branch cut of the complex square root to lie along the negative real axis. For this particular choice one should couple the electronic states with the wave numbers p_σ in the ferromagnet to the hole states with the wave numbers -p̅_σ [see Eqs. (<ref>) and (<ref>)], because in the opposite case the kinematic phase factors of the hole excitations e^ip̅_σL quickly diverge upon the increase in the Matsubara frequency.
This problem is resolved by the replacement p̅_σ→ -p̅_σ in the numerical code. As a result, we obtain the free energy of the junction F (<ref>) as a function of the superconducting phase difference φ. Finally, we compute the derivative ∂ F/∂φ, which gives us the current-phase relation (<ref>).

AltshulerS1999 B. L. Altshuler and L. I. Glazman, Science 283, 1864 (1999). SwitkesS1999 M. Switkes, C. M. Marcus, K. Campman, and A. C. Gossard, Science 283, 1905 (1999). MoskaletsPRB2005 M. Moskalets and M. Büttiker, Phys. Rev. B 72, 035324 (2005). ButtikerJLTP2000 M. Büttiker, J. Low Temp. Phys. 118, 519 (2000). ThoulessPRB1983 D. J. Thouless, Phys. Rev. B 27, 6083 (1983). Nazarov_PRL V. Braude and Yu. V. Nazarov, Phys. Rev. Lett. 98, 077003 (2007). AstafievN2022 R. S. Shaikhaidarov, K. H. Kim, J. W. Dunstan, I. V. Antonov, S. Linzen, M. Ziegler, D. S. Goluber, V. N. Antonov, E. V. Il'ichev, and O. V. Astafiev, Nature 608, 45 (2022). Buzdin A. Buzdin, Phys. Rev. Lett. 101, 107005 (2008). Ustinov A. V. Ustinov, V. K. Kaplunenko, J. Appl. Phys. 94, 5405 (2003). Bauer A. Bauer, J. Bentner, M. Aprili, M. L. Della-Rocca, M. Reinwald, W. Wegscheider, C. Strunk, Phys. Rev. Lett. 92, 217001 (2004). Buzdin_2005 A. Buzdin, Phys. Rev. B 72, 100501(R) (2005). Feofanov A. K. Feofanov, V. A. Oboznov, V. V. Bol'ginov, J. Lisenfeld, S. Poletto, V. V. Ryazanov, A. N. Rossolenko, M. Khabipov, D. Balashov, A. B. Zorin, P. N. Dmitriev, V. P. Koshelets, A. V. Ustinov, Nat. Phys. 6, 593 (2010). Ortlepp T. Ortlepp, Ariando, O. Mielke, C. J. M. Verwijs, K. F. K. Foo, H. Rogalla, F. H. Uhlmann, H. Hilgenkamp, Science 312, 1495 (2006). Geshkenbein V. B. Geshkenbein and A. I. Larkin, Pis'ma Zh. Eksp. Teor. Fiz. 43, 306 (1986) [JETP Lett. 43, 395 (1986)]. Yip S. Yip, Phys. Rev. B 52, 3087 (1995). Sigrist M. Sigrist, Prog. Theor. Phys. 99, 899 (1998). Kashiwaya S. Kashiwaya and Y. Tanaka, Rep. Prog. Phys. 63, 1641 (2000). Tanaka_TI Y. Tanaka, T. Yokoyama, N. Nagaosa, Phys. Rev. Lett. 103, 107002 (2009). Houzet_TI F. Dolcini, M. Houzet, J. S. Meyer, Phys. Rev. B 92, 035428 (2015). Aubin A. Assouline, C. Feuillet-Palma, N. Bergeal, T. Zhang, A. Mottaghizadeh, A. Zimmers, E. Lhuillier, M. Eddrie, P. Atkinson, M. Aprili, H. Aubin, Nat. Commun. 10, 126 (2019). Reynoso A. A. Reynoso, G. Usaj, C. A. Balseiro, D. Feinberg, and M. Avignon, Phys. Rev. Lett. 101, 107001 (2008). Mironov_SOC S. V. Mironov, A. S. Mel'nikov, A. I. Buzdin, Phys. Rev. Lett. 114, 227001 (2015). Bergeret_SOC F. Konschelle, I. V. Tokatly, F. S. Bergeret, Phys. Rev. B 92, 125443 (2015). Martin A. Zazunov, R. Egger, T. Jonckheere, T. Martin, Phys. Rev. Lett. 103, 147004 (2009). Martin_2 L. Dell'Anna, A. Zazunov, R. Egger, T. Martin, Phys. Rev. B 75, 085305 (2007). Brunetti A. Brunetti, A. Zazunov, A. Kundu, R. Egger, Phys. Rev. B 88, 144515 (2013). Nazarov_PRB T. Yokoyama, M. Eto, and Yu. V. Nazarov, Phys. Rev. B 89, 195407 (2014). Campagnano G. Campagnano, P. Lucignano, D. Giuliano, A. Tagliacozzo, J. Phys. Condens. Matter 27, 205301 (2015). Kouwenhoven D. B. Szombati, S. Nadj-Perge, D. Car, S. R. Plissard, E. P. A. M. Bakkers, and L. P. Kouwenhoven, Nature Phys. 12, 568 (2016). Nesterov K. N. Nesterov, M. Houzet, J. S. Meyer, Phys. Rev. B 93, 174502 (2016). Ying Z.-J. Ying, M. Cuoco, P. Gentile, and C. Ortix, 2017 16th International Superconductive Electronics Conference (ISEC), Naples, Italy (2017). Spanslatt C. Spånslätt, Phys. Rev. B 98, 054508 (2018). Kutlin A. G. Kutlin and A. S. Mel'nikov, Phys. Rev. B 101, 045418 (2020). Kopasov A. A. Kopasov, A. G. Kutlin, and A. S. Mel'nikov, Phys. Rev. B 103, 144520 (2021).
GurlichPRB2010 C. Gürlich, S. Scharinger, M. Weides, H. Kohlstedt, R. G. Mints, E. Goldobin, D. Koelle, and R. Kleiner, Phys. Rev. B 81, 094502 (2010). SickingerPRL2012 H. Sickinger, A. Lipman, M. Weides, R. G. Mints, H. Kohlstedt, D. Koelle, R. Kleiner, and E. Goldobin, Phys. Rev. Lett. 109, 107002 (2012). UstinovAPL2002 A. V. Ustinov, Appl. Phys. Lett. 80, 3153 (2002). GaberPRB2005 T. Gaber, E. Goldobin, A. Sterck, R. Kleiner, D. Koelle, M. Siegel, and M. Neuhaus, Phys. Rev. B 72, 054522 (2005). GoldobinPRB2016 E. Goldobin, S. Mironov, A. Buzdin, R. G. Mints, D. Koelle, and R. Kleiner, Phys. Rev. B 93, 134514 (2016). Pickett W. E. Pickett and J. S. Moodera, Phys. Today 54(5), 39 (2001). Coey J. M. D. Coey and M. Venkatesan, J. Appl. Phys. 91, 8345 (2002). Keizer R. S. Keizer, T. B. Goennenwein, T. M. Klapwijk, G. Miao, G. Xiao, and A. Gupta, Nature (London) 439, 825 (2006). Anwar M. S. Anwar, F. Czeschka, M. Hesselberth, M. Porcu, and J. Aarts, Phys. Rev. B 82, 100501(R) (2010). Eschrig_1 M. Eschrig, T. Löfwander, T. Champel, J. C. Cuevas, J. Kopu, and G. Schön, J. Low Temp. Phys. 147, 457 (2007). Eschrig_2 M. Eschrig and T. Löfwander, Nat. Phys. 4, 138 (2008). Eschrig_3 R. Grein, M. Eschrig, G. Metalidis, and G. Schön, Phys. Rev. Lett. 102, 227005 (2009). Eschrig_4 M. Eschrig, A. Cottet, W. Belzig, and J. Linder, New J. Phys. 17, 083037 (2015). Mironov_HM S. Mironov, A. Buzdin, Phys. Rev. B 92, 184506 (2015). Meng_Flux S. Mironov, H. Meng, A. Buzdin, Appl. Phys. Lett. 116, 162601 (2020). Devizorova Zh. Devizorova and S. Mironov, Phys. Rev. B 95, 144514 (2017). Margaris I. Margaris, V. Paltoglou, and N. Flyzanis, J. Phys.: Condens. Matter 22, 445701 (2010). Halterman K. Halterman, M. Alidoust, R. Smith, and S. Starr, Phys. Rev. B 105, 104508 (2022). KalenkovPRB2009 M. S. Kalenkov, A. V. Galaktionov, and A. D. Zaikin, Phys. Rev. B 79, 014521 (2009). BeenakkerPRL1991 C. W. J. Beenakker, Phys. Rev. Lett. 67, 3836 (1991). Devizorova_2 Zh. Devizorova, S. Mironov, A. I. Buzdin, Phys. Rev. B 103, 224510 (2021). PGdeGennes P. G. de Gennes, Superconductivity of Metals and Alloys, Benjamin, New York, 1966 (Chap. 5). Buz A. I. Buzdin, Rev. Mod. Phys. 77, 935 (2005). Shukrinov Y. M. Shukrinov, I. R. Rahmonov, K. Sengupta, A. Buzdin, Appl. Phys. Lett. 110, 182407 (2017). Visani C. Visani, Z. Sefrioui, J. Tornos, C. Leon, J. Briatico, M. Bibes, A. Barthélémy, J. Santamaría, J. E. Villegas, Nat. Phys. 8, 539 (2012). Sanchez-Manzano D. Sanchez-Manzano, S. Mesoraca, F. A. Cuellar, M. Cabero, V. Rouco, G. Orfila, X. Palermo, A. Balan, L. Marcano, A. Sander, M. Rocci, J. Garcia-Barriocanal, F. Gallego, J. Tornos, A. Rivera, F. Mompean, M. Garcia-Hernandez, J. M. Gonzalez-Calbet, C. Leon, S. Valencia, C. Feuillet-Palma, N. Bergeal, A. I. Buzdin, J. Lesueur, Javier E. Villegas, J. Santamaria, Nat. Mater. 21, 188 (2022). Jungxiang Y. Jungxiang, R. Fermin, K. Lahabi, J. Aarts, arXiv:2303.13922 (2023). Birge_1 J. A. Glick, V. Aguilar, A. B. Gougam, B. M. Niedzielski, E. C. Gingrich, R. Loloee, W. P. Pratt Jr., N. O. Birge, Sci. Adv. 4, eaat9457 (2018). Birge_2 V. Aguilar, D. Korucu, J. A. Glick, R. Loloee, W. P. Pratt, Jr., N. O. Birge, Phys. Rev. B 102, 024518 (2020). | http://arxiv.org/abs/2311.15577v1 | {
"authors": [
"A. A. Kopasov",
"Zh. Devizorova",
"H. Meng",
"S. V. Mironov",
"A. S. Mel'nikov",
"A. I. Buzdin"
],
"categories": [
"cond-mat.supr-con",
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.supr-con",
"published": "20231127071049",
"title": "Adiabatic phase pumping in S/F/S hybrids with non-coplanar magnetization"
} |
Understanding the entire history of the ionization state of the intergalactic medium (IGM) is at the frontier of astrophysics and cosmology. A promising method to achieve this is by extracting the damping wing signal from the neutral IGM. As hundreds of redshift z>6 quasars are observed, we anticipate determining the detailed time evolution of the ionization fraction with unprecedented fidelity. However, traditional approaches to parameter inference are not sufficiently accurate. We assess the performance of a simulation-based inference (SBI) method to infer the neutral fraction of the universe from quasar spectra. The SBI method adeptly exploits the shape information of the damping wing, enabling precise estimates of the neutral fraction and the wing position w_p. Importantly, the SBI framework successfully breaks the degeneracy between these two parameters, offering unbiased estimates of both. This makes SBI superior to the traditional method using a pseudo-likelihood function. We anticipate that SBI will be essential to robustly determine the ionization history of the Universe through joint inference from the hundreds of high-z spectra we will observe.

§ INTRODUCTION

Understanding the evolution of the neutral fraction of the universe is a central topic in the study of cosmic reionization. Up to now, the most successful and well-known constraint has come from analyzing the Thomson scattering signal in the Cosmic Microwave Background (CMB) <cit.>. However, due to its integrated effect on the CMB, constraining the detailed time evolution of the cosmic neutral fraction is challenging. Another approach is to analyze the effect of the neutral intergalactic medium (IGM) on spectra from quasars living directly in the epoch of reionization (Fig. <ref>). With hundreds of high-redshift quasars discovered and followed up spectroscopically <cit.>, this method could potentially surpass the CMB method in the near future, if the information can be extracted reliably. Neutral gas in the IGM forms a damping wing on quasar spectra. Collectively, many neutral patches in a quasar sightline create a characteristic damping wing for a given neutral fraction, with a small scatter arising from cosmic variance <cit.>. If we could unambiguously measure the entire damping wing, constraining the neutral fraction would be straightforward. However, the residual neutral hydrogen inside the quasar proximity zone (i.e., the region where the gas is ionized by the quasar) can cause significant absorption, rendering the damping wing profile hard to recover. Nevertheless, such structure (i.e., the Lyα forest) inside the proximity zone is dictated by the cosmic large-scale structure, which is straightforward to model <cit.>. In Fig. <ref>, we illustrate the two main components of a proximity zone spectrum. The left panel shows a snapshot of a radiative transfer cosmological simulation with a quasar in the center <cit.>. The quasar generates a large number of photons, ionizing the gas (blue region) in its surroundings. The gas outside the quasar ionization front is not impacted, thus remaining neutral (yellow region). If the universe is already ionized, there will be no neutral patches.
In this scenario, the quasar spectrum has only one component: the Lyα forest in the proximity zone (grey line in the right panel of Fig. <ref>). However, when reionization is not complete, there will be a damping wing component (orange envelope) that arises outside the quasar proximity zone and suppresses all the flux at smaller wavelengths. The position of the damping wing is determined by the quasar's past activity, but the shape of the damping wing is determined by the global neutral fraction <cit.>. Measuring this envelope accurately is therefore key to measuring the neutral fraction. Due to the complexity of the Lyα forest, it is challenging to explicitly write down the likelihood function of the flux at each wavelength (i.e., pixel). Previous studies <cit.> thus used a "pseudo-likelihood" function to compress all of this information into one number, leading to degraded and/or biased parameter constraints. Simulation-based inference (SBI) has emerged as a rapidly advancing technique specifically aimed at solving inference problems with intractable likelihoods. We thus set out to assess whether SBI using conditional normalizing flows can improve the inference compared to the traditional pseudo-likelihood method.

§ DATA

To accurately model the details in the proximity zone, we need hydrodynamical cosmological simulations and radiative transfer. However, simulating spectra with radiative transfer on the fly can be expensive. To reduce the simulation time, we develop a simulator that makes composite spectra using two pre-calculated datasets: proximity-zone Lyα forests and damping wings. We create a set of proximity-zone Lyα forests by post-processing the Cosmic Reionization On Computers (CROC) simulations <cit.>. In each of the six 40 simulation boxes, we first select the 20 most massive halos. We then draw 50 sightlines with random orientations centered on each halo to sample the cosmological environments around quasars, and use 1D radiative transfer modelling <cit.> to create a large set of Lyα spectra ranging from -4000 km/s to 8000 km/s. To model the damping wing components, we use the simulated damping wing database created by <cit.>. This database samples 10,000 sightlines traveling through different neutral patches in a universe with a given global neutral fraction, capturing realistic scatter in the damping wing profile. The database is evenly spaced across 21 global neutral fractions ranging from 0.995 to 0.008. For each neutral fraction and each starting point of the damping wing w_p, we randomly draw one Lyα forest and one damping wing with the closest neutral fraction. We then move the wing to w_p and multiply the flux by the transmitted proportion of the wing on the red side and by zero on the blue side. We show an example of a simulated composite spectrum in the left panel of Fig. <ref>. Real spectra usually have noise of varying levels. As a result, previous studies have often binned the data as a function of wavelength to reduce the effective noise per binned pixel <cit.>. In order to make a realistic comparison between SBI and the traditional pseudo-likelihood approach described in <cit.>, we also bin our simulated quasar spectra to a similar resolution (500 km/s) over the entire spectral range in our subsequent tests (see the blue line in the left panel of Fig. <ref> as an example). In the Supplementary Material, we show how the results change with different noise levels and numbers of spectra used.
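As a rough illustration of the compositing procedure described above, the following Python sketch assembles one mock binned spectrum; the library arrays forests and wings and the common velocity grid v_grid are illustrative assumptions, and the interpolation used to shift the wing onset to w_p is one simple choice rather than the exact implementation.

import numpy as np

def make_composite_spectrum(forests, wings, v_grid, w_p, rng,
                            bin_width=500.0, snr=None):
    flux = forests[rng.integers(len(forests))].copy()  # random Lya forest skewer
    wing = wings[rng.integers(len(wings))]             # wing at closest neutral fraction
    red = v_grid >= w_p
    # shift the wing so that it starts at w_p, then attenuate the red side
    flux[red] *= np.interp(v_grid[red] - w_p, v_grid - v_grid[0], wing)
    flux[~red] = 0.0                                   # blue side fully absorbed
    if snr is not None:
        flux = flux + rng.normal(0.0, 1.0 / snr, flux.size)  # continuum-level noise
    per_bin = int(round(bin_width / (v_grid[1] - v_grid[0])))
    nbins = flux.size // per_bin                       # bin to ~500 km/s pixels
    return flux[: nbins * per_bin].reshape(nbins, per_bin).mean(axis=1)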
§ METHODS

§.§ Simulation-Based Inference

We use the default Neural Posterior Estimation (NPE) <cit.> framework as implemented by the sbi package (https://www.mackelab.org/sbi/); we use the "SNPE_C" class with a single round and without any additional embedding network. The NPE aims to find the neural network (a density estimator) ϕ which maximizes Σ_i=0^i=Nln P(θ_i|x_i, ϕ), where (θ_i, x_i) are independent (parameter, simulation) pairs generated from the prior. For the density estimator, we adopt a Masked Autoregressive Flow (MAF) <cit.> trained on 100,000 simulated spectra. We run a small grid search over two hyper-parameters: the number of hidden features n_hf = (5, 10, 20) and the number of transformations n_t = (5, 10, 20). We find that all the combinations return sensible results with clearly decoupled parameter constraints. We therefore opt to use a MAF with the simplest hyper-parameter combination (n_hf, n_t) = (5,5), which converged after 277 epochs. All computation was done on the SciNet NIAGARA supercomputer using one CPU node with 40 cores; the total training of our final model took 73 minutes of CPU time.

§.§ Pseudo-likelihood Approach

In <cit.>, a pseudo-likelihood function L̃(x|θ) was proposed to enable Bayesian inference of the neutral fraction from quasar spectra. The pseudo-likelihood function is calculated by simply multiplying the likelihoods for each wavelength bin, assuming they are all independent: L̃(x|θ) = ∏_i=1^n_binsL̃_i(x_i|θ). Similar to <cit.>, we create a uniform grid of parameters with 21 bins in the neutral fraction and 16 bins in w_p from 0 to 8000 km/s. We use the simulator described in Sec. <ref> to generate 1000 spectra for each parameter pair. We then use the distribution of flux values to estimate the flux probability distribution function (PDF) in each bin in order to evaluate the likelihood L̃_i(x_i|θ). The flux PDF is estimated using a histogram with 50 bins, although we find that slightly changing the number of bins or using kernel density estimation with Gaussian kernels does not have a significant impact on the final results.
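For concreteness, a minimal sketch of the single-round NPE setup described in the first subsection might look as follows; theta (the 100,000 × 2 parameter array) and x (the corresponding binned mock spectra) are assumed to be precomputed, the prior bounds are illustrative, and the import paths reflect recent releases of the sbi package.

import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform
from sbi.utils.get_nn_models import posterior_nn

# uniform prior over (neutral fraction, w_p [km/s]); bounds illustrative
prior = BoxUniform(low=torch.tensor([0.0, 0.0]),
                   high=torch.tensor([1.0, 8000.0]))

# MAF density estimator with the (n_hf, n_t) = (5, 5) configuration
flow = posterior_nn(model="maf", hidden_features=5, num_transforms=5)

inference = SNPE(prior=prior, density_estimator=flow)
trained = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(trained)
samples = posterior.sample((10_000,), x=x_obs)  # posterior for one observed spectrum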
§ RESULTS

We use a uniform prior for both the neutral fraction and w_p. In the middle and right panels of Fig. <ref>, we show the SBI and pseudo-likelihood distributions for the example spectrum in the left panel. Compared with the SBI result in the middle panel, the pseudo-likelihood method shows stronger, more extended degeneracies between the two parameters and lower resolution due to the coarser input parameter grid, its reliance on a single summary statistic L̃, and its ignorance of potential intra-pixel covariances. This degeneracy in the pseudo-likelihood method can be especially severe when w_p,true is large: for w_p > w_p,true, many bins in Eq. <ref> have non-zero L̃_i(x_i|θ), which can be compensated by a stronger damping wing (a smaller neutral fraction). On the other hand, SBI can extract the shape information of the damping wing from correlated pixels to break the degeneracy between the two parameters. We also performed a probability calibration check <cit.> using the ranks of the estimated quantiles for 1000 test spectra to check whether there are biases across both sets of constraints (Fig. <ref>). We find that the rank distribution is consistent with a uniform distribution for both parameters: comparing the rank distribution with the uniform distribution using the Kolmogorov-Smirnov (K-S) test, we obtain p-values of 0.16 and 0.56 for the neutral fraction and w_p, respectively. This implies that there is no strong reason to believe that the collection of SBI PDFs does not accurately capture the scatter in true values given our estimated uncertainties. From left to right, the blue histograms in Fig. <ref> show the rank distribution, bias distribution and (two-sided) 68% interval of the neutral-fraction constraint from the SBI method, respectively. For a single spectrum, we find the root mean square (RMS) of the bias in the neutral fraction to be 0.06, which is smaller than the RMS of the 68% scatter (0.12). Moreover, the typical scatter is approximately equal to the typical cosmic variance of the damping wing <cit.>. This demonstrates that SBI not only produces an unbiased inference of the neutral fraction, but that these constraints are close to optimal. On the other hand, the traditional pseudo-likelihood method introduces significant bias. We show the rank distribution, bias and scatter of the neutral-fraction inference from the traditional method as the faint orange histograms in Fig. <ref>. The "U" shape clearly demonstrates that the traditional method under-predicts the uncertainties in the inference. The RMS of the bias is 0.11, larger than for the SBI method. The RMS of the scatter of the pseudo-likelihood method (0.124) is similar to that of SBI (0.120), but the distribution is noticeably skewed towards both smaller and larger values compared to SBI.
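A compact version of the rank-based calibration check used above can be written as follows; posterior is the trained sbi posterior object from the Methods sketch, dim selects the parameter being checked, and uniformly distributed ranks indicate calibrated posteriors.

import numpy as np
from scipy.stats import kstest

def sbc_ranks(posterior, thetas_true, xs, n_draws=1000, dim=0):
    ranks = np.array([
        int((posterior.sample((n_draws,), x=x).numpy()[:, dim] < t[dim]).sum())
        for t, x in zip(thetas_true, xs)
    ])
    # calibrated posteriors give uniform ranks; quantify with a K-S test
    pval = kstest(ranks / n_draws, "uniform").pvalue
    return ranks, pval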
§ DISCUSSION

We demonstrated that using SBI, we can more accurately constrain the neutral fraction of the universe using individual quasar spectra. SBI performs systematically better than traditional pseudo-likelihood methods, returning PDFs that are more accurate and carry more realistic uncertainties over our parameters of interest. For an individual spectrum, the traditional method shows larger bias than the SBI method. Such bias could compromise our results when inferring from multiple spectra. Up to now, there are hundreds of quasars detected at z>6 with high-quality spectra, and one key step forward is to build an unbiased methodology to jointly constrain the global neutral fraction. From this perspective, the SBI method is clearly more desirable. SBI can also be straightforwardly implemented in such a multiple-spectra case by simply concatenating multiple spectra as input data (see the Supplementary Material).

§ BROADER IMPACT

This is a pioneering study on using SBI with state-of-the-art radiative transfer cosmological simulations. The results show the promising potential of using quasar spectra to measure the detailed time evolution of the IGM ionization state, which is crucial for both cosmology and astrophysics. In the future, we will incorporate various realistic factors, especially continuum uncertainties <cit.>, into the SBI framework and develop end-to-end machinery to uncover the reionization history. The methodology presented in this study can also be extended to detect and characterize different absorption systems, such as Lyman Limit Systems or Damped Lyα Absorbers, in the Lyα forest in quasar spectra <cit.>, which is crucial for understanding structure formation. This study has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), funding references #DIS-2022-568580 and #RGPIN-2023-04849. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto.

§ SUPPLEMENTARY MATERIAL

§.§ Spectra with Noise

Real spectra always have some level of noise. Currently, the sample with the highest quality has an average signal-to-noise ratio (SNR) of ∼ 20 per 10 km/s <cit.>. We thus test how adding such noise impacts the results. In each spectrum, we first add uncorrelated Gaussian noise with zero mean and a standard deviation of 5% of the continuum level, and then bin it to 500 km/s per bin. We again train a MAF with 5 hidden features and 5 transformations and test the results on a separate set of 1000 spectra with the same noise level. The simulation-based calibration shows that the rank distributions are consistent with a uniform distribution for both the neutral fraction (K-S test p-value = 0.13) and w_p (p-value = 0.41), indicating that the uncertainties inferred from the SBI accurately capture the scatter in true values. The bias for each individual spectrum is increased slightly compared with the noiseless case: the RMS increases from 0.06 to 0.09 for the neutral fraction, and from 505 km/s to 775 km/s for w_p, while the RMS scatter increases from 0.12 to 0.18 for the neutral fraction, and from 720 km/s to 1120 km/s for w_p (see also Fig. <ref>).

§.§ Scaling with Number of Spectra

Using multiple spectra at the same redshift could tighten the constraint on the neutral fraction. This is especially promising as we will have more reionization-epoch quasars observed in the near future <cit.>. Here we test how the inference scales with the number of spectra at different noise levels. Because we are primarily interested in the neutral fraction, here we train the network to infer only that single parameter. The input data for multiple-spectra inference is constructed by simply concatenating the spectra together into a single 1D array. We test a few realistic cases: SNR=5 for 1, 5 and 25 spectra, and SNR=20 for 1 and 5 spectra. We use the same hyper-parameters (n_hf=5, n_t=5) and find that, except for the single-spectrum SNR=5 case, all the others return an unbiased inference based on the K-S test of the rank distribution against the uniform distribution: the single SNR=5 spectrum case has p-value=0.008 while all the others have p-value>0.05. For the single SNR=5 spectrum case, we further test other hyper-parameter combinations with n_hf varying over (3,5,10,20) and n_t varying over (2,3,5,10,20), but the highest p-value returned is still only 0.03 (from the combination n_hf=3, n_t=2). This indicates that using a single spectrum with only SNR=5, it is very challenging to obtain an accurate constraint on the neutral fraction. In Fig. <ref> we show the test results for each case, all trained with n_hf=5, n_t=5. For SNR=5 spectra, the RMS bias improves from 0.13, to 0.069, to 0.044, and the RMS scatter improves from 0.25, to 0.14, to 0.081 as we increase the number of spectra from 1, to 5, to 25, respectively. On the other hand, for SNR=20 spectra, the RMS bias improves from 0.089 to 0.045 while the RMS scatter improves from 0.18 to 0.088 as we increase the number of spectra from 1 to 5. | http://arxiv.org/abs/2311.16238v1 | {
"authors": [
"Huanqing Chen",
"Joshua Speagle",
"Keir K. Rogers"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20231127190004",
"title": "Learning Reionization History from Quasars with Simulation-Based Inference"
} |
^1Nordita, KTH Royal Institute of Technology and Stockholm University, Hannes Alfvéns väg 12, 106 91 Stockholm, Sweden ^2 Department of Physics, University of Connecticut, Storrs, Connecticut 06269, USA ^3 Departamento de Física, Universidad Técnica Federico Santa María, Casilla 110, Valparaíso, Chile

In recent decades, the Altland-Zirnbauer (AZ) table has proven incredibly powerful in delineating constraints for the topological classification of a given band insulator based on dimension and (nonspatial) symmetry class, and has also been expanded by considering additional crystalline symmetries. Nevertheless, realizing a three-dimensional (3D), time-reversal symmetric (class AII) topological insulator (TI) in the absence of reflection symmetries, with a classification beyond the ℤ_2 paradigm, remains an open problem. In this work we present a general procedure for constructing such systems within the framework of projected topological branes (PTBs). In particular, a 3D projected brane from a "parent" four-dimensional topological insulator exhibits a ℤ topological classification, corroborated through its response to an inserted bulk monopole loop. More generally, PTBs have been demonstrated to be an effective route to performing dimensional reduction and embedding the topology of a (d+1)-dimensional "parent" Hamiltonian in d dimensions, yielding lower-dimensional topological phases beyond the AZ classification without additional symmetries. Our findings should be relevant for metamaterial platforms, such as photonic and phononic crystals, topolectric circuits, and designer systems.

Three-dimensional ℤ topological insulators without reflection symmetry Vladimir Juričić^3,1 January 14, 2024 ========================================================================

Introduction: Despite the rapid developments in our understanding and diagnosis of topological phases of matter in recent decades, the classification rules provided by the Altland-Zirnbauer (AZ) table have remained steadfast when only non-spatial symmetries, such as time-reversal and particle-hole, are considered <cit.>. In this respect, a few known exceptions to the AZ table exist; perhaps the most famous is the Hopf insulator <cit.>, which achieves classification beyond the AZ table in three dimensions. This rather unexpected exception to the tenfold periodic table of topological insulating phases has naturally led to questions on what, if any, other exceptions may exist, and has prompted a reexamination of this paradigm. Moreover, rapid advances in metamaterials and engineered (designer) systems <cit.> for realizing the exotic physics of such exceptional systems in an experimental setting provide additional motivation to answer this question. In this work, we show that projected topological branes (PTBs), recently introduced in Ref. <cit.>, provide a robust pathway to realizing such exceptional systems. In particular, we demonstrate that the topological classification of the (d+1)-dimensional system is preserved in the dimensional reduction procedure, realizing a d-dimensional topological brane. This procedure thereby offers a route to traverse the AZ table, as shown in Fig. <ref>, and realize topological states in lower dimensions which would otherwise not be permitted. For clarity, we focus on time-reversal symmetric (class AII) insulators in three dimensions, as these represent a majority of real, spinful, condensed-matter systems.
In the presence of reflection symmetry, they are characterized by an integer topological invariant, therefore accommodating a ℤ classification <cit.>. However, for general time-reversal insulators lacking additional symmetry, the AZ table limits the topological classification to only a parity, ℤ_2, invariant. By contrast, in four spatial dimensions class AII insulators admit a ℤ invariant. By forming 3D PTBs from four-dimensional topological insulators in class AII, we can thus provide a general principle for achieving insulators with a ℤ topological invariant in three dimensions, thereby going beyond the AZ table without introducing symmetry constraints. In order to prove that we have preserved the topological nature of the parent system, we utilize known real-space probes of bulk topology, namely electromagnetic vortices, monopoles, and monopole loops for parent Hamiltonians in d=2,3,4, respectively <cit.>. In particular, the spectrum of the prototypical, four-dimensional (4D) time-reversal symmetric model with a monopole loop inserted in the fourth dimension yields a number of induced mid-gap modes, N_0=2|𝒞_2|, that is in correspondence with the second Chern number, 𝒞_2, characterizing the 4D topological state. Remarkably, this number of mid-gap modes remains invariant when the 3D PTB is constructed out of the 4D parent state, therefore demonstrating the realization of ℤ-classified time-reversal invariant topological insulators in d=3. Projected topological branes: To construct the PTB, we employ the method based on the Schur complement <cit.>. The Hamiltonian for the projected topological brane (PTB) takes the form <cit.>, H_PTB=H_11-H_12H_22^-1H_21, where H_11 (H_22) corresponds to the Hamiltonian on (outside) the lower-dimensional brane, while H_12=H_21^† is the coupling between the two subsystems of the d-dimensional parent system, with the Hamiltonian for the parent system on a d-dimensional lattice of the form, H=[ H_11 H_12; H_21 H_22 ]. While the form of Eq. (<ref>) is insensitive to the choice of dimension, we detail the generalization of this algorithm for constructing (d-1)-dimensional branes from d-dimensional parent cubic lattices. Generalization of this procedure for d-dimensional parent systems relies on specifying a (d-1)-dimensional hyperplane, the equation of which takes the form, ∑_j=1^dα_jx_j=β, where α_j and β are real numbers. A lattice site, i, projected onto the (d-1)-dimensional PTB obeys the relation, |∑_j=1^dα_j x_j,i+β|/√(∑_j=1^dα_j^2)<1/√(2)a, where a is the lattice constant. As an illustrative example, we choose a parent cubic system in d=3 dimensions of size 10 × 10 × 10. We then select the parameters for our plane, α_j=1,2,3=1 and β=1/100. The lattice sites which make up the PTB are colored in red in the middle panel of Fig. <ref>. In order to visualize the PTB, we map each lattice site in red onto the projected plane at the point nearest to the lattice site. The result is shown in the right panel of Fig. <ref>, demonstrating that the PTB is a hexagonal system, as expected given the orientation of the plane, perpendicular to the (111) axis of the cube. A minimal numerical sketch of this construction is given below.
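The sketch illustrates the two ingredients of the construction in Python/NumPy, assuming for simplicity one degree of freedom per lattice site; for the multi-orbital models considered next, the site index sets would simply be expanded to include the orbital and spin labels.

import numpy as np

def brane_sites(L, alpha, beta, a=1.0):
    """Sites of an L^d cubic lattice within a/sqrt(2) of the hyperplane sum_j alpha_j x_j = beta."""
    d = len(alpha)
    coords = np.stack(np.meshgrid(*[np.arange(L)] * d, indexing="ij"),
                      axis=-1).reshape(-1, d)
    dist = np.abs(coords @ np.asarray(alpha, float) + beta) / np.linalg.norm(alpha)
    return np.where(dist < a / np.sqrt(2))[0], coords

def h_ptb(H, brane):
    """Schur complement H_11 - H_12 H_22^{-1} H_21 onto the brane sites."""
    idx = np.asarray(brane)
    rest = np.setdiff1d(np.arange(H.shape[0]), idx)
    H11 = H[np.ix_(idx, idx)]
    H12 = H[np.ix_(idx, rest)]
    H22 = H[np.ix_(rest, rest)]
    return H11 - H12 @ np.linalg.solve(H22, H12.conj().T)  # H21 = H12^dagger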
Having established a definite procedure for the construction of projected branes in d dimensions, we now consider a 4D parent tight-binding model to demonstrate topology beyond the AZ table. Bulk topology of parent model: We explicitly consider a 4D generalization of the Bernevig-Hughes-Zhang topological insulator <cit.> on a cubic lattice. The Bloch Hamiltonian takes the form, H(𝐤)= ∑_j=1^5d_j(𝐤)Γ_j. Employing the basis, Γ_j=1,2,3=τ_1⊗σ_j, Γ_4= τ_2⊗σ_0, Γ_5= τ_3⊗σ_0, where τ_0,1,2,3 (σ_0,1,2,3) are the 2 × 2 identity matrix and the three Pauli matrices, respectively, acting on the orbital (spin) degrees of freedom, the vector 𝐝(𝐤) reads, d_j=1,2,3,4(𝐤)=t_psin k_j, d_5(𝐤)= t_s(Δ- η_1∑_j=3^4cos k_j-η_2∑_j=1^2cos k_j -η_3cos k_1cos k_2). We have set the lattice constant to unity for simplicity; t_p,s have units of energy and Δ, η_1,2,3 are dimensionless, real, non-thermal band parameters used for driving topological phase transitions. Time-reversal symmetry 𝒯 is generated by 𝒯^†H^*(-𝐤)𝒯=H(𝐤), where 𝒯=iτ_0⊗σ_2, such that 𝒯^2=-1, placing the Hamiltonian in class AII. As such, it supports a ℤ topological classification via calculation of the second Chern number, 𝒞_2 <cit.>. The second Chern number is efficiently computed for the model at hand as, 𝒞_2= 3/8π^2 ∫ d^4k ϵ^abcded̂_a∂d̂_b/∂ k_x∂d̂_c/∂ k_y∂d̂_d/∂ k_z∂d̂_e/∂ k_w. We will consider three main parameter sets: (A) Δ =3.5, (η_1,η_2,η_3)=(1,1,0); (B) Δ =0.5, (η_1,η_2,η_3)=(1,1,0); and (C) Δ =0.5, (η_1,η_2,η_3)=(1,0,1). These phases support 𝒞_2=-1, +3, and +4, respectively. Bulk topology of PTBs: Utilizing the parent 4D Bloch Hamiltonian detailed in Eq. (<ref>), we now construct the 3D PTB following the previously detailed procedure, fixing α_j=1,2,3,4=1 and β=0.01. The lattice sites projected onto the 3D hyperplane are shown in Fig. <ref>. In order to diagnose the bulk topology of the PTB, we utilize real-space probes in terms of defects. In particular, singular magnetic probes have proven to be ideal for detecting topology in real space through the emergence of bound states. Famously, in two dimensions, the insertion of magnetic vortices has been used to determine the spin-Chern number (𝒞_s), with the number of mid-gap vortex-bound modes (N_VBM) following the relationship N_VBM=2𝒞_s <cit.>. Furthermore, in three dimensions, magnetic monopoles, through the number of monopole-bound mid-gap modes, were employed to diagnose the bulk invariant <cit.>. In 4D systems, a natural extension is the monopole loop, whereby a unit-strength magnetic monopole is placed in each plane parameterized by the conserved momentum in the fourth dimension, k_4. Notice that the high-symmetry values of k_4, namely k_4=0,±π, represent distinct 3D topological insulators. Importantly, the emergent 3D topological insulator defined at these planes supports a chiral (unitary particle-hole) symmetry defined as S^-1HS=-H, where S=Γ_4. This emergent chiral symmetry at the high-symmetry locations k_4=0,±π, in turn, pins to zero energy the surface and monopole bound states induced by the monopole loop. Finally, even though the chiral symmetry can be broken, as allowed for time-reversal symmetric insulators (class AII), the number of bound states remains invariant, consistent with the ℤ classification; they are simply shifted to finite energy values. We insert the monopole loop into the bulk under open boundary conditions along the x,y,z directions and periodic boundary conditions along the w direction, employing the singular, north-pole gauge 𝐀(𝐫_i)= (g/r_i)tan(θ_i/2) ϕ̂_i=g(-y_i x̂ + x_i ŷ)/(r_i(r_i+z_i)), where i indexes lattice sites and we fix g=1 to specifically consider the case of a unit-strength monopole loop. The results of inserting the monopole loop into the parent Hamiltonian for phases (A), (B), and (C) are shown in Figs. <ref>-<ref>, detailing that the number of mid-gap zero modes in each phase precisely follows the relationship, N_0= 2|𝒞_2|.
Having established this relationship for the parent system, we perform the projection to construct the topological brane. We carry out this process both with and without the monopole loop inserted, maintaining boundary conditions identical to those utilized for examining the four-dimensional system. Solving for the spectra of the projected brane in each phase, we find the results shown in Figs. <ref>-<ref>. Remarkably, the number of mid-gap zero modes in each phase under insertion of the monopole loop remains invariant, thereby demonstrating that the topology of the parent 4D topological state is inherited by the 3D PTB. Summary and outlook: In this work we have demonstrated that PTBs offer a route to perform dimensional reduction of a parent Hamiltonian in (d+1) dimensions to a d-dimensional one, while preserving the bulk topological invariant. Importantly, this allows for the construction of lattice tight-binding models for which the bulk topology goes beyond the tenfold classification scheme based on the AZ table. PTBs thus fall under the category of symmetry non-indicative phases, of which other known examples include the Hopf insulator. As the projected branes can be constructed through lattice tight-binding models, opportunities exist to realize these systems in engineered metamaterial systems, including photonic, phononic and topolectric systems. Furthermore, designer quantum materials offer another route to experimentally test our proposal. This important direction for physical realization of the exotic properties will be pursued in a subsequent work. Finally, higher-dimensional crystalline dislocations, being related to translations in extra dimensions, should provide a refined classification of the 3D ℤ projected branes, which we plan to study in the future. Acknowledgments: We are thankful to Bitan Roy for useful discussions and critical reading of the manuscript. V. J. acknowledges the support of the Swedish Research Council Grant No. VR 2019-04735 and Fondecyt (Chile) Grant No. 1230933. Nordita is supported in part by NordForsk. | http://arxiv.org/abs/2311.16092v1 | {
"authors": [
"Alexander C. Tyner",
"Vladimir Juričić"
],
"categories": [
"cond-mat.mes-hall",
"cond-mat.str-el"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231127185802",
"title": "Three-dimensional $\\mathbb{Z}$ topological insulators without reflection symmetry"
} |
Towards Vision Enhancing LLMs: Empowering Multimodal Knowledge Storage and Sharing in LLMs. Yunxin Li^1, Baotian Hu^1, Wei Wang^2, Xiaochun Cao^2, Min Zhang^1. ^1School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen; ^2School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University. Correspondence: Baotian Hu ([email protected]). Machine Learning, ICML.

Recent advancements in multimodal large language models (MLLMs) have achieved significant multimodal generation capabilities, akin to GPT-4. These models predominantly map visual information into the language representation space, leveraging the vast knowledge and powerful text generation abilities of LLMs to produce multimodal instruction-following responses. We could term this method LLMs for Vision because it employs LLMs for visual-language understanding, yet we observe that these MLLMs neglect the potential of harnessing visual knowledge to enhance the overall capabilities of LLMs, which could be regarded as Vision Enhancing LLMs. In this paper, we propose an approach called MKS2, aimed at enhancing LLMs through empowering Multimodal Knowledge Storage and Sharing in LLMs. Specifically, we introduce the Modular Visual Memory, a component integrated into the internal blocks of LLMs, designed to store open-world visual information efficiently. Additionally, we present a soft Mixtures-of-Multimodal Experts architecture in LLMs to invoke multimodal knowledge collaboration during generation. Our comprehensive experiments demonstrate that MKS2 substantially augments the reasoning capabilities of LLMs in contexts necessitating physical or commonsense knowledge. It also delivers competitive results on multimodal benchmarks.

§ INTRODUCTION

Recent advances <cit.> in multimodal large language models (MLLMs) have opened the eyes of text-only large language models (LLMs, "blind" to visual information), allowing them to understand and process multimodal information, thereby promoting the further development of LLM-centered Artificial General Intelligence (AGI). In this line of work, exemplified by MiniGPT-4 <cit.>, LLaVA <cit.>, and BLIP-2 <cit.>, information outside the language modality is usually aligned into the language space, and then the rich knowledge stored in the LLM and its powerful text generation capability are used to understand various multimodal information and generate responses corresponding to human instructions. These works took a significant step towards constructing a multimodal large visual-language model similar to GPT-4 <cit.>, contributing a large amount of multimodal instruction-following data <cit.> and efficient multimodal fine-tuning techniques <cit.>. These approaches, which concentrate on multimodal information understanding, could be regarded as "LLMs for Vision" because they mainly utilize LLMs for processing visual-language problems. However, current MLLMs, as well as pretrained and supervised fine-tuned (SFT) LLMs, overlook enhancing the ability of LLMs to tap into visual knowledge. Ideally, just as the human brain retains and utilizes visual information, MLLMs or LLMs should be equipped to store external visual information. In situations that require visual common sense, even in the absence of direct visual input, LLMs should be able to access this stored visual-language knowledge for combined reasoning. This goes beyond merely processing multimodal input, as "LLMs for Vision" is depicted in Figure <ref>. Hence, we present the term "Vision Enhancing LLMs" to describe this desired capability of LLMs.
Through this enhancement, large models would store and effectively draw upon multimodal knowledge, and their knowledge base and reasoning capabilities would be enhanced. To this end, we present MKS2, an innovative approach designed for empowering Multimodal Knowledge Storage and Sharing within LLMs, consisting of two core stages: Visual Information Storage and Multimodal Knowledge Collaboration. In the first stage, we introduce the Modular Visual Memory (MVM) in the internal transformer blocks of LLMs to store visual information. Specifically, inspired by previous works <cit.> focused on measuring the parametric knowledge of pretrained language models and observing the knowledge-storage role of feed-forward neural networks (FNNs), we incorporate a two-layer FNN into each LLM block to build a lightweight visual memory. Subsequently, we employ a collection of image-text pairs to exclusively train and update the MVM using two learning approaches: image-to-text generation and text-to-image retrieval. In both, soft image and token embeddings pass through the visual memory following the attention calculations. These strategies empower LLMs to comprehend, translate, and store visual information via a linguistic framework. For multimodal knowledge collaboration, we introduce a soft Mixtures-of-Multimodal Experts (MoMEs) architecture. This framework leverages specialized experts, including the Modular Visual Memory (visual expert) and the original MLPs (textual expert) in LLMs, during the generation process. To achieve this efficiently, we freeze all parameters of the LLM, apply Low-Rank Adaptation (LoRA <cit.>) to each expert module, and facilitate information integration across LLM blocks through a token-level soft mixing approach. By doing so, the overall model becomes adept at accommodating both multimodal and text-modality information, enabling seamless collaboration across various input forms. During training, we collect a diverse set of instruction data, containing text-only instructions and image-text multimodal instruction-following data, to ensure the effectiveness of MoMEs in handling multimodal as well as text-only tasks. To validate the effectiveness of our approach, we evaluate MKS2 on seven natural language processing (NLP) benchmarks and six image-text understanding datasets. Extensive experimental results indicate that MKS2 achieves superior performance on NLP tasks requiring physical or visual-world knowledge; e.g., MKS2-Llama-2 significantly exceeds Llama-2-chat, as shown in Figure <ref>. It also achieves competitive performance in image-text understanding scenarios compared to previous MLLMs. Our main contributions can be summarized as follows: * We introduce MKS2, a vision-enhanced learning framework for LLMs, designed for effective storage and sharing of multimodal knowledge. This framework efficiently handles both multimodal and text-only inputs. * MKS2 demonstrates superior outcomes in knowledge-intensive tasks over traditional SFT LLMs and LLMs employing Reinforcement Learning from Human Feedback (RLHF). * Ablation studies validate the efficacy of the mixtures-of-multimodal-experts architecture that incorporates a visual knowledge expert. This architecture distinctly improves the performance of LLMs beyond the capacities of conventional supervised fine-tuned LLMs. * Our experiments indicate that multimodal instruction-following data further enhances LLMs' performance on natural language reasoning tasks that require extensive commonsense.
§ PRELIMINARIES

We first review the supervised fine-tuning approach and the recently proposed multimodal instruction-following tuning method for LLMs.

§.§ Supervised Fine-tuning

A purely pretrained LLM is fine-tuned on high-quality labeled datasets using token-level supervision to produce a Supervised Fine-Tuned model, dubbed an SFT-LLM. Common methods use instruction data automatically constructed with GPT-4 <cit.> and manually annotated high-quality data from downstream tasks <cit.> to fine-tune pure LLMs. To reduce training costs, recent works present efficient instruction-tuning approaches, e.g., LoRA <cit.>, QLoRA <cit.>, etc. These SFT-LLMs are capable of generating human-like responses for various text-only instructions, having a profound impact on all walks of life.

§.§ Multimodal Instruction-Following Tuning

Compared to traditional visual-language models such as Oscar <cit.>, Flamingo <cit.>, OFA <cit.>, etc., the multimodal instruction-following tuning approach extends text-only instruction tuning in LLMs to multiple modalities. These MLLMs, which apply LLMs as the multimodal information processor, achieve impressive zero-shot performance on unseen tasks. Generally, as in the traditional approach depicted in Figure <ref>, a frozen visual encoder (e.g., the visual encoder of CLIP) is used to obtain the sequence representation of an image, and a visual mapping network (VMN; a linear projection layer or the Q-Former from BLIP-2) projects the image encoding into soft image embeddings in the language space of LLMs. Then, an efficient fine-tuning technique can be used to allow LLMs to process multimodal information, thereby turning LLMs into MLLMs. Formally, a multimodal image-text instruction sample can be expressed in the following triplet form, i.e., (I,T,R), where I, T, R represent the input image, the text description (about human demands or image-related premises), and the ground-truth response, respectively. During training, the constructed MLLM is forced to predict the next token of the response via the autoregressive objective, which can be presented as:

ℒ(θ) = -∑_i=1^N log P(R_i | I, T, R_<i ; θ),

where N is the length of the response and θ refers to the trainable parameters in the whole framework. In summary, we find that these two approaches ignore introducing visual knowledge to improve the overall capabilities of LLMs for processing text-only tasks.

§ METHODOLOGY

In the following subsections, we present the two stages of MKS2 in detail: Visual Information Storage and Multimodal Knowledge Collaboration.

§.§ Visual Information Storage

To realize visual information storage in LLMs, we propose injecting a Modular Visual Memory (MVM) into the internal blocks of LLMs and forcing the MVM to memorize open-world visual information via language-centered learning strategies.

Modular Visual Memory (MVM). This module is a two-layer feed-forward network (FFN) injected into each transformer block of LLMs. As shown in the top part of Figure <ref>, the input image I is first projected into a soft image embedding 𝐡_I via the pretrained visual encoder, the Q-Former from BLIP-2, and a learnable linear layer. Taking the first block as an example, the calculation process can be presented as follows:

𝐡_s^T = Self-Attention(𝐡_I),
𝐡_F^T = 𝐡_s^T + MVM(layernorm(𝐡_s^T)),

where Self-Attention is the original attention calculation in LLMs. We only insert the MVM inside the original LLM and do not change any other structures.
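To make the block structure above concrete, the following minimal PyTorch sketch (our illustration, not the authors' released code) shows a transformer block with the MVM inserted after self-attention, following 𝐡_F = 𝐡_s + MVM(layernorm(𝐡_s)). The middle dimension follows the paper's 1/4 hidden-size rule for Llama-2-7b; the class, argument names, and head count are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MVMBlock(nn.Module):
    """Transformer block with a Modular Visual Memory (MVM) branch."""
    def __init__(self, d_model: int = 4096, d_mvm: int = 1024, n_heads: int = 32):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # MVM: a lightweight two-layer feed-forward visual memory
        self.mvm = nn.Sequential(
            nn.Linear(d_model, d_mvm), nn.GELU(), nn.Linear(d_mvm, d_model)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (batch, seq, d_model)
        h_s, _ = self.attn(h, h, h)                       # self-attention output h_s
        return h_s + self.mvm(self.norm(h_s))             # residual MVM path
```

In stage-1 training, only the `mvm` parameters would be updated while the pretrained attention weights stay frozen, matching the training recipe described below.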
All hidden states pass through the MVM after the output 𝐡_s^T of Self-Attention is obtained, and we set the overall size of the visual memory by controlling the hidden dimension of the FFN.

Language-Centered Learning Strategies. As we consider LLMs as analogs of the human brain, we embark on creating a visual storage memory inside LLMs. Our ultimate goal is to empower LLMs with the capability to comprehend a given image and conjure related visual scenarios based on textual input, akin to human cognition. To this end, we adopt two learning objectives to train the MVM with a large amount of image-text pairs. As shown in Figure <ref>, we let the LLM generate the language description of an image, which resembles understanding and translating an image the way the brain does. Additionally, given a sentence with some visual objects, the LLM should attach to the sentence-related image, which resembles imagination. Suppose that the short description (caption) of an input image I is D; the description generation loss is

ℒ_c = -(1/N) ∑_i=1^N l_c(IMG_i, D_i),

where N is the number of image-text pairs in a batch and l_c refers to the cross-entropy loss. For retrieving the related image, we use the output hidden state h_e of the end token </s> of the input caption to match the image embedding. Concretely, we employ a learnable linear layer to project it into the same dimension as the global image encoding obtained by the visual encoder. We then calculate the cosine similarity between them and minimize the InfoNCE loss for text-to-image (t2i) retrieval over a batch of N samples. The negatives are the other, irrelevant images in the batch. Hence, the total language-centered learning loss is ℒ_Stage 1 = ℒ_c + ℒ_t2i, with

ℒ_t2i = -(1/N) ∑_i=1^N ( log exp(sim(D_i, IMG_i)/τ) / ∑_j=1^N exp(sim(D_i, IMG_j)/τ) ),

where τ is a learnable temperature parameter. During training, we freeze all pretrained parameters of the LLM and only update the MVM. In addition to retrieving images to achieve visual information association, joint training with image generation techniques is also an alternative approach.

§.§ Multimodal Knowledge Collaboration

After realizing visual information storage inside LLMs, we need to consider how to realize multimodal knowledge collaboration during generation. Regarding the pretrained MVM and the MLP in LLMs as visual and textual experts, respectively, we propose a soft mixtures-of-multimodal experts (MoMEs) approach to achieve multimodal knowledge utilization at the token level.

Mixtures-of-Multimodal Experts (MoMEs). To speed up the training process, as shown in the bottom part of Figure <ref>, we freeze the MVM and the other parameters of LLMs, applying Low-Rank Adaptation <cit.> (LoRA) to the two modality experts: MVM and MLP. We denote the input tokens of one sequence fed to MoMEs by 𝐗 ∈ ℝ^m × d, where m is the number of tokens and d is their dimension. The computation for the visual and textual knowledge experts is given by

𝐡_VE = LoRA-MVM(𝐗), 𝐡_TE = LoRA-MLP(𝐗),
LoRA(W_0) := W_0 𝐗 + ΔW 𝐗 = W_0 𝐗 + BA𝐗,

where B, A are the learnable parameters added to each pretrained weight of the visual and textual experts. LoRA-MVM and LoRA-MLP represent the original knowledge experts equipped with the additional LoRA calculation. By doing so, the training process is efficient because the full parameters of the experts are not updated. Each MoE layer uses expert functions (shown in the equations above) applied to individual tokens, namely {f_i: ℝ^d → ℝ^d}_1:2. Each expert processes p slots, and each slot has a corresponding d-dimensional vector of parameters.
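As a minimal illustration of the LoRA expression above (W_0𝐗 + BA𝐗 with W_0 frozen), the following sketch wraps a pretrained linear layer of an expert; the class name, initialization scale, and shapes are assumptions for illustration, with rank r = 8 as used for instruction tuning below.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen pretrained linear layer with a rank-r LoRA update."""
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # freeze W_0
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # B = 0 at init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W_0 x + B A x: only A and B receive gradients
        return self.base(x) + x @ self.A.T @ self.B.T
```

Initializing B to zero makes the wrapped expert behave exactly like the pretrained one at the start of training, a common LoRA design choice.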
As indicated by S→M in Figure <ref>, the token-level combination of the expert outputs can be presented as

S = Softmax(w_s 𝐗 + b_s),
𝐡_M = S_1 𝐡_VE + S_2 𝐡_TE,
𝐡_o = 𝐡_s^T + 𝐡_M,

where S ∈ ℝ^m×2 is normalized along the final dimension by the softmax. The output of each block in LLMs is denoted by 𝐡_o.

§.§ Training

In the first stage, we use about 2.3M image-text pairs from CC3M <cit.>, COCO Captioning <cit.>, and Flickr30k <cit.>. To achieve multimodal knowledge collaboration, as shown in Figure <ref>, we use text-only and image-text instruction-following data to train the overall architecture. The added Modular Visual Memory and the LLM are frozen during training. We use widely-used instruction data including: high-quality natural language processing tasks from Flan-T5 <cit.>, complex instruction-finetuning data from WizardLLMs <cit.>, and the multimodal instruction data LLaVAR <cit.>, which in total consists of 1.5M text-only and 166k image-text instruction tuning samples.

§ EXPERIMENTS

§.§ Datasets

Natural Language Processing Benchmarks. We use seven text-only downstream datasets to comprehensively evaluate MKS2, which consist of physical-world-knowledge-relevant datasets and the basic ability assessment benchmark MMLU <cit.>. We use multiple-choice question answering tasks that can benefit from visual knowledge: PIQA <cit.>, which requires physical commonsense reasoning; Commonsense QA (CSQA) <cit.>, for evaluating the commonsense reasoning capability of models; OpenBook QA (OBQA) <cit.>, which requires multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension; RiddleSense (RS) <cit.>, for complex understanding of figurative language and counterfactual reasoning skills; Social IQA <cit.>, which focuses on physical or taxonomic knowledge for testing social commonsense intelligence; and StrategyQA <cit.>, where the required reasoning steps must be inferred using a strategy.

Image-Text Understanding Benchmarks. To evaluate the multimodal capability of our proposed model, we introduce six classical Visual Question Answering (VQA) datasets: VQAv2 <cit.>, OK-VQA <cit.>, ST-VQA <cit.>, OCR-VQA <cit.>, TextVQA <cit.>, and DocVQA <cit.>. VQAv2 is a classic open-world VQA dataset containing more than 1 million samples. Scene Text Visual Question Answering (ST-VQA) consists of 31,000+ questions across 23,000+ images collected from various public datasets. The OCR-VQA dataset includes more than 1 million question-answer pairs that cover over 207,000 book cover images. The TextVQA dataset consists of over 45,000 questions related to text on more than 28,000 images selected from specific categories of the OpenImages dataset. DocVQA is a comprehensive dataset comprising 12,767 document images of diverse types and content, accompanied by over 50,000 questions and answers. For datasets containing far more than 5000 image-question pairs, we select the first 5000 pairs for testing, similar to <cit.>.

§.§ Comparing Models

The compared models mainly comprise three types built on the open-source large language model Llama-2: SFT Llama-2, RLHF-tuned Llama-2, and recently proposed MLLMs. To verify the new vision-enhancing supervised fine-tuning method MKS2, we present the text-only instruction-tuned variant Llama-2-7b-INST-LoRA and Vicuna-Llama-2 <cit.>, where we adopt text instruction data identical to our approach and set r = 16 for LoRA to train Llama-2-7b-INST. Hence, the number of trainable parameters of Llama-2-7b-INST-LoRA is similar to that of the proposed MKS2-Llama-2-7B, about 14M.
RLHF-tuned models are language models that have been trained using a combination of human feedback and reinforcement learning techniques, achieving better performance in understanding human instructions and generating high-quality responses. We mainly compare with the Llama-2-7b-chat and Llama-2-13b-chat variants released by Meta. Additionally, to evaluate the multimodal information processing capability of MKS2, we also introduce recently proposed MLLMs as baselines. Flamingo <cit.> and OFA <cit.> are traditionally pretrained visual-language models, which have seen a large amount of image-text pairs. BLIP-2 <cit.> is a widely-used visual-language model, achieving remarkable zero-shot performance on downstream image-text understanding tasks. MiniGPT-4 <cit.>, FROMAGe <cit.>, mPLUG-Owl <cit.>, LLaVAR <cit.>, and InstructBLIP <cit.> are multimodal instruction-tuned MLLMs, trained with large amounts of image-text instruction-following data.

§.§ Implementation Details

We take the pretrained Llama-2 <cit.> as the backbone of MKS2 and run all models with the Adam optimizer <cit.> on 4 A100-80G GPUs in a Python environment. All models are trained and tested with the Bfloat16 floating-point format. The dimension of the middle layer of the inserted visual memory module is 1/4 of the hidden state size of the LLM. For Llama-2-7b, the total number of MVM parameters is about 410 million. During visual information storage, we take the frozen visual encoder and Q-Former from BLIP-2-FlanT5-xxl to obtain the image encoding; thus the length of the soft image embedding is 32. Additionally, we set the initial learning rate to 1e-4 and train the model for about 2 epochs with 5000 warm-up steps. The batch size is set to 32 with four-step gradient accumulation per GPU device. For instruction-following learning, we set the batch size and the LoRA rank r to 3 and 8, respectively, and the maximum input length is set to 1024. To tag the position of the image embedding, we introduce two learnable tokens <img-start> and <img-end>. Similar to Llama-2-chat, we add [INST] and [/INST] at the start and end of the input text instruction, e.g., "[INST] Please write a short story about cat and dog [/INST]". During generation, we set the beam sizes to 1 and 4 for text-only and VQA tasks, respectively.

§.§ Overall Performance

Performance of vision-enhancing LLMs. We present zero-shot model performance in Table <ref>, aiming to evaluate the instruction-understanding and open-world problem-solving abilities of LLMs. We observe that the proposed MKS2-Llama-2-7B/13B achieves the best performance on almost all evaluation datasets, in particular substantially surpassing Llama-2-7b/13b-chat. Compared to the powerful Llama-2-7b-INST-LoRA of the same magnitude, MKS2-Llama-2-7b gains about 8% on CommonsenseQA, 14.5% on OpenBookQA, 16% on PIQA, and 8.6% on RS, respectively. Hence, MKS2 is capable of markedly improving the overall performance on text-only tasks requiring physical world knowledge. Compared to Vicuna-Llama-2 models, for which all Llama-2 parameters are updated, our approach stands out by requiring fine-tuning of only a small fraction of parameters (< 0.2% of the LLM parameters) while still achieving superior performance on several tasks.

Competitive performance on multimodal benchmarks. We also present the model's zero-shot performance on the VQA datasets in Table <ref>.
To obtain suitable and robust image embeddings, we further fine-tune the visual mapping network for one epoch and freeze all other parameters, which does not affect the text-only performance of LLMs. We can see that MKS2-Llama-2-7b achieves competitive performance on open-world and scene-text VQA datasets. It is noteworthy that there is no discernible performance degradation when comparing the performance of MKS2 on multimodal tasks to that of the original large language model without visual enhancement. This implies that the addition of visual enhancement in LLMs does not lead to a loss of text-related knowledge while the model serves as an LLM for vision. Moreover, the incorporation of text-only data proves to be a valuable strategy for enhancing the model's proficiency in answering open visual questions. While mixed instruction data leads to improvements in open-world question answering, it appears to have a detrimental effect on scene text recognition, possibly due to shifts in the training data distribution. Further investigation into the fine-tuning process and data distributions can help optimize MKS2 performance across a wider range of tasks involving both text and images.

§.§ Ablation Study and Analysis

Effects of MKS2. Comparing the experimental results of MKS2 w/o Multimodal-SFT, MKS2 w/o Multimodal-SFT & MoMEs, and Llama-2-7b-INST-LoRA in Table <ref>, we observe that the incorporation of visual information into the MVM positively impacts the model's performance on commonsense reasoning tasks. This demonstrates that MoMEs can leverage stored visual information to improve understanding and reasoning abilities in various contexts. Additionally, the performance of the MKS2 variants w/o Text-SFT and w/o Text-SFT + MoMEs on multimodal tasks further emphasizes the model's ability to access textual knowledge without significant compromise. The integration of visual information alongside textual data does not hinder the model's capacity to extract and utilize textual knowledge effectively. This suggests that the core text-related capabilities of LLMs remain robust when dealing with multimodal inputs.

Impact of multimodal instruction data on MKS2 performance. Our experimental results in Table <ref> highlight the impact of multimodal instruction data on MKS2's performance. While it effectively enhances the accuracy in addressing questions related to physical world knowledge, it does not provide a substantial boost in solving intricate and complex problems that demand advanced reasoning and strategic thinking. These findings emphasize the importance of tailoring data and approaches to specific task requirements when leveraging multimodal data in large language models for optimal performance. Further research may uncover strategies to bridge this gap for complex problem-solving tasks.

Could text-only instruction data be effective for enhancing multimodal performance while building MLLMs? In the ablation experiments discussed in Table <ref>, text-only instruction data was found to enhance the performance of pretrained LLMs on various open multimodal problems, particularly open-ended questions that demand a fusion of textual and visual information. However, the introduction of text-only instruction data may lead to a trade-off, as it could potentially diminish the LLM's ability to answer visual questions involving scene text recognition.
These findings underscore the importance of striking a careful balance when incorporating additional data modalities into LLMs, with careful consideration of task requirements and data characteristics. Such careful integration ensures that overall model performance is optimized without unintended consequences for its ability to handle diverse multimodal challenges. Consequently, optimizing LLMs for multimodal tasks requires a nuanced approach tailored to the specific problem domain.

Impact of visual memory size and data scale. We introduce more image-text pairs from LAION-400M <cit.> to analyze the impact of the visual memory size and the data scale; the experimental results are shown in Table <ref>. Our experimental results reveal that enlarging the pretraining image-text data leads to improvements in the MKS2 model's performance across various downstream tasks. Additionally, we observe that expanding the visual storage module of the MKS2 model not only enhances performance but also makes this enhancement more stable as the size of the image-text data increases. To summarize, these findings suggest a twofold strategy for optimizing MKS2-based LLMs during the supervised fine-tuning phase: increasing the size of the image-text data and selecting a proportionately larger visual storage size. This approach is crucial for achieving more consistent and stable performance improvements.

§.§ Case Study

We present some cases in Figure <ref> to further show the overall capability of MKS2-Llama-2. We can see that the proposed model achieves better performance when answering commonsense questions requiring physical knowledge. In addition, we also observe that the multimodal understanding capability of MKS2-Llama-2 is powerful; for example, it recognizes the humor of a cat with a lion's head. Moreover, it can employ relevant knowledge to enrich the response based on visual clues, e.g., the short story around the content shown in the second image.

§ RELATED WORKS

Visual knowledge enhanced methods. There is a long line of work on utilizing explicit visual information to improve the imaginative representation of language, thus promoting the diverse generation capabilities of LLMs. In particular, <cit.> leverage visual knowledge in NLP tasks, developing multiple cross-modal enhancement methods to improve the representation capability of pretrained language models. Some works <cit.> proposed to retrieve images corresponding to texts from an image corpus and use visual knowledge to improve the performance on downstream tasks such as text completion <cit.>, story generation <cit.>, and concept-to-text generation <cit.>. Recently, some researchers <cit.> proposed to utilize powerful text-to-image techniques to obtain imagination representations of language and infuse them into the language model via prefix tuning. In this paper, we present visual information storage in LLMs and achieve visual knowledge enhancement of LLMs without explicitly inputting images to language models.

LLMs for vision. Recent works <cit.> towards multimodal LLMs focus on utilizing the extensive knowledge and language generation capabilities of LLMs to solve multimodal tasks <cit.>, especially visual understanding and reasoning. First, these works usually map the visual information obtained by a pretrained visual encoder into the representation space of LLMs, through a learnable linear projection layer <cit.>, an MLP, or a Q-Former <cit.>.
This stage is usually called feature alignment, and only a few hundred thousand samples may be needed to do a good job <cit.>. Afterwards, the initial MLLM is tuned via multimodal instruction-following data <cit.>. At this stage, the LLM and the projection layer are often tuned together, using only multimodal instruction data. The commonly used large language model is the SFT LLM, and the tuning approach adopts the widely used lightweight LoRA <cit.>. These works, however, rarely consider using visual knowledge to enhance the pure text processing capabilities of LLMs, thereby building a more robust LLM or MLLM.

§ CONCLUSION

In this paper, we present a new approach, MKS2, that allows LLMs to memorize and employ visual information, achieving multimodal knowledge storage and collaboration in LLMs. MKS2 consists of a Modular Visual Memory and soft mixtures-of-multimodal experts, which are used to store visual information and realize multimodal knowledge collaboration, respectively. We conduct extensive experiments on many NLP and VQA tasks, and the experimental results show that MKS2 is capable of enhancing the reasoning capability of LLMs while also being able to solve multimodal problems.
"authors": [
"Yunxin Li",
"Baotian Hu",
"Wei Wang",
"Xiaochun Cao",
"Min Zhang"
],
"categories": [
"cs.CL",
"cs.AI",
"cs.CV"
],
"primary_category": "cs.CL",
"published": "20231127122920",
"title": "Towards Vision Enhancing LLMs: Empowering Multimodal Knowledge Storage and Sharing in LLMs"
} |
Near-resonant nuclear spin detection with high-frequency mechanical resonators

Javier del Pino
Institute for Theoretical Physics, ETH Zürich, 8093 Zürich, Switzerland
[Corresponding author: [email protected]]

January 14, 2024

Mechanical resonators operating in the high-frequency regime have become a versatile platform for fundamental and applied quantum research. Their exceptional properties, such as low mass and high quality factor, make them also very appealing for force sensing experiments. In this Letter, we propose a method for detecting and ultimately controlling nuclear spins by directly coupling them to high-frequency resonators via a magnetic field gradient. Dynamical backaction between the sensor and an ensemble of nuclear spins produces a shift in the sensor's resonance frequency, which can be measured to probe the spin ensemble. Based on analytical as well as numerical results, we predict that the method will allow nanoscale magnetic resonance imaging with a range of realistic devices. At the same time, this interaction paves the way for new manipulation techniques, similar to those employed in cavity optomechanics, enriching both the sensor's and the spin ensemble's features.

Magnetic resonance force microscopy (MRFM) is a method to achieve nanoscale magnetic resonance imaging (MRI) <cit.>. It relies on a mechanical sensor interacting via a magnetic field gradient with an ensemble of nuclear spins. The interaction creates signatures in the resonator oscillation that can be used to detect nuclear spins with high spatial resolution. Previous milestones include the imaging of virus particles with 5-10 nm resolution <cit.>, Fourier-transform nanoscale MRI <cit.>, nuclear spin detection with a one-dimensional resolution below 1 nm <cit.>, and magnetic resonance diffraction with subangstrom precision <cit.>. In all of these experiments, the excellent force sensitivity of the mechanical resonator is a key factor. The MRFM community is thus constantly searching for improved force sensors to reach new regimes of spin-mechanics interaction.

Over the last decade, new classes of mechanical resonators made from strained materials showed promise as force sensors <cit.>. Today, these resonators come in a large variety of designs, including trampolines <cit.>, membranes <cit.>, strings <cit.>, polygons <cit.>, hierarchical structures <cit.>, and spider webs <cit.>. Some of these resonators are massive enough to be seen by the naked eye, but their low dissipation nevertheless makes them excellent sensors, potentially on par with carbon nanotubes <cit.> and nanowires <cit.>. Using these strained resonators for nuclear spin detection requires novel scanning force geometries <cit.> and transduction protocols <cit.>. At the same time, new experimental opportunities become feasible, thanks to the ability of high-Q mechanical systems to strongly interact with a wide array of quantum systems, such as nuclear spins, artificial atoms, and photonic resonators <cit.>.

In this work, we propose a protocol for nuclear spin detection based on the near-resonant interaction between a mechanical resonator and an ensemble of nuclear spins. In contrast to earlier ideas <cit.>, our method is most efficient when the resonator is detuned from the spin Larmor frequency. The coupling then causes a shift of the mechanical resonance frequency which can be measured to quantify the spin states. Importantly, in contrast to the driven amplitude predicted in Ref.
<cit.>, the response time to the frequency shift is not limited by the ringdown time. Our proposal suits the typical frequency range of strained resonators (1-50 MHz), but can also be applied to other state-of-the-art systems, such as graphene or carbon nanotube resonators (≳ 100 MHz). Our approach offers a radically simplified experimental apparatus, as it circumvents the need for spin inversion pulses and the related hardware. We also show that for realistic parameters, our method can attain single nuclear spin sensitivity, which would be a major milestone on the way towards spin-based quantum devices. Finally, it opens up a new field of spin manipulation via mechanical driving, drawing parallels to techniques used in cavity optomechanics <cit.>.

We consider a nuclear spin ensemble placed on a mechanical resonator, see Fig. <ref>(a). The ensemble comprises N spins that interact with a normal mode of the resonator. The composite system can be described with the Zeeman-like Hamiltonian

ℋ = -ħγ Î·B + ℋ_m,   (1)

where ħ is the reduced Planck constant, γ the nuclear spin's gyromagnetic ratio, and B the magnetic field at the spins' location. The spin ensemble operator Î has the three components Î_i = ∑_k=1^N σ̂_i,k/2 with the spin-1/2 Pauli matrices σ̂_i,k for spin k = {1,...,N}, and i ∈ [x, y, z]. We describe a single vibrational mode as a driven harmonic oscillator displacing along the z axis, governed by the Hamiltonian

ℋ_m = p̂²/2m + (1/2) m ω_0² q̂² - F_0 q̂ cos(ω_d t),   (2)

where q̂ is the z-position operator of the resonator at the position of the spin ensemble, p̂ is the corresponding momentum operator, m is the effective mass, ω_0 is the resonance frequency, and ω_d and F_0 are the frequency and strength of an applied force, respectively. If B is inhomogeneous, the spins experience a position-dependent field B(q̂) as the mechanical resonator vibrates. To lowest order, we approximate this field as B(q̂) ≈ B_0 + G q̂ with a constant component B_0 = B(q̂ = 0) and the relevant field gradients G_i = ∂B_i/∂z. The coherent spin-resonator dynamics therefore obey the Hamiltonian

ℋ ≈ -ħ ω_L Î_z - ħγ q̂ G·Î + ℋ_m,   (3)

with the Larmor precession frequency ω_L = γ|B_0|, where B_0 points along z.

Any real system, in equilibrium with a thermal bath, experiences mechanical damping (rate Γ_m), spin decay (longitudinal relaxation time T_1), and spin decoherence (transverse relaxation time T_2). We thus succinctly represent our system dynamics using the Heisenberg picture's dissipative equations of motion (EOM). Driving the resonator to an oscillation amplitude z_0 well above its zero-point fluctuation amplitude z_zpf = √(ħ/(2mω_0)), we treat the mechanical resonator classically in the mean-field approximation. This allows us to approximate the averages of operator products as the products of their respective averages, e.g. replacing ⟨q̂ Î_i⟩ by ⟨q̂⟩⟨Î_i⟩. These averages then evolve via the time-dependent Bloch equations <cit.>

q̈ = -ω_0² q - Γ_m q̇ + (F_0/m) cos(ω_d t) + (ħγ/m) G·I,   (4)
İ_x,y = -(1/T_2) I_x,y ± (ω_L + γ q G_z) I_y,x ∓ γ q G_y,x I_z,   (5)
İ_z = (1/T_1)(I_0 - I_z) - γ q (G_x I_y - G_y I_x),   (6)

where we dropped the ⟨...⟩ and the hat notation. Here, I_0 denotes the Boltzmann polarization. In the limit k_B T ≫ ħω_L, it simplifies to I_0 ≈ Nħω_L/(4k_B T) <cit.>. The thermomechanical fluctuations do not impact the average resonator position q, leaving Eqs. (4)-(6) explicitly independent of temperature T. Note that in our simplified model, the spins' decay (decoherence) time T_1 (T_2) is independent of temperature and magnetic field.
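As an illustration of how Eqs. (4)-(6) can be integrated numerically, consider the following minimal SciPy sketch. All parameter values are illustrative assumptions (they are not the device values of Table <ref>), and the steady-state phase shift of the driven motion is only reached after integration times of order 1/Γ_m; the sketch simply shows the structure of the coupled equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, Gm = 2*np.pi*1.5e6, 2*np.pi*1.0      # mechanical frequency, damping (rad/s)
wL, wd = 2*np.pi*1.51e6, w0              # Larmor and drive frequencies (rad/s)
T1, T2 = 1.0, 100e-6                     # spin relaxation times (s)
gamma = 2*np.pi*42.58e6                  # 1H gyromagnetic ratio (rad/(s T))
G = np.array([1e5, 0.0, 0.0])            # gradients G_x, G_y, G_z (T/m), assumed
m, F0, hbar, I0 = 1e-12, 1e-15, 1.054e-34, 25.0   # illustrative values

def rhs(t, y):
    """Right-hand side of the mean-field Bloch-resonator equations (4)-(6)."""
    q, p, Ix, Iy, Iz = y
    F = F0*np.cos(wd*t) + hbar*gamma*np.dot(G, [Ix, Iy, Iz])
    return [p,
            -w0**2*q - Gm*p + F/m,
            -Ix/T2 + (wL + gamma*q*G[2])*Iy - gamma*q*G[1]*Iz,
            -Iy/T2 - (wL + gamma*q*G[2])*Ix + gamma*q*G[0]*Iz,
            (I0 - Iz)/T1 - gamma*q*(G[0]*Iy - G[1]*Ix)]

sol = solve_ivp(rhs, (0, 200e-6), [0, 0, 0, 0, I0],
                max_step=1/(40*wL), rtol=1e-8)
```

Demodulating the resulting q(t) trace at ω_d yields the in-phase and quadrature amplitudes from which the spin-induced frequency shift can be extracted.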
Note that in our simplified model, the spins' decay (decoherence) timeT_1(T_2) is independent of temperature and magnetic field.To simplify the treatment, we restrict ourselves to the regime where (i) the spins' force on the resonator is weak compared to the driving force,|δF| ≪F_0, and (ii) the spin-resonator coupling, parameterized by a Rabi frequencyΩ_R = γG_i z_0, is weaker than dissipation, meaningΩ_R ≪, κ, see Fig. <ref>(c). In order to fulfill (ii), we selectz_0to be small and on the order of the thermal motionz_th<cit.>. The conditions (i) and (ii) imply that we remain in the weak coupling limit, where the oscillation inside the field gradientGexcites a precessing spin polarization orthogonal toB_0(i.e.,I_x,y≠0), but does not lock the spins to the resonator frequencyω_0. The backaction of the spins can be treated as a perturbation of the driven resonator oscillation at frequency. Note that, in the absence of spin locking, spin-spin interactions will lead to shortT_2times on the order of 100 for dense spin ensembles <cit.>. The spin componentsI_x,yexert a linear force back onto the resonator that we calculate via the Harmonic Balance method <cit.> as detailed in <cit.>. The force involves a static component= ħγI_0G_z/mthat shifts the mechanical equilibrium, and two oscillating components, in-phase and quadrature. This dynamical backaction loop causes a frequency shift(corresponding to a phase shift in the driven response) and a linewidth change<cit.>:= -g^2(ω_+/ω_+^2 + ^2 + ω_-/ω_-^2 + ^2 ),= -g^2(/ω_+^2+^2-/ω_-^2 + ^2),whereω_±=±andg^2=ħγ^2 I_0(G_x^2+G_y^2)/(4m). The results in Eq. (<ref>) and in Fig. <ref> demonstrate that the equilibrium spin populationI_0alters the resonator properties through dynamical back-action: the force produced by the oscillating in-plane componentsI_xandI_yis delayed with respect to the drive. The force is most pronounced when spins interact with the resonator faster than the resonator's reaction time (Γ_m≪,κ). This spin-mediated delayed force creates a closed feedback loop that modifies the resonator's stiffness (frequency) andenhances or suppresses damping, depending on when the force peaks in relation to position or velocity. Importantly, the strongest spin-mechanics interaction occurs at a detuning≠that depends onand, see Eq. (<ref>).This is different from the early MRFM proposal that considered resonant coupling forces where=<cit.>, and from spin noise measurements in MRI <cit.>. Instead, the effect we rely on resembles dynamical back-actionin e.g. cavity optomechanics <cit.>. For our spin sensing protocol, measuring the frequency shiftis advantageous over measuring the linewidth broadening, as the frequency response rate is not intrinsically limited by the ringdown time <cit.>. In Fig. <ref>, we show the analyticalfor a string resonator as a blue line, see “SiN string" entry in Table <ref> for details <cit.>. We consider a sample ofN=10^6nuclear^1H spins (gyromagnetic ratioγ=2π×42.58/). The gradient of the magnetic field is calculated from a magnetic tip simulation <cit.>, and we useT_2=100<cit.>. In this configuration, simulations indicate a maximum frequency shift of approximately|δω_max|/2π=5mHzat a detuning of(-ω_0)/2π≈10kHzatT=4K. The analytical frequency shift increases twenty-fold to|δω_max|/2π≈100mHzwhen the temperature is lowered toT=0.2Kbecause of the higher Boltzmann polarization.To substantiate our analytical result in Eq. (<ref>), we perform a numerical simulation ofEqs. (<ref>)-(<ref>). 
The simulation closely aligns with the expected results but overestimates the value of |δω_max| at 4 K, as seen in Fig. <ref>(a). The excellent agreement at a lower temperature in Fig. <ref>(b) suggests that the deviations arise from the strain on the assumption Ω_R ≪ Γ_m, κ for fixed spin lifetime T_1, as Ω_R ∝ z_0 increases with thermal motion <cit.>.

To assess the performance of our spin detection scheme, we calculate numerical frequency shifts for various device geometries (Table <ref>), showcasing the versatility of the method. The frequency shift notably increases as the resonator's mass decreases; for example, graphene sheet resonators are expected to achieve significant frequency shifts exceeding |δω_max|/2π = 15.5 kHz. Note that the 1/ω_d dependence of g² in Eqs. (7) and (8) is compensated by I_0 ∝ ω_L.

We systematically collect in Fig. <ref> the maximum frequency shift as a function of temperature for string resonators, and compare it with the expected thermal frequency noise <cit.>. We find that the protocol should allow the detection of 10^6 nuclear spins (^1H) at 4 K within less than a minute of averaging under ideal conditions. At moderate dilution refrigeration temperatures of 0.2 K, the same measurement should be feasible within less than 100 ms. We should consider two limitations under realistic conditions: first, technical frequency noise stemming from e.g. temperature drifts will increase the frequency fluctuations. Second, inhomogeneous spectral broadening (a distribution in ω_L) of a spin ensemble can reduce the signal, which we discuss in the supplemental material <cit.>.

The theory results summarized in Fig. <ref> demonstrate the advances that are feasible in force-detected nanoscale MRI when considering only Boltzmann (thermal) polarization. This polarization is very small for the relevant temperatures and Larmor frequencies; for instance, at 4 K and ω_L/2π = 5.5 MHz (corresponding to |B_0| = 130 mT), the effective mean ensemble polarization of a sample containing 10^6 ^1H spins is equivalent to that of 33 fully polarized ^1H spins <cit.>. While the detection of a single electron spin with a silicon cantilever required an averaging time of roughly 4.7 × 10^4 s in 2004 <cit.>, our method offers the sensitivity for detecting a single nuclear spin (with a roughly 1500 times lower magnetic moment) in 2.5 × 10^4 s. A horizontal red line in Fig. <ref> marks the corresponding frequency shift for the string device considered here.

The sensitivity of MRI at the nanoscale can be boosted by targeting the statistical spin polarization <cit.>, whose standard deviation surpasses the mean polarization for small ensembles <cit.>. From simple considerations, we indeed expect that spin-mediated resonator backaction can be extended to the statistical regime. There, instead of a static frequency shift as in Eq. (7), the fluctuating nuclear polarization will generate a fluctuating resonator frequency that can be accurately monitored using a phase-locked loop. However, a full model for this regime, based on a higher-moment analysis of the spin and resonator quantum statistical distributions <cit.>, is left for future work.

Our method uses a single drive (e.g. via electrical or optomechanical coupling) acting directly on the resonator to detect nuclear spins. It thereby significantly reduces the experimental overhead compared to typical MRFM experiments, which require a microstrip close to the resonator <cit.> to generate periodic spin flipping through radio-frequency pulses <cit.>.
Dynamic nuclear polarization (DNP) can in principle be implemented to further increase the signal-to-noise ratio <cit.>, potentially allowing the detection of a single nuclear spin within minutes of averaging time. Near-resonant spin-mechanics coupling even opens the possibility of coherently manipulating nuclear spins through mechanical driving <cit.>. For instance, mechanical vibrations can be harnessed to tilt the ensemble spin polarization fully into the x-y plane, resulting in a large increase in the measured signal. Moreover, an intriguing possibility arises when swapping the roles of the resonator and the spin ensemble for spin cooling through backaction <cit.>, akin to cavity cooling in the reversed dissipation regime of cavity optomechanics <cit.>. Our simplified study paves the way for delving into the intricacies of local spin dissipation and decoherence <cit.> and dipole-dipole interactions <cit.> in particular experimental configurations. It also lays the groundwork for exploring further opportunities of parametric driving <cit.> and multimode resonators <cit.>. With these capabilities, nanoscale MRI will become a versatile platform for nuclear spin quantum sensing and control on the atomic scale.

§ ACKNOWLEDGMENTS
We thank Raffi Budakian, Oded Zilberberg, and Jan Košata for fruitful discussions. This work was supported by the Swiss National Science Foundation (CRSII5_177198/1) and an ETH Zurich Research Grant (ETH-51 19-2).

§ REFERENCES
[1] J. A. Sidles, Noninductive detection of single-proton magnetic resonance, Appl. Phys. Lett. 58, 2854 (1991).
[2] M. Poggio and C. L. Degen, Force-detected nuclear magnetic resonance: recent advances and future challenges, Nanotechnology 21, 342001 (2010).
[3] C. L. Degen, M. Poggio, H. J. Mamin, C. T. Rettner, and D. Rugar, Nanoscale magnetic resonance imaging, Proc. Natl. Acad. Sci. U.S.A. 106, 1313 (2009).
[4] J. M. Nichol, T. R. Naibert, E. R. Hemesath, L. J. Lauhon, and R. Budakian, Nanoscale Fourier-transform magnetic resonance imaging, Phys. Rev. X 3, 031016 (2013).
[5] U. Grob, M. D. Krass, M. Héritier, R. Pachlatko, J. Rhensius, J. Košata, B. A. Moores, H. Takahashi, A. Eichler, and C. L. Degen, Magnetic resonance force microscopy with a one-dimensional resolution of 0.9 nanometers, Nano Lett. 19, 7935 (2019).
[6] H. Haas et al., Nuclear magnetic resonance diffraction with subangstrom precision, Proc. Natl. Acad. Sci. U.S.A. 119, e2209213119 (2022).
[7] A. Eichler, Ultra-high-Q nanomechanical resonators for force sensing, Mater. Quantum Technol. 2, 043001 (2022).
[8] C. Reinhardt, T. Müller, A. Bourassa, and J. C. Sankey, Ultralow-noise SiN trampoline resonators for sensing and optomechanics, Phys. Rev. X 6, 021001 (2016).
[9] R. Norte, J. Moura, and S. Gröblacher, Mechanical resonators for quantum optomechanics experiments at room temperature, Phys. Rev. Lett. 116, 147202 (2016).
[10] Y. Tsaturyan, A. Barg, E. S. Polzik, and A. Schliesser, Ultracoherent nanomechanical resonators via soft clamping and dissipation dilution, Nat. Nanotechnol. 12, 776 (2017).
[11] M. Rossi, D. Mason, J. Chen, Y. Tsaturyan, and A. Schliesser, Measurement-based quantum control of mechanical motion, Nature 563, 53 (2018).
[12] C. Reetz, R. Fischer, G. Assumpção, D. McNally, P. Burns, J. Sankey, and C. Regal, Analysis of membrane phononic crystals with wide band gaps and low-mass defects, Phys. Rev. Applied 12, 044027 (2019).
[13] A. H. Ghadimi, S. A. Fedorov, N. J. Engelsen, M. J. Bereyhi, R. Schilling, D. J. Wilson, and T. J. Kippenberg, Elastic strain engineering for ultralow mechanical dissipation, Science 360, 764 (2018).
[14] A. Beccari, D. A. Visani, S. A. Fedorov, M. J. Bereyhi, V. Boureau, N. J. Engelsen, and T. J. Kippenberg, Strained crystalline nanomechanical resonators with quality factors above 10 billion, Nat. Phys. 18, 436 (2022).
[15] T. Gisler, M. Helal, D. Sabonis, U. Grob, M. Héritier, C. L. Degen, A. H. Ghadimi, and A. Eichler, Soft-clamped silicon nitride string resonators at millikelvin temperatures, Phys. Rev. Lett. 129, 104301 (2022).
[16] M. J. Bereyhi, A. Arabmoheghi, A. Beccari, S. A. Fedorov, G. Huang, T. J. Kippenberg, and N. J. Engelsen, Perimeter modes of nanomechanical resonators exhibit quality factors exceeding 10^9 at room temperature, Phys. Rev. X 12, 021036 (2022).
[17] S. Fedorov, A. Beccari, N. Engelsen, and T. Kippenberg, Fractal-like mechanical resonators with a soft-clamped fundamental mode, Phys. Rev. Lett. 124, 025502 (2020).
[18] M. J. Bereyhi, A. Beccari, R. Groth, S. A. Fedorov, A. Arabmoheghi, T. J. Kippenberg, and N. J. Engelsen, Hierarchical tensile structures with ultralow mechanical dissipation, Nat. Commun. 13, 3097 (2022).
[19] D. Shin, A. Cupertino, M. H. J. de Jong, P. G. Steeneken, M. A. Bessa, and R. A. Norte, Spiderweb nanomechanical resonators via Bayesian optimization: inspired by nature and guided by machine learning, Adv. Mater. 34, 2270019 (2021).
[20] J. Moser, J. Güttinger, A. Eichler, M. J. Esplandiu, D. E. Liu, M. I. Dykman, and A. Bachtold, Ultrasensitive force detection with a nanotube mechanical resonator, Nat. Nanotechnol. 8, 493 (2013).
[21] N. Rossi, F. R. Braakman, D. Cadeddu, D. Vasyukov, G. Tütüncüoglu, A. Fontcuberta i Morral, and M. Poggio, Vectorial scanning force microscopy using a nanowire sensor, Nat. Nanotechnol. 12, 150 (2016).
[22] D. Hälg et al., Membrane-based scanning force microscopy, Phys. Rev. Applied 15, L021001 (2021).
[23] J. Košata, O. Zilberberg, C. L. Degen, R. Chitra, and A. Eichler, Spin detection via parametric frequency conversion in a membrane resonator, Phys. Rev. Applied 14, 014042 (2020).
[24] M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Cavity optomechanics, Rev. Mod. Phys. 86, 1391 (2014).
[25] A. Bachtold, J. Moser, and M. I. Dykman, Mesoscopic physics of nanomechanical systems, Rev. Mod. Phys. 94, 045005 (2022).
[26] J. A. Sidles, Folded Stern-Gerlach experiment as a means for detecting nuclear magnetic resonance in individual nuclei, Phys. Rev. Lett. 68, 1124 (1992).
[27] G. P. Berman and V. I. Tsifrinovich, Magnetic resonance force microscopy with matching frequencies of cantilever and spin, J. Appl. Phys. 131, 044301 (2022).
[28] K. W. Lee, D. Lee, P. Ovartchaiyapong, J. Minguzzi, J. R. Maze, and A. C. Bleszynski Jayich, Strain coupling of a mechanical resonator to a single quantum emitter in diamond, Phys. Rev. Applied 6, 034005 (2016).
[29] J. Teissier, A. Barfuss, and P. Maletinsky, Hybrid continuous dynamical decoupling: a photon-phonon doubly dressed spin, J. Opt. 19, 044003 (2017).
[30] E. R. MacQuarrie, T. A. Gosavi, N. R. Jungwirth, S. A. Bhave, and G. D. Fuchs, Mechanical spin control of nitrogen-vacancy centers in diamond, Phys. Rev. Lett. 111, 227602 (2013).
[31] See Supplemental Material for further details on the analytical approach and additional simulation case studies.
[32] T. O. Niinikoski, The Physics of Polarized Targets (Cambridge University Press, 2020).
[33] F. Bloch, Nuclear induction, Phys. Rev. 70, 460 (1946).
[34] M. Krack and J. Gross, Harmonic Balance for Nonlinear Vibration Problems (Springer International Publishing, 2019).
[35] J. Košata, J. del Pino, T. L. Heugel, and O. Zilberberg, HarmonicBalance.jl: A Julia suite for nonlinear dynamics using harmonic balance, SciPost Phys. Codebases 6 (2022).
[36] E. Hairer, G. Wanner, and S. P. Nørsett, Solving Ordinary Differential Equations I (Springer, Berlin, Heidelberg, 1993).
[37] D. W. Allan, J. E. Gray, and H. E. Machlan, The National Bureau of Standards atomic time scales: generation, dissemination, stability, and accuracy, IEEE Trans. Instrum. Meas. 21, 388 (1972).
[38] F. Walls and D. Allan, Measurements of frequency stability, Proc. IEEE 74, 162 (1986).
[39] M. McCoy and R. Ernst, Nuclear spin noise at room temperature, Chem. Phys. Lett. 159, 587 (1989).
[40] M. Guéron and J. Leroy, NMR of water protons. The detection of their nuclear-spin noise, and a simple determination of absolute probe sensitivity based on radiation damping, J. Magn. Reson. 85, 209 (1989).
[41] V. B. Braginsky and A. Manukin, Ponderomotive effects of electromagnetic radiation, J. Exp. Theor. Phys. (1967).
[42] T. R. Albrecht, P. Grütter, D. Horne, and D. Rugar, Frequency modulation detection using high-Q cantilevers for enhanced force microscope sensitivity, J. Appl. Phys. 69, 668 (1991).
[43] P. Weber, J. Güttinger, A. Noury, J. Vergara-Cruz, and A. Bachtold, Force sensitivity of multilayer graphene optomechanical devices, Nat. Commun. 7, 12496 (2016).
[44] D. Rugar, R. Budakian, H. J. Mamin, and B. W. Chui, Single spin detection by magnetic resonance force microscopy, Nature 430, 329 (2004).
[45] C. L. Degen, M. Poggio, H. J. Mamin, and D. Rugar, Role of spin noise in the detection of nanoscale ensembles of nuclear spins, Phys. Rev. Lett. 99, 250601 (2007).
[46] B. E. Herzog, D. Cadeddu, F. Xue, P. Peddibhotla, and M. Poggio, Boundary between the thermal and statistical polarization regimes in a nuclear spin ensemble, Appl. Phys. Lett. 105, 043112 (2014).
[47] R. Kubo, Generalized cumulant expansion method, J. Phys. Soc. Jpn. 17, 1100 (1962).
[48] M. Poggio, C. L. Degen, C. T. Rettner, H. J. Mamin, and D. Rugar, Nuclear magnetic resonance force microscopy with a microwire rf source, Appl. Phys. Lett. 90, 263111 (2007).
[49] T. R. Carver and C. P. Slichter, Polarization of nuclear spins in metals, Phys. Rev. 92, 212 (1953).
[50] T. R. Carver and C. P. Slichter, Experimental verification of the Overhauser nuclear polarization effect, Phys. Rev. 102, 975 (1956).
[51] D. Hunger, S. Camerer, M. Korppi, A. Jöckel, T. Hänsch, and P. Treutlein, Coupling ultracold atoms to mechanical oscillators, C. R. Phys. 12, 871 (2011).
[52] M. C. Butler and D. P. Weitekamp, Polarization of nuclear spins by a cold nanoscale resonator, Phys. Rev. A 84, 063407 (2011).
[53] A. Nunnenkamp, V. Sudhir, A. Feofanov, A. Roulet, and T. Kippenberg, Quantum-limited amplification and parametric instability in the reversed dissipation regime of cavity optomechanics, Phys. Rev. Lett. 113, 023604 (2014).
[54] R. Ohta, L. Herpin, V. Bastidas, T. Tawara, H. Yamaguchi, and H. Okamoto, Rare-earth-mediated optomechanical system in the reversed dissipation regime, Phys. Rev. Lett. 126, 047404 (2021).
[55] P. Kirton and J. Keeling, Suppressing and restoring the Dicke superradiance transition by dephasing and decay, Phys. Rev. Lett. 118, 123602 (2017).
[56] E. G. Dalla Torre, Y. Shchadilova, E. Y. Wilner, M. D. Lukin, and E. Demler, Dicke phase transition without total spin conservation, Phys. Rev. A 94, 061802 (2016).
[57] J. M. Dobrindt and T. J. Kippenberg, Theoretical analysis of mechanical displacement measurement using a multiple cavity mode transducer, Phys. Rev. Lett. 104, 033901 (2010).
[58] R. Burgwal, J. del Pino, and E. Verhagen, Comparing nonlinear optomechanical coupling in membrane-in-the-middle and single-cavity systems, New J. Phys. 22, 113006 (2020).
[59] R. Burgwal and E. Verhagen, Enhanced nonlinear optomechanics in a coupled-mode photonic crystal device, Nat. Commun. 14, 1526 (2023).

Supplemental Material: Near-resonant nuclear spin detection with high-frequency mechanical resonators

§ ANALYTICAL APPROACH

§.§ Master equation formalism

We offer further details on the analytical solution for the spin-mechanical model introduced in the main text. The model features a driven mechanical resonator moving along z, influencing an ensemble of N spins. The spins also interact with a spatially dependent magnetic field. The combined dynamics is described by the Hamiltonian

ℋ = p̂²/2m + (1/2) m ω_0² q̂² - F_0 q̂ cos(ω_d t) - ħω_L Î_z - ħγ q̂ (G_x Î_x + G_y Î_y + G_z Î_z),

where ħ is the reduced Planck constant, ω_0 (ω_L) is the mechanical (Larmor) resonance frequency, ω_d is the driving frequency, G_i is the magnetic gradient along i ∈ [x,y,z], γ is the gyromagnetic ratio of a nuclear spin, and F_0 is the driving force. Here q̂ and p̂ stand for the position and momentum operators of the resonator. The spins are described by the collective spin operators Î_i = ∑_k=1^N σ̂_i,k/2, where σ̂_i,k are the Pauli matrices describing a spin-1/2.

We extract the dissipative equations of motion (EOM) using the Heisenberg picture. We account for mechanical damping (Γ_m) as well as spin decay (T_1) and decoherence (T_2). In order to obtain physically measurable quantities, we take the mean values of the derived EOM; these are given by

⟨q̈⟩ = -ω_0²⟨q̂⟩ - Γ_m⟨q̇̂⟩ + (F_0/m) cos(ω_d t) + (ħγ/m)(G_x⟨Î_x⟩ + G_y⟨Î_y⟩ + G_z⟨Î_z⟩),
⟨Î̇_x,y⟩ = -(1/T_2)⟨Î_x,y⟩ ± ω_L⟨Î_y,x⟩ ± γ G_z⟨q̂ Î_y,x⟩ ∓ γ G_y,x⟨q̂ Î_z⟩,
⟨Î̇_z⟩ = (1/T_1)(I_0 - ⟨Î_z⟩) - γ G_x⟨q̂ Î_y⟩ + γ G_y⟨q̂ Î_x⟩,

where we identify the renormalized mechanical frequency as ω_0 ↦ √(ω_0² + Γ_m²/4). Here I_0 stands for the Boltzmann (thermal) equilibrium polarization <cit.>,

I_0 = (N/2)[(2I+1) coth((2I+1)ħω_L/(2k_B T)) - coth(ħω_L/(2k_B T))],

with N the number of spins in the considered ensemble and I = 1/2 the spin number.
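As a quick consistency check of the high-temperature limit I_0 ≈ Nħω_L/(4k_B T) quoted in the main text, the following sketch (our illustration) evaluates it for the parameters discussed there; the factor of 2 converts the polarization into an equivalent number of fully polarized spin-1/2 nuclei.

```python
import numpy as np

hbar, kB = 1.0545718e-34, 1.380649e-23   # SI constants
N, T = 1e6, 4.0                          # 1e6 protons at 4 K
wL = 2*np.pi*5.5e6                       # Larmor frequency (rad/s)
I0 = N*hbar*wL/(4*kB*T)                  # high-temperature Boltzmann polarization
print(2*I0)                              # ~33 equivalent fully polarized spins
```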
Since the resonator motion is driven well above its zero-point fluctuation, we can apply the mean-field approximation and study the semiclassical limit. This entails factorizing the cross-correlated mean values, i.e. ⟨q̂Î_i⟩ = ⟨q̂⟩⟨Î_i⟩. We furthermore drop the bracket and hat notation, e.g. ⟨Î_x⟩ ≡ I_x, leading to the EOM presented in the main text, Eqs. (4)-(6), namely

q̈ = -ω_0^2 q - Γ_m q̇ + F_0 cos(ω_d t)/m + (ħγ/m) G·I,
İ_{x,y} = -(1/T_2)I_{x,y} ± (ω_L + γqG_z)I_{y,x} ∓ γqG_{y,x}I_z,
İ_z = (1/T_1)(I_0 - I_z) - γq(G_x I_y - G_y I_x),

with vectors G = (G_x, G_y, G_z) and I = (I_x, I_y, I_z).

§.§ Slow-flow equations of motion

We further analyze here the solution to the main text Eqs. (4)-(6). We assume that the resonator dynamics is dominated by the external force, so the coupling to the spins acts as a small correction. We can then write the mechanical motion as q(t) = q^(0)(t) + δq(t), where q^(0)(t) = u_q^(0) cos(ω_d t) + v_q^(0) sin(ω_d t) with

u_q^(0) = F_0 (ω_0^2 - ω_d^2) / {m[(ω_0^2 - ω_d^2)^2 + ω_d^2 Γ_m^2]},
v_q^(0) = F_0 ω_d Γ_m / {m[(ω_0^2 - ω_d^2)^2 + ω_d^2 Γ_m^2]}

(these quadratures are cross-checked numerically below). The part accounting for the spins then obeys

δq̈ = -ω_0^2 δq - Γ_m δq̇ + (ħγ/m) G·I.

We express the solution for δq(t) in terms of an ansatz

δq(t) = δa_q(t) + δu_q(t) cos(ω_d t) + δv_q(t) sin(ω_d t),

where δa_q(t), δu_q(t) and δv_q(t) are real time-dependent amplitudes to be found. Employing this form of the solution is particularly beneficial when examining perturbations of the behavior of a driven harmonic oscillator. The dynamics of the spins in response to the mechanical motion can be calculated employing a similar ansatz

I_i(t) = a_i(t) + u_i(t) cos(ω_d t) + v_i(t) sin(ω_d t),

with amplitudes a_i(t), u_i(t), v_i(t). Given Eq. (<ref>), the spins exert a time-dependent force on the resonator given by

δF(t) = ħγ G·I(t) = ħγ[G·a(t) + G·u(t) cos(ω_d t) + G·v(t) sin(ω_d t)],

where we used vector notation for a_{x,y,z}(t), u_{x,y,z}(t), v_{x,y,z}(t). At this stage, we have not yet introduced any constraints or approximations on the ansatz amplitudes. However, the calculation of the corrections δa_q, δu_q, δv_q is greatly facilitated by assuming a weak impact of the spin-dependent force on the resonator. Namely, we assume ⟨⟨|ħγG·I|⟩⟩_{T_d} ≪ F_0, where ⟨⟨...⟩⟩_{T_d} denotes the average over a drive period T_d = 2π/ω_d. In this setting, we can assume the amplitudes δa_q(t), δu_q(t), δv_q(t) to be slowly varying with respect to T_d, accounting for the transient evolution of the amplitude and phase of the resonator towards the steady state <cit.>. In the steady state, resonator and spin precession amplitudes settle to constant values, i.e. δȧ_q = δu̇_q = δv̇_q = 0 and ȧ_i = u̇_i = v̇_i = 0.

In our ansatz, q(t) thus acts as a harmonic magnetic field with frequency ω_d acting on the spins. In particular, the drive prompts a spin precession component at frequency ω_d, according to Eq. (<ref>). Note that the ansatz does not presuppose the synchronization or "locking" of the spin dynamics with the external field; we investigate whether such a steady state can exist. To this end, we insert the ansatz for q(t) and I_i(t) in the mean-field equations of motion and equate the harmonic amplitudes on both sides of the equations with the same time dependence, a procedure dubbed "harmonic balance" <cit.>. This approach neglects super-harmonic generation (e.g. terms cos(2ω_d t), sin(2ω_d t)) that arises from the mechanical motion driving the spins, which would require extending the harmonic ansatz for q(t), I_i(t) to higher frequencies. Note that harmonic balance relies on the slowly-flowing nature of the amplitudes a_i, u_i, v_i <cit.>.
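As a quick consistency check of the zeroth-order quadratures u_q^(0), v_q^(0) above, they can be compared against the complex susceptibility of a damped driven oscillator. The following is a minimal sketch; all parameter values are illustrative placeholders, not the paper's device values:

import numpy as np

def drive_quadratures(F0, m, w0, Gm, wd):
    # Quadratures of q^(0)(t) = u*cos(wd*t) + v*sin(wd*t), as in the text.
    denom = m * ((w0**2 - wd**2)**2 + wd**2 * Gm**2)
    return F0 * (w0**2 - wd**2) / denom, F0 * wd * Gm / denom

F0, m, w0, Gm = 1e-15, 1e-12, 2*np.pi*1e6, 2*np.pi*1.0   # placeholder values
wd = 0.999 * w0
u, v = drive_quadratures(F0, m, w0, Gm, wd)
# Cross-check against q = Re[(F0/m)*chi*exp(i*wd*t)] with
# chi = 1/(w0^2 - wd^2 + i*Gm*wd), i.e. u = Re[chi]*F0/m, v = -Im[chi]*F0/m.
chi = 1.0 / (w0**2 - wd**2 + 1j*Gm*wd)
assert np.allclose([u, v], [(F0/m)*chi.real, -(F0/m)*chi.imag])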
§.§.§ Linear response theory

The introduction of the ansatz results in nonlinear couplings between the harmonic amplitudes of the mechanical resonator and the spins. The system's steady states are defined by the roots of these coupled polynomials. While we could solve these equations numerically using advanced algebraic methods, as detailed in reference <cit.> and implemented in the package <cit.>, we opt for deriving an analytical solution within a linearized framework. Here we find the mechanical dynamics of the resonator in the weakly fluctuating regime ⟨⟨|δq|⟩⟩_{T_d} ≪ ⟨⟨|q^(0)|⟩⟩_{T_d}. The smallness of δq allows us to neglect the nonlinear coupling between the fluctuations δu_q(t), δv_q(t) and the spin amplitudes a_i(t), u_i(t), v_i(t). Under this linearization, the spin dynamics directly follows from the solutions of the first-order differential equations that do not contain δa_q(t), δu_q(t), δv_q(t), namely

ȧ_{x,y} + (1/T_2) a_{x,y} ∓ ω_L a_{y,x} = 0,
u̇_{x,y} + (1/T_2) u_{x,y} ∓ ω_L u_{y,x} + ω_d v_{x,y} - γ u_q^(0) (G_{z,x} a_{y,z} - G_{y,z} a_{z,x}) = 0,
v̇_{x,y} + (1/T_2) v_{x,y} ∓ ω_L v_{y,x} - ω_d u_{x,y} - γ v_q^(0) (G_{z,x} a_{y,z} - G_{y,z} a_{z,x}) = 0,
ȧ_z + (1/T_1) a_z - I_0/T_1 = 0,
u̇_z + (1/T_1) u_z + ω_d v_z - γ u_q^(0) (G_y a_x - G_x a_y) = 0,
v̇_z + (1/T_1) v_z - ω_d u_z - γ v_q^(0) (G_y a_x - G_x a_y) = 0.

The resonator features a high quality factor (Γ_m ≪ 1/T_2, 1/T_1) which, together with the weak spin-resonator coupling (γG_i z_0 ≪ 1/T_2, 1/T_1), leads to the spins quickly reaching steady state compared to the slower resonator timescale. This condition permits the application of approximation methods, like adiabatic elimination of the spins <cit.>, in order to approximate the time evolution of the resonator towards its steady state. Our focus is nevertheless on the global steady-state behavior, where all amplitudes in the problem are fixed.

Solving Eqs. (<ref>) with ȧ_i = u̇_i = v̇_i = 0 to find the steady-state amplitudes a|_{t→∞}, u|_{t→∞}, v|_{t→∞} leads to a steady-state force

δF|_{t→∞} ≈ ħγ[G·a|_{t→∞} + G·u|_{t→∞} cos(ω_d t) + G·v|_{t→∞} sin(ω_d t)].

This force (<ref>) will in general not be in phase with the resonator's external driving (its quadratures will not be aligned with the drive quadratures u_q^(0), v_q^(0)). To simplify the expressions, we choose a phase/time origin for the driven resonator (i.e. we perform a gauge fixing) such that v_q^(0) = 0 and u_q^(0) = F_0/(m√((ω_0^2 - ω_d^2)^2 + ω_d^2 Γ_m^2)). In this gauge, we can identify δF|_{t→∞}/m ≈ δF_0 - δΓ_m q̇ - δΩ^2 q, where δF_0 = ħγI_0G_z/m and

δΓ_m = [ħγ^2 I_0 (G_x^2 + G_y^2)/m] · ω_L T_2^{-1} / D,
δΩ^2 = -[ħγ^2 I_0 (G_x^2 + G_y^2)/m] · ω_L (T_2^{-2} - ω_d^2 + ω_L^2) / D,
D = (T_2^{-2} + ω_L^2)^2 + 2(T_2^{-1} - ω_L)(T_2^{-1} + ω_L)ω_d^2 + ω_d^4.

We can now reconstruct the mechanical evolution in the steady state from the effective equation of motion. Under resonant driving ω_d = ω_0,

q̈ + (Γ_m + δΓ_m)q̇ + (ω_0 + δω_0)^2 q |_{t→∞} = F_0 cos(ω_d t)/m + δF_0,

with δω_0 = (1/2)δΩ^2/ω_0. We can rewrite δΓ_m and δω_0 in a more convenient way, leading to main text Eqs. (7) and (8):

δω_0 = -g^2 [ (ω_L + ω_d)/(T_2^{-2} + (ω_L + ω_d)^2) + (ω_L - ω_d)/(T_2^{-2} + (ω_L - ω_d)^2) ],
δΓ_m = -g^2 [ T_2^{-1}/(T_2^{-2} + (ω_L + ω_d)^2) - T_2^{-1}/(T_2^{-2} + (ω_L - ω_d)^2) ],

where we introduced g^2 = ħγ^2 I_0 (G_x^2 + G_y^2)/(4mω_d). A short numerical sketch of these two expressions is given below.
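A minimal numerical sketch of Eqs. (7)-(8) follows; all parameters are placeholders, and g2 in particular is an arbitrary illustrative coupling rather than a device value:

import numpy as np

def spin_backaction(wd, wL, T2, g2):
    # Two-Lorentzian forms of main text Eqs. (7)-(8).
    Lp = 1.0 / (T2**-2 + (wL + wd)**2)
    Lm = 1.0 / (T2**-2 + (wL - wd)**2)
    dw0 = -g2 * ((wL + wd) * Lp + (wL - wd) * Lm)   # frequency shift
    dGm = -g2 * (Lp - Lm) / T2                      # damping change
    return dGm, dw0

wd, T2 = 2*np.pi*5.5e6, 100e-6
g2 = (2*np.pi*100.0)**2                             # placeholder, (rad/s)^2
for fL in (5.4e6, 5.5e6, 5.6e6):                    # sweep Larmor across resonance
    dGm, dw0 = spin_backaction(wd, 2*np.pi*fL, T2, g2)
    print(f"fL = {fL:.2e} Hz: dGm/2pi = {dGm/2/np.pi:.3e} Hz, dw0/2pi = {dw0/2/np.pi:.3e} Hz")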
§.§.§ Beyond linear response

For certain parameter regimes, the nonlinearities in Eqs. (<ref>)-(<ref>) can lead to complex behavior in the stationary limit t→∞, including self-sustained motion, multi-stability, and limit cycles <cit.>. In particular, the analogy with optomechanics is expected to break down when the Rabi frequency is comparable to the spin dissipation: γG_i z_0 ∼ 1/T_2, 1/T_1. In that case, the spins' equilibration is not fast enough before they act back on the resonator, and spin-resonator timescales cannot be adiabatically separated. Effectively, the resonator motion then triggers spin-induced nonlinear effects, such as a periodic time modulation of the Larmor frequency due to the G_z gradient (see main text Eqs. (4)-(6)), with frequency ω_d. The resonator's response would then pick up higher frequency components not described by Eq. (<ref>). While under the linearized theory the steady-state value is time independent and equal to I_z = I_0, we observe the generation of higher-order harmonics in the spectrum of I_z [Fig. <ref>]. Note that in our simulations, we do not focus on the regime where higher excitation makes the spin-conservation constraint (d(∑_i I_i^2)/dt = 0) relevant, which would lead to additional `many-wave mixing'.

A comprehensive examination of nonlinearities in our detection protocol is beyond the scope of this manuscript; however, we offer a brief overview of the necessary approach below. The impact of weak nonlinearities can be captured by still expanding the solution for x(t) with an ansatz of the form x(t) = a_q(t) + u_q(t)cos(ω_d t) + v_q(t)sin(ω_d t), which includes both the displacement by the driving field and small fluctuations, while keeping the same ansatz for the spins as in Eq. (<ref>). We now account for the nonlinear corrections to this resonant behavior. The equations of motion for the ansatz amplitudes without linearization read

ω_0^2 a_q + G̃_x a_x + G̃_y a_y + G̃_z a_z + Γ_m ȧ_q = 0,
ω_0^2 u_q - ω_d^2 u_q + Γ_m ω_d v_q + 2ω_d v̇_q - F_0 + G̃_x u_x + G̃_y u_y + G̃_z u_z + Γ_m u̇_q = 0,
ω_0^2 v_q - ω_d^2 v_q - 2ω_d u̇_q - Γ_m ω_d u_q + G̃_x v_x + G̃_y v_y + G̃_z v_z + Γ_m v̇_q = 0,
(1/T_2) a_x + ω_L a_y + G̃_z a_q a_y - G̃_y a_q a_z - (G̃_y/2) u_q u_z + (G̃_z/2) u_q u_y + (G̃_z/2) v_q v_y - (G̃_y/2) v_q v_z + ȧ_x = 0,
(1/T_2) u_x + ω_L u_y + ω_d v_x + G̃_z a_q u_y + G̃_z a_y u_q - G̃_y a_q u_z - G̃_y a_z u_q + u̇_x = 0,
(1/T_2) v_x + ω_L v_y - ω_d u_x + G̃_z a_q v_y + G̃_z a_y v_q - G̃_y a_z v_q - G̃_y a_q v_z + v̇_x = 0,
(1/T_2) a_y - ω_L a_x + G̃_x a_q a_z - G̃_z a_q a_x - (G̃_z/2) u_q u_x + (G̃_x/2) u_q u_z + (G̃_x/2) v_q v_z - (G̃_z/2) v_q v_x + ȧ_y = 0,
(1/T_2) u_y + ω_d v_y - ω_L u_x + G̃_x a_q u_z + G̃_x a_z u_q - G̃_z a_q u_x - G̃_z a_x u_q + u̇_y = 0,
(1/T_2) v_y - ω_d u_y + G̃_x a_z v_q - ω_L v_x + G̃_x a_q v_z - G̃_z a_x v_q - G̃_z a_q v_x + v̇_y = 0,
(1/T_1) a_z - I_0/T_1 + G̃_y a_q a_x - G̃_x a_q a_y + (G̃_y/2) u_q u_x - (G̃_x/2) u_q u_y - (G̃_x/2) v_q v_y + (G̃_y/2) v_q v_x + ȧ_z = 0,
(1/T_1) u_z + ω_d v_z + G̃_y a_q u_x + G̃_y a_x u_q - G̃_x a_y u_q - G̃_x a_q u_y + u̇_z = 0,
(1/T_1) v_z - ω_d u_z + G̃_y a_q v_x + G̃_y a_x v_q - G̃_x a_q v_y - G̃_x a_y v_q + v̇_z = 0,

where G̃_i = γG_i is shorthand. The resonator's susceptibility can then be found by (i) finding the steady states of these equations, i.e. finding the roots of the system of coupled polynomials arising from ȧ_i = u̇_i = v̇_i = ȧ_q = u̇_q = v̇_q = 0, and (ii) performing a linear fluctuation analysis around these solutions. These two steps can be facilitated by the use of the HarmonicBalance.jl package <cit.>; a generic root-finding sketch in this spirit is given at the end of this subsection.

Finally, the frequency spectrum in Fig. <ref> reveals that as the driving strength increases, the lowest-order nonlinear effect is the generation of a second harmonic at frequency 2ω_d. Equations similar to Eqs. (<ref>) can be obtained for the amplitudes of an extended ansatz that also includes the higher harmonic generated at 2ω_d.
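The following is a minimal, assumption-laden sketch of step (i): the twelve residuals above, with all time derivatives set to zero, are handed to a generic solver. Parameter values are placeholders, units follow the printed equations, and no attempt is made to enumerate all root branches:

import numpy as np
from scipy.optimize import fsolve

w0 = wd = 2*np.pi*5.5e6
wL, Gm = 2*np.pi*5.4e6, w0/1e6
T1, T2, I0, F0 = 50e-3, 100e-6, 500.0, 1e-3
gam = 2*np.pi*42.58e6                     # 1H gyromagnetic ratio (rad/s/T)
Gtx, Gty, Gtz = gam*2e6, gam*2e6, gam*1e6 # Gt_i = gamma*G_i

def residuals(s):
    aq, uq, vq, ax, ux, vx, ay, uy, vy, az, uz, vz = s
    return [
        w0**2*aq + Gtx*ax + Gty*ay + Gtz*az,
        (w0**2 - wd**2)*uq + Gm*wd*vq - F0 + Gtx*ux + Gty*uy + Gtz*uz,
        (w0**2 - wd**2)*vq - Gm*wd*uq + Gtx*vx + Gty*vy + Gtz*vz,
        ax/T2 + wL*ay + Gtz*aq*ay - Gty*aq*az - Gty/2*uq*uz + Gtz/2*uq*uy + Gtz/2*vq*vy - Gty/2*vq*vz,
        ux/T2 + wL*uy + wd*vx + Gtz*(aq*uy + ay*uq) - Gty*(aq*uz + az*uq),
        vx/T2 + wL*vy - wd*ux + Gtz*(aq*vy + ay*vq) - Gty*(az*vq + aq*vz),
        ay/T2 - wL*ax + Gtx*aq*az - Gtz*aq*ax - Gtz/2*uq*ux + Gtx/2*uq*uz + Gtx/2*vq*vz - Gtz/2*vq*vx,
        uy/T2 + wd*vy - wL*ux + Gtx*(aq*uz + az*uq) - Gtz*(aq*ux + ax*uq),
        vy/T2 - wd*uy - wL*vx + Gtx*(az*vq + aq*vz) - Gtz*(ax*vq + aq*vx),
        (az - I0)/T1 + Gty*aq*ax - Gtx*aq*ay + Gty/2*uq*ux - Gtx/2*uq*uy - Gtx/2*vq*vy + Gty/2*vq*vx,
        uz/T1 + wd*vz + Gty*(aq*ux + ax*uq) - Gtx*(ay*uq + aq*uy),
        vz/T1 - wd*uz + Gty*(aq*vx + ax*vq) - Gtx*(aq*vy + ay*vq),
    ]

guess = np.zeros(12); guess[9] = I0        # start near a_z = I_0
sol = fsolve(residuals, guess)
print(np.max(np.abs(residuals(sol))))      # residual of the root found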
§.§ Linewidth change

In Fig. <ref>, we show the linewidth change predicted by the analytical model for the three resonators considered in this work. This change produces an effective reduction of the resonator's quality factor, resulting in "cold damping", as conventionally used in cavity optomechanics <cit.>. The influence of cold damping on a cantilever in an MRFM experiment, albeit in a nonresonant setting, was studied previously <cit.>.

§ RELAXATION AND SPECTRAL BROADENING

§.§ Relaxation rates

The spin lifetime T_1 of nuclear spins resulting from energy relaxation can vary strongly in typical nuclear magnetic resonance (NMR) experiments, ranging from microseconds to days. Our experimental situation is atypical, as we will probe nanoscale samples at low temperatures and low magnetic fields. We do not need a very specific value for T_1, as our analytical results hold as long as T_1 ≪ 1/Γ_m. To avoid speculation about the dependency of T_1 on field strength and temperatures below 70 K, we use the same value of T_1 = 50 ms for all our simulations. If needed for a specific experimental situation, we envisage reducing T_1 by introducing paramagnetic agents, such as free radicals or metal ions <cit.>.

§.§ Decoherence due to spin-spin coupling

In typical NMR experiments, interactions between neighboring nuclear spins can often be neglected when the Rabi frequency Ω_R exceeds the spin-spin coupling strength J. In that scenario, the range of Larmor frequencies affected by the spin lock is dominated by the spectral "power broadening" equal to Ω_R. By contrast, in the experiments we describe, the condition Ω_R ≥ J is typically not fulfilled, and we are limited by the condition Ω_R ≪ 1/T_2, 1/T_1. As we cannot ignore spin-spin interactions in the weak-driving regime, J is explicitly implemented in the simulations through the spin decoherence time T_2 = 100 µs <cit.>.

§.§ Inhomogeneous broadening

In our paper, we primarily consider the interaction between a mechanical resonator and an ensemble of spins with identical Larmor frequencies. This was done to present the fundamental concept as clearly as possible. However, any real sample is extended in space, and therefore contains spins at various positions within the magnetic field gradient. When all these spins in a sample are excited at the same time, their variation in Larmor frequencies leads to a so-called "inhomogeneous broadening" 1/T_2^* and a corresponding dephasing time T_2^*. For T_2^* < T_2, the inhomogeneous broadening dominates over spin-spin interactions and increases the effective spectral width of the ensemble. In our sample, the driving fields are necessarily weak to fulfill the condition Ω_R = γG_i z_0 ≪ 1/T_2, 1/T_1. Only the spins within the narrow range ω_L = ω_0 ± Ω_R are directly excited, yielding an inhomogeneous broadening of 1/T_2^* = Ω_R. Through spin-spin interactions, all the spins within ω_L = ω_0 ± 1/T_2 are indirectly excited. As Ω_R ≪ 1/T_2, the broadening of the spin ensemble is limited by 1/T_2, not 1/T_2^*. We therefore do not need to be concerned about the effects of inhomogeneous broadening.

§.§ Signal contributions with different signs

The spectral width of the spins that are excited in our scheme depends on 1/T_2. In principle, this has the consequence that the resonator interacts with spins covering a significant part of the frequencies shown in Fig. 2, which can lead to a mutual cancellation of positive and negative frequency shifts. In the following, we discuss how this problem can be avoided.

* Differential measurements: the contributions from different parts of the sample can cancel each other when they generate frequency shifts with equal strength and opposite signs. However, as the nanomagnetic probe is scanned over the sample, spins at different locations enter the strong field gradient at different scan positions.
For instance, at a certain position, mainly spins contributing positive frequency shifts may be located within the strong gradient, causing a strong positive frequency shift. The outcome of a full scan then corresponds to a differential spin signal, where the frequency shift of the resonator does not indicate the number of spins at one particular Larmor frequency, but rather the relative weight between `positive' and `negative' spins. Knowing the field distribution of the nanomagnetic probe (which can be calibrated with well-known samples), the spatial distribution of spins can be reconstructed by integrating over the differential signal.

* Statistical polarization: in MRFM, we typically probe the statistical polarization of a spin ensemble, not the Boltzmann polarization <cit.>. In this case, only the number of spins (weighted by the square of the local gradient) is important for evaluating the measured signal, not the sign of their individual contributions. For instance, let us assume the simple case of two equal spin sub-ensembles with different Larmor frequencies that contribute equally strong but opposite frequency shifts in Fig. 2. Assuming the spins fluctuate thermally (and ignoring any remaining Boltzmann polarization), the two ensembles will spend an equal amount of time in a symmetric configuration as in the antisymmetric one. In the antisymmetric configuration, the sub-ensembles contribute to the frequency shifts with the same sign. The total resonator frequency variance resulting from the combined effect of the two ensembles in the long-time limit is therefore overall increased, not reduced.

* Single spin: one exciting aspect of the method we propose is the quantum limit of probing a single spin. In this situation, the problem arising from variations in the Larmor frequency naturally vanishes, leaving only a well-defined Larmor frequency generating a single frequency shift.

§ EXACT NUMERICAL SIMULATIONS

To verify our theoretical predictions, we numerically simulate the mean-field EOM given by Eqs. (<ref>)-(<ref>). We solve the EOM using an explicit Runge-Kutta method of order 8, which is well-suited for handling the large separation of timescales in the problem <cit.>.

§.§ Results for different devices

We focus on three device families: silicon nitride (SiN) membranes, SiN strings, and graphene sheets. These devices span a large range of frequencies and several orders of magnitude in mass. We simulate a hypothetical sample containing N = 10^6 ¹H nuclear spins, corresponding to a sample size of ≈ (20 nm)³ <cit.>. To estimate the magnetic field gradients, we numerically simulate a cobalt nanorod of length 1 µm and radius 50 nm that is pre-magnetized to 1 T, inspired by Ref. <cit.>. The magnetic simulation results are presented in Sect. <ref>. The driving strength, proportional to z_0, can be easily tuned in the experiment. We choose it in the following such that Ω_R ≪ 1/T_2, 1/T_1. Due to the low T_2 = 100 µs value, this condition might be hard to fulfill at high temperatures, as the driving amplitude z_0 is bounded from below by the resonator's thermal motion. Membrane and string resonators can have quality factors around Q = ω_0/Γ_m = 10^8 at room temperature, increasing up to 10^9 at low temperature T ≤ 4 K. To fulfill the condition Γ_m ≪ 1/T_2, 1/T_1 with these Q factors, T_1 should be at most in the minute range. However, in order to reduce the number of timesteps required in the simulation, we use a reduced T_1 time as well as a reduced quality factor Q for the resonator within the regime Γ_m ≪ 1/T_2, 1/T_1; a minimal integration sketch is given below.
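A minimal sketch of such an integration, using scipy's DOP853 solver (an explicit Runge-Kutta method of order 8) with placeholder parameters rather than the actual device values:

import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.054571817e-34

def eom(t, y, m, w0, Gm, F0, wd, gam, Gx, Gy, Gz, wL, T1, T2, I0):
    # Mean-field EOM, main text Eqs. (4)-(6); y = (q, qdot, Ix, Iy, Iz).
    q, p, Ix, Iy, Iz = y
    qdd = -w0**2*q - Gm*p + F0*np.cos(wd*t)/m + hbar*gam/m*(Gx*Ix + Gy*Iy + Gz*Iz)
    dIx = -Ix/T2 + (wL + gam*q*Gz)*Iy - gam*q*Gy*Iz
    dIy = -Iy/T2 - (wL + gam*q*Gz)*Ix + gam*q*Gx*Iz
    dIz = (I0 - Iz)/T1 - gam*q*(Gx*Iy - Gy*Ix)
    return [p, qdd, dIx, dIy, dIz]

pars = dict(m=1e-12, w0=2*np.pi*5.5e6, Gm=2*np.pi*5.5e6/1e6, F0=1e-16,
            wd=2*np.pi*5.5e6, gam=2*np.pi*42.58e6, Gx=2e6, Gy=2e6, Gz=1e6,
            wL=2*np.pi*5.5e6, T1=50e-3, T2=100e-6, I0=-30.0)
sol = solve_ivp(eom, (0.0, 1e-3), [0, 0, 0, 0, pars["I0"]],
                args=tuple(pars.values()), method="DOP853", rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])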
Throughout the simulations, we used T_1 = 50 ms.

Figure <ref> shows the analytical (blue lines) and the numerical (red dots) results for a SiN membrane resonator <cit.>. Figures <ref>(a) and (b) show the results for a driving amplitude close to the thermal motion of the membrane at the given temperature (T = 4 K for (a) and T = 0.2 K for (b)). In these cases, the numerical data follow the analytical model very closely, as the condition Ω_R ≪ 1/T_2, 1/T_1 is strongly fulfilled. However, the integration time to resolve the frequency shift above the frequency noise is extremely long. By contrast, Figures <ref>(c) and (d) show the same case but with increased driving amplitude (z_0 = 10 pm for both cases). In this case, Ω_R is still smaller than 1/T_2 but no longer smaller than 1/T_1 (in our simulations T_1 = 50 ms). We see that the numerical frequency shift is now smaller than the analytical one; nevertheless, the integration time to resolve the frequency shift is strongly reduced to feasible values.

From Eq. (<ref>) we see that the spin dynamics should feature an additional effect that is not described by the analytical model: the jittering of the rotation frequency about the polarization axis (z in our case) due to the modulation of the Larmor frequency by the G_z gradient. The jittering is caused by the second term on the r.h.s. of the equation, which contains a magnetic-gradient-dependent contribution. As the oscillator is driven, it is possible to estimate this frequency jittering. For the case of the string resonator presented in the main text, the Larmor frequency is ω_L/2π = 5.5 MHz, the greatest driving strength (at T = 77 K) is z_0 = 20 pm, and the gradient along z is G_z = 1 MT/m. The maximum amplitude of the frequency modulation (recall that q oscillates) is γG_z z_0/2π ∼ 1 kHz, amounting to ∼ 0.02% of the precession frequency, so the effect is very limited for this resonator. For the membrane resonator, the greatest driving is 10 pm, giving a frequency jittering of ∼ 500 Hz, amounting here to ∼ 0.04% of the precession frequency. For the third resonator, the graphene sheet, the effect is even smaller due to the higher Larmor frequency: in this case, the jittering is smaller than 0.001% of the precession frequency. We conclude that this effect can be ignored for the resonators we are considering; the arithmetic is reproduced below.
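The corresponding back-of-the-envelope arithmetic, using γ/2π = 42.58 MHz/T for ¹H; note that the membrane precession frequency below is inferred from the quoted 0.04% and is therefore only indicative:

gamma_over_2pi = 42.58e6                 # Hz/T, 1H gyromagnetic ratio
Gz = 1e6                                 # T/m (G_z = 1 MT/m)
cases = [("string", 20e-12, 5.5e6),      # z_0 = 20 pm, f_L = 5.5 MHz
         ("membrane", 10e-12, 1.25e6)]   # z_0 = 10 pm, f_L inferred
for name, z0, fL in cases:
    jitter = gamma_over_2pi * Gz * z0    # Hz
    print(f"{name}: jitter ~ {jitter:.0f} Hz = {100*jitter/fL:.3f}% of f_L")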
§.§ Magnetic tip simulations

To extract a meaningful value for the magnetic field gradients G_i, we perform a numerical simulation of the magnetic field of a cobalt nanomagnet. The nanomagnet resembles a cylinder of length L = 1 µm and radius R = 50 nm. We are directly inspired by the nanomagnet presented in Ref. <cit.>. We assume that the nanomagnet is pre-magnetized to 1 T and we apply an external magnetic field. The latter is used to tune the region where the Larmor frequency matches the mechanical resonator's frequency; we want it to be as close as possible to the nanomagnet in order to harvest the highest magnetic field gradients. Hence, the external magnetic field can be in the direction opposite to the nanomagnet's z-magnetic field, depending on the device investigated, as the required magnetic field for frequency matching can be smaller than the nanomagnet-generated magnetic field. The nanomagnet magnetization should remain roughly constant due to the shape anisotropy, which turns our Co cylinder effectively into a hard magnet <cit.>.

Figure <ref>(a) shows the absolute value of the magnetic field in the vicinity of the nanomagnet (black rectangle) for the case of a SiN string with ω_0/2π = 5.5 MHz. In this case, we apply an external magnetic field of 0.2 T in the direction opposite to the nanomagnet's z-magnetic field. The region where the Larmor frequency of the spins would be resonant with the resonator's mechanical frequency (ω_L = ω_0) is shown as a black line. We can then extract the magnetic field gradients in the x and z directions of the spin reference frame. These gradients are displayed in Figs. <ref>(b) and <ref>(c). In the optimal case, the sample would be in a region where G_x is maximal and G_z minimal. In addition, the sample must be small enough that it does not overlap the right and left lobes; otherwise, the effect of the G_x gradient would cancel out due to its sign inversion. From this simulation, we extract the values of the gradients used in the main text, namely G_x = G_y = 2 MT/m (G_x = G_y by symmetry) and G_z = 1 MT/m.

§.§ Boltzmann vs statistical polarization

To justify the interest in looking at the statistical polarization of the spins instead of the Boltzmann polarization, we can simply plot the two values for a range of temperatures and numbers of spins in the sample. The Boltzmann polarization is given by Eq. (<ref>), whereas the statistical polarization is given by I_0 = √N/2 <cit.> (see the sketch below). The comparison is shown in Fig. <ref> for the string resonator. The black dashed line shows the case of 10^6 spins used in the main text. It is clear that for samples containing fewer spins the statistical polarization allows a much stronger signal than the Boltzmann polarization under the same conditions.
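A minimal comparison script (illustrative parameters; the coth-based Boltzmann expression is the one quoted in the supplemental equation above):

import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

def I0_boltzmann(N, wL, T, I=0.5):
    # Thermal equilibrium polarization, as in the supplemental equation above.
    x = hbar * wL / (2 * kB * T)
    return -N * ((2*I + 1) / np.tanh((2*I + 1) * x) - 1 / np.tanh(x)) / 2

def I0_statistical(N):
    return np.sqrt(N) / 2

wL, T = 2*np.pi*5.5e6, 4.0
for N in (1e3, 1e6, 1e9):
    print(f"N = {N:.0e}: |Boltzmann| = {abs(I0_boltzmann(N, wL, T)):.2e}, "
          f"statistical = {I0_statistical(N):.2e}")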
[Niinikoski_2020] T. O. Niinikoski, The Physics of Polarized Targets (Cambridge University Press, 2020). https://doi.org/10.1017/9781108567435

[Rand_2005] R. H. Rand, Lecture Notes on Nonlinear Vibrations (2005). http://audiophile.tam.cornell.edu/randdocs/nlvibe52.pdf

[Krack_2019] M. Krack and J. Gross, Harmonic Balance for Nonlinear Vibration Problems (Springer International Publishing, 2019). https://doi.org/10.1007/978-3-030-14023-6

[Breiding_2018] P. Breiding and S. Timme, "HomotopyContinuation.jl: A package for homotopy continuation in Julia", in Lecture Notes in Computer Science, Vol. 10931 LNCS (Springer International Publishing, 2018), pp. 458-465. https://doi.org/10.1007/978-3-319-96418-8_54

[Kosata_2022_SP] J. Košata, J. del Pino, T. L. Heugel, and O. Zilberberg, "HarmonicBalance.jl: A Julia suite for nonlinear dynamics using harmonic balance", SciPost Physics Codebases, 6 (2022). https://doi.org/10.21468/SciPostPhysCodeb.6

[Lugiato_1984] L. A. Lugiato, P. Mandel, and L. M. Narducci, "Adiabatic elimination in nonlinear dynamical systems", Physical Review A 29, 1438 (1984). https://doi.org/10.1103/PhysRevA.29.1438

[Bhaseen_2012] M. J. Bhaseen, J. Mayoh, B. D. Simons, and J. Keeling, "Dynamics of nonequilibrium Dicke models", Physical Review A 85, 013817 (2012). https://doi.org/10.1103/PhysRevA.85.013817

[Chitra_2015] R. Chitra and O. Zilberberg, "Dynamical many-body phases of the parametrically driven, dissipative Dicke model", Physical Review A 92, 023815 (2015). https://doi.org/10.1103/PhysRevA.92.023815

[Tsaturyan_2017] Y. Tsaturyan, A. Barg, E. S. Polzik, and A. Schliesser, "Ultracoherent nanomechanical resonators via soft clamping and dissipation dilution", Nature Nanotechnology 12, 776 (2017). https://doi.org/10.1038/nnano.2017.101

[Aspelmeyer_2014] M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, "Cavity optomechanics", Reviews of Modern Physics 86, 1391 (2014). https://doi.org/10.1103/RevModPhys.86.1391

[Greenberg_2009] Y. S. Greenberg, E. Il'ichev, and F. Nori, "Cooling a magnetic resonance force microscope via the dynamical back action of nuclear spins", Physical Review B 80, 214423 (2009). https://doi.org/10.1103/PhysRevB.80.214423

[Ghadimi_2018] A. H. Ghadimi, S. A. Fedorov, N. J. Engelsen, M. J. Bereyhi, R. Schilling, D. J. Wilson, and T. J. Kippenberg, "Elastic strain engineering for ultralow mechanical dissipation", Science 360, 764 (2018). https://doi.org/10.1126/science.aar6939

[Weber_2016] P. Weber, J. Güttinger, A. Noury, J. Vergara-Cruz, and A. Bachtold, "Force sensitivity of multilayer graphene optomechanical devices", Nature Communications 7, 12496 (2016). https://doi.org/10.1038/ncomms12496

[Kocman_2019] V. Kocman, G. M. Di Mauro, G. Veglia, and A. Ramamoorthy, "Use of paramagnetic systems to speed-up NMR data acquisition and for structural and dynamic studies", Solid State Nuclear Magnetic Resonance 102, 36 (2019). https://doi.org/10.1016/j.ssnmr.2019.07.002

[Bloch_1946] F. Bloch, "Nuclear induction", Physical Review 70, 460 (1946). https://doi.org/10.1103/PhysRev.70.460

[Degen_2007] C. L. Degen, M. Poggio, H. J. Mamin, and D. Rugar, "Role of spin noise in the detection of nanoscale ensembles of nuclear spins", Physical Review Letters 99, 250601 (2007). https://doi.org/10.1103/PhysRevLett.99.250601

[Hairer_1993] E. Hairer, G. Wanner, and S. P. Nørsett, Solving Ordinary Differential Equations I (Springer Berlin Heidelberg, 1993). https://doi.org/10.1007/978-3-540-78862-1
[Krass_2022] Marc-Dominik Krass, 3D Magnetic Resonance Force Microscopy, Ph.D. thesis, ETH Zurich (2022). https://doi.org/10.3929/ethz-b-000536378

[Longenecker_2012] J. G. Longenecker, H. J. Mamin, A. W. Senko, L. Chen, C. T. Rettner, D. Rugar, and J. A. Marohn, "High-gradient nanomagnets on cantilevers for sensitive detection of nuclear magnetic resonance", ACS Nano 6, 9637 (2012). https://doi.org/10.1021/nn3030628
 | http://arxiv.org/abs/2311.16273v1 | {
"authors": [
"Diego A. Visani",
"Letizia Catalini",
"Christian L. Degen",
"Alexander Eichler",
"Javier del Pino"
],
"categories": [
"cond-mat.mes-hall",
"physics.ins-det"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231127192330",
"title": "Near-resonant nuclear spin detection with high-frequency mechanical resonators"
} |
School of Physics, Peking University, Beijing 100871, China

The minimal heat that must be released to the reservoir when a single input x is mapped to a single output y is considered. It was previously argued by Zurek, in an informal style, that the minimal heat should be K(x|y). Recently, Kolchinsky has rigorously proved a generalized Zurek's bound in the Hamiltonian "two-point measurement" scheme, which implies a trade-off between heat, noise and protocol complexity in a computation from x to y. Here we propose another proof of the generalized Zurek's bound from a detailed fluctuation theorem, showing that the generalized Zurek's bound holds in any system that satisfies a DFT. Several applications of the bound are also discussed.

Derivation of the generalized Zurek's bound from a detailed fluctuation theorem
Jihai Zhu
December 15, 2023
===============================================================================

§ INTRODUCTION

It has been noticed that there is a fundamental relationship between thermodynamics and information since the birth of Maxwell's demon <cit.>. Today, when talking about this relationship, people often refer to Landauer's principle, which says that when some amount of information of the system is erased, a corresponding amount of heat must be released to the reservoir <cit.>:

(β/ln2)⟨Q⟩ ≥ S(p_X) - S(p_Y),

where p_X is the input ensemble, p_Y is the output ensemble, S(p) = -∑_i p_i log p_i is the Shannon entropy in bits, β is the inverse temperature of the heat reservoir, and ⟨Q⟩ is the average amount of heat released to the heat reservoir during the process. For simplicity of notation, hereafter we set β/ln2 ≡ 1. Thus Landauer's principle can be written as

⟨Q⟩ ≥ S(p_X) - S(p_Y).

It should be noted that what Landauer's principle considers is not a map from a single input x to a single output y, but a map from an input ensemble p_X to an output ensemble p_Y. It is thus natural to ask whether there exists a lower bound on the cost of mapping a single input x to a single output y.

As a first attempt at answering the question, Zurek <cit.> proposed that the minimal cost is given by the Kolmogorov relative complexity K(x|y). In spite of its simplicity, this bound is incomplete, because we can argue by intuition that this minimal cost must depend on the specific protocol 𝒫 that accomplishes the mapping: for example, one can easily imagine a protocol that deterministically maps a particular binary sequence of 1GB zeros-and-ones into 1GB zeros without any heat release. Therefore, the proper question should be: given a protocol 𝒫 which accomplishes a (stochastic) mapping on a state space 𝒳, is there a minimal amount of heat that must be released to the reservoir when a single input x ∈ 𝒳 is mapped to a single output y ∈ 𝒳?

As an answer to the above question, Kolchinsky <cit.> has recently derived a so-called "generalized Zurek's bound", namely

Q(x→y) ≥^+ K(x|y,𝒫) - log(1/P(y|x)),

where 𝒫 is the protocol, Q(x→y) is the average heat released to the reservoir during a mapping from x to y, and P(y|x) is the probability of receiving y as the output when x is the input of the protocol 𝒫 (Fig. <ref>). We write a + or × sign on top of an (in)equality sign (here rendered as a superscript) to mean that the (in)equality holds up to a constant additive or multiplicative term. In Appendix A we show that this constant term is actually negligible in practice.
Kolchinsky's derivation of (<ref>) is rigorous, and is based on the Hamiltonian "two-point measurement" scheme. In this paper, we further generalize Kolchinsky's result to any system that satisfies a detailed fluctuation theorem (DFT), since the "two-point measurement" scheme satisfies a DFT while not all quantum processes featuring a DFT can be represented in the "two-point measurement" scheme <cit.>. We first lay out some notations and techniques of algorithmic information theory (AIT) and stochastic thermodynamics that are used in this paper. After that, we derive the bound (<ref>) under the assumption that the system satisfies a DFT. At last, we discuss two applications of the bound.

§ PRELIMINARIES

§.§ Algorithmic information theory

We briefly sketch the notations and results of algorithmic information theory (AIT) that are used in this paper. For a more complete introduction, readers can refer to <cit.>, <cit.> and <cit.>.

AIT is a theory of the "complexity" of objects (sequences, numbers, functions, etc.). By the "complexity" of an object x, usually denoted K(x), we mean the minimal resource required to "describe" x, which is often pinned down as (the length of) the shortest program p that, when set as input for a (prefix-free) universal Turing machine U, generates x as the output, that is:

K(x) ≡ min{l(p) : U(p) = x}.

As can be seen from above, K(x) depends on the artificially chosen universal Turing machine U. However, this subjectivity is harmless: there exists an interpreter program int(U,U') between any two universal Turing machines U and U', so any program on U' can be run on U by prepending int(U,U') to the program. To be specific,

K_U(x) - K_U'(x) ≤ l(int(U,U')),

and vice versa, so that

K_U(x) =^+ K_U'(x),

and nothing is lost in writing K(x) instead of K_U(x).

The Kolmogorov relative complexity K(x|y) is similarly defined as (the length of) the shortest program p that, when set as input for a (prefix-free) universal Turing machine U coupled with a parameter tape writing y, which can be checked at any time during the computation, generates x as the output. To be specific:

K(x|y) ≡ min{l(p) : U(p,y) = x}.

The following statements about Kolmogorov complexity hold:

* For every x, y and z, K(x|z) ≤^+ K(x,y|z) ≤^+ K(x|y,z) + K(y|z).
* Given a set A with |A| elements, for most x ∈ A, K(x|A) =^+ log|A|.
* For every (computable) probability measure P(x) (that is, ∑_x P(x) = 1), S_sh(P) =^+ ∑_x P(x)K(x|P), where S_sh(P) is the Shannon entropy of P.
* For every (computable) relative probability semimeasure P(x|y) (that is, ∑_x P(x|y) ≤ 1 for every y), the coding theorem implies the inequality K(x|y,P) ≤^+ log(1/P(x|y)); a small illustration of the mechanics behind this inequality is sketched below.
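The inequality in the last item ultimately rests on the Kraft inequality for prefix codes: the Shannon-Fano code lengths ⌈log(1/P(x|y))⌉ always satisfy it, and therefore define valid inputs for a prefix-free machine. A toy numerical illustration (not from the original paper):

import math, random

random.seed(0)
p = [random.random() for _ in range(8)]
p = [pi / sum(p) for pi in p]                  # a computable measure P(.|y)
lengths = [math.ceil(math.log2(1 / pi)) for pi in p]
assert sum(2**-l for l in lengths) <= 1        # Kraft inequality holds
print(list(zip([round(pi, 3) for pi in p], lengths)))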
§.§ Stochastic thermodynamics

Stochastic thermodynamics is an emergent field that uses stochastic equations to study mesoscopic, nonequilibrium systems. Here we summarize the notations and definitions used in this paper; more succinct introductions can be found in <cit.> and <cit.>.

We consider a system with state space 𝒳 coupled to a heat reservoir at inverse temperature β. There may be manipulations and/or drivings acting on the system. We assume the system undergoes a time-inhomogeneous Markov chain, with jump rate from state x to state x' at time t given by k_{x'x}(λ(t)), where λ(t) is the manipulation protocol acting on the system. As time passes from t_0 to t_{n+1}, the system may experience n jumps, from x_{i-1} to x_i at time t_i (i = 1,2,⋯,n), and travel a trajectory 𝐱(t):

𝐱(t) = x_i, t_i ≤ t < t_{i+1}.

Since the stochasticity of the system comes from its coupling with the heat reservoir, every time a jump occurs, heat is released from the system to the reservoir. As is customary <cit.>, the heat released to the reservoir during a jump from x to x' is assigned to be

q_{x'x} ≡ log(k_{x'x}/k_{xx'}),

and the total heat released along a trajectory 𝐱 is

q(𝐱) = ∑_{i=1}^n log[k_{x_i x_{i-1}}(t_i)/k_{x_{i-1} x_i}(t_i)].

On the other hand, the probability density P_𝐱(λ) for a trajectory 𝐱 to occur under the forward protocol λ can be expressed as

P_𝐱(λ) = p_{x_0}(t_0) P_{𝐱|x_0}(λ),

where p_{x_0}(t_0) is the probability of the system being in state x_0 at the beginning of the trajectory, and P_{𝐱|x_0}(λ) is the relative probability density, given by

P_{𝐱|x_0}(λ) = ∏_{i=1}^n k_{x_i x_{i-1}}(t_i) · 2^{-∑_{i=1}^{n+1} ∫_{t_{i-1}}^{t_i} e_{x_{i-1}}(t) dt},

where e_x ≡ ∑_{x'≠x} k_{x'x} is the escape rate of state x.

We now consider the time-reversed process, or backward process, where both the trajectory and the protocol evolve in reverse order in time. The concept of the backward process is critical in the derivation of various fluctuation theorems. We define t̄ ≡ t_{n+1} - (t - t_0), 𝐱̄(t̄) ≡ 𝐱(t), λ̄(t̄) ≡ λ(t), and refer to P_{𝐱̄|x_n}(λ̄) as the relative probability density of the backward trajectory.

Remarkably, the ratio of P_{𝐱|x_0}(λ) to P_{𝐱̄|x_n}(λ̄) is given by the heat q(𝐱) released during the forward trajectory 𝐱 as in equation (<ref>):

P_{𝐱|x_0}(λ)/P_{𝐱̄|x_n}(λ̄) = 2^{q(𝐱)}.

The above equation is the so-called detailed fluctuation theorem (DFT); a minimal numerical illustration follows.
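As a sanity check of (<ref>), consider a two-state system with time-independent rates (so that λ̄ = λ): for any fixed sequence of jumps, the escape-rate exponentials cancel between the forward and backward weights, and the ratio of the rate products equals 2^{q(𝐱)}. A toy verification with illustrative rates, not from the paper:

import math

k = {(0, 1): 2.0, (1, 0): 0.5}      # jump rates k_{x' x}: key (x', x) is the jump x -> x'

trajectory = [0, 1, 0, 1]           # states visited; three jumps
jumps = list(zip(trajectory[:-1], trajectory[1:]))

q = sum(math.log2(k[(b, a)] / k[(a, b)]) for a, b in jumps)   # total heat, base 2
forward = math.prod(k[(b, a)] for a, b in jumps)
backward = math.prod(k[(a, b)] for a, b in jumps)             # reversed jump sequence
assert math.isclose(forward / backward, 2**q)
print(q, forward / backward)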
§ DERIVATION OF THE BOUND

In this section, we derive the generalized Zurek's bound under the assumption that the system satisfies a DFT.

First, let us rewrite inequality (<ref>), which we are going to prove:

K(x|y,𝒫) ≤^+ log(1/P(y|x)) + Q(x→y),

where P(y|x) and Q(x→y) can be further expressed as

P(y|x) = ∑_{𝐱: x_f = y} P_{𝐱|x}(λ),
Q(x→y) = [1/P(y|x)] ∑_{𝐱: x_f = y} P_{𝐱|x}(λ) q(𝐱),

where x_f is the final state of the trajectory 𝐱.

Since P(·|·) and Q(·→·) can be computed from 𝒫, to prove inequality (<ref>) we only have to prove that the right-hand side can be written as the negative logarithm of a semimeasure of x given y (the coding theorem), that is:

∑_x 2^{-(log(1/P(y|x)) + Q(x→y))} = ∑_x P(y|x) 2^{-Q(x→y)}
≤ ∑_x P(y|x) · [1/P(y|x)] ∑_{𝐱: x_f = y} P_{𝐱|x}(λ) 2^{-q(𝐱)}
= ∑_x ∑_{𝐱: x_f = y} P_{𝐱|x}(λ) 2^{-q(𝐱)}
= ∑_x ∑_{𝐱: x_f = y} P_{𝐱|x}(λ) · P_{𝐱̄|y}(λ̄)/P_{𝐱|x}(λ)
= ∑_x ∑_{𝐱̄: x̄_f = x} P_{𝐱̄|y}(λ̄) = 1,

where the first inequality comes from equation (<ref>) and Jensen's inequality, and the third equality comes from the detailed fluctuation theorem (<ref>). Thus inequality (<ref>) is proved.

Considering the inequality K(x|y) ≤^+ K(x|y,𝒫) + K(𝒫), as deduced from (<ref>), we can write a weaker bound:

Q(x→y) + log(1/P(y|x)) + K(𝒫) ≥^+ K(x|y).

As noted by Kolchinsky <cit.>, the above inequality expresses a trade-off between heat, noise and protocol complexity that has to be paid for the erased information in a computation from x to y.

§ APPLICATIONS

In this section, we discuss two applications of the generalized Zurek's bound just proved.

§.§ Cost of erasure of an input sequence

Imagine x is a binary sequence of length n (n may be large) and we want to set all the digits of x to zero (to erase the sequence). If x is a very random sequence, then K(x|𝒫) ≈ n. Since K(0^n|𝒫) ≈ log n ≪ n, the minimal heat release we have to pay for the erasure is approximately n - log(1/P(0^n|x)), so that if we hope to perform the erasure with a relatively high success rate, we have to pay a cost proportional to the length of the sequence to be erased, which is consistent with the prediction given by Landauer's principle.

However, if x contains some kind of internal regularity, then even if we are unaware of that regularity and keep 𝒫 unchanged, K(x|𝒫) can still be much smaller, and less heat may be released in the erasure. For example, if all the bits at odd positions in x are 1, that is, x is of the form x_0 1 x_2 1 ⋯ x_{n-2} 1, then K(x|𝒫) ≈ n/2, so that the minimal heat release to perform the erasure is reduced by half in virtue of this regularity, a result that cannot be concluded from Landauer's principle.

§.§ Condition for heat functions of nondeterministic Turing machines

In this part, we generalize previous results on the thermodynamic properties of Turing machines <cit.> to nondeterministic Turing machines.

A physical Turing machine is a protocol 𝒫 that mimics the input-output relation P: 𝒳→𝒳 of a mathematical Turing machine; 𝒫 is called a realization of P. A Turing machine is called (non)deterministic if P is (non)deterministic.

Here (as in <cit.>), we consider only the thermodynamic properties of physical Turing machines. Suppose a physical Turing machine 𝒫 is coupled with a heat reservoir. When 𝒫 is given an input x, an average amount of heat Q(x) is released to the reservoir, that is,

Q(x) ≡ ∑_y P(y|x) Q(x→y).

Q(x) is called the heat function of 𝒫.

It is proved in <cit.> that, when P is deterministic, the heat function Q(x) is realizable iff for every y ∈ 𝒳,

∑_{x: P(x)=y} 2^{-Q(x)} ≤ 1.

Considering the form of the sum after the first equality in formula (<ref>), we can easily generalize inequality (<ref>) to any nondeterministic function P: the heat function Q(x→y) is realizable iff for every y ∈ 𝒳,

∑_x P(y|x) 2^{-Q(x→y)} ≤ 1

(see the numerical check at the end of this subsection).

A dominating realization of P is a realization 𝒫_dom whose heat function Q_dom(x) is less than the heat function Q(x) of any other realization of P, up to a constant additive term which may depend on Q(·). It is shown in <cit.> that for any realization of any deterministic P, the heat function Q(x) obeys

Q(x) ≥^+ K(x|P(x)) - K(Q,P).

Thus we can assign Q_dom(x) = K(x|P(x)) as the dominating heat function of P.

We can easily generalize inequality (<ref>) to any nondeterministic function P, since inequality (<ref>) is actually a deterministic restriction of the generalized Zurek's bound (<ref>). After rearrangement, and noting that 𝒫 = (P,Q), we have

Q(x→y) ≥^+ K(x|y,Q,P) - log(1/P(y|x)) ≥^+ K(x|y) - log(1/P(y|x)) - K(Q,P),

and the dominating heat function can be assigned as Q_dom(x→y) = K(x|y) - log(1/P(y|x)).
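A toy numerical check of the realizability condition; the channel and heat values below are chosen by hand for illustration and are not from the paper:

import math

X = Y = range(3)
P = {(y, x): 1/3 for x in X for y in Y}              # uniformly noisy protocol
Q = {(x, y): math.log2(3) for x in X for y in Y}     # candidate heat function

for y in Y:
    s = sum(P[(y, x)] * 2**(-Q[(x, y)]) for x in X)
    assert s <= 1 + 1e-12
    print(f"y = {y}: sum_x P(y|x) 2^(-Q) = {s:.3f} <= 1")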
§ FUTURE WORK

In this paper, we have studied applications of Kolmogorov complexity in stochastic thermodynamics, especially the relationship between Kolmogorov complexity and fluctuation theorems. In Appendix B we introduce an algorithmic definition of stochastic entropy that satisfies an integral fluctuation theorem.

So far, there are still many interesting problems at the intersection of AIT and thermodynamics that cannot be solved in our paradigm. For example, consider a process that realizes a computation of a very hard problem f (such as the 3SAT problem) without erasing the input, that is, P(x) = (x, f(x)) with a relatively high success rate. Since f is very hard to compute, it is reasonable to expect that this process should cost a lot (time, energy, etc.), but according to bound (<ref>), K(x, f(x)|x) ≈ 0, so no lower bound on the cost of this process can be obtained.

To solve the above problem, it may be helpful to consider resource-bounded Kolmogorov complexity, or to restrict our consideration to protocols that can be constructed from a limited set of subprograms (such as in <cit.> and <cit.>). It would yield insightful results to apply the concepts and techniques of the well-established domain of computational complexity theory to the emergent field of stochastic thermodynamics.

§ THE CONSTANT TERM IN THE GENERALIZED ZUREK'S BOUND

Many (in)equalities in this paper feature a constant additive term. In this appendix, we show that the constant term can actually be omitted in practical applications.

All the constant terms essentially come from the artificial choice of a universal Turing machine, as in the length of the interpreter in inequality (<ref>). To be more specific, it is reasonable to assume that l(int(U,U')) ≈ K_U(U'). We also suppose that U' is a universal Turing machine with s states and a symbols. Since there are in total 2^{s²a²} (nondeterministic) Turing machines with s states and a symbols, we may further assume K_U(U') ≈ log(2^{s²a²}) = s²a². According to <cit.>, the smallest universal Turing machines achieve s²a² ≈ 10² bits. Converting bits to Joule per Kelvin,

10² bits = 10^{-21} JK^{-1},

which is a negligible amount in comparison to practically interesting thermodynamic quantities.

§ AN ALGORITHMIC DEFINITION OF STOCHASTIC ENTROPY

In this appendix, we introduce an algorithmic definition of stochastic entropy following <cit.> and <cit.>, and derive an integral fluctuation theorem (IFT) for it.

Consider a time-inhomogeneous Markovian system with (meso)state space 𝒳. It is common practice to assign a stochastic entropy (the total entropy of the composite system of the system and the reservoir) to every single state x ∈ 𝒳 <cit.>:

s_sto(x) ≡ -log p_x(t) - f_x(t),

where p_x(t) is the probability of the system being in state x at time t, and f_x(t) is the free energy of state x. Because p_x(t) depends on the prepared state distribution p_x(0), the stochastic entropy of the system depends not only on the state x but also on the prepared state distribution at time t = 0, which is not a favorable property, since we prefer the entropy of the system to be a pure state function.

As in <cit.> and <cit.>, we can define an entropy function of the system that depends only on x. To be concrete, we define the algorithmic entropy as

s_alg(x) ≡ K(x|𝒫) - f_x(t).

Since p_x(t) can be computed from 𝒫, the coding theorem implies that K(x|𝒫) ≤^+ -log p_x(t), so that s_alg(x) ≤^+ s_sto(x).

We justify the above definition by demonstrating that s_alg(x) reduces to the Boltzmann and Gibbs entropies in different senses. If x is restricted to a set A with |A| elements, then K(x|A) ≈ log|A|, which is the Boltzmann entropy. For every probability distribution P(x), S_sh(P) =^+ ∑_x P(x)K(x|P), which means that the Gibbs entropy is the ensemble average of the algorithmic entropy for any ensemble.

What's more, in the following we derive the integral fluctuation theorem for s_alg(x), thus establishing a second law for the algorithmic entropy.
<cit.> has derived the integral fluctuation theorem for a single jump; here we generalize the previous result to any trajectory. The integral fluctuation theorem states that for every prepared state distribution p_x(0),

⟨2^{-Δs_alg(𝐱)}⟩_𝐱 ≤^× 1,

where Δs_alg(𝐱) is the algorithmic entropy production along the trajectory 𝐱, and ⟨·⟩_𝐱 means averaging over trajectories.

From definition (<ref>), Δs_alg(𝐱) can be expressed as

Δs_alg(𝐱) = q(𝐱) + K(x_f|𝒫) - K(x_0|𝒫).

Thus we have

⟨2^{-Δs_alg(𝐱)}⟩_𝐱 = ∑_𝐱 P_𝐱(λ) 2^{-q(𝐱) - K(x_f|𝒫) + K(x_0|𝒫)}
= ∑_𝐱 p_{x_0}(0) P_{𝐱|x_0}(λ) [P_{𝐱̄|x_f}(λ̄)/P_{𝐱|x_0}(λ)] 2^{-K(x_f|𝒫) + K(x_0|𝒫)}
= ∑_𝐱 p_{x_0}(0) P_{𝐱̄|x_f}(λ̄) 2^{-K(x_f|𝒫) + K(x_0|𝒫)}
≤^× ∑_𝐱 p_{x_0}(0) P_{𝐱̄|x_f}(λ̄) 2^{K(x_0|x_f,𝒫)}
≤^× ∑_𝐱 p_{x_0}(0) P_{𝐱̄|x_f}(λ̄) [1/P(x_0|x_f,𝒫)]
= 1,

where the second equality comes from (<ref>), the first inequality comes from (<ref>), and the second inequality comes from the coding theorem.

As shown in Appendix A, the constant multiplicative factor in (<ref>) is negligible; thus, by Jensen's inequality, we arrive at the second law for the algorithmic entropy:

ΔS_alg ≡ ⟨Δs_alg⟩_𝐱 ≥ 0.
 | http://arxiv.org/abs/2311.16269v2 | {
"authors": [
"Jihai Zhu"
],
"categories": [
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.stat-mech",
"published": "20231127191931",
"title": "Derivation of the generalized Zurek's bound from a detailed fluctuation theorem"
} |
Mapping quantum circuits to shallow-depth measurement patterns based on graph states
====================================================================================

We study the global forms of class 𝒮[A_{N-1}] 4d 𝒩 = 2 theories by deriving their defect groups (charges of line operators up to screening by local operators) from Coulomb branch data. Specifically, we employ an explicit construction of the BPS quiver for the case of full regular punctures to show that the defect group is (ℤ_N)^{2g}, where g is the genus of the associated Riemann surface. This determines a sector of surface operators in the 5d symmetry TFT. We show how these can also be identified from dimensional reduction of M-theory.

January 14, 2024
===============================================================================

§ INTRODUCTION AND OVERVIEW

In exploring properties of quantum field theories that cannot be accessed through perturbative methods, symmetry is one of the precious few footholds available. In particular, anomalies of global symmetries provide quantities that, by the classic anomaly matching argument of tHooft:1979rat, are invariant under renormalisation group flow.

A modern viewpoint on anomalies is in terms of invertible theories in one dimension higher, so that an anomalous theory lives on the boundary of an anomaly theory. Upon a background gauge transformation, the anomalous phases from the boundary and the bulk cancel, rendering the combined system anomaly-free; this is anomaly inflow <cit.>. In this framework, anomaly matching is the statement that anomaly theories are topological and therefore invariant under RG flow.

The concept of a symmetry, traditionally seen as a transformation on the fields that leaves the action and (up to an anomalous phase) partition function invariant, has in recent years been reexamined and generalised. The key observation is that the presence of a traditional symmetry is equivalent to the existence of operators/defects, supported on closed codimension-1 submanifolds of spacetime, and invariant under continuous deformations of those submanifolds. The group law of the symmetry is expressed in the fusion algebra of the corresponding defects. This perspective suggests the generalisation to p-form symmetries <cit.>, whose charged objects are p-dimensional extended operators acted on by codimension-(p + 1) symmetry operators. These may in addition mix with each other in higher group-like structures <cit.>. Another natural generalisation is to non-invertible symmetries <cit.>.

A similar picture to that of anomaly inflow applies when one has several field theories with identical local dynamics, but different spectra of extended operators, that is, different global structures <cit.>. The theories can then be viewed together as a relative field theory <cit.> living on the boundary of a non-invertible TQFT, called the symmetry TFT (SymTFT) <cit.>, in one dimension higher. The set of all global forms is a property of the SymTFT itself, while a topological boundary condition picks out a boundary theory with a particular global structure (see <ref> below).
Indeed, the SymTFT and its topological boundary conditions can be studied abstractly, quite apart from any dynamical boundary theory, just as a group can be studied independently of its representations <cit.>.

In the remainder of this section, we outline the structure of the paper, introduce the central concepts, summarise the main points and give some directions for future work.

DelZotto:2022ras discussed a concrete example of the relation between global structures and the symmetry TFT, namely the case of a 4d QFT that has a Coulomb phase. We review it in some detail in <ref>.

In this work, we apply the story of <cit.> to the case of the class 𝒮 construction <cit.>. We start from the 6d 𝒩 = (2,0) SCFT of type 𝔤 and get a 4d 𝒩 = 2 supersymmetric theory by compactifying on a Riemann surface Σ_{g,p} of genus g and with p punctures. We study the most straightforward case, where the punctures are all regular and full (see <cit.> for an elaboration on this) and 𝔤 = A_{N-1} = 𝔰𝔲(N). We call the 4d theory 𝒮[𝔰𝔲(N), Σ_{g,p}]. Concretely, we claim that the defect group <cit.> of this theory is

𝔻(𝒮[𝔰𝔲(N), Σ_{g,p}]) ≅ (ℤ_N)^{2g};

this then determines a sector of surface operators in the SymTFT.

There are a priori good reasons to expect this defect group, and indeed special cases of the claim have appeared in earlier literature. Tachikawa:2013hya considered the case without punctures and (though the term defect group was not established) argued that

𝔻(𝒮[𝔤, Σ_{g,0}]) ≅ H^1(Σ_{g,0}; Z(G)),

where Z(G) is the centre of the simply connected group G with algebra 𝔤; this agrees with (<ref>). Bhardwaj:2021pfz proposed that regular untwisted punctures do not affect the defect group in general, so that in particular (<ref>) holds also for p > 0. They verified this expectation in the cases 𝒮[𝔰𝔲(N), Σ_{0,4}], 𝒮[𝔰𝔲(2), Σ_{g,p}] and 𝒮[𝔰𝔲(N), Σ_{1,p̃}], where Σ_{1,p̃} denotes the torus with p̃ simple punctures (see <ref>), as well as many theories outside the scope of this paper. Our confirmation of (<ref>) adds another class of examples to this list.

When 𝔤 = 𝔰𝔲(N) and there are no punctures, one can derive the defect group via the SymTFT that descends from the M-theory Chern–Simons term when realising the 6d theory as the worldvolume theory of a stack of M5-branes; we review this in <ref> following <cit.> and <cit.>. Furthermore, we add punctures using a geometric construction <cit.> and argue that the defect group is unaffected, as expected.

The main result of this work is presented in <ref>. We make use of the known charge lattices of class 𝒮 theories to verify (<ref>) for small N, g and p by explicitly calculating the BPS quiver and defect group. This serves as a check of the paradigm from <cit.> that the SymTFT can be accessed from Coulomb branch data. In <ref>, we do the same calculation for a class of theories where the punctures are not full. In <ref>, we make some observations on the structure of the BPS quivers to motivate why (<ref>) should hold.

There are several interesting avenues for generalisation and application of our methods in future work. Ideally, one would like a systematic, algorithmic construction of BPS quivers for class 𝒮 theories with arbitrary simply laced gauge algebra 𝔤 and any configuration of regular punctures. Several proposals in this direction have been made <cit.>, but the constructions are complex and it is not clear to what extent they can be generalised. Here the work of Goncharov:2019gzh provides a promising starting point. Once one knows the BPS quiver of a theory, the natural application is to compute its BPS spectrum using the mutation method of <cit.>.
In particular, this enables one to calculate the Schur index according to the conjecture of Cordova:2015nma; comparing this to the derivation of the index as a TQFT correlator on Σ_{g,p} <cit.> would be a good cross-check. The argument in <ref> on adding regular punctures is somewhat schematic; it would be useful to reproduce it at a higher level of rigour. Finally, filling in the proof sketch of <ref> would establish a result of general interest in the mathematics of cluster algebras.

§ DEFECT GROUP AND SYMTFT IN THE COULOMB PHASE

§.§ Wilson and 't Hooft lines in Maxwell theory

As preparation for the general case, let us consider four-dimensional Maxwell theory with a U(1) 1-form gauge field A. The field strength F = dA is a closed but not necessarily exact 2-form. We wish the Wilson loop W_q(γ) = e^{iq∮_γ A} of electric charge q to be well-defined. Consider deforming the loop from γ to γ' along a surface Σ with ∂Σ = γ' - γ; we find W_q(γ') = e^{iq∫_Σ F} W_q(γ). In particular, from γ' = γ we must have q∮_Σ F ∈ 2πℤ for all closed surfaces Σ. Next, we insert an 't Hooft loop H_g(ℓ), defined by sourcing a magnetic flux: ∮_Σ F/2π = g whenever Σ links ℓ. In the presence of W_q, this is consistent with the previous condition when qg ∈ ℤ; this is the Dirac quantisation condition. Thus the set of allowed Wilson lines constrains the allowed 't Hooft lines, and vice versa.

Let us now add massive dynamical states to the theory: a particle with electric charge q_e and a monopole with magnetic charge q_m. Integrating them out is equivalent to inserting Wilson and 't Hooft operators W_{q_e} and H_{q_m} supported on their worldlines in the IR path integral; thus these must be allowed line operators. In particular we have

q q_m ∈ ℤ, g q_e ∈ ℤ, q_e q_m ∈ ℤ,

where q and g are the electric and magnetic charges of any other allowed lines in the theory. Given the dynamical charges q_e and q_m, the set of solutions (q, g) to (<ref>) up to screening by the dynamical states is called the defect group, in this case ℤ_{q_e q_m} × ℤ_{q_e q_m}. Here the most general line is dyonic, with simultaneous electric and magnetic charges (q, g). The condition for two such lines to be well-defined in each other's presence is the Dirac–Schwinger–Zwanziger quantisation condition <cit.> qg' - gq' ∈ ℤ. A maximal subgroup of lines satisfying this will be (the Pontryagin dual of) the 1-form symmetry. We will elaborate on this in the next section.

The reasoning so far holds regardless of the normalisation of the gauge field A. If the gauge transformations are A → A - dθ with the gauge parameter θ valued in ℝ/2πℤ, invariance of the Wilson line W_q under large gauge transformations requires q ∈ ℤ. For convenience, however, we may rescale electric charges and the gauge field by 1/q_e and magnetic charges by q_e without affecting the above discussion, ensuring that the electric and magnetic charges of dynamical states are all integers, while lines may carry fractional electric charge. We will take this convention in the remainder of the paper.

§.§ Lines in the general case

In this section we review the main points of <cit.>, in particular the notion of a defect group and its relation to the SymTFT. This generalises the arguments above in three respects: we allow for r ≥ 1 gauge fields, we allow dyonic dynamical states as well as lines, and we also allow for flavour charge. For in-depth discussions of the various charge lattices, see <cit.>.

We are interested in the 1-form symmetry of a four-dimensional U(1)^r gauge theory with charged massive dynamical states. This theory has generalised dyonic line operators L, carrying electric and magnetic charges α = (e^1, m^1, …, e^r, m^r) forming a lattice Γ_L ⊂ ℚ^{2r}.
To be well-defined in the presence of charged states, they need to satisfy the quantisation condition Γ_L, Γ⊂, where*α,*α' = ∑_i=1^r (^i '^i - ^i '^i)is the Dirac pairing and Γ⊂^2r is the lattice of dynamical charges, assumed to be of full rank. We denote this as Γ_L ⊂Γ^*, where Γ^* = {*α∈^2r | *α, Γ⊂} is the dual of Γ with respect to the Dirac pairing.The worldlines of dynamical states form a subgroup Γ⊂Γ_L. As these lines can end on a local operator insertion, they cannot be charged under the one-form symmetry (the local operator is said to screen the line). The 1-form symmetry charges therefore take values in the defect group = Γ^* / Γ. Because the Dirac pairing is perfect, it induces an isomorphism Γ^* ≅_(Γ, ) via *α↦·, *α, and we can express the defect group in terms of its restriction Γ→_(Γ, ) as ≅.A genuine line operator needs to be well-defined not only in the presence of dynamical states, but also in the presence of other lines. A configuration of two lines with charges *α, *α' is subject to a phase ambiguity ^2π*α,*α', so this requires the stronger condition Γ_L, Γ_L⊂. There are multiple maximal solutions Γ_L (maximal isotropic sublattices of Γ^*) to this constraint, each defining a global structure of the theory.Let us make this reasoning more concrete. Since the Dirac pairing is skew-symmetric, there is a basis {γ^e,i, γ^m,i}_i=1^r for Γ where it takes the simple formγ^e,i, γ^m,j = n_i δ_ij γ^e,i, γ^e,j = γ^m,i, γ^m,j = 0with n_i ∈. Then, since we assume Γ has full rank, it is clear that {1/n_iγ^e,i, 1/n_iγ^m,i}_i=1^r is a basis for Γ^*, and the defect group is= Γ^* / Γ≅⊕_i=1^r ( / n_i )^2.Indeed, the Dirac pairing expressed as a matrix ^αβ = γ^α, γ^β becomes in this basis= ⊕_i=1^r (0 n_i -n_i 0),and we see thatagrees with (<ref>). The integers n_i are the non-zero invariant factors of —the diagonal entries in its Smith normal form.The picture we have outlined is easily generalised to incorporate flavour. We allow for the possibility of f flavour charges (a rank f flavour group) so that Γ has rank 2r + f. These have zero Dirac pairing with any other charge; they generate . We quotient out by the flavour charges to define reduced charge latticesΓ̃ = Γ/, Γ̃^* = Γ^*/⊗_and define the defect group by = Γ̃^* / Γ̃. As above, the Dirac pairing defines an isomorphism Γ̃^* ≅_(Γ̃, ) with restriction Q̃Γ̃→_(Γ̃, ), such that≅≅and≅⊕^f.The central claim of <cit.> is that the symmetry TFT capturing the choice of global structure explained above is described by the actionS = /2π ^αβ/2∫_X_5 B_α∧B_βwhere the 4d theory lives on _4 ⊂∂ X_5.[Importantly, ∂ X_5 can have components other than _4; in the simplest case X_5 = _4 × [0,1] as in <ref>.] The fields B_α are 2-form higher (1) gauge fields. (Properly, they are cocycles in differential cohomology <cit.>; see also <cit.>. Roughly speaking, they are locally defined 2-forms such that 1/2πB_α are globally defined 3-forms with integer periods.[Compared to the fields b_α of <cit.>, our fields are B_α = 2π b_α. This is the conventional normalisation used in physics, although the b_α are mathematically more natural.]) In the special basis introduced above, the action reduces toS = /2π∑_i = 1^r n_i ( ∫_X_5 B_e,i∧B_m,i - 1/2∫_∂ X_5 B_e,i∧ B_m,i).We disregard the boundary terms, which are local in the boundary gauge fields, and focus on the bulk terms, which describe a product of r BF theories <cit.>. 
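To make the passage from the Dirac pairing to the defect group concrete, here is a minimal computational sketch (not part of the original text): it recovers the invariant factors n_i of an integer pairing matrix from greatest common divisors of its minors, and reads off the torsion part (the defect group) and the free rank (the flavour rank). The matrix B below is a hypothetical toy example, written directly in the block-diagonal basis of (<ref>), with two blocks of n_i = 3 and two flavour directions; it is not the BPS-quiver pairing of any particular theory.

```python
from itertools import combinations
from math import gcd

def det(m):
    # Laplace expansion; adequate for the small matrices used here.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def invariant_factors(B):
    """Invariant factors d_k / d_{k-1}, where d_k = gcd of all k x k minors."""
    n = len(B)
    d = [1]
    for k in range(1, n + 1):
        g = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                g = gcd(g, abs(det([[B[i][j] for j in cols] for i in rows])))
        d.append(g)
    return [d[k] // d[k - 1] if d[k - 1] else 0 for k in range(1, n + 1)]

# Hypothetical pairing: two symplectic blocks with n_i = 3, plus two flavour
# (zero) directions.
B = [[ 0, 3,  0, 0, 0, 0],
     [-3, 0,  0, 0, 0, 0],
     [ 0, 0,  0, 3, 0, 0],
     [ 0, 0, -3, 0, 0, 0],
     [ 0, 0,  0, 0, 0, 0],
     [ 0, 0,  0, 0, 0, 0]]

facs = invariant_factors(B)
torsion = [n for n in facs if n > 1]   # defect group: direct sum of Z/n
free = facs.count(0)                   # flavour rank f
print(torsion, free)                   # [3, 3, 3, 3] 2
```

For this toy matrix the script reports torsion [3, 3, 3, 3] and free rank 2, the pattern (Z_N)^{2g} with N = 3, g = 2 and f = 2 that the formulas of this paper would assign to one full puncture.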
The terms with n_i = 1 describe trivial (invertible) field theories, but those with n_i > 1 give a non-invertible field theory containing two-dimensional surface operators ending on the 4d line operators. They form the defect group , and their linking relationsexp(∮_Σ B_e,i) exp(∮_Σ' B_m,j) = exp(2π/n_iδ_ijlink(Σ, Σ'))capture the phase ambiguity between 4d lines. A topological boundary condition determines the set of line operators in the absolute theory; see <ref>. From this perspective it is clear that the defect group is a property of the SymTFT bulk, while a maximal isotropic sublattice Γ_L / Γ corresponds to a choice of topological boundary condition, as derived in the 3d case in <cit.>. § REDUCTION FROM M-THEORY One derivation of the symmetry TFT of the class [(N), Σ_g,p] theory proceeds by dimensional reduction of M-theory. We realise the = (2,0) theory on a six-dimensional spacetime _6 as the worldvolume theory on a stack of N coincident M5-branes. Witten:1998wy showed that, in the limit N →∞, the near-horizon geometry is X_7 × S^4 where the conformal boundary of X_7 is ∂ X_7 = _6. He further argued that the low-energy theory close to the branes has a topological sector described by the Chern–Simons actionS = - N/2 · 2π∫_X_7 C ∧C,where C is the M-theory (1) 3-form gauge field. This can be justified as follows: Upon reduction of the Chern–Simons term in the Euclidean 11d supergravity action,S = -/6(2π)^2∫_X_7 × S^4 C ∧C∧C,the single-derivative term isS = -/2 · 2π∫_X_7 C ∧C1/2π∮_S^4C.Since each M5-brane sources one unit of flux for C, the S^4 integral evaluates to N and we recover (<ref>). This reduction is well-known, and has been performed in greater generality in the framework of differential cohomology <cit.>.Next, we perform Kaluza–Klein reduction on X_7 = X_5 ×Σ_g,p, beginning with the case p = 0, detailed in <cit.>. Expand C in terms of eigen-1-forms of the Laplace–de Rham operator Δ = δ + δ on Σ_g,0:C = ∑_i B_i ∧ω^i, Δω^i = λ_i ω^iwith ∮_Σ_g,0ω^i ∧ω^j = 0 if λ_i ≠λ_j. The coefficients B_i are 2-forms on X_5. Reducing the kinetic term 1/2κ^2∫_X_7C∧C produces a mass term for B_i unless ω^i = 0; we truncate to these massless modes. Now reduce the topological term (<ref>) to obtainS = - N/2 · 2π∑_i,j∫_X_5 B_i ∧B_j∮_Σ_g,0ω^i ∧ω^j.In the terms with λ_i ≠ 0, ω^i = 1/λ_iδω^i is exact and the integral over Σ_g,0 is zero; similarly if λ_j ≠ 0. Thus the contributing ω^i are harmonic forms which we can take to have integral periods. These are by the Hodge theorem <cit.> and Poincaré duality in bijection with the generators c^i of H_1(Σ_g,0; ) ≅^2g. The intersection pairing (c^i, c^j) = ∫_Σ_g,0ω^i ∧ω^j ∈ is(c^e,i, c^m,j)= -(c^m,j, c^e,i) = -δ^ij(c^e,i, c^e,j)= (c^m,i, c^m,j) = 0with the appropriate choices of generators c^e,1, …,c^e,g, c^m,1, …,c^m,g. ThusS =N/2π∑_i = 1^g ∫_X_5 B_e,i∧B_m,i,(again, up to boundary terms) which reproduces (<ref>) with g of the n_i equal to N. By (<ref>), this indeed matches with (<ref>). While Witten's construction relied on the holographic limit N →∞, we expect the result to hold for any N, and indeed check it for small N in the next section.The presence of punctures complicates the analysis since there are now boundary conditions for C. Naively, one could observe that since (<ref>) couples the 5d gauge fields through the intersection pairing on H_1(Σ_g,0; ), and since the elementary cycles surrounding punctures are in the kernel of the intersection pairing on H_1(Σ_g,p; ), adding punctures should not affect (<ref>). 
The problem is that by the Hodge decomposition with boundary <cit.> and Lefschetz duality <cit.>, this pairing results from the KK reduction above only if C is taken to have Dirichlet boundary conditions at the punctures, and this is not generally the case. Indeed, irregular punctures do introduce 1-form symmetries “trapped” at the punctures <cit.>.From the M-theory perspective, a regular puncture can be described using a construction of Bah:2018jrv,Bah:2019jts: We modify the space Σ_g,0× S^4 in the above construction in a neighbourhood D ⊂Σ_g,0 of the puncture, replacing D × S^4 with a space X_6 whose C flux along four-cycles define the puncture data. We thus obtain the 11-dimensional spacetime _11 = X_5 × Y_6. The space X_6 is a fibration of the form S^2 → X_6 → X_4; S^1 → X_4 →^3 where the S^2 and S^1 shrink at certain singular loci.We outline an argument that this structure means that the puncture does not modify the defect group. If X_6 and X_4 were non-singular fibrations, we would have the long exact sequences⋯→π_1(S^2) →π_1(X_6) →π_1(X_4) →π_0(S^2) →⋯ ⋯→π_2(^3) →π_1(S^1) →π_1(X_4) →π_1(^3) →⋯establishing that π_1(X_6) ≅π_1(X_4) ≅π_1(S^1) ≅, generated by the loop winding along S^1. In the present case, however, the S^1 shrinks and this loop is contractible; we therefore expect π_1(X_6) = 0 instead.[This is analogous to the singular fibration S^1 → S^2 → [0,1] of the 2-sphere over the unit interval. The S^1 fibre shrinks to a point at the endpoints of the interval, so π_1(S^2) = 0 rather than .] Then H_1(X_6) = 0 by the Hurewicz theorem <cit.>. Now, expanding C__11 = ∑_i B^(2)_i,X_5∧ω^i(1)_Y_6 + ϕ^(0)_X_5 c^(3)_Y_6 + ⋯, the topological term (<ref>) reduces to-/2· 2π∫_X_5 B_i ∧B_j∮_Y_6ω^i ∧ω^j ∧c/2πwhere ω^i and c/2π have integral periods; in particular ω^i is valued in H^1(Y_6; ). Consider for simplicity the case of a single puncture; then we have Y_6 = (Σ_g,1× S^4) ∪ X_6 with (Σ_g,1× S^4) ∩ X_6 ≃ S^1 × S^4, and the reduced Mayer–Vietoris sequence <cit.> gives H_1(Y_6) ≅ H_1(Σ_g,0× S^4) ≅ H_1(Σ_g,0) because H_1(X_6) = 0. Therefore H^1(Y_6; ) ≅ H^1(Σ_g,0; ) and ω^i are cocycles on Σ_g,0. Then, the integral of c/2π captures the total flux N sourced by the branes; we recover (<ref>) and confirm that the puncture does not affect the defect group. § DEFECT GROUP FROM BPS QUIVERS In this section, we find the defect group and hence the SymTFT using an explicit construction of the 4d BPS quiver.§.§ Full punctures A BPS quiver <cit.> of a 4d = 2 supersymmetric theory has nodes corresponding to charges of certain BPS states, and arrows such that the signed adjacency matrix is the matrixof Dirac pairings as in <ref>. The nodes always correspond to physical BPS states; hence every charge in the lattice they generate is realised by a dynamical state (not in general a BPS state). We assume, as in <cit.>, that this is the full charge lattice of the theory; in other words, every charge is a sum of BPS charges.Cecotti:2011rv described how to construct BPS quivers of class [(2)] theories from ideal triangulations <cit.> of Σ_g,p. The nodes of the quiver are the arcs of the triangulation, and they are joined by arrows going counterclockwise around each triangle (with respect to the orientation of Σ_g,p). In the literature on cluster algebras, these are called quivers of surface type. The procedure generalises to other classtheories via work on Hitchin systems and spectral networks <cit.> and their relation to the BPS quivers <cit.>. 
We need only the fact that the BPS quiver for a class S[su(N)] theory with full punctures is a so-called (N − 1)-triangulation <cit.>: Starting from an ideal triangulation, the quiver has N − 1 nodes for each arc, as well as \binom{N-1}{2} internal nodes in each triangle, connected as in <ref>. Once the quiver is known, the defect group (as well as the flavour rank f) can be extracted using (<ref>). For example, for the class S[su(3)] theory on a torus with one full puncture, one finds the BPS quiver shown in <ref>. The torus is triangulated with two triangles and three edges that begin and end on the single puncture. The BPS quiver is the Fock–Goncharov 2-triangulation and has eight nodes. The corresponding signed adjacency matrix (<ref>) has the Smith normal form diag(1,1,1,1,3,3,0,0) so that 𝒟 ⊕ ℤ^f ≅ (ℤ_3)^2 ⊕ ℤ^2. This means that the theory has defect group 𝒟 ≅ (ℤ_3)^2 and two flavour charges.

Accompanying this work is a computer program to carry through this calculation for arbitrary N, g and p. Details on the computation are found in <ref>. We find that (𝒟 ⊕ ℤ^f)(S[su(N), Σ_g,p]) ≅ (ℤ_N)^{2g} ⊕ ℤ^{(N − 1)p} in agreement with (<ref>), for all values 2 ≤ N ≤ 7, 0 ≤ g ≤ 8 and 1 ≤ p ≤ 8 (3 ≤ p ≤ 8 for g = 0).

In addition to the defect group, the calculation also determines the rank r and flavour rank f of the theory. <Ref> directly gives f = (N − 1)p. To derive r, the dimension of the Coulomb branch of the 4d theory, note that the charge lattice has rank 2r + f, equal to the number of nodes in the BPS quiver. As described above, the quiver has 2r + f = \binom{N-1}{2} t + (N − 1)a nodes, where a is the number of arcs in the triangulation and t is the number of triangles. By Euler's formula p − a + t = 2 − 2g, as well as the fact that 2a = 3t in a triangulation, we find that 2r + f = (N^2 − 1)(2g + p − 2). Together with the result (<ref>) for f, this leads to the following formula for the rank: r = (N^2 − 1)(g − 1) + \binom{N}{2} p.

We can obtain some basic consistency checks on the above construction and results by constructing S[su(N), Σ_g,p] from glued copies of the T_N theory <cit.>, which is the compactification on a sphere with three full punctures: T_N = S[su(N), Σ_0,3]. Each full puncture carries an su(N) flavour symmetry <cit.> and connecting two punctures by a tube corresponds to gauging it. Thus we obtain Σ_g,p from 2g + p − 2 T_N theories on thrice-punctured spheres after connecting 3g + p − 3 pairs of punctures by a tube (<ref>).

While it is not obvious how to extract the defect group from this picture, we can easily derive r and f. The su(N)^p flavour symmetry evidently reproduces (<ref>). As for the rank, the T_N theory on each sphere has d − 2 Coulomb branch operators of scaling dimension d for each d = 3, …, N <cit.>, so its rank is ∑_{d=3}^{N} (d − 2) = \binom{N-1}{2}. Furthermore, each gauged su(N) associated to a tube contributes N − 1 Coulomb branch operators. The total rank is r = \binom{N-1}{2}(2g + p − 2) + (N − 1)(3g + p − 3), which indeed simplifies to (<ref>).

§.§ Non-full punctures

One limitation of our computation is that it deals with full punctures only. In general one can consider tame punctures labelled by any partition of N <cit.>. The general expectation is that the defect group should be the same for any choice of partitions of N at the punctures, as argued in <cit.> and in <ref>. Our result confirms that changing a full puncture, which is labelled by (1, …, 1), into the absence of a puncture, which can be thought of as labelled by (N), does not affect the defect group.
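Before turning to further examples, the counting identities above (node count versus 2r + f, and the T_N-gluing count of the rank versus the closed formula) can be cross-checked mechanically over the same ranges of N, g and p used by the accompanying program. A minimal sketch of such a check:

```python
from math import comb

def checks(N, g, p):
    # Triangles and arcs of an ideal triangulation: p - a + t = 2 - 2g, 2a = 3t.
    t = 2 * (2 * g + p - 2)
    a = 3 * t // 2
    nodes = comb(N - 1, 2) * t + (N - 1) * a
    f = (N - 1) * p
    r = (N**2 - 1) * (g - 1) + comb(N, 2) * p
    # Node count of the (N-1)-triangulation quiver equals 2r + f:
    assert nodes == (N**2 - 1) * (2 * g + p - 2) == 2 * r + f
    # Gluing 2g + p - 2 copies of T_N along 3g + p - 3 tubes gives the same rank:
    assert r == comb(N - 1, 2) * (2 * g + p - 2) + (N - 1) * (3 * g + p - 3)

for N in range(2, 8):
    for g in range(0, 9):
        for p in range(1, 9):
            if 2 * g + p - 2 > 0:  # a valid triangulation must exist
                checks(N, g, p)
print("all counting identities agree")
```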
For theories with partial punctures, several constructions of the BPS quivers exist <cit.>, but here we will be content with checking a specific example. Namely, in the case of [(N), Σ_1,p̃], where Σ_1,p̃ denotes the torus with p simple punctures labelled by (N-1, 1), the quiver is known <cit.> and displayed in <ref>. We have checked that for this quiver,≅ (_N)^2 ⊕^pfor 2 ≤ N ≤ 7 and 1 ≤ p ≤ 8, so that the defect group is indeed (_N)^2. This Coulomb branch result can also be confirmed from a quiver gauge theory description (<ref>) as in <cit.>. In a purely electric duality frame, the gauge group is (N)^p with Wilson lines charged under Z((N)^p) ≅ (_N^p)^(1). The dynamical states in the bifundamental representations screen the 1-form symmetry to the diagonal (_N)^(1), identified with a maximal isotropic subgroup of ≅ (_N)^2.Indeed, this can be slightly generalised to a field-theoretic argument for arbitrary simple punctures preserving the defect group: Adding a simple puncture corresponds to replacing a gauged (N) by (N)_1 ×(N)_2 and a hypermultiplet in the bifundamental (<ref>); see <cit.>. The hypermultiplet has N-ality (1, -1) and therefore the lines charged under the centre 1-form symmetries (_N)^(1)_1 and (_N)^(1)_2 are identified up to screening.§.§ Structure of the BPS quiver The result (<ref>) is a purely combinatorial statement about the (N-1)-triangulations of fock2006moduli, and it is interesting to consider it as such. It has been partially addressed in the mathematical literature; in particular, <ref> when N = 2 is a special case of Theorem 14.3 of <cit.>. Here, we present some observations on the structure of the quivers that make the result plausible in the form of a proof sketch. It would be interesting to see if this reasoning could be extended to a full elementary proof of (<ref>).Recall that we have the charge lattice Γ = ^2r + f with standard basis given by the nodes of the quiver, and we conceptualise the Dirac pairing as a -linear map Γ→_(Γ, ).First, the free factor inbeing ^(N-1)p is equivalent to ≅^(N-1)p. It is in fact easy to exhibit (N-1)p null (flavour) vectors; N-1 for each elementary cycle wrapping a puncture in Σ_g,p, as in <ref>. Recall that a node in the quiver is a generator of Γ; in the figure, the marked nodes γ_i define a vector γ = ∑_i γ_i ∈Γ. The covector γ = ·,γ∈_(Γ,) is found by following the arrows adjacent to each marked point, adding 1 for outgoing arrows and -1 for incoming arrows. The construction of γ ensures that all such contributions (occurring at the nodes marked with blue circles) cancel, so that γ = 0. This works no matter the number of arcs incident to the puncture. This shows that the rank ofis at least (N-1)p, but does not rule out hypothetical further flavour vectors.[The story is somewhat complicated by torsion. The cycles form a maximal linearly independent subset of , but not a generating set. For example, consider the case where the vertices of each triangle are all distinct. Summing all cycles gives a multiple of the null vector γ = (1, …, 1), namely 3γ, but γ itself cannot in general be attained this way.]Next we search for the torsional part of ; the defect group. There is an explicit map v H_1(Σ_g,p; ) →Γ, defined in <ref>, such that (∘ v) ⊂ N _(Γ, ). <Ref> shows that it defines a homomorphism v_N H_1(Σ_g,p; _N) →Γ⊗_N. Each generator c ∈ H_1(Σ_g,p; ) passes through a sequence of triangles, and in each triangle, turns either clockwise or counterclockwise. 
A cycle that turns purely clockwise or counterclockwise surrounds a single puncture; then v(c) is a sum of flavour vectors as in <ref>. Thus v_N descends to a homomorphism ṽ_N H_1(Σ_g,0; _N) →Γ̃⊗_N such that (⊗ 1__N) ∘ṽ_N = 0 (recall that Γ̃ = Γ /, Γ̃^* ≅_(Γ̃, ) and = (Γ̃↪Γ̃^*)).Here we conjecture that in fact (⊗ 1__N) = ṽ_N, so that we have an exact sequence0 ⟶ H_1(Σ_g,0; _N) Γ̃⊗_N Γ̃^* ⊗_N ⟶⊗_N ⟶ 0.In other words, (, _N) ≅ H_1(Σ_g,0; _N) ≅ (_N)^2g. From this, if one can show thatis N-periodic (N = 0, that is, NΓ̃^* ⊂); then ≅(, _N) and (<ref>) follows. § ACKNOWLEDGEMENTSThis work is supported by the European Research Council under grant 851931 (MEMO). I thank Michele Del Zotto for suggesting the project and for his mentorship throughout. I am grateful to Azeem Hasan, Robert Moscrop and Shani Nadir Meynet for enlightening discussions. I thank the Department of Physics of the University of Colorado, Boulder and the Simons Collaboration on Global Categorical Symmetries for hospitality during the TASI 2023 school and the GCS2023 conference and school, respectively. § COMPUTATION The computations in <ref> were done using SageMath <cit.>. The source code and results are available at <https://gitlab.com/eliasrg/class-s-defect-groups>.The first step is to construct an ideal triangulation of the punctured Riemann surface Σ_g,p. We achieve this by starting from a triangulation of Σ_g,1, or of Σ_0,3 if g = 0. These are shown in <ref>. We then add additional punctures one by one, as shown in <ref>, until there are p punctures in total. In preparation for the next step, we also keep track of the orientation of each triangle.Having obtained the triangulation, we next construct the class [(N)] BPS quiver. We put N - 1 nodes on each arc and N - 12 internal nodes in each triangle, and connect them with arrows according to <ref>, counterclockwise with respect to the orientation of the surface, as in <cit.>.Finally, we compute the defect group and flavour rank from the quiver's signed adjacency matrixusing (<ref>) and≅ Swhere S is the Smith normal form of . The result of this computation is <ref>. | http://arxiv.org/abs/2311.16224v1 | {
"authors": [
"Elias Riedel Gårding"
],
"categories": [
"hep-th",
"math-ph",
"math.MP",
"quant-ph"
],
"primary_category": "hep-th",
"published": "20231127190000",
"title": "Defect groups of class $\\mathcal{S}$ theories from the Coulomb branch"
} |
http://arxiv.org/abs/2311.15712v1 | {
"authors": [
"Álvaro Tejero",
"Daniel Manzano",
"Pablo I. Hurtado"
],
"categories": [
"quant-ph",
"cond-mat.quant-gas",
"cond-mat.stat-mech"
],
"primary_category": "quant-ph",
"published": "20231127105726",
"title": "An atom-doped photon engine: Extracting mechanical work from a quantum system via radiation pressure"
} |
|
Elementary Axioms for Parts in Toposes

Enrique Ruiz Hernández† and Pedro Solórzano⋆

†Centro de Investigación en Teoría de Categorías y sus Aplicaciones, A.C., México. [email protected]
⋆Instituto de Matemáticas, Universidad Nacional Autónoma de México, Oaxaca de Juárez, México. [email protected]
(⋆) CONACYT-UNAM Research fellow.

In memory of William Lawvere

Abstract. Elementary extensions to the topos axioms are considered to describe connectedness, which further help complete a synthetic way of describing precohesiveness over the full subcategory of objects with decidable equality. In this setting, a sufficiently powerful metatheory provides a complete axiomatic description for precohesion over a boolean topos.

2020 MSC: Primary 18B25; Secondary 03G30, 03B38.

January 14, 2024

§ MOTIVATION

After the similarly titled work by Colin McLarty <cit.> on sets of points, our intention is to conveniently dualize some of the notions discussed therein in order to obtain axiomatics that synthetically describe connectedness, in the spirit of the late Bill Lawvere's program to axiomatize cohesion <cit.>. Following McLarty, we also assume our toposes additionally satisfy the following Nullstellensatz[In the context of Precohesion there is another Nullstellensatz axiom. The one discussed here is, among other things, an external existence axiom, while the other one is internal.] Postulate:

(NS) Any object is either initial or has a global element.

As is well-known, this postulate has plenty of categorical and logical implications, several of which will be utilized in the sequel. Perhaps the simplest remark is that this Nullstellensatz provides an Axiom of Choice for finite collections of spaces.

The duality we consider follows David Ellerman's Logic of partitions <cit.>, where the natural dual to 1 is not 0 but 2 (cf. the concept of a coseparator). We propose two alternative postulates (one apparently weaker than the other) for our toposes:

(WDQO) For any object X there is a minimal decidable quotient object along which any arrow to 2 = 1 + 1 factors uniquely.

(DQO) For any object X there is a unique decidable quotient object along which any arrow to 2 factors uniquely.

In the presence of either postulate, let p_X : X → Π(X) be the corresponding quotient map. The object Π(X) can be regarded as a subobject of 2^X via 2^{p_X} ∘ (t ↦ {t}). Naturally, call an object X connected if Π(X) = 1.

§ MAIN RESULTS

In the presence of either of WDQO or DQO, let 𝒲 be the full subcategory of objects Q for which p_Q is an isomorphism, and notice that Π(X) is in 𝒲. For DQO, 𝒲 is dec(ℰ), the full subcategory of decidable objects. By the uniqueness in DQO, Π is readily seen to be the object part of a functor, and the inclusion of dec(ℰ) into ℰ is the right adjoint of Π.

In stark contrast with McLarty's situation, the interplay between the Nullstellensatz and these postulates is quite rigid, as seen in the following result.

Theorem A. In the presence of the Nullstellensatz, the postulates WDQO and DQO are equivalent. Thus, in this setting 𝒲 necessarily coincides with dec(ℰ).

One also obtains the following result.

Theorem B. If a topos ℰ satisfies the Nullstellensatz and WDQO postulates, then Π is the object part of a functor which preserves finite products and complemented subobjects, and is furthermore left adjoint to the inclusion of dec(ℰ).

Observe that satisfying the factorization property in DQO is necessary in order to have said adjunction. With the assumptions of Theorem B it is also sufficient.
Theorem A shows this requirement is quite strong. <cit.> proves, as it stands, that the full subcategory of interest is actually a topos. Herein we provide technical conditions for this to be the case in our setting. One such condition is that dense subobjects truly meet every part: that the arrow Π(f) be epic for every dense arrow f (see <ref>).

The Nullstellensatz, DQO, and McLarty's DSO postulate (“There exists a unique decidable subobject for any given object containing all its points.”) together yield a synthetic description for precohesion over the boolean topos dec(ℰ), by invoking the work of <cit.>:

Theorem C. For a topos ℰ satisfying the Nullstellensatz, DSO and DQO are sufficient conditions for ℰ to be precohesive over the full subcategory dec(ℰ) of decidable objects.

At this point it is worthwhile mentioning another work of Matías Menni, based on a conversation with Lawvere <cit.>. Therein a construction of the pieces functor Π is proposed from the knowledge of the points functor Γ : ℰ → 𝒮 and, then, to obtain precohesiveness it is necessary and sufficient for the inclusion functor of the discrete spaces 𝒮 → ℰ to be cartesian closed. In the presence of NS and DQO this is certainly already the case for the inclusion of dec(ℰ) into ℰ (see <ref>). Likewise, the same method of proof as for Theorem A and Theorem B yields the following converse to Theorem C.

Theorem D. Let ℰ satisfy the Nullstellensatz and be precohesive over a boolean 𝒮. Then ℰ satisfies DQO and the fully faithful inclusion of 𝒮 into ℰ is dense in dec(ℰ).

Thus, assuming that one accepts a sufficiently powerful axiom of choice in one's metalogic, 𝒮 is equivalent to dec(ℰ) and ℰ satisfies McLarty's DSO. With the aforementioned metalogical assumption, Theorem C and Theorem D provide a complete characterization of precohesiveness over a boolean base for toposes that satisfy the Nullstellensatz.

<cit.> gives necessary and sufficient conditions for the canonical geometric morphism f : Set^{C^op} → Set to be precohesive. It is immediate to see that these conditions are sufficient for Set^{C^op} to satisfy the Nullstellensatz. In light of Theorem C and Theorem D, this means that if the canonical geometric morphism is precohesive then dec(Set^{C^op}) is equivalent to Set. Furthermore, the same holds for bounded geometric morphisms ending in Set. These considerations give a partial answer to a question of Francisco Marmolejo[Personal communication.]: Is there a formula to characterize discreteness in precohesion?[In the setting of a precohesive f : ℰ → 𝒮, an object is discrete if the counit f^*f_* → 1 is an isomorphism.] In the presence of the Nullstellensatz, the discrete spaces of a precohesion over a boolean base are precisely the decidable objects[There are different reasons for why the name discrete and not decidable should be the name used for objects satisfying this equation. Yet the name is somewhat standard already.]. Thus the answer in this setting is the decidability of equality:

(x = y) ∨ (x ≠ y).

Lastly, in this set-up, we give necessary and sufficient synthetic conditions for both the points and the parts functors to collapse to each other in what is called a quality type. The simplest nontrivial example of this is Set^E over Set, where E is the monoid with exactly two elements, one of which is idempotent (see Example A.4.4.9 by <cit.>). We follow an approach that is as synthetic as possible, with no disdain for standard category theoretic methods, which are also very much present.

Our synthetic reasoning is represented using John J.
Zangwill's local set theories on local languages, greatly expounded by John L. Bell <cit.> (a brief summary can be found in <cit.>).§.§ Aknowledgements.The second named author is supported by the CONAHCYT Investigadoras e Investigadores por México Program Project No. 61. Both authors wish to thank Lillián Bustamante and her staff for their hospitality the day some of the main ideas for this project were conceived. § PRELIMINARIES§.§ Local set theories As noted by Zangwill (see <cit.>), there is no loss of generality in assuming everything is constructed from a local set theory on a local language. The latter is a typed languagewith primitives ∈, { : } and =; ground, product and power type symbols; distinguished type symbols 1, Ω; and function symbols; together with a natural intuitionistic deduction system. A local set theory (LST) is a collection S of sequents Γ:α, closed under the derivability operator ⊢. In general this derivability operator is not transitive. Also, the LST need not be classical.This is categorically equivalent to, starting from an elementary topos, considering its internal language.See <cit.> for extensive discussions on the subject.In general, what follows is written following the notations and conventions from Bell's text <cit.>. In particular, for any type 𝐗, the universal S-set U_𝐗={x:⊤} is simply denoted by X.§.§ Decidable equality In the literature (eg. <cit.>), an object D in a topos is frequently called decidable if and only if its diagonal has a complement, i.e. if there exists a map ϵ_D:D̃→ D× D such that D[r]^(.4)δ_DD× DD̃[l]_(.35)ϵ_D is a coproduct diagram. In terms of the internal language of the topos, being decidable is equivalent to having decidable equality. It is well known that initial and terminal objects are decidable; that finite coproducts are decidable if and only if their summands are; and that this is inherited by finite products and by subobjects.Let S be a local set theory on a local language . Let X be a decidable S-set. Then * x,x',y,y'∈ X⊢_S(x=x'∧ y=y')⇔((x=x')∨(y=y')).* x,x',y,y'∈ X⊢_S(x=x'∨ y=y')⇔((x=x')∨(y=y')).It is straightforward to verify that ω∨ω⊢_Sω∨ω. Therefore, if α:=(x=x'), thenx,x'∈ X⊢_Sα∨α.Hence, for β:=(y=y'), x,x',y,y'∈ X⊢_S(α∨α∨β)∧(β∨α∨β). On the other hand, by distributivity,x,x',y,y'∈ X⊢_S(α∧β)∨α∨β.Now, sinceis a modality,x,x',y,y'∈ X,(α∧β)⊢_S((α∧β)∨α∨β)∧(α∧β)⊢_S(α∨β)∧(α∧β)⊢_Sα∨β.For the second item, similarly, one has thatx,x',y,y'∈ X⊢_S(α∨α∨β)∧(β∨α∨β).and by distributivity,x,x',y,y'∈ X⊢_S(α∧β)∨α∨β.Hence, by the first De Morgan law,x,x',y,y'∈ X⊢_S(α∨β)∨α∨β.Thereforex,x',y,y'∈ X,(α∨β)⊢_S((α∨β)∨α∨β)∧(α∨β)⊢_S(α∨β)∧(α∨β)⊢_Sα∨β.On the other hand, already for lextensive categories one can observe the following: Letbe a lextensive category. Then(())=().Furthermore, if () is a topos, then it is Boolean.As (())⊆(), it suffices to show that()⊆(()).So let A∈(). That is, there is an arrow ϵ_A:Ã→ A× A such thatA@(>->[r]^(.4)δ_AA× AÃ[l]_(.35)ϵ_Ais a coproduct diagram in . Hence ϵ_A:Ã→ A× A is monic since in a distributive category injections are monic (See <cit.>). That is, à is a subobject of A× A. Whence it is decidable. Therefore, since () is a full subcategory, ϵ_A is in (), and so is the sum A+Ã. In other words, (())=().In light of this, one can see thatis a 2-functor 𝐋𝐞𝐱𝐂𝐚𝐭→𝐋𝐞𝐱𝐂𝐚𝐭, where 𝐋𝐞𝐱𝐂𝐚𝐭 is the 2-category of lextensive categories and lextensive functors. 
Futhermoreis an idempotent 2-comonad with unit the inclusion functor J_:()→ and comultiplication the identity on ().§.§ The Nullstellensatz in local languages NSLL In <cit.> it is seen that the Nullstellensatz holds on a topos if and only if the topos is two-valued and supports split in it.In <cit.> NS is seen to be equivalent to the requirement that the corresponding local set theory on its internal language be strongly witnessed. A local set theory S is strongly witnessed if for any formula α with at most one free variable,⊬_S ∃ x.α implies there exists a closed term τ for which ⊢_Sα(x/τ). One also has that if S is strongly witnessed then it is witnessed: That from ⊢_S ∃ x.α it follows that there exists a closed term τ for which ⊢_Sα(x/τ). Another consequence of the Nullstellensatz is the local set theory S is complete: that for any closed term α of type Ω, ⊢_Sα⊢_Sα.These tools will be used profusely in several arguments herein as they provide a way to give proofs by contradiction; e.g., one proposes an appropriate S-set and assumes it to be nonempty to obtain a contradiction. §.§ Sets of distinguished subsetsLet S be a local set theory on a local language . Let X be a universal S-set. Recall that P(X) is the S-set {u: u⊆ X}.For any u∈ P(X) let u^c={x∈ X: x∉ u} and consider the following S-set:P_c(X)={u∈ P(X): u∪ u^c=X}.Analogously, let _c(X) be the collection of subobjects classified by 2.For any modality operator μ:Ω→Ω, for any u∈ P(X) let u^μ={x∈ X: μ(x∈ u) }. One says that u is μ-closed if u=u^μ and define the following S-set:P_μ(X)={u∈ P(X): u=u^μ}.Notice that P_c(X)⊆ P_(X). Also if X is a universal S-set, thenu,v∈ P_(X),(u=v)⊢_Su=v.Indeed,u,v∈ P_(X), (u=v)⊢_S(∀ x.x∈ u⇒ x∈ v∧∀ x.x∈ v⇒ x∈ u)⊢_S∀ x.(x∈ u)⇒(x∈ v)∧∀ x.(x∈ v)⇒(x∈ u)⊢_Su^=v^⊢_Su=v. As in other intuitionistic systems, one has that ⊢_S(ω∨ω) and thus the following result holds. Let S be a local set theory on a local language . If Y is a subset of the S-set X, then Y∪ Y^c is -dense in X. It is clear that⊢_S(x∈ Y∨(x∈ Y)).Thereforex∈ X⊢_S(x∈ Y∨(x∈ Y))⊢_S(x∈ Y∪ Y^c).Let S be a local set theory on a local language . Every subset of a universal S-set X is complemented iff the only -dense subset of X is X. Let A be a subset of a universal S-set X. Suppose A is complemented. Then⊢_Sx∈ A∨(x∈ A).Hence (x∈ A)⊢_Sx∈ A.Conversely, suppose the only -dense subset of X is X. By <ref>, A∪ A^c is -dense in X. Therefore A∪ A^c=X. That is,⊢_Sx∈ A∨(x∈ A).Let S be a local set theory on a local language . Let f:X→ Y be an S-function. Since f^-1(v):={x∈ X:∃ y∈ v(y=fx)}, then v,w∈ P(Y)⊢_S f^-1(v∪ w)=f^-1(v)∪ f^-1(w),v,w∈ P(Y)⊢_S f^-1(v∩ w)=f^-1(v)∩ f^-1(w),y∈ Y⊢_Sf^-1({y})^c=f^-1({y}^c). § FIRST CONSEQUENCES OF THE POSTULATES In the presence of the Nullstellensatz, the definition of connectedness provided by WDQO coincides with the one dictated by the intuition:Consider a nondegenerate topossatisfying NS and WDQO. An object X is connected if and only if _c(X)=2. For the necessity, since Π X=1 and NS guarantees that _c(1)=2, there are exactly two arrows X→ 2. So, as X,0∈_c(X), these are all of the complemented subobjects of X.For the sufficiency, if _c(X)=2 and X≠ 0 there is an arrow 1→ X. Now, by considering the composite1[r] X[r]^!1,we know !:X→ 1 is epic. Thus, since clearly every arrow X→ 2 factors through !, there must be an epic arrow 1→Π X; but this arrow is also monic (consider the composite 1→Π X→ 1). So Π X≅ 1. The Nullstellensatz is necessary for this. 
Take for example the topos ×, which satisfies WDQO but not NS and _c(1)=2×2=4.The rest of the section, however, will analyze the consequences of merely assuming WDQO.For example, that ΠΠ X≅Π X for every object X∈. Indeed, if Π X is the smallest decidable quotient object, since ΠΠ X is a decidable quotient object of Π X and Π X of X, then ΠΠ X is a decidable quotient object of X, but Π X is the smallest one, so ΠΠ X≅Π X. Ifis a topos satisfying WDQO, then Π 1≅ 1. Indeed, just consider the composite1[r]^p_1 Π 1[r]^!1So p_1 is mono. Therefore it is iso.Let S be a consistent local set theory on a local languagewith (S) satisfying WDQO. Then Π 0=0 and if Π X=0 then X=0. Since (S) is nondegenerate, using the surjectivity of p_0 it is straightforward to verify that y∈Π 0⊢. Since 0 is strict, if Π X=0, then X=0.Letbe a topos. Let q:X↠ Q be an epic arrow inwith Q decidable such that for every arrow f:X→ 2 inthere is an arrow f':Q→ 2 making the following diagram commutative:X[dr]^f@->>[d]_qQ[r]_f'2.(Notice that f' is the unique arrow making the above diagram commutative). Then, as in the case of Π X, there is a correspondence (X,2)≅(Q,2) between the complemented subsets of X and those of Q. If B is a complemented subobject of X with characteristic h:X→ 2, let h': Q→ 2 be the assumed factoring arrow and let B̅ be the corresponding complemented subobject of Q with characteristic h'.This correspondence is expressed by pullbacks as follows:B@->>[r]@(>->[d]_q^-1(h')=h̅ B̅[r]@(>->[d]^h'1[d]^⊤X@->>[r]_q@/_2pc/[rr]_h Q[r]_h'2,where h',h̅ are the subobjects determined by h' and h, resp. The operator (-) is used to denote corresponding subobjects. For example, B̅ corresponds to B. If A is a subobject of Q, then A̅ is the corresponding subobject of X. Similarly, (-)' denotes correspondence between morphisms X→ 2 and Q→ 2 (cf. <cit.>).Letbe a topos satisfying WDQO. Then Π is surjective on complemented subobjects, and preserves complemented subobjects. Let A be a complemented subobject of Π X with X an object of . Let A̅ be the corresponding subobject of X to A. Let A^c and A̅^c be the complements of A and A̅, resp. (remember that (X,2)≅(Π X,2)). So, as a topos is an extensive category, we have the following pair of pullback squares with Π X and X being coproducts:A̅@(>->[r]^p_X^-1(A)@->>[d]_A^-1(p_X)X@->>[d]^p_X A̅^c@(>->[l]_p_X^-1(A^c)@->>[d]^(A^c)^-1(p_X)A@(>->[r]Π X A^c@(>->[l] Now, consider the following commutative diagram:A̅@(>->[r]@->>[d]_p_A̅X[d]^p_A̅+p_A̅^c A̅^c@(>->[l]@->>[d]^p_A̅^c ΠA̅[r]_(.4)iΠA̅+ΠA̅^cΠA̅^c[l]Clearly, p_A̅+p_A̅^c is epic. Now, let f:X→ 2 be an arrow in . So we have a pair of arrows f_1, f_2 such that the following diagram commutes:A̅@(>->[r][dr]_f_1X[d]^fA̅^c@(>->[l][dl]^f_2 2.Hence there are unique arrows f'_1 and f'_2 such that the following diagram commutes:A̅@(>->[r][dr]_f_1@->>[d]_p_A̅X[d]^fA̅^c@(>->[l][dl]^f_2@->>[d]^p_A̅^c ΠA̅[r]_f'_12ΠA̅^c[l]^f'_2.So we have the following commutative diagram:A̅@(>->[r]@->>[d]_p_A̅X@->>[d]^p_A̅+p_A̅^c A̅^c@(>->[l]@->>[d]^p_A̅^c ΠA̅[r]_(.4)i@/_.7pc/[dr]_f'_1 ΠA̅+ΠA̅^c[d]^[f'_1,f'_2] ΠA̅^c[l]@/^.7pc/[dl]^f'_2.2Hence, by the universality of coproducts,f=[f'_1,f'_2]∘(p_A̅+p_A̅^c).So, as ΠA̅+ΠA̅^c is decidable, there is an epic arrow φ:ΠA̅+ΠA̅^c↠Π X such thatX[rr]^(.4)p_A̅+p_A̅^c[drr]_p_XΠA̅+ΠA̅^c[d]^φ Π X On the other, let g:A̅→ 2 be an arrow in , and let χA̅^c:A̅^c→ 2 be the characteristic arrow classifying A̅^c. 
So there is an arrow G:Π X→ 2 making the following diagram commute:A̅@(>->[d]@/^2pc/[ddr]^gX[dr]^(.3)[g,χA̅^c][d]_p_XΠ X[r]_G 2.Hence the following diagram commutes:A̅@(>->[r]@->>[d]_A^-1(p_X)@/^6pc/[drr]^g X[dr]^(.3)[g,χA̅^c][d]_p_X A@(>->[r]Π X[r]_G 2.Therefore, since A^-1(p_X) is an epic arrow, there is an epic arrow ψ:A↠ΠA̅ making the following diagram commute:A̅[r]^A^-1(p_X)@->>[dr]_p_A̅A@->>[d]^ψΠA̅.Piecing everything together gives that the following diagram commutes:A̅@(>->[r]@->>[d]^p_A̅@/_1.5pc/@->>[dd]_A^-1(p_X)X[d]^p_A̅+p_A̅^c ΠA̅@(>->[r]^(.4)iΠA̅+ΠA̅^c[d]^φA@(>->[r]@->>[u]_ψ Π X.In particular, the lower square commutes since A^-1(p_X) is epic. Therefore, ψ is also monic and ΠA̅≅ A.Now let B be a complemented subobject of X in . Substitute B for A̅ and B̅ for A in the proof above. This substitution yields B̅≅Π B.Letbe a topos satisfying WDQO. Then Π preserves finite coproducts. The injections X@(>->[r] X+Y Y@(>->[l] are complements of each other in X+Y. So, by the previous theorem, Π X and Π Y are complements of each other in Π(X+Y). Hence Π(X+Y)≅Π X+Π Y. § PNEUMOCONNECTEDNESS The concept of pneumoconnectedness[The Greek rootpne uma means breath or spirit. Hence pneumoconnected is “connected in spirit".] is now introduced to describe the behavior of something that ought to be connected yet it is merely not the case that it is not connected.More precisely, let S be a local set theory on a local language . Let X be a universal S-set and τ a term of type 𝐏𝐗. Write (τ) for the formula∀ w∈ P_c(X).(τ∩ w=∅∨τ∩ w^c=∅),where w is not free in τ.A term τ is pneumoconnected if and only if ⟨ x_1,…,x_n⟩∈ Z⊢_S(τ),where x_1,…,x_n are all the free variables of τ. Observe that if one considers the non-trivial separators of τ, namelyΘ(τ):={u:u∈ P_c(X)∧(τ∩ u=∅)∧(τ∩ u^c=∅)},the following is true:Let X be a universal S-set and τ be a term of type 𝐏𝐗, then ⊢_S(τ)⇔(Θ(τ)=∅).That is that τ is pneumoconnected if and only if it is not the case that there are non-trivial separators of τ.First notice that ⊢_S(Θ(τ)=∅)⇔(Θ(τ)^c=PX).(τ),u∈Θ(τ)⊢_S∀ w∈ P_c(X).(τ∩ w=∅∨τ∩ w^c=∅)∧ u∈Θ(τ)⊢_S(τ∩ u=∅∨τ∩ u^c=∅)∧(τ∩ u=∅∨τ∩ u^c=∅)⊢_S.That is, (τ)⊢_S(u∈Θ(τ)). Therefore(τ)⊢_SΘ(τ)^c=PX. Conversely,Θ(τ)^c=PX⊢_S(w∈Θ(τ))⊢_S(w∈ P_c(X)∧(τ∩ w=∅)∧(τ∩ w^c=∅)).HenceΘ(τ)^c=PX,w∈ P_c(X)∧(τ∩ w=∅)∧(τ∩ w^c=∅)⊢_S.WhenceΘ(τ)^c=PX,w∈ P_c(X)⊢_S(τ∩ w=∅∨τ∩ w^c=∅).Therefore Θ(τ)^c=PX⊢_S(τ).Let S be a local set theory on a local language , X a universal S-set, and τ a term of type 𝐏𝐗. Any two intersecting non-trivial pneumoconnected separators of τ are equal. That is u,v∈Θ_p(τ),(u∩ v=∅)⊢_Su=v,whereΘ_p(τ):={u:u∈Θ(τ)∧(u)}.We haveu,v∈Θ_p(τ)⊢_S(u∩ v=∅∨ u∩ v^c=∅)∧(u∩ v=∅∨ u^c∩ v=∅)⊢_S((u∩ v=∅∨ u∩ v^c=∅)∧(u∩ v=∅∨ u^c∩ v=∅))⊢_S(u∩ v=∅∨ u=v).That is,u,v∈Θ_p(τ),(u∩ v=∅)⊢_S(u=v)⊢_Su=vby (<ref>). Let S be a local set theory on a local language . For any X be a universal S-set and τ a term of type 𝐏𝐗, u,v∈Θ_p(τ),u⊆ v⊢_Su=v. u,v∈Θ_p(τ),u⊆ v⊢_S(u∩ v=∅).§ FIBER PNEUMOCONNECTEDNESSThe main application of the concept of pneumoconnectedness is on fibers of (surjective) maps. A term of the form f^-1(y) is a valid τ in (<ref>)for any map f:X→ Y. In this case, f haspneumoconnected fibers precisely when⊢_S∀ y∈ Y.(f^-1(y)).The following theorem is key to proving all the main results. It completely characterizes the behavior of p:1→Π.[Fiber Pneumoconnectedness Lemma]T:equivpneumofibers Let S be a local set theory on a local languagewith (S) satisfying NS. Let q:X↠ Q be a surjective S-function with Q decidable. 
Then the following statements are equivalent:(i) Every arrow X→ 2 factors through q. (ii) The map q has pneumoconnected fibers. (iii) Every arrow X→ Y with Y decidable factors through q. [Proof of T:TeorATheorem A] The WDQO postulate is an existence statement, and the T:equivpneumofibersFiber Pneumoconnectendess Lemma is precisely the uniqueness statement required to finish the proof: Indeed, let q:X↠ Q and q':X↠ Q' with Q and Q' decidable that satisfy (i).Then, by (iii), there are arrows Q→ Q' and Q'→ Q which are inverses of each other. Thus one verifies DQO.Let S be a local set theory on a local languagewith (S) satisfying NS. Let q:X↠ Q be a surjective S-function with Q decidable such that for every arrow f:X→ 2 there is an arrow f':Q→ 2 making the following diagram commutative:X[dr]^f@->>[d]_qQ[r]_f'2.Then the fibers of q are pneumoconnected. DefineR:={z:z∈ Q∧∃ v∈ P_c(X)((q^-1(z)∩ v≠∅)∧(q^-1(z)∩ v^c≠∅))}.Suppose ⊢_S(R=∅). Then, there is a closed term a such that ⊢_Sa∈ R; that is, such that⊢_Sa∈ Q∧∃ v∈ P_c(X)((q^-1(a)∩ v≠∅)∧(q^-1(a)∩ v^c≠∅)).Hence⊢_S∃ v∈ P_c(X)((q^-1(a)∩ v=∅)∧(q^-1(a)∩ v^c=∅)),but, since S is strongly witnessed, it is also witnessed. Therefore there is a closed term A such that⊢_SA∈ P_c(X)∧(q^-1(a)∩ A=∅)∧(q^-1(a)∩ A^c=∅). Now, as Q is decidable, {a} is a complemented S-set of Q. And according to <ref> and the assumption that every arrow X→ 2 factors through q, there is an A̅∈_c(Q) such that q^-1A̅=A. On the other hand, if ⊢_S(({a}∩A̅=∅)∨({a}∩A^c=∅)), then ⊢_Sa∈A̅∩A^c, which is a contradiction. So, as S is consistent and complete,⊢_S({a}∩A̅=∅)∨({a}∩A^c=∅).So,⊢_S(q^-1(a)∩ A=∅)∨(q^-1(a)∩ A^c=∅),which is a contradiction. But S is consistent and complete since it is strongly witnessed. Therefore ⊢_SR=∅. That is,z∈ Q∧∃ v∈ P_c(X)((q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅))⊢_S.Whencez∈ Q⊢_S∃ v(v∈ P_c(X)∧((q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅)))⊢_S∀ v(v∈ P_c(X)∧((q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅)))⊢_S(v∈ P_c(X)∧((q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅))).Hencez∈ Q,v∈ P_c(X)∧((q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅))⊢_S.Thereforez∈ Q,v∈ P_c(X)⊢_S((q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅)). If P_c(X) is decidable in the previous lemma, then the fibers of Q are connected. By <ref>,z∈ Q,v∈ P_c(X)⊢_S(q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅),and, by the decidability of P_c(X),z∈ Q,v∈ P_c(X)⊢_S(q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅). Let S be a local set theory on a local language . Let q:X↠ Q be a surjective S-function with Q decidable such that its fibers are pneumoconnected. Then every arrow X→ Y with Y decidable factors through q. Let f:X→ Y be an S-function.Now, define f' via its graph | f'| as | f'|:={⟨ z,y⟩:∃ x(⟨ x,z⟩∈| q|∧⟨ x,y⟩∈| f|)}.Since q is surjective and f is an S-function,z∈ Q⊢_S∃ y(⟨ z,y⟩∈| f'|). Now, as the fibers of q are pneumoconnected,z∈ Q,v∈ P_c(X)⊢_S((q^-1(z)∩ v=∅)∨(q^-1(z)∩ v^c=∅)).On the other hand,⟨ x,z⟩∈| q|,⟨ x,y⟩∈| f|,⟨ x',z⟩∈| q|,⟨ x',y'⟩∈| f| ⊢_Sx,x'∈ q^-1(z),x∈ f^-1(y),x'∈ f^-1(y').Hence, as Y is decidable and q has pneumoconnected fibers, by <ref>,(y=y'),⟨ x,z⟩∈| q|,⟨ x,y⟩∈| f|,⟨ x',z⟩∈| q|,⟨ x',y'⟩∈| f|⊢_S(q^-1(z)∩ f^-1(y)=∅)∧(q^-1(z)∩(f^-1(y))^c=∅)∧(f^-1(y)∈ P_c(X))⊢_S.So⟨ x,z⟩∈| q|,⟨ x,w⟩∈| f|,⟨ x',z⟩∈| q|,⟨ x',y'⟩∈| f|⊢_S(y=y'),But Y is decidable, therefore⟨ x,z⟩∈| q|,⟨ x,y⟩∈| f|,⟨ x',z⟩∈| q|,⟨ x',y'⟩∈| f|⊢_Sy=y'.I.e., f'=(| f'|,Q,Y) is an S-function, and f'∘ q=f. <ref> yields (i)⇒(ii), <ref> yields (ii)⇒(iii). Finally (iii)⇒(i) is trivial since 2 is decidable. 
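Since everything here is elementary, the lemma can be watched in action in the simplest nontrivial example mentioned in the introduction, Set^E. The sketch below is a hypothetical finite encoding, assuming the standard description of E-sets as sets equipped with an idempotent endomap e, and taking for granted that the decidable E-sets are those on which e acts injectively (hence, by idempotency, as the identity); under those assumptions the quotient forced by injectivity is q(x) = e(x), and the sketch checks condition (i) against it.

```python
from itertools import product

def pi_quotient(e):
    """p_X: X -> Pi(X), realised as x |-> e(x), with Pi(X) = Fix(e)."""
    assert all(e[e[x]] == e[x] for x in range(len(e))), "e must be idempotent"
    return sorted(set(e)), list(e)

def verify_factorization(e):
    parts, q = pi_quotient(e)
    n = len(e)
    # (i): every equivariant map X -> 2 (where 2 carries the identity
    # idempotent) is constant on the fibres of q, so it factors through Pi(X).
    for f in product((0, 1), repeat=n):
        if all(f[e[x]] == f[x] for x in range(n)):  # equivariance
            assert all(f[x] == f[q[x]] for x in range(n))
    return parts

# X = {0,1,2,3} with e folding 2 onto 0 and 3 onto 1: Pi(X) = {0, 1}.
print(verify_factorization([0, 1, 0, 1]))
```

In this finite shadow, the fibres q^{-1}(z) = {x : e(x) = z} admit no splitting by an equivariant map to 2, which is what pneumoconnectedness reduces to here.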
§ PRODUCT PRESERVATION AND FUNCTORIALITYThe T:equivpneumofibersFiber Pneumoconnectendess Lemma also provides a way to prove that Π preserves finite products. This in turn will complete the proof forT:TeorBTheorem B.Let S be a consistent local set theory on a local languagewith (S) satisfying NS and WDQO. Let X,Y be universal S-sets. Then if Π X=Π Y=1, then Π(X× Y)=1. Let Z⊆ X× Y be complemented in X× Y and different from ∅ and X× Y. Therefore, since (S) satisfies NS, there are closed terms ⟨ a,b⟩ and ⟨ c,d⟩ of type X× Y such that⊢_S⟨ a,b⟩∈ Z and ⊢_S⟨ c,d⟩∈ Z^c.Now, since Π X=1 and Z is complemented in X× Y and X↣ X× Y through (x↦⟨ x,b⟩), thenX×{b}=X×{b}∩ Z.Similarly, {c}× Y⊆ Z^c. However,Z⊇ (X×{b})∩({c}× Y)⊆ Z^c,and (X×{b})∩({c}× Y)={⟨ c,b⟩}. So we got a contradiction. Therefore, Z=X× Y or Z=∅. Hence Π(X× Y)=1. Let S be a consistent local set theory on a local languagewith (S) satisfying NS and WDQO. Then p_X× p_Y has pneumoconnected fibers and accordingly Π(X× Y)≅Π X×Π Y. Defineϑ:=((p_X× p_Y)^-1(⟨ z,w⟩)∩ v=∅)∧((p_X× p_Y)^-1(⟨ z,w⟩)∩ v^c=∅)andR:={⟨ z,w⟩∈Π X×Π Y:∃ v∈ P_c(X× Y).ϑ}.Suppose ⊢_S(R=∅). So there are closed terms a,b of the appropriate types such that⊢_S⟨ a,b⟩∈Π X×Π Y∧∃ v∈ P_c(X× Y)ϑ(z/a,w/b).So, again, since S is witnessed because strongly witnessed, there is a closed term D of the same type of X× Y such that⊢_SD∈ P_c(X× Y)∧ϑ(z/a,w/b,v/D).That is,⊢_S((p_X× p_Y)^-1(⟨ a,b⟩)∩ D=∅)∧((p_X× p_Y)^-1(⟨ a,b⟩)∩ D^c=∅). On the other hand, Π({a})=1=Π({b}) and(p_X× p_Y)^-1(⟨ a,b⟩)=p_X^-1(a)× p_Y^-1(b),so by the previous lemma, Π({a}×{b})=1, but then⊢_S(p_X× p_Y)^-1(⟨ a,b⟩)∩ D=∅)∨(p_X× p_Y)^-1(⟨ a,b⟩)∩ D^c=∅),which is a contradiction. Therefore, as S is consistent and complete, ⊢_SR=∅. That is,⟨ z,w⟩∈Π X×Π Y,v∈ P_c(X× Y)⊢_Sϑ. Therefore, the epic arrow p_X× p_Y:X× Y→Π X×Π Y has pneumoconnected fibers. Hence, as (S) satisfies DQO, Π(X× Y)≅Π X×Π Y. [Proof of T:TeorBTheorem B] Also by the T:equivpneumofibersFiber Pneumoconnectendess Lemma (iii)if f:X→ Y is an arrow with Y∈𝒲, then there exists a unique f':Π X→ Y such that f'∘ p_X=f. Therefore Π is functorial and Π⊣ℐ.By <ref>, Π preserves complemented subobjects and by <ref> it also preserves products. § DETACHABLE CLOSURE AND CONNECTED COMPONENTSThe postulated existence of Π provides a space that encodes detachability. Intuitively, it should also cover the notion of connected component. For any point a:1→ X it does follow thatp_X^-1(p_Xa) is connected and detachable—the two necessary conditions for a connected component. In general, the main difficulty in the reasoning arises from the lack of determinacy from points. The concept of pneumoconnectedness circumvents this in the general description as seen in the T:equivpneumofibersFiber Pneumoconnectendess Lemma. However, more can still be said. For any nonempty subspace A of a space Xanother subspace à can be constructed that is the smallest detachable from X containing A.Furthermore, if A is connected, then so is à and as such, it coincides with p_X^-1(p_Xa) for any a∈ A. In order to accomplish these goals, the following results of perhaps general interest are needed. Firstly, the next proposition is a general consequence of Π being functorial and a left-adjoint. Every quotient of a decidable space is decidable in a topossatisfying DQO. Let D be any decidable object and let R be an equivalence relation on it. As D× D is decidable and R is a subobject of D× D, R is decidable. Now, q:D↠ Q is the coequalizer of the projections from R to D, say r_1,r_2. 
Furthermore, R is the kernel pair of q with projections r_1,r_2. So, as Π preserves coequalizers,Π D[r]^Π q Π Qis a coequalizer of Π r_1,Π r_2. Consider now the following commutative diagram:R@<.5ex>[r]^r_1@<-.5ex>[r]_r_2[d]_≅^p_RD[r]^q[d]_p_D^≅Q[d]^p_Q Π R@<.5ex>[r]^Π r_1@<-.5ex>[r]_Π r_2 Π D[r]_Π q Π Q.It is clear that Π q∘ p_D coequalizes r_1,r_2. Let us see that Π q∘ p_D is a coequalizer of r_1,r_2. So let q':D X→ Q' be another S-function that coequalizes r_1,r_2. Hence chasing the diagram above,q'∘ p_D^-1∘Π r_1=q'∘ p_D^-1∘Π r_2.But Π q is a coequalizer of Π r_1,Π r_2. Therefore there is a unique t:Π Q→ Q' such thatq'=t∘Π q∘ p_D. Whence Q≅Π Q; that is, Q is decidable.With this in hand, the strategy is to produce a suitable quotient and extract from it a detachable subset. The following lemma brings light to what the general argument should be.Let S be a local set theory on a local languagewith (S) satifying NS and WDQO. Let X be an S-set and A be a complemented nonempty subset of Π X. Let R_A be the equivalence relation on Π X generated by A× A, and q_A:Π X→ Q_A the quotient of that relation. Then⊢_Sq_A^-1(q_Aa)=A,where a is some point of A. Let D_A:={⟨ z,z⟩:z∈ A^c∩Π X}. Then, as A is complemented in Π X, the relation A× A∪ D_A is reflexive; that is,z∈Π X⊢_S⟨ z,z⟩∈ A× A∪ D_A.It is clear that this relation is symmetric and transitive. Now, since ⊢_SA× A∪ D_A=A× A∪Δ_Π X,⊢_SR_A=A× A∪ D_A. Finally, as ⊢_S(A× A)∩ D_A=∅, we have ⊢_Sq_A^-1(q_Aa)=A, where a is some point of A given by NS. Let S be a local set theory on a local languagewith (S) satifying NS and WDQO. Let X be an S-set and A be a nonempty subset of Π X. Let R_A be the equivalence relation on Π X generated by A× A, and q_A:Π X→ Q_A the quotient of that relation. The smallest complemented subset of Π X containing A is Ã:=q_A^-1(q_Aa),for some point a of A. By <ref>, Q_A is decidable. On the other hand, since ⊢_S(A=∅), there is a closed term a such that ⊢_Sa∈ A. Hence, from the decidability of Q_A, the singleton {q_Aa} is complemented in Q_A. Therefore à is complemented in Π X. It is clear that A⊆Ã.Let B be a complemented subset of Π X containing A and let R_B be the equivalence relation generated by B× B. Then⊢_SA× A⊆ B× B⊆ R_B.Hence ⊢_SR_A⊆ R_B. Therefore there is an S-function φ:Q_A↠ Q_B such that the following diagram commutes:Π X@->>[r]^q_A@->>[dr]_q_BQ_A@->>[d]^φ Q_B,where q_B is the quotient of R_B. Since there is a closed term b such that ⊢_Sb∈ B, and φ is surjective, there is some closed term a' such that⊢_Sa'∈ A∧ b=φ(a').Therefore⊢_S{a'}⊆φ^-1(b)⊢_SÃ=q_A^-1(a')⊆ q_A^-1φ^-1(b)=q_B^-1(b)=B.(<ref>)Let S be a local set theory on a local languagewith (S) satifying NS and WDQO. Let X be an S-set. For every subset A of X there is a smallest complemented subset à of X containing A. We have ⊢_Sp_X(A)⊆Π X. Hence ⊢_Sp_X(A)⊆p_X(A) with p_X(A) the smallest complemented subset of Π X containing p_X(A). Whence⊢_SA⊆ p_X^-1(p_X(A))⊆ p_X^-1(p_X(A)).Now let B a complemented subset of X containing A. So, by <ref>, p_X(B) is complemented in Π X, and clearly ⊢_Sp_X(A)⊆ p_X(B). Hence ⊢_Sp_X(A)⊆ p_X(B). Therefore, again by <ref>,⊢_S p_X^-1(p_X(A))⊆ B. Let S be a local set theory on a local languagewith (S) satifying NS and WDQO. Let X be an S-set. Then A↦ p_X^-1(p_X(A)) is left adjoint to the inclusion J:_c(X)→(X).§ TOPOS CONDITIONS FOR THE DECIDABLE EQUALITY To determine whether () is a topos, a couple of properties are yet to be established. It is already lextensive. Furthermore, NS and DQO give the following result. 
For a nondegenerate topossatisfying NS and WDQO, () is cartesian closed. Let A,B,C∈() and let f:A× B→ C be in . Then the following diagram incommutes:A× B[dr]^f[d]_f̃× 1 C^B× B[r]_(.6)[d]_p_C^B× 1CΠ(C^B)× B[ur]_e[d]_ẽ× 1 C^B× B@/_.7pc/[uur]_,where e is the appropriate composite determined by the following diagram, which commutes by <ref> and the T:equivpneumofibersFiber Pneumoconnectedness Lemma:C^B× B@/_.7pc/[dl]_p_C^B× 1[d]_p_C^B× B[dr]^Π(C^B)× B[r]_≅ Π(C^B× B)[r]_(.65)'CTherefore, ẽ∘ p_C^B=1. Hence p_C^B is monic, and accordingly an iso. Whence Π(C^B)≅ C^B.The only possibly missing property for it to be a topos is that of having a subobject classifier. The obvious candidate to play this rôle is 2=1+1, yet a proof of this without any further assumptions—which the next section will trivially provide—is at the time of writing, at best elusive and at worst unattainable. However, the following result provides conditions for this to be the case in the presence of the Nullstellensatz and DQO.Letbe a nondegenerate topos satisfying NS and WDQO. The following statements are equivalent. (i) The arrow Π(f) is epic for every -dense arrow f. (ii) Every subobject of a decidable object is complemented in . (iii) The only -dense subobject of a decidable object inis the object itself. (iv) The category () is a topos. In which case, () is a well-pointed topos.In general, () need not be a topos. Consider the Sierpinski topos 𝒮. By <ref>, it suffices to see that (𝒮) is not Boolean. Consider any injective function f:A→ B with |A|≥ 2. Let a∈ A. So let g be the restriction of f to {a}. That is, g:{a}→ B is a subobject of f but g is not complemented.The argument to prove the theorem is split into a few propositions. Let S be a consistent local set theory on a local languagewith (S) satisfying NS and WDQO. Let m:B↣Π X be a monic arrow. LetR@(>->[r]^(.55)p_X^-1(m)@->>[d]_q X@->>[d]^p_XB@(>->[r]_mΠ Xbe a pullback diagram. Then q is an epic arrow with pneumoconnected fibers, and accordingly Π(R)≅ B. Without loss of generality, letR={⟨ z,x⟩:m(z)=p_X(x)}.Now, defineZ:={z:z∈ B∧∃ v∈ P_c(R).(q^-1(z)∩ v=∅)∧(q^-1(z)∩ v^c=∅)}Suppose ⊢_S(Z=∅). Then there is a closed term b such that⊢_Sb∈ B∧∃ v∈ P_c(R).(q^-1(b)∩ v=∅)∧(q^-1(b)∩ v^c=∅)Hence there is a closed term D such that⊢_SD∈ P_c(R)∧(q^-1(b)∩ D=∅)∧(q^-1(b)∩ D^c=∅). On the other hand,⟨ z,x⟩∈ q^-1(b)⊢_Sx∈ p_X^-1(m(b))x∈ p_X^-1(m(b))⊢_S⟨ b,x⟩∈ q^-1(b).So there is an isomorphism φ:q^-1(b)→ p_X^-1(m(b)). Let E:=q^-1(b)∩ D. Therefore ⊢_Sφ E∈ P_c(p_X^-1(m(b)))∧(φ E=∅)∧(φ E=p_X^-1(m(b))),which is a contradiction, since p_X has pneumoconnected fibers. So ⊢_SZ=∅. That is, q has pneumoconnected fibers. Hence, since B is decidable, by the T:equivpneumofibersFiber Pneumoconnectendess Lemma, Π R≅ B. Let S be a consistent local set theory on a local languagewith (S) satisfying NS and WDQO. Every subobject of Π X is classified by 2 if and only if Π(f) is epic for every -dense f:A→ X.Suppose that Π(f) is epic for every -dense arrow f:A→ X in (S). Now, let m:B↣Π X be a monic arrow. Consider the following pullback diagrams:R@(>->[r]^(.55)p_X^-1(m)@->>[d] X@->>[d]^p_X P@(>->[l]_p_X^-1(m^c)@->>[d] B@(>->[r]_mΠ XB^c@(>->[l]^(.45)m^cSince inverse images preserve pseudocomplements, p_X^-1(m^c)≅ p_X^-1(m)^c, without loss of generality P=R^c, and, by <ref>, B≅Π R and B^c≅Π R^c.Now, as, by <ref>, r:R+R^c↣ X is -dense, Π(r) is epic. 
Consider the following commutative diagram:R@(>->[rrr][dd]^(.65)p_R|(.49)R+R^c@(>->[dl]^r@->>[dd]|^(.65)p_R+R^cR^c@(>->[lll]@->>[dd]^p_R^cR@@<-1.45ex>[ur]^[@!43]=@(>->[rrr]^(.6)p_X^-1(m)@->>[dd] X@->>[dd]^(.65)p_XR^c@(>->[lll]_(.4)p_X^-1(m^c)@@<-1.45ex>[ur]^[@!43]=@->>[dd]Π R@(>->[rrr]|(.54) Π R+Π R^c@->>[dl]^Π r Π R^c@(>->[lll]|(.34)B@@<-1.45ex>[ur]^[@!43]≅@(>->[rrr]_mΠ X B^c@(>->[lll]^m^c@@<-1.45ex>[ur]^[@!43]≅Therefore, as Π R^c≅ B^c and accordingly Π R+Π R^c≅Π R∪Π R^c, then Π r is monic. So B is complemented.Conversely, suppose every subobject of Π X is complemented. Let fX→ Y be a -dense arrow in (S).Now, as f is -dense (its image is -dense in Y),y∈ Y⊢_S(∃ x∈ X.y=fx).Hencey∈ Y∧ w=p_Y(y)⊢_S(w=p_Y(y))∧(∃ x∈ X.y=fx)⊢_S(w=p_Y(y)∧∃ x∈ X.y=fx)⊢_S∃ x∈ X.w=p_Y(y)∧ y=fx⊢_S∃ x∈ X.w=p_Y(fx).So ∃ y∈ Y.w=p_Y(y)⊢_S∃ x∈ X.w=p_Y(fx). Whence, as p_Y is epic,w∈Π Y⊢_S∃ y∈ Y.w=p_Y(y)⊢_S∃ x∈ X.w=p_Y(fx).On the other hand,x∈ X∧ w=(Π f)p_Xx⊢_Sp_X(x)∈Π X∧ w=(Π f)p_Xx⊢_S∃ z∈Π X.w=Π f(z).Thereforew∈Π Y⊢_S∃ x∈ X.w=p_Y(fx)⊢_S∃ x∈ X.w=(Π f)p_Xx⊢_S∃ z∈Π X.w=Π f(z).That is, the image of Π f is -dense. So, by assumption, (Π f) is a complemented subobject of Π Y. Suppose ⊢_S((Π f)^c∩Π Y=∅). Then, as S strongly witnessed, there is a closed term b such that⊢_Sb∈(Π f)^c∩Π Y.But (Π f) is -dense, so ⊢_S(b∈(Π f)), which is a contradiction. So ⊢_S(Π f)=Π Y. That is, Π f is epic. <ref> yields (i)⇔(ii). Also, <ref> yields (ii)⇔(iii).By <ref>, (ii) implies (iv). Now, since () is a full subcategory of , 2 is the same in () and ,and every subobject of a decidable object is decidable, then (iv) implies (ii) by <ref>. § PRECOHESIVENESS<cit.> proposes the following two postulates as extensions to the topos axioms. [WDSO] Each object A has a least subobject γ_A: A@(>->[r] A such that A is decidable and every global element of A factors through γ_A. [DSO] Each object A has a unique subobject γ_A:@(>->[r] A A with A decidable and every global element of A factoring through γ_A. It is seen that in the presence of either of them together with the Nullstellensatz, the full subcategory ℳ of domains of isomorphic γ is a topos. In either case,is a right adjoint to the inclusion functor from ℳ. Assuming DSO in the presence of NS,ℳ is (). Menni proves that the sole existence of a right adjoint to the inclusion :()→ makes () a topos <cit.>. Letbe a topos. If the inclusion functor :()→ has a right adjoint R, then the counit ϵ of the adjunction ⊣ R is a monomorphism. For X∈ let m∘ e be the epi-mono factorization of ϵ_X, where m is the image of ϵ_X.Now, asis a topos, e is the coequalizer of its kernel pair (h,k): the following diagram is a pullback diagram:D[r]^k[d]_h RX[d]^e RX[r]_(.4)e⊷(ϵ_X),and e is the coequalizer of h and k (see <cit.>). Then, since ⟨ h,k⟩:D↣ RX× RX is monic and RX× RX is decidable, D is decidable. Therefore, h and k are arrows in (). As () is a topos by the previous proposition, h and k have a coequalizer e' in (). Butpreserves colimits since it is left adjoint. Whence, e' is a coequalizer of h and k intoo. Therefore, (e')≅(e) in , and ⊷(ϵ_X) is decidable.Now, as m:⊷(ϵ_X)→ X has domain in (), so, by the universality of ϵ_X, there is a unique arrow m':⊷(ϵ_X)→ RX such that the following diagram commutes:⊷(ϵ_X)@–>[d]_m'[dr]^mRX[r]_ϵ_XXHence, since m is monic, so is m'.On the other hand,ϵ_X=m∘ e=ϵ_X∘ m'∘ e.Therefore, by the universality of ϵ_X, 1=m'∘ e. Hence, m' is split epic. So m' is an isomorphism with inverse e. Whence, ϵ_X is monic.Thus, the requirement thatbe a subobject is also necessary. 
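The coreflection in the proposition above can be computed explicitly in Set^E, where (under the same hypothetical encoding as before) the right adjoint to the inclusion of decidable objects is the fixed-point functor R(X) = Fix(e), and the counit is the evident inclusion Fix(e) → X, visibly monic. A minimal finite sketch of the universal property:

```python
from itertools import product

def fixed_points(e):
    return [x for x in range(len(e)) if e[x] == x]

def hom_from_discrete(S, e):
    """Equivariant maps i(S) -> (X, e), where i(S) carries the identity."""
    return [f for f in product(range(len(e)), repeat=len(S))
            if all(e[f[s]] == f[s] for s in range(len(S)))]

e = [0, 1, 1, 0]      # idempotent: fixes 0 and 1, folds 2 |-> 1 and 3 |-> 0
F = fixed_points(e)   # [0, 1]
# Every map out of a discrete object lands inside Fix(e) ...
assert all(set(f) <= set(F) for f in hom_from_discrete(range(2), e))
# ... so Hom(i(S), X) is in bijection with Hom(S, Fix(e)):
assert len(hom_from_discrete(range(2), e)) == len(F) ** 2
print("counit Fix(e) -> X is the (monic) inclusion; coreflection verified")
```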
Even when only WDSO is assumed in the presence of NS, McLarty sketches an argument to show that ℳ is equivalent to the full subcategory _ of -sheaves. And furthermore, that the inclusion functor from _ has a left adjoint.Under said equivalence, this is the same as saying that WDSO together with NS provide an adjunction ⊣Λ.More details on this and a further generalization described by McLarty can be found in the [A:UIAO]Appendix.If a topossatisfies DQO, then Π constitutes a reflector which decidabilizes the objects of , and ifsatisfies DSO, thenconstitutes a coreflector which decidabilizes the objects of .In light of the characterization of precohesiveness given by <cit.>, a toposis precohesive over a topos 𝒮 if there is a string of adjunctionsf_!⊣ f^*⊣ f_*⊣ f^!:→𝒮such that f^* is fully faithful, f_! preserves finite products, and that the counit f^*f_*→ 1 is monic. With this in mind, and with the results of Theorems A and B, it is natural to piece together DQO and DSO in the presence of NS and ask whetheris precohesive over (). It is now evident:Sincesatisfies DSO, by C:stringofthreeMcLarty's Minimal Separated Cover Theorem and <ref>𝒲=()=ℳ≃_(),and Π⊣ℐ⊣Γ⊣Λ:→().Let f:→() with f_!=Π, f^∗:=ℐ, f_∗:=, f^!=Λ. Then f^* is already fully faithful, f_! preserves finite products by <ref>, and, by DSO or <ref>, f^*f_*→ 1 is already monic.In order to provide a converse, first notice that having decidable equality is stable under certain conditions.Letbe a lextensive category and 𝒮 be a full lextensive subcategory of . If 𝒮 is Boolean and the inclusion functor J:𝒮→ has a left adjoint and a right adjoint, then 𝒮 is contained in (). Every subobject in 𝒮 is complemented; in particular, the diagonal of every object of 𝒮. So, since J preserves limits and colimits, every object of 𝒮 is decidable in . Letbe a nondegenerate topos satisfying NS. Let f:→𝒮 be precohesive with 𝒮 a Boolean topos. Without loss of generality, suppose 𝒮 is a full subcategory ofand f^∗:𝒮→ is the inclusion functor. Now, by <ref>, 𝒮⊆().On the other hand, since f is a geometric morphism, the 1 and 2 of 𝒮 are the same of . Therefore every arrow X→ 2 infactors through the unit σ_X:X→ f^∗ f_!X. So, by the T:equivpneumofibersFiber Pneumoconnectendess Lemma,satisfies DQO. Now, again by the T:equivpneumofibersFiber Pneumoconnectendess Lemma, p:1→Π is necessarily the unit of f_!⊣ f^∗. Therefore, for any A∈(), since Π(A)≅ A, it follows thatf^∗ f_!A≅ A.Therefore f^∗ is dense and thus an equivalence.Without loss of generality, suppose 𝒮=(). Let A∈() and h:B↪ A be a monic arrow in . Hence B is in () and, since 𝒮 is full, h is in 𝒮. But 𝒮 is Boolean. Therefore B is classified by 2, the 2 of .To see that it further satisfies DSO consider the counit β:f^∗ f_∗→ 1. Clearly, every global element of an object X infactors through β_X:f^∗ f_∗ X→ X; and if g:A↣ X has the same property with A decidable, then there is a unique arrow g':A↣ f_∗ X such that the following diagram commutes:A@(>->[dr]^g@(>->[d]_g' f_∗ X@(>->[r]_(.6)β_XXIt remains to see that g' is epic. Without loss of generality, think of A as ⊢_()A⊆ f_∗ X. Now, sincesatisfies NS, then () is complete. Hence ⊢_()f_∗ X∩(A^)^c=∅ or ⊢_()(f_∗ X∩(A^)^c=∅). If ⊢_()(f_∗ X∩(A^)^c=∅), then there is a closed term a such that ⊢_()a∈ f_∗ X∩(A^)^c. But since ⊢_()f_∗ X∩(A^)^c⊆ X, ⊢_()a∈ X.Hence ⊢_()a∈ A. That is,⊢_()a∈ A∧(a∈ A)since⊢_()a∈(A^)^c⇔(a∈ A)⇔(a∈ A).Therefore, as () is consistent, ⊢_()f_∗ X∩(A^)^c=∅. Whence A is -dense in f_∗ X.Now, since every subobject of f_∗ X is classified by 2, by <ref>, f_!g' is epic. 
Consider the following commutative diagram:A[r]^g'[d]_≅^σ_Af_∗ X[d]_σ_f_∗ X^≅f_!A@->>[r]_f_!g'f_!f_∗ X.Hence g' is epic. Sosatisfies DSO.§ DECIDABLE INDISCERNIBILITY AND THE REMNANTS OF COHESIONFor any local operator j, the full subcategory of j-sheaves has a left adjoint L_j and the unit can be explicitly described in two steps: First passing to a universal separated quotient q_X^j:X→ Q_jX and then closing it inside the space of j-closed subobjects. In terms of the corresponding local set theory it can be further described as the quotient obtained by the equivalence relation μ_j(x=x') on X, where μ_j is the modality corresponding to j.Let f:→𝒮 be precohesive over a Boolean 𝒮 and supposesatisfies the Nullstellensatz. By T:TeorCTheorem C andT:TeorDTheorem D, without loss of generality, let Π⊣ℐ⊣Γ⊣Λ:→() be such f.[Decidable Indiscernibility Lemma]T:Quality For a topossatisfying NS, DSO and DQO, the precohesive morphism f:→() is a quality type if and only if Q_X is decidable.Recall that a precohesive morphism f:→𝒮 is a quality type if the canonical natural morphism θ:Γ→Π is an isomorphism (see<cit.> ). More explicitly, if α is the unit of ℐ⊣Γ, then θ=α^-1Π·Γ p. Futhermore, For a topossatisfying NS, DSO and DQO, with precohesive morphism Π⊣ℐ⊣Γ⊣Λ:→(), the canonical morphism θ:Γ→Π is such thatℐ (θ_X)=p_X∘γ_X. As𝒮[rr]^1[dr]_ℐ <><4.5>γ 𝒮[dr]^ℐ<><-4.5>α [ur]_Γ[rr]_1 = 1_ℐ,we have<><4.5> α^-1 [dr]^Γ<><4.5>γ 𝒮[dr]^ℐ𝒮[ur]^ℐ[rr]_1 𝒮[r]_ℐ @|=[r]𝒮[r]_ℐ [ur]^Γ[rr]_1 .Hence[rr]^1[dr]_Π <><4.5> α^-1 [dr]^Γ <><4.5>γ 𝒮[dr]^ℐ<><-4.5>p 𝒮[ur]_ℐ[rr]_1 𝒮[r]_ℐ @|=[r][dr]_Π[rr]^1 [ur]^Γ[rr]_1 . <><-4.5>p 𝒮[ur]_ℐThat is, ℐθ=p·γ, which amounts to (<ref>).The following result shows that the property of having pneumoconnected fibers is also present in the canonical map from an object to its sheafification. Let S be a consistent local set theory on a local languagewith (S) satisfying NS. Then q_^X:X↠ Q_X has pneumoconnected fibers. Let X be a universal S-set. DefineR:={z:z∈ Q_X∧∃ v∈ P_c(X).((q_^X)^-1(z)∩ v≠∅)∧((q_^X)^-1(z)∩ v^c≠∅)}.Suppose ⊢_S(R=∅). Then there is a closed term a such that ⊢_Sa∈ R; that is, such that⊢_Sa∈ Q_X∧∃ v∈ P_c(X).((q_^X)^-1(a)∩ v≠∅)∧((q_^X)^-1(a)∩ v^c≠∅).Hence⊢_S∃ v∈ P_c(X).((q_^X)^-1(a)∩ v=∅)∧((q_^X)^-1(a)∩ v^c=∅)),but, since S is strongly witnessed, it is also witnessed. Therefore there is a closed term A such that⊢_SA∈ P_c(X)∧((q_^X)^-1(a)∩ A=∅)∧((q_^X)^-1(a)∩ A^c=∅).And, again, there are closed terms b and b' of the appropriate type such that⊢_Sb∈(q_^X)^-1(a)∩ Aand⊢_Sb'∈(q_^X)^-1(a)∩ A^c.Therefore ⊢_S(b=b')∧(b=b'), which is a contradiction since S is consistent.So ⊢_SR=∅ since S is complete and consistent. Hencez∈ Q_X∧∃ v∈ P_c(X).((q_^X)^-1(z)∩ v≠∅)∧((q_^X)^-1(z)∩ v^c≠∅)⊢_SWhencez∈ Q_X⊢_S∃ v.v∈ P_c(X)∧((q_^X)^-1(z)∩ v=∅∨(q_^X)^-1(z)∩ v^c=∅)⊢_S∀ v.(v∈ P_c(X)∧((q_^X)^-1(z)∩ v=∅∨(q_^X)^-1(z)∩ v^c=∅))⊢_S(v∈ P_c(X)∧((q_^X)^-1(z)∩ v=∅∨(q_^X)^-1(z)∩ v^c=∅)).Thereforez∈ Q_X,v∈ P_c(X)⊢_S((q_^X)^-1(z)∩ v=∅∨(q_^X)^-1(z)∩ v^c=∅). As any decidable object is necessarily -separated, any arrow from X to a decidable object factors through Q_X. But the latter need not coincide with Π(X) since it need not be decidable. An application of the T:QualityDecidable Indiscernibility Lemma and Corollary 4.6 in <cit.> provides examples of toposes with Q_X not decidable: for example, the topos of reflexive graphs. The following result and its proof are already explained in <cit.> within a paragraph where several other ideas are discussed. 
It is re-stated here to simplify its subsequent cross-referencing herein.[<cit.>] Let S be a consistent local set theory on a local languagewith (S) satisfying NS and WDSO. Then γ_X:Γ X↣ X is -dense in X for X a universal S-set. Now, the next result is an easy consequence of the definitions, yet it is key in proving the T:QualityDecidable Indiscernibility Lemma.Let A be -separated and let m:A↣ X, then q_^X∘ m is monic. Sincex,x'∈ A,q_^X∘ m(x)=q_^X∘ m(x')⊢_()(x=x') and x,x'∈ A,q_^X∘ m(x)=q_^X∘ m(x')⊢_()(x=x')⇒ (x=x'), it follows thatx,x'∈ A,q_^X∘ m(x)=q_^X∘ m(x')⊢_()x=x'. Suppose θ is an isomorphism. Then Γ≅Π. Hence ℐ≅Λ. Therefore, since Q_X↣Λ X, Q_X is decidable.Conversely, suppose Q_X is decidable. Then, by the T:equivpneumofibersFiber Pneumoconnectendess Lemma, Q_X≅Π X. On the other hand, by <ref>, γ_X:Γ X↣ X is -dense in X. Hence Π(γ_X) is epic by <ref> and, equivalently, Q_(γ_X) is too. By <ref>, q_^X∘γ_X is monic. Now, since Γ X[r]^(.45)q_^Γ X[d]_γ_XQ_Γ X[d]^Q_(γ_X)X[r]_(.45)q_^XQ_Xcommutes,it follows that q_^X∘γ_X is epic and thus iso. That is, p_X∘γ_X is iso for all X. And since ℐ is fully faithful, by Lemma <ref> the conclusion follows. § FINAL THOUGHTS The discovery that in the presence of the Nullstellensatz, the postulates DQO and DSO complete the picture for precohesiveness over a boolean base is quite enlightening. It remains to find a clear characterization of this without NS; even though, in a sense, this is more of a mathematical exercise than a geometrical one. The way this precohesion behaves in the presence of the Nullstellensatz suggests there ought to be a description that characterizes the behavior of the fibers of q^j_X:X→ Q_jX for an arbitrary local operator j, which might then provide a description for the required behavior of the fibers of the unit of f^*f_! of an arbitrary precohesion f. Yet nothing thus far eases the work required to synthetically describe the image of f^*.Finally, the description given by T:TeorCTheorem C and T:TeorDTheorem D does provide an axiomatic ground framework to build upon in the specific context of Synthetic Differential Geometry which adds to the one proposed by <cit.>. § SOME REMARKS ABOUT MCLARTY'S CANONICAL POINT. The purpose of this appendix is to shed light on a couple of ideas very succinctly described in <cit.>. Letbe a topos and j:→ be a local operator on it. Given an object A∈, a j-cover of A is a subobject B↣ A of A which is j-dense. In particular, even for the weaker requirement WDSO, in the presence of NS,γ_X:Γ X→ X is -dense by <ref>. It is furthermore the smallest:Letbe a nondegenerate topos satisfying NS and WDSO. Then for every object X, the subobject γ_X:Γ X→ X is the smallest -cover.Let a:A↣ X be a -cover of X∈. By (<ref>),for any point peither ⊢_S p∈ A or ⊢_S p∉ A. However, we have ⊢_S(p∈ A). So, asis nondegenerate, ⊢_Sp∈ A. Hence every global element of X factors through a∘γ_A:Γ A↣ X. Therefore there is a monic arrow r:Γ X↣Γ A. On the other hand, as Γ is right adjoint, Γ a is monic. So Γ X≅Γ A. That is, Γ X is a subobject of every -cover of X; that is γ_X is the smallest -cover of X. In particular, Γ X is separated since it is decidable. More generally, if for an arbitrary j every object A∈ has a smallest j-cover and every domain of a smallest j-cover is separated, the following result is sketched by McLarty. Let 𝒩 be the full subcategory of domains of minimal j-covers in . Let γ_A:Γ A→ A denote the smallest j-cover of A for every object A of . 
[McLarty's Minimal Separated Cover Theorem]C:stringofthree Letbe a topos and j:→ be a local operator on it. Assume further that every object A∈ has a smallest j-cover and every domain of a smallest j-cover is separated.*There exists a right adjoint Γ to the inclusion ℐ_𝒩.*The restriction of the j-sheafification functor L to 𝒩 is an equivalence between 𝒩 and _j.*The following diagram_j@^((->@/_.7pc/[dl]_J@|(.42)⊤[d] [rr]^Γ @|(.4)⊤[d]𝒩@^((->@/^2pc/[ll]^ℐ_𝒩@/_.7pc/[ul]_L|_𝒩is a UIAO (cf. <cit.>). Letbe a nondegenerate topos satisfying NS and WDSO. Let ℳ be the subcategory ofof objects X with γ_X an isomorphism. Then ℳ=𝒩, and accordingly ℳ≃_.This is fundamental to obtaining from NS, DQO and DSO the string of adjoints required in the definition of precohesion. The following couple of result establish very natural stability facts about dense maps: transitivity under composition and preservation under pullbacks. Let S be a consistent local set theory on a local languageand μ a modality in S. If C↣ B and B↣ A are μ-dense in B and A respectively, then C↣ B↣ A is μ-dense in A. Without loss of generality suppose that the given dense arrows are actually the inclusions X⊆ Y⊆ Z. As X is dense in Y and Y is dense in Z⊢_S X⊆ Y∧ Y⊆X∧ Y⊆ Z∧ Z⊆Y.So naturally ⊢_SX⊆ Z and ⊢_SY⊆X. Thus ⊢_SY⊆X and ⊢_SX⊆ Z∧ Z⊆X. Letbe a topos and j:→ be a local operator on it. If s:S↣ A is j-dense and f:B→ A is an arrow in , then f^-1(s) is j-dense. If s=1_A then f^-1(s)=1_B. Therefore, as f^-1(s)=f^-1(s), f^-1(s) is j-dense. [Proof of C:stringofthreeMcLarty's Minimal Separated Cover Theorem]To verify (<ref>), it suffices to see that each γ_A is couniversal to A from 𝒩, let B∈𝒩 and f:B→ A be an arrow in . Let D be the pullback object in the following pullback diagram:D[r]@(>->[d]_f^-1(γ_A) Γ A@(>->[d]^γ_AB[r]_f ASo, by <ref>, D is dense in B, and B is a j-cover of some object X of . Hence, by <ref>, D is a j-cover of X, and accordingly it is isomorphic to B. So, as γ_A is monic, there is a unique arrow f':B→ A making the following diagram commute:B[dr]^f[d]_f' A[r]_γ_AA. Now on to (<ref>). Consider Γ|__j:_j→𝒩 and L|_𝒩:𝒩→_j, where L is the left adjoint of the reflection L⊢ J:_j→. To verify (<ref>) it will be seen that L|_𝒩∘Γ|__j≅ 1__j and Γ|__j∘ L|_𝒩≅ 1_𝒩.To prove the first isomorphism in (<ref>), let l_A:A→ LA be the unit of the reflection L⊣ J:_j→. For a j-sheaf F it follow thatl_F:F→ LFis an iso by Corollary 5.20 and Theorem 5.13 in <cit.>. On the other hand, as γ_F is a dense monic, thenLγ_F:LΓ F→ LFis iso (see the remark in <cit.> before Corollary 5.27). Therefore there is a natural isomorphismL|_𝒩∘Γ|__j≅ 1__j,as promised. To prove the second isomorphism in (<ref>), let X∈. Then, as Γ X is the domain of the smallest j-cover γ_X, Γ X is separated, so l_Γ X is a j-cover of LΓ X (Corollary 5.20 in <cit.>). Therefore, as γ_LΓ X is the smallest j-cover, there is a monic arrow r:Γ LΓ X↣Γ X such that the following diagram commutes:Γ@(>->[dr]^l_Γ X X Γ LΓ X@(>->[r]_γ_LΓ X@(>->[u]^r LΓ X.On the other hand, as γ_X is a j-cover (a dense monic) and LΓ X a sheaf, there is a unique arrow ψ:X→ LΓ X making the following diagram commute:Γ X@(>->[r]^γ_X@(>->[dr]_l_Γ XX@–>[d]^ψ LΓ X.For ψ, the following diagram commutes:Γ X@(>->[r]^γ_X[d]_ΓψX[d]^ψ Γ LΓ X@(>->[r]_γ_LΓ XLΓ X.So, from the last three diagrams,l_Γ X =ψ∘γ_X(second diagram)=γ_LΓ X∘Γψ (third diagram)=l_Γ X∘ r∘Γψ (first diagram).But l_Γ X is monic, so r∘Γψ=1_Γ X, and hence r is epic. Therefore r is iso.To verify naturality r, let f:X→ Y be an arrow in . 
Since r=Γ(ψ)^-1, it suffices to check the naturality of ψ. To do so, consider the following diagram:Γ X[rr]^l_Γ X[dr]_γ_X[ddd]_Γ f LΓ X[ddd]^LΓ f X[d]^f[ur]_ψ_XY[dr]^ψ_YΓ Y[ur]^γ_Y[rr]_l_Γ Y LΓ Y.Since the triangles, the outermost square and the left square commute,ψ_Y∘ f∘γ_X=LΓ f∘ψ_X∘γ_X.But γ_X is a dense monic and LΓ Y a sheaf, so the right square commutes. That is, ψ is a natural transformation. This establishes (<ref>).To verify (<ref>), first notice that the natural transformation ψ:1_⇒ LΓ:→ is an isomorphism when restricted to _j. Indeed, for F∈_j, it is straightforward to verify that the following diagram commutes:Γ F[r]^l_Γ F[d]_γ_FLΓ F[d]^Lγ_F_≅F[r]_l_F^≅[ur]^ψ_FLF. To establish (<ref>), it remains to see that Γ⊣Λ:=J∘ L|_𝒩. Let f:X→Λ A be an arrow inwith A∈𝒩. The following diagram commutes:X[r]^ψ_X[d]_fΛΓ X[d]^ΛΓ f Λ A[r]_ψ_Λ A ΛΓΛ AOn the other hand, as r:Γ|__j∘ L|_𝒩⇒1_𝒩 and ψ:1__j⇒ L|_𝒩∘Γ|__j form an adjoint equivalence,<><4.5>r _j[rr]^1[dr]^Γ|__j_j@|=[r] 1_L|_𝒩 𝒩[rr]_r[ur]^L|_𝒩 <><-4>ψ 𝒩[ur]_L|_𝒩That is, Lr·ψ L=1. Therefore ψ L=Lr^-1.So the first diagram may be re-drawn asX[r]^ψ_X[d]_fΛΓ X[d]^ΛΓ f Λ AΛΓΛ A[l]^Λ r^-1_ANow, if there is another arrow g:Γ X→ A inmaking the following diagram commute:X[r]^ψ_X[dr]_fΛΓ X[d]^Λ gΛ A,thenΛ g∘ψ_X∘γ_X=Λ(r^-1_A∘Γ f)∘ψ_X∘γ_XΛ g∘ l_Γ X =Λ(r^-1_A∘Γ f)∘ l_Γ X,but l_Γ X is a dense monic and Λ A a sheaf, so Λ g=Λ(r^-1_A∘Γ f). Whence, since Λ is faithful, g=r^-1_A∘Γ f. Therefore Γ⊣Λ with unit ψ. plainnat | http://arxiv.org/abs/2311.16355v2 | {
"authors": [
"Enrique Ruiz Hernández",
"Pedro Solórzano"
],
"categories": [
"math.CT",
"math.LO",
"Primary 18B25, Secondary 03G30, 03B38"
],
"primary_category": "math.CT",
"published": "20231127223813",
"title": "Elementary axioms for parts in toposes"
} |
Two unresolved questions at galaxy centers, namely the formation of the nuclear star cluster (NSC) and the origin of the γ-ray excess in the Milky Way (MW) and Andromeda (M31), are both related to the formation and evolution of globular clusters (GCs). GCs migrate towards the galaxy center due to dynamical friction, and get tidally disrupted to release their stellar mass content, including millisecond pulsars (MSPs), which contribute to the NSC and the γ-ray excess. In this study, we propose a semi-analytical model of GC formation and evolution that utilizes the Illustris cosmological simulation to accurately capture the formation epochs of GCs and simulate their subsequent evolution. Our analysis confirms that our GC properties at z=0 are consistent with observations, and our model naturally explains the formation of a massive NSC in a galaxy similar to the MW and M31. We also find a remarkable similarity between our model prediction and the γ-ray excess signal in the MW. However, our predictions fall short by approximately an order of magnitude in M31, indicating distinct origins for the two γ-ray excesses. Meanwhile, we utilize the catalog of Illustris halos to investigate the influence of galaxy assembly history. We find that the earlier a galaxy is assembled, the heavier and more spatially concentrated its GC system is at z=0. This results in a larger NSC mass and brighter γ-ray emission from deposited MSPs.

globular clusters: general – galaxies: evolution – Galaxy: centre – gamma-rays: galaxies – pulsars: general

§ INTRODUCTION

A compact, bright star cluster, known as the nuclear star cluster (NSC), is commonly observed at the centers of galaxies of all types (e.g. <cit.>). NSCs occupy at most the innermost few to tens of parsecs, and have masses of 10^5∼10^8 M_⊙ <cit.>, making them the densest known star clusters. Observationally, they are distinguished by their prominently brighter luminosity on top of the disk or bulge component (e.g. <cit.>). Besides, some NSCs are also observed to co-exist with a supermassive black hole (SMBH) at the galaxy center (e.g. <cit.>).

NSCs consist of a mixed stellar population in terms of age, metallicity, etc (e.g. <cit.>). This complexity in stellar population has complicated the search for NSC formation mechanisms. The young and metal-rich stars suggest an in-situ formation scenario, where local star formation is triggered by the inflow of gas induced by various mechanisms, such as bar-driven infall <cit.> and the action of instabilities <cit.>. The fact that this young stellar population is usually flattened, rotating, and concentrated at the center of NSCs <cit.> also favors the in-situ formation scenario. On the other hand, the old and metal-poor population can naturally arise from massive globular clusters (GCs) which migrate into the galaxy center due to dynamical friction. This was in fact the first proposed NSC formation scenario <cit.>, put forward one year after the ground-breaking observation of the NSC in M31 <cit.>. Observationally, this scenario is also supported by evidence such as the deficit of massive GCs <cit.> and the nucleation fraction tracing the fraction of galaxies that have GCs <cit.>.

Despite observational motivation for both NSC formation mechanisms, a comprehensive modeling is lacking. For in-situ formation, direct simulation is challenging.
There are limited studies focusing on different aspects of the process, such as gas inflow<cit.>, momentum feedback and self-regulation <cit.>, and stellar two-body relaxation <cit.>. Ex-situ formation was more extensively studied (e.g. <cit.>). In particular, <cit.> used direct n-body simulation to study the final parsec-scale evolution of spiraled-in GCs and resulting NSC morphological and kinematic properties. The general picture has been well-established, but previous studies adopted crude treatments of the initial conditions of GCs due to a lack of knowledge on their formation. Besides, the evolution of the old GC systems is closely correlated with the evolution of the host galaxy, which hasn't been taken good care of in previous studies. Up to today, it is still uncertain the roles of in-situ and ex-situ formation mechanisms (e.g. <cit.>), with ongoing debates on topics such as a potential transition between the two mechanisms indicated by the NSC mass or galaxy stellar masses <cit.>. In this study, we focus on the ex-situ formation of NSCs, as the infall of GCs might contribute to another unsolved problem. A diffuse γ-ray excess has been observed at the centers of the Milky Way (MW) and Andromeda (M31) galaxies <cit.>. These excesses exhibit spherical symmetry and extend over a few parsecs, with a peak energy around a few GeV. Possible origins of these excesses are debated primarily over DM (e.g. <cit.>) and millisecond pulsars (MSPs) (e.g. <cit.>. While the spatial distribution of DM is already expected to peak at galaxy centers (e.g. <cit.>), MSPs are not commonly observed there due to difficulties in resolving individual γ-ray sources. However, MSPs are abundant in GCs, with much more observed number per unit mass compared to the galaxy field <cit.>. This is due to the fact that MSPs are believed to originate from low-mass X-ray binaries (LMXBs), where the neutron star gets spun up through mass transfer from the companion star <cit.>. Thus, the high stellar density of GCs provide the desirable environment for both primordial binary formation and dynamical encounter. Galaxy center MSPs can originate from GCs that have migrated in and tidally dissolved.On the other hand, the central bulge region of galaxies is also a dense stellar environment, although less dense by an order of magnitude than most GCs <cit.>. Thus, MSPs could in principle form in-situ as well. However, recent studies using scaling relations to probe in-situ MSP luminosity cast doubts on this mechanism as the sole origin of the γ-ray excess. The Galactic center excess (GCE) has been examined by <cit.> and <cit.>, who pointed out that LMXBs are too rarely observed in the MW bulge that in-situ MSPs can account for less than a quarter of the excess luminosity. For M31, where ∼ 10^4 LMXBs are needed to explain the excess, less than 80 were detected within the inner 12 arcmin (∼ 2.7 kpc) <cit.>. Consequently, it is crucial to explore the contribution of ex-situ MSPs from GCs.From the previous analysis, we see that the evolution of GCs inevitably contributes to NSC formation and the γ-ray excess, and it is important to find out the extent of its contributions. However, the formation of GCs still remains highly uncertain. Many previous studies (e.g. <cit.>) assumed that all GCs formed at a single redshift, which only serves as a primitive approximation. 
To achieve a more reliable modeling of GC evolution and mass deposition, a better formulation of GC formation is needed.In this paper, we use a new semi-analytic model of GC formation and evolution to study its contribution to the NSC formation and galaxy center γ-ray excess in galaxies similar to the MW and M31. The model adopts the GC formation scenario by <cit.>, where GC formation was triggered by periods of rapid mass accretion onto the host galaxy across its assembly history, typically triggered by major galactic mergers. To obtain realistic galaxy merging histories, results from the Illustris cosmological simulation are used, and GCs are sampled at qualified simulation snapshots. After formation, the GC population is subject to orbital migration and mass loss depending on their mass and galactocentric distance. In addition, we also model an evolving background potential according to the galaxy assembly history. Through this new model, we hope to enhance our understanding of the connection between GCs,the NSC and galaxy center γ-ray excess .We arrange this paper as follows. We introduce our modeling of GC formation and evolution in Section <ref>. In it we also show how to account for the galaxy center MSP luminosity at z=0. The calculation of halo parameters from Illustris outputs is introduced in the Appendix <ref>. In Section <ref>, we present our model predictions of GC properties, the NSC mass and γ-ray emission by MSPs at z=0. As we can retrieve from Illustris a collection of halos of similar masses but with different assembly histories, we discuss the moderation effect of assembly history in Section <ref>. As our γ-ray luminosity prediction for the M31 falls short to observation, we also discuss alternative explanations. Caveats of our study are listed and discussed as well. Finally in Section <ref>, we summarize important results and suggest future work.§ METHODS In this section, we present our semi-analytical model for the formation and evolution of GCs in the framework of hierarchical structure formation. Then we describe the calculation of the γ-ray luminosity at z=0 from deposited MSPs.§.§ Formation of GCs in cosmological simulations We introduce our modeling of GC formation in terms of formation times, initial masses and spatial distribution. In modeling GC formation times, we improve on the simple prescription by previous studies (e.g. <cit.>) that GCs formed at a single redshift. While this assumption is partly justified due to the old ages of most GCs, it is recently recognized that GC formation covers a wide range of cosmic time with diverse formation histories (<cit.> and references therein). Therefore, a more physically-motivated GC formation model is desired. Fortunately, over the past decades, our understanding of the origin of GCs in the framework of hierarchical structure formation has been revolutionized. Here, we adopt the GC formation model <cit.> (hereafter CGL), which was built upon earlier works by <cit.> and <cit.>. The CGL model assumes that GC formation was triggered by periods of rapid mass accretion onto the host galaxy, typically during major mergers. This idea was motivated by multiple reasons such as more observed young massive clusters in interacting galaxies (e.g. <cit.>), earlier formation times of GCs than the field stars and that galactic mergers were more frequent at high redshifts <cit.>, and that galactic mergers are able to induce the high densities and pressures desired for cluster formation (e.g. <cit.>). 
In the CGL model, GC formation is painted onto the halo merger trees of the Illustris simulation, which captures the evolution of halo properties from z=47 to 0 <cit.>. GC formation is triggered when the specific halo mass accretion rate exceeds a threshold value, which is a tunable parameter. The total GC mass is mapped from the halo mass through observed stellar mass-halo mass relation (SMHM) <cit.>, the stellar mass-gas mass relation <cit.>, and GC mass fraction from total gas mass (the second tunable parameter). Once the total GC mass at the epochs of formation is fixed, individual GC masses are sampled from a power -2 mass function observed from young massive clusters <cit.>. It is encouraging to note that this simple two-parameter model successfully reproduced several key observed properties of GC populations, such as metallicity bimodality, GC mass-halo mass relation, etc. This serves as a much improved model of GC formation for the study of NSCs in galaxies with different assembly histories.As GCs form at multiple epochs across the galaxy assembly history, their initial spatial distribution varies and influences their subsequent orbital evolution. Thus, we need to carefully take care of this issue at each formation epoch. For GCs formed inside the galaxy ('in-situ'), we assume that, similar to normal star formation, GCs follow similar spatial distribution of cold gas during their formation. As gas does not exhibit a bulge structure, we adopt the continuous spherical Sérsic density distribution, based on which GCs are assigned to specific galacto-centric radii. On the other hand, for GCs formed inside satellite galaxies and were brought in via galactic mergers ('ex-situ'), we lack detailed knowledge of the stellar dynamics during galactic mergers. As a simplified treatment, we place these ex-situ GCs at half the virial radius of the descendant halo. This shouldn't affect our GC distribution at the galaxy center at z=0, though, because these massive GCs carry large angular momenta relative to the descendant halo, which are highly unlikely to be sufficiently reduced by dynamical friction from the ambient stellar density.The Sérsic spatial density distribution was proposed by <cit.> to match the well-established Sérsic surface brightness profile <cit.>. The spherical density distribution is given by:ρ(r)=ρ_0 (r/R_e)^-p e^-b (r/R_e)^1/N_s,Here ρ_0 is a normalization, R_e is the effective radius of the galactic disk, N_s is the concentration index. The term b is a function of N_s to ensure half of the projected light is contained within R_e, and can be well approximated analytically by b=2N_s-1/3+0.009876/N_s for 0.5<N_s<10<cit.>. The form of p is adopted from <cit.> as p=1.0-0.6097/N_s+0.05563/N_s^2 for 0.6<N_s<10 and 10^-2≤r/R_e≤10^3 to match the Sérsic surface brightness profile <cit.>. The effective radius R_e is related to the virial radius of the halo, R_ vir, and halo spin parameter, λ, by assuming a classical galactic disk formation model (e.g. <cit.>):R_e = λ R_ vir/√(2).Since we do not have the information of the total energy of the dark matter halo to estimate the traditional spin parameter defined in <cit.>, we instead use an alternative definition λ_ B=j_ sp/√(2)R_ virV_ vir that only requires the specific angular momentum and virial velocity V_ vir (e.g. <cit.>). 
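To make the sampling steps above concrete, the sketch below illustrates one possible implementation: flagging formation epochs where the specific halo accretion rate exceeds the tunable threshold, drawing individual cluster masses from the power −2 initial mass function by inverse-CDF sampling, and assigning in-situ galactocentric radii by numerically inverting the enclosed-mass fraction of the Sérsic profile above. This is a minimal sketch rather than our production code: the function names are ours, and the threshold, the mass limits m_min and m_max, the grid resolution, and the example numbers at the bottom (spin parameter, virial radius, mass budget) are illustrative assumptions.

```python
import numpy as np

def formation_epochs(t, m_halo, threshold):
    """Flag snapshots where the specific halo accretion rate
    (dM/dt)/M exceeds the tunable CGL threshold."""
    rate = np.gradient(m_halo, t) / m_halo
    return rate > threshold

def sample_gc_masses(m_gc_total, rng, m_min=1e4, m_max=1e7):
    """Inverse-CDF sampling of dN/dm ~ m^-2 between m_min and m_max,
    drawing clusters until the episode's GC mass budget is filled."""
    masses = []
    while sum(masses) < m_gc_total:
        u = rng.random()
        masses.append(1.0 / (1.0 / m_min - u * (1.0 / m_min - 1.0 / m_max)))
    return np.array(masses)

def sersic_b(n_s):
    # Analytic approximation, valid for 0.5 < n_s < 10
    return 2.0 * n_s - 1.0 / 3.0 + 0.009876 / n_s

def sersic_p(n_s):
    # Fit valid for 0.6 < n_s < 10 and 1e-2 <= r/R_e <= 1e3
    return 1.0 - 0.6097 / n_s + 0.05563 / n_s**2

def sample_sersic_radii(n_gc, n_s, r_e, rng):
    """Assign galactocentric radii by numerically inverting the
    enclosed-mass fraction of the deprojected Sersic profile."""
    b, p = sersic_b(n_s), sersic_p(n_s)
    x = np.logspace(-3, 3, 4000)                     # r / R_e grid
    dm = x**(2.0 - p) * np.exp(-b * x**(1.0 / n_s)) * np.gradient(x)
    cdf = np.cumsum(dm) / np.sum(dm)
    return np.interp(rng.random(n_gc), cdf, x) * r_e

# Illustrative example: one formation episode in a halo with
# R_vir = 200 kpc and lambda_B = 0.035 (assumed values)
rng = np.random.default_rng(42)
r_e = 0.035 * 200.0 / np.sqrt(2.0)      # R_e = lambda R_vir / sqrt(2)
masses = sample_gc_masses(5e6, rng)     # 5e6 Msun of new GCs (assumed)
radii = sample_sersic_radii(len(masses), 2.0, r_e, rng)
```

Note that the last sampled cluster may slightly overshoot the mass budget of the episode; in practice the total-mass normalization would be imposed more carefully.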
Both M_ vir and j_ sp are provided at snapshots of the Illustris simulation, and v_ vir and r_ vir are calculated accordingly as explained in the Appendix.Regarding the concentration index N_s, larger values were conventionally associated with higher concentration, but we found this to be misleading. In Fig. <ref>, we present the cumulative mass fraction distribution versus normalized radial distance for different N_s. We see that larger N_s does exhibit higher concentration in the inner region. However, beyond the crossing point at r⪆ R_e ( which we refer to as Sérsic crossing hereafter), they reach saturation much slower than curves with smaller N_s. Consequently, while larger N_s values indicate a more peaked distribution, they also indicate a greater spread. In our subsequent studies, we investigate the influence of different N_s values on our results within the range of 0.5 to 4, with increments of 0.5. As a fiducial model, we select N_s=2 following the work of <cit.>. To study the GC system in the MW and M31, we select Illustris halos with similar masses at z=0. For the MW, we adopt the findings by <cit.>, who estimated a virial mass of 1.01±0.24× 10^12M_⊙. Among the Illustris halos, 1099 fall within this range. For M31, there are significant uncertainties, therefore we adopt a wider range of 0.7-2.5 × 10^12M_⊙ based on the review by <cit.>. Accordingly, 2029 halos were selected, including those selected for the MW. §.§ Dynamical evolution of GCs After birth, GCs migrate towards the galactic center due to dynamical friction while depositing masses due to stellar evolution and tidal stripping along the way. In situations where they have migrated into the innermost region, the tidal force can be so strong that GCs get completely torn apart <cit.>. Thus, GCs that make all the way into the inner a few parsecs can build up NSC and contribute to the MSP populations there. Wedirectly adopt the analytical prescriptions of <cit.> in modeling: 1) orbital migration, 2) tidal stripping, and 3) direct tidal disruption. These prescriptions include corrections to parameters originally proposed by <cit.> and <cit.>. Below, we describe two improvements we made in this work. The first improvement is an updated prescription of the mass loss due to stellar evolution, which is obtained from the stellar population synthesis model FSPS <cit.>. This allows us to account for the evolving nature of stars within GCs more accurately. The second improvement is modeling the time-varying gravitational potential by extracting the time-evolution of the dark matter halos from the Illustris simulation. For any cosmic time, we linearly interpolate the halo mass and spin between adjacent Illustris snapshots, and calculate galactic structural parameters accordingly. The overall gravitational potential comprises three components: the dark halo, the stellar component, and modeled GCs. The dark halo is described using the NFW distribution <cit.>, and the calculations for determining halo parameters are introduced in the Appendix <ref>. The stellar component is modeled with a Sérsic distribution, as explained earlier. The mass of modeled GCs is also included in calculating the overall potential. For cosmological parameters, we adopt a flat ΛCDM model with h=0.704 and Ω_ m,0=0.2726, consistent with the Illustris simulation.To improve the efficiency of our simulation, we implement sub-cycling in the evolution of GCs. We divide the entire time span into 100 sections, each characterized by a constant background potential. 
To evolve individual GCs, we first calculate their evolution timestep dt_i being the smaller of the tidal and orbital evolution timescales multiplied by a fraction, ts_m and ts_r, respectively. Then, a GC with the smallest value of t_i+dt_i means that it evolves the fastest, and exerts the strongest influence on other GCs. Thus, we find and evolve such GC at each step, until all GCs cross the current time span. Meanwhile, we found that it is usually the first step of calculated dt_i that is overestimating. To efficiently address this, we introduce a maximum cutoff timestep dt_max on top of the ts factors. By testing single GCs with different masses and galacto-centric positions, we optimize the values of dt_max, ts_m and ts_r together. We find that dt_max=0.01Gyr and ts_m=ts_r=0.2 strike a balance between efficiency and accuracy.For calculating the change in GC mass and galactocentric distance in each step, we employ the Runge-Kutta 4th order method. This method offers a significant speed improvement of approximately 20 times compared to the 1st order Euler method. §.§ MSPs from GCs MSPs are deposited by GCs due to tidal stripping and disruption, thus they can contribute to the unresolved γ-ray excess. In calculating the γ-ray luminosity at z=0 from MSPs, we follow <cit.> who set the total γ-ray luminosity of deposited MSPs equal to that of the debris of the GC, using luminosity-mass relation of GCs fitted to observations. To account for uncertainties in this fitting, they also used a constant luminosity-mass relation for comparison, and labeled it as the model 'C'. The model with fitted luminosity-mass relation was labeled 'EQ'.Then, individual MSP luminosity was sampled according to the observed MSP luminosity function. To account for MSP spin-down due to the loss of rotational energy via magnetic dipole braking, <cit.> use two models of the spin-down timescale: a Gaussian distribution model (GAU) and a log-normal distribution model (LON).In our study, we follow their prescribed methodology and calculate the γ-ray luminosity at redshift z=0 using the four models, namely GAU-C, GAU-EQ, LON-C, and LON-EQ.§ RESULTS In this section, we present our results on the properties of GCs at z=0 for MW and M31-like galaxies, and compare with observations. We then show the predicted mass of the NSC and γ-ray luminosity distribution from MSPs. We also explore how the above quantities vary with different galaxy assembly histories. §.§ GC-halo mass scaling relation One of the most striking observations of GCs is a linear correlation between the mass of the GC system and its host halo <cit.>. The mass ratio is approximately 10^-5 across 5 orders of magnitude <cit.>. Therefore, it is crucial for our model to reproduces this scaling relation.Fig. <ref> shows our model prediction of the scaling relation for the MW and M31-like halos combined for different N_ s. We see that the runs with smaller N_ s tend to produce lower mass in the GC system at z=0. This is in line with our findings in Fig. <ref>, that a smaller N_s corresponds to a smaller spatial spread of formed GCs, which results in more mass loss due to stronger tidal effect. Nevertheless, the changes of total GC mass in different N_ s is very small and all choices of N_ s produce the GC mass in reasonably good agreement with observations. Consequently, our model successfully reproduces the GC-halo mass scaling relation. 
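Before moving on to the spatial properties, it may help to make the sub-cycling integration scheme of the Methods section concrete. The sketch below is a toy illustration, not our production code: the GC container, the power-law timescale stand-ins t_tid and t_df, their numerical coefficients, and the disruption threshold are all assumptions introduced here for illustration (the model itself uses the corrected analytic prescriptions discussed above). The scheduling logic, which always advances the cluster whose next update time t_i + dt_i is earliest, with dt_i capped by dt_max = 0.01 Gyr and the fractions ts_m = ts_r = 0.2, mirrors the scheme described there.

```python
import heapq
import itertools
from dataclasses import dataclass

@dataclass
class GC:
    m: float   # cluster mass [Msun]
    r: float   # galactocentric radius [kpc]
    t: float   # current time within the section [Gyr]

# Toy stand-ins for the tidal-disruption and dynamical-friction
# timescales; coefficients and scalings are illustrative only.
def t_tid(m, r):
    return 10.0 * (m / 2e5) ** (2.0 / 3.0) * max(r, 1e-3)

def t_df(m, r):
    return 0.45 * max(r, 1e-3) ** 2 * (2e5 / m)

def rhs(y):
    m, r = y
    return (-m / t_tid(m, r), -r / t_df(m, r))

def rk4(y, dt):
    # One 4th-order Runge-Kutta update of y = (m, r)
    k1 = rhs(y)
    k2 = rhs((y[0] + 0.5 * dt * k1[0], y[1] + 0.5 * dt * k1[1]))
    k3 = rhs((y[0] + 0.5 * dt * k2[0], y[1] + 0.5 * dt * k2[1]))
    k4 = rhs((y[0] + dt * k3[0], y[1] + dt * k3[1]))
    return tuple(y[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2))

def step_size(gc, dt_max=0.01, ts_m=0.2, ts_r=0.2):
    return min(ts_m * t_tid(gc.m, gc.r), ts_r * t_df(gc.m, gc.r), dt_max)

def evolve_section(gcs, t_end):
    """Advance all GCs to t_end (one fixed-potential section),
    always stepping the fastest-evolving cluster first."""
    tie = itertools.count()        # tie-breaker so the heap never compares GCs
    heap = [(gc.t + step_size(gc), next(tie), gc) for gc in gcs]
    heapq.heapify(heap)
    while heap:
        t_next, _, gc = heapq.heappop(heap)
        dt = min(t_next, t_end) - gc.t          # clip at the section boundary
        gc.m, gc.r = rk4((gc.m, gc.r), dt)
        gc.t += dt
        if gc.t < t_end and gc.m > 100.0:       # toy disruption threshold
            heapq.heappush(heap, (gc.t + step_size(gc), next(tie), gc))

gcs = [GC(m=1e5, r=3.0, t=0.0), GC(m=1e6, r=1.0, t=0.0)]
evolve_section(gcs, t_end=0.14)    # one of ~100 sections of roughly 0.14 Gyr
```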
§.§ Spatial distribution of GC number density In this section, we closely examine the spatial distribution of GCs in their number density at z=0. Fig. <ref> shows the average trend for MW and M31 combined, as we have checked that they are barely distinguishable. When we compare the initial and final GC distributions, we notice that the number density outside ≈3 kpc barely decreases. This can be attributed to the fact that beyond this position, most GCs with a mass of approximately 10^5 M_⊙ have tidal and migration timescales longer than a Hubble time. In addition, stellar evolution alone cannot exhaust a GC. When we investigate the effect of different N_ s values, it turns out that this made minimal difference for both the initial and final GC distributions, except for the innermost region. In the initial distribution, the typical Sérsic crossing is only observed ≈100 pc, since GC formation across cosmic times overlaps in the outer regions as the halo grows, erasing outer crossings. Our inclusion of ex-situ GCs also contributes to this effect. In the final distribution, although the difference between N_ s values remains minimal, the trend is opposite to that observed in the initial distribution. Larger N_ s values within the Sérsic crossing lead to stronger tidal disruption, resulting in fewer surviving GCs and a smaller number of GCs in the final distribution. Comparing our results with observations, we found a minor overshoot within ≈ 5 kpc for the MW. However, individual halos exhibit a spread around the average distribution, and several candidate halos demonstrate conformity to the observation. We present one such candidate halo in Fig. <ref>. Therefore, our model can be considered an acceptable fit to the observed spatial distribution of MW and M31 GCs. Additionally, we observed no preference for N_ s, so we will keep N_ s=2 as the fiducial choice.To further analyze the GC population, we utilize the orbital information in the <cit.> catalogue to distinguish between the in-situ and ex-situ sub-populations of MW GCS. In order to do so, we follow <cit.>, who analyzed the dynamical differences of the two branches of GCs on the age-metallicity plot for 151 MW GCs. Based on their distinct features, they assigned 62 GCs as having formed in-situ, which comprise of bulge GCs and disk GCs. The former are defined as having apocenter distance less than 3.5 kpc, while the latter as having orbital altitude less than 5 kpc and circularity greater than 0.5. For our GC catalogue, we calculate the orbital and potential parameters using the Python package Galpot by <cit.>. Since our treatment of ex-situ GCs is rudimentary, we focus on in-situ GCs for now.In Fig. <ref>, we plot the number density distributions of MW in-situ GCs in a similar manner to Fig. <ref>. We can observe that most of the observations made for the overall GC distribution also apply to the in-situ GC distribution, except for 2 differences. First, in-situ GCs exhibit a more concentrated distribution, with a significant drop in their numbers towards the outskirts. This suggests that in-situ GCs preferentially populate the central regions of the MW-like halos. Second, without the contribution from the ex-situ population, the dispersion between different values of N_s becomes more pronounced in the outskirts, and we can observe the outer portion of the Sérsic crossing. 
Nevertheless, we can regard our model as satisfactorily reproducing the observed GC number density distribution, both overall and in-situ, and N_s=2 as a reasonable choice for the concentration parameter. One advantage of utilizing the halo catalog extracted from Illustris is the ability to investigate the influence of different assembly histories on halos with similar masses. To parameterize the halo assembly histories, we employ the concept of half mass redshift, denoted as z_ hm, which represents the redshift at which the halo acquired half of its present mass. In Fig. <ref>, we show the distribution of z_ hm values. The histogram exhibits a log-normal shape centered around z∼1.2, corresponding to a look-back time of approximately 8 billion years, with a tail extending towards higher redshifts. We are interested in whether z_ hm has any discernible effect on the spatial distribution of GCs. Fig. <ref> illustrates the GC number density distribution with different z_ hm for MW and M31-like galaxies. When we look at the influence of z_ hm for initial GCs, galaxies that formed earlier (referred to as EFGs, earlier formed galaxies) tend to exhibit more concentrated distributions of GCs compared to halos that formed later (referred to as LFGs, later formed galaxies). This trend can be attributed to the higher merger rate of halos at larger redshifts, as indicated by previous studies <cit.>. Consequently, EFGs experience more GC formation events, starting from smaller halo sizes, while LFGs undergo fewer GC formation events, each resulting in significant growth of halo mass and size and leading to a more spread-out distribution of formed GCs.In the case of final GCs, however, the impact of z_ hm is less pronounced. This can be understood, as more concentrated GCs also experience stronger tidal disruption and dynamical friction, which reduces their numbers. Additionally, as these GCs formed earlier, these disruptive processes act over a longer period. Consequently, the final GC number density distribution shows minimal traces of the galaxy assembly history.When comparing our results with observations, as the influence of z_ hm on the final GC distribution is not significant, the results are similar to those of Fig. <ref>.§.§ GC mass function Besides the spatial distribution of GCs, their mass function at z=0 is also an important property to compare with observations. Previous studies using the same GC formation model <cit.> have carried out comprehensive analyses, confirming a transformation from the initial power-law mass function to a log-normal shape at z=0 that agrees with observations. Therefore, in this study we focus on the in-situ GC population. In Fig. <ref> we show the median mass function of candidate MW-like halos for different N_s, and overplot observation results. We see that our model also shows a log-normal shape, although with its peak shifted towards smaller GC masses compared with observations. This can be due to the incompleteness in observed low mass GCs. Nevertheless, the shaded region corresponding to N_s=2 can cover most of the observation trend. Indeed, candidate halos can fit the observation quite well, as illustrated by Fig. <ref>. Thus, our model provides a good fit to the observed in-situ GC mass function.We also observe that N_s results in more noticeable discrepancies in the predicted mass function above ∼ 10^5 M_⊙, while at lower masses, the influence of different N_s is flipped and less prominent. 
Larger N_s leads to slightly fewer light GCs but more heavy ones, and vice versa. This is because light GCs experience only weak dynamical friction and thus barely migrate inward: their disruption is mostly determined by their position at birth. As we have observed in Fig. <ref>, larger N_s values have a peaked density at the galaxy center, which leads to prominent disruption of the lighter GCs that initially formed in that region. On the other hand, as the overall distribution is more spread out, heavier GCs have a longer journey to migrate to the galaxy center and are subject to a weaker tidal effect. Thus, such a distribution leads to slightly stronger disruption of light GCs formed in the innermost region, but weaker disruption of heavier ones.

§.§ NSC mass

In this section, we examine the NSC mass contributed by GCs as they spiral in. On average, ∼80 and 100 GCs migrated into the NSC of the MW and M31 respectively, with smaller N_s corresponding to a few more GCs. The standard deviation is ∼30 and 40 respectively. In Fig. <ref>, we compare the deposited GC masses with the observed NSC mass at z=0. Unlike the number density distribution, the cumulative mass distribution of initial GCs exhibits a prominent Sérsic crossing in the inner region, because the range of the y axis is much smaller in this case. Regarding the deposited mass, we observe that GCs with smaller values of N_s contribute more to the total deposited mass, which aligns with the trend we observed in the GC-halo mass scaling relation discussed in Section <ref>. Regardless of the influence of N_s, the deposited mass is predominantly confined to the central regions. This contrasts with the results presented by <cit.>, where a plateau is established starting from 4 pc outward and continues to rise prominently up to 10 kpc. The disparity arises because our GC formation occurs throughout the entire assembly history of the halo, resulting in a more concentrated distribution of GCs, as shown in Fig. <ref>. Consequently, GCs are more susceptible to significant tidal effects and tend to deposit mass towards the galaxy center.

While the average deposited mass exceeds the NSC mass, it is important to note that different galaxy assembly histories yield substantial variations, as depicted in Fig. <ref>. As discussed in the preceding section, EFGs give rise to more GCs, which experience stronger and more prolonged tidal disruption. Consequently, halos with larger z_hm have GCs depositing more mass towards the center. Conversely, LFGs exhibit less deposited mass, implying that our MW NSC plausibly originates from a halo with z_hm∼1.

§.§ Gamma ray luminosity

As our GC fittings turn out consistent with observations, we move forward with N_s=2 to check the spatial distribution of the luminosity of deposited MSPs. As different authors analyzed the γ-ray excess data with different methods, we present our results in both differential and cumulative distributions. Fig. <ref> shows our model prediction of the differential flux distribution. Notably, the different models yield similar results, and although they generally underestimate the excess flux, the overall shape is consistent. Consequently, we select the GAU-EQ model as the best choice and plot it for different z_hm intervals in Fig. <ref>. We observe that EFGs exhibit relatively higher flux emission, as their GCs deposit more MSPs. Halos with z_hm≳0.7 provide a good fit to the observations. Note that for clarity purposes, the color code here does not precisely match the colorbar in Fig. <ref>.
With that in mind, we notice that the results are compatible, indicating that a halo with z_ hm≳0.7 can successfully reproduce both the NSC mass and the spatial distribution of the γ-ray differential flux emmision.For the cumulative flux distribution, Fig. <ref> illustrates the GAU-EQ model for different z_ hm. We can observe that our model also shows a consistent shape with the observations, which appears better than the results by <cit.> as we exhibit more flux in the innermost region. This arise from our GC formation occurring throughout the entire assembly history of the halo. As a result, GCs started their evolution closer to the galaxy center. Once again, halos with z_ hm≳0.7 provide a good fit. And consequently, it is plausible that the GCE arose solely from deposited MSPs. Having examined the GCE, we proceed to the M31 excess. Due to the lack of detailed analysis regarding the excess spatial distribution, we solely compare the cumulative luminosity at 6 kpc. Fig. <ref> illustrates the cumulative luminosity distribution for the four MSP luminosity models. The models exhibit notable discrepancies, particularly in the innermost region, which diminish as we move towards the outskirts and nearly vanish at 6 kpc. This discrepancy stems from the fact that the EQ models employ a fitted relation, where log(L_γ/m_GC) is negatively proportional to log(m_GC). Consequently, heavier GCs are dimmer compared to the C models that utilize a constant luminosity-mass ratio. Since heavier GCs are more susceptible to dynamical friction and tidal disruption, they primarily contribute to the γ-ray emission in the innermost regions. As we move farther away from the galaxy center, lighter GCs become more dominant, which reduces the discrepancy. Nevertheless, all models converge and fall short of matching the excess signal at 6 kpc. Additionally, the highest luminosity among individual halos only reaches approximately 7×10^37 erg/s, which is less than one-third of the excess luminosity. Therefore, it is unlikely that MSPs alone account for the M31 excess. This suggests that the two galaxies have distinct origins of the excess emissions despite their similar masses. § DISCUSSION§.§ Galaxy assembly historyWe have observed in preceding sections that galaxy assembly history has clear influences on the GC and NSC properties. We shall take a systematic look in this section. In Fig. <ref> we illustrate the correlations between z_ hm and four important masses: the halo mass, NSC mass, total initial GC mass and GC mass at z=0. The halo mass does not correlate with z_ hm since halos of any mass can assemble early or late. However, the density of data points varies across z_ hm, consistent with the log-normal distribution presented in Fig. <ref>. And we have demonstrated previously that EFGs give rise to more GCs, resulting in a positive linear trend between the initial GC masses and z_ hm. We have also showed that EFGs have GCs distributed closer to the galaxy center, making them susceptible to stronger and prolonged tidal disruption. Consequently, they contribute a greater amount of mass to the NSC, leading to a positive linear trend between the NSC mass and z_ hm. Thus, the NSC mass serves as a good indicator in breaking the degeneracy to infer the galaxy assembly history. Larger NSC masses indicate relatively earlier accumulation of the halo mass, and vice versa. However, it is intriguing that z_ hm appears to provide little information about the final GC mass. 
This suggests a lack of correlation between initial and final GC masses as well, contrary to what one might expect. To verify this, we show in Fig. <ref> the initial and final GC masses for all MW-like halos, with colors denoting z_hm. A positive trend does exist between initial and final GC masses, although there is relatively large dispersion, particularly at smaller final GC masses. If we trace z_hm across different initial GC masses, larger GC masses do correspond to larger z_hm. However, if we examine final GC masses, each value is associated with a large spread of z_hm values. This is because the earlier-formed, larger GC populations also suffer stronger disruption, potentially resulting in smaller final masses at z=0. Consequently, small final GC masses can arise from either small initial masses or earlier-formed large initial masses. This breaks the correlation between final GC masses and galaxy assembly history.

We are also interested in correlations associated with the NSC mass, a highly significant outcome of our GC model. Fig. <ref> shows the NSC mass plotted against the halo mass, initial GC mass, and final GC mass. As previously demonstrated in Fig. <ref>, the NSC mass is positively correlated with z_hm, which has no relation to the halo mass. It is therefore not surprising that the NSC mass does not correlate with the halo mass: it is the earlier assembly of the halo that gives rise to a heavier NSC, rather than the mass of the halo itself. However, it is worth noting that this observation may change when examining a broader range of halo masses, which will be investigated in our future studies.

In the middle panel of Fig. <ref>, a strong positive trend is observed between the mass of initial GCs and the NSC mass. Not only do GCs serve as the fuel for the build-up of the NSC, but larger initial GC masses also correlate with larger z_hm values, indicating earlier formation and a more concentrated distribution. Collectively, these factors contribute to a larger NSC mass. However, final GC masses exhibit no correlation with the NSC mass. This can be understood, as we have demonstrated in Fig. <ref> that final GC masses do not correlate with z_hm. Each GC mass at z=0 might arise from either an early-formed large GC_i, which builds up a large NSC, or a later-formed lighter GC_i, which contributes little to the NSC mass. Thus, the final GC mass does not serve as an indicator of the NSC mass.

In summary, we presented in this section various correlations associated with the galaxy assembly history. We found that z_hm, the total initial GC mass and the NSC mass are correlated, because EFGs give rise to an old, heavy and concentrated GC system which contributes a larger amount of mass to the NSC. Thus, any one of z_hm, the initial GC mass and the NSC mass serves as an indicator of the other two. One important observation is that we could utilize the NSC mass to infer knowledge of the galaxy assembly history. Intriguingly, the final GC mass is correlated with neither z_hm nor the NSC mass; this stems from the degeneracy in the relation between initial and final GC masses.

§.§ Possible explanations of the M31 gamma-ray excess

While our results fall short of the M31 excess by an order of magnitude on average, the best candidate halos reach approximately one third of the signal. Nevertheless, even with in-situ MSPs combined, the MSP channel alone is unable to fully explain the M31 excess.
Thus, it is evident that a contribution from DM is necessary. Upon the first report on the detection of the M31 excess, <cit.> have brought up the possible explanation of DM. A primitive estimate inferred from a DM-origin GCE results in a flux deficit by 5 times, though the level of uncertainty was high.Subsequent investigations claimed to match the excess luminosity, but commonly identified tensions with observational constraints, such as the under-detection of DM emission in MW dwarf galaxies <cit.> and a lack of DM radio emission for M31 <cit.>. Additionally, <cit.> found that the two preferred DM annihilation channels for M31, namely bb̅ and an even mixture of bb̅/τ^+τ^-, favor smaller DM masses compared to those suggested by the GCE. Consequently, it was suggested that DM alone does not explain the M31 excess. Combining these findings with our results, it becomes clear that a combination of MSPs and DM offers a promising and potentially inevitable way for explaining the M31 excess. However, the question remains as to why the MW and M31 have different origins for producing such excess emission. §.§ Caveats and future worksIn this section discuss the caveats in our model and improvements to be made in future works.First, although we adopt the analytical expression of tidal disruption from <cit.>, it was based on a static spherical galactic background with circular cluster orbits <cit.>. Our inclusion of the assembly history of galaxies, however, indicates a more complicated galactic background, especially at large redshifts when galactic mergers were more frequent. This complication is twofold. On one hand, the galactic background keeps varying, although in out treatment of linear interpolation, the variation is steady. Thus, the static approximation is not unreasonable. On the other hand, the process of galactic mergers, especially major mergers, perturbs the galactic environment and GC orbits. However, due to the vibrant nature and lack of knowledge on the process, we leave it to future research. The eccentricity of GC orbits also differs from the circularity approximation. However, to accurately capture realistic GC orbital evolution along galaxy assembly histories can be computationally expensive (e.g. <cit.>. We could only treat our method as a time averaged approximation to real eccentric GC orbits. However, as very eccentric orbits usually happens for ex-situ GCs, the approximation does not significantly affect our results on the properties of the NSC. With improving powers of N-body simulation on galactic mergers and dynamical friction, more knowledge will enable us to incorporate these effects into a more comprehensive model.In investigating the formation of NSCs, we didn't take into account the in-situ channel of young stars forming in nuclear regions, despite various observational evidence as mentioned in <ref>. Our model partially takes care of this channel, as the GC spatial distribution at formation can sometimes sample GCs at the center of the host galaxies. Nevertheless, a more systematic investigation is warranted to obtain a thorough picture of NSC formation.Besides fitting the NSC mass, future works will be carried out on investigating other NSC properties, such as the age/metallicity distribution and the internal mass profile of the NSC. 
Besides MW and M31-like galaxies, we will extend our investigation to a broader galaxy mass range and to different galaxy types. There are also caveats in modeling the MSPs, as their distribution and evolution in GCs remain largely unknown. As the stellar density increases towards the GC center, LMXBs and MSPs are expected to peak near the center. There is evidence for this from both observations and simulations (see <cit.> and references therein). Thus, the number of MSPs stripped and deposited into the ambient environment depends on the current size of the GC. Our treatment essentially assumes a uniform distribution of MSPs in GCs, which only serves as a lower limit to the MSP contribution to the galaxy center γ-ray excess. On the other hand, following <cit.>, <cit.> studied the γ-ray excess problem by depositing MSPs only when the GC is fully disrupted. This methodology goes to the other extreme by assuming that all MSPs reside at the very center. Further knowledge on the distribution of MSPs inside GCs will enable us to better evaluate their γ-ray contribution to the galaxy center. Furthermore, we follow <cit.> in assuming that the effect of new MSP formation cancels MSP spindown in GCs, which is somewhat arbitrary. More knowledge is required to properly evaluate these competing factors. § CONCLUSIONS In this study, we have presented a comprehensive model of GC formation and evolution, based on the premise that GCs primarily form following periods of rapid halo mass accretion. Leveraging the results from the Illustris cosmological simulation, we sample GCs across the galaxy assembly history and simulate their subsequent evolution, accounting for the mass loss and radial migration within an evolving galactic background. Our model successfully reproduces key observations of the MW and M31 GC systems at z=0, including the mass scaling of the total GC system with the host halo, and the spatial distribution of GC number density. For the MW, we also reproduced the spatial distribution of the in-situ subpopulation. With this model at hand, we investigate the spatial distribution of deposited masses of migrated GCs to study its link to the formation of the NSC and the galaxy center γ-ray excess. We find that the NSC masses of both the MW and M31 can be reproduced. The detailed spatial distribution of the GCE can also be accounted for entirely by deposited MSPs. However, the M31 excess strength is three times as large as that of our most luminous candidate galaxy. Even factoring in in-situ MSPs born at the galaxy center, the MSP channel still cannot fully account for the excess emission. It becomes evident that DM must play a role in explaining the M31 excess, highlighting a fundamental astrophysical difference between the two galaxies. This constitutes another major difference between them, in addition to their galaxy center SMBHs differing in mass by a factor of about 50. Further investigations are needed to determine the causes of, and possible links between, these differences. Another intriguing aspect we discovered is the influence of the galaxy assembly history on galaxy properties, which we investigated using the halo half-mass redshift z_ hm. Interestingly, we found that it does not correlate with halo mass, but conveys valuable information about the GC system and the NSC mass. Specifically, EFGs with large z_ hm give rise to an old, heavy and concentrated GC population at formation, and vice versa.
This results in more deposited mass from GCs and a heavier NSC, which in turn serves as an informative indicator of the galaxy assembly history. In conclusion, our comprehensive model of GC formation and evolution provides a robust framework for understanding the properties of the GC system, the NSC and galaxy center γ-ray emissions. Additionally, our study unveils the significance of the galaxy assembly history in shaping galaxy properties. However, the need to invoke DM to explain the M31 excess emphasizes the distinct astrophysical origins of these high-energy emissions in the two otherwise similar galaxies. Further investigations are warranted to unravel the precise mechanisms that drive these differences and establish a comprehensive understanding of galaxy formation and evolution. § ACKNOWLEDGEMENT Yuan Gao would like to thank the University of Hong Kong for providing a postgraduate scholarship and essential remote access to online academic resources for conducting this study during the pandemic. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. The datasets were derived from the Illustris simulation results in the public domain: [The Illustris Collaboration, <https://www.illustris-project.org/data/>]. The observation data of the MW and M31 are available via corresponding references in the article. § DETERMINING HALO PARAMETERS The NFW profile is described in <cit.> as: ρ_ NFW(r) = ρ_0 (r/R_s)^-1 (1+r/R_s)^-2. The corresponding potential is given by: Φ_ NFW(r) = -(4πρ_0 R_s^3 G/r) ln(1+r/R_s). Here ρ_0 is normalized by the virial mass M_ vir = 4πρ_0 R_s^3 [ln(1+c) - c/(1+c)], where the halo concentration c = R_ vir/R_s is taken from <cit.> with the analytic form: c = 9.354 (M_ vir h/10^12 M_⊙)^-0.094. The virial radius R_ vir is mapped from M_ vir and z via: R_ vir = 163/[(1+z) h] (M_ vir h/10^12 M_⊙)^1/3 (Δ_ vir/200)^-1/3 Ω_m,0^1/3 kpc. Here Δ_ vir is the average halo over-density at R_ vir, which we take from the spherical collapse model as: Δ_ vir = (18π^2 + 82x - 39x^2)/(x+1), x ≡ Ω_m(z) - 1. In our adopted cosmological model, the evolution of the matter density parameter is given by <cit.>: Ω_m(z) = Ω_m,0 (1+z)^3/[Ω_Λ,0 + Ω_m,0 (1+z)^3].
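For concreteness, the following is a minimal Python sketch of this halo parameter determination. The cosmological parameter values (h = 0.7, Ω_m,0 = 0.3, Ω_Λ,0 = 0.7) and all function names are illustrative assumptions and are not taken from the article.

import numpy as np

# Assumed placeholder cosmology; the article's adopted values may differ.
h, Om0, OL0 = 0.7, 0.3, 0.7

def omega_m(z):
    # Omega_m(z) = Om0 (1+z)^3 / (OL0 + Om0 (1+z)^3)
    a = Om0 * (1.0 + z) ** 3
    return a / (OL0 + a)

def delta_vir(z):
    # Spherical collapse over-density: (18 pi^2 + 82 x - 39 x^2) / (x + 1), x = Omega_m(z) - 1
    x = omega_m(z) - 1.0
    return (18.0 * np.pi ** 2 + 82.0 * x - 39.0 * x ** 2) / (x + 1.0)

def halo_params(M_vir, z):
    # M_vir in solar masses; returns (c, R_vir [kpc], R_s [kpc], rho_0 [Msun / kpc^3]).
    c = 9.354 * (M_vir * h / 1e12) ** (-0.094)
    R_vir = (163.0 / ((1.0 + z) * h)
             * (M_vir * h / 1e12) ** (1.0 / 3.0)
             * (delta_vir(z) / 200.0) ** (-1.0 / 3.0)
             * Om0 ** (1.0 / 3.0))
    R_s = R_vir / c
    # NFW normalization from M_vir = 4 pi rho_0 R_s^3 [ln(1+c) - c/(1+c)]
    rho_0 = M_vir / (4.0 * np.pi * R_s ** 3 * (np.log(1.0 + c) - c / (1.0 + c)))
    return c, R_vir, R_s, rho_0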
"authors": [
"Yuan Gao",
"Hui Li",
"Xiaojia Zhang",
"Meng Su",
"Stephen Chi Yung Ng"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231127190000",
"title": "Globular Clusters Contribute to the Nuclear Star Cluster and Galaxy Center Gamma-Ray Excess, Moderated by Galaxy Assembly History"
} |
M. Günder et al. Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Schloss Birlinghoven, 53757 Sankt Augustin, Germany [email protected] University of Bonn, Institute for Computer Science III, Friedrich-Hirzebruch-Allee 5, 53115 Bonn, Germany RWTH Aachen University, Department of Computer Science, Ahornstraße 55, 52074 Aachen, Germany Model-agnostic Body Part Relevance Assessment for Pedestrian Detection Maurice Günder^1,2 0000-0001-9308-8889 Sneha Banerjee^1,3 0000-0002-9950-2873 Rafet Sifa^1,2 0009-0004-6680-8210 Christian Bauckhage^1,2 0000-0001-6615-2128 January 14, 2024 ============================================================================================================================================================== Model-agnostic explanation methods for deep learning models are flexible regarding usability and availability. However, due to the fact that they can only manipulate input to see changes in output, they suffer from weak performance when used with complex model architectures. For models with large inputs as, for instance, in OD, sampling-based methods like KernelSHAP are inefficient due to many computation-heavy forward passes through the model. In this work, we present a framework for using sampling-based explanation models in a computer vision context by body part relevance assessment for pedestrian detection. Furthermore, we introduce a novel sampling-based method similar to KernelSHAP that shows more robustness for smaller sampling sizes and, thus, is more efficient for explainability analyses on large-scale datasets. § INTRODUCTION Today's deep learning model architectures are more powerful than ever and enable the use of AI in a wide range of application areas. However, with increasing model complexity comes increasing opacity, and model outputs become less (human-)interpretable. Indeed, it is not uncommon for large models to be regarded only as black boxes. This can be particularly problematic in safety-relevant applications such as AD, where AI models drive decisions of autonomous systems that should be trustworthy, reasonable, and explainable. Thus, the field of XAI is of increasing interest. Generally, XAI approaches for the analysis of deep learning models can be categorized into model-specific and model-agnostic methods. While model-specific methods are tailored to the underlying architecture and manipulate the test model in inference and/or training, model-agnostic methods are applied in a post-hoc manner to the test model, i.e., to fully trained models. These methods have the advantage of high flexibility, since models are treated as black boxes and, thus, any model can be analyzed the same way. Hence, the interpretation or explanation results can be compared across model classes or architectures. However, a major drawback of model-agnostic XAI methods is that only the model input can be manipulated to analyze consequential output changes. Therefore, these methods are sampling-based, which leads to a high computational effort for complex models. In AD, the trustworthy recognition of street scenes, especially pedestrians, is of major interest. Contemporary OD models show good performances, but have very different basic architectures and working principles. <cit.> For pedestrian detection, a severe challenge is that pedestrians commonly appear under occlusion, so that OD models must meet high robustness requirements.
<cit.> From an XAI point of view, it is therefore of particular importance to know on which semantic regions a test model bases its decisions for detecting a pedestrian, and that those explanations can be compared regardless of the underlying test model architecture. Hence, model-agnostic explanation models should be considered here. Model-agnostic explanation methods can be further distinguished into global and local explanation methods. Global methods try to explain the model on the data as a whole to interpret the overall performance, whereas local methods try to explain outputs for single data points or instances. TCAV is a method that tests a model for relevant features that are given by example images <cit.>. A CAV then quantifies the extent to which the model was activated by a given concept in a prediction. Those concepts can be, for instance, textures, color schemes, or anything that is describable by a set of example images. Since we want to assess body parts, we do not have clear colors or textures to focus on, and example images of isolated body parts are not available. This is why we do not focus on TCAV in this work. LIME is a method for the local explanation of instances by introducing a surrogate model that is simpler and more interpretable than the typically complex reference model <cit.>. The surrogate model can be chosen arbitrarily, which allows a lot of freedom in modeling, but may result in a surrogate model that is not human-interpretable. SHAP tries to avoid this by defining a special LIME-conform model that fulfills the properties of so-called Shapley values <cit.>. They originate from game theory and are well-defined and theoretically grounded measures of feature contribution to a certain outcome. The KernelSHAP method samples instances of present or absent features and, by observing the model output, uses the so-called Shapley kernel to calculate the contribution of each feature to the output. However, when it comes to OD problems, LIME and SHAP show some shortcomings, as they are based on input sampling. For many machine learning tasks, in contrast to those dealing with image processing, the input dimension and, typically, the model size are rather low, which makes single forward passes through the model quite fast. In image processing or, particularly, OD tasks, the input, i.e., image data, is rather complex and forward passes are computationally heavier. Thus, sampling images causes many forward passes, decelerating the model explanation substantially. Due to the drastically larger input dimension, even more samples are needed to gain meaningful model explanations. Therefore, we need to adapt sampling-based, model-agnostic explanation methods like KernelSHAP to explain the output of pedestrian detection models. § MATERIALS AND METHODS We now shed light on how our approach to model-agnostic body part relevance assessment is structured. Figure <ref> outlines the concept from the input street scene image to the so-called relevance maps. The details about the individual modules shown in this sketch are explained in the next sections. §.§ Superpixel Model In image processing like OD, the input size is typically much larger than for other machine learning tasks. Those large input sizes make sampling-based analyses mostly infeasible due to the large combinatorial space. This is why the input size has to be drastically reduced in order to have efficient sampling. Moreover, the contribution of a single pixel to the actual detection can be considered to be negligibly small.
Thus, a commonly used trick is to summarize a region of image pixels as a so-called superpixel. One way would be a fixed tiling into rectangular or quadratic superpixels, ignoring the actual image content. The other way is to define superpixels by semantic regions with similar texture, color, shape, or, in our case of pedestrian detection, body parts. In contrast to the fixed tiling, the semantic regions usually have different sizes. The KernelSHAP method estimates the attribution of the input features to the output. Thus, we need to parametrize the superpixels by feature values. Our superpixel model, which serves as the explainable surrogate model, should have interpretable feature values. As we want to assess the relevance of body parts to the pedestrian detection, the feature values should represent the degree of information that is visible in the respective superpixel. Therefore, we introduce a presence value π_i for each superpixel i. A value of π_i = 1 means that the i-th superpixel is fully visible, as in the original input image. With decreasing presence value π_i → 0, the superpixel gets increasingly hidden. In this work, we use three methods to hide the information of the superpixel. The first method is to overlay the superpixel with noise sampled from the information of the remaining image by a multivariate normal distribution given by the RGB information. The second method overlays the superpixel with noise sampled from the information of the neighboring superpixel contents. The third method is to remove all superpixels by a content-aware inpainting method implemented in the OpenCV library <cit.>. A presence value of π_i = 0 means that only the overlay is visible in the image, i.e., the superpixel information is completely hidden. Thus, our superpixel model for the body part relevance assessment gets a presence vector π⃗∈[0,1]^k for k visible body parts as an input and samples an image based on this vector. This image is forwarded to the black-box OD model that should be analyzed. Figure <ref> shows our three masking methods in the case of fully hidden body parts, i.e., π⃗ = 0⃗. The typical outputs of an OD model are labels, bbox coordinates, and classification scores. The number of those elements depends on the number of detected objects in the input image. Thus, we need to formalize the detection quality of a distinct pedestrian of interest among multiple possible detections with multiple bboxes and scores. For pure classification, the classification score would be enough, but for OD, it is desirable for a detection quality score to include information about the precision of the bbox as well. Therefore, we calculate the DICE between our ground truth bbox and all detection bboxes, defined by DICE(A,B) = 2|A∩ B|/(|A|+|B|), where A and B are the two bboxes of interest. We identify the correct bbox by the maximum DICE with the ground truth bbox G. To include also the pure classification quality, we multiply this value by the respective classification score c for the detection. Thus, our detection quality q_p of a pedestrian p with detected bounding box P is q_p = DICE(P,G) · c_p. Since DICE and c_p are values in the interval [0,1], it follows that q_p ∈ [0,1]. All in all, we have now wrapped our OD model into a surrogate superpixel model, with an input vector and an output scalar.
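As an illustration, here is a minimal Python sketch of this detection quality computation, assuming axis-aligned boxes given as (x1, y1, x2, y2) tuples; the function names are our own and not part of any established API.

import numpy as np

def dice(a, b):
    # DICE(A, B) = 2|A ∩ B| / (|A| + |B|) for axis-aligned boxes (x1, y1, x2, y2).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    return 2.0 * inter / (area(a) + area(b))

def detection_quality(det_boxes, det_scores, gt_box):
    # q_p = DICE(P, G) * c_p for the detection with maximal DICE overlap;
    # an empty detection list yields a quality of 0.
    if len(det_boxes) == 0:
        return 0.0
    overlaps = [dice(box, gt_box) for box in det_boxes]
    j = int(np.argmax(overlaps))
    return overlaps[j] * det_scores[j]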
§.§ Body Part Segmentation In order to introduce the superpixel model parametrization to our pedestrian detection model, we need to obtain a segmentation of the body parts. For the currently available large-scale pedestrian datasets like CityPersons <cit.> or EuroCity Persons <cit.>, proper body part segmentations are not available. Thus, we utilize BodyPix, a trained model for body segmentation <cit.>. BodyPix enables us to use vast amounts of real-world pedestrian data. However, two major drawbacks remain. The first is that the segmentation quality is rather low if the pedestrian's resolution is low, i.e., for pedestrians appearing far away in the image. The other major drawback is that there is no instance segmentation available, which means that for pedestrian groups or multiple pedestrians in one bbox, we can only access the same body parts of all pedestrians at one time. At least, we can reduce the impact of the resolution problem by focusing our relevance assessment only on the biggest pedestrians, measured by bbox area, in the dataset of interest. By default, BodyPix segments 24 different body parts, including front and back parts. We can simplify our analysis by introducing 3 further mappings, where body parts are unified. We call those mappings abstraction levels, where level 0 is the original BodyPix output. The granularity reduces with ascending level number. The mappings are shown in Figure <ref>. §.§ From Sampling to Local Explanation In KernelSHAP, one first defines an input and a baseline. The input is the instance to explain, so, in our case, the visible pedestrian, i.e., we set π⃗_input = 1⃗ as the input. As the baseline, we set a completely absent or hidden pedestrian, thus π⃗_baseline = 0⃗. The sampling of the binary perturbation is weighted with the Shapley kernel and feature attribution values are calculated using weighted linear regression. <cit.> In this work, we will call those attribution values (body part) relevance scores. As mentioned, KernelSHAP perturbs the instance by masking features, so that all body parts can be absent or present and, hence, it does not consider our still possible partial presences with 0 < π_i < 1. Therefore, we introduce a second, custom sampling and explanation method using continuous sampling and, like KernelSHAP, linear regression to obtain the scores, but without following the Shapley properties. A uniform sampling of the presence values would result in many blended body parts, which is rather unrealistic. This is why we use a distribution that concentrates on values near 0 and 1. One distribution that has this property is the Beta distribution B(x, α, β) = Γ(α+β)/(Γ(α)Γ(β)) x^α-1 (1-x)^β-1 with the so-called concentration coefficients α and β. By deliberately choosing proper values for α and β, we can not only steer the concentration strength towards the boundaries, but also the expectation value. In this work, for instance, we choose α = 0.2 and β = 0.1, resulting in a distribution that is concentrated on its limits at 0 and 1 with a slightly stronger concentration on 1. The expectation value results in an average pedestrian visibility of about 67%. An expectation value above 50% makes sense in our use case since, otherwise, the pedestrian might often not be detected at all, resulting in a detection quality of 0. Thus, if too many generated samples end up in non-recognitions, we do not gain insights into the relevance of body parts and the sampling becomes inefficient. Figure <ref> shows a plot of the presence vector sampling pdf of our custom method.
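A minimal sketch of this Beta sampling, together with the regression-based score and error estimation described here and in the following paragraph, could look as follows; all names and defaults are our own illustrative choices.

import numpy as np

def sample_presence_vectors(n_samples, n_parts, alpha=0.2, beta=0.1, rng=None):
    # Presence values drawn from Beta(alpha, beta): concentrated near 0 and 1,
    # with mean alpha / (alpha + beta) ≈ 0.67 for the values chosen here.
    if rng is None:
        rng = np.random.default_rng()
    return rng.beta(alpha, beta, size=(n_samples, n_parts))

def relevance_scores(presence, quality):
    # Linear regression of the detection quality on the presence vectors;
    # the fitted coefficients (without the intercept) are the relevance scores.
    X = np.hstack([presence, np.ones((presence.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
    return coef[:-1]

def relevance_with_errors(presence, quality, n_fits=4, frac=0.75, rng=None):
    # Repeat the regression on random 75% subsets; means and standard
    # deviations over the fits give the relevance scores and their errors.
    if rng is None:
        rng = np.random.default_rng()
    n = presence.shape[0]
    fits = []
    for _ in range(n_fits):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        fits.append(relevance_scores(presence[idx], quality[idx]))
    fits = np.asarray(fits)
    return fits.mean(axis=0), fits.std(axis=0)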
Another reason to concentrate the pdf on the limits is to have a robust linear regression even with a low number of samples, due to many data points at the outermost regions of the regression domain. Once the presence vectors are sampled and propagated through the superpixel and the OD model, we have the corresponding pedestrian detection quality scores and can calculate our body part relevance scores for both explanation methods by linear regression. The relevance scores can be visualized by the body part shapes with colors representing the respective relevance scores. We call those visualizations relevance maps. Furthermore, we can estimate the error of the relevance scores of our method by performing 4 regressions, each with a random 75% of all data points. The means and std of those fits yield the relevance scores and errors, respectively. § EXPERIMENTS AND RESULTS In this section, we perform a few experiments on the comparability of KernelSHAP and our Beta sampling method. Additionally, since the number of samples is the crucial parameter that impacts the evaluation speed of both methods, we observe the stability of the relevance scores under small sample sizes. As a test model, we use a RetinaNet50 <cit.> object detection model trained on pedestrians from the EuroCity Persons <cit.> dataset. §.§ Local Explanations We evaluate KernelSHAP and our method by using our superpixel model for an example image from the EuroCity Persons dataset. For both methods, 2048 samples were drawn. In this case, the superpixel model uses the inpainting method to hide the body parts. The result is shown in Figure <ref>. We notice that the relevance scores calculated by KernelSHAP and our method are similar. Nevertheless, they show some minor differences. One problem in XAI is that there is no ground truth explanation, especially not for model-agnostic methods. Thus, we treat the KernelSHAP results as the standard and compare our results with them, because KernelSHAP has a stronger game-theoretical foundation due to the Shapley formalism. §.§ Efficient Sampling In this experiment, we want to see how many samples we need, at a minimum, to get a fairly stable relevance score determination. We perform these experiments by using the first and third abstraction levels of body parts (see Figures <ref> and <ref>) and use inpainting and image noise masking. As sampling sizes, we use powers of 2 from 8 to 4096. In order to also cover how the methods perform for different pedestrians, we perform the sampling on the 100 biggest pedestrians in the EuroCity Persons dataset with respect to bbox size. Among those, 2 could not be segmented properly, so that 98 pedestrians contribute to the final results shown in Figure <ref>. At both abstraction levels, body parts that are not merged with other body parts, namely face and torso, have agreeing relevance scores. A remarkable fact is that the relevance scores differ among the masking methods. For instance, the torso has a significantly higher relevance score for the image noise masking than for the inpainting masking. However, comparing KernelSHAP with our Beta sampling method, we observe that Beta sampling yields more stable results. At 64 samples per pedestrian, the Beta sampling already gives results comparable to the higher sampling sizes. KernelSHAP, however, needs more samples to give converging relevance scores, if they converge at all.
In conclusion, all experiments show that our test model mainly focuses on torso and face regions, which means that the clear presence or visibility of torso and head mainly drives the pedestrian detection quality. § DISCUSSION As already mentioned, KernelSHAP and our Beta sampling method yield comparable relevance scores. This makes sense considering their similar sampling properties. The Shapley kernel prefers samples with either very few or very many visible body parts, as shown in <cit.>. Even if the Beta sampling does not follow the Shapley properties exactly, the pdf is concentrated on 0 and 1 and, thus, the samples are similar, with the difference of being non-binary. Therefore, we could say that the Beta sampling method is a continuation, or interpolation, of the Shapley kernel sampling. The experiments show that our Beta sampling method requires fewer samples for robust relevance score assessment than KernelSHAP. Note that the two methods introduced in this work are local explanation methods per se. In order to gain insights into the global explainability of the test model, many local evaluations must be carried out, as we did in the experiments with many pedestrian instances. Thus, our method enables time-efficient analysis for large-scale datasets. Nevertheless, a shortcoming of this work is the usage of the BodyPix body part segmentation model in the pedestrian detection setting. BodyPix is mainly intended for high-resolution footage of human bodies. However, in street scene data, pedestrians are usually quite far away and, thus, have low resolution. Therefore, BodyPix can hardly segment proper body parts for those pedestrians. Additionally, BodyPix cannot discriminate different pedestrian instances, which is problematic in pedestrian detection since pedestrians often occur in groups and bboxes overlap. This is why this work has to be seen as a proof-of-concept for the pedestrian detection use case. It is desirable to use our methods with datasets that have proper body part and instance segmentation maps available. To our knowledge, there is currently no such large-scale street scene dataset available. § CONCLUSION Our work demonstrates that KernelSHAP can be adapted to OD use cases. Moreover, the robustness can be increased by using non-binary sampling that is still similar to the Shapley kernel sampling. With specific reference to our application of pedestrian detection, it must be noted that BodyPix can only be used to a very limited extent for street scene shots due to the low resolution of the pedestrians. A possible starting point for further research would therefore be the use of simulation data, for which detailed semantic and instance segmentation maps are more readily available. Additionally, simulation data can further enrich the analysis by considering attributes beyond body parts, such as accessories, or vehicles such as bikes, wheelchairs, buggies, etc. Simulations also make it possible to generate data tailored to answering specific questions or covering scenarios that rarely appear in real-world data. § ABBREVIATIONS AD: autonomous driving; AI: artificial intelligence; bbox (pl. bboxes): bounding box; CAV: Concept Activation Vector; DICE: Sørensen-Dice Coefficient; LIME: Local Interpretable Model-agnostic Explanations; OD: object detection; pdf: probability density function; SHAP: Shapley Additive Explanations; std: standard deviation; TCAV: Testing with Concept Activation Vector; XAI: Explainable Artificial Intelligence
"authors": [
"Maurice Günder",
"Sneha Banerjee",
"Rafet Sifa",
"Christian Bauckhage"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127101025",
"title": "Model-agnostic Body Part Relevance Assessment for Pedestrian Detection"
} |
Properties of the Magellanic Corona Model for the formation of the Magellanic Stream [ Received xxxx; accepted xxxx ==================================================================================== Electron cryomicroscopy (cryo-EM) is an imaging technique widely used in structural biology to determine the three-dimensional structure of biological molecules from noisy two-dimensional projections with unknown orientations. As the typical pipeline involves processing large amounts of data, efficient algorithms are crucial for fast and reliable results. The stochastic gradient descent (SGD) algorithm has been used to improve the speed of ab initio reconstruction, which results in a first, low-resolution estimation of the volume representing the molecule of interest, but has yet to be applied successfully in the high-resolution regime, where expectation-maximization algorithms achieve state-of-the-art results, at a high computational cost. In this article, we investigate the conditioning of the optimization problem and show that the large condition number prevents the successful application of gradient descent-based methods at high resolution. Our results include a theoretical analysis of the condition number of the optimization problem in a simplified setting where the individual projection directions are known, an algorithm based on computing a diagonal preconditioner using Hutchinson's diagonal estimator, and numerical experiments showing the improvement in the convergence speed when using the estimated preconditioner with SGD. The preconditioned SGD approach can potentially enable a simple and unified approach to ab initio reconstruction and high-resolution refinement with faster convergence speed and higher flexibility, and our results are a promising step in this direction. § INTRODUCTION We consider the problem of gradient-based optimization for tomographic reconstruction with a particular focus on electron cryomicroscopy (cryo-EM). Stochastic gradient descent (SGD) based optimization methods have become a standard algorithmic framework in many areas due to their speed, robustness, flexibility and ease of implementation, particularly with the availability of fast and robust automatic differentiation libraries. However, in cryo-EM, SGD methods have only really found use in ab initio structure determination, where the goal is only a low-resolution structure. High-resolution structures are then determined by switching to a different form of optimization, typically some form of expectation-maximization (EM), despite the fact that these methods are often slow and require specific modeling assumptions. A natural question which we address here then is why stochastic gradient optimization techniques have not been able to solve the high-resolution optimization problem. Doing so could further speed up data processing for cryo-EM, simplify workflows and unify open research questions. However, perhaps more significantly, using SGD-based optimization methods would allow for more flexibility in modeling the reconstruction problem. Common modeling assumptions (e.g., Gaussian noise, Gaussian priors, rigid structures, discrete Fourier-based structure representations) which may not be optimal but are required by current refinement methods could be relaxed with SGD methods. In this paper, we show that standard SGD-based methods struggle to accurately determine high-resolution information in cryo-EM due to fundamental properties of the induced optimization problem.
We perform a theoretical analysis, in a simplified setting, which shows that the induced optimization problem can suffer from ill-conditioning, which results in arbitrarily slow convergence for standard SGD algorithms. Moreover, our analysis shows that the conditioning may be acceptable at low resolution but becomes worse as resolution increases, explaining why SGD has been successful in the ab initio setting but has yet to be successfully used for high-resolution refinement. While our analysis is in a simplified setting, we argue and empirically verify that this poor conditioning behavior continues to exist in real-world cryo-EM settings. Finally, based on our analysis, we propose a new SGD-based algorithm which, unlike standard methods, exploits higher-order derivatives to improve the conditioning of the problem. Our results demonstrate that this new method is able to overcome the conditioning problem and efficiently and accurately determine high-frequency information. §.§ Background Cryo-EM enables biologists to analyze the structure of macromolecules in their native state. In comparison with the older method of X-ray crystallography, cryo-EM does not require crystallized samples, allowing one to study larger molecules with complex structure and conformations. Indeed, its potential to uncover the structure and function of macromolecules has been acknowledged by the scientific community: cryo-EM was named the “Method of the year” in 2015 by Nature Methods <cit.>, and its development was the subject of the 2017 Nobel Prize in Chemistry. The standard cryo-EM single particle analysis (SPA) pipeline involves freezing a biological sample in a thin layer of ice and imaging it with an electron microscope. The imaged sample contains multiple copies of a macromolecule captured in distinct, random and unknown orientations. Following a number of data processing steps, two-dimensional projections of the electrostatic potential of the macromolecule are captured in a stack of images, which we refer to as the particle images. The goal of the cryo-EM SPA pipeline is to reconstruct a three-dimensional volume representing the structure of a molecule from the collected particle images. In addition to the random orientations, each projection of the volume is shifted off the center of the image by a small, unknown amount, and the particle images are further blurred by a contrast transfer function (CTF) which is image-specific and depends on the settings of the microscope. Moreover, to avoid damaging the biological sample with the electron beam, the imaging is done at a low dosage, which results in a poor signal-to-noise ratio (SNR). Therefore, cryo-EM reconstruction requires solving a tomography problem with unknown viewing directions and in-plane shifts, and low SNR. Here we refer to the particle orientations and the in-plane shifts collectively as the pose variables. Cryo-EM reconstruction approaches implemented in established software packages <cit.> consist of two separate stages: ab initio reconstruction and high-resolution 3D refinement. Ab initio reconstruction provides an initial, low-resolution estimation of the volume. This is a non-convex problem for which many methods have been developed, but the current state-of-the-art methods are based on stochastic gradient descent algorithms which were first used in the context of cryo-EM in the cryoSPARC software <cit.>. More recently, a similar approach has been adopted by other software packages such as RELION <cit.>.
After the ab initio step, high-resolution 3D refinement performs further optimization of the volume and generates a high-resolution reconstruction. This usually employs an optimization algorithm such as expectation-maximization <cit.> to iteratively reconstruct the volume and a search procedure to estimate the pose variables. The EM algorithm has become a standard approach to high-resolution refinement <cit.> and achieves state-of-the-art reconstructions. However, EM-based methods are computationally expensive, generally requiring full passes through all images at each iteration of refinement and necessitating sophisticated grid and tree search algorithms to reduce the computational costs. Further, they are restricted to several key assumptions including Gaussian noise, a Gaussian prior on structures, a rigid structure, and a discrete Fourier-based representation of the structure. Recently, a new class of methods for cryo-EM have emerged which aim to reconstruct not just static structures but also conformational heterogeneity, where the reconstructed volume is subject to different kinds of deformations <cit.>. Such methods greatly expand the capabilities of cryo-EM, e.g., with time-resolved cryo-EM <cit.>. However, existing methods usually require a high-resolution structure and known pose variables as input, limiting their applicability. Moreover, they cannot use the standard EM-based optimization approaches, often using neural networks trained with SGD instead; they generally fail to match existing rigid structure refinement approaches in resolution and require substantially more computation. While motivated by the promise of capturing conformational heterogeneity, we focus here on the static reconstruction case. Our results suggest that improving the performance of SGD-based methods may be the key to unlocking this new capability of cryo-EM. §.§ Prior work The Bayesian formulation of the cryo-EM reconstruction problem and its solution via the EM algorithm has a long history, with early works including <cit.> as well as their implementation for high-resolution 3D refinement in state-of-the-art software such as RELION <cit.> and cryoSPARC <cit.>. While early ab initio reconstruction methods involved heuristic approaches such as using a low-pass filtered known reconstruction of a similar structure to the one of interest, mathematically rigorous approaches based on the method of common lines have been developed in <cit.>. The SGD algorithm introduced in a cryo-EM context in <cit.> showed improved robustness and speed in obtaining ab initio reconstructions from scratch. More recently, the VDAM algorithm, a gradient-based algorithm with an adaptive learning rate, has been introduced in the latest version of the RELION software <cit.>. This brief list of cryo-EM reconstruction algorithms is non-exhaustive and we refer the reader to more comprehensive review articles such as <cit.>. The aforementioned articles view the SGD algorithm and its variants as tools for the ab initio step, while the best results for high-resolution refinement are achieved using the EM algorithm. In this work, we present an analysis of the conditioning of the reconstruction optimization problem and propose a method to improve the convergence of SGD for high-resolution refinement by on-the-fly estimation of a suitable preconditioner.
While basic preconditioners are used in cryoSPARC <cit.> and several strategies for adapting the step size based on second order information are implemented in the VDAM algorithm <cit.>, neither work addresses the conditioning of the problem explicitly, and the preconditioners used are not suitable in the high-resolution regime. In contrast, while we theoretically analyze the reconstruction problem in a simplified setting, our proposed solution is specifically designed to overcome the conditioning issue in a big data, high-resolution setting. We leverage ideas from the machine learning literature <cit.> to estimate a preconditioner that poses no significant additional computational cost over the ordinary SGD algorithm, does not require a particular initialization or warm start, and is straightforward to incorporate into existing SGD implementations in cryo-EM frameworks. Finally, we provide numerical experiments that show the feasibility of our preconditioning approach for estimating high resolution information. §.§ Outline The remaining parts of the article are structured as follows. In Section <ref>, we describe the mathematical setting of the cryo-EM reconstruction problem, as well as current approaches for high-resolution refinement using the EM algorithm and ab initio reconstruction using the SGD algorithm. Section <ref> presents the main contributions. In Section <ref>, we analyze the condition number of the linear inverse problem in the simplified setting where the pose variables are known, while in Sections <ref>-<ref> we describe several ideas that are part of a proposed construction of a preconditioner that allows SGD to overcome the conditioning issue. In Section <ref>, we provide numerical experiments that validate the theoretical contributions, and in Section <ref> we conclude with a summary of the main advantages of the proposed approach and motivate a potential extension of this work to the fully general setting. § PRELIMINARIES §.§ Forward model The objective of cryo-EM reconstruction is to estimate a three-dimensional volume representing the shape of a molecule v from a stack of particle images {x_i}_i=1^N. A simple and frequently used model of the image formation process consists of the following steps: each particle image x_i is formed by rotating the volume v by a three-dimensional rotation operator R_i, projecting it along the z-axis, applying a small in-plane shift T_i, convolving the result with a contrast transfer function (CTF) C_i, and adding Gaussian noise. This model is often formulated in the Fourier domain to speed up the computation of the projections by taking advantage of the Fourier slice theorem and the fast Fourier transform (FFT). The Fourier slice theorem states that the two-dimensional Fourier transform of a projection of a three-dimensional volume is the intersection of the three-dimensional Fourier transform of the volume with a plane passing through the origin of the coordinate axes, where the projection direction is determined by the orientation of the plane. Let v ∈ℂ^M_v be the (discretized) three-dimensional volume and x_i ∈ℂ^M_x, for i=1,…,N, the particle images. Here, M_v is the total number of voxels in an M × M × M grid discretization of the volume and, similarly, M_x is the total number of pixels in an M × M grid discretization of the particle images[Throughout this article, we will use the words “voxel” and “pixel” to refer to the elements of a discretized volume and image respectively, in the Fourier domain.].
Without loss of generality, we assume that in this vectorized representation of the volume, the first M_x = M × M elements of the volume correspond to the volume slice at z=0 in the Fourier domain. Moreover, note that in this representation, both the CTF C_i and the in-plane shift operator T_i are diagonal matrices in ℂ^M_x × M_x, since in the Fourier domain, the convolution corresponds to element-wise multiplication and the in-plane shift corresponds to a phase factor. Since the volume and images are defined on discrete grids, the rotated volume and the initial grid coordinates are no longer aligned. Specifically, the value of a volume v acted on by a rotation operator R at the grid coordinates r is given by evaluating the volume v at the rotated coordinates R^T r: (R v)(r) = v(R^T r). However, due to the possible misalignment between the coordinate grid that v is defined on and the rotated coordinates R^T r, the value of v at R^T r must be interpolated using its values at the neighboring grid points. We define two projection operators that use nearest-neighbor and trilinear interpolation, respectively. In short, projection by nearest-neighbor interpolation assigns to the rotated grid point the value of the volume at the grid point closest to the rotated grid point, while projection by trilinear interpolation performs linear interpolation using the closest eight grid points to the rotated grid point and assigns the resulting value to it, as follows: Let ϕ∈ SO(3) ×ℝ^2 denote the pose variable encapsulating the rotation matrix R and the diagonal shift matrix T, and {r_j}_j=1^M_x be the coordinates of the Fourier grid at z=0. We define the following projection operators P_ϕ∈ℂ^M_x × M_v: * Nearest-neighbor interpolation projection operator P^nn_ϕ: (P^nn_ϕ v)(r_j) := T v_k^*(j), where k^*(j) = argmin_{k ∈{1,…,M_v}} ‖ r_k - R^T r_j ‖_2, for all j=1,…,M_x. In this case, the matrix P^nn_ϕ has exactly one non-zero element in each row, and therefore (P^nn_ϕ)^* P^nn_ϕ∈ℂ^M_v × M_v is diagonal. * Trilinear interpolation projection operator P^tri_ϕ: (P^tri_ϕ v)(r_j) := T v_j^*, where v_j^* is obtained by using trilinear interpolation of v at R^T r_j, for all j=1,…,M_x. As the nearest-neighbor interpolation operator leads to a diagonal matrix (P^nn_ϕ)^* P^nn_ϕ, we will use it to establish theoretical results regarding the conditioning of the reconstruction problem. However, the trilinear interpolation operator is more common in practice, and therefore we will show in numerical experiments that our preconditioner estimation method applies to this case as well. Since the nearest-neighbor projection operator P^nn_ϕ corresponds to a plane slice through the volume passing through the center of the coordinate axes that is approximated by the closest grid points to the plane, it follows from (<ref>) that the indices of the non-zero elements in the diagonal matrix (P^nn_ϕ)^* P^nn_ϕ∈ℂ^M_v × M_v are the indices of these nearest-neighbor elements in the vectorized representation of the volume v ∈ℂ^M_v. Moreover, the non-zero (diagonal) elements of (P^nn_ϕ)^* P^nn_ϕ are real and positive, as the shift matrix T is complex diagonal with elements of unit absolute value. An alternative approach to the interpolation given in Definition <ref> is to sample the volume on the rotated grid using non-uniform FFT <cit.>, as done, for example, in <cit.>. In <cit.>, the structure of the matrix involving a projection and a backprojection is leveraged to obtain a fast preconditioner based on a circular convolution <cit.>.
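As an illustration of the nearest-neighbor operator in the definition above, the following is a minimal sketch that projects a Fourier-domain volume for a given rotation; it assumes a centered cubic grid, handles out-of-grid points by simple clipping, and all names are our own.

import numpy as np

def nn_project(vol, R, shift_phase):
    # vol: (M, M, M) complex Fourier-domain volume; R: 3x3 rotation matrix;
    # shift_phase: (M, M) diagonal of the translation operator T.
    M = vol.shape[0]
    ax = np.arange(M) - M // 2                              # centered grid coordinates
    gx, gy = np.meshgrid(ax, ax, indexing="ij")
    plane = np.stack([gx, gy, np.zeros_like(gx)], axis=-1)  # z = 0 slice coords r_j
    rotated = plane @ R                                     # rows hold R^T r_j
    idx = np.rint(rotated).astype(int) + M // 2             # nearest grid point k^*(j)
    idx = np.clip(idx, 0, M - 1)
    slice_vals = vol[idx[..., 0], idx[..., 1], idx[..., 2]]
    return shift_phase * slice_vals                         # apply the in-plane shift T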
Given the projection operators defined above, we can state the forward model as: x_i = C_i P_ϕ_i v + η_i, i=1,…,N, where P_ϕ_i is one of the two projection operators in Definition <ref>, C_i ∈ℝ^M_x × M_x is a diagonal matrix corresponding to the CTF, and η_i is the noise vector in the i-th image, with elements complex normally distributed with variance σ^2. Both the rotation R_i and the shift T_i are specific to each image x_i and not known. The CTF model is the same across all images, with image-specific parameters (e.g., defocus). We assume that the CTF model is known and its parameters have been estimated in advance, and that the noise variance σ^2 is constant across particle images and pixels and has also been estimated in advance. §.§ Bayesian formulation and the EM algorithm The standard approach to high-resolution 3D refinement in cryo-EM is to solve a Bayesian formulation of the volume reconstruction problem with marginalization over the pose variables. This is solved using the expectation-maximization algorithm <cit.>, which was first used in the context of cryo-EM for aligning and denoising particle images in <cit.> and further refined for full volume reconstruction in the software packages RELION <cit.> and cryoSPARC <cit.>.
Therefore, it is best used for high-resolution refinement, where an initial volumeand possibly priors for the pose variables are provided by the ab initio algorithm.However, the E step is particularly expensive at high resolution, as it requiresintegrating over the entire space of the pose variable ϕ_i for each particleimage in the dataset.In practice, this is performed by employing efficient gridding and search techniques.§.§ Stochastic gradient descent for ab initio reconstruction The SGD algorithm has become the preferred method for solving the MAP problem (<ref>) in the context of ab initio reconstruction. It was first used in <cit.> to obtain an initial volume reliably and without requiring a good initialization, as it is the case for EM. The key property that SGD exploits is that the objective functionin (<ref>) can be split into separate terms for each particle image, not unlike the loss functions used in the training of deep neuralnetworks. In particular,the objective function (<ref>) can be written as:f(v) = 1/N∑_i=1^N f_i(v),where each term f_i corresponds to a particle image:f_i(v) = log p(x_i|v) + 1/Nlog p(v)At iteration k, SGD performs the following update:v^(k+1) = v^(k) - η_kf_j(v^(k))/ v^(k),where η_k is the step size at iteration k and the index j of the term(and particle image) used to compute the gradient is chosen uniformly at random. In practice, a mini-batch of particle images, rather than a single image, is usedto compute the gradient at each iteration. The volume update step (<ref>) for the current mini-batch is performed after a refinement step of the posedistribution of each particle image in the current mini-batch given the currentvolume and data p(ϕ_i | v^(k), x_1,…,x_N), for example byusing (<ref>) similarly to the EM algorithm. Various techniques canbe used to speed up the computation time, for example the grid refinementimplemented in RELION <cit.> or the branch-and-bound approachin cryoSPARC <cit.>.The SGD algorithm has two major advantages over EM. First, it has a lower computationalcost per iteration, as it only uses a subset of the dataset, while EM requiresa pass through the entire dataset. While computing the integrals in (<ref>)for the images in the mini-batch cannot be avoided, the gridding and searchingapproaches that EM uses to efficiently sample the space of poses are also beneficialin the implementation of SGD. Second, it has been observed in practice that the noise in the sampled gradientenables SGD to explore the optimization landscape more efficiently beforeconverging to a local minimizer of the objective function, preventing it frombecoming stuck in unwanted local minima. Given these advantages, the SGD algorithm is particularly well-suited for ab initioreconstruction, as it has been shown in practice in cryoSPARC <cit.>. More recently, the gradient-based algorithm VDAM has been introduced for ab initio reconstruction in the RELION software <cit.>. However, despitethe clear benefits in terms of computational cost and convergence speed of stochasticgradient algorithms compared to EM for (low resolution) ab initio reconstruction,EM is still the state-of-the-art approach for high-resolution refinement. To investigate how the performance of SGD can be improved for high-resolution cryo-EM reconstruction, we first state a convergence result from <cit.>:Suppose that f: ^M → in (<ref>) is twice-differentiable and strongly convex and its gradient is Lipschitz-continuous with constant L. 
Furthermore, we assume that there exists B ≥ 1 such that:max_i {∇ f_i(x) }≤ B ∇ f(x) .Let v^* be the minimizer of f and v^(k) the iterate generated by the SGD iteration (<ref>) with fixed step size η_k = 1/LB^2.Then:𝔼[ f(v^(k)) ] - f(v^*) ≤(1 - 1/κ(∇^2 f(v^*)) B^2)^k [ f(v^(0)) - f(v^*)],where κ(∇^2 f(v^*)) = max_j λ_j/min_j λ_j is the condition number of the Hessian matrix ∇^2 f(v^*).The main implication of Theorem <ref> is that the rate ofconvergence of SGD is affected negatively by a large condition number of theHessian matrix κ(∇^2 f(v^*)). In particular, we can derive that,to reach objective function errorϵ = 𝔼[ f(v^(k)) ] - f(v^*), at least k^* iterations are required, where:k^* = 𝒪( κ(∇^2 f(v^*)) B^2 log1/ϵ). Note that we chose this particular result for its simplicity, despite the rather strong condition (<ref>). More general results for the convergenceof SGD where this condition is relaxed can be found, for example, in <cit.>.To improve the convergence of SGD, we will precondition it using an approximation of the diagonal of the Hessian matrix, which we will compute usingHutchinson's diagonal estimator <cit.>:D = 𝔼[z ⊙ Hz],where z ∈ℝ^M_v is a vector with elements drawn from a Rademacher(0.5)distribution (the entries of z are 1 or -1 with equal probability) and⊙ denotes element-wise multiplication. The central claim of this article is that the condition number of the Hessian ofthe loss function in the cryo-EM reconstruction problem scales unfavorably withthe target resolution of the reconstruction, and therefore it affects the convergenceof SGD for high resolution refinement. In the next section, we argue that this isindeed the case and we propose a solution for overcoming this issue in a simplified setting. § SGD FOR HIGH-RESOLUTION REFINEMENT: FIXED POSE VARIABLESIn this section, we study the optimization problem (<ref>) in the setting where the pose variables (the three-dimensional orientations and the in-planeshifts of the particles) are known. While this is a simpler problem that can besolved with other methods, it captures the main difficulty that makes the applicationof gradient-based algorithms non-trivial at high resolution, namely the large conditionnumber of the Hessian. Therefore, analyzing the reconstruction problem with theknown pose variables provides useful insights and directions for approaching theproblem in its full generality.For simplicity and without loss of generality, we assume, like in the previous section,that the variance of the noise σ^2 and the variance of the prior τ^2are fixed and given in advance, and are constant across all images and pixels(in the case of σ^2) and across all voxels of the volume (in the case of τ^2).The analysis presented in this section can be generalized to the case where σ and τ are not constant, and the preconditioner estimation we proposehas the flexibility to incorporate existing methods for determining these parameters in the reconstruction process. To simplify the notation,we collect these two parameters in one parameter λ = σ^2/τ^2,which we will refer to as the regularization parameter. 
§.§ Condition numberHaving access to the true pose variable ϕ_i^* for each image x_i isequivalent to taking the prior distribution for the pose variable ϕ_iin (<ref>) to be a Dirac delta distribution centered at the truevalue ϕ_i^*, namely p(ϕ_i) = δ_ϕ_i^*.In addition, we assume for simplicity that the images are not deformed by the CTF:C_i = I_M_x, for all i=1,…,N.With the regularization parameter λ described above and lettingP_i := P_ϕ_i^*,we write the optimization problem (<ref>) as:_v∈^M_v1/2∑_i=1^N x_i - P_i v_2^2 + λ/2v_2^2,Letting f(v) be the objective function in (<ref>) and f_i(v) the i-th term: f_i(v) := 1/2x_i - P_i v_2^2 + λ/2Nv_2^2,the optimization problem (<ref>) becomes:_v∈^M_v f(v) =_v∈^M_v∑_i=1^N f_i(v),The minimizer of (<ref>) is the point v^* ∈^M_v that satisfies: Hv^* = b,where b =∑_i=1^N P_i^* x_i and H ∈ℝ^M_v × M_v with:H = ∇^2 f =∑_i=1^N P_i^*P_i + λ I_M_v.Note that, for the problem (<ref>), H is the Hessian of the objective function. The SGD algorithm solves problem (<ref>) by iterativelytaking steps in the direction of negative sampled gradient:v^(k+1) = v^(k) - η_k ∇ f_j(v^(k)),where η_k is the step size at iteration k and the index j is selected uniformly at random. Its convergence properties are determined by the conditionnumber of the matrix H, as stated in Theorem <ref>.When the projection matrices P_i are the nearest-neighbor interpolationmatrices (<ref>) in Definition <ref>,the matrices P_i^* P_i are diagonal with real non-negative elements (seeRemark <ref>), thus the Hessian matrix H is also diagonal with realnon-negative elements. In this case, its condition number <cit.> isκ(H) = max_i H_ii/min_i H_ii. We will now analyze the structure and the condition number of the Hessian matrixH when the projection matrices correspond to nearest-neighbor interpolation, namely P_i = P_ϕ_i^*^nn, for i=1,…,N.In order to do so, we introduce two necessary concepts. First, the projectionassignment function of a particle image maps each element of the image to theelement of the volume whose value is assigned to it by the projection operator.Let P_i := P_ϕ_i^nn∈^M_x × M_v, i=1,…,N, be nearest-neighbor interpolation projection matrices given in Definition <ref> and corresponding to the pose variables ϕ_i, i=1,…,N. We define the projection assignment functionΛ_i : {1,2,…,M_x}→{1,2,…,M_v } as the function thatmaps each pixel index k of the i-th particle image to the voxel indexΛ_i[k] in the volume v whose value is assigned by the operatorP_i at index k. Namely, we have that:(P_i v) [k] = T_i[k] v[Λ_i(k)],k = 1,…,M_x,where the square brackets notation is used for the value of the image (in theleft-hand side) or volume (in the right-hand side) at a particular index k,and T_i[k] is the k-th element in the diagonal of the translation matrix T_i. Second, the voxel mapping set of a volume element contains the indices of theimages that contain a projection of that volume element.For every voxel index j ∈{1,…,M_v}, we define the voxel mappingset Ω_j ⊆{1,…,N} as the set of indices ofimages that contain a pixel that is mapped by their projection assignmentfunctions Λ_i to j, namely:Ω_j = {i : ∃ k ∈{1,…,M_x} such that Λ_i(k) = j}. 
Given the functions Λ_i and the sets Ω_j defined above, the diagonal elements of the (diagonal) matrix P_i^* P_i are (P_i^* P_i)_jj = 0 if i ∉ Ω_j and (P_i^* P_i)_jj ∈ {1, 2} otherwise. We make the following assumptions:

* We assume that each voxel index j is mapped at most once by the projection assignment function of an image Λ_i.
* Without loss of generality, we assume that j=1 is the index of the voxel corresponding to the center of the coordinate axes. Then, the voxel at j=1 is mapped by all projection operators P_i, i=1,…,N, or equivalently, |Ω_1| = N.

The second assumption above concerns the ordering of the elements in the vectorized representation of the grid, specifically so that the center is mapped to the element at index j=1, while the first assumption simplifies our analysis (at a cost of a factor of at most two in the condition number bound below) by ensuring that the diagonal elements of P_i^* P_i satisfy:

(P_i^* P_i)_jj = 0 if i ∉ Ω_j, and (P_i^* P_i)_jj = 1 if i ∈ Ω_j.

Then, the full Hessian matrix H is also diagonal, with its diagonal elements given by:

H_jj = |Ω_j| + λ,

where |Ω_j| is the cardinality of the set Ω_j and 0 ≤ |Ω_j| ≤ N, for all j=1,…,M_v. Finally, Proposition <ref> below captures the main difficulty of the reconstruction problem, namely the condition number of the Hessian matrix increasing with the resolution.

Let M_x = M^2 and M_v = M^3 for grid length M and let Assumptions <ref> hold for the nearest-neighbor interpolation projection matrices P_i := P_ϕ_i^nn. Then, for any fixed number of images N, we have that:

κ(H) ≥ (N + λ)/(N/M + λ), ∀ M ≤ N,
κ(H) = (N + λ)/λ, ∀ M > N.

Since the matrices P_i are the nearest-neighbor interpolation matrices P_ϕ_i^nn, the matrix H is also diagonal and, according to (<ref>), to compute κ(H) we need to find the largest and smallest elements of H. Using (<ref>) and Assumptions <ref>, we have that max_j H_jj = H_11 = N + λ. To compute min_j H_jj, note that the projection assignment functions Λ_i, for i=1,…,N, map NM^2 image pixels to M^3 volume voxels. For M > N, there are more voxels than total pixels (in all the images), and so there exists a voxel j^* such that |Ω_{j^*}| = 0. Then, min_j H_jj = H_{j^* j^*} = λ, and so κ(H) = (N+λ)/λ. For M ≤ N, there are NM^2 pixels mapped to M^3 voxels, and therefore there exists a voxel j^* such that |Ω_{j^*}| ≤ NM^2/M^3 = N/M. Then, min_j H_jj ≤ N/M + λ, and so κ(H) ≥ (N+λ)/(N/M + λ).

We can also write the bounds in Proposition <ref> in terms of the radius R in the Fourier domain. If we assume that the number of pixels in a 2D disk of radius R is approximately π R^2 and the number of voxels in a 3D ball of radius R is approximately (4/3)π R^3 then, following a similar argument, we obtain:

κ(H) ≳ (N+λ)/(3N/(4R) + λ), ∀ R ≲ 3N/4,
κ(H) = (N+λ)/λ, ∀ R ≳ 3N/4.

More generally, if the ratio of the number of pixels in a projected image and the number of voxels in a volume at a given resolution R is p(R), then the bounds in (<ref>) and (<ref>) can be written as:

κ(H) ≥ (N + λ)/(p(R) N + λ), ∀ R such that 1/p(R) ≤ N.

While the counting argument above shows that the condition number is large when there are more voxels to reconstruct than pixels in all the particle images, in practice, the condition number grows fast with the resolution due to an additional factor. Specifically, in light of the Fourier slice theorem, each image is used to reconstruct the voxels corresponding to a slice through the volume passing through the center of the coordinate axes.
Therefore, the large condition number of the matrix H is also a consequence of the fact that the elements of H corresponding to low-frequency voxels are reconstructed using pixel values in most images, while the elements of H corresponding to high-frequency voxels are "seen" by fewer pixels in the particle images. Each new iteration will provide more information to the low-frequency voxels than to the high-frequency ones (relative to the total number of low-frequency voxels and high-frequency voxels, respectively), which leads to errors being amplified (or corrected) at different rates when solving the inverse problem. Lastly, this problem is exacerbated by preferred orientations of the particles: the orientation angles often do not cover SO(3) evenly in real datasets, causing the Fourier transform of the volume to miss entire slices.

In Figure <ref>, we illustrate the statement above for the setting of this section, specifically with nearest-neighbor interpolation in the projection operators and no CTF. In panel (a), we show the lower bound on κ(H) given in (<ref>) as a function of the radius in the Fourier space, as well as the condition number for a dataset of N=10000 particle images with uniformly sampled orientations, with R ranging from 1 to 304 voxels at intervals of 16 and λ = 10^-8. The condition number grows faster than the lower bound due to the effects described in the previous paragraph. To further illustrate the relationship between the number of images and the condition number, we show in panels (b,c) of Figure <ref> the Hessian H when using nearest-neighbor interpolation and no CTF when the dataset contains N=5 images (panel (b)) and when the dataset contains N=100 images (panel (c)). This shows how the particle images contribute to a larger fraction of the voxels close to zero than those at a large Fourier radius.

In light of the dependence of the rate of convergence of SGD on the condition number of the Hessian H given in Theorem <ref> and equation (<ref>), Figure <ref>(a) suggests that the number of iterations required to reach a certain error grows exponentially with the resolution. Since the root cause is the large condition number at high resolution, we will address this issue by preconditioning the SGD iterations, specifically by using an approximation of the diagonal D ∈ ℝ^{M_v × M_v} of H:

v^(k+1) = v^(k) - η_k D^{-1} ∇ f(v^(k)).

For fixed σ, τ, taking λ = σ^2/τ^2 and known poses, the matrix whose inverse appears in the M step of the EM algorithm in equation (<ref>) is the full Hessian matrix (<ref>) of the loss function. Therefore, EM implicitly solves the conditioning issue that is problematic for SGD, and in our preconditioning approach, we aim to approximate, using mini-batches, this gradient scaling that EM applies at every iteration. With the facts above regarding the condition number of the Hessian of the loss function, we now proceed to estimate the diagonal preconditioner for this matrix.

§.§ Computing the preconditioner

We aim to obtain an approximation of the diagonal of the Hessian matrix H and use it to precondition SGD, which is equivalent to preconditioning the linear system (<ref>) using the Jacobi preconditioner <cit.>.
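In code, the preconditioned iteration differs from plain SGD by a single element-wise division; a minimal sketch follows (ours, for illustration: grad_fn stands for a stochastic mini-batch gradient oracle and D_diag for the diagonal of D stored as a vector).

import numpy as np

def sgd_step(v, grad_fn, eta, D_diag=None):
    # One SGD step; if D_diag is given, apply the Jacobi-style
    # preconditioner D^{-1} element-wise to the stochastic gradient.
    g = grad_fn(v)
    if D_diag is not None:
        g = g / D_diag
    return v - eta * g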
Motivated by algorithms such as AdaHessian and OASIS in the machine learning literature <cit.>, we estimate the diagonal of the Hessian using Hutchinson's diagonal estimator <cit.> stated in (<ref>). Estimating the diagonal of H using (<ref>) has two practical advantages. First, for any function f, applying (<ref>) only requires computing Hessian-vector products, rather than forming the full Hessian matrix. Indeed, for a function f, a Hessian-vector product is computed efficiently using Jacobian-vector products and automatic differentiation as follows:

(v, z) ↦ ∇(∇ f(v)^⊤ z) = ∇^2 f(v) z.

Second, the computation can be split into mini-batches so that, at each iteration, only the current mini-batch of images is used for the Hessian-vector product computation. The update of the preconditioner at the current iteration obtained using the current mini-batch is combined with the estimated preconditioner from the previous iteration using an exponential average as done, for example, in Adam <cit.>, AdaHessian <cit.> and OASIS <cit.>. In addition, to take advantage of the fact that the Hessian H in (<ref>) of the objective function (<ref>) is independent of the current iterate when the orientations are known, the exponential average is taken between the estimated preconditioner at the previous iteration and the estimated diagonal using all the samples of Rademacher vectors z drawn up to the current iteration. However, in the more general reconstruction problem with unknown pose variables (<ref>), only the current update would be used. Starting with the identity matrix as the initial estimate, D^(0) = I_{M_v × M_v}, the update rule for the diagonal estimator D^(k) at iteration k is:

D_avg^(k) = ((k-1)/k) D_avg^(k-1) + (1/k)( z^(k) ⊙ H z^(k) ),
D^(k) = β D^(k-1) + (1-β) D_avg^(k),

where β ∈ (0,1) and the Hessian-vector product is computed using the current mini-batch. An example of the convergence of this estimate over 100 batches of 1000 images each and a total number of N=50000 images is shown in Figure <ref>. When the projection operator uses nearest-neighbor interpolation, i.e. P_i := P_ϕ_i^nn, and therefore the Hessian matrix H is diagonal, Hutchinson's estimator with Rademacher samples (<ref>) computes the exact diagonal of H using a single sample vector z or, if computed using mini-batches, after a single epoch.

§.§ Thresholding of the estimated preconditioner

As we will see in the numerical experiments section, the combination of the variance of the sampled gradient using mini-batches and the error in the diagonal approximation of the Hessian, especially in the early iterations, can lead to highly amplified errors in the current iterate v^(k). In general, this problem can be avoided by using variance-reduced stochastic gradient methods <cit.>, which require computing full gradients at a subset of the iterations or storing previously computed gradients. However, in a typical cryo-EM setting with large datasets, this has a prohibitive computational cost. Instead, we propose a simple solution that leverages the particular structure of the cryo-EM reconstruction problem. Given the specific structure of the matrices P_i and P_i^* P_i (see Definition <ref> and equation (<ref>)) and the fact that the projection operators perform slicing in the Fourier domain (according to the Fourier slice theorem), the preconditioned SGD iteration (<ref>) updates the elements of v^(k) corresponding to high-frequency voxels at a lower rate using information in the particle images compared to the low-frequency voxels.
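A minimal sketch of the two ingredients of the previous subsection is given below (ours, written with JAX for the automatic differentiation; f_batch denotes the loss restricted to the current mini-batch, and, for simplicity, the sketch uses the per-mini-batch exponential update described above for the general unknown-pose setting rather than the running average over all past samples).

import jax
import jax.numpy as jnp

def hvp(f, v, z):
    # Hessian-vector product via forward-over-reverse differentiation:
    # the directional derivative of grad f at v in the direction z.
    return jax.jvp(jax.grad(f), (v,), (z,))[1]

def update_preconditioner(D_prev, f_batch, v, key, beta=0.95):
    # One Hutchinson sample on the mini-batch loss, combined with the
    # previous estimate through an exponential average.
    z = jax.random.rademacher(key, shape=v.shape, dtype=v.dtype)
    sample = z * hvp(f_batch, v, z)
    return beta * D_prev + (1.0 - beta) * sample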
The elements of the estimated preconditioner D^(k) corresponding to high-frequency voxels in the volume similarly have small values, close to the regularization parameter λ, while the elements corresponding to low-frequency voxels have magnitudes that reflect the large number of images that contribute to the reconstruction of the voxels (see the discussion in the paragraph below equation (<ref>)), and are scaled by the CTF at that particular resolution. In light of (<ref>), this is due to |Ω_j| being large for low-frequency elements and small for high-frequency elements of the diagonal of H.

This knowledge can be incorporated into the reconstruction algorithm in two ways. One approach is to tune the regularization parameter λ so that it balances the small entries in the preconditioner with the ability to obtain good convergence at high resolution. Alternatively, the same effect can be achieved without interfering with the regularization term by thresholding the smallest elements of the preconditioner, with the benefit of using the structure of the measurement operator in the preconditioner while allowing the freedom to choose the regularization term and parameter by other means, for example as done in RELION <cit.>. Here, we opt for the latter approach, namely thresholding the elements of the preconditioning matrix at the current iteration D^(k) below a certain value α, chosen as follows. Let C_i(r) be the radially symmetric CTF corresponding to the i-th image, as a function of the Fourier radius r ∈ [0, R], and let P_x(r) and P_v(r) be the number of pixels in a 2D Fourier shell at radius r and the number of voxels in a 3D Fourier shell at radius r, respectively. Following the steps in Section <ref> with non-trivial CTFs and assuming that the particle orientations are uniformly distributed (the details are omitted for brevity), the expected value of an element of the Hessian matrix H at the maximum Fourier radius R is:

H̅ = (P_x(R)/P_v(R)) · ∑_{i=1}^N |C_i(R)|^2 + λ.

Setting the threshold of the smallest elements of D^(k) to α = H̅, the final preconditioner at iteration k is defined as:

D̂^(k)_jj = max{ |D^(k)_jj|, α }, for all j = 1, …, M_v.

§.§ The algorithm

With the preconditioner estimation approach proposed in Section <ref> and the thresholding described in Section <ref>, we present the full algorithm in Algorithm <ref>. No warm start is required for the approximation of the diagonal preconditioner, which is initialized with the identity matrix and estimated iteratively. The step size η^(k) at each iteration is set using the stochastic Armijo line-search <cit.>, namely the largest step size η is sought that satisfies the condition:

f_{ℐ_k}( v^(k) - η D^{-1} ∇ f_{ℐ_k}(v^(k)) ) ≤ f_{ℐ_k}(v^(k)) - c · η ‖∇ f_{ℐ_k}(v^(k))‖^2_{D^{-1}},

where ℐ_k is the index set of the current mini-batch, D is the preconditioner, and c ∈ (0,1) is a hyper-parameter. Condition (<ref>) ensures that a sufficient decrease in the objective function is attained over the current mini-batch at every iteration.

§ NUMERICAL EXPERIMENTS

In this section, we present numerical results that demonstrate the arguments given in this article regarding the convergence of the SGD algorithm for high-resolution cryo-EM reconstruction. In particular, the numerical experiments in this section further verify two claims:

* Claim 1: The condition number κ(H) of the Hessian of the loss function is large at high resolution, which leads to slow convergence of the SGD algorithm.
* Claim 2: Preconditioning SGD using the diagonal preconditioner as estimated using the tools described in Sections <ref>-<ref> leads to improved convergence at high resolution.

The dataset used in this section is derived from the Electron Microscopy Public Image Archive database entry EMPIAR-10076 of the bacterial ribosome, where N=50000 images of 192 × 192 pixels are selected and whose pose variables have been computed using RELION. The condition number of the reconstruction problem for this dataset with the computed pose variables is shown in Figure <ref>(a) for increasing values of the maximum Fourier radius R. Nearest-neighbor interpolation has been used for this figure, and therefore the condition number has been computed using (<ref>). The condition number grows quickly with the resolution, and at the maximum radius R corresponding to the grid side length M=192, it is of the order of 10^7. Note that the bound in Proposition <ref> and equations (<ref>)-(<ref>), shown as the solid blue line in Figure <ref>(a), holds for the condition number when no CTF is included, shown as the red, dashed line. Because the CTF has magnitude less than one at the origin, the numerator in (<ref>) is smaller than in the case when no CTF is used, and therefore the condition number computed in this setting, shown as the green dash-dot line, is not always larger than the bound and plateaus at a lower value at high resolution. Moreover, in this plot we see the effect of the non-uniform orientations. In particular, above a certain radius (here approximately at 20 voxels from the origin), there are voxels that are not projected to any image pixels (namely their voxel mapping set Ω_j has cardinality zero) and therefore the condition number reaches O(1/λ), in contrast to the plot shown in Figure <ref>(a), where uniform orientations have been simulated. The non-uniform orientations are seen in the cross-section of the diagonal of the Hessian matrix H in Figure <ref>(b), which also shows the effect of the CTF.

To verify the two claims on the dataset described above, we run three algorithms: SGD with no preconditioner, SGD with a preconditioner that has been precomputed in advance using (<ref>), and Algorithm <ref>, namely SGD with a preconditioner estimated during the refinement process and thresholded appropriately. All algorithms use the stochastic Armijo line-search (<ref>) for the step size adaptation. While the theory in Section <ref> applies to nearest-neighbor interpolation, where the matrices P_i^* P_i are diagonal (and therefore the optimization problem can be solved easily by other methods), in the numerical experiments presented in this section, the projection operators are implemented using trilinear interpolation, which is often used in practice. In this case, the Hessian matrix H, whose diagonal we estimate and use as a preconditioner, is no longer diagonal.

To evaluate and compare the performance of the three algorithms, we compute the value of the loss function in (<ref>) after each epoch (a full pass through the dataset), as well as the Fourier Shell Correlation (FSC) of each reconstruction with a ground-truth solution, obtained using L-BFGS with 750 iterations. The FSC, a standard error measure in the cryo-EM literature, is the cross-correlation coefficient between two volumes across three-dimensional shells in the Fourier domain <cit.>.
Specifically, given two volumes u and v in the Fourier space, the FSC at radius r from the origin is defined as

FSC(r) = ( ∑_{ℓ ∈ S_r} u_ℓ v_ℓ^* ) / √( (∑_{ℓ ∈ S_r} |u_ℓ|^2)(∑_{ℓ ∈ S_r} |v_ℓ|^2) ),

where S_r is the set of Fourier voxels in the spherical shell at radius r.

Figure <ref> shows the results obtained using the three algorithms. In panel (a), we see the convergence of the loss function significantly improved when using the precomputed preconditioner (red) over not using a preconditioner (blue), verifying Claim 1 above. When estimating the preconditioner during refinement using Algorithm <ref>, the value of the loss (green, solid) is between those of the previous two algorithms and approaches the fully preconditioned SGD as the preconditioner progressively becomes more accurate, verifying Claim 2. In panel (b), we show the FSC between the final reconstructions of each of the three algorithms and the ground truth reconstruction. While at low resolution the FSC value is high for all algorithms, showing good convergence, at high resolution the FSC degrades for SGD with no preconditioning (blue), while the two preconditioned algorithms have a high value, close to one.

In addition, panels (a) and (b) of Figure <ref> also show the results of running Algorithm 1 without the thresholding of the estimated preconditioner described in Section <ref> (green, dashed), highlighting the importance of this step. The loss initially decreases as expected; however, it plateaus after two epochs. A clue for this issue is given in panel (c), showing the high variance of the stochastic gradient at high frequency over one full epoch for an arbitrarily selected iterate (orange, dashed) and the large magnitude of the radial average of the high-frequency elements of D^{-1} (blue, dashed), where D is the estimated preconditioner without thresholding. This would not be a problem by itself if such entries in the preconditioner were computed exactly. However, due to the preconditioner not being accurately estimated early in the run of the algorithm, the high-frequency elements of the volume have a large error (due to the low |Ω_j| of such high-frequency elements, they are updated using the correct value at a small subset of the individual iterations). Therefore, in the first iterations, the variance in the stochastic gradient amplifies the errors in the estimated preconditioner at high resolution, leading to a large loss in the high-frequency elements. Then, the step size adaptation leads to a small step size and, therefore, to slow overall convergence. This also explains the low value of the FSC when using a preconditioner with no thresholding, even in comparison with the non-preconditioned SGD in panel (b) of Figure <ref>.

For further insight into the difference between SGD without a preconditioner and SGD with a (precomputed) preconditioner at low and high resolution, as well as how the estimated preconditioner behaves in relation to them, we show in Figure <ref> the FSC with the ground truth for the three algorithms at specific Fourier shells and across epochs.
Panel (a) shows the FSC for a low-frequency Fourier shell, panel (b) shows a medium-frequency shell, and panel (c) shows a high-frequency shell. At low resolution, all three algorithms are almost indistinguishable, while at medium resolution, the non-preconditioned SGD shows slower convergence than the preconditioned SGD, with SGD with the estimated preconditioner quickly approaching the accuracy of the fully preconditioned SGD. The advantage of using a preconditioner is seen most clearly at high resolution, where the FSC of the non-preconditioned SGD is much lower than that of the other two algorithms. SGD with an estimated preconditioner converges more slowly than at medium resolution, but eventually approaches the FSC of SGD with a precomputed preconditioner. The small difference in the FSC at the last epoch between the two preconditioned algorithms is likely due to a combination of factors such as the thresholding of the very high-frequency elements in the preconditioner and the limited accuracy of the preconditioner estimate after only 10 epochs. Lastly, we show in Figure <ref> a complete overview of the shell-wise FSC for each algorithm using a heat map of the correlation as a function of the Fourier shell number and the epoch number.

Finally, in light of equation (<ref>), the number of iterations or epochs required for a gradient-based algorithm to achieve an accuracy ϵ scales linearly with the condition number of the Hessian matrix H. We show in Figure <ref> the impact of the condition number on the number of epochs required by each of the three algorithms to reach a certain FSC value in our experiments, for each Fourier shell. As the condition number of the preconditioned algorithms scales better with the resolution, the plots show the number of iterations growing more slowly with the resolution for the preconditioned algorithms than in the non-preconditioned case.

§ OUTLOOK AND CONCLUSION

In this article, we analyzed the conditioning of the cryo-EM reconstruction problem with a view towards applying stochastic gradient methods efficiently at high resolution. The proposed preconditioner construction and the numerical experiments performed show promising results for high-resolution 3D refinement. While the analysis and experiments presented hold in the special case when the pose variables are known, this simplified setting captures the main difficulty of applying gradient-based algorithms at high resolution, namely the large condition number of the Hessian of the loss function due to the particular structure of the projection operator in the Fourier domain. This proof-of-concept work shows the potential of the SGD algorithm for the more general cryo-EM reconstruction problem. There are a number of benefits that such an approach would provide:

* The main advantage is the improved convergence speed. While the EM algorithm requires a full pass through the entire dataset at each iteration, SGD methods only use a mini-batch of the particle images. By estimating a preconditioner during the reconstruction process, the convergence of SGD improves at high resolution, while also benefiting from the speed of using mini-batches. In contrast, EM implicitly computes the same preconditioner, but at an increased computational cost due to requiring the entire dataset at each iteration.
* A critical component of the current EM approaches is the ℓ_2 regularizer, which makes the maximization step computationally tractable.
SGD, on the other hand, is compatible with other regularization methods, and one could take advantage, for example, of the Wilson prior <cit.> and learned regularization methods <cit.>.
* While the numerical experiments presented here illustrate the performance of the estimated preconditioner with a simple SGD implementation, the preconditioner is compatible with existing and more sophisticated stochastic gradient methods used in established cryo-EM software. Unifying the steps of the ab initio reconstruction and high-resolution refinement using a single algorithm is not only more consistent conceptually, but also a practical improvement, allowing a more streamlined implementation.

We defer to future work the analysis and the preconditioner construction in the general case, where the pose variables are not known. The main difficulty in the general case over the setting of our analysis is that, when marginalizing over the unknown poses (see (<ref>)), the objective function is no longer quadratic. Therefore, the Hessian depends on the current volume iterate. However, it is expected that the pose prior distributions are already narrow at high resolution and do not vary considerably from one epoch to another. Moreover, at each iteration, only a subset of the pose variables is updated (those corresponding to the particle images in the current mini-batch), and existing efficient pose sampling techniques can be used. Therefore, an approach to estimate the preconditioner in the general case based on ideas similar to the ones presented in this article is a promising future direction.

§ ACKNOWLEDGMENTS

This work was supported by the grants NIH/R01GM136780 and AFOSR/FA9550-21-1-0317, and by the Simons Foundation.
"authors": [
"Bogdan Toader",
"Marcus A. Brubaker",
"Roy R. Lederman"
],
"categories": [
"math.NA",
"cs.NA",
"q-bio.BM"
],
"primary_category": "math.NA",
"published": "20231127185932",
"title": "Efficient high-resolution refinement in cryo-EM with stochastic gradient descent"
} |
A Choice-Driven Service Network Design and Pricing Including Heterogeneous Behaviors

Adrien Nicolet, Bilge Atasoy
Dept. of Maritime and Transport Technology, Delft University of Technology, The Netherlands, {A.Nicolet, B.Atasoy}@tudelft.nl

The design and pricing of services are two of the most important decisions faced by any intermodal transport operator. The key success factor lies in the ability to meet the needs of the shippers. Therefore, making full use of the available information about the demand helps to come up with good design and pricing decisions. With this in mind, we propose a Choice-Driven approach, incorporating advanced choice models directly into a Service Network Design and Pricing problem. We evaluate this approach considering three different mode choice models: one deterministic with 4 attributes (cost, time, frequency and accessibility); and two stochastic, also accounting for unobserved attributes and shippers' heterogeneity, respectively. To reduce the computational time for the stochastic instances, we propose a predetermination heuristic. These models are compared to a benchmark, where shippers are solely cost-minimizers. Results show that the operator's profits can be significantly improved, even with the deterministic version. The two stochastic versions further increase the realized profits, but considering heterogeneity allows a better estimation of the demand.

Keywords: Network Design, Pricing, Mode Choice, Heterogeneity

January 14, 2024
====================

§ INTRODUCTION

In intermodal freight transport, planning at the tactical level is of key importance to make the best use of existing infrastructure and available assets and to ensure reliable transport plans. An appropriate way of managing this task is through Service Network Design (SND) problems, as they cover most of the tactical decisions <cit.>. They can support the decisions of intermodal operators about the itineraries to be served, the offered frequencies and how demand should be assigned to these services. Until recently, pricing was not explicitly covered in most SND models, although it plays a crucial role in the success of the planning <cit.>. As pointed out by <cit.>, intermodal transport pricing is a difficult task as costs must be accurately computed and some knowledge of the market situation has to be gained. Indeed, the costs faced by an intermodal carrier are various <cit.>: some of them, e.g. crew costs or contracts with the infrastructure manager, are perfectly known by the carrier, but other variable costs are set by external companies, such as terminal operators for the handling costs or energy suppliers for the fuel costs. For the latter, not only do they depend on external actors, but also on the transport demand, as they increase together with the carried load. Although carriers have some control on the quantity of transported freight (via contract binding, for example), demand remains mostly stochastic in nature <cit.>. As a result, variable costs can only be estimated from the expected transport demand.

Regarding the pricing decision itself, some knowledge about the targeted demand, such as the willingness to pay or the transport requirements, is also of key importance. Indeed, the cost of transportation is among the main drivers of shippers' mode choice.
It would, however, be inadequate to consider that shippers are purely cost-minimizers, as other factors (e.g., transport time, offered quality, service frequency) play a role in the decision process, see for example <cit.> or <cit.>. On top of that, these factors and their importance can vary from shipper to shipper, and the final decision of choosing a mode also depends on the available alternatives, hence making the planning and pricing process even more complex. On the other hand, there exists a great variety of mode choice models (see <cit.> for a comprehensive review) that can be used to support the planning of intermodal carriers. For example, <cit.> include values of time and reliability, that are estimated from a stated preference survey, within the cost minimization of a SND model. This represents a step towards the integration of shippers' preferences within the planning process.

§.§ Illustrative example

To illustrate the benefits of using a mode choice model for the pricing decision, we consider the case in Figure <ref>, where two shippers, S1 and S2, want to send 200 Twenty-foot Equivalent Units (TEUs) each. To do so, they have two alternatives: Road and Inland Waterway Transport (IWT). Each mode has the following utility function for each shipper i:

V_i^IWT = β_f f + β_c,i p_IWT = 1 × 5 + β_c,i × x,
V_i^Road = α_Road + β_c,i p_Road = 15 + β_c,i × 15,

where α_Road is the Alternative Specific Constant (ASC) for Road, equal to 15, and the ASC for IWT is normalized to 0. p_Road is the cost of the Road alternative, set to 15 /TEU, and β_c,i represents the cost sensitivity of each shipper i: we assume that it is -5 for S1 and -2 for S2. β_f is the weight associated to the frequency of IWT services f, and assumed to be 1 for both shippers.

In this example, the decision-maker is the IWT carrier that wants to set up a transport service running each working day (hence: f=5) and to optimize their price x. The carrier faces a fixed cost, c_fix, of 100 per round trip and a variable cost, c_var, of 1 /TEU. Assuming that the transport demand of shippers is split according to a logit model, the carrier aims at setting a unique price so as to maximize their profits, expressed as:

Π(x) = ∑_i ( 200 × e^{V_i^IWT} / (e^{V_i^IWT} + e^{V_i^Road}) )(x - c_var) - f × c_fix = ∑_i ( 200 × e^{V_i^IWT} / (e^{V_i^IWT} + e^{V_i^Road}) )(x - 1) - 500.

The carrier does not necessarily know the full utility specification of the shippers. Therefore, it can opt for various strategies; here we consider three of them:

* Assume that shippers are homogeneous and purely cost-minimizers; the considered utilities may then be: V_i^IWT = -1x and V_i^Road = -1 × 15 ∀ i;
* Conduct a more detailed market study to come up with the same utility functions as above, but consider that shippers are homogeneous with a mean cost sensitivity, thus: β_c,i = -3.5 ∀ i;
* Consider also the heterogeneity regarding the cost sensitivity, thus: β_c,1 = -5 and β_c,2 = -2.

Assuming that the carrier has enough capacity to accommodate the demand, the resulting profits Π(x) for each strategy are depicted in Figure <ref>, together with the profits stemming from each individual shipper. Before the price reaches 7 /TEU, there is no difference between the 3 strategies: the profits grow linearly with the price between 2.25 and 7 /TEU. This is because the price of IWT is so low that its utility is much higher than that of Road in all three strategies: thus, the whole demand is assigned to IWT and the profits only depend on the price.
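The profit curves discussed in the following paragraphs can be reproduced with a few lines of Python (ours, for illustration; the numbers are those of the example above and the optimum is located by a simple grid search):

import numpy as np

c_fix, c_var, f, n_teu = 100.0, 1.0, 5, 200   # costs, frequency, TEUs per shipper

def utilities(strategy, x):
    # Returns one (V_IWT, V_Road) pair per shipper at price x.
    if strategy == "A":                        # pure cost minimization
        return [(-x, -15.0)] * 2
    betas = [-3.5, -3.5] if strategy == "B" else [-5.0, -2.0]
    return [(1.0 * f + b * x, 15.0 + b * 15.0) for b in betas]

def profit(strategy, x):
    shares = np.array([np.exp(vi) / (np.exp(vi) + np.exp(vr))
                       for vi, vr in utilities(strategy, x)])
    return np.sum(n_teu * shares * (x - c_var)) - f * c_fix

prices = np.linspace(2.0, 16.0, 1401)
optima = {s: prices[np.argmax([profit(s, x) for x in prices])]
          for s in ("A", "B", "C")}
# The optima are close to the prices discussed in the text below.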
Returning to Figure <ref>, the same linear relationship is observed for the individual shippers, with a slope reduced by 2, since each of them represents half of the demand. Above 7 /TEU, the utility of IWT gradually approaches the one of Road and the relationship between price and profit becomes non-linear, as the demand is also varying with the price. The highest expected profits are reached by strategy A with a price of 12.5 /TEU. Indeed, when only considering costs, the IWT carrier can charge a higher amount as the cost of Road is relatively high and the other advantages of Road (such as low travel time) are not considered. When the price approaches and exceeds the one of Road (15 /TEU), the profits decrease quickly as expected.

The maximal expected profits of strategy B are also particularly high and occur at 11 /TEU. This is because the cost sensitivity of S2 is overestimated: in strategy B, it is as if the cost of Road was still too high for S2 despite the other advantages of this transport mode. But in reality, the profits stemming from S2 reach zero for a price of 11 /TEU because the Road advantages (included in α_Road) overcome the higher cost, thus making Road much more attractive. If strategies A or B are used for pricing, it would result in profits reduced by half when the obtained price is applied to the heterogeneous shippers.

Applying strategy C gives the highest profits, as it considers the true cost sensitivity of S2. In fact, the optimal price of 9 /TEU almost exactly corresponds to the one maximizing the profits generated by S2 only. For higher prices, the S2 profits reduce sharply while the profits from S1 only grow linearly, hence resulting in a decrease in the total profits. That is why applying the higher prices found by strategies A and B results in lower profits. It should be noted that, for strategy C, it is assumed that the carrier knows the true cost sensitivity of both shippers. In reality, this will not be the case. Nevertheless, this example showcases that the more information is known by the carrier, the more beneficial it is for the planning and pricing decisions.

A revenue management strategy would be trivial in this example with only two shippers and simple utility functions: the optimal solution would then be to set different prices for S1 and S2. However, segmentation may be difficult to identify when many more shippers are considered and less detailed information is available. In the remainder of this work, we will not consider revenue management, although we recognize that it can be an effective tool to optimize pricing decisions.

Let us conclude this example by enforcing capacity constraints for the IWT carrier. Figure <ref> shows the same price-profit relationships but considering a fixed vessel capacity of 20 TEUs. Since the IWT carrier cannot serve the whole demand, the price from which positive profits are obtained is higher (3.5 /TEU) than in the uncapacitated case. Then, the same linear relationship between price and profits is observed, but it remains valid for higher prices because the limiting factor is capacity (only 100 TEUs per direction), and not the total demand (200 TEUs per direction). The conclusions regarding the optimal prices under different strategies remain essentially the same. However, in this capacitated case, using strategy A would result in losses for the carrier even though it has the highest expected profits.
Again, the optimal price obtained through strategy C corresponds to the optimal price for S2. The purpose of this illustrative example is to highlight the advantages of using accurate choice models for the pricing decision: they can substantially improve the obtained profits compared to less detailed demand representations. In fact, even if the exact parameters are not known, it is beneficial for the carrier to incorporate more information about their clients. Moreover, the impact of considering capacity constraints is demonstrated: although the previous conclusion still holds, the obtained profits are significantly reduced.

§.§ Paper outline

In this work, we aim to exploit the advantage of using advanced choice models in the planning process of an intermodal carrier. For this, we make use of choice-based optimization to combine a SND and a pricing problem with a detailed representation of shippers. Therefore, we develop a Choice-Driven Service Network Design and Pricing (CD-SNDP) model, which includes an existing mode choice model to consider shippers' behavior directly in the decision-making of the carrier. In the rest of this paper, we review the existing literature on SNDP in Section <ref>. Then, the proposed methodology is described in Section <ref>, where deterministic and stochastic formulations are covered. In Section <ref>, the CD-SNDP is applied to a case study and several variations of the model are compared with each other. Finally, we conclude the paper in Section <ref> and share some insights for future research.

§ LITERATURE REVIEW

Although it has not been applied to SNDP models yet, choice-based optimization is already used for other types of problems. Therefore, we first review the state of the literature on SNDP in intermodal transport, then investigate the existing choice-driven methods in related transportation fields and finally present the main contributions of the present work.

§.§ Service Network Design and Pricing problems in intermodal transport

The majority of existing studies on SND are formulated as a cost minimization of the transport operator and do not include the revenues of fulfilling the transport orders <cit.>. Nevertheless, two models using cost minimization have addressed the pricing decision. <cit.> determine the price charged by an intermodal carrier using a pre-defined profit margin, expressed as a given percentage of the operational costs. The price is the addition of the costs and the margin and cannot exceed a given market price. <cit.> include a target for the minimal profit (per transported unit) to be achieved by an intermodal operator: this translates into a constraint assuring that the applied rate is greater than or equal to that target added to the operating costs. The authors also include a cost sensitivity factor representing the willingness to pay for intermodal transport rather than road, and enforce that the rate difference between road and intermodal transport has to be greater than or equal to this factor.

For the works applying a profit maximization, some of them do not include the pricing decision but rather assume fixed tariffs that are included as parameters into the model. <cit.> apply a SND model to explore new rail services along a Polish freight corridor. The demand is represented as contracts generating a given revenue when served. The operator then decides to serve or not the contracts in order to maximize their profit.
It also decides on the services' frequency and the assignment of vehicles and demand to these services, under vehicle balancing and capacity constraints. <cit.> are interested in designing a barge transport service along a Belgian canal considering empty container repositioning. Their SND model decides at which inland ports to stop and in which sequence, as well as the fulfillment of transport demand from different clients. <cit.> also apply a SND model to barge transport with detailed fleet management and revenue management considerations. Different customer segments are considered, as well as two different service levels (standard or express) with a given fare. The operator then decides which services to operate, what percentage of the demand to serve and how to assign the vessels and demand to the services so as to maximize their profits. The model has been developed further to include the possibility of bundling services and penalties for early and late distribution <cit.>. <cit.> treat similar models and propose decomposition algorithms for computational efficiency. <cit.> capture demand elasticity using a gravity model, where the demand is considered inversely proportional to the transport costs faced by the carrier. The decisions are whether or not an arc (or a path) is used and in which sequence to visit the demand nodes. Finally, <cit.> use SND to conceive a new platooning service of autonomous vehicles. They come up with a two-stage stochastic model considering scenarios to represent the demand variation. The first stage designs the services performed by manually operated vehicles and assigns rates to the different customers over all scenarios, whereas the second sets the flow of autonomous vehicles for each particular scenario.

Other works include demand functions in the profit maximization to capture the influence of prices on the transport volumes. <cit.> design a railroad network using a concave inverse demand function. In this case, the demand for each service and each itinerary are the decision variables and the corresponding prices are computed using the inverse demand function. <cit.> represent two competing road carriers within a non-cooperative game model. Each carrier has to set their price so as to maximize their own profit, and the demand is represented as a linear function of the carrier's price and the competitor's price. <cit.> also investigate competition between carriers: each of them fixes their price, frequency and capacity. The demand of shippers for a given carrier is represented as a function of price and frequency. The inconvenience of demand functions is that they become hard to obtain when the number of shippers or alternatives increases <cit.>, thus requiring a numerical estimation or some simplifying assumptions.

An increasingly common way to model SNDP problems is using a Stackelberg game or bilevel programming. This formulation was first proposed in intermodal freight transport by <cit.>. The intermodal carrier is the leader and sets the price of their services to maximize their profit. Truck carriers are followers that will adjust their prices based on the leader's decision, and the exogenous demand is split between the carriers using a logit model, where the considered attributes are the prices, travel times and reliability. A general formulation for the Joint Design and Pricing (JDP) on a network has been proposed by <cit.>. The network operator decides on the network design and prices so as to maximize their profits.
The network and rates of the competitors are assumed known and exogenous. The followers are the network users that seek to minimize their cost by selecting the services of the operator or those of the competitors. The authors propose an iterative procedure to solve the JDP. <cit.> propose a similar formulation, with the addition of capacity constraints and revenue management considerations. <cit.> extend the JDP formulation to include time constraints, as well as capacity constraints. Their model is used to design and price the hinterland barge services of an extended gate operator. In their work, <cit.> include some level-of-service attributes in the JDP formulation. In particular, the lower level costs are more detailed as they not only consider transport costs but also the cost of capital. An iterative heuristic is later proposed to solve large instances of the JDP <cit.>. A similar formulation is adopted by <cit.> to design and price rail container transport. The lower level objective is to minimize the generalized costs, made of price, transport time, convenience and security. Only the price is endogenous to the model. The same authors also propose a time-varying model <cit.>. A single-level formulation is used and the demand follows a logit model with price as the single attribute. The model proposed by <cit.> extends the JDP of <cit.> with the introduction of additional cost components. The carrier faces waiting costs and a penalty for an under-utilisation of transport capacity, while the lower level costs also embed heterogeneous shipper classes through different values of time and reliability. Finally, there also exist a few different versions of the Stackelberg game. A monopoly setting is proposed by <cit.> where a hinterland carrier sets services and prices in multiple planning horizons. The followers are represented by a set of captive consignees that minimize their transport and storage costs. <cit.> consider three different actors as leaders and all shippers as followers. The upper level itself is represented as a three-level program where ocean carriers are leaders of terminal operators which, in turn, are leaders of land carriers <cit.>. At the lower level, shippers set their production, consumption and transportation demand using spatial price equilibrium.

The relevance of bilevel models is questioned by <cit.>, especially because of the simplifying assumptions regarding demand modeling (pure cost minimizers and homogeneous preferences). They propose a SNDP model applied to an express shipping service by airplanes and trucks. In their profit maximization problem, the carrier has to set prices for some given service times that can be selected by their customers. The service time chosen by each customer is the one providing a welfare greater than or equal to that of all the other options. With our CD-SNDP, we show that it is possible to include advanced demand modeling, also considering heterogeneity, within a bilevel setting. In particular, the proposed formulation is inspired by the work of <cit.>, where the cost minimization of shippers is replaced by the maximization of their utility. In our work, besides the costs, the utility functions also consider the transport time, the accessibility of a mode and the frequency of intermodal services. This last element implies that now, both the price and frequency decisions of the carrier have an influence on the shippers.
This CD-SNDP formulation then allows for a more detailed and realistic representation of the shippers' characteristics and behavior towards the prices and services designed by the carrier. To build our model, we make use of choice-based optimization; hereafter are presented some applications of this method to other transportation problems.

§.§ Choice-based optimization in transportation

The term choice-based optimization refers to optimization problems that explicitly include a discrete choice model into their formulation <cit.>. That is why works decoupling the optimization from the demand, using iterative procedures such as simulation-optimization, are not considered here (e.g., <cit.>). Although not for freight, choice-based optimization has been used in a few works to model passenger SND problems. <cit.> propose a profit maximization problem to support the design of ferry services, where the operator decides on the itineraries and schedules of the ferries. They assume that the passenger demand is split according to a logit model including two attributes: a given price, and the travel time, which is dependent on the decision variables. <cit.> also include a logit model into a profit maximization problem to design a car-sharing network. Among other things, the operator decides on the number of car-sharing stations to open. The utility function of car-sharing is composed of given rental costs and walking access costs. The latter are directly dependent on the number of opened stations. A drawback of these two models is that they are non-linear due to the exponential terms inherent to the logit model. A Mixed-Integer Linear Programming (MILP) model including a logit mode choice model is proposed by <cit.> to design passenger rail services. The main decision is the selection of lines to open. To get rid of the exponential terms of the logit model, the authors precompute the modal shares of rail for each possible solution. This precomputation technique is useful when only binary or integer variables are included in the choice model. However, as mentioned by the authors, the model can become intractable when the instance size increases.

Choice-based optimization has also been applied to facility location and pricing problems. It is used by <cit.> to set up hubs and prices for an airline company. The demand is split between companies using a logit model with price as the unique attribute. A similar modeling approach is adopted by <cit.> to locate retail stores and set selling prices. <cit.> study an intermodal dry port location and pricing problem where the route choice of shippers is determined using a logit model including six attributes, where only transport cost depends on the decision variables. The common point of these three models is that they are all non-linear: therefore, heuristics are required to solve them. In their work, <cit.> propose a general framework to deal with more advanced choice models. In particular, the authors rely on the Sample Average Approximation (SAA) principle to deal with the non-linearities of the choice model and, therefore, come up with a MILP model. The proposed model is then applied to the pricing of parking services using a Mixed logit to represent the demand. The latter comprises price as an endogenous attribute and other exogenous attributes.
<cit.> develop this framework further to model oligopolistic competition, whereas <cit.> present a non-linear cooperative game to model collaborative pricing of urban mobility.The present CD-SNDP is inspired by the work of <cit.> to integrate an existing Mixed logit model within a bilevel setting. Specifically, error terms are included in the utilities to account for the attributes that are not captured by the model but still play a role in the mode choice. Moreover, the coefficient representing cost sensitivity is considered randomly distributed to consider the heterogeneous preferences of shippers. It is assumed that the probability distributions of the error terms and the cost coefficient are known and the resulting CD-SNDP problem is solved using stochastic optimization. The addition of these more detailed behavioral attributes within SND models aim at providing a more realistic representation of shippers' reaction to the proposed services, ultimately helping intermodal carriers to make more informed design and pricing decisions.The following section provides a recap of the main characteristics of the previously reviewed bilevel SNDP models and sums up the contributions of our work. §.§ Contributions The existing bilevel models for SNDP presented in Section <ref> are sorted in Table <ref>. In particular, it shows if the models consider some competition or not and if some constraints regarding the fleet are included (e.g., size limit). It also indicates how the transport demand is modeled: most works assume that shippers are cost minimizers, while one uses price equilibrium and the last one considers demand as exogenous. Still regarding demand, it can be noticed that no existing model considers unobserved attributes that play a role in the choices of shippers. In addition, only three studies embed shippers' heterogeneity through distinct values of time (or reliability). Finally, only one work considers that frequencies also influence the demand alongside the prices endogenously in the optimization model. The proposed CD-SNDP is a generalization of the model by <cit.>. Firstly, it generalizes the network structure as cycles and services with multiple stops are now allowed. Secondly, the shippers' objective is also generalized as they do not only proceed to a minimization of their costs, but instead maximize their utilities. These utilities contain other attributes beside the costs, such as frequency, accessibility, etc. Thirdly, our formulation generalizes the representation of shippers as it can accommodate some unobserved attributes (via randomly distributed error terms) and shippers' heterogeneity (through the Mixed logit formulation). Finally, the service frequency is made endogenous to the optimization model along with the price. A summary of the aforementioned contributions can be found on the last row of Table <ref>. § METHODOLOGY As previously mentioned, the present work is inspired by the bi-level JDP formulation proposed by <cit.>. In both the JDP and the proposed model, the upper level represents the decisions of the transport operator and the lower level corresponds to the shippers. Just like in the JDP, the upper level consists of determining the frequency and price of services that maximize the operator's profit. However, the lower level now represents the utility maximization of the shippers, whereas in the JDP, it is assumed that shippers proceed to a minimization of their logistics costs. 
This change of paradigm brings additional complexity to the problem as the two decision variables of the upper level are now included in the lower level, unlike the JDP where only the price is included but not the frequency. Moreover, the competition of the transport operator is now represented as more than one alternative.

Concerning the upper level, it differs from the JDP in two aspects. Firstly, the path-based formulation of services is replaced by a cycle-based formulation. The latter is deemed more accurate to represent realistic decision-making. Indeed, most intermodal transport services go back and forth on an itinerary with a defined schedule. The cycle-based representation also enables a more elaborate representation of services, as multiple intermediary stops can be added in both directions. In addition, it simplifies the asset management of the operators. In a path-based formulation, they may need to re-balance the vehicles at the end of the planning horizon, whereas a cycle-based representation ensures that each vehicle ends up at its starting point. It is noteworthy that the arc-based pricing representation is not changed compared to the benchmark. Indeed, shippers will pay only for the transport of their cargo from its origin to its destination, and not for the whole cycle. The second difference is the addition of fleet size and cycle time feasibility constraints. The former restricts the actions of the transport operator as they do not have an infinite number of vehicles at their disposal to satisfy the demand. Moreover, the illustrative example of Section <ref> showed that it significantly reduces the obtained profits. The latter determines, for each service, the number of cycles that can be performed by one vehicle during the planning horizon given the cycle's duration. A natural consequence of these additional constraints is that a heterogeneous fleet can be considered. In the remainder of this paper, the JDP with fleet constraints will be used as the Benchmark. The benchmark with a cycle-based formulation (instead of path-based) will be further referred to as SNDP. Finally, the proposed choice-driven model, which considers utility maximization of shippers, is denoted CD-SNDP. The notations for the CD-SNDP are described in the following paragraphs.

§.§ Problem formulation

The transport network is represented as a directed graph 𝒢 = (𝒩,𝒜), where 𝒩 is the set of terminals and 𝒜 = {(i,j): i,j ∈ 𝒩, i ≠ j} the set of links between these terminals.

§.§.§ Upper level

The operator's fleet is heterogeneous, therefore the different vehicle types are denoted by the set 𝒦. The number of available vehicles per type is V_k and their capacity is Q_k. The set 𝒮 includes all transport services that can be run by the operator. Unlike the benchmark, where each service corresponds to a single arc of 𝒜, a service is composed of a sequence of arcs. Each arc in this sequence is called a leg and the whole sequence of legs for a given service s is denoted ℒ_s. The cycle-based formulation of the problem implies that the sequence starts and ends at the same node. The maximum number of cycles of service s that can be performed by vehicle type k is named W_sk: it typically consists of the maximum operating time divided by the cycle time (sum of travel time and time at terminals). Each service s has a fixed cost c^FIX_sk of operating it with vehicle type k and a variable cost c^VAR_ijsk per container transported between terminals i and j.
Moreover, we introduce the parameter δ_ijl_s, which equals one if a container traveling from i to j uses the service leg l_s and zero otherwise. The transport operator has three decision variables in the upper level problem:

* v_sk is the number of vehicles of type k that the operator allocates to each service s;
* f_sk is the frequency of each service s per vehicle type k;
* p_ij is the price per container charged to shippers requesting to transport goods from i to j.

§.§.§ Lower level

The shippers are represented as a whole, i.e. their demand is aggregated. The container transport demand between terminals i and j is denoted D_ij. Shippers decide to assign demand to the transport operator or their competitors by the maximization of their utility. The utility function of using the services proposed by the transport operator between i and j is denoted U^O_ij and is dependent on p_ij and f_sk, whereas the utility of using a competing alternative h is given as U^h_ij. Finally, the decision variables of the lower level consist in the number of containers that are assigned to the operator's services (x_ijsk) and to every competing alternative (z^h_ij). All the aforementioned sets, parameters and decision variables are listed in Table <ref>.

Notation

Sets:
𝒩 — Set of terminals (indices: i, j)
𝒜 — Set of arcs (i,j)
𝒦 — Set of vehicle types (index: k)
𝒮 — Set of potential services (index: s)
ℒ_s — Set of legs of service s ∈ 𝒮 (index: l_s)
ℋ — Set of competing alternatives (index: h)

Parameters:
V_k — Number of vehicles of type k in the operator's fleet
Q_k — Capacity of vehicle type k [TEUs]
W_sk — Maximum number of cycles of service s that can be performed by vehicle type k
c^FIX_sk — Fixed cost of operating service s with vehicle type k
c^VAR_ijsk — Variable cost of transport between i and j with service s and vehicle type k [/TEU]
δ_ijl_s — Dummy parameter equal to 1 if a container traveling from i to j uses service leg l_s, 0 otherwise
D_ij — Aggregated transport demand of shippers between i and j [TEUs]
U^O_ij — Utility of using the operator's services between i and j
U^h_ij — Utility of using competing alternative h between i and j

Variables:
v_sk — Number of vehicles of type k assigned to service s by the operator
f_sk — Frequency of service s operated with vehicle type k
p_ij — Price charged by the operator to shippers wanting to transport goods from i to j [/TEU]
x_ijsk — Cargo volume using service s operated with vehicle type k between i and j [TEUs]
z^h_ij — Cargo volume using competing alternative h between i and j [TEUs]

§.§.§ Mathematical model

The proposed SNDP is expressed as a Bi-Level Mixed Integer Problem (BLP):

(BLP) max_{v,f,p,x,z} ∑_{(i,j) ∈ 𝒜} ∑_{s ∈ 𝒮} ∑_{k ∈ 𝒦} p_ij x_ijsk - ∑_{s ∈ 𝒮} ∑_{k ∈ 𝒦} c^FIX_sk f_sk - ∑_{(i,j) ∈ 𝒜} ∑_{s ∈ 𝒮} ∑_{k ∈ 𝒦} c^VAR_ijsk x_ijsk
s.t.
∑_{s ∈ 𝒮} v_sk ≤ V_k, ∀k ∈ 𝒦
f_sk ≤ W_sk v_sk, ∀s ∈ 𝒮, ∀k ∈ 𝒦
∑_{(i,j) ∈ 𝒜} δ_ijl_s x_ijsk ≤ Q_k f_sk, ∀l_s ∈ ℒ_s, ∀s ∈ 𝒮, ∀k ∈ 𝒦
x_ijsk ≤ ∑_{l_s ∈ ℒ_s} δ_ijl_s D_ij, ∀(i,j) ∈ 𝒜, ∀s ∈ 𝒮, ∀k ∈ 𝒦
p_ij ≥ 0, ∀(i,j) ∈ 𝒜
v_sk, f_sk ∈ ℕ, ∀s ∈ 𝒮, ∀k ∈ 𝒦

where x and z solve:

max_{x,z} ∑_{(i,j) ∈ 𝒜} ( ∑_{s ∈ 𝒮} ∑_{k ∈ 𝒦} U^O_ij x_ijsk + ∑_{h ∈ ℋ} U^h_ij z^h_ij )
s.t.
∑_{s ∈ 𝒮} ∑_{k ∈ 𝒦} x_ijsk + ∑_{h ∈ ℋ} z^h_ij = D_ij, ∀(i,j) ∈ 𝒜
x_ijsk ≥ 0, ∀(i,j) ∈ 𝒜, ∀s ∈ 𝒮, ∀k ∈ 𝒦
z^h_ij ≥ 0, ∀(i,j) ∈ 𝒜, ∀h ∈ ℋ

At the upper level, the objective function of the transport operator (<ref>) is to maximize their profit. It is computed as the revenues from the transported containers minus the fixed and variable costs of the offered services. Constraint (<ref>) is the fleet size constraint for each vehicle type.
Constraint (<ref>) ensures that the service's frequency does not exceed the maximum number of cycles that can be performed by the assigned vehicles. Constraint (<ref>) ensures that the total number of containers transported on each leg of every service does not exceed the available capacity of the service, whereas constraint (<ref>) ensures that no container can be assigned to a service that does not go through the origin or destination terminal of the container. The domains of the operator's decision variables are defined by constraints (<ref>)-(<ref>).

Regarding the lower level, shippers seek to maximize their utility (<ref>) by assigning their containers either to the operator's services or to the competition. Constraint (<ref>) enforces the total transport demand to be met. Finally, constraints (<ref>)-(<ref>) define the domain of the decision variables of the shippers.

§.§ Model transformation

The proposed bi-level problem can be reformulated as a single-level problem and then linearized; for more details on these procedures, the reader is referred to <cit.>. For the reformulation, additional variables λ_ij, ∀ (i,j) ∈𝒜, are introduced: they represent the dual variables related to constraints (<ref>). The model can then be transformed using the Karush-Kuhn-Tucker conditions. After this process, the following constraints appear:

λ_ij ≤ - U^O_ij  ∀(i,j) ∈𝒜
λ_ij ≤ - U^h_ij  ∀(i,j) ∈𝒜, ∀h ∈ℋ
(-U^O_ij - λ_ij) ∑_s ∈𝒮∑_k ∈𝒦 x_ijsk = 0  ∀(i,j) ∈𝒜
(-U^h_ij - λ_ij) z^h_ij = 0  ∀(i,j) ∈𝒜, ∀h ∈ℋ

Note that constraints (<ref>) and (<ref>) are non-linear. To address these, the big-M technique is used and binary variables are introduced: y^I_ij and y^II_ij for constraint (<ref>); y^Ih_ij and y^IIh_ij for constraint (<ref>).

The only remaining non-linear expression is the first term of the operator's objective function (<ref>). To remedy this, the strong duality theorem can be applied to the lower level problem (<ref>)-(<ref>), as in the work of <cit.>. At optimality, we have:

-∑_(i,j) ∈𝒜 ( ∑_s ∈𝒮∑_k ∈𝒦 U^O_ij x_ijsk + ∑_h ∈ℋ U^h_ij z^h_ij ) = ∑_(i,j) ∈𝒜 D_ij λ_ij

In addition, the following form is considered for the utility function of the transport operator:

U^O_ij = U̅^O_ij + β_c p_ij + β_f f_ij = U̅^O_ij + β_c p_ij + β_f ∑_s ∈𝒮∑_k ∈𝒦 ϕ_ijs f_sk

where U̅^O_ij is the part of the utility depending on attributes exogenous to the model, β_c and β_f are the coefficients weighting the importance of price and frequency, respectively, in the utility function, and ϕ_ijs is a dummy equal to one if both terminals i and j are contained in service s and zero otherwise. Then, using equations (<ref>) and (<ref>), the first term in (<ref>) can be expressed as:

∑_s ∈𝒮∑_k ∈𝒦 p_ij x_ijsk = -1/β_c ( D_ij λ_ij + ∑_h ∈ℋ U^h_ij z^h_ij + ∑_s ∈𝒮∑_k ∈𝒦 U̅^O_ij x_ijsk + β_f ∑_s ∈𝒮∑_k ∈𝒦 ϕ_ijs f_sk x_ijsk )

Because we now have the term f_sk x_ijsk, it may look like one non-linear expression has simply been replaced by another. This new term is nevertheless more convenient, as the range of the frequency is much more limited than that of the price. The frequency can then conveniently be represented in base 2: f_sk = ∑_b = 0^B_sk-1 2^b f_skb, where the f_skb are binary variables and B_sk = ⌈log_2(W_sk V_k+1) ⌉. The product term in (<ref>) can ultimately be linearized using the well-known technique for the product of a binary and a continuous variable, sketched below.
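The sketch below checks the two devices just described on toy numbers: the base-2 expansion of the frequency with B_sk = ⌈log_2(W_sk V_k + 1)⌉ bits, and the three inequalities that recover the product a = x·f_b of a binary f_b and a continuous x ∈ [0, D] exactly. It is a self-contained illustration with hypothetical values, not code from the paper.

```python
from math import ceil, log2

def n_bits(W_sk: int, V_k: int) -> int:
    """B_sk: bits so that sum_b 2^b * f_skb can represent any
    frequency between 0 and W_sk * V_k."""
    return ceil(log2(W_sk * V_k + 1))

def linearized_product(x: float, f_b: int, D: float) -> float:
    """Value a = x * f_b pinned down by the envelope
    a <= D*f_b,  a <= x,  a >= x - D*(1 - f_b),  with 0 <= x <= D."""
    assert f_b in (0, 1) and 0.0 <= x <= D
    a = min(D * f_b, x)                   # the two upper inequalities, tight
    assert a >= x - D * (1 - f_b) - 1e-9  # the lower inequality holds
    return a

# Every feasible frequency is representable with B bits:
W, V = 5, 24
B = n_bits(W, V)                          # ceil(log2(121)) = 7
assert all(f <= 2 ** B - 1 for f in range(W * V + 1))

# The envelope reproduces the bilinear term exactly:
for f_b in (0, 1):
    for x in (0.0, 3.5, 10.0):
        assert linearized_product(x, f_b, D=10.0) == x * f_b
```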
In the model, the variable representing this product is denoted a_ijskb. The final MILP is then formulated as follows:

(𝐌𝐈𝐋𝐏) max_v,f,p,x,z ∑_(i,j) ∈𝒜 -1/β_c ( D_ij λ_ij + ∑_h ∈ℋ U^h_ij z^h_ij + ∑_s ∈𝒮∑_k ∈𝒦 U̅^O_ij x_ijsk + β_f ∑_s ∈𝒮∑_k ∈𝒦 ϕ_ijs ∑_b = 0^B_sk-1 2^b a_ijskb ) - ∑_s ∈𝒮∑_k ∈𝒦 c^FIX_sk f_sk - ∑_(i,j) ∈𝒜∑_s ∈𝒮∑_k ∈𝒦 c^VAR_ijsk x_ijsk
s.t. constraints (<ref>) - (<ref>) & (<ref>) - (<ref>)
f_sk = ∑_b = 0^B_sk-1 2^b f_skb  ∀s ∈𝒮, ∀k ∈𝒦
a_ijskb ≤ D_ij f_skb  ∀(i,j) ∈𝒜, ∀s ∈𝒮, ∀k ∈𝒦, ∀b ∈ℬ
a_ijskb ≤ x_ijsk  ∀(i,j) ∈𝒜, ∀s ∈𝒮, ∀k ∈𝒦, ∀b ∈ℬ
a_ijskb ≥ x_ijsk - D_ij(1-f_skb)  ∀(i,j) ∈𝒜, ∀s ∈𝒮, ∀k ∈𝒦, ∀b ∈ℬ
λ_ij ≤ - (U̅^O_ij + β_c p_ij + β_f ∑_s ∈𝒮∑_k ∈𝒦 ϕ_ijs f_sk)  ∀(i,j) ∈𝒜
-(U̅^O_ij + β_c p_ij + β_f ∑_s ∈𝒮∑_k ∈𝒦 ϕ_ijs f_sk) - λ_ij ≤ M^I_ij y^I_ij  ∀(i,j) ∈𝒜
∑_s ∈𝒮∑_k ∈𝒦 x_ijsk ≤ M^II_ij y^II_ij  ∀(i,j) ∈𝒜
y^I_ij + y^II_ij ≤ 1  ∀(i,j) ∈𝒜
λ_ij ≤ - U^h_ij  ∀(i,j) ∈𝒜, ∀h ∈ℋ
-U^h_ij - λ_ij ≤ M^Ih_ij y^Ih_ij  ∀(i,j) ∈𝒜, ∀h ∈ℋ
z^h_ij ≤ M^IIh_ij y^IIh_ij  ∀(i,j) ∈𝒜, ∀h ∈ℋ
y^Ih_ij + y^IIh_ij ≤ 1  ∀(i,j) ∈𝒜, ∀h ∈ℋ
f_skb ∈{0,1}  ∀s ∈𝒮, ∀k ∈𝒦, ∀b ∈ℬ
a_ijskb ∈ℕ  ∀(i,j) ∈𝒜, ∀s ∈𝒮, ∀k ∈𝒦, ∀b ∈ℬ
y^I_ij, y^II_ij, y^Ih_ij, y^IIh_ij ∈{0,1}  ∀(i,j) ∈𝒜
λ_ij ≥ 0  ∀(i,j) ∈𝒜

§.§ Stochastic formulation

In equation (<ref>), we specified the generic form of the utility function U^O_ij. In particular, it was assumed to be fully deterministic, but in reality this is not the case. Firstly, the utility traditionally contains a random term ϵ (also called the error term), representing the unobserved attributes playing a role in the mode choice. Secondly, one or several β coefficients can be assumed to be randomly distributed, to account for heterogeneous preferences. Note that these remarks also hold for U^h_ij. With these considerations, the CD-SNDP model becomes a stochastic optimization problem, and we adopt a sample average approximation (SAA) formulation to solve it. A sample ℛ is made of R independent realizations. To generate a realization r, a draw is performed from the distribution of each random parameter and the corresponding utility functions are computed. For each realization, different utility functions U^O_ijr and U^h_ijr are obtained: this impacts the mode choice, so that the shippers' decision variables x_ijskr and z^h_ijr become dependent on the sampling. This is also true for the dual variables λ_ijr. Nevertheless, the decision variables of the transport operator are fixed once and for all, independently of the sampling. Finally, the objective function is modified so that the average profit over all realizations is maximized.

§.§ Predetermination heuristic

To reduce the solving time of the stochastic formulations, we propose a predetermination heuristic. As its name suggests, it consists of determining the operator's utility based on given price and frequency values before the optimization. To compute the operator's utility, discrete sets of predefined prices 𝒫 and frequencies ℱ are considered. It is also assumed that the sampling of the shippers' population has already been performed, so that the utilities of the competing alternatives U^h_ijr can be computed. Along with the predefined prices p and frequencies ψ, these allow us to pre-compute several values for each OD pair: the demand faced by the operator d^ψ p_ij, the resulting profit π^ψ p_ij, and the price generating the most profit for a given frequency, P^ψ_ij. Algorithm <ref> shows the steps to obtain these values. To compute the profit, the fixed and variable costs are needed per OD pair ij; however, the available cost parameters are expressed per service s and vehicle type k.
The costs per OD pair therefore need to be estimated using the following formulas:

ĉ^FIX_ij = ∑_s' ∈𝒮' ϕ_ijs' ( min_k ( c^FIX_s'k/2 ) )
ĉ^VAR_ij = ∑_s' ∈𝒮' ϕ_ijs' ( min_k c^VAR_ijs'k )

Here, we consider only services with 2 legs: 𝒮' = {s ∈𝒮 : |ℒ_s|=2} (i.e. direct services between two terminals i and j). For the fixed cost estimation ĉ^FIX_ij, we select the fixed cost of the corresponding service for the cheapest vehicle type and divide it by two (to get the cost for only one service leg). Since the variable cost parameters are already expressed per OD pair, we simply select the variable cost of the corresponding service for the cheapest vehicle type as our estimation ĉ^VAR_ij.

Once Algorithm <ref> has been used to compute the demand values d^ψ p_ij and the price values P^ψ_ij, they can be used as parameters of an auxiliary optimization problem (AP). This problem consists of determining, for a given sample ℛ, the optimal frequencies for fixed prices p̃_ij:

(𝐀𝐏) max_v,f,g,x,z 1/|ℛ| ∑_r ∈ℛ ( ∑_(i,j) ∈𝒜∑_s ∈𝒮∑_k ∈𝒦 p̃_ij x_ijskr - ∑_s ∈𝒮∑_k ∈𝒦 c^FIX_sk f_sk - ∑_(i,j) ∈𝒜∑_s ∈𝒮∑_k ∈𝒦 c^VAR_ijsk x_ijskr )
s.t.
∑_s ∈𝒮 v_sk ≤ V_k  ∀k ∈𝒦
f_sk ≤ W_sk v_sk  ∀s ∈𝒮, ∀k ∈𝒦
∑_r ∈ℛ ∑_(i,j) ∈𝒜 δ_ijl_s x_ijskr/|ℛ| ≤ Q_k f_sk  ∀l_s ∈ℒ_s, ∀s ∈𝒮, ∀k ∈𝒦
x_ijskr ≤ ∑_l_s ∈ℒ_s δ_ijl_s D_ij  ∀(i,j) ∈𝒜, ∀s ∈𝒮, ∀k ∈𝒦, ∀r ∈ℛ
∑_s ∈𝒮∑_k ∈𝒦 x_ijskr + z_ijr = D_ij  ∀(i,j) ∈𝒜, ∀r ∈ℛ
∑_ψ∈ℱ g_ijψ ≤ 1  ∀(i,j) ∈𝒜
∑_s ∈𝒮∑_k ∈𝒦 ϕ_ijs f_sk = ∑_ψ∈ℱ ψ g_ijψ  ∀(i,j) ∈𝒜
∑_r ∈ℛ∑_s ∈𝒮∑_k ∈𝒦 x_ijskr/|ℛ| ≤ ∑_ψ∈ℱ g_ijψ d^ψp̃_ij  ∀(i,j) ∈𝒜
v_sk ∈ℕ  ∀s ∈𝒮, ∀k ∈𝒦
f_sk ∈ℕ  ∀s ∈𝒮, ∀k ∈𝒦
g_ijψ ∈{0,1}  ∀(i,j) ∈𝒜, ∀ψ∈ℱ
x_ijskr ≥ 0  ∀(i,j) ∈𝒜, ∀s ∈𝒮, ∀k ∈𝒦, ∀r ∈ℛ
z_ijr ≥ 0  ∀(i,j) ∈𝒜, ∀r ∈ℛ

This auxiliary problem contains additional elements that deserve some discussion. First, the objective (<ref>) is now formulated as an SAA function, and the decision variables x and z depend on r; constraints (<ref>) to (<ref>) are modified accordingly. A new binary variable g_ijψ is introduced: it is equal to one if the predefined frequency ψ is chosen for OD pair (i,j), and zero otherwise. Constraint (<ref>) ensures that at most one frequency ψ is chosen per OD pair. The value of ψ is then linked to the services' frequency decision variable f through constraint (<ref>). Finally, constraint (<ref>) aggregates the decision variables x_ijskr of cargo assigned to the operator over the whole sample and bounds it by the precomputed demand d^ψ p_ij defined in Algorithm <ref>. This last constraint keeps the utility functions out of the optimization problem. As a result, the variable z_ijr is now independent of the competing modes h. Once the optimization is performed, the corresponding value of z^*_ijr can be assigned to the competing mode h'_ijr with the maximum utility, as computed in Algorithm <ref>.

Removing the utilities and the pricing decision from the optimization considerably decreases the solving time. Indeed, the variables p_ij, f_skb, a_ijskb, λ_ij and the y_ij's are no longer needed, and only the variables g_ijψ are added; the number of constraints is also drastically reduced. The idea of the heuristic is to exploit this advantage to solve the auxiliary problem iteratively, as described in Algorithm <ref>. The performance of the heuristic is highly dependent on the size of the sets 𝒫 and ℱ: the more values they contain, the better the approximation, at the cost of additional computational resources. A sketch of the precomputation step is given below.
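The following sketch is one plausible reading of the precomputation step (Algorithm <ref>) for a single OD pair, under the linear utility form (<ref>) and simulated choice by utility maximization; the profit expression, function names and numeric values are our assumptions, since the paper's pseudocode is not reproduced here.

```python
import numpy as np

def precompute(D, U_bar, beta_c, beta_f, U_comp, c_fix, c_var, prices, freqs):
    """Hypothetical Algorithm-1-style precomputation for one OD pair.

    U_comp: array (R, H) of sampled competitor utilities U^h_ijr.
    Returns d[psi][p] (demand), pi[psi][p] (profit proxy) and
    P[psi] (profit-maximising price for each predefined frequency psi).
    """
    best_comp = U_comp.max(axis=1)              # best alternative, per draw r
    d, pi, P = {}, {}, {}
    for psi in freqs:
        d[psi], pi[psi] = {}, {}
        for p in prices:
            u_op = U_bar + beta_c * p + beta_f * psi
            share = np.mean(u_op >= best_comp)  # choice by utility maximisation
            d[psi][p] = share * D
            # profit proxy: margin on captured demand minus fixed cost
            pi[psi][p] = (p - c_var) * d[psi][p] - c_fix * psi
        P[psi] = max(prices, key=lambda q: pi[psi][q])
    return d, pi, P

rng = np.random.default_rng(0)
U_comp = rng.gumbel(size=(1000, 3)) - 2.0       # sampled U^h_ijr, R=1000, H=3
d, pi, P = precompute(D=500.0, U_bar=0.0, beta_c=-0.01, beta_f=0.05,
                      U_comp=U_comp, c_fix=10.0, c_var=50.0,
                      prices=range(0, 501), freqs=range(0, 36))
print(P[10], d[10][P[10]])                      # best price at frequency 10
```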
In practice, these sets should ensure good coverage of the search space for the heuristic to return satisfactory solutions.

§ CASE STUDY

The proposed CD-SNDP is applied to container transport on a small network of 3 nodes: Rotterdam (RTM), Duisburg (DUI) and Bonn (BON). We consider an inland vessel operator competing with two other modes (Road and Rail) and another IWT carrier.

§.§ Description

The operator's fleet is composed of 2 vessel types: 24 vessels of type M8 and 12 vessels of type M11, with a maximal capacity of 180 TEUs and 300 TEUs, respectively. Each vessel type has a maximal operating time, T^max, of 120 hours per week. The transport demand inputs are based on the NOVIMOVE project <cit.>, whereas the costs for IWT and the two competing modes, as well as the sailing time t^sail and the time spent in ports t^port, are estimated using the model of <cit.>. Based on these inputs, the computation of the maximum number of cycles is straightforward: W_sk=T^max/(t^sail_sk+t^port_sk).

Regarding shippers, the utility functions are formulated as follows:

U^O_ij = α^IWT + β_a^Inter a^IWT_ij + β_q^IWT q_ij + β_c^Inter (p_ij+VoT t^IWT_ij) + β_f^Inter f_ij + ϵ^O_ij
U_ij^h=IWT = α^IWT + β_a^Inter a^IWT_ij + β_q^IWT q_ij + β_c^Inter (p^IWT_ij+VoT t^IWT_ij) + β_f^Inter f^IWT_ij + ϵ^IWT_ij
U_ij^h=Rail = α^Rail + β_a^Inter a_ij^Rail + β_c^Inter (p^Rail_ij+VoT t^Rail_ij) + β_f^Inter f_ij^Rail + ϵ^Rail_ij
U_ij^h=Road = α^Road + β_a^Road a_ij^Road + β_c^Road (p^Road_ij+VoT t^Road_ij) + ϵ^Road_ij

where, for each mode, α is the alternative-specific constant, a is an accessibility metric, t is the total travel time in hours, p is the cost for shippers in thousands of euros per TEU, and f is the weekly frequency for intermodal transports (i.e. IWT and Rail). Moreover, q_ij is a dummy equal to one if a seaport is located at i or j, and VoT is the Value of Time, expressed in thousands of euros per TEU per hour. Each attribute is weighted by a coefficient β, and each mode has a random error term ϵ. Although they have similarities, it is assumed that the vessel operator and IWT carrier alternatives are not correlated; the same assumption holds between all alternatives. Therefore, in the remainder of this work, all the error terms ϵ_ij are considered independent and identically distributed (iid).

Within the CD-SNDP context, all the terms contained in the utilities of the competing modes (IWT, Rail and Road) are exogenous to the optimization model and are thus treated as parameters. Regarding the utility of the operator, only p_ij and f_ij are endogenous, while the other terms are also parameters: p_ij is the pricing decision variable, and f_ij corresponds to the term ∑_s ∈𝒮∑_k ∈𝒦 ϕ_ijs f_sk introduced in equation (<ref>).

The model's coefficients were estimated with aggregate data using a Weighted Logit methodology. It is named weighted because, during the estimation, the log-likelihood function is weighted by the yearly cargo flows on each OD pair <cit.>; it thus gives more importance to the OD pairs with high volumes. For more details, the reader is referred to <cit.>. One noteworthy characteristic of the data on which the coefficients were estimated is that the frequency for IWT does not exceed 35 services per week. Therefore, the following constraint is added to our CD-SNDP problem to guarantee consistency between the results and the mode choice model:

f_sk ≤ 35  ∀s ∈𝒮, ∀k ∈𝒦

We use this case study to compare the results of 3 deterministic and 2 stochastic models.
The former are the benchmark, the SNDP and the CD-SNDP, which uses only the deterministic part of the utility functions in equations (<ref>) to (<ref>), without error terms ϵ. The latter two are stochastic variations of the CD-SNDP:

* Multinomial Logit (MNL): with iid error terms ϵ, following an Extreme Value distribution;
* Mixed Logit: random β_c^Inter following a Lognormal distribution with parameters μ_c^Inter and σ_c^Inter (representing the heterogeneous cost sensitivity of shippers), together with iid error terms ϵ.

§.§ Evaluation through out-of-sample simulation

In order to assess the solutions returned by these models, we simulate the demand response using an out-of-sample population. Indeed, the profit returned by the optimization is the one expected based on the SAA and the model's assumptions, but it gives no indication of how well the solution will perform with actual shippers. This out-of-sample simulation also allows us to compare the different models with each other. The procedure is as follows (a sketch of this evaluation loop is given below, after the discussion of the utility coefficients):

* For each OD pair, generate a population of 1000 shippers (i.e. perform 1000 draws of ϵ and β_c^Inter; note that these draws are different from the ones used in the SAA) and divide the demand D_ij equally among the shippers;
* For each shipper, compute their utilities by plugging the drawn ϵ and β_c^Inter, as well as the frequencies and prices returned by the model, into equations (<ref>) to (<ref>);
* Allocate the shipper's containers to the alternative with the maximal utility;
* When all containers have been allocated, compute the resulting modal shares and the actual profit for the inland vessel operator.

§.§ Coefficients of utility functions

For the out-of-sample simulation, we directly make use of the coefficients of the Weighted Logit Mixture model estimated in <cit.>. However, these true utility functions of the shippers are not known by the operator; the same coefficients cannot, therefore, be used in the CD-SNDP. To alleviate this issue, we use the Weighted Logit Mixture to generate synthetic choice data, from which utility coefficients can be estimated by the operator. This process ensures that the true utility functions remain hidden from the operator, who only has access to the choice realizations of shippers.

The available inputs are the OD matrices and the attributes related to IWT, Rail and Road on each OD pair along the European Rhine-Alpine (RA) corridor. To generate a choice instance for a given OD pair using the Weighted Logit Mixture, we first draw the value of β_c^Inter and each mode's ϵ from their respective distributions. Then, they are plugged, along with each mode's attributes, into equations (<ref>) to (<ref>). Finally, the mode with the highest utility is selected, and we obtain one synthetic choice instance. This process is then repeated for all OD pairs. To remain consistent with the Weighted Logit methodology, the number of generated choice instances per OD pair is set proportional to its cargo volume. In particular, each OD pair gets at least one choice instance, and an additional instance is generated per 10,000 TEUs circulating yearly on the OD pair. As a result, we end up with a synthetic dataset composed of 8676 choice instances, from which the MNL and Mixed Logit models can be estimated.

The coefficients of the Weighted Logit Mixture model are presented in Table <ref>, along with the coefficients of the Mixed Logit and MNL estimated using the synthetic dataset (note that α_IWT is normalized to zero) and the mean value of β_c^Inter.
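The sketch below illustrates the out-of-sample evaluation above for a single OD pair, assuming the true population follows the Weighted Logit Mixture (iid Gumbel error terms and a lognormally distributed, negative cost coefficient). Competitor utilities are folded into a fixed part plus a price term, costs are omitted (so the operator's revenue rather than profit is returned), and all names and numbers are illustrative assumptions.

```python
import numpy as np

def out_of_sample(D, p, f, U_bar_op, beta_f, comp, mu_c, sigma_c,
                  n_shippers=1000, seed=1):
    """Simulate 1000 shippers choosing the max-utility mode for one OD pair.

    comp: list of (fixed utility part, price) for each competing mode h.
    Returns modal shares (operator first) and the operator's revenue.
    """
    rng = np.random.default_rng(seed)
    H = len(comp)
    beta_c = -rng.lognormal(mu_c, sigma_c, n_shippers)  # heterogeneous, < 0
    eps = rng.gumbel(size=(n_shippers, H + 1))          # iid error terms
    u = np.empty((n_shippers, H + 1))
    u[:, 0] = U_bar_op + beta_c * p + beta_f * f + eps[:, 0]
    for h, (u_bar_h, p_h) in enumerate(comp):
        u[:, h + 1] = u_bar_h + beta_c * p_h + eps[:, h + 1]
    choice = u.argmax(axis=1)            # each shipper carries D / n_shippers
    shares = np.bincount(choice, minlength=H + 1) / n_shippers
    return shares, shares[0] * D * p     # operator share and revenue

# Prices in thousands of euros per TEU, matching the utility specification.
shares, rev = out_of_sample(D=500.0, p=0.18, f=20, U_bar_op=0.0, beta_f=0.05,
                            comp=[(-0.5, 0.15), (-1.0, 0.12), (0.2, 0.20)],
                            mu_c=1.0, sigma_c=0.5)
print(shares, rev)
```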
§.§ Deterministic results

In the remainder of this section, we present and discuss the results of these various models, starting with the deterministic ones.

§.§.§ CD-SNDP vs. Benchmark and SNDP

The weekly frequencies for both vessel types and the charged prices are reported in Table <ref>. In order to better understand the pricing decision, the table also displays the prices of the competing alternatives. For the benchmark and the SNDP, the optimal prices are set at the same level as the cheapest competing alternative (in our case, IWT). This is due to the assumption that shippers are purely cost-minimizers and to the deterministic nature of the models: if the vessel operator charges just 0.001 less than the cheapest alternative, then the models will consider that all shippers choose the services of the vessel operator instead of the competition. In the CD-SNDP, shippers are assumed to consider other attributes besides cost in their mode choice: the optimal prices then differ from the cheapest alternative.

Regarding the optimal frequencies, allowing services to visit more than 2 terminals provides additional flexibility to the SNDP compared to the benchmark. The SNDP takes advantage of this consolidation opportunity, which results in higher expected profits. Figure <ref> displays the expected profits versus the actual ones returned by the out-of-sample simulation: it shows that the SNDP also returns higher simulated profits. The reason is that the vessel operator is able to attract more demand with this 3-stop service. This is seen in Figure <ref>, which represents the expected and actual modal shares for each deterministic model.

For the CD-SNDP, the expected profits drop compared to the two other models. This is because the OD pair Duisburg-Bonn is no longer served by the vessel operator. The distance between these two terminals is relatively short, so, as the CD-SNDP takes multiple factors into consideration for the mode choice of shippers, Road becomes the preferred option for this OD pair. Although the vessel operator gets smaller market shares than with the benchmark and the SNDP (see Figure <ref>), the CD-SNDP returns actual profits that are more than 2.5 times higher. Since the choice-driven model also considers frequency in the mode choice, it is able to charge much more on the OD pair Rotterdam-Duisburg, due to the very high proposed frequency (42 services per week) between these two terminals.

These deterministic results already suggest that significant gains can be achieved with the Choice-Driven SNDP: more efficient services and pricing can be designed, resulting in considerably increased profits.

§.§ Stochastic variants

In this section, the results of the stochastic versions of the CD-SNDP are described. Two random utility formulations are compared: MNL (with random error terms ϵ) and Mixed Logit (with ϵ and distributed cost sensitivity β_c^Inter). First, we present the results obtained with the exact method, and then those obtained with the predetermination heuristic.

§.§.§ Exact method

The two stochastic variants are solved through SAA with a sample size of R = 1000, i.e. a thousand draws are performed from the distributions of ϵ (and of β_c^Inter, for the Mixed Logit). For each variant, we run 20 replications with 20 different random seeds, thus generating 20 different samples. The aggregated statistics of the obtained solutions and computation time are presented in Table <ref>.
Note that a time limit of 48 hours has been applied, which is why the statistics of the optimality gap are also presented. The frequency decision remains identical across all 20 replications of both stochastic variants. The pricing decision, on the other hand, varies considerably from one replication to another. The variation is slightly more pronounced for the MNL case than for the Mixed Logit, but the main takeaway is that the MNL results in higher prices than the Mixed Logit. Also, both variants find higher prices than the deterministic CD-SNDP.

The influence on the expected and actual profits is depicted in the boxplots of Figure <ref>. The higher prices set by the MNL lead to greater expected profits compared to the Mixed Logit. However, this difference is cancelled out when comparing the simulated profits, as the MNL profits fall slightly below those of the Mixed Logit. Nevertheless, the actual profits for both variants are substantially higher than for the deterministic CD-SNDP; in fact, they provide an additional 40% gain compared to the deterministic case. This is because, thanks to the more detailed choice models, the modal shares can be estimated much better during the optimization. Indeed, Figure <ref> shows the average expected modal shares against the actual ones: these shares are very close to each other for both the MNL and the Mixed Logit, whereas the deterministic model greatly overestimates the share of the vessel operator.

This also explains why no larger vessels are operated in either stochastic variant, in contrast with the deterministic case. Due to the very high operator share estimated in the latter, small vessels alone are not sufficient to meet the demand. The stochastic solutions prefer to operate only smaller vessels, as the operating costs are lower and the service capacity is sufficient to accommodate all the incoming demand.

Comparing the MNL with the Mixed Logit, the accuracy of their modal share estimation is nearly equivalent, but the MNL tends to slightly overestimate the operator's share during the optimization process. This is why its expected profits in Figure <ref> are significantly higher than the actual ones. On the other hand, the expected profits with the Mixed Logit are in line with the actual ones. In the end, the actual profits for both the MNL and the Mixed Logit end up at a similar level. However, the large optimality gaps reported for the Mixed Logit in Table <ref> prevent any conclusion at this stage: indeed, no replication was able to terminate within the 48-hour limit. Even though the addition of stochasticity to the CD-SNDP provides further gains, it comes at the expense of computing time. In order to remedy this, we make use of the predetermination heuristic presented in Section <ref>.

§.§.§ Predetermination heuristic

The two stochastic variants are solved using the same samples as for the exact method. We use the set of predefined prices 𝒫=ℕ∩ [0,500] and the set of predefined frequencies ℱ=ℕ∩ [0,35], in accordance with (<ref>). The statistics of the heuristic solutions are reported in Table <ref>, together with the computation time.

Compared to the exact method, the predetermination heuristic is remarkably faster: whereas the computation took on the order of days with the exact method, it is now reduced to a few minutes. Most of these minutes are spent precomputing the demand and price values with Algorithm <ref>.
With the heuristic, there is also little difference in solving time between the two stochastic variants.

Regarding the quality of the solutions, the prices found with the heuristic are consistent with those returned by the exact method. However, there is some variation in the resulting frequencies. With the exact method, the optimal solution for all replications with both choice models was to deploy all the small vessels on the 3-stop service and not to use the larger vessels. With the heuristic, small vessels are still concentrated on the 3-stop service, but some are also deployed on the service between Duisburg and Bonn. Some larger vessels are also used on the services between Rotterdam and Duisburg and between Duisburg and Bonn, but not between Rotterdam and Bonn, because the precomputed profits decrease as the frequency increases on this OD pair. The solution of the heuristic makes better use of the operator's fleet: it allows the operator to attract more demand on certain OD pairs thanks to the increased frequencies, but additional costs are incurred. The comparison between the profits obtained with the exact method and with the predetermination heuristic is shown in Figure <ref>.

The profit ranges found by the heuristic are similar to those of the exact method. Even with increased frequencies (generating additional costs) and similar price ranges, the profits remain stable, because the increased frequencies attract more demand, compensating for the incurred costs. We still observe a significant gap between the expected and actual profits in the MNL case, whereas these two values are at a similar level for the Mixed Logit. This is because the MNL used in the CD-SNDP has a much lower cost coefficient β_c^Inter in absolute value than the actual population (see Table <ref>), while the Mixed Logit has coefficients that are more in line with the actual population. The cost sensitivity of the shippers is thus underestimated by the MNL, which results in prices that are higher than with the Mixed Logit. The CD-SNDP with MNL then expects high profits to be realized, whereas in reality there will be less demand than expected due to the higher prices, resulting in a profit loss.

However, the actual profits of the MNL are slightly higher with the heuristic than with the exact method. This is due to the frequencies on the Rotterdam-Duisburg service being generally slightly higher with the MNL than with the Mixed Logit. Since the actual shipper population has a frequency coefficient β_f^Inter higher than both the MNL and the Mixed Logit (see Table <ref>), the decision to propose additional frequency pays off.

So far, we observe that the heuristic is able to find solutions performing at least as well as the exact method. We can therefore exploit the fast solving time of the heuristic to increase the number of draws from 1000 to 5000. Table <ref> reports the solutions' statistics with a sample size of R = 5000.

There is a direct proportionality between the computation time and the number of random draws, suggesting a linear computational complexity of the predetermination heuristic. This would allow the number of draws to be increased further, but memory issues can occur, as the precomputation step requires storing a large number of values before the optimization.
Nevertheless, moving to 5000 draws improves the accuracy of the returned solutions, as the interval between the minimal and maximal values of the decision variables becomes tighter.

As depicted in Figure <ref>, increasing the number of draws also leads to more accurate expected and actual profits, since the confidence interval is tighter with 5000 draws than with 1000 draws. Stated differently, increasing the number of draws makes the heuristic less sensitive to changes of the sample, leading to more stable solutions. Additionally, the actual profits improve when more draws are performed. There is still a significant drop between the expected and actual profits in the MNL case, because the model underestimates the cost sensitivity of shippers compared to reality. For the Mixed Logit, there is a noticeable increase in the actual profits compared to the expected ones. Referring back to Table <ref>, the average value of the cost coefficient β̅_c^Inter is higher in absolute value for the Mixed Logit than in the actual population. This means that the CD-SNDP with Mixed Logit tends to overestimate the cost sensitivity of shippers compared to reality. Therefore, it is more cautious in pricing but is in the end able to attract more demand than anticipated, which results in increased profits.

§.§ Key insights

Several takeaways can be gathered from the results presented above. First, a cycle-based formulation of the SND problem (with multiple stops allowed) is more efficient in terms of asset usage, as the operator can exploit consolidation opportunities. This results in both reduced costs and increased demand. The mathematical expression of services is less straightforward than with a path-based formulation, due to the addition of service legs, but the improved results justify this effort.

Secondly, it is highly beneficial for the transport operator to include the information they have about the demand when designing their services. The results have shown that, even with a simple deterministic model, the solution of the CD-SNDP is able to generate actual profits that are nearly three times higher than the benchmark. This is because the benchmark's assumption that shippers are purely cost-minimizers neglects other attributes that still play a role in the decision-making of shippers, such as the service frequencies. The utility functions also capture the trade-off between these attributes through the weighting coefficients.

Thirdly, the stochastic CD-SNDP exploits the potential of the model even further. Indeed, perfect and complete information about the shippers is not available to the operator, so their demand model will miss some aspects that play a role in the shippers' choices. These aspects can be accounted for indirectly by adding random error terms to the model. Including this uncertainty in the model enables gains of almost 50% compared to the deterministic CD-SNDP. Therefore, the stochastic formulation of the CD-SNDP is a convenient way to account for imperfect information endogenously in the model.

Finally, quantifying and incorporating the heterogeneous preferences of shippers allows for a more accurate estimation of the profits. Indeed, except for the stochastic CD-SNDP with Mixed Logit, all models presented above substantially overestimate the profits. This can lead to unpleasant surprises for the operator if they expect a given amount of profit in their budget but end up realizing much less.
On the other hand, the formulation with Mixed Logit expects profits in line with (or even lower than) those that are realized. Considering heterogeneity thus allows a better forecast of the profits.

§ CONCLUSION

This work proposes a Service Network Design and Pricing problem that incorporates the mode choice behavior of shippers. To this end, we develop a so-called Choice-Driven Service Network Design and Pricing problem that directly includes utility-based mode choice models into a bilevel optimization problem, which can then be reformulated as a single-level linear problem. The random nature of utility-based models, such as the Multinomial Logit, makes it possible to account for missing information about attributes playing a role in the mode choice. Opting for a Mixed Logit formulation further allows the heterogeneous preferences of shippers to be considered, yielding a more realistic representation of the shipper population. Due to the randomness, the problem becomes stochastic, which makes it computationally expensive to solve with an exact method. To overcome this issue, we develop a predetermination heuristic that computes utilities prior to the optimization.

The results show that the heuristic considerably reduces the computational time while finding solutions of similar quality to the exact method. The proposed model itself is compared to a benchmark where shippers are assumed to be purely cost-minimizers. We show that the profits achieved by our model are substantially higher, even when the embedded mode choice model is purely deterministic. All in all, including more information about the shippers while designing and pricing the services suggests considerable gains for the transport operator: even if the exact model or parameters are not known, using the available information is still far better than not using it.

Now that the potential of our Choice-Driven Service Network Design and Pricing has been demonstrated on a small network, the immediate next step is to apply this methodology to a larger network. This will allow the proposed model to be assessed on a real-size instance and the performance of the predetermination heuristic to be tested on a larger problem. The search space could potentially be reduced by applying practical rules, such as a maximum number of stops per service.

Moreover, several assumptions made in this work deserve to be challenged. Firstly, in the mode choice models, the utilities of the vessel operator and of the competing IWT carrier are considered independent of each other. However, since both offer inland waterway services, these two options are correlated with each other; this could also apply, to a lesser extent, to the Rail alternative, which also offers scheduled intermodal services. Further work should consider this correlation between choice alternatives. Secondly, it is assumed that the utilities of the competing alternatives can be computed by the operator, implying full information about their competitors. Some attributes can indeed be found, e.g. the frequency or travel times, but the prices that the competitors apply cannot be known perfectly: at best they can be estimated. The choice-driven model should then be developed further to account for this imperfect information. Thirdly, the competition is assumed to be exogenous and fixed, meaning that competitors will not react to the operator's new services.
But the competitors will also seek to improve their services and profits, all the more so if they lose market share to the operator. These dynamics could be covered, for example, through an Agent-Based Model accounting for the reactions of the different parties involved.

Another dynamic aspect that could be included concerns pricing, particularly in the context of inland waterway transport. Indeed, the frequent low water levels on the Rhine reduce the capacity of the vessels, thus increasing the transportation costs per container; the operators then have to increase the price they charge to compensate for the losses. A time dimension could be included in the optimization model to deal with dynamic pricing. Finally, our formulation implies that a single price per OD pair is set for all shippers. However, revenue management techniques could be used to improve the performance of the proposed model. This would provide the operator with additional gains, because they could tailor the prices offered to specific customers. A revenue management setting would also help develop the full potential of the formulation with Mixed Logit, as the prices can be adapted to the different cost sensitivities of shippers.

This research is supported by the project "Novel inland waterway transport concepts for moving freight effectively (NOVIMOVE)". This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 858508. | http://arxiv.org/abs/2311.15907v1 | {
"authors": [
"Adrien Nicolet",
"Bilge Atasoy"
],
"categories": [
"math.OC"
],
"primary_category": "math.OC",
"published": "20231127151320",
"title": "A Choice-Driven Service Network Design and Pricing Including Heterogeneous Behaviors"
} |
Improving Denoising Diffusion Probabilistic Models via Exploiting Shared Representations

Delaram Pirhayatifard, Mohammad Taha Toghani, Guha Balakrishnan, César A. Uribe
Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA. Email addresses: {...}@rice.edu.

January 14, 2024

In this work, we address the challenge of multi-task image generation with limited data for denoising diffusion probabilistic models (DDPM), a class of generative models that produce high-quality images by reversing a noisy diffusion process. We propose a novel method, SR-DDPM, that leverages representation-based techniques from few-shot learning to effectively learn from fewer samples across different tasks. Our method consists of a core meta architecture with shared parameters, complemented by task-specific layers with exclusive parameters. By exploiting the similarity between diverse data distributions, our method can scale to multiple tasks without compromising the image quality. We evaluate our method on standard image datasets and show that it outperforms both unconditional and conditional DDPM in terms of FID and SSIM metrics.

§ INTRODUCTION

Diffusion models are a class of generative models that produce high-quality images by reversing a noisy diffusion process <cit.>. They have shown several advantages over previous state-of-the-art generative models such as GANs <cit.>, including their scalability and their ability to capture the underlying structure of the data, such as the spatial relationships between different objects <cit.>. This enables them to generate images that are more realistic and diverse than those produced by other generative models <cit.>. These advances have made diffusion models powerful and useful tools for generating images and other complex data in various applications, such as computer vision <cit.>, natural language processing <cit.>, artistic image generation <cit.>, medical image reconstruction <cit.>, and music generation <cit.>.

Diffusion models are based on non-equilibrium thermodynamics <cit.>, where diffusion increases the system's entropy. They generate samples by gradually introducing random noise to data and learning to reverse the process to obtain the desired data samples. However, diffusion models have some limitations, such as being time-consuming and computationally expensive to train <cit.>, being difficult to troubleshoot and scale to large datasets <cit.>, and having high-dimensional latent variables of the same dimension as the original data.

Few-shot learning is a type of meta-learning <cit.> that enables models to learn from a limited amount of data <cit.>. This is especially useful in scenarios where data are scarce or where training costs and time are high <cit.>. In such cases, few-shot learning can be used to quickly learn from a small number of examples. Several optimization-based and hierarchical techniques have been proposed to enable meta-learning for different problems. These techniques allow researchers to make more accurate predictions and to better understand the underlying structure of data.
Few-shot learning is also a powerful tool for image generation in limited-data setups. It can be used to create a variety of images from a small dataset, such as images of a specific object in different poses or environments. This can be a useful tool for data augmentation, as well as for creating new images for different tasks.

In this work, we study image generation in multi-task setups with limited data per task. We propose to enhance the quality of image generation in diffusion models by leveraging the idea of shared and personalized representations. We introduce a novel hierarchical algorithm called Shared-Representation Denoising Diffusion Probabilistic Model (SR-DDPM), which exploits a combination of shared and exclusive features <cit.> to improve the sample fidelity under limited data regimes. We discuss how our method is capable of fast and light fine-tuning, as well as better scalability to unseen tasks, i.e., data from a new category. We evaluate the performance of SR-DDPM on four standard datasets: MNIST <cit.>, Fashion-MNIST (FMNIST) <cit.>, CIFAR-10 <cit.>, and CIFAR-100 <cit.>, under limited data samples.

The rest of this paper is structured as follows. Section <ref> reviews the related works. Section <ref> introduces the problem setup and our method, SR-DDPM, for improving the performance of DDPMs using a mixture of shared and exclusive layers. Section <ref> presents the numerical results, and Section <ref> concludes the paper.

§ BACKGROUND

Recent advances in diffusion models have focused on improving the quality and efficiency of the generated images in various ways. For instance, <cit.> introduced the concept of noise-conditioned score networks, which learn the corresponding noise for two consecutive images in the diffusion process. Rombach et al. <cit.> proposed a two-stage approach that distinguishes the imperceptible details in high-quality photos via adversarial auto-encoders, reducing the size of the latent DDPM. Moreover, some works proposed non-Markovian and operator learning techniques for implicit fast sampling <cit.>. Ho et al. <cit.> found that cascaded diffusion models are capable of generating high-fidelity images without the assistance of auxiliary image classifiers. There have also been recent attempts to boost image quality by incorporating conditional approaches that use noise prediction <cit.>. In recent studies, broader corruption processes, such as blurring, pixelation, and desaturation, have also been considered in training and sampling diffusion models <cit.>. Furthermore, there has been a growing focus on score-based generative modeling using stochastic differential equations (SDE) <cit.>, where the goal is to learn score functions, i.e., gradients of log probability density functions, on a wide range of noise-perturbed data distributions, and then to sample with Langevin-type methods. Additionally, several notable efforts have been made for multi-modal datasets and 3D image generation <cit.>, achieved by incorporating additional information about the data, such as object labels or scene context, through attention layers in the model <cit.>.

§ PROBLEM SETUP & ALGORITHM

In this section, we first describe the underlying problem setup for few-shot image generation. Then, after reviewing the notion and formulation of DDPM, we present our method, SR-DDPM.

Data Setup: We consider a set of n different tasks {𝒯_i}_i=1^n, where for each task 𝒯_i there exists a set of m_i samples 𝒮_i, each sample being an image in a d-dimensional space (e.g., d = 32·32·3 for CIFAR-10).
In the conventional setup for diffusion models, the underlying mechanism is to aggregate all samples 𝒮 = ∪_i=1^n 𝒮_i irrespective of their task, and every x_0 ∈𝒮 is a realization of some generic (global) distribution x_0 ∼𝒟. In this research, we unravel how to exploit the combination of diverse yet similar distributions 𝒟_i to improve the quality of image generation in diffusion models. We start by stating the problem setup for DDPM and then introduce our method for shared representations.

DDPM: Let x_0 ∈ℝ^d be an image sampled from distribution 𝒟. Moreover, let x_1, x_2, …, x_T denote T latent variables, where each x_t ∈ℝ^d for all t∈[T]. The forward (diffusion) process q can be defined as follows:

q(x_1:T | x_0) := ∏_t=1^T q(x_t | x_t-1),  q(x_t | x_t-1) := 𝒩(x_t; √(1-β_t) x_t-1, β_t I),

where (β_t)_t=1^T is a variance schedule for the underlying Gaussian noise, with mean √(1-β_t) x_t-1 and variance β_t I at each timestep t. According to <cit.>, the latent variable x_t can be derived directly from the observed data x_0 as

x_t = √(α̅_t) x_0 + √(1-α̅_t) ϵ,

where α_t := 1-β_t, α̅_t := ∏_s=1^t α_s, and ϵ∼𝒩(0, I). Moreover, the reverse (generative) process p_ψ, parameterized by a set of parameters ψ, can be summarized as follows:

p_ψ(x_0:T) := ∏_t=1^T p_ψ(x_t-1 | x_t),  p_ψ(x_t-1 | x_t) := 𝒩(x_t-1; μ_ψ(x_t,t), σ_t^2 I),
μ_ψ(x_t,t) := 1/√(α_t) [x_t - β_t/√(1-α̅_t) ϵ_ψ(x_t,t)],

where σ_t^2 := β_t, and ϵ_ψ: ℝ^d ×ℕ→ℝ^d is a neural network with parameters ψ that takes x_t and the timestep t as inputs and estimates the realization of ϵ in (<ref>). For example, a UNet with attention is a proper candidate for ϵ_ψ. In <cit.>, it is explained that the choice σ_t^2 = β_t provides experimental results similar to σ_t^2 = (1-α̅_t-1)/(1-α̅_t) β_t. Note that the underlying assumption in this formulation is that ϵ_ψ(x_t,t) is a shared model across the Markov chain (from 0 to T) which is expressive enough to recover the noise value. Therefore, it is sufficient to optimize the network parameters with respect to some loss function ℒ: ℝ^d ×ℝ^d →ℝ^+, i.e., minimizing

ℒ(ϵ, ϵ_ψ(√(α̅_t) x_0 + √(1-α̅_t) ϵ, t)),

which quantifies the distance between the original noise and the prediction of the denoising model.

Next, we explain SR-DDPM for multi-task denoising diffusion models with shared and exclusive representations.

SR-DDPM: Our goal is to exploit the exclusiveness of each task {𝒯_i}_i=1^n by splitting the denoising network architecture into shared and personal (exclusive) layers. Figure <ref> depicts the UNet structure, which uses a common set of parameters ϕ for all tasks i∈[n] and a distinct set of parameters {θ_i}_i=1^n for each task. The set of parameters ψ in (<ref>) is the combination of ϕ and θ_i, for any i∈[n]. This allows us to jointly capture both shared and unshared features. For example, Figure <ref> shows that, for different tasks involving various outfits, we train and sample from n parallel Markov chains with shared parameters ϕ and exclusive parameters {θ_i}_i=1^n. In other words, we minimize the following:

𝔼_i[ℒ(ϵ, ϵ_ϕ,θ_i(√(α̅_t) x_0^i + √(1-α̅_t) ϵ, t))],

where x_0^i ∼𝒟_i and i∼Uniform([n]). Algorithms <ref> and <ref> respectively describe the training and sampling processes of SR-DDPM. As shown in Algorithm <ref>, in the training phase we randomly choose a task and an image from that task; then, we use a first-order optimization method such as Adam <cit.> on the stochastic gradient to minimize the cost in (<ref>). Finally, we generate samples by feeding a noise signal to the network and applying the denoising process of Algorithm <ref>. A schematic training-step implementation is sketched below.
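The following PyTorch sketch illustrates one SR-DDPM training step under the objective above: per-task input and output layers play the role of the exclusive parameters θ_i, and a tiny shared trunk stands in for the paper's UNet, which we do not reproduce. The network shape, the timestep conditioning and all hyperparameters other than the learning rate are simplifying assumptions.

```python
import torch, torch.nn as nn

T = 1000
beta = torch.linspace(1e-4, 0.02, T)            # variance schedule beta_t
alpha_bar = torch.cumprod(1.0 - beta, dim=0)    # cumulative product

class SRDenoiser(nn.Module):
    """Toy stand-in for the UNet of Fig. 1: shared trunk `phi`,
    task-exclusive input/output layers `theta` (one pair per task)."""
    def __init__(self, n_tasks, ch=32):
        super().__init__()
        self.theta_in = nn.ModuleList(nn.Conv2d(1, ch, 3, padding=1)
                                      for _ in range(n_tasks))
        self.phi = nn.Sequential(nn.SiLU(), nn.Conv2d(ch, ch, 3, padding=1),
                                 nn.SiLU())
        self.theta_out = nn.ModuleList(nn.Conv2d(ch, 1, 3, padding=1)
                                       for _ in range(n_tasks))
        self.t_embed = nn.Embedding(T, ch)       # crude timestep conditioning

    def forward(self, x_t, t, task):
        h = self.theta_in[task](x_t)
        h = h + self.t_embed(t)[:, :, None, None]
        return self.theta_out[task](self.phi(h))

def train_step(model, opt, data_by_task):
    """One SR-DDPM step: sample a task i, noise an image from task i,
    regress the noise; gradients flow into phi and theta[i] only."""
    i = torch.randint(len(data_by_task), ()).item()
    x0 = data_by_task[i][torch.randint(len(data_by_task[i]), (1,))]
    t = torch.randint(T, (1,))
    eps = torch.randn_like(x0)
    x_t = alpha_bar[t].sqrt()[:, None, None, None] * x0 \
        + (1 - alpha_bar[t]).sqrt()[:, None, None, None] * eps
    loss = ((model(x_t, t, i) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Smoke test on random "tasks" of 500 MNIST-sized images each.
data = [torch.randn(500, 1, 28, 28) for _ in range(4)]
model = SRDenoiser(n_tasks=4)
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
print(train_step(model, opt, data))
```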
§ EXPERIMENTS

In this section, we describe the experimental setup and the results of our proposed method, which we compare with unconditional and conditional DDPM. We consider four standard datasets: MNIST, FMNIST, CIFAR-10, and CIFAR-100, and implement a multi-task scheme with 500 samples per task. Following Section <ref>, we adopt a UNet with four layers as the denoising network for all methods. The network has a bottleneck in the middle, so that it learns only the most important features of the data; per layer, we increase the number of channels by a factor of two and decrease the image size by the same factor. For all datasets, we personalize one layer at the beginning and one at the end of the network as the exclusive stages. We train the model for each method for 600 epochs and use the Adam optimizer with a learning rate of 5×10^-4 in all experiments.

We quantitatively compare the performance of our model with DDPM and Conditional DDPM (C-DDPM), using the implementation of DDPM <cit.> on Hugging Face <cit.>. We measure the performance of SR-DDPM on the four datasets using sample quality (FID@10k) and structural similarity (SSIM) on test data. Table <ref> compares SR-DDPM with unconditional and conditional DDPM: our method achieves better FID scores than the other two on all four datasets.

Figure <ref> visualizes the reverse process for image generation with SR-DDPM on all datasets. We also display 20 samples from each task in Figures <ref>-<ref>. The images generated from FMNIST show that the trained model can identify similarities between the tasks: consider, for example, the T-shirt and dress generations in Figure <ref>. Some of the images generated for one task overlap with another task when using the corresponding exclusive layers. This implies that the method can implicitly capture the similarity between tasks through the shared and exclusive layers.

§ CONCLUSION

We presented a novel algorithm for training diffusion models with limited data. Our method outperforms unconditional and conditional DDPM on image generation within the same training time. We also found that the personal layer for each task can detect similarities among tasks automatically. This means that we can train a new personal layer for a new task without fine-tuning the whole network. Our method offers an interpretable way to generate images by using a combination of shared and unshared parameters to capture the differences among tasks. | http://arxiv.org/abs/2311.16353v1 | {
"authors": [
"Delaram Pirhayatifard",
"Mohammad Taha Toghani",
"Guha Balakrishnan",
"César A. Uribe"
],
"categories": [
"cs.LG",
"cs.AI",
"cs.CV",
"eess.IV",
"eess.SP"
],
"primary_category": "cs.LG",
"published": "20231127223026",
"title": "Improving Denoising Diffusion Probabilistic Models via Exploiting Shared Representations"
} |
Random Splitting of Point Vortex Flows

Andrea Agazzi, Università di Pisa, Dipartimento di Matematica, 5 Largo Bruno Pontecorvo, 56127 Pisa, Italia (andrea.agazzi at unipi.it).
Francesco Grotto, Università di Pisa, Dipartimento di Matematica, 5 Largo Bruno Pontecorvo, 56127 Pisa, Italia (francesco.grotto at unipi.it).
Jonathan C. Mattingly, Department of Mathematics and Department of Statistical Science, Duke University, Durham, NC, 27708 USA ([email protected]).

January 14, 2024

We consider a stochastic version of the point vortex system, in which the fluid velocity advects single vortices intermittently for small random times. Such a system converges to the deterministic point vortex dynamics as the rate at which single components of the vector field are randomly switched diverges, and it therefore provides an alternative discretization of the 2D Euler equations. The random vortex system we introduce preserves microcanonical statistical ensembles of the point vortex system, hence constituting a simpler alternative to the latter in the statistical mechanics approach to 2D turbulence.

§ INTRODUCTION

The Point Vortex (PV) system is a finite-dimensional system of singular ODEs describing the evolution of an incompressible, 2-dimensional fluid in the idealized case where the vorticity, i.e. the curl of the velocity field, is concentrated in a finite set of points. Introduced by Helmholtz in 1858 <cit.>, the PV system is known to be well-posed for almost every initial configuration <cit.>. It has been shown to be the limit of solutions of the 2D Euler equations <cit.> in the well-posedness class L^∞ of the latter, and the PV system itself converges to solutions of the 2D Euler equations in a Mean Field scaling regime for initial data in L^∞ <cit.> (cf. also <cit.>). The properties of PV dynamics as a Hamiltonian system with singular interactions have also been the object of extensive research, because of the coexistence of stable and unstable configurations and the presence of singular solutions possibly related to dissipation properties of the 2D Euler equations <cit.> (cf. <cit.> for an overview of PV as Hamiltonian dynamics).

Just as in the case of the closely related 2D Euler PDE dynamics, the main problem concerning PV systems is the long-time asymptotic behavior of the solutions. Equilibrium states of PV systems play a prominent role in the statistical mechanics approach to 2D turbulence rooted in the works of Onsager <cit.>, with the convergence towards states exhibiting the formation of coherent structures being the crucial mathematical open problem <cit.>.

The present note is devoted to a stochastic modification of the PV system inspired by the random splitting technique recently developed in <cit.>. We will prove that the stochastic vortex flow we exhibit is in fact a regularized version of the deterministic PV dynamics, converging to the latter in the limit of small regularization parameter. This in turn implies convergence towards solutions of the 2D Euler equations, in view of the aforementioned results.
Unlike the original PV system, the stochastic dynamics we propose is well-defined for all initial configurations, the convergence to the deterministic system holding up to the time of eventual singularities, as in other versions of PV dynamics regularized by the introduction of noise <cit.>.

The most important feature of the stochastic dynamics we propose is that it preserves the same kinetic energy functional (i.e. the PV Hamiltonian) as the original deterministic flow. To the best of our knowledge, this is the first desingularization method for PV dynamics that preserves such a crucial first integral of motion, possibly opening the way to a new approach to the study of microcanonical ensembles of PVs and their relation with 2D turbulence. We defer a proper discussion to <Ref>, after having established a rigorous construction and our main results in <ref>.

§ SPLITTING VORTEX FLOWS

We consider the dynamics on the periodic space domain 𝕋^2 ≃ [0,1)^2 and establish our results on the finite time interval t∈ [0,1]. All the forthcoming arguments can easily be adapted to the general case of PV dynamics on smooth surfaces with or without boundaries. Throughout, we define k^⊥ := (-k_2, k_1) for k = (k_1,k_2) ∈ℝ^2, extending this notation naturally to the differential operator ∇ := (∂_1, ∂_2), and denote by |·|, respectively ‖·‖, the Euclidean and induced operator norms.

§.§ Deterministic Vortex Dynamics

A system of N point vortices with intensities ξ_1,…,ξ_N ∈ℝ∖{0} and distinct positions

x=(x_1,…,x_N) ∈𝒳 := {x∈𝕋^2N : x_i≠x_j ∀ i≠j},

evolves according to the dynamics

ẋ_i = v_i(x),  v_i(x)=∑_j≠i ξ_j K(x_i-x_j),

where

K: 𝕋^2 ∖{(0,0)}→ℝ^2,  K(x)=∇^⊥Δ^-1(x) = 1/2π ∑_k∈ℤ^2∖{(0,0)} k^⊥/|k|^2 e^2π i k·x,

is the 2D Biot-Savart kernel, whose action on a vorticity distribution returns the corresponding velocity field of the fluid, which in turn advects the vortex positions. As proved in <cit.>, for almost all initial configurations x=x(0)∈𝒳 with respect to the product Lebesgue measure on 𝕋^2N⊃𝒳, the ODE system <ref> admits a unique solution, which is smooth and global in time. With a slight abuse of notation, we will denote by Φ_t: 𝒳→𝒳 the (almost-everywhere defined) solution flow of <ref>, i.e. the flow of v=(v_1,…,v_N) on 𝒳. We will also denote by Φ^(i)_t: 𝒳→𝒳 the flow of a single component of the velocity field, (0,…,v_i,…,0). For i> N, abusing notation again, we will write Φ^(i) with the apex understood modulo N. Notice that, unlike Φ, each flow Φ^(i) is well-defined for all times at any point x∈𝒳, because of the particular form of the interaction kernel K, which prevents the i-th vortex from colliding with any other one.

§.§ Stochastic Splitting

Denoting throughout ⌊y⌋ := max{k ∈ℕ : k≤ y}, we define the (stochastically) split PV flow(s) as follows. Let m ∈ℕ and consider a vector of i.i.d. non-negative random variables τ = (τ_i)_i=1^∞ with common distribution ρ having at most exponential tails and satisfying 𝔼(τ_i) = 1. For t>0, define the jumping stochastic flow

Φ_t^m(x) = Φ_τ_ℓN/m^(N) ∘…∘Φ_τ_1/m^(1)(x),  ℓ= ⌊mt⌋,

and the interpolated stochastic flow as the solution of

d/dt Ψ^m_t(x) = N τ_i v_j(Ψ^m_t(x)),  i=⌈Nmt⌉,  j=i (mod N).

In particular, when mt is an integer, Ψ^m_t(x) = Φ_t^m(x). Concretely, we let the single components Φ^(i) of the PV flow act one by one, over small (random) time intervals. The difference between Φ^m and Ψ^m is that the former is piecewise constant in time, while the latter has continuous trajectories; a schematic numerical implementation of Φ^m is sketched below.
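The sketch below is a minimal numerical illustration of the jumping flow Φ^m: it cycles through the vortices, advecting one at a time for an Exp(1)-distributed duration divided by m, with the Biot-Savart kernel approximated by a truncated Fourier sum (a standard real form consistent with K = ∇^⊥Δ^-1). The truncation level, the sub-stepping scheme and all numeric values are our assumptions. Note that the exact single-vortex flow Φ^(i) conserves the PV Hamiltonian H of <Ref> below, since ẋ_i is orthogonal to ∇_x_i H; the explicit Euler sub-steps used here conserve it only approximately.

```python
import numpy as np

M = 12                                  # Fourier truncation (accuracy knob)
ks = np.array([(k1, k2) for k1 in range(-M, M + 1)
               for k2 in range(-M, M + 1) if (k1, k2) != (0, 0)])
kperp_over_k2 = (ks[:, ::-1] * np.array([-1, 1])) / (ks ** 2).sum(1)[:, None]

def biot_savart(x):
    """Truncated periodic Biot-Savart kernel K(x) on the torus [0,1)^2."""
    return kperp_over_k2.T @ np.sin(2 * np.pi * ks @ x) / (2 * np.pi)

def single_vortex_flow(x, xi, i, s, n_sub=50):
    """Approximate Phi^(i)_s: advect vortex i only, with the velocity
    induced by the other vortices, via n_sub explicit Euler sub-steps."""
    x = x.copy()
    for _ in range(n_sub):
        v = sum(xi[j] * biot_savart(x[i] - x[j])
                for j in range(len(xi)) if j != i)
        x[i] = (x[i] + (s / n_sub) * v) % 1.0
    return x

def jumping_flow(x, xi, t, m, rng):
    """Phi^m_t: cycle through the vortices, each advected for tau/m time
    units with tau ~ Exp(1) (mean one, exponential tails)."""
    for step in range(int(np.floor(m * t)) * len(xi)):
        x = single_vortex_flow(x, xi, step % len(xi), rng.exponential() / m)
    return x

rng = np.random.default_rng(0)
x0 = np.array([[0.30, 0.50], [0.70, 0.50], [0.50, 0.25]])
xi = np.array([1.0, 1.0, -0.5])
print(jumping_flow(x0, xi, t=0.2, m=20, rng=rng))
```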
Notice that the stochastic flows are well-defined for all initial configurations, even those leading to a finite-time collapse of the dynamics <ref>, since this is the case for every single Φ^(i).

§.§ Convergence for regularized interaction kernels

Consider the following smooth approximation of the PV interaction kernel:

K_δ(x)=(1-χ_δ(x)) K(x),  δ>0,

with χ_δ∈ C^∞(𝕋^2) supported in a δ-neighborhood of 0∈𝕋^2 and with χ_δ(0)=1. In the present paragraph, we assume that K_δ replaces K in the definitions of the flows Φ, Φ^(i), Φ^m, Ψ^m, omitting the dependence on δ for lighter notation. Notice that if a solution of the PV dynamics is such that the vortices are δ-separated at all times, then it is also a solution of <ref> with K_δ replacing K.

Let K_δ replace K in <ref> and the subsequent definitions, and let x∈𝒳 be fixed. Then ℙ-almost surely, for all t∈ [0,1], Ψ^m_t(x) →Φ_t(x) as m→∞.

Defining

d_𝒳(x,y)^2 := ∑_j=1^N ∑_i=1^2 min(|(x_j)_i-(y_j)_i|, 1-|(x_j)_i-(y_j)_i|)^2,

we first prove convergence of fixed-time marginals of the interpolated stochastic flow:

Let the assumptions of <ref> hold and fix t ∈ [0,1]; then for all ϵ > 0,

lim sup_m →∞ sup_x ∈𝒳 ℙ( d_𝒳(Ψ_t^m(x), Φ_t(x)) > ϵ ) = 0.

For ℓ∈ℕ, introduce the ℓ-step jumping flow with timestep h > 0:

Φ_h^ℓ(x) := Φ_h^(ℓN,1)(x),  where for any 1≤ j ≤ i,  Φ_h^(i,j)(x) := Φ_hτ_i^(i) ∘…∘Φ_hτ_j^(j)(x).

We couple Φ_h^ℓ(x) with Φ_t^m(x) and Ψ_t^m(x) by setting h = 1/m and identifying the underlying τ = (τ_i)_i=1^∞, so that whenever mt ∈ℕ we have Φ_1/m^mt(x) = Φ_t^m(x) = Ψ_t^m(x). We proceed to prove that for any t ∈ [0,1] and for any ϵ>0 sufficiently small, we have

lim sup_m →∞ sup_x ∈𝒳 ℙ( d_𝒳(Ψ_t^m(x), Φ_t^m(x)) > ϵ/3 ) = 0,
lim sup_m →∞ sup_x ∈𝒳 ℙ( d_𝒳(Φ_t^m(x), Φ_t/⌊mt⌋^⌊mt⌋(x)) > ϵ/3 ) = 0,
lim sup_m →∞ sup_x ∈𝒳 ℙ( d_𝒳(Φ_t/⌊mt⌋^⌊mt⌋(x), Φ_t(x)) > ϵ/3 ) = 0,

which, combined, yield the desired result.

Starting from <ref>, we recall from <cit.> the definition of the flow maps

S_t f(x) := f(Φ_t(x)),  S_hτ^ℓ f(x) := f(Φ_h^ℓ(x)),

and their operator norm

‖S‖_2→0 := sup_‖f‖_2 = 1 sup_x ∈𝒳 |S f(x)|,

where ‖f‖_2 denotes the following norm on C^2(𝒳):

‖f‖_2 := sup_x ∈𝒳, k=0,1,2 ( sup_|η| = 1 |D_x^k f(x) η| ).

Under the smoothness hypothesis of this Lemma, by <cit.> (which uses the same notation we just recalled), for every ϵ' >0 it holds

lim sup_ℓ→∞ ℙ( ‖S_t - S_tτ/ℓ^ℓ‖_2→0 > ϵ' ) = 0.

<ref> now follows by choosing

f_x,t,ϵ(·) := 1- χ̅_ϵ,Φ_t(x)(·),  x ∈𝒳, t > 0, ϵ>0,

with χ̅_ϵ,y∈ C^2(𝒳) supported in the d_𝒳-ball ℬ_ϵ/3(y) of radius ϵ/3 around y∈𝒳, equal to 1 at y, and such that

sup_x ∈𝒳 ‖D_x^2 χ̅_ϵ,y(x)‖ ≤ 64 ϵ^-2  and  sup_x ∈𝒳 |D_x χ̅_ϵ,y(x)| ≤ 16 ϵ^-1.

Indeed, setting ϵ' = ϵ^2/128, we have

lim sup_m→∞ sup_x ∈𝒳 ℙ( d_𝒳(Φ_t/⌊mt⌋^⌊mt⌋(x), Φ_t(x)) > ϵ/3 )
≤ lim sup_ℓ→∞ sup_x ∈𝒳 ℙ( |f_x,t,ϵ(Φ_t/ℓ^ℓ(x))| > 1/2 )
= lim sup_ℓ→∞ sup_x ∈𝒳 ℙ( |(S_t - S_tτ/ℓ^ℓ)(2ϵ' f_x,t,ϵ)(x)| > ϵ' )
≤ lim sup_ℓ→∞ ℙ( ‖S_t - S_tτ/ℓ^ℓ‖_2→0 > ϵ' ) = 0,

the last step following from <ref>. We now turn to proving <ref>.
To do so we note that, definingM := sup_x ∈, i ∈{1, …, N} (|v_i(x)|, D_xv_i(x)) , we have that for all t >0sup_r ∈(0,t) d_(Φ^(i)_t(x), Φ^(i)_t(y)) ≤e^Mt d_(x,y) ,so that we can write, uniformly in x ∈𝒳, d_Φ_t^m(x), Φ_t/mt^mt(x)= d_Φ_1/m^mt(x), Φ_t/mt^mt(x) ≤∑_j=1^Nmt d_Φ_1/m^(Nmt,j)(Φ_t/mt^(j-1,1)(x)), Φ_1/m^(Nmt,j+1)(Φ_t/mt^(j,1)(x))≤e^M/m ∑_j=1^Nmt τ_j∑_j=1^Nmt sup_yd_Φ_1/m^(j)(y), Φ_t/mt^(j)(y)≤e^M/mt ∑_j=1^Nmt τ_j M ∑_j=1^Nmt |1/m-t/mt|τ_j≤|1- tm/tm| t e^ NM/Nmt ∑_j=1^Nmt τ_jNM/Nmt ∑_j=1^Nmt τ_j .Combining the strong law of large numbers for 1/ℓ∑_k = 1^ℓτ_k in ℓ = Nmt→∞ with tm/tm→ 1 as m →∞ we have that the right hand side converges almost surely to 0, proving the desired result.Finally, to prove <ref>, we define t_m := max{j/m : j/m≤ t ,j ∈ℕ}, so that mt_m ∈ℕ and by <ref> we have Φ_t^m(x) = Φ_t_m^m(x) = Ψ_t_m^m(x). Then, recalling <ref> we writesup_x ∈𝒳d_Φ_t^m(x), Ψ_t^m(x) = sup_x ∈𝒳d_Ψ_t_m^m(x), Ψ_t^m(x) ≤τ_m t_m N M/mwhich converges almost surely to 0 as m →∞, establishing the claim. Defining for every x ∈𝒳A_m(ϵ) := {sup_t ∈[0,1] d_(_t(x), Φ_t(x)) ≤ϵ} ,we aim to show that for all ϵ > 0, lim sup_m →∞A_m(ϵ)^c =0 .For a stepsize = ϵ^2 and a tolerance Δ(ϵ) := e^-NMϵ^3/20 for M in <ref>, we introduce the setsB_j,m(ϵ) := { sup_t ∈(0, )d_(_t(_j (x)), _j (x)) ≤ϵ/3} , B_j,m'(ϵ):={d_(Φ_(_j (x)), _(_j (x))) ≤Δ(ϵ)} . It is readily checked that, for sufficiently small ϵ > 0, one has⋂_j=0^^-1B_j,m(ϵ) ∩B_j,m'(ϵ) ⊂A_m(ϵ)for allm ∈ . Indeed, adapting the estimate <ref> to trajectories of Φ, and since by triangle inequality for all k ∈{1, …,^-1}d_(Φ_k(x), _k(x)) ≤∑_j = 1^k d_(Φ_(k-j) (_j(x)), Φ_(k-(j-1))(_(j-1)(x))) ,on ⋂_j=1^kB_j,m'(ϵ) for all k ∈{1, …,^-1} we can writed_( Φ_k (x),_k(x)) ≤e^NM ∑_j = 1^k d_(Φ_ (_j(x)), _(_j(x))) ≤ϵ/10 .Combining the above with the definition of B_j,m(ϵ) and the fact that for ϵ small enough sup_x ∈𝒳, t ∈ (0,)d_(x, Φ_t(x)) ≤ NM s < ϵ/3yields <ref>.To conclude, it remains to estimate the probabilities of B_j,m(ϵ), B_j,m'(ϵ): for every ϵ > 0, we have by <ref> thatlim sup_m →∞ A_m(ϵ)^c≤lim sup_m →∞ (⋃_j=0^^-1 B_j,m(ϵ)^c ∪B_j,m'(ϵ)^c ) ≤∑_j=0^^-1 lim sup_m →∞ B_j,m(ϵ)^c + ∑_j=0^^-1 lim sup_m →∞B_j,m'(ϵ)^c,where the second inequality is a union bound. We finally obtain the desired claim by noting that the second term on the right hand side vanishes by application of <ref>, and that by the strong law of large numbers, recalling the definition ofand that τ_kiid∼ρ with 𝔼(τ_k = 1), for the first term we havelim sup_m →∞ sup_x ∈, t ∈(0, )d_(_t(x) - x) > ϵ/3 ≤lim sup_m →∞1 /m ∑_k = 1^sm M τ_k> ϵ/3 = lim sup_m →∞1/sm ∑_k = 1^sm τ_k> 1/3Mϵ =0 ,upon choosing ϵ < 1/4M.§.§ Convergence to the deterministic vortex flow In what follows, G : ∖ (0,0) denotes the (zero-average) Green function of the Laplace operator -Δ onand K(x)=-∇^⊥ G(x) returns to be the singular interaction of the PV system. We denote by dx^N the Lebesgue (equivalently, Haar) measure on . dx^N⊗-almost surely, for all t∈ [0,1] we have _t(x) →Φ_t(x), asm→∞ .The proof essentially relies on the following bound on vortex distances, which reprises the classical argument of Dürr-Pulvirenti <cit.>. There exists a constant C=C(N)>0 such that for all δ>0∫_ dx^Nmin_m≥0inf_t∈[0,1] min_i≠j d_(_t(x)_i,_t(x)_j) <δ≤C (-logδ)^-1 . To lighten notation, in the following C denotes a positive N-dependent constant possibly varying in each occurrence. Since _t is the result of subsequent compositions of the flows Φ^(i), the proof reduces to establish the thesis for the latter. 
The functionL:∖→[0,∞) , L(x)=∑_i≠j G(x_i-x_j)+c ,(where c=c(N)>0 is a constant to be chosen so that L≥ 0, thanks to the fact that G is bounded from below, and we define throughout := {x ∈ : x_i = x_j fori ≠ j}) allows to control the minimum distance between vortices, sinceL(x)≤-Clog min_i≠j d_(x_i,x_j), x∈∖ .Notice that L∈ L^1() as G∈ L^1(). It holds:d/dt [L∘Φ^(1)_t](x) =∑_i≠1 ∇G(x_i-x_1) ∑_j≠1 ∇^⊥G(x_j-x_1) ,in which the sum on the right-hand side also contains no contribution from the product with i=j due to orthogonality. Integrating in time we can thus write, for t^∗>0,∫_sup_t∈ [0,t^∗] (L∘Φ^(1)_t(x)) dx^N =∫_ L(x) dx^N+ ∫_sup_t∈ [0,t^∗]∫_0^t ∑_i≠ j≠ 1∇ G(_s(x)_i-_s(x)_1)∇^⊥ G(_s(x)_j-_s(x)_1) ds dx^N .In the latter expression, since Φ^(1) preserves dx^N, we can swap the integral overand the supremum over time; we can then use the estimate |∇ G (y)|≤ C |y|^-1, y∈, to control factors of the integrand, finally arriving to∫_ sup_t∈[0,t^∗] (L∘Φ^(1)_t(x)) dx^N ≤C t^∗ .The same argument clearly holds for all Φ^(i), and for compositions of those flows on subsequent time intervals as the ones in the definition of _t, leading in particular to∫_ sup_t∈[0,1] (L∘_t(x)) dx^N ≤C 1/m∑_i=1^mN τ_i .With this estimate at hand, the thesis follows from <ref> and Markov inequality.If (Ω,) is the probability space on which the random times τ_i are defined, the measurable subsetA_δmin_m≥0inf_t∈[0,1] min_i≠j d_(_t(x)_i,_t(x)_j) <δ ⊂Ω× ,is such that on A_δ the random flow (x)_i does not change if the interaction kernel K is replaced by K_δ as in <Ref>. In particular, conditionally to A_δ, <ref> applies yielding: dx^N⊗-almost surely on A_δ, for all t∈ [0,1], _t(x) →Φ_t(x) as m→∞. The proof is then completed by observing that ⋃_δ>0 A_δ= ×Ω, therefore the subset of ⊗Ω on which the thesis does not hold must be negligible by <ref>.§ EQUILIBRIUM STATISTICAL MECHANICS We have exhibited a random dynamical system whose flow Ψ^m converges to that of PV dynamics in the deterministic limit m→∞. We conclude the present note with some remarks on the compatibility of the flow Ψ^m with the statistical mechanics of PVs. We refer to <cit.> for a survey on classical statistical mechanics approach to 2D turbulent phenomena, to <cit.> for a more recent account, andto <cit.> for therelevance to microcanonical ensembles of PVs.The interaction energy of the PV system,H(x_1,…,x_N)=∑_i≠j ξ_i ξ_j G(x_i-x_j),corresponds to the (renormalized) kinetic energy of the fluid, and it acts as the Hamiltonian function of <ref> regarded as the Hamilton equations in conjugate coordinates (x_j,1,ξ_j x_j,2). Combined with the fact that the PV flow Φ (out of the negligible set of singular initial configurations) is the flow of a divergence-less vector field, and as such preserves dx^N by Liouville theorem, this allowed Onsager <cit.> to consider canonical and microcanonical ensembles preserved by Φ. Specifically,ν_β(dx^N)=1/Z_β,N e^-βH(x_1,…,x_N),Z_β,N=∫_ e^-βH(x_1,…,x_N) dx^N,is well defined (i.e. Z_β,N<∞) for inverse temperature β < 4π/min_i |ξ_i| (cf. <cit.>) and defines an invariant measure of <ref>. On the other hand, conditioning dx^N to an energy level set {H=E} one can introduce the microcanonical ensembleμ_E(dx^N) =1/Z_E,N δH(x_1,…,x_N)-E dx^N,Z_E,N being the Lebesgue measure of {H(x_1,…,x_N)=E}⊂. For high enough energy E≫1, Onsager predicted that, under the microcanonical ensemble, typical configurations of vortices behave similarly to samples from a negative-temperature Canonical ensemble, i.e. β<0 in <ref>. 
Under the latter distribution, for large enough |β| so as to prevent statistical Lebesgue repulsion (cf. <cit.>), typical configurations should exhibit aggregation of same-sign vortices, since proximity of vortices with different signs is penalized by the density e^-β H. This should allow the use of PV statistical ensembles to describe the formation of coherent structures in 2D turbulent flows. At present, this remains mostly conjectural as far as rigorous results are concerned, and we shall rather refer to numerical studies such as <cit.> for a contemporary viewpoint. The single component flow Φ^(i) is in fact the flow of the vector field ∇^⊥_i H, thus H is a first integral of motion for all the Φ^(i)'s, and consequently for the random flows Φ^m, Ψ^m. Since Φ^(i) is still the flow of a divergence-less vector field, Liouville's theorem applies and the measure invariance arguments just outlined can be repeated for Φ^m, Ψ^m. As a consequence, the latter are completely equivalent to <ref> from the point of view of equilibrium states, while being simpler as far as the time evolution is concerned. Let us also stress again the fact that these random flows are well-defined for all initial PV configurations, so singular dynamics are completely ruled out in this setting. Incidentally, we observe that this possibly introduces a new tool in the study of the continuation of PV dynamics after collapse via stochastic regularization (cf. <cit.>). Further insight into the stability of vortex interactions is necessary in order to fully replicate the results of <cit.> for split PV flows, but the splitting approach reduces the problem to the analysis of the evolution of a single PV in a fixed configuration of vortices, thus moving a step forward towards a better understanding of PV dynamics.§.§ Acknowledgements AA acknowledges partial support by the Italian Ministry for University and Research through PRIN grant ConStRAINeD, and by the University of Pisa, through project PRA 2022_85. FG was supported by the project Mathematical methods for climate science funded by PON R&I 2014-2020 (FSE REACT-EU). JCM thanks the NSF RTG grant DMS-2038056 for general support and the Visiting Fellow program at the Mathematics Department, University of Pisa.
"authors": [
"Andrea Agazzi",
"Francesco Grotto",
"Jonathan C. Mattingly"
],
"categories": [
"math.PR",
"math-ph",
"math.DS",
"math.MP"
],
"primary_category": "math.PR",
"published": "20231127101346",
"title": "Random Splitting of Point Vortex Flows"
} |
How to Map Linear Differential Equations to Schrödinger Equations via Carleman and Koopman-von Neumann Embeddings for Quantum Algorithms

Keisuke Fujii

January 14, 2024

[email protected]: School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan; Advanced Research Laboratory, Technology Infrastructure Center, Technology Platform, Sony Group Corporation, 1-7-1 Konan, Minato-ku, Tokyo, 108-0075, Japan

[email protected]: School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan; Center for Quantum Information and Quantum Biology, Osaka University, 1-2 Machikaneyama, Toyonaka, Osaka 560-0043, Japan; RIKEN Center for Quantum Computing (RQC), Hirosawa 2-1, Wako, Saitama 351-0198, Japan

Solving linear and nonlinear differential equations with large degrees of freedom is an important task for scientific and industrial applications. In order to solve such differential equations on a quantum computer, it is necessary to embed classical variables into a quantum state. While the Carleman and Koopman-von Neumann embeddings have been investigated so far, the class of problems that can be mapped to the Schrödinger equation is not well understood even for linear differential equations. In this work, we investigate the conditions for linear differential equations to be mapped to the Schrödinger equation and solved on a quantum computer. Interestingly, we find that these conditions are identical for both Carleman and Koopman-von Neumann embeddings. We also compute the computational complexity associated with estimating the expected values of an observable. This is done by assuming a state preparation oracle, block encoding of the mapped Hamiltonian via either Carleman or Koopman-von Neumann embedding, and block encoding of the observable using 𝒪(log M) qubits, where M is the mapped system size. Furthermore, we consider general classical quadratic Hamiltonian dynamics and find a sufficient condition to map it into the Schrödinger equation. As a special case, this includes the coupled harmonic oscillator model [Babbush et al., <cit.>]. We also find a concrete example that cannot be described as the coupled harmonic oscillator but can be mapped to the Schrödinger equation in our framework. These results are important in the construction of quantum algorithms for solving differential equations with large degrees of freedom.

§ INTRODUCTION Differential equations are often used for describing complex phenomena, such as the Maxwell equations in electromagnetism, the Navier-Stokes equation in fluid dynamics, the magnetohydrodynamics equations in plasma physics, the primitive equations in climate modeling, and the Lotka-Volterra equation in ecology.
While some differential equations can be solved analytically, many of practical significance cannot.To understand the behavior of these complex systems, these equations must be solved numerically.However, numerical simulation of such systems with a large number of variables requires substantial computational resources.Quantum computers are thought to offer powerful computational capability for a certain class of problems such as prime factorization <cit.>, simulation of quantum many-body system <cit.>, and linear system solver <cit.>.We aim to harness such an advantage of quantum computers to solve linear or non-linear differential equations in a large number of variables efficiently. When restricted to linear differential equations, there have been investigated algorithms for solving sparse linear differential equations using the quantum linear system algorithm and time discretization <cit.>.Furthermore, the quantum algorithm <cit.> to solve coupled harmonic oscillators with exponential speedup has been proposed recently and shown to include a BQP-complete problem.On the other hand, solving nonlinear differential equations with quantum computers requires embedding the classical variables of the differential equations into quantum states.As introduced in Refs. <cit.>, there are two known ways to realize such embedding. The first one, Carleman embedding <cit.>, embeds classical variables into the amplitudes of a quantum state directly.The second one, Koopman-von Neumann embedding <cit.>, is to embed classical variables into quantum amplitudes through orthogonal polynomials. In general, these embeddings require infinite variables because Carleman and Koopman-von Neumann embeddings convert a nonlinear differential equation into a linear one by introducing additional degrees of freedom with respect to nonlinear terms. One of the advantages of Carleman embedding is the mapping between classical variables and quantum amplitudes is straightforward; each classical variable corresponds to a complex amplitude with an appropriate normalization. By virtue of this property, if the nonlinear differential equations are sparse, the mapped linear equations will also be sparse. Based on this mapping, a quantum algorithm to solve a nonlinear differential equation has been proposed <cit.>. Unfortunately, for the Carleman embedding, the matrix defining the mapped linear system is not hermitian in general,and hence it cannot be regarded as Schrödinger equation. To handle this, we have to discretize in the time direction and use quantum linear system algorithms to solve it <cit.>. On the other hand,Koopman-von Neumann embedding <cit.> using orthogonal polynomials to map classical variables to quantum amplitudes provides a linear differential equation that hasa form of Schrödinger equation and hence can be solved by Hamiltonian simulation <cit.>. However, as a drawback, the mapped Hamiltonian is not guaranteed to be sparse even if theoriginal nonlinear differential equation is sparse.In Ref. <cit.>, the authors clarify a sufficient condition on the nonlinear differential equation that the mapped Hamiltonian becomes sparse, and hence can be solved efficiently on a quantum computer. In the perspective of solving differential equations in a large number of variables with quantum computers,it remains unclear what kind of differential equations can be converted into the Schrödinger equation either Carleman or Koopman-von Neumann embedding. 
Furthermore, it is not well understood how the classes of linear and nonlinear equations that can be solved by the Carleman and Koopman-von Neumann embeddings differ, and in what sense they can be efficiently solved by a quantum computer when the conditions are met. In this study, we investigate the relation between the Carleman and Koopman-von Neumann embeddings, specifically focusing on linear differential equations. We consider general linear differential equations, not necessarily in the form of the Schrödinger equation. In order to map them to the Schrödinger equation via the Carleman or Koopman-von Neumann embedding, we define a linear transformation to change the variables and clarify the necessary and sufficient condition on the linear differential equations under which this is possible. Specifically, in the case of the Koopman-von Neumann embedding, the dimension of the mapped Schrödinger equation becomes infinite even if the original differential equation is linear, because of the nature of the Koopman-von Neumann embedding. However, if the mapped Hamiltonian preserves the total Fock number, we can restrict the system to the one-particle subspace and find a finite-dimensional Schrödinger equation that solves the original problem. Interestingly, the necessary and sufficient condition on the linear differential equation for it to be mapped into the Schrödinger equation is the same for both Carleman and Koopman-von Neumann embeddings. We also analyze the computational complexity of the quantum algorithms solving these linear differential equations, assuming a state preparation oracle, block encoding of the mapped Hamiltonian, and block encoding of the observable, using 𝒪 (log M) qubits, where M is the mapped system size. Furthermore, we consider a case where the differential equation is given as Hamilton's canonical equation of a quadratic classical Hamiltonian, which includes the coupled harmonic oscillator model <cit.> as a special case. We provide a sufficient condition on the classical quadratic Hamiltonian so that we can map the corresponding Hamilton's canonical equation to the Schrödinger equation. While the coupled harmonic oscillators treated in Ref. <cit.> are a special case of the above result, we also provide a nontrivial example that cannot be described as a coupled harmonic oscillator by the method of Ref. <cit.> but can be mapped to the Schrödinger equation in our framework. By using the same argument as in Ref. <cit.>, we show that a quantum algorithm obtained via the Koopman-von Neumann embedding can solve BQP-complete problems even in the case of linear differential equations. These results apply generally when embedding linear differential equations into the Schrödinger equation, and they are very important for constructing efficient quantum algorithms for linear and nonlinear differential equations in the future. The rest of the paper is organized as follows. In Sec. <ref>, we introduce notation for the relevant matrix-theoretic terms. In Sec. <ref>, we investigate the necessary and sufficient condition for embedding a given differential equation into the Schrödinger equation via the Carleman embedding. We also discuss the computational complexity of a related quantum algorithm that solves the linear differential equation. In Sec. <ref>, the same argument is extended to the Koopman-von Neumann embedding. In Sec. <ref>, we investigate the relation between the Carleman and Koopman-von Neumann embeddings, namely whether one embedding can be constructed from the other in a constructive way. In Sec.
<ref>, we consider a special case where the linear differential equationis given as a Hamilton's canonical equation with a quadratic classical Hamiltonian. In Sec. <ref>, we revisit the result obtained in Ref. <cit.> in our framework. Sec. <ref> is devoted to conclusion and discussion. § PRELIMINARY In this section, we define notations in this study. First, we say A∈ℂ^N × N (N ∈ℤ_>0) is diagonalizable when there exists a regular matrix P ∈ℂ^N × N such that P^-1AP is a diagonal matrix. Note that P is not necessarily a unitary matrix but a regular matrix in general. Second, we denote σ_p(A) as the set of the eigenvalues of A∈ℂ^N × N (N ∈ℤ_>0). Third, we say A∈ℂ^N × N (N ∈ℤ_>0) is similar to B ∈ℂ^N × N when there exists a regular matrix P ∈ℂ^N × N such that B=P^-1AP. Fourth, we define the real and imaginary parts of a matrix A ∈ℂ^N × N^' (N, N^'∈ℤ_>0) as ℜ𝔢 A = 1/2(A + A^* ), ℑ𝔪 A = 1/2i(A - A^* ).Fifth, we denote Im B as the image of B ∈ℂ^N × N^' (N, N^'∈ℤ_>0). Finally, we denote I_N (N ∈ℤ_>0) as a N × N identity matrix and O_N × N^' (N, N^'∈ℤ_>0) as a N × N^' zero matrix. Next, to extend the scope of linear differential equations that can be mapped to the Schrödinger equation, we define the one-way reversible linear transformation as a method for changing the variables as follows. We say B ∈ℂ^M × N (M,N ∈ℤ_>0 with M ≥ N) is a one-way reversible linear transformation when B satisfies rank B = N. This transformation is a generalization of the technique used in Ref. <cit.>. Next, for a one-way reversible linear transformation B, we define the characteristic matrix C_B as C_B(B^† B)^-1 B^†. The properties of C_B are given as follows: Let 𝕂 be ℂ or ℝ, and B belongs to 𝕂^M × N with rank B =N. C_B = (B^† B)^-1 B^† satisfies C_B B = I_N and C_B |_(Im B)^⊥ = 0.B satisfies rank B = N, so that from rank–nullity theorem we obtain Ker B = 0. Next, x ∈Ker B^† B satisfies x^† B^† B x = 0, so 0=x^† B^† B x=‖ B x ‖^2. Now,Ker B = {0}, then Ker B^† B = {0}. Therefore, there exists the inverse (B^† B)^-1 of B^† B. This leads the existence of C_B = (B^† B)^-1 B^†. We can calculate like following C_B B = (B^† B)^-1 B^† B = I_N, then we obtain C_B B = I_N.Since (Im B)^⊥ = Ker B^†, C_B = (B^† B)^-1 B^† satisfies C_B |_(Im B)^⊥ = 0.§ CARLEMAN EMBEDDING FOR LINEAR DIFFERENTIAL EQUATIONS We would like to know when a given linear differential equation can be mapped into the Schrödinger equation and then a quantum computer can be applied to solve it ifappropriate oracle and/or block encoding are provided. To do so, it is necessary to consider a concrete method for how to embed classical variables into quantum states.The most straightforward approach would be to regard the vector of classical variables as a quantum state vector with appropriate normalization.Since this approach is equivalent to considering only the linear terms of the Carleman embedding (see Appendix <ref> for a review), we will refer to it as Carleman embedding in the linear case as well.That is, here we use the terminology, Carleman embedding, as a method to embed classical variables into a quantum state. Let us start with a general linear differential equation:d/dt[ x_1 (t); ⋮; x_N (t) ] =A [ x_1 (t); ⋮; x_N (t) ],where A ∈ℝ^N × N (N ∈ℤ_>0). Since A is not necessarily anti-hermitian,to map the above differential equation to the Schrödinger equation, we define the one-way reversible linear transformations B, which can be regarded as a generalization of the result obtained in Ref. <cit.>. 
Suppose we have a one-way reversible linear transformation B ∈ℂ^M × N (M,N ∈ℤ_>0 with M ≥ N) and define y(t):= (y_1(t), …, y_M(t))^⊤ = B (x_1(t), …, x_N(t))^⊤. Then we obtain a linear differential equation with respect to y(t):d/dt[ y_1(t);⋮; y_M(t) ] = BAC[ y_1(t);⋮; y_M(t) ],where C ∈ℂ^N × M satisfies CB = I_N. In this case, we assumed the existence of C satisfying CB = I_N for reversing the transformed solution y (t) to the original solution x(t), since we want to know the original solution x(t) from y(t). We also imposed rank B = N on B as described in Sec.<ref> so that there exists C ∈ℂ^N × M such that CB = I_N holds. As seen in Sec. <ref>,we can find such a C as a Moore–Penrose pseudo-inverse of B, i.e., C_B = (B^† B)^-1 B^†.The problem we want to solve here is to clarify what is the necessary and sufficient condition on A for BAC to be anti-hermitian, and hence the linear differential equation with respect to y becomes the Schrödinger equation. To do so we define a property, pure imaginary diagonalizable, of a given matrix A as follows.A ∈ℝ^N × N is called pure imaginary diagonalizable iff A is diagonalizable and σ_p(A) ⊂ i ℝ.We note that if A is pure imaginary diagonalizable then A is similar to a diagonal anti-hermitian matrix, i.e.,there exists a regular matrix P ∈ℂ^N × N such that P^-1AP is the diagonal matrix whose diagonal components are in i ℝ. We also note that this P is not necessarily a unitary matrix. For a given linear differential equation d/dtx(t) = A x (t), where A ∈ℝ^N × N, there exists (B, C) ∈ℂ^M × N×ℂ^N × M with rank B = Nand CB=I_N such thata linear differential equation for y(t):=Bx(t),d/dty(t)= BAC y(t) becomes the Schrödinger equation, i.e., BAC is anti-hermitian iff A is pure imaginary diagonalizable.We prove this in Appendix <ref>. Specifically,we can take C to be C_B = (B^†B)^-1B^†, i.e., Moore–Penrose pseudo-inverse of B, where C_B B= I_N as shown in Lemma <ref>.We remember how to solve a linear differential equation d/dtx(t) = A x(t), where A ∈ℝ^N × N. If A satisfies Theorem <ref>, the solution x can be represented as x(t)=P^-1exp(Λ t) P x(0), where the regular matrix P ∈ℂ^N × N such that PAP^-1 is a diagonal matrix and Λ PAP^-1. Λ is the diagonal matrix whose diagonal components are in i ℝ, so exp(Λ t) is an unitary matrix. That is, the one-way reversible linear transformation change the basis (or classical variables) so thatthe time evolution can be given as a unitary transformation, i.e., the Schrödinger equation. Since the new variables y can be regarded as complex amplitudes ofquantum state with an appropriate normalization,we call this Carleman embedding of the linear case and B ∈ℂ^M × N is a Carleman transforming matrix of A here when BAC_B is an anti-hermitian matrix. Note that A itself is not an anti-hermitian matrix in general. For example, as in Ref. <cit.>, we can transformHamilton's canonical equation of the coupled harmonic oscillator into the Schrödinger equation as follows:d/dt( B [ x_1(t);⋮; x_N(t); ẋ_1(t);⋮; ẋ_N(t) ]) = BAC_B( B [ x_1(t);⋮; x_N(t); ẋ_1(t);⋮; ẋ_N(t) ]),where A ∈ℝ^2N × 2N satisfies d/dt (x_1(t), …, x_N(t), ẋ_1(t), …, ẋ_N(t))^⊤ = A (x_1(t), …, x_N(t), ẋ_1(t), …, ẋ_N(t))^⊤, B ∈ℂ^N(N+3)/2 × 2N satisfies B [ x_1(t);⋮; x_N(t); ẋ_1(t);⋮; ẋ_N(t) ] = [√(m_1)ẋ_1(t); ⋮;√(m_N)ẋ_N(t);i √(κ_11) x_1(t); ⋮;i √(κ_NN) x_N(t);i √(κ_12)( x_1(t)-x_2(t) ); ⋮; i √(κ_N-1 N)( x_N-1(t)-x_N(t) ) ],C_B∈ℂ^2N × N(N+3)/2 is defined in Sec. 
<ref>, m_j >0(j ∈{1, …, N}) is the point positive mass, and κ_jk (j,k ∈{1, …, N}) is the spring constant satisfying κ_jk=κ_kj≥ 0. B A C_B becomes an anti-hermitian matrix, and this B is the example ofa Carleman transforming matrix of A.Now we consider the following problem to solve a linear differential equation defined by a pure imaginary diagonalizable matrix A, as in Ref. <cit.>. Let A be pure imaginary diagonalizable and B be a Carleman transforming matrix of A. Let x(t) be a solution of d/dtx (t) = A x(t) and define the normalized state|ψ(t)⟩ = 1/‖ B x (t) ‖ B x(t).Let observable O ∈ℂ^M × M be a hermitian matrix. Assume we are given (α_ Car, a_ Car, 0)-block encoding U_H_ Car of H_ Car iBAC_B,(β_ Car, b_ Car, 0)-block encoding U_O of O, and oracle access to a unitary U_ ini that prepares the initial state, i.e., U_ ini|0⟩↦|ψ(0)⟩. Given t ≥ 0 and ε > 0, the goal is to estimate a expectation value that is ε-close to ⟨ψ(t) | O | ψ(t)|$⟩.Let δ >0. Problem <ref> can be solved with probability at least 1-δ by a quantum algorithm that makes𝒪 (β_ Carlog(1/δ)/ε) uses of U_O, U_O^†, controlled-U_O, controlled-U_O^†, U_ ini, U_ ini^† and controlled-U_H_ Car or its inverse, and 𝒪( β_ Carlog(1/δ)/ε(α_ Car t + log (β_ Car/ε)/log (e+log (β_ Car/ε)/α_ Car t)) ) uses of U_H_ Car or its inverse, with 𝒪 (log M) qubits.From Theorem 58 and Corollary 60 in Ref. <cit.>, we can implement an (1,a_ Car+2, ε/4 β)-block encoding V of e^-it H_ Car=e^BAC_B t with 𝒪(α_ Car t + log (β_ Car/ε)/log (e+log (β_ Car/ε)/α_ Car t)) uses of U_H_ Car or its inverse,3 uses of controlled-U_H_ Car or its inverse. Then, |ϕ(t)⟩= (⟨0|^⊗ a_ Car+2 V |0⟩^⊗ a_ Car+2) U_ ini|0⟩ is ε/4β_ Car-close to |ψ(t)⟩. Since ‖ O ‖≤β_ Car holds by the definition of U_O in Problem <ref>, then‖⟨ψ(t) | O | ψ(t)|-⟩β (⟨0|^⊗ b_ Car⟨ϕ (t)|)U_O (|0⟩^⊗ b_ Car|ϕ (t)⟩)‖= ‖⟨ψ (t) | O | ψ(t)|-⟩⟨ϕ (t) | O | ϕ(t)|⟩‖≤‖⟨ψ (t) | O | ψ(t)|-⟩⟨ψ (t) | O | ϕ(t)|⟩‖ + ‖⟨ψ (t) | O | ϕ(t)|-⟩⟨ϕ (t) | O | ϕ(t)|⟩‖≤‖ O ‖‖|ψ (t)⟩ - |ϕ(t)⟩‖‖|ψ(t)⟩‖ + ‖ O ‖‖|ψ (t)⟩ - |ϕ(t)⟩‖‖|ϕ(t)⟩‖≤ 2 ·β_ Car· (ε/4β_ Car) = ε/2holds. If we can estimate (⟨0|^⊗ b_ Car⟨ϕ (t)|) U_O (|0⟩^⊗ b_ Car|ϕ (t)⟩) with additive error at most ε/2 β_ Car and error probability δ, we gain a state that is ε-close to ⟨ψ(t) | O | ψ(t)|$⟩. A method based on high-confidence amplitude estimation <cit.> provides a quantum algorithm for estimating the(⟨0|^⊗ b_ Car⟨ϕ (t)|) U_O (|0⟩^⊗ b_ Car|ϕ (t)⟩)that makes𝒪 (β_ Carlog(1/δ)/ε)uses ofU_Oand its inverse,𝒪 (β_ Carlog(1/δ)/ε)uses of conrolled-U_Oand its inverse, and𝒪 (β_ Carlog(1/δ)/ε)uses of the quantum circuit that prepares|ϕ(t)⟩or its inverse.That is, Problem <ref> can be solved with probability at least1-δby a quantum algorithm that makes𝒪 (β_ Carlog(1/δ)/ε)uses ofU_O,U_O^†, controlled-U_O, controlled-U_O^†,U_ ini,U_ ini^†and controlled-U_H_ Caror its inverse, and𝒪( β_ Carlog(1/δ)/ε(α_ Car t + log (β_ Car/ε)/log (e+log (β_ Car/ε)/α_ Car t)) )uses ofU_H_ Caror its inverse. § KOOPMAN-VON NEUMANN EMBEDDING FOR LINEAR DIFFERENTIAL EQUATIONSNext, we apply Koopman-von Neumann embedding for a linear differential equationd/dtx(t) = A x(t). Specifically, we employ the Hermite polynomial{H_n(x)}_n=0^∞as an orthogonal polynomial sequence for Koopman-von Neumann embedding, for simplicity. 
In the Koopman-von Neumann embedding, we define a position eigenstate: x̂_j|x⟩ = x_j |x⟩, where x̂_j is the j-th position operator, and a mapped Hamiltonian Ĥ = ∑_j=1^N 1/2( k̂_j ( ∑_l=1^N a_jlx̂_l ) +( ∑_l=1^N a_jlx̂_l ) k̂_j ), where a_jl is the (j,l) component of A and k̂_j is the j-th momentum operator satisfying [x̂_l, k̂_j]=i δ_l,j. The relation between classical variables and quantum states is as follows: ( ⊗_j=1^N ⟨n_j|) |x⟩ = ∏_j=1^N w(x_j)^1/2 H_n_j (x_j), where ⊗_j=1^N |n_j⟩ is a tensor product of eigenvectors of the number operators and w(x) is the weight function of the Hermite polynomials. Since the position eigenstate, which is a continuous-variable quantum state, is employed, the mapped Hamiltonian and the Schrödinger equation i d/dt|x⟩ = Ĥ|x⟩ are infinite dimensional even if the original differential equation is linear and finite dimensional. Fortunately, when the Hamiltonian Ĥ preserves the total Fock number, Ĥ can be block diagonalized in the Fock basis, so we focus on the subspace with total Fock number one. Furthermore, if div (A x)=0, i.e., tr A =0 holds (that is, the weight function is constant over time and the norm of |x⟩ does not decay, as described in Appendix <ref>), the Schrödinger equation i d/dt|x⟩ = Ĥ|x⟩ restricted to the subspace with total Fock number one is equivalent to the original linear differential equation d/dtx(t) = A x(t). Similarly to Sec. <ref>, we consider the cases where linear differential equations can be transformed, by a one-way reversible linear transformation, into the Schrödinger equation through the Koopman-von Neumann embedding. Suppose we have a one-way reversible linear transformation B ∈ℝ^M × N with rank B =N (M,N ∈ℤ_>0 with M ≥ N) and C ∈ℝ^N × M satisfying CB = I_N, as in Sec. <ref>. We define y(t):= (y_1(t), …, y_M(t))^⊤ = B (x_1(t), …, x_N(t))^⊤, and then obtain a linear differential equation with respect to y(t): d/dt[ y_1(t);⋮; y_M(t) ] = BAC[ y_1(t);⋮; y_M(t) ]. For the mapped Hamiltonian (<ref>) to be hermitian, we require BAC to be a real matrix. For simplicity, we also assume that B and C are real matrices, so that BAC is automatically real. To preserve the norm of the mapped position state, we have to impose div (BAC y )=0, i.e., tr (BAC) =0. The problem we want to solve here is to clarify the necessary and sufficient condition on A under which BAC is real anti-hermitian (i.e., real anti-symmetric), tr (BAC) =0 holds, and the mapped Hamiltonian preserves the total Fock number, so that the linear differential equation for y becomes a Schrödinger equation that can be block diagonalized in the Fock basis. For a given linear differential equation d/dtx(t) = A x (t) with any initial value x (0) ∈ℝ^N, where A ∈ℝ^N × N, there exists (B, C) ∈ℝ^M × N×ℝ^N × M with rank B = N and CB=I_N such that, for the linear differential equation for y(t):=Bx(t), d/dty(t)= BAC y(t), the mapped Hamiltonian preserves the total Fock number and div (BAC y)=0, i.e., tr (BAC) =0, holds, iff A is pure imaginary diagonalizable. We prove this in Appendix <ref>. Similarly to Sec. <ref>, we can take C to be C_B :=(B^†B)^-1B^† for a given one-way reversible linear transformation B. We here call a transformation B ∈ℝ^M × N that results in a real anti-symmetric matrix BAC_B via the Koopman-von Neumann embedding a Koopman-von Neumann transforming matrix of A. Now we consider the following problem of solving a linear differential equation defined by a pure imaginary diagonalizable matrix A, using the Koopman-von Neumann embedding, as in Sec. <ref>. Let A be pure imaginary diagonalizable and B be a Koopman-von Neumann transforming matrix of A.
Let x(t) be a solution of d/dtx (t) = A x(t) and define the normalized state |ψ(t)⟩ = 1/‖ B x (t) ‖ B x(t). Let the observable O ∈ℂ^M × M be a hermitian matrix. Assume we are given an (α_ Koo, a_ Koo, 0)-block encoding U_H_ Koo of H_ Koo := iBAC_B, a (β_ Koo, b_ Koo, 0)-block encoding U_O of O, and oracle access to a unitary U_ ini that prepares the initial state, i.e., U_ ini|0⟩↦|ψ(0)⟩. Given t ≥ 0 and ε > 0, the goal is to estimate an expectation value that is ε-close to ⟨ψ(t) | O | ψ(t)⟩. From Corollary <ref> in Appendix <ref>, if B is a Koopman-von Neumann transforming matrix of a pure imaginary diagonalizable A, then B is also a Carleman transforming matrix of A. Thus, we can solve Problem <ref> in the same way as Problem <ref>. Let δ >0. Problem <ref> can be solved with probability at least 1-δ by a quantum algorithm that makes 𝒪 (β_ Koolog(1/δ)/ε) uses of U_O, U_O^†, controlled-U_O, controlled-U_O^†, U_ ini, U_ ini^† and controlled-U_H_ Koo or its inverse, and 𝒪( β_ Koolog(1/δ)/ε(α_ Koo t + log (β_ Koo/ε)/log (e+log (β_ Koo/ε)/α_ Koo t)) ) uses of U_H_ Koo or its inverse, with 𝒪 (log M) qubits. We note that when considering the case of d=2 in the quantum solvable ODE of Ref. <cit.>, the quantum solvable ODE becomes a linear differential equation and H_ Koo becomes a sparse Hamiltonian. We also note that Ref. <cit.> provides the computational complexity required for the construction of H_ Koo, so by combining it with Theorem <ref>, we can obtain a more detailed computational complexity. So far, we have considered the cases where linear differential equations can be transformed, by a one-way reversible linear transformation (i.e., a Carleman or Koopman-von Neumann transforming matrix), into the Schrödinger equation through the Carleman or Koopman-von Neumann embedding. Interestingly, as a result of the discussion above, we found that in both cases the condition for reducing to the Schrödinger equation is the same: A must be pure imaginary diagonalizable. In contrast to the existing approach of solving nonlinear differential equations using the quantum linear system solver <cit.>, a distinctive feature of the present approach is that it requires neither information about the condition number nor a discretization in time of the given linear differential equations. Once the Carleman transforming matrix or Koopman-von Neumann transforming matrix is established, and a block encoding of the mapped Hamiltonian is given, the present approach allows us to simulate the linear differential equation by Hamiltonian simulation. On the other hand, when the original linear differential equations are sparse, the sparsity is preserved when using the linear system solver to solve linear differential equations <cit.>. However, in our approach, it is not guaranteed that the obtained differential equation (the Schrödinger equation) will also be sparse, which is a potential drawback. In the next section, we will explore the relation between the Carleman and Koopman-von Neumann embeddings and see how the Carleman and Koopman-von Neumann transforming matrices can be constructed from each other.§ THE RELATION BETWEEN CARLEMAN AND KOOPMAN-VON NEUMANN EMBEDDING Next, we investigate how a Carleman transforming matrix and a Koopman-von Neumann transforming matrix can be constructed from each other. Corollary <ref> in Appendix <ref> tells us one way to transform a Koopman-von Neumann transforming matrix into a Carleman transforming matrix. We now search for the converse: a way to transform a Carleman transforming matrix into a Koopman-von Neumann transforming matrix.
Let A ∈ℝ^N × N be pure imaginary diagonalizable, and B ∈ℂ^M × N be a Carleman transforming matrix of A. Then, B^'[ ℜ𝔢 B; ℑ𝔪 B ]∈ℝ^2M × N is a Koopman-von Neumann transforming matrix. We prove in Appendix <ref>. Theorem <ref> also tells us Koopman-von Neumann embedding is a “decomplixification” (converse of the complexification) of Carleman embedding in some sense. Next, we investigate the sparsity of the matrices obtained via the Carleman transforming matrix and the Koopman-von Neumann transforming matrix, when they are applied to a given pure imaginary diagonalizable matrix. Corollary <ref> shows the Koopman-von Neumann transforming matrix is also the Carleman transforming matrix. Considering them as the same, the sparsity of the Hamiltonian transformed by the Carleman transforming matrix is preserved when compared to the Hamiltonian transformed by the Koopman-von Neumann transforming matrix. On the other hand, when constructing the Koopman-von Neumann transforming matrix from the Carleman transforming matrix using the method outlined in Theorem <ref>,the sparsity of the Hamiltonian transformed by the Carleman transforming matrix is inherited by the Hamiltonian transformed by the Koopman-von Neumann transforming matrix under some assumptions, as stated in Proposition <ref>. Let A ∈ℝ^N × N be pure imaginary diagonalizable, and B ∈ℂ^M × N be a Carleman transforming matrix of A. If B^† B ∈ℝ^N × N holds, BAC_B is s-sparse and BAC_B^* is s^'-sparse, then B^' AC_B^' is 2(s+s^')-sparse, where B^'[ ℜ𝔢 B; ℑ𝔪 B ]∈ℝ^2M × N.Since BAC_B is s-sparse, ℜ𝔢 (BAC_B) and ℑ𝔪 (BAC_B) are s-sparse. Similarly, since BAC_B^* is s^'-sparse, ℜ𝔢 (BAC_B^*) and ℑ𝔪 (BAC_B^*) are s^'-sparse. Since A and B^† B are real matrices and C_B = (B^† B)^-1 B^† by Lemma <ref>, ℜ𝔢 (BAC_B)= (ℜ𝔢 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ + (ℑ𝔪 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤ ℑ𝔪 (BAC_B)= (ℑ𝔪 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ - (ℜ𝔢 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤ ℜ𝔢 (BAC_B^*)= (ℜ𝔢 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ - (ℑ𝔪 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤ ℑ𝔪 (BAC_B^*)= (ℑ𝔪 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ + (ℜ𝔢 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤holds. Thus,(ℜ𝔢 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ = 1/2(ℜ𝔢 (BAC_B) + ℜ𝔢 (BAC_B^*) ) (ℑ𝔪 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤ = 1/2(ℜ𝔢 (BAC_B) - ℜ𝔢 (BAC_B^*) ) (ℑ𝔪 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ = 1/2(ℑ𝔪 (BAC_B^*) + ℑ𝔪 (BAC_B) ) (ℜ𝔢 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤ = 1/2(ℑ𝔪 (BAC_B^*) - ℑ𝔪 (BAC_B) )holds.Then, (ℜ𝔢 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤, (ℑ𝔪 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤, (ℑ𝔪 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ and (ℜ𝔢 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤ are (s+s^')-sparse. B^' AC_B^' is B^' AC_B^' = B^' A(B^'† B^')^-1 B^'†= [ ℜ𝔢 B; ℑ𝔪 B ] A (B^† B)^-1[ (ℜ𝔢 B)^⊤ (ℑ𝔪 B)^⊤ ]= [ (ℜ𝔢 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ (ℜ𝔢 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤; (ℑ𝔪 B) A (B^† B)^-1 (ℜ𝔢 B)^⊤ (ℑ𝔪 B) A (B^† B)^-1 (ℑ𝔪 B)^⊤ ],so B^' AC_B^' is 2(s+s^')-sparse.Proposition <ref> showsthere is the possibility thatthe way to solve a BQP-complete problem efficiently with Carleman embedding and sparse Hamiltonian simulation converts easily to that with Koopman-von Neumann embedding and sparse Hamiltonian simulation.§ CLASSICAL QUADRATIC HAMILTONIAN DYNAMICSNext, we focus on the case where the differential equation is given as Hamilton's canonical equation of a quadratic classical Hamiltonian, using the above results. The authors of Ref. <cit.> regard the following equation as the generalized form:M d^2/dt^2x = -F x,whereMis a positive matrix,Fis a positive-semidefinite matrix, andx=(x_1, …, x_N)^⊤. Eq. 
(<ref>)'s HamiltonianH(q_1, …, q_N, p_1, …, p_N), whereq_j=x_j, p_j=m_j ẋ_j (j ∈{1, …, N})andm_j > 0(j ∈{1, …, N})is the point positive mass, isH(q_1, …, q_N, p_1, …, p_N)= [ q_1 q_N p_1 p_N ][ H_1 O_N × N; O_N × N H_2 ][ q_1; ⋮; q_N; p_1; ⋮; p_N ],whereH_1 ≥ 0andH_2>0. This Hamiltonian's canonical equation can be mapped into the Schrödinger equation according to Ref. <cit.>. Since the HamiltonianH(q_1, …, q_N, p_1, …, p_N)is imposed strong constraints, it is expected that more general Hamiltonian's canonical equation can be mapped to the Schrödinger equation. Indeed, we show the quadratic Hamiltonian whose quadratic form is positive or negative can be mapped to the Schrödinger equation. Furthermore, we provide a nontrivial example that cannot be treated by the way in Ref. <cit.> but can be mapped to the Schrödinger equation in our framework. §.§ General CaseWe investigate a sufficient condition on classical quadratic Hamiltonian so thatwe can map the corresponding Hamiltonian's canonical equation to the Schrödinger equation. Assume that the HamiltonianH(q_1, …, q_N, p_1, …, p_N)is given that is a quadratic form of generalized position and momentumq_1, …, q_N, p_1, …, p_N(N ∈ℤ_>0)as follows:H(q_1, …, q_N, p_1, …, p_N) = [ q_1 … q_N p_1 … p_N ]H̃[ q_1; ⋮; q_N; p_1; ⋮; p_N ],whereH̃∈ℝ^2N × 2Nis taken as a symmetric matrix without loss of generality. Hamilton's canonical equation ofH(q_1, …, q_N, p_1, …, p_N)isd/dt[ q_1; ⋮; q_N; p_1; ⋮; p_N ] = A [ q_1; ⋮; q_N; p_1; ⋮; p_N ],whereA 2 [ O_N × N I_N;-I_N O_N × N ]H̃.Now, we assume thatH̃ > 0. According to Lemma <ref> in Appendix <ref>, there existsB ∈ℝ^M × 2N with rank B =2Nsuch thatH̃ = B^⊤ B. Eq. (<ref>) becomed/dt B [ q_1; ⋮; q_N; p_1; ⋮; p_N ] = B A C_B B [ q_1; ⋮; q_N; p_1; ⋮; p_N ]= ( 2 B [ O_N × N I_N;-I_N O_N × N ] B^⊤) B [ q_1; ⋮; q_N; p_1; ⋮; p_N ],whereC_B = (B^† B)^-1 B^† = (B^⊤ B)^-1 B^⊤ = H̃^-1 B^⊤from Lemma <ref>. Since2 B [ O_N × N I_N;-I_N O_N × N ] B^⊤is a real anti-symmetric matrix, thus the following Proposition <ref> holds by Proposition <ref> in Appendix <ref>. Under the above setting, if H̃ > 0 holds,A 2 [ O_N × N I_N;-I_N O_N × N ]H̃ becomes a pure imaginary diagonalizable matrix. Next, we assume that-H̃ > 0. From Lemma <ref>, there existsB ∈ℝ^M × 2N with rank B =2Nsuch that-H̃ = B^⊤ B. Eq. (<ref>) becomed/dt (-B) [ q_1; ⋮; q_N; p_1; ⋮; p_N ] = (-B) ( 2 [ O_N × N I_N;-I_N O_N × N ]H̃) C_-B (-B) [ q_1; ⋮; q_N; p_1; ⋮; p_N ]= ( 2 (-B) [ O_N × N-I_N; I_N O_N × N ] (-B)^⊤) (-B) [ q_1; ⋮; q_N; p_1; ⋮; p_N ],whereC_-B = ((-B)^† (-B))^-1 (-B)^† = ((-B)^⊤ (-B))^-1 (-B)^⊤ = H̃^-1 B^⊤from Lemma <ref>. Since2 (-B) [ O_N × N-I_N; I_N O_N × N ] (-B)^⊤is a real anti-symmetric matrix, the following Proposition <ref> holds by Proposition <ref> in Appendix <ref>. Under the above setting, if -H̃ > 0 holds,A 2 [ O_N × N I_N;-I_N O_N × N ]H̃ becomes a pure imaginary diagonalizable matrix.According to Proposition <ref> and <ref>,H̃being a positive or negative matrix is the sufficient condition on classical quadratic Hamiltonian so thatwe can map the corresponding Hamiltonian's canonical equation to the Schrödinger equation.§.§ Concrete ExampleWe provide an example that cannot be dealt with the method in Ref. <cit.> but can be in the present framework. Let HamiltonianH(q_1, q_2, p_1, p_2)beH(q_1, q_2, p_1, p_2)= q_1^2 + 2 q_2^2 + p_1^2 + 2 p_2^2 + 2 q_1 p_2 + 2q_2 p_1 = [ q_1 q_2 p_1 p_2 ]H̃[ q_1; q_2; p_1; p_2 ],whereH̃ = [ 1 0 0 1; 0 2 1 0; 0 1 1 0; 1 0 0 2 ]. 
From the Hamilton's canonical equation ofH(q_1, q_2, p_1, p_2), we obtaind^2/dt^2[ q_1; q_2; p_1; p_2 ] = - [000 -4;0 1240;0 -400;400 12 ][ q_1; q_2; p_1; p_2 ] -A^'[ q_1; q_2; p_1; p_2 ].SinceA^'in Eq. (<ref>) is not a real symmetric positive semidefinite matrix,Eq. (<ref>) cannot be applied to Theorem 4 in Ref. <cit.> that generalizes the results in Ref. <cit.>. Thus, Eq. (<ref>) cannot be dealt with the method in Ref. <cit.>.On the other hand, sinceH̃is a positive matrix, the Hamilton's canonical equation ofH(q_1, q_2, p_1, p_2)is mapped into Schrödinger equation under an appropriate Carleman transforming matrix or Koopman-von Neumann transforming matrix from Proposition <ref>. Indeed, we can take a Koopman-von Neumann transforming matrixBasB= 1/√(5)[ 2 0 0 1; 0 3 1 0; 0 1 2 0; 1 0 0 3 ]becauseB^⊤ B Ais a real anti-symmetric matrix, and Proposition <ref> and <ref> in Appendix <ref> holds. Then the mapped Hamiltonian becomes2i B [ O_2 × 2 I_2;-I_2 O_2 × 2 ] B^⊤ = 2i/5[0 -130;1008; -3001;0 -8 -10 ].Therefore, the Hamilton's canonical equation of the aboveH(q_1, q_2, p_1, p_2)is a nontrivial example that cannot be treated in the method of Ref. <cit.> but can be mapped to the Schrödinger equation in our framework.§ COUPLED HARMONIC OSCILLATORSIn this section, we revisit the coupled harmonic oscillators described in Ref. <cit.>. We demonstrate a sufficient condition to map the Hamilton's canonical equation of the coupled harmonic oscillators to the Schrödinger equation. We also show the BQP-complete problem in Ref. <cit.> belongs to the problem class that can be solved efficiently with Koopman-von Neumann embedding.§.§ Carleman and Koopman-von Neumann Embedding for Coupled Harmonic OscillatorsWe investigate a sufficient condition on coupled harmonic oscillators so thatwe can map the corresponding Hamiltonian's canonical equation to the Schrödinger equation. Coupled harmonic oscillators in Ref. <cit.> are given bym_j ẍ_j(t) =∑_k=1 k ≠ j^N κ_jk( x_k(t)-x_j(t) ) - κ_jj x_j(t)(j ∈{1, …, N}),wherem_jis the point positive mass, andκ_jkis the spring constant satisfyingκ_jk=κ_kj≥ 0. Eq. (<ref>) has the HamiltonianH(q_1, …, q_N, p_1, …, p_N)= ∑_j=1^N 1/2κ_jj q_j^2+ ∑_1 ≤ j < k ≤ N1/2κ_jk (q_j - q_k)^2+ ∑_j=1^N 1/2 m_j p_j^2 ,whereq_j=x_j,p_j=m_j ẋ_j. The Hamiltonian can be represented asH(q_1, …, q_N, p_1, …, p_N) = 1/2[ q_1 q_N p_1 p_N ][ ∑_j=1^N κ_1j-κ_12 -κ_1N00;-κ_21⋱⋱⋮⋮⋱ ⋮;⋮⋱⋱ -κ_N-1 N⋮ ⋱⋮;-κ_N1-κ_N N-1 ∑_j=1^N κ_Nj00;001/m_10 0;⋮⋱ ⋮0⋱⋱⋮;⋮ ⋱⋮⋮⋱⋱0;000 01/m_N ][ q_1; ⋮; ⋮; q_N; p_1; ⋮; ⋮; p_N ].Hamilton's canonical equation ofH(q_1, …, q_N, p_1, …, p_N)becomesd/dt[ q_1; ⋮; ⋮; q_N; p_1; ⋮; ⋮; p_N ] = [ 0 0 1/m_1 0 0; ⋮ ⋱ ⋮ 0 ⋱ ⋱ ⋮; ⋮ ⋱ ⋮ ⋮ ⋱ ⋱ 0; 0 0 0 0 1/m_N; -∑_j=1^N κ_1jκ_12κ_1N 0 0;κ_21 ⋱ ⋱ ⋮ ⋮ ⋱ ⋮; ⋮ ⋱ ⋱ κ_N-1 N ⋮ ⋱ ⋮;κ_N1 κ_N N-1 -∑_j=1^N κ_Nj 0 0 ][ q_1; ⋮; ⋮; q_N; p_1; ⋮; ⋮; p_N ] A [ q_1; ⋮; q_N; p_1; ⋮; p_N ].Since [ 1/m_1 0 0; 0 ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ 0; 0 0 1/m_N ]is a positive matrix, according to Proposition <ref>, if [ ∑_j=1^N κ_1j-κ_12 -κ_1N;-κ_21⋱⋱⋮;⋮⋱⋱ -κ_N-1 N;-κ_N1-κ_N N-1 ∑_j=1^N κ_Nj ]is a positive matrix, thenH̃is a positive matrix and the Hamilton's canonical equation ofH(q_1, …, q_N, p_1, …, p_N)will be mapped to the Schrödinger equation. 
Therefore, [ ∑_j=1^N κ_1j-κ_12 -κ_1N;-κ_21⋱⋱⋮;⋮⋱⋱ -κ_N-1 N;-κ_N1-κ_N N-1 ∑_j=1^N κ_Nj ]being a positive matrix is the sufficient condition on coupled harmonic oscillators so thatwe can map the corresponding Hamiltonian's canonical equation to the Schrödinger equation.§.§ BQP-complete Problem that can be Solved by Koopman-von Neumann EmbeddingWe will show there is a BQP-complete problem in the class that can be solved efficiently by Koopman-von Neumann embedding. The authors of Ref. <cit.> demonstrate thatthe problem of estimating the ratio of the kinetic energy of a specific coupled harmonic oscillator to the total energy of the system (CHO problem) can be solved efficiently using Carleman embedding and Hamiltonian simulation. They also prove that the CHO problem includes a BQP-complete problem. From Theorem <ref> and <ref>,the class of linear differential equations that can be mapped to Schrödinger equation by Carleman embedding is identical to that class by Koopman-von Neumann embedding. Therefore, it appears that the CHO problem can be solved efficiently with Koopman-von Neumann embedding. However, it is unclear whether the CHO problem can be resolved efficiently using Koopman-von Neumann embedding because it is not proved that the computational complexity of solving the CHO problem with Koopman-von Neumann embedding is equivalent to that with Carleman embedding. To show the CHO problem can be solved efficiently using Koopman-von Neumann embedding,we need to prove the mapped Hamiltonian via Koopman-von Neumann embeddingH_ Koocan be constructed efficiently. Since Ref. <cit.> showsH_ Carcan be constructed efficiently, we only have to demonstrate the existence of the unitary matrixUthat satisfiesH_ Koo = U H_ Car U^†and can be constructed efficiently.The Carleman transforming matrixB ∈ℂ^N(N+3)/2 × 2NofAin Ref. <cit.> satisfies Eq. (<ref>) and (<ref>). Then, we construct a Koopman-von Neumann transforming matrix fromB. From Theorem <ref>,B^'[ ℜ𝔢 B; ℑ𝔪 B ]is a Koopman-von Neumann transforming matrix ofA. LetB^''∈ℝ^N(N+3)/2 × Nbe the matrix obtained by removing columns with all-zero elements fromB^', satisfyingB^''[ x_1(t);⋮; x_N(t); ẋ_1(t);⋮; ẋ_N(t) ] = [√(m_1)ẋ_1(t); ⋮;√(m_N)ẋ_N(t);√(κ_11) x_1(t); ⋮;√(κ_NN) x_N(t);√(κ_12)( x_1(t)-x_2(t) ); ⋮; √(κ_N-1 N)( x_N-1(t)-x_N(t) ) ].Since we can check easilyB^''AC_B^''is a real anti-symmetric matrix using the factB^''is a Koopman-von Neumann transforming matrix ofA,B^''is a Koopman-von Neumann transforming matrix ofA. Now we focus on the relation betweenBandB^'',BandB^''satisfyB^'' =UB, whereU [1; ⋱ ;1;-i ;⋱;-i ; -i; ⋱ ; -i ].The mapped Hamiltonian via the Carleman embeddingH_ Carin Ref. <cit.> isH_ Car = i BAC_B.The mapped Hamiltonian via the Koopman-von Neumann embedding withB^''isH_ Koo = i B^''AC_B^''.Then, the relation betweenH_ CarandH_ KooisH_ Koo = U H_ Car U^†.SinceUhas 1 and-iregularly arranged on the diagonal elements, we can constructUefficiently. Then, there exists the unitary matrixUthat satisfiesH_ Koo = U H_ Car U^†and can be constructed efficiently. Therefore, we can solve the CHO problem efficiently, which includes the BQP-complete problem, by using Koopman-von Neumann embedding and a Hamiltonian simulation.§ CONCLUSION AND DISCUSSIONIn this study, we investigated the relation between Carleman and Koopman-von Neumann embedding, especially focusing on linear differential equations. 
First, we investigated the necessary and sufficient condition for embeddinga given linear differential equation into the Schrödinger equation via Carleman embedding and Koopman-von Neumann embedding. In both cases, the condition for reducing to the Schrödinger equation is the same;Ais pure imaginary diagonalizable. We also discussed the computational complexity of a related quantum algorithm to solve the linear differential equation using Carleman or Koopman-von Neumann embedding. Next, we applied the above result to a special case where the linear differential equationare given as a Hamilton's canonical equation with a quadratic classical Hamiltonian. We found that the matrixH̃defining the classical quadratic Hamiltonian is positive or negative, then the Hamilton's canonical equation can be mapped intothe Schrödinger equation. The coupled harmonic oscillator system discussed in Ref. <cit.> is a special case of this framework. Focusing on this relationship, we showed the BQP-complete problem in Ref. <cit.> belongs to the problem class that can be resolved efficiently using Koopman-von Neumann embedding. This clearly indicates thatsolving differential equations using the Koopman-von Neumann embedding <cit.> includes nontrivial problems even in the linear case. Furthermore, we found a nontrivial example that cannot be described as the coupled harmonic oscillator in Ref. <cit.> butcan be mapped to the Schrödinger equation in our framework.Our results have clarified the conditions under which general linear equations can be mapped to the Schrödinger equation by means of Carleman or Koopman-von Neumann embedding. This has laid the foundation for the concrete construction of future quantum algorithms for solving linear differential equations using Hamiltonian simulation.Furthermore, since the Carleman and Koopman-von Neumann embeddings are compatible in the linear domain,we expect that a perturbative approach of these arguments to the nonlinear domain will be useful for exploring quantum algorithms for nonlinear differential equations. This work is supported by MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant No. JPMXS0120319794 and JST COI-NEXT Grant No. JPMJPF2014. apsrev4-1§ CARLEMAN EMBEDDINGFollowing Refs. <cit.>, we introduce the Carleman embedding which is the method of transforming a nonlinear differential equation into a linear differential equation. First, we prepare the tensor product ofx=(x_1, …, x_N)^⊤∈ℝ^Nx^[j]x ⊗…⊗ x_j .For example, whenj=1, 2,x^[1], x^[2]arex^[1]=x,x^[2]=(x_1^2, x_1 x_2, …, x_N x_N-1, x_N ^2)^⊤. Assume that the differential equation is given byd/dt[ x_1; ⋮; x_N ] = [ p_1(x_1, …, x_N);⋮; p_n(x_1, …, x_N) ],wherep_1(X_1, …, X_N), …, p_n(X_1, …, X_N)are real polynomials in variablesX_1, …, X_N with p_1(0)=…=p_n(0)=0. Letkbek max_j ∈{1, …, N} p_j(X_1, …, X_N), Eq. (<ref>) is rewritten byd/dt[ x_1; ⋮; x_N ] =F_1 x+ F_2 x^[2]+… +F_k x^[k],whereF_l ∈ℝ^N × N^l (l ∈{1, …, k})are suitable matrices. Now, we define the transfer matrixA_j+l-1^j ∑_ν =1^j I_N ⊗…⊗F_l_ν⊗…⊗ I_N^j∈ℝ^N^j × N^j+l-1 (j ∈ℤ_>0, l ∈{1, …, k}),then we can transform Eq. (<ref>) intod/dt x^[j] = ∑_l=0^k-1 A_j+l^j x^[j+l] (j ∈ℤ_>0).Therefore, we obtaind/dt[ x^[1]; x^[2]; x^[3]; ⋮ ] = [ A_1^1 A_2^1 A_3^1 A_k^1 0 0; 0 A_2^2 A_3^2 A_k^2 A_k+1^2 0; 0 0 A_3^3 A_k^3 A_k+1^3 A_k+2^3; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ][ x^[1]; x^[2]; x^[3]; ⋮ ].Let the matrix in the right hand of Eq. 
(<ref>) be𝒜, and each components ofx^[j]be explicitly, then we obtaind/dt[ x_1; ⋮; x_n;x_1 ^2; x_1 x_2; ⋮; x_n x_n-1;x_n ^2; ⋮ ] = 𝒜[ x_1; ⋮; x_n;x_1 ^2; x_1 x_2; ⋮; x_n x_n-1;x_n ^2; ⋮ ].The above method is called Carleman embedding, which transforms a nonlinear differential equation (<ref>) into the linear differential equation of the form like Eq. (<ref>) and (<ref>).§ KOOPMAN-VON NEUMANN EMBEDDINGFollowing Refs. <cit.>, we introduce the Koopman-von Neumann embedding which is the method of transforming a nonlinear differential equation into a linear differential equation. First, we introduce an orthonormal polynomial sequence and the Rodrigues' Formula. Then, we show the details of the Koopman-von Neumann embedding with the Hermite polynomial. We note that the Hermite polynomial{H_n(x)}_n ∈ℤ_≥ 0is the orthonormal polynomial sequence and satisfies the Rodrigues' formula (<ref>). §.§ Rodrigues' FormulaFirst, we define a weight function and an orthonormal polynomial sequence as described in Ref. <cit.> with some modification. Let a,b ∈ℝ∪{±∞} with a<b. A function w(a,b) →ℝ_≥ 0 is called a weight function, when w satisfies w(x)>0 on almost everywhere x ∈ (a,b) and ∫_a^b | x |^n w(x) dx < ∞ (n ∈ℤ_≥ 0).Let a,b ∈ℝ∪{±∞} with a<b, and w be a weight function on (a,b). A sequence {p_n(x)}_n=0^∞⊂ℝ[x] is called an orthonormal polynomial sequence, when {p_n(x)}_n=0^∞ satisfies the following two conditions,* p_n(x) is a polynomial of degree n for n ∈ℤ_≥ 0,* ∫_a^b p_m(x)p_n(x) w(x) dx = δ_m,n for m,n ∈ℤ_≥ 0. Next, we define the Rodrigues' formula that defines a typical orthonormal polynomial sequence as described in Ref. <cit.> with some modification. Let a,b be in ℝ∪{±∞} with a<b, and w be the weight function on (a,b) that satisfies w^' (x)/w(x) = (cx+d)/ X(x), where c,d ∈ℝ and X(x) ∈ℝ[x]with X(x) ≤ 2. Moreover, we assume that w(x), X(x) also satisfy ( d/dx)^k [ w(x)X(x)^n ] =0 at x=a,b for 0 ≤ k < n. An orthonormal polynomial sequence {p_n(x)}_n=0^∞ is defined by a formula of typep_n(x) = 1/K_n1/w(x)d^n/dx^n[ w(x) X(x)^n ](n ∈ℤ_≥ 0),where K_n ∈ℝ∖{0} is an appropriate constant. Eq. (<ref>) is called the Rodrigues' formula.§.§ Koopman-von Neumann Embedding with the Hermite PolynomialNow we select the Hermite polynomial{H_n(x)}_n ∈ℤ_≥ 0as the orthonormal polynomial sequence that satisfies the Rodrigues' formula (<ref>) in Appendix <ref>. We assume that the following differential equations are given,d/dt x_j = F_j (x)(j ∈{1, …, N }),wherex=(x_1, …, x_N)^⊤, and analytic functionF ℝ^N ∋x↦( F_1(x), …, F_N(x) )^⊤∈ℝ^N. Leta_j^†, a_j(j ∈{1, …, N})bejth bosonic creation and annihilation operators, and{|n_j⟩}_n_j=0^∞be an eigenvector sequence of the number operatorN_j = a_j^† a_jsuch thatN_j |n_j⟩=n_j |n_j⟩. Now we take the Hilbert space spanned by{⊗_j=1^N |n_j⟩}_n_1, …, n_N ∈ℤ_≥ 0. We introduce the special vector|x⟩=⊗_j=1^N |x_j⟩ (x_1, …, x_N ∈ (a,b) )defined by |x⟩⊗_j=1^N ( w(x_j)^1/2∑_n_j=0^∞ H_n_j (x_j) |n_j⟩).We define the hermitian operatorx̂_j(j ∈{1, …, N})that is represented bya_j^†, a_jand satisfiesx̂_j |x⟩= x_j |x⟩forx_1, …, x_N ∈ (a,b). We also define the hermitian operatork̂_j(j ∈{1, …, N})that is represented bya_j^†, a_jand satisfies[ x̂_j, k̂_l ] = i δ_j, lforj,l ∈{1, …, N}. Now, the solutionxof Eq. (<ref>) satisfies the differential equationi d/dt|̃x̃⟩̃ = Ĥ|̃x̃⟩̃,where the operatorĤisĤ=∑_j=1^N 1/2( k̂_j F_j(x̂_j) +F_j(x̂_j) k̂_j ),and the vector|̃x̃⟩̃is|̃x̃⟩̃ = exp( 1/2∫_0^t divF (x(τ)) d τ) |x⟩.To obtain the Schrödinger equation of|x⟩, we assume thatdiv F(x) = 0forx∈ (a,b)^N. From Eq. 
§ PROOFS OF THE THEOREMS IN SECTIONS <REF>, <REF> AND <REF>

§.§ The Proof of Theorem <ref>

First, we characterize anti-hermitian matrices.

For N ∈ ℤ_{>0}, let A ∈ ℂ^{N×N}. The following conditions are equivalent: * A is an anti-hermitian matrix. * For all |x⟩, |y⟩ ∈ ℂ^N, ⟨x|(A+A^†)|y⟩ = 0.

(1. ⟹ 2.) Since A is an anti-hermitian matrix, A+A^† = 0. Then, for all |x⟩, |y⟩ ∈ ℂ^N, ⟨x|(A+A^†)|y⟩ = 0. (2. ⟹ 1.) Let |e_j⟩ ∈ ℂ^N (j ∈ {1, …, N}) be the vector whose j-th component is 1 and whose other components are 0. From condition 2., for all j, k ∈ {1, …, N}, ⟨e_j|(A+A^†)|e_k⟩ = 0, and hence A+A^† = 0. Thus, A is an anti-hermitian matrix.

Next, we characterize the condition of Theorem <ref> using C_B.

Let A ∈ ℝ^{N×N}. For B ∈ ℂ^{M×N} with rank B = N, the following conditions are equivalent: * There exists C ∈ ℂ^{N×M} with CB=I_N such that BAC is an anti-hermitian matrix. * BAC_B is an anti-hermitian matrix.

The implication (2. ⟹ 1.) is clear. To prove the converse, assume that condition 1. holds; we show that BAC_B = BAC. First, notice that C_B B = C B = I_N. This means that for any vector |x⟩ ∈ Im B, we have C_B |x⟩ = C |x⟩. Consequently, we have BAC_B |x⟩ = BAC |x⟩ for |x⟩ ∈ Im B. Since BAC is an anti-hermitian matrix, we can apply Lemma <ref>. Therefore, for all pairs of vectors (|x⟩, |y⟩) ∈ (Im B) × (Im B)^⊥, we have ⟨x|(BAC + (BAC)^†)|y⟩ = 0. Now, because |y⟩ ∈ (Im B)^⊥, we can conclude that ⟨x|(BAC)^†|y⟩ = (BAC|x⟩)^† |y⟩ = 0. Consequently, ⟨x|BAC|y⟩ = 0. We also have ⟨z|BAC|y⟩ = 0 for any |z⟩ ∈ (Im B)^⊥. Given that ℂ^M = (Im B) ⊕ (Im B)^⊥, for all pairs of vectors (|x⟩, |y⟩) ∈ ℂ^M × (Im B)^⊥, we still have ⟨x|BAC|y⟩ = 0. Therefore, BAC|y⟩ = 0 for all |y⟩ ∈ (Im B)^⊥. According to Lemma <ref>, it is evident that C_B|_{(Im B)^⊥} = 0. Consequently, for all |y⟩ ∈ (Im B)^⊥, we have BAC_B|y⟩ = 0 and thus BAC_B|y⟩ = BAC|y⟩ for all |y⟩ ∈ (Im B)^⊥. Since ℂ^M = (Im B) ⊕ (Im B)^⊥, for all vectors |x⟩ ∈ ℂ^M we have BAC_B|x⟩ = BAC|x⟩; therefore, BAC_B = BAC. By condition 1., BAC is an anti-hermitian matrix, which implies that BAC_B is also an anti-hermitian matrix. As a result, condition 2. is satisfied.

From the proof of Proposition <ref>, the following Corollary <ref> holds.

Under the setting of Proposition <ref>, if BAC is an anti-hermitian matrix, then BAC_B = BAC.

Next, we investigate the condition equivalent to BAC_B being an anti-hermitian matrix.

Let A ∈ ℝ^{N×N}. For B ∈ ℂ^{M×N} with rank B = N, the following conditions are equivalent: * BAC_B is an anti-hermitian matrix. * B^† BA is an anti-hermitian matrix.

(1. ⟹ 2.) Since BAC_B is an anti-hermitian matrix and C_B B = I_N by Lemma <ref>, B^†(BAC_B)B = B^† BA is an anti-hermitian matrix. Therefore, condition 2. holds. (2. ⟹ 1.) Since rank B = N, B^† B is a regular matrix, as proved in Lemma <ref>. From condition 2., (B^† B)^{-1}(B^† BA)((B^† B)^{-1})^† = A(B^† B)^{-1} is an anti-hermitian matrix. Since C_B = (B^† B)^{-1} B^†, B(A(B^† B)^{-1})B^† = BAC_B is an anti-hermitian matrix. Therefore, condition 1. holds.
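The equivalences proved so far can also be illustrated numerically. The sketch below (illustrative only; the specific matrices are not taken from the text) picks a real A that is diagonalizable with spectrum in iℝ and a B ∈ ℂ^{M×N} with orthonormal columns, so that B^†B = I_N > 0, and checks that C_B B = I_N and that both BAC_B and B^†BA are anti-hermitian:

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [-1., 0., 0.],
              [0., 0., 0.]])        # rotation generator + zero mode: spectrum {i, -i, 0}
M, N = 6, 3
B, _ = np.linalg.qr(rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))  # B^†B = I_N
C_B = np.linalg.inv(B.conj().T @ B) @ B.conj().T   # C_B = (B^†B)^{-1} B^†
assert np.allclose(C_B @ B, np.eye(N))
X = B @ A @ C_B
Y = B.conj().T @ B @ A
assert np.allclose(X + X.conj().T, 0)              # BAC_B is anti-hermitian
assert np.allclose(Y + Y.conj().T, 0)              # B^†BA is anti-hermitian
print(np.round(np.linalg.eigvals(A), 12))          # eigenvalues on the imaginary axis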
Next, we prove the equivalence of the set of B^† B and the set of positive matrices.

{ B^† B | B ∈ ℂ^{M×N} with rank B = N } = { D ∈ ℂ^{N×N} | D > 0 } holds.

(⊂) Let B ∈ ℂ^{M×N} with rank B = N. Since B^† B satisfies B^† B ≥ 0 and B^† B is a regular matrix, as proved in Lemma <ref>, B^† B satisfies B^† B > 0. That is, B^† B ∈ { D ∈ ℂ^{N×N} | D > 0 }. (⊃) Let D ∈ ℂ^{N×N} with D > 0. Since D > 0, √D ∈ ℂ^{N×N} exists and satisfies √D > 0, which implies rank √D = N. We set B := [√D; O_{(M-N)×N}] ∈ ℂ^{M×N}. Since rank √D = N, rank B = N holds, and from the definition of B, B^† B = D holds. Then, D ∈ { B^† B | B ∈ ℂ^{M×N} with rank B = N }.

Proposition <ref> is proved by Lemma <ref>.

For A ∈ ℝ^{N×N}, the following conditions are equivalent: * There exists B ∈ ℂ^{M×N} with rank B = N such that B^† BA is an anti-hermitian matrix. * There exists D ∈ ℂ^{N×N} with D > 0 such that DA is an anti-hermitian matrix.

Finally, we show the equivalence of the condition involving only A.

For A ∈ ℝ^{N×N}, the following conditions are equivalent: * There exists D ∈ ℂ^{N×N} with D > 0 such that DA is an anti-hermitian matrix. * A is diagonalizable and σ_p(A) ⊂ iℝ.

(1. ⟹ 2.) Since D > 0, √D ∈ ℂ^{N×N} exists and satisfies √D > 0. DA is an anti-hermitian matrix, so √D^{-1}(DA)(√D^{-1})^† = √D A √D^{-1} is an anti-hermitian matrix. Since A is similar to √D A √D^{-1} and √D A √D^{-1} is anti-hermitian, A is diagonalizable and σ_p(A) = σ_p(√D A √D^{-1}) ⊂ iℝ. Therefore, condition 2. holds. (2. ⟹ 1.) According to condition 2., there exists P ∈ ℂ^{N×N} such that PAP^{-1} is a diagonal matrix. Since σ_p(A) ⊂ iℝ, PAP^{-1} is an anti-hermitian matrix. Since P is a regular matrix, we can apply the polar decomposition to P: there exist a unitary matrix U ∈ ℂ^{N×N} and a positive matrix P' ∈ ℂ^{N×N} (i.e., P' > 0) such that P = UP'. Since PAP^{-1} is an anti-hermitian matrix, U^†(PAP^{-1})U = U^†(UP'AP'^{-1}U^†)U = P'AP'^{-1} is also anti-hermitian, and hence P'(P'AP'^{-1})P'^† = P'^2 A is also anti-hermitian. P' satisfies P' > 0, so P'^2 > 0 holds. Letting D := P'^2 ∈ ℂ^{N×N}, D satisfies D > 0 and DA is an anti-hermitian matrix. Therefore, condition 1. holds.

From Propositions <ref>, <ref>, <ref> and <ref>, Theorem <ref> is proved.

§.§ The Proof of Theorem <ref>

First, we characterize anti-symmetric matrices.

For N ∈ ℤ_{>0}, let A ∈ ℝ^{N×N}. The following conditions are equivalent: * A is a real anti-symmetric matrix. * For all |x⟩ ∈ ℝ^N, ⟨x|A|x⟩ = 0.

(1. ⟹ 2.) Since A is a real anti-symmetric matrix, A+A^⊤ = 0. Then, for all |x⟩ ∈ ℝ^N, 0 = ⟨x|(A+A^⊤)|x⟩ = 2⟨x|A|x⟩, and thus ⟨x|A|x⟩ = 0. Therefore, condition 2. holds. (2. ⟹ 1.) From condition 2., for all |x⟩, |y⟩ ∈ ℝ^N,

⟨x|(A+A^⊤)|y⟩ = (1/2){ (⟨x|+⟨y|) A (|x⟩+|y⟩) - (⟨x|-⟨y|) A (|x⟩-|y⟩) } = 0.

Then A+A^⊤ = 0 holds; that is, A ∈ ℝ^{N×N} is a real anti-symmetric matrix.

Next, using C_B, we characterize the condition of Theorem <ref>.

Consider the differential equation d/dt x(t) = Ax(t) with an arbitrary initial value x(0) ∈ ℝ^N, where A ∈ ℝ^{N×N}. The following conditions are equivalent: * There exists (B, C) ∈ ℝ^{M×N} × ℝ^{N×M} with rank B = N and CB=I_N such that the linear differential equation for y(t) := Bx(t), d/dt y(t) = BAC y(t), satisfies the following: the mapped Hamiltonian preserves the total Fock number, and div(BACy) = 0, i.e., tr(BAC) = 0 holds. * There exists B ∈ ℝ^{M×N} with rank B = N such that BAC_B is a real anti-symmetric matrix.

(1. ⟹ 2.) According to condition 1., the Hamiltonian preserves the total Fock number, so the Hamiltonian can be block diagonalized with respect to the bases {|0 … 0⟩, |1 0 … 0⟩, …, |0 … 0 1⟩, |2 0 … 0⟩, |110 … 0⟩, …, |0 … 02⟩, …}.
Since div(BACy) = 0, the 0-th Hermite polynomial is H_0(x)=1, the 1-st Hermite polynomial is H_1(x)=2x, and the weight function of the Hermite polynomials is w(x)=exp(-x^2), the Schrödinger equation of the form of Eq. (<ref>) restricted to the subspace spanned by {|10…0⟩, …, |0…01⟩} becomes

i d/dt { exp( -∑_{j=1}^M y_j(t)^2/2 ) [ 2 y_1(t); ⋮; 2 y_M(t) ] } = Ĥ^{(1)} { exp( -∑_{j=1}^M y_j(t)^2/2 ) [ 2 y_1(t); ⋮; 2 y_M(t) ] },

where y(t) := (y_1(t), …, y_M(t))^⊤ and Ĥ^{(1)} is the Hamiltonian matrix restricted to the subspace spanned by {|10…0⟩, …, |0…01⟩}. Since the norm of the state vector in a Schrödinger equation is preserved,

‖ exp( -∑_{j=1}^M y_j(t)^2/2 ) [ 2 y_1(t); ⋮; 2 y_M(t) ] ‖ = 2 exp( -∑_{j=1}^M y_j(t)^2/2 ) √(∑_{j=1}^M y_j(t)^2)

is constant. By the continuity of y(t) and the fact that the equation e^{-x/2}√x = a (a ≥ 0) has only finitely many solutions x, ∑_{j=1}^M y_j(t)^2 is constant. Thus, Eq. (<ref>) can be transformed into the following form, denoted as Eq. (<ref>):

d/dt [ y_1(t); ⋮; y_M(t) ] = -i Ĥ^{(1)} [ y_1(t); ⋮; y_M(t) ].

Since y(t) = Bx(t), we can rewrite the equation as

d/dt B [ x_1(t); ⋮; x_N(t) ] = -i Ĥ^{(1)} B [ x_1(t); ⋮; x_N(t) ],

where x(t) := (x_1(t), …, x_N(t))^⊤. Additionally, from the equation d/dt x(t) = Ax(t), we can derive

d/dt B [ x_1(t); ⋮; x_N(t) ] = BA [ x_1(t); ⋮; x_N(t) ].

From Eqs. (<ref>) and (<ref>), we obtain the equation

-i Ĥ^{(1)} B [ x_1(t); ⋮; x_N(t) ] = BA [ x_1(t); ⋮; x_N(t) ].

By the continuity of (x_1(t), …, x_N(t))^⊤, taking t ↓ 0 shows that Eq. (<ref>) holds at t=0. Eq. (<ref>) at t=0 holds for any initial value x(0) ∈ ℝ^N, so BA = -i Ĥ^{(1)} B, and thus BAC_B = -i Ĥ^{(1)} B C_B holds. Since ℝ^M can be decomposed as ℝ^M = (Im B) ⊕ (Im B)^⊥, let |y⟩ = |y_1⟩ + |y_2⟩, where (|y_1⟩, |y_2⟩) ∈ (Im B) × (Im B)^⊥. C_B B = I_N by Lemma <ref> implies BC_B|_{Im B} = Id_{Im B}. Since C_B|_{(Im B)^⊥} = 0, BC_B|_{Im B} = Id_{Im B}, and Ĥ^{(1)} is a hermitian matrix, ⟨y|BAC_B|y⟩ becomes

⟨y|BAC_B|y⟩ = (⟨y_1| + ⟨y_2|) BAC_B (|y_1⟩ + |y_2⟩) = ⟨y_1|BAC_B|y_1⟩ = ⟨y_1|(-i Ĥ^{(1)} B C_B)|y_1⟩ = -i ⟨y_1|Ĥ^{(1)}|y_1⟩ ∈ iℝ.

BAC_B ∈ ℝ^{M×M} and |y⟩ ∈ ℝ^M imply ⟨y|BAC_B|y⟩ ∈ ℝ, so ⟨y|BAC_B|y⟩ ∈ ℝ ∩ iℝ = {0}; thus ⟨y|BAC_B|y⟩ = 0 holds for all |y⟩ ∈ ℝ^M. According to Lemma <ref>, BAC_B is an anti-symmetric matrix. Therefore, condition 2. holds.

(2. ⟹ 1.) First, we show tr(BAC_B) = 0. Since BAC_B is a real anti-symmetric matrix, tr(BAC_B) = 0 holds. Next, let BAC_B = (ã_{jk})_{j,k=1,…,M}. Since BAC_B is an anti-symmetric matrix, ã_{kj} = -ã_{jk} (j, k ∈ {1, …, M}) holds. Now, let us rephrase Eq. (<ref>) using the operators x̂_j = (a_j^† + a_j)/√2 and k̂_j = i(a_j^† - a_j)/√2 (j ∈ {1, …, M}), where a_j^†, a_j are the j-th bosonic creation and annihilation operators of Appendix <ref>. Eq. (<ref>) can be rewritten as follows:

Ĥ = (1/2){ ∑_{j=1}^M [i(a_j^†-a_j)/√2] ( ∑_{k=1}^M ã_{jk} (a_k^†+a_k)/√2 ) + ∑_{j=1}^M ( ∑_{k=1}^M ã_{jk} (a_k^†+a_k)/√2 ) [i(a_j^†-a_j)/√2] }
= (1/2){ ∑_{j=1}^M ∑_{k=1}^M ã_{jk} i(a_j^†-a_j)(a_k^†+a_k)/2 - ∑_{k=1}^M ∑_{j=1}^M ã_{kj} i(a_k^†+a_k)(a_j^†-a_j)/2 }
= (i/2) ∑_{j=1}^M ∑_{k=1}^M ã_{jk} (a_j^† a_k - a_j a_k^†).

This representation implies that the Hamiltonian preserves the total Fock number. Therefore, condition 1. holds.

From Proposition <ref>, the following Corollary <ref> holds.

Under the setting of Proposition <ref>, if B ∈ ℝ^{M×N} is a Koopman-von Neumann transforming matrix of A, then B is a Carleman transforming matrix of A.

Since B ∈ ℝ^{M×N} is a Koopman-von Neumann transforming matrix of A, BAC_B is a real anti-symmetric matrix according to Proposition <ref>. Since a real anti-symmetric matrix is an anti-hermitian matrix, BAC_B is an anti-hermitian matrix.
According to Proposition <ref> and the definition of the Carleman transforming matrix, B is a Carleman transforming matrix of A.

Focusing on the matrix representation of Ĥ with respect to {|10…0⟩, …, |0…01⟩}, the following Corollary <ref> holds.

Under the setting of Proposition <ref>, the matrix representation of -iĤ with respect to {|10…0⟩, …, |0…01⟩} coincides with BAC_B by using Eq. (<ref>).

The relation can be verified by direct calculation.

Next, we investigate the condition equivalent to BAC_B being a real anti-symmetric matrix.

Let A ∈ ℝ^{N×N}. For B ∈ ℝ^{M×N} with rank B = N, the following conditions are equivalent: * BAC_B is a real anti-symmetric matrix. * B^⊤ BA is a real anti-symmetric matrix.

(1. ⟹ 2.) Since BAC_B is a real anti-symmetric matrix and C_B B = I_N by Lemma <ref>, B^⊤(BAC_B)B = B^⊤ BA is an anti-symmetric matrix. By A ∈ ℝ^{N×N} and B ∈ ℝ^{M×N}, B^⊤ BA ∈ ℝ^{N×N} is a real anti-symmetric matrix. Therefore, condition 2. holds. (2. ⟹ 1.) Since B ∈ ℝ^{M×N} with rank B = N, B^† B = B^⊤ B is a regular matrix, as proved in Lemma <ref>. From condition 2., (B^⊤ B)^{-1}(B^⊤ BA)((B^⊤ B)^{-1})^⊤ = A(B^⊤ B)^{-1} is an anti-symmetric matrix. Since C_B = (B^† B)^{-1} B^†, B(A(B^⊤ B)^{-1})B^⊤ = BA((B^† B)^{-1} B^†) = BAC_B is an anti-symmetric matrix. Therefore, condition 1. holds.

Next, we prove the equivalence of the set of B^⊤ B and the set of real positive matrices.

{ B^⊤ B | B ∈ ℝ^{M×N} with rank B = N } = { D ∈ ℝ^{N×N} | D > 0 } holds.

(⊂) Let B ∈ ℝ^{M×N} with rank B = N. Since B^⊤ B satisfies B^⊤ B ≥ 0 and B^† B = B^⊤ B is a regular matrix, as proved in Lemma <ref>, B^⊤ B satisfies B^⊤ B > 0. That is, B^⊤ B ∈ { D ∈ ℝ^{N×N} | D > 0 }. (⊃) Let D ∈ ℝ^{N×N} with D > 0. Since D > 0 and D is a real matrix, √D ∈ ℝ^{N×N} exists and satisfies √D > 0, which implies rank √D = N. We set B := [√D; O_{(M-N)×N}] ∈ ℝ^{M×N}. Since rank √D = N, rank B = N holds, and from the definition of B, B^⊤ B = D holds. Then, D ∈ { B^⊤ B | B ∈ ℝ^{M×N} with rank B = N }.

Proposition <ref> is proved by Lemma <ref>.

For A ∈ ℝ^{N×N}, the following conditions are equivalent: * There exists B ∈ ℝ^{M×N} with rank B = N such that B^⊤ BA is a real anti-symmetric matrix. * There exists D ∈ ℝ^{N×N} with D > 0 such that DA is a real anti-symmetric matrix.

To use the proof of Theorem <ref> in showing Theorem <ref>, we prove the following Lemma <ref>.

Let A ∈ ℝ^{N×N}, and B ∈ ℂ^{M×N} with rank B = N. If B^† BA is an anti-hermitian matrix, then (ℜ𝔢(B^† B))A is a real anti-symmetric matrix.

Since B^† BA is an anti-hermitian matrix, ℜ𝔢(B^† BA) is a real anti-symmetric matrix. A is a real matrix, so ℜ𝔢(B^† BA) = (ℜ𝔢(B^† B))A. Thus, (ℜ𝔢(B^† B))A is a real anti-symmetric matrix.

Finally, we show the equivalence of the condition involving only A.

For A ∈ ℝ^{N×N}, the following conditions are equivalent: * There exists D ∈ ℝ^{N×N} with D > 0 such that DA is a real anti-symmetric matrix. * A is diagonalizable and σ_p(A) ⊂ iℝ.

(1. ⟹ 2.) Since D > 0 and D is a real matrix, √D ∈ ℝ^{N×N} exists and satisfies √D > 0. DA is a real anti-symmetric matrix, so √D^{-1}(DA)(√D^{-1})^⊤ = √D A √D^{-1} ∈ ℝ^{N×N} is a real anti-symmetric matrix. Since A is similar to √D A √D^{-1} and √D A √D^{-1} is real anti-symmetric, A is diagonalizable and σ_p(A) = σ_p(√D A √D^{-1}) ⊂ iℝ. Therefore, condition 2. holds. (2. ⟹ 1.) According to Propositions <ref> and <ref>, there exists B ∈ ℂ^{M×N} with rank B = N such that B^† BA is an anti-hermitian matrix. Applying Lemma <ref> to B^† BA, (ℜ𝔢(B^† B))A is a real anti-symmetric matrix. Let D = ℜ𝔢(B^† B) ∈ ℝ^{N×N}; then DA is a real anti-symmetric matrix.
According to Lemma <ref>, B^† B satisfies B^† B > 0, so ⟨x|B^† B|x⟩ > 0 holds for all |x⟩ ∈ ℝ^N ∖ {0}. Since |x⟩ ∈ ℝ^N and ⟨x|B^† B|x⟩ > 0, ℜ𝔢(⟨x|B^† B|x⟩) = ⟨x|D|x⟩ > 0 holds. Thus, D satisfies D > 0. Therefore, condition 1. holds.

From Propositions <ref>, <ref>, <ref> and <ref>, Theorem <ref> is proved.

§.§ The Proof of Theorem <ref>

Proof of Theorem <ref>. From the definition of the Carleman transforming matrix and Propositions <ref> and <ref>, B^† BA is an anti-hermitian matrix. According to Lemma <ref>, (ℜ𝔢(B^† B))A is a real anti-symmetric matrix. Since B'^⊤ B' A = (ℜ𝔢(B^† B))A, B'^⊤ B' A is a real anti-symmetric matrix. Let |x⟩ ∈ Ker B'; then |x⟩ satisfies (ℜ𝔢 B)|x⟩ = (ℑ𝔪 B)|x⟩ = 0. Thus 0 = (ℜ𝔢 B + i ℑ𝔪 B)|x⟩ = B|x⟩, so |x⟩ ∈ Ker B. rank B = N and the rank–nullity theorem imply Ker B = {0}, so |x⟩ = 0. This implies Ker B' = {0}, and the rank–nullity theorem then gives rank B' = N. That is, B' ∈ ℝ^{2M×N} satisfies rank B' = N and B'^⊤ B' A is a real anti-symmetric matrix, so we can apply Proposition <ref> to B'. Then, B' A C_{B'} is a real anti-symmetric matrix. According to the definition of the Koopman-von Neumann transforming matrix, B' is a Koopman-von Neumann transforming matrix of A.
"authors": [
"Yuki Ito",
"Yu Tanaka",
"Keisuke Fujii"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231127084856",
"title": "How to Map Linear Differential Equations to Schrödinger Equations via Carleman and Koopman-von Neumann Embeddings for Quantum Algorithms"
} |
Auto-CsiNet: Scenario-customized Automatic Neural Network Architecture Generation for Massive MIMO CSI Feedback

Xiangyi Li, Jiajia Guo, Member, IEEE, Chao-Kai Wen, Fellow, IEEE, and Shi Jin, Fellow, IEEE
X. Li, J. Guo, and S. Jin are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, 210096, P. R. China (email: [email protected], [email protected], [email protected]). C.-K. Wen is with the Institute of Communications Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan (e-mail: [email protected]).
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================

Deep learning has revolutionized the design of the channel state information (CSI) feedback module in wireless communications. However, designing the optimal neural network (NN) architecture for CSI feedback can be a laborious and time-consuming process, and manual design can be prohibitively expensive when customizing NNs to different scenarios. This paper proposes using neural architecture search (NAS) to automate the generation of scenario-customized CSI feedback NN architectures, thereby maximizing the potential of deep learning in each specific environment. By employing automated machine learning and gradient-descent-based NAS, an efficient and cost-effective architecture design process is achieved. The proposed approach leverages implicit scene knowledge, integrating it into the scenario customization process in a data-driven manner, and fully exploits the potential of deep learning for each specific scenario. To address the issue of excessive search, early stopping and elastic selection mechanisms are employed, enhancing the efficiency of the proposed scheme. The experimental results demonstrate that the automatically generated architecture, known as Auto-CsiNet, outperforms manually designed models in both reconstruction performance (achieving approximately a 14% improvement) and complexity (reducing it by approximately 50%). Furthermore, the paper analyzes the impact of the scenario on the NN architecture and its capacity.

Massive MIMO, CSI feedback, deep learning, neural network architecture search.

§ INTRODUCTION

Massive multiple-input and multiple-output (MIMO) is crucial for 6th generation (6G) wireless communication networks, as it enables high throughput, multiple streams, and pervasive coverage for diverse applications in smart cities <cit.>. However, the acquisition of downlink channel state information (CSI) for high-quality precoding in massive MIMO systems is challenging due to the low channel reciprocity of frequency division duplexing (FDD) systems <cit.>. To reduce the feedback overhead, compressing the CSI at the user equipment (UE) and reconstructing it at the base station (BS) is necessary <cit.>. Traditional CSI compression and feedback methods, such as compressed sensing (CS) <cit.> and codebook-based methods <cit.>, have limitations due to the complexity of iterative algorithms and sophisticated codebook design, respectively. Therefore, a new enabling technology is urgently needed to handle the high-dimensional nonlinear problem of CSI feedback.
The rapid development of artificial intelligence (AI) has been fueled by substantial improvements in hardware computing power, mass data collection, and device storage capacity, creating infinite possibilities for intelligent communications <cit.>. Motivated by the excellent performance of deep learning (DL), intelligent CSI feedback methods <cit.> are far superior to traditional algorithms in terms of performance and speed <cit.>, thus attracting widespread attention from the academic community. The key idea behind the DL-based CSI feedback scheme is to use a neural network (NN) to achieve effective compression and decompression of the CSI: an encoder network at the UE performs dimensional reduction of the high-dimensional CSI matrix and outputs the compressed code, which is fed back to the BS, where a decoder network reconstructs the original CSI from the compressed code. Data-driven DL can quickly fit solutions via deep NNs and achieve efficient and highly accurate reconstruction through the combination of "offline training" and "online deployment" modes.

One key issue in DL-based designs for wireless communication is constructing an efficient NN to maximize performance in a specific scenario. Novel NN architectures can be categorized into architectures based on convolutional neural networks (CNN), attention mechanisms <cit.>, generative models <cit.>, and feature preprocessing and extraction <cit.>. This article focuses mainly on CNN-based architectures, which have advanced in four ways: (1) receptive field amplification <cit.>, (2) multiple resolutions <cit.>, (3) lightweight convolutions <cit.>, and (4) flexible information interaction <cit.>. The decoder network typically employs a cell/block-based structure, e.g., the RefineNet block, with a multi-branch (multiple-resolution) structure within the cell. Notably, most CNN-based architectures for CSI feedback share these two characteristics. Table <ref> summarizes the number of cells and branches, and whether the cell contains skip connections.

However, these architectures are all manually designed, which involves a substantial and laborious workload in adjusting the architecture hyperparameters. Moreover, most of these designs have focused on improving NN performance on the COST2100 indoor/outdoor scene dataset (released by <cit.>). There is no guarantee that these manually designed network structures will perform equally well when applied to CSI datasets of other scenes. Designing NNs manually faces two main challenges:

* Manual NN design is an inefficient "trial and error" process, leading to high costs in time, manpower, computing power, and other resources. The workload of manually adjusting the hyperparameters of NN architectures is tremendous, laborious, and repetitive. Moreover, expert experience and knowledge are crucial in manual design, which requires human attention at each step and cannot be realized in an automatic mode. Constrained by designer experience and habits of thought, the designed structure easily falls into a local optimum.

* The CSI feedback module requires different NN architectures for different scenarios and tasks, influenced by factors such as scatterer density, mobile terminal speed, antenna distribution, and bandwidth. A general NN may not achieve optimal performance on a specific task, as observed in Table <ref>, where the best architectures for the indoor and outdoor scenarios are not the same.
Manually designing task-oriented NN structures is costly, and the development of custom-designed NNs for practical applications therefore has limitations. Thus, DL cannot fully realize its potential in specific scenarios.

This work concentrates on the design of a scenario-customized NN architecture for intelligent CSI feedback. To overcome the challenges of labor-intensive manual design and the prohibitive costs associated with scenario customization, we introduce Auto-CsiNet. This automatic design scheme employs NN architecture search (NAS) and automated machine learning (AutoML) <cit.> to generate a structure for the CSI feedback NN without the need for extensive labor resources or expertise. Furthermore, this economical and automatic design scheme not only accomplishes scenario-customized design of NN architectures but also maximizes the performance of Auto-CsiNet for specific tasks, thereby unlocking the full potential of deep learning in the CSI feedback module. Through AutoML, scene knowledge is seamlessly and implicitly integrated into the customization process in a data-driven manner. This enables manufacturers to efficiently acquire customized networks by adopting NAS, achieving complete automation and standardization in industrial processes.

Auto-CsiNet is based on CS-CsiNet <cit.>, with the RefineNet cell replaced by a cell searched via NAS. The PC-DARTS method <cit.>, a state-of-the-art and efficient NAS method, is utilized, enabling Auto-CsiNet to maintain a low design cost in terms of computing power and time, which makes it easy to implement in industry. Simulation results demonstrate that Auto-CsiNet outperforms numerous manually designed networks in reconstruction accuracy and NN complexity. The major contributions of this work can be summarized as follows:

* The necessity of scenario-specific customization: The investigation focuses on the problem of limited generalization of NN architectures and analyzes its causes, including the relationship between the capacity of NNs and the entropy of the CSI information, the balance between NN performance and transplantability, and the difficulty of network convergence.

* The automatic generation framework: Auto-CsiNet, a novel framework for the automatic generation of the CSI feedback NN architecture, is proposed. Auto-CsiNet utilizes AutoML and a low-cost NAS method to reduce the design cost and enable a labor-free and standardized design process that is friendly to manufacturers. Additionally, Auto-CsiNet can tap the maximum potential of NN architectures to achieve optimal performance in specific scenarios.

* Refinements in evaluation criteria: As the appropriate number of search rounds varies with the scenario and the feedback NN settings, and over-searching can degrade the performance of automatically generated CSI feedback NNs, the evaluation criteria are refined by introducing early stopping and elastic selection mechanisms. These refinements further ensure high performance in CSI feedback applications.

The rest of this paper is organized as follows: Section <ref> presents the system model and the relationship between NN architectures and scenario characteristics. Section <ref> presents the proposed automatic generation framework for CSI feedback NN structures based on AutoML, including the mechanism of NAS and the complete framework with the advanced evaluation criterion. Section <ref> provides the CSI simulation details and the evaluation of our proposed Auto-CsiNet compared with manually designed NNs.
Section <ref> gives the concluding remarks.

Notations: Vectors and matrices are denoted by boldface lower- and upper-case letters, respectively. (·)^H and cov(·) denote the Hermitian transpose and the covariance, respectively. ℂ^{m×n} and ℝ^{m×n} denote the spaces of m×n complex-valued and real-valued matrices. For a 2-D matrix 𝐀, 𝐀[i,j], row_k(𝐀) and col_k(𝐀) represent the (i,j)-th element and the k-th row and k-th column of 𝐀, and 𝗏𝖾𝖼(𝐀) denotes the vectorized 𝐀. For a 1-D vector 𝐚, 𝐚[i] denotes its i-th element. |·| measures the scale of a set and ‖·‖_2 is the Euclidean/L2 norm. [a_k]_{k=1}^N represents the list [a_1, a_2, …, a_N]. para(·) and FLOPs(·) denote the NN's parameter count and floating-point operations (FLOPs) count. C_n^m = n!/(m!(n-m)!) stands for the binomial coefficient (the number of m-combinations of n elements).

§ SYSTEM MODEL AND PROBLEM FORMULATION

§.§ Massive MIMO-OFDM FDD System

Consider a typical urban micro-cell in an FDD massive MIMO system with one BS serving multiple single-antenna UEs. The BS is placed in the center of the cell and equipped with an N_t-antenna uniform linear array (ULA)[We adopt the ULA model here for simpler illustration, while the analysis and the proposed model are not restricted to any specific array shape.]. We apply orthogonal frequency division multiplexing (OFDM) in downlink transmission over N_f subcarriers. The received signal on the n-th subcarrier for a UE can then be modeled as <cit.>:

y_n = 𝐡_n^H 𝐯_n x_n + z_n,

where 𝐡_n ∈ ℂ^{N_t}, x_n ∈ ℂ and z_n ∈ ℂ denote the downlink instantaneous channel vector in the frequency domain, the transmitted data symbol and the additive noise, respectively. The beamforming or precoding vector 𝐯_n ∈ ℂ^{N_t} should be designed by the BS based on the received downlink CSI. In this paper, we assume that perfect CSI has been acquired through pilot-based channel estimation and focus on the design of feedback approaches.

We stack all the N_f frequency channel vectors and derive the downlink CSI matrix 𝐇_SF = [𝐡_1, 𝐡_2, …, 𝐡_{N_f}] ∈ ℂ^{N_t×N_f} in the spatial-frequency domain. We represent the channel matrix in the angular-delay domain using a 2D discrete Fourier transform (DFT) to better expose the feature sparsity, which can be described as follows:

𝐇_AD = 𝐅_s 𝐇_SF 𝐅_f,

where 𝐅_s and 𝐅_f are N_t×N_t and N_f×N_f DFT matrices, respectively. The high-dimensional CSI matrix is sparse in the delay domain, and only the first N_c (N_c < N_f) delay columns display distinct non-zero values, because the time delay among the multiple paths exists only within a particularly limited period. Hence, we retain the first N_c non-zero delay columns to further reduce the feedback overhead and derive the dimension-reduced CSI in the angle-delay domain: 𝐇_AD ∈ ℂ^{N_t×N_c}. Moreover, the channel matrix is also sparse in a defined angle domain if the number of transmit antennas N_t tends to infinity. Notice that in this paper, all statistical analysis of the angle-delay CSI is for 𝐇_AD. The complex-valued elements in 𝐇_AD are divided into real and imaginary real-valued parts and then normalized to the range [0,1], from which we finally obtain the NN's input: 𝐇 ∈ ℝ^{N_t×N_c×2}.
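The preprocessing chain can be sketched as follows (illustrative dimensions; the random placeholder channel, unlike a real channel, is not actually sparse in the delay domain):

import numpy as np

rng = np.random.default_rng(0)
N_t, N_f, N_c = 32, 256, 32
H_sf = (rng.normal(size=(N_t, N_f)) + 1j * rng.normal(size=(N_t, N_f))) / np.sqrt(2)
F_s = np.fft.fft(np.eye(N_t)) / np.sqrt(N_t)        # N_t x N_t DFT matrix
F_f = np.fft.fft(np.eye(N_f)) / np.sqrt(N_f)        # N_f x N_f DFT matrix
H_ad = F_s @ H_sf @ F_f                             # angular-delay domain, Eq. (<ref>)
H_ad = H_ad[:, :N_c]                                # keep the first N_c delay taps
H_in = np.stack([H_ad.real, H_ad.imag], axis=-1)    # real/imaginary split
H_in = (H_in - H_in.min()) / (H_in.max() - H_in.min())   # normalize to [0, 1]
print(H_in.shape)                                   # (32, 32, 2)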
§.§ Single-Side DL-based CSI Feedback

The whole DL-based CSI feedback process is realized by a framework combining CS and DL, the same as CS-CsiNet <cit.>, which consists of a random linear projection at the UE for CSI dimension reduction and a decoder NN at the BS for CSI reconstruction. The CS-based compression process can be expressed as:

𝐬 = 𝐀 𝗏𝖾𝖼(𝐇),

where 𝗏𝖾𝖼(𝐇) ∈ ℝ^N (N = 2N_t · N_c) is composed of the real and imaginary parts of the vectorized 𝐇 ∈ ℝ^{N_t×N_c×2}, and 𝐬 ∈ ℝ^M (M < N) represents the compressed codeword, where M is determined by the predefined compression ratio (CR) such that M = N × CR. The linear projection matrix 𝐀 ∈ ℝ^{M×N}, also known as the sensing matrix or measurement matrix in CS, can be generated randomly <cit.>. The compressed codeword 𝐬 is quantized into a bitstream vector 𝐬_q via a uniform quantization operation 𝖰𝗎𝖺(·) <cit.> with B quantization bits, i.e., 𝐬_q = 𝖰𝗎𝖺(𝐬; B) ∈ {0,1}^{B×M}. Then, the bitstream codeword is fed back to the BS via a perfect uplink channel.[We assume a perfect feedback link where no error occurs on the codeword 𝐬_q.] It is subsequently restored into the real-valued codeword vector 𝐬_d via a dequantization operation 𝖣𝖾𝗊(·), i.e., 𝐬_d = 𝖣𝖾𝗊(𝐬_q), where 𝖣𝖾𝗊(·) is the inverse of 𝖰𝗎𝖺(·).

A decoder NN is deployed at the BS to decompress and reconstruct the original CSI matrix 𝐇 from 𝐬_d. Denoting the decoder function as 𝖣𝖾𝖼(·), the recovered CSI matrix is expressed as:

𝐇̂ = 𝖣𝖾𝖼(𝐬_d; ω),

where ω denotes the parameters of the decoder NN.
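A minimal sketch of the UE-side processing follows (illustrative only: the mid-rise dequantization rule and the scaling of the codeword into [0,1] before quantization are assumptions, not the original implementation):

import numpy as np

rng = np.random.default_rng(0)
N, CR, B = 2048, 1 / 16, 4
M = int(N * CR)
A = rng.normal(size=(M, N)) / np.sqrt(M)            # random linear projection, Eq. (<ref>)
h = rng.normal(size=N)                              # stands in for vec(H)
s = A @ h
s01 = (s - s.min()) / (s.max() - s.min() + 1e-12)   # scale the codeword into [0, 1]
s_q = np.clip(np.floor(s01 * 2 ** B), 0, 2 ** B - 1).astype(int)   # Qua(.; B)
s_d = (s_q + 0.5) / 2 ** B                          # Deq(.): mid-rise reconstruction
print(np.abs(s01 - s_d).max() <= 2 ** -(B + 1))     # error within half a quantization step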
§.§ Task Characteristics

The CSI feedback module is affected by various factors in different scenarios and tasks. In addition to the system requirements and limitations on the model (reconstruction accuracy, feedback overhead, NN complexity, etc.), factors affecting the CSI feature distribution are also included, such as scene complexity, mobile terminal speed, sampling intensity, bandwidth, and antenna distribution. Among these, scene complexity is particularly important, as each scenario has unique environmental characteristics, such as the distribution (density or height) of buildings, trees, and other features that depict the UE's surrounding scattering environment. In practice, the BS collects the downlink CSI by region, where the CSI maps sampled within a local subregion exhibit high spatial correlation and reflect the scattering environment around the UE in the scene <cit.>. To avoid the excessive communication cost of collecting high-resolution downlink CSI at the BS, the uplink CSI is used in place of the downlink CSI <cit.> in NAS searching and training to obtain the optimal network; a small number of downlink CSI samples are then used to fine-tune the network parameters.

The multi-path frequency response channel vector 𝐡_n on the n-th subcarrier can be formulated as <cit.>:

𝐡_n = ∑_{ℓ=1}^L α_ℓ e^{j(θ_ℓ - 2π f_n τ_ℓ)} 𝐚_t^H(ϕ_ℓ),

where L and f_n denote the number of propagation paths and the n-th subcarrier frequency, and α_ℓ, θ_ℓ, τ_ℓ and ϕ_ℓ stand for the complex attenuation amplitude, random phase shift, delay, and azimuth angle of departure (AoD) associated with the ℓ-th path, respectively. 𝐚_t(ϕ_ℓ) stands for the antenna array response vector at the BS; for a ULA it can be given as <cit.>:

𝐚_t(ϕ_ℓ) = (1/N_t) [1, e^{-jϖ_n sin(ϕ_ℓ)}, …, e^{-jϖ_n (N_t-1) sin(ϕ_ℓ)}]^T,

in which ϖ_n = 2π d f_n / c, with c and d denoting the speed of light and the distance between antenna elements, respectively. Equation (<ref>) describes the relationship between scenario complexity and CSI feature sparsity. For example, a rich scattering environment, such as a commercial district, is reflected in a dense and complex CSI with a wide angular and delay spread. In contrast, the CSI map of an open scattering environment, such as a park, is sparse, with the features centrally distributed.

To describe the 1-D feature distributions, we also use the power delay profile (PDP) and power angular spectrum (PAS), which can be calculated as:

PAS(𝐇) = (1/N_t) [ ‖𝖼𝗈𝗅_k(𝐇)‖_2^2 ]_{k=1}^{N_t},
PDP(𝐇) = (1/N_c) [ ‖𝗋𝗈𝗐_k(𝐇)‖_2^2 ]_{k=1}^{N_c},

where 𝐇, 𝖼𝗈𝗅_k(·) and 𝗋𝗈𝗐_k(·) denote the CSI matrix and the k-th column and row of the matrix, respectively. The distribution of channels in the angular and delay domains (i.e., the PAS and PDP) has certain statistical characteristics under specific scenarios. To further quantify the compressibility of the CSI matrix for different scenarios, we introduce the power spectral entropy (PSE) <cit.> as the compressibility metric. For a given K-length vector 𝐯 = [v_1, v_2, …, v_K], the PSE of 𝐯 can be defined as

PSE(𝐯) = -(1/log_2 K) ∑_{i=1}^K p(v_i) log_2 p(v_i),

where p(v_i) = |v_i|^2 / (∑_{j=1}^K |v_j|^2) stands for the probability of the component v_i. The PSE value is normalized to [0,1]. From the perspective of information theory, low information entropy means that the information source is relatively certain and the corresponding amount of information is small. Therefore, a low PSE value indicates that the CSI has a high degree of compressibility.

Figure <ref> shows the PSE distributions of the PAS, PDP, and 𝐇, where the two scenarios are marked in blue and orange, respectively. The COST2100 dataset in Figure <ref> is the same as in <cit.>. In Figure <ref>, the QuaDRiGa software[QuaDRiGa software [Online] Available: <https://quadriga-channel-model.de/>.] <cit.> was used to simulate two scenarios, namely, a park scene (with few buildings) and a business district scene (with dense buildings) in a typical urban community. The statistical PSE is calculated with 100,000 CSI samples in each scenario, which is adequate to reflect the scene characteristics. The orange-marked scenario (COST2100 Outdoor or QuaDRiGa simulated Scenario 2) is more complex than the blue-marked scenario (COST2100 Indoor or QuaDRiGa simulated Scenario 1), as reflected in the higher PSE values and low compressibility of its CSI samples. For the COST2100 Indoor/Outdoor scenarios, the PSE value of the PAS is higher than that of the PDP, indicating higher compressibility in the delay domain, while the two QuaDRiGa scenarios show the opposite situation. Scenario 1 of QuaDRiGa has a wider distribution of PSE than Scenario 2 because the PSE is more sensitive to sparse features in the CSI.
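These compressibility metrics are straightforward to compute; the sketch below is illustrative only (the random test matrices are placeholders, and the matrix orientation, rows indexing angle and columns indexing delay, is an assumption consistent with 𝐇_AD ∈ ℂ^{N_t×N_c}):

import numpy as np

def pas(H):   # power angular spectrum: one entry per angle (row)
    return np.sum(np.abs(H) ** 2, axis=1) / H.shape[0]

def pdp(H):   # power delay profile: one entry per delay tap (column)
    return np.sum(np.abs(H) ** 2, axis=0) / H.shape[1]

def pse(v):   # power spectral entropy of a K-length vector, normalized to [0, 1]
    p = np.abs(v) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(len(v))

rng = np.random.default_rng(0)
H_dense = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
H_sparse = np.zeros((32, 32), complex)
H_sparse[:4, :4] = H_dense[:4, :4]              # concentrated features: an "open" scene
print(pse(pas(H_dense)), pse(pas(H_sparse)))    # the sparse map yields a lower PSE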
§.§ Task-specific NN Architecture

Manual design of NN structures is resource-intensive, making it challenging to construct customized NN structures for specific tasks in practice. As a result, researchers often resort to using a general NN structure for all scenarios or tasks, which may result in lower performance on each specific task. This is due to the limited generalization of the NN structure, making it difficult to design a general NN structure that performs well across all scenarios or tasks. We analyze this from three perspectives.

Relationship between NN capacity and the CSI information entropy: For homogeneous tasks like CSI feedback in different scenarios, the required NN capacity varies with the data distribution and the CSI information entropy. For example, the COST2100 Outdoor scene is more complex than the Indoor scene, as shown in Figure <ref>. Table <ref> shows that the CSI feedback network with the best performance in the Outdoor scene, MRFNet <cit.>, is more complex than the one with the best performance in the Indoor scene, STNet <cit.>. Compared with STNet, MRFNet has a wider (more feature maps) and deeper (more layers) NN architecture, allowing it to retain more feature information during reconstruction.

Balancing NN performance and transplantability: While a network structure can be portable and embeddable, NN structures with good transplantability, such as the Transformer, LSTM, and Inception, are relatively simple unit structures. Small and general NNs have high portability but perform poorly due to their limited representation ability, as they can only extract shallow features. In contrast, complex, dedicated task-oriented NNs have high performance but low transplantability due to their sophisticated architecture. For instance, transplanting the whole GoogLeNet framework to some object identification tasks may be disappointing, and it is common practice to transplant only GoogLeNet's Inception units. Therefore, a universal network handling CSI feedback for all scenarios comes at the cost of sacrificing some model performance on each specific scenario.

Solution space and solving efficiency: The NN architecture defines the solution space for a task, which expands as the architecture becomes more complex. However, a larger solution space is not always better for different scenarios. Complex scenes require a large solution space because the task is difficult, while simple scenes only require an appropriately sized solution space. In a large solution space, the gradient-descent solution can converge to an unsatisfactory local optimum due to an inappropriate initial point, which leads to network degradation. This degradation phenomenon is supported by the results in Table <ref>: the complex MRFNet <cit.> failed to compete with the simpler STNet <cit.> on the Indoor task in terms of both performance and NN complexity.

To achieve optimal performance for specific tasks and scenarios, a universal NN would require comprehensive consideration of all task characteristics. Designing a universal NN structure that can accommodate all tasks is impractical. Therefore, it is necessary to customize NN structures according to specific scenarios, which requires a convenient and efficient scheme for NN structure customization.

§ FRAMEWORK OF NAS-BASED AUTOMATIC NN DESIGN

Manual design of a dedicated CSI feedback network for specific scenarios is a challenging task. On the one hand, designers often assemble network operators or modules with limited expert experience and knowledge through a random combination of "trial and error", which is inefficient, resource-intensive, and experience-dependent. On the other hand, scenario-customized NN structure design can further improve performance, yet scene customization is difficult to achieve by manual design due to the high design cost. In addition, there is no theoretical research on the relationship between scene knowledge and the CSI feedback NN structure that could guide the manual design of scene-customized NN structures.

To address the challenges associated with manual design, AutoML is a promising solution for building DL systems without human assistance, thereby reducing the dependence on human experiential knowledge and broadening the application domain of DL. One essential component of AutoML is NAS, which aims to optimize the hyperparameters of the NN structure in DL.
NAS has been widely studied and successfully applied in various AI fields, such as pattern recognition, image segmentation, and semantic analysis. Given the efficiency of NAS, we propose using NAS in place of manual NN architecture design to harness the full potential of machine learning in designing CSI feedback networks. This automatic CSI feedback NN architecture generation scheme enables convenient and efficient design of the optimal NN structure for a specific scenario and can be regarded as a black box in operation, whose input is the CSI dataset of a specific scene and whose output is the optimal NN structure for that scene. Inside the black box, the search loop is conducted based on the three core elements of NAS: the search space, the search strategy, and the evaluation strategy. A search space, consisting of candidate NN structures, is established. An NN is selected from the search space based on the search strategy. The model is subsequently evaluated, and its performance on the validation set is used to update the search strategy. The loop continues until a stopping condition is met, and the optimal NN structure is obtained. In this data-driven manner, the implicit scene knowledge hidden in the CSI feature distribution can be explored and then utilized in the scenario-customized NN architecture design process.

NAS is known to demand significant computing power and hardware resources, resulting in a high implementation threshold. However, by selecting appropriate and efficient NAS technologies and controlling the difficulty of the search, the automatic generation process for the CSI feedback NN structure in this paper consumes fewer resources and allows for cost control, making it easy to implement in actual production. Next, the NAS scheme adopted in our automatic generation process is described from the perspective of the three elements of NAS. Each element is detailed in the following subsections.

§.§ Cell-based Search Space

The search space defines the structural paradigm that NAS can explore, i.e., the solution space. If the solution space is too large, NAS can become overburdened, which may be difficult to support with current computing devices. Cell-based networks, which are built by stacking several repetitive cells, form a smaller-scale solution space that significantly reduces the search cost compared with the global search space, which allows arbitrary connections between ordered nodes and has the largest scale. For example, the NASNet <cit.> method requires 800 GPUs running for 28 days to search the CIFAR-10 classification network in the global search space, while the cost is reduced to 500 GPUs and 4 days with a cell-based search. Additionally, most manually designed CSI feedback NN architectures are cell-based (as depicted in Table <ref>), which supports the selection of a cell-based search space as a reasonable and effective approach.

§.§.§ Overall Architecture of Auto-CsiNet

Figure <ref> depicts (a) the cell-based overall architecture of our proposed Auto-CsiNet and (b) the cell's inner structure. Auto-CsiNet is designed based on CS-CsiNet <cit.>, with the RefineNet cell replaced by the cell searched via NAS. CSI dimension reduction in the encoder is achieved through random projection. The compression ratio (CR) represents the ratio of the compressed codeword's dimension to the original CSI dimension. The cell-based search space is created, and the two refinement cells share the same architecture (Figure <ref>(b)). Cell k has two external input nodes connected to the outputs of cell k-1 and cell k-2. Each cell has several inner nodes (3 nodes in Figure <ref>(b)), and each node contains two branches, each with a selectable operation and input assignment. The two branches are added to obtain the node output, and the outputs of all internal nodes are concatenated. The channel value of the concatenated feature node is restored through a 1×1 convolutional layer. To increase the flow of feature information in the cell and avoid network degradation, we add a residual connection, consistent with most manually designed works (as depicted in Table <ref>). The output of the current cell is obtained after passing through a ReLU layer. The first part of Table <ref> lists the structural parameters that need to be configured in advance when setting up Auto-CsiNet (and cannot be changed in the automatic generation stage), including the CR, the number of repetitive cells, and the cell's inner nodes and width (channel value).
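The data flow inside a searched cell can be sketched in PyTorch as follows (the per-branch operations and the input assignment below are hypothetical placeholders; the actual choices are produced by the search):

import torch
import torch.nn as nn

# Sketch of the cell of Figure (b): each inner node adds two operated inputs,
# node outputs are concatenated, a 1x1 convolution restores the channel value,
# and a residual connection plus ReLU yields the cell output.
class Cell(nn.Module):
    def __init__(self, c=7, n_nodes=3):
        super().__init__()
        self.n_nodes = n_nodes
        # two branches per node; here every branch is a 3x3 depthwise convolution
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=1, groups=c) for _ in range(2 * n_nodes))
        self.proj = nn.Conv2d(n_nodes * c, c, 1)     # restore the channel value
    def forward(self, s0, s1):
        states = [s0, s1]                            # outputs of cells k-2 and k-1
        for i in range(self.n_nodes):
            a = self.branches[2 * i](states[-2])     # simplest input assignment:
            b = self.branches[2 * i + 1](states[-1]) # the two latest predecessors
            states.append(a + b)
        out = self.proj(torch.cat(states[2:], dim=1))
        return torch.relu(out + s1)                  # residual connection

x0, x1 = torch.randn(2, 7, 32, 32), torch.randn(2, 7, 32, 32)
print(Cell()(x0, x1).shape)   # torch.Size([2, 7, 32, 32])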
§.§.§ Space Complexity

We now discuss the space scale, i.e., the number of candidate architectures of Auto-CsiNet. Given a cell with N inner nodes, equal in size to a 2N-layer global-based structure, and a candidate operation set 𝒪, the cell-based space scale is

∏_{k=0}^{N-1} |𝒪|^2 × C_{k+2}^2,

where |𝒪|^2 indicates that each node contains two branches (two operations), and C_{k+2}^2 represents the number of combinations by which node k connects to its preceding nodes. The equal-sized global-based space scale is |𝒪|^{2N} × 2^{C_{2N}^2}, where 2^{C_{2N}^2} accounts for the two possibilities between any two layers, connected or not connected. From these expressions, the cell-based space scale is much smaller than the global-based one. For instance, with |𝒪| = 8 and N = 5 as in our later experiments, the cell-based scale is 2.89e12, while the equal-sized global-based scale is 3.78e22. By selecting the cell-based search space, the space scale is significantly reduced, which effectively controls the difficulty and cost of searching.
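Eq. (<ref>) is easy to evaluate; the helper below reproduces the scales quoted in the text:

from math import comb

def cell_space_size(n_nodes, n_ops):
    # Per node: two branches (one operation each) and a choice of two input
    # edges among the k+2 available predecessors.
    size = 1
    for k in range(n_nodes):
        size *= n_ops ** 2 * comb(k + 2, 2)
    return size

def global_space_size(n_layers, n_ops):
    # |O|^(2N) * 2^(C(2N, 2)) for the equal-sized global space (n_layers = 2N)
    return n_ops ** n_layers * 2 ** comb(n_layers, 2)

print(f"{cell_space_size(5, 8):.2e}")      # 2.89e+12
print(f"{global_space_size(10, 8):.2e}")   # 3.78e+22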
§.§ Gradient Descent Search Strategy

The search strategy defines how to find the optimal network structure, which is a hyperparameter optimization problem for the NN structure, tuned according to the NN performance observed on the validation set. The most efficient, convenient, and accessible search strategy thus far is based on the gradient descent (GD) method <cit.>, which can significantly reduce the computing power and time consumption compared with other NAS methods. For example, in searching for the CIFAR-10 classification network, Large-scale Ensemble <cit.>, an evolutionary-algorithm NAS method, requires 250 GPUs running for 10 days, while DARTS takes only one GPU and 0.4 days.

§.§.§ Pipeline of DARTS

Unlike other iterative search strategies based on a discrete search space (where the number of candidate network structures is countable), the core idea of the GD-based algorithm is to make the discrete optimization problem (strategy function) continuous, so that it can be optimized by gradient descent. The most typical GD-based approach is DARTS <cit.>, which treats the NAS process as a black-box problem and maps the original discrete search space to a continuous space through a particular relaxation operation. The cell's topological structure is a directed acyclic graph (DAG) consisting of an ordered sequence of N nodes (as depicted in Figure <ref>(a)). Each node x^(i) stands for a feature map in the convolutional network, and each directed edge (i,j) is associated with some operation o^(i,j) that transforms x^(i) into x^(j). Each intermediate node is computed based on all of its predecessors:

x^(j) = ∑_{i<j} o^(i,j)(x^(i)).

Let 𝒪 = {o_1, o_2, …} be the set of candidate operations, e.g., convolution, max pooling, and the zero operation, where the special zero operation is included to indicate a lack of connection between two nodes. The discrete search space is then relaxed into a continuous one by placing a mixture of candidate operations on each edge, that is, by relaxing the categorical choice of a particular operation to a softmax over all possible operations:

ō^(i,j)(x) = ∑_{o∈𝒪} [exp(α_o^(i,j)) / ∑_{o'∈𝒪} exp(α_{o'}^(i,j))] o(x),

where the operation mixing weights for edge (i,j) are parameterized by a vector {α_o^(i,j)}_{o∈𝒪} of dimension |𝒪|. In this way, the task of architecture search reduces to learning a set of continuous variables {α_o^(i,j)}_{o∈𝒪, i<j}, which we refer to as the architecture weights (encoded as α) and depict in Figure <ref>(b). The architecture weights in the figure have been normalized.

Figure <ref> presents the pipeline of DARTS <cit.>; the detailed steps are as follows: (I) Construct the search space and determine the candidate operators. (II) Build the SuperNet corresponding to the search space and relax the search space by placing the softmax mixture of candidate operations. The assigned architecture weights {α_o^(i,j)}_{o∈𝒪, i<j} are initialized with the mean value. (III) SuperNet optimization: jointly optimize the architecture weights and the NN parameters (e.g., the kernel parameters of the convolutional layers) by solving a bi-level optimization. The thickness of an edge represents its architecture weight. (IV) The operator with the largest weight (the coarsest edge) is retained to compose the final output NN structure, i.e., each mixed operation ō^(i,j) is replaced with the most likely operation o^(i,j) = argmax_{o∈𝒪} α_o^(i,j).[Since in the cell-based space each node has an in-degree of 2, the two operators with the highest probability are retained when selecting the operators.]
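The continuous relaxation of Eq. (<ref>) can be sketched in a few lines of PyTorch (the candidate operations below are illustrative placeholders, not the exact set used in our experiments):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Each edge holds all candidate operations, mixed by a softmax over the
# architecture weights alpha, so the choice of operation becomes differentiable.
class MixedOp(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                        # identity mapping
            nn.Conv2d(c, c, 3, padding=1, groups=c),              # depthwise 3x3
            nn.Conv2d(c, c, 3, padding=2, dilation=2, groups=c),  # dilated 3x3
            nn.Conv2d(c, c, 5, padding=2, groups=c),              # depthwise 5x5
        ])
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))  # edge weights
    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)            # continuous relaxation
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

x = torch.randn(2, 7, 32, 32)
print(MixedOp(7)(x).shape)   # torch.Size([2, 7, 32, 32])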
§.§.§ Advanced Version: PC-DARTS

The NAS method adopted in our automatic CSI feedback generation scheme is PC-DARTS <cit.>, an advanced version of DARTS, which introduces the partial-channel-connection and edge-normalization operations to further improve the search efficiency of DARTS. As depicted in Figure <ref>, the high search speed mainly comes from the partial-channel-connection operation, where all candidate operators are applied to only a 1/K fraction of the feature maps (marked by colors), and the rest is directly concatenated to the operated part. This operation also addresses the problem of excessive memory consumption in the SuperNet training of DARTS <cit.> (the SuperNet is |𝒪| times more complex than a normal network). The hyperparameter K in the proportion 1/K of selected channels can be adjusted and represents the multiple by which PC-DARTS outperforms DARTS in search speed. In our experiments, we set K = 4, as in <cit.>. However, the negative side effect of the partial channel connection, namely the convergence instability of the architecture weights caused by the random selection of channels in each round, may lead to undesired final search results. Edge normalization is adopted to reduce this fluctuation effect by introducing another set of normalized weights {β^(i,j)} that act on the edges:

x_j = ∑_{i<j} [exp(β^(i,j)) / ∑_{i'<j} exp(β^(i',j))] f_PC(x_i),

where f_PC denotes the partial-channel-connection function, which involves the calculation with {α_o^(i,j)}_{o∈𝒪}. Therefore, the connectivity of edge (i,j) is determined by both {α_o^(i,j)}_{o∈𝒪} and β^(i,j), for which the normalized coefficients are multiplied together, i.e., [exp(β^(i,j)) / ∑_{i'<j} exp(β^(i',j))] × [exp(α_o^(i,j)) / ∑_{o'∈𝒪} exp(α_{o'}^(i,j))]. Then, the operations (edges) are selected by finding the large edge weights, as in DARTS. Therefore, β can dilute the fluctuation effect of α, which acts on unfixed feature maps. For convenience, we denote by α the whole set of architecture weights (i.e., {α_o^(i,j); β^(i,j)}_{o∈𝒪, i<j}). Thanks to the partial-channel-connection and edge-normalization operations, the advanced PC-DARTS can finish the search on the CIFAR-10 dataset within several hours, significantly reducing the search cost.

§.§ Weight-sharing Evaluation Strategy

The evaluation strategy (evaluation forecast) is required to predict the approximate NN performance and accelerate the evaluation process. Among the available techniques, weight sharing is a core driving force that allows the GD-based approach to achieve an efficient search. In DARTS <cit.> and PC-DARTS <cit.>, one-shot weight sharing enables the search and evaluation to be conducted in parallel. Instead of training each architecture separately, as in the manual design process, weight sharing builds a SuperNet that assembles all the architectures as its sub-models. In Figure <ref>(a), the search space can be viewed as the SuperNet (super-DAG) with all candidate operators and connections, from which a sub-network (sub-DAG) is sampled and evaluated. Once the SuperNet has been trained, a sub-network can be directly evaluated by inheriting weights from the SuperNet, thus saving the considerable cost of training each architecture from scratch.

§.§.§ Bi-level Optimization in the SuperNet

The SuperNet contains the edge/architecture weights α and the corresponding operators' parameters ω (e.g., the kernel parameters of the convolutional layers), which are alternately optimized in each training epoch to realize the bi-level optimization. The scenario-specific CSI training dataset 𝒟_train is divided into two parts, 𝒟_train,ω and 𝒟_train,α. The mean square error (MSE) loss is adopted in CSI feedback NN training:

ℒ_MSE(α, ω; 𝒟) = (1/|𝒟|) ∑_{𝐇∈𝒟} ‖𝖲𝗎𝗉𝖾𝗋𝖭𝖾𝗍(α, ω; 𝐬_d) - 𝐇‖_2^2,

where 𝐬_d denotes the compressed code after the dequantization given in (<ref>). Then, the bi-level optimization problem, with α as the upper-level variable and ω as the lower-level variable, reads

min_α ℒ_MSE(α, ω^*(α); 𝒟_train,α) + λ‖α‖_2^2
s.t. ω^*(α) = argmin_ω ℒ_MSE(α, ω; 𝒟_train,ω),

where ‖α‖_2^2 is the L_2 regularization term with respect to α, promoting a high score (probability) for a particular operator, which makes it convenient to determine the finally searched structure, and λ is the weight decay rate. The goal of the architecture search is to find the optimal α^* that minimizes the loss on 𝒟_train,α, which can be regarded as minimizing the validation loss. The optimal NN parameters ω^*(α) are obtained by minimizing the loss on 𝒟_train,ω (regarded as the training loss). Since the second-order Hessian matrix involved in calculating the gradient of α associated with ω leads to a tremendous computational burden, we adopt the first-order approximation, i.e., ∇_α ℒ_MSE(α, ω^*(α); 𝒟_train,α) ≈ ∇_α ℒ_MSE(α, ω; 𝒟_train,α), where α and ω are updated alternately in each epoch.
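The first-order alternating update can be sketched as follows (a toy SuperNet with explicit architecture weights; the layer sizes and the random batches standing in for 𝒟_train,ω and 𝒟_train,α are illustrative):

import torch
import torch.nn as nn

torch.manual_seed(0)

class TinySuperNet(nn.Module):
    # a toy stand-in: parallel decoders mixed by softmax architecture weights
    def __init__(self, m=128, n=512, n_ops=3):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(n_ops))       # architecture weights
        self.ops = nn.ModuleList(nn.Linear(m, n) for _ in range(n_ops))
    def forward(self, s_d):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(s_d) for wi, op in zip(w, self.ops))

net, mse = TinySuperNet(), nn.MSELoss()
opt_alpha = torch.optim.Adam([net.alpha], lr=3e-4, weight_decay=1e-3)  # lambda*||alpha||^2
opt_omega = torch.optim.Adam(net.ops.parameters(), lr=1e-3)

for epoch in range(3):
    s, H = torch.randn(16, 128), torch.randn(16, 512)   # batch from D_train,omega
    opt_omega.zero_grad(); mse(net(s), H).backward(); opt_omega.step()   # lower level
    s, H = torch.randn(16, 128), torch.randn(16, 512)   # batch from D_train,alpha
    opt_alpha.zero_grad(); mse(net(s), H).backward(); opt_alpha.step()   # upper level
print(torch.softmax(net.alpha, 0))   # operator probabilities after the alternation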
Algorithm <ref> depicts our proposed automatic generation scheme for the scenario-customized CSI feedback NN architecture, including the alternating bi-level optimization process. For the lower-level optimization, the operator parameters are updated while the architecture weights are fixed; for the upper-level optimization, the opposite holds. Each sub-network can directly inherit the parameters of the selected operators, and its evaluation value directly influences the architecture weights of the corresponding operators. In addition, a warm-up stage is used to let the NN parameters grow sufficiently, during which the architecture weights are not updated. Otherwise, the SuperNet would prefer the parameter-free operations, such as identity or pooling, which converge faster than the complex ones.

§.§.§ Early Stopping and Elastic Selection Criteria

The measure of reconstruction accuracy in CSI feedback is the normalized MSE (NMSE):

NMSE(A; 𝒟) = (1/|𝒟|) ∑_{𝐇∈𝒟} ‖𝐇 - A(𝐬_d)‖_2^2 / ‖𝐇‖_2^2,

where 𝐬_d denotes the dequantized compressed code given in (<ref>), and A represents a trained CSI feedback NN architecture.

However, the hard criterion specified in Algorithm <ref>, which selects the output of the E_search-th epoch as the optimal solution, is not flexible enough and may result in suboptimal structures due to convergence fluctuations. Furthermore, our experiments have shown that increasing the number of search rounds may not continuously improve the sub-network performance and may actually lead to performance degradation due to excessive searching. Therefore, we introduce the early stopping and elastic selection mechanisms in Algorithm <ref> to prevent over-searching. Since the suitable number of searching epochs E_search is also scenario-dependent and cannot simply be fixed, we record the structure of the output whenever the SuperNet validates well, i.e., A ∈ 𝒜, where 𝒜 is the set of candidate architectures. These elements are retrained during the evaluation phase, and the final optimal architecture A^* is selected from 𝒜 based on its evaluation performance.
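For reference, the NMSE of Eq. (<ref>) in dB can be computed as follows (the random tensors are illustrative placeholders for CSI batches and reconstructions):

import numpy as np

def nmse_db(H, H_hat):
    # Eq. (<ref>) in dB, averaged over the samples in a batch (axis 0)
    num = np.sum(np.abs(H - H_hat) ** 2, axis=tuple(range(1, H.ndim)))
    den = np.sum(np.abs(H) ** 2, axis=tuple(range(1, H.ndim)))
    return 10 * np.log10(np.mean(num / den))

rng = np.random.default_rng(0)
H = rng.normal(size=(100, 32, 32, 2))
print(nmse_db(H, H + 0.1 * rng.normal(size=H.shape)))   # about -20 dB for 10% noise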
Note that the cell structure used for building the SuperNet does not include a residual connection between the input and output nodes. This is because the candidate operators cannot be fully trained with such a connection. However, the residual connection is added to the searched cell structure during the sub-network evaluation. The rest of the configurations in the cell follow the standard cell-based methods proposed in <cit.>.After a warm-up period of 20 epochs, the formal search stage is initiated, as described in Algorithm <ref> and <ref>. Following 400 epochs of searching, a sub-optimal candidate architecture set 𝒜 with the scale M_record=20 is obtained, greatly reducing the scale of the cell-based space from 2.89e12 (when inner node number is 5) to 20. The sub-networks' training and evaluation follow the standard setup used in CsiNet <cit.>. Notably, the sub-network generated through the search process need not be consistent with the SuperNet's width. In this experiment, the sub-network channel value is set to 7 by default to compare with the manual design work <cit.>. The experiment is performed using a single Tesla V100 GPU, and the search stage takes 0.4-1.0 days (depending on the cell's node configuration), while the evaluation stage takes 1.0-2.0 days (which requires about a dozen sub-networks to be trained, each taking approximately 0.1 days). Furthermore, if GPU memory is sufficient, the search and evaluation stages can be conducted in parallel to further reduce the time cost. §.§ Experiments and Analysis§.§.§ Effectiveness of Auto-CsiNet There is a tradeoff between NN complexity and performance, and a practical cell structure can achieve a better balance between the two. To compare the tradeoff effectiveness between NAS-CsiNet and manual-designed networks, we compare the performance of Auto-CsiNet with that of CS-CsiNet <cit.>, CS-CsiNet+ <cit.>, CS-CRNet <cit.> and CS-MRFNet <cit.>. For comparison, we use the RefineNet in CsiNet-M1 <cit.>, the CRBlock <cit.>, and the MRFBlock <cit.> as the cell structure in CS-CsiNet+, CS-CRNet, and CS-MRFNet, respectively. The cell width is set to 7 for all networks. We conduct the search on four scenario datasets (QuaDRiGa Scene 1/2 and COST2100 Indoor/Outdoor) and obtain the corresponding Auto-CsiNet series (N1-N6) with a variety of internal nodes in the cell structure. The results are shown in Figure <ref>, where the different models are marked by different colors, and the radius represents the complexity of the cell structure. Note that the cell complexity is affected by the number of internal nodes and the operator selection. On the balance diagram for all scenarios, the NAS-CsiNet series are located at the lower left equilibrium points compared to the artificial-designed networks. For example, Auto-CsiNet-N1 surpasses CS-CsiNet or CS-CRNet in NMSE performance and is more lightweight. These results demonstrate that Auto-CsiNet can exceed the performance and efficiency of the manual-designed NN, thus verifying the effectiveness of Auto-CsiNet. In addition, it is worth noting that the manual-designed CS-CRNet <cit.> can be represented in the search space, specifically the CRBlock in CS-CRNet is one of the candidate structures for NAS search with N3 configuration (as shown in Figure <ref>). Thus, as the optimal search result, Auto-CsiNet-N3 can achieve a better balance point than CS-CRNet within the same search space. 
This demonstrates that NAS can quickly search for the optimal solution in the search space with a reasonable and efficient optimization strategy. In this approach, PC-DARTS is utilized to employ GD-based optimization and the weight-sharing strategy to speed up the search. Compared with the inefficient random trial and error process of manually designed NN structure, this approach can traverse 4.72e6 [calculate by (<ref>)] candidate structures in a limited time, and its valid and orderly traversal process also ensures its searching efficiency.This advantage also explains why Auto-CsiNet can outperform manually designed networks such as MRFBlock <cit.> when the cell structure is relatively complex. Large capacity networks experience a diminishing marginal effect, meaning that increasing model size yields diminishing returns, which is more pronounced in artificially designed networks. The size of the search space expands exponentially with the number of inner nodes of the cell, making it more difficult to manually design the network structure. Due to the limited capacity to traverse sub-networks manually in the large-scale search space, the probability of selecting the optimal solution is minimal. Therefore, powerful NAS is necessary to quickly traverse more sub-networks and reduce the diminishing marginal effect of complex units to a certain extent.Additional results are presented in Tables <ref> and <ref>, demonstrating a certain level of generalization with respect to the CR and quantization bits B. The proposed scheme proves effective even with minor deviations in the settings of CR and B during the search and evaluation stages. Table <ref> also details the network's practical running time per sample, tested on a Tesla V100 GPU with a batch size of 1. Notably, these results do not consistently align with the ranking of cell-FLOPs complexity. Factors such as data read, memory usage, and GPU occupancy affect this discrepancy. It is also attributed to the parallel operation of each branch in the multi-branch cell structure, where the network's runtime is determined by the longest-running branch, rather than the number of branches. Despite having higher cell complexity, Auto-CsiNet-N6 exhibits a shorter runtime than Auto-CsiNet-N4. The calculation of cell FLOPs complexity involves the sum of the complexities of all operators/layers. Figure <ref> showcases the average spectral efficiency of the MISO-OFDM system, which employs a zero-forcing precoding scheme. The spectral efficiency results are influenced by the performance of the decoder network used for CSI reconstruction. The NMSE performance of these decoder networks is detailed in Table <ref>. We compare the state-of-the-art (SOTA) manually designed model, CS-MRFNet <cit.>, with Auto-CsiNet-N5, highlighting their spectral efficiency values at an SNR of 11 dB. The performance gain ratio is approximately 2.47% (a gain of 0.186 bits/s/Hz over 7.531 bits/s/Hz). This gain is attributed solely to modifications in the decoder network structure for CSI reconstruction. Furthermore, Figure <ref> illustrates the effect of training dataset size on network performance. The performance ranking is hardly affected by the training set size, ensuring the proposed scheme's effectiveness in search without large datasets. Although reducing training samples only yields approximate performance with reduced accuracy, it has negligible effects on decision-making in the search process, i.e., architecture weight optimization. 
This result also validates the strategy that a reduced training set can further accelerate model evaluation in NAS <cit.>. §.§.§ Gain for Scenario CustomizationDue to the high cost of manual design work, it can often be challenging to customize NN structures to fit specific scenarios. The aim of Auto-CsiNet proposed in this paper is not only to replace manual design work with AutoML but also to obtain scenario-specific NN architectures conveniently by adopting low-cost NAS methods, thereby unleashing the maximum potential of NN for this task.In Figure <ref>, we compare Auto-CsiNet-N5 with two artificially-designed NNs, CS-CsiNet and CS-SimpleCNN. CS-SimpleCNN's decoder only contains a dense layer and a 3x3 convolutional layer, that is, CS-CsiNet removes two RefineNet units. We use Auto-CsiNet-N5 as an example of the Auto-CsiNet series to show the experimental comparison results, and the performance of other models in the Auto-Csinet-Nx series are shown in Figure <ref>. Auto-CsiNet-N5s are searched on the QuaDRiGa Scene 1/2 and COST2100 Indoor/Outdoor tasks, and the architectures are depicted in Figure <ref>, while the manually-designed networks are general-purpose for all tasks.The results demonstrate that a universal network is not adequate for all tasks, as the NN structure has limited generalization and may not show the same superiority for different tasks. For instance, while CS-CsiNet is more complex than CS-SimpleCNN, it cannot outperform CS-SimpleCNN on all tasks. CS-SimpleCNN performs better in the QuaDRiGa scene 1 and COST2100 Indoor scenario. In contrast, the customized-designed Auto-CsiNet-N5s achieve the highest performance for the specific scenario. The concept of scene-specific customization of Auto-CsiNet can be viewed as giving up the generalization of the NN structure in exchange for greater potential for performance improvement so that the NN can unleash its maximum potential for the given scenario. §.§.§ Architecture Characteristic AnalysisFigure <ref> illustrates the effect of the scenario on the architecture by displaying the search results of Auto-CsiNet-N3 on QuaDRiGa Scene 1 and Scene 2 (first two subfigures). It also compares the cell configuration effect by examining Auto-CsiNet-N3 and Auto-CsiNet-N6 searched on QuaDRiGa Scene 2 (last two subfigures). In each subfigure, the bar represents the proportion of the simplest operator () or the most complex operator () within a cell, and the line depicts the the NMSE performance of the sub-network at each period in the search process.In the first two subfigures of Figure <ref>, it is observed that complex operators, such aswith a high parameter number, are more likely to appear in the search structure of complex scenes with high PSE. On the contrary, simple operators likeare more likely to appear in the search results of simple scenes. Comparing Auto-CsiNet-N3 and Auto-CsiNet-N6 searched on a same scenario dataset, the probability of the complex convolution is higher than the parameter-free operation in Auto-CsiNet-N3, indicating that operatorcan bring more gain to the sub-network than operator . In Auto-CsiNet-N6, the opposite is true, demonstrating that NAS can alleviate network degradation to some extent.Throughout the search process, several patterns in the searched structures were identified. Firstly, the probability of convolutions is higher than , with large-kernel convolutions being more likely to appear than small-kernel ones. 
This is consistent with the manual design experience, where large kernel sizes enable a large receptive field <cit.>. Secondly, the probability ofoccurring is almost zero, indicating that NAS tends to assemble operations in parallel rather than in series, asinterrupts information flow in the SuperNet, resulting in a low score. Thirdly, the cell complexity (number of inner nodes N) cannot be set to infinity, as overlarge N leads to network degradation, as observed with Auto-CsiNet-N7 performing worse than Auto-CsiNet-N6 due to network degradation.Figure <ref> also reflects the degradation of sub-network performance caused by excessive search. The X-axis indicates the search time point, represented by the number of search epochs, at which the sub-network with optimal performance appears. As the performance of the sub-network fluctuates with the search time, and the appropriate search time varies depending on the scenarios or cell configurations, only a sub-optimal structure can be obtained with a fixed search time. This issue can be addressed through the use of the early stop and elastic selection strategy outlined in Algorithm <ref>. § CONCLUSION This paper focuses on the design of NN architectures for intelligent CSI feedback. To address the main challenges in manual design, we propose an automatic generation scheme for NN structures using NAS to substitute the laborious process of adjusting hyperparameters in manual design work. This approach enables a standardized and convenient process for scene-specific customization of NN architectures and integrates implicit scene knowledge from the CSI feature distribution into the architecture design in a data-driven manner.To reduce the threshold for implementing NAS and its resource consumption in CSI feedback, we employ an efficient gradient descent-based NAS, namely the PC-DARTS method <cit.>, and control the scale of the search space by constructing it based on CS-CsiNet <cit.>. This helps in controlling and reducing the search cost, and the scheme can be easily implemented in practice. Additionally, we observe that excessive searching leads to a degradation in performance of the search structure. To address this, we further improve the scheme by adopting the early stopping and elastic selection mechanisms.The resulting searched NN structure, referred to as Auto-CsiNet, outperforms manually designed NN architectures in terms of performance and complexity, validating the effectiveness of the proposed automatic scheme. Furthermore, the scene-specific Auto-CsiNets surpass the manually designed general NN architectures in the given scenes, demonstrating that the proposed scheme achieves scenario-specific customization to maximize performance. The searched NN structure is also consistent with the experience of human design, such as the observation that complex scenes require a larger receptive field. NAS can also alleviate network degradation during the search process if the cell is set to be too complex.For future work, it is important to consider practical deployment issues. For example, in future communication networks, edge servers will determine the NN architectures for various application scenarios, and limitations such as the power of hardware devices and requirements for latency should be taken into account when selecting suitable candidate operation sets. 
Depth-separable convolution serves as an example of a candidate operation set that is suitable for lightweight networks and can be easily deployed on mobile devices.
"authors": [
"Xiangyi Li",
"Jiajia Guo",
"Chao-Kai Wen",
"Shi Jin"
],
"categories": [
"cs.IT",
"cs.AI",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20231127155658",
"title": "Auto-CsiNet: Scenario-customized Automatic Neural Network Architecture Generation for Massive MIMO CSI Feedback"
} |
Two new families of metrics via optimal transport and barycenter problems

Jun Kitagawa
Department of Mathematics, Michigan State University, East Lansing, MI 48824
[email protected]

Asuka Takatsu
Department of Mathematical Sciences, Tokyo Metropolitan University, Tokyo 192-0397, Japan & RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan
[email protected]

2020 Mathematics Subject Classification: 49Q22, 44A12, 30L05, 28A50

January 14, 2024
====================

We first introduce a two-parameter family of metrics on the space of Borel probability measures on Euclidean space with finite pth moment for 1≤ p<∞, called the sliced Monge–Kantorovich metrics, which include the sliced Wasserstein and max-sliced Wasserstein metrics. We then show that these are complete, separable metric spaces and that these metrics have a dual representation, but the spaces are (except for an endpoint case) not geodesic. The completeness, duality, and non-geodesicness are new even in the sliced and max-sliced Wasserstein cases. Next we define a two-parameter family of metrics on a subspace of Borel probability measures on the product of two metric spaces, called the disintegrated Monge–Kantorovich metrics. When this product is ℝ×𝕊^n-1, the sliced Monge–Kantorovich spaces can be isometrically embedded into the corresponding disintegrated Monge–Kantorovich spaces. We then prove that the disintegrated Monge–Kantorovich metrics are complete, separable (except in an endpoint case), geodesic spaces, with a dual representation. Additionally, we prove existence and duality for an associated barycenter problem, and provide conditions for uniqueness of a barycenter. These results on barycenter problems for the disintegrated Monge–Kantorovich metrics also yield the corresponding existence, duality, and uniqueness results for classical Monge–Kantorovich barycenters in a wide variety of spaces.

§ INTRODUCTION

For a complete, separable metric space (X,_X), let 𝒫(X) denote the space of Borel probability measures on X. For 1≤ p<∞, also let 𝒫_p(X) denote the set of elements in 𝒫(X) with finite pth moment. For μ∈𝒫(X) and a Borel map T from X to a measurable space Y, the pushforward measure T_♯μ∈𝒫(Y) is defined for a Borel set A⊂ Y by T_♯μ(A):=μ(T^-1(A)). Then _p^X will denote the well-known p-Monge–Kantorovich metric on 𝒫_p(X), from optimal transport theory. To be precise, let π_i: X× X→ X be the projection onto the ith coordinate for i=1,2. For μ, ν∈𝒫_p(X), we define Π(μ,ν):={γ∈𝒫(X× X) | π_1_♯γ=μ, π_2_♯γ=ν}, _p^X(μ, ν):=inf_γ∈Π(μ, ν)‖_X‖_L^p(γ)=inf_γ∈Π(μ, ν)(∫_X× X_X(x,y)^p dγ(x, y))^1/p. The infimum above is always attained (see <cit.>*Theorem 4.1, for instance) and a minimizer is called a p-optimal coupling between μ and ν. It is well known that _p^X is a metric on 𝒫_p(X) and provides a rich geometric structure, laying the groundwork, to name a few examples, for the theory of synthetic Ricci curvature, PDEs on singular spaces, and a myriad of applications (see, for example, <cit.>*Parts II and III, <cit.>*Chapters 4, 7, and 8, and <cit.>). However, Monge–Kantorovich metrics are expensive to compute, and there is great interest in alternative metric structures on spaces of probability measures. In this paper we introduce two different two-parameter families of metrics based on optimal transport, and show various geometric properties of these families.
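Although everything that follows is theoretical, it may help to keep a concrete computation in mind. On the real line, the monotone (sorted) matching is a p-optimal coupling between empirical measures with equally many atoms, and the sliced metrics introduced below aggregate such one-dimensional distances over directions. The following sketch is purely illustrative and not part of the paper's arguments; in particular, the Monte Carlo sampling of directions is an approximation of the exact L^q(σ_n-1) norm.

```python
import numpy as np

def mk_p_1d(x, y, p):
    # d_p between two empirical measures (1/N) sum delta_{x_i} and
    # (1/N) sum delta_{y_i} on the line: for the cost |t - s|^p the
    # monotone (sorted) matching is a p-optimal coupling.
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

def sliced_mk_pq(X, Y, p, q, n_dirs=500, rng=None):
    # Monte Carlo approximation of the sliced (p, q)-Monge-Kantorovich
    # metric between empirical measures on R^n given as (N, n) arrays:
    # the L^q(sigma_{n-1}) norm of omega -> d_p(R^omega_# mu, R^omega_# nu).
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.normal(size=(n_dirs, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # uniform points on S^{n-1}
    vals = np.array([mk_p_1d(X @ w, Y @ w, p) for w in W])
    if np.isinf(q):
        return vals.max()  # max-sliced case q = infinity
    return np.mean(vals ** q) ** (1.0 / q)
```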
To introduce the first family,let n∈ℕ and denote by σ_n-1 the Riemannian volume measure on the (n-1)-dimensional standard unit sphere 𝕊^n-1, normalized to be a probability measure.Also for ω∈^n-1, we define the map ^ω: ^n→ for x∈^n by^ω(x): =⟨ x,ω⟩. Our first family of metrics is defined as follows.For 1≤ p < ∞ and 1≤ q ≤∞, we define for any μ, ν∈𝒫_p(^n),_p, q(μ, ν): =_p^(^∙_♯μ, ^∙_♯ν)_L^q(σ_n-1)=(∫_^n-1_p^(^ω_♯μ, ^ω_♯ν)^qdσ_n-1(ω))^1/q, if1≤ q<∞, _ω∈^n-1_p^(^ω_♯μ, ^ω_♯ν), ifq=∞.We call _p, q the sliced (p, q)-Monge–Kantorovich metric, and will generically refer to sliced Monge–Kantorovich metrics. We note that ^ω_♯μ is the Radon transform of μ, and the case _p, p is known as the so-called sliced Wasserstein metric, while _p, ∞ is often referred to asthe max-sliced Wasserstein metric.See BayraktarGuo21, sliced-original, mSWDand the references therein for related works.To state our main results on the sliced Monge–Kantorovich metrics,we first fix some notation.We frequently denote functions from some space Ω with values in functions on another space, with the variable ω∈Ω written as a subscript.For a metric space (X,_X), let C_b(X) denote the space of bounded continuous functions on X. For 1≤ p<∞ and 1≤ r' ≤∞,define 𝒜_p:={ (Φ_∙, Ψ_∙)∈ C(^n-1; C_b())^2 | -Φ_ω(t)-Ψ_ω(s) ≤ |t-s|^pfor all t, s∈ℝ and ω∈𝕊^n-1},𝒴_r' := {ζ∈ C_b(^n-1) | ‖ζ‖_L^r'(σ_n-1)≤ 1, ζ>0 }. Let 1≤ p<∞, 1≤ q≤∞ and n∈ℕ. Then: * (𝒫_p(ℝ^n), _p,q) is a complete, separable metric space. * (𝒫_p(ℝ^n), _p,q) istopologically equivalent to (𝒫_p(ℝ^n), _p^ℝ^n). * (𝒫_p(ℝ^n), _p,q) and(𝒫_p(ℝ^n), _p^ℝ^n) are bi-Lipschitz equivalent if n=1 but not bi-Lipschitz equivalent if n≠ 1 and 1≤ q<∞. * (𝒫_p(ℝ^n), _p,q) is a geodesic spaceif and only ifeither p=1 or n=1. * Assume p≤ q. Set r:=q/p, and denote by r' the Hölder conjugate of r.Then for μ, ν∈𝒫_p(^n),we have_p, q(μ, ν)^p = sup{ -∫_^n-1ζ(∫_Φ_∙ dR^∙_♯μ +∫_Ψ_∙ dR^∙_♯ν) dσ_n-1| (Φ,Ψ)∈𝒜_p, ζ∈𝒴_r'}.Regarding item (<ref>) above, recall the following definition.Let (X,_X) be a metric space. A curve ρ:[0,1]→ X is called aminimal geodesic if _X(ρ(τ_1), ρ(τ_2))≤ |τ_1-τ_2|_X(ρ(0),ρ(1))holds for any τ_1, τ_2∈ [0,1].A metric space (X,_X) is said to be geodesicif any two points in X can be joined by a minimal geodesic. Note that due to the triangle inequality, equality actually holds in (<ref>) for a minimal geodesic. Of course it is clear that when n=1, one has _p, q=_p^ for all p and q, hence the only interesting cases which we will treat for _p, q are when n≥ 2.Noting the defect (non-geodesicness) of the sliced Monge–Kantorovich metrics as defined above, we attempt to remedy this by introducing another two-parameter family of metrics into which (for certain settings)the sliced metrics can be embedded. Let (Y, _Y) and (Ω, _Ω) becomplete, separable metric spaces, and let σ be a Borel probability measure on Ω.Let π_Ω:Y ×Ω→Ω denote the natural projection, andfor 1≤ p<∞, define𝒫^σ(Y×Ω) :={𝔪∈𝒫(Y×Ω)| (π_Ω)_♯𝔪=σ}.Next recall a form of disintegration of measures which can be found, for example, in <cit.>*Chapter III-70 and 72. [Disintegration Theorem]Let Y, Ω be complete, separable metric spaces, and fix a probability measure 𝔪∈𝒫(Y×Ω). 
Then there exists a map 𝔪^∙:Ω→𝒫(Y), uniquely defined (π_Ω)_♯𝔪-a.e., such that if A∈ℬ_Y and B∈ℬ_Ω, the real valued function on Ω defined by ω↦𝔪^ω(A) is Borel, and𝔪(A× B)=∫_B𝔪^ω(A)d(π_Ω)_♯𝔪(ω).By an abuse of notation, we denote such a disintegration by 𝔪=𝔪^∙⊗(π_Ω)_♯𝔪.We are now ready to define our second family of metrics; below δ_y_0^Y is the delta measure on Y concentrated at the point y_0∈ Y.Let 1≤ p <∞ and 1≤ q≤∞.Given 𝔪, 𝔫∈𝒫^σ(Y×Ω), we define_p,q(𝔪, 𝔫):=‖_p^Y(𝔪^∙, 𝔫^∙)‖_L^q(σ),and call _p, q the disintegrated (p, q)-Monge–Kantorovich metric. Weset𝒫^σ_p,q(Y×Ω):={𝔪∈𝒫^σ(Y×Ω) | _p^Y(δ_y_0^Y,𝔪^∙)∈ L^q(σ)for some (hence all) y_0∈ Y }. By <cit.>*Lemma 12.4.7, the function ω↦_p^Y(𝔪^ω, 𝔫^ω) is Borel, hence _p, q as above is well-defined. Recall also:For a locally compact Hausdorff space Y, a function ϕ on Y is said to vanish at infinity if{ t∈ Y | |ϕ(t)|≥ε}is compact for any ε>0. We let C_0(Y) stand forthe space of continuous functions on Y vanishing at infinity equipped with the supremum norm. We say a geodesic space (Y, _Y) is ball convex with respect to a point y_0∈ Y if for any geodesic ρ: [0, 1]→ Y and τ∈ [0, 1]_Y(ρ(τ), y_0)≤max{_Y(ρ(0), y_0), _Y(ρ(1), y_0)}.To state the properties of _p, q, we define a subspace of C(Y)for a locally compact metric space (Y,_Y) by 𝒳_p :={ϕ∈ C(Y) | ϕ (t) /1+_Y(y_0,t)^p∈ C_0(Y)for some (hence all) y_0∈ Y}equipped with the norm defined by ‖ϕ‖_𝒳_p,y_0:=sup_t∈ Y|ϕ (t) /1+_Y(y_0,t)^p|,for ϕ∈ C(Y).Since all (𝒳_p, ‖·‖_𝒳_p,y_0) for y_0∈ Y are equivalent to each other,we simply denote this normed space by 𝒳_p and write the norm as ‖·‖_𝒳_p with the convention that we have fixed some y_0∈ Y,when there is no possibility of confusion.We also define 𝒜_p,Y,σ :={ (Φ,Ψ)∈(C_b(Ω;𝒳_p)∩ C_b(Y×Ω))^2 | -Φ_ω(t)-Ψ_ω(s) ≤_Y(t, s)^pfor all t, s∈ Y and ω∈Ω},and 𝒴_r', σ will denote the analogue of 𝒴_r' with ^n-1 and σ_n-1 replaced by Ω and σ, respectively. The main results on disintegrated metrics we prove are as follows; below for a function c on Y× Y and a function ϕ on Y, the notation ϕ^c stands for the c-transform of ϕ (see Definition <ref>).Let 1≤ p<∞, 1≤ q≤∞.Let (Y, _Y) and (Ω, _Ω) be complete, separable metric spaces, and let σ∈𝒫(Ω).Then:*(𝒫^σ_p,q(Y×Ω), _p, q) is a metric space, and separable when q<∞. If (Y, _Y) and (Ω, _Ω) are locally compact, then the space is also complete.*If (Y, _Y) is a locally compact geodesic space that is ball convex with respect to some point in Yand (Ω, _Ω) is locally compactthen (𝒫^σ_p,q(Y×Ω), _p, q) is geodesic.*Letp≤ q, set r:=q/p, anddenote by r' the Hölder conjugate of r. Then if (Y, _Y) and (Ω, _Ω) are locally compact, for 𝔪, 𝔫∈𝒫^σ_p, q(Y×Ω) we have _p, q(𝔪, 𝔫)^p=sup{ -∫_Ωζ(∫_YΦ_∙ d𝔪^∙ +∫_YΦ^_Y^p_∙ d𝔫^∙)dσ| Φ∈ C_b(Ω; 𝒳_p), ζ∈𝒴_r', σ}=sup{ -∫_Y×ΩζΦ d𝔪 -∫_Y×ΩζΨ d𝔫| (Φ, Ψ)∈𝒜_p,Y,σ, ζ∈𝒴_r', σ}. The relation between the sliced and disintegrated Monge-Kantorovich metrics we have defined are as follows. Let 1≤ p<∞, 1≤ q≤∞ and n∈ℕ. If ^n-1 is given the canonical metric and equipped with the probability measure σ_n-1,then there exists an isometric embedding from(𝒫_p(ℝ^n), _p,q) to (𝒫^σ_p,q(×^n-1), _p, q)defined by sending μ∈𝒫_p(ℝ^n) to the element of the form^∙_♯μ⊗σ_n-1. The embedding in the theorem above shows that (𝒫_p(ℝ^n), _p,q) can be viewed as a sort of “submanifold” embedded into the geodesic space (𝒫^σ_p,q(×^n-1), _p, q), but _p,q is in actuality utilizing the ambient metric from the larger space rather than the intrinsic metric generated from itself. 
In fact, it is proved in <cit.>*Lemma 2.6 and Lemma 2.8 that the intrinsic metric on 𝒫_p(ℝ^n) induced by _p,p between discrete measures with compact supports is _p^ℝ^n. In the final portion of this paper, we consider barycenter problems related to the sliced and disintegrated Monge–Kantorovich metrics. First we define some notation and terminology. Fix K∈ with K≥ 2 and writeΛ_K:={Λ=(λ_1, …, λ_K)∈ (0, 1)^K | ∑_k=1^Kλ_k=1}. Take Λ∈Λ_K and≥ 0.Fora complete, separable metric space (X,_X),also fix a collectionM=(x_k)_k=1^K in X.We define ℬ^_X,_Λ, M: X→ [0, ∞) by ℬ^_X,_Λ,M(x):=∑_k=1^Kλ_k _X(x_k, x)^, with the convention 0^0:=0. We call any minimizer of ℬ^_X,_Λ,M on X a _X-barycenter. For simplicity,we writeℬ^p,_Λ, M :=ℬ^_p^Y,_Λ, M, B^p,q,_Λ, M :=ℬ^_p,q,_Λ, M, 𝔅^p,q,_Λ,M:= ℬ^_p,q,_Λ, M,where Y and Ω will be understood. To state our main results on _p, q- and _p, q-barycenters, for metric spaces (Y,_Y), (Ω,_Ω) and 1≤ p <∞, define𝒵_p: ={ξ∈ C(Y×Ω)| |ξ|/1+_Y(y_0,π_Y)^p∈ C_0(Y×Ω) for some (hence any) y_0∈ Y}, ‖ξ‖_𝒵_p: =sup_(t, ω)∈ Y×Ω|ξ(t, ω)|/1+_Y(y_0, t)^p,where π_Y is the natural projection from Y×Ω to Y. Also define for ξ∈𝒵_p and ω∈Ω,S_λ, pξ_ω(s) :=sup_t∈ Y(-λ_Y(t, s)^p-ξ(t, ω) ) for s∈ Y.Fix any K∈ with K≥ 2, Λ∈Λ_K, 1≤ p<∞, p≤ q≤∞, and ≥ 0. *For n∈ℕ and any M∈𝒫_p(^n)^K, there exists a minimizer of B^p,q,_Λ, M in 𝒫_p(^n).* If (Y,_Y) and (Ω,_Ω) are complete, separable metric spaces such that their product satisfies the Heine–Borel property and σ∈𝒫_p(Ω), then for any 𝔐∈𝒫^σ_p, q(Y×Ω)^K, there exists a minimizer of 𝔅^p,q,_Λ, 𝔐 in 𝒫^σ_p, q(Y×Ω).* If(Y, _Y) and (Ω, _Ω) are complete, separable, locally compact metric spaces andσ∈𝒫(Ω), for any 𝔐=(𝔪_k)_k=1^K in 𝒫^σ_p,q(Y×Ω),inf_𝔫∈𝒫^σ_p,q(Y×Ω)𝔅^p, q, p_Λ, 𝔐(𝔫)=sup{-∑_k=1^K∫_Ωζ_k(ω)(∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω) dσ(ω)| (ζ_k, ξ_k)∈𝒴_r', σ×𝒵_p, ∑_k=1^Kζ_kξ_k≡ 0 }. * Suppose 1< p≤ q<∞, and let Y be a complete, connected Riemannian manifold, possibly with boundary, equipped with the Riemannian distance function.Also let (Ω, _Ω) be a complete, separable, locally compact metric space and σ∈𝒫(Ω). Let 𝔐=(𝔪_k)_k=1^K in 𝒫^σ_p,q(Y×Ω), then if for some index 1≤ k≤ K, for σ-a.e. ω, the measures 𝔪_k^ω are absolutely continuous with respect to the Riemannian volume measure on Y, minimizers of 𝔅^p, q, p_Λ, 𝔐 in 𝒫^σ_p, q(Y×Ω) are unique if they exist. Upon inspection of the proofs, it is easy to see that when p=q (hence r'=∞), in the duality results Theorem <ref> (<ref>) and Theorem <ref> (<ref>), the maximum values are attained when ζ≡ 1, thus the supremum over ζ is not needed. Since the proof of Theorem <ref> (<ref>) is based on Theorem <ref> (<ref>) through Proposition <ref>, the supremums over ζ_k are also not needed there when p=q.It is highly likely that most, if not all of the claims in Theorems <ref> and <ref> can be shown when Ω is a more general topological space, since thm: disintegration is known to hold in a more general setting. However to maintain some level of generality without making our proofs overly technical, we have opted to consider only the case when Ω is a metric space, and leave themore general setting for a future work. We note that the only place where the metric _Ω is directly used is in the proof of Theorem <ref> (<ref>).Finally, we can use Theorem <ref> to obtain results for classical _p^Y-barycenters in a wide variety of spaces. 
In particular, we can extend the duality results of <cit.> to locally compact metric spaces, and the uniqueness results to a wide variety of Riemannian manifolds with or without boundary, without the need for lower curvature bounds.Fix K∈, Λ∈Λ_K, 1≤ p<∞. Let (Y,_Y) be a complete, separable metric space and fix M=(μ_k)_k=1^K in 𝒫_p(Y).* If (Y, _Y) satisfies the Heine–Borel property, for any ≥ 0 there exists a minimizer of ℬ^p,_Λ,M(ν) in 𝒫_p(Y).*If (Y, _Y) is locally compact, inf_ν∈𝒫_p(Y)ℬ^p,p_Λ,M(ν) =sup{-∑_k=1^K ∫_Yϕ_k^λ_k_Y^pdμ_k| |ϕ_k|/1+_Y(y_0, ·)^p∈ C_0(Y), ∑_k=1^Kϕ_k≡ 0}. * If p>1 and Y is a complete, connected Riemannian manifold, possibly with boundary, equipped with the Riemannian distance function, and μ_k is absolutely continuous with respect to the Riemannian volume measure on Y for some 1≤ k≤ K, then there is a unique minimizer ofℬ^p,p_Λ,M(ν) in 𝒫_p(Y).§.§ Discussion of methods and existing literatureIt is proved in <cit.>*Theorem 2.3 that(𝒫_p(ℝ^n), _p,q) with p=q or ∞ is topologicallyequivalent to (𝒫_p(ℝ^n), _p^ℝ^n) for any n∈ℕ, and(𝒫_p(ℝ^n), _1,∞), (𝒫_p(ℝ^n), _1^ℝ^n)arebi-Lipschitzequivalent toeach other. However, it is also shown there that(𝒫_p(ℝ^n), _1,1) and(𝒫_p(ℝ^n), _1^ℝ^n)arebi-Lipschitzequivalent toeach other if and only if n=1.On the other hand, it is known that_p, p≤ C_p^ℝ^n for some constant C,and it is shown in <cit.>*Theorem 5.1.5 that_p^ℝ^n≤ C_p, p^1/n+1 holds for measures with compact supports for some constant C, where the constant also depends on the diameter of the union of the two supports(see also <cit.>*Remark 2.4). The completeness in Theorem <ref> (<ref>), and (<ref>) and (<ref>) are new even inthe sliced and max-sliced Wasserstein cases p=q and q=∞. For the non-geodesicness, we explicitly give two measures not connected by a _p, q-minimal geodesic. The counterexample exploits that if one has a _p, q-minimal geodesic μ(τ), then the curves τ↦^ωμ(τ) must be _p^-minimal geodesics for certain values of ω. The duality result is in some sense the expected one, from applying the classical Kantorovich duality to each pair of one-dimensional transport problems. However, one must take care when making a selection of admissible dual potentials to ensure some continuity in the ω variable, we exploit Michael's continuous selection theorem to accomplish this. We note that the metric _2, ∞ is equal to the 1-dimensional projection robust 2-Wasserstein distance defined in <cit.>. In <cit.> the authors also define what are called subspace robust 2-Wasserstein distances, which are shown to be bi-Lipschitz equivalent to _2^^n and geodesic, however these are not included in our family _p, q. The authors have also recently learned of an independent result by Park and Slepčev (<cit.>) in which they have shown topological properties, completeness, and non-geodesicness of the space (𝒫_2(^n), _2, 2). The overall scope of the paper of Park and Slepčev differ from ours, as they also consider topics such as tangential structure and statistical properties of the sliced metric, and relations to Sobolev norms, while we introduce the disintegrated metrics and analyze associated barycenter problems. The results in Theorem <ref> are the natural expected ones, but at each step we must take extreme care about issues of measurability. The key difficulty is that weak continuity of a sequence of measures is not inherited by the sequence of disintegrations. 
An additional difficulty is that we must apply the Kuratowski and Ryll-Nardzewski measurable selection theorem instead of the Michael’s selection theorem, which in particular requires the range of a multivalued mapping to be separable. This requires careful choices of multivalued mappings and causes additional complications.Of course some complications also arise since we are considering general spaces Y and Ω in place ofand ^n-1. Additionally, if Y=Ω⊂^n, σ∈𝒫_p(Y) is absolutely continuous with respect to n-dimensional Lebesgue measure, and 𝔪 and 𝔫 are p-optimal couplings between σ and measures μ and ν∈𝒫_p(Y) respectively, it can be seen that _p, p(𝔪, 𝔫) coincides with the linearized optimal transport metric introduced in <cit.>. We also note there is a somewhat similar notion of layerwise Wasserstein distance introduced in <cit.>.Regarding the results in Theorem <ref> on barycenters, we are able to obtain existence of _p, q-barycenters by a simple compactness argument, but were unable to prove a duality result. This is partly due to the fact that the Radon transform (push-forward under ^ω) and its dual transform are ill-behaved on Banach spaces, and are only known to have nice properties on Fréchet spaces defined by a countably infinite family of semi-norms (see for example, Hertle83, Hertle84). On the other hand, we were unable to obtain enough continuity to directly prove existence of _p, q-barycenters, thus we have taken the route of using duality in the disintegrated metric setting to prove existence of barycenters. The uniqueness result relies on existence of dual maximizers (or rather, extracting an appropriate limit of a maximizing sequence), which is perhaps the most involved proof of the paper, involving an application of a Komlós' Theorem on a Hilbert space valued Bochner–Lebesgue space. We also comment that due to the lack of geodesics for _p, q, it is more natural to consider barycenters for _p, q. Finally, Corollary <ref> comes from a quick application of the corresponding results in Theorem <ref> where Ω is a one point space. We note that the requirement that Y be a Riemannian manifold in Theorem <ref> (<ref>) and Corollary <ref> is only really necessary to obtain Lemmas <ref> and <ref>, the remainder of the proof is possible if Y is a space where there is a distinguished class of measures for which all p-optimal couplings with left marginals from this class are supported on the graph of an a.e. single valued mapping that can be uniquely determined from a dual potential.Some existing results on barycenters in similar settings include KimPass17, KimPass18, Jiang17, Ohta12. §.§ Outline of paperWe present the proofs of Theorem <ref> in Section <ref>, Theorems <ref> and <ref> in Section <ref>, and Theorem <ref> and Corollary <ref> in Section <ref> respectively, with the proofs further broken down into subsections. We also present some supplementary results on the disintegrated Monge–Kantorovich metrics that do not directly fall under Theorem <ref> in subsection <ref>. 
§.§ NotationWe close this section by summarizing some notation.p2.2cmp10.4cmp3.1cmNotation Meaning DefinitionC_b(X) Bounded continuous functions on X C_0(X) Bounded continuous functions on X vanishing at infinity 𝒢(X) Minimal geodesics on X defined on [0,1]Definition <ref> _𝒢(X)Supremum metric on 𝒢(X) Definition <ref> e^τ Evaluation map on 𝒢(Y) sending ρ to ρ(τ) Definition <ref> 𝒫(X) Borel probability measures on X 𝒫_p(X) Borel probability measures on X with finite pth momentΠ(μ,ν) Couplings between μ and ν(<ref>) ℋ^i i-dimensional Hausdorff measure σ_n-1 Normalized Riemannian volume measure on 𝕊^n-1 ^X_p(μ,ν)p-Monge–Kantorovich distance between μ and ν(<ref>) _p,q(μ,ν)Sliced (p, q)-Monge–Kantorovich distance between μ and νDefinition <ref> ^ω Function on ℝ^n sending x to ⟨ x,ω⟩(<ref>)_y_0(t)Distance between y_0 and t σ Borel probability measure on Ω π_Y, π_Ω Projections from Y×Ω to Y, Ω 𝒫^σ (Y×Ω) Borel probability measures on Y×Ω with Ω-marginal σ(<ref>) 𝒫^σ_p,q (Y×Ω)𝔪=𝔪^∙⊗σ∈𝒫^σ(Y ×Ω) with ^Y_p(δ_y_0^Y, 𝔪^∙)∈ L^q(σ_n-1) (<ref>) _p,q(𝔪,𝔫) Disintegrated (p, q)-Monge–Kantorovich distance between𝔪 and 𝔫(<ref>) ϕ^c c-transform of ϕDefinition <ref> 𝒜_p (Φ,Ψ)∈ C(𝕊^n-1;C_b(ℝ)) with -Φ_ω(t)-Ψ_ω(s)≤ -|t-s|^p for t, s∈ℝ^n and σ_n-1-a.e. ω(<ref>)𝒜_p,Y,σ(Φ,Ψ)∈ C_b(Y×Ω) with -Φ_ω(t)-Ψ_ω(s)≤ -_Y(t,s)^p for t, s∈ Y and σ-a.e. ω(<ref>) 𝒴_r'ζ∈ C(𝕊^n-1) with ζ>0 and ζ_L^r'(σ_n-1)≤ 1(<ref>)𝒴_r',σζ∈ C(Ω) with ζ>0 and ζ_L^r'(σ)≤ 1p.5, ℓ.1 𝒳_pϕ∈ C(Y) with ϕ/(1+_y_0^p)∈ C_0(Y) (<ref>) ℬ^p,_Λ, M _p^Y-barycenter on 𝒫_p(Y) Definition <ref> B^p,q,_Λ, M_p,q-barycenter on 𝒫_p(ℝ^n) Definition <ref> 𝔅^p,q,_Λ,M _p,q-barycenter on 𝒫^σ_p,q(Y×Ω) Definition <ref>𝒵_pϕ∈ C(Y×Ω) with ϕ/(1+(_y_0∘π_Y)^p)∈ C_0(Y) (<ref>) S_λ,pξ_ω Partial λ_Y^p-transform of ξ_ω (<ref>) § SLICED MONGE–KANTOROVICH METRICSIn this section we will prove Theorem <ref>. We start with a few preliminary lemmas before moving to the proof of the actual theorem. First, we recall here some properties of the usual Monge–Kantorovich metrics for later use. If (X, _X) is a metric space we will write B^X_r(x) for the open ball centered at x∈ X of radius r>0 with respect to _X. Let (X,_X) be a complete, separable metric space and 1≤ p<∞.Then (𝒫_p(X), _p^X) is also a complete, separable metric space. For a sequence (μ_j)_j∈ℕ in 𝒫_p(X) and μ∈𝒫_p(X),the following four conditions are equivalent to each other.* lim_j→∞_p^X(μ_j,μ)=0.* (μ_j)_j∈ℕ converges weakly to μand lim_j→∞∫_X_X(x_0, x)^p dμ_j(x)= ∫_X_X(x_0, x)^p dμ(x)holds for some (hence all) x_0∈ X.* (μ_j)_j∈ℕ converges weakly to μand lim_r→∞lim sup_j→∞∫_X∖ B_r^X(x_0)_X(x_0, x)^p dμ_j(x)=0. * For any continuous function on ϕ on X with|ϕ|≤ C(1+_X(x_0,·)^p)for some C∈ℝ and x_0∈ X, lim_j→∞∫_Xϕ(x) dμ_j(x)= ∫_Xϕ(x) dμ(x).Next, some notation and conventions. Throughout this paper,we will take 1≤ p<∞, 1≤ q ≤∞and n∈ℕ unless stated otherwise.For 1≤ i≤ n, we write e_i for the ith coordinate vector in ^n.We also denote by 1_E the characteristic function of a set E.For spaces X_1 and X_2, π_X_1:X_1 × X_2→ X_1 and π_X_2:X_1 × X_2→ X_2 denotethe natural projections.We will write δ^Y_y to denote the delta measure at the point y on a space Y. We will first show continuity of the function on 𝕊^n-1 defined byω↦_p^(^ω_♯μ, ^ω_♯ν) for μ,ν∈𝒫_p(ℝ^n) to ensure that _p,q(μ,ν) is well-defined.This is also proved in <cit.>*Proposition 2.3, but we provide the proof here for completeness. Let μ, ν∈𝒫_p(ℝ^n). *For ω∈𝕊^n-1,^ω_♯μ∈𝒫_p(ℝ) holds. Moreover, the pth moment of ^ω_♯μ is bounded by the pth moment of μ. 
*For any sequence (ω_j)_j∈ in 𝕊^n-1 converging to ω, lim_j→∞_p^(^ω_j_♯μ, ^ω_♯μ)=0,in particular, the sequence (^ω_j_♯μ)_j∈ℕ weakly converges to ^ω_♯μ. *The function on 𝕊^n-1 defined by ω↦_p^(^ω_♯μ, ^ω_♯ν)is continuous.We calculate ∫_ℝ |t|^p d ^ω_♯μ(t) = ∫_ℝ^n |⟨ x,ω⟩|^p d μ(x) ≤∫_ℝ^n|x|^p |ω|^p dμ(x) =∫_ℝ^n |x|^p dμ(x) =_p^^n(δ_0^ℝ^n, μ)^p,which proves assertion (<ref>).Let (ω_j)_j∈ be a sequence in 𝕊^n-1 converging to ω.Then for any ϕ∈ C_b(), we have ϕ(⟨·, ω_j⟩)∈ C_b(^n) for each j∈ℕ hence by dominated convergence,lim_j→∞∫_ϕ d^ω_j_♯μ = lim_j→∞∫_^nϕ(⟨ x, ω_j⟩)dμ(x) = ∫_^nϕ(⟨ x,ω⟩)dμ(x)=∫_ϕ d^ω_♯μ,thus (^ω_j_♯μ)_j∈ converges weakly to ^ω_♯μ.Moreover, since | ⟨·, ω_j⟩|^p≤|·|^p and μ has finite pth moment, dominated convergence yields lim_j→∞∫_ |t|^p d^ω_j_♯μ(t) = lim_j→∞∫_^n| ⟨ x, ω_j⟩|^p dμ(x) = ∫_^n| ⟨ x, ω⟩|^p dμ(x) =∫_ |t|^p d^ω_♯μ(t). Then it follows from Theorem <ref> thatlim_j→∞_p^(^ω_j_♯μ, ^ω_♯μ)=0,and assertion (<ref>) follows.Assertion (<ref>) for μ,ν together with the triangle inequality for _p^ leads to lim_j→∞_p^(^ω_j_♯μ, ^ω_j_♯ν) = _p^(^ω_♯μ, ^ω_♯ν),proving assertion (<ref>). Next we give a comparison result between _p^^n and _p,q.For 1≤ q≤∞ and n∈ℕ, set M_q,n :=⟨ e_1,·⟩_L^q(σ_n-1).Using the fact that (see <cit.>*(5.2.5.(ii)) for instance) for any v_1, v_2∈^n ⟨ v_1, v_2⟩=n∫_^n-1⟨ v_1, ω⟩⟨ v_2, ω⟩ dσ_n-1(ω),we have M_2,n=n^-1/2; additionally it is easy to see that M_∞,n=1.We also observe from the rotational invariance of σ_n-1 that M_q,n |x| =⟨ x,·⟩_L^q(σ_n-1)holds for any x∈ℝ^n.The following comparison in Lemma <ref> is proved for p=q in <cit.>*Proposition 5.1.3. Note that a simple application of Hölder's inequality yields p≤ p', q≤ q' ⇒_p, q≤_p', q',which will be used in the sequel.For μ,ν∈𝒫_p(^n),_p,q(μ, ν) ≤ M_max{p,q},n·_p^^n(μ, ν). Additionally,for any x_0∈ℝ^n, _p,p(δ_x_0^^n,μ) = M_p,n·_p^^n(δ_x_0^^n,μ).Consequently,ℝ^n is homothetically embedded into (𝒫_p(ℝ^n), _p,p). Let γ be a p-optimal coupling between μ and ν,then (^ω×^ω)_♯γ∈Π(^ω_♯μ,^ω_♯ν) for each ω∈𝕊^n-1.We calculate_p, q(μ, ν)=_p^(^∙_♯μ, ^∙_♯ν)_L^q(σ_n-1) ≤(∫_ℝ×ℝ |t-s|^p d(^∙×^∙)_♯γ(t,s))^1/p_L^q(σ_n-1) =(∫_ℝ^n×ℝ^n |⟨ x-y,·⟩|^p dγ(x,y))^1/p_L^q(σ_n-1). When q=∞, we easily obtain(∫_ℝ^n×ℝ^n |⟨ x-y,·⟩|^p dγ(x,y))^1/p_L^∞(σ_n-1) ≤(∫_ℝ^n×ℝ^n|⟨ x-y,·⟩|^p_L^∞(σ_n-1) dγ(x,y))^1/p= (∫_ℝ^n×ℝ^n| x-y|^p dγ(x,y))^1/p=M_∞,n·_p^^n(μ, ν),where we have used M_∞,n=1.If p≤ q<∞,by Minkowski's integral inequality, we find_p, q(μ, ν) ≤ (∫_ℝ^n×ℝ^n |⟨ x-y,·⟩|^p dγ(x,y))^1/p_L^q(σ_n-1)=[ ∫_^n-1(∫_^n ×ℝ^n |⟨ x-y,ω⟩|^p dγ(x,y))^q/pdσ_n-1(ω)]^p/q·1/p ≤[ ∫_^n ×ℝ^n(∫_^n-1|⟨ x-y,ω⟩|^qdσ_n-1(ω) )^p/qdγ(x,y)]^1/p = ( ∫_^n ×ℝ^nM_q,n^p |x-y|^p dγ(x,y))^1/p=M_q,n·_p^^n(μ, ν). When q<p, the above with the aforementioned monotonicity of _p, q in q shows _p, q(μ, ν)≤_p, p(μ, ν)≤ M_p,n·_p^^n(μ, ν). Since we haveΠ(δ_x_0^^n,μ)={δ_x_0^^n⊗μ},Π(^ω_♯δ_x_0^^n,^ω_♯μ) ={( ^ω_♯δ_x_0^^n)⊗ ( ^ω_♯μ) }={(^ω×^ω)_♯ (δ_x_0^^n⊗μ)}for each x_0∈ℝ^n and ω∈𝕊^n-1, the inequality in (<ref>) becomes an equalityand this proves the lemma. By Lemma <ref>, for μ, ν∈𝒫_p(^n), and x_0∈ℝ^n,if _p^^n(δ_x_0^^n,μ)=_p^^n(δ_x_0^^n, ν),then _p, p(δ_x_0^^n,μ)=_p, p(δ_x_0^^n, ν). 
However, this does not hold for p≠ q and n≥ 2 in general.Indeed, if we take μ:=δ_e_1^ℝ^n, ν:=1/2(δ_e_1^^n+δ_e_2^ℝ^n),and p≠ q, then _p, q(δ_0^ℝ^n,μ) =_p^( δ_0^ℝ,δ_⟨ e_1,∙⟩^ ) _L^q(σ_n-1) =⟨ e_1,·⟩_L^q(σ_n-1) =M_q,n,_p, q(δ_0^ℝ^n,ν) =_p^( 1/2(δ_0^ℝ,δ_⟨ e_1,∙⟩^+δ_⟨ e_2,∙⟩^) )_L^q(σ_n-1)=( 1/2 |⟨ e_1,·⟩|^p+1/2|⟨ e_2,·⟩|^p )^1/p_L^q(σ_n-1).We find that _p, ∞(δ_0^ℝ^n,ν )<1=M_∞,n.If p<q<∞, then it follows from strict convexity of the function t↦ t^q/p on (0,∞) that_p, q(δ_0^ℝ^n,ν )^q= ∫_𝕊^n-1( 1/2 |⟨ e_1,ω⟩|^p+1/2|⟨ e_2,ω⟩|^p )^q/p dσ_n-1(ω)<∫_𝕊^n-1( 1/2 |⟨ e_1,ω⟩|^q +1/2|⟨ e_2,ω⟩|^q ) dσ_n-1(ω) =M_q,n^q.If 1≤ q<p, then we similarly observe from strict concavity of the function t↦ t^q/p on (0,∞) that _p, q(δ_0^ℝ^n,ν)>M_q,n. Thus we find _p^^n(δ_0^ℝ^n,μ)=_p^^n(δ_0^ℝ^n,ν )=1, _p, q(δ_0^ℝ^n,μ ) ≠_p, q(δ_0^ℝ^n,ν).§.§ Complete, separable, metric.In this subsection we will prove (𝒫_p(^n), _p, q) is a complete, separable metric space. Before the proof, we make a quick remark which will be used a number of times.If (X, _X) is a metric space that satisfies the Heine–Borel property, then by <cit.>*Remark 5.1.5, any sequence (μ_j)_j∈ in 𝒫_p(X) with uniformly bounded pth moments (or equivalently, is bounded in (𝒫_p(X), _p^X)) has a subsequence that weakly converges to some μ∈𝒫(X). Then since _p^Y( δ_x_0^X,μ_j)^p is the pth moment of μ_j with respect to x_0, for any x_0∈ X, by the weak lower-semicontinuity of _p^X (<cit.>*Remark 6.12) the limiting measure μ also has finite pth moment.(Metric): Let μ, ν∈𝒫_p(^n). From the definition it is immediate that _p, q(μ, ν)=_p, q(ν, μ) ≥ 0,and since _p^ is a metric on 𝒫_p(), that _p, q(μ, μ)=0.Now suppose _p, q(μ, ν)=0. Then for σ_n-1-a.e. ω we have _p^(^ω_♯μ, ^ω_♯ν)=0,hence ^ω_♯μ=^ω_♯ν.Writing ℱ_^-1 and ℱ_^n^-1 for the inverse Fourier transforms onand ^n respectively, a quick calculation yields that for any r>0 andσ_n-1-a.e. ω,ℱ_^n^-1μ(rω) =∫_^ne^i⟨ rω, x⟩ dμ(x)=∫_ e^i r td ^ω_♯μ(t)=ℱ_^-1 (^ω_♯μ)(r)=ℱ_^-1 (^ω_♯ν)(r)=∫_ e^i r td ^ω_♯ν(t)=∫_^ne^i⟨ rω, x⟩ dν(x)=ℱ_^n^-1ν(rω).Since a probability measure is uniquely determined by its inversetransform, ( see <cit.>*Proposition 3.8.6 for instance), we have μ=ν. For the triangle inequality,using the triangle inequality for _p^ together with Minkowski's inequality, we have for μ_1, μ_2, μ_3∈𝒫_p(^n),_p, q(μ_1, μ_3) =_p^(^∙_♯μ_1, ^∙_♯μ_3)_L^q(σ_n-1)≤_p^(^∙_♯μ_1, ^∙_♯μ_2)+_p^(^∙_♯μ_2, ^∙_♯μ_3)_L^q(σ_n-1)≤_p^(^∙_♯μ_1, ^∙_♯μ_2)_L^q(σ_n-1)+_p^(^∙_♯μ_2, ^∙_♯μ_3)_L^q(σ_n-1)=_p, q(μ_1, μ_2)+_p, q(μ_2, μ_3). (Separability):Due to Theorem <ref>, (𝒫_p(^n), _p^ℝ^n) is separableand there exists a countable dense set 𝒟 of (𝒫_p(^n), _p^ℝ^n). By Lemma <ref>,the set 𝒟 is also dense in (𝒫_p(^n), _p,q)hence (𝒫_p(^n), _p,q) is separable.(Completeness):Let (μ_j)_j∈ be a Cauchy sequence in (𝒫_p(^n), _p,q).We prove the completeness of (𝒫_p(^n), _p,q) in several steps.Claim 1. There exists E_p, q⊂𝕊^n-1 such thatσ_n-1(E_p,q)=1 and (^ω_♯μ_j)_j∈ is Cauchy in _p^ for any ω∈ E_p, q. Proof of Claim 1. 
If q=∞, then the claim is trivial.In the case q<∞, for any ε_1, ε_2>0, there exists some J∈ such that whenever j_1, j_2≥ J, we have _p, q(μ_j_1, μ_j_2)<ε_1ε_2.It follows from Chebyshev's inequality that σ_n-1({ω∈^n-1|_p^(^ω_♯μ_j_1, ^ω_♯μ_j_2) ≥ε_1 })≤ ε_1^-q∫_^n-1_p^(^ω_♯μ_j_1, ^ω_♯μ_j_2)^qdσ_n-1(ω) =ε_1^-q_p, q(μ_j_1, μ_j_2)^q < ε_2^q,for j_1, j_2≥ J.Now we can take a subsequence of (μ_j)_j∈ℕ (not relabeled) such that for all j∈ℕ,σ_n-1({ω∈^n-1|_p^(^ω_♯μ_j, ^ω_♯μ_j+1)≥ 2^-j})≤ 2^-j.Setting E_p, q:=^n-1∖(⋂_ℓ=1^∞⋃_j=ℓ^∞{ω∈^n-1| _p^(^ω_♯μ_j, ^ω_♯μ_j+1)≥ 2^-j}),we haveσ_n-1(E_p, q)= 1-σ_n-1(⋂_ℓ=1^∞⋃_j=ℓ^∞{ω∈^n-1| _p^(^ω_♯μ_j, ^ω_♯μ_j+1)≥ 2^-j})=1by the Borel–Cantelli lemma, and we can see that the sequence (^ω_♯μ_j)_j∈ is Cauchy in _p^ whenever ω∈ E_p, q. Since _p^ is complete on 𝒫_p() by Theorem <ref>,for every ω∈ E_p, q,there exists a measure μ^ω∈𝒫_p() such that lim_j→∞_p^(^ω_♯μ_j, μ^ω)=0.By Theorem <ref>, (^ω_♯μ_j)_j∈ℕ weakly converges to μ^ω.Claim 2. (μ_j)_j∈ is a tight sequence.Proof of Claim 2. Since σ_n-1(E_p, q)=1, the set E_p,q must contain some linearly independent collection {ω_i}_i=1^n. For each 1≤ i≤ n, since (^ω_i_♯μ_j)_j∈ weakly converges the sequence is tight, hence for any fixed ε>0, there exist r_i, ε>0 such that ^ω_i_♯μ_j(ℝ∖[-r_i, ε,r_i, ε])<ε for all j∈ℕ.By the independence of {ω_i}_i=1^n, the set given by⋂_i=1^n{x∈^n|⟨ x, ω_i⟩∈ [r_i,ε, r_i,ε] },is compact and we see thatμ_j( ^n∖⋂_i=1^n{ x∈^n|⟨ x, ω_i⟩∈ [r_i,ε, r_i,ε] }) =μ_j(⋃_i=1^n{x∈^n|⟨ x, ω_i⟩∉ [-r_i, ε,r_i, ε]})≤∑_i=1^nμ_j({x∈^n|⟨ x, ω_i⟩∉ [-r_i, ε,r_i, ε]})=∑_i=1^n ^ω_i_♯μ_j(ℝ∖[-r_i, ε,r_i, ε])<nε,proving tightness.It follows from Claim 2 that there exists a probability measure μ∈𝒫(^n) and a subsequence (not relabeled) of (μ_j)_j∈ that weakly converges to μ.Then (^ω_♯μ_j)_j∈ℕ converges weakly to ^ω_♯μ.By uniqueness of weak limits, we must have μ^ω=^ω_♯μ for all ω∈ E_p,q,so in particular we have lim_j→∞_p^(^ω_♯μ_j, ^ω_♯μ)=0for ω∈ E_p, q. Claim 3.The pth moment of (μ_j)_j∈ℕis uniformly bounded and μ has finite pth moment.Proof of Claim 3. Let {ω_i}_i=1^n⊂ E_p,q be linearly independent.We denote by (ω_ii')_1≤ i,i'≤ nthe Gram matrix of {ω_i}_i=1^n,that is,(ω_ii')_1≤ i,i'≤ n=(⟨ω_i, ω_i' ⟩)_1≤ i,i'≤ n. Then (ω_ii')_1≤ i,i'≤ n is invertible and we denote by(ω^ii')_1≤ i,i'≤ n its inverse matrix.Since any norm on ℝ^n is equivalent to the Euclidean norm,for p and n, there exists C_n,p>0 such that ∑_i=1^n a_i^2 ≤(C_n,p∑_i=1^n a_i^p)^2/pfora_i∈ [0,∞), each1≤ i≤ n. For x∈ℝ^n, we have x=∑_i=1^n ∑_i'=1^n ω^ii'⟨ x,ω_i'⟩ω_iwhich yields |x|^2≤ n∑_i=1^n ( ∑_i'=1^n |ω^ii'||⟨ x,ω_i'⟩|)^2≤n^3max_1≤ i,i' ≤ n |ω^ii'|^2∑_i'=1^n |⟨ x,ω_i'⟩|^2 ≤ n^3max_1≤ i,i' ≤ n |ω^ii'|^2( C_n,p∑_i'=1^n |⟨ x,ω_i'⟩|^p )^2/p. This ensureslim sup_j→∞∫_ℝ^n|x|^pdμ_j(x) ≤n^3p/2max_1≤ i,i' ≤ n |ω^ii'|^plim sup_j→∞∫_ℝ^nC_n,p∑_i'=1^n | ⟨ x, ω_i'⟩|^p dμ_j(x) = n^3p/2 C_n,pmax_1≤ i,i' ≤ n |ω^ii'|^p∑_i'=1^n lim sup_j→∞∫_ℝ |t|^p d _♯^ω_i'μ_j(t)=n^3p/2 C_n,pmax_1≤ i,i' ≤ n |ω^ii'|^p∑_i'=1^n ∫_ℝ |t|^p d _♯^ω_i'μ(t)<∞,where the last equality follows from Theorem <ref>. Then by Remark <ref>we see that μ has finite pth moment. Claim 4. _p, q(μ_j, μ) →0 as j→∞.Proof of Claim 4. First suppose q=∞ and fix ε>0.Then there exists some J∈ℕ such that for σ_n-1-a.e. 
ω∈𝕊^n-1, all j≥ J, and any ℓ∈ℕ,_p^(^ω_♯μ_j,^ω_♯μ_j+ℓ) ≤_ω'∈^n-1_p^(^ω'_♯μ_j,^ω'_♯μ_j+ℓ) = _p,∞(μ_j, μ_j+ℓ)<ε.By (<ref>), we have_p^(^ω_♯μ_j,^ω_♯μ) = lim_ℓ→∞_p^(^ω_♯μ_j,^ω_♯μ_j+ℓ)<ε,then taking an essential supremum over ω shows_p,∞(μ_j, μ)<ε,proving the claim.If q<∞, then for all ω∈ E_p, q_p^( ^ω_♯μ_j, ^ω_♯μ) ≤( ∫_×| t-s|^pd(^ω_♯μ_j⊗^ω_♯μ)(t,s))^1/p =(∫_^n×^n| ⟨ x-y,ω⟩|^p d(μ_j⊗μ)(x, y))^1/p ≤2^1-1/p(∫_^n| x|^pdμ_j(x)+∫_^n|y|^p dμ( y))^1/p,which we have shown is bounded independent of j. Thus if q<∞, by the dominated convergence theorem together with (<ref>), we obtainlim_j→∞_p, q(μ_j, μ)^q= lim_j→∞∫_𝕊^n-1_p^(^ω_♯μ_j, ^ω_♯μ)^q dσ_n-1(ω)= 0as claimed. Since the original sequence is Cauchy in _p, q, convergence of a subsequence implies convergence of the full sequence to the same limit, and we are finished with the proof of completeness.§.§ Topological equivalenceWe now prove topological equivalence of _p^^n and _p, q. In <cit.>*Theorem 2.3 it is shown that _1^^n, _1, 1, and _1, ∞ are all topologically equivalent. By Lemma <ref>, any sequence that converges in _p^^n converges in _p, q to the same limit, for any p and q.Let (μ_j)_j∈ N be a convergent sequence in (𝒫_p(^n), _p,q) with limit μ. If E_p, q⊂^n-1 is the set from Claim 1 in the proof of Theorem <ref> there exists a linearly independent set {ω_i}_i=1^n⊂ E_p,q, and by Theorem <ref>,lim_r→∞lim sup_j→∞∫_ℝ∖ (-r, r) |t|^p d _♯^ω_iμ_j(t) =0for 1≤ i≤ n. SetA_i:={x∈ℝ^n ||⟨ x,ω_i⟩|≥|⟨ x,ω_i'⟩| for 1≤ i'≤ n}.For x∈A_i_0∖ B_r^^n(0) with 1≤ i_0≤ nand r>0,we observe from (<ref>) thatr^2<|x|^2≤ n^3 max_1≤ i,i'≤ n|ω^ii'|^2 ∑_i'=1^n |⟨ x,ω_i'⟩|^2 ≤ n^4 ( max_1≤ i,i'≤ n|ω^ii'|^2) |⟨ x,ω_i_0⟩|^2.This leads toA_i_0∖ B_r^^n(0) ⊂{x∈ℝ^n ||⟨ x,ω_i_0⟩|>ĉ^-1r}for r>0, where we setĉ=ĉ({ω_i}_i=1^n):=n^2·max_1≤ i,i' ≤ n |ω^ii'|, then using (<ref>),lim sup_j→∞∫_ℝ^n∖ B_r^^n(0)|x|^pdμ_j(x) ≤n^-p/2· C_n,pĉ^p·lim sup_j→∞∫_ℝ^n∖ B_r^^n(0)∑_i'=1^n | ⟨ x,ω_i'⟩|^p dμ_j(x)≤n^-p/2· C_n,pĉ^p·lim sup_j→∞∑_i=1^n ∫_A_i∖ B_r^^n(0)∑_i'=1^n | ⟨ x,ω_i'⟩|^p dμ_j(x)≤n^-p/2· C_n,pĉ^p·lim sup_j→∞∑_i=1^n ∫_A_i∖ B_r^^n(0)n | ⟨ x,ω_i⟩|^p dμ_j(x)≤n^1-p/2· C_n,pĉ^p·lim sup_j→∞∑_i=1^n ∫_ℝ∖ (-ĉ^-1r, ĉ^-1r) |t|^p d _♯^ω_iμ_j(t)0. Thus by Theorem <ref> we have the convergenceof (μ_j)_j∈ N in(𝒫_p(^n), _p^ℝ^n)to μ.§.§ Metric equivalencesIn this subsection we will prove that the sliced and classical Monge–Kantorovich spaces (for the nontrivial case n≥ 2) are not bi-Lipschitz equivalent when q<∞. In <cit.>*Theorem 2.3 it is shown that _1^^n and _1, ∞ are bi-Lipschitz equivalent: this proof strongly uses that the dual problem for _1^^n is over the class of 1-Lipschitz functions, and we were not able to obtain an analogous proof for p>1.We start by showing lower semi-continuity of _p, q under weak convergence. Although we do not explicitly use the following proposition, it is presented here as a result of independent interest.For sequences (μ_j)_j∈ℕ, (ν_j)_j∈ℕ in 𝒫_p(ℝ^n) weakly converging to μ_∞, ν_∞ respectively in 𝒫(ℝ^n),_p,q(μ_∞, ν_∞) ≤lim inf_j→∞_p,q(μ_j, ν_j).Moreover, if sup_j∈_p,q(δ_0^ℝ^n, μ_j)<∞,then there exists a weakly convergent subsequence with limit in 𝒫_min{p,q}(ℝ^n).Let (μ_j)_j∈ℕ, (ν_j)_j∈ℕ be sequences in 𝒫_p(ℝ^n) weakly converging to μ_∞,ν_∞ respectively in 𝒫(ℝ^n).Then for any ω∈^n-1,we see that (^ω_♯μ_j)_j∈ℕ and (^ω_♯ν_j)_j∈ converge weakly to^ω_♯μ_∞ and ^ω_♯ν_∞ respectively. 
By weak lower-semicontinuity of _p^ℝ, <cit.>*Remark 6.12, we have _p^ℝ(^ω_♯μ_∞, ^ω_♯ν_∞)≤lim inf_j→∞_p^ℝ(^ω_♯μ_j, ^ω_♯ν_j), then it follows (from Fatou's lemma when q<∞ and a simple calculation when q=∞) that lim inf_j→∞_p,q(μ_j, ν_j)≥lim inf_j→∞‖_p^ℝ(^∙_♯μ_j, ^∙_♯ν_j)‖_L^q(σ_n-1)≥‖_p^ℝ(^∙_♯μ_∞, ^∙_♯ν_∞)‖_L^q(σ_n-1)=_p,q(μ_∞, ν_∞). This proves the first assertion.

Next, let (μ_j)_j∈ℕ be a sequence in 𝒫_p(ℝ^n) such that _p,q(δ_0^ℝ^n, μ_j) is uniformly bounded from above. If p≤ q, then by Lemma <ref> with the monotonicity of _p,q, M_p,n·sup_j∈ℕ_p^ℝ^n(δ_0^ℝ^n, μ_j)=sup_j∈ℕ_p,p(δ_0^ℝ^n, μ_j)≤sup_j∈ℕ_p,q(δ_0^ℝ^n, μ_j)<∞. It follows from Remark <ref> that there exists a weakly convergent subsequence with limit μ_∞∈𝒫_p(ℝ^n). On the other hand, if q<p, then _p,q(δ_0^ℝ^n, μ_j)^q=∫_𝕊^n-1(∫_ℝ^n|⟨ x,ω⟩|^p dμ_j(x))^q/p dσ_n-1(ω)≥∫_𝕊^n-1∫_ℝ^n|⟨ x,ω⟩|^q dμ_j(x) dσ_n-1(ω)=_q,q(δ_0^ℝ^n, μ_j)^q=M_q,n^q·_q^ℝ^n(δ_0^ℝ^n, μ_j)^q by Jensen's inequality and Tonelli's theorem, and then Lemma <ref>. Again by Remark <ref>, there exists a weakly convergent subsequence with limit μ_∞∈𝒫_q(ℝ^n), completing the proof.

By following an idea similar to <cit.>*Theorem 2.3 (iii), we can show the claimed metric non-equivalences when q<∞. We first show estimates on the p-Monge–Kantorovich metrics between certain measures and empirical samples. Below, for 1≤ i≤ n, we will write ℋ^i for i-dimensional Hausdorff measure.

For any ω=cosθ e_1+sinθ e_2∈𝕊^1 with θ∈[0, π/4], the Borel probability measure ^ω_♯(4^-1ℋ^2|_[-1,1]^2) is absolutely continuous with respect to the one-dimensional Lebesgue measure, where the density f_θ is an even function on ℝ and f_θ(t):= 1/(2cosθ) if 0≤ t≤cosθ-sinθ; (cosθ+sinθ-t)/(2sin 2θ) if cosθ-sinθ<t≤cosθ+sinθ; 0 if t>cosθ+sinθ.

If θ=0, it is clear that f_θ=2^-1 1_[-1,1], thus we may assume θ>0. By symmetry we can see that f_θ will be even, and since θ∈(0,π/4] it is clear that ^ω_♯(4^-1ℋ^2|_[-1,1]^2)([cosθ+sinθ, ∞))=0. Now fix t∈(0, cosθ+sinθ). The density f_θ(t) can be calculated as one quarter of the length of the line segment {(x, y)∈[-1,1]^2 | xcosθ+ysinθ=t}. This line segment has one endpoint ((t-sinθ)/cosθ, 1) and the other endpoint (x_∗, y_∗) with either y_∗=-1 or x_∗=1. This yields f_θ(t)=1/4√((x_∗-(t-sinθ)/cosθ)^2+(y_∗-1)^2). If y_∗=-1, then x_∗cosθ-sinθ=t, or equivalently x_∗=(t+sinθ)/cosθ. However, since x_∗≤1, this can only happen when t≤cosθ-sinθ. Thus when cosθ-sinθ<t≤cosθ+sinθ, we have x_∗=1, and we calculate y_∗=(t-cosθ)/sinθ, consequently f_θ(t)= 1/(2cosθ) if 0<t≤cosθ-sinθ; (sinθ+cosθ-t)/(2sin 2θ) if cosθ-sinθ<t≤cosθ+sinθ, as claimed.

Fix 2≤ p<∞ and n≥2. Let us write π_ℝ^2: ℝ^n→ℝ^2×{0}⊂ℝ^n for the projection onto the first 2 coordinates. Also let μ:=4^-1ℋ^2|_[-1,1]^2×{0}, viewed as an element of 𝒫_p(ℝ^n), and let {X_j}_j∈ℕ be an i.i.d. collection of ℝ^n-valued random variables (defined on some probability space whose nature is immaterial), distributed according to μ. Then for any N∈ℕ and ω∈𝕊^n-1 with π_ℝ^2(ω)≠0, 𝔼(_p^ℝ(^ω_♯μ, 1/N∑_j=1^Nδ^ℝ_⟨ X_j,ω⟩)^p)≤|π_ℝ^2(ω)|^p·(5p)^p·(2^p+2)· N^-p/2≤(5p)^p·(2^p+2)· N^-p/2.

Fix ω∈𝕊^n-1 with π_ℝ^2(ω)≠0. By symmetry, it is sufficient to consider the case θ:=arctan(⟨ e_2,ω⟩/⟨ e_1,ω⟩)∈(0, π/4]. We also write a:=|π_ℝ^2(ω)| and ω(θ):=cosθ e_1+sinθ e_2∈𝕊^1, and define L^a:ℝ→ℝ by L^a(t):=at for t∈ℝ. For any Borel measurable ψ:ℝ→ℝ, by Lemma <ref> we have ∫_ℝψ d^ω_♯μ=∫_ℝ^nψ(⟨ x, π_ℝ^2(ω)⟩)dμ(x)=∫_ℝ^nψ(a⟨ x, a^-1π_ℝ^2(ω)⟩)dμ(x)=∫_ℝψ(at)f_θ(t)dt=∫_ℝψ dL^a_♯(^ω(θ)_♯(4^-1ℋ^2|_[-1,1]^2)), consequently ^ω_♯μ=L^a_♯(^ω(θ)_♯(4^-1ℋ^2|_[-1,1]^2)). Let f_ω denote the density of ^ω_♯μ with respect to the one-dimensional Lebesgue measure, that is, f_ω(t)=a^-1f_θ(a^-1t), and set F_ω(t):=^ω_♯μ((-∞,t])=^ω(θ)_♯(4^-1ℋ^2|_[-1,1]^2)((-∞,a^-1t]) for t∈ℝ. Since every random variable ⟨ X_j,ω⟩ is distributed according to ^ω_♯μ, we can apply <cit.>*Theorem 5.3 to obtain 𝔼(_p^ℝ(^ω_♯μ, 1/N∑_j=1^Nδ^ℝ_⟨ X_j,ω⟩)^p)≤(5p/√(N+2))^p·∫_ℝ[F_ω(t)(1-F_ω(t))]^p/2/f_ω(t)^p-1 dt. Since f_θ is an even function on ℝ, we have F_ω(t)=1-F_ω(-t) for t∈ℝ, hence ∫_ℝ[F_ω(s)(1-F_ω(s))]^p/2/f_ω(s)^p-1 ds=2∫_0^∞[F_ω(s)(1-F_ω(s))]^p/2/f_ω(s)^p-1 ds=2∫_0^a(cosθ+sinθ)[F_ω(s)(1-F_ω(s))]^p/2/f_ω(s)^p-1 ds. For a(cosθ-sinθ)<t≤ a(cosθ+sinθ), by Lemma <ref> we have 1-F_ω(t)=1-^ω(θ)_♯(4^-1ℋ^2|_[-1,1]^2)((-∞,a^-1t])=∫_a^-1t^∞ f_θ(s)ds=1/2·(sinθ+cosθ-a^-1t)^2/(2sin 2θ). This implies that, again using Lemma <ref>, ∫_ℝ[F_ω(s)(1-F_ω(s))]^p/2/f_ω(s)^p-1 ds≤2∫_0^a(cosθ-sinθ)(2acosθ)^p-1 ds+2∫_a(cosθ-sinθ)^a(cosθ+sinθ)2^-p/2a^p-1(2sin 2θ)^p/2-1(sinθ+cosθ-a^-1t) dt=2a(cosθ-sinθ)(2acosθ)^p-1+2^-p/2a^p(2sin 2θ)^p/2-1(2sinθ)^2≤ a^p(2^p+2), finishing the proof.

Let n≠1 and 1≤ q<∞.

(p≥2): Suppose p≥2. Let μ:=4^-1ℋ^2|_[-1,1]^2×{0} and again suppose {X_j}_j=1^N are i.i.d. random samples distributed according to μ; also write μ_N:=1/N∑_j=1^Nδ^ℝ^n_X_j. By using Lemma <ref>, when p≥ q we have by Minkowski's integral inequality 𝔼(_p,q(μ, μ_N)^p)=𝔼([∫_𝕊^n-1_p^ℝ(^ω_♯μ, ^ω_♯μ_N)^q dσ_n-1(ω)]^p/q)≤[∫_𝕊^n-1(𝔼(_p^ℝ(^ω_♯μ, ^ω_♯μ_N)^p))^q/p dσ_n-1(ω)]^p/q≤(5p)^p·(2^p+2)· N^-p/2, while if p<q, using Jensen's inequality followed by the monotonicity of _p^ℝ in p, then Tonelli's theorem, we have 𝔼(_p,q(μ, μ_N)^p)=𝔼([∫_𝕊^n-1_p^ℝ(^ω_♯μ, ^ω_♯μ_N)^q dσ_n-1(ω)]^p/q)≤(𝔼[∫_𝕊^n-1_p^ℝ(^ω_♯μ, ^ω_♯μ_N)^q dσ_n-1(ω)])^p/q≤(𝔼[∫_𝕊^n-1_q^ℝ(^ω_♯μ, ^ω_♯μ_N)^q dσ_n-1(ω)])^p/q=[∫_𝕊^n-1𝔼(_q^ℝ(^ω_♯μ, ^ω_♯μ_N)^q)dσ_n-1(ω)]^p/q≤[(5q)^q·(2^q+2)]^p/q· N^-p/2. Now suppose by contradiction that there is a C>0 such that _p^ℝ^n(μ, ν)≤ C_p,q(μ, ν) for all μ, ν∈𝒫_p(ℝ^n); then by the classical Ajtai–Komlós–Tusnády theorem (see for example <cit.>*(7)) we have for some constant C>0 and for all N, 𝔼(_p^ℝ^n(μ, μ_N)^p)≥ C(log N/N)^p/2. Thus combining the above with (<ref>) or (<ref>) implies, for some constant C_p,q,n>0 depending only on p, q, and n, C_p,q,n≥(log N)^p/2, which is a contradiction as N→∞, and the result is proved when p≥2.

(1≤ p<2): Now suppose 1≤ p<2 and let μ be the standard Gaussian on ℝ^n. By symmetry of μ we have ^ω_♯μ=^e_1_♯μ for ω∈𝕊^n-1, and ^e_1_♯μ(I)=∫_I(2π)^-1/2e^-t^2/2 dt for any Borel set I in ℝ. This means that ^ω_♯μ is the standard Gaussian on ℝ, thus by <cit.>*Corollary 6.10 there is some constant C_p>0 such that 𝔼(_p^ℝ(^ω_♯μ, ^ω_♯μ_N)^p)≤ C_p N^-p/2. At the same time, by <cit.>*Section 5 and (7) we have for some constant C>0, 𝔼(_p^ℝ^n(μ, μ_N)^p)≥ CN^-p/n if n≥3, and ≥ CN^-p/2(log N)^p/2 if n=2. Thus using the same calculations leading to (<ref>) and (<ref>) we obtain C_p,q,n≥ N^p/2-p/n if n≥3, and C_p,q,n≥(log N)^p/2 if n=2, for some constant C_p,q,n, which is again a contradiction as N→∞.

§.§ (Non-)existence of geodesics

In this subsection we discuss the geodesic properties of (𝒫_p(ℝ^n), _p,q). We now recall and/or prove a few basic properties of geodesics with respect to _p^ℝ^n. When n=1 and p>1, minimal geodesics are uniquely determined.

Let p>1. For μ_0,μ_1∈𝒫_p(ℝ), there exists a unique minimal geodesic in (𝒫_p(ℝ),_p^ℝ) from μ_0 to μ_1.
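For orientation, it is well known that on the real line this unique geodesic is obtained by linearly interpolating quantile functions: the measure with quantile function (1-τ)F_μ_0^-1+τ F_μ_1^-1 is μ(τ). For empirical measures with equally many atoms, this reduces to interpolating sorted samples, as in the following illustrative sketch (not part of the proofs):

```python
import numpy as np

def geodesic_1d(x0, x1, tau):
    # Displacement interpolation on the line between two empirical measures
    # with the same number of atoms: sort both samples (the monotone, hence
    # optimal, coupling) and interpolate matched atoms; the resulting atoms
    # define mu(tau) on the unique minimal geodesic.
    x0, x1 = np.sort(x0), np.sort(x1)
    return (1.0 - tau) * x0 + tau * x1
```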
When p>1, on a more general space Y a geodesic in (𝒫_p(Y),_p^Y) can be obtained as a family of pushforwards of what is known as a dynamic optimal coupling. More specifically, for a complete, separable geodesic space (Y,_Y), denote by 𝒢(Y) the set of minimal geodesics ρ:[0,1]→ Y. For τ∈ [0,1], define e^τ:𝒢(Y)→ Y by e^τ(ρ):=ρ(τ) (see also Definition <ref> below).

Let (Y,_Y) be a complete, separable geodesic space and p>1. Then, for μ_0,μ_1∈𝒫_p(Y), there exists Γ∈𝒫(𝒢(Y)) such that (e^0×e^1)_♯Γ is a p-optimal coupling between μ_0 and μ_1, and e^∙_♯Γ:[0,1]→𝒫(Y) forms a minimal geodesic from μ_0 to μ_1 in (𝒫_p(Y),_p^Y). Moreover, for τ_1,τ_2∈ [0,1], (e^τ_1×e^τ_2)_♯Γ∈Π(e^τ_1_♯Γ,e^τ_2_♯Γ) is a p-optimal coupling.

We now take a short detour to highlight the behavior of _p, q when p=1, whose geodesics behave differently. First, it is easy to see that for any complete, separable metric space (Y, _Y), linear combinations give _1^Y geodesics, unlike when p>1.

Let (Y,_Y) be a complete, separable metric space. For μ_0, μ_1 ∈𝒫_1(Y) the curve ((1-τ) μ_0+τμ_1)_τ∈ [0,1] is a minimal geodesic with respect to _1^Y.

Let γ be a 1-optimal coupling between μ_0 and μ_1. For τ, τ_1, τ_2∈ [0, 1] with τ_1≤τ_2, set
μ(τ):=(1-τ) μ_0+τμ_1, γ_τ_1,τ_2:=(Id_Y×Id_Y )_♯ ((1-τ_2)μ_0+ τ_1 μ_1)+ (τ_2-τ_1)γ.
Then μ:[0,1]→𝒫_1(Y) is a curve joining μ_0 to μ_1, and γ_τ_1,τ_2∈𝒫(Y × Y). For any Borel set A⊂ Y, we can see
γ_τ_1,τ_2(A× Y)=(1-τ_2)μ_0(A)+τ_1 μ_1(A)+(τ_2-τ_1)μ_0(A)=μ(τ_1)(A)
and similarly γ_τ_1,τ_2(Y× A)=μ(τ_2)(A). Thus γ_τ_1,τ_2∈Π(μ(τ_1),μ(τ_2)) and
_1^Y(μ(τ_1),μ(τ_2))≤∫_Y × Y_Y(t,s) dγ_τ_1,τ_2(t,s)=(τ_2-τ_1) ∫_Y × Y _Y(t, s) dγ(t,s) =(τ_2-τ_1)_1^Y(μ_0,μ_1),
proving the lemma.

With this lemma in hand, we can easily prove that (𝒫_1(ℝ^n), _1,q) is geodesic for all q.

For any μ_0, μ_1∈𝒫_1(ℝ^n), the curve ((1-τ)μ_0+τμ_1)_τ∈ [0, 1] is a minimal geodesic with respect to _1, q.

For μ_0, μ_1 ∈𝒫_1(ℝ^n) and τ∈ [0,1], set μ(τ):=(1-τ) μ_0 +τμ_1. Then, for ω∈𝕊^n-1,
(R^ω_♯μ(τ))_τ∈ [0,1]=((1-τ)R^ω_♯μ_0+τ R^ω_♯μ_1)_τ∈ [0,1]
is a minimal geodesic with respect to _1^ℝ by Lemma <ref>. Then, for τ_1, τ_2∈ [0,1], we have
_1,q (μ(τ_1),μ(τ_2)) = ‖_1^ℝ ( R^∙_♯μ(τ_1), R^∙_♯μ(τ_2) )‖_L^q(σ_n-1) =|τ_1-τ_2| ‖_1^ℝ ( R^∙_♯μ_0, R^∙_♯μ_1 )‖_L^q(σ_n-1) =|τ_1-τ_2| _1,q (μ_0,μ_1).

The situation when 1<p<∞ paints a stark contrast with the case p=1.

Let μ_0, μ_1∈𝒫_p(ℝ^n). Assume that there exists a minimal geodesic μ:[0,1]→ (𝒫_p(ℝ^n),_p,q) from μ_0 to μ_1. Then, for any partition {[τ_i, τ_i+1)}_i=0^N-1 of [0,1), we have
_p,q (μ_0,μ_1)= ∑_i=1^N _p,q (μ(τ_i-1),μ(τ_i))= ∑_i=1^N ‖_p^ℝ( R^∙_♯μ(τ_i-1), R^∙_♯μ(τ_i))‖_L^q(σ_n-1)≥‖∑_i=1^N _p^ℝ( R^∙_♯μ(τ_i-1), R^∙_♯μ(τ_i)) ‖_L^q(σ_n-1)≥‖_p^ℝ( R^∙_♯μ_0, R^∙_♯μ_1 )‖_L^q(σ_n-1) =_p,q (μ_0,μ_1),
where the first inequality is Minkowski's inequality and the second follows from the pointwise triangle inequality for _p^ℝ together with the monotonicity of the L^q(σ_n-1)-norm; consequently the inequalities above become equalities. For 1≤ q<∞, this implies that
∑_i=1^N _p^ℝ( R^ω_♯μ(τ_i-1), R^ω_♯μ(τ_i)) = _p^ℝ( R^ω_♯μ_0, R^ω_♯μ_1 ) for σ_n-1-a.e. ω,
and then for all ω∈𝕊^n-1 by the continuity in Lemma <ref> (<ref>). In particular, R^ω_♯μ(·):[0,1]→ (𝒫_p(ℝ), _p^ℝ) is the (unique by Lemma <ref>) minimal geodesic from R^ω_♯μ_0 to R^ω_♯μ_1 for every ω∈𝕊^n-1.
Regarding the case q=∞, notice thatthe continuity in Lemma <ref> (<ref>) together with the compactness of 𝕊^n-1 ensures the existence of ω_∗∈𝕊^n-1 such that _p,∞ (μ_0,μ_1) =_p^ℝ ( ^ω_∗_♯μ_0, ^ω_∗_♯μ_1 ).Then it holds for such ω_∗ that _p,∞ (μ_0,μ_1) =_p^ℝ ( ^ω_∗_♯μ_0, ^ω_∗_♯μ_1 )≤∑_i=1^N _p^ℝ( ^ω_∗_♯μ(τ_i-1), ^ω_∗_♯μ(τ_i)) ≤∑_i=1^N _p^ℝ( ^∙_♯μ(τ_i-1), ^∙_♯μ(τ_i)) _L^∞(σ_n-1)=_p,∞ (μ_0,μ_1).Thus from Lemma <ref>,for 1< p < ∞,if _p,∞ (μ_0,μ_1)=_p^ℝ ( ^ω_∗_♯μ_0, ^ω_∗_♯μ_1 ), for some ω_∗∈^n-1,then ^ω_∗_♯μ(·):[0,1]→ (𝒫_p(ℝ), _p^ℝ) is the unique minimal geodesic from R^ω_∗_♯μ_0 to R^ω_∗_♯μ_1. Now let us take :=2-√(2)∈ (0,1)and μ_0:=1/4(δ^ℝ^n_e_1+e_2+δ^ℝ^n_-e_1+e_2+δ^ℝ^n_-e_1-e_2+δ^ℝ^n_e_1-e_2),μ_1:=1/4(δ^ℝ^n_ e_1+δ^ℝ^n_ e_2+δ^ℝ^n_- e_1+δ^ℝ^n_- e_2). We first consider the case q<∞. In this case, forω=e_1, e_2, since (^ω_♯μ(τ))_τ∈[0,1] is a minimal geodesic in (𝒫_p(ℝ), _p^ℝ)and ^ω_♯μ(0)= ^ω_♯μ_0=1/2( δ^ℝ_1+δ^ℝ_-1),^ω_♯μ(1)= ^ω_♯μ_1= 1/4(δ^ℝ_+2δ^ℝ_0+δ^ℝ_-),we have^ω_♯μ(1/2)= 1/4(δ^ℝ_1+/2+δ^ℝ_1/2+δ^ℝ_-1/2+δ^ℝ_-1+/2).This implies that 1 = ^ω_♯μ (1/2) ({±1+/2, ±1/2}) =μ (1/2)( { x∈ℝ^n | ⟨ x, ω⟩∈{±1+/2, ±1/2}}), consequentlythe support of μ(1/2) is contained inE:= ⋂_i=1^2 {x∈ℝ^n | ⟨ e_i,x ⟩∈{±1+/2, ±1/2}}. Similarly,forω=(e_1+ e_2)/√(2), we find that ^ω_♯μ(1/2)= 1/4(δ^ℝ_2+/2√(2)+δ^ℝ_/2√(2) + δ^ℝ_-/2√(2)+δ^ℝ_-2+/2√(2))andthe support of μ(1/2) is contained inE':= {x∈ℝ^n | ⟨ e_1+e_2,x ⟩∈{±2+/2, ±/2}}.However this implies thatthe support of μ(1/2) is contained in E∩ E'=∅, which is clearly a contradiction. Thus there is no minimal geodesic in (𝒫_p(ℝ^n),_p,q) from μ_0 to μ_1.To handle the case q=∞, we only need to verify that e_1, e_2, and 2^-1/2(e_1+e_2) are maximizers of ω↦_p^(^ω_♯μ_0, ^ω_♯μ_1), and then we will reach the same contradiction as above. To this end we see thatfor ω∈𝕊^n-1, there exist θ∈ [0,2π) and ∈ [0,1] (possibly not unique) such that ⟨ e_1,ω⟩ =⟨e_1,ω(θ)⟩,⟨ e_2,ω⟩ =⟨e_1,ω(θ)⟩,where we set ω(θ):=cosθ e_1+sinθ e_2.This implies that_p^ℝ(R_♯^ωμ_0, R_♯^ωμ_1) = _p^ℝ( R_♯^ω(θ)μ_0 ,R_♯^ω(θ)μ_1) ≤_p^ℝ( R_♯^ω(θ)μ_0 ,R_♯^ω(θ)μ_1),thus we conclude that _p,∞(μ_0,μ_1) =sup_θ∈ [0, 2π)_p^ℝ(R_♯^ω(θ)μ_0, R_♯^ω(θ)μ_1).For θ∈ [0, 2π),from R_♯^ω(θ)μ_0 =1/4( δ^ℝ_cosθ+sinθ+δ^ℝ_-cosθ+sinθ+δ^ℝ_-cosθ-sinθ+δ^ℝ_cosθ-sinθ),R_♯^ω(θ)μ_1=1/4(δ^ℝ_cosθ+δ^ℝ_sinθ+δ^ℝ_-cosθ+δ^ℝ_-sinθ),we observe that _p^ℝ(R_♯^ω(θ)μ_0, R_♯^ω(θ)μ_1) = _p^ℝ(R_♯^ω(θ+π/2)μ_0,R_♯^ω(θ+π/2)μ_1 )= _p^ℝ( R_♯^ω(-θ)μ_0,R_♯^ω(-θ)μ_1 ),which implies that _p,∞(μ_0,μ_1) =max_θ∈ [0,π/4]_p^ℝ(R_♯^ω(θ)μ_0, R_♯^ω(θ)μ_1). For θ∈ [0,π/4], define w_p(θ) :=_p^ℝ(R_♯^ω(θ)μ_0, R_♯^ω(θ)μ_1)^p. Then we calculate w_p(0)=_p^ℝ(1/2( δ^ℝ_1+δ^ℝ_-1), 1/4(δ^ℝ_+2δ^ℝ_0+δ^ℝ_-))^p=1/2 (1-)^p+1/2,w_p(π/4) =_p^ℝ( 1/4( δ^ℝ_√(2)+2δ^ℝ_0+δ^ℝ_-√(2)), 1/2(δ^ℝ_/√(2)+δ^ℝ_-/√(2)) )^p=1/2(√(2)-/√(2))^p+1/2(/√(2))^p.Since 1-=√(2)-1=/√(2), √(2)-/√(2)=√(2)-2-√(2)/√(2)=1,we conclude w_p(0)=w_p(π/4).Note that, if we replacewith any r ∈ (0,1), then w_p(0)=w_p(π/4) holds if and only if r=in the case of p≠ 2, but w_p(0)=w_p(π/4) holds for any r∈(0,1) in the case of p=2. Next we will provew_p(θ)< w_p(0) for θ∈ (0,π/4). Indeed, for θ∈ (0,π/4), we have-cosθ-sinθ≤ -cosθ+sinθ≤cosθ-sinθ≤cosθ+sinθ , -cosθ≤ - sinθ≤sinθ≤cosθ,which yieldsw_p(θ)= 1/2{( cosθ+sinθ-cosθ)^p +|cosθ-sinθ-sinθ|^p }= 1/2[ { (1-)cosθ+sinθ}^p +|cosθ-(1+)sinθ|^p ],where we have used that θ↦ (1-)cosθ+sinθ is strictly increasing on (0,π/4).Let θ_∗:=arctan1/1+∈(0, π/4).On one hand, on (0,θ_∗),θ↦|cosθ-(1+)sinθ|=cosθ-(1+)sinθis strictly decreasing. On the other hand, on (θ_∗,π/4),θ↦|cosθ-(1+)sinθ|=-cosθ+(1+)sinθis strictly increasing. 
These imply thatw_p(θ) ≤ w_p(π/4)=w_p(0) for θ∈ (θ_∗, π/4). Now assume θ∈ (0, θ_∗) and setα(θ):=(1-)cosθ+sinθ,β(θ):=cosθ-(1+)sinθ.Thenα(θ), β(θ)>0 andw_p(θ)=(α(θ)^p+β(θ)^p)/2.We also find α'(θ) =-(1-)sinθ+cosθ=1/√(2)( α(θ)+β(θ)), β'(θ) =-sinθ-(1+)cosθ=-1/√(2)( 3α(θ)+β(θ)).Consequently, we havew_p'(θ)= {1/2(α(θ)^p+β(θ)^p)}' =p/2√(2){α(θ)^p-1( α(θ)+β(θ))-β(θ)^p-1( 3α(θ)+β(θ))}=p/2√(2)β(θ)^p F_p(α(θ)β(θ)^-1),where we define F_p:(0,∞)→ℝ byF_p():=^p+^p-1-3-1.Note that θ↦α(θ)β(θ)^-1 is strictly increasing on (0,θ_∗) and α(0)β(0)^-1=1-lim_θ↑θ_∗α(θ)β(θ)^-1=∞.A direct calculation givesF'_p()=p^p-1+(p-1)^p-2-3,F”_p()=(p-1)^p-3[p+(p-2)],and since p>1 we see F_p is strictly convex on (1,∞). We also have F_p()≤+1-3-1=-2<0 if ∈ (0,1], lim_→∞F_p()=∞,thus there exists a unique _p∈ (1,∞) such that F_p()<0 if s∈ (0,_p) and F_p()>0 if ∈ (_p,∞).Equivalently, there exists a unique θ_p∈ (0,θ_∗) such that w_p'(θ)<0 if θ∈ (0,θ_p) and w_p'()>0 if θ∈ (θ_p,θ_∗).Consequently, we have w_p(θ) <max{ w_p(0),w_p(θ_∗)}=w_p(0) for θ∈ (0,θ_∗). Thus we havew_p(θ)< w_p(0)for θ∈ (0,π/4).All together the above implies that for ω∈𝕊^n-1, _p,∞(μ_0,μ_1)= _p^ℝ(R_♯^ωμ_0, R_♯^ωμ_1)holds if and only ifω∈{± e_1, ± e_2,1/√(2) (e_1 ± e_2), 1/√(2) (-e_1 ± e_2) },completing the desired verification, hence the proof. §.§ DualityWe will now formulate a dual problem for _p, q for p≤ q in several steps. We recall some definitions and known results, these will be done here in the more general framework of metric spaces for use in later sections as well. Let (Y,_Y) be a metric space. For ϕ∈ C_b(Y), we setϕ_C_b(Y) :=sup_y∈ Y |ϕ(y)|. Then for a function ϕ on Y, a function c on Y× Y, and y∈ Y, define the c-transform of ϕ byϕ^c(s):=sup_t∈ Y(-c(t,s)-ϕ(t)) ∈ (-∞,∞].If a function can be written as the c-transform of another function, it is said to be c-convex.In most cases for this paper, we consider c=_Y^p for some 1≤ p<∞ on a metric space (Y, _Y).We now recall the classical duality for _p^Y, also known as Kantorovich duality. This will be the basis of a duality theory for _p, q. Let (Y,_Y) be a complete, separable metric space and 1≤ p<∞. For μ,ν∈𝒫_p(Y),_p^Y(μ,ν)^p=sup{ -∫_Yϕ dμ-∫_Yψ dν| (ϕ,ψ)∈ C_b(Y)^2, -ϕ(t)-ψ(s) ≤_Y(t,s)^pfor (t,s)∈ Y^2}=sup{ -∫_Yϕ dμ-∫_Yϕ^_Y^p dν| ϕ∈ C_b(Y) }. We start by giving an elementary inequality in a general setting. Let (Y, _Y) be a metric space. If ϕ, ψ∈ C_b(Y), we have ϕ^_Y^p, ψ^_Y^p∈ C_b(Y) withϕ^_Y^p_C_b(Y)≤ϕ_C_b(Y)and ϕ^_Y^p-ψ^_Y^p_C_b(Y)≤ϕ-ψ_C_b(Y).For any s∈ Y, we find that‖ϕ‖_C_b(Y)≥sup_t∈ Y(-ϕ(t))≥ϕ^_Y^p(s)≥ -_Y(s,s)^p-ϕ(s)=-ϕ(s)≥ -ϕ_C_b(Y), thus ϕ^_Y^p is bounded on Y.Next suppose (s_j)_j∈ℕ is a sequence in Y converging to some s_0.Fix ε>0. Then since ϕ^_Y^p is bounded from above, there exists t_0∈ Y such thatϕ^_Y^p(s_0)≤ -_Y(t_0, s_0)^p-ϕ(t_0)+ε thusϕ^_Y^p(s_0)-ϕ^_Y^p(s_j)≤ -_Y(t_0, s_0)^p+_Y(t_0, s_j)^p+ε≤ p·max{_Y(t_0, s_j)^p-1, _Y(t_0, s_0)^p-1}|_Y(t_0, s_j)-_Y(t_0, s_0)|+ε≤ p·max{_Y(t_0, s_j)^p-1, _Y(t_0, s_0)^p-1}_Y(s_j, s_0)+ε< 2εif j is sufficiently large. Similarly, for any j∈, we have ϕ^_Y^p(s_j)-ϕ^_Y^p(s_0)≤ pmax{_Y(t_j, s_j)^p-1, _Y(t_j, s_0)^p-1}_Y(s_j, s_0)+ε,where t_j∈ Y satisfiesϕ^_Y^p(s_j)≤ -_Y(t_j, s_j)^p-ϕ(t_j)+ε. 
We also find that _Y(t_j, s_0)^p ≤ 2^p-1(_Y(s_0, s_j)^p+_Y(t_j, s_j)^p)≤ 2^p-1(_Y(s_0, s_j)^p-ϕ^_Y^p(s_j)-ϕ(t_j)+ε) ≤ 2^p-1(_Y(s_0, s_j)^p+2‖ϕ‖_C_b(Y)+ε),which implies that{t_j}_j∈ is bounded, consequently for j sufficiently large,ϕ^_Y^p(s_j)-ϕ^_Y^p(s_0)≤ pmax{_Y(t_j, s_j)^p-1, _Y(t_j, s_0)^p-1}_Y(s_j, s_0)+ε<2ε.Thus we see ϕ^_Y^p is continuous.Now by definition, -_Y(t, s)^p≤ψ(t)+ψ^_Y^p(s)for t, s∈ Y,henceϕ^_Y^p(s)-ψ^_Y^p(s)=sup_t∈ Y(-_Y(t, s)^p-ϕ(t))-ψ^_Y^p(s)≤sup_t∈ Y(ψ(t)+ψ^_Y^p(s)-ϕ(t))-ψ^_Y^p(s)=ϕ-ψ_C_b(Y), and switching the roles of ϕ and ψ completes the proof of the lemma. The duality for _p, q(μ, ν) comes about in the natural way, by applying the classical Kantorovich duality to the pair ^ω_♯μ and ^ω_♯ν for each ω∈^n-1, then “gathering” the corresponding problems. However, we must be careful of the dependencies on the variable ω that arise.For μ, ν∈𝒫_p(^n) with 1≤ p<∞ and ε>0, we define a set-valued function F^μ,ν_ε from 𝕊^n-1 to 2^C_b(ℝ) byF^μ,ν_ε(ω):={ϕ∈ C_b(ℝ) |-∫_ϕ d^ω_♯μ-∫_ϕ^_ℝ^p d^ω_♯ν> _p^(^ω_♯μ, ^ω_♯ν)^p-ε}. Let us recall the following approximate selection property due to Michael. Let Ω be a paracompact space, and (X,·_X) a normed space. For a map F:Ω→ 2^X, if F(ω) is nonempty and convex for any ω∈Ω, and F is lower semi-continuous, that is,{ω∈Ω| F(ω) ∩ O} is open in Ω for any open set O in X, then, for any r>0, there exists a continuous map f:Ω→ X such that inf_x∈ F(ω)x-f(ω)_X<rfor every ω∈Ω. We will need to apply this approximate selection result to F^μ, ν_ε.Let μ, ν∈𝒫_p(^n) and ε>0. Then F^μ,ν_ε (ω) is convex and nonempty for each ω∈𝕊^n-1, and F^μ,ν_ε is lower semi-continuous.Since μ, ν∈𝒫_p(^n) and ε>0 are fixed, we write F in place of F^μ,ν_ε. It is clear from Theorem <ref> in the case of X=ℝ that F(ω)≠∅ for all ω∈^n-1.Next we show that F(ω) is convex for each ω∈𝕊^n-1. Let ϕ_0, ϕ_1∈ F(ω), and τ∈ (0, 1). Then, for any s∈, we find that [ (1-τ)ϕ_0+τϕ_1 ]^_^p(s) =sup_t∈[ (1-τ)(-| t-s|^p-ϕ_0(t)) +τ(-| t-s|^p-ϕ_1(t))] ≤sup_t∈[ (1-τ)(-| t-s|^p-ϕ_0(t))]+sup_t∈[ τ(-| t-s|^p-ϕ_1(t))] =(1-τ)ϕ_0^_^p(s)+τϕ_1^_^p(s).Hence, -∫_[(1-τ)ϕ_0+τϕ_1)]d^ω_♯μ -∫_[(1-τ)ϕ_0+τϕ_1]^_^p d^ω_♯ν ≥ (1-τ)(-∫_ϕ_0d^ω_♯μ-∫_ϕ_0^_^pd^ω_♯ν)+τ(-∫_ϕ_1d^ω_♯μ-∫_ϕ_1^_^pd^ω_♯ν) > _p^(^ω_♯μ, ^ω_♯ν)^p-ε,proving the convexity of F(ω).Finally, we prove the lower semi-continuity of F.Given ϕ∈ C_b() define the function G_ϕ: ^n-1→ [-∞,∞) byG_ϕ(ω):=-∫_ϕ d^ω_♯μ-∫_ϕ^_^p d^ω_♯ν-_p^(^ω_♯μ, ^ω_♯ν)^p.We see that ϕ∈ F(ω) if and only if G_ϕ(ω)>-ε,hence if G_ϕ is continuous on 𝕊^n-1and O⊂ C_b() is open,then we have{ω∈^n-1 | F(ω) ∩ O ≠∅} =⋃_ϕ∈ OG_ϕ^-1((-ε, ∞))which is then open as a union of open sets.In particular, this would prove F is lower semi-continuous, thus the rest of the proof is devoted to showing the continuity of G_ϕ. Let (ω_j)_j∈ be a sequence in 𝕊^n-1 converging to ω. For ϕ∈ C_b(), by Lemma <ref> we have that ϕ^_^p∈ C_b(ℝ).We observe from Lemma <ref> (<ref>) together with Theorem <ref> that lim_j→∞(-∫_ϕ d^ω_j_♯μ-∫_ϕ^_^p d^ω_j_♯ν)=-∫_ϕ d^ω_♯μ-∫_ϕ^_^p d^ω_♯ν.This with Lemma <ref> (<ref>) yields lim_j→∞G_ϕ(ω_j)=G_ϕ(ω),hence G_ϕ is continuous.Now we are in position to prove the claimed duality result.Let (Φ, Ψ)∈𝒜_p, and ζ∈𝒴_r'; recall that we now assume p≤ q and write r:=q/p. 
By the dual representation for L^r-norms (see <cit.>*Proposition 6.13, this is applicable for the case p=q as well since σ_n-1 is finite), and the Kantorovich duality Theorem <ref>,_p, q(μ, ν)^p=‖_p^(^∙_♯μ, ^∙_♯ν)^p‖_L^r(σ_n-1)≥∫_^n-1ζ(ω) ( -∫_ℝΦ_ω(t) d^ω_♯μ(t) -∫_ℝΨ_ω(s) d^ω_♯ν(s) )dσ_n-1(ω).Next we show the reverse inequality.Since the case of μ=ν is trivial, we assume μ≠ν.Fix ε>0, then by Lemma <ref>, we can apply Proposition <ref> to find f_∙∈ C( ^n-1; C_b() ) such thatinf_ϕ∈ F^μ,ν_ε(ω)‖ϕ-f_ω‖_C_b()<εfor all ω∈^n-1, in particular for each ω∈^n-1, there exists ϕ^ω∈ F^μ,ν_ε(ω)such that ‖ϕ^ω-f_ω‖_C_b()<ε. This with Lemma <ref> yields ‖ (ϕ^ω)^_^p-f_ω^_^p‖_C_b()<ε.From Lemma <ref> we also find f_∙^_^p∈ C(𝕊^n-1;C_b(ℝ)) hence (f_∙, f_∙^_^p)∈𝒜_p. Then we calculate-∫_^nf_ω(⟨ x,ω⟩)dμ(x)-∫_^n f_ω^_^p(⟨ y,ω⟩)dν(y)≥-∫_^nϕ^ω(⟨ x, ω⟩)dμ(x) -∫_^n(ϕ^ω)^_^p(⟨ y,ω⟩)dν(y) -2ε =-∫_ϕ^ω d^ω_♯μ-∫_(ϕ^ω)^_^p d^ω_♯ν-2ε ≥_p^(^ω_♯μ, ^ω_♯ν)^p-3ε,where the last inequality follows from ϕ^ω∈ F^μ,ν_ε(ω).Again by dual representation for the L^r-norm (<cit.>*Proposition 6.13),there exists some ζ̃∈ L^r'(σ_n-1) with ‖ζ̃‖_L^r'(σ_n-1)<1 such that∫_^n-1ζ̃_p^(^∙_♯μ, ^∙_♯ν)^p dσ_n-1 > ‖_p^(^∙_♯μ, ^∙_♯ν)^p‖_L^r(σ_n-1)-ε=_p, q(μ, ν)^p-ε,since _p^(^∙_♯μ, ^∙_♯ν)^p≥ 0 we may assume ζ̃≥ 0 holds σ_n-1-a.e.. If p<q we have r'<∞,then by density of C_b(^n-1) in L^r'(σ_n-1),we can find ζ∈ C_b(^n-1) such that‖ζ-ζ̃‖_L^r'(σ_n-1)<_p, q(μ, ν)^-pε,‖ζ‖_L^r'(σ_n-1)≤ 1,and we may assume ζ≥ 0 everywhere on 𝕊^n-1.If p=q we have r'=∞,thus by positivity of _p^(^ω_♯μ, ^ω_♯ν)^pwe can choose ζ≡ 1 on 𝕊^n-1.Then using Hölder's inequality, ∫_^n-1|(ζ- ζ̃) _p^(^∙_♯μ, ^∙_♯ν)^p| dσ_n-1≤‖ζ-ζ̃‖_L^r'(σ_n-1)_p^(^∙_♯μ, ^∙_♯ν)^p_L^r(σ_n-1) <ε,which yields by (<ref>),∫_^n-1ζ(ω)_p^(^ω_♯μ, ^ω_♯ν)^pdσ_n-1(ω) > _p, q(μ, ν)^p-2ε,and ζ_p^(^∙_♯μ, ^∙_♯ν)^p∈ L^1(σ_n-1).Now for any 0<δ<1, we have| [ (1-δ)ζ+δ] _p^(^∙_♯μ, ^∙_♯ν)^p|≤ ( ζ+1)_p^(^∙_♯μ, ^∙_♯ν)^p∈ L^1(σ_n-1). Thus by dominated convergence, we may replace ζ by (1-δ)ζ+δ for some sufficiently small δ>0 to assume that (<ref>) remains valid with ζ>0 on ^n-1. In particular we have ζ∈𝒴_r'.Multiplying (<ref>) by ζ then integrating, we find∫_^n-1ζ(ω) (-∫_f_ω d^ω_♯μ -∫_ f_ω^_^pd^ω_♯ν)dσ_n-1(ω)=∫_^n-1ζ(ω) (-∫_^nf_ω(⟨ x,ω⟩)dμ(x)-∫_^n f_ω^_^p(⟨ y,ω⟩)dν(y))dσ_n-1(ω)≥∫_^n-1ζ(ω)_p^(^ω_♯μ, ^ω_♯ν)^pdσ_n-1(ω)-3ε=_p, q(μ, ν)^p-5ε,where in the third line we have used that ‖ζ‖_L^1(σ_n-1)≤‖ζ‖_L^r'(σ_n-1)≤ 1. Since ε>0 is arbitrary, this finishes the proof.§ DISINTEGRATED MONGE–KANTOROVICH METRICS In this section, we prove various properties of the the disintegrated Monge–Kantorovich metrics as claimed in Theorem <ref>. For the remainder of the paper, (Y, _Y) and (Ω, _Ω) will be complete, separable metric spaces and σ∈𝒫(Ω), with other conditions added as necessary. As before, we start with some preliminaries.Recall that for 𝔪∈𝒫^σ(Y×Ω), we write𝔪=𝔪^∙⊗σwhere 𝔪^ω∈𝒫(Y) for each ω∈Ω, following from thm: disintegration, and we fix some point y_0∈ Y. For ease of notation we will write _y_0(t):=_Y(y_0, t) for t∈ Y.For any 𝔪∈𝒫^σ(Y×Ω), since (t,ω) ↦_y_0(t)^p is a nonnegative Borel function on Y×Ω, we have∫_Y×Ω_y_0(t)^pd𝔪(t, ω) = ∫_Ω∫_Y _y_0(t)^pd𝔪^ω(t) dσ(ω) = ∫_Ω_p^Y(δ_y_0^Y,𝔪^ω)^p dσ(ω).Thus for 𝔪,𝔫∈𝒫^σ_p, 1(Y×Ω), we have _p^Y(δ_y_0^Y,𝔪^ω), _p^Y(δ_y_0^Y,𝔫^ω) ∈ [0,∞) for σ-a.e. ω, consequently _p^Y(𝔪^ω, 𝔫^ω) ∈ [0,∞) for σ-a.e. 
ω.We also see that for 𝔪∈𝒫^σ_p,q(Y×Ω),we have_p,q(δ_y_0^Y⊗σ, 𝔪) =‖_p^Y(δ_y_0^Y, 𝔪^∙ )‖_L^q(σ)<∞.In a similar fashion to the sliced Monge–Kantorovich metrics, a simple application of Hölder's inequality shows thatp≤ p', q≤ q' ⇒_p, q≤_p', q', 𝒫^σ_p,q(Y×Ω) ⊂𝒫^σ_p',q'(Y×Ω). Finally, we will denote by ℬ_σ the completion of the Borel σ-algebra on Ω with respect to σ. §.§ Complete, separable, metric.As with the sliced metrics, it is easy to show _p, q is a metric, and the proof of completeness is somewhat similar. However, separability will be a more involved proof, as there is no direct comparison between _p, q and the usual Monge–Kantorovich metrics (however, note Lemma <ref> below). First recall the following definitions.If X is any space, we say a map f: Ω→ X is simple if there are finite collections {E_i}_i=1^I⊂ℬ_σ and {x_i}_i=1^I⊂ X, such that the E_i form a partition of Ω and f(ω)=x_i whenever ω∈ E_i.We will denote such a function by f=∑_i=1^I1_E_ix_i. If (X, _X) is a metric space, a map f: Ω→ X is σ-strongly measurable if there exists a sequence of simple functions that converges σ-a.e. pointwise to f. Also if Z is any measurable space with a σ-algebra ℱ_Z, we will say a map f: Z→ X is ℱ_Z-measurable if f^-1(O)∈ℱ_Z for any open set O⊂ X. If Z is equipped with a topology and ℱ_Z is the Borel σ-algebra on Z, then we simply say f is Borel measurable. By <cit.>*Theorem 1 if (X, _X) is separable, a ℬ_σ-measurable map f: Ω→ X is σ-strongly measurable. In the converse direction, since the inverse image of any set under a simple function is a finite union of elements of ℬ_σ, a σ-strongly measurable map is always ℬ_σ-measurable (regardless of separability of the range).We will write L^0(σ; X) for the collection of maps from Ω to X which are strongly σ-measurable.Note the above definitions do not actually require any vector space structure on the range X, since the sets E_j in the definition of simple are assumed mutually disjoint.We are now ready to prove that (𝒫^σ_p, q(Y×Ω), _p, q) is a complete, separable (for q<∞) metric space.To prove separability when q<∞,we will adapt the arguments in <cit.>*Theorem 1 and <cit.>*Remark 1.2.20.In doing so, we will not actually need any particular properties of the _p^Y metric, and the proof of separability below could be completed for any collection of maps f: Ω→ X where (X, _X) is a separable metric space such that the functions _X(x_0, f(·)) are ℬ_σ-measurable for every x_0∈ X;additionally the only properties of (Ω, σ) that are used are that σ is finite and ℬ_σ is generated by a countable algebra. We note that𝒫^σ_p, ∞(Y×Ω) is not separable with respect to _p, ∞ for any p if Y is not a single point and σ is such that there exists an uncountable family {E_a}_a∈ A of Borel sets such that σ(E_a_1∖ E_a_2)>0 for all distinct a_1, a_2∈ A. Indeed, fix two distinct points y_1,y_2∈ Y and let 𝔪_a:=(1_E_aδ_y_1^Y+1_Ω∖ E_aδ_y_2^Y). Then {𝔪_a}_a∈ A is uncountable but _p, ∞(𝔪_a_1, 𝔪_a_2)=_Y(y_1, y_2)>0 whenever a_1≠ a_2. As an example, if Y is a Riemannian manifold and σ is absolutely continuous with respect to the Riemannian volume, then for the sets E_a one can take geodesic balls of sufficiently small radius, centered at an uncountable collection of points. We are now ready to prove the claims in Theorem <ref> (<ref>).(Metric):Let 𝔪, 𝔫∈𝒫^σ_p, q(Y×Ω).From the definition, it is immediate that _p, q(𝔫, 𝔪) =_p, q(𝔪, 𝔫)=‖_p^Y(𝔪^∙,𝔫^∙)‖_L^q(σ)≥ 0,and equality holds if and only if_p^Y(𝔪^∙,𝔫^∙)=0, σ-a.e.. 
Since _p^Y is a metric on 𝒫_p(Y), _p, q(𝔪, 𝔫)=0 if and only if𝔪^ω=𝔫^ω for σ-a.e. ω,that is, 𝔪=𝔫 by thm: disintegration.For the triangle inequality,using the triangle inequality for _p^Y together with Minkowski's inequality, we have for𝔪_1,𝔪_2,𝔪_3∈𝒫^σ_p, q(Y×Ω),_p, q(𝔪_1, 𝔪_3) =_p^Y(𝔪^∙_1, 𝔪^∙_3)_L^q(σ)≤_p^Y(𝔪^∙_1, 𝔪^∙_2)+_p^Y(𝔪^∙_2, 𝔪^∙_3)_L^q(σ)≤_p^Y(𝔪^∙_1, 𝔪^∙_2)_L^q(σ)+_p^Y(𝔪^∙_2, 𝔪^∙_3)_L^q(σ)=_p, q(𝔪_1, 𝔪_2)+_p, q(𝔪_2, 𝔪_3). By the above triangle inequality, we also see_p, q(𝔪, 𝔫) ≤_p, q(𝔪, δ_y_0^Y⊗σ)+_p, q(𝔫, δ_y_0^Y⊗σ)<∞for all 𝔪, 𝔫∈𝒫^σ_p, q(Y×Ω).(Separability):Assume q<∞ and fix y_0∈ Y. Let {μ_j}_j∈ℕ be a _p^Y-dense subset in 𝒫_p(Y) (recall (𝒫_p(Y), _p^Y) is separable, see Theorem <ref>). For j, ℓ∈ let us define the sets A_j, ℓ⊂𝒫_p(Y) byA_1, ℓ: =B^_p^Y_ℓ^-1(μ_1), A_j, ℓ:=B^_p^Y_ℓ^-1(μ_j)∖⋃_i=1^j-1B^_p^Y_ℓ^-1(μ_i), j>1,then we see the A_j, ℓ are mutually disjoint and ⋃_j=1^∞ A_j, ℓ=𝒫_p(Y).We also define the collection {μ_j, ℓ}_j∈ℕ for each ℓ∈ℕ by choosing some μ_j, ℓ∈ A_j, ℓ ifA_j, ℓ≠∅ and otherwise μ_j, ℓ:=δ_y_0^Y. Since (Ω, _Ω) is separable, there exists a countable algebra 𝒬⊂ 2^Ω which generates the Borel σ-algebra on Ω.Then we claim that𝒟 :={(∑_i=1^I 1_Q_iμ_j+1_Y∖⋃_i=1^IQ_iδ_y_0^Y)⊗σ| {Q_i}_i=1^I⊂𝒬, {μ_i}_i=1^I⊂{μ_j, ℓ}_j∈ forI, ℓ∈}is _p, q-dense in 𝒫^σ_p, q(Y×Ω).Since 𝒟 is countable this will prove separability. To this end, fix 𝔪=𝔪^∙⊗σ∈𝒫^σ_p, q(Y×Ω), then by <cit.>*Lemma 12.4.7,the function f_j:Ω→defined by f_j(ω):=_p^Y(μ_j, 𝔪^ω)is Borel measurable for each j∈ℕ. Forj,ℓ∈,define the Borel setE_j, ℓ:=f_j^-1([0, ℓ^-1))∩(⋂_i=1^j-1 f_i^-1([ℓ^-1, ∞))).We note that the E_j, ℓ are mutually disjoint for ℓ∈ℕ fixed, and also 𝔪^ω∈ A_j, ℓ ω∈ E_j, ℓ.Thus {E_j, ℓ}_j∈ℕ is a covering of Ω consisting of mutually disjoint sets.For each ℓ∈ℕ, since ‖_p^Y(δ_y_0^Y, 𝔪^∙)‖_L^q(σ)^q=_p, q(𝔪, δ_y_0^Y⊗σ)^q<∞, there exists J_ℓ∈ such that∫_Ω∖⋃_j=1^J_ℓE_j, ℓ_p^Y(δ_y_0^Y, 𝔪^ω)^qdσ(ω)<ℓ^-q.Now for ω∈Ω and ℓ∈, define the measures 𝔪_ℓ^ω∈𝒫_p(Y) by𝔪_ℓ^ω:=∑_j=1^J_ℓ1_E_j, ℓ(ω)μ_j, ℓ+1_Ω∖⋃_j=1^J_ℓE_j, ℓ(ω) δ_y_0^Y,for any Borel A⊂Ω the functions ω↦𝔪_ℓ^ω(A) are clearly Borel measurable, hence we can define 𝔪_ℓ:=𝔪_ℓ^∙⊗σ∈𝒫^σ(Y×Ω), and since each μ_j, ℓ has finite pth moment we find 𝔪_ℓ∈𝒫^σ_p, q(Y×Ω). Then we calculate, using (<ref>), _p, q(𝔪_ℓ, 𝔪) =(∑_j=1^J_ℓ∫_E_j, ℓ_p^Y(μ_j, ℓ, 𝔪^ω)^qdσ(ω)+∫_Ω∖⋃_j=1^J_ℓE_j, ℓ_p^Y( δ_y_0^Y, 𝔪^ω)^qdσ(ω))^1/q≤(2^qℓ^-q∑_j=1^J_ℓσ(E_j, ℓ)+ℓ^-q)^1/q≤ (1+2^q)^1/qℓ^-1.Fix ε>0, and let ℓ_0∈ be such that _p, q(𝔪_ℓ_0, 𝔪)<ε. 
We now construct an element of 𝒟 approximating 𝔪_ℓ_0.Let M:=max_1≤ i, i'≤ J_ℓ_0(_p^Y(μ_i, ℓ_0, μ_i', ℓ_0)^q+_p^Y(δ_y_0^Y, μ_i', ℓ_0)^q).By <cit.>*Lemma A.1.2, for each 1≤ i≤ J_ℓ_0 there is Q_i∈𝒬 so thatσ(Q_iΔ E_i, ℓ_0)<ε^q/(M J_ℓ_0), using these define Q_1:=Q_1,Q_i:=Q_i∖⋃_i'=1^i-1Q_i'for 2≤ i≤ J_ℓ_0.Then we find, for each 1≤ i≤ J_ℓ_0,∫_Q_i_p^Y(𝔪_ℓ_0^ω, μ_i, ℓ_0)^qdσ(ω)=∫_Q_i∩ E_i, ℓ_0_p^Y(𝔪_ℓ_0^ω,μ_i, ℓ_0)^qdσ(ω) +∫_Q_i∖ E_i, ℓ_0_p^Y(𝔪_ℓ_0^ω, μ_i, ℓ_0)^qdσ(ω)=∫_Q_i∩ E_i, ℓ_0_p^Y(μ_i, ℓ_0,μ_i, ℓ_0)^qdσ(ω)+∫_Q_i∖ E_i, ℓ_0_p^Y(𝔪_ℓ_0^ω,μ_i, ℓ_0)^qdσ(ω)≤ Mσ(Q_i∖ E_i, ℓ_0)≤ Mσ(Q_iΔ E_i, ℓ_0)<ε^q/J_ℓ_0and∫_Ω∖⋃_i=1^J_ℓ_0Q_i_p^Y(δ_y_0^Y, 𝔪_ℓ_0^ω)^qdσ(ω)=∫_Ω∖⋃_i=1^J_ℓ_0Q_i_p^Y(δ_y_0^Y, 𝔪_ℓ_0^ω)^qdσ(ω)=∫_Ω∖⋃_i=1^J_ℓ_0(Q_i∪ E_i, ℓ_0)_p^Y(δ_y_0^Y, 𝔪_ℓ_0^ω)^qdσ +∫_(⋃_i=1^J_ℓ_0E_i, ℓ_0)∖(⋃_i=1^J_ℓ_0Q_i)_p^Y(δ_y_0^Y, 𝔪_ℓ_0^ω)^qdσ(ω)=∫_Ω∖⋃_i=1^J_ℓ_0(Q_i∪ E_i, ℓ_0)_p^Y(δ_y_0^Y, δ_y_0^Y)^qdσ+∫_(⋃_i=1^J_ℓ_0E_i, ℓ_0)∖(⋃_i=1^J_ℓ_0Q_i)_p^Y(δ_y_0^Y, 𝔪_ℓ_0^ω)^qdσ(ω)≤∫_⋃_i=1^J_ℓ_0(E_i, ℓ_0∖Q_i)_p^Y(δ_y_0^Y, 𝔪_ℓ_0^ω)^qdσ(ω)≤ M∑_i=1^J_ℓ_0σ(E_i, ℓ_0∖Q_i) ≤ M∑_i=1^J_ℓ_0σ(E_i, ℓ_0ΔQ_i)<ε^q.Thus if we take 𝔫^∙:=∑_i=1^J_ℓ_01_Q_iμ_i, ℓ_0+1_Ω∖⋃_i=1^J_ℓ_0Q_iδ_y_0^Y,we find for 𝔫:=𝔫^∙⊗σ∈𝒟, using (<ref>) and (<ref>),_p, q(𝔫, 𝔪) ≤_p, q(𝔫, 𝔪_ℓ_0)+_p, q(𝔪_ℓ_0, 𝔪)≤(∑_i=1^J_ℓ_0∫_Q_i_p^Y(μ_i, ℓ_0, 𝔪_ℓ_0^ω)^qdσ(ω)+∫_Ω∖⋃_i=1^J_ℓ_0Q_i_p^Y(δ_y_0^Y, 𝔪_ℓ_0^ω)^qdσ(ω))^1/q+ε<(1+2^1/q)ε,finishing the proof of separability.(Completeness): Now assume (Ω, _Ω) and (Y, _Y) are locally compact. Let (𝔪_j)_j∈ be a Cauchy sequence in (𝒫^σ_p, q(Y×Ω), _p,q). By an argument similar to Claim 1 in the proof of Theorem <ref> (<ref>), where we replace R^ω_♯μ_j with 𝔪_j^ω and _p^ by _p^Y,there exists E_p, q⊂Ω such thatσ(E_p,q)=1and (𝔪^ω)_j∈ is Cauchy in _p^Y for any ω∈ E_p, q.Since _p^Y is complete on 𝒫_p(Y),for every ω∈ E_p, q,there exists 𝔪^ω∈𝒫_p(Y) such that lim_j→∞_p^Y(𝔪_j^ω, 𝔪^ω)=0. Then, for ϕ∈ C_b(Y×Ω), it follows from Theorem <ref> that ∫_Y ϕ(t,ω)d𝔪^ω(t) =lim_j→∞∫_Y ϕ(t,ω)d𝔪^ω_j(t),which is a ℬ_σ-measurable function by thm: disintegration. Since ϕ is bounded and each 𝔪_j^ω and σ are probability measures, the dominated convergence theorem yields ∫_Ω∫_Y ϕ(t,ω)d𝔪^ω(t) dσ(ω) =lim_j→∞∫_Ω∫_Y ϕ(t,ω)d𝔪_j^ω(t) dσ(ω). Now we define a functional L on C_0(Y×Ω) byL(ϕ):= ∫_Ω∫_Y ϕ(t,ω)d𝔪^ω(t) dσ(ω),which is linear, and if ϕ∈ C_0(Y×Ω) is nonnegative,then L(ϕ) ≥ 0. Since Y×Ω is locally compact, <cit.>*Theorem 7.11.3 implies thatL can be identified with a Borel measure on Y×Ω with values in [0,∞], which we denote by 𝔪. The uniqueness in thm: disintegration implies that 𝔪=𝔪^∙⊗σ.Taking a sequence of nonnegative functions in C_0(Y×Ω) increasing to 1_Y×Ω, by monotone convergence L(1_Y×Ω)=1, thus 𝔪 is a Borel probability measure on Y×Ω,this with (<ref>) ensures 𝔪∈𝒫^σ(Y×Ω).Now fix ε>0, then there exists j_0 such that for all j, ℓ≥ j_0, _p, q(𝔪_j, 𝔪_ℓ)<ε. Then using Fatou's lemma when q<∞ and directly by definition for q=∞, and recalling (<ref>),‖_p^Y(𝔪_j^∙, 𝔪^∙)‖_L^q(σ)= ‖lim inf_ℓ→∞_p^Y(𝔪_j^∙, 𝔪_ℓ^∙)‖_L^q(σ)≤lim inf_ℓ→∞‖_p^Y(𝔪_j^∙, 𝔪_ℓ^∙)‖_L^q(σ)<ε,which ensures _p^Y(𝔪_j^∙, 𝔪^∙)∈ L^q(σ).Since we have_p^Y(δ_y_0^Y, 𝔪^ω) ≤_p^Y(δ_y_0^Y, 𝔪_j^ω)+ _p^Y(𝔪_j^ω, 𝔪^ω) for ω∈ E_p,qand σ(E_p,q)=1, we conclude _p^Y(δ_y_0^Y, 𝔪^∙)∈ L^q(σ), that is, 𝔪∈𝒫^σ_p,q(Y×Ω).It also follows from (<ref>) that lim_j→∞_p,q(𝔪_j, 𝔪) =lim_j→∞‖_p^Y(𝔪_j^∙,𝔪^∙)‖_L^q(σ) =0for the particular chosen subsequence. Since the original sequence is Cauchy, the full sequence also converges in _p, q to 𝔪.This proves the completeness. 
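Before moving on, the metric just shown to be complete is straightforward to evaluate in simple discrete situations. The following is a small numerical sketch (illustration only, not used in any proof) under the assumptions that Y=ℝ, σ is finitely supported, and the fibers are equal-size empirical measures, so that each fiberwise one-dimensional distance is computed by the monotone (sorted) pairing; the helper names are ours.

```python
import numpy as np

def w_p_1d(x, y, p):
    # One-dimensional Monge-Kantorovich distance between two equal-size
    # empirical measures on R: the optimal coupling is the monotone
    # (sorted) pairing, so W_p reduces to an order-statistics average.
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

def d_pq(fibers_m, fibers_n, sigma, p, q):
    # fibers_m[i], fibers_n[i]: samples from the disintegrations over the
    # i-th atom of a finitely supported sigma with weight sigma[i];
    # returns the L^q(sigma)-norm of the fiberwise W_p distances.
    w = np.array([w_p_1d(a, b, p) for a, b in zip(fibers_m, fibers_n)])
    if np.isinf(q):
        return w.max()  # essential supremum over the atoms of sigma
    return np.sum(sigma * w ** q) ** (1.0 / q)

rng = np.random.default_rng(0)
sigma = np.full(8, 1 / 8)                          # uniform sigma on 8 atoms
m = [rng.normal(0.0, 1.0, 500) for _ in range(8)]  # fibers of m
n = [rng.normal(1.0, 1.0, 500) for _ in range(8)]  # fibers of n
print(d_pq(m, n, sigma, p=2, q=2))                 # ~1 = W_2(N(0,1), N(1,1))
```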
§.§ Existence of geodesicsIn contrast to Theorem <ref> (<ref>) in the case of the sliced metrics, (𝒫^σ_p, q(Y×Ω), _p,q) is a geodesic space. We will take _p^Y geodesics connecting each pair 𝔪_1^ω and 𝔪_2^ω, then use these to construct a geodesic for _p, q. However, in order to do so we must make sure the dependence on ω is ℬ_σ-measurable, hence we will have to use the Kuratowski and Ryll-Nardzewski measurable selection theorem which we recall first.Let (X,ℱ_X) be a measurable space and (Y,_Y) be a metric space.A set-valued function F from X to 2^Y is said to beℱ_X-weakly measurable if{ x∈ X | F(x) ∩ O ≠∅}∈ℱ_Xfor any open O⊂ Y. By <cit.>*Corollary 1 it is equivalent to replace “open” by “closed” in the above definition; it is then clear that if Y is σ-compact then it is also equivalent to replace “open” by “compact.”Let (X,ℱ_X, μ) be a measure space and(Y,_Y) a complete, separable metric space.For a map F:X→ 2^Y, if F(x) is nonempty and closed for μ-a.e. x∈ X, andF is ℱ_X-weakly measurable,then there exists an ℱ_X-measurable map f_∙:X→ Y such that f_x∈ F(x) for μ-a.e. x∈ X. Such a map is called a measurable selection of F. We also introduce the following space which will be used in the proof of Theorem <ref> (<ref>). Suppose (Y, _Y) is complete, separable, and a geodesic space. We let 𝒢(Y) denote the space of minimal geodesics with respect to _Y, and define the metric _𝒢(Y) on 𝒢(Y) by _𝒢(Y)(ρ_1, ρ_2): =sup_τ∈ [0, 1]_Y(ρ_1(τ), ρ_2(τ)).Also for τ∈ [0, 1] the evaluation map e^τ: 𝒢(Y)→ Y is defined by e^τ(ρ):=ρ(τ).We can see that (𝒢(Y), _𝒢(Y)) is separable and complete since it is a closed subset of C([0, 1]; Y) with the same metric _𝒢(Y), which is also separable by <cit.>*Theorem 2.4.3. Let (Y,_Y) be a complete, separable, and geodesic space. Then for any fixed τ∈ [0,1], the map e^τ_♯:𝒫(𝒢(Y))→𝒫(Y) is both weakly and _p^𝒢(Y)-to-_p^Y continuous. In particular, if (Γ_j)_j∈ is convergent with respect to _p^𝒢(Y), the sequence(e^τ_♯Γ_j)_j∈ℕ is convergent with respect to _p^Y. Let (Γ_j)_j∈ℕ be a weakly convergent sequence in 𝒫(𝒢(Y)) with limit Γ.For ϕ∈ C_b(Y), we have ϕ∘e^τ∈ C_b(𝒢(Y)) and lim_j→∞∫_Y ϕ(t) d e^τ_♯Γ_j(t) = lim_j→∞∫_𝒢(Y)ϕ(e^τ(ρ)) d Γ_j(ρ) = ∫_𝒢(Y)ϕ(e^τ(ρ)) d Γ(ρ) =∫_Y ϕ(t) d e^τ_♯Γ(t),which shows weak continuity of e^τ_♯. Now if (Γ_j)_j∈ converges to Γ in _p^𝒢(Y), the above implies (e^τ_♯Γ_j)_j∈ converges weakly to e^τ_♯Γ. Then if ρ_0∈𝒢(Y) is identically y_0, by Theorem <ref>lim sup_j→∞∫_Y∖ B_r^Y(y_0)_Y(y_0, t)^pde^τ_♯Γ_j(t)=lim sup_j→∞∫_𝒢(Y)∖ B_r^𝒢(Y)(ρ_0)_Y(y_0, ρ(τ))^pdΓ_j(ρ)≤lim sup_j→∞∫_𝒢(Y)∖ B_r^𝒢(Y)(ρ_0)_𝒢(Y)(ρ_0, ρ)^pdΓ_j(ρ) 0,hence by another application of Theorem <ref> we see (e^τ_♯Γ_j)_j∈ℕ converges in _p^Y.We are now ready to prove Theorem <ref> (<ref>). Note that an infinite dimensional Banach space is a complete, separable geodesic space that is ball convex with respect to a point, but is not locally compact, hence both conditions are necessary on (Y, _Y).Recall we assume that (Ω, _Ω) is locally compact and (Y, _Y) is a locally compact geodesic space that is ball convex with respect to some y_0∈ Y. If p=1, by Lemma <ref> it is easy to see that ((1-τ)𝔪_0+τ𝔪_1)_τ∈ [0,1] is a minimal geodesic with respect to _1, q for any 1≤ q≤∞, thus we assume p>1. 
For 𝔪_0, 𝔪_1∈𝒫^σ_p, q(Y×Ω),there exists E∈ℬ_σsuch that σ(E)=1and 𝔪_0^ω, 𝔪_1^ω∈𝒫_p(Y) for all ω∈ E, set ℱ_E:={ E∩ B | B∈ℬ_σ}⊂ℬ_σ.As previously mentioned, (𝒫_p(Y^2), _p^Y^2) is a complete, separable metric space.For t, s∈ Y,since we have_Y(t, s)^p =(_Y(t, s)^2)^p/2≤ 2^p/2(_y_0(t)^2+_y_0(s)^2)^p/2=2^p/2_Y^2((y_0, y_0), (t,s))^p,Theorem <ref> yields thatthe function on 𝒫_p(Y^2) defined by 𝒞(μ):=‖_Y^p‖_L^1(μ)is continuous with respect to _p^Y^2.Now defineF:E→ 2^𝒫_p(𝒢(Y)) by F(ω):={Γ∈𝒫_p(𝒢(Y)) | e^∙_♯Γ is a minimal geodesic from 𝔪_0^ω to 𝔪_1^ω};note that if Γ∈ F(ω) then (e^0×e^1)_♯Γ∈Π(𝔪_0^ω, 𝔪_1^ω) is a p-optimal couplingby <cit.>*Corollary 7.22.We now show that F satisfies the hypotheses of the Kuratowski and Ryll-Nardzewski selection theorem, Theorem <ref>.Claim 1. F(ω) is nonempty and closed for each ω∈Ω.Proof of Claim 1. By Proposition <ref>, for any ω∈ E there is a Γ∈𝒫(𝒢(Y)) such that e^∙_♯Γ is a minimal geodesic from 𝔪_0^ω to 𝔪_1^ω.Additionally, if ρ_0∈𝒢(Y) isidentically equal to y_0,then we have ∫_𝒢(Y)_𝒢(Y)(ρ_0,ρ)^pdΓ(ρ)=∫_𝒢(Y)(sup_τ∈ [0, 1]_Y( ρ(τ), ρ_0(τ)) )^pdΓ(ρ)≤ 2^p-1∫_𝒢(Y)sup_τ∈ [0, 1](_Y(y_0,ρ(0))^p+_Y(ρ(0), ρ(τ))^p)dΓ(ρ)=2^p-1∫_𝒢(Y)sup_τ∈ [0, 1](_y_0(ρ(0))^p+τ^p _Y(ρ(0), ρ(1))^p)dΓ(ρ)≤ 2^p-1∫_Y_y_0(t)^pde^0_♯Γ(t)+2^p-1∫_Y^2_Y(t, s)^pd(e^0×e^1)_♯Γ(t, s)=2^p-1∫_Y_y_0(t)^pd𝔪_0^ω(t)+2^p-1_p^Y(𝔪_0^ω, 𝔪_1^ω)<∞,hence Γ∈𝒫_p(𝒢(Y)), thus we have F(ω)≠∅. Now given ω∈ E,if (Γ_j)_j∈ℕ⊂ F(ω) convergesin (𝒫_p(𝒢(Y)),_p^𝒢(Y)), by Lemma <ref>the sequence ((e^0×e^1)_♯Γ_j)_j∈ℕ converges weakly. Since each (e^0×e^1)_♯Γ_j is a p-optimal coupling between 𝔪_0^ω and 𝔪_1^ω, so is its weak limit by <cit.>*Corollary 5.21,hence the limit belongs to F(ω); in other words F(ω) is closed in(𝒫_p(𝒢(Y)),_p^𝒢(Y)). Claim 2. F is ℱ_E-weakly measurable.Proof of Claim 2. For Γ∈𝒫_p(𝒢(Y)),define Φ_Γ: E→ℝ^3 by Φ_Γ(ω) :=( _p^Y(e^0_♯Γ, 𝔪_0^ω)^p, _p^Y(e^1_♯Γ, 𝔪_1^ω)^p, |𝒞((e^0×e^1)_♯Γ)-_p(𝔪_0^ω,𝔪_1^ω)^p|),which is ℱ_E-measurable by <cit.>*Lemma 12.4.7. Since (𝒢(Y), _𝒢(Y)) is closed in C_b([0, 1]; Y) with the supremum norm it is complete, and as seen before is separable. Thus (𝒫_p(𝒢(Y)), _p^𝒢(Y)) is complete and separable. Fix a closed set K in (𝒫_p(𝒢(Y)), _p^𝒢(Y)), then there exists a countable set {Γ_j}_j∈ℕ that is _p^𝒢(Y)-dense in K.Set B:=⋂_m=1^∞⋃_j=1^∞Φ_Γ_j^-1([0,m^-1)^3),Ω_K:={ω∈ E| F(ω) ∩ K ≠∅},by the ℱ_E-measurability of each Φ_Γ_j,we find B∈ℬ_σ.We will now show that Ω_K=B. If ω∈Ω_K, there exists Γ∈ F(ω) ∩ K,and a sequence (Γ_j_ℓ)_ℓ∈ taken from (Γ_j)_j∈ℕ that converges to Γ with respect to _p^𝒢(Y). Then by Lemma <ref>, the sequence (e^i_♯Γ_j_ℓ)_ℓ∈ converges in _p^Y to 𝔪^ω_i=e^i_♯Γ, for i=0, 1.Similarly, the convergence of (Γ_j_ℓ)_ℓ∈ℕ to Γ in _p^𝒢(Y) implies convergence of ((e^0×e^1)_♯Γ_j_ℓ)_ℓ∈ℕ to (e^0×e^1)_♯Γ in _p^Y^2, hence the continuity of 𝒞 implies thatlim_ℓ→∞| 𝒞((e^0×e^1)_♯Γ_j_ℓ)-_p^Y(𝔪_0^ω,𝔪_1^ω)^p|=lim_ℓ→∞|𝒞((e^0×e^1)_♯Γ_j_ℓ)-𝒞((e^0×e^1)_♯Γ)|=0. Thus for any m∈, if ℓ is sufficiently large, we haveΦ_Γ_j_ℓ(ω)∈ [0,m^-1)^3 which yields ω∈ B.Now assume ω∈ B. 
For each m∈ℕ, there is j(m)∈ℕ such thatΦ_Γ_j(m)(ω)∈ [0,m^-1)^3, that is,_p^Y(e^0_♯Γ_j(m), 𝔪_0^ω)^p<m^-1,_p^Y(e^1_♯Γ_j(m), 𝔪_1^ω)^p<m^-1,|𝒞((e^0×e^1)_♯Γ_j(m))-_p^Y^2(𝔪_0^ω,𝔪_1^ω)^p|<m^-1.Since {e^0_♯Γ_j(m)}_m∈∪{𝔪_0^ω} and {e^1_♯Γ_j(m)}_m∈∪{𝔪_1^ω} are compact in (𝒫_p(Y), _p^Y), by <cit.>*Corollary 7.22 there is a subsequence of (Γ_j(m))_m∈ (not relabeled) that converges weakly to some Γ∈𝒫(𝒢(Y)).Since (Y, _Y) is ball convex with respect to y_0, recalling that ρ_0∈𝒢(Y) is identically y_0,lim sup_r→∞lim sup_m→∞∫_𝒢(Y)∖ B_r^𝒢(Y)(ρ_0)_𝒢(Y)(ρ, ρ_0)^pdΓ_j(m)(ρ)≤lim sup_r→∞lim sup_m→∞∫_{ρ∈𝒢(Y)|max{_y_0(ρ(0)), _y_0(ρ(1))}≥ r}max{_y_0(ρ(0)), _y_0(ρ(1))}^pdΓ_j(m)(ρ)≤lim sup_r→∞lim sup_m→∞∫_{ρ∈𝒢(Y)|_y_0(ρ(0))≥ r}_y_0(ρ(0))^pdΓ_j(m)(ρ)+lim sup_r→∞lim sup_m→∞∫_{ρ∈𝒢(Y)|_y_0(ρ(1))≥ r}_y_0(ρ(1))^pdΓ_j(m)(ρ)≤lim_r→∞lim sup_m→∞∫_Y∖ B_r^Y(y_0)_y_0(t)^pde^0_♯Γ_j(m)(t) +lim_r→∞lim sup_m→∞∫_Y∖ B_r^Y(y_0)_y_0(t)^pde^1_♯Γ_j(m)(t)= 0by (<ref>) and Theorem <ref>, hence Γ_j(m)→Γ in _p^𝒢(Y) as m→∞. Since F(ω)∩ K is closed, this implies Γ∈ F(ω)∩ K, thus ω∈Ω_K, proving Ω_K=B∈ℱ_E, and in particular F is ℱ_E-weakly measurable.As mentioned previously(𝒫_p(𝒢(Y)), _p^𝒢(Y)) is complete and separable, hence we can apply Theorem <ref>, to find an ℱ_E-measurable selection Γ_∙:E→𝒫_p(𝒢(Y)) of F.By Lemma <ref>, as the composition of a continuous map with an ℱ_E-measurable one, the mape^τ_♯Γ_∙: Ω→𝒫_p(Y) is ℱ_E-measurable for each τ∈[0, 1].Since (𝒫_p(Y), _p^Y) is separable, by Remark <ref> we have e^τ_♯Γ_∙∈ L^0(σ; 𝒫_p(Y)),thus there exists a sequence of simple functions from Ω to 𝒫_p(Y) that _p^Y-converge pointwise σ-a.e. to e^τ_♯Γ_∙; in particular the sequence also converges weakly pointwise σ-a.e.. Thus for any ϕ∈ C_0(Y×Ω), as a σ-a.e. pointwise limit of ℱ_E-measurable maps the real valued functionω↦∫_Yϕ(y, ω) de^τ_♯Γ_ω(t)is also ℱ_E-measurable. This implies thatL_τ(ϕ):=∫_Ω(∫_Y ϕ de^τ_♯Γ_ω) dσ(ω)is well-defined for any ϕ∈ C_0(Y×Ω) and τ∈ [0, 1].Since Y×Ω is locally compact, just as we argued for the functional L defined in (<ref>) we may identify L_τ with a probability measure 𝔪_τ∈𝒫^σ(Y×Ω), whose disintegration satisfies σ-a.e.,𝔪_τ^∙=e^τ_♯Γ_∙.Then since (Y,_Y) is ball convex with respect to y_0, _p^Y(δ_y_0^Y, 𝔪_τ^ω)=(∫_Y _y_0(t)^pde^τ_♯Γ_ω(t))^1/p =(∫_𝒢(Y)_y_0(ρ(τ))^pdΓ_ω(ρ))^1/p≤(∫_𝒢(Y)max{_y_0(ρ(0)),_y_0(ρ(1))}^pdΓ_ω(ρ))^1/p≤(∫_𝒢(Y) (_y_0(ρ(0))+_y_0(ρ(1)))^p dΓ_ω(ρ))^1/p ≤(∫_Y_y_0(t)^p de^0_♯Γ_ω(t))^1/p + (∫_Y_y_0(t)^p de^1_♯Γ_ω(t))^1/p = _p^Y(δ_y_0^Y, 𝔪_0^ω) + _p^Y(δ_y_0^Y, 𝔪_1^ω)for σ-a.e. ω, hence 𝔪_τ∈𝒫^σ_p, q(Y×Ω).Finally, fix 0≤τ_1<τ_2≤ 1.Since Γ_ω∈ F(ω), for σ-a.e. ω we have _p^Y(𝔪_τ_1^ω, 𝔪_τ_2^ω)=|τ_1-τ_2|_p^Y(𝔪_0^ω, 𝔪_1^ω), thus _p, q(𝔪_τ_1, 𝔪_τ_2)=‖_p^Y(𝔪_τ_1^∙, 𝔪_τ_2^∙)‖_L^q(σ)=|τ_1-τ_2|‖_p^Y(𝔪_0^∙, 𝔪_1^∙)‖_L^q(σ) =|τ_1-τ_2|_p, q(𝔪_0, 𝔪_1),proving that τ↦𝔪_τ is a minimal geodesic with respect to _p, q.§.§ DualityWe now work toward the duality result for disintegrated Monge–Kantorovich metrics.Notice that (𝒳_p, ·_𝒳_p) is a Banach space. We begin with a few lemmas on the _Y^p-transform of a function in 𝒳_p. The continuity below is an analogue of <cit.>*Appendix C, but in spaces other than ^n and for functions in the restricted class 𝒳_p.If ϕ∈𝒳_p and μ∈𝒫_p(Y), then ϕ^_Y^p is locally bounded and continuous on Y, and belongs to L^1(μ) for all μ∈𝒫_p(Y).We first show local boundedness. Note by definition, ϕ^_Y^p(s)≥ -_Y(s,s)^p-ϕ(s)=-ϕ(s)>-∞for all s∈ Y.To see local boundedness from above, fix y_0,s∈ Y. 
Since compact sets are bounded and ϕ∈𝒳_p, there exists an R>0 such that if _y_0(t)>R, then |ϕ(t)|/1+_y_0(t)^p≤ 2^-p,we calculate for such t,-_Y(t, s)^p-ϕ(t) ≤-_Y(t, s)^p+2^-p(1+_y_0(t)^p)≤-_Y(t, s)^p+2^-p[1+2^p-1(_Y(t,s)^p+_y_0(s)^p)] =-1/2_Y(t, s)^p+1/2^p+1/2_y_0(s)^p ≤1/2^p+1/2_y_0(s)^p.Thusϕ^_Y^p(s)≤max{1/2^p+1/2_y_0(s)^p, sup_t∈ B_R^Y(y_0)(-_Y(t, s)^p-ϕ(t))},since ϕ∈𝒳_p implies ϕ is bounded on finite radii balls, the expression on the right is locally bounded in s, hence we see ϕ^_Y^p is locally bounded. Since μ has finite pth moment, the above bounds give ϕ^_Y^p∈ L^1(μ).To see continuity, fix a convergent sequence (s_j)_j∈ℕ in Y with limit s_0. Then by the same argument as in the proof of Lemma <ref> using (<ref>) and (<ref>),it is enough to show that {t_j}_j∈ is bounded, where t_j∈ Y satisfiesϕ^_Y^p(s_j)≤ -_Y(t_j, s_j)^p-ϕ(t_j)+ε.for each j∈.Suppose by contradiction that sup_j∈ℕ_y_0(t_j)=∞, then since ϕ∈𝒳_p, for all j sufficiently large we can apply (<ref>) to obtain-ϕ^_Y^p(s_j)≤ -_Y(t_j, s_j)^p-ϕ(t_j)+ε≤-1/2_Y(t_j, s_j)^p+1/2^p+1/2_y_0(s_j)^p+ε-∞,as (s_j)_j∈ is bounded.This contradicts that ϕ^_Y^p is locally bounded, since s_j→ s_0 as j→∞, proving the continuity of ϕ^_Y^p at s_0.As in Lemma <ref>, we can now prove the following. Let ϕ∈𝒳_p and μ∈𝒫_p(Y). Then: *ϕ∈ L^1(μ) and ∫_Y |ϕ| dμ≤‖ϕ‖_𝒳_p∫_Y(1+_y_0(t)^p)dμ(t). *Let R_ϕ>0 be such thatif _y_0(t)>R_ϕ, then |ϕ(t)|/1+_y_0(t)^p≤ 2^-p-1.Then for all ϕ̃∈𝒳_p with ϕ-ϕ̃_𝒳_p<2^-p-1 and s∈ Y,|ϕ̃^_Y^p(s)-ϕ^_Y^p(s)| ≤‖ϕ-ϕ̃‖_𝒳_p(1+ max{ R_ϕ^p, 2^p+1(1+‖ϕ‖_𝒳_p)(1+_y_0(s)^p)}). Assertion (<ref>) follows from the inequality |ϕ(t)|≤ϕ_𝒳_p (1+_y_0(t)^p) for all t∈ Y. Assertion (<ref>) is more involved.Fix ε>0, then if ϕ̃∈𝒳_pby Lemma <ref>, ϕ̃^_Y^p is finite on all of Y. Thus for any s∈ Y, there exists t_ϕ̃∈ Y such thatϕ̃^_Y^p(s) ≤ -_Y(t_ϕ̃,s)^p-ϕ(t_ϕ̃)+ε.Then, ϕ̃^_Y^p(s)-ϕ^_Y^p(s) ≤ -_Y(t_ϕ̃,s)^p-ϕ̃(t_ϕ̃)+_Y( t_ϕ̃,s)^p+ϕ(t_ϕ̃)+ε≤‖ϕ-ϕ̃‖_𝒳_p(1+_y_0(t_ϕ̃)^p)+ε,and switching the roles of ϕ, ϕ̃ yields|ϕ̃^_Y^p(s)-ϕ^_Y^p(s)| ≤‖ϕ-ϕ̃‖_𝒳_p(1+ max{_y_0(t_ϕ)^p,_y_0(t_ϕ̃)^p }) +ε. Now suppose ϕ̃∈𝒳_p with ϕ-ϕ̃_𝒳_p<2^-p-1, then if _y_0(t)>R_ϕ,|ϕ̃(t)|/1+_y_0(t)^p≤ϕ-ϕ̃_𝒳_p + |ϕ(t)|/1+_y_0(t)^p <2^-p.Ifs, t∈ Y satisfy _y_0(t) ≥max{R_ϕ, 2_y_0(s)},by the triangle inequality, _Y(t, s)≥|_y_0(t)-_y_0(s)| =_y_0(t)-_y_0(s)≥1/2_y_0(t),then from (<ref>) we obtain that -_Y(t,s)^p-ϕ̃(t)≤ -1/2_Y(t, s)^p+1/2^p+1/2_y_0(s)^p≤ -1/2^p+1_y_0(t)^p+ 1/2^p+1/2_y_0(s)^p,Thus if s∈ Y is such that _y_0(t_ϕ̃)≥max{R_ϕ,2_y_0(s)},we have -‖ϕ̃‖_𝒳_p(1+_y_0(s)^p) ≤-ϕ̃(s)≤ϕ̃^_Y^p(s)≤ -_Y(t_ϕ̃,s)^p-ϕ̃(t_ϕ̃)+ε≤ -1/2^p+1_y_0(t_ϕ̃)^p+ 1/2^p+1/2_y_0(s)^p+εor rearranging,_y_0(t_ϕ̃)^p≤ 2^p+1‖ϕ̃‖_𝒳_p(1+_y_0(s)^p)+ 2+2^p_y_0(s)^p+2^p+1ε ≤2^p+1(2^-p-1+‖ϕ‖_𝒳_p)(1+_y_0(s)^p)+ 2+2^p_y_0(s)^p+2^p+1ε≤ 2^p+1[(1+‖ϕ‖_𝒳_p)(1+_y_0(s)^p)+ε].Thus in all cases, we have_y_0(t_ϕ̃)^p≤max{ R_ϕ^p, 2^p+1[(1+‖ϕ‖_𝒳_p)(1+_y_0(s)^p)+ε]}.We can obtain the above estimate when ϕ̃=ϕ as well, hence combining with (<ref>) and taking ε to 0 finishes the proof. As in the case of the sliced metrics, our approach will be to apply the classic Kantorovich duality for each ω∈Ω. In this case however, we must appeal to the Kuratowski and Ryll-Nardzewski measurable selection theorem (Theorem <ref>) instead of the Michael continuous selection theorem(Proposition <ref>) which greatly complicates the proof. 
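As an aside, the _Y^p-transform entering this subsection is easy to experiment with numerically. The following sketch (illustration only; it truncates Y=ℝ to a finite grid, and the function names are ours) computes a discrete d^p-transform and checks, away from the grid boundary, the standard fact that the double transform ϕ^cc is the largest c-convex function below ϕ, with equality exactly when ϕ is itself c-convex.

```python
import numpy as np

def c_transform(phi, grid, p):
    # Discrete d^p-transform on a grid approximation of Y = R:
    # phi^c(s) = sup_t ( -|t - s|^p - phi(t) ), supremum over grid points.
    t = grid[None, :]   # row index: variable of the supremum
    s = grid[:, None]   # column index: argument of phi^c
    return np.max(-np.abs(s - t) ** p - phi[None, :], axis=1)

grid = np.linspace(-3.0, 3.0, 601)
phi = grid ** 2 / 2                     # a sample potential (c-convex here)
phi_c = c_transform(phi, grid, p=2)     # its d^2-transform
phi_cc = c_transform(phi_c, grid, p=2)  # double transform: c-convex envelope
inner = np.abs(grid) <= 1.0             # avoid grid-truncation effects
print(np.max(np.abs(phi_cc - phi)[inner]))  # expected small, up to grid error
```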
To this end, given 𝔪, 𝔫∈𝒫^σ_p, 1(Y ×Ω), and ε>0,we define a set-valued function F^𝔪, 𝔫_ε from Ω to 2^𝒳_p byF^𝔪, 𝔫_ε(ω):={ϕ∈𝒳_p| -∫_Yϕ d𝔪^ω-∫_Yϕ^_Y^p d𝔫^ω> _p^Y(𝔪^ω, 𝔫^ω)^p-ε}^‖·‖_𝒳_p,where A^‖·‖_𝒳_p denotes the closure of A⊂𝒳_pwith respect to the norm ‖·‖_𝒳_p. Assume (Ω, _Ω), (Y, _Y) are locally compact and let 𝔪, 𝔫∈𝒫^σ_p, 1(Y×Ω).Then for each ε>0, we find F^𝔪, 𝔫_ε is ℬ_σ-weakly measurable and F^𝔪, 𝔫_ε (ω) is closed and nonempty for σ-a.e. ω∈Ω.Since𝔪, 𝔫∈𝒫^σ_p, 1(Y×Ω) and ε>0 are fixed,we write F in place of F^𝔪, 𝔫_ε. We first showF(ω)≠∅. Since 𝔪^ω, 𝔫^ω∈𝒫_p(Y) for σ-a.e. ω, for such ω by the classical Kantorovich duality Theorem <ref> for _p^Y, there exists ϕ_ε∈ C_b(Y)⊂𝒳_psuch that _p^Y(𝔪^ω, 𝔫^ω)^p- ε< -∫_Yϕ_ε d𝔪^ω-∫_Yϕ^_Y^p_ε d𝔫^ω,thus ϕ_ε∈F(ω)≠∅. By definition, F(ω) is closed. Next, we prove the ℬ_σ-weak measurability of F.Define F(ω):={ϕ∈𝒳_p| -∫_Yϕ d𝔪^ω-∫_Yϕ^_Y^p d𝔫^ω> _p^Y(𝔪^ω, 𝔫^ω)^p-ε}(compare with F^μ,ν_ε in Definition <ref>).First, for any open set O⊂𝒳_p and any set A⊂𝒳_p,it is clear that A^‖·‖_𝒳_p∩ O≠∅ if and only if A∩ O≠∅,thus it is sufficient to prove that F is ℬ_σ-weakly measurable.To this end, fix ϕ∈𝒳_p and define the function G_ϕ: Ω→ [-∞,∞) byG_ϕ(ω):=-∫_Yϕ d𝔪^ω-∫_Yϕ^_Y^p d𝔫^ω-_p^Y(𝔪^ω, 𝔫^ω)^p,then ϕ∈ F(ω) if and only if G_ϕ(ω)>-ε, hence{ω∈Ω|F(ω) ∩ O ≠∅} =⋃_ϕ∈ OG_ϕ^-1((-ε, ∞)).Since (Y, _Y) is locally compact and separable, by combining <cit.>*(5.3) Theorem ii) and iv), and <cit.>*Chapter V.5, Exercise 2(c) we find C_0(Y) is separable, hence there exists a countable set {ϕ̃_j}_j∈ℕ⊂ C_0(Y), dense inthe supremum norm, then {ϕ_j}_j∈ℕ:={ (1+_y_0^p)ϕ̃_j}_j∈ℕ⊂𝒳_p is dense in ‖·‖_𝒳_p; we may throw out some elements to assume {ϕ_j}_j∈ℕ⊂ O while remaining dense in O. We now claim that⋃_ϕ∈ OG_ϕ^-1((-ε, ∞))=⋃_j=1^∞ G_ϕ_j^-1((-ε, ∞)).Since {ϕ_j}_j∈ℕ⊂ O, it is clear that⋃_j=1^∞ G_ϕ_j^-1((-ε, ∞)) ⊂⋃_ϕ∈ OG_ϕ^-1((-ε, ∞)).On the other hand, suppose ω∈ G_ϕ^-1((-ε, ∞)) for some ϕ∈ O. From Lemma <ref> combined with the fact that 𝔫^ω∈𝒫_p(Y), and the density of {ϕ_j}_j∈ℕ in 𝒳_p,for any δ>0, there exists j_δ∈ such thatG_ϕ(ω)-G_ϕ_j_δ(ω) = -∫_Y (ϕ-ϕ_j_δ) d𝔪^ω- ∫_Y (ϕ^_Y^p-ϕ_j_δ^_Y^p) d𝔫^ω <δ,thus taking δ=G_ϕ(ω)+ε>0, we have G_ϕ(ω)-G_ϕ_j_δ(ω) <G_ϕ(ω)+ε,consequentlyG_ϕ_j_δ(ω)>-ε. Thus ω∈G_ϕ_j_δ^-1((-ε, ∞)) and the opposite inclusion is proved. By <cit.>*Lemma 12.4.7 and thm: disintegration, we seeG_ϕ_j^-1((-ε, ∞))∈ℬ_σ for each j∈ℕ , hence ⋃_j=1^∞ G_ϕ_j^-1((-ε, ∞))∈ℬ_σ.Thus combining (<ref>) and (<ref>), this shows F is ℬ_σ-weakly measurable.We now prove some auxiliary lemmas.If f∈ L^0(σ; 𝒳_p), then for 𝔪, 𝔫∈𝒫^σ_p, 1(Y×Ω), the functions defined byω↦∫_Y f_ω d𝔪^ω, ω↦∫_Y f_ω^_Y^p d𝔫^ωare ℬ_σ-measurable. If f is σ-strongly measurable, for each j∈ there exist I_j∈, {ϕ_i, j}_i=1^I_j⊂𝒳_p, and a partition {A_i, j}_i=1^I_j⊂ℬ_σ of Ω such that the simple functions f^j_ω:=∑_i=1^I_j1_A_i, j(ω)ϕ_i, jpointwise converge for σ-a.e. ωto f_ω. The measures 𝔪^∙ and 𝔫^∙ have finite pth moment σ-a.e., fix ω such that this holds. For each j∈ℕ, since {A_i, j}_i=1^I_j is a disjoint collection there exists a unique 1≤ i(j)≤ I_j such that ω∈ A_i(j), j,then ∫_Y f^j_ω d𝔪^ω =∑_i=1^I_j1_A_i, j(ω)∫_Yϕ_i, j(t)d𝔪^ω(t), ∫_Y(f^j_ω)^_Y^pd𝔫^ω =∫_Y [ sup_t∈ Y(-_Y(t, s)^p-∑_i=1^I_j1_A_i, j(ω)ϕ_i, j(t))] d𝔫^ω(s)=∫_Y[ sup_t∈ Y(-_Y(t, s)^p-ϕ_i(j), j(t))]d𝔫^ω(s)=∫_Yϕ_i(j), j^_Y^pd𝔫^ω=∑_i=1^I_j1_A_i, j(ω)∫_Yϕ_i, j^_Y^pd𝔫^ω,which are ℬ_σ-measurable functions of ω by thm: disintegration.Thusfrom Lemma <ref>, we observe each of the functions in (<ref>)is a σ-a.e. pointwise limit of ℬ_σ-measurable functions, hence is ℬ_σ-measurable itself. 
If f∈ L^0(Ω; 𝒳_p),there is a sequence (f_j)_j∈ℕ⊂ C_b(Ω; 𝒳_p) which converges pointwise σ-a.e. to f. By Remark <ref>, f is a ℬ_σ-measurable map. Then since 𝒳_p is complete and separable, for each j∈, we may apply <cit.>*Theorem 7.1.13, where ℬ_μ(X) in the reference is our ℬ_σ,to f to find a compact set K_j⊂Ω such that σ(Ω∖ K_j)<2^-j and f restricted to K_j is continuous; we may also assume K_j⊂ K_j+1 for each j∈ℕ. Since 𝒳_p is a normed space it is locally convex, hence the Tietze extension theorem <cit.>*Theorem 4.1 applies and there is a continuous function f_j: Ω→𝒳_p such that f_j= f on K_j. Moreover since K_j is compact and f restricted to it is continuous, the image f(K_j) is also compact, hence bounded in 𝒳_p. Then <cit.>*Theorem 4.1 also ensures that the image f_j(Ω) is contained in the convex hull of f(K_j), consequently f_j is bounded. Since σ(K_j)→ 1 as j→∞,it is clear that f_j converges pointwise σ-a.e. to f, finishing the proof.We are now ready to prove the duality result.Recall r=p/q and we assume (Y, _Y) and (Ω, _Ω) are locally compact. Let Φ∈ C_b(Ω; 𝒳_p) and ζ∈𝒴_r', σ⊂ L^r'(σ). By continuity of Φ and the separability of 𝒳_p with Remark <ref> we have Φ∈ L^0(σ; 𝒳_p) , hence by Lemma <ref>, the Kantorovich duality Theorem <ref> for _p^Y,and the dual representation for the L^r-norm again (<cit.>*Proposition 6.13), we obtain-∫_Ωζ(ω)(∫_YΦ_ω(t)d𝔪^ω(t) +∫_YΦ^_Y^p_ω(s)d𝔫^ω(s))dσ(ω)≤∫_Ωζ(ω)_p^Y(𝔪^ω, 𝔫^ω)^pdσ(ω)≤‖_p^Y(𝔪^∙, 𝔫^∙)^p‖_L^r(σ) =‖_p^Y(𝔪^∙, 𝔫^∙)‖_L^q(σ)^p=_p, q(𝔪, 𝔫)^p. To show the reverse inequality, fix ε>0 and let Ebe the set of ω∈Ω such thatboth of 𝔪^ω, 𝔫^ω have finite pth moment; note that σ(E)=1. By Lemma <ref>,the set-valued mapping F^𝔪, 𝔫_ε is nonempty and closed valued, and ℬ_σ-weakly measurable. Since𝒳_p is separable, by Theorem <ref> we can find a map f_∙: Ω→𝒳_p that is ℬ_σ-measurable such that f_ω∈F^𝔪, 𝔫_ε(ω) for all ω∈Ω, and by Remark <ref>, this implies f_∙∈ L^0(σ; 𝒳_p).By Lemma <ref> for ω∈ E-∫_Y f_ω(t)d𝔪^ω(t)-∫_Yf_ω^_Y^p(s)d𝔫^ω(s) ≥_p^Y(𝔪^ω, 𝔫^ω)^p-ε.Arguing as in the proof of Theorem <ref> (<ref>), with _p^Y(𝔪^∙, 𝔫^∙) replacing _p^(^∙_♯μ, ^∙_♯ν), there also exists ζ∈𝒴_r', σ satisfying∫_Ωζ(ω)_p^Y(𝔪^ω, 𝔫^ω)^pdσ(ω) > _p, q(𝔪, 𝔫)^p-ε;and again if p=q we may take ζ≡ 1. Now for j∈ and u ∈, let T_j(u):=max{min{u, j}, -j}:=min{u,j}, if u≥ 0, max{ u,-j },if u<0.A simple calculation yields thatfor each u,v∈ℝ, the sequence ( T_j(u)+T_j(v) )_j∈ℕ is non-negative and non-decreasing if u+v≥0, and non-positive and non-increasing if u+v ≤ 0 with limit u+v, and in particular ( T_j(-f_ω(t))+T_j(-f_ω^_Y^p(s)))≤_Y(t, s)^pfor each t,s∈ Y and ω∈Ω.For each ω∈Ω define the sets E_±(ω):={(t,s)| ±(f_ω(t)+ f_ω^_Y^p(s))≤ 0}then we can see(±∫_E_±(ω)( T_j(-f_ω(t))+ T_j(-f_ω^_Y^p(s)))d(𝔪^ω⊗𝔫^ω)(t, s))_j∈are non-negative, non-decreasing sequences for each ω∈Ω. Thus integrating against ζσ and using monotone convergence(and usingT_j(-f_ω(t))+T_j(-f_ω^_Y^p(s))=0 on E_+(ω)∩ E_-(ω)), if j_0 is large enough we obtain-∫_Ωζ(ω)(∫_Y [-T_j_0( -f_ω(t))] d𝔪^ω(t)+∫_Y [-T_j_0(-f_ω)]^_Y^p(s)d𝔫^ω(s) ) dσ(ω)≥∫_Ωζ(ω)(∫_Y T_j_0( -f_ω(t))d𝔪^ω(t)+∫_Y T_j_0(-f_ω^_Y^p(s))d𝔫^ω(s))dσ(ω)>_p, q(𝔪, 𝔫)^p-2ε,where the first inequality follows from (<ref>);let us fix such a j_0. By Lemma <ref>, there exists a sequence (Φ_j)_j∈⊂ C_b(Ω; 𝒳_p) converging pointwise σ-a.e. to -T_j_0∘ (-f_∙) in ‖·‖_𝒳_p; we may truncate to assume (Φ_j)_ω_C_b(Y)≤ 2j_0,for all ω∈Ω,and by Lemma <ref>, the sequence (Φ_j^_Y^p)_j∈ also satisfies the same bound. 
Thus - ζ(ω)(∫_Y (Φ_j)_ω(t)d𝔪^ω(t) +∫_Y(Φ_j^_Y^p)_ω(s)d𝔫^ω(s))≥ -4j_0ζ(ω),alsoby Lemma <ref> we have that- lim_j→∞ζ(∫_Y (Φ_j)_∙ d𝔪^∙+∫_Y(Φ_j^_Y^p)_∙ d𝔫^∙)=-ζ(∫_Y [-T_j_0(-f_∙(t))]d𝔪^∙(t)+∫_Y [-T_j_0(-f_∙)]^_Y^p(s)d𝔫^∙(s)), holds σ-a.e..Since C_b(Ω; 𝒳_p)⊂ L^0(σ; 𝒳_p)by Remark <ref>,all functions involved can be integrated against σ by Lemma <ref>; by (<ref>) and since ζ∈ L^r'(σ)⊂ L^1(σ) we may apply Fatou's lemma, thus combining with (<ref>) and (<ref>) we havelim inf_j→∞(-∫_Ωζ(∫_Y (Φ_j)_∙ d𝔪^∙+∫_Y (Φ_j^_Y^p)_∙ d𝔫^∙) dσ) > _p, q(𝔪, 𝔫)^p-2ε.Since ε>0 is arbitrary, this shows the first equality in Theorem <ref> (<ref>). To see the second equality, note that if Φ:=Φ_j for some j∈, |Φ_ω(t)-Φ_ω_0(t_0)|≤ (1+_y_0(t)^p)‖Φ_ω-Φ_ω_0‖_𝒳_p +|Φ_ω_0(t)-Φ_ω_0(t_0)|0 and by Lemma <ref> (<ref>) we have |Φ_ω^_Y^p(t)-Φ_ω_0^_Y^p(t_0)|≤|Φ_ω^_Y^p(t)-Φ_ω_0^_Y^p(t)|+|Φ_ω_0^_Y^p(t)-Φ_ω_0^_Y^p(t_0)|≤‖Φ_ω-Φ_ω_0‖_𝒳_p (1+ max{R_Φ_ω_0^p, 2^p+1(1+Φ_ω_0_𝒳_p)(1+_y_0(t)^p)} +|Φ_ω_0^_Y^p(t)-Φ_ω_0^_Y^p(t_0)|0,where we have used that Lemma <ref> and boundedness of Φ_ω_0 implies continuity of Φ_ω_0^_Y^p. This shows for each j∈ℕ, we have Φ_j, Φ_j^_Y^p∈ C_b(Y×Ω). Since Φ_j∈ C_b(Ω; 𝒳_p), Lemma <ref> (<ref>)implies Φ_j^_Y^p∈ C_b(Ω; 𝒳_p), hence (Φ_j, Φ_j^_Y^p)∈𝒜_p, Y, σ proving the second equality in Theorem <ref> (<ref>).§.§ Isometric embeddingWe will now show that the natural candidate for embedding 𝒫_p(^n) into 𝒫(×^n-1) via the Radon transform is actually an isometry of each sliced metric space into the disintegrated metric space with the same values of p and q. Let μ∈𝒫(ℝ^n).Forϕ∈ C_b(ℝ×𝕊^n-1), by dominated convergence the function on 𝕊^n-1 defined by ω↦∫_ℝϕ(t,ω)dR^ω_♯μ(t) = ∫_ℝ^nϕ(⟨ x,ω⟩,ω)dμ(x)is continuous, andL_μ(ϕ): =∫_𝕊^n-1∫_ℝϕ(t,ω)dR^ω_♯μ(t)dσ_n-1(ω) =∫_𝕊^n-1∫_ℝ^nϕ(⟨ x,ω⟩,ω)dμ(x)dσ_n-1(ω)is well-defined.As in the proof involving L defined in (<ref>), we can identify L_μ with a Borel probability measure 𝔪_μ∈𝒫^σ_n-1(×^n-1) and 𝔪^∙=R^∙_♯μ. For μ∈𝒫_p(ℝ^n), a direct calculation combined with Lemma <ref> gives‖_p^(δ_0^, ^∙_♯μ)‖_L^q(σ_n-1) =_p,q(δ_0^ℝ^n, μ) ≤ M_max{p, q}, n_p^^n(δ_0^^n, μ)<∞,hence𝔪_μ∈𝒫^σ_n-1_p, q(×^n-1).Finally, for μ, ν∈𝒫_p(ℝ^n), we have_p,q(𝔪_μ, 𝔪_ν)=‖_p^(𝔪_μ^∙, 𝔪_ν^∙)‖_L^q(σ_n-1)=‖_p^(R^∙_♯μ, R^∙_♯ν)‖_L^q(σ_n-1)=_p,q(μ, ν),showing that the map μ↦𝔪_μ is an isometry. By the completeness from Theorem <ref> (<ref>), the image of (𝒫_p(ℝ^n), _p,q) under μ↦𝔪_μ is closed in (𝒫^σ_p, q(×^n-1), _p,q). However, by Theorem <ref> (<ref>) the embedded image is not geodesically convex in (𝒫^σ_p, q(×^n-1), _p,q) when n≥ 2 and p>1.§.§ Further properties of disintegrated Monge–Kantorovich metricsIn this subsection, we prove that _p, p can be recognized as coming from a certain optimal transport problem on (Y×Ω)^2. For 1≤ p<∞, define𝔠_p:(Y×Ω)^2 → [0,∞] by𝔠_p((t,ω),(s,ω')) := _Y(t,s)^p, if ω=ω', ∞,else.For 𝔪,𝔫∈𝒫^σ_p(Y×Ω), set ℭ_p(𝔪, 𝔫) :=inf_Γ∈Π(𝔪, 𝔫)𝔠_p_L^p(Γ)∈[0,∞].For 𝔪, 𝔫∈𝒫^σ_p, p(Y×Ω), ℭ_p(𝔪, 𝔫) is finite and ℭ_p(𝔪, 𝔫)=_p, p(𝔪, 𝔫)^p. Fix 𝔪, 𝔫∈𝒫^σ_p, p(Y×Ω).For any (Φ, Ψ)∈𝒜_p,Y,σ, and ω∈Ω, it turns out thatΦ(·, ω), Ψ(·, ω)∈ C_b(Y) and-Φ(t, ω)-Ψ(s, ω)≤_Y(t,s)^p for t, s∈, in particular we have -Φ(t, ω)-Ψ(s, ω')≤𝔠_p((t,ω),(s,ω')).Then, noting that Y×Ω is a complete, separable metric space, the Kantorovich duality Theorem <ref>[We have stated Theorem <ref> only for cost functions of the form _Y^p, however the same result holds for any lower-semicontinuous cost function bounded from below, hence for 𝔠_p, see <cit.>*Theorem 5.10.] 
yieldsℭ_p(𝔪,𝔫) =sup_(Φ,Ψ)∈𝒜_p,Y,σ( -∫_Y×ΩΦ d𝔪 -∫_Y×ΩΨ d𝔫)=sup_(Φ,Ψ)∈𝒜_p,Y,σ∫_Ω( -∫_YΦ(t,ω)d𝔪^ω(t) -∫_YΨ(s,ω)d𝔫^ω(s) ) dσ(ω) ≤∫_Ω_p^Y(𝔪^ω,𝔫^ω)^p dσ(ω) =_p,p(𝔪,𝔫)^p<∞.Thus ℭ_p(𝔪,𝔫) is finite and ℭ_p(𝔪,𝔫)≤_p,p(𝔪,𝔫)^p. On the other hand,since 𝔠_p is lower semi-continuous and non-negative,by <cit.>*Theorem 4.1 there exists γ∈Π(𝔪,𝔫) such that ℭ_p (𝔪,𝔫) =∫_(Y×Ω)^2𝔠_p dγ,since ℭ_p (𝔪,𝔫)<∞ by above, we find thatγ({ ((t,ω), (s,ω'))|ω≠ω'})=0.Let π_Ω^2:(Y×Ω)^2→Ω^2 be the natural projection. Then for ℬ_σ-measurable sets A, A'⊂Ω,(π_Ω^2) _♯γ(A× A')=γ( { (t,ω), (s,ω')|ω∈ A, ω'∈ A'}) =γ({(t,ω), (s,ω')|ω∈ A, ω'∈ A', ω=ω'}) =γ({(t,ω), (s,ω)|ω∈ A∩ A'})=σ({ω∈Ω|ω∈ A∩ A'})=(_Ω×_Ω)_♯σ(A× A'),hence (π_Ω^2)_♯γ=(_Ω×_Ω)_♯σ. Let us regard γ as a probability measure on Y^2 ×Ω^2and consider the disintegration γ =γ^(∙,∗)⊗ (π_Ω^2)_♯γ =γ^(∙,∗)⊗ (_Ω×_Ω)_♯σ.For ϕ∈ C_b(Y^2×Ω^2), the function on Ω^2 (resp. Ω) defined by (ω, ω') ↦∫_Y^2ϕ(t, s,ω, ω')dγ^(ω, ω')(t, s) ( resp. ω↦∫_Y^2ϕ(t, s,ω,ω)dγ^(ω, ω)(t, s) ) is Borel by thm: disintegration, and∫_Ω^2∫_Y^2ϕ(t,s, ω,ω')dγ^(ω, ω')(t,s) d(π_Ω^2)_♯γ(ω,ω') = ∫_Ω∫_Y^2ϕ(t,s,ω,ω)dγ^(ω, ω)(t,s) dσ(ω). Now for any Borel set E_Y⊂ Y and E_Ω∈ℬ_σ, since γ∈Π(𝔪, 𝔫) we have∫_E_Ω𝔪^ω(E_Y)dσ(ω)=∫_Y×Ω1_E_Ω (ω) 1_E_Y(t) d𝔪(t,ω)= ∫_ (Y×Ω)^21_E_Ω (ω) 1_E_Y(t) dγ((t,ω), (s,ω'))= ∫_E_Ω∫_Y^21_E_Y(t) dγ^(ω, ω)(t, s) dσ(ω)=∫_E_Ω∫_Y^21_E_Y× Y(t, s) dγ^(ω, ω)(t, s) dσ(ω) =∫_E_Ωγ^(ω, ω)(E_Y× Y) dσ(ω).Since E_Y and E_Ω are arbitrary (and using a similar argument for 𝔫) this implies that for σ-a.e. ω, we haveγ^(ω, ω)∈Π(𝔪^ω,𝔫^ω).Finally, using this claim with the disintegration (<ref>), for γ∈Π(𝔪, 𝔫) we have_p, p(𝔪, 𝔫)^p=∫_Ω_p^Y(𝔪^ω, 𝔫^ω)^pdσ(ω)≤∫_Ω∫_Y^2_Y(t,s)^pd γ^(ω, ω)(t, s)dσ(ω)=∫_Ω∫_Y^2𝔠_p( (t,ω), (s,ω) )d γ^(ω, ω)(t, s)dσ(ω) = ∫_(Y×Ω)^2𝔠_p( (t,ω), (s,ω') )dγ ((t,ω), (s,ω'))=ℭ_p(𝔪, 𝔫),completing the proof of the lemma.It is possible to view (𝒫^σ_p, q(Y×Ω), _p, q) as the metric space valued L^q-space on (Ω, σ)where the range is (𝒫_p(Y), _p^Y) (i.e., elements are of the form ω↦𝔪^ω).Properties such as completeness for such spaces are claimed in various works, but do not appear to come with proofs in the literature except when the range is a Banach space (i.e., for Bochner–Lebesgue spaces), which is not the case here. Recall that 𝒫_2(ℝ^n) can be viewed as the quotient space of L^2([0, 1]; ℝ^n)under the equivalence relation ∼, where S∼ T if and only ifT_♯ℋ^1|_[0,1]=S_♯ℋ^1|_[0,1]. In particular, if p=2,the map from L^2([0,1];ℝ^n) to (𝒫_2(^n),_2^ℝ^n) sendingT to T_♯ℋ^1|_[0,1] formally becomes a “Riemannian submersion" (for instance, see <cit.>*Section 4). This Riemannian interpolation is recovered for a complete, separable, geodesic spaceby the use of absolutely continuous curves (<cit.>*Chapter 8, for instance).This enables one to discuss the notion of differentiability on (𝒫_2(^n),_2^^n), see also <cit.> for various notions of differentiability. It may be possible to apply such an approach to the spaces (𝒫^σ_p, q(Y×Ω), _p, q) in certain settings, which is left for a future work. § SLICED AND DISINTEGRATED BARYCENTERSIn this section, we prove our various claims regarding _p, q- and _p, q-barycenters.§.§ Existence of sliced barycentersWe first prove the existence of _p, q-barycenters by a simple compactness argument. We note that _p, q-barycenters are different from _p, q-barycenters on 𝒫^σ_p, q(×^n-1), as can be seen from the nongeodesicness proved in Theorem <ref> (<ref>). 
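Before the proof, we note that the functional B^p,q,_Λ, M is itself amenable to Monte Carlo evaluation, which may help fix ideas. The sketch below (illustration only; σ_n-1 is replaced by an empirical measure on finitely many random directions, the measures by equal-size samples, and all names are ours) estimates the objective at a candidate ν.

```python
import numpy as np

def sliced_w_p(X, Y, dirs, p):
    # W_p between the 1-D projections of equal-size samples X, Y
    # (rows = points), one value per unit vector in the rows of dirs.
    px, py = X @ dirs.T, Y @ dirs.T               # projections, shape (N, D)
    px, py = np.sort(px, axis=0), np.sort(py, axis=0)
    return np.mean(np.abs(px - py) ** p, axis=0) ** (1.0 / p)

def barycenter_functional(samples_mu, lam, Xnu, dirs, p, q, kappa):
    # Monte Carlo estimate of sum_k lam_k * || W_p(R mu_k, R nu) ||_{L^q}^kappa,
    # with sigma_{n-1} replaced by the uniform measure on dirs (finite q only).
    total = 0.0
    for lam_k, Xk in zip(lam, samples_mu):
        w = sliced_w_p(Xk, Xnu, dirs, p)          # fiberwise distances
        lq = np.mean(w ** q) ** (1.0 / q)         # L^q norm over directions
        total += lam_k * lq ** kappa
    return total

rng = np.random.default_rng(1)
dirs = rng.normal(size=(256, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # approx. sigma_1 on S^1
mus = [rng.normal(c, 1.0, size=(400, 2)) for c in (-1.0, 1.0)]
nu = rng.normal(0.0, 1.0, size=(400, 2))              # candidate barycenter
print(barycenter_functional(mus, [0.5, 0.5], nu, dirs, p=2, q=2, kappa=2))
```

Minimizing such an estimate over candidate ν is how one would approach the problem computationally; that an exact minimizer exists is precisely what the compactness argument below establishes.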
If =0, we see B^p,q,_Λ, M is constant on 𝒫_p(ℝ^n)and the claim holds trivially, thus assume ≠ 0.Since each μ_k∈𝒫_p(^n) and B^p,q,_Λ, M is nonnegative, it has a finite infimum and we may take a minimizing sequence (ν_j)_j∈ℕ⊂𝒫_p(^n); that is lim_j→∞B^p,q,_Λ, M(ν_j)=inf_ν∈𝒫_p(^n)B^p,q,_Λ, M(ν),and moreover we may assume sup_j B^p,q,_Λ, M(ν_j)≤ C<∞ for some C>0. Since λ_1>0, then for any j, using Lemma <ref> and then Hölder's inequality and that p≤ q, we have(∫_^n| x|^pdν_j(x))^1/p =_p^^n(δ_0^^n, ν_j)=M_p, n_p, p(δ_0^^n, ν_j)≤ M_p, n_p, q(δ_0^^n, ν_j)≤ M_p, n( _p, q(δ_0^^n, μ_1)+_p, q(μ_1, ν_j))≤ M_p, n(_p, q(δ_0^^n, μ_1)+(λ_1^-1C)^1/).Since the pth moments of the ν_j are uniformly bounded, we may pass to a subsequence (not relabeled) which converges weakly to some ν_∞∈𝒫_p(^n) as j→∞by Remark <ref>.Then as in (<ref>), we have_p^(^ω_♯μ_k, ^ω_♯ν_∞)≤lim inf_j→∞_p^(^ω_♯μ_k, ^ω_♯ν_j). Thus by Fatou's lemma, we obtainB^p,q,_Λ, M(ν_∞) =∑_k=1^Kλ_k _p^(^∙_♯μ_k, ^∙_♯ν_∞)_L^q(σ_n-1)^≤∑_k=1^Kλ_klim inf_j→∞_p^(^∙_♯μ_k, ^∙_♯ν_j)_L^q(σ_n-1)^≤lim inf_j→∞∑_k=1^Kλ_k _p^(^∙_♯μ_k, ^∙_♯ν_j)_L^q(σ_n-1)^=lim inf_j→∞B^p,q,_Λ, M(ν_j),proving that ν_∞ is an_p, q-barycenter of M.§.§ Existence of disintegrated barycentersNext we prove Theorem <ref> (<ref>), that is, the existence of _p, q-barycenters. Compared to the case of _p, q-barycenters, we lack the continuity need to apply the direct method, hence we must appeal to the dual problem for _p, q to show existence. We will require (Y, _Y) to be locally compact to apply the duality result Theorem <ref> (<ref>), but will need the stronger Heine–Borel property to take advantage of Remark <ref> in order to obtain weak compactness of a minimizing sequence. Note that the Heine–Borel property is strictly stronger than local compactness on a complete, separable metric space: the metric space (, min{1, | x-y|}) has the same topology as the usual Euclidean one on and is complete and locally compact, but the ball of radius 2 is all ofand hence not compact. Again since each 𝔪_k∈𝒫^σ_p, q(Y×Ω) and by nonnegativity, the infimum of 𝔅^p, q, _Λ, 𝔐 is finite. Let (𝔫_j)_j∈ℕbe a minimizing sequence for 𝔅^p, q, _Λ, 𝔐, in particular sup_j 𝔅^p, q, _Λ, 𝔐(𝔫_j)≤ C for some C>0. Fix some ω_0∈Ω and write _ω_0(ω):=_Ω(ω_0, ω).If q<∞, since p≤ q we may use Jensen's inequality in the second inequality below to obtain for some C_κ, p>0(∫_Y×Ω_Y×Ω((y_0, ω_0), (t, ω))^pd𝔫_j(t, ω))^/p≤ C_κ, p[ (∫_Ω_ω_0(ω)^pdσ(ω))^/p+(∫_Ω∫_Y _y_0(t)^pd𝔫^ω_j(t)dσ(ω))^q/p·/q]≤ C_κ, p{(∫_Ω_ω_0(ω)^pdσ(ω))^/p+[∫_Ω(∫_Y_y_0(t)^pd𝔫^ω_j(t))^q/pdσ(ω)]^/q}= C_κ, p[(∫_Ω_ω_0(ω)^pdσ(ω))^/p+_p, q(δ_y_0^Y⊗σ, 𝔫_j)^]≤C_κ, p[(∫_Ω_ω_0(ω)^p dσ(ω))^/p+2^(_p, q( 𝔫_j, 𝔪_1)^+_p, q( δ_y_0^Y⊗σ, 𝔪_1)^)) ≤ C_κ, p[ (∫_Ω_ω_0(ω)^pdσ(ω))^/p+2^(λ^-1C+_p, q( δ_y_0^Y⊗σ, 𝔪_1)^) ].If q=∞, then we can use the trivial inequality(∫_Ω∫_Y _y_0(t)^pd𝔫^ω_j(t)dσ(ω))^/p≤‖∫_Y _y_0(t)^pd𝔫^∙_j(t)‖_L^∞(σ)^/pto obtain a similar bound to above.Since Y×Ω has the Heine–Borel property, again byRemark <ref>we may pass to a subsequence to assume (𝔫_j)_j∈ℕ converges weakly to some 𝔫 in 𝒫^σ(Y×Ω). 
Fix ζ∈𝒴_r', σ and (Φ, Ψ)∈𝒜_p,Y,σ, then since ζΨ∈ C_b(Y×Ω) we have∫_Y×ΩζΨ d𝔫=lim_j→∞∫_Y×ΩζΨ d𝔫_j.Since the Heine–Borel property also implies Y×Ω is locally compact, we may apply Theorem <ref> (<ref>) and combine with (<ref>) above to obtain(-∫_Y×ΩζΦ d(δ_y_0^Y⊗σ)-∫_Y×ΩζΨ d𝔫)^1/p =lim_j→∞(-∫_Y×ΩζΦ d(δ_y_0^Y⊗σ)-∫_Y×ΩζΨ d𝔫_j)^1/p≤lim inf_j→∞_p, q(δ_y_0^Y⊗σ, 𝔫_j)≤ 2(λ_1^-1C+_p, q( δ_y_0^Y⊗σ, 𝔪_1)^)^1/.Thus taking a supremum over ζ∈𝒴_r', σ and (Φ, Ψ)∈𝒜_p,Y,σ and using Theorem <ref> (<ref>) again, we see _p, q( δ_y_0^Y⊗σ, 𝔫)<∞, hence 𝔫∈𝒫^σ_p, q(Y×Ω).Now for each k, fix ζ_k∈𝒴_r', σ and (Φ_k, Ψ_k)∈𝒜_p,Y,σ, then using Theorem <ref> (<ref>) as above, we have∑_k=1^Kλ_k(-∫_Y×Ωζ_kΦ_k d𝔪_k-∫_Y×Ωζ_kΨ_k d𝔫)^/p=lim_j→∞∑_k=1^Kλ_k(-∫_Y×Ωζ_kΦ_k d𝔪_k-∫_Y×Ωζ_kΨ_k d𝔫_j)^/p≤lim inf_j→∞∑_k=1^Kλ_k_p, q(𝔪_k, 𝔫_j)^=lim inf_j→∞𝔅^p, q, _Λ, 𝔐(𝔫_j).Again taking a supremum over ζ_k and (Φ_k, Ψ_k) for each k and using Theorem <ref> (<ref>) gives 𝔅^p, q, _Λ, 𝔐(𝔫)≤lim inf_j→∞𝔅^p, q, _Λ, 𝔐(𝔫_j),that is, 𝔫 minimizes 𝔅^p, q, _Λ, 𝔐.§.§ Duality for disintegrated barycentersWe now work toward duality for _p, q-barycenters, in the spirit of <cit.> in the classical Monge–Kantorovich case with p=2.It is easy to see the space 𝒵_p introduced in Theorem <ref> (<ref>) is a Banach space with respect to ‖·‖_𝒵_p. It turns out that we can replace C_b(Ω; 𝒳_p) by 𝒵_p in the duality formulation for _p, q. For λ∈ (0,1], ξ∈𝒵_p and ω∈Ω, recall thatwe denote by S_λ, pξ_ω the _Y^p-transform of ξ(·,ω) that is,S_λ, pξ_ω(s) :=sup_t∈ Y(-λ_Y(t, s)^p-ξ(t, ω) ) for s∈ Y.By Lemma <ref>, S_λ, pξ_ω is locally bounded and continuous, and belongs to L^1(μ) for any μ in 𝒫_p(Y).Suppose (Y, _Y) and (Ω, _Ω) are locally compact, let 1≤ p<∞, 1≤ q≤∞, and λ∈ (0, 1], and 𝔪, 𝔫∈𝒫_p, q(Y×Ω). Thensup_(ζ, ξ)∈𝒴_r', σ×𝒵_p, η=ζξ∫_Ω(-ζ∫_Y ξ(t,·) d𝔫^∙(t)- ∫_Y S_λ, pξ_∙(s) d𝔪^∙(s) ) dσ =λ_p, q(𝔪, 𝔫)^p.Claim 1. 𝒵_p⊂ C_b(Ω; 𝒳_p). Proof of Claim 1. Fix ξ∈𝒵_p, then it is clear that ξ(·, ω)∈𝒳_p for each ω∈Ω, andsup_ω∈Ω‖ξ(·, ω)‖_𝒳_p=‖ξ‖_𝒵_p<∞.Now for any ε>0 we can find compact sets K_Y⊂ Y and K_Ω⊂Ω such that |ξ(t, ω)|/1+_y_0(t)^p<ε/2for(t, ω)∈ (Y×Ω)∖(K_Y× K_Ω),so in particular for any t∉K_Y, regardless of ω. Suppose (ω_j)_j∈ℕ is a convergent sequence in Ω with limit ω_∞,then since ξ is uniformly continuous on the compact set K_Y×(⋃_j=1^∞{ω_j}∪{ω_∞}),if t∈ K_Y and j is sufficiently large, we have |ξ(t, ω_j)-ξ(t, ω_∞)|<ε. Thus for any t∈ Y, and j large enough1/1+_y_0(t)^p·|ξ(t, ω_j)-ξ(t, ω_∞)|< ε,hence we see(ξ(·, ω_j))_j∈ℕ converges to ξ(·, ω_∞) in 𝒳_p; in particular, ω↦ξ(·, ω) is a continuous map from Ω to 𝒳_p. By this claim combined with Theorem <ref> (<ref>), we seesup_(ζ, ξ)∈𝒴_r', σ×𝒵_p, η=ζξ-∫_Ωζ(∫_Y ξ(t,·) d𝔫^∙(t)+ ∫_Y S_λ, pξ_∙ d𝔪^∙) dσ≤λ_p, q(𝔪, 𝔫)^p.To obtain the opposite inequality, fix ε>0, then again by Theorem <ref> (<ref>) there exists some ζ∈𝒴_r', σ and Φ∈ C_b(Ω; 𝒳_p) such that-∫_Ωζ(∫_Y Φ_∙ d𝔫^∙+ ∫_Y S_λ, pΦ_∙ d𝔪^∙) dσ∈(λ_p, q(𝔪, 𝔫)^p-ε, λ_p, q(𝔪, 𝔫)^p].This shows-ζ(∫_Y Φ_∙ d𝔫^∙+ ∫_Y S_λ, pΦ_∙ d𝔪^∙)∈ L^1(σ),hence we can find a compact set K_ε⊂Ω such that| -∫_Ω∖ K_εζ(∫_Y Φ_∙ d𝔫^∙+ ∫_Y S_λ, pΦ_∙ d𝔪^∙) dσ|<ε.Now define for δ>0ψ_δ(ω): =min{1, δ^-1_Ω(ω,Ω∖ K_ε)}, ξ_δ(t, ω): =ψ_δ(ω)Φ_ω(t)∈ C(Y×Ω).Claim 2. There exists an R>0 such that for all ω∈ K_ε and t∈ Y such that _y_0(t)> R, |Φ_ω(t)|/1+_y_0(t)^p≤ 2^-p-1. Proof of Claim 2. Suppose the claim is false, then there is a sequence ((ω_j, t_j))_j∈⊂ K_ε× Y such that |Φ_ω_j(t_j)|/1+_y_0(t_j)^p≥ 2^-p-1, _y_0(t_j)≥ jfor all j. By compactness of K_ε, we may pass to a subsequence and assume (ω_j)_j∈ℕ converges to some ω_∞∈ K_ε. 
By the continuity of Φ, this implies for all j sufficiently large|Φ_ω_∞(t_j)|/1+_y_0(t_j)^p ≥|Φ_ω_j(t_j)|/1+_y_0(t_j)^p-‖Φ_ω_∞-Φ_ω_j‖_𝒳_p≥ 2^-p-2,contradicting that Φ_ω_∞∈𝒳_p. Now by Lemma <ref> (<ref>) and taking R>0 from Claim 2,|∫_K_εζ(∫_Y (S_λ, pξ_δ)_∙ d𝔪^∙- ∫_Y S_λ, pΦ_∙ d𝔪^∙) dσ|≤|∫_K_εζ‖Φ_∙-(ξ_δ)_∙‖_𝒳_p(∫_Y (1+ max{ R^p, 2^p+1(1+‖Φ‖_C_b(Ω; 𝒳_p))(1+_y_0^p)}) d𝔪^∙) dσ|≤|‖Φ‖_C_b(Ω; 𝒳_p)∫_K_εζ| 1-ψ_δ|(∫_Y (1+ max{ R^p, 2^p+1(1+‖Φ‖_C_b(Ω; 𝒳_p))(1+_y_0^p)}) d𝔪^∙) dσ|≤‖Φ‖_C_b(Ω; 𝒳_p)‖ζ|_{ω∈ K_ε| 0<_Ω(ω, Ω∖ K_ε)<δ}‖_L^r'(σ)×‖∫_Y (1+ max{ R^p, 2^p+1(1+‖Φ‖_C_b(Ω; 𝒳_p))(1+_y_0^p)}) d𝔪^∙dσ‖_L^r(σ).We also find|∫_K_εζ(∫_Y (ξ_δ)_∙ d𝔪^∙- ∫_Y Φ_∙ d𝔪^∙) dσ| ≤‖Φ‖_C_b(Ω; 𝒳_p)‖ζ|_{ω∈ K_ε| 0<_Ω(ω, Ω∖ K_ε)<δ}‖_L^r'(σ),thus if δ>0 is sufficiently small, combining with (<ref>) implies that-∫_Ωζ(∫_Y ξ_δ(t, ·) d𝔫^∙+ ∫_Y (S_λ, pξ_δ)_∙ d𝔪^∙) dσ∈ (λ_p, q(𝔪, 𝔫)^p-3ε, λ_p, q(𝔪, 𝔫)^p].By an argument similar to the one leading to (<ref>), we easily see ξ_δ∈𝒵_p, then since ε is arbitrary, the proof is finished.It is well known that if Y is a locally compact Hausdorff space, elements of the dual of C_0(Y) equipped with the supremum norm can be identified with integration against elements of ℳ(Y), the space of (signed) Radon measures on Y (<cit.>*Theorem 7.17), moreover the total variation norm is equal to the operator norm. Then we can see 𝒵_p^* ={𝔪∈ℳ(Y×Ω)| (1+(_y_0∘π_Y)^p)𝔪∈ℳ(Y×Ω)}, 𝒳_p^* ={μ∈ℳ(Y)| (1+_y_0^p)μ∈ℳ(Y)},which are normed spaces.Let 𝔪∈𝒫^σ(Y×Ω) with 1≤ p<∞ and λ∈ (0,1]. For η∈𝒵_p, define H_λ, r, 𝔪(η):= inf{∫_Ωζ(ω)(∫_Y S_λ,pξ_ω d 𝔪^ω) dσ(ω) | (ζ, ξ)∈𝒴_r', σ×𝒵_p, η=ζξ}. For 𝔪∈𝒫^σ_p,1(Y×Ω) with 1≤ p<∞and λ∈ (0,1], H_λ, r, 𝔪 is proper and convex on 𝒵_p.We first prove that H_λ, r, 𝔪 is proper. SinceH_λ, r, 𝔪(0)≤ 0we see H_λ, r, 𝔪 is not identically ∞.Also, for any ξ∈𝒵_p and ζ∈𝒴_r', σ we have η=ζξ∈𝒵_p and∫_Ωζ(ω)∫_YS_λ, pξ_ω(s)d𝔪^ω(s)dσ(ω) ≥∫_Ωζ(ω)∫_Y(-ξ(s, ω))d𝔪^ω(s)dσ(ω)=-∫_Y×Ωη d𝔪≥ -‖η‖_𝒵_p∫_Y×Ω(1+_y_0(t)^p) d𝔪(t, ω)=-‖η‖_𝒵_p(1+_p, 1(δ_y_0^Y⊗σ, 𝔪))>-∞,hence H_λ, r, 𝔪 is proper.Next we show H_λ, r, 𝔪 is convex. Recall that we write r=q/p≥ 1 and r' for the Hölder conjugate of r.Fix η_0, η_1 ∈𝒵_p,and for i=0,1, let (ζ_i, ξ_i)∈𝒴_r', σ×𝒵_p satisfy η_i=ζ_iξ_i.For τ∈ (0,1), let ζ:=(1-τ) ζ_0+τζ_1,ξ:=1/ζ· [(1-τ)ζ_0ξ_0+ τζ_1ξ_1],then (1-τ)η_0 +τη_1=ζξ.Moreover, it is clear that ζ∈𝒴_r', σ and ξ∈ C(Y×Ω). Since|ξ| =|(1-τ)ζ_0ξ_0+τζ_1ξ_1/(1-τ)ζ_0+τζ_1|≤max{|ξ_0|, |ξ_1|}we have ξ∈𝒵_p as well. This yields H_λ, r, 𝔪((1-τ)η_0+τη_1)≤∫_Ωζ(∫_Y S_λ,pξ_∙ d 𝔪^∙) dσ = ∫_Ω∫_Ysup_t∈ Y(-λ_Y(t, s)^pζ-ξ(t, ·)ζ)d 𝔪^∙(s) dσ =∫_Ω∫_Y sup_t∈ Y{-λ_Y(t, s)^p[(1-τ)ζ_0+τζ_1] - [ (1-τ)ζ_0ξ_0(·,t)+ τζ_1ξ_1(·,t)]}d 𝔪^∙(s) dσ ≤(1-τ)∫_Ωζ_0∫_Ysup_t∈ Y(-λ_Y(t, s)^p-ξ_0(·,t) )d 𝔪^∙(s) dσ +τ∫_Ωζ_1 ∫_Ysup_t∈ Y(-λ_Y(t, s)^p-ξ_1(·,t) )d 𝔪^∙(s) dσ= (1-τ) ∫_Ωζ_0(∫_Y (S_λ,pξ_0)_∙ d 𝔪^∙) dσ+τ∫_Ωζ_1(∫_Y (S_λ,pξ_1)_∙ d 𝔪^∙) dσ.Taking an infimum over admissible ζ_i, ξ_i proves the convexity of H_λ, r, 𝔪. For 𝔫∈𝒵_p^∗, recall the Legendre–Fenchel transform of H_λ, r, 𝔪 is defined byH_λ, r, 𝔪^∗(𝔫) := sup_η∈𝒵_p(∫_Y×Ωη d𝔫-H_λ, r, 𝔪(η)).Let 𝔪∈𝒫^σ_p,q(Y×Ω) with 1≤ p<∞ and λ∈ (0,1].If (Ω, _Ω) and (Y, _Y) are locally compact, for 𝔫∈𝒵_p^∗, we haveH_λ, r, 𝔪^∗(-𝔫) :=λ_p,q(𝔪, 𝔫)^p,if 𝔫∈𝒫^σ_p, q(Y×Ω), ∞,else.First suppose 𝔫∈𝒫^σ(Y×Ω),then by Lemma <ref>,H_λ, r, 𝔪^∗(-𝔫) =sup_η∈𝒵_p(-∫_Y×Ωη d𝔫-H_λ, r, 𝔪(η)) =sup_η∈𝒵_psup_(ζ, ξ)∈𝒴_r', σ×𝒵_p, η=ζξ∫_Ω(-∫_Y η(t,·) d𝔫^∙(t)- ζ∫_Y S_λ, pξ_∙(s) d𝔪^∙(s) )dσ =sup_(ζ, ξ)∈𝒴_r', σ×𝒵_p, η=ζξ[-∫_Ωζ(∫_Y ξ(t,·) d𝔫^∙(t)+ ∫_Y S_λ, pξ_∙(s) d𝔪^∙(s) ) dσ] = λ_p, q(𝔪, 𝔫)^p,note that since 𝔪∈𝒫^σ_p, q(Y×Ω), we have _p, q(𝔪, 𝔫)=∞ if 𝔫∉𝒫^σ_p, q(Y×Ω). 
We now handle the case of 𝔫∉𝒫^σ(Y×Ω). First suppose𝔫∈𝒵^∗_p and the Ω marginal of 𝔫 is not σ. In this case, there exists some ϕ∈ C_b(Ω) such that ∫_Ωϕ dσ≠∫_Y×Ωϕ(ω) d𝔫(t, ω).If C∈ and η_Cϕ∈𝒵_p is defined by η_Cϕ(t, ω):=-Cϕ(ω),then (S_λ, pη_C, ϕ)_ω≡ Cϕ. Since we can decompose η_C, ϕ=ζξ where ζ≡ 1 and ξ=η_C, ϕ we calculateH^*_λ, r, 𝔪(-𝔫) ≥sup_C∈(-∫_Y×Ωη_Cϕ d𝔫- ∫_Ω∫_Y (S_λ, pη_Cϕ)_ω d𝔪^ω dσ(ω)) = sup_C∈ℝ C (∫_Y×Ωϕ(ω) d𝔫(t, ω)-∫_Ωϕ dσ)=∞.Now suppose 𝔫∈𝒵^∗_p is not nonnegative.Here, 𝔫 is said to be nonnegative if𝔫(E)≥0 for any measurable set E⊂ Y×Ω, hence there exists someη∈𝒵_p such that η≥ 0 everywhere and-∫_Y×Ωη d 𝔫 >0.Then it is clear from the definition that -(S_λ, p(Cη))_ω≥ 0 on Y for any constant C>0, hence we can again calculateH_λ, r, 𝔪^*(-𝔫) ≥sup_C>0( -∫_Y×Ω Cη d 𝔫- ∫_Ω∫_Y (S_λ, p(Cη))_ω d𝔪^ω dσ(ω) ) ≥sup_C>0(-C∫_Y×Ωη d 𝔫)=∞.We are now ready to prove our duality result for _p, q-barycenters. Let 𝔫∈𝒫^σ_p,q(Y×Ω) and (η_k)_k=1^K a collection in 𝒵_p such that ∑_k=1^K η_k≡ 0.For each k fix (ζ_k, ξ_k)∈𝒴_r',σ×𝒵_p such that η_k=ζ_kξ_k. By the choice of 𝔫the pth moment of 𝔫^ω is finite for σ-a.e. ω,thus for such ω,_p^Y(𝔪_k^ω, 𝔫^ω)<∞ for all kand |∫_Yξ_k(t, ω)d𝔫^ω(t)| ≤‖ξ_k‖_𝒵_p∫_Y(1+_y_0(t)^p)d𝔫^ω(t)<∞.Then for such ω∈Ω, s, t∈ Y,and 1≤ k ≤ K,we can first integrate the inequalityλ_k_Y(t, s)^p ≥ -ξ_k(t, ω)-(S_λ_k,pξ_k)_ω (s)for each fixed ω, in (t, s) against a p-optimal coupling between 𝔫^ω and 𝔪_k^ω,then multiply by ζ_k and integrate in ω against σ to obtainλ_k _p,q(𝔫,𝔪_k)^p≥λ_k∫_Ωζ_k(ω)_p^Y(𝔪_k^ω, 𝔫^ω)^pdσ(ω) ≥-∫_Ωζ_k(ω)∫_Y (S_λ_k,pξ_k)_ωd𝔪_k^ω dσ(ω)- ∫_Ωζ_k(ω)∫_Y ξ_k(s, ω)d𝔫^ω(s) dσ(ω) =-∫_Ωζ_k(ω)∫_Y (S_λ_k,pξ_k)_ωd𝔪_k^ω dσ(ω)-∫_Y×Ωη_k d𝔫.Since ∑_k=1^K η_k≡ 0,taking a supremum over all such pairs (ζ_k, ξ_k), and then summing over 1≤ k ≤ K in the above inequality gives∑_k=1^Kλ_k _p,q(𝔪_k,𝔫)^p ≥-∑_k=1^K H_λ_k, r, 𝔪_k(η_k)-∫_Y×Ω∑_k=1^Kη_k d𝔫=-∑_k=1^K H_λ_k, r, 𝔪_k(η_k). Thus it follows thatinf_𝔫∈𝒫^σ_p,q(Y×Ω)𝔅^p, q, p_Λ, 𝔐(𝔫)≥sup{-∑_k=1^K H_λ_k, r, 𝔪_k(η_k) |∑_k=1^K η_k≡ 0, η_k ∈𝒵_p}.Let us now show the reverse inequality.It follows from Proposition <ref> thatinf_𝔫∈𝒫^σ_p,q(Y×Ω)𝔅^p, q, p_Λ,𝔐(𝔫) = inf_𝔫∈𝒫^σ_p,q(Y×Ω)∑_k=1^Kλ_k _p,q(𝔪_k, 𝔫)^p = inf_𝔫∈𝒫^σ_p,q(Y×Ω)∑_k=1^K H_λ_k, r, 𝔪_k^∗(-𝔫). Define the function H on 𝒵_p as the infimal convolution of H_λ_1, r, 𝔪_1,…, H_λ_K, r, 𝔪_K, that is, defined forη∈𝒵_p by H(η) := inf{∑_k=1^K H_λ_k, r, 𝔪_k(η_k) |∑_k=1^K η_k≡η, η_k ∈𝒵_p }. Then (<ref>) impliesinf_𝔫∈𝒫^σ_p,q(Y×Ω)𝔅^p, q, p_Λ, 𝔐(𝔫) ≥ -H(0). Note that H is convexsince each of H_λ_1, r, 𝔪_1,…, H_λ_K, r, 𝔪_K is proper and convex byLemma <ref>,and then by <cit.>*Lemma 4.4.15 the Legendre–Fenchel transform of H satisfies H^∗(𝔫) =∑_k=1^K H_λ_k, r, 𝔪_k^∗(𝔫) for 𝔫∈𝒵_p^∗.Let 𝒵^∗∗_p be the dual of 𝒵^∗_pand regard 𝒵_p as a subset of 𝒵^∗∗_p under the natural isometric embedding.For 𝔣∈𝒵^∗∗_p,the Legendre–Fenchel transform of H^∗ on 𝒵^∗∗_pis given byH^∗∗(𝔣) := sup_𝔫∈𝒵_p^∗( 𝔣(𝔫 ) -H^∗(𝔫) ). Then we observe from Proposition <ref> and (<ref>) that -H^∗∗(0) =inf_𝔫∈𝒵^∗_pH^∗(-𝔫)=inf_𝔫∈𝒵^∗_p∑_k=1^K H_λ_k, r, 𝔪_k^∗(-𝔫)= inf_𝔫∈𝒫^σ_p,q(Y×Ω)∑_k=1^K λ_k _p,q( 𝔪_k,𝔫)^p =inf_𝔫∈𝒫^σ_p,q(Y×Ω)𝔅^p, q, p_Λ, 𝔐(𝔫). Thus by (<ref>) and (<ref>) it is enough to show H^∗∗(0)=H(0). To this end, first note by Proposition <ref> combined with (<ref>) we seeH^*(-δ^Y_y_0⊗σ)=∑_k=1^K λ_k _p, q(𝔪_k, δ^Y_y_0⊗σ)<∞.Thus since its Legendre–Fenchel transform is not identically ∞, we see H never takes the value -∞. At the same time using (<ref>),H(0) ≤∑_k=1^K H_λ_k, r, 𝔪_k(0)≤ 0,hence H is not identically ∞, in particular it is proper. 
Now without loss, we may assume that each λ_k is strictly positive (otherwise we may simply consider an _p, q-barycenter with the corresponding 𝔪_k removed from 𝔐).Suppose η∈𝒵_p with‖η‖_𝒵_p≤ 2^1-p· K·min_1 ≤ k≤ Kλ_k. Then, using that_y_0(t)^p ≤ 2^p-1 (_Y(t, s)^p+_y_0(s)^p) to obtain the final inequality below,H(η)≤∑_k=1^KH_λ_k, r, 𝔪_k(K^-1η)≤∑_k=1^K∫_Ω∫_Y (S_λ_k, p(K^-1η))_ω d𝔪_k^ω dσ(ω)≤∑_k=1^K∫_Ω∫_Ysup_t∈ Y(-λ_k_Y(t, s)^p +K^-1‖η‖_𝒵_p(1+_y_0(t)^p)) d𝔪_k^ω(s) dσ(ω)≤∑_k=1^Kλ_k∫_Ω∫_Y sup_t∈ Y(-_Y(t, s)^p+2^1-p(1+_y_0(t)^p)) d𝔪_k^ω(s) dσ(ω)≤∑_k=1^Kλ_k∫_Ω∫_Y (2^1-p+_y_0(s)^p) d𝔪_k^ω(s) dσ(ω)<∞,proving that H is bounded from above in a neighborhood of 0. Thus by <cit.>*Proposition 4.1.4 and Proposition 4.4.2 (a), we obtain H^∗∗(0)=H(0), finishing the proof. §.§ Uniqueness of disintegrated barycentersIn this final subsection, we prove _p, q-barycenters are unique under some absolute continuity conditions, when p>1 and q<∞. We start by noting that in the case q=∞, it is possible to construct many examples where _p, ∞-barycenters are non-unique; this is even in the case on ^n where 𝔪_k^ω is absolutely continuous for all k and ω.Let 1< p<∞ (the case p=1 may have nonuniqueness for other reasons, see Example <ref> below), make the same assumptions as in Theorem <ref> (<ref>), and also assume (Y, _Y) is a geodesic space.Also take two distinct elements μ_0,μ_1∈𝒫_p(Y),and assume there exists a measurable E⊂Ω with 0<σ(E)<1,and define 𝔪_k:=𝔪_k^ω⊗σ∈𝒫^σ_p, ∞(Y×Ω) by𝔪_k^ω: = μ_0, 1≤ k ≤ K-1, 1_E(ω)μ_0 +1_Ω∖ E(ω)μ_1, k=K.For 𝔫∈𝒫_p,∞^σ(Y×Ω), we calculate_p,∞(𝔪_k,𝔫) =_ω∈ E_p^Y(μ_0, 𝔫^ω),1≤ k≤ K-1,max{_ω∈ E_p^Y(μ_0, 𝔫^ω), _ω∉ E_p^Y(μ_1, 𝔫^ω) },k=K,consequently, 𝔅_Λ, 𝔐^p, ∞, p(𝔫)=∑_k=1^K λ_k _p,∞(𝔪_k,𝔫)^p=(1-λ_K)_ω∈Ω_p^Y(μ_0, 𝔫^ω)^p+λ_Kmax{_ω∈ E_p^Y(μ_0, 𝔫^ω), _ω∉ E_p^Y(μ_1, 𝔫^ω) }=max{_ω∈ E_p^Y(μ_0, 𝔫^ω)^p, _ω∉ E[ (1-λ_K) _p^Y(μ_0, 𝔫^ω)^p +λ_K_p^Y(μ_1, 𝔫^ω)^p ] }for Λ∈Λ_Kand 𝔐:=(𝔪_k)_k=1^K.Let μ(·):[0,1]→𝒫_p(Y) be a minimal geodesic from μ_0 to μ_1,theninf_ν∈𝒫_p(Y)[(1-λ_K)_p^Y(μ_0, ν)^p+λ_K_p^Y(μ_1, ν)^p] =ℬ_(1-λ_K, λ_K), (μ_0,μ_1)^p,p(μ(λ_K)),which ensures thatwhenever μ∈𝒫_p(Y) with_p^Y(μ_0,μ)^p ≤ℬ_(1-λ_K, λ_K), (μ_0,μ_1)^p,p(μ(λ_K)) the measure( 1_E(ω)μ)+1_Ω∖ Eμ(λ_K) )⊗σis a minimizer of 𝔅^p, ∞, p_Λ, 𝔐. Thus since λ_K≠ 0, 1, this yields infinitely many possible minimizers. Also, we can see that _1, q-barycenters may not be unique due to nonuniqueness of _1^Y-barycenters.Let M=(μ_k)_k=1^K in 𝒫_p(Y) to be determinedand define 𝔐=(𝔪_k)_k=1^K in 𝒫^σ_p, q(Y×Ω) by 𝔪_k=:μ_k⊗σ for all ω∈Ω.For Λ∈Λ_K, byconvexity of the L^q(σ)-norm, for any 𝔫∈𝒫^σ_p, q(Y×Ω) we have∑_k=1^Kλ_k_p, q(𝔪_k, 𝔫)≥‖λ_k∑_k=1^K_p^Y(μ_k, 𝔫^∙)‖_L^q(σ),hence any measure of the form 𝔫:=ν_0⊗σ where ν_0∈𝒫_p(Y) is a minimizer ofℬ^p,p_Λ, M (i.e., an _p^Y-barycenter ) will be a _p, q-barycenter, hence if (μ_k)_k=1^K can be chosen in a way that there exist nonunique _p^Y-barycenters, this will lead to nonuniqueness of _p, q-barycenters as well.For p=1, it is strongly suspected that such configurations yielding nonunique barycenters exist for various Λ, we give such an example in the case of Y= with the measures μ_k absolutely continuous, and λ_k≡ K^-1 where K is even, which incidentally, relies on our duality result Corollary <ref>. 
Define ν_0:=ℋ^1|_[-2,-1],ν_1:=ℋ^1|_[1,2],μ_k:= ν_0,k even, ν_1,k odd.Then we calculate𝔅^1, q, 1_Λ, 𝔐(ν_0⊗σ)=1/K∑_k=1^K_1^(μ_k, ν_0) ≤1/K∑_k odd∫_1^2| t-(t-3)| dt=3/2, 𝔅^1, q, 1_Λ, 𝔐(ν_1⊗σ)=1/K∑_k=1^K_1^(μ_k, ν_1) ≤1/K∑_k even∫_-2^-1| t-(t+3)| dt=3/2.Now define ϕ: → byϕ(t):=-4-t, -4≤ t< -2,t, -2≤ t< 2,4-t, 2≤ t≤ 4,0, else.Since ϕ is 1-Lipschitz, it is classical that ϕ^_=-ϕ, then if we defineϕ_k(t): =-ϕ(t)/K, k even, ϕ(t)/K, k odd,we have∑_k=1^Kϕ_k≡ 0,-∑_k=1^K∫_ϕ_k^λ_k_ dμ_k=-∑_k even∫_-2^-1ϕ(t)/Kdt+∑_k odd∫_1^2ϕ(t)/Kdt =1/2(∫_1^2tdt-∫_-2^-1tdt) =3/2.By Corollary <ref> (<ref>) we see that both ν_0 and ν_1 are _1^-barycenters. For the remainder of the section Y will be a complete, connected Riemannian manifold, possibly with boundary,and _Y (resp._Y) will be the Riemannian distance function (resp. volume measure).We will also write_1(y): =min{1, sup{r>0|exp_y is a diffeomorphism onB_r^T_y(Y∖∂ Y)(0)}} for y∈ Y∖∂ Y, (K): =inf_y∈ K_1(y) for any compact K⊂ Y∖∂ Y,and B_r^Y(y) for the closed ball of radius r centered at y. Although Y∖∂ Y may not be complete, by <cit.>*Lemmas 10.90 and 10.91, we have (K)>0 for any compact K⊂ Y∖∂ Y. First we show a very simple lemma on covering boundaries of geodesic balls.For any compact set K⊂ Y∖∂ Y and 0<r<(K)/2, there exists an N∈ depending only on K and r such that for any y∈ K, there exists a set of points {y_i}_i=1^N∈B^Y_5r/4(y)∖ B^Y_3r/4(y) such that {B^Y_r/2(y_i)}_i=1^N is a covering of ∂ B^Y_r(y). Suppose the lemma does not hold, then there is a sequence (ỹ_j)_j∈⊂ K such that no collection of j or fewer open balls of radius r/2 with centers in B^Y_5r/4(ỹ_j)∖ B^Y_3r/4(ỹ_j) is a covering of ∂ B^Y_r(ỹ_j). By compactness of K, we may pass to a subsequence to assume (ỹ_j)_j∈ℕ converges to some ỹ_∞∈ K.Now, also by compactness, for some N∈ there is a covering {B^Y_r/2(y_i)}_i=1^N of B^Y_9r/8(ỹ_∞)∖ B^Y_7r/8(ỹ_∞) with y_i∈B^Y_9r/8(ỹ_∞)∖ B^Y_7r/8(ỹ_∞) for 1≤ i≤ N.For j>N satisfying _Y(ỹ_j, ỹ_∞)<r/8, since r<(K)/2 and ỹ_j∈ K, we see that y∈∂ B^Y_r(ỹ_j) implies _Y(ỹ_j, y)=r. Then by the triangle inequality we have ∂ B^Y_r(ỹ_j)⊂B^Y_9r/8(ỹ_∞)∖ B^Y_7r/8(ỹ_∞) while each y_i∈B^Y_5r/4(ỹ_j)∖ B^Y_3r/4(ỹ_j), a contradiction.It is well known that local boundedness for a λ_Y^p-convex function translates to a Lipschitz bound. To show convergence of a maximizing sequence in the disintegrated barycenter dual problem from Theorem <ref> (<ref>), we will need to consider sequences of averages constructed from the maximizing sequence. When p=2, the average of _^n^2-transforms is also a _^n^2-transform, but this does not hold for p≠ 2 or on more general manifolds Y. Thus in the next lemma, we will prove that under certain conditions, local Lipschitzness of the average of _Y^p-transforms also follows from boundedness. The following lemma is stated in more generality than will be needed later. Fix λ∈ (0, 1], R>0, and suppose(g_m)_m∈ is a sequence such that the functions f_m:=g_m^λ_Y^p are bounded uniformly in m∈ℕ on B^Y_R(y_0). 
If there exists an increasing sequence (M_ℓ)_ℓ∈⊂, and λ_m, ℓ≥ 0 for each ℓ∈ and 1≤ m≤ M_ℓ, and C_1, C_2>0 such thatsup_ℓ∈1/M_ℓ∑_m=1^M_ℓλ_m, ℓ≤ C_1,sup_t∈B^Y_R(y_0)1/M_ℓ∑_m=1^M_ℓλ_m, ℓ| f_m(t)|≤ C_2, then the sequence(1/M_ℓ∑_m=1^M_ℓλ_m, ℓf_m)_ℓ∈ is uniformly Lipschitz on {y∈B^Y_R/2(y_0)|_Y(y, ∂ Y)≥ 2R^-1}.We can assume that λ=1 as g_m^λ_Y^p= λ (λ^-1 g_m)^_Y^p.Since the result follows from <cit.>*Proposition 3.1 when p=1, assume 1<p<∞.Let N∈ be from applying Lemma <ref> with the choice ofthe setK:={y∈B^Y_R(y_0)|_Y(y, ∂ Y)≥ 2R^-1},compact in Y∖∂ Y, andr=min{(K), R}/2.Now let us write B_R/2:={y∈B^Y_R/2(y_0)|_Y(y, ∂ Y)≥ 2R^-1}. Fix t ∈B_R/2 and ε>0, then since each f_m is finite on B^Y_R(y_0), for each m there exists s_m, t∈ Y such that f_m(t)≤ - _Y(t, s_m, t)^p-g_m(s_m, t)+ε. Then for any t'∈ Y,we havef_m(t')+ε ≥ - _Y(t', s_m, t)^p-g_m(s_m, t)+ε≥ - _Y(t', s_m, t)^p+ _Y(t, s_m, t)^p+f_m(t)≥ p _Y(t', s_m, t)^p-1(_Y(t, s_m, t)-_Y(t', s_m, t))+f_m(t).Now let {B^Y_r/2(y_i)}_i=1^N be a cover of ∂ B^Y_r(t) where each y_i∈B^Y_5r/4(t)∖ B^Y_3r/4(t), which exists from the application of Lemma <ref> above. By completeness and connectedness, there is at least one minimal, unit speed geodesic γ_m, t from t to s_m, t. For 1≤ i≤ N, defineB_i: =B^Y_r/2(y_1), i=1,B^Y_r/2(y_i) ∖⋃_i'=1^i-1B^Y_r/2(y_i'), 2≤ i≤ N, I_i: ={m∈|γ_m, t(r)∈ B_i and s_m, t∉B_2r^Y(t)}.Then for m∈ I_i, using that r<(K)/2 and t∈ K we have_Y(t, s_m, t)-_Y(y_i, s_m, t)≥_Y(t, s_m, t)-_Y(γ_m, t(r), s_m, t)-_Y(γ_m, t(r), y_i)≥_Y(t, γ_m, t(r))-r/2= r/2.Using this we can calculate for each 1≤ i≤ N, by taking t'=y_i in (<ref>) and noting that each y_i∈B^Y_R(y_0), C_2+ε ≥1/M_ℓ∑_m=1^M_ℓλ_m, ℓ| f_m(y_i)|+ε≥1/M_ℓ∑_1≤ m ≤ M_ℓ,m∈ I_iλ_m, ℓ[ p _Y(y_i, s_m, t)^p-1(_Y(t, s_m, t)-_Y(y_i, s_m, t))+f_m(t)]≥pr/2M_ℓ∑_1≤ m ≤ M_ℓ,m∈ I_iλ_m, ℓ_Y(y_i, s_m, t)^p-1-1/M_ℓ∑_m=1^M_ℓλ_m, ℓ| f_m(t)|≥pr/2M_ℓ∑_1≤ m ≤ M_ℓ,m∈ I_iλ_m, ℓ[ 2^-p+1_Y(t”, s_m, t)^p-1-_Y(t”, y_i)^p-1]-C_2≥pr/2M_ℓ∑_1≤ m ≤ M_ℓ,m∈ I_iλ_m, ℓ[ 2^-p+1_Y(t”, s_m, t)^p-1-(2R)^p-1]-C_2for any t”∈B_R/2. Hence, for t_1, t_2∈B_R/2, we find1/M_ℓ∑_m=1^M_ℓλ_m, ℓf_m(t_1)-1/M_ℓ∑_m=1^M_ℓλ_m, ℓf_m(t_2)≤1/M_ℓ∑_m=1^M_ℓλ_m, ℓ( _Y(t_2, s_m, t_1)^p- _Y(t_1, s_m, t_1)^p+ε)≤p/M_ℓ∑_m=1^M_ℓλ_m, ℓmax{_Y(t_1, s_m, t_1)^p-1, _Y(t_2, s_m, t_1)^p-1}|_Y(t_2, s_m, t_1)-_Y(t_1, s_m, t_1)|+ε C_1≤2^p/r(2C_2+ε+2^p-2prR^p-1)_Y(t_1, t_2)+ε C_1,thus taking ε→ 0 and then reversing the roles of t_1 and t_2 yields the uniform Lipschitz bound on B_R/2.The above lemma also immediately gives an analogue of <cit.>*Corollary C.5 which we will have use for later.Fix λ∈ (0, 1] and suppose R>0.For a function g on Y,iff:=g^λ_Y^p is bounded on B_R^Y(y_0), then it is uniformly Lipschitz on {y∈B^Y_R/2(y_0)|_Y(y, ∂ Y)≥ 2R^-1}. Simply apply Lemma <ref> with f_m≡ f and λ_m, ℓ≡ 1. For 1≤ k ≤ K and m∈ℕ, let (ζ_k, m, ξ̂_k, m)_k=1^K ∈ (𝒴_r', σ×𝒵_p)^K satisfy ∑_k=1^K ζ_k, mξ̂_k, m =0, -∑_k=1^K∫_Ωζ_k, m(∫_Y (S_λ_k,pξ̂_k, m)_∙ d 𝔪_k^∙) dσ ≤-∑_k=1^K∫_Ωζ_k, m+1(∫_Y (S_λ_k,pξ̂_k, m+1)_∙ d 𝔪_k^∙) dσ,inf_𝒫^σ_p, q(Y×Ω)𝔅_Λ, 𝔐^p, q, p, which is the value of the supremum for the dual problem in Theorem <ref> (<ref>).Defineξ̃_k, m:= S_λ_k, p(S_λ_k, pξ̂_k, m), 1≤ k≤ K-1, -1/ζ_K, m∑_i=1^K-1ζ_i, mξ̃_i, m, k=K,then ∑_k=1^K ζ_k, mξ̃_k, m≡ 0.For 1≤ k≤ K-1, it is classical thatS_λ_k, pξ̃_k, m=S_λ_k, p(S_λ_k, p(S_λ_k, pξ̂_k, m))=S_λ_k, pξ̂_k, m,ξ̂_k, m≥ξ̃_k, m≥ -S_λ_k, pξ̂_k, m.This yields ξ̃_K, m =-1/ζ_K, m∑_k=1^K-1ζ_k, mξ̃_k, m≥ -1/ζ_K, m∑_k=1^K-1ζ_k, mξ̂_k, m=ξ̂_K, m,hence -S_λ_K, pξ̃_K, m≥ -S_λ_K, pξ̂_K, m. 
For 1≤ k≤ K-1, since(<ref>) holdsand ξ̂_k, m∈𝒵_p, by Lemma <ref> we see ξ̃_k, m(·, ω) is bounded on compact sets when ω∈Ω is fixed.Thus by Corollary <ref>, we have that ξ̃_k, m(·, ω) is continuous on Y for all 1≤ k≤ K-1, this also implies ξ̃_K, m(·, ω) is continuous on Y. Finally, for 1≤ k≤ K, let ξ_k, m(t, ω): =ξ̃_k, m(t, ω)-ξ̃_k, m(y_0, ω), η_k, m:=ζ_k, mξ_k, m,thenξ_k, m(y_0, ω)=η_k, m(y_0, ω)=0 for all k, m, and ωand ∑_k=1^K η_k, m≡ 0for all m. Since-∑_k=1^K∫_Ωζ_k, m(∫_Y (S_λ_k,pξ_k, m)_∙ d 𝔪^∙)dσ=-∑_k=1^K∫_Ωζ_k, m∫_Y [(S_λ_k,pξ̃_k, m)_∙+ξ̃_k, m(y_0, ·) ]d 𝔪^∙ dσ=-∑_k=1^K∫_Ωζ_k, m∫_Y (S_λ_k,pξ̃_k, m)_∙ d 𝔪^∙ dσ,we see thatlim sup_m→∞(-∑_k=1^K∫_Ωζ_k, m(∫_Y (S_λ_k,pξ_k, m)_∙ d 𝔪_k^∙) dσ)≥inf_𝒫^σ_p, q(Y×Ω)𝔅_Λ, 𝔐^p, q, p.If p<q, then we have 1<r'<∞ hence L^r'(σ) is reflexive. Since (ζ_k, m)_m∈ is bounded in L^r'(σ) for each 1≤ k≤ K, we then can pass to a subsequence which can be assumed to converge weakly in L^r'(σ) to some ζ_k. If p=q, then by Remark <ref> we may assume that each ζ_k, m≡ 1, hence we still have weak convergence, this time to ζ_k≡ 1.Claim 1. There exists a set 𝒩⊂Ω with σ(𝒩)=0, and for each 1≤ k≤ K, subsequences of (η_k, m)_m∈,(ζ_k, m)_m∈ (not relabeled), and a function η_k: Y×Ω→ such that η_k(·, ω)∈ C(Y) for σ-a.e. ω, where writingη_k, M^(t, ω): =1/M∑_m=1^Mη_k, m(t, ω)ζ^_k, M(ω):=1/M∑_m=1^Mζ_k, m(ω),we havelim_M→∞η_k, M^(t, ω) =η_k(t, ω),for all(t, ω)∈ Y× (Ω∖𝒩), lim_M→∞ζ^_k, M(ω) = ζ_k(ω),for all ω∈Ω∖𝒩.Moreover, the convergence of η_k, M^(·, ω) to η_k(·, ω) is uniform on the setsB_j:={y∈B^Y_j(y_0)|_Y(y, ∂ Y)≥ 2j^-1},for each j∈ and ω∈Ω∖𝒩, ∑_k=1^Kη_k ≡ 0,and ζ^_k, M converges to ζ_k in L^r'(σ) for each 1≤ k≤ K. Proof of Claim 1. For any 1≤ k≤ K, m∈, and fixed ω∈Ω,-(S_λ_k, pξ_k, m)_ω(s) =inf_t∈ Y(λ_k _Y(t, s)^p+ξ_k, m(t, ω))≤λ_k _Y(y_0, s)^p+ξ_k, m(y_0, ω)=λ_k_y_0(s)^p.Also by (<ref>)we may assume there is a constant C such that inf_m∈(-∑_k=1^K∫_Ωζ_k, m(ω)∫_Y (S_λ_k, pξ_k, m)_ω d𝔪^ω_kdσ(ω))≥ C,thus for any 1≤ k≤ K,-∫_Ωζ_k, m(ω)∫_Y (S_λ_k, pξ_k, m)_ω d𝔪^ω_kdσ(ω)≥ C -∑_i≠ k(-∫_Ωζ_i, m(ω)∫_Y (S_λ_i, pξ_i, m)_ω d𝔪^ω_i dσ(ω))≥ C-∑_i≠ kλ_i(∫_Ωζ_i, m(ω)∫_Y _y_0(s)^p d𝔪^ω_i(s) dσ(ω))≥ C-∑_i≠ kλ_i‖ζ_i, m‖_L^r'(σ)‖∫_Y _y_0(s)^p d𝔪^∙_i(s)‖_L^r(σ)≥ C-∑_i=1^Kλ_i_p, q(δ_y_0^Y⊗σ, 𝔪_i)^p>-∞,where the last line follows from ‖ζ_k, m‖_L^r'(σ)≤ 1.For fixed ω∈Ω we can integrate the inequality η_k, m(t, ω)≥ -ζ_k, m(ω)(S_λ_k, pξ_k, m)_ω(s)-λ_kζ_k, m(ω)_Y(t, s)^p≥-ζ_k, m(ω)(S_λ_k, pξ_k, m)_ω(s)-2^p-1ζ_k, m(ω)(_y_0(s)^p+_y_0(t)^p) with respect to 𝔫^ω⊗𝔪_k^ω(t, s) with 𝔫∈𝒫^σ_p, q(Y×Ω), then integrate against σ with respect to ω, and using that each 𝔫^ω is nonnegative and has total mass one we thus obtain ∫_Y×Ωη_k, md𝔫 ≥ -∫_Ωζ_k, m∫_Y (S_λ_k, pξ_k, m)_∙ d𝔪^∙_kdσ -2^p-1(∫_Ωζ_k, m∫_Y_y_0(s)^pd𝔪_k^∙(s)dσ+∫_Ωζ_k, m∫_Y_y_0(t)^pd𝔫^∙(t)dσ) ≥ -∫_Ωζ_k, m∫_Y (S_λ_k, pξ_k, m)_∙ d𝔪^∙_kdσ-2^p-1‖ζ_k, m‖_L^r'(σ)(‖∫_Y _y_0(s)^p d𝔪^∙_k(s)‖_L^r(σ)+‖∫_Y_y_0(t)^pd𝔫^∙(t)‖_L^r(σ)) ≥ C-∑_i=1^Kλ_i_p, q(δ_y_0^Y⊗σ, 𝔪_i)^p-2^p-1(max_1≤ i ≤ K_p,q(δ_y_0^Y⊗σ, 𝔪_i)^p+_p,q(δ_y_0^Y⊗σ, 𝔫)^p ).Thus by (<ref>),there exists a constant C_p,q,K(𝔐) depending on p,q,K and 𝔐 such that |∫_Y×Ωη_k, md𝔫|≤ C_p,q,K(𝔐)+2^p-1(K-1)_p,q(δ_y_0^Y⊗σ, 𝔫)^p.Now define for ω∈Ω, δ>0, 1≤ k≤ K, and m, j∈, I^δ, ω_k, m, j: ={t∈B^Y_j(y_0)| η_k, m(t, ω)≥sup_B^Y_j(y_0)max{η_k, m(·, ω),0}-δ}.Since η_k, m(y_0, ω)=0 and η_k, m is continuous, we must have _Y(I^δ, ω_k, m, j)>0 so we can define𝔫_δ, m, k, j^ω: =1_I^δ, ω_k, m, j/_Y(I^δ, ω_k, m, j)_Y∈𝒫(Y).Since η_k, m∈ C(Y×Ω), the set {(t, ω)∈ Y×Ω| t∈ I^δ, ω_k, m, j} is the superlevel set of a lower semi-continuous function on 
Y×Ω, thus𝔫_δ, m, k, j:=𝔫_δ, m, k, j^∙⊗σ is well-defined and belongs to 𝒫^σ(Y×Ω). Also_p,q(δ_y_0^Y⊗σ, 𝔫_δ, m, k, j)^p =‖(1/_Y(I^δ, ∙_k, m, j)∫_ I^δ, ∙_k, m, j_y_0^p d_Y)^1/p‖_L^q(σ)^p≤ j^p,hence 𝔫_δ, m, k, j∈𝒫_p, q^σ(Y×Ω).Then we find using (<ref>),∫_Ωsup_t'∈B^Y_j(y_0)max{η_k, m(t', ω),0}dσ(ω)-δ ≤∫_Ω1/_Y(I^δ, ω_k, m, j)∫_I^δ, ω_k, m, jη_k, m(t, ω)d_Y(t)dσ(ω)=∫_Ω∫_Y η_k, m(t, ω)d𝔫^ω_δ, m, k, j(t)dσ(ω) ≤ C_p,q,K(𝔐)+2^p-1(K-1)j^p.We may replace max with min andsup with inf,then change the direction of the inequality in the definition of I^δ, ω_k, m ,j to obtain the analogous inequality ∫_Ωinf_t'∈B^Y_j(y_0)min{η_k, m(t', ω),0}dσ(ω)+δ≥ -C_p,q,K(𝔐)-2^p-1(K-1)j^p,then sending δ→ 0 in the two resulting inequalities above and using Hölder's inequality yields∫_Ω‖η_k, m(·, ω)‖_L^2(B^Y_j(y_0))dσ(ω)≤_Y(B^Y_j(y_0))^1/2∫_Ωsup_t'∈B^Y_j(y_0)|η_k, m(t', ω)| dσ(ω)≤_Y(B^Y_j(y_0))^1/2( C_p,q,K(𝔐)+2^p-1(K-1)j^p),where L^2(B^Y_j(y_0)) is considered with respect to _Y.This implies that for each j∈ and 1≤ k≤ K, the sequence (ω↦η_k, m(·, ω))_m∈ is bounded in the Bochner–Lebesgue space L^1(σ; L^2(B^Y_j(y_0))). Since L^2(B^Y_j(y_0)) is a Hilbert space, we may repeatedly apply <cit.>*Theorem 3.1 along with a diagonalization argument to obtain a subsequence of (ω↦η_k, m(·, ω))_m∈ (which we do not relabel)with property that: there exists a function η̃_k: Y×Ω→withω↦η̃_k(·, ω)|_B^Y_j(y_0)∈ L^1(σ; L^2(B^Y_j(y_0))) for each j∈, and for any further (not relabeled) subsequence there exists a σ-null set 𝒩_1⊂Ω such that for all j∈ and ω∈Ω∖𝒩_1,lim_M→∞‖η_k, M^(·, ω)-η̃_k(·, ω)|_B^Y_j(y_0)‖_L^2(B^Y_j(y_0))= 0.By (<ref>) and since sup_m∈‖ζ_k, m‖_L^1(σ)≤sup_m∈‖ζ_k, m‖_L^r'(σ)≤ 1we can apply the real valued Komlós' theorem (see <cit.>*Theorem 1a) for each 1≤ k≤ K and j∈ to the sequences (ω↦sup_t'∈B^Y_j(y_0)|η_k, m(t', ω)|)_m∈ and (ζ_k,m)_m∈, and make yet another diagonalization argument to assume there exists a σ-null set 𝒩_2 such that for all j∈, 1≤ k ≤ K, and ω∈𝒩_2,lim_M→∞1/M∑_m=1^Msup_t'∈B^Y_j(y_0)|η_k, m(t', ω)| is finite,and (<ref>) holds. If p<q, by the Banach–Saks theorem we may pass to another subsequence of (ζ_k,m)_m∈ to assume that ζ_k, M^ converges in L^r'(σ), necessarily to ζ_k, while if p=q we already have ζ_k, M^≡ 1 for all M. With this setup, fix an arbitrary increasing sequence (M_ℓ)_ℓ∈ℕ⊂ and ω∈Ω∖𝒩, where 𝒩:=𝒩_1∪𝒩_2.By (<ref>) we may pass to yet another subsequence to assume for some _Y-null set 𝒩(ω)⊂ Y,lim_ℓ→∞η_k, M_ℓ^(t, ω)= η̃_k(t, ω), for all t∈ Y∖𝒩(ω).We can then apply Lemma <ref> with f_m=ξ_k, m(·, ω)and λ_m, ℓ=ζ_k, m(ω) independent of ℓ∈ℕ(since the sequence (ζ^_k, M(ω))_M∈ converges, it is also uniformly bounded) for 1≤ k≤ K-1 to again obtain that (η_k, M_ℓ^)_ℓ∈ is uniformly Lipschitzon B_j for each j∈. Since η_k, M_ℓ^(y_0, ω)=0 for all k we see this sequence is also bounded on B_j, thus we may apply the Arzelà–Ascoli theorem on each such ball.By another diagonalization argument, this implies there is a continuous extension of η̃_k(·, ω) to all of Y for each ω∈Ω∖𝒩; we denote this extension by η_k(·, ω). Since we had started with an arbitrary increasing sequence (M_ℓ)_ℓ∈,we conclude that (for the full original sequence) (<ref>) holdsand for a fixed ω, this convergence is uniform in t when restricted to B_j for any j∈ by Arzelà–Ascoli.By (<ref>), ∑_k=1^K η_k, M^≡ 0, hence we see the same limiting claim holds for k=K as well. Now, for 1≤ k≤ K, we define Ω_k:={ω∈Ω|ζ_k(ω)≠ 0},ξ_k(t, ω):=η_k(t, ω)/ζ_k(ω)1_Ω_k(ω).Claim 2. 
For any ε∈ (0, σ(Ω_k)) there exists a set Ω_k, ε⊂Ω∖Ω_k with σ(Ω_k, ε)<ε such that for any 𝔫∈𝒫^σ_p, q(Y×Ω), the functions defined on Ω byω ↦ -∫_Y η_k(t, ω)d𝔫^ω(t)1_Ω∖𝒩(ω),ω ↦[-ζ_k(ω)∫_Y (S_λ_k, pξ_k)_ω(s)d𝔪^ω_k(s)1_Ω_k(ω)+1_Ω∖(Ω_k∪Ω_k, ε)(ω)inf_t'∈ Yη_k(t', ω)]1_Ω∖𝒩(ω)are ℬ_σ-measurable.Proof of Claim 2. Fix 1≤ k≤ K. For any ε>0, by Egorov's theorem there is a ℬ_σ-measurable set Ω_k, ε⊂Ω∖Ω_k with σ(Ω_k, ε)<ε such that ζ_k, M^ converges uniformly to zero on Ω∖ (Ω_k∪Ω_k, ε). For each j∈ℕ, M∈ and 1≤ k≤ K, .-η_k, M^(t, ·)|_B^Y_j(y_0)∈ C_b(Ω; C_b(B^Y_j(y_0)))⊂ L^0(σ; C_b(B^Y_j(y_0))), where this set inclusion follows from Remark <ref> since C_b(B^Y_j(y_0)) is separable.Then by a proof analogous to Lemma <ref> we find that ω↦ -∫_B^Y_j(y_0)η_k, M^(t, ω)d𝔫^ω(t)1_Ω∖𝒩(ω)is ℬ_σ-measurable. Since the convergence of η_k, M^(t, ω) as M→∞ for ω∈Ω∖𝒩 is uniform on B^Y_j(y_0) andσ(𝒩)=0, this yields that for each j∈,ω↦ -∫_B^Y_j(y_0)η_k(t, ω)d𝔫^ω(t)1_Ω∖𝒩(ω)is ℬ_σ-measurable. Since Y is separable and η_k(·, ω) is continuous for ω∈Ω∖𝒩, there exists a countable subset D of Y such that inf_t'∈ Yη_k(t', ω) = inf_t'∈ Dη_k(t', ω),hence the functionω↦1_Ω∖𝒩(ω)inf_t'∈ Yη_k(t', ω) is ℬ_σ-measurable in ω. Then recalling the proof of the second half of Lemma <ref> we can also see that for each j∈,ω↦[-ζ_k(ω)∫_B^Y_j(y_0) (S_λ_k, pξ_k)_ω(s)d𝔪^ω_k(s)1_Ω_k(ω)+1_Ω∖(Ω_k∪Ω_k, ε)(ω)inf_t'∈ Yη_k(t', ω)]1_Ω∖𝒩(ω)is ℬ_σ-measurable. Thus to obtain the claimed ℬ_σ-measurability it is sufficient to show that for each 1≤ k≤ K, the function defined in (<ref>) (resp. (<ref>)) converges σ-a.e. pointwise to the function in (<ref>) (resp. (<ref>)). Note for ω∉Ω∖𝒩 the convergence for that ω is clear.We begin with the measurability of (<ref>). For each 1≤ k≤ K and ω∈Ω∖𝒩,we observe from (<ref>) thatη_k(t, ω)=lim_M→∞1/M∑_m=1^Nη_k, m(t, ω)≥lim sup_M→∞1/M∑_m=1^M [-ζ_k, m(ω)(S_λ_k, pξ_k, m)_ω(s)-2^p-1ζ_k, m(ω)(_y_0(s)^p+_y_0(t)^p)]≥lim sup_M→∞(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω(s)) -2^p-1ζ_k(ω)(_y_0(s)^p+_y_0(t)^p).Integrating against d(𝔫^ω⊗𝔪_k^ω)(t, s) yields∫_Yη_k(t, ω)d𝔫^ω(t) ≥∫_Ylim sup_M→∞( -1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω(s))d𝔪_k^ω(s) -2^p-1ζ_k(ω)(∫_Y _y_0(s)^pd𝔪_k^ω(s)+∫_Y _y_0(t)^pd𝔫^ω(t)) ≥lim sup_M→∞∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω)d𝔪_k^ω -2^p-1ζ_k(ω)( ∫_Y _y_0(s)^pd𝔪_k^ω(s)+∫_Y _y_0(t)^pd𝔫^ω(t) );here we are able to apply Fatou's lemma to obtain the final inequality due to the fact that -1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω(s)≤ -λ_k(sup_M'∈ζ^_k, M'(ω))_y_0(s)^pwhich belongs to L^1(𝔪_k^ω) for σ-a.e. ω since 𝔪_k∈𝒫^σ_p, q(Y×Ω). Since we also have∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω(s))d𝔪_k^ω ≤ -λ_k(sup_M'∈ζ^_k, M'(ω))∫_Y_y_0(s)^pd𝔪_k^ωand the expression on the right belongs to L^1(σ), again due to the fact that 𝔪_k∈𝒫^σ_p, q(Y×Ω), we may integrate (<ref>) against σ and apply Fatou's lemma to obtain∫_Ω∫_Yη_k(t, ω)d𝔫^ω(t)dσ ≥lim sup_M→∞∫_Ω∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω)d𝔪_k^ω dσ(ω) -2^p-1∫_Ωζ_k(ω)( ∫_Y _y_0(s)^pd𝔪_k^ω(s)+∫_Y _y_0(t)^pd𝔫^ω(t) )dσ(ω)>-∞,where the finiteness is from (<ref>) with the fact that 𝔫, 𝔪_k∈𝒫^σ_p, q(Y×Ω). Hence ∫_Yη_k(·, ω)d𝔫^ωhas a finite lower bound for σ-a.e. ω. By (<ref>) this shows that |∫_Yη_k(t, ω)d𝔫^ω(t)|<∞for σ-a.e. ω, hence η_k(·, ω)∈ L^1(𝔫^ω) for σ-a.e. ω.Thus by dominated convergence, we may take a limit as j→∞ in (<ref>) to obtain that the function in (<ref>) is ℬ_σ-measurable for 1≤ k≤ K.Next we show the convergence of (<ref>). If ζ_k=0 σ-a.e., we immediately see (<ref>) is measurable, hence we may assume this is not the case. 
For ease of notation, we writeΦ(s, ω):=[-ζ_k(ω) (S_λ_k, pξ_k)_ω(s)1_Ω_k(ω)+1_Ω∖(Ω_k∪Ω_k, ε)(ω)inf_t'∈ Yη_k(t', ω)]1_Ω∖𝒩(ω),if we can show for σ-a.e. ω that Φ is bounded from above and below by functions, whose integrals against 𝔪^ω_k are finite from above and below respectively, we may then apply dominated convergence to the positive and negative parts of the expression in (<ref>) to obtain convergenceto (<ref>), which will give the desired ℬ_σ-measurability.First we work toward the upper bound. For any s∈ Y we seeΦ(s, ω)≤[λ_kζ_k(ω)_y_0(s)^p1_Ω_k(ω)+1_Ω∖(Ω_k∪Ω_k, ε)(ω)η_k(y_0, ω)]1_Ω∖𝒩(ω)=λ_kζ_k(ω)_y_0(s)^p1_Ω_k(ω),where we used that ξ_k(y_0, ω)=η_k(y_0, ω)=0 for all k and ω.By Hölder's inequality and since 𝔪_k∈𝒫^σ_p, q(Y×Ω),∫_Ωζ_k(ω)∫_Y _y_0(s)^p d𝔪_k^ω(s)dσ(ω)≤‖ζ_k‖_L^r'(σ)‖∫_Y _y_0(s)^p d𝔪_k^∙(s) ‖_L^r(σ)≤_p,q(δ_y_0^Y⊗σ, 𝔪_k)^p<∞,hence Φ is bounded from above by a function with finite integral against 𝔪_k^ω for σ-a.e. ω. Next we work toward the lower bound. Let us writeξ^_k, M:=η^_k, M/ζ^_k, M,then since ξ^_k, M(·, ω)→ξ_k(·, ω) as M→∞ whenever ω∈Ω_k, for all s∈ Y and ω∈Ω_k∖𝒩 we havelim sup_M→∞( -ζ^_k, M(ω) (S_λ_k, pξ^_k, M)_ω(s))=lim sup_M→∞[ζ^_k, M(ω) inf_t'∈ Y( λ_k _Y(t', s)^p+ξ^_k, M(t', ω))]≤inf_t'∈ Ylim sup_M→∞[ζ^_k, M(ω) (λ_k _Y(t', s)^p+ξ^_k, M(t', ω))]=-ζ_k(ω) (S_λ_k, pξ_k)_ω(s), where we use that lim sup_j→∞(a_jb_j)=(lim_j→∞a_j)(lim sup_j→∞b_j)for any sequences (a_j)_j∈ℕ,(b_j)_j∈ℕsuch that(a_j)_j∈ℕ converges to a positive number. Meanwhile for ω∈Ω∖ (Ω_k∪𝒩) we havelim sup_M→∞( -ζ^_k, M(ω) (S_λ_k, pξ^_k, M)_ω(s))≤inf_t'∈ Ylim sup_M→∞(λ_k ζ^_k, M(ω)_Y(t', s)^p+η^_k, M(t', ω))≤inf_t'∈ Yη_k(t', ω).Since ζ_k, M^ converges σ-a.e., it is bounded σ-a.e, and -ζ_k, M^(ω)(S_λ_k, pξ^_k, M)_ω≤(sup_M'∈ζ^_k, M'(ω))λ_k_y_0(s)^pfor σ-a.e. ω. Then since 𝔪_k∈𝒫^σ_p, q(Y×Ω), we have∫_Y _y_0^p d𝔪_k^∙∈ L^r(σ)⊂ L^1(σ),hence we may use Fatou's lemma to obtain lim sup_M→∞∫_Y ( -ζ_k, M^(ω) (S_λ_k, pξ^_k, M)_ω)d𝔪_k^ω ≤∫_Ylim sup_M→∞( -ζ_k, M^(ω) (S_λ_k, pξ^_k, M)_ω)d𝔪_k^ωfor σ-a.e. ω. Since σ has finite total measure, L^r'(σ)-convergence of the ζ_k, M^ implies the restricted sequence (ζ_k, M^1_Ω_k)_M∈ converges in L^1(σ), necessarily to ζ_k1_Ω_k=ζ_k. Since‖ζ_k‖_L^1(σ)>0,we have ‖ζ_k, M^1_Ω_k‖_L^1(σ)>0 for all M sufficiently large, and we see ‖ζ_k, M^1_Ω_k‖_L^1(σ)^-1∫_Eζ_k, M^1_Ω_kdσ‖ζ_k‖_L^1(σ)^-1∫_Eζ_kdσfor any E∈ℬ_σ. 
Thus we can view (‖ζ_k, M^1_Ω_k‖_L^1(σ)^-1ζ_k, M^1_Ω_kσ)_M∈ as a sequencein 𝒫(Ω) that converges setwise to the probability measure ‖ζ_k‖_L^1(σ)^-1ζ_kσ.Thus by (<ref>) the L^1(σ)- and L^r'(σ)-convergence of (ζ_k, M^1_Ω_k)_M∈ to ζ_k yieldslim sup_M→∞∫_Ω_kζ_k, M^/‖ζ_k, M^1_Ω_k‖_L^1(σ)∫_Y _y_0^p d𝔪_k^∙ dσ=1/‖ζ_k‖_L^1(σ)lim sup_M→∞∫_Ω_kζ_k, M^∫_Y _y_0^p d𝔪_k^∙ dσ =1/‖ζ_k‖_L^1(σ)∫_Ωζ_k ∫_Y _y_0^p d𝔪_k^∙ dσ≤‖ζ_k‖_L^r'(σ)/‖ζ_k‖_L^1(σ)‖∫_Y _y_0^p d𝔪_k^∙‖_L^r(σ)<∞.Since-∫_Y (S_λ_k, pξ^_k, M)_ω d𝔪_k^ω≤λ_k ∫_Y _y_0^p d𝔪_k^ωwe may apply Fatou's lemma for sequences of probability measures, <cit.>*Theorem 4.1, with the choices g_n=-λ_k ∫_Y _y_0^p d𝔪_k^∙, f_n=∫_Y (S_λ_k, pξ^_k, n)_ω d𝔪_k^∙in the reference which yields∫_Ω_klim sup_M→∞∫_Y (-ζ_k, M^(ω)( S_λ_k, pξ^_k, M)_ω)d𝔪_k^ω dσ(ω)=∫_Ω_kζ_k(ω)lim sup_M→∞(-∫_Y(S_λ_k, pξ^_k, M)_ω d𝔪_k^ω)dσ(ω) ≥‖ζ_k‖_L^1(σ)lim sup_M→∞(-∫_Ω_kζ_k, M^(ω)/‖ζ_k, M^1_Ω_k‖_L^1(σ)∫_Y(S_λ_k, pξ^_k, M)_ω d𝔪_k^ω dσ(ω))= lim sup_M→∞(-∫_Ω_kζ_k, M^(ω)∫_Y (S_λ_k, pξ^_k, M)_ω d𝔪_k^ω dσ(ω));above we have used that lim_M→∞ζ_k, M^(ω)>0 for all ω∈Ω_k.By a calculation analogous to (<ref>), for any M∈ we have-ζ_k, M^(S_λ_k, pξ^_k, M)_∙ ≥ -1/M∑_m=1^Mζ_k, m(S_λ_k, pξ_k, m)_∙,thus combining the above with (<ref>) and (<ref>)we see ∫_Ω_k∫_Ylim sup_M→∞(-ζ_k, M^ (S_λ_k, pξ^_k, M)_∙) d𝔪_k^∙ dσ≥lim sup_M→∞(-1/M∑_m=1^M∫_Ω_kζ_k, m∫_Y (S_λ_k, pξ_k, m)_∙ d𝔪_k^∙ dσ).Since ζ_k, M^ converges uniformly to 0 on Ω∖(Ω_k ∪Ω_k, ε), for all M sufficiently large we have -ζ_k, M^(S_λ_k, pξ^_k, M)_∙≤λ_k_y_0^p on Y× (Ω∖ (Ω_k∪Ω_k, ε)).Since the expression on the right-hand side has finite integral with respect to 𝔪_k, by Fatou's lemma and (<ref>) we have∫_Ω∖ (Ω_k∪Ω_k, ε)∫_Ylim sup_M→∞ (-ζ_k, M^( S_λ_k, pξ^_k, M)_∙)d𝔪_k^∙ dσ≥lim sup_M→∞(-1/M∑_m=1^M∫_Ω∖ (Ω_k∪Ω_k, ε)ζ_k, m∫_Y (S_λ_k, pξ_k, m)_∙ d𝔪_k^∙ dσ),thus by (<ref>) we have∫_Ω∖Ω_k, ε∫_Ylim sup_M→∞(-ζ_k, M^ (S_λ_k, pξ^_k, M)_∙) d𝔪_k^∙ dσ≥lim sup_M→∞(-1/M∑_m=1^M∫_Ω∖Ω_k, εζ_k, m∫_Y (S_λ_k, pξ_k, m)_∙ d𝔪_k^∙ dσ).By the L^r'(σ)-convergence of ζ_k, M^ to 0 on Ω_k, ε and (<ref>), we findlim sup_M→∞∫_Ω_k, ε∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω)d𝔪_k^ω dσ(ω)≤lim sup_M→∞‖ζ^_k, M1_Ω_k, ε‖_L^r'(σ)‖∫_Yλ_k_y_0^pd𝔪_k^∙‖_L^r(σ)=0,then lim sup_M→∞∫_Ω∖Ω_k, ε∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω)d𝔪_k^ω dσ(ω)≥lim sup_M→∞∫_Ω∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω)d𝔪_k^ω dσ(ω)-lim sup_M→∞∫_Ω_k, ε∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω)d𝔪_k^ω dσ(ω)≥lim sup_M→∞∫_Ω∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω)d𝔪_k^ω dσ(ω),thus we see by (<ref>) that the last integral in (<ref>) is bounded from below. Combining with (<ref>) and (<ref>), we observe that Φ(s,·) ≥lim sup_M→∞( -ζ^_k, M(S_λ_k, pξ^_k, M)_∙(s))1_Ω∖Ω_k, εσ-a.e., and the right-hand side has integral with respect to 𝔪_k^ω bounded from below for σ-a.e. ω. 
Thus we have the ℬ_σ-measurability of (<ref>) for 1≤ k≤ K as claimed.Now suppose 𝔫∈𝒫^σ_p, q(Y×Ω) is a minimizer of 𝔅_Λ, 𝔐^p, q, p, and for 1≤ k≤ K, j∈ let Ω_k, j be the set obtained from Claim 2 with ε=j^-1σ(Ω_k) if σ(Ω_k)>0, and the empty set otherwise.By the ℬ_σ-measurability of (<ref>), we can integrate over Ω∖Ω_k, j then sum over k and use (<ref>) and (<ref>) to seelim sup_M→∞∫_Ω∖Ω_k, j∫_Y(-1/M∑_m=1^Mζ_k, m(ω)(S_λ_k, pξ_k, m)_ω)d𝔪_k^ω dσ(ω)≥𝔅_Λ, 𝔐^p, q, p(𝔫).Then by a combination of (<ref>), (<ref>), and (<ref>) we obtain𝔅_Λ, 𝔐^p, q, p(𝔫)≤ -∑_k=1^K∫_Ω_kζ_k(ω)∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω dσ(ω)+∑_k=1^K∫_Ω∖(Ω_k∪Ω_k, j)inf_t'∈ Yη_k(t', ω) dσ(ω).Althoughthe elements do not necessarily belong to (𝒴_r', σ×𝒵_p)^K, we do have ζ_k∈ L^r'(σ) with ‖ζ_k‖_L^r'(σ)≤ 1, and ξ_k(·, ω)∈ C(Y) for σ-a.e. ω. By (<ref>)and the measurability of (<ref>) and (<ref>), we find𝔅_Λ, 𝔐^p, q, p(𝔫)≤∑_k=1^K∫_Ω_k(-ζ_k(ω)∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω-∫_Yη_k(·, ω) d 𝔫^ω) dσ(ω)+∑_k=1^K∫_Ω∖(Ω_k∪Ω_k, j)∫_Y[inf_t'∈ Yη_k(t', ω)-η_k(t, ω)]d𝔫^ω dσ(ω)-∑_k=1^K∫_Ω_k, j∫_Yη_k(t, ω)d𝔫^ω dσ(ω)≤ -∑_k=1^K∫_Ω_kζ_k(ω)(∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω+∫_Yξ_k(·, ω) d 𝔫^ω)dσ(ω)-∑_k=1^K∫_Ω_k, j∫_Yη_k(t, ω)d𝔫^ω dσ(ω) -∑_k=1^K∫_Ω_kζ_k(ω)(∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω+∫_Yξ_k(·, ω) d 𝔫^ω)dσ(ω),where the final limit follows because (<ref>) combined with (<ref>) implies each η_k∈ L^1(𝔫), and σ(Ω_k, j)→ 0 as j→∞. Since-ζ_k(ω)((S_λ_k,pξ_k)_ω(s)+ξ_k(t, ω))≤λ_kζ_k(ω)_Y(t, s)^pfor all t, s, and ω∈Ω∖𝒩, (<ref>) implies-∑_k=1^K∫_Ω_kζ_k(ω)(∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω+∫_Yξ_k(·, ω) d 𝔫^ω)dσ(ω)≥∑_k=1^Kλ_k‖_p^Y(𝔪_k^∙, 𝔫^∙)^p1_Ω∖Ω_k‖_L^r(σ)+∑_k=1^Kλ_k‖_p^Y(𝔪_k^∙, 𝔫^∙)^p1_Ω_k‖_L^r(σ)≥∑_k=1^Kλ_k‖_p^Y(𝔪_k^∙, 𝔫^∙)^p1_Ω_k‖_L^r(σ)≥ -∑_k=1^K∫_Ω_kζ_k(ω)(∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω+∫_Yξ_k(·, ω) d 𝔫^ω)dσ(ω),hence for any 1≤ k≤ K, for σ-a.e. ω∈Ω∖Ω_k, we have _p^Y(𝔪_k^ω, 𝔫^ω)=0, in particular𝔪_k^ω=𝔫^ω. Now this implies -∑_k=1^K∫_Ω_kζ_k(ω)(∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω+∫_Yξ_k(·, ω) d 𝔫^ω) dσ(ω)=∑_k=1^Kλ_k‖_p^Y(𝔪_k^∙, 𝔫^∙)^p1_Ω_k‖_L^r(σ),then by (<ref>), each term in the sum on the left of the inequality above is less than or equal to each term in the sum on the right, in particular we have termwise equality for each 1≤ k≤ K. Let k be the distinguished index in our hypothesis such that for σ-a.e. ω, 𝔪_k^ω is absolutely continuous with respect to _Y,then again using the dual characterization of the L^r(σ)-norm (<cit.>*Proposition 6.13),-∫_Ω_kζ_k(ω)(∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω+∫_Yξ_k(·, ω) d 𝔫^ω) dσ(ω) =λ_k‖_p^Y(𝔪_k^∙, 𝔫^∙)^p1_Ω_k‖_L^r(σ)≥λ_k∫_Ω_kζ_k(ω)_p^Y(𝔪_k^ω, 𝔫^ω)^pdσ(ω)≥ -∫_Ω_kζ_k(ω)(∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω+∫_Yξ_k(·, ω) d 𝔫^ω) dσ(ω). In particular, for σ-a.e. ω∈Ω_k we must have -∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω-∫_Yξ_k(·, ω) d 𝔫^ω=λ_k_p^Y(𝔪_k^ω, 𝔫^ω)^p,hence-∫_Y (S_λ_k,pξ_k)_ω d 𝔪_k^ω-∫_Y(S_λ_k,p(S_λ_k,pξ_k))_ω d 𝔫^ω=λ_k_p^Y(𝔪_k^ω, 𝔫^ω)^p.For such an ω, let γ^ω∈Π(𝔪_k^ω, 𝔫^ω) be a p-optimal coupling between 𝔪_k^ω and 𝔫^ω. Then we obtain -(S_λ_k,pξ_k)_ω (s)-(S_λ_k,p(S_λ_k,pξ_k))_ω(t)=λ_k _Y(t, s)^p,γ^ω-a.e. (t, s). Since-λ_k _y_0(t)^p-(S_λ_k,pξ_k)_ω(y_0)≤ (S_λ_k,p(S_λ_k,pξ_k))_ω(t)≤ξ_k(t, ω)we see (S_λ_k,p(S_λ_k,pξ_k))_ω is bounded on any compact set, then by Corollary <ref> is uniformly Lipschitz on any compact subset of Y∖∂ Y. Thus in both cases, by Rademacher's theorem (S_λ_k,p(S_λ_k,pξ_k))_ω is differentiable _Y-a.e. on Y.Let t∉∂ Y be a point of differentiability for (S_λ_k,p(S_λ_k,pξ_k))_ω such that there exists s∈ Y satisfying (<ref>);the set of such t has full _Y measure, hence full 𝔪^ω_k measure. Let us denote by ⟨·, ·⟩_Y the Riemannian metric on Y, and write |·|_Y=⟨·, ·⟩_Y^1/2. 
If a function f on Y is differentiable at y∈ Y, thenf(exp^Y_y( ε v )) =f(y)+ε⟨ v, ∇_Y f⟩+o(ε) as ε→ 0for a unit tangent vector v to Y at t, where exp^Y is the exponential map of Y and ∇_Y f is the gradient of f.This with the choice f=(S_λ_k,p(S_λ_k,pξ_k))_ω implies _Y(exp_t^Y(ε v), s)^p≥ -(S_λ_k,p(S_λ_k,pξ_k))_ω(exp_t^Y(ε v)) +(S_λ_k, pξ_k)_ω(s)=-ε⟨ v, ∇_Y (S_λ_k,p(S_λ_k,pξ_k))_ω(t) ⟩_Y -(S_λ_k,p(S_λ_k,pξ_k))_ω(t) +(S_λ_k, pξ_k)_ω(s)+o(ε) as ε→ 0.Thus if s≠ t, the above shows _Y(·, s)^p is subdifferentiable at t, while <cit.>*Proposition 6 implies superdifferentiability, hence _Y(·, s)^p is differentiable at t.Since p>1,by taking the derivative of (<ref>) with respect to t, after some tedious but routine calculation we obtains=exp_t^Y(|∇_Y (S_λ_k,p(S_λ_k,pξ_k))_ω(t)/pλ_k|_Y^1/p-1(S_λ_k,p(S_λ_k,pξ_k))_ω(t)/|∇_Y(S_λ_k,p(S_λ_k,pξ_k))_ω(t)|_Y)when s≠ t. This shows that there is an 𝔪^ω_k-a.e. single valued map T^ω on Y such that the pair (t, T^ω(t)) satisfy the equality in (<ref>), and we see this includes when T^ω(t)=t. Combining with <cit.>*Lemma 2.4 necessarily we have that γ^ω=(× T^ω)_♯𝔪^ω. The map T^ω is entirely determined by ξ_k(·, ω), hence so is the right marginal 𝔫^ω for σ-a.e. ω∈Ω_k. All together, 𝔫^ω is determined for σ-a.e. ω by ζ_k or ξ_k, thus we see the _p, q-barycenter 𝔫 is uniquely determined. We can apply Theorem <ref> (<ref>), (<ref>), and (<ref>) with any value of q and Ω a one-point space, and σ the associated delta measure and the claims follow immediately. Regarding the duality result, also recall Remark <ref>. The authors would like to thank Guillaume Carlier, Wilfrid Gangbo, Quentin Mérigot, Brendan Pass, and Dejan Slepčev for fruitful discussions, and Felix Otto for pointing out the geodesic nature of the sliced metrics when p=1. The authors would also like to thank Kota Matsui for bringing this topic to their attention. JK was supported in part by National Science Foundation grants DMS-2000128 and DMS-2246606.AT was supported in part by JSPS KAKENHI Grant Number 19H01786, 19K03494.alpha | http://arxiv.org/abs/2311.15874v2 | {
"authors": [
"Jun Kitagawa",
"Asuka Takatsu"
],
"categories": [
"math.MG",
"math.OC",
"math.PR",
"49Q22, 44A12, 30L05, 28A50"
],
"primary_category": "math.MG",
"published": "20231127144727",
"title": "Two new families of metrics via optimal transport and barycenter problems"
} |
Beijing International Center for Mathematical Research, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing 100871, China [email protected]
Beijing International Center for Mathematical Research, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing 100871, China [email protected]
Beijing International Center for Mathematical Research, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing 100871, China [email protected]

On the Dominant Weight Polytopes
Gaston Burrull, Tao Gui, Hongsheng Hu
January 14, 2024
================================

We introduce the dominant weight polytope P^λ for a dominant weight λ corresponding to a root system Φ of any Lie type. We provide an explicit formula for the vertices of P^λ in terms of the Cartan matrix of Φ. When λ is strongly dominant, we show that P^λ is combinatorially equivalent to a rank(Φ)-dimensional hypercube.

§ INTRODUCTION

A weight polytope is the convex hull of the weights that occur in some highest weight representation of a Lie algebra [fulton2013representation, humphreys1972]. Weight polytopes appear to be a recurrent object of study in the representation theory of Lie algebras [Panyushev97, LT20, Walton21, besson2023weight]. In type A, a weight polytope is precisely the classical permutohedron P_n(x_1, …, x_n), that is, the convex hull of the n! points obtained from (x_1, …, x_n) by permutations of the coordinates. The face structure of weight polytopes was studied in <cit.> and <cit.>, while Postnikov computed the volume and the number of lattice points of weight polytopes in the seminal paper <cit.>.

In this paper, we introduce and study the "dominant weight polytopes" for an arbitrary root system of any Lie type. Let Φ be a crystallographic root system of rank r, and let E be the Euclidean space (also known as the weight space; points in E are called weights) in which Φ lives. We denote by (-|-) : E × E → ℝ the inner product on E. For any root α ∈ Φ, we denote the corresponding coroot 2α/(α|α) by α^∨. Then Φ^∨ := {α^∨ ∈ E | α ∈ Φ} is the dual root system. We fix a set Δ = {α_i | i = 1, …, r} of simple roots of Φ. Let

s_i : E → E, x ↦ x - 2(x|α_i)/(α_i|α_i) α_i, i = 1, …, r,

be the corresponding simple reflections, and let W_f := ⟨ s_1, …, s_r ⟩ be the (finite) Weyl group, which acts on E and preserves (-|-). Let C_+ be the dominant Weyl chamber:

C_+ := {x ∈ E | (x|α_i) > 0 for all i = 1, …, r}.

A point x ∈ E is called strongly dominant if x ∈ C_+, and dominant if x ∈ C̄_+, the closure of C_+. Given λ ∈ C̄_+, we define the dominant weight polytope P^λ associated with λ by

P^λ := conv(W_f λ) ∩ C̄_+ ⊂ E,

where conv(W_f λ) is the convex hull of the finite set W_f λ in E. That is, P^λ is the convex polytope obtained by intersecting a cone, C̄_+, with another convex set, conv(W_f λ); see Figure <ref> for a graphic visualization of this in the case of type A_3.

The polytope P^λ was originally constructed in the first author's thesis <cit.> to study dominant lower Bruhat intervals of affine Weyl groups.
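As a quick illustration of the definition (a small example added here for concreteness, with the normalization (α|α) = 2 chosen so that α^∨ = α): in type A_1, a dominant weight is λ = cα with c ≥ 0, and W_f λ = {λ, -λ}. Hence conv(W_f λ) = [-λ, λ] and

P^λ = [-λ, λ] ∩ C̄_+ = [0, λ],

a segment whose two vertices λ and 0 = λ - cα^∨ agree with the vertex formula of Corollary <ref> below (take J = ∅ and J = {1}, respectively).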
In another paper <cit.>, we show that when λ is a dominant coroot lattice point, the polytope P^λ is closely related to the dominant lower Bruhat interval [e, t_λ]^f := [e, t_λ] ∩ ^f W of the corresponding affine Weyl group, and the Brunn–Minkowski inequality applied to P^λ shows that the Betti numbers of the corresponding parabolic affine Schubert variety are "asymptotically" log-concave.

In this paper, we show that P^λ is the intersection of two closed simplicial cones (see Proposition <ref>), and prove that if λ is strongly dominant, the face structure of the polytope P^λ has a simple description: P^λ is combinatorially equivalent to a hypercube (see Theorem <ref>). Therefore, the combinatorial type of P^λ only depends on the rank r of Φ. Its 2^r vertices are canonically in one-to-one correspondence with the subsets of the simple (co)roots. Furthermore, each vertex of P^λ can be computed explicitly from the Cartan matrix of Φ (see Corollary <ref>).

§.§ Acknowledgments

The authors would like to thank Yibo Gao and Connor Simpson for useful discussions.

§ MAIN RESULTS

For general terminology and results about convex polytopes, we refer to the standard textbooks [grunbaum03, zbMATH00722614].

Firstly, we have the following proposition, which gives another description of the polytope P^λ. The polytope P^λ is the intersection of two closed simplicial cones: the closure C̄_+ of the dominant Weyl chamber and Q̄^λ, where Q̄^λ is the closure of the open simplicial cone

Q^λ := {λ - ∑_1 ≤ i ≤ r c_i α_i^∨ | c_i ∈ ℝ_>0}.

Indeed, if μ ∈ C̄_+, then μ ∈ conv(W_f λ) if and only if λ - μ is a non-negative linear combination of the simple coroots {α_1^∨, …, α_r^∨} (equivalently, of the simple roots), see <cit.>. Therefore,

P^λ = conv(W_f λ) ∩ C̄_+ = C̄_+ ∩ Q̄^λ.

This proves the proposition.

Let ϖ_1, …, ϖ_r ∈ E be the fundamental coweights, which form the dual basis to the basis of simple roots; that is, (ϖ_i | α_j) = δ_i,j, where δ_i,j is the Kronecker symbol. We define, for any subsets I, J ⊆ [r], where [r] := {1, …, r},

C_I := {0 + ∑_i ∈ I a_i ϖ_i | a_i ∈ ℝ_>0},
Q^λ_J := {λ - ∑_i ∈ J c_i α_i^∨ | c_i ∈ ℝ_>0},
F_I,J := C_I ∩ Q^λ_J (possibly empty).

Then the set {C_I | I ⊆ [r]} consists of all the 2^r faces of C̄_+. Each C_I, which is of dimension |I|, is open in its closure. In particular, C_∅ = {0}, C_[r] = C_+, and C̄_+ = ⨆_I ⊆ [r] C_I is a disjoint union of all the C_I. Similarly, {Q^λ_J | J ⊆ [r]} is the set of all faces of Q̄^λ. In particular, Q^λ_∅ = {λ}, and Q^λ_[r] = Q^λ. Therefore, we have the following decomposition: the polytope P^λ = ⨆_I,J ⊆ [r] F_I,J is a disjoint union of all the F_I,J's. Indeed,

P^λ = C̄_+ ∩ Q̄^λ = (⨆_I ⊆ [r] C_I) ∩ (⨆_J ⊆ [r] Q^λ_J) = ⨆_I,J ⊆ [r] (C_I ∩ Q^λ_J) = ⨆_I,J ⊆ [r] F_I,J.

Note that C_I and Q^λ_J are convex subsets of E, and thus F_I,J is also convex.

From now on, let λ ∈ C_+ be strongly dominant. For further discussion of the sets F_I,J, we introduce some notation. Let M = ((α_i | α_j^∨))_i,j be the Cartan matrix of Φ. For any subset I ⊆ [r], say I = {l_1, …, l_k} with l_1 < ⋯ < l_k, we denote by M_I the submatrix ((α_l_i | α_l_j^∨))_i,j of M. Then M_I is itself a Cartan matrix (of a root subsystem of Φ).

Suppose I, J ⊆ [r] are such that [r] = I ⊔ J is a disjoint union. Then F_I,J consists of a single point. To see this, suppose x ∈ F_I,J; then we may write

x = ∑_k ∈ I a_k ϖ_k = λ - ∑_j ∈ J c_j α_j^∨ with all a_k, c_j ∈ ℝ_>0.

For any i ∈ J, we have (α_i | x) = ∑_k ∈ I a_k (α_i | ϖ_k) = 0. Therefore,

0 = (α_i | λ - ∑_j ∈ J c_j α_j^∨) = (α_i | λ) - ∑_j ∈ J c_j (α_i | α_j^∨).

In other words, the vector of coefficients (c_j)_j ∈ J is a solution of the linear system of equations

M_J · (X_j)_j ∈ J = ((α_i | λ))_i ∈ J,

where (c_j)_j ∈ J, (X_j)_j ∈ J, and ((α_i | λ))_i ∈ J are regarded as column vectors, and (X_j)_j ∈ J is a set of indeterminates. Note that the Cartan submatrix M_J of M is invertible. Thus the system of equations (<ref>) has the unique solution (X_j)_j ∈ J = M_J^-1 · ((α_i | λ))_i ∈ J. Consequently, F_I,J contains at most one point.

Conversely, let (c_j)_j ∈ J := M_J^-1 · ((α_i | λ))_i ∈ J be the solution of (<ref>), and set x := λ - ∑_j ∈ J c_j α_j^∨. Note that the inverse of any Cartan matrix has non-negative entries (see, for example, <cit.>, or <cit.> for a list of possible cases). Since λ ∈ C_+ is strongly dominant, we have (α_j | λ) > 0 for any j ∈ J. Therefore, c_j > 0 for any j ∈ J; that is, x ∈ Q^λ_J. Moreover, we have (α_j | x) = 0 for any j ∈ J, since (c_j)_j ∈ J is the solution of (<ref>). Therefore x can be written in the form x = ∑_k ∈ I a_k ϖ_k for some a_k ∈ ℝ. For any k ∈ I we also have

a_k = (α_k | x) = (α_k | λ) - ∑_j ∈ J c_j (α_k | α_j^∨) ≥ (α_k | λ) > 0.

Thus x ∈ C_I, and by definition x ∈ F_I,J = C_I ∩ Q^λ_J.
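To make the computation concrete, here is a small worked example added for illustration (the normalization and the specific λ are choices made here): take Φ of type A_2 with (α_i|α_i) = 2, so that α_i^∨ = α_i and M = [[2, -1], [-1, 2]], and let λ = ϖ_1 + ϖ_2, so that (α_1|λ) = (α_2|λ) = 1. For the partition [2] = {1} ⊔ {2} (so I = {1}, J = {2}), the system reads 2c_2 = 1, hence F_I,J = λ - (1/2)α_2^∨ = (3/2)ϖ_1, a single point on the ray C_{1}. For I = ∅ and J = [2], solving M·(c_1, c_2)^T = (1, 1)^T gives c_1 = c_2 = 1, so the point is λ - α_1^∨ - α_2^∨ = 0, as it must be since C_∅ = {0}.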
We need the following technical lemma. Suppose I_1, …, I_n, J_1, …, J_n ⊆ [r] are subsets of [r], and x_k ∈ F_I_k,J_k for k = 1, …, n (so F_I_k,J_k ≠ ∅ for all k). Let r_1, …, r_n ∈ ℝ_>0 be arbitrary positive numbers such that ∑_k=1^n r_k = 1. Then

∑_k=1^n r_k x_k ∈ F_(⋃_k I_k), (⋃_k J_k),

and in particular F_(⋃_k I_k), (⋃_k J_k) ≠ ∅. Indeed, for each k we write

x_k = ∑_i ∈ I_k a_k,i ϖ_i = λ - ∑_j ∈ J_k c_k,j α_j^∨ with all a_k,i, c_k,j ∈ ℝ_>0.

Then, using ∑_k r_k = 1, we have

∑_1 ≤ k ≤ n r_k x_k = ∑_1 ≤ k ≤ n ∑_i ∈ I_k r_k a_k,i ϖ_i = λ - ∑_1 ≤ k ≤ n ∑_j ∈ J_k r_k c_k,j α_j^∨ = ∑_i ∈ ⋃_k I_k a_i ϖ_i = λ - ∑_j ∈ ⋃_k J_k c_j α_j^∨

for some a_i, c_j ∈ ℝ_>0. This concludes the proof. Note that in Lemma <ref> the point ∑_1 ≤ k ≤ n r_k x_k lies in the convex hull of x_1, …, x_n.

For two subsets I, J ⊆ [r], we have F_I,J ≠ ∅ if and only if I ∪ J = [r]. Indeed, suppose I ∪ J ≠ [r], let k ∈ [r] ∖ (I ∪ J), and suppose x ∈ F_I,J ≠ ∅. We write

x = ∑_i ∈ I a_i ϖ_i = λ - ∑_j ∈ J c_j α_j^∨

with all the coefficients a_i, c_j ∈ ℝ_>0. Since k ∉ I and k ∉ J, we have

(x | α_k) = ∑_i ∈ I a_i (ϖ_i | α_k) = 0, and (x | α_k) = (λ | α_k) - ∑_j ∈ J c_j (α_j^∨ | α_k) ≥ (λ | α_k) > 0,

which is a contradiction. Therefore, F_I,J ≠ ∅ implies I ∪ J = [r].

Conversely, suppose I ∪ J = [r]. Let I_0 = I ∖ (I ∩ J) and J_0 = J ∖ (I ∩ J). Then [r] = I_0 ⊔ (I ∩ J) ⊔ J_0 is a disjoint union. By Lemma <ref>, both of the sets F_I_0,J and F_I,J_0 consist of a single point. Then by Lemma <ref>, we have F_I,J = F_(I_0 ∪ I), (J ∪ J_0) ≠ ∅.

We have the following proposition, which describes the closure relation of the F_I,J's. For two subsets I, J ⊆ [r], we have

F̄_I,J = ⨆_I' ⊆ I, J' ⊆ J F_I',J'.

In particular, if I, I', J, J' ⊆ [r], then F_I',J' ⊆ F̄_I,J if and only if I' ⊆ I and J' ⊆ J.

To prove this, note that by Lemma <ref> we may assume I ∪ J = [r]. Notice that for any I, J ⊆ [r] we have

C̄_I = {∑_i ∈ I a_i ϖ_i | a_i ∈ ℝ_≥0} = ⨆_I' ⊆ I C_I',
Q̄^λ_J = {λ - ∑_j ∈ J c_j α_j^∨ | c_j ∈ ℝ_≥0} = ⨆_J' ⊆ J Q^λ_J'.

So F̄_I,J, the closure of C_I ∩ Q^λ_J, is contained in C̄_I ∩ Q̄^λ_J = ⨆_I' ⊆ I, J' ⊆ J F_I',J'.

Conversely, for any subsets I' ⊆ I and J' ⊆ J such that I' ∪ J' = [r], and any x ∈ F_I',J' (which is nonempty by Lemma <ref>), we fix an arbitrary point y ∈ F_I,J (again nonempty by Lemma <ref>). Then for any real number a with 0 < a < 1, we have ax + (1-a)y ∈ F_I,J by Lemma <ref>. Therefore,

x = lim_a → 1^- (ax + (1-a)y) ∈ F̄_I,J.

This proves ⊇.
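Continuing the type A_2 illustration above (again an example added here): the vertices for (I, J) = ({1}, {2}) and ({2}, {1}) are (3/2)ϖ_1 and (3/2)ϖ_2, respectively, and their midpoint is (3/4)(ϖ_1 + ϖ_2). Since λ = ϖ_1 + ϖ_2 = α_1^∨ + α_2^∨ in this example, the midpoint equals λ - (1/4)(α_1^∨ + α_2^∨), so it lies in C_[2] ∩ Q^λ_[2] = F_[2],[2], exactly as Lemma <ref> predicts with I_1 ∪ I_2 = J_1 ∪ J_2 = [2].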
The following proposition gives the vertex representation (also known as the V-representation or V-description) of each F_I,J. For any subsets I, J ⊆ [r] such that I ∪ J = [r], we write I_0 = I ∖ (I ∩ J) and J_0 = J ∖ (I ∩ J). Then

F̄_I,J = conv{ F_(I_0 ⊔ K), (J_0 ⊔ L) | K, L ⊆ [r] with I ∩ J = K ⊔ L }.

For any sets of indices K and L such that I ∩ J = K ⊔ L, we have that [r] = I_0 ⊔ K ⊔ L ⊔ J_0 is a disjoint union. Thus, the set F_(I_0 ⊔ K), (J_0 ⊔ L) consists of a single point by Lemma <ref>. By Proposition <ref>, the point in F_(I_0 ⊔ K), (J_0 ⊔ L) lies in F̄_I,J. Since F̄_I,J is convex, this proves ⊇.

Conversely, it suffices to show that

F_I,J ⊆ conv{ F_(I_0 ∪ K), (J_0 ∪ L) | K, L ⊆ [r] with I ∩ J = K ⊔ L },

since the convex hull is closed. We prove this by induction on the number |I| + |J|. The base case is |I| + |J| = r, in which F_I,J consists of a single point and there is nothing to prove.

In general, if |I| + |J| > r, then I ∩ J ≠ ∅. The two sets F_I,J_0 and F_I_0,J are two different points. By Lemma <ref>, the (open) segment between the two points is contained in F_I,J. In particular, F_I,J is an infinite set. For any x ∈ F_I,J, there exists another point y ∈ F_I,J with y ≠ x. We write

x = ∑_i ∈ I a_x,i ϖ_i = λ - ∑_j ∈ J c_x,j α_j^∨, y = ∑_i ∈ I a_y,i ϖ_i = λ - ∑_j ∈ J c_y,j α_j^∨,

with all coefficients a_x,i, c_x,j, a_y,i, c_y,j ∈ ℝ_>0. We consider the affine line passing through x and y, and let

z(t) := tx + (1-t)y, t ∈ ℝ.

Then z(t) = ∑_i ∈ I z_i(t) ϖ_i = λ - ∑_j ∈ J z_j^λ(t) α_j^∨, where

z_i(t) = t a_x,i + (1-t) a_y,i, i ∈ I, z_j^λ(t) = t c_x,j + (1-t) c_y,j, j ∈ J.

In particular, z(0) = y and z(1) = x, both of which lie in F_I,J. All of the maps z_i (i ∈ I) and z_j^λ (j ∈ J) are affine maps from ℝ to ℝ, and the map z is an affine map from ℝ to E. Note that z(t) ∈ F_I,J if and only if z_i(t) > 0 and z_j^λ(t) > 0 for all i ∈ I and j ∈ J; that is,

z^-1(F_I,J) = {t ∈ ℝ | z(t) ∈ F_I,J} = {t ∈ ℝ | z_i(t) > 0, z_j^λ(t) > 0 for all i ∈ I, j ∈ J} = f^-1((ℝ_>0)^(|I| + |J|)),

where f : ℝ → ℝ^(|I| + |J|) is the affine map given by

t ↦ ((z_i(t))_i ∈ I, (z_j^λ(t))_j ∈ J).

Therefore, z^-1(F_I,J) is an open subset of ℝ. Moreover, since F_I,J is convex and bounded, z^-1(F_I,J) is an open interval, say (t_1, t_2), with t_1 < 0 < 1 < t_2. Then

z_i(t_1) ≥ 0, z_j^λ(t_1) ≥ 0 for all i ∈ I, j ∈ J,

and the same for z_i(t_2), z_j^λ(t_2).

Consider the point z(t_1). We have z(t_1) ∈ F̄_I,J ∖ F_I,J. Let

I_1 := {i ∈ I | z_i(t_1) > 0} ⊆ I, J_1 := {j ∈ J | z_j^λ(t_1) > 0} ⊆ J.

Then I_1 ∪ J_1 = [r] since z(t_1) ∈ F_I_1,J_1, and either I_1 ⊊ I or J_1 ⊊ J since z(t_1) ∉ F_I,J. Therefore, |I_1| + |J_1| < |I| + |J|. By the induction hypothesis, we have

z(t_1) ∈ F_I_1,J_1 ⊆ conv{ F_((I_1 ∖ (I_1 ∩ J_1)) ⊔ K_1), ((J_1 ∖ (I_1 ∩ J_1)) ⊔ L_1) | K_1, L_1 ⊆ [r] with I_1 ∩ J_1 = K_1 ⊔ L_1 }.

Note that I_0 ⊆ (I_1 ∖ (I_1 ∩ J_1)); if this were not the case, some element of I_0 would belong to J_1 and hence to J, which is absurd. Similarly, J_0 ⊆ (J_1 ∖ (I_1 ∩ J_1)). Therefore,

F_((I_1 ∖ (I_1 ∩ J_1)) ⊔ K_1), ((J_1 ∖ (I_1 ∩ J_1)) ⊔ L_1) = F_(I_0 ⊔ K), (J_0 ⊔ L)

for some K, L such that I ∩ J = K ⊔ L; that is, the right-hand side of (<ref>) is contained in

conv{ F_(I_0 ⊔ K), (J_0 ⊔ L) | K, L ⊆ [r] with I ∩ J = K ⊔ L }.

Similarly, we have z(t_2) ∈ conv{ F_(I_0 ⊔ K), (J_0 ⊔ L) | K, L ⊆ [r] with I ∩ J = K ⊔ L }. Recall that x = z(1) lies between z(t_1) and z(t_2). Therefore,

x ∈ conv{ F_(I_0 ⊔ K), (J_0 ⊔ L) | K, L ⊆ [r] with I ∩ J = K ⊔ L }.

Since x ∈ F_I,J was arbitrarily chosen, this finishes the induction. Hence, ⊆ is proven.
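In the running A_2 example (added here for illustration), take I = J = [2], so that I_0 = J_0 = ∅ and I ∩ J = {1, 2}. The four splittings {1, 2} = K ⊔ L produce exactly the four points F_K,L: the vertex λ (for K = {1, 2}, L = ∅), the vertices (3/2)ϖ_1 and (3/2)ϖ_2 computed above, and the vertex 0 (for K = ∅, L = {1, 2}). The proposition then says that P^λ = F̄_[2],[2] is their convex hull, a quadrilateral, in line with the hypercube theorem below for r = 2.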
In particular, we have the following vertex representation of the convex polytope P^λ: the polytope P^λ is the convex hull of the 2^r points {F_I,J | [r] = I ⊔ J}. Indeed, by Lemma <ref>, Proposition <ref>, and Proposition <ref>,

P^λ = F̄_[r],[r] = conv{ F_I,J | [r] = I ⊔ J }.

We define in E the affine hyperplanes

H_k := {∑_i ∈ [r] ∖ {k} a_i ϖ_i | a_i ∈ ℝ}, H_k^λ := {λ - ∑_j ∈ [r] ∖ {k} c_j α_j^∨ | c_j ∈ ℝ},

and the open half spaces (whose closures are called closed half spaces)

H_k,+ := {∑_1 ≤ i ≤ r a_i ϖ_i | a_i ∈ ℝ for all i, and a_k > 0},
H_k,+^λ := {λ - ∑_1 ≤ j ≤ r c_j α_j^∨ | c_j ∈ ℝ for all j, and c_k > 0}.

Then clearly we have C_+ = ⋂_1 ≤ k ≤ r H_k,+ and Q^λ = ⋂_1 ≤ k ≤ r H_k,+^λ, and thus

P^λ = C̄_+ ∩ Q̄^λ = (⋂_1 ≤ k ≤ r H̄_k,+) ∩ (⋂_1 ≤ k ≤ r H̄_k,+^λ)

is the intersection of the closed half spaces.

For any two points x, y ∈ P^λ, we write x ∼ y if, for every k = 1, …, r, either x, y ∈ H_k or x, y ∈ H_k,+, and either x, y ∈ H_k^λ or x, y ∈ H_k,+^λ. Clearly "∼" is an equivalence relation on P^λ. The equivalence classes are called open faces of P^λ. We have the following description of the faces of P^λ: the set {F_I,J | I, J ⊆ [r], I ∪ J = [r]} is the set of all open faces of P^λ. Furthermore, if I ∪ J = [r], then F_I,J is a real manifold of dimension |I| + |J| - r.

To see this, note that by Lemmas <ref> and <ref>,

P^λ = ⨆_I,J ⊆ [r] F_I,J = ⨆_I,J ⊆ [r], I ∪ J = [r] F_I,J.

Then the first part of the claim follows from the fact that, for two points x ∈ F_I,J and y ∈ F_I',J', we have x ∼ y if and only if I = I' and J = J'.

For the second part, consider the |I|-dimensional affine subspace

X := ⋂_k ∈ [r] ∖ I H_k = {∑_i ∈ I a_i ϖ_i | a_i ∈ ℝ for all i ∈ I},

and the |J|-dimensional affine subspace

Y := ⋂_j ∈ [r] ∖ J H_j^λ = {λ - ∑_j ∈ J c_j α_j^∨ | c_j ∈ ℝ for all j ∈ J}.

Since |I| + |J| ≥ r, their intersection X ∩ Y is a nonempty affine subspace. The vectors {ϖ_i | i ∈ I} are parallel to X, the vectors {α_j^∨ | j ∈ J ∖ (I ∩ J)} are parallel to Y, and their union {ϖ_i, α_j^∨ | i ∈ I, j ∈ J ∖ (I ∩ J)} forms a basis of E. Therefore, X intersects Y transversally, and hence dim(X ∩ Y) = dim X + dim Y - r = |I| + |J| - r. Notice that C_I is an open cone in X, and Q^λ_J is an open cone in Y. Therefore the intersection F_I,J = C_I ∩ Q^λ_J, which is nonempty, is an open subset of the affine subspace X ∩ Y, and hence a real manifold of dimension |I| + |J| - r.
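A quick consistency check (added here for illustration): a pair (I, J) with I ∪ J = [r] amounts to assigning each index of [r] to exactly one of the three classes "in I only", "in J only", or "in both", so there are 3^r such pairs. This matches the total number of faces of an r-dimensional hypercube, where each face fixes every coordinate of [0,1]^×r to 0, to 1, or leaves it free, consistent with the theorem proved next.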
For a polytope P, there is a partial order on the set of all faces of P defined by F ≤ F' if and only if F ⊆ F'. A polytope P of dimension r is called combinatorially equivalent to a hypercube if the face poset of P is isomorphic, as a partially ordered set, to the face poset of the standard unit hypercube [0,1]^×r ⊂ ℝ^r.

The polytope P^λ is combinatorially equivalent to a hypercube. To prove this, note that a face H of the r-dimensional cube [0,1]^×r in the Euclidean space ℝ^r is uniquely determined by a partition [r] = I_0 ⊔ I_1 ⊔ I_01 via

H = H(I_0, I_1, I_01) := {(x_1, …, x_r) | x_i = 0 if i ∈ I_0, x_j = 1 if j ∈ I_1, and x_k ∈ [0,1] if k ∈ I_01}.

Suppose H' = H'(I_0', I_1', I_01') is another face of the cube, corresponding to the partition [r] = I_0' ⊔ I_1' ⊔ I_01'. Then clearly H ⊆ H' if and only if I_0' ⊆ I_0 and I_1' ⊆ I_1.

We define a map θ from the set of faces of the cube [0,1]^×r to the set of faces of the polytope P^λ by

θ : H(I_0, I_1, I_01) ↦ F̄_(I_0 ∪ I_01), (I_1 ∪ I_01).

Then by Theorem <ref>, this map is well defined and is a bijection, with inverse

θ^-1 : F̄_I,J ↦ H(I ∖ (I ∩ J), J ∖ (I ∩ J), I ∩ J).

Therefore, to show that P^λ is combinatorially equivalent to a hypercube, it suffices to show that for any two faces H and H' of the cube, H ⊆ H' if and only if θ(H) ⊆ θ(H'). By Proposition <ref>, this is equivalent to the fact that for any two partitions [r] = I_0 ⊔ I_1 ⊔ I_01 and [r] = I_0' ⊔ I_1' ⊔ I_01' we have the following equivalence:

I_0' ⊆ I_0 and I_1' ⊆ I_1 if and only if I_0 ∪ I_01 ⊆ I_0' ∪ I_01' and I_1 ∪ I_01 ⊆ I_1' ∪ I_01'.

Suppose first that I_0' ⊆ I_0 and I_1' ⊆ I_1. Then we have I_01 ⊆ I_01'. For any element i ∈ I_0 ∖ I_0', we have i ∉ I_1 and hence i ∉ I_1'; therefore i ∈ I_01'. This proves I_0 ∪ I_01 ⊆ I_0' ∪ I_01'. The containment I_1 ∪ I_01 ⊆ I_1' ∪ I_01' is proved by the same argument. Conversely, the containment I_0 ∪ I_01 ⊆ I_0' ∪ I_01' implies I_1' ⊆ I_1, and the containment I_1 ∪ I_01 ⊆ I_1' ∪ I_01' implies I_0' ⊆ I_0. This completes the proof of the equivalence, and hence proves the theorem.

Note that for any partition [r] = I ⊔ J, the point F_I,J is a 0-dimensional face of P^λ, hence a vertex. We may compute the 2^r vertices using the Cartan matrix of the root system Φ (as demonstrated in the proof of Lemma <ref>): the polytope P^λ is the convex hull of the 2^r vertices

{F_I,J | [r] = I ⊔ J} = {λ - ∑_j ∈ J c_j α_j^∨ | J ⊆ [r]},

where (c_j)_j ∈ J = M_J^-1 · ((α_i | λ))_i ∈ J, and M_J is the square submatrix of the Cartan matrix M corresponding to the index set J.
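The vertex formula lends itself to direct computation. The following is a minimal numpy sketch, added for illustration; the choice of Cartan matrix (type A_2), the pairing vector, and the helper names are assumptions made here, not part of the text. For each subset J it solves M_J c = ((α_i|λ))_{i ∈ J} and prints the coefficients of the vertex p_J = λ - ∑_{j ∈ J} c_j α_j^∨.

import numpy as np
from itertools import chain, combinations

# Type A2 Cartan matrix, entries M[i][j] = (alpha_i | alpha_j^vee),
# under the simply-laced normalization (alpha_i | alpha_i) = 2.
M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
r = M.shape[0]

# Pairings (alpha_i | lambda); all positive for a strongly dominant lambda.
# Here lambda = pi_1 + pi_2, the sum of the fundamental coweights.
pairings = np.array([1.0, 1.0])

def vertex_coefficients(J):
    # Solve the system M_J c = ((alpha_i | lambda))_{i in J}.
    J = list(J)
    if not J:
        return np.array([])  # J = emptyset gives p_J = lambda itself
    return np.linalg.solve(M[np.ix_(J, J)], pairings[J])

# Enumerate all 2^r subsets J of {0, ..., r-1} and print the coefficients.
for J in chain.from_iterable(combinations(range(r), k) for k in range(r + 1)):
    c = vertex_coefficients(J)
    print("J =", list(J), " coefficients c_j =", np.round(c, 3))

For the A_2 data above, this prints c = (0.5) for each singleton J and c = (1, 1) for J = [2], matching the hand computations in the earlier example.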
§ REFERENCES

M. Besson, S. Jeralds, and J. Kiers, Weight polytopes and saturation of Demazure characters, preprint, arXiv:2202.05405, 2023.

G. Burrull, On the combinatorics of parabolic Schubert varieties, Ph.D. thesis, 2023. Available at <https://hdl.handle.net/2123/29909>.

G. Burrull, T. Gui, and H. Hu, Asymptotic log-concavity of dominant lower Bruhat intervals via Brunn–Minkowski inequality, in preparation, 2023.

W. Fulton and J. Harris, Representation theory: a first course, Graduate Texts in Mathematics, vol. 129, Springer-Verlag, New York, 1991.

B. Grünbaum, Convex polytopes, second edition, Graduate Texts in Mathematics, vol. 221, Springer-Verlag, New York, 2003. Prepared and with a preface by Volker Kaibel, Victor Klee and Günter M. Ziegler.

B. Hall, Lie groups, Lie algebras, and representations—an elementary introduction, second edition, Graduate Texts in Mathematics, vol. 222, Springer, Cham, 2015.

J. E. Humphreys, Introduction to Lie algebras and representation theory, Graduate Texts in Mathematics, vol. 9, Springer-Verlag, New York–Berlin, 1972.

C. Lecouvey and P. Tarrago, Mesures centrales pour les graphes multiplicatifs, représentations d'algèbres de Lie et polytopes des poids, Ann. Inst. Fourier (Grenoble) 70 (2020), no. 6, 2361–2407.

G. Maxwell, Wythoff's construction for Coxeter groups, Journal of Algebra 123 (1989), no. 2, 351–377.

A. L. Onishchik and È. B. Vinberg, Lie groups and algebraic groups, Springer Series in Soviet Mathematics, Springer-Verlag, Berlin, 1990. Translated from the Russian and with a preface by D. A. Leites.

D. I. Panyushev, Cones of highest weight vectors, weight polytopes, and Lusztig's q-analog, Transform. Groups 2 (1997), no. 1, 91–115.

A. Postnikov, Permutohedra, associahedra, and beyond, International Mathematics Research Notices 2009, no. 6, 1026–1106.

L. E. Renner, Descent systems for Bruhat posets, Journal of Algebraic Combinatorics 29 (2009), 413–435.

M. A. Walton, Demazure formulas for weight polytopes, in Quantum theory and symmetries, CRM Series in Mathematical Physics, Springer, Cham, 2021, pp. 287–294.

G. M. Ziegler, Lectures on polytopes, Graduate Texts in Mathematics, vol. 152, Springer-Verlag, New York, 1995.
"authors": [
"Gaston Burrull",
"Tao Gui",
"Hongsheng Hu"
],
"categories": [
"math.RT",
"math.CO",
"05E10 (Primary), 52B20 (Secondary)"
],
"primary_category": "math.RT",
"published": "20231127173653",
"title": "On the Dominant Weight Polytopes"
} |
Measurement of the scintillation response of water-based liquid scintillator to alpha particles, and implications for particle identification

E. J. Callaghan, T. Kaptanoglu, M. Smiley, M. Yeh, G. D. Orebi Gann

University of California, Berkeley, CA 94720-7300, USA
Lawrence Berkeley National Laboratory, CA 94720-8153, USA
Brookhaven National Laboratory, Upton, NY 11973-500, USA

Next-generation large-scale neutrino detectors, ranging from the ton scale to the tens-of-kilotonnes scale, will utilize differences in both the scintillation and Cherenkov light emission for different particle species to perform background rejection. This manuscript presents measurements of the scintillation light yield and emission time profile of water-based liquid scintillator samples in response to α radiation. These measurements are used as input to simulation models used to make predictions for future detectors. In particular, we present the timing-based particle identification achievable in generic water-based scintillator detectors at the 4 ton, 1 kt, and 100 kt scales. We find that α/β discrimination improves with increasing scintillator concentration, and we identify better than 80% α rejection for 90% β acceptance in 10% water-based liquid scintillator, at the 4 ton scale.

§ INTRODUCTION

Large liquid-phase optical detectors have seen continued use in neutrino physics, and are responsible for a number of key results, ranging from the discovery of neutrino flavor transformation to sensitive probes of the neutrino oscillation patterns <cit.>. Historically, water detectors utilized Cherenkov light, whereas detectors employing a liquid scintillator overwhelmingly made use of scintillation light alone. A hybrid detector, which would simultaneously leverage both Cherenkov and scintillation signals for advanced event reconstruction, is a strong candidate for a next-generation detector. One such design <cit.> is a proposed tens-of-kilotonnes detector which would support a broad experimental program. Fundamental physics topics include measurements of low-energy solar neutrinos <cit.> and reactor- and geo-antineutrinos <cit.>, searches for neutrinoless double-beta decay <cit.>, and determination of the CP-violating phase of the PMNS matrix and the neutrino mass hierarchy <cit.>. As a large-scale antineutrino detector, it could act as a proof-of-principle for far-field reactor monitoring, in the context of nuclear nonproliferation <cit.>.

A challenge in addressing many of these topics experimentally is the rejection of radioactive backgrounds, typically in the low-energy regime around 1 MeV. One such class of background is associated with α radiation from the decay products of the uranium and thorium decay series. This radiation can manifest as a background to measurements of elastic scattering of low-energy solar neutrinos <cit.>.
Additionally, Bi212–Po212 and Bi214–Po214 coincidence decays, as well as generic (α,n) reactions on target nuclei <cit.>, constitute time-correlated events, which can mimic the inverse-beta decay events used to detect antineutrinos <cit.>. In liquid scintillators, these events can be classified as α-induced using timing-based particle identification (PID), which exploits the different scintillation emission time profiles exhibited by particles with different ionization characteristics <cit.>. This is often achieved with a method known as pulse shape discrimination (PSD), which leverages ratios of the amounts of light observed over different time windows, often computed from a single multiphoton waveform, to classify events as α-like or β-like. This can be extended further by considering the time of each detected photon individually, and using a likelihood-ratio test to compare the α and β hypotheses.

Such timing-based PID has been utilized in past liquid scintillator detectors, for example in Borexino, which demonstrated low-energy α/β discrimination on the basis of scintillation timing using a Gatti filter PSD method <cit.>, and in SNO+, which aims to use a likelihood-ratio-based discriminant <cit.>. In a hybrid detector capable of identifying Cherenkov light, additional PID is possible beyond that provided by differences in scintillation emission timing, as heavy ions, including αs and protons, are below the Cherenkov threshold at electron-equivalent energies below the ∼100 MeV scale. Using only timing information, the additional discrimination power available will be limited at low energy, where there are relatively few Cherenkov photons compared to the scintillation light. But future hybrid detectors, which will employ sophisticated Cherenkov tagging, can use the ratio of the number of detected Cherenkov and scintillation photons as a PID metric, for example to reject hadronic events from atmospheric neutral-current reactions, which form a background to reactor- and geo-neutrino analyses, as well as to searches for the diffuse supernova neutrino background <cit.>.

A powerful tool for next-generation hybrid detectors is the separation of the scintillation and Cherenkov components of the observed photons, which can be achieved via fast-timing photosensors <cit.>, spectral sorting of photons <cit.>, and sophisticated reconstruction algorithms <cit.>. At the materials level, the development of water-based liquid scintillator (WbLS) <cit.> has constituted a step toward hybrid detection technology, in producing a scintillator with an intrinsically favorable proportion of Cherenkov light. WbLS is formed as a homogeneous mixture of linear alkylbenzene (LAB), 2,5-diphenyloxazole (PPO), and water. LAB and PPO are a common solvent-fluor pair which constitute a two-component system previously used in experimental neutrino physics <cit.>. There are further advantages of WbLS over pure liquid scintillator, including its relatively low cost, its water-based nature, and its better environmental compatibility. Notably, WbLS is a flexible material, in that the scintillator fraction can be tuned to adjust the light yield to suit detector goals. The scintillator loading in WbLS is variable over a broad range. In this work we perform measurements using alpha and electron sources with 1%, 5% and 10% scintillator concentrations in the WbLS targets. The light yield and emission time profile of WbLS in response to electrons have been previously reported <cit.>.

Deployment of WbLS in demonstration-scale detectors is underway, with further usage planned.
The ANNIE detector has deployed 365 kg of WbLS in a contained vessel to study high-energy neutrino reconstruction <cit.>; a 1 ton WbLS detector has been deployed at Brookhaven National Laboratory to demonstrate long-term chemical stability and recirculation techniques, with a planned scale-up to 30 tons underway; Eos will deploy 4 tons of WbLS in a fiducialized detector to demonstrate low-energy direction reconstruction <cit.>; BUTTON <cit.> will be an underground testbed that allows for studies of low-background capabilities; and RNET <cit.> is a proposed program to demonstrate reactor antineutrino detection using a WbLS target, as part of a broad reactor monitoring program for nuclear non-proliferation goals.

This work presents a characterization of the light yield and time profile of WbLS in response to α radiation, and the implications for timing-based PID in realistic detector deployments. Section <ref> introduces the simulation software used to measure the light yield and investigate the PID performance of realistic detectors. Section <ref> and Sect. <ref> describe benchtop measurements of the light yield and emission time profile in response to αs, respectively. The results of these measurements are used as input into simulations of realistic detectors at the medium- and large-scale, and Sect. <ref> presents the timing-based PID performance achievable in such detectors, utilizing both WbLS and pure LAB with PPO loaded at 2 g/L (LAB+PPO).

§ SIMULATIONS

A detailed Monte Carlo (MC) simulation is implemented in RAT-PAC <cit.>, a Geant4-based <cit.> simulation package that models the interactions of radiation with matter, as well as the production, propagation, and detection of photons. RAT-PAC allows for flexible detector geometry design, which is utilized in Sect. <ref> to model the light yield experimental setup and in Sect. <ref> to study PID in several large-scale detectors filled with scintillator.

The production of the scintillation photons is performed using a Geant4-based optical model <cit.>, which further accounts for absorption, reemission, and Rayleigh scattering of the photons as they propagate through the detector medium. The inputs to the optical model are either measured <cit.> or estimated using the component properties of water and LAB-based scintillator <cit.>.

The photomultiplier tubes (PMTs) are modeled as full 3D objects, with geometries and quantum efficiencies taken from the manufacturer specifications <cit.>. The PMTs are assigned properties, such as dark rates and transit times, based on benchtop measurements specific to each PMT model. The methods for determining the properties important for the measurements presented in this manuscript, such as the single-photoelectron (SPE) pulse characteristics, are described in Sect. <ref>.

§ LIGHT YIELD

The scintillation light yield of three target WbLS samples, at nominal scintillator loadings of 1%, 5% and 10%, is measured under both β and α excitation. Previous light yield measurements of WbLS under β excitation were reported in Ref. <cit.>. Light yield measurements are not performed for LAB+PPO, as the total amount of light saturates the dynamic range of the electronics, which are configured for sensitivity to samples with lower light levels, and would require dedicated recalibration.

Ionization quenching refers to a reduction in scintillation output associated with local ionization density, which manifests as a nonlinear relation between the scintillation light emitted and the energy deposited in the material. A first semi-empirical model due to Birks <cit.>, which uses the material stopping power as a proxy for the ionization density, remains in use today. Birks' law relates the rate of production of scintillation photons, L, to the material stopping power as

dL/dx = S (dE/dx) / (1 + k_B (dE/dx)),

where dE/dx is the specific energy loss per unit path length, k_B is the material-specific Birks' constant that describes the non-linear behavior of L with E, and S is the scintillation efficiency, i.e. the rate of photon production in an unquenched system. Electrons additionally create Cherenkov light as they propagate, which is included in our analysis in order to accurately measure the total number of emitted scintillation photons, whereas α particles at nuclear energies are below the Cherenkov threshold. In Ref. <cit.> it is shown that Birks' law provides a good fit to the behavior of the light output for α particles as a function of energy for a sample of LAB+PPO.
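As an illustration of how Birks' law is used in practice, the total light produced by a particle that stops in the medium is obtained by integrating the quenched photon production over the particle's energy loss. The following sketch shows this numerically; the scintillation efficiency, Birks' constant, and power-law stopping power used here are placeholders rather than measured values (a real calculation would use tabulated α stopping powers).

```python
import numpy as np

def birks_light(e0, dedx, S=100.0, kB=0.074):
    """Integrate Birks' law, dL/dE = S / (1 + kB * dE/dx), from 0 to the
    initial energy e0 (MeV).  dedx(E) returns the stopping power in MeV/mm,
    S is in photons/MeV and kB in mm/MeV; all values here are illustrative."""
    energy = np.linspace(1e-3, e0, 2000)
    dl_de = S / (1.0 + kB * dedx(energy))
    return np.trapz(dl_de, energy)

# Toy power-law stopping power standing in for tabulated alpha dE/dx:
toy_dedx = lambda E: 90.0 * np.power(E, -0.7)   # MeV/mm, purely illustrative
print(birks_light(4.8, toy_dedx))  # quenched photon yield for a 4.8 MeV alpha
```

Because k_B · dE/dx ≪ 1 for electrons, the same integral reduces to approximately S · e0 for βs, which is why the β data primarily constrain S.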
§.§ Experimental Setup

The experimental setup for the light yield measurement uses a radioactive source, purchased from Spectrum Techniques <cit.>, placed in a cylindrical, UV-transparent acrylic vessel (AV) filled with the target liquid. A Sr90 source is used as the β emitter and Po210 is used as the α emitter. The Sr90 β decays with Q-value 546 keV to Y90, which then β decays with Q-value 2.28 MeV and half-life 64 hours <cit.>. The Po210 primarily decays via α emission with an energy of 5.3 MeV to the ground state of Pb206, with a very small branch (0.001%) to an excited state that gives an 803 keV γ and a 4.5 MeV α <cit.>.

The AV has a 30 mm diameter and is 30 mm tall. An inner cylindrical volume with a diameter of 20 mm and height of 25 mm is hollowed out and is used to hold the target material. The source is placed directly above the target material and rests on a ledge that is 3.2 mm thick. There is no acrylic between the target material and the source; however, there is a small gap of air, about 2 mm in width, between the target and the source.

The AV is placed on top of and optically coupled to a 1-inch square H11934-200 Hamamatsu photomultiplier tube (referred to as the trigger PMT) using Eljen Technology EJ-550 optical grease. The center of an R7081-100 10-inch Hamamatsu PMT (referred to as the light yield PMT) is placed 10 cm from the source. The signal from the trigger PMT is used to initiate the data acquisition (DAQ) system. The total charge collected at the light yield PMT is used to measure the light output of the target material, by comparing the response to a detailed simulation model described in Sect. <ref>.

The signals are digitized on a CAEN V1742 digitizer over a 1 V dynamic range, sampling at 2.5 GS/s for 1024 samples, yielding waveforms that are 409.6 ns long. The data is read out over USB using custom DAQ software <cit.>, which produces HDF5 files containing the waveforms.

§.§ Waveform Processing

The digitized waveforms from the trigger and light yield PMTs are processed by first calculating a per-waveform baseline using a 120 ns window at the beginning of the trace. For the light yield PMT, events with PMT pulses are identified by selecting waveforms with peak values less than −5 mV. The time corresponding to the peak of the PMT pulse is identified, and the charge is calculated by integrating a dynamic window that extends 10 ns before and 100 ns after the sample associated with the peak. This window is selected to be long enough to include effectively all of the scintillation light produced by the WbLS <cit.>.

§.§ Simulating the Light Yield Setup

The light yield geometry, as simulated in RAT-PAC, is shown in Fig. <ref>. The PMT pulses are modeled by summing individual SPE pulses for each detected photoelectron (PE). The SPE pulses are modeled using a lognormal distribution:

f(t) = B + Q / ((t − t_0) √(2π) σ) · exp(−(1/2) (log((t − t_0)/m) / σ)^2),

where B is the baseline, Q is the charge contained in the pulse, t_0 is the arrival time of the pulse, and m and σ are shape parameters.
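A sketch of this pulse model is given below; the parameter values are illustrative rather than the calibrated ones, and, following the equation as written, a negative Q reproduces the negative-going pulses recorded by the digitizer.

```python
import numpy as np

def spe_pulse(t, B, Q, t0, m, sigma):
    """Lognormal SPE pulse of the equation above: baseline B, pulse charge Q,
    arrival time t0, shape parameters m and sigma.  Zero (baseline) for
    t <= t0."""
    out = np.full(t.shape, B, dtype=float)
    late = t > t0
    dt = t[late] - t0
    out[late] += Q / (dt * np.sqrt(2.0 * np.pi) * sigma) * \
        np.exp(-0.5 * (np.log(dt / m) / sigma) ** 2)
    return out

# A simulated multi-PE waveform is a linear sum of SPE pulses, sampled at
# 2.5 GS/s for 1024 samples (409.6 ns); negative Q for negative-going pulses.
t = np.arange(1024) / 2.5                                   # ns
wave = spe_pulse(t, B=0.0, Q=-5.0, t0=100.0, m=4.0, sigma=0.3)
wave += spe_pulse(t, B=0.0, Q=-5.0, t0=112.0, m=4.0, sigma=0.3)
```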
These parameters are measured as described in Sect. <ref>. The individual pulses from each PE are summed linearly to create a single pulse, which is sampled at 2.5 GS/s for 1024 samples. The output of the MC is a collection of HDF5 files with identical structure to the data.

§.§ Analysis Strategy

Parameter estimation, whether for the light yield measurement or calibration purposes, is achieved by matching simulation to observed data. The use of a proper simulation ensures that optical effects and related physics, for example the production of Cherenkov light by electrons, are naturally accounted for. For any given parameter of the PMT or scintillation model, the parameter is tuned by producing simulations over small steps in that parameter, determining the optimal parameter estimate by minimizing the χ^2 between the binned PMT charge observed in MC and data. For each parameter step, an ensemble of ten independent MC samples is produced, from which a mean χ^2 and uncertainty may be computed. These χ^2 estimates, as a function of the parameter value, are fit with a quadratic function in the neighborhood of the observed minimum, in order to identify the true minimum. The statistical uncertainty on the model parameter is determined by the Δχ^2 = 1 interval of the best-fit quadratic.

§.§ Calibrations

The SPE pulse shape and size of each PMT, and the photon detection efficiency of the light yield PMT, must be calibrated in order to accurately model the data using the MC. The SPE charge distribution, in particular, is a critical model input, as the integrated charge is ultimately used to compare the MC with data. The pulse shapes and sizes are calibrated using a pulsed LED pointed at each PMT in turn. The pulse driving the LED is configured such that, in each pulse, the PMT detects primarily single photons. To extract the SPE pulse parameters, the recorded waveforms are fit with the lognormal function given in Eq. <ref>, by χ^2 minimization. An example SPE pulse fit is shown in Fig. <ref>. The values of m and σ are binned and fit with a Gaussian to determine a mean and standard deviation, which are input into the simulation model, which accounts for pulse-to-pulse variation. The observed distribution of SPE charge, Q, is Gaussian for the trigger PMT, but not for the light yield PMT. To properly account for the non-Gaussian nature of the latter's charge distribution, a histogram of SPE charge values is used directly as input to the simulation, and is sampled from when generating pulses in the light yield PMT.

Having calibrated the SPE responses, the photon detection efficiency of the light yield PMT is calibrated using a pure Cherenkov source. The experimental setup described in Sect. <ref> is used, with water as the target material and a Sr90 source providing β radiation. This provides a well-understood source of light that can be used to tune the detection efficiency. This is implemented as a global scale factor applied to the quantum efficiency curve, referred to as the "efficiency scaling." This efficiency scaling accounts for unmodeled inefficiencies, such as the PE collection efficiency.
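The scan-and-fit procedure of the analysis strategy above, as applied to this efficiency calibration, can be summarized in a few lines. This sketch assumes the χ^2 means and uncertainties have already been evaluated from the MC ensembles at each scanned parameter value; all names and numbers are illustrative.

```python
import numpy as np

def quadratic_minimum(params, chi2_mean, chi2_err):
    """Fit chi2 vs. parameter with a weighted quadratic near the observed
    minimum; return the best-fit parameter and the Delta-chi2 = 1 interval."""
    a, b, c = np.polyfit(params, chi2_mean, deg=2, w=1.0 / np.asarray(chi2_err))
    best = -b / (2.0 * a)          # vertex of the parabola
    stat_err = 1.0 / np.sqrt(a)    # half-width where chi2 rises by one
    return best, stat_err

# e.g. an efficiency-scaling scan in 0.5% steps around the minimum:
scales = np.array([62.0, 62.5, 63.0, 63.5, 64.0, 64.5])
means = np.array([12.1, 9.4, 8.0, 7.9, 9.1, 11.8])   # illustrative chi2 values
errs = np.full(scales.shape, 0.4)
print(quadratic_minimum(scales, means, errs))
```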
The parameter step size used to sample the χ^2 curve is 0.5%. An example comparison of the data and MC, using an efficiency scaling of 63.5%, is shown in Fig. <ref>. Using the method outlined in Sect. <ref>, a best-fit value of 63.4 ± 0.8 (stat.)% is identified.

Several sources of systematic uncertainty were investigated; namely, we consider uncertainties arising from the integration window size, the light yield SPE charge model, and the geometry of the button source. Varying the low-level analysis cuts was determined to have a negligible impact. Systematics due to the wavelength spectra of the detected photons, which differ between Cherenkov and scintillation light, are not considered. This is expected to be a small systematic, as the wavelength spectra of detected Cherenkov and scintillation photons are similar. The integration window length is varied between 60 and 140 ns, and the maximum observed difference in the determined efficiency scaling, of 2.4%, is conservatively taken as a two-sided uncertainty. The largest systematic uncertainty is from the SPE charge model for the light yield PMT, which we modify in the simulation from histogram sampling to a Gaussian response, with mean and width extracted from the LED data. This change leads to a 7.1% difference in the best-fit efficiency scaling, which is also taken as a two-sided uncertainty. Lastly, within the plastic button source, the location of the Sr90 is uncertain, and is specified with the corresponding uncertainties in the manufacturer drawings (±4% on the radial distance to the edge and ±33% on the thickness of the plastic window between the Sr90 and the bottom) <cit.>. The location of the source is varied in the MC within those constraints, leading to an asymmetric uncertainty of +4.8%/−4.1%, driven primarily by the uncertainty in the distance between the active source and the target material.

The total systematic uncertainty is determined to be +8.5%/−8.1%. Including the statistical uncertainty, the best-fit efficiency scaling is 63.4 +5.5/−5.2 %. The central value is used in the simulations of the WbLS measurements, and the uncertainty is propagated to the resulting light yield parameters.

§.§ Measurement Methods

§.§.§ β Measurements

Measurements using the Sr90 source are primarily sensitive to the scintillation efficiency, S, due to the fact that k_B · dE/dx ≪ 1 for electrons. In this work, we tune the value of S while holding k_B fixed to the measured value for LAB+PPO. This is consistent with the methodology of similar previous measurements, such as those described in <cit.>.

We select a pure sample of Y90 decays in data using a cut on the charge collected by the light yield PMT, as a proxy for the energy deposited in the scintillator. This removes the low-energy Sr90 component, requiring simulation of only the higher-energy Y90 decay. The value for the charge cut is determined by looking at the charge of the endpoint of the Sr90 decays in the calibrated simulation output. This endpoint depends on the candidate scintillation efficiency, and we thus employ cut values well above the Sr90 endpoint predicted even for unreasonably large values of S. The charge cut values used are 10 pC, 30 pC and 50 pC for the 1%, 5%, and 10% WbLS, respectively.

The simulations are run varying the value of S in steps of 5 photons/MeV, holding the value of k_B fixed to 0.074 mm/MeV, as measured for LAB+PPO in Ref. <cit.>. The value of k_B for protons has been shown to be consistent between WbLS and oxygenated LAB+PPO <cit.>.
Nonetheless, varying this value at the level of 10% was found to have a negligible impact.

The sources of systematic uncertainty considered are the calibrated efficiency scaling, the charge integration window length, and the value of the charge cut used to reject Sr90 decays. The largest contribution is from the efficiency scaling, which in the simulation model is degenerate with S, with an uncertainty of +8.5%/−8.1%. The uncertainty from the integration window size must be reevaluated, given the broader photon arrival time distribution associated with scintillation light than with the Cherenkov light observed in the water measurements. By varying the integration window length between 60 and 140 ns, a conservative two-sided uncertainty of 4.5% is observed. The value of the charge cut is varied around the endpoint of the Sr90 (in ten steps of 1 pC above the nominal cut value), and the maximum change in the result is 0.8%. The total systematic uncertainty is thus +9.7%/−9.3%.

§.§.§ α Measurements

Due to the larger α stopping power, measurements of the Po210 source require consideration of k_B, as per Eq. <ref>. With only a single monoenergetic source, as used here, the two parameters of Birks' law are degenerate. Thus we extract the average number of scintillation photons produced, ⟨L⟩, from the best-fit MC model evaluation, and interpret this quantity in the context of Birks' model. To generate the results presented in this work, the value of k_B was held constant at the value measured for LAB+PPO <cit.>, and the value of S was tuned as described above. It was verified that equivalent results for ⟨L⟩ are achieved if, instead, S is held constant and the value of k_B is varied. Because over 99.9% of Po210 decays produce a single monoenergetic α, we do not apply any cut on the charge to select the signal. The α particles from the Po210 decays deposit an average of 4.8 MeV in the WbLS, as they lose some energy in the air between the source and the target.

The explicit sources of systematic uncertainty considered are the calibrated efficiency scaling, the charge integration window length, and an uncertainty arising from the sample geometry, namely the height of the small air gap between the source housing and the sample surface. The largest contribution is again the +8.5%/−8.1% uncertainty on the value of the efficiency scaling. The same conservative two-sided systematic of 4.5% is determined by varying the integration window length. The height of the small gap of air between the source and the sample impacts the total energy deposited by α particles in the target material much more than for electrons, due to the difference in stopping powers. The nominal height of the air gap is measured with calipers, and the weight of the sample is standardized to ensure the same amount of material is used in each measurement. With these procedures we expect the height of the gap of air to be consistent across all measurements to within 0.3 mm. To be conservative, the height of the gap of air is varied in simulation by ±1 mm, and the observed change in the best-fit light yield is +2.2%/−3.1%.

We compute an additional uncertainty associated with indeterminate deficiencies in the simulation model, apparent when comparing the data and MC, as presented in Sect. <ref>. This is quantified by determining a scaling of the uncertainty on the χ^2 estimates sampled from MC, such that the minimum satisfies χ^2/ndf = 1. This scaling is then interpreted as a residual fractional uncertainty.
Quantitatively, this increases the total systematic uncertainty on the measurement of the 10% sample by approximately a factor of 3, and is negligible for the 1% and 5% samples. The total systematic uncertainties on the measurements of the 1%, 5% and 10% samples are +9.9%/−9.8%, +9.9%/−9.8%, and +29.7%/−29.4%, respectively.

§.§ Results

§.§.§ β Measurements

The light yield PMT charge distributions of simulations using the best-fit values of S are compared to the data in Fig. <ref>, exhibiting good agreement, with χ^2/ndf ∼ 1. Table <ref> lists the measured values of S, which are consistent with the values previously reported in Ref. <cit.>.

§.§.§ α Measurements

The light yield PMT charge distributions of simulations using the best-fit model parameters are compared to the data in Fig. <ref>, exhibiting generally good agreement, with χ^2/ndf ∼ 1 (with the exception of the 10% WbLS, discussed later). The uncertainties shown on the data points for both data and MC include only statistical uncertainty, and the χ^2 values are calculated over the ranges shown. The 10% WbLS data is broader than predicted by the simulation model, which contributes to the systematic uncertainty on the measurement, as described in Sect. <ref>.

The average numbers of scintillation photons produced in the best-fit simulations are listed in Table <ref>, and the corresponding constraints on the parameters of Birks' law are shown in Fig. <ref>, with the assumed Birks' constant and measured scintillation efficiencies for electrons overlaid. It is observed that there are consistent model parameters for the higher scintillator loadings, namely the 5% and 10% samples, whereas the 1% sample produces less light than predicted by electron-like Birks quenching. A possible explanation is the breakdown of simple ionization quenching at low scintillator loadings, where relatively little energy is deposited directly to scintillating micelles. Such a departure from Birks' law would manifest differently for electrons and αs, owing to their different stopping ranges.

§ EMISSION TIMING

The scintillation time profile of two-component systems, containing a primary solvent and a secondary fluor, is typically modeled as a sum of exponentially-decaying terms, each modified by a common "rise time," which accounts for excitation of fluor molecules via non-radiative energy transfer from solvent molecules, as described in <cit.>:

S(t) = ∑_i=1^N A_i (e^−t/τ_i − e^−t/τ_r) / (τ_i − τ_r),  ∑_i=1^N A_i = 1,

where t is the time of photon emission relative to solvent excitation, τ_i are the lifetimes of the N observable decay modes, τ_r is the rise time, and S is normalized so that ∫_0^∞ S(t′) dt′ = 1.

The theory of organic scintillators usually associates the observed scintillation with primary excitation and ionization of π-electrons <cit.>. Direct excitation is into a singlet state; in the case of full ionization, electron pairs recombine preferentially into a relatively long-lived triplet state <cit.>. The larger stopping power for αs, relative to electrons, thus translates to a higher proportion of triplet states, and hence a generically slower scintillation time profile. This is the basis for timing-based PID. We describe below measurements of the scintillation time profiles of αs for three materials: 5% WbLS, 10% WbLS, and LAB+PPO. The responses of these materials to electrons were reported in <cit.>.
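For reference, the profile of Eq. <ref> is straightforward to evaluate numerically. The sketch below uses placeholder amplitudes and lifetimes, not the fitted values reported later in this section.

```python
import numpy as np

def emission_profile(t, amplitudes, decays, rise):
    """Normalized scintillation emission profile of Eq. above: a sum of
    exponential decays with lifetimes tau_i and weights A_i (summing to one),
    sharing a common rise time tau_r.  t in ns; zero for t < 0."""
    t = np.asarray(t, dtype=float)
    s = np.zeros_like(t)
    for A, tau in zip(amplitudes, decays):
        s += A * (np.exp(-t / tau) - np.exp(-t / rise)) / (tau - rise)
    return np.where(t >= 0.0, s, 0.0)

# Illustrative two-component profile (placeholder parameters):
# 90% at 2.5 ns, 10% at 15 ns, 0.2 ns rise time.
t = np.linspace(0.0, 200.0, 2001)
S = emission_profile(t, [0.9, 0.1], [2.5, 15.0], 0.2)
print(np.trapz(S, t))  # approaches 1 over a sufficiently long window
```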
The time profile of 1% WbLS was not measured in this work, as robust data-taking was impractical due to its low light yield under α radiation, coupled with the unsealed vessel design and the low coincidence rate intrinsic to the method described below.

§.§ Experimental Setup

The target material, acrylic vessel, and radioactive source were arranged as described in Sect. <ref>. The acrylic vessel was viewed by two Hamamatsu H11934-200 PMTs, the signals from which were digitized using a CAEN V1742 operating at 5 GS/s. Digitization was triggered on the rising edge of one PMT coupled to the acrylic vessel; the other was placed at a distance of ∼20 cm, operating in the single-photon regime. The experimental setup is shown in Figure <ref>.

The LAB+PPO was sparged with nitrogen gas to remove dissolved oxygen from the sample, which is known to affect the scintillation time profile <cit.>. This is not performed for the WbLS samples, whose time profiles are not impacted by the removal of oxygen <cit.>.

§.§ Waveform Processing

The digitizer produces waveforms with a total length of 204.8 ns. For each waveform, the baseline is calculated using a 15 ns window preceding the prompt light arrival, and the total charge collected is computed by integrating a 120 ns window following the arrival of the prompt light. The time of the photon for the SPE timing PMT is determined by applying a constant-fraction discriminator (CFD) to the waveform. A fractional threshold of 60% is used, with linear interpolation between the two samples where the pulse crossed threshold. The output of applying the CFD produces t_PMT. For the trigger PMT, a time value, t_trig, is assigned at the 3 mV threshold-crossing time. This threshold corresponds to roughly 1/3 of a PE, and is used to determine the time of the first photon detected in the trigger PMT, as discussed in Sect. <ref>. The quantity Δt = t_PMT − t_trig is used to extract the scintillation emission time, as discussed in the following section.
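A minimal implementation of the constant-fraction timing applied to the SPE PMT is sketched below, assuming baseline-subtracted, negative-going waveforms; edge handling is omitted for brevity.

```python
import numpy as np

def cfd_time(waveform, dt_ns, fraction=0.6):
    """Constant-fraction discriminator: find the first sample below
    fraction * peak (negative-going pulse) and interpolate linearly between
    the two samples straddling the crossing.  dt_ns is the sampling period."""
    threshold = fraction * waveform.min()
    i = int(np.nonzero(waveform < threshold)[0][0])   # first crossing
    frac = (threshold - waveform[i - 1]) / (waveform[i] - waveform[i - 1])
    return (i - 1 + frac) * dt_ns

# At 5 GS/s the sampling period is 0.2 ns:
# t_pmt = cfd_time(baseline_subtracted_waveform, dt_ns=0.2)
```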
§.§ Analysis Strategy

The emission time profile of each sample under study is determined by fitting an analytic model to the observed time profile, with particular attention paid to the form of the system response, which is determined by the trigger scheme. As detailed below, this entails both modeling the effect of the trigger PMT occupancy levels on the system response, and independently determining those occupancy levels. Taken as a fixed input for the time profile analysis, this allows the dependence of the system response on the scintillation emission time profile to be modeled during fitting, which improves the modeling of the scintillation rise time.

The distribution of observed, or "measured," SPE photodetector times, relative to the system trigger, can conceptually be written as

M(t) = S(t) ⊗ P(t) ⊙ T(t) ⊗ δ(t − t_d),

where S is the scintillation emission profile; P is the response function of the photodetector; T is the trigger profile, i.e. the distribution of times between data acquisition triggering and the deposition of energy into the sample; t_d is an overall system delay; ⊗ denotes convolution, i.e. F ⊗ G = ∫_−∞^+∞ dt′ F(t′) G(t − t′); ⊙ denotes truncated anticonvolution, i.e. F ⊙ G = ∫_0^+∞ dt′ F(t + t′) G(t′); and all operators are applied in the order reading from left to right. The anticonvolution with the trigger profile must be truncated to respect causality, as the system cannot trigger before any photons are detected.

The PMT response function P is approximately Gaussian <cit.>, but the trigger profile T is, in general, asymmetric: with the single-photon trigger times defined in Sect. <ref>, T can be written in terms of the first order-statistic (see, e.g., <cit.> for mathematical context) of the scintillation emission profile S — that is, given n photons detected by the trigger PMT, the probability density function of the time of the first detected photon. This can be written analytically: if Q(t) is the cumulative distribution function associated with the probability distribution function S(t), then the first order-statistic of S given n detected photons is

S_1^n(t) = n S(t) (1 − Q(t))^(n−1).

In practice, the number of photons collected by the trigger PMT is not constant, but varies event-to-event. If W_i is the fraction of events in which the trigger PMT detected i photons, then the trigger profile can be written as

T(t) = P′(t) ⊗ ∑_n=1^∞ W_n S_1^n(t),  ∑_n=1^∞ W_n = 1,

where P′(t) is the response function of the trigger PMT. The occupancy fractions (or occupancy spectrum) {W_i} are determined by calibrating the SPE charge of the trigger PMT, as described in Sect. <ref>, and fitting the multi-PE charge spectrum observed during data-taking, as described in Sect. <ref>. With this formulation, the anticonvolution in Eq. <ref> is then evaluated numerically. In this work, we allow for two deexcitation modes (i.e. N = 2 in Eq. <ref>), and model the combined photodetector response of the two PMTs, P ⊗ P′, as a Gaussian with σ = 163 ps, corresponding to the combined response of two identical devices operating at manufacturer specification <cit.>. The dependence of the trigger profile T(t) = T(t; τ_r, τ_1, τ_2, A_1) on the candidate emission time profile is included.

§.§ Trigger PMT Calibration

In order to determine the distribution of trigger PMT occupancies, as described in Sect. <ref>, the single-photon charge response of the trigger PMT must be parameterized and calibrated. This was accomplished using single-photon pulses provided by a blue LED. The LED was pulsed so as to establish a low coincidence rate (below 5%), in order to suppress multiphoton contamination. The SPE charge was determined using an integration window of 18 ns, which captures the full width of SPE pulses.

The distribution of SPE charge is shown in Fig. <ref>, and is fit with a two-Gaussian model, corresponding to pure noise and a single-photon signal. This simple model was found to adequately describe the observed charge distribution, with best-fit signal parameters of mean μ = 0.731 ± 0.009 pC and standard deviation σ = 0.538 ± 0.007 pC. These values are consistent with cross-check calibrations performed using longer integration lengths, though the latter provide weaker constraints, due to the higher level of electronic noise included in the integration.

§.§ Determining the Trigger Occupancy Spectrum

The trigger occupancy spectrum, i.e. the distribution of the number of photons detected by the trigger PMT, can be extracted from the trigger PMT charge spectrum collected during data-taking, given models for both the SPE charge response and charge summation. As discussed in Sect. <ref>, the distribution of SPE charge q is approximately a Gaussian function G(q; μ, σ), with mean μ and standard deviation σ. We assume that SPE pulses add perfectly linearly, so that the charge response associated with an n-PE pulse is also a Gaussian G(q; nμ, √n σ), in accordance with the central limit theorem.
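Before turning to the occupancy fit, note that the trigger profile defined above is straightforward to build numerically from any candidate emission profile. A sketch, assuming S is sampled on a uniform time grid and the occupancy spectrum {W_n} is known; the function name is illustrative.

```python
import numpy as np

def trigger_profile(t, S, weights, sigma_pmt):
    """Occupancy-weighted first order-statistic of the emission profile S,
    folded with a Gaussian trigger-PMT response of width sigma_pmt (same
    units as the uniform grid t).  weights[n-1] = W_n."""
    dt = t[1] - t[0]
    Q = np.cumsum(S) * dt                            # CDF of S
    T = np.zeros_like(S)
    for n, w in enumerate(weights, start=1):
        T += w * n * S * (1.0 - Q) ** (n - 1)        # W_n * S_1^n(t)
    kt = np.arange(-5.0 * sigma_pmt, 5.0 * sigma_pmt + dt, dt)
    kernel = np.exp(-0.5 * (kt / sigma_pmt) ** 2)
    return np.convolve(T, kernel / kernel.sum(), mode="same")
```

The full model M(t) then follows by smearing the emission profile with the photodetector response, applying the overall delay, and performing the truncated anticonvolution against T numerically.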
Given an observed distribution P(q) of charge q, we can write

P(q; {W_i}) = ∑_n=1^∞ W_n G(q; nμ, √n σ)

as a sum over multi-PE charge distributions, with W_n defined as in Sect. <ref>.

Regarding the fractions {W_i} as free parameters, Eq. <ref> can be fit to the distribution of trigger PMT charge observed during data-taking. The fit is formulated as a least-square-difference minimization between the binned charge data and the model. In practice, the infinite sum must be truncated at some finite maximum mode, n_max. In developing the results presented in this work, several choices for n_max were used, and it was found that the fit results are robust to changes in the exact value, as long as the mean charge of the maximum mode, n_max μ, is well above the endpoint of the observed charge distribution. The high dimensionality associated with the generality of this model (choosing n_max = 200, for example, requires a 200-dimensional minimization) is dealt with by evaluating the model using an automatic-differentiation library <cit.>, which allows the cost function, C_0, to be differentiated analytically. This allows for efficient minimization using gradient descent, without the numerical error which accrues when approximating the gradient using finite-difference methods. In this work, we employ a basin-hopping algorithm, in which simulated annealing is applied to repeated local gradient-descent-based minimizations.

It was observed that the best-fit occupancy spectra exhibit oscillatory behavior, wherein the preferred occupancy varies between several local maxima and minima, as opposed to the unimodal distribution that might be expected from a monoenergetic source. This is believed to be an artifact of imperfect minimization, owing to the combinatorics of high dimensionality and large correlations between neighboring occupancy modes, intrinsic to the general model defined by Eq. <ref>: because neighboring occupancies are nearly degenerate, there are many minima, which we term "impostor minima," in which one occupancy representative of its neighborhood contains an excess weighting, but only one minimum in which the weighting is correctly allocated between neighboring modes. Whether the minima identified are sufficient to properly model the trigger response was studied by manually enforcing unimodality in the occupancy spectrum, via a constraint term in the cost function which penalizes spectra according to the difference between neighboring modes. That is, we instead minimize a penalized cost function

C_λ({W_i}) = C_0 + λ ∑_n=1^∞ (W_n+1 − W_n)^2,

where the penalization strength λ controls the level of constraint on the spectrum shape: λ = 0 results in the ordinary least-squares cost function defined previously, whereas in the limit λ → ∞ the preferred occupancy spectrum is uniform across all allowed modes. The scintillation time profile results presented in this work are robust to machine precision for choices of λ spanning 10 orders of magnitude, indicating that the small difference between the impostor and true global minima is below the sensitivity of our timing data to the trigger profile.

Observed trigger charge spectra and best-fit models are shown in Fig. <ref>. Best-fit occupancy spectra, for a selection of penalization strengths, are shown in Fig. <ref>.
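The penalized fit just described can be sketched with JAX as one concrete choice of automatic-differentiation library; the sketch below is illustrative rather than the analysis code, uses plain gradient descent for the local minimization step (the full analysis wraps such steps in basin hopping), and λ, the learning rate, and n_max are placeholders.

```python
import jax
import jax.numpy as jnp

def charge_model(weights, q, mu, sigma):
    """Multi-PE charge model: sum_n weights[n-1] * G(q; n*mu, sqrt(n)*sigma)."""
    n = jnp.arange(1, weights.size + 1)[:, None]
    gauss = jnp.exp(-0.5 * ((q[None, :] - n * mu) / (jnp.sqrt(n) * sigma)) ** 2)
    gauss = gauss / (jnp.sqrt(2.0 * jnp.pi * n) * sigma)
    return weights @ gauss

def penalized_cost(weights, q, data, mu, sigma, lam):
    resid = charge_model(weights, q, mu, sigma) - data
    penalty = jnp.sum(jnp.diff(weights) ** 2)   # discourages oscillatory spectra
    return jnp.sum(resid ** 2) + lam * penalty

grad_cost = jax.jit(jax.grad(penalized_cost))   # analytic gradient in weights

def fit_occupancy(q, data, mu, sigma, lam=1e-3, n_max=200, steps=5000, lr=1e-3):
    w = jnp.full(n_max, 1.0 / n_max)
    for _ in range(steps):
        w = jnp.maximum(w - lr * grad_cost(w, q, data, mu, sigma, lam), 0.0)
    return w
```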
§.§ Results

The timing data are shown, with the best-fit scintillation time profile models, in Fig. <ref>, and the associated parameters are summarized in Table <ref>. The fits are limited to an approximately 70 ns window, which does impact the total amount of late scintillation light included in the fit, particularly for the slower LAB+PPO sample. The mismodeling in the neighborhood of the peak, observable in the structure of the model residuals, can be associated with imperfect modeling of the trigger profile, which can be attributed to possible bias in the occupancy fits described in Sect. <ref>. Such bias can result from a biased SPE calibration, of which there are numerous potential sources. One avenue for improvement is to fit for the SPE response simultaneously with the occupancy spectrum, using external calibration data as a constraint, but such ventures are beyond the scope of this work. Mismodeling of the system response principally affects, of the scintillation parameters, the rise time, while the decay times are relatively robust. As timing-based PID performance is driven by the long-timescale decay behavior, the rise times listed in Table <ref> are deemed to have a negligible impact on the particle identification capabilities.

In comparison to the scintillation timing from β radiation of the WbLS mixtures, presented in Ref. <cit.>, we find that the scintillation timing for α radiation is generally slower. This can be most clearly identified by comparing the value of A_1, which is around 95% for the 5% and 10% WbLS under β excitation. The smaller values of A_1 for α radiation indicate that a higher fraction of the light is emitted later in time, making the emission timing generally slower. This difference in timing can be leveraged in the particle identification techniques presented in the following section.

§ PARTICLE IDENTIFICATION PERFORMANCE

As discussed in Sect. <ref>, signal and background processes for MeV-scale neutrino physics often arise from different particle species. For example, there is particular interest in the community in rejecting background αs from radioactive decays, thereby improving the selection of electron-like events associated with neutrino interactions.

The properties of each of the two main sources of photons in optical detectors, Cherenkov radiation and scintillation, can be leveraged to attempt separation of event samples by particle type. Cherenkov emission is prompt in time, is broad-spectrum in wavelength, is directional, and has a particle-mass-dependent energy threshold. Scintillation emission occurs over longer time scales, is typically confined to a relatively narrow waveband, is isotropic, and has effectively no energy threshold. However, the characteristics of the scintillation response are particle-dependent via so-called quenching mechanisms, for example as discussed in Sect. <ref> and Sect. <ref>. As such, Cherenkov and scintillation photons will present differently in timing, wavelength, and angular distributions, as well as in relative proportion as a function of energy, for different types of particles.

This varied response of media to different particle species creates ample potential for PID capabilities. Traditional liquid scintillator detectors rely on timing-based separation mechanisms, but the added lever of finer knowledge of the distinct Cherenkov and scintillation signals has the potential to improve discrimination power even further in WbLS. This is more pertinent at energies relevant to this study, where αs will be below the Cherenkov threshold.
While technologies are being investigated to harness the full potential of the time-based differences, using ultra-fast photosensors and precise waveform digitization to allow for performance beyond the single-hit counting regime <cit.>; of the chromatic differences, using spectrally-sorting filters <cit.>; and of the angular differences, using high-coverage detectors with sophisticated reconstruction algorithms <cit.>, we focus solely on extending the time-based PID analogy from liquid scintillator detectors to hybrid detectors deploying WbLS and fast-timing PMTs.

In order to understand the impact of the timing and light yield measurements beyond the benchtop scale, we simulate realistic detector configurations of various sizes. We then assess the timing-based PID capabilities thereof, to understand the level of particle-type-based background rejection achievable with WbLS.

§.§ Simulations

We study detectors at the 1 ton, 1 kt and 100 kt scales, with the following configurations:

* A ∼4 ton detector of Eos-like geometry <cit.>, primarily employing Hamamatsu 8-inch R14688-100 PMTs <cit.>, with some additional models, for a total number of 231 PMTs, yielding a photocoverage of ∼40%

* A ∼1 kt right-cylindrical detector with 5.4 m fiducial radius and ∼54% photocoverage, using ∼3,000 12-inch PMTs with quantum efficiency and time response equivalent to the R14688-100

* A ∼100 kt right-cylindrical detector with 25.2 m fiducial radius and ∼85% photocoverage, via ∼47,000 PMTs of the same hypothetical model as the 1 kt configuration

The detector volumes were determined assuming a material density equivalent to water (i.e. 1 g/cm^3) for the given mass.

We simulate each detector configuration filled with 5% WbLS, 10% WbLS, and LAB+PPO. The scintillation time profiles and light yields of WbLS are taken from this work or previous measurements <cit.>, as appropriate. For LAB+PPO, the time profile under α excitation is used as measured in this work, whereas the time profile under β excitation is obtained using a similar methodology to <cit.>, and the light yield parameters are obtained from a similar methodology to <cit.>, building on work from the SNO+ collaboration <cit.>. While Sect. <ref> reports average photon production for α excitation, as opposed to specific model parameters, simulation of large-scale detectors requires a definite choice of parameter values. Here we choose parameters such that the scintillation efficiencies S for electron and α excitation are both equal to the value measured for electrons. These values are listed in Table <ref>.

We focus on determination of the PID performance at the energy of the 5.3 MeV α from the Po210 decay, as this is commonly the most prevalent α background from radioactive contaminants in liquid scintillator experiments. To do so, we simulate the Po210 α decay, and monoenergetic βs with kinetic energies that give an equivalent number of optical photons produced as the Po210 decays. The corresponding β energy varies by material, due to differences in quenching, and is shown in Table <ref>. Several other decays of interest, such as Po212 and Po214, occur at higher energies, and thus the performance determined in this section should be conservative for those decays, as the added light will improve the performance, as discussed below. All events are simulated at the center of the detector configurations, and isotropically in direction.

§.§ Classification Routine

To study the PID capabilities for these materials, we employ a likelihood-ratio statistic calculated using the hit time residuals for each event.
The hit time residual is defined as

t_res = t_hit − t_tof − t_vertex,

where t_hit is the hit time for the PE as recorded by the PMT (with the time response included), t_tof is the estimated time-of-flight from the event vertex to the PMT, and t_vertex is the time of the event vertex. For the time-of-flight calculation, in all cases we assume straight-line paths from the event vertex to a hit PMT in the target medium, i.e. no refractions or reflections at interfaces are considered, nor is the effect of scattered light, though these processes are present in the simulation. Some optical effects, e.g. scattering, introduce a common smearing which degrades performance, but the assumption of unrefracted paths does not affect the results, as likelihood ratios are invariant under differentiable transformations. While the index of refraction is varied appropriately by material, we assume a 400 nm wavelength for all photons when calculating the time-of-flight. This wavelength is near the peaks of both the R14688 quantum efficiency and the scintillation emission spectra of LAB+PPO and WbLS <cit.><cit.><cit.>. At 400 nm the modeled absorption and scattering lengths of each of the WbLS materials are longer than 10 m.

We define a classifier value c for an event as the average likelihood-ratio of the observed photon times, comparing the α and β hypotheses. That is,

c = (1/N) ∑_i=1^N [ln P(t_i | α) − ln P(t_i | β)],

where the sum is taken over the N distinct photons detected, which, depending on implementation details for a real detector, is generally either the collection of PMTs which were "hit," or the collection of individual photoelectrons detected, subject to cuts on the timing of the hits and other factors. P(t_i | α) is the probability of observing time residual t_i given the event is an α, and P(t_i | β) is defined analogously for βs. P(t | α) and P(t | β) are probability density functions (PDFs) determined from simulation, as discussed above. Dedicated PDFs are generated for each material and detector configuration. As the classifier is defined as a sample average, the classifier distributions can be understood via the central limit theorem: the mean classifier value for each particle is the likelihood-ratio averaged over the time-residual distribution for that particle, and the distribution is asymptotically Gaussian, with standard deviation inversely proportional to √N. This naturally facilitates the comparison of different materials and detector configurations, as it decouples the intrinsic classification power (the mean values), driven by the different emission time profiles, from that achievable in any particular deployment, determined by the amount of light collected.

We generate time-residual PDFs for two extreme photon-counting scenarios: simple "hits," wherein only the time of the first photon incident on a PMT is known, and full photoelectron disambiguation, in which the time of every photoelectron is known, regardless of per-PMT pileup. The scintillation emission time profiles in this work were measured over a short (∼60 ns) time scale, whereas the event window length in real detectors may be much longer, on the order of ∼400 ns. To compromise between these two time scales, we use an analysis window of −10 to 220 ns. This allows for fair inclusion of optical effects, such as Rayleigh scattering, without extending into a regime where unmeasured scintillation decay modes may dominate the detector response.
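Operationally, the classifier reduces to a few lines once the PDFs are in hand. The sketch below assumes straight-line times-of-flight at an effective photon speed (here ≈ c/n for n ≈ 1.38 at 400 nm, an assumption), and that pdf_alpha and pdf_beta are interpolations of the simulated time-residual PDFs; all names are illustrative.

```python
import numpy as np

def classifier_value(hit_times, hit_pmt_ids, pmt_positions, vertex, t_vertex,
                     pdf_alpha, pdf_beta, v_eff=218.0):
    """Average log-likelihood ratio of the equation above.  Positions in mm,
    times in ns; v_eff is an assumed effective photon speed (mm/ns).
    Positive values are alpha-like, negative values beta-like."""
    paths = np.linalg.norm(pmt_positions[hit_pmt_ids] - vertex, axis=1)
    t_res = hit_times - paths / v_eff - t_vertex
    return np.mean(np.log(pdf_alpha(t_res)) - np.log(pdf_beta(t_res)))
```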
Vertex reconstruction is not performed; instead we use the true position and time of each event in place of the reconstructed vertex, though the effect of simply smearing this vertex by a characteristic resolution is studied. The emission timing measurements performed in the 60 ns window are used to extrapolate into the longer time window, and this is determined to underpredict the scintillation timing tail for LAB+PPO, in comparison to work that used longer analysis windows <cit.>. This has less of an impact on the WbLS mixtures, which are significantly faster and emit less light beyond the 60 ns analysis window. Examples of the time-residual PDFs can be found in Fig. <ref>, which shows the time residuals for each different material in the Eos-like detector, and in Fig. <ref>, which shows the time residuals for each different detector size with 10% WbLS. The additional peak around 40 ns is caused by PMT late pulsing, typical of large-area PMTs. The Eos-like detector, owing to its relatively small size, observes a relatively high proportion of multi-PE PMT hits, which is distinct from the larger detectors, which operate largely in the SPE regime. As such, the difference between "first PE" and "all PE" PDFs, and hence the corresponding PID performance, in the Eos-like detector is larger than for the other two detectors. In particular, for LAB+PPO the "first PE" PDFs in the Eos-like detector are very similar between the two species, resulting in poorer PID performance than would naively be expected for LAB+PPO. This comes as a result of all the PMTs in the detector registering multiple PE, so that the "first PE" PDFs lack a substantial amount of information, particularly from hits coming at later times, as shown in Fig. <ref>. With modern detectors, such as Eos, seeking to differentiate hits at least in the "few PE" regime, using photosensors with faster timing and better readout schemes, we present results based on PDFs using all hit times.

§.§ Analysis Methods

The classifier value is a quantity for which, ideally, the distributions associated with αs and βs have little overlap, which would enable an efficient selection cut to be employed in the course of a physics analysis. As positive values are associated with α-like events, and negative values with β-like events, a simple threshold value can be used to perform classification. We study these distributions and the resulting classification performance both analytically and using further simulation.

As described in Sect. <ref>, these distributions are asymptotically Gaussian, and as such can be summarized by their means and standard deviations. Mathematically, this translates to computing the first and second moments of the log-likelihood ratio: the former is the mean classifier value, and the latter, when divided by √N, is the standard deviation. We calculate these values for each detector configuration via numerical integration, to offer the full classifier distributions (at least, asymptotically) in a convenient form, to facilitate follow-on performance studies.

Then, using further full MC simulations to sample from the non-asymptotic classifier distributions, we study various figures of merit which quantify the classification performance, as a function of the threshold value. An example pair of classifier distributions, both sampled from simulation and expressed as a Gaussian, can be found in Fig. <ref>, which corresponds to a 10% WbLS-filled Eos-like detector.
The two approaches are compared in the leftmost panel, which shows the compatibility of the full MC sampling and the asymptotic distributions, indicating that the non-Gaussian components of the classifier distributions at this light level are small.

As electrons are usually considered the "signal" in analyses of neutrino detector data, we label β events as signal and α events as background in this work. The specific classification figures of merit considered are:

* Sample purity: N_β(c_cut) / N_tot(c_cut)

* Signal acceptance: N_β(c_cut) / N_β,tot

* Background rejection: 1 − N_α(c_cut) / N_α,tot

* Significance: N_β(c_cut) / √(N_β(c_cut) + N_α(c_cut))

* Youden's J: N_β(c_cut) / N_β,tot − N_α(c_cut) / N_α,tot

where c_cut is the cut value on the classifier; N_α and N_β are the numbers of α and β events selected from the sample, respectively; N_α,tot and N_β,tot are the total numbers of α and β events in the sample, respectively; and N_tot = N_α + N_β is the total number of selected events. The selection is performed such that all events with classifier value less than or equal to c_cut are considered to pass the selection cut, and all those with classifier value greater than c_cut fail the selection cut. Each of these quantities is computed as a function of the cut value c_cut; a worked sketch follows below.
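As referenced above, the figures of merit follow directly from the two classifier samples; a minimal sketch, with illustrative names:

```python
import numpy as np

def figures_of_merit(c_beta, c_alpha, c_cut):
    """Evaluate the figures of merit defined above; events with classifier
    value <= c_cut pass the selection."""
    n_b = np.count_nonzero(c_beta <= c_cut)    # selected betas (signal)
    n_a = np.count_nonzero(c_alpha <= c_cut)   # selected alphas (background)
    acceptance = n_b / c_beta.size
    rejection = 1.0 - n_a / c_alpha.size
    return {
        "purity": n_b / (n_b + n_a) if (n_b + n_a) else 0.0,
        "acceptance": acceptance,
        "rejection": rejection,
        "significance": n_b / np.sqrt(n_b + n_a) if (n_b + n_a) else 0.0,
        "youden_j": acceptance + rejection - 1.0,
    }
```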
<ref>, scattering and absorption of photons as they propagate through ever-larger detectors can have a substantial impact, as the smaller, lower-photocoverage 4 detector outperforms the larger, higher-photocoverage -scale detectors. This is in accordance with the washing out of features in time-residual PDFs for larger detectors (Fig. <ref>) due to smearing from absorption/reemission and optical scattering. Additionally, we find that the particle identification performance ranking of the various detector configurations is robust to the choice of figure of merit used to generate the cuts. Table <ref> shows the performance for 90%, 99%, and 99.9% α rejection to directly compare to existing experiments, such as Borexino <cit.> and SNO+ <cit.>. We also show in Fig. <ref> the simultaneously achievable β acceptance and α rejection for the examined scenarios. A consequence of the short time window used to measure the emission timing in Sect. <ref> is an underestimation of the amount of light emitted at late times, beyond 60 ns. This yields a conservative estimate of the expected PID performance for LAB+PPO and, to a lesser extent, the WbLS. We repeated the PID analysis using the published LAB+PPO emission timing measured by the SNO+ detector <cit.> for all three detector configurations. These measurements use a longer analysis window and fit using a four-decay exponential model. We find that using the SNO+ emission timing results in 100% separation of the α and β particles in all three detector configurations. §.§ Discussion The results presented in Sect. <ref> use only timing information, considered over a fixed analysis window. A pertinent question is the role of the Cherenkov light produced by electrons, as its higher proportion in WbLS samples may be expected to contribute to timing-based PID beyond that afforded by differences in scintillation emission time profiles. By ignoring Cherenkov photons in the PID analysis of a 5% WbLS-filled EOS-like detector, we find, surprisingly, that the presence of Cherenkov light degrades α rejection at the level of 1.5%. This is due to the Cherenkov component competing with the scintillation light rise time, which is measured to be larger for βs than for αs. Thus, at Po210 energies, the small amount of Cherenkov light emitted for βs causes the effective rise-time to look more similar to that of αs, degrading the classification power. At higher light levels, produced by higher energy βs, the larger Cherenkov contribution is sufficient to produce a genuine shape difference in the time profiles, and enhances PID performance. For example, at the light levels associated with Po212 α decays, the behavior is reversed, and the Cherenkov light provides an increase in α rejection of roughly 1.5%, absolute. Of course, it should be noted that the measured scintillation rise times may be subject to systematic uncertainties associated with choices in system response modeling, potentially affecting the competition between the Cherenkov component and the scintillation rise times discussed here. Note also that hybrid detectors which leverage dedicated techniques to identify, i.e. “tag,” Cherenkov photons may achieve PID performance beyond that available via the simple likelihood-ratio statistic employed here, owing to the inclusion of other observables. Examples of such observables are angular information, employed via the observation of anisotropic photon collection after performing vertex and direction reconstruction, and wavelength, employed via spectral photon sorting.
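For reference, the figures of merit defined in the Analysis Methods subsection can be evaluated on sampled classifier values with a few lines of code. This is a minimal sketch under our own assumptions about the array inputs, not the analysis code used for the results above:

import numpy as np

def figures_of_merit(vals_beta, vals_alpha, cut):
    # Selection keeps events with classifier value <= cut (beta-like);
    # vals_beta and vals_alpha are arrays of per-event classifier values.
    n_b = np.sum(vals_beta <= cut)      # selected beta (signal) events
    n_a = np.sum(vals_alpha <= cut)     # selected alpha (background) events
    n_tot = n_b + n_a
    return {
        "purity": n_b / n_tot if n_tot else 0.0,
        "signal_acceptance": n_b / vals_beta.size,
        "background_rejection": 1.0 - n_a / vals_alpha.size,
        "significance": n_b / np.sqrt(n_tot) if n_tot else 0.0,
        "youden_j": n_b / vals_beta.size - n_a / vals_alpha.size,
    }

Scanning the cut value over the sampled distributions reproduces curves of the kind shown in the right panel of the classifier figure.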
The timing, topological, and spectral information could, in principle, be combined in a joint vertex-direction-PID fit, which inherently determines a particle's identity based on the interaction geometry. These extensions are promising, but their technical exploration is beyond the scope of this work. Lastly, it is to be expected that the PID would improve in larger detectors, compared to what is shown here, because of the ability to leverage differences in the time profile over a longer time window. § CONCLUSION This work presents the first characterization of the response of WbLS samples to α radiation. Using a simulation model, the total amount of scintillation light produced by a 4.8 MeV α particle in 1%, 5%, and 10% WbLS is measured, and the results are interpreted in the context of ionization quenching using Birks' law. The scintillation emission time profiles of 5% and 10% WbLS are determined using an analytic model, which includes detailed modeling of the system response. These material properties are used as input parameters for MC simulations used to make predictions for real detectors, ranging in scale from few-ton to 100. Greater PID performance is observed in higher light yield materials and smaller detector sizes, despite lower photocoverage, owing to the increased importance of optical effects at play when photons traverse larger distances. These results utilize only timing information over a limited analysis window, and thus provide a baseline performance which may be improved upon by dedicated techniques leveraging the topological and spectral features of Cherenkov light and by utilizing additional scintillation light across a broader window. Neutrino experiments, such as Borexino, have found that α rejection is critical in studying low energy CNO solar neutrinos <cit.>. Thus, future experiments using WbLS will likely need to utilize α/β discrimination to improve solar neutrino sensitivity. The results presented in this manuscript allow for more realistic solar neutrino studies in WbLS that build on work detailed in Ref. <cit.>. This will allow a better understanding of the solar neutrino physics reach of hybrid detectors, although such advanced detectors will likely use additional technology and analysis techniques to improve α/β PID beyond the timing-only PID studied in this manuscript. § ACKNOWLEDGEMENTS Work conducted at Lawrence Berkeley National Laboratory was performed under the auspices of the U.S. Department of Energy under Contract DE-AC02-05CH11231. The work conducted at Brookhaven National Laboratory was supported by the U.S. Department of Energy under contract DE-AC02-98CH10886. EJC was funded by the Consortium for Monitoring, Technology, and Verification under Department of Energy National Nuclear Security Administration award number DE-NA0003920. The project was funded by the U.S. Department of Energy, National Nuclear Security Administration, Office of Defense Nuclear Nonproliferation Research and Development (DNN R&D). This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0018974. | http://arxiv.org/abs/2311.16288v1 | {
"authors": [
"E. J. Callaghan",
"T. Kaptanoglu",
"M. Smiley",
"M. Yeh",
"G. D. Orebi Gann"
],
"categories": [
"hep-ex",
"physics.ins-det"
],
"primary_category": "hep-ex",
"published": "20231127200400",
"title": "Characterization of the scintillation response of water-based liquid scintillator to alpha particles, and implications for particle identification"
} |
Ontologising Trustworthy in the Telecommunications Domain. Ian Oliver (Nokia Bell Labs, Espoo, Finland; [email protected]; <https://orcid.org/0000-0002-8319-2612>), Pekka Kuure (Nokia Mobile Networks, Espoo, Finland), Wiktor Sedkowski (Nokia Strategy and Technology, Wrocław, Poland), Thore Sommer (Christian-Albrechts-Universität zu Kiel, Germany). Based upon trusted and confidential computing platforms, telecommunications systems must provide guaranteed security for the processes and data running atop them. This in turn requires us to provide trustworthy systems. The term trustworthy is poorly defined, with corresponding misunderstanding and misapplication. We present a definition of this term, as well as others, demonstrate its application against certain telecommunications use cases, and address how the learnings from ontologising these structures contribute to standardisation and the necessity for FAIR ontologies across telecommunications standards and hosting organisations. Trustworthy, Trusted Computing, Telecommunications, O-RAN, 5G, 6G, Confidential Computing § INTRODUCTION The telecommunications domain is built on an extensive set of interoperable standards mediated by ETSI, GSMA, 3GPP, IETF etc., covering almost every aspect from radio physics to cloud and service architectures. This process, described by some <cit.> as Machiavellian or as a prisoner's dilemma with multiple, sometimes irrational opponents, generates specifications that have enabled five generations of telecommunications systems spanning over 40 years, up to the current work on sixth-generation (6G) standards. Standards adhere to the `FRAND' or fair, reasonable, and non-discriminatory terms of usage <cit.>, and terminological agreement must be made not just at a technical level but also at the legal and patenting levels. The use of terminologies and concepts adhering to findable, accessible, interoperable and reusable (FAIR) principles is therefore critical, and more so when standards cross multiple standardisation bodies and are integrated and cross-referenced. Such terminologies or ontologies are found inside the standards documents and are rarely presented as their own stand-alone terminological or ontological documents. This causes problems when terminology is then inconsistently applied across standards, even in the same series. The use of the term trustworthy is becoming endemic within standardisation and is especially prevalent in telecommunications because of the critical nature of both the systems and the data being processed <cit.>. It is increasingly applied to almost everything; trustworthy artificial intelligence in telecoms systems <cit.>, for example, is and will remain a focus of 6G. The term trustworthy is itself not defined, nor is it rooted in or linked to definitions of the term and associated terms in any other standard, especially those specifically dealing with trust. The most interesting specifications are the IETF Remote Attestation Procedures (RATS) and IETF Trusted Execution Environment Provisioning (TEEP), which both define a number of terms related to trusted and confidential computing <cit.>.
These are directed at the supporting architectures (and may be included in the later architectural models here) but do not explicitly address the definition of trustworthy. The term trustworthy originates from the concepts of trusted computing and the later confidential computing - mechanisms for establishing the identity and integrity of systems, and then ensuring that any workload (e.g. AI processes, telecoms databases etc.) that requires protection (privacy, secure processing) can run in a secure environment: a trusted and attested machine <cit.>, a CPU enclave etc. <cit.>. A trustworthy system is the minimal basis for providing any form of confidential computing <cit.>. As this is the chosen technology for providing `trustworthy AI' and securing data processing, ensuring a good, common understanding has wide-reaching effects. This introduces a further problem of how trust is established <cit.>, how that trust is transferred and limited across the system - the chain of trust - and how that trust is maintained over time. In this paper we present the outline ontology for trusted and confidential computing, the underlying semantics, its use in selected telecoms domains, and how the terms related to trust can be utilised. From this we can properly frame the discussion about what makes something trustworthy, and, if presented as its own standard or supporting documentation, the ontology can be utilised as a common reference. We conclude with our learnings and how this work proceeds with regard to standardisation and implementations. § SEMANTICS OF TRUSTWORTHY The definition of trustworthy according to the Cambridge English and Merriam-Webster dictionaries: trustworthy, adjective. Deserving of trust, or able to be trusted, dependable or worthy of confidence. Trusted computing, based on some root of trust with the necessary infrastructure such as a Hardware Security Module (HSM) or Trusted Platform Module (TPM), provides a mechanism for establishing and endorsing trust. Remote attestation <cit.>, as part of the management, operations, and supply-chain, provides the verification point.[The Nokia Attestation Engine is one implementation of this: <https://github.com/nokia/AttestationEngine>] Remote attestation is based around the idea of measurement, for example, cryptographic hashes calculated during the boot-time of a device, or as part of file system integrity monitoring. This further involves collecting those measurements - known as claims - and verifying them against known good values. Once this is complete, a decision is made whether to trust the device. This gives four actions: measure, attest, verify and decide, as expressed in figure <ref> utilising the formalisms of <cit.>. If measurements - cryptographic hashes, boot logs, identity structures, TPM/SGX quote structures etc. - can be taken and the element has some interface providing access to them, then the element is attestable and a claim can be produced. Good examples of claims are the TPM 2.0 Quote and SGX tcbblock structures, while the IETF RATS Entity Attestation Token (EAT) <cit.> is a structure for generalising these. If the correctness of the obtained claim can be verified then the element is trustworthy - note the dictionary definition above. This does not mean we trust the element, but rather that we can obtain information that we can utilise to make a confident decision about whether it is trusted or not. Figure <ref> shows how the notion of trust [as a categorical sub-object classifier] is formed.
Trusted elements are those elements (a subset, in this formalism) that map to positive decisions. The statement ϕ_Decision represents the whole measurement, attestation, verification and decision process. Decisions are not necessarily binary true/false in nature, and from this we could [and in the Nokia Attestation Engine we do] have intermediate values, to explicitly handle network failures, incorrect requests etc., which in turn provides the possibility of degrees or levels of trust and their [partial] ordering. This is out of scope for this discussion. The above diagrams (figs. <ref> and <ref>) are simplified - we do not show the mechanisms for internal structures and their composition [objects, such as elements, measures etc., are composed utilising push-outs, which very naturally gives us structures such as the TPM 2.0 quote] - but they form the core principles for these terms, how they are formed and what they mean. This gives our semantic framework for trust and trustworthiness and relates the objects in our system - elements, measures, claims, results and decisions - with the actions over these and the descriptions such as trusted, attestable, trustworthy <cit.>. In the following sections we apply this to the concepts and models found in telecommunications systems. § NFV, O-RAN AND 5G The ETSI Network Function Virtualisation Architecture <cit.> and Open Radio-Access Network Architecture <cit.> are two of the main components of modern telecommunication systems <cit.>. These each have their own architecture models which serve to define the major functional components and their interfaces. These are used to define terminology and plan the standardisation processes, especially in terms of the functionality and data required by the interfaces. The elements in these models should not be taken to be component definitions but rather the broad roles the implementing components take; in the highly simplified (and incomplete!) figures <ref>, <ref> and <ref> we utilise the relationships between components rather than their interface names as from the specification <cit.>. * O-Cloud is constructed from [physical] infrastructure nodes. These may conform to the NFV architecture principles, and some may reside within a 5G deployment. * O-eNB is an eNB (4G) or gNB (5G) that supports the E2 (SMO-*NB) interface. * A 5G base station (gNB) comprises an O-RU, O-DU and O-CU and potentially additional services. * O-CU, O-DU and the 5G services (AMF, SMF, UPF etc.) can be realised and deployed as bare-metal processes or as virtualised functions, typically containers but also virtual machines. * O-RUs are physical network functions with one or more antennas. An O-DU may be co-located with the O-RU for real-time requirements and constraints. * An O-RU, being a physical computing (albeit specialised) unit, may be managed as part of the NFVI. * O-DUs and some 5G services may be distributed towards the network edge. * Customer services provided by OSS/BSS may be implemented as bare-metal, containers etc., and distributed accordingly. * O-RAN SMO and NFV MANO will be implemented similarly to other services. We can further describe these relationships by relating the implementation to the network function virtualisation infrastructure (NFVI in fig. <ref>), which defines a collection of computing elements - in effect, things with CPUs and, in our case, trusted elements such as TPMs, CPU enclaves etc.
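To make the measure-attest-verify-decide semantics above concrete, the following is a minimal Python sketch of the decision pipeline; the type names and the three-valued decision are illustrative assumptions on our part, not the Nokia Attestation Engine API:

from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class Decision(Enum):               # decisions need not be binary
    TRUSTED = "trusted"
    UNTRUSTED = "untrusted"
    INDETERMINATE = "indeterminate"  # e.g. network failure, bad request

@dataclass
class Claim:                         # the result of attestation, e.g. an EAT or TPMS_ATTEST
    element_id: str
    evidence: dict

@dataclass
class Element:
    element_id: str
    # an element is *attestable* iff it exposes an interface to its measurements
    attest: Optional[Callable[[], Claim]] = None

def is_attestable(e: Element) -> bool:
    return e.attest is not None

def verify(claim: Claim, golden: dict) -> bool:
    # An element is *trustworthy* if its claims can be verified
    # against known-good ("golden") values.
    return all(claim.evidence.get(k) == v for k, v in golden.items())

def decide(e: Element, golden: dict) -> Decision:
    if not is_attestable(e):
        return Decision.INDETERMINATE
    try:
        claim = e.attest()
    except ConnectionError:
        return Decision.INDETERMINATE
    return Decision.TRUSTED if verify(claim, golden) else Decision.UNTRUSTED

Note how the sketch keeps attestable (an interface exists), trustworthy (the claim is verifiable) and trusted (the decision is positive) as distinct predicates, matching the semantics given above.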
It is important to note that radio units, despite their internal complexity including ASICs and other specialised hardware for processing radio signals, are in effect just computers like any other. § PROVIDING TRUST As discussed earlier, trust in a system is provided from a core root-of-trust using hardware root-of-trust modules such as the TPM, or some other mechanism such as that found in CPU enclaving systems such as Intel SGX, AMD SEV-SNP and TrustZone. Figure <ref> describes these elements. A device consists of a number of components which are capable of running network applications. Devices power on from immutable code <cit.>; the configuration will vary across ASICs, FPGAs, PCs and microcontrollers, up to mainframes and supercomputers. The measured boot process <cit.>, in forms such as the UEFI measured boot on x86 devices, the Dynamic Root-of-Trust found in Intel TXT equipped systems, similar mechanisms in some Power-based systems and [very] occasionally in IoT devices <cit.>, starts with an explicitly trusted piece of code <cit.>, which then cryptographically measures (hashes) the next piece of code to run - the firmware proper - and writes this to a TPM. This process continues during boot until run-time, forming a chain-of-trust through a Merkle tree of hashes describing the system's integrity <cit.>. Secure boot is a complementary process providing verification of a component or software through digital signatures, and locally prevents the execution of unverified components. Secure boot only provides evidence about the provenance and not the validity of the contents. If the signing keys are lost - an unfortunately regular occurrence - then anyone can potentially sign components. Figure <ref> shows some of the more important quote structures which provide cryptographically signed evidence or claims about the machine's identity and integrity. We have abstracted away a number of details; for example, a CPU enclave generating an enclave key is a very specific and complex process. However, these vendor-specific details are not required at this level. Important to note is the amount of cross-referencing, such as the TPM's Endorsement Certificate being verifiable against a manufacturer's certificate authority, or the UEFI eventlog's contents being used to calculate PCR values etc. The TPM's quote (TPMS_ATTEST) structure itself is signed by an attestation key (AK) derived from the endorsement key (EK) and contains a hash of the hashed public parts of the AK and the EK. Referring back to figure <ref>, these are the measurements from any given element. Figure <ref> shows how these (with composition) link together in a specific example: here a PC's firmware and firmware configuration form measurements, and the composition of these a quote, which becomes a claim through the process of attestation. This shows the structure of a PC composed of a TPM, firmware and configuration, each of these with its own measurement(s). A TPM quote is a measurement formed by the composition of these measures. Composition of PCRs means an ordered concatenation and hashing (Merkle tree), and composition with a key means signing by that key. It is worth noting the fields of the TPM's quote or TPMS_ATTEST structure as described in table <ref> - the quote structure for SGX can be found in <cit.>. It follows that the term `integrity of a system' is just the
This implies a lattice in which we can delineate a bound above which gives a minimal set of sufficient measurements that must be taken.§ TRUSTING A TELECOMMUNICATIONS DEPLOYMENTUtilising the models in the earlier sections we apply these to some selected use cases for prototypical telecommunications deployments. These are used to both to validate our models and assist in deriving the properties a `trusted/trustworthy' O-RAN/5G/NFV system requires. It is exactly these use case that are used to drive the standardisation processes and the notions of trustworthy. §.§ O-RU Trust in NFVA simple case is an O-RAN Radio Unit is an NFVI element containing in this case a TPM - this is a typical deployment. The O-RU provides an attestation interface towards an attestation server within the O-RAN SMO as shown in in figure <ref>.The following questions deriving from this must be answered in order to establish trust: * Which measurements and combinations are are required?* What is the decision mechanism by which O-RAN SMO denotes the O-RU as trusted?* What is the decision mechanism by which NFV MANO denotes the O-RU as trusted?* What is the correlation between the O-RAN SMO view of trust and NFV MANO view of trust? We may also establish failure scenarios such as if the NFV MANO detects a loss of trust but the SMO not, is the O-RU still trusted? Is there a trust ordering such that x:RU | trusted_SMO(d) ⇒ trusted_NFV(d)? §.§ O-DU TrustThe question of O-DU Trust is more complex; the DU is often deployed as a container, which in turn runs on some NFVI element. The DU uses one or more RUs as part of its operation. Figure <ref> shows a scenario - NB: the RU here is the same RU (with measurements) as in fig.<ref>.Again, we infer similar questions about the NFVI Element and how MANO decides on its trust; the O-RU is as earlier. For the O-DU we have the following questions inferred:* Can an O-DU only run on trusted NFVI elements?* Does a similar trust ordering exist as before: d:DU | trusted_SMO(d) ⇒ trusted_NFV(d)?? * Can an O-DU manage untrusted O-RUs and vice versa, can a trusted O-RU be utilised by an untrusted O-DU?§.§ O-DU ConfidentialityThe final case depicted in figure <ref> involves confidential computing and that some functionality (maybe some AI function that needs to be trustworthy?) within a O-DU must run within a CPU enclave.The questions raised earlier again are all valid here, but now with the following additions: * Does a relationship of trust exist between the enclave and the NFVI element on which that enclave is implemented (via the CPU)* Does the enclave need to be on the same NFVI element? Does this imply `enclave mobility' or is there a `locality' of processing constraint?* Can a function requiring an enclave run on an untrusted machine, even though the enclave may be trustworthy (and trusted)? * How is trust communicated across the composition (part-of) relationships? In current trustworthy systems there is no direct mechanism to relate the quote generated by an enclave with the quote generated by the the host machine's TPM. If a link does exist then it is very ephemeral and hidden in firmware measurements during boot (if at all, and even then it is most likely proprietary and unavailable, which fails our attestatble definition from earlier).From this model we are forced to address the nature of composition relationships and the establishment and maintenance chain-of-trust over these relationships. 
We are also forced to confront the fact that a function may be trusted while being part of a larger component and running on trusted elements. Put in this manner, it becomes obvious that the trustworthiness of individual components is not the same as trustworthiness applied to larger systems. § CONCLUSIONS We have presented the domain and set out to define the underlying semantics for the terms trust, trustworthy and attestable, and outlined how these terms fit together. We have then shown an example applying this to the O-RAN, 5G and NFV concepts which underpin modern telecommunications systems. Obviously, for brevity, much has been simplified in how these systems are composed, but the result satisfies the definition of an ontology itself, in that we provide a mechanism for describing the kinds of things in these systems and their nature. Through the use of some simple examples drawn from the telecoms domain, we have shown that we can easily generate a large number of questions about how trustworthiness is established and how a trust decision might be made, as well as identifying where attestation must take place. Larger questions then arise from how the chains-of-trust and system composition interact. From a FAIR point of view, it has been obvious to many in this domain that such ontologies must be made to provide a common terminological grounding. In day-to-day work we interact with over ten international standards organisations specifically for telecommunications - this does not include governmental regulations and internal specifications and requirements. Any such ontology in this area is most likely an evolving document rather than a fixed standard. It is also necessary that the terms here do not conflict with, but are mappable to, existing terms. For example, an EAT in IETF RATS is a `claim' in our ontology, and its contents are `measurements'. Implementations of the ontologies here have been made in Protégé and will be released as open source, possibly via one or more standards organisations. At minimum these ontologies will remain open and freely accessible, and become part of any input to standards relating to cybersecurity across the standardisation organisations. A final note on the complexity of the ontologies: in the current model we have approximately 100 types, 200 relationships, and the description logic is 𝒜ℒ𝒞ℋℐℱ(∘,*). | http://arxiv.org/abs/2311.15839v1 | {
"authors": [
"Ian Oliver",
"Pekka Kuure",
"Wiktor Sedkowski",
"Thore Sommer"
],
"categories": [
"cs.CR",
"cs.SE",
"D.2.1"
],
"primary_category": "cs.CR",
"published": "20231127140446",
"title": "Ontologising Trustworthy in the Telecommunications Domain"
} |
Aligning Non-Causal Factors for Transformer-Based Source-Free Domain Adaptation. Sunandini Sanyal*, Ashish Ramayee Asokan*, Suvaansh Bhambri, Pradyumna YM, Akshay Kulkarni, Jogendra Nath Kundu, R Venkatesh Babu - Vision and AI Lab, Indian Institute of Science, Bengaluru (*Equal Contribution). January 14, 2024 ============================================================================================================================================================================================================================== Conventional domain adaptation algorithms aim to achieve better generalization by aligning only the task-discriminative causal factors between a source and target domain. However, we find that retaining the spurious correlation between causal and non-causal factors plays a vital role in bridging the domain gap and improving target adaptation. Therefore, we propose to build a framework that disentangles and supports causal factor alignment by aligning the non-causal factors first. We also investigate and find that the strong shape bias of vision transformers, coupled with their multi-head attention, makes them a suitable architecture for realizing our proposed disentanglement. Hence, we propose to build a Causality-enforcing Source-Free Transformer framework (C-SFTrans[Project Page: <https://val.cds.iisc.ac.in/C-SFTrans/>]) to achieve disentanglement via a novel two-stage alignment approach: a) non-causal factor alignment: non-causal factors are aligned using a style classification task which leads to an overall global alignment, b) task-discriminative causal factor alignment: causal factors are aligned via target adaptation. We are the first to investigate the role of vision transformers (ViTs) in a privacy-preserving source-free setting. Our approach achieves state-of-the-art results in several DA benchmarks. § INTRODUCTION Machine learning models often fail to generalize well in scenarios where the test data distribution (target domain) differs significantly from the training data distribution (source domain). In practice, a model often encounters data from unseen domains, i.e., under domain shift. This leads to poor deployment performance, which critically impacts many real-world applications such as autonomous driving <cit.>, surveillance systems <cit.>, etc. Unsupervised domain adaptation (UDA) methods <cit.> aim to address the challenges of domain shift by learning the task knowledge of the labeled source domain and adapting to an unlabeled target domain. However, these works <cit.> require joint access to the source and target data. Such a constraint is highly impractical, as data sharing is usually restricted in most real-life applications due to privacy concerns. Hence, in this work, we focus on the practical problem setting of source-free domain adaptation (SFDA) <cit.>, where a vendor trains a source model and shares only the source model with a client for target adaptation. Conventional DA works <cit.> aim to learn domain-invariant representations by aligning only the task-related features between the source and target domain. We refer to these features as causal factors that heavily influence the goal task. We also denote factors that capture contextual information as non-causal factors. Causal factor alignment leads to a low target error, thereby improving the adaptation performance, as shown theoretically by <cit.>. But these methods require concurrent access to the source and target domain data, which is impractical in the restricted data sharing scenarios of SFDA.
Further, causal and non-causal factors are spuriously correlated <cit.>, and this correlation may break when a domain shift occurs. Hence, in our work, we propose to retain this spurious correlation through disentanglement and learning of both causal and non-causal factors. Motivated by the remarkable success of vision transformer architectures <cit.>, we propose to explore the possibility of disentanglement and alignment of causal/non-causal factors using transformers. Recent domain-invariance-based SFDA works <cit.> have been found to be highly effective on convolution-based architectures. However, in our analysis, we find that a simple vision transformer (ViT) baseline outperforms the state-of-the-art CNN-based methods, implying that transformers are highly robust to domain shifts <cit.>. Secondly, we observe that the domain-invariance methods do not significantly impact the performance of vision transformers due to their inherent shape bias <cit.>. Based on these observations, we propose to leverage ViTs for a realizable disentanglement of causal and non-causal factors. Hence, in our work, we seek an answer to an important question: “How do we develop a framework for disentanglement using vision transformers to retain the spurious correlation between causal and non-causal factors, in the challenging source-free setting?" Conventional approaches <cit.> completely ignore the non-causal factors and align only the causal factors, which leads to sub-optimal alignment of the class-specific clusters, as shown in Fig. <ref>B. This results in poor performance, especially in cases of a large domain gap between the source and target. Hence, we propose to first align the non-causal factors, which leads to an overall global alignment between the source and target domain. We then align the goal task-discriminative causal factors, which optimally aligns the local class-specific clusters (Fig. <ref>C). Hence, in our work, we pinpoint that “aligning the non-causal factors is crucial for improving target adaptation performance". We next seek to devise a framework that explicitly guides the process of disentanglement and alignment. We develop a novel framework - Causality-enforcing Source-Free Transformers (C-SFTrans) - that comprises two stages: a) Non-causal factor alignment, and b) Task-discriminative causal factor alignment. To enable non-causal factor alignment, we propose a subsidiary task of non-causal factor classification for a source-free setting. Prior works <cit.> show that multi-head self-attentions in transformers focus on redundant features. To facilitate the separate learning of diverse non-causal factors and causal factors, we utilize this inherent potential of transformers to designate non-causal and causal attention heads for training. We propose a novel Causal Influence Score criterion to perform the head selection. The two stages of non-causal factor alignment and task-discriminative causal factor alignment are performed alternately to achieve domain-invariance. We outline the major contributions of our work: * To the best of our knowledge, we are the first to explore vision transformers (ViTs) for a practical source-free DA setting. We investigate domain-invariance in vision transformers and provide insights to improve target adaptation via non-causal factor alignment. * We propose a novel two-stage disentanglement and alignment framework, Causality-enforcing Source-Free Transformers (C-SFTrans), to preserve the spurious correlation between causal and non-causal factors.
* We define a novel attention head selection criterion, Causal Influence Score Criterion, to select attention heads for non-causal factor alignment. We also introduce a novel Style Characterizing Input (SCI) to further aid the head selection. * We achieve state-of-the-art results on several source-free benchmarks of single-source and multi-target DA. § RELATED WORKSUnsupervised Domain Adaptation. Unsupervised Domain Adaptation (DA) aims to adapt a source-trained model to a given target domain. DA approaches can be broadly classified into two categories: 1) methods using generative models <cit.> to create synthetic target-like images for adaptation. 2) methods focusing on aligning the source and target feature distributions using statistical distance measures on the source/target features <cit.>, and adversarial training <cit.>. Recent works <cit.> address a more restrictive and privacy-preserving setting of Source-Free Domain Adaptation (SFDA), where the source data is inaccessible during target adaptation. SFDA works of SHOT <cit.> and SHOT++ <cit.> use pseudo-labeling and information maximization to align the source and target domains.Our work also addresses the challenging SFDA setting, intending to improve the adaptation performance. Transformers for Domain Adaptation. Despite the success of Vision Transformers (ViTs) <cit.> in several vision tasks, their application to DA has been relatively less explored. TransDA <cit.> improves the model generalizability by incorporating the transformer's attention module in a convolutional network. CDTrans <cit.> proposes a transformer framework comprising three weight-sharing branches for cross-attention and self-attention using the source and target samples, while SSRT <cit.> introduces a self-training strategy that uses perturbed versions of the target samples to refine the ViT backbone. TVT <cit.> improves the transferability of ViTs through adversarial training.Domain Invariance for Domain Adaptation. These methods aim to learn domain-invariant feature representations between the source and target domains. SHOT <cit.> and SHOT++ <cit.> prevent updates to the source hypothesis, which enables the feature extractor to learn domain-invariant representations. Feature-Mixup <cit.> constructs an intermediate domain whose representations preserve the task discriminative information while being domain-invariant. In contrast, DIPE <cit.> trains the domain-invariant parameters of the source-trained model rather than learning domain-invariant representations between the domains.Causal representation learning. Causality mechanisms <cit.> focus on learning invariant representations and recovering causal features <cit.> that improve the model's generalizability. Some works <cit.> attempt this via texture-invariant representation learning. However, such works are less effective as they align only the causal factors towards improving generalization. In contrast, we propose a novel way of learning domain-invariant representations by taking into account both the non-causal and causal factors in the target domain to achieve the best adaptation performance.§ APPROACH §.§ Preliminaries Problem Setting. We consider the problem setting of closed-set DA, with a labeled source domain dataset 𝒟_s = {( x_s, y_s) : x_s ∈𝒳, y_s ∈𝒞_g} where 𝒳 is the input space and 𝒞_g is the class label set. The unlabeled target dataset is denoted as 𝒟_t = { x_t : x_t ∈𝒳}. The task of DA is to predict the label for each target sample x_t from the label space 𝒞_g. 
Following <cit.>, we use a vision transformer backbone ViT-B <cit.> as the feature extractor, denoted as f:𝒳→{𝒵_c, 𝒵_1, 𝒵_2, ... 𝒵_N_P}. 𝒵_c denotes the class token feature-space and 𝒵_1, 𝒵_2, ...𝒵_N_P are patch token feature-spaces. N_P denotes the number of patches. We train a goal task classifier f_g on the class-token as f_g: 𝒵_c→𝒞_g. In this work, we operate under the vendor-client paradigm of source-free domain adaptation <cit.>, where a vendor trains a source model on the labeled source domain dataset and shares the model with the client. The client, on the other hand, trains the model with the unlabeled target domain data for target adaptation. Causal Model. Let X denote the input variable and Y denote the output variable or label. We represent the structural causal graph G as shown in Fig. <ref>D, where we introduce latent variables S and Z to capture the generative concepts (object shape, backgrounds, textures) that lead to the observed variables X and Y. We explicitly separate causal factors S that causally influence the class-label Y (shape) and non-causal factors Z, which refer to contextual information (like background and texture). We also assume that S is spuriously correlated with Z, shown through an unobserved confounder C. This spurious correlation may vary across domains. For instance, in a source-free DA setting, the source domain might have a “ball" in a “football field", while the target domain may have a “ball" in a “table tennis" setting. For simplicity, we use Fig. 2(b) of Sun <cit.>, where domain shifts are represented as changes in the probability distributions P(C) or P(S, Z | C). Analyzing Causality in Source-Free DA. Since S and Z are correlated spuriously, a source model trained on source domain data X inherits this spurious correlation in the form of bias (contextual information of objects and scenes occurring together in the source domain). In the target domain, this spurious correlation changes, leading to worsened performance of the source model. In our work, we aim to retain the spurious correlation between S and Z in the source-free DA setting when domain shift occurs. We construct an explicitly disentangled network to model the causal factors S and the non-causal factors Z through separate learning objectives. When the domain changes, we leverage the disentanglement and our designed training objectives to retain the P(S, Z|C) correlation. We begin by investigating the impact of existing CNN-based DA methods on vision transformers (ViTs) and draw insights towards achieving the desired disentanglement. §.§ Exploring domain-invariance for ViTs Motivated by the impressive performance of vision transformers (ViT) across several vision tasks <cit.>, we propose to investigate the impact of domain-invariance methods on vision transformers in the highly practical and privacy-oriented source-free setting. A few recent works <cit.> propose to learn domain-invariant representations without access to source data. However, we find that these approaches mainly work only with CNN architectures. We first extend two state-of-the-art domain-invariance works, Feature Mixup <cit.> and DIPE <cit.>, for transformers (see Fig. <ref>A). We observe that these works exhibit marginal drops compared to a baseline SHOT <cit.> for transformers, although they have significant gains for CNNs. But why does this happen? Recent works <cit.> show that ViTs incorporate more global information than CNNs.
This is because the multi-head self-attention captures causal factors via shape bias <cit.>, while convolutions capture non-causal factors via texture bias. As convolutions inherently capture texture bias, domain-invariance methods perform well as they align causal factors and improve the shape bias in CNNs <cit.>. Since transformers implicitly possess a stronger shape bias, they are more robust to domain shifts <cit.>, and existing methods do not improve the adaptation performance significantly, unlike in CNNs. Based on these observations, we hypothesize that ViTs can better model the causal factors and can thus accommodate the disentanglement of non-causal factors, which are easier to learn. However, the question remains: “How can we leverage such a disentanglement in ViTs to retain the spurious correlation between causal and non-causal factors, while operating under the challenging yet practical source-free DA constraints?" Next, we propose a method for disentanglement in ViTs for modeling the causal and non-causal factors. §.§ Non-causal factor alignment for source-free DA Conventional approaches ignore non-causal factors while aligning only the causal factors. We know that the spurious correlation between causal and non-causal factors can heavily influence the classification performance <cit.>, especially in scenarios with a large source-target domain gap. Hence, disentangling and aligning non-causal factors can help bridge the domain gap to a large extent, especially for ViT architectures with a strong shape bias. Therefore, we arrive at the following insight on effectively utilizing the non-causal factors to enable better alignment of causal factors in the challenging source-free DA setting. Insight 1. (Non-causal factor alignment positively influences causal factor alignment) Aligning non-causal factors leads to an overall global alignment between the source and target domain for source-free DA settings. In other words, it implicitly improves the alignment of causal factors as well, thereby improving the adaptation performance. Remarks. Non-causal factor alignment forces the model to focus on the non-causal factors. Causal factors modeled through the inherent shape bias of transformers lead to substantial gains over CNNs (as shown in Fig. <ref>A). However, aligning the residual non-causal factors can further improve the overall global alignment, which helps to preserve the correlation between causal and non-causal factors in the target domain. We demonstrate this phenomenon in Fig. <ref>C (pink bars), where our non-causal factor alignment improves the overall global alignment between the two domains, leading to a significant reduction in the domain gap between the source and target domain. This shows that non-causal factor alignment positively influences causal factor alignment, as the spurious correlation between causal and non-causal factors is preserved after target adaptation (Fig. <ref>B). As Insight 1 establishes that aligning non-causal factors is crucial for preserving the spurious correlation between causal and non-causal factors, a natural question arises: “How do we enable alignment of non-causal factors between a source and target domain in an SFDA setting?" Insight 2. (Style classification for non-causal factor extraction) Stylization can facilitate controllable access to the local non-causal factors. Thus, non-causal factors can be aligned using a subsidiary task of style classification, while respecting the source-free constraints. Remarks.
To enable non-causal factor alignment, we propose a subsidiary task of style classification on both the vendor and client side in a source-free setting. We make use of label-preserving augmentations <cit.> (see Suppl. for details) to construct novel styles and train a style classifier f_n on a novel style token z_n ∈𝒵_n in the transformer backbone f. Through the subsidiary task of style classification, we propose to extract the non-causal factors and project the local features of the source and target domain into a common feature space. This implicitly aligns the two domains, even without concurrent access to source and target data. Further, it can be easily enabled in a practical source-free setting by sharing only the augmentation information between the vendor and the client, without sharing the data. Analysis Experiments. To analyze the effectiveness of our proposed insights, we examine the effect of non-causal factor alignment on the causal factors. In Fig. <ref>B, we observe that the 𝒜-distance <cit.> between the causal and non-causal factors remains almost the same in both source and target domains, indicating that the spurious correlation between S and Z is preserved with our approach. For the analysis in Fig. <ref>C, we construct a domain-invariant feature (as a proxy for the causal factors) by taking the mean of class tokens across augmentations. Between the mean class tokens from the source and target domains, we compute the 𝒜-distance <cit.>, which measures the separation between the two distributions/domains. We observe a lower 𝒜-distance at the class token for our method as compared to the baseline SHOT, indicating that non-causal factor alignment leads to improved causal factor alignment (Insight 1) and global alignment between the source and target domains (Insight 2). §.§ Training Algorithm We propose a Causality-enforcing Source-Free Transformer (C-SFTrans) framework which involves a two-stage feature alignment for learning causal representations using vision transformers. §.§.§ Vendor-side source training The vendor trains C-SFTrans on the source dataset in two steps: (a) Non-causal factor alignment using a style classification task, and (b) Goal task-discriminative feature alignment, each of which is discussed in detail below. a) Non-causal factor alignment. To align the non-causal factors, we introduce a style classification task that is trained with a novel style token z_n ∈𝒵_n and updates only non-causal attention heads h_n^l ∈ℋ. Here, ℋ={h_i^l}_i=1, l=1^i=N_h, l=L denotes the set of self-attention heads across all blocks, N_h denotes the number of self-attention heads in each block of the backbone f, and L denotes the number of blocks. Each head h_i^l computes self-attention as h_i^l = Softmax(α Q K^T) V. Here, Q=z_j^l W_Q, K=z_j^l W_K and V=z_j^l W_V, α = 1/√(d_k), and d_k is the dimension of z_j^l. Let 𝒜_i^l_x = h_i^l({z_c^l, z_1^l, ..., z_N_P^l}_x), where x is the input sample that gets divided into N_P patch tokens and 𝒜_i^l denotes the output of the self-attention head. For the non-causal factor alignment, which updates only non-causal attention heads, we first need to select these non-causal attention heads. We choose the heads that give higher importance to the non-causal style features than to the causal task-discriminative features. Next, we outline the procedure for non-causal attention head selection. Non-causal attention head selection.
We propose a novel selection criterion to select attention heads based on their contribution towards the causal goal task features and the non-causal style-related features. For this, we first construct a novel Style Characterizing Input (SCI) to preserve only style-related features of the input samples. To construct an SCI, we apply a task-destructive transformation (patch shuffling) <cit.>, which keeps the style information intact. Intuitively, we preserve the higher-order statistics of style <cit.> by shuffling patches, while the class information is destroyed. We pass both the clean input x and the SCI x_SCI as input to each head (Fig. <ref>A). Let 𝒜_i^l_x be the output of each head computed using the clean input x as follows: 𝒜_i^l_x = h_i^l({z_c^l, z_1^l, ..., z_N_P^l}_x) Let 𝒜_i^l_x_SCI be the output of the heads computed using the input x_SCI as follows, 𝒜_i^l_x_SCI = h_i^l({z_c^l, z_1^l, ..., z_N_P^l}_x_SCI) Let β_1, β_2 ∈ℝ^N_h be the importance weights for the task and style feature outputs, respectively. We constrain these using β_2 = 1 - β_1, which simplifies the optimization. We compute the weighted output as follows, 𝒜_i^l = β_1_i×𝒜_i^l_x + β_2_i×𝒜_i^l_x_SCI This weighted output 𝒜_i^l is propagated further across the layers. We keep the entire model frozen, training only the two parameters β_1 and β_2 for each attention head (Fig. <ref>A). The following objective is used for attention head selection, min_β_1, β_2 𝔼_(x_s, y_s) ∈𝒟_s [ℒ_cls] where ℒ_cls = ℒ_ce(f_g(z_c), y_s) See Suppl. for more details. Next, we define the criterion for selecting non-causal heads using the optimized β_1, β_2. Definition 1. (Causal Influence Score Criterion) The Causal Influence Score (CIS) is computed as CIS_i = β_2_i - β_1_i for an attention head h_i. We choose as non-causal each attention head h_i for which the condition CIS_i > τ is satisfied. The remaining heads are designated as causal heads. Remarks. Since we train β_1 and β_2 with the task classification objective, a higher value of β_2 indicates that the attention head gives more weight to the style information and is inherently more suitable for the style classification task. Empirically, we set 30% of the heads as non-causal for our experiments, keeping the remaining heads fixed as causal heads for the goal task classification. While a more sophisticated block-wise strategy could be used, we find this choice to be insensitive over a wide range of values (see Suppl.). Style classification task. Once the causal and non-causal attention heads are chosen, the vendor prepares the augmented datasets 𝒟_s^(i) = { (x_s^[i], y_s, y_n) } ∀ i ∈ [N_a] by augmenting each source sample x_s (where N_a is the number of augmentations). Here, an augmentation a_i:𝒳→𝒳 is applied to get x_s^[i] = a_i(x_s). Each input is assigned a style label y_n=i, where i denotes the augmentation label. We use five label-preserving augmentations that simulate novel styles <cit.>. Refer to Suppl. for more details. For the non-causal factor learning task (Fig. <ref>B), we train only the non-causal heads for style classification as follows, min_θ_h_n, θ_f_n 𝔼_(x, y_n) ∈∪_i𝒟_s^(i) [ℒ_style] where ℒ_style = ℒ_ce(f_n(z_n), y_n), where θ_h_n denotes the parameters of the non-causal heads selected using Def. 1. b) Task-discriminative causal factor alignment. After one round of style classifier training, we perform the goal task training, where we update only the causal heads (Fig. <ref>B).
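Before detailing the goal-task objective, the head-selection step described above can be summarized in a short PyTorch-style sketch. This is a minimal illustration under our reading of the weighted-output equation and Def. 1; the helper names, tensor shapes, and the top-k realization of the threshold τ are assumptions rather than the released implementation:

import torch

def style_characterizing_input(x, patch=16):
    # Construct an SCI by randomly shuffling non-overlapping patches,
    # destroying task content while preserving style statistics.
    # Assumes H and W are divisible by the patch size.
    B, C, H, W = x.shape
    p = x.unfold(2, patch, patch).unfold(3, patch, patch)
    p = p.contiguous().view(B, C, -1, patch, patch)
    p = p[:, :, torch.randperm(p.shape[2])]
    h, w = H // patch, W // patch
    p = p.view(B, C, h, w, patch, patch).permute(0, 1, 2, 4, 3, 5)
    return p.contiguous().view(B, C, H, W)

def weighted_head_outputs(attn_clean, attn_sci, beta1):
    # Blend per-head attention outputs for the clean input and the SCI:
    # A = beta1 * A_x + (1 - beta1) * A_sci, with beta2 = 1 - beta1.
    # attn_clean, attn_sci: [num_heads, num_tokens, dim]; beta1: [num_heads].
    beta2 = 1.0 - beta1
    return beta1.view(-1, 1, 1) * attn_clean + beta2.view(-1, 1, 1) * attn_sci

def select_noncausal_heads(beta1, ratio=0.3):
    # Causal Influence Score: CIS_i = beta2_i - beta1_i. Heads with the
    # highest CIS (weighting style information most) are marked non-causal;
    # here the threshold tau is realized implicitly by taking ~30% of heads.
    cis = (1.0 - beta1) - beta1
    k = max(1, int(ratio * cis.numel()))
    mask = torch.zeros_like(cis, dtype=torch.bool)
    mask[torch.topk(cis, k).indices] = True
    return mask  # True => non-causal head

The boolean mask then partitions the backbone parameters into the style-task group (non-causal heads plus f_n) and the goal-task group used in the alternating training.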
The vendor trains the source model consisting of the backbone f and the task classifier f_g with the labeled source dataset 𝒟_s and the task classification loss as follows, min_θ_f∖θ_h_n, θ_f_g 𝔼_(x_s, y_s) ∈𝒟_s [ℒ_cls] where ℒ_cls = ℒ_ce(f_g(z_c), y_s), where z_c is the class token, θ_f∖θ_h_n are the parameters of the backbone excluding the parameters of the non-causal heads, and θ_f_g are the parameters of the task classifier f_g. The two steps of non-causal factor alignment and goal task-discriminative feature alignment are performed in alternate iterations, one after the other (see Suppl. for details). §.§.§ Client-side target adaptation The vendor shares the trained C-SFTrans model with the client for target adaptation. The client performs the non-causal factor alignment in the same way as described earlier, by augmenting the target data for style classification. Note that the vendor and client may share training and augmentation strategies without sharing the training data <cit.>. This step leads to the non-causal factor alignment between the source and target domains. For the goal task-discriminative training, the client uses the standard information maximization loss <cit.> as follows, min_θ_f∖θ_h_n, θ_f_g 𝔼_𝒟_t[ℒ_ent + ℒ_div] + min_θ_h_n, θ_f_n 𝔼_∪_i𝒟_t^(i) [ℒ_style] where ℒ_ent and ℒ_div denote the entropy and diversity losses, respectively, and ℒ_style is the supervised CE loss. Note that only the original unlabeled target data is used to optimize ℒ_ent, ℒ_div. The two steps of non-causal factor alignment and task-discriminative causal factor alignment are performed one after the other on the client side as well. § EXPERIMENTS In this section, we evaluate our proposed approach by comparing with existing works on several benchmarks and analyze the significance of each component of the approach. Datasets. We evaluate our approach on four existing standard object classification benchmarks for Domain Adaptation: Office-Home, Office-31, VisDA, and DomainNet. The Office-Home dataset <cit.> contains images from 65 categories of objects found in everyday home and office environments. The images are grouped into four domains - Art (Ar), Clipart (Cl), Product (Pr) and Real World (Rw). The Office-31 (Office) dataset <cit.> consists of images from three domains - Amazon (A), DSLR (D), and Webcam (W). The three domains contain images from 31 classes of objects that are found in a typical office setting. The VisDA <cit.> dataset is a large-scale synthetic-to-real benchmark with images from 12 categories. DomainNet <cit.> is the largest and most challenging dataset among the four standard benchmarks due to severe class imbalance and the diversity of domains. It contains 345 categories of objects from six domains - Clipart (clp), Infograph (inf), Painting (pnt), Quickdraw (qdr), Real (rel), Sketch (skt). Implementation details. To ensure fair comparisons, we make use of DeiT-Base <cit.> with patch size 16 × 16 and follow the experimental setup outlined in CDTrans <cit.>. We use Stochastic Gradient Descent (SGD) with a weight decay ratio of 1 × 10^-4 and a momentum of 0.9 for the training process. Refer to Suppl. for more implementation details. §.§ Comparison with prior arts a) Single-Source Domain Adaptation (SSDA). We provide comparisons between our proposed method, C-SFTrans, and earlier SSDA works in Tables <ref> and <ref>. Our method provides the best performance among source-free works for the three standard DA benchmarks.
On Office-Home, C-SFTrans outperforms the transformer-based source-free prior work SHOT-B* by 2.5% and shows competitive performance with the non-source-free method CDTrans <cit.>. On the Office-31 benchmark (Table <ref>), our technique outperforms the source-free SHOT-B* by 0.9% and achieves competitive performance when compared to non-source-free works. Table <ref> also demonstrates that our method shows a 2.4% improvement over SHOT-B* and is on par with the non-source-free methods CDTrans and SSRT <cit.> on the larger and more challenging VisDA benchmark. We also achieve a significant improvement of 3.6% on the DomainNet benchmark (Table <ref>) over the SHOT-B baseline. b) Multi-Target Domain Adaptation (MTDA). In Table <ref>, we compare our proposed framework, C-SFTrans, with existing works on multi-target domain adaptation on the Office-Home dataset. Our method achieves a 1.4% improvement over the source-free baseline (SHOT-B) and is comparable to the non-source-free method D-CGCT <cit.>, despite using a pure transformer backbone while the latter uses a hybrid convolution-transformer feature extractor. §.§ Analysis We perform a thorough ablation study of our proposed approach and analyze the contribution of each component in Table <ref>. a) Effect of non-causal factor classification. In Table <ref>, we study the effect of non-causal factor classification on the goal task performance using a subsidiary style classification task. We first train only the style task while keeping the goal task classifier f_g and causal parameters θ_f∖θ_h_n fixed. Here, we observe a significant improvement of 1.1% on the source-side and 1.6% on the target-side. This validates Insight 2, since the non-causal style classification task improves the causal factor alignment, thereby improving the goal task performance. b) Effect of both goal and non-causal tasks. Next, in Table <ref>, we use both the goal task and the non-causal style task alternately, as proposed in Sec. <ref>. As per Insight 1, this should improve the alignment between source and target causal factors, and result in optimal clustering of task-related features. The alternate training of the style and goal tasks yields an overall improvement of 3.7% on the source-side and 2.4% on the target-side, which validates Insight 1. c) Comparisons with different backbones. In Table <ref>, we provide results for our approach with the ViT-Base backbone pre-trained on the ImageNet-21K dataset, and the DeiT-S backbone pre-trained on the ImageNet-1K dataset. We observe that our method achieves a 1.4% improvement over SHOT-B with the ViT-B backbone, and a 3.7% improvement over SHOT-S with the DeiT-S backbone. § CONCLUSION In this work, we study the concepts of source-free domain adaptation from the perspective of causality. We conjecture that the spurious correlation between causal and non-causal factors is crucial to preserve in the target domain to improve the adaptation performance. Hence, we provide insights showing that disentangling and aligning non-causal factors positively influences the alignment of causal factors in SFDA. Further, we are the first to investigate the behavior of vision transformers in SFDA and propose a novel Causality-enforcing Source-Free Transformer (C-SFTrans) architecture for non-causal factor alignment. Based on our insights, we introduce a non-causal factor classification task to align non-causal factors. We also propose a novel Causal Influence Score criterion to improve the training.
The proposed approach leads to improved task-discriminative causal factor alignment and outperforms the prior works on DA benchmarks of single-source and multi-target SFDA.

Acknowledgements. Sunandini Sanyal was supported by the Prime Minister's Research Fellowship, Govt of India.

Supplementary Material

The supplementary material provides further details of the proposed approach, additional quantitative results, ablations, and implementation details. We have released our code on our project page: <https://val.cds.iisc.ac.in/C-SFTrans/>. The remainder of the supplementary material is organized as follows:

* Section <ref>: Proposed Approach (Table <ref>, Algorithm <ref>)
* Section <ref>: Implementation Details
  * Datasets (Section <ref>)
  * Style augmentations (Section <ref>)
  * Experimental Settings (Section <ref>)
* Section <ref>: Additional Comparisons (Table <ref>)
* Section <ref>: Ablations on target-side goal task training (Tables <ref>, <ref>, and <ref>)

§ PROPOSED APPROACH

We summarize all the notations used in the paper in Table <ref>. The notations are grouped into the following 6 categories - models, transformers, datasets, spaces, losses, and criteria. Our proposed method is outlined in Algorithm <ref>.

Target adaptation losses. We use the Information Maximization loss <cit.>, which consists of an entropy loss ℒ_ent and a diversity loss ℒ_div:

ℒ_ent = -𝔼_x_t ∈𝒳 ∑_k=1^K δ_k(f_g(z_c)) log δ_k(f_g(z_c))

ℒ_div = ∑_k=1^K p̂_k log p̂_k = KL(p̂, (1/K) 1_K) - log K

where δ_k(b) = exp(b_k)/∑_i exp(b_i) is the k^th element of the softmax output of b ∈ℝ^K. The entropy loss ℒ_ent ensures that the model predicts more confidently for a particular label, and the diversity loss ℒ_div ensures that the predictions are well-balanced across different classes. We optimize all parameters of the transformer backbone h, except the non-causal heads h_n^l:

min_h^l ∖ h_n^l, f_g 𝔼_𝒟_t [ℒ_ent + ℒ_div]

Pseudo-labeling. We use the clustering method of SHOT <cit.> to obtain pseudo-labels. At first, the centroid of each class is calculated using the following,

c_k = ∑_x_t ∈𝒳 δ_k(f_g(z_c)) z_c / ∑_x_t ∈𝒳 δ_k(f_g(z_c))

The closest centroid is chosen as the pseudo-label for each sample using the following cosine-distance formulation,

ŷ_c = argmin_k D_c(z_c, c_k)

where D_c denotes the cosine distance in the class-token feature space 𝒵_c between a centroid c_k and the input sample features z_c. In successive iterations, the centroids keep updating and the pseudo-labels get updated with respect to the new centroids.

Attention heads in vision transformers. A ViT takes an image x of size H × W × C as input and divides it into N_P patches of size (P,P) each. The total number of patches is N_P = HW/P^2. In every layer l, a head h_i^l takes the patches as input and transforms a patch into K, Q, V using the weights W_K, W_Q, W_V, respectively. The self-attention <cit.> is computed as follows,

h_i^l = Softmax( QK^T/√(d_k) ) V

where d_k represents the dimension of the keys/queries.

§ IMPLEMENTATION DETAILS

§.§ Datasets

We use four standard object classification benchmarks for DA to evaluate our approach. The Office-Home dataset <cit.> consists of images from 65 categories of everyday objects from four domains - Art (Ar), Clipart (Cl), Product (Pr), and Real World (Rw). Office-31 <cit.> is a simpler benchmark containing images from 31 categories belonging to three domains of objects in office settings - Amazon (A), Webcam (W), and DSLR (D).
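For concreteness, before describing the remaining benchmarks, a minimal NumPy sketch of the target-adaptation objectives defined above (the information-maximization loss ℒ_ent + ℒ_div and the SHOT-style pseudo-labeling) is given here. This is an illustrative re-implementation rather than the released code; the array shapes (a batch of N class-token features of dimension d with K-way logits) are assumed for illustration.

import numpy as np

def softmax(x):
    z = np.exp(x - x.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def im_loss(logits, eps=1e-12):
    # Information-maximization loss L_ent + L_div on a batch of K-way logits.
    p = softmax(logits)                                 # delta_k(f_g(z_c)) per sample
    l_ent = -(p * np.log(p + eps)).sum(axis=1).mean()   # mean per-sample entropy
    p_bar = p.mean(axis=0)                              # marginal prediction p-hat
    l_div = (p_bar * np.log(p_bar + eps)).sum()         # = KL(p_bar, uniform) - log K
    return l_ent + l_div

def pseudo_labels(z, logits, eps=1e-12):
    # SHOT-style pseudo-labels: soft class centroids, then nearest centroid in
    # cosine distance (argmin distance = argmax cosine similarity).
    p = softmax(logits)                                 # shape (N, K)
    c = (p.T @ z) / (p.sum(axis=0)[:, None] + eps)      # class centroids c_k, shape (K, d)
    zn = z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)
    cn = c / (np.linalg.norm(c, axis=1, keepdims=True) + eps)
    return (zn @ cn.T).argmax(axis=1)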
VisDA <cit.> is a large-scale benchmark containing images from two domains - 152,397 synthetic source images and 55,388 real-world target images. Lastly, DomainNet <cit.> is the largest and most challenging dataset due to severe class imbalance and the diversity of domains. It contains 345 categories of objects from six domains - Clipart (clp), Infograph (inf), Painting (pnt), Quickdraw (qdr), Real (rel), Sketch (skt).

§.§ Style augmentations

We construct novel stylized images using 5 label-preserving augmentations on the original clean images to enable non-causal factor alignment during the training process. The augmentations are as follows:

* FDA augmentation: We use the FDA augmentation <cit.> to generate stylized images based on a fixed set of style images <cit.>. In this augmentation, a given input image is stylized by interchanging the low-level frequencies between the FFTs of the input image and the reference style image.
* Weather augmentations: We employ the frost and snow augmentations from <cit.> to simulate weather effects. Specifically, we use the lowest severity of frost and snow (severity = 1) to augment the input images.
* AdaIN augmentation: AdaIN <cit.> uses a reference style image to stylize a given input image by altering the feature statistics in an instance normalization (IN) layer <cit.>. We use the same reference style image set as in FDA, and set the augmentation strength to 0.5.
* Cartoon augmentation: We employ the cartoonization augmentation from <cit.> to produce cartoon-style images with reduced texture from the input.
* Style augmentation: We use the style augmentation from <cit.> that augments an input image through random style transfer. This augmentation alters the texture, contrast and color of the input while preserving its geometrical features.

§.§ Experimental settings

In all our experiments, we use the Stochastic Gradient Descent (SGD) optimizer <cit.> with a momentum of 0.9 and a batch size of 64. We follow <cit.> and use label smoothing in the training process. For the source side, we train the goal task classifier for 20 epochs, and the style classifier until it achieves 80% accuracy. On the target side, we train the goal task classifier for 2 epochs, and use the same criterion for the style classifier as on the source side. The first 5 epochs of the source-side training are used for warm-up with a warm-up factor of 0.01. On the source side, we use a learning rate of 8 × 10^-4 for the VisDA dataset, and 8 × 10^-3 for the remaining benchmarks. For the target-side goal task training, we use a learning rate of 5 × 10^-5 for VisDA, 2 × 10^-3 for DomainNet, and 8 × 10^-3 for the rest. Our proposed method comprises an alternate training mechanism where the goal task training and style classifier training are performed alternately for a total of 25 rounds, which is equivalent to 50 epochs of target adaptation in <cit.>. For comparisons, we implement the source-free methods DIPE <cit.> and Feature Mixup <cit.> by replacing the backbone with DeiT-B. While CDTrans <cit.> uses the entire domain for training and evaluation with the DomainNet dataset, we follow the setup of <cit.> to ensure fair comparisons. We train on the train split and evaluate on the test split of each domain.

§ ADDITIONAL COMPARISONS

We present additional comparisons with the DomainNet benchmark in Table <ref>. Our method achieves the best results among the existing source-free prior arts and outperforms the source-free SHOT-B^* by 3.6%.
We also observe that C-SFTrans surpasses the non-source-free method CDTrans by an impressive 5.5%.

§ ABLATIONS ON TARGET-SIDE GOAL TASK TRAINING

(a) Target-side goal task training loss. Table <ref> shows the influence of the three loss terms in the target-side goal task training - the entropy loss ℒ_ent, the diversity loss ℒ_div, and self-supervised pseudo-labeling (SSPL). We observe that using ℒ_ent alone produces lower results even compared to the source baseline. On the other hand, using both ℒ_ent and ℒ_div significantly improves the performance, which highlights the importance of the diversity term ℒ_div. Finally, we obtain the best results when all three components are used together for target-side adaptation, further showing the significance of the pseudo-labeling step.

(b) Sensitivity analysis of alternate training. In our proposed method, we perform style classifier training and goal task training in an alternate fashion: the task classifier f_g is trained for a few epochs, followed by the training of the style classifier f_n until it reaches a certain accuracy threshold (empirically set to 80%). In Table <ref>, we show the effect of varying the number of epochs of the goal task training from 1 to 5, and observe the impact on the goal task accuracy during non-causal factor alignment. We observe that just a single epoch of task classifier training negatively impacts the goal task performance. While 3 epochs achieves the best performance, it involves significant training effort for merely a 0.5% improvement in the task accuracy. Therefore, 2 epochs of goal task training achieves the optimal balance between target accuracy and training effort.

(c) Selection of non-causal heads. We select a set of non-causal attention heads based on their Causal Influence Score (CIS). We sort the CIS in descending order and select the top λ% of heads satisfying the condition CIS > τ. In Table <ref>, we present the effect of altering this hyperparameter λ on the overall performance. We observed that with a lower value of λ, the pathways formed by non-causal heads do not adequately extract and learn the non-causal factors, which consequently hinders the domain-invariant alignment and leads to non-optimal task performance. Similarly, increasing λ too much reduces the ability of the network to learn causal factors and leads to lower performance. Overall, our approach is not very sensitive to this hyperparameter.

(d) Effect of augmentations. Table <ref> demonstrates that fewer augmentations for the style classifier significantly deteriorate the adaptation performance in comparison to the full set of augmentations. This indicates that a more complex style classification task better facilitates the non-causal factor alignment. However, due to the scarcity of more complex augmentations, we use the six outlined in Sec. <ref>. | http://arxiv.org/abs/2311.16294v1 | {
"authors": [
"Sunandini Sanyal",
"Ashish Ramayee Asokan",
"Suvaansh Bhambri",
"Pradyumna YM",
"Akshay Kulkarni",
"Jogendra Nath Kundu",
"R Venkatesh Babu"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231127201315",
"title": "Aligning Non-Causal Factors for Transformer-Based Source-Free Domain Adaptation"
} |
Chaotic Type I Migration in Turbulent Discs
Yinhao Wu, Yi-Xian Chen, Douglas N. C. Lin
===============

By performing global hydrodynamical simulations of accretion discs with driven turbulence models, we demonstrate that elevated levels of turbulence induce highly stochastic migration torques on low-mass companions embedded in these discs. This scenario applies to planets migrating within gravito-turbulent regions of protoplanetary discs as well as stars and black holes embedded in the outskirts of active galactic nuclei (AGN) accretion discs. When the turbulence level is low, the linear Lindblad torque persists in the background of stochastic forces, and its accumulative effect can still dominate over relatively long timescales. However, in the presence of very strong turbulence, classical flow patterns around the companion embedded in the disc are disrupted, leading to significant deviations from the expectations of classical Type I migration theory over arbitrarily long timescales. Our findings suggest that the stochastic nature of turbulent migration can prevent low-mass companions from monotonically settling into universal migration traps within the traditional laminar disc framework, thus reducing the frequency of three-body interactions and hierarchical mergers relative to previous expectations. We propose a scaling for the transition mass ratio from classical to chaotic migration, q ∝ α_R, where α_R is the Reynolds viscosity stress parameter, which can be further tested and refined by conducting extensive simulations over the relevant parameter space.

planet-disc interactions – accretion discs – black holes – gravitational waves – turbulence – planets and satellites: formation

§ INTRODUCTION

Planetary migration occurs within the early stages of planet formation, when planets tidally interact with their protoplanetary discs (PPDs). Type I migration primarily affects low-mass planets <cit.>, typically those up to a few times the mass of Earth, when the companion lacks the mass required to create a gap in the disc density profile <cit.>. It plays an important role in the formation of super-Earths and the cores of proto-gas-giant planets <cit.>. Early analytic analyses <cit.> and numerical simulations <cit.> of this process primarily focus on the determination of the tidal torque between the disc and planets with fixed orbits. The pace and direction of this migration are determined by the sum of the differential (usually negative) Lindblad and the unsaturated (often positive) corotation torque. The details of the torque imbalance are set by the disc structure <cit.>, particularly the temperature and surface density gradients. Type I migration usually leads to inward motion and occurs much faster than both the growth of giant planet cores and the evaporation of the disc <cit.>. However, depending on the specific heating mechanisms and accretion-disc structures, the total torque can be locally positive in certain regions of the PPD. This in turn can cause stalling of Type I migration for a range of planetary masses <cit.>, forming effective migration traps at the boundaries of outward migration regions <cit.>. The potentially rapid migration speed warrants self-consistent simulations in which the embedded planet's orbit evolves together with the disc structure. Moreover, recent studies <cit.> show that observational signatures generated by migrating planets may differ from those produced by stationary planets.
Stellar-mass black holes (sBHs) also migrate and evolve in active galactic nuclei (AGN) accretion discs around supermassive black holes (SMBHs). The hierarchical mergers of sBHs could possibly contribute to LIGO-Virgo gravitational wave (GW) events <cit.>. Early population studies of these sBHs and their GW features invoke classical Type I migration formulae to account for their orbital decay due to tidal interaction with the disc, driving them to converge radially towards certain migration traps existing in the AGN disc structure <cit.> and undergo frequent mergers and interactions <cit.>.

Turbulence ubiquitously exists in both kinds of discs, albeit at different levels. In the mid-planes of PPDs, the magneto-rotational instability (MRI) turbulence level has been shown to be low in numerical simulations <cit.>, with an amplitude α ≲ 10^-3 in the α prescription <cit.>, due to the low ionization level and non-ideal MHD effects. Such upper limits are also confirmed by observations of molecular line broadening <cit.>. This level of turbulence has been shown to drive stochastic Type I migration of low-mass planets in short-term MHD simulations <cit.>, but it has been pointed out by <cit.> that such levels of turbulence are not able to disrupt the classical planetary wake structure, and the residual differential Lindblad torque can still dominate over the long term to drive inward migration. This justifies the use of an effective viscosity α in laminar disc simulations to mimic the effect of turbulent accretion in planet-disc interaction and to conclude Type I differential Lindblad torque dependencies, as in the widely applied <cit.> formalism.

However, planetesimal or planet core formation might occur very early during the evolution of PPDs <cit.>. During their infancy, PPDs are massive and dense. Marginal gravitational instability leads to the amplification of local disturbances, and the gravito-turbulence-induced effective viscosity corresponds to an average effective Reynolds stress parameter ⟨α_R⟩ as large as 0.05 <cit.> or 0.1 <cit.>. Outskirts of AGN discs are also subject to marginal gravitational instability <cit.>, gravito-turbulence <cit.>, and intense star formation <cit.>. With a swarm of embedded massive stars co-evolving inside the disc <cit.>, they generate density waves which interfere and dissipate, producing an effective turbulent viscosity whose magnitude positively correlates with the mass and number density of the embedded objects <cit.>.

If strong turbulence is able to significantly alter the flow pattern around the companion, applying linear migration torques in population syntheses of these objects, or adding simple stochastic torques on top of a linear component, e.g. in the prediction of sBH evolution in AGN discs and GW statistics from their mergers <cit.>, may no longer be accurate. In this letter, we revisit the chaotic migration problem as posed in <cit.>, but focus on what occurs in the highly turbulent regime applicable to the fore-mentioned contexts. We emphasise that for ⟨α_R⟩ ≳ 0.1, migration of low-mass companions can indeed become chaotic over long timescales as the classical circum-companion flow pattern becomes disrupted. The Letter is organised as follows: in <ref>, we introduce our numerical setup for turbulent companion-disc interaction; we present our results in <ref> and give a brief conclusion in <ref>.

§ NUMERICAL SETUP

To explore the effect of strong turbulence on Type I migration, we apply a modified version of the hydrodynamic grid code FARGO <cit.> with a phenomenological turbulence prescription that follows <cit.> and <cit.>.
The initial and boundary conditions are similar to those adopted by <cit.>. The axisymmetric isothermal disc has aspect ratio h = h_0(r/r_0)^1/2, with h_0 = 0.03. The initial surface density is Σ = Σ_0 (r/r_0)^-1/2, with Σ_0 a normalization constant irrelevant to the dynamics. The embedded companion, with a mass of M_p, or a mass ratio of q = M_p/M_* compared to its host mass M_*, is initially at the orbital radius r_0 = 1, with the other code units being G = M_* = 1 and the orbital frequency Ω_0 = Ω(r_0) = 1. Self-gravity is neglected because the turbulence is prescribed rather than generated, and the instantaneous migration torque that the disc exerts on the planet's orbital angular momentum is normalized in units of Σ_0 r_0^4 Ω_0^2. The planet's orbit self-consistently evolves under the influence of the torque, but since the extent of migration is relatively small within a few thousand orbits, the magnitude of the torque is not significantly affected even if we normalize with Σ r^4 Ω^2 at real-time locations.

The simulation domain covers (0.4-1.8) r_0 in the radial direction, and 0-2π in the azimuthal direction. We resolve the disc by N_r = 512 radial zones with logarithmic spacing, and N_ϕ = 1536 azimuthal zones. Wave-killing zones are adopted close to the boundaries to minimize unphysical wave reflections <cit.>. To prevent the gas velocity from diverging infinitely close to the companion, the planet's gravity is softened by a smoothing (Plummer) length of ϵ = 0.6 h_0 r_0.

To mimic turbulence, a fluctuating potential Φ_turb ∝ γ is applied to the disc, corresponding to the superposition of 50 wave-like modes <cit.> such that

Φ_turb(r, ϕ, t) = γ r^2 Ω^2 ∑_k=1^50 Λ_k(m_k, r, ϕ, t)

where γ is the dimensionless characteristic amplitude of turbulence. The stochastic factor for the k-th mode, Λ_k, expressed as

Λ_k = ξ_k e^-(r-r_k)^2/σ_k^2 cos(m_k ϕ - ϕ_k - Ω_k t̃_k) sin(π t̃_k/Δ t_k),

is associated with a wavenumber m_k randomly drawn from a logarithmically uniform distribution ranging from m = 1 to the maximum wavenumber m_max. The initial radial location r_k and azimuthal location ϕ_k of the mode are drawn from uniform distributions. The modes' radial extent is σ_k = π r_k/(4 m_k). Modes start at time t_0,k, and their lifetime is Δ t_k = 0.2π r_k/(m_k c_s), with c_s being the local sound speed. Ω_k is the Keplerian frequency at r_k, t̃_k = t - t_0,k, and ξ_k is a dimensionless constant drawn from a Gaussian distribution of unit width. As discussed in <cit.>, the choice of parameters for this turbulence driver emulates the power spectrum of a typical Kolmogorov cascade, following the m^-5/3 scaling law up to m_max. An effective Reynolds stress parameter ⟨α_R⟩ around the companion location can be calibrated from the velocity fluctuations to be approximately ⟨α_R⟩ ≃ 35 (γ/h_0)^2, i.e. proportional to the square of the turbulence amplitude (see <cit.> for details).

§ RESULTS

§.§ Dependence on Turbulence Strength

In Figure <ref> we plot the running-time average of migration torques, measured in units of Σ_0 r_0^4 Ω_0^2, for a fiducial set of parameters with companion mass ratio q = 5×10^-6 and h_0 = 0.03. Different colors represent either runs with different turbulence amplitudes γ or with different selections of mode azimuthal wavenumbers. The classical formula <cit.> for the Type I migration torque scaling is

Γ = -C_L (q/h_0)^2 Σ_0 r_0^4 Ω_0^2.

Calibrating with the steady-state γ = 0 result, we obtain C_L ≈ 1.8 (Figure <ref>, red line). This quantity is dominated by the Lindblad torque because in the inviscid (γ = 0) disc, the corotation torque saturates <cit.>.
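As an illustration of the driving prescription above, the following sketch evaluates Φ_turb as a superposition of 50 wave-like modes. The grid-scale cutoff m_max, the radial range for drawing r_k, and the globally constant sound speed c_s = h_0 (which follows from h ∝ r^1/2 in code units) are illustrative assumptions rather than the exact implementation:

import numpy as np

rng = np.random.default_rng(0)
gamma, h0 = 1e-3, 0.03         # turbulence amplitude and aspect ratio at r0 = 1
c_s = h0                       # constant sound speed for h = h0 (r/r0)^(1/2) in code units
n_modes, m_max = 50, 100       # m_max stands in for the grid-scale cutoff

def draw_mode(t0):
    m = max(1, int(round(np.exp(rng.uniform(0.0, np.log(m_max))))))  # log-uniform m_k
    rk = rng.uniform(0.4, 1.8)                                       # inside the radial domain
    return dict(m=m, rk=rk, phik=rng.uniform(0, 2 * np.pi), xi=rng.normal(),
                sigma=np.pi * rk / (4 * m), t0=t0,
                dt=0.2 * np.pi * rk / (m * c_s), Omk=rk ** -1.5)

modes = [draw_mode(0.0) for _ in range(n_modes)]

def phi_turb(r, phi, t):
    # Superposition of the active modes; expired modes would be redrawn in practice.
    tot = 0.0
    for md in modes:
        tt = t - md['t0']
        if not 0.0 <= tt <= md['dt']:
            continue
        tot += (md['xi'] * np.exp(-((r - md['rk']) / md['sigma']) ** 2)
                * np.cos(md['m'] * phi - md['phik'] - md['Omk'] * tt)
                * np.sin(np.pi * tt / md['dt']))
    return gamma * r ** 2 * (r ** -1.5) ** 2 * tot    # gamma * r^2 * Omega(r)^2 * sum(Lambda_k)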
Although γ = 10^-4 brings in extra turbulence and effective viscosity, the running average of the torque nevertheless converges to the value corresponding to C_L ≈ 1.8 after a few hundred orbits. By default we include all m up to the grid scale, but we tested that truncating higher-order modes makes very little difference (Figure <ref>, green and yellow lines), as the large-scale (low-m) modes are most influential to migration. Such behaviour is expected from the analysis of <cit.>. When the turbulence level is moderate, the structure of density waves and flow patterns of classical planet-disc interaction can still manifest after averaging the density distribution; therefore the migration torque can be seen as the superposition of a random fluctuating component with magnitude

⟨Γ⟩ = [ (1/T) ∫_T Γ^2 dt - ( (1/T) ∫_T Γ dt )^2 ]^1/2 = ⟨C_turb⟩ (q/h_0)^2 Σ_0 r_0^4 Ω_0^2

and a continuous component close to the classical Type I torque, Eqn <ref>. In a quasi-steady state, the fluctuating amplitude is typically much (by nearly 2 orders of magnitude) larger than the continuous component, as plotted in Figure <ref>, while its accumulative effect will decay with time. With the auto-correlation/eddy-turnover timescale of the turbulent component being ∼ one planetary orbit, the time for the accumulative effect of the linear torque to be significant is τ_conv ∼ (⟨C_turb⟩/C_L)^2 orbits.

From the measured torque data, we estimate the standard deviation of the Gaussian-distributed torques to be ⟨C_turb⟩ = 8 and 63 for γ = 10^-4 and 10^-3, respectively, consistent with the histogram of total torques (Figure <ref>), which shows a roughly Gaussian dispersion dominated by the fluctuating component with typical dispersion ∼ ⟨C_turb⟩ (q/h_0)^2. In our numerical simulations, the running average torque for the γ = 10^-4 cases quickly converges to the γ = 0 value within ≪ 100 orbits (Figure <ref>), consistent with the expectation of τ_conv ∼ 20 orbits. For the γ = 10^-3 cases, by contrast, the average torque quickly decays to values two orders of magnitude below the linear expectation, instead of showing any trend of settling towards the linear prediction.

The analysis that the total torque can be decomposed into a stationary component and a fast-varying component, with no significant coupling between the two, assumes that the differential Lindblad torque is not significantly altered by turbulence. If we applied the same tactics to the γ = 10^-3 scenario, we would expect the timescale for the accumulative effect of the linear torque to become significant to be around 1200 orbits. However, for such strong turbulence, corresponding to ⟨α_R⟩ ∼ 0.04, the structure of density waves will be disrupted even after averaging the density distribution across orbits, similar to the ⟨α_R⟩ ∼ 0.1 experiments in <cit.>, which leads us to believe that the running average showing no sign of convergence towards the γ = 0 value is genuine evidence that a random walk dominates this chaotic migration process.

In support of this claim, we highlight the torque density distribution of the γ = 10^-3 model in Figure <ref>. The top panel represents the torque density of the γ = 0 model at 2000 orbits, in which the planet has migrated to r ≃ 0.9 r_0. In this classical flow pattern picture, the main contribution of the Lindblad torque is associated with the competition between the leading waves extending towards the upper left (positive torque) and the trailing waves extending towards the lower right (negative torque) <cit.>.
The middle panel is a snapshot of the γ = 10^-3 model, in which the planet's orbit has not evolved significantly due to the turbulent interruption of the planet's tidal wake. The bottom panel is a time-averaged (over 200 orbits) torque density (for the γ = 10^-3 model), which reduces the fluctuation but still shows no sign of the density-wave structure emerging, and is generally smaller in magnitude than in the inviscid case.

§.§ Dependence on Planet Mass

In this section, we further compare the torque evolution of the fiducial case (with q = 5×10^-6) as well as companion mass ratios q = 10^-6 and q = 3×10^-5. The scale height h_0 = 0.03 is unchanged and all azimuthal wavenumbers are included. In Figure <ref> we plot the standard deviation of the torque (normalized by Σ_0 r_0^4 Ω_0^2) in these three cases over 500-2000 orbits, fitted by

⟨Γ⟩ ≈ 3.6×10^2 q γ Σ_0 r_0^4 Ω_0^2, or ⟨C_turb⟩ = 3.6×10^2 q^-1 h_0^2 γ,

conforming with Eqn 10 of <cit.> up to a discrepancy of about 50%. Figure <ref> shows the evolution of the running-average torque for the three different mass ratios under strong turbulence, also normalized by Σ_0 r_0^4 Ω_0^2. For the low mass ratio q = 10^-6 case, the profile settles towards a fluctuating torque with an amplitude smaller than the linear torque, which is predicted to be ∼ 10^-9 in the corresponding units (Σ_0 r_0^4 Ω_0^2), suggestive of chaotic migration similar to the fiducial model. Nevertheless, since |⟨C_turb⟩/C_L| is larger than in the fiducial case, we expect a robust convergence timescale of ∼ 2×10^4 orbits, only after which we can confidently tell if the linear background torque is completely suppressed. We did not run the simulation long enough due to computational constraints, but we comment that for very low masses, both the linear and fluctuating components are expected to be minimal, resulting in an overall small migration effect for all ⟨α_R⟩.

In the high-mass case with marginal gap opening, q ∼ h_0^3 = 2.7×10^-5, the fluctuation (∝ q) is larger than the linear torque (∝ q^2) by only one order of magnitude, suggesting it is much harder to drive chaotic migration of high-mass companions through turbulence <cit.>. Naturally, we do expect gap-opening planets' migration to be less perturbed by turbulent effects, as shown in MHD simulations <cit.>. In Figure <ref> we see the average for q = 3×10^-5 converging towards ∼ -10^-7, which suggests that the long-term evolution of this gap-opening companion will be linear-torque dominated. From the specific torque distribution averaged over the last 200 orbits for q = 3×10^-5 (Figure <ref>), we also observe the emergence of density-wave structure (trails extending towards the upper left and lower right), in contrast to the fiducial case (Figure <ref>). Interestingly, the converged value of Γ is still about one order of magnitude smaller than the expected linear torque from Eqn <ref>. This might be due to partial gap opening or a non-linear enhancement of the positive corotation torque, which is not the main focus of our study.

Extrapolating to different values of q and γ or ⟨α_R⟩, we generally expect a critical transition q, for a given level of turbulence α_R, above which the planet shifts from chaotic migration towards linear migration; meanwhile, the mean value of ⟨Γ⟩ begins to become significant. Since the marginal gap-opening case is close to linear migration, we can determine, for ⟨α_R⟩ = 0.04 and h_0 = 0.03, the transition to lie at 5×10^-6 ≲ q_crit ≲ 3×10^-5. The same paradigm can be applied to a subsequent parameter survey to derive a comprehensive scaling for q_crit.
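The convergence-timescale argument of the previous subsection can be illustrated with a toy Monte Carlo. Assuming the superposition picture holds (i.e., the linear torque survives beneath Gaussian kicks of amplitude ⟨C_turb⟩(q/h_0)^2 that decorrelate every orbit), the running average approaches the linear value only after τ_conv ∼ (⟨C_turb⟩/C_L)^2 orbits. The sketch below implements only this illustrative estimate, with the fiducial q and the γ = 10^-3 calibration, and is not a substitute for the hydrodynamical simulations:

import numpy as np

rng = np.random.default_rng(1)
C_L, C_turb, n_orbits = 1.8, 63.0, 20000    # gamma = 1e-3 calibration from the text
pref = (5e-6 / 0.03) ** 2                   # (q/h0)^2, fiducial mass ratio

gam_lin = -C_L * pref                       # steady differential Lindblad torque
kicks = C_turb * pref * rng.normal(size=n_orbits)   # one independent kick per orbit
run_avg = np.cumsum(gam_lin + kicks) / np.arange(1, n_orbits + 1)

tau_conv = (C_turb / C_L) ** 2              # ~1.2e3 orbits for these numbers
print(tau_conv, run_avg[int(tau_conv)] / gam_lin)   # ratio approaches 1 past tau_conv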
<cit.> investigated the critical planet orbital eccentricity above which circum-companion flow patterns are disrupted. They concluded that when the characteristic epicyclic impact velocity e Ω_0 r_0 is large, the background Keplerian shear would dominate over the classical flow pattern down to the modified Bondi radius R_B,ecc ∼ GM_p/(e Ω_0 r_0)^2. Analogously, we can introduce a "turbulent" Bondi radius based on the turbulent velocity amplitude Δv, where by the definition of the Reynolds stress Δv^2/c_s^2 ∼ ⟨α_R⟩, and assume that, in a time-averaged sense, turbulent motions can strongly dominate down to the scale R_B,turb ∼ GM_p/Δv^2 from the planet. Since Lindblad resonances are effective at radial distances of a few pressure scale heights ∼ H = hr, we might expect strong chaotic fluctuations to significantly impact the Type I torque when R_B,turb is smaller than some factor λ of H, i.e. when q ≲ λ⟨α_R⟩ h^3. Here 4 < λ < 25 is loosely constrained from our limiting test cases. More accurate values need to be obtained by extensive simulations. Such a determination is technically straightforward since the transition from classical to chaotic migration can be distinctly identified by plotting the running-time average of the torque over long enough timescales.

§ CONCLUSIONS

In this letter, we use hydrodynamical simulations with a driven turbulence prescription to show that low-mass planets, or more generally disc-embedded companions, migrating in strongly turbulent discs are subject to highly stochastic driving forces, and the time-averaged residual migration torque becomes negligible after a few thousand orbits. For a q = 5×10^-6 companion, when ⟨α_R⟩ ∼ 4×10^-4, the average Lindblad and corotation torques will dominate after the decay of the running average of the turbulent torque component, as in <cit.>, and the migration becomes subject to linear expectations. However, for stronger turbulence, corresponding to ⟨α_R⟩ ∼ 4×10^-2, the classical resonances are disrupted and the residual running-time average does not tend to saturate towards the classical predictions of the Type I torque. This randomizing effect is stronger for lower companion masses, and becomes less significant for marginally gap-opening companion masses.

This stochastic migration paradigm applies to planets migrating in outer PPDs dominated by strong gravito-turbulence, as well as sBHs in AGN discs under the influence of gravito-turbulence and the density waves generated by the swarm of embedded objects themselves <cit.>, including intermediate-mass sBHs and massive stars <cit.>. The ⟨α_R⟩ ∼ 4×10^-2 requirement for our fiducial example appears to be reaching the upper limit of quasi-steady gravito-turbulence <cit.>. However, even if intense cooling triggers disc fragmentation, the gas would not be completely deposited into fragments before the disc region in between the fragments re-establishes a quasi-steady state at the maximum turbulence level, provided that the fragments formed do not play a major role in the energy and angular momentum budget of the disc.
This self-regulation can be due either to a prolonged cooling timescale from lower surface density (more relevant to gas-pressure-dominated PPDs), or to auxiliary heating mechanisms such as large-scale spiral shocks or star formation heating (more relevant to AGN discs) <cit.>. Moreover, lower-mass companions, whose flow patterns are more susceptible to disruption, would have lower transition turbulence parameters, which widens the viable viscosity parameter space for chaotic migration. In light of these considerations, we propose that chaotic migration can indeed operate in a wide range of parameter space relevant to AGN discs, as well as a significant range of parameter space relevant to PPDs.

Regarding the dependencies of the magnitude of the random torque ⟨C_turb⟩, our measurement is consistent with the scaling summarised by <cit.>, which can be recast in terms of ⟨α_R⟩ as:

⟨C_turb⟩ ≈ 360 q^-1 h_0^2 γ ≈ 60 q^-1 h_0^3 ⟨α_R⟩^1/2,

which serves as a quantitative formula for future population studies that examine the effect of the random driving torque on AGN-channel GW statistics. In terms of a reference gravitational stability parameter Q_0 ≡ h_0 r_0 Ω_0^2/(π G Σ_0), the net effect of turbulent migration will move the companion radially by a fraction of

Δr/r_0 = (⟨Γ⟩/(M_p r_0^2 Ω_0)) · (2π/Ω_0) = 3.8×10^2 (Σ_0 r_0^2/M_p) q h_0 ⟨α_R⟩^1/2 ≃ 10^2 Q_0^-1 ⟨α_R⟩^1/2 h_0^2,

in either the inward or outward direction over each orbital timescale, and its magnitude would increase with time by a factor ∼ (Ω_0 t/2π)^1/2. This estimate (Eq. <ref>) is consistent with the numerical simulations (Figures <ref> & <ref>), which show a much slower random-walk diffusion (with γ = 10^-3) rather than monotonic migration (with γ = 0). To first order, we expect chaotic migration to prevent sBHs from monotonically migrating towards a migration trap where they become dynamically crowded, with many migrating outwards as well as inwards. Consequently, three-body interactions and hierarchical mergers in migration traps will be less frequent than previously predicted <cit.>. Although the disc's self-gravity is neglected in these simulations, the scaling relation in Eqn. <ref> indicates that Δr ≲ r_0 during the lifespan of massive stars <cit.> in typical AGN discs with marginal gravitational stability <cit.>, modest gravito-turbulence <cit.>, and small scale height <cit.>. This slow diffusion enables the heavy-element pollution of AGN discs <cit.> by massive stars before they migrate into and are consumed by the SMBH, without the need for universal migration traps <cit.>.

It is worth noting that strong three-body interaction in migration traps has been invoked to explain the low mean effective spin seen in LIGO-Virgo observations <cit.>, assuming a significant portion of the events is indeed from the AGN channel. On the other hand, <cit.> has proposed other natural ways, based on local gas accretion, to reduce both the magnitude and the mean of the effective sBH spin, which do not rely on frequent interactions between sBHs and sBH binaries. We also propose a scaling for the transition companion mass ratio from classical to chaotic migration, q_crit ∝ ⟨α_R⟩ h_0^3, which can be further tested and refined by conducting an extensive parameter survey using numerical simulations.
The distinct nature of the transition allows for straightforward testing. The parameter survey can be performed either with a comparable code setup or by incorporating more realistic turbulence treatments in subsequent works.

§ ACKNOWLEDGEMENTS

We thank Clément Baruteau for helpful discussions and for sharing the turbulence module of FARGO, and Ya-Ping Li for his guidance on data analysis. We thank Sergei Nayakshin, Richard Nelson, Kevin Schlaufman, Xing Wei and the anonymous referee for inspiring discussions or comments. This research used the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk).

§ DATA AVAILABILITY

The data obtained in our simulations can be made available on reasonable request to the corresponding author. | http://arxiv.org/abs/2311.15747v1 | {
"authors": [
"Yinhao Wu",
"Yi-Xian Chen",
"Douglas N. C. Lin"
],
"categories": [
"astro-ph.EP",
"astro-ph.GA",
"astro-ph.HE",
"astro-ph.SR"
],
"primary_category": "astro-ph.EP",
"published": "20231127120555",
"title": "Chaotic Type I Migration in Turbulent Discs"
} |
Learning with Errors over Group Rings Constructed by Semi-direct Product^† Jiaqi Liu, Fang-Wei Fu Jiaqi Liu and Fang-Wei Fu are with the Chern Institute of Mathematics and LPMC, Nankai University, Tianjin 300071, China, Emails: [email protected], [email protected] ^†This research is supported by the National Key Research and Development Program of China (Grant Nos. 2022YFA1005000 and 2018YFA0704703), the National Natural Science Foundation of China (Grant Nos. 12141108, 62371259, 12226336), the Fundamental Research Funds for the Central Universities of China (Nankai University), and the Nankai Zhide Foundation. manuscript submitted January 14, 2024
===============

The Learning with Errors (LWE) problem has been widely utilized as a foundation for numerous cryptographic tools over the years. In this study, we focus on an algebraic variant of the LWE problem called group ring LWE (GR-LWE). We select group rings (or their direct summands) whose underlying groups belong to specific families of finite groups constructed by taking the semi-direct product of two cyclic groups. Unlike the Ring-LWE problem described in <cit.>, the multiplication operation in the group rings considered here is non-commutative. As an extension of Ring-LWE, it maintains computational hardness and can potentially be applied in many cryptographic scenarios. In this paper, we present two polynomial-time quantum reductions. Firstly, we provide a quantum reduction from the worst-case shortest independent vectors problem (SIVP) in ideal lattices with polynomial approximation factor to the search version of GR-LWE. This reduction requires that the underlying group ring possesses certain mild properties. Secondly, we present another quantum reduction for two types of group rings, where the worst-case lattice problem is directly reduced to the (average-case) decision GR-LWE problem. The pseudorandomness of GR-LWE samples guaranteed by this reduction can consequently be leveraged to construct semantically secure public-key cryptosystems.

Learning with errors; Group rings; Semi-direct product; Group representations; Lattice-based cryptography; Computational hardness

§ INTRODUCTION

§.§ Lattice-based cryptography

Lattice-based cryptography has been playing a significant role in post-quantum cryptography. It is well known that traditional cryptographic systems such as RSA and the Diffie-Hellman protocol rely on the computational hardness of certain number-theoretic problems. However, Shor <cit.> introduced a quantum algorithm that can solve the integer factorization and discrete logarithm problems in polynomial time, rendering these classical cryptosystems vulnerable to quantum attacks. In contrast, lattice-based cryptosystems have several advantages over such traditional cryptosystems. First, no efficient quantum algorithms have been discovered, for the moment, for the lattice problems that are used to construct cryptographic tools.
Consequently, efforts are currently underway to establish post-quantum cryptographic standards, with lattice-based schemes as leading candidates. Second, there is an inherent disparity between the average-case hardness required by cryptography and the worst-case hardness concerned in computational complexity theory. Unlike other types of cryptosystems, lattice-based cryptosystems enjoy security guaranteed by the worst-case hardness of certain lattice problems. Ajtai <cit.> proposed the first public-key cryptosystem from the short integer solution (SIS) problem, whose security is based on the worst-case hardness of well-studied lattice problems. Third, lattice-based cryptography is computationally efficient in comparison with RSA and the Diffie-Hellman protocol, as it only requires linear operations on vectors or matrices modulo relatively small integers, without the need for large-number exponentiations.

Regev <cit.> was the first to demonstrate the hardness of the learning with errors (LWE) problem, which is an extension of the learning parity with noise (LPN) problem. The LWE problem has gained considerable attention in recent years, due to its strong hardness and practical applications in various fields, including cryptography, machine learning, and quantum computing. The LWE problem can be formulated as follows: Let q = poly(n) be a positive integer, fix a secret vector s ∈ ℤ_q^n, sample arbitrarily many a_i ∈ ℤ_q^n independently from the uniform distribution, where 1 ≤ i ≤ m, and let e_i ∈ ℤ_q be independent "short" elements which are typically sampled from a discrete Gaussian distribution. Then, compute the samples as

(a_i, b_i = ⟨s, a_i⟩ + e_i) ∈ ℤ_q^n × ℤ_q,

where ⟨·,·⟩ denotes the Euclidean inner product. The LWE problem asks to recover the secret vector s given LWE samples (search version), or to distinguish LWE samples from uniformly random samples in ℤ_q^n × ℤ_q with a non-negligible advantage within polynomial time (decision version). In <cit.>, Regev initially proved the hardness of the LWE problem for certain parameters, assuming the hardness of finding short vectors in lattices. Specifically, Regev established a (quantum) reduction from worst-case approximate SVP (shortest vector problem) and SIVP (shortest independent vectors problem) for n-dimensional lattices to the average-case LWE problem with the same dimension n as the corresponding lattices. Informally speaking, the reduction suggests that if someone could solve LWE in polynomial time with non-negligible probability, it would imply the existence of a quantum algorithm capable of solving approximate SVP or SIVP in arbitrary lattices with the same dimension as in the LWE problem. Later, Peikert <cit.> presented a classical reduction from worst-case approximate GapSVP, but with worse parameters than those in <cit.>.

While Regev's LWE enjoys worst-case hardness, it suffers from drawbacks regarding efficiency. It scales poorly with increasing dimension n, which makes it impractical for high-dimensional settings. Inspired by the NTRU <cit.> cryptosystem proposed by Hoffstein et al., which is constructed from algebraic lattices, Stehlé et al. <cit.> proposed a more efficient variant of the LWE problem over a cyclotomic ring. Lyubashevsky et al. <cit.> then presented the reduction from hard problems in ideal lattices in the worst case to the Ring-LWE problem. The improved efficiency of this problem stems from the additional algebraic structure of the rings.
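To fix notation before turning to the ring-based variants, a minimal sketch of generating plain LWE samples as defined above is given below; the parameter values and the rounded continuous Gaussian standing in for the discrete Gaussian error are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, q, m, alpha = 16, 3329, 32, 0.005      # toy parameters for illustration
s = rng.integers(0, q, n)                 # secret vector in Z_q^n

def lwe_sample():
    a = rng.integers(0, q, n)                    # uniform a_i in Z_q^n
    e = int(round(rng.normal(0, alpha * q)))     # "short" error term
    return a, (a @ s + e) % q

samples = [lwe_sample() for _ in range(m)]
# Search-LWE asks to recover s from such samples; decision-LWE asks to
# distinguish them from uniformly random pairs in Z_q^n x Z_q.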
The utilization of the number-theoretic transform (NTT), a variant of the fast Fourier transform (FFT), also accelerates the multiplication of ring elements in certain cyclotomic rings, contributing to its computational efficiency. Precisely, Ring-LWE also considers equations of the form b_i = a_i·s + e_i for a secret element s ∈ R^∨/qR^∨ and public uniformly random a_i ∈ R/qR, where q is the modulus and R is a commutative ring, typically chosen as the ring of algebraic integers of a number field. Despite its additional algebraic structure, no known algorithms for ideal lattices outperform those for general lattices for the moment. Later, Peikert et al. <cit.> gave a direct quantum reduction from worst-case (ideal) lattice problems to the decision version of (Ring-)LWE.

Brakerski et al. <cit.> introduced Module-LWE, balancing the efficiency and compactness of Ring-LWE and the hardness of Regev's plain LWE. Module-LWE has more flexible parameters than Ring-LWE and Regev's standard LWE, making it a reasonable choice for standardization. Many schemes submitted to NIST for post-quantum standardization, such as CRYSTALS-Kyber, CRYSTALS-Dilithium, and Saber, rely on the assumed hardness of LWE variants. Additionally, there are several other algebraically structured variants, e.g., Polynomial-LWE <cit.>, Order-LWE <cit.>, and Middle-Product LWE <cit.>. Peikert et al. <cit.> have developed a unified framework for all proposed variants (over commutative base rings), which encompasses all prior "algebraic" variants defined over number fields. This work has led to more simplified and tighter reductions to Ring-LWE.

SIS and LWE are two off-the-shelf problems possessing reductions from worst-case lattice problems. They are both widely used to construct powerful cryptographic primitives. LWE and its variants have numerous applications in cryptography, including secure public-key encryption <cit.>, key-homomorphic PRFs <cit.>, oblivious transfer <cit.>, identity-based encryption (IBE) <cit.>, attribute-based encryption (ABE) <cit.>, fully homomorphic encryption (FHE) <cit.>, and multilinear maps <cit.>.

§.§ Our results

Similar to the work in <cit.>, we consider the LWE problem over group rings. Let G = {g_1, g_2, …, g_n} be a finite group, and let R be a commutative ring; elements in the group ring R[G] are formal sums

∑_i=1^n r_i g_i, r_i ∈ R.

In this paper, two types of non-commutative finite groups are constructed, each of which is a semi-direct product of two cyclic groups (e.g., ℤ_m⋉ℤ_n, ℤ_n^∗⋉ℤ_n). We use them as the underlying groups of the group rings.

To study the relationship between an algebraic structure and a lattice in ℝ^n, we need to embed the structure into the Euclidean space ℝ^n. For LWE variants based on rings, there are two types of embeddings into ℝ^n: the canonical embedding and the coefficient embedding. The canonical embedding provides nice geometric and algebraic properties; one can refer to the work of Lyubashevsky et al. in <cit.>. They embed the base ring R (such as the cyclotomic ring ℤ[x]/⟨x^n+1⟩) into a linear space H ⊂ ℝ^s_1×ℂ^2s_2 defined as

H = {(x_1, x_2, …, x_n) ∈ ℝ^s_1×ℂ^2s_2 : x_s_1+s_2+i = x̅_s_1+i for 1 ≤ i ≤ s_2},

where s_1 + 2s_2 = n. Every element a ∈ R is mapped into H by taking the n distinct embeddings of a (s_1 real embeddings and 2s_2 complex embeddings). It can be easily verified that H, with the Hermitian inner product, is isomorphic to the Euclidean inner product space ℝ^n. The canonical embedding often leads to tighter error rates and enables more compact systems due to its component-wise multiplication.
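The component-wise multiplication just mentioned can be checked numerically: embedding ℂ[x]/⟨x^n+1⟩ by evaluating polynomials at the n roots of x^n+1 (the odd 2n-th roots of unity) turns negacyclic convolution of coefficient vectors into component-wise products. A minimal sketch, with a toy degree chosen for illustration:

import numpy as np

n = 4
roots = np.exp(2j * np.pi * np.arange(1, 2 * n, 2) / (2 * n))  # odd 2n-th roots of unity

def sigma(a):
    # Canonical embedding: evaluate a_0 + a_1 x + ... + a_{n-1} x^{n-1} at the roots.
    return np.polyval(a[::-1], roots)

rng = np.random.default_rng(2)
a, b = rng.integers(-3, 4, n), rng.integers(-3, 4, n)

c = np.zeros(n, dtype=np.int64)            # c = a * b mod (x^n + 1), i.e. x^n = -1
for i in range(n):
    for j in range(n):
        c[(i + j) % n] += a[i] * b[j] * (-1) ** ((i + j) // n)

assert np.allclose(sigma(a) * sigma(b), sigma(c))   # multiplication is component-wise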
On the other hand, the coefficient embedding introduced by <cit.> directly maps the ring elements to real vectors according to their coefficients under a specific basis. In this paper, we primarily use the coefficient embedding for the sake of simplicity, following the approach described in <cit.>. However, we also need to use an extended version of the canonical embedding to analyze the generalized LWE problem, due to its connection to irreducible group representations.

Taking an example of the canonical embedding in <cit.>, the ring ℂ[x]/⟨x^n+1⟩ can be viewed as a direct summand of ℂ[C_2n], where C_2n denotes the cyclic group of order 2n. Precisely,

ℂ[C_2n] ≅ ℂ[x]/⟨x^2n-1⟩ ≅ ℂ[x]/⟨x^n+1⟩ ⊕ ℂ[x]/⟨x^n-1⟩ ≅ ⊕_i=0^2n-1 ℂ[x]/⟨x - e^2πi√(-1)/2n⟩.

One can observe that each direct summand above corresponds to a (one-dimensional) irreducible representation of C_2n. The canonical embedding of ℤ[x]/⟨x^n+1⟩ consists of a natural inclusion into ℂ[x]/⟨x^n+1⟩ and an isomorphism obtained from (<ref>):

ℂ[x]/⟨x^n+1⟩ ≅ ⊕_0≤ i≤ 2n-1, 2∤i ℂ[x]/⟨x - e^2πi√(-1)/2n⟩.

To generalize the canonical embedding to group rings constructed from non-commutative groups, we have to deal with irreducible representations whose dimensions are greater than 1. By applying the Artin-Wedderburn theorem <cit.>, we can uniquely decompose the group algebra ℂ[G] into a direct sum of matrix algebras over ℂ. Each direct summand, denoted by ℂ^d×d, matches exactly an irreducible representation of dimension d of the group G. The complicated form of this decomposition is one of the reasons to use the coefficient embedding in this paper rather than the canonical embedding.

In the case of a group ring ℤ[G], we can regard it as a free ℤ-module, and all the group elements naturally form a ℤ-basis of ℤ[G]. An element ∑_i=1^n r_i g_i ∈ ℤ[G] can be represented by an n-dimensional vector (r_1, r_2, …, r_n) ∈ ℤ^n. Under this embedding, each ideal of ℤ[G] corresponds to a lattice in ℝ^n, which is called an ideal lattice. In our paper, we mainly study the relationship between hard problems in ideal lattices and the LWE problem over group rings.

§.§.§ Avoiding the potential attack

For "hard" problems such as SVP and SIVP in certain ideal lattices, several potential attacks have been discovered. Various quantum polynomial-time algorithms have been developed to solve these problems in specific principal ideal lattices, as mentioned in <cit.>. However, it is important to note that the Ring-LWE problem presented in <cit.> is not under direct threat from the attacks above, as the corresponding lattice problem serves as a lower bound on the security. The LWE problem over group rings studied in this paper may not be affected by these algorithms, for the following reasons. On the one hand, the ideals of the group rings may not be principal. On the other hand, these attacks are basically designed for rings with a commutative multiplication. Therefore, the attacks cannot be directly applied to the scenarios discussed in this paper, where the multiplication operation is not commutative.

§.§.§ Why we select such group rings

In this paper, we have chosen the quotient ring ℤ[ℤ_m⋉ℤ_n]/⟨t^n/2+1⟩ instead of ℤ[ℤ_m⋉ℤ_n] as a candidate, where t is the generator of ℤ_n, though both reductions are correct for ℤ[ℤ_m⋉ℤ_n]. For Ring-LWE, there are attacks that map the ring elements to small-order elements in order to extract useful information.
For instance, <cit.> tells us that it is not secure to use ℤ[C_2n] ≅ ℤ[x]/⟨x^2n-1⟩ as the underlying ring, due to its vulnerability arising from the mapping ℤ[x]/⟨x^2n-1⟩ → ℤ[x]/⟨x-1⟩, which may leak some information that allows for secret recovery. Nevertheless, we often select polynomial rings of the form ℤ[x]/⟨x^n+1⟩, a direct summand of ℤ[C_2n], to avoid the information leakage resulting from such attacks.

Additionally, when performing the multiplication of two elements in the group ring, it is necessary to exploit all irreducible representations of the finite group, due to the coefficient embedding. This implies that the highest dimension of an irreducible representation cannot be excessively large. It can be verified that the dimension of the irreducible representations of ℤ_m⋉ℤ_n (Type I) is no more than 2, and that the dimension of the irreducible representations of ℤ_n^∗⋉ℤ_n (Type II) is no more than n - n/(p_1p_2⋯p_s), where p_1, p_2, …, p_s are all the distinct prime factors of n.

§.§.§ Relationship with other LWE variants

LWE over group rings involves a non-commutative multiplication operation, unlike the variants studied in <cit.>. Consider a group ring R[G], where G = {g_1, g_2, …, g_n} is a finite group of order n. In this setting, g_1, g_2, …, g_n naturally form an R-basis of R[G]. Let a, s ∈ R[G]; then the product a·s can also be expressed as the product of a matrix and a vector with entries from R. Precisely, the left multiplication determined by a is an R-linear transformation; hence a specifies a unique matrix ℳ(a) ∈ R^n×n under the natural basis. The product a·s can then be equivalently represented as the product of ℳ(a) with the coefficient vector of s.

LWE over group rings can also be viewed as a "structured-module" LWE. As stated in <cit.>, the group ring is selected as ℤ[D_2n], i.e., ℤ[ℤ_2⋉ℤ_n]. Let r, t be generators of ℤ_2 and ℤ_n, respectively. Let s = s_1(t) + r·s_2(t), a = a_1(t) + r·a_2(t), b = b_1(t) + r·b_2(t) ∈ ℤ[D_2n], where s_1(t), s_2(t), a_1(t), a_2(t), b_1(t), b_2(t) ∈ ℤ[t]/⟨t^n-1⟩; then the equation b ≈ a·s can be expressed in matrix form as

(b_1, b_2)^t ≈ M · (s_1, s_2)^t.

Here, M is a structured 2×2 matrix with elements in ℤ[t]/⟨t^n-1⟩, all of which can be computed efficiently from a_1(t) and a_2(t). It is worth noting that the matrix M is structured, since only two elements (a_1(t) and a_2(t)) of the four need to be sampled randomly, while in the situation of unstructured Module-LWE, all entries of M are sampled randomly, as in <cit.>. That is the reason why LWE over group rings can be viewed as a "structured-module" LWE.

In <cit.>, Pedrouzo et al. introduced an extension of the Ring-LWE framework proposed in <cit.> by adding other indeterminates to the polynomial ring. Specifically, they generalized the univariate polynomials in the ring ℤ[x]/⟨x^n+1⟩ to multivariate polynomials in the ring ℤ[x_1, x_2, …, x_m]/⟨x_1^n+1, …, x_m^n+1⟩. In particular, multivariate Ring-LWE with two indeterminates can also be interpreted as an instance of LWE over the group ring discussed here. In that setting, the underlying group is the direct product of two cyclic groups. However, in this paper, we primarily focus on the semi-direct product of two cyclic groups. It should be emphasized that, in some sense, the GR-LWE considered in this paper can be regarded as a two-variate Ring-LWE, but the two variables are not algebraically independent.

There are several schemes that exploit non-commutative algebraic structures.
The idea of the LWE problem over group rings originates from non-commutative NTRU schemes <cit.>, where the authors employ group rings ℤ[G] instead of the ring ℤ[x]/⟨x^n-1⟩ considered in the original NTRU scheme <cit.>. Grover et al. <cit.> showed another non-commutative LWE variant over cyclic algebras and provided a hardness proof by giving a reduction from hard lattice problems.

§.§.§ Proof of reductions

In this paper, we present two reductions for the GR-LWE problem. Both reductions make full use of the iterative steps (Lemma <ref> and Lemma <ref>) mentioned in <cit.> and <cit.>. One can refer to Section <ref> and Section <ref> for detailed information.

The first reduction is from a worst-case lattice problem to the search GR-LWE problem. The reduction is based on the work of <cit.> and <cit.>. It can be applied to any group ring that satisfies certain mild properties. Precisely, the reduction applies to those group rings satisfying the following property: if an ideal of the group ring has an inverse ideal, then the inverse ideal and the dual ideal are essentially equivalent, up to a certain permutation of the coefficients. Since not every ideal of the group ring is invertible, this reduction only considers the invertible ones. With the help of the search GR-LWE oracle, we are capable of sampling from discrete Gaussian distributions with narrower and narrower widths in (invertible) ideal lattices through iterative steps. This process continues until the Gaussian parameter reaches our expectations. Consequently, we can obtain "short" vectors in the lattice, except with negligible probability.

The second one is a direct reduction from a (worst-case) lattice problem to the (average-case) decision GR-LWE problem. The procedure closely follows the approaches outlined in <cit.> and <cit.>. The coefficient embedding and the generalized canonical embedding both play crucial roles in this reduction. We establish the hardness by providing a reduction from bounded distance decoding (BDD), the problem of finding the closest lattice vector to a target v ∈ ℝ^n that is sufficiently close to a lattice ℒ, to the decision GR-LWE problem. During the reduction process, the decision GR-LWE oracle behaves like an oracle with a "hidden center", as defined by <cit.>. To see how the oracle works for the reduction, <cit.> states that one can transform a BDD instance into an LWE sample whose secret corresponds to the closest lattice vector w ∈ ℒ to v. With a suitable oracle for decision GR-LWE, and by incrementally moving v towards the closest lattice vector w while carefully measuring the behavior of the oracle, we can detect the distance between the moving point v and w by monitoring the acceptance probability of the oracle, which is a monotonically decreasing function of dist(v, w). This makes it possible to solve BDD by repeatedly perturbing v to a new point v', and testing whether the new point is significantly closer to the lattice. For further details, we refer to Section <ref>.

§.§ Applications

Due to the pseudorandomness of GR-LWE samples, it is possible to construct a public-key cryptosystem that provides semantic security. The security is based upon the hardness of the worst-case SIVP problem with a polynomial approximation factor. We can combine the result of this paper with the cryptosystem presented in <cit.>, which is also similar to both the Diffie-Hellman protocol <cit.> and the ElGamal protocol <cit.>. Informally, let R be some group ring or a quotient ring of some group ring. For instance, we can select the ring R = ℤ[G]/⟨t^n/2+1⟩, where G is a semi-direct product of the cyclic groups ℤ_m = ⟨s⟩ and ℤ_n = ⟨t⟩. Let q be a positive integer.
The public-key cryptosystem based on GR-LWE is as follows. The key-generation algorithm first generates a uniformly random element a ∈ R_q := R/qR (under the coefficient embedding) and two "short" elements s, e ∈ R_q from an error distribution (typically a discrete Gaussian distribution with "narrow" width). It outputs a GR-LWE sample (a, b = s·a + e) ∈ R_q × R_q as the public key and s as the secret key. To encrypt a |G|/2-bit message m ∈ {0,1}^|G|/2, which can be viewed as a polynomial of degree n/2-1 with 0 or 1 coefficients, the encryption algorithm first generates three "short" random elements r, e_1 and e_2 ∈ R_q from the error distribution and outputs (u, v) ∈ R_q × R_q with u = a·r + e_1 mod q and v = b·r + e_2 + ⌊q/2⌉·m mod q. To decrypt (u, v) ∈ R_q × R_q, the decryption algorithm computes

m̂ = v - s·u = (e·r - s·e_1 + e_2) + ⌊q/2⌉·m mod q.

By choosing appropriate parameters, the element e·r - s·e_1 + e_2 can be restricted to be very "short". By rounding each coefficient of m̂ to either 0 or ⌊q/2⌉, we can efficiently recover the bit string, except with negligible probability. (A toy sketch of this scheme over a dihedral group ring is given after the notation paragraph below.)

To understand why this cryptosystem is semantically secure, we first observe that the public key is generated as a GR-LWE sample, though with a "short" secret s, which can also be proved to be pseudorandom, as in the "normal form" LWE discussed in <cit.>. Such a sample becomes indistinguishable from a truly uniform sample, except with negligible probability. By replacing the public key with a uniformly random sample (this makes a negligible difference to the system), the encrypted messages (a, u) ∈ R_q × R_q and (b, v) ∈ R_q × R_q both become exact GR-LWE samples with "random" secret r. Therefore, these two encrypted messages are pseudorandom, which implies the semantic security of the system.

§.§ Paper organization

In Section 2, we give some basic knowledge of lattices and related computationally hard problems. In Section 3, we provide a reduction from worst-case lattice problems to the search version of GR-LWE over some group rings with special properties. In Section 4, we present a direct reduction from worst-case lattice problems to the decision version of GR-LWE, considering two types of group rings. This generalizes the results of <cit.> and enables the construction of some cryptographic primitives. In Section 5, we summarize our findings and offer some thoughts for further study.

§ PRELIMINARIES

First, we present some notations used throughout this paper. For x ∈ ℝ, we denote by ⌊x⌋ the largest integer not greater than x, and by ⌊x⌉ := ⌊x+1/2⌋ the nearest integer to x (if there exist two nearest integers, select the larger one). For an integer n ≥ 2, we denote by ℤ_n = ℤ/nℤ the cyclic group of order n with addition modulo n. Denote by φ(m) Euler's totient function of a positive integer m, i.e., the number of positive integers that are coprime with m and also not greater than m. Throughout this paper, we use bold lower-case letters to denote column vectors, e.g., a, b, c, and bold upper-case letters to denote matrices, e.g., A, B, C. The transposes of a vector a and a matrix A are denoted by a^t and A^t, respectively. The rounding function mentioned above can be applied element-wise to vectors, e.g., ⌊a⌉ rounds each entry of a to its nearest integer. The inner product of two vectors a, b is denoted by ⟨a, b⟩. If S is a nonempty set, then x ← S denotes that x is a random variable uniformly distributed in S. If φ is a probability density function, then x ← φ denotes that x is a random variable distributed as φ.
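Returning briefly to the public-key scheme of the Applications subsection above, the following toy sketch instantiates its KeyGen/Encrypt/Decrypt mechanics over the dihedral group ring ℤ_q[D_2n] = ℤ_q[ℤ_2⋉ℤ_n], writing elements as x_1(t) + r·x_2(t) and using the relation r·t = t^-1·r. The tiny parameters and the ternary elements standing in for discrete Gaussian errors are illustrative assumptions, and the paper's preferred ring is the quotient by ⟨t^n/2+1⟩, so this shows only the mechanics. Decryption works because v - s·u = e·r - s·e_1 + e_2 + ⌊q/2⌉·m by associativity, despite the non-commutativity.

import numpy as np

rng = np.random.default_rng(3)
n, q = 8, 257                                  # tiny toy parameters

def conv(a, b):
    # Multiplication in Z_q[t]/<t^n - 1> (cyclic convolution).
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        c[(i + np.arange(n)) % n] += int(a[i]) * b
    return c % q

def inv_t(a):
    # The map a(t) -> a(t^{-1}): coefficient of t^k becomes a_{-k mod n}.
    return a[(-np.arange(n)) % n]

def grmul(x, y):
    # (x1 + r*x2)(y1 + r*y2) in Z_q[D_2n], using r*t = t^{-1}*r and r^2 = 1.
    (x1, x2), (y1, y2) = x, y
    return np.stack([(conv(x1, y1) + conv(inv_t(x2), y2)) % q,
                     (conv(inv_t(x1), y2) + conv(x2, y1)) % q])

small = lambda: rng.integers(-1, 2, (2, n)) % q          # ternary "short" element
unif = lambda: rng.integers(0, q, (2, n))

a, s, e = unif(), small(), small()                       # KeyGen
b = (grmul(s, a) + e) % q                                # public key (a, b); secret key s

m = rng.integers(0, 2, (2, n))                           # 2n-bit message, one bit per coeff
r_, e1, e2 = small(), small(), small()                   # Encrypt
u = (grmul(a, r_) + e1) % q
v = (grmul(b, r_) + e2 + (q // 2) * m) % q

d = (v - grmul(s, u)) % q                                # Decrypt and round per coefficient
rec = (np.minimum(d, q - d) > q // 4).astype(int)
assert (rec == m).all()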
Given two probability density functions ρ_1,ρ_2 over ℝ^n, the statistical distance between ρ_1 and ρ_2 is defined as Δ(ρ_1,ρ_2):=(1/2)∫_{ℝ^n}|ρ_1(x)−ρ_2(x)|dx.

§.§ Lattice Background

In this section, we introduce some definitions and discuss some related “hard” problems regarding lattices. A lattice is a discrete additive subgroup of ℝ^n. Equivalently, an n-dimensional lattice ℒ is the set of all integer combinations of n linearly independent basis column vectors B:=(b_1,b_2,…,b_n)∈ℝ^{n×n}, i.e., ℒ=ℒ(B):=B·ℤ^n={∑_{i=1}^n z_ib_i : z_i∈ℤ, 1≤i≤n}. The lattice ℒ can thus be viewed as a full-rank free ℤ-module with basis b_1,b_2,…,b_n. Since a lattice ℒ is an additive subgroup of ℝ^n, we obtain the quotient group ℝ^n/ℒ (with the operation induced by addition on ℝ^n), whose elements are the cosets x+ℒ={x+v : v∈ℒ} for x∈ℝ^n. The fundamental parallelepiped of a lattice ℒ with basis B is defined as 𝒫(B):=B·[0,1)^n={∑_{i=1}^n c_ib_i : 0≤c_i<1, 1≤i≤n}. Clearly, the definition of the fundamental parallelepiped depends on the choice of basis. Fixing a basis B, every coset x+ℒ∈ℝ^n/ℒ has a unique representative in 𝒫(B); in fact, the coset x+ℒ has the representative x−B·⌊B^{-1}·x⌋∈𝒫(B), which we denote by x mod B. The determinant of a lattice ℒ is the absolute value of the determinant of a basis B[It can be proved that the value of the determinant is independent of the choice of the basis B.], i.e., det(ℒ):=|det(B)|. The minimum distance of a lattice ℒ in a given norm ‖·‖ is the length of a shortest nonzero lattice vector, i.e., λ_1(ℒ):=min_{0≠v∈ℒ}‖v‖. The i-th successive minimum λ_i(ℒ) is defined as the smallest r∈ℝ such that ℒ contains i linearly independent vectors of norm at most r. In this paper, we use the ℓ_2-norm unless otherwise specified. The dual lattice of ℒ is defined as ℒ^∨:={y∈ℝ^n : ⟨x,y⟩∈ℤ for all x∈ℒ}, where ⟨·,·⟩ denotes the inner product (Euclidean by default). It is easy to prove that ℒ^∨ is also a lattice in ℝ^n and that if B is a basis of ℒ, then B^{-t}:=(B^{-1})^t=(B^t)^{-1} is a basis of ℒ^∨; hence (ℒ^∨)^∨=ℒ.

§.§ Gaussian measures

For any (column) vector c∈ℝ^n and positive definite matrix Σ∈ℝ^{n×n}, we define the Gaussian distribution with mean vector c and covariance matrix (1/2π)Σ∈ℝ^{n×n} as the distribution with probability density function D_{c,Σ}(x)=(1/√(det Σ))·exp(−π(x−c)^tΣ^{-1}(x−c)) for any x∈ℝ^n. If c is the zero vector, we denote D_{c,Σ} by D_Σ for short. In particular, if c is the zero vector and Σ is a diagonal matrix with diagonal entries r_1^2,r_2^2,…,r_n^2∈ℝ^+, then D_{c,Σ} degrades to an elliptical (non-spherical) Gaussian distribution, which we denote D_r for convenience, where r=(r_1,r_2,…,r_n)∈ℝ^n. Furthermore, if r_1=r_2=⋯=r_n=r, then this distribution degrades to a spherical Gaussian distribution with probability density function r^{−n}·exp(−π‖x‖^2/r^2), which we denote D_r for short. These functions extend to sets in the usual way, e.g., D_{c,Σ}(S)=∑_{x∈S}D_{c,Σ}(x), where S⊆ℝ^n is a countable set. For an n-dimensional lattice ℒ, a vector u∈ℝ^n, and a parameter Σ∈ℝ^{n×n}, we define the discrete Gaussian probability distribution over the coset ℒ+u with parameter Σ as D_{ℒ+u,Σ}(x):=D_Σ(x)/D_Σ(ℒ+u) for all x∈ℒ+u. Note that the support of D_{ℒ+u,Σ} is ℒ+u, a discrete set in ℝ^n. The (continuous or discrete) Gaussian distribution enjoys numerous helpful properties; we list some of them as follows. For 0<α<β≤2α, the statistical distance between D_α and D_β is no greater than 10(β/α−1). As in the continuous case, almost all samples from an n-dimensional discrete Gaussian lie in a sphere of radius a √n factor times the parameter: for any n-dimensional lattice ℒ and real r>0, a point sampled from D_{ℒ,r} has ℓ_2 norm at most r√n, except with probability at most 2^{−2n}.
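As an illustration of the tail bound just stated, the following sketch samples from D_{ℤ^n,r} — over the integer lattice the distribution is simply a product of n one-dimensional discrete Gaussians — and estimates how often the ℓ_2 norm exceeds r√n. The truncation bound and the parameters n=64, r=3 are arbitrary choices for the demonstration.

```python
import math, random

def make_sampler(r, tail=12):
    # inverse-CDF sampler for the one-dimensional discrete Gaussian D_{Z,r},
    # truncated to [-tail*r, tail*r] (the truncated mass is negligible)
    bound = int(math.ceil(tail * r))
    support = list(range(-bound, bound + 1))
    weights = [math.exp(-math.pi * (k / r) ** 2) for k in support]
    total = sum(weights)
    def sample():
        u, acc = random.random() * total, 0.0
        for k, w in zip(support, weights):
            acc += w
            if u <= acc:
                return k
        return bound
    return sample

n, r, trials = 64, 3.0, 1000
sample = make_sampler(r)
# D_{Z^n, r} is the product of n independent copies of D_{Z, r}
exceed = sum(
    1 for _ in range(trials)
    if math.sqrt(sum(sample() ** 2 for _ in range(n))) > r * math.sqrt(n)
)
print("fraction of samples with norm > r*sqrt(n):", exceed / trials)
```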
Micciancio et al. <cit.> first introduced the smoothing parameter of lattices, which characterizes how similar the discrete Gaussian distribution over a lattice is to the continuous Gaussian distribution with the same parameter. For this paper, however, we require the following more general definition from <cit.>. Let ℒ be a lattice in ℝ^n and ℒ^∨ its dual lattice. For a real ε>0, the smoothing parameter η_ε(ℒ) is defined to be the smallest s such that ∑_{y∈ℒ^∨∖{0}}D_{1/s}(y)=∑_{y∈ℒ^∨∖{0}}exp(−π s^2·‖y‖^2)≤ε. For any matrix S∈ℝ^{n×n}, we say that S satisfies the smoothing condition if ∑_{y∈ℒ^∨∖{0}}exp(−π·y^tSS^ty)≤ε, which we denote by S≥η_ε(ℒ).

The following lemma states an important property of the smoothing parameter and also explains the term “smoothing” to some extent. Suppose a vector x∈ℝ^n is sampled from the Gaussian distribution D_r, where r exceeds the smoothing parameter of a certain lattice ℒ; then x is approximately uniformly distributed over any fundamental parallelepiped 𝒫, up to a small statistical distance. Since the volume of any fundamental parallelepiped 𝒫 of ℒ equals det(ℒ), the uniform distribution over 𝒫 has probability density function 1/det(ℒ). To summarize: for any n-dimensional lattice ℒ, ε>0 and r≥η_ε(ℒ), the statistical distance between D_r and the uniform distribution over ℝ^n/ℒ is at most ε/2, and we have D_r(ℒ+c)∈(1±ε)·(1/det(ℒ)), where c is an arbitrary vector in ℝ^n. The following lemma states two useful bounds on the smoothing parameter. For any n-dimensional lattice ℒ, we have η_{2^{-2n}}(ℒ)≤√n/λ_1(ℒ^∨), and η_ε(ℒ)≤√(ln(n/ε))·λ_n(ℒ) for all 0<ε<1.

§.§ Representations of finite groups

Let G be a finite group. A pair (V,ρ) (V for short) is called a representation of G if V is a linear space and ρ is a group homomorphism from G to GL(V), where GL(V) is the group of all invertible linear transformations of V. If a linear subspace U of V is preserved by the action of every element of G, i.e., ρ(g)u∈U for all g∈G and u∈U, then (U,ρ) (U for short) is called a subrepresentation of V. It follows immediately that every representation (V,ρ) has two trivial subrepresentations: {0} and V itself. V is called irreducible if V has no subrepresentations other than the trivial ones; otherwise, V is called reducible. The dimension of V is called the dimension of the representation.

§.§ Group ring

Let G={g_1,g_2,…,g_n} be a finite group of order n. The elements of the group ring R[G] are formal sums ∑_{i=1}^n r_ig_i, r_i∈R. Addition is defined as ∑_{i=1}^n a_ig_i+∑_{i=1}^n b_ig_i=∑_{i=1}^n(a_i+b_i)g_i. Multiplication is defined as (∑_{i=1}^n a_ig_i)(∑_{i=1}^n b_ig_i)=∑_{r=1}^n(∑_{g_ig_j=g_r}a_ib_j)g_r. Naturally, R[G] can be seen as a free R-module of rank |G|=n, and {g_1,g_2,…,g_n} forms an R-basis. Next, we define the matrix form of the elements of a group ring. Any element 𝔥=∑_{i=1}^n a_ig_i∈R[G] defines a linear transformation on R[G] by its left multiplication law. We denote by ℳ(𝔥) the n×n transformation matrix corresponding to 𝔥 with respect to the basis {g_1,g_2,…,g_n}[The choice of the basis does not essentially affect the subsequent analysis; we fix it here to avoid ambiguity.]. Define the matrix norm ‖𝔥‖_Mat of 𝔥 as the spectral norm of the matrix ℳ(𝔥), i.e., the square root of the largest eigenvalue of ℳ(𝔥)ℳ(𝔥)^T.
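A minimal sketch of ℳ(𝔥) and the matrix norm, assuming the simplest possible group G=ℤ_n — so that left multiplication by a basis element is a cyclic shift and ℳ(𝔥) is a circulant matrix; the coefficient values are arbitrary.

```python
import numpy as np

n = 5
h = np.array([3.0, -1.0, 0.0, 2.0, 1.0])     # h = 3g_1 - g_2 + 2g_4 + g_5

def M(coeffs):
    # column x of M(h) holds the coefficients of h * g_x; for Z_n, multiplying
    # by the x-th basis element cyclically shifts the coefficient vector by x
    return np.column_stack([np.roll(coeffs, x) for x in range(n)])

# left multiplication respects the ring structure: M(h1 * h2) = M(h1) @ M(h2)
h2 = np.array([1.0, 1.0, 0.0, 0.0, -2.0])
assert np.allclose(M(M(h) @ h2), M(h) @ M(h2))

# ||h||_Mat is the spectral norm, i.e. the largest singular value of M(h)
print("||h||_Mat =", np.linalg.norm(M(h), ord=2))
```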
§.§ Semi-direct product

In this paper, we construct finite groups mainly by taking semi-direct products of two cyclic groups. Let G and H be two groups, and let φ:G→Aut(H) be a group homomorphism, where Aut(H) denotes the automorphism group of H. The semi-direct product of G and H induced by φ is defined as the set K={(g,h) | g∈G, h∈H} with the multiplication law “·”: (g_1,h_1)·(g_2,h_2)=(g_1g_2, φ(g_2^{-1})(h_1)·h_2). We denote K with multiplication “·” by G⋉_φH, or simply G⋉H. It should be emphasized that the semi-direct product, as a binary operation on groups, is not commutative. From the definition above, one can verify that H is a normal subgroup of K, while G need not be. Additionally, the semi-direct product G⋉H is not unique, as its structure depends on φ. In particular, if φ is the identity mapping, i.e., φ(g)=𝕀_H for every g∈G, then the semi-direct product degrades to the direct product G×H; in this case, the elements of G and H act “independently” of each other. The dihedral group D_{2n} can be generated by two elements s and t with the relation sts=t^{-1}, where the orders of s and t are 2 and n, respectively. Precisely, D_{2n}:=⟨s,t : s^2=t^n=1, sts=t^{-1}⟩. Then D_{2n} can also be represented as ℤ_2⋉_φℤ_n, where φ(s)=inv and inv maps each element of ℤ_n to its inverse (under addition modulo n). Inspired by Example <ref>, we first give the two types of groups utilized in this paper.

Type I: ℤ_m⋉_φℤ_n. Let m be an even positive integer and n an arbitrary positive integer. Denote by ℤ_m and ℤ_n two cyclic groups with generators s and t, respectively, i.e., ℤ_m=⟨s⟩ and ℤ_n=⟨t⟩. Let σ be the element of the automorphism group Aut(ℤ_n) that maps each element of ℤ_n to its inverse. Define a homomorphism φ:ℤ_m→Aut(ℤ_n) with φ(s)=σ. Induced by φ, we can define ℤ_m⋉_φℤ_n, denoted ℤ_m⋉ℤ_n below: ℤ_m⋉ℤ_n:=⟨s,t : s^m=1, t^n=1, sts^{-1}=t^{-1}⟩.

Type II: ℤ_n^∗⋉_ψℤ_n. Let n be an arbitrary positive integer and let ℤ_n denote a multiplicative cyclic group with generator g. Denote by ℤ_n^∗ the set of all positive integers not greater than n that are coprime with n; for instance, ℤ_{10}^∗={1,3,7,9}. It is easy to verify that ℤ_n^∗ forms a group under multiplication modulo n. It is also known that the automorphism group of ℤ_n is isomorphic to ℤ_n^∗. Thus we have a natural isomorphism ψ mapping ℤ_n^∗ to Aut(ℤ_n)≅ℤ_n^∗, where ψ(a)=ψ_a∈Aut(ℤ_n) and ψ_a(g)=g^a. Similarly to Type I, the semi-direct product of ℤ_n^∗ and ℤ_n induced by ψ can be expressed as ℤ_n^∗⋉ℤ_n:={a⊙g^k : a∈ℤ_n^∗, 0≤k<n}, whose multiplication is induced by ψ and satisfies (a⊙g^{k_1})(b⊙g^{k_2})=ab⊙g^{k_1b^{-1}+k_2} as in Definition <ref>.

In particular, when m=2 in Type I, the group ℤ_m⋉ℤ_n defined by (<ref>) is exactly the dihedral group D_{2n}=ℤ_2⋉ℤ_n. We select these two types of groups mainly because their irreducible representations can be computed via the representation theory of finite groups. We provide two lemmas presenting the irreducible representations of these two types of groups; the proofs, which require some basic facts and techniques from representation theory, are attached in detail in the appendix.
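As a quick sanity check on the Type I group law, the following sketch implements ℤ_m⋉ℤ_n with elements stored as exponent pairs (i,j) representing s^it^j (toy sizes m=4, n=5), verifies the defining relation sts^{-1}=t^{-1}, and brute-forces associativity.

```python
import itertools

# Type I group Z_m ⋉ Z_n with s t s^{-1} = t^{-1}; equivalently, moving s^i
# past t^j flips the sign of the t-exponent once per factor of s:
#   (s^i1 t^j1)(s^i2 t^j2) = s^(i1+i2) t^((-1)^i2 * j1 + j2)
m, n = 4, 5

def gmul(a, b):
    (i1, j1), (i2, j2) = a, b
    return ((i1 + i2) % m, ((-1) ** i2 * j1 + j2) % n)

s, t = (1, 0), (0, 1)
s_inv, t_inv = ((m - 1) % m, 0), (0, (n - 1) % n)
assert gmul(gmul(s, t), s_inv) == t_inv        # defining relation s t s^{-1} = t^{-1}

# associativity check over all triples (the group is small enough)
elems = list(itertools.product(range(m), range(n)))
assert all(gmul(gmul(a, b), c) == gmul(a, gmul(b, c))
           for a in elems for b in elems for c in elems)
print("group law verified for Z_%d ⋉ Z_%d" % (m, n))
```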
Let G_1=ℤ_m⋉_φℤ_n=⟨s⟩⋉_φ⟨t⟩ be a finite group defined by (<ref>), where m=2r is even, and let ω and ξ be m-th and n-th primitive roots of unity, respectively.

(i) If n=2u+1 is odd, then G_1 has m=2r (non-equivalent) 1-dimensional irreducible representations χ_i (i=1,2,…,2r), satisfying χ_i(t)=1, χ_i(s)=ω^i; and ru (non-equivalent) 2-dimensional irreducible representations ρ_{i,j} (i=1,2,…,r, j=1,2,…,u), satisfying ρ_{i,j}(t)=([ ξ^j ; ξ^{n−j} ]), ρ_{i,j}(s)=([ 0 ω^{2i}; 1 0 ]).

(ii) If n=2u is even, then G_1 has 2m=4r (non-equivalent) 1-dimensional irreducible representations χ_i (i=1,2,…,4r), satisfying χ_i(t)=1, χ_i(s)=ω^i for i=1,…,2r, and χ_i(t)=−1, χ_i(s)=ω^i for i=2r+1,2r+2,…,4r; and r(u−1) (non-equivalent) 2-dimensional irreducible representations ρ_{i,j} (i=1,2,…,r, j=1,2,…,u−1), satisfying ρ_{i,j}(t)=([ ξ^j ; ξ^{n−j} ]), ρ_{i,j}(s)=([ 0 ω^{2i}; 1 0 ]).

As for finite groups of Type II, we only provide an upper bound on the dimensions of all irreducible representations for simplicity, which is sufficient for the analysis in this paper. It can be verified that if r is coprime with s, then ℤ_{rs}≅ℤ_r×ℤ_s and ℤ_{rs}^∗≅ℤ_r^∗×ℤ_s^∗. Furthermore, ℤ_{rs}^∗⋉ℤ_{rs}≅(ℤ_r^∗×ℤ_s^∗)⋉(ℤ_r×ℤ_s)≅(ℤ_r^∗⋉ℤ_r)×(ℤ_s^∗⋉ℤ_s); hence it suffices to list all the irreducible representations of ℤ_{p^k}^∗⋉ℤ_{p^k}, where p is a prime and k is a positive integer. By tensoring the irreducible representations of ℤ_{p_i^{k_i}}^∗⋉ℤ_{p_i^{k_i}} for each prime factor p_i in the decomposition n=p_1^{k_1}p_2^{k_2}⋯p_m^{k_m}, i=1,2,…,m, we obtain all irreducible representations of ℤ_n^∗⋉ℤ_n. For groups constructed as semi-direct products of two Abelian groups, all irreducible representations can be determined; refer to <cit.> for detailed information.

Let G_2=ℤ_{p^k}^∗⋉ℤ_{p^k} be defined as in (<ref>), where p is prime and k is a positive integer. Then G_2 only has irreducible representations of dimension at most p^{k−1}. Let n_1,n_2,…,n_s be the dimensions of all (non-equivalent) irreducible representations of G_2. According to representation theory, we have n_1^2+n_2^2+⋯+n_s^2=|G_2|=p^{2k−1}(p−1), and n_i | |G_2| for 1≤i≤s. Thus it follows immediately that n_i≤p^{k−1} for all i.

It is also crucial to analyze the eigenvalues of group ring elements regarded as linear transformations induced by the left multiplication law. In fact, we can compute all eigenvalues of the linear transformations determined by the elements of ℂ[G], where G belongs to Type I or Type II. To perform these computations, we need some fundamental representation theory and techniques.

* Let m be an even positive integer and n an arbitrary positive integer. Consider the group G_1:=ℤ_m⋉_φℤ_n=⟨s⟩⋉_φ⟨t⟩ defined by (<ref>). Let 𝔥=∑_{i=0}^{m−1}s^if_i(t)∈ℂ[G], where each f_i(t) is a polynomial of degree less than n. All the eigenvalues of the matrix ℳ(𝔥) are given by f_0(ξ^j)+ω^if_1(ξ^j)+⋯+ω^{i(m−1)}f_{m−1}(ξ^j), i=0,1,…,m−1, j=0,1,…,n−1, where ω,ξ are m-th and n-th primitive roots of unity, respectively.

* Let p be a prime and k a positive integer. Denote m:=p^k−p^{k−1} and consider the group ℤ_{p^k}^∗⋉_ψℤ_{p^k} defined by (<ref>). Let a be a multiplicative primitive element of ℤ_{p^k}^∗. Let 𝔥=∑_{r=0}^{m−1}∑_{s=0}^{p^k−1}f_{rs}(a^r⊙g^s), where the f_{rs} are complex numbers; then all the eigenvalues of the matrix ℳ(𝔥) are given by ∑_{r=0}^{m−1}∑_{s=0}^{p^k−1}f_{rs}(ω^{ir}·ξ^{js}), i=0,1,…,m−1, j=0,1,…,p^k−1, where ω,ξ are m-th and p^k-th primitive roots of unity, respectively.

It is natural that each element of ℂ[G] determines a left multiplication action on ℂ[G] itself. Here we consider the groups of Type I for instance; the analysis for Type II follows essentially the same approach. Each irreducible subrepresentation of the group G_1 appears as a summand of the left regular representation. By Lemma <ref> and representation theory, the matrix determined by the regular representation over a suitable basis is similar to a block diagonal matrix with blocks of size at most 2, where each block corresponds to an irreducible subrepresentation of G. By Lemma <ref>, we know all irreducible subrepresentations of G; therefore, all the eigenvalues can be calculated by considering the eigenvalues of each subrepresentation.
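The two-dimensional matrices of the lemma above can be verified numerically. The sketch below instantiates ρ_{i,j} for a small Type I group (m=6, n=9, the odd-n case of part (i)) and checks that each candidate satisfies the defining relations s^m=t^n=1 and sts^{-1}=t^{-1}; the sizes are arbitrary.

```python
import numpy as np

m, n = 6, 9                      # Z_m ⋉ Z_n with m = 2r even and n = 2u+1 odd
r, u = m // 2, (n - 1) // 2
omega = np.exp(2j * np.pi / m)   # m-th primitive root of unity
xi = np.exp(2j * np.pi / n)      # n-th primitive root of unity

def rho(i, j):
    # candidate 2-dimensional irreducible representation rho_{i,j}
    T = np.diag([xi ** j, xi ** (n - j)])
    S = np.array([[0, omega ** (2 * i)], [1, 0]])
    return S, T

I2 = np.eye(2)
for i in range(1, r + 1):
    for j in range(1, u + 1):
        S, T = rho(i, j)
        assert np.allclose(np.linalg.matrix_power(S, m), I2)     # s^m = 1
        assert np.allclose(np.linalg.matrix_power(T, n), I2)     # t^n = 1
        assert np.allclose(S @ T @ np.linalg.inv(S), np.linalg.inv(T))  # sts^-1 = t^-1
print("all", r * u, "candidate representations satisfy the relations")
```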
§.§ Ideal lattice and coefficient embedding

A left (integral) ideal ℐ of the group ring ℤ[G] is an additive subgroup of ℤ[G] that is closed under left multiplication by any element of ℤ[G], i.e., 𝔥x∈ℐ holds for any 𝔥∈ℤ[G] and x∈ℐ. A left ideal has a ℤ-basis as a free left ℤ-submodule of rank n. The left inverse of the ideal ℐ is defined as ℐ^{-1}={x∈ℚ[G] | xy∈ℤ[G], ∀y∈ℐ}. An ideal ℐ is referred to as left invertible if its left inverse satisfies ℐ^{-1}ℐ=ℤ[G]. It can be verified that ℐ^{-1} is a left fractional ideal of ℤ[G], which means that there exists t∈ℤ such that tℐ^{-1}⊆ℤ[G]. In algebraically structured LWE, an algebraic number field is typically embedded into the real Euclidean space ℝ^n using the canonical embedding, as described in <cit.>. However, when dealing with group ring LWE, we choose the coefficient embedding for simplicity, due to the relatively complicated non-commutative multiplication. Since the group ring ℤ[G] is a free ℤ-module and the elements of G naturally form a ℤ-basis of ℤ[G], we have an embedding φ:ℤ[G]→ℝ^n, ∑_{i=1}^n a_ig_i↦(a_1,a_2,…,a_n), referred to as the coefficient embedding of ℤ[G]; that is, the mapping embeds the elements of ℤ[G] into ℝ^n according to their coefficients. Moreover, we can extend the domain of φ to the module tensor product ℤ[G]⊗ℝ=ℝ[G]; we denote this extension by φ̅. The extension φ̅:ℝ[G]→ℝ^n is bijective. For simplicity, we also write φ instead of φ̅ in this paper. Under the coefficient embedding φ, any element ∑_{i=1}^n a_ig_i∈ℝ[G] is represented uniquely by a vector in ℝ^n, where n is the rank of ℝ[G] as an ℝ-module. This allows us to define norms on elements of ℝ[G] by taking the corresponding norm of the vector representation: for an element 𝔥∈ℝ[G], its norm is defined as ‖𝔥‖=‖∑_{i=1}^n a_ig_i‖:=‖(a_1,a_2,…,a_n)‖, where ‖·‖ on the right-hand side is any norm on ℝ^n. Furthermore, it is easily verified that the coefficient embedding φ maps any (fractional) ideal of the ring to a full-rank discrete additive subgroup of ℝ^n, which is consequently a lattice; here n is both the rank of the group ring and the dimension of the lattice. Such lattices induced by a fractional ideal are commonly referred to as ideal lattices. Every (left, right, two-sided/fractional) ideal ℐ of ℤ[G] can be viewed as a lattice in ℝ^n under the coefficient embedding. Moreover, if {u_1,u_2,…,u_n} forms a ℤ-basis of the ideal ℐ, then {φ(u_1),φ(u_2),…,φ(u_n)}⊂ℝ^n forms a basis of the lattice induced by ℐ. Therefore, we can always identify ℐ with a lattice in ℝ^n.
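The following sketch illustrates the coefficient embedding and the resulting ideal lattice in the simplest possible setting: the commutative group ring ℤ[ℤ_4]≅ℤ[x]/⟨x^4−1⟩ and the principal ideal generated by the arbitrary element g=2+x. The rows of the basis matrix are the embeddings φ(x^i·g), and membership of a product h·v in the lattice shows up as integrality of its coordinates.

```python
import numpy as np

n = 4
g = np.array([2, 1, 0, 0])          # g = 2 + x, as its coefficient vector phi(g)

def mul(a, b):
    # multiplication in Z[x]/(x^n - 1): the group ring law for Z_n
    c = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

unit = np.eye(n, dtype=int)
basis = np.stack([mul(unit[i], g) for i in range(n)])   # rows: phi(x^i * g)
print("ideal lattice basis (rows):\n", basis)
print("det of the ideal lattice:",
      abs(round(np.linalg.det(basis.astype(float)))))   # 15 for this g

# closure under multiplication: h*v stays in the lattice for h in Z[G], v in the ideal
h, v = np.array([1, -1, 0, 2]), mul(unit[2], g)
coords = np.linalg.solve(basis.T.astype(float), mul(h, v).astype(float))
print("coordinates of h*v in the ideal basis:", np.round(coords, 8))  # integers
```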
Throughout the study of the ideal lattices of a group ring, it is crucial to consider the relationship between the dual ideal lattice ℐ^∨ and the inverse ideal ℐ^{-1}. In the following lemma, we primarily consider the two types of groups defined by (<ref>) and (<ref>). We show that the dual ideal lattice and the inverse ideal lattice of the same invertible ideal are equivalent, up to a specific permutation of the coordinates. The lemma generalizes Lemma 3 of <cit.> to Type I and Type II; the proof is attached in the appendix. For any invertible (right) ideal ℐ of ℤ[G], where G is any group of Type I or Type II, let ℐ^{-1} be the left inverse of ℐ. Then the dual lattice ℐ^∨ of ℐ (under the coefficient embedding) and ℐ^{-1} coincide up to a rearrangement of the coordinates.

In the following lemma, we establish the connection between the ℓ_2 norm of elements of ℝ[G], as defined above, and the matrix norm ‖·‖_Mat. Let ℝ[G] be a group ring where G is a finite group of order n. If 𝔥∈ℝ[G] is sampled from D_r (meaning that every coefficient of 𝔥 is sampled independently from the one-dimensional Gaussian D_r), then the matrix norm of 𝔥 is at most nr except with negligible probability. We bound the matrix norm of 𝔥 by relating it to the ℓ_2 norm. Since the group elements of G inherently form an ℝ-basis of ℝ[G], we obtain a transformation matrix ℳ(𝔥) representing the left multiplication by 𝔥 over this basis. For any element τ∈ℝ[G] with ℓ_2-norm equal to 1, denote its coefficient vector by x, so that ‖x‖_{ℓ_2}=1. Considering the ℓ_2-norm of ℳ(𝔥)·x∈ℝ^n, we have ‖ℳ(𝔥)·x‖_{ℓ_2}≤√n·‖𝔥‖_{ℓ_2}‖x‖_{ℓ_2} by the Cauchy–Schwarz inequality. From Lemma <ref>, the ℓ_2 norm of 𝔥 is less than r√n except with probability exponentially close to 0. Therefore, ‖ℳ(𝔥)·x‖_{ℓ_2}≤nr, which means ‖𝔥‖_Mat≤nr except with negligible probability. Applying the same process as in Lemma 10 of <cit.>, one can obtain a tighter bound than the one given by Lemma <ref> in this paper when G is restricted to a specific group. It should be pointed out that for the group G=ℤ_m⋉ℤ_n of Type I, the matrix norm of elements of ℝ[G] sampled from D_r can be bounded by ω(√(log|G|))·√(|G|)·r, which is asymptotically smaller than the bound |G|·r obtained from Lemma <ref>. In this paper, however, we use the more general bound of Lemma <ref> for generality.

§.§ Lattice problems

We introduce some important and useful problems that are believed to be computationally hard; they are commonly used to characterize the hardness of LWE variants. These problems include the Shortest Vector Problem (SVP), the Shortest Independent Vectors Problem (SIVP), the Closest Vector Problem (CVP), and Bounded Distance Decoding (BDD). Let ℒ be an n-dimensional lattice and let γ=γ(n)≥1. The SVP_γ problem in the given norm is: find some nonzero vector v∈ℒ such that ‖v‖≤γ·λ_1(ℒ). The SIVP_γ problem is to find n linearly independent vectors in ℒ whose norms are all at most γ·λ_n(ℒ). Let ℒ be an n-dimensional lattice and let γ≥1. The CVP_γ problem in the given norm is: given a target vector t∈ℝ^n (which may not be a lattice vector), find some vector v∈ℒ such that ‖v−t‖≤γ·‖v′−t‖ for any lattice vector v′. The following problem is a variant of CVP in which the distance between the target vector and the lattice is bounded. Let ℒ⊂ℝ^n be a lattice, and let d<λ_1(ℒ)/2. The BDD_{ℒ,d} problem in the given norm is: given ℒ and t of the form t=v+e for some v∈ℒ and e∈ℝ^n with ‖e‖≤d, find v. Another problem, called the Gaussian Decoding Problem (GDP), is essentially BDD where the offset is sampled from a Gaussian distribution: for a lattice ℒ⊂ℝ^n and a Gaussian parameter g>0, the GDP_{ℒ,g} problem is: given a coset e+ℒ where e is sampled from D_g, find e.
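For intuition about such decoding problems, the sketch below runs Babai's round-off algorithm v=B⌊B^{-1}t⌉ on an arbitrary toy basis; a variant of this simple procedure underlies the polynomial-time BDD solver stated in the next lemma.

```python
import numpy as np

# Babai's round-off decoder: given a basis B (columns) and a target t = v + e
# with a sufficiently short offset e, recover the lattice vector v
B = np.array([[7.0, 1.0], [2.0, 5.0]])      # toy basis; columns are basis vectors
v = B @ np.array([3.0, -2.0])               # a lattice vector
e = np.array([0.05, -0.08])                 # short offset
t = v + e

v_rec = B @ np.round(np.linalg.solve(B, t))  # v = B * round(B^{-1} t)
print("recovered:", v_rec, "success:", np.allclose(v_rec, v))
```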
The following lemma states that if the bound d in BDD is sufficiently small compared to the length of the shortest nonzero vector of a lattice ℒ, then there exists an efficient algorithm (running in time poly(n), where n is the dimension of the lattice) that solves BDD_{ℒ,d}: there is a polynomial-time algorithm that solves BDD_{ℒ,d} for d=2^{−n/2}λ_1(ℒ). The following problem, called the Discrete Gaussian Sampling (DGS) problem, is needed in the reductions of Section <ref> and Section <ref>. Let R be a ring. R-DGS_γ asks, given an (invertible) ideal ℐ of R and a real number s≥γ=γ(ℐ), to produce samples from the distribution D_{ℐ,s}. In the following, we only consider the above problems in the context of (invertible) ideal lattices of a given group ring.

§.§ Natural inclusion mapping

When establishing reductions from hard problems in ideal lattices, it is common to consider the relationship between different (ideal) lattices. Let q be a positive integer and ℒ a lattice, and denote by ℒ_q the quotient ℒ/qℒ. For any lattices ℒ′⊆ℒ, the natural inclusion map is defined as φ:ℒ′_q→ℒ_q, mapping x+qℒ′ to x+qℒ.[It can be verified that the natural inclusion map is well-defined under the condition ℒ′⊆ℒ.] The natural inclusion map can be viewed as the composition of the natural homomorphism ℒ/qℒ′→(ℒ/qℒ′)/(qℒ/qℒ′)=ℒ/qℒ with the inclusion ℒ′/qℒ′→ℒ/qℒ′. The following lemma, an important result from <cit.>, describes under what condition the natural inclusion map φ is a bijection; we include the proof for completeness. Let ℒ′⊆ℒ be n-dimensional lattices and q a positive integer. Then the natural inclusion map φ:ℒ′_q→ℒ_q is a bijection if and only if q is coprime with the index |ℒ/ℒ′|; in this case, φ is efficiently computable and invertible given an arbitrary basis of ℒ′ relative to a basis of ℒ. Let B and B′ be bases of ℒ and ℒ′ as free ℤ-modules, respectively; it is straightforward to see that B and B′ are also bases of ℒ_q and ℒ′_q as free ℤ_q-modules. Since ℒ′ is a subset of ℒ, there exists a square matrix T∈ℤ^{n×n} such that B′=T·B. Let z′∈ℤ_q^n be the coefficient vector of some x′∈ℒ′_q with respect to the ℤ_q-basis B′. We have x′=⟨B′,z′⟩=⟨T·B,z′⟩=⟨B,T^t·z′⟩. Thus z=T^t·z′∈ℤ_q^n is the coefficient vector of the image φ(x′)∈ℒ_q with respect to the ℤ_q-basis B. From the above, the natural inclusion map φ is determined exactly by left multiplication by T^t; hence φ is bijective if and only if T is invertible over ℤ_q, if and only if q is coprime with |det(T)|=|ℒ/ℒ′|. Furthermore, by computing T, we can evaluate φ and φ^{-1} efficiently. We can also apply Lemma <ref> to the dual lattices: as |ℒ/ℒ′|=|(ℒ′)^∨/ℒ^∨|, the map ψ:(ℒ^∨)_q→((ℒ′)^∨)_q, ψ(x+qℒ^∨)=x+q(ℒ′)^∨, is also a bijection under the same condition as in Lemma <ref>.
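The bijectivity criterion can be checked by brute force on a toy pair of lattices. The sketch below takes ℒ=ℤ² and ℒ′=Tℤ² (so |ℒ/ℒ′|=|det T|=6) and enumerates the induced map on coefficient vectors for a coprime and a non-coprime modulus; it uses the column convention z′↦Tz′, which differs from the transpose used in the proof above only by a choice of convention.

```python
import itertools
import numpy as np

T = np.array([[2, 1], [0, 3]])   # L' = T * Z^2 inside L = Z^2; |L/L'| = det T = 6

def image_size(q):
    # enumerate the natural inclusion map on coefficient vectors mod q
    domain = list(itertools.product(range(q), repeat=2))
    images = {tuple((T @ np.array(z)) % q) for z in domain}
    return len(images), len(domain)

hit, total = image_size(5)       # gcd(5, 6) = 1: the map should be a bijection
print("bijective for q = 5:", hit == total)    # True

hit, total = image_size(3)       # gcd(3, 6) = 3: the map cannot be injective
print("bijective for q = 3:", hit == total)    # False
```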
§.§ Group ring LWE

To prevent potential attacks that exploit one-dimensional (irreducible) representations of the group G, one can use the quotient of ℤ[G] modulo the sum of the ideals associated with the corresponding information-leaking representations. According to the Artin–Wedderburn theorem, the group algebra ℂ[G] decomposes uniquely into a direct sum of simple ideals (which are also simple left ℂ[G]-modules); moreover, these ideals are isomorphic to matrix rings over ℂ. Precisely, ℂ[G]≅M_{n_1}(ℂ)⊕M_{n_2}(ℂ)⊕⋯⊕M_{n_r}(ℂ), where M_{n_i}(ℂ)≅ℂ^{n_i×n_i} denotes the ring of all n_i×n_i matrices over ℂ. For each i, M_{n_i}(ℂ) is a simple ideal corresponding to an n_i-dimensional irreducible representation of G, and r equals the number of non-equivalent irreducible representations of G. For instance, consider the group G_1=ℤ_m⋉ℤ_n defined by (<ref>), with m=2r and u as in Lemma <ref>. According to Lemma <ref>, the group algebra ℂ[G_1] can be decomposed as follows: ℂ[G_1]≅(⊕_{i=1}^{m}ℂ)⊕(⊕_{j=1}^{ru}ℂ^{2×2}) if n is odd, and ℂ[G_1]≅(⊕_{i=1}^{2m}ℂ)⊕(⊕_{j=1}^{r(u−1)}ℂ^{2×2}) if n is even.

From representation theory, each direct summand in the Artin–Wedderburn decomposition of ℂ[G] above is a minimal principal ideal of ℂ[G], generated by a so-called central primitive idempotent. <cit.> provides a method to determine all central primitive idempotents of the group algebra ℂ[G] when G is a semi-direct product of two finite Abelian groups. Consequently, we can easily compute the simple ideals of ℂ[G] corresponding to the one-dimensional representations of G when G is of Type I or Type II. According to the Artin–Wedderburn theorem, we can decompose ℂ[D_{2n}] into a direct sum of simple ideals: when n is even, ℂ[D_{2n}]≅(⊕_{i=0}^{3}ℂ)⊕(⊕_{i=4}^{(n+4)/2}ℂ^{2×2}). By <cit.>, the central primitive idempotents corresponding to the one-dimensional representations of D_{2n} are (1/2n)(1+t+t^2+⋯+t^{n−1})(1±s) and (1/2n)(1−t+t^2−⋯−t^{n−1})(1±s). Each of these four idempotents generates a simple ideal, and each of these ideals is contained in ⟨t^{n/2}+1⟩ when 4|n. Thus we can assert that ℂ[D_{2n}]/⟨t^{n/2}+1⟩ eliminates the potential attacks that make use of the 1-dimensional representations of D_{2n}. When n is odd, there are two one-dimensional irreducible representations of D_{2n}, with corresponding central primitive idempotents (1/2n)(1+t+t^2+⋯+t^{n−1})(1±s). In this case, we can select ℂ[D_{2n}]/⟨1+t+t^2+⋯+t^{n−1}⟩ for our purpose.

Here we describe the LWE problem over group rings, using the following notation. Let G be a finite group and let R be the group ring ℤ[G] itself or one of its quotient rings. For an integer modulus q≥2, let R_q denote the quotient ring R/qR; likewise, for any (fractional) ideal ℐ of R, let ℐ_q denote ℐ/qℐ. We also write R_ℝ for the tensor product R⊗ℝ and let 𝕋:=R_ℝ/R. For a secret element s∈R_q and an error distribution ψ over R_ℝ, a sample from the LWE distribution A_{s,ψ} over R_q×𝕋 is generated by sampling a∈R_q uniformly at random and e←ψ, and then outputting (a, (s·a)/q+e mod R). Let Ψ be a family of distributions over R_ℝ. The search version of the group ring LWE problem asks to find the secret element s∈R_q, given arbitrarily many independent samples (a_i,b_i)∈R_q×𝕋 from the LWE distribution A_{s,ψ}, where ψ is a distribution in Ψ. We denote this search problem by R-LWE_{q,Ψ}. Let Υ be a distribution over a family of distributions, each over R_ℝ. The (average-case) decision version of the group ring LWE problem asks to distinguish, with non-negligible advantage, between polynomially many samples from the LWE distribution A_{s,ψ} for a uniformly random (s,ψ)←R_q×Υ, and the same number of uniformly random and independent samples from R_q×𝕋. We denote this decision problem by R-LWE_{q,Υ}.
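A minimal sketch of drawing samples from A_{s,ψ}. As before it substitutes the commutative toy ring ℤ_q[x]/⟨x^N+1⟩ for the group ring, so that (s·a)/q+e mod R becomes a coordinate-wise computation modulo 1; the width parameter and sizes are arbitrary illustrative choices.

```python
import math, random

N, Q = 8, 257

def ring_mul(f, g):
    # negacyclic convolution in Z_Q[x]/(x^N + 1), standing in for the group ring law
    h = [0] * N
    for i in range(N):
        for j in range(N):
            k, sign = (i + j) % N, (1 if i + j < N else -1)
            h[k] = (h[k] + sign * f[i] * g[j]) % Q
    return h

def lwe_sample(s, alpha):
    # one sample (a, b) with b = (s*a)/q + e mod 1; the Gaussian D_alpha with
    # density exp(-pi x^2/alpha^2) has standard deviation alpha/sqrt(2*pi)
    a = [random.randrange(Q) for _ in range(N)]
    e = [random.gauss(0, alpha / math.sqrt(2 * math.pi)) for _ in range(N)]
    b = [((c / Q) + ei) % 1.0 for c, ei in zip(ring_mul(s, a), e)]
    return a, b

s = [random.randrange(Q) for _ in range(N)]
samples = [lwe_sample(s, alpha=0.005) for _ in range(3)]
print(samples[0])
```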
§.§ Oracle hidden center problem

In the reduction to decision R-LWE, it is necessary to analyze the properties of the given R-LWE oracle. In <cit.>, Peikert et al. provided a direct reduction from a worst-case lattice problem to decision Ring-LWE. This reduction possesses tighter parameters and more compact error rates than those of <cit.>, where the authors established hardness by additionally reducing search Ring-LWE to decision Ring-LWE. The technique introduced in <cit.> exploits the properties of a suitable decision oracle, which is abstracted as an “oracle with a hidden center”. Such an oracle makes it possible to solve the BDD problem in ideal lattices as needed in the proof of the reduction, thus facilitating the hardness proof. For any ε,δ∈[0,1) and β≥1, the (ε,δ,β)-OHCP is an approximate search problem defined as follows. An instance consists of a scale parameter d>0 and a randomized oracle 𝒪:ℝ^k×ℝ^{≥0}→{0,1} which satisfies, for any input (z,t) with ‖z−z^∗‖≤βd, Pr(𝒪(z,t)=1)=p(t+log‖z−z^∗‖), for some (unknown) “hidden center” z^∗∈ℝ^k with δd≤‖z^∗‖≤d and some (unknown) function p. The goal is to output some z̃∈ℝ^k such that ‖z̃−z^∗‖≤εd. In <cit.>, Peikert et al. showed that there is an efficient algorithm solving OHCP whenever the oracle of the instance satisfies certain conditions. Specifically: there is a poly(κ,k)-time algorithm that takes as input a confidence parameter κ≥20log(k+1) together with the scale parameter d>0 and solves (exp(−κ),exp(−κ),1+1/κ)-OHCP in dimension k with acceptance probability greater than 1−exp(−κ), provided that the oracle 𝒪 corresponding to the OHCP instance satisfies the following conditions for some p_∞∈[0,1] and s^∗≥0: * p(s^∗)−p_∞≥1/κ; * |p(s)−p_∞|≤2exp(−s/κ) for any s; * p(s) is κ-Lipschitz in s, i.e., |p(s_1)−p(s_2)|≤κ|s_1−s_2| for all s_1,s_2; where p(s) is the acceptance probability of 𝒪 on input (0,s).
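The following toy simulation illustrates the mechanism in one dimension. The hypothetical oracle accepts with probability p(t+log|z−z^∗|) for p(s)=exp(−exp(s)) — an arbitrary choice consistent with the definition, not the function arising in the actual reduction — and a simple noisy hill-climb on the empirical acceptance rate homes in on the hidden center.

```python
import math, random

z_star = 0.618                    # the hidden center, unknown to the search below

def oracle(z, t):
    # accepts with probability exp(-exp(t) * |z - z_star|) = p(t + log|z - z_star|)
    return random.random() < math.exp(-math.exp(t) * abs(z - z_star))

def acc_rate(z, t, trials=3000):
    return sum(oracle(z, t) for _ in range(trials)) / trials

# noisy hill-climb: moving z toward z_star increases the acceptance rate,
# so keep whichever of {z - step, z, z + step} empirically scores best
z, step = 0.0, 0.5
for _ in range(30):
    z = max([z - step, z, z + step], key=lambda c: acc_rate(c, t=1.0))
    step *= 0.85
print(f"estimate {z:.3f} vs hidden center {z_star:.3f}")
```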
§ THE HARDNESS OF SEARCH LWE

For G_1=ℤ_m⋉ℤ_n=⟨s⟩⋉⟨t⟩ of Type I with m an even integer and 4|n, we choose the group ring R^{(1)}=ℤ[G_1]/⟨t^{n/2}+1⟩. The reason for selecting this group is the same as mentioned in <cit.>: this ring has no direct summands corresponding to one-dimensional representations, thereby ensuring its resistance against the aforementioned potential attacks. Similarly, for G_2=ℤ_{p^k}^∗⋉ℤ_{p^k} of Type II, where we denote the generator of ℤ_{p^k} by g, we select the group ring R^{(2)}=ℤ[G_2]/⟨1+g+g^2+⋯+g^{p^k−1}⟩.

§.§ Main result

We claim that for the rings R^{(1)} and R^{(2)} (or, more generally, for group rings whose dual ideal and inverse ideal are equivalent up to a certain permutation under the coefficient embedding), the hardness of the search LWE problems over them is based on the hardness of finding short vectors in related ideal lattices, similarly to the reduction in <cit.>. To be specific, we state the results as follows. From this section on, we denote by ω(f(n)) some fixed function that grows asymptotically faster than f(n). Additionally, for a positive real α we define the family Ψ_{≤α} as the set of all elliptical Gaussian distributions D_r with each coordinate r_i≤α. [Theorem] Let R=ℤ[G] be a group ring where G is of Type I or Type II with n elements. Let α=α(n)>0, and let q=q(n)≥2 be such that αq≥2n. For some negligible ε=ε(n), there is a probabilistic polynomial-time quantum reduction from R-DGS_γ (and hence from SIVP with approximation factor Õ(n^{3/2}/α)) to (search) R-LWE_{q,Ψ_{≤α}}, where γ=max{η_ε(ℐ)·(√2·n/α), 2√n/λ_1(ℐ^∨)}.

Recall that the R-DGS problem asks to sample efficiently from the discrete Gaussian distribution over the (invertible) ideal ℐ. In <cit.>, the author presented a direct reduction from standard lattice problems to the DGS problem; combining this result, Theorem <ref> accomplishes the reduction from lattice problems to search LWE. To be precise, based on Lemma <ref> and Claim 2.13 in <cit.>, we know that 1/λ_1(ℐ^∨)≤η_ε(ℐ)≤λ_n(ℐ)·ω(√(log n)) for any ideal lattice ℐ in ℝ^n. By Lemma <ref>, the ℓ_2-norm of samples from D_{ℐ,γ} is at most γ√n except with negligible probability. Consequently, we can use the outputs of DGS_γ as a solution of SIVP with approximation factor Õ(n^{3/2}/α) when α is restricted to be at most √n. This problem is believed to be computationally hard, and the restriction is always satisfied so as to keep the problem information-theoretically solvable. For the sake of completeness, we prove the results for the group rings ℤ[G_1] and ℤ[G_2] rather than the quotient rings R^{(1)} and R^{(2)}; in fact, using the same procedure and the fundamental homomorphism theorem, the same results can be proved for R^{(1)} and R^{(2)}. It is also worth noting that, by applying the results mentioned in Remark <ref>, the approximation factor can be further optimized to Õ(n/α) for these two particular groups. According to Lemma 3.2 in <cit.>, one can sample efficiently from D_{ℐ,r} for sufficiently large r, say r>2^{2n}λ_n(ℐ). In this case, polynomially many samples from D_{ℐ,r} can be generated by the following steps: first, generate a sample y from the (continuous) Gaussian distribution D_r using the standard method, then output y−(y mod B)∈ℐ, where B is a basis of ℐ. Next, we repeatedly apply the reduction specified by Lemma <ref> below (still polynomially many times), which allows us to sample from discrete Gaussian distributions with narrower and narrower parameters. Since αq≥2n, the iterative steps enable us to sample from D_{ℐ,r/2} given polynomially many samples from D_{ℐ,r}. The iteration continues until the Gaussian parameter reaches the desired value s≥γ; finally, the procedure ends with one (or more) samples from D_{ℐ,s}.

§.§ The iterative step

The proof of the iterative step basically follows the procedure outlined in <cit.> and <cit.>; the reduction applies iterative steps repeatedly to achieve the goal. The iterative step states that when the initial Gaussian parameter r is sufficiently larger than the smoothing parameter, we can sample efficiently from another discrete Gaussian distribution with a narrower parameter, say r/2. [Lemma] Let R=ℤ[G] be a group ring where G is of Type I or Type II with n elements. Let α>0 and let q>2 be an integer. There exists an efficient quantum algorithm that, given an invertible ideal ℐ in R such that the index |R/ℐ| is coprime with q, a real number r≥√2·q·η_ε(ℐ) for some negligible ε=ε(n)>0 such that r′:=rn/(αq)>2√n/λ_1(ℐ^∨), an oracle for R-LWE_{q,Ψ_{≤α}}, and a list of samples from the discrete Gaussian distribution D_{ℐ,r} (as many as required by the R-LWE oracle), outputs an independent sample from D_{ℐ,r′}.

The iterative step stated above can be partitioned into two parts, as in <cit.>. The first part of the iteration is classical: it shows that, given a search R-LWE oracle, we can solve BDD on ℐ^∨ using the given discrete Gaussian samples. The proof of this part (Lemma <ref>) is presented in Section 3.3. [Lemma] Let ε=ε(n) be some negligible function, let q>2 be an integer, and let α∈(0,1) be a real number. Let R=ℤ[G] be a group ring where G is of Type I or Type II with n elements, and let ℐ be an invertible ideal in R such that the index |R/ℐ| is coprime with q. Given an oracle for the discrete Gaussian distribution D_{ℐ,r}, where r≥√2·q·η_ε(ℐ), there is a probabilistic polynomial-time (classical) reduction from BDD_{ℐ^∨,d/n} to R-LWE_{q,Ψ_{≤α}}, where d=αq/(√2·r). The second part of the iteration was initially proposed by <cit.> and later improved by <cit.>; it is worth noting that this part is the only quantum component of the whole reduction. The lemma below states that we can use a BDD oracle to sample polynomially many lattice vectors of narrower width.
By employing Lemma <ref>, we can derive essentially the same lemma as in <cit.>: there is an efficient quantum algorithm that, given any n-dimensional lattice ℒ, a number d′<λ_1(ℒ^∨)/2 (where λ_1 is in the ℓ_2-norm), and an oracle that solves BDD_{ℒ^∨,d′/√(2n)}, outputs a sample from D_{ℒ,√n/(√2·d′)}. Combining the results of Lemma <ref> and Lemma <ref>, we can prove Lemma <ref> as follows. By Lemma <ref>, given samples from D_{ℐ,r} and an oracle for search R-LWE_{q,Ψ_{≤α}}, we can solve the BDD_{ℐ^∨,d/n} problem with parameter d=αq/(√2·r). By Lemma <ref>, setting d/n=d′/√(2n), we have d′=√2·d/√n=√n/r′<λ_1(ℐ^∨)/2, where the last inequality comes from the condition mentioned in Lemma <ref>. Thus we obtain samples from D_{ℐ,r′}.

§.§ The BDD to search LWE reduction

In this section, our goal is to prove Lemma <ref>, which means providing a reduction from the BDD problem in ideal lattices to LWE. In <cit.>, it has been proven that to solve BDD in a lattice ℒ, it is sufficient to find a close vector modulo q. We present a special case of Lemma 3.5 in <cit.> as follows, and the proof follows essentially the same approach. For any q≥2, there is a deterministic polynomial-time reduction from BDD_{ℒ,d} (in the matrix norm) to q-BDD_{ℒ,d} (in the same norm). When dealing with the coefficient embedding, it is common to consider sampling vectors from an (ideal) lattice according to a discrete Gaussian distribution. In the work of <cit.>, the spherical Gaussian distribution was considered, and later in <cit.> this distribution was generalized to non-spherical Gaussians. However, in the case of group ring LWE, especially with the coefficient embedding, an even more general Gaussian is required: its covariance matrix should be an arbitrary positive definite matrix rather than a diagonal one. According to Lemma <ref>, it suffices to give a reduction from q-BDD to LWE. Before showing this reduction, it is necessary to introduce some lemmas concerning smoothing parameters. The smoothing parameter characterizes how closely a discrete Gaussian over a lattice behaves like a continuous Gaussian. As a generalization of Claim 3.9 of <cit.>, we provide the following lemma and corollary, illustrating that when a discrete Gaussian is added to a continuous Gaussian, the sum “acts like” a continuous Gaussian with the same covariance matrix, up to a negligible statistical distance, under a certain “smoothness condition”. [Lemma] Let ℒ be a lattice in ℝ^n. Let S, H be two fixed non-singular matrices. Assume that the smoothness condition ∑_{y∈ℒ^∨∖{0}}exp(−π·y^t(S^{-t}S^{-1}+(1/σ^2)·H^tH)^{-1}y)≤ε holds for some negligible ε. Let v be distributed as the discrete Gaussian D_{ℒ+u,S} for arbitrary u∈ℝ^n, and let e′∈ℝ^n be distributed as the n-dimensional spherical (continuous) Gaussian D_σ. Then the distribution of H·v+e′ is within statistical distance 4ε of the Gaussian distribution D_Σ, where Σ=HSS^tH^t+σ^2·I_n (so that the covariance matrix is (1/2π)·HSS^tH^t+(σ^2/2π)·I_n). By applying this lemma to a fractional ideal in ℤ[G] (with elements under the coefficient embedding), we obtain the following corollary. Let G be a finite group of order n, and let ℐ be an arbitrary fractional ideal in the group ring ℤ[G]. Let 𝔥 be some element of ℝ[G] and α=‖𝔥‖_Mat. Let r,s>0 be two reals and t=1/√(1/r^2+α^2/s^2). Assume that the smoothness condition ∑_{y∈ℐ^∨∖{0}}exp(−π t^2·‖y‖^2)≤ε holds for some negligible ε=ε(n)>0. Let v be distributed as D_{ℐ+u,r} for arbitrary u∈ℝ[G], and let e be sampled from the n-dimensional Gaussian D_s. Then the distribution of 𝔥·v+e belongs to the n-dimensional family Ψ_{≤√(r^2α^2+s^2)} (under some unitary basis transformation).
Under the coefficient embedding φ, each element of the group ring ℝ[G] is mapped to a vector in ℝ^n. Let ℳ(𝔥) be the matrix representation of 𝔥; then the element 𝔥v+e is mapped to ℳ(𝔥)φ(v)+φ(e)∈ℝ^n. Setting S=r·I_n and H=ℳ(𝔥), the largest singular values of S^{-1} and H are 1/r and α, respectively. Thus the smallest eigenvalue of (S^{-t}S^{-1}+(1/s^2)·H^tH)^{-1} is at least 1/(1/r^2+α^2/s^2)=t^2. Consequently, y^t(S^{-t}S^{-1}+(1/s^2)·H^tH)^{-1}y≥t^2‖y‖^2 holds for any y∈ℝ^n, which means the smoothness condition of Lemma <ref> is satisfied. Through a certain unitary basis transformation, we obtain an alternative ℝ-basis of ℝ[G]; by adjusting the basis properly, the n coefficients of H·φ(v)+φ(e) are distributed according to a Gaussian distribution with a diagonal covariance matrix in which the absolute value of each diagonal element is at most √(r^2α^2+s^2).

Based on the proof above, it can be observed that if the eigenvalues of ℳ(𝔥) are λ_1,λ_2,…,λ_n, then we can perform a (known) unitary basis transformation converting 𝔥v+e into a sample from the diagonal Gaussian distribution ∏_{i=1}^n D_{√(r^2λ_i^2+s^2)}. Thus, for group rings whose underlying finite group is of Type I or Type II, we can compute the resulting diagonal Gaussian distribution by combining the results of Lemma <ref>. To achieve this goal, we introduce the following lemma, which shows how to convert a q-BDD instance into an LWE instance. As mentioned in <cit.>, it suffices to use an R-LWE_{q,Ψ_{≤α}} oracle to solve an instance of R-LWE_{q,Ψ_{≤β}} efficiently for any β≤α, even without knowing the exact value of β. Therefore, it is unnecessary to compute all the entries of the covariance matrix of 𝔥·v+e; we only need an upper bound on the eigenvalues of the covariance matrix. The following lemma plays a crucial role in the reductions for both the search version and the decision version of LWE: it states that, under certain mild conditions, there exists an efficient algorithm that, given a BDD instance, generates an LWE sample. [Lemma] Let R=ℤ[G] be a group ring, where G is a group of Type I or Type II with n elements. Let α>0 be a real, let q>2 be an integer, and let r>√2·q·η_ε(ℐ) be a real. Then, given an invertible ideal ℐ of R with index |R/ℐ| coprime with q, there exists an efficient algorithm that, given an instance of BDD_{ℐ^{-1},d} (in the matrix norm), where ℐ^{-1} is the left inverse of ℐ, and samples from D_{ℐ,r}, outputs samples having the LWE distribution A_{s,ψ} (up to negligible statistical distance) for some ψ∈Ψ_{≤α}, where d=αq/(√2·r). [Proof] Consider a BDD_{ℐ^{-1},d} instance y=x+e, where x is an element of ℐ^{-1} and the matrix norm of e is bounded by d. We construct an LWE sample as follows. First, sample z←D_{ℐ,r} using the discrete Gaussian oracle. Since r exceeds the smoothing parameter of qℐ, z mod qℐ is almost uniformly distributed over ℐ_q (up to negligible statistical distance). According to Lemma <ref>, the natural inclusion map φ:ℐ/qℐ→R/qR is a bijection. Thus a:=φ(z mod qℐ)=z mod qR, which is also uniform over R_q. Moreover, we have another natural inclusion map ρ:R/qR→ℐ^{-1}/qℐ^{-1}, where ℐ^{-1} is the left inverse of ℐ. We construct the element b=(y·z)/q+e′ mod R=(x·z)/q+(e·z)/q+e′ mod R∈(1/q)·(R/qR), where e′ is an error sampled from the continuous Gaussian D_{α/√2}. We claim that the pair (a,b) is an LWE sample. We first consider the element x·z mod qR.
From the property of the natural inclusion map, there exists a unique s̅=s+qR∈R/qR such that ρ(s̅)=s mod qℐ^{-1}=x+qℐ^{-1}. Note that x·z+qR=(x+qR)(z+qR)=ρ^{-1}(x+qℐ^{-1})·φ(z+qℐ)=(s+qR)(z+qR)=s̅·a̅; we thus obtain x·z=s·a mod qR. It remains to analyze the covariance matrix of (e·z)/q+e′. Since ‖e‖_Mat≤α·q/(√2·r) and z is distributed as D_{ℐ,r}, it can be verified that the smoothness condition holds: ∑_{y∈ℐ^{-1}∖{0}}exp(−π t^2‖y‖^2)=∑_{y∈ℐ^{-1}∖{0}}exp(−π·(r^2/(2q^2))·‖y‖^2)≤ε, where t=1/√((q/r)^2+(q/r)^2)=r/(√2·q) as in Lemma <ref>. Hence (e·z)/q+e′ is distributed as some ψ∈Ψ_{≤α}. From the reduction above, we can obtain an LWE sample given samples from D_{ℐ,r} with r exceeding the smoothing parameter of ℐ. By employing the search R-LWE oracle, we can recover s mod qR except with negligible probability, allowing us to use the bijective mapping from R/qR to ℐ^{-1}/qℐ^{-1} to address the BDD problem. Thus, we can prove Lemma <ref>. We can observe that an instance of BDD_{ℐ^{-1},d/n} can be inherently regarded as an instance of BDD_{ℐ^{-1},d} in the matrix norm. According to Lemma <ref>, we can convert the instance into an LWE sample (a,b). After inputting (a,b) into the given search R-LWE oracle, we get a solution s̅∈R/qR. Next, we calculate ρ(s̅)∈ℐ^{-1}/qℐ^{-1}, which equals x mod qℐ^{-1}. Consequently, we obtain a solution of q-BDD_{ℐ^{-1},d/n} for the instance y. Additionally, we have exploited the mild properties of the groups of Type I and Type II: specifically, the dual ideal ℐ^∨ and the inverse ideal ℐ^{-1} (if it exists) of ℤ[G] are equivalent up to a (known) permutation, which means BDD_{ℐ^∨,d/n} and BDD_{ℐ^{-1},d/n} are essentially equivalent. Hence we have proven the lemma.

§ HARDNESS OF DECISION LWE

Having given the reduction from a worst-case lattice problem to search LWE, Regev <cit.> also established a reduction from search LWE to (average-case) decision LWE, which provides the basis for hardness in the decisional setting. Similarly, Lyubashevsky et al. <cit.> also presented such reductions in the cyclotomic ring setting. However, these reductions from search Ring-LWE to decision Ring-LWE place more restrictions on the underlying rings and result in worse parameters. Later, in <cit.>, Peikert et al. showed a direct and tighter reduction from a worst-case (ideal) lattice problem to decision Ring-LWE with more compact error rates. In this section, we study the hardness of the decisional version of LWE, with a proof similar to <cit.>. [Theorem] Let R=ℤ[G] be a group ring where G is of Type I or Type II with n elements. Let α=α(n)>0, and let q=q(n)>2 be an integer such that αq≥2n. For some negligible ε=ε(n), there is a probabilistic polynomial-time quantum reduction from R-DGS_γ (and hence from SIVP with approximation factor Õ(n^{3/2}/α)) to R-LWE_{q,Ψ_{≤α}}, where γ=max{η_ε(ℐ)·(√2·n/α), 2√n/λ_1(ℐ^∨)}. When employing the tighter bound mentioned in Remark <ref>, it is possible to improve the parameter γ to max{η_ε(ℐ)·(√(2n)/α)·ω(√(log n)), √(2n)/λ_1(ℐ^∨)}, provided that αq≥√n·ω(√(log n)), which shows a reduction from the SIVP problem with approximation factor Õ(n/α). For simplicity, we only provide the reduction for group rings whose underlying group is of Type I; the one for Type II is essentially the same. The reduction proceeds through the same repeated iterative steps as in Section <ref>; the only difference is that we now have access to a decision LWE oracle instead of a search one. To begin with, we introduce some notation for a special family of polynomials in ℝ[x]. [Definition] Let r>0, ι>0 be reals, T≥1 an integer, and let ξ be a v-th primitive root of unity. Let W_{r,ι,T} be any set[The specific selection of W_{r,ι,T} does not affect the analysis in the following.] of polynomials containing, for each i=0,1,…,v−1 and k=0,1,…,T, a polynomial r_k^{(i)}(x)∈ℝ[x] such that r_k^{(i)}(ξ^ℓ)=r for all ℓ≠i,v−i, and r_k^{(i)}(ξ^i)=r_k^{(i)}(ξ^{v−i})=r(1+ι)^k. Consider the group ring with underlying group G_1=ℤ_m⋉ℤ_n=⟨s⟩⋉⟨t⟩, where t has order v. Any element 𝔥∈ℝ[G] of the form 𝔥=∑_{i=0}^{v−1}f_it^i, f_i∈ℝ, can also be regarded as a polynomial in ℝ[x] of degree at most v−1 by replacing t with the indeterminate x. It is easily verified that there is at least one element of ℝ[G] satisfying the evaluation conditions (<ref>) and (<ref>) of Definition <ref>, which means W_{r,ι,T} is well-defined; in fact, we can choose the elements by the same method as in Definition 11 of <cit.>, through Lagrange interpolation.
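One concrete way to realize a member of W_{r,ι,T} is interpolation at the v-th roots of unity, as in the remark above; since the prescribed values are symmetric under ξ^ℓ↦ξ^{v−ℓ}, the resulting coefficients are real. The sketch below does this with an inverse DFT (equivalent to Lagrange interpolation at the roots of unity) for arbitrary toy parameters.

```python
import numpy as np

# build one member r_k^(i) of W_{r,iota,T}: prescribe its values at the v-th
# roots of unity and recover real coefficients by interpolation
v, r, iota, i, k = 9, 2.0, 0.25, 2, 3
vals = np.full(v, r, dtype=complex)
vals[i] = vals[v - i] = r * (1 + iota) ** k      # bumped values at xi^i, xi^{v-i}

coeffs = np.fft.fft(vals) / v                    # c_j = (1/v) sum_l vals_l xi^{-lj}
assert np.allclose(coeffs.imag, 0)               # symmetric values -> real polynomial
coeffs = coeffs.real

evals = np.fft.ifft(coeffs) * v                  # p(xi^m) = sum_j c_j xi^{mj}
assert np.allclose(evals, vals)                  # the prescribed values are attained
print("coefficients of r_k^(i):", np.round(coeffs, 4))
```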
The lemma in the following, referred to as the iterative step, combines the results of Lemma <ref> and Lemma <ref>. [Lemma] Let R=ℤ[G] be a group ring where G is of Type I or Type II with n elements. There exists an efficient quantum algorithm that, given an oracle solving R-LWE_{q,Ψ_{≤α}}, on input a number α∈(0,1) and an integer q≥2, an (invertible) ideal ℐ of ℤ[G] with index |R/ℐ| coprime with q, a real number r≥√2·q·η_ε(ℐ) such that r·n/(αq)>2√n/λ_1(ℐ^∨), polynomially many samples from the discrete Gaussian distributions D_{ℐ,R_k^{(i)}}, where R_k^{(i)} is the matrix representation of each r_k^{(i)}∈W_{r,ι,T}, and a vector r′∈ℝ^n with each coordinate r′_i>r, outputs an independent sample from D_{ℐ,r′}.

The following lemma originates from Lemma 6.6 of <cit.> and is a slightly stronger version of Lemma 6 of <cit.>. [Lemma] Let R=ℤ[G] be a group ring of order n with G=⟨s⟩⋉⟨t⟩ defined by (<ref>), where t is an element of order v. Let ξ be a v-th primitive root of unity. Let r(x)∈ℝ[x] be a polynomial of degree at most v−1, and let c=(∏_{i=0}^{v−1}(r(ξ^i)/√v))^{1/v}≥1. Then the matrix determined by r(t), which we denote by M_r, satisfies the smoothness condition ∑_{y∈R^∨∖{0}}exp(−π·y^tM_rM_r^ty)≤ε, where ε=exp(−c^2v). [Proof] Since r(t) is invertible, the matrix M_r is also invertible. According to Lemma <ref>, we have η_ε(M_r^{-1}R)≤c√v/λ_1((M_r^{-1}R)^∨), where the dual lattice satisfies (M_r^{-1}R)^∨=M_r^Tℤ^n under the coefficient embedding. Note that the lattice M_r^Tℤ^n can be viewed as the concatenation of a series of copies of r(t)ℤ[t]/⟨t^v−1⟩ and r(t^{-1})ℤ[t]/⟨t^v−1⟩ (the number of each depending on the order of s). For any polynomial f=∑_{i=0}^{v−1}f_ix^i∈ℝ[x], the norm of f (under the coefficient embedding) is given by ‖f‖_2^2=∑_{i=0}^{v−1}f_i^2=∑_{i=0}^{v−1}|f(ξ^i)/√v|^2. For any polynomial g(x)∈ℝ[x] of degree less than v, if g(x)=r(x)g_1(x), then ‖g‖_2^2=∑_{i=0}^{v−1}|r(ξ^i)g_1(ξ^i)/√v|^2≥c^2v·(∏_{i=0}^{v−1}|g_1(ξ^i)|^2)^{1/v}≥c^2v, where the first inequality follows from the arithmetic–geometric mean inequality. Similarly, when g(x)=r(x^{v−1})g_2(x), ‖g‖_2^2≥c^2v holds for the same reason. Therefore, from (<ref>), we have M_r≥η_ε(R).

[Lemma] Let R=ℤ[G] be a group ring with G=ℤ_u⋉ℤ_v=⟨s⟩⋉⟨t⟩ defined by (<ref>), and let n:=uv. There exists a probabilistic polynomial-time (classical) algorithm that, given an oracle solving R-LWE_{q,Ψ_{≤α}}, on input a number α∈(0,1) and an integer q≥2, an invertible right ideal ℐ of ℤ[G] with index |R/ℐ| coprime with q, a parameter r≥√2·q·η_ε(ℐ), and polynomially many samples from the discrete Gaussian distributions D_{ℐ,R_k^{(i)}}, where R_k^{(i)} is the matrix representation of r_k^{(i)}(x)∈W_{r,ι,T} (with ι=1/poly(n), T=poly(n)) viewed as an element of ℝ[G], solves GDP_{ℐ^∨,g} for g=(1/n)·αq/(√2·r). [Proof] Let φ denote the coefficient embedding of the group ring ℝ[G] into ℝ^n.
If α<exp(−n), then except with negligible probability the coset representative e of the GDP instance satisfies ‖φ(e)‖≤√n·g≤α/(2√n·η_ε(ℐ))≤2^{−n}λ_1(ℐ^∨), where the last inequality is derived from Lemma <ref>, which means φ(e) is short enough for us to use Babai's algorithm <cit.> to obtain the solution of the GDP instance. Hence we may assume α>exp(−n) without loss of generality. We let κ=poly(n) with κ≥100n^2ℓ be such that the advantage of the R-LWE oracle is at least 2/κ, where ℓ is the number of samples required by the oracle. We first view e as a bivariate polynomial in the indeterminates s and t: e(s,t):=f_0(t)+sf_1(t)+⋯+s^{u−1}f_{u−1}(t), and denote the univariate polynomials e_j(t):=e(ω^j,t)=f_0(t)+ω^jf_1(t)+⋯+ω^{j(u−1)}f_{u−1}(t). Denote ρ_j(e)=(e_j(ξ^0),e_j(ξ),e_j(ξ^2),…,e_j(ξ^{v−1})):=(ρ_j^{(0)}(e),ρ_j^{(1)}(e),…,ρ_j^{(v−1)}(e)) for j=0,1,…,u−1, where ω and ξ are u-th and v-th primitive roots of unity, respectively. In the following, we determine ρ_j by determining each of its coordinates ρ_j^{(i)} for i=0,1,2,…,v−1. The reduction uses the decision LWE oracle to simulate oracles 𝒪_j^{(i)}:ℂ×ℝ_{≥0}→{0,1}, 0≤i≤v−1, such that the probability that 𝒪_j^{(i)}(z,m) outputs 1 depends only on exp(m)·|z−(ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2|, for z∈ℂ with |z−(ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2| sufficiently small. Hence 𝒪_j^{(i)} serves as an oracle with “hidden center” (ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2 as defined in Definition <ref>. Likewise, we can also use the decision LWE oracle to simulate oracles with “hidden centers” (ρ_j^{(i)}(e)−ρ_{u−j}^{(i)}(e))/2. Combining these results, we can retrieve ρ_j^{(i)}(e) and ρ_{u−j}^{(i)}(e). First, we consider a fixed j, and then apply Proposition <ref> to find a sufficiently good approximation to (ρ_j^{(i)}(e)±ρ_{u−j}^{(i)}(e))/2 for each i, which allows us to recover e_j(t)±e_{u−j}(t) by solving a system of linear equations. Furthermore, given e_j for all j, we can recover e efficiently except with negligible probability. To achieve this goal, when j=0 or j=u/2, we can use a process similar to Lemma 9 in <cit.> to recover ρ_j^{(i)} (in this case, ρ_j^{(i)} is real). For the following discussion, we may assume j≠0,u/2. Define k_j^{(i)}:ℂ→ℝ[G_1] satisfying ρ_j(k_j^{(i)}(z))=z·δ_i+z̄·δ_{v−i}, where δ_i has 1 in the i-th coordinate and 0 elsewhere, and we may restrict the image of k_j^{(i)} to elements of ℝ[G_1] of the form a_0+a_1t+a_2t^2+⋯+a_{v−1}t^{v−1}∈ℝ[G_1], i.e., with zero coefficients on s^it^j for all 1≤i≤u−1 and 0≤j≤v−1. On input (z,m), the oracle 𝒪_j^{(i)} uses fresh Gaussian samples from D_{ℐ,R_k^{(i)}}, where R_k^{(i)} is the matrix representation of r_k^{(i)}∈W_{r,ι,T} and (1+ι)^k=exp(m) as in Definition <ref>. It then applies the transformation of Lemma <ref> to these samples, the coset (e_j+e_{u−j})/2−∑k_j^{(i)}(z_j^{(i)})+ℐ^{-1}, the parameter r and the matrix norm bound d=αq/(√2·r)·ω(1), and converts them into LWE samples; denote these samples by A_{j,z,m}^{(i)}. Then 𝒪_j^{(i)} calls the R-LWE oracle on these samples and outputs 1 if and only if it accepts. Next, the reduction runs the algorithm of Proposition <ref> for each i=1,2,…,v−1 with oracle 𝒪_j^{(i)}, confidence parameter κ, and distance bound d′=d/(1+1/κ), and outputs some approximation z_j^{(i)} to the oracle's hidden center. Finally, the reduction runs Babai's algorithm on the coset (e_j+e_{u−j})/2−∑k_j^{(i)}(z_j^{(i)})+ℐ^{-1}, obtaining ẽ_j^+, and returns ẽ_j^++∑k_j^{(i)}(z_j^{(i)}) as output. The running time of the reduction is polynomial in the size of the group ring. Assuming the z_j^{(i)} are valid solutions to (exp(−κ),exp(−κ),1+1/κ)-OHCP with hidden centers (ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2, we check the correctness of the algorithm.
Since the z_j^{(i)} are valid solutions, we have |z_j^{(i)}−(ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2|≤exp(−κ)d′≤exp(−κ)/η_ε(ℐ)≤2^{−n−1}λ_1(ℐ^{-1})/√n by the definition of OHCP. Thus, ‖∑k_j^{(i)}(z_j^{(i)})−(e_j(t)+e_{u−j}(t))/2‖≤2^{−n}λ_1(ℐ^{-1}). Note that e_j(t) and e_{u−j}(t) have conjugate coefficients at each position; it follows that (e_j(t)+e_{u−j}(t))/2 can be regarded as an element of ℝ[G]. Babai's algorithm will therefore return the exact value of (e_j(t)+e_{u−j}(t))/2−∑k_j^{(i)}(z_j^{(i)}), which we denote by ẽ^+, and finally we output (e_j+e_{u−j})/2=ẽ^++∑k_j^{(i)}(z_j^{(i)}). The same analysis applies to (e_j−e_{u−j})/2, which in turn gives the values of e_j and e_{u−j}. Hence we prove the correctness of the algorithm. It remains to prove that, except with negligible probability over the choice of e and for all i,j: (1) 𝒪_j^{(i)} represents a valid instance of (exp(−κ),exp(−κ),1+1/κ)-OHCP with “hidden center” (ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2; (2) 𝒪_j^{(i)} satisfies the conditions stated in Proposition <ref>. To prove validity, we first observe that the distribution A_{j,z,m}^{(i)} depends only on exp(m)·|z−(ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2| whenever |z−(ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2|≤(1+1/κ)d′=d. We have exp(−κ)d′≤exp(−n)d≤|(ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2|≤d′. Therefore, (𝒪_j^{(i)},κ,d′) corresponds to a valid instance of (exp(−κ),exp(−κ),1+1/κ)-OHCP with “hidden center” (ρ_j^{(i)}(e)+ρ_{u−j}^{(i)}(e))/2, except with negligible probability. Finally, we prove that the oracle 𝒪_j^{(i)} indeed satisfies the three conditions specified in Proposition <ref>.

(1) For fixed j, denote by p_j^{(i)}(z,m) the probability that 𝒪_j^{(i)} outputs 1 on input (z,m), and by p_∞^{(i)} the probability that the R-LWE oracle outputs 1 on uniformly random inputs. It follows that p_j^{(i)}(0,0)=p_{j′}^{(i′)}(0,0) for all (i,j),(i′,j′), and p_j^{(i)}(0,0)−p_∞^{(i)} is exactly the advantage that the R-LWE oracle has against the error rate derived from the transformation described in Corollary <ref>. Recall that e is drawn from D_{d/n}; then ‖e‖_Mat is at most d. Following the same procedure as in Lemma <ref>, the errors of the resulting LWE samples are distributed exactly according to distributions in Ψ_{≤α}. Since the decision LWE oracle has advantage 2/κ against this distribution of error rates, by Markov's inequality we may assume that p_j^{(i)}(0,0)−p_∞^{(i)}≥1/κ holds with non-negligible probability. This means Item 1 of Proposition <ref> is satisfied.

(2) For Item 2, the distribution A_{j,0,m}^{(i)} is within negligible statistical distance of the LWE distribution A_{s,ψ_k^{(i)}} determined by R_k^{(i)}. Recall that r_k^{(i)}(ξ^i)=r_k^{(i)}(ξ^{v−i})=r(1+ι)^k and r_k^{(i)}(ξ^h)=r for h≠i,v−i. With Lemma <ref> and the definition of the smoothing parameter, we can prove that the distribution A_{j,0,m}^{(i)} is within statistical distance ℓ·exp(−v∏_h((1/√v)·r_k^{(i)}(ξ^h))^{2/v}) ≤ ℓ·exp(−exp(4m/v)·(r/q)^2·∏_i|ρ_j^{(i)}(e_j+e_{u−j})|^{2/v}) ≤ ℓ·exp(−exp(4m/v−4n−1)) ≤ 2exp(−m/κ) of the uniform distribution. Here we use |ρ_j^{(i)}(e_j+e_{u−j})|≥exp(−n)d>exp(−n)·αq/(√2·r)>exp(−2n−1/2)·q/r. The inequality (<ref>) follows from the fact that exp(4m/v−4n−1)≫m/κ+log(ℓ/2). Therefore, we conclude that |p_j^{(i)}(0,m)−p_∞^{(i)}|≤2exp(−m/κ), which means Item 2 of Proposition <ref> is satisfied.

(3) By Lemma <ref>, the distributions A_{j,z,m_1}^{(i)} and A_{j,z,m_2}^{(i)} are within statistical distance min{1,10ℓ(exp(|m_1−m_2|)−1)}≤κ|m_1−m_2|, where ℓ is the number of samples used, as mentioned before. Thus, we have proved that p_j^{(i)}(z,m) is κ-Lipschitz. This completes the proof.

§ CONCLUSION

Both the search and the decision version of the LWE problem over group rings studied in this paper are hard, thanks to the reductions from computationally hard problems in ideal lattices.
Specifically, we focus on finite non-commutative groups constructed via the semi-direct product of two cyclic groups (in some sense, the group family with the simplest structure). While there are indeed various methods for constructing non-commutative groups from two smaller groups, we believe the results of this paper can be generalized to groups with more complex structures. In fact, the two types of groups discussed in this paper are special cases of metacyclic groups, i.e., semi-direct products of two cyclic groups (which need not be induced by the φ and ψ of definitions (<ref>) and (<ref>)). Additionally, the reduction extensively exploits properties of the irreducible representations and their eigenvalues. On the one hand, they are closely related to the left regular representation of the ring elements, which encompasses all their irreducible subrepresentations to some extent. On the other hand, as mentioned in <cit.>, the dimension of the irreducible representations affects the efficiency of multiplying two group ring elements; it is therefore essential that the dimensions of the irreducible representations not be exceedingly large. In this paper, we have chosen two types of rings to instantiate the LWE problem. These rings are quotients of group rings modulo an ideal. Nevertheless, these selections are made heuristically, so that the resulting rings do not suffer from the potential attack exploiting the one-dimensional subrepresentations of the group ring. It remains an open problem whether there is a general approach to selecting quotient rings that are not vulnerable to this kind of attack.

§ PROOF OF LEMMA <REF>

(i) Suppose n=2u+1 is odd. By counting the conjugacy classes of G_1, we know that the number of irreducible representations of G_1 is 2r+ru. To determine all 1-dimensional irreducible representations of G_1, it suffices to consider the quotient group G_1/[G_1,G_1], where [G_1,G_1] denotes the commutator subgroup of G_1, which is isomorphic to ℤ_n=⟨t⟩. Then G_1/[G_1,G_1]≅ℤ_m is a cyclic group. According to representation theory, the numbers of 1-dimensional representations of G_1 and of ℤ_m are identical, namely m=2r. Furthermore, χ_i(s) is completely determined by the (1-dimensional) representations of ℤ_m, as stated in the lemma. Since sts^{-1}=t^{-1}, the elements t and t^{-1} are conjugate, so χ_i(t)=χ_i(t^{-1})=χ_i(t)^{-1}, i.e., χ_i(t)=±1; as χ_i(t)^n=1 and n is odd, we conclude χ_i(t)=1. It remains to determine all 2-dimensional irreducible representations of G_1. Let (ρ,V) be a 2-dimensional irreducible representation of G_1, and consider the restriction of ρ to the subgroup ℤ_n. Since ℤ_n is Abelian, V decomposes into two 1-dimensional subrepresentations V=V_1⊕V_2 when viewed as a representation of ℤ_n; it is important to emphasize that V_1 and V_2 are distinct. Since ρ(t)(ρ(s)V_1)=ρ(s)ρ(s^{-1}ts)V_1∈ρ(s)V_1, ρ(s)V_1 is also a subrepresentation of ℤ_n. It must be that ρ(s)V_1=V_2, and similarly ρ(s)V_2=V_1. Fix a nonzero vector v_1∈V_1; then there exists λ∈ℂ such that ρ(t)v_1=λv_1. Let v_2=ρ(s)v_1∈V_2; then there also exists ζ∈ℂ such that ρ^2(s)v_1=ρ(s)v_2=ζv_1. As s has order m=2r, it follows that ζ is an r-th root of unity. Thus the transformation matrix of ρ(s) with respect to the basis v_1,v_2 is ([ 0 ζ; 1 0 ]). On the other hand, we notice that ρ(t)v_2=ρ(t)ρ(s)v_1=ρ(s)ρ(t^{-1})v_1=λ̄·ρ(s)v_1, where λ̄ is the complex conjugate of λ. If λ is real, then ρ^2(t)=id.
In this case, G_1/ker(ρ)≅ℤ_m⋉ℤ_2=ℤ_m×ℤ_2 is abelian, which implies it has no 2-dimensional irreducible representations, a contradiction. Then λ has to be a nonreal root of x^n-1. Hence the transformation matrix of ρ(t) with respect to the basis v_1,v_2 is

([ ξ^j 0; 0 ξ^n-j ]), j=1,2,…,u.

(ii) It is essentially the same as the discussion in (i).

§ PROOF OF LEMMA <REF>

Type I: Let m be an even positive integer and n an arbitrary positive integer, and let G_1 be the group defined as

ℤ_m⋉_φℤ_n=⟨ s,t | s^m=1, t^n=1, sts^-1=t^-1⟩,

where s and t are the generators of the cyclic groups ℤ_m and ℤ_n, respectively. The group ring ℝ[G_1] naturally has a basis over ℝ consisting exactly of the elements of G_1, i.e.,

{s^it^j | 0≤ i≤ m-1, 0≤ j≤ n-1}.

Let I be a right ideal in ℤ[G_1], and denote by ℒ the ideal lattice corresponding to I under the coefficient embedding. Suppose that

𝔥_1=∑_j=0^n-1x_0,jt^j+∑_j=0^n-1x_1,jst^j+⋯+∑_j=0^n-1x_m-1,js^m-1t^j∈ I^-1⊆ℝ[G_1],

where x_i,j∈ℝ; then 𝔥_1 can be regarded as an mn-tuple with real components under the coefficient embedding, i.e.,

x:=(x_0,x_1,x_2,…,x_m-1)∈ℒ^-1⊂ℝ^mn,

where x_i:=(x_i,0,x_i,1,…,x_i,n-1). Denote x̄_i:=(x_i,0,x_i,n-1,x_i,n-2,…,x_i,2,x_i,1), and let

z:=(z_0,0,z_0,1,…,z_0,n-1,z_1,0,z_1,1,…,z_1,n-1,…,z_m-1,0,…,z_m-1,n-1)=(x̄_0,x̄_m-1,x̄_m-2,x̄_m-3,…,x̄_2,x̄_1);

then z is also an mn-tuple obtained by permuting the coordinates of x. It suffices to show that x∈ℒ^-1 if and only if z∈ℒ^∨. For simplicity, the addition and multiplication operations with respect to the first (or second) subscript of x,y,z are all modulo m (or n) (the same holds in the proof for Type II).

Note that x∈ℒ^-1 if and only if, for any element 𝔥=∑_i=0^m-1∑_j=0^n-1y_i,js^it^j∈ I, we have 𝔥_1𝔥∈ℤ[G_1]. By computing 𝔥_1𝔥 represented in the basis (<ref>), we obtain that x∈ℒ^-1 is equivalent to the condition that for any a=0,1,…,m-1 and b=0,1,…,n-1,

∑_j=0^n-1x_0,jy_a,b-(-1)^aj+∑_j=0^n-1x_1,jy_a-1,b+(-1)^aj+⋯+∑_j=0^n-1x_m-1,jy_a+1-m,b+(-1)^aj∈ℤ

holds. Applying this result to z, we get

∑_j=0^n-1z_0,jy_a,b+(-1)^aj+∑_j=0^n-1z_m-1,jy_a-1,b+(-1)^aj+⋯+∑_j=0^n-1z_1,jy_a+1-m,b+(-1)^aj∈ℤ.

On the other hand, if z∈ℒ^∨, then for any element ∑_i=0^m-1∑_j=0^n-1y_i,js^it^j∈ I, we have

∑_i=0^m-1∑_j=0^n-1z_i,jy_i,j∈ℤ.

Since I is a right ideal of ℤ[G_1],

(∑_i=0^m-1∑_j=0^n-1y_i,js^it^j)s^m-at^(-1)^ab=∑_i=0^m-1∑_j=0^n-1y_i,js^m-a+it^-(-1)^a(j-b)∈ I

holds. Then we have z∈ℒ^∨ if and only if for any a∈[m], b∈[n],

∑_i=0^m-1∑_j=0^n-1z_m-a+i,(-1)^a(j-b)y_i,j=∑_i=0^m-1∑_j=0^n-1z_m-a+i,jy_i,b+(-1)^aj∈ℤ,

which is exactly the same as (<ref>). Hence we have proved the claim with respect to groups of Type I.

Type II: Let n>2 be an integer. We define G_2 as ℤ_n^∗⋉_ψℤ_n, i.e.,

G_2:={a⊙ g^k | a∈ℤ_n^∗, 0≤ k≤ n-1},

where g is a generator of ℤ_n (of order n). The multiplication of G_2 is induced by ψ, i.e., for any a,b∈ℤ_n^∗ and k_1,k_2∈ℤ_n,

(a⊙ g^k_1)(b⊙ g^k_2)=ab⊙ g^k_1b^-1+k_2.

Then the group ring ℝ[G_2] has a natural basis: {a⊙ g^k | a∈ℤ_n^∗, k=0,1,2,…,n-1}. Since m:=φ(n) is even when n>2, we can write

ℤ_n^∗={a_0=1,a_1,a_2,…,a_m/2-1,a_m/2=n-1,a_m/2+1,…,a_m-1},

where a_m-i=a_i^-1 holds for any integer i (by arranging the order of the elements in ℤ_n^∗ appropriately). Let σ:ℤ_n^∗→ [m] be the index mapping, which maps each element of ℤ_n^∗ to its index, i.e., σ(a_i)=i for any i∈ [m]. Suppose that

𝔞_1=∑_j=0^n-1x_0,j(a_0⊙ g^j)+∑_j=0^n-1x_1,j(a_1⊙ g^j)+⋯+∑_j=0^n-1x_m-1,j(a_m-1⊙ g^j)∈ℝ[G_2].

Similar to the proof for Type I, 𝔞_1 can also be regarded as an mn-tuple x under the coefficient embedding. Let z be another mn-tuple satisfying

z_i,j=x_m-i,-ja_i.

Since (a_i,n)=1, it can be easily verified that z can be obtained by permuting the coordinates of x.
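As a quick illustration of this reindexing (a sketch of our own, not part of the proof), the permutation property can be checked numerically for a small concrete case, say n = 10, with the units ordered so that a_m-i = a_i^-1 mod n:

```python
# Hypothetical small example (n = 10 is our choice, not from the lemma).
n = 10
a = [1, 3, 9, 7]  # Z_10^*, ordered so that a_{m-i} = a_i^{-1} (mod n)
m = len(a)
assert all(a[(m - i) % m] * a[i] % n == 1 for i in range(m))

# z_{i,j} = x_{m-i, -j*a_i}: collect the (row, column) indices of x that
# appear in z and check that each index occurs exactly once
image = {((m - i) % m, (-j * a[i]) % n) for i in range(m) for j in range(n)}
assert len(image) == m * n  # the index map is a bijection, i.e. a permutation
print("z is a coordinate permutation of x")
```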
We claim that x∈ℒ^-1 if and only if z∈ℒ^∨, as in the case of Type I. We have 𝔞_1∈ I^-1 if and only if for any 𝔞=∑_i=0^m-1∑_j=0^n-1y_i,j(a_i⊙ g^j)∈ I, 𝔞_1𝔞∈ℤ[G_2]. This is equivalent to requiring that for any integers r,s, the coefficient of 𝔞_1𝔞 over a_r⊙ g^s is also an integer, i.e.,

∑_j=0^n-1x_r,s-jy_0,j+∑_j=0^n-1x_σ(a_ra_1^-1),(s-j)a_1y_1,j+⋯+∑_j=0^n-1x_σ(a_ra_m-1^-1),(s-j)a_m-1y_m-1,j= ∑_j=0^n-1z_σ(a_0a_r^-1),(j-s)a_ry_0,j+∑_j=0^n-1z_σ(a_1a_r^-1),(j-s)a_ry_1,j+⋯+∑_j=0^n-1z_σ(a_m-1a_r^-1),(j-s)a_ry_m-1,j∈ℤ.

On the other hand, if z∈ℒ^∨, then

⟨z,y⟩=∑_i=0^m-1∑_j=0^n-1z_i,jy_i,j∈ℤ.

Since I is a right ideal, for any r=0,1,2,…,m-1 and s=0,1,2,…,n-1, we have

(∑_i=0^m-1∑_j=0^n-1y_i,j(a_i⊙ g^j))(a_r^-1⊙ g^-sa_r)=∑_i=0^m-1∑_j=0^n-1y_i,j(a_ia_r^-1⊙ g^(j-s)a_r)∈ I.

Thus, we can claim that z∈ℒ^∨ if and only if for any r∈ [m], s∈ [n],

∑_i=0^m-1∑_j=0^n-1z_σ(a_ia_r^-1),(j-s)a_ry_i,j∈ℤ,

which is exactly the same as (<ref>). This proves the claim for groups of Type II.
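To make the Type I manipulations concrete, the following minimal numpy sketch (an illustration under our own conventions, not code from the paper) multiplies two random elements of ℝ[G_1] directly from the group law t^j_1 s^i_2 = s^i_2 t^(-1)^i_2 j_1 and confirms that the coefficient of s^a t^b in 𝔥_1𝔥 is given by the first displayed sum in the Type I proof above:

```python
import numpy as np

m, n = 4, 5  # m even, n arbitrary
rng = np.random.default_rng(0)
x = rng.normal(size=(m, n))           # coefficients of h1 (real)
y = rng.integers(-5, 6, size=(m, n))  # coefficients of h (integer)

# direct product in R[G_1]: s^i1 t^j1 * s^i2 t^j2 = s^(i1+i2) t^((-1)^i2 j1 + j2)
prod = np.zeros((m, n))
for i1 in range(m):
    for j1 in range(n):
        for i2 in range(m):
            for j2 in range(n):
                prod[(i1 + i2) % m, ((-1) ** i2 * j1 + j2) % n] += x[i1, j1] * y[i2, j2]

# coefficient formula: term i contributes x_{i,j} y_{a-i, b-(-1)^(a-i) j}
for A in range(m):
    for B in range(n):
        coef = sum(x[i, j] * y[(A - i) % m, (B - (-1) ** ((A - i) % 2) * j) % n]
                   for i in range(m) for j in range(n))
        assert abs(coef - prod[A, B]) < 1e-9
print("coefficient formula agrees with the group-ring product")
```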
"authors": [
"Jiaqi Liu",
"Fang-Wei Fu"
],
"categories": [
"cs.CR",
"cs.IT",
"math.IT"
],
"primary_category": "cs.CR",
"published": "20231127143836",
"title": "Learning with Errors over Group Rings Constructed by Semi-direct Product"
} |
§ INTRODUCTION

This paper grew out of trying to understand two basic facts about Feynman integrals. The first fact is that a large class of Feynman integrals at L loops and in D dimensions can be written as iterated integrals of length ⌊D L/2⌋. This is less than or equal to half of the number of integrals in momentum space. It seems that, outside of a few experts, this fact was not widely appreciated and, in fact, before the work of ref. <cit.> we lacked even the language to properly discuss transcendental weight in the Feynman integral literature.[With the notable exception of multiple zeta values, which are purely numerical and have no functional dependence (see ref. <cit.>), and harmonic polylogarithms (see ref. <cit.>).] It is hard to see who should get the credit for this simple but important observation, but Nima Arkani-Hamed has forcefully made this point to me in a private discussion. This fact is not only surprising, but actually very useful in practice since it cuts in half the number of integrals one needs to perform.

The second basic fact is that the iterated integrals are a sequence of one-dimensional integrals. This also looks very surprising. How can one rewrite the original integral as a sequence of one-dimensional integrals? And what should these one-dimensional integration variables be? A first clue to answering these questions arose from an important dichotomy of singularities, noticed already by Landau in his original paper <cit.> on singularities of integrals. As Landau showed, the singularities broadly divide into two categories: square root[Or perhaps more generally algebraic.] and logarithmic. These singularities have very different natures. Square root singularities are algebraic and do not contribute to transcendental weight, while logarithmic singularities are transcendental and do contribute to transcendental weight. Furthermore, ref.
<cit.> showed in examples that about half of the singularities are of square root type and half are of logarithmic type.It is then reasonable to guess that it is only the logarithmic singularities which contribute to the transcendental weight and in turn this gives a way to explain Arkani-Hamed's observation.But the second question remains.What should be the one-dimensional integration variables?A second clue towards answering this come from Cutkosky's work <cit.> where he described discontinuities across branch cuts in terms of cut integrals.In that reference Cutkosky introduced a new way to write the Feynman integrals, designed with the explicit purpose of making both the singularities and the discontinuities manifest.This representation, as we will review below in sec. <ref> and sec. <ref>, is a sequence of one-dimensional integrals potentially followed by a higher-dimensional integral.As a general rule, when this last integral is non-trivial, an example being that of a non-singular elliptic integral, the Feynman integral can not be written as a polylogarithm (we discuss an example in sec. <ref>).In order to make contact with the iterated integral form of the answer, it should then be possible to explicitly do the integrals producing square root singularities.This would then constitute an alternative way of performing direct integration to the methods of refs. <cit.>.We explicitly show how to do this in a few simple examples.The method relies on nothing more than basic complex analysis and Cauchy theorem.As we will show, in order for this to work, there should be only one pair of square root branch points at each step involving square root singularities.This does indeed happen, sometimes via some non-trivial algebraic identities.The examples presented in this paper are not meant to challenge the state of the art computations, but rather to showcase how the method works in simple examples.We hope to tackle more complicated cases in future work.In sec. <ref> we discuss the case of a reducible integral and show that applying the same ideas to this case poses no difficulty.Therefore, this method, unlike the differential equation method (see ref. <cit.>), can avoid a potentially expensive integral reduction (see ref. <cit.>) step.Clearly, applying the integration algorithm to each Feynman integral separately is not economical.Instead, as in the unitarity method (see refs. <cit.>), one can group together all the diagrams which contribute to a given Landau singularity (at a given order in perturbation theory), compute the on-shell state sums, take the internal momenta off-shell and compute the resulting integral.The full answer can be obtained by merging (not adding!) different contributions which account for all potential singularities.The integration method we introduce has the following advantages over approaches in Feynman parameter space.First, it can apply to a large variety of mixed types of i ϵ conditions (advanced, retarded, Feynman, anti-Feynman) which are sometimes required.This is so because, unlike in Feynman parameter space, we have several denominators and we can choose contours for each independently.Second, the momentum space method applies effortlessly to cut integrals, which are more difficult in Feynman parameter language (see ref. <cit.>).A third advantage is that the on-shell varieties arising in momentum space are typically less singular and do not require as many blow-ups (see refs. 
<cit.> for examples of blow-ups required in Feynman parameter space). A more technical difference is that (the properly compactified) contours of integration in momentum space are not relative homology classes, so they are easier to deal with. But most importantly, a huge advantage of our method is that we only need to think about one variable at a time, and in principle all the singularities in that variable are visualizable in the complex plane of that variable.

The polylogarithmic integrals are fairly well understood and the next frontier is that of integrals which are not polylogarithmic. If the on-shell space of the leading singularity has a non-trivial topology, such as that of a (non-singular) elliptic curve or a Calabi-Yau variety, then the integrals cannot be computed in terms of polylogarithms. Sometimes, as in the case of the bubble integral in three dimensions, the on-shell space has the topology of a circle. When complexified, the circle becomes non-compact, and adding two points to achieve a compactification amounts to adding two singularities of pole type. In refs. <cit.> a formalism for defining a coaction has been developed by using cuts instead of differentials. In principle this can be applied to integrals of elliptic or Calabi-Yau type. However, the entries of the ensuing symbol will not be as simple as in the polylogarithmic case. It is therefore not clear yet how to use this method for writing the answer in a canonical form, or how effective this method can be in that case. Indeed, unlike for the polylogarithmic case, even the notion of a prefactor for the integral does not seem to be well-defined (see ref. <cit.>). Other approaches have been proposed in refs. <cit.>.

§ CUTKOSKY'S ARGUMENT

In ref. <cit.>, Cutkosky described a change of variables from the usual loop momentum integration variables to q_e^2, where the q_e are the (not necessarily independent) internal loop momenta. One can change variables from the independent loop momenta k_i to the q_e^2 and other “angular” (in Cutkosky's terminology) variables ξ. After this change of variables the integral reads

∫_a_1^b_1d q_1^2/q_1^2 - m_1^2…∫_a_m^b_md q_m^2/q_m^2 - m_m^2∫_γd ξ_1 ∧⋯∧ d ξ_n - m/J,

where J is a Jacobian factor. It can potentially contain numerator factors of the original integral as well. In favorable cases, the last form d ξ_1 ∧⋯∧ d ξ_n - m/J has further residues and its integral can be computed in simple terms. In more complicated cases, this last form is a holomorphic form on elliptic curves (or hyper-elliptic curves) or Calabi-Yau manifolds and γ in eq.
(<ref>) is a real homology cycle.Despite much study, a general theory of integrals of elliptic or Calabi-Yau type is not available yet.Then, Cutkosky describes the integration limits as the solutions to a modified form of Landau equations, ∑_j ≤ iβ_j q_j = 0 where q_j have norms fixed by the values of the outer integrals.This relies on some (arbitrary) ordering of the propagators.Obviously, a judicious choice of ordering can simplify the calculations.It is worth pointing out that the q_e^2 integrals in Cutkosky's representation (<ref>) have a superficial resemblance to G-functions∫_0^1 d t_1/t_1 - a_1∫_0^t_1d t_2/t_2 - a_2…∫_0^t_r - 1d t_r/t_r - a_r.However, the G-function representation is more restricted since the boundaries of integration depend in a much simpler way on the previous integration variables.In Cutkosky's representation the integration boundaries have a potentially complicated functional dependence on previous integration variables a_s(q_1^2, …, q_s - 1^2) and b_s(q_1^2, …, q_s - 1^2) instead.Nevertheless, Cutkosky's representation has one important qualitative similarity to the G-functions: in both cases we are dealing with one-dimensional integrals on Riemann spheres 𝐏^1.In this representation of the integral, the singularities arise as follows (see ref. <cit.>).If we denote the result of doing all integrals except the outer one by F_(1)(q_1^2, p) where p are external kinematics, then the full answer is∫_a_1^b_1d q_1^2/q_1^2 - m_1^2 F_(1)(q_1^2, p).If there is a singularity when p → p_0, then this means that the integration contour in the q_1^2 complex plane is pinched between q_1^2 = m_1^2 and a singularity of F_(1).More precisely, we must have that F_(1)(m_1^2, p) is singular when p → p_0.By a contour deformation we can pick up a residue at q_1^2 = m_1^2 and this is the only part of the integral which is singular.Therefore, the singularity is given by ± 2 π i F_(1)(m_1^2, p).As remarked above, this indeed becomes singular when p → p_0.§ CUTS AND SPECTRAL DENSITIESGiven a function ρ(x), we can build a function F(x) with a branch cut between a and b whose discontinuity is ρ(x).This construction is well-known andF(z) = 1/2 π i∫_a^b d x/x - zρ(x).Indeed, we haveF(z + i ϵ) - F(z - i ϵ) = 1/2 π i∫_a^b d x (1/x - z - i ϵ - 1/x - z + i ϵ) ρ(x) = = ρ(z), z ∈ (a, b), 0,otherwisewhere we have used 1/x ∓ i ϵ = pv1/x± i πδ(x).Notice the obvious similarity between eq. (<ref>) and eq. (<ref>).The function F_(1) in eq. (<ref>) is the cut of the function defined by the integral.The function F_(1) itself can be represented by a similar integral and the function F_(2) it contains corresponds to another cut, where more propagators are set on-shell.In turn, this writing looks similar to the writing in terms of G-functions, with one major difference: the number of integrals in the G-function representation is equal to ⌊L D/2⌋, while the number of integrals in the Cutkosky representation is at least the number of propagators.We will show below that by explicitly integrating square root singularities and the “angular integrals” one can make the number of integrals match.Clearly in the representation of eq. 
(<ref>) z = a and z = b are branch points.If ρ(a) or ρ(b) are non-vanishing finite constants then we have logarithmic branch points.It is also possible to have singularities such as ρ(z) ∼ (z - a)^γ when z → a, for γ > -1 (to ensure convergence) and similarly for z → b.In this case the value of ρ at the branch points is either zero or infinity, depending on the sign of γ.If γ is a half-integer then we have a square root branch point at z = a.Obviously, the type of branch cut (i.e. square root versus logarithmic) must match at z = a and z = b.A common form for ρ which we will encounter in the following is ρ(z) = 1/√((z - a)(b - z)) or more generally ρ(z) = 1/√((z - a)(b - z)) P(z), where P(z) is a polylogarithmic function.This is closely connected to the Mandelstam representation (see ref. <cit.>).The Mandelstam representation has been the subject of many studies (see also ref. <cit.> for an application in a similar context to the present one).Our proposal amounts to building some kind of spectral densities in perturbation theory.However, unlike in Mandelstam's approach, we explicitly integrate the square root cuts and keep only the logarithmic singularities.For example, the double-spectral function for a box diagram is of square root type (see ref. <cit.>) which in our approach would not survive the integration. § THE BUBBLE INTEGRALConsider the bubble integral∫d^2 q_1/(q_1^2 - m_1^2) (q_2^2 - m_2^2),where p = q_1 + q_2 and p is the external momentum (see fig. <ref>).This can be rewritten as∫d q_1^2/q_1^2 - m_1^2∫_(q_2^2)_min^(q_2^2)_maxd q_2^2/q_2^2 - m_2^2∫d^2 q_1/d q_1^2 ∧ d q_2^2,where (q_2^2)_min and (q_2^2)_max are the minimum and maximum values of q_2^2, subject to the constraints that q_1^2 is fixed and p = q_1 + q_2.The integration domain is in fig. <ref>. 
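Before specializing to the bubble, the dichotomy of spectral densities described above is easy to see numerically. In the following sketch (our own illustration, assuming numpy), the density ρ(x) = 1/√((b-x)(x-a)) produces an algebraic F, with F(z)^2 a rational function, while the constant density produces a logarithm; the substitution x = a + (b-a)sin^2θ removes the endpoint singularity:

```python
import numpy as np

a, b = 1.0, 3.0
th = np.linspace(0.0, np.pi / 2, 20001)
x_sqrt = a + (b - a) * np.sin(th) ** 2   # absorbs 1/sqrt((b-x)(x-a))
x_flat = np.linspace(a, b, 20001)

def F(z, density):
    if density == "sqrt":   # rho(x) = 1/sqrt((b-x)(x-a))
        return np.trapz(2.0 / (x_sqrt - z), th) / (2j * np.pi)
    return np.trapz(1.0 / (x_flat - z), x_flat) / (2j * np.pi)  # rho = 1

z = 0.4 + 0.7j
# square-root density: F(z)^2 = -1/(4 (a-z)(b-z)), no logarithm at all
print(F(z, "sqrt") ** 2 + 1 / (4 * (a - z) * (b - z)))          # ~ 0
# constant density: F(z) = log((b-z)/(a-z))/(2 pi i), a logarithmic branch point
print(F(z, "flat") - np.log((b - z) / (a - z)) / (2j * np.pi))  # ~ 0
```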
§.§ Euclidean signatureIn Euclidean signature we haved^2 q_1/d q_1^2 ∧ d q_2^2 = -1/4 ϵ(q_1, q_2),where ϵ(v, w) = v^0 w^1 - v^1 w^0.We have(q_1 · q_2)^2 + ϵ(q_1, q_2)^2 = q_1^2 q_2^2.Then, the integral to compute becomes∫_0^∞d q_1^2/q_1^2 + m_1^2∫_(p - q_1)^2^(p + q_1)^2d q_2^2/q_2^2 + m_2^21/4 √(q_1^2 q_2^2 - (q_1 · q_2)^2),where p = √(p^2)and we have used the fact that the minimal value of q_1^2 in Euclidean signature is zero and its maximal value is infinity.Once the value of q_1^2 is fixed, the minimal value of q_2^2 is (p - q_1)^2, obtained when q_1 and p are aligned and the maximal value is (p + q_1)^2 when q_1 and p are anti-aligned.The inner integral reads∫_a_1^b_1d x_1/(x_1 + c_1) √((b_1 - x_1)(x_1 - a_1)),where x_1 = q_2^2, c_1 = m_2^2, b_1 = (p + q_1)^2 and a_1 = (p - q_1)^2.In general, the integral∫_a^b d x/(x + c) √((b - x)(x - a))with a < b and c ∉(a, b) can be computed as follows.We introduce a curve y^2 = (b - x)(x - a) which can be rationally parametrized by t as followsx = a + b/2 + a - b/2t + t^-1/2, y = a - b/2t - t^-1/2.Then, the integrand can be written asω = d x/(x + c)√((b - x)(x - a)) = 1/√((a + c)(b + c)) d log(t - t_+/t - t_-).The value x = a corresponds to t = 1 while x = b corresponds to t = -1.Then we obtain∫_a^b d x/(x + c)√((b - x)(x - a)) = 1/√((a + c)(b + c))∫_1^-1 d log(t - t_+/t - t_-) = π/√((a + c)(b + c)).The logarithm contributes π, so the transcendental weight is purely numerical and does not have any dependence on the kinematics.The same integral can be done by contour integration.Let us briefly describe the method since it will generalize to more complicated cases.We want to define a function √((b - x)(x - a)) to have a branch cut along the segment [a, b].We pick b > a and c ∉[a, b].With the usual definition of the square root for complex numbers we have that √((b - x)(x - a)) has a branch cut (-∞, a] and another branch cut [b, ∞).Since we want to have a branch cut along the segment [a, b], we split the square root as √((x - a)(x - b))→√(x - a)√(x - b) and we use the definition √(z) = √(ρ) e^i θ/2 where z = ρ e^i θ with θ∈ [0, 2 π) for the first square root and the definition √(z) = √(ρ) e^i θ/2 where z = ρ e^i θ with θ∈ (-π, π] for the second square root.If we define z - a = ρ_1 e^i θ_1 with θ_1 ∈ [0, 2 π) and z - b = ρ_2 e^i θ_2 with θ_2 ∈ (-π, π], then we have that the of the square root above the cut is i √(ρ_1 ρ_2).Since for x ∈ [a, b] we have b - x = ρ_2 and x - a = ρ_1 have that if defined such that the branch cut is along [a, b], the function √(x - b)√(x - a) (where the two square roots are defined as above) is equal by continuity from above the cut to i √((b - x)(x - a)).Then we have∫_a^b d x/x + c1/√((b - x)(x - a)) = i/2lim_ϵ→ 0∫_γd x/x + c1/√(x - a)√(x - b),where γ is a contour as in fig. <ref>.We have∫_γd x/x + c1/√((x - a)(x - b)) - 2 π i Res_x = -cd x/x + c1/√(x - a)√(x - b) = 0.We therefore have∫_a^b d x/x + c1/√((x - a)(x - b)) = π/√(a + c)√(b + c). 
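This basic integral is also easy to confirm numerically; in the following minimal sketch (assuming scipy is available), the substitution x = a + (b-a)sin^2 t absorbs the square root:

```python
from math import pi, sin, sqrt
from scipy.integrate import quad

a, b, c = 1.0, 3.0, 2.0   # sample values with -c outside (a, b)
# x = a + (b - a) sin^2(t) turns the integral into int_0^{pi/2} 2 dt/(x(t) + c)
val, _ = quad(lambda t: 2.0 / (a + (b - a) * sin(t) ** 2 + c), 0.0, pi / 2)
print(val, pi / sqrt((a + c) * (b + c)))   # both ~ 0.811
```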
Finally, we continue with the evaluation of the outer integral

π∫_0^∞d x_1/(x_1 + c_1) √((x_1 - a_1 + b_1 i)(x_1 - a_1 - b_1 i)),

with x_1 = q_1^2, c_1 = m_1^2, a_1 + b_1 i = (p - i m_2)^2. The integral is along the real axis, where the quantity under the square root is positive, so with the usual convention for the square root branch cut the integration path does not intersect the cut. To compute the integral

∫_0^∞d x/(x + c) √((x - a + b i)(x - a - b i))

we introduce a curve y^2 = (x - a)^2 + b^2. This curve can be parametrized rationally by

x = a + b (t - t^-1)/2, y = b (t + t^-1)/2.

Then we have

ω = d x/(x + c) √((x - a + b i)(x - a - b i)) = 1/√((a + c)^2 + b^2) d log(t - t_+/t - t_-),

where

t_± = (-(a + c) ±√((a + c)^2 + b^2))/b.

To finish the computation of the integral we need to compute the values of t at the boundary of the integration region. At x = 0 we have y = √(a^2 + b^2), where we need to pick the positive sign for y. This implies that

t - t^-1 = -2 a/b, t + t^-1 = 2 √(a^2 + b^2)/b,

whence t_i = (-a + √(a^2 + b^2))/b. For x →∞, we also have y →∞. The condition x →∞ follows from t →∞ or from t → 0^-, but in the second case we obtain y → -∞. Finally, the integral we wanted to compute becomes

∫_t_i^∞ω = -1/√((a + c)^2 + b^2)log(t_i - t_+/t_i - t_-).

Plugging in the values for a, b and c we find that the bubble integral in Euclidean signature reads

π/√(Δ)log(p^2 + m_1^2 + m_2^2 + √(Δ)/p^2 + m_1^2 + m_2^2 - √(Δ)),

where

Δ = (p^2)^2 + m_1^4 + m_2^4 + 2 p^2 m_1^2 + 2 p^2 m_2^2 - 2 m_1^2 m_2^2.

§.§ Lorentzian signature

In two-dimensional Lorentz signature we have

d^2 q_1/d q_1^2 ∧ d q_2^2 = d^2 q_1/4 (q_1 · d q_1) ∧ (q_2 · d q_2) = 1/4 ϵ(q_1, q_2),

where we have used d q_2 = -d q_1 (since p is considered constant) and the fact that (v · d q_1) ∧ (w · d q_1) = -ϵ(v, w) d^2 q_1 with v · d q = v^0 d q^0 - v^1 d q^1. Here we have defined

ϵ(v, w) = v^0 w^1 - v^1 w^0,

and d^2 q = d q^0 ∧ d q^1. It can be checked by a simple calculation that

-ϵ(v, w)^2 = v^2 w^2 - (v · w)^2,

where v^2 = (v^0)^2 - (v^1)^2 and v · w = v^0 w^0 - v^1 w^1. Therefore,

-ϵ(q_1, q_2)^2 = [ q_1^2 q_1 · q_2; q_1 · q_2 q_2^2 ] = 1/4 (4 q_1^2 q_2^2 - (p^2 - q_1^2 - q_2^2)^2).

Let us first compute the stationary points of q_2^2 subject to the constraints mentioned above. Using the Lagrange multiplier λ_1, we find

∂/∂ q_2(q_2^2 - λ_1 ((p - q_2)^2 - q_1^2)) = 0,

which reads

(1 + λ_1) q_2 = λ_1 p.

Using momentum conservation this implies that q_1 = p/(1 + λ_1), q_2 = λ_1 p/(1 + λ_1). We can then determine λ_1 since (1 + λ_1)^2 = p^2/q_1^2, so λ_1 = -1 ±√(p^2/q_1^2). Using this value of λ_1 we find

q_2^2 = (q_1±p)^2,

for the stationary points. At this stage we don't know yet if these are true minima or maxima. To decide the nature of the stationary points and find the minima and maxima one can follow the general procedure described in sec. <ref>, which involves computing a bordered Hessian (see eq. (<ref>)). Their nature depends on the signs of p^2 and q_1^2. In this case we do not need the full power of a general theorem since a direct analysis of the inequalities suffices. Going back to eq. (<ref>), we see that for real Lorentz kinematics we have

(q_2^2 - q_1^2 - p^2)^2 ≥ 4 q_1^2 p^2.

Let us assume p^2 > 0. Then, if q_1^2 < 0 the inequality is satisfied for all values of q_2^2. The same holds if p^2 < 0 and q_1^2 > 0. But if p^2 > 0 and q_1^2 > 0, then we have

(q_2^2 - (q_1 - p)^2) (q_2^2 - (q_1 + p)^2) > 0.

It follows that the possible values for q_2^2 are either

q_2^2 ≥ (q_1±p)^2,

or

q_2^2 ≤ (q_1±p)^2.
If instead p^2 < 0 and q_1^2 < 0, then we have(q_2^2 + (√(-q_1^2) - √(-p^2))^2) (q_2^2 + (√(-q_1^2) + √(-p^2))^2) > 0.Then, eitherq_2^2 ≥ -(√(-q_1^2)±√(-p^2))^2,orq_2^2 ≤ -(√(-q_1^2)±√(-p^2))^2. Assuming that p^2 > 0, for q_1^2 < 0 the range of q_2^2 is (-∞, ∞) while for q_1^2 > 0 the range of q_2^2 is (-∞, a_2) ∪ (b_2, ∞) with a_2 = (q_1 - p)^2 and b_2 = (q_1 + p)^2.The end-points of the integration domain (except the ones at infinity) are the same as obtained by the stationary point study.The q_2^2 integral has two forms∫_-∞^∞d x_2/(x_2 - c_2) √((x_2 - a_2)(x_2 - b_2)),where the roots a_2 and b_2 are not real, and∫_(-∞, a_2) ∪ (b_2, ∞)d x_2/(x_2 - c_2) √((x_2 - a_2)(x_2 - b_2)),where the roots a_2 and b_2 are real with a_2 < b_2.These integrals can be seen as integrals along contours in the curve y_2^2 = (x_2 - a_2) (x_2 - b_2), which is a double cover of the complex x_2 plane, branched at two points x_2 = a_2 and x_2 = b_2.This curve can be rewritten as y_2^2 = x_2^2 + α_2 x_2 + β_2 (with α_2 = -a_2 - b_2 and β_2 = a_2 b_2) and after completing the square as (x_2 + α_2/2 - y_2)(x_2 + α_2/2 + y_2) = α_2^2/4 - β_2.If we denote t_2 = x_2 - y_2 + α_2/2 we have x_2 + y_2 + α_2/2 = α_2^2/4 - β_2/t_2 andx_2 = 1/2(-α_2 + t_2 + α_2^2/4 - β_2/t_2), y_2 = 1/2(α_2^2/4 - β_2/t_2 - t_2).This provides a rationalization of the curve and is a useful change of variables in the integral.In terms of the coordinate t_2 we have d x_2/√((x_2 - a_2)(x_2 - b_2)) = d x_2/y_2 = -d t_2/t_2.Then, we haveω_2 = d x_2/(x_2 - c_2) y_2 = -2 d t_2/(t_2 - t_2^+)(t_2 - t_2^-),witht_2^±= c_2 - a_2 + b_2/2±√((c_2 - a_2)(c_2 - b_2)).Then we haveω_2 = -2/t_2^+ - t_2^- d logt_2 - t_2^+/t_2 - t_2^- = -1/√((c_2 - a_2)(c_2 - b_2)) d logt_2 - t_2^+/t_2 - t_2^-.The square root prefactor is now independent on t_2 and can be combined with the outer differential form d q_1^2/q_1^2 - m_1^2.We have c_2 = m_2^2, a_2 = (q_1 - p)^2 and b_2 = (q_1 + p)^2.We will also set x_1 = q_1^2 for brevity.Replacing these in the square root we findd x_1/(x_1 - c_1) √((x_1 - a_1)(x_1 - b_1)),with c_1 = m_1^2, a_1 = (p - m_2)^2 and b_1 = (p + m_2)^2.This can be treated in the same way as before.Indeed, we can introduce a variable y_1 defined by y_1^2 = (x_1 - a_1)(x_1 - b_1) and a uniformizing variable t_1. In the end we get1/√((m_1^2 - (p - m_2)^2)(m_1^2 - (p + m_2)^2))∫_γ_1 d logt_1 - t_1^+/t_1 - t_1^-∫_γ_2 d logt_2 - t_2^+/t_2 - t_2^-.The square root in front can be written as√(Δ) = √((p^2)^2 + m_1^4 + m_2^4 - 2 p^2 m_1^2 - 2 p^2 m_2^2 - 2 m_1^2 m_2^2),which is the familiar Källén function.The integration domain is outside the curve in fig. 
<ref>.More precisely, for q_1^2 ∈ (-∞, 0] we have q_2^2 ∈ (-∞, ∞) while for q_1^2 ∈ [0, ∞) we have q_2^2 ∈ (-∞, a_2(q_1^2)] ∪ [b_2(q_1^2), ∞).For q_1^2 ∈ (-∞, 0] we can do the inner integral and find that, as function in q_1^2 it has no singularities in the upper half plane (it has a logarithmic branch cut along the positive real axis and there is also a pole at q_1^2 = m_1^2 - i ϵ).In particular there is no pole at infinity and the contour in q_1^2 along the negative real axis can be rotated clockwise to sit on the positive real axis.Changing the direction of integration introduces a minus sign and combining with the previous integration along q_1^2 ∈ [0, ∞) produces-∫_0^∞d x_1/x_1 - c_1∫_a_2(x_1)^b_2(x_1)d x_2/x_2 - c_21/√((x_2 - a_2)(x_2 - b_2)).This has the same form as in Euclidean signature, but this form is a result of a cancellation between different regions.Here we have two integrals but we expect only a single logarithm.Therefore, it should be possible to do one of the integrals and get a rational multiple of 2 π i.Since the integration contour in x_2 variable goes between a_2 and b_2, then in the t_2 variable it goes between t_2 = t̃_2^- = a_2 - b_2/2 and t_2 = t̃_2^+ = b_2 - a_2/2.The points x_2 = a_2 and x_2 = b_2 have a unique point in the double cover and therefore each corresponds to a unique value of t_2.Then, we have ∫_γ_2 d logt_2 - t_2^+/t_2 - t_2^- = log(t̃_2^+ - t_2^+) (t̃_2^- - t_2^-)/(t̃_2^+ - t_2^-) (t̃_2^- - t_2^+).This cross-ratio is very special, since(t̃_2^+ - t_2^+) (t̃_2^- - t_2^-)/(t̃_2^+ - t_2^-) (t̃_2^- - t_2^+) = (a_2 - c_2 - √((c_2 - a_2)(c_2 - b_2)))(b_2 - c_2 + √((c_2 - a_2)(c_2 - b_2)))/(a_2 - c_2 + √((c_2 - a_2)(c_2 - b_2)))(b_2 - c_2 - √((c_2 - a_2)(c_2 - b_2))) = -1,where we have used the expressions in eq. (<ref>).Then, the integral becomes±π i/√(Δ)∫_γ_1 d logt_1 - t_1^+/t_1 - t_1^-,where the sign depends on the determination of the logarithm.The final answer for the integral resembles the one of the Euclidean signature.The biggest difference is that we need to replace p_E^2 → -p^2 intoΔ_E = (p_E^2)^2 + m_1^4 + m_2^4 + 2 p_E^2 m_1^2 + 2 p_E^2 m_2^2 - 2 m_1^2 m_2^2to obtain the square root in Lorentzian signature.This replacement is the same as the one arising from a Wick rotation. 
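As a sanity check on the Euclidean closed form obtained above (and hence, via p_E^2 → -p^2, on the Lorentzian answer), one can compare it against a direct numerical integration; a sketch, assuming scipy is available:

```python
import numpy as np
from scipy.integrate import dblquad

m1, m2, p = 1.0, 1.5, 2.0   # Euclidean masses and |p|

# direct two-dimensional integral in polar coordinates for the loop momentum
val, _ = dblquad(
    lambda r, th: r / ((r**2 + m1**2)
                       * (r**2 - 2 * p * r * np.cos(th) + p**2 + m2**2)),
    0.0, 2 * np.pi,   # theta
    0.0, np.inf)      # r

S = p**2 + m1**2 + m2**2
Delta = S**2 - 4 * m1**2 * m2**2   # equals the Delta of the Euclidean bubble
closed = np.pi / np.sqrt(Delta) * np.log((S + np.sqrt(Delta)) / (S - np.sqrt(Delta)))
print(val, closed)   # the two numbers agree
```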
§.§ Bubble integral in three dimensions

As an example where the inner “angular” integral is not zero-dimensional, consider the case of a bubble integral in three dimensions. As we will see, this integral yields a simpler answer than the bubble integral in two dimensions. For simplicity we compute this integral in Euclidean signature. We have

∫d^3 q_1/(q_1^2 + m_1^2) (q_2^2 + m_2^2),

which can be written as

∫_0^∞d q_1^2/q_1^2 + m_1^2∫_(p - q_1)^2^(p + q_1)^2d q_2^2/q_2^2 + m_2^2∫d^3 q_1/d q_1^2 ∧ d q_2^2.

The inner integral is over the space of triangles with side lengths p, q_1 and q_2 in a three-dimensional space, where the vector p is fixed. We have

ω_3 = d^3 q_1/d q_1^2 ∧ d q_2^2 = d^3 q_1/-4 (q_1 · d q_1) ∧ (q_2 · d q_1) ∧ (v · d q_1)/v · d q_1 = v · d q_1/-4 ϵ(q_1, q_2, v).

We can choose any vector for v, as long as ϵ(q_1, q_2, v) ≠ 0. The one-form ω_3 can be computed by choosing a special coordinate frame where p = (p_0, 0, 0). Then if we pick v = (0, 0, 1) we find

ω_3 = d (q_1)_2/-4 ((q_1)_0 (q_2)_1 - (q_1)_1 (q_2)_0) = d (q_1)_2/4 (q_1)_1 p_0,

where we have used (q_2)_1 = -(q_1)_1, (q_1)_0 + (q_2)_0 = p_0. Using p_0 = p and

(q_1)_1 = √(q_1^2 - (q_1 · p)^2/p^2)cosθ, (q_1)_2 = √(q_1^2 - (q_1 · p)^2/p^2)sinθ,

we finally have that ω_3 = -1/(4 p) d θ. The integral over θ can be done and we obtain -π/(2 p). We can therefore pull out the prefactor directly, without going through intermediate integrations as for the two-dimensional bubble. We are left with the integral

∫_0^∞d q_1^2/q_1^2 + m_1^2∫_(p - q_1)^2^(p + q_1)^2d q_2^2/q_2^2 + m_2^2.

Notice that this integral is odd under p→ -p. The prefactor also contains a p which is also odd under this transformation, so overall the integral is invariant under p→ -p. The inner integral can be computed straightforwardly, which leaves us with

∫_0^∞d q_1^2/q_1^2 + m_1^2log((p + √(q_1^2))^2 + m_2^2/(p - √(q_1^2))^2 + m_2^2).

We rewrite the integrand so that it has a cut along the integration region,

∫_0^∞d q_1^2/q_1^2 + m_1^2log((p - i √(-q_1^2))^2 + m_2^2/(p + i √(-q_1^2))^2 + m_2^2).

Here the square root has the principal determination √(z) = √(| z|)exp (i arg(z)/2) with arg z∈ (-π, π). This means that along the real axis √(-q_1^2) = i √(q_1^2) and Re√(z)≥ 0 for all complex z. We define

ρ(q_1^2) = log((p - i √(-q_1^2))^2 + m_2^2/(p + i √(-q_1^2))^2 + m_2^2).

In principle we could have chosen the function ρ in several other ways, but this form also has the property that ρ(0) = 0 and lim_q_1^2 →∞ρ(q_1^2) = 0. Then the original integral can be written as

1/2∫_0^∞d q_1^2/q_1^2 + m_1^2Discρ(q_1^2),

where the discontinuity is across the branch cut along the positive real axis, and the factor of one half is due to the fact that computing the discontinuity across the branch cut doubles the value of the function ρ just above the cut. We have

∫_0^∞d q_1^2/q_1^2 + m_1^2ρ(q_1^2) = 1/2lim_ϵ→ 0lim_R →∞∫_γ_1 + γ_2 + γ_3d q_1^2/q_1^2 + m_1^2ρ(q_1^2),

where γ_1 is a contour from R to ϵ slightly displaced from the real axis in the lower half plane, γ_2 is an arc of a circle of radius ϵ and center 0 which continues the path γ_1, and γ_3 is a contour from ϵ to R slightly displaced in the upper half plane. This contour can be completed to a closed contour by adding a circle of radius R and center 0, but paying attention to the logarithmic branch cuts of the function ρ. The resulting contour is sketched in fig.
<ref>.Let us find the logarithmic branch cuts of ρ.The branch cut condition is(p - i √(-q_1^2))^2 + m_2^2/(p + i √(-q_1^2))^2 + m_2^2∈ℝ_-.The branch points in q_1^2 arise when either the numerator or the denominator of the argument of the logarithm vanish.This condition can be rewritten as((p - i √(-q_1^2))^2 + m_2^2) ((p + i √(-q_1^2))^2 + m_2^2) = 0,which can be rewritten as(q_1^2 + m_2^2 - p^2 + 2 i p m_2) (q_1^2 + m_2^2 - p^2 - 2 i p m_2) = 0.Solving this we find the logarithmic branch pointsq_1^2 = p^2 - m_2^2 ± 2 i m_2 p.The equations for the logarithmic branch cuts are more complicated.We have schematically represented them by the vertical lines going to infinity in fig. <ref>.The precise location of the branch cuts is not essential, but we of course require that the integration contours do not cross them.The integral along the small circle γ_2 vanishes in the limit ϵ→ 0.The integral along the pieces of the large circle of radius R also vanishes in the limit R →∞; the term d q_1^2/q_1^2 + m_1^2 contributes a pole at infinity, but its residue vanishes (here we are using the crucial property that lim_q_1^2 →∞ρ(q_1^2) = 0).When q_1^2 = p^2 - m_2^2 + 2 i m_2 p we have √(-q_1^2) = ± (m_2 - i p) but for the principal determination of the square root we should choose √(-q_1^2) = m_2 - i p.In this case, it is the numerator of the argument of the logarithm in ρ that vanishes.Since the circles around the logarithmic branch points turn in the clockwise direction, we have that the discontinuity across the branch cut ending at q_1^2 = p^2 - m_2^2 + 2 i m_2 p is -2 π i.Similarly, the discontinuity across the branch cut ending at q_1^2 = p^2 - m_2^2 - 2 i m_2 p is 2 π i.Therefore, the contribution of the logarithmic branch cuts to the contour integral is-2 π i ∫_p^2 - m_2^2 + 2 i m_2 p^R_+d q_1^2/q_1^2 + m_1^2 + 2 π i ∫_p^2 - m_2^2 - 2 i m_2 p^R_-d q_1^2/q_1^2 + m_1^2,where R_± are some complex numbers whose norm is of order R.They are determined by the intersection of the large circle of radius R and the logarithmic branch cuts.In the limit R →∞ this integral becomes2 π i ∫_p^2 - m_2^2 - 2 i m_2 p^p^2 - m_2^2 + 2 i m_2 pd q_1^2/q_1^2 + m_1^2. Equating the value of the integral along the contour in fig. <ref> with the contribution of the residue at q_1^2 = -m_1^2, we obtain∫_0^∞d q_1^2/q_1^2 + m_1^2ρ(q_1^2) = π i ( log(p - i m_1)^2 + m_2^2/(p + i m_1)^2 + m_2^2 + log(p - i m_2)^2 + m_1^2/(p + i m_2)^2 + m_1^2).A non-trivial check on the computation is that the answer should be symmetric under the exchange m_1 ↔ m_2.This can be simplified to∫_0^∞d q_1^2/q_1^2 + m_1^2ρ(q_1^2) = 2 π i logm_1 + m_2 + i p/m_1 + m_2 - i p,where we assumed m_1 > 0, m_2 > 0 and p > 0.Including the prefactor of -π/2 p we obtain the final answer for the bubble integral in three dimensions in Euclidean signature-π^2 i/plogm_1 + m_2 + i p/m_1 + m_2 - i p.The result is real and positive, but this is not completely manifest.It can be rewritten as2 π^2/parctanp/m_1 + m_2.This integral is not difficult to compute by other methods (see for example ref. <cit.>).See also ref. 
<cit.> for a recent occurrence of the same integral.Curiously, this form of the answer is not invariant under m_i → -m_i, which is a symmetry of the original integral.§ TRIANGLE INTEGRALS §.§ Triangle integral in two dimensionsConsider the triangle integral in two dimensions.This integral is usually computed by reducing it to three bubble integrals.Each of the bubble integrals has a different prefactor so the integral is not “pure” in the language used in the literature.Such integrals can not be computed by the usual means of taking differentials and they are usually reduced to “pure” integrals using a procedure of integral reduction.In this section, we use this example to illustrate how this new method of direct integration copes with this difficulty.We want to compute∫d^2 q_1/(q_1^2 + m_1^2) (q_2^2 + m_2^2) (q_3^2 + m_3^2) = ∫_0^∞d q_1^2/q_1^2 + m_1^2∫_a_2^b_2d q_2^2/q_2^2 + m_2^21/q_3^2 + m_3^2d^2 q_1/d q_1^2 ∧ d q_2^2.The integrand of the second integral becomes1/4 ϵ(q_1, q_2) (q_3^2 + m_3^2).Given q_1^2 and q_2^2, the possible values of q_3^2 are (q_3^2)_±, the roots of a quadratic equation arising from the vanishing of the Gram determinant G(q_1, q_2, q_3) = 0.Let us write this equation as α_3 (q_3^2)^2 + β_3 q_3^2 + γ_3 = 0, where α_3, β_3 and γ_3 are polynomials in p_1^2, p_2^2, p_3^2 and q_1^2, q_2^2.Then we have(q_3^2)_± = -β_3 ±√(β_3^2 - 4 α_3 γ_3)/2 α_3. Geometrically, the two configurations of momenta corresponding to the two values of q_3^2 where q_1^2 and q_2^2 are fixed, are related by a reflection so ϵ(q_1, q_2) changes sign.We therefore have1/ϵ(q_1, q_2) (q_3^2 + m_3^2) = 1/√( G(q_1, q_2))(1/(q_3^2)_+ + m_3^2 - 1/(q_3^2)_- + m_3^2) = = - √(β_3^2 - 4 α_3 γ_3)/√( G(q_1, q_2))1/α_3 m_3^4 - β_3 m_3^2 + γ_3. As before, we have a_2 = inf q_2^2 = (p_3 - q_1)^2 and b_2 = sup q_2^2 = (p_3 + q_1)^2.When q_1 and q_2 are collinear q_2^2 satisfies the equation (q_2^2 - a_2) (q_2^2 - b_2) = 0 which when expanded yields(p_3^2)^2 + (q_1^2)^2 + (q_2^2)^2 - 2 p_3^2 q_1^2 - 2 p_3^2 q_2^2 - 2 q_1^2 q_2^2 = 0.When this equation holds we must have β_3^2 - 4 α_3 γ_3 = 0 since the two configurations for q_3^2 arise via a reflection in the line supporting the momentum p_3.But when the momenta q_1 and q_2 are collinear with p_3, the reflection does not produce a new solution.At the same time, we have4G(q_1, q_2) = (p_3^2)^2 + (q_1^2)^2 + (q_2^2)^2 - 2 p_3^2 q_1^2 - 2 p_3^2 q_2^2 - 2 q_1^2 q_2^2.Hence, when q_2^2 = a_2 or q_2^2 = b_2 both the numerator √(β_3^2 - 4 α_3 γ_3) and the denominator factor √( G(q_1, q_2)) vanish.An explicit computation reveals thatβ_3^2 - 4 α_3 γ_3 = 1/16((p_1^2)^2 + (p_2^2)^2 + (p_3^2)^2 -2 p_1^2 p_2^2 - 2 p_1^2 p_3^2 - 2 p_2^2 p_3^2)((q_1^2)^2 + (q_2^2)^2 + (p_3^2)^2 -2 q_1^2 p_3^2 - 2 q_2^2 p_3^2 - 2 q_1^2 q_2^2).Therefore, we can pull out a factor-1/2√((p_1^2)^2 + (p_2^2)^2 + (p_3^2)^2 -2 p_1^2 p_2^2 - 2 p_1^2 p_3^2 - 2 p_2^2 p_3^2)dependent only on external kinematics and we are left with the integrals∫_0^∞d q_1^2/q_1^2 + m_1^2∫_a_2^b_2d q_2^2/q_2^2 + m_2^21/α_3 m_3^4 - β_3 m_3^2 + γ_3.The integral in q_2^2 can be done by straightforward partial fractioning in q_2^2.If α_3 m_3^4 - β_3 m_3^2 + γ_3 = α_2 (q_2^2)^2 + β_2 q_2^2 + γ_2 and this polynomial in q_2^2 has roots x_±, then∫_a_2^b_2d q_2^2/q_2^2 + m_2^21/α_3 m_3^4 - β_3 m_3^2 + γ_3 = ∫_a_2^b_2d q_2^2/q_2^2 + m_2^21/α_2 (q_2^2 - x_+)(q_2^2 - x_-) =1/α_2 m_2^4 - β_2 m_2^2 + γ_2logb_2 + m_2^2/a_2 + m_2^2 +1/x_+ + m_2^21/√(β_2^2 - 4 α_2 γ_2)logb_2 - x_+/a_2 - x_+ -1/x_- + m_2^21/√(β_2^2 - 4 α_2 γ_2)logb_2 - x_-/a_2 - 
x_-. An explicit calculation yieldsβ_2^2 - 4 α_2 γ_2 = 1/16((p_1^2)^2 + (p_2^2)^2 + (p_3^2)^2 -2 p_1^2 p_2^2 - 2 p_1^2 p_3^2 - 2 p_2^2 p_3^2)(m_2^4 + (p_2^2)^2 + (q_1^2)^2 + 2 m_2^2 p_2^2 + 2 m_2^2 q_1^2 - 2 p_2^2 q_1^2).We also have that1/α_2 m_2^4 - β_2 m_2^2 + γ_2 + 1/x_+ + m_2^21/√(β_2^2 - 4 α_2 γ_2) - 1/x_- + m_2^21/√(β_2^2 - 4 α_2 γ_2) = 0. To do the final integral, we proceed as before.First, notice that when q_1 = 0 or q_1→∞ we have that a_2 and b_2 coincide.This means that the logarithms vanish.In fact, the integrand of the q_1^2 integral has a square root branch point at q_1^2 = 0 and q_1^2 →∞.It looks like the result of the integration in eq. (<ref>) (and therefore the integrand for the q_1^2 integration) also has square root branch points at β_2^2 - 4 α_2 γ_2 = 0.Interestingly, these square root branch points are actually canceled in the combination in eq. (<ref>).Indeed, when taking q_1^2 along a path such that β_2^2 - 4 α_2 γ_2 goes once around the origin, we have x_+↔ x_- and √(β_2^2 - 4 α_2 γ_2)→ -√(β_2^2 - 4 α_2 γ_2).Then the expression in eq. (<ref>) is sent to itself.As before, we have α_2 m_2^4 - β_2 m_2^2 + γ_2 = α_1 (q_1^2)^2 + β_1 q_1^2 + γ_1.This time α_1, β_1 and γ_1 depend only on the external momenta and the masses.The integrand has some pole singularities.Indeed, α_2 m_2^4 - β_2 m_2^2 + γ_2 = α_1 (q_1^2)^2 + β_1 q_1^2 + γ_1 = 0 when q_1^2 = (q_1^2)_±.We haveRes_q_1^2 = (q_1^2)_±1/α_1 (q_1^2)^2 + β_1 q_1^2 + γ_1 = Res_q_1^2 = (q_1^2)_±1/α_1 (q_1^2 - (q_1^2)_+)(q_1^2 - (q_1^2)_-) = ± 1/√(β_1^2 - 4 α_1 γ_1).There is also a pole when x_+ + m_2^2 = 0.Since α_2 (x_+ + m_2^2)(x_- + m_2^2) = α_1 (q_1^2 - (q_1^2)_+)(q_1^2 - (q_1^2)_-) when x_+ + m_2^2 = 0 we have that either q_1^2 = (q_1^2)_+ or q_1^2 = (q_1^2)_-.Let us assume for definiteness that we have q_1^2 = (q_1^2)_+.Then,Res_q_1^2 = (q_1^2)_+1/x_+ + m_2^2 = lim_q_1^2 → (q_1^2)_+(q_1^2 - (q_1^2)_+)(q_1^2 - (q_1)_-)/(x_+ + m_2^2)(x_- + m_2^2)x_- + m_2^2/q_1^2 - (q_1^2)_- = = α_2/α_1x_- - x_+/(q_1^2)_+ - (q_1^2)_- = -√(β_2^2 - 4 α_2 γ_2)/√(β_1^2 - 4 α_1 γ_1),where we have used m_2^2 = -x_+.The final integral to do is over q_1^2 and runs from 0 to ∞.This along a square root branch cut so the integral can be written as one half the integral around the branch cut.This contour can be deformed to a sum of contours around the poles described above and around the logarithmic branch cuts.The integrals around the logarithmic branch cuts can be written as integrals along a contour connecting the two logarithmic branch points where the new integrand is obtained by replacing the logarithm by 2 π i.After this replacement the integral becomes easy to perform, by using identities such as eq. (<ref>).We will not go through all the trivial (but tedious) steps in detail (see sec. <ref> for more details on the method), except to describe the determination of the locations of logarithmic branch points.From eq. 
(<ref>) we have that the first logarithmic branch points appear for values of q_1^2 where a_2 = -m_2^2 and b_2 = -m_2^2.Using a_2 = (p_2 - q_1)^2 and b_2 = (p_2 + q_1)^2 we find logarithmic branch points at q_1^2 = (p_3± i m_2)^2.From the second and third terms we have x_± = b_2 which implies that α_2 b_2^2 + β_2 b_2 + γ_2 = 0.When written as an equation in q_1 this equation is of degree four, so one might worry that we need to deal with roots of order four.Fortunately, it turns out that this degree four equation is very special and its roots can be written using a single square rootq_1 = p_1^2 - p_2^2 - p_3^2 ±√(D)/2 p_3,whereD = (p_1^2)^2 + (p_2^2)^2 + (p_3^2)^2 - 2 p_1^2 p_2^2 - 2 p_1^2 p_3^2 - 2 p_2^2 p_3^2 - 4 m_3^2 p_3^2.Each one of these roots appears with multiplicity two.A similar analysis applies for a_2.In this section we have demonstrated an algorithm for computing a reducible Feynman integral with multiple prefactors, without performing an integral reduction.Integral reductions (see ref. <cit.>) are often resource-intensive and require a non-canonical and often symmetry-breaking choice of basis.One interesting fact we can notice is the occurrence of higher order equations, but so far such that their roots only require quadratic field extensions.It is plausible that by this method of integration no unnecessary field extensions will be required.When computing a particular integral in ref. <cit.> using(see ref. <cit.>) we encountered field extensions of degree 16 while the final answer was completely rational.Higher order equations are expected to occur (see refs. <cit.>) in general for more complicated integrals. §.§ Triangle integral in three dimensions It is instructive to consider the triangle integral in three dimensions.[Often the integrals are studied in Feynman parameter space, not in momentum space as we do here.In Feynman parameter space the integrals in odd dimensions may contain square roots, which complicates their analysis.One of the advantages of the momentum space approach is that it can be done in even or odd dimensions with no differences.]In this case we expect the answer to contain one logarithm, so we should be able to compute two integrals as rational multiples of 2 π i.For the triangle integral we have three external momenta p_1, p_2, p_3 with p_1 + p_2 + p_3 = 0 and three internal momenta q_1, q_2, q_3 such that p_1 = q_3 - q_2, p_2 = q_1 - q_3 and p_3 = q_2 - q_1 (see fig. <ref>).We do the integral in Euclidean signature.The integral reads∫d^3 q_1/(q_1^2 + m_1^2) (q_2^2 + m_2^2) (q_3^2 + m_3^2).Putting the integral in the Cutkosky form we find∫_a_1^b_1d q_1^2/q_1^2 + m_1^2∫_a_2^b_2d q_2^2/q_2^2 + m_2^2∫_a_3^b_3d q_3^2/q_3^2 + m_3^2(d^3 q_1/d q_1^2 ∧ d q_2^2 ∧ d q_3^2). We will determine a_1, a_2, a_3 and b_1, b_2, b_3 in the following, but first we computed^3 q_1/d q_1^2 ∧ d q_2^2 ∧ d q_3^2.Using momentum conservation we find q_2 = q_1 + p_3 and q_3 = q_1 + p_1 + p_3 which implies that d q_2^2 = 2 q_2 · d q_1 and d q_3^2 = 2 q_3 · d q_1 since the external momenta are taken to be constant.Next, we find d q_1^2 ∧ d q_2^2 ∧ d q_3^2 = 8 ϵ(q_1, q_2, q_3) d^3 q_1.We haveϵ(q_1, q_2, q_3)^2 = [ q_1^2 q_1 · q_2 q_1 · q_3; q_1 · q_2 q_2^2 q_2 · q_3; q_1 · q_3 q_2 · q_3 q_3^2 ].Therefore, we haved^3 q_1/d q_1^2 ∧ d q_2^2 ∧ d q_3^2 = 1/8 ϵ(q_1, q_2, q_3) = 1/8 √( (q_i · q_j)_1 ≤ i, j ≤ 3). To compute the boundary values, a_i and b_i we proceed as follows.First, we have a_1 = 0 and b_1 = ∞.Next, have the same problem as for the bubble case in sec. 
<ref>.That is, given the fixed value of q_1^2 and p_3 = q_2 - q_1, find the extremal values of q_2^2.Finally, for the extremal values of q_3^2 there are several possibilities.First, there is a constraint arising from q_3^2 = (q_1 - p_2)^2, or from the bubble with momenta q_1, q_3.Second, there is a constraint arising from the bubble with momenta q_2, q_3.Taken together, these imply the triangle Landau equations.Thus, we see the hierarchical principle of ref. <cit.> arise in a quite concrete way.What is not so clear are singularities in the bubble with momenta q_2, q_3, for example.Let us denote α_3 (q_3^2)^2 + β_3 q_3^2 + γ_3 =(q_i · q_j)_1 ≤ i, j ≤ 3, where α_3, β_3 and γ_3 are complicated polynomials which we will not need to spell out.We define c_3 = β_3/α_3, d_3 = γ_3/α_3 and x_3 = q_3^2, y_3^2 = x_3^2 + c_3 x_3 + d_3.This curve can be parametrized byt_3 = x_3 + c_3/2 - y_3, Δ_3/4 t_3 = x_3 + c_3/2 + y_3,where Δ_3 = c_3^2 - 4 d_3.This implies thatω_3 = d q_3^2/(q_3^2 + m_3^2) √(α_3 (q_3^2)^2 + β_3 q_3^2 + γ_3) = -1/√(α_3 (m_3^2)^2 - β_3 m_3^2 + γ_3) d logt_3 - t_3^+/t_3 - t_3^-,fort_3^± = c_3/2 - m_3^2 ±√((m_3^2)^2 - c_3 m_3^2 + d_3).The boundaries of the integral in x_3 are for values of t_3 where y_3 vanishes.This means that t_3^2 = Δ_3/4 therefore t_3, i = -1/2√(Δ_3) and t_3, f = 1/2√(Δ_3).Computing the definite integral involves computing a cross-ratio(t_3, f - t_3^+) (t_3, i - t_3^-)/(t_3, f - t_3^-) (t_3, i - t_3^+) = -1,which follows from(t_3, f - t_3^+) (t_3, i - t_3^-) = -Δ_3/4 - √(Δ_3)/2 (t_3^- - t_3^+) + Δ_3/4,and similarly for the denominator.This integral therefore will produce a constant transcendental factor of π.The final answer is∫_a_3^b_3d q_3^2/(q_3^2 + m_3^2) √(α_3 (q_3^2)^2 + β_3 q_3^2 + γ_3) = π/√(-α_3 m_3^4 + β_3 m_3^2 - γ_3).This integral is positive if the quantity under the square root in the integrand is positive for q_3^2 ∈ (a_3, b_3) and -m_3^2 ∉(a_3, b_3) (this is certainly the case if m_3^2 > 0 since a_3, b_3 ≥ 0).Then the quantity under the square root in the answer is positive.Now we want to do the second integral,∫_a_2^b_2d q_2^2/q_2^2 + m_2^2π/√(-α_3 m_3^4 + β_3 m_3^2 - γ_3).The quantity under the square root is minus the Gram determinant of q_1, q_2, q_3, evaluated at the “Euclidean on-shell” condition q_3^2 = -m_3^2.We now have -α_3 m_3^4 + β_3 m_3^2 - γ_3 = α_2 (q_2^2)^2 + β_2 q_2^2 + γ_2.We haveα_2 (q_2^2)^2 + β_2 q_2^2 + γ_2 = α_2 (q_2^2 + β_2/2 α_2)^2 - Δ_2/4 α_2,whereΔ_2 = β^2 - 4 α_2 γ_2 = 1/16 (m_3^4 + 2 m_3^2 p_2^2 + 2 m_3^2 q_1^2 + (p_2^2)^2 - 2 p_2^2 q_1^2 + (q_1^2)^2) ((p_1^2)^2 - 2 p_1^2 p_2^2 + (p_2^2)^2 - 2 p_1^2 p_3^2 - 2 p_2^2 p_3^2 + (p_3^2)^2).The first term can be rewritten as(q_1^2 - p_2^2 + m_3^2)^2 + 4 m_3^2 p_2^2 > 0.The second term, up to a prefactor, is the Gram determinant of p_1 and p_2.Indeed,[ p_1^2 p_1 · p_2; p_1 · p_2 p_2^2 ] = -1/4 ((p_1^2)^2 - 2 p_1^2 p_2^2 + (p_2^2)^2 - 2 p_1^2 p_3^2 - 2 p_2^2 p_3^2 + (p_3^2)^2).This Gram determinant, representing a volume in Euclidean space, is positive hence the second term in the factorization of Δ_2 is negative.Therefore, Δ_2 < 0 and the roots of α_2 (q_2^2)^2 + β_2 q_2^2 + γ_2 = 0 are complex.In particular, α_2 (q_2^2)^2 + β_2 q_2^2 + γ_2 > 0 in the region of integration.If P is a quadratic polynomial, the integral can be rewritten as (see eq. 
(<ref>))∫_a^b d x/x + c1/√(P(x)) = 1/√(P(-c)){ - log (a + c - √(p(a)) + √(p(-c))) + log (-a - c + √(p(a)) + √(p(-c))) + log (b + c - √(p(b)) + √(p(-c))) - log (-b - c + √(p(b)) + √(p(-c))) },with p(x) = P(x) / lc(P) and lc(P) is the leading coefficient of the polynomial P.Using this formula we can do the q_2^2 integral, by substituting a → a_2, b → b_2, x → q_2^2, c → m_2^2, p → P_2 with P_2(x) = α_2 x^2 + β_2 x + γ_2 and p_2(x) = P_2(x)/α_2.The q_2^2 integral yields an expression which contains √(p_2(a_2)) and √(p_2(b_2)) where P_2(x) = α_2 x^2 + β_2 x + γ_2, a_2 = (q_1 - p_3)^2 and b_2 = (q_1 + p_3)^2.These square roots may produce square branch points at the solutions of p_2(b_2) = 0 and p_2(a_2) = 0 in the variable q_1.These are fourth order equations in q_1.However, the polynomials p_2(b_2) and p_2(a_2) are very special and in fact they are perfect squares!Indeed, we haveP_2(a_2) =1/4(m_3^2 p_3 + p_2^2 p_3 + (p_1^2 - p_2^2 - p_3^2) q_1 + p_3 q_1^2)^2 = 1/64 p_3^2× (√(-4 m_3^2 p_3^2 +(p_1^2)^2-2 p_1^2 p_2^2 -2 p_1^2 p_3^2 + (p_2^2)^2-2 p_2^2 p_3^2 + (p_3^2)^2) - p_1^2 + p_2^2 + p_3^2 - 2 p_3q_1)^2 (√(-4 m_3^2 p_3^2 + (p_1^2)^2-2 p_1^2 p_2^2 -2 p_1^2 p_3^2 + (p_2^2)^2-2 p_2^2 p_3^2 + (p_3^2)^2) + p_1^2 - p_2^2 - p_3^2 + 2 p_3q_1)^2. P_2(b_2) = 1/4(m_3^2 p_3 + p_2^2 p_3 - (p_1^2 - p_2^2 - p_3^2) q_1 + p_3 q_1^2)^2= 1/64 p_3^2× (√(-4 m_3^2 p_3^2 +(p_1^2)^2-2 p_1^2 p_2^2 -2 p_1^2 p_3^2 + (p_2^2)^2-2 p_2^2 p_3^2 + (p_3^2)^2) - p_1^2 + p_2^2 + p_3^2 + 2 p_3q_1)^2 (√(-4 m_3^2 p_3^2 + (p_1^2)^2-2 p_1^2 p_2^2 -2 p_1^2 p_3^2 + (p_2^2)^2-2 p_2^2 p_3^2 + (p_3^2)^2) + p_1^2 - p_2^2 - p_3^2 - 2 p_3q_1)^2.It follows that the branch points due to √(p_2(a_2)) and √(p_2(b_2)) are in fact not there at all and the square roots can be extracted.The final integral over q_1^2 can be written as an integral along the branch cut in √(-q_1^2) going from zero to infinity along the positive real axis.As described above, there are several contributions: the contributions from integrating along the logarithmic branch cuts and the contribution from the pole at q_1^2 = -m_1^2.The logarithmic branch points are atb_2 + m_3^2 = 0, q_1 = - p_3± i m_3, a_2 + m_3^2 = 0, q_1 = p_3± i m_3,β_2^2 - 4 α_2 γ_2 = 0, q_1 = ± (m_3 + i p_2), q_1 = ± (m_3 - i p_2).Not all these branch points are on the main sheet we restricted to.Indeed, q_1 = √(q_1^2) and on the main sheet √(z) > 0, so we should pick the solutions q_1 = p_3± i m_3 and q_1 = m_3 ± i p_2 and these become the boundaries of integrals along the logarithmic branch cut.The rest of the calculation is routine and we will not reproduce it.In ref. <cit.> this integral is computed by a clever application of partial fractioning followed by contour integration.The point of the method described here is that no clever partial fractioning is required but instead one can follow a simple algorithm. § THE TWO-LOOP SUNRISE INTEGRALThis integral is famously elliptic, see refs. <cit.>.We will see the elliptic curve and the holomorphic differential appear explicitly.We will do the integrals in Euclidean signature.We start with the integral (see fig. 
<ref>)I = ∫d^2 q_1 d^2 q_2/(q_1^2 + m_1^2) (q_2^2 + m_2^2) (q_3^2 + m_3^2).Using Cutkosky's change of variables we haveI = ∫_0^∞d q_1^2/q_1^2 + m_1^2∫_0^∞d q_2^2/q_2^2 + m_2^2∫_(q_3^2)_min^(q_3^2)_maxd q_3^2/q_3^2 + m_3^2∫_γd^2 q_1 d^2 q_2/d q_1^2 ∧ d q_2^2 ∧ d q_3^2,where p is the external momentum and p = q_1 + q_2 + q_3, where q_1, q_2 and q_3 are the momenta through the internal lines of the sunrise diagram.The values (q_3^2)_min and (q_3^2)_max are obtained by computing the stationary points of q_3^2 subject to the constraints that p = q_1 + q_2 + q_3 and q_1^2 and q_2^2 take fixed values.The contour γ is a real cycle on an elliptic curve which we will describe in more detail in the following.Indeed, it is clear that q_1^2 and q_2^2 can take any values, but the values of q_3^2 are constrained.For clarity, let us denote the fixed values for q_1^2 and q_2^2 by q_1^2 = z_1 and q_2^2 = z_2.Then, we look for the stationary points of q_3^2 = (p - q_1 - q_2)^2 subject to the constraints q_1^2 = z_1 and q_2^2 = z_2 where z_1, z_2 are some fixed values.We define a functionF(q_1, q_2, λ_1, λ_2) = (p - q_1 - q_2)^2 - λ_1 (q_1^2 - z_1) - λ_2 (q_2^2 - z_2),where λ_1 and λ_2 are Lagrange multipliers. The stationary point conditions read∂/∂ q_1((p - q_1 - q_2)^2 - λ_1 (q_1^2 - z_1) - λ_2 (q_2^2 - z_2)) = 0,∂/∂ q_2((p - q_1 - q_2)^2 - λ_1 (q_1^2 - z_1) - λ_2 (q_2^2 - z_2)) = 0,while the derivatives with respect to the Lagrange multipliers reproduce the constraints.Taking the derivatives we findp - q_1 - q_2 + λ_1 q_1 = 0, p - q_1 - q_2 + λ_2 q_2 = 0.In particular, this implies that λ_1 q_1 = λ_2 q_2.By squaring we have λ_1^2 z_1 = λ_2^2 z_2 so λ_2 = ±λ_1 √(z_1/z_2).Let us first assume that λ_1, λ_2 are non-vanishing.The system of equations above has a solutionq_1 = λ_2/λ_1 + λ_2 - λ_1 λ_2 p,q_2 = λ_1/λ_1 + λ_2 - λ_1 λ_2 p.By squaring the first equation we find(1 + λ_2/λ_1 - λ_2)^2 = p^2/z_2.Using λ_2/λ_1 = ±_1 √(z_1/z_2) we findλ_1 = 1 ±_1 √(z_2/z_1)±_3 √(p^2/z_1), λ_2 = 1 ±_1 √(z_1/z_2)±_2 √(p^2/z_2),with (±_1) (±_2) (±_3) = 1.Then, we haveq_1 = ∓_3 √(z_1/p^2) p,q_2 = ∓_2 √(z_2/p^2) p, q_3 = (1 ±_3 √(z_1/p^2)±_2 √(z_2/p^2)) p,henceq_3^2 = (p±_3 √(z_1)±_2 √(z_2))^2. In conclusion, we find four different critical points for q_3^2.Let us now study their nature in more detail.In order to decide the nature of the critical points, we compute the bordered Hessian matrixH(F) = [ 0 0∂^2 F/∂λ_1 ∂ q_1∂^2 F/∂λ_1 ∂ q_2; 0 0∂^2 F/∂λ_2 ∂ q_1∂^2 F/∂λ_2 ∂ q_2;∂^2 F/∂ q_1 ∂λ_1∂^2 F/∂ q_1 ∂λ_2 ∂^2 F/∂ q_1 ∂ q_1 ∂^2 F/∂ q_1 ∂ q_2;∂^2 F/∂ q_2 ∂λ_1∂^2 F/∂ q_2 ∂λ_2 ∂^2 F/∂ q_2 ∂ q_1 ∂^2 F/∂ q_2 ∂ q_2 ].In our example we haveH(F) = [00 -2 q_10;000 -2 q_2; -2 q_10 2 (1 - λ_1)1_2 21_2;0 -2 q_2 21_2 2 (1 - λ_2)1_2 ]. We have four variables (the components of q_1 and q_2) and two constraints.Therefore, we need to look at the signs of the 5 × 5 and 6 × 6 principal minors.Computing these minors we findH_5(F) = ∓_1 (±_2 p + √(z_2)) √(z_1) z_2, H_6(F) = p√(z_1)√(z_2)(±_1 p±_2 √(z_1)±_3 √(z_2)).The conditions for a minimum are(-1)^2 H_5(F) > 0,(-1)^2 H_6(F) > 0,where 2 is the number of constraints.The conditions for a maximum are(-1)^3 H_5(F) > 0,(-1)^4 H_6(F) > 0. 
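Before working through the sign cases, the stationary values q_3^2 = (p±√(z_1)±√(z_2))^2 can be confirmed by a brute-force scan over the two angles of q_1 and q_2 at fixed norms; a sketch of our own, assuming numpy:

```python
import numpy as np

z1, z2, pn = 0.7, 1.3, 2.9   # fixed q1^2, q2^2 and |p|, with |p| > sqrt(z1)+sqrt(z2)
phi1, phi2 = np.meshgrid(np.linspace(0, 2 * np.pi, 1201),
                         np.linspace(0, 2 * np.pi, 1201))
# q3 = p - q1 - q2 with p = (|p|, 0) and q1, q2 on circles of radii sqrt(z1), sqrt(z2)
q3sq = ((pn - np.sqrt(z1) * np.cos(phi1) - np.sqrt(z2) * np.cos(phi2)) ** 2
        + (np.sqrt(z1) * np.sin(phi1) + np.sqrt(z2) * np.sin(phi2)) ** 2)

print(q3sq.max(), (pn + np.sqrt(z1) + np.sqrt(z2)) ** 2)   # maximum
print(q3sq.min(), (pn - np.sqrt(z1) - np.sqrt(z2)) ** 2)   # minimum in this regime
```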
Let us consider the conditions for the minimum:∓_1 (±_2 p + √(z_2)) > 0,±_1 p±_2 √(z_1)±_3 √(z_2) > 0.There are four cases * +_1+_2+_3:in this case the minimum conditions become p + √(z_2) < 0 and p + √(z_1) + √(z_2) > 0.The first condition never holds since p≥ 0 and √(z_2)≥ 0.* +_1-_2-_3: in this case we have p > √(z_2) and p > √(z_1) + √(z_2).* -_1+_2-_3: in this case we have p + √(z_2) > 0 and √(z_1) > p + √(z_2).* -_1-_2+_3: in this case we have √(z_2) > p and √(z_2) > p + √(z_1). Let us now assume that λ_1 = 0 or λ_2 = 0.Since λ_1 q_1 = λ_2 q_2, then if λ_1 = 0, either λ_2 = 0 or q_2 = 0.We will not study the region q_2 = 0 anymore since in this case it does not yield a contribution to the integral.This conclusion should be re-evaluated when studying the case m_2 = 0 or in general when the integrals are divergent.We conclude that when λ_1 = λ_2 = 0 the two analogs of the Landau loop equations become a single equation p = q_1 + q_2.This is consistent with the constraints q_1^2 = z_1 and q_2^2 = z_2 if the triangle inequalities are satisfied z_1 + z_2 > p, p + z_1 > z_2 and p + z_2 > z_1.In conclusion, we have(q_3^2)_min =(p - √(z_1) - √(z_2))^2,p > √(z_1) + √(z_2), (p - √(z_1) + √(z_2))^2,√(z_1) - √(z_2) > p, (p + √(z_1) - √(z_2))^2,√(z_2) - √(z_1) > p, 0,otherwise. For the maximum, we have the following four cases: * +_1+_2+_3: p + √(z_2) > 0 and p + √(z_1) + √(z_2) > 0.These conditions are always satisfied.* +_1-_2-_3: p < √(z_2) and p > √(z_1) + √(z_2).These conditions are incompatible.* -_1+_2-_3: p + √(z_2) < 0 and √(z_1) > p + √(z_2).These conditions are incompatible.* -_1-_2+_3: √(z_2) < p and √(z_2) > p + √(z_1).These conditions are incompatible.It follows that(q_3^2)_max = (p + √(z_1) + √(z_2))^2.This is actually pretty obvious geometrically.The innermost integral contains a one-formd^2 q_1 d^2 q_2/d q_1^2 ∧ d q_2^2 ∧ d q_3^2.We can compute this ratio as follows.We have q_1 + q_2 + q_3 = p and we take p to be constant (or d p = 0).Then we haveω = d^2 q_1 d^2 q_2/(2 q_1 · d q_1) ∧ (2 q_2 · d q_2) ∧ (2 q_3 · d q_3) =d^2 q_1 d^2 q_2/-8 ((q_1 · d q_1) ∧ (q_2 · d q_2) ∧ (q_3 · d q_2) + (q_1 · d q_1) ∧ (q_2 · d q_2) ∧ (q_3 · d q_2)) =v · d q_1/8 ϵ(q_1, q_3) ϵ(v, q_1)where we have used d q_3 = -d q_1 - d q_2.Then, taking v = p and usingϵ(p, q_1) ϵ(q_2, q_3) = [ p · q_2 p · q_3; q_1 · q_2 q_1 · q_3 ] =[ 1/2 (q_1^2 + q_3^2 - x^2 - y^2) 1/2 (y^2 - p^2 - q_3^2); 1/2 (y^2 - q_1^2 - q_2^2) 1/2 (p^2 + q_2^2 - x^2 - y^2) ],where x^2 = (p - q_1)^2 = (q_2 + q_3)^2 and y^2 = (p - q_3)^2 = (q_1 + q_2)^2.The variables x^2 and y^2 are the lengths squared of the diagonals of the quadrilateral whose sides are the vectors q_1, q_2, q_3 and -p.Since the quadrilateral is in a plane, we have that the volume of the simplex it generates vanishes.In other words, the following Gram determinant should vanish (see also ref. <cit.>)G(p, x, q_3) = [ p^2 p · x p · q_3; p · x x^2 x · q_3; p · q_3 x · q_3 q_3^2 ] = P(u, v),where u = x^2, v = y^2,P(u, v) = u^2 v + u v^2 + 2 d_1 1 u v + d_1 0 u + d_0 1 v + d_0 0,withd_1 1 = -1/2 (p^2 + q_1^2 + q_2^2 + q_3^2), d_1 0 = (p^2 - q_3^2) (q_1^2 - q_2^2), d_0 1 = (p^2 - q_1^2) (q_3^2 - q_2^2), d_0 0 = (p^2 - q_1^2 + q_2^2 - q_3^2) (p^2 q_2^2 - q_1^2 q_3^2).The polynomial P can be made homogeneous of degree three so it describes an elliptic curve embedded in 𝐏^2 with homogeneous coordinates (u : v : w).However, an alternative compactification, described in sec. 
<ref>, is more natural. Using these results we have that ω = d u/(4 ∂_v P) = -d v/(4 ∂_u P). This one-form can be obtained from the two-form (u d v d w - v d u d w + w d u d v)/P by taking a residue at P = 0. We can eliminate the variable v by solving the quadratic equation P = 0 in v to obtain v_± = (-(u^2 + 2 d_11 u + d_01) ± √(Δ))/(2 u). Using this in ∂_v P we find ∂_v P = ±√(Δ), where Δ = (u - (p + q_1)^2)(u - (p - q_1)^2)(u - (q_2 + q_3)^2)(u - (q_2 - q_3)^2). Taking into account the triangle inequality, the domain for u = x^2 is as follows. We have (p - q_1)^2 ≤ x^2 and x^2 ≤ (p + q_1)^2. Similarly, we have (q_2 - q_3)^2 ≤ x^2 and x^2 ≤ (q_2 + q_3)^2. Therefore, we have max((p - q_1)^2, (q_2 - q_3)^2) ≤ u ≤ min((p + q_1)^2, (q_2 + q_3)^2). In other words, if we sort the roots of Δ, the integral in u is over the middle two roots. In this interval Δ ≥ 0. Similarly, we have ∂_u P = ±√(Δ'), where Δ' = (v - (q_1 + q_2)^2)(v - (q_1 - q_2)^2)(v - (p + q_3)^2)(v - (p - q_3)^2). As before, taking into account the triangle inequality, the domain for v = y^2 is max((p - q_3)^2, (q_1 - q_2)^2) ≤ v ≤ min((p + q_3)^2, (q_1 + q_2)^2). We know that (p - q_3)^2 < (p + q_3)^2, (q_1 - q_2)^2 < (q_1 + q_2)^2, so the possible orderings of these four roots are obtained by shuffling the two sets of ordered roots. In total there are six possibilities. The interplay between these conditions and the boundary conditions for integrating over q_3^2 yields a large number of regions. To make progress, we follow a slightly different route than Cutkosky's. Instead of integrating over q_1^2 and q_2^2 first, we integrate over q_1^2 and the diagonal v = y^2 (see fig. <ref>). The triangle inequalities now imply that q_2^2 ∈ [(y - q_1)^2, (y + q_1)^2], q_3^2 ∈ [(y - p)^2, (y + p)^2]. Then the integral becomes ∫_0^∞ d q_1^2/(q_1^2 + m_1^2) ∫_0^∞ d v ∫_{(√(v) - q_1)^2}^{(√(v) + q_1)^2} d q_2^2/(q_2^2 + m_2^2) 1/√((v - (q_1 - q_2)^2)(v - (q_1 + q_2)^2)) ∫_{(√(v) - p)^2}^{(√(v) + p)^2} d q_3^2/(q_3^2 + m_3^2) 1/√((v - (p - q_3)^2)(v - (p + q_3)^2)). Next, a short calculation reveals that (v - (p - q_3)^2)(v - (p + q_3)^2) = (q_3^2 - (p - √(v))^2)(q_3^2 - (p + √(v))^2). Then, the integrals over q_2^2 and q_3^2 can be done as in eq. (<ref>) and have the effect of introducing a factor of π each and replacing q_2^2 → -m_2^2 and q_3^2 → -m_3^2. After doing these integrals, we find π^2 ∫_0^∞ d q_1^2/(q_1^2 + m_1^2) ∫_0^∞ d v/√((v - (p - i m_3)^2)(v - (p + i m_3)^2)(v - (q_1 - i m_2)^2)(v - (q_1 + i m_2)^2)). Note that the quantity under the square root is always positive in the integration domain. Note also that when the “angular” integral was the innermost integral it was a complete elliptic integral; once we pulled it through the q_2^2 and q_3^2 integrals it became an incomplete elliptic integral (see sec. <ref> for a discussion of such integrals). This result can be rewritten in several ways, but the number of integrals can only be reduced at the cost of introducing transcendental functions, such as logarithms. For example, we can also do the integral over q_1^2, which will produce a logarithm and will replace q_1 → i m_1 in the quartic under the square root. Similar results can be more quickly obtained by using Feynman parametrization. The quartic in v can be symmetrically reduced as in eq. (<ref>). At the Euclidean pseudo-threshold p = i (m_1 + m_2 - m_3) the integration can be done explicitly in terms of dilogarithms as shown in ref. <cit.>. Hyperelliptic integrals can also occur, see ref.
<cit.>. In that case a similar analysis applies and the angular integrals should yield a distinguished holomorphic form on the hyperelliptic curve along with a distinguished cycle. The parametrization of the integral in terms of momenta squared may open up new possibilities for regularization. The “angular” integral, being along a compact real cycle and not meeting any singularities, does not itself produce divergences; however, this inner integral is the only one affected by dimensional regularization (see the discussion in sec. <ref>). Divergences arise from integrals along the variables q_e^2. It is therefore more rational to regularize them instead, since they are the ones producing the divergences. One idea that immediately comes to mind is hard cut-offs (IR and/or UV) in q_e^2 in Euclidean signature. This will undoubtedly make the integrals more complicated, but possibly not much more than dimensional regularization. We should point out that this type of regularization is better than the usual textbook cutoff regularization, which depends on the choice of loop momentum. However, the mathematical literature has other types of regularizations which have been applied to polylogarithms and multiple zeta values (see ref. <cit.> for a longer discussion). Some of these regularizations, such as tangential basepoint regularization, have already been used in refs. <cit.>. The regularizations used in the mathematical literature have been designed to preserve various identities satisfied by quantities which did not require regularization. Similarly, in physics, we want to preserve various properties satisfied by physical quantities, which are often broken by traditional regularization choices.
§ CONFIGURATIONS OF QUADRILATERALS AS ELLIPTIC CURVES
It will prove convenient to compactify and complexify the integration domain. This is actually necessary for applying mathematical theorems such as those of refs. <cit.>; see also refs. <cit.> for reviews. The compactification is essential if we want to study second type or mixed second type singularities. For the variables q_e^2 an obvious choice of compactification is 𝐑𝐏^1 with complexification 𝐂𝐏^1 (see ref. <cit.>, where the same compactification is used). It may happen that, after complexification, the “angular” variables in Cutkosky's terminology (see sec. <ref>) do not parametrize a compact space. Sometimes, as in the case of the bubble integral in three dimensions (see sec. <ref>), a compactification can be performed at the cost of introducing a pole for the innermost differential form. We should note that this rather natural (partial) compactification does not seem to have been discussed in the physics literature before. The type of compactifications that have been considered (see ref. <cit.>) involves representing the complexified compactified Minkowski space as a quadric in 𝐏^5, a compactification familiar to twistor theorists. Here, instead, we are proposing to use a compactification to a product of 𝐏^1, times an ad hoc compactification for the “angular” variety. Curiously, an embedding in a product of 𝐏^1 was discussed in a different context in ref. <cit.>, but there the interpretation of the coordinate on 𝐏^1 was different from here. In the compactification the integration path q_e^2 ∈ (-∞, ∞) is closed since we have a single point at infinity. Indeed, on the complex projective line 𝐂𝐏^1 or the Riemann sphere we can choose coordinates so that the origin is at the North pole and the infinity is at the South pole. Then the integration contour (-∞, ∞) is along a meridian.
The interpretation of quadrilateral configurations as points of an elliptic curve was described in ref. <cit.>. As explained in this reference, there are two moduli spaces of quadrilaterals: oriented and unoriented. For our purposes the unoriented moduli space is relevant. The oriented moduli space of quadrilaterals makes sense over the real numbers only. In order to obtain an elliptic curve, we need to compactify the space, which involves taking the lengths of the sides of the quadrilateral to be valued in 𝐏^1. The dual space parametrization used here goes back to the initial studies of Landau singularities in refs. <cit.>. This representation is really useful since some non-obvious algebraic properties have simple geometric causes. An equation for the elliptic curve can be obtained by setting to zero the volume of a (degenerate) tetrahedron in fig. <ref>. This yields an equation in u = x^2 and v = y^2 of bi-degree (2, 2) and is, coincidentally, also naturally embedded in 𝐏^1 × 𝐏^1.
§ A PEEK AT INTEGRALS IN DIMENSIONAL REGULARIZATION
In dimensional regularization the Cutkosky representation is usually called the Baikov representation (see ref. <cit.> for the original paper and ref. <cit.> for an introduction). This has a loop-by-loop version worked out in ref. <cit.>. Let us now do a sample computation in dimensional regularization. We will first attempt a simple integral, the massless bubble in dimension d = 4 - 2ϵ. Our computation will not use the Wick rotation, which is less natural for massless particles since it clashes with the on-shell conditions. The computation is more complicated than the textbook computation using Feynman parameters and Wick rotation. We have ∫ d^d q_1/(q_1^2 q_2^2) = ∫_{-∞}^∞ d q_1^2/q_1^2 (∫_{-∞}^{a_2} + ∫_{b_2}^∞) d q_2^2/q_2^2 ∫ d^d q_1/(d q_1^2 ∧ d q_2^2). As before, we have a_2 = (p - q_1)^2 and b_2 = (p + q_1)^2. Recall that p = √(p^2) and if p^2 < 0, then p ∈ i ℝ. In this case, when the boundaries of integration are not real, the inner integral is over the full real line, so we replace (∫_{-∞}^{a_2} + ∫_{b_2}^∞) d q_2^2/q_2^2 → ∫_{-∞}^∞ d q_2^2/q_2^2 above. In the following we will do the computation in the region p^2 < 0, sometimes called the Euclidean region. We have d^d q_1/(d q_1^2 ∧ d q_2^2) = -d^d q_1/(4 (q_1 · d q_1) ∧ (q_2 · d q_1)) = -1/4 (d q_1^2 ∧ ⋯ ∧ d q_1^{d-1})/(p^0 q_1^1 - p^1 q_1^0). Next, we use the fact that (p^0 q_1^1 - p^1 q_1^0)^2 = (p^0 q_1^0 - p^1 q_1^1)^2 - ((p^0)^2 - (p^1)^2)((q_1^0)^2 - (q_1^1)^2). We pick p to have components only along directions 0 and 1, so p^2 = (p^0)^2 - (p^1)^2. Then, denoting q_1^2 = z_1 and q_2^2 = z_2, we have (q_1^0)^2 - (q_1^1)^2 = z_1 + (q_1^2)^2 + ⋯ + (q_1^{d-1})^2, (p^0 - q_1^0)^2 - (p^1 - q_1^1)^2 = z_2 + (q_1^2)^2 + ⋯ + (q_1^{d-1})^2. Subtracting these equalities we find p^2 - 2 (p^0 q_1^0 - p^1 q_1^1) = z_2 - z_1. Using this we find (p^0 q_1^1 - p^1 q_1^0)^2 = 1/4 (z_2 - z_1 - p^2)^2 - p^2 (z_1 + ρ^2), where we used ρ^2 = (q_1^2)^2 + ⋯ + (q_1^{d-1})^2. In conclusion, we have d^d q_1/(d q_1^2 ∧ d q_2^2) = ± Ω_{d-3} ρ^{d-3} d ρ/√(1/4 (z_1^2 + z_2^2 + (p^2)^2 - 2 z_1 p^2 - 2 z_2 p^2 - 2 z_1 z_2) - p^2 ρ^2), where Ω_{d-3} is the measure on the unit (d-3)-dimensional sphere. In eq. (<ref>) the right-hand side is positive since the left-hand side is a square of a real quantity. Hence, the quantity under square root in eq.
(<ref>) is also positive and taking the square root poses no problem. In general we have Ω_d = 2 π^{(d+1)/2}/Γ((d+1)/2), so the angular integral is easy while the radial integral can be done in terms of an Euler beta function. After integration (for ρ ∈ [0, ∞)) we find ± 2 π^{(d-2)/2}/Γ((d-2)/2) · Γ((d-2)/2) Γ((3-d)/2)/(2 √(π)) (-p^2)^{-(d-2)/2} (1/4 (z_1^2 + z_2^2 + (p^2)^2 - 2 z_1 p^2 - 2 z_2 p^2 - 2 z_1 z_2))^{(d-3)/2}, for d ∈ (2, 3). This form of the integral can be analytically continued in d. The kinematic dependence of this problem is so simple that now we can do a change of variable z_1 → -p^2 z_1 and z_2 → -p^2 z_2, so that the dependence on p^2 can be extracted out of the integral and what remains to compute is a d-dependent prefactor. Upon performing this change of variable we can pull out a factor (-p^2)^{d/2 - 2}, which is in fact dictated by dimensional analysis. If instead we take p^2 > 0, then there is a finite upper bound on ρ. Doing the ρ integral we obtain ± 2 π^{(d-2)/2}/Γ((d-2)/2) · √(π) Γ((d-2)/2)/(2 Γ((d-1)/2)) (p^2)^{-(d-2)/2} (1/4 (z_1^2 + z_2^2 + (p^2)^2 - 2 z_1 p^2 - 2 z_2 p^2 - 2 z_1 z_2))^{(d-3)/2}, for d ∈ (2, ∞). Keeping only the z_1 and z_2 dependent part, we have (∫_{-∞}^0 d z_1/z_1 (∫_{-∞}^{a_2(z_1)} + ∫_{b_2(z_1)}^∞) d z_2/z_2 + ∫_0^∞ d z_1/z_1 ∫_{-∞}^∞ d z_2/z_2)(z_1^2 + z_2^2 + 1 + 2 z_1 + 2 z_2 - 2 z_1 z_2)^{(d-3)/2}. The integrals over z_1 and z_2 should be understood in the sense of the Feynman i ε prescription. Rotating the z_1 contour in the second integral above (as we have done in sec. <ref>) we obtain -∫_{-∞}^0 d z_1/z_1 ∫_{a_2(z_1)}^{b_2(z_1)} d z_2/z_2 ((z_2 - a_2(z_1))(z_2 - b_2(z_1)))^{(d-3)/2}, where a_2(z_1) = -(1 + √(-z_1))^2 and b_2(z_1) = -(1 - √(-z_1))^2, so a_2(z_1) < b_2(z_1) < 0. Hence, the integration region does not contain the pole at z_2 = 0 and the Feynman i ε prescription is not required to make sense of the integral. Consider the following integral ∫_a^b d z/z (z - a)^{1/2} (z - b)^{1/2}, for a < b < 0, which arises from the integral above upon setting d = 4. We can write this integral as half the integral around the cut between a and b. This integral is 2 π i times the residues at z = 0 and z = ∞. We have res_0 = √(-a) √(-b). At infinity we make a change of coordinates z = w^{-1}, so we get d z/z (z - a)^{1/2} (z - b)^{1/2} = -d w/w^2 (1 - w a)^{1/2} (1 - w b)^{1/2} = -d w/w^2 + a d w/(2 w) + b d w/(2 w) + ⋯, so res_∞ = 1/2 (a + b). Hence, the integral is ∫_a^b d z/z (z - a)^{1/2} (z - b)^{1/2} = π i/2 (a + b + 2 √(a b)). Using this expression for a = a_2(z_1) and b = b_2(z_1) we find ∫_{a_2(z_1)}^{b_2(z_1)} d z_2/z_2 (z_2 - a_2(z_1))^{1/2} (b_2(z_1) - z_2)^{1/2} = π/2 × 4 z_1, if -1 < z_1 < 0, and π/2 × (-4), if z_1 < -1. Plugging this into the z_1 integral, we see that it is convergent at z_1 = 0 but it diverges logarithmically at z_1 → -∞. The first region can be thought of as an IR region while the second as a UV region, so our integral is IR-finite but UV-divergent. Interestingly, the value z_1 = -1 provides a natural boundary between these two regions, so one can canonically separate the integral into an IR and a UV region, to be studied separately. If we do not set d = 4, the integral over z_2 in eq.
(<ref>) is a hypergeometric integral, so it can be computed explicitly: ∫_a^b d x/x (x - a)^e (b - x)^e = (b - a)^{2e+1}/a ∫_0^1 d y y^e (1 - y)^e (1 - ζ y)^{-1} = (b - a)^{2e+1}/a Γ(e + 1)^2/Γ(2 e + 2) _2F_1(1, e + 1; 2 e + 2; ζ), where ζ = 1 - b/a. We want to apply this for e = (d - 3)/2. However, we are only interested in computing the expansion around ϵ → 0. Next, we will seek to establish that this hypergeometric integral has the following behavior: ∫_{a_2(z_1)}^{b_2(z_1)} d z_2/z_2 ((z_2 - a_2(z_1))(z_2 - b_2(z_1)))^{(d-3)/2} ∼ -2 π { (-z_1)^{c_0 (d - 4) + 1} + 𝒪(d - 4), when z_1 → 0; (-z_1)^{c_∞ (d - 4)} + 𝒪(d - 4), when z_1 → -∞ }, where c_0 and c_∞ are constants to be determined. Then we can obtain the leading behavior in 1/(d - 4) by using ∫_{-1}^0 d z_1/z_1 (-z_1)^{c_0 (d - 4) + 1} = [(-z_1)^{c_0 (d - 4) + 1}/(c_0 (d - 4) + 1)]_{z_1 = -1}^{z_1 = 0} = -1/(c_0 (d - 4) + 1), if Re(c_0 (d - 4)) > -1. Similarly, we have ∫_{-∞}^{-1} d z_1/z_1 (-z_1)^{c_∞ (d - 4)} = [(-z_1)^{c_∞ (d - 4)}/(c_∞ (d - 4))]_{z_1 = -∞}^{z_1 = -1} = 1/(c_∞ (d - 4)), if Re(c_∞ (d - 4)) < 0. We have ζ = 1 - b_2(z_1)/a_2(z_1) = 4 √(-z_1)/(1 + √(-z_1))^2, and b - a = -(1 - √(-z_1))^2 + (1 + √(-z_1))^2 = 4 √(-z_1). Plugging back into the expression in terms of hypergeometric functions we have (b - a)^{2e+1}/a = (4 √(-z_1))^{2e+1}/(-(1 + √(-z_1))^2) = -4^{2e+1} × { (-z_1)^{e + 1/2}, z_1 ↗ 0; (-z_1)^{e - 1/2}, z_1 ↘ -∞ }. Recall that e = (d - 3)/2 = 1/2 - ϵ. Hence, c_0 = c_∞ = 1/2. This was a woeful derivation, which we hope to improve later. What is needed is a way to systematically expand around z_1 = 0 in the first region and z_1 → -∞ in the second region. Finally, let us briefly discuss the massless box integral. We have ∫ d^D k/(k^2 (k + p_2)^2 (k + p_23)^2 (k - p_1)^2). Let us put this integral in Cutkosky form. The range of k^2 is ℝ. The range of (k + p_2)^2 can be determined by extremizing (k + p_2)^2 at fixed p_2 and subject to the constraint that k^2 is fixed at k^2 = z_1. Using Lagrange multipliers we find ∂/∂ k ((k + p_2)^2 + α_1 k^2) = 2 (k + p_2 + α_1 k) = 0. Since p_2^2 = 0 and z_1 = k^2 ≠ 0, we have that an extremum is never realized. Hence, the range of (k + p_2)^2 is also ℝ. Note that this is not what happens in Cutkosky's approach to proving his theorem. It remains to be seen if and how his proof would have to be modified to cover this case. The range of (k + p_23)^2 is determined in a similar way. Using the same idea of Lagrange multipliers we find the equation (1 + α_1 + α_2) k = -p_23 - α_2 p_3. The compatibility condition is the existence of a tetrahedron with sides k, k + p_2, k + p_23, p_3, p_14 and p_2, and the extrema arise when the tetrahedron becomes degenerate, which is when its volume vanishes. Using the Cayley–Menger formula for the volume of the tetrahedron one can solve for the stationary point of z_3 = (k + p_23)^2 and we find a unique solution z_3 = -(p_23^2 - z_1 + z_2) z_2/(z_1 - z_2). It is remarkable that there is a unique solution, which is another departure from the case analyzed by Cutkosky. We defer a more detailed study of the nature of this stationary point. The extrema of (k - p_1)^2 can be analyzed similarly and in that case one finds two solutions, as usual. The remaining integrals can be written as d^D k/(d k^2 ∧ d (k + p_2)^2 ∧ d (k + p_23)^2 ∧ d (k - p_1)^2) = d^D k/(16 (k · d k) ∧ ((k + p_2) · d k) ∧ ((k + p_23) · d k) ∧ ((k - p_1) · d k)) = d^{D-4} k_⊥/(16 ϵ(k_∥, (k + p_2)_∥, (k + p_23)_∥, (k - p_1)_∥)).
Next, we will use the following identity: -ϵ(k, k + p_2, k + p_23, k - p_1)^2 = [ k^2, k · (k + p_2), k · (k + p_23), k · (k - p_1); k · (k + p_2), (k + p_2)^2, (k + p_2) · (k + p_23), (k + p_2) · (k - p_1); k · (k + p_23), (k + p_2) · (k + p_23), (k + p_23)^2, (k + p_23) · (k - p_1); k · (k - p_1), (k + p_2) · (k - p_1), (k + p_23) · (k - p_1), (k - p_1)^2 ] = -ϵ(k_∥, (k + p_2)_∥, (k + p_23)_∥, (k - p_1)_∥)^2 - 1/4 (p_12^2 + p_23^2) p_12^2 p_23^2 ρ^2, where we have used the fact that every element in the Gram matrix above can be written by replacing k → k_∥ and adding ρ^2 = k_⊥^2. This allows us to compute the Jacobian as ϵ(k_∥, (k + p_2)_∥, (k + p_23)_∥, (k - p_1)_∥) = √(ϵ(k, k + p_2, k + p_23, k - p_1)^2 - 1/4 (p_12^2 + p_23^2) p_12^2 p_23^2 ρ^2), where ϵ(k, k + p_2, k + p_23, k - p_1)^2 is a relatively complicated polynomial in z_1, …, z_4 and p_12^2 and p_23^2, which we will not write explicitly, but which can be computed straightforwardly by expanding the Gram determinant above. Somewhat unexpectedly, the dependence on z cancels from the coefficient of ρ^2. Next, the strategy is as in the case of the bubble integral. We write the integral over k_⊥ as d^{D-4} k_⊥ = ρ^{D-5} d ρ d Ω_{D-5} and perform both the angular and the radial integrals. The integral over z_4 is again a hypergeometric integral, so if one wants to do the integrals to all orders in ϵ one would need to use identities for integrals involving hypergeometric functions. Interestingly, the full answer for the box can be computed exactly in terms of hypergeometric functions, see ref. <cit.>.
§ USEFUL INTEGRALS
Consider the integral ∫_a^b d x/((x + c) √((x - α)(β - x))), where a < b, α < β, the interval (a, b) is included in the interval (α, β) so the square root is always positive in the integration region, and -c ∉ (a, b). To compute this integral we define a curve y^2 = (x - α)(β - x), which can also be written (x - (α + β)/2)^2 + y^2 = ((α - β)/2)^2. This curve can be parametrized rationally by x = (α + β)/2 + ((α - β)/4) i (t - t^{-1}), y = ((α - β)/4)(t + t^{-1}). In this new coordinate we have ω = d x/((x + c) √((x - α)(β - x))) = 1/(2 √(-(α + c)(c + β))) d log (t - t^+)/(t - t^-), where t^± = (-i (α + β + 2 c) ± 2 √(-(α + c)(c + β)))/(α - β). Then we solve for t_i corresponding to x = a and for t_f corresponding to x = b. In both cases there are two solutions and we pick the solution for which y > 0. We therefore have t_i = (i (2 a - α - β) + 2 √((a - α)(β - a)))/(α - β), t_f = (i (2 b - α - β) + 2 √((b - α)(β - b)))/(α - β). Using these equations we find ∫_a^b d x/((x + c) √((x - α)(β - x))) = 1/(2 √(-(α + c)(c + β))) log (t_f - t^+)(t_i - t^-)/((t_f - t^-)(t_i - t^+)), where we have taken α < a < b < β and -c ∉ (α, β). An alternative form for the answer is ∫_a^b d x/((x + c) √((x - α)(β - x))) = 2/√((α + c)(β + c)) × ( arctan √((a - β)(α + c)/((α - a)(β + c))) - arctan √((b - β)(α + c)/((α - b)(β + c))) ). Notice the appearance of square roots of cross-ratios of the points a, β, α, -c and b, β, α, -c. Interestingly, square roots of cross-ratios have appeared in the recent work of Rudenko (see ref. <cit.>). We have arctan x = 1/(2 i) log (1 + i x)/(1 - i x). Then, arctan √(x) has a square root branch cut at x = 0 and a logarithmic branch cut at x = -1. The logarithmic branch points are obtained by solving (a - β)(α + c)/((α - a)(β + c)) = -1, which gives α = β or a = -c. Similarly, from the second arctangent function, we have logarithmic branch points when (b - β)(α + c)/((α - b)(β + c)) = -1, which implies α = β or b = -c.
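The arctan form above is straightforward to validate numerically. The sketch below compares it against direct quadrature; the sample parameters are arbitrary choices satisfying α < a < b < β and -c ∉ (α, β), not values taken from the text.

```python
import numpy as np
from scipy.integrate import quad

# Spot-check of the closed form for ∫_a^b dx / ((x + c) sqrt((x - α)(β - x))).
alpha, beta, a, b, c = 1.0, 5.0, 2.0, 4.0, 0.5   # α < a < b < β, -c outside (α, β)

numeric, _ = quad(lambda x: 1.0 / ((x + c) * np.sqrt((x - alpha) * (beta - x))),
                  a, b)

def arc(x):
    # arctan sqrt((x - β)(α + c) / ((α - x)(β + c))); the argument is
    # positive for α < x < β, so the square root is real
    return np.arctan(np.sqrt((x - beta) * (alpha + c) / ((alpha - x) * (beta + c))))

closed = 2.0 / np.sqrt((alpha + c) * (beta + c)) * (arc(a) - arc(b))
print(numeric, closed)   # the two values agree to quadrature accuracy (≈ 0.308 here)
```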
If the roots of the quadratic polynomial under the square root are complex, then we have the integral ∫_a^b d x/((x + c) √((x - z)(x - z̅))), where -c < a < b and z = u + i v, z̅ = u - i v for u, v ∈ ℝ and v ≥ 0. The calculation of this integral is classical. We first define a curve y^2 = (x - u)^2 + v^2, which can be written as (y - x + u)(y + x - u) = v^2. This curve can be parametrized by x = u + 1/2 (v^2/t - t), y = 1/2 (t + v^2/t). We have ω = d x/((x + c) √((x - z)(x - z̅))) = 2 d t/(t^2 - 2 (u + c) t - v^2). After partial fractioning this becomes ω = 1/√((u + c)^2 + v^2) d log (t - t_+)/(t - t_-), where t_± = u + c ± √((u + c)^2 + v^2). The bounds of integration can be determined using x(t_i) = a and x(t_f) = b. There are two solutions in each case but we keep the one for which y > 0. In the end we find t_i = u - a + √((u - a)^2 + v^2), t_f = u - b + √((u - b)^2 + v^2). Finally the integral is ∫_a^b ω = 1/√((u + c)^2 + v^2) [ log( (-b - c + √((b - u)^2 + v^2) - √((u + c)^2 + v^2))/(-b - c + √((b - u)^2 + v^2) + √((u + c)^2 + v^2)) ) - log( (-a - c + √((a - u)^2 + v^2) - √((u + c)^2 + v^2))/(-a - c + √((a - u)^2 + v^2) + √((u + c)^2 + v^2)) ) ]. Let us now study the singularities of this function. We have logarithmic singularities when the arguments of the logarithms become zero or infinity. For the first logarithm this happens when (-b - c + √((b - u)^2 + v^2) - √((u + c)^2 + v^2))(-b - c + √((b - u)^2 + v^2) + √((u + c)^2 + v^2)) = 0. This condition is satisfied if b + c = 0, or if v = 0 while u ∈ [a, b]. In the second case the contour [a, b] is pinched. A similar conclusion holds for the second logarithm with a substituted for b. The usual branch cut prescription for √((x - a)(x - b)) is that if the quantity under the square root has a negative real part and a small positive imaginary part then the argument of the square root is π/2. If instead it has a small negative imaginary part then the argument of the square root is -π/2. From this point of view √((x - a)(x - b)) is badly behaved since we have (x + i ϵ - a)(x + i ϵ - b) = (x - a)(x - b) - i ϵ (a + b - 2 x) + 𝒪(ϵ^2). Here we assume a, b, x, ϵ ∈ ℝ and for definiteness a < b. Then Re((x + i ϵ - a)(x + i ϵ - b)) < 0 if x ∈ (a, b). We also have Im((x + i ϵ - a)(x + i ϵ - b)) = -ϵ (a + b - 2 x). This changes sign when x = (a + b)/2. It is more convenient to instead use √(x - a) √(x - b). For x > b there are no branch cuts. For x ∈ (a, b) there is a square root branch cut arising from √(x - b). Finally, for x < a there is no branch cut, as can be shown by a short calculation. One can also choose the complementary cut, by considering the function √(x - a) √(b - x) instead. Indeed, as long as x ∈ (a, b) the function is continuous from above and below in the complex plane, while along x < a and x > b we have the usual square root branch cuts of √(x - a) and √(b - x), respectively. The contours in fig. <ref> are very useful in computing integrals of the type ∫_a^b d x/(f(x) √(x - a) √(x - b)), where f does not have any singularities along the real axis. Then, we have ∫_{-∞ + i δ}^{a - √(ϵ^2 - δ^2) + i δ} + ∫_{a - √(ϵ^2 - δ^2) + i δ}^{a + √(ϵ^2 - δ^2) + i δ} + ∫_{a + √(ϵ^2 - δ^2) + i δ}^{b - √(ϵ^2 - δ^2) + i δ} + ∫_{b - √(ϵ^2 - δ^2) + i δ}^{b + √(ϵ^2 - δ^2) + i δ} + ∫_{b + √(ϵ^2 - δ^2) + i δ}^{∞ + i ϵ}, which is a decomposition of the horizontal contour in the upper plane in fig.
<ref>. This contour has several portions. The horizontal portions are displaced by δ > 0 in the upper half plane while the arcs of circle have radius ϵ > δ. If we subtract from this path the analogous path in the lower half plane, the linear sections going to ±∞ cancel, while the section along the cut doubles since it receives a contribution from the contour in the upper half plane and a negative contribution (due to the branch cut) along a contour going in the opposite direction. In practice f is often such that the integrals along the small circles of radius ϵ vanish in the limit ϵ → 0 (which also implies δ → 0). Then one can compute the integral by an application of Cauchy's theorem, by closing the contours at infinity. If f has branch cuts, we can arrange for the contours to go along the branch cuts without crossing them. This produces contributions to the integral from Cauchy's theorem. Finally, we need to take into account the poles, including a potential pole at infinity. If instead we want to compute an integral of the type ∫_{(-∞, a) ∪ (b, ∞)} d x/(f(x) √(x - a) √(x - b)), we can follow a similar recipe, except that now we add the contributions of the two horizontal contours instead of subtracting them.
§ CARLSON ELLIPTIC INTEGRALS
In ref. <cit.> the following duplication theorem was proved. If z_1, z_2, z_3 ∈ ℂ ∖ (-∞, 0) and at most one of them is zero, then we have ∫_0^∞ d t/∏_{i=1}^3 √(t + z_i) = 2 ∫_0^∞ d u/∏_{i=1}^3 √(u + z_i + λ), where λ = √(z_1) √(z_2) + √(z_1) √(z_3) + √(z_2) √(z_3) and where the square roots are taken with values in the right half-plane. The duplication theorem has to do with an isogeny of the elliptic curve. See refs. <cit.> for discussions of isogenies in connection with Feynman integrals. This can be proved by making a change of variable u(t) = t + √(t + z_1) √(t + z_2) + √(t + z_1) √(t + z_3) + √(t + z_2) √(t + z_3) - λ. Then we have 2 d u/d t = 2 + √(t + z_2)/√(t + z_1) + √(t + z_1)/√(t + z_2) + √(t + z_3)/√(t + z_1) + √(t + z_1)/√(t + z_3) + √(t + z_2)/√(t + z_3) + √(t + z_3)/√(t + z_2). Next, we have u + z_1 + λ = (√(t + z_1) + √(t + z_2))(√(t + z_1) + √(t + z_3)) and permutations. Hence, √((u + z_1 + λ)(u + z_2 + λ)(u + z_3 + λ)) = (√(t + z_1) + √(t + z_2))(√(t + z_1) + √(t + z_3))(√(t + z_2) + √(t + z_3)). Finally, simple algebra shows that d u/d t = 1/2 √((u + z_1 + λ)(u + z_2 + λ)(u + z_3 + λ))/√((t + z_1)(t + z_2)(t + z_3)). This, together with u(0) = 0 and lim_{t → ∞} u(t) = ∞, finishes the proof. The symmetric reduction from a quartic to a cubic is treated in ref.
<cit.>. Suppose we have z_1, z_2, z_3, z_4 ∈ ℂ ∖ (-∞, 0) and take the square roots to be in the right half plane. Then, define w_j = √(z_1) √(z_j) + √(z_k) √(z_l), {j, k, l} = {2, 3, 4}. Then, ∫_0^∞ d u/∏_{i=1}^4 √(u + z_i) = ∫_0^∞ d v/∏_{j=2}^4 √(v + w_j^2). This can be shown by a change of variable v(u) = (√(u + z_1) √(u + z_j) + √(u + z_k) √(u + z_l))^2 - (√(z_1) √(z_j) + √(z_k) √(z_l))^2. We have d v/d u = (√(u + z_1) √(u + z_j) + √(u + z_k) √(u + z_l))(√(u + z_j)/√(u + z_1) + √(u + z_1)/√(u + z_j) + √(u + z_k)/√(u + z_l) + √(u + z_l)/√(u + z_k)). Then, v + w_j^2 = (√(u + z_1) √(u + z_j) + √(u + z_k) √(u + z_l))^2 and √((v + w_2^2)(v + w_3^2)(v + w_4^2)) = (√(u + z_1) √(u + z_2) + √(u + z_3) √(u + z_4))(√(u + z_1) √(u + z_3) + √(u + z_2) √(u + z_4))(√(u + z_1) √(u + z_4) + √(u + z_2) √(u + z_3)). Now the result follows by simple algebra. If the integration domain is not (0, ∞) but (x, y) instead, we first make a change of variable to make the domain (0, ∞). For an integral ∫_x^y d t/∏_{i=1}^4 √(a_i + b_i t) we make a change of variables t = (x u + y)/(u + 1), or u = (t - y)/(x - t). This then yields ∫_x^y d t/∏_{i=1}^4 √(a_i + b_i t) = (y - x)/∏_{i=1}^4 √(a_i + b_i x) ∫_0^∞ d u/∏_{i=1}^4 √(u + z_i), where z_i = (a_i + b_i y)/(a_i + b_i x). Next, we can combine the domain transformation and the symmetric reduction to obtain the following result (see ref. <cit.>). For x, y ∈ ℝ with x > y, and a_i, b_i ∈ ℝ for i = 1, 2, 3, 4, define X_i = √(a_i + b_i x), Y_i = √(a_i + b_i y), i = 1, 2, 3, 4, U_{1j} = (X_1 X_j Y_k Y_l + Y_1 Y_j X_k X_l)/(x - y), j = 2, 3, 4, with {j, k, l} = {2, 3, 4}; then ∫_y^x d t/∏_{i=1}^4 √(a_i + b_i t) = ∫_0^∞ d t/∏_{j=2}^4 √(t + U_{1j}^2). Indeed, we have ∫_x^y d t/∏_{i=1}^4 √(a_i + b_i t) = (y - x)/∏_{i=1}^4 √(a_i + b_i x) ∫_0^∞ d u/∏_{i=1}^4 √(u + z_i), with z_i = (a_i + b_i y)/(a_i + b_i x). Then, by symmetric reduction we have ∫_0^∞ d u/∏_{i=1}^4 √(u + z_i) = ∫_0^∞ d v/∏_{j=2}^4 √(v + w_j^2), where w_j = √(z_1) √(z_j) + √(z_k) √(z_l) = √((a_1 + b_1 y)/(a_1 + b_1 x)) √((a_j + b_j y)/(a_j + b_j x)) + √((a_k + b_k y)/(a_k + b_k x)) √((a_l + b_l y)/(a_l + b_l x)) = (X_1 X_j Y_k Y_l + X_k X_l Y_1 Y_j)/(X_1 X_j X_k X_l). Using the rescaling v → v/α^2 for α > 0, the identity α ∫_0^∞ d v/∏_{j=2}^4 √(v + w_j^2) = ∫_0^∞ d v/∏_{j=2}^4 √(v + w_j^2/α^2) finishes the proof. Let us now consider the explicit case of a complete elliptic integral ∫_{a_2}^{a_3} d t/√((t - a_1)(t - a_2)(a_3 - t)(a_4 - t)) for 0 ≤ a_1 ≤ a_2 ≤ a_3 ≤ a_4. By a change of variables u = (t - a_2)/(a_3 - t) we obtain ∫_0^∞ d u/√(u (a_2 - a_1 + (a_3 - a_1) u)(a_4 - a_2 + (a_4 - a_3) u)). By making the changes of variables u = ((a_2 - a_1)/(a_3 - a_1)) v and u = ((a_4 - a_2)/(a_4 - a_3)) v we obtain, respectively, ∫_{a_2}^{a_3} d t/√((t - a_1)(t - a_2)(a_3 - t)(a_4 - t)) = 1/√((a_2 - a_1)(a_4 - a_3)) ∫_0^∞ d v/√(v (v + 1)(v + (a_1 - a_3)(a_4 - a_2)/((a_1 - a_2)(a_4 - a_3)))) = 1/√((a_3 - a_1)(a_4 - a_2)) ∫_0^∞ d v/√(v (v + 1)(v + (a_1 - a_2)(a_4 - a_3)/((a_1 - a_3)(a_4 - a_2)))). I am grateful to Matt von Hippel and Hjalte Frellesvig for discussions and initial collaboration. I am also grateful to Francis Brown for a discussion about coactions. JHEP | http://arxiv.org/abs/2311.16069v1 | {
"authors": [
"C. Vergu"
],
"categories": [
"hep-th",
"hep-ph"
],
"primary_category": "hep-th",
"published": "20231127184054",
"title": "Cutkosky representation and direct integration"
} |
G.V. Afonin [VSPU], A.S. Makarov [VSPU], R.A. Konchakov [VSPU], J.C. Qiao [NWPU], A.N. Vasiliev [MISiS], N.P. Kobelev [IFTT], V.A. Khonik [VSPU] (corresponding author, [email protected])
[VSPU] Department of General Physics, Voronezh State Pedagogical University, Lenin St. 86, Voronezh 394043, Russia
[NWPU] School of Mechanics, Civil Engineering and Architecture, Northwestern Polytechnical University, Xi'an 710072, China
[MISiS] National University of Science and Technology MISiS, Moscow 119049, Russia
[IFTT] Institute of Solid State Physics RAS, Chernogolovka, 142432, Russia
In this work, we show that above the glass transition there exists a strong, unique interrelationship between the thermodynamic parameter of disorder of a metallic glass derived using its excess entropy, the diffraction measure of disorder given by the width of the X-ray structure factor, and the defect concentration derived from shear modulus measurements. Below the glass transition, this relationship is more complicated and depends on both temperature and thermal prehistory.
Keywords: metallic glasses, thermodynamic and diffraction disorder parameters, defects
Understanding of the general key features of the relaxation behavior of metallic glasses (MGs) is of vital importance for the adequate description of their structural state and atomic rearrangements <cit.>. In particular, the current literature gives quite a few examples of the interpretation of relaxation properties in terms of different types of defects assumed to exist in real MGs <cit.>. However, any relation of the defect structure with macroscopic structure parameters remains mostly out of view. An attempt to advance in this direction was recently reported in our work <cit.>, in which it was found that the width of the structure factor, given by the full width at half maximum (FWHM) of its first peak, is related to the concentration c_def of interstitial-type defects derived from shear modulus measurements of a Pt-based metallic glass. It was shown, in particular, that above the glass transition temperature T_g, the FWHM linearly increases with c_def. This indicates that heat absorption and the related increase of c_def provide a significant defect-induced disruption of the dominant short-range order, thus increasing the integral structural disordering. Below T_g, the relationship between the FWHM and c_def is more complicated, but defect-induced ordering is observed upon approaching T_g. In any case, it is clear that the diffraction measure of disorder is related to the defect concentration. On the other hand, within the view of statistical physics <cit.>, any ordering/disordering should be related to a change of glass entropy. This approach was developed in our recent work <cit.>, in which a dimensionless thermodynamic parameter of structural order was introduced as ξ(T)=1-Δ S(T)/Δ S_melt, where Δ S(T) is the temperature-dependent excess entropy of glass with respect to the maternal crystal and Δ S_melt is the entropy of melting. The above order parameter varies in the range 0<ξ<1 upon changing the structure from the fully disordered (liquid-like) state with Δ S → Δ S_melt and ξ → 0 towards the fully ordered (crystal-like) state characterized by Δ S → 0 and ξ → 1. An application of this approach to 13 metallic and 2 tellurite glasses showed that the order parameter ξ provides very useful qualitative and quantitative information on the structural ordering of various glasses and its evolution upon relaxation. This order parameter is quite sensitive to the glass state and its chemical composition.
In particular, structural relaxation below T_g results in a significant increase of the ξ-parameter, while heating above T_g leads to its rapid decrease. It was also found that the order parameter ξ_sql calculated for the supercooled liquid state is related to the glass forming ability (GFA). An increase of ξ_sql strongly worsens the GFA. While the parameter ξ describes the order in glass, the quantity α(T) = Δ S(T)/Δ S_melt describes structural disordering. In this work, we determined the parameter α for the aforementioned Pt-based glass and found that it is strongly related to the FWHM and c_def, thus indicating that the thermodynamic entropy-based and diffraction characteristics of disordering are i) interrelated and ii) linked to the defect concentration. We studied the same glass Pt_42.5Cu_27Ni_9.5P_21 (at.%) produced in the bulk form by melt jet quenching <cit.>, while its FWHM and defect concentration c_def as functions of temperature were also taken from this work. The excess entropy of glass with respect to the maternal crystal was calculated from calorimetric measurements as Δ S(T)=1/Ṫ ∫_T^T_cr Δ W(T)/T dT, where Δ W(T)=W_gl(T)-W_cr(T) is the differential heat flow, W_gl(T) and W_cr(T) are the heat flows coming from the glass and its maternal crystal, respectively, Ṫ is the heating rate and T_cr is the temperature of the complete crystallization. Specific details of the calorimetric measurements are given in Ref.<cit.>. These measurements were performed at a heating rate of 3 K/min on both initial and relaxed samples, where the latter were prepared by heating to 559 K (deep in the supercooled liquid state) and cooling back to room temperature at the same rate. Note that if the current temperature T=T_cr, then Δ S=0 and, therefore, the function Δ S(T) represents the temperature dependence of the excess entropy of glass with respect to the maternal crystal. The melting entropy Δ S_melt was taken to be 11.3 J/(K×mol) <cit.>. This allows calculating the parameter of disordering α, which changes from α → 1 for the fully disordered (liquid-like) state to α → 0 for the fully ordered (crystal-like) state. Figure <ref> gives temperature dependences of the differential heat flow Δ W for the glass under investigation in the initial and relaxed states. These curves were used to calculate the excess entropy Δ S with Eq.(<ref>) and next determine the disorder parameter α=Δ S/Δ S_melt, which is also given in Fig.<ref>. It is seen that the exothermal heat flow of the initial sample in the range 400 K<T<510 K (below T_g) leads to a decrease of the disorder parameter α in the same range, just as one would expect. Continued heating leads to a strong heat absorption near and above T_g (i.e. in the supercooled liquid state), which is accompanied by a marked increase of disorder (as anticipated), described by a rise of α from 0.52 to 0.71 in the same range. The relaxed sample, instead of an exothermal flow, displays a notable endothermal effect, which is coupled to moderate disordering described by an increase of α from 0.42 to 0.52 in the aforementioned temperature range. Disordering of the relaxed sample above T_g is described by the same rise of α as in the case of the initial sample.
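In practice, obtaining Δ S(T) and α(T) from the measured heat flow amounts to a single cumulative quadrature of Eq.(<ref>). A minimal post-processing sketch is given below; the Gaussian Δ W(T) used there is synthetic, purely illustrative data, not the measured curves of Fig.<ref>.

```python
import numpy as np

# Sketch of Eq. (1): ΔS(T) = (1/Tdot) ∫_T^{T_cr} ΔW(T')/T' dT',
# followed by the disorder parameter α(T) = ΔS(T)/ΔS_melt.
T = np.linspace(320.0, 620.0, 601)                 # K; T[-1] plays the role of T_cr
dW = 1.0e-3 * np.exp(-((T - 520.0) / 25.0) ** 2)   # W/mol, made-up endothermal peak
Tdot = 3.0 / 60.0                                  # heating rate, 3 K/min in K/s
dS_melt = 11.3                                     # J/(K mol), entropy of melting

integrand = dW / T
# cumulative integral from each temperature up to T_cr
dS = np.array([np.trapz(integrand[i:], T[i:]) for i in range(T.size)]) / Tdot
alpha = dS / dS_melt                               # α → 0 at T_cr by construction
print(alpha[0], alpha[-1])
```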
Thus, the evolution of the disorder parameter qualitatively agrees with the character of the structure evolution upon heating of the initial and relaxed samples. One can now combine these results with the information on the FWHM and defect concentration data reported earlier in Ref.<cit.>. In this work, the FWHM was taken as the width Γ of the 1st peak of the structure factor S(q) normalized by the scattering vector q_0 corresponding to this peak, i.e. γ=Γ/q_0. The defect concentration was calculated within the framework of the Interstitialcy theory <cit.> using high-frequency shear modulus data as c_def(T)=β^{-1} ln[μ(T)/G(T)], where G(T) and μ(T) are the temperature dependences of the shear modulus of the glass under investigation in the initial state and after complete crystallization, respectively, and the dimensionless shear susceptibility β ≈ 18 <cit.>. Temperature dependences of the FWHM γ, defect concentration c_def and disorder parameter α=Δ S/Δ S_melt given above (see Fig.<ref>) are shown in Fig.<ref>. It is seen that the relationship between γ, c_def and α above T_g is quite clear and straightforward: the disorder parameter α, FWHM γ and defect concentration c_def sharply increase with temperature. The interpretation of this behavior is rather obvious. Heating above T_g leads to rapid defect multiplication (increase of c_def), which is accompanied by strong heat absorption (see the endothermal effect demonstrated by the heat flow Δ W in Fig.<ref> above T_g). On the other hand, an increase of the defect concentration raises the number of regions with damaged short-range order, thus increasing the FWHM, as discussed earlier <cit.>. Based on the results of the present work, one can now argue that an increase of the defect concentration also increases the thermodynamic parameter of disordering α determined by the glass excess entropy. This is the main finding of the present work. It is also worth emphasizing that all quantities γ, c_def and α almost do not depend on thermal prehistory, as one would expect. Overall, a quantitative analysis of essentially different phenomena, the shear elasticity, X-ray diffraction and calorimetric behavior of the glass under investigation, gives unequivocally consistent results. The situation below the glass transition is more complicated but nonetheless quite understandable. A decrease of α (in other words, structural ordering) of the initial samples with temperature well below T_g is conditioned by exothermal structural relaxation (see the exothermal Δ W-rise for the initial state in the range 400 K<T<500 K in Fig.<ref>), which is, however, poorly seen in the temperature dependences of the FWHM γ and c_def. Preliminary relaxation of samples by heating into the supercooled liquid state and cooling back to room temperature results in endothermal heat flow even well below T_g (see the same range 400 K<T<500 K in Fig.<ref> for relaxed samples) and related disordering (increase of α) even below T_g, which is quite well manifested in an increase of the defect concentration but almost invisible in the FWHM (see the c_def(T) and γ(T) dependences for relaxed samples). Structural disordering with temperature above T_g becomes much stronger. In conclusion, we performed calorimetric measurements on bulk glassy Pt_42.5Cu_27Ni_9.5P_21, which was earlier used to uncover the relationship between the width of the X-ray structure factor FWHM γ and the concentration of defects c_def derived using shear modulus data, as a function of temperature and thermal prehistory <cit.>.
On this basis, we determined the excess entropy of glass with respect to the maternal crystal using Eq.(<ref>) and calculated the thermodynamic parameter of structural disordering α defined by Eq.(1). It was found that the parameter α above the glass transition temperature T_g rapidly increases with the FWHM γ and c_def, proving an interconnection between the thermodynamic excess entropy-based and diffraction measures of disorder and their relation to the defect concentration. Overall, this investigation confirms the earlier idea <cit.> that macroscopic structural disordering in metallic glasses above T_g is intrinsically related to their structural defects. Below T_g, the interrelation between α, γ and c_def is more complicated but nonetheless follows reasonable expectations.
§ ACKNOWLEDGMENTS
The work was supported by the Russian Science Foundation under the project 23-12-00162.
[ChengProgMatSci2011] Y.Q. Cheng, E. Ma, Atomic-level structure and structure-property relationship in metallic glasses, Prog. Mater. Sci. 56 (2011) 379-473.
[WangProgMaterSci2019] W.H. Wang, Dynamic relaxations and relaxation-property relationships in metallic glasses, Prog. Mater. Sci. 106 (2019) 100561.
[Ye] J.C. Ye, J. Lu, C.T. Liu, Q. Wang, Y. Yang, Atomistic free-volume zones and inelastic deformation of metallic glasses, Nature Mater. 9 (2010) 619-623.
[Egami] T. Egami, Atomic level stresses, Prog. Mater. Sci. 56 (2011) 637-653.
[Betancourt] B.A.P. Betancourt, J.F. Douglas, F.W. Starr, String model for the dynamics of glass-forming liquids, J. Chem. Phys. 140 (2014) 204509.
[LiSciRep2015] W. Li, Y. Gao, H. Bei, On the correlation between microscopic structural heterogeneity and embrittlement behavior in metallic glasses, Sci. Rep. 5 (2015) 14786.
[Zhang] H. Zhang, C. Zhong, J.F. Douglas, X. Wang, Q. Cao, D. Zhang, J.-Z. Jiang, Role of string-like collective atomic motion on diffusion and structural relaxation in glass forming Cu-Zr alloys, J. Chem. Phys. 142 (2015) 164506.
[WangNatSciRev2018] Z. Wang, W.H. Wang, Flow units as dynamic defects in metallic glassy materials, Nat. Sci. Rev. 6 (2019) 304-323.
[MakarovInmermetallics2023] A.S. Makarov, G.V. Afonin, R.A. Konchakov, J.C. Qiao, A.N. Vasiliev, N.P. Kobelev, V.A. Khonik, Defect-induced ordering and disordering in metallic glasses, Intermetallics 163 (2023) 10804.
[KobelevUFN2023] N.P. Kobelev, V.A. Khonik, A novel view of the nature of formation of metallic glasses, their structural relaxation, and crystallization, Physics-Uspekhi 66 (2023) 673-690.
[Landau] L.D. Landau, E.M. Lifshitz, Statistical Physics, Vol. 5 (3rd ed.), Butterworth-Heinemann, 1980.
[MakarovScrMater2023] A.S. Makarov, G.V. Afonin, R.A. Konchakov, V.A. Khonik, J.C. Qiao, A.N. Vasiliev, N.P. Kobelev, Dimensionless parameter of structural ordering and excess entropy of metallic and tellurite glasses, Scr. Mater. 239 (2024) 115783.
[MakarovJPCM2021] A.S. Makarov, G.V. Afonin, J.C. Qiao, A.M. Glezer, N.P. Kobelev, V.A. Khonik, Determination of the thermodynamic potentials of metallic glasses and their relation to the defect structure, J. Phys.: Condens. Matter 33 (2021) 435701.
[MakarovMetals2022] A. Makarov, M. Kretova, G. Afonin, N. Kobelev, V. Khonik, Components of the shear modulus and their dependence on temperature and plastic deformation of a metallic glass, Metals 12 (2022) 1964.
[NeuberActaMater2021] N. Neuber, O. Gross, M. Frey, B. Bochtler, A. Kuball, S. Hechler, I. Gallino, R.
Busch, On the thermodynamics and its connection to structure in the Pt-Pd-Cu-Ni-P bulk metallic glass forming system, Acta Mater 220 (2021) 117300. | http://arxiv.org/abs/2311.15897v1 | {
"authors": [
"G. V. Afonin",
"A. S. Makarov",
"R. A. Konchakov",
"J. C. Qiao",
"A. N. Vasiliev",
"N. P. Kobelev",
"V. A. Khonik"
],
"categories": [
"cond-mat.dis-nn"
],
"primary_category": "cond-mat.dis-nn",
"published": "20231127150127",
"title": "Relation of the thermodynamic parameter of disordering with the width of structure factor and defect concentration in a metallic glass"
} |
We carry out a study on the validity and limitations of truncationschemes customarily employed to treat the quantum kinetic equationsof motion of complex interacting systems.Our system of choice is a semiconductor quantum ring with one electron interacting with few magnetic impurities via a Kondo-like Hamiltonian.This system is an interesting prototype which displays the necessary complexity when suitably scaled (large number of magnetic impurities)but can also be solved exactly when few impurities are present.The complexity in this system comes from the indirectelectron-mediated impurity-impurity interaction and is reflected inthe Heisenberg equations of motion, which form an infinite hierarchy.For the cases of two and three magnetic impurities, we solve forthe quantum dynamics of our system both exactly and following atruncation scheme developed for diluted magnetic semiconductorsin the bulk.We find an excellent agreement between the two approaches whenphysical observables like the impurities' spin angular momentumare computed for times that well exceed the time window of validityof perturbation theory.On the other hand, we find that within time ranges of physicalinterest, the truncation scheme introduces negative populationswhich represents a serious methodological drawback. Case study of the validity of truncation schemes of kineticequations of motion: few magnetic impurities in a semiconductorquantum ring P. I. Tamborenea January 14, 2024 ==========================================================================================================================================§ INTRODUCTION Many-body interacting systems such as diluted magnetic semiconductors(DMS) pose interesting theoretical challenges.Their quantum dynamics is completely described by the Heisenbergequations of motion for the density matrix.These equations are usually coupled to one another and an analyticalsolution is in general not always possible.In different areas of physics, there exists a long tradition ofapproaching the study of this kind of systems of equations of motionby ordering them into a hierarchy of increasing correlations betweenthe particles or the fields involved.<cit.>The work of Kubo<cit.> on the expansion of cumulant functions for stochastic variables has proven useful in carrying out this reordering,and the ideas expounded in his paper have been applied to problems of condensed matter physics such as that of optical excitation in semiconductors<cit.>and, more recently, the theoretical treatment of DMS.<cit.>Typically, only approximate solutions to the system of equationscan be obtained. 
Once the relevant hierarchy has been established, it is truncated following a particular scheme that discards high order correlations and leads to another set of equations that is at least numerically tractable.<cit.> In the context of DMS, the study of nanostructures is attracting growing interest.<cit.> Among these structures are narrow quantum rings (QR)<cit.> with few magnetic impurities which, due to their simplicity and experimental feasibility,<cit.> are particularly well suited for exploring the strengths and limitations of truncation schemes. When the number of impurities is small, the ultrafast quantum dynamics of these systems can be computed exactly without resorting to the Heisenberg equations.<cit.> Such exact solutions are useful since they can be used as benchmarks to which the approximate solutions coming from truncation schemes can be compared. Thus, here we pose the quantum dynamics problem of a DMS QR modelled with the Kondo interaction<cit.> between the electron spin and the magnetic impurities. Our purpose is twofold: on the one hand, we wish to further our studies of angular momentum dynamics and control in nanostructures<cit.>. On the other hand, and more to the point of this article, we report in a quantitative way the encountered methodological difficulties, in order to contribute to the development and improvement of theoretical techniques based on hierarchies of equations of motion. The paper is organized as follows. In Sec. <ref> we lay out the steps and assumptions leading to the one-dimensional model for the DMS QR to which we devote this study. In Sec. <ref> we describe at length the truncation scheme that we apply to the Heisenberg equations for the many-body density matrices. In Sec. <ref> we integrate numerically the truncated Heisenberg equations and, when possible, compare the results with their exact counterparts, which are computed by solving the time-dependent Schrödinger equation. Finally, in Sec. <ref> we offer some concluding remarks.
§ QUANTUM RING SYSTEM
We consider a narrow semiconductor quantum ring doped with a single electron and a few Mn impurities. In the envelope-function approximation, the Hamiltonian of the bare QR, including the confining potential U(𝐫), reads H_0 = -ħ^2/(2m^∗) ∇^2 + U(𝐫), where m^∗ is the conduction-band effective mass. Between the electron and the d-shell spin of the impurities we assume the typical sd exchange interaction described by the Kondo-like Hamiltonian<cit.> H_sd = J ∑_{I=1}^N 𝐒_I·𝐬 δ(𝐫 - 𝐑_I), where N is the number of Mn impurities, J the bulk sd exchange constant, 𝐬 the spin of the electron, and 𝐒_I and 𝐑_I the spin and position of the I-th impurity, respectively. Note that H_sd conserves the total spin angular momentum (SAM), 𝐬 + ∑_{I=1}^N 𝐒_I.<cit.> Here we adopt a quasi-one-dimensional model for the QR in which the radial and vertical components of the wave function are taken as the respective groundstates<cit.> and do not participate in the dynamics. The resulting φ-dependent Hamiltonian reads H = (E_0/ħ^2) L_z^2 + (J/V) ∑_{I=1}^N 𝐬·𝐒_I δ(φ-φ_I), where L_z=-iħ∂_φ is the z-component operator of the electron's orbital angular momentum (OAM), E_0=ħ^2/(2m^∗ a^2), with a being the radius of the ring, and V is the volume of the QR. The time evolution driven by the many-body Hamiltonian of Eq.
(<ref>) can be obtained numerically by solving the Schrödinger equation if N is sufficiently small. For large N (say N>4) this is no longer practical or even possible. In such cases one resorts to the equations of motion of the density matrices, which form a coupled and infinite hierarchy. Here the pitfall is that only by truncating this hierarchy can a numerically tractable closed set of equations be obtained. The question is: How to carry out the truncation while preserving both the basic mathematical properties of the density matrices and the fundamental physical features of the model? In this work we follow a well-established procedure to treat the hierarchy of quantum density matrices <cit.> and apply it to the QR with one electron and a few magnetic impurities. For bulk DMS, this method yields good approximate solutions on short time scales, which preserve the fundamental symmetries and their associated conserved quantities. However, on longer time scales (e.g., beyond the regime of perturbation theory), its performance has not been sufficiently explored. Here we test this methodology in a rather small version of a DMS system, taking advantage of the fact that we can compare its results for long times with exact solutions of the Hamiltonian evolution.
§ TRUNCATION SCHEME
In terms of many-body operators the Hamiltonian in Eq. (<ref>) reads H = E_0 ∑_{mσ} m^2 c_{mσ}^† c_{mσ} + J/V ∑_{Inn'mσm'σ'} 𝐬_{σσ'}·𝐒_{nn'} ρ_{mm'}^I c_{mσ}^† c_{m'σ'} P_{nn'}^I. In this expression 𝐬_{σσ'} are the matrix elements of the electron's spin operator in the basis of eigenstates of s_z (σ=±1/2), and ρ_{mm'}^I ≐ e^{i(m-m')φ_I} are the matrix elements of the delta function at φ_I in the basis of eigenstates of the L_z operator (m∈ℤ). The operators P_{nn'}^I ≐ |I,n⟩⟨I,n'| are defined through the equations 𝐒_I = ∑_{nn'} ⟨I,n|𝐒_I|I,n'⟩ P_{nn'}^I, where |I,n⟩, n∈{-5/2,-3/2,…,3/2,5/2}, are the eigenstates of the spin-5/2 operator S_z^I. The P^I operators are therefore interpreted as density matrices. Notice that [P^I, P^{I'}]=0 for I≠I', since they act on different impurities, but [P_{n_1n_2}^I, P_{n_3n_4}^I] = P^I_{n_1n_4} δ_{n_2n_3} - P^I_{n_3n_2} δ_{n_1n_4}. Derived from the Hamiltonian in Eq. (<ref>), the Heisenberg equations of motion for the expectation values ⟨c_{m_1σ_1}^† c_{m_2σ_2}⟩ and ⟨P_{n_1n_2}^I⟩ read iħ ∂/∂t ⟨P^I_{n_1n_2}⟩ = J/V ∑_{nmσm'σ'} ρ_{mm'}^I 𝐬_{σσ'}·(𝐒_{n_2n} ⟨c^†_{mσ} c_{m'σ'} P_{n_1n}^I⟩ - 𝐒_{nn_1} ⟨c^†_{mσ} c_{m'σ'} P_{nn_2}^I⟩), iħ ∂/∂t ⟨c_{m_1σ_1}^† c_{m_2σ_2}⟩ = E_0 (m_2^2 - m_1^2) ⟨c_{m_1σ_1}^† c_{m_2σ_2}⟩ + J/V ∑_{Inn'mσ} 𝐒_{nn'}·(ρ_{mm_1}^I 𝐬_{σσ_1} ⟨c^†_{mσ} c_{m_2σ_2} P_{nn'}^I⟩ - ρ_{m_2m}^I 𝐬_{σ_2σ} ⟨c^†_{m_1σ_1} c_{mσ} P_{nn'}^I⟩). The dynamics introduced by the sd interaction in the two-point density matrices for the electron and each impurity in the system therefore depend solely on the three-point matrices ⟨c^† c P^I⟩. Instead of truncating the hierarchy at this level, we take one step further and add to Eqs. (<ref>) and (<ref>) the equations of motion for ⟨c^† c P^I⟩, which we express as iħ ∂/∂t ⟨c^†_{m_1σ_1} c_{m_2σ_2} P_{n_1n_2}^I⟩ = E_0 (m_2^2 - m_1^2) ⟨c_{m_1σ_1}^† c_{m_2σ_2} P_{n_1n_2}^I⟩ + J/V ∑_{nmσ} (𝐒_{n_2n}·𝐬_{σ_2σ} ρ_{mm_2}^I ⟨c^†_{m_1σ_1} c_{mσ} P_{n_1n}^I⟩ - 𝐒_{nn_1}·𝐬_{σσ_1} ρ_{mm_2}^I ⟨c^†_{mσ} c_{m_2σ_2} P_{nn_2}^I⟩) + Q; where the term Q (actually Q^I_{m_1σ_1m_2σ_2n_1n_2}) collects all contributions from four-point density matrices, including those of the indirect interaction between the impurities, and is defined as follows: Q = J/V ∑_{I≠I', nn'mσ} 𝐒_{nn'}·(𝐬_{σ_2σ} ρ_{mm_2}^{I'} ⟨c^†_{m_1σ_1} c_{mσ} P_{n_1n_2}^I P_{nn'}^{I'}⟩ - 𝐬_{σσ_1} ρ_{mm_2}^{I'} ⟨c^†_{mσ} c_{m_2σ_2} P_{n_1n_2}^I P_{nn'}^{I'}⟩).
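As a quick sanity check on the operator algebra entering these equations, the commutation relation of the P operators is easy to verify with explicit matrix units. The following minimal sketch is our own illustration, with indices 0,…,5 standing for n = 5/2,…,-5/2.

```python
import numpy as np

# Verify [P_{n1 n2}, P_{n3 n4}] = P_{n1 n4} δ_{n2 n3} - P_{n3 n2} δ_{n1 n4}
# for P_{n n'} = |n><n'| realized as 6x6 matrix units of the spin-5/2 space.
dim = 6
E = np.eye(dim)
P = lambda n1, n2: np.outer(E[n1], E[n2])          # |n1><n2|

rng = np.random.default_rng(0)
for _ in range(200):
    n1, n2, n3, n4 = rng.integers(0, dim, size=4)
    lhs = P(n1, n2) @ P(n3, n4) - P(n3, n4) @ P(n1, n2)
    rhs = P(n1, n4) * (n2 == n3) - P(n3, n2) * (n1 == n4)
    assert np.allclose(lhs, rhs)
print("commutation relation verified on 200 random index tuples")
```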
When only one impurity is present Q vanishes identically and the hierarchy does not develop further. In this case the set comprising Eqs. (<ref>), (<ref>) and (<ref>) is closed and the eigendecomposition of the full Hamiltonian can be worked out exactly without resorting to numerical methods.<cit.> Let us now truncate the hierarchy so as to obtain a set of equations that is closed at the three-point level. In order to do that we first apply the expansion described in Ref. [kub] to each of the four-point density matrices appearing in Q, and rewrite them as ⟨c^†_{m_1σ_1} c_{m_2σ_2} P^I P^{I'}⟩ = ⟨P^I⟩⟨c^†_{m_1σ_1} c_{m_2σ_2} P^{I'}⟩ + ⟨P^{I'}⟩⟨c^†_{m_1σ_1} c_{m_2σ_2} P^I⟩ + ⟨c^†_{m_1σ_1} c_{m_2σ_2}⟩ δ⟨P^I P^{I'}⟩ - ⟨c^†_{m_1σ_1} c_{m_2σ_2}⟩⟨P^I⟩⟨P^{I'}⟩ + δ⟨c^†_{m_1σ_1} c_{m_2σ_2} P^I P^{I'}⟩, where we omit the subindices of the operators P^I and P^{I'} for clarity. In this expression the factor δ⟨P^I P^{I'}⟩ is defined as δ⟨P^I P^{I'}⟩ ≐ ⟨P^I P^{I'}⟩ - ⟨P^I⟩⟨P^{I'}⟩, and the rightmost term contains, by definition, all contributions to the left-hand side that are not reducible to a factorized form similar to those of the other terms. Notice that the expansion in Eq. (<ref>) is exact as long as we do not neglect any term;<cit.> and it is also symmetric with respect to the indices I and I' that label the impurities, since, by definition, P^I and P^{I'} commute when I≠I'. Furthermore, it follows from the definition of P^I that it only makes sense to consider the case I≠I', because a four-point density matrix of the form ⟨c^†_{m_1σ_1} c_{m_2σ_2} P^I P^I⟩ reduces to a three-point one containing only one operator P^I. Rewriting Q using the expansion in Eq. (<ref>) makes explicit the contributions of the irreducible terms δ⟨P^I P^{I'}⟩ and δ⟨c^†_{λ_1} c_{λ_2} P^I P^{I'}⟩ to the dynamics of the system. Finally, to truncate the hierarchy at the three-point level we only need to neglect the latter (see Appendix <ref>). We remark at this point that truncating the hierarchy at the two-point level yields a set of equations that can be computed directly from a mean-field Hamiltonian. The resulting set of equations is obtained by substituting all three-point matrices ⟨c^† c P^I⟩ in Eqs.
(<ref>) and (<ref>) for their mean-field factorizations ⟨ c^†_λ_1 c_λ_2⟩⟨ P^I ⟩.<cit.>In any case, it is worth emphasizing that, regardless of thelevel at which the hierarchy is truncated, the approximationis performed on the density matrices and not on the Hamiltonianitself.In the following section we analyse how relevant these correlationsare to the dynamics when N is small and the system is initiallyin a pure state.We purposely choose configurations that are numerically tractablein the Schrödinger picture in order to have a reliable referencesolution to which the approximate ones can be compared.§ NUMERICAL RESULTSLet us consider a Zn_1-xMn_xTe QR inthe highly-diluted limitx≪ 1 (N/V≈10^-3 nm^-3, where V≈ 777 nm^3 is the volume of the ring and N=2,3the number of impurities we consider in this study.)To compute the ring's volume we assume an average height of 1.5 nm,<cit.> an inner radius ofa=14 nm,<cit.> and an effective and experimentally feasible width ofapproximately 8.4 nm.The latter parameter is estimated using a well-known modelthat assumes a parabolic radial confiningpotential.<cit.>In the highly-diluted limit, the bulk sd exchangeconstant for ZnTe is found to be J=11 meV nm^3 andlargely independent of the number of impurities.<cit.>This value in the bulk yields an effective coupling constantof J/V≈0.0142 meV.We also assume that in the highly-diluted limit the conduction-band effective mass of the (Zn,Mn)Te does not differ considerably fromthat of pure ZnTe, m^∗=0.2m_e, where m_e the bare electron mass.For the radius considered this effective mass yields aconduction-band energy scale of E_0≈0.972 meV, which isalmost two orders of magnitude larger than the effective sd coupling.Because E_0≫ J/V, the energy of the first excited radialstate is expected to be far above that of the groundstate R_0,<cit.>and the quasi-one-dimensional approximation is still valid, even thoughthe effective ring width is of the order of its effective radius. In the bulk the impurities in a highly-diluted DMS are expectedto be quite separated from each other.Even though the precise locations of the impurities cannot bepredicted during fabrication, in order to reproduce this conditionin the ring as accurately as possible we assume that they aremaximally separated from one another.In other words, we distribute them on the ring so that they forman N-sided regular polygon when N>2, and are diametricallyopposite when N=2. 
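Before turning to the results, it is worth noting how compact the exact benchmark is in practice: the ring Hamiltonian can be assembled as a dense matrix in the product basis |m⟩⊗|σ⟩⊗|n_1⟩⊗⋯⊗|n_N⟩ and exponentiated directly. The sketch below is our own illustration, not the authors' code; the helper functions and their names are choices, and the orbital cutoff |m| ≤ 5 (electronic energies up to 25 E_0) is one possible truncation. It builds H for N = 2 impurities placed diametrically opposite, with the parameter values quoted above.

```python
import numpy as np
from scipy.linalg import expm

def spin_matrices(s):
    """Sx, Sy, Sz for spin s, in units of ħ, basis ordered m = s, ..., -s."""
    m = np.arange(s, -s - 1.0, -1.0)
    sp = np.diag(np.sqrt(s * (s + 1.0) - m[1:] * (m[1:] + 1.0)), k=1)   # S_+
    return (sp + sp.T) / 2.0, (sp - sp.T) / 2.0j, np.diag(m)

E0, JV = 0.972, 0.0142            # meV, as quoted above
phis = [0.0, np.pi]               # N = 2 impurities, diametrically opposite
m_vals = np.arange(-5, 6)         # orbital cutoff |m| <= 5
s_el, S_imp = spin_matrices(0.5), spin_matrices(2.5)
dims = [m_vals.size, 2] + [6] * len(phis)

def embed(ops):                   # Kronecker product over all factor spaces
    out = np.array([[1.0 + 0.0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

H = embed([np.diag(E0 * m_vals.astype(float) ** 2)] + [np.eye(d) for d in dims[1:]])
for I, phi in enumerate(phis):
    rho = np.exp(1.0j * (m_vals[:, None] - m_vals[None, :]) * phi)      # ρ^I_{mm'}
    for a in range(3):            # s·S_I, component by component
        ops = [rho, s_el[a]] + [np.eye(6)] * len(phis)
        ops[2 + I] = S_imp[a]
        H += JV * embed(ops)

assert np.allclose(H, H.conj().T)        # hermiticity of the assembled matrix
U = expm(-1.0j * H * 0.1)                # propagator for ħ = 1, time in ħ/meV
```

An initial product state is then simply a vector in this 792-dimensional space, and expectation values such as ⟨S_z^I⟩(t) follow by repeated application of U.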
To carry out the numerical calculations we consider a sufficiently large basis of electronic states with a maximum energy of 25 E_0. We assume in all cases that the electron initially occupies a state of low energy (of the order of E_0) with definite SAM and OAM. Similarly, and for the sake of concreteness, we assume that each impurity is initially polarized on the xz plane and aligned at an angle β from the ring's axis. Such single-impurity states can be written as d^(5/2)(β)|S_z;5/2⟩, where d^(5/2)(β) is the Wigner small d matrix for spin 5/2 and |S_z;5/2⟩ the eigenstate of the S_z operator of maximum projection. Notice that an initial polarization on any other plane containing the ring's axis would describe the same dynamics if the electron is initially in an s_z eigenstate, because the full Hamiltonian is a scalar operator with respect to rotations of the total SAM. These initial conditions on the electron's and the impurities' states can be met experimentally: the former using twisted-light laser beams,<cit.> and the latter using suitable magnetic fields. In other words, we assume that the initial state of the whole system (electron plus impurities) is a ket of the form |mσ⟩|Mn_1⟩⋯|Mn_N⟩, where |mσ⟩ is the initial state of the electron, and |Mn_I⟩ is the initial state of the I-th impurity. At the onset of the dynamics no entanglement therefore exists between the electron and the impurities or between the impurities themselves. The two- and three-point density matrices ⟨ P^IP^I'⟩ and ⟨ c_m_1σ_1^†c_m_2σ_2P_n_1n_2^I⟩ are therefore equal to their mean-field factorizations and their respective correlated parts are zero. Finally, we integrate the equations on a time scale that is of the order of that of ultrafast interactions between photocarriers and impurities in DMS.<cit.> Let us first assume that all the impurities' spins have been maximally polarized along the x axis, that is, β=π/2, ⟨ S_x^I⟩=5ħ/2, and ⟨ S_y,z^I⟩=0 for all I. In Fig. <ref>a we pick one of the N impurities in the system and display the time evolution of its magnetization along the ring's axis. Which impurity we pick is immaterial, since all of them show the same spin dynamics as a consequence of their highly symmetrical spatial distribution on the ring (see Appendix <ref>). The solid line corresponds to the reference (exact) solution obtained in the Schrödinger picture, while the dashed and dash-dotted lines show the dynamics of the same quantity as described by the truncation scheme with and without the direct impurity-impurity correlation δ⟨ P^IP^I'⟩, respectively. We see that in the time range considered, the approximate solutions including and leaving out this latter correlation are in excellent agreement with one another and with the reference solution. When the spins are aligned at different angles, the approximate solutions differ from the reference by no more than 1% at the point where their separation reaches its global maximum (Fig. <ref>b). The same close correspondence is observed for a variety of randomly chosen initial states, as well as for the case in which the impurities' spins are aligned at different tilt angles. A particular example of this case is presented in Fig. <ref>c for N=2 and in Fig.
<ref>d for N=3. The addition of the equations for ⟨ P^IP^I'⟩ to the original set is of no consequence at all, as expected in the highly-diluted limit. We observe that in these situations, and when the average distance between the manganese atoms is large enough, the impurity-impurity exchange terms may be safely approximated by their mean-field contributions. The truncation scheme is, however, not so accurate in approximating the transitions into and out of the available electron states. For the impurities initially in the S_x eigenstate of maximum projection (β=π/2) and an electron in the |1↑⟩ state, the populations of the states |2↑⟩ and |2↓⟩ are, respectively, over- and underestimated throughout the time range considered (Fig. <ref>a). The discrepancy is worsened by the fact that the approximate populations eventually take on negative values that, because of their magnitude, cannot be ascribed to numerical error. This behavior is observed as well for smaller integration time steps and different initial states for the electron and the impurities' spins. In particular, it is observed when the latter are oriented at random; that is, when their state is initially described by the condition P^I_n_1n_2(t=0) = 1/6δ_n_1n_2 for all I (Fig. <ref>b). In treating this case we make the additional assumption that correlations δ⟨ c_m_1σ_1^†c_m_2σ_2P^I_n_1n_2⟩ take time to build up<cit.> and are therefore initially zero (that is, the electron's and the impurities' states are not initially entangled). However, regardless of the initial condition considered, the approximation always respects the hermiticity of all two- and three-point density matrices in the set of truncated equations. Negative values for the populations therefore indicate that the positive semi-definiteness of the electronic density matrix is not conserved during time evolution. This is revealed by the sign of its lowest eigenvalue, which is negative in all but one of the cases presented in Figs. <ref>a-b. In fact, the electronic density matrix becomes indefinite right at the first integration step not only for the particular case β=π/2, but also for other initial tilt angles (Fig. <ref>a), as well as for the case in which the impurities' spins are oriented completely at random (Fig. <ref>b). Notice that the hermiticity of each truncated density matrix can be guaranteed directly on the right-hand side of its equation of motion, since this property depends at most on the density matrices themselves. In contrast, their positive semi-definiteness requires a condition on their second time derivative, and therefore depends strongly on Q. The difference between both properties is most clearly reflected in the negative populations shown in Figs. <ref>a-b. Hermiticity of the whole density matrix requires the populations to be real (not necessarily positive), but its positive semi-definiteness requires in addition that they reach a local minimum whenever they become zero. From Eq.
(<ref>), it is not hard to see that this condition depends on the first time derivative of the quantities ⟨ c_m_1σ_1^†c_m_2σ_2P^I_n_1n_2⟩ and therefore directly on the truncated term Q. This is also the case for the other density matrices in the truncated set. Adding the direct impurity-impurity exchange term only introduces small corrections to the values of the populations, but is of no help in solving or reducing this problem. The ⟨ P^I⟩ matrices also lose their initial positive semi-definiteness as their elements evolve in time. It is only when the impurities' spins are maximally polarized along the axis of the ring that this problem does not arise. When this happens the spin part of the initial ket is the eigenstate of maximum projection (5N/2+1/2) of the total SAM, which is a conserved quantity. Neither the electron's nor the impurities' spins therefore change in time, since there are no other available states for them to flip to while keeping the maximum projection constant. Whether correlations of the electron-mediated impurity-impurity interaction are neglected or not is irrelevant to the dynamics in this case, and this is reflected in the conservation of the positive semi-definiteness of the electronic density matrix. Nevertheless, the relevance of these correlations for the computation of the observables grows as the number of available total spin states increases. This is clearly exemplified in Fig. <ref>b by the abrupt change in the relative difference between the approximate and the exact magnetization of the impurities when β=π/2.

§ CONCLUDING REMARKS

In this work we studied the quantum dynamics of a quasi-one-dimensional DMS quantum ring. The focus was on probing the methodological difficulties that appear when employing the Heisenberg equations of motion to calculate the dynamics of the electron and impurities' density matrices. Following a standard scheme for DMS in the bulk, we truncated the infinite hierarchy of equations by neglecting all direct impurity-impurity correlations and reducing the indirect electron-mediated interaction to its mean-field terms. Through this approach we obtained an approximate and numerically tractable set of equations that goes beyond traditional mean-field approximations of the full Hamiltonian.
In order to study the features and limitations of the truncation scheme, we considered a small system of one electron and a few magnetic impurities initially in a pure state. We integrated the exact time-dependent Schrödinger equation and used it to compute the impurities' magnetization and the population of the electronic states. These results set a benchmark that allowed us to assess the accuracy of the truncated set of equations. We found that neglecting the indirect impurity-impurity correlations altogether does not break the fundamental symmetries of the system, but it nevertheless leads to a time evolution that is not strictly Hamiltonian. For different initial states (with and without entanglement between the impurities) and a variety of initial configurations, we found that the energy, the total SAM, the number of particles, and the hermiticity of the density matrices are conserved to numerical precision. The conservation of the number of particles (i.e., the traces of the electronic and the ⟨ P^I⟩ density matrices) indicates that errors in the populations are, up to numerical precision, exactly compensated at each time step. However, for some populations we observed a small negative drift that leads them to take on negative values which could not be ascribed to numerical error. The positive semi-definiteness of the density matrices is therefore not conserved throughout the time range studied. In fact, in most cases it breaks down right at the first time step. From a theoretical point of view, this problem is rather serious and must be addressed before using the truncated set of equations to study the physics of a DMS QR in depth, particularly when no exact solution is at hand. In practice, however, we saw that the approximation yields a remarkably accurate estimate of an observable like the impurities' magnetization in the same time range. We conclude that, under certain conditions, the truncation scheme can still be applied to study the dynamics of some physical quantities, particularly those that are not too sensitive to errors in the populations, on time scales longer than those of traditional time-dependent perturbation theory. Finally, we mention that the problem of guaranteeing the positive semidefiniteness of a truncated or approximated density matrix has been studied in other areas of many-body physics. A case in point is the field of atomic and molecular physics and the theory of reduced density matrices (see Ref. [maz]), which provides methods for computing physical properties of systems with many interacting electrons using only low-order density matrices (that is, density matrices involving few electronic creation and annihilation operators). It is known that such methods can sometimes yield density matrices that are not positive semidefinite and need to be corrected. This is in fact possible, but the procedure for correcting (or “purifying”) one particular density matrix in general requires imposing conditions on other density matrices of lower order as well (see Ref. [alc] and references therein).
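To make these positivity diagnostics and the correction step concrete, the sketch below checks a density matrix through its lowest eigenvalue and then applies a generic eigenvalue-clipping projection onto the nearest positive semidefinite, unit-trace matrix (in the Frobenius norm). This is only a minimal illustration of the “purification” idea; it is not the constrained procedures of Refs. [maz] and [alc], and the function names are our own:

import numpy as np

def lowest_eigenvalue(rho):
    """Positivity diagnostic: rho is PSD iff its lowest eigenvalue is >= 0."""
    return np.linalg.eigvalsh(rho)[0]  # eigvalsh assumes Hermitian input

def project_to_psd(rho):
    """Nearest PSD, unit-trace matrix in Frobenius norm (eigenvalue clipping).

    This ignores the cross-constraints (energy, total SAM, consistency with
    other density matrices) discussed in the text; it only restores positivity.
    """
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)                 # drop negative populations
    rho_psd = (v * w) @ v.conj().T            # rebuild the matrix
    return rho_psd / np.trace(rho_psd).real   # renormalize the trace

# Example: a Hermitian, unit-trace matrix with a slightly negative population,
# mimicking the behavior reported for the truncated equations.
rho = np.diag([0.7, 0.32, -0.02])
print(lowest_eigenvalue(rho))             # -0.02 -> not a valid density matrix
rho_fixed = project_to_psd(rho)
print(np.linalg.eigvalsh(rho_fixed))      # all >= 0, trace == 1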
Such coupled requirements also arise for the problem presented here, as any further approximation carried out on any of the density matrices would also require guaranteeing the conservation of the energy and total SAM without breaking the symmetries of the system, which, as we mentioned, are respected by the original truncation scheme. Such constraints couple all density matrices. Some particular instances of this problem may be tackled using the techniques of semidefinite programming or the theory of convex optimization, for example by replacing each density matrix with its optimal projection onto the space of positive semidefinite ones that satisfy the required constraints. However, such a complex optimization problem would have to be solved after each numerical integration step, since positive semidefiniteness is not conserved, and even for a small number of impurities its computational cost may be prohibitively high. Furthermore, such mathematical approaches do not necessarily follow meaningful physical criteria that one may wish to enforce. A solution that tackles the core of the physical problem directly in the truncation scheme itself would therefore be more desirable. This work aims to contribute to the search for such a solution by singling out a serious drawback present in the conventional truncation scheme of the hierarchy of dynamical equations of motion of the density matrices.

§ ACKNOWLEDGMENTS

We gratefully acknowledge financial support from Universidad de Buenos Aires (UBACyT 2018, 20020170100711BA), CONICET (PIP 11220200100568CO), and ANPCyT (PICT-2020-SERIEA-01082).

§ CLOSURE OF THE HIERARCHY AT THE THREE-POINT LEVEL

The Heisenberg equations for the quantities ⟨ P^IP^I'⟩ involve only commutators of the form [P^IP^I', P^I''] which again yield terms proportional to P^IP^I'. As a consequence, the time evolution of each ⟨ P^IP^I'⟩ depends only on four-point density matrices ⟨ c^†_λ_1c_λ_2 P^IP^I'⟩. If, as in the expression for Q, these four-point matrices are expanded according to Eq. (<ref>) and all factors ⟨ P^IP^I'⟩ are expressed in terms of their correlated and mean-field parts, it follows that dropping the term δ⟨ c^†_λ_1 c_λ_2 P^I P^I'⟩ suffices to close the set of equations at the three-point level. It is therefore possible to add the Heisenberg equations for the quantities ⟨ P^IP^I'⟩ when I≠ I' to the original set containing Eqs.
(<ref>), (<ref>) and (<ref>) while keeping it closed at the three-point level and without introducing further approximations.

§ SYMMETRY OF THE IMPURITY SPIN DYNAMICS

Let us assume that the impurities are located at the vertices of an N-sided regular polygon and consider a rotation R in the dihedral group D_N. The operation RHR^† leaves the operators L_z^2 and 𝐒_I·𝐬 invariant, but shifts the arguments of the delta function operators by an integer multiple of 2π/N. This translation along the ring is a cyclic permutation that relocates the impurities to different vertices on the same polygon, because it maps each parameter φ_I to some other φ_I' (modulo 2π). The rotated Hamiltonian can also be obtained by permuting the operators 𝐒_I instead of shifting the delta potentials. That is, the operation RHR^† is equivalent to Ô_RHÔ_R^†, for some Ô_R that relabels the 𝐒_I without affecting the parameters φ_I. The operator Ô_R can be written as a product of pairwise permutations Ô_II'≐∑_n_1n_2P^I_n_1n_2P^I'_n_2n_1, each of which interchanges two impurities in H_sd by swapping the indices I and I' (I≠ I') of the operators 𝐒_I and 𝐒_I'. Notice that Ô_II'^-1 = Ô_I'I = Ô_II'^† = Ô_II', as required. Let us decompose the rotation R into a product of two rotations, R=R_OAMR_SAM, one acting only on the electron's OAM and the other acting on its spin and the impurities' SAM, and consider the ket |ψ⟩ = |mσ⟩|Mn⟩⋯|Mn⟩, where |mσ⟩ is an eigenstate of the operators L_z and s_z with eigenvalues m and σ, respectively, and |Mn⟩ an arbitrary single-impurity state that is repeated N times in the product. Notice that |ψ⟩ is an eigenstate of R_OAM and of Ô_R for any rotation R, since swapping any pair of |Mn⟩ factors in |ψ⟩ does not change the latter. Calling 𝒰(t) the time-evolution operator and S_z^I the z-component of 𝐒_I in the Schrödinger picture, we write ⟨ψ|𝒰^† S_z^I𝒰|ψ⟩ = ⟨ψ| Ô_R^† R 𝒰^†R^†Ô_R S_z^IÔ_R^† R 𝒰R^†Ô_R |ψ⟩ = ⟨ψ| R_SAM^†𝒰^† S_z^I'𝒰 R_SAM|ψ⟩ = ⟨ψ| 𝒰^† S_z^I'𝒰|ψ⟩. The second equality on the right-hand side follows from the equalities Ô_R^† S_z^IÔ_R = S_z^I' for some I'≠ I, and [S_z^I, R]=0 for all R. The third equality follows instead from the invariance of H with respect to spin-only rotations; that is, [R_SAM,H]=0.
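As a numerical sanity check of the permutation identities used above, the operator Ô_II' can be built explicitly on the two-impurity spin-5/2 space and verified to be the Hermitian, involutive SWAP operator; a minimal sketch, assuming the P_n_1n_2 = |n_1⟩⟨ n_2| representation used in the text:

import numpy as np

d = 6  # dimension of a single spin-5/2 impurity space (2S+1 with S=5/2)

# O_II' = sum_{n1,n2} P^I_{n1 n2} P^I'_{n2 n1}, with P_{n1 n2} = |n1><n2|,
# acting on the tensor product of the two impurity spaces.
O = np.zeros((d * d, d * d))
for n1 in range(d):
    for n2 in range(d):
        P_I = np.zeros((d, d)); P_I[n1, n2] = 1.0
        P_Ip = np.zeros((d, d)); P_Ip[n2, n1] = 1.0
        O += np.kron(P_I, P_Ip)

# O is the SWAP operator: Hermitian, unitary, and its own inverse.
assert np.allclose(O, O.T)               # O^dagger = O (real symmetric here)
assert np.allclose(O @ O, np.eye(d * d)) # O^2 = 1, hence O^{-1} = O

# It interchanges the two impurity states: O (|a> x |b>) = |b> x |a>.
a, b = np.random.rand(d), np.random.rand(d)
assert np.allclose(O @ np.kron(a, b), np.kron(b, a))
print("O_II' verified: Hermitian, involutive SWAP on the two-impurity space")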
"authors": [
"J. M. Lia",
"P. I. Tamborenea"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231127144359",
"title": "Case study of the validity of truncation schemes of kinetic equations of motion: few magnetic impurities in a semiconductor quantum ring"
} |
Mapping quantum circuits to shallow-depth measurement patterns based on graph states

§ INTRODUCTION

At present we know without a doubt that (a) the Sun produces its energy as a whole by fusing protons into [4]He, (b) electron neutrinos are by-products of these processes, and (c) an established fraction of such neutrinos have either muon or tau flavour by the time they interact with the detectors on Earth. These three facts, nowadays well established, are the result of more than six decades of research in theoretical and experimental astrophysics and particle physics. More specifically, the fusion of protons into [4]He is known to proceed via two different mechanisms: the pp-chain and the CNO-cycle <cit.>. In each of these two mechanisms electron neutrinos are produced in a well-known subset of reactions. In particular, in the pp-chain five fusion reactions among elements lighter than A = 8 produce neutrinos which are labeled by the parent reaction as pp, [7]Be, pep, [8]B, and hep neutrinos. In the CNO-cycle the abundance of C and N acts as a catalyst, and the [13]N and [15]O beta decays provide the primary source of neutrinos, while [17]F beta decay produces a subdominant flux. For each of these eight processes the spectral energy shapes of the produced neutrinos are known, but the calculation of the rate of neutrinos produced in each reaction requires dedicated modeling of the Sun. Over the years several Standard Solar Models (SSMs), able to describe the properties of the Sun and its evolution after entering the main sequence, have been constructed with an increasing level of refinement <cit.>. Such models are numerical calculations calibrated to match present-day surface properties of the Sun, and developed under the assumption that the Sun was initially chemically homogeneous and that mass loss is negligible at all moments during its evolution up to the present solar age τ_⊙ = 4.57 Gyr. The calibration is done in order to satisfy the constraints imposed by the current solar luminosity L_⊙, radius R_⊙, and surface metal-to-hydrogen abundance ratio (Z/X)_⊙. Refinements introduced over the years include more precise observational and experimental information about the input parameters (such as nuclear reaction rates and the surface abundances of different elements), more accurate calculations of constituent quantities (such as radiative opacity and equation of state), the inclusion of new physical effects (such as element diffusion), and the development of faster computers and more precise stellar evolution codes. The detection of solar neutrinos, with their extremely small interaction cross sections, enables us to see into the solar interior and verify directly our understanding of the Sun <cit.> – provided, of course, that one has an established model of the physics effects relevant to their production, interaction, and propagation. The Standard Model of particle physics was thought to be such an established framework, but it failed badly at its first attempt of this task, giving rise to the so-called “solar neutrino problem” <cit.>. Fortunately we stand here, almost fifty years after that first realization of the problem, with a different but well-established framework for the relevant effects in solar neutrino propagation. A framework in which the three flavour neutrinos (ν_e, ν_μ, ν_τ) of the Standard Model are mixed quantum superpositions of three massive states ν_i (i=1,2,3) with different masses. This allows for the
flavour of the neutrino to oscillate from production to detection <cit.>, and for non-trivial effects (the so-called LMA-MSW <cit.> flavour transitions) to take place when crossing dense regions of matter. Furthermore, due to the wealth of experiments exploring neutrino oscillations, the values of the neutrino properties governing the propagation of solar neutrinos (mass differences and mixing angles) are now precisely and independently known. Armed with this robust particle physics framework for neutrino production, propagation, and detection, it is possible to turn to the observation of solar neutrino experiments to test and refine the SSM. Unfortunately, soon after the particle physics side of the exercise was clarified, the construction of the SSM ran into a new puzzle: the so-called “solar composition problem”. In brief, SSMs built in the 1990's using the abundances of heavy elements on the surface of the Sun from Ref. <cit.> (GS98) had notable successes in predicting other observations, in particular helioseismology measurements such as the radial distributions of sound speed and density <cit.>. But in the 2000's new determinations of these abundances became available and pointed towards substantially lower values, as summarized in Ref. <cit.> (AGSS09). The SSMs built incorporating such lower metallicities failed to explain the helioseismic observations <cit.>. For almost two decades there was no successful solution of this puzzle, as changes in the modeling of the Sun did not seem able to account for this discrepancy <cit.>. Consequently two different sets of SSMs were built, each based on the corresponding set of solar abundances <cit.>. With this in mind, in Refs. <cit.> we performed a solar-model-independent analysis of the solar and terrestrial neutrino data available at the time, in the framework of three-neutrino masses and mixing, where the flavour parameters and all the solar neutrino fluxes were simultaneously determined with a minimum set of theoretical priors. The results were compared with the two variants of the SSM, but they were not precise enough to provide a significant discrimination. Since then there have been a number of developments. First of all, a substantial amount of relevant data has been accumulated, in particular the full spectral information of the Phase-II <cit.> and Phase-III <cit.> of Borexino and their results based on the correlated integrated directionality (CID) method <cit.>. All of them have resulted in the first positive observation of the neutrino fluxes produced in the CNO-cycle, which are particularly relevant for discrimination among the SSMs. On the model front, an update of the AGSS09 results was recently presented by the same group (AAG21) <cit.>, though leading only to a slight upward revision of the solar metallicity. Most interestingly, almost simultaneously a new set of results (MB22) <cit.>, based on similar methodologies and techniques but with different atomic input data for the critical oxygen lines among other differences, led to a substantial change in solar elemental abundances with respect to AGSS09 (see the original reference for details). The outcome is a set of solar abundances based on three-dimensional radiation hydrodynamic solar atmosphere models and line formation treated under non-local thermodynamic equilibrium that yields a total solar metallicity comparable to that of the “high-metallicity” results of GS98. Another issue which has come up in the interpretation of the solar neutrino results is the appearance of the so-called
“gallium anomaly”. In brief, it refers to the deficit of the rate of events observed in Gallium source experiments with respect to the expectation. It was originally observed in the calibration of the gallium solar-neutrino detectors GALLEX <cit.> and SAGE <cit.> with radioactive [51]Cr and [37]Ar sources, and it has been recently confirmed with high statistical significance by the BEST collaboration with a dedicated source experiment using a [51]Cr source <cit.>. The solution of this puzzle is an open question in neutrino physics (see Ref. <cit.> and references therein for a recent discussion of – mostly unsuccessful – attempts at explanations in terms of standard and non-standard physics scenarios). In particular, in the framework of 3ν oscillations the attempts at explanation (or at least alleviation) of the anomaly invoke the uncertainties of the capture cross section <cit.>. With this motivation, in this work we have studied the (in)sensitivity of our results to the intrinsic uncertainty on the observed neutrino rates in the Gallium experiments posed by a possible modification of the capture cross section in Gallium or, equivalently, of the detection efficiency of the Gallium solar neutrino experiments. All these developments motivate the new analysis which we present in this paper with the following outline. In Sec. <ref> we describe the assumptions and methodology followed in our study of the neutrino data. As mentioned above, this work builds upon our previous solar flux determination in Refs. <cit.>. Thus in Sec. <ref>, for convenience, we summarize the most prominent elements common to those analyses but, most importantly, we detail the relevant points in which the present analysis method deviates from them. The new determination of the solar fluxes is presented in Sec. <ref>, where we also discuss and quantify the role of the Gallium experiments and address their robustness with respect to the Gallium anomaly. In Sec. <ref> we have a closer look at the determination of the neutrino fluxes from the CNO-cycle and its dependence on the assumptions on the relative normalization of the fluxes produced in the three relevant reactions. In Sec. <ref> we compare our determined fluxes with the predictions of the SSMs in the form of a parameter goodness of fit test, and quantify how the output of the test depends on the assumptions made in the analysis. We summarize our findings in Sec. <ref>.
The article is supplemented with a detailed Appendix <ref> in which we document our analysis of the Borexino Phase-III spectral data (and also their recent analysis employing the CID method).

§ ANALYSIS FRAMEWORK

In the analysis of solar neutrino experiments we include the total rates from the radiochemical experiments Chlorine <cit.>, Gallex/GNO <cit.>, and SAGE <cit.>, the spectral and day-night data from the four phases of Super-Kamiokande <cit.>, the results of the three phases of SNO in terms of the parametrization given in their combined analysis <cit.>, and the full spectra from Borexino Phase-I <cit.>, Phase-II <cit.>, and Phase-III <cit.>, together with their latest results based on the correlated integrated directionality (CID) method <cit.>. Details of our Borexino Phase-III and CID data analysis, which is totally novel in this article, are presented in Appendix <ref>. In the framework of three neutrino masses and mixing the expected values for these solar neutrino observables depend on the parameters Δm^2_21, θ_12, and θ_13, as well as on the normalizations of the eight solar fluxes. Thus, besides solar experiments, we also include in the analysis the separate DS1, DS2, DS3 spectra from KamLAND <cit.>, which in the framework of three neutrino mixing also yield information on the parameters Δm^2_21, θ_12, and θ_13. In what follows we will use as normalization parameters for the solar fluxes the reduced quantities: f_i = Φ_i/Φ_i^ref with i = pp, [7]Be, pep, [13]N, [15]O, [17]F, [8]B, and hep. In this work the numerical values of Φ_i^ref are set to the predictions of the latest GS98 solar model, presented in Ref. <cit.>. They are listed in Table <ref>. The methodology of the analysis presented in this work builds upon our previous solar flux determination in Refs. <cit.>, which we briefly summarize here for convenience, but it also presents a number of differences, beyond the additional data included, as described next. The theoretical predictions for the solar and KamLAND observables depend on eleven parameters: the eight reduced solar fluxes f_pp, …, f_hep, and the three relevant oscillation parameters Δm^2_21, θ_12, θ_13. In our analysis we include as well the complementary information on θ_13 obtained after marginalizing over Δm^2_3ℓ, θ_23 and δ_CP the results of all the other oscillation experiments considered in NuFIT-5.2 <cit.>.
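A minimal sketch of how such external θ_13 information can be folded into the fit as a Gaussian penalty before marginalization (the function name and the printed scan are our own illustration):

def chi2_prior_sin2theta13(s2t13, mean=0.0223, sigma=0.0006):
    """Gaussian prior on sin^2(theta_13) from the non-solar oscillation data,
    added to the solar+KamLAND chi^2 before marginalization."""
    return ((s2t13 - mean) / sigma) ** 2

# Example: the penalty varies little across the narrow allowed range,
# which is why fixing theta_13 = 8.59 deg gives indistinguishable results.
for s2t13 in (0.0217, 0.0223, 0.0229):
    print(s2t13, chi2_prior_sin2theta13(s2t13))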
This results in a prior sin^2θ_13 = 0.0223± 0.0006, i.e., θ_13 = 8.59^∘(1± 0.014). Given the weak dependence of the solar and KamLAND observables on θ_13, including such a prior yields results which are indistinguishable from just fixing the value of θ̅_13=8.59^∘. Throughout this work, we follow a frequentist approach in order to determine the allowed confidence regions for these parameters (unlike in our former works <cit.>, where we used instead a Bayesian analysis to reconstruct their posterior probability distribution function). To this end we make use of the experimental data from the various solar and KamLAND samples (D_solar and D_KamLAND, respectively) as well as the corresponding theoretical predictions (which depend on ten free parameters, as explained above) to build the χ^2 statistical function χ^2_global(ω⃗_flux, ω⃗_osc) ≡χ^2_solar(D_solar | ω⃗_flux, ω⃗_osc) + χ^2_KamLAND(D_KamLAND | ω⃗_osc), with ω⃗_flux≡ (f_pp, …, f_hep) and ω⃗_osc≡ (Δm^2_21, θ_12, θ̅_13). In order to scan this multidimensional parameter space efficiently, we make use of the MultiNest <cit.> and Diver <cit.> algorithms. The allowed range for the solar fluxes is further reduced when imposing the so-called “luminosity constraint”, i.e., the requirement that the overall sum of the thermal energy generated together with each solar neutrino flux coincides with the solar luminosity <cit.>: L_⊙/[4π(A.U.)^2] = ∑_i=1^8 α_i Φ_i. Here the constant α_i is the energy released into the star by the nuclear fusion reactions associated with the i^th neutrino flux; its numerical value is independent of details of the solar model to an accuracy of one part in 10^4 or better. A detailed derivation of this equation and the numerical values of the coefficients α_i is presented in Ref. <cit.>, with a refinement following in <cit.>. Here, we use the original formulation from <cit.>. The coefficients are listed in Table <ref>. In terms of the reduced fluxes Eq. (<ref>) can be written as: 1 = ∑_i=1^8 β_i f_i with β_i ≡α_i Φ_i^ref / (L_⊙/[4π(A.U.)^2]), where β_i is the fractional contribution to the total solar luminosity of the nuclear reactions responsible for the production of the Φ_i^ref neutrino flux. In Refs. <cit.> we adopted the best-estimate value for the solar luminosity L_⊙/[4π(A.U.)^2] = 8.5272 × 10^11 MeV cm^-2 s^-1 given in Ref. <cit.>, which was obtained from all the available satellite data <cit.>. This value was revised in Ref. <cit.> using an updated catalog and calibration methodology (see Ref. <cit.> for a detailed comparative discussion), yielding a slightly lower result which is now the reference value listed by the PDG <cit.> and leads to L_⊙/[4π(A.U.)^2] = 8.4984 × 10^11 MeV cm^-2 s^-1. In this work we adopt this new value when evaluating the β_i coefficients listed in Table <ref>. Furthermore, in order to account for the systematics in the extraction of the solar luminosity we now assign an uncertainty of 0.34% to the constraint in Eq.
(<ref>), which we conservatively derive from the range of variation of the estimates of L_⊙. In what follows we will present results with and without imposing the luminosity constraint. For the analysis including the luminosity constraint we add a prior χ^2_LC(ω⃗_flux) = (1-∑_i=1^8 β_i f_i)^2/(0.0034)^2. Besides the imposition of the luminosity constraint in some of the analyses, the flux normalizations are allowed to vary freely within a set of physical constraints. In particular:

* The fluxes must be positive: Φ_i ≥ 0 ⇒ f_i ≥ 0.

* Consistency in the pp-chain implies that the number of nuclear reactions terminating the pp-chain should not exceed the number of nuclear reactions which initiate it <cit.>: Φ_[7]Be + Φ_[8]B ≤ Φ_pp + Φ_pep ⇒ 8.12 × 10^-2 f_[7]Be + 8.42 × 10^-5 f_[8]B ≤ f_pp + 2.38 × 10^-3 f_pep.

* The ratio of the pep neutrino flux to the pp neutrino flux is fixed to high accuracy because they have the same nuclear matrix element. We have constrained this ratio to match the average of the values in the five B23 SSMs (Sec. <ref>), with a 1σ Gaussian uncertainty given by the difference between the values in the five models: f_pep/f_pp = 1.004 ± 0.018. Technically we implement this constraint by adding a Gaussian prior χ^2_pep/pp(f_pp,f_pep) ≡ [(f_pep/f_pp - 1.004)/0.018]^2.

* For the CNO fluxes (Φ_[13]N, Φ_[15]O, and Φ_[17]F) a minimum set of assumptions required by consistency are:

* The [14]N(p,γ) [15]O reaction must be the slowest process in the main branch of the CNO-cycle <cit.>: Φ_[15]O ≤ Φ_[13]N ⇒ f_[15]O ≤ 1.35 f_[13]N.

* The CNO-II branch must be subdominant: Φ_[17]F ≤ Φ_[15]O ⇒ f_[17]F ≤ 40 f_[15]O.

The conditions quoted above are all dictated by solar physics. However, more practical reasons require that the CNO fluxes are treated with special care. As discussed in detail in Appendix <ref>, the analysis of the Borexino Phase-III spectra in Ref. <cit.> (which we closely reproduce) has been optimized by the collaboration to maximize the sensitivity to the overall CNO production rate, and therefore it may not be directly applicable to a situation where the three [13]N, [15]O and [17]F flux normalizations are left totally free, subject only to the conditions in Eqs. (<ref>) and (<ref>). Hence, following the approach of the Borexino collaboration in Ref. <cit.>, we first perform an analysis where the three CNO components are all scaled simultaneously by a unique normalization parameter while their ratios are kept fixed as predicted by the SSMs. In order to avoid a bias towards one of the different versions of the SSM we have constrained the two ratios to match the average of the values of the five B23 SSMs: Φ_[15]O/Φ_[13]N = 0.73 and Φ_[17]F/Φ_[13]N = 0.016 ⇒ f_[15]O/f_[13]N = 0.98 and f_[17]F/f_[13]N = 0.85. In these analyses, which we label <<CNO-Rfixed>>, the conditions in Eq. (<ref>) effectively reduce the number of free parameters from ten to eight, namely the two oscillation parameters in ω⃗_osc and six flux normalizations in ω⃗_flux^CNO-Rfixed≡ (f_pp, f_[7]Be, f_pep, f_[13]N, f_[15]O = 0.98 f_[13]N, f_[17]F = 0.85 f_[13]N, f_[8]B, f_hep). In Sec. <ref> we will discuss and quantify the effect of relaxing the condition of fixed CNO ratios.

§ NEW DETERMINATION OF SOLAR NEUTRINO FLUXES

We present first the results of our analysis with the luminosity constraint and the ratios of the CNO fluxes fixed by the relations in Eq. (<ref>), so that altogether for this case we construct the χ^2 function χ^2_wLC,CNO-Rfixed ≡ χ^2_global(ω⃗_osc, ω⃗_flux^CNO-Rfixed) + χ^2_pep/pp(f_pp,f_pep) + χ^2_LC(ω⃗_flux^CNO-Rfixed).
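For illustration, the priors and hard physical constraints listed above can be assembled into a single penalty term as in the following sketch (names, dictionary keys, and the infinite-penalty convention for hard constraints are our own; the experimental χ^2 terms are not shown):

import numpy as np

def chi2_priors(f, beta, with_lc=True):
    """Prior penalties on the reduced fluxes f (dict keyed by flux label).

    beta: fractional luminosity contributions beta_i (all eight flux keys).
    Hard physical constraints are imposed by returning an infinite penalty.
    """
    # Positivity and pp-chain consistency
    if any(v < 0 for v in f.values()):
        return np.inf
    if 8.12e-2 * f["Be7"] + 8.42e-5 * f["B8"] > f["pp"] + 2.38e-3 * f["pep"]:
        return np.inf
    # CNO consistency: 15O limited by 13N, 17F subdominant
    if f["O15"] > 1.35 * f["N13"] or f["F17"] > 40 * f["O15"]:
        return np.inf

    chi2 = (f["pep"] / f["pp"] - 1.004) ** 2 / 0.018**2   # pep/pp prior
    if with_lc:                                           # luminosity constraint
        lc = sum(beta[k] * f[k] for k in beta)
        chi2 += (1.0 - lc) ** 2 / 0.0034**2
    return chi2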
The results of this analysis are displayed in Fig. <ref>, where we show the two- and one-dimensional projections of Δχ^2_wLC,CNO-Rfixed. From these results one reads that the ranges at 1σ (and at the 99% CL in square brackets) for the two oscillation parameters are: Δm^2_21 = 7.43_-0.30^+0.30[_-0.49^+0.44] × 10^-5 eV^2, sin^2θ_12 = 0.300_-0.017^+0.020[_-0.027^+0.031], which are very similar to the results of NuFIT-5.2 <cit.> with the expected slight enlargement of the allowed ranges. In other words, within the 3ν scenario the data is precise enough to simultaneously constrain the oscillation parameters and the normalizations of the solar flux components without resulting in a substantial degradation of the former. As for the solar fluxes, the corresponding ranges read:

f_pp = 0.9969_-0.0039^+0.0041[_-0.0092^+0.0095], Φ_pp = 5.941_-0.023^+0.024[_-0.055^+0.057] × 10^10 cm^-2 s^-1 ,
f_[7]Be = 1.019_-0.017^+0.020[_-0.041^+0.047], Φ_[7]Be = 4.93_-0.08^+0.10[_-0.20^+0.23] × 10^9 cm^-2 s^-1 ,
f_pep = 1.000_-0.018^+0.016[_-0.042^+0.041], Φ_pep = 1.421_-0.026^+0.023[_-0.060^+0.058] × 10^8 cm^-2 s^-1 ,
f_[13]N = 1.25_-0.14^+0.17[_-0.40^+0.47], Φ_[13]N = 3.48_-0.40^+0.47[_-1.10^+1.30] × 10^8 cm^-2 s^-1 ,
f_[15]O = 1.22_-0.14^+0.17[_-0.39^+0.46], Φ_[15]O = 2.53_-0.29^+0.34[_-0.80^+0.94] × 10^8 cm^-2 s^-1 ,
f_[17]F = 1.03_-0.20^+0.20[_-0.48^+0.47], Φ_[17]F = 5.51_-0.63^+0.75[_-1.75^+2.06] × 10^7 cm^-2 s^-1 ,
f_[8]B = 1.036_-0.020^+0.020[_-0.048^+0.047], Φ_[8]B = 5.20_-0.10^+0.10[_-0.24^+0.24] × 10^6 cm^-2 s^-1 ,
f_hep = 3.8_-1.2^+1.1[_-2.7^+2.7], Φ_hep = 3.0_-1.0^+0.9[_-2.1^+2.2] × 10^4 cm^-2 s^-1 .

Notice that in Fig. <ref> we separately plot the ranges for the three CNO flux normalization parameters; however, they are fully correlated since their ratios are fixed, which explains the thin straight-line shape of the regions as seen in the three corresponding panels. Compared to the results from our previous analysis we now find that all the fluxes are clearly determined to be non-zero, while in Refs. <cit.> only an upper bound for the CNO fluxes was found. This is a direct consequence of the positive evidence of neutrinos produced in the CNO cycle provided by Borexino Phase-III spectral data, which is here confirmed in a fully consistent global analysis. We will discuss this point in more detail in Sec. <ref>. We also observe that the inclusion of the full statistics of Borexino has improved the determination of f_[7]Be by a factor 𝒪(3). Figure <ref> exhibits the expected correlation between the allowed ranges of the pp and pep fluxes, which is a consequence of the relation (<ref>). This correlation is somewhat weaker than that observed in the corresponding analysis in Ref. <cit.> because the spectral information from Borexino Phase-II and Phase-III provides now some independent information on f_pep. We also observe the presence of anticorrelation between the allowed ranges of the two most intense fluxes, pp and [7]Be, as dictated by the luminosity constraint (see comparison with Fig. <ref>). Finally we notice that the allowed ranges of f_[7]Be and f_[8]B – the two most precisely determined flux normalizations, irrespective of the luminosity constraint (see Fig. <ref>) – are anticorrelated. This is a direct consequence of the different dependence of the survival probability on sin^2θ_12 in their respective energy ranges.
[8]B neutrinos have energies of the order of several MeV, for which the flavour transition occurs in the MSW regime and the survival probability P_ee∝sin^2θ_12. Hence an increase in sin^2θ_12 must be compensated by a decrease of f_[8]B to get the correct number of events, which leads to the anticorrelation between sin^2θ_12 and f_[8]B seen in the corresponding panel in Fig. <ref>. On the contrary, most [7]Be neutrinos have an energy of 0.86 MeV (some have 0.38 MeV), and for that energy the flavour transition occurs in the transition regime between MSW and vacuum-averaged oscillations, for which P_ee decreases with sin^2θ_12. Hence the correlation between sin^2θ_12 and f_[7]Be seen in the corresponding panel. Altogether, this leads to the anticorrelation observed between f_[7]Be and f_[8]B. This was already mildly present in the results in Ref. <cit.> but it is now a more prominent feature because of the more precise determination of f_[7]Be. All these results imply the following share of the energy production between the pp-chain and the CNO-cycle: L_pp-chain/L_⊙ = 0.9919_-0.0030^+0.0035[_-0.0077^+0.0082] ⟺ L_CNO/L_⊙ = 0.0079_-0.0011^+0.0009[_-0.0026^+0.0028], in perfect agreement with the SSMs, which predict L_CNO/L_⊙ ≤ 1% at the 3σ level. Once again we notice that in the present analysis the evidence for L_CNO ≠ 0 clearly stands well above 99% CL. We next show in Fig. <ref> the results of the analysis performed without imposing the luminosity constraint – but still with the ratios of the CNO fluxes fixed by the relations in Eq. (<ref>) – for which we employ χ^2_woLC,CNO-Rfixed ≡ χ^2_global(ω⃗_osc, ω⃗_flux^CNO-Rfixed) + χ^2_pep/pp(f_pp,f_pep). The allowed ranges for the fluxes in this case are:

f_pp = 1.038_-0.066^+0.076[_-0.16^+0.18], Φ_pp = 6.19_-0.39^+0.45[_-1.0^+1.1] × 10^10 cm^-2 s^-1 ,
f_[7]Be = 1.022_-0.018^+0.022[_-0.042^+0.051], Φ_[7]Be = 4.95_-0.089^+0.11[_-0.22^+0.25] × 10^9 cm^-2 s^-1 ,
f_pep = 1.039_-0.065^+0.082[_-0.16^+0.19], Φ_pep = 1.48_-0.09^+0.11[_-0.22^+0.26] × 10^8 cm^-2 s^-1 ,
f_[13]N = 1.16_-0.19^+0.19[_-0.45^+0.50], Φ_[13]N = 3.32_-0.54^+0.53[_-1.24^+1.40] × 10^8 cm^-2 s^-1 ,
f_[15]O = 1.16_-0.19^+0.19[_-0.44^+0.49], Φ_[15]O = 2.41_-0.39^+0.38[_-0.90^+1.02] × 10^8 cm^-2 s^-1 ,
f_[17]F = 1.01_-0.16^+0.16[_-0.38^+0.45], Φ_[17]F = 5.25_-0.85^+0.84[_-1.97^+2.21] × 10^7 cm^-2 s^-1 ,
f_[8]B = 1.034_-0.021^+0.020[_-0.051^+0.052], Φ_[8]B = 5.192_-0.11^+0.10[_-0.26^+0.26] × 10^6 cm^-2 s^-1 ,
f_hep = 3.6_-1.1^+1.2[_-2.6^+3.0], Φ_hep = 2.9_-0.9^+1.0[_-2.1^+2.4] × 10^4 cm^-2 s^-1 .

As expected, the pp flux is the most affected by the release of the luminosity constraint, as it is this reaction which gives the largest contribution to the solar energy production and therefore its associated neutrino flux is the one most strongly bounded when imposing the luminosity constraint. The pep flux is also affected due to its strong correlation with the pp flux, Eq. (<ref>). The CNO fluxes are mildly affected in an indirect way due to the modified contribution of the pep flux to the Borexino spectra. Thus we find that the fractions of energy produced in the pp-chain and the CNO-cycle without imposing the luminosity constraint are given by: L_pp-chain/L_⊙ = 1.030_-0.061^+0.070[_-0.15^+0.17] and L_CNO/L_⊙ = 0.0075_-0.0013^+0.0013[_-0.0029^+0.0030]. Comparing Eqs.
(<ref>) and (<ref>) we see that, while the amount of energy produced in the CNO cycle is about the same in both analyses, releasing the luminosity constraint allows for a larger production of energy in the pp-chain. So in this case we find that the present value of the ratio of the neutrino-inferred solar luminosity, L_⊙(neutrino-inferred), to the photon-measured luminosity L_⊙ is: L_⊙(neutrino-inferred)/L_⊙ = 1.038_-0.060^+0.069[_-0.15^+0.17]. The neutrino-inferred luminosity is in good agreement with the one measured in photons, with a 1σ uncertainty of ∼ 6%. This represents only a very small variation with respect to the previous best determination <cit.>. Such a result is expected because the determination of the pp flux, which, as mentioned above, gives the largest contribution to the neutrino-inferred solar luminosity, has not improved appreciably with the inclusion of the full statistics of the phases II and III of Borexino. We finish this section by discussing the role of the Gallium experiments in these results, with the aim of addressing the possible impact of the Gallium anomaly <cit.>. As described in the introduction, this anomaly consists of a deficit of the event rate observed in Gallium source experiments with respect to the expectation, which represents an obvious puzzle for the interpretation of the results of the solar neutrino Gallium measurements. In this work we assume the well-established standard 3ν oscillation scenario, and in this context the attempts at explanation (or at least alleviation) of the anomaly invoke the uncertainties of the capture cross section <cit.>. Thus the open question posed by the Gallium anomaly is the possible impact of such a modification of the cross section on the results of our fit. In order to quantify this we performed two additional variants of our analysis. In the first one we introduce an additional parameter, f_GA, which multiplies the predicted event rates from all solar fluxes in the Gallium experiments. This parameter is left free to vary in the fits and would mimic an energy-independent modification of the capture cross section (or equivalently of the detection efficiency). In the second variant we simply drop the Gallium experiments from our global fit. The results of these explorations are shown in Fig.
<ref>, where we plot the most relevant marginalized one-dimensional projections of Δχ^2 for these two variants. The upper (lower) panels correspond to the analyses performed with (without) the luminosity constraint. The left panel shows the projection over the normalization parameter f_GA obtained in the variant of the analysis which makes use of this parameter. As seen from the figure, the results of the fit favour f_GA close to one or, in other words, the global analysis of the solar experiments does not support a modification of the neutrino capture cross section in Gallium (or any other effect inducing an energy-independent reduction of the detection efficiency in the Gallium experiments). This is so because, within the 3ν oscillation scenario, the global fit implies a rate of pp and [7]Be neutrinos in the Gallium experiments which is in good agreement with the luminosity constraint as well as with the rates observed in Borexino. On the central and right panels of the figure we show the corresponding modification of the marginalized one-dimensional projections of Δχ^2 on the pp and pep flux normalizations, which are those mostly affected in these variants. For the sake of comparison, in the upper and lower panels we also plot the results for the f_GA = 1 analysis (also visible in the corresponding windows in Figs. <ref> and <ref>, respectively). The figure illustrates that once the luminosity constraint is imposed, the determination of the solar fluxes is totally unaffected by the assumptions about the capture rate in Gallium. As seen in the lower panels, even without the luminosity constraint the impact on the pp and pep determination is marginal, which emphasizes the robustness of the flux determination in Eqs. (<ref>) and (<ref>). This is the case thanks to the independent precise determination of the pp flux in the phases I and II of Borexino. Furthermore, the small modification is the same irrespective of whether the Gallium capture rate is left free or completely removed from the analysis; this is due to the lack of spectral and day-night capabilities in Gallium experiments, which prevents them from providing further information beyond the overall normalization scale of the signal.

§ EXAMINATION OF THE DETERMINATION OF THE CNO FLUXES

As mentioned above, one of the most important developments in the experimental determination of the solar neutrino fluxes in the last years has been the evidence of neutrinos produced in the CNO cycle reported by Borexino <cit.>. The detection was made possible thanks to a novel method to constrain the rate of the [210]Bi background. In Ref. <cit.>, using a partial sample of their Phase-III data, the collaboration found a 5.1σ significance of the CNO flux observation, which increased to 7σ with the full Phase-III statistics <cit.>, and to about 8σ when combined with the CID method <cit.>. See Appendix <ref> for details. Key ingredients in the analysis performed by the collaboration in Refs. <cit.> (and therefore in the derivation of these results) are the assumptions about the relative contribution of the three reactions producing neutrinos in the CNO cycle, as well as those about other solar fluxes in the same energy range, in particular the pep neutrinos. In a nutshell, as mentioned above, the collaboration assumes a common shift of the normalization of the CNO fluxes with respect to that of the SSM, and it is the evidence of a non-zero value of such normalization which is quantified in Refs.
<cit.>.In what respects the rate from the pep flux, the SSM expectation was assumed because the Phase-III data by itself does not allow to constraint simultaneously the CNO and pep flux normalizations.In this respect, the global analysis presented in the previous section are performed under the same paradigm of a common shift normalization of the CNO fluxes, but being global, the pep flux normalization is also simultaneously fitted.For the sake of comparison we reproduce in Fig. <ref> the projection of the marginalized Δχ^2_wLC,CNO-Rfixed (<ref>) and Δχ^2_woLC,CNO-Rfixed (<ref>) on the normalization parameters for the three CNO fluxes.For convenience we also show the projections as a function of the total neutrino flux produced in the CNO cycle.As seen in the figure the results of the analysis (either with or without luminosity constraint) yield a value of Δχ^2 well beyond 3σ for Φ_CNO=0.A dedicated run for this no-CNO scenario case gives Δχ^2 = 54 (33) for the analysis with (without) luminosity constraints, and it is therefore excluded at 7.3σ (5.7σ) CL.In order to study the dependence of the results on the assumption of a unique common shift of the normalization of three CNO fluxes we explored the possibility of making a global analysis in which the three normalization parameters are varied independently.As mentioned above, a priori the three normalizations only have to be subject to a minimum set of consistency relations in Eqs. (<ref>) and (<ref>).However, as discussed in detail in Sec. <ref>, the background model in Refs. <cit.> only assumes an upper bound on the amount of [210]Bi and cannot be reliably employed to such general analysis because of the larger degeneracy between the [210]Bi background and the [13]N flux spectra.With this limitation in mind, we proceed to perform two alternative analysis (with and without imposing the luminosity constraints) in which the normalization of the three CNO fluxes are left free to vary independently but with ratios constrained within a range broad enough to generously account for all variants of the B23 SSM, but not to extend into regions of the parameter space where the assumptions on the background model may not be applicable.Conservatively neglecting correlations between their theoretical uncertainties, the neutrino fluxes of SSMs presented in Ref. <cit.> and available publicly through a public repository <cit.> verify f_[15]O/f_[13]N =1.00 (1± 0.24) 0.95 (1± 0.22) 0.96 (1± 0.21) 1.01 (1± 0.23) 1.00 (1± 0.23) f_[17]F/f_[13]N =1.00 (1± 0.25)B23-GS980.84 (1± 0.23)B23-AGSS09-met0.80 (1± 0.20)B23-AAG210.79 (1± 0.22)B23-MB22-met0.79 (1± 0.22)B23-MB22-phot Thus in these analyses, here onward labeled <<CNO-Rbound>>, we introduce two pulls ξ_1 and ξ_2 for these two ratios so that ω⃗_flu^CNO-Rbound≡ (f_pp,f_[7]Be,f_pep,f_[13]N,f_[15]O = 0.98 ξ_1 f_[13]N,f_[17]F = 0.85 ξ_2f_[13]N,f_[8]B,f_hep) and add two Gaussian priors for these pulls, so that the corresponding χ^2 function without the luminosity constraint is: χ^2_woLC,CNO-Rbound≡χ^2_global(ω⃗_osc, ω⃗_flux^CNO-Rbound) + χ^2_pep/pp(f_pp,f_pep) + (ξ_1 - 1)^2/σ_ξ_1^2 + (ξ_2 - 1)^2/σ_ξ_2^2 with σ_ξ_1=0.26 and σ_ξ_2=0.48, chosen to cover the ranges in Eq. (<ref>).In addition f_[13]N, f_[15]O, and f_[17]F are required to verify the consistency relations in Eqs. (<ref>) and (<ref>).The χ^2 function with the luminosity constraint is obtained by further including the χ^2_LC prior of Eq. (<ref>): χ^2_wLC,CNO-Rbound≡χ^2_woLC,CNO-Rbound + χ^2_LC(ω⃗_flux^CNO-Rbound). We plot in Fig. 
<ref> the projection of the marginalized Δχ^2_wLC,CNO-Rbound (<ref>) and Δχ^2_woLC,CNO-Rbound (<ref>) on the normalization parameters for the three CNO fluxes, as well as on the total neutrino flux produced in the CNO cycle. As seen in the figure, allowing for the ratios of the CNO normalizations to vary within the intervals (<ref>) has little impact on the allowed range of the [15]O flux and on the lower limit of the [13]N and [17]F fluxes. As a consequence, the CL at which the no-CNO scenario can be ruled out is unaffected. On the contrary, we see in Fig. <ref> that the upper bound on the [13]N and [17]F fluxes, and therefore on the total neutrino flux produced in the CNO-cycle, is considerably relaxed.[The allowed ranges for the fluxes produced in the pp-chain are not substantially modified with respect to the ones obtained from the <<CNO-Rfixed>> fits, Eqs. (<ref>) and (<ref>).] This is a consequence of the strong degeneracy between the spectrum of events from these fluxes and that from the [210]Bi background mentioned above (see Fig. <ref> and the discussion in Sec. <ref>). Conversely, the fact that the range of the [15]O flux is robust under the relaxation of the constraints on the CNO flux ratios means that the high-statistics spectral data of the Phase-III of Borexino holds the potential to differentiate the event rates from [15]O ν's from those from [13]N and [17]F ν's. The reliable quantification of this possibility, however, requires the knowledge of the minimum allowed value of the [210]Bi background, which so far has not been presented by the collaboration. So, let us emphasize that our <<CNO-Rbound>> analyses have been performed with the aim of testing the effect of relaxing the severe constraints on the CNO fluxes in the studies of the Borexino collaboration. Our conclusion is that the statistical significance of the evidence of detection of events produced by neutrinos from the CNO-cycle is affected very little by the relaxation of the constraint on their relative ratios. However, their allowed range is, and this can have an impact when confronting the results of the fit with the predictions of the SSM, as we discuss next.

§ COMPARISON WITH STANDARD SOLAR MODELS

Next we compare the results of our determination of the solar fluxes with the expectations from the five B23 solar models: SSMs computed with the abundances compiled in Table 5 of <cit.> based on the photospheric and meteoritic solar mixtures (MB22-phot and MB22-met models, respectively), and with the <cit.> (AAG21), the meteoritic scale from <cit.> (AGSS09-met), and <cit.> (GS98) compositions. We use both MB22-phot and MB22-met for completeness, although the abundances are very similar in both scales, as clearly reflected by the results in this section. A similar agreement would be found using both the meteoritic and photospheric scales from AAG21, and therefore we use only one scale in this case.[The structures of these models, as well as the total neutrino fluxes and internal distributions, are available at <cit.>.] SSMs predict that nuclear energy accounts for all the solar luminosity (barring a few parts in 10^4 that are of gravothermal origin), so for all practical purposes the neutrino fluxes predicted by SSMs satisfy the luminosity constraint. Therefore we compare the expectations of the various SSM models with the results of our analysis performed with such a constraint. As regards the assumptions on the CNO fluxes, in order to explore the dependence of our conclusions on the specific choice of flux ratios we quantify the
results obtained in both the <<FIT=CNO-Rfixed>> analysis (with χ^2_FIT in Eq. (<ref>)) and the <<FIT=CNO-Rbound>> one (with χ^2_FIT in Eq. (<ref>)). For illustration we plot in Fig. <ref> the marginalized one-dimensional probability distributions for the best-determined solar fluxes in these two cases, as compared to the predictions of the five B23 SSMs. The probability distributions for our fits are obtained from the one-dimensional marginalized Δχ^2_FIT(f_i) of the corresponding analysis as P_FIT(f_i) ∝ exp[-Δχ^2_FIT(f_i)/2], normalized to unity. To construct the analogous distributions for each of the SSMs we use the predictions ⟨ f_i^SSM⟩ for the fluxes, the relative uncertainties σ_i^SSM and their correlations ρ_ij^SSM as obtained from Refs. <cit.>, and also assume Gaussianity so as to build the corresponding χ^2_SSM(ω⃗_flux): χ^2_SSM(ω⃗_flux) = ∑_i,j (f_i-f_i^SSM) C^-1_ij (f_j-f_j^SSM) with C_ab = σ_a^SSM σ_b^SSM ρ_ab, from which it is trivial to obtain the marginalized one-dimensional Δχ^2_SSM(f_i) and construct the probability P_SSM(f_i) ∝ exp[-Δχ^2_SSM(f_i)/2]. In the frequentist statistical approach, a quantitative comparison of a model prediction for a set of fluxes with the results from the data analysis can be obtained using the parameter goodness of fit (PG) criterion introduced in Ref. <cit.>, by comparing the minimum value of the χ^2 function for the analysis of the data with that obtained for the same analysis adding the prior imposed by the model.[In this respect it is important to notice that, in order to avoid any bias towards one of the models in the data analysis, in both <<CNO-Rfixed>> and <<CNO-Rbound>> cases the assumptions on the ratios of the three CNO fluxes have been chosen to be “model-democratic”, i.e., centered at the average of the predictions of the models (and, in the case of <<CNO-Rbound>>, with 1σ uncertainties covering the 1σ range allowed by all SSM models).] Thus, following Ref. <cit.>, we construct the test statistic Δχ^2_FIT,SSM,SET = [χ^2_FIT(ω⃗_osc, ω⃗_flux^FIT) + χ^2_SSM,SET(ω⃗_flux^FIT)]|_min - χ^2_FIT(ω⃗_osc, ω⃗_flux^FIT)|_min - χ^2_SSM,SET(ω⃗_flux^FIT)|_min, where χ^2_SSM,SET(ω⃗_flux) is obtained as Eq. (<ref>) with the i,j (and a,b) fluxes restricted to a specific subset as specified by “SET”. The minimization of each of the terms in Eq.
(<ref>) is performed independently in the corresponding parameter space. Δχ^2_FIT,SSM,SET follows a χ^2 distribution with n degrees of freedom, which, in the present case, coincides with the number of free parameters in common between χ^2_FIT(ω⃗_osc, ω⃗_flux^FIT) and χ^2_SSM,SET(ω⃗_flux^FIT). Notice that, by construction, the result of the test depends on the number of fluxes to be compared, i.e., on the fluxes in “SET”, both because of the actual comparison between the measured and predicted values for those specific fluxes, and because of the change in n with which the p-value of the model is to be computed. This is illustrated in Table <ref>, where we list the values of Δχ^2_FIT,SSM,SET for the following choices of “SET” (each defined by its set of constrained fluxes):

* FULL: (f_pp, f_[7]Be, f_pep, f_[13]N, f_[15]O, f_[17]F, f_[8]B, f_hep)

* Be+B+CNO: (f_[7]Be, f_[13]N, f_[15]O, f_[17]F, f_[8]B)

* CNO: (f_[13]N, f_[15]O, f_[17]F)

Upon analyzing the data in Table <ref>, it becomes evident that the B23-MB22 models (both the meteoritic and photospheric variants) exhibit a significantly higher level of compatibility with the observed data, even slightly better than the B23-GS98 model. On the contrary, the B23-AGSS09-met and B23-AAG21 models exhibit a lower level of compatibility with observations, with the B23-AAG21 model slightly better aligned with the data. Maximum discrimination is provided by comparing mainly the CNO fluxes, for which the predictions of the models differ the most. On the other hand, including all the fluxes from the pp-chain in the comparison tends to dilute the discriminating power of the test. The table also illustrates how allowing the three CNO flux normalizations to vary in the fit tends to relax the CL at which the models are compatible with the observations. Let us remember that our previously determined fluxes in Ref. <cit.>, when confronted with the GS98 and AGSS09 models of the time <cit.>, showed absolutely no preference for either model. This was driven by the fact that the most precisely measured [8]B flux (and also that of [7]Be) lay right in the middle of the predictions of both models. The new B16-GS98 model in Ref. <cit.> predicted a slightly lower value for the [8]B flux, in slightly better agreement with the extracted fluxes of Ref. <cit.>, but the conclusion was still that there was no significant preference for either model. Compared to those results, both the most precisely determined [7]Be flux and, most importantly, the newly observed rate of CNO events in Borexino have consistently moved towards the prediction of the models with higher metallicity abundances. Let us finish by commenting on the relative weight of the experimental precision versus the theoretical model uncertainties in the results in Table <ref>. To this end one can envision an ideal experiment which measures f_i to match precisely the values predicted by one of the models with infinite accuracy. Assuming the measurements to coincide with the predictions of B23-GS98, one gets Δχ^2_SSM,SET = 17.1 and 16.7 for SSM=B23-AGSS09-met and SSM=B23-AAG21 with SET=FULL, which means that the maximum CL at which these two SSMs can be disfavoured is 2.2σ and 2.1σ.
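As a check of these numbers, the conversion of the PG test statistic into a p-value and an equivalent two-sided Gaussian significance can be sketched as follows (a minimal illustration; it reproduces the σ values quoted in the text):

from scipy.stats import chi2, norm

def pg_significance(delta_chi2, n):
    """Convert the PG test statistic (chi^2 with n dof) into a p-value
    and the equivalent two-sided number of Gaussian sigmas."""
    p = chi2.sf(delta_chi2, n)
    return p, norm.isf(p / 2)

# Values quoted in the text for an ideal measurement matching B23-GS98:
print(pg_significance(17.1, 8))  # SET=FULL, B23-AGSS09-met -> ~2.2 sigma
print(pg_significance(16.7, 8))  # SET=FULL, B23-AAG21      -> ~2.1 sigma
print(pg_significance(15.0, 3))  # SET=CNO,  B23-AGSS09-met -> ~3.1 sigma
print(pg_significance(14.1, 3))  # SET=CNO,  B23-AAG21      -> ~3.0 sigma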
Choosing instead SET=CNO, these numbers become Δχ^2_SSM,SET = 15.0 and 14.1 for SSM=B23-AGSS09met and SSM=B23-AAG21, respectively, corresponding to a 3.1σ and 3.0σ maximum rejection. This stresses the importance of reducing the uncertainties in the model predictions to boost the discrimination between the models. § SUMMARY In this work we have updated our former determination of solar neutrino fluxes from neutrino data as presented in Refs. <cit.>, by incorporating into the analysis the latest results from both solar and non-solar neutrino experiments. In particular, this includes the full data from the three phases of the Borexino experiment, which have provided us with the first direct evidence of neutrinos produced in the CNO-cycle. We have derived the best-fit neutrino oscillation parameters and solar flux constraints using a frequentist analysis, with and without imposing that nuclear physics is the only source of energy generation (the luminosity constraint). Compared to the results from previous analyses, we find that the determination of the [7]Be flux has improved by a factor of 𝒪(3), but most importantly we now find that the three fluxes produced in the CNO-cycle are clearly determined to be non-zero, with 1σ precision ranging from 20% to ∼100% depending on the assumptions made in the analysis about their relative normalizations. Conversely, in Refs. <cit.> only an upper bound for the CNO fluxes was found. This also implies that it is solidly established at 99% CL that the solar energy produced in the CNO-cycle is between 0.46% and 1.05% of the total solar luminosity. The observation of the CNO neutrinos is also paramount to discriminate among the different versions of the SSMs built with different inputs for the solar abundances, since the CNO fluxes are the most sensitive to the solar composition. In this work we confront for the first time the neutrino fluxes determined on a purely experimental basis with the predictions of the latest generation of SSMs obtained in Ref. <cit.>. Our results show that the SSMs built incorporating lower metallicities are less compatible with the solar neutrino observations. This project is funded by USA-NSF grant PHY-1915093 and by the European Union through the Horizon 2020 research and innovation program (Marie Skłodowska-Curie grant agreement 860881-HIDDeN) and the Horizon Europe programme (Marie Skłodowska-Curie Staff Exchange grant agreement 101086085-ASYMMETRY). It also receives support from grants PID2019-110058GB-C21, PID2019-105614GB-C21, PID2019-108892RB-I00, PID2020-113644GB-I00, PID2022-142545NB-C21, "Unit of Excellence Maria de Maeztu 2020-2023" award to the ICC-UB CEX2019-000918-M, "Unit of Excellence Maria de Maeztu 2021-2025" award to ICE CEX2020-001058-M, grant IFT "Centro de Excelencia Severo Ochoa" CEX2020-001007-S funded by MCIN/AEI/10.13039/501100011033, as well as from grants 2021-SGR-249 and 2021-SGR-1526 (Generalitat de Catalunya), and support from ChETEC-INFRA (EU project no. 101008324). We also acknowledge use of the IFT computing facilities. § DETAILS OF BOREXINO ANALYSIS A detailed description of our analysis of the full spectrum of Phase-I <cit.> and Phase-II <cit.> of Borexino can be found in Refs. <cit.> and <cit.>, respectively. Here we document the details of our analysis of the Borexino Phase-III data collected from January 2017 to October 2021, corresponding to a total exposure of 1431.6 days×71.3 tons, which we perform following closely the procedure presented by the collaboration in Refs. <cit.>.
§.§ Analysis of Borexino Phase-III spectrum In our fit we use the Borexino spectral data as a function of the detected hits (N_h) on the detector photomultipliers (including multiple hits on the same photomultiplier) as an estimator of the recoil energy of the electron. As was the case in Phase-II, Borexino divides its Phase-III data into two samples: one enriched (tagged) and one depleted (subtracted) in [11]C events. The tagged sample picks up about 36% of the solar neutrino events, while the subtracted sample accounts for the remaining 64%. In what follows we denote by s="tagged" or s="subtracted" each of the two samples. The data and best-fit components for the spectrum of the subtracted sample are shown in Fig. 2(a) of Ref. <cit.>. The data points for this sample can also be found in the data release material of Ref. <cit.>. The corresponding information for the tagged sample was kindly provided to us by the Borexino collaboration <cit.>. The number of expected events T^0_s,i in a bin i of data sample s is the sum of the neutrino-induced signal and the background contributions. The main backgrounds come from radioactive isotopes in the scintillator: [11]C, [210]Bi, [10]C, [210]Po, and [85]Kr. The collaboration identifies one additional component due to residual external backgrounds. With this T^0_s,i = ∑_f S^f_s,i + ∑_c B^c_s,i where the index f ∈{[7]Be, pep, [13]N, [15]O, [17]F, [8]B} runs over the solar fluxes which contribute in the Borexino-III energy range (see Fig. <ref>), while the index c ∈{[11]C, [210]Po, [210]Bi, [85]Kr, [10]C, ext} runs over the background components. We compute the solar neutrino signal from flux f in bin i, S^f_s,i, as S^f_s,i = ∫_N_h,min^i^N_h,max^i∫ dS^f_s/dT_e(T_e) dℛ/dN_h(T_e,N_h) dT_e dN_h where dS_s^f/dT_e is the differential distribution of neutrino-induced events from flux f in sample s as a function of the recoil energy of the scattered electrons (T_e) dS^f_s/dT_e(T_e) = ℱ_s 𝒩_tgt 𝒯_run ℰ_cut∑_α∫ dϕ^f_ν/dE_ν P_eα(E_ν) dσ^det(ν_α)/dT_e dE_ν. Here ℱ_s = 0.3572 (0.6359) is the fraction of signal events for s = "tagged" ("subtracted"), 𝒩_tgt is the number of e^- targets (i.e., the total number of electrons inside the fiducial volume of the detector, corresponding to 71.3 ton of scintillator), 𝒯_run = 1431.6 days is the data taking time, ℰ_cut = 98.5% is the overall efficiency (assumed to be the same as in Phase-II), P_eα(E_ν) is the transition probability between the flavours e and α, and dσ^det(ν_α)/dT_e is the flavour-dependent ν_α - e^- elastic scattering detection cross section. The calculation of P_eα(E_ν) is based on a fully numerical approach which takes into account the specific distribution of the neutrino production point in the solar core for the various solar neutrino flux components as predicted by the SSMs; some technical details on our treatment of neutrino propagation in the solar matter can be found in Appendix A of Ref. <cit.> and in Sec. 2.4 of Ref. <cit.>. In addition, Eq. (<ref>) includes the detector energy resolution function dℛ/dN_h, which gives the probability that an event with electron recoil energy T_e yields an observed number of hits N_h. We assume it follows a Gaussian distribution dℛ/dN_h = 1/√(2π) σ_h(T_e)exp[ -1/2( N_h - N̅_h(T_e)/σ_h(T_e))^2 ] where N̅_h is the expected value of N_h for a given true recoil energy T_e. We determine N̅_h via the calibration procedure described in Ref.
<cit.>, while σ_h is slightly different from the Borexino Phase-II analysis. Concretely, we derive the following relation between N̅_h and σ_h(N̅_h): σ_h(N̅_h) = 1.21974 + 1.60121 √(N̅_h) - 0.14859 N̅_h. Regarding the backgrounds, we have read the contribution B^c_s,i of each component c in each bin i and for each data sample s from Fig. 2(a) of Ref. <cit.>, as well as from the plot provided to us by the collaboration <cit.>. These figures show the best-fit normalization of the different background components as obtained by the collaboration, and we take them as our nominal background predictions.[One technical detail to notice is that the data in the tables are more finely binned (817 bins) than the corresponding figures from which we read the backgrounds (163 bins). Given the relatively continuous spectra of the backgrounds, we have recreated the background content of the 817 bins through interpolation.] Our statistical analysis is based on the construction of a χ^2 function built with the described experimental data, neutrino signal expectations, and sources of backgrounds. Following Refs. <cit.> we leave the normalizations of all the backgrounds as free parameters, with the exception of [210]Bi. The treatment of this background is paramount to the positive evidence of CNO neutrinos. As described in <cit.>, the extraction of the CNO neutrino signal from the Borexino data faces two significant challenges: the resemblance between the spectra of CNO-ν recoil electrons and the [210]Bi β^- spectrum, and their pronounced correlation with the pep-ν recoil energy spectrum. In order to overcome the first challenge, the collaboration restricted the rate of [210]Bi, for which it sets an upper limit <cit.>: R([210]Bi)≤ (10.8± 1.0) cpd/ 100 t, while no constraint is imposed on its minimum value, which is free to be as low as allowed by the fit (as long as it remains non-negative). We will come back to this point in Sec. <ref>. In our analysis we implement this upper limit by constraining the corresponding normalization factor f_[210]Bi as f_[210]Bi≤( 1 ±1.0/10.8). With this we construct χ^2_BXIII as χ^2_BXIII = ∑_s,i 2 [ T^0_s,i - O_s,i + O_s,ilog(O_s,i/T^0_s,i) ] + ( (f_[210]Bi - 1)/σ_[210]Bi)^2Θ(f_[210]Bi -1), where O_s,i is the observed number of events in bin i of sample s, σ_[210]Bi = 1.0 / 10.8, and Θ(x) is the Heaviside step function. Constructed this way, χ^2_BXIII depends on 16 parameters: the 3 oscillation parameters (Δm^2_21, θ_12, θ_13), 6 solar flux normalizations (f_[7]Be, f_pep, f_[13]N, f_[15]O, f_[17]F, f_[8]B), and 7 background normalizations (f_[210]Po, f_[210]Bi, f_[85]Kr, f_[10]C, f_ext, and two different factors f_[11]C^tag and f_[11]C^sub for the tagged and subtracted samples). As a first validation of our χ^2 function we perform an analysis aimed at reproducing the results on the solar neutrino fluxes found by the Borexino collaboration in Ref.
<cit.>, and in particular the positive evidence of CNO neutrinos. In this test fit we fix the three oscillation parameters to their best-fit values (sin^2θ_13 = 0.023, sin^2θ_12 = 0.307, Δm^2_21 = 7.5× 10^-5 eV^2), and following the procedure of the collaboration we assume a common normalization factor for the three CNO fluxes with respect to the SSM (f_[13]N = f_[15]O = f_[17]F≡ f_CNO). Furthermore, in order to break the pronounced correlation with the pep-ν recoil energy spectrum mentioned above, the collaboration introduced a prior for the pep neutrino signal flux following the SSM. Thus we define χ^2_BXIII,test=χ^2_BXIII + ((f_pep - 1)/σ_pep)^2 with σ_pep = 0.04/2.74 (for concreteness we choose the B16-GS98 model for this prior). The [7]Be and [8]B fluxes are left completely free. The results of this 11-parameter fit are shown in Figs. <ref> and <ref>. In Fig. <ref> we plot the allowed ranges and correlations for the parameters. Notice that in this figure all parameters are normalized to the best-fit values obtained by the corresponding analysis of the Borexino collaboration, hence a value of "1" means perfect agreement. We observe a strong correlation between the normalization of the CNO fluxes f_CNO and the [210]Bi background. This is expected because, as mentioned before, the spectrum of CNO neutrinos and that of the [210]Bi background are similar (as can also be seen in Fig. <ref>, which shows our best-fit spectra for the two samples). Still, the two spectra are different enough so that, under the assumption of the upper bound on the [210]Bi background, the degeneracy is broken sufficiently to lead to positive evidence of CNO neutrinos in an amount compatible with the prediction of the SSMs. A quantitative comparison with the results of the collaboration is shown in Fig. <ref>, where we plot the dependence of our marginalized Δχ^2 on the common CNO flux normalization, f_CNO, together with that obtained by the collaboration as extracted from Figure 2(b) of Ref. <cit.> (labeled "Fit w/ Systematics" in that figure).[Figure 2(b) of Ref. <cit.> shows their Δχ^2 as a function of the CNO-ν event rate, which we divide by the central value of the expected rate in the B16-GS98 model to obtain the black dot-dashed curve in Fig. <ref>.] Altogether, these figures show that our constructed event rates and the best-fit normalization of the CNO flux reproduce with very good accuracy those of the fit performed by the collaboration. §.§ Allowing free normalizations for the three CNO fluxes In their analysis of the different phases, the Borexino collaboration always considers a common shift in the normalization of the three fluxes of neutrinos produced in the CNO cycle with respect to their values in the SSM. On the contrary, the normalizations of the fluxes produced in the pp-chain are fitted independently. In principle, once one departs from the constraints imposed by the SSM, the normalizations of the three CNO fluxes could be shifted independently, subject only to the minimum set of consistency relations in Eqs. (<ref>) and (<ref>). In fact, in our previous works <cit.> we were able to perform such a general analysis. At the time there was no evidence of CNO neutrinos, and therefore those analyses resulted in a more general set of upper bounds on their allowed values compared to those obtained assuming a common shift. With this as motivation, one can attempt to perform an analysis of the present BXIII spectra under the same assumption of free normalizations.
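The same χ^2 machinery applies unchanged when f_[13]N, f_[15]O, and f_[17]F are varied independently. For concreteness, a minimal transcription of Eqs. (<ref>) and (<ref>) into code (array shapes, names, and the handling of empty bins are implementation choices):

import numpy as np

def chi2_bxiii(T, O, f_bi210, sigma_bi210=1.0 / 10.8):
    """Poissonian chi^2 over all bins/samples for expected counts T and
    observed counts O, plus the one-sided [210]Bi penalty, which is only
    active when the normalization exceeds the upper limit (f > 1)."""
    # Bins with O = 0 contribute 2*T to the Poissonian sum.
    log_term = np.where(O > 0, O * np.log(np.where(O > 0, O, 1.0) / T), 0.0)
    chi2 = 2.0 * np.sum(T - O + log_term)
    if f_bi210 > 1.0:
        chi2 += ((f_bi210 - 1.0) / sigma_bi210) ** 2
    return chi2

def chi2_bxiii_test(T, O, f_bi210, f_pep, sigma_pep=0.04 / 2.74):
    """Validation-fit variant including the SSM (B16-GS98) pep prior."""
    return chi2_bxiii(T, O, f_bi210) + ((f_pep - 1.0) / sigma_pep) ** 2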
However, within the present modelling of the backgrounds in the Borexino analysis, optimized to provide maximum sensitivity to a positive evidence of CNO neutrinos, such a generalized analysis runs into trouble, as we illustrate in Fig. <ref>. As expected, allowing three free CNO flux normalizations results in weaker constraints on each of the three parameters. This is particularly the case for the smaller [17]F flux, which is allowed to take values as large as ∼ 40 times the value predicted by the SSM without, however, yielding substantial χ^2 improvements over the standard f_[17]F = 1 value. In the same way, f_[15]O is compatible with the prediction of the SSM, Δχ^2(f_[15]O = 1) ≃ 0, with an upper bound f_[15]O≲ 2.[It is interesting to notice that the Borexino bound on Φ_[17]F is about one-half of that on Φ_[15]O. This is no surprise, since the energy spectra of [17]F and [15]O neutrinos are extremely similar, hence neither Borexino nor any other experiment can separate them, and what is actually constrained is their sum. This is reflected in the clear anticorrelation visible in Fig. <ref>, while the factor of two stems from the consistency condition in Eq. (<ref>).] On the contrary, the fit results in a favoured range for [13]N which, if taken at face value, would imply an incompatibility with the SSM at large CL: Δχ^2(f_[13]N=1) ≳ 6. This large [13]N flux comes at the price of a very low value of the [210]Bi normalization, which, as seen in the figure, is more strongly correlated with [13]N than with [15]O and [17]F. To further illustrate this point, we show in Fig. <ref> our best-fit spectra of the "subtracted" sample for the analysis where one common normalization for the three CNO fluxes is used (left panel; in what follows, the "CNO" fit) and the one where all three normalizations are varied independently (right panel; in what follows, the "N" fit). Thus the spectra in the left panel of Fig. <ref> are the same as in the right panel of Fig. <ref>, except that now, for convenience, we plot separately the events from each of the CNO fluxes. This clearly highlights the different shapes of the spectra of [15]O and [13]N, with [15]O extending to larger energies. It is also evident that [13]N is the one most affected by degeneracies with the [210]Bi background. Comparing the two panels we see by eye that both spectra describe the data well: in fact, the event rates for [15]O are comparable in both panels. But in the right panel the normalization of the [13]N events is considerably enhanced while the [210]Bi background is suppressed: this is the option favoured by the fit. Upon closer examination we find that in the range of N_h spanning between 300 and 400 photon hits, an increase in the value of f_[13]N fits the data better while driving f_[210]Bi towards 0. In Fig.
<ref> we show a blow-up of the spectra in this N_h window. To quantify the difference in the quality of the fit for these two solutions and the relevant range of N_h, we plot in the right panel the cumulative difference of χ^2_BXIII,test between the best fits of the "CNO" and "N" fits as a function of the maximum N_h bin included in the fit. Clearly, this anomalously large [13]N solution is possible only because the sole information included in the fit for the [210]Bi background is the upper bound provided by the collaboration. Such an upper bound is enough to ensure a lower bound on the amount of CNO neutrinos, and indeed it results in positive evidence of CNO fluxes (in good agreement with at least some of the SSMs) when a common normalization for the three CNO fluxes is enforced, as reported by the collaboration in Refs. <cit.> (and properly reproduced by us, as described in the previous section). Our results show that this is the case because the spectra of [210]Bi and [15]O are sufficiently different. However, once the normalizations of the three CNO fluxes are not linked together, the degeneracy between the spectral shapes of [13]N and [210]Bi – together with the lack of a proper estimate for a lower bound on [210]Bi, which is not quantified in Refs. <cit.> – pushes the best fit of [13]N towards unnaturally large values. In other words, the background model proposed in Refs. <cit.> cannot be reliably employed for fits with independent [13]N and [15]O normalizations. We finish by noticing that this also implies that the high-quality data of Borexino Phase-III, besides having been able to yield the first evidence of the presence of the CNO neutrinos, also hold the potential to discriminate between the contributions from [13]N and [15]O, a potential which may be interesting for the collaboration to explore. §.§ Analysis with the Correlated Integrated Directionality Method In a very recent work <cit.> the Borexino collaboration has presented a combined analysis of their three phases making use of the Correlated and Integrated Directionality (CID) method, which aims to enhance the precision of the determination of the flux of CNO neutrinos. In a nutshell, the CID method exploits the sub-dominant Cherenkov light in the liquid scintillator produced by the electrons scattered in the neutrino interaction. These Cherenkov photons retain information on the original direction of the incident neutrino, hence they can be used to enhance the discrimination between the solar neutrino signal and the radioactive backgrounds. Effectively, the CID analysis results in a determination of the total number of solar neutrinos detected within a restricted range of N_h, which corresponds to 0.85 MeV < T_e< 1.3 MeV for Phase-I and 0.85 MeV < T_e< 1.29 MeV for Phase-II+III. In this range the dominant contribution comes from CNO, pep, and some [8]B. The increased fiducial volume for this analysis brings the exposures to 740.7 days× 104.3 ton× 55.77% for Phase-I and 2888.0 days× 94.4 ton× 63.97% for Phase-II+III. The resulting number of solar neutrinos detected is N^P-I_obs = 643^+235_-224(stat)^+37_-30(sys) for Phase-I and N^P-II+III_obs = 2719^+518_-494(stat)^+85_-83(sys) for Phase-II+III. After subtracting the expected SSM contributions from pep and [8]B, the Borexino collaboration obtains the posterior probability distributions for the number of CNO neutrinos shown in Fig. 9 of Ref. <cit.> (which we reproduce in the left panel of Fig.
<ref>). Furthermore, since this new directional information is independent of the spectral information, the collaboration proceeded to combine these two priors on N_CNO with their likelihood for the Borexino Phase-III spectral analysis. This resulted in a slightly stronger dependence of the combined likelihood on the CNO-ν rate, shown in their Fig. 12 (which we reproduce in the right panel of Fig. <ref>). In order to account for the CID information in our analysis we try to follow as closely as possible the procedure of the collaboration. With the information provided on the covered energy range and exposures for the CID analysis, we integrate our computed spectra of solar neutrino events in each phase to derive the corresponding total number of expected events in Phase-I and Phase-II+III. We then subtract the SSM predictions for pep and [8]B neutrinos from the observed number of events to derive an estimate for CNO neutrinos in Phase-I and Phase-II+III, and construct a simple Gaussian χ^2(N_CNO) for Phase-Y (Y=I or II+III) χ^2_CID,P-Y(N_CNO) = ( (N_obs^P-Y - N^P-Y_SSM,pep - N^P-Y_SSM,[8]B - N_CNO)/σ_P-Y)^2 where in σ_P-Y we add in quadrature the symmetrized statistical and systematic uncertainties in the number of observed events. We plot in the left panel of Fig. <ref> our inferred probability distributions P_P-Y(N_CNO)∝exp[-χ^2_P-Y/2] compared to those from Borexino in Fig. 9 of Ref. <cit.>. As seen in the figure, our simple procedure reproduces rather well the results of the collaboration for Phase-II+III, but only reasonably well for Phase-I. This may be due to differences in the reanalysis of the Phase-I data by the collaboration in the CID analysis compared to their spectral analysis of 2011. Our simulations of the Phase-I event rates are tuned to their 2011 analysis, and there is not enough information in Ref. <cit.> to deduce what may have changed. Thus we decide to introduce in our analysis the CID prior for the Phase-II+III data but not for Phase-I. We then combine the CID information from Phase-II+III and the Phase-III spectral information as χ^2_CID+BXIII,test = χ^2_BXIII + ( (f_pep-1)/σ_pep)^2 + χ^2_CID,P-II+III. A quantitative comparison with the results of the collaboration for this combined CID + Phase-III spectrum analysis is shown in the right panel of Fig. <ref>, where we plot the dependence of our marginalized Δχ^2 on the CNO flux normalization after including the CID information, compared to that obtained by the collaboration in Fig. 12 of Ref. <cit.>. As seen in the figure, we reproduce well the improved sensitivity for the lower range of the CNO flux normalization, but our constraints are more conservative in the higher range, though they still represent an improvement over the spectrum-only analysis. Altogether, after all these tests and validations, we define the χ^2 for the full Borexino analysis as χ^2_BX(ω⃗_osc, ω⃗_flux) = χ^2_BXI(ω⃗_osc, ω⃗_flux) + χ^2_BXII(ω⃗_osc, ω⃗_flux) + χ^2_BXIII(ω⃗_osc, ω⃗_flux) + χ^2_CID,P-II+III (ω⃗_osc, ω⃗_flux), with χ^2_BXIII(ω⃗_osc, ω⃗_flux) and χ^2_CID,P-II+III (ω⃗_osc, ω⃗_flux) in Eqs. (<ref>) and (<ref>), respectively. We finish by noticing that the inclusion of the CID information is not enough to break the large degeneracy between the [13]N and [210]Bi contributions to the spectra discussed in the previous section.
"authors": [
"M. C. Gonzalez-Garcia",
"Michele Maltoni",
"João Paulo Pinheiro",
"Aldo M. Serenelli"
],
"categories": [
"hep-ph",
"astro-ph.SR",
"hep-ex"
],
"primary_category": "hep-ph",
"published": "20231127190000",
"title": "Status of Direct Determination of Solar Neutrino Fluxes after Borexino"
} |
Ground-breaking Exoplanet Science with the ANDES spectrograph at the ELT [1,2]Enric [email protected] 3]Katia Biazzo 4,5]Emeline Bolmont 6]Paul Molliere 7,8]Katja Poppenhaeger 9]Jayne Birkby 10,11]Matteo Brogi 12]Gael Chauvin 12]Andrea Chiavassa 13]Jens Hoeijmakers 14]Emmanuel Lellouch 15]Christophe Lovis 16]Roberto Maiolino 17]Lisa Nortmann 1,2]Hannu Parviainen 18]Lorenzo Pino 19,20]Martin Turbet 21]Jesse Wender 22]Simon Albrecht 3]Simone Antoniucci 23,24]Susana C. Barros 25]Andre Beaudoin 25]Bjorn Benneke 26]Isabelle Boisse 27]Aldo S. Bonomo 28]Francesco Borsa 29]Alexis Brandeker 6]Wolfgang Brandner 30]Lars A. Buchhave 18]Anne-Laure Cheffot 16]Robin Deborde 31]Florian Debras 25]Rene Doyon 32]Paolo Di Marcantonio 11]Paolo Giacobbe 1,2]Jonay I. González Hernández 33]Ravit Helled 6]Laura Kreidberg 34,35]Pedro Machado 36]Jesus Maldonado 37]Alessandro Marconi 38]B.L. Canto Martins 39,17]Adriano Miceli 40]Christoph Mordasini 12]Mamadou N'Diaye 41]Andrzej Niedzielski 3]Brunella Nisini 42]Livia Origlia 43]Celine Peroux 7]Alex G.M. Pietrow 18]Enrico Pinna 44]Emily Rauscher 45]Sabine Reffert 46]Philippe Rousselot 18]Nicoletta Sanna 12]Adrien Simonnin 1,2]Alejandro Suárez Mascareño 47]Alessio Zanutta 17]Mathias Zechmeister *[1]Instituto de Astrofísica de Canarias (IAC), 38200 La Laguna, Tenerife, Spain [2]Departamento de Astrofísica, Universidad de La Laguna (ULL), 38206 La Laguna, Tenerife, Spain [3]INAF - Astronomical Observatory of Rome, I-00043 Monte Porzio Catone, Rome, Italy [4]Observatoire de Genève, Université de Genève, Chemin Pegasi 51, 1290 Sauverny, Switzerland [5]Centre sur la Vie dans l'Univers, Université de Genève, Geneva, Switzerland [6]Max Planck Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany [7]Leibniz Institute for Astrophysics Potsdam, An der Sternwarte 16, 14482 Potsdam, Germany [8]Potsdam University, Institute for Physics and Astronomy, Karl-Liebknecht-Str. 24/25, 14476 Potsdam-Golm, Germany [9]Astrophysics, Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH, UK [10]Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, I-10125 Torino, Italy [11]INAF - Osservatorio Astrofisico di Torino, Via Osservatorio 20, I-10025 Pino Torinese, Italy [12]Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Lagrange, CS 34229, Nice, France [13]Lund Observatory, Box 43, SE-221 00 Lund, Sweden [14]LESIA, Observatoire de Paris, 92195 Meudon, France [15]Département d'Astronomie, Université de Genève, Chemin Pegasi 51, CH-1290 Versoix, Switzerland [16]Cavendish Laboratory, University of Cambridge, 19 J. J. Thomson Ave., Cambridge CB3 0HE, UK [17]Institut für Astrophysik und Geophysik, Georg-August-Universität, D-37077 Göttingen, Germany [18]INAF - Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, 50125 Firenze, Italy [19]Laboratoire de Météorologie Dynamique/IPSL, CNRS, Sorbonne Université, Ecole Normale Supérieure, PSL Research University, Ecole Polytechnique, 75005 Paris, France [20]Laboratoire d'astrophysique de Bordeaux, Univ.
Bordeaux, CNRS, B18N, allée Geoffroy Saint-Hilaire, 33615 Pessac, France [21]Physikalisches Institut, Universität Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland [22]Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark [23]Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal [24]Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua Campo Alegre, 4169-007 Porto, Portugal [25]Department of Physics and Trottier Institute of Research on Exoplanets, Université de Montréal, Montréal, QC H3C 3J7, Canada [26]Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France [27]INAF - Osservatorio Astrofisico di Torino, via Osservatorio 20, 10025 Pino Torinese, Italy [28]INAF - Osservatorio Astronomico di Brera, Via E. Bianchi 46, 23807 Merate (LC), Italy [29]Department of Astronomy, Stockholm University, AlbaNova University Center, 10691 Stockholm, Sweden [30]DTU Space, Technical University of Denmark, Elektrovej 328, DK-2800 Kgs. Lyngby, Denmark [31]IRAP, CNRS - UMR 5277, Université de Toulouse, Toulouse, France [32]INAF - Osservatorio Astronomico di Trieste, Via G.B. Tiepolo 11, I-34143 Trieste, Italy [33]Institute for Computational Science, Center for Theoretical Astrophysics & Cosmology, University of Zurich, Winterthurerstr. 190, CH-8057 Zurich, Switzerland [34]Institute of Astrophysics and Space Sciences, Observatorio Astronomico de Lisboa, Ed. Leste, Tapada da Ajuda, 1349-018 Lisbon, Portugal [35]Departamento de Fisica, Faculdade de Ciencias da Universidade de Lisboa, Campo Grande, Lisboa, Portugal [36]INAF - Osservatorio Astronomico di Palermo, Piazza del Parlamento 1, 90134 Palermo, Italy [37]Dipartimento di Fisica e Astronomia, Università degli Studi di Firenze, Via G. Sansone 1, I-50019 Sesto Fiorentino, Firenze, Italy [38]Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, Campus Universitário, Natal, RN, 59072-970, Brazil [39]Dipartimento di Fisica e Astronomia, Università di Firenze, Via G. Sansone 1, 50019 Sesto Fiorentino, Firenze, Italy [40]Weltraumforschung und Planetologie, Physikalisches Institut, Universität Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland [41]Institute of Astronomy, Nicolaus Copernicus University in Toruń, Gagarina 11, 87-100 Toruń, Poland [42]Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via Gobetti 93/3, I-40129 Bologna, Italy [43]European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching-bei-München, Germany [44]Department of Astronomy, University of Michigan, 1085 S University Ave, Ann Arbor, MI 48109, USA [45]Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, 69117 Heidelberg, Germany [46]Institut UTINAM - UMR 6213, CNRS / Univ. de Franche-Comté, OSU THETA, 41 bis Av. de l'Observatoire, BP 1615, F-25010 Besançon Cedex, France [47]INAF - Osservatorio Astronomico di Brera, via E. Bianchi 46, 23807 Merate, Italy In the past decade the study of exoplanet atmospheres at high spectral resolution, via transmission/emission spectroscopy and cross-correlation techniques for atomic/molecular mapping, has become a powerful and consolidated methodology. The current limitation is the signal-to-noise ratio that one can obtain during a planetary transit, which is in turn ultimately limited by telescope size.
This limitation will be overcome by ANDES, an optical and near-infrared high-resolution spectrograph for the Extremely Large Telescope, which is currently in Phase B development. ANDES will be a powerful transformational instrument for exoplanet science. It will enable the study of giant planet atmospheres, allowing not only an exquisite determination of atmospheric composition, but also the study of isotopic compositions, dynamics, and weather patterns, mapping the planetary atmospheres and probing atmospheric formation and evolution models. The unprecedented angular resolution of ANDES will also allow us to explore the initial conditions in which planets form in proto-planetary disks. The main science case of ANDES, however, is the study of small, rocky exoplanet atmospheres, including the potential for biomarker detections, and the ability to reach this science case is driving its instrumental design. Here we discuss our simulations and the observing strategies to achieve this specific science goal. Since ANDES will be operational at the same time as NASA's JWST and ESA's ARIEL missions, it will provide enormous synergies in the characterization of planetary atmospheres at high and low spectral resolution. Moreover, ANDES will be able to probe for the first time the atmospheres of several giant and small planets in reflected light. In particular, we show how ANDES will be able to unlock the reflected-light atmospheric signal of a golden sample of nearby non-transiting habitable zone Earth-sized planets within a few tens of nights, a scientific objective that no other currently approved astronomical facility will be able to reach.[ January 14, 2024 ==================== § INTRODUCTION Exoplanets exhibit an immense diversity in their mass, size, internal composition, temperature, atmospheric makeup, and orbital configurations. Despite significant advancements over the past two decades, we still lack a comprehensive understanding of the formation and evolution of exoplanetary systems, and of the factors influencing their composition and surface conditions. Consequently, the investigation of exoplanet atmospheres across a broad spectrum of planetary types, spanning from gas giants to rocky planets and from scorching hot to temperate worlds, remains a primary focus within the field of exoplanetary science. This is because atmospheric metallicity and some ratios of elemental abundances (e.g., C/O) are tracers of the planet's formation and evolution history <cit.>. Currently JWST is opening a new era of exoplanet atmosphere exploration at low spectral resolution <cit.>, but (perhaps with the exception of the TRAPPIST-1 system) it will not be able to reach the atmospheres of habitable zone rocky planets. Ground-based exoplanet astronomy will be profoundly transformed with the construction of extremely large telescopes <cit.>. The community will be able to probe the population of known planets in many different directions thanks to the high angular resolution and increased sensitivity. The study of "classical" targets such as hot Jupiters and directly imaged gas giant planets will see a paradigm shift, and we will directly probe the three-dimensional structure and wind patterns of their atmospheres. This will be possible with the use of high-resolution spectrographs, which are beginning to show their potential on 8 m-class telescopes today. With the advent of the Extremely Large Telescope (ELT), a specific emphasis in exoplanetary research will be placed on examining "habitable" terrestrial planets <cit.>.
While the concept of habitability is still under exploration, in practical terms this involves the study of solid, rocky celestial bodies whose surface temperatures permit the presence of liquid water. Recent studies of the mass-radius relationship for small exoplanets have shown that rocky planets typically have masses below approximately 5–8 times that of Earth <cit.>. Objects exceeding this mass range may be dominated by water or hydrogen envelopes, resembling small Neptunes <cit.>. Additionally, the requirement for liquid water implies that the level of irradiation from the host star falls within a factor of approximately two of Earth's irradiation <cit.>. Thus, terrestrial habitable planets occupy a relatively narrow range of parameter space. Although Earth-like planets are relatively common in the universe <cit.>, studying the atmosphere of an Earth twin is still out of reach with the current telescopes and instrumentation. The main difficulty in studying small exoplanet atmospheres using direct imaging observations is their enormous planet-to-star contrast ratio. The ELT possesses two distinct qualities that will set it apart in its ability to investigate exoplanets: its unparalleled light-collecting capacity and angular resolution. These attributes offer in turn two distinct avenues for ANDES, previously known as HIRES <cit.>, to scrutinize exoplanet atmospheres: to expand the characterization of exoplanets via transmission/emission spectroscopy down to the rocky regime, and to attempt for the first time the detection of reflected light from the planetary atmosphere. Our chances of characterizing small rocky planets in the coming decade greatly rely on the ongoing and successful discovery of the rocky planet population around the brightest and closest M dwarfs <cit.>. M dwarfs are the most numerous stars in the solar neighbourhood <cit.>, and their lower masses and radii favour the detection of less massive planets. However, planets around M stars have been theoretically predicted to be very vulnerable to atmospheric mass loss <cit.>, and it is still hotly debated whether they are habitable at all after being exposed to intense radiation from their host stars for sustained periods of time <cit.>. By the time the ELT becomes operational, it is anticipated that the best-suited potentially habitable planets will have been identified and characterized, thanks to ongoing and forthcoming space-based missions, namely TESS <cit.>, CHEOPS <cit.>, and PLATO <cit.>, and ground-based photometric and radial-velocity surveys such as NGTS, MEarth, SPECULOOS, HARPS, HARPS-N, ESPRESSO, CARMENES, MAROON-X, or SPIRou, among others. The role of ANDES on the ELT will be to delve deeper into the characterization of these worlds, moving beyond basic parameters like mass and radius. Specifically, ANDES aims to explore their atmospheric structure, composition, and surface conditions, including the detection of possible biomarkers <cit.>. In this detailed atmospheric characterization there will be room for synergistic observations with facilities such as JWST, ARIEL, and other ELT instrumentation that will be contemporaneous with ANDES. The unprecedented angular resolution of ANDES will also allow us to explore the initial conditions in which planets form and the physical mechanisms that give rise to the observed exoplanet population.
The majority of the known planets are believed to have formed within ≈15–20 au from their host star <cit.>, and understanding the composition and spatial distribution of atomic and molecular gas in this inner region of young (<10 Myr) circumstellar disks is an essential step towards understanding planetary system formation and evolution. One of the main scientific objectives of ANDES will be to settle the properties of the gas in the inner star-disk region, where different competing mechanisms of disk gas dispersal are at play, namely magnetospheric accretion, jets, photo-evaporative and magnetically driven disk winds <cit.>. In the following sections, we will elaborate on these science cases and observing techniques, and present up-to-date simulations of the ANDES capabilities, including real or realistic observations that address the primary scientific goals of ANDES. These simulations have helped us establish the top-level requirements for the instrument and decide on a ranking of science-case priorities. We have chosen to emphasize the atmospheric characterization of habitable terrestrial planets, considering it the most compelling of our science objectives, encompassing both transmission spectroscopy and the high-contrast/high-resolution method. However, we will also explore a broader array of intriguing scientific scenarios. § ANDES OBSERVING MODES AND CAPABILITIES FOR EXOPLANET OBSERVATIONS §.§ ANDES Top Level Requirements and Observing Modes ANDES will have two fundamental observing modes: a seeing-limited R=100,000 fibre-fed spectrograph covering the 0.5–1.8 µm interval (with the goal to extend the coverage to the blue and to the K band), and an integral field unit (IFU) AO-assisted mode (also at R=100,000) that will be operating in the Y, J, and H bands. Exploring small rocky planets in the habitable zone of their stars via transmission spectroscopy is the leading science case of ANDES, while rocky exoplanet reflected light detection is the third in priority (Technical Requirements Specification for ANDES, ESO Document ESO-391757, 2022). Thus, these science cases drive ANDES's top-level requirements (TLRs), which are: * Spectral resolution: R ≥ 100 000 * Spectral sampling: >3.0 pixels per resolution element * Wavelength range: 0.5–1.8 µm (requirement), 0.38–2.4 µm (goal). In general, the case of exoplanet atmospheres requires a range of 550–1800 nm for two major reasons: 1) to cover all the major molecules H_2O, O_2, CO_2, CH_4, NH_3, C_2H_2, HCN, and 2) to include most of the stellar flux for M dwarfs, which are prime targets. Maximizing stellar flux is key to reach a sufficient SNR both in transmission and in reflected light. Extensions towards the blue down to 0.38 µm and/or to the red towards 2.4 µm are not crucial, but would be highly beneficial to expand the potential for exoplanet research; for gas giant planets the K band (2 to 2.4 µm) has been one of the most important bands to date. These potential extensions will be discussed in detail in the following sections.
* A diffraction-limited IFU mode covering the near-IR Y, J, and H bands (goal: I band), with several spaxel sizes (10–100 mas), giving a field of view from ≈10 × 10 to 100 × 100 mas. * AO performance: contrast enhancement of around 1000 at 3λ/D. * High PSF stability on daily timescales * High flat-field accuracy * Wavelength calibration precision of 1 m s^-1 (goal: 20 cm s^-1), stable on time scales of several hours. There are essentially two observational techniques that can be used to infer the surface and atmospheric properties of exoplanets using the high-dispersion and high spatial contrast capabilities of ANDES, namely atmospheric transmission/emission spectroscopy and planetary reflected light detection, which we will discuss in detail in the following sections. We note that the science case of transmission/emission spectroscopy relies only on using ANDES as a seeing-limited spectrograph. However, the reflected light science case relies on adaptive optics systems coupled with smaller spaxel-size IFUs. §.§ Seeing limited transmission and emission spectroscopy Atmospheres can be probed in transmission during the transit of an exoplanet in front of its host star. This technique involves comparing observations during the transit to those taken when the planet is not transiting, revealing spectral characteristics of the exoplanet and separating them from the star's spectra over time. The strength of the atmospheric signal in transmission spectroscopy primarily relies on the atmosphere's scale height, calculated as H = k T/(μ g) under hydrostatic equilibrium using the ideal gas law. Here, T represents temperature, μ is the mean molecular weight, and g is the acceleration due to gravity, while k is the Boltzmann constant. The apparent size of the host star's disk affects the signal, making smaller host stars, like M dwarfs, advantageous compared to solar-type stars. For instance, an Earth-like planet orbiting an M4 dwarf would produce a signal of approximately 40 ppm in transmission, while the same signal for the Sun-Earth system would only be 0.2 ppm. In these observations, the primary noise source comes from the small number of photons emitted by the host star. Consequently, bright targets are essential even with the ELT, because achieving a cumulative signal-to-noise ratio (S/N) on the order of 10^5 is necessary during in-transit observations. It is worth noting that some studies have already achieved noise levels of 10–20 ppm using high-resolution spectroscopy and cross-correlation techniques, as demonstrated by previous research <cit.>. These precision levels appear to be primarily limited by photon counts, suggesting that further improvements can be made with increased photon accumulation. The ELT's superior light-gathering capability during the relatively short duration of exoplanet transits (typically a few hours) is crucial in this context <cit.>. Seeing-limited observations, when the planet and the star are not resolved, can also be used to measure the emission spectrum of a planet by taking advantage of their differential velocities. The differentiation between star and planet spectral features is possible due to the presence of Doppler shifts between both signals. Emission spectroscopy can be performed for both transiting and non-transiting planets. The planetary signal is detected with the cross-correlation technique, in which the stellar-signal-corrected spectra are cross-correlated with modelled spectra of the expected planetary signal <cit.>.
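In schematic form, the cross-correlation step can be sketched as follows (a minimal illustration: in practice the residual spectra are weighted by their per-pixel variance, and the per-exposure CCFs are then shifted to the planet rest frame over a grid of orbital velocities K_p and co-added; all names are illustrative):

import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def ccf(wave, residuals, tpl_wave, tpl_flux, rv_grid_kms):
    """Cross-correlate stellar/telluric-corrected residual spectra
    (shape: n_exposures x n_pixels) with a Doppler-shifted model
    template, co-adding the contribution of many planetary lines."""
    out = np.zeros((residuals.shape[0], rv_grid_kms.size))
    for j, rv in enumerate(rv_grid_kms):
        # Shift the template to the trial radial velocity (classical Doppler).
        shifted = np.interp(wave, tpl_wave * (1.0 + rv / C_KMS), tpl_flux)
        shifted -= shifted.mean()          # remove the continuum level
        out[:, j] = residuals @ shifted    # one CCF value per exposure
    return out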
The cross-correlation approach is especially useful when looking for atoms and molecules that may originate hundreds to thousands of individual absorption lines in the transmission spectrum, making it the most powerful tool for detection. Using this methodology, the contribution of all these lines is combined, reducing the photon noise and permitting the detection of atoms and molecules hidden in the noise when analysed individually. This technique can also be applied to transmission spectroscopy during transit <cit.>. The high-dispersion observing techniques that we will discuss throughout this manuscript are: phase-resolved transmission and emission spectroscopy (the latter also dubbed `high dispersion phase curves') <cit.>; high signal-to-noise transmission spectroscopy of individual lines <cit.>; and eclipse or transit mapping, or both, i.e., the use of time-resolved information during ingress and egress at primary and secondary eclipse (to access latitudinal information) <cit.>. §.§ AO-assisted reflected light detection While the transit spectroscopy technique is strongly impacted by stellar contamination and can be inconclusive for planets with no atmosphere (or a very thin one) or with high-altitude clouds/hazes, this is not the case for reflected light at high contrast and high resolution. The reflected-light high-contrast high-resolution (HCHR) technique consists in combining an extreme Adaptive Optics (AO) and coronagraphic system with a high-resolution spectrograph <cit.>. The AO system and coronagraph are used to spatially separate the planet from the star and then hide the light from the star to improve the planet-to-star contrast. An integral-field unit (IFU) then collects the light in the focal plane and sends it to a high-resolution spectrograph. An additional gain in contrast can be obtained through high-resolution spectroscopy. Star and planet spectra can be disentangled thanks to their different spectral features and Doppler shifts. This technique is sensitive to individual spectral lines in the planet spectrum, which can be either Doppler-shifted stellar reflected lines or intrinsic planetary lines originating from molecular absorption in the planetary atmosphere. Many individual features are usually combined in a cross-correlation analysis to increase the SNR. Detecting these spectral signatures constrains many properties of the planetary atmosphere and surface, such as albedo, chemical composition, abundances of molecular species, atmospheric pressure, temperature, cloud coverage, and the 3D structure of the atmosphere. Essential observational quantities to be considered in this approach are the angular separation between planet and star, the planet-to-star contrast in reflected light, and the host star brightness. These control the planetary detectability and the amount of stellar contamination at the planet's position. The SNR on the planet spectrum is directly proportional to the planetary signal and inversely proportional to the square root of the contaminating flux from the star, which usually dominates the noise budget. In instrumental terms, the goal is to maximize the planet coupling while minimizing the stellar coupling at the planet location. The planet coupling depends on the overall throughput of the instrument and the Strehl ratio at the wavelength of observation.
The stellar coupling mainly depends on the AO residual halo at the angular distance of the planet, the intensity of the stellar Airy pattern at that position, and the possible use of a coronagraphic solution to suppress the stellar point spread function (PSF). Although the technique has not yet been tested on sky, the RISTRETTO instrument will pioneer it on the VLT <cit.>. §.§ Challenges ahead: Long period planets The biggest challenge in characterizing warm and temperate planets with long periods is that most of the current data reduction methods rely on the rapid change in planetary radial velocities across individual exposures to remove the stellar and telluric lines, for example through Principal Component Analysis (PCA; <cit.>). See Figure <ref> for a comparison example between a short- and a long-period planet. Despite its large aperture, in the absence of a deep methodological improvement, data processing could hamper the capability of the ELT to tackle this particular science case. We assessed the sensitivity of current telluric removal algorithms, such as PCA or SYSREM <cit.>, to slowly moving planets by measuring the fraction of the surviving signal in a simulated 5 hour observation of an emission spectrum centred at different planetary orbital phases (0.25–0.43 in steps of 0.03), for a set of orbital periods (1 to 15 days, variable steps) and exposure times (100–800 s in steps of 100 s). We chose a target J magnitude of 14.5 mag. The SNR is calculated using a custom version of the ANDES Exposure Time Calculator (ETC) v1.1[<http://tirgo.arcetri.inaf.it/nicoletta/etc_andes_sn_com.html>]. Our results show the following trends: * Phases closer to quadrature correspond to slower rates of change of the RV, therefore these are the phases most affected by PCA (which is well known and expected). * Very short period planets (<2 d) are almost insensitive to the above because a 5 h observation spans about 10% of the orbit, including spectra far from quadrature and thus allowing the recovery of a very significant fraction (>80%) of the signal. * For longer-period planets, not only is the radial velocity rate of change smaller overall, but the phase range covered by a single 5 h observation is also smaller, resulting in poorer constraints on (v_sys, K_p). As a result, a 15-day period planet has at most 30% of its signal surviving. Results are nearly independent of exposure time, except for the shorter-period planets, where smearing effects for long exposures lead to a drop of 10–20% in the recovered signal. The above simulations imply that when estimating the detectability of a planet with current analysis techniques there might be an additional scaling factor due to the detrimental effects of PCA, to be applied on top of any scaling due to other factors such as T/P/abundance profiles or planet radius, among others. We note that these predictions apply to emission observations, that is, to spectral sequences where a planet signal is present in each observed spectrum. We also assessed the removal of signal for transmission spectroscopy of long-period planets using the example case of K2-18b. The projected velocity for this planet, which has a 32-day orbit, changes by 1 km s^-1 between the start and end of its 2.3 h long transit. This causes its atmospheric signature to appear quasi-static in the data. For the assessment, we use a simulated spectroscopic time series of one transit event.
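The detrending step whose impact we assess here can be sketched as follows (a minimal, weighted version of the SYSREM algorithm of Tamuz et al. 2005, which iteratively removes rank-one systematics; array names and initializations are illustrative):

import numpy as np

def sysrem(resid, err, n_modes=5, n_iter=15):
    """Remove n_modes rank-one systematics c_i * a_j from a residual
    matrix of shape (n_spectra, n_pixels), weighting by uncertainties."""
    r = resid.copy()
    w = 1.0 / err**2
    for _ in range(n_modes):
        a = np.ones(r.shape[1])  # common trend per pixel (e.g., telluric)
        c = np.ones(r.shape[0])  # per-spectrum amplitude (e.g., airmass)
        for _ in range(n_iter):  # alternating weighted least squares
            c = (r * a * w).sum(axis=1) / ((a**2 * w).sum(axis=1))
            a = (r * c[:, None] * w).sum(axis=0) / ((c[:, None]**2 * w).sum(axis=0))
        r -= np.outer(c, a)      # subtract the converged systematic
    return r

A quasi-static planet trace partially projects onto these rank-one modes and is subtracted along with the tellurics, which is the signal loss quantified below.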
We compute this simulated data using the spectrum of an M2.5 star obtained by CARMENES as a stellar template <cit.> and model the telluric contamination on the basis of realistic 6 h long observations. As for the emission case, we calculated the SNR expected for the J and H magnitudes of K2-18b using the ANDES ETC v1.1. We explored the effect of different durations of the observations. In each case the data cover the full transit event, but the available out-of-transit coverage is varied. We find that for a quasi-static signal, such as that of K2-18b, the in-transit signal is reduced by 30–55% compared to a more favourable case in which the planet's velocity changes significantly faster (by ≈ 1 km s^-1 per exposure) during transit. We observe that for the quasi-static signal of K2-18b the amount of signal removed depends on the amount of out-of-transit baseline available, with more out-of-transit baseline leading to less signal removal (see Figure <ref>). This effect can be well observed in the progression of both the absolute signal and the signal strength relative to the residual noise in the data. For the short-period case this correlation becomes more obscured when the signal is normalized by the noise level. This is due to larger noise levels in the regions affected by telluric lines, caused by the longer observing duration. The longer out-of-transit coverage requires observations taken at high airmass, which negatively affects the noise level of the data. This effect is not observed in the quasi-static case of K2-18b, as the telluric contamination remains well removed from both the planet signal and the region at which the noise is evaluated. SYSREM/PCA not only removes signal in transit but also over-corrects the data obtained out of transit, as can be observed as a dark (anti-correlated) signature in the upper left panel of Figure <ref>. Dash et al. (under review) illustrate that such artifacts in the post-SYSREM/PCA data i) depend on the slope of the in-transit signal and ii) are completely reproducible for each tested model. Therefore, both the in-transit and the out-of-transit portions of the spectral sequence can be utilised to pinpoint the orbital solution of the exoplanet. Here, no data analysis approach other than PCA or SYSREM has been considered. Developing additional alternative data analysis techniques for both the visible and near-infrared spectral ranges, combining better telluric correction and the use of better stellar templates, can perhaps alleviate this difficulty. §.§ Challenges ahead: Stellar contamination The atmospheres of host stars are covered with complex, stochastic patterns associated with convective heat transport, the stellar granulation. Convection manifests in the surface layers as a particular pattern of downflowing cooler plasma and bright areas where hot plasma rises <cit.>. The size of the convection-related surface structures depends on the stellar parameters, with larger granules associated with lower surface gravity stellar types <cit.>. Convection is a difficult process to understand because it is non-local and three-dimensional, and it involves nonlinear interactions over many disparate length scales. In this context, the use of numerical three-dimensional (3D) radiative hydrodynamical (RHD) simulations of stellar convection is crucial, and in the last decade large grids of simulations covering a substantial portion of the Hertzsprung-Russell diagram have become available <cit.>.
The direct consequence of the non-stationary stellar spectrum, in the form of either Doppler shifts or distortions of the line profile during planetary transits, is a non-negligible source of noise that can alter or even prevent detection, especially when the same spectral lines are present both in the stellar and in the planetary spectrum: e.g., molecules like CO, H_2O, ZrO, or VO, or atomic transitions such as Fe, V, or Li, among others. Among the different stellar types, M dwarfs are key targets for high-resolution spectroscopy due to the high incidence of these stars in the solar neighbourhood <cit.> and their importance as exoplanetary hosts <cit.>. M dwarf stars, with masses between 0.08 and 0.4 M_⊙, are fully convective and can generate strong magnetic fields <cit.>. Their low effective temperatures result in a plethora of molecular line transitions throughout the optical spectrum <cit.> and atomic line transitions <cit.>, for which non-LTE effects on particular strong potassium lines lead to non-negligible metallicity corrections <cit.>. To first order, the telluric bands and instrumental trends can be considered stationary or quasi-stationary in wavelength for the duration of a typical observing night and then corrected <cit.>, while at high resolution the shape and position of the stellar absorption lines are extremely variable in time (Figure <ref>): the convection shifts, spatially averaged across the stellar surface, depend on the stellar parameters and range from several tens to a few hundred m/s <cit.>. In this context, an accurate and time-dependent representation of the background stellar disk with 3D RHD simulations is a natural and necessary step forward toward a better understanding of stellar properties and, in the context of exoplanet science, toward a detailed and quantitative analysis of the atmospheric signatures of transiting and non-transiting planets <cit.>. One particular aspect concerns the 3D RHD simulations of M dwarf stellar convection. Pioneering works on M and brown dwarfs presented the challenges in modelling these objects with respect to main sequence stars and highlighted the presence of self-excited gravity waves as an essential mixing process in their atmospheres <cit.>. More recently, new grids are under development <cit.>. Theoretical efforts for obtaining detailed line-formation physics, using next-generation time-dependent hydrodynamics simulations, will be crucial in determining spectroscopy-based parameters for these stars as well as their impact on the atmospheric signatures of planets, especially when combined with empirical solar observations that are translated into a Sun-as-a-star setting <cit.>. § HZ ROCKY PLANET ATMOSPHERES: TRANSMISSION AND EMISSION SPECTROSCOPY Transmission spectroscopy is nowadays routinely used to detect molecular and atomic species in the atmospheres of hot, giant planets using ground-based spectrographs (e.g., <cit.>). ANDES at the ELT does not represent a major technical advance in the application of these techniques, but it will be a major leap in capabilities. This is because, in practice, the community will be moving from spectrographs on 3 to 4 m telescopes (like CARMENES, GIARPS, or SPIRou, for example) to a 39 m diameter telescope, as the 8–10 m class telescopes do not have stabilized instrumentation similar to ANDES. Stable spectrographs like ESPRESSO, MAROON-X, or KPF cover only the optical spectral range and thus are not suited to detect molecular species (and CRIRES+ is limited in spectral coverage).
This means that ANDES will have the capability to extend the technique down to small, rocky, temperate exoplanets transiting nearby stars. Some of the best available targets we have so far for ANDES include benchmark systems like the TRAPPIST-1 planets <cit.> and LHS 1140b <cit.>, many of which are also part of approved JWST observing programs (see, e.g., <cit.>). Previous observations, in particular with JWST, have revealed several challenges in using transit spectroscopy to characterize these small planets. Heterogeneities on the stellar photosphere (spots, faculae) can strongly bias the measurements <cit.>. Moreover, if a thick layer of clouds is present in the planet's upper atmosphere, the transit spectroscopy technique struggles to provide molecular detections <cit.>. A detailed feasibility study for ANDES is needed to evaluate quantitatively the number of nights required to characterize the potential atmospheres of transiting, small, nearby exoplanets with transit spectroscopy. Such a study should carefully and quantitatively evaluate the two challenges mentioned above (stellar contamination and clouds) and take into account the specific mean molecular weight of each planet to be observed. Here we have modeled the potential contribution of ANDES by simulating observations of a small rocky planet and a sub-Neptune, and by then extending the simulations to the known population of small planetary systems. We focus on transmission spectroscopy, as small rocky planets in the HZ are not good targets for emission spectroscopy. Emission spectroscopy for larger planets is discussed in Section <ref>.

§.§ Key targets transit simulations

In order to explore the potential of ANDES, we have modeled several possible atmospheric scenarios for some key planets. In particular, we simulated TRAPPIST-1b, assuming a wet CO_2-dominated atmosphere (wet meaning 2% H_2O) at the equilibrium temperature of 400 K, with ten transits stacked. For TRAPPIST-1d, we simulated a wet N_2-dominated atmosphere with 400 ppm of CO_2 (similar to Earth's but without oxygen) and five transits stacked. Finally, we simulated K2-18b, assuming a solar-metallicity H-He atmosphere with 0.1% CH_4, as recently claimed by <cit.>. In this case, the simulation is done for a single transit observation. The models were built using petitRADTRANS <cit.>, with line absorption of CO_2 and H_2O, Rayleigh scattering of H_2/He, CO_2 or N_2 (depending on the assumed atmosphere), and collision-induced absorption by H_2-H_2, CO_2-CO_2 and N_2-N_2. The models also assume that the planets are tidally locked and that the atmospheres are co-rotating, leading to some rotational broadening (mostly relevant here for TRAPPIST-1b). To create the cross-correlation templates, a continuum normalization was carried out by coarsely sampling the continuum in 50 nm bins. No telluric contamination is considered in the modeling. We caution that this exercise is conducted using several assumptions, namely:
* A template that is a perfect model of the planet transmission spectrum (besides continuum normalization);
* The atmosphere is assumed to be clear down to 1 bar and opaque below, with no clouds or hazes;
* There is no stellar spectrum modeling nor the data analysis steps usually needed to remove it (e.g., PCA or SYSREM). The star is assumed to give us a flat continuum plus a wavelength-dependent SNR as the ANDES ETC dictates;
* We have not considered systematic noise sources.
This might be important, as high-resolution observations in the infrared are often not limited by photon noise but by either imperfect telluric removal or, in the case of M dwarfs, molecular absorption by unocculted spots. The resulting simulated cross-correlation functions are shown in Figure <ref>. It is noticeable how well one can detect the atmosphere for all three planets. In particular, the strong significance of K2-18b from a single transit is undoubtedly thanks to the broad wavelength coverage and the inclusion of the K-band (which we have assumed here) in ANDES.

§.§ Small transiting planet accessible population

Based on the simulations presented in Section <ref>, we estimate the number of currently known transiting planets with dec < 10^∘ in the R < 4 R_⊕ regime[Gathered from the NASA Exoplanet Archive on 13 October 2023] that ANDES could be able to characterize. For each target, we calculate the Transmission Spectroscopy Metric (TSM; <cit.>), which is proportional to the expected signal-to-noise ratio based on the strength of spectral features. We then divide the sample of known planets into sub-Neptune-size (1.5 R_⊕ < R < 4 R_⊕) and Earth-size (R < 1.5 R_⊕) planets. We consider K2-18b as a representative member of the sub-Neptune-size category, and TRAPPIST-1b and TRAPPIST-1d as representative members of the Earth-size category. Our simulations in Section <ref> show that K2-18b will be detected at >20σ in one transit, TRAPPIST-1b (CO_2-dominated atmosphere) at >12σ in 10 transits, and TRAPPIST-1d (N_2-dominated atmosphere) at >12σ in 5 transits. We thus establish the TSM threshold to detect a sub-Neptune-size planet at 5σ in one transit as TSM_K2-18b/4, an Earth-size planet with a CO_2-dominated atmosphere in 5 transits as √(2)TSM_TRAPPIST-1b/2.4 (accounting for the different number of transits), and an Earth-size planet with an N_2-dominated atmosphere in 5 transits as TSM_TRAPPIST-1d/2.4 (a minimal sketch of this scaling is given below). We neglect the differences in transit duration among the known planets. Figure <ref> shows the number of accessible planets for atmospheric detection via transmission spectroscopy. For sub-Neptune-size exoplanets (1.5 R_⊕ < R < 4 R_⊕), ANDES will be able to detect the planetary atmospheres in a single transit observation for about 330 out of about 370 known transiting planets with dec < 10^∘ at better than 5σ, of which about 130 have P < 7 days and 240 have P < 15 days. These planets cover a wide range of equilibrium temperatures and radii within this range. For the smallest, Earth-size exoplanets (R < 1.5 R_⊕), the results in Figure <ref> assume 5 transit observations per target and two possible atmospheric compositions. The red line marks the detectability of a CO_2-dominated atmosphere, while the blue line marks the detectability of an Earth-like atmosphere. In the first case, 20 planets out of the 140 currently known are detected at 5σ, nearly all of them with P < 7 days. For an Earth-like atmosphere, 35 planets out of the same 140 known planets are detected at 5σ, of which 30 have P < 7 days and nearly all have P < 15 days. Note that 4 and 16 Earth-size planet atmospheres are detectable in the CO_2-dominated and Earth-like cases, respectively, with only a single transit observation. We note that small planets (R < 4 R_⊕) encompass a diversity of possible planetary bulk compositions, from rocky Earth-like compositions to water worlds to puffy sub-Neptunes with H/He envelopes.
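The sketch referenced above implements the TSM scaling in minimal Python. It assumes the Kempton et al. (2018) TSM definition and uses rough, illustrative parameters for K2-18b; these are not the exact values adopted in our simulations.

# Radius-bin scale factors from Kempton et al. (2018).
def tsm_scale(rp_earth):
    if rp_earth < 1.5:
        return 0.190
    if rp_earth < 2.75:
        return 1.26
    if rp_earth < 4.0:
        return 1.28
    return 1.15

def tsm(rp_earth, mp_earth, teq_k, rstar_sun, jmag):
    """Transmission Spectroscopy Metric: R_p [R_earth], M_p [M_earth],
    T_eq [K], R_star [R_sun], J-band magnitude."""
    return (tsm_scale(rp_earth) * rp_earth**3 * teq_k
            / (mp_earth * rstar_sun**2) * 10.0**(-jmag / 5.0))

# Rough, illustrative parameters for K2-18b (placeholders, not our adopted values).
tsm_k218b = tsm(rp_earth=2.6, mp_earth=8.6, teq_k=270, rstar_sun=0.44, jmag=9.8)

# Threshold for a 5-sigma, single-transit sub-Neptune detection, as defined above.
print(f"TSM(K2-18b) ~ {tsm_k218b:.0f}; threshold ~ {tsm_k218b / 4.0:.0f}")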
Regarding this compositional diversity, the range of possible atmospheric compositions of rocky Earths and super-Earths in particular is not well known and is the subject of strong debate <cit.>. This is because the composition of the atmosphere is determined by the insolation history of the system, and also by the interplay between the atmosphere and the surface or magma beneath, factoring in the solubility of volatile substances <cit.>. While research has largely centered on the interaction of molten metals and silicates with hydrogen-rich atmospheres <cit.>, little is known about how magma oceans in planets with large water mass fractions interact with their atmospheres. Planets that experience intense radiation could maintain high surface temperatures even if they have water-rich atmospheres <cit.>. These magma oceans could act as large reservoirs that keep volatiles from being stripped away during the early, active life of the host stars, but this has not been empirically tested. Thus, ultimately, the detectability thresholds for ANDES will be extremely target-dependent. Still, the number of available targets for ANDES indicated by our simulations is very encouraging. Our estimations are also consistent with a similar exercise, more detailed and considering more molecular species, conducted by <cit.>. A larger, well-characterized sample of small planets is needed to constrain theories for how these planets form and retain atmospheres, which is in turn crucial for understanding the habitability of their more temperate siblings.

§ HZ ROCKY PLANET ATMOSPHERES: REFLECTED LIGHT

The HCHR technique offers the possibility of targeting non-transiting, and thus closer, rocky planets. The HCHR reflected-light technique is, in essence, sensitive to the planetary albedo, which can be strongly impacted by all reflective layers on the surface (e.g., continental or oceanic ice) and in the atmosphere (e.g., clouds) of planets. While clouds are an issue for the transit spectroscopy technique, they can be an advantage for the HCHR technique. The HCHR technique can be used first to detect planets and then to characterize their surface and atmosphere (if any). Even surface features can, in principle, be probed in the reflected-light geometry, in contrast to the transit geometry, which is only sensitive to the upper atmospheric layers. It is therefore highly interesting to investigate temperate rocky planets with this technique, which could be used to detect the major molecular constituents of their atmospheres (e.g., H_2O, CO_2, CH_4, NH_3, O_2) and constrain the presence of clouds, ice caps, and liquid water on their surfaces. We note, however, that exoplanet reflected-light detections at high resolution have yet to be robustly demonstrated to work on the sky. Some works have placed strong upper limits on this technique's capabilities <cit.>, using high resolution alone (without the high-spatial component), that reach down to the ∼1×10^-6 level with optical detectors; detection is thus in principle possible, but has not yet been proven. We have simulated ANDES single conjugate adaptive optics (SCAO) observations of known exoplanets to explore the target sample that can be probed in this mode. The modeling details for the AO performance and the details of the simulations for exoplanet observations are given in Appendix <ref>.

§.§ A Golden Sample for ANDES

The results of our simulations are illustrated in Figures <ref> and <ref>, which show the required exposure time for an SNR=5 detection as a function of angular separation.
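For orientation, the scaling behind such exposure-time estimates is often approximated with the rule of thumb S/N_CCF ≈ c_loc × S/N_* × √(N_lines), where c_loc is the local planet-to-halo contrast (the planet flux relative to the residual stellar halo at the planet's location), S/N_* the per-spectral-element signal-to-noise on that halo, and N_lines the effective number of template lines. The sketch below uses purely illustrative, hypothetical values; it is not the end-to-end simulation behind the figures.

import math

def required_hours(local_contrast, snr_star_1h, n_lines, snr_target=5.0):
    """Hours to reach snr_target, assuming photon-limited scaling:
    S/N_CCF ~ c_loc * S/N_star * sqrt(N_lines), with S/N_star ~ sqrt(t)."""
    snr_ccf_1h = local_contrast * snr_star_1h * math.sqrt(n_lines)
    return (snr_target / snr_ccf_1h) ** 2  # integration time scales as SNR^2

# Illustrative (hypothetical) numbers for a Proxima b-like case:
# local contrast ~ 5.7e-5, S/N_* ~ 650 per element in 1 h, ~3000 usable lines.
print(f"{required_hours(5.7e-5, 650.0, 3000):.1f} h")  # ~6 h for SNR = 5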
From these figures, we can see that about 30 planets of all kinds can be characterized with ANDES in less than one night of observing time. Some "easy" giant planets, both hot and cold, can be probed in less than one hour. The sample includes a large diversity of objects in terms of mass and equilibrium temperature, from warm Jupiters and Neptunes to temperate rocky planets. Among these is Proxima b, which can be characterized in just one night of observations at maximum elongation, and in significantly less time when closer to superior conjunction. One of the primary science goals of ANDES is to detect potential biomarkers in the atmospheres of temperate rocky worlds. From the results of the simulations we thus define a "golden sample" of 5 such planets, which are the most favorable currently known in terms of SNR. These are: Proxima b, GJ 273 b, Wolf 1061 c, GJ 682 b, and Ross 128 b. Their main properties are listed in Table <ref>. In these simulations, we chose to observe in the Y and H bands. According to the SCAO performance study, a Strehl ratio of 0.6 can be achieved in median seeing conditions in the H band, while a value of 0.3 is achieved in the Y band. The raw contrast curves include both a high- and a low-piston scenario. The low-piston scenario provides a contrast of typically ∼1.5·10^-3 at 25–45 mas from the star. Below 25 mas, the achievable contrast is limited by the first Airy rings of the stellar PSF. A coronagraphic solution would thus be highly beneficial to access the golden sample targets Ross 128 b, Wolf 1061 c, and GJ 682 b, located at 15–21 mas from the star at maximum elongation. If this can be achieved, a basic characterization of all 5 golden sample targets could be done in about 33 nights of observing time. The population of potentially habitable planets in the immediate solar neighborhood could thus be probed for the first time by ANDES.

§.§ Expected properties and Detectable species in or near the Habitable Zone of small planets: Proxima b

To estimate the detectability of the main atmospheric constituents of the best target of our golden sample – Proxima b – with ANDES, we adapted the methodology of <cit.> to HCHR observations to compute cross-correlation functions (CCFs) and then SNRs. We accounted for the contamination by Earth's telluric lines, as well as the relative velocities of Proxima b, Proxima Cen, and the Earth. Calculations were performed in the Y, J, and H bands, using the contrast curves in the low-piston scenario, in line with the calculations presented in the previous subsections. We first assumed fixed (i.e., wavelength-independent) planet-to-star contrast ratios, using typical values derived from the global climate model (GCM) simulations of <cit.> at maximum elongation. We find that, assuming a planet-to-star contrast ratio of 10^-7 (3 × 10^-8 and 2 × 10^-7, respectively), Proxima b can be detected with ANDES at 5σ in about 7 h (60 and 2 h, respectively). In a worst-case scenario, if Proxima b (and the other planets of our 'golden sample') happens to have no atmosphere (which also implies no clouds) and an extremely dark surface, the planet-to-star contrast ratios can be relatively low. Typically, for Proxima b, assuming a dark surface (i.e., a Mercury-like albedo of 0.1) and no atmosphere, contrast ratios should be around 3 × 10^-8 at maximum elongation (for the most probable inclination of 60^∘). This very unfavorable scenario cannot be ruled out at the moment for this category of planets, about which we know so little today, but ANDES has precisely the capability to explore this.
One might be tempted to give credence to this hypothesis, given the recent results of JWST MIRI on TRAPPIST-1b <cit.> suggesting that the planet has a low albedo and possibly no atmosphere, but there are several things to bear in mind: (1) additional JWST MIRI observations and models are underway on planet b and the other planets in the TRAPPIST-1 system, which could invalidate this interpretation (e.g., a thermal phase curve of the planet was obtained as part of JWST Cycle 2 observations); (2) if we had first observed Mercury – the innermost planet in the solar system – we would have seen a planet with no atmosphere and a very low albedo, whereas the other rocky planets have much higher albedos (around 0.3 for the Earth; 0.7 for Venus); <cit.> discussed this in the context of the TRAPPIST-1 planets; (3) it is dangerous to draw overly broad conclusions about rocky planets around M stars from a single system, and that is precisely where ANDES – with its 'golden sample' – would make it possible to probe this population of planets in depth. Even in this worst-case scenario, the planet Proxima b would be detected (and its albedo determined) within 7 nights of observing time. To simulate in a more advanced way the possibility of detecting the planet Proxima b in the event it has an atmosphere, and to evaluate the capacity of ANDES to detect molecular species, we computed a grid of high-resolution planet-to-star contrast ratios based on the results of 3D line-by-line radiative transfer calculations of Proxima b. This grid was built using a grid of 3D Global Climate Model simulations described in <cit.>. Our assumptions about the planet can strongly affect the expected contrast ratios and, thus, our ability to detect a signal in reflected light with ANDES. The simulations show that the no-atmosphere, low-albedo scenario is pessimistic for several reasons, listed below:
* Even if the planet has a very low surface albedo, with a thick enough atmosphere, Rayleigh scattering by the atmosphere causes the geometric albedo and, thus, the planet-to-star contrast ratio to rise at visible wavelengths. Taking advantage of this fact, however, would require that the AO-assisted IFU observing mode of ANDES covered the I band, which is currently only a goal and presents substantial technical difficulties.
* If water is present on the planet, it will form surface ice or clouds, likely both simultaneously, which will further boost the planet-to-star contrast ratio up to 1.0 × 10^-7 at visible wavelengths. Note that ocean glint could increase this contrast ratio by typically up to 30% in the simulations we performed (i.e., up to 1.3 × 10^-7). Other types of clouds or hazes would also likely help increase the contrast ratio.
* Compared to an Earth-like 1 bar atmosphere, increasing the surface atmospheric pressure can boost the planet-to-star contrast (due to Rayleigh scattering and more clouds forming in the model) despite less sea ice forming because of the higher surface temperatures; decreasing the surface atmospheric pressure can also boost the signal, due to more sea ice forming. We obtain a similar order of magnitude for the planet-to-star contrast ratio when assuming an N_2-dominated, O_2-dominated, or CO_2-dominated atmosphere, etc.
In any case, all the habitable planets (i.e., with global oceans) we have simulated have contrast ratios of the order of 1.0 × 10^-7 at maximum elongation (and possibly more, see below).
* So far, we have assumed an orbital inclination of 60^∘ for Proxima b in our calculations. However, if the orbit is seen at lower inclination, and the planet is thus more massive and larger (while still in the rocky-planet regime), the reflected-light signal could be boosted up to typically 2 × 10^-7. Note that the difference is minimal for inclinations > 60^∘ but can be large for inclinations < 60^∘. This contrast boost also applies if the planet has no atmosphere and a very low albedo.
* Even more importantly, our calculations so far have assumed that we are working at the maximum elongation of the planet (37 milliarcseconds for Proxima b). The fact that ANDES can work at smaller inner working angles can help us to strongly boost the planet's signal and reduce the stellar coupling at the planet location. For instance, targeting Proxima b when the planet is at 30 mas (instead of 37 mas) could reduce the required integration time by a factor of up to 5 (see Fig. <ref>). This gain also applies if the planet has no atmosphere and a very low albedo. This aspect is a main motivation for ANDES to work as close as possible to the diffraction limit, enabling us to probe the planets of the "golden sample" at phase angles that are more favorable for detecting reflected light (relative to the maximum elongation geometry).

All the effects mentioned above can be combined, further amplifying the planet-to-star contrast ratio. In addition, we evaluated the capability of ANDES to detect and characterize an atmosphere around Proxima b (if any) using the methodology described above, but using a CCF with atmospheric molecular templates instead of stellar lines reflected by the planet. Using GCM simulations with a global ocean and an Earth-like atmosphere with the self-consistent formation of water vapor and clouds <cit.>, we evaluated that it would take 20 hours of observation (300 and 180 hours, respectively) for ANDES to detect H_2O (CO_2 and O_2, respectively). Figure <ref> synthesizes the number of nights required to detect various molecules, depending on the atmosphere considered and on whether clouds are included or not. The simulations show a significant spread in the required number of nights to characterize Proxima b, but molecular detections are feasible within a moderate amount of observing time in many scenarios. Surprisingly, for some atmospheric scenarios we note that water absorption lines (or, to put it more accurately, water troughs) are easier to detect than the stellar lines reflected by the planet. If Proxima b happens to have oceans and a thin atmosphere, the planet could be detected at SNR=5 through water lines in about 1 hour of observing time if targeted at the optimal angular separation of 30 mas. Tenuous atmospheres are easier to detect in the Y, J, and H bands for two reasons: (i) many more molecular lines are resolved (in thick atmospheres, molecular lines saturate), which facilitates molecular detection by CCF; and (ii) the overall absorption by the atmosphere is much weaker, which tends to increase the surface albedo, boosting the contrast ratio and hence the planet detection by CCF. Last but not least, in addition to atmospheric molecular detections, very interesting science could also be pursued by:
* Measuring the broad-band spectral variations of the planet-to-star contrast ratio.
For instance, some cases show a sharp decrease of the contrast ratio above 1 micron, which is induced by the presence of sea ice (whose albedo strongly decreases in the infrared). Another interesting and critical aspect that we see in our simulations is that some molecular lines can disappear from the planet-to-star contrast-ratio spectra when the atmospheric column of the gas and the atmospheric pressure are so large that the saturated lines spill over into one another. In this case, detection of the molecule is still possible, but it requires measuring albedo variations in and out of the molecular bands (rather than cross-correlating with individual molecular lines).
* Measuring the contrast-ratio variation at several phase angles, which could be used to probe, for instance, the properties of the clouds, the surface, or both.

The above results are confirmed through more detailed end-to-end simulations (Beaudoin et al., in prep.) of the ANDES IFU assuming low-piston AO performance. All (10 mas) spaxel signals, dominated by the host star’s spectrum modulated by telluric absorption over the YJH spectral range, are used to construct a master reference spectrum used to extract the planetary signal (local planet contrast and orbital velocity) through a log-likelihood MCMC method. Figure <ref> shows the posterior distributions for a 20-hr simulation of Proxima b at maximal elongation (37 mas), assuming an Earth-like (water-rich) clear atmosphere, a planet/star contrast of 1.3×10^-7, and an orbital velocity of ∼30 km/s. At this separation, the local planet contrast ratio (cr) is 5.7×10^-5. As shown in Figure <ref>, cr is recovered with an SNR of 8.6, equivalent to a 5σ detection in 6 hours, i.e., less than one night. The same simulation performed with only the O_2 component of the atmosphere would require 320 hours (∼45 nights) for a 5σ detection.

§ ATMOSPHERIC CHARACTERIZATION FROM INFLATED JUPITERS TO SUPER-EARTHS

§.§ Atmospheric composition and dynamics of gas giants

Due to their diversity, exoplanets constitute an ideal laboratory to study atmospheric dynamics in regimes not accessible in the Solar System. Here, we discuss the prospects of tracing atmospheric dynamics with ANDES. While not included in the baseline design, extending ANDES to the K-band is a goal that is particularly powerful for studying dynamics. The K-band is discussed in more detail in Section <ref>, but some of the calculations presented below already use it. Models predict that various circulation patterns should arise as we move across the irradiation–internal-heating diagram (Figure <ref>). Atmospheric dynamics theories have successfully captured some of the trends observed across this parameter space <cit.>. However, many open questions remain, several of which can be tackled thanks to ELT ANDES, for example:
* (Very-)Hot gas giants: Q1: What is the altitude- and longitude-resolved geometry of winds, and what drives it across the parameter space (surface gravity, rotation rate, stellar flux, metallicity)? How do composition, clouds/hazes, and thermal structure influence the wind structure, and how do winds impact chemical and thermal gradients? <cit.> Q2: What is the role of atmospheric drag, and which is/are its physical origin/s across the parameter space? <cit.> Q3: What is the magnitude and origin of any variability in atmospheric wind strength and geometry?
<cit.>
* Warm gas giants (up to 1000 K): Q1: What is the rotational axis (period and obliquity) of warm gas giants, and to what extent does it impact atmospheric dynamics? <cit.> Q2: What is the magnitude of stratospheric winds in less irradiated planets <cit.>?
* Young distant Jupiters and isolated brown dwarfs: Q1: What is the underlying mechanism driving atmospheric circulation on these objects: forcing from the convective region below, or a form of instability between clouds and the underlying atmospheric structure? <cit.> Q2: Can we link observed properties such as chemical disequilibrium, evidence of patchy clouds, and variability caused by changing atmospheric features to a global picture of atmospheric circulation for these objects? <cit.>

For young Jupiters and brown dwarfs, vertical circulation has been advocated to explain the presence of specific spectral features (e.g., dust, disequilibrium species). At the same time, turbulence likely plays a role in determining their atmospheres' patchiness, which probably causes the observed rotational modulation. In brown dwarfs, this has been further reinforced through Doppler imaging enabled by K-band measurements <cit.>, and wind speeds have been inferred by comparing the radio and infrared variability <cit.>. These techniques are currently only applicable to a few objects with masses outside the planetary regime. While mapping may be possible to a degree in the L-band on the ELTs <cit.>, ANDES will allow us to Doppler-map many tens of variable brown dwarfs and the brightest directly imaged planets if the K-band (clearly superior for mapping when compared to the L-band, and a goal for ANDES) is added to the instrument (see Section <ref>). For transiting exoplanets, information on dynamics is encoded in spatial variations in longitude and latitude across the atmosphere. Space-borne phase curves <cit.> and eclipse mapping <cit.> – especially in combination with the large aperture and spectroscopic capability of JWST – have led the way toward observational constraints on atmospheric dynamics in transiting exoplanets <cit.>. The challenge is that dynamical information has to be inferred indirectly, for example through shifts in the hot-spot location or through the energy balance between the day- and night-side hemispheres. High spectral resolution observations offer a complementary avenue to probe the dynamics of transiting planets <cit.>, through the measurement of precise line shapes and of longitudinally resolved atmospheric properties and Doppler shifts <cit.>. In addition, they can be used to constrain the rotation rate of young Jupiters and brown dwarfs <cit.>. Here, we assess the capabilities of ANDES as an atmospheric dynamics survey machine, but also as an instrument to unlock unprecedented details about the atmospheric dynamics of the best individual objects. To this end, we designed two sets of simulations that, despite not representing all possible observing strategies, showcase the potential of ANDES. We first assess the theoretical precision that ANDES can reach on the systemic velocity (v_sys) and the orbital velocity (K_p) of a transiting planet based on its spectrum. To this end, we consider a Jupiter-size planet around a K1 star (analogous to the HD 189733b system), including CO and H_2O at roughly solar abundances and in chemical equilibrium. The simulated wavelength range for this study spanned the Y, J, and H bands all the way to the K-band, the last being required to best leverage the strong CO lines (see Section <ref>).
We simulate a time-series observation of the primary transit of this system using a custom version of the ANDES ETC v1.1 (Sanna et al., 2023[https://drive.google.com/file/d/18dM5OzTxdvshHY86XPMlfDsbm0hN6juZ/view]). We include telluric lines at fixed precipitable water vapor, scaling them with airmass, and process the observations through a custom pipeline based on Principal Component Analysis (PCA). For targets in the J=12-13 magnitude range (bright end), detections are so strong that theoretically one could constrain v_sys to within 10 m s^-1 (i.e., below the required instrumental stability) and K_p to within a few hundred m s^-1 in one transit observation. Current peak VLT/CRIRES-like performances (sub-km s^-1 in v_sys and a few km s^-1 in K_p), typically obtained at J∼9 mag, are still attainable down to J=15 mag. We then assess to what extent ANDES can identify shifts among different species simultaneously present in the atmosphere. We simulate an atmosphere containing CO (again requiring the K-band, an ANDES goal) and H_2O with a range of relative velocities. We perform a likelihood-ratio test to determine whether the simulated data justify the additional complexity of the model requiring a relative shift between the species. We find that the higher-complexity model is confidently preferred for a relative shift of 1.0 km s^-1, at 3.6σ and 7.5σ at J=13.5 and 13.0, respectively. At the faint J=15.5 end, ELT ANDES will be sensitive to ∼5 km s^-1 shifts, comparable to the current peak performances of CRIRES+ at J=9. These simulations are somewhat idealized. However, it is clear that the precision achieved is exquisite, and it might allow us to perform a first census of atmospheric dynamics for a statistically significant sample of gas giants. We evaluated the potential size of this sample as follows. First, we selected from the NASA Exoplanet Archive[Updated to the 13th of October 2023] all known planets at declination < 10^∘ that have masses larger than 0.4 Jupiter masses and that transit their host star. Then, for each target, we computed the TSM and compared it to the TSM of HD 189733b scaled to a J magnitude of 12, to determine the sample of planets for which detailed atmospheric dynamics characterization is possible (km s^-1-level differences among velocities of different species detected, precisions down to ∼ 10-100 m s^-1 on v_sys and K_p), and to a J magnitude of 15, to determine the sample for which a first assessment of atmospheric dynamics might be possible (∼ 5 km s^-1-level differences among velocities of different species detected, sub-km s^-1 precision on v_sys and K_p). Out of the 329 known planets selected according to our criteria, about 140 have TSMs that qualify them for the detailed atmospheric dynamics characterization sample (130 with orbital period shorter than 7 days), and 260 have TSMs that might in principle allow a first assessment of atmospheric dynamics (240 with orbital period shorter than 7 days). Ultimately, this sample size is comparable to the ARIEL tier 1 sample size <cit.>, opening exciting opportunities to synergistically measure atmospheric properties from space and wind speeds and geometries from the ground. We illustrate the sample in Fig. <ref>.
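For reference, v_sys and K_p are typically constrained by co-adding the per-exposure cross-correlation functions in the planet rest frame over a grid of trial K_p values. The minimal sketch below illustrates that standard procedure; it is not our actual pipeline.

import numpy as np

def kp_vsys_map(ccf, rv_grid, phases, kp_trials):
    """Co-add per-exposure CCFs in the planet rest frame for each trial K_p.

    ccf       : (n_exp, n_rv) array of cross-correlation values
    rv_grid   : (n_rv,) radial-velocity axis of the CCFs [km/s]
    phases    : (n_exp,) orbital phases (0 at mid-transit)
    kp_trials : (n_kp,) trial orbital velocities [km/s]
    Returns an (n_kp, n_rv) map whose peak marks the best (K_p, v_sys).
    """
    out = np.zeros((len(kp_trials), len(rv_grid)))
    for i, kp in enumerate(kp_trials):
        v_planet = kp * np.sin(2.0 * np.pi * phases)  # circular orbit
        for row, v in zip(ccf, v_planet):
            # Shift this exposure's CCF into the planet rest frame and co-add.
            out[i] += np.interp(rv_grid, rv_grid - v, row, left=0.0, right=0.0)
    return out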
We note that the numbers quoted here (∼100 planets for detailed characterization) are comparable to those presented for the K-band (many tens of planets) in Section <ref>, and that the study presented here also included the K-band. The numbers between the two sections are not directly comparable because the characterizability was assessed in different ways (shifts between different species vs. line-shape sensitivity to dynamics). That being said, both point to the K-band likely playing an important role in the characterization of atmospheric dynamics. A more detailed analysis of how the studies described above depend on the inclusion of the K-band would be beneficial. Focusing on what can be accomplished on the most favorable targets using ANDES' baseline design, we have simulated more refined ANDES J-band observations (1.15–1.35 μm) of WASP-76b (as an example of a very bright target), based on the three-dimensional water absorption model of <cit.>, including Doppler shifts from atmospheric dynamics and rotation, as shown in Fig. <ref>. We scaled the noise from an actual WASP-76b SPIRou transit observed in 2020, accounting for a factor of 100 due to the larger collecting surface of the ELT compared to CFHT, and included SPIRou-like blaze functions. As a control experiment, we simulated a similar observation scaled to the size of CFHT. Synthetic telluric absorption was calculated using the ESO sky model calculator, accounting for the airmass at each exposure. The synthetic observation sequences were then analysed following a custom PCA-based method <cit.>. Since the model that we used to synthesize these observations is inherently three-dimensional, the line shapes should significantly deviate from Voigt profiles. To test this in a statistical framework, we have employed a nested sampling algorithm to fit for the temperature, the water content, the rotation speed of the planet, and, in addition, the speed and latitudinal width of equatorial superrotation, following the formalism of <cit.>. With SPIRou synthetic and real data, the code could not resolve the rotation-wind degeneracy, and although temperature and water content were retrieved, the dynamical information was impossible to obtain. ANDES synthetic data, on the other hand, constrain both the rotation rate and the super-rotating winds with a precision of 300 m s^-1. Moreover, the latitudinal width of the superrotation (25 degrees) was recovered with a 1.7-degree error bar. The recovered line shape from our posterior distribution is shown in Figure <ref>. This test points to ANDES's ability to resolve line shapes in the best transiting planets to a precision sufficient to discriminate between different atmospheric dynamics patterns, even with the very simplified pseudo-1D analysis framework we used. An extension of the wavelength coverage beyond the H-band should result in better precision and allow the study of additional molecules.

§.§ Unlocking the atmospheres of young giant planets

Since 2014, SPHERE and GPI have enabled the acquisition, for the first time, of low-resolution (R_λ∼ 50) emission spectra of tens of young giant planets exhibiting broad features due to unresolved molecular (H_2O, CH_4, VO, FeH, CO) absorptions <cit.>. These spectra can be compared to the predictions of atmospheric models to infer first-order information on the bulk properties of the atmosphere, mainly the effective temperature, the surface gravity (pressure), and the properties of clouds.
The direct imaging community's current atmospheric models are mainly 1D models involving at least thermodynamics, radiative and convective energy transport, and gas-phase chemistry <cit.>. They solve the pressure-temperature structure in radiative-convective equilibrium and determine the chemical species that are expected to form. Sophisticated models of clouds of different compositions (silicates, sulfides) have been considered for over a decade. The analysis of these low-resolution spectra of young giant exoplanets showed that, as for young brown dwarfs, young exoplanets of a given spectral type with lower surface gravity can be up to 200–500 K cooler than their older counterparts. This discontinuity is particularly noticeable at the M-L transition and evidenced by a lack of methane at the L/T transition, as seen for HR 8799 b <cit.>. These results confirm the early photometric characterization that showed the peculiar properties of young L/T-type planets like 2M1207 b <cit.>, HR 8799 bcde <cit.>, HD 95086 b <cit.>, or HD 206893 b <cit.>, owing to the low-surface-gravity conditions in their atmospheres, leading to an enhanced production of clouds probably composed of sub-μm dust grains made of iron and silicates. At temperatures cooler than typically 1000 K, the clouds in their atmospheres start to fragment and sink below the photosphere, as recent variability studies show high-amplitude photometric variations for several of these young, planetary-mass companions <cit.>. The presence of circumplanetary disks, as directly evidenced for PDS 70 c by ALMA sub-mm observations <cit.>, might also affect the spectral energy distribution of the youngest planets, mixing contributions in the infrared from the photosphere and the (spatially unresolved) circumplanetary disk <cit.>. With the increase in spectral resolution (R_λ > 1000), the line blending of molecular absorptions starts to be resolved, yielding constraints of unprecedented accuracy on the surface gravity (pressure), the chemical abundances, and the complex molecular chemistry of exoplanetary atmospheres. Atmospheric model degeneracies can be better lifted and missing opacities identified. Note that the combination of spectral diversity with higher spectral resolution and high-contrast imaging is particularly interesting for boosting the detection performance for young exoplanets using the molecular mapping technique <cit.>. As for characterization, high-contrast spectroscopic observations at medium resolution with AO-fed spectrographs, e.g., by OSIRIS at Keck and SINFONI at VLT, led to the determination of the abundances of molecules such as water, carbon monoxide, or methane for several young planets like HR 8799 b and e <cit.>, β Pic b <cit.>, and HIP 65426 b <cit.>. More recently, even the 12CO/13CO isotopologue ratio, connected to the carbon-ice fractionation process, was measured for the first time in the atmosphere of the young exoplanet TYC 8998-760-1 b, suggesting a formation beyond the CO snowline <cit.>. We note that all studies presenting findings based on the presence of CO (and its isotopologues) required observations in the K-band, which is a goal for ANDES and is discussed in Section <ref>. Accessing the planets' atmospheric composition (C/O, D/H, N/O, N/C, and isotopologue ratios) and metallicity is a path forward in the exploration of the formation mechanisms, the birth location, and the potential migration/dynamical history of the exoplanets <cit.>. However, the observed atmospheric composition results from various complex physical processes.
These processes are connected to the disk composition, physical and thermal structure, chemistry, and evolution, which define the building-block composition of the giant planet; to the planetary formation processes that affect the solid-gas accretion phase; to the vertical mixing/diffusion between the planet’s bulk and atmosphere; or simply to the dynamical evolution over time. All of this makes any direct interpretation challenging at this stage, but essential to ultimately connect the planets' physical properties and atmospheres with their origins <cit.>. With the advent of near-infrared spectrographs offering higher spectral resolution (R_λ≥ 30 000) combined with high contrast, as is the case for HiRISE/CRIRES+ at VLT or KPIC at Keck <cit.>, the radial and rotational velocities of young giant planets can be directly determined to explore their global 3D orbital properties, but also to measure their rotational periods and obliquities in connection with their formation history <cit.>. Moons similar to those around Jupiter are also expected to form in circumplanetary disks as a by-product of planet formation <cit.>, and the potential detection of these elusive objects from radial velocity monitoring of self-luminous directly imaged planets will be feasible with ANDES at the ELT <cit.>. The novelty of ANDES in this field is clearly the gain in angular resolution and contrast combined with high spectral resolution, compared with current or near-term facilities. In summary, given the current estimations of performance and limitations, the niche for ANDES in SCAO-IFU mode is the characterization of young giant planets detected in the planet-forming region (1–10 au), which are not characterized by 10-m class telescopes. Access to a spectral resolution of 100 000 in the near-infrared will give access to the molecular abundances, the isotopologues, the radial and rotational velocities, and the exploration of the global atmospheric circulation of the planet (winds, jets). For very bright exoplanet and brown dwarf companions, the possibility of acquiring Doppler imaging of the companion can be envisioned, as detailed in Section <ref>. The targets will be the dozens of known/new young exoplanets and brown dwarfs that have been/will be imaged or detected before the first light of the ELT from the ground or in space (Gaia, JWST), and also those potentially discovered by MICADO, HARMONI, and METIS at the ELT. Therefore, ANDES will be a game changer in addressing the following science cases for young Saturns and Jupiters: i/ their direct detection and characterization down to the snow line with potential hints from Gaia or radial velocity studies, ii/ the study of the mass-luminosity & initial entropy relation in connection with their formation and evolution processes, iii/ the exploration of chemical and cloud composition mapping and variability in connection with the global 3D atmospheric circulation, iv/ the physics of accretion for young protoplanets in connection with their circumplanetary disks, v/ the 3D orbital and dynamical characterization, including the measurement of the obliquities of young giant planets, and ultimately vi/ the potential first discoveries of young exomoons.

§.§ Understanding the nature of sub-Neptunes and super-Earths

The broad diversity of exoplanets discovered so far is starting to reveal demographic trends that provide the first clues to understanding the underlying planet formation and evolution processes.
Focusing on small planets (R < 4 R_⊕), Kepler revealed that about 25% of Sun-like stars in the Milky Way host planets with no counterpart in the solar system: super-Earths (R=1–2 R_⊕) and sub-Neptunes (R=2–4 R_⊕) with orbital periods shorter than 100 days <cit.>. This small-planet population presents a bi-modal radius distribution <cit.> known as the "radius valley". Canonically, the dearth of planets between 1.4 and 2.0 R_⊕ has been explained as a transition region between planets that held on to a primordial H_2/He envelope (sub-Neptunes) and stripped rocky cores (super-Earths). Atmospheric loss models reproduce the correct position of the valley <cit.>. However, water-rich planets are expected to populate the regions close to the star due to type I migration and do not fit into this simple paradigm <cit.>. This indicates that our knowledge of the true nature of those planets is still limited. Using a refined sample of small planets, <cit.> have recently proposed that, for M dwarf hosts, the radius valley is a consequence of interior composition rather than an indicator of atmospheric mass loss. These planets seem to be distributed into three main populations: 1) those that follow a bulk density comparable to Earth's, 2) those with cores consisting of rock and water-dominated ices in 1:1 proportion by mass, and 3) planets with a significant envelope made of H/He. Formation models including type I migration support the change in paradigm that these observations propose: the valley separates dry worlds from water worlds rather than rocky planets with or without H/He envelopes. Proving that these results hold true for FGK stars would transform our interpretation of the Kepler planet demographics. To this end, we need more precise masses for a handful of high-value targets. However, mass and radius alone will still not be enough <cit.>, and ultimately the characterization of the atmospheres will be needed to break the degeneracies in the internal composition modeling. The range of possible atmospheric compositions of rocky Earths, super-Earths, and sub-Neptunes is not well known and is the subject of strong debate <cit.>. This is because the composition of the atmosphere is determined by the insolation history of the system, and also by the interplay between the atmosphere and the surface or magma beneath, factoring in the solubility of volatile substances <cit.>. While research has largely centered on the interaction of molten metals and silicates with hydrogen-rich atmospheres <cit.>, little is known about how magma oceans in planets with large water mass fractions interact with their atmospheres. Planets that experience intense radiation could maintain high surface temperatures even if they have water-rich atmospheres <cit.>. These magma oceans could act as large reservoirs that keep volatiles from being stripped away during the early, active life of the host stars, but this has not been tested observationally. Mini-Neptune atmospheres are somewhat easier to characterize due to the relatively large extent of their atmospheres, induced by the remaining primordial gas (H/He). For example, water absorption has been detected in the atmospheres of K2-18b <cit.> and HD 106315c <cit.>. The best way to unlock the atmospheres of these inaccessible small planets is via infrared phase-curve studies. Phase curves have been measured for several dozen hot Jupiters <cit.>, but have seldom been attempted for smaller planets, with some exceptions: 55 Cnc e, LHS 3844 b, and K2-141b.
Observations of 55 Cnc e (1.9 R_⊕) revealed a phase curve whose peak brightness is offset from the substellar point, possibly indicative of atmospheric circulation <cit.>; however, its observed variability is still poorly understood <cit.>. K2-141b is a molten lava world on an extremely short orbit showing hints of a vaporized atmosphere <cit.>. Although no results have been published for small rocky planets yet, the capabilities of JWST are pivotal for phase-curve studies of small planets. Still, the potential for JWST to unlock the composition of such atmospheres may be quite limited <cit.>. ANDES, however, will be able to probe these planets in search of metallic atmospheres and of water and other atmospheric species that can reveal the existence or not of water worlds, and the general nature of sub-Neptune to super-Earth planets. A large, well-characterized sample (see Figure <ref> for potential ANDES targets) is needed to constrain theories for how these planets form and retain atmospheres, which is in turn crucial for understanding the habitability of their more temperate siblings.

§ THE STUDY OF PROTOPLANETARY DISKS

The majority of the known planets are believed to have formed within ∼15–20 au from their host star, followed by radial migration inwards due to planet-disk interactions during the gas-rich disk phase (e.g., <cit.>). Consequently, knowledge of the composition and spatial distribution of atomic and molecular gas in the inner <20 au of young (<10 Myr) circumstellar disks is an essential step towards understanding the formation and evolution of planetary systems. Gas in this inner disk region is dissipated inside-out, as material is accreted onto the central star and removed through winds. At present, we are not able to spatially resolve the relevant regions (< 100 mas at 150 pc). Therefore, gas excitation and spatial distribution information is obtained only through high-resolution spectroscopy of species probing different conditions. The main scientific objective of ANDES in the field of protoplanetary disks (PPDs) will be to settle the properties of the gas in the inner star-disk region, where different competing mechanisms of disk gas dispersal are at play, namely magnetospheric accretion, jets, and photo-evaporated and magnetically driven disk winds. High spectral (R∼ 100 000) and spatial (∼ 10 mas) resolution are needed to disentangle and study the different phenomena in detail. The derivation of the physical properties and composition of the gas in this region will put constraints on the mechanisms through which the forming star acquires mass and the angular momentum is removed from the system, and also on the initial conditions for planet formation. The main targets will be circumstellar disks of young stellar objects at different ages, to trace the disk evolution, ranging from protostellar objects detectable only in the near-IR (age ∼ 10^5 yr) to more evolved disks (age up to ∼ 10^7 yr), i.e., disks whose inner region has already been significantly cleared of dust. Stars with different masses and metallicities will be observed. Here, we describe in more detail the specific scientific goals in the field of PPDs to be reached with the broad spectral range of ANDES.
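As a quick reference for the scales just quoted, the following minimal sketch converts the spectral resolving power into a velocity resolution and the angular resolution into a physical scale at a typical star-forming distance (the numbers are those used in this section):

C_KM_S = 299792.458  # speed of light [km/s]

def velocity_resolution_kms(resolving_power):
    """Velocity resolution element of a spectrograph of resolving power R."""
    return C_KM_S / resolving_power

def physical_scale_au(angle_mas, distance_pc):
    """Projected scale subtended by angle_mas at distance_pc (small-angle limit;
    1 arcsec at 1 pc corresponds to 1 au)."""
    return (angle_mas / 1000.0) * distance_pc

print(velocity_resolution_kms(100_000))  # ~3 km/s at R ~ 100 000
print(physical_scale_au(10, 150))        # 10 mas at 150 pc ~ 1.5 au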
§.§ Gas properties in the inner disk regions

Disk winds, both photo-evaporated and magnetically driven, and bound gas in Keplerian rotation in the inner disk region can be efficiently traced by forbidden lines of atomic and weakly ionized species at low radial velocity (≲ 10 km s^-1, the so-called low-velocity component, LVC; e.g., <cit.>; Fig. <ref>). The various lines from disk winds are typically blue-shifted. The shift in the peaks is only 2–3 km s^-1, which sets the requirement of R≳100 000. High-resolution data obtained with today's spectrographs show that, superimposed on the disk-wind emission component, there is an additional component, peaking at the systemic velocity, that could be associated with bound gas in the disk (Nisini et al. 2023, submitted; and references therein). Again, a resolution of ∼100 000 is necessary to deblend the two different components efficiently. With the ANDES sensitivity, it will be possible to separate the two components in a variety of forbidden lines tracing different excitation conditions and thus derive the physical parameters, such as density, temperature, and ionization fraction, that are of paramount importance to disentangle the different wind models and to quantify their impact on disk evolution and gas dispersal. The integral field spectroscopy (covering, e.g., the [O i] lines at 557 and 630 nm and the [S ii] lines at 406 and 673 nm) will allow the use of spectro-astrometric methods to resolve the spatial regions involved. These regions are at significantly smaller scales than the instrumental resolution (a few au), as demonstrated by, e.g., <cit.> and <cit.>. Besides forbidden lines, molecules are also expected to be abundant in the gas phase at larger distances than the atomic emission, or to originate in deeper layers of the optically thin gaseous disk. They will be sufficiently excited to produce rovibrational features in the infrared, providing important information on the thermochemical structure of the disk, which in turn might have some influence on the final composition of the atmospheres of inner giant planets. The main molecular features that can be observed in the near-IR are the overtone CO bands at ∼2.3 μm, the H_2 ro-vibrational lines, and a variety of weak H_2O and OH emission lines in the K-band, which is an ANDES goal. Other atomic features that trace the very inner and hot regions of PPDs, like Na i at 2.206 and 2.208 μm and Ca i at 2.26–2.4 μm, can also be observed in the K-band. The atomic and molecular diagnostics are characterized by different emitting temperatures, from 1500 K (e.g., H_2O) up to temperatures higher than 3000 K (e.g., CO), and originate at different disk radii. These lines are observed in emission in strong accretors, such as class I protostars or highly active T Tauri stars, and as such can give important clues on the influence of irradiation from accretion UV photons in planet-forming regions. Moreover, as the near-IR traces the hotter molecular line-emission regions, these kinds of observations will be a natural complement to observations of the fundamental transitions of the same molecules performed in the mid-IR by JWST and METIS.

§.§ Angular momentum extraction by jets and molecular winds

A long-standing problem in the study of PPD evolution is how angular momentum is transported in order to allow matter to be accreted by the central star.
There are two main mechanisms by which accretion is driven inside a PPD: 1) in a viscous disk, turbulent viscosity (often referred to as α-viscosity) leads to a redistribution of angular momentum towards the outer disk, which in turn allows gas to accrete onto the central star (see, e.g., <cit.>, and references therein); 2) MHD-wind-driven accretion, in which a large-scale magnetic field threads the disk: the gas follows these magnetic field lines and removes angular momentum, which in turn allows for accretion (see, e.g., <cit.>, and references therein). Both these evolution paradigms are, in principle, able to reproduce the observed accretion rates and other constraints, but they produce very different dynamics and, more importantly, very different evolutionary paths for PPDs <cit.>. From an observational point of view, it is not clear whether the level of turbulence in PPDs is sufficiently high to drive accretion <cit.>. Likewise, it needs to be clarified if, how, and to what extent MHD winds can efficiently extract angular momentum from the disk and thus drive accretion. The only way to verify the wind-driven accretion scenario is to measure the amount of mass and angular momentum taken away by the outflows, and this can be efficiently done only with an instrument like ANDES, which combines high angular and spectral resolution with high sensitivity. In particular, spectra of T Tauri stars show, in addition to the LVC, forbidden-line emission at high velocity, i.e., larger than 40 km s^-1, which is associated with the high-speed, collimated jets that are often observed at large distances from the star (e.g., <cit.>; Fig. <ref>). Collimated jets are the mechanism through which the accreting system gets rid of the excess angular momentum, thus evolving at rotational velocities below the break-up velocity. The relevant information to understand the mechanism of jet formation is stored within a region with a spatial scale of around 10 au from the star, requiring an IFU facility with a ∼100 mas FoV. ANDES will allow us to distinguish between the various jet-launching scenarios (e.g., from the stellar surface, the magnetosphere/disk interface, or the above-mentioned disk winds; <cit.>), which predict different spatial scales for the emission. In addition, with the IFU, it will be possible to constrain the rotation of the star-disk system and the consequent removal of angular momentum by the jet (e.g., <cit.>, <cit.>). Some evidence that collimated jets are indeed able to remove angular momentum from the inner (<1 au) regions of PPDs has been given by <cit.> with HST/STIS observations of optical jets and by <cit.> with ALMA SiO observations of an embedded protostar. The velocity gradient between the two sides of the rotating jet, which needs to be resolved spatially, is expected to be of the order of a few km s^-1, which is, therefore, the velocity accuracy needed to perform these observations. ANDES will be the perfect instrument to study this aspect through IFU observations of the [O i] line at 630 nm in the optical and the [Fe ii] line at 1.64 μm in the H-band. Moreover, possible observations in the K-band (an ANDES goal) will also be important to probe the molecular component of the flow. In particular, high-resolution spectral imaging of the bright H_2 line at 2.12 μm, a tracer of molecular winds, will provide information on the role of these molecules in the global evolution of PPDs (e.g., <cit.>).
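To illustrate the few-km s^-1 velocity accuracy quoted above: the rotation signature is typically extracted as the difference between the line centroids measured on either side of the jet (or wind) axis, and with high SNR a centroid can be located well below the instrumental resolution element. The sketch below uses synthetic Gaussian profiles and is purely illustrative, not an ANDES pipeline.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def centroid_shift_kms(v, profile_a, profile_b):
    """Velocity difference between two emission-line profiles [km/s]."""
    pa, _ = curve_fit(gaussian, v, profile_a, p0=[profile_a.max(), 0.0, 5.0])
    pb, _ = curve_fit(gaussian, v, profile_b, p0=[profile_b.max(), 0.0, 5.0])
    return pa[1] - pb[1]

# Synthetic profiles on the two sides of the jet axis, 2 km/s apart,
# sampled at 1.5 km/s (about half the ~3 km/s element at R ~ 100 000).
rng = np.random.default_rng(1)
v = np.arange(-50.0, 50.0, 1.5)
side_a = gaussian(v, 1.0, +1.0, 10.0) + rng.normal(0.0, 0.02, v.size)
side_b = gaussian(v, 1.0, -1.0, 10.0) + rng.normal(0.0, 0.02, v.size)
print(centroid_shift_kms(v, side_a, side_b))  # ~2 km/s recovered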
For the first time in the IR regime, it will be possible to measure the mass loss through molecular winds and the angular momentum loss, via the measurement of wind rotation signatures. Detecting the rotation of MHD winds launched in the 1–10 au disk regions, such as those traced by the H_2 NIR lines, will be fundamental to validate the recent models of wind-driven PPD evolution against the so-far standard paradigm of viscous disk evolution. An example of a molecular wind as observed by SINFONI at the VLT (at an angular resolution of 0.12") is given in Fig. 1 of <cit.>, while a spectrum of the H_2 line at 2.12 μm obtained with GIARPS (R∼45 000) at the Telescopio Nazionale Galileo is plotted in Figure <ref>. The dynamical properties of the wind imaged by SINFONI cannot at present be disentangled, because that would require a spectro-imager with a spectral resolution high enough to discriminate its kinematic behaviour morphologically. The spectrum shown demonstrates that a resolution of the order of 100 000 is needed to resolve the line and determine any velocity displacement at the level of a few km/s due to the outflow rotation. The ANDES performance (angular resolution of 15 mas and spectral resolution of R∼100 000) will allow us to obtain in one shot both spatial and spectral information at a much higher resolution than currently possible, to exploit the morphology and kinematics of the wind-launching region. By combining these K-band data with ANDES observations in the H-band, it will be possible for the first time to directly compare the molecular and atomic wind components, providing a global picture of the wind-driven scenario in young stars (see Sect. <ref>).

§.§ Accretion disks of embedded sources in different environmental conditions

Recent surveys performed with the X-shooter (at the VLT) and GIARPS instruments on samples of T Tauri stars in nearby clouds have provided a detailed characterization of the accretion and stellar properties of populations of young low-mass stars (e.g., <cit.>). These studies are now starting to probe the accretion evolution and how it influences the evolution of the circumstellar disk. ANDES can extend these kinds of studies in different directions: 1) probing accretion in younger and embedded sources, where present instrumentation is not sensitive enough to reach a sufficiently high S/N ratio; 2) exploring the accretion phenomenon down to the brown-dwarf regime; 3) studying if and how accretion depends on different environmental conditions, such as metallicity, density, and photoevaporation. On the one hand, characterizing the stellar and disk properties of young embedded accreting protostars (class 0/I) is extremely challenging due to their large extinction and the strong IR excess coming from their circumstellar envelope. Currently, weak photospheric features and emission lines from the inner disk have been detected in K-band low/medium-resolution spectra of small samples of very bright class 0/I sources with moderate IR excess (e.g., <cit.>). However, no measurements have been performed on fainter sources, for which high-sensitivity spectra at high resolution in the IR are needed to resolve the photospheric lines (e.g., at 2.1–2.3 μm) useful to derive the spectral type and veiling. Moreover, the K-band contains several emission lines that are useful extinction indicators <cit.>. This information, together with the observations of other accretion diagnostics detectable in the K-band, like Brγ (at around 2.17 μm), will enable us to fully characterize for the first time large samples of protostars (see Sect. <ref>).
On the other hand, studies in several young clusters of the Large and Small Magellanic Clouds and the outer Galaxy (<cit.>, <cit.>, <cit.>, <cit.>) suggest that metal-poor stars accrete at higher rates compared to solar-metallicity stars in nearby Galactic star-forming regions; they also suggest that the stellar density of the forming cluster environment can affect the mass accretion rate and thus the star formation process itself. It has also been argued on theoretical grounds that the efficiency of circumstellar disk dispersal depends on stellar metallicity, i.e., the formation of planetesimals around stars may be faster in a high-metallicity environment (<cit.>). Therefore, simultaneous measurements of accretion rates and metallicity in young stellar objects in different environments are crucial. Accretion luminosity can be measured from permitted lines, such as Hβ, Ca ii, and Paγ, or from the Balmer jump between 340 and 370 nm. This measurement does not need high spectral resolution. However, to derive the mass accretion rate from the accretion luminosity and to compare accretion properties with the properties of the central star, such as mass, age, luminosity, and metallicity, one must also observe the narrow photospheric features sensitive to spectral type, veiling, and metallicity. In the case of embedded and highly veiled young stellar objects, high spectral resolution is needed to extract the weak photospheric features from the strong continuum. For these latter sources, observations in the IR, in the H and K bands in particular, are especially useful because of the presence of several suitable diagnostics (see, e.g., <cit.>, De Marchi et al. 2023, submitted). The targets of interest will be faint in the optical and located in regions where it will be difficult to find bright close-by sources for driving the AO. They therefore require an AO system sensitive to stars fainter than about 20 mag in R. As in the other science cases, these kinds of observations will be a natural complement to observations performed in the mid-IR by JWST and METIS.

§.§ Observing planets as they are being formed

Theoretical models of planet formation are built using only indirect observational constraints (i.e., the initial conditions of the PPD and the observed planetary systems around stars as an outcome). The formation process itself has mostly remained observationally unconstrained, and its study has thus largely been restricted to the realm of theory. With the high spatial and spectral resolution of ANDES, this will change, and the characterization of the formation process will become feasible. Here, we present two observational goals that will allow us to place constraints on the planet formation process itself and advance our understanding of planet formation.

§.§.§ Giant planet formation: Accretion mechanisms and planetary magnetic fields

The formation of giant planets involves a phase of strong gas accretion, which leads to a much higher luminosity than at later stages due to the large amount of gravitational potential energy released during this process. Planet formation models under the paradigm of core accretion predict that forming protoplanets undergo two main formation phases <cit.>. First, during the attached phase, the protoplanet is still fully embedded in the disk and has a low mass and a large radius (approximately equal to the Hill radius). At this stage, the protoplanet has a low effective temperature of around 200 K and a low luminosity of ∼ 10^-6 L_⊙.
As the planet grows, it eventually enters runaway gas accretion at ∼ 100 M_⊕ and starts to contract and detach from the nebula. A circumplanetary disc (CPD) forms, and gas falls at high velocity either from the Hill sphere or from the CPD onto the protoplanet, where it shocks, which again leads to high luminosities of 10^-4 to 10^-1 L_⊙. This phase lasts from several 10^5 up to a few 10^6 years. During this so-called detached phase, accretion tracers such as Hα (656.3 nm), Paschen β (1282 nm), and Brackett γ (2166 nm, in the K-band) should be observable. In Figure <ref>, an example of the spectral energy distribution of an accreting giant planet is shown as predicted by theoretical models from <cit.>. The Hα emission of accreting protoplanets has been confidently observed in one system (PDS 70 b <cit.>), with other studies reporting null <cit.> or candidate (e.g., AB Aur <cit.>) detections. Quantifying the Hα emission, particularly by disentangling the planetary and stellar components, could provide constraints on the thermal and dynamical structure of the accretion flow <cit.>. Furthermore, having spectra of accreting protoplanets could help answer the fundamental question of whether the start is hot or cold (i.e., whether the accretion shock luminosity is incorporated into the protoplanet or radiated away efficiently in a supercritical shock <cit.>). Knowledge of these starting conditions has strong implications for the luminosity-mass relation of young protoplanets, which depends strongly on them <cit.>. Not only can giant planet formation lead to high luminosity; the collisional afterglow of low-mass to intermediate-mass planets is also expected to be detectable with next-generation instruments <cit.>.

During and shortly after the formation process of giant planets, the magnetic field of the protoplanet is expected to be much stronger than at later times due to the high luminosity. Theoretical models and scaling relations predict high magnetic fields of ∼ 100 - 1000 Gauss for these objects <cit.>. For comparison, Jupiter has a magnetic field strength of ∼ 4 Gauss. The expected magnetic field strengths and temperatures are similar to those of M dwarfs, for which the measurement of the magnetic field strength has already been achieved by <cit.>, using the prominent FeH absorption band at 1 µm at a high spectral resolution of R∼100 000. The strength of the magnetic field would deliver the key to understanding how giant planets accrete gas. Strong magnetic fields would lead to magnetospheric accretion as in stars, whereas lower magnetic fields would lead to accretion through a boundary layer. The type of accretion is crucial for the hot vs. cold start issue. Global hydromagnetic simulations by <cit.> showed that young giant planets could launch outflows and jets like stars. Here again, the strength of the magnetic field is a fundamentally important quantity.

§.§.§ Observing the direct impact of forming giant planets on the PPD

The torque of the protoplanet acting on the gas will push back the gas in the outer disc, carving a gap of width ∼ R_Hill into the PPD <cit.>. Since giant planets are favored to form outside the water ice line, the gas there is too cold for significant VIS or NIR emission. However, taking into account the high luminosity of the accreting protoplanet, the gas in the vicinity of the protoplanet may be heated up to ∼ 1 000 K <cit.>.
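As a rough consistency check (a back-of-the-envelope estimate of our own, not taken from the cited models), Wien's displacement law locates the peak of the thermal emission of gas at this temperature at

λ_max = b/T = (2898 μm K) / (1000 K) ≈ 2.9 μm,

i.e., just longward of the K-band, with the emission already rising steeply across 2.0–2.4 μm; this is of the same order as the ∼2.5 μm peak quoted below.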
Both high spatial and spectral resolution might allow us to observe the modified spectral emission of the disc, whose peak emission (∼ 2.5 µm) falls close to the long-wavelength end of the K-band (∼2.4 µm). Furthermore, the accretion flow from the PPD onto the CPD is expected to be at least partially supersonic, which would shift the emission lines of embedded molecules like CO. Tracing the velocity field of ongoing gas accretion onto a giant planet would place observational constraints on the accretion structure, providing important evidence for the construction of advanced hydrodynamical models.

§ SCIENCE CASES IN THE K-BAND

The baseline design of ANDES does not include the K-band spectrograph, which is currently a goal. However, several science cases described above have already discussed the important, at times even essential, role that the K-band plays in enabling certain kinds of studies (see the various mentions in sections <ref> and <ref>). This is also reflected by the fact that the single most important band for high-resolution spectroscopy of exoplanets to date has been the K-band. It enabled the first molecular detection in an exoplanet at high spectral resolution, the first measurement of atmospheric wind speeds <cit.>, the first measurement of planetary spin rates <cit.>, the first brightness map of a brown dwarf <cit.>, and the first detection of an isotopologue in an exoplanet atmosphere <cit.>. All of these successes exploited the fact that the K-band is ideal for observing the strong, regularly spaced lines of the abundant and chemically stable CO molecule. The properties of CO as an excellent dynamical tracer <cit.> and the sensitivity increase of the ELT will enable us to pursue several novel, groundbreaking science cases. While the K-band is a goal, not part of ANDES' baseline design, we propose making ANDES a "CO machine", ensuring that the opportunities of K-band high-resolution spectroscopy will not remain untapped on the ELT. The transformational power of ANDES in the K-band lies in the study of atmospheric dynamics in gas giant planets, as already alluded to in Section <ref>. We will also discuss the role of CO as an anti-biomarker in terrestrial planets. However, due to the tens of nights that would have to be spent on a single target, we deem the small-planet science case less compelling for the K-band.

§.§ Close-in gas giant planets

Figure <ref> (upper panel) shows how high-resolution K-band transmission spectra obtained with ANDES will reveal the wind patterns in the atmospheres of hot Jupiters. For this estimate, we post-processed the three-dimensional temperature and velocity fields obtained from General Circulation Models in <cit.> for the prototypical hot Jupiter HD 189733b with the pRT-Orange transmission code (Mollière et al., in prep.). We then studied the average line shape by calculating the auto-correlation function of the spectrum. Even for a single transit, the effect of winds on the transmission spectrum can be extracted with high significance in the K-band (lower panel). The H-band is significantly worse because the CO lines are weaker, while in the M-band, the CO line measurements are affected by thermal background and low telluric transmission. Therefore, the K-band is the best prospect for accurately tracing the dynamics.[Other molecules that may be more readily detected in other bands, such as H_2O, are less direct tracers of atmospheric dynamics, as their abundances are also shaped by variations in temperature around the planet.
CO is ideal because of its thermal stability and strong, widely separated lines.]

The predictions in Figure <ref> were made assuming a spectral width of 0.006 μm in the K-band. However, CO's full wavelength range of interest in the K-band extends from 2.3–2.4 μm. Expanding to this wavelength range corresponds to an increase in the S/N by a factor of four. Considering this full spectral range of CO, we find that there are 64 transiting planets for which one can infer the wind properties, as shown in Figure <ref>. For this estimate, we restricted the sample of known transiting exoplanets to equilibrium temperatures > 1000 K (to ensure CO visibility in gas-dominated planets) and mass errors <30%, and made use of the so-called transit spectroscopy metric <cit.>. The stars were not checked for visibility from Cerro Armazones. Therefore, while the detailed analysis method to invert high-resolution spectra to obtain information on the wind patterns still needs to be worked out in the coming decade, we find that high-resolution K-band observations with ANDES will enable studying the wind patterns of many tens of transiting planets with high precision. This data set will be revolutionary for understanding the circulation in gas giant planets and the underlying physical phenomena that control it, potentially also shedding light on their internal evolution <cit.>. We note again that the numbers presented here (many tens of planets) are similar to those presented in Section <ref> (∼100 planets), but not directly comparable, since the sensitivity analysis was carried out differently. Moreover, the number quoted in this section relies on the K-band alone, while in Section <ref> the K-band was included in addition to other bands. These results both point to the K-band playing a crucial role for understanding the atmospheric dynamics of gas giants. At the same time, studying how the results in Section <ref> depend on the inclusion of the K-band would be beneficial.

In addition to transmission spectra, we can place powerful and complementary constraints on the atmospheres of these planets by measuring their emission spectra. These spectra probe a higher pressure regime than transmission, can be obtained for both transiting and non-transiting planets (greatly expanding the number of planets we can characterize), and allow us to measure both the day- and night-sides of the planets, given a sufficient signal-to-noise ratio. Current high-resolution spectroscopic characterization measurements of hot Jupiters have demonstrated sensitivity to the three-dimensional thermal structure of these planets <cit.>, and to spatial variations in winds, probed as a function of longitude <cit.> or through different molecular species <cit.>. Since the Doppler shifts in high-resolution spectra are a function of both the thermal and wind spatial structures <cit.>, the robust properties of CO, as described above, again make it a powerful tracer of dynamics in emission spectra.

§.§ Mapping directly imaged planets and brown dwarfs

Characterizing the two-dimensional brightness distribution of brown dwarfs and directly imaged exoplanets would represent a fundamental advance for the theory of clouds, convection, and magnetic fields of substellar hydrogen-dominated bodies, and more globally for our understanding of planetary formation and evolution.
The latter is because improving our understanding of convection would allow for better planetary structure models, which feed into planet evolution models that ultimately depend on the heat planets retain from formation <cit.>. Due to its strong lines in the K-band, carbon monoxide will enable us to map the surface brightnesses of directly imaged planets and brown dwarfs. This is done by employing the so-called Doppler imaging method, which tracks how temperature heterogeneities (spots or clouds) rotate in and out of view during an observation. Brown dwarfs and directly imaged planets rotate with equatorial speeds of up to 20–30 km s^-1, so resolving the motion of spots across the targets' visible disks requires high spectral resolution. Effectively, the maps are constructed from time-dependent variations in the CO line shapes. This was first demonstrated by <cit.> for Luhman 16 B, an L-T transition object in a brown dwarf binary system. The two brown dwarfs are the closest known to date (2 pc), which made the mapping possible already with CRIRES at the VLT. As demonstrated in a follow-up paper, the K-band is the band of choice when mapping the surfaces of brown dwarfs and exoplanets, and this will only really be possible in the era of 30-m (and above) class telescopes <cit.>. More specifically, the ANDES K-band spectrometer would unlock tens of brown dwarfs for surface mapping and a few directly imaged exoplanets, the most accessible being β Pic b. For further illustration, we show the S/N of the flux contribution of CO lines (which cause absorption features) in β Pic b and Luhman 16 B in the right panel of Figure <ref>. The K-band clearly dominates over the CO lines accessible in the H- and M-bands.

§.§ Ancillary gas giant science in the K-band

In addition to tracing dynamics, the ANDES K-band will enable a population study of the CO and ^13CO content of gaseous exoplanets, both of which are important formation tracers <cit.>. CO is particularly interesting because it is the major carbon-carrying volatile species wherever it is chemically favored <cit.>, and it is highly stable. For formation constraints, elemental abundance ratios are of interest (e.g., C/O, N/O), for which the abundances of other species must be inferred (H_2O, CH_4, HCN, C_2H_2, NH_3, PH_3, CO_2, ...). Detecting these species would also be crucial for informing chemical disequilibrium models <cit.>. Some of them are accessible in the K-band or can be seen in the bluer near-IR bands of ANDES (YJH), which would be observed simultaneously with the K-band. As mentioned above, planetary emission spectra of close-in planets will be accessible at any time in the ELT era (so no eclipse scheduling is necessary) as long as the planet moves fast enough to distinguish it from telluric and stellar lines <cit.>. This will allow ANDES to quickly build up a large library of planetary emission spectra to be analyzed with high-resolution retrieval methods <cit.>. Such ground-based observations are, therefore, highly complementary even to current space-based telescopes such as JWST, which require scheduling for transits and eclipses or long continuous staring for phase curves.

§.§ Atmospheric characterization of small HZ planets

We also tested whether planetary CO can be detected in the K-band in stellar light reflected off terrestrial, habitable zone planets. While the classical biomarker molecule O_2 is not visible in the K-band, CO is an important molecule for ruling out false-positive detections of bioactivity.
Other molecular species that could contextualize an O_2 detection are better observed in other bands, at least for systems with M-dwarf hosts and considering transmission spectra <cit.>. CO is a useful anti-biomarker since it traces the abiotic production of O_2 driven by the photolysis of CO_2, which occurs most prominently for M-dwarfs <cit.>. We studied the case of Proxima Cen b, the closest habitable zone planet outside the solar system, and assumed a CO_2-dominated atmosphere with clouds that increase the reflected light visible from the planet. To estimate the detectability of CO, we first computed three-dimensional Global Climate Model simulations <cit.>, then used a reflected-light line-by-line radiative transfer model to compute high-resolution spectra at maximum elongation, and finally estimated the S/N of a CO detection using a cross-correlation approach. We found that about 15 nights of observations are sufficient to detect CO at 5σ if a CO/CO_2 ratio between 10^-2 and 10^-4 is assumed. In principle, CO has lines in the H-, K-, and M-bands[As we argued above, the K-band is the best band for giant planets.]. However, for CO/CO_2 < 10^-2, CO is not visible in the H-band. For CO/CO_2 > 0.01, however, which corresponds to a higher O_2 mixing ratio in the CO_2 photolysis scenario, we find that the K-band's CO lines become optically thick and saturate, and are thus no longer a viable pathway for detecting CO. In this high CO mixing ratio case, the H-band allows us to detect the weak CO lines in about 35 nights. In fact, the smaller diffraction limit in the H-band allows us to probe Proxima Cen b at shorter angular separations and thus at more favorable orbital phase angles (with an expected contrast gain of a factor 2-3), which brings the required number of nights back down to 12-17, somewhat comparable to the K-band estimates. Moreover, a detection of CO in the H-band can be confirmed in the K-band through the non-detection of the planet there, given that CO at high mixing ratios saturates the band.

We therefore argue that, to ensure a CO detection if the molecule is present in the planet's atmosphere, we require the K-band in addition to ANDES' H-band, because the detectability of CO depends strongly on atmospheric properties such as the CO/CO_2 ratio, the surface pressure, the cloud coverage, the cloud albedo, and the cloud top pressure, which all control the depth of the CO lines in the H- and K-band reflected light spectrum. We thus conclude that the K-band may be a viable option for detecting CO as an anti-biomarker in the reflected light of exoplanets, but that an in-depth study of the various conceivable atmospheric cases would help to fully assess this goal's feasibility. The emitted-light case in the M-band should also be studied; however, our first tests show that it may be unfeasible due to the lower angular resolution, stellar flux suppression, and increased telluric thermal background and absorption. We also note that, due to the broad (partially saturated) line cores, a resolution of R = 20,000 is sufficient to detect CO in the K-band. Thus, observations at the intended resolution of ANDES may only be necessary if the high resolution is required to distinguish planetary from telluric lines. We note that the line core absorption also depends on the atmospheric CO abundance.
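To make the cross-correlation estimate above concrete, the sketch below is our own minimal illustration of the common approach, not the actual analysis code behind the quoted 15-night estimate; the wavelength grid, residual spectrum, and CO template (all hypothetical inputs here) would come from the radiative transfer model. The residuals are cross-correlated with a Doppler-shifted template over a velocity grid, and the peak height is expressed in units of the off-peak scatter:

import numpy as np

C_KMS = 2.99792458e5  # speed of light in km/s

def ccf_detection_snr(wave, residuals, template, v_grid_kms):
    """Cross-correlate residual spectra with a Doppler-shifted template.

    wave       : wavelength grid [micron], increasing
    residuals  : observed spectrum after stellar/telluric removal
    template   : model CO spectrum on the same wavelength grid
    v_grid_kms : trial radial velocities [km/s], wider than +/-50 km/s
    Returns the CCF and a detection S/N (peak over off-peak scatter).
    """
    ccf = np.zeros(len(v_grid_kms))
    for i, v in enumerate(v_grid_kms):
        # Shift the template by v and interpolate back onto the data grid
        shifted = np.interp(wave, wave * (1.0 + v / C_KMS), template)
        shifted -= shifted.mean()
        ccf[i] = np.dot(residuals - residuals.mean(), shifted)
    # Estimate the noise from velocities far from the expected planet velocity
    away = np.abs(v_grid_kms) > 50.0
    snr = ccf.max() / np.std(ccf[away])
    return ccf, snr

In practice one would sum the CCFs of many exposures in the planetary rest frame before measuring the peak; the 50 km s^-1 exclusion window is an arbitrary illustrative choice.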
Lastly, while Proxima Cen b is the best case in terms of S/N, we argue that at least five planets may ultimately be amenable to K-band characterization: Proxima Cen b, Ross 128 b, GJ 273 b, Wolf 1061 c, and GJ 682 b, with exposure times 2-5 times longer than for Proxima Cen b. However, several of these targets are very close to the diffraction limit of ANDES in the K-band at maximum elongation, which will make efficient stellar light suppression challenging. As studying these planets is a considerable time investment, one option would be to first explore whether the planets can be detected in reflected light at all, which requires up to 10 times less exposure time. After that, these habitable zone planets may be observed for tens of nights (likely over multiple years) to establish the presence of (anti-)biomarkers in their atmospheres.

We also assessed the detectability of (anti-)biomarker molecules in habitable-zone terrestrial planets with transmission spectroscopy. For this, planets orbiting M-dwarfs are the most feasible candidates, with favorable transit depths and short orbital periods. The most promising planetary system for this is TRAPPIST-1, a 7-planet system around an M8V host star at 12.5 pc from the solar system, with up to four of its planets in the present-day habitable zone <cit.>. However, as reported in <cit.>, CO is best detected in the H-band in transmission spectra of habitable zone planets, requiring ∼100 transits. CO_2 may be detectable in the bluest part of the K-band (∼2.1 µm), but here too <cit.> argue that the H-band is the better option.

We also studied whether PH_3 may be detectable in the K-band, assuming a CO_2-dominated atmosphere. This is motivated by the claimed PH_3 detection on Venus, where it was interpreted as a potential biomarker <cit.>. For TRAPPIST-1 b, we found that PH_3 is only detectable (in 40 nights) when increasing the PH_3 abundance a hundredfold compared to the value of 10 ppb reported in <cit.>, making this an unlikely scenario, although according to <cit.> PH_3 could enter a runaway buildup and reach such abundances.

§.§ Protoplanetary Disks

As discussed in Section <ref>, various science cases are of interest to the protoplanetary disk community in the K-band, such as the characterization of young stellar objects via emission lines from the inner disk and weak photospheric features, or measuring the magnetic field strengths of T Tauri stars via Zeeman splitting (e.g., <cit.>, and references therein). Some of the most transformative science cases, which the K-band uniquely enables, concern the role of angular momentum loss in protoplanetary disks via molecular winds and the characterization of embedded protostars and their accretion disks in different environments (see Sect. <ref>). These studies are highly relevant for defining the timescales of accretion and disk dispersal during the early evolution and, more generally, for planet formation, as protoplanetary disks are the birthplaces of exoplanetary systems.

§ EXOPLANET SCIENCE CASES IN THE U-BAND

Currently, the ANDES baseline design extends down to 4000 Å, with the goal of extending the spectral coverage to 3500 Å, covering the U-band. The U-band strongly benefits the core exoplanet science case of characterizing atmospheres and detecting biosignatures for habitable-zone exoplanets, and it enables several additional exoplanet science cases.
The core exoplanet science case of studying habitable-zone exoplanets relies on the different planetary and stellar rest frames, moving at different radial velocities, to tell planetary and stellar absorption and emission signatures apart (see for example <cit.>). The atmospheres of hot Jupiters are successfully studied using this technique. However, the difference between the planetary and stellar rest frame velocities is much smaller for temperate planets, pushing the planetary and stellar signatures in time series of spectra closer together (see Fig. <ref>). If the surface of the host star could be expected not to change during ANDES observations, this would be unproblematic. However, especially the low-mass stars that ANDES will target for temperate-planet observations display stellar activity features even at old ages, which manifest themselves as changing spots and faculae as well as multitudes of small and sometimes larger flares <cit.>. In fact, a recent analysis of spectral lines in low-mass stars found that most of those lines show at least a minor response to changes in stellar activity <cit.>. For ANDES this means that stellar activity changes can be expected to produce a non-negligible signal in the stellar rest frame, which needs to be quantified and separated from the planetary signal in the adjacent planetary rest frame.

The most sensitive tracers of stellar activity accessible to ANDES are the Ca II H and K lines at 3933 and 3968 Å, i.e., in the ANDES U-band. We illustrate this in Fig. <ref> by comparing the spectral response of the Ca II K, Hα, and Ca II 8542 Å lines to a solar flare with an energy of ≈ 10^32 erg, as observed by the Swedish 1-m Solar Telescope <cit.> and described in <cit.>, with quiet-Sun observations from <cit.>. The former is an average taken over an area of approximately 2×2 arcseconds inside the lower brightened area marked in the first panel of the figure. Additionally, a scaled flare response is generated using the NESSI code <cit.>, where the same profile is assumed to take up 1% of the solar area; in this case, the Ca II K line is the only line in which a response is clearly visible. We note that in the stellar context, where one would observe the spectrum of the full stellar disk, this would correspond to a rather weak flare. Flares observed on cool stars tend to be an order of magnitude more energetic on average <cit.>; however, the ratios between the three aforementioned lines are expected to stay largely the same.

In the stellar context, several studies have shown that the sensitivity of Ca II H&K to even small activity changes cannot be matched by other chromospherically sensitive lines such as Hα, the Ca IRT, or the Na D lines <cit.>. Additionally, it has been shown that these lines are uniquely suitable for deriving the filling factor and distribution of different types of active regions <cit.>. Having strictly simultaneous access to Ca II H&K data with ANDES will strongly help isolate planetary absorption features and benefit the core exoplanetary science case.

In addition, the U-band will enable further exoplanet science cases: Due to Rayleigh scattering, clear exoplanet atmospheres will produce an upward slope towards the blue end of the spectrum. Techniques like the chromatic Rossiter-McLaughlin effect <cit.> or chromatic Doppler tomography <cit.> can be used to test for such blue skies with data from the ANDES U-band.
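For orientation, the strength of this Rayleigh slope in transmission can be written in terms of the atmospheric scale height (a standard relation from transmission-spectroscopy theory, not a result of the works cited here): for a scattering cross-section σ ∝ λ^α, with α = -4 for Rayleigh scattering,

dR_p/d ln λ = α H = -4 k_B T / (μ g),

so a clear atmosphere raises the apparent planetary radius toward the blue by several scale heights H across the ANDES wavelength range; this chromatic radius change is precisely what the techniques named above are sensitive to.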
The presence of clear skies can be used as a consistency check for the interpretation of biosignature detections; most biosignatures should not produce a detectable signal in the case of cloudy or hazy skies.

Ozone, a possible biosignature in exoplanet atmospheres <cit.>, produces a broad absorption band that intensifies sharply towards near-ultraviolet wavelengths. In the ANDES U-band, the wavelength region shorter than 3900 Å is the most relevant one for this. Since ozone does not produce sharp absorption features here but rather a broad absorption band, a possible detection method with ANDES is the chromatic Rossiter-McLaughlin effect or chromatic Doppler tomography.

Lava worlds, i.e., hot rocky exoplanets, may produce absorption in the Ca II H&K lines themselves. The expected mechanism is exospheres absorbing at these wavelengths, for example, due to sputtered species arising from the rocky surface, or material degassed from a magma ocean <cit.>. So far, searches for this effect have mainly focused on 55 Cnc e, but have not yet been successful <cit.>. ANDES U-band spectra would efficiently probe this scenario.

§ ADDITIONAL EXOPLANET SCIENCE CASES

§.§ Radial Velocity Mass Measurements

One of the scientific objectives of ANDES is to detect and characterize exoplanets by measuring their induced radial velocity signals. ANDES will achieve a minimum radial velocity precision of 1 m s^-1, with a goal of 10 cm s^-1. A precision of 1 m s^-1 is sufficient to detect the signals of super-Earths and mini-Neptunes orbiting Sun-like stars and of Earth-mass planets orbiting in the habitable zone of low-mass stars <cit.>. Reaching the goal of 10 cm s^-1 would make it capable of detecting the signals created by Earth-mass planets orbiting in the habitable zones of Sun-like stars or by Mars-like planets orbiting low-mass stars <cit.>. Figure <ref>, left panel, shows the expected detection limits of ANDES for the case of a G2-type star (1 M_⊙) and an M2-type star (0.3 M_⊙). With 1 m s^-1 precision, ANDES will be able to detect planets at the low end of the mass distribution of known exoplanets. A precision of 10 cm s^-1 would make it capable of investigating a still unexplored region of the parameter space.

Improving the determination of exoplanet masses makes it possible to constrain their interior structure and formation history <cit.>. Currently, the mass precision for many transiting exoplanets is limited by the RV precision of the data, which in turn is set by the apparent brightness of the host stars. Paired with the massive collecting area of the ELT, ANDES will be able to measure the masses of transiting exoplanets with enough precision to constrain their interior structure, even for planets orbiting faint stars. It will be capable of measuring the masses of Earth-like exoplanet candidates detected by space missions such as PLATO <cit.>, which are expected to be located in systems at moderate distances from the Sun. ANDES will be able to confirm candidate signals of Earth-like exoplanets detected by ground-based surveys such as the Terra Hunting Experiment <cit.> by exploiting its superior data quality. It will also be capable of characterizing the population of low-mass exoplanets in the solar neighborhood, even those orbiting very faint stars. With a precision of 1 m s^-1, ANDES will be able to fully populate the current low-mass regime by studying fainter targets.
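The magnitude gain entering the ESPRESSO-to-ANDES scaling below can be estimated from collecting area alone (a back-of-the-envelope estimate of ours, ignoring differences in throughput and spectral coverage): at fixed exposure time, the photon-limited S/N scales with the square root of the collecting area, so the same RV precision is reached at magnitudes fainter by

Δm = 2.5 log_10(D_ELT^2 / D_VLT^2) = 5 log_10(39 m / 8.2 m) ≈ 3.4 mag,

which is consistent with the V-band limits quoted in the next paragraph.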
Scaling from the capabilities of ESPRESSO@VLT <cit.>, ANDES@ELT will be able to reach 1 m s^-1 precision for solar-type stars up to V=15 mag and for M-dwarfs up to V=16 mag[According to the S/N relationships presented in <cit.>, scaled by the collecting area of the ELT.]. This difference already provides more than 200 known candidate systems in which ESPRESSO cannot achieve a precision of 1 m s^-1, including 24 systems hosting Earth-size planets[Based on data from the NASA Exoplanet Archive (2023/09/28).]. With a precision of 10 cm s^-1, ANDES would be able to explore an empty region of the parameter space up to magnitude V=10 mag for solar-type stars and V=10.8 mag for M-dwarfs (Figure <ref>, right panel). Currently, there are 28 known systems for which ANDES would be the only instrument that could achieve a precision of 10 cm s^-1.

§.§ Stellar pulsations

The physical properties of exoplanet host stars are fundamental to understanding the nature, origin, and evolution of their planets. To start with, planetary radii derived from transit observations are expressed in terms of the host star radius, while planetary masses derived from radial velocity measurements depend on the host star's mass. In addition to mass and radius, the abundance of C, O, and rock-forming elements like Mg, Si, and Fe determines the mineralogy, structure, geodynamics, and potential habitability of rocky planets. Furthermore, planet formation models predict that the frequency of certain types of planets depends on the host star's metallicity and mass. Planetary periods and eccentricities might also be related to the presence of heavy elements in the stellar host.

This is especially critical for M dwarfs <cit.>. First, they are faint in the optical, so high signal-to-noise spectra tend to be limited by telescope size. Furthermore, the temperatures of M dwarf atmospheres are cool enough to form diatomic and triatomic molecules, which create thousands of spectral lines that are poorly known and, moreover, often blend with each other. Therefore, many of the spectral lines traditionally used in the spectral analysis of solar-type stars are blended or not present, while the spectral synthesis of M dwarfs suffers from incomplete knowledge of many molecular data as well as from the limited accuracy of the atmospheric models used.

Asteroseismology, which makes use of the radial and non-radial pulsations of stars permeating the stellar atmosphere, is a powerful tool for providing independent constraints on the fundamental parameters of stars along with their internal structure. It is based on comparing patterns of observed oscillation frequencies with theoretical predictions from stellar evolution models. Many applications of asteroseismology of relevance to exoplanet host stars have been reported. For example, using 29 months of Kepler data, <cit.> derived the radius of Kepler-10 b with an unprecedented uncertainty of only 125 km. The authors also confirmed that Kepler-10 was (at that moment) the oldest known rocky-planet-harbouring system (10.41 Gyr). If the planet transits, asteroseismology gives the orbital eccentricity <cit.>. Furthermore, the stellar rotation axis derived from asteroseismology can be combined with the inclination of the planetary orbital plane to yield the host star's obliquity directly <cit.>. See <cit.> for more examples and a recent review.

Stellar oscillations appear all across the Hertzsprung-Russell diagram.
Theoretical studies <cit.> predict that low-mass M-dwarf stars have the potential to pulsate, with short periodicities ranging from 20 minutes up to 3 hours and empirically estimated amplitudes of just a few μmag or a few tens of cm s^-1. So far, the analysis of HARPS data has shown that no signals below 0.5 m s^-1 can be detected with a confidence level better than 90% in M dwarfs <cit.>. Nonetheless, <cit.> report some power excess for the two most long-term stable M dwarfs in their sample, one of them with a 2-hour period and an amplitude as low as 0.36 m s^-1, compatible with the theoretical prediction for low-degree, low-radial-order g-modes. <cit.> acquired more observations of the pulsating candidate but could not confirm the oscillations. The higher sensitivity and precision offered by ANDES will allow a finer sampling of the predicted short periods and a reduced noise level, thus increasing the chances of detecting stellar pulsations in M stars for the very first time. Since the stellar pulsation signatures are like a fingerprint of the star, the combination of optical and near-infrared spectra will allow a complete characterization of the pulsation modes and strengths and, therefore, a more detailed analysis of the internal structure and physics of M stars. Based on the results by <cit.>, and considering the upper limit of the expected pulsation periods (3 hours), we estimate that 50 measurements per star should be enough for the detection of these modes with an SNR of 5. Assuming an integration time of 180 seconds per visit and setting a V magnitude limit of 11, we estimate that a sample of 30–65 M-dwarfs can be targeted in 100–200 hours.

It is also worth noting the stellar atmospheric studies that can be done at high resolution with ANDES, which are important for assessing whether metallicity correlations are present at the stellar/sub-stellar boundary. For this work, the near-infrared band is particularly useful. For example, <cit.> used the equivalent widths of the Na I and Ca I triplet lines and the H_2O-K2 index in the K-band of high-resolution spectra to calculate basic stellar parameters and metallicities. This technique has been widely extended to provide calibrations and near-infrared indices to characterize large samples of M dwarfs. In this way, we can make progress on the atmospheres in the transition from hydrogen-fusing objects to dusty objects, which are difficult to model accurately.

§.§ RM planet characterization/Obliquity measurements

Spectra taken before, during, and after transit can be used to probe not only the constituents of and physical conditions in extrasolar planet atmospheres but also to measure the projection of the spin-orbit angle on the sky plane, i.e., the projected obliquity of the host star. When a transiting planet hides a portion of the stellar disk, the corresponding radial-velocity component is diminished in the disk-integrated stellar spectrum, leading to line-profile distortions known as the Rossiter–McLaughlin (RM) effect (see <cit.> and references therein). While the Sun's obliquity is low (∼ 7^∘ relative to the ecliptic), this is not necessarily true for extrasolar planet systems; see, for example, the TEPCAT catalogue[<https://www.astro.keele.ac.uk/jkt/tepcat/>] for an up-to-date listing. Spin-orbit misalignment can be created during the epoch of planet formation as well as later, during the main-sequence lifetime of the system.
Therefore, the degree of alignment between the stellar spin and the orbital spin contains important information about the system's past history. Unfortunately, no RM measurements have been conducted for Earth-sized planets orbiting solar-like stars, as these are out of reach of current instruments[The RM effect in the TRAPPIST system has been measured. Asteroseismic measurements have led to inclination determinations for solar-like stars with Earth-sized planets in two systems (Kepler-50 and Kepler-65).]. This leaves a major gap in our knowledge. ANDES will be able to determine the obliquities of just such systems. There is an additional advantage for transiting Earth-mass planets on long-period orbits around solar-like stars: their RM amplitude, of the order of 30 cm s^-1, is large relative to the radial-velocity reflex semi-amplitude, 9 cm s^-1 for the Earth <cit.>. The signal also occurs on a timescale of a few hours rather than months or years. Therefore, confirming such a transiting planet from a transit survey might be more readily achievable through measurements of the RM effect. Although the host star's projected obliquity will then be known, no mass estimate of the planet is gained.

In summary, the RM effect can be observed and analyzed for "free" if transmission spectroscopy is conducted in a transiting exoplanet system. The quantity measured, the projected stellar obliquity, is unknown for the types of systems ANDES will target. For other classes of exoplanet systems, spin-orbit misalignments have defied theoretical expectations and have proven useful in understanding the histories of these systems. In addition, for low-mass planets on long-period orbits, confirmation of transiting exoplanet candidates through the RM effect will be more readily achievable than via traditional radial velocity measurements obtained throughout the orbit.

§.§ The Chromatic Rossiter-McLaughlin effect

The Chromatic Rossiter-McLaughlin (CRM) effect can be exploited to gain additional information on exoplanet atmospheres from high-resolution spectra, information that is normally best accessed through the analysis of additional low-resolution spectra <cit.>. The measurement principle relies on the amplitude of the RM effect at different wavelengths, which depends on the square of the planet-to-star radius ratio at the respective wavelength. Determining the amplitudes in the different wavelength regions allows us to measure the apparent planet-to-star radius ratios over the complete wavelength range of high-resolution echelle spectra. By measuring the CRM, we will obtain access to low-resolution features (e.g., Rayleigh scattering, clouds) that probe different atmospheric pressure ranges and are inaccessible to high-resolution transmission spectroscopy <cit.>.

The technique comes with several disadvantages, chiefly that planetary atmospheres can only be probed in wavelength regions where stellar lines are present. Because the signal in the stellar lines is analyzed, the signal-to-noise ratio (SNR) is reduced relative to the case of standard low-resolution observations. Nevertheless, some advantages make an analysis of the CRM in particular systems desirable. The same spectrum as for the high-resolution atmospheric study can be used; no additional observations are required. Therefore, narrow and broad spectral features are measured simultaneously. Its dependence on astrophysical stellar noise differs from that of low-resolution observations.
For faster-rotating stars, the CRM signal-to-noise ratio scales linearly with planet radius instead of with the square of this quantity, potentially offsetting some of the disadvantages in specific systems <cit.>.

§.§ Stellar science cases addressed by exoplanet observations

Questions such as the nature of the stellar dynamo and stellar surface features are actively investigated in ANDES' working group 2 (see the respective White Paper). Observations of exoplanet transits yield additional information about the host star that can be used to address those stellar science goals and do not necessarily require separate observations. Instead, relevant stellar observables can be extracted from the same observational data collected for exoplanet transmission or emission spectra. One example is the differential rotation of cool stars with convective envelopes, which is an important parameter in stellar dynamo studies. In cases where the exoplanetary orbit is even only slightly misaligned with the stellar rotation axis, transit observations sample different latitudes on the host star. The collected spectra can be used to reconstruct the spectrum of the occulted patch of the stellar surface <cit.>, yielding sensitive measurements of stellar differential rotation and stellar limb darkening <cit.> or stellar granulation <cit.>.

Another illustration pertains to the spectral line profile linked to the granulation pattern, which in turn depends significantly on the stellar fundamental parameters. Each spectral line has unique fingerprints in the spectrum, in line strength, depth, shift, width, and asymmetry across the granulation pattern, depending on its height of formation and sensitivity to the atmospheric conditions <cit.>. In this context, it is possible to measure and characterize the granulation pattern of the host star by performing (spatially) resolved spectroscopy during planet transits <cit.>.

Other examples of stellar science cases addressed by exoplanet observations with ANDES are starspot properties encoded in spot occultations during transits <cit.> or, more generally, the stellar magnetic field <cit.>, short-term stellar variability <cit.>, and serendipitous observations of stellar flares, particularly in the case of M dwarf stars, for which ANDES will be crucial for characterizing the behavior of their complex atmospheres <cit.> and for determining the impact of flares on the chemical composition of transmission spectra <cit.>.

§ SOLAR SYSTEM SCIENCE WITH ANDES

§.§ Doppler velocimetry technique in planetary atmospheres

Measurements of the Doppler effect have previously been used to derive wind estimates in the atmospheres of Venus <cit.>, Mars <cit.>, Jupiter <cit.>, and Saturn <cit.>. The technique consists of measuring the Doppler shift of solar lines reflected by the planetary disk <cit.> and using it to directly derive the zonal wind across a large range of latitudes and local times on the dayside. The Doppler velocimetry method used in the works referred to above takes advantage of the solar radiation backscattered by the planetary atmosphere. Doppler wind speeds can be retrieved by observing stellar lines (e.g., Fraunhofer lines) at very high spectral resolution. This method was previously applied to determining the radial velocities of stars <cit.> and in asteroseismology <cit.>. The technique was also used in recent planetary measurements in the visible range <cit.>.
The measurements of wind velocities in planetary atmospheres are retrieved using an adapted form of the absolute accelerometry (AA) technique (Connes, 1985), applied to high-resolution spectra obtained at the Very Large Telescope (ESO). Its applicability to the retrieval of planetary atmospheric winds has been demonstrated for Venus <cit.>, Mars <cit.>, Jupiter <cit.>, and Saturn <cit.>, and even for dynamical studies of Titan's atmosphere in the scope of an exploratory study <cit.>.

The method is based on an optimal correlation between a measured spectrum and a reference spectrum. A detailed analysis of the particular method used for the outer planets has been made by <cit.>. Instead of determining the Doppler shifts of individual lines, the AA technique takes the ensemble of lines and achieves a statistical uncertainty in the velocity determination that varies inversely with the spectral line density of the source. Hence, observations at solar wavelengths with a high-resolution spectrometer are particularly suited to this technique, since the solar spectrum scattered off the planetary atmosphere carries the signatures of some 4000 Fraunhofer solar absorption lines in the wavelength domain we use, yielding a theoretical precision of a few m s^-1. In a dynamical study of Titan's atmosphere, as a consequence of the observing geometry and of the moon's rotation, the line-of-sight Doppler shift at each pixel on the image of the planetary disk is affected by the combination of the relative orbital motion, the planetary motion, and the cloud particles' motion.

In the AA technique, the velocity information is extracted from the variation of the spectral intensity in a given measurement relative to an idealized spectral intensity, which can be an average of many observations or even a model. Let I(λ) and I_0(λ) denote, respectively, the Doppler-shifted (measured) and the idealized spectral intensities of the target at a given wavelength. Then, assuming Doppler shifts smaller than the typical line width, combining the first-order expansion I_0 - I ≃ δλ ∂I_0/∂λ with δV_m/c = δλ/λ yields

-δV_m(λ)/c = (I(λ) - I_0(λ)) / (λ ∂I_0/∂λ),

where δV_m is the measured velocity signal projected on the line of sight. Computing a weighted average over a given spectral domain yields

ΔV = [ ∫ δV_m / σ²(δV_m) dλ ] / [ ∫ 1 / σ²(δV_m) dλ ],

where σ²(δV_m(λ)) is the variance of the velocity measurement at wavelength λ. The rule for optimal weighting is that each weight be proportional to the inverse square of the RMS error. Under the assumption of pure photon noise, σ²(I(λ)) = I(λ), this gives

-ΔV/c = [ ∫ (I(λ) - I_0(λ)) M(λ) dλ ] / [ ∫ λ (∂I_0/∂λ) M(λ) dλ ],

where M(λ) = (λ/I) ∂I_0/∂λ is called the mask function <cit.>. The mask function can be interpreted as a weighting function which amplifies the differences between the reference and the shifted spectrum within absorption lines and nullifies them where the spectrum is flat. The technique takes into consideration the whole spectrum, independently of individual spectral lines, and by propagating the variance it can be shown that the standard deviation is inversely proportional to the density of lines in the spectrum. The constant Doppler shift arising from the relative motion of the planet and the observer, and systematic shifts from instrumental effects, are eliminated by considering differential measurements between a spectrum taken at a given point on the disk and the reference spectrum, taken at the center of the disk where the phase angle is null, both observed almost simultaneously.
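A minimal numerical sketch of this estimator is given below (our own illustration of the formulas above, not the pipeline used in the cited works; the finite-difference gradient and the photon-noise mask weighting are simplifying assumptions):

import numpy as np

C_MS = 2.99792458e8  # speed of light in m/s

def aa_velocity(wave, spec, ref):
    """Absolute-accelerometry velocity estimate.

    wave : wavelength grid (consistent units throughout)
    spec : measured (Doppler-shifted) intensity I(lambda)
    ref  : idealized reference intensity I_0(lambda)
    Implements -dV/c = int (I - I_0) M dl / int lambda (dI_0/dl) M dl,
    with mask function M = (lambda / I) dI_0/dl.
    """
    dref = np.gradient(ref, wave)          # dI_0/dlambda
    mask = (wave / spec) * dref            # mask function M(lambda)
    num = np.trapz((spec - ref) * mask, wave)
    den = np.trapz(wave * dref * mask, wave)
    return -C_MS * num / den               # velocity in m/s

# Quick self-test on a synthetic Gaussian absorption line shifted by +100 m/s
wave = np.linspace(600.0, 600.2, 4000)                    # nm
ref = 1.0 - 0.5 * np.exp(-((wave - 600.1) / 0.005) ** 2)
shift = 100.0 / C_MS * wave                               # dlambda = (v/c) lambda
spec = np.interp(wave, wave + shift, ref)                 # red-shifted spectrum
print(aa_velocity(wave, spec, ref))                       # approx +100

The self-test recovers the imposed shift because, for small shifts, I - I_0 ≈ -δλ ∂I_0/∂λ, so the numerator and denominator integrals differ exactly by the factor -v/c.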
ANDES's combined high spectral and spatial resolution in IFU mode will allow us to push the Doppler velocimetry technique to unprecedented levels and derive wind maps at very high spatial resolution. This, however, comes at the cost of having to close the Adaptive Optics loop, which requires targeting planetary objects with a relatively small angular size, such as Titan. The typical angular diameter of Titan is 0.8". With a diffraction limit (at 2 λ/D, 0.75 μm) of ANDES of about 8 mas, we expect to reach a typical horizontal resolution of 50 km × 50 km. Using the radial velocity photon noise limits <cit.>[We used the online tool <http://www.astro.physik.uni-goettingen.de/research/rvprecision/>], we evaluate that a 1-hour integration time is sufficient to reach a 2 m s^-1 precision per resolution element for Titan. However, this approach neglects the effects of instrument stability, stellar activity, calibration, and others that can significantly increase the noise level <cit.> and deserve to be quantified in detail.

Interestingly, Titan has a forest of strong CH_4 lines from the visible (as seen in previous ESPRESSO observations of Titan; Turbet et al., in preparation) to the infrared <cit.>; these lines could be used to further improve the SNR of wind measurements (in addition to using stellar lines), as well as to extend wind measurements to altitudes above the haze layer. However, doing so requires an experimental effort to accurately constrain CH_4 line parameters in the visible (see an example in <cit.>). Moreover, high-resolution spectroscopy presents a unique opportunity to observe a planetary target with a CH_4-rich atmosphere, from which the optical properties of CH_4 can be studied and the related isotopic ratios retrieved. It also showcases the use of a close planetary target to test new methods for the chemical retrieval of minor atmospheric compounds (some of them of relevant astrobiological interest, as shown in R. Silva et al., in revision in Planetary and Space Science), in preparation for upcoming studies of cold terrestrial exoplanet atmospheres.

§.§ Probing tenuous planetary atmospheres

ANDES' high spectral resolution and sensitivity will make it a powerful instrument to study tenuous planetary atmospheres, where, due to the low pressures (μbar to nbar range), line profiles are defined by thermal broadening, yielding intrinsic FWHM of ∼0.01 cm^-1 at 2 μm. Such atmospheres include the sublimation-driven atmospheres of Pluto, Triton, and Io, the latter also including a volcanic contribution. A common feature is the existence of an orbital pressure cycle, related to changing heliocentric distance and/or subsolar latitude. On Pluto and Triton, the seasonal pressure cycle is best monitored through stellar occultations <cit.>, but infrared spectroscopy can also contribute by measuring the N_2 ice temperature <cit.>, which determines the total pressure. Moreover, CH_4 and CO are present at the surface and in the atmosphere. In particular, Pluto's northern latitudes are covered by methane deposits, which are a major source of atmospheric methane <cit.>. How such deposits will react as this region reaches northern solstice by 2029 is, however, unknown, and global climate model predictions can be tested against temporally sampled and spatially resolved (if the AO loop can be closed on a 0.1" target) observations with ANDES.
VLT/CRIRES observations <cit.> have demonstrated the possibility of monitoring Pluto's atmospheric methane with ground-based IR spectroscopy, but the ability to map it on Pluto's and Triton's disks will be unique to ANDES. Other Transneptunian objects (TNOs), such as Quaoar, Gonggong, Makemake, and Eris, are known to have volatile-covered surfaces <cit.>, and recent JWST observations have shown that CO_2 and CO are also common on the surfaces of Kuiper Belt objects <cit.>. This is generally consistent with volatile escape models <cit.>, which show that large KBOs are able to retain their volatiles over the age of the solar system. This leaves open the possibility that other TNOs besides Pluto and Triton may exhibit atmospheres, at least along some parts of their orbits. From an observational perspective, the target object should be warm enough when observed (typically, a 21 K N_2 atmosphere, or a 29 K CH_4 one, is at the transition between collisional and ballistic), and global atmospheres are more promising for detection than atmospheres restricted to the subsolar region. Stellar occultations (e.g., <cit.> for Makemake) provide complementary searches for atmospheres, but by nature can constrain them only near the terminator, a priori not the most favourable region.

Although most of the knowledge of Io's nanobar-class atmosphere (see the review in <cit.>) is based on UV and sub-mm observations, it is also accessible to near-IR spectroscopy, with the spatially resolved detection of SO_2 gas at 4 μm in sunlight and of SO emission at 1.7 μm in Jupiter eclipse. The latter has been known for over 20 years, but its interpretation remains uncertain. The most plausible scenario <cit.> is that it represents emission of hot SO gas directly ejected from volcanic vents at a high quenching temperature. However, studies of the correlation between this emission and the distribution of hot spots have given contradictory results, and the overall spectral shape (so far restricted to the disk-averaged spectrum due to S/N limitations) still defies modelling. ANDES observations combining both high spectral and high spatial resolution should shed light on the origin of this emission. Finally, plume atmospheres, such as that of Enceladus, recently observed by JWST <cit.>, and Europa's elusive one <cit.>, may also be explored with ANDES. Although the major gas H_2O (and CO_2) will be inaccessible from the ground, other species diagnostic of sub-surface ocean chemical conditions, e.g., CO, CH_4, C_2H_6, and CH_3OH, may be searched for.

§.§ Isotopic ratios in comets

Isotopic ratios represent a powerful tracer of the origin of planetary bodies, especially for small bodies, which have suffered less chemical evolution during their lifetime compared to planets. Comets are among the most primitive materials in the solar system and can provide unique constraints on the composition of the pre-solar nebula. Their main advantage for chemical composition measurements is the presence of a coma with species (molecules, radicals, ions, and atoms) in the gas phase, permitting a detailed chemical analysis of the cometary ices by means of spectroscopy. In the spectral range covered by ANDES, many emission lines due to different radicals can be observed.
The main source of information from these emission lines, albeit one requiring a high signal-to-noise ratio, is the isotopic ratios. The two main isotopic ratios that can be measured in the spectral range covered by ANDES are ^12C/^13C and ^14N/^15N, accessible in the following species: CN (for both ratios, in the U band), NH_2 (for nitrogen, in the V band), C_2 (in the B band), CO^+ (for carbon, in the B band), and N_2^+ (in the U band). These last two species are not always detected in the inner coma. The results obtained so far, mainly with 8-m class telescopes <cit.>, indicate a ^12C/^13C ratio similar to the terrestrial value of 89 (but with large error bars) for C_2, CN, and CO^+ (only one measurement for this ion, which is rarely observed in the inner coma), and a significant enrichment of ^15N relative to ^14N in both the CN and NH_2 radicals, by a factor of about 2 compared to the terrestrial value and by a factor of about 3 compared to the solar bulk <cit.> and Jupiter's ammonia <cit.>; only one lower limit has so far been obtained for the ^14N/^15N ratio of N_2 with ground-based observations. The significant improvement in the signal-to-noise ratio expected with ANDES at the ELT, compared to UVES at the VLT (the main spectrograph used so far for these observations), opens very interesting possibilities, both for comets belonging to our solar system and for the interstellar comets expected to be observed in the near future. A substantial reduction of the error bars is expected, leading to stronger constraints on the origin of comets (e.g., according to their dynamical type), and the first measurements in interstellar comets, never performed so far, will provide unique insight into extrasolar planetary systems.

§ SYNERGIES WITH OTHER INSTRUMENTS

As a next-generation instrument, ANDES will be preceded by a suite of ground- and space-based facilities for exoplanet science, namely the ELTs, JWST, PLATO, ARIEL, ALMA, and SKAO, and will be a vital stepping stone on the way to the study of Earth analogues with the possible future great observatories under study, such as the Habitable Worlds Observatory (HabWorlds) and the Large Interferometer For Exoplanets (LIFE). It therefore plays a crucial role in further advancing our knowledge of exoplanet atmospheres through synergy and complementarity with other instruments, some of which we discuss below.

§.§ Ground-based

ANDES will follow up on the exoplanet characterization efforts of the first generation of spectroscopic workhorses for the ELT, i.e., METIS <cit.>, HARMONI <cit.>, and MICADO <cit.>. For exoplanet atmospheric characterization at high spectral resolution, each has its own niche, complementary to ANDES. MICADO offers AO-assisted, high-contrast, single-slit (0.016^''×3^'') spectroscopy at R < 20,000, with non-simultaneous wavelength coverage between 0.84 and 2.46 μm. At similar wavelengths, HARMONI hosts an AO-assisted image-slicer IFU with a high-contrast mode covering the H & K bands (non-simultaneously) at R=18,000 and excellent spatial sampling (spaxel size) of 3.88 mas. Both have the potential to search for the reflected light spectra of nearby, widely separated exoplanets. HARMONI's IFU in particular could observe Proxima b (separation <37 mas), though its focal plane mask and apodizer may require some modification from the current design, or the use of non-standard observing modes, to enable this (Vaughan et al., submitted). Thanks to its R=100,000 resolution, the ANDES infrared IFU avoids the issue of stellar PSF saturation that necessitates the HARMONI focal plane mask.
Thus, ANDES will be able to observe reflected light spectra of exoplanet atmospheres at much higher spectral resolution, leading not only to tighter abundance constraints, but also to more comprehensive chemical analyses of rocky exoplanets, thanks to its increased sensitivity to weaker lines from less abundant species in these wavelength regimes, including the key biosignature species oxygen. METIS will offer an IFU in the L & M bands (3-5 μm) at R=100,000, making it the perfect complement to ANDES. METIS will be primarily sensitive to the thermal emission of temperate rocky worlds, providing detailed analyses of their composition and thermal structure <cit.>. Combined, ANDES and METIS will obtain a holistic, detailed view of these planets across the full optical-NIR regime, probing both reflected light and thermal emission, resulting in measurements of the planetary albedo and heat budget and ultimately establishing the main parameters of its global climate. METIS and ANDES will also use their high spectral resolution to probe the exact shape of spectral lines in the atmospheres of gas giants, providing complementary wavelength information that has the potential to examine dynamic and chemical effects, particularly in disequilibrium, at different altitude/pressure levels in greater detail than ever before. ANDES will also have significant synergies with METIS regarding proto-planetary disk science. METIS will be able to image and spectrally map the inner dusty disk structure; it will study the excitation and dynamics of CO fundamental emission in the inner gaseous disk; and it will allow a study of the gas chemistry in the hot region of the disk through observations of key species such as H_2, H_2O, and organic molecules. ANDES, on the other hand, will provide dynamical information and a full characterization, in terms of physical parameters, of the atomic component of the inner gas; it will study the accretion process and provide measurements of the magnetic field; and it will elucidate the influence of jets on the star-disk interaction. In addition, observations of molecular lines at higher energies with respect to those covered by METIS will give important information for a correct interpretation of the excitation mechanism. The science performed by ANDES in the field of proto-planetary disks will therefore be highly complementary to that of METIS, and the combination of observations from the two instruments will allow a complete picture of the inner disk region and its interaction with the still accreting star. §.§ Space-based JWST <cit.>, which is already a reality, and ARIEL <cit.>, with a foreseen launch in 2029, are both expected to be operational within a similar time frame (early 2030s) as ANDES. Their combined observations are expected to yield crucial synergies, enabling the exploration of a wide parameter space of exoplanet atmospheres. JWST is a more powerful observatory than ARIEL for single observations, but ARIEL holds a great advantage: it covers the entire spectral region from 0.5 to 8 microns simultaneously and it is a dedicated mission for exoplanet population statistics. Both missions together will measure global planetary atmospheric parameters, including temperature-pressure profiles, cloud coverage, and bulk molecular abundances of more than 1000 planets <cit.>.
These space-based observations offer very complementary information as they can access parts of the electromagnetic spectrum that are inaccessible to ground-based observatories due to absorption and scattering in Earth's atmosphere. High contrast imaging instruments at the ELT will examine a volume-limited population of young, non-transiting exoplanets formed with the same initial composition as the ARIEL and JWST sample but at considerably colder temperatures (beyond 100 AU). On the other hand, high-dispersion spectroscopy instruments are tailored to study the atmospheres of exoplanets overlapping with ARIEL's sample and those accessible to JWST. With a resolving power of R > 100,000, ELTs can resolve atomic and molecular bands in exoplanet spectra into hundreds or thousands of individual lines. These signals can be combined for detection, offering astrophysical information over small wavelength scales that JWST or ARIEL will not be able to obtain. In particular, ANDES will measure planet metallicities <cit.>, inhomogeneities and dynamics in the planetary atmosphere through time-resolved transmission measurements <cit.>, individual lines that can be used as tracers to understand evaporation and atmospheric evolution processes <cit.>, planetary albedos through the detection of reflected light at different wavelengths <cit.>, and detailed information about the properties of stellar active regions, augmenting JWST and ARIEL's capability to mitigate stellar activity-induced noise in retrieved transmission spectra <cit.>. More importantly, the high resolution spectroscopic observations can be combined with the bulk compositions, T-P profiles, and thermal inversions derived from ARIEL and JWST data in a single analysis framework <cit.>. This combined analysis of high- and low-dispersion data will provide a holistic picture of planetary atmospheres. Finally, ANDES radial velocity measurements can be used to determine accurate planet masses for small planets or faint host stars, which might be crucial for some of the best Earth-sized targets PLATO <cit.> might find, and also to provide information on planetary rotation and high-altitude wind speeds for the most favorable targets. Accurate mass measurements are necessary to interpret the atmospheric characterization data, whether it is to understand the lack of an atmosphere or to perform retrievals on molecular features <cit.>. § CONCLUSIONS In this work we have presented detailed simulations of some of the ground-breaking exoplanet (and solar system) science that will be attainable with ANDES at the ELT. ANDES will feature a seeing-limited spectrograph with a minimum spectral resolution of R=100,000, covering simultaneously the wavelength range of 0.5–1.8 µm, and with the goal of extending coverage into the U and K bands. It will also feature an AO-assisted mode with an IFU covering simultaneously the Y, J, and H bands. The main objective of the ANDES project is to characterize the atmospheres of rocky planets in the habitable zone, mostly around M-dwarf stars. This is particularly relevant as the characterization of these accessible and abundant planets is crucial to understanding their habitability and determining if they could support life. ANDES will also focus on the detection of reflected light from the planetary atmosphere, allowing their structure, composition, and surface conditions to be explored. This will provide an unprecedented detailed understanding beyond basic parameters such as mass, radius, and bulk composition.
A golden sample of at least five nearby non-transiting Earth-sized habitable-zone planets will be accessible to ANDES for atmospheric characterization and the search for biomarkers within a few nights. This would be a major scientific milestone for exoplanets and astrobiology, an objective that no other currently approved astronomical facility will be able to reach. Furthermore, ANDES will also be dedicated to studying the initial conditions under which planets form and the physical mechanisms that lead to the different observed planet-type populations and system architectures. This involves investigating the composition and spatial distribution of atomic and molecular gas in the inner regions of young circumstellar disks, which is needed for a better understanding of the initial formation and later evolution of planetary systems. Since ANDES will be operational at the same time as NASA's JWST and ESA's ARIEL missions, it will provide enormous synergies in the characterization of planetary atmospheres at high and low spectral resolution. This will open a true golden era for a full, comprehensive understanding of the diversity of exoplanet atmospheres and how they form and evolve. Acknowledgments EP and HP acknowledge financial support from the Spanish Ministry of Science and Innovation (MICINN) project PID2021-125627OB-C32 and PGC2018-098153-B-C31. JIGH and ASM acknowledge financial support from the Spanish Ministry of Science and Innovation (MICINN) project PID2020-117493GB-I00. FD thanks Hayley Beltz for valuable support with the WASP-76 b ANDES simulations. JLB acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under grant agreement No 805445. BLCM acknowledges continuous grants from the Brazilian funding agencies CNPq and Print/CAPES/UFRN. This study was financed in part by the Coordenaçao de Aperfeiçoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001. § ANDES AO-ASSISTED PERFORMANCE MODELLING §.§ ANDES AO expected performance The current ANDES-SCAO design includes a modulated pyramid WFS inspired by the HARMONI-SCAO one with respect to the main system parameters (wavelength range, number of sub-apertures, and control strategy). We obtained a first estimation of the achievable contrast, starting from the end-to-end AO simulations of <cit.> performed for HARMONI-SCAO, and adding error contributions that had been neglected so far. In particular, we added to the HARMONI-SCAO wavefront residuals the effects of: 1) SP: static petalling; 2) DP: dynamic petalling; 3) J: residual PSF jitter. By petalling error we refer here to the differential piston between the 6 segments of ELT-M4. Then, we accumulated 2000 instantaneous PSFs (corresponding to 4 s of integration) obtained from these residuals and we computed the contrast achieved by the SCAO system. This approach provides an optimistic estimation of the performance because the mentioned errors will impact the AO loop itself, increasing the AO residuals. Here, we are neglecting this aspect, which will be taken into account in the near future with dedicated end-to-end simulations. The contrast is considered as the ratio between the radial profile and the peak of the PSF. For the contrasts reported in Section <ref>, we considered: AO reference star I≤8; median seeing conditions (0.71 arcsec, L_0 = 50 m); DP = 50 nm RMS and J = 100 nm RMS, accounting for vibration and wind shake residuals.
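As an illustration of the contrast definition above, the following minimal Python sketch extracts an azimuthally averaged contrast curve from a long-exposure PSF image; the input array, pixel scale and binning choices are our own assumptions, not part of the original analysis.

import numpy as np

def contrast_curve(psf, pixel_scale_mas, n_bins=50):
    # Contrast: azimuthally averaged radial profile divided by the PSF peak
    y, x = np.indices(psf.shape)
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)  # star located at the peak
    r = np.hypot(y - cy, x - cx) * pixel_scale_mas        # separation in mas
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), edges) - 1
    flat = psf.ravel()
    profile = np.array([flat[idx == i].mean() if np.any(idx == i) else np.nan
                        for i in range(n_bins)])
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, profile / psf.max()

# psf_long = frames.mean(axis=0)  # e.g. an accumulation of instantaneous PSFs
# sep_mas, contrast = contrast_curve(psf_long, pixel_scale_mas=2.0)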
Regarding SP, we envisioned two possible scenarios: high piston – M4 is delivered to the instrument with SP = 200 nm RMS and the SCAO system is not able to correct this static error, which persists during the observation; low piston – SP = 25 nm RMS thanks to a good phasing performed by the ELT before the handover to ANDES, or thanks to the SCAO capability to correct the petalling error down to this level. The low piston scenario will also enable the use of a coronagraph. The simulations show that even a classic Lyot coronagraph will provide a contrast gain ≥10 in the region around 20 mas at 1600 nm, reaching values <1.0 · 10^-3. §.§ Planet observation simulations We have simulated ANDES-SCAO observations of known exoplanets to explore the target sample that can be probed in this mode. We base the simulation on the stellar and planetary data available in the “Planetary Systems Composite Data” table[<https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls config=PSCompPars>] in the NASA Exoplanet Archive[<https://exoplanetarchive.ipac.caltech.edu/index.html>]. We note that a small subset of the systems is missing some quantities required for the simulation. Instead of excluding these systems, we chose to replace the missing values either with sensible defaults or with values derived from the remaining quantities. * For stars, some radius, mass, or effective temperature estimates are missing. We set the missing effective temperatures to 5000 K, and then we fill in the missing mass and radius values using either mass-radius or temperature-mass-radius relations. * For planets, the estimates for some of the orbital parameters, radii, or masses are missing. We default the missing eccentricities, arguments of periastron, and inclinations to 0, 90^∘, and 90^∘, respectively. Next, we calculate the missing semi-major axes using the planet's orbital period and stellar mass (and assume circular orbits). Finally, for planets with either a mass or radius estimate but not both, we set the missing value to match Jupiter's if the existing value indicates the planet to be Jupiter-sized. We first calculate the maximum projected planet-star distance for each planet based on the planet's orbital parameters (semi-major axis, inclination, eccentricity, and argument of periastron), after which we use the maximum projected distance and the distance to the system to calculate the planet's maximum angular separation from its host star. Next, we calculate the planet-star radius and area ratios based on the planet and star radii, and then the planetary equilibrium temperature based on the stellar effective temperature and the planet's semi-major axis. The planet-star flux ratios are calculated as a sum of the planet's reflected and emitted light, F_r and F_e, respectively, relative to the flux from its host star, f = (F_r + F_e)/F_⋆. The reflected flux ratio is calculated using the maximum projected planet-star distance, assuming an Earth-like albedo of 0.3 and a phase angle of π/2. The emitted flux ratio, in turn, is calculated using the planet-star area ratio, planet equilibrium temperature, and the star's effective temperature, approximating both the planet and the star as black bodies. We calculate the number of photons received by a resolution element per second as a function of magnitude using a first-order polynomial in log10(photons/s) fitted to photons/s estimates created by the ANDES ETC v1.1 for a compact source.
We use the ETC to calculate the S/N ratios (SNRs) for two magnitudes (5 and 10) with a 6000 s exposure time for each passband. The number of photons/s is obtained as SNR^2 / 6000, after which a polynomial is fit to the two log_10(photons/s) estimates as a function of AB magnitude.[We also tested that the logarithm of the number of photons per second obtained using the ETC is linear in AB magnitude, as it should be.] We set the exposure time to 6000 s, the sky background to 30 counts, the telescope and optics temperature to 200 K, and the read-out noise and dark current to 0 to ensure that ≈100% of the noise comes from photon noise. We convert the V, J, H, and K magnitudes into the AB magnitude system[<https://en.wikipedia.org/wiki/AB_magnitude>] to calculate the number of photons received by a resolution element for each star. We first convert each magnitude into a flux density in Jy, which we then convert to an AB magnitude as m_AB = -2.5 log_10(f_ν/3631 Jy). After this, the AB magnitude can be used with the photons/second model to estimate the number of photons arriving at a resolution element in a given time. We use the contrast curves for the low (750, 1000, 1600, 2200 nm) and high (1000, 1600, 2200 nm) piston scenarios discussed above and provided by Anne-Laure Cheffot and Enrico Pinna (priv. comm.). The final SNR is calculated following <cit.> and <cit.>, as SNR = f √(t_e F_⋆ n_l K), and the time to a given SNR as T_SNR = SNR^2/(f^2 F_⋆ n_l K), where f is the planet-star flux ratio, t_e is the exposure time, F_⋆ is the flux from the star, n_l the number of lines (assumed here arbitrarily to be 1000, which is a typical number for other ground-based spectrographs), and K is 1/contrast. The SNR considers only photon noise and ignores other white noise sources (readout) and systematics (in particular, telluric and stellar lines are assumed to be perfectly removed).
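To make the above pipeline concrete, here is a hedged Python sketch of its three ingredients: the planet-to-star flux ratio, the AB magnitude conversion, and the time-to-SNR estimate. The Lambertian phase factor 1/π at a phase angle of π/2 and the bolometric blackbody ratio for the emitted component are our reading of the text rather than the authors' exact implementation, and all numerical inputs are placeholders.

import numpy as np

def flux_ratio(a_g, r_p, d_orb, r_star, t_eq, t_eff):
    # Reflected light at phase angle pi/2 (assumed Lambertian phase factor 1/pi);
    # r_p and d_orb must share units, as must r_p and r_star
    reflected = a_g * (r_p / d_orb) ** 2 / np.pi
    # Emitted light: area ratio times bolometric blackbody temperature ratio
    emitted = (r_p / r_star) ** 2 * (t_eq / t_eff) ** 4
    return reflected + emitted

def ab_magnitude(f_nu_jy):
    # m_AB = -2.5 log10(f_nu / 3631 Jy)
    return -2.5 * np.log10(f_nu_jy / 3631.0)

def time_to_snr(snr, f, f_star, n_lines=1000, contrast=1e-4):
    # T_SNR = SNR^2 / (f^2 F_star n_l K), with K = 1/contrast and
    # f_star the stellar photon rate per resolution element
    k = 1.0 / contrast
    return snr ** 2 / (f ** 2 * f_star * n_lines * k)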
"authors": [
"Enric Palle",
"Katia Biazzo",
"Emeline Bolmont",
"Paul Molliere",
"Katja Poppenhaeger",
"Jayne Birkby",
"Matteo Brogi",
"Gael Chauvin",
"Andrea Chiavassa",
"Jens Hoeijmakers",
"Emmanuel Lellouch",
"Christophe Lovis",
"Roberto Maiolino",
"Lisa Nortmann",
"Hannu Parviainen",
"Lorenzo Pino",
"Martin Turbet",
"Jesse Wender",
"Simon Albrecht",
"Simone Antoniucci",
"Susana C. Barros",
"Andre Beaudoin",
"Bjorn Benneke",
"Isabelle Boisse",
"Aldo S. Bonomo",
"Francesco Borsa",
"Alexis Brandeker",
"Wolfgang Brandner",
"Lars A. Buchhave",
"Anne-Laure Cheffot",
"Robin Deborde",
"Florian Debras",
"Rene Doyon",
"Paolo Di Marcantonio",
"Paolo Giacobbe",
"Jonay I. Gonzalez Hernandez",
"Ravit Helled",
"Laura Kreidberg",
"Pedro Machado",
"Jesus Maldonado",
"Alessandro Marconi",
"B. L. Canto Martins",
"Adriano Miceli",
"Christoph Mordasini",
"Mamadou N'Diaye",
"Andrezj Niedzielski",
"Brunella Nisini",
"Livia Origlia",
"Celine Peroux",
"Alex G. M. Pietrow",
"Enrico Pinna",
"Emily Rauscher",
"Sabine Reffert",
"Philippe Rousselot",
"Nicoletta Sanna",
"Adrien Simonnin",
"Alejandro Suarez Mascareno",
"Alessio Zanutta",
"Mathias Zechmeister"
],
"categories": [
"astro-ph.IM",
"astro-ph.EP"
],
"primary_category": "astro-ph.IM",
"published": "20231127210430",
"title": "Ground-breaking Exoplanet Science with the ANDES spectrograph at the ELT"
} |
Microscopic Mechanism of the Thermal Amorphization of ZIF-4 and Melting of ZIF-zni Revealed via Molecular Dynamics and Machine Learning
[email protected] – Sorbonne Université, CNRS, Physico-chimie des Electrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
We investigate the microscopic mechanism of the thermally induced, ambient-pressure ordered–disordered phase transitions of two zeolitic imidazolate frameworks of formula Zn(C3H3N2)2, a porous (ZIF-4) and a dense, non-porous (ZIF-zni) polymorph, via a combination of data science and computer simulation approaches. Molecular dynamics simulations are carried out at the atomistic level through the nb-ZIF-FF force field, which incorporates ligand–metal reactivity and relies on dummy atoms to reproduce the correct tetrahedral topology around Zn^2+ centres. The force field is capable of reproducing the structure of ZIF-4, ZIF-zni and the amorphous (ZIF_a) and liquid (ZIF_liq) phases that respectively result when these crystalline materials are heated. Symmetry functions computed over a database of structures of the four phases are used as inputs to train a neural network that predicts the probabilities of belonging to each of the four phases at the local Zn^2+ level with 90% accuracy. We apply this methodology to follow the time evolution of the amorphization of ZIF-4 and the melting of ZIF-zni along a series of molecular dynamics trajectories. We first computed the transition temperatures and determined the associated thermodynamic state functions. Subsequently, we studied the mechanisms. Both processes consist of two steps: (i) for ZIF-4, a low-density amorphous phase is first formed, followed by the final ZIF_a phase, while (ii) for ZIF-zni, a ZIF_a-like phase precedes the formation of the liquid phase. These processes involve connectivity changes in the first-neighbour ligands around the central Zn^2+ cations. We find that the amorphization of ZIF-4 is a non-isotropic process, and we trace back the origins of this anisotropic behaviour to the density and the lability of coordination bonds. § INTRODUCTION Amorphous Metal-Organic Frameworks (MOFs) have long been known, but only very recently have they attracted the attention of the research community. <cit.> Indeed, since amorphous MOFs may conserve building blocks and practically all connectivity from their crystalline counterparts, they combine attractive properties of crystalline phases, such as intrinsic porosity and high surface areas, <cit.> with mechanical robustness and the presence of multiple defects that can act as catalytic centres, typical of amorphous phases. <cit.> Amorphous MOFs can be synthesised as such, but they are mostly obtained from their parent crystalline structures by exerting an external stimulus on them. <cit.> Since these ordered–disordered transitions are reversible, guests within the pores of the crystalline phases can become trapped when the structure becomes amorphous, to be later released by forcing the MOF to return to its crystalline state. This principle makes these materials attractive for important industrial and environmental applications including water purification, <cit.> drug delivery, <cit.> capture of radioactive species <cit.> and catalysis, <cit.> among others. Moreover, it is easier to adequately shape amorphous materials for applications (for example as pellets, extrudates or sprays) without compromising their porosity or chemical properties.
<cit.> Zeolitic Imidazolate Frameworks (ZIFs) constitute an exceptionally stable family of MOFs with potential applications to many societally challenging processes. These MOFs have the particularity that their distribution of metal–ligand–metal angles is analogous to that of Si–O–Si angles in zeolites, but since their bonds are longer, their porosities are larger. This offers some advantages in terms of potential guest-host-based applications, but it also gives ZIFs more flexibility (i.e. their elastic moduli are at least one order of magnitude lower than those of zeolites), <cit.> as coordination bonds are weaker than covalent bonds. In addition, amorphization of ZIFs occurs under milder conditions (for example, lower temperatures) than for their zeolite analogues, <cit.> as deforming the metal–(ligand)_4 tetrahedron involves reorganising much weaker bonds than it does in the SiO_4 case. Many ZIFs exhibit ordered–disordered phase transitions. These may be induced by changes in temperature, <cit.> pressure, <cit.> mechanical grinding, <cit.> interaction with X-rays, <cit.> or even by eliminating water from their structures. <cit.> Heat-induced amorphization was observed in a number of MOFs; <cit.> in this work we concentrate our efforts on ZIF-4. This MOF is well-known for its applications to separating alkenes from alkanes, among other gas mixtures, <cit.> and its synthesis has been scaled up. <cit.> It is one of the many existing polymorphs of chemical formula Zn(C3H3N2)2 and it exhibits a cag topology with connected cages of a diameter of 4.9 Å. <cit.> ZIF-4 has a complex phase diagram, consisting of a series of amorphous and crystalline phases. <cit.> At ambient pressure, the following sequence of phase transitions has been experimentally detected: ZIF-4 → ZIF_a → ZIF-zni → ZIF_liq (transitions 1, 2 and 3, respectively). Transition 1 is an amorphization phase transition from the crystalline porous ZIF-4 to ZIF_a, an amorphous phase that has a continuous random network structure similar to amorphous silica. <cit.> This transition occurs at T_1=589 K. <cit.> Further heating leads to transition 2: a recrystallization of ZIF_a into the dense crystalline ZIF-zni solid, which happens at T_2=773 K. It is interesting to note that other Zn(C3H3N2)2 polymorphs, including ZIF-1 (crb topology), ZIF-3 (dft topology) and ZIF-6 (gis topology), also yield upon heating the same amorphous phase that subsequently crystallises into ZIF-zni. <cit.> Finally, transition 3 involves the melting of ZIF-zni at T_3=863 K to give a liquid MOF (ZIF_liq). <cit.> Bennett and coworkers have shown that heating ZIF-4 first leads to the formation of a low density phase before reaching the high density amorphous phase that transforms into ZIF-zni upon further increasing the temperature. <cit.> Neutron total diffraction data revealed that the Zn-centred tetrahedron remains quite rigid during the amorphization, although an out-of-the-plane ligand motion can act to slightly reduce the total connectivity of the amorphous phase with respect to the full connectivity of ZIF-4. <cit.> High field ^13C and ^15N NMR measurements can help differentiate ZIF-4 from ZIF-zni and show that there is an important structural similarity between ZIF_a and both crystalline phases, albeit with a signal broadening for the amorphous material. This confirms that the amorphous phases obtained by thermal annealing starting from ZIF-4 or ZIF-zni are identical.
<cit.> In situ far infra-red spectroscopy proved that the ligand does not significantly strain during the amorphization process and that collective modes involving deformation of the ZnN4 tetrahedron are the main contributors to the amorphization. <cit.> From a computational standpoint, Gaillac and coworkers have studied the melting mechanism of ZIF-4 via ab initio molecular dynamics simulations. <cit.> In these works, ZIF-4 is considered as a starting point, since treating representative sections of ZIF_a and ZIF-zni would yield systems too large to subject to ab initio molecular dynamics while spanning timescales long enough to obtain correct simulation averages. Very high temperatures, of the order of 1000 K or more, are explored in order to get enough statistics, so the intermediate states between ZIF-4 and ZIF_liq are ignored. The authors reach a good agreement between the structural characteristics of their model and the experimental PDFs; they find that under-coordinated Zn centres act as "seeds" for the melting process to occur; and they propose a molecular mechanism for the melting involving first-neighbour exchanges. Despite this wealth of information, many questions remain unanswered, including: what is the mechanism of the amorphization of ZIF-4? What is the mechanism of the melting of ZIF-zni? What happens beyond first-neighbour distances in these transformations? In order to answer these questions we need larger simulation cells; thus, it is necessary to rely on a computational model where electronic degrees of freedom are averaged following the Born-Oppenheimer approximation. However, including reactivity, in particular metal–ligand reactivity, is essential to model these kinds of ordered–disordered phase transitions. A popular reactive force field, ReaxFF, has been tested for this task, but the modelling community has not yet reached a consensus on whether this force field is adequate to model amorphous ZIFs.<cit.> In this contribution, we rely on nb-ZIF-FF (non-bonded ZIF-FF) <cit.> to study the ordered–disordered transitions 1 and 3 cited above. This force field, originally developed by Balestra and Semino with the purpose of modelling the self-assembly of ZIFs, features metal–ligand reactivity by means of Morse potentials to treat coordination bonds. We found that it captures structural, mechanical and thermodynamic properties of the amorphous and liquid phases as well as of the crystalline ones. We circumvent the inherent difficulty of differentiating multiple ordered and disordered phases through the training of a neural network sorting algorithm that identifies the correct phase at the local, atomic level with high accuracy. This data science augmented molecular dynamics approach allows us to reach molecular detail in the study of the mechanisms of these ordered–disordered transformations and to answer the questions raised above. We observe the formation of a phase recognised as liquid-like prior to the formation of ZIF_a upon heating ZIF-4 to a temperature T>T_1, which can be associated with experimental observations. <cit.> For ZIF-zni, we found the opposite: the density decreases monotonically, going from an amorphous-classified state into the liquid. Both processes occur in an anisotropic fashion, which can be correlated to the local density and the lability of coordination bonds within the materials. This work is structured as follows. Sec.
<ref> details the systems of study, the algorithm applied to generate disordered structures, the development of our neural network sorting algorithm and the molecular dynamics simulation details. We then present and discuss our results in Sec. <ref> and summarise them in Sec. <ref>. § METHODS §.§ Systems of Study We focus on two crystalline MOFs within the Zn(C3H3N2)2 series of polymorphs: <cit.> ZIF-4 and ZIF-zni, a porous and a dense phase, respectively. We analyse the phase transitions that transform ZIF-4 into ZIF_a and ZIF-zni into a melt, ZIF_liq (transitions 1 and 3, see Sec. <ref>). <cit.> These two are ordered–disordered transitions, so we can adequately sample them by just increasing the temperature (to increase the kinetic energy) of the materials. Studying transition 2 is a much more complicated problem, since it would involve using more sophisticated enhanced sampling simulation methods to correctly sample the collection of rare events that lead from a disordered to an ordered phase. Fig. <ref> illustrates the structure of the four studied systems. ZIF-4 and ZIF-zni initial structures were obtained from the Cambridge Structural Database.<cit.> ZIF_a configurations were generated by simulated annealing of these two crystalline phases via the procedure that we detail below, in Sec. <ref>. ZIF_liq is obtained by heating ZIF-zni or ZIF-4 to T = 700 K. All materials were modelled at the atomistic level through the nb-ZIF-FF force field.<cit.> This force field partially includes reactivity by allowing the metal–ligand bond to break and form throughout classical molecular dynamics simulations. This is the only kind of reactivity needed to model the amorphization of ZIF-4, since the integrity of the ligands along the process was confirmed experimentally.<cit.> The metal–ligand bond is modelled through a Morse potential, which yields zero pairwise forces at long metal–ligand distances. The correct tetrahedral local symmetry around Zn^2+ cations is enforced by distributing part of the cation's charge onto the four vertices of a flexible tetrahedron that holds it in its centre (cationic dummy atom model). This force field has been carefully validated for reproducing structural and mechanical properties of a series of ZIFs, including ZIF-4 and ZIF-zni. For more details concerning the nb-ZIF-FF force field, the reader is referred to Ref. Balestra2022. §.§ Generation of the Amorphous Structures To generate ZIF_a, we started either from ZIF-4 or from ZIF-zni. The crystalline structures were subjected to a temperature ramp through which the material is heated from 300 to 900 K; a short constant T=900 K run is then performed, and finally a second temperature ramp brings the system back to 300 K. We tested ramps of 1 K ps^-1 and 4.8 K ps^-1, which comply with the less-than-5 K ps^-1 rule proposed by Castel and Coudert to avoid damaging the full coordination of the Zn^2+ centres in the crystalline form.<cit.> We note that these ramps are orders of magnitude higher than those experimentally achievable (on the order of 5-100 K min^-1).<cit.> We compared radial distribution functions (RDFs) of the amorphous materials generated with these two temperature ramp values and found no significant differences.
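For concreteness, such an annealing protocol can be sketched through the LAMMPS Python interface. This is a minimal sketch, assuming real units, the 0.5 fs timestep and the thermostat/barostat dampings quoted later in the Molecular Dynamics Simulations subsection (100 and 1000 time steps, i.e. 50 and 500 fs), and a hypothetical input file nb_zif_ff_setup.in that defines the box, atoms and force-field terms.

from lammps import lammps  # LAMMPS Python wrapper

ANNEAL = """
timestep 0.5
# heating ramp: 300 -> 900 K in 600 ps (1 K/ps), NPT at about 1 bar
fix ramp_up all npt temp 300.0 900.0 50.0 iso 1.0 1.0 500.0
run 1200000
unfix ramp_up
# constant-temperature segment at 900 K (here 250 ps)
fix hold all npt temp 900.0 900.0 50.0 iso 1.0 1.0 500.0
run 500000
unfix hold
# cooling ramp back to 300 K
fix ramp_down all npt temp 900.0 300.0 50.0 iso 1.0 1.0 500.0
run 1200000
"""

lmp = lammps()
lmp.file("nb_zif_ff_setup.in")   # hypothetical: units real, box, atoms, nb-ZIF-FF
lmp.commands_string(ANNEAL)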
Constraints were added to the 900 K constant-temperature simulation so that the final materials have an approximately cubic shape. To sample the diversity of glass configurations, we generated three independent runs starting from ZIF-4 and three starting from ZIF-zni, which differ in the total time span of the 900 K constant-temperature section: 250, 500 or 750 ps. The calculated properties of the glass correspond to an average over the six independent structures obtained after the annealing. During the heating ramp, the volume of the system suffers an abrupt change that can be distinguished from the expected fluctuations, signalling the phase transition. Even though at the end of the simulation experiment the temperature is brought back to its initial value of 300 K, the final volume differs from the initial one, thus demonstrating that the phase transition has taken place and has not reverted. ZIF_a configurations generated from ZIF-4 or from ZIF-zni are indistinguishable, and they describe quite well the experimentally characterised amorphous material in terms of density, typical neighbour distances and angles, and bulk modulus, as shown in Tab. <ref>. The bulk modulus K was calculated from the volume fluctuations in the NPT ensemble: ⟨(Δ V)^2⟩ = k_B T ⟨ V ⟩/K. Besides correctly capturing average distance and angle values, nb-ZIF-FF also passes an even more difficult test: it yields correct distributions for these properties. To the best of our knowledge, the angle distributions have to date only been correctly predicted by ab initio molecular dynamics; this is the first time that an atomistic force field incorporating reactivity accomplishes this challenging task, particularly for the N–Zn–N angle distribution, which is known to slightly widen in the amorphous phase. <cit.> Indeed, as pointed out by Castel and Coudert, <cit.> the most widely used reactive force field, ReaxFF <cit.>, fails to reproduce the N–Zn–N angle distribution. Fig. S1 in the Supplementary Material shows that nb-ZIF-FF is capable of predicting the subtle widening of the angle distribution, which is due to the presence of a small proportion of tri-coordinated Zn cations in ZIF_a. The subtle undercoordination of the Zn^2+ centres in the amorphous material can be appreciated in Fig. <ref>. The left panel shows the Zn–N radial distribution function for the four phases studied. Even though all structures are quite similar and difficult to distinguish experimentally, <cit.> we can already see some subtle differences in their RDFs. Indeed, the peaks for the liquid are broader than those of the amorphous solid and those are, in turn, broader than for the crystals. In the inset of the left panel, we can see that the RDF does not go to zero between the first two peaks for either ZIF_a or ZIF_liq, which indicates nitrogen exchanges in the first coordination sphere of the zinc cations and is in agreement with previous findings. <cit.> The right panel of Fig. <ref> shows the integral of the Zn–N RDF for the four phases, which gives us information on the number of N that can be found on average around a Zn^2+ centre as a function of the distance. The curves for ZIF-4 and ZIF-zni reach the value of four at 2.3 Å, indicating the expected perfect tetrahedral coordination in their first-neighbour sphere. On the other hand, the curve corresponding to ZIF_a shows a loss in the average Zn–N coordination at the same distance, which is even more pronounced in the case of ZIF_liq.
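A minimal numpy sketch of the two estimators used here, the running Zn–N coordination number obtained by integrating the RDF and the bulk modulus from the volume-fluctuation formula above, assuming g(r) on a radial grid and an NPT volume time series are available from the simulation output:

import numpy as np

def running_coordination(r, g_r, rho_n):
    # n(r) = 4 pi rho_N * integral_0^r g(r') r'^2 dr'  (trapezoidal rule)
    integrand = 4.0 * np.pi * rho_n * g_r * r ** 2
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    return np.concatenate(([0.0], np.cumsum(steps)))

def bulk_modulus(volumes_m3, temperature_k):
    # K = kB T <V> / <(dV)^2> from NPT volume fluctuations; result in Pa
    kb = 1.380649e-23  # J/K
    v = np.asarray(volumes_m3)
    return kb * temperature_k * v.mean() / v.var()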
This loss of coordination indicates that both disordered phases typically contain a small proportion of tri-coordinated Zn^2+ centres, as a result of the increased flexibility of ligand motions, as found experimentally. <cit.> The excellent performance of the combination of nb-ZIF-FF with our simulated annealing method in yielding ZIF_a configurations in good agreement with several experimentally measured properties makes us confident in our model for this glassy material. §.§ Machine Learning Methods To succeed in our goal of studying the mechanism of the ZIF-4 and ZIF-zni ordered–disordered phase transitions, we first need to define a metric that allows us to distinguish between these crystalline phases and their disordered counterparts at the local, atomic environment level. Such a local metric would allow us to track the evolution of the amorphization and melting processes throughout a simulation, to observe their progression from the deformation of the local structure around a single Zn^2+ centre (i.e. distortions in the tetrahedral spatial arrangement of the first ligand neighbours) to the generation of larger disordered domains within the crystal, and to follow how these domains propagate or merge until they take over and the phase transformation reaches its end. Amorphous materials have traditionally been characterised via an extensive exploration of structural properties, including RDFs, structure factors and pore size distributions. <cit.> These ways of describing amorphous materials, albeit useful, rely on preconceived ideas of the underlying chemistry of these materials. Machine learning methods based on agnostic descriptors have been developed to automatically identify, differentiate and classify long-range ordered materials in an unbiased way, free of preconceptions. The task becomes more daunting when the collection of objects to be classified includes amorphous materials, due to the inherent lack of translational symmetry, which makes both the choice of appropriate descriptors and the sampling of structural diversity harder. A number of methods have been proposed to deal with this difficult task. <cit.> Here, we test the performance of Behler-Parrinello Symmetry Functions (BPSF) <cit.> as unbiased generic descriptors that act at the atomic environment level, to inform the degree of ZIF_a-ness, ZIF-4-ness, ZIF-zni-ness and ZIF_liq-ness of a given Zn^2+-centred environment. These functions fulfil the basic criteria for being well-behaved chemical descriptors, that is, they yield the same value for two configurations that are related in that one of them is the result of a translation, rotation or same-element atom permutation operation applied to the other. These atom-centred many-body functions can be classified into two types: radial and angular. The former are given by a sum over two-body terms and are related to the connectivity of the central atom, while in the latter three-body terms are considered. We tested twelve symmetry functions of the type: G_i^rad = ∑_j ≠ i e^-η(R_ij-R_s)^2 · f_c(R_ij) and G_i^ang = 2^1-ζ ∑_j,k ≠ i (1-λcosθ_ijk)^ζ · e^-η(R_ij^2+R_ik^2+R_jk^2) · f_c(R_ij) · f_c(R_ik) · f_c(R_jk), where f_c is a cutoff function that decays to zero at a distance R_c. Each of the n_Zn Zn^2+ cations in a given configuration will thus be characterised through 12 symmetry functions, each of them centred on the tagged Zn^2+ cation, 4 of them radial and 8 angular. Only Zn–Zn correlations were considered.
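A minimal sketch of these two functions for a single Zn-centred environment is given below, assuming the standard cosine cutoff f_c(r) = 0.5[cos(π r/R_c) + 1] for r < R_c (the text only states that f_c decays to zero at R_c, so this specific form, like any parameter values, is our assumption; the actual η, R_s, ζ, λ and R_c values are listed in the Supplementary Material of the paper).

import numpy as np

def f_cut(r, r_c):
    # assumed cosine cutoff: 0.5*(cos(pi r / r_c) + 1) for r < r_c, zero beyond
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def g_radial(d_ij, eta, r_s, r_c):
    # radial BPSF: sum over distances d_ij from the central Zn i to its Zn neighbours j
    return np.sum(np.exp(-eta * (d_ij - r_s) ** 2) * f_cut(d_ij, r_c))

def g_angular(d_ij, d_ik, d_jk, cos_theta_ijk, eta, zeta, lam, r_c):
    # angular BPSF: sum over Zn neighbour pairs (j, k) of the central Zn i
    term = ((1.0 - lam * cos_theta_ijk) ** zeta
            * np.exp(-eta * (d_ij ** 2 + d_ik ** 2 + d_jk ** 2))
            * f_cut(d_ij, r_c) * f_cut(d_ik, r_c) * f_cut(d_jk, r_c))
    return 2.0 ** (1.0 - zeta) * np.sum(term)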
Further details, including the parameters that were considered for defining the symmetry functions, can be found in the Supplementary Material. Before analysing our trajectories through the lens of our chosen descriptors, we sought to verify their aptitude in recognising the four systems studied. To this end, we built a database comprising 30000 configurations in total, out of which 3000, 3000, 12000 and 12000 correspond to ZIF-4, ZIF-zni, ZIF_a and ZIF_liq respectively, each of them comprising 64 atomic Zn^2+-centred environments. The amount of non-crystalline structures is higher to improve the classification, since the differentiation between these two groups will be the most challenging task for the algorithm. The configurations correspond to microstates obtained from MD simulations at temperatures spanning the whole stability range in the case of crystalline structures, while for ZIF_a the sampling was made at temperatures between 300 K and 500 K. As mentioned above, liquid state configurations were sampled at 700 K. We justify this criterion by the change of slope at ∼ 600 K observed in the curve of mean potential energy vs. temperature (see Fig. S2), associated with the jump in Cp that occurs at the glass transition temperature (ZIF_a → ZIF_liq). We then computed the symmetry functions described by eqs. <ref> and <ref> for each of their Zn^2+-centred environments via the RuNNer code <cit.> and plotted their value distributions for all structures together as box plots in R; <cit.> the resulting graph is shown in Fig. S3. We can see from the plot that none of the symmetry functions alone suffices to distinguish between the four phases. We thus used them as features to feed into a neural network that was trained to output the probabilities that the Zn^2+-centred environment belongs to a ZIF-4, ZIF_a, ZIF_liq or ZIF-zni phase (four output values). In order to train the neural network, we divided our database, composed of 1920000 Zn^2+-centred environments (64×30000 structures), into training and test sets in an 80:20 proportion. The neural network architecture was composed of an input layer of 12 nodes, corresponding to the symmetry functions, a single hidden layer comprising 6 nodes, and an output layer of 4 nodes, each one representing the probability of an environment being classified as one of the reference structures (see Fig. <ref>). Further details can be found in the Supplementary Material. If we assign the environment to the class that has the highest associated probability, to obtain a clear-cut sorting, our neural network yields 90.3% accuracy in the classification exercise for the test set. In table <ref> we show the confusion matrix obtained for the test set (which is composed of structures that were not used in the training process). This table indicates the fractions of environments of each type (rows) classified as each of the possible structures (columns). The diagonal elements correspond to the fraction of correct classifications. As expected, the highest source of error (∼10%) came from the misclassification of ZIF_a and ZIF_liq structures, <cit.> while the classification of crystalline structures was satisfactory in 95% of the cases. Indeed, by simple visual inspection both disordered phases seem to be indistinguishable (see Fig. <ref>). A hint that the classification was possible was given by the fact that the Zn–N radial distribution functions for ZIF_a and ZIF_liq differ, as shown above in Fig. <ref>a.
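The 12–6–4 architecture described above is small enough to be written down explicitly. The following PyTorch sketch is one possible realisation; the hidden-layer activation, the optimiser and the learning rate are not specified in the text and are therefore our assumptions.

import torch
import torch.nn as nn

class PhaseClassifier(nn.Module):
    # 12 symmetry functions -> 6 hidden nodes -> 4 phase logits
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(12, 6), nn.Tanh(), nn.Linear(6, 4))

    def forward(self, x):
        return self.net(x)  # logits; nn.CrossEntropyLoss applies log-softmax itself

model = PhaseClassifier()
loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# one training step on a batch of scaled features x of shape (N, 12), labels y in {0..3}:
#   loss = loss_fn(model(x), y); optimiser.zero_grad(); loss.backward(); optimiser.step()
# class probabilities for the four phases: torch.softmax(model(x), dim=-1)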
It is, however, quite remarkable that the neural network manages to distinguish the two disordered phases with such low error despite their important structural similarity. The accuracy of the classification is even more impressive if we take into account that it was based on structural information only, as it is well known that the main difference between glasses and their parent crystalline structures lies in their dynamics rather than in their structures. Incorporating dynamics information into the features for a neural network distinguishing amorphous materials would most likely further improve the accuracy. Fig. <ref> shows an example of how our neural network can be applied to follow the time evolution of an amorphization process. In the left part, we can see a material that exhibits coexisting ordered and disordered domains. By applying our neural network, we can distinguish between ZIF-4-, ZIF-zni-, ZIF_a- and ZIF_liq-like environments. Finally, to check that the classification of the amorphous phases is meaningful, we plotted the fraction of disordered-like centres that are classified as being ZIF_liq-like instead of ZIF_a-like as a function of temperature (see Fig. S4). We can see from this plot that there is a clear temperature dependence of the labelling, which is consistent with what is expected for the thermodynamic stability of these two distinct disordered states (see Fig. S2). §.§ Molecular Dynamics Simulations Classical molecular dynamics simulations were carried out through the LAMMPS open source simulation package <cit.> with nb-ZIF-FF as a force field. The integration of the equations of motion was performed in the NPT ensemble, with Nose-Hoover thermostats and barostats. The damping parameters were set to 100 time steps for the thermostat and 1000 time steps for the barostat. Unless otherwise specified, the barostat alters the box dimensions in an isotropic way. In all cases the pressure was set to 1 bar. The time step was set to 0.5 fs except for simulations with temperatures over 700 K, in which a time step of 0.25 fs was used. For generating configurations for the neural network training set, systems of 64 Zn atoms were used, which corresponds to a 2x2x1 supercell of ZIF-4 or a ZIF-zni unit cell (1600 particles in total, including both crystallographic and dummy atoms), while for production runs the number of Zn atoms was 1024, which corresponds to a 4x4x4 and a 2x2x4 supercell for ZIF-4 and ZIF-zni, respectively. § RESULTS AND DISCUSSION §.§ Thermodynamic Analyses We start our study of ZIF-4 → ZIF_a (amorphization, transition 1) and ZIF-zni → ZIF_liq (melting, transition 3) by determining their equilibrium temperatures. We could be tempted to extract the transition temperature from the simulations that we performed to generate the amorphous structures (see Sec. <ref>) as the temperature that matches the drastic volume change of the system, which signals that the amorphization process has taken place. However, this would be misleading: the temperature ramp is so fast that the system cannot reach thermal equilibrium at each temperature. The false transition temperature we see when we apply a temperature ramp to heat the system has to be higher than the thermodynamic transition temperature. In order to find the equilibrium temperature for transitions 1 and 3, we need to find conditions in which both phases are equally stable.
<cit.> We first generated a simulation box where ZIF_a and ZIF-4 coexist, each occupying approximately the same volume, by running a short simulation at the temperature at which we observed the phase transition in the simulated annealing (T=550 K). Subsequently, we started a series of molecular dynamics simulations in the NPT ensemble, each with a different target constant temperature. We let the system evolve, and then we assessed whether the number of amorphous sites had increased, decreased or remained stable. The temperature for which the two phases coexist without one of them gaining ground over the other is the equilibrium temperature. We followed the same procedure to determine the ZIF-zni melting temperature. Fig. S5 shows the time evolution of the number of liquid-like Zn^2+ centres for ZIF-zni at different temperatures. We can see that the number of liquid-like centres fluctuates around a constant value at T = 625 K. Through this procedure we found T_amorphization=T_1=420 K and T_melting=T_3=625 K. The experimental counterparts are around 520 K and 860 K, respectively. <cit.> We can also obtain the amorphization and melting entropies (Δ S_1 and Δ S_3) from these simulations. At the temperature at which the phases coexist, their chemical potentials are equal and Δ G =0, so the entropy can be readily obtained from the enthalpy by dividing it by T. The enthalpy is given by the internal energy coming from the force field plus the pressure-volume term. In addition, we can obtain free energy differences at temperature T by integration of the Gibbs-Helmholtz equation: Δ G(T) = -T ∫_T_eq^T ⟨Δ H⟩(T')/T'^2 dT', where T_eq is the equilibrium temperature obtained before. This allows us to compute Δ G of both reactions at 298 K. Then, since we have a common reference state, i.e. the glass, we can extract information about the relative stability of ZIF-4 with respect to ZIF-zni. These results, along with reference values, are presented in Tab. <ref>. Our computed enthalpies are in good agreement with the reference experimental and ab initio values. From the free energy difference we obtain the correct thermodynamic stability trend: ZIF-4 is in fact metastable at ambient temperature and pressure. Entropies can be rationalized in terms of structural properties. Indeed, the disordered phases have higher entropies than the ordered ones, and if we compare the entropies associated with the crystalline phases, we can see that it is higher for ZIF-4 than for ZIF-zni. This is a consequence of the lower density (higher porosity) of ZIF-4, which gives it the possibility of adopting many more equivalent microstates than its high-density counterpart. We note that our equilibrium temperatures deviate from the experimental values. Even though we cannot predict the right values, the tendencies are reproduced. Experimental values should also be considered carefully, since it has been proven that the amorphization temperature is very sensitive to the temperature ramp used to trigger it. <cit.> §.§ Mechanisms of the phase transitions We continue our study by simulating the amorphization and melting processes in the NPT ensemble at P=1 bar, at T=550 K for ZIF-4 and T=700 K for ZIF-zni. These temperatures are higher than the respective transition temperatures, to guarantee thermodynamic feasibility and reasonably fast kinetics. At the end of these simulations, we reach the corresponding disordered phase (glass from ZIF-4 and liquid from ZIF-zni).
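A numerical sketch of the Gibbs-Helmholtz integration above, assuming ⟨ΔH⟩(T) has been tabulated on a temperature grid whose first point is T_eq (where ΔG = 0) and using the trapezoidal rule:

import numpy as np

def delta_g(t_grid, dh_grid):
    # dG(T) = -T * integral_{T_eq}^{T} <dH>(T') / T'^2 dT', with t_grid[0] = T_eq
    integrand = dh_grid / t_grid ** 2
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t_grid)
    integral = np.concatenate(([0.0], np.cumsum(steps)))
    return -t_grid * integral

# at coexistence dG(T_eq) = 0, so the entropy follows as dS = dH(T_eq) / T_eq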
We also ran the same amorphization and melting simulations for ZIF-4 and ZIF-zni but with a box consisting of 3x3x3 unit cells, to verify whether the size of the simulation box was large enough to avoid unphysical effects that could happen if the process were favoured by an early artificial percolation of the new phase in a small simulation box. The average times needed for the amorphization to occur in the smaller systems are very close to those we obtained for the larger ones, so we can confirm that our boxes are large enough to adequately treat the two processes. We ran four independent systems for each transition in order to take into account their stochastic character; the results presented below come from an average over these four independent simulation experiments. A series of snapshots of the system throughout the simulation is shown in Fig. <ref>. Panel A shows the initial configuration, which corresponds to a ZIF-4 crystal. As expected, and confirming the validity of our neural network, the amount of environments classified as ZIF-zni-like remains negligible during the whole simulation. We first observe the generation of a liquid-like phase (coloured in orange, panel B). Subsequently, the amorphous ZIF_a phase is formed from within this lower density phase (coloured in green, panel C) and gradually expands until it takes over the whole system (panel D, final configuration). The amorphization process thus consists of two steps: a first step in which the Zn–N connectivity slightly drops to yield a more disordered, low density state, followed by a second stage that gives rise to ZIF_a. The intermediate liquid-like phase that is first formed could be associated with the experimentally identified low density non-crystalline phase. <cit.> Indeed, this phase is less dense than ZIF_a by about 10%, which is comparable to the experimentally obtained density difference between the low density and high density amorphous phases. <cit.> The liquid phase, in turn, is denser than the porous ZIF-4 crystal. Note that we did not explicitly include the experimentally found low density phase in the training of the neural network, but the algorithm identified it as a liquid-like phase, probably due to the correlation between local density and the symmetry function values. Furthermore, since the classification is based only on structural information, the fact that the intermediate phase is assigned to be a liquid does not imply that the dynamics are faster than those associated with the final amorphous phase. For ZIF-zni (Fig. <ref>, panels E-H), the situation is inverted: from the crystal (panel E), the connectivity starts gradually diminishing to yield an amorphous-like phase in a first stage (panel F), which then continues to decrease its density to reach the final ZIF_liq state (panel G). The liquid phase grows inside the amorphous phase until it takes over the whole simulation box. Both transitions are thus two-step processes, and what they have in common is that the intermediate phase formed in the first step has a density between those of the initial and final phases (ρ_ZIF-4 < ρ_ZIF_liq < ρ_ZIF_a < ρ_ZIF-zni). In all simulation experiments, we observed that the disordered phases grow as a single cluster that expands and reorganises in terms of the Zn–N connectivity, instead of forming a series of disordered micro-domains that subsequently aggregate.
The formation of the higher density amorphous phase does not seem to occur at the interface between crystalline and liquid environments: on the contrary, it is generated in the bulk liquid and propagates until practically all environments are ZIF_a-like. For the melting of ZIF-zni, we also observe that the final phase is formed within the bulk of the intermediate disordered phase. The time evolution of the fraction of Zn^2+ centres X_Zn that correspond to ZIF-4-, amorphous- and liquid-like states is plotted in Fig. <ref> and in Fig. S6 for transitions 1 and 3, respectively. From these plots we can first confirm the two above-mentioned stages for both processes. We can also see in Fig. <ref> that once the amorphous cluster is formed, it grows until it takes over the whole simulation box. To unveil the microscopic processes that trigger the initial steps of the amorphization, we performed a simulation starting from ZIF-4 at a slightly lower temperature than that described above (T=540 K), in which no amorphization is observed during the whole simulation time span, due to the fact that this is a slow activated process. Indeed, the classification algorithm only identifies a small fraction (∼ 2%) of non-crystalline Zn^2+ centres, which corresponds to short-lived random fluctuations of the network. The presence of tricoordinated Zn^2+ is observed in both cases, and these represent ∼ 2% of the metallic cations. These unstable defects are generated by breaking Zn–N bonds, and have an average lifetime of ∼3 ps, meaning that the process is reversible. This indicates that bond-breaking events leading to undercoordinated Zn^2+ centres are not sufficient to trigger the amorphization process. By monitoring the connectivity of the Zn–N network over time we can observe all possible pathways to restore the tetrahedral coordination of a defect site. Three different mechanisms can be identified: (i) the broken Zn–N bond is restored, leading to exactly the same connectivity as before the defect formation; (ii) the tricoordinated Zn^2+ binds to the other nitrogen that belongs to the same ligand moiety, i.e. a rotation of the imidazolate restores the original connectivity; or (iii) a new bond is formed between the Zn^2+ and a different ligand molecule, leading to a change in the connectivity of the system. This latter mechanism is illustrated in Fig. <ref>. The first two mechanisms were observed in both the T = 540 K and T = 550 K simulations, but the last one only took place in significant amounts in the high temperature system, which exhibits an amorphization. In the 540 K simulation, just a few (13) events of this kind were registered along the simulation, and every one of them finally reverted to the original connectivity after a short period of time. These may be considered as failed initial attempts of the new phase to nucleate. This scenario suggests that the first steps of the amorphization are related to the formation of Zn–N bonds that change the original connectivity of the network. This kind of event is much less frequent than simple bond breaking or the formation of tricoordinated sites. Processes of type (iii) typically occur for the first time in a simulation at about t=100 ps, while type (ii) rotations take place every 1.5 ps on average and simple bond-breaking events are observed every 0.1 ps. These results are averages over four independent runs at T=550 K. In Fig. <ref> we include a dotted line indicating the time of occurrence of a type (iii) event.
We can see that the process is accompanied by a sudden jump in the number of liquid-like environments recognised by the neural network, thus triggering the amorphization transition. The same analysis was done for the melting of ZIF-zni, by comparing simulations performed at T=700 K and at T=670 K, in which no melting was observed, leading to a similar classification of timescales. In this case, since we are at a higher temperature, the characteristic times are reduced to ∼50 ps for the first type (iii) event to occur, while type (ii) rotations take place every 0.5 ps on average and bond breakings occur every 0.02 ps. We complement the microscopic description of the amorphization process by studying the evolution of the number of 3-membered rings formed by neighbouring Zn^2+ centres (here we count only the Zn^2+; in fact the rings are formed by alternating metals and ligands, like the (Si–O)_n rings in zeolites). These kinds of structures are known to be present in the glass state but not in the crystal, which exhibits rings of 4, 6 and 8 members.<cit.> We can see in Fig. <ref> that the number of 3-membered rings in the early stage correlates with the growth of the intermediate low density phase, thus suggesting that the formation of this kind of structure could be an important step in the amorphization process. The correspondence seems to be amplified in the stage of formation of the final glass structure. This indicates that the final configurations are richer in 3-membered ring patterns, while the intermediate phase preserves more of the topological features of ZIF-4. To gain further microscopic insight into the propagation of the disordered phases during these ordered–disordered transitions, we plotted the average disordered (amorphous- and liquid-like) cluster size projected onto the three Cartesian axes as a function of the number of disordered Zn^2+ centres N_Zn in Fig. <ref>. A similar plot for transition 3 can be found in Fig. S7. We can see from the curves in Fig. <ref> that the growth of the disordered phase is anisotropic: it occurs faster in the x direction than in the other two. This behaviour is reproduced in all the independent simulations we ran and can be observed both for the first disordered liquid-like low-density phase and for the final ZIF_a amorphous phase. In the case of transition 3, ZIF-zni melting, no preferential direction can be clearly distinguished over the whole interval of cluster sizes. Anisotropy in the formation of amorphous domains has also been experimentally hypothesised for ZIF-4 systems. <cit.> Fig. <ref>b depicts local density histograms for ZIF-4 projected onto the three Cartesian axes. We can see that the local density stays more or less constant when projected onto the x axis, thus facilitating the connectivity exchange events that drive amorphization, as discussed above. Indeed, the porosity along this direction is more connected than in the other two, which show jumps in the local density that are associated with cage changes. We complement this analysis by adding information from the lower temperature simulations performed at T=540 K and T=670 K, which exhibit bond breaking and formation while preserving the original crystalline structures of ZIF-4 and ZIF-zni, respectively. We associated each bond-breaking event with a ligand position in the unit cell in order to check whether some ligand sites are more labile than others.
We found that in the case of ZIF-4, the 32 ligand sites of the unit cell are divided as follows: 8 of them present a high stability, breaking at a rate of approximately once every 4 ps; 8 of them present the highest tendency to break (once every 2.5 ps), while the others exhibit an intermediate reactivity. In the inset of the top panel of Fig. <ref> we show a plot of the XY plane of the unit cell with the ligands coloured blue for the first kind, red for the second, and white for the last one. We can observe that bonds oriented in the X direction are the most labile, while the ones pointing in Y or Z are more stable, in agreement with our previous results. The origin of the difference in stability may be found in the participation of each ligand in different rings. The most stable bonds are those that are part of a ring containing four Zn centres, while the most labile are only part of 6-membered rings. In the case of ZIF-zni at T= 670 K, the classification of the 128 ligand positions results in 32 members that are more stable than the others, each breaking their coordination bonds every 4.5 ps on average. The others present mean breaking times spanning the interval between 1.9 and 2.7 ps. In this case, members of four-Zn-membered rings present an intermediate stability. The most and least labile bonds have important projections in the z direction, as can be observed in the colouring of the inset of Fig. S7. § CONCLUSIONS We have studied the molecular mechanisms and thermodynamic properties of a series of phase transformations that link two crystalline polymorph ZIFs (ZIF-4 and ZIF-zni) with two disordered phases, a lower-density, liquid-like one (ZIF_liq) and a higher-density, amorphous solid (ZIF_a). To this end, we have modelled these states via a force field that incorporates reactivity in the coordination bonds. We validated our model by successfully comparing structural and mechanical properties computed from it with reference experimental and ab initio data. We have augmented our molecular dynamics method with a phase sorting algorithm based on a neural network that can assign Zn^2+-centred environments to their parent state with high accuracy. The accuracy is remarkable considering that the algorithm has only been fed structural features as inputs and that it can even distinguish between disordered phases. Our molecular dynamics simulations allowed us to determine thermodynamic state functions of the ZIF-4 → ZIF_a and ZIF-zni → ZIF_liq transitions, as well as for ZIF_a → ZIF_liq, via thermodynamic integration. Especially relevant are those state functions related to the transition between the two crystalline polymorphs, which are difficult to measure experimentally. Our results are in good agreement with reference values, and those that were not previously measured can be rationalized in terms of the disorder and porosity of the different phases. We have furthermore followed the amorphization of ZIF-4 and the melting of ZIF-zni and obtained mechanistic details at the microscopic level. We identified two stages in both processes. Density changes monotonically in both cases, as does the connectivity. These processes occur via changes in the identity of the first-neighbour ligand moieties that are coordinatively bonded to the Zn^2+ centres. The kinetics of the amorphization of ZIF-4 are faster in a preferential direction.
We have rationalised this through the analysis of local densities projected onto the three Cartesian axes and the study of the lability of the coordination bonds. This work sheds light on the mechanism of important phase transformations for materials that are excellent candidates for solving pressing environmental and industrial issues. More specifically, our results help better understand the process of generation of amorphous MOFs, which have been suggested to be key in bridging the gap between the research laboratory and real-world applications of these promising and versatile porous materials. We also hope that the methods that we developed and deployed will be useful to other researchers in this and other fields to study the amorphization of other families of materials, or even more broadly to study other kinds of reactive processes. This work was funded by the European Union ERC Starting grant MAGNIFY, grant number 101042514. This work was granted access to the HPC resources of CINES under the allocation A0130911989 made by GENCI and to the HPC resources of the MeSU platform at Sorbonne-Université. § DATA AVAILABILITY STATEMENT The data that support the findings of this study are available within the article and its supplementary material. | http://arxiv.org/abs/2311.16351v1 | {
"authors": [
"Emilio Mendez",
"Rocio Semino"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231127222950",
"title": "Microscopic Mechanism of the Thermal Amorphization of ZIF-4 and Melting of ZIF-zni Revealed via Molecular Dynamics and Machine Learning Techniques"
} |
Quantum ratchet with Lindblad rate equations Jesús Casado-Pascual January 14, 2024 ============================================Categorization is an important topic both for biological and artificial neural networks. Here, we take an information theoretic approach to assess the efficiency of the representations induced by category learning. We show that one can decompose the relevant Bayesian cost into two components, one for the coding part and one for the decoding part. Minimizing the coding cost implies maximizing the mutual information between the set of categories and the neural activities. We analytically show that this mutual information can be written as the sum of two terms that can be interpreted as (i) finding an appropriate representation space, and, (ii) building a representation with the appropriate metrics, based on the neural Fisher information on this space. One main consequence is that category learning induces an expansion of neural space near decision boundaries. Finally, we provide numerical illustrations that show how the Fisher information of the coding neural population aligns with the boundaries between categories. § INTRODUCTION The study of categorization is an important field of research both for biological and artificial neural networks. Here, we take an information theoretic approach to study the optimal properties of the neural representations induced by category learning. We extend the formalism introduced in <cit.> to the case of multilayer networks, allowing us to cast the approach and results within the machine learning framework. We consider multilayer feedforward networks whose goal is to learn a categorization task. We first introduce the mean Bayes risk adapted to a categorization task. We show that the minimization of this cost amounts to dealing with two issues: optimizing the decision stage in order to provide the best possible estimator of the category given the neural activities; and optimizing the stimulus encoding (through the multilayer processing) by maximizing the mutual information between categories and neural code. We then characterize the mutual information between the discrete categories and the neural activities in a coding layer, in the limit of a high signal-to-noise ratio. This limit allows one to reveal the neural metrics relevant for the categorization task. It shows that maximizing the mutual information leads to finding the feature space most relevant for the classification (and amenable to easy decoding), and to probing this space with a particular metric. The latter depends on a ratio between two Fisher information quantities: one that is specific to the neural coding, and one that quantifies the classification uncertainty. As a result of the optimization, the space will be expanded near a class boundary, and contracted far from a boundary. This implies a better ability to discriminate between nearby inputs in the vicinity of a class boundary than far from such a boundary. This effect, well studied in cognitive science, is called categorical perception <cit.>. Finally, we provide numerical experiments that illustrate how learning modifies the metrics defined by the neural activities coding for categories. In an Appendix we briefly discuss the links and differences with the information bottleneck approach <cit.>.§ REVEALING THE METRIC OF INTERNAL REPRESENTATIONSCost function: Decoupling into coding and decoding parts. We assume we are given a discrete set of classes/categories, μ=1,…,M.
Each category is characterized by a density distribution P(𝐬|μ) over the input (sensory) space. A sensory input 𝐬∈ℝ^N_0 elicits a cascade of neural responses (multilayer feedforward processing), 𝐫^(1)∈ℝ^N_1, …, 𝐫^(L)∈ℝ^N_L. Finally, an estimate μ̂ of μ is extracted from the observation of the neural activity of the last coding layer 𝐫, possibly through a decoding cascade of processing. For a given stimulus 𝐬 and a neural activity 𝐫, the relevant Bayesian quality criterion is given by the divergence 𝒞(𝐬,𝐫) between the true probabilities {P(μ|𝐬), μ=1,...,M} and the estimator {g_μ(𝐫), μ=1,...,M}, defined as the Kullback-Leibler divergence (or relative entropy) <cit.>, 𝒞(𝐬,𝐫) ≡∑_μ=1^M P(μ|𝐬) ln P(μ|𝐬)/g_μ(𝐫). Averaging over 𝐫 given 𝐬, and then over 𝐬, one can show that the resulting mean Bayesian cost 𝒞 induced by the estimation can be written as: 𝒞 = 𝒞_coding + 𝒞_decoding with 𝒞_coding = I[μ,𝐬] - I[μ,𝐫], where I[X,Y] denotes the mutual information between the random variables X and Y, and 𝒞_decoding = ∫ D_KL(P_μ|𝐫||g_μ|𝐫) P(𝐫) d𝐫 is the average of the Kullback-Leibler divergence of P(μ|𝐫) from the network output g_μ(𝐫). The consequences of this decoupling are as follows. (i) Optimal decoding. The decoding cost 𝒞_decoding is the average relative entropy between the true probability of the category given the neural activity, and the output g. It is the only term depending on g, hence the function minimizing the cost given by Eq. (<ref>) is (if it can be realized) g_μ(𝐫) = P(μ|𝐫). (ii) Optimal coding. The coding cost 𝒞_coding is the difference between the information content of the signal and the mutual information between category membership and neural activity. Since processing cannot increase information <cit.>, the information I[μ,𝐫] conveyed by 𝐫 about μ is at most equal to the one conveyed by the sensory input 𝐬: I[μ, 𝐬] - I[μ, 𝐫] ≥ 0. If it is possible to find parameters such that the optimal estimator is reached, then the full average cost function (<ref>) reduces to this difference in mutual information quantities. In such a case, the cost function is minimized by maximizing the mutual information I[μ, 𝐫] between neural activity and category membership. Hence the infomax principle is here an outcome of the global optimization problem. This result is somewhat related to the information bottleneck approach <cit.>; see Appendix for details.
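For completeness, here is the short computation behind this decomposition (a step the text leaves implicit); the only ingredient is the Markov structure μ → 𝐬 → 𝐫, so that P(μ|𝐬,𝐫) = P(μ|𝐬):

```latex
% Expanding both costs, with all expectations over the joint law of (mu, s, r):
\mathcal{C}
 = \mathbb{E}_{\mathbf{s},\mathbf{r}}\Big[\textstyle\sum_{\mu} P(\mu|\mathbf{s})
   \ln \tfrac{P(\mu|\mathbf{s})}{g_\mu(\mathbf{r})}\Big]
 = -H(\mu|\mathbf{s}) - \mathbb{E}\,\ln g_\mu(\mathbf{r}),
\qquad
\mathcal{C}_{\mathrm{decoding}}
 = -H(\mu|\mathbf{r}) - \mathbb{E}\,\ln g_\mu(\mathbf{r}),
% where the two cross terms coincide because P(mu|s,r) = P(mu|s). Subtracting:
\mathcal{C}-\mathcal{C}_{\mathrm{decoding}}
 = H(\mu|\mathbf{r})-H(\mu|\mathbf{s})
 = I[\mu,\mathbf{s}]-I[\mu,\mathbf{r}]
 = \mathcal{C}_{\mathrm{coding}} .
```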
Geometry of internal representations. The whole chain of neural responses can be seen as first extracting a representation 𝐱∈ℝ^K over which N neurons have their activities 𝐫: the neurons form a population code covering a K-dimensional (K ≪ N) feature space given by 𝐱. This is motivated by the many recent works that show how (both biological and artificial) neural activities can be understood as acting on a lower-dimensional manifold (for works in neuroscience, see e.g. <cit.>, and for the machine learning literature, see e.g. <cit.>). Thus, one has the following Markov chain: μ→𝐬→𝐱=X(𝐬) →𝐫→μ̂. For the decoding part, the minimization of the cost 𝒞_decoding implies that, in this asymptotic limit of large signal-to-noise ratio (large number N of coding cells), g_μ(𝐫) = P(μ|𝐫) is an efficient estimator of P(μ|𝐱): it is unbiased and saturates the associated Cramér-Rao bound <cit.>. For the coding part, as we have seen, the minimization of the cost 𝒞_coding leads to the maximization of the mutual information I[μ,𝐫] between the categories and the neural representation provided by the network prior to decoding. From the data processing theorem, we have that I[μ,𝐫] ≤ I[μ, 𝐱] ≤ I[μ,𝐬]. Thus, for a given projection space X, at best I[μ,𝐫]=I[μ, 𝐱], and optimization with respect to the choice of the space X gives optimally I[μ, 𝐱]=I[μ,𝐬]. The quality of the projection X is given by how well the probability of the category given the stimulus is approximated by the probability of the category given the projection X(𝐬). As for the mutual information between categories and neural code, in a regime of high signal-to-noise ratio, one can write <cit.>: I[μ,𝐫] = I[μ,𝐱] - 1/2∫tr( F_cat^T(𝐱) F_code^-1(𝐱) ) P(𝐱)d𝐱 where F_code(𝐱) and F_cat(𝐱) are K× K Fisher information matrices: [F_code(𝐱)]_ij=-∫∂^2 ln P(𝐫|𝐱)/∂ x_i ∂ x_j P(𝐫|𝐱)d𝐫, [F_cat(𝐱)]_ij= -∑_μ=1^M ∂^2 ln P(μ|𝐱)/∂ x_i ∂ x_j P(μ|𝐱). In the one-dimensional case (K=1), it reads simply: I[μ,𝐫] = I[μ,x] - 1/2∫ F_cat(x)/F_code(x) P(x) dx. The Fisher information F_cat(x) characterizes the sensitivity of the category membership with respect to small variations of x. It is large at locations x near a boundary between categories, and small if x is well within a category. F_code(x) is the `usual' Fisher information considered in neuroscience, related to the discriminability measured in psychophysics <cit.>. It characterizes the sensitivity of the neural activity 𝐫 with respect to small variations of x. The expression (<ref>) allows for a simple and intuitive interpretation.∙ Finding a proper discriminant space. The first term, I[μ,𝐱], characterizes the correlation between the categories and the underlying projection space X. Maximizing this term means finding a discriminant space, an appropriate space from the point of view of the categorization task.∙ Finding a proper metric. The second term tells us what the metric of the neural representation should be, that is, how this space X should be probed: the Fisher information F_code should be large where the categorical Fisher information F_cat is large, in order to minimize the second term. Thus, for a given space X, minimization of the second term in the mutual information leads to a neural code such that F_code(x) is some increasing function of F_cat(x) (see Appendix <ref>). Efficient coding in view of optimal classification is thus obtained by essentially matching the two metrics. Since F_cat is larger near a class boundary, this should also be the case for F_code(x). A larger F_code(x) around a certain value of x means that the neural representation is stretched at that location (the neural representation tiles the space x more finely near than far from the class boundaries). Thus, category learning implies better cross-category than within-category discrimination, hence the so-called categorical perception. § NUMERICAL EXPERIMENTS Two-dimensional example with Gaussian categories. We first consider a toy example involving a two-dimensional stimulus space with three overlapping Gaussian categories (see Fig. <ref>a). Given the small dimension of 𝐬, we work with 𝐱 = 𝐬∈ℝ^2 (hence this 𝐱 is given, not found by the network; I[μ,𝐱] and F_cat(𝐱) are here properties of the data). The neural network is a multilayer perceptron with one hidden layer of 32 cells. Each cell i has a noisy neural activity given by r_i(𝐱) = f_i(𝐱) + σ√(g_i(𝐱)) z_i, where f_i is a sigmoidal activation function, z_i is a unit normal random variable, and σ=0.3. Here we take g_i(𝐱) = f_i(𝐱). Note that this multiplicative noise can be seen as a form of dropout, which in the original work <cit.> consists in multiplicative noise in the form of Bernoulli or Gaussian noise (where g_i(𝐱) = f_i(𝐱)^2).
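A minimal simulation sketch of this noise model and of the population Fisher information it induces, restricted to a one-dimensional stimulus axis; for Gaussian noise of mean f_i and variance σ²g_i, each neuron contributes F_i = f_i'²/(σ²g_i) + g_i'²/(2g_i²). The weights below are random stand-ins rather than trained ones, so the location of the peak carries no categorical meaning here.

```python
# Hedged sketch: noisy population code r_i = f_i(x) + sigma*sqrt(g_i(x))*z_i,
# with g_i = f_i, and F_code(x) = sum_i F_i(x) under P(r|x) = prod_i P(r_i|x).
import numpy as np

rng = np.random.default_rng(0)
n_cells, sigma = 32, 0.3
w = rng.standard_normal(n_cells)
b = rng.standard_normal(n_cells)

def f(x):
    """Sigmoidal tuning curves; returns shape (len(x), n_cells)."""
    return 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))

x = np.linspace(-3, 3, 601)
dx = x[1] - x[0]
fx = f(x)
dfx = np.gradient(fx, dx, axis=0)        # numerical f_i'(x)
g, dg = fx, dfx                          # here g_i = f_i, so g_i' = f_i'

# per-neuron Gaussian Fisher information, summed over the population
F_code = (dfx**2 / (sigma**2 * g) + dg**2 / (2 * g**2)).sum(axis=1)
print("F_code peaks at x =", x[np.argmax(F_code)])
```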
The choice g_i(𝐱) = f_i(𝐱) yields Poisson-like noise, as commonly found in biological neural networks <cit.>. We assume that the noise is not correlated between neurons given a stimulus 𝐱, so that we can write P(𝐫|𝐱) = ∏_i P(r_i|𝐱), which in turn allows us to write the Fisher information as F_code(𝐱) = ∑_i F_code, i(𝐱), where F_code, i(𝐱) is the Fisher information of neuron i. Figure <ref>a shows that after learning the network has indeed learned to estimate the posterior probabilities P(μ|𝐱), correctly partitioning the three categories into their respective regions. Figure <ref>b presents a representation of the Fisher information F_code(𝐱) on the 𝐱-plane after learning. Here, remember that the Fisher information is a 2×2 matrix. At each point between classes, the eigenvector associated with the largest eigenvalue is orthogonal to the class boundary, and this eigenvalue is largest at the boundary between categories, illustrating the categorical perception phenomenon. Finally, we consider a 1d path in input space, depicted by the dark dots, interpolating between two items drawn from two different categories. We compute the scalar Fisher information of the neural code along this line. Figure <ref>c shows the results, together with the categorical prediction outputted by the network. As expected, the neural Fisher information is greatest at the boundary between categories. Images of handwritten digits. Here we consider the MNIST dataset <cit.>, a large dataset of 28×28 handwritten digits (hence the stimulus 𝐬 lives in a 784-dimensional space). The neural network is a multilayer perceptron with two hidden layers, each made of 256 cells with ReLU activation. Poisson-like neuronal noise affects the last hidden layer, just as in the previous example, with σ=0.1. A continuum between an item from the `4' category and an item from the `9' category (two categories that are among the most confusable ones) is created by interpolating between them in a latent space discovered by training an autoencoder to reconstruct digits from the MNIST training set <cit.>. The labels on the x-axis of Fig. <ref>a picture a few samples from the continuum, which is made of 31 images. This continuum is considered as the 1d `x' in the previous discussions. One can then compute the categorical predictions outputted by the neural network together with the scalar Fisher information of the last hidden layer of neurons. Once again, Fig. <ref>a shows that learning induces categorical perception, with larger Fisher information at the boundary between the two categories. In a previous work <cit.>, the cosine distance between the neural activities 𝐫(x) and 𝐫(x+δ x) was used as a proxy for the Fisher information F_code(x), as it is much easier to compute. Fig. <ref>b shows that these two quantities indeed behave quite similarly.§ DISCUSSION AND CHALLENGES We have shown that minimizing the mean Bayes cost in a categorization task notably implies maximizing the mutual information between category membership and neural activity. This optimization leads to (i) finding an appropriate representation space, and, (ii) building a representation with the appropriate metrics on this space, leading to an expansion of neural space near decision boundaries. The results presented here are based on previous works <cit.> and on a paper under preparation <cit.>. To conclude, we mention several challenging issues which should be addressed.
(i) Our results are based on the use of the exact probability distributions of the data. They should be reconsidered in the context of learning with a finite set of examples. Note however that the numerical illustrations indicate that the main results hold in such a learning context. (ii) In the neuroscience context (but also in the machine learning context), one should study the effect of (possibly strong) noise at any stage of processing, also implying noise correlations in the subsequent layers – in particular, one issue is how to estimate the Fisher information quantity F_code, as P(𝐫|𝐱) does not factorize in this case. (iii) It would be interesting to find ways of estimating the categorical Fisher information F_cat. (iv) Finally, an important issue is to understand the effect of noise correlations on the geometry of the neural space (in the spirit of <cit.>, but here for the case of category learning). § SUPPLEMENTARY MATERIAL§.§ Link with the information bottleneck approachThe information bottleneck (IB) approach <cit.> can be formulated as a rate distortion problem, the considered learning cost being a distortion function that measures how well the category μ is predicted from the compressed neural representation 𝐫 compared to its prediction from the stimulus 𝐬. Tishby and collaborators developed this framework, theoretically and algorithmically, in the context of deep learning <cit.>. The qualitative idea of the IB approach is that the neural activity should convey as little information as possible about the stimulus, provided the information about the category is preserved. Thus, with our notation, the goal is to minimize I[𝐬,𝐫] - β I[μ,𝐫] where β is a Lagrange multiplier. Analyzing this optimization principle, Tishby et al. <cit.> show that the Kullback-Leibler divergence D_KL(P_μ|𝐬||P_μ|𝐫) emerges as the relevant effective distortion measure. This divergence corresponds to our cost function once the decoding stage is optimized, that is g_μ(𝐫) = P(μ|𝐫). One then sees that the approach followed here is somewhat dual to the IB one. One starts from the K-L divergence, and the infomax criterion emerges from the cost function. There are however two important differences. First, the full cost function that we consider includes the decoding part, and second, the correspondence is with the IB cost in the β→∞ limit. An alternative way to see this correspondence is to consider, from a distortion measure viewpoint, the IB cost associated with the Bayes cost: 𝒞_IB(β) = I[𝐬,𝐫] + β 𝒞. Making use of the decomposition of 𝒞 into coding and decoding parts (Eq. <ref>), we can write 𝒞_IB(β) = 𝒞_IB,coding(β) + β 𝒞_decoding where 𝒞_IB,coding(β)=I[𝐬,𝐫] + β ( I[μ,𝐬] - I[μ,𝐫] ). Since I[μ,𝐬] is a constant, 𝒞_IB,coding(β) is the usual information bottleneck cost function. §.§ Minimization under constraints We comment here on the minimization of the part of the coding cost which depends on the Fisher information quantities, F_code and F_cat, for a given projection space X – hence a given categorical information F_cat. As explained in the main text, minimization of this term requires that F_code essentially follow the categorical Fisher information F_cat.
The precise result will depend on the constraints on the neural system. The constraints may be on the neurons' parameters, as in <cit.>, or directly on the Fisher information considered as a function, as in <cit.>. In such a case, one minimizes the right-hand side of equation (<ref>) under the chosen constraint Ψ (for simplicity we consider the 1d case), ℰ = 1/2∫_X F_cat(x)/F_code(x) P(x) dx + λ( ∫_X Ψ(F_code(x)) P(x) dx - c). For instance, if Ψ(F)=F^α, one gets F_code(x) ∝ [F_cat(x)]^1/(1+α), which is meaningful for α>1. The limit α→ 0 corresponds to considering an information theoretic constraint, as we show now. As presented in Section <ref>, adopting the viewpoint of the information bottleneck approach <cit.>, we may minimize the mutual information I[x,𝐫] under the constraint that the information conveyed by the neural code about the categories is large enough: ℰ = I[x,𝐫] - β I[μ,𝐫]. In the same asymptotic limit as the one considered here, I[x,𝐫] behaves as 1/2∫ln F_code(x) P(x) dx (again here for K=1) <cit.>. Combining this result and the ones in <cit.>, we can thus write ℰ = 1/2∫ln F_code(x) P(x) dx - β( I[μ,x] - 1/2∫_X F_cat(x)/F_code(x) P(x) dx ). Up to the (here constant) term I[μ,x], this is equivalent to the cost (<ref>), in the case Ψ(.)=ln(.), taking the dual approach – that is, exchanging the roles of the cost and the constraint, β=1/λ. The optimal function is here F_code(x) ∝ F_cat(x). | http://arxiv.org/abs/2311.15682v1 | {
"authors": [
"Laurent Bonnasse-Gahot",
"Jean-Pierre Nadal"
],
"categories": [
"cs.LG",
"cs.IT",
"math.IT",
"q-bio.NC"
],
"primary_category": "cs.LG",
"published": "20231127101622",
"title": "Information theoretic study of the neural geometry induced by category learning"
} |
These authors contributed equally to this work. [email protected] ^1State Key Laboratory of Low-Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, P. R. China ^2Frontier Science Center for Quantum Information, Beijing 100084, P. R. China Spin squeezing provides a crucial quantum resource for quantum metrology and quantum information science. Here we propose that one-axis twisting (OAT) spin squeezing can be generated from free evolution under a general coupled-spin model with collective spin-spin interactions. We further propose pulse schemes to recover squeezing from parameter imperfections, and to reach extreme squeezing with Heisenberg-limited measurement precision scaling as 1/N for N particles. This work provides a feasible method for generating extreme spin squeezing. Spin Squeezing through Collective Spin-Spin Interactions Yong-Chun Liu^1,2 January 14, 2024 ========================================================§ INTRODUCTION The precision of quantum measurements with uncorrelated particles is restricted by the standard quantum limit, which can be overcome using nonclassical states <cit.>. Among them, the squeezed spin state <cit.> is a promising candidate to improve precision by simply performing collective measurements. Various strategies have been proposed and demonstrated to generate spin squeezing, including quantum nondemolition measurements <cit.> and dynamical evolution with nonlinear interactions. For the latter, an all-to-all two-body interaction, often referred to as the one-axis twisting (OAT) model <cit.>, is widely studied to deterministically produce strong squeezing <cit.>. An even higher degree of squeezing is attainable with the two-axis twisting (TAT) interaction <cit.>, which achieves Heisenberg-limited scaling of the measurement precision <cit.>. The squeezing interactions mentioned above require two-body coupling within the spin ensemble. Previous studies focus on several specific systems to create and witness such interactions. For example, the nonlinear atomic collisions in a two-component Bose-Einstein condensate give rise to an OAT interaction <cit.>. In cavity QED systems, the OAT Hamiltonian can be engineered via dispersive atom-light interactions with the cavity either driven <cit.> or undriven <cit.>, in which the atom-atom interaction is mediated by photons within the cavity. Another approach utilizes the strong interaction between Rydberg atoms to generate squeezing <cit.> and has been demonstrated in recent experiments <cit.>. Other proposals to produce the OAT interaction exist for trapped ions <cit.> and lattice systems <cit.>. These existing studies are mainly restricted to specific systems and lack universality. To further obtain the TAT interaction, potential strategies include applying pulse sequences <cit.> or continuous driving fields <cit.> to the existing interactions, which still face great challenges in experimental realization.In this work, we investigate a general model of collective interactions coupling two spin ensembles and show its ability to generate deterministic spin squeezing. When the numbers of particles in the two ensembles differ greatly, the ensemble with the larger particle number mediates an effective OAT interaction in the other ensemble. The imperfection arising from anisotropic coupling strengths can be eliminated with spin echo sequences. We also propose a pulse scheme to synthesize the TAT interaction based on this model, resulting in Heisenberg-limited squeezing.
Compared with the previous proposal <cit.>, the squeezing interaction here arises purely from the inter-species coupling, without requiring additional driving fields. As a result, this scheme is easier to implement.§ SYSTEM MODELConsider a system of two species of spin-1/2 (or two-level) particles, denoted by S and J, with the inter-species coupling described by the following universal interaction Hamiltonian:H_int = g_x S_x J_x + g_y S_y J_y + g_z S_z J_z ,where g_μ (μ = x, y, z) denotes the coupling strength of the different collective spin components between the two species. S_μ=1/2∑_k=1^N_sσ_S,μ^(k) and J_μ=1/2∑_k=1^N_jσ_J,μ^(k) (μ = x,y,z) denote the collective spin operators of the two subsystems, and can be viewed as angular momentum operators of total angular momenta S = N_s/2 and J=N_j/2, respectively. Here σ_S,μ^(k) and σ_J,μ^(k) denote the Pauli matrices of the k-th particle within subsystems S and J. S_μ and J_μ satisfy the commutation relations [S_i,S_j] = iε_ijkS_k and [J_i,J_j] = iε_ijkJ_k, where ε_ijk (i,j,k=x,y,z) is the Levi-Civita symbol. A wide range of spin-related couplings can be described by the Hamiltonian above, including spin-exchange interactions, H_int=g(S_x J_x + S_y J_y + S_z J_z), and dipole-dipole interactions, H_int=g(S_x J_x + S_y J_y - 2 S_z J_z). We choose the initial state to be the direct product of coherent spin states (CSS) of the two subsystems, with the CSS of subsystem J pointing along the z-axis, i.e., the eigenstate of J_z with eigenvalue N_j/2 (N_j is the corresponding particle number), and focus on the spin squeezing of subsystem (species) S, under the condition J ≫ S.We now show that such a system can generate OAT squeezing under free evolution. Since J ≫ S, the quantum state of subsystem J evolves much more slowly than that of subsystem S, meaning that on the time scale we are considering, the state has only very small deviations from J_z=J. One can therefore work in a subspace of the Hilbert space of our system, in which H_1=g_x S_x J_x + g_y S_y J_y can be viewed as a perturbation to H_0 = g_z S_z J_z. To show the spin-squeezing nature of such a system under free evolution, we perform a Fröhlich-Nakajima transformation (FNT) <cit.>: H_FN= e^-S_FNH_inte^S_FN = H_0 + H_1+[H_0,S_FN] +[H_1,S_FN]+1/2[[H_0,S_FN],S_FN]+...where S_FN=1/(4g_z J)[(g_x-g_y)(S_-J_- -S_+J_+) +(g_x+g_y)(S_-J_+ - S_+J_- )] ,and J_±=J_x± iJ_y. We then apply the Holstein-Primakoff transformation <cit.>:J_+= √(2J-a^†a)a, J_-=a^†√(2J-a^†a),J_z = J-a^† a .Here, a^† and a are bosonic creation and annihilation operators satisfying [a,a^†]=1. In the subspace under consideration, J_z is close to J, i.e., ⟨ a^†a⟩≪ J, so we can make the approximation J_+≈√(2J)a, J_-≈√(2J)a^†. Using this approximation, we obtain the effective Hamiltonian for the coupled system: H_FN≈[g_z J + (g_x^2+g_y^2)/(4g_z) + (g_x^2 - g_y^2)/(4g_z)(aa+a^†a^†) + (g_x^2+g_y^2-2g_z^2)/(2g_z) a^†a ]S_z+[g_xS_x(a+a^†)+ig_y S_y(a^†-a)]/√(2J)+g_xg_y/(2g_z) S_z^2+ ... Neglecting higher-order terms in N_s/N_j and ⟨ a^†a ⟩/J, the effective Hamiltonian reduces toH_FN≈ g_z J S_z + g_x g_y/(2g_z) S_z^2 ,which is an OAT Hamiltonian with a linear term; a numerical check of this reduction is sketched below.
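A hedged numerical check of this reduction (a sketch, not the code used for the figures discussed later): we evolve the product of coherent spin states under the full H_int and under the effective OAT Hamiltonian using QuTiP, and compare the best squeezing parameter ξ² reached. System sizes and couplings are illustrative choices, picked so the joint Hilbert space stays small.

```python
import numpy as np
from qutip import jmat, qeye, tensor, spin_coherent, sesolve, expect

Ns, Nj = 4, 200
S, J = Ns / 2, Nj / 2
gx, gy, gz = 1.0, 1.0, 1.0

Sx, Sy, Sz = (tensor(jmat(S, a), qeye(Nj + 1)) for a in "xyz")
Jx, Jy, Jz = (tensor(qeye(Ns + 1), jmat(J, a)) for a in "xyz")
H_int = gx * Sx * Jx + gy * Sy * Jy + gz * Sz * Jz
H_oat = gz * J * Sz + gx * gy / (2 * gz) * Sz**2        # effective Hamiltonian

# CSS of S along +x, CSS of J along +z (the J_z = J eigenstate)
psi0 = tensor(spin_coherent(S, np.pi / 2, 0), spin_coherent(J, 0, 0))

def xi2(state):
    """4 (Delta S_perp)^2_min / N_s, minimized over the transverse plane."""
    m = np.array([expect(O, state) for O in (Sx, Sy, Sz)])
    n = m / np.linalg.norm(m)
    e1 = np.cross(n, [0.0, 0.0, 1.0])   # assumes the mean spin stays off the z axis
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    best = np.inf
    for th in np.linspace(0, np.pi, 121):
        u = np.cos(th) * e1 + np.sin(th) * e2
        Sp = u[0] * Sx + u[1] * Sy + u[2] * Sz
        best = min(best, expect(Sp * Sp, state) - expect(Sp, state) ** 2)
    return 4 * best / Ns

tlist = np.linspace(0, 8, 60)
for H, tag in ((H_int, "full H_int"), (H_oat, "effective OAT")):
    xs = [xi2(s) for s in sesolve(H, psi0, tlist).states]
    print(f"{tag}: min xi^2 = {min(xs):.3f}")
```

With these (assumed) parameters the two minima agree closely, consistent with the N_j ≫ N_s argument above.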
This effective Hamiltonian can be understood in two ways. Firstly, from the mathematical point of view, the system is evolving in the subspace in which ⟨ a^† a ⟩≪ J, so the matrix elements of S_FN are much smaller than those of H_int in this subspace. Therefore, (6) means that, in this subspace, H_FN and the original interaction Hamiltonian H_int are very close to each other. Secondly, one can view subsystem J as an intermediary. When spin S evolves and perturbs the large spin J, this perturbation induces a "back action" on spin S, mediating an effective interaction among the spins of ensemble S. This mechanism is similar to that in <cit.>, and is analogous to spin-wave excitations in condensed matter <cit.>. However, unlike in the previous work, no external control is needed to produce such a "back action" mechanism, making our proposal a more feasible squeezing scheme. § NUMERICAL INVESTIGATION §.§ OAT squeezing dynamics To verify the validity of OAT squeezing under free evolution of the Hamiltonian Eq. (<ref>), we numerically investigate the evolution of the quantum state. As illustrated by the Husimi Q representation on the generalized Bloch spheres (after tracing out subsystem J), the quasiprobability distribution of subsystem S is continuously squeezed, signifying the occurrence of spin squeezing, as depicted in Fig. <ref>(a). The degree of spin squeezing is usually characterized by the squeezing parameter ξ^2=4(Δ S_⊥)_min^2 / N_s, where (Δ S_⊥)_min^2 is the minimum of the fluctuation (Δ S_⊥)^2=⟨ S_⊥^2⟩-⟨ S_⊥⟩^2 for the spin component perpendicular to the mean spin direction. In Fig. <ref>(b) we compare the squeezing parameters for free evolution under H_int, H_FN and the OAT Hamiltonian H_OAT=g_x g_y/(2g_z) S_z^2. The agreement of the evolution of the squeezing parameters illustrates the validity of the effective OAT squeezing. This validity is further demonstrated in Fig. <ref>(a) and (b), where the power-law scaling of the optimal spin-squeezing parameter ξ^2_min and the corresponding squeezing time t_min with respect to system size are compared with ideal OAT squeezing, and we find clear consistency. For ideal OAT (<ref>), the scaling relations are given by <cit.> ξ_min^2 ≃1/2(N_s/3)^-2/3, t_min≃ 2 × 3^1/6 |g_z|/(|g_x g_y| N_s^2/3). We have also studied how the size of subsystem J affects the quality of squeezing. Fig. <ref>(c) and (d) show the relation between N_j/N_s, the ratio of the subsystem sizes, and ξ^2_min and t_min. We observe that the two quantities approach the ideal OAT values as N_j/N_s increases, as expected from the condition J≫ S, and a ratio of order 10^2 is sufficient for our model to produce squeezing that closely resembles ideal OAT dynamics. When g_x ≠ g_y, numerical simulations indicate that, given the same N_j/N_s ratio as in the g_x=g_y case, while H_FN still yields OAT-level squeezing, the squeezing parameter of the quantum state evolving under the original Hamiltonian H_int deviates from OAT, as shown in Fig. <ref>(a) and (b). The following observation, however, shows that this imperfection can be overcome by making use of the well-known spin echo pulse sequence <cit.>. Figure <ref>(a) compares the squeezing parameters for free evolution under H_int= e^S_FNH_FNe^-S_FN, H^' = e^S_FNH_OATe^-S_FN and H_OAT. The discrepancy between the first two indicates that the deviation of H_int from OAT squeezing is mainly caused by the difference between H_FN and H_OAT, which to lowest order is approximately the linear-in-S_z term on the RHS of Eq. (<ref>). Therefore, the imperfection can be eliminated by cancelling out the effect of this term. This can be attained by applying a sequence of π pulses to system S, equally spaced in time, as illustrated in Fig. <ref>(c). The numerical simulation of the squeezing dynamics shown in Fig. <ref>(d) verifies the validity of the pulse scheme.
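The claim that equally spaced π pulses cancel the linear term while leaving the twisting intact can be checked directly at the operator level; a minimal sketch follows, with illustrative values for the linear coefficient A and twisting coefficient B (here absorbed into dimensionless units together with the step time).

```python
# Hedged check: [ e^{-i(A S_z + B S_z^2) dt} R_pi ]^2 equals e^{-2 i B S_z^2 dt}
# up to a global phase from R_pi^2 (exactly the identity for integer S).
import numpy as np
from qutip import jmat

Ns = 10                                  # even, so S = N_s/2 is an integer
Sy, Sz = jmat(Ns / 2, "y"), jmat(Ns / 2, "z")
A, B, dt = 50.0, 0.5, 0.02

U = (-1j * (A * Sz + B * Sz**2) * dt).expm()   # one free-evolution step
Rpi = (-1j * np.pi * Sy).expm()                # pi pulse about y
lhs = (U * Rpi) ** 2
rhs = (-1j * 2 * B * Sz**2 * dt).expm()        # linear term echoed away

d = Ns + 1
print("|tr(lhs^dag rhs)|/d =", abs((lhs.dag() * rhs).tr()) / d)  # ~ 1
```

The echo works because the π pulse conjugates S_z to -S_z, flipping the sign of the linear term but leaving S_z² invariant.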
§.§ Generating TAT squeezing from effective OATInspired by the transformation scheme from OAT to TAT in ref. <cit.>, starting from the Hamiltonian H_int that is effectively OAT, squeezing can be further enhanced by applying a more intricate pulse sequence to generate TAT interaction. This scheme is demonstrated in Fig. <ref>(a). It is essentially a combination of the pulse sequence described in Fig. <ref>(c), which eliminates the linear-in-S_z term in H_eff, and the pulse sequence in ref. <cit.>, which synthesizes a TAT interaction from the OAT Hamiltonian H_OAT. Define the rotation operator R_θ = e^-iθ S_y and the evolution operator U(δ t) = e^-iH_intδ t, The evolution operator for a single period T_c=6δ t is given by U_1(6δ t) = [U(δ t ) R_π]^2 R_-π/2[U(δ t ) R_π]^4 R_π/2.For N_j ≫ N_s, H_int≈ H_eff. Using the identity[ e^-i(AS_z+BS_z^2 )R_π]^2 = e^-2i B S_z^2and the result in ref. <cit.>:e^-iτ S_z^2 R_-π/2 e^-2iτ S_z^2 R_π/2≈ e^-iτ(2J_x^2 + J_z^2 )for τ≪(2N_s)^-1, we arrive atU_1(6δ t) ≈ e^-i g_x g_y δ t ( 2S_x^2+S_z^2 )/g_zfor δ t ≪|g_z/(g_x g_y N_s)|, which is equivalent to the evolution operator given byH_TAT = g_x g_y/6g_z(2S_x^2 + S_z^2) = g_x g_y/6g_z(S_x^2 - S_y^2)(the second identity is obtained by subtracting a constant S(S+1)). The validity of our TAT scheme is demonstrated by the numerical simulation in Fig. <ref>(b)-(f). Fig. <ref>(b) illustrates the evolution of the spin squeezing parameter under the pulse scheme, reflecting well alignment with that of the effective TAT Hamiltonian and a substantial outperformance compared with OAT squeezing. The optimal squeezing parameter and the corresponding squeezing time for the effective TAT interaction are approximately given byξ_min^2 ≃1.8/N_s, t_min≃3 |g_z|ln (4N_s)/|g_x g_y| N_s,as is verified in Fig. <ref>(c) and (d). We have also investigated the trend of ξ^2_min and t_min as N_j/N_s increases when N_s is fixed. The result is shown in Fig. <ref>(e), (f), indicating that the two parameters agrees well with TAT for N_j/N_s > 100.Our pulse scheme for generating TAT spin squeezing is subjected to various noises, including imperfections in pulse areas and pulse separations. To investigate the effect of such fluctuations, we simulate the squeezing dynamics of the pulse scheme by adding Gaussian stochastic noises, i.e., assuming the pulse areas or pulse separations are independent and subject to Gaussian distribution. As is shown in Fig. <ref>, under noise in the pulse area and pulse separation less than 0.05%, our scheme can attain almost optimal squeezing of the effective TAT dynamics, indicating the robustness of our scheme.§ CONCLUSIONIn summary, we have proposed that spin squeezing can be generated from systems with collective spin-spin interaction. When the size of two spin subsystems differ greatly (N_j/N_s ≳ 10^2), effective OAT squeezing can be generated under free evolution, without any additional control. The influence of asymmetric coupling strengths on the squeezing can be removed by using a spin-echo pulse sequence consisting of equally-spaced π pulses. Effective TAT squeezing with Heisenberg-limited measurement precision can be realized by applying a more intricate periodic pulse scheme, which incorporates spin echo and OAT-to-TAT transform. The pulse scheme is robust against pulse area and pulse separation fluctuations. This work provides a viable approach for generating high level of spin squeezing in a broad range of systems. Y.W. thanks Jinyu Liu and Qixian Li for enlightening discussions. 
This work is supported by the National Key R&D Program of China (Grant No. 2023YFA1407600), and the National Natural Science Foundation of China (NSFC) (Grants No. 12275145, No. 92050110, No. 91736106, No. 11674390, and No. 91836302). * | http://arxiv.org/abs/2311.15667v1 | {
"authors": [
"Yanzhen Wang",
"Xuanchen Zhang",
"Yong-Chun Liu"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231127095501",
"title": "Spin Squeezing through Collective Spin-Spin Interactions"
} |
Characterizing Video Question Answering with Sparsified Inputs Shiyuan Huang^1 Robinson Piramuthu^2 Vicente Ordonez^2,3 Shih-Fu Chang^1 Gunnar A. Sigurdsson^2 ^1 Columbia University ^2 Amazon Alexa AI ^3 Rice University January 14, 2024 ==========================================================================================================================================================================N-body systems characterized by r^-2 attractive forces may display a self-similar collapse known as the gravo-thermal catastrophe or core collapse. Globular clusters are a real-life example of this in astronomy. In these clusters, collapse is halted by dynamical heating from binary stars. This is known as the binary-burning phase of star cluster evolution. A fraction of Milky Way globular clusters may have already reached this phase. It has been speculated, with guidance from simulations, that macroscopic variables such as central density and velocity dispersion are governed post-core-collapse by an effective, low-dimensional system of ordinary differential equations (ODEs). However, it is hard to distinguish potentially chaotic low-dimensional motion from high-dimensional stochastic noise. Here we apply three machine learning tools to the time series of relevant macroscopic quantities from state-of-the-art dynamical simulations to constrain the post-collapse dynamics: topological data analysis (TDA) on a lag embedding, Sparse Identification of Nonlinear Dynamics (SINDY), and Tests of Accuracy with Random Points (TARP). Even though TARP suggests that the time series are not purely noise, we do not find a low-dimensional system of equations describing their time evolution. § INTRODUCTION Point particles interacting through inverse square gravitational forces are ubiquitous in astronomy: from galaxies in galaxy clusters, to stars in galaxies and star clusters, down in scale to planetesimals and even dust <cit.>. While the general gravitational N-body problem is approached numerically either by brute-forcing the pairwise interactions <cit.> or by approximate schemes (multi-particle collision, MPC <cit.>; Monte Carlo <cit.>), it is expected that in specific regimes tractable effective equations may emerge that drive the evolution of relevant macroscopic quantities. It has been suggested that this is the case for post-core-collapse oscillations in star clusters <cit.>. For example, <cit.> claim to find a three-dimensional attractor that qualitatively resembles the Rössler attractor <cit.>. Paper contribution: For the first time, we apply three independent machine learning tools to analyze state-of-the-art simulations of star cluster core collapse. We do not find clear evidence of effective low-dimensional dynamics. Topological data analysis <cit.>, when applied to a lag embedding of the core density and velocity dispersion time series, produces a persistence diagram that appears indistinguishable from noise; similarly, Sparse Identification of Nonlinear Dynamics <cit.> fails to yield an equation reproducing the observed time series. Finally, Tests of Accuracy with Random Points <cit.> find that samples from our original time series do not lie in-distribution with respect to random reshufflings.
The post-core-collapse evolution is thus not purely independent identically distributed noise, but this is not sufficient to conclude that it is the result of low-dimensional dynamics.§ SIMULATIONSWe have performed a hybrid MPC-particle-mesh simulation with N=10^5 particles, where the collective gravitational force is evaluated with a standard particle-in-cell method on a 32× 16× 128 spherical grid. The Poisson equation is solved with a finite difference scheme. Collisions are resolved with the MPC stochastic scheme <cit.> on the same grid, conditioned every time step Δ t with the cell-based interaction probability p_i= Erf(βΔ t ν_c), where ν_c=8π G^2m̅^2_i n_ilogΛ/σ^3_i is the typical collision frequency in cell i and β a dimensionless constant proportional to the total number of cells. Initial positions and velocities are sampled from an isotropic Plummer model with scale radius r_s and mass scale M. The equations of motion for the N particles are propagated between collision steps with a standard second-order leapfrog scheme with a fixed timestep. All radii are expressed in units of the scale radius r_s, while the time unit is given as a function of the total mass M and the gravitational constant G as t_dyn≡√(r_s^3/GM). We fix Δ t=t_dyn/100 and run until 2× 10^5 t_dyn. As a function of time, we save the values of the central density and velocity dispersion (both evaluated within the time-dependent radius enclosing 5% of M), as shown in Fig. <ref>, left panel.§ METHODS §.§ Topological data analysis on lag embeddingTDA is the study of the topological properties of point clouds derived from data. The primary tool within TDA is persistent homology, which provides a multi-scale description of the homological features of a dataset, such as connected components (H_0), loops (H_1), and voids (H_2). The main output of persistent homology is the barcode or persistence diagram (PD), visualizing the birth and death of topological features as one varies a scale parameter. See <cit.> for a comprehensive discussion.We applied persistent homology to a lag embedding of the time series of velocity dispersion and of central density generated from our simulations. To achieve this, we detrended our time series by subtracting a polynomial fit (see Fig. <ref>, right panel), calculated the autocorrelation and sampled at regular intervals such that the autocorrelation signal became insignificant. This was the basis for computing a four-dimensional lag embedding. Our lag embeddings reconstruct a higher-dimensional phase space from a single time series, effectively transforming a one-dimensional series into a point cloud. This is justified by Takens' theorem, asserting that, under certain conditions, the dynamics of a system can be reconstructed using the time-delayed versions of even a single observable <cit.>. By applying TDA to this lag-embedded space, we can discern topological features that might correspond to dynamical structures or patterns in the original time series. For instance, loops revealed by persistent homology might indicate periodic orbits or recurrent dynamics. We used the ripser library <cit.> to compute persistent homology features on our time series; a minimal sketch of this pipeline is given below.
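The sketch below walks through the steps just described, detrend with a polynomial fit, subsample at the decorrelation stride, build a four-dimensional lag embedding, and compute persistence diagrams with ripser; the noisy sinusoid is a stand-in for our simulated series, and all parameter values are illustrative.

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 5000)
y = np.sin(40 * np.pi * t) + 0.1 * rng.standard_normal(t.size) + 2.0 * t

x = y - np.polyval(np.polyfit(t, y, deg=3), t)        # detrend

ac = np.correlate(x - x.mean(), x - x.mean(), "full")[x.size - 1:]
stride = max(int(np.argmax(ac < 0)), 1)               # first autocorrelation zero
xs = x[::stride]                                      # decorrelated samples

m = 4                                                 # embedding dimension
cloud = np.column_stack([xs[i: xs.size - m + 1 + i] for i in range(m)])

dgms = ripser(cloud, maxdim=1)["dgms"]                # H0 and H1 diagrams
h1 = dgms[1]
lifetimes = h1[:, 1] - h1[:, 0] if h1.size else np.array([0.0])
print(f"stride = {stride}, points = {len(cloud)}, "
      f"max H1 lifetime = {lifetimes.max():.3f}")
```

A genuinely periodic signal yields a long-lived H1 point (a loop in delay coordinates); for our cluster data no such long-lived features appear, as reported in the results.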
§.§ Sparse Identification of Nonlinear Dynamics (SINDY)SINDY is an algorithmic approach designed to derive the governing equations of dynamical systems directly from observational data <cit.>. The central idea is to compute derivatives of the observed state variables and then express these derivatives as a sparse linear combination of a pre-defined library of functions. Here we relied on a polynomial library up to third degree, with interactions. We filtered our time series by removing high-frequency Fourier modes and took first-order numerical derivatives of the velocity dispersion and of the central density generated from our simulations. We then used SINDY with LASSO regularization and MSE loss to look for second-order ODEs, the minimum order that allows for oscillatory behavior. At the implementation level, we relied on the pysindy library <cit.>. §.§ Tests of Accuracy with Random PointsTARP is a necessary and sufficient coverage test to assess whether a set of points has been drawn from a given distribution for which a generative model is available <cit.>. TARP was originally designed to test the accuracy of posterior estimators by using random points to assess coverage probabilities, but here we are applying it essentially as an anomaly detection technique, for the task of distinguishing our original time series from randomly reshuffled versions of it. For our purposes, the main output of TARP is the expected coverage probability as a function of credible intervals. When plotted, this reveals that a sample is out of distribution if it deviates from the identity line. Unlike the previous two approaches, finding such a deviation through TARP does not prove, per se, that the dynamics producing our time series is low dimensional. § RESULTS §.§ Topological data analysis on lag embeddingThe PDs of the central density and velocity dispersion time series are shown in Fig. <ref>. In a PD, stable topological structure is represented by points far from the diagonal. Qualitatively, such points do not appear to be present in Fig. <ref> for rings and voids, suggesting that the topological features picked up by persistent homology are just noise. Quantitatively, we tested this by computing the persistent entropy statistic on our original time series and on 100 random reshufflings thereof. Persistent entropy is the Shannon entropy of the lifetimes of the features in a persistence diagram. The persistent entropy for the original time series typically falls within the distribution of the reshuffled time series, both for density and velocity dispersion. This is not the case for a time series obtained from the Rössler attractor, which happens to have systematically lower persistent entropy than its reshufflings.§.§ Sparse Identification of Nonlinear Dynamics (SINDY)We conducted a thorough analysis of the ODEs proposed by SINDY while varying the strength of regularization, which results in different levels of sparsity in the ODE coefficients. Neither a grid search nor a meticulous manual exploration resulted in equations able to reproduce our time series in the long term. In Fig. <ref> (left panel) we show the solution of one such equation over time. While it initially reproduces the time series it was learned from, it eventually departs dramatically from it. This was the case for all systems of ODEs we found. §.§ Tests of Accuracy with Random PointsFig. <ref> (right panel) shows 100 runs of TARP, none of which overlaps the diagonal line. Our original time series is thus biased (out-of-distribution) with respect to its reshufflings, suggesting that reshuffling erases some temporal dependence. In other words, our time series is not a sequence of independent identically distributed random variables.
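A hedged sketch of how such a coverage curve can be computed, assuming the interface of the tarp package (get_tarp_coverage(samples, theta), with samples of shape (n_samples, n_sims, n_dims) and theta of shape (n_sims, n_dims)). Reshuffled copies of the series play the role of "posterior samples" and short windows of the original series the role of truths; the random-walk series, window length, and counts are illustrative stand-ins.

```python
import numpy as np
from tarp import get_tarp_coverage

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(20000))          # stand-in detrended series

w, n_sims, n_samples = 8, 200, 100
starts = rng.integers(0, x.size - w, size=n_sims)
theta = np.stack([x[s:s + w] for s in starts])     # (n_sims, w) true windows

samples = np.empty((n_samples, n_sims, w))
for i in range(n_samples):
    xs = rng.permutation(x)                        # one random reshuffling
    samples[i] = np.stack([xs[s:s + w] for s in starts])

ecp, alpha = get_tarp_coverage(samples, theta)
print("max |ecp - alpha| =", float(np.abs(ecp - alpha).max()))
```

A large departure of ecp from alpha (the diagonal) signals that the original windows are not exchangeable with their reshufflings, i.e. some temporal structure survives.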
This departure from i.i.d. behaviour can be due to various reasons, including poor trend removal in the pre-processing phase; it does not per se indicate that our time series are generated by low-dimensional dynamics. It does justify further investigation into the matter. § CONCLUSIONSWe applied three machine learning tools that are new to astronomy, especially in the context of star cluster dynamics, to the problem of characterizing the dynamical evolution of a simulated N-body system in the post-core-collapse phase. TDA on a lag embedding of the central density and velocity dispersion time series, as well as SINDY, did not provide us with evidence that the dynamical evolution is driven by a low-dimensional system of ODEs, despite the expectations of <cit.>.On the other hand, TARP revealed that the time series we analyzed are out-of-distribution with respect to randomly reshuffled time series. This suggests that there is some structure in the (apparent) noise. The search for the lost attractor is not over yet. M. P. acknowledges financial support from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 896248. This work is supported by the Simons Collaboration on “Learning the Universe". The Flatiron Institute is supported by the Simons Foundation. The work is in part supported by computational resources provided by Calcul Quebec and the Digital Research Alliance of Canada. Y.H. and L.P. acknowledge support from the Canada Research Chairs Program, the National Sciences and Engineering Council of Canada through grants RGPIN-2020-05073 and 05102, and the Fonds de recherche du Québec through grants 2022-NC-301305 and 300397. P.L acknowledges support from the Simons Foundation. Mircea Petrache acknowledges financial support from Fondecyt Regular grant No. 1210426. | http://arxiv.org/abs/2311.16306v1 | {
"authors": [
"Mario Pasquato",
"Syphax Haddad",
"Pierfrancesco Di Cintio",
"Alexandre Adam",
"Pablo Lemos",
"Noé Dia",
"Mircea Petrache",
"Ugo Niccolò Di Carlo",
"Alessandro Alberto Trani",
"Laurence Perreault-Levasseur",
"Yashar Hezaveh"
],
"categories": [
"astro-ph.GA",
"nlin.CD"
],
"primary_category": "astro-ph.GA",
"published": "20231127204341",
"title": "The search for the lost attractor"
} |
Robust Conditional Wald Inference for Over-Identified IV* Lee: Princeton University and NBER (email: [email protected]); McCrary: Columbia University and NBER (email: [email protected]); Moreira: FGV EPGE (email: [email protected]); Porter: University of Wisconsin (email: [email protected]); Yap: Princeton University (email: [email protected]). By David S. Lee, Justin McCrary, Marcelo J. Moreira, Jack Porter, Luther Yap* November 2023For the over-identified linear instrumental variables model, researchers commonly report the 2SLS estimate along with the robust standard error and seek to conduct inference with these quantities. If errors are homoskedastic, one can control the degree of inferential distortion using the first-stage F critical values from <cit.>, or use the robust-to-weak-instruments Conditional Wald critical values of <cit.>. If errors are non-homoskedastic, these methods do not apply. We derive the generalization of Conditional Wald critical values that is robust to non-homoskedastic errors (e.g., heteroskedasticity or clustered variance structures), which can also be applied to nonlinear weakly-identified models (e.g. weakly-identified GMM).Keywords: Instrumental Variables, Weak Instruments, t-ratio, First-stage F-statistic, Conditional Wald § INTRODUCTION This paper considers inference in the over-identified linear instrumental variables model and its generalization to weakly-identified models and GMM more generally. The core problem of inference in the weak IV literature is that when instruments are weak, conventional asymptotic approximations are poor, causing standard inference procedures (like Wald or t-ratio-based inference) to over-reject, even under the null. Indeed, <cit.> pointed out that any confidence set that is bounded with probability 1 (like the usual β̂±1.96·ŝê(β̂)) could have the potential to cover the true parameter 0 percent of the time (i.e., zero percent confidence level).<cit.> provided a generalized approach to constructing inference procedures that addressed this weak instrument problem via data-dependent critical values; this approach was first demonstrated in the case when errors are homoskedastic. <cit.> presented, for the hypothesis that the parameter is equal to a particular value, conditional versions of the "trinity" of test procedures: Likelihood Ratio (LR), Lagrange Multiplier (LM), and Wald tests. The critical values for each of these test statistics were functions of the observed data and the null hypothesis.Since then, a number of efforts have generalized these tests to accommodate non-homoskedastic settings, given the widespread preference of applied researchers to remain somewhat agnostic about the properties of the errors in the linear model.[See, for example, <cit.>, <cit.>, <cit.>.] Curiously, while there have been efforts to generalize the Conditional LR (CLR) and LM tests to general non-homoskedastic errors, to the best of our knowledge, the extension of the Conditional Wald test in this way has been neglected.In this paper, we derive the extension of Conditional Wald to non-homoskedastic settings. This effort delivers a Wald-based inference procedure that is valid, similar, robust to arbitrarily weak instruments, robust to HAC error structures, and applicable to more general weakly-identified settings like GMM.There are a number of practical reasons to revisit a testing procedure rooted in a Wald approach.
First, for the linear instrumental variables model, applied research has revealed a preference for Wald-based inference. Most typically, researchers compute and report the 2SLS estimator and robust standard errors, regardless of concerns about instrument weakness. In homoskedastic settings, researchers could pursue two different options for conducting Wald-based inference using the computed t-ratio. Researchers can either use the critical value function for Conditional Wald in <cit.>, or they can use the first-stage F-statistic along with the critical value tables in <cit.> and the Bonferroni arguments used in <cit.>. When the errors are non-homoskedastic – as is typically allowed in modern empirical work – the values in the tables of <cit.> no longer reliably control size distortions, as pointed out in <cit.>. The contribution of the current paper is to provide a method for computing critical values for the t-ratio that will deliver valid, robust inference, even in non-homoskedastic settings.[In this paper, acceptance/rejection of the null is the result of comparing a single statistic with a valid (in this case, data-dependent) critical value. A different, "two-step" inference approach, where two different procedures (one "robust to weak instruments" and the other non-robust) are combined to form an overall valid procedure (which can also accommodate non-homoskedastic settings), is proposed by <cit.>.]A second reason to consider a robust-to-HAC version of the Conditional Wald of <cit.> is that there are two recent studies pointing to power advantages of Conditional Wald in the homoskedastic, over-identified setting and in the non-homoskedastic, just-identified setting. <cit.> analyze power for the over-identified, homoskedastic case, and provide simulation evidence that Conditional Wald using 2SLS tends to produce shorter confidence set lengths compared to CLR (<cit.>). This is a particularly striking finding, in light of papers that point to the near-optimality, in terms of power, of CLR. Furthermore, <cit.>, in the context of the just-identified (robust to HAC errors) IV model, show that two different Wald-based confidence intervals – VtF and Conditional Wald – appear to be almost always shorter than that of <cit.>, a recommended benchmark in the literature. Thus, developing the Conditional Wald robust to HAC errors is not only aligned with practitioner practice, but these recent studies suggest that it may even have power advantages in the form of shorter confidence intervals.Our motivation for deriving CW critical values is entirely practical and stems from taking as given practitioners' apparent preference for computing the 2SLS point estimate and robust standard error (presuming non-homoskedasticity), and finding critical values that lead to valid inference. Our approach is thus different from identifying the optimal test after having defined a class of procedures and an objective function. Nevertheless, the two studies mentioned above do suggest the possibility that, in terms of power and confidence interval length, CW could fare well compared to existing alternatives for the over-identified model.Section <ref> establishes the notation we use for the standard linear IV model with non-homoskedastic errors, Section <ref> derives the critical values for Robust Conditional Wald tests based on 2SLS, LIML, two-step, and CUE GMM estimators, Section <ref> extends the test to nonlinear weakly-identified models (e.g.
GMM), and Section <ref> concludes.§ THE LINEAR IV MODELThe standard linear IV model is represented byy_1 = Y_2β +u Y_2 = ZΠ +V_2where y_1 ( n× 1) is the dependent variable, Y_2 ( n× p) are the endogenous variable(s) of interest, and Z ( n× k) are the excluded instruments, while u ( n× 1) and V_2 ( n× p) are the unobserved structural-form errors. The single endogenous regressor case simply corresponds to p=1. We will always take k≥ p with k=p corresponding to the just-identified model and k>p corresponding to the over-identified model. The parameter of interest is β. It is straightforward to accommodate additional covariates (including a constant), but we omit their inclusion in the exposition below.The reduced-form model is: y_1 = ZΠβ +v_1Y_2 = ZΠ +V_2,where u≡ v_1-V_2β. It will be convenient to write the model in a matrix form:Y=ZΠ A+V,where Y=[ y_1:Y_2], V=[ v_1:V_2], and A= [ β :I_p]. We will use the notation Y_i, V_i, Z_i, etc, to denote the i-th row of the corresponding matrix.In <cit.>, the rows of V were assumed to be i.i.d. This paper relaxes this assumption for the derivation of the Conditional Wald test robust to different DGPs. We are motivated by the observation that applied researchers typically prefer not to make the assumption of homoskedasticity, and often they are interested in a clustered error structure, for example.§ THE ROBUST CONDITIONAL WALD TESTS In this section, we derive the Conditional Wald (CW) tests robust to HAC errors for the linear model given in section <ref>.The Wald statistic is formed by three elements: a null value β _0, an estimator β_n, and a robust asymptotic variance estimator A.Var for √(n)( β_n-β _0):𝒲_n=n( β_n-β _0) ^'[ A.Var] ^-1( β _n-β _0) .In section <ref>, we review the class of linear GMM estimators for β, which includes common estimators like 2SLS, LIML, efficient two-step GMM, and the CU (continuously updating) GMM estimator, all of which can be used for constructing a Robust Conditional Wald test. In section <ref>, we review the robust variance estimators based on the asymptotic distribution of √(n)( β_n-β _0). With these components in hand, we can form robust versions of the Wald statistic for various estimators, β_n. Note that there are no new results in Sections <ref> and <ref>, and there are many references that detail these standard results (as one example, see <cit.>). We review a selected set of well-established facts about GMM to highlight that the Robust Conditional Wald test we derive in <ref> is not specific to the leading case in applied work – 2SLS – and can be applied to tests based on other estimators of the parameter of interest. Note that the multitude of different estimators that could be employed arises in the over-identified case; in contrast, for example, in the single instrument case, all of the estimators we discuss below collapse to the standard IV estimator. With that as context, an interested reader can skip tosection <ref>, in which we show how to apply the conditional argument of <cit.> to obtain a critical value function for the Wald statistic that is robust to instrument weakness. The critical value function is then used to form a robust similar test, which can be inverted to generate confidence intervals for β. §.§ Estimators Estimators like 2SLS or LIML can be viewed as particular GMM estimators based on the linear moment condition:g_n( β) =n^-1∑_i=1^nZ_i( y_1i-Y_2i^'β) =n^-1Z^'Yb,where b=( 1,-β ^') ^'. 
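To make the formulas in the remainder of this section concrete, the snippets that follow carry along a small simulated dataset. The design is purely illustrative (a hypothetical DGP with a deliberately weak first stage and conditionally heteroskedastic errors); none of its parameter values come from this paper.

import numpy as np

rng = np.random.default_rng(0)
n, k, p = 500, 4, 1                      # sample size, instruments, endogenous regressors

Z = rng.standard_normal((n, k))          # excluded instruments
Pi = np.full((k, p), 0.1)                # deliberately weak first-stage coefficients
beta_true = 0.5
scale = np.exp(0.5 * Z[:, :1])           # heteroskedasticity driven by the first instrument
common = rng.standard_normal((n, 1))     # shared shock that creates endogeneity
V2 = 0.8 * common + scale * rng.standard_normal((n, 1))   # first-stage error
u = 0.6 * common + scale * rng.standard_normal((n, 1))    # structural error

Y2 = Z @ Pi + V2
y1 = Y2 * beta_true + u
Y = np.column_stack([y1, Y2])            # reduced-form outcome matrix [y_1 : Y_2]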
A GMM estimator for β is the minimizer of the criterionQ_n( β) =g_n( β) ^'W_n( β) g_n( β)where the weighting matrix may or may not depend on the unknown coefficient β. Different choices of W_n( β) will lead to different estimators β_n.For the 2SLS estimator, the weighting matrix does not depend on β:W_n^-1=V_u· n^-1∑_i=1^nZ_iZ_i^'= V_u· n^-1Z^'Z,where V_u is an estimator of V_u, which is the variance of u. Because Q_n( β) is quadratic in β, it is straightforward to show that the resulting estimator is 2SLS:β_n=[ Y_2^'Z( Z^'Z) ^-1Z^'Y_2] ^-1Y_2^'Z( Z^'Z) ^-1Z^'y_1.We can re-express the estimator asβ_n=[ Y_2^'Y_2] ^-1Y_2^'y_1,where Y_2=NY_2 and N=Z( Z^'Z) ^-1Z^' is the usual projection matrix. That is, in the first stage, we first regress Y_2 on Z to obtain the fitted values Y_2. In the second stage, we regress y_1 on the fitted values Y_2.For the LIML estimator, the weight matrix is formed by using β along with residuals from OLS regressions of y_1 and Y_2 on Z. Let b=( 1,-β ^') ^', N=Z( Z^'Z) ^-1Z^', and M=I-N. ThenV_u( β) =n^-1∑_i=1^n( v_1,i-V_2,i^'β) ^2=n^-1b^'Y^'MYb,where V=MY=MV. The GMM criterion is no longer quadratic in β once we useW_n( β) ^-1=V_u( β) · n^-1∑_i=1^nZ_iZ_i^'=V_u( β) · n^-1Z^'Z.Instead it is a ratio of quadratic forms:Q_n( β) =n^-1∑_i=1^n( y_1i-Y_2i^'β) Z_i( n^-1∑_i=1^nZ_iZ_i^') ^-1n^-1∑_i=1^nZ_i( y_1i-Y_2i^'β) /n^-1∑_i=1^n( V_1,i-V _2,i^'β) ^2.The minimum ofQ_n( β) =b^'Y^'NYb/ b^'Y^'MYbis well-known to lead to the LIML estimator (see<cit.> among others). This estimator is proportional to the eigenvector associated to the smallest eigenvalue λ _nof the characteristic polynomial | Y^'NY-λ .Y^'MY| =0.Once we consider different weighting functions W_n( β), we can wonder if there is the “best”possible choice. The answer depends if errors are heteroskedastic, clustered, etc. Only in special cases, such as with homoskedastic errors, are the 2SLS and LIML estimators “best.”To obtain the weighting function that optimally accounts forheteroskedasticity, clustering, serial correlation, and other departures from homoskedastic errors with no serial correlation, one first considers the (infeasibly estimated) variance ofvec( n^-1/2∑_i=1^nZ_iV_i^') =n^-1/2∑_i=1^n( V_i⊗ Z_i) .[ Because the Z_iV_i^' is an k× p matrix, we can stack its columns to form a single vector with the vec( ·) operator.]Under general conditions for the DGPs, the limiting variance exists and is given byΩ =lim_nn^-1∑_i=1^n∑_j=1^nC( V_i⊗ Z_i,V_j⊗ Z_j). Since we do not observe the errors V_i, we can make this feasible by replacing them with, as an example, the OLS residuals V_i. There are different estimators for the variances and covariances above, each one of them suited to different assumptions on the DGPs. For example, typically, one uses the variance estimate of <cit.> for heteroskedastic errors that are serially uncorrelated:Ω_n=n^-1∑_i=1^n( V _i⊗ Z_i) ( V_i⊗ Z_i) ^'.Henceforth, we will employ the broader notation Ω_n without explicitly specifying its formulae for different departures from homoskedasticity. Examples of robust variance estimators include<cit.> for heteroskedasticity, <cit.> and<cit.> for both heteroskedasticity and autocorrelation (HAC), and<cit.> for clustered errors. See<cit.> for the IV model.The GMM criterion is thenQ_n( β)= [ n^-1∑_i=1^n ( Y_i-X_i^'β) Z_i] W_n[ n^-1∑_i=1^nZ_i( Y_i-X_i^'β)] , whereW_n^-1 = ( b_n⊗ I_k) ^' Ω_n( b_n⊗ I_k)and b_n=( 1,-β_n^') ^',with β_n being a preliminary consistent estimator of β. 
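Continuing the simulated example, the sketch below implements the 2SLS closed form and a White-type estimator Ω̂_n built from the reduced-form OLS residuals, directly following the expressions above; the variable names are illustrative.

# 2SLS through the projection matrix N = Z (Z'Z)^{-1} Z'.
ZtZ_inv = np.linalg.inv(Z.T @ Z)
N = Z @ ZtZ_inv @ Z.T
beta_2sls = np.linalg.solve(Y2.T @ N @ Y2, Y2.T @ N @ y1)

# White-type estimator Omega_n = n^{-1} sum_i (V_i ⊗ Z_i)(V_i ⊗ Z_i)',
# with the unobserved errors replaced by the reduced-form residuals V_hat = M Y.
V_hat = Y - N @ Y
Omega_n = np.zeros((k * (p + 1), k * (p + 1)))
for i in range(n):
    vz = np.kron(V_hat[i], Z[i])         # vec-stacked columns of Z_i V_i'
    Omega_n += np.outer(vz, vz)
Omega_n /= n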
Again, this criterionQ_n( β) =b^'Y^'ZW_nZ^'Ybis quadratic in β and we can easily find its closed-form solution:β_n=[ Y_2^'ZW_nZ^'Y_2] ^-1Y_2^'ZW_nZ^'y.which is the two-step GMM estimator. It simplifies to the 2SLS estimator if W_n^-1 is proportional to n^-1Z^'Z. When the weight matrix depends on β, we obtainQ_n( β)= n^-1∑_i=1^n( y_1i-Y_2i^'β) Z_iW_n( β) n^-1∑_i=1^nZ_i( y_1i-Y_2i^'β) , whereW_n( β) ^-1 = ( b⊗ I_k) ^' Ω_n( b⊗ I_k) .which is the Continuously Updating (CU) GMM estimator, proposed by <cit.>.§.§ Wald Test Statistics Finally, the usual Wald test statistics are based on the standard asymptotic approximation to the distribution of the GMM estimators. Under the true parameter β _0,n^1/2g_n( β _0) → _dN( 0,V_0) , where V_0=( b_0⊗ I_k) ^'Ω( b_0⊗ I_k)for b_0=( 1,-β _0^'). For convenience, we derive the asymptotic distribution where we use the parameter β _0 in the criterion function:Q_n( β) =g_n( β) ^'W_n( β _0) g_n( β) . This setup allows for the possibility that the limiting behavior of W_n( β _0) is not necessarily proportional to V_0^-1. Hence, β_n is not necessarily optimal. Under the usual asymptotics, the distribution of estimators which minimize Q_n( β) is the same as if we had used Q_n( β) instead, where the weight uses a preliminary estimator or uses β itself (derivations of these results are standard and can be found, e.g. in <cit.>). As a result, the 2SLS and LIML estimators are asymptotically equivalent, while the two-step GMM and CUE estimators are asymptotically equivalent as well. We derive the asymptotic distribution for the 2SLS and two-step GMM estimators from equations (<ref>) and (<ref>) and, so, for the LIML and continuously updating estimators as well.For the 2SLS estimator, we can writen^1/2( β_n-β _0) =[ n^-1Y_2^'Z( n^-1Z^'Z) ^-1n^-1Z^'Y_2] ^-1n^-1Y_2^'Z( n^-1Z^'Z) ^-1n^-1/2Z^'Vb.Assuming that the following probability limits exist,plimn^-1Y_2^'Z=E_Y_2^'Z and plimn^-1Z^'Z=E_Z^'Z,[ If the process for ( y_1,i,Y_2,i,Z_i) were ergodic with enough moments, then E_Z^'Z=E( Z_iZ_i^') and E_Y_2^'Z=E( Y_2iZ_i^'). However, this notation allows for deterministic values of IVs as well as clustering.]we then have n^1/2( β_n-β _0) → _dN( 0,B_0^-1A_0B_0^-1), whereB_0 = [ E_Y_2^'ZE_Z^'Z^-1E_Z^'Y_2 ] ^-1 andA_0 = E_Y_2^'ZE_Z^'Z^-1( b_0⊗ I_k) ^'Ω( b_0⊗ I_k) E_Z^'Z^-1E_Z^'Y_2.We can find some consistent estimators for A_0 and B_0, and derive a Wald statistic for the 2SLS and LIML estimators:𝒲_n^∗ = n( β_n-β _0) ^'[ B_n^-1A_nB _n^-1] ^-1( β_n-β _0) , where B_n = n^-1Y_2^'Z( Z^'Z) ^-1Z^'Y_2 and A_n = Y_2^'Z( Z^'Z) ^-1( b_n⊗ I_k) ^'Ω_n( b_n⊗ I_k) ( Z^'Z) ^-1Z^'Y_2,with b_n=( 1,-β_n^') ^' based on the respective 2SLS/LIML estimator.[ We typically use ( b_n⊗ I_k) ^' Ω_n( b_n⊗ I_k) as a consistent estimator for V_0. However, other estimators are possible, including ( b_0⊗ I_k) ^'Ω _n( b_0⊗ I_k).]Likewise, for the two-step GMM estimator, we find that n^1/2( β_n-β _0) → _dN( 0,B_0^-1), whereB_0=E_Y_2^'Z[ ( b_0⊗ I_k) ^'Ω( b_0⊗ I_k) ] ^-1E_Z^'Y_2.We can find a consistent estimator for B_0 and derive a Wald statistic for the two-step GMM and continuously updating estimator:𝒲_n^o = n( β_n-β _0) ^'B_n( β_n-β _0) , where B_n = n^-1Y_2^'Z[ ( b _n⊗ I_k) ^'Ω_n( b _n⊗ I_k) ] ^-1n^-1Z^'Y_2.with b_n=( 1,-β_n^') ^' based on the GMM/CU estimators.[ We can also use here either the null value β _0 or the preliminary estimator β_n for the variance estimator.] 
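A sketch assembling the robust Wald statistic 𝒲_n^∗ for 2SLS from B_n and A_n as defined above, continuing the running example (the null value β_0 used here is arbitrary):

beta0 = np.array([[0.5]])                               # null value being tested
b_n = np.vstack([np.ones((1, 1)), -beta_2sls])          # b_n = (1, -beta_hat')'
bIk = np.kron(b_n, np.eye(k))                           # (b_n ⊗ I_k)
V0_hat = bIk.T @ Omega_n @ bIk                          # consistent estimate of V_0

G = Y2.T @ Z @ ZtZ_inv                                  # Y_2'Z (Z'Z)^{-1}
B_n = (G @ Z.T @ Y2) / n
A_n = G @ V0_hat @ G.T
avar = np.linalg.inv(B_n) @ A_n @ np.linalg.inv(B_n)    # B_n^{-1} A_n B_n^{-1}
diff = beta_2sls - beta0
W_star = float(n * diff.T @ np.linalg.solve(avar, diff))
# Comparing W_star with a fixed chi-squared quantile is exactly what breaks down
# under weak instruments; the conditional critical value derived in the next
# subsection replaces that step.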
§.§ Valid Critical Value Functions As emphasized in <cit.>, since the nuisance parameter representing the strength of the first stage may be arbitrarily close to zero, then the usual constant critical values cannot be valid; indeed, <cit.> points out that any valid confidence set in this context must be unbounded with positive probability, which clearly cannot be the case with a constant critical value for any of the Wald statistics mentioned above. To derive valid critical values, using the conditioning strategy of <cit.>, we begin by defining the quantityR=( Z^'Z) ^-1/2Z^'Y=[ R_1:R_2] ,where the k-dimensional vector R_1 is the first column of R and the k× p-matrix R_2 is the last p columns of R. The standardization avoids multiplication by the sample size n. The asymptotic variance of vec(R) isΣ =( I_p+1⊗( E_Z^'Z) ^-1/2) Ω( I_p+1⊗( E_Z^'Z) ^-1/2) = [ [ Σ _11 Σ _12; Σ _21 Σ _22 ] ] ,where the matrix Σ is being partitioned by submatrices of columns/rows of dimensions 1 and p. Analogously, we can use the estimatorΣ_n=( I_p+1⊗( n^-1Z^'Z) ^-1/2) Ω_n( I_p+1⊗( n^-1Z^'Z) ^-1/2) =[ [ Σ_11,n Σ_12,n; Σ_21,n Σ_22,n ] ] . Up to a scale of the sample size n, the GMM criterion isQ_n( β) =b^'R^'( n^-1Z^'Z) ^1/2W_n( β) ( n^-1Z^'Z) ^1/2Rb=b^'R^'W _n( β) Rb,where W_n( β) =( n^-1Z^'Z) ^1/2W_n( β) ( n^-1Z^'Z) ^1/2. It is clear that only R and the weight function W _n( β) fully determine the estimator β _n. To illustrate this connection, recall that the 2SLS estimator results if W_n^-1 is proportional to n^-1Z^'Z. For such a weight, we have W_n( β) =I_k, and we trivially have the 2SLS being dependent only on R. Indeed, the 2SLS estimator can be written asβ=( R_2^'R_2) ^-1R_2^'R_1.The same holds for the other estimators as well. For example, we take the LIML estimator. When W_n( β) ^-1=V_u( β) · n^-1Z^'Z, we have W_n( β) =V_u( β) ^-1I_k. The LIMLestimator solvesQ_n( β) =b^'R^'Rb/ b^'Φ_nb, where Φ _n=n^-1Y^'MY. Having found that the estimators are completely determined by the standardized reduced-form coefficients R and the function W _n( β), we can turn our attention to the Wald statistics.The Wald statistic for the 2SLS/LIML estimators has the form𝒲_n^∗=( β_n-β _0) ^'[ ( R_2^'R_2) ^-1R_2^'( b_n⊗ I_k) ^' Σ_n( b_n⊗ I_k) R_2( R_2^'R_2) ^-1] ^-1( β _n-β _0) .Hence, it is a function of R and Σ_n (or, for LIML, Φ_n). Likewise, the Wald statistic for the two-step GMMand CU estimators can be written as𝒲_n^o=( β_n-β _0) ^'[ R_2^'[ ( b_n⊗ I_k) ^'Σ_n( b_n⊗ I_k) ] ^-1R_2] ( β_n-β _0) ,which again depends only on R and Σ_n (as long as the preliminary estimator β_n depends only on R and Σ_n as well, such as the 2SLS estimator). In short, the Wald statistics associated with any of the estimators we have discussed above are functions of R, Σ_n, and Φ_n as shown above.We now apply the conditioning approach of <cit.>, beginning by finding a useful transformation of R:R_0=RB_0=[ R_u:R_2] , where B_0=[ [1 0_1× p;-β _0I_p ] ] .That is, R_u=R_1-R_2β _0. Note that the asymptotic variance of R_0 is given byΣ _0=( B_0^'⊗ I_k) Σ( B_0⊗ I_k) =[ [ Σ _uu Σ _u2; Σ _2u Σ _22 ] ] .This quantity can of course be consistently estimated as well (regardless of identification of β):Σ_0,n=( B_0^'⊗ I_k)Σ_n( B_0⊗ I_k) =[ [ Σ_uu,n Σ_u2,n; Σ_2u,n Σ_22,n ] ] .Consider a transformation of R.D=vec( R_2) -Σ_2u,nΣ _uu,n^-1R_u.Given Σ_n, there is a one-to-one transformation between the pair R and R_2 and the pair R_u and D. 
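The quantities R, Σ̂_n, R_u, and D defined above can be computed directly, and, anticipating the conditional quantile construction formalized in the next paragraphs, the data-dependent critical value can be approximated by Monte Carlo: redraw R_u ∼ N(0, Σ̂_uu) holding D fixed and take the 1−α quantile of the resulting Wald statistics. A sketch, continuing the running example:

# Standardized reduced-form coefficients R = (Z'Z)^{-1/2} Z'Y and Sigma_n.
evals, evecs = np.linalg.eigh(Z.T @ Z)
ZtZ_mhalf = evecs @ np.diag(evals ** -0.5) @ evecs.T    # (Z'Z)^{-1/2}
R = ZtZ_mhalf @ Z.T @ Y                                 # columns [R_1 : R_2]

T = np.kron(np.eye(p + 1), np.sqrt(n) * ZtZ_mhalf)     # I ⊗ (n^{-1}Z'Z)^{-1/2}
Sigma_n = T @ Omega_n @ T                               # estimated Cov of vec(R)

B0 = np.block([[np.ones((1, 1)), np.zeros((1, p))],
               [-beta0, np.eye(p)]])                    # [[1, 0],[-beta_0, I_p]]
R0 = R @ B0                                             # [R_u : R_2]
R_u = R0[:, :1]
M0 = np.kron(B0.T, np.eye(k))
Sigma0_n = M0 @ Sigma_n @ M0.T                          # (B_0' ⊗ I_k) Sigma (B_0 ⊗ I_k)
Suu, S2u = Sigma0_n[:k, :k], Sigma0_n[k:, :k]
D = R0[:, 1:].reshape(-1, 1, order="F") - S2u @ np.linalg.solve(Suu, R_u)

def wald_2sls(Ru):
    # Rebuild R_2 from (Ru, D), then evaluate the 2SLS Wald statistic in R form.
    vecR2 = D + S2u @ np.linalg.solve(Suu, Ru)
    R2 = vecR2.reshape(k, p, order="F")
    bhat = np.linalg.solve(R2.T @ R2, R2.T @ (Ru + R2 @ beta0))   # 2SLS
    bvec = np.vstack([np.ones((1, 1)), -bhat])
    Vb = np.kron(bvec, np.eye(k)).T @ Sigma_n @ np.kron(bvec, np.eye(k))
    C = np.linalg.inv(R2.T @ R2)
    avar_b = C @ R2.T @ Vb @ R2 @ C
    d = bhat - beta0
    return float(d.T @ np.linalg.solve(avar_b, d))

# Conditional critical value: 1 - alpha quantile over R_u ~ N(0, Suu), D fixed.
L = np.linalg.cholesky(Suu)
sims = [wald_2sls(L @ rng.standard_normal((k, 1))) for _ in range(5000)]
c95 = np.quantile(sims, 0.95)
reject = wald_2sls(R_u) > c95            # wald_2sls(R_u) reproduces W_star above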
Since we have established that all of the Wald statistics above can be written as functions of R, Σ_n, and Φ_n, this means that they can also be written as functions of R_u,D,Σ_n, and Φ_n. Importantly, adopting the appropriate assumptions relevant for HAC (e.g. see<cit.> or<cit.>), it can be shown that ( R_u,D) → _d( ℛ_u,𝒟), where ℛ_u and 𝒟 are asymptotically normal and independent, with ℛ_u being mean zero with a variance matrix that can be consistently estimated under the null – that is,ℛ_u∼ N( 0,Σ _uu) under the null. As <cit.> shows, this allows one to establish the distribution of test statistics even in the presence of the unknown nuisance parameter (the mean of R_2), since the distribution of ℛ_u conditional on 𝒟 is the same as the marginal distribution.[ We will not standardize here the R_u and D statistics. However, we could have worked with their respective standardized versions, S=[ ( b_0^'⊗ I_k) Σ( b_0⊗ I_k) ] ^-1/2Rb_0 and T=[ ( A_0^'⊗ I_k) Σ ^-1( A_0⊗ I_k) ] ^-1/2( A_0^'⊗ I_k) Σ ^-1vec( R), as in <cit.>.]We can write all Wald statistics as𝒲_n=ψ( R_u,D,Σ _n,Φ_n)(where we explicitly state the distribution of R_u depends on the sample size n). Its asymptotic behavior is given by𝒲_n=ψ( ℛ_u,𝒟,Σ ,Φ) .where Φ =plim Φ_n=plim n^-1V^'MV (if the process is ergodic, Φ is just the variance of the reduced-form errors V). We then find the 1-α quantile, say, c_α( d,Σ ,Φ) of the null asymptotic distribution ofψ( ℛ_u,d,Σ ,Φ) , where ℛ _u∼ N( 0,Σ _uu) .The final conditional test rejects the null when𝒲_n=ψ( R_u,D,Σ _n,Φ_n) >c_α( D, Σ_n,Φ_n) . § GENERALIZATION TO WEAKLY-IDENTIFIED MODELS (INCLUDING GMM)Summarizing the setup in <cit.>, it is assumed that there is a sequence of models F_n( θ ,γ), which is indexed by the sample size n. To illustrate the extension, we focus on aparameter of interest θ∈Θ, and presume there is an l× 1 consistently estimable nuisance parameter γ∈Γ. The objective is to test the null hypothesis θ =θ _0, presuming the availability of three quantities: 1) a standardized sample moment vector (or distance function) evaluated at the null, h_n( θ _0)[ For the linear model, we can take either the (standardized) moment condition h_n( β) =( Z^'Z) ^-1/2Z^'( y_1-Y_2β) or the distance function h( Π ,β) =( Z^'Z) ^-1Z^'Y-[ Πβ :Π].]; 2) a sample gradient of h_n( θ) with respect to θ evaluated at the null, Δ h_n( θ _0); and 3) the consistent estimate γ̂ for γ.The main assumptions in <cit.> are that for any true value ( θ ,γ) ∈Θ×Γ:( [ h_n( θ _0); Δ h_n( θ _0) ] ) d→( [ h( θ _0); Δ h( θ _0) ] )and( [h( θ _0); vec( Δ h( θ _0) ) ] ) ∼ N( ( [ m( θ _0);vec( μ) ] ) ,Σ _0) , where Σ _0=( [Σ _hhΣ _hθ; Σ _θ hΣ _θθ ] )and γ̂p→γ. It is further assumed that Σ _hθ and Σ _hh are continuous in γ, and hence consistently estimable.The mean m( θ _0) belongs to a set M( μ ,γ) ⊆ R^k, with μ∈ℳ, and is defined so that when θ =θ _0, m( θ _0) =0. The goal is to test the null hypothesis ( m( θ _0) ,μ) =( 0,μ) against the alternative ( m( θ _0) ,μ) =( ℳ\{ 0} ,μ) , for any unknown value of μ.We can once again consider the k× 1 quantity𝒟=vec( Δ h( θ _0) ) -Σ _θ hΣ _hh^-1h( θ _0)which, by construction is independent of h( θ _0). The Wald statistic based on the 2SLS estimator for the linear model simplifies to𝒲_n = R_u^'R_2[ R_2^'( b_n⊗ I_k) ^'Σ _0,n( b_n⊗ I_k) R_2] ^-1R_2^'R_u, where b_n = ( 1,-R_u^'R_2( R_2^'R_2) ^-1) ^'.We can thus define a nonlinear analog as𝒲_n = h_n^'Δ h_n[ Δ h_n^'( b_n⊗ I_k) ^' Σ_n( b_n⊗ I_k) Δ h_n] ^-1Δ h_n^'h_n, where b_n = ( 1,-h_n^'Δ h_n( Δ h_n^'Δ h_n) ^-1) ^'. 
(where we have suppressed the dependence on θ _0), which will converge in distribution to𝒲≡( h^'Δ h) [ Δ h^'( b⊗ I_k) ^'Σ( b⊗ I_k) Δ h] ^-1( Δ h^'h)After substituting in Δ h=𝒟+Σ _θ hΣ _hh^-1h, then it is easy to compute the ( 1-α)th conditional quantile defined by[ 𝒲>c( d,Σ ;α) |𝒟=d] =α The test is straightforward to implement as follows: reject the hypothesis if and only if𝒲_n>c( D_n,Σ _n;α) , where D_n=vec( Δ h_n( θ _0) ) -Σ_θ h Σ_hh^-1h_n( θ _0) .This test will have the property, under the null, thatlim[ 𝒲_n>c( D_n, Σ_n;α) |D_n=d] =αfor all values of d and hencelim[ 𝒲_n>c( D_n, Σ_n;α) ] =αas desired.§ CONCLUSION We are motivated by providing an inference method for researchers interested in the over-identified linear instrumental variables model, and who have a preference for using the 2SLS estimator β̂ for inference, and who do not wish to rely on the assumption of homoskedasticity. If errors are assumed to be homoskedastic, one can use the results of<cit.> and <cit.> to control the amount of distortion in inference. As noted in <cit.>, the tables in <cit.> do not apply to non-homoskedastic settings.<cit.> provides a conservative two-step procedure that builds on<cit.> for more general DGPs.To accommodate practitioners' preference for using the 2SLS estimator and conventional robust standard errors, we present the robust Conditional Wald (data-dependent) critical values for the Wald statistics robust to heteroskedastic, autocorrelated, and/or clustered errors, which turns out to be a relatively straightforward extension of the Conditional Wald test of<cit.>; its derivation has been neglected in the weak-IV literature, which has provided a number of other non-Wald procedures that are both robust to non-homoskedastic errors and to arbitrarily weak instruments.Using existing results from the weak IV literature, we also generalize the procedure to apply to the more general nonlinear models that are typically estimated via minimum distance or GMM. We can explore several Wald statistics within the nonlinear setup as well, contingent on the weights employed in the criterion function. The final conditional test would substitute the conventional critical value with a conditional quantile. aea | http://arxiv.org/abs/2311.15952v1 | {
"authors": [
"David S. Lee",
"Justin McCrary",
"Marcelo J. Moreira",
"Jack Porter",
"Luther Yap"
],
"categories": [
"econ.EM"
],
"primary_category": "econ.EM",
"published": "20231127155758",
"title": "Robust Conditional Wald Inference for Over-Identified IV"
} |
Ryan M. Lau [email protected]]Ryan M. Lau NSF’s NOIRLab, 950 N. Cherry Ave., Tucson, AZ 85719, USA 0000-0001-9315-8437]MatthewJ. Hankins Arkansas Tech University, 215 West O Street, Russellville, AR 72801, USA 0000-0002-9723-0421]Joel Sanchez-Bermudez Instituto de Astronomía, Universidad Nacional Autónoma de México, Apdo. Postal 70264, Ciudad de México 04510, Mexico Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany0000-0002-1536-7193]Deepashri Thatte Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USASydney Institute of Astronomy (SIfA), School of Physics, The University of Sydney, NSW 2006, Australia 0000-0001-7864-308X]Rachel A. Cooper Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0003-1251-4124]Anand Sivaramakrishnan Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Astrophysics Department, American Museum of Natural History, 79th Street at Central Park West, New York, NY 10024 Department of Physics and Astronomy, Johns Hopkins University, 3701 San Martin Drive, Baltimore, MD 21218, USACRESST II and X-ray Astrophysics Laboratory NASA/GSFC, Greenbelt, MD 20771, USA Institute for Astrophysics and Computational Sciences, The Catholic University of America, 620 Michigan Ave., N.E. Washington, DC 20064, USAIPAC, California Institute of Technology, 1200 E. California Boulevard, Pasadena, CA, 91125, USASpace Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA Code 667, NASA/GSFC, Greenbelt, MD 20771Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK 0000-0003-4870-5547]Olivia C. Jones UK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ, UK 0000-0001-7697-2955]Thomas Madura Department of Physics and Astronomy, San Jose State University, San Jose, CA, USA Département de Physique, Université de Montréal, C.P. 6128, succ. centre-ville, Montréal (Qc) H3C 3J7, Canada; and Centre de Recherche en Astrophysique du Québec, CanadaDepartment of Physics and Astronomy, University of California, Los Angeles, 430 Portola Plaza, Los Angeles, CA 90095-1547, USADepartment of Astronomy, School of Science, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, JapanDepartment of Physics and Astronomy, Bartol Research Institute, University of Delaware, Newark, DE 19716 USA 0000-0002-2806-9339]Noel D. Richardson Department of Physics and Astronomy, Embry-Riddle Aeronautical University, 3700 Willow Creek Rd, Prescott, AZ 86301, USASteward Observatory, 933 North Cherry Avenue, Tucson, AZ 85721, USA 0000-0001-7026-6291]Peter Tuthill School of Physics, The University of Sydney, NSW 2006, AustraliaSpace Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0001-9754-2233]Gerd Weigelt Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany 0000-0002-8092-980X]Peredur M. Williams Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UKWe present infrared aperture masking interferometry (AMI) observations of newly formed dust from the colliding winds of the massive binary system Wolf-Rayet (WR) 137 with JWST using the Near Infrared Imager and Slitless Spectrograph (NIRISS). 
NIRISS AMI observations of WR 137 and a point-spread-function calibrator star, HD 228337, were taken using the F380M and F480M filters in 2022 July and August as part of the Director's Discretionary Early Release Science (DD-ERS) program #1349. Interferometric observables (squared visibilities and closure phases) from the WR 137 “interferogram” were extracted and calibrated using three independent software tools: ImPlaneIA, AMICAL, and SAMpip. The analysis of the calibrated observables yielded consistent values except for slightly discrepant closure phases measured by ImPlaneIA. Based on all three sets of calibrated observables, images were reconstructed using three independent software tools: BSMEM, IRBis, and SQUEEZE. All reconstructed image combinations generated consistent images in both F380M and F480M filters. The reconstructed images of WR 137 reveal a bright central core with a ∼300 mas linear filament extending to the northwest.A geometric colliding-wind model with dust production constrained to the orbital plane of the binary system and enhanced as the system approaches periapsis provided a general agreement with the interferometric observables and reconstructed images. Based on a colliding-wind dust condensation analysis, we suggest that dust formation within the orbital plane of WR 137 is induced by enhanced equatorial mass-loss from the rapidly rotating O9 companion star, whose axis of rotation is aligned with that of the orbit.§ INTRODUCTION Classical Wolf-Rayet (WR) stars, the descendants of massive OB-type stars, are characterized by high luminosities (L_*≳10^5 ), hot surface temperatures (T_⋆≳ 40000 K), and fast, powerful winds (v_∞≳ 1000 km s^-1, Ṁ ≳ 10^-5.5 yr^-1; ).The high luminosities and intense UV radiation produced by WR stars may present an inhospitable environment for dust grains; however, infrared (IR) observations have demonstrated that a subset of WR stars exhibits dust production <cit.>. These dust-producing objects have enhanced carbon abundances (WC stars) and come in two categories: ones that continuously produce dust and those that produce dust in an episodic fashion <cit.>. For the majority of these objects, the dust-formation process is thought to be tied to their binary nature, where a collision between the WC star wind and that of an OB-type companion produces high densities and efficient cooling that favor dust formation <cit.>. However, the detailed physics of colliding-wind dust production is not well understood, and much of our present knowledge has been gleaned from periodic dust producers whose dust formation episodes have been directly tied to the binary orbit. The archetypical example of this is WR 140, which produces dust like clockwork during its periastron passage every 7.93 years <cit.>. WR 137 (also known as HD 192641) is another well-known dust producing WC binary which shows repeating dust formation events <cit.>. It is a confirmed binary consisting of a WC7 star and O9e type companion <cit.>, and has a measured orbital period of 13.1 years <cit.>. The IR light-curve of WR 137 exhibits high-amplitude (≳1 mag) IR brightening episodes with a similar cadence to the orbital period of the binary <cit.>, which suggests that dust formation is orbitally modulated as in WR 140.However, previous images of WR 137 taken in the near-IR by the Hubble Space Telescope (HST) present compact, “jetlike” dust emission <cit.> that does not resemble any of the other known dust-forming WC binary systems <cit.>. 
Investigating the connection between WR 137's dust formation and its orbital properties has been challenging due to the compact morphology of its extended dust emission. Observations with high spatial resolution, high imaging contrast, and high sensitivity at mid-IR wavelengths are therefore essential for resolving and characterizing the nature of the faint extended dust emission around WR 137. As part of Director's Discretionary Early Release Science (DD-ERS) program #1349, WR 137 was selected as a target to investigate colliding-wind dust formation and to demonstrate the scientific potential of the Aperture Masking Interferometry (AMI) mode of NIRISS. The 7-hole, non-redundant AMI pupil mask is shown in Fig. <ref> (Upper Left) overlaid on an outline of the JWST primary mirror. Although the non-redundant mask (NRM) that enables AMI[Also referred to as Sparse Aperture Masking (SAM).] on NIRISS blocks ∼85% of the incoming light <cit.>, AMI techniques provide angular resolution ∼2× higher than that of traditional imaging (i.e., ∼0.5 λ/D). This facilitates studies of small-angular-scale dust features near a bright point source.

Previously, AMI has been used on large, ground-based telescopes to study the size and morphology of several dusty WR stars at IR wavelengths <cit.>. JWST/NIRISS AMI operating at a temperature of 40 K brings the advantages of space-based interferometry at these wavelengths: about an order of magnitude better fringe phase measurements, which are sensitive to the point-antisymmetrical component of the target; two to three orders of magnitude better fringe amplitude calibration, which tracks symmetrical target structure; and much lower thermal background than ground-based high-resolution telescopes <cit.>. Notably, revealing diffuse extended components such as disks or rings is challenging from the ground because fringe amplitudes are often corrupted by atmospheric scintillation. JWST has therefore opened a new window on AMI observations from space and enables investigations that require high spatial resolution at mid-IR wavelengths and greater sensitivity to extended structures.

In this paper, we present JWST/NIRISS AMI observations of WR 137 using the F380M and F480M filters. The timing of the observations was planned to align with active dust formation from WR 137 based on its IR light curve and known spectroscopic orbit <cit.>. In Sec. <ref>, we describe the observations, preparation, data reduction, and the procedures used to extract interferometric observables from the NIRISS AMI data. In Sec. <ref>, we present reconstructed images of WR 137 using three different image reconstruction software packages, which we then analyze and compare with geometric dust models used to study other dust-forming WC binaries (e.g., <cit.>). Lastly, in Sec. <ref>, we discuss WR 137's dust morphology and the possible influence of its rapidly rotating companion star on conditions for dust formation via wind-wind collision.

Summary of Observations

Obs. Date | Object | Filter | Obs. Type | NGROUPS | NINTS | Dither Pattern
2022-07-15 13:37:18.246 | WR 137 | F480M | NRM/SUB80 | 4 | 1600 | Stare
2022-07-15 13:37:17.634 | WR 137 | F380M | NRM/SUB80 | 2 | 2720 | Stare
2022-07-15 12:12:59.178 | HD 228337 | F480M | NRM/SUB80 | 7 | 1020 | Stare
2022-07-15 12:20:57.815 | HD 228337 | F380M | NRM/SUB80 | 2 | 2720 | Stare
2022-07-13 12:43:39.770 | WR 137 | F480M | NRM/SUB80 | 4 | 400 | 4-point
2022-07-13 12:43:37.531 | WR 137 | F380M | NRM/SUB80 | 2 | 680 | 4-point
2022-08-09 17:33:05.182 | WR 137 | F480M | NRM/SUB80 | 4 | 400 | 4-point
2022-08-09 17:33:00.515 | WR 137 | F380M | NRM/SUB80 | 2 | 680 | 4-point
2022-08-09 17:51:33.502 | HD 228337 | F480M | NRM/SUB80 | 7 | 255 | 4-point
2022-08-09 17:33:45.279 | HD 228337 | F380M | NRM/SUB80 | 2 | 680 | 4-point

Summary of the JWST/NIRISS AMI observations of WR 137 and the PSF calibrator HD 228337. All AMI observations were taken using the NIRISS non-redundant mask (NRM) in the SUB80 subarray, which is standard for the AMI mode. NGROUPS and NINTS correspond to the number of groups per integration and the number of integrations per exposure, respectively. Observations using two different dither patterns, stare and 4-point, were performed with identical total exposure times. Note that a duplicate set of dithered data of WR 137 exists because PSF calibrator observations that were planned to follow the 13 July observations of WR 137 were skipped due to a mirror "tilt event." The set of WR 137 and PSF calibrator observations was successfully completed on 9 August.

§ OBSERVATIONS AND DATA REDUCTION

§.§ JWST/NIRISS AMI Observations, Preparation, and Data Reduction

JWST observations of WR 137 were carried out as part of DD-ERS program #1349 (PI: R. Lau) and were obtained using the AMI mode of the NIRISS instrument <cit.>. Observations of WR 137 and a PSF calibrator star (HD 228337) were taken using the F380M (λ_pivot=3.825 μm; Δλ=0.205 μm) and F480M (λ_pivot=4.815 μm; Δλ=0.298 μm) filters on 2022 July 13 and 15, and 2022 August 9. The NIRISS AMI observations used the standard AMI parameters with the "NISRAPID" readout pattern and the SUB80 (80×80 pixel) subarray, where the NIRISS detector plate scale is 65 mas/pixel and the readout time is 75.44 ms. The total/effective exposure times of the F480M and F380M observations of WR 137 were 10.6/8.0 min and 11.2/6.8 min, respectively. The total exposure time of the PSF calibrator is identical to that of WR 137. Details of the observations can be found in Table <ref>.

For both WR 137 and the PSF calibrator, target acquisition was performed on the same target as the observation in the "AMIBRIGHT" acquisition mode with the F480M filter and 5 groups in the NISRAPID readout pattern. NIRISS AMI observations also allow for direct imaging using the "CLEARP" aperture with the same filters and subarray used for the AMI exposures. Direct F380M and F480M images of WR 137 were therefore obtained with the AMI exposures on 2022 July 15. The total/effective exposure times of the direct F480M and F380M observations were 39.7/31.7 sec and 39.5/24.1 sec, respectively. However, as expected due to its brightness, the PSF core of WR 137 was saturated in both filters and was therefore not used for the analysis in this paper.

In preparation for the observations, simulated NIRISS AMI data products of WR 137 and the PSF calibrator HD 228337 were generated to ensure the success of the AMI observing strategy. The Multi-Instrument Ramp Generator (MIRaGe; <cit.>) was used to simulate raw NIRISS AMI data in the F380M and F480M filters.
MIRaGe simulations utilized the Astronomer's Proposal Tool (APT) file for DD-ERS #1349, a simulated sky scene of WR 137, and a source catalog with the flux densities of HD 228337 to generate raw exposures identical to the observing configuration specified in the APT file. The simulated raw exposures were then processed through the same procedures described further in this section as were performed on the real data. The simulated reduced data products demonstrate that the NIRISS AMI observing configuration described above is capable of detecting the predicted extended dust emission near WR 137 <cit.>. Due to WR 137's brightness in the mid-IR (F_3.8μm≳ 1 Jy; ), it was important that the number of groups used per integration (NGROUPS) in the observations be specified to avoid non-linearity effects. Charge migration, also known as the “brighter-fatter effect” <cit.>, is of particular concern for AMI data and occurs due to the buildup of a transverse electric field as charge accumulates in sufficiently illuminated pixels (e.g. ). This effect can significantly impact AMI data given that the core of the PSF contains critical information about the structure of the source.Commissioning tests indicated that charge migration between the brightest pixel at the center of an image of an unresolved star and its eight neighboring pixelsremained below 1% as long as the peak pixel's accumulated up-the-ramp charge remained below 30,000 electrons, the nominal saturation limit for AMI mode <cit.>.Sampling up the ramp with sufficient NGROUPS allows count rate in the peak pixel and surrounding pixels to be examined as a function of exposure time by processing sections of each integration in an exposure as if they were independent exposures. Observations of WR 137 with NGROUPS = 4 and the PSF calibrator star with NGROUPS = 7 were examined in this way and found to display non-linearity due to charge migration of at most 2% at the highest signal level reached. The impact of charge migration on our dataset is therefore negligible. Two sets of observations with identical exposure times were obtained using different dither patterns in order to investigate the optimal dither pattern for the NIRISS AMI mode. The two modes used were a non-dithered `stare' mode and a 4-point dither pattern with 4 primary dithers. Although further work is planned for a more detailed investigation of the dither vs. stare mode, the stare observations are used for the analysis in this paper because of larger uncertainties in the interferometric observables extracted from the dithered observations. Figure <ref> (Upper Right) shows the uv-plane coverage of the non-dithered F380M and F480m observations and presents the spatial frequencies sampled by the non-redundant mask. A duplicate set of dithered data of WR 137 exists (Tab. <ref>) because PSF calibrator observations that were supposed to be taken following the 13 July observations of WR 137 were skipped due to a mirror “tilt event” <cit.>. The set of WR 137 and PSF calibrator observations using the 4-pt primary dithers was rescheduled for 9 August and successfully completed. The observations of WR 137 and the PSF calibrator were reduced using version 1.11.2 of the standard JWST science calibration pipeline with version 11.17.2 of the Calibration Reference Data System (CRDS) for stage 1 and 2 processing. 
The raw data were first passed to the calwebb_detector1 routine (stage 1), a series of detector-level processing steps that take individual integrations and produce a corrected count rate image. The data were then passed to the calwebb_image2 routine (stage 2), which performs instrumental corrections and calibrations on the count rate images to produce fully calibrated exposures.

Figure <ref> (Bottom) presents a cutout of the "interferogram" patterns from the calibrated, non-dithered F480M exposures of WR 137 and the PSF calibrator. Since the mask holes are not obstructed by any mirror struts or mirror segment edges (Fig. <ref>, Upper Left), the holes share the same PSF envelope or "primary beam" shape. This primary beam is modulated by the interferometric fringe pattern generated by the baselines between pairs of holes. The hexagonal hole shape leads to the overall hexagonal shape of the interferogram. The core of the WR 137 interferogram appears more extended than that of the PSF calibrator, which indicates the presence of resolved emission around WR 137. This extended emission in the core of the WR 137 interferogram is a result of reduced fringe amplitudes caused by structure other than the bright central compact source. The more-or-less isotropic nature of the extended emission results from additional structures in various directions from the dominant compact source. For example, if the target were instead a moderate-contrast binary, the image would look extended along the two components' separation vector. In the case of WR 137, structure clearly extends more than a resolution element from the compact bright source. Note that fringe-phase information is difficult to see directly from the interferogram and requires additional tools for extraction.

§.§ Extracting Interferometric Observables with ImPlaneIA, SAMpip, and AMICAL

The final stage of NIRISS AMI data processing (stage 3) includes the extraction of interferometric observables (squared visibilities and closure phases) from the calibrated exposures. Details of this process are described in <cit.>. At the time of this work, the third stage did not use the JWST science calibration pipeline for NIRISS AMI data. The stage 3 processing tasks were conducted using three independent software packages: ImPlaneIA[<https://github.com/anand0xff/ImPlaneIA/commit/b6caf9db3b6976b3427b2a4ce5798e470f022c3d>] <cit.>, SAMpip <cit.>, and AMICAL <cit.>. Comparing the results across the three software tools was important for ensuring robust measurements, given that observable extraction of AMI data has had limited testing with space-based observations. For each software tool, measured observables for the science target (WR 137) were extracted and then calibrated using the observables of the PSF calibration source (HD 228337).

There is an important detail to be noted about the data processing related to cosmic ray events. In the version of the JWST science pipeline used on this dataset, cosmic ray events are flagged at stage 1 (the jump detection step); however, this did not apply to observations with NGROUPS less than 3. The F380M observations for this program, where NGROUPS = 2, notably do not meet this requirement, and so data impacted by cosmic rays were passed on to the stage 3 processing steps. The presence of cosmic rays can present issues with the image-plane algorithms (ImPlaneIA and SAMpip).
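For reference, the stage 1 and 2 processing described at the start of this section can be reproduced with the high-level interface of the jwst package. A minimal sketch, where the input file names are placeholders:

from jwst.pipeline import Detector1Pipeline, Image2Pipeline

# Stage 1: ramps-to-slopes processing of the raw (uncal) exposure.
Detector1Pipeline.call("wr137_f480m_uncal.fits", save_results=True)   # placeholder file

# Stage 2: instrumental corrections and calibrations of the count-rate product.
Image2Pipeline.call("wr137_f480m_rate.fits", save_results=True)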
All three software tools (ImPlaneIA, SAMpip, and AMICAL) extract interferometric observables from each integration and deliver the raw weighted mean and standard deviation in a final Optical Interferometry Flexible Image Transport System (OIFITS;) file. SAMpip and ImPlaneIA create a model of the data by fitting the fringes directly in the image plane, and the interferometric observables are derived from the model's coefficients.AMICAL instead computes the Fourier Transform of the interferogram and locates the position of the different spatial frequencies (and their corresponding visibility amplitudes and phases) using a matched filter template with the mask's geometry. Detailed information on the application of ImPlaneIA, SAMpip, and AMICAL to NIRISS AMI data is provided in Sec. 5.3 of <cit.>. Lastly, calibrating out the instrumental transfer function of the squared visibilities is done by dividing the raw observables of the target by those of the calibrator star, and the calibration of the closure phases is done by subtracting the closure phases of the calibrator from those of the target. Figure <ref> shows the resulting calibrated interferometric observables for the science target with all three softwares using both the F380M and F480M observations. The calibrated squared visibilities derived from all three softwares are in close agreement, as are the calibrated closure phases from AMICAL and SAMpip.The cause for the discrepant closure phases derived from ImPlaneIA in both F380M and F480M observations is currently under investigation by the ImPlaneIA development team. The discrepancy appears to be due to a difference in the ways ImPlaneIA and SAMpip compute the statistics of interferometric variables. ImPlaneIA determines errors in fringe phases, and separately, fringe amplitudes.SAMpip calculates statistics using fringe complex visibilities before converting the complex errors into real number fringe phase and amplitude errors.ImPlaneIA is currently being updated to use complex visibility averages, which will likely bring it into alignment with SAMpip's error estimation. The SAMpip and ImPlaneIA teams are working to quantify the effect of differences in the computed observables' statistics.§ RESULTS AND ANALYSIS§.§ Image Reconstruction of WR 137's Circumstellar Environment F380M and F480M images of WR 137 were reconstructed from the calibrated observables (Fig. <ref>) using three different software tools: BSMEM <cit.>, SQUEEZE <cit.>, and IRBis <cit.>. Each of the image reconstruction software tools was applied on the three sets of calibrated observables extracted by ImPlaneIA, AMICAL, and SAMpip. The reconstructed F380M and F480M images are presented in Figures <ref> & <ref>, respectively. This threefold approach provides independent methods of processing NIRISS AMI data and performing image reconstruction. The angular resolution achieved in these reconstructed images is ∼0.5 λ/D, which corresponds to ∼60 mas and ∼80 mas for the F380M and F480M observations, respectively. An absolute photometric calibration was not performed on the reconstructed images due to the saturation of WR 137 in the direct NIRISS images (Sec. <ref>). However, future work is planned on strategies for performing an absolute photometric calibration of AMI observations targeting bright sources (e.g. utilizing the PSF reference star).For each reconstructed image, similar image sizes and pixel scales corresponding to 128×128 pixels and 7.42 mas/pixel, respectively, were adopted. 
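As one concrete example of the stage 3 extraction and calibration described earlier in this section, AMICAL exposes a compact high-level workflow. The sketch below follows AMICAL's documented interface, but the file names are placeholders and keyword arguments may differ across package versions:

import amical
from astropy.io import fits

# Calibrated stage 2 data cubes for the target and the PSF calibrator.
cube_t = fits.getdata("wr137_f480m_calints.fits")        # placeholder file names
cube_c = fits.getdata("hd228337_f480m_calints.fits")

params = {"instrum": "NIRISS", "maskname": "g7", "filtname": "F480M"}
bs_t = amical.extract_bs(cube_t, "wr137_f480m_calints.fits",
                         targetname="WR 137", **params)
bs_c = amical.extract_bs(cube_c, "hd228337_f480m_calints.fits",
                         targetname="HD 228337", **params)

# Divide the target V^2 by the calibrator V^2, subtract the calibrator closure
# phases, and write the calibrated observables to an OIFITS file.
cal = amical.calibrate(bs_t, bs_c)
amical.save(cal, oifits_file="wr137_f480m.oifits")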
For the SQUEEZE reconstruction we employed entropy regularization and performed a simple grid search for a suitable hyper parameter value in the range from μ=10^-3 to μ=10^3 in logarithmic steps. For regularized minimization algorithms, the hyper parameter is a user-defined parameter that balances the weight between the likelihood and the prior probabilities when estimating the best-fit image. The best-fit values for the hyper parameters were found with μ≤10^-1 with minimal variation in the range of μ=10^-3-10^-1. Figures <ref> and <ref> present reconstructed images with hyper parameters of μ=10^-2. The final SQUEEZE results presented are generated from an average of model chains which achieve a reduced chi-squared value χ_r^2≲1.5.The BSMEM reconstruction automatically selects the hyper parameter for the reconstruction. This software uses entropy as its regularization function, which tends to produce smooth brightness distributions. Furthermore, the entropy regularizer ensures that the pixel values are always positive.The χ_r^2 values obtained are close to unity. For IRBis, the image was reconstructed using the edge-preserving regularization function. An additional mask was also utilized to only use the flux within the mask radius r_mask. The reconstruction was then performed using a combination of μ (0.1 to 10^-5) and r_mask (550 to 640 mas). The prior image was a Gaussian model with a FWHM of 650 mas. The best reconstructed image was finally selected using the reconstruction quality parameters q_rec based on the χ_r^2 and the residual ratio values ρρ <cit.>. The final IRBis results provided good agreement with the data for both F380M and F480M filters. The IRBis results also presented a systematically better fit for the closure phases than the visibility amplitude.Figures <ref> and <ref> demonstrate that all combinations of image reconstruction tool and observable extraction software produce similar reconstructed images of WR 137 in both wavelengths. The dust emission extending to the NW from WR 137 appears quasi-linear with some slight curvature angled to the north. The extent of the linear emission is slightly shorter in the F380M images (∼200 mas) compared to the F480M images (∼300 mas). The detection of more extended emission in the F480M images is likely due to lower uncertainties in the F480M observables (Fig. <ref>) and/or cooler dust temperatures at larger distances along the feature. NIRISS AMI performance may be better in the F480M data than the F380M data because the 65 mas detector pixels better sample the F480M data than the shorter wavelength F380M data.The faint structure extending to the SE, which is most prominent in the IRBis reconstructions of the ImPlaneIA observables, is unlikely real given the absence of this feature in all other image reconstructions. The appearance of the SE extension only in images reconstructed from the ImPlaneIA-calibrated observables is likely due to the slightly discrepant closure phase measurements (Fig. <ref>). A comparison of real astrophysical features and likely artifacts from the image reconstruction is presented in Figure <ref>, which shows the BSMEM-reconstructed image data from the F380M and F480M observables extracted by SAMpip. While the overlapping F380M and F480M emission of the bright, linear feature traces astrophysical emission from dust, the faint elliptical features that are displaced in radial position between the F380M and F480M images are most likely image reconstruction artifacts. 
The linear extended emission in the reconstructed images resembles previous observations of WR 137 obtained by ground-based mid-IR imaging <cit.>. The orientation of this feature is also consistent with the alignment of dust clumps revealed by near-IR imaging with HST <cit.>, which were obtained at a slightly later orbital phase (φ∼0-0.06) than the NIRISS observations (φ=0.9). The near-IR clumps therefore likely trace dust density enhancements along a continuous, linear feature consistent with the feature revealed in the NIRISS observations. A shorter linear emission feature resolved in the near-IR HST images, however, extends in the opposite direction (southeast) of the NIRISS feature. If the origin of this feature is linked to colliding stellar winds (See Sec. <ref>) the opposite orientation is likely due to the different orbital configurations of the colliding-wind binary between the NIRISS and HST observations. §.§ Colliding-wind Shock Opening Angle AnalysisThe linear morphology of WR 137's extended emission is different from the extended dust emission morphologies resolved around other dust-forming colliding-wind binaries <cit.>.Such systems typically show structures consistent with dust formed in a hollow conical wind-collision region revolving with the stars in their orbit and symmetrical about their line of centers. The conical shape of the “shock cone” assumes the collision of two isotropic winds, and its opening angle depends on the wind-momentum balance of the two stars (e.g. ). Assuming a purely hydrodynamic balance and spherically symmetric winds from the stars in WR 137, the half-opening angle of the thin shock cone (θ_h), appropriate for radiative post-shock plasma, can be derived from the following expression (See ): tan θ_h - θ_h = η π/1-η, where η is the wind-momentum ratio of the companion star and the WR star: η = Ṁ_OB v_OB/Ṁ_WR v_WR.A shock-cone half-opening angle of θ_h=18.6^∘, where η = 0.0038, can be derived from the mass-loss rates and terminal wind velocities of the two stars in WR 137 inferred from Potsdam Wolf–Rayet (PoWR; ) models presented by <cit.>: Ṁ_WR=10^-4.65 M_⊙ yr^-1, v_WR=1700 km s^-1, Ṁ_OB=10^-7.1 M_⊙ yr^-1, and v_OB=1800 km s^-1. If the dust emission around WR 137 traces the entire surface of its shock cone, an upper limit of ∼8^∘ for the shock cone half-opening angle can be derived based on the extent of the linear emission and the angular resolution: ∼300 mas and 80 mas, respectively, for the F480M observations.The linear morphology of the extended emission from WR 137 therefore does not appear to be consistent with the predicted 18.6^∘ half-opening angle. However, other factors can affect the dust morphology from colliding-wind binaries: the OB/WR-wind momentum ratio η may be overestimated, or the wind(s) may not be spherically symmetric thus leading to non-uniform dust formation across the surface of a possibly asymmetric shock cone.Since it is difficult to produce such a linear morphology even with a much smaller η, it is unlikely the morphology is solely due to an overestimate of η. For example, even if η were overestimated by a factor of 10, the full opening angle would be ∼17^∘ and would be resolvable in the reconstructed images. The impact of wind asymmetries is particularly important to consider given the likely presence of a decretion disk around the O9 companion <cit.>. Variable dust formation across the surface of the shock interface is therefore investigated in Sec. 
<ref> utilizing a geometric colliding-wind modeling tool.

Properties of WR 137 Geometric Dust Model

Adopted Properties:
Orbital period (P_Orb) | 13.1 yr
Periastron passage date (P_0) | 2023.85
Semi-major axis (a) | 8.56 mas
Eccentricity (e) | 0.315
Inclination (i) | 97.2^∘
Argument of periapsis (ω) | 0.6^∘
Longitude of the ascending node (Ω) | 117.91^∘
Orbital phase (φ) | 0.90
Shock-cone half-opening angle (θ_h) | 18.6^∘
Distance to WR 137 (d) | 1941 pc
Dust expansion velocity (v_exp) | 1700 km s^-1
Azimuthal dust modulation centroid (μ_Az) | 180^∘

Free Parameters:
Azimuthal dust modulation width (σ_Az) | 6^∘
Orbital dust modulation centroid (μ_Orb) | 265^∘
Orbital dust modulation width (σ_Orb) | 13^∘

The orbital period was adopted from <cit.> and Richardson et al. (in prep), and the other orbital parameters (P_0, a, e, i, ω, and Ω) were adopted from recent CHARA observations of WR 137 (Richardson et al., in prep). The dust expansion velocity is assumed to be consistent with the wind velocity of the WC star provided by <cit.>, and the half-opening angle is derived from the wind-momentum balance between the WC and companion star (Eq. <ref>; <cit.>). The adopted orbital phase is consistent with the expected orbital phase at the time of the observations, 15 July 2022 (Tab. <ref>). The azimuthal dust modulation centroid was assumed to be μ_Az=180^∘. The remaining three orbital and azimuthal dust modulation parameters (μ_Orb, σ_Orb, and σ_Az) were the only free parameters in the model and were adjusted until a satisfactory agreement with the calibrated observables from the F480M observations was achieved (Fig. <ref>, Right).

§.§ Interpreting WR 137's Extended IR Emission with Geometric Colliding-wind Models

Geometric modeling of dust production from colliding-wind binaries provides an important tool to interpret the spatially resolved dust emission around dust-forming WC binaries <cit.>. Such models, which output a map of dust column density, can assess whether the extended emission from WR 137 is consistent with dust production from a colliding-wind binary. The inputs required for the geometric dust modeling of WR 137 are the distance (d), shock-cone half-opening angle (θ_h), dust expansion velocity (v_exp), and the orbital parameters of the binary system, which include the orbital period (P_Orb), time of periastron passage (P_0), inclination (i), semi-major axis (a), eccentricity (e), argument of periapsis (ω), longitude of the ascending node (Ω), and an orbital phase at a given point in time (φ).

Orbital parameters were adopted from recent CHARA observations with MIRC+CLIMBX that resolved the binary components in WR 137 (Richardson et al., in prep) and also incorporated previous CHARA observations by <cit.>. The orbital parameters are provided in Table <ref> and assume that the O9 companion star is brighter than the WC7 star in the near-IR CHARA observations. If instead the WC7 star is assumed to be brighter than the O9 companion, 180^∘ would simply be added to the argument of periapsis (ω). The value of ω=0.6^∘, which assumes the O9 star is brighter than the WC7 star, is notably in closer agreement with the value derived independently from radial velocity observations by <cit.> (ω=326±15^∘) than when the WC7 star is assumed to be the brighter near-IR component (ω=180.6^∘).

The effects of non-uniform dust formation can be investigated with the geometric models by modulating dust formation across the surface of the shock cone in the orbital and azimuthal[i.e., perpendicular to the WR–O-star axis.] directions (see Fig. 3 of <cit.>).
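Two of the model inputs in Table <ref> can be checked with a few lines of code: the shock-cone half-opening angle implied by the wind-momentum balance of Eq. <ref>, and the Gaussian modulation weights just described. A sketch (the implementation of the actual modeling tool may differ):

import numpy as np
from scipy.optimize import brentq

# Half-opening angle from tan(theta_h) - theta_h = eta * pi / (1 - eta).
eta = 0.0038
theta_h = brentq(lambda t: np.tan(t) - t - eta * np.pi / (1.0 - eta), 1e-6, 1.4)
print(np.degrees(theta_h))               # ~18.6 degrees, as adopted in the table

# Gaussian dust-production weights over the shock-cone surface, as described above.
def dust_weight(true_anomaly_deg, azimuth_deg,
                mu_orb=265.0, sig_orb=13.0, mu_az=180.0, sig_az=6.0):
    # Relative dust density as a function of true anomaly and cone azimuth (deg).
    w_orb = np.exp(-0.5 * ((true_anomaly_deg - mu_orb) / sig_orb) ** 2)
    w_az = np.exp(-0.5 * ((azimuth_deg - mu_az) / sig_az) ** 2)
    return w_orb * w_az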
Orbitally modulated and azimuthally asymmetric dust production was notably inferred in the colliding-wind WR binary WR 140 <cit.>. As in <cit.>, dust production variability in the orbital and azimuthal directions are each modeled by a Gaussian distribution of dust density as a function of true anomaly and azimuthal angle, respectively.The parameters of modulated dust production are the centroids (μ) and widths (σ) of the Gaussian functions in the orbital and azimuthal directions. The model images were calculated at a spatial scale of 10 mas/pixel.An azimuthal centroid of μ_Az=180^∘ was adopted, which corresponds to enhanced dust formation along the orbital plane in the “trailing arm” of the shock cone (Fig. <ref>, Left and Center).This assumption is based on dust formation in WR 140, where dust emission is enhanced in the trailing arm <cit.>. It is important to note that an asymmetry in dust production between the leading and trailing arms is indeed predicted based on 3D hydrodynamic modeling <cit.>, however, such models of the colliding-wind binary WR 98a by <cit.> instead predict dust enhancement in the leading arm. Dust enhancement along the leading arm of WR 137's shock cone (i.e., μ_Az=0^∘) cannot be conclusively ruled out due to degeneracies in geometric model parameters.Multi-epoch observations resolving the changing dust morphology around WR 137 will be important for resolving these degeneracies and identifying the region of enhanced dust formation.In order to compare with the NIRISS AMI observations of WR 137, the dust column density maps output from the geometric modeling tool were converted to intensity assuming isothermal and optically-thin dust emission. Given the higher signal-to-noise ratio of the F480M observations, the column density maps were converted to intensity at 4.8 μm. Emission from the unresolved binary, which is not incorporated in the geometric models, is included by modifying the central pixel of the dust map output from the geometric modeling tool.Since dust emission is coincident along the line-of-sight with stellar emission from the binary, it is not possible to distinguish the stellar and dust emission from the NIRISS observations. However, archival mid-IR light curves that sampled WR 137 throughout its orbit can be used to estimate the emission from the central binary and circumstellar dust <cit.>. The stellar emission at 4.8 μm, F^*_4.8μm = 0.8 Jy, can be derived from the quiescent L'-band (λ=3.8 μm) flux density, F^*_3.8μm=1.7 Jy, and the λ F^*_λ∝λ^-1.86 power law that characterizes the IR spectrum of its quiescent emission <cit.>. The total 4.8 μm emission, F^Tot_4.8μm = 2.8 Jy is estimated from M'-band (λ=4.7 μm) photometry of WR 137 obtained by <cit.> in March 1996, which corresponds to a similar orbital phase of WR 137 as that of the JWST NIRISS observations (φ∼0.9).The dust column density map and the stellar component were simply scaled to 70% and 30% of F^Tot_4.8μm, respectively, to produce a 4.8 μm intensity model of WR 137.Synthetic interferometric observables were extracted from the model 4.8 μm intensity maps of WR 137 to compare with the NIRISS F480M observables. Specifically, the F480M observables obtained using SAMpip (Fig. <ref>) were used to compare the synthetic observables. The synthetic observables were extracted by computing the Discrete Fourier Transform of the model image using the spatial frequencies sampled with NIRISS AMI. 
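A minimal sketch of this DFT step is given below; the model image, pixel scale, and (u, v) coordinates are placeholders, and the assembly of squared visibilities and closure phases from the mask's baseline triangles follows as described in the next paragraph:

import numpy as np

def model_visibilities(image, pixscale_mas, u, v, wavelength_m):
    # Complex visibilities of a model image at the (u, v) baselines (in meters)
    # sampled by the mask, computed via a direct discrete Fourier transform.
    mas2rad = np.pi / (180.0 * 3600.0 * 1000.0)
    ny, nx = image.shape
    x = (np.arange(nx) - nx / 2.0) * pixscale_mas * mas2rad    # sky angles (rad)
    y = (np.arange(ny) - ny / 2.0) * pixscale_mas * mas2rad
    xx, yy = np.meshgrid(x, y)
    vis = np.array([np.sum(image * np.exp(-2j * np.pi * (uu * xx + vv * yy) / wavelength_m))
                    for uu, vv in zip(u, v)])
    return vis / image.sum()             # normalized so that V(0, 0) = 1

# Squared visibilities follow as |V|^2, and closure phases as sums of the phases
# of V around the baseline triangles of the 7-hole mask.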
From the Fourier amplitudes and phases, the squared visibilities and closure phases were constructed using the baseline information of the non-redundant mask. Uncertainties on the synthetic squared visibilities and closure phases were adopted from the SAMpip F480M observables. Note that the extraction of synthetic F480M observables assumes a single wavelength (λ=4.8 μm) for the dust emission from the model. With the azimuthal centroid fixed at μ_Az=180^∘, the remaining three free dust modulation parameters (σ_Az, μ_Orb, and σ_Orb) were adjusted to find a satisfactory agreement with the SAMpip F480M closure phases and squared visibilities.

Figure <ref> (Left and Center) presents the dust column density maps from the geometric modeling of WR 137 with and without the modulated dust production parameters from Table <ref>. The interferometric observables of the geometric dust models and the F480M observations presented in Fig. <ref> (Right) show that the observables from the model without modulated dust production disagree with those of the observations, as expected. Fig. <ref> (Right) demonstrates the general agreement between the observations and the modulated dust production model, in which dust production is confined to the orbital plane and is enhanced as the system approaches periapsis.

A reconstructed image was generated from the synthetic observables of the modulated geometric dust model using SQUEEZE with a configuration similar to that used for the F480M image reconstruction (see Sec. <ref>; Fig. <ref>). Figure <ref> presents the reconstructed image of the modulated dust model overlaid with the contours of the SQUEEZE-reconstructed image from the SAMpip F480M calibrated observables (see Fig. <ref>). The morphology and intensity profile of the extended dust emission from the modulated dust model show close agreement with the F480M observations. This agreement suggests that the extended dust emission from WR 137 can be explained by dust production via the colliding-wind mechanism with modulated dust formation rates along the surface of the shock cone.

§ DISCUSSION: DUST-FORMATION ENABLED BY ENHANCED EQUATORIAL MASS-LOSS FROM THE O9 COMPANION?

Observations of persistent double-peaked emission line profiles from the O9 companion star in WR 137 and its polarization signature indicate that the O9 star has a decretion disk that may be linked to the star's rapid rotation <cit.>. The enhancement of dust production along WR 137's orbital plane (Fig. <ref>) may therefore be influenced by anisotropic wind densities and velocities that differ from those of a non-rotating O star. The polarimetric observations of WR 137 notably indicate that the disk is aligned with the orbital plane of the system <cit.>. It is therefore plausible that an anisotropic, equatorially enhanced wind from the O9 companion presents more favorable conditions for dust formation via the colliding-wind mechanism along the orbital plane due to the enhanced densities. The potential influence of anisotropic winds on colliding-wind dust formation has been observed in another WR binary, Apep <cit.>.
Recent spectroscopic mid-IR observations of WR 137 with the Stratospheric Observatory for Infrared Astronomy (SOFIA) also suggest that the interaction between the WC and O9 winds is important for triggering dust formation <cit.>.

In order to investigate dust-forming conditions in the colliding-wind shock, we utilize the dimensionless parameter Γ, defined by <cit.> to characterize the radiative cooling of gas in the shock layer:

Γ ≃ 0.8 (Ṁ_WR/10^-5 M_⊙ yr^-1) (D/0.67 au)^-1 (v_WR/10^3 km s^-1)^-3 (1 + η^1/2) η^1/2,

where D is the instantaneous separation between the WR and the companion star, and η is the momentum ratio of the companion star wind over that of the WR star (Eq. <ref>). Higher values of Γ indicate increased cooling of the hot gas in the shock. The radiative cooling efficiency is an important factor in dust formation since the hot ∼10^7–10^8 K gas must cool to ∼1000 K in order to form dust. We note that Γ does not account for clumping, which likely plays a significant role in colliding-wind dust formation <cit.>. However, as a simplified investigation of dust formation in colliding winds, we assume that WR 137 forms dust for a range of Γ values similar to those derived from the dust formation episodes of WR 140. WR 140 is a well-studied, dust-forming colliding-wind binary that forms dust over a phase interval of φ_dust = ±0.04 around periastron passage <cit.>. Based on Eq. <ref> and adopting the stellar and orbital properties of WR 140 from previous literature, we calculate Γ for WR 140, Γ_WR140, and normalize it by Γ at the separation where WR 140's dust formation begins/ends, D_dust = 8.3 au [assuming the Gaia distance to WR 140 of 1.64 kpc <cit.>]:

Γ_WR140(D)/Γ_WR140(D_dust) = 1.0 (D/8.3 au)^-1.

We adopt a WR mass-loss rate, terminal wind velocity, and OB/WR wind momentum ratio for WR 140 of Ṁ_WR = 2×10^-5 M_⊙ yr^-1 <cit.>, v_WR = 2860 km s^-1 <cit.>, and η = 0.043 <cit.>, respectively. It is important to emphasize that utilizing Γ as a dust formation diagnostic does not capture all of the complex physics of dust condensation in colliding winds. For example, although Γ_WR140 increases as WR 140 approaches periastron (φ = 0; D = 1.5 au), observations demonstrate that dust formation at separation distances D < D_dust appears to decrease relative to D = D_dust <cit.>. When the stars are close, “sudden radiative braking” of the WR wind due to deceleration by the photospheric UV emission of the O star <cit.> likely plays an important role in mitigating dust production at close separation distances. However, an investigation of these effects is beyond the scope of this work.

Using the orbital parameters of WR 137 from Table <ref> and the mass-loss rates and terminal wind velocities from <cit.>, we can calculate Γ_WR137 normalized by Γ_WR140(D_dust) as follows:

Γ_WR137(D)/Γ_WR140(D_dust) = 0.82 (D/13.2 au)^-1,

where D = 13.2 au corresponds to the binary separation in WR 137 at the time of the JWST observations, at an orbital phase of φ = 0.90. A comparison of Γ/Γ_WR140(D_dust) for WR 140 and WR 137 over the stellar separations of their respective orbits is shown in Figure <ref>.
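These scaling relations are straightforward to evaluate numerically. The sketch below is a minimal illustration of the normalization against Γ_WR140(D_dust) using the WR 140 values adopted above; the WR 137 mass-loss rate and momentum ratio are hypothetical placeholders (only its wind velocity, 1700 km s^-1, is quoted in Table <ref>), so only the WR 140 self-normalization should be taken at face value.

```python
import numpy as np

# WR 140 values adopted in the text above
MDOT_140 = 2e-5   # WR mass-loss rate [M_sun / yr]
V_140 = 2860.0    # WR terminal wind velocity [km / s]
ETA_140 = 0.043   # OB/WR wind momentum ratio
D_DUST = 8.3      # separation where WR 140 dust formation begins/ends [au]

def gamma(mdot, D_au, v_kms, eta):
    """Radiative-cooling parameter Gamma from the expression above."""
    return 0.8 * (mdot / 1e-5) * (0.67 / D_au) * (1e3 / v_kms) ** 3 \
           * (1.0 + np.sqrt(eta)) * np.sqrt(eta)

gamma_140_dust = gamma(MDOT_140, D_DUST, V_140, ETA_140)

# WR 140 self-normalization recovers the (D / 8.3 au)^-1 relation:
print(gamma(MDOT_140, 16.6, V_140, ETA_140) / gamma_140_dust)  # -> 0.5

# WR 137 at D = 13.2 au; mdot_137 and eta_137 are *hypothetical placeholders*
# (the adopted literature values are not quoted in the text above):
mdot_137, v_137, eta_137 = 5e-5, 1700.0, 0.005
print(gamma(mdot_137, 13.2, v_137, eta_137) / gamma_140_dust)
```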
Despite the clear evidence for dust production at the time of the JWST observations (Fig. <ref> & <ref>), Fig. <ref> suggests that throughout its entire orbit, the radiative cooling of shocked gas in WR 137 is not strong enough to allow for dust production with the adopted stellar and orbital properties. However, the adopted mass-loss rates and wind velocities of WR 137 <cit.> were derived assuming spherical symmetry and thus do not account for enhanced equatorial mass-loss from the O-star companion. Interestingly, <cit.> predict that the equatorial mass-loss rate is enhanced by a factor of ∼2 for a rapidly rotating O star (see Table 1 of <cit.>). If the mass-loss rate and/or terminal wind velocity of the companion O star in WR 137 were enhanced by a factor of 2, the radiative cooling in the colliding winds would satisfy the dust-formation threshold at the stellar separation at the time of the JWST observations (Fig. <ref>). We note that the isotropic O-star mass-loss rates from <cit.> have large uncertainties (log Ṁ = -7.1^+1.0_-0.3), such that the upper limit would allow dust to form. However, a larger isotropic mass-loss rate from the O star would also lead to a larger value of η and a wider shock-cone opening angle (see Eq. <ref>), which is not consistent with the linear morphology of the dust emission in the NIRISS observations (Fig. <ref> & <ref>). We therefore argue that colliding-wind dust production in WR 137 is enabled by enhanced equatorial mass-loss from the O9 companion. Rapid stellar rotation may therefore promote dust production in colliding-wind binaries, and circumstellar dust emission confined to the orbital and/or equatorial plane of a rapidly rotating star likely indicates the impact of this effect.

It is important to address the following caveats on these interpretations. The analysis of dust-forming conditions using the Γ parameter notably does not consider a possible reduction in the mass-loss rate or wind velocity that would impact the wind momentum ratio η. Asymmetric winds would also alter the morphology of the colliding-wind shock cone, which would deviate from the geometric model presented in Fig. <ref>. Additionally, it is unclear whether rapid stellar rotation can result in enhanced mass-loss confined to such a narrow angular region (∼6^∘; Tab. <ref>) around the equator. Future work with hydrodynamical simulations (e.g., <cit.>) that can capture the complex physics of colliding-wind dust formation with asymmetric wind(s) will be important for investigating whether non-spherical mass loss can indeed promote dust production in colliding-wind binaries.

§ SUMMARY

In this paper, we presented JWST observations of the periodic dust-forming colliding-wind binary system WR 137 using the AMI mode of NIRISS with the F380M and F480M filters. The observations were taken as part of the WR DustERS program (ERS 1349) and provide some of the first science results using the NIRISS AMI observing mode on JWST. Notably, the WR 137 results demonstrate that NIRISS AMI is uniquely suited for observations of faint and extended mid-IR emission around a bright central core at angular scales of ≲400 mas. The NIRISS AMI observations of WR 137 were obtained with two different dither strategies (see Sec. <ref>).
The analysis in this work was performed on the `stare' (non-dithered) observations. We extracted interferometric observables from the interferogram pattern of the WR 137 observations and calibrated them against a PSF-calibrator star (HD 228337) using three different software packages: ImPlaneIA, SAMpip, and AMICAL. The calibrated squared visibilities of the WR 137 observations were consistent across all three software packages (Fig. <ref>). The calibrated closure phases from AMICAL and SAMpip were consistent as well, while the closure phases derived from ImPlaneIA were slightly discrepant. The cause of the discrepant closure phases is currently under investigation. The F480M observations yielded observables with the smallest uncertainties, which is likely due to the larger number of NGROUPS in each integration than in the F380M integrations (Tab. <ref>).

Images of WR 137 were reconstructed from the interferometric observables using three different software tools: BSMEM, SQUEEZE, and IRBis. Each tool utilized the three sets of observables extracted from the three software packages. The reconstructed F380M and F480M images presented a nearly identical picture of WR 137, with a bright central core and a ∼200–300 mas quasi-linear filament extending toward the northwest (Fig. <ref> & <ref>). The similarity of the images reconstructed with these three independent tools demonstrates the robustness of capturing WR 137's morphology from the NIRISS AMI observations.

The expected half-opening angle of the shock cone in WR 137 is θ_h = 18.6^∘ (see Sec. <ref>), but a shock cone that wide is not consistent with the linear morphology of the resolved dust emission. We used the geometric colliding-wind dust modeling tool from <cit.> to interpret the linear dust morphology extending from WR 137, using orbital parameters from recent CHARA observations by Richardson et al. (in prep; Tab. <ref>). In order to reproduce the linear appearance of the extended dust emission, variable dust production rates across the surface of the colliding-wind shock cone in the orbital and azimuthal directions were needed. A geometric model with dust formation confined to the orbital plane and enhanced as the system approaches periapsis provided closer agreement with the interferometric observables from the F480M observations than the model without any dust-production variability (Fig. <ref>). An image reconstructed by SQUEEZE from the modulated dust-model observables showed a close resemblance to the reconstructed image from the F480M observations (Fig. <ref>).

We discussed the possible effect of enhanced equatorial mass-loss from the O9 companion star in WR 137 <cit.> as an explanation for the linear morphology of the observed dust production. We used the analytical colliding-wind dust production framework by <cit.> and the well-studied orbital and stellar properties of WR 140 as a reference to investigate dust formation in WR 137. As a diagnostic for dust formation in colliding winds, we used estimated values of the parameter Γ, which characterizes the radiative cooling of gas in the colliding-wind shock layer <cit.>. We found that WR 137 should not be capable of forming dust given its adopted orbital and stellar properties (Fig. <ref>). However, if the equatorial mass loss from the rapidly rotating O9 companion star were enhanced by a factor of ∼2 <cit.>, this would be sufficient to enable dust formation in WR 137 (Fig. <ref>).
We therefore conclude that equatorially enhanced mass-loss from the rapidly rotating O9 companion star may be responsible for the dust formed along the orbital plane of WR 137, which is aligned with the equatorial plane of the O9 star <cit.>.

JWST observations of WR 137 with NIRISS AMI provided us with the imaging contrast and sensitivity at mid-IR wavelengths to perform a morphological analysis of dust-forming conditions in colliding winds. Our results present a first look at the capabilities of NIRISS AMI and indicate its potential for investigating a wide range of astrophysical environments with bright central cores and faint, close-in extended emission, including active galactic nuclei, evolved stars, and young stellar objects. Notably, the dynamic range of NIRISS AMI observations is expected to improve as technical studies progress on optimizing the analysis and calibration of these datasets, which will allow for probing even fainter features (see <cit.>). Sub-pixel dithering <cit.> and improved calibration of second-order systematics such as charge migration may also bring F380M and F430M data up to the quality seen in F480M (see Sec. <ref>). In this work, our aim was not only to investigate colliding-wind dust production, but also to provide a first look at science with the NIRISS AMI observing mode on JWST and to set the stage for the future of space-based aperture-masking observations.

RML would like to acknowledge the members of the entire WR DustERS team for their valuable discussions and contributions to this work. We thank Amaya Moro-Martin, William Januszewski, Neill Reid, Margaret Meixner, and Bonnie Meinke for their support of the planning and execution of our DD-ERS program. We would also like to acknowledge the NIRISS instrument and MIRaGe teams for their support of our observation preparation and data analysis plans. We also thank Tomer Shenar for the correspondence on the stellar wind models of WR 137. We would also like to acknowledge the anonymous referee for their insightful feedback that has improved the quality and clarity of this work. AFJM is grateful to NSERC (Canada) for financial aid. NDR is grateful for support from the Cottrell Scholar Award #CS-CSA-2023-143 sponsored by the Research Corporation for Science Advancement. J.S.-B. acknowledges the support received from the UNAM PAPIIT project IA 105023 and from the CONAHCyT “Ciencia de Frontera” project CF-2019/263975.

This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #1349. Support for program #1349 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. The material is based upon work supported by NASA under award number 80GSFC21M0002. All of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via https://doi.org/10.17909/ytb0-px48 (JWST/NIRISS).

[Allen et al.(1972)Allen, Swings, & Harvey]Allen1972 Allen, D. A., Swings, J. P., & Harvey, P. M.
1972, , 20, 333[Artigau et al.(2014)Artigau, Sivaramakrishnan, Greenbaum, Doyon, Goudfrooij, Fullerton, Lafrenière, Volk, Albert, Martel, Ford, & McKernan]Artigau2014 Artigau, É., Sivaramakrishnan, A., Greenbaum, A. Z., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9143, Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave, ed. J. Oschmann, Jacobus M., M. Clampin, G. G. Fazio, & H. A. MacEwen, 914340[Baron et al.(2010)Baron, Monnier, & Kloppenborg]Baron2010 Baron, F., Monnier, J. D., & Kloppenborg, B. 2010, in Optical and Infrared Interferometry II, ed. W. C. Danchi, F. Delplancke, & J. K. Rajagopal, Vol. 7734, International Society for Optics and Photonics (SPIE), 77342I. <https://doi.org/10.1117/12.857364>[Buscher(1994)]Buscher1994 Buscher, D. F. 1994, in Very High Angular Resolution Imaging, ed. J. G. Robertson & W. J. Tango (Dordrecht: Springer Netherlands), 91–93[Callingham et al.(2019)Callingham, Tuthill, Pope, Williams, Crowther, Edwards, Norris, & Kedziora-Chudczer]Callingham2019 Callingham, J. R., Tuthill, P. G., Pope, B. J. S., et al. 2019, Nature Astronomy, 3, 82[Cantó et al.(1996)Cantó, Raga, & Wilkin]Canto1996 Cantó, J., Raga, A. C., & Wilkin, F. P. 1996, , 469, 729[Coulton et al.(2018)Coulton, Armstrong, Smith, Lupton, & Spergel]Coulton2018 Coulton, W. R., Armstrong, R., Smith, K. M., Lupton, R. H., & Spergel, D. N. 2018, , 155, 258[Crowther(2007)]Crowther2007 Crowther, P. A. 2007, , 45, 177[Doyon et al.(2023)Doyon, Willott, Hutchings, Sivaramakrishnan, Albert, Lafrenière, Rowlands, Begoña Vila, Martel, LaMassa, Aldridge, Artigau, Cameron, Chayer, Cook, Cooper, Darveau-Bernier, Dupuis, Earnshaw, Espinoza, Filippazzo, Fullerton, Gaudreau, Gawlik, Goudfrooij, Haley, Kammerer, Kendall, Lambros, Ignat, Maszkiewicz, McColgan, Morishita, Ouellette, Pacifici, Philippi, Radica, Ravindranath, Rowe, Roy, Roy, Saad, Sohn, Talens, Touahri, Thatte, Taylor, Vandal, Volk, Wander, Warner, Zheng, Zhou, Abraham, Beaulieu, Benneke, Ferrarese, Jayawardhana, Johnstone, Kaltenegger, Meyer, Pipher, Rameau, Rieke, Salhi, & Sawicki]Doyon2023 Doyon, R., Willott, C. J., Hutchings, J. B., et al. 2023, , 135, 098001[Duvert et al.(2017)Duvert, Young, & Hummel]Duvert2017 Duvert, G., Young, J., & Hummel, C. A. 2017, , 597, A8[Eatson et al.(2022)Eatson, Pittard, & Van Loo]Eatson2022 Eatson, J. W., Pittard, J. M., & Van Loo, S. 2022, , 516, 6132[Fruchter & Hook(2002)]Fruchter2002 Fruchter, A. S., & Hook, R. N. 2002, , 114, 144[Gayley et al.(1997)Gayley, Owocki, & Cranmer]Gayley1997 Gayley, K. G., Owocki, S. P., & Cranmer, S. R. 1997, , 475, 786[Greenbaum et al.(2015)Greenbaum, Pueyo, Sivaramakrishnan, & Lacour]Greenbaum2015 Greenbaum, A. Z., Pueyo, L., Sivaramakrishnan, A., & Lacour, S. 2015, , 798, 68[Greenbaum et al.(2018)Greenbaum, Sivaramakrishnan, Sahlmann, & Thatte]ImPlaneIA2018 Greenbaum, A. Z., Sivaramakrishnan, A., Sahlmann, J., & Thatte, D. 2018, ImPlaneIA: Image Plane Approach to Interferometric Analysis, Astrophysics Source Code Library, record ascl:1808.004, , , ascl:1808.004[Hamann & Gräfener(2004)]Hamann2004 Hamann, W. R., & Gräfener, G. 2004, , 427, 697[Hamann et al.(2019)Hamann, Gräfener, Liermann, Hainich, Sander, Shenar, Ramachandran, Todt, & Oskinova]Hamann2019 Hamann, W. R., Gräfener, G., Liermann, A., et al. 2019, , 625, A57[Han et al.(2022)Han, Tuthill, Lau, & Soulain]Han2022 Han, Y., Tuthill, P. G., Lau, R. M., & Soulain, A. 
2022, , 610, 269[Han et al.(2020)Han, Tuthill, Lau, Soulain, Callingham, Williams, Crowther, Pope, & Marcote]Han2020 Han, Y., Tuthill, P. G., Lau, R. M., et al. 2020, , 498, 5604[Hankins et al.(2016)Hankins, Lau, Morris, Sanchez-Bermudez, Pott, Adams, & Herter]Hankins2016 Hankins, M. J., Lau, R. M., Morris, M. R., et al. 2016, , 827, 136[Harries et al.(2000)Harries, Babler, & Fox]Harries2000 Harries, T. J., Babler, B. L., & Fox, G. K. 2000, , 361, 273[Hendrix et al.(2016)Hendrix, Keppens, van Marle, Camps, Baes, & Meliani]Hendrix2016 Hendrix, T., Keppens, R., van Marle, A. J., et al. 2016, , 460, 3975[Hilbert et al.(2022)Hilbert, Sahlmann, Volk, Osborne, dthatte, Perrin, Chambers, Slavich, Taylor, Tollerud, & Lim]Hilbert2022 Hilbert, B., Sahlmann, J., Volk, K., et al. 2022, MIRaGe: Multi Instrument Ramp Generator, Astrophysics Source Code Library, record ascl:2203.008, , , ascl:2203.008[Hirata & Choi(2020)]Hirata2020 Hirata, C. M., & Choi, A. 2020, , 132, 014501[Hofmann et al.(2014)Hofmann, Weigelt, & Schertl]Hofmann2014 Hofmann, K. H., Weigelt, G., & Schertl, D. 2014, , 565, A48[Lamberts et al.(2017)Lamberts, Millour, Liermann, Dessart, Driebe, Duvert, Finsterle, Girault, Massi, Petrov, Schmutz, Weigelt, & Chesneau]Lamberts2017 Lamberts, A., Millour, F., Liermann, A., et al. 2017, , 468, 2655[Lau et al.(2020a)Lau, Eldridge, Hankins, Lamberts, Sakon, & Williams]Lau2020 Lau, R. M., Eldridge, J. J., Hankins, M. J., et al. 2020a, , 898, 74[Lau et al.(2020b)Lau, Hankins, Han, Endo, Moffat, Ressler, Sakon, Sanchez-Bermudez, Soulain, Stevens, Tuthill, & Williams]Lau2020WR112 Lau, R. M., Hankins, M. J., Han, Y., et al. 2020b, , 900, 190[Lau et al.(2022)Lau, Hankins, Han, Argyriou, Corcoran, Eldridge, Endo, Fox, Garcia Marin, Gull, Jones, Hamaguchi, Lamberts, Law, Madura, Marchenko, Matsuhara, Moffat, Morris, Morris, Onaka, Ressler, Richardson, Russell, Sanchez-Bermudez, Smith, Soulain, Stevens, Tuthill, Weigelt, Williams, & Yamaguchi]Lau2022 —. 2022, Nature Astronomy, 6, 1308[Lawson et al.(2004)Lawson, Cotton, Hummel, Monnier, Zhao, Young, Thorsteinsson, Meimon, Mugnier, Besnerais, Thiebaut, & Tuthill]Lawson2004 Lawson, P. R., Cotton, W. D., Hummel, C. A., et al. 2004, in New Frontiers in Stellar Interferometry, ed. W. A. Traub, Vol. 5491, International Society for Optics and Photonics (SPIE), 886 – 899. <https://doi.org/10.1117/12.550710>[Lefèvre et al.(2005)Lefèvre, Marchenko, Lépine, Moffat, Acker, Harries, Annuk, Bohlender, Demers, Grosdidier, Hill, Morrison, Knauth, Skalkowski, & Viti]Lefevre2005 Lefèvre, L., Marchenko, S. V., Lépine, S., et al. 2005, , 360, 141[Maeder & Meynet(2000)]Maeder2000 Maeder, A., & Meynet, G. 2000, , 361, 159[Marchenko & Moffat(2007)]Marchenko2007 Marchenko, S. V., & Moffat, A. F. J. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 367, Massive Stars in Interactive Binaries, ed. N. St. -Louis & A. F. J. Moffat, 213[Marchenko et al.(1999)Marchenko, Moffat, & Grosdidier]Marchenko1999 Marchenko, S. V., Moffat, A. F. J., & Grosdidier, Y. 1999, , 522, 433[Monnier et al.(2007)Monnier, Tuthill, Danchi, Murphy, & Harries]Monnier2007 Monnier, J. D., Tuthill, P. G., Danchi, W. C., Murphy, N., & Harries, T. J. 2007, , 655, 1033[Owocki et al.(1996)Owocki, Cranmer, & Gayley]Owocki1996 Owocki, S. P., Cranmer, S. R., & Gayley, K. G. 1996, , 472, L115[Pauls et al.(2005)Pauls, Young, Cotton, & Monnier]Pauls2005 Pauls, T. A., Young, J. S., Cotton, W. D., & Monnier, J. D. 
2005, , 117, 1255[Peatt et al.(2023)Peatt, Richardson, Williams, Karnath, Shenavrin, Lau, Moffat, & Weigelt]Peatt2023 Peatt, M. J., Richardson, N. D., Williams, P. M., et al. 2023, , 956, 109[Rajagopal et al.(2007)Rajagopal, Menut, Wallace, Danchi, Chesneau, Lopez, Monnier, Ireland, & Tuthill]Rajagopal2007 Rajagopal, J., Menut, J.-L., Wallace, D., et al. 2007, , 671, 2017[Ray et al.(2023)Ray, Sallum, Hinkley, Sivamarakrishnan, Cooper, Kammerer, Greebaum, Thatte, Lazzoni, Tokovinin, de Furio, Factor, Meyer, Stone, Carter, Biller, Skemer, Suarez, Leisenring, Perrin, Kraus, Absil, Balmer, Bonnefoy, Bryan, Betti, Boccaletti, Bonavita, Booth, Bowler, Briesemeister, Cantalloube, Chauvin, Christiaens, Cugno, Currie, Danielski, Dupuy, Faherty, Chen, Calissendorff, Choquet, Fitzgerald, Fortney, Franson, Girard, Grady, Gonzales, Henning, Hines, Hoch, Hood, Howe, Janson, Kalas, Kennedy, Kenworthy, Kervella, Kitzmann, Kuzuhara, Lagrange, Lagage, Lawson, Lew, Liu, Liu, Llop-Sayson, Lloyd, Lueber, Macintosh, Manjavacas, Marino, Marley, Marois, Martinez, Matthews, Matthews, Mawet, Mazoyer, McElwain, Metchev, Miles, Millar-Blanchaer, Molliere, Moran, Morley, Mukherjee, Palma-Bifani, Pantin, Patapis, Petrus, Pueyo, Quanz, Quirrenbach, Rebollido, Adams Redai, Ren, Rickman, Samland, Sargent, Schlieder, Schneider, Stapelfeldt, Sutlieff, Tamura, Tan, Theissen, Uyama, Vigan, Vasist, Vos, Wagner, Wang, Ward-Duong, Whiteford, Wolff, Worthen, Wyatt, Ygouf, Zhang, Zhang, Zhang, & Zhou]Ray2023 Ray, S., Sallum, S., Hinkley, S., et al. 2023, arXiv e-prints, arXiv:2310.11508[Richardson et al.(2016)Richardson, Shenar, Roy-Loubier, Schaefer, Moffat, St-Louis, Gies, Farrington, Hill, Williams, Gordon, Pablo, & Ramiaramanantsoa]Richardson2016 Richardson, N. D., Shenar, T., Roy-Loubier, O., et al. 
2016, , 461, 4115[Rigby et al.(2023)Rigby, Perrin, McElwain, Kimble, Friedman, Lallo, Doyon, Feinberg, Ferruit, Glasse, Rieke, Rieke, Wright, Willott, Colon, Milam, Neff, Stark, Valenti, Abell, Abney, Abul-Huda, Scott Acton, Adams, Adler, Aguilar, Ahmed, Albert, Alberts, Aldridge, Allen, Altenburg, Álvarez-Márquez, Alves de Oliveira, Andersen, Anderson, Anderson, Argyriou, Armstrong, Arribas, Artigau, Arvai, Atkinson, Bacon, Bair, Banks, Barrientes, Barringer, Bartosik, Bast, Baudoz, Beatty, Bechtold, Beck, Bergeron, Bergkoetter, Bhatawdekar, Birkmann, Blazek, Blome, Boccaletti, Böker, Boia, Bonaventura, Bond, Bosley, Boucarut, Bourque, Bouwman, Bower, Bowers, Boyer, Bradley, Brady, Braun, Breda, Bresnahan, Bright, Britt, Bromenschenkel, Brooks, Brooks, Brown, Brown, Brown, Bunker, Burger, Bushouse, Cale, Cameron, Cameron, Canipe, Caplinger, Caputo, Cara, Carey, Carniani, Carrasquilla, Carruthers, Case, Catherine, Chance, Chapman, Charlot, Charlow, Chayer, Chen, Cherinka, Chichester, Chilton, Chonis, Clampin, Clark, Clark, Coe, Coleman, Comber, Comeau, Connolly, Cooper, Cooper, Coppock, Correnti, Cossou, Coulais, Coyle, Cracraft, Curti, Cuturic, Davis, Davis, Dean, DeLisa, deMeester, Dencheva, Dencheva, DePasquale, Deschenes, Hunor Detre, Diaz, Dicken, DiFelice, Dillman, Dixon, Doggett, Donaldson, Douglas, DuPrie, Dupuis, Durning, Easmin, Eck, Edeani, Egami, Ehrenwinkler, Eisenhamer, Eisenhower, Elie, Elliott, Elliott, Ellis, Engesser, Espinoza, Etienne, Etxaluze, Falini, Feeney, Ferry, Filippazzo, Fincham, Fix, Flagey, Florian, Flynn, Fontanella, Ford, Forshay, Fox, Franz, Fu, Fullerton, Galkin, Galyer, García Marín, Gardner, Gardner, Garland, Garrett, Gasman, Gaspar, Gaudreau, Gauthier, Geers, Geithner, Gennaro, Giardino, Girard, Giuliano, Glassmire, Glauser, Glazer, Godfrey, Golimowski, Gollnitz, Gong, Gonzaga, Gordon, Gordon, Goudfrooij, Greene, Greenhouse, Grimaldi, Groebner, Grundy, Guillard, Gutman, Ha, Haderlein, Hagedorn, Hainline, Haley, Hami, Hamilton, Hammel, Hansen, Harkins, Harr, Hart, Hart, Hartig, Hashimoto, Haskins, Hathaway, Havey, Hayden, Hecht, Heller-Boyer, Henriques, Henry, Hermann, Hernandez, Hesman, Hicks, Hilbert, Hines, Hoffman, Holfeltz, Holler, Hoppa, Hott, Howard, Howard, Hunter, Hunter, Hurst, Husemann, Hustak, Ilinca Ignat, Illingworth, Irish, Jackson, Jahromi, Jakobsen, James, James, Januszewski, Jenkins, Jirdeh, Johnson, Johnson, Jones, Jones, Jones, Jones, Jordan, Jordan, Jurczyk, Jurling, Kaleida, Kalmanson, Kammerer, Kang, Kao, Karakla, Kavanagh, Kelly, Kendrew, Kennedy, Kenny, Keski-kuha, Keyes, Kidwell, Kinzel, Kirk, Kirkpatrick, Kirshenblat, Klaassen, Knapp, Scott Knight, Knollenberg, Koehler, Koekemoer, Kovacs, Kulp, Kumari, Kyprianou, La Massa, Labador, Labiano, Lagage, Lajoie, Lallo, Lam, Lamb, Lambros, Lampenfield, Langston, Larson, Law, Lawrence, Lee, Leisenring, Lepo, Leveille, Levenson, Levine, Levy, Lewis, Lewis, Libralato, Lightsey, Link, Liu, Lo, Lockwood, Logue, Long, Long, Loomis, Lopez-Caniego, Lorenzo Alvarez, Love-Pruitt, Lucy, Luetzgendorf, Maghami, Maiolino, Major, Malla, Malumuth, Manjavacas, Mannfolk, Marrione, Marston, Martel, Maschmann, Masci, Masciarelli, Maszkiewicz, Mather, McKenzie, McLean, McMaster, Melbourne, Meléndez, Menzel, Merz, Meyett, Meza, Miskey, Misselt, Moller, Morrison, Morse, Moseley, Mosier, Mountain, Mueckay, Mueller, Mullally, Murphy, Murray, Murray, Mustelier, Muzerolle, Mycroft, Myers, Myrick, Nanavati, Nance, Nayak, Naylor, Nelan, Nickson, Nielson, Nieto-Santisteban, Nikolov, Noriega-Crespo, O'Shaughnessy, 
O'Sullivan, Ochs, Ogle, Oleszczuk, Olmsted, Osborne, Ottens, Owens, Pacifici, Pagan, Page, Park, Parrish, Patapis, Paul, Pauly, Pavlovsky, Pedder, Peek, Pena-Guerrero, Penanen, Perez, Perna, Perriello, Phillips, Pietraszkiewicz, Pinaud, Pirzkal, Pitman, Piwowar, Platais, Player, Plesha, Pollizi, Polster, Pontoppidan, Porterfield, Proffitt, Pueyo, Pulliam, Quirt, Quispe Neira, Ramos Alarcon, Ramsay, Rapp, Rapp, Rauscher, Ravindranath, Rawle, Regan, Reichard, Reis, Ressler, Rest, Reynolds, Rhue, Richon, Rickman, Ridgaway, Ritchie, Rix, Robberto, Robinson, Robinson, Robinson, Rock, Rodriguez, Rodriguez Del Pino, Roellig, Rohrbach, Roman, Romelfanger, Rose, Roteliuk, Roth, Rothwell, Rowlands, Roy, Royer, Royle, Rui, Rumler, Runnels, Russ, Rustamkulov, Ryden, Ryer, Sabata, Sabatke, Sabbi, Samuelson, Sapp, Sappington, Sargent, Sauer, Scheithauer, Schlawin, Schlitz, Schmitz, Schneider, Schreiber, Schulze, Schwab, Scott, Sembach, Shanahan, Shaughnessy, Shaw, Shawger, Shay, Sheehan, Shen, Sherman, Shiao, Shih, Shivaei, Sienkiewicz, Sing, Sirianni, Sivaramakrishnan, Skipper, Sloan, Slocum, Slowinski, Smith, Smith, Smith, Smith, Snyder, Soh, Tony Sohn, Soto, Spencer, Stallcup, Stansberry, Starr, Starr, Stewart, Stiavelli, Straughn, Strickland, Stys, Summers, Sun, Sunnquist, Swade, Swam, Swaters, Swoish, Taylor, Taylor, Te Plate, Tea, Teague, Telfer, Temim, Thatte, Thompson, Thompson, Thomson, Tikkanen, Tippet, Todd, Toolan, Tran, Trejo, Truong, Tsukamoto, Tustain, Tyra, Ubeda, Underwood, Uzzo, Van Campen, Vandal, Vandenbussche, Vila, Volk, Wahlgren, Waldman, Walker, Wander, Warfield, Warner, Wasiak, Watkins, Weaver, Weilert, Weiser, Weiss, Weissman, Welty, West, Wheate, Wheatley, Wheeler, White, Whiteaker, Whitehouse, Whiteleather, Whitman, Williams, Willmer, Willoughby, Wilson, Wirth, Wislowski, Wolf, Wolfe, Wolff, Workman, Wright, Wu, Wu, Wymer, Yates, Yeager, Yeates, Yerger, Yoon, Young, Yu, Zak, Zeidler, Zhou, Zielinski, Zincke, & Zonak]Rigby2023 Rigby, J., Perrin, M., McElwain, M., et al. 2023, , 135, 048001[Sallum et al.(2023)Sallum, Ray, Kammerer, Sivaramakrishnan, Cooper, Greebaum, Thatte, de Furio, Factor, Meyer, Stone, Carter, Biller, Hinkley, Skemer, Suarez, Leisenring, Perrin, Kraus, Absil, Balmer, Bonnefoy, Bryan, Betti, Boccaletti, Bonavita, Booth, Bowler, Briesemeister, Cantalloube, Chauvin, Christiaens, Cugno, Currie, Danielski, Dupuy, Faherty, Chen, Calissendorff, Choquet, Fitzgerald, Fortney, Franson, Girard, Grady, Gonzales, Henning, Hines, Hoch, Hood, Howe, Janson, Kalas, Kennedy, Kenworthy, Kervella, Kitzmann, Kuzuhara, Lagrange, Lagage, Lawson, Lazzoni, Lew, Liu, Liu, Llop-Sayson, Lloyd, Lueber, Macintosh, Manjavacas, Marino, Marley, Marois, Martinez, Matthews, Matthews, Mawet, Mazoyer, McElwain, Metchev, Miles, Millar-Blanchaer, Molliere, Moran, Morley, Mukherjee, Palma-Bifani, Pantin, Patapis, Petrus, Pueyo, Quanz, Quirrenbach, Rebollido, Adams Redai, Ren, Rickman, Samland, Sargent, Schlieder, Schneider, Stapelfeldt, Sutlieff, Tamura, Tan, Theissen, Uyama, Vigan, Vasist, Vos, Wagner, Wang, Ward-Duong, Whiteford, Wolff, Worthen, Wyatt, Ygouf, Zhang, Zhang, Zhang, & Zhou]Sallum2023 Sallum, S., Ray, S., Kammerer, J., et al. 2023, arXiv e-prints, arXiv:2310.11499[Sanchez-Bermudez et al.(2022)Sanchez-Bermudez, Alberdi, Schödel, & Sivaramakrishnan]SAMpip Sanchez-Bermudez, J., Alberdi, A., Schödel, R., & Sivaramakrishnan, A. 2022, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 12183, Optical and Infrared Interferometry and Imaging VIII, ed. A. 
Mérand, S. Sallum, & J. Sanchez-Bermudez, 121831K[Sander et al.(2019)Sander, Hamann, Todt, Hainich, Shenar, Ramachandran, & Oskinova]Sander2019 Sander, A. A. C., Hamann, W. R., Todt, H., et al. 2019, , 621, A92[Sivaramakrishnan et al.(2009)Sivaramakrishnan, Tuthill, Martinache, Ireland, Lloyd, Perrin, Soummer, McKernan, & Ford]Sivaramakrishnan2009 Sivaramakrishnan, A., Tuthill, P., Martinache, F., et al. 2009, in astro2010: The Astronomy and Astrophysics Decadal Survey, Vol. 2010, 40[Sivaramakrishnan et al.(2023)Sivaramakrishnan, Tuthill, Lloyd, Greenbaum, Thatte, Cooper, Vandal, Kammerer, Sanchez-Bermudez, Pope, Blakely, Albert, Cook, Johnstone, Martel, Volk, Soulain, Artigau, Lafrenière, Willott, Parmentier, Ford, McKernan, Vila, Rowlands, Doyon, Beaulieu, Desdoigts, Fullerton, De Furio, Goudfrooij, Holfeltz, LaMassa, Maszkiewicz, Meyer, Perrin, Pueyo, Sahlmann, Sohn, Teixeira, & Zheng]Sivaramakrishnan2023 Sivaramakrishnan, A., Tuthill, P., Lloyd, J. P., et al. 2023, , 135, 015003[Soulain & Robert(2023)]soulain23ascl Soulain, A., & Robert, C. M. T. 2023, AMICAL: Aperture Masking Interferometry Calibration and Analysis Library, Astrophysics Source Code Library, record ascl:2302.021, , , ascl:2302.021[Soulain et al.(2020)Soulain, Sivaramakrishnan, Tuthill, Thatte, Volk, Cooper, Albert, Artigau, Cook, Doyon, Johnstone, Lafrenière, & Martel]Soulain2020 Soulain, A., Sivaramakrishnan, A., Tuthill, P., et al. 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11446, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 1144611[St-Louis et al.(2020)St-Louis, Piaulet, Richardson, Shenar, Moffat, Eversberg, Hill, Gauza, Knapen, Kubát, Kubátová, Sablowski, Simón-Díaz, Bolduan, Dias, Dubreuil, Fuchs, Garrel, Grutzeck, Hunger, Küsters, Langenbrink, Leadbeater, Li, Lopez, Mauclaire, Moldenhawer, Potter, dos Santos, Schanne, Schmidt, Sieske, Strachan, Stinner, Stinner, Stober, Strandbaek, Syder, Verilhac, Waldschläger, Weiss, & Wendt]St-Louis2020 St-Louis, N., Piaulet, C., Richardson, N. D., et al. 2020, , 497, 4448[Sugawara et al.(2015)Sugawara, Maeda, Tsuboi, Hamaguchi, Corcoran, Pollock, Moffat, Williams, Dougherty, & Pittard]Sugawara2015 Sugawara, Y., Maeda, Y., Tsuboi, Y., et al. 2015, , 67, 121[Thomas et al.(2021)Thomas, Richardson, Eldridge, Schaefer, Monnier, Sana, Moffat, Williams, Corcoran, Stevens, Weigelt, Zainol, Anugu, Le Bouquin, ten Brummelaar, Campos, Couperus, Davies, Ennis, Eversberg, Garde, Gardner, Fló, Kraus, Labdon, Lanthermann, Leadbeater, Lester, Maki, McBride, Ozuyar, Ribeiro, Setterholm, Stober, Wood, & Zurmühl]Thomas2021 Thomas, J. D., Richardson, N. D., Eldridge, J. J., et al. 2021, , 504, 5221[Tuthill et al.(1999)Tuthill, Monnier, & Danchi]Tuthill1999 Tuthill, P. G., Monnier, J. D., & Danchi, W. C. 1999, , 398, 487[Tuthill et al.(2008)Tuthill, Monnier, Lawrance, Danchi, Owocki, & Gayley]Tuthill2008 Tuthill, P. G., Monnier, J. D., Lawrance, N., et al. 2008, , 675, 698[Usov(1991)]Usov1991 Usov, V. V. 1991, , 252, 49[Williams(2019)]Williams2019 Williams, P. M. 2019, , 488, 1282[Williams & Eenens(1989)]Williams1989 Williams, P. M., & Eenens, P. R. J. 1989, , 240, 445[Williams et al.(1985)Williams, Longmore, van der Hucht, Talevera, Wamsteker, Abbott, & Telesco]Williams1985 Williams, P. M., Longmore, A. J., van der Hucht, K. A., et al. 1985, , 215, 23P[Williams et al.(1987)Williams, van der Hucht, & Thé]Williams1987 Williams, P. M., van der Hucht, K. A., & Thé, P. S. 
1987, , 182, 91[Williams et al.(2001)Williams, Kidger, van der Hucht, Morris, Tapia, Perinotto, Morbidelli, Fitzsimmons, Anthony, Caldwell, Alonso, & Wild]Williams2001 Williams, P. M., Kidger, M. R., van der Hucht, K. A., et al. 2001, , 324, 156[Williams et al.(2009)Williams, Marchenko, Marston, Moffat, Varricatt, Dougherty, Kidger, Morbidelli, & Tapia]Williams2009 Williams, P. M., Marchenko, S. V., Marston, A. P., et al. 2009, , 395, 1749 | http://arxiv.org/abs/2311.15948v2 | {
"authors": [
"Ryan M. Lau",
"Matthew J. Hankins",
"Joel Sanchez-Bermudez",
"Deepashri Thatte",
"Anthony Soulain",
"Rachel A. Cooper",
"Anand Sivaramakrishnan",
"Michael F. Corcoran",
"Alexandra Z. Greenbaum",
"Theodore R. Gull",
"Yinuo Han",
"Olivia C. Jones",
"Thomas Madura",
"Anthony F. J. Moffat",
"Mark R. Morris",
"Takashi Onaka",
"Christopher M. P. Russell",
"Noel D. Richardson",
"Nathan Smith",
"Peter Tuthill",
"Kevin Volk",
"Gerd Weigelt",
"Peredur M. Williams"
],
"categories": [
"astro-ph.SR",
"astro-ph.EP"
],
"primary_category": "astro-ph.SR",
"published": "20231127155539",
"title": "A First Look with JWST Aperture Masking Interferometry (AMI): Resolving Circumstellar Dust around the Wolf-Rayet Binary WR 137 beyond the Rayleigh Limit"
} |
MadRadar: A Black-Box Physical Layer Attack Framework on mmWave Automotive FMCW Radars

David Hunt, Duke, [email protected]
Kristen Angell, Duke, [email protected]
Zhenzhou Qi, Duke, [email protected]
Tingjun Chen, Duke, [email protected]
Miroslav Pajic, Duke, [email protected]

January 14, 2024

Frequency modulated continuous wave (FMCW) millimeter-wave (mmWave) radars play a critical role in many of the advanced driver assistance systems (ADAS) featured on today's vehicles. While previous works have demonstrated (only) successful false-positive spoofing attacks against these sensors, all but one assumed that an attacker had runtime knowledge of the victim radar's configuration. In this work, we introduce MadRadar, a general black-box radar attack framework for automotive mmWave FMCW radars capable of estimating the victim radar's configuration in real-time and then executing an attack based on the estimates. We evaluate the impact of such attacks by maliciously manipulating a victim radar's point cloud, and show the novel ability to effectively `add' (i.e., false positive attacks), `remove' (i.e., false negative attacks), or `move' (i.e., translation attacks) object detections in a victim vehicle's scene. Finally, we experimentally demonstrate the feasibility of our attacks on real-world case studies performed using a real-time physical prototype on a software-defined radio platform.

§ INTRODUCTION

Radio detection and ranging (a.k.a., radar) sensors have traditionally been popular in the automotive market due to their reliability in adverse lighting and weather conditions, long detection range, and ability to detect an object's relative velocity <cit.>. While various techniques and waveforms can be used to perform radar ranging, frequency modulated continuous wave (FMCW) radars are the most common due to their relatively simple, low-cost implementation <cit.>. The latest generation of automotive radars in the millimeter-wave (mmWave) frequency bands utilizes greater bandwidths in the ranges of 76–77 GHz (i.e., long-range sensing) and 77–81 GHz (i.e., short-to-mid range sensing). The higher frequencies and greater bandwidths enable these sensors to achieve 20× better range resolution (down to ∼4 cm), 3× greater velocity resolution, and a smaller overall sensor footprint <cit.>. Given their traditional benefits and the additional capabilities presented by the latest generation of mmWave radars, FMCW radars play a critical role in many advanced driver assistance systems (ADAS), including blind spot detection (BSD), auto emergency braking systems (AEBS), lane change assist (LCA), and rear traffic alert (RTA) systems <cit.>. Moving forward, autonomous driving companies (e.g., Mobileye) also plan to use radar sensors to provide additional sensing and redundancy in their future autonomous vehicles by creating a “360^∘ Radar cocoon” <cit.>.
As radars continue to gain popularity in automotive systems and applications, it is imperative to understand the vulnerabilities of these systems. While there is a plethora of analyses of camera and lidar (light detection and ranging) vulnerabilities in autonomous vehicles (e.g., <cit.>), automotive radars have only recently started to attract attention in the security community. Existing security research dealing with physical layer (PHY) attacks on FMCW radar systems has solely focused on spoofing attacks that insert false points into a victim radar's point cloud – i.e., false positive (FP) attacks. No false negative (FN) attacks, resulting in the `removal' of an existing object from the victim radar's scene, have been demonstrated. Similarly, no prior work has introduced translation attacks that can `move' detections of existing objects in the victim radar's scene. Instead, initial works <cit.> only demonstrated the ability to insert FP objects at a specific range in a radar's point cloud, while more recent work showed the ability to spoof an object's velocity <cit.>. Moreover, <cit.> demonstrated the ability to spoof an object's angle of arrival (AoA). However, existing methods, except the very recent one from <cit.>, assumed a white-box threat model with full knowledge of the victim radar's parameters, significantly limiting their real-world use. Additionally, <cit.> and <cit.> introduced passive attacks and early detect/late commit (ED/LC) attacks, respectively. While <cit.> introduced passive attacks on FMCW radars using physical patches placed in the environment, these attacks are limited, as they cannot dynamically change the spoofing location and each patch must be specifically designed for the specific attack goals, victim radar configuration, and environment. The ED/LC attack <cit.> listens to and then re-transmits a victim's signal to spoof an object's range, but the attack is only designed for chirp-spread-spectrum-based ranging and thus does not work against FMCW radars.

In this work, we present MadRadar, a novel real-time black-box FMCW radar attack framework for successful FP, FN, and translation attacks, where an attacker learns the victim radar's parameters and then successfully launches an attack. Developing an architecture capable of estimating the victim radar's parameters in real-time with sufficient accuracy presents unique technical challenges. Moreover, estimation errors can propagate throughout the rest of the attack implementation and impact the attack's effectiveness. For example, if an attacker's estimate of the victim's frame start time is off by even 20 ns, a spoofing FP attack's perceived location can be off by 3 m in the victim radar's view. While <cit.> implemented a black-box FP attack by estimating a victim radar's chirp period and chirp slope, we enable FN and translation attacks by introducing a novel sensing architecture that additionally estimates the frame period in real-time while simultaneously predicting future radar frame start times. We show that our design is sufficiently accurate to enable effective attacks – e.g., MadRadar estimates a victim's chirp slope and period with a mean error of 0.01 MHz/μs and 0.14 μs, respectively; these highly accurate estimates result in 90% of spoofing attacks being within 1.09 m and 0.12 m/s of the desired range and velocity, respectively.
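As a quick sanity check of the timing-sensitivity example above (a back-of-the-envelope relation of ours, not a derivation from the paper): a frame-timing error Δt is perceived by the victim as extra round-trip delay, so the spoofed range shifts by

Δd = c · Δt / 2 = (3×10^8 m s^-1 · 20×10^-9 s) / 2 = 3 m.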
Lastly, as our approach observes only six victim frames, the attacker can quickly learn a victim's parameters, making our spoofing attacks significantly more practical than the white-box attacks implemented by previous works <cit.>.

While all prior FMCW radar security analyses (i.e., <cit.>) solely focused on FP attacks, other (non-security) works have shown that FMCW radars can be adversely, yet intermittently, affected by naturally-occurring interference, including same slope, similar slope, and sweeping interference <cit.>. These forms of interference occur when chirps with the same, similar, or significantly different slopes are received by a radar, and may be caused by self-interference or interference from other radars in the environment. We build on these ideas and leverage specific forms of interference to design effective on-demand FN attacks. To the best of our knowledge, this is the first work to present FN and translation attacks that effectively `remove' or `move/translate' detections of existing objects in a victim radar's point cloud. We accomplish this by introducing very similar slope interference as part of the attack, using the estimated victim radar's parameters. Further, we show that by leveraging the estimated parameters of the victim radar, the proposed attacks can be designed to result in multiple FP and FN object detections (and thus, multiple translated detections) in every execution frame. As part of our analysis, we show how spoofing and intentional interference attacks propagate through a radar's Range-Doppler, CFAR point-detection, and DBSCAN clustering stages.

We demonstrate the feasibility and applicability of MadRadar by developing a proof-of-concept prototype on the USRP B210 software-defined radio (SDR) platform <cit.>. The developed attack platform estimates the victim's parameters and then uses those estimates to launch the desired attacks with (multiple) FP, FN, and translation outcomes, all in real-time. Through simulation and physical experimentation, we show that our black-box attacker can estimate the victim's parameters with sufficient accuracy to launch successful attacks over 95% of the time. We perform a comprehensive attack evaluation on real-world case studies using our prototype to demonstrate various attack outcomes – i.e., single and multiple FP, FN, and/or translation attacks, as well as successful attacks on victims employing basic defenses such as parameter randomization. Additional resources, including case-study videos and additional case studies, can be found at <cit.>.

Our contributions can be summarized as follows:
* We introduce MadRadar, a black-box attack framework for effective physical layer attacks on mmWave radars without prior knowledge of the victim radar's parameters (e.g., the chirp period and slope, and frame duration);
* We enable new black-box attack types by improving upon existing methods for estimating victim parameters;
* We demonstrate that mmWave radars are vulnerable to false-negative and translation attacks that effectively `remove' or `move' detections of existing objects in the victim's point cloud, respectively;
* We demonstrate feasibility, and evaluate our attacks on multiple real-world case studies performed using a real-time implementation on the USRP B210 SDR platform.

This paper is organized as follows. Section <ref> overviews the FMCW radar signal processing pipeline. Attack objectives and the threat model are introduced in Section <ref>.
Section <ref> describes the attack framework, starting from estimating the victim radar's parameters, before showing how such estimates can be used to launch the attacks. Given the cost and hardware limitations of our real-time physical prototype, we first present the results of a rigorous simulation-based performance evaluation of the parameter estimation module in Section <ref> and of the full-scale attacks in Section <ref>. Section <ref> presents our physical prototype and results from real-world evaluations, before multiple real-world case studies are introduced in Section <ref> to demonstrate the performance and feasibility of our novel framework in realistic scenarios. Finally, framework limitations and potential defense mechanisms are discussed in Section <ref>, before providing concluding remarks in Section <ref>.

§ RELATED WORK

While there is a plethora of analyses of camera and lidar vulnerabilities in autonomous vehicles (e.g., <cit.>), automotive radars have only recently started to attract attention in the security community. Here, we differentiate the related radar analyses in the context of adversarial environments and the more common types of non-adversarial interference.

§.§ FMCW Radar Spoofing Attacks

Previous works have demonstrated the ability to spoof an object's relative range and velocity <cit.>, with <cit.> also showing the ability to spoof an object's angle using two asynchronous attacking systems. However, these works only demonstrated FP attacks by inserting fake objects into a victim radar's point cloud. Thus, this work is the first to present FN and translation attacks that effectively `remove' and `move' objects in the victim radar's point cloud, respectively. Moreover, all but <cit.> assumed prior knowledge of a victim's parameters (i.e., a white-box threat model). On the other hand, we implement a black-box attack where we accurately estimate a victim radar's key parameters and launch successful FP, FN, and translation attacks using those estimates; this is a significantly more practical and effective threat model. Our framework improves upon the recently published <cit.> in two key ways. First, the black-box attack method from <cit.> does not support FN and translation attacks. Second, the attack from <cit.> is limited to launching a single FP attack at a time; we demonstrate how our framework can launch multiple FP and/or FN attacks (case studies in Section <ref>). Finally, even though the attacks from <cit.> re-transmit a frequency-shifted replica of each victim radar chirp, they still rely on accurate estimates of the victim's radar settings. Thus, these attacks could still be thwarted by randomizing specific radar parameters, including the chirp slope. On the other hand, our attack architecture can easily be re-configured to detect victims employing parameter randomization and adapt its behavior accordingly (Section <ref>). In summary, our attack does not require prior knowledge of a victim's parameters, is still effective against victims that randomize their parameters, and can simultaneously trigger false-positive, false-negative, and translation events; this makes our attack significantly more practical and effective compared to all previous spoofing attacks.

§.§ Malicious Interference and Interference Mitigation

Even without adversarial activity, radar interference may occur in regular automotive scenarios due to the increased proliferation of radar sensing in modern vehicles.
FMCW radars are susceptible to three key types of interference: same slope, similar slope, and sweeping interference, occurring when an interfering signal has a chirp slope that is the same as, similar to, or significantly different from that of the victim's chirp <cit.>. Generally, interference is the result of multi-path reflections and non-malicious interference from another radar, and all forms of interference can degrade radar performance. Indeed, <cit.> showed that interference could saturate a victim radar's Rx stage, decrease the signal-to-noise ratio of perceived targets, and generate false peaks or ghost targets. Further, <cit.> documented how interference can impact a radar's Range-Doppler response. Finally, <cit.> showed how interference could even result in the radar losing a target altogether.

Interference mitigation. Several mitigation techniques have been proposed, mainly targeting sweeping interference – i.e., interference from chirps with greatly different slopes. <cit.> proposed various techniques to detect the interference in the time domain of the IF signal and then repair the received signal by nullifying the interference. Other techniques have also been proposed to combat general interference, including the use of specific polarization and frequency bands, randomizing chirp timing, and adding random phase shifts to each chirp <cit.>.

Attacks using interference. To the best of our knowledge, no work has considered the effects that a malicious actor could have by intentionally interfering with a radar using attacks based on similar slope interference. Existing methods for mitigating (e.g., similar-slope) interference are developed under the assumption that the interference is only sporadic and not malicious, and thus would not change in response to mitigation-induced changes in the victim's radar parameters, which are either active or introduce very infrequent parameter modifications. In this work, we also introduce attacks based on very similar slope interference that could result in a false-negative event for victims employing a CA-CFAR detector. In addition, the attacks would not be detectable in the time domain and, as such, would render interference mitigation techniques ineffective.

§ PRELIMINARIES: FMCW RADAR SIGNAL PROCESSING

Radars employ radio waves for sensing by transmitting a specifically constructed signal into the environment. The transmitted signal reflects off objects in the radar's field of view; the reflections are then received (and processed) by the radar's receiver (Rx). In particular, the received signal is used to determine the range, velocity, and relative angle of objects in the environment. The ability to detect an object's velocity in a single frame is unique to radars, as other sensors (e.g., cameras) can at most determine an object's range (e.g., with stereo cameras) and angle from a single image frame.

FMCW radars are a type of radar sensor commonly employed in automotive systems. They use a common signal processing pipeline (Fig. <ref>) with the following five key steps.

Step 1: Transmitter (Tx) and Rx chirps. In each frame, a radar transmits a series of identical “chirps”, whose frequency increases linearly over time. In general, a series of 256 chirps is transmitted per radar frame <cit.>. Fig. <ref> illustrates the frequency response of a single Tx chirp and the corresponding Rx chirp, which is reflected off of an object in the environment and received by the radar.
To capture the radar parameters, which control sensing performance, we use the following notation: f_c denotes the chirp start frequency, B the chirp bandwidth, T_c the chirp period, f_IF the intermediate frequency (IF) from mixing the Tx chirp with its corresponding Rx chirp, S the chirp slope, and c the speed of light. Let x(t) denote the FMCW radar Tx signal corresponding to a single chirp in a radar frame, given by <cit.>[We use the common notation with (<ref>) expressing the transmitted signal at baseband, where it has been sampled (in the digital domain) as a complex signal composed of real and imaginary components, often denoted as the in-phase (I) and quadrature (Q) components. The two signals are identical, except that the Q signal is shifted by 90 degrees from the I component. The actual over-the-air radar signal is transmitted as a real-valued analog signal.]

x(t) = e^j (2π f_c t + π S t^2).

Consider a target whose relative[Relative is with respect to the direction of propagation of the radar's Tx signal. Thus, relative range and velocity are scalars.] range and velocity at time t are denoted by d(t) and v,[To simplify presentation, we assume constant velocity over the duration of a radar frame.] respectively; thus, d(t) = R(t) + d_0, where d_0 is the target's initial position at the start of the radar frame and R(t) = ∫_0^t v dt is the distance that it has traveled by time t <cit.>. As the signal propagates at the speed of light (c), the time τ it takes the radar signal to propagate to the target and back is given by τ = 2 d(t)/c. Thus, the reflected signal received by the radar from the target, denoted by y(t), can be captured as

y(t) = A_Rx e^j [2π f_c (t-τ) + π S (t-τ)^2] + z(t),

where A_Rx and z(t) denote the received signal amplitude and noise, respectively.

Step 2: Dechirping and IF signal generation. In this step, an IF signal is obtained by mixing the transmitted signal with the received signal; thus, the resulting signal s_IF^(l)(t) corresponding to the l-th chirp is given by

s_IF^(l)(t) = x(t) · y^*(t) = A_IF e^j 2π f_IF t e^j l ω + z'(t),

where f_IF := 2 S d_0/c, ω := 4π v T_c/λ, λ = c/f_c is the signal wavelength, A_IF is the amplitude of the IF signal, and z'(t) = x(t) · z^*(t) represents the noise present after the mixing <cit.> (for details see Appendix <ref>). While the transmitted and received chirp signals may have a bandwidth up to B = 4 GHz, modern automotive FMCW radars use a low-pass filter to remove all IF frequencies above 10–20 MHz. For example, the TI IWR1443 mmWave FMCW radar has a maximum IF signal bandwidth of 15 MHz <cit.>. The maximum IF frequency directly impacts the maximum range at which a radar can detect objects (as we show in (<ref>)), and the filtering significantly reduces the cost of implementation. Also, as we show in Section <ref>, this impacts the development of black-box attacks on radar.

Step 3: Range-Doppler response. The IF frequency, f_IF, corresponding to a specific target is estimated using a fast Fourier transform (FFT) of the IF signal from (<ref>); then, the target's range is computed as d = (f_IF/2S) · c.
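A minimal numerical sketch of Steps 1–3 (range only) is given below; the chirp parameters are illustrative values of our own choosing, not those of any specific production radar.

```python
import numpy as np

# Illustrative chirp parameters (our own choices, not a specific radar's)
c = 3e8                  # speed of light [m/s]
B = 4e9                  # chirp bandwidth [Hz]
T_c = 50e-6              # chirp period [s]
S = B / T_c              # chirp slope [Hz/s]
f_s = 15e6               # IF sampling rate [Hz]
N = int(f_s * T_c)       # IF samples per chirp

d0 = 10.0                # true target range [m]
tau = 2 * d0 / c         # round-trip delay [s]
t = np.arange(N) / f_s

# Step 2: dechirped IF signal for a static target (phase constants dropped);
# the beat frequency f_IF = S * tau encodes the range.
s_if = np.exp(2j * np.pi * (S * tau) * t)

# Step 3: range FFT -- the peak bin maps back to range via d = (f_IF / 2S) * c
spec = np.abs(np.fft.fft(s_if))
f_if = np.argmax(spec) * f_s / N
print(f"estimated range: {(f_if / (2 * S)) * c:.2f} m")   # ~10 m
```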
The range resolution Δd, defined as the minimum required distance between two targets for a radar to distinguish them, and the maximum detection range d_max are given by <cit.> (details are provided in Appendix <ref>)

Δd = c / (2B),    d_max = f_s · c / (2S),

where f_s is the radar's sampling rate of the IF signal.

Multiple chirps in a single frame can be used to detect the velocity of a target, leveraging the slight phase shift, denoted by ϕ_d, between chirps due to a target's relative velocity causing a small change in distance over a chirp's duration <cit.>. ϕ_d can be estimated by taking an FFT across all chirps in a frame for each range bin; the resulting FFT will have a peak at ϕ_d, and the relative velocity satisfies <cit.>

v = ϕ_d · λ / (4π · T_c).

In addition, the velocity resolution Δv and maximum velocity v_max satisfy (details provided in Appendix <ref>)

Δv = λ / (2 N_c · T_c),    v_max = λ / (4 T_c),

where N_c is the number of chirps in a radar frame. The Range-FFT and Doppler-FFT responses are often computed simultaneously using the 2D-FFT operation to generate the Range-Doppler response (as illustrated in Fig. <ref>).

Step 4: CFAR Detection. Constant false alarm rate (CFAR) detectors are commonly used to detect objects in the Range-Doppler response by estimating the relative noise levels around each Range-Doppler cell. In general, the noise level is non-uniform, as different objects in the radar's field of view may cause more clutter than others. Using the estimated noise level at each cell, the CFAR detector computes a threshold configured to achieve a specific probability of false alarm. As the noise level is not constant, the threshold varies to account for the clutter in different regions of the Range-Doppler response. A cell that has an amplitude above the computed threshold is classified as a detection <cit.>; a simplified sketch of this procedure is given below. Step 4 in Fig. <ref> illustrates the computed CFAR detection threshold for the range and velocity domains of a normal target (note that the threshold is not constant). The two most widely used CFAR methods are cell-averaging CFAR (CA-CFAR) and ordered statistic CFAR (OS-CFAR). The probability of a CA-CFAR detector detecting an object significantly decreases in scenarios with abnormally high clutter in specific regions or with two closely located objects <cit.>. In this work, we exploit this property to design FN events on systems employing CA-CFAR detectors, as these are more commonly used (e.g., in the TI IWR1443 mmWave FMCW radar <cit.>), but the approach can also be extended to OS-CFAR detectors.
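The sketch below gives a simplified, hypothetical 1-D Python version of the cell-averaging CFAR procedure from Step 4 (production radars apply 2-D windows with guard cells over the Range-Doppler map); the window sizes and false-alarm rate here are illustrative only.

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, pfa=1e-3):
    """1-D cell-averaging CFAR: flag cells whose power exceeds a threshold
    scaled from the mean of the surrounding training cells."""
    n = 2 * num_train                       # total training cells per cell under test
    alpha = n * (pfa ** (-1 / n) - 1)       # scaling for the desired false-alarm rate
    detections = np.zeros_like(power, dtype=bool)
    half = num_train + num_guard
    for i in range(half, len(power) - half):
        window = np.r_[power[i - half:i - num_guard],   # left training cells
                       power[i + num_guard + 1:i + half + 1]]  # right training cells
        detections[i] = power[i] > alpha * window.mean()
    return detections

rng = np.random.default_rng(0)
noise = rng.exponential(1.0, 512)     # square-law detected noise floor
noise[200] += 40.0                    # one strong target
print(np.flatnonzero(ca_cfar(noise))) # the injected spike at index 200 is flagged
```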
Step 5: Clustering. The final step of the radar signal processing pipeline is to group the detection cloud points corresponding to the same object using a clustering algorithm. In this work, we focus on the commonly employed density-based spatial clustering of applications with noise (DBSCAN) algorithm <cit.>, which achieves two primary objectives: (i) identifying different targets in the radar's field of view by grouping together regions with a high density of detection points, and (ii) filtering out detection points corresponding to noise or multi-path reflections (see Step 5 in Fig. <ref>).

§.§ Non-Adversarial Interference and Interference Mitigation

Even without adversarial activity, radar interference may occur due to the increased proliferation of radar sensing in modern vehicles. FMCW radars are susceptible to three key types of interference: same slope, similar slope, and sweeping interference, which occur when an interfering signal has a chirp slope that is the same as, similar to, or significantly different from the victim radar's chirp slope <cit.>. Generally, interference is the result of multi-path reflections and non-malicious interference from another radar, and all forms of interference can degrade radar performance. Interference could saturate a victim radar's Rx stage, decrease the signal-to-noise ratio (SNR) of perceived targets, and generate false peaks or ghost targets <cit.>, impact a radar's Range-Doppler response <cit.>, and even result in the radar losing a target altogether <cit.>.

Interference mitigation. Several mitigation techniques have been proposed, mainly targeting sweeping interference – i.e., interference from chirps with greatly different slopes. <cit.> proposed various techniques to detect the interference in the time domain of the IF signal and then repair the received signal by nullifying the interference. Other techniques have also been proposed to combat general interference, including the use of specific polarization and frequency bands, randomizing chirp timing, and adding random phase shifts to each chirp <cit.>.

Attacks using interference. To the best of our knowledge, no work has considered the effects that a malicious actor could have if they intentionally interfered with a radar using attacks based on carefully-crafted similar slope interference. Existing methods for mitigating (e.g., similar-slope) interference (e.g., <cit.>) are developed under the assumption that the interference is only sporadic and not adversarial (i.e., malicious). In general, most mitigation techniques detect the interference in the time domain of the IF signal and then repair the received signal by nullifying the interference. In this work, we also introduce attacks based on very similar slope interference that result in a FN event for victims employing a CA-CFAR detector. Our attacks are not detectable in the time domain, therefore making such interference mitigation techniques ineffective.

§ ATTACK OBJECTIVES AND THREAT MODEL

We consider representative attack scenarios illustrated in Fig. <ref>. Here, we refer to the victim vehicle as the vehicle performing normal radar sensing operations. The attacker's goal is to produce incorrect sensing outcomes for the victim. The attacker may wish to orchestrate attacks in some relation to an existing target object (e.g., another vehicle) other than the victim, in order to compromise safe victim vehicle operation.

§.§ Attack Strategy and Goals

We introduce the false positive (FP), false negative (FN), and translation attacks that use specifically designed signals to add fake targets (FP), remove real targets (FN), or manipulate the range and velocity of existing targets (translation).

FP attack. The first considered attack goal is to cause a FP sensing outcome, where an attacker inserts a spoofed (i.e., `fake') object into the victim radar's point cloud, as illustrated in Fig. <ref>(b). This is consistent with the existing FP outcomes for camera/LiDAR attacks (e.g., <cit.>). Intuitively, to achieve this the attacker should send (slightly delayed) chirps identical to that of the victim, emulating the signal reflected from a (spoofed) object. However, unlike attacks on camera and LiDAR where FP attacks only aim to add a spoofed object at a particular range (i.e., distance) from the victim, with radar attacks the goal is to spoof both an object's range and velocity. As such, spoofed objects must update their position in consecutive frames based on the desired spoofing velocity, as sketched below; this allows the attack to propagate from perception to the tracking and prediction modules in autonomous vehicles.
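A minimal sketch of this per-frame bookkeeping (our illustration; the variable names and values are hypothetical) simply advances the spoofed range by the desired spoofing velocity once per frame:

```python
# Advance the spoofed target between frames so that the victim's tracker
# sees a kinematically consistent object (assuming a known frame period T_f).
T_f = 1 / 33.0                  # frame period for a 33 Hz frame rate [s]
d_spoof, v_spoof = 60.0, -5.0   # desired spoofing range [m] and velocity [m/s]

for frame in range(5):
    print(f"frame {frame}: spoof target at {d_spoof:.2f} m")
    d_spoof += v_spoof * T_f    # spoofed object closes in at 5 m/s
```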
FN attack. The second attack goal is to cause a FN sensing outcome, where the victim fails to perceive an existing physical object (Fig. <ref>(c)). While a FN outcome is hard to achieve with black-box attacks on camera or LiDAR-based sensing, intuitively our approach is to transmit an attacking signal that adds clutter around a desired target, therefore significantly lowering the CA-CFAR detection probability of the object.

Translation attack. The final commonly considered attack goal is to cause a translation event that effectively `moves' a real object from the victim's point of view (Fig. <ref>(d)). This is achieved by launching a FN attack to `remove' an actual target while simultaneously employing a FP attack to `insert' a fake object into the victim's point cloud; as a result, the victim fails to detect the real object but detects the fake object.

§.§ Environmental Assumptions

We make the following three assumptions: (i) The victim employs the radar processing pipeline from Fig. <ref>. While we focus on the most commonly employed radar sensing pipeline (e.g., the TI IWR1443 mmWave FMCW radar <cit.>), the presented security analysis and attacks are generalizable to other similar radar designs; (ii) We focus on attacking only the victim's FMCW radar sensor. While most vehicles feature additional sensors (e.g., cameras), our objective is to show that MadRadar's novel black-box attacks are feasible and effective; (iii) We focus on attacking only in the range and velocity domains for the initial black-box attack development. Thus, we assume that the attacker is physically located at the desired angle of attack; e.g., Fig. <ref>(b) shows the case where the attacker is in front of the victim.

§.§ Attacker Capability and Knowledge

We consider physical spoofing attacks, where the attacker can only transmit signals in order to achieve the desired attack outcomes. Unlike existing work, we consider the black-box threat model where the attacker has no knowledge of the radar parameters utilized by the victim. We do assume that the attacker has knowledge about the environment; in particular, the victim's relative position and velocity for FP attacks so that a spoofed object behaves like a realistic target, as well as the position and velocity of the target object for FN attacks.

§ ATTACK DESIGN

We now present the methodology used to estimate the victim radar's parameters and show how such estimates can be used to launch FP, FN, and translation attacks. Fig. <ref> overviews the design of MadRadar's black-box attack generator.

§.§ Parameter Estimation

Black-box attacks present a particularly difficult challenge, as it is critical that the attacker can accurately estimate, in real-time, the key victim radar parameters. While the architecture from <cit.> developed a black-box FP attack by estimating a victim's chirp period (T_c) and chirp slope (S), we need to additionally estimate a victim's frame duration (T_f) and predict future frames to develop our novel black-box FN and translation attacks. This presented unique technical challenges, as accurate attacks require very precise predictions for when the next victim frame will occur. For example, if the prediction is off by 20 ns, the victim will perceive the spoofed object to be 3 m away from where the attacker intended to add an object.
Specifically, the resulting range error satisfies

e_d = c · e_t / 2,

where e_t is the frame start time prediction error. We now introduce a real-time sensing module that enables effective FP, FN, and translation attacks by estimating the victim radar's parameters with low estimation errors; this is performed using three key steps summarized in Fig. <ref>.

Step 1: Spectrogram generation. Similar to <cit.>, we start by generating a spectrogram for each detected victim frame. The prototype runs at a 25 MSps sampling rate and checks for victim frames every 0.16. Once a frame is detected by a custom frame detector that tracks increases in received power, we record the received signal for slightly over 2, and then generate a spectrogram that samples the frequency every 2 <cit.>. This computation is done in under 10.

Step 2: Identify chirps in spectrograms. While <cit.> used signal energy over time to estimate the chirp period (T_c) and the spectrogram of a single chirp to estimate the chirp slope (S), we designed and implemented a peak detection and clustering algorithm to identify the (time, frequency) points corresponding to each chirp within the generated spectrogram. For each chirp's (time, frequency) points, we use least squares regression to estimate the start time and slope of the i-th detected chirp (a toy version of this regression is sketched at the end of this subsection) <cit.>. The estimates are computed in real-time using the Eigen C++ library <cit.>.

Step 3: Estimate victim parameters. Accurate estimates of the victim radar's chirp slope (S), chirp period (T_c), and frame duration (T_f) are achieved by averaging their computed values across multiple recorded victim chirps and frames. While averaging over a large number of computed parameters results in sufficiently accurate estimates for the chirp period and chirp slope, we note that we collect far fewer samples for the frame period (e.g., up to 256 chirp-period samples per frame, but only one frame boundary per frame). Thus, we use cross-correlation to compute a more precise frame start time. Specifically, we take the cross-correlation of the first 10 of the received signal and a computed victim chirp (generated using the estimated parameters) to further improve the accuracy of the estimated frame start time. As shown in our experiments and simulations (Section <ref>), this results in sufficiently accurate estimates after only 6 frames.

In rare cases, the implemented sensing component experiences errors (e.g., classifying one chirp as two different chirps), resulting in significantly incorrect measurements. We account for these erroneous estimates by using the inter-quartile range to identify and filter out outliers <cit.>. The end result is a robust sensing component capable of quickly and accurately estimating a victim radar's parameters.
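As an illustration of the Step 2 regression above (a toy Python stand-in for the prototype's real-time Eigen/C++ implementation), a least-squares line fit through a chirp's (time, frequency) spectrogram peaks yields estimates of its slope and start time; the peak values below are synthetic.

```python
import numpy as np

def fit_chirp(times, freqs):
    """Least-squares fit of freq = S * (t - t0): the line's slope estimates
    the chirp slope S, and its zero-frequency crossing estimates the chirp
    start time t0."""
    S, intercept = np.polyfit(times, freqs, 1)
    t0 = -intercept / S
    return S, t0

# Hypothetical spectrogram peaks from one chirp: 2 us spacing, 0.5 MHz/us slope,
# chirp starting at t = 10 us.
t = np.arange(10e-6, 40e-6, 2e-6)
f = 0.5e12 * (t - 10e-6)               # 0.5 MHz/us = 0.5e12 Hz/s
S_hat, t0_hat = fit_chirp(t, f)
print(S_hat / 1e12, t0_hat * 1e6)      # -> ~0.5 MHz/us, ~10 us
```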
§.§ False Positive Spoofing Attacks

Intuitively, the attacker uses the estimated chirp slope (S), chirp period (T_c), and frame duration (T_f) to launch a FP spoofing attack by transmitting identically sloped radar chirps with a specific delay, Δτ, and phase shift, ϕ_n. Fig. <ref> summarizes the key parameters used when constructing the FP attack. Unlike existing white-box FP radar attacks <cit.>, MadRadar's black-box attack framework does not assume a-priori knowledge of the victim radar's parameters; it rather computes Δτ and ϕ_n based on the desired position and velocity of the spoofed object, respectively, obtained as

Δτ = t_spoof − t_atk = (1/c) · (2 d_spoof − d_atk),
ϕ_n = (4π/λ) · (v_spoof − v_atk/2) · T_c · n,

where d_spoof is the desired spoofing range, d_atk is the relative range of the victim w.r.t. the attacker, t_spoof is the time delay corresponding to a target at d_spoof, t_atk is the propagation delay for a signal to travel from the attacker to the victim, v_spoof is the desired spoofing velocity, v_atk is the relative velocity of the victim (w.r.t. the attacker), and n is the attack chirp index; v_atk enters halved because the attack signal traverses the attacker-victim path only once, whereas a true reflection traverses it twice. Here, we emphasize the importance of accurate estimation of the victim's position and velocity, which allows the attacker to spoof objects at specific positions and velocities. We also dynamically scale the amplitude of the Tx signal, denoted by A_atk, to emulate the propagation loss, which scales with 4π d^2. Based on (<ref>) and the obtained parameter estimates, the n-th chirp of the FP attack signal is computed by

x^(n)_FP(t) = A_atk · e^{j[2π f_c (t−Δτ) + π S (t−Δτ)^2 + ϕ_n]}.

Fig. <ref> shows the Range-Doppler response and point cloud over multiple frames for a simulated FP attack. The “real object” in the scene is the attacker, while the “spoofed object” is a fake object that the attacker intends to insert. Note that the spoofed object exhibits realistic motion and has a power level expected for objects at that range.

§.§ False Negative Attacks

Intuitively, attacks achieve a FN outcome by adding clutter in the Range-Doppler response around a specific target, in order to raise the CA-CFAR detection threshold and thus significantly decrease the probability of the actual target being detected. We start with the FP attack signal from (<ref>) that spoofs a false object at the same range and velocity as an actual target. Next, we slightly smear the spoofed signal in the range domain by using a very similar slope (S') that is slightly offset (∼0.01) from the estimated victim slope (S). This offset is computed to smear the spoofed signal by an additional 1–3 m in the range domain and accounts for the victim radar's estimated bandwidth, chirp period (T_c), and chirp slope (S). Finally, the spoofed signal is smeared in the velocity domain by subtly increasing the phase shift between subsequent chirps; the phase shift for each chirp is

ϕ_{n+1} = 4π [(v_0 + n · Δϕ') − v_atk/2] · T_c / λ + ϕ_n,

where n is the attack chirp index, Δϕ' is the amount by which the Doppler phase increases with each chirp, and v_0 is a velocity slightly less than the target's so that the added clutter is centered at the target's velocity. Thus, the n-th chirp of the FN attack signal is given by

x^(n)_FN(t) = A_atk · e^{j[2π f_c (t−Δτ) + π S' (t−Δτ)^2 + ϕ_n]}.

The resulting attack adds clutter specifically around the target in a way that the CA-CFAR fails to detect the object, as shown in Fig. <ref>. In particular, the added clutter increases the CFAR threshold in the range and velocity domains at the target location to the point that no object is detected (see Fig. <ref>(c)).
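Putting the FN construction together, the sketch below (an illustrative complex-baseband toy using the notation reconstructed above, and assuming perfect victim parameter estimates) generates one frame of FN attack chirps with a smeared slope and an accumulating per-chirp phase increment:

```python
import numpy as np

c, lam = 3e8, 3e8 / 77e9          # speed of light; wavelength at an assumed 77 GHz carrier
fs, T_c, B = 25e6, 50e-6, 25e6    # assumed (estimated) victim parameters
S = B / T_c

def fn_attack_frame(num_chirps, d_tgt, v_tgt, d_atk, v_atk, dv=0.05):
    """Build one frame of FN attack chirps (complex baseband): a slightly
    smeared slope spreads energy in range, and an accumulating per-chirp
    phase increment sweeps the injected energy across velocities so that
    clutter forms around the real target's Range-Doppler cell."""
    S_smear = S * 1.0001                 # small slope offset -> range smearing
    delay = (2 * d_tgt - d_atk) / c      # place the clutter at the target's range
    v0 = v_tgt - 0.5                     # start slightly below the target velocity
    t = np.arange(int(fs * T_c)) / fs
    chirps, phi = [], 0.0
    for n in range(num_chirps):
        chirps.append(np.exp(1j * (np.pi * S_smear * (t - delay)**2 + phi)))
        phi += (4 * np.pi / lam) * ((v0 + n * dv) - v_atk / 2) * T_c
    return np.array(chirps)

frame = fn_attack_frame(256, d_tgt=50.0, v_tgt=9.0, d_atk=15.0, v_atk=0.0)
print(frame.shape)  # (256, 1250)
```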
While the FN attack signal shown in Fig. <ref>(b) appears easy to identify in the Range-Doppler response, it is unlikely to be detected on current FMCW radars. Most automotive radar systems only utilize CFAR detectors to detect objects in the Range-Doppler response, followed by a clustering algorithm to group detections from the CFAR detector. Thus, a real object stays undetected when an attack causes a FN event in the CA-CFAR detector <cit.>. Additionally, the added clutter is localized around the Range-Doppler bin corresponding to a specific target, such that the overall noise level of the entire Range-Doppler response is only slightly raised. Therefore, it is unlikely that our attacks would be detected by additional monitoring of the noise level of the overall Range-Doppler response. Moreover, most existing interference mitigation methods (e.g., <cit.>) would not be able to detect the attack, as the IF signal for the FN attack appears identical to the IF signal from a normal target. Finally, while it would be possible to design an algorithm to detect the added clutter in the Range-Doppler response (e.g., via DNNs), such algorithms would incur high computation costs, and we are unaware of any currently implemented on commercial systems.

§.§ Translation Attack

The translation attack is achieved by simultaneously transmitting the FP attack from (<ref>) and the FN attack from (<ref>). For the FN attack, we set d_spoof and v_spoof to the range and velocity of an actual target so that it is `removed' from the victim radar's point cloud. For the FP attack, d_spoof and v_spoof are set to the location where we want the victim to detect the object. The result of the combined FP and FN attack is that an object in the victim radar's point cloud is `moved' as the attacker desires.

§ EVALUATION OF VICTIM PARAMETER ESTIMATION

In Section <ref>, we present a real-time MadRadar physical prototype developed using SDR platforms, but we were limited by the available hardware. Thus, we first employ rigorous simulations to emulate real-world conditions and predict the performance of MadRadar's framework on a full-scale system. In this section, we present an evaluation of the sensing module before evaluating the full attack performance (Section <ref>).

§.§ Simulation Environment and Setup

We generated realistic environments with multiple objects utilizing the Matlab Phased Array System Toolbox <cit.>. To start, we used the toolbox's radar target model to simulate the behavior of radar signals reflecting off of moving targets. Each target's radar cross-section is randomly selected using a normal distribution with a mean of 15 and a variance of 5, corresponding to the cross-section of a common midsized vehicle <cit.>. Next, we simulate the effects of range-dependent time delays, propagation losses, phase shifts, and Doppler shifts due to signal propagation using the toolbox's free-space channel model, where the environment thermal noise level is given by −174 + 10 log_10(B_n) (dBm), with B_n the noise bandwidth in Hz. The Tx's and Rx's in our radar and attacker implementations are simulated using the toolbox's transmitter and receiver models, respectively. Specifically, each Tx has a Tx gain of 36 dB and an output power of 5 dBm without the Tx gain, and each Rx has an Rx gain of 42 dB and a noise figure of 5 dB <cit.>. Thus, the simulated environment features realistic Tx's, Rx's, targets, and signal propagation effects. Finally, we utilize the toolbox's Range-Doppler response and CFAR detector objects along with the DBSCAN clustering algorithm from MATLAB's Statistics and Machine Learning Toolbox to implement the victim radar signal processing pipeline <cit.>.

Experimental setup. We evaluate the sensing module using the simulated environment with 200 different victim configurations based on realistic parameters of the TI IWR1443 mmWave FMCW radar <cit.>.
Specifically, the victim radar's chirp slope is sampled uniformly at random from the interval [1, 100], and the chirp period is uniformly sampled from the interval [15, 100]. To encompass the majority of radar configurations found in the automotive domain, the chirp bandwidth is constrained to lie within [30 MHz, 3.5 GHz]. Finally, we set a radar frame rate of 33 Hz, which is the maximum frame rate of an automotive radar <cit.>. More details on the test cases can be found in Appendix <ref>.

For each victim configuration, 7 victim frames are simulated. We use the estimated victim frame duration (T_f), chirp period (T_c), and chirp slope (S) at the end of the 7th frame to assess the parameter estimation accuracy achieved by MadRadar. We also record the predicted frame start time for each victim frame to understand how the estimation accuracy changes with an increased number of detected victim frames. Finally, to evaluate MadRadar's parameter estimation accuracy regardless of the victim position and velocity, we uniformly sample the attacker's relative (w.r.t. the victim) range (d_atk) and velocity (v_atk) at random from the intervals [20, 100] m and [−10, 10] m/s.

§.§ Simulation Results

We now present the results from our simulated evaluations. We quantify how estimation errors for the victim radar's parameters lead to spreading and spoofing (e_d from (<ref>)) errors. Note that the spoofed velocity is generally unaffected, as timing and slope estimation errors almost solely affect the range spoofing performance.

Estimation accuracy for victim chirp slope and period. Figs. <ref>(a) and <ref>(b) show the cumulative distribution function (CDF) of the absolute and relative estimation errors for the victim radar's chirp period and chirp slope, whose key metrics are summarized in Table <ref>. The results show that 95% of the estimated chirp period values are within 0.59 ns of their actual values, which corresponds to less than 0.1 m of spoofing error, e_d, based on (<ref>). Also, 95% of the estimated chirp slope values are within 0.03% of their actual values. While this error could result in some smearing in the range domain, we show in Section <ref> that the accuracy is sufficient to launch successful attacks.

Accuracy vs. number of measured frames. Fig. <ref>(c) plots the average absolute error of the predicted frame start times across the 200 victim configurations, where the error bars represent the 95th percentile of the absolute error. The left y-axis reports the prediction error in ns, while the right y-axis reports the resulting spoofing error in meters using (<ref>). It can be seen that the prediction error decreases as the number of considered frames increases. Further, the absolute error significantly decreases after the third victim frame is detected, when the sensing component begins using the cross-correlation and the computed victim chirp (from the estimated parameters) to achieve more accurate frame start-time estimates. Overall, in 95% of cases, MadRadar's sensing can predict a victim radar's next frame start time with an accuracy that corresponds to less than 2 m of range spoofing error (e_d) based on (<ref>).

§ LARGE SCALE EVALUATIONS

The simulation environment introduced in Section <ref> was used for several large-scale evaluations of the accuracy and effectiveness of the black-box attack framework. Specifically, we performed 5,000 unique simulations to demonstrate the effectiveness and accuracy of the attacks regardless of the relative range and velocity of the victim.
We considered the four victim radar configurations from Table <ref>, covering a broad assortment of automotive radar configurations. Note that the table also reports the minimum CFAR detection range for each configuration (the full CFAR detection region is provided in Appendix <ref>). Here, we start by evaluating spoofing accuracy using configurations C and D, representative of a typical automotive long range radar (LRR) and short range radar (SRR), respectively. Then we evaluate how FP and FN attacks affect a victim's probability of false alarm (PFA) and probability of detection (PD) across all four configurations.

§.§ Spoofing Accuracy Evaluation

Setup. To evaluate the spoofing accuracy, we consider victim radars with configurations C and D from Table <ref>, representing common real-world configurations. As in Section <ref>, we set the attacker's relative position uniformly at random from the interval [20, 100] m and velocity from [−10, 10] m/s to evaluate performance across varying positions and velocities. We evaluated the spoofing accuracy for 100 different desired spoofing ranges (d_spoof) and velocities (v_spoof), uniformly selected at random from the intervals [50, 100] m and [−25, 25] m/s, respectively; the test case distributions are summarized in Appendix <ref>. For each trial, we simulated a total of 10 radar frames, with the attack starting on the 6th frame. As the first 5 frames were used to sense the victim's parameters, each trial featured 5 attack frames.

Results. Out of the 500 attack frames in the 100 different scenarios, over 90% of the frames resulted in successful, highly accurate attacks. For the successful attacks, Fig. <ref> shows the CDFs of the absolute range and velocity spoofing errors, and the statistics are summarized in Table <ref>. In particular, 90% of the successful attacks had the spoofed range within 1.28 m of the desired range (d_spoof) and the spoofed velocity within 0.12 m/s of the desired velocity (v_spoof); the absolute velocity spoofing error is significantly lower than the absolute range spoofing error since velocity spoofing does not depend on the estimated victim chirp period or the predicted frame start time. Also, the mean absolute error for Config D was relatively high because fewer than 5% of its trials had spoofing errors significantly larger than the rest. Finally, the remaining inaccurate spoofing attacks resulted from insufficiently accurate victim parameter estimates, as discussed in Section <ref>.

§.§ Attack Effectiveness Assessment

Setup. We also evaluated the attacks' effect on the victim's PD[Probability of detection = 1 − the probability of a false negative event.] and PFA[Probability of false alarm is the probability of a false positive occurring.], the traditional metrics for assessing radar detection performance. For each configuration in Table <ref>, we performed 400 different simulations for the FP attack, the FN attack, and the base case without attack. Previously, we demonstrated that our framework accurately estimates a victim's parameters, inserting spoofed signals regardless of the relative position and velocity of the attacker and victim. Now, we show that our attacks are successful regardless of the relative position and velocity of a target-of-interest in the environment. To maintain a consistent starting point for each simulation, we simulated the attacker 75 m away from the victim with a relative velocity of 2 m/s.
When evaluating the effectiveness of our FP attacks on a victim's PFA, we selected the desired spoofing range and velocity uniformly at random from the intervals [50, 100] m and [−25, 25] m/s, respectively. For each trial, we simulated an existing target with velocity uniformly chosen at random from the interval [−35, 35] m/s (35 m/s is roughly 78 mph) and a starting point uniformly selected from the interval [5, 143] m. The radar cross-section of the target in each trial was set using the same method described in Section <ref>. For all the trials, we recorded a FN outcome if there was a real target at a specified location but the victim radar did not detect anything within 3 m and 3 m/s of the target's location. We recorded a FP outcome if the radar detected another object that was not located within 3 m or 3 m/s of the actual target. Thus, it is possible for both a FP and a FN event to occur in the same trial.

Results. Figs. <ref> and <ref> show the obtained attack effectiveness; the “Range” axis corresponds to the range of the existing target. To estimate the PD and PFA, we grouped trials into one of thirty 5 m range bins; i.e., the first range bin contains the results corresponding to a target within the 0–5 m range.

FP Spoofing Attacks. Fig. <ref>(a) shows the PFAs for each victim radar configuration without attacks – all have very low PFAs (<5%) in this case. Fig. <ref>(b) shows the PFAs under the FP spoofing attacks – the PFAs significantly increased, and the attacker was capable of adding a spoofed (i.e., fake) object into the victim radar's point cloud regardless of the location of the existing (i.e., real) object. This, combined with the results from Fig. <ref>, shows that FP attacks can successfully insert fake objects very close to the desired locations, no matter the positions of other vehicles in the scene.

FN Attacks. Fig. <ref>(a) shows the PD for each radar configuration when no attack is present – configurations A and B are unable to detect targets at close ranges due to their poor range resolution (Δd) and comparatively high CFAR minimum detection range (Table <ref> in Appendix <ref>). Fig. <ref>(b) shows the PD for each victim configuration when the FN attack was applied – the attack significantly decreased each radar's PD, with a steep decline in the victim's PD for targets roughly 25 m away; the drop at 25 m occurs because it is roughly the point where the power received from the attacker equals the power received from the target reflection. While it is expected for the PD of a real target to slightly decrease with range (longer ranges experience greater path loss, resulting in a reduced SNR), our results show that the FN attacks significantly impact PD compared to operation without an attack. Overall, the results demonstrate that we consistently caused FN events in the victim's radar.

§ SDR-BASED PHYSICAL IMPLEMENTATION

We now introduce real-time prototype implementations of MadRadar and a victim radar using SDR platforms. Additionally, we validate our prototype's performance over 600 real-world experiments. Section <ref> then presents results from multiple real-world case studies.

§.§ Implementation on an SDR Platform

We developed a victim radar and a MadRadar prototype using USRP B210 SDRs, which are controlled by host laptops via the C++-based USRP Hardware Driver (UHD) <cit.>, as illustrated in Fig.
<ref>; the MadRadar prototype alone required 4,500 lines of code. Due to the limitations of the available hardware (e.g., the frequency range of [70 MHz, 6 GHz] and maximum sampling rate of 56 MSps for the USRP B210), we consider an operating frequency (f_c) of 1.5 GHz and a sampling rate of 25 MSps; this corresponds to a victim range resolution (Δd) of ∼6 m and a maximum timing accuracy of the attack framework of ∼40 ns. We apply longer chirps and frames to achieve a realistic velocity resolution of ∼0.8 m/s. While our prototype implementation is constrained by the hardware limitations, it can easily be extended to a full-scale implementation with the use of more capable (and, due to high-frequency SDRs, more expensive) hardware platforms.

For our experiments, the victim radar transmits a series of FMCW chirps and records the received signal (i.e., the reflected chirps), which is then processed offline to obtain the Range-Doppler response and detect objects using the pipeline from Fig. <ref>. The MadRadar prototype, in real-time, estimates the victim radar's chirp slope (S), chirp period (T_c), and frame duration (T_f) as described in Section <ref>. Based on the estimated parameters, the attacker designs and transmits the corresponding signal for launching a FP, FN, or translation attack using (<ref>) and (<ref>). The MadRadar prototype is the first to demonstrate the feasibility of launching real-time black-box FP, FN, and translation attacks on a real-world system.

§.§ Physical Evaluation of Parameter Estimation

Setup. We utilized 500 different victim configurations to validate the MadRadar prototype's accuracy in estimating the victim radar's parameters. Due to the hardware limitations, we considered victim configurations with a chirp bandwidth of up to 25 MHz. The victim chirp slope and duration (S and T_c) were chosen uniformly at random from the intervals [0.05, 0.53] MHz/µs and [50, 500] µs, respectively (details in Appendix <ref>); the maximum chirp slope was small due to the maximum chirp bandwidth of 25 MHz. For each victim configuration, MadRadar's sensing module estimates the victim radar's chirp slope (S), chirp period (T_c), and frame duration (T_f) over a series of 7 frames. Also, the prototype initiated attacks on the 7th frame, continuing parameter estimation while performing real-time attacks.

Chirp period estimation. Table <ref> and Fig. <ref>(a) summarize the results for the chirp period estimation over all physical trials – in 95% of trials, the estimate of the chirp period (T_c) is within 39.09 ns of the actual chirp period. Compared with the full-scale simulation-based results from Section <ref> (where 95% of trials had an absolute error of less than 0.59 ns), the roughly two-orders-of-magnitude decrease in timing accuracy is attributed to the described hardware constraints of the prototype system. Nevertheless, these results indicate that 95% of our chirp period estimates are within one sampling period of the actual victim radar's chirp period.

Chirp slope estimation. Table <ref> and Fig. <ref>(b) summarize the obtained results of the physical evaluation – over 95% of the trials result in estimated chirp slope values (S) that are within 0.0017 MHz/µs (relative error of 0.403%) of their actual values. This absolute value is quite low in part because all of the tested victim radar configurations had relatively small slopes.
However, the relative value indicates that the accuracy was similar to what was observed in our simulation results (Section <ref>). Again, we attribute the order-of-magnitude increase in relative slope estimation error to the lower sampling bandwidth of 25 MSps, which limits the maximum achievable resolution when generating a spectrogram of the received signal chirps.

§.§ Physical Evaluation of Spoofing Accuracy

Setup. We evaluate spoofing accuracy over 100 unique trials, where each real-time trial involved 15 attacking frames. Table <ref> summarizes the victim configuration used for all trials; this configuration used a longer chirp duration (T_c) to achieve a more realistic velocity resolution (Δv) given the lower operating frequency (f_c) of the physical prototype. The spoofing range (d_spoof) and velocity (v_spoof) for each trial were uniformly chosen at random from the intervals [60, 200] m and [−25, 25] m/s (see Appendix <ref> for the distribution).

Results. Fig. <ref> reports the obtained CDFs for the absolute range and velocity errors, whereas Table <ref> summarizes the relevant statistics – 90% of trials had the obtained range within 9.67 m of the desired spoofing range (d_spoof) and the obtained velocity within 1.80 m/s of the desired spoofing velocity (v_spoof). To compare the experimental results with the simulation-based results presented in Section <ref>, we simulated the exact same set of physical scenarios using the simulated environment from Section <ref>. In Fig. <ref>, the simulated results appear in blue while the results obtained in physical experiments appear in orange. We highlight how our prototype spoofed an object's range slightly more accurately than our simulation predicted. While we observe that the prototype's velocity spoofing was less accurate than the simulations predicted, we attribute this discrepancy to the phase noise in the USRP B210,[The USRP B210 has a phase noise of 1.0 degrees RMS at 3.5 GHz <cit.>, corresponding to ∼0.5 m/s of potential error in the spoofing velocity; this follows from (<ref>).] which our simulations do not account for. In summary, the results from our real-world physical experiments demonstrate that MadRadar estimates a victim's parameters and inserts spoofed objects with the anticipated level of accuracy given the hardware limitations.

§ REAL WORLD CASE STUDIES

We now demonstrate the real-world capability of MadRadar through several real-world case studies. We start by demonstrating FN and translation attacks against stationary victims, followed by a demonstration of a translation attack on a moving victim. To the best of our knowledge, this is the first work to demonstrate each of the following attack capabilities in realistic case studies. Results and details from these and additional case studies, including time-synchronized videos, are available on the project website <cit.>.

§.§ Stationary Case Studies

We first present case studies where a stationary attacker is set up to attack a stationary victim trying to detect objects on the road. Meanwhile, the attacker estimates the victim radar's parameters in real-time and then simultaneously launches the attacks. This experimental setup is commonly found in the real world, including infrastructure sensors detecting vehicles at stoplights and stopped vehicles sensing oncoming traffic prior to pulling out of a parking lot.

Setup. Fig. <ref> illustrates the experimental setup, while Fig. <ref>(a) and Fig. <ref>(b) portray the threat scenarios used for the stationary case studies. Here, the attacker and victim were placed 15 m apart from each other.
A real target vehicle then drove away from the victim at approximately 9 m/s (∼20 mph). Finally, the victim radar employed the configuration described in Table <ref>.

FN attacks. We launched FN attacks with d_spoof and v_spoof set to 75 m and −5 m/s, respectively. The attack started on the 11th sensed frame; this coincided with causing a FN event when the target vehicle was at a distance of 50 m. The attack progression is shown in Fig. <ref>(a), while Fig. <ref>(a) shows the detected target location for each victim radar frame. The first column of Fig. <ref>(a) presents the victim's perception prior to the attack, while the second and third columns present the victim's perception while under attack. The victim radar immediately fails to detect the target vehicle once the FN attack is launched. Such a result is incredibly critical, as the attack has effectively `removed' an object from the victim's point cloud/scene.

Translation attack. Here, we simultaneously launched the FN attack from (<ref>) and the FP attack from (<ref>). The FN attack was launched with d_spoof and v_spoof set to 75 m and −5 m/s, respectively. Simultaneously, the FP attack started at 75 m while propagating towards the victim with a velocity of 10 m/s. The attack progression is featured in Fig. <ref>(b), while Fig. <ref>(b) presents the attack detections over time. The first column of Fig. <ref>(b) presents the victim's perception prior to the attack, while the second and third columns present the victim's perception while under attack. Even though the actual target of interest was moving away from the victim for the duration of the experiment, we observe that the victim erroneously perceived that the target vehicle started moving toward it once the attack started. Also, note that the power level of the spoofed (fake) object increases as it gets `closer' to the victim radar. Overall, such an attack is incredibly powerful, as an attacker can effectively `move' any specific object in the victim radar's point cloud (i.e., perceived scene).

§.§ Moving Case Studies

We now demonstrate that MadRadar can launch successful black-box translation attacks from a moving vehicle, which can critically affect the safety of autonomous vehicles.

Setup. Fig. <ref> and Fig. <ref>(c) feature the experimental setup and threat scenarios considered in our moving-vehicle case studies. The MadRadar platform was placed in the trunk of the attack vehicle so that it could sense the victim radar's parameters and launch attacks. The victim radar was placed in a separate vehicle that moved independently from the attacker's vehicle. At the start of the experiment, the attacker and victim began driving forward at 13 m/s and 4.5 m/s (30 mph and 10 mph), respectively. Here, the attacker immediately launched the translation attack – the FN attack started at a target range of 75 m and propagated away from the victim radar with the target's velocity of ∼10 m/s, while the FP attack started at 100 m and propagated towards the victim radar with a velocity of 12 m/s.

Results. The attack progression is featured in Fig. <ref>(c), while Fig. <ref>(c) reports the detected target locations over time. The first and second columns of Fig. <ref>(c) present the victim radar's perception during the translation attack, while the third column presents its perception after the translation attack concluded. The successfully launched translation attack led the victim to believe that the attack vehicle was moving towards it even though the attack vehicle was actually moving away from it.
Such an attack is incredibly powerful, as the victim failed to detect the attacker's actual location while simultaneously detecting the attacker's fake location; this could lead to very dangerous situations in real-world scenarios. Finally, the victim was only able to detect the actual location of the attacker once the translation attack concluded, further demonstrating the effectiveness of the attack.

§ DISCUSSION AND FUTURE WORK

§.§ Limitations of MadRadar

Angle of Arrival (AoA). Modern radars detect an object's range, velocity, and angle of arrival. As discussed in Section <ref>, we assume that the attacker is physically located at the desired angle of attack. More versatile attackers would attack a victim from any angle within the victim's field of view. Future work will explore methods for angular spoofing attacks.

CFAR detection. In this work, we focused MadRadar attacks on radars employing the widely-used CA-CFAR detector. However, other CFAR detectors exist, e.g., OS-CFAR <cit.>. While we expect that the presented FN and translation attacks can be used against other radar designs, future work will include attack demonstrations against other CFAR detectors.

Physical implementation. As described in Section <ref>, our physical prototype was limited by the available hardware. A more capable (yet very expensive) hardware platform could be used to implement a full-scale version. To start, an RF chain capable of converting between baseband frequencies and mmWave frequencies (77–81 GHz) could be used, but at such high frequencies it costs tens of thousands of dollars. Moreover, generating a spectrogram for the full 4 GHz bandwidth utilized by automotive radars would require an expensive RFSoC board supporting complex sampling rates of at least 4 Gsps (such platforms have been used in, e.g., <cit.>). Future work will seek to develop a full-scale system.

§.§ Potential Defenses and Future Works

Parameter randomization. Most PHY attacks on automotive FMCW radars, including ours, rely on the ability to predict when the victim's next frame will occur. As described in <cit.>, a defense against such attacks could be introducing small random changes to a radar's chirp period (T_c), chirp slope (S), and frame duration (T_f) at each frame; no commercially available mmWave radars support this. While such a defense could thwart most other spoofing attacks, the MadRadar framework can be easily modified to launch effective attacks against victims employing such a defense. Using the same stationary experimental setup from Section <ref>, we modified our victim to randomize the start time of each frame using a normally distributed offset with a mean of 0 and a 3σ (3× standard deviation) value of 3. We modified the MadRadar prototype to detect victims employing parameter randomization using the Likelihood Ratio Test <cit.>. Once the attacker detects parameter randomization, it generates an FN attack optimized against randomization (i.e., an FN `jamming' attack), leveraging the FN attack from (<ref>) to cover a much larger range (∼1,000 m) and velocity (∼200 m/s) spread. The first column of Fig. <ref>(d) features the victim's perception when not under attack, whereas the 2nd and 3rd columns present its perception under the optimized FN `jamming' attack. Fig. <ref> compares the victim's detections over time when attacked by our standard translation attack and our optimized FN `jamming' attack.
We highlight how the optimized FN `jamming' attack still prevents the victim from detecting the target even though standard spoofing attacks may be thwarted, in the sense that the accuracy of the FP detections is significantly degraded. As an avenue for future work, we will perform additional analysis of defense mechanisms against MadRadar.

Multi-sensor fusion. While we focus on attacking a single radar sensor, modern vehicles utilize signals from multiple sensors, including cameras, LiDARs, and radars. Even if our attack was successful in manipulating a victim radar's data, it is possible that the attack could be thwarted using the victim's other sensors. Still, such a defense is not guaranteed to succeed, as works such as <cit.> demonstrated successful attacks against victims employing LiDAR-camera sensor fusion. Future work will investigate PHY radar attacks on vehicles employing radar-camera and radar-LiDAR-camera sensor fusion.

§ CONCLUSION

In this work, we presented the design of MadRadar, a novel black-box physical layer attack framework for mmWave FMCW automotive radars. Unlike previous works that focused solely on `adding' fake objects into a victim radar's point cloud, this is the first work to introduce the false-negative and translation attacks that effectively `remove' or `move' detections of existing objects in the victim radar's point cloud. Further, all but one of the previous (false-positive only) spoofing attacks assumed prior knowledge of the victim radar's parameters. By comparison, MadRadar estimates the victim's chirp period, chirp slope, and frame duration with sufficient accuracy to implement successful attacks over 95% of the time. We have experimentally validated the feasibility and effectiveness of the proposed attacks by developing a real-time MadRadar prototype using SDR platforms. Finally, we have demonstrated the real-world capabilities of MadRadar through real-world case studies.

§ ACKNOWLEDGMENTS

This work is sponsored in part by the ONR under agreements N00014-23-1-2206 and N00014-20-1-2745, AFOSR under award number FA9550-19-1-0169, as well as by the NSF grants CNS-1652544 and CNS-2211944, and the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks (Athena), grant CNS-2112562.

§ REFERENCES

[keysight_how_2020] Anonymous, “How Millimeter Wave Automotive Radar Enhances Advanced Driver Assistance Systems (ADAS) and Autonomous Driving,” Keysight Technologies, Tech. Rep., 2020. [Online]. Available: https://www.keysight.com/us/en/assets/7018-06176/white-papers/5992-3004.pdf
[benjamin_imaging_2019] A. Benjamin, “Imaging radar: one sensor to rule them all,” Texas Instruments, Tech. Rep., 2019. [Online]. Available: https://e2e.ti.com/blogs_/b/behind_the_wheel/posts/imaging-radar-using-ti-mmwave-sensors
[gardill_automotive_2019] M. Gardill, “Automotive Radar – An Overview on State-of-the-Art Technology,” 2019. [Online]. Available: https://www.youtube.com/watch?v=P-C6_4ceY64 ab_channel=IEEEMicrowaveTheoryandTechnologySociety
[ramasubramanian_moving_2017] K. Ramasubramanian, K. Ramaiah, and A. Aginskiy, “Moving from Legacy 24 GHz to State-of-the-Art 77-GHz Radar,” Texas Instruments, Tech. Rep., 2017. [Online]. Available: https://www.ti.com/lit/wp/spry312/spry312.pdf
[mobileye_radar_2022] Anonymous, “Radar & LiDAR Autonomous Driving Sensors by Mobileye & Intel,” Mobileye, Tech. Rep., 2022. [Online]. Available: https://static.mobileye.com/website/corporate/media/radar-lidar-fact-sheet.pdf
[sun_towards_2020] J. Sun, Y. Cao, Q. A. Chen, and Z. M. Mao, “Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures,” in USENIX Secur. Symp. (USENIX Security'20), 2020, pp. 877–894.
[abdelfattah_adversarial_2021] M. Abdelfattah, K. Yuan, Z. J. Wang, and R. Ward, “Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection,” in 2021 IEEE/RSJ Int. Conf. Intell. Robots and Syst. (IEEE IROS'21). Prague, Czech Republic: IEEE, Sep. 2021, pp. 2189–2194.
[hallyburton_security_2022] R. S. Hallyburton, Y. Liu, Y. Cao, M. Pajic, and Z. M. Mao, “Security Analysis of Camera-LiDAR Fusion Against Black-Box Attacks on Autonomous Vehicles,” in 31st USENIX Secur. Symp. (USENIX Security'22), 2022, pp. 1903–1920.
[cao_adversarial_2019] Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z. M. Mao, “Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving,” in Proc. 2019 ACM SIGSAC Conf. Comput. and Commun. Secur. (ACM CCS'19). London, United Kingdom: ACM, Nov. 2019, pp. 2267–2281.
[hallyburton2023partialinformation] R. S. Hallyburton, Q. Zhang, Z. M. Mao, and M. Pajic, “Partial-information, longitudinal cyber attacks on lidar in autonomous vehicles,” arXiv preprint arXiv:2303.03470, 2023.
[sun_who_2021] Z. Sun, S. Balakrishnan, L. Su, A. Bhuyan, P. Wang, and C. Qiao, “Who Is in Control? Practical Physical Layer Attack and Defense for mmWave-Based Sensing in Autonomous Vehicles,” IEEE Trans. Inf. Forensics and Secur., vol. 16, pp. 3199–3214, 2021.
[miura_low-cost_2019] N. Miura, T. Machida, K. Matsuda, M. Nagata, S. Nashimoto, and D. Suzuki, “A Low-Cost Replica-Based Distance-Spoofing Attack on mmWave FMCW Radar,” in Proc. 3rd ACM Workshop on Attacks and Solutions in Hardware Security (ACM ASHES'19). London, United Kingdom: ACM, 2019, pp. 95–100.
[komissarov_spoofing_2021] R. Komissarov and A. Wool, “Spoofing Attacks Against Vehicular FMCW Radar,” in Proc. 5th Workshop on Attacks and Solutions in Hardware Security (ACM ASHES'21). ACM, Apr. 2021, p. 7.
[chauhan_platform_2014] R. Chauhan, “A Platform for False Data Injection in Frequency Modulated Continuous Wave Radar,” Master's thesis, Utah State University, Logan, UT, May 2014. [Online]. Available: https://digitalcommons.usu.edu/etd/3964/
[vennam_mmspoof_nodate] R. R. Vennam, I. K. Jain, K. Bansal, J. Orozco, P. Shukla, A. Ranganathan, and D. Bharadia, “mmSpoof: Resilient Spoofing of Automotive Millimeter-wave Radars using Reflect Array,” in 2023 IEEE Symp. Secur. and Privacy (IEEE S&P'23). IEEE, 2023, pp. 1807–1821.
[chen_metawave_2023] X. Chen, Z. Li, B. Chen, Y. Zhu, C. X. Lu, Z. Peng, F. Lin, W. Xu, K. Ren, and C. Qiao, “MetaWave: Attacking mmWave Sensing with Meta-material-enhanced Tags,” in Proc. 2023 Netw. and Distrib. Syst. Secur. Symp. (NDSS'23), San Diego, CA, USA, 2023.
[ranganathan_physical-layer_2012] A. Ranganathan, B. Danev, A. Francillon, and S. Capkun, “Physical-layer attacks on chirp-based ranging systems,” in Proc. 5th ACM Conf. Secur. and Privacy in Wireless and Mobile Netw. (ACM WiSec'12). Tucson, Arizona, USA: ACM, Apr. 2012, pp. 15–26.
[amar_fmcw-fmcw_2021] R. Amar, M. Alaee-Kerahroodi, and M. R. Bhavani Shankar, “FMCW-FMCW Interference Analysis in mm-Wave Radars; An indoor case study and validation by measurements,” in 2021 21st Int. Radar Symp. (IEEE IRS'21). Berlin, Germany: IEEE, Jun. 2021, pp. 1–11.
[alland_interference_2019] S. Alland, W. Stark, M. Ali, and M. Hegde, “Interference in Automotive Radar Systems: Characteristics, Mitigation Techniques, and Current and Future Research,” IEEE Signal Process. Mag., vol. 36, no. 5, pp. 45–59, Sep. 2019.
[schipper_discussion_2014] T. Schipper, M. Harter, T. Mahler, O. Kern, and T. Zwick, “Discussion of the operating range of frequency modulated radars in the presence of interference,” Int. J. Microw. and Wireless Technol., vol. 6, no. 3-4, pp. 371–378, Jun. 2014.
[kunert_mosarim_2010] M. Kunert, R. Pietsch, A. John, C. Fischer, M. Ahrholdt, F. Bodereau, M. Goppelt, and A. Ossowska, “MOSARIM D12.1 – Study report on relevant scenarios and applications and requirements specification,” European Commission Community Research and Development Information Service (CORDIS), Luxembourg, Tech. Rep., Aug. 2010.
[kunert_eu_2012] M. Kunert, “The EU project MOSARIM: A general overview of project objectives and conducted work,” in 2012 9th European Radar Conf. (IEEE EURAD'12), 2012, pp. 1–5.
[pietsch_more_2011] R. Pietsch, A. John, D. Walz, M. Kunert, H. Meinel, C. Fischer, and T. Schipper, “MOre Safety for All by Radar Interference Mitigation D1.4 – Impact study of the interference with respect to ASIL,” European Commission Community Research and Development Information Service (CORDIS), Luxembourg, Tech. Rep., May 2011. [Online]. Available: https://cordis.europa.eu/docs/projects/cnect/1/248231/080/deliverables/001-MOSARIMDeliverable14V161.pdf
[ettus_research_usrp_nodate] Ettus Research, “USRP Hardware Driver and USRP Manual.” [Online]. Available: https://files.ettus.com/manual/index.html
[Project_Website] “MadRadar.” [Online]. Available: https://sites.google.com/view/madradar
[texas_instruments_iwr1443_2018] Texas Instruments, “IWR1443 Single-Chip 76- to 81-GHz mmWave Sensor,” Oct. 2018. [Online]. Available: https://www.ti.com/lit/gpn/iwr1443
[jiang_mmvib_2020] C. Jiang, J. Guo, Y. He, M. Jin, S. Li, and Y. Liu, “mmVib: micrometer-level vibration measurement with mmWave radar,” in Proc. 26th Annu. Int. Conf. Mobile Comput. and Netw. (ACM MobiCom'20). London, United Kingdom: ACM, Sep. 2020, pp. 1–13.
[wang_remote_2020] Y. Wang, W. Wang, M. Zhou, A. Ren, and Z. Tian, “Remote Monitoring of Human Vital Signs Based on 77-GHz mm-Wave FMCW Radar,” Sensors, vol. 20, no. 10, p. 2999, May 2020.
[budge_range_1993] M. Budge and M. Burt, “Range correlation effects in radars,” in The Record of the 1993 IEEE Nat. Radar Conf. Lynnfield, MA, USA: IEEE, 1993, pp. 212–216.
[rao_introduction_nodate] S. Rao, “Introduction to mmWave Sensing: FMCW Radars.” [Online]. Available: https://training.ti.com/sites/default/files/docs/mmwaveSensing-FMCW-offlineviewing_0.pdf
[katzlberger_object_2018] C. Katzlberger, “Object Detection with Automotive Radar Sensors using CFAR Algorithms,” Ph.D. dissertation, Johannes Kepler University Linz, Linz, Austria, Sep. 2018. [Online]. Available: https://www.jku.at/fileadmin/gruppen/183/Docs/Finished_Theses/Bachelor_Thesis_Katzlberger_final.pdf
[rohling_radar_1983] H. Rohling, “Radar CFAR Thresholding in Clutter and Multiple Target Situations,” IEEE Trans. on Aerosp. and Electron. Syst., vol. AES-19, no. 4, pp. 608–621, Jul. 1983.
[rohling_ordered_2011] ——, “Ordered statistic CFAR technique – an overview,” in 2011 12th Int. Radar Symp. (IRS'11), 2011, pp. 631–638.
[ester_density-based_1996] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, “A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise,” in Proc. 2nd Int. Conf. Knowl. Discovery and Data Mining (ACM KDD'96). AAAI Press, 1996, pp. 226–231.
[kunert_d15_2010] M. Kunert, R. Pietsch, A. John, C. Fischer, F. Bodereau, M. Goppelt, A. Ossowska, T. Wixforth, and T. Schipper, “D1.5 – Study on the state-of-the-art interference mitigation techniques,” European Commission Community Research and Development Information Service (CORDIS), Tech. Rep., Jun. 2010. [Online]. Available: https://cordis.europa.eu/docs/projects/cnect/1/248231/080/deliverables/001-MOSARIMDeliverable15V161.pdf
[barjenbruch_method_2015] M. Barjenbruch, D. Kellner, K. Dietmayer, J. Klappstein, and J. Dickmann, “A method for interference cancellation in automotive radar,” in 2015 IEEE MTT-S Int. Conf. Microw. Intell. Mobility (IEEE ICMIM'15). IEEE, Apr. 2015, pp. 1–4.
[bechter_automotive_2015] J. Bechter and C. Waldschmidt, “Automotive radar interference mitigation by reconstruction and cancellation of interference component,” in 2015 IEEE MTT-S Int. Conf. Microw. Intell. Mobility (IEEE ICMIM'15). IEEE, Apr. 2015, pp. 1–4.
[bechter_automotive_2017] J. Bechter, F. Roos, M. Rahman, and C. Waldschmidt, “Automotive radar interference mitigation using a sparse sampling approach,” in 2017 European Radar Conf. (IEEE EURAD'17). Nuremberg: IEEE, Oct. 2017, pp. 90–93.
[blue_car] Freesvg, “Dark blue racing car vector illustration,” 2014. [Online]. Available: https://freesvg.org/img/SimpleDarkBlueCarTopView.png
[red_car] ——, “Dark racing car top view vector,” 2013. [Online]. Available: https://freesvg.org/red-racing-car-top-view-vector
[yellow_car] ——, “Yellow car – top view remix,” 2019. [Online]. Available: https://freesvg.org/yellow-car-top-view-remix
[wireless_signal] ——, “Wireless signal icon,” 2016. [Online]. Available: https://freesvg.org/wireless-signal-icon
[danger_sign] ——, “Danger ahead vector road sign,” 2016. [Online]. Available: https://freesvg.org/danger-ahead-vector-road-sign
[proakis_design_2007] J. Proakis and D. Manolakis, “Design of Linear-Phase FIR Filters Using Windows,” in Digital Signal Processing: Principles, Algorithms, and Applications, 4th ed. New Jersey: Pearson Prentice Hall, 2007, pp. 666–668.
[ayguen_pocketfft_nodate] H. Ayguen, “PocketFFT for C++,” 2023. [Online]. Available: https://github.com/hayguen/pocketfft
[chamberland_engineering_2020] J.-F. Chamberland and H. D. Phister, Engineering Fundamentals, Aug. 2020. [Online]. Available: https://dl.icdst.org/pdfs/files4/af2110505caac2fd6ccc9084af7de7f0.pdf
[noauthor_eigen_nodate] “Eigen.” [Online]. Available: https://eigen.tuxfamily.org/
[degroot_probability_2023] M. DeGroot and M. Schervish, Probability and Statistics, 4th ed. Boston, MA: Pearson, Feb. 2023.
[noauthor_phased_nodate] MathWorks, “Phased Array System Toolbox.” [Online]. Available: https://www.mathworks.com/help/phased/referencelist.html?type=function s_tid=CRUX_topnav
[mathworks_radar_nodate] MathWorks, “Radar Signal Simulation and Processing for Automated Driving.” [Online]. Available: https://www.mathworks.com/help/radar/ug/radar-signal-simulation-and-processing-for-automated-driving.html
[noauthor_ML_nodate] MathWorks, “Statistics and Machine Learning Toolbox.” [Online]. Available: https://www.mathworks.com/help/stats/referencelist.html?type=function s_tid=CRUX_topnav
tilly_detection_2020 J. F. Tilly, S. Haag, O. Schumann, F. Weishaupt, B. Duraisamy, J. Dickmann, and M. Fritzsche, “Detection and Tracking on Automotive Radar Data with Deep Learning,” in 2020 IEEE 23rd Int. Conf. Inf. Fusion (IEEE FUSION'20). Rustenburg, South Africa: IEEE, Jul. 2020, pp. 1–7.
ettus_research_b200b210b200minib205mini_nodate Ettus Research, “B200/B210/B200mini/B205mini.” [Online]. Available: <https://kb.ettus.com/B200/B210/B200mini/B205mini#Frontend_Specifications>
peters_arestor_2022 N. Peters, C. Horne, and M. A. Ritchie, “ARESTOR: A Multi-role RF Sensor based on the Xilinx RFSoC,” in 2021 18th European Radar Conf. (IEEE EURAD'22). London, United Kingdom: IEEE, Apr. 2022, pp. 102–105.
kay_estimator-correlator_1998 S. M. Kay, “Estimator-Correlator,” in Fundamentals of Statistical Signal Processing: Detection Theory. Pearson Education, 1998, vol. 2, pp. 142–147.

§.§ FMCW Radar Signal Processing Theory

§.§.§ Simplification of the IF Signal

As discussed in (<ref>) from Section <ref>, the IF signal is obtained by mixing the transmitted signal x(t) with the conjugate of the received signal y(t), i.e.,

s_IF^(l)(t) = x(t) · y^*(t)
= A_Rx·exp{ j[2π f_c· t + π S· t^2] - j[2π f_c (t-τ) + π S (t-τ)^2] } + z'(t)
= A_Rx·exp{ j[2π f_c· t + π S· t^2 - 2π f_c· t + 2π f_c·τ - π S (t^2 - 2t·τ + τ^2)] } + z'(t)
= A_Rx·exp{ j[2π S·τ· t + 2π f_c·τ - π S·τ^2] } + z'(t),

where f_c is the carrier frequency, S is the chirp slope, τ = 2(R(t) + d_0)/c is the round-trip delay of a target at initial range d_0 whose range changes by R(t) over the frame, and z'(t) = x(t) z^*(t) represents the noise present after the mixing. From (<ref>), several simplifications can be made. First, 2π S·τ· t can be simplified by recognizing that R(t) << d_0 and defining the IF frequency f_IF as

f_IF := 2 S· d_0/c.

The simplified term can then be expressed as

2π S·τ· t = 2π( 2S(R(t) + d_0)/c )t ≈ 2π( 2 S· d_0/c )t = 2π f_IF· t.

Next, 2π f_c·τ can be simplified by sampling R(t) at each chirp using R(l· T_c) = v· l· T_c, where l is the chirp number in the frame, T_c is the chirp period, and v is the target's radial velocity. Additionally, we define the per-chirp Doppler phase shift ω_d as

ω_d := 4π v· T_c/λ.

Using these simplifications, 2π f_c·τ simplifies as follows:

2π f_c·τ = 2π f_c· 2(R(l· T_c) + d_0)/c = 4π (R(l· T_c) + d_0)/λ = 4π R(l· T_c)/λ + 4π d_0/λ = 4π v· l· T_c/λ + 4π d_0/λ = ω_d· l + 4π d_0/λ.

Finally, π S·τ^2 can be simplified by recognizing that R(t)^2 << 2R(t)d_0 << d_0^2. The simplified term can then be expressed as

π S·τ^2 = π S( 2(R(t) + d_0)/c )^2 = π S (4/c^2)( R(t)^2 + 2R(t)d_0 + d_0^2 ) ≈ 4π S· d_0^2/c^2.

Applying (<ref>), (<ref>), and (<ref>) to (<ref>) gives

s_IF^(l)(t) = A_Rx·exp{ j[2π f_IF· t + ω_d· l + 4π d_0/λ - 4π S· d_0^2/c^2] } + z'(t),

where s_IF^(l)(t) is the IF signal corresponding to the l-th chirp. Finally, by defining A_IF = A_Rx·exp{ j[4π d_0/λ - 4π S· d_0^2/c^2] }, (<ref>) can be rewritten in terms of a range, velocity (Doppler), and noise term as

s_IF^(l)(t) = A_IF·exp(j 2π f_IF· t) ·exp(j ω_d· l) + z'(t).
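To make the simplified model in the last equation concrete, the following sketch synthesizes s_IF^(l)(t) for a single point target and checks that the range-FFT peak lands at f_IF = 2 S· d_0/c. This is a minimal illustration rather than the paper's implementation; the 77-GHz chirp configuration and the target parameters (d_0 = 30 m, v = 5 m/s) are assumed purely for the example.

```python
import numpy as np

# Assumed example configuration; these values are illustrative and are not
# taken from the paper's experiments.
c = 3e8             # speed of light [m/s]
fc = 77e9           # carrier frequency [Hz]
lam = c / fc        # wavelength [m]
S = 30e12           # chirp slope [Hz/s]
Tc = 60e-6          # chirp period [s]
fs = 10e6           # ADC sampling rate [Hz]
N = 128             # chirps per frame
d0, v = 30.0, 5.0   # target initial range [m] and radial velocity [m/s]

f_IF = 2 * S * d0 / c            # IF (beat) frequency of the target
w_d = 4 * np.pi * v * Tc / lam   # per-chirp Doppler phase shift

t = np.arange(int(fs * Tc)) / fs   # fast-time samples within one chirp
l = np.arange(N)[:, None]          # chirp index (slow time)
# Simplified IF model: one complex tone per chirp plus a Doppler phase ramp
s_if = np.exp(1j * 2 * np.pi * f_IF * t) * np.exp(1j * w_d * l)

# Range FFT along fast time: the peak bin should map back to roughly d0
spectrum = np.abs(np.fft.fft(s_if[0]))
freqs = np.fft.fftfreq(t.size, d=1 / fs)
f_hat = abs(freqs[np.argmax(spectrum)])
print(f"expected f_IF = {f_IF / 1e6:.2f} MHz, FFT peak at {f_hat / 1e6:.2f} MHz")
print(f"estimated range = {f_hat * c / (2 * S):.2f} m (true d0 = {d0:.2f} m)")
```

With these numbers, f_IF = 6 MHz falls well within the 10-MHz complex sampling rate, and the recovered range matches d_0; a second FFT across the chirp index l would likewise recover ω_d and hence v.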
§.§.§ Range Resolution and Maximum Range

The FFT of the signal in (<ref>), and in particular its dominant frequencies, is used to estimate the IF frequency. Thus, the resolution of the FFT also limits the range resolution and the maximum range at which a radar can detect a target. By definition, an FFT can separate two tones if they have a frequency difference greater than 1/T, where T is the observation period. As such, two targets can be separately identified as long as the difference between their IF frequencies is greater than 1/T_c <cit.>. Converting from IF frequency to distance then yields the range resolution

Δf_IF = 2Δd· S/c = 2Δd· B/(c· T_c) ≥ 1/T_c ⇒ Δd ≥ c/(2B) ⇒ d_res = c/(2B),

where B = S· T_c is the chirp bandwidth. On the other hand, the maximum range that a radar can detect is limited by the IF frequency bandwidth, which is in turn set by the ADC sampling rate f_s used to record the IF signal <cit.>. Hence, the maximum range at which a radar can detect an object is obtained as

f_s ≥ f_IF,max = 2 S· R_max/c ⇒ R_max = f_s· c/(2S).

§.§.§ Velocity Resolution and Maximum Velocity

First, we observe that the phase shift caused by an object's velocity must not exceed π between successive chirps; otherwise the measurement becomes ambiguous. Thus, the maximum velocity satisfies <cit.>

ω_d ≤ π ⇒ v ≤ λ/(4 T_c).

A similar approach can be used to determine the velocity resolution. In radians, two tones are separable if Δω_d ≥ 2π/N, where N is the number of chirps in a radar frame. Hence, the velocity resolution satisfies <cit.>

ω_d = 4π v· T_c/λ, Δω_d ≥ 2π/N ⇒ Δv ≥ λ/(2N· T_c).

§.§ Simulation Details

§.§.§ Distribution of Test Cases for Simulated Victim Radar Parameter Estimation

Fig. <ref> shows the distribution of chirp periods and chirp slopes used for the simulation-based evaluation of the victim parameter sensing presented in Section <ref>.

§.§.§ Distribution of Test Cases for Attacker Spoofing Accuracy

The distributions of the desired spoofing ranges and velocities used for the simulation-based evaluation of the attack spoofing accuracy in Section <ref> are presented in Fig. <ref>.

§.§.§ CFAR Detection Regions for Large-Scale Evaluation

The CFAR detection regions for each victim radar configuration used in the simulation-based large-scale evaluations from Section <ref> are listed in Table <ref>.

§.§ Details for Test Cases Used for Physical Evaluation

§.§.§ Distribution of Test Cases for Physical Evaluation of the Sensing Accuracy

Fig. <ref> presents the distributions of the victim radar chirp periods (T_c) and chirp slopes (S) used for the experimental evaluations performed in Section <ref>. To generate samples at various chirp cycle times and chirp slopes, we performed 200 trials for each of the following three groups: chirps with periods in the range [50, 200], chirps with periods in the range [200, 400], and chirps with periods in the range [400, 500]. This was done to validate our real-time experimental sensing component across a wide range of victim configurations, and it explains the three distinct groupings that appear in the distribution.

§.§.§ Distribution of Test Cases for Physical Evaluation of the Attack Spoofing Accuracy

Fig. <ref> presents the distribution of the desired spoofing ranges and velocities used to experimentally evaluate the spoofing accuracy of the MadRadar prototype framework in Section <ref>.
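Finally, the four design equations derived in this appendix (range resolution, maximum range, maximum velocity, and velocity resolution) double as a quick sanity check for any chirp configuration, including the victim configurations evaluated above. The helper below is a sketch under assumed example parameters; the function name and values are illustrative and do not reproduce the configurations used in the paper's evaluations.

```python
import numpy as np

def fmcw_limits(S, Tc, fs, N, fc=77e9, c=3e8):
    """Resolution and ambiguity limits implied by an FMCW chirp configuration.

    S: chirp slope [Hz/s], Tc: chirp period [s], fs: ADC sampling rate [Hz],
    N: chirps per frame, fc: carrier frequency [Hz], c: speed of light [m/s].
    """
    lam = c / fc
    B = S * Tc  # swept bandwidth
    return {
        "range_resolution_m": c / (2 * B),              # d_res = c / (2B)
        "max_range_m": fs * c / (2 * S),                # R_max = fs * c / (2S)
        "max_velocity_mps": lam / (4 * Tc),             # v_max = lambda / (4 Tc)
        "velocity_resolution_mps": lam / (2 * N * Tc),  # dv = lambda / (2 N Tc)
    }

# Assumed example configuration (illustrative only)
limits = fmcw_limits(S=30e12, Tc=60e-6, fs=10e6, N=128)
for name, value in limits.items():
    print(f"{name}: {value:.3f}")
```

For these example values the calculator returns roughly an 8.3-cm range resolution, a 50-m maximum range, a 16.2-m/s maximum velocity, and a 0.25-m/s velocity resolution, which is a convenient way to see how each limit trades off against chirp slope, period, and frame length.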
"authors": [
"David Hunt",
"Kristen Angell",
"Zhenzhou Qi",
"Tingjun Chen",
"Miroslav Pajic"
],
"categories": [
"eess.SP"
],
"primary_category": "eess.SP",
"published": "20231127173814",
"title": "MadRadar: A Black-Box Physical Layer Attack Framework on mmWave Automotive FMCW Radars"
} |
Subsets and Splits